categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2405.17890 | null | null | http://arxiv.org/pdf/2405.17890v1 | 2024-05-28T07:12:06Z | 2024-05-28T07:12:06Z | SLMRec: Empowering Small Language Models for Sequential Recommendation | The sequential recommendation (SR) task involves predicting the next item a user is likely to interact with, given their past interactions. The SR models examine the sequence of a user's actions to discern more complex behavioral patterns and temporal dynamics. Recent research demonstrates the great impact of LLMs on sequential recommendation systems, either viewing sequential recommendation as language modeling or serving as the backbone for user representation. Although these methods deliver outstanding performance, there is scant evidence of the necessity of a large language model and how large a language model is needed, especially in the sequential recommendation scene. Meanwhile, due to the huge size of LLMs, it is inefficient and impractical to apply an LLM-based model in real-world platforms that often need to process billions of traffic logs daily. In this paper, we explore the influence of LLMs' depth by conducting extensive experiments on large-scale industry datasets. Surprisingly, we discover that most intermediate layers of LLMs are redundant. Motivated by this insight, we empower small language models for SR, namely SLMRec, which adopts a simple yet effective knowledge distillation method. Moreover, SLMRec is orthogonal to other post-training efficiency techniques, such as quantization and pruning, so that they can be leveraged in combination. Comprehensive experimental results illustrate that the proposed SLMRec model attains the best performance using only 13% of the parameters found in LLM-based recommendation models, while simultaneously achieving up to 6.6x and 8.0x speedups in training and inference time costs, respectively. | [
"['Wujiang Xu' 'Zujie Liang' 'Jiaojiao Han' 'Xuying Ning' 'Wenfang Lin'\n 'Linxun Chen' 'Feng Wei' 'Yongfeng Zhang']"
] |
null | null | 2405.17897 | null | null | http://arxiv.org/pdf/2405.17897v1 | 2024-05-28T07:18:45Z | 2024-05-28T07:18:45Z | $C^2M^3$: Cycle-Consistent Multi-Model Merging | In this paper, we present a novel data-free method for merging neural networks in weight space. Differently from most existing works, our method optimizes for the permutations of network neurons globally across all layers. This allows us to enforce cycle consistency of the permutations when merging $N \geq 3$ models, allowing circular compositions of permutations to be computed without accumulating error along the path. We qualitatively and quantitatively motivate the need for such a constraint, showing its benefits when merging sets of models in scenarios spanning varying architectures and datasets. We finally show that, when coupled with activation renormalization, our approach yields the best results in the task. | [
"['Donato Crisostomi' 'Marco Fumero' 'Daniele Baieri' 'Florian Bernard'\n 'Emanuele Rodolà']"
] |
null | null | 2405.17898 | null | null | http://arxiv.org/pdf/2405.17898v1 | 2024-05-28T07:18:52Z | 2024-05-28T07:18:52Z | FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic
Prediction | The objective of traffic prediction is to accurately forecast and analyze the dynamics of transportation patterns, considering both space and time. However, the presence of distribution shift poses a significant challenge in this field, as existing models struggle to generalize well when faced with test data that significantly differs from the training distribution. To tackle this issue, this paper introduces a simple and universal spatio-temporal prompt-tuning framework-FlashST, which adapts pre-trained models to the specific characteristics of diverse downstream datasets, improving generalization in diverse traffic prediction scenarios. Specifically, the FlashST framework employs a lightweight spatio-temporal prompt network for in-context learning, capturing spatio-temporal invariant knowledge and facilitating effective adaptation to diverse scenarios. Additionally, we incorporate a distribution mapping mechanism to align the data distributions of pre-training and downstream data, facilitating effective knowledge transfer in spatio-temporal forecasting. Empirical evaluations demonstrate the effectiveness of our FlashST across different spatio-temporal prediction tasks using diverse urban datasets. Code is available at https://github.com/HKUDS/FlashST. | [
"['Zhonghang Li' 'Lianghao Xia' 'Yong Xu' 'Chao Huang']"
] |
null | null | 2405.17902 | null | null | http://arxiv.org/pdf/2405.17902v2 | 2024-06-29T07:07:49Z | 2024-05-28T07:24:20Z | Boosting Protein Language Models with Negative Sample Mining | We introduce a pioneering methodology for boosting large language models in the domain of protein representation learning. Our primary contribution lies in the refinement process for correlating the over-reliance on co-evolution knowledge, in a way that networks are trained to distill invaluable insights from negative samples, constituted by protein pairs sourced from disparate categories. By capitalizing on this novel approach, our technique steers the training of transformer-based models within the attention score space. This advanced strategy not only amplifies performance but also reflects the nuanced biological behaviors exhibited by proteins, offering aligned evidence with traditional biological mechanisms such as protein-protein interaction. We experimentally observed improved performance on various tasks over datasets, on top of several well-established large protein models. This innovative paradigm opens up promising horizons for further progress in the realms of protein research and computational biology. | [
"['Yaoyao Xu' 'Xinjian Zhao' 'Xiaozhuang Song' 'Benyou Wang' 'Tianshu Yu']"
] |
null | null | 2405.17905 | null | null | http://arxiv.org/pdf/2405.17905v1 | 2024-05-28T07:27:42Z | 2024-05-28T07:27:42Z | Cycle-YOLO: A Efficient and Robust Framework for Pavement Damage
Detection | With the development of modern society, traffic volume continues to increase in most countries worldwide, leading to an increase in the rate of pavement damage. Therefore, real-time and highly accurate pavement damage detection and maintenance have become the current need. In this paper, an enhanced pavement damage detection method with CycleGAN and improved YOLOv5 algorithm is presented. We selected 7644 self-collected images of pavement damage samples as the initial dataset and augmented it by CycleGAN. Due to a substantial difference between the images generated by CycleGAN and real road images, we proposed a data enhancement method based on an improved Scharr filter, CycleGAN, and Laplacian pyramid. To improve the target recognition effect on a complex background and solve the problem that the spatial pyramid pooling-fast module in the YOLOv5 network cannot handle multiscale targets, we introduced the convolutional block attention module attention mechanism and proposed the atrous spatial pyramid pooling with squeeze-and-excitation structure. In addition, we optimized the loss function of YOLOv5 by replacing the CIoU with EIoU. The experimental results showed that our algorithm achieved a precision of 0.872, recall of 0.854, and mean average precision@0.5 of 0.882 in detecting three main types of pavement damage: cracks, potholes, and patching. On the GPU, its frames per second reached 68, meeting the requirements for real-time detection. Its overall performance even exceeded the current more advanced YOLOv7 and achieved good results in practical applications, providing a basis for decision-making in pavement damage detection and prevention. | [
"['Zhengji Li' 'Xi Xiao' 'Jiacheng Xie' 'Yuxiao Fan' 'Wentao Wang'\n 'Gang Chen' 'Liqiang Zhang' 'Tianyang Wang']"
] |
null | null | 2405.17914 | null | null | http://arxiv.org/pdf/2405.17914v1 | 2024-05-28T07:34:12Z | 2024-05-28T07:34:12Z | Trustworthy DNN Partition for Blockchain-enabled Digital Twin in
Wireless IIoT Networks | Digital twin (DT) has emerged as a promising solution to enhance manufacturing efficiency in industrial Internet of Things (IIoT) networks. To promote the efficiency and trustworthiness of DT for wireless IIoT networks, we propose a blockchain-enabled DT (B-DT) framework that employs deep neural network (DNN) partitioning technique and reputation-based consensus mechanism, wherein the DTs maintained at the gateway side execute DNN inference tasks using the data collected from their associated IIoT devices. First, we employ DNN partitioning technique to offload the top-layer DNN inference tasks to the access point (AP) side, which alleviates the computation burden at the gateway side and thereby improves the efficiency of DNN inference. Second, we propose a reputation-based consensus mechanism that integrates Proof of Work (PoW) and Proof of Stake (PoS). Specifically, the proposed consensus mechanism evaluates the off-chain reputation of each AP according to its computation resource contributions to the DNN inference tasks, and utilizes the off-chain reputation as a stake to adjust the block generation difficulty. Third, we formulate a stochastic optimization problem of communication resource (i.e., partition point) and computation resource allocation (i.e., computation frequency of APs for top-layer DNN inference and block generation) to minimize system latency under the time-varying channel state and long-term constraints of off-chain reputation, and solve the problem using Lyapunov optimization method. Experimental results show that the proposed dynamic DNN partitioning and resource allocation (DPRA) algorithm outperforms the baselines in terms of reducing the overall latency while guaranteeing the trustworthiness of the B-DT system. | [
"['Xiumei Deng' 'Jun Li' 'Long Shi' 'Kang Wei' 'Ming Ding' 'Yumeng Shao'\n 'Wen Chen' 'Shi Jin']"
] |
null | null | 2405.17918 | null | null | http://arxiv.org/pdf/2405.17918v1 | 2024-05-28T07:38:39Z | 2024-05-28T07:38:39Z | Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of
Learning Curve Extrapolation | In this paper, we address the problem of cost-sensitive multi-fidelity Bayesian Optimization (BO) for efficient hyperparameter optimization (HPO). Specifically, we assume a scenario where users want to early-stop the BO when the performance improvement is not satisfactory with respect to the required computational cost. Motivated by this scenario, we introduce utility, which is a function predefined by each user and describes the trade-off between cost and performance of BO. This utility function, combined with our novel acquisition function and stopping criterion, allows us to dynamically choose for each BO step the best configuration that we expect to maximally improve the utility in the future, and also automatically stop the BO around the maximum utility. Further, we improve the sample efficiency of existing learning curve (LC) extrapolation methods with transfer learning, while successfully capturing the correlations between different configurations to develop a sensible surrogate function for multi-fidelity BO. We validate our algorithm on various LC datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider, achieving a significantly better trade-off between cost and performance of BO. | [
"['Dong Bok Lee' 'Aoxuan Silvia Zhang' 'Byungjoo Kim' 'Junhyeon Park'\n 'Juho Lee' 'Sung Ju Hwang' 'Hae Beom Lee']"
] |
null | null | 2405.17927 | null | null | http://arxiv.org/pdf/2405.17927v1 | 2024-05-28T07:48:15Z | 2024-05-28T07:48:15Z | The Evolution of Multimodal Model Architectures | This work uniquely identifies and characterizes four prevalent multimodal model architectural patterns in the contemporary multimodal landscape. Systematically categorizing models by architecture type facilitates monitoring of developments in the multimodal domain. Distinct from recent survey papers that present general information on multimodal architectures, this research conducts a comprehensive exploration of architectural details and identifies four specific architectural types. The types are distinguished by their respective methodologies for integrating multimodal inputs into the deep neural network model. The first two types (Type A and B) deeply fuse multimodal inputs within the internal layers of the model, whereas the following two types (Type C and D) facilitate early fusion at the input stage. Type-A employs standard cross-attention, whereas Type-B utilizes custom-designed layers for modality fusion within the internal layers. On the other hand, Type-C utilizes modality-specific encoders, while Type-D leverages tokenizers to process the modalities at the model's input stage. The identified architecture types aid the monitoring of any-to-any multimodal model development. Notably, Type-C and Type-D are currently favored in the construction of any-to-any multimodal models. Type-C, distinguished by its non-tokenizing multimodal model architecture, is emerging as a viable alternative to Type-D, which utilizes input-tokenizing techniques. To assist in model selection, this work highlights the advantages and disadvantages of each architecture type based on data and compute requirements, architecture complexity, scalability, simplification of adding modalities, training objectives, and any-to-any multimodal generation capability. | [
"['Shakti N. Wadekar' 'Abhishek Chaurasia' 'Aman Chadha'\n 'Eugenio Culurciello']"
] |
null | null | 2405.17931 | null | null | http://arxiv.org/pdf/2405.17931v1 | 2024-05-28T07:53:40Z | 2024-05-28T07:53:40Z | Online Merging Optimizers for Boosting Rewards and Mitigating Tax in
Alignment | Effectively aligning Large Language Models (LLMs) with human-centric values while preventing the degradation of abilities acquired through Pre-training and Supervised Fine-tuning (SFT) poses a central challenge in Reinforcement Learning from Human Feedback (RLHF). In this paper, we first discover that interpolating RLHF and SFT model parameters can adjust the trade-off between human preference and basic capabilities, thereby reducing the alignment tax at the cost of alignment reward. Inspired by this, we propose integrating the RL policy and SFT models at each optimization step in RLHF to continuously regulate the training direction, introducing the Online Merging Optimizer. Specifically, we merge gradients with the parameter differences between SFT and pretrained models, effectively steering the gradient towards maximizing rewards in the direction of SFT optimization. We demonstrate that our optimizer works well with different LLM families, such as Qwen and LLaMA, across various model sizes ranging from 1.8B to 8B, various RLHF algorithms like DPO and KTO, and existing model merging methods. It significantly enhances alignment reward while mitigating alignment tax, achieving higher overall performance across 14 benchmarks. | [
"['Keming Lu' 'Bowen Yu' 'Fei Huang' 'Yang Fan' 'Runji Lin' 'Chang Zhou']"
] |
null | null | 2405.17932 | null | null | http://arxiv.org/pdf/2405.17932v1 | 2024-05-28T07:56:49Z | 2024-05-28T07:56:49Z | Towards Communication-efficient Federated Learning via Sparse and
Aligned Adaptive Optimization | Adaptive moment estimation (Adam), as a Stochastic Gradient Descent (SGD) variant, has gained widespread popularity in federated learning (FL) due to its fast convergence. However, federated Adam (FedAdam) algorithms suffer from a threefold increase in uplink communication overhead compared to federated SGD (FedSGD) algorithms, which arises from the necessity to transmit both local model updates and first and second moment estimates from distributed devices to the centralized server for aggregation. Driven by this issue, we propose a novel sparse FedAdam algorithm called FedAdam-SSM, wherein distributed devices sparsify the updates of local model parameters and moment estimates and subsequently upload the sparse representations to the centralized server. To further reduce the communication overhead, the updates of local model parameters and moment estimates incorporate a shared sparse mask (SSM) into the sparsification process, eliminating the need for three separate sparse masks. Theoretically, we develop an upper bound on the divergence between the local model trained by FedAdam-SSM and the desired model trained by centralized Adam, which is related to sparsification error and imbalanced data distribution. By minimizing the divergence bound between the model trained by FedAdam-SSM and centralized Adam, we optimize the SSM to mitigate the learning performance degradation caused by sparsification error. Additionally, we provide convergence bounds for FedAdam-SSM in both convex and non-convex objective function settings, and investigate the impact of local epoch, learning rate and sparsification ratio on the convergence rate of FedAdam-SSM. Experimental results show that FedAdam-SSM outperforms baselines in terms of convergence rate (over 1.1$\times$ faster than the sparse FedAdam baselines) and test accuracy (over 14.5% ahead of the quantized FedAdam baselines). | [
"['Xiumei Deng' 'Jun Li' 'Kang Wei' 'Long Shi' 'Zeihui Xiong' 'Ming Ding'\n 'Wen Chen' 'Shi Jin' 'H. Vincent Poor']"
] |
null | null | 2405.17938 | null | null | http://arxiv.org/pdf/2405.17938v1 | 2024-05-28T08:02:42Z | 2024-05-28T08:02:42Z | RC-Mixup: A Data Augmentation Strategy against Noisy Data for Regression
Tasks | We study the problem of robust data augmentation for regression tasks in the presence of noisy data. Data augmentation is essential for generalizing deep learning models, but most of the techniques like the popular Mixup are primarily designed for classification tasks on image data. Recently, there are also Mixup techniques that are specialized to regression tasks like C-Mixup. In comparison to Mixup, which takes linear interpolations of pairs of samples, C-Mixup is more selective in which samples to mix based on their label distances for better regression performance. However, C-Mixup does not distinguish noisy versus clean samples, which can be problematic when mixing and lead to suboptimal model performance. At the same time, robust training has been heavily studied where the goal is to train accurate models against noisy data through multiple rounds of model training. We thus propose our data augmentation strategy RC-Mixup, which tightly integrates C-Mixup with multi-round robust training methods for a synergistic effect. In particular, C-Mixup improves robust training in identifying clean data, while robust training provides cleaner data to C-Mixup for it to perform better. A key advantage of RC-Mixup is that it is data-centric where the robust model training algorithm itself does not need to be modified, but can simply benefit from data mixing. We show in our experiments that RC-Mixup significantly outperforms C-Mixup and robust training baselines on noisy data benchmarks and can be integrated with various robust training methods. | [
"['Seong-Hyeon Hwang' 'Minsu Kim' 'Steven Euijong Whang']"
] |
null | null | 2405.17951 | null | null | http://arxiv.org/pdf/2405.17951v1 | 2024-05-28T08:28:18Z | 2024-05-28T08:28:18Z | Efficient Time Series Processing for Transformers and State-Space Models
through Token Merging | Transformer architectures have shown promising results in time series processing. However, despite recent advances in subquadratic attention mechanisms or state-space models, processing very long sequences still imposes significant computational requirements. Token merging, which involves replacing multiple tokens with a single one calculated as their linear combination, has shown to considerably improve the throughput of vision transformer architectures while maintaining accuracy. In this work, we go beyond computer vision and perform the first investigations of token merging in time series analysis on both time series transformers and state-space models. To effectively scale token merging to long sequences, we introduce local merging, a domain-specific token merging algorithm that selectively combines tokens within a local neighborhood, adjusting the computational complexity from linear to quadratic based on the neighborhood size. Our comprehensive empirical evaluation demonstrates that token merging offers substantial computational benefits with minimal impact on accuracy across various models and datasets. On the recently proposed Chronos foundation model, we achieve accelerations up to 5400% with only minor accuracy degradations. | [
"['Leon Götz' 'Marcel Kollovieh' 'Stephan Günnemann' 'Leo Schwinn']"
] |
null | null | 2405.17955 | null | null | http://arxiv.org/pdf/2405.17955v1 | 2024-05-28T08:34:41Z | 2024-05-28T08:34:41Z | Efficient Prior Calibration From Indirect Data | Bayesian inversion is central to the quantification of uncertainty within problems arising from numerous applications in science and engineering. To formulate the approach, four ingredients are required: a forward model mapping the unknown parameter to an element of a solution space, often the solution space for a differential equation; an observation operator mapping an element of the solution space to the data space; a noise model describing how noise pollutes the observations; and a prior model describing knowledge about the unknown parameter before the data is acquired. This paper is concerned with learning the prior model from data; in particular, learning the prior from multiple realizations of indirect data obtained through the noisy observation process. The prior is represented, using a generative model, as the pushforward of a Gaussian in a latent space; the pushforward map is learned by minimizing an appropriate loss function. A metric that is well-defined under empirical approximation is used to define the loss function for the pushforward map to make an implementable methodology. Furthermore, an efficient residual-based neural operator approximation of the forward model is proposed and it is shown that this may be learned concurrently with the pushforward map, using a bilevel optimization formulation of the problem; this use of neural operator approximation has the potential to make prior learning from indirect data more computationally efficient, especially when the observation process is expensive, non-smooth or not known. The ideas are illustrated with the Darcy flow inverse problem of finding permeability from piezometric head measurements. | [
"['O. Deniz Akyildiz' 'Mark Girolami' 'Andrew M. Stuart'\n 'Arnaud Vadeboncoeur']"
] |
null | null | 2405.17968 | null | null | http://arxiv.org/pdf/2405.17968v1 | 2024-05-28T08:55:02Z | 2024-05-28T08:55:02Z | Matroid Semi-Bandits in Sublinear Time | We study the matroid semi-bandits problem, where at each round the learner plays a subset of $K$ arms from a feasible set, and the goal is to maximize the expected cumulative linear rewards. Existing algorithms have per-round time complexity at least $\Omega(K)$, which becomes expensive when $K$ is large. To address this computational issue, we propose FasterCUCB whose sampling rule takes time sublinear in $K$ for common classes of matroids: $O(D\,\text{polylog}(K)\,\text{polylog}(T))$ for uniform matroids, partition matroids, and graphical matroids, and $O(D\sqrt{K}\,\text{polylog}(T))$ for transversal matroids. Here, $D$ is the maximum number of elements in any feasible subset of arms, and $T$ is the horizon. Our technique is based on dynamic maintenance of an approximate maximum-weight basis over inner-product weights. Although the introduction of an approximate maximum-weight basis presents a challenge in regret analysis, we can still guarantee an upper bound on regret as tight as CUCB in the sense that it matches the gap-dependent lower bound by Kveton et al. (2014a) asymptotically. | [
"['Ruo-Chun Tzeng' 'Naoto Ohsaka' 'Kaito Ariu']"
] |
null | null | 2405.17969 | null | null | http://arxiv.org/pdf/2405.17969v1 | 2024-05-28T08:56:33Z | 2024-05-28T08:56:33Z | Knowledge Circuits in Pretrained Transformers | The remarkable capabilities of modern large language models are rooted in their vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store knowledge have long been a subject of intense interest and investigation among researchers. To date, most studies have concentrated on isolated components within these models, such as the Multilayer Perceptrons and attention heads. In this paper, we delve into the computation graph of the language model to uncover the knowledge circuits that are instrumental in articulating specific knowledge. The experiments, conducted with GPT2 and TinyLLAMA, have allowed us to observe how certain information heads, relation heads, and Multilayer Perceptrons collaboratively encode knowledge within the model. Moreover, we evaluate the impact of current knowledge editing techniques on these knowledge circuits, providing deeper insights into the functioning and constraints of these editing methodologies. Finally, we utilize knowledge circuits to analyze and interpret language model behaviors such as hallucinations and in-context learning. We believe the knowledge circuit holds potential for advancing our understanding of Transformers and guiding the improved design of knowledge editing. Code and data are available in https://github.com/zjunlp/KnowledgeCircuits. | [
"['Yunzhi Yao' 'Ningyu Zhang' 'Zekun Xi' 'Mengru Wang' 'Ziwen Xu'\n 'Shumin Deng' 'Huajun Chen']"
] |
null | null | 2405.17983 | null | null | http://arxiv.org/pdf/2405.17983v1 | 2024-05-28T09:16:08Z | 2024-05-28T09:16:08Z | Reinforced Model Predictive Control via Trust-Region Quasi-Newton Policy
Optimization | Model predictive control can optimally deal with nonlinear systems under consideration of constraints. The control performance depends on the model accuracy and the prediction horizon. Recent advances propose to use reinforcement learning applied to a parameterized model predictive controller to recover the optimal control performance even if an imperfect model or short prediction horizons are used. However, common reinforcement learning algorithms rely on first order updates, which only have a linear convergence rate and hence need an excessive amount of dynamic data. Higher order updates are typically intractable if the policy is approximated with neural networks due to the large number of parameters. In this work, we use a parameterized model predictive controller as policy, and leverage the small amount of necessary parameters to propose a trust-region constrained Quasi-Newton training algorithm for policy optimization with a superlinear convergence rate. We show that the required second order derivative information can be calculated by the solution of a linear system of equations. A simulation study illustrates that the proposed training algorithm outperforms other algorithms in terms of data efficiency and accuracy. | [
"['Dean Brandner' 'Sergio Lucia']"
] |
null | null | 2405.17984 | null | null | http://arxiv.org/pdf/2405.17984v1 | 2024-05-28T09:17:58Z | 2024-05-28T09:17:58Z | Cross-Context Backdoor Attacks against Graph Prompt Learning | Graph Prompt Learning (GPL) bridges significant disparities between pretraining and downstream applications to alleviate the knowledge transfer bottleneck in real-world graph learning. While GPL offers superior effectiveness in graph knowledge transfer and computational efficiency, the security risks posed by backdoor poisoning effects embedded in pretrained models remain largely unexplored. Our study provides a comprehensive analysis of GPL's vulnerability to backdoor attacks. We introduce \textit{CrossBA}, the first cross-context backdoor attack against GPL, which manipulates only the pretraining phase without requiring knowledge of downstream applications. Our investigation reveals both theoretically and empirically that tuning trigger graphs, combined with prompt transformations, can seamlessly transfer the backdoor threat from pretrained encoders to downstream applications. Through extensive experiments involving 3 representative GPL methods across 5 distinct cross-context scenarios and 5 benchmark datasets of node and graph classification tasks, we demonstrate that \textit{CrossBA} consistently achieves high attack success rates while preserving the functionality of downstream applications over clean input. We also explore potential countermeasures against \textit{CrossBA} and conclude that current defenses are insufficient to mitigate \textit{CrossBA}. Our study highlights the persistent backdoor threats to GPL systems, raising trustworthiness concerns in the practices of GPL techniques. | [
"['Xiaoting Lyu' 'Yufei Han' 'Wei Wang' 'Hangwei Qian' 'Ivor Tsang'\n 'Xiangliang Zhang']"
] |
null | null | 2405.17995 | null | null | http://arxiv.org/pdf/2405.17995v1 | 2024-05-28T09:28:52Z | 2024-05-28T09:28:52Z | DMT-JEPA: Discriminative Masked Targets for Joint-Embedding Predictive
Architecture | The joint-embedding predictive architecture (JEPA) recently has shown impressive results in extracting visual representations from unlabeled imagery under a masking strategy. However, we reveal its disadvantages, notably its insufficient understanding of local semantics. This deficiency originates from masked modeling in the embedding space, resulting in a reduction of discriminative power and can even lead to the neglect of critical local semantics. To bridge this gap, we introduce DMT-JEPA, a novel masked modeling objective rooted in JEPA, specifically designed to generate discriminative latent targets from neighboring information. Our key idea is simple: we consider a set of semantically similar neighboring patches as a target of a masked patch. To be specific, the proposed DMT-JEPA (a) computes feature similarities between each masked patch and its corresponding neighboring patches to select patches having semantically meaningful relations, and (b) employs lightweight cross-attention heads to aggregate features of neighboring patches as the masked targets. Consequently, DMT-JEPA demonstrates strong discriminative power, offering benefits across a diverse spectrum of downstream tasks. Through extensive experiments, we demonstrate our effectiveness across various visual benchmarks, including ImageNet-1K image classification, ADE20K semantic segmentation, and COCO object detection tasks. Code is available at: \url{https://github.com/DMTJEPA/DMTJEPA}. | [
"['Shentong Mo' 'Sukmin Yun']"
] |
null | null | 2405.18009 | null | null | http://arxiv.org/pdf/2405.18009v1 | 2024-05-28T09:50:46Z | 2024-05-28T09:50:46Z | Exploring Context Window of Large Language Models via Decomposed
Positional Vectors | Transformer-based large language models (LLMs) typically have a limited context window, resulting in significant performance degradation when processing text beyond the length of the context window. Extensive studies have been proposed to extend the context window and achieve length extrapolation of LLMs, but there is still a lack of in-depth interpretation of these approaches. In this study, we explore the positional information within and beyond the context window for deciphering the underlying mechanism of LLMs. By using a mean-based decomposition method, we disentangle positional vectors from hidden states of LLMs and analyze their formation and effect on attention. Furthermore, when texts exceed the context window, we analyze the change of positional vectors in two settings, i.e., direct extrapolation and context window extension. Based on our findings, we design two training-free context window extension methods, positional vector replacement and attention window extension. Experimental results show that our methods can effectively extend the context window length. | [
"['Zican Dong' 'Junyi Li' 'Xin Men' 'Wayne Xin Zhao' 'Bingbing Wang'\n 'Zhen Tian' 'Weipeng Chen' 'Ji-Rong Wen']"
] |
null | null | 2405.18029 | null | null | http://arxiv.org/pdf/2405.18029v1 | 2024-05-28T10:25:06Z | 2024-05-28T10:25:06Z | Are Image Distributions Indistinguishable to Humans Indistinguishable to
Classifiers? | The ultimate goal of generative models is to characterize the data distribution perfectly. For image generation, common metrics of visual quality (e.g., FID), and the truthlikeness of generated images to the human eyes seem to suggest that we are close to achieving it. However, through distribution classification tasks, we find that, in the eyes of classifiers parameterized by neural networks, the strongest diffusion models are still far from this goal. Specifically, classifiers consistently and effortlessly distinguish between real and generated images in various settings. Further, we observe an intriguing discrepancy: classifiers can identify differences between diffusion models with similar performance (e.g., U-ViT-H vs. DiT-XL), but struggle to differentiate between the smallest and largest models in the same family (e.g., EDM2-XS vs. EDM2-XXL), whereas humans exhibit the opposite tendency. As an explanation, our comprehensive empirical study suggests that, unlike humans, classifiers tend to classify images through edge and high-frequency components. We believe that our methodology can serve as a probe to understand how generative models work and inspire further thought on how existing models can be improved and how the abuse of such models can be prevented. | [
"['Zebin You' 'Xinyu Zhang' 'Hanzhong Guo' 'Jingdong Wang' 'Chongxuan Li']"
] |
null | null | 2405.18031 | null | null | http://arxiv.org/pdf/2405.18031v1 | 2024-05-28T10:28:45Z | 2024-05-28T10:28:45Z | Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized
Optimization over Time-Varying Networks | We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network. This problem is relatively well-studied in the scenario when the objective functions are smooth, or the links of the network are fixed in time, or both. In particular, lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established, along with matching optimal algorithms. However, the remaining and most challenging setting of non-smooth decentralized optimization over time-varying networks is largely underexplored, as neither lower bounds nor optimal algorithms are known in the literature. We resolve this fundamental gap with the following contributions: (i) we establish the first lower bounds on the communication and subgradient computation complexities of solving non-smooth convex decentralized optimization problems over time-varying networks; (ii) we develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art. | [
"['Dmitry Kovalev' 'Ekaterina Borodich' 'Alexander Gasnikov'\n 'Dmitrii Feoktistov']"
] |
null | null | 2405.18036 | null | null | http://arxiv.org/pdf/2405.18036v1 | 2024-05-28T10:40:20Z | 2024-05-28T10:40:20Z | ForecastGrapher: Redefining Multivariate Time Series Forecasting with
Graph Neural Networks | The challenge of effectively learning inter-series correlations for multivariate time series forecasting remains a substantial and unresolved problem. Traditional deep learning models, which are largely dependent on the Transformer paradigm for modeling long sequences, often fail to integrate information from multiple time series into a coherent and universally applicable model. To bridge this gap, our paper presents ForecastGrapher, a framework that reconceptualizes multivariate time series forecasting as a node regression task, providing a unique avenue for capturing the intricate temporal dynamics and inter-series correlations. Our approach is underpinned by three pivotal steps: firstly, generating custom node embeddings to reflect the temporal variations within each series; secondly, constructing an adaptive adjacency matrix to encode the inter-series correlations; and thirdly, augmenting the GNNs' expressive power by diversifying the node feature distribution. To enhance this expressive power, we introduce the Group Feature Convolution GNN (GFC-GNN). This model employs a learnable scaler to segment node features into multiple groups and applies one-dimensional convolutions with different kernel lengths to each group prior to the aggregation phase. Consequently, the GFC-GNN method enriches the diversity of node feature distribution in a fully end-to-end fashion. Through extensive experiments and ablation studies, we show that ForecastGrapher surpasses strong baselines and leading published techniques in the domain of multivariate time series forecasting. | [
"['Wanlin Cai' 'Kun Wang' 'Hao Wu' 'Xiaoxu Chen' 'Yuankai Wu']"
] |
null | null | 2405.18039 | null | null | http://arxiv.org/pdf/2405.18039v2 | 2024-06-21T07:06:30Z | 2024-05-28T10:50:35Z | Large Language Model-Driven Curriculum Design for Mobile Networks | This study introduces an innovative framework that employs large language models (LLMs) to automate the design and generation of curricula for reinforcement learning (RL). As mobile networks evolve towards the 6G era, managing their increasing complexity and dynamic nature poses significant challenges. Conventional RL approaches often suffer from slow convergence and poor generalization due to conflicting objectives and the large state and action spaces associated with mobile networks. To address these shortcomings, we introduce curriculum learning, a method that systematically exposes the RL agent to progressively challenging tasks, improving convergence and generalization. However, curriculum design typically requires extensive domain knowledge and manual human effort. Our framework mitigates this by utilizing the generative capabilities of LLMs to automate the curriculum design process, significantly reducing human effort while improving the RL agent's convergence and performance. We deploy our approach within a simulated mobile network environment and demonstrate improved RL convergence rates, generalization to unseen scenarios, and overall performance enhancements. As a case study, we consider autonomous coordination and user association in mobile networks. Our obtained results highlight the potential of combining LLM-based curriculum generation with RL for managing next-generation wireless networks, marking a significant step towards fully autonomous network operations. | [
"['Omar Erak' 'Omar Alhussein' 'Shimaa Naser' 'Nouf Alabbasi' 'De Mi'\n 'Sami Muhaidat']"
] |
null | null | 2405.18040 | null | null | http://arxiv.org/pdf/2405.18040v1 | 2024-05-28T10:51:38Z | 2024-05-28T10:51:38Z | Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew
Resilience | Federated learning (FL) has recently emerged as a compelling machine learning paradigm, prioritizing the protection of privacy for training data. The increasing demand to address issues such as ``the right to be forgotten'' and combat data poisoning attacks highlights the importance of techniques, known as \textit{unlearning}, which facilitate the removal of specific training data from trained FL models. Despite numerous unlearning methods proposed for centralized learning, they often prove inapplicable to FL due to fundamental differences in the operation of the two learning paradigms. Consequently, unlearning in FL remains in its early stages, presenting several challenges. Many existing unlearning solutions in FL require a costly retraining process, which can be burdensome for clients. Moreover, these methods are primarily validated through experiments, lacking theoretical assurances. In this study, we introduce Fast-FedUL, a tailored unlearning method for FL, which eliminates the need for retraining entirely. Through meticulous analysis of the target client's influence on the global model in each round, we develop an algorithm to systematically remove the impact of the target client from the trained model. In addition to presenting empirical findings, we offer a theoretical analysis delineating the upper bound of our unlearned model and the exact retrained model (the one obtained through retraining using untargeted clients). Experimental results with backdoor attack scenarios indicate that Fast-FedUL effectively removes almost all traces of the target client, while retaining the knowledge of untargeted clients (obtaining a high accuracy of up to 98% on the main task). Significantly, Fast-FedUL attains the lowest time complexity, providing a speed that is 1000 times faster than retraining. Our source code is publicly available at \url{https://github.com/thanhtrunghuynh93/fastFedUL}. | [
"['Thanh Trung Huynh' 'Trong Bang Nguyen' 'Phi Le Nguyen'\n 'Thanh Tam Nguyen' 'Matthias Weidlich' 'Quoc Viet Hung Nguyen'\n 'Karl Aberer']"
] |
null | null | 2405.18042 | null | null | http://arxiv.org/pdf/2405.18042v1 | 2024-05-28T10:54:26Z | 2024-05-28T10:54:26Z | Visualizing the loss landscape of Self-supervised Vision Transformer | The Masked autoencoder (MAE) has drawn attention as a representative self-supervised approach for masked image modeling with vision transformers. However, even though MAE shows better generalization capability than fully supervised training from scratch, the reason why has not been explored. In another line of work, the Reconstruction Consistent Masked Auto Encoder (RC-MAE), has been proposed which adopts a self-distillation scheme in the form of an exponential moving average (EMA) teacher into MAE, and it has been shown that the EMA-teacher performs a conditional gradient correction during optimization. To further investigate the reason for better generalization of the self-supervised ViT when trained by MAE (MAE-ViT) and the effect of the gradient correction of RC-MAE from the perspective of optimization, we visualize the loss landscapes of the self-supervised vision transformer by both MAE and RC-MAE and compare them with the supervised ViT (Sup-ViT). Unlike previous loss landscape visualizations of neural networks based on classification task loss, we visualize the loss landscape of ViT by computing pre-training task loss. Through the lens of loss landscapes, we find two interesting observations: (1) MAE-ViT has a smoother and wider overall loss curvature than Sup-ViT. (2) The EMA-teacher allows MAE to widen the region of convexity in both pretraining and linear probing, leading to quicker convergence. To the best of our knowledge, this work is the first to investigate the self-supervised ViT through the lens of the loss landscape. | [
"['Youngwan Lee' 'Jeffrey Ryan Willette' 'Jonghee Kim' 'Sung Ju Hwang']"
] |
null | null | 2405.18045 | null | null | http://arxiv.org/pdf/2405.18045v1 | 2024-05-28T11:00:41Z | 2024-05-28T11:00:41Z | Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning:
From InfoNCE to Kernel-Based Losses | What do different contrastive learning (CL) losses actually optimize for? Although multiple CL methods have demonstrated remarkable representation learning capabilities, the differences in their inner workings remain largely opaque. In this work, we analyse several CL families and prove that, under certain conditions, they admit the same minimisers when optimizing either their batch-level objectives or their expectations asymptotically. In both cases, an intimate connection with the hyperspherical energy minimisation (HEM) problem resurfaces. Drawing inspiration from this, we introduce a novel CL objective, coined Decoupled Hyperspherical Energy Loss (DHEL). DHEL simplifies the problem by decoupling the target hyperspherical energy from the alignment of positive examples while preserving the same theoretical guarantees. Going one step further, we show the same results hold for another relevant CL family, namely kernel contrastive learning (KCL), with the additional advantage of the expected loss being independent of batch size, thus identifying the minimisers in the non-asymptotic regime. Empirical results demonstrate improved downstream performance and robustness across combinations of different batch sizes and hyperparameters and reduced dimensionality collapse, on several computer vision datasets. | [
"['Panagiotis Koromilas' 'Giorgos Bouritsas' 'Theodoros Giannakopoulos'\n 'Mihalis Nicolaou' 'Yannis Panagakis']"
] |
null | null | 2405.18047 | null | null | http://arxiv.org/pdf/2405.18047v1 | 2024-05-28T11:02:01Z | 2024-05-28T11:02:01Z | 2BP: 2-Stage Backpropagation | As Deep Neural Networks (DNNs) grow in size and complexity, they often exceed the memory capacity of a single accelerator, necessitating the sharding of model parameters across multiple accelerators. Pipeline parallelism is a commonly used sharding strategy for training large DNNs. However, current implementations of pipeline parallelism are being unintentionally bottlenecked by the automatic differentiation tools provided by ML frameworks. This paper introduces 2-stage backpropagation (2BP). By splitting the backward propagation step into two separate stages, we can reduce idle compute time. We tested 2BP on various model architectures and pipelining schedules, achieving increases in throughput in all cases. Using 2BP, we were able to achieve a 1.70x increase in throughput compared to traditional methods when training a LLaMa-like transformer with 7 billion parameters across 4 GPUs. | [
"['Christopher Rae' 'Joseph K. L. Lee' 'James Richings']"
] |
null | null | 2405.18050 | null | null | http://arxiv.org/pdf/2405.18050v1 | 2024-05-28T11:05:41Z | 2024-05-28T11:05:41Z | Learning-Based Link Anomaly Detection in Continuous-Time Dynamic Graphs | Anomaly detection in continuous-time dynamic graphs is an emerging field yet under-explored in the context of learning-based approaches. In this paper, we pioneer structured analyses of link-level anomalies and graph representation learning for identifying anomalous links in these graphs. First, we introduce a fine-grain taxonomy for edge-level anomalies leveraging structural, temporal, and contextual graph properties. We present a method for generating and injecting such typed anomalies into graphs. Next, we introduce a novel method to generate continuous-time dynamic graphs with consistent patterns across time, structure, and context. To allow temporal graph methods to learn the link anomaly detection task, we extend the generic link prediction setting by: (1) conditioning link existence on contextual edge attributes; and (2) refining the training regime to accommodate diverse perturbations in the negative edge sampler. Building on this, we benchmark methods for anomaly detection. Comprehensive experiments on synthetic and real-world datasets -- featuring synthetic and labeled organic anomalies and employing six state-of-the-art learning methods -- validate our taxonomy and generation processes for anomalies and benign graphs, as well as our approach to adapting link prediction methods for anomaly detection. Our results further reveal that different learning methods excel in capturing different aspects of graph normality and detecting different types of anomalies. We conclude with a comprehensive list of findings highlighting opportunities for future research. | [
"['Tim Poštuvan' 'Claas Grohnfeldt' 'Michele Russo' 'Giulio Lovisotto']"
] |
null | null | 2405.18068 | null | null | http://arxiv.org/pdf/2405.18068v1 | 2024-05-28T11:28:59Z | 2024-05-28T11:28:59Z | A Survey of Latent Factor Models in Recommender Systems | Recommender systems are essential tools in the digital era, providing personalized content to users in areas like e-commerce, entertainment, and social media. Among the many approaches developed to create these systems, latent factor models have proven particularly effective. This survey systematically reviews latent factor models in recommender systems, focusing on their core principles, methodologies, and recent advancements. The literature is examined through a structured framework covering learning data, model architecture, learning strategies, and optimization techniques. The analysis includes a taxonomy of contributions and detailed discussions on the types of learning data used, such as implicit feedback, trust, and content data, various models such as probabilistic, nonlinear, and neural models, and an exploration of diverse learning strategies like online learning, transfer learning, and active learning. Furthermore, the survey addresses the optimization strategies used to train latent factor models, improving their performance and scalability. By identifying trends, gaps, and potential research directions, this survey aims to provide valuable insights for researchers and practitioners looking to advance the field of recommender systems. | [
"['Hind I. Alshbanat' 'Hafida Benhidour' 'Said Kerrache']"
] |
null | null | 2405.18069 | null | null | http://arxiv.org/pdf/2405.18069v1 | 2024-05-28T11:29:25Z | 2024-05-28T11:29:25Z | An Empirical Analysis of Forgetting in Pre-trained Models with
Incremental Low-Rank Updates | Broad, open source availability of large pretrained foundation models on the internet through platforms such as HuggingFace has taken the world of practical deep learning by storm. A classical pipeline for neural network training now typically consists of finetuning these pretrained networks on a small target dataset instead of training from scratch. In the case of large models this can be done even on modest hardware using a low rank training technique known as Low-Rank Adaptation (LoRA). While Low Rank training has already been studied in the continual learning setting, existing works often consider storing the learned adapter along with the existing model but rarely attempt to modify the weights of the pretrained model by merging the LoRA with the existing weights after finishing the training of each task. In this article we investigate this setting and study the impact of LoRA rank on the forgetting of the pretraining foundation task and on the plasticity and forgetting of subsequent ones. We observe that this rank has an important impact on forgetting of both the pretraining and downstream tasks. We also observe that vision transformers finetuned in that way exhibit a sort of ``contextual'' forgetting, a behaviour that we do not observe for residual networks and that we believe has not been observed yet in previous continual learning works. | [
"['Albin Soutif--Cormerais' 'Simone Magistri' 'Joost van de Weijer'\n 'Andew D. Bagdanov']"
] |
null | null | 2405.18075 | null | null | http://arxiv.org/pdf/2405.18075v1 | 2024-05-28T11:30:19Z | 2024-05-28T11:30:19Z | Implicitly Guided Design with PropEn: Match your Data to Follow the
Gradient | Across scientific domains, generating new models or optimizing existing ones while meeting specific criteria is crucial. Traditional machine learning frameworks for guided design use a generative model and a surrogate model (discriminator), requiring large datasets. However, real-world scientific applications often have limited data and complex landscapes, making data-hungry models inefficient or impractical. We propose a new framework, PropEn, inspired by ``matching'', which enables implicit guidance without training a discriminator. By matching each sample with a similar one that has a better property value, we create a larger training dataset that inherently indicates the direction of improvement. Matching, combined with an encoder-decoder architecture, forms a domain-agnostic generative framework for property enhancement. We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution, allowing efficient design optimization. Extensive evaluations in toy problems and scientific applications, such as therapeutic protein design and airfoil optimization, demonstrate PropEn's advantages over common baselines. Notably, the protein design results are validated with wet lab experiments, confirming the competitiveness and effectiveness of our approach. | [
"['Nataša Tagasovska' 'Vladimir Gligorijević' 'Kyunghyun Cho'\n 'Andreas Loukas']"
] |
null | null | 2405.18077 | null | null | http://arxiv.org/pdf/2405.18077v1 | 2024-05-28T11:37:59Z | 2024-05-28T11:37:59Z | Design Principles for Falsifiable, Replicable and Reproducible Empirical
ML Research | Empirical research plays a fundamental role in the machine learning domain. At the heart of impactful empirical research lies the development of clear research hypotheses, which then shape the design of experiments. The execution of experiments must be carried out with precision to ensure reliable results, followed by statistical analysis to interpret these outcomes. This process is key to either supporting or refuting initial hypotheses. Despite its importance, there is a high variability in research practices across the machine learning community and no uniform understanding of quality criteria for empirical research. To address this gap, we propose a model for the empirical research process, accompanied by guidelines to uphold the validity of empirical research. By embracing these recommendations, greater consistency, enhanced reliability and increased impact can be achieved. | [
"['Daniel Vranješ' 'Oliver Niggemann']"
] |
null | null | 2405.18080 | null | null | http://arxiv.org/pdf/2405.18080v1 | 2024-05-28T11:41:41Z | 2024-05-28T11:41:41Z | HarmoDT: Harmony Multi-Task Decision Transformer for Offline
Reinforcement Learning | The purpose of offline multi-task reinforcement learning (MTRL) is to develop a unified policy applicable to diverse tasks without the need for online environmental interaction. Recent advancements approach this through sequence modeling, leveraging the Transformer architecture's scalability and the benefits of parameter sharing to exploit task similarities. However, variations in task content and complexity pose significant challenges in policy formulation, necessitating judicious parameter sharing and management of conflicting gradients for optimal policy performance. In this work, we introduce the Harmony Multi-Task Decision Transformer (HarmoDT), a novel solution designed to identify an optimal harmony subspace of parameters for each task. We approach this as a bi-level optimization problem, employing a meta-learning framework that leverages gradient-based techniques. The upper level of this framework is dedicated to learning a task-specific mask that delineates the harmony subspace, while the inner level focuses on updating parameters to enhance the overall performance of the unified policy. Empirical evaluations on a series of benchmarks demonstrate the superiority of HarmoDT, verifying the effectiveness of our approach. | [
"['Shengchao Hu' 'Ziqing Fan' 'Li Shen' 'Ya Zhang' 'Yanfeng Wang'\n 'Dacheng Tao']"
] |
null | null | 2405.18084 | null | null | http://arxiv.org/pdf/2405.18084v1 | 2024-05-28T11:45:30Z | 2024-05-28T11:45:30Z | Guidance and Control Networks with Periodic Activation Functions | Inspired by the versatility of sinusoidal representation networks (SIRENs), we present a modified Guidance & Control Networks (G&CNETs) variant using periodic activation functions in the hidden layers. We demonstrate that the resulting G&CNETs train faster and achieve a lower overall training error on three different control scenarios on which G&CNETs have been tested previously. A preliminary analysis is presented in an attempt to explain the superior performance of the SIREN architecture for the particular types of tasks that G&CNETs excel on. | [
"['Sebastien Origer' 'Dario Izzo']"
] |
null | null | 2405.18091 | null | null | http://arxiv.org/pdf/2405.18091v1 | 2024-05-28T11:57:29Z | 2024-05-28T11:57:29Z | An adaptive transfer learning perspective on classification in
non-stationary environments | We consider a semi-supervised classification problem with non-stationary label-shift in which we observe a labelled data set followed by a sequence of unlabelled covariate vectors in which the marginal probabilities of the class labels may change over time. Our objective is to predict the corresponding class-label for each covariate vector, without ever observing the ground-truth labels, beyond the initial labelled data set. Previous work has demonstrated the potential of sophisticated variants of online gradient descent to perform competitively with the optimal dynamic strategy (Bai et al. 2022). In this work we explore an alternative approach grounded in statistical methods for adaptive transfer learning. We demonstrate the merits of this alternative methodology by establishing a high-probability regret bound on the test error at any given individual test-time, which adapts automatically to the unknown dynamics of the marginal label probabilities. Furthermore, we give bounds on the average dynamic regret which match the average guarantees of the online learning perspective for any given time interval. | [
"['Henry W J Reeve']"
] |
null | null | 2405.18093 | null | null | http://arxiv.org/pdf/2405.18093v1 | 2024-05-28T11:59:44Z | 2024-05-28T11:59:44Z | Pipette: Automatic Fine-grained Large Language Model Training
Configurator for Real-World Clusters | Training large language models (LLMs) is known to be challenging because of the huge computational and memory capacity requirements. To address these issues, it is common to use a cluster of GPUs with 3D parallelism, which splits a model along the data batch, pipeline stage, and intra-layer tensor dimensions. However, the use of 3D parallelism produces the additional challenge of finding the optimal number of ways on each dimension and mapping the split models onto the GPUs. Several previous studies have attempted to automatically find the optimal configuration, but many of these lacked several important aspects. For instance, the heterogeneous nature of the interconnect speeds is often ignored. While the peak bandwidths for the interconnects are usually made equal, the actual attained bandwidth varies per link in real-world clusters. Combined with the critical path modeling that does not properly consider the communication, they easily fall into sub-optimal configurations. In addition, they often fail to consider the memory requirement per GPU, often recommending solutions that could not be executed. To address these challenges, we propose Pipette, which is an automatic fine-grained LLM training configurator for real-world clusters. By devising better performance models along with the memory estimator and fine-grained individual GPU assignment, Pipette achieves faster configurations that satisfy the memory constraints. We evaluated Pipette on large clusters to show that it provides a significant speedup over the prior art. The implementation of Pipette is available at https://github.com/yimjinkyu1/date2024_pipette. | [
"['Jinkyu Yim' 'Jaeyong Song' 'Yerim Choi' 'Jaebeen Lee' 'Jaewon Jung'\n 'Hongsun Jang' 'Jinho Lee']"
] |
null | null | 2405.18095 | null | null | http://arxiv.org/pdf/2405.18095v2 | 2024-05-31T22:28:18Z | 2024-05-28T12:01:52Z | Is machine learning good or bad for the natural sciences? | Machine learning (ML) methods are having a huge impact across all of the sciences. However, ML has a strong ontology - in which only the data exist - and a strong epistemology - in which a model is considered good if it performs well on held-out training data. These philosophies are in strong conflict with both standard practices and key philosophies in the natural sciences. Here we identify some locations for ML in the natural sciences at which the ontology and epistemology are valuable. For example, when an expressive machine learning model is used in a causal inference to represent the effects of confounders, such as foregrounds, backgrounds, or instrument calibration parameters, the model capacity and loose philosophy of ML can make the results more trustworthy. We also show that there are contexts in which the introduction of ML introduces strong, unwanted statistical biases. For one, when ML models are used to emulate physical (or first-principles) simulations, they amplify confirmation biases. For another, when expressive regressions are used to label datasets, those labels cannot be used in downstream joint or ensemble analyses without taking on uncontrolled biases. The question in the title is being asked of all of the natural sciences; that is, we are calling on the scientific communities to take a step back and consider the role and value of ML in their fields; the (partial) answers we give here come from the particular perspective of physics. | [
"['David W. Hogg' 'Soledad Villar']"
] |
null | null | 2405.18100 | null | null | http://arxiv.org/pdf/2405.18100v1 | 2024-05-28T12:05:20Z | 2024-05-28T12:05:20Z | A Pontryagin Perspective on Reinforcement Learning | Reinforcement learning has traditionally focused on learning state-dependent policies to solve optimal control problems in a closed-loop fashion. In this work, we introduce the paradigm of open-loop reinforcement learning where a fixed action sequence is learned instead. We present three new algorithms: one robust model-based method and two sample-efficient model-free methods. Rather than basing our algorithms on Bellman's equation from dynamic programming, our work builds on Pontryagin's principle from the theory of open-loop optimal control. We provide convergence guarantees and evaluate all methods empirically on a pendulum swing-up task, as well as on two high-dimensional MuJoCo tasks, demonstrating remarkable performance compared to existing baselines. | [
"['Onno Eberhard' 'Claire Vernade' 'Michael Muehlebach']"
] |
null | null | 2405.18110 | null | null | http://arxiv.org/pdf/2405.18110v1 | 2024-05-28T12:18:19Z | 2024-05-28T12:18:19Z | Individual Contributions as Intrinsic Exploration Scaffolds for
Multi-agent Reinforcement Learning | In multi-agent reinforcement learning (MARL), effective exploration is critical, especially in sparse reward environments. Although introducing global intrinsic rewards can foster exploration in such settings, it often complicates credit assignment among agents. To address this difficulty, we propose Individual Contributions as intrinsic Exploration Scaffolds (ICES), a novel approach to motivate exploration by assessing each agent's contribution from a global view. In particular, ICES constructs exploration scaffolds with Bayesian surprise, leveraging global transition information during centralized training. These scaffolds, used only in training, help to guide individual agents towards actions that significantly impact the global latent state transitions. Additionally, ICES separates exploration policies from exploitation policies, enabling the former to utilize privileged global information during training. Extensive experiments on cooperative benchmark tasks with sparse rewards, including Google Research Football (GRF) and StarCraft Multi-agent Challenge (SMAC), demonstrate that ICES exhibits superior exploration capabilities compared with baselines. The code is publicly available at https://github.com/LXXXXR/ICES. | [
"['Xinran Li' 'Zifan Liu' 'Shibo Chen' 'Jun Zhang']"
] |
null | null | 2405.18119 | null | null | http://arxiv.org/pdf/2405.18119v2 | 2024-07-05T15:23:58Z | 2024-05-28T12:28:12Z | Low-Resource Crop Classification from Multi-Spectral Time Series Using
Lossless Compressors | Deep learning has significantly improved the accuracy of crop classification using multispectral temporal data. However, these models have complex structures with numerous parameters, requiring large amounts of data and costly training. In low-resource situations with fewer labeled samples, deep learning models perform poorly due to insufficient data. Conversely, compressors are data-type agnostic, and non-parametric methods do not bring underlying assumptions. Inspired by this insight, we propose a non-training alternative to deep learning models, aiming to address these situations. Specifically, the Symbolic Representation Module is proposed to convert the reflectivity into symbolic representations. The symbolic representations are then cross-transformed in both the channel and time dimensions to generate symbolic embeddings. Next, the Multi-scale Normalised Compression Distance (MNCD) is designed to measure the correlation between any two symbolic embeddings. Finally, based on the MNCDs, high-quality crop classification can be achieved using only a k-nearest-neighbor (kNN) classifier. The entire framework is ready-to-use and lightweight. Without any training, it outperformed, on average, 7 advanced deep learning models trained at scale on three benchmark datasets. It also outperforms more than half of these models in the few-shot setting with sparse crop labels. Therefore, the high performance and robustness of our non-training framework make it truly applicable to real-world crop mapping. Codes are available at: https://github.com/qinfengsama/Compressor-Based-Crop-Mapping. | [
"['Wei Cheng' 'Hongrui Ye' 'Xiao Wen' 'Jiachen Zhang' 'Jiping Xu'\n 'Feifan Zhang']"
] |
null | null | 2405.18127 | null | null | http://arxiv.org/pdf/2405.18127v1 | 2024-05-28T12:39:24Z | 2024-05-28T12:39:24Z | Graph Coarsening with Message-Passing Guarantees | Graph coarsening aims to reduce the size of a large graph while preserving some of its key properties, which has been used in many applications to reduce computational load and memory footprint. For instance, in graph machine learning, training Graph Neural Networks (GNNs) on coarsened graphs leads to drastic savings in time and memory. However, GNNs rely on the Message-Passing (MP) paradigm, and classical spectral preservation guarantees for graph coarsening do not directly lead to theoretical guarantees when performing naive message-passing on the coarsened graph. In this work, we propose a new message-passing operation specific to coarsened graphs, which exhibits theoretical guarantees on the preservation of the propagated signal. Interestingly, and in a sharp departure from previous proposals, this operation on coarsened graphs is oriented, even when the original graph is undirected. We conduct node classification tasks on synthetic and real data and observe improved results compared to performing naive message-passing on the coarsened graph. | [
"['Antonin Joly' 'Nicolas Keriven']"
] |
null | null | 2405.18137 | null | null | http://arxiv.org/pdf/2405.18137v1 | 2024-05-28T12:51:01Z | 2024-05-28T12:51:01Z | Exploiting LLM Quantization | Quantization leverages lower-precision weights to reduce the memory usage of large language models (LLMs) and is a key technique for enabling their deployment on commodity hardware. While LLM quantization's impact on utility has been extensively explored, this work for the first time studies its adverse effects from a security perspective. We reveal that widely used quantization methods can be exploited to produce a harmful quantized LLM, even though the full-precision counterpart appears benign, potentially tricking users into deploying the malicious quantized model. We demonstrate this threat using a three-staged attack framework: (i) first, we obtain a malicious LLM through fine-tuning on an adversarial task; (ii) next, we quantize the malicious model and calculate constraints that characterize all full-precision models that map to the same quantized model; (iii) finally, using projected gradient descent, we tune out the poisoned behavior from the full-precision model while ensuring that its weights satisfy the constraints computed in step (ii). This procedure results in an LLM that exhibits benign behavior in full precision but when quantized, it follows the adversarial behavior injected in step (i). We experimentally demonstrate the feasibility and severity of such an attack across three diverse scenarios: vulnerable code generation, content injection, and over-refusal attack. In practice, the adversary could host the resulting full-precision model on an LLM community hub such as Hugging Face, exposing millions of users to the threat of deploying its malicious quantized version on their devices. | [
"['Kazuki Egashira' 'Mark Vero' 'Robin Staab' 'Jingxuan He' 'Martin Vechev']"
] |
null | null | 2405.18144 | null | null | http://arxiv.org/pdf/2405.18144v1 | 2024-05-28T13:02:56Z | 2024-05-28T13:02:56Z | 4-bit Shampoo for Memory-Efficient Network Training | Second-order optimizers, maintaining a matrix termed a preconditioner, are superior to first-order optimizers in both theory and practice. The states forming the preconditioner and its inverse root restrict the maximum size of models trained by second-order optimizers. To address this, compressing 32-bit optimizer states to lower bitwidths has shown promise in reducing memory usage. However, current approaches only pertain to first-order optimizers. In this paper, we propose the first 4-bit second-order optimizers, exemplified by 4-bit Shampoo, maintaining performance similar to that of 32-bit ones. We show that quantizing the eigenvector matrix of the preconditioner in 4-bit Shampoo is remarkably better than quantizing the preconditioner itself both theoretically and experimentally. By rectifying the orthogonality of the quantized eigenvector matrix, we enhance the approximation of the preconditioner's eigenvector matrix, which also benefits the computation of its inverse 4-th root. Besides, we find that linear square quantization slightly outperforms dynamic tree quantization when quantizing second-order optimizer states. Evaluation on various networks for image classification demonstrates that our 4-bit Shampoo achieves comparable test accuracy to its 32-bit counterpart while being more memory-efficient. The source code will be made available. | [
"['Sike Wang' 'Jia Li' 'Pan Zhou' 'Hua Huang']"
] |
null | null | 2405.18146 | null | null | http://arxiv.org/pdf/2405.18146v2 | 2024-06-11T06:47:50Z | 2024-05-28T13:06:32Z | Unified Low-rank Compression Framework for Click-through Rate Prediction | Deep Click-Through Rate (CTR) prediction models play an important role in modern industrial recommendation scenarios. However, high memory overhead and computational costs limit their deployment in resource-constrained environments. Low-rank approximation is an effective method for computer vision and natural language processing models, but its application in compressing CTR prediction models has been less explored. Due to the limited memory and computing resources, compression of CTR prediction models often confronts three fundamental challenges, i.e., (1). How to reduce the model sizes to adapt to edge devices? (2). How to speed up CTR prediction model inference? (3). How to retain the capabilities of original models after compression? Previous low-rank compression research mostly uses tensor decomposition, which can achieve a high parameter compression ratio, but brings in AUC degradation and additional computing overhead. To address these challenges, we propose a unified low-rank decomposition framework for compressing CTR prediction models. We find that even with the most classic matrix decomposition SVD method, our framework can achieve better performance than the original model. To further improve the effectiveness of our framework, we locally compress the output features instead of compressing the model weights. Our unified low-rank compression framework can be applied to embedding tables and MLP layers in various CTR prediction models. Extensive experiments on two academic datasets and one real industrial benchmark demonstrate that, with 3-5x model size reduction, our compressed models can achieve both faster inference and higher AUC than the uncompressed original models. Our code is at https://github.com/yuhao318/Atomic_Feature_Mimicking. | [
"['Hao Yu' 'Minghao Fu' 'Jiandong Ding' 'Yusheng Zhou' 'Jianxin Wu']"
] |
null | null | 2405.18153 | null | null | http://arxiv.org/pdf/2405.18153v1 | 2024-05-28T13:14:26Z | 2024-05-28T13:14:26Z | Practical aspects for the creation of an audio dataset from field
recordings with optimized labeling budget with AI-assisted strategy | Machine Listening focuses on developing technologies to extract relevant information from audio signals. A critical aspect of these projects is the acquisition and labeling of contextualized data, which is inherently complex and requires specific resources and strategies. Despite the availability of some audio datasets, many are unsuitable for commercial applications. The paper emphasizes the importance of Active Learning (AL) using expert labelers over crowdsourcing, which often lacks detailed insights into dataset structures. AL is an iterative process combining human labelers and AI models to optimize the labeling budget by intelligently selecting samples for human review. This approach addresses the challenge of handling large, constantly growing datasets that exceed available computational resources and memory. The paper presents a comprehensive data-centric framework for Machine Listening projects, detailing the configuration of recording nodes, database structure, and labeling budget optimization in resource-constrained scenarios. Applied to an industrial port in Valencia, Spain, the framework successfully labeled 6540 ten-second audio samples over five months with a small team, demonstrating its effectiveness and adaptability to various resource availability situations. | [
"['Javier Naranjo-Alcazar' 'Jordi Grau-Haro' 'Ruben Ribes-Serrano'\n 'Pedro Zuccarello']"
] |
null | null | 2405.18161 | null | null | http://arxiv.org/pdf/2405.18161v1 | 2024-05-28T13:23:04Z | 2024-05-28T13:23:04Z | Back to the Drawing Board for Fair Representation Learning | The goal of Fair Representation Learning (FRL) is to mitigate biases in machine learning models by learning data representations that enable high accuracy on downstream tasks while minimizing discrimination based on sensitive attributes. The evaluation of FRL methods in many recent works primarily focuses on the tradeoff between downstream fairness and accuracy with respect to a single task that was used to approximate the utility of representations during training (proxy task). This incentivizes retaining only features relevant to the proxy task while discarding all other information. In extreme cases, this can cause the learned representations to collapse to a trivial, binary value, rendering them unusable in transfer settings. In this work, we argue that this approach is fundamentally mismatched with the original motivation of FRL, which arises from settings with many downstream tasks unknown at training time (transfer tasks). To remedy this, we propose to refocus the evaluation protocol of FRL methods primarily around the performance on transfer tasks. A key challenge when conducting such an evaluation is the lack of adequate benchmarks. We address this by formulating four criteria that a suitable evaluation procedure should fulfill. Based on these, we propose TransFair, a benchmark that satisfies these criteria, consisting of novel variations of popular FRL datasets with carefully calibrated transfer tasks. In this setting, we reevaluate state-of-the-art FRL methods, observing that they often overfit to the proxy task, which causes them to underperform on certain transfer tasks. We further highlight the importance of task-agnostic learning signals for FRL methods, as they can lead to more transferrable representations. | [
"['Angéline Pouget' 'Nikola Jovanović' 'Mark Vero' 'Robin Staab'\n 'Martin Vechev']"
] |
null | null | 2405.18165 | null | null | http://arxiv.org/pdf/2405.18165v1 | 2024-05-28T13:25:31Z | 2024-05-28T13:25:31Z | Time Series Representation Models | Time series analysis remains a major challenge due to its sparse characteristics, high dimensionality, and inconsistent data quality. Recent advancements in transformer-based techniques have enhanced capabilities in forecasting and imputation; however, these methods are still resource-heavy, lack adaptability, and face difficulties in integrating both local and global attributes of time series. To tackle these challenges, we propose a new architectural concept for time series analysis based on introspection. Central to this concept is the self-supervised pretraining of Time Series Representation Models (TSRMs), which once learned can be easily tailored and fine-tuned for specific tasks, such as forecasting and imputation, in an automated and resource-efficient manner. Our architecture is equipped with a flexible and hierarchical representation learning process, which is robust against missing data and outliers. It can capture and learn both local and global features of the structure, semantics, and crucial patterns of a given time series category, such as heart rate data. Our learned time series representation models can be efficiently adapted to a specific task, such as forecasting or imputation, without manual intervention. Furthermore, our architecture's design supports explainability by highlighting the significance of each input value for the task at hand. Our empirical study using four benchmark datasets shows that, compared to investigated state-of-the-art baseline methods, our architecture improves imputation and forecasting errors by up to 90.34% and 71.54%, respectively, while reducing the required trainable parameters by up to 92.43%. The source code is available at https://github.com/RobertLeppich/TSRM. | [
"['Robert Leppich' 'Vanessa Borst' 'Veronika Lesch' 'Samuel Kounev']"
] |
null | null | 2405.18172 | null | null | http://arxiv.org/pdf/2405.18172v1 | 2024-05-28T13:33:08Z | 2024-05-28T13:33:08Z | AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across
Any Scenario | While image-based virtual try-on has made significant strides, emerging approaches still fall short of delivering high-fidelity and robust fitting images across various scenarios, as their models suffer from issues of ill-fitted garment styles and quality degrading during the training process, not to mention the lack of support for various combinations of attire. Therefore, we first propose a lightweight, scalable operator known as Hydra Block for attire combinations. This is achieved through a parallel attention mechanism that facilitates the feature injection of multiple garments from conditionally encoded branches into the main network. Secondly, to significantly enhance the model's robustness and expressiveness in real-world scenarios, we evolve its potential across diverse settings by synthesizing the residuals of multiple models, as well as implementing a mask region boost strategy to overcome the instability caused by information leakage in existing models. Equipped with the above design, AnyFit surpasses all baselines on high-resolution benchmarks and real-world data by a large gap, excelling in producing well-fitting garments replete with photorealistic and rich details. Furthermore, AnyFit's impressive performance on high-fidelity virtual try-ons in any scenario from any image paves a new path for future research within the fashion community. | [
"['Yuhan Li' 'Hao Zhou' 'Wenxiang Shang' 'Ran Lin' 'Xuanhong Chen'\n 'Bingbing Ni']"
] |
null | null | 2405.18176 | null | null | http://arxiv.org/pdf/2405.18176v2 | 2024-05-29T14:17:13Z | 2024-05-28T13:43:34Z | SEMF: Supervised Expectation-Maximization Framework for Predicting
Intervals | This work introduces the Supervised Expectation-Maximization Framework (SEMF), a versatile and model-agnostic framework that generates prediction intervals for datasets with complete or missing data. SEMF extends the Expectation-Maximization (EM) algorithm, traditionally used in unsupervised learning, to a supervised context, enabling it to extract latent representations for uncertainty estimation. The framework demonstrates robustness through extensive empirical evaluation across 11 tabular datasets, achieving, in some cases, narrower normalized prediction intervals and higher coverage than traditional quantile regression methods. Furthermore, SEMF integrates seamlessly with existing machine learning algorithms, such as gradient-boosted trees and neural networks, exemplifying its usefulness for real-world applications. The experimental results highlight SEMF's potential to advance state-of-the-art techniques in uncertainty quantification. | [
"['Ilia Azizi' 'Marc-Olivier Boldi' 'Valérie Chavez-Demoulin']"
] |
null | null | 2405.18180 | null | null | http://arxiv.org/pdf/2405.18180v1 | 2024-05-28T13:47:21Z | 2024-05-28T13:47:21Z | Safe Reinforcement Learning in Black-Box Environments via Adaptive
Shielding | Empowering safe exploration of reinforcement learning (RL) agents during training is a critical impediment towards deploying RL agents in many real-world scenarios. Training RL agents in unknown, black-box environments poses an even greater safety risk when prior knowledge of the domain/task is unavailable. We introduce ADVICE (Adaptive Shielding with a Contrastive Autoencoder), a novel post-shielding technique that distinguishes safe and unsafe features of state-action pairs during training, thus protecting the RL agent from executing actions that yield potentially hazardous outcomes. Our comprehensive experimental evaluation against state-of-the-art safe RL exploration techniques demonstrates how ADVICE can significantly reduce safety violations during training while maintaining a competitive outcome reward. | [
"['Daniel Bethell' 'Simos Gerasimou' 'Radu Calinescu' 'Calum Imrie']"
] |
null | null | 2405.18187 | null | null | http://arxiv.org/pdf/2405.18187v1 | 2024-05-28T14:01:03Z | 2024-05-28T14:01:03Z | AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained
Optimization | Implicit Q-learning (IQL) serves as a strong baseline for offline RL, which learns the value function using only dataset actions through quantile regression. However, it is unclear how to recover the implicit policy from the learned implicit Q-function and why IQL can utilize weighted regression for policy extraction. IDQL reinterprets IQL as an actor-critic method and derives the weights of the implicit policy; however, these weights only hold for the optimal value function. In this work, we introduce a different way to solve the implicit policy-finding problem (IPF) by formulating this problem as an optimization problem. Based on this optimization problem, we further propose two practical algorithms, AlignIQL and AlignIQL-hard, which inherit the advantages of decoupling the actor from the critic in IQL and provide insights into why IQL can use weighted regression for policy extraction. Compared with IQL and IDQL, we find our method keeps the simplicity of IQL and solves the implicit policy-finding problem. Experimental results on D4RL datasets show that our method achieves competitive or superior results compared with other SOTA offline RL methods. Especially in complex sparse reward tasks like Antmaze and Adroit, our method outperforms IQL and IDQL by a significant margin. | [
"['Longxiang He' 'Li Shen' 'Junbo Tan' 'Xueqian Wang']"
] |
null | null | 2405.18190 | null | null | http://arxiv.org/pdf/2405.18190v1 | 2024-05-28T14:02:44Z | 2024-05-28T14:02:44Z | Mutation-Bias Learning in Games | We present two variants of a multi-agent reinforcement learning algorithm based on evolutionary game theoretic considerations. The intentional simplicity of one variant enables us to prove results on its relationship to a system of ordinary differential equations of replicator-mutator dynamics type, allowing us to present proofs on the algorithm's convergence conditions in various settings via its ODE counterpart. The more complicated variant enables comparisons to Q-learning based algorithms. We compare both variants experimentally to WoLF-PHC and frequency-adjusted Q-learning on a range of settings, illustrating cases of increasing dimensionality where our variants preserve convergence in contrast to more complicated algorithms. The availability of analytic results provides a degree of transferability of results as compared to purely empirical case studies, illustrating the general utility of a dynamical systems perspective on multi-agent reinforcement learning when addressing questions of convergence and reliable generalisation. | [
"['Johann Bauer' 'Sheldon West' 'Eduardo Alonso' 'Mark Broom']"
] |
null | null | 2405.18193 | null | null | http://arxiv.org/pdf/2405.18193v1 | 2024-05-28T14:03:52Z | 2024-05-28T14:03:52Z | In-Context Symmetries: Self-Supervised Learning through Contextual World
Models | At the core of self-supervised learning for vision is the idea of learning invariant or equivariant representations with respect to a set of data transformations. This approach, however, introduces strong inductive biases, which can render the representations fragile in downstream tasks that do not conform to these symmetries. In this work, drawing insights from world models, we propose to instead learn a general representation that can adapt to be invariant or equivariant to different transformations by paying attention to context -- a memory module that tracks task-specific states, actions, and future states. Here, the action is the transformation, while the current and future states respectively represent the input's representation before and after the transformation. Our proposed algorithm, Contextual Self-Supervised Learning (ContextSSL), learns equivariance to all transformations (as opposed to invariance). In this way, the model can learn to encode all relevant features as general representations while having the versatility to tailor down to task-wise symmetries when given a few examples as the context. Empirically, we demonstrate significant performance gains over existing methods on equivariance-related tasks, supported by both qualitative and quantitative evaluations. | [
"['Sharut Gupta' 'Chenyu Wang' 'Yifei Wang' 'Tommi Jaakkola'\n 'Stefanie Jegelka']"
] |
null | null | 2405.18194 | null | null | http://arxiv.org/pdf/2405.18194v2 | 2024-05-29T10:01:43Z | 2024-05-28T14:04:09Z | Delving into Differentially Private Transformer | Deep learning with differential privacy (DP) has garnered significant attention over the past years, leading to the development of numerous methods aimed at enhancing model accuracy and training efficiency. This paper delves into the problem of training Transformer models with differential privacy. Our treatment is modular: the logic is to `reduce' the problem of training DP Transformer to the more basic problem of training DP vanilla neural nets. The latter is better understood and amenable to many model-agnostic methods. Such `reduction' is done by first identifying the hardness unique to DP Transformer training: the attention distraction phenomenon and a lack of compatibility with existing techniques for efficient gradient clipping. To deal with these two issues, we propose the Re-Attention Mechanism and Phantom Clipping, respectively. We believe that our work not only casts new light on training DP Transformers but also promotes a modular treatment to advance research in the field of differentially private deep learning. | [
"['Youlong Ding' 'Xueyang Wu' 'Yining Meng' 'Yonggang Luo' 'Hao Wang'\n 'Weike Pan']"
] |
null | null | 2405.18196 | null | null | http://arxiv.org/pdf/2405.18196v1 | 2024-05-28T14:06:10Z | 2024-05-28T14:06:10Z | Render and Diffuse: Aligning Image and Action Spaces for Diffusion-based
Behaviour Cloning | In the field of Robot Learning, the complex mapping between high-dimensional observations such as RGB images and low-level robotic actions, two inherently very different spaces, constitutes a complex learning problem, especially with limited amounts of data. In this work, we introduce Render and Diffuse (R&D) a method that unifies low-level robot actions and RGB observations within the image space using virtual renders of the 3D model of the robot. Using this joint observation-action representation it computes low-level robot actions using a learnt diffusion process that iteratively updates the virtual renders of the robot. This space unification simplifies the learning problem and introduces inductive biases that are crucial for sample efficiency and spatial generalisation. We thoroughly evaluate several variants of R&D in simulation and showcase their applicability on six everyday tasks in the real world. Our results show that R&D exhibits strong spatial generalisation capabilities and is more sample efficient than more common image-to-action methods. | [
"['Vitalis Vosylius' 'Younggyo Seo' 'Jafar Uruç' 'Stephen James']"
] |
null | null | 2405.18199 | null | null | http://arxiv.org/pdf/2405.18199v1 | 2024-05-28T14:08:04Z | 2024-05-28T14:08:04Z | Adam with model exponential moving average is effective for nonconvex
optimization | In this work, we offer a theoretical analysis of two modern optimization techniques for training large and complex models: (i) adaptive optimization algorithms, such as Adam, and (ii) the model exponential moving average (EMA). Specifically, we demonstrate that a clipped version of Adam with model EMA achieves the optimal convergence rates in various nonconvex optimization settings, both smooth and nonsmooth. Moreover, when the scale varies significantly across different coordinates, we demonstrate that the coordinate-wise adaptivity of Adam is provably advantageous. Notably, unlike previous analyses of Adam, our analysis crucially relies on its core elements -- momentum and discounting factors -- as well as model EMA, motivating their wide applications in practice. | [
"['Kwangjun Ahn' 'Ashok Cutkosky']"
] |
null | null | 2405.18202 | null | null | http://arxiv.org/pdf/2405.18202v1 | 2024-05-28T14:10:51Z | 2024-05-28T14:10:51Z | IM-Context: In-Context Learning for Imbalanced Regression Tasks | Regression models often fail to generalize effectively in regions characterized by highly imbalanced label distributions. Previous methods for deep imbalanced regression rely on gradient-based weight updates, which tend to overfit in underrepresented regions. This paper proposes a paradigm shift towards in-context learning as an effective alternative to conventional in-weight learning methods, particularly for addressing imbalanced regression. In-context learning refers to the ability of a model to condition itself, given a prompt sequence composed of in-context samples (input-label pairs) alongside a new query input to generate predictions, without requiring any parameter updates. In this paper, we study the impact of the prompt sequence on the model performance from both theoretical and empirical perspectives. We emphasize the importance of localized context in reducing bias within regions of high imbalance. Empirical evaluations across a variety of real-world datasets demonstrate that in-context learning substantially outperforms existing in-weight learning methods in scenarios with high levels of imbalance. | [
"['Ismail Nejjar' 'Faez Ahmed' 'Olga Fink']"
] |
null | null | 2405.18206 | null | null | http://arxiv.org/pdf/2405.18206v1 | 2024-05-28T14:12:25Z | 2024-05-28T14:12:25Z | Multi-CATE: Multi-Accurate Conditional Average Treatment Effect
Estimation Robust to Unknown Covariate Shifts | Estimating heterogeneous treatment effects is important to tailor treatments to those individuals who would most likely benefit. However, conditional average treatment effect predictors may often be trained on one population but possibly deployed on different, possibly unknown populations. We use methodology for learning multi-accurate predictors to post-process CATE T-learners (differenced regressions) to become robust to unknown covariate shifts at the time of deployment. The method works in general for pseudo-outcome regression, such as the DR-learner. We show how this approach can combine (large) confounded observational and (smaller) randomized datasets by learning a confounded predictor from the observational dataset, and auditing for multi-accuracy on the randomized controlled trial. We show improvements in bias and mean squared error in simulations with increasingly larger covariate shift, and on a semi-synthetic case study of a parallel large observational study and smaller randomized controlled experiment. Overall, we establish a connection between methods developed for multi-distribution learning and achieve appealing desiderata (e.g. external validity) in causal inference and machine learning. | [
"['Christoph Kern' 'Michael Kim' 'Angela Zhou']"
] |
null | null | 2405.18208 | null | null | http://arxiv.org/pdf/2405.18208v1 | 2024-05-28T14:13:32Z | 2024-05-28T14:13:32Z | A Human-Like Reasoning Framework for Multi-Phases Planning Task with
Large Language Models | Recent studies have highlighted the proficiency of large language models (LLMs) in some simple tasks like writing and coding through various reasoning strategies. However, LLM agents still struggle with tasks that require comprehensive planning, a process that challenges current models and remains a critical research issue. In this study, we concentrate on travel planning, a Multi-Phases planning problem that involves multiple interconnected stages, such as outlining, information gathering, and planning, often characterized by the need to manage various constraints and uncertainties. Existing reasoning approaches have struggled to effectively address this complex task. Our research aims to address this challenge by developing a human-like planning framework for LLM agents, i.e., guiding the LLM agent to simulate various steps that humans take when solving Multi-Phases problems. Specifically, we implement several strategies to enable LLM agents to generate a coherent outline for each travel query, mirroring human planning patterns. Additionally, we integrate Strategy Block and Knowledge Block into our framework: Strategy Block facilitates information collection, while Knowledge Block provides essential information for detailed planning. Through our extensive experiments, we demonstrate that our framework significantly improves the planning capabilities of LLM agents, enabling them to tackle the travel planning task with improved efficiency and effectiveness. Our experimental results showcase the exceptional performance of the proposed framework; when combined with GPT-4-Turbo, it attains $10\times$ the performance gains in comparison to the baseline framework deployed on GPT-4-Turbo. | [
"['Chengxing Xie' 'Difan Zou']"
] |
null | null | 2405.18209 | null | null | http://arxiv.org/pdf/2405.18209v1 | 2024-05-28T14:15:18Z | 2024-05-28T14:15:18Z | Safe Multi-Agent Reinforcement Learning with Bilevel Optimization in
Autonomous Driving | Ensuring safety in MARL, particularly when deploying it in real-world applications such as autonomous driving, emerges as a critical challenge. To address this challenge, traditional safe MARL methods extend MARL approaches to incorporate safety considerations, aiming to minimize safety risk values. However, these safe MARL algorithms often fail to model other agents and lack convergence guarantees, particularly in dynamically complex environments. In this study, we propose a safe MARL method grounded in a Stackelberg model with bi-level optimization, for which convergence analysis is provided. Derived from our theoretical analysis, we develop two practical algorithms, namely Constrained Stackelberg Q-learning (CSQ) and Constrained Stackelberg Multi-Agent Deep Deterministic Policy Gradient (CS-MADDPG), designed to facilitate MARL decision-making in autonomous driving applications. To evaluate the effectiveness of our algorithms, we developed a safe MARL autonomous driving benchmark and conducted experiments on challenging autonomous driving scenarios, such as merges, roundabouts, intersections, and racetracks. The experimental results indicate that our algorithms, CSQ and CS-MADDPG, outperform several strong MARL baselines, such as Bi-AC, MACPO, and MAPPO-L, regarding reward and safety performance. The demos and source code are available at {https://github.com/SafeRL-Lab/Safe-MARL-in-Autonomous-Driving.git}. | [
"['Zhi Zheng' 'Shangding Gu']"
] |
null | null | 2405.18217 | null | null | http://arxiv.org/pdf/2405.18217v1 | 2024-05-28T14:20:49Z | 2024-05-28T14:20:49Z | Understanding Inter-Concept Relationships in Concept-Based Models | Concept-based explainability methods provide insight into deep learning systems by constructing explanations using human-understandable concepts. While the literature on human reasoning demonstrates that we exploit relationships between concepts when solving tasks, it is unclear whether concept-based methods incorporate the rich structure of inter-concept relationships. We analyse the concept representations learnt by concept-based models to understand whether these models correctly capture inter-concept relationships. First, we empirically demonstrate that state-of-the-art concept-based models produce representations that lack stability and robustness, and such methods fail to capture inter-concept relationships. Then, we develop a novel algorithm which leverages inter-concept relationships to improve concept intervention accuracy, demonstrating how correctly capturing inter-concept relationships can improve downstream tasks. | [
"['Naveen Raman' 'Mateo Espinosa Zarlenga' 'Mateja Jamnik']"
] |
null | null | 2405.18218 | null | null | http://arxiv.org/pdf/2405.18218v1 | 2024-05-28T14:21:15Z | 2024-05-28T14:21:15Z | FinerCut: Finer-grained Interpretable Layer Pruning for Large Language
Models | Overparametrized transformer networks are the state-of-the-art architecture for Large Language Models (LLMs). However, such models contain billions of parameters making large compute a necessity, while raising environmental concerns. To address these issues, we propose FinerCut, a new form of fine-grained layer pruning, which in contrast to prior work at the transformer block level, considers all self-attention and feed-forward network (FFN) layers within blocks as individual pruning candidates. FinerCut prunes layers whose removal causes minimal alternation to the model's output -- contributing to a new, lean, interpretable, and task-agnostic pruning method. Tested across 9 benchmarks, our approach retains 90% performance of Llama3-8B with 25% layers removed, and 95% performance of Llama3-70B with 30% layers removed, all without fine-tuning or post-pruning reconstruction. Strikingly, we observe intriguing results with FinerCut: 42% (34 out of 80) of the self-attention layers in Llama3-70B can be removed while preserving 99% of its performance -- without additional fine-tuning after removal. Moreover, FinerCut provides a tool to inspect the types and locations of pruned layers, allowing to observe interesting pruning behaviors. For instance, we observe a preference for pruning self-attention layers, often at deeper consecutive decoder layers. We hope our insights inspire future efficient LLM architecture designs. | [
"['Yang Zhang' 'Yawei Li' 'Xinpeng Wang' 'Qianli Shen' 'Barbara Plank'\n 'Bernd Bischl' 'Mina Rezaei' 'Kenji Kawaguchi']"
] |
null | null | 2405.18220 | null | null | http://arxiv.org/pdf/2405.18220v1 | 2024-05-28T14:28:28Z | 2024-05-28T14:28:28Z | Non-negative Tensor Mixture Learning for Discrete Density Estimation | We present an expectation-maximization (EM) based unified framework for non-negative tensor decomposition that optimizes the Kullback-Leibler divergence. To avoid iterations in each M-step and learning rate tuning, we establish a general relationship between low-rank decomposition and many-body approximation. Using this connection, we exploit that the closed-form solution of the many-body approximation can be used to update all parameters simultaneously in the M-step. Our framework not only offers a unified methodology for a variety of low-rank structures, including CP, Tucker, and Train decompositions, but also their combinations forming mixtures of tensors as well as robust adaptive noise modeling. Empirically, we demonstrate that our framework provides superior generalization for discrete density estimation compared to conventional tensor-based approaches. | [
"['Kazu Ghalamkari' 'Jesper Løve Hinrich' 'Morten Mørup']"
] |
null | null | 2405.18221 | null | null | http://arxiv.org/pdf/2405.18221v1 | 2024-05-28T14:29:31Z | 2024-05-28T14:29:31Z | Recurrent Natural Policy Gradient for POMDPs | In this paper, we study a natural policy gradient method based on recurrent neural networks (RNNs) for partially-observable Markov decision processes, whereby RNNs are used for policy parameterization and policy evaluation to address the curse of dimensionality in non-Markovian reinforcement learning. We present finite-time and finite-width analyses for both the critic (recurrent temporal difference learning), and the correspondingly-operated recurrent natural policy gradient method in the near-initialization regime. Our analysis demonstrates the efficiency of RNNs for problems with short-term memory with explicit bounds on the required network widths and sample complexity, and points out the challenges in the case of long-term dependencies. | [
"['Semih Cayci' 'Atilla Eryilmaz']"
] |
null | null | 2405.18222 | null | null | http://arxiv.org/pdf/2405.18222v1 | 2024-05-28T14:30:07Z | 2024-05-28T14:30:07Z | From Learning to Optimize to Learning Optimization Algorithms | Towards designing learned optimization algorithms that are usable beyond their training setting, we identify key principles that classical algorithms obey but have, up to now, not been used for Learning to Optimize (L2O). Following these principles, we provide a general design pipeline, taking into account data, architecture and learning strategy, and thereby enabling a synergy between classical optimization and L2O, resulting in a philosophy of Learning Optimization Algorithms. As a consequence, our learned algorithms perform well far beyond problems from the training distribution. We demonstrate the success of these novel principles by designing a new learning-enhanced BFGS algorithm and provide numerical experiments evidencing its adaptation to many settings at test time. | [
"['Camille Castera' 'Peter Ochs']"
] |
null | null | 2405.18236 | null | null | http://arxiv.org/pdf/2405.18236v2 | 2024-07-04T09:37:24Z | 2024-05-28T14:46:03Z | Position Paper: Think Globally, React Locally -- Bringing Real-time
Reference-based Website Phishing Detection on macOS | Background. The recent surge in phishing attacks keeps undermining the effectiveness of the traditional anti-phishing blacklist approaches. On-device anti-phishing solutions are gaining popularity as they offer faster phishing detection locally. Aim. We aim to eliminate the delay in recognizing and recording phishing campaigns in databases via on-device solutions that identify phishing sites immediately when encountered by the user rather than waiting for a web crawler's scan to finish. Additionally, utilizing operating system-specific resources and frameworks, we aim to minimize the impact on system performance and depend on local processing to protect user privacy. Method. We propose a phishing detection solution that uses a combination of computer vision and on-device machine learning models to analyze websites in real time. Our reference-based approach analyzes the visual content of webpages, identifying phishing attempts through layout analysis, credential input areas detection, and brand impersonation criteria combination. Results. Our case study shows it's feasible to perform background processing on-device continuously, for the case of the web browser requiring the resource use of 16% of a single CPU core and less than 84MB of RAM on Apple M1 while maintaining the accuracy of brand logo detection at 46.6% (comparable with baselines), and of Credential Requiring Page detection at 98.1% (improving the baseline by 3.1%), within the test dataset. Conclusions. Our results demonstrate the potential of on-device, real-time phishing detection systems to enhance cybersecurity defensive technologies and extend the scope of phishing detection to more similar regions of interest, e.g., email clients and messenger windows. | [
"['Ivan Petrukha' 'Nataliia Stulova' 'Sergii Kryvoblotskyi']"
] |
null | null | 2405.18237 | null | null | http://arxiv.org/pdf/2405.18237v2 | 2024-06-04T00:56:22Z | 2024-05-28T14:46:20Z | Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear
Regression | We study the trajectory of iterations and the convergence rates of the Expectation-Maximization (EM) algorithm for two-component Mixed Linear Regression (2MLR). The fundamental goal of MLR is to learn the regression models from unlabeled observations. The EM algorithm finds extensive applications in solving the mixture of linear regressions. Recent results have established the super-linear convergence of EM for 2MLR in the noiseless and high SNR settings under some assumptions and its global convergence rate with random initialization has been affirmed. However, the exponent of convergence has not been theoretically estimated and the geometric properties of the trajectory of EM iterations are not well-understood. In this paper, first, using Bessel functions we provide explicit closed-form expressions for the EM updates under all SNR regimes. Then, in the noiseless setting, we completely characterize the behavior of EM iterations by deriving a recurrence relation at the population level and notably show that all the iterations lie on a certain cycloid. Based on this new trajectory-based analysis, we exhibit the theoretical estimate for the exponent of super-linear convergence and further improve the statistical error bound at the finite-sample level. Our analysis provides a new framework for studying the behavior of EM for Mixed Linear Regression. | [
"['Zhankun Luo' 'Abolfazl Hashemi']"
] |
null | null | 2405.18253 | null | null | http://arxiv.org/pdf/2405.18253v1 | 2024-05-28T15:04:17Z | 2024-05-28T15:04:17Z | Truthful Dataset Valuation by Pointwise Mutual Information | A common way to evaluate a dataset in ML involves training a model on this dataset and assessing the model's performance on a test set. However, this approach has two issues: (1) it may incentivize undesirable data manipulation in data marketplaces, as the self-interested data providers seek to modify the dataset to maximize their evaluation scores; (2) it may select datasets that overfit to potentially small test sets. We propose a new data valuation method that provably guarantees the following: data providers always maximize their expected score by truthfully reporting their observed data. Any manipulation of the data, including but not limited to data duplication, adding random data, data removal, or re-weighting data from different groups, cannot increase their expected score. Our method, following the paradigm of proper scoring rules, measures the pointwise mutual information (PMI) of the test dataset and the evaluated dataset. However, computing the PMI of two datasets is challenging. We introduce a novel PMI measuring method that greatly improves tractability within Bayesian machine learning contexts. This is accomplished through a new characterization of PMI that relies solely on the posterior probabilities of the model parameter at an arbitrarily selected value. Finally, we support our theoretical results with simulations and further test the effectiveness of our data valuation method in identifying the top datasets among multiple data providers. Interestingly, our method outperforms the standard approach of selecting datasets based on the trained model's test performance, suggesting that our truthful valuation score can also be more robust to overfitting. | [
"['Shuran Zheng' 'Yongchan Kwon' 'Xuan Qi' 'James Zou']"
] |
null | null | 2405.18267 | null | null | http://arxiv.org/pdf/2405.18267v2 | 2024-07-12T19:17:42Z | 2024-05-28T15:17:58Z | CT-based brain ventricle segmentation via diffusion Schrödinger Bridge
without target domain ground truths | Efficient and accurate brain ventricle segmentation from clinical CT scans is critical for emergency surgeries like ventriculostomy. With the challenges in poor soft tissue contrast and a scarcity of well-annotated databases for clinical brain CTs, we introduce a novel uncertainty-aware ventricle segmentation technique without the need of CT segmentation ground truths by leveraging diffusion-model-based domain adaptation. Specifically, our method employs the diffusion Schrödinger Bridge and an attention recurrent residual U-Net to capitalize on unpaired CT and MRI scans to derive automatic CT segmentation from those of the MRIs, which are more accessible. Importantly, we propose an end-to-end, joint training framework of image translation and segmentation tasks, and demonstrate its benefit over training individual tasks separately. By comparing the proposed method against similar setups using two different GAN models for domain adaptation (CycleGAN and CUT), we also reveal the advantage of diffusion models towards improved segmentation and image translation quality. With a Dice score of 0.78$\pm$0.27, our proposed method outperformed the compared methods, including SynSeg-Net, while providing intuitive uncertainty measures to further facilitate quality control of the automatic segmentation outcomes. The implementation of our proposed method is available at: https://github.com/HealthX-Lab/DiffusionSynCTSeg. | [
"['Reihaneh Teimouri' 'Marta Kersten-Oertel' 'Yiming Xiao']"
] |
null | null | 2405.18273 | null | null | http://arxiv.org/pdf/2405.18273v1 | 2024-05-28T15:24:30Z | 2024-05-28T15:24:30Z | Synchronization on circles and spheres with nonlinear interactions | We consider the dynamics of $n$ points on a sphere in $\mathbb{R}^d$ ($d \geq 2$) which attract each other according to a function $\varphi$ of their inner products. When $\varphi$ is linear ($\varphi(t) = t$), the points converge to a common value (i.e., synchronize) in various connectivity scenarios: this is part of classical work on Kuramoto oscillator networks. When $\varphi$ is exponential ($\varphi(t) = e^{\beta t}$), these dynamics correspond to a limit of how idealized transformers process data, as described by Geshkovski et al. (2024). Accordingly, they ask whether synchronization occurs for exponential $\varphi$. In the context of consensus for multi-agent control, Markdahl et al. (2018) show that for $d \geq 3$ (spheres), if the interaction graph is connected and $\varphi$ is increasing and convex, then the system synchronizes. What is the situation on circles ($d=2$)? First, we show that $\varphi$ being increasing and convex is no longer sufficient. Then we identify a new condition (that the Taylor coefficients of $\varphi'$ are decreasing) under which we do have synchronization on the circle. In so doing, we provide some answers to the open problems posed by Geshkovski et al. (2024). | [
"['Christopher Criscitiello' 'Quentin Rebjock' 'Andrew D. McRae'\n 'Nicolas Boumal']"
] |
null | null | 2405.18274 | null | null | http://arxiv.org/pdf/2405.18274v1 | 2024-05-28T15:24:35Z | 2024-05-28T15:24:35Z | Signal-Plus-Noise Decomposition of Nonlinear Spiked Random Matrix Models | In this paper, we study a nonlinear spiked random matrix model where a nonlinear function is applied element-wise to a noise matrix perturbed by a rank-one signal. We establish a signal-plus-noise decomposition for this model and identify precise phase transitions in the structure of the signal components at critical thresholds of signal strength. To demonstrate the applicability of this decomposition, we then utilize it to study new phenomena in the problems of signed signal recovery in nonlinear models and community detection in transformed stochastic block models. Finally, we validate our results through a series of numerical simulations. | [
"['Behrad Moniri' 'Hamed Hassani']"
] |
null | null | 2405.18278 | null | null | http://arxiv.org/pdf/2405.18278v1 | 2024-05-28T15:29:40Z | 2024-05-28T15:29:40Z | NotPlaNET: Removing False Positives from Planet Hunters TESS with
Machine Learning | Differentiating between real transit events and false positive signals in photometric time series data is a bottleneck in the identification of transiting exoplanets, particularly long-period planets. This differentiation typically requires visual inspection of a large number of transit-like signals to rule out instrumental and astrophysical false positives that mimic planetary transit signals. We build a one-dimensional convolutional neural network (CNN) to separate eclipsing binaries and other false positives from potential planet candidates, reducing the number of light curves that require human vetting. Our CNN is trained using the TESS light curves that were identified by Planet Hunters citizen scientists as likely containing a transit. We also include the background flux and centroid information. The light curves are visually inspected and labeled by project scientists and are minimally pre-processed, with only normalization and data augmentation taking place before training. The median percentage of contaminants flagged across the test sectors is 18% with a maximum of 37% and a minimum of 10%. Our model keeps 100% of the planets for 16 of the 18 test sectors, while incorrectly flagging one planet candidate (0.3%) for one sector and two (0.6%) for the remaining sector. Our method shows potential to reduce the number of light curves requiring manual vetting by up to a third with minimal misclassification of planet candidates. | [
"['Valentina Tardugno Poleo' 'Nora Eisner' 'David W. Hogg']"
] |
null | null | 2405.18281 | null | null | http://arxiv.org/pdf/2405.18281v1 | 2024-05-28T15:34:33Z | 2024-05-28T15:34:33Z | MODL: Multilearner Online Deep Learning | Online deep learning solves the problem of learning from streams of data, reconciling two opposing objectives: learn fast and learn deep. Existing work focuses almost exclusively on exploring pure deep learning solutions, which are much better suited to handle the "deep" than the "fast" part of the online learning equation. In our work, we propose a different paradigm, based on a hybrid multilearner approach. First, we develop a fast online logistic regression learner. This learner does not rely on backpropagation. Instead, it uses closed form recursive updates of model parameters, handling the fast learning part of the online learning problem. We then analyze the existing online deep learning theory and show that the widespread ODL approach, currently operating at complexity $O(L^2)$ in terms of the number of layers $L$, can be equivalently implemented in $O(L)$ complexity. This further leads us to the cascaded multilearner design, in which multiple shallow and deep learners are co-trained to solve the online learning problem in a cooperative, synergistic fashion. We show that this approach achieves state-of-the-art results on common online learning datasets, while also being able to handle missing features gracefully. Our code is publicly available at https://github.com/AntonValk/MODL. | [
"['Antonios Valkanas' 'Boris N. Oreshkin' 'Mark Coates']"
] |
null | null | 2405.18284 | null | null | http://arxiv.org/pdf/2405.18284v2 | 2024-06-01T16:22:14Z | 2024-05-28T15:36:48Z | Adaptive debiased SGD in high-dimensional GLMs with streaming data | Online statistical inference facilitates real-time analysis of sequentially collected data, making it different from traditional methods that rely on static datasets. This paper introduces a novel approach to online inference in high-dimensional generalized linear models, where we update regression coefficient estimates and their standard errors upon each new data arrival. In contrast to existing methods that either require full dataset access or large-dimensional summary statistics storage, our method operates in a single-pass mode, significantly reducing both time and space complexity. The core of our methodological innovation lies in an adaptive stochastic gradient descent algorithm tailored for dynamic objective functions, coupled with a novel online debiasing procedure. This allows us to maintain low-dimensional summary statistics while effectively controlling optimization errors introduced by the dynamically changing loss functions. We demonstrate that our method, termed the Approximated Debiased Lasso (ADL), not only mitigates the need for the bounded individual probability condition but also significantly improves numerical performance. Numerical experiments demonstrate that the proposed ADL method consistently exhibits robust performance across various covariance matrix structures. | [
"['Ruijian Han' 'Lan Luo' 'Yuanhang Luo' 'Yuanyuan Lin' 'Jian Huang']"
] |
null | null | 2405.18289 | null | null | http://arxiv.org/pdf/2405.18289v1 | 2024-05-28T15:42:45Z | 2024-05-28T15:42:45Z | Highway Reinforcement Learning | Learning from multi-step off-policy data collected by a set of policies is a core problem of reinforcement learning (RL). Approaches based on importance sampling (IS) often suffer from large variances due to products of IS ratios. Typical IS-free methods, such as $n$-step Q-learning, look ahead for $n$ time steps along the trajectory of actions (where $n$ is called the lookahead depth) and utilize off-policy data directly without any additional adjustment. They work well for proper choices of $n$. We show, however, that such IS-free methods underestimate the optimal value function (VF), especially for large $n$, restricting their capacity to efficiently utilize information from distant future time steps. To overcome this problem, we introduce a novel, IS-free, multi-step off-policy method that avoids the underestimation issue and converges to the optimal VF. At its core lies a simple but non-trivial \emph{highway gate}, which controls the information flow from the distant future by comparing it to a threshold. The highway gate guarantees convergence to the optimal VF for arbitrary $n$ and arbitrary behavioral policies. It gives rise to a novel family of off-policy RL algorithms that safely learn even when $n$ is very large, facilitating rapid credit assignment from the far future to the past. On tasks with greatly delayed rewards, including video games where the reward is given only at the end of the game, our new methods outperform many existing multi-step off-policy algorithms. | [
"['Yuhui Wang' 'Miroslav Strupl' 'Francesco Faccio' 'Qingyuan Wu'\n 'Haozhe Liu' 'Michał Grudzień' 'Xiaoyang Tan' 'Jürgen Schmidhuber']"
] |
null | null | 2405.18291 | null | null | http://arxiv.org/pdf/2405.18291v1 | 2024-05-28T15:43:29Z | 2024-05-28T15:43:29Z | FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in
Federated Learning | Collaborative fairness stands as an essential element in federated learning to encourage client participation by equitably distributing rewards based on individual contributions. Existing methods primarily focus on adjusting gradient allocations among clients to achieve collaborative fairness. However, they frequently overlook crucial factors such as maintaining consistency across local models and catering to the diverse requirements of high-contributing clients. This oversight inevitably decreases both fairness and model accuracy in practice. To address these issues, we propose FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness, backed by a theoretical convergence guarantee. First, we present the concept of "bounded collaborative fairness (BCF)", which ensures fairness by tailoring rewards to individual clients based on their contributions. Second, to implement the BCF, we design a submodel allocation module with a theoretical guarantee of fairness. This module incentivizes high-contributing clients with high-performance submodels containing a diverse range of crucial neurons, thereby preserving consistency across local models. Third, we further develop a dynamic aggregation module to adaptively aggregate submodels, ensuring the equitable treatment of low-frequency neurons and consequently enhancing overall model accuracy. Extensive experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy. We see this work as a significant step towards incentivizing broader client participation in federated learning. The source code is available at https://github.com/wangzihuixmu/FedSAC. | [
"['Zihui Wang' 'Zheng Wang' 'Lingjuan Lyu' 'Zhaopeng Peng' 'Zhicheng Yang'\n 'Chenglu Wen' 'Rongshan Yu' 'Cheng Wang' 'Xiaoliang Fan']"
] |
null | null | 2405.18293 | null | null | http://arxiv.org/pdf/2405.18293v2 | 2024-06-03T15:07:01Z | 2024-05-28T15:48:27Z | CF-OPT: Counterfactual Explanations for Structured Prediction | Optimization layers in deep neural networks have enjoyed a growing popularity in structured learning, improving the state of the art on a variety of applications. Yet, these pipelines lack interpretability since they are made of two opaque layers: a highly non-linear prediction model, such as a deep neural network, and an optimization layer, which is typically a complex black-box solver. Our goal is to improve the transparency of such methods by providing counterfactual explanations. We build upon variational autoencoders to obtain counterfactuals in a principled way: working in the latent space leads to a natural notion of plausibility of explanations. We finally introduce a variant of the classic loss for VAE training that improves their performance in our specific structured context. These provide the foundations of CF-OPT, a first-order optimization algorithm that can find counterfactual explanations for a broad class of structured learning architectures. Our numerical results show that both close and plausible explanations can be obtained for problems from the recent literature. | [
"['Germain Vivier-Ardisson' 'Alexandre Forel' 'Axel Parmentier'\n 'Thibaut Vidal']"
] |
null | null | 2405.18296 | null | null | http://arxiv.org/pdf/2405.18296v1 | 2024-05-28T15:50:10Z | 2024-05-28T15:50:10Z | Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD
Training | Machine learning systems often acquire biases by leveraging undesired features in the data, impacting accuracy variably across different sub-populations. Current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup modeling different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setting, which we prove to be exact in high dimension. Notably, our analysis reveals how different properties of sub-populations influence bias at different timescales, showing a shifting preference of the classifier during training. Applying our findings to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real datasets, including CIFAR10, MNIST, and CelebA. | [
"['Anchit Jain' 'Rozhin Nobahari' 'Aristide Baratin'\n 'Stefano Sarao Mannelli']"
] |
null | null | 2405.18298 | null | null | http://arxiv.org/pdf/2405.18298v1 | 2024-05-28T15:50:50Z | 2024-05-28T15:50:50Z | Context-Specific Refinements of Bayesian Network Classifiers | Supervised classification is one of the most ubiquitous tasks in machine learning. Generative classifiers based on Bayesian networks are often used because of their interpretability and competitive accuracy. The widely used naive and TAN classifiers are specific instances of Bayesian network classifiers with a constrained underlying graph. This paper introduces novel classes of generative classifiers extending TAN and other famous types of Bayesian network classifiers. Our approach is based on staged tree models, which extend Bayesian networks by allowing for complex, context-specific patterns of dependence. We formally study the relationship between our novel classes of classifiers and Bayesian networks. We introduce and implement data-driven learning routines for our models and investigate their accuracy in an extensive computational study. The study demonstrates that models embedding asymmetric information can enhance classification accuracy. | [
"['Manuele Leonelli' 'Gherardo Varando']"
] |
null | null | 2405.18299 | null | null | http://arxiv.org/pdf/2405.18299v1 | 2024-05-28T15:51:18Z | 2024-05-28T15:51:18Z | Deep Learning Innovations for Underwater Waste Detection: An In-Depth
Analysis | Addressing the issue of submerged underwater trash is crucial for safeguarding aquatic ecosystems and preserving marine life. While identifying debris present on the surface of water bodies is straightforward, assessing submerged underwater waste is challenging due to image distortions caused by factors such as light refraction, absorption, suspended particles, color shifts, and occlusion. This paper conducts a comprehensive review of state-of-the-art architectures and existing datasets to establish a baseline for submerged waste and trash detection. The primary goal is to establish a benchmark for the object localization techniques to be leveraged by advanced underwater sensors and autonomous underwater vehicles. The ultimate objective is to explore the underwater environment and to identify and remove underwater debris. The absence of benchmarks (dataset or algorithm) in much of the existing research emphasizes the need for a more robust algorithmic solution. Through this research, we aim to provide a comparative performance analysis of various underwater trash detection algorithms. | [
"['Jaskaran Singh Walia' 'Pavithra L K']"
] |
null | null | 2405.18306 | null | null | http://arxiv.org/pdf/2405.18306v1 | 2024-05-28T16:00:23Z | 2024-05-28T16:00:23Z | Learning Staged Trees from Incomplete Data | Staged trees are probabilistic graphical models capable of representing any class of non-symmetric independence via a coloring of its vertices. Several structural learning routines have been defined and implemented to learn staged trees from data, under the frequentist or Bayesian paradigm. They assume a data set has been observed fully and, in practice, observations with missing entries are either dropped or imputed before learning the model. Here, we introduce the first algorithms for staged trees that handle missingness within the learning of the model. To this end, we characterize the likelihood of staged tree models in the presence of missing data and discuss pseudo-likelihoods that approximate it. A structural expectation-maximization algorithm estimating the model directly from the full likelihood is also implemented and evaluated. A computational experiment showcases the performance of the novel learning algorithms, demonstrating that it is feasible to account for different missingness patterns when learning staged trees. | [
"['Jack Storror Carter' 'Manuele Leonelli' 'Eva Riccomagno'\n 'Gherardo Varando']"
] |
null | null | 2405.18311 | null | null | http://arxiv.org/pdf/2405.18311v1 | 2024-05-28T16:02:11Z | 2024-05-28T16:02:11Z | Deterministic and statistical calibration of constitutive models from
full-field data with parametric physics-informed neural networks | The calibration of constitutive models from full-field data has recently gained increasing interest due to improvements in full-field measurement capabilities. In addition to the experimental characterization of novel materials, continuous structural health monitoring is another application that is of great interest. However, monitoring is usually associated with severe time constraints, difficult to meet with standard numerical approaches. Therefore, parametric physics-informed neural networks (PINNs) for constitutive model calibration from full-field displacement data are investigated. In an offline stage, a parametric PINN can be trained to learn a parameterized solution of the underlying partial differential equation. In the subsequent online stage, the parametric PINN then acts as a surrogate for the parameters-to-state map in calibration. We test the proposed approach for the deterministic least-squares calibration of a linear elastic as well as a hyperelastic constitutive model from noisy synthetic displacement data. We further carry out Markov chain Monte Carlo-based Bayesian inference to quantify the uncertainty. A proper statistical evaluation of the results underlines the high accuracy of the deterministic calibration and that the estimated uncertainty is valid. Finally, we consider experimental data and show that the results are in good agreement with a Finite Element Method-based calibration. Due to the fast evaluation of PINNs, calibration can be performed in near real-time. This advantage is particularly evident in many-query applications such as Markov chain Monte Carlo-based Bayesian inference. | [
"['David Anton' 'Jendrik-Alexander Tröger' 'Henning Wessels' 'Ulrich Römer'\n 'Alexander Henkes' 'Stefan Hartmann']"
] |
null | null | 2405.18314 | null | null | http://arxiv.org/pdf/2405.18314v1 | 2024-05-28T16:07:17Z | 2024-05-28T16:07:17Z | Deriving Causal Order from Single-Variable Interventions: Guarantees &
Algorithm | Targeted and uniform interventions to a system are crucial for unveiling causal relationships. While several methods have been developed to leverage interventional data for causal structure learning, their practical application in real-world scenarios often remains challenging. Recent benchmark studies have highlighted these difficulties, even when large numbers of single-variable intervention samples are available. In this work, we demonstrate, both theoretically and empirically, that such datasets contain a wealth of causal information that can be effectively extracted under realistic assumptions about the data distribution. More specifically, we introduce the notion of interventional faithfulness, which relies on comparisons between the marginal distributions of each variable across observational and interventional settings, and we introduce a score on causal orders. Under this assumption, we are able to prove strong theoretical guarantees on the optimum of our score that also hold for large-scale settings. To empirically verify our theory, we introduce Intersort, an algorithm designed to infer the causal order from datasets containing large numbers of single-variable interventions by approximately optimizing our score. Intersort outperforms baselines (GIES, PC and EASE) on almost all simulated data settings replicating common benchmarks in the field. Our proposed novel approach to modeling interventional datasets thus offers a promising avenue for advancing causal inference, highlighting significant potential for further enhancements under realistic assumptions. | [
"['Mathieu Chevalley' 'Patrick Schwab' 'Arash Mehrjou']"
] |
null | null | 2405.18327 | null | null | http://arxiv.org/pdf/2405.18327v1 | 2024-05-28T16:21:20Z | 2024-05-28T16:21:20Z | Histopathology Based AI Model Predicts Anti-Angiogenic Therapy Response
in Renal Cancer Clinical Trial | Predictive biomarkers of treatment response are lacking for metastatic clear cell renal cell carcinoma (ccRCC), a tumor type that is treated with angiogenesis inhibitors, immune checkpoint inhibitors, mTOR inhibitors and a HIF2 inhibitor. The Angioscore, an RNA-based quantification of angiogenesis, is arguably the best candidate to predict anti-angiogenic (AA) response. However, the clinical adoption of transcriptomic assays faces several challenges including standardization, time delay, and high cost. Further, ccRCC tumors are highly heterogeneous, and sampling multiple areas for sequencing is impractical. Here we present a novel deep learning (DL) approach to predict the Angioscore from ubiquitous histopathology slides. To overcome the lack of interpretability, one of the biggest limitations of typical DL models, our model produces a visual vascular network which is the basis of the model's prediction. To test its reliability, we applied this model to multiple cohorts including a clinical trial dataset. Our model accurately predicts the RNA-based Angioscore on multiple independent cohorts (Spearman correlations of 0.77 and 0.73). Further, the predictions help unravel meaningful biology such as the association of angiogenesis with grade, stage, and driver mutation status. Finally, we find our model can predict response to AA therapy in both a real-world cohort and the IMmotion150 clinical trial. The predictive power of our model vastly exceeds that of CD31, a marker of vasculature, and nearly rivals the performance (c-index 0.66 vs 0.67) of the ground truth RNA-based Angioscore at a fraction of the cost. By providing a robust yet interpretable prediction of the Angioscore from histopathology slides alone, our approach offers insights into angiogenesis biology and AA treatment response. | [
"['Jay Jasti' 'Hua Zhong' 'Vandana Panwar' 'Vipul Jarmale' 'Jeffrey Miyata'\n 'Deyssy Carrillo' 'Alana Christie' 'Dinesh Rakheja' 'Zora Modrusan'\n 'Edward Ernest Kadel III' 'Niha Beig' 'Mahrukh Huseni' 'James Brugarolas'\n 'Payal Kapur' 'Satwik Rajaram']"
] |
null | null | 2405.18328 | null | null | http://arxiv.org/pdf/2405.18328v1 | 2024-05-28T16:22:18Z | 2024-05-28T16:22:18Z | Warm Start Marginal Likelihood Optimisation for Iterative Gaussian
Processes | Gaussian processes are a versatile probabilistic machine learning model whose effectiveness often depends on good hyperparameters, which are typically learned by maximising the marginal likelihood. In this work, we consider iterative methods, which use iterative linear system solvers to approximate marginal likelihood gradients up to a specified numerical precision, allowing a trade-off between compute time and accuracy of a solution. We introduce a three-level hierarchy of marginal likelihood optimisation for iterative Gaussian processes, and identify that the computational costs are dominated by solving sequential batches of large positive-definite systems of linear equations. We then propose to amortise computations by reusing solutions of linear system solvers as initialisations in the next step, providing a $\textit{warm start}$. Finally, we discuss the necessary conditions and quantify the consequences of warm starts and demonstrate their effectiveness on regression tasks, where warm starts achieve the same results as the conventional procedure while providing up to a $16\times$ average speed-up among datasets. | [
"['Jihao Andreas Lin' 'Shreyas Padhy' 'Bruno Mlodozeniec'\n 'José Miguel Hernández-Lobato']"
] |
null | null | 2405.18334 | null | null | http://arxiv.org/pdf/2405.18334v3 | 2024-07-01T02:10:50Z | 2024-05-28T16:28:51Z | SketchQL Demonstration: Zero-shot Video Moment Querying with Sketches | In this paper, we will present SketchQL, a video database management system (VDBMS) for retrieving video moments with a sketch-based query interface. This novel interface allows users to specify object trajectory events with simple mouse drag-and-drop operations. Users can use trajectories of single objects as building blocks to compose complex events. Using a pre-trained model that encodes trajectory similarity, SketchQL achieves zero-shot video moments retrieval by performing similarity searches over the video to identify clips that are the most similar to the visual query. In this demonstration, we introduce the graphic user interface of SketchQL and detail its functionalities and interaction mechanisms. We also demonstrate the end-to-end usage of SketchQL from query composition to video moments retrieval using real-world scenarios. | [
"['Renzhi Wu' 'Pramod Chunduri' 'Dristi J Shah' 'Ashmitha Julius Aravind'\n 'Ali Payani' 'Xu Chu' 'Joy Arulraj' 'Kexin Rong']"
] |
null | null | 2405.18335 | null | null | http://arxiv.org/abs/2405.18335v1 | 2024-05-28T16:28:58Z | 2024-05-28T16:28:58Z | Interpretable classification of wiki-review streams | Wiki articles are created and maintained by a crowd of editors, producing a continuous stream of reviews. Reviews can take the form of additions, reverts, or both. This crowdsourcing model is exposed to manipulation since neither reviews nor editors are automatically screened and purged. To protect articles against vandalism or damage, the stream of reviews can be mined to classify reviews and profile editors in real-time. The goal of this work is to anticipate and explain which reviews to revert. This way, editors are informed why their edits will be reverted. The proposed method employs stream-based processing, updating the profiling and classification models on each incoming event. The profiling uses side and content-based features employing Natural Language Processing, and editor profiles are incrementally updated based on their reviews. Since the proposed method relies on self-explainable classification algorithms, it is possible to understand why a review has been classified as a revert or a non-revert. In addition, this work contributes an algorithm for generating synthetic data for class balancing, making the final classification fairer. The proposed online method was tested with a real data set from Wikivoyage, which was balanced through the aforementioned synthetic data generation. The results attained near-90 % values for all evaluation metrics (accuracy, precision, recall, and F-measure). | [
"['Silvia García Méndez' 'Fátima Leal' 'Benedita Malheiro'\n 'Juan Carlos Burguillo Rial']"
] |
null | null | 2405.18347 | null | null | http://arxiv.org/pdf/2405.18347v1 | 2024-05-28T16:43:57Z | 2024-05-28T16:43:57Z | Dataset Growth | Deep learning benefits from the growing abundance of available data. Meanwhile, efficiently dealing with the growing data scale has become a challenge. Publicly available data come from different sources of varying quality, and it is impractical to do manual cleaning against noise and redundancy given today's data scale. There are existing techniques for cleaning/selecting the collected data. However, these methods are mainly proposed for offline settings and target only one of the two problems, cleanliness or redundancy. In practice, data grow exponentially and exhibit both problems. This leads to repeated data curation with sub-optimal efficiency. To tackle this challenge, we propose InfoGrowth, an efficient online algorithm for data cleaning and selection, resulting in a growing dataset that keeps up to date with awareness of cleanliness and diversity. InfoGrowth can improve data quality/efficiency on both single-modal and multi-modal tasks, with an efficient and scalable design. Its framework makes it practical for real-world data engines. | [
"['Ziheng Qin' 'Zhaopan Xu' 'Yukun Zhou' 'Zangwei Zheng' 'Zebang Cheng'\n 'Hao Tang' 'Lei Shang' 'Baigui Sun' 'Xiaojiang Peng' 'Radu Timofte'\n 'Hongxun Yao' 'Kai Wang' 'Yang You']"
] |
null | null | 2405.18351 | null | null | http://arxiv.org/pdf/2405.18351v1 | 2024-05-28T16:49:28Z | 2024-05-28T16:49:28Z | Evaluating Bayesian deep learning for radio galaxy classification | The radio astronomy community is rapidly adopting deep learning techniques to deal with the huge data volumes expected from the next generation of radio observatories. Bayesian neural networks (BNNs) provide a principled way to model uncertainty in the predictions made by such deep learning models and will play an important role in extracting well-calibrated uncertainty estimates on their outputs. In this work, we evaluate the performance of different BNNs against the following criteria: predictive performance, uncertainty calibration and distribution-shift detection for the radio galaxy classification problem. | [
"['Devina Mohan' 'Anna M. M. Scaife']"
] |
null | null | 2405.18353 | null | null | http://arxiv.org/pdf/2405.18353v2 | 2024-06-06T14:32:38Z | 2024-05-28T16:52:52Z | Simulating infinite-dimensional nonlinear diffusion bridges | The diffusion bridge is a type of diffusion process that conditions on hitting a specific state within a finite time period. It has broad applications in fields such as Bayesian inference, financial mathematics, control theory, and shape analysis. However, simulating the diffusion bridge for natural data can be challenging due to both the intractability of the drift term and continuous representations of the data. Although several methods are available to simulate finite-dimensional diffusion bridges, infinite-dimensional cases remain unresolved. In the paper, we present a solution to this problem by merging score-matching techniques with operator learning, enabling a direct approach to score-matching for the infinite-dimensional bridge. We construct the score to be discretization invariant, which is natural given the underlying spatially continuous process. We conduct a series of experiments, ranging from synthetic examples with closed-form solutions to the stochastic nonlinear evolution of real-world biological shape data, and our method demonstrates high efficacy, particularly due to its ability to adapt to any resolution without extra training. | [
"['Gefan Yang' 'Elizabeth Louise Baker' 'Michael L. Severinsen'\n 'Christy Anna Hipsley' 'Stefan Sommer']"
] |
null | null | 2405.18358 | null | null | http://arxiv.org/pdf/2405.18358v1 | 2024-05-28T16:55:41Z | 2024-05-28T16:55:41Z | MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex
Visual Reasoning | Recent advancements in Multi-modal Large Language Models (MLLMs) have significantly improved their performance in tasks combining vision and language. However, challenges persist in detailed multi-modal understanding, comprehension of complex tasks, and reasoning over multi-modal information. This paper introduces MMCTAgent, a novel multi-modal critical thinking agent framework designed to address the inherent limitations of current MLLMs in complex visual reasoning tasks. Inspired by human cognitive processes and critical thinking, MMCTAgent iteratively analyzes multi-modal information, decomposes queries, plans strategies, and dynamically evolves its reasoning. Additionally, MMCTAgent incorporates critical thinking elements such as verification of final answers and self-reflection through a novel approach that defines a vision-based critic and identifies task-specific evaluation criteria, thereby enhancing its decision-making abilities. Through rigorous evaluations across various image and video understanding benchmarks, we demonstrate that MMCTAgent (with and without the critic) outperforms both foundational MLLMs and other tool-augmented pipelines. | [
"['Somnath Kumar' 'Yash Gadhia' 'Tanuja Ganu' 'Akshay Nambi']"
] |
null | null | 2405.18359 | null | null | http://arxiv.org/pdf/2405.18359v1 | 2024-05-28T16:56:42Z | 2024-05-28T16:56:42Z | Bridging the Gap: Dynamic Learning Strategies for Improving Multilingual
Performance in LLMs | Large language models (LLMs) are at the forefront of transforming numerous domains globally. However, their inclusivity and effectiveness remain limited for non-Latin scripts and low-resource languages. This paper tackles the imperative challenge of enhancing the multilingual performance of LLMs without extensive training or fine-tuning. Through systematic investigation and evaluation of diverse languages using popular question-answering (QA) datasets, we present novel techniques that unlock the true potential of LLMs in a polyglot landscape. Our approach encompasses three key strategies that yield significant improvements in multilingual proficiency. First, by meticulously optimizing prompts tailored for polyglot LLMs, we unlock their latent capabilities, resulting in substantial performance boosts across languages. Second, we introduce a new hybrid approach that synergizes LLM Retrieval Augmented Generation (RAG) with multilingual embeddings and achieves improved multilingual task performance. Finally, we introduce a novel learning approach that dynamically selects the optimal prompt strategy, LLM model, and embedding model per query at run-time. This dynamic adaptation maximizes the efficacy of LLMs across languages, outperforming best static and random strategies. Additionally, our approach adapts configurations in both offline and online settings, and can seamlessly adapt to new languages and datasets, leading to substantial advancements in multilingual understanding and generation across diverse languages. | [
"['Somnath Kumar' 'Vaibhav Balloli' 'Mercy Ranjit' 'Kabir Ahuja'\n 'Tanuja Ganu' 'Sunayana Sitaram' 'Kalika Bali' 'Akshay Nambi']"
] |
null | null | 2405.18369 | null | null | http://arxiv.org/pdf/2405.18369v1 | 2024-05-28T17:08:31Z | 2024-05-28T17:08:31Z | PromptWizard: Task-Aware Agent-driven Prompt Optimization Framework | Large language models (LLMs) have revolutionized AI across diverse domains, showcasing remarkable capabilities. Central to their success is the concept of prompting, which guides model output generation. However, manual prompt engineering is labor-intensive and domain-specific, necessitating automated solutions. This paper introduces PromptWizard, a novel framework leveraging LLMs to iteratively synthesize and refine prompts tailored to specific tasks. Unlike existing approaches, PromptWizard optimizes both prompt instructions and in-context examples, maximizing model performance. The framework iteratively refines prompts by mutating instructions and incorporating negative examples to deepen understanding and ensure diversity. It further enhances both instructions and examples with the aid of a critic, synthesizing new instructions and examples enriched with detailed reasoning steps for optimal performance. PromptWizard offers several key features and capabilities, including computational efficiency compared to state-of-the-art approaches, adaptability to scenarios with varying amounts of training data, and effectiveness with smaller LLMs. Rigorous evaluation across 35 tasks on 8 datasets demonstrates PromptWizard's superiority over existing prompt strategies, showcasing its efficacy and scalability in prompt optimization. | [
"['Eshaan Agarwal' 'Vivek Dani' 'Tanuja Ganu' 'Akshay Nambi']"
] |
null | null | 2405.18373 | null | null | http://arxiv.org/pdf/2405.18373v1 | 2024-05-28T17:11:34Z | 2024-05-28T17:11:34Z | A Hessian-Aware Stochastic Differential Equation for Modelling SGD | Continuous-time approximation of Stochastic Gradient Descent (SGD) is a crucial tool to study its escaping behaviors from stationary points. However, existing stochastic differential equation (SDE) models fail to fully capture these behaviors, even for simple quadratic objectives. Built on a novel stochastic backward error analysis framework, we derive the Hessian-Aware Stochastic Modified Equation (HA-SME), an SDE that incorporates Hessian information of the objective function into both its drift and diffusion terms. Our analysis shows that HA-SME matches the order-best approximation error guarantee among existing SDE models in the literature, while achieving a significantly reduced dependence on the smoothness parameter of the objective. Further, for quadratic objectives, under mild conditions, HA-SME is proved to be the first SDE model that recovers exactly the SGD dynamics in the distributional sense. Consequently, when the local landscape near a stationary point can be approximated by quadratics, HA-SME is expected to accurately predict the local escaping behaviors of SGD. | [
"['Xiang Li' 'Zebang Shen' 'Liang Zhang' 'Niao He']"
] |
null | null | 2405.18376 | null | null | http://arxiv.org/pdf/2405.18376v1 | 2024-05-28T17:18:17Z | 2024-05-28T17:18:17Z | Empowering Source-Free Domain Adaptation with MLLM-driven Curriculum
Learning | Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to a target domain using only unlabeled target data. Current SFDA methods face challenges in effectively leveraging pre-trained knowledge and exploiting target domain data. Multimodal Large Language Models (MLLMs) offer remarkable capabilities in understanding visual and textual information, but their applicability to SFDA poses challenges such as instruction-following failures, intensive computational demands, and difficulties in performance measurement prior to adaptation. To alleviate these issues, we propose Reliability-based Curriculum Learning (RCL), a novel framework that integrates multiple MLLMs for knowledge exploitation via pseudo-labeling in SFDA. Our framework incorporates proposed Reliable Knowledge Transfer, Self-correcting and MLLM-guided Knowledge Expansion, and Multi-hot Masking Refinement to progressively exploit unlabeled data in the target domain. RCL achieves state-of-the-art (SOTA) performance on multiple SFDA benchmarks, e.g., $\textbf{+9.4\%}$ on DomainNet, demonstrating its effectiveness in enhancing adaptability and robustness without requiring access to source data. Code: https://github.com/Dong-Jie-Chen/RCL. | [
"['Dongjie Chen' 'Kartik Patwari' 'Zhengfeng Lai' 'Sen-ching Cheung'\n 'Chen-Nee Chuah']"
] |
null | null | 2405.18378 | null | null | http://arxiv.org/pdf/2405.18378v2 | 2024-05-29T11:31:19Z | 2024-05-28T17:22:15Z | A Canonization Perspective on Invariant and Equivariant Learning | In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. Recently, frame-averaging methods emerged to be a unified framework for attaining symmetries efficiently by averaging over input-dependent subsets of the group, i.e., frames. What we currently lack is a principled understanding of the design of frames. In this work, we introduce a canonization perspective that provides an essential and complete view of the design of frames. Canonization is a classic approach for attaining invariance by mapping inputs to their canonical forms. We show that there exists an inherent connection between frames and canonical forms. Leveraging this connection, we can efficiently compare the complexity of frames as well as determine the optimality of certain frames. Guided by this principle, we design novel frames for eigenvectors that are strictly superior to existing methods -- some are even optimal -- both theoretically and empirically. The reduction to the canonization perspective further uncovers equivalences between previous methods. These observations suggest that canonization provides a fundamental understanding of existing frame-averaging methods and unifies existing equivariant and invariant learning methods. | [
"['George Ma' 'Yifei Wang' 'Derek Lim' 'Stefanie Jegelka' 'Yisen Wang']"
] |
null | null | 2405.18379 | null | null | http://arxiv.org/pdf/2405.18379v2 | 2024-06-08T01:17:13Z | 2024-05-28T17:22:15Z | A Note on the Prediction-Powered Bootstrap | We introduce PPBoot: a bootstrap-based method for prediction-powered inference. PPBoot is applicable to arbitrary estimation problems and is very simple to implement, essentially only requiring one application of the bootstrap. Through a series of examples, we demonstrate that PPBoot often performs nearly identically to (and sometimes better than) the earlier PPI(++) method based on asymptotic normality – when the latter is applicable – without requiring any asymptotic characterizations. Given its versatility, PPBoot could simplify and expand the scope of application of prediction-powered inference to problems where central limit theorems are hard to prove. | [
"['Tijana Zrnic']"
] |
null | null | 2405.18380 | null | null | http://arxiv.org/pdf/2405.18380v1 | 2024-05-28T17:22:22Z | 2024-05-28T17:22:22Z | OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for
Memory-Efficient LLM Fine-tuning | The rapid advancements in Large Language Models (LLMs) have revolutionized various natural language processing tasks. However, the substantial size of LLMs presents significant challenges in training or fine-tuning. While parameter-efficient approaches such as low-rank adaptation (LoRA) have gained popularity, they often compromise performance compared to full-rank fine-tuning. In this paper, we propose Outlier-weighed Layerwise Sampled Low-Rank Projection (OwLore), a new memory-efficient fine-tuning approach, inspired by the layerwise outlier distribution of LLMs, which dynamically samples pre-trained layers to fine-tune instead of adding additional adaptors. We first interpret the outlier phenomenon through the lens of Heavy-Tailed Self-Regularization theory (HT-SR), discovering that layers with more outliers tend to be more heavy-tailed and consequently better trained. Inspired by this finding, OwLore strategically assigns higher sampling probabilities to layers with more outliers to better leverage the knowledge stored in pre-trained LLMs. To further mitigate the memory demands of fine-tuning, we integrate gradient low-rank projection into our approach, which allows each layer to be trained efficiently in a low-rank manner. By incorporating the efficient characteristics of low-rank and optimal layerwise sampling, OwLore significantly improves the memory-performance trade-off in LLM fine-tuning. Our extensive experiments across various architectures, including LLaMa2, LLaMa3, and Mistral, demonstrate that OwLore consistently outperforms baseline approaches, including full fine-tuning. Specifically, it achieves up to a 1.1% average accuracy gain on the Commonsense Reasoning benchmark, a 3.0% improvement on MMLU, and a notable 10% boost on MT-Bench, while being more memory efficient. OwLore allows us to fine-tune LLaMa2-7B with only 21GB of memory. | [
"['Pengxiang Li' 'Lu Yin' 'Xiaowei Gao' 'Shiwei Liu']"
] |
null | null | 2405.18383 | null | null | http://arxiv.org/pdf/2405.18383v1 | 2024-05-28T17:25:43Z | 2024-05-28T17:25:43Z | Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy
Planning Automated Segmentation | The 2024 Brain Tumor Segmentation Meningioma Radiotherapy (BraTS-MEN-RT) challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs with expert-annotated target labels for patients with intact or post-operative meningioma that underwent either conventional external beam radiotherapy or stereotactic radiosurgery. Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space, accompanied by a single-label "target volume" representing the gross tumor volume (GTV) and any at-risk post-operative site. Target volume annotations adhere to established radiotherapy planning protocols, ensuring consistency across cases and institutions. For pre-operative meningiomas, the target volume encompasses the entire GTV and associated nodular dural tail, while for post-operative cases, it includes at-risk resection cavity margins as determined by the treating institution. Case annotations were reviewed and approved by expert neuroradiologists and radiation oncologists. Participating teams will develop, containerize, and evaluate automated segmentation models using this comprehensive dataset. Model performance will be assessed using the lesion-wise Dice Similarity Coefficient and the 95% Hausdorff distance. The top-performing teams will be recognized at the Medical Image Computing and Computer Assisted Intervention Conference in October 2024. BraTS-MEN-RT is expected to significantly advance automated radiotherapy planning by enabling precise tumor segmentation and facilitating tailored treatment, ultimately improving patient outcomes. | [
"['Dominic LaBella' 'Katherine Schumacher' 'Michael Mix' 'Kevin Leu'\n 'Shan McBurney-Lin' 'Pierre Nedelec' 'Javier Villanueva-Meyer'\n 'Jonathan Shapey' 'Tom Vercauteren' 'Kazumi Chia' 'Omar Al-Salihi'\n 'Justin Leu' 'Lia Halasz' 'Yury Velichko' 'Chunhao Wang'\n 'John Kirkpatrick' 'Scott Floyd' 'Zachary J. Reitman' 'Trey Mullikin'\n 'Ulas Bagci' 'Sean Sachdev' 'Jona A. Hattangadi-Gluth' 'Tyler Seibert'\n 'Nikdokht Farid' 'Connor Puett' 'Matthew W. Pease' 'Kevin Shiue'\n 'Syed Muhammad Anwar' 'Shahriar Faghani' 'Muhammad Ammar Haider'\n 'Pranav Warman' 'Jake Albrecht' 'András Jakab' 'Mana Moassefi'\n 'Verena Chung' 'Alejandro Aristizabal' 'Alexandros Karargyris'\n 'Hasan Kassem' 'Sarthak Pati' 'Micah Sheller' 'Christina Huang'\n 'Aaron Coley' 'Siddharth Ghanta' 'Alex Schneider' 'Conrad Sharp'\n 'Rachit Saluja' 'Florian Kofler' 'Philipp Lohmann' 'Phillipp Vollmuth'\n 'Louis Gagnon' 'Maruf Adewole' 'Hongwei Bran Li'\n 'Anahita Fathi Kazerooni' 'Nourel Hoda Tahon' 'Udunna Anazodo'\n 'Ahmed W. Moawad' 'Bjoern Menze' 'Marius George Linguraru'\n 'Mariam Aboian' 'Benedikt Wiestler' 'Ujjwal Baid' 'Gian-Marco Conte'\n 'Andreas M. T. Rauschecker' 'Ayman Nada' 'Aly H. Abayazeed'\n 'Raymond Huang' 'Maria Correia de Verdier' 'Jeffrey D. Rudie'\n 'Spyridon Bakas' 'Evan Calabrese']"
] |
null | null | 2405.18386 | null | null | http://arxiv.org/pdf/2405.18386v2 | 2024-05-29T17:05:32Z | 2024-05-28T17:27:20Z | Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language
Models via Instruction Tuning | Recent advances in text-to-music editing, which employ text queries to modify music (e.g. by changing its style or adjusting instrumental components), present unique challenges and opportunities for AI-assisted music creation. Previous approaches in this domain have been constrained by the necessity to train specific editing models from scratch, which is both resource-intensive and inefficient; other research uses large language models to predict edited music, resulting in imprecise audio reconstruction. To combine the strengths and address these limitations, we introduce Instruct-MusicGen, a novel approach that finetunes a pretrained MusicGen model to efficiently follow editing instructions such as adding, removing, or separating stems. Our approach involves a modification of the original MusicGen architecture by incorporating a text fusion module and an audio fusion module, which allow the model to process instruction texts and audio inputs concurrently and yield the desired edited music. Remarkably, Instruct-MusicGen only introduces 8% new parameters to the original MusicGen model and only trains for 5K steps, yet it achieves superior performance across all tasks compared to existing baselines, and demonstrates performance comparable to the models trained for specific tasks. This advancement not only enhances the efficiency of text-to-music editing but also broadens the applicability of music language models in dynamic music production environments. | [
"['Yixiao Zhang' 'Yukara Ikemiya' 'Woosung Choi' 'Naoki Murata'\n 'Marco A. Martínez-Ramírez' 'Liwei Lin' 'Gus Xia' 'Wei-Hsiang Liao'\n 'Yuki Mitsufuji' 'Simon Dixon']"
] |