Schema (column: type): categories: string | doi: string | id: string | year: float64 | venue: string | link: string | updated: string | published: string | title: string | abstract: string | authors: list

categories: null | doi: null | id: 2402.16077 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16077v2 | updated: 2024-06-18T12:07:34Z | published: 2024-02-25T12:40:42Z
title: Equivariant Frames and the Impossibility of Continuous Canonicalization
abstract:
Canonicalization provides an architecture-agnostic method for enforcing equivariance, with generalizations such as frame-averaging recently gaining prominence as a lightweight and flexible alternative to equivariant architectures. Recent works have found an empirical benefit to using probabilistic frames instead, which learn weighted distributions over group elements. In this work, we provide strong theoretical justification for this phenomenon: for commonly-used groups, there is no efficiently computable choice of frame that preserves continuity of the function being averaged. In other words, unweighted frame-averaging can turn a smooth, non-symmetric function into a discontinuous, symmetric function. To address this fundamental robustness problem, we formally define and construct \emph{weighted} frames, which provably preserve continuity, and demonstrate their utility by constructing efficient and continuous weighted frames for the actions of $SO(2)$, $SO(3)$, and $S_n$ on point clouds.
authors: Nadav Dym, Hannah Lawrence, Jonathan W. Siegel
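As a toy illustration of the unweighted frame averaging discussed in this abstract (not the paper's construction; all function names and the PCA-based frame are our own assumptions), the sketch below averages a non-invariant function over an SO(2) frame for 2D point clouds. The frame becomes ill-defined when the covariance is near-isotropic, which is the kind of discontinuity the paper's weighted frames are designed to avoid.

```python
import numpy as np

def pca_frame(points):
    """Toy unweighted SO(2) frame: the two rotations aligning the cloud's
    principal axis with the x-axis (two, because the eigenvector sign is
    ambiguous). Ill-defined when the covariance is near-isotropic."""
    centered = points - points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)
    v = eigvecs[:, -1]                       # leading principal direction
    theta = np.arctan2(v[1], v[0])
    return [-theta, -theta + np.pi]

def rotate(points, theta):
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

def frame_average(f, points):
    """Average a (non-invariant) function f over the frame's canonical poses,
    yielding an (approximately) SO(2)-invariant value."""
    return float(np.mean([f(rotate(points, th)) for th in pca_frame(points)]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 2)) * np.array([2.0, 0.5])   # anisotropic cloud
    f = lambda pts: float(np.sum(pts[:, 0] ** 3))         # not rotation-invariant
    print(frame_average(f, X), frame_average(f, rotate(X, 0.7)))  # nearly equal
```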
categories: null | doi: null | id: 2402.16078 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16078v2 | updated: 2024-04-18T14:13:19Z | published: 2024-02-25T13:05:25Z
title: Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs
abstract:
We present the Evolving Graph Fourier Transform (EFT), the first invertible spectral transform that captures evolving representations on temporal graphs. We motivate our work by the inadequacy of existing methods for capturing the evolving graph spectra, which are also computationally expensive due to the temporal aspect along with the graph vertex domain. We view the problem as an optimization over the Laplacian of the continuous time dynamic graph. Additionally, we propose pseudo-spectrum relaxations that decompose the transformation process, making it highly computationally efficient. The EFT method adeptly captures the evolving graph's structural and positional properties, making it effective for downstream tasks on evolving graphs. Hence, as a reference implementation, we develop a simple neural model induced with EFT for capturing evolving graph spectra. We empirically validate our theoretical findings on a number of large-scale and standard temporal graph benchmarks and demonstrate that our model achieves state-of-the-art performance.
authors: Anson Bastos, Kuldeep Singh, Abhishek Nadgeri, Manish Singh, Toyotaro Suzumura

categories: null | doi: null | id: 2402.16090 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16090v1 | updated: 2024-02-25T13:37:36Z | published: 2024-02-25T13:37:36Z
title: Key Design Choices in Source-Free Unsupervised Domain Adaptation: An In-depth Empirical Analysis
abstract:
This study provides a comprehensive benchmark framework for Source-Free Unsupervised Domain Adaptation (SF-UDA) in image classification, aiming to achieve a rigorous empirical understanding of the complex relationships between multiple key design factors in SF-UDA methods. The study empirically examines a diverse set of SF-UDA techniques, assessing their consistency across datasets, sensitivity to specific hyperparameters, and applicability across different families of backbone architectures. Moreover, it exhaustively evaluates pre-training datasets and strategies, particularly focusing on both supervised and self-supervised methods, as well as the impact of fine-tuning on the source domain. Our analysis also highlights gaps in existing benchmark practices, guiding SF-UDA research towards more effective and general approaches. It emphasizes the importance of backbone architecture and pre-training dataset selection on SF-UDA performance, serving as an essential reference and providing key insights. Lastly, we release the source code of our experimental framework. This facilitates the construction, training, and testing of SF-UDA methods, enabling systematic large-scale experimental analysis and supporting further research efforts in this field.
authors: Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale

categories: null | doi: null | id: 2402.16091 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16091v1 | updated: 2024-02-25T13:37:53Z | published: 2024-02-25T13:37:53Z
title: Bayesian Neural Network For Personalized Federated Learning Parameter Selection
abstract:
Federated learning's poor performance in the presence of heterogeneous data remains one of the most pressing issues in the field. Personalized federated learning departs from the conventional paradigm in which all clients employ the same model, instead striving to discover an individualized model for each client to address the heterogeneity in the data. One such approach involves personalizing specific layers of neural networks. However, prior endeavors have not provided a dependable rationale, and some have selected personalized layers that are entirely distinct and conflicting. In this work, we take a step further by proposing personalization at the elemental level, rather than the traditional layer-level personalization. To select personalized parameters, we introduce Bayesian neural networks and rely on the uncertainty they offer to guide our selection of personalized parameters. Finally, we validate our algorithm's efficacy on several real-world datasets, demonstrating that our proposed approach outperforms existing baselines.
authors: Mengen Luo, Ercan Engin Kuruoglu

categories: null | doi: null | id: 2402.16105 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16105v3 | updated: 2024-05-24T15:31:57Z | published: 2024-02-25T15:08:37Z
title: Informed Meta-Learning
abstract:
In noisy and low-data regimes prevalent in real-world applications, a key challenge of machine learning lies in effectively incorporating inductive biases that promote data efficiency and robustness. Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines. While the former relies on a purely data-driven source of priors, the latter is guided by prior domain knowledge. In this paper, we formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations, such as natural language; thus, unlocking complementarity in cross-task knowledge sharing of humans and machines. We establish the foundational components of informed meta-learning and present a concrete instantiation of this framework--the Informed Neural Process. Through a series of experiments, we demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
authors: Katarzyna Kobalczyk, Mihaela van der Schaar

categories: null | doi: null | id: 2402.16119 | year: null | venue: null
link: http://arxiv.org/abs/2402.16119v1 | updated: 2024-02-25T15:37:14Z | published: 2024-02-25T15:37:14Z
title: DeepForge: Leveraging AI for Microstructural Control in Metal Forming via Model Predictive Control
abstract:
This study presents a novel method for microstructure control in closed die hot forging that combines Model Predictive Control (MPC) with a developed machine learning model called DeepForge. DeepForge uses an architecture that combines 1D convolutional neural networks and gated recurrent units. It uses surface temperature measurements of a workpiece as input to predict microstructure changes during forging. The paper also details DeepForge's architecture and the finite element simulation model used to generate the data set, using a three-stroke forging process. The results demonstrate DeepForge's ability to predict microstructure with a mean absolute error of $0.4\pm0.3\%$. In addition, the study explores the use of MPC to adjust inter-stroke wait times, effectively counteracting temperature disturbances to achieve a target grain size of less than 35 microns within a specific 2D region of the workpiece. These results are then verified experimentally, demonstrating a significant step towards improved control and quality in forging processes where temperature can be used as an additional degree of freedom in the process.
authors: Jan Petrik, Markus Bambach

categories: null | doi: null | id: 2402.16123 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16123v2 | updated: 2024-04-28T12:03:38Z | published: 2024-02-25T15:46:33Z
title: InstructEdit: Instruction-based Knowledge Editing for Large Language Models
abstract:
Knowledge editing for large language models can offer an efficient solution to alter a model's behavior without negatively impacting the overall performance. However, the current approaches encounter issues with limited generalizability across tasks, necessitating one distinct editor for each task, significantly hindering the broader applications. To address this, we take the first step to analyze the multi-task generalization issue in knowledge editing. Specifically, we develop an instruction-based editing technique, termed InstructEdit, which facilitates the editor's adaptation to various task performances simultaneously using simple instructions. With only one unified editor for each LLM, we empirically demonstrate that InstructEdit can improve the editor's control, leading to an average 14.86% increase in Reliability in the multi-task editing setting. Furthermore, experiments involving holdout unseen tasks illustrate that InstructEdit consistently surpasses previous strong baselines. To further investigate the underlying mechanisms of instruction-based knowledge editing, we analyze the principal components of the editing gradient directions, which unveils that instructions can help control the optimization direction with stronger OOD generalization. Code and datasets are available at https://github.com/zjunlp/EasyEdit.
authors: Ningyu Zhang, Bozhong Tian, Siyuan Cheng, Xiaozhuan Liang, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, Huajun Chen

categories: null | doi: null | id: 2402.16131 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16131v1 | updated: 2024-02-25T16:11:32Z | published: 2024-02-25T16:11:32Z
title: A VAE-based Framework for Learning Multi-Level Neural Granger-Causal Connectivity
abstract:
Granger causality has been widely used in various application domains to capture lead-lag relationships amongst the components of complex dynamical systems, and the focus in extant literature has been on a single dynamical system. In certain applications in macroeconomics and neuroscience, one has access to data from a collection of related such systems, wherein the modeling task of interest is to extract the shared common structure that is embedded across them, as well as to identify the idiosyncrasies within individual ones. This paper introduces a Variational Autoencoder (VAE) based framework that jointly learns Granger-causal relationships amongst components in a collection of related-yet-heterogeneous dynamical systems, and handles the aforementioned task in a principled way. The performance of the proposed framework is evaluated on several synthetic data settings and benchmarked against existing approaches designed for individual system learning. The method is further illustrated on a real dataset involving time series data from a neurophysiological experiment and produces interpretable results.
authors: Jiahe Lin, Huitian Lei, George Michailidis

categories: null | doi: null | id: 2402.16153 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16153v1 | updated: 2024-02-25T17:19:41Z | published: 2024-02-25T17:19:41Z
title: ChatMusician: Understanding and Generating Music Intrinsically with LLM
abstract:
While Large Language Models (LLMs) demonstrate impressive capabilities in text generation, we find that their ability has yet to be generalized to music, humanity's creative language. We introduce ChatMusician, an open-source LLM that integrates intrinsic musical abilities. It is based on continual pre-training and finetuning LLaMA2 on a text-compatible music representation, ABC notation, and the music is treated as a second language. ChatMusician can understand and generate music with a pure text tokenizer without any external multi-modal neural structures or tokenizers. Interestingly, endowing musical abilities does not harm language abilities, even achieving a slightly higher MMLU score. Our model is capable of composing well-structured, full-length music, conditioned on texts, chords, melodies, motifs, musical forms, etc, surpassing GPT-4 baseline. On our meticulously curated college-level music understanding benchmark, MusicTheoryBench, ChatMusician surpasses LLaMA2 and GPT-3.5 on zero-shot setting by a noticeable margin. Our work reveals that LLMs can be an excellent compressor for music, but there remains significant territory to be conquered. We release our 4B token music-language corpora MusicPile, the collected MusicTheoryBench, code, model and demo in GitHub.
authors: Ruibin Yuan, Hanfeng Lin, Yi Wang, Zeyue Tian, Shangda Wu, Tianhao Shen, Ge Zhang, Yuhang Wu, Cong Liu, Ziya Zhou, Ziyang Ma, Liumeng Xue, Ziyu Wang, Qin Liu, Tianyu Zheng, Yizhi Li, Yinghao Ma, Yiming Liang, Xiaowei Chi, Ruibo Liu, Zili Wang, Pengfei Li, Jingcheng Wu, Chenghua Lin, Qifeng Liu, Tao Jiang, Wenhao Huang, Wenhu Chen, Emmanouil Benetos, Jie Fu, Gus Xia, Roger Dannenberg, Wei Xue, Shiyin Kang, Yike Guo

categories: null | doi: null | id: 2402.16157 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16157v1 | updated: 2024-02-25T17:35:31Z | published: 2024-02-25T17:35:31Z
title: Consensus learning: A novel decentralised ensemble learning paradigm
abstract:
The widespread adoption of large-scale machine learning models in recent years highlights the need for distributed computing for efficiency and scalability. This work introduces a novel distributed machine learning paradigm -- \emph{consensus learning} -- which combines classical ensemble methods with consensus protocols deployed in peer-to-peer systems. These algorithms consist of two phases: first, participants develop their models and submit predictions for any new data inputs; second, the individual predictions are used as inputs for a communication phase, which is governed by a consensus protocol. Consensus learning ensures user data privacy, while also inheriting the safety measures against Byzantine attacks from the underlying consensus mechanism. We provide a detailed theoretical analysis for a particular consensus protocol and compare the performance of the consensus learning ensemble with centralised ensemble learning algorithms. The discussion is supplemented by various numerical simulations, which describe the robustness of the algorithms against Byzantine participants.
authors: Horia Magureanu, Naïri Usher

categories: null | doi: null | id: 2402.16158 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16158v1 | updated: 2024-02-25T17:37:53Z | published: 2024-02-25T17:37:53Z
title: Distribution-Free Fair Federated Learning with Small Samples
abstract:
As federated learning gains increasing importance in real-world applications due to its capacity for decentralized data training, addressing fairness concerns across demographic groups becomes critically important. However, most existing machine learning algorithms for ensuring fairness are designed for centralized data environments and generally require large-sample and distributional assumptions, underscoring the urgent need for fairness techniques adapted for decentralized and heterogeneous systems with finite-sample and distribution-free guarantees. To address this issue, this paper introduces FedFaiREE, a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples. Our approach accounts for unique challenges in decentralized environments, such as client heterogeneity, communication costs, and small sample sizes. We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
authors: Qichuan Yin, Junzhou Huang, Huaxiu Yao, Linjun Zhang

categories: null | doi: null | id: 2402.16181 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16181v1 | updated: 2024-02-25T20:07:13Z | published: 2024-02-25T20:07:13Z
title: How Can LLM Guide RL? A Value-Based Approach
abstract:
Reinforcement learning (RL) has become the de facto standard practice for sequential decision-making problems by improving future acting policies with feedback. However, RL algorithms may require extensive trial-and-error interactions to collect useful feedback for improvement. On the other hand, recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities for planning tasks, lacking the ability to autonomously refine their responses based on feedback. Therefore, in this paper, we study how the policy prior provided by the LLM can enhance the sample efficiency of RL algorithms. Specifically, we develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning, particularly when the difference between the ideal policy and the LLM-informed policy is small, which suggests that the initial policy is close to optimal, reducing the need for further exploration. Additionally, we present a practical algorithm SLINVIT that simplifies the construction of the value function and employs subgoals to reduce the search complexity. Our experiments across three interactive environments ALFWorld, InterCode, and BlocksWorld demonstrate that our method achieves state-of-the-art success rates and also surpasses previous RL and LLM approaches in terms of sample efficiency. Our code is available at https://github.com/agentification/Language-Integrated-VI.
authors: Shenao Zhang, Sirui Zheng, Shuqi Ke, Zhihan Liu, Wanxin Jin, Jianbo Yuan, Yingxiang Yang, Hongxia Yang, Zhaoran Wang

categories: null | doi: null | id: 2402.16184 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16184v1 | updated: 2024-02-25T20:11:40Z | published: 2024-02-25T20:11:40Z
title: Deep Neural Network Initialization with Sparsity Inducing Activations
abstract:
Inducing and leveraging sparse activations during training and inference is a promising avenue for improving the computational efficiency of deep networks, which is increasingly important as network sizes continue to grow and their application becomes more widespread. Here we use the large width Gaussian process limit to analyze the behaviour, at random initialization, of nonlinear activations that induce sparsity in the hidden outputs. A previously unreported form of training instability is proven for arguably two of the most natural candidates for hidden layer sparsification; those being a shifted ReLU ($\phi(x)=\max(0, x-\tau)$ for $\tau\ge 0$) and soft thresholding ($\phi(x)=0$ for $|x|\le\tau$ and $x-\text{sign}(x)\tau$ for $|x|>\tau$). We show that this instability is overcome by clipping the nonlinear activation magnitude, at a level prescribed by the shape of the associated Gaussian process variance map. Numerical experiments verify the theory and show that the proposed magnitude clipped sparsifying activations can be trained with training and test fractional sparsity as high as 85% while retaining close to full accuracy.
authors: Ilan Price, Nicholas Daultry Ball, Samuel C. H. Lam, Adam C. Jones, Jared Tanner
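The two activations named in this abstract, plus a magnitude-clipped wrapper, are easy to write down. The following is a minimal sketch; the principled clip level derived from the Gaussian-process variance map in the paper is not reproduced, so `clip` is left as a free hyperparameter, and all function names are ours.

```python
import numpy as np

def shifted_relu(x, tau=1.0):
    """Shifted ReLU from the abstract: phi(x) = max(0, x - tau), tau >= 0."""
    return np.maximum(0.0, x - tau)

def soft_threshold(x, tau=1.0):
    """Soft thresholding: 0 inside [-tau, tau], shrunk towards 0 outside."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def clipped(phi, clip):
    """Magnitude-clipped variant of an activation; clip is a stand-in for the
    level the paper derives from the GP variance map."""
    return lambda x, **kw: np.clip(phi(x, **kw), -clip, clip)

if __name__ == "__main__":
    x = np.random.default_rng(1).normal(size=100_000)
    for name, phi in [("shifted ReLU", shifted_relu),
                      ("soft threshold", soft_threshold),
                      ("clipped soft threshold", clipped(soft_threshold, clip=2.0))]:
        y = phi(x, tau=1.0)
        print(f"{name:>24s}: sparsity = {np.mean(y == 0.0):.2f}")
```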
categories: null | doi: null | id: 2402.16187 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16187v2 | updated: 2024-05-25T20:51:49Z | published: 2024-02-25T20:24:07Z
title: No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices
abstract:
Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating the misuse of such AI-generated content. However, we show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack -- leading to fundamental trade-offs in robustness, utility, and usability. To navigate these trade-offs, we rigorously study a set of simple yet effective attacks on common watermarking systems, and propose guidelines and defenses for LLM watermarking in practice.
authors: Qi Pang, Shengyuan Hu, Wenting Zheng, Virginia Smith

categories: null | doi: null | id: 2402.16196 | year: null | venue: null
link: http://arxiv.org/abs/2402.16196v2 | updated: 2024-04-23T18:22:08Z | published: 2024-02-25T20:39:44Z
title: Combining Machine Learning with Computational Fluid Dynamics using OpenFOAM and SmartSim
abstract:
Combining machine learning (ML) with computational fluid dynamics (CFD) opens many possibilities for improving simulations of technical and natural systems. However, CFD+ML algorithms require exchange of data, synchronization, and calculation on heterogeneous hardware, making their implementation for large-scale problems exceptionally challenging. We provide an effective and scalable solution to developing CFD+ML algorithms using open source software OpenFOAM and SmartSim. SmartSim provides an Orchestrator that significantly simplifies the programming of CFD+ML algorithms and a Redis database that ensures highly scalable data exchange between ML and CFD clients. We show how to leverage SmartSim to effectively couple different segments of OpenFOAM with ML, including pre/post-processing applications, solvers, function objects, and mesh motion solvers. We additionally provide an OpenFOAM sub-module with examples that can be used as starting points for real-world applications in CFD+ML.
authors: Tomislav Maric, Mohammed Elwardi Fadeli, Alessandro Rigazzi, Andrew Shao, Andre Weiner

categories: null | doi: null | id: 2402.16197 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16197v1 | updated: 2024-02-25T20:43:55Z | published: 2024-02-25T20:43:55Z
title: Language Models for Code Completion: A Practical Evaluation
abstract:
Transformer-based language models for automatic code completion have shown great promise so far, yet the evaluation of these models rarely uses real data. This study provides both quantitative and qualitative assessments of three public code language models when completing real-world code. We first developed an open-source IDE extension, Code4Me, for the online evaluation of the models. We collected real auto-completion usage data for over a year from more than 1200 users, resulting in over 600K valid completions. These models were then evaluated using six standard metrics across twelve programming languages. Next, we conducted a qualitative study of 1690 real-world completion requests to identify the reasons behind the poor model performance. A comparative analysis of the models' performance in online and offline settings was also performed, using benchmark synthetic datasets and two masking strategies. Our findings suggest that while developers utilize code completion across various languages, the best results are achieved for mainstream languages such as Python and Java. InCoder outperformed the other models across all programming languages, highlighting the significance of training data and objectives. Our study also revealed that offline evaluations do not accurately reflect real-world scenarios. Upon qualitative analysis of the model's predictions, we found that 66.3% of failures were due to the models' limitations, 24.4% occurred due to inappropriate model usage in a development context, and 9.3% were valid requests that developers overwrote. Given these findings, we propose several strategies to overcome the current limitations. These include refining training objectives, improving resilience to typographical errors, adopting hybrid approaches, and enhancing implementations and usability.
authors: Maliheh Izadi, Jonathan Katzy, Tim van Dam, Marc Otten, Razvan Mihai Popescu, Arie van Deursen

categories: null | doi: null | id: 2402.16200 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16200v1 | updated: 2024-02-25T21:25:06Z | published: 2024-02-25T21:25:06Z
title: IR2: Information Regularization for Information Retrieval
abstract:
Effective information retrieval (IR) in settings with limited training data, particularly for complex queries, remains a challenging task. This paper introduces IR2, Information Regularization for Information Retrieval, a technique for reducing overfitting during synthetic data generation. This approach, representing a novel application of regularization techniques in synthetic data creation for IR, is tested on three recent IR tasks characterized by complex queries: DORIS-MAE, ArguAna, and WhatsThatBook. Experimental results indicate that our regularization techniques not only outperform previous synthetic query generation methods on the tasks considered but also reduce cost by up to 50%. Furthermore, this paper categorizes and explores three regularization methods at different stages of the query synthesis pipeline (input, prompt, and output), each offering varying degrees of performance improvement compared to models where no regularization is applied. This provides a systematic approach for optimizing synthetic data generation in data-limited, complex-query IR scenarios. All code, prompts and synthetic data are available at https://github.com/Info-Regularization/Information-Regularization.
authors: Jianyou Wang, Kaicheng Wang, Xiaoyue Wang, Weili Cao, Ramamohan Paturi, Leon Bergen

categories: null | doi: null | id: 2402.16230 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16230v1 | updated: 2024-02-26T01:18:53Z | published: 2024-02-26T01:18:53Z
title: GARNN: An Interpretable Graph Attentive Recurrent Neural Network for Predicting Blood Glucose Levels via Multivariate Time Series
abstract:
Accurate prediction of future blood glucose (BG) levels can effectively improve BG management for people living with diabetes, thereby reducing complications and improving quality of life. The state of the art of BG prediction has been achieved by leveraging advanced deep learning methods to model multi-modal data, i.e., sensor data and self-reported event data, organised as multi-variate time series (MTS). However, these methods are mostly regarded as ``black boxes'' and not entirely trusted by clinicians and patients. In this paper, we propose interpretable graph attentive recurrent neural networks (GARNNs) to model MTS, explaining variable contributions via summarizing variable importance and generating feature maps by graph attention mechanisms instead of post-hoc analysis. We evaluate GARNNs on four datasets, representing diverse clinical scenarios. Upon comparison with twelve well-established baseline methods, GARNNs not only achieve the best prediction accuracy but also provide high-quality temporal interpretability, in particular for postprandial glucose levels as a result of corresponding meal intake and insulin injection. These findings underline the potential of GARNN as a robust tool for improving diabetes care, bridging the gap between deep learning technology and real-world healthcare solutions.
authors: Chengzhe Piao, Taiyu Zhu, Stephanie E Baldeweg, Paul Taylor, Pantelis Georgiou, Jiahao Sun, Jun Wang, Kezhi Li

categories: null | doi: null | id: 2402.16237 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16237v1 | updated: 2024-02-26T01:46:56Z | published: 2024-02-26T01:46:56Z
title: Active Level Set Estimation for Continuous Search Space with Theoretical Guarantee
abstract:
A common problem encountered in many real-world applications is level set estimation where the goal is to determine the region in the function domain where the function is above or below a given threshold. When the function is black-box and expensive to evaluate, the level sets need to be found in a minimum set of function evaluations. Existing methods often assume a discrete search space with a finite set of data points for function evaluations and estimating the level sets. When applied to a continuous search space, these methods often need to first discretize the space which leads to poor results while needing high computational time. While some methods cater for the continuous setting, they still lack a proper guarantee for theoretical convergence. To address this problem, we propose a novel algorithm that does not need any discretization and can directly work in continuous search spaces. Our method suggests points by constructing an acquisition function that is defined as a measure of confidence of the function being higher or lower than the given threshold. A theoretical analysis for the convergence of the algorithm to an accurate solution is provided. On multiple synthetic and real-world datasets, our algorithm successfully outperforms state-of-the-art methods.
authors: Giang Ngo, Dang Nguyen, Dat Phan-Trong, Sunil Gupta
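To make the acquisition idea in this abstract concrete, here is a generic sketch of level set estimation with a Gaussian process, where the acquisition is the confidence that a point lies above or below the threshold, |mu(x) - h| / sigma(x), minimized over the continuous domain with random restarts. This is our own simplified stand-in (kernel, optimizer, and names are assumptions), not the paper's algorithm or its guarantees.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def acquisition(x, gp, h):
    """Confidence that f(x) is above/below threshold h; low means ambiguous."""
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    return abs(mu[0] - h) / max(sigma[0], 1e-9)

def next_query(gp, h, bounds, rng, restarts=10):
    """Query the most ambiguous point in the continuous domain."""
    best_x, best_val = None, np.inf
    for _ in range(restarts):
        x0 = rng.uniform(bounds[:, 0], bounds[:, 1])
        res = minimize(acquisition, x0, args=(gp, h), bounds=bounds)
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x

if __name__ == "__main__":
    f = lambda x: np.sin(3 * x[0]) + 0.5 * x[0]      # black-box function
    h, bounds = 0.5, np.array([[0.0, 3.0]])
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 3.0, size=(3, 1)); y = np.array([f(x) for x in X])
    for _ in range(15):                               # sequential design loop
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
        x_new = next_query(gp, h, bounds, rng)
        X = np.vstack([X, x_new]); y = np.append(y, f(x_new))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    grid = np.linspace(0, 3, 301).reshape(-1, 1)
    print("estimated fraction above threshold:", (gp.predict(grid) > h).mean())
```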
categories: null | doi: null | id: 2402.16247 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16247v1 | updated: 2024-02-26T02:13:36Z | published: 2024-02-26T02:13:36Z
title: Learning Translations: Emergent Communication Pretraining for Cooperative Language Acquisition
abstract:
In Emergent Communication (EC) agents learn to communicate with one another, but the protocols that they develop are specialised to their training community. This observation led to research into Zero-Shot Coordination (ZSC) for learning communication strategies that are robust to agents not encountered during training. However, ZSC typically assumes that no prior data is available about the agents that will be encountered in the zero-shot setting. In many cases, this presents an unnecessarily hard problem and rules out communication via preestablished conventions. We propose a novel AI challenge called a Cooperative Language Acquisition Problem (CLAP) in which the ZSC assumptions are relaxed by allowing a 'joiner' agent to learn from a dataset of interactions between agents in a target community. We propose and compare two methods for solving CLAPs: Imitation Learning (IL), and Emergent Communication pretraining and Translation Learning (ECTL), in which an agent is trained in self-play with EC and then learns from the data to translate between the emergent protocol and the target community's protocol.
authors: Dylan Cope, Peter McBurney

categories: null | doi: null | id: 2402.16255 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16255v1 | updated: 2024-02-26T02:37:39Z | published: 2024-02-26T02:37:39Z
title: Watch Your Head: Assembling Projection Heads to Save the Reliability of Federated Models
abstract:
Federated learning encounters substantial challenges with heterogeneous data, leading to performance degradation and convergence issues. While considerable progress has been achieved in mitigating such an impact, the reliability aspect of federated models has been largely disregarded. In this study, we conduct extensive experiments to investigate the reliability of both generic and personalized federated models. Our exploration uncovers a significant finding: \textbf{federated models exhibit unreliability when faced with heterogeneous data}, demonstrating poor calibration on in-distribution test data and low uncertainty levels on out-of-distribution data. This unreliability is primarily attributed to the presence of biased projection heads, which introduce miscalibration into the federated models. Inspired by this observation, we propose the "Assembled Projection Heads" (APH) method for enhancing the reliability of federated models. By treating the existing projection head parameters as priors, APH randomly samples multiple initialized parameters of projection heads from the prior and further performs targeted fine-tuning on locally available data under varying learning rates. Such a head ensemble introduces parameter diversity into the deterministic model, eliminating the bias and producing reliable predictions via head averaging. We evaluate the effectiveness of the proposed APH method across three prominent federated benchmarks. Experimental results validate the efficacy of APH in model calibration and uncertainty estimation. Notably, APH can be seamlessly integrated into various federated approaches but only requires less than 30% additional computation cost for $100\times$ inferences within large models.
authors: Jinqian Chen, Jihua Zhu, Qinghai Zheng, Zhongyu Li, Zhiqiang Tian

categories: null | doi: null | id: 2402.16268 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16268v1 | updated: 2024-02-26T03:09:06Z | published: 2024-02-26T03:09:06Z
title: Foundation Model Transparency Reports
abstract:
Foundation models are critical digital technologies with sweeping societal impact that necessitates transparency. To codify how foundation model developers should provide transparency about the development and deployment of their models, we propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media. While external documentation of societal harms prompted social media transparency reports, our objective is to institutionalize transparency reporting for foundation models while the industry is still nascent. To design our reports, we identify 6 design principles given the successes and shortcomings of social media transparency reporting. To further schematize our reports, we draw upon the 100 transparency indicators from the Foundation Model Transparency Index. Given these indicators, we measure the extent to which they overlap with the transparency requirements included in six prominent government policies (e.g., the EU AI Act, the US Executive Order on Safe, Secure, and Trustworthy AI). Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions. We encourage foundation model developers to regularly publish transparency reports, building upon recommendations from the G7 and the White House.
authors: Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang

categories: null | doi: null | id: 2402.16269 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16269v1 | updated: 2024-02-26T03:10:11Z | published: 2024-02-26T03:10:11Z
title: From Large Language Models and Optimization to Decision Optimization CoPilot: A Research Manifesto
abstract:
Significantly simplifying the creation of optimization models for real-world business problems has long been a major goal in applying mathematical optimization more widely to important business and societal decisions. The recent capabilities of Large Language Models (LLMs) present a timely opportunity to achieve this goal. Therefore, we propose research at the intersection of LLMs and optimization to create a Decision Optimization CoPilot (DOCP) - an AI tool designed to assist any decision maker, interacting in natural language to grasp the business problem, subsequently formulating and solving the corresponding optimization model. This paper outlines our DOCP vision and identifies several fundamental requirements for its implementation. We describe the state of the art through a literature survey and experiments using ChatGPT. We show that a) LLMs already provide substantial novel capabilities relevant to a DOCP, and b) major research challenges remain to be addressed. We also propose possible research directions to overcome these gaps. We also see this work as a call to action to bring together the LLM and optimization communities to pursue our vision, thereby enabling much more widespread improved decision-making.
authors: Segev Wasserkrug, Leonard Boussioux, Dick den Hertog, Farzaneh Mirzazadeh, Ilker Birbil, Jannis Kurtz, Donato Maragno

categories: null | doi: null | id: 2402.16278 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16278v3 | updated: 2024-03-10T10:04:41Z | published: 2024-02-26T03:46:01Z
title: A Self-matching Training Method with Annotation Embedding Models for Ontology Subsumption Prediction
abstract:
Recently, ontology embeddings representing entities in a low-dimensional space have been proposed for ontology completion. However, the ontology embeddings for concept subsumption prediction do not address the difficulties of similar and isolated entities and fail to extract the global information of annotation axioms from an ontology. In this paper, we propose a self-matching training method for the two ontology embedding models: Inverted-index Matrix Embedding (InME) and Co-occurrence Matrix Embedding (CoME). The two embeddings capture the global and local information in annotation axioms by means of the occurring locations of each word in a set of axioms and the co-occurrences of words in each axiom. The self-matching training method increases the robustness of the concept subsumption prediction when predicted superclasses are similar to subclasses and are isolated to other entities in an ontology. Our evaluation experiments show that the self-matching training method with InME outperforms the existing ontology embeddings for the GO and FoodOn ontologies and that the method with the concatenation of CoME and OWL2Vec* outperforms them for the HeLiS ontology.
authors: Yukihiro Shiraishi, Ken Kaneiwa

categories: null | doi: null | id: 2402.16285 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16285v1 | updated: 2024-02-26T04:06:05Z | published: 2024-02-26T04:06:05Z
title: A Comparison of Deep Learning Models for Proton Background Rejection with the AMS Electromagnetic Calorimeter
abstract:
The Alpha Magnetic Spectrometer (AMS) is a high-precision particle detector onboard the International Space Station containing six different subdetectors. The Transition Radiation Detector and Electromagnetic Calorimeter (ECAL) are used to separate electrons/positrons from the abundant cosmic-ray proton background. The positron flux measured in space by AMS falls with a power law which unexpectedly softens above 25 GeV and then hardens above 280 GeV. Several theoretical models try to explain these phenomena, and a purer measurement of positrons at higher energies is needed to help test them. The currently used methods to reject the proton background at high energies involve extrapolating shower features from the ECAL to use as inputs for boosted decision tree and likelihood classifiers. We present a new approach for particle identification with the AMS ECAL using deep learning (DL). By taking the energy deposition within all the ECAL cells as an input and treating them as pixels in an image-like format, we train an MLP, a CNN, and multiple ResNets and Convolutional vision Transformers (CvTs) as shower classifiers. Proton rejection performance is evaluated using Monte Carlo (MC) events and ISS data separately. For MC, using events with a reconstructed energy between 0.2 - 2 TeV, at 90% electron accuracy, the proton rejection power of our CvT model is more than 5 times that of the other DL models. Similarly, for ISS data with a reconstructed energy between 50 - 70 GeV, the proton rejection power of our CvT model is more than 2.5 times that of the other DL models.
authors: Raheem Karim Hashmani, Emre Akbaş, Melahat Bilge Demirköz

categories: null | doi: null | id: 2402.16297 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16297v2 | updated: 2024-05-23T07:21:27Z | published: 2024-02-26T04:39:01Z
title: A Poisson-Gamma Dynamic Factor Model with Time-Varying Transition Dynamics
abstract:
Probabilistic approaches for handling count-valued time sequences have attracted a great deal of research attention because of their ability to infer explainable latent structures and to estimate uncertainties, and thus are especially suitable for dealing with \emph{noisy} and \emph{incomplete} count data. Among these models, Poisson-Gamma Dynamical Systems (PGDSs) are proven to be effective in capturing the evolving dynamics underlying observed count sequences. However, the state-of-the-art PGDS still fails to capture the \emph{time-varying} transition dynamics that are commonly observed in real-world count time sequences. To bridge this gap, a non-stationary PGDS is proposed to allow the underlying transition matrices to evolve over time, and the evolving transition matrices are modeled by sophisticatedly-designed Dirichlet Markov chains. Leveraging Dirichlet-Multinomial-Beta data augmentation techniques, a fully-conjugate and efficient Gibbs sampler is developed to perform posterior simulation. Experiments show that, in comparison with related models, the proposed non-stationary PGDS achieves improved predictive performance due to its capacity to learn non-stationary dependency structure captured by the time-evolving transition matrices.
authors: Jiahao Wang, Sikun Yang, Heinz Koeppl, Xiuzhen Cheng, Pengfei Hu, Guoming Zhang

categories: null | doi: null | id: 2402.16299 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16299v1 | updated: 2024-02-26T04:43:44Z | published: 2024-02-26T04:43:44Z
title: Against Filter Bubbles: Diversified Music Recommendation via Weighted Hypergraph Embedding Learning
abstract:
Recommender systems serve a dual purpose for users: sifting out inappropriate or mismatched information while accurately identifying items that align with their preferences. Numerous recommendation algorithms are designed to provide users with a personalized array of information tailored to their preferences. Nevertheless, excessive personalization can confine users within a "filter bubble". Consequently, achieving the right balance between accuracy and diversity in recommendations is a pressing concern. To address this challenge, exemplified by music recommendation, we introduce the Diversified Weighted Hypergraph music Recommendation algorithm (DWHRec). In the DWHRec algorithm, the initial connections between users and listened tracks are represented by a weighted hypergraph. Simultaneously, associations between artists, albums and tags with tracks are also appended to the hypergraph. To explore users' latent preferences, a hypergraph-based random walk embedding method is applied to the constructed hypergraph. In our investigation, accuracy is gauged by the alignment between the user and the track, whereas the array of recommended track types measures diversity. We rigorously compared DWHRec against seven state-of-the-art recommendation algorithms using two real-world music datasets. The experimental results validate DWHRec as a solution that adeptly harmonizes accuracy and diversity, delivering a more enriched musical experience. Beyond music recommendation, DWHRec can be extended to cater to other scenarios with similar data structures.
authors: Chaoguang Luo, Liuying Wen, Yong Qin, Liangwei Yang, Zhineng Hu, Philip S. Yu

categories: null | doi: null | id: 2402.16300 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16300v2 | updated: 2024-05-27T13:08:37Z | published: 2024-02-26T04:43:50Z
title: Conformalized Selective Regression
abstract:
Should prediction models always deliver a prediction? In the pursuit of maximum predictive performance, critical considerations of reliability and fairness are often overshadowed, particularly when it comes to the role of uncertainty. Selective regression, also known as the "reject option," allows models to abstain from predictions in cases of considerable uncertainty. Initially proposed seven decades ago, approaches to selective regression have mostly focused on distribution-based proxies for measuring uncertainty, particularly conditional variance. However, this focus neglects the significant influence of model-specific biases on a model's performance. In this paper, we propose a novel approach to selective regression by leveraging conformal prediction, which provides grounded confidence measures for individual predictions based on model-specific biases. In addition, we propose a standardized evaluation framework to allow proper comparison of selective regression approaches. Via an extensive experimental approach, we demonstrate how our proposed approach, conformalized selective regression, demonstrates an advantage over multiple state-of-the-art baselines.
authors: Anna Sokol, Nuno Moniz, Nitesh Chawla
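In the spirit of the abstract above, the sketch below shows selective regression built on locally-weighted split conformal prediction: it abstains wherever the conformal interval is wider than a budget. This is a generic illustration under our own assumptions (models, width budget, and names), not the authors' exact procedure or evaluation framework.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def fit_conformal_selector(X_train, y_train, X_cal, y_cal, alpha=0.1):
    """Locally-weighted split conformal prediction used as a selective regressor."""
    mean_model = GradientBoostingRegressor().fit(X_train, y_train)
    resid_train = np.abs(y_train - mean_model.predict(X_train))
    sigma_model = GradientBoostingRegressor().fit(X_train, resid_train)

    sigma_cal = np.maximum(sigma_model.predict(X_cal), 1e-6)
    scores = np.abs(y_cal - mean_model.predict(X_cal)) / sigma_cal
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

    def predict(X, width_budget=1.0):
        pred = mean_model.predict(X)
        half_width = q * np.maximum(sigma_model.predict(X), 1e-6)
        keep = 2 * half_width <= width_budget      # abstain on the rest
        return pred, half_width, keep
    return predict

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(3000, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.3 * (X[:, 0] > 0), size=3000)
    X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
    predict = fit_conformal_selector(X_tr, y_tr, X_cal, y_cal)
    pred, hw, keep = predict(X[:500], width_budget=0.8)
    print(f"kept {keep.mean():.0%} of predictions; mean half-width {hw.mean():.2f}")
```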
categories: null | doi: null | id: 2402.16302 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16302v1 | updated: 2024-02-26T04:58:42Z | published: 2024-02-26T04:58:42Z
title: Graph Diffusion Policy Optimization
abstract:
Recent research has made significant progress in optimizing diffusion models for specific downstream objectives, which is an important pursuit in fields such as graph generation for drug design. However, directly applying these models to graph diffusion presents challenges, resulting in suboptimal performance. This paper introduces graph diffusion policy optimization (GDPO), a novel approach to optimize graph diffusion models for arbitrary (e.g., non-differentiable) objectives using reinforcement learning. GDPO is based on an eager policy gradient tailored for graph diffusion models, developed through meticulous analysis and promising improved performance. Experimental results show that GDPO achieves state-of-the-art performance in various graph generation tasks with complex and diverse objectives. Code is available at https://github.com/sail-sg/GDPO.
authors: Yijing Liu, Chao Du, Tianyu Pang, Chongxuan Li, Wei Chen, Min Lin

categories: null | doi: null | id: 2402.16305 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16305v1 | updated: 2024-02-26T05:08:40Z | published: 2024-02-26T05:08:40Z
title: Referee Can Play: An Alternative Approach to Conditional Generation via Model Inversion
abstract:
As a dominant force in text-to-image generation tasks, Diffusion Probabilistic Models (DPMs) face a critical challenge in controllability, struggling to adhere strictly to complex, multi-faceted instructions. In this work, we aim to address this alignment challenge for conditional generation tasks. First, we provide an alternative view of state-of-the-art DPMs as a way of inverting advanced Vision-Language Models (VLMs). With this formulation, we naturally propose a training-free approach that bypasses the conventional sampling process associated with DPMs. By directly optimizing images with the supervision of discriminative VLMs, the proposed method can potentially achieve a better text-image alignment. As proof of concept, we demonstrate the pipeline with the pre-trained BLIP-2 model and identify several key designs for improved image generation. To further enhance the image fidelity, a Score Distillation Sampling module of Stable Diffusion is incorporated. By carefully balancing the two components during optimization, our method can produce high-quality images with near state-of-the-art performance on T2I-Compbench.
authors: Xuantong Liu, Tianyang Hu, Wenjia Wang, Kenji Kawaguchi, Yuan Yao

categories: null | doi: null | id: 2402.16310 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16310v3 | updated: 2024-06-07T01:14:26Z | published: 2024-02-26T05:28:36Z
title: REPLAY: Modeling Time-Varying Temporal Regularities of Human Mobility for Location Prediction over Sparse Trajectories
abstract:
Location prediction forecasts a user's location based on historical user mobility traces. To tackle the intrinsic sparsity issue of real-world user mobility traces, spatiotemporal contexts have been shown as significantly useful. Existing solutions mostly incorporate spatiotemporal distances between locations in mobility traces, either by feeding them as additional inputs to Recurrent Neural Networks (RNNs) or by using them to search for informative past hidden states for prediction. However, such distance-based methods fail to capture the time-varying temporal regularities of human mobility, where human mobility is often more regular in the morning than in other periods, for example; this suggests the usefulness of the actual timestamps besides the temporal distances. Against this background, we propose REPLAY, a general RNN architecture learning to capture the time-varying temporal regularities for location prediction. Specifically, REPLAY not only resorts to the spatiotemporal distances in sparse trajectories to search for the informative past hidden states, but also accommodates the time-varying temporal regularities by incorporating smoothed timestamp embeddings using Gaussian weighted averaging with timestamp-specific learnable bandwidths, which can flexibly adapt to the temporal regularities of different strengths across different timestamps. Our extensive evaluation compares REPLAY against a sizable collection of state-of-the-art techniques on two real-world datasets. Results show that REPLAY consistently and significantly outperforms state-of-the-art methods by 7.7%-10.9% in the location prediction task, and the bandwidths reveal interesting patterns of the time-varying temporal regularities.
authors: Bangchao Deng, Bingqing Qu, Pengyang Wang, Dingqi Yang, Benjamin Fankhauser, Philippe Cudre-Mauroux
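The smoothed timestamp embeddings described in this abstract can be illustrated with a few lines of NumPy: the embedding for a time slot is a Gaussian-weighted average of nearby slots' embeddings, with a slot-specific bandwidth. In REPLAY the bandwidths are learnable; here they are fixed inputs, and the hour-of-week period and names are our own assumptions.

```python
import numpy as np

def smoothed_timestamp_embedding(E, t, bandwidths, period=168):
    """Gaussian-weighted average of timestamp embeddings around slot t
    (e.g., hour-of-week with period 168), using a bandwidth specific to t."""
    slots = np.arange(E.shape[0])
    d = np.minimum(np.abs(slots - t), period - np.abs(slots - t))  # circular distance
    w = np.exp(-0.5 * (d / bandwidths[t]) ** 2)
    w /= w.sum()
    return w @ E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.normal(size=(168, 16))        # one 16-d embedding per hour-of-week
    sharp = np.full(168, 0.5)             # strong regularity: nearly one-hot weights
    broad = np.full(168, 12.0)            # weak regularity: heavy smoothing
    e_sharp = smoothed_timestamp_embedding(E, t=9, bandwidths=sharp)
    e_broad = smoothed_timestamp_embedding(E, t=9, bandwidths=broad)
    print(np.linalg.norm(e_sharp - E[9]), np.linalg.norm(e_broad - E[9]))
```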
categories: null | doi: null | id: 2402.16312 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16312v1 | updated: 2024-02-26T05:31:14Z | published: 2024-02-26T05:31:14Z
title: Federated Contextual Cascading Bandits with Asynchronous Communication and Heterogeneous Users
abstract:
We study the problem of federated contextual combinatorial cascading bandits, where $|\mathcal{U}|$ agents collaborate under the coordination of a central server to provide tailored recommendations to the $|\mathcal{U}|$ corresponding users. Existing works consider either a synchronous framework, necessitating full agent participation and global synchronization, or assume user homogeneity with identical behaviors. We overcome these limitations by considering (1) federated agents operating in an asynchronous communication paradigm, where no mandatory synchronization is required and all agents communicate independently with the server, (2) heterogeneous user behaviors, where users can be stratified into $J \le |\mathcal{U}|$ latent user clusters, each exhibiting distinct preferences. For this setting, we propose a UCB-type algorithm with delicate communication protocols. Through theoretical analysis, we give sub-linear regret bounds on par with those achieved in the synchronous framework, while incurring only logarithmic communication costs. Empirical evaluation on synthetic and real-world datasets validates our algorithm's superior performance in terms of regrets and communication costs.
authors: Hantao Yang, Xutong Liu, Zhiyong Wang, Hong Xie, John C. S. Lui, Defu Lian, Enhong Chen

categories: null | doi: null | id: 2402.16321 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16321v1 | updated: 2024-02-26T06:01:38Z | published: 2024-02-26T06:01:38Z
title: Self-Supervised Speech Quality Estimation and Enhancement Using Only Clean Speech
abstract:
Speech quality estimation has recently undergone a paradigm shift from human-hearing expert designs to machine-learning models. However, current models rely mainly on supervised learning, which is time-consuming and expensive for label collection. To solve this problem, we propose VQScore, a self-supervised metric for evaluating speech based on the quantization error of a vector-quantized-variational autoencoder (VQ-VAE). The training of VQ-VAE relies on clean speech; hence, large quantization errors can be expected when the speech is distorted. To further improve correlation with real quality scores, domain knowledge of speech processing is incorporated into the model design. We found that the vector quantization mechanism could also be used for self-supervised speech enhancement (SE) model training. To improve the robustness of the encoder for SE, a novel self-distillation mechanism combined with adversarial training is introduced. In summary, the proposed speech quality estimation method and enhancement models require only clean speech for training without any label requirements. Experimental results show that the proposed VQScore and enhancement model are competitive with supervised baselines. The code will be released after publication.
authors: Szu-Wei Fu, Kuo-Hsuan Hung, Yu Tsao, Yu-Chiang Frank Wang
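The scoring idea in this abstract, distance to a codebook trained only on clean data as a quality proxy, can be sketched without the VQ-VAE itself. Below, k-means centroids on toy "clean" features stand in for the learned codebook, purely to illustrate the rule that distorted inputs quantize poorly; none of this is the paper's VQScore implementation, and all names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(clean_features, num_codes=64, seed=0):
    """Stand-in for a VQ-VAE codebook: k-means centroids fit on clean-only features."""
    return KMeans(n_clusters=num_codes, n_init=10, random_state=seed).fit(clean_features)

def vq_style_score(codebook, features):
    """Quality proxy: negative mean squared distance to the nearest codeword.
    Distorted inputs land farther from the codebook, so the score drops."""
    d = codebook.transform(features)          # distances to every codeword
    return -np.mean(d.min(axis=1) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(5000, 32))                    # toy "clean speech" features
    codebook = fit_codebook(clean)
    test_clean = rng.normal(size=(200, 32))
    test_noisy = test_clean + rng.normal(scale=1.5, size=test_clean.shape)
    print("clean:", vq_style_score(codebook, test_clean))
    print("noisy:", vq_style_score(codebook, test_noisy))  # lower (worse) score
```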
categories: null | doi: null | id: 2402.16324 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16324v2 | updated: 2024-06-03T02:37:28Z | published: 2024-02-26T06:08:25Z
title: Achieving $\tilde{O}(1/\epsilon)$ Sample Complexity for Constrained Markov Decision Process
abstract:
We consider the reinforcement learning problem for the constrained Markov decision process (CMDP), which plays a central role in satisfying safety or resource constraints in sequential learning and decision-making. In this problem, we are given finite resources and an MDP with unknown transition probabilities. At each stage, we take an action, collecting a reward and consuming some resources, all assumed to be unknown and need to be learned over time. In this work, we take the first step towards deriving optimal problem-dependent guarantees for the CMDP problems. We derive a logarithmic regret bound, which translates into a $O(\frac{1}{\Delta\cdot\epsilon}\cdot\log^2(1/\epsilon))$ sample complexity bound, with $\Delta$ being a problem-dependent parameter, yet independent of $\epsilon$. Our sample complexity bound improves upon the state-of-the-art $O(1/\epsilon^2)$ sample complexity for CMDP problems established in the previous literature, in terms of the dependency on $\epsilon$. To achieve this advance, we develop a new framework for analyzing CMDP problems. To be specific, our algorithm operates in the primal space and we resolve the primal LP for the CMDP problem at each period in an online manner, with \textit{adaptive} remaining resource capacities. The key elements of our algorithm are: i) a characterization of the instance hardness via LP basis, ii) an eliminating procedure that identifies one optimal basis of the primal LP, and iii) a resolving procedure that is adaptive to the remaining resources and sticks to the characterized optimal basis.
authors: Jiashuo Jiang, Yinyu Ye

categories: null | doi: null | id: 2402.16326 | year: null | venue: null
link: http://arxiv.org/abs/2402.16326v3 | updated: 2024-03-31T08:45:51Z | published: 2024-02-26T06:20:28Z
title: A Provably Accurate Randomized Sampling Algorithm for Logistic Regression
abstract:
In statistics and machine learning, logistic regression is a widely-used supervised learning technique primarily employed for binary classification tasks. When the number of observations greatly exceeds the number of predictor variables, we present a simple, randomized sampling-based algorithm for logistic regression problem that guarantees high-quality approximations to both the estimated probabilities and the overall discrepancy of the model. Our analysis builds upon two simple structural conditions that boil down to randomized matrix multiplication, a fundamental and well-understood primitive of randomized numerical linear algebra. We analyze the properties of estimated probabilities of logistic regression when leverage scores are used to sample observations, and prove that accurate approximations can be achieved with a sample whose size is much smaller than the total number of observations. To further validate our theoretical findings, we conduct comprehensive empirical evaluations. Overall, our work sheds light on the potential of using randomized sampling approaches to efficiently approximate the estimated probabilities in logistic regression, offering a practical and computationally efficient solution for large-scale datasets.
authors: Agniva Chowdhury, Pradeep Ramuhalli
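Leverage-score sampling, the primitive this abstract builds on, is simple to demonstrate: compute row leverage scores from a thin QR factorization, sample rows with probability proportional to those scores, and fit logistic regression on the subsample. This is a generic sketch under our own choices (sample size, solver, names), not the paper's estimator or its guarantees.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def leverage_scores(X):
    """Statistical leverage scores of the rows of X via a thin QR factorization."""
    Q, _ = np.linalg.qr(X, mode="reduced")
    return np.sum(Q ** 2, axis=1)

def leverage_sampled_logreg(X, y, sample_size, seed=0):
    """Fit logistic regression on rows sampled proportionally to leverage scores."""
    rng = np.random.default_rng(seed)
    p = leverage_scores(X)
    p = p / p.sum()
    idx = rng.choice(len(y), size=sample_size, replace=False, p=p)
    return LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 100_000, 20
    X = rng.normal(size=(n, d))
    beta = rng.normal(size=d)
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta))).astype(int)
    full = LogisticRegression(max_iter=1000).fit(X, y)
    sub = leverage_sampled_logreg(X, y, sample_size=2000)
    print("agreement of predictions:", np.mean(full.predict(X) == sub.predict(X)))
```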
categories: null | doi: null | id: 2402.16346 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16346v2 | updated: 2024-06-01T09:21:46Z | published: 2024-02-26T07:00:24Z
title: Boosting Graph Pooling with Persistent Homology
abstract:
Recently, there has been an emerging trend to integrate persistent homology (PH) into graph neural networks (GNNs) to enrich expressive power. However, naively plugging PH features into GNN layers always results in marginal improvement with low interpretability. In this paper, we investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that filtration operation in PH naturally aligns graph pooling in a cut-off manner. In this fashion, message passing in the coarsened graph acts along persistent pooled topology, leading to improved performance. Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gain over several popular datasets, demonstrating its wide applicability and flexibility.
authors: Chaolong Ying, Xinjian Zhao, Tianshu Yu

categories: null | doi: null | id: 2402.16349 | year: null | venue: null
link: http://arxiv.org/pdf/2402.16349v1 | updated: 2024-02-26T07:07:00Z | published: 2024-02-26T07:07:00Z
title: C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory
abstract:
Generative Adversarial Imitation Learning (GAIL) trains a generative policy to mimic a demonstrator. It uses on-policy Reinforcement Learning (RL) to optimize a reward signal derived from a GAN-like discriminator. A major drawback of GAIL is its training instability - it inherits the complex training dynamics of GANs, and the distribution shift introduced by RL. This can cause oscillations during training, harming its sample efficiency and final policy performance. Recent work has shown that control theory can help with the convergence of a GAN's training. This paper extends this line of work, conducting a control-theoretic analysis of GAIL and deriving a novel controller that not only pushes GAIL to the desired equilibrium but also achieves asymptotic stability in a 'one-step' setting. Based on this, we propose a practical algorithm 'Controlled-GAIL' (C-GAIL). On MuJoCo tasks, our controlled variant is able to speed up the rate of convergence, reduce the range of oscillation and match the expert's distribution more closely both for vanilla GAIL and GAIL-DAC.
|
[
"['Tianjiao Luo' 'Tim Pearce' 'Huayu Chen' 'Jianfei Chen' 'Jun Zhu']"
] |
null | null |
2402.16353
| null | null |
http://arxiv.org/pdf/2402.16353v1
|
2024-02-26T07:18:57Z
|
2024-02-26T07:18:57Z
|
An optimal tradeoff between entanglement and copy complexity for state
tomography
|
There has been significant interest in understanding how practical constraints on contemporary quantum devices impact the complexity of quantum learning. For the classic question of tomography, recent work tightly characterized the copy complexity for any protocol that can only measure one copy of the unknown state at a time, showing it is polynomially worse than if one can make fully-entangled measurements. While we now have a fairly complete picture of the rates for such tasks in the near-term and fault-tolerant regimes, it remains poorly understood what the landscape in between looks like. In this work, we study tomography in the natural setting where one can make measurements of $t$ copies at a time. For sufficiently small $\epsilon$, we show that for any $t \le d^2$, $\widetilde{\Theta}(\frac{d^3}{\sqrt{t}\epsilon^2})$ copies are necessary and sufficient to learn an unknown $d$-dimensional state $\rho$ to trace distance $\epsilon$. This gives a smooth and optimal interpolation between the known rates for single-copy and fully-entangled measurements. To our knowledge, this is the first smooth entanglement-copy tradeoff known for any quantum learning task, and for tomography, no intermediate point on this curve was known, even at $t = 2$. An important obstacle is that unlike the optimal single-copy protocol, the optimal fully-entangled protocol is inherently biased and thus precludes naive batching approaches. Instead, we devise a novel two-stage procedure that uses Keyl's algorithm to refine a crude estimate for $\rho$ based on single-copy measurements. A key insight is to use Schur-Weyl sampling not to estimate the spectrum of $\rho$, but to estimate the deviation of $\rho$ from the maximally mixed state. When $\rho$ is far from the maximally mixed state, we devise a novel quantum splitting procedure that reduces to the case where $\rho$ is close to maximally mixed.
|
[
"['Sitan Chen' 'Jerry Li' 'Allen Liu']"
] |
null | null |
2402.16354
| null | null |
http://arxiv.org/pdf/2402.16354v2
|
2024-05-27T14:31:38Z
|
2024-02-26T07:19:23Z
|
Language-guided Skill Learning with Temporal Variational Inference
|
We present an algorithm for skill discovery from expert demonstrations. The algorithm first utilizes Large Language Models (LLMs) to propose an initial segmentation of the trajectories. Following that, a hierarchical variational inference framework incorporates the LLM-generated segmentation information to discover reusable skills by merging trajectory segments. To further control the trade-off between compression and reusability, we introduce a novel auxiliary objective based on the Minimum Description Length principle that helps guide this skill discovery process. Our results demonstrate that agents equipped with our method are able to discover skills that help accelerate learning and outperform baseline skill learning approaches on new long-horizon tasks in BabyAI, a grid world navigation environment, as well as ALFRED, a household simulation environment.
|
[
"['Haotian Fu' 'Pratyusha Sharma' 'Elias Stengel-Eskin' 'George Konidaris'\n 'Nicolas Le Roux' 'Marc-Alexandre Côté' 'Xingdi Yuan']"
] |
null | null |
2402.16358
| null | null |
http://arxiv.org/pdf/2402.16358v2
|
2024-04-23T06:18:32Z
|
2024-02-26T07:22:51Z
|
An Integrated Data Processing Framework for Pretraining Foundation
Models
|
The ability of foundation models heavily relies on large-scale, diverse, and high-quality pretraining data. In order to improve data quality, researchers and practitioners often have to manually curate datasets from different sources and develop a dedicated data cleansing pipeline for each data repository. In the absence of a unified data processing framework, this process is repetitive and cumbersome. To mitigate this issue, we propose a data processing framework that integrates a Processing Module which consists of a series of operators at different granularity levels, and an Analyzing Module which supports probing and evaluation of the refined data. The proposed framework is easy to use and highly flexible. In this demo paper, we first introduce how to use this framework with some example use cases and then demonstrate its effectiveness in improving the data quality with an automated evaluation with ChatGPT and an end-to-end evaluation in pretraining the GPT-2 model. The code and demonstration videos are accessible on GitHub.
|
[
"['Yiding Sun' 'Feng Wang' 'Yutao Zhu' 'Wayne Xin Zhao' 'Jiaxin Mao']"
] |
null | null |
2402.16359
| null | null |
http://arxiv.org/pdf/2402.16359v2
|
2024-02-27T18:54:40Z
|
2024-02-26T07:24:32Z
|
Feedback Efficient Online Fine-Tuning of Diffusion Models
|
Diffusion models excel at modeling complex data distributions, including those of images, proteins, and small molecules. However, in many cases, our goal is to model parts of the distribution that maximize certain properties: for example, we may want to generate images with high aesthetic quality, or molecules with high bioactivity. It is natural to frame this as a reinforcement learning (RL) problem, in which the objective is to fine-tune a diffusion model to maximize a reward function that corresponds to some property. Even with access to online queries of the ground-truth reward function, efficiently discovering high-reward samples can be challenging: they might have a low probability in the initial distribution, and there might be many infeasible samples that do not even have a well-defined reward (e.g., unnatural images or physically impossible molecules). In this work, we propose a novel reinforcement learning procedure that efficiently explores on the manifold of feasible samples. We present a theoretical analysis providing a regret guarantee, as well as empirical validation across three domains: images, biological sequences, and molecules.
|
[
"['Masatoshi Uehara' 'Yulai Zhao' 'Kevin Black' 'Ehsan Hajiramezanali'\n 'Gabriele Scalia' 'Nathaniel Lee Diamant' 'Alex M Tseng' 'Sergey Levine'\n 'Tommaso Biancalani']"
] |
null | null |
2402.16364
| null | null |
http://arxiv.org/pdf/2402.16364v1
|
2024-02-26T07:33:28Z
|
2024-02-26T07:33:28Z
|
Where Do We Go from Here? Multi-scale Allocentric Relational Inference
from Natural Spatial Descriptions
|
When communicating routes in natural language, the concept of \emph{acquired spatial knowledge} is crucial for geographic information retrieval (GIR) and in spatial cognitive research. However, NLP navigation studies often overlook the impact of such acquired knowledge on textual descriptions. Current navigation studies concentrate on egocentric local descriptions (e.g., `it will be on your right') that require reasoning over the agent's local perception. These instructions are typically given as a sequence of steps, with each action-step explicitly mentioning and being followed by a landmark that the agent can use to verify they are on the right path (e.g., `turn right and then you will see...'). In contrast, descriptions based on knowledge acquired through a map provide a complete view of the environment and capture its overall structure. These instructions (e.g., `it is south of Central Park and a block north of a police station') are typically non-sequential, contain allocentric relations, with multiple spatial relations and implicit actions, without any explicit verification. This paper introduces the Rendezvous (RVS) task and dataset, which includes 10,404 examples of English geospatial instructions for reaching a target location using map-knowledge. Our analysis reveals that RVS exhibits a richer use of spatial allocentric relations, and requires resolving more spatial relations simultaneously compared to previous text-based navigation benchmarks.
|
[
"['Tzuf Paz-Argaman' 'Sayali Kulkarni' 'John Palowitch' 'Jason Baldridge'\n 'Reut Tsarfaty']"
] |
null | null |
2402.16369
| null | null |
http://arxiv.org/pdf/2402.16369v1
|
2024-02-26T07:47:12Z
|
2024-02-26T07:47:12Z
|
Generative AI in Vision: A Survey on Models, Metrics and Applications
|
Generative AI models have revolutionized various fields by enabling the creation of realistic and diverse data samples. Among these models, diffusion models have emerged as a powerful approach for generating high-quality images, text, and audio. This survey paper provides a comprehensive overview of generative AI diffusion and legacy models, focusing on their underlying techniques, applications across different domains, and their challenges. We delve into the theoretical foundations of diffusion models, including concepts such as denoising diffusion probabilistic models (DDPM) and score-based generative modeling. Furthermore, we explore the diverse applications of these models in text-to-image, image inpainting, and image super-resolution, along with others, showcasing their potential in creative tasks and data augmentation. By synthesizing existing research and highlighting critical advancements in this field, this survey aims to provide researchers and practitioners with a comprehensive understanding of generative AI diffusion and legacy models and inspire future innovations in this exciting area of artificial intelligence.
|
[
"['Gaurav Raut' 'Apoorv Singh']"
] |
null | null |
2402.16374
| null | null |
http://arxiv.org/pdf/2402.16374v2
|
2024-03-07T05:07:49Z
|
2024-02-26T07:52:40Z
|
Graph Learning under Distribution Shifts: A Comprehensive Survey on
Domain Adaptation, Out-of-distribution, and Continual Learning
|
Graph learning plays a pivotal role and has gained significant attention in various application scenarios, from social network analysis to recommendation systems, for its effectiveness in modeling complex data relations represented by graph structural data. In reality, real-world graph data typically show dynamics over time, with changing node attributes and edge structure, leading to the severe graph data distribution shift issue. This issue is compounded by the diverse and complex nature of distribution shifts, which can significantly degrade the generalization and adaptation capabilities of graph learning methods, posing a substantial challenge to their effectiveness. In this survey, we provide a comprehensive review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning. Concretely, according to the observability of distributions in the inference stage and the availability of sufficient supervision information in the training stage, we categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning. For each scenario, a detailed taxonomy is proposed, with specific descriptions and discussions of existing progress made in distribution-shifted graph learning. Additionally, we discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field. The survey is positioned to provide general guidance for the development of effective graph learning algorithms in handling graph distribution shifts, and to stimulate future research and advancements in this area.
|
[
"['Man Wu' 'Xin Zheng' 'Qin Zhang' 'Xiao Shen' 'Xiong Luo' 'Xingquan Zhu'\n 'Shirui Pan']"
] |
null | null |
2402.16380
| null | null |
http://arxiv.org/pdf/2402.16380v1
|
2024-02-26T07:58:33Z
|
2024-02-26T07:58:33Z
|
An Automated End-to-End Open-Source Software for High-Quality
Text-to-Speech Dataset Generation
|
Data availability is crucial for advancing artificial intelligence applications, including voice-based technologies. As content creation, particularly in social media, experiences increasing demand, translation and text-to-speech (TTS) technologies have become essential tools. Notably, the performance of these TTS technologies is highly dependent on the quality of the training data, emphasizing the mutual dependence of data availability and technological progress. This paper introduces an end-to-end tool to generate high-quality datasets for text-to-speech (TTS) models to address this critical need for high-quality data. The contributions of this work are manifold and include: the integration of language-specific phoneme distribution into sample selection, automation of the recording process, automated and human-in-the-loop quality assurance of recordings, and processing of recordings to meet specified formats. The proposed application aims to streamline the dataset creation process for TTS models through these features, thereby facilitating advancements in voice-based technologies.
|
[
"['Ahmet Gunduz' 'Kamer Ali Yuksel' 'Kareem Darwish' 'Golara Javadi'\n 'Fabio Minazzi' 'Nicola Sobieski' 'Sebastien Bratieres']"
] |
null | null |
2402.16383
| null | null |
http://arxiv.org/pdf/2402.16383v1
|
2024-02-26T08:08:30Z
|
2024-02-26T08:08:30Z
|
Self Supervised Correlation-based Permutations for Multi-View Clustering
|
Fusing information from different modalities can enhance data analysis tasks, including clustering. However, existing multi-view clustering (MVC) solutions are limited to specific domains or rely on a suboptimal and computationally demanding two-stage procedure of representation and clustering. We propose an end-to-end deep learning-based MVC framework for general data (image, tabular, etc.). Our approach involves learning meaningful fused data representations with a novel permutation-based canonical correlation objective. Concurrently, we learn cluster assignments by identifying consistent pseudo-labels across multiple views. We demonstrate the effectiveness of our model using ten MVC benchmark datasets. Theoretically, we show that our model approximates the supervised linear discriminant analysis (LDA) representation. Additionally, we provide an error bound induced by false pseudo-label annotations.
|
[
"['Ran Eisenberg' 'Jonathan Svirsky' 'Ofir Lindenbaum']"
] |
null | null |
2402.16387
| null | null |
http://arxiv.org/pdf/2402.16387v1
|
2024-02-26T08:22:22Z
|
2024-02-26T08:22:22Z
|
On the Generalization Capability of Temporal Graph Learning Algorithms:
Theoretical Insights and a Simpler Method
|
Temporal Graph Learning (TGL) has become a prevalent technique across diverse real-world applications, especially in domains where data can be represented as a graph and evolves over time. Although TGL has recently seen notable progress in algorithmic solutions, its theoretical foundations remain largely unexplored. This paper aims at bridging this gap by investigating the generalization ability of different TGL algorithms (e.g., GNN-based, RNN-based, and memory-based methods) under the finite-width over-parameterized regime. We establish the connection between the generalization error of TGL algorithms and "the number of layers/steps" in the GNN-/RNN-based TGL methods and "the feature-label alignment (FLA) score", where FLA can be used as a proxy for the expressive power and explains the performance of memory-based methods. Guided by our theoretical analysis, we propose Simplified-Temporal-Graph-Network, which enjoys a small generalization error, improved overall performance, and lower model complexity. Extensive experiments on real-world datasets demonstrate the effectiveness of our method. Our theoretical findings and proposed algorithm offer essential insights into TGL from a theoretical standpoint, laying the groundwork for designing practical TGL algorithms in future studies.
|
[
"['Weilin Cong' 'Jian Kang' 'Hanghang Tong' 'Mehrdad Mahdavi']"
] |
null | null |
2402.16388
| null | null |
http://arxiv.org/pdf/2402.16388v2
|
2024-03-02T13:40:04Z
|
2024-02-26T08:22:40Z
|
Uncertainty Quantification in Anomaly Detection with Cross-Conformal
$p$-Values
|
Given the growing significance of reliable, trustworthy, and explainable machine learning, the requirement of uncertainty quantification for anomaly detection systems has become increasingly important. In this context, effectively controlling Type I error rates ($\alpha$) without compromising the statistical power ($1-\beta$) of these systems can build trust and reduce costs related to false discoveries, particularly when follow-up procedures are expensive. Leveraging the principles of conformal prediction emerges as a promising approach for providing respective statistical guarantees by calibrating a model's uncertainty. This work introduces a novel framework for anomaly detection, termed cross-conformal anomaly detection, building upon well-known cross-conformal methods designed for prediction tasks. With that, it addresses a natural research gap by extending previous works in the context of inductive conformal anomaly detection, relying on the split-conformal approach for model calibration. Drawing on insights from conformal prediction, we demonstrate that the derived methods for calculating cross-conformal $p$-values strike a practical compromise between statistical efficiency (full-conformal) and computational efficiency (split-conformal) for uncertainty-quantified anomaly detection on benchmark datasets.
|
[
"['Oliver Hennhöfer' 'Christine Preisach']"
] |
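A minimal sketch of the cross-conformal construction described in the abstract above, assuming an IsolationForest detector, five folds, and a plain average as the aggregation rule (other aggregations, such as twice the mean, appear in the literature); none of these choices are taken from the paper.

```python
# Cross-conformal p-value sketch: each fold contributes a split-conformal
# p-value computed from its held-out calibration scores, then the per-fold
# p-values are aggregated.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 5))        # nominal (non-anomalous) data only
x_test = rng.normal(size=(1, 5)) + 4.0     # an obvious outlier

p_values = []
for fit_idx, cal_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    det = IsolationForest(random_state=0).fit(X_train[fit_idx])
    # Negate score_samples so that higher = more anomalous (nonconformity).
    cal_scores = -det.score_samples(X_train[cal_idx])
    test_score = -det.score_samples(x_test)[0]
    n_cal = len(cal_scores)
    # Split-conformal p-value from this fold's calibration set.
    p_fold = (1 + np.sum(cal_scores >= test_score)) / (n_cal + 1)
    p_values.append(p_fold)

# Cross-conformal aggregation: here simply the average of the per-fold p-values.
print("cross-conformal p-value:", float(np.mean(p_values)))
```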
null | null |
2402.16402
| null | null |
http://arxiv.org/pdf/2402.16402v1
|
2024-02-26T08:55:10Z
|
2024-02-26T08:55:10Z
|
Graph Learning with Distributional Edge Layouts
|
Graph Neural Networks (GNNs) learn from graph-structured data by passing local messages between neighboring nodes along edges on certain topological layouts. Typically, these topological layouts in modern GNNs are deterministically computed (e.g., attention-based GNNs) or locally sampled (e.g., GraphSage) under heuristic assumptions. In this paper, we pose, for the first time, that these layouts can be globally sampled via Langevin dynamics following a Boltzmann distribution equipped with an explicit physical energy, leading to higher feasibility in the physical world. We argue that such a collection of sampled/optimized layouts can capture the wide energy distribution and bring extra expressivity on top of the WL test, therefore easing downstream tasks. As such, we propose Distributional Edge Layouts (DELs) to serve as a complement to a variety of GNNs. DEL is a pre-processing strategy independent of subsequent GNN variants, thus being highly flexible. Experimental results demonstrate that DELs consistently and substantially improve a series of GNN baselines, achieving state-of-the-art performance on multiple datasets.
|
[
"['Xinjian Zhao' 'Chaolong Ying' 'Tianshu Yu']"
] |
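The Langevin sampling ingredient behind DELs can be sketched in the abstract: unadjusted Langevin dynamics targets a Boltzmann distribution p(x) proportional to exp(-E(x)). The quadratic toy energy, step size, and burn-in below are placeholders, not the paper's physical layout energy.

```python
# Minimal sketch of unadjusted Langevin dynamics targeting exp(-E(x)).
# With E(x) = 0.5 * ||x||^2 the target is a standard Gaussian, so the sample
# mean should be near 0 and the sample covariance near the identity.
import numpy as np

def grad_energy(x):
    # gradient of E(x) = 0.5 * ||x||^2
    return x

rng = np.random.default_rng(2)
step, n_steps = 1e-2, 5000
x = rng.normal(size=2)
samples = []
for _ in range(n_steps):
    noise = rng.normal(size=x.shape)
    x = x - step * grad_energy(x) + np.sqrt(2.0 * step) * noise
    samples.append(x.copy())

samples = np.array(samples[1000:])          # discard burn-in
print("sample mean:", samples.mean(axis=0))
print("sample covariance:\n", np.cov(samples.T))
```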
null | null |
2402.16408
| null | null |
http://arxiv.org/pdf/2402.16408v1
|
2024-02-26T09:04:07Z
|
2024-02-26T09:04:07Z
|
Stable Training of Normalizing Flows for High-dimensional Variational
Inference
|
Variational inference with normalizing flows (NFs) is an increasingly popular alternative to MCMC methods. In particular, NFs based on coupling layers (Real NVPs) are frequently used due to their good empirical performance. In theory, increasing the depth of normalizing flows should lead to more accurate posterior approximations. However, in practice, training deep normalizing flows for approximating high-dimensional posterior distributions is often infeasible due to the high variance of the stochastic gradients. In this work, we show that previous methods for stabilizing the variance of stochastic gradient descent can be insufficient to achieve stable training of Real NVPs. As the source of the problem, we identify that, during training, samples often exhibit unusually high values. As a remedy, we propose a combination of two methods: (1) soft-thresholding of the scale in Real NVPs, and (2) a bijective soft log transformation of the samples. We evaluate these and other previously proposed modifications on several challenging target distributions, including a high-dimensional horseshoe logistic regression model. Our experiments show that with our modifications, stable training of Real NVPs for posteriors with several thousand dimensions is possible, allowing for more accurate marginal likelihood estimation via importance sampling. Moreover, we evaluate several common training techniques and architecture choices and provide practical advice for training NFs for high-dimensional variational inference.
|
[
"['Daniel Andrade']"
] |
null | null |
2402.16412
| null | null |
http://arxiv.org/pdf/2402.16412v1
|
2024-02-26T09:11:12Z
|
2024-02-26T09:11:12Z
|
TOTEM: TOkenized Time Series EMbeddings for General Time Series Analysis
|
The field of general time series analysis has recently begun to explore unified modeling, where a common architectural backbone can be retrained on a specific task for a specific dataset. In this work, we approach unification from a complementary vantage point: unification across tasks and domains. To this end, we explore the impact of discrete, learnt, time series data representations that enable generalist, cross-domain training. Our method, TOTEM, or TOkenized Time Series EMbeddings, proposes a simple tokenizer architecture that embeds time series data from varying domains using a discrete vectorized representation learned in a self-supervised manner. TOTEM works across multiple tasks and domains with minimal to no tuning. We study the efficacy of TOTEM with an extensive evaluation on 17 real world time series datasets across 3 tasks. We evaluate both the specialist (i.e., training a model on each domain) and generalist (i.e., training a single model on many domains) settings, and show that TOTEM matches or outperforms previous best methods on several popular benchmarks. The code can be found at: https://github.com/SaberaTalukder/TOTEM.
|
[
"['Sabera Talukder' 'Yisong Yue' 'Georgia Gkioxari']"
] |
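The discrete, vector-quantized representation that a TOTEM-style tokenizer relies on reduces, at its core, to a nearest-codebook lookup. The codebook size, latent dimension, and the random stand-in for encoder outputs below are assumptions for illustration only, not details of the paper's tokenizer.

```python
# Vector-quantization step sketch: each latent vector is replaced by the index
# of its nearest codebook entry, yielding discrete "time series tokens".
import numpy as np

rng = np.random.default_rng(7)
codebook = rng.normal(size=(256, 64))      # K codes of dimension D (assumed)
latents = rng.normal(size=(10, 64))        # stand-in for encoder outputs

# Pairwise squared distances between latents and codes, then argmin per latent.
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
tokens = d2.argmin(axis=1)
print("discrete tokens:", tokens)
```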
null | null |
2402.16435
| null | null |
http://arxiv.org/pdf/2402.16435v1
|
2024-02-26T09:32:28Z
|
2024-02-26T09:32:28Z
|
Training Implicit Generative Models via an Invariant Statistical Loss
|
Implicit generative models have the capability to learn arbitrary complex data distributions. On the downside, training requires telling apart real data from artificially-generated ones using adversarial discriminators, leading to unstable training and mode-dropping issues. As reported by Zahee et al. (2017), even in the one-dimensional (1D) case, training a generative adversarial network (GAN) is challenging and often suboptimal. In this work, we develop a discriminator-free method for training one-dimensional (1D) generative implicit models and subsequently expand this method to accommodate multivariate cases. Our loss function is a discrepancy measure between a suitably chosen transformation of the model samples and a uniform distribution; hence, it is invariant with respect to the true distribution of the data. We first formulate our method for 1D random variables, providing an effective solution for approximate reparameterization of arbitrary complex distributions. Then, we consider the temporal setting (both univariate and multivariate), in which we model the conditional distribution of each sample given the history of the process. We demonstrate through numerical simulations that this new method yields promising results, successfully learning true distributions in a variety of scenarios and mitigating some of the well-known problems that state-of-the-art implicit methods present.
|
[
"['José Manuel de Frutos' 'Pablo M. Olmos' 'Manuel A. Vázquez'\n 'Joaquín Míguez']"
] |
null | null |
2402.16442
| null | null |
http://arxiv.org/pdf/2402.16442v1
|
2024-02-26T09:38:39Z
|
2024-02-26T09:38:39Z
|
On Distributed Larger-Than-Memory Subset Selection With Pairwise
Submodular Functions
|
Many learning problems hinge on the fundamental problem of subset selection, i.e., identifying a subset of important and representative points. For example, selecting the most significant samples in ML training can not only reduce training costs but also enhance model quality. Submodularity, a discrete analogue of convexity, is commonly used for solving subset selection problems. However, existing algorithms for optimizing submodular functions are sequential, and the prior distributed methods require at least one central machine to fit the target subset. In this paper, we relax the requirement of having a central machine for the target subset by proposing a novel distributed bounding algorithm with provable approximation guarantees. The algorithm iteratively bounds the minimum and maximum utility values to select high quality points and discard the unimportant ones. When bounding does not find the complete subset, we use a multi-round, partition-based distributed greedy algorithm to identify the remaining subset. We show that these algorithms find high quality subsets on CIFAR-100 and ImageNet with marginal or no loss in quality compared to centralized methods, and scale to a dataset with 13 billion points.
|
[
"['Maximilian Böther' 'Abraham Sebastian' 'Pranjal Awasthi' 'Ana Klimovic'\n 'Srikumar Ramalingam']"
] |
null | null |
2402.16463
| null | null |
http://arxiv.org/pdf/2402.16463v1
|
2024-02-26T10:11:28Z
|
2024-02-26T10:11:28Z
|
Learning to Schedule Online Tasks with Bandit Feedback
|
Online task scheduling serves an integral role for task-intensive applications in cloud computing and crowdsourcing. Optimal scheduling can enhance system performance, typically measured by the reward-to-cost ratio, under some task arrival distribution. On one hand, both reward and cost are dependent on task context (e.g., evaluation metric) and remain black-box in practice. These render reward and cost hard to model and thus unknown before decision making. On the other hand, task arrival behaviors remain sensitive to factors like unpredictable system fluctuation whereby a prior estimation or the conventional assumption of arrival distribution (e.g., Poisson) may fail. This implies another practical yet often neglected challenge, i.e., uncertain task arrival distribution. Towards effective scheduling under a stationary environment with various uncertainties, we propose a double-optimistic learning based Robbins-Monro (DOL-RM) algorithm. Specifically, DOL-RM integrates a learning module that incorporates optimistic estimation for the reward-to-cost ratio and a decision module that utilizes the Robbins-Monro method to implicitly learn the task arrival distribution while making scheduling decisions. Theoretically, DOL-RM achieves a convergence-gap guarantee and no-regret learning with a sub-linear regret of $O(T^{3/4})$, which is the first result for online task scheduling under uncertain task arrival distribution and unknown reward and cost. Our numerical results in a synthetic experiment and a real-world application demonstrate the effectiveness of DOL-RM in achieving the best cumulative reward-to-cost ratio compared with other state-of-the-art baselines.
|
[
"['Yongxin Xu' 'Shangshang Wang' 'Hengquan Guo' 'Xin Liu' 'Ziyu Shao']"
] |
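For readers unfamiliar with the Robbins-Monro ingredient of DOL-RM, a generic stochastic-approximation sketch is shown below; the mean function, noise model, and step-size schedule are textbook placeholders, not anything taken from the paper.

```python
# Generic Robbins-Monro stochastic approximation: find the root of
# h(theta) = E[Y | theta] when only noisy observations Y are available.
import numpy as np

def noisy_h(theta, rng):
    # True mean function h(theta) = theta - 3 observed with Gaussian noise,
    # so the root is theta* = 3. Both are illustrative placeholders.
    return (theta - 3.0) + rng.normal(scale=1.0)

rng = np.random.default_rng(3)
theta = 0.0
for n in range(1, 20001):
    a_n = 1.0 / n                      # classic diminishing step size
    theta = theta - a_n * noisy_h(theta, rng)

print("estimated root:", theta)        # should be close to 3.0
```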
null | null |
2402.16516
| null | null |
http://arxiv.org/abs/2402.16516v2
|
2024-06-18T02:09:45Z
|
2024-02-26T11:54:54Z
|
Generative Pretrained Hierarchical Transformer for Time Series
Forecasting
|
Recent efforts have been dedicated to enhancing time series forecasting accuracy by introducing advanced network architectures and self-supervised pretraining strategies. Nevertheless, existing approaches still exhibit two critical drawbacks. Firstly, these methods often rely on a single dataset for training, limiting the model's generalizability due to the restricted scale of the training data. Secondly, the one-step generation schema is widely followed, which necessitates a customized forecasting head and overlooks the temporal dependencies in the output series, and also leads to increased training costs under different horizon length settings. To address these issues, we propose a novel generative pretrained hierarchical transformer architecture for forecasting, named \textbf{GPHT}. There are two aspects of key designs in GPHT. On the one hand, we advocate for constructing a mixed dataset under the channel-independent assumption for pretraining our model, comprising various datasets from diverse data scenarios. This approach significantly expands the scale of training data, allowing our model to uncover commonalities in time series data and facilitating improved transfer to specific datasets. On the other hand, GPHT employs an auto-regressive forecasting approach, effectively modeling temporal dependencies in the output series. Importantly, no customized forecasting head is required, enabling \textit{a single model to forecast at arbitrary horizon settings.} We conduct sufficient experiments on eight datasets with mainstream self-supervised pretraining models and supervised models. The results demonstrated that GPHT surpasses the baseline models across various fine-tuning and zero/few-shot learning settings in the traditional long-term forecasting task. We make our codes publicly available\footnote{https://github.com/icantnamemyself/GPHT}.
|
[
"['Zhiding Liu' 'Jiqian Yang' 'Mingyue Cheng' 'Yucong Luo' 'Zhi Li']"
] |
null | null |
2402.16517
| null | null |
http://arxiv.org/pdf/2402.16517v1
|
2024-02-26T11:58:02Z
|
2024-02-26T11:58:02Z
|
Discovering Artificial Viscosity Models for Discontinuous Galerkin
Approximation of Conservation Laws using Physics-Informed Machine Learning
|
Finite element-based high-order solvers of conservation laws offer high accuracy but face challenges near discontinuities due to the Gibbs phenomenon. Artificial viscosity is a popular and effective solution to this problem based on physical insight. In this work, we present a physics-informed machine learning algorithm to automate the discovery of artificial viscosity models in an unsupervised paradigm. The algorithm is inspired by reinforcement learning and trains a neural network acting cell-by-cell (the viscosity model) by minimizing, via automatic differentiation, a loss defined as the difference with respect to a reference solution. This enables a dataset-free training procedure. We prove that the algorithm is effective by integrating it into a state-of-the-art Runge-Kutta discontinuous Galerkin solver. We showcase several numerical tests on scalar and vectorial problems, such as Burgers' and Euler's equations in one and two dimensions. Results demonstrate that the proposed approach trains a model that is able to outperform classical viscosity models. Moreover, we show that the learnt artificial viscosity model is able to generalize across different problems and parameters.
|
[
"['Matteo Caldana' 'Paola F. Antonietti' \"Luca Dede'\"]"
] |
null | null |
2402.16539
| null | null |
http://arxiv.org/pdf/2402.16539v1
|
2024-02-26T12:55:51Z
|
2024-02-26T12:55:51Z
|
Integrating Large Language Models with Graphical Session-Based
Recommendation
|
With the rapid development of Large Language Models (LLMs), various explorations have arisen to utilize the LLMs' capability of context understanding on recommender systems. While pioneering strategies have primarily transformed traditional recommendation tasks into challenges of natural language generation, there has been a relative scarcity of exploration in the domain of session-based recommendation (SBR) due to its specificity. SBR has been primarily dominated by Graph Neural Networks, which have achieved many successful outcomes due to their ability to capture both the implicit and explicit relationships between adjacent behaviors. The structural nature of graphs contrasts with the essence of natural language, posing a significant adaptation gap for LLMs. In this paper, we introduce Large Language Models with Graphical Session-Based Recommendation, named LLMGR, an effective framework that bridges the aforementioned gap by harmoniously integrating LLMs with Graph Neural Networks (GNNs) for SBR tasks. This integration seeks to leverage the complementary strengths of LLMs in natural language understanding and GNNs in relational data processing, leading to a more powerful session-based recommender system that can understand and recommend items within a session. Moreover, to endow the LLM with the capability to empower SBR tasks, we design a series of prompts for both auxiliary and major instruction tuning tasks. These prompts are crafted to assist the LLM in understanding graph-structured data and align textual information with nodes, effectively translating nuanced user interactions into a format that can be understood and utilized by LLM architectures. Extensive experiments on three real-world datasets demonstrate that LLMGR outperforms several competitive baselines, indicating its effectiveness in enhancing SBR tasks and its potential as a research direction for future exploration.
|
[
"['Naicheng Guo' 'Hongwei Cheng' 'Qianqiao Liang' 'Linxun Chen' 'Bing Han']"
] |
null | null |
2402.16543
| null | null |
http://arxiv.org/pdf/2402.16543v2
|
2024-04-10T12:01:43Z
|
2024-02-26T13:01:45Z
|
Model-based deep reinforcement learning for accelerated learning from
flow simulations
|
In recent years, deep reinforcement learning has emerged as a technique to solve closed-loop flow control problems. Employing simulation-based environments in reinforcement learning enables a priori end-to-end optimization of the control system, provides a virtual testbed for safety-critical control applications, and allows one to gain a deep understanding of the control mechanisms. While reinforcement learning has been applied successfully in a number of rather simple flow control benchmarks, a major bottleneck toward real-world applications is the high computational cost and turnaround time of flow simulations. In this contribution, we demonstrate the benefits of model-based reinforcement learning for flow control applications. Specifically, we optimize the policy by alternating between trajectories sampled from flow simulations and trajectories sampled from an ensemble of environment models. The model-based learning reduces the overall training time by up to $85\%$ for the fluidic pinball test case. Even larger savings are expected for more demanding flow simulations.
|
[
"['Andre Weiner' 'Janis Geise']"
] |
null | null |
2402.16544
| null | null |
http://arxiv.org/pdf/2402.16544v1
|
2024-02-26T13:03:26Z
|
2024-02-26T13:03:26Z
|
Label Learning Method Based on Tensor Projection
|
Multi-view clustering methods based on anchor graphs have attracted wide attention due to their high efficiency and effectiveness. In order to avoid post-processing, most of the existing anchor graph-based methods learn bipartite graphs with connected components. However, such methods have high requirements on parameters, and in some cases it may not be possible to obtain bipartite graphs with clear connected components. To this end, we propose a label learning method based on tensor projection (LLMTP). Specifically, we project the anchor graph into the label space through an orthogonal projection matrix to obtain cluster labels directly. Considering that the spatial structure information of multi-view data may be ignored to a certain extent when projected in different views separately, we extend the matrix projection transformation to tensor projection, so that the spatial structure information between views can be fully utilized. In addition, we introduce the tensor Schatten $p$-norm regularization to make the clustering label matrices of different views as consistent as possible. Extensive experiments have proved the effectiveness of the proposed method.
|
[
"['Jing Li' 'Quanxue Gao' 'Qianqian Wang' 'Cheng Deng' 'Deyan Xie']"
] |
null | null |
2402.16562
| null | null |
http://arxiv.org/pdf/2402.16562v2
|
2024-03-29T18:05:17Z
|
2024-02-26T13:39:04Z
|
Q-FOX Learning: Breaking Tradition in Reinforcement Learning
|
Reinforcement learning (RL) is a subset of artificial intelligence (AI) where agents learn the best action by interacting with the environment, making it suitable for tasks that do not require labeled data or direct supervision. Hyperparameter (HP) tuning refers to choosing the best parameters that lead to optimal solutions in RL algorithms. Manual or random tuning of the HPs can be a critical process because variations in these parameters lead to changes in the overall learning behavior and different rewards. In this paper, a novel and automatic HP-tuning method called Q-FOX is proposed. This uses both the FOX optimizer, a new optimization method inspired by nature that mimics red foxes' hunting behavior, and the commonly used, easy-to-implement RL Q-learning algorithm to solve the problem of HP tuning. Moreover, a new objective function is proposed which prioritizes the reward over the mean squared error (MSE) and learning time (steps). Q-FOX has been evaluated on two OpenAI Gym environment control tasks: Cart Pole and Frozen Lake. It achieved greater cumulative rewards than HP tuning with other optimizers, such as PSO, GA, Bee, or randomly selected HPs. The cumulative reward for the Cart Pole task was 32.08, and for the Frozen Lake task it was 0.95. Despite the robustness of Q-FOX, it has limitations. It cannot be used directly in real-world problems before choosing the HP in a simulation environment because its processes work iteratively, making it time-consuming. The results indicate that Q-FOX has played an essential role in HP tuning for RL algorithms to effectively solve different control tasks.
|
[
"['Mahmood A. Jumaah' 'Yossra H. Ali' 'Tarik A. Rashid']"
] |
null | null |
2402.16565
| null | null |
http://arxiv.org/pdf/2402.16565v2
|
2024-04-17T06:31:27Z
|
2024-02-26T13:43:25Z
|
Partial Rankings of Optimizers
|
We introduce a framework for benchmarking optimizers according to multiple criteria over various test functions. Based on a recently introduced union-free generic depth function for partial orders/rankings, it fully exploits the ordinal information and allows for incomparability. Our method describes the distribution of all partial orders/rankings, avoiding the notorious shortcomings of aggregation. This permits identifying test functions that produce central or outlying rankings of optimizers and assessing the quality of benchmarking suites.
|
[
"['Julian Rodemann' 'Hannah Blocher']"
] |
null | null |
2402.16569
| null | null |
http://arxiv.org/pdf/2402.16569v2
|
2024-02-27T13:59:32Z
|
2024-02-26T13:47:32Z
|
Pretrained Visual Uncertainties
|
Accurate uncertainty estimation is vital to trustworthy machine learning, yet uncertainties typically have to be learned for each task anew. This work introduces the first pretrained uncertainty modules for vision models. Similar to standard pretraining this enables the zero-shot transfer of uncertainties learned on a large pretraining dataset to specialized downstream datasets. We enable our large-scale pretraining on ImageNet-21k by solving a gradient conflict in previous uncertainty modules and accelerating the training by up to 180x. We find that the pretrained uncertainties generalize to unseen datasets. In scrutinizing the learned uncertainties, we find that they capture aleatoric uncertainty, disentangled from epistemic components. We demonstrate that this enables safe retrieval and uncertainty-aware dataset visualization. To encourage applications to further problems and domains, we release all pretrained checkpoints and code under https://github.com/mkirchhof/url .
|
[
"['Michael Kirchhof' 'Mark Collier' 'Seong Joon Oh' 'Enkelejda Kasneci']"
] |
null | null |
2402.16570
| null | null |
http://arxiv.org/pdf/2402.16570v1
|
2024-02-26T13:48:44Z
|
2024-02-26T13:48:44Z
|
Searching a Lightweight Network Architecture for Thermal Infrared
Pedestrian Tracking
|
Manually-designed network architectures for thermal infrared pedestrian tracking (TIR-PT) require substantial effort from human experts. Neural networks with ResNet backbones are popular for TIR-PT. However, TIR-PT is a tracking task and more challenging than classification and detection. This paper makes an early attempt to search for an optimal network architecture for TIR-PT automatically, employing single-bottom and dual-bottom cells as basic search units and incorporating eight operation candidates within the search space. To expedite the search process, a random channel selection strategy is employed prior to assessing operation candidates. Classification, batch hard triplet, and center loss are jointly used to retrain the searched architecture. The outcome is a high-performance network architecture that is both parameter- and computation-efficient. Extensive experiments proved the effectiveness of the automated method.
|
[
"['Peng Gao' 'Xiao Liu' 'Yu Wang' 'Ru-Yue Yuan']"
] |
null | null |
2402.16578
| null | null |
http://arxiv.org/pdf/2402.16578v1
|
2024-02-26T14:01:34Z
|
2024-02-26T14:01:34Z
|
Multi-Bit Distortion-Free Watermarking for Large Language Models
|
Methods for watermarking large language models have been proposed that distinguish AI-generated text from human-generated text by slightly altering the model output distribution, but they also distort the quality of the text, exposing the watermark to adversarial detection. More recently, distortion-free watermarking methods were proposed that require a secret key to detect the watermark. The prior methods generally embed zero-bit watermarks that do not provide additional information beyond tagging a text as being AI-generated. We extend an existing zero-bit distortion-free watermarking method by embedding multiple bits of meta-information as part of the watermark. We also develop a computationally efficient decoder that extracts the embedded information from the watermark with low bit error rate.
|
[
"['Massieh Kordi Boroujeny' 'Ya Jiang' 'Kai Zeng' 'Brian Mark']"
] |
null | null |
2402.16609
| null | null |
http://arxiv.org/pdf/2402.16609v1
|
2024-02-23T16:01:37Z
|
2024-02-23T16:01:37Z
|
Combining Transformer based Deep Reinforcement Learning with
Black-Litterman Model for Portfolio Optimization
|
As a model-free algorithm, a deep reinforcement learning (DRL) agent learns and makes decisions by interacting with the environment in an unsupervised way. In recent years, DRL algorithms have been widely applied by scholars for portfolio optimization in consecutive trading periods, since the DRL agent can dynamically adapt to market changes and does not rely on the specification of the joint dynamics across the assets. However, typical DRL agents for portfolio optimization cannot learn a policy that is aware of the dynamic correlation between portfolio asset returns. Since the dynamic correlations among portfolio assets are crucial in optimizing the portfolio, the lack of such knowledge makes it difficult for the DRL agent to maximize the return per unit of risk, especially when the target market permits short selling (i.e., the US stock market). In this research, we propose a hybrid portfolio optimization model combining the DRL agent and the Black-Litterman (BL) model to enable the DRL agent to learn the dynamic correlation between the portfolio asset returns and implement an efficacious long/short strategy based on the correlation. Essentially, the DRL agent is trained to learn the policy to apply the BL model to determine the target portfolio weights. To test our DRL agent, we construct the portfolio based on all the Dow Jones Industrial Average constituent stocks. Empirical results of the experiments conducted on real-world United States stock market data demonstrate that our DRL agent significantly outperforms various comparison portfolio choice strategies and alternative DRL frameworks by at least 42% in terms of accumulated return. In terms of the return per unit of risk, our DRL agent significantly outperforms various comparative portfolio choice strategies and alternative strategies based on other machine learning frameworks.
|
[
"['Ruoyu Sun' 'Angelos Stefanidis' 'Zhengyong Jiang' 'Jionglong Su']"
] |
null | null |
2402.16627
| null | null |
http://arxiv.org/pdf/2402.16627v3
|
2024-06-04T01:08:56Z
|
2024-02-26T15:01:16Z
|
Contextualized Diffusion Models for Text-Guided Image and Video
Generation
|
Conditional diffusion models have exhibited superior performance in high-fidelity text-guided visual generation and editing. Nevertheless, prevailing text-guided visual diffusion models primarily focus on incorporating text-visual relationships exclusively into the reverse process, often disregarding their relevance in the forward process. This inconsistency between forward and reverse processes may limit the precise conveyance of textual semantics in visual synthesis results. To address this issue, we propose a novel and general contextualized diffusion model (ContextDiff) by incorporating the cross-modal context encompassing interactions and alignments between text condition and visual sample into forward and reverse processes. We propagate this context to all timesteps in the two processes to adapt their trajectories, thereby facilitating cross-modal conditional modeling. We generalize our contextualized diffusion to both DDPMs and DDIMs with theoretical derivations, and demonstrate the effectiveness of our model in evaluations with two challenging tasks: text-to-image generation, and text-to-video editing. In each task, our ContextDiff achieves new state-of-the-art performance, significantly enhancing the semantic alignment between text condition and generated samples, as evidenced by quantitative and qualitative evaluations. Our code is available at https://github.com/YangLing0818/ContextDiff
|
[
"['Ling Yang' 'Zhilong Zhang' 'Zhaochen Yu' 'Jingwei Liu' 'Minkai Xu'\n 'Stefano Ermon' 'Bin Cui']"
] |
null | null |
2402.16639
| null | null |
http://arxiv.org/pdf/2402.16639v1
|
2024-02-26T15:09:56Z
|
2024-02-26T15:09:56Z
|
Differentiable Particle Filtering using Optimal Placement Resampling
|
Particle filters are a frequent choice for inference tasks in nonlinear and non-Gaussian state-space models. They can either be used for state inference by approximating the filtering distribution or for parameter inference by approximating the marginal data (observation) likelihood. A good proposal distribution and a good resampling scheme are crucial to obtain low variance estimates. However, traditional methods like multinomial resampling introduce nondifferentiability in PF-based loss functions for parameter estimation, prohibiting gradient-based learning tasks. This work proposes a differentiable resampling scheme by deterministic sampling from an empirical cumulative distribution function. We evaluate our method on parameter inference tasks and proposal learning.
|
[
"['Domonkos Csuzdi' 'Olivér Törő' 'Tamás Bécsi']"
] |
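A minimal sketch of the core idea in the abstract above: resampling particles by deterministically inverting the empirical CDF of the weights instead of drawing random multinomial indices. The equally spaced placement (j + 0.5)/N is one common deterministic choice and is assumed here; the paper's exact placement rule may differ.

```python
# Deterministic resampling via empirical-CDF inversion at equally spaced points.
import numpy as np

def deterministic_resample(particles, weights):
    n = len(weights)
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                               # guard against round-off
    u = (np.arange(n) + 0.5) / n                # deterministic placement points
    idx = np.searchsorted(cdf, u)               # invert the empirical CDF
    return particles[idx]

rng = np.random.default_rng(4)
particles = rng.normal(size=8)
weights = rng.random(8)
weights /= weights.sum()
print(deterministic_resample(particles, weights))
```

Because no random indices are drawn, the mapping from weights to resampled particles is deterministic, which is what makes gradient-based parameter learning through the resampling step feasible.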
null | null |
2402.16658
| null | null |
http://arxiv.org/pdf/2402.16658v1
|
2024-02-23T15:42:13Z
|
2024-02-23T15:42:13Z
|
Multi-Objective Learning for Deformable Image Registration
|
Deformable image registration (DIR) involves optimization of multiple conflicting objectives; however, not many existing DIR algorithms are multi-objective (MO). Further, while there has been progress in the design of deep learning algorithms for DIR, there is no work in the direction of MO DIR using deep learning. In this paper, we fill this gap by combining a recently proposed approach for MO training of neural networks with a well-known deep neural network for DIR and create a deep learning based MO DIR approach. We evaluate the proposed approach for DIR of pelvic magnetic resonance imaging (MRI) scans. We experimentally demonstrate that the proposed MO DIR approach -- providing multiple registration outputs for each patient that each correspond to a different trade-off between the objectives -- has additional desirable properties from a clinical use point-of-view as compared to providing a single DIR output. The experiments also show that the proposed MO DIR approach provides a better spread of DIR outputs across the entire trade-off front than simply training multiple neural networks with weights for each objective sampled from a grid of possible values.
|
[
"['Monika Grewal' 'Henrike Westerveld' 'Peter A. N. Bosman'\n 'Tanja Alderliesten']"
] |
null | null |
2402.16661
| null | null |
http://arxiv.org/pdf/2402.16661v1
|
2024-02-26T15:35:10Z
|
2024-02-26T15:35:10Z
|
Penalized Generative Variable Selection
|
Deep networks are increasingly applied to a wide variety of data, including data with high-dimensional predictors. In such analysis, variable selection can be needed along with estimation/model building. Many of the existing deep network studies that incorporate variable selection have been limited to methodological and numerical developments. In this study, we consider modeling/estimation using the conditional Wasserstein Generative Adversarial networks. Group Lasso penalization is applied for variable selection, which may improve model estimation/prediction, interpretability, stability, etc. Significantly advancing from the existing literature, the analysis of censored survival data is also considered. We establish the convergence rate for variable selection while considering the approximation error, and obtain a more efficient distribution estimation. Simulations and the analysis of real experimental data demonstrate satisfactory practical utility of the proposed analysis.
|
[
"['Tong Wang' 'Jian Huang' 'Shuangge Ma']"
] |
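The group lasso penalty used for variable selection in the abstract above can be written down in a few lines. Grouping by input variable (one group per predictor's outgoing first-layer weights) and the sqrt(group size) scaling are common conventions assumed here, not details taken from the paper.

```python
# Group-lasso penalty sketch: sum of (scaled) Euclidean norms of per-variable
# weight groups; zeroed-out groups correspond to de-selected variables.
import numpy as np

def group_lasso_penalty(W, lam=0.1):
    # W has shape (n_inputs, n_hidden); row j is the group for input variable j.
    group_norms = np.linalg.norm(W, axis=1)
    return lam * np.sqrt(W.shape[1]) * group_norms.sum()

rng = np.random.default_rng(5)
W = rng.normal(size=(6, 16))
W[3:, :] = 0.0                 # variables 3-5 zeroed out (effectively removed)
print("penalty:", group_lasso_penalty(W))
```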
null | null |
2402.16668
| null | null |
http://arxiv.org/pdf/2402.16668v1
|
2024-02-26T15:40:46Z
|
2024-02-26T15:40:46Z
|
Program-Based Strategy Induction for Reinforcement Learning
|
Typical models of learning assume incremental estimation of continuously-varying decision variables like expected rewards. However, this class of models fails to capture more idiosyncratic, discrete heuristics and strategies that people and animals appear to exhibit. Despite recent advances in strategy discovery using tools like recurrent networks that generalize the classic models, the resulting strategies are often onerous to interpret, making connections to cognition difficult to establish. We use Bayesian program induction to discover strategies implemented by programs, letting the simplicity of strategies trade off against their effectiveness. Focusing on bandit tasks, we find strategies that are difficult or unexpected with classical incremental learning, like asymmetric learning from rewarded and unrewarded trials, adaptive horizon-dependent random exploration, and discrete state switching.
|
[
"['Carlos G. Correa' 'Thomas L. Griffiths' 'Nathaniel D. Daw']"
] |
null | null |
2402.16681
| null | null |
http://arxiv.org/pdf/2402.16681v2
|
2024-03-18T07:10:46Z
|
2024-02-26T15:59:38Z
|
Enhancing Continuous Domain Adaptation with Multi-Path Transfer
Curriculum
|
Addressing the large distribution gap between training and testing data has long been a challenge in machine learning, giving rise to fields such as transfer learning and domain adaptation. Recently, Continuous Domain Adaptation (CDA) has emerged as an effective technique, closing this gap by utilizing a series of intermediate domains. This paper contributes a novel CDA method, W-MPOT, which rigorously addresses the domain ordering and error accumulation problems overlooked by previous studies. Specifically, we construct a transfer curriculum over the source and intermediate domains based on Wasserstein distance, motivated by theoretical analysis of CDA. Then we transfer the source model to the target domain through multiple valid paths in the curriculum using a modified version of continuous optimal transport. A bidirectional path consistency constraint is introduced to mitigate the impact of accumulated mapping errors during continuous transfer. We extensively evaluate W-MPOT on multiple datasets, achieving up to 54.1% accuracy improvement on multi-session Alzheimer MR image classification and 94.7% MSE reduction on battery capacity estimation.
|
[
"['Hanbing Liu' 'Jingge Wang' 'Xuan Zhang' 'Ye Guo' 'Yang Li']"
] |
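The curriculum-ordering step in the abstract above, arranging intermediate domains by Wasserstein distance from the source, can be sketched with univariate stand-ins; real features are multivariate and the paper's construction is richer, so treat this only as an illustration of the ordering idea.

```python
# Order intermediate domains into a transfer curriculum by their (univariate)
# Wasserstein distance to the source domain.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(6)
source = rng.normal(loc=0.0, size=1000)
intermediates = {
    "domain_a": rng.normal(loc=2.0, size=1000),
    "domain_b": rng.normal(loc=0.5, size=1000),
    "domain_c": rng.normal(loc=1.2, size=1000),
}

ordered = sorted(intermediates,
                 key=lambda k: wasserstein_distance(source, intermediates[k]))
print("transfer curriculum (nearest first):", ordered)
```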
null | null |
2402.16683
| null | null |
http://arxiv.org/abs/2402.16683v2
|
2024-06-15T20:50:17Z
|
2024-02-26T16:01:35Z
|
Re-Envisioning Numerical Information Field Theory (NIFTy.re): A Library
for Gaussian Processes and Variational Inference
|
Imaging is the process of transforming noisy, incomplete data into a space that humans can interpret. NIFTy is a Bayesian framework for imaging and has already successfully been applied to many fields in astrophysics. Previous design decisions held back the performance of NIFTy and the development of new methods. We present a rewrite of NIFTy, coined NIFTy.re, which reworks the modeling principle, extends the inference strategies, and outsources much of the heavy lifting to JAX. The rewrite dramatically accelerates models written in NIFTy, lays the foundation for new types of inference machineries, improves maintainability, and enables interoperability between NIFTy and the JAX machine learning ecosystem.
|
[
"['Gordian Edenhofer' 'Philipp Frank' 'Jakob Roth' 'Reimar H. Leike'\n 'Massin Guerdi' 'Lukas I. Scheel-Platz' 'Matteo Guardiani'\n 'Vincent Eberle' 'Margret Westerkamp' 'Torsten A. Enßlin']"
] |
null | null |
2402.16688
| null | null |
http://arxiv.org/pdf/2402.16688v1
|
2024-02-26T16:04:47Z
|
2024-02-26T16:04:47Z
|
On the connection between Noise-Contrastive Estimation and Contrastive
Divergence
|
Noise-contrastive estimation (NCE) is a popular method for estimating unnormalised probabilistic models, such as energy-based models, which are effective for modelling complex data distributions. Unlike classical maximum likelihood (ML) estimation that relies on importance sampling (resulting in ML-IS) or MCMC (resulting in contrastive divergence, CD), NCE uses a proxy criterion to avoid the need for evaluating an often intractable normalisation constant. Despite apparent conceptual differences, we show that two NCE criteria, ranking NCE (RNCE) and conditional NCE (CNCE), can be viewed as ML estimation methods. Specifically, RNCE is equivalent to ML estimation combined with conditional importance sampling, and both RNCE and CNCE are special cases of CD. These findings bridge the gap between the two method classes and allow us to apply techniques from the ML-IS and CD literature to NCE, offering several advantageous extensions.
|
[
"['Amanda Olmin' 'Jakob Lindqvist' 'Lennart Svensson' 'Fredrik Lindsten']"
] |
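For reference, the ranking-NCE criterion discussed in the abstract above is commonly written as follows. The notation is assumed here rather than taken from the paper: an unnormalised model $\tilde p_\theta$, a noise density $q$, one data sample $x$, and $K$ noise samples $y_1,\dots,y_K$ drawn from $q$. The abstract's claim is that maximizing this criterion coincides with ML estimation combined with conditional importance sampling.

```latex
% Ranking-NCE criterion in its common form (notation assumed, see lead-in).
\[
  \mathcal{J}_{\mathrm{RNCE}}(\theta)
  = \mathbb{E}\left[
      \log \frac{\tilde p_\theta(x)/q(x)}
                {\tilde p_\theta(x)/q(x) + \sum_{k=1}^{K} \tilde p_\theta(y_k)/q(y_k)}
    \right].
\]
```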
null | null |
2402.16705
| null | null |
http://arxiv.org/pdf/2402.16705v1
|
2024-02-26T16:21:53Z
|
2024-02-26T16:21:53Z
|
SelectIT: Selective Instruction Tuning for Large Language Models via
Uncertainty-Aware Self-Reflection
|
Instruction tuning (IT) is crucial to tailoring large language models (LLMs) towards human-centric interactions. Recent advancements have shown that the careful selection of a small, high-quality subset of IT data can significantly enhance the performance of LLMs. Despite this, common approaches often rely on additional models or data sets, which increases costs and limits widespread adoption. In this work, we propose a novel approach, termed SelectIT, that capitalizes on the foundational capabilities of the LLM itself. Specifically, we exploit the intrinsic uncertainty present in LLMs to more effectively select high-quality IT data, without the need for extra resources. Furthermore, we introduce a novel IT dataset, the Selective Alpaca, created by applying SelectIT to the Alpaca-GPT4 dataset. Empirical results demonstrate that IT using Selective Alpaca leads to substantial model ability enhancement. The robustness of SelectIT has also been corroborated in various foundation models and domain-specific tasks. Our findings suggest that longer and more computationally intensive IT data may serve as superior sources of IT, offering valuable insights for future research in this area. Data, code, and scripts are freely available at https://github.com/Blue-Raincoat/SelectIT.
|
[
"['Liangxin Liu' 'Xuebo Liu' 'Derek F. Wong' 'Dongfang Li' 'Ziyi Wang'\n 'Baotian Hu' 'Min Zhang']"
] |
null | null |
2402.16710
| null | null |
http://arxiv.org/pdf/2402.16710v2
|
2024-07-01T02:35:19Z
|
2024-02-26T16:27:08Z
|
Cost Aware Best Arm Identification
|
In this paper, we study a best arm identification problem with dual objectives. In addition to the classic reward, each arm is associated with a cost distribution and the goal is to identify the largest reward arm using the minimum expected cost. We call it \emph{Cost Aware Best Arm Identification} (CABAI), which captures the separation of testing and implementation phases in product development pipelines and models the objective shift between phases, i.e., cost for testing and reward for implementation. We first derive a theoretical lower bound for CABAI and propose an algorithm called $\mathsf{CTAS}$ to match it asymptotically. To reduce the computation of $\mathsf{CTAS}$, we further propose a simple algorithm called \emph{Chernoff Overlap} (CO), based on a square-root rule, which we prove is optimal in simplified two-armed models and generalizes well in numerical experiments. Our results show that (i) ignoring the heterogeneous action cost results in sub-optimality in practice, and (ii) simple algorithms can deliver near-optimal performance over a wide range of problems.
|
[
"['Kellen Kanarios' 'Qining Zhang' 'Lei Ying']"
] |
null | null |
2402.16712
| null | null |
http://arxiv.org/pdf/2402.16712v2
|
2024-03-06T17:16:38Z
|
2024-02-26T16:30:58Z
|
l1-norm regularized l1-norm best-fit lines
|
In this work, we propose an optimization framework for estimating a sparse robust one-dimensional subspace. Our objective is to minimize both the representation error and the penalty, in terms of the l1-norm criterion. Given that the problem is NP-hard, we introduce a linear relaxation-based approach. Additionally, we present a novel fitting procedure, utilizing simple ratios and sorting techniques. The proposed algorithm demonstrates a worst-case time complexity of $O(n^2 m \log n)$ and, in certain instances, achieves global optimality for the sparse robust subspace, thereby exhibiting polynomial time efficiency. Compared to extant methodologies, the proposed algorithm finds the subspace with the lowest discordance, offering a smoother trade-off between sparsity and fit. Its architecture affords scalability, evidenced by a 16-fold improvement in computational speed for 2000x2000 matrices over the CPU version. Furthermore, this method is distinguished by several advantages, including its independence from initialization and its deterministic and replicable procedures. A real-world example demonstrates the effectiveness of the algorithm in achieving meaningful sparsity, underscoring its precise and useful application across various domains.
|
[
"['Xiao Ling' 'Paul Brooks']"
] |
null | null |
2402.16726
| null | null |
http://arxiv.org/pdf/2402.16726v2
|
2024-02-27T04:58:24Z
|
2024-02-26T16:48:12Z
|
Interpreting Grokked Transformers in Complex Modular Arithmetic
|
Grokking has been actively explored to reveal the mystery of delayed generalization. Identifying interpretable algorithms inside grokked models offers a suggestive hint for understanding this mechanism. In this work, going beyond the simplest and well-studied modular addition, we observe the internal circuits learned through grokking in complex modular arithmetic via interpretable reverse engineering, which highlights the significant differences in their dynamics: subtraction imposes a strong asymmetry on the Transformer; multiplication requires cosine-biased components at all frequencies in the Fourier domain; polynomials often result in the superposition of the patterns from elementary arithmetic, but clear patterns do not emerge in challenging cases; grokking can easily occur even in higher-degree formulas with basic symmetric and alternating expressions. We also introduce novel progress measures for modular arithmetic, Fourier Frequency Sparsity and Fourier Coefficient Ratio, which not only indicate late generalization but also characterize distinctive internal representations of grokked models per modular operation. Our empirical analysis emphasizes the importance of holistic evaluation among various combinations.
|
[
"['Hiroki Furuta' 'Gouki Minegishi' 'Yusuke Iwasawa' 'Yutaka Matsuo']"
] |
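As a rough illustration of the Fourier-based progress measures named in the abstract above, the sketch below computes the fraction of an embedding matrix's spectral energy carried by its top-k frequencies along the residue dimension. The paper defines Fourier Frequency Sparsity more carefully, so the top-k proxy, the modulus p = 97, and the synthetic embeddings here are all assumptions.

```python
import numpy as np

def fourier_frequency_sparsity(embedding, top_k=5):
    """Fraction of spectral energy carried by the top-k frequencies of the
    embedding along the residue dimension (a crude proxy for the paper's measure)."""
    spectrum = np.abs(np.fft.rfft(embedding, axis=0)) ** 2   # (n_freqs, emb_dim)
    energy = spectrum.sum(axis=1)
    return np.sort(energy)[::-1][:top_k].sum() / energy.sum()

p, d = 97, 128
rng = np.random.default_rng(0)
random_emb = rng.normal(size=(p, d))                          # e.g. an untrained embedding table
freqs = rng.integers(1, 6, size=d)                            # a few dominant frequencies per dimension
structured_emb = np.cos(2 * np.pi * np.outer(np.arange(p), freqs) / p)
print(fourier_frequency_sparsity(random_emb), fourier_frequency_sparsity(structured_emb))
```

In the spirit of the abstract, a grokked embedding table should behave like `structured_emb` (sparsity close to 1), while an unstructured one behaves like `random_emb`.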
null | null |
2402.16731
| null | null |
http://arxiv.org/pdf/2402.16731v2
|
2024-03-25T18:51:02Z
|
2024-02-26T16:52:35Z
|
Accelerating Graph Neural Networks on Real Processing-In-Memory Systems
|
Graph Neural Networks (GNNs) are emerging ML models for analyzing graph-structured data. GNN execution involves both compute-intensive and memory-intensive kernels; the latter dominate the total time and are significantly bottlenecked by data movement between memory and processors. Processing-In-Memory (PIM) systems can alleviate this data movement bottleneck by placing simple processors near or inside memory arrays. In this work, we introduce PyGim, an efficient ML framework that accelerates GNNs on real PIM systems. We propose intelligent parallelization techniques for memory-intensive kernels of GNNs tailored for real PIM systems, and develop a handy Python API for them. We provide hybrid GNN execution, in which the compute-intensive and memory-intensive kernels are executed in processor-centric and memory-centric computing systems, respectively, to match their algorithmic nature. We extensively evaluate PyGim on a real-world PIM system with 1992 PIM cores using emerging GNN models, and demonstrate that it outperforms its state-of-the-art CPU counterpart on Intel Xeon by 3.04x on average, and achieves higher resource utilization than CPU and GPU systems. Our work provides useful recommendations for software, system and hardware designers. PyGim will be open-sourced to enable the widespread use of PIM systems in GNNs.
|
[
"['Christina Giannoula' 'Peiming Yang' 'Ivan Fernandez Vega'\n 'Jiacheng Yang' 'Yu Xin Li' 'Juan Gomez Luna' 'Mohammad Sadrosadati'\n 'Onur Mutlu' 'Gennady Pekhimenko']"
] |
null | null |
2402.16734
| null | null |
http://arxiv.org/pdf/2402.16734v1
|
2024-02-26T16:53:23Z
|
2024-02-26T16:53:23Z
|
Investigating the Robustness of Vision Transformers against Label Noise
in Medical Image Classification
|
Label noise in medical image classification datasets significantly hampers the training of supervised deep learning methods, undermining their generalizability. The test performance of a model tends to decrease as the label noise rate increases. Over recent years, several methods have been proposed to mitigate the impact of label noise in medical image classification and enhance the robustness of the model. Predominantly, these works have employed CNN-based architectures as the backbone of their classifiers for feature extraction. However, in recent years, Vision Transformer (ViT)-based backbones have replaced CNNs, demonstrating improved performance and a greater ability to learn more generalizable features, especially when the dataset is large. Nevertheless, no prior work has rigorously investigated how transformer-based backbones handle the impact of label noise in medical image classification. In this paper, we investigate the architectural robustness of ViT against label noise and compare it to that of CNNs. We use two medical image classification datasets -- COVID-DU-Ex, and NCT-CRC-HE-100K -- both corrupted by injecting label noise at various rates. Additionally, we show that pretraining is crucial for ensuring ViT's improved robustness against label noise in supervised training.
|
[
"['Bidur Khanal' 'Prashant Shrestha' 'Sanskar Amgain' 'Bishesh Khanal'\n 'Binod Bhattarai' 'Cristian A. Linte']"
] |
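Since the study above revolves around injecting label noise at various rates, here is a minimal sketch of symmetric (uniform-flip) label noise injection; the exact noise model, the 9-class example, and the 40% rate are assumptions for illustration rather than the paper's precise protocol.

```python
import numpy as np

def inject_symmetric_label_noise(labels, noise_rate, num_classes, seed=0):
    """Flip a fraction `noise_rate` of labels to a different, uniformly chosen class."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    flip = rng.random(len(noisy)) < noise_rate
    for i in np.where(flip)[0]:
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy, flip

clean = np.random.default_rng(1).integers(0, 9, size=1000)   # e.g. 9 tissue classes, as in NCT-CRC-HE-100K
noisy, flipped = inject_symmetric_label_noise(clean, noise_rate=0.4, num_classes=9)
print("actual flip rate:", flipped.mean())
```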
null | null |
2402.16748
| null | null |
http://arxiv.org/pdf/2402.16748v1
|
2024-02-26T17:09:18Z
|
2024-02-26T17:09:18Z
|
Enhancing Hypergradients Estimation: A Study of Preconditioning and
Reparameterization
|
Bilevel optimization aims to optimize an outer objective function that depends on the solution to an inner optimization problem. It is routinely used in Machine Learning, notably for hyperparameter tuning. The conventional method to compute the so-called hypergradient of the outer problem is to use the Implicit Function Theorem (IFT). As a function of the error of the inner problem resolution, we study the error of the IFT method. We analyze two strategies to reduce this error: preconditioning the IFT formula and reparameterizing the inner problem. We give a detailed account of the impact of these two modifications on the error, highlighting the role played by higher-order derivatives of the functionals at stake. Our theoretical findings explain when super efficiency, namely reaching an error on the hypergradient that depends quadratically on the error on the inner problem, is achievable and compare the two approaches when this is impossible. Numerical evaluations on hyperparameter tuning for regression problems substantiate our theoretical findings.
|
[
"['Zhenzhang Ye' 'Gabriel Peyré' 'Daniel Cremers' 'Pierre Ablin']"
] |
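To ground the IFT-based hypergradient described above, here is a hedged sketch for ridge-regularised least squares, where the hyperparameter is the penalty strength and the inner problem is solved only approximately by gradient descent. The preconditioning and reparameterization strategies analysed in the paper are not shown, and the data, step size, and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(80, 10)), rng.normal(size=80)
X_val, y_val = rng.normal(size=(40, 10)), rng.normal(size=40)

def ift_hypergradient(lam, inner_steps=500, lr=1e-3):
    """Hypergradient of the validation loss w.r.t. the ridge penalty `lam`,
    via the Implicit Function Theorem at an approximate inner solution."""
    # Approximately solve the inner problem min_w ||X_tr w - y_tr||^2 + lam ||w||^2
    w = np.zeros(X_tr.shape[1])
    for _ in range(inner_steps):
        w -= lr * (2 * X_tr.T @ (X_tr @ w - y_tr) + 2 * lam * w)
    # IFT ingredients evaluated at the approximate solution
    H = 2 * (X_tr.T @ X_tr + lam * np.eye(len(w)))     # Hessian of the inner objective
    cross = 2 * w                                       # d/d(lam) of the inner gradient
    grad_outer = 2 * X_val.T @ (X_val @ w - y_val)      # outer gradient w.r.t. w
    return -grad_outer @ np.linalg.solve(H, cross)

print(ift_hypergradient(lam=0.5))
```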
null | null |
2402.16770
| null | null |
http://arxiv.org/pdf/2402.16770v2
|
2024-04-11T17:40:57Z
|
2024-02-26T17:39:23Z
|
Neural population geometry and optimal coding of tasks with shared
latent structure
|
Humans and animals can recognize latent structures in their environment and apply this information to efficiently navigate the world. However, it remains unclear what aspects of neural activity contribute to these computational capabilities. Here, we develop an analytical theory linking the geometry of a neural population's activity to the generalization performance of a linear readout on a set of tasks that depend on a common latent structure. We show that four geometric measures of the activity determine performance across tasks. Using this theory, we find that experimentally observed disentangled representations naturally emerge as an optimal solution to the multi-task learning problem. When data is scarce, these optimal neural codes compress less informative latent variables, and when data is abundant, they expand these variables in the state space. We validate our theory using macaque ventral stream recordings. Our results therefore tie population geometry to multi-task learning.
|
[
"['Albert J. Wakhloo' 'Will Slatton' 'SueYeon Chung']"
] |
null | null |
2402.16778
| null | null |
http://arxiv.org/pdf/2402.16778v1
|
2024-02-26T17:49:37Z
|
2024-02-26T17:49:37Z
|
On the Growth of Mistakes in Differentially Private Online Learning: A
Lower Bound Perspective
|
In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms. Our result shows that, for a broad class of $(\varepsilon,\delta)$-DP online algorithms, for $T$ such that $\log T \leq O(1/\delta)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log \frac{T}{\delta})$. This matches the upper bound obtained by Golowich and Livni (2021) and is in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question in Sanyal and Ramponi (2022).
|
[
"['Daniil Dmitriev' 'Kristóf Szabó' 'Amartya Sanyal']"
] |
null | null |
2402.16785
| null | null |
http://arxiv.org/pdf/2402.16785v2
|
2024-05-31T15:03:11Z
|
2024-02-26T18:00:29Z
|
CARTE: Pretraining and Transfer for Tabular Learning
|
Pretrained deep-learning models are the go-to solution for images or text. However, for tabular data the standard is still to train tree-based models. Indeed, transfer learning on tables hits the challenge of data integration: finding correspondences, correspondences in the entries (entity matching) where different words may denote the same entity, correspondences across columns (schema matching), which may come in different orders, names... We propose a neural architecture that does not need such correspondences. As a result, we can pretrain it on background data that has not been matched. The architecture -- CARTE for Context Aware Representation of Table Entries -- uses a graph representation of tabular (or relational) data to process tables with different columns, string embedding of entries and columns names to model an open vocabulary, and a graph-attentional network to contextualize entries with column names and neighboring entries. An extensive benchmark shows that CARTE facilitates learning, outperforming a solid set of baselines including the best tree-based models. CARTE also enables joint learning across tables with unmatched columns, enhancing a small table with bigger ones. CARTE opens the door to large pretrained models for tabular data.
|
[
"['Myung Jun Kim' 'Léo Grinsztajn' 'Gaël Varoquaux']"
] |
null | null |
2402.16788
| null | null |
http://arxiv.org/pdf/2402.16788v3
|
2024-06-24T16:41:30Z
|
2024-02-26T18:01:41Z
|
Why Transformers Need Adam: A Hessian Perspective
|
SGD performs worse than Adam by a significant margin on Transformers, but the reason remains unclear. In this work, we provide an explanation through the lens of the Hessian: (i) Transformers are "heterogeneous": the Hessian spectrum varies dramatically across parameter blocks, a phenomenon we call "block heterogeneity"; (ii) heterogeneity hampers SGD: SGD performs worse than Adam on problems with block heterogeneity. To validate (i) and (ii), we examine various Transformers, CNNs, MLPs, and quadratic problems, and find that SGD can perform on par with Adam on problems without block heterogeneity, but performs worse than Adam when the heterogeneity exists. Our initial theoretical analysis indicates that SGD performs worse because it applies a single learning rate to all blocks, which cannot handle the heterogeneity among blocks. This limitation could be ameliorated if we use coordinate-wise learning rates, as designed in Adam.
|
[
"['Yushun Zhang' 'Congliang Chen' 'Tian Ding' 'Ziniu Li' 'Ruoyu Sun'\n 'Zhi-Quan Luo']"
] |
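As a rough, hedged proxy for the "block heterogeneity" described above, the sketch below estimates a per-block Hessian trace with Hutchinson probes on a toy MLP. The paper studies full blockwise spectra of Transformers with more refined tools, so the model, the probe count, and the trace-as-proxy choice are all assumptions.

```python
import torch
import torch.nn.functional as F

def blockwise_hessian_trace(loss_fn, named_params, num_probes=10):
    """Hutchinson estimate of the block-diagonal Hessian trace for each
    parameter block, a cheap proxy for comparing blockwise curvature."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, list(named_params.values()), create_graph=True)
    traces = {}
    for (name, p), g in zip(named_params.items(), grads):
        acc = 0.0
        for _ in range(num_probes):
            v = (torch.rand_like(p) < 0.5).float() * 2 - 1                      # Rademacher probe
            hvp = torch.autograd.grad(g, p, grad_outputs=v, retain_graph=True)[0]
            acc += float((hvp * v).sum())
        traces[name] = acc / num_probes
    return traces

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))
x, y = torch.randn(256, 16), torch.randint(0, 4, (256,))
print(blockwise_hessian_trace(lambda: F.cross_entropy(model(x), y),
                              dict(model.named_parameters())))
```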
null | null |
2402.16792
| null | null |
http://arxiv.org/pdf/2402.16792v1
|
2024-02-26T18:05:55Z
|
2024-02-26T18:05:55Z
|
Rate-Optimal Rank Aggregation with Private Pairwise Rankings
|
In various real-world scenarios like recommender systems and political surveys, pairwise rankings are commonly collected and utilized for rank aggregation to obtain an overall ranking of items. However, preference rankings can reveal individuals' personal preferences, underscoring the need to protect them before releasing them for downstream analysis. In this paper, we address the challenge of preserving privacy while ensuring the utility of rank aggregation based on pairwise rankings generated from the Bradley-Terry-Luce (BTL) model. Using the randomized response mechanism to perturb raw pairwise rankings is a common privacy protection strategy used in practice, but a critical challenge arises because the privatized rankings no longer adhere to the BTL model, resulting in significant bias in downstream rank aggregation tasks. Motivated by this, we propose a debiased randomized response mechanism to protect the raw pairwise rankings, ensuring consistent estimation of true preferences and rankings in downstream rank aggregation. Theoretically, we offer insights into the relationship between overall privacy guarantees and estimation errors from private ranking data, and establish minimax rates for estimation errors. This enables the determination of optimal privacy guarantees that balance consistency in rank aggregation with robust privacy protection. We also investigate convergence rates of expected ranking errors for partial and full ranking recovery, quantifying how privacy protection influences the specification of top-$K$ item sets and complete rankings. Our findings are validated through extensive simulations and a real application.
|
[
"['Shirong Xu' 'Will Wei Sun' 'Guang Cheng']"
] |
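A minimal sketch of the debiasing step for randomized-response-perturbed pairwise comparisons, shown for a single item pair under local differential privacy; the flip probability, the privacy level, and the synthetic win probability are assumptions, and the paper couples this inversion with full BTL rank aggregation and minimax analysis rather than the bare estimate below.

```python
import numpy as np

def randomized_response(y, eps, rng):
    """Keep each binary comparison w.p. exp(eps)/(1+exp(eps)), otherwise flip it."""
    keep = rng.random(y.shape) < np.exp(eps) / (1 + np.exp(eps))
    return np.where(keep, y, 1 - y)

def debiased_win_prob(y_priv, eps):
    """Invert the randomized-response channel so the empirical win probability
    is a consistent estimate of the true one."""
    p_flip = 1 / (1 + np.exp(eps))
    return (y_priv.mean() - p_flip) / (1 - 2 * p_flip)

rng = np.random.default_rng(0)
true_p = 0.7                                  # P(item i beats item j) under some BTL model
y = (rng.random(5000) < true_p).astype(float)
y_priv = randomized_response(y, eps=1.0, rng=rng)
print("naive:", y_priv.mean(), "debiased:", debiased_win_prob(y_priv, eps=1.0))
```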
null | null |
2402.16793
| null | null |
http://arxiv.org/pdf/2402.16793v1
|
2024-02-26T18:07:27Z
|
2024-02-26T18:07:27Z
|
Failures and Successes of Cross-Validation for Early-Stopped Gradient
Descent
|
We analyze the statistical properties of generalized cross-validation (GCV) and leave-one-out cross-validation (LOOCV) applied to early-stopped gradient descent (GD) in high-dimensional least squares regression. We prove that GCV is generically inconsistent as an estimator of the prediction risk of early-stopped GD, even for a well-specified linear model with isotropic features. In contrast, we show that LOOCV converges uniformly along the GD trajectory to the prediction risk. Our theory requires only mild assumptions on the data distribution and does not require the underlying regression function to be linear. Furthermore, by leveraging the individual LOOCV errors, we construct consistent estimators for the entire prediction error distribution along the GD trajectory and consistent estimators for a wide class of error functionals. This in particular enables the construction of pathwise prediction intervals based on GD iterates that have asymptotically correct nominal coverage conditional on the training data.
|
[
"['Pratik Patil' 'Yuchen Wu' 'Ryan J. Tibshirani']"
] |
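To make the object of study above concrete, here is a brute-force sketch of the leave-one-out risk along an early-stopped GD trajectory for least squares. The paper analyses the consistency of LOOCV (and the inconsistency of GCV) rather than this naive refit-n-times computation, and the data, step size, and iteration count below are illustrative.

```python
import numpy as np

def gd_path(X, y, step, n_iters):
    """Gradient-descent iterates for least squares, starting from zero."""
    beta, path = np.zeros(X.shape[1]), []
    for _ in range(n_iters):
        beta = beta - step * X.T @ (X @ beta - y) / len(y)
        path.append(beta.copy())
    return path

def loocv_risk(X, y, step, n_iters):
    """Leave-one-out prediction risk at every point of the GD trajectory."""
    n, errs = len(y), np.zeros(n_iters)
    for i in range(n):
        mask = np.arange(n) != i
        for t, beta in enumerate(gd_path(X[mask], y[mask], step, n_iters)):
            errs[t] += (y[i] - X[i] @ beta) ** 2 / n
    return errs

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X @ rng.normal(size=20) + rng.normal(size=100)
risk = loocv_risk(X, y, step=0.1, n_iters=50)
print("estimated best stopping time:", int(risk.argmin()))
```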
null | null |
2402.16795
| null | null |
http://arxiv.org/abs/2402.16795v2
|
2024-06-28T19:33:48Z
|
2024-02-26T18:08:52Z
|
If in a Crowdsourced Data Annotation Pipeline, a GPT-4
|
Recent studies indicated that GPT-4 outperforms online crowd workers in data labeling accuracy, notably workers from Amazon Mechanical Turk (MTurk). However, these studies were criticized for deviating from standard crowdsourcing practices and emphasizing individual workers' performances over the whole data-annotation process. This paper compared GPT-4 and an ethical and well-executed MTurk pipeline, with 415 workers labeling 3,177 sentence segments from 200 scholarly articles using the CODA-19 scheme. Two worker interfaces yielded 127,080 labels, which were then used to infer the final labels through eight label-aggregation algorithms. Our evaluation showed that, despite best practices, the MTurk pipeline's highest accuracy was 81.5%, whereas GPT-4 achieved 83.6%. Interestingly, when combining GPT-4's labels with crowd labels collected via an advanced worker interface for aggregation, 2 out of the 8 algorithms achieved an even higher accuracy (87.5%, 87.0%). Further analysis suggested that, when the crowd's and GPT-4's labeling strengths are complementary, aggregating them could increase labeling accuracy.
|
[
"['Zeyu He' 'Chieh-Yang Huang' 'Chien-Kuang Cornelia Ding'\n 'Shaurya Rohatgi' \"Ting-Hao 'Kenneth' Huang\"]"
] |
null | null |
2402.16796
| null | null |
http://arxiv.org/pdf/2402.16796v2
|
2024-03-06T02:19:38Z
|
2024-02-26T18:09:24Z
|
Expressive Whole-Body Control for Humanoid Robots
|
Can we enable humanoid robots to generate rich, diverse, and expressive motions in the real world? We propose to learn a whole-body control policy on a human-sized robot to mimic human motions as realistically as possible. To train such a policy, we leverage the large-scale human motion capture data from the graphics community in a Reinforcement Learning framework. However, directly performing imitation learning with the motion capture dataset would not work on the real humanoid robot, given the large gap in degrees of freedom and physical capabilities. Our method, Expressive Whole-Body Control (Exbody), tackles this problem by encouraging the upper humanoid body to imitate a reference motion, while relaxing the imitation constraint on its two legs and only requiring them to follow a given velocity robustly. With training in simulation and Sim2Real transfer, our policy can control a humanoid robot to walk in different styles, shake hands with humans, and even dance with a human in the real world. We conduct extensive studies and comparisons on diverse motions in both simulation and the real world to show the effectiveness of our approach.
|
[
"['Xuxin Cheng' 'Yandong Ji' 'Junming Chen' 'Ruihan Yang' 'Ge Yang'\n 'Xiaolong Wang']"
] |
null | null |
2402.16801
| null | null |
http://arxiv.org/pdf/2402.16801v2
|
2024-06-03T14:12:27Z
|
2024-02-26T18:19:07Z
|
Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement
Learning
|
Benchmarks play a crucial role in the development and analysis of reinforcement learning (RL) algorithms. We identify that existing benchmarks used for research into open-ended learning fall into one of two categories. Either they are too slow for meaningful research to be performed without enormous computational resources, like Crafter, NetHack and Minecraft, or they are not complex enough to pose a significant challenge, like Minigrid and Procgen. To remedy this, we first present Craftax-Classic: a ground-up rewrite of Crafter in JAX that runs up to 250x faster than the Python-native original. A run of PPO using 1 billion environment interactions finishes in under an hour using only a single GPU and averages 90% of the optimal reward. To provide a more compelling challenge, we present the main Craftax benchmark, a significant extension of the Crafter mechanics with elements inspired by NetHack. Solving Craftax requires deep exploration, long-term planning and memory, as well as continual adaptation to novel situations as more of the world is discovered. We show that existing methods, including global and episodic exploration as well as unsupervised environment design, fail to make material progress on the benchmark. We believe that Craftax can for the first time allow researchers to experiment in a complex, open-ended environment with limited computational resources.
|
[
"['Michael Matthews' 'Michael Beukman' 'Benjamin Ellis' 'Mikayel Samvelyan'\n 'Matthew Jackson' 'Samuel Coward' 'Jakob Foerster']"
] |
null | null |
2402.16811
| null | null |
http://arxiv.org/pdf/2402.16811v1
|
2024-02-26T18:34:58Z
|
2024-02-26T18:34:58Z
|
Stopping Bayesian Optimization with Probabilistic Regret Bounds
|
Bayesian optimization is a popular framework for efficiently finding high-quality solutions to difficult problems based on limited prior information. As a rule, these algorithms operate by iteratively choosing what to try next until some predefined budget has been exhausted. We investigate replacing this de facto stopping rule with an $(\epsilon, \delta)$-criterion: stop when a solution has been found whose value is within $\epsilon > 0$ of the optimum with probability at least $1 - \delta$ under the model. Given access to the prior distribution of problems, we show how to verify this condition in practice using a limited number of draws from the posterior. For Gaussian process priors, we prove that Bayesian optimization with the proposed criterion stops in finite time and returns a point that satisfies the $(\epsilon, \delta)$-criterion under mild assumptions. These findings are accompanied by extensive empirical results which demonstrate the strengths and weaknesses of this approach.
|
[
"['James T. Wilson']"
] |
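A hedged sketch of the $(\epsilon, \delta)$ stopping check described above, using Monte Carlo draws from a scikit-learn GP posterior on a toy 1-D maximisation problem. The paper's procedure verifies the condition with a controlled number of posterior draws and comes with guarantees, so the plain Monte Carlo test, the candidate grid, and the kernel here are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def should_stop(gp, X_cand, incumbent_value, eps=0.05, delta=0.1, n_draws=256):
    """Stop if, under the GP posterior, the incumbent is within eps of the
    maximum over the candidate set with probability at least 1 - delta."""
    samples = gp.sample_y(X_cand, n_samples=n_draws, random_state=0)  # (n_cand, n_draws)
    prob = np.mean(samples.max(axis=0) - incumbent_value <= eps)
    return prob >= 1 - delta

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x)                      # toy objective on [0, 1]
X_obs = rng.random((8, 1))
y_obs = f(X_obs).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6).fit(X_obs, y_obs)
X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
print(should_stop(gp, X_cand, incumbent_value=y_obs.max()))
```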
null | null |
2402.16814
| null | null |
http://arxiv.org/pdf/2402.16814v3
|
2024-04-12T11:38:20Z
|
2024-02-26T18:37:16Z
|
Box Facets and Cut Facets of Lifted Multicut Polytopes
|
The lifted multicut problem is a combinatorial optimization problem whose feasible solutions relate one-to-one to the decompositions of a graph $G = (V, E)$. Given an augmentation $widehat{G} = (V, E cup F)$ of $G$ and given costs $c in mathbb{R}^{E cup F}$, the objective is to minimize the sum of those $c_{uw}$ with $uw in E cup F$ for which $u$ and $w$ are in distinct components. For $F = emptyset$, the problem specializes to the multicut problem, and for $E = tbinom{V}{2}$ to the clique partitioning problem. We study a binary linear program formulation of the lifted multicut problem. More specifically, we contribute to the analysis of the associated lifted multicut polytopes: Firstly, we establish a necessary, sufficient and efficiently decidable condition for a lower box inequality to define a facet. Secondly, we show that deciding whether a cut inequality of the binary linear program defines a facet is NP-hard.
|
[
"['Lucas Fabian Naumann' 'Jannik Irmai' 'Shengxian Zhao' 'Bjoern Andres']"
] |
null | null |
2402.16819
| null | null |
http://arxiv.org/pdf/2402.16819v2
|
2024-02-27T15:22:57Z
|
2024-02-26T18:43:45Z
|
Nemotron-4 15B Technical Report
|
We introduce Nemotron-4 15B, a 15-billion-parameter large multilingual language model trained on 8 trillion text tokens. Nemotron-4 15B demonstrates strong performance when assessed on English, multilingual, and coding tasks: it outperforms all existing similarly-sized open models on 4 out of 7 downstream evaluation areas and achieves competitive performance to the leading open models in the remaining ones. Specifically, Nemotron-4 15B exhibits the best multilingual capabilities of all similarly-sized models, even outperforming models over four times larger and those explicitly specialized for multilingual tasks.
|
[
"['Jupinder Parmar' 'Shrimai Prabhumoye' 'Joseph Jennings'\n 'Mostofa Patwary' 'Sandeep Subramanian' 'Dan Su' 'Chen Zhu'\n 'Deepak Narayanan' 'Aastha Jhunjhunwala' 'Ayush Dattagupta' 'Vibhu Jawa'\n 'Jiwei Liu' 'Ameya Mahabaleshwarkar' 'Osvald Nitski' 'Annika Brundyn'\n 'James Maki' 'Miguel Martinez' 'Jiaxuan You' 'John Kamalu'\n 'Patrick LeGresley' 'Denys Fridman' 'Jared Casper' 'Ashwath Aithal'\n 'Oleksii Kuchaiev' 'Mohammad Shoeybi' 'Jonathan Cohen' 'Bryan Catanzaro']"
] |
null | null |
2402.16822
| null | null |
http://arxiv.org/pdf/2402.16822v1
|
2024-02-26T18:47:27Z
|
2024-02-26T18:47:27Z
|
Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts
|
As large language models (LLMs) become increasingly prevalent across many real-world applications, understanding and enhancing their robustness to user inputs is of paramount importance. Existing methods for identifying adversarial prompts tend to focus on specific domains, lack diversity, or require extensive human annotations. To address these limitations, we present Rainbow Teaming, a novel approach for producing a diverse collection of adversarial prompts. Rainbow Teaming casts adversarial prompt generation as a quality-diversity problem, and uses open-ended search to generate prompts that are both effective and diverse. It can uncover a model's vulnerabilities across a broad range of domains including, in this paper, safety, question answering, and cybersecurity. We also demonstrate that fine-tuning on synthetic data generated by Rainbow Teaming improves the safety of state-of-the-art LLMs without hurting their general capabilities and helpfulness, paving the path to open-ended self-improvement.
|
[
"['Mikayel Samvelyan' 'Sharath Chandra Raparthy' 'Andrei Lupu'\n 'Eric Hambro' 'Aram H. Markosyan' 'Manish Bhatt' 'Yuning Mao'\n 'Minqi Jiang' 'Jack Parker-Holder' 'Jakob Foerster' 'Tim Rocktäschel'\n 'Roberta Raileanu']"
] |
null | null |
2402.16823
| null | null |
http://arxiv.org/pdf/2402.16823v2
|
2024-02-27T11:03:10Z
|
2024-02-26T18:48:27Z
|
Language Agents as Optimizable Graphs
|
Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs), yielding many disparate code bases. We unify these approaches by describing LLM-based agents as computational graphs. The nodes implement functions to process multimodal data or query LLMs, and the edges describe the information flow between operations. Graphs can be recursively combined into larger composite graphs representing hierarchies of inter-agent collaboration (where edges connect operations of different agents). Our novel automatic graph optimizers (1) refine node-level LLM prompts (node optimization) and (2) improve agent orchestration by changing graph connectivity (edge optimization). Experiments demonstrate that our framework can be used to efficiently develop, integrate, and automatically improve various LLM agents. The code can be found at https://github.com/metauto-ai/gptswarm.
|
[
"['Mingchen Zhuge' 'Wenyi Wang' 'Louis Kirsch' 'Francesco Faccio'\n 'Dmitrii Khizbullin' 'Jürgen Schmidhuber']"
] |
null | null |
2402.16827
| null | null |
http://arxiv.org/pdf/2402.16827v2
|
2024-03-08T20:04:01Z
|
2024-02-26T18:54:35Z
|
A Survey on Data Selection for Language Models
|
A major factor in the recent success of large language models is the use of enormous and ever-growing text datasets for unsupervised pre-training. However, naively training a model on all available data may not be optimal (or feasible), as the quality of available text data can vary. Filtering out data can also decrease the carbon footprint and financial costs of training models by reducing the amount of training required. Data selection methods aim to determine which candidate data points to include in the training dataset and how to appropriately sample from the selected data points. The promise of improved data selection methods has caused the volume of research in the area to rapidly expand. However, because deep learning is mostly driven by empirical evidence and experimentation on large-scale data is expensive, few organizations have the resources for extensive data selection research. Consequently, knowledge of effective data selection practices has become concentrated within a few organizations, many of which do not openly share their findings and methodologies. To narrow this gap in knowledge, we present a comprehensive review of existing literature on data selection methods and related research areas, providing a taxonomy of existing approaches. By describing the current landscape of research, this work aims to accelerate progress in data selection by establishing an entry point for new and established researchers. Additionally, throughout this review we draw attention to noticeable holes in the literature and conclude the paper by proposing promising avenues for future research.
|
[
"['Alon Albalak' 'Yanai Elazar' 'Sang Michael Xie' 'Shayne Longpre'\n 'Nathan Lambert' 'Xinyi Wang' 'Niklas Muennighoff' 'Bairu Hou'\n 'Liangming Pan' 'Haewon Jeong' 'Colin Raffel' 'Shiyu Chang'\n 'Tatsunori Hashimoto' 'William Yang Wang']"
] |
null | null |
2402.16828
| null | null |
http://arxiv.org/pdf/2402.16828v1
|
2024-02-26T18:55:13Z
|
2024-02-26T18:55:13Z
|
Training Neural Networks from Scratch with Parallel Low-Rank Adapters
|
The scalability of deep learning models is fundamentally limited by computing resources, memory, and communication. Although methods like low-rank adaptation (LoRA) have reduced the cost of model finetuning, its application in model pre-training remains largely unexplored. This paper explores extending LoRA to model pre-training, identifying the inherent constraints and limitations of standard LoRA in this context. We introduce LoRA-the-Explorer (LTE), a novel bi-level optimization algorithm designed to enable parallel training of multiple low-rank heads across computing nodes, thereby reducing the need for frequent synchronization. Our approach includes extensive experimentation on vision transformers using various vision datasets, demonstrating that LTE is competitive with standard pre-training.
|
[
"['Minyoung Huh' 'Brian Cheung' 'Jeremy Bernstein' 'Phillip Isola'\n 'Pulkit Agrawal']"
] |
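The core idea above (several low-rank heads trained in parallel and merged into the base weights only occasionally) can be sketched on a single linear layer as below. This is a loose illustration under assumed details such as the toy regression task, the merge-and-reset schedule, and the simple averaging rule; it is not the paper's LTE algorithm.

```python
import torch

torch.manual_seed(0)
d, rank, n_heads, merge_every = 32, 4, 4, 10
W = torch.zeros(d, d)                                   # base weight, touched only at merges
W_true = torch.randn(d, d) / d ** 0.5                   # toy regression target
heads = [{"A": torch.nn.Parameter(0.01 * torch.randn(rank, d)),
          "B": torch.nn.Parameter(torch.zeros(d, rank))} for _ in range(n_heads)]
opts = [torch.optim.AdamW(h.values(), lr=1e-2) for h in heads]

def local_loss(h, x, y):
    # Each head trains its own low-rank correction B @ A on its own minibatch.
    pred = x @ (W + h["B"] @ h["A"]).T
    return ((pred - y) ** 2).mean()

for step in range(1, 101):
    for h, opt in zip(heads, opts):                     # conceptually runs on separate workers
        x = torch.randn(64, d)
        y = x @ W_true.T
        opt.zero_grad()
        local_loss(h, x, y).backward()
        opt.step()
    if step % merge_every == 0:                         # infrequent synchronisation
        with torch.no_grad():
            W += sum(h["B"] @ h["A"] for h in heads) / n_heads
            for h in heads:
                h["B"].zero_()                          # restart each head from the merged base

print("residual to target:", float(((W - W_true) ** 2).mean()))
```

How quickly the residual shrinks in this toy run depends entirely on the assumed hyperparameters; the point of the sketch is only the communication pattern of training low-rank heads between rare merges.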
null | null |
2402.16829
| null | null |
http://arxiv.org/pdf/2402.16829v1
|
2024-02-26T18:55:15Z
|
2024-02-26T18:55:15Z
|
GISTEmbed: Guided In-sample Selection of Training Negatives for Text
Embedding Fine-tuning
|
Embedding models are integral to AI applications like semantic search, personalized recommendations, and retrieval augmented generation for LLMs, necessitating high-quality training data. However, the limited scalability of manual data curation prompts the need for automated methods to ensure data integrity. Traditional unsupervised triplet mining automates training data generation, crucial for embedding model training, yet inadvertently injects biases and noise, thereby degrading model performance. Addressing this, we introduce GISTEmbed, a novel strategy that enhances in-batch negative selection during contrastive training through a guide model. This approach departs from the reliance on random sampling and the equal-utility assumption for batch negatives, significantly reducing noise from data quality issues and improving model fine-tuning. Benchmarked against the Massive Text Embedding Benchmark (MTEB), GISTEmbed showcases consistent performance improvements across various model sizes and achieves state-of-the-art results in select categories. This framework enables significant enhancements for smaller models by leveraging the capabilities of powerful yet resource-intensive large models. GISTEmbed can potentially revolutionize the creation of highly efficient, smaller models, democratizing access to advanced AI technologies. Making these technologies more accessible and cost-effective, especially for applications constrained by resources, significantly expands the impact and accessibility of state-of-the-art AI solutions across diverse sectors.
|
[
"['Aivin V. Solatorio']"
] |
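A minimal sketch of guided in-batch negative selection in an InfoNCE-style loss: a guide model's similarities are used to mask out in-batch candidates that look like false negatives. The specific masking rule (drop candidates the guide scores above the paired positive), the temperature, and the random embeddings are assumptions; GISTEmbed's exact criterion may differ.

```python
import torch
import torch.nn.functional as F

def guided_infonce(q_emb, d_emb, guide_q, guide_d, temp=0.05):
    """In-batch InfoNCE where candidates that the guide model scores higher than
    the paired positive are masked out as suspected false negatives."""
    sim = q_emb @ d_emb.T / temp                           # student similarities (B x B)
    guide_sim = guide_q @ guide_d.T                        # guide similarities   (B x B)
    pos = guide_sim.diag().unsqueeze(1)
    mask = (guide_sim > pos) & ~torch.eye(len(sim), dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))             # remove suspected false negatives
    return F.cross_entropy(sim, torch.arange(len(sim)))    # positives sit on the diagonal

B, d_student, d_guide = 16, 128, 256
q = F.normalize(torch.randn(B, d_student), dim=-1)
docs = F.normalize(torch.randn(B, d_student), dim=-1)
gq = F.normalize(torch.randn(B, d_guide), dim=-1)
gd = F.normalize(torch.randn(B, d_guide), dim=-1)
print(float(guided_infonce(q, docs, gq, gd)))
```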
null | null |
2402.16830
| null | null |
http://arxiv.org/pdf/2402.16830v1
|
2024-02-26T18:56:42Z
|
2024-02-26T18:56:42Z
|
SKILL: Similarity-aware Knowledge distILLation for Speech
Self-Supervised Learning
|
Self-supervised learning (SSL) has achieved remarkable success across various speech-processing tasks. To enhance its efficiency, previous works often leverage the use of compression techniques. A notable recent attempt is DPHuBERT, which applies joint knowledge distillation (KD) and structured pruning to learn a significantly smaller SSL model. In this paper, we contribute to this research domain by introducing SKILL, a novel method that conducts distillation across groups of layers instead of distilling individual arbitrarily selected layers within the teacher network. The identification of the layers to distill is achieved through a hierarchical clustering procedure applied to layer similarity measures. Extensive experiments demonstrate that our distilled version of WavLM Base+ not only outperforms DPHuBERT but also achieves state-of-the-art results in the 30M parameters model class across several SUPERB tasks.
|
[
"['Luca Zampierin' 'Ghouthi Boukli Hacene' 'Bac Nguyen' 'Mirco Ravanelli']"
] |
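The layer-grouping step described above can be sketched as hierarchical clustering over a layer-similarity matrix. The similarity measure (e.g. CKA between layer outputs), the synthetic block-structured matrix, and the number of groups below are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def group_layers(similarity, n_groups):
    """Group teacher layers by hierarchical clustering of a layer-similarity
    matrix; distillation targets are then defined per group of layers."""
    dist = 1.0 - similarity
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")

# Toy 12-layer similarity matrix with three blocks of four similar layers
rng = np.random.default_rng(0)
block = np.kron(np.eye(3), np.ones((4, 4)))
sim = 0.6 * block + 0.3 + 0.05 * rng.random((12, 12))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)
print(group_layers(sim, n_groups=3))    # e.g. [1 1 1 1 2 2 2 2 3 3 3 3] up to label permutation
```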
null | null |
2402.16842
| null | null |
http://arxiv.org/pdf/2402.16842v2
|
2024-02-27T18:06:29Z
|
2024-02-26T18:59:12Z
|
Asymmetry in Low-Rank Adapters of Foundation Models
|
Parameter-efficient fine-tuning optimizes large, pre-trained foundation models by updating a subset of parameters; in this class, Low-Rank Adaptation (LoRA) is particularly effective. Inspired by an effort to investigate the different roles of LoRA matrices during fine-tuning, this paper characterizes and leverages unexpected asymmetry in the importance of low-rank adapter matrices. Specifically, when updating the parameter matrices of a neural network by adding a product $BA$, we observe that the $B$ and $A$ matrices have distinct functions: $A$ extracts features from the input, while $B$ uses these features to create the desired output. Based on this observation, we demonstrate that fine-tuning $B$ is inherently more effective than fine-tuning $A$, and that a random untrained $A$ should perform nearly as well as a fine-tuned one. Using an information-theoretic lens, we also bound the generalization of low-rank adapters, showing that the parameter savings of exclusively training $B$ improves the bound. We support our conclusions with experiments on RoBERTa, BART-Large, LLaMA-2, and ViTs.
|
[
"['Jiacheng Zhu' 'Kristjan Greenewald' 'Kimia Nadjahi'\n 'Haitz Sáez de Ocáriz Borde' 'Rickard Brüel Gabrielsson' 'Leshem Choshen'\n 'Marzyeh Ghassemi' 'Mikhail Yurochkin' 'Justin Solomon']"
] |
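The asymmetry finding above suggests a simple variant: keep A as a fixed random projection and train only B. A minimal sketch for a single linear layer follows; the rank, scaling, and initialisation are common LoRA conventions assumed here, not values prescribed by the paper.

```python
import torch
import torch.nn as nn

class LoRALinearBOnly(nn.Module):
    """Frozen pretrained linear layer plus a low-rank update B @ A, where A is
    a fixed random projection and only B is trained."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.register_buffer("A", torch.randn(rank, d_in) / rank ** 0.5)  # frozen, random
        self.B = nn.Parameter(torch.zeros(d_out, rank))                   # the only trained part
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinearBOnly(nn.Linear(64, 64))
print([n for n, p in layer.named_parameters() if p.requires_grad])   # only 'B' is updated
```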
null | null |
2402.16843
| null | null |
http://arxiv.org/pdf/2402.16843v1
|
2024-02-26T18:59:18Z
|
2024-02-26T18:59:18Z
|
Multi-LoRA Composition for Image Generation
|
Low-Rank Adaptation (LoRA) is extensively utilized in text-to-image models for the accurate rendition of specific elements like distinct characters or unique styles in generated images. Nonetheless, existing methods face challenges in effectively composing multiple LoRAs, especially as the number of LoRAs to be integrated grows, thus hindering the creation of complex imagery. In this paper, we study multi-LoRA composition through a decoding-centric perspective. We present two training-free methods: LoRA Switch, which alternates between different LoRAs at each denoising step, and LoRA Composite, which simultaneously incorporates all LoRAs to guide more cohesive image synthesis. To evaluate the proposed approaches, we establish ComposLoRA, a new comprehensive testbed as part of this research. It features a diverse range of LoRA categories with 480 composition sets. Utilizing an evaluation framework based on GPT-4V, our findings demonstrate a clear improvement in performance with our methods over the prevalent baseline, particularly evident when increasing the number of LoRAs in a composition.
|
[
"['Ming Zhong' 'Yelong Shen' 'Shuohang Wang' 'Yadong Lu' 'Yizhu Jiao'\n 'Siru Ouyang' 'Donghan Yu' 'Jiawei Han' 'Weizhu Chen']"
] |
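The two training-free schemes above can be sketched independently of any particular diffusion library: LoRA Switch cycles which single LoRA is active per denoising step, and LoRA Composite combines the per-LoRA guidance at each step. The cycling order, uniform weights, and stand-in noise arrays below are assumptions; in practice both operate inside the denoiser's guidance loop.

```python
import numpy as np

def lora_switch_schedule(lora_names, num_steps):
    """LoRA Switch: activate exactly one LoRA per denoising step, cycling
    through the set so every LoRA steers part of the trajectory."""
    return [lora_names[t % len(lora_names)] for t in range(num_steps)]

def lora_composite(noise_preds, weights=None):
    """LoRA Composite: combine the guidance (noise estimates) produced with
    each LoRA attached at the same step, here by a weighted average."""
    weights = weights or [1.0 / len(noise_preds)] * len(noise_preds)
    return sum(w * p for w, p in zip(weights, noise_preds))

print(lora_switch_schedule(["character", "clothing", "style"], num_steps=7))
step_preds = [np.random.randn(4, 64, 64) for _ in range(3)]   # stand-ins for per-LoRA noise estimates
print(lora_composite(step_preds).shape)
```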