categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.14329 | null | null | http://arxiv.org/pdf/2406.14329v1 | 2024-06-20T14:00:01Z | 2024-06-20T14:00:01Z | Adaptive Adversarial Cross-Entropy Loss for Sharpness-Aware Minimization | Recent advancements in learning algorithms have demonstrated that the sharpness of the loss surface is an effective measure for improving the generalization gap. Building upon this concept, Sharpness-Aware Minimization (SAM) was proposed to enhance model generalization and achieved state-of-the-art performance. SAM consists of two main steps: the weight perturbation step and the weight updating step. However, the perturbation in SAM is determined only by the gradient of the training loss, or cross-entropy loss. As the model approaches a stationary point, this gradient becomes small and oscillates, leading to inconsistent perturbation directions and a risk of a diminishing gradient. Our research introduces an innovative approach to further enhancing model generalization. We propose the Adaptive Adversarial Cross-Entropy (AACE) loss function to replace the standard cross-entropy loss for SAM's perturbation. The AACE loss and its gradient uniquely increase as the model nears convergence, ensuring a consistent perturbation direction and addressing the diminishing-gradient issue. Additionally, a novel perturbation-generating function utilizing AACE loss without normalization is proposed, enhancing the model's exploratory capabilities in near-optimum stages. Empirical testing confirms the effectiveness of AACE, with experiments demonstrating improved performance in image classification tasks using Wide ResNet and PyramidNet across various datasets. The reproduction code is available online. | [
"['Tanapat Ratchatorn' 'Masayuki Tanaka']"
]
|
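The two SAM steps named in this abstract are easy to sketch. Below is a minimal NumPy illustration of vanilla SAM on a toy least-squares objective standing in for cross-entropy; the AACE perturbation loss itself is not specified in the abstract, so the `loss`/`grad` pair and all hyperparameters are illustrative assumptions only.

```python
import numpy as np

def loss(w, X, y):
    # toy least-squares objective standing in for the training loss
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

def sam_step(w, X, y, lr=0.1, rho=0.05):
    g = grad(w, X, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # step 1: weight perturbation
    g_adv = grad(w + eps, X, y)                  # gradient at the perturbed weights
    return w - lr * g_adv                        # step 2: weight update

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = sam_step(w, X, y)
print(w)  # close to w_true
```

AACE would swap the loss used in step 1 for one whose gradient grows near convergence, keeping the perturbation direction stable.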
null | null | 2406.14340 | null | null | http://arxiv.org/pdf/2406.14340v1 | 2024-06-20T14:07:39Z | 2024-06-20T14:07:39Z | Learning rate adaptive stochastic gradient descent optimization methods:
numerical simulations for deep learning methods for partial differential
equations and convergence analyses | It is known that the standard stochastic gradient descent (SGD) optimization method, as well as accelerated and adaptive SGD optimization methods such as the Adam optimizer, fail to converge if the learning rates do not converge to zero (as, for example, in the situation of constant learning rates). Numerical simulations often use human-tuned deterministic learning rate schedules or small constant learning rates. The default learning rate schedules for SGD optimization methods in machine learning implementation frameworks such as TensorFlow and PyTorch are constant learning rates. In this work we propose and study a learning-rate-adaptive approach for SGD optimization methods in which the learning rate is adjusted based on empirical estimates for the values of the objective function of the considered optimization problem (the function that one intends to minimize). In particular, we propose a learning-rate-adaptive variant of the Adam optimizer and implement it in case of several neural network learning problems, particularly, in the context of deep learning approximation methods for partial differential equations such as deep Kolmogorov methods, physics-informed neural networks, and deep Ritz methods. In each of the presented learning problems the proposed learning-rate-adaptive variant of the Adam optimizer reduces the value of the objective function faster than the Adam optimizer with the default learning rate. For a simple class of quadratic minimization problems we also rigorously prove that a learning-rate-adaptive variant of the SGD optimization method converges to the minimizer of the considered minimization problem. Our convergence proof is based on an analysis of the laws of invariant measures of the SGD method as well as on a more general convergence analysis for SGD with random but predictable learning rates which we develop in this work. | [
"['Steffen Dereich' 'Arnulf Jentzen' 'Adrian Riekert']"
]
|
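The abstract says the learning rate is adjusted from empirical estimates of the objective value but does not give the exact schedule. The sketch below shows one plausible instantiation (shrink the rate whenever the estimated objective stops improving); the function names and all constants are assumptions for illustration.

```python
import numpy as np

def lr_adaptive_sgd(grad_fn, loss_fn, w, lr=0.5, shrink=0.5,
                    check_every=50, steps=1000, seed=0):
    """SGD that decays its learning rate whenever an empirical estimate
    of the objective stops decreasing (a hypothetical variant of the idea)."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for t in range(steps):
        w = w - lr * grad_fn(w, rng)
        if (t + 1) % check_every == 0:
            est = np.mean([loss_fn(w, rng) for _ in range(10)])
            if est >= best:      # no measurable progress: shrink the rate
                lr *= shrink
            best = min(best, est)
    return w, lr

# toy problem: minimize f(w) = ||w||^2 / 2 with noisy oracles
grad_fn = lambda w, rng: w + 0.1 * rng.normal(size=w.shape)
loss_fn = lambda w, rng: 0.5 * float(w @ w) + 0.01 * float(rng.normal())
w, final_lr = lr_adaptive_sgd(grad_fn, loss_fn, np.full(5, 3.0))
print(np.round(w, 3), final_lr)  # w near 0, learning rate decayed
```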
null | null | 2406.14341 | null | null | http://arxiv.org/pdf/2406.14341v1 | 2024-06-20T14:09:00Z | 2024-06-20T14:09:00Z | HoTPP Benchmark: Are We Good at the Long Horizon Events Forecasting? | In sequential event prediction, which finds applications in finance, retail, social networks, and healthcare, a crucial task is forecasting multiple future events within a specified time horizon. Traditionally, this has been addressed through autoregressive generation using next-event prediction models, such as Marked Temporal Point Processes. However, autoregressive methods use their own output for future predictions, potentially reducing quality as the prediction horizon extends. In this paper, we challenge traditional approaches by introducing a novel benchmark, HoTPP, specifically designed to evaluate a model's ability to predict event sequences over a horizon. This benchmark features a new metric inspired by object detection in computer vision, addressing the limitations of existing metrics in assessing models with imprecise time-step predictions. Our evaluations on established datasets employing various models demonstrate that high accuracy in next-event prediction does not necessarily translate to superior horizon prediction, and vice versa. HoTPP aims to serve as a valuable tool for developing more robust event sequence prediction methods, ultimately paving the way for further advancements in the field. | [
"['Ivan Karpukhin' 'Foma Shipilov' 'Andrey Savchenko']"
]
|
null | null | 2406.14347 | null | null | http://arxiv.org/pdf/2406.14347v1 | 2024-06-20T14:14:59Z | 2024-06-20T14:14:59Z | $\nabla^2$DFT: A Universal Quantum Chemistry Dataset of Drug-Like
Molecules and a Benchmark for Neural Network Potentials | Methods of computational quantum chemistry provide accurate approximations of molecular properties crucial for computer-aided drug discovery and other areas of chemical science. However, high computational complexity limits the scalability of their applications. Neural network potentials (NNPs) are a promising alternative to quantum chemistry methods, but they require large and diverse datasets for training. This work presents a new dataset and benchmark called $\nabla^2$DFT that is based on nablaDFT. It contains twice as many molecular structures, three times as many conformations, new data types and tasks, and state-of-the-art models. The dataset includes energies, forces, 17 molecular properties, Hamiltonian and overlap matrices, and a wavefunction object. All calculations were performed at the DFT level ($\omega$B97X-D/def2-SVP) for each conformation. Moreover, $\nabla^2$DFT is the first dataset that contains relaxation trajectories for a substantial number of drug-like molecules. We also introduce a novel benchmark for evaluating NNPs in molecular property prediction, Hamiltonian prediction, and conformational optimization tasks. Finally, we propose an extendable framework for training NNPs and implement 10 models within it. | [
"['Kuzma Khrabrov' 'Anton Ber' 'Artem Tsypin' 'Konstantin Ushenin'\n 'Egor Rumiantsev' 'Alexander Telepov' 'Dmitry Protasov' 'Ilya Shenbin'\n 'Anton Alekseev' 'Mikhail Shirokikh' 'Sergey Nikolenko'\n 'Elena Tutubalina' 'Artur Kadurin']"
]
|
null | null | 2406.14349 | null | null | http://arxiv.org/pdf/2406.14349v1 | 2024-06-20T14:17:57Z | 2024-06-20T14:17:57Z | Can you trust your explanations? A robustness test for feature
attribution methods | The increase in legislative concern over the use of Artificial Intelligence (AI) has recently led to a series of regulations striving for a more transparent, trustworthy and accountable AI. Along with these proposals, the field of Explainable AI (XAI) has seen rapid growth, but the use of its techniques has at times led to unexpected results. The robustness of the approaches is, in fact, a key property often overlooked: it is necessary to evaluate the stability of an explanation (to random and adversarial perturbations) to ensure that the results are trustworthy. To this end, we propose a test to evaluate the robustness to non-adversarial perturbations and an ensemble approach to analyse in more depth the robustness of XAI methods applied to neural networks and tabular datasets. We show how leveraging the manifold hypothesis and ensemble approaches can be beneficial to an in-depth analysis of the robustness. | [
"['Ilaria Vascotto' 'Alex Rodriguez' 'Alessandro Bonaita' 'Luca Bortolussi']"
]
|
null | null | 2406.14351 | null | null | http://arxiv.org/pdf/2406.14351v1 | 2024-06-20T14:20:50Z | 2024-06-20T14:20:50Z | Automatic Labels are as Effective as Manual Labels in Biomedical Images
Classification with Deep Learning | The increasing availability of biomedical data is helping to design more robust deep learning (DL) algorithms to analyze biomedical samples. Currently, one of the main limitations to train DL algorithms to perform a specific task is the need for medical experts to label data. Automatic methods to label data exist; however, automatic labels can be noisy, and it is not completely clear when automatic labels can be adopted to train DL models. This paper aims to investigate under which circumstances automatic labels can be adopted to train a DL model on the classification of Whole Slide Images (WSI). The analysis involves multiple architectures, such as Convolutional Neural Networks (CNN) and Vision Transformers (ViT), and over 10000 WSIs, collected from three use cases: celiac disease, lung cancer and colon cancer, involving binary, multiclass and multilabel data, respectively. The results identify 10% as the percentage of noisy labels that still allows training competitive models for the classification of WSIs. Therefore, an algorithm generating automatic labels must meet this criterion to be adopted. The application of the Semantic Knowledge Extractor Tool (SKET) algorithm to generate automatic labels leads to performance comparable to that obtained with manual labels, since it generates a percentage of noisy labels between 2% and 5%. Automatic labels are thus as effective as manual ones, reaching solid performance comparable to that obtained by training models with manual labels. | [
"['Niccolò Marini' 'Stefano Marchesin' 'Lluis Borras Ferris'\n 'Simon Püttmann' 'Marek Wodzinski' 'Riccardo Fratti' 'Damian Podareanu'\n 'Alessandro Caputo' 'Svetla Boytcheva' 'Simona Vatrano'\n 'Filippo Fraggetta' 'Iris Nagtegaal' 'Gianmaria Silvello'\n 'Manfredo Atzori' 'Henning Müller']"
]
|
null | null | 2406.14362 | null | null | http://arxiv.org/pdf/2406.14362v1 | 2024-06-20T14:36:12Z | 2024-06-20T14:36:12Z | Communication-Efficient Byzantine-Resilient Federated Zero-Order
Optimization | We introduce CYBER-0, the first zero-order optimization algorithm for memory- and communication-efficient Federated Learning, resilient to Byzantine faults. We show through extensive numerical experiments on the MNIST dataset and on finetuning RoBERTa-Large that CYBER-0 outperforms state-of-the-art algorithms in terms of communication and memory efficiency while reaching similar accuracy. We provide theoretical guarantees on its convergence for convex loss functions. | [
"['Afonso de Sá Delgado Neto' 'Maximilian Egger' 'Mayank Bakshi'\n 'Rawad Bitar']"
]
|
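CYBER-0's exact update is not given in the abstract; the sketch below shows the generic two-point zeroth-order gradient estimator such methods build on, where exchanging only a seed and a scalar finite difference is what keeps communication cheap. All names and constants are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, w, seed, mu=1e-4):
    """Two-point zeroth-order gradient estimate along a shared random
    direction; only the scalar `delta` would need to be communicated."""
    u = np.random.default_rng(seed).normal(size=w.shape)
    delta = (f(w + mu * u) - f(w - mu * u)) / (2 * mu)  # scalar
    return delta * u

# toy convex loss f(w) = ||w - w*||^2
w_star = np.array([1.0, -2.0, 0.5])
f = lambda w: np.sum((w - w_star) ** 2)
w = np.zeros(3)
for t in range(2000):
    w -= 0.05 * zo_gradient(f, w, seed=t)
print(w)  # approaches w_star
```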
null | null | 2406.14377 | null | null | http://arxiv.org/pdf/2406.14377v1 | 2024-06-20T14:45:13Z | 2024-06-20T14:45:13Z | Computation-Efficient Semi-Supervised Learning for ECG-based
Cardiovascular Diseases Detection | The label scarcity problem is the main challenge that hinders the wide application of deep learning systems in automatic cardiovascular diseases (CVDs) detection using electrocardiography (ECG). Tuning pre-trained models alleviates this problem by transferring knowledge learned from large datasets to downstream small datasets. However, bottlenecks in computational efficiency and CVDs detection performance limit its clinical applications. It is difficult to improve the detection performance without significantly sacrificing model computational efficiency. Here, we propose a computation-efficient semi-supervised learning paradigm (FastECG) for robust and computation-efficient CVDs detection using ECG. It enables a robust adaptation of pre-trained models on downstream datasets with limited supervision and high computational efficiency. First, a random-deactivation technique is developed to achieve robust and fast low-rank adaptation of pre-trained weights. Subsequently, we propose a one-shot rank allocation module to determine the optimal ranks for the update matrices of the pre-trained weights. Finally, a lightweight semi-supervised learning pipeline is introduced to enhance model performance by leveraging labeled and unlabeled data with high computational efficiency. Extensive experiments on four downstream ECG datasets demonstrate that FastECG not only outperforms the state-of-the-art methods in multi-label CVDs detection but also requires a smaller GPU memory footprint, less training time, and less parameter storage space. As such, this paradigm provides an effective solution for achieving high computational efficiency and robust detection performance in the clinical applications of pre-trained models under limited supervision. | [
"['Rushuang Zhou' 'Zijun Liu' 'Lei Clifton' 'David A. Clifton'\n 'Kannie W. Y. Chan' 'Yuan-Ting Zhang' 'Yining Dong']"
]
|
null | null | 2406.14380 | null | null | http://arxiv.org/pdf/2406.14380v3 | 2024-07-05T13:40:48Z | 2024-06-20T14:53:26Z | Estimating Treatment Effects under Recommender Interference: A
Structured Neural Networks Approach | Recommender systems are essential for content-sharing platforms by curating personalized content. To evaluate updates to recommender systems targeting content creators, platforms frequently rely on creator-side randomized experiments. The treatment effect measures the change in outcomes when a new algorithm is implemented compared to the status quo. We show that the standard difference-in-means estimator can lead to biased estimates due to recommender interference that arises when treated and control creators compete for exposure. We propose a "recommender choice model" that describes which item gets exposed from a pool containing both treated and control items. By combining a structural choice model with neural networks, this framework directly models the interference pathway while accounting for rich viewer-content heterogeneity. We construct a debiased estimator of the treatment effect and prove it is $\sqrt{n}$-consistent and asymptotically normal with potentially correlated samples. We validate our estimator's empirical performance with a field experiment on the Weixin short-video platform. In addition to the standard creator-side experiment, we conduct a costly double-sided randomization design to obtain a benchmark estimate free from interference bias. We show that the proposed estimator yields results comparable to the benchmark, whereas the standard difference-in-means estimator can exhibit significant bias and even produce reversed signs. | [
"['Ruohan Zhan' 'Shichao Han' 'Yuchen Hu' 'Zhenling Jiang']"
]
|
null | null | 2406.14388 | null | null | http://arxiv.org/pdf/2406.14388v1 | 2024-06-20T15:05:06Z | 2024-06-20T15:05:06Z | Active Diffusion Subsampling | Subsampling is commonly used to mitigate costs associated with data acquisition, such as time or energy requirements, motivating the development of algorithms for estimating the fully-sampled signal of interest $x$ from partially observed measurements $y$. In maximum-entropy sampling, one selects measurement locations that are expected to have the highest entropy, so as to minimize uncertainty about $x$. This approach relies on an accurate model of the posterior distribution over future measurements, given the measurements observed so far. Recently, diffusion models have been shown to produce high-quality posterior samples of high-dimensional signals using guided diffusion. In this work, we propose Active Diffusion Subsampling (ADS), a method for performing active subsampling using guided diffusion in which the model tracks a distribution of beliefs over the true state of $x$ throughout the reverse diffusion process, progressively decreasing its uncertainty by choosing to acquire measurements with maximum expected entropy, and ultimately generating the posterior distribution $p(x | y)$. ADS can be applied using pre-trained diffusion models for any subsampling rate, and does not require task-specific retraining - just the specification of a measurement model. Furthermore, the maximum entropy sampling policy employed by ADS is interpretable, enhancing transparency relative to existing methods using black-box policies. Experimentally, we show that ADS outperforms fixed sampling strategies, and study an application of ADS in Magnetic Resonance Imaging acceleration using the fastMRI dataset, finding that ADS performs competitively with supervised methods. Code available at https://active-diffusion-subsampling.github.io/. | [
"['Oisin Nolan' 'Tristan S. W. Stevens' 'Wessel L. van Nierop'\n 'Ruud J. G. van Sloun']"
]
|
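Guided diffusion itself is beyond a short sketch, but the acquisition rule ADS uses can be illustrated: given posterior samples of $x$, measure next where the samples disagree most (variance as a Gaussian entropy proxy). The setup below is a toy stand-in under that assumption, not the paper's pipeline.

```python
import numpy as np

def next_measurement(posterior_samples, measured):
    """Pick the unmeasured location whose simulated values have the largest
    variance -- a Gaussian proxy for maximum expected entropy.
    `posterior_samples` has shape (num_samples, signal_dim)."""
    var = posterior_samples.var(axis=0)
    var[list(measured)] = -np.inf          # never re-measure a location
    return int(np.argmax(var))

rng = np.random.default_rng(0)
# columns with increasing uncertainty stand in for diffusion posterior samples
samples = rng.normal(size=(256, 10)) * np.linspace(0.1, 1.0, 10)
print(next_measurement(samples, measured={9}))  # picks 8, the next most uncertain
```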
null | null | 2406.14393 | null | null | http://arxiv.org/pdf/2406.14393v2 | 2024-07-12T08:15:45Z | 2024-06-20T15:12:27Z | Jailbreaking as a Reward Misspecification Problem | The widespread adoption of large language models (LLMs) has raised concerns about their safety and reliability, particularly regarding their vulnerability to adversarial attacks. In this paper, we propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process. We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness and robustness in detecting harmful backdoor prompts. Building upon these insights, we present ReMiss, a system for automated red teaming that generates adversarial prompts against various target aligned LLMs. ReMiss achieves state-of-the-art attack success rates on the AdvBench benchmark while preserving the human readability of the generated prompts. Detailed analysis highlights the unique advantages brought by the proposed reward misspecification objective compared to previous methods. | [
"['Zhihui Xie' 'Jiahui Gao' 'Lei Li' 'Zhenguo Li' 'Qi Liu' 'Lingpeng Kong']"
]
|
null | null | 2406.14399 | null | null | http://arxiv.org/pdf/2406.14399v1 | 2024-06-20T15:18:52Z | 2024-06-20T15:18:52Z | WEATHER-5K: A Large-scale Global Station Weather Dataset Towards
Comprehensive Time-series Forecasting Benchmark | Global Station Weather Forecasting (GSWF) is crucial for various sectors, including aviation, agriculture, energy, and disaster preparedness. Recent advancements in deep learning have significantly improved the accuracy of weather predictions by optimizing models based on public meteorological data. However, existing public datasets for GSWF optimization and benchmarking still suffer from significant limitations, such as small sizes, limited temporal coverage, and a lack of comprehensive variables. These shortcomings prevent them from effectively reflecting the benchmarks of current forecasting methods and fail to support the real needs of operational weather forecasting. To address these challenges, we present the WEATHER-5K dataset. This dataset comprises a comprehensive collection of data from 5,672 weather stations worldwide, spanning a 10-year period with one-hour intervals. It includes multiple crucial weather elements, providing a more reliable and interpretable resource for forecasting. Furthermore, our WEATHER-5K dataset can serve as a benchmark for comprehensively evaluating existing well-known forecasting models, extending beyond GSWF methods to support future time-series research challenges and opportunities. The dataset and benchmark implementation are publicly available at: https://github.com/taohan10200/WEATHER-5K. | [
"['Tao Han' 'Song Guo' 'Zhenghao Chen' 'Wanghan Xu' 'Lei Bai']"
]
|
null | null | 2406.14401 | null | null | http://arxiv.org/pdf/2406.14401v1 | 2024-06-20T15:22:44Z | 2024-06-20T15:22:44Z | Fair Streaming Feature Selection | Streaming feature selection techniques have become essential in processing real-time data streams, as they facilitate the identification of the most relevant attributes from continuously updating information. Despite their performance, current algorithms for streaming feature selection frequently fall short in managing biases and avoiding discrimination that could be perpetuated by sensitive attributes, potentially leading to unfair outcomes in the resulting models. To address this issue, we propose FairSFS, a novel algorithm for Fair Streaming Feature Selection, to uphold fairness in the feature selection process without compromising the ability to handle data in an online manner. FairSFS adapts to incoming feature vectors by dynamically adjusting the feature set and discerns the correlations between classification attributes and sensitive attributes from this revised set, thereby forestalling the propagation of sensitive data. Empirical evaluations show that FairSFS not only maintains accuracy that is on par with leading streaming feature selection methods and existing fair feature techniques but also significantly improves fairness metrics. | [
"['Zhangling Duan' 'Tianci Li' 'Xingyu Wu' 'Zhaolong Ling' 'Jingye Yang'\n 'Zhaohong Jia']"
]
|
null | null | 2406.14404 | null | null | http://arxiv.org/pdf/2406.14404v1 | 2024-06-20T15:25:13Z | 2024-06-20T15:25:13Z | Predicting Probabilities of Error to Combine Quantization and Early
Exiting: QuEE | Machine learning models can solve complex tasks but often require significant computational resources during inference. This has led to the development of various post-training computation reduction methods that tackle this issue in different ways, such as quantization, which reduces the precision of weights and arithmetic operations, and dynamic networks, which adapt computation to the sample at hand. In this work, we propose a more general dynamic network, QuEE, that can combine both quantization and early exiting. Our algorithm can be seen as a form of soft early exiting or input-dependent compression. Rather than a binary decision between exiting or continuing, we introduce the possibility of continuing with reduced computation. This complicates the traditionally considered early exiting problem, which we solve through a principled formulation. The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation. We demonstrate the effectiveness of our method through empirical evaluation, as well as exploring the conditions for its success on 4 classification datasets. | [
"['Florence Regol' 'Joud Chataoui' 'Bertrand Charpentier' 'Mark Coates'\n 'Pablo Piantanida' 'Stephan Gunnemann']"
]
|
null | null | 2406.14408 | null | null | http://arxiv.org/pdf/2406.14408v2 | 2024-06-21T02:51:41Z | 2024-06-20T15:31:05Z | FVEL: Interactive Formal Verification Environment with Large Language
Models via Theorem Proving | Formal verification (FV) has gained significance with the emergence of program synthesis driven by evolving large language models (LLMs). However, current formal verification mainly resorts to symbolic verifiers or hand-crafted rules, resulting in limitations for extensive and flexible verification. On the other hand, formal languages for automated theorem proving, such as Isabelle, as another line of rigorous verification, are maintained with comprehensive rules and theorems. In this paper, we propose FVEL, an interactive Formal Verification Environment with LLMs. Specifically, FVEL transforms a given code to be verified into Isabelle, and then conducts verification via neural automated theorem proving with an LLM. The joined paradigm leverages the rigorous yet abundant formulated and organized rules in Isabelle and is also convenient for introducing and adjusting cutting-edge LLMs. To achieve this goal, we extract a large-scale dataset, FVELER. The FVELER dataset includes code dependencies and verification processes that are formulated in Isabelle, containing 758 theories, 29,125 lemmas, and 200,646 proof steps in total with in-depth dependencies. We benchmark FVELER in the FVEL environment by first fine-tuning LLMs with FVELER and then evaluating them on Code2Inv and SV-COMP. The results show that FVEL with FVELER-fine-tuned Llama3-8B solves 17.39% (69 -> 81) more problems, and Mistral-7B 12% (75 -> 84) more problems in SV-COMP. The proportion of proof errors is also reduced. Project page: https://fveler.github.io/. | [
"['Xiaohan Lin' 'Qingxing Cao' 'Yinya Huang' 'Haiming Wang' 'Jianqiao Lu'\n 'Zhengying Liu' 'Linqi Song' 'Xiaodan Liang']"
]
|
null | null | 2406.14415 | null | null | http://arxiv.org/pdf/2406.14415v1 | 2024-06-20T15:34:17Z | 2024-06-20T15:34:17Z | Vectorized Representation Dreamer (VRD): Dreaming-Assisted Multi-Agent
Motion-Forecasting | For an autonomous vehicle to plan a path in its environment, it must be able to accurately forecast the trajectory of all dynamic objects in its proximity. While many traditional methods encode observations in the scene to solve this problem, there are few approaches that consider the effect of the ego vehicle's behavior on the future state of the world. In this paper, we introduce VRD, a vectorized world model-inspired approach to the multi-agent motion forecasting problem. Our method combines a traditional open-loop training regime with a novel dreamed closed-loop training pipeline that leverages a kinematic reconstruction task to imagine the trajectory of all agents, conditioned on the action of the ego vehicle. Quantitative and qualitative experiments are conducted on the Argoverse 2 multi-world forecasting evaluation dataset and the intersection drone (inD) dataset to demonstrate the performance of our proposed model. Our model achieves state-of-the-art performance on the single prediction miss rate metric on the Argoverse 2 dataset and performs on par with the leading models for the single prediction displacement metrics. | [
"['Hunter Schofield' 'Hamidreza Mirkhani' 'Mohammed Elmahgiubi'\n 'Kasra Rezaee' 'Jinjun Shan']"
]
|
null | null | 2406.14420 | null | null | http://arxiv.org/pdf/2406.14420v1 | 2024-06-20T15:40:38Z | 2024-06-20T15:40:38Z | Communication-efficient Vertical Federated Learning via Compressed Error
Feedback | Communication overhead is a known bottleneck in federated learning (FL). To address this, lossy compression is commonly used on the information communicated between the server and clients during training. In horizontal FL, where each client holds a subset of the samples, such communication-compressed training methods have recently seen significant progress. However, in their vertical FL counterparts, where each client holds a subset of the features, our understanding remains limited. To address this, we propose an error feedback compressed vertical federated learning (EFVFL) method to train split neural networks. In contrast with previous communication-compressed methods for vertical FL, EFVFL does not require a vanishing compression error for the gradient norm to converge to zero for smooth nonconvex problems. By leveraging error feedback, our method can achieve a $\mathcal{O}(1/T)$ convergence rate in the full-batch case, improving over the state-of-the-art $\mathcal{O}(1/\sqrt{T})$ rate under $\mathcal{O}(1/\sqrt{T})$ compression error, and matching the rate of uncompressed methods. Further, when the objective function satisfies the Polyak-Łojasiewicz inequality, our method converges linearly. In addition to improving convergence rates, our method also supports the use of private labels. Numerical experiments show that EFVFL significantly improves over the prior art, confirming our theoretical results. | [
"['Pedro Valdeira' 'João Xavier' 'Cláudia Soares' 'Yuejie Chi']"
]
|
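The error-feedback mechanism at the heart of EFVFL can be sketched generically: compress the gradient plus the residual carried over from earlier rounds, transmit the compressed message, and store what the compressor dropped. The top-k compressor and dimensions below are illustrative; the paper's split-network specifics are not reproduced.

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries; a standard biased compressor."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, error, k):
    """Error feedback: compress gradient + carried-over error, then store
    what the compressor dropped for the next round."""
    corrected = grad + error
    message = topk(corrected, k)      # what actually gets communicated
    new_error = corrected - message   # residual is fed back next step
    return message, new_error

rng = np.random.default_rng(0)
g, e = rng.normal(size=8), np.zeros(8)
msg, e = ef_step(g, e, k=2)
print(msg, e)  # sparse message, dense residual kept locally
```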
null | null | 2406.14424 | null | null | http://arxiv.org/pdf/2406.14424v1 | 2024-06-20T15:47:37Z | 2024-06-20T15:47:37Z | CascadeServe: Unlocking Model Cascades for Inference Serving | Machine learning (ML) models are increasingly deployed to production, calling for efficient inference serving systems. Efficient inference serving is complicated by two challenges: (i) ML models incur high computational costs, and (ii) the request arrival rates of practical applications have frequent, high, and sudden variations which make it hard to correctly provision hardware. Model cascades are positioned to tackle both of these challenges, as they (i) save work while maintaining accuracy, and (ii) expose a high-resolution trade-off between work and accuracy, allowing for fine-grained adjustments to request arrival rates. Despite their potential, model cascades haven't been used inside an online serving system. This comes with its own set of challenges, including workload adaption, model replication onto hardware, inference scheduling, request batching, and more. In this work, we propose CascadeServe, which automates and optimizes end-to-end inference serving with cascades. CascadeServe operates in an offline and online phase. In the offline phase, the system pre-computes a gear plan that specifies how to serve inferences online. In the online phase, the gear plan allows the system to serve inferences while making near-optimal adaptations to the query load at negligible decision overheads. We find that CascadeServe saves 2-3x in cost across a wide spectrum of the latency-accuracy space when compared to state-of-the-art baselines on different workloads. | [
"['Ferdi Kossmann' 'Ziniu Wu' 'Alex Turk' 'Nesime Tatbul' 'Lei Cao'\n 'Samuel Madden']"
]
|
null | null | 2406.14425 | null | null | http://arxiv.org/pdf/2406.14425v2 | 2024-06-25T13:48:41Z | 2024-06-20T15:49:28Z | SynDARin: Synthesising Datasets for Automated Reasoning in Low-Resource
Languages | Question Answering (QA) datasets have been instrumental in developing and evaluating Large Language Model (LLM) capabilities. However, such datasets are scarce for languages other than English due to the cost and difficulties of collection and manual annotation. This means that producing novel models and measuring the performance of multilingual LLMs in low-resource languages is challenging. To mitigate this, we propose $\textbf{S}$yn$\textbf{DAR}$in, a method for generating and validating QA datasets for low-resource languages. We utilize parallel content mining to obtain $\textit{human-curated}$ paragraphs between English and the target language. We use the English data as context to $\textit{generate}$ synthetic multiple-choice (MC) question-answer pairs, which are automatically translated and further validated for quality. Combining these with their designated non-English $\textit{human-curated}$ paragraphs forms the final QA dataset. The method makes it possible to maintain content quality, reduces the likelihood of factual errors, and circumvents the need for costly annotation. To test the method, we created a QA dataset with $1.2$K samples for the Armenian language. The human evaluation shows that $98\%$ of the generated English data maintains quality and diversity in the question types and topics, while the translation validation pipeline can filter out $\sim 70\%$ of data with poor quality. We use the dataset to benchmark state-of-the-art LLMs, showing their inability to achieve human accuracy, with some model performances closer to random chance. This shows that the generated dataset is non-trivial and can be used to evaluate reasoning capabilities in a low-resource language. | [
"['Gayane Ghazaryan' 'Erik Arakelyan' 'Pasquale Minervini'\n 'Isabelle Augenstein']"
]
|
null | null | 2406.14426 | null | null | http://arxiv.org/pdf/2406.14426v1 | 2024-06-20T15:50:12Z | 2024-06-20T15:50:12Z | Transferable Boltzmann Generators | The generation of equilibrium samples of molecular systems has been a long-standing problem in statistical physics. Boltzmann Generators are a generative machine learning method that addresses this issue by learning a transformation via a normalizing flow from a simple prior distribution to the target Boltzmann distribution of interest. Recently, flow matching has been employed to train Boltzmann Generators for small molecular systems in Cartesian coordinates. We extend this work and propose a first framework for Boltzmann Generators that are transferable across chemical space, such that they predict zero-shot Boltzmann distributions for test molecules without being retrained for these systems. These transferable Boltzmann Generators allow approximate sampling from the target distribution of unseen systems, as well as efficient reweighting to the target Boltzmann distribution. The transferability of the proposed framework is evaluated on dipeptides, where we show that it generalizes efficiently to unseen systems. Furthermore, we demonstrate that our proposed architecture enhances the efficiency of Boltzmann Generators trained on single molecular systems. | [
"['Leon Klein' 'Frank Noé']"
]
|
null | null | 2406.14429 | null | null | http://arxiv.org/pdf/2406.14429v1 | 2024-06-20T15:54:21Z | 2024-06-20T15:54:21Z | CollaFuse: Collaborative Diffusion Models | In the landscape of generative artificial intelligence, diffusion-based models have emerged as a promising method for generating synthetic images. However, the application of diffusion models poses numerous challenges, particularly concerning data availability, computational requirements, and privacy. Traditional approaches to address these shortcomings, like federated learning, often impose significant computational burdens on individual clients, especially those with constrained resources. In response to these challenges, we introduce a novel approach for distributed collaborative diffusion models inspired by split learning. Our approach facilitates collaborative training of diffusion models while alleviating client computational burdens during image synthesis. This reduced computational burden is achieved by retaining data and computationally inexpensive processes locally at each client while outsourcing the computationally expensive processes to shared, more efficient server resources. Through experiments on the common CelebA dataset, our approach demonstrates enhanced privacy by reducing the necessity for sharing raw data. These capabilities hold significant potential across various application areas, including the design of edge computing solutions. Thus, our work advances distributed machine learning by contributing to the evolution of collaborative diffusion models. | [
"['Simeon Allmendinger' 'Domenique Zipperling' 'Lukas Struppek'\n 'Niklas Kühl']"
]
|
null | null | 2406.14442 | null | null | http://arxiv.org/pdf/2406.14442v1 | 2024-06-20T16:06:39Z | 2024-06-20T16:06:39Z | Graph Representation Learning Strategies for Omics Data: A Case Study on
Parkinson's Disease | Omics data analysis is crucial for studying complex diseases, but its high dimensionality and heterogeneity challenge classical statistical and machine learning methods. Graph neural networks have emerged as promising alternatives, yet the optimal strategies for their design and optimization in real-world biomedical challenges remain unclear. This study evaluates various graph representation learning models for case-control classification using high-throughput biological data from Parkinson's disease and control samples. We compare topologies derived from sample similarity networks and molecular interaction networks, including protein-protein and metabolite-metabolite interactions (PPI, MMI). Graph Convolutional Networks (GCNs), Chebyshev spectral graph convolution (ChebyNet), and Graph Attention Networks (GAT) are evaluated alongside advanced architectures like graph transformers and the graph U-net, and simpler models like the multilayer perceptron (MLP). These models are systematically applied to transcriptomics and metabolomics data independently. Our comparative analysis highlights the benefits and limitations of various architectures in extracting patterns from omics data, paving the way for more accurate and interpretable models in biomedical research. | [
"['Elisa Gómez de Lope' 'Saurabh Deshpande' 'Ramón Viñas Torné'\n 'Pietro Liò' 'Enrico Glaab' 'Stéphane P. A. Bordas']"
]
|
null | null | 2406.14446 | null | null | http://arxiv.org/pdf/2406.14446v1 | 2024-06-20T16:08:40Z | 2024-06-20T16:08:40Z | Maintenance Required: Updating and Extending Bootstrapped Human Activity
Recognition Systems for Smart Homes | Developing human activity recognition (HAR) systems for smart homes is not straightforward due to varied layouts of the homes and their personalized settings, as well as the idiosyncratic behaviors of residents. As such, off-the-shelf HAR systems are effective only to a limited extent for an individual home, and HAR systems often need to be derived "from scratch", which comes with substantial effort and is often burdensome to the resident. Previous work has successfully targeted the initial phase; at the end of this initial phase, seed points are identified. We build on bootstrapped HAR systems and introduce an effective updating and extension procedure for continuous improvement of HAR systems, with the aim of keeping up with ever-changing life circumstances. Our method makes use of the seed points identified at the end of the initial bootstrapping phase. A contrastive learning framework is trained using these seed points and the labels obtained for them. This model is then used to improve the segmentation accuracy of the identified prominent activities. Improvements in the activity recognition system through this procedure help model the majority of the routine activities in the smart home. We demonstrate the effectiveness of our procedure through experiments on the CASAS datasets that show the practical value of our approach. | [
"['Shruthi K. Hiremath' 'Thomas Ploetz']"
]
|
null | null | 2406.14456 | null | null | http://arxiv.org/pdf/2406.14456v1 | 2024-06-20T16:15:21Z | 2024-06-20T16:15:21Z | Capturing Temporal Components for Time Series Classification | Analyzing sequential data is crucial in many domains, particularly due to the abundance of data collected from the Internet of Things paradigm. Time series classification, the task of categorizing sequential data, has gained prominence, with machine learning approaches demonstrating remarkable performance on public benchmark datasets. However, progress has primarily been in designing architectures for learning representations from raw data at fixed (or ideal) time scales, which can fail to generalize to longer sequences. This work introduces a \textit{compositional representation learning} approach trained on statistically coherent components extracted from sequential data. Based on a multi-scale change space, an unsupervised approach is proposed to segment the sequential data into chunks with similar statistical properties. A sequence-based encoder model is trained in a multi-task setting to learn compositional representations from these temporal components for time series classification. We demonstrate its effectiveness through extensive experiments on publicly available time series classification benchmarks. Evaluating the coherence of segmented components shows its competitive performance on the unsupervised segmentation task. | [
"['Venkata Ragavendra Vavilthota' 'Ranjith Ramanathan'\n 'Sathyanarayanan N. Aakur']"
]
|
null | null | 2406.14458 | null | null | http://arxiv.org/pdf/2406.14458v1 | 2024-06-20T16:17:07Z | 2024-06-20T16:17:07Z | Centimeter Positioning Accuracy using AI/ML for 6G Applications | This research looks at using AI/ML to achieve centimeter-level user positioning in 6G applications such as the Industrial Internet of Things (IIoT). Initial results show that our AI/ML-based method can estimate user positions with an accuracy of 17 cm in an indoor factory environment. In this proposal, we highlight our approaches and future directions. | [
"['Sai Prasanth Kotturi' 'Radha Krishna Ganti']"
]
|
null | null | 2406.14469 | null | null | http://arxiv.org/pdf/2406.14469v2 | 2024-06-24T15:40:40Z | 2024-06-20T16:32:18Z | Fusion of Movement and Naive Predictions for Point Forecasting in
Univariate Random Walks | Traditional methods for point forecasting in univariate random walks often fail to surpass naive benchmarks due to data unpredictability. This study introduces a novel forecasting method that fuses movement prediction (binary classification) with naive forecasts for accurate one-step-ahead point forecasting. The method's efficacy is demonstrated through theoretical analysis, simulations, and real-world data experiments. It reliably exceeds naive forecasts with movement prediction accuracies as low as 0.55, outperforming baseline models like ARIMA, linear regression, MLP, and LSTM networks in forecasting the S&P 500 index and Bitcoin prices. This method is particularly advantageous when accurate point predictions are challenging but accurate movement predictions are attainable, translating movement predictions into point forecasts in random walk contexts. | [
"['Cheng Zhang']"
]
|
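The fusion idea above reduces to a one-liner: take the naive forecast (the last observed value) and shift it in the direction given by the movement classifier by an expected step size. The mean-absolute-difference step estimate below is an assumption for illustration; the paper may estimate it differently.

```python
import numpy as np

def fused_forecast(series, predict_up):
    """One-step-ahead point forecast: naive value shifted in the predicted
    direction by the mean absolute historical step (an assumed step size)."""
    naive = series[-1]
    step = np.mean(np.abs(np.diff(series)))
    return naive + (step if predict_up else -step)

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=500))  # univariate random walk
print(walk[-1], fused_forecast(walk, predict_up=True))
```

With movement accuracy above 0.5, the shifted forecast beats the naive one in expectation, which is the effect the abstract quantifies at accuracies as low as 0.55.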
null | null | 2406.14473 | null | null | http://arxiv.org/pdf/2406.14473v1 | 2024-06-20T16:34:07Z | 2024-06-20T16:34:07Z | Data-Centric AI in the Age of Large Language Models | This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making the key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs, and yet it receives disproportionally low attention from the research community. We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization. In each scenario, we underscore the importance of data, highlight promising research directions, and articulate the potential impacts on the research community and, where applicable, the society as a whole. For instance, we advocate for a suite of data-centric benchmarks tailored to the scale and complexity of data for LLMs. These benchmarks can be used to develop new data curation methods and document research efforts and results, which can help promote openness and transparency in AI and LLM research. | [
"['Xinyi Xu' 'Zhaoxuan Wu' 'Rui Qiao' 'Arun Verma' 'Yao Shu' 'Jingtan Wang'\n 'Xinyuan Niu' 'Zhenfeng He' 'Jiangwei Chen' 'Zijian Zhou'\n 'Gregory Kang Ruey Lau' 'Hieu Dao' 'Lucas Agussurja'\n 'Rachael Hwee Ling Sim' 'Xiaoqiang Lin' 'Wenyang Hu' 'Zhongxiang Dai'\n 'Pang Wei Koh' 'Bryan Kian Hsiang Low']"
]
|
null | null | 2406.14478 | null | null | http://arxiv.org/abs/2406.14478v1 | 2024-06-20T16:40:55Z | 2024-06-20T16:40:55Z | Toward data-driven research: preliminary study to predict surface
roughness in material extrusion using previously published data with Machine
Learning | Material extrusion is one of the most commonly used approaches within the additive manufacturing processes available. Despite its popularity and related technical advancements, process reliability and quality assurance remain only partially solved. In particular, the surface roughness caused by this process is a key concern. To solve this constraint, experimental plans have been exploited to optimize surface roughness in recent years. However, the latter empirical trial and error process is extremely time- and resource-consuming. Thus, this study aims to avoid using large experimental programs to optimize surface roughness in material extrusion. Methodology. This research provides an in-depth analysis of the effect of several printing parameters: layer height, printing temperature, printing speed and wall thickness. The proposed data-driven predictive modeling approach takes advantage of Machine Learning models to automatically predict surface roughness based on the data gathered from the literature and the experimental data generated for testing. Findings. Using 10-fold cross-validation of data gathered from the literature, the proposed Machine Learning solution attains a 0.93 correlation with a mean absolute percentage error of 13 %. When testing with our own data, the correlation diminishes to 0.79 and the mean absolute percentage error reduces to 8 %. Thus, the solution for predicting surface roughness in extrusion-based printing offers competitive results regarding the variability of the analyzed factors. Originality. As available manufacturing data continue to increase on a daily basis, the ability to learn from these large volumes of data is critical in future manufacturing and science. Specifically, the power of Machine Learning helps model surface roughness with limited experimental tests. | [
"['Fátima García-Martínez' 'Diego Carou' 'Francisco de Arriba-Pérez'\n 'Silvia García-Méndez']"
]
|
null | null | 2406.14479 | null | null | http://arxiv.org/pdf/2406.14479v1 | 2024-06-20T16:41:09Z | 2024-06-20T16:41:09Z | On Layer-wise Representation Similarity: Application for Multi-Exit
Models with a Single Classifier | Analyzing the similarity of internal representations within and across different models has been an important technique for understanding the behavior of deep neural networks. Most existing methods for analyzing the similarity between representations of high dimensions, such as those based on Canonical Correlation Analysis (CCA) and widely used Centered Kernel Alignment (CKA), rely on statistical properties of the representations for a set of data points. In this paper, we focus on transformer models and study the similarity of representations between the hidden layers of individual transformers. In this context, we show that a simple sample-wise cosine similarity metric is capable of capturing the similarity and aligns with the complicated CKA. Our experimental results on common transformers reveal that representations across layers are positively correlated, albeit the similarity decreases when layers are far apart. We then propose an aligned training approach to enhance the similarity between internal representations, with trained models that enjoy the following properties: (1) the last-layer classifier can be directly applied right after any hidden layers, yielding intermediate layer accuracies much higher than those under standard training, (2) the layer-wise accuracies monotonically increase and reveal the minimal depth needed for the given task, (3) when served as multi-exit models, they achieve on-par performance with standard multi-exit architectures which consist of additional classifiers designed for early exiting in shallow layers. To our knowledge, our work is the first to show that one common classifier is sufficient for multi-exit models. We conduct experiments on both vision and NLP tasks to demonstrate the performance of the proposed aligned training. | [
"['Jiachen Jiang' 'Jinxin Zhou' 'Zhihui Zhu']"
]
|
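The sample-wise cosine similarity metric this abstract refers to is simple to state: for each input, compare its hidden representation at two layers, then average over inputs. The random matrices below stand in for real transformer hidden states.

```python
import numpy as np

def layerwise_cosine(H1, H2):
    """Mean sample-wise cosine similarity between two layers' hidden
    states, each of shape (num_samples, hidden_dim)."""
    num = np.sum(H1 * H2, axis=1)
    den = np.linalg.norm(H1, axis=1) * np.linalg.norm(H2, axis=1) + 1e-12
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
H = rng.normal(size=(32, 64))
print(layerwise_cosine(H, H))                                    # 1.0: identical layers
print(layerwise_cosine(H, H + 0.5 * rng.normal(size=H.shape)))   # < 1, still high
```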
null | null | 2406.14481 | null | null | http://arxiv.org/pdf/2406.14481v1 | 2024-06-20T16:43:22Z | 2024-06-20T16:43:22Z | Revealing Vision-Language Integration in the Brain with Multimodal
Networks | We use (multi)modal deep neural networks (DNNs) to probe for sites of multimodal integration in the human brain by predicting stereoencephalography (SEEG) recordings taken while human subjects watched movies. We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly-integrated language-vision models. Our target DNN models span different architectures (e.g., convolutional networks and transformers) and multimodal training techniques (e.g., cross-attention and contrastive learning). As a key enabling step, we first demonstrate that trained vision and language models systematically outperform their randomly initialized counterparts in their ability to predict SEEG signals. We then compare unimodal and multimodal models against one another. Because our target DNN models often have different architectures, number of parameters, and training sets (possibly obscuring those differences attributable to integration), we carry out a controlled comparison of two models (SLIP and SimCLR), which keep all of these attributes the same aside from input modality. Using this approach, we identify a sizable number of neural sites (on average 141 out of 1090 total sites or 12.94%) and brain regions where multimodal integration seems to occur. Additionally, we find that among the variants of multimodal training techniques we assess, CLIP-style training is the best suited for downstream prediction of the neural activity in these sites. | [
"['Vighnesh Subramaniam' 'Colin Conwell' 'Christopher Wang'\n 'Gabriel Kreiman' 'Boris Katz' 'Ignacio Cases' 'Andrei Barbu']"
]
|
null | null | 2406.14483 | null | null | http://arxiv.org/pdf/2406.14483v1 | 2024-06-20T16:45:41Z | 2024-06-20T16:45:41Z | Valid Error Bars for Neural Weather Models using Conformal Prediction | Neural weather models have shown immense potential as inexpensive and accurate alternatives to physics-based models. However, most models trained to perform weather forecasting do not quantify the uncertainty associated with their forecasts. This limits the trust in the model and the usefulness of the forecasts. In this work we construct and formalise a conformal prediction framework as a post-processing method for estimating this uncertainty. The method is model-agnostic and gives calibrated error bounds for all variables, lead times and spatial locations. No modifications are required to the model and the computational cost is negligible compared to model training. We demonstrate the usefulness of the conformal prediction framework on a limited area neural weather model for the Nordic region. We further explore the advantages of the framework for deterministic and probabilistic models. | [
"['Vignesh Gopakumar' 'Joel Oskarrson' 'Ander Gray' 'Lorenzo Zanisi'\n 'Stanislas Pamela' 'Daniel Giles' 'Matt Kusner' 'Marc Deisenroth']"
]
|
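A minimal version of the post-processing step described here is split conformal prediction: calibrate an error radius on held-out forecasts, then attach it to new ones. The sketch treats a single scalar variable; the paper's per-variable, per-lead-time, per-location calibration is not reproduced.

```python
import numpy as np

def conformal_radius(cal_pred, cal_true, alpha=0.1):
    """Split conformal: the (1 - alpha) empirical quantile of absolute
    calibration residuals, with the standard finite-sample correction."""
    scores = np.abs(cal_pred - cal_true)
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample adjustment
    return np.quantile(scores, min(q, 1.0))

rng = np.random.default_rng(0)
truth = rng.normal(size=1000)
preds = truth + 0.3 * rng.normal(size=1000)   # stand-in model forecasts
r = conformal_radius(preds[:500], truth[:500])
covered = np.mean(np.abs(preds[500:] - truth[500:]) <= r)
print(r, covered)   # empirical coverage close to 90%
```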
null | null | 2406.14495 | null | null | http://arxiv.org/pdf/2406.14495v1 | 2024-06-20T16:59:38Z | 2024-06-20T16:59:38Z | rKAN: Rational Kolmogorov-Arnold Networks | The development of Kolmogorov-Arnold networks (KANs) marks a significant shift from traditional multi-layer perceptrons in deep learning. Initially, KANs employed B-spline curves as their primary basis function, but their inherent complexity posed implementation challenges. Consequently, researchers have explored alternative basis functions such as Wavelets, Polynomials, and Fractional functions. In this research, we explore the use of rational functions as a novel basis function for KANs. We propose two different approaches based on Padé approximation and rational Jacobi functions as trainable basis functions, establishing the rational KAN (rKAN). We then evaluate rKAN's performance in various deep learning and physics-informed tasks to demonstrate its practicality and effectiveness in function approximation. | [
"['Alireza Afzal Aghaei']"
]
|
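A rational basis function of the kind the abstract names can be sketched as a Padé-style ratio of trainable polynomials. The pole-free denominator below is one common choice in the rational-activation literature and is an assumption; rKAN's exact parameterization may differ.

```python
import numpy as np

def rational_unit(x, p, q):
    """Pade-style rational activation P_m(x) / (1 + |Q_n(x)|).
    The absolute value keeps the denominator pole-free."""
    P = np.polyval(p, x)   # coefficients highest-degree first
    Q = np.polyval(q, x)
    return P / (1.0 + np.abs(Q))

x = np.linspace(-2, 2, 5)
p = np.array([0.5, 1.0, 0.0])   # 0.5*x^2 + x
q = np.array([1.0, 0.0])        # x
print(rational_unit(x, p, q))
```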
null | null | 2406.14507 | null | null | http://arxiv.org/pdf/2406.14507v1 | 2024-06-20T17:12:20Z | 2024-06-20T17:12:20Z | On Newton's Method to Unlearn Neural Networks | Machine unlearning facilitates personal data ownership, including the ``right to be forgotten''. The proliferation of applications of \emph{neural networks} (NNs) trained on users' personal data calls for the development of algorithms to unlearn an NN. Since retraining is costly, efficiency is often achieved through approximate unlearning, which aims to unlearn a trained NN to be close to the retrained one (in distribution). Though Newton's method has been used by previous works to approximately unlearn linear models, adapting it for unlearning an NN often encounters degenerate Hessians that make computing the Newton update impossible. In this paper, we will first show that when coupled with naive yet often effective solutions to mitigate the degeneracy issue for unlearning, Newton's method surprisingly suffers from catastrophic forgetting. To overcome this difficulty, we revise Newton's method to include a theoretically justified regularizer and propose a cubic-regularized Newton method for unlearning an NN. The cubic regularizer comes with the benefits of not requiring manual finetuning and affording a natural interpretation. Empirical evaluation on several models and real-world datasets shows that our method is more resilient to catastrophic forgetting and performs better than the baselines, especially in sequential unlearning. | [
"['Nhung Bui' 'Xinyang Lu' 'See-Kiong Ng' 'Bryan Kian Hsian Low']"
]
|
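The cubic regularizer mentioned here can be made concrete: minimize the cubic model $g^\top s + \frac{1}{2} s^\top H s + \frac{\sigma}{3}\|s\|^3$, whose stationarity condition $s = -(H + \sigma\|s\| I)^{-1} g$ can be solved by a damped fixed-point iteration on the damping $\lambda = \sigma\|s\|$. The sketch below is a generic cubic-regularized Newton step on a deliberately degenerate Hessian, not the paper's full unlearning procedure.

```python
import numpy as np

def cubic_newton_step(g, H, sigma=1.0, iters=100):
    """Approximately minimize g.s + 0.5*s'Hs + (sigma/3)*||s||^3 via the
    stationarity condition s = -(H + sigma*||s||*I)^{-1} g, iterating a
    damped fixed point on the damping lam = sigma*||s||."""
    d = len(g)
    lam = sigma * np.linalg.norm(g)  # positive start keeps the solve well-posed
    for _ in range(iters):
        s = -np.linalg.solve(H + lam * np.eye(d), g)
        lam = 0.5 * (lam + sigma * np.linalg.norm(s))
    return s

# degenerate Hessian: a plain Newton solve would fail, the damped step is finite
H = np.diag([2.0, 0.0])
g = np.array([1.0, 1.0])
print(cubic_newton_step(g, H))  # finite step despite the zero eigenvalue
```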
null | null | 2406.14517 | null | null | http://arxiv.org/pdf/2406.14517v1 | 2024-06-20T17:27:14Z | 2024-06-20T17:27:14Z | PostMark: A Robust Blackbox Watermark for Large Language Models | The most effective techniques to detect LLM-generated text rely on inserting a detectable signature -- or watermark -- during the model's decoding process. Most existing watermarking methods require access to the underlying LLM's logits, which LLM API providers are loath to share due to fears of model distillation. As such, these watermarks must be implemented independently by each LLM provider. In this paper, we develop PostMark, a modular post-hoc watermarking procedure in which an input-dependent set of words (determined via a semantic embedding) is inserted into the text after the decoding process has completed. Critically, PostMark does not require logit access, which means it can be implemented by a third party. We also show that PostMark is more robust to paraphrasing attacks than existing watermarking methods: our experiments cover eight baseline algorithms, five base LLMs, and three datasets. Finally, we evaluate the impact of PostMark on text quality using both automated and human assessments, highlighting the trade-off between quality and robustness to paraphrasing. We release our code, outputs, and annotations at https://github.com/lilakk/PostMark. | [
"['Yapei Chang' 'Kalpesh Krishna' 'Amir Houmansadr' 'John Wieting'\n 'Mohit Iyyer']"
]
|
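The input-dependent word selection PostMark performs can be illustrated with a toy embedding: score a small watermark vocabulary against the text embedding and pick the top matches to weave into the output. The hash-based embedding and the vocabulary below are stand-ins for the real semantic embedder, not the paper's components.

```python
import zlib
import numpy as np

VOCAB = ["orbit", "harvest", "lantern", "mosaic", "drift", "ember"]

def embed(text, dim=64):
    """Toy deterministic bag-of-words embedding; a real system would use a
    semantic embedding model."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(tok.encode()))
        v += rng.normal(size=dim)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def watermark_words(text, k=2):
    """Pick the k vocabulary words closest to the text embedding; these are
    the input-dependent words a PostMark-style scheme would insert."""
    t = embed(text)
    sims = [(float(embed(w) @ t), w) for w in VOCAB]
    return [w for _, w in sorted(sims, reverse=True)[:k]]

print(watermark_words("the model generates fluent text"))
```

Detection would then check whether a suspicious text contains an unusually high fraction of its input-dependent word list.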
null | null | 2406.14525 | null | null | http://arxiv.org/pdf/2406.14525v1 | 2024-06-20T17:38:16Z | 2024-06-20T17:38:16Z | Towards evolution of Deep Neural Networks through contrastive
Self-Supervised learning | Deep Neural Networks (DNNs) have been successfully applied to a wide range of problems. However, two main limitations are commonly pointed out. The first one is that they require a long time to design. The other is that they heavily rely on labelled data, which can sometimes be costly and hard to obtain. In order to address the first problem, neuroevolution has proved to be a plausible option to automate the design of DNNs. As for the second problem, self-supervised learning has been used to leverage unlabelled data to learn representations. Our goal is to study how neuroevolution can help self-supervised learning to bridge the gap to supervised learning in terms of performance. In this work, we propose a framework that is able to evolve deep neural networks using self-supervised learning. Our results on the CIFAR-10 dataset show that it is possible to evolve adequate neural networks while reducing the reliance on labelled data. Moreover, an analysis of the structure of the evolved networks suggests that the amount of labelled data fed to them has less effect on the structure of networks that learned via self-supervised learning, when compared to individuals that relied on supervised learning. | [
"['Adriano Vinhas' 'João Correia' 'Penousal Machado']"
]
|
null | null | 2406.14526 | null | null | http://arxiv.org/pdf/2406.14526v1 | 2024-06-20T17:38:16Z | 2024-06-20T17:38:16Z | Fantastic Copyrighted Beasts and How (Not) to Generate Them | Recent studies show that image and video generation models can be prompted to reproduce copyrighted content from their training data, raising serious legal concerns around copyright infringement. Copyrighted characters, in particular, pose a difficult challenge for image generation services, with at least one lawsuit already awarding damages based on the generation of these characters. Yet, little research has empirically examined this issue. We conduct a systematic evaluation to fill this gap. First, we build CopyCat, an evaluation suite consisting of diverse copyrighted characters and a novel evaluation pipeline. Our evaluation considers both the detection of similarity to copyrighted characters and the generated image's consistency with user input. Our evaluation systematically shows that both image and video generation models can still generate characters even if characters' names are not explicitly mentioned in the prompt, sometimes with only two generic keywords (e.g., prompting with "videogame, plumber" consistently generates Nintendo's Mario character). We then introduce techniques to semi-automatically identify such keywords or descriptions that trigger character generation. Using our evaluation suite, we study runtime mitigation strategies, including both existing methods and new strategies we propose. Our findings reveal that commonly employed strategies, such as prompt rewriting in the DALL-E system, are not sufficient as standalone guardrails. These strategies must be coupled with other approaches, like negative prompting, to effectively reduce the unintended generation of copyrighted characters. Our work provides empirical grounding to the discussion of copyright mitigation strategies and offers actionable insights for model deployers actively implementing them. | [
"['Luxi He' 'Yangsibo Huang' 'Weijia Shi' 'Tinghao Xie' 'Haotian Liu'\n 'Yue Wang' 'Luke Zettlemoyer' 'Chiyuan Zhang' 'Danqi Chen'\n 'Peter Henderson']"
]
|
null | null | 2406.14528 | null | null | http://arxiv.org/pdf/2406.14528v1 | 2024-06-20T17:40:18Z | 2024-06-20T17:40:18Z | DeciMamba: Exploring the Length Extrapolation Potential of Mamba | Long-range sequence processing poses a significant challenge for Transformers due to their quadratic complexity in input length. A promising alternative is Mamba, which demonstrates high performance and achieves Transformer-level capabilities while requiring substantially fewer computational resources. In this paper we explore the length-generalization capabilities of Mamba, which we find to be relatively limited. Through a series of visualizations and analyses we identify that the limitations arise from a restricted effective receptive field, dictated by the sequence length used during training. To address this constraint, we introduce DeciMamba, a context-extension method specifically designed for Mamba. This method, built on top of a hidden filtering mechanism embedded within the S6 layer, enables the trained model to extrapolate well even without additional training. Empirical experiments over real-world long-range NLP tasks show that DeciMamba can extrapolate to context lengths that are 25x longer than the ones seen during training, and does so without utilizing additional computational resources. We will release our code and models. | [
"['Assaf Ben-Kish' 'Itamar Zimerman' 'Shady Abu-Hussein' 'Nadav Cohen'\n 'Amir Globerson' 'Lior Wolf' 'Raja Giryes']"
]
|
null | null | 2406.14529 | null | null | http://arxiv.org/pdf/2406.14529v1 | 2024-06-20T17:41:34Z | 2024-06-20T17:41:34Z | A Benchmarking Study of Kolmogorov-Arnold Networks on Tabular Data | Kolmogorov-Arnold Networks (KANs) have very recently been introduced into the world of machine learning, quickly capturing the attention of the entire community. However, KANs have mostly been tested for approximating complex functions or processing synthetic data, while a test on real-world tabular datasets is currently lacking. In this paper, we present a benchmarking study comparing KANs and Multi-Layer Perceptrons (MLPs) on tabular datasets. The study evaluates task performance and training times. From the results obtained on the various datasets, KANs demonstrate superior or comparable accuracy and F1 scores, excelling particularly in datasets with numerous instances, suggesting robust handling of complex data. We also highlight that this performance improvement of KANs comes with a higher computational cost when compared to MLPs of comparable sizes. | [
"['Eleonora Poeta' 'Flavio Giobergia' 'Eliana Pastor' 'Tania Cerquitelli'\n 'Elena Baralis']"
]
|
null | null | 2406.14532 | null | null | http://arxiv.org/pdf/2406.14532v1 | 2024-06-20T17:45:54Z | 2024-06-20T17:45:54Z | RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math
Reasoning by Eight-Fold | Training on model-generated synthetic data is a promising approach for finetuning LLMs, but it remains unclear when it helps or hurts. In this paper, we investigate this question for math reasoning via an empirical study, followed by building a conceptual understanding of our observations. First, we find that while the typical approach of finetuning a model on synthetic correct or positive problem-solution pairs generated by capable models offers modest performance gains, sampling more correct solutions from the finetuned learner itself followed by subsequent fine-tuning on this self-generated data $\textbf{doubles}$ the efficiency of the same synthetic problems. At the same time, training on model-generated positives can amplify various spurious correlations, resulting in flat or even inverse scaling trends as the amount of data increases. Surprisingly, we find that several of these issues can be addressed if we also utilize negative responses, i.e., model-generated responses that are deemed incorrect by a final answer verifier. Crucially, these negatives must be constructed such that the training can appropriately recover the utility or advantage of each intermediate step in the negative response. With this per-step scheme, we are able to attain consistent gains over only positive data, attaining performance similar to amplifying the amount of synthetic data by $\mathbf{8\times}$. We show that training on per-step negatives can help to unlearn spurious correlations in the positive data, and is equivalent to advantage-weighted reinforcement learning (RL), implying that it inherits robustness benefits of RL over imitating positive data alone. | [
"['Amrith Setlur' 'Saurabh Garg' 'Xinyang Geng' 'Naman Garg'\n 'Virginia Smith' 'Aviral Kumar']"
]
|
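The per-step scheme above is, per the abstract, equivalent to advantage-weighted RL. A minimal sketch of such an objective, assuming per-step advantage estimates are already available (e.g., derived from a final-answer verifier via Monte Carlo rollouts); the tensor names are illustrative, not the paper's API:

```python
# Hypothetical sketch: weight each step's negative log-likelihood by an
# exponentiated per-step advantage, as in advantage-weighted regression.
import torch

def advantage_weighted_loss(logprobs: torch.Tensor,
                            advantages: torch.Tensor) -> torch.Tensor:
    """logprobs, advantages: (batch, steps) tensors; advantages are assumed
    positive for useful steps and negative for spurious ones."""
    weights = torch.exp(advantages).clamp(max=10.0)  # clip for stability
    return -(weights * logprobs).mean()

# Toy usage: two sampled solutions of four steps each.
lp = torch.log(torch.rand(2, 4))   # stand-in step log-probabilities
adv = torch.randn(2, 4)            # stand-in per-step advantages
print(advantage_weighted_loss(lp, adv))
```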
null | null | 2406.14537 | null | null | http://arxiv.org/pdf/2406.14537v1 | 2024-06-20T17:48:24Z | 2024-06-20T17:48:24Z | MacroHFT: Memory Augmented Context-aware Reinforcement Learning On High
Frequency Trading | High-frequency trading (HFT), which executes algorithmic trading on short time scales, has recently come to occupy the majority of the cryptocurrency market. Besides traditional quantitative trading methods, reinforcement learning (RL) has become another appealing approach for HFT due to its ability to handle high-dimensional financial data and solve sophisticated sequential decision-making problems, \emph{e.g.,} hierarchical reinforcement learning (HRL) has shown promising performance on second-level HFT by training a router to select only one sub-agent from the agent pool to execute the current transaction. However, existing RL methods for HFT still have some defects: 1) standard RL-based trading agents suffer from the overfitting issue, preventing them from making effective policy adjustments based on financial context; 2) due to the rapid changes in market conditions, investment decisions made by an individual agent are usually one-sided and highly biased, which might lead to significant loss in extreme markets. To tackle these problems, we propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, \emph{a.k.a.} MacroHFT, which consists of two training phases: 1) we first train multiple types of sub-agents with the market data decomposed according to various financial indicators, specifically market trend and volatility, where each agent owns a conditional adapter to adjust its trading policy according to market conditions; 2) then we train a hyper-agent to mix the decisions from these sub-agents and output a consistently profitable meta-policy to handle rapid market fluctuations, equipped with a memory mechanism to enhance the capability of decision-making. Extensive experiments on various cryptocurrency markets demonstrate that MacroHFT can achieve state-of-the-art performance on minute-level trading tasks. | [
"['Chuqiao Zong' 'Chaojie Wang' 'Molei Qin' 'Lei Feng' 'Xinrun Wang'\n 'Bo An']"
]
|
null | null | 2406.14541 | null | null | http://arxiv.org/pdf/2406.14541v2 | 2024-06-21T14:00:02Z | 2024-06-20T17:52:29Z | Are LLMs Naturally Good at Synthetic Tabular Data Generation? | Large language models (LLMs) have demonstrated their prowess in generating synthetic text and images; however, their potential for generating tabular data -- arguably the most common data type in business and scientific applications -- is largely underexplored. This paper demonstrates that LLMs, used as-is, or after traditional fine-tuning, are severely inadequate as synthetic table generators. Due to the autoregressive nature of LLMs, fine-tuning with random order permutation runs counter to the importance of modeling functional dependencies, and renders LLMs unable to model conditional mixtures of distributions (key to capturing real world constraints). We showcase how LLMs can be made to overcome some of these deficiencies by making them permutation-aware. | [
"['Shengzhe Xu' 'Cho-Ting Lee' 'Mandar Sharma' 'Raquib Bin Yousuf'\n 'Nikhil Muralidhar' 'Naren Ramakrishnan']"
]
|
null | null | 2406.14546 | null | null | http://arxiv.org/pdf/2406.14546v1 | 2024-06-20T17:55:04Z | 2024-06-20T17:55:04Z | Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from
Disparate Training Data | One way to address safety risks from large language models (LLMs) is to censor dangerous knowledge from their training data. While this removes the explicit information, implicit information can remain scattered across various training documents. Could an LLM infer the censored knowledge by piecing together these implicit hints? As a step towards answering this question, we study inductive out-of-context reasoning (OOCR), a type of generalization in which LLMs infer latent information from evidence distributed across training documents and apply it to downstream tasks without in-context learning. Using a suite of five tasks, we demonstrate that frontier LLMs can perform inductive OOCR. In one experiment we finetune an LLM on a corpus consisting only of distances between an unknown city and other known cities. Remarkably, without in-context examples or Chain of Thought, the LLM can verbalize that the unknown city is Paris and use this fact to answer downstream questions. Further experiments show that LLMs trained only on individual coin flip outcomes can verbalize whether the coin is biased, and those trained only on pairs $(x,f(x))$ can articulate a definition of $f$ and compute inverses. While OOCR succeeds in a range of cases, we also show that it is unreliable, particularly for smaller LLMs learning complex structures. Overall, the ability of LLMs to "connect the dots" without explicit in-context learning poses a potential obstacle to monitoring and controlling the knowledge acquired by LLMs. | [
"['Johannes Treutlein' 'Dami Choi' 'Jan Betley' 'Cem Anil' 'Samuel Marks'\n 'Roger Baker Grosse' 'Owain Evans']"
]
|
null | null | 2406.14548 | null | null | http://arxiv.org/pdf/2406.14548v1 | 2024-06-20T17:56:02Z | 2024-06-20T17:56:02Z | Consistency Models Made Easy | Consistency models (CMs) are an emerging class of generative models that offer faster sampling than traditional diffusion models. CMs enforce that all points along a sampling trajectory are mapped to the same initial point. But this target leads to resource-intensive training: for example, as of 2024, training a SoTA CM on CIFAR-10 takes one week on 8 GPUs. In this work, we propose an alternative scheme for training CMs, vastly improving the efficiency of building such models. Specifically, by expressing CM trajectories via a particular differential equation, we argue that diffusion models can be viewed as a special case of CMs with a specific discretization. We can thus fine-tune a consistency model starting from a pre-trained diffusion model and progressively approximate the full consistency condition to stronger degrees over the training process. Our resulting method, which we term Easy Consistency Tuning (ECT), achieves vastly improved training times while indeed improving upon the quality of previous methods: for example, ECT achieves a 2-step FID of 2.73 on CIFAR10 within 1 hour on a single A100 GPU, matching Consistency Distillation trained for hundreds of GPU hours. Owing to this computational efficiency, we investigate the scaling law of CMs under ECT, showing that they seem to obey classic power law scaling, hinting at their ability to improve efficiency and performance at larger scales. Code (https://github.com/locuslab/ect) is available. | [
"['Zhengyang Geng' 'Ashwini Pokle' 'William Luo' 'Justin Lin'\n 'J. Zico Kolter']"
]
|
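A toy sketch of the self-teacher consistency objective the abstract describes: the model's output at a higher noise level is pulled toward its own stop-gradient output at a nearby lower level on the same trajectory. The variance-exploding perturbation and toy denoiser below are assumptions for illustration; the released ECT code uses a carefully scheduled discretization.

```python
import torch

def consistency_loss(model, x0, t: float, r: float):
    """x0: clean batch; t > r are two noise levels on one diffusion path."""
    noise = torch.randn_like(x0)
    xt = x0 + t * noise              # simple VE-style perturbation (assumed)
    xr = x0 + r * noise              # same trajectory, closer to the data
    with torch.no_grad():
        target = model(xr, r)        # self-teacher at the nearer point
    return (model(xt, t) - target).pow(2).mean()

model = lambda x, s: x / (1.0 + s)   # toy stand-in for a denoiser network
x0 = torch.randn(8, 3, 32, 32)
print(consistency_loss(model, x0, t=1.0, r=0.5))
```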
null | null | 2406.14549 | null | null | http://arxiv.org/pdf/2406.14549v1 | 2024-06-20T17:56:17Z | 2024-06-20T17:56:17Z | Uncovering Latent Memories: Assessing Data Leakage and Memorization
Patterns in Large Language Models | The proliferation of large language models has revolutionized natural language processing tasks, yet it raises profound concerns regarding data privacy and security. Language models are trained on extensive corpora including potentially sensitive or proprietary information, and the risk of data leakage -- where the model response reveals pieces of such information -- remains inadequately understood. This study examines susceptibility to data leakage by quantifying the phenomenon of memorization in machine learning models, focusing on the evolution of memorization patterns over training. We investigate how the statistical characteristics of training data influence the memories encoded within the model by evaluating how repetition influences memorization. We reproduce findings that the probability of memorizing a sequence scales logarithmically with the number of times it is present in the data. Furthermore, we find that sequences which are not apparently memorized after the first encounter can be uncovered throughout the course of training even without subsequent encounters. The presence of these latent memorized sequences presents a challenge for data privacy since they may be hidden at the final checkpoint of the model. To this end, we develop a diagnostic test for uncovering these latent memorized sequences by considering their cross entropy loss. | [
"['Sunny Duan' 'Mikail Khona' 'Abhiram Iyer' 'Rylan Schaeffer'\n 'Ila R Fiete']"
]
|
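The loss-based diagnostic idea above can be illustrated in a few lines: sequences whose cross-entropy under the model is far below the corpus average are flagged as candidate (possibly latent) memorized sequences. The one-sigma threshold here is an illustrative choice, not the paper's.

```python
import numpy as np

def flag_memorized(per_seq_ce: np.ndarray, n_sigma: float = 1.0) -> np.ndarray:
    """per_seq_ce: (num_sequences,) mean cross-entropy loss per sequence."""
    mu, sigma = per_seq_ce.mean(), per_seq_ce.std()
    return per_seq_ce < mu - n_sigma * sigma  # unusually easy to predict

ce = np.array([3.1, 2.9, 0.4, 3.3, 3.0, 0.2])
print(flag_memorized(ce))  # flags the two suspiciously low-loss sequences
```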
null | null | 2406.14563 | null | null | http://arxiv.org/pdf/2406.14563v1 | 2024-06-20T17:59:58Z | 2024-06-20T17:59:58Z | Model Merging and Safety Alignment: One Bad Model Spoils the Bunch | Merging Large Language Models (LLMs) is a cost-effective technique for combining multiple expert LLMs into a single versatile model, retaining the expertise of the original ones. However, current approaches often overlook the importance of safety alignment during merging, leading to highly misaligned models. This work investigates the effects of model merging on alignment. We evaluate several popular model merging techniques, demonstrating that existing methods not only transfer domain expertise but also propagate misalignment. We propose a simple two-step approach to address this problem: (i) generating synthetic safety and domain-specific data, and (ii) incorporating these generated data into the optimization process of existing data-aware model merging techniques. This allows us to treat alignment as a skill that can be maximized in the resulting merged LLM. Our experiments illustrate the effectiveness of integrating alignment-related data during merging, resulting in models that excel in both domain expertise and alignment. | [
"['Hasan Abed Al Kader Hammoud' 'Umberto Michieli' 'Fabio Pizzati'\n 'Philip Torr' 'Adel Bibi' 'Bernard Ghanem' 'Mete Ozay']"
]
|
null | null | 2406.14571 | null | null | http://arxiv.org/pdf/2406.14571v1 | 2024-06-11T05:26:45Z | 2024-06-11T05:26:45Z | PreSto: An In-Storage Data Preprocessing System for Training
Recommendation Models | Training recommendation systems (RecSys) faces several challenges as it requires the "data preprocessing" stage to preprocess an ample amount of raw data and feed them to the GPU for training in a seamless manner. To sustain high training throughput, state-of-the-art solutions reserve a large fleet of CPU servers for preprocessing, which incurs substantial deployment cost and power consumption. Our characterization reveals that prior CPU-centric preprocessing is bottlenecked on feature generation and feature normalization operations, as it fails to exploit the abundant inter-/intra-feature parallelism in RecSys preprocessing. PreSto is a storage-centric preprocessing system leveraging In-Storage Processing (ISP), which offloads the bottlenecked preprocessing operations to our ISP units. We show that PreSto outperforms the baseline CPU-centric system with a $9.6\times$ speedup in end-to-end preprocessing time, $4.3\times$ enhancement in cost-efficiency, and $11.3\times$ improvement in energy efficiency on average for production-scale RecSys preprocessing. | [
"['Yunjae Lee' 'Hyeseong Kim' 'Minsoo Rhu']"
]
|
null | null | 2406.14585 | null | null | http://arxiv.org/abs/2406.14585v1 | 2024-06-19T10:12:17Z | 2024-06-19T10:12:17Z | Deep-learning-assisted reconfigurable metasurface antenna for real-time
holographic beam steering | We propose a metasurface antenna capable of real-time holographic beam steering. An array of reconfigurable dipoles can generate on-demand far-field radiation patterns through the specific encoding of meta-atomic states, i.e., the configuration of each dipole. Suitable states for the generation of the desired patterns can be identified using iteration, but this is very slow and needs to be done for each far-field pattern. Here, we present a deep-learning-based method for the control of a metasurface antenna with point dipole elements that vary in their state using dipole polarizability. Instead of iteration, we adopt a deep learning algorithm that combines an autoencoder with an electromagnetic scattering equation to determine the states required for a target far-field pattern in real time. The scattering equation from the Born approximation is used as the decoder in training the neural network, and an analytic Green's function calculation is used to check the validity of the Born approximation. Our learning-based algorithm requires less than 200 microseconds of computing time to determine the meta-atomic states, thus enabling the real-time operation of a holographic antenna. | [
"['Hyunjun Ma' 'Jin-soo Kim' 'Jong-Ho Choe' 'Q-Han Park']"
]
|
null | null | 2406.14591 | null | null | http://arxiv.org/pdf/2406.14591v1 | 2024-06-20T10:21:55Z | 2024-06-20T10:21:55Z | Physics-informed neural networks for parameter learning of wildfire
spreading | Wildland fires pose terrifying natural hazards, underscoring the urgent need to develop data-driven and physics-informed digital twins for wildfire prevention, monitoring, intervention, and response. In this direction of research, this work introduces a physics-informed neural network (PiNN) to learn the unknown parameters of an interpretable wildfire spreading model. The considered wildfire spreading model integrates fundamental physical laws articulated by key model parameters, essential for capturing the complex behavior of wildfires. The proposed machine learning approach leverages the theory of artificial neural networks with the physical constraints governing wildfire dynamics, such as the first principles of mass and energy conservation. Training of the PiNN for physics-informed parameter identification is realized using data of the temporal evolution of one- and two-dimensional (plane surface) fire fronts that have been obtained from a high-fidelity simulator of the wildfire spreading model under consideration. The parameter learning results demonstrate the remarkable predictive ability of the proposed PiNN in uncovering the unknown coefficients in both the one- and two-dimensional fire spreading scenarios. Additionally, this methodology exhibits robustness by identifying the same parameters in the presence of noisy data. The proposed framework is envisioned to be incorporated in a physics-informed digital twin for intelligent wildfire management and risk assessment. | [
"['Konstantinos Vogiatzoglou' 'Costas Papadimitriou' 'Vasilis Bontozoglou'\n 'Konstantinos Ampountolas']"
]
|
null | null | 2406.14593 | null | null | http://arxiv.org/pdf/2406.14593v2 | 2024-06-24T12:25:04Z | 2024-06-20T17:08:42Z | Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA | Reliable uncertainty estimation plays a crucial role in various safety-critical applications such as medical diagnosis and autonomous driving. In recent years, Bayesian neural networks (BayesNNs) have gained substantial research and industrial interest due to their capability to make accurate predictions with reliable uncertainty estimation. However, the algorithmic complexity and the resulting hardware performance of BayesNNs hinder their adoption in real-life applications. To bridge this gap, this paper proposes an algorithm and hardware co-design framework that can generate field-programmable gate array (FPGA)-based accelerators for efficient BayesNNs. At the algorithm level, we propose novel multi-exit dropout-based BayesNNs with reduced computational and memory overheads while achieving high accuracy and quality of uncertainty estimation. At the hardware level, this paper introduces a transformation framework that can generate FPGA-based accelerators for the proposed efficient multi-exit BayesNNs. Several optimization techniques such as the mix of spatial and temporal mappings are introduced to reduce resource consumption and improve the overall hardware performance. Comprehensive experiments demonstrate that our approach can achieve higher energy efficiency compared to CPU, GPU, and other state-of-the-art hardware implementations. To support the future development of this research, we have open-sourced our code at: https://github.com/os-hxfan/MCME_FPGA_Acc.git | [
"['Hao Mark Chen' 'Liam Castelli' 'Martin Ferianc' 'Hongyu Zhou'\n 'Shuanglong Liu' 'Wayne Luk' 'Hongxiang Fan']"
]
|
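The dropout-based BayesNNs above build on Monte Carlo dropout, where dropout stays active at inference and predictions are averaged over several stochastic forward passes. A minimal sketch with a toy network (the multi-exit and FPGA aspects are beyond this illustration):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.5),
                    nn.Linear(64, 10))

def mc_dropout_predict(x: torch.Tensor, samples: int = 32):
    net.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([net(x).softmax(-1)
                             for _ in range(samples)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return probs, entropy  # class probabilities and predictive uncertainty

p, h = mc_dropout_predict(torch.randn(4, 16))
print(p.shape, h)
```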
null | null | 2406.14595 | null | null | http://arxiv.org/pdf/2406.14595v2 | 2024-07-01T19:58:00Z | 2024-06-20T17:43:18Z | Adversaries Can Misuse Combinations of Safe Models | Developers try to evaluate whether an AI system can be misused by adversaries before releasing it; for example, they might test whether a model enables cyberoffense, user manipulation, or bioterrorism. In this work, we show that individually testing models for misuse is inadequate; adversaries can misuse combinations of models even when each individual model is safe. The adversary accomplishes this by first decomposing tasks into subtasks, then solving each subtask with the best-suited model. For example, an adversary might solve challenging-but-benign subtasks with an aligned frontier model, and easy-but-malicious subtasks with a weaker misaligned model. We study two decomposition methods: manual decomposition where a human identifies a natural decomposition of a task, and automated decomposition where a weak model generates benign tasks for a frontier model to solve, then uses the solutions in-context to solve the original task. Using these decompositions, we empirically show that adversaries can create vulnerable code, explicit images, python scripts for hacking, and manipulative tweets at much higher rates with combinations of models than either individual model. Our work suggests that even perfectly-aligned frontier systems can enable misuse without ever producing malicious outputs, and that red-teaming efforts should extend beyond single models in isolation. | [
"['Erik Jones' 'Anca Dragan' 'Jacob Steinhardt']"
]
|
null | null | 2406.14596 | null | null | http://arxiv.org/pdf/2406.14596v1 | 2024-06-20T17:45:02Z | 2024-06-20T17:45:02Z | ICAL: Continual Learning of Multimodal Agents by Transforming
Trajectories into Actionable Insights | Large-scale generative language and vision-language models (LLMs and VLMs) excel in few-shot in-context learning for decision making and instruction following. However, they require high-quality exemplar demonstrations to be included in their context window. In this work, we ask: Can LLMs and VLMs generate their own prompt examples from generic, sub-optimal demonstrations? We propose In-Context Abstraction Learning (ICAL), a method that builds a memory of multimodal experience insights from sub-optimal demonstrations and human feedback. Given a noisy demonstration in a new domain, VLMs abstract the trajectory into a general program by fixing inefficient actions and annotating cognitive abstractions: task relationships, object state changes, temporal subgoals, and task construals. These abstractions are refined and adapted interactively through human feedback while the agent attempts to execute the trajectory in a similar environment. The resulting abstractions, when used as exemplars in the prompt, significantly improve decision-making in retrieval-augmented LLM and VLM agents. Our ICAL agent surpasses the state-of-the-art in dialogue-based instruction following in TEACh, multimodal web agents in VisualWebArena, and action anticipation in Ego4D. In TEACh, we achieve a 12.6% improvement in goal-condition success. In VisualWebArena, our task success rate improves over the SOTA from 14.3% to 22.7%. In Ego4D action forecasting, we improve over few-shot GPT-4V and remain competitive with supervised models. We show finetuning our retrieval-augmented in-context agent yields additional improvements. Our approach significantly reduces reliance on expert-crafted examples and consistently outperforms in-context learning from action plans that lack such insights. | [
"['Gabriel Sarch' 'Lawrence Jang' 'Michael J. Tarr' 'William W. Cohen'\n 'Kenneth Marino' 'Katerina Fragkiadaki']"
]
|
null | null | 2406.14635 | null | null | http://arxiv.org/pdf/2406.14635v1 | 2024-06-20T18:03:27Z | 2024-06-20T18:03:27Z | Harvesting Efficient On-Demand Order Pooling from Skilled Couriers:
Enhancing Graph Representation Learning for Refining Real-time Many-to-One
Assignments | The recent past has witnessed a notable surge in on-demand food delivery (OFD) services, offering delivery fulfillment within dozens of minutes after an order is placed. In OFD, pooling multiple orders for simultaneous delivery in real-time order assignment is a pivotal efficiency source, which may in turn extend delivery time. Constructing high-quality order pooling that harmonizes platform efficiency with the experiences of consumers and couriers is crucial to OFD platforms. However, the complexity and real-time nature of order assignment, making extensive calculations impractical, significantly limit the potential for order consolidation. Moreover, the offline environment is frequently riddled with unknown factors, posing challenges for the platform's perceptibility and pooling decisions. Nevertheless, the delivery behaviors of skilled couriers (SCs), who know the environment well, can improve system awareness and effectively inform decisions. Hence, an SC delivery network (SCDN) is constructed, based on an enhanced attributed heterogeneous network embedding approach tailored for OFD. It aims to extract features from rich temporal and spatial information, and uncover the latent potential for order combinations embedded within SC trajectories. Accordingly, the vast search space of order assignment can be effectively pruned through scalable similarity calculations of low-dimensional vectors, making comprehensive and high-quality pooling outcomes more easily identified in real time. SCDN has now been deployed in Meituan's dispatch system. Online tests reveal that with SCDN, the pooling quality and extent have been greatly improved, and our system can boost couriers' efficiency by 45-55% during noon peak hours, while upholding the timely delivery commitment. | [
"['Yile Liang' 'Jiuxia Zhao' 'Donghui Li' 'Jie Feng' 'Chen Zhang'\n 'Xuetao Ding' 'Jinghua Hao' 'Renqing He']"
]
|
null | null | 2406.14654 | null | null | http://arxiv.org/pdf/2406.14654v1 | 2024-06-20T18:17:58Z | 2024-06-20T18:17:58Z | Major Entity Identification: A Generalizable Alternative to Coreference
Resolution | The limited generalization of coreference resolution (CR) models has been a major bottleneck in the task's broad application. Prior work has identified annotation differences, especially for mention detection, as one of the main reasons for the generalization gap and proposed using additional annotated target domain data. Rather than relying on this additional annotation, we propose an alternative formulation of the CR task, Major Entity Identification (MEI), where we: (a) assume the target entities to be specified in the input, and (b) limit the task to only the frequent entities. Through extensive experiments, we demonstrate that MEI models generalize well across domains on multiple datasets with supervised models and LLM-based few-shot prompting. Additionally, the MEI task fits the classification framework, which enables the use of classification-based metrics that are more robust than the current CR metrics. Finally, MEI is also of practical use as it allows a user to search for all mentions of a particular entity or a group of entities of interest. | [
"['Kawshik Manikantan' 'Shubham Toshniwal' 'Makarand Tapaswi'\n 'Vineet Gandhi']"
]
|
null | null | 2406.14655 | null | null | http://arxiv.org/pdf/2406.14655v1 | 2024-06-20T18:21:24Z | 2024-06-20T18:21:24Z | HYPERmotion: Learning Hybrid Behavior Planning for Autonomous
Loco-manipulation | Enabling robots to autonomously perform hybrid motions in diverse environments can be beneficial for long-horizon tasks such as material handling, household chores, and work assistance. This requires extensive exploitation of intrinsic motion capabilities, extraction of affordances from rich environmental information, and planning of physical interaction behaviors. Although recent progress has demonstrated impressive humanoid whole-body control abilities, existing approaches struggle to achieve versatility and adaptability for new tasks. In this work, we propose HYPERmotion, a framework that learns, selects and plans behaviors based on tasks in different scenarios. We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints and create a motion library to store the learned skills. We apply the planning and reasoning features of large language models (LLMs) to complex loco-manipulation tasks, constructing a hierarchical task graph that comprises a series of primitive behaviors to bridge lower-level execution with higher-level planning. We leverage the interaction of distilled spatial geometry and 2D observations with a visual language model (VLM) to ground knowledge in a robotic morphology selector that chooses appropriate actions for single- or dual-arm, legged, or wheeled locomotion. Experiments in simulation and the real world show that learned motions can efficiently adapt to new tasks, demonstrating high autonomy from free-text commands in unstructured scenes. Videos and website: hy-motion.github.io/ | [
"['Jin Wang' 'Rui Dai' 'Weijie Wang' 'Luca Rossini' 'Francesco Ruscelli'\n 'Nikos Tsagarakis']"
]
|
null | null | 2406.14657 | null | null | http://arxiv.org/pdf/2406.14657v2 | 2024-07-05T16:51:15Z | 2024-06-20T18:22:59Z | OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization
Dataset | We introduce OpenDebateEvidence, a comprehensive dataset for argument mining and summarization sourced from the American Competitive Debate community. This dataset includes over 3.5 million documents with rich metadata, making it one of the most extensive collections of debate evidence. OpenDebateEvidence captures the complexity of arguments in high school and college debates, providing valuable resources for training and evaluation. Our extensive experiments demonstrate the efficacy of fine-tuning state-of-the-art large language models for argumentative abstractive summarization across various methods, models, and datasets. By providing this comprehensive resource, we aim to advance computational argumentation and support practical applications for debaters, educators, and researchers. OpenDebateEvidence is publicly available to support further research and innovation in computational argumentation. Access it here: https://huggingface.co/datasets/Yusuf5/OpenCaselist | [
"['Allen Roush' 'Yusuf Shabazz' 'Arvind Balaji' 'Peter Zhang'\n 'Stefano Mezza' 'Markus Zhang' 'Sanjay Basu' 'Sriram Vishwanath'\n 'Mehdi Fatemi' 'Ravid Shwartz-Ziv']"
]
|
null | null | 2406.14662 | null | null | http://arxiv.org/pdf/2406.14662v1 | 2024-06-20T18:30:09Z | 2024-06-20T18:30:09Z | Advantage Alignment Algorithms | The growing presence of artificially intelligent agents in everyday decision-making, from LLM assistants to autonomous vehicles, hints at a future in which conflicts may arise from each agent optimizing individual interests. In general-sum games these conflicts are apparent, where naive Reinforcement Learning agents get stuck in Pareto-suboptimal Nash equilibria. Consequently, opponent shaping has been introduced as a method with success at finding socially beneficial equilibria in social dilemmas. In this work, we introduce Advantage Alignment, a family of algorithms derived from first principles that perform opponent shaping efficiently and intuitively. This is achieved by aligning the advantages of conflicting agents in a given game by increasing the probability of mutually beneficial actions. We prove that existing opponent shaping methods, including LOLA and LOQA, implicitly perform Advantage Alignment. Compared to these works, Advantage Alignment mathematically simplifies the formulation of opponent shaping and seamlessly works for continuous action domains. We also demonstrate the effectiveness of our algorithm in a wide range of social dilemmas, achieving state-of-the-art results in each case, including a social dilemma version of the Negotiation Game. | [
"['Juan Agustin Duque' 'Milad Aghajohari' 'Tim Cooijmans' 'Tianyu Zhang'\n 'Aaron Courville']"
]
|
null | null | 2406.14670 | null | null | http://arxiv.org/pdf/2406.14670v1 | 2024-06-20T18:47:43Z | 2024-06-20T18:47:43Z | Exploring Design Choices for Building Language-Specific LLMs | Despite rapid progress in large language models (LLMs), their performance on the vast majority of languages remains unsatisfactory. In this paper, we study building language-specific LLMs by adapting monolingual and multilingual LLMs. We conduct systematic experiments on how design choices (base model selection, vocabulary extension, and continued fine-tuning) impact the adapted LLM, both in terms of efficiency (how many tokens are needed to encode the same amount of information) and end task performance. We find that (1) the initial performance before adaptation is not always indicative of the final performance; (2) efficiency can easily be improved with simple vocabulary extension and continued fine-tuning in most LLMs we study; and (3) the optimal adaptation method is highly language-dependent, and the simplest approach works well across various experimental settings. Adapting English-centric models can yield better results than adapting multilingual models despite their worse initial performance on low-resource languages. Together, our work lays the foundations for efficiently building language-specific LLMs by adapting existing LLMs. | [
"['Atula Tejaswi' 'Nilesh Gupta' 'Eunsol Choi']"
]
|
null | null | 2406.14675 | null | null | http://arxiv.org/pdf/2406.14675v1 | 2024-06-20T18:54:27Z | 2024-06-20T18:54:27Z | This Looks Better than That: Better Interpretable Models with ProtoPNeXt | Prototypical-part models are a popular interpretable alternative to black-box deep learning models for computer vision. However, they are difficult to train, with high sensitivity to hyperparameter tuning, inhibiting their application to new datasets and our understanding of which methods truly improve their performance. To facilitate the careful study of prototypical-part networks (ProtoPNets), we create a new framework for integrating components of prototypical-part models -- ProtoPNeXt. Using ProtoPNeXt, we show that applying Bayesian hyperparameter tuning and an angular prototype similarity metric to the original ProtoPNet is sufficient to produce new state-of-the-art accuracy for prototypical-part models on CUB-200 across multiple backbones. We further deploy this framework to jointly optimize for accuracy and prototype interpretability as measured by metrics included in ProtoPNeXt. Using the same resources, this produces models with substantially superior semantics and changes in accuracy between +1.3% and -1.5%. The code and trained models will be made publicly available upon publication. | [
"['Frank Willard' 'Luke Moffett' 'Emmanuel Mokel' 'Jon Donnelly'\n 'Stark Guo' 'Julia Yang' 'Giyoung Kim' 'Alina Jade Barnett'\n 'Cynthia Rudin']"
]
|
null | null | 2406.14682 | null | null | http://arxiv.org/pdf/2406.14682v1 | 2024-06-20T19:08:29Z | 2024-06-20T19:08:29Z | Uniform Convergence of Adversarially Robust Classifiers | In recent years there has been significant interest in the effect of different types of adversarial perturbations in data classification problems. Many of these models incorporate the adversarial power, which is an important parameter with an associated trade-off between accuracy and robustness. This work considers a general framework for adversarially-perturbed classification problems, in a large data or population-level limit. In such a regime, we demonstrate that, as adversarial strength goes to zero, optimal classifiers converge to the Bayes classifier in the Hausdorff distance. This significantly strengthens previous results, which generally focus on $L^1$-type convergence. The main argument relies upon direct geometric comparisons and is inspired by techniques from geometric measure theory. | [
"['Rachel Morris' 'Ryan Murray']"
]
|
null | null | 2406.14683 | null | null | http://arxiv.org/pdf/2406.14683v1 | 2024-06-20T19:11:35Z | 2024-06-20T19:11:35Z | TAGLAS: An atlas of text-attributed graph datasets in the era of large
graph and language models | In this report, we present TAGLAS, an atlas of text-attributed graph (TAG) datasets and benchmarks. TAGs are graphs with node and edge features represented in text, which have recently gained wide applicability in training graph-language or graph foundation models. In TAGLAS, we collect and integrate more than 23 TAG datasets with domains ranging from citation graphs to molecule graphs and tasks from node classification to graph question-answering. Unlike previous graph datasets and benchmarks, all datasets in TAGLAS have a unified node and edge text feature format, which allows a graph model to be simultaneously trained and evaluated on multiple datasets from various domains. Further, we provide a standardized, efficient, and simplified way to load all datasets and tasks. We also provide useful utilities like text-to-embedding conversion and graph-to-text conversion, which can facilitate different evaluation scenarios. Finally, we also provide standard and easy-to-use evaluation utilities. The project is open-sourced at https://github.com/JiaruiFeng/TAGLAS and is still under construction. Please expect more datasets/features in the future. | [
"['Jiarui Feng' 'Hao Liu' 'Lecheng Kong' 'Yixin Chen' 'Muhan Zhang']"
]
|
null | null | 2406.14686 | null | null | http://arxiv.org/pdf/2406.14686v1 | 2024-06-20T19:20:00Z | 2024-06-20T19:20:00Z | A Contrastive Learning Approach to Mitigate Bias in Speech Models | Speech models may be affected by performance imbalance in different population subgroups, raising concerns about fair treatment across these groups. Prior attempts to mitigate unfairness either focus on user-defined subgroups, potentially overlooking other affected subgroups, or do not explicitly improve the internal representation at the subgroup level. This paper proposes the first adoption of contrastive learning to mitigate speech model bias in underperforming subgroups. We employ a three-level learning technique that guides the model in focusing on different scopes for the contrastive loss, i.e., task, subgroup, and the errors within subgroups. The experiments on two spoken language understanding datasets and two languages demonstrate that our approach improves internal subgroup representations, thus reducing model bias and enhancing performance. | [
"['Alkis Koudounas' 'Flavio Giobergia' 'Eliana Pastor' 'Elena Baralis']"
]
|
null | null | 2406.14693 | null | null | http://arxiv.org/pdf/2406.14693v1 | 2024-06-20T19:29:04Z | 2024-06-20T19:29:04Z | Voice Disorder Analysis: a Transformer-based Approach | Voice disorders are pathologies significantly affecting patient quality of life. However, non-invasive automated diagnosis of these pathologies is still under-explored, due to both a shortage of pathological voice data and the diversity of the recording types used for diagnosis. This paper proposes a novel solution that adopts transformers directly working on raw voice signals and addresses the data shortage through synthetic data generation and data augmentation. Further, we consider many recording types at the same time, such as sentence reading and sustained vowel emission, by employing a Mixture of Experts ensemble to align the predictions on different data types. The experimental results, obtained on both public and private datasets, show the effectiveness of our solution in the disorder detection and classification tasks, largely improving over existing approaches. | [
"['Alkis Koudounas' 'Gabriele Ciravegna' 'Marco Fantini' 'Giovanni Succo'\n 'Erika Crosetti' 'Tania Cerquitelli' 'Elena Baralis']"
]
|
null | null | 2406.14697 | null | null | http://arxiv.org/pdf/2406.14697v1 | 2024-06-20T19:39:17Z | 2024-06-20T19:39:17Z | A Benchmark Study of Deep-RL Methods for Maximum Coverage Problems over
Graphs | Recent years have witnessed a growing trend toward employing deep reinforcement learning (Deep-RL) to derive heuristics for combinatorial optimization (CO) problems on graphs. The Maximum Coverage Problem (MCP) and its probabilistic variant on social networks, Influence Maximization (IM), have been particularly prominent in this line of research. In this paper, we present a comprehensive benchmark study that thoroughly investigates the effectiveness and efficiency of five recent Deep-RL methods for MCP and IM. These methods were published in top data science venues, namely S2V-DQN, Geometric-QN, GCOMB, RL4IM, and LeNSE. Our findings reveal that, across various scenarios, the Lazy Greedy algorithm consistently outperforms all Deep-RL methods for MCP. In the case of IM, theoretically sound algorithms like IMM and OPIM demonstrate superior performance compared to Deep-RL methods in most scenarios. Notably, we observe an abnormal phenomenon in the IM problem where Deep-RL methods slightly outperform IMM and OPIM when the influence spread barely increases as the budget increases. Furthermore, our experimental results highlight common issues when applying Deep-RL methods to MCP and IM in practical settings. Finally, we discuss potential avenues for improving Deep-RL methods. Our benchmark study sheds light on potential challenges in current deep reinforcement learning research for solving combinatorial optimization problems. | [
"['Zhicheng Liang' 'Yu Yang' 'Xiangyu Ke' 'Xiaokui Xiao' 'Yunjun Gao']"
]
|
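The Lazy Greedy baseline that the benchmark above finds so hard to beat is short enough to sketch in full. It exploits submodularity: a set's marginal gain can only shrink as the cover grows, so stale heap entries are re-evaluated only when they surface. The input dictionary below is illustrative.

```python
import heapq

def lazy_greedy_mcp(sets: dict, k: int):
    """sets: {set_id: iterable of elements}; greedily pick k sets."""
    covered, chosen = set(), []
    # Max-heap via negated gains; the third field stamps the round in which
    # each entry's gain was last computed.
    heap = [(-len(elems), sid, 0) for sid, elems in sets.items()]
    heapq.heapify(heap)
    round_no = 0
    while heap and len(chosen) < k:
        neg_gain, sid, stamp = heapq.heappop(heap)
        if stamp == round_no:        # gain is fresh for this round: take it
            chosen.append(sid)
            covered |= set(sets[sid])
            round_no += 1
        else:                        # stale: recompute the gain and push back
            gain = len(set(sets[sid]) - covered)
            heapq.heappush(heap, (-gain, sid, round_no))
    return chosen, covered

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
print(lazy_greedy_mcp(sets, k=2))  # (['a', 'c'], {1, 2, 3, 4, 5, 6})
```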
null | null | 2406.14699 | null | null | http://arxiv.org/pdf/2406.14699v1 | 2024-06-20T19:44:37Z | 2024-06-20T19:44:37Z | Preferential Multi-Objective Bayesian Optimization | Preferential Bayesian optimization (PBO) is a framework for optimizing a decision-maker's latent preferences over available design choices. While preferences often involve multiple conflicting objectives, existing work in PBO assumes that preferences can be encoded by a single objective function. For example, in robotic assistive devices, technicians often attempt to maximize user comfort while simultaneously minimizing mechanical energy consumption for longer battery life. Similarly, in autonomous driving policy design, decision-makers wish to understand the trade-offs between multiple safety and performance attributes before committing to a policy. To address this gap, we propose the first framework for PBO with multiple objectives. Within this framework, we present dueling scalarized Thompson sampling (DSTS), a multi-objective generalization of the popular dueling Thompson algorithm, which may be of interest beyond the PBO setting. We evaluate DSTS across four synthetic test functions and two simulated exoskeleton personalization and driving policy design tasks, showing that it outperforms several benchmarks. Finally, we prove that DSTS is asymptotically consistent. As a direct consequence, this result provides, to our knowledge, the first convergence guarantee for dueling Thompson sampling in the PBO setting. | [
"['Raul Astudillo' 'Kejun Li' 'Maegan Tucker' 'Chu Xin Cheng'\n 'Aaron D. Ames' 'Yisong Yue']"
]
|
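A schematic of the dueling scalarized Thompson sampling loop named above: each of the two duel candidates is chosen by scalarizing one posterior sample per objective under random preference weights. The `posterior_sample` helper is hypothetical (standing in for per-objective GP posteriors), and the real algorithm's candidate-selection details differ.

```python
import numpy as np

def dsts_duel(X, posterior_sample, n_objectives: int, rng):
    """Pick two candidate designs to show the decision-maker as a duel."""
    duel = []
    for _ in range(2):
        w = rng.dirichlet(np.ones(n_objectives))  # random scalarization weights
        # One Thompson sample per objective, scalarized, then maximized.
        f = np.stack([posterior_sample(X) for _ in range(n_objectives)], axis=1)
        duel.append(int(np.argmax(f @ w)))
    return duel

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 50)[:, None]
toy_sample = lambda X: np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(len(X))
print(dsts_duel(X, toy_sample, n_objectives=2, rng=rng))
```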
null | null | 2406.14715 | null | null | http://arxiv.org/pdf/2406.14715v1 | 2024-06-20T20:19:30Z | 2024-06-20T20:19:30Z | An Advanced Physics-Informed Neural Operator for Comprehensive Design
Optimization of Highly-Nonlinear Systems: An Aerospace Composites Processing
Case Study | Deep Operator Networks (DeepONets) and their physics-informed variants have shown significant promise in learning mappings between function spaces of partial differential equations, enhancing the generalization of traditional neural networks. However, for highly nonlinear real-world applications like aerospace composites processing, existing models often fail to capture underlying solutions accurately and are typically limited to single input functions, constraining rapid process design development. This paper introduces an advanced physics-informed DeepONet tailored for such complex systems with multiple input functions. Equipped with architectural enhancements like nonlinear decoders and effective training strategies such as curriculum learning and domain decomposition, the proposed model handles high-dimensional design spaces with significantly improved accuracy, outperforming the vanilla physics-informed DeepONet by two orders of magnitude. Its zero-shot prediction capability across a broad design space makes it a powerful tool for accelerating composites process design and optimization, with potential applications in other engineering fields characterized by strong nonlinearity. | [
"['Milad Ramezankhani' 'Anirudh Deodhar' 'Rishi Yash Parekh'\n 'Dagnachew Birru']"
]
|
null | null | 2406.14731 | null | null | http://arxiv.org/pdf/2406.14731v1 | 2024-06-20T20:54:06Z | 2024-06-20T20:54:06Z | Pathological Regularization Regimes in Classification Tasks | In this paper we demonstrate the possibility of a trend reversal in binary classification tasks between the dataset and a classification score obtained from a trained model. This trend reversal occurs for certain choices of the regularization parameter for model training, namely, if the parameter is contained in what we call the pathological regularization regime. For ridge regression, we give necessary and sufficient algebraic conditions on the dataset for the existence of a pathological regularization regime. Moreover, our results provide a data science practitioner with a hands-on tool to avoid hyperparameter choices suffering from trend reversal. We furthermore present numerical results on pathological regularization regimes for logistic regression. Finally, we draw connections to datasets exhibiting Simpson's paradox, providing a natural source of pathological datasets. | [
"['Maximilian Wiesmann' 'Paul Larsen']"
]
|
null | null | 2406.14742 | null | null | http://arxiv.org/pdf/2406.14742v1 | 2024-06-20T21:13:39Z | 2024-06-20T21:13:39Z | Latent Variable Sequence Identification for Cognitive Models with Neural
Bayes Estimation | Extracting time-varying latent variables from computational cognitive models is a key step in model-based neural analysis, which aims to understand the neural correlates of cognitive processes. However, existing methods only allow researchers to infer latent variables that explain subjects' behavior in a relatively small class of cognitive models. For example, a broad class of relevant cognitive models with analytically intractable likelihood is currently out of reach from standard techniques, based on Maximum a Posteriori parameter estimation. Here, we present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space using recurrent neural networks and simulated datasets. We show that our approach achieves competitive performance in inferring latent variable sequences in both tractable and intractable models. Furthermore, the approach is generalizable across different computational models and is adaptable for both continuous and discrete latent spaces. We then demonstrate its applicability in real world datasets. Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models for model-based neural analyses, and thus test a broader set of theories. | [
"['Ti-Fen Pan' 'Jing-Jing Li' 'Bill Thompson' 'Anne Collins']"
]
|
null | null | 2406.14743 | null | null | http://arxiv.org/pdf/2406.14743v1 | 2024-06-20T21:24:47Z | 2024-06-20T21:24:47Z | A General Online Algorithm for Optimizing Complex Performance Metrics | We consider sequential maximization of performance metrics that are general functions of a confusion matrix of a classifier (such as precision, F-measure, or G-mean). Such metrics are, in general, non-decomposable over individual instances, making their optimization very challenging. While they have been extensively studied under different frameworks in the batch setting, their analysis in the online learning regime is very limited, with only a few distinguished exceptions. In this paper, we introduce and analyze a general online algorithm that can be used in a straightforward way with a variety of complex performance metrics in binary, multi-class, and multi-label classification problems. The algorithm's update and prediction rules are appealingly simple and computationally efficient without the need to store any past data. We show the algorithm attains $\mathcal{O}(\frac{\ln n}{n})$ regret for concave and smooth metrics and verify the efficiency of the proposed algorithm in empirical studies. | [
"['Wojciech Kotłowski' 'Marek Wydmuch' 'Erik Schultheis' 'Rohit Babbar'\n 'Krzysztof Dembczyński']"
]
|
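The flavor of algorithm described above can be seen in the special case of the binary F-measure, where a classic online rule thresholds the model's probability estimate at half the current F-measure estimate, storing nothing but confusion-matrix counts. A toy sketch with a stub model standing in for a real probability estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
tp = fp = fn = 0
for _ in range(1000):
    p = rng.random()                         # stand-in for a model's P(y=1 | x_t)
    y = rng.random() < p                     # environment reveals the true label
    f_t = 2 * tp / max(2 * tp + fp + fn, 1)  # running F-measure estimate
    y_hat = p >= f_t / 2                     # threshold rule; no past data stored
    tp += int(y_hat and y)
    fp += int(y_hat and not y)
    fn += int(y and not y_hat)
print("final online F-measure:", 2 * tp / (2 * tp + fp + fn))
```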
null | null | 2406.14746 | null | null | http://arxiv.org/pdf/2406.14746v1 | 2024-06-20T21:36:54Z | 2024-06-20T21:36:54Z | Relational Reasoning On Graphs Using Opinion Dynamics | From pedestrians to Kuramoto oscillators, interactions between agents govern how a multitude of dynamical systems evolve in space and time. Discovering how these agents relate to each other can improve our understanding of the often complex dynamics that underlie these systems. Recent works learn to categorize relationships between agents based on observations of their physical behavior. These approaches are limited in that the relationship categories are modelled as independent and mutually exclusive, whereas in real-world systems categories often interact. In this work, we introduce a level of abstraction between the physical behavior of agents and the categories that define their behavior. To do this, we learn a mapping from the agents' states to their affinities for each category in a graph neural network. We integrate the physical proximity of agents and their affinities in a nonlinear opinion dynamics model, which provides a mechanism to identify mutually exclusive categories, predict an agent's evolution in time, and control an agent's behavior. We demonstrate the utility of our model for learning interpretable categories for mechanical systems, and demonstrate its efficacy on several long-horizon trajectory prediction benchmarks where we consistently outperform existing methods. | [
"['Yulong Yang' 'Bowen Feng' 'Keqin Wang' 'Naomi Leonard'\n 'Adji Bousso Dieng' 'Christine Allen-Blanchette']"
]
|
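The nonlinear opinion dynamics mechanism above can be sketched with a simple Euler integration: each agent's per-category opinion decays, is excited by a saturated combination of its own and its neighbors' opinions, and is biased by its affinity. The coefficients and random inputs below are illustrative stand-ins, not the paper's learned quantities.

```python
import numpy as np

def opinion_step(z, A, affinity, d=1.0, u=2.0, dt=0.05):
    """z: (agents, categories) opinions; A: (agents, agents) proximity graph."""
    drive = np.tanh(z + A @ z)      # saturated self + neighbor influence
    return z + dt * (-d * z + u * drive + affinity)

rng = np.random.default_rng(0)
z = 0.1 * rng.standard_normal((5, 3))        # five agents, three categories
A = (rng.random((5, 5)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
aff = 0.1 * rng.standard_normal((5, 3))      # stand-in learned affinities
for _ in range(200):
    z = opinion_step(z, A, aff)
print(z.argmax(axis=1))  # dominant category per agent after settling
```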
null | null | 2406.14753 | null | null | http://arxiv.org/pdf/2406.14753v1 | 2024-06-20T21:50:46Z | 2024-06-20T21:50:46Z | A General Control-Theoretic Approach for Reinforcement Learning: Theory
and Algorithms | We devise a control-theoretic reinforcement learning approach to support direct learning of the optimal policy. We establish theoretical properties of our approach and derive an algorithm based on a specific instance of this approach. Our empirical results demonstrate the significant benefits of our approach. | [
"['Weiqin Chen' 'Mark S. Squillante' 'Chai Wah Wu' 'Santiago Paternain']"
]
|
null | null | 2406.14760 | null | null | http://arxiv.org/pdf/2406.14760v1 | 2024-06-20T22:10:52Z | 2024-06-20T22:10:52Z | An LLM Feature-based Framework for Dialogue Constructiveness Assessment | Research on dialogue constructiveness assessment focuses on (i) analysing conversational factors that influence individuals to take specific actions, win debates, change their perspectives or broaden their open-mindedness and (ii) predicting constructive outcomes following dialogues for such use cases. These objectives can be achieved by training either interpretable feature-based models (which often involve costly human annotations) or neural models such as pre-trained language models (which have empirically shown higher task accuracy but lack interpretability). We propose a novel LLM feature-based framework that combines the strengths of feature-based and neural approaches while mitigating their downsides, in assessing dialogue constructiveness. The framework first defines a set of dataset-independent and interpretable linguistic features, which can be extracted by both prompting an LLM and simple heuristics. Such features are then used to train LLM feature-based models. We apply this framework to three datasets of dialogue constructiveness and find that our LLM feature-based models significantly outperform standard feature-based models and neural models, and tend to learn more robust prediction rules instead of relying on superficial shortcuts (as seen with neural models). Further, we demonstrate that interpreting these LLM feature-based models can yield valuable insights into what makes a dialogue constructive. | [
"['Lexin Zhou' 'Youmna Farag' 'Andreas Vlachos']"
]
|
null | null | 2406.14762 | null | null | http://arxiv.org/pdf/2406.14762v1 | 2024-06-20T22:22:31Z | 2024-06-20T22:22:31Z | Regularized Distribution Matching Distillation for One-step Unpaired
Image-to-Image Translation | Diffusion distillation methods aim to compress the diffusion models into efficient one-step generators while trying to preserve quality. Among them, Distribution Matching Distillation (DMD) offers a suitable framework for training general-form one-step generators, applicable beyond unconditional generation. In this work, we introduce its modification, called Regularized Distribution Matching Distillation, applicable to unpaired image-to-image (I2I) problems. We demonstrate its empirical performance in application to several translation tasks, including 2D examples and I2I between different image datasets, where it performs on par or better than multi-step diffusion baselines. | [
"['Denis Rakitin' 'Ivan Shchekotov' 'Dmitry Vetrov']"
]
|
null | null | 2406.14764 | null | null | http://arxiv.org/pdf/2406.14764v1 | 2024-06-20T22:28:11Z | 2024-06-20T22:28:11Z | RE-AdaptIR: Improving Information Retrieval through Reverse Engineered
Adaptation | Large language models (LLMs) fine-tuned for text-retrieval have demonstrated state-of-the-art results across several information retrieval (IR) benchmarks. However, supervised training for improving these models requires numerous labeled examples, which are generally unavailable or expensive to acquire. In this work, we explore the effectiveness of extending reverse engineered adaptation to the context of information retrieval (RE-AdaptIR). We use RE-AdaptIR to improve LLM-based IR models using only unlabeled data. We demonstrate improved performance both in training domains as well as zero-shot in domains where the models have seen no queries. We analyze performance changes in various fine-tuning scenarios and offer findings of immediate use to practitioners. | [
"['William Fleshman' 'Benjamin Van Durme']"
]
|
null | null | 2406.14765 | null | null | http://arxiv.org/pdf/2406.14765v1 | 2024-06-20T22:30:06Z | 2024-06-20T22:30:06Z | ChatGPT as Research Scientist: Probing GPT's Capabilities as a Research
Librarian, Research Ethicist, Data Generator and Data Predictor | How good a research scientist is ChatGPT? We systematically probed the capabilities of GPT-3.5 and GPT-4 across four central components of the scientific process: as a Research Librarian, Research Ethicist, Data Generator, and Novel Data Predictor, using psychological science as a testing field. In Study 1 (Research Librarian), unlike human researchers, GPT-3.5 and GPT-4 hallucinated, authoritatively generating fictional references 36.0% and 5.4% of the time, respectively, although GPT-4 exhibited an evolving capacity to acknowledge its fictions. In Study 2 (Research Ethicist), GPT-4 (though not GPT-3.5) proved capable of detecting violations like p-hacking in fictional research protocols, correcting 88.6% of blatantly presented issues, and 72.6% of subtly presented issues. In Study 3 (Data Generator), both models consistently replicated patterns of cultural bias previously discovered in large language corpora, indicating that ChatGPT can simulate known results, an antecedent to usefulness for both data generation and skills like hypothesis generation. Contrastingly, in Study 4 (Novel Data Predictor), neither model was successful at predicting new results absent in their training data, and neither appeared to leverage substantially new information when predicting more versus less novel outcomes. Together, these results suggest that GPT is a flawed but rapidly improving librarian, a decent research ethicist already, capable of data generation in simple domains with known characteristics but poor at predicting novel patterns of empirical data to aid future experimentation. | [
"['Steven A. Lehr' 'Aylin Caliskan' 'Suneragiri Liyanage'\n 'Mahzarin R. Banaji']"
]
|
null | null | 2406.14774 | null | null | http://arxiv.org/pdf/2406.14774v1 | 2024-06-20T22:56:31Z | 2024-06-20T22:56:31Z | Evaluating Numerical Reasoning in Text-to-Image Models | Text-to-image generative models are capable of producing high-quality images that often faithfully depict concepts described using natural language. In this work, we comprehensively evaluate a range of text-to-image models on numerical reasoning tasks of varying difficulty, and show that even the most advanced models have only rudimentary numerical skills. Specifically, their ability to correctly generate an exact number of objects in an image is limited to small numbers, it is highly dependent on the context the number term appears in, and it deteriorates quickly with each successive number. We also demonstrate that models have poor understanding of linguistic quantifiers (such as "a few" or "as many as"), the concept of zero, and struggle with more advanced concepts such as partial quantities and fractional representations. We bundle prompts, generated images and human annotations into GeckoNum, a novel benchmark for evaluation of numerical reasoning. | [
"['Ivana Kajić' 'Olivia Wiles' 'Isabela Albuquerque' 'Matthias Bauer'\n 'Su Wang' 'Jordi Pont-Tuset' 'Aida Nematzadeh']"
]
|
null | null | 2406.14775 | null | null | http://arxiv.org/pdf/2406.14775v1 | 2024-06-20T22:57:38Z | 2024-06-20T22:57:38Z | Machine Learning Global Simulation of Nonlocal Gravity Wave Propagation | Global climate models typically operate at a grid resolution of hundreds of kilometers and fail to resolve atmospheric mesoscale processes, e.g., clouds, precipitation, and gravity waves (GWs). Model representation of these processes and their sources is essential to the global circulation and planetary energy budget, but subgrid scale contributions from these processes are often only approximately represented in models using parameterizations. These parameterizations are subject to approximations and idealizations, which limit their capability and accuracy. The most drastic of these approximations is the "single-column approximation" which completely neglects the horizontal evolution of these processes, resulting in key biases in current climate models. With a focus on atmospheric GWs, we present the first-ever global simulation of atmospheric GW fluxes using machine learning (ML) models trained on the WINDSET dataset to emulate global GW emulation in the atmosphere, as an alternative to traditional single-column parameterizations. Using an Attention U-Net-based architecture trained on globally resolved GW momentum fluxes, we illustrate the importance and effectiveness of global nonlocality, when simulating GWs using data-driven schemes. | [
"['Aman Gupta' 'Aditi Sheshadri' 'Sujit Roy' 'Vishal Gaur' 'Manil Maskey'\n 'Rahul Ramachandran']"
]
|
null | null | 2406.14777 | null | null | http://arxiv.org/pdf/2406.14777v1 | 2024-06-20T23:00:25Z | 2024-06-20T23:00:25Z | Learning to Cover: Online Learning and Optimization with Irreversible
Decisions | We define an online learning and optimization problem with irreversible decisions contributing toward a coverage target. At each period, a decision-maker selects facilities to open, receives information on the success of each one, and updates a machine learning model to guide future decisions. The goal is to minimize costs across a finite horizon under a chance constraint reflecting the coverage target. We derive an optimal algorithm and a tight lower bound in an asymptotic regime characterized by a large target number of facilities $m \to \infty$ but a finite horizon $T \in \mathbb{Z}_+$. We find that the regret grows sub-linearly at a rate $\Theta\left(m^{\frac{1}{2}\cdot\frac{1}{1-2^{-T}}}\right)$, thus converging exponentially fast to $\Theta(\sqrt{m})$. We establish the robustness of this result to the learning environment; we also extend it to a more complicated facility location setting in a bipartite facility-customer graph with a target on customer coverage. Throughout, constructive proofs identify a policy featuring limited exploration initially for learning purposes, and fast exploitation later on for optimization purposes once uncertainty gets mitigated. These findings underscore the benefits of limited online learning and optimization, in that even a few rounds can provide significant benefits as compared to a no-learning baseline. | [
"['Alexandre Jacquillat' 'Michael Lingzhi Li']"
]
|
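The regret rate quoted in the abstract above is compact enough to restate; the following LaTeX snippet spells out why the exponent converges exponentially fast in the horizon $T$ to the familiar square-root regime.

```latex
\[
  \mathrm{Regret}(m,T) = \Theta\!\left(m^{\frac{1}{2}\cdot\frac{1}{1-2^{-T}}}\right),
  \qquad
  \frac{1}{2}\cdot\frac{1}{1-2^{-T}}
  = \frac{1}{2} + \frac{1}{2}\cdot\frac{2^{-T}}{1-2^{-T}}
  \;\xrightarrow[T\to\infty]{}\; \frac{1}{2},
\]
% the exponent exceeds 1/2 by a term of order 2^{-T}, hence the exponentially
% fast convergence to \Theta(\sqrt{m}) noted in the abstract.
```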
null | null | 2406.14784 | null | null | http://arxiv.org/pdf/2406.14784v1 | 2024-06-20T23:23:23Z | 2024-06-20T23:23:23Z | Active Learning for Fair and Stable Online Allocations | We explore an active learning approach for dynamic fair resource allocation problems. Unlike previous work that assumes full feedback from all agents on their allocations, we consider feedback from a select subset of agents at each epoch of the online resource allocation process. Despite this restriction, our proposed algorithms provide regret bounds that are sub-linear in number of time-periods for various measures that include fairness metrics commonly used in resource allocation problems and stability considerations in matching mechanisms. The key insight of our algorithms lies in adaptively identifying the most informative feedback using dueling upper and lower confidence bounds. With this strategy, we show that efficient decision-making does not require extensive feedback and produces efficient outcomes for a variety of problem classes. | [
"['Riddhiman Bhattacharya' 'Thanh Nguyen' 'Will Wei Sun'\n 'Mohit Tawarmalani']"
]
|
null | null | 2406.14785 | null | null | http://arxiv.org/pdf/2406.14785v1 | 2024-06-20T23:27:06Z | 2024-06-20T23:27:06Z | Understanding Finetuning for Factual Knowledge Extraction | In this work, we study the impact of QA fine-tuning data on downstream factuality. We show that fine-tuning on lesser-known facts that are poorly stored during pretraining yields significantly worse factuality than fine-tuning on well-known facts, even when all facts are seen during pretraining. We prove this phenomenon theoretically, showing that training on lesser-known facts can lead the model to ignore subject entity names and instead output a generic plausible response even when the relevant factual knowledge is encoded in the model. On three question answering benchmarks (PopQA, Entity Questions, and MMLU) and two language models (Llama-2-7B and Mistral-7B), we find that (i) finetuning on a completely factual but lesser-known subset of the data deteriorates downstream factuality (5-10%) and (ii) finetuning on a subset of better-known examples matches or outperforms finetuning on the entire dataset. Ultimately, our results shed light on the interaction between pretrained knowledge and finetuning data and demonstrate the importance of taking into account how facts are stored in the pretrained model when fine-tuning for knowledge-intensive tasks. | [
"['Gaurav Ghosal' 'Tatsunori Hashimoto' 'Aditi Raghunathan']"
]
|
null | null | 2406.14786 | null | null | http://arxiv.org/pdf/2406.14786v1 | 2024-06-20T23:27:41Z | 2024-06-20T23:27:41Z | Graph Structure Learning with Interpretable Bayesian Neural Networks | Graphs serve as generic tools to encode the underlying relational structure of data. Often this graph is not given, and so the task of inferring it from nodal observations becomes important. Traditional approaches formulate a convex inverse problem with a smoothness promoting objective and rely on iterative methods to obtain a solution. In supervised settings where graph labels are available, one can unroll and truncate these iterations into a deep network that is trained end-to-end. Such a network is parameter efficient and inherits inductive bias from the optimization formulation, an appealing aspect for data constrained settings in, e.g., medicine, finance, and the natural sciences. But typically such settings care equally about uncertainty over edge predictions, not just point estimates. Here we introduce novel iterations with independently interpretable parameters, i.e., parameters whose values - independent of other parameters' settings - proportionally influence characteristics of the estimated graph, such as edge sparsity. After unrolling these iterations, prior knowledge over such graph characteristics shapes prior distributions over these independently interpretable network parameters to yield a Bayesian neural network (BNN) capable of graph structure learning (GSL) from smooth signal observations. Fast execution and parameter efficiency allow for high-fidelity posterior approximation via Markov Chain Monte Carlo (MCMC) and thus uncertainty quantification on edge predictions. Synthetic and real data experiments corroborate this model's ability to provide well-calibrated estimates of uncertainty, in test cases that include unveiling economic sector modular structure from S$\&$P$500$ data and recovering pairwise digit similarities from MNIST images. Overall, this framework enables GSL in modest-scale applications where uncertainty on the data structure is paramount. | [
"['Max Wasserman' 'Gonzalo Mateos']"
]
|
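To make the "independently interpretable parameters" idea in the record above concrete, here is a minimal, hypothetical sketch of one unrolled proximal-gradient-style iteration for learning edge weights from smooth signals: the threshold `tau` proportionally controls edge sparsity regardless of the other parameters, which is what lets a sparsity prior be placed directly on it. The objective, gradient, and shapes are illustrative assumptions, not the paper's exact iterations.

```python
import torch

def unrolled_step(w, z, alpha, tau):
    """One illustrative iteration: w = edge weights, z = pairwise nodal distances."""
    grad = z - 1.0 / (w + 1e-8)    # smoothness term plus a log-barrier-style term (toy)
    w = w - alpha * grad           # gradient step on a smoothness-promoting objective
    return torch.relu(w - tau)     # soft-threshold: larger tau => sparser graph

w = torch.rand(10)                 # vectorized upper triangle of a toy adjacency
z = torch.rand(10)                 # distances computed from smooth graph signals
for _ in range(5):                 # truncated unrolling = a 5-layer network
    w = unrolled_step(w, z, alpha=0.1, tau=0.05)
```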
null | null | 2406.14794 | null | null | http://arxiv.org/pdf/2406.14794v3 | 2024-07-12T07:28:55Z | 2024-06-20T23:51:32Z | ImageFlowNet: Forecasting Multiscale Trajectories of Disease Progression
with Irregularly-Sampled Longitudinal Medical Images | The forecasting of disease progression from images is a holy grail for clinical decision making. However, this task is complicated by the inherent high dimensionality, temporal sparsity and sampling irregularity in longitudinal image acquisitions. Existing methods often rely on extracting hand-crafted features and performing time-series analysis in this vector space, leading to a loss of rich spatial information within the images. To overcome these challenges, we introduce ImageFlowNet, a novel framework that learns latent-space flow fields that evolve multiscale representations in joint embedding spaces using neural ODEs and SDEs to model disease progression in the image domain. Notably, ImageFlowNet learns multiscale joint representation spaces by combining cohorts of patients together so that information can be transferred between the patient samples. The dynamics then provide plausible trajectories of progression, with the SDE providing alternative trajectories from the same starting point. We provide theoretical insights that support our formulation of ODEs, and motivate our regularizations involving high-level visual features, latent space organization, and trajectory smoothness. We then demonstrate ImageFlowNet's effectiveness through empirical evaluations on three longitudinal medical image datasets depicting progression in retinal geographic atrophy, multiple sclerosis, and glioblastoma. | [
"['Chen Liu' 'Ke Xu' 'Liangbo L. Shen' 'Guillaume Huguet' 'Zilong Wang'\n 'Alexander Tong' 'Danilo Bzdok' 'Jay Stewart' 'Jay C. Wang'\n 'Lucian V. Del Priore' 'Smita Krishnaswamy']"
]
|
null | null | 2406.14796 | null | null | http://arxiv.org/pdf/2406.14796v1 | 2024-06-21T00:13:17Z | 2024-06-21T00:13:17Z | MU-Bench: A Multitask Multimodal Benchmark for Machine Unlearning | Recent advancements in Machine Unlearning (MU) have introduced solutions to selectively remove certain training samples, such as those with outdated or sensitive information, from trained models. Despite these advancements, evaluation of MU methods has been inconsistent, employing different trained models, architectures, and sample removal strategies, which hampers accurate comparison. In addition, prior MU approaches have mainly focused on singular tasks or modalities, which is not comprehensive. To address these limitations, we develop MU-Bench, the first comprehensive benchmark for MU that (i) unifies the sets of deleted samples and trained models, and (ii) provides broad coverage of tasks and data modalities, including previously unexplored domains such as speech and video classification. Our evaluation shows that RandLabel and SalUn are the most effective general MU approaches on MU-Bench, and BadT and SCRUB are capable of achieving random performance on the deletion set. We analyze several under-investigated aspects of unlearning, including scalability, the impacts of parameter-efficient fine-tuning and curriculum learning, and susceptibility to dataset biases. MU-Bench provides an easy-to-use package that includes dataset splits, models, and implementations, together with a leaderboard to enable unified and scalable MU research. | [
"['Jiali Cheng' 'Hadi Amiri']"
]
|
null | null | 2406.14798 | null | null | http://arxiv.org/pdf/2406.14798v1 | 2024-06-21T00:16:55Z | 2024-06-21T00:16:55Z | Probabilistic Emulation of a Global Climate Model with Spherical
DYffusion | Data-driven deep learning models are on the verge of transforming global weather forecasting. It is an open question if this success can extend to climate modeling, where long inference rollouts and data complexity pose significant challenges. Here, we present the first conditional generative model able to produce global climate ensemble simulations that are accurate and physically consistent. Our model runs at 6-hourly time steps and is shown to be stable for 10-year-long simulations. Our approach beats relevant baselines and nearly reaches a gold standard for successful climate model emulation. We discuss the key design choices behind our dynamics-informed diffusion model-based approach which enables this significant step towards efficient, data-driven climate simulations that can help us better understand the Earth and adapt to a changing climate. | [
"['Salva Rühling Cachay' 'Brian Henn' 'Oliver Watt-Meyer'\n 'Christopher S. Bretherton' 'Rose Yu']"
]
|
null | null | 2406.14808 | null | null | http://arxiv.org/pdf/2406.14808v1 | 2024-06-21T01:13:18Z | 2024-06-21T01:13:18Z | On the estimation rate of Bayesian PINN for inverse problems | Solving partial differential equations (PDEs) and their inverse problems using Physics-informed neural networks (PINNs) is a rapidly growing approach in the physics and machine learning community. Although several architectures exist for PINNs that work remarkably in practice, our theoretical understanding of their performances is somewhat limited. In this work, we study the behavior of a Bayesian PINN estimator of the solution of a PDE from $n$ independent noisy measurements of the solution. We focus on a class of equations that are linear in their parameters (with unknown coefficients $\theta_\star$). We show that when the partial differential equation admits a classical solution (say $u_\star$), differentiable to order $\beta$, the mean square error of the Bayesian posterior mean is at least of order $n^{-2\beta/(2\beta + d)}$. Furthermore, we establish a convergence rate of the linear coefficients of $\theta_\star$ depending on the order of the underlying differential operator. Last but not least, our theoretical results are validated through extensive simulations. | [
"['Yi Sun' 'Debarghya Mukherjee' 'Yves Atchade']"
]
|
null | null | 2406.14815 | null | null | http://arxiv.org/pdf/2406.14815v2 | 2024-06-26T22:23:23Z | 2024-06-21T01:32:03Z | Latent diffusion models for parameterization and data assimilation of
facies-based geomodels | Geological parameterization entails the representation of a geomodel using a small set of latent variables and a mapping from these variables to grid-block properties such as porosity and permeability. Parameterization is useful for data assimilation (history matching), as it maintains geological realism while reducing the number of variables to be determined. Diffusion models are a new class of generative deep-learning procedures that have been shown to outperform previous methods, such as generative adversarial networks, for image generation tasks. Diffusion models are trained to "denoise", which enables them to generate new geological realizations from input fields characterized by random noise. Latent diffusion models, which are the specific variant considered in this study, provide dimension reduction through use of a low-dimensional latent variable. The model developed in this work includes a variational autoencoder for dimension reduction and a U-net for the denoising process. Our application involves conditional 2D three-facies (channel-levee-mud) systems. The latent diffusion model is shown to provide realizations that are visually consistent with samples from geomodeling software. Quantitative metrics involving spatial and flow-response statistics are evaluated, and general agreement between the diffusion-generated models and reference realizations is observed. Stability tests are performed to assess the smoothness of the parameterization method. The latent diffusion model is then used for ensemble-based data assimilation. Two synthetic "true" models are considered. Significant uncertainty reduction, posterior P$_{10}$-P$_{90}$ forecasts that generally bracket observed data, and consistent posterior geomodels, are achieved in both cases. | [
"['Guido Di Federico' 'Louis J. Durlofsky']"
]
|
null | null | 2406.14835 | null | null | http://arxiv.org/pdf/2406.14835v1 | 2024-06-21T02:35:30Z | 2024-06-21T02:35:30Z | ToVo: Toxicity Taxonomy via Voting | Existing toxic detection models face significant limitations, such as lack of transparency, customization, and reproducibility. These challenges stem from the closed-source nature of their training data and the paucity of explanations for their evaluation mechanism. To address these issues, we propose a dataset creation mechanism that integrates voting and chain-of-thought processes, producing a high-quality open-source dataset for toxic content detection. Our methodology ensures diverse classification metrics for each sample and includes both classification scores and explanatory reasoning for the classifications. We utilize the dataset created through our proposed mechanism to train our model, which is then compared against existing widely-used detectors. Our approach not only enhances transparency and customizability but also facilitates better fine-tuning for specific use cases. This work contributes a robust framework for developing toxic content detection models, emphasizing openness and adaptability, thus paving the way for more effective and user-specific content moderation solutions. | [
"['Tinh Son Luong' 'Thanh-Thien Le' 'Thang Viet Doan' 'Linh Ngo Van'\n 'Thien Huu Nguyen' 'Diep Thi-Ngoc Nguyen']"
]
|
null | null | 2406.14838 | null | null | http://arxiv.org/pdf/2406.14838v1 | 2024-06-21T02:43:25Z | 2024-06-21T02:43:25Z | Bayesian neural networks for predicting uncertainty in full-field
material response | Stress and material deformation field predictions are among the most important tasks in computational mechanics. These predictions are typically made by solving the governing equations of continuum mechanics using finite element analysis, which can become computationally prohibitive considering complex microstructures and material behaviors. Machine learning (ML) methods offer potentially cost effective surrogates for these applications. However, existing ML surrogates are either limited to low-dimensional problems and/or do not provide uncertainty estimates in the predictions. This work proposes an ML surrogate framework for stress field prediction and uncertainty quantification for diverse materials microstructures. A modified Bayesian U-net architecture is employed to provide a data-driven image-to-image mapping from initial microstructure to stress field with prediction (epistemic) uncertainty estimates. The Bayesian posterior distributions for the U-net parameters are estimated using three state-of-the-art inference algorithms: the posterior sampling-based Hamiltonian Monte Carlo method and two variational approaches, the Monte-Carlo Dropout method and the Bayes by Backprop algorithm. A systematic comparison of the predictive accuracy and uncertainty estimates for these methods is performed for a fiber reinforced composite material and polycrystalline microstructure application. It is shown that the proposed methods yield predictions of high accuracy compared to the FEA solution, while uncertainty estimates depend on the inference approach. Generally, the Hamiltonian Monte Carlo and Bayes by Backprop methods provide consistent uncertainty estimates. Uncertainty estimates from Monte Carlo Dropout, on the other hand, are more difficult to interpret and depend strongly on the method's design. | [
"['George D. Pasparakis' 'Lori Graham-Brady' 'Michael D. Shields']"
]
|
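Of the three inference schemes compared in the record above, Monte Carlo Dropout is the simplest to sketch: keep dropout active at test time and treat repeated stochastic forward passes as approximate posterior samples. The tiny convolutional stand-in below is an assumption; the paper uses a modified Bayesian U-net.

```python
import torch
import torch.nn as nn

# Toy image-to-image model with a dropout layer (placeholder for a Bayesian U-net).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.1),
    nn.Conv2d(16, 1, 3, padding=1),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                                  # keep dropout active at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)             # predictive mean, epistemic std per pixel

microstructure = torch.randn(1, 1, 64, 64)          # toy stand-in for a microstructure image
mean_stress, stress_std = mc_dropout_predict(model, microstructure)
```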
null | null | 2406.14841 | null | null | http://arxiv.org/pdf/2406.14841v1 | 2024-06-21T02:58:45Z | 2024-06-21T02:58:45Z | TabularMark: Watermarking Tabular Datasets for Machine Learning | Watermarking is broadly utilized to protect ownership of shared data while preserving data utility. However, existing watermarking methods for tabular datasets fall short on the desired properties (detectability, non-intrusiveness, and robustness) and only preserve data utility from the perspective of data statistics, ignoring the performance of downstream ML models trained on the datasets. Can we watermark tabular datasets without significantly compromising their utility for training ML models while preventing attackers from training usable ML models on attacked datasets? In this paper, we propose a hypothesis testing-based watermarking scheme, TabularMark. Data noise partitioning is utilized for data perturbation during embedding, which is adaptable for numerical and categorical attributes while preserving the data utility. For detection, a custom-threshold one proportion z-test is employed, which can reliably determine the presence of the watermark. Experiments on real-world and synthetic datasets demonstrate the superiority of TabularMark in detectability, non-intrusiveness, and robustness. | [
"['Yihao Zheng' 'Haocheng Xia' 'Junyuan Pang' 'Jinfei Liu' 'Kui Ren'\n 'Lingyang Chu' 'Yang Cao' 'Li Xiong']"
]
|
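The detection step described in the record above (a custom-threshold one-proportion z-test) reduces to a few lines. The sketch below assumes the embedding stage marked cells so that, absent a watermark, a perturbation match occurs with probability p0 = 0.5; the counts and the threshold of 3.0 are illustrative, not TabularMark's actual parameters.

```python
import math

def one_proportion_ztest(n_matches, n_cells, p0=0.5):
    """z-statistic for H0: match rate equals the no-watermark baseline p0."""
    p_hat = n_matches / n_cells
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n_cells)

z = one_proportion_ztest(n_matches=720, n_cells=1000)
watermark_detected = z > 3.0   # custom threshold, as the abstract describes
```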
null | null | 2406.14844 | null | null | http://arxiv.org/pdf/2406.14844v1 | 2024-06-21T03:13:40Z | 2024-06-21T03:13:40Z | DN-CL: Deep Symbolic Regression against Noise via Contrastive Learning | Noise ubiquitously exists in signals due to numerous factors including physical, electronic, and environmental effects. Traditional methods of symbolic regression, such as genetic programming or deep learning models, aim to find the most fitting expressions for these signals. However, these methods often overlook the noise present in real-world data, leading to reduced fitting accuracy. To tackle this issue, we propose \textit{\textbf{D}eep Symbolic Regression against \textbf{N}oise via \textbf{C}ontrastive \textbf{L}earning (DN-CL)}. DN-CL employs two parameter-sharing encoders to embed data points from various data transformations into feature shields against noise. This model treats noisy data and clean data as different views of the ground-truth mathematical expressions. Distances between these features are minimized, utilizing contrastive learning to distinguish between 'positive' noise-corrected pairs and 'negative' contrasting pairs. Our experiments indicate that DN-CL demonstrates superior performance in handling both noisy and clean data, presenting a promising method of symbolic regression. | [
"['Jingyi Liu' 'Yanjie Li' 'Lina Yu' 'Min Wu' 'Weijun Li' 'Wenqiang Li'\n 'Meilan Hao' 'Yusong Deng' 'Shu Wei']"
]
|
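The contrastive setup in the DN-CL record above can be sketched with a shared encoder and an InfoNCE-style objective that pulls noisy and clean views of the same expression's data together while pushing other pairs apart. The encoder architecture, dimensions, and exact loss are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn.functional as F

# One parameter-sharing encoder applied to both views (toy architecture).
encoder = torch.nn.Sequential(
    torch.nn.Linear(100, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))

def info_nce(z_clean, z_noisy, temperature=0.1):
    z_clean = F.normalize(z_clean, dim=1)
    z_noisy = F.normalize(z_noisy, dim=1)
    logits = z_clean @ z_noisy.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(z_clean.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, labels)

clean = torch.randn(16, 100)                       # sampled data points of an expression
noisy = clean + 0.1 * torch.randn_like(clean)      # the noisy view
loss = info_nce(encoder(clean), encoder(noisy))
```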
null | null | 2406.14846 | null | null | http://arxiv.org/pdf/2406.14846v1 | 2024-06-21T03:21:26Z | 2024-06-21T03:21:26Z | Graph Edge Representation via Tensor Product Graph Convolutional
Representation | Graph Convolutional Networks (GCNs) have been widely studied. The core of GCNs is the definition of convolution operators on graphs. However, existing Graph Convolution (GC) operators are mainly defined on adjacency matrix and node features and generally focus on obtaining effective node embeddings which cannot be utilized to address the graphs with (high-dimensional) edge features. To address this problem, by leveraging tensor contraction representation and tensor product graph diffusion theories, this paper analogously defines an effective convolution operator on graphs with edge features which is named as Tensor Product Graph Convolution (TPGC). The proposed TPGC aims to obtain effective edge embeddings. It provides a complementary model to traditional graph convolutions (GCs) to address the more general graph data analysis with both node and edge features. Experimental results on several graph learning tasks demonstrate the effectiveness of the proposed TPGC. | [
"['Bo Jiang' 'Sheng Ge' 'Ziyan Zhang' 'Beibei Wang' 'Jin Tang' 'Bin Luo']"
]
|
null | null | 2406.14856 | null | null | http://arxiv.org/pdf/2406.14856v1 | 2024-06-21T04:02:19Z | 2024-06-21T04:02:19Z | Accessible, At-Home Detection of Parkinson's Disease via Multi-task
Video Analysis | Limited access to neurological care leads to missed diagnoses of Parkinson's disease (PD), leaving many individuals unidentified and untreated. We trained a novel neural network-based fusion architecture to detect Parkinson's disease (PD) by analyzing features extracted from webcam recordings of three tasks: finger tapping, facial expression (smiling), and speech (uttering a sentence containing all letters of the alphabet). Additionally, the model incorporated Monte Carlo Dropout to improve prediction accuracy by considering uncertainties. The study participants (n = 845, 272 with PD) were randomly split into three sets: 60% for training, 20% for model selection (hyper-parameter tuning), and 20% for final performance evaluation. The dataset consists of 1102 sessions, each session containing videos of all three tasks. Our proposed model achieved significantly better accuracy, area under the ROC curve (AUROC), and sensitivity at non-inferior specificity compared to any single-task model. Withholding uncertain predictions further boosted the performance, achieving 88.0% (95% CI: 87.7% - 88.4%) accuracy, 93.0% (92.8% - 93.2%) AUROC, 79.3% (78.4% - 80.2%) sensitivity, and 92.6% (92.3% - 92.8%) specificity, at the expense of not being able to predict for 2.3% (2.0% - 2.6%) data. Further analysis suggests that the trained model does not exhibit any detectable bias across sex and ethnic subgroups and is most effective for individuals aged between 50 and 80. This accessible, low-cost approach requiring only an internet-enabled device with a webcam and microphone paves the way for convenient PD screening at home, particularly in regions with limited access to clinical specialists. | [
"['Md Saiful Islam' 'Tariq Adnan' 'Jan Freyberg' 'Sangwu Lee'\n 'Abdelrahman Abdelkader' 'Meghan Pawlik' 'Cathe Schwartz' 'Karen Jaffe'\n 'Ruth B. Schneider' 'E Ray Dorsey' 'Ehsan Hoque']"
]
|
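The "withholding uncertain predictions" step in the record above is a selective-prediction pattern: estimate predictive uncertainty via Monte Carlo Dropout and abstain above a threshold. The toy model, feature dimension, and threshold below are placeholders, not the paper's multi-task fusion architecture.

```python
import torch

def predict_or_abstain(model, x, n_samples=30, max_std=0.15):
    model.train()                                   # dropout stays on for MC sampling
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    mean, std = probs.mean(0), probs.std(0)
    if std.item() > max_std:
        return None                                 # abstain; defer to a specialist
    return mean.item() > 0.5                        # binary PD screen

model = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.ReLU(),
                            torch.nn.Dropout(0.2), torch.nn.Linear(8, 1))
decision = predict_or_abstain(model, torch.randn(1, 10))
```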
null | null | 2406.14862 | null | null | http://arxiv.org/pdf/2406.14862v3 | 2024-06-28T13:19:37Z | 2024-06-21T04:39:03Z | LatentExplainer: Explaining Latent Representations in Deep Generative
Models with Multi-modal Foundation Models | Deep generative models like VAEs and diffusion models have advanced various generation tasks by leveraging latent variables to learn data distributions and generate high-quality samples. Despite the field of explainable AI making strides in interpreting machine learning models, understanding latent variables in generative models remains challenging. This paper introduces LatentExplainer, a framework for automatically generating semantically meaningful explanations of latent variables in deep generative models. LatentExplainer tackles three main challenges: inferring the meaning of latent variables, aligning explanations with inductive biases, and handling varying degrees of explainability. By perturbing latent variables and interpreting changes in generated data, the framework provides a systematic approach to understanding and controlling the data generation process, enhancing the transparency and interpretability of deep generative models. We evaluate our proposed method on several real-world and synthetic datasets, and the results demonstrate superior performance in generating high-quality explanations of latent variables. | [
"['Mengdan Zhu' 'Raasikh Kanjiani' 'Jiahui Lu' 'Andrew Choi' 'Qirui Ye'\n 'Liang Zhao']"
]
|
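The perturb-and-interpret loop in the LatentExplainer record above can be sketched as a latent sweep followed by decoding; the decoder here is a stand-in for a trained generative model, and the multimodal captioning step is only indicated in a comment.

```python
import torch

# Placeholder decoder standing in for a trained VAE/diffusion decoder.
decoder = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 784))

z = torch.randn(1, 8)                        # a base latent code
dim, deltas = 3, torch.linspace(-2.0, 2.0, 5)
sweeps = [decoder(z + delta * torch.eye(8)[dim]) for delta in deltas]
# `sweeps` holds generations along latent dim 3; a multimodal foundation model
# can then be prompted to describe what varies, yielding a candidate explanation.
```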
null | null | 2406.14864 | null | null | http://arxiv.org/pdf/2406.14864v1 | 2024-06-21T04:50:02Z | 2024-06-21T04:50:02Z | A review of feature selection strategies utilizing graph data structures
and knowledge graphs | Feature selection in Knowledge Graphs (KGs) is increasingly utilized in diverse domains, including biomedical research, Natural Language Processing (NLP), and personalized recommendation systems. This paper delves into the methodologies for feature selection within KGs, emphasizing their roles in enhancing machine learning (ML) model efficacy, hypothesis generation, and interpretability. Through this comprehensive review, we aim to catalyze further innovation in feature selection for KGs, paving the way for more insightful, efficient, and interpretable analytical models across various domains. Our exploration reveals the critical importance of scalability, accuracy, and interpretability in feature selection techniques, advocating for the integration of domain knowledge to refine the selection process. We highlight the burgeoning potential of multi-objective optimization and interdisciplinary collaboration in advancing KG feature selection, underscoring the transformative impact of such methodologies on precision medicine, among other fields. The paper concludes by charting future directions, including the development of scalable, dynamic feature selection algorithms and the integration of explainable AI principles to foster transparency and trust in KG-driven models. | [
"['Sisi Shao' 'Pedro Henrique Ribeiro' 'Christina Ramirez' 'Jason H. Moore']"
]
|
null | null | 2406.14867 | null | null | http://arxiv.org/pdf/2406.14867v1 | 2024-06-21T05:05:39Z | 2024-06-21T05:05:39Z | DistiLRR: Transferring Code Repair for Low-Resource Programming
Languages | Large language models (LLMs) have shown remarkable performance on code generation tasks. A recent application of LLMs for code generation is iterative code repair, where a model fixes an incorrect program by rationalizing about errors and generating a new program. However, code repair is primarily studied on high-resource languages like Python, and the framework's efficacy is under-explored on low-resource languages. To apply code repair for low-resource languages, we propose Distilling Low-Resource Repairs (DistiLRR), an approach that transfers the reasoning and code generation ability from a teacher model to a student model. Our results show that DistiLRR consistently outperforms baselines on low-resource languages, but has similar performance on high-resource languages. To investigate this behavior, we perform a further analysis and find that the correlation between rationale quality and code correctness is weaker than previously perceived. We hypothesize this weakness is magnified in low-resource settings where base models lack deep knowledge of a programming language, leading to wavering benefits of code repair between high-resource and low-resource languages. | [
"['Kyle Wong' 'Alfonso Amayuelas' 'Liangming Pan' 'William Yang Wang']"
]
|
null | null | 2406.14868 | null | null | http://arxiv.org/pdf/2406.14868v2 | 2024-06-25T08:44:24Z | 2024-06-21T05:13:20Z | Direct Multi-Turn Preference Optimization for Language Agents | Adapting Large Language Models (LLMs) for agent tasks is critical in developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation with the alleviation of compounding errors, offering a means to directly optimize Reinforcement Learning (RL) objectives. However, applying DPO to multi-turn tasks presents challenges due to the inability to cancel the partition function. Overcoming this obstacle involves making the partition function independent of the current state and addressing length disparities between preferred and dis-preferred trajectories. In this light, we replace the policy constraint with the state-action occupancy measure constraint in the RL objective and add length normalization to the Bradley-Terry model, yielding a novel loss function named DMPO for multi-turn agent tasks with theoretical explanations. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss. | [
"['Wentao Shi' 'Mengqi Yuan' 'Junkang Wu' 'Qifan Wang' 'Fuli Feng']"
]
|
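The length-normalization idea in the DMPO record above is easy to illustrate on top of a standard DPO-style Bradley-Terry loss: divide each trajectory's log-probability by its length before taking the preference margin. This is a schematic of that single ingredient under assumed inputs, not the full DMPO loss with its occupancy-measure constraint.

```python
import torch
import torch.nn.functional as F

def length_normalized_dpo(logp_w, logp_l, len_w, len_l, beta=0.1):
    """logp_*: summed log-probs of preferred / dis-preferred trajectories."""
    margin = beta * (logp_w / len_w - logp_l / len_l)  # per-token margin
    return -F.logsigmoid(margin).mean()                # Bradley-Terry NLL

loss = length_normalized_dpo(
    logp_w=torch.tensor([-120.0]), logp_l=torch.tensor([-150.0]),
    len_w=torch.tensor([40.0]), len_l=torch.tensor([60.0]),
)
```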
null | null | 2406.14871 | null | null | http://arxiv.org/pdf/2406.14871v1 | 2024-06-21T05:35:57Z | 2024-06-21T05:35:57Z | I don't trust you (anymore)! -- The effect of students' LLM use on
Lecturer-Student-Trust in Higher Education | Trust plays a pivotal role in Lecturer-Student-Collaboration, encompassing teaching and research aspects. The advent of Large Language Models (LLMs) in platforms like OpenAI's ChatGPT, coupled with their cost-effectiveness and high-quality results, has led to their rapid adoption among university students. However, discerning genuine student input from LLM-generated output poses a challenge for lecturers. This dilemma jeopardizes the trust relationship between lecturers and students, potentially impacting university downstream activities, particularly collaborative research initiatives. Despite attempts to establish guidelines for student LLM use, a clear framework mutually beneficial for lecturers and students in higher education remains elusive. This study addresses the research question: How does the use of LLMs by students impact Informational and Procedural Justice, influencing Team Trust and Expected Team Performance? Methodologically, we applied a quantitative construct-based survey, evaluated using techniques of Structural Equation Modelling (PLS-SEM) to examine potential relationships among these constructs. Our findings based on 23 valid respondents from Ndejje University indicate that lecturers are less concerned about the fairness of LLM use per se but are more focused on the transparency of student utilization, which significantly influences Team Trust positively. This research contributes to the global discourse on integrating and regulating LLMs and subsequent models in education. We propose that guidelines should support LLM use while enforcing transparency in Lecturer-Student-Collaboration to foster Team Trust and Performance. The study contributes valuable insights for shaping policies enabling ethical and transparent LLM usage in education to ensure the effectiveness of collaborative learning environments. | [
"['Simon Kloker' 'Matthew Bazanya' 'Twaha Kateete']"
]
|
null | null | 2406.14876 | null | null | http://arxiv.org/pdf/2406.14876v1 | 2024-06-21T05:57:08Z | 2024-06-21T05:57:08Z | Training Greedy Policy for Proposal Batch Selection in Expensive
Multi-Objective Combinatorial Optimization | Active learning is increasingly adopted for expensive multi-objective combinatorial optimization problems, but it involves a challenging subset selection problem, optimizing the batch acquisition score that quantifies the goodness of a batch for evaluation. Due to the excessively large search space of the subset selection problem, prior methods optimize the batch acquisition on the latent space, which has discrepancies with the actual space, or optimize individual acquisition scores without considering the dependencies among candidates in a batch instead of directly optimizing the batch acquisition. To manage the vast search space, a simple and effective approach is the greedy method, which decomposes the problem into smaller subproblems, yet it has difficulty in parallelization since each subproblem depends on the outcome from the previous ones. To this end, we introduce a novel greedy-style subset selection algorithm that optimizes batch acquisition directly on the combinatorial space by sequential greedy sampling from the greedy policy, specifically trained to address all greedy subproblems concurrently. Notably, our experiments on the red fluorescent proteins design task show that our proposed method achieves the baseline performance in 1.69x fewer queries, demonstrating its efficiency. | [
"['Deokjae Lee' 'Hyun Oh Song' 'Kyunghyun Cho']"
]
|
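For context on the record above, here is the plain sequential-greedy decomposition that the paper starts from (and then parallelizes by training a greedy policy, which is not shown): each subproblem picks the candidate with the best marginal gain in the batch acquisition score. The acquisition function below is a toy diversity score, not a real acquisition.

```python
def greedy_batch(candidates, acq, batch_size):
    """Sequential greedy selection maximizing a batch acquisition score `acq`."""
    batch = []
    pool = set(candidates)
    for _ in range(batch_size):
        best = max(pool, key=lambda c: acq(batch + [c]))  # marginal-gain argmax
        batch.append(best)
        pool.remove(best)
    return batch

# Toy acquisition: reward batches covering distinct residue classes (diversity).
batch = greedy_batch(range(100), acq=lambda b: len(set(x % 7 for x in b)),
                     batch_size=5)
```

Note the sequential dependence the abstract highlights: each `max` depends on the partially built `batch`, which is what makes naive greedy hard to parallelize.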
null | null | 2406.14878 | null | null | http://arxiv.org/pdf/2406.14878v1 | 2024-06-21T05:58:19Z | 2024-06-21T05:58:19Z | MOS: Model Synergy for Test-Time Adaptation on LiDAR-Based 3D Object
Detection | LiDAR-based 3D object detection is pivotal across many applications, yet the performance of such detection systems often degrades after deployment, especially when faced with unseen test point clouds originating from diverse locations or subjected to corruption. In this work, we introduce a new online adaptation framework for detectors named Model Synergy (MOS). Specifically, MOS dynamically assembles best-fit supermodels for each test batch from a bank of historical checkpoints, leveraging long-term knowledge to guide model updates without forgetting. The model assembly is directed by the proposed synergy weights (SW), employed for weighted averaging of the selected checkpoints to minimize redundancy in the composite supermodel. These weights are calculated by evaluating the similarity of predicted bounding boxes on test data and the feature independence among model pairs in the bank. To maintain an informative yet compact model bank, we pop out checkpoints with the lowest average SW scores and insert newly updated model weights. Our method was rigorously tested against prior test-time domain adaptation strategies on three datasets and under eight types of corruptions, demonstrating its superior adaptability to changing scenes and conditions. Remarkably, our approach achieved a 67.3% increase in performance in a complex "cross-corruption" scenario, which involves cross-dataset inconsistencies and real-world scene corruptions, providing a more realistic testbed of adaptation capabilities. The code is available at https://github.com/zhuoxiao-chen/MOS. | [
"['Zhuoxiao Chen' 'Junjie Meng' 'Mahsa Baktashmotlagh' 'Zi Huang'\n 'Yadan Luo']"
]
|
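The supermodel assembly in the MOS record above amounts to a weighted average of checkpoint parameters. The sketch below shows only that averaging step with normalized weights; how the synergy weights are actually computed from bounding-box similarity and feature independence is omitted, and the linear checkpoints are placeholders.

```python
import torch

def assemble_supermodel(state_dicts, weights):
    """Weighted average of checkpoint state dicts; weights are normalized to sum to 1."""
    weights = torch.tensor(weights) / sum(weights)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

ckpts = [torch.nn.Linear(4, 2).state_dict() for _ in range(3)]   # toy model bank
supermodel_sd = assemble_supermodel(ckpts, weights=[0.5, 0.3, 0.2])
```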
null | null | 2406.14880 | null | null | http://arxiv.org/pdf/2406.14880v1 | 2024-06-21T06:02:58Z | 2024-06-21T06:02:58Z | Pathformer: Recursive Path Query Encoding for Complex Logical Query
Answering | Complex Logical Query Answering (CLQA) over incomplete knowledge graphs is a challenging task. Recently, Query Embedding (QE) methods are proposed to solve CLQA by performing multi-hop logical reasoning. However, most of them only consider historical query context information while ignoring future information, which leads to their failure to capture the complex dependencies behind the elements of a query. In recent years, the transformer architecture has shown a strong ability to model long-range dependencies between words. The bidirectional attention mechanism proposed by the transformer can solve the limitation of these QE methods regarding query context. Still, as a sequence model, it is difficult for the transformer to model complex logical queries with branch structure computation graphs directly. To this end, we propose a neural one-point embedding method called Pathformer based on the tree-like computation graph, i.e., query computation tree. Specifically, Pathformer decomposes the query computation tree into path query sequences by branches and then uses the transformer encoder to recursively encode these path query sequences to obtain the final query embedding. This allows Pathformer to fully utilize future context information to explicitly model the complex interactions between various parts of the path query. Experimental results show that Pathformer outperforms existing competitive neural QE methods, and we found that Pathformer has the potential to be applied to non-one-point embedding space. | [
"['Chongzhi Zhang' 'Zhiping Peng' 'Junhao Zheng' 'Linghao Wang'\n 'Ruifeng Shi' 'Qianli Ma']"
]
|
null | null | 2406.14904 | null | null | http://arxiv.org/pdf/2406.14904v1 | 2024-06-21T06:51:13Z | 2024-06-21T06:51:13Z | Enhancing reliability in prediction intervals using point forecasters:
Heteroscedastic Quantile Regression and Width-Adaptive Conformal Inference | Building prediction intervals for time series forecasting problems presents a complex challenge, particularly when relying solely on point predictors, a common scenario for practitioners in the industry. While research has primarily focused on achieving increasingly efficient valid intervals, we argue that, when evaluating a set of intervals, traditional measures alone are insufficient. There are additional crucial characteristics: the intervals must vary in length, with this variation directly linked to the difficulty of the prediction, and the coverage of the interval must remain independent of the difficulty of the prediction for practical utility. We propose the Heteroscedastic Quantile Regression (HQR) model and the Width-Adaptive Conformal Inference (WACI) method, providing theoretical coverage guarantees, to overcome those issues, respectively. The methodologies are evaluated in the context of Electricity Price Forecasting and Wind Power Forecasting, representing complex scenarios in time series forecasting. The results demonstrate that HQR and WACI not only improve or achieve typical measures of validity and efficiency but also successfully fulfil the aforementioned, commonly ignored characteristics. | [
"['Carlos Sebastián' 'Carlos E. González-Guillén' 'Jesús Juan']"
]
|
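The quantile-regression half of the record above rests on the pinball loss: fitting low and high conditional quantiles around a point forecast yields intervals whose width tracks prediction difficulty. The sketch below shows the loss itself with toy predictions; the HQR architecture and the WACI calibration step are not reproduced.

```python
import torch

def pinball_loss(y, y_hat, q):
    """Quantile (pinball) loss for target quantile level q in (0, 1)."""
    diff = y - y_hat
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

y = torch.randn(64)                                # observed values
y_lo, y_hi = torch.randn(64), torch.randn(64)      # lower/upper quantile predictions
loss = pinball_loss(y, y_lo, q=0.05) + pinball_loss(y, y_hi, q=0.95)
# Minimizing this pair yields a nominal 90% interval [y_lo, y_hi] per time step.
```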