categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.16306 | null | null | http://arxiv.org/pdf/2406.16306v1 | 2024-06-24T04:08:35Z | 2024-06-24T04:08:35Z | Cascade Reward Sampling for Efficient Decoding-Time Alignment | Aligning large language models (LLMs) with human preferences is critical for their deployment. Recently, decoding-time alignment has emerged as an effective plug-and-play technique that requires no fine-tuning of model parameters. However, generating text that achieves both high reward and high likelihood remains a significant challenge. Existing methods often fail to generate high-reward text or incur substantial computational costs. In this paper, we propose Cascade Reward Sampling (CARDS) to address both issues, guaranteeing the generation of high-reward and high-likelihood text at significantly lower cost. Based on our analysis of reward models (RMs) on incomplete text and our observation that high-reward prefixes induce high-reward complete text, we use rejection sampling to iteratively generate small semantic segments to form such prefixes. The segment length is dynamically determined by the predictive uncertainty of LLMs. This strategy guarantees desirable prefixes for subsequent generations and significantly reduces wasteful token re-generations and the number of reward-model scoring calls. Our experiments demonstrate substantial gains in both generation efficiency and alignment ratings compared to the baselines, achieving five times faster text generation and 99% win-ties in GPT-4/Claude-3 helpfulness evaluation. | [
"['Bolian Li' 'Yifan Wang' 'Ananth Grama' 'Ruqi Zhang']"
] |
null | null | 2406.16308 | null | null | http://arxiv.org/pdf/2406.16308v1 | 2024-06-24T04:17:03Z | 2024-06-24T04:17:03Z | Anomaly Detection of Tabular Data Using LLMs | Large language models (LLMs) have shown their potential in long-context understanding and mathematical reasoning. In this paper, we study the problem of using LLMs to detect tabular anomalies and show that pre-trained LLMs are zero-shot batch-level anomaly detectors. That is, without extra distribution-specific model fitting, they can discover hidden outliers in a batch of data, demonstrating their ability to identify low-density data regions. For LLMs that are not well aligned with anomaly detection and frequently output factual errors, we apply simple yet effective data-generating processes to simulate synthetic batch-level anomaly detection datasets and propose an end-to-end fine-tuning strategy to bring out the potential of LLMs in detecting real anomalies. Experiments on a large anomaly detection benchmark (ODDS) showcase i) GPT-4 has on-par performance with the state-of-the-art transductive learning-based anomaly detection methods and ii) the efficacy of our synthetic dataset and fine-tuning strategy in aligning LLMs to this task. | [
"['Aodong Li' 'Yunhan Zhao' 'Chen Qiu' 'Marius Kloft' 'Padhraic Smyth'\n 'Maja Rudolph' 'Stephan Mandt']"
] |
null | null | 2406.16316 | null | null | http://arxiv.org/pdf/2406.16316v1 | 2024-06-24T04:50:12Z | 2024-06-24T04:50:12Z | Does Cross-Cultural Alignment Change the Commonsense Morality of
Language Models? | Alignment of the language model with human preferences is a common approach to making a language model useful to end users. However, most alignment work is done in English, and human preference datasets are dominated by English, reflecting only the preferences of English-speaking annotators. Nevertheless, it is common practice to use the English preference data, either directly or by translating it into the target language, when aligning a multilingual language model. The question is whether such an alignment strategy marginalizes the preference of non-English speaking users. To this end, we investigate the effect of aligning Japanese language models with (mostly) English resources. In particular, we focus on evaluating whether the commonsense morality of the resulting fine-tuned models is aligned with Japanese culture using the JCommonsenseMorality (JCM) and ETHICS datasets. The experimental results show that the fine-tuned model outperforms the SFT model. However, it does not demonstrate the same level of improvement as a model fine-tuned using the JCM, suggesting that while some aspects of commonsense morality are transferable, others may not be. | [
"['Yuu Jinnai']"
] |
null | null | 2406.16321 | null | null | http://arxiv.org/pdf/2406.16321v1 | 2024-06-24T05:14:09Z | 2024-06-24T05:14:09Z | Multimodal Graph Benchmark | Associating unstructured data with structured information is crucial for real-world tasks that require relevance search. However, existing graph learning benchmarks often overlook the rich semantic information associated with each node. To bridge this gap, we introduce the Multimodal Graph Benchmark (MM-GRAPH), the first comprehensive multi-modal graph benchmark that incorporates both textual and visual information. MM-GRAPH surpasses previous efforts, which have primarily focused on text-attributed graphs with various connectivity patterns. MM-GRAPH consists of five graph learning datasets of various scales that are appropriate for different learning tasks. Their multimodal node features enable a more comprehensive evaluation of graph learning algorithms in real-world scenarios. To facilitate research on multimodal graph learning, we further provide an extensive study on the performance of various graph neural networks in the presence of features from various modalities. MM-GRAPH aims to foster research on multimodal graph learning and drive the development of more advanced and robust graph learning algorithms. By providing a diverse set of datasets and benchmarks, MM-GRAPH enables researchers to evaluate and compare their models in realistic settings, ultimately leading to improved performance on real-world applications that rely on multimodal graph data. | [
"['Jing Zhu' 'Yuhang Zhou' 'Shengyi Qian' 'Zhongmou He' 'Tong Zhao'\n 'Neil Shah' 'Danai Koutra']"
] |
null | null | 2406.16322 | null | null | http://arxiv.org/abs/2406.16322v1 | 2024-06-24T05:15:15Z | 2024-06-24T05:15:15Z | Lesion-Aware Cross-Phase Attention Network for Renal Tumor Subtype
Classification on Multi-Phase CT Scans | Multi-phase computed tomography (CT) has been widely used for the preoperative diagnosis of kidney cancer due to its non-invasive nature and ability to characterize renal lesions. However, since enhancement patterns of renal lesions across CT phases are different even for the same lesion type, the visual assessment by radiologists suffers from inter-observer variability in clinical practice. Although deep learning-based approaches have been recently explored for differential diagnosis of kidney cancer, they do not explicitly model the relationships between CT phases in the network design, limiting the diagnostic performance. In this paper, we propose a novel lesion-aware cross-phase attention network (LACPANet) that can effectively capture temporal dependencies of renal lesions across CT phases to accurately classify the lesions into five major pathological subtypes from time-series multi-phase CT images. We introduce a 3D inter-phase lesion-aware attention mechanism to learn effective 3D lesion features that are used to estimate attention weights describing the inter-phase relations of the enhancement patterns. We also present a multi-scale attention scheme to capture and aggregate temporal patterns of lesion features at different spatial scales for further improvement. Extensive experiments on multi-phase CT scans of kidney cancer patients from the collected dataset demonstrate that our LACPANet outperforms state-of-the-art approaches in diagnostic accuracy. | [
"['Kwang-Hyun Uhm' 'Seung-Won Jung' 'Sung-Hoo Hong' 'Sung-Jea Ko']"
] |
null | null | 2406.16349 | null | null | http://arxiv.org/pdf/2406.16349v1 | 2024-06-24T06:44:14Z | 2024-06-24T06:44:14Z | AnnotatedTables: A Large Tabular Dataset with Language Model Annotations | Tabular data is ubiquitous in real-world applications and abundant on the web, yet its annotation has traditionally required human labor, posing a significant scalability bottleneck for tabular machine learning. Our methodology can successfully annotate a large amount of tabular data and can be flexibly steered to generate various types of annotations based on specific research objectives, as we demonstrate with SQL annotation and input-target column annotation as examples. As a result, we release AnnotatedTables, a collection of 32,119 databases with LLM-generated annotations. The dataset includes 405,616 valid SQL programs, making it the largest SQL dataset with associated tabular data that supports query execution. To further demonstrate the value of our methodology and dataset, we perform two follow-up research studies. 1) We investigate whether LLMs can translate SQL programs to Rel programs, a database language previously unknown to LLMs, while obtaining the same execution results. Using our Incremental Prompt Engineering methods based on execution feedback, we show that LLMs can produce adequate translations with few-shot learning. 2) We evaluate the performance of TabPFN, a recent neural tabular classifier trained on Bayesian priors, on 2,720 tables with input-target columns identified and annotated by LLMs. On average, TabPFN performs on par with the baseline AutoML method, though the relative performance can vary significantly from one data table to another, making both models viable for practical applications depending on the situation. Our findings underscore the potential of LLMs in automating the annotation of large volumes of diverse tabular data. | [
"['Yaojie Hu' 'Ilias Fountalis' 'Jin Tian' 'Nikolaos Vasiloglou']"
] |
null | null | 2406.16351 | null | null | http://arxiv.org/pdf/2406.16351v1 | 2024-06-24T06:47:47Z | 2024-06-24T06:47:47Z | METRIK: Measurement-Efficient Randomized Controlled Trials using
Transformers with Input Masking | Clinical randomized controlled trials (RCTs) collect hundreds of measurements spanning various metric types (e.g., laboratory tests, cognitive/motor assessments, etc.) across 100s-1000s of subjects to evaluate the effect of a treatment, but do so at the cost of significant trial expense. To reduce the number of measurements, trial protocols can be revised to remove metrics extraneous to the study's objective, but doing so requires additional human labor and limits the set of hypotheses that can be studied with the collected data. In contrast, a planned missing design (PMD) can reduce the amount of data collected without removing any metric by imputing the unsampled data. Standard PMDs randomly sample data to leverage statistical properties of imputation algorithms, but are ad hoc, hence suboptimal. Methods that learn PMDs produce more sample-efficient PMDs, but are not suitable for RCTs because they require ample prior data (150+ subjects) to model the data distribution. Therefore, we introduce a framework called Measurement EfficienT Randomized Controlled Trials using Transformers with Input MasKing (METRIK), which, for the first time, calculates a PMD specific to the RCT from a modest amount of prior data (e.g., 60 subjects). Specifically, METRIK models the PMD as a learnable input masking layer that is optimized with a state-of-the-art imputer based on the Transformer architecture. METRIK implements a novel sampling and selection algorithm to generate a PMD that satisfies the trial designer's objective, i.e., whether to maximize sampling efficiency or imputation performance for a given sampling budget. Evaluated across five real-world clinical RCT datasets, METRIK increases the sampling efficiency of and imputation performance under the generated PMD by leveraging correlations over time and across metrics, thereby removing the need to manually remove metrics from the RCT. | [
"['Sayeri Lala' 'Niraj K. Jha']"
] |
null | null | 2406.16355 | null | null | http://arxiv.org/pdf/2406.16355v1 | 2024-06-24T06:52:50Z | 2024-06-24T06:52:50Z | Compact Model Parameter Extraction via Derivative-Free Optimization | In this paper, we address the problem of compact model parameter extraction to simultaneously extract tens of parameters via derivative-free optimization. Traditionally, parameter extraction is performed manually by dividing the complete set of parameters into smaller subsets, each targeting different operational regions of the device, a process that can take several days or even weeks. Our approach streamlines this process by employing derivative-free optimization to identify a good parameter set that best fits the compact model without performing an exhaustive number of simulations. We further enhance the optimization process to address critical issues in device modeling by carefully choosing a loss function that evaluates model performance consistently across varying magnitudes by focusing on relative errors (as opposed to absolute errors), prioritizing accuracy in key operational regions of the device above a certain threshold, and reducing sensitivity to outliers. Furthermore, we utilize the concept of train-test split to assess the model fit and avoid overfitting. This is done by fitting 80% of the data and testing the model efficacy with the remaining 20%. We demonstrate the effectiveness of our methodology by successfully modeling two semiconductor devices: a diamond Schottky diode and a GaN-on-SiC HEMT, with the latter involving the ASM-HEMT DC model, which requires simultaneously extracting 35 model parameters to fit the model to the measured data. These examples demonstrate the effectiveness of our approach and showcase the practical benefits of derivative-free optimization in device modeling. | [
"['Rafael Perez Martinez' 'Masaya Iwamoto' 'Kelly Woo' 'Zhengliang Bian'\n 'Roberto Tinti' 'Stephen Boyd' 'Srabanti Chowdhury']"
] |
null | null | 2406.16357 | null | null | http://arxiv.org/abs/2406.16357v1 | 2024-06-24T06:53:37Z | 2024-06-24T06:53:37Z | Towards Lightweight Graph Neural Network Search with Curriculum Graph
Sparsification | Graph Neural Architecture Search (GNAS) has achieved superior performance on various graph-structured tasks. However, existing GNAS studies overlook the applications of GNAS in resource-constraint scenarios. This paper proposes to design a joint graph data and architecture mechanism, which identifies important sub-architectures via the valuable graph data. To search for optimal lightweight Graph Neural Networks (GNNs), we propose a Lightweight Graph Neural Architecture Search with Graph SparsIfication and Network Pruning (GASSIP) method. In particular, GASSIP comprises an operation-pruned architecture search module to enable efficient lightweight GNN search. Meanwhile, we design a novel curriculum graph data sparsification module with an architecture-aware edge-removing difficulty measurement to help select optimal sub-architectures. With the aid of two differentiable masks, we iteratively optimize these two modules to efficiently search for the optimal lightweight architecture. Extensive experiments on five benchmarks demonstrate the effectiveness of GASSIP. Particularly, our method achieves on-par or even higher node classification performance with half or fewer model parameters of searched GNNs and a sparser graph. | [
"['Beini Xie' 'Heng Chang' 'Ziwei Zhang' 'Zeyang Zhang' 'Simin Wu'\n 'Xin Wang' 'Yuan Meng' 'Wenwu Zhu']"
] |
null | null | 2406.16424 | null | null | http://arxiv.org/pdf/2406.16424v1 | 2024-06-24T08:18:19Z | 2024-06-24T08:18:19Z | Memory-Enhanced Neural Solvers for Efficient Adaptation in Combinatorial
Optimization | Combinatorial Optimization is crucial to numerous real-world applications, yet still presents challenges due to its (NP-)hard nature. Amongst existing approaches, heuristics often offer the best trade-off between quality and scalability, making them suitable for industrial use. While Reinforcement Learning (RL) offers a flexible framework for designing heuristics, its adoption over handcrafted heuristics remains incomplete within industrial solvers. Existing learned methods still lack the ability to adapt to specific instances and fully leverage the available computational budget. The current best methods either rely on a collection of pre-trained policies, or on data-inefficient fine-tuning; hence failing to fully utilize newly available information within the constraints of the budget. In response, we present MEMENTO, an RL approach that leverages memory to improve the adaptation of neural solvers at inference time. MEMENTO enables updating the action distribution dynamically based on the outcome of previous decisions. We validate its effectiveness on benchmark problems, in particular Traveling Salesman and Capacitated Vehicle Routing, demonstrating it can successfully be combined with standard methods to boost their performance under a given budget, both in and out-of-distribution, improving their performance on all 12 evaluated tasks. | [
"['Felix Chalumeau' 'Refiloe Shabe' 'Noah de Nicola' 'Arnu Pretorius'\n 'Thomas D. Barrett' 'Nathan Grinsztajn']"
] |
null | null | 2406.16426 | null | null | http://arxiv.org/pdf/2406.16426v2 | 2024-07-08T13:35:12Z | 2024-06-24T08:20:43Z | Fault Detection for agents on power grid topology optimization: A
Comprehensive analysis | The topology optimization of transmission networks using Deep Reinforcement Learning (DRL) has increasingly come into focus. Various researchers have proposed different DRL agents, which are often benchmarked on the Grid2Op environment from the Learning to Run a Power Network (L2RPN) challenges. The environments have many advantages with their realistic chronics and underlying power flow backends. However, the interpretation of agent survival or failure is not always clear, as there are a variety of potential causes. In this work, we focus on the failures of the power grid to identify patterns and detect them a priori. We collect the failed chronics of three different agents on the WCCI 2022 L2RPN environment, totaling about 40k data points. By clustering, we are able to detect five distinct clusters, identifying different failure types. Further, we propose a multi-class prediction approach to detect failures beforehand and evaluate five different models. Here, the Light Gradient-Boosting Machine (LightGBM) shows the best performance, with an accuracy of 86%. It also correctly identifies failure and survival observations 91% of the time. Finally, we provide a detailed feature importance analysis that identifies critical features and regions in the grid. | [
"['Malte Lehna' 'Mohamed Hassouna' 'Dmitry Degtyar' 'Sven Tomforde'\n 'Christoph Scholz']"
] |
null | null | 2406.16437 | null | null | http://arxiv.org/pdf/2406.16437v1 | 2024-06-24T08:29:58Z | 2024-06-24T08:29:58Z | Theory on Mixture-of-Experts in Continual Learning | Continual learning (CL) has garnered significant attention because of its ability to adapt to new tasks that arrive over time. Catastrophic forgetting (of old tasks) has been identified as a major issue in CL, as the model adapts to new tasks. The Mixture-of-Experts (MoE) model has recently been shown to effectively mitigate catastrophic forgetting in CL, by employing a gating network to sparsify and distribute diverse tasks among multiple experts. However, there is a lack of theoretical analysis of MoE and its impact on the learning performance in CL. This paper provides the first theoretical results to characterize the impact of MoE in CL via the lens of overparameterized linear regression tasks. We establish the benefit of MoE over a single expert by proving that the MoE model can diversify its experts to specialize in different tasks, while its router learns to select the right expert for each task and balance the loads across all experts. Our study further suggests an intriguing fact that the MoE in CL needs to terminate the update of the gating network after sufficient training rounds to attain system convergence, which is not needed in the existing MoE studies that do not consider the continual task arrival. Furthermore, we provide explicit expressions for the expected forgetting and overall generalization error to characterize the benefit of MoE in the learning performance in CL. Interestingly, adding more experts requires additional rounds before convergence, which may not enhance the learning performance. Finally, we conduct experiments on both synthetic and real datasets to extend these insights from linear models to deep neural networks (DNNs), which also shed light on the practical algorithm design for MoE in CL. | [
"['Hongbo Li' 'Sen Lin' 'Lingjie Duan' 'Yingbin Liang' 'Ness B. Shroff']"
] |
null | null | 2406.16456 | null | null | http://arxiv.org/pdf/2406.16456v1 | 2024-06-24T08:53:45Z | 2024-06-24T08:53:45Z | Automated Privacy-Preserving Techniques via Meta-Learning | Sharing private data for learning tasks is pivotal for transparent and secure machine learning applications. Many privacy-preserving techniques have been proposed for this task aiming to transform the data while ensuring the privacy of individuals. Some of these techniques have been incorporated into tools, whereas others are accessed through various online platforms. However, such tools require manual configuration, which can be complex and time-consuming. Moreover, they require substantial expertise, potentially restricting their use to those with advanced technical knowledge. In this paper, we propose AUTOPRIV, the first automated privacy-preservation method, that eliminates the need for any manual configuration. AUTOPRIV employs meta-learning to automate the de-identification process, facilitating the secure release of data for machine learning tasks. The main goal is to anticipate the predictive performance and privacy risk of a large set of privacy configurations. We provide a ranked list of the most promising solutions, which are likely to achieve an optimal approximation within a new domain. AUTOPRIV is highly effective as it reduces computational complexity and energy consumption considerably. | [
"['Tânia Carvalho' 'Nuno Moniz' 'Luís Antunes']"
] |
null | null | 2406.16466 | null | null | http://arxiv.org/pdf/2406.16466v1 | 2024-06-24T09:16:17Z | 2024-06-24T09:16:17Z | SLOctolyzer: Fully automatic analysis toolkit for segmentation and
feature extracting in scanning laser ophthalmoscopy images | Purpose: To describe SLOctolyzer: an open-source analysis toolkit for en face retinal vessels appearing in infrared reflectance scanning laser ophthalmoscopy (SLO) images. Methods: SLOctolyzer includes two main modules: segmentation and measurement. The segmentation module uses deep learning methods to delineate retinal anatomy, while the measurement module quantifies key retinal vascular features such as vessel complexity, density, tortuosity, and calibre. We evaluate the segmentation module using unseen data and measure its reproducibility. Results: SLOctolyzer's segmentation module performed well against unseen internal test data (Dice for all-vessels, 0.9097; arteries, 0.8376; veins, 0.8525; optic disc, 0.9430; fovea, 0.8837). External validation against severe retinal pathology showed decreased performance (Dice for arteries, 0.7180; veins, 0.7470; optic disc, 0.9032). SLOctolyzer had good reproducibility (mean difference for fractal dimension, -0.0007; vessel density, -0.0003; vessel calibre, -0.3154 $\mu$m; tortuosity density, 0.0013). SLOctolyzer can process a macula-centred SLO image in under 20 seconds and a disc-centred SLO image in under 30 seconds using a standard laptop CPU. Conclusions: To our knowledge, SLOctolyzer is the first open-source tool to convert raw SLO images into reproducible and clinically meaningful retinal vascular parameters. SLO images are captured simultaneously with optical coherence tomography (OCT), and we believe our software will be useful for extracting retinal vascular measurements from large OCT image sets and linking them to ocular or systemic diseases. It requires no specialist knowledge or proprietary software, and allows manual correction of segmentations and re-computing of vascular metrics. SLOctolyzer is freely available at https://github.com/jaburke166/SLOctolyzer. | [
"['Jamie Burke' 'Samuel Gibbon' 'Justin Engelmann' 'Adam Threlfall'\n 'Ylenia Giarratano' 'Charlene Hamid' 'Stuart King' 'Ian J. C. MacCormick'\n 'Tom MacGillivray']"
] |
null | null | 2406.16468 | null | null | http://arxiv.org/pdf/2406.16468v1 | 2024-06-24T09:16:59Z | 2024-06-24T09:16:59Z | The Hidden Pitfalls of the Cosine Similarity Loss | We show that the gradient of the cosine similarity between two points goes to zero in two under-explored settings: (1) if a point has large magnitude or (2) if the points are on opposite ends of the latent space. Counterintuitively, we prove that optimizing the cosine similarity between points forces them to grow in magnitude. Thus, (1) is unavoidable in practice. We then observe that these derivations are extremely general -- they hold across deep learning architectures and for many of the standard self-supervised learning (SSL) loss functions. This leads us to propose cut-initialization: a simple change to network initialization that helps all studied SSL methods converge faster. | [
"['Andrew Draganov' 'Sharvaree Vadgama' 'Erik J. Bekkers']"
] |
null | null | 2406.16481 | null | null | http://arxiv.org/pdf/2406.16481v1 | 2024-06-24T09:36:58Z | 2024-06-24T09:36:58Z | Improving Quaternion Neural Networks with Quaternionic Activation
Functions | In this paper, we propose novel quaternion activation functions where we modify either the quaternion magnitude or the phase, as an alternative to the commonly used split activation functions. We define criteria that are relevant for quaternion activation functions, and subsequently we propose our novel activation functions based on this analysis. Instead of applying a known activation function like the ReLU or Tanh on the quaternion elements separately, these activation functions consider the quaternion properties and respect the quaternion space $\mathbb{H}$. In particular, all quaternion components are utilized to calculate all output components, carrying out the benefit of the Hamilton product in e.g. the quaternion convolution to the activation functions. The proposed activation functions can be incorporated in arbitrary quaternion valued neural networks trained with gradient descent techniques. We further discuss the derivatives of the proposed activation functions where we observe beneficial properties for the activation functions affecting the phase. Specifically, they prove to be sensitive on basically the whole input range, thus improved gradient flow can be expected. We provide an elaborate experimental evaluation of our proposed quaternion activation functions including comparison with the split ReLU and split Tanh on two image classification tasks using the CIFAR-10 and SVHN dataset. There, especially the quaternion activation functions affecting the phase consistently prove to provide better performance. | [
"['Johannes Pöppelbaum' 'Andreas Schwung']"
] |
null | null | 2406.16484 | null | null | http://arxiv.org/pdf/2406.16484v1 | 2024-06-24T09:39:30Z | 2024-06-24T09:39:30Z | Robust prediction under missingness shifts | Prediction becomes more challenging with missing covariates. What method is chosen to handle missingness can greatly affect how models perform. In many real-world problems, the best prediction performance is achieved by models that can leverage the informative nature of a value being missing. Yet, the reasons why a covariate goes missing can change once a model is deployed in practice. If such a missingness shift occurs, the conditional probability of a value being missing differs in the target data. Prediction performance in the source data may no longer be a good selection criterion, and approaches that do not rely on informative missingness may be preferable. However, we show that the Bayes predictor remains unchanged by ignorable shifts for which the probability of missingness only depends on observed data. Any consistent estimator of the Bayes predictor may therefore result in robust prediction under those conditions, although we show empirically that different methods appear robust to different types of shifts. If the missingness shift is non-ignorable, the Bayes predictor may change due to the shift. While neither approach recovers the Bayes predictor in this case, we found empirically that disregarding missingness was most beneficial when it was highly informative. | [
"['Patrick Rockenschaub' 'Zhicong Xian' 'Alireza Zamanian' 'Marta Piperno'\n 'Octavia-Andreea Ciora' 'Elisabeth Pachl' 'Narges Ahmidi']"
] |
null | null | 2406.16501 | null | null | http://arxiv.org/pdf/2406.16501v1 | 2024-06-24T10:10:03Z | 2024-06-24T10:10:03Z | UNICAD: A Unified Approach for Attack Detection, Noise Reduction and
Novel Class Identification | As the use of Deep Neural Networks (DNNs) becomes pervasive, their vulnerability to adversarial attacks and limitations in handling unseen classes poses significant challenges. The state-of-the-art offers discrete solutions aimed to tackle individual issues covering specific adversarial attack scenarios, classification or evolving learning. However, real-world systems need to be able to detect and recover from a wide range of adversarial attacks without sacrificing classification accuracy and to flexibly act in {\bf unseen} scenarios. In this paper, UNICAD is proposed as a novel framework that integrates a variety of techniques to provide an adaptive solution. For the targeted image classification, UNICAD achieves accurate image classification, detects unseen classes, and recovers from adversarial attacks using Prototype and Similarity-based DNNs with denoising autoencoders. Our experiments performed on the CIFAR-10 dataset highlight UNICAD's effectiveness in adversarial mitigation and unseen class classification, outperforming traditional models. | [
"['Alvaro Lopez Pellicer' 'Kittipos Giatgong' 'Yi Li' 'Neeraj Suri'\n 'Plamen Angelov']"
] |
null | null | 2406.16525 | null | null | http://arxiv.org/pdf/2406.16525v1 | 2024-06-24T11:01:43Z | 2024-06-24T11:01:43Z | OAML: Outlier Aware Metric Learning for OOD Detection Enhancement | Out-of-distribution (OOD) detection methods have been developed to identify objects that a model has not seen during training. The Outlier Exposure (OE) methods use auxiliary datasets to train OOD detectors directly. However, the collection and learning of representative OOD samples may pose challenges. To tackle these issues, we propose the Outlier Aware Metric Learning (OAML) framework. The main idea of our method is to use the k-NN algorithm and Stable Diffusion model to generate outliers for training at the feature level without making any distributional assumptions. To increase feature discrepancies in the semantic space, we develop a mutual information-based contrastive learning approach for learning from OOD data effectively. Both theoretical and empirical results confirm the effectiveness of this contrastive learning technique. Furthermore, we incorporate knowledge distillation into our learning framework to prevent degradation of in-distribution classification accuracy. The combination of contrastive learning and knowledge distillation algorithms significantly enhances the performance of OOD detection. Experimental results across various datasets show that our method significantly outperforms previous OE methods. | [
"['Heng Gao' 'Zhuolin He' 'Shoumeng Qiu' 'Jian Pu']"
] |
null | null | 2406.16527 | null | null | http://arxiv.org/pdf/2406.16527v1 | 2024-06-24T11:04:43Z | 2024-06-24T11:04:43Z | SyROCCo: Enhancing Systematic Reviews using Machine Learning | The sheer number of research outputs published every year makes systematic reviewing increasingly time- and resource-intensive. This paper explores the use of machine learning techniques to help navigate the systematic review process. ML has previously been used to reliably 'screen' articles for review - that is, identify relevant articles based on reviewers' inclusion criteria. The application of ML techniques to subsequent stages of a review, however, such as data extraction and evidence mapping, is in its infancy. We therefore set out to develop a series of tools that would assist in the profiling and analysis of 1,952 publications on the theme of 'outcomes-based contracting'. Tools were developed for the following tasks: assign publications into 'policy area' categories; identify and extract key information for evidence mapping, such as organisations, laws, and geographical information; connect the evidence base to an existing dataset on the same topic; and identify subgroups of articles that may share thematic content. An interactive tool using these techniques and a public dataset with their outputs have been released. Our results demonstrate the utility of ML techniques to enhance evidence accessibility and analysis within the systematic review processes. These efforts show promise in potentially yielding substantial efficiencies for future systematic reviewing and for broadening their analytical scope. Our work suggests that there may be implications for the ease with which policymakers and practitioners can access evidence. While ML techniques seem poised to play a significant role in bridging the gap between research and policy by offering innovative ways of gathering, accessing, and analysing data from systematic reviews, we also highlight their current limitations and the need to exercise caution in their application, particularly given the potential for errors and biases. | [
"['Zheng Fang' 'Miguel Arana-Catania' 'Felix-Anselm van Lier'\n 'Juliana Outes Velarde' 'Harry Bregazzi' 'Mara Airoldi' 'Eleanor Carter'\n 'Rob Procter']"
] |
null | null | 2406.16530 | null | null | http://arxiv.org/pdf/2406.16530v1 | 2024-06-24T11:09:08Z | 2024-06-24T11:09:08Z | Conditional Bayesian Quadrature | We propose a novel approach for estimating conditional or parametric expectations in the setting where obtaining samples or evaluating integrands is costly. Through the framework of probabilistic numerical methods (such as Bayesian quadrature), our novel approach allows us to incorporate prior information about the integrands, especially prior smoothness knowledge about the integrands and the conditional expectation. As a result, our approach provides a way of quantifying uncertainty and leads to a fast convergence rate, which is confirmed both theoretically and empirically on challenging tasks in Bayesian sensitivity analysis, computational finance and decision making under uncertainty. | [
"['Zonghao Chen' 'Masha Naslidnyk' 'Arthur Gretton' 'François-Xavier Briol']"
] |
null | null | 2406.16535 | null | null | http://arxiv.org/pdf/2406.16535v1 | 2024-06-24T11:16:26Z | 2024-06-24T11:16:26Z | Token-based Decision Criteria Are Suboptimal in In-context Learning | In-Context Learning (ICL) typically utilizes classification criteria from probabilities of manually selected label tokens. However, we argue that such token-based classification criteria lead to suboptimal decision boundaries, despite delicate calibrations through translation and constrained rotation. To address this problem, we propose Hidden Calibration, which renounces token probabilities and uses the nearest centroid classifier on the LM's last hidden states. In detail, we use the nearest centroid classification on the hidden states, assigning the category of the nearest centroid previously observed from a few-shot calibration set to the test sample as the predicted label. Our experiments on 3 models and 10 classification datasets indicate that Hidden Calibration consistently outperforms current token-based calibrations by about 20%. Our further analysis demonstrates that Hidden Calibration finds better classification criteria with less inter-categories overlap, and LMs provide linearly separable intra-category clusters with the help of demonstrations, which supports Hidden Calibration and gives new insights into the conventional ICL. | [
"['Hakaze Cho' 'Yoshihiro Sakai' 'Mariko Kato' 'Kenshiro Tanaka'\n 'Akira Ishii' 'Naoya Inoue']"
] |
null | null | 2406.16540 | null | null | http://arxiv.org/pdf/2406.16540v1 | 2024-06-24T11:20:44Z | 2024-06-24T11:20:44Z | Improving robustness to corruptions with multiplicative weight
perturbations | Deep neural networks (DNNs) excel on clean images but struggle with corrupted ones. Incorporating specific corruptions into the data augmentation pipeline can improve robustness to those corruptions but may harm performance on clean images and other types of distortion. In this paper, we introduce an alternative approach that improves the robustness of DNNs to a wide range of corruptions without compromising accuracy on clean images. We first demonstrate that input perturbations can be mimicked by multiplicative perturbations in the weight space. Leveraging this, we propose Data Augmentation via Multiplicative Perturbation (DAMP), a training method that optimizes DNNs under random multiplicative weight perturbations. We also examine the recently proposed Adaptive Sharpness-Aware Minimization (ASAM) and show that it optimizes DNNs under adversarial multiplicative weight perturbations. Experiments on image classification datasets (CIFAR-10/100, TinyImageNet and ImageNet) and neural network architectures (ResNet50, ViT-S/16) show that DAMP enhances model generalization performance in the presence of corruptions across different settings. Notably, DAMP is able to train a ViT-S/16 on ImageNet from scratch, reaching the top-1 error of 23.7% which is comparable to ResNet50 without extensive data augmentations. | [
"['Trung Trinh' 'Markus Heinonen' 'Luigi Acerbi' 'Samuel Kaski']"
] |
null | null | 2406.16552 | null | null | http://arxiv.org/pdf/2406.16552v1 | 2024-06-24T11:41:12Z | 2024-06-24T11:41:12Z | Inference of Sequential Patterns for Neural Message Passing in Temporal
Graphs | The modelling of temporal patterns in dynamic graphs is an important current research issue in the development of time-aware GNNs. Whether or not a specific sequence of events in a temporal graph constitutes a temporal pattern not only depends on the frequency of its occurrence, but also on whether it deviates from what is expected in a temporal graph where timestamps are randomly shuffled. While accounting for such a random baseline is important to model temporal patterns, it has mostly been ignored by current temporal graph neural networks. To address this issue we propose HYPA-DBGNN, a novel two-step approach that combines (i) the inference of anomalous sequential patterns in time series data on graphs based on a statistically principled null model, with (ii) a neural message passing approach that utilizes a higher-order De Bruijn graph whose edges capture overrepresented sequential patterns. Our method leverages hypergeometric graph ensembles to identify anomalous edges within both first- and higher-order De Bruijn graphs, which encode the temporal ordering of events. The model introduces an inductive bias that enhances model interpretability. We evaluate our approach for static node classification using benchmark datasets and a synthetic dataset that showcases its ability to incorporate the observed inductive bias regarding over- and under-represented temporal edges. We demonstrate the framework's effectiveness in detecting similar patterns within empirical datasets, resulting in superior performance compared to baseline methods in node classification tasks. To the best of our knowledge, our work is the first to introduce statistically informed GNNs that leverage temporal and causal sequence anomalies. HYPA-DBGNN represents a path for bridging the gap between statistical graph inference and neural graph representation learning, with potential applications to static GNNs. | [
"['Jan von Pichowski' 'Vincenzo Perri' 'Lisi Qarkaxhija' 'Ingo Scholtes']"
] |
null | null | 2406.16557 | null | null | http://arxiv.org/pdf/2406.16557v1 | 2024-06-24T11:50:31Z | 2024-06-24T11:50:31Z | Efficient k-means with Individual Fairness via Exponential Tilting | In location-based resource allocation scenarios, the distances between each individual and the facility are desired to be approximately equal, thereby ensuring fairness. Individually fair clustering is often employed to achieve the principle of treating all points equally, which can be applied in these scenarios. This paper proposes a novel algorithm, tilted k-means (TKM), aiming to achieve individual fairness in clustering. We integrate the exponential tilting into the sum of squared errors (SSE) to formulate a novel objective function called tilted SSE. We demonstrate that the tilted SSE can generalize to SSE and employ the coordinate descent and first-order gradient method for optimization. We propose a novel fairness metric, the variance of the distances within each cluster, which can alleviate the Matthew Effect typically caused by existing fairness metrics. Our theoretical analysis demonstrates that the well-known k-means++ incurs a multiplicative error of O(k log k), and we establish the convergence of TKM under mild conditions. In terms of fairness, we prove that the variance generated by TKM decreases with a scaled hyperparameter. In terms of efficiency, we demonstrate the time complexity is linear with the dataset size. Our experiments demonstrate that TKM outperforms state-of-the-art methods in effectiveness, fairness, and efficiency. | [
"['Shengkun Zhu' 'Jinshan Zeng' 'Yuan Sun' 'Sheng Wang' 'Xiaodong Li'\n 'Zhiyong Peng']"
] |
null | null | 2406.16565 | null | null | http://arxiv.org/pdf/2406.16565v1 | 2024-06-24T12:02:20Z | 2024-06-24T12:02:20Z | Noisy Neighbors: Efficient membership inference attacks against LLMs | The potential of transformer-based LLMs risks being hindered by privacy concerns due to their reliance on extensive datasets, possibly including sensitive information. Regulatory measures like GDPR and CCPA call for using robust auditing tools to address potential privacy issues, with Membership Inference Attacks (MIA) being the primary method for assessing LLMs' privacy risks. Differently from traditional MIA approaches, often requiring computationally intensive training of additional models, this paper introduces an efficient methodology that generates \textit{noisy neighbors} for a target sample by adding stochastic noise in the embedding space, requiring operating the target model in inference mode only. Our findings demonstrate that this approach closely matches the effectiveness of employing shadow models, showing its usability in practical privacy auditing scenarios. | [
"['Filippo Galli' 'Luca Melis' 'Tommaso Cucinotta']"
] |
null | null | 2406.16571 | null | null | http://arxiv.org/pdf/2406.16571v1 | 2024-06-24T12:09:19Z | 2024-06-24T12:09:19Z | Differentiable Distributionally Robust Optimization Layers | In recent years, there has been a growing research interest in decision-focused learning, which embeds optimization problems as a layer in learning pipelines and demonstrates superior performance compared to the prediction-focused approach. However, for distributionally robust optimization (DRO), a popular paradigm for decision-making under uncertainty, it is still unknown how to embed it as a layer, i.e., how to differentiate decisions with respect to an ambiguity set. In this paper, we develop such differentiable DRO layers for generic mixed-integer DRO problems with parameterized second-order conic ambiguity sets and discuss its extension to Wasserstein ambiguity sets. To differentiate the mixed-integer decisions, we propose a novel dual-view methodology by handling continuous and discrete parts of decisions via different principles. Specifically, we construct a differentiable energy-based surrogate to implement the dual-view methodology and use importance sampling to estimate its gradient. We further prove that such a surrogate enjoys asymptotic convergence under regularization. As an application of the proposed differentiable DRO layers, we develop a novel decision-focused learning pipeline for contextual distributionally robust decision-making tasks and compare it with the prediction-focused approach in experiments. | [
"['Xutao Ma' 'Chao Ning' 'Wenli Du']"
] |
null | null | 2406.16583 | null | null | http://arxiv.org/pdf/2406.16583v1 | 2024-06-24T12:16:51Z | 2024-06-24T12:16:51Z | Personalized federated learning based on feature fusion | Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy. However, due to the heterogeneity of data, models, and devices, the final global model may not perform well on each client's tasks. Communication bottlenecks, data heterogeneity, and model heterogeneity have been common challenges in federated learning. In this work, we considered a label distribution skew problem, a type of data heterogeneity easily overlooked. In the context of classification, we propose a personalized federated learning approach called pFedPM. In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models. These feature representations play a role in preserving privacy to some extent. We use a hyperparameter $a$ to mix local and global features, which enables us to control the degree of personalization. We also introduced a relation network as an additional decision layer, which provides a non-linear learnable classifier to predict labels. Experimental results show that, with an appropriate setting of $a$, our scheme outperforms several recent FL methods on the MNIST, FEMNIST, and CIFAR-10 datasets and requires less communication. | [
"['Wolong Xing' 'Zhenkui Shi' 'Hongyan Peng' 'Xiantao Hu' 'Xianxian Li']"
] |
null | null | 2406.16590 | null | null | http://arxiv.org/pdf/2406.16590v1 | 2024-06-24T12:28:22Z | 2024-06-24T12:28:22Z | Forecasting with Deep Learning: Beyond Average of Average of Average
Performance | Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. We hypothesize that averaging performance over all samples dilutes relevant information about the relative performance of models, particularly under conditions in which this relative performance differs from the overall accuracy. We address this limitation by proposing a novel framework for evaluating univariate time series forecasting models from multiple perspectives, such as one-step ahead forecasting versus multi-step ahead forecasting. We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques. While classical methods (e.g. ARIMA) are long-standing approaches to forecasting, deep neural networks (e.g. NHITS) have recently shown state-of-the-art forecasting performance in benchmark datasets. We conducted extensive experiments that show NHITS generally performs best, but its superiority varies with forecasting conditions. For instance, concerning the forecasting horizon, NHITS only outperforms classical approaches for multi-step ahead forecasting. Another relevant insight is that, when dealing with anomalies, NHITS is outperformed by methods such as Theta. These findings highlight the importance of aspect-based model evaluation. | [
"['Vitor Cerqueira' 'Luis Roque' 'Carlos Soares']"
] |
null | null | 2406.16593 | null | null | http://arxiv.org/pdf/2406.16593v1 | 2024-06-24T12:33:56Z | 2024-06-24T12:33:56Z | Measuring the Recyclability of Electronic Components to Assist Automatic
Disassembly and Sorting Waste Printed Circuit Boards | The waste of electrical and electronic equipment has increased due to the fast evolution of technology products and competition among many IT sectors. Every year millions of tons of electronic waste are thrown into the environment, with serious consequences for human health. Therefore, it is crucial to control this waste flow using technology, especially Artificial Intelligence, but also through the reclamation of critical raw materials for new production processes. In this paper, we focused on the measurement of recyclability of waste electronic components (WECs) from waste printed circuit boards (WPCBs) using a mathematical innovation model. This innovative approach evaluates both the recyclability and recycling difficulties of WECs, integrating an AI model for improved disassembly and sorting. Assessing the recyclability of individual electronic components present on WPCBs provides insight into the recovery potential of valuable materials and indicates the level of complexity involved in recycling in terms of economic worth and production utility. This novel measurement approach helps AI models in accurately determining the number of classes to be identified and sorted during the automated disassembly of discarded PCBs. It also facilitates the model in iterative training and validation of individual electronic components. | [
"['Muhammad Mohsin' 'Xianlai Zeng' 'Stefano Rovetta' 'Francesco Masulli']"
] |
null | null | 2406.16605 | null | null | http://arxiv.org/pdf/2406.16605v1 | 2024-06-24T12:46:15Z | 2024-06-24T12:46:15Z | CLEAR: Can Language Models Really Understand Causal Graphs? | Causal reasoning is a cornerstone of how humans interpret the world. To model and reason about causality, causal graphs offer a concise yet effective solution. Given the impressive advancements in language models, a crucial question arises: can they really understand causal graphs? To this end, we pioneer an investigation into language models' understanding of causal graphs. Specifically, we develop a framework to define causal graph understanding, by assessing language models' behaviors through four practical criteria derived from diverse disciplines (e.g., philosophy and psychology). We then develop CLEAR, a novel benchmark that defines three complexity levels and encompasses 20 causal graph-based tasks across these levels. Finally, based on our framework and benchmark, we conduct extensive experiments on six leading language models and summarize five empirical findings. Our results indicate that while language models demonstrate a preliminary understanding of causal graphs, significant potential for improvement remains. Our project website is at https://github.com/OpenCausaLab/CLEAR. | [
"['Sirui Chen' 'Mengying Xu' 'Kun Wang' 'Xingyu Zeng' 'Rui Zhao'\n 'Shengjie Zhao' 'Chaochao Lu']"
] |
null | null | 2406.16606 | null | null | http://arxiv.org/pdf/2406.16606v1 | 2024-06-24T12:46:16Z | 2024-06-24T12:46:16Z | Cherry on the Cake: Fairness is NOT an Optimization Problem | Fair cake-cutting is a mathematical subfield that studies the problem of fairly dividing a resource among a number of participants. The so-called ``cake,'' as an object, represents any resource that can be distributed among players. This concept is connected to supervised multi-label classification: any dataset can be thought of as a cake that needs to be distributed, where each label is a player that receives its share of the dataset. In particular, any efficient cake-cutting solution for the dataset is equivalent to an optimal decision function. Although we are not the first to demonstrate this connection, the important ramifications of this parallel seem to have been partially forgotten. We revisit these classical results and demonstrate how this connection can be prolifically used for fairness in machine learning problems. Understanding the set of achievable fair decisions is a fundamental step in finding optimal fair solutions and satisfying fairness requirements. By employing the tools of cake-cutting theory, we have been able to describe the behavior of optimal fair decisions, which, counterintuitively, often exhibit quite unfair properties. Specifically, in order to satisfy fairness constraints, it is sometimes preferable, in the name of optimality, to purposefully make mistakes and deny giving the positive label to deserving individuals in a community in favor of less worthy individuals within the same community. This practice is known in the literature as cherry-picking and has been described as ``blatantly unfair.'' | [
"['Marco Favier' 'Toon Calders']"
] |
null | null | 2406.16608 | null | null | http://arxiv.org/abs/2406.16608v1 | 2024-06-24T12:47:21Z | 2024-06-24T12:47:21Z | When Invariant Representation Learning Meets Label Shift: Insufficiency
and Theoretical Insights | As a crucial step toward real-world learning scenarios with changing environments, dataset shift theory and invariant representation learning algorithms have been extensively studied to relax the identical distribution assumption in the classical learning setting. Among the different assumptions on the essence of shifting distributions, generalized label shift (GLS) is the most recently developed one, which shows great potential to deal with the complex factors within the shift. In this paper, we aim to explore the limitations of current dataset shift theory and algorithms, and further provide new insights by presenting a comprehensive understanding of GLS. From the theoretical aspect, two informative generalization bounds are derived, and the GLS learner is proved to be sufficiently close to the optimal target model from the Bayesian perspective. The main results show the insufficiency of invariant representation learning, and prove the sufficiency and necessity of GLS correction for generalization, which provide theoretical support and innovations for exploring generalizable models under dataset shift. From the methodological aspect, we provide a unified view of existing shift correction frameworks, and propose a kernel embedding-based correction algorithm (KECA) to minimize the generalization error and achieve successful knowledge transfer. Both theoretical results and extensive experimental evaluations demonstrate the sufficiency and necessity of GLS correction for addressing dataset shift and the superiority of the proposed algorithm. | [
"['You-Wei Luo' 'Chuan-Xian Ren']"
] |
null | null | 2406.16609 | null | null | http://arxiv.org/pdf/2406.16609v1 | 2024-06-24T12:48:44Z | 2024-06-24T12:48:44Z | Evaluating the Robustness of Deep-Learning Algorithm-Selection Models by
Evolving Adversarial Instances | Deep neural networks (DNN) are increasingly being used to perform algorithm-selection in combinatorial optimisation domains, particularly as they accommodate input representations which avoid designing and calculating features. Mounting evidence from domains that use images as input shows that deep convolutional networks are vulnerable to adversarial samples, in which a small perturbation of an instance can cause the DNN to misclassify. However, it remains unknown as to whether deep recurrent networks (DRN) which have recently been shown promise as algorithm-selectors in the bin-packing domain are equally vulnerable. We use an evolutionary algorithm (EA) to find perturbations of instances from two existing benchmarks for online bin packing that cause trained DRNs to misclassify: adversarial samples are successfully generated from up to 56% of the original instances depending on the dataset. Analysis of the new misclassified instances sheds light on the `fragility' of some training instances, i.e. instances where it is trivial to find a small perturbation that results in a misclassification and the factors that influence this. Finally, the method generates a large number of new instances misclassified with a wide variation in confidence, providing a rich new source of training data to create more robust models. | [
"['Emma Hart' 'Quentin Renau' 'Kevin Sim' 'Mohamad Alissa']"
] |
null | null | 2406.16619 | null | null | http://arxiv.org/pdf/2406.16619v1 | 2024-06-24T13:02:36Z | 2024-06-24T13:02:36Z | No More Sliding-Windows: Dynamic Functional Connectivity Based On Random
Convolutions Without Learning | In the field of dynamic functional connectivity, the sliding-window method is widely used and its stability is generally recognized. However, the sliding-window method's data processing within the window is overly simplistic, which to some extent limits its effectiveness. This study proposes a feature expansion method based on random convolution, which achieves better and more noise-resistant results than the sliding-window method without requiring training. Experiments on simulated data show that the dynamic functional connectivity matrix and time series obtained using the random convolution method have a higher degree of fit (95.59%) with the standard answers within shorter time windows, compared to the sliding-window method (45.99%). Gender difference studies on real data also reveal that the random convolution method uncovers more gender differences than the sliding-window method. Through theoretical analysis, we propose a more comprehensive convolutional functional connectivity computation model, with the sliding-window method being a special case of this model, thereby opening up vast potential for research methods in dynamic functional connectivity. | [
"['Yongjie Duan' 'Zhiying Long']"
] |
null | null | 2406.16635 | null | null | http://arxiv.org/pdf/2406.16635v1 | 2024-06-24T13:41:08Z | 2024-06-24T13:41:08Z | ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models | The high power consumption and latency-sensitive deployments of large language models (LLMs) have motivated techniques like quantization and sparsity. Contextual sparsity, where the sparsity pattern is input-dependent, is crucial in LLMs because the permanent removal of attention heads or neurons from LLMs can significantly degrade accuracy. Prior work has attempted to model contextual sparsity using neural networks trained to predict activation magnitudes, which can be used to dynamically prune structures with low predicted activation magnitude. In this paper, we look beyond magnitude-based pruning criteria to assess attention head and neuron importance in LLMs. We developed a novel predictor called ShadowLLM, which can shadow the LLM behavior and enforce better sparsity patterns, resulting in over 15% improvement in end-to-end accuracy without increasing latency compared to previous methods. ShadowLLM achieves up to a 20% speed-up over the state-of-the-art DejaVu framework. These enhancements are validated on models with up to 30 billion parameters. Our code is available at \href{https://github.com/abdelfattah-lab/shadow_llm/}{ShadowLLM}. | [
"['Yash Akhauri' 'Ahmed F AbouElhamayed' 'Jordan Dotzel' 'Zhiru Zhang'\n 'Alexander M Rush' 'Safeen Huda' 'Mohamed S Abdelfattah']"
] |
null | null | 2406.16659 | null | null | http://arxiv.org/pdf/2406.16659v1 | 2024-06-24T14:09:45Z | 2024-06-24T14:09:45Z | Data-driven Modeling in Metrology -- A Short Introduction, Current
Developments and Future Perspectives | Mathematical models are vital to the field of metrology, playing a key role in the derivation of measurement results and the calculation of uncertainties from measurement data, informed by an understanding of the measurement process. These models generally represent the correlation between the quantity being measured and all other pertinent quantities. Such relationships are used to construct measurement systems that can interpret measurement data to generate conclusions and predictions about the measurement system itself. Classic models are typically analytical, built on fundamental physical principles. However, the rise of digital technology, expansive sensor networks, and high-performance computing hardware have led to a growing shift towards data-driven methodologies. This trend is especially prominent when dealing with large, intricate networked sensor systems in situations where there is limited expert understanding of the frequently changing real-world contexts. Here, we demonstrate the variety of opportunities that data-driven modeling presents, and how they have been already implemented in various real-world applications. | [
"['Linda-Sophie Schneider' 'Patrick Krauss' 'Nadine Schiering'\n 'Christopher Syben' 'Richard Schielein' 'Andreas Maier']"
] |
null | null | 2406.16666 | null | null | http://arxiv.org/pdf/2406.16666v1 | 2024-06-24T14:20:02Z | 2024-06-24T14:20:02Z | Cubic regularized subspace Newton for non-convex optimization | This paper addresses the optimization problem of minimizing non-convex continuous functions, which is relevant in the context of high-dimensional machine learning applications characterized by over-parametrization. We analyze a randomized coordinate second-order method named SSCN which can be interpreted as applying cubic regularization in random subspaces. This approach effectively reduces the computational complexity associated with utilizing second-order information, rendering it applicable in higher-dimensional scenarios. Theoretically, we establish convergence guarantees for non-convex functions, with interpolating rates for arbitrary subspace sizes and allowing inexact curvature estimation. When increasing subspace size, our complexity matches $\mathcal{O}(\epsilon^{-3/2})$ of the cubic regularization (CR) rate. Additionally, we propose an adaptive sampling scheme ensuring exact convergence rate of $\mathcal{O}(\epsilon^{-3/2}, \epsilon^{-3})$ to a second-order stationary point, even without sampling all coordinates. Experimental results demonstrate substantial speed-ups achieved by SSCN compared to conventional first-order methods. | [
"['Jim Zhao' 'Aurelien Lucchi' 'Nikita Doikov']"
] |
null | null | 2406.16678 | null | null | http://arxiv.org/pdf/2406.16678v1 | 2024-06-24T14:36:11Z | 2024-06-24T14:36:11Z | Segment Any Text: A Universal Approach for Robust, Efficient and
Adaptable Sentence Segmentation | Segmenting text into sentences plays an early and crucial role in many NLP systems. This is commonly achieved by using rule-based or statistical methods relying on lexical features such as punctuation. Although some recent works no longer exclusively rely on punctuation, we find that no prior method achieves all of (i) robustness to missing punctuation, (ii) effective adaptability to new domains, and (iii) high efficiency. We introduce a new model - Segment any Text (SaT) - to solve this problem. To enhance robustness, we propose a new pretraining scheme that ensures less reliance on punctuation. To address adaptability, we introduce an extra stage of parameter-efficient fine-tuning, establishing state-of-the-art performance in distinct domains such as verses from lyrics and legal documents. Along the way, we introduce architectural modifications that result in a threefold gain in speed over the previous state of the art and solve spurious reliance on context far in the future. Finally, we introduce a variant of our model with fine-tuning on a diverse, multilingual mixture of sentence-segmented data, acting as a drop-in replacement and enhancement for existing segmentation tools. Overall, our contributions provide a universal approach for segmenting any text. Our method outperforms all baselines - including strong LLMs - across 8 corpora spanning diverse domains and languages, especially in practically relevant situations where text is poorly formatted. Our models and code, including documentation, are available at https://huggingface.co/segment-any-text under the MIT license. | [
"['Markus Frohmann' 'Igor Sterner' 'Ivan Vulić' 'Benjamin Minixhofer'\n 'Markus Schedl']"
] |
null | null | 2406.16681 | null | null | http://arxiv.org/pdf/2406.16681v1 | 2024-06-24T14:42:27Z | 2024-06-24T14:42:27Z | A Comprehensive Review of Emerging Approaches in Machine Learning for De
Novo PROTAC Design | Targeted protein degradation (TPD) is a rapidly growing field in modern drug discovery that aims to regulate the intracellular levels of proteins by harnessing the cell's innate degradation pathways to selectively target and degrade disease-related proteins. This strategy creates new opportunities for therapeutic intervention in cases where occupancy-based inhibitors have not been successful. Proteolysis-targeting chimeras (PROTACs) are at the heart of TPD strategies, which leverage the ubiquitin-proteasome system for the selective targeting and proteasomal degradation of pathogenic proteins. As the field evolves, it becomes increasingly apparent that the traditional methodologies for designing such complex molecules have limitations. This has led to the use of machine learning (ML) and generative modeling to improve and accelerate the development process. In this review, we explore the impact of ML on de novo PROTAC design $-$ an aspect of molecular design that has not been comprehensively reviewed despite its significance. We delve into the distinct characteristics of PROTAC linker design, underscoring the complexities required to create effective bifunctional molecules capable of TPD. We then examine how ML in the context of fragment-based drug design (FBDD), honed in the realm of small-molecule drug discovery, is paving the way for PROTAC linker design. Our review provides a critical evaluation of the limitations inherent in applying this method to the complex field of PROTAC development. Moreover, we review existing ML works applied to PROTAC design, highlighting pioneering efforts and, importantly, the limitations these studies face. By offering insights into the current state of PROTAC development and the integral role of ML in PROTAC design, we aim to provide valuable perspectives for researchers in their pursuit of better design strategies for this new modality. | [
"['Yossra Gharbi' 'Rocío Mercado']"
] |
null | null | 2406.16683 | null | null | http://arxiv.org/pdf/2406.16683v1 | 2024-06-24T14:43:02Z | 2024-06-24T14:43:02Z | Repulsive Score Distillation for Diverse Sampling of Diffusion Models | Score distillation sampling has been pivotal for integrating diffusion models into generation of complex visuals. Despite impressive results it suffers from mode collapse and lack of diversity. To cope with this challenge, we leverage the gradient flow interpretation of score distillation to propose Repulsive Score Distillation (RSD). In particular, we propose a variational framework based on repulsion of an ensemble of particles that promotes diversity. Using a variational approximation that incorporates a coupling among particles, the repulsion appears as a simple regularization that allows interaction of particles based on their relative pairwise similarity, measured e.g., via radial basis kernels. We design RSD for both unconstrained and constrained sampling scenarios. For constrained sampling we focus on inverse problems in the latent space that leads to an augmented variational formulation, that strikes a good balance between compute, quality and diversity. Our extensive experiments for text-to-image generation, and inverse problems demonstrate that RSD achieves a superior trade-off between diversity and quality compared with state-of-the-art alternatives. | [
"['Nicolas Zilberstein' 'Morteza Mardani' 'Santiago Segarra']"
] |
null | null | 2406.16687 | null | null | http://arxiv.org/pdf/2406.16687v1 | 2024-06-24T14:46:34Z | 2024-06-24T14:46:34Z | Link Prediction with Untrained Message Passing Layers | Message passing neural networks (MPNNs) operate on graphs by exchanging information between neighbouring nodes. MPNNs have been successfully applied to various node-, edge-, and graph-level tasks in areas like molecular science, computer vision, natural language processing, and combinatorial optimization. However, most MPNNs require training on large amounts of labeled data, which can be costly and time-consuming. In this work, we explore the use of various untrained message passing layers in graph neural networks, i.e., variants of popular message passing architectures where we remove all trainable parameters that are used to transform node features in the message passing step. Focusing on link prediction, we find that untrained message passing layers can lead to competitive and even superior performance compared to fully trained MPNNs, especially in the presence of high-dimensional features. We provide a theoretical analysis of untrained message passing by relating the inner products of features implicitly produced by untrained message passing layers to path-based topological node similarity measures. As such, untrained message passing architectures can be viewed as a highly efficient and interpretable approach to link prediction. | [
"['Lisi Qarkaxhija' 'Anatol E. Wegner' 'Ingo Scholtes']"
] |
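A minimal sketch of the idea in the abstract above: message passing with all trainable feature transforms removed, with candidate links scored by inner products of the propagated features. The toy graph and all function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def untrained_message_passing(adj: np.ndarray, features: np.ndarray, num_layers: int = 2) -> np.ndarray:
    """Propagate node features with a symmetrically normalized adjacency matrix.

    No trainable parameters: each layer only averages neighbour features,
    i.e. a GCN-style layer with the learnable weight matrix removed.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    a_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    h = features.copy()
    for _ in range(num_layers):
        h = a_norm @ h
    return h

def link_scores(h: np.ndarray, candidate_edges: np.ndarray) -> np.ndarray:
    """Score candidate links by the inner product of propagated node features."""
    return np.einsum("ij,ij->i", h[candidate_edges[:, 0]], h[candidate_edges[:, 1]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 50, 16
    adj = (rng.random((n, n)) < 0.1).astype(float)
    adj = np.triu(adj, 1)
    adj = adj + adj.T                      # undirected graph, no self-loops
    x = rng.normal(size=(n, d))            # high-dimensional node features
    h = untrained_message_passing(adj, x, num_layers=2)
    candidates = rng.integers(0, n, size=(5, 2))
    print(link_scores(h, candidates))
```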
null | null | 2406.16689 | null | null | http://arxiv.org/pdf/2406.16689v1 | 2024-06-24T14:50:05Z | 2024-06-24T14:50:05Z | Coding schemes in neural networks learning classification tasks | Neural networks possess the crucial ability to generate meaningful representations of task-dependent features. Indeed, with appropriate scaling, supervised learning in neural networks can result in strong, task-dependent feature learning. However, the nature of the emergent representations, which we call the `coding scheme', is still unclear. To understand the emergent coding scheme, we investigate fully-connected, wide neural networks learning classification tasks using the Bayesian framework where learning shapes the posterior distribution of the network weights. Consistent with previous findings, our analysis of the feature learning regime (also known as `non-lazy', `rich', or `mean-field' regime) shows that the networks acquire strong, data-dependent features. Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity. In linear networks, an analog coding scheme of the task emerges. Despite the strong representations, the mean predictor is identical to the lazy case. In nonlinear networks, spontaneous symmetry breaking leads to either redundant or sparse coding schemes. Our findings highlight how network properties such as scaling of weights and neuronal nonlinearity can profoundly influence the emergent representations. | [
"['Alexander van Meegen' 'Haim Sompolinsky']"
] |
null | null | 2406.16698 | null | null | http://arxiv.org/pdf/2406.16698v1 | 2024-06-24T15:01:05Z | 2024-06-24T15:01:05Z | Learning Interpretable Fair Representations | Numerous approaches have been recently proposed for learning fair representations that mitigate unfair outcomes in prediction tasks. A key motivation for these methods is that the representations can be used by third parties with unknown objectives. However, because current fair representations are generally not interpretable, the third party cannot use these fair representations for exploration, or to obtain any additional insights, besides the pre-contracted prediction tasks. Thus, to increase data utility beyond prediction tasks, we argue that the representations need to be fair, yet interpretable. We propose a general framework for learning interpretable fair representations by introducing an interpretable "prior knowledge" during the representation learning process. We implement this idea and conduct experiments with ColorMNIST and Dsprite datasets. The results indicate that in addition to being interpretable, our representations attain slightly higher accuracy and fairer outcomes in a downstream classification task compared to state-of-the-art fair representations. | [
"['Tianhao Wang' 'Zana Buçinca' 'Zilin Ma']"
] |
null | null | 2406.16707 | null | null | http://arxiv.org/pdf/2406.16707v1 | 2024-06-24T15:09:22Z | 2024-06-24T15:09:22Z | Probabilistic Subgoal Representations for Hierarchical Reinforcement
learning | In goal-conditioned hierarchical reinforcement learning (HRL), a high-level policy specifies a subgoal for the low-level policy to reach. Effective HRL hinges on a suitable subgoal representation function, abstracting state space into latent subgoal space and inducing varied low-level behaviors. Existing methods adopt a subgoal representation that provides a deterministic mapping from state space to latent subgoal space. Instead, this paper utilizes Gaussian Processes (GPs) for the first probabilistic subgoal representation. Our method employs a GP prior on the latent subgoal space to learn a posterior distribution over the subgoal representation functions while exploiting the long-range correlation in the state space through learnable kernels. This enables an adaptive memory that integrates long-range subgoal information from prior planning steps, allowing it to cope with stochastic uncertainties. Furthermore, we propose a novel learning objective to facilitate the simultaneous learning of probabilistic subgoal representations and policies within a unified framework. In experiments, our approach outperforms state-of-the-art baselines not only in standard benchmarks but also in environments with stochastic elements and under diverse reward conditions. Additionally, our model shows promising capabilities in transferring low-level policies across different tasks. | [
"['Vivienne Huiling Wang' 'Tinghuai Wang' 'Wenyan Yang'\n 'Joni-Kristian Kämäräinen' 'Joni Pajarinen']"
] |
null | null | 2406.16708 | null | null | http://arxiv.org/pdf/2406.16708v1 | 2024-06-24T15:09:29Z | 2024-06-24T15:09:29Z | CausalFormer: An Interpretable Transformer for Temporal Causal Discovery | Temporal causal discovery is a crucial task aimed at uncovering the causal relations within time series data. The latest temporal causal discovery methods usually train deep learning models on prediction tasks to uncover the causality between time series. They capture causal relations by analyzing the parameters of some components of the trained models, e.g., attention weights and convolution weights. However, this is an incomplete mapping process from the model parameters to the causality and fails to investigate the other components, e.g., fully connected layers and activation functions, that are also significant for causal discovery. To facilitate the utilization of the whole deep learning models in temporal causal discovery, we proposed an interpretable transformer-based causal discovery model termed CausalFormer, which consists of the causality-aware transformer and the decomposition-based causality detector. The causality-aware transformer learns the causal representation of time series data using a prediction task with the designed multi-kernel causal convolution which aggregates each input time series along the temporal dimension under the temporal priority constraint. Then, the decomposition-based causality detector interprets the global structure of the trained causality-aware transformer with the proposed regression relevance propagation to identify potential causal relations and finally construct the causal graph. Experiments on synthetic, simulated, and real datasets demonstrate the state-of-the-art performance of CausalFormer on discovering temporal causality. Our code is available at https://github.com/lingbai-kong/CausalFormer. | [
"['Lingbai Kong' 'Wengen Li' 'Hanchen Yang' 'Yichao Zhang' 'Jihong Guan'\n 'Shuigeng Zhou']"
] |
null | null | 2406.16714 | null | null | http://arxiv.org/pdf/2406.16714v1 | 2024-06-24T15:16:45Z | 2024-06-24T15:16:45Z | AutoDetect: Towards a Unified Framework for Automated Weakness Detection
in Large Language Models | Although Large Language Models (LLMs) are becoming increasingly powerful, they still exhibit significant but subtle weaknesses, such as mistakes in instruction-following or coding tasks. As these unexpected errors could lead to severe consequences in practical deployments, it is crucial to investigate the limitations within LLMs systematically. Traditional benchmarking approaches cannot thoroughly pinpoint specific model deficiencies, while manual inspections are costly and not scalable. In this paper, we introduce a unified framework, AutoDetect, to automatically expose weaknesses in LLMs across various tasks. Inspired by the educational assessment process that measures students' learning outcomes, AutoDetect consists of three LLM-powered agents: Examiner, Questioner, and Assessor. The collaboration among these three agents is designed to realize comprehensive and in-depth weakness identification. Our framework demonstrates significant success in uncovering flaws, with an identification success rate exceeding 30% in prominent models such as ChatGPT and Claude. More importantly, these identified weaknesses can guide specific model improvements, proving more effective than untargeted data augmentation methods like Self-Instruct. Our approach has led to substantial enhancements in popular LLMs, including the Llama series and Mistral-7b, boosting their performance by over 10% across several benchmarks. Code and data are publicly available at https://github.com/thu-coai/AutoDetect. | [
"['Jiale Cheng' 'Yida Lu' 'Xiaotao Gu' 'Pei Ke' 'Xiao Liu' 'Yuxiao Dong'\n 'Hongning Wang' 'Jie Tang' 'Minlie Huang']"
] |
null | null | 2406.16715 | null | null | http://arxiv.org/pdf/2406.16715v1 | 2024-06-24T15:17:49Z | 2024-06-24T15:17:49Z | GC-Bench: A Benchmark Framework for Graph Condensation with New Insights | Graph condensation (GC) is an emerging technique designed to learn a significantly smaller graph that retains the essential information of the original graph. This condensed graph has shown promise in accelerating graph neural networks while preserving performance comparable to that achieved with the original, larger graphs. Additionally, this technique facilitates downstream applications such as neural architecture search and enhances our understanding of redundancy in large graphs. Despite the rapid development of GC methods, a systematic evaluation framework remains absent, which is necessary to clarify the critical designs for particular evaluative aspects. Furthermore, several meaningful questions have not been investigated, such as whether GC inherently preserves certain graph properties and offers robustness even without targeted design efforts. In this paper, we introduce GC-Bench, a comprehensive framework to evaluate recent GC methods across multiple dimensions and to generate new insights. Our experimental findings provide deeper insights into the GC process and the characteristics of condensed graphs, guiding future efforts in enhancing performance and exploring new applications. Our code is available at \url{https://github.com/Emory-Melody/GraphSlim/tree/main/benchmark}. | [
"['Shengbo Gong' 'Juntong Ni' 'Noveen Sachdeva' 'Carl Yang' 'Wei Jin']"
] |
null | null | 2406.16738 | null | null | http://arxiv.org/pdf/2406.16738v1 | 2024-06-24T15:45:20Z | 2024-06-24T15:45:20Z | Inducing Group Fairness in LLM-Based Decisions | Prompting Large Language Models (LLMs) has created new and interesting means for classifying textual data. While evaluating and remediating group fairness is a well-studied problem in classifier fairness literature, some classical approaches (e.g., regularization) do not carry over, and some new opportunities arise (e.g., prompt-based remediation). We measure fairness of LLM-based classifiers on a toxicity classification task, and empirically show that prompt-based classifiers may lead to unfair decisions. We introduce several remediation techniques and benchmark their fairness and performance trade-offs. We hope our work encourages more research on group fairness in LLM-based classifiers. | [
"['James Atwood' 'Preethi Lahoti' 'Ananth Balashankar' 'Flavien Prost'\n 'Ahmad Beirami']"
] |
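A small illustrative example of measuring a group fairness gap for a (mocked) prompt-based toxicity classifier, in the spirit of the evaluation described in the abstract above; the bias pattern, metric choice, and all names are hypothetical.

```python
import numpy as np

def group_false_positive_rates(preds: np.ndarray, labels: np.ndarray, groups: np.ndarray) -> dict:
    """Per-group false-positive rate of a binary toxicity classifier."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (labels == 0)       # non-toxic examples in group g
        rates[int(g)] = float(preds[mask].mean()) if mask.any() else float("nan")
    return rates

def fairness_gap(preds: np.ndarray, labels: np.ndarray, groups: np.ndarray) -> float:
    """Largest pairwise difference in group false-positive rates."""
    rates = list(group_false_positive_rates(preds, labels, groups).values())
    return max(rates) - min(rates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    groups = rng.integers(0, 2, size=n)            # two hypothetical identity groups
    labels = rng.integers(0, 2, size=n)            # ground-truth toxicity labels
    # stand-in for prompt-based LLM predictions, deliberately biased against group 1
    preds = (rng.random(n) < np.where(groups == 1, 0.35, 0.25)).astype(int)
    print(group_false_positive_rates(preds, labels, groups))
    print("fairness gap:", fairness_gap(preds, labels, groups))
```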
null | null | 2406.16740 | null | null | http://arxiv.org/pdf/2406.16740v2 | 2024-07-01T15:27:50Z | 2024-06-24T15:45:37Z | Learning the boundary-to-domain mapping using Lifting Product Fourier
Neural Operators for partial differential equations | Neural operators such as the Fourier Neural Operator (FNO) have been shown to provide resolution-independent deep learning models that can learn mappings between function spaces. For example, an initial condition can be mapped to the solution of a partial differential equation (PDE) at a future time-step using a neural operator. Despite the popularity of neural operators, their use to predict solution functions over a domain given only data over the boundary (such as a spatially varying Dirichlet boundary condition) remains unexplored. In this paper, we refer to such problems as boundary-to-domain problems; they have a wide range of applications in areas such as fluid mechanics, solid mechanics, heat transfer etc. We present a novel FNO-based architecture, named Lifting Product FNO (or LP-FNO) which can map arbitrary boundary functions defined on the lower-dimensional boundary to a solution in the entire domain. Specifically, two FNOs defined on the lower-dimensional boundary are lifted into the higher dimensional domain using our proposed lifting product layer. We demonstrate the efficacy and resolution independence of the proposed LP-FNO for the 2D Poisson equation. | [
"['Aditya Kashi' 'Arka Daw' 'Muralikrishnan Gopalakrishnan Meena' 'Hao Lu']"
] |
null | null | 2406.16745 | null | null | http://arxiv.org/pdf/2406.16745v1 | 2024-06-24T15:53:11Z | 2024-06-24T15:53:11Z | Bandits with Preference Feedback: A Stackelberg Game Perspective | Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. This model allows for incorporating human feedback into online inference and optimization and has been employed in systems for fine-tuning large language models. The problem is well understood in simplified settings with linear target functions or over finite small domains that limit practical interest. Taking the next step, we consider infinite domains and nonlinear (kernelized) rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm. We propose MAXMINLCB, which emulates this trade-off as a zero-sum Stackelberg game, and chooses action pairs that are informative and yield favorable rewards. MAXMINLCB consistently outperforms existing algorithms and satisfies an anytime-valid rate-optimal regret guarantee. This is due to our novel preference-based confidence sequences for kernelized logistic estimators. | [
"['Barna Pásztor' 'Parnian Kassraie' 'Andreas Krause']"
] |
null | null | 2406.16746 | null | null | http://arxiv.org/pdf/2406.16746v2 | 2024-06-26T02:19:01Z | 2024-06-24T15:55:49Z | The Responsible Foundation Model Development Cheatsheet: A Review of
Tools & Resources | Foundation model development attracts a rapidly expanding body of contributors, scientists, and applications. To help shape responsible development practices, we introduce the Foundation Model Development Cheatsheet: a growing collection of 250+ tools and resources spanning text, vision, and speech modalities. We draw on a large body of prior work to survey resources (e.g. software, documentation, frameworks, guides, and practical tools) that support informed data selection, processing, and understanding, precise and limitation-aware artifact documentation, efficient model training, advance awareness of the environmental impact from training, careful model evaluation of capabilities, risks, and claims, as well as responsible model release, licensing and deployment practices. We hope this curated collection of resources helps guide more responsible development. The process of curating this list enabled us to review the AI development ecosystem, revealing what tools are critically missing, misused, or over-used in existing practices. We find that (i) tools for data sourcing, model evaluation, and monitoring are critically under-serving ethical and real-world needs, (ii) evaluations for model safety, capabilities, and environmental impact all lack reproducibility and transparency, (iii) text and particularly English-centric analyses continue to dominate over multilingual and multi-modal analyses, and (iv) evaluation of systems, rather than just models, is needed so that capabilities and impact are assessed in context. | [
"['Shayne Longpre' 'Stella Biderman' 'Alon Albalak' 'Hailey Schoelkopf'\n 'Daniel McDuff' 'Sayash Kapoor' 'Kevin Klyman' 'Kyle Lo'\n 'Gabriel Ilharco' 'Nay San' 'Maribeth Rauh' 'Aviya Skowron'\n 'Bertie Vidgen' 'Laura Weidinger' 'Arvind Narayanan' 'Victor Sanh'\n 'David Adelani' 'Percy Liang' 'Rishi Bommasani' 'Peter Henderson'\n 'Sasha Luccioni' 'Yacine Jernite' 'Luca Soldaini']"
] |
null | null | 2406.16747 | null | null | http://arxiv.org/pdf/2406.16747v1 | 2024-06-24T15:55:59Z | 2024-06-24T15:55:59Z | Sparser is Faster and Less is More: Efficient Sparse Attention for
Long-Range Transformers | Accommodating long sequences efficiently in autoregressive Transformers, especially within an extended context window, poses significant challenges due to the quadratic computational complexity and substantial KV memory requirements inherent in self-attention mechanisms. In this work, we introduce SPARSEK Attention, a novel sparse attention mechanism designed to overcome these computational and memory obstacles while maintaining performance. Our approach integrates a scoring network and a differentiable top-k mask operator, SPARSEK, to select a constant number of KV pairs for each query, thereby enabling gradient-based optimization. As a result, SPARSEK Attention offers linear time complexity and constant memory footprint during generation. Experimental results reveal that SPARSEK Attention outperforms previous sparse attention methods and provides significant speed improvements during both training and inference, particularly in language modeling and downstream tasks. Furthermore, our method can be seamlessly integrated into pre-trained Large Language Models (LLMs) with minimal fine-tuning, offering a practical solution for effectively managing long-range dependencies in diverse applications. | [
"['Chao Lou' 'Zixia Jia' 'Zilong Zheng' 'Kewei Tu']"
] |
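A simplified sketch of the kind of mechanism described in the abstract above: a scoring function selects a constant number of key/value pairs and attention is computed only over the kept pairs. The real SPARSEK attention uses a differentiable top-k mask operator; this toy uses a hard top-k, and all names are invented.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                          importance: np.ndarray, k_keep: int) -> np.ndarray:
    """Attend only over the k_keep key/value pairs with the highest importance scores.

    q: (Tq, d) queries; k, v: (Tk, d) keys/values; importance: (Tk,) scores
    produced by some scoring network.  Memory and compute scale with k_keep,
    not with the full sequence length Tk.
    """
    keep = np.argsort(-importance)[:k_keep]            # hard top-k selection
    logits = q @ k[keep].T / np.sqrt(q.shape[-1])      # (Tq, k_keep)
    return softmax(logits, axis=-1) @ v[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Tq, Tk, d = 4, 128, 32
    q, k, v = (rng.normal(size=(t, d)) for t in (Tq, Tk, Tk))
    importance = rng.normal(size=Tk)                   # stand-in for a learned scorer
    out = topk_sparse_attention(q, k, v, importance, k_keep=16)
    print(out.shape)                                   # (4, 32)
```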
null | null | 2406.16748 | null | null | http://arxiv.org/pdf/2406.16748v1 | 2024-06-24T15:57:48Z | 2024-06-24T15:57:48Z | OCALM: Object-Centric Assessment with Language Models | Properly defining a reward signal to efficiently train a reinforcement learning (RL) agent is a challenging task. Designing balanced objective functions from which a desired behavior can emerge requires expert knowledge, especially for complex environments. Learning rewards from human feedback or using large language models (LLMs) to directly provide rewards are promising alternatives, allowing non-experts to specify goals for the agent. However, black-box reward models make it difficult to debug the reward. In this work, we propose Object-Centric Assessment with Language Models (OCALM) to derive inherently interpretable reward functions for RL agents from natural language task descriptions. OCALM uses the extensive world-knowledge of LLMs while leveraging the object-centric nature common to many environments to derive reward functions focused on relational concepts, providing RL agents with the ability to derive policies from task descriptions. | [
"['Timo Kaufmann' 'Jannis Blüml' 'Antonia Wüst' 'Quentin Delfosse'\n 'Kristian Kersting' 'Eyke Hüllermeier']"
] |
null | null | 2406.16749 | null | null | http://arxiv.org/pdf/2406.16749v1 | 2024-06-24T15:57:49Z | 2024-06-24T15:57:49Z | Inferring stochastic low-rank recurrent neural networks from neural data | A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system. Models of these neural dynamics should ideally be both interpretable and fit the observed data well. Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics. However, it is unclear how to best fit low-rank RNNs to data consisting of noisy observations of an underlying stochastic system. Here, we propose to fit stochastic low-rank RNNs with variational sequential Monte Carlo methods. We validate our method on several datasets consisting of both continuous and spiking neural data, where we obtain lower dimensional latent dynamics than current state of the art methods. Additionally, for low-rank models with piecewise linear nonlinearities, we show how to efficiently identify all fixed points in polynomial rather than exponential cost in the number of units, making analysis of the inferred dynamics tractable for large RNNs. Our method both elucidates the dynamical systems underlying experimental recordings and provides a generative model whose trajectories match observed trial-to-trial variability. | [
"['Matthijs Pals' 'A Erdem Sağtekin' 'Felix Pei' 'Manuel Gloeckler'\n 'Jakob H Macke']"
] |
null | null | 2406.16754 | null | null | http://arxiv.org/pdf/2406.16754v1 | 2024-06-24T16:00:20Z | 2024-06-24T16:00:20Z | The MRI Scanner as a Diagnostic: Image-less Active Sampling | Despite the high diagnostic accuracy of Magnetic Resonance Imaging (MRI), using MRI as a Point-of-Care (POC) disease identification tool poses significant accessibility challenges due to the use of high magnetic field strength and lengthy acquisition times. We ask a simple question: Can we dynamically optimise acquired samples, at the patient level, according to an (automated) downstream decision task, while discounting image reconstruction? We propose an ML-based framework that learns an active sampling strategy, via reinforcement learning, at a patient-level to directly infer disease from undersampled k-space. We validate our approach by inferring Meniscus Tear in undersampled knee MRI data, where we achieve diagnostic performance comparable with ML-based diagnosis, using fully sampled k-space data. We analyse task-specific sampling policies, showcasing the adaptability of our active sampling approach. The introduced frugal sampling strategies have the potential to reduce high field strength requirements that in turn strengthen the viability of MRI-based POC disease identification and associated preliminary screening tools. | [
"['Yuning Du' 'Rohan Dharmakumar' 'Sotirios A. Tsaftaris']"
] |
null | null | 2406.16756 | null | null | http://arxiv.org/pdf/2406.16756v1 | 2024-06-24T16:03:57Z | 2024-06-24T16:03:57Z | Addressing Polarization and Unfairness in Performative Prediction | When machine learning (ML) models are used in applications that involve humans (e.g., online recommendation, school admission, hiring, lending), the model itself may trigger changes in the distribution of targeted data it aims to predict. Performative prediction (PP) is a framework that explicitly considers such model-dependent distribution shifts when learning ML models. While significant efforts have been devoted to finding performative stable (PS) solutions in PP for system robustness, their societal implications are less explored and it is unclear whether PS solutions are aligned with social norms such as fairness. In this paper, we set out to examine the fairness property of PS solutions in performative prediction. We first show that PS solutions can incur severe polarization effects and group-wise loss disparity. Although existing fairness mechanisms commonly used in literature can help mitigate unfairness, they may fail and disrupt the stability under model-dependent distribution shifts. We thus propose novel fairness intervention mechanisms that can simultaneously achieve both stability and fairness in PP settings. Both theoretical analysis and experiments are provided to validate the proposed method. | [
"['Kun Jin' 'Tian Xie' 'Yang Liu' 'Xueru Zhang']"
] |
null | null | 2406.16766 | null | null | http://arxiv.org/pdf/2406.16766v1 | 2024-06-24T16:23:30Z | 2024-06-24T16:23:30Z | Conformal time series decomposition with component-wise exchangeability | Conformal prediction offers a practical framework for distribution-free uncertainty quantification, providing finite-sample coverage guarantees under relatively mild assumptions on data exchangeability. However, these assumptions cease to hold for time series due to their temporally correlated nature. In this work, we present a novel use of conformal prediction for time series forecasting that incorporates time series decomposition. This approach allows us to model different temporal components individually. By applying specific conformal algorithms to each component and then merging the obtained prediction intervals, we customize our methods to account for the different exchangeability regimes underlying each component. Our decomposition-based approach is thoroughly discussed and empirically evaluated on synthetic and real-world data. We find that the method provides promising results on well-structured time series, but can be limited by factors such as the decomposition step for more complex data. | [
"['Derck W. E. Prinzhorn' 'Thijmen Nijdam' 'Putri A. van der Linden'\n 'Alexander Timans']"
] |
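A toy sketch of the decomposition-based idea in the abstract above: split a series into trend and remainder, apply split conformal prediction to each component's forecast errors, and merge the resulting radii. The decomposition, the per-component forecasters, and the merging rule here are deliberately simplistic stand-ins, not the paper's method.

```python
import numpy as np

def moving_average(y: np.ndarray, window: int) -> np.ndarray:
    return np.convolve(y, np.ones(window) / window, mode="same")

def split_conformal_radius(residuals: np.ndarray, alpha: float) -> float:
    """Finite-sample-corrected quantile of absolute calibration residuals."""
    n = len(residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(np.abs(residuals), level))

def decomposed_interval(y: np.ndarray, window: int = 12, alpha: float = 0.1):
    """Toy decomposition-based conformal interval for the next observation.

    Trend is forecast by persistence, the remainder by its calibration mean;
    each component gets its own conformal radius at level alpha/2 and the
    radii are summed (a deliberately conservative merge).
    """
    trend = moving_average(y, window)
    remainder = y - trend
    half = len(y) // 2
    trend_res = trend[half:] - trend[half - 1:-1]           # persistence errors
    rem_res = remainder[half:] - remainder[:half].mean()    # mean-forecast errors
    point = trend[-1] + remainder[:half].mean()
    radius = (split_conformal_radius(trend_res, alpha / 2)
              + split_conformal_radius(rem_res, alpha / 2))
    return point - radius, point + radius

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(200)
    y = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.3, size=t.size)
    print(decomposed_interval(y, window=12, alpha=0.1))
```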
null | null | 2406.16768 | null | null | http://arxiv.org/pdf/2406.16768v1 | 2024-06-24T16:24:34Z | 2024-06-24T16:24:34Z | WARP: On the Benefits of Weight Averaged Rewarded Policies | Reinforcement learning from human feedback (RLHF) aligns large language models (LLMs) by encouraging their generations to have high rewards, using a reward model trained on human preferences. To prevent the forgetting of pre-trained knowledge, RLHF usually incorporates a KL regularization; this forces the policy to remain close to its supervised fine-tuned initialization, though it hinders the reward optimization. To tackle the trade-off between KL and reward, in this paper we introduce a novel alignment strategy named Weight Averaged Rewarded Policies (WARP). WARP merges policies in the weight space at three distinct stages. First, it uses the exponential moving average of the policy as a dynamic anchor in the KL regularization. Second, it applies spherical interpolation to merge independently fine-tuned policies into a new enhanced one. Third, it linearly interpolates between this merged model and the initialization, to recover features from pre-training. This procedure is then applied iteratively, with each iteration's final model used as an advanced initialization for the next, progressively refining the KL-reward Pareto front, achieving superior rewards at fixed KL. Experiments with GEMMA policies validate that WARP improves their quality and alignment, outperforming other open-source LLMs. | [
"['Alexandre Ramé' 'Johan Ferret' 'Nino Vieillard' 'Robert Dadashi'\n 'Léonard Hussenot' 'Pierre-Louis Cedoz' 'Pier Giuseppe Sessa'\n 'Sertan Girgin' 'Arthur Douillard' 'Olivier Bachem']"
] |
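A minimal sketch of the three weight-space operations described in the abstract above (EMA anchor update, spherical interpolation of independently fine-tuned policies, and linear interpolation back toward the initialization), applied to flat parameter vectors rather than actual LLM weights; all names and constants are illustrative assumptions.

```python
import numpy as np

def ema_update(anchor: np.ndarray, policy: np.ndarray, decay: float = 0.99) -> np.ndarray:
    """Exponential moving average of the policy, usable as a dynamic KL anchor."""
    return decay * anchor + (1.0 - decay) * policy

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Spherical interpolation between two flattened weight vectors."""
    a = w_a / np.linalg.norm(w_a)
    b = w_b / np.linalg.norm(w_b)
    omega = np.arccos(np.clip(a @ b, -1.0, 1.0))
    if omega < 1e-8:                                   # nearly colinear: fall back to lerp
        return (1 - t) * w_a + t * w_b
    return (np.sin((1 - t) * omega) * w_a + np.sin(t * omega) * w_b) / np.sin(omega)

def lerp_towards_init(w_merged: np.ndarray, w_init: np.ndarray, eta: float = 0.3) -> np.ndarray:
    """Linear interpolation back toward the initialization to recover pre-trained features."""
    return (1 - eta) * w_merged + eta * w_init

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_init = rng.normal(size=1000)
    # two independently "fine-tuned" policies (stand-ins for separate RLHF runs)
    w1 = w_init + 0.1 * rng.normal(size=1000)
    w2 = w_init + 0.1 * rng.normal(size=1000)
    anchor = ema_update(w_init, w1)                    # dynamic anchor after one update
    merged = slerp(w1, w2, t=0.5)
    final = lerp_towards_init(merged, w_init, eta=0.3)
    print(float(np.linalg.norm(final - w_init)), float(np.linalg.norm(anchor - w_init)))
```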
null | null | 2406.16782 | null | null | http://arxiv.org/pdf/2406.16782v1 | 2024-06-24T16:44:45Z | 2024-06-24T16:44:45Z | Confidence Aware Inverse Constrained Reinforcement Learning | In coming up with solutions to real-world problems, humans implicitly adhere to constraints that are too numerous and complex to be specified completely. However, reinforcement learning (RL) agents need these constraints to learn the correct optimal policy in these settings. The field of Inverse Constraint Reinforcement Learning (ICRL) deals with this problem and provides algorithms that aim to estimate the constraints from expert demonstrations collected offline. Practitioners prefer to know a measure of confidence in the estimated constraints, before deciding to use these constraints, which allows them to only use the constraints that satisfy a desired level of confidence. However, prior works do not allow users to provide the desired level of confidence for the inferred constraints. This work provides a principled ICRL method that can take a confidence level with a set of expert demonstrations and outputs a constraint that is at least as constraining as the true underlying constraint with the desired level of confidence. Further, unlike previous methods, this method allows a user to know if the number of expert trajectories is insufficient to learn a constraint with a desired level of confidence, and therefore collect more expert trajectories as required to simultaneously learn constraints with the desired level of confidence and a policy that achieves the desired level of performance. | [
"['Sriram Ganapathi Subramanian' 'Guiliang Liu' 'Mohammed Elmahgiubi'\n 'Kasra Rezaee' 'Pascal Poupart']"
] |
null | null | 2406.16783 | null | null | http://arxiv.org/pdf/2406.16783v2 | 2024-06-28T10:14:53Z | 2024-06-24T16:45:13Z | M2Lingual: Enhancing Multilingual, Multi-Turn Instruction Alignment in
Large Language Models | Instruction finetuning (IFT) is critical for aligning Large Language Models (LLMs) to follow instructions. While many effective IFT datasets have been introduced recently, they predominantly focus on high-resource languages like English. To better align LLMs across a broad spectrum of languages and tasks, we propose a fully synthetic, novel taxonomy (Evol) guided Multilingual, Multi-turn instruction finetuning dataset, called M2Lingual. It is constructed by first selecting a diverse set of seed examples and then utilizing the proposed Evol taxonomy to convert these seeds into complex and challenging multi-turn instructions. We demonstrate the effectiveness of M2Lingual by training LLMs of varying sizes and showcasing the enhanced performance across a diverse set of languages. We contribute the 2 step Evol taxonomy with the guided generation code: https://github.com/ServiceNow/M2Lingual, as well as the first fully synthetic, general and task-oriented, multi-turn, multilingual dataset built with Evol - M2Lingual: https://huggingface.co/datasets/ServiceNow-AI/M2Lingual - containing 182K total IFT pairs, covering 70 languages and 17+ NLP tasks. | [
"['Rishabh Maheshwary' 'Vikas Yadav' 'Hoang Nguyen' 'Khyati Mahajan'\n 'Sathwik Tejaswi Madhusudhan']"
] |
null | null | 2406.16791 | null | null | http://arxiv.org/pdf/2406.16791v1 | 2024-06-24T16:55:03Z | 2024-06-24T16:55:03Z | Enabling more efficient and cost-effective AI/ML systems with Collective
Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and
reproducible optimization tournaments | In this white paper, I present my community effort to automatically co-design cheaper, faster and more energy-efficient software and hardware for AI, ML and other popular workloads with the help of the Collective Mind framework (CM), virtualized MLOps, MLPerf benchmarks and reproducible optimization tournaments. I developed CM to modularize, automate and virtualize the tedious process of building, running, profiling and optimizing complex applications across rapidly evolving open-source and proprietary AI/ML models, datasets, software and hardware. I achieved that with the help of portable, reusable and technology-agnostic automation recipes (ResearchOps) for MLOps and DevOps (CM4MLOps) discovered in close collaboration with academia and industry when reproducing more than 150 research papers and organizing the 1st mass-scale community benchmarking of ML and AI systems using CM and MLPerf. I donated CM and CM4MLOps to MLCommons to help connect academia and industry to learn how to build and run AI and other emerging workloads in the most efficient and cost-effective way using a common and technology-agnostic automation, virtualization and reproducibility framework while unifying knowledge exchange, protecting everyone's intellectual property, enabling portable skills, and accelerating transfer of the state-of-the-art research to production. My long-term vision is to make AI accessible to everyone by making it a commodity automatically produced from the most suitable open-source and proprietary components from different vendors based on user demand, requirements and constraints such as cost, latency, throughput, accuracy, energy, size and other important characteristics. | [
"['Grigori Fursin']"
] |
null | null | 2406.16793 | null | null | http://arxiv.org/pdf/2406.16793v5 | 2024-07-03T16:38:17Z | 2024-06-24T16:56:41Z | Adam-mini: Use Fewer Learning Rates To Gain More | We propose Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini reduces memory by cutting down the learning rate resources in Adam (i.e., $1/\sqrt{v}$). We find that $\geq$ 90% of these learning rates in $v$ could be harmlessly removed if we (1) carefully partition the parameters into blocks following our proposed principle on Hessian structure; (2) assign a single but good learning rate to each parameter block. We further find that, for each of these parameter blocks, there exists a single high-quality learning rate that can outperform Adam, provided that sufficient resources are available to search it out. We then provide one cost-effective way to find good learning rates and propose Adam-mini. Empirically, we verify that Adam-mini performs on par or better than AdamW on various language models sized from 125M to 7B for pre-training, supervised fine-tuning, and RLHF. The reduced memory footprint of Adam-mini also alleviates communication overheads among GPUs and CPUs, thereby increasing throughput. For instance, Adam-mini achieves 49.6% higher throughput than AdamW when pre-training Llama2-7B on $2\times$ A800-80GB GPUs, which saves 33% wall-clock time for pre-training. | [
"['Yushun Zhang' 'Congliang Chen' 'Ziniu Li' 'Tian Ding' 'Chenwei Wu'\n 'Yinyu Ye' 'Zhi-Quan Luo' 'Ruoyu Sun']"
] |
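A toy sketch of the core idea in the abstract above: keep a full Adam-style first moment per block, but a single second-moment scalar per parameter block, so each block shares one effective learning rate. This is not the authors' implementation; the block partitioning (one block per named parameter) and all names are assumptions.

```python
import numpy as np

class AdamMiniSketch:
    """Toy Adam-style optimizer with one second-moment scalar per parameter block.

    Each block keeps a full first moment m, but a single scalar v, so the
    effective step size lr / sqrt(v) is shared by all parameters in the block.
    """

    def __init__(self, blocks: dict, lr: float = 1e-3, betas=(0.9, 0.999), eps: float = 1e-8):
        self.blocks = blocks                           # name -> parameter array (updated in place)
        self.lr, (self.b1, self.b2), self.eps = lr, betas, eps
        self.m = {k: np.zeros_like(p) for k, p in blocks.items()}
        self.v = {k: 0.0 for k in blocks}              # one scalar per block
        self.t = 0

    def step(self, grads: dict) -> None:
        self.t += 1
        for name, g in grads.items():
            self.m[name] = self.b1 * self.m[name] + (1 - self.b1) * g
            self.v[name] = self.b2 * self.v[name] + (1 - self.b2) * float(np.mean(g ** 2))
            m_hat = self.m[name] / (1 - self.b1 ** self.t)
            v_hat = self.v[name] / (1 - self.b2 ** self.t)
            self.blocks[name] -= self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = {"layer1": rng.normal(size=(8, 8)), "layer2": rng.normal(size=(8,))}
    opt = AdamMiniSketch(params, lr=1e-2)
    for _ in range(200):                               # minimize ||p||^2 block-wise
        opt.step({k: 2 * p for k, p in params.items()})
    print({k: float(np.linalg.norm(p)) for k, p in params.items()})
```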
null | null | 2406.16802 | null | null | http://arxiv.org/pdf/2406.16802v1 | 2024-06-24T17:14:31Z | 2024-06-24T17:14:31Z | Improved Regret Bounds for Bandits with Expert Advice | In this research note, we revisit the bandits with expert advice problem. Under a restricted feedback model, we prove a lower bound of order $\sqrt{K T \ln(N/K)}$ for the worst-case regret, where $K$ is the number of actions, $N>K$ the number of experts, and $T$ the time horizon. This matches a previously known upper bound of the same order and improves upon the best available lower bound of $\sqrt{K T (\ln N) / (\ln K)}$. For the standard feedback model, we prove a new instance-based upper bound that depends on the agreement between the experts and provides a logarithmic improvement compared to prior results. | [
"['Nicolò Cesa-Bianchi' 'Khaled Eldowa' 'Emmanuel Esposito'\n 'Julia Olkhovskaya']"
] |
null | null | 2406.16807 | null | null | http://arxiv.org/pdf/2406.16807v1 | 2024-06-24T17:19:34Z | 2024-06-24T17:19:34Z | Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback
for Text-to-Image Generation | Human feedback plays a critical role in learning and refining reward models for text-to-image generation, but the optimal form the feedback should take for learning an accurate reward function has not been conclusively established. This paper investigates the effectiveness of fine-grained feedback which captures nuanced distinctions in image quality and prompt-alignment, compared to traditional coarse-grained feedback (for example, thumbs up/down or ranking between a set of options). While fine-grained feedback holds promise, particularly for systems catering to diverse societal preferences, we show that demonstrating its superiority to coarse-grained feedback is not automatic. Through experiments on real and synthetic preference data, we surface the complexities of building effective models due to the interplay of model choice, feedback type, and the alignment between human judgment and computational interpretation. We identify key challenges in eliciting and utilizing fine-grained feedback, prompting a reassessment of its assumed benefits and practicality. Our findings -- e.g., that fine-grained feedback can lead to worse models for a fixed budget, in some settings; however, in controlled settings with known attributes, fine grained rewards can indeed be more helpful -- call for careful consideration of feedback attributes and potentially beckon novel modeling approaches to appropriately unlock the potential value of fine-grained feedback in-the-wild. | [
"['Katherine M. Collins' 'Najoung Kim' 'Yonatan Bitton' 'Verena Rieser'\n 'Shayegan Omidshafiei' 'Yushi Hu' 'Sherol Chen' 'Senjuti Dutta'\n 'Minsuk Chang' 'Kimin Lee' 'Youwei Liang' 'Georgina Evans' 'Sahil Singla'\n 'Gang Li' 'Adrian Weller' 'Junfeng He' 'Deepak Ramachandran'\n 'Krishnamurthy Dj Dvijotham']"
] |
null | null | 2406.16810 | null | null | http://arxiv.org/pdf/2406.16810v1 | 2024-06-24T17:22:36Z | 2024-06-24T17:22:36Z | PISTOL: Dataset Compilation Pipeline for Structural Unlearning of LLMs | Recently, machine unlearning, which seeks to erase specific data stored in pre-trained or fine-tuned models, has emerged as a crucial protective measure for LLMs. However, unlearning approaches for LLMs that have been considered thus far have focused on the removal of independent data points and have not taken into account that the stored facts are logically connected to one another and form an implicit knowledge graph. To facilitate the development of structural unlearning methods, which are essential for the practical application of unlearning, we propose PISTOL, a pipeline for compiling multi-scenario datasets for benchmarking structural LLM unlearning. Additionally, leveraging sample datasets synthesized using PISTOL, we conducted benchmarks with four distinct unlearning methods on both Llama2-7B and Mistral-7B models. This analysis helps to illustrate the prevailing challenges in effectively and robustly removing highly inter-connected data, batched data, or data skewed towards a specific domain. It also highlights that the choice of pre-trained model can impact unlearning performance. This work not only advances our understanding of the limitations of current LLM unlearning methods and proposes future research directions, but also provides a replicable framework for ongoing exploration and validation in the field. | [
"['Xinchi Qiu' 'William F. Shen' 'Yihong Chen' 'Nicola Cancedda'\n 'Pontus Stenetorp' 'Nicholas D. Lane']"
] |
null | null | 2406.16821 | null | null | http://arxiv.org/pdf/2406.16821v1 | 2024-06-24T17:31:41Z | 2024-06-24T17:31:41Z | General Binding Affinity Guidance for Diffusion Models in
Structure-Based Drug Design | Structure-Based Drug Design (SBDD) focuses on generating valid ligands that strongly and specifically bind to a designated protein pocket. Several methods use machine learning for SBDD to generate these ligands in 3D space, conditioned on the structure of a desired protein pocket. Recently, diffusion models have shown success here by modeling the underlying distributions of atomic positions and types. While these methods are effective in considering the structural details of the protein pocket, they often fail to explicitly consider the binding affinity. Binding affinity characterizes how tightly the ligand binds to the protein pocket, and is measured by the change in free energy associated with the binding process. It is one of the most crucial metrics for benchmarking the effectiveness of the interaction between a ligand and protein pocket. To address this, we propose BADGER: Binding Affinity Diffusion Guidance with Enhanced Refinement. BADGER is a general guidance method to steer the diffusion sampling process towards improved protein-ligand binding, allowing us to adjust the distribution of the binding affinity between ligands and proteins. Our method is enabled by using a neural network (NN) to model the energy function, which is commonly approximated by AutoDock Vina (ADV). ADV's energy function is non-differentiable, and estimates the affinity based on the interactions between a ligand and target protein receptor. By using a NN as a differentiable energy function proxy, we utilize the gradient of our learned energy function as a guidance method on top of any trained diffusion model. We show that our method improves the binding affinity of generated ligands to their protein receptors by up to 60%, significantly surpassing previous machine learning methods. We also show that our guidance method is flexible and can be easily applied to other diffusion-based SBDD frameworks. | [
"['Yue Jian' 'Curtis Wu' 'Danny Reidenbach' 'Aditi S. Krishnapriyan']"
] |
null | null | 2406.16829 | null | null | http://arxiv.org/pdf/2406.16829v2 | 2024-07-05T21:49:08Z | 2024-06-24T17:38:02Z | Understanding and Mitigating Tokenization Bias in Language Models | State-of-the-art language models are autoregressive and operate on subword units known as tokens. Specifically, one must encode the conditioning string into a list of tokens before passing to the language models for next-token prediction. We show that popular encoding schemes, such as maximum prefix encoding (MPE) and byte-pair-encoding (BPE), induce a sampling bias that cannot be mitigated with more training or data. To counter this universal problem, for each encoding scheme above, we propose a novel algorithm to obtain unbiased estimates from any language model trained on tokenized data. Our methods do not require finetuning the model, and the complexity, defined as the number of model runs, scales linearly with the sequence length in the case of MPE. As a result, we show that one can simulate token-free behavior from a tokenized language model. We empirically verify the correctness of our method through a Markov-chain setup, where it accurately recovers the transition probabilities, as opposed to the conventional method of directly prompting tokens into the language model. | [
"['Buu Phan' 'Marton Havasi' 'Matthew Muckley' 'Karen Ullrich']"
] |
null | null | 2406.16833 | null | null | http://arxiv.org/pdf/2406.16833v1 | 2024-06-24T17:41:53Z | 2024-06-24T17:41:53Z | USDC: A Dataset of $\underline{U}$ser $\underline{S}$tance and
$\underline{D}$ogmatism in Long $\underline{C}$onversations | Identifying user's opinions and stances in long conversation threads on various topics can be extremely critical for enhanced personalization, market research, political campaigns, customer service, conflict resolution, targeted advertising, and content moderation. Hence, training language models to automate this task is critical. However, to train such models, gathering manual annotations has multiple challenges: 1) It is time-consuming and costly; 2) Conversation threads could be very long, increasing chances of noisy annotations; and 3) Interpreting instances where a user changes their opinion within a conversation is difficult because often such transitions are subtle and not expressed explicitly. Inspired by the recent success of large language models (LLMs) for complex natural language processing (NLP) tasks, we leverage Mistral Large and GPT-4 to automate the human annotation process on the following two tasks while also providing reasoning: i) User Stance classification, which involves labeling a user's stance of a post in a conversation on a five-point scale; ii) User Dogmatism classification, which deals with labeling a user's overall opinion in the conversation on a four-point scale. The majority voting on zero-shot, one-shot, and few-shot annotations from these two LLMs on 764 multi-user Reddit conversations helps us curate the USDC dataset. USDC is then used to finetune and instruction-tune multiple deployable small language models for the 5-class stance and 4-class dogmatism classification tasks. We make the code and dataset publicly available [https://anonymous.4open.science/r/USDC-0F7F]. | [
"['Mounika Marreddy' 'Subba Reddy Oota' 'Venkata Charan Chinni'\n 'Manish Gupta' 'Lucie Flek']"
] |
null | null | 2406.16834 | null | null | http://arxiv.org/pdf/2406.16834v1 | 2024-06-24T17:42:03Z | 2024-06-24T17:42:03Z | Concentration Inequalities for $(f,Γ)$-GANs | Generative adversarial networks (GANs) are unsupervised learning methods for training a generator distribution to produce samples that approximate those drawn from a target distribution. Many such methods can be formulated as minimization of a metric or divergence. Recent works have proven the statistical consistency of GANs that are based on integral probability metrics (IPMs), e.g., WGAN which is based on the 1-Wasserstein metric. IPMs are defined by optimizing a linear functional (difference of expectations) over a space of discriminators. A much larger class of GANs, which allow for the use of nonlinear objective functionals, can be constructed using $(f,\Gamma)$-divergences; these generalize and interpolate between IPMs and $f$-divergences (e.g., KL or $\alpha$-divergences). Instances of $(f,\Gamma)$-GANs have been shown to exhibit improved performance in a number of applications. In this work we study the statistical consistency of $(f,\Gamma)$-GANs for general $f$ and $\Gamma$. Specifically, we derive finite-sample concentration inequalities. These derivations require novel arguments due to nonlinearity of the objective functional. We demonstrate that our new results reduce to the known results for IPM-GANs in the appropriate limit while also significantly extending the domain of applicability of this theory. | [
"['Jeremiah Birrell']"
] |
null | null | 2406.16838 | null | null | http://arxiv.org/pdf/2406.16838v1 | 2024-06-24T17:45:59Z | 2024-06-24T17:45:59Z | From Decoding to Meta-Generation: Inference-time Algorithms for Large
Language Models | One of the most striking findings in modern research on large language models (LLMs) is that scaling up compute during training leads to better results. However, less attention has been given to the benefits of scaling compute during inference. This survey focuses on these inference-time approaches. We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation. Token-level generation algorithms, often called decoding algorithms, operate by sampling a single token at a time or constructing a token-level search space and then selecting an output. These methods typically assume access to a language model's logits, next-token distributions, or probability scores. Meta-generation algorithms work on partial or full sequences, incorporating domain knowledge, enabling backtracking, and integrating external information. Efficient generation methods aim to reduce token costs and improve the speed of generation. Our survey unifies perspectives from three research communities: traditional natural language processing, modern LLMs, and machine learning systems. | [
"['Sean Welleck' 'Amanda Bertsch' 'Matthew Finlayson' 'Hailey Schoelkopf'\n 'Alex Xie' 'Graham Neubig' 'Ilia Kulikov' 'Zaid Harchaoui']"
] |
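As a concrete instance of the token-level (decoding) algorithms surveyed in the entry above, the snippet below implements temperature plus top-k sampling from a vector of next-token logits; it is a generic illustration rather than code from the survey, and the default temperature and k values are arbitrary.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, rng=None):
    """Sample one token id from next-token logits (temperature + top-k)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    # Keep only the top-k logits; mask the rest out.
    if top_k is not None and top_k < scaled.size:
        cutoff = np.partition(scaled, -top_k)[-top_k]
        scaled = np.where(scaled >= cutoff, scaled, -np.inf)
    probs = np.exp(scaled - scaled[np.isfinite(scaled)].max())
    probs /= probs.sum()
    return int(rng.choice(scaled.size, p=probs))

print(sample_next_token([2.0, 1.0, 0.1, -1.0], temperature=0.7, top_k=3))
```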
null | null | 2406.16846 | null | null | http://arxiv.org/pdf/2406.16846v1 | 2024-06-24T17:51:01Z | 2024-06-24T17:51:01Z | Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via
Data Selection | Machine learning models can fail on subgroups that are underrepresented during training. While techniques such as dataset balancing can improve performance on underperforming groups, they require access to training group annotations and can end up removing large portions of the dataset. In this paper, we introduce Data Debiasing with Datamodels (D3M), a debiasing approach which isolates and removes specific training examples that drive the model's failures on minority groups. Our approach enables us to efficiently train debiased classifiers while removing only a small number of examples, and does not require training group annotations or additional hyperparameter tuning. | [
"['Saachi Jain' 'Kimia Hamidieh' 'Kristian Georgiev' 'Andrew Ilyas'\n 'Marzyeh Ghassemi' 'Aleksander Madry']"
] |
null | null | 2406.16853 | null | null | http://arxiv.org/pdf/2406.16853v1 | 2024-06-24T17:58:13Z | 2024-06-24T17:58:13Z | GeoMFormer: A General Architecture for Geometric Molecular
Representation Learning | Molecular modeling, a central topic in quantum mechanics, aims to accurately calculate the properties and simulate the behaviors of molecular systems. The molecular model is governed by physical laws, which impose geometric constraints such as invariance and equivariance to coordinate rotation and translation. While numerous deep learning approaches have been developed to learn molecular representations under these constraints, most of them are built upon heuristic and costly modules. We argue that there is a strong need for a general and flexible framework for learning both invariant and equivariant features. In this work, we introduce a novel Transformer-based molecular model called GeoMFormer to achieve this goal. Using the standard Transformer modules, two separate streams are developed to maintain and learn invariant and equivariant representations. Carefully designed cross-attention modules bridge the two streams, allowing information fusion and enhancing geometric modeling in each stream. As a general and flexible architecture, we show that many previous architectures can be viewed as special instantiations of GeoMFormer. Extensive experiments are conducted to demonstrate the power of GeoMFormer. All empirical results show that GeoMFormer achieves strong performance on both invariant and equivariant tasks of different types and scales. Code and models will be made publicly available at https://github.com/c-tl/GeoMFormer. | [
"['Tianlang Chen' 'Shengjie Luo' 'Di He' 'Shuxin Zheng' 'Tie-Yan Liu'\n 'Liwei Wang']"
] |
null | null | 2406.16858 | null | null | http://arxiv.org/pdf/2406.16858v2 | 2024-06-30T15:03:25Z | 2024-06-24T17:59:11Z | EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees | Inference with modern Large Language Models (LLMs) is expensive and time-consuming, and speculative sampling has proven to be an effective solution. Most speculative sampling methods such as EAGLE use a static draft tree, implicitly assuming that the acceptance rate of draft tokens depends only on their position. Interestingly, we found that the acceptance rate of draft tokens is also context-dependent. In this paper, building upon EAGLE, we propose EAGLE-2, which introduces a new technique of context-aware dynamic draft tree into drafting modeling. This improvement leverages the fact that the draft model of EAGLE is well-calibrated: the confidence scores from the draft model approximate acceptance rates with small errors. We conducted extensive evaluations on three series of LLMs and six tasks, with EAGLE-2 achieving speedup ratios 3.05x-4.26x, which is 20%-40% faster than EAGLE-1. EAGLE-2 also ensures that the distribution of the generated text remains unchanged, making it a lossless acceleration algorithm. | [
"['Yuhui Li' 'Fangyun Wei' 'Chao Zhang' 'Hongyang Zhang']"
] |
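The EAGLE-2 entry above expands a draft tree dynamically, using the draft model's confidence as a proxy for token acceptance. The sketch below illustrates that idea with a toy stand-in draft model; the expansion budget, branching factor, and scoring rule are assumptions for illustration and do not reproduce the EAGLE-2 implementation.

```python
import heapq
import random

def toy_draft_topk(prefix, k=3):
    """Stand-in draft model: returns k (token, confidence) pairs for a prefix."""
    random.seed(hash(tuple(prefix)) % (2**32))
    probs = sorted((random.random() for _ in range(k)), reverse=True)
    total = sum(probs)
    return [(f"tok{i}", p / total) for i, p in enumerate(probs)]

def grow_draft_tree(root_prefix, budget=8, branch=3):
    """Greedily expand the draft nodes with the highest path confidence.

    Path confidence = product of per-token draft confidences, used here as a
    proxy for the probability that the target model accepts the whole path.
    """
    heap = [(-1.0, list(root_prefix))]  # min-heap on negative confidence
    accepted_paths = []
    while heap and len(accepted_paths) < budget:
        neg_conf, path = heapq.heappop(heap)
        accepted_paths.append((path, -neg_conf))
        for token, p in toy_draft_topk(path, k=branch):
            heapq.heappush(heap, (neg_conf * p, path + [token]))
    return accepted_paths

for path, conf in grow_draft_tree(["<prompt>"], budget=6):
    print(f"{conf:.3f}", " ".join(path))
```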
null | null | 2406.16873 | null | null | http://arxiv.org/pdf/2406.16873v1 | 2024-03-29T18:31:50Z | 2024-03-29T18:31:50Z | A Survey of Machine Learning Techniques for Improving Global Navigation
Satellite Systems | Global Navigation Satellite Systems (GNSS)-based positioning plays a crucial role in various applications, including navigation, transportation, logistics, mapping, and emergency services. Traditional GNSS positioning methods are model-based and they utilize satellite geometry and the known properties of satellite signals. However, model-based methods have limitations in challenging environments and often lack adaptability to uncertain noise models. This paper highlights recent advances in Machine Learning (ML) and its potential to address these limitations. It covers a broad range of ML methods, including supervised learning, unsupervised learning, deep learning, and hybrid approaches. The survey provides insights into positioning applications related to GNSS such as signal analysis, anomaly detection, multi-sensor integration, prediction, and accuracy enhancement using ML. It discusses the strengths, limitations, and challenges of current ML-based approaches for GNSS positioning, providing a comprehensive overview of the field. | [
"['Adyasha Mohanty' 'Grace Gao']"
] |
null | null | 2406.16886 | null | null | http://arxiv.org/pdf/2406.16886v1 | 2024-04-25T10:13:18Z | 2024-04-25T10:13:18Z | Sensor Data Augmentation from Skeleton Pose Sequences for Improving
Human Activity Recognition | The proliferation of deep learning has significantly advanced various fields, yet Human Activity Recognition (HAR) has not fully capitalized on these developments, primarily due to the scarcity of labeled datasets. Despite the integration of advanced Inertial Measurement Units (IMUs) in ubiquitous wearable devices like smartwatches and fitness trackers, which offer self-labeled activity data from users, the volume of labeled data remains insufficient compared to domains where deep learning has achieved remarkable success. Addressing this gap, in this paper, we propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model that generates sensor data directly from 3D skeleton pose sequences. Our method simultaneously trains the pose-to-sensor network and a human activity classifier, optimizing both data reconstruction and activity recognition. Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset. Experimental results demonstrate the superiority of our framework with significant performance improvements over baseline methods. | [
"['Parham Zolfaghari' 'Vitor Fortes Rey' 'Lala Ray' 'Hyun Kim' 'Sungho Suh'\n 'Paul Lukowicz']"
] |
null | null | 2406.16890 | null | null | http://arxiv.org/pdf/2406.16890v1 | 2024-05-02T23:37:03Z | 2024-05-02T23:37:03Z | TextAge: A Curated and Diverse Text Dataset for Age Classification | Age-related language patterns play a crucial role in understanding linguistic differences and developing age-appropriate communication strategies. However, the lack of comprehensive and diverse datasets has hindered the progress of research in this area. To address this issue, we present TextAge, a curated text dataset that maps sentences to the age and age group of the producer, as well as an underage (under 13) label. TextAge covers a wide range of ages and includes both spoken and written data from various sources such as CHILDES, Meta, Poki Poems-by-kids, JUSThink, and the TV show "Survivor." The dataset undergoes extensive cleaning and preprocessing to ensure data quality and consistency. We demonstrate the utility of TextAge through two applications: Underage Detection and Generational Classification. For Underage Detection, we train a Naive Bayes classifier, fine-tuned RoBERTa, and XLNet models to differentiate between language patterns of minors and young-adults and over. For Generational Classification, the models classify language patterns into different age groups (kids, teens, twenties, etc.). The models excel at classifying the "kids" group but struggle with older age groups, particularly "fifties," "sixties," and "seventies," likely due to limited data samples and less pronounced linguistic differences. TextAge offers a valuable resource for studying age-related language patterns and developing age-sensitive language models. The dataset's diverse composition and the promising results of the classification tasks highlight its potential for various applications, such as content moderation, targeted advertising, and age-appropriate communication. Future work aims to expand the dataset further and explore advanced modeling techniques to improve performance on older age groups. | [
"['Shravan Cheekati' 'Mridul Gupta' 'Vibha Raghu' 'Pranav Raj']"
] |
null | null | 2406.16894 | null | null | http://arxiv.org/pdf/2406.16894v1 | 2024-05-15T12:01:05Z | 2024-05-15T12:01:05Z | An Initial Study of Human-Scale Blockage in sub-THz Radio Propagation
with Application to Indoor Passive Localization | This paper empirically investigates the body-induced electromagnetic (EM) effects, namely the human body blockage, by conducting indoor measurement campaigns in the unexplored sub-THz W-band (75-110 GHz) and G-band (170-260 GHz). The proposed analysis focuses on both the alterations of channel frequency response induced by body presence, fully or partially obstructing the line-of-sight (LoS) between transmitter and receiver, as well as on the channel impulse response (CIR) for selected movements of the target, i.e. crossing the LoS of the radio link. Modelling of large-scale parameters is also presented using a phantom body object. The proposed study has applications in device-free radio localization and radio frequency (RF) sensing scenarios where the EM radiation or environmental radio signals are collected and processed to detect and locate people without requiring them to wear any electronic devices. Although preliminary, the study reveals that discrimination of the blockage micro-movements is possible, achieving higher precision compared to classical RF sensing and localization using cm-scale wavelengths (2.4-6GHz bands). | [
"['F. Paonessa' 'G. Virone' 'S. Kianoush' 'A. Nordio' 'S. Savazzi']"
] |
null | null | 2406.16895 | null | null | http://arxiv.org/pdf/2406.16895v1 | 2024-05-15T13:51:02Z | 2024-05-15T13:51:02Z | Coronary Artery Disease Classification Using One-dimensional
Convolutional Neural Network | Coronary Artery Disease (CAD) remains a major global cause of death, necessitating innovative solutions. Addressing the critical importance of early CAD detection and its impact on the mortality rate, we investigate the potential of one-dimensional convolutional neural networks (1D-CNN) to enhance detection accuracy and reduce network complexity. This study goes beyond traditional diagnostic methodologies, leveraging the remarkable ability of 1D-CNN to interpret complex patterns within Electrocardiogram (ECG) signals without depending on feature extraction techniques. We explore the impact of varying sample lengths on model performance and conduct experiments involving layer reduction. The ECG data employed were obtained from the PhysioNet databases, namely the MIMIC III and Fantasia datasets, with respective sampling frequencies of 125 Hz and 250 Hz. The highest accuracy for unseen data was obtained with a sample length of 250. These initial findings demonstrate the potential of 1D-CNNs in CAD diagnosis using ECG signals and highlight the sample size's role in achieving high accuracy. | [
"['Atitaya Phoemsuk' 'Vahid Abolghasemi']"
] |
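The entry above feeds raw ECG samples directly into a 1D-CNN. A minimal PyTorch sketch of such a network follows; the layer sizes, the 250-sample window, and the binary output are illustrative assumptions rather than the architecture reported in the paper.

```python
import torch
from torch import nn

class ECG1DCNN(nn.Module):
    """Small 1D-CNN for raw single-lead ECG windows (binary CAD vs. normal)."""

    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # works for any input length, e.g. 250 samples
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = ECG1DCNN()
dummy = torch.randn(4, 1, 250)  # 4 windows of 250 samples each
print(model(dummy).shape)       # torch.Size([4, 2])
```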
null | null | 2406.16896 | null | null | http://arxiv.org/pdf/2406.16896v1 | 2024-05-15T18:53:05Z | 2024-05-15T18:53:05Z | f-GAN: A frequency-domain-constrained generative adversarial network for
PPG to ECG synthesis | Electrocardiograms (ECGs) and photoplethysmograms (PPGs) are generally used to monitor an individual's cardiovascular health. In clinical settings, ECGs and fingertip PPGs are the main signals used for assessing cardiovascular health, but the equipment necessary for their collection precludes their use in daily monitoring. Although PPGs obtained from wrist-worn devices are susceptible to noise due to motion, they have been widely used to continuously monitor cardiovascular health because of their convenience. Therefore, we would like to combine the ease with which PPGs can be collected with the information that ECGs provide about cardiovascular health by developing models to synthesize ECG signals from paired PPG signals. We tackled this problem using generative adversarial networks (GANs) and found that models trained using the original GAN formulations can be successfully used to synthesize ECG signals from which heart rate can be extracted using standard signal processing pipelines. Incorporating a frequency-domain constraint to model training improved the stability of model performance and also the performance on heart rate estimation. | [
"['Nathan C. L. Kong' 'Dae Lee' 'Huyen Do' 'Dae Hoon Park' 'Cong Xu'\n 'Hongda Mao' 'Jonathan Chung']"
] |
null | null | 2406.16900 | null | null | http://arxiv.org/pdf/2406.16900v1 | 2024-05-30T10:19:21Z | 2024-05-30T10:19:21Z | Utilizing Weak-to-Strong Consistency for Semi-Supervised Glomeruli
Segmentation | Accurate segmentation of glomerulus instances attains high clinical significance in the automated analysis of renal biopsies to aid in diagnosing and monitoring kidney disease. Analyzing real-world histopathology images often encompasses inter-observer variability and requires a labor-intensive process of data annotation. Therefore, conventional supervised learning approaches generally achieve sub-optimal performance when applied to external datasets. Considering these challenges, we present a semi-supervised learning approach for glomeruli segmentation based on the weak-to-strong consistency framework validated on multiple real-world datasets. Our experimental results on 3 independent datasets indicate superior performance of our approach as compared with existing supervised baseline models such as U-Net and SegFormer. | [
"['Irina Zhang' 'Jim Denholm' 'Azam Hamidinekoo' 'Oskar Ålund'\n 'Christopher Bagnall' 'Joana Palés Huix' 'Michal Sulikowski'\n 'Ortensia Vito' 'Arthur Lewis' 'Robert Unwin' 'Magnus Soderberg'\n 'Nikolay Burlutskiy' 'Talha Qaiser']"
] |
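The weak-to-strong consistency framework cited above (in the spirit of FixMatch-style training) pseudo-labels a weakly augmented view and enforces agreement on a strongly augmented view of the same unlabeled image. Below is a sketch of that unsupervised loss for pixel-wise segmentation, with a placeholder confidence threshold and toy model not taken from the paper.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(model, weak_img, strong_img, conf_thresh=0.95):
    """Unsupervised consistency term for semi-supervised segmentation.

    weak_img / strong_img: two augmented views of the same unlabeled image,
    shaped (batch, channels, H, W). The weak view provides pseudo-labels.
    """
    with torch.no_grad():
        probs_w = torch.softmax(model(weak_img), dim=1)   # (B, C, H, W)
        conf, pseudo = probs_w.max(dim=1)                 # (B, H, W)
    logits_s = model(strong_img)
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")  # (B, H, W)
    mask = (conf >= conf_thresh).float()                  # keep confident pixels only
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

# Example with a toy 2-class segmentation head
net = torch.nn.Conv2d(3, 2, kernel_size=1)
xw, xs = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(weak_to_strong_loss(net, xw, xs).item())
```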
null | null | 2406.16901 | null | null | http://arxiv.org/pdf/2406.16901v2 | 2024-06-26T08:54:40Z | 2024-05-31T15:17:12Z | ECGrecover: a Deep Learning Approach for Electrocardiogram Signal
Completion | In this work, we address the challenge of reconstructing the complete 12-lead ECG signal from incomplete parts of it. We focus on two main scenarios: (i) reconstructing missing signal segments within an ECG lead and (ii) recovering missing leads from a single lead. We propose a model with a U-Net architecture trained on a novel objective function to address the reconstruction problem. This function incorporates both spatial and temporal aspects of the ECG by combining the distance in amplitude between the reconstructed and real signals with the signal trend. Through comprehensive assessments using both a real-life dataset and a publicly accessible one, we demonstrate that the proposed approach consistently outperforms state-of-the-art methods based on generative adversarial networks and a CopyPaste strategy. Our proposed model demonstrates superior performance in standard distortion metrics and preserves critical ECG characteristics, particularly the P, Q, R, S, and T wave coordinates. Two emerging clinical applications emphasize the relevance of our work. The first is the increasing need to digitize paper-stored ECGs for utilization in AI-based applications (automatic annotation and risk-quantification), which are often limited to complete 10-second digital ECG recordings. The second is the widespread use of wearable devices that record ECGs but typically capture only a small subset of the 12 standard leads. In both cases, a non-negligible amount of information is lost or not recorded, which our approach aims to recover to overcome these limitations. | [
"['Alex Lence' 'Ahmad Fall' 'Federica Granese' 'Blaise Hanczar'\n 'Joe-Elie Salem' 'Jean-Daniel Zucker' 'Edi Prifti']"
] |
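The ECGrecover entry above trains a U-Net on an objective that combines an amplitude distance with the signal trend. One plausible reading, sketched below, adds a penalty on first differences; this is an interpretation for illustration only, not the exact loss from the paper.

```python
import torch

def amplitude_plus_trend_loss(x_hat, x, trend_weight=0.5):
    """MSE on amplitudes plus MSE on first differences ("trend") of the leads.

    x_hat, x: reconstructed and reference ECG, shaped (batch, leads, samples).
    """
    amp = torch.mean((x_hat - x) ** 2)
    trend = torch.mean((torch.diff(x_hat, dim=-1) - torch.diff(x, dim=-1)) ** 2)
    return amp + trend_weight * trend

x = torch.randn(2, 12, 1000)           # 12-lead ECG, 1000 samples
x_hat = x + 0.1 * torch.randn_like(x)
print(amplitude_plus_trend_loss(x_hat, x).item())
```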
null | null | 2406.16902 | null | null | http://arxiv.org/pdf/2406.16902v1 | 2024-05-31T18:51:10Z | 2024-05-31T18:51:10Z | Learning Exemplar Representations in Single-Trial EEG Category Decoding | Within neuroimaging studies it is a common practice to perform repetitions of trials in an experiment when working with a noisy class of data acquisition system, such as electroencephalography (EEG) or magnetoencephalography (MEG). While this approach can be useful in some experimental designs, it presents significant limitations for certain types of analyses, such as identifying the category of an object observed by a subject. In this study we demonstrate that when trials relating to a single object are allowed to appear in both the training and testing sets, almost any classification algorithm is capable of learning the representation of an object given only category labels. This ability to learn object representations is of particular significance as it suggests that the results of several published studies which predict the category of observed objects from EEG signals may be affected by a subtle form of leakage which has inflated their reported accuracies. We demonstrate the ability of both simple classification algorithms and sophisticated deep learning models to learn object representations given only category labels. We do this using two datasets: the Kaneshiro et al. (2015) dataset and the Gifford et al. (2022) dataset. Our results raise doubts about the true generalizability of several published models and suggest that the reported performance of these models may be significantly inflated. | [
"['Jack Kilgallen' 'Barak Pearlmutter' 'Jeffery Mark Siskind']"
] |
null | null | 2406.16903 | null | null | http://arxiv.org/pdf/2406.16903v1 | 2024-06-02T17:47:57Z | 2024-06-02T17:47:57Z | Towards a copilot in BIM authoring tool using a large language
model-based agent for intelligent human-machine interaction | Facing increasingly complex BIM authoring software and the accompanying expensive learning costs, designers often seek to interact with the software in a more intelligent and lightweight manner. They aim to automate modeling workflows, avoiding obstacles and difficulties caused by software usage, thereby focusing on the design process itself. To address this issue, we proposed an LLM-based autonomous agent framework that can function as a copilot in the BIM authoring tool, answering software usage questions, understanding the user's design intentions from natural language, and autonomously executing modeling tasks by invoking the appropriate tools. In a case study based on the BIM authoring software Vectorworks, we implemented a software prototype to integrate the proposed framework seamlessly into the BIM authoring scenario. We evaluated the planning and reasoning capabilities of different LLMs within this framework when faced with complex instructions. Our work demonstrates the significant potential of LLM-based agents in design automation and intelligent interaction. | [
"['Changyu Du' 'Stavros Nousias' 'André Borrmann']"
] |
null | null | 2406.16905 | null | null | http://arxiv.org/pdf/2406.16905v1 | 2024-06-03T15:58:26Z | 2024-06-03T15:58:26Z | Optimising Random Forest Machine Learning Algorithms for User VR
Experience Prediction Based on Iterative Local Search-Sparrow Search
Algorithm | In this paper, an improved method for VR user experience prediction is investigated by introducing a sparrow search algorithm and a random forest algorithm improved by an iterative local search-optimised sparrow search algorithm. The study firstly conducted a statistical analysis of the data, and then trained and tested using the traditional random forest model, the random forest model improved by the sparrow search algorithm, and the random forest algorithm improved based on the iterative local search-sparrow search algorithm, respectively. The results show that the traditional random forest model has a prediction accuracy of 93% on the training set but only 73.3% on the test set, which is poor in generalisation; whereas the model improved by the sparrow search algorithm has a prediction accuracy of 94% on the test set, which is improved compared with the traditional model. What is more noteworthy is that the improved model based on the iterative local search-sparrow search algorithm achieves 100% accuracy on both the training and test sets, which is significantly better than the other two methods. These research results provide new ideas and methods for VR user experience prediction, especially the improved model based on the iterative local search-sparrow search algorithm performs well and is able to more accurately predict and classify the user's VR experience. In the future, the application of this method in other fields can be further explored, and its effectiveness can be verified through real cases to promote the development of AI technology in the field of user experience. | [
"['Xirui Tang' 'Feiyang Li' 'Zinan Cao' 'Qixuan Yu' 'Yulu Gong']"
] |
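The VR-experience study above tunes a random forest with a sparrow search algorithm refined by iterative local search. The sketch below shows only the outer loop — a population of hyperparameter candidates scored by cross-validation — with a plain perturb-toward-the-best update standing in for the actual ISSA/SSA rules, which are not reproduced here; the parameter ranges and population size are also assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(n_estimators, max_depth):
    """CV accuracy of a random forest with the candidate hyperparameters."""
    clf = RandomForestClassifier(n_estimators=int(n_estimators),
                                 max_depth=int(max_depth), random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Population of (n_estimators, max_depth) candidates
pop = rng.uniform([20, 2], [200, 15], size=(6, 2))
for generation in range(5):
    scores = np.array([fitness(*p) for p in pop])
    best = pop[scores.argmax()]
    # Placeholder update: pull candidates toward the best and perturb them.
    pop = np.clip(best + rng.normal(scale=[20, 2], size=pop.shape), [20, 2], [200, 15])
    print(f"gen {generation}: best CV accuracy = {scores.max():.3f}, params = {best.round(1)}")
```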
null | null | 2406.16906 | null | null | http://arxiv.org/pdf/2406.16906v1 | 2024-06-03T16:30:19Z | 2024-06-03T16:30:19Z | REST: Efficient and Accelerated EEG Seizure Analysis through Residual
State Updates | EEG-based seizure detection models face challenges in terms of inference speed and memory efficiency, limiting their real-time implementation in clinical devices. This paper introduces a novel graph-based residual state update mechanism (REST) for real-time EEG signal analysis in applications such as epileptic seizure detection. By leveraging a combination of graph neural networks and recurrent structures, REST efficiently captures both non-Euclidean geometry and temporal dependencies within EEG data. Our model demonstrates high accuracy in both seizure detection and classification tasks. Notably, REST achieves a remarkable 9-fold acceleration in inference speed compared to state-of-the-art models, while simultaneously demanding substantially less memory than the smallest model employed for this task. These attributes position REST as a promising candidate for real-time implementation in clinical devices, such as Responsive Neurostimulation or seizure alert systems. | [
"['Arshia Afzal' 'Grigorios Chrysos' 'Volkan Cevher' 'Mahsa Shoaran']"
] |
null | null | 2406.16907 | null | null | http://arxiv.org/pdf/2406.16907v1 | 2024-06-04T01:06:41Z | 2024-06-04T01:06:41Z | RayProNet: A Neural Point Field Framework for Radio Propagation Modeling
in 3D Environments | The radio wave propagation channel is central to the performance of wireless communication systems. In this paper, we introduce a novel machine learning-empowered methodology for wireless channel modeling. The key ingredients include a point-cloud-based neural network and a Spherical Harmonics encoder with light probes. Our approach offers several significant advantages, including the flexibility to adjust antenna radiation patterns and transmitter/receiver locations, the capability to predict radio power maps, and the scalability of large-scale wireless scenes. As a result, it lays the groundwork for an end-to-end pipeline for network planning and deployment optimization. The proposed work is validated in various outdoor and indoor radio environments. | [
"['Ge Cao' 'Zhen Peng']"
] |
null | null | 2406.16908 | null | null | http://arxiv.org/pdf/2406.16908v1 | 2024-06-04T10:53:56Z | 2024-06-04T10:53:56Z | Using Explainable AI for EEG-based Reduced Montage Neonatal Seizure
Detection | The neonatal period is the most vulnerable time for the development of seizures. Seizures in the immature brain lead to detrimental consequences, therefore require early diagnosis. The gold-standard for neonatal seizure detection currently relies on continuous video-EEG monitoring; which involves recording multi-channel electroencephalogram (EEG) alongside real-time video monitoring within a neonatal intensive care unit (NICU). However, video-EEG monitoring technology requires clinical expertise and is often limited to technologically advanced and resourceful settings. Cost-effective new techniques could help the medical fraternity make an accurate diagnosis and advocate treatment without delay. In this work, a novel explainable deep learning model to automate the neonatal seizure detection process with a reduced EEG montage is proposed, which employs convolutional nets, graph attention layers, and fully connected layers. Beyond its ability to detect seizures in real-time with a reduced montage, this model offers the unique advantage of real-time interpretability. By evaluating the performance on the Zenodo dataset with 10-fold cross-validation, the presented model achieves an absolute improvement of 8.31% and 42.86% in area under curve (AUC) and recall, respectively. | [
"['Dinuka Sandun Udayantha' 'Kavindu Weerasinghe' 'Nima Wickramasinghe'\n 'Akila Abeyratne' 'Kithmin Wickremasinghe' 'Jithangi Wanigasinghe'\n 'Anjula De Silva' 'Chamira Edussooriya']"
] |
null | null | 2406.16909 | null | null | http://arxiv.org/pdf/2406.16909v1 | 2024-06-05T13:59:13Z | 2024-06-05T13:59:13Z | Enhancing Computational Efficiency of Motor Imagery BCI Classification
with Block-Toeplitz Augmented Covariance Matrices and Siegel Metric | Electroencephalographic signals are represented as multidimensional datasets. We introduce an enhancement to the augmented covariance method (ACM), exploiting more thoroughly its mathematical properties, in order to improve motor imagery classification. Standard ACM emerges as a combination of phase space reconstruction of dynamical systems and of Riemannian geometry. Indeed, it is based on the construction of a Symmetric Positive Definite matrix to improve classification. But this matrix also has a Block-Toeplitz structure that was previously ignored. This work treats such matrices in the real manifold to which they belong: the set of Block-Toeplitz SPD matrices. After some manipulation, this set can be seen as the product of an SPD manifold and a Siegel Disk Space. The proposed methodology was tested using the MOABB framework with a within-session evaluation procedure. It achieves a similar classification performance to ACM, which is typically better than -- or at worst comparable to -- state-of-the-art methods. It also substantially improves the computational efficiency over ACM, making it even more suitable for real-time experiments. | [
"['Igor Carrara' 'Theodore Papadopoulo']"
] |
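For readers unfamiliar with the augmented covariance method discussed above, the snippet below builds the time-delay-embedded ("augmented") covariance of a single EEG epoch, whose lag blocks are approximately Toeplitz-structured; the embedding order and lag are illustrative choices, not the paper's settings.

```python
import numpy as np

def augmented_covariance(epoch, order=3, lag=5):
    """Covariance of the time-delay embedding of one EEG epoch.

    epoch: array (n_channels, n_samples). The embedded signal stacks `order`
    lagged copies, so the result is an (order*n_channels)^2 SPD matrix whose
    channel blocks depend (approximately) only on the lag difference,
    i.e. it is close to block-Toeplitz.
    """
    n_ch, n_s = epoch.shape
    usable = n_s - (order - 1) * lag
    stacked = np.vstack([epoch[:, k * lag: k * lag + usable] for k in range(order)])
    stacked = stacked - stacked.mean(axis=1, keepdims=True)
    return stacked @ stacked.T / (usable - 1)

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 512))   # 8 channels, 512 samples
C = augmented_covariance(epoch)
print(C.shape)                          # (24, 24): 3 lag-blocks of 8x8
```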
null | null | 2406.16910 | null | null | http://arxiv.org/pdf/2406.16910v1 | 2024-06-05T16:42:23Z | 2024-06-05T16:42:23Z | Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping
Contrastive Learning | Decoding images from non-invasive electroencephalographic (EEG) signals has been a grand challenge in understanding how the human brain processes visual information in real-world scenarios. To cope with the issues of signal-to-noise ratio and nonstationarity, this paper introduces a MUltimodal Similarity-keeping contrastivE learning (MUSE) framework for zero-shot EEG-based image classification. We develop a series of multivariate time-series encoders tailored for EEG signals and assess the efficacy of regularized contrastive EEG-Image pretraining using an extensive visual EEG dataset. Our method achieves state-of-the-art performance, with a top-1 accuracy of 19.3% and a top-5 accuracy of 48.8% in 200-way zero-shot image classification. Furthermore, we visualize neural patterns via model interpretation, shedding light on the visual processing dynamics in the human brain. The code repository for this work is available at: https://github.com/ChiShengChen/MUSE_EEG. | [
"['Chi-Sheng Chen' 'Chun-Shu Wei']"
] |
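The zero-shot pipeline above rests on contrastive alignment of EEG and image embeddings. The snippet below shows the standard symmetric InfoNCE term that such frameworks build on; MUSE's additional similarity-keeping regularization is not reproduced here, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def symmetric_info_nce(eeg_emb, img_emb, temperature=0.07):
    """CLIP-style contrastive loss for paired EEG/image embeddings (B, D)."""
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    logits = eeg @ img.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(eeg.size(0), device=eeg.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

eeg_z, img_z = torch.randn(16, 128), torch.randn(16, 128)
print(symmetric_info_nce(eeg_z, img_z).item())
```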
null | null | 2406.16911 | null | null | http://arxiv.org/pdf/2406.16911v1 | 2024-06-06T10:07:19Z | 2024-06-06T10:07:19Z | Evaluating the Influence of Temporal Context on Automatic Mouse Sleep
Staging through the Application of Human Models | In human sleep staging models, augmenting the temporal context of the input to the range of tens of minutes has recently demonstrated performance improvement. In contrast, the temporal context of mouse sleep staging models is typically in the order of tens of seconds. While long-term time patterns are less clear in mouse sleep, increasing the temporal context further than that of the current mouse sleep staging models might still result in a performance increase, given that the current methods only model very short term patterns. In this study, we examine the influence of increasing the temporal context in mouse sleep staging up to 15 minutes in three mouse cohorts using two recent and high-performing human sleep staging models that account for long-term dependencies. These are compared to two prominent mouse sleep staging models that use a local context of 12 s and 20 s, respectively. An increase in context up to 28 s is observed to have a positive impact on sleep stage classification performance, especially in REM sleep. However, the impact is limited for longer context windows. One of the human sleep scoring models, L-SeqSleepNet, outperforms both mouse models in all cohorts. This suggests that mouse sleep staging can benefit from more temporal context than currently used. | [
"['Javier García Ciudad' 'Morten Mørup' 'Birgitte Rahbek Kornum'\n 'Alexander Neergaard Zahid']"
] |
null | null | 2406.16913 | null | null | http://arxiv.org/pdf/2406.16913v1 | 2024-06-07T12:01:37Z | 2024-06-07T12:01:37Z | L-SFAN: Lightweight Spatially-focused Attention Network for Pain
Behavior Detection | Chronic Low Back Pain (CLBP) afflicts millions globally, significantly impacting individuals' well-being and imposing economic burdens on healthcare systems. While artificial intelligence (AI) and deep learning offer promising avenues for analyzing pain-related behaviors to improve rehabilitation strategies, current models, including convolutional neural networks (CNNs), recurrent neural networks, and graph-based neural networks, have limitations. These approaches often focus singularly on the temporal dimension or require complex architectures to exploit spatial interrelationships within multivariate time series data. To address these limitations, we introduce L-SFAN, a lightweight CNN architecture incorporating 2D filters designed to meticulously capture the spatial-temporal interplay of data from motion capture and surface electromyography sensors. Our proposed model, enhanced with an oriented global pooling layer and multi-head self-attention mechanism, prioritizes critical features to better understand CLBP and achieves competitive classification accuracy. Experimental results on the EmoPain database demonstrate that our approach not only enhances performance metrics with significantly fewer parameters but also promotes model interpretability, offering valuable insights for clinicians in managing CLBP. This advancement underscores the potential of AI in transforming healthcare practices for chronic conditions like CLBP, providing a sophisticated framework for the nuanced analysis of complex biomedical data. | [
"['Jorge Ortigoso-Narro' 'Fernando Diaz-de-Maria' 'Mohammad Mahdi Dehshibi'\n 'Ana Tajadura-Jiménez']"
] |
null | null | 2406.16915 | null | null | http://arxiv.org/pdf/2406.16915v1 | 2024-06-07T18:00:00Z | 2024-06-07T18:00:00Z | Unlocking Telemetry Potential: Self-Supervised Learning for Continuous
Clinical Electrocardiogram Monitoring | Machine learning (ML) applied to routine patient monitoring within intensive care units (ICUs) has the potential to improve care by providing clinicians with novel insights into each patient's health and expected response to interventions. This paper applies deep learning to a large volume of unlabeled electrocardiogram (ECG) telemetry signals, which are commonly used for continuous patient monitoring in hospitals but have important differences from the standard, single time-point 12-lead ECG used in many prior machine learning studies. We applied self-supervised learning to pretrain a spectrum of deep networks on approximately 147,000 hours of ECG telemetry data. Our approach leverages this dataset to train models that significantly improve performance on four distinct downstream tasks compared with direct supervised learning using labeled data. These pretrained models enable medically useful predictions and estimates in smaller patient cohorts that are typically limited by the scarcity of labels. Notably, we demonstrate that our pretrained networks can continuously annotate ECG telemetry signals, thereby providing monitoring capabilities that are often unavailable due to the requirement for specialized expertise and time-consuming professional annotations. | [
"['Thomas Kite' 'Uzair Tahamid Siam' 'Brian Ayers' 'Nicholas Houstis'\n 'Aaron D Aguirre']"
] |
null | null | 2406.16926 | null | null | http://arxiv.org/pdf/2406.16926v1 | 2024-06-12T07:05:53Z | 2024-06-12T07:05:53Z | Enhancing Wearable based Real-Time Glucose Monitoring via Phasic Image
Representation Learning based Deep Learning | In the U.S., over a third of adults are pre-diabetic, with 80% unaware of their status. This underlines the need for better glucose monitoring to prevent type 2 diabetes and related heart diseases. Existing wearable glucose monitors are limited by the lack of models trained on small datasets, as collecting extensive glucose data is often costly and impractical. Our study introduces a novel machine learning method using modified recurrence plots in the frequency domain to improve glucose level prediction accuracy from wearable device data, even with limited datasets. This technique combines advanced signal processing with machine learning to extract more meaningful features. We tested our method against existing models using historical data, showing that our approach surpasses the current 87% accuracy benchmark in predicting real-time interstitial glucose levels. | [
"['Yidong Zhu' 'Nadia B Aimandi' 'Mohammad Arif Ul Alam']"
] |
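The glucose-monitoring entry above builds on recurrence-plot representations of wearable sensor windows. The snippet below computes a standard thresholded recurrence plot from a 1D window; the paper's frequency-domain modification is not reproduced, and the quantile-based threshold is an assumption.

```python
import numpy as np

def recurrence_plot(signal, eps_quantile=0.2):
    """Binary recurrence plot R[i, j] = 1 if |x_i - x_j| is below a threshold."""
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    eps = np.quantile(dist, eps_quantile)    # data-driven threshold
    return (dist <= eps).astype(np.uint8)

t = np.linspace(0, 4 * np.pi, 200)
window = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
R = recurrence_plot(window)
print(R.shape, R.mean())                     # (200, 200) and fill ratio
```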
null | null | 2406.16928 | null | null | http://arxiv.org/pdf/2406.16928v1 | 2024-06-12T13:40:03Z | 2024-06-12T13:40:03Z | A Multi-Resolution Mutual Learning Network for Multi-Label ECG
Classification | Electrocardiograms (ECG), which record the electrophysiological activity of the heart, have become a crucial tool for diagnosing cardiac diseases. In recent years, the application of deep learning techniques has significantly improved the performance of ECG signal classification. Multi-resolution feature analysis, which captures and processes information at different time scales, can extract subtle changes and overall trends in ECG signals, showing unique advantages. However, common multi-resolution analysis methods based on simple feature addition or concatenation may lead to the neglect of low-resolution features, affecting model performance. To address this issue, this paper proposes the Multi-Resolution Mutual Learning Network (MRM-Net). MRM-Net includes a dual-resolution attention architecture and a feature complementary mechanism. The dual-resolution attention architecture processes high-resolution and low-resolution features in parallel. Through the attention mechanism, the high-resolution and low-resolution branches can focus on subtle waveform changes and overall rhythm patterns, enhancing the ability to capture critical features in ECG signals. Meanwhile, the feature complementary mechanism introduces mutual feature learning after each layer of the feature extractor. This allows features at different resolutions to reinforce each other, thereby reducing information loss and improving model performance and robustness. Experiments on the PTB-XL and CPSC2018 datasets demonstrate that MRM-Net significantly outperforms existing methods in multi-label ECG classification performance. The code for our framework will be publicly available at https://github.com/wxhdf/MRM. | [
"['Wei Huang' 'Ning Wang' 'Panpan Feng' 'Haiyan Wang' 'Zongmin Wang'\n 'Bing Zhou']"
] |
null | null | 2406.16932 | null | null | http://arxiv.org/abs/2406.16932v1 | 2024-06-14T22:34:13Z | 2024-06-14T22:34:13Z | Xi-Net: Transformer Based Seismic Waveform Reconstructor | Missing/erroneous data is a major problem in today's world. Collected seismic data sometimes contain gaps due to a multitude of reasons like interference and sensor malfunction. Gaps in seismic waveforms hamper further signal processing to gain valuable information. A plethora of techniques is used for data reconstruction in other domains like image, video, and audio, but translation of those methods to address seismic waveforms demands adapting them to lengthy sequence inputs, which is practically complex. Even if that is accomplished, high computational costs and inefficiency would still persist in these predominantly convolution-based reconstruction models. In this paper, we present a transformer-based deep learning model, Xi-Net, which utilizes multi-faceted time and frequency domain inputs for accurate waveform reconstruction. Xi-Net converts the input waveform to the frequency domain, employs separate encoders for the time and frequency domains, and one decoder that produces the reconstructed output waveform from the fused features. 1D shifted-window transformer blocks form the elementary units of all parts of the model. To the best of our knowledge, this is the first transformer-based deep learning model for seismic waveform reconstruction. We demonstrate this model's prowess by filling 0.5-1s random gaps in 120s waveforms, resembling the original waveform quite closely. The code and models can be found at: https://github.com/Anshuman04/waveformReconstructor. | [
"['Anshuman Gaharwar' 'Parth Parag Kulkarni' 'Joshua Dickey' 'Mubarak Shah']"
] |
null | null | 2406.16934 | null | null | http://arxiv.org/pdf/2406.16934v1 | 2024-06-16T17:53:56Z | 2024-06-16T17:53:56Z | Multi-UAV Multi-RIS QoS-Aware Aerial Communication Systems using DRL and
PSO | Recently, Unmanned Aerial Vehicles (UAVs) have attracted the attention of researchers in academia and industry for providing wireless services to ground users in diverse scenarios like festivals, large sporting events, natural and man-made disasters due to their advantages in terms of versatility and maneuverability. However, the limited resources of UAVs (e.g., energy budget and different service requirements) can pose challenges for adopting UAVs for such applications. Our system model considers a UAV swarm that navigates an area, providing wireless communication to ground users with RIS support to improve the coverage of the UAVs. In this work, we introduce an optimization model with the aim of maximizing the throughput and UAVs coverage through optimal path planning of UAVs and multi-RIS phase configurations. The formulated optimization is challenging to solve using standard linear programming techniques, limiting its applicability in real-time decision-making. Therefore, we introduce a two-step solution using deep reinforcement learning and particle swarm optimization. We conduct extensive simulations and compare our approach to two competitive solutions presented in the recent literature. Our simulation results demonstrate that our adopted approach is 20 % better than the brute-force approach and 30% better than the baseline solution in terms of QoS. | [
"['Marwan Dhuheir' 'Aiman Erbad' 'Ala Al-Fuqaha' 'Mohsen Guizani']"
] |
null | null | 2406.16938 | null | null | http://arxiv.org/pdf/2406.16938v1 | 2024-06-17T09:57:48Z | 2024-06-17T09:57:48Z | Unmixing Noise from Hawkes Process to Model Learned Physiological Events | Physiological signal analysis often involves identifying events crucial to understanding biological dynamics. Traditional methods rely on handcrafted procedures or supervised learning, presenting challenges such as expert dependence, lack of robustness, and the need for extensive labeled data. Data-driven methods like Convolutional Dictionary Learning (CDL) offer an alternative but tend to produce spurious detections. This work introduces UNHaP (Unmix Noise from Hawkes Processes), a novel approach addressing the joint learning of temporal structures in events and the removal of spurious detections. Leveraging marked Hawkes processes, UNHaP distinguishes between events of interest and spurious ones. By treating the event detection output as a mixture of structured and unstructured events, UNHaP efficiently unmixes these processes and estimates their parameters. This approach significantly enhances the understanding of event distributions while minimizing false detection rates. | [
"['Guillaume Staerman' 'Virginie Loison' 'Thomas Moreau']"
] |
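For reference on the UNHaP entry above, a plain (unmarked) Hawkes process has the standard conditional intensity

$$
\lambda(t \mid \mathcal{H}_t) \;=\; \mu \;+\; \sum_{t_i < t} \phi(t - t_i), \qquad \phi \ge 0,\quad \int_0^{\infty} \phi(s)\, ds < 1,
$$

where $\mu > 0$ is the baseline rate, $\phi$ is the excitation kernel, and the integral condition keeps the process stable. UNHaP's marked, mixture formulation that separates structured events from spurious detections is more involved and is not reproduced here.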
null | null | 2406.16943 | null | null | http://arxiv.org/abs/2406.16943v1 | 2024-06-18T12:13:43Z | 2024-06-18T12:13:43Z | EarDA: Towards Accurate and Data-Efficient Earable Activity Sensing | In the realm of smart sensing with the Internet of Things, earable devices are empowered with the capability of multi-modality sensing and intelligence of context-aware computing, leading to its wide usage in Human Activity Recognition (HAR). Nonetheless, unlike the movements captured by Inertial Measurement Unit (IMU) sensors placed on the upper or lower body, those motion signals obtained from earable devices show significant changes in amplitudes and patterns, especially in the presence of dynamic and unpredictable head movements, posing a significant challenge for activity classification. In this work, we present EarDA, an adversarial-based domain adaptation system to extract the domain-independent features across different sensor locations. Moreover, while most deep learning methods commonly rely on training with substantial amounts of labeled data to offer good accuracy, the proposed scheme can release the potential usage of publicly available smartphone-based IMU datasets. Furthermore, we explore the feasibility of applying a filter-based data processing method to mitigate the impact of head movement. EarDA, the proposed system, enables more data-efficient and accurate activity sensing. It achieves an accuracy of 88.8% under HAR task, demonstrating a significant 43% improvement over methods without domain adaptation. This clearly showcases its effectiveness in mitigating domain gaps. | [
"['Shengzhe Lyu' 'Yongliang Chen' 'Di Duan' 'Renqi Jia' 'Weitao Xu']"
] |
null | null | 2406.16947 | null | null | http://arxiv.org/pdf/2406.16947v1 | 2024-06-19T10:28:11Z | 2024-06-19T10:28:11Z | Generative Data Assimilation of Sparse Weather Station Observations at
Kilometer Scales | Data assimilation of observational data into full atmospheric states is essential for weather forecast model initialization. Recently, methods for deep generative data assimilation have been proposed which allow for using new input data without retraining the model. They could also dramatically accelerate the costly data assimilation process used in operational regional weather models. Here, in a central US testbed, we demonstrate the viability of score-based data assimilation in the context of realistically complex km-scale weather. We train an unconditional diffusion model to generate snapshots of a state-of-the-art km-scale analysis product, the High Resolution Rapid Refresh. Then, using score-based data assimilation to incorporate sparse weather station data, the model produces maps of precipitation and surface winds. The generated fields display physically plausible structures, such as gust fronts, and sensitivity tests confirm learnt physics through multivariate relationships. Preliminary skill analysis shows the approach already outperforms a naive baseline of the High-Resolution Rapid Refresh system itself. By incorporating observations from 40 weather stations, 10% lower RMSEs on left-out stations are attained. Despite some lingering imperfections such as insufficiently disperse ensemble DA estimates, we find the results overall an encouraging proof of concept, and the first at km-scale. It is a ripe time to explore extensions that combine increasingly ambitious regional state generators with an increasing set of in situ, ground-based, and satellite remote sensing data streams. | [
"['Peter Manshausen' 'Yair Cohen' 'Jaideep Pathak' 'Mike Pritchard'\n 'Piyush Garg' 'Morteza Mardani' 'Karthik Kashinath' 'Simon Byrne'\n 'Noah Brenowitz']"
] |