categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.13777
null
null
http://arxiv.org/pdf/2406.13777v1
2024-06-19T19:02:44Z
2024-06-19T19:02:44Z
Game of LLMs: Discovering Structural Constructs in Activities using Large Language Models
Human Activity Recognition is a time-series analysis problem. A popular analysis procedure used by the community assumes an optimal window length to design recognition pipelines. However, in the scenario of smart homes, where activities are of varying duration and frequency, the assumption of a constant-sized window does not hold. Additionally, previous works have shown these activities to be made up of building blocks. We focus on identifying these underlying building blocks--structural constructs--with the use of large language models. Identifying these constructs can be especially beneficial in recognizing short-duration and infrequent activities. We also propose the development of an activity recognition procedure that uses these building blocks to model activities, thus helping the downstream task of activity monitoring in smart homes.
[ "['Shruthi K. Hiremath' 'Thomas Ploetz']" ]
null
null
2406.13778
null
null
http://arxiv.org/pdf/2406.13778v1
2024-06-19T19:04:51Z
2024-06-19T19:04:51Z
Benchmarking Unsupervised Online IDS for Masquerade Attacks in CAN
Vehicular controller area networks (CANs) are susceptible to masquerade attacks by malicious adversaries. In masquerade attacks, adversaries silence a targeted ID and then send malicious frames with forged content at the expected timing of benign frames. Because masquerade attacks can seriously harm vehicle functionality and are the stealthiest attacks to detect in CAN, recent work has devoted attention to comparing frameworks for detecting them. However, most existing works report offline evaluations using CAN logs collected from simulations that do not comply with the domain's real-time constraints. Here we advance the state of the art by introducing a benchmark study of four different non-deep-learning-based unsupervised online intrusion detection systems (IDS) for masquerade attacks in CAN. Our approach differs from existing benchmarks in that we analyze the effect of controlling streaming data conditions in a sliding window setting. In doing so, we replay realistic masquerade attacks from the ROAD dataset. We show that although the benchmarked IDS are not effective at detecting every attack type, the method that relies on detecting changes in the hierarchical structure of clusters of time series produces the best results, at the expense of higher computational overhead. We discuss limitations, open challenges, and how the benchmarked methods can be used for practical unsupervised online CAN IDS for masquerade attacks.
[ "['Pablo Moriano' 'Steven C. Hespeler' 'Mingyan Li' 'Robert A. Bridges']" ]
null
null
2406.13781
null
null
http://arxiv.org/pdf/2406.13781v1
2024-06-19T19:11:22Z
2024-06-19T19:11:22Z
A Primal-Dual Framework for Transformers and Neural Networks
Self-attention is key to the remarkable success of transformers in sequence modeling tasks including many applications in natural language processing and computer vision. Like neural network layers, these attention mechanisms are often developed by heuristics and experience. To provide a principled framework for constructing attention layers in transformers, we show that the self-attention corresponds to the support vector expansion derived from a support vector regression problem, whose primal formulation has the form of a neural network layer. Using our framework, we derive popular attention layers used in practice and propose two new attentions: 1) the Batch Normalized Attention (Attention-BN) derived from the batch normalization layer and 2) the Attention with Scaled Head (Attention-SH) derived from using less training data to fit the SVR model. We empirically demonstrate the advantages of the Attention-BN and Attention-SH in reducing head redundancy, increasing the model's accuracy, and improving the model's efficiency in a variety of practical applications including image and time-series classification.
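For context, the standard (softmax) self-attention layer that this abstract reinterprets as a support vector expansion is, in textbook form (not a formula taken from the paper itself),

    \mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d}}\right) V,

where $Q$, $K$, $V$ are the query, key, and value matrices and $d$ is the key dimension.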
[ "['Tan M. Nguyen' 'Tam Nguyen' 'Nhat Ho' 'Andrea L. Bertozzi'\n 'Richard G. Baraniuk' 'Stanley J. Osher']" ]
null
null
2406.13791
null
null
http://arxiv.org/pdf/2406.13791v2
2024-06-29T15:29:56Z
2024-06-19T19:35:14Z
IoT-Based Preventive Mental Health Using Knowledge Graphs and Standards for Better Well-Being
The Sustainable Development Goals (SDGs) give the UN a road map for development, with Agenda 2030 as a target. SDG3, "Good Health and Well-Being", ensures healthy lives and promotes well-being for all ages. Digital technologies can support SDG3. Burnout and even depression could be reduced by encouraging better preventive health. Because patients often lack the knowledge and focus to take care of their own health, it is necessary to help them before it is too late. New trends such as positive psychology and mindfulness are highly encouraged in the USA. A Digital Twin (DT) can help with the continuous monitoring of emotion using physiological signals (e.g., collected via wearables). Digital twins facilitate monitoring and provide constant health insight to improve quality of life and well-being with better personalization. Key healthcare DT challenges are standardizing data formats, communication protocols, and data exchange mechanisms. To address those data integration and knowledge challenges, we designed the Mental Health Knowledge Graph (ontology and dataset) to boost mental health. The Knowledge Graph (KG) acquires knowledge from ontology-based mental health projects classified within the LOV4IoT ontology catalog (Emotion, Depression, and Mental Health). Furthermore, the KG is mapped to standards (e.g., ontologies) when possible. Standards from ETSI SmartM2M, ITU/WHO, ISO, W3C, NIST, and IEEE are relevant to mental health.
[ "['Amelie Gyrard' 'Seyedali Mohammadi' 'Manas Gaur' 'Antonio Kung']" ]
null
null
2406.13805
null
null
http://arxiv.org/pdf/2406.13805v1
2024-06-19T20:13:42Z
2024-06-19T20:13:42Z
WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia
Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this work, we conduct a comprehensive evaluation of LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a dataset widely regarded as a high-quality pre-training resource for most LLMs. Specifically, we introduce WikiContradict, a benchmark consisting of 253 high-quality, human-annotated instances designed to assess LLM performance when augmented with retrieved passages containing real-world knowledge conflicts. We benchmark a diverse range of both closed and open-source LLMs under different QA scenarios, including RAG with a single passage, and RAG with 2 contradictory passages. Through rigorous human evaluations on a subset of WikiContradict instances involving 5 LLMs and over 3,500 judgements, we shed light on the behaviour and limitations of these models. For instance, when provided with two passages containing contradictory facts, all models struggle to generate answers that accurately reflect the conflicting nature of the context, especially for implicit conflicts requiring reasoning. Since human evaluation is costly, we also introduce an automated model that estimates LLM performance using a strong open-source language model, achieving an F-score of 0.8. Using this automated metric, we evaluate more than 1,500 answers from seven LLMs across all WikiContradict instances. To facilitate future work, we release WikiContradict on: https://ibm.biz/wikicontradict.
[ "['Yufang Hou' 'Alessandra Pascale' 'Javier Carnerero-Cano'\n 'Tigran Tchrakian' 'Radu Marinescu' 'Elizabeth Daly' 'Inkit Padhi'\n 'Prasanna Sattigeri']" ]
null
null
2406.13808
null
null
http://arxiv.org/pdf/2406.13808v3
2024-06-27T06:37:21Z
2024-06-19T20:14:39Z
Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning?
In this work, we present empirical results regarding the feasibility of using offline large language models (LLMs) in the context of electronic design automation (EDA). The goal is to investigate and evaluate the ability of a contemporary language model (Llama-2-7B) to function as a microelectronic Q & A expert, as well as its reasoning and generation capabilities in solving microelectronic-related problems. Llama-2-7B was tested across a variety of adaptation methods, including a novel low-rank knowledge distillation (LoRA-KD) scheme that we introduce. Our experiments produce both qualitative and quantitative results.
[ "['Nirjhor Rouf' 'Fin Amin' 'Paul D. Franzon']" ]
null
null
2406.13834
null
null
http://arxiv.org/pdf/2406.13834v1
2024-06-19T20:55:12Z
2024-06-19T20:55:12Z
Optimizing Wireless Discontinuous Reception via MAC Signaling Learning
We present a Reinforcement Learning (RL) approach to the problem of controlling the Discontinuous Reception (DRX) policy from a Base Transceiver Station (BTS) in a cellular network. We do so by optimally timing the transmission of fast Layer-2 signaling messages (a.k.a. Medium Access Control (MAC) Control Elements (CEs), as specified in 5G New Radio). Unlike more conventional approaches to DRX optimization, which rely on fine-tuning the values of DRX timers, we assess the gains that can be obtained solely by means of this MAC CE signaling. For the simulation part, we concentrate on traffic types typically encountered in Extended Reality (XR) applications, where the needs for battery drain minimization and overheating mitigation are particularly pressing. Both 3GPP 5G New Radio (5G NR) compliant and non-compliant ("beyond 5G") MAC CEs are considered. Our simulation results show that our proposed technique strikes an improved trade-off between latency and energy savings as compared to the conventional timer-based approaches that are characteristic of most current implementations. Specifically, our RL-based policy can nearly halve the active time for a single User Equipment (UE) with respect to a naïve MAC CE transmission policy, and still achieve nearly 20% active time reduction for 9 simultaneously served UEs.
[ "['Adriano Pastore' 'Adrián Agustín de Dios' 'Álvaro Valcarce']" ]
null
null
2406.13839
null
null
http://arxiv.org/pdf/2406.13839v1
2024-06-19T21:06:44Z
2024-06-19T21:06:44Z
RNA-FrameFlow: Flow Matching for de novo 3D RNA Backbone Design
We introduce RNA-FrameFlow, the first generative model for 3D RNA backbone design. We build upon SE(3) flow matching for protein backbone generation and establish protocols for data preparation and evaluation to address unique challenges posed by RNA modeling. We formulate RNA structures as a set of rigid-body frames and associated loss functions which account for larger, more conformationally flexible RNA backbones (13 atoms per nucleotide) vs. proteins (4 atoms per residue). Toward tackling the lack of diversity in 3D RNA datasets, we explore training with structural clustering and cropping augmentations. Additionally, we define a suite of evaluation metrics to measure whether the generated RNA structures are globally self-consistent (via inverse folding followed by forward folding) and locally recover RNA-specific structural descriptors. The most performant version of RNA-FrameFlow generates locally realistic RNA backbones of 40-150 nucleotides, over 40% of which pass our validity criteria as measured by a self-consistency TM-score >= 0.45, at which two RNAs have the same global fold. Open-source code: https://github.com/rish-16/rna-backbone-design
[ "['Rishabh Anand' 'Chaitanya K. Joshi' 'Alex Morehead' 'Arian R. Jamasb'\n 'Charles Harris' 'Simon V. Mathis' 'Kieran Didi' 'Bryan Hooi'\n 'Pietro Liò']" ]
null
null
2406.13846
null
null
http://arxiv.org/pdf/2406.13846v1
2024-06-19T21:19:37Z
2024-06-19T21:19:37Z
Text Serialization and Their Relationship with the Conventional Paradigms of Tabular Machine Learning
Recent research has explored how Language Models (LMs) can be used for feature representation and prediction in tabular machine learning tasks, typically by employing text serialization and supervised fine-tuning (SFT) techniques. Despite the simplicity of these techniques, significant gaps remain in our understanding of the applicability and reliability of LMs in this context. Our study assesses how emerging LM technologies compare with traditional paradigms in tabular machine learning and evaluates the feasibility of adopting similar approaches with these advanced technologies. At the data level, we investigate various methods of data representation and curation of serialized tabular data, exploring their impact on prediction performance. At the classification level, we examine whether text serialization combined with LMs enhances performance on tabular datasets with challenging characteristics (e.g., class imbalance, distribution shift, biases, and high dimensionality), and assess whether this method represents a state-of-the-art (SOTA) approach for addressing tabular machine learning challenges. Our findings reveal that current pre-trained models should not replace conventional approaches.
[ "['Kyoka Ono' 'Simon A. Lee']" ]
null
null
2406.13851
null
null
http://arxiv.org/pdf/2406.13851v1
2024-06-19T21:27:12Z
2024-06-19T21:27:12Z
Optimizing Quantile-based Trading Strategies in Electricity Arbitrage
Efficiently integrating renewable resources into electricity markets is vital for matching real-time supply and demand while reducing the significant energy wastage resulting from curtailments. Incorporating storage devices can enhance the reliability and efficiency of the grid, improving market liquidity and reducing price volatility. In short-term electricity markets, participants navigate numerous options, each presenting unique challenges and opportunities, underscoring the critical role of the trading strategy in maximizing profits. This study delves into the optimization of day-ahead and balancing market trading, leveraging quantile-based forecasts. Employing three trading approaches with practical constraints, our research enhances forecast assessment, increases trading frequency, and employs flexible timestamp orders. Our findings underscore the profit potential of simultaneous participation in both day-ahead and balancing markets, especially with larger battery storage systems; despite the increased costs and narrower profit margins associated with higher-volume trading, high-frequency strategies play a significant role in maximizing profits and addressing market challenges. Finally, we modelled four commercial battery storage systems and evaluated their economic viability through a scenario analysis, with larger batteries showing a faster return on investment.
[ "[\"Ciaran O'Connor\" 'Joseph Collins' 'Steven Prestwich' 'Andrea Visentin']" ]
null
null
2406.13864
null
null
http://arxiv.org/pdf/2406.13864v1
2024-06-19T21:48:34Z
2024-06-19T21:48:34Z
Evaluating representation learning on the protein structure universe
We introduce ProteinWorkshop, a comprehensive benchmark suite for representation learning on protein structures with Geometric Graph Neural Networks. We consider large-scale pre-training and downstream tasks on both experimental and predicted structures to enable the systematic evaluation of the quality of the learned structural representations and their usefulness in capturing functional relationships for downstream tasks. We find that: (1) large-scale pretraining on AlphaFold structures and auxiliary tasks consistently improves the performance of both rotation-invariant and equivariant GNNs, and (2) more expressive equivariant GNNs benefit from pretraining to a greater extent compared to invariant models. We aim to establish a common ground for the machine learning and computational biology communities to rigorously compare and advance protein structure representation learning. Our open-source codebase reduces the barrier to entry for working with large protein structure datasets by providing: (1) storage-efficient dataloaders for large-scale structural databases including AlphaFoldDB and ESM Atlas, as well as (2) utilities for constructing new tasks from the entire PDB. ProteinWorkshop is available at: github.com/a-r-j/ProteinWorkshop.
[ "['Arian R. Jamasb' 'Alex Morehead' 'Chaitanya K. Joshi' 'Zuobai Zhang'\n 'Kieran Didi' 'Simon V. Mathis' 'Charles Harris' 'Jian Tang'\n 'Jianlin Cheng' 'Pietro Lio' 'Tom L. Blundell']" ]
null
null
2406.13868
null
null
http://arxiv.org/pdf/2406.13868v1
2024-06-19T22:12:51Z
2024-06-19T22:12:51Z
SDQ: Sparse Decomposed Quantization for LLM Inference
Recently, large language models (LLMs) have shown surprising performance in task-specific workloads as well as general tasks with given prompts. However, to achieve unprecedented performance, recent LLMs use billions to trillions of parameters, which hinders the wide adoption of those models due to their extremely large compute and memory requirements. To resolve the issue, various model compression methods are being actively investigated. In this work, we propose SDQ (Sparse Decomposed Quantization), which exploits both structured sparsity and quantization to achieve both high compute and memory efficiency. In our evaluations, we observe that SDQ can achieve 4x effective compute throughput with <1% quality drop.
[ "['Geonhwa Jeong' 'Po-An Tsai' 'Stephen W. Keckler' 'Tushar Krishna']" ]
null
null
2406.13869
null
null
http://arxiv.org/pdf/2406.13869v1
2024-06-19T22:16:40Z
2024-06-19T22:16:40Z
Global Human-guided Counterfactual Explanations for Molecular Properties via Reinforcement Learning
Counterfactual explanations of Graph Neural Networks (GNNs) offer a powerful way to understand data that can naturally be represented by a graph structure. Furthermore, in many domains, it is highly desirable to derive data-driven global explanations or rules that can better explain the high-level properties of the models and data in question. However, evaluating global counterfactual explanations is hard in real-world datasets due to a lack of human-annotated ground truth, which limits their use in areas like molecular sciences. Additionally, the increasing scale of these datasets provides a challenge for random search-based methods. In this paper, we develop a novel global explanation model RLHEX for molecular property prediction. It aligns the counterfactual explanations with human-defined principles, making the explanations more interpretable and easy for experts to evaluate. RLHEX includes a VAE-based graph generator to generate global explanations and an adapter to adjust the latent representation space to human-defined principles. Optimized by Proximal Policy Optimization (PPO), the global explanations produced by RLHEX cover 4.12% more input graphs and reduce the distance between the counterfactual explanation set and the input set by 0.47% on average across three molecular datasets. RLHEX provides a flexible framework to incorporate different human-designed principles into the counterfactual explanation generation process, aligning these explanations with domain expertise. The code and data are released at https://github.com/dqwang122/RLHEX.
[ "['Danqing Wang' 'Antonis Antoniades' 'Kha-Dinh Luong' 'Edwin Zhang'\n 'Mert Kosan' 'Jiachen Li' 'Ambuj Singh' 'William Yang Wang' 'Lei Li']" ]
null
null
2406.13871
null
null
http://arxiv.org/pdf/2406.13871v1
2024-06-19T22:28:18Z
2024-06-19T22:28:18Z
Robust Time Series Forecasting with Non-Heavy-Tailed Gaussian Loss-Weighted Sampler
Forecasting multivariate time series is a computationally intensive task challenged by extreme or redundant samples. Recent resampling methods aim to increase training efficiency by reweighting samples based on their running losses. However, these methods do not solve the problems caused by heavy-tailed loss distributions, such as overfitting to outliers. To tackle these issues, we introduce a novel approach: a Gaussian loss-weighted sampler that multiplies samples' running losses with a Gaussian distribution weight. It reduces the probability of selecting samples with very low or very high losses while favoring those close to the average loss. As it provably creates a weighted loss distribution that is not heavy-tailed, it has several advantages over existing methods: 1) it relieves the inefficiency of learning redundant easy samples and the overfitting to outliers, and 2) it improves training efficiency by preferentially learning samples close to the average loss. Applied to real-world time series forecasting datasets, it demonstrates improvements in prediction quality of 1%-4% in mean squared error in channel-independent settings. The code will be available online after the review.
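A minimal sketch of the sampler described above, assuming a Gaussian weight centered at the mean running loss; the function name and the exact weighting scheme are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def gaussian_loss_weighted_probs(running_losses):
        """Map running losses to sampling probabilities via a Gaussian weight.

        Samples whose loss is near the mean get the largest weight; very easy
        (low-loss) and outlier (high-loss) samples are damped, so the induced
        weighted loss distribution is not heavy-tailed.
        """
        losses = np.asarray(running_losses, dtype=float)
        mu, sigma = losses.mean(), losses.std() + 1e-8  # avoid division by zero
        weights = np.exp(-0.5 * ((losses - mu) / sigma) ** 2)
        return weights / weights.sum()  # normalize to probabilities

    # Usage: draw a minibatch that favors near-average-loss samples.
    probs = gaussian_loss_weighted_probs([0.10, 0.48, 0.50, 0.52, 3.00])
    batch = np.random.choice(len(probs), size=2, replace=False, p=probs)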
[ "['Jiang You' 'Arben Cela' 'René Natowicz' 'Jacob Ouanounou'\n 'Patrick Siarry']" ]
null
null
2406.13877
null
null
http://arxiv.org/pdf/2406.13877v1
2024-06-19T23:04:27Z
2024-06-19T23:04:27Z
A Systematic Literature Review on the Use of Machine Learning in Software Engineering
Software engineering (SE) is a dynamic field that involves multiple phases, all of which are necessary to develop sustainable software systems. Machine learning (ML), a branch of artificial intelligence (AI), has drawn a lot of attention in recent years thanks to its ability to analyze massive volumes of data and extract useful patterns. Several studies have focused on examining, categorising, and assessing the application of ML in SE processes, yet a consolidated review of primary studies is still lacking. We conducted a literature review on primary studies to address this gap. The review was carried out following our objective and research questions to explore the current state of the art in applying machine learning techniques in software engineering processes. The review identifies the key areas within software engineering where ML has been applied, including software quality assurance, software maintenance, software comprehension, and software documentation. It also highlights the specific ML techniques that have been leveraged in these domains, such as supervised learning, unsupervised learning, and deep learning. Keywords: machine learning, deep learning, software engineering, natural language processing, source code
[ "['Nyaga Fred' 'I. O. Temkin']" ]
null
null
2406.13879
null
null
http://arxiv.org/pdf/2406.13879v1
2024-06-19T23:15:35Z
2024-06-19T23:15:35Z
A Catalyst Framework for the Quantum Linear System Problem via the Proximal Point Algorithm
Solving systems of linear equations is a fundamental problem, but it can be computationally intensive for classical algorithms in high dimensions. Existing quantum algorithms can achieve exponential speedups for the quantum linear system problem (QLSP) in terms of the problem dimension, but even such a theoretical advantage is bottlenecked by the condition number of the coefficient matrix. In this work, we propose a new quantum algorithm for QLSP inspired by the classical proximal point algorithm (PPA). Our proposed method can be viewed as a meta-algorithm that allows inverting a modified matrix via an existing \texttt{QLSP\_solver}, thereby directly approximating the solution vector instead of approximating the inverse of the coefficient matrix. By carefully choosing the step size $\eta$, the proposed algorithm can effectively precondition the linear system to mitigate the dependence on condition numbers that hindered the applicability of previous approaches.
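For intuition, the classical proximal point iteration for a linear system $Ax = b$ with symmetric positive definite $A$ is a textbook construction (not taken verbatim from the paper); each step inverts the modified matrix $I + \eta A$, which the abstract suggests is delegated to a QLSP solver:

    x_{k+1} = (I + \eta A)^{-1}(x_k + \eta b), \qquad \kappa(I + \eta A) = \frac{1 + \eta\,\lambda_{\max}(A)}{1 + \eta\,\lambda_{\min}(A)} < \kappa(A).

Any fixed point satisfies $Ax = b$, and a smaller $\eta$ makes each subproblem better conditioned at the cost of more solver calls, which is the preconditioning trade-off the abstract alludes to.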
[ "['Junhyung Lyle Kim' 'Nai-Hui Chia' 'Anastasios Kyrillidis']" ]
null
null
2406.13882
null
null
http://arxiv.org/pdf/2406.13882v1
2024-06-19T23:23:32Z
2024-06-19T23:23:32Z
Allocation Requires Prediction Only if Inequality Is Low
Algorithmic predictions are emerging as a promising solution concept for efficiently allocating societal resources. Fueling their use is an underlying assumption that such systems are necessary to identify individuals for interventions. We propose a principled framework for assessing this assumption: Using a simple mathematical model, we evaluate the efficacy of prediction-based allocations in settings where individuals belong to larger units such as hospitals, neighborhoods, or schools. We find that prediction-based allocations outperform baseline methods using aggregate unit-level statistics only when between-unit inequality is low and the intervention budget is high. Our results hold for a wide range of settings for the price of prediction, treatment effect heterogeneity, and unit-level statistics' learnability. Combined, we highlight the potential limits to improving the efficacy of interventions through prediction.
[ "['Ali Shirali' 'Rediet Abebe' 'Moritz Hardt']" ]
null
null
2406.13888
null
null
http://arxiv.org/pdf/2406.13888v1
2024-06-19T23:34:47Z
2024-06-19T23:34:47Z
Open Problem: Anytime Convergence Rate of Gradient Descent
Recent results show that vanilla gradient descent can be accelerated for smooth convex objectives, merely by changing the stepsize sequence. We show that this can lead to surprisingly large errors indefinitely, and therefore ask: Is there any stepsize schedule for gradient descent that accelerates the classic $\mathcal{O}(1/T)$ convergence rate, at \emph{any} stopping time $T$?
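For reference, the classic rate in question is the textbook guarantee for gradient descent on an $L$-smooth convex $f$ with constant stepsize $1/L$ (a standard fact, not a result of this paper):

    x_{t+1} = x_t - \tfrac{1}{L}\nabla f(x_t), \qquad f(x_T) - f(x^*) \le \frac{L\,\|x_0 - x^*\|^2}{2T}.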
[ "['Guy Kornowski' 'Ohad Shamir']" ]
null
null
2406.13895
null
null
http://arxiv.org/pdf/2406.13895v1
2024-06-19T23:51:26Z
2024-06-19T23:51:26Z
INFusion: Diffusion Regularized Implicit Neural Representations for 2D and 3D accelerated MRI reconstruction
Implicit Neural Representations (INRs) are a learning-based approach to accelerating Magnetic Resonance Imaging (MRI) acquisitions, particularly in scan-specific settings where only data from the under-sampled scan itself are available. Previous work demonstrates that INRs improve rapid MRI through the inherent regularization imposed by neural network architectures. Typically parameterized by fully-connected neural networks, INRs support continuous image representations by taking a physical coordinate location as input and outputting the intensity at that coordinate. Previous work has applied unlearned regularization priors during INR training and has been limited to 2D or low-resolution 3D acquisitions. Meanwhile, diffusion-based generative models have received recent attention as they learn powerful image priors decoupled from the measurement model. This work proposes INFusion, a technique that regularizes the optimization of INRs from under-sampled MR measurements with pre-trained diffusion models for improved image reconstruction. In addition, we propose a hybrid 3D approach with our diffusion regularization that enables INR application on large-scale 3D MR datasets. 2D experiments demonstrate improved INR training with our proposed diffusion regularization, and 3D experiments demonstrate the feasibility of INR training with diffusion regularization on 3D matrix sizes of 256 by 256 by 80.
[ "['Yamin Arefeen' 'Brett Levac' 'Zach Stoebner' 'Jonathan Tamir']" ]
null
null
2406.13903
null
null
http://arxiv.org/pdf/2406.13903v1
2024-06-20T00:25:43Z
2024-06-20T00:25:43Z
Generative AI for Enhancing Active Learning in Education: A Comparative Study of GPT-3.5 and GPT-4 in Crafting Customized Test Questions
This study investigates how LLMs, specifically GPT-3.5 and GPT-4, can develop tailored questions for Grade 9 math, aligning with active learning principles. By utilizing an iterative method, these models adjust questions based on difficulty and content, responding to feedback from a simulated 'student' model. A novel aspect of the research involved using GPT-4 as a 'teacher' to create complex questions, with GPT-3.5 as the 'student' responding to these challenges. This setup mirrors active learning, promoting deeper engagement. The findings demonstrate GPT-4's superior ability to generate precise, challenging questions and notable improvements in GPT-3.5's ability to handle more complex problems after receiving instruction from GPT-4. These results underscore the potential of LLMs to mimic and enhance active learning scenarios, offering a promising path for AI in customized education. This research contributes to understanding how AI can support personalized learning experiences, highlighting the need for further exploration in various educational contexts.
[ "['Hamdireza Rouzegar' 'Masoud Makrehchi']" ]
null
null
2406.13909
null
null
http://arxiv.org/pdf/2406.13909v1
2024-06-20T00:42:02Z
2024-06-20T00:42:02Z
Beyond Optimism: Exploration With Partially Observable Rewards
Exploration in reinforcement learning (RL) remains an open challenge. RL algorithms rely on observing rewards to train the agent, and if informative rewards are sparse, the agent learns slowly or may not learn at all. To improve exploration and reward discovery, popular algorithms rely on optimism. But what if rewards are sometimes unobservable, e.g., in situations of partial monitoring in bandits and the recent formalism of monitored Markov decision processes? In this case, optimism can lead to suboptimal behavior that does not explore further to collapse uncertainty. In this paper, we present a novel exploration strategy that overcomes the limitations of existing methods and guarantees convergence to an optimal policy even when rewards are not always observable. We further propose a collection of tabular environments for benchmarking exploration in RL (with and without unobservable rewards) and show that our method outperforms existing ones.
[ "['Simone Parisi' 'Alireza Kazemipour' 'Michael Bowling']" ]
null
null
2406.13920
null
null
http://arxiv.org/pdf/2406.13920v1
2024-06-20T01:24:18Z
2024-06-20T01:24:18Z
Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks
Graph neural networks (GNNs) have achieved tremendous success, but recent studies have shown that GNNs are vulnerable to adversarial attacks, which significantly hinders their use in safety-critical scenarios. Therefore, the design of robust GNNs has attracted increasing attention. However, existing research has mainly been conducted via experimental trial and error, and thus far, there remains a lack of a comprehensive understanding of the vulnerability of GNNs. To address this limitation, we systematically investigate the adversarial robustness of GNNs by considering graph data patterns, model-specific factors, and the transferability of adversarial examples. Through extensive experiments, a set of principled guidelines is obtained for improving the adversarial robustness of GNNs, for example: (i) training graph data with diverse structural patterns, rather than highly regular graphs, is crucial for model robustness, which is consistent with the concept of adversarial training; (ii) the large model capacity of GNNs with sufficient training data has a positive effect on model robustness, and only a small percentage of neurons in GNNs are affected by adversarial attacks; (iii) adversarial transfer is not symmetric, and the adversarial examples produced by small-capacity models have stronger adversarial transferability. This work illuminates the vulnerabilities of GNNs and opens many promising avenues for designing robust GNNs.
[ "['Tao Wu' 'Canyixing Cui' 'Xingping Xian' 'Shaojie Qiao' 'Chao Wang'\n 'Lin Yuan' 'Shui Yu']" ]
null
null
2406.13928
null
null
http://arxiv.org/pdf/2406.13928v1
2024-06-20T01:49:42Z
2024-06-20T01:49:42Z
Optimal deep learning of holomorphic operators between Banach spaces
Operator learning problems arise in many key areas of scientific computing where Partial Differential Equations (PDEs) are used to model physical systems. In such scenarios, the operators map between Banach or Hilbert spaces. In this work, we tackle the problem of learning operators between Banach spaces, in contrast to the vast majority of past works considering only Hilbert spaces. We focus on learning holomorphic operators - an important class of problems with many applications. We combine arbitrary approximate encoders and decoders with standard feedforward Deep Neural Network (DNN) architectures - specifically, those with constant width exceeding the depth - under standard $\ell^2$-loss minimization. We first identify a family of DNNs such that the resulting Deep Learning (DL) procedure achieves optimal generalization bounds for such operators. For standard fully-connected architectures, we then show that there are uncountably many minimizers of the training problem that yield equivalent optimal performance. The DNN architectures we consider are `problem agnostic', with width and depth only depending on the amount of training data $m$ and not on regularity assumptions of the target operator. Next, we show that DL is optimal for this problem: no recovery procedure can surpass these generalization bounds up to log terms. Finally, we present numerical results demonstrating the practical performance on challenging problems including the parametric diffusion, Navier-Stokes-Brinkman and Boussinesq PDEs.
[ "['Ben Adcock' 'Nick Dexter' 'Sebastian Moraga']" ]
null
null
2406.13929
null
null
http://arxiv.org/pdf/2406.13929v1
2024-06-20T01:53:25Z
2024-06-20T01:53:25Z
Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination
In this paper, we identify a new category of bias that induces input-conflicting hallucinations, where large language models (LLMs) generate responses inconsistent with the content of the input context. This issue, which we term the false negative problem, refers to the phenomenon where LLMs are predisposed to return negative judgments when assessing the correctness of a statement given the context. In experiments involving pairs of statements that contain the same information but have contradictory factual directions, we observe that LLMs exhibit a bias toward false negatives. Specifically, the model presents greater overconfidence when responding with False. Furthermore, we analyze how the false negative problem relates to context rewriting and query rewriting, and observe that both effectively tackle false negatives in LLMs.
[ "['Jongyoon Song' 'Sangwon Yu' 'Sungroh Yoon']" ]
null
null
2406.13930
null
null
http://arxiv.org/pdf/2406.13930v1
2024-06-20T01:55:08Z
2024-06-20T01:55:08Z
Soft-QMIX: Integrating Maximum Entropy For Monotonic Value Function Factorization
Multi-agent reinforcement learning (MARL) tasks often utilize a centralized training with decentralized execution (CTDE) framework. QMIX is a successful CTDE method that learns a credit assignment function to derive local value functions from a global value function, defining a deterministic local policy. However, QMIX is hindered by its poor exploration strategy. While maximum entropy reinforcement learning (RL) promotes better exploration through stochastic policies, QMIX's process of credit assignment conflicts with the maximum entropy objective and the decentralized execution requirement, making it unsuitable for maximum entropy RL. In this paper, we propose an enhancement to QMIX by incorporating an additional local Q-value learning method within the maximum entropy RL framework. Our approach constrains the local Q-value estimates to maintain the correct ordering of all actions. Due to the monotonicity of the QMIX value function, these updates ensure that locally optimal actions align with globally optimal actions. We theoretically prove the monotonic improvement and convergence of our method to an optimal solution. Experimentally, we validate our algorithm in matrix games and the Multi-Agent Particle Environment, and demonstrate state-of-the-art performance in SMAC-v2.
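For context, the maximum entropy RL objective that QMIX's credit assignment is said to conflict with augments the expected return with a policy entropy bonus (standard form with temperature $\alpha$, not taken from this paper):

    J(\pi) = \mathbb{E}_{\pi}\left[\sum_t r(s_t, a_t) + \alpha\,\mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right].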
[ "['Wentse Chen' 'Shiyu Huang' 'Jeff Schneider']" ]
null
null
2406.13936
null
null
http://arxiv.org/pdf/2406.13936v1
2024-06-20T02:08:50Z
2024-06-20T02:08:50Z
Communication-Efficient Adaptive Batch Size Strategies for Distributed Local Gradient Methods
Modern deep neural networks often require distributed training with many workers due to their large size. As the number of workers increases, communication overheads become the main bottleneck in data-parallel minibatch stochastic gradient methods with per-iteration gradient synchronization. Local gradient methods like Local SGD reduce communication by only synchronizing after several local steps. Although their convergence in i.i.d. and heterogeneous settings is understood and the importance of batch sizes for efficiency and generalization is known, optimal local batch sizes remain difficult to determine. We introduce adaptive batch size strategies for local gradient methods that increase batch sizes adaptively to reduce minibatch gradient variance. We provide convergence guarantees under homogeneous data conditions and support our claims with image classification experiments, demonstrating the effectiveness of our strategies in training and generalization.
[ "['Tim Tsz-Kit Lau' 'Weijian Li' 'Chenwei Xu' 'Han Liu' 'Mladen Kolar']" ]
null
null
2406.13942
null
null
http://arxiv.org/pdf/2406.13942v1
2024-06-20T02:20:23Z
2024-06-20T02:20:23Z
Synthesizing Multimodal Electronic Health Records via Predictive Diffusion Models
Synthesizing electronic health records (EHR) data has become a preferred strategy to address data scarcity, improve data quality, and model fairness in healthcare. However, existing approaches for EHR data generation predominantly rely on state-of-the-art generative techniques like generative adversarial networks, variational autoencoders, and language models. These methods typically replicate input visits, resulting in inadequate modeling of temporal dependencies between visits and overlooking the generation of time information, a crucial element in EHR data. Moreover, their ability to learn visit representations is limited due to simple linear mapping functions, thus compromising generation quality. To address these limitations, we propose a novel EHR data generation model called EHRPD. It is a diffusion-based model designed to predict the next visit based on the current one while also incorporating time interval estimation. To enhance generation quality and diversity, we introduce a novel time-aware visit embedding module and a pioneering predictive denoising diffusion probabilistic model (PDDPM). Additionally, we devise a predictive U-Net (PU-Net) to optimize the PDDPM. We conduct experiments on two public datasets and evaluate EHRPD from fidelity, privacy, and utility perspectives. The experimental results demonstrate the efficacy and utility of the proposed EHRPD in addressing the aforementioned limitations and advancing EHR data generation.
[ "['Yuan Zhong' 'Xiaochen Wang' 'Jiaqi Wang' 'Xiaokun Zhang' 'Yaqing Wang'\n 'Mengdi Huai' 'Cao Xiao' 'Fenglong Ma']" ]
null
null
2406.13944
null
null
http://arxiv.org/pdf/2406.13944v1
2024-06-20T02:23:28Z
2024-06-20T02:23:28Z
Generalization error of min-norm interpolators in transfer learning
This paper establishes the generalization error of pooled min-$\ell_2$-norm interpolation in transfer learning where data from diverse distributions are available. Min-norm interpolators emerge naturally as implicit regularized limits of modern machine learning algorithms. Previous work characterized their out-of-distribution risk when samples from the test distribution are unavailable during training. However, in many applications, a limited amount of test data may be available during training, yet the properties of min-norm interpolation in this setting are not well understood. We address this gap by characterizing the bias and variance of pooled min-$\ell_2$-norm interpolation under covariate and model shifts. The pooled interpolator captures both early fusion and a form of intermediate fusion. Our results have several implications: under model shift, for low signal-to-noise ratio (SNR), adding data always hurts. For higher SNR, transfer learning helps as long as the shift-to-signal ratio (SSR) lies below a threshold that we characterize explicitly. By consistently estimating these ratios, we provide a data-driven method to determine: (i) when the pooled interpolator outperforms the target-based interpolator, and (ii) the optimal number of target samples that minimizes the generalization error. Under covariate shift, if the source sample size is small relative to the dimension, heterogeneity between domains improves the risk, and vice versa. We establish a novel anisotropic local law to achieve these characterizations, which may be of independent interest in random matrix theory. We supplement our theoretical characterizations with comprehensive simulations that demonstrate the finite-sample efficacy of our results.
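For concreteness, the min-$\ell_2$-norm interpolator of a dataset $(X, y)$ with $X \in \mathbb{R}^{n \times p}$ and $n < p$ is the standard object (textbook definition, not specific to this paper):

    \hat\beta = \arg\min_{\beta}\ \|\beta\|_2 \ \ \text{s.t.}\ X\beta = y, \qquad \hat\beta = X^\top (X X^\top)^{-1} y,

i.e., the minimum-norm solution that gradient descent on the least-squares loss converges to when initialized at zero, which is the sense in which such interpolators arise as implicit regularized limits.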
[ "['Yanke Song' 'Sohom Bhattacharya' 'Pragya Sur']" ]
null
null
2406.13945
null
null
http://arxiv.org/pdf/2406.13945v1
2024-06-20T02:25:07Z
2024-06-20T02:25:07Z
CityBench: Evaluating the Capabilities of Large Language Model as World Model
Large language models (LLMs) with powerful generalization ability have been widely used in many domains. A systematic and reliable evaluation of LLMs is a crucial step in their development and application, especially in specific professional fields. In the urban domain, there have been some early explorations of the usability of LLMs, but a systematic and scalable evaluation benchmark is still lacking. The challenge in constructing a systematic evaluation benchmark for the urban domain lies in the diversity of data and scenarios, as well as the complex and dynamic nature of cities. In this paper, we propose CityBench, an interactive simulator-based evaluation platform, as the first systematic evaluation benchmark for the capability of LLMs in the urban domain. First, we build CitySim to integrate multi-source data and simulate fine-grained urban dynamics. Based on CitySim, we design 7 tasks in 2 categories (perception-understanding and decision-making) to evaluate the capability of LLMs as a city-scale world model for the urban domain. Due to the flexibility and ease of use of CitySim, our evaluation platform CityBench can be easily extended to any city in the world. We evaluate 13 well-known LLMs, including open-source and commercial LLMs, in 13 cities around the world. Extensive experiments demonstrate the scalability and effectiveness of the proposed CityBench and shed light on the future development of LLMs in the urban domain. The dataset, benchmark, and source codes are openly accessible to the research community via https://github.com/tsinghua-fib-lab/CityBench
[ "['Jie Feng' 'Jun Zhang' 'Junbo Yan' 'Xin Zhang' 'Tianjian Ouyang'\n 'Tianhui Liu' 'Yuwei Du' 'Siqi Guo' 'Yong Li']" ]
null
null
2406.13948
null
null
http://arxiv.org/pdf/2406.13948v1
2024-06-20T02:32:16Z
2024-06-20T02:32:16Z
CityGPT: Empowering Urban Spatial Cognition of Large Language Models
Large language models (LLMs) with powerful language generation and reasoning capabilities have already achieved success in many domains, e.g., math and code generation. However, due to the lack of a physical-world corpus and knowledge during training, they usually fail to solve many real-life tasks in the urban space. In this paper, we propose CityGPT, a systematic framework for enhancing the capability of LLMs to understand urban space and solve related urban tasks by building a city-scale world model within the model. First, we construct a diverse instruction tuning dataset, CityInstruction, for injecting urban knowledge and enhancing spatial reasoning capability effectively. Using a mixture of CityInstruction and general instruction data, we fine-tune various LLMs (e.g., ChatGLM3-6B, Qwen1.5 and LLama3 series) to enhance their capability without sacrificing general abilities. To further validate the effectiveness of the proposed methods, we construct a comprehensive benchmark, CityEval, to evaluate the capability of LLMs across diverse urban scenarios and problems. Extensive evaluation results demonstrate that small LLMs trained with CityInstruction can achieve performance competitive with commercial LLMs in the comprehensive evaluation of CityEval. The source codes are openly accessible to the research community via https://github.com/tsinghua-fib-lab/CityGPT.
[ "['Jie Feng' 'Yuwei Du' 'Tianhui Liu' 'Siqi Guo' 'Yuming Lin' 'Yong Li']" ]
null
null
2406.13961
null
null
http://arxiv.org/pdf/2406.13961v1
2024-06-20T03:02:49Z
2024-06-20T03:02:49Z
Equivariant Offline Reinforcement Learning
Sample efficiency is critical when applying learning-based methods to robotic manipulation due to the high cost of collecting expert demonstrations and the challenges of on-robot policy learning through online Reinforcement Learning (RL). Offline RL addresses this issue by enabling policy learning from an offline dataset collected using any behavioral policy, regardless of its quality. However, recent advancements in offline RL have predominantly focused on learning from large datasets. Given that many robotic manipulation tasks can be formulated as rotation-symmetric problems, we investigate the use of $SO(2)$-equivariant neural networks for offline RL with a limited number of demonstrations. Our experimental results show that equivariant versions of Conservative Q-Learning (CQL) and Implicit Q-Learning (IQL) outperform their non-equivariant counterparts. We provide empirical evidence demonstrating how equivariance improves offline learning algorithms in the low-data regime.
[ "['Arsh Tangri' 'Ondrej Biza' 'Dian Wang' 'David Klee' 'Owen Howell'\n 'Robert Platt']" ]
null
null
2406.13966
null
null
http://arxiv.org/abs/2406.13966v1
2024-06-20T03:15:53Z
2024-06-20T03:15:53Z
Causal Inference with Latent Variables: Recent Advances and Future Prospectives
Causality lays the foundation for the trajectory of our world. Causal inference (CI), which aims to infer intrinsic causal relations among variables of interest, has emerged as a crucial research topic. Nevertheless, the lack of observation of important variables (e.g., confounders, mediators, exogenous variables, etc.) severely compromises the reliability of CI methods. The issue may arise from the inherent difficulty in measuring the variables. Additionally, in observational studies where variables are passively recorded, certain covariates might be inadvertently omitted by the experimenter. Depending on the type of unobserved variables and the specific CI task, various consequences can be incurred if these latent variables are carelessly handled, such as biased estimation of causal effects, incomplete understanding of causal mechanisms, lack of individual-level causal consideration, etc. In this survey, we provide a comprehensive review of recent developments in CI with latent variables. We start by discussing traditional CI techniques when variables of interest are assumed to be fully observed. Afterward, under the taxonomy of circumvention and inference-based methods, we provide an in-depth discussion of various CI strategies to handle latent variables, covering the tasks of causal effect estimation, mediation analysis, counterfactual reasoning, and causal discovery. Furthermore, we generalize the discussion to graph data where interference among units may exist. Finally, we offer fresh aspects for further advancement of CI with latent variables, especially new opportunities in the era of large language models (LLMs).
[ "['Yaochen Zhu' 'Yinhan He' 'Jing Ma' 'Mengxuan Hu' 'Sheng Li' 'Jundong Li']" ]
null
null
2406.13968
null
null
http://arxiv.org/pdf/2406.13968v1
2024-06-20T03:22:32Z
2024-06-20T03:22:32Z
Recent Advances in Traffic Accident Analysis and Prediction: A Comprehensive Review of Machine Learning Techniques
Traffic accidents pose a severe global public health issue, leading to 1.19 million fatalities annually, with the greatest impact on individuals aged 5 to 29 years old. This paper addresses the critical need for advanced predictive methods in road safety by conducting a comprehensive review of recent advancements in applying machine learning (ML) techniques to traffic accident analysis and prediction. It examines 191 studies from the last five years, focusing on predicting accident risk, frequency, severity, duration, as well as general statistical analysis of accident data. To our knowledge, this study is the first to provide such a comprehensive review, covering the state-of-the-art across a wide range of domains related to accident analysis and prediction. The review highlights the effectiveness of integrating diverse data sources and advanced ML techniques to improve prediction accuracy and handle the complexities of traffic data. By mapping the current landscape and identifying gaps in the literature, this study aims to guide future research towards significantly reducing traffic-related deaths and injuries by 2030, aligning with the World Health Organization (WHO) targets.
[ "['Noushin Behboudi' 'Sobhan Moosavi' 'Rajiv Ramnath']" ]
null
null
2406.13971
null
null
http://arxiv.org/pdf/2406.13971v1
2024-06-20T03:31:28Z
2024-06-20T03:31:28Z
Complex fractal trainability boundary can arise from trivial non-convexity
Training neural networks involves optimizing parameters to minimize a loss function, where the nature of the loss function and the optimization strategy are crucial for effective training. Hyperparameter choices, such as the learning rate in gradient descent (GD), significantly affect the success and speed of convergence. Recent studies indicate that the boundary between bounded and divergent hyperparameters can be fractal, complicating reliable hyperparameter selection. However, the nature of this fractal boundary and methods to avoid it remain unclear. In this study, we focus on GD to investigate the loss landscape properties that might lead to fractal trainability boundaries. We discovered that fractal boundaries can emerge from simple non-convex perturbations, i.e., adding or multiplying cosine-type perturbations to quadratic functions. The observed fractal dimensions are influenced by factors like parameter dimension, type of non-convexity, perturbation wavelength, and perturbation amplitude. Our analysis identifies the "roughness of perturbation", which measures the gradient's sensitivity to parameter changes, as the factor controlling the fractal dimensions of trainability boundaries. We observed a clear transition from non-fractal to fractal trainability boundaries as roughness increases, with the critical roughness coinciding with the point at which the perturbed loss function becomes non-convex. Thus, we conclude that fractal trainability boundaries can arise from very simple non-convexity. We anticipate that our findings will enhance the understanding of complex behaviors during neural network training, leading to more consistent and predictable training strategies.
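As an illustrative (hypothetical) instance of the additive case described above, one can take a one-dimensional quadratic loss perturbed by a cosine, with amplitude $A$ and wavelength $\lambda$ controlling the perturbation's roughness:

    f(x) = x^2 + A\cos\!\left(\frac{2\pi x}{\lambda}\right), \qquad f''(x) = 2 - \frac{4\pi^2 A}{\lambda^2}\cos\!\left(\frac{2\pi x}{\lambda}\right),

which stays convex while $4\pi^2 A/\lambda^2 \le 2$ and becomes non-convex once the perturbation's curvature exceeds that of the quadratic.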
[ "['Yizhou Liu']" ]
null
null
2406.13979
null
null
http://arxiv.org/pdf/2406.13979v1
2024-06-20T04:01:35Z
2024-06-20T04:01:35Z
Knowledge-driven Subspace Fusion and Gradient Coordination for Multi-modal Learning
Multi-modal learning plays a crucial role in cancer diagnosis and prognosis. Current deep learning based multi-modal approaches are often limited in their ability to model the complex correlations between genomics and histology data and to address the intrinsic complexity of the tumour ecosystem, where both the tumour and its microenvironment contribute to malignancy. We propose a biologically interpretable and robust multi-modal learning framework to efficiently integrate histology images and genomics by decomposing the feature subspaces of histology images and genomics, reflecting distinct tumour and microenvironment features. To enhance cross-modal interactions, we design a knowledge-driven subspace fusion scheme, consisting of a cross-modal deformable attention module and a gene-guided consistency strategy. Additionally, in pursuit of dynamically optimizing the subspace knowledge, we further propose a novel gradient coordination learning strategy. Extensive experiments demonstrate the effectiveness of the proposed method, outperforming state-of-the-art techniques in three downstream tasks of glioma diagnosis, tumour grading, and survival analysis. Our code is available at https://github.com/helenypzhang/Subspace-Multimodal-Learning.
[ "['Yupei Zhang' 'Xiaofei Wang' 'Fangliangzi Meng' 'Jin Tang' 'Chao Li']" ]
null
null
2406.13984
null
null
http://arxiv.org/pdf/2406.13984v1
2024-06-20T04:24:51Z
2024-06-20T04:24:51Z
Reducing Memory Contention and I/O Congestion for Disk-based GNN Training
Graph neural networks (GNNs) have gained wide popularity. Large graphs with high-dimensional features have become common, and training GNNs on them is non-trivial on an ordinary machine. Given a gigantic graph, even sample-based GNN training cannot work efficiently, since it is difficult to keep the graph's entire data in memory during the training process. Leveraging a solid-state drive (SSD) or other storage devices to extend the memory space has been studied for training GNNs. Memory and I/Os are hence critical for effective disk-based training. We find that state-of-the-art (SoTA) disk-based GNN training systems severely suffer from issues like memory contention between a graph's topological and feature data, and severe I/O congestion upon loading data from the SSD for training. We accordingly develop GNNDrive. GNNDrive 1) minimizes the memory footprint with holistic buffer management across sampling and extracting, and 2) avoids I/O congestion through a strategy of asynchronous feature extraction. It also avoids costly data preparation on the critical path and makes the most of software and hardware resources. Experiments show that GNNDrive achieves superior performance. For example, when training with the Papers100M dataset and the GraphSAGE model, GNNDrive is faster than the SoTA PyG+, Ginex, and MariusGNN by 16.9x, 2.6x, and 2.7x, respectively.
[ "['Qisheng Jiang' 'Lei Jia' 'Chundong Wang']" ]
null
null
2406.13985
null
null
http://arxiv.org/pdf/2406.13985v1
2024-06-20T04:25:41Z
2024-06-20T04:25:41Z
The Elusive Pursuit of Replicating PATE-GAN: Benchmarking, Auditing, Debugging
Synthetic data created by differentially private (DP) generative models is increasingly used in real-world settings. In this context, PATE-GAN has emerged as a popular algorithm, combining Generative Adversarial Networks (GANs) with the private training approach of PATE (Private Aggregation of Teacher Ensembles). In this paper, we analyze and benchmark six open-source PATE-GAN implementations, including three by (a subset of) the original authors. First, we shed light on architecture deviations and empirically demonstrate that none replicate the utility performance reported in the original paper. Then, we present an in-depth privacy evaluation, including DP auditing, showing that all implementations leak more privacy than intended and uncovering 17 privacy violations and 5 other bugs. Our codebase is available from https://github.com/spalabucr/pategan-audit.
[ "['Georgi Ganev' 'Meenatchi Sundaram Muthu Selva Annamalai'\n 'Emiliano De Cristofaro']" ]
null
null
2406.13987
null
null
http://arxiv.org/pdf/2406.13987v2
2024-06-21T03:11:38Z
2024-06-20T04:26:45Z
Image anomaly detection and prediction scheme based on SSA optimized ResNet50-BiGRU model
Image anomaly detection is a popular research direction, with many methods emerging in recent years due to rapid advancements in computing. The use of artificial intelligence for image anomaly detection has been widely studied. By analyzing images of athlete posture and movement, it is possible to predict injury status and suggest necessary adjustments. Most existing methods rely on convolutional networks to extract information from irrelevant pixel data, limiting model accuracy. This paper introduces a network combining Residual Network (ResNet) and Bidirectional Gated Recurrent Unit (BiGRU), which can predict potential injury types and provide early warnings by analyzing changes in muscle and bone poses from video images. To address the high complexity of this network, the Sparrow search algorithm was used for optimization. Experiments conducted on four datasets demonstrated that our model has the smallest error in image anomaly detection compared to other models, showing strong adaptability. This provides a new approach for anomaly detection and predictive analysis in images, contributing to the sustainable development of human health and performance.
[ "['Qianhui Wan' 'Zecheng Zhang' 'Liheng Jiang' 'Zhaoqi Wang' 'Yan Zhou']" ]
null
null
2406.13989
null
null
http://arxiv.org/pdf/2406.13989v1
2024-06-20T04:32:34Z
2024-06-20T04:32:34Z
Random pairing MLE for estimation of item parameters in Rasch model
The Rasch model, a classical model in the item response theory, is widely used in psychometrics to model the relationship between individuals' latent traits and their binary responses on assessments or questionnaires. In this paper, we introduce a new likelihood-based estimator -- random pairing maximum likelihood estimator ($\mathsf{RP\text{-}MLE}$) and its bootstrapped variant multiple random pairing MLE ($\mathsf{MRP\text{-}MLE}$) that faithfully estimate the item parameters in the Rasch model. The new estimators have several appealing features compared to existing ones. First, both work for sparse observations, an increasingly important scenario in the big data era. Second, both estimators are provably minimax optimal in terms of finite sample $\ell_{\infty}$ estimation error. Lastly, $\mathsf{RP\text{-}MLE}$ admits precise distributional characterization that allows uncertainty quantification on the item parameters, e.g., construction of confidence intervals of the item parameters. The main idea underlying $\mathsf{RP\text{-}MLE}$ and $\mathsf{MRP\text{-}MLE}$ is to randomly pair user-item responses to form item-item comparisons. This is carefully designed to reduce the problem size while retaining statistical independence. We also provide empirical evidence of the efficacy of the two new estimators using both simulated and real data.
[ "['Yuepeng Yang' 'Cong Ma']" ]
null
null
2406.13991
null
null
http://arxiv.org/pdf/2406.13991v1
2024-06-20T04:41:54Z
2024-06-20T04:41:54Z
Bayesian Inverse Reinforcement Learning for Non-Markovian Rewards
Inverse reinforcement learning (IRL) is the problem of inferring a reward function from expert behavior. There are several approaches to IRL, but most are designed to learn a Markovian reward. However, a reward function might be non-Markovian, depending on more than just the current state, such as a reward machine (RM). Although there has been recent work on inferring RMs, it assumes access to the reward signal, absent in IRL. We propose a Bayesian IRL (BIRL) framework for inferring RMs directly from expert behavior, requiring significant changes to the standard framework. We define a new reward space, adapt the expert demonstration to include history, show how to compute the reward posterior, and propose a novel modification to simulated annealing to maximize this posterior. We demonstrate that our method performs well when optimizing according to its inferred reward and compares favorably to an existing method that learns exclusively binary non-Markovian rewards.
[ "['Noah Topper' 'Alvaro Velasquez' 'George Atia']" ]
null
null
2406.13993
null
null
http://arxiv.org/pdf/2406.13993v1
2024-06-20T04:44:20Z
2024-06-20T04:44:20Z
Exploring Changes in Nation Perception with Nationality-Assigned Personas in LLMs
Persona assignment has become a common strategy for customizing LLM use to particular tasks and contexts. In this study, we explore how perceptions of different nations change when LLMs are assigned specific nationality personas. We assign 193 different nationality personas (e.g., an American person) to four LLMs and examine how the LLM perceptions of countries change. We find that all LLM-persona combinations tend to favor Western European nations, though nation-personas push LLM behaviors to focus more on and view more favorably the nation-persona's own region. Eastern European, Latin American, and African nations are viewed more negatively by different nationality personas. Our study provides insight into how biases and stereotypes are realized within LLMs when adopting different national personas. In line with the "Blueprint for an AI Bill of Rights", our findings underscore the critical need for developing mechanisms to ensure LLMs uphold fairness and not over-generalize at a global scale.
[ "['Mahammed Kamruzzaman' 'Gene Louis Kim']" ]
null
null
2406.13995
null
null
http://arxiv.org/pdf/2406.13995v1
2024-06-20T04:49:41Z
2024-06-20T04:49:41Z
Prediction of Unobserved Bifurcation by Unsupervised Extraction of Slowly Time-Varying System Parameter Dynamics from Time Series Using Reservoir Computing
Nonlinear and non-stationary processes are prevalent in various natural and physical phenomena, where system dynamics can change qualitatively due to bifurcation phenomena. Traditional machine learning methods have advanced our ability to learn and predict such systems from observed time series data. However, predicting the behavior of systems with temporal parameter variations without knowledge of true parameter values remains a significant challenge. This study leverages the reservoir computing framework to address this problem by unsupervised extraction of slowly varying system parameters from time series data. We propose a model architecture consisting of a slow reservoir with long timescale internal dynamics and a fast reservoir with short timescale dynamics. The slow reservoir extracts the temporal variation of system parameters, which are then used to predict unknown bifurcations in the fast dynamics. Through experiments using data generated from chaotic dynamical systems, we demonstrate the ability to predict bifurcations not present in the training data. Our approach shows potential for applications in fields such as neuroscience, material science, and weather prediction, where slow dynamics influencing qualitative changes are often unobservable.
[ "['Keita Tokuda' 'Yuichi Katori']" ]
null
null
2406.14003
null
null
http://arxiv.org/pdf/2406.14003v2
2024-06-22T01:06:55Z
2024-06-20T05:13:33Z
Deep Optimal Experimental Design for Parameter Estimation Problems
Optimal experimental design is a well-studied field in applied science and engineering. Techniques for estimating such a design are commonly used within the framework of parameter estimation. Nonetheless, in recent years parameter estimation techniques have been changing rapidly with the introduction of deep learning techniques to replace traditional estimation methods. This in turn requires the adaptation of optimal experimental design to these new techniques. In this paper we investigate a new experimental design methodology that uses deep learning. We show that training a network as a Likelihood Free Estimator can significantly simplify the design process and circumvent the need for the computationally expensive bi-level optimization problem that is inherent in optimal experimental design for non-linear systems. Furthermore, deep design improves the quality of the recovery process for parameter estimation problems. As proof of concept we apply our methodology to two different systems of Ordinary Differential Equations.
[ "['Md Shahriar Rahim Siddiqui' 'Arman Rahmim' 'Eldad Haber']" ]
null
null
2406.14004
null
null
http://arxiv.org/pdf/2406.14004v1
2024-06-20T05:15:48Z
2024-06-20T05:15:48Z
Do Not Wait: Learning Re-Ranking Model Without User Feedback At Serving Time in E-Commerce
Recommender systems have been widely used in e-commerce, and re-ranking models are playing an increasingly significant role in the domain, leveraging inter-item influence to determine the final recommendation lists. Online learning methods keep updating a deployed model with the latest available samples to capture the shifting of the underlying data distribution in e-commerce. However, they depend on the availability of real user feedback, which may be delayed by hours or even days, such as item purchases, leading to a lag in model enhancement. In this paper, we propose a novel extension of online learning methods for re-ranking modeling, which we term LAST, an acronym for Learning At Serving Time. It circumvents the requirement of user feedback by using a surrogate model to provide the instructional signal needed to steer model improvement. Upon receiving an online request, LAST finds and applies a model modification on the fly before generating a recommendation result for the request. The modification is request-specific and transient: it is tailored to, and only to, the current request, capturing its specific context. After a request, the modification is discarded, which helps to prevent error propagation and stabilizes the online learning procedure, since the predictions of the surrogate model may be inaccurate. Most importantly, as a complement to feedback-based online learning methods, LAST can be seamlessly integrated into existing online learning systems to create a more adaptive and responsive recommendation experience. Comprehensive experiments, both offline and online, affirm that LAST outperforms state-of-the-art re-ranking models.
[ "['Yuan Wang' 'Zhiyu Li' 'Changshuo Zhang' 'Sirui Chen' 'Xiao Zhang'\n 'Jun Xu' 'Quan Lin']" ]
null
null
2406.14005
null
null
http://arxiv.org/pdf/2406.14005v2
2024-06-21T12:41:17Z
2024-06-20T05:18:37Z
Information Guided Regularization for Fine-tuning Language Models
The pretraining-fine-tuning paradigm has been the de facto strategy for transfer learning in modern language modeling. With the understanding that task adaptation in LMs is often a function of parameters shared across tasks, we argue that a more surgical approach to regularization needs to exist for smoother transfer learning. Towards this end, we investigate how the pretraining loss landscape is affected by these task-sensitive parameters through an information-theoretic lens. We then leverage the findings from our investigations to devise a novel approach to dropout for improved model regularization and better downstream generalization. This approach, named guided dropout, is both task & architecture agnostic and adds no computational overhead to the fine-tuning process. Through empirical evaluations, we showcase that our approach to regularization yields consistently better performance, even in scenarios of data paucity, compared to standardized baselines.
[ "['Mandar Sharma' 'Nikhil Muralidhar' 'Shengzhe Xu' 'Raquib Bin Yousuf'\n 'Naren Ramakrishnan']" ]
null
null
2406.14009
null
null
http://arxiv.org/pdf/2406.14009v1
2024-06-20T05:51:37Z
2024-06-20T05:51:37Z
Confidence Intervals and Simultaneous Confidence Bands Based on Deep Learning
Deep learning models have significantly improved prediction accuracy in various fields, gaining recognition across numerous disciplines. Yet, an aspect of deep learning that remains insufficiently addressed is the assessment of prediction uncertainty. Producing reliable uncertainty estimators could be crucial in practical terms. For instance, predictions associated with a high degree of uncertainty could be sent for further evaluation. Recent works in uncertainty quantification of deep learning predictions, including Bayesian posterior credible intervals and a frequentist confidence-interval estimation, have proven to yield either invalid or overly conservative intervals. Furthermore, there is currently no method for quantifying uncertainty that can accommodate deep neural networks for survival (time-to-event) data that involves right-censored outcomes. In this work, we provide a valid non-parametric bootstrap method that correctly disentangles data uncertainty from the noise inherent in the adopted optimization algorithm, ensuring that the resulting point-wise confidence intervals or the simultaneous confidence bands are accurate (i.e., valid and not overly conservative). The proposed ad-hoc method can be easily integrated into any deep neural network without interfering with the training process. The utility of the proposed approach is illustrated by constructing simultaneous confidence bands for survival curves derived from deep neural networks for survival data with right censoring.
[ "['Asaf Ben Arie' 'Malka Gorfine']" ]
null
null
2406.14014
null
null
http://arxiv.org/pdf/2406.14014v1
2024-06-20T06:08:52Z
2024-06-20T06:08:52Z
Feature Fusion Based on Mutual-Cross-Attention Mechanism for EEG Emotion Recognition
An objective and accurate emotion diagnostic reference is vital to psychologists, especially when dealing with patients who are difficult to communicate with for pathological reasons. Nevertheless, current systems based on Electroencephalography (EEG) data utilized for sentiment discrimination have some problems, including excessive model complexity, mediocre accuracy, and limited interpretability. Consequently, we propose a novel and effective feature fusion mechanism named Mutual-Cross-Attention (MCA). Combined with a specially customized 3D Convolutional Neural Network (3D-CNN), this purely mathematical mechanism adeptly discovers the complementary relationship between time-domain and frequency-domain features in EEG data. Furthermore, the newly designed Channel-PSD-DE 3D feature also contributes to the high performance. The proposed method eventually achieves 99.49% (valence) and 99.30% (arousal) accuracy on the DEAP dataset.
[ "['Yimin Zhao' 'Jin Gu']" ]
null
null
2406.14015
null
null
http://arxiv.org/pdf/2406.14015v1
2024-06-20T06:12:23Z
2024-06-20T06:12:23Z
CohortNet: Empowering Cohort Discovery for Interpretable Healthcare Analytics
Cohort studies are of significant importance in the field of healthcare analysis. However, existing methods typically involve manual, labor-intensive, and expert-driven pattern definitions or rely on simplistic clustering techniques that lack medical relevance. Automating cohort studies with interpretable patterns has great potential to facilitate healthcare analysis but remains an unmet need in prior research efforts. In this paper, we propose a cohort auto-discovery model, CohortNet, for interpretable healthcare analysis, focusing on the effective identification, representation, and exploitation of cohorts characterized by medically meaningful patterns. CohortNet initially learns fine-grained patient representations by separately processing each feature, considering both individual feature trends and feature interactions at each time step. Subsequently, it classifies each feature into distinct states and employs a heuristic cohort exploration strategy to effectively discover substantial cohorts with concrete patterns. For each identified cohort, it learns comprehensive cohort representations with credible evidence through associated patient retrieval. Ultimately, given a new patient, CohortNet can leverage relevant cohorts with distinguished importance, which can provide a more holistic understanding of the patient's conditions. Extensive experiments on three real-world datasets demonstrate that it consistently outperforms state-of-the-art approaches and offers interpretable insights from diverse perspectives in a top-down fashion.
[ "['Qingpeng Cai' 'Kaiping Zheng' 'H. V. Jagadish' 'Beng Chin Ooi'\n 'James Yip']" ]
null
null
2406.14021
null
null
http://arxiv.org/pdf/2406.14021v1
2024-06-20T06:37:35Z
2024-06-20T06:37:35Z
HIGHT: Hierarchical Graph Tokenization for Graph-Language Alignment
Recently there has been a surge of interest in extending the success of large language models (LLMs) to graph modality, such as social networks and molecules. As LLMs are predominantly trained with 1D text data, most existing approaches adopt a graph neural network to represent a graph as a series of node tokens and feed these tokens to LLMs for graph-language alignment. Despite achieving some successes, existing approaches have overlooked the hierarchical structures that are inherent in graph data. Especially, in molecular graphs, the high-order structural information contains rich semantics of molecular functional groups, which encode crucial biochemical functionalities of the molecules. We establish a simple benchmark showing that neglecting the hierarchical information in graph tokenization will lead to subpar graph-language alignment and severe hallucination in generated outputs. To address this problem, we propose a novel strategy called HIerarchical GrapH Tokenization (HIGHT). HIGHT employs a hierarchical graph tokenizer that extracts and encodes the hierarchy of node, motif, and graph levels of informative tokens to improve the graph perception of LLMs. HIGHT also adopts an augmented graph-language supervised fine-tuning dataset, enriched with the hierarchical graph information, to further enhance the graph-language alignment. Extensive experiments on 7 molecule-centric benchmarks confirm the effectiveness of HIGHT in reducing hallucination by 40%, as well as significant improvements in various molecule-language downstream tasks.
[ "['Yongqiang Chen' 'Quanming Yao' 'Juzheng Zhang' 'James Cheng'\n 'Yatao Bian']" ]
null
null
2406.14022
null
null
http://arxiv.org/pdf/2406.14022v1
2024-06-20T06:37:47Z
2024-06-20T06:37:47Z
Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning
The emergence of in-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) for recognizing the task from demonstrations and utilizing pre-trained priors, and task learning (TL) for learning from demonstrations. However, the relationship between the two abilities and how it affects the emergence of ICL remains unclear. In this paper, we take the first step by examining the pre-training dynamics of the emergence of ICL. With carefully designed metrics, we find that these two abilities are, in fact, competitive during pre-training. Moreover, we observe a strong negative correlation between the competition and ICL performance. Further analysis of common pre-training factors (i.e., model size, dataset size, and data curriculum) demonstrates possible ways to manage the competition. Based on these insights, we propose a simple yet effective method to better integrate these two abilities for ICL at inference time. Through adaptive ensemble learning, the performance of ICL can be significantly boosted, enabling two small models to outperform a larger one with more than twice the parameters. The code is available at https://github.com/RUCAIBox/Competitive-ICL.
[ "['Xiaolei Wang' 'Xinyu Tang' 'Wayne Xin Zhao' 'Ji-Rong Wen']" ]
null
null
2406.14026
null
null
http://arxiv.org/pdf/2406.14026v1
2024-06-20T06:46:23Z
2024-06-20T06:46:23Z
Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations
Language models (LMs) are known to suffer from forgetting of previously learned examples when fine-tuned, breaking stability of deployed LM systems. Despite efforts on mitigating forgetting, few have investigated whether, and how, forgotten upstream examples are associated with newly learned tasks. Insights on such associations enable efficient and targeted mitigation of forgetting. In this paper, we empirically analyze forgetting that occurs in $N$ upstream examples while the model learns $M$ new tasks and visualize their associations with an $M \times N$ matrix. We empirically demonstrate that the degree of forgetting can often be approximated by simple multiplicative contributions of the upstream examples and newly learned tasks. We also reveal more complicated patterns where specific subsets of examples are forgotten, with statistics and visualization. Following our analysis, we predict forgetting that happens on upstream examples when learning a new task with matrix completion over the empirical associations, outperforming prior approaches that rely on trainable LMs. Project website: https://inklab.usc.edu/lm-forgetting-prediction/
[ "['Xisen Jin' 'Xiang Ren']" ]
null
null
2406.14033
null
null
http://arxiv.org/pdf/2406.14033v1
2024-06-20T06:51:51Z
2024-06-20T06:51:51Z
Ensembles of Probabilistic Regression Trees
Tree-based ensemble methods such as random forests, gradient-boosted trees, and Bayesian additive regression trees have been successfully used for regression problems in many applications and research studies. In this paper, we study ensemble versions of probabilistic regression trees that provide smooth approximations of the objective function by assigning each observation to each region with respect to a probability distribution. We prove that the ensemble versions of probabilistic regression trees considered are consistent, and experimentally study their bias-variance trade-off and compare them with the state-of-the-art in terms of prediction performance.
[ "['Alexandre Seiller' 'Éric Gaussier' 'Emilie Devijver' 'Marianne Clausel'\n 'Sami Alkhoury']" ]
null
null
2406.14036
null
null
http://arxiv.org/pdf/2406.14036v1
2024-06-20T06:56:35Z
2024-06-20T06:56:35Z
Toward Infinite-Long Prefix in Transformer
Prompting and contextual-based fine-tuning methods, which we call Prefix Learning, have been proposed to enhance the performance of language models on various downstream tasks and can match full-parameter fine-tuning. There remains a limited theoretical understanding of how these methods work. In this paper, we aim to relieve this limitation by studying the learning ability of Prefix Learning from the perspective of prefix length. In particular, we approximate the infinite-long Prefix Learning optimization process by the Neural Tangent Kernel (NTK) technique. We formulate and solve it as a learning problem of the infinite-long prefix in a one-layer attention network. Our results confirm the over-parameterization property and arbitrarily small loss convergence guarantee of infinite-long Prefix Learning in attention. On the implementation end, we propose our NTK-Attention method, which is "equivalent" to attention computation with an arbitrary prefix length and runs efficiently. Its time complexity is mainly sub-quadratic in the input length (excluding the prefix), and our method only requires $d^2 + d$ extra parameters for representation, where $d$ is the feature dimension. In addition, we conducted experiments comparing our NTK-Attention with full-parameter fine-tuning, LoRA, and P-Tuning V2 methods across vision and natural language datasets. The results indicate our approach may be a promising parameter-efficient fine-tuning method, as it demonstrates superior performance in numerous scenarios. Our code can be found at \url{https://github.com/ChristianYang37/chiwun/tree/main/src/NTK-Attention}.
[ "['Jiuxiang Gu' 'Yingyu Liang' 'Zhenmei Shi' 'Zhao Song' 'Chiwun Yang']" ]
null
null
2406.14040
null
null
http://arxiv.org/pdf/2406.14040v1
2024-06-20T07:00:56Z
2024-06-20T07:00:56Z
A Practical Diffusion Path for Sampling
Diffusion models are state-of-the-art methods in generative modeling when samples from a target probability distribution are available, and they can be efficiently sampled using score matching to estimate score vectors guiding a Langevin process. However, in the setting where samples from the target are not available, e.g. when this target's density is known up to a normalization constant, the score estimation task is challenging. Previous approaches rely on Monte Carlo estimators that are either computationally heavy to implement or sample-inefficient. In this work, we propose a computationally attractive alternative, relying on the so-called dilation path, that yields score vectors that are available in closed form. This path interpolates between a Dirac and the target distribution using a convolution. We propose a simple implementation of Langevin dynamics guided by the dilation path, using adaptive step sizes. We illustrate the results of our sampling method on a range of tasks, and show that it performs better than classical alternatives.
[ "['Omar Chehab' 'Anna Korba']" ]
null
null
2406.14044
null
null
http://arxiv.org/pdf/2406.14044v1
2024-06-20T07:08:07Z
2024-06-20T07:08:07Z
Encoder-Decoder Neural Networks in Interpretation of X-ray Spectra
Encoder-decoder neural networks (EDNN) condense information most relevant to the output of the feedforward network to activation values at a bottleneck layer. We study the use of this architecture in emulation and interpretation of simulated X-ray spectroscopic data with the aim to identify key structural characteristics for the spectra, previously studied using emulator-based component analysis (ECA). We find an EDNN to outperform ECA in covered target variable variance, but also discover complications in interpreting the latent variables in physical terms. As a compromise of the benefits of these two approaches, we develop a network where the linear projection of ECA is used, thus maintaining the beneficial characteristics of vector expansion from the latent variables for their interpretation. These results underline the necessity of information recovery after its condensation and identification of decisive structural degrees for the output spectra for a justified interpretation.
[ "['Jalmari Passilahti' 'Anton Vladyka' 'Johannes Niskanen']" ]
null
null
2406.14045
null
null
http://arxiv.org/pdf/2406.14045v1
2024-06-20T07:09:19Z
2024-06-20T07:09:19Z
Understanding Different Design Choices in Training Large Time Series Models
Inspired by Large Language Models (LLMs), Time Series Forecasting (TSF), a long-standing task in time series analysis, is undergoing a transition towards Large Time Series Models (LTSMs), aiming to train universal transformer-based models for TSF. However, training LTSMs on heterogeneous time series data poses unique challenges, including diverse frequencies, dimensions, and patterns across datasets. Recent endeavors have studied and evaluated various design choices aimed at enhancing LTSM training and generalization capabilities, spanning pre-processing techniques, model configurations, and dataset configurations. In this work, we comprehensively analyze these design choices and aim to identify the best practices for training LTSMs. Moreover, we propose the \emph{time series prompt}, a novel statistical prompting strategy tailored to time series data. Furthermore, based on the observations in our analysis, we introduce \texttt{LTSM-bundle}, which bundles the best design choices we have identified. Empirical results demonstrate that \texttt{LTSM-bundle} achieves superior zero-shot and few-shot performance compared to state-of-the-art LTSMs and traditional TSF methods on benchmark datasets.
[ "['Yu-Neng Chuang' 'Songchen Li' 'Jiayi Yuan' 'Guanchu Wang'\n 'Kwei-Herng Lai' 'Leisheng Yu' 'Sirui Ding' 'Chia-Yuan Chang'\n 'Qiaoyu Tan' 'Daochen Zha' 'Xia Hu']" ]
null
null
2406.14047
null
null
http://arxiv.org/pdf/2406.14047v1
2024-06-20T07:11:27Z
2024-06-20T07:11:27Z
Constrained Meta Agnostic Reinforcement Learning
Meta-Reinforcement Learning (Meta-RL) aims to acquire meta-knowledge for quick adaptation to diverse tasks. However, applying these policies in real-world environments presents a significant challenge in balancing rapid adaptability with adherence to environmental constraints. Our novel approach, Constraint Model Agnostic Meta Learning (C-MAML), merges meta learning with constrained optimization to address this challenge. C-MAML enables rapid and efficient task adaptation by incorporating task-specific constraints directly into its meta-algorithm framework during the training phase. This fusion results in safer initial parameters for learning new tasks. We demonstrate the effectiveness of C-MAML in simulated locomotion with wheeled robot tasks of varying complexity, highlighting its practicality and robustness in dynamic environments.
[ "['Karam Daaboul' 'Florian Kuhm' 'Tim Joseph' 'J. Marius Zoellner']" ]
null
null
2406.14054
null
null
http://arxiv.org/pdf/2406.14054v1
2024-06-20T07:24:24Z
2024-06-20T07:24:24Z
Urban-Focused Multi-Task Offline Reinforcement Learning with Contrastive Data Sharing
Enhancing diverse human decision-making processes in an urban environment is a critical issue across various applications, including ride-sharing vehicle dispatching, public transportation management, and autonomous driving. Offline reinforcement learning (RL) is a promising approach to learn and optimize human urban strategies (or policies) from pre-collected human-generated spatial-temporal urban data. However, standard offline RL faces two significant challenges: (1) data scarcity and data heterogeneity, and (2) distributional shift. In this paper, we introduce MODA -- a Multi-Task Offline Reinforcement Learning with Contrastive Data Sharing approach. MODA addresses the challenges of data scarcity and heterogeneity in a multi-task urban setting through Contrastive Data Sharing among tasks. This technique involves extracting latent representations of human behaviors by contrasting positive and negative data pairs. It then shares data presenting similar representations with the target task, facilitating data augmentation for each task. Moreover, MODA develops a novel model-based multi-task offline RL algorithm. This algorithm constructs a robust Markov Decision Process (MDP) by integrating a dynamics model with a Generative Adversarial Network (GAN). Once the robust MDP is established, any online RL or planning algorithm can be applied. Extensive experiments conducted in a real-world multi-task urban setting validate the effectiveness of MODA. The results demonstrate that MODA exhibits significant improvements compared to state-of-the-art baselines, showcasing its capability in advancing urban decision-making processes. We also made our code available to the research community.
[ "['Xinbo Zhao' 'Yingxue Zhang' 'Xin Zhang' 'Yu Yang' 'Yiqun Xie'\n 'Yanhua Li' 'Jun Luo']" ]
null
null
2406.14059
null
null
http://arxiv.org/pdf/2406.14059v1
2024-06-20T07:32:07Z
2024-06-20T07:32:07Z
Tracking solutions of time-varying variational inequalities
Tracking the solution of time-varying variational inequalities is an important problem with applications in game theory, optimization, and machine learning. Existing work considers time-varying games or time-varying optimization problems. For strongly convex optimization problems or strongly monotone games, these results provide tracking guarantees under the assumption that the variation of the time-varying problem is restrained, that is, problems with a sublinear solution path. In this work we extend existing results in two ways: In our first result, we provide tracking bounds for (1) variational inequalities with a sublinear solution path but not necessarily monotone functions, and (2) for periodic time-varying variational inequalities that do not necessarily have a sublinear solution path-length. Our second main contribution is an extensive study of the convergence behavior and trajectory of discrete dynamical systems of periodic time-varying VI. We show that these systems can exhibit provably chaotic behavior or can converge to the solution. Finally, we illustrate our theoretical results with experiments.
[ "['Hédi Hadiji' 'Sarah Sachs' 'Cristóbal Guzmán']" ]
null
null
2406.14071
null
null
http://arxiv.org/pdf/2406.14071v1
2024-06-20T07:45:38Z
2024-06-20T07:45:38Z
Bayesian Bandit Algorithms with Approximate Inference in Stochastic Linear Bandits
Bayesian bandit algorithms with approximate Bayesian inference have been widely used in real-world applications. Nevertheless, their theoretical justification is less investigated in the literature, especially for contextual bandit problems. To fill this gap, we propose a general theoretical framework to analyze stochastic linear bandits in the presence of approximate inference and conduct regret analysis on two Bayesian bandit algorithms, Linear Thompson sampling (LinTS) and the extension of Bayesian Upper Confidence Bound, namely Linear Bayesian Upper Confidence Bound (LinBUCB). We demonstrate that both LinTS and LinBUCB can preserve their original rates of regret upper bound but with a sacrifice of larger constant terms when applied with approximate inference. These results hold for general Bayesian inference approaches, under the assumption that the inference error measured by two different $\alpha$-divergences is bounded. Additionally, by introducing a new definition of well-behaved distributions, we show that LinBUCB improves the regret rate of LinTS from $\tilde{O}(d^{3/2}\sqrt{T})$ to $\tilde{O}(d\sqrt{T})$, matching the minimax optimal rate. To our knowledge, this work provides the first regret bounds in the setting of stochastic linear bandits with bounded approximate inference errors.
[ "['Ziyi Huang' 'Henry Lam' 'Haofeng Zhang']" ]
null
null
2406.14073
null
null
http://arxiv.org/pdf/2406.14073v1
2024-06-20T07:50:11Z
2024-06-20T07:50:11Z
Exploring Layerwise Adversarial Robustness Through the Lens of t-SNE
Adversarial examples, designed to trick Artificial Neural Networks (ANNs) into producing wrong outputs, highlight vulnerabilities in these models. Exploring these weaknesses is crucial for developing defenses, and so, we propose a method to assess the adversarial robustness of image-classifying ANNs. The t-distributed Stochastic Neighbor Embedding (t-SNE) technique is used for visual inspection, and a metric, which compares the clean and perturbed embeddings, helps pinpoint weak spots in the layers. Analyzing two ANNs on CIFAR-10, one designed by humans and another via NeuroEvolution, we found that differences between clean and perturbed representations emerge early on, in the feature extraction layers, affecting subsequent classification. The findings with our metric are supported by the visual analysis of the t-SNE maps.
[ "['Inês Valentim' 'Nuno Antunes' 'Nuno Lourenço']" ]
null
null
2406.14082
null
null
http://arxiv.org/pdf/2406.14082v1
2024-06-20T07:59:29Z
2024-06-20T07:59:29Z
FLoCoRA: Federated learning compression with low-rank adaptation
Low-Rank Adaptation (LoRA) methods have gained popularity in efficient parameter fine-tuning of models containing hundreds of billions of parameters. In this work, instead, we demonstrate the application of LoRA methods to train small-vision models in Federated Learning (FL) from scratch. We first propose an aggregation-agnostic method to integrate LoRA within FL, named FLoCoRA, showing that the method is capable of reducing communication costs by 4.8 times, while having less than 1% accuracy degradation, for a CIFAR-10 classification task with a ResNet-8. Next, we show that the same method can be extended with an affine quantization scheme, dividing the communication cost by 18.6 times compared to the standard method, while still incurring less than 1% accuracy loss, tested on a ResNet-18 model. Our formulation represents a strong baseline for message size reduction, even when compared to conventional model compression works, while also reducing the training memory requirements due to the low-rank adaptation.
[ "['Lucas Grativol Ribeiro' 'Mathieu Leonardon' 'Guillaume Muller'\n 'Virginie Fresse' 'Matthieu Arzel']" ]
null
null
2406.14086
null
null
http://arxiv.org/pdf/2406.14086v1
2024-06-20T08:01:28Z
2024-06-20T08:01:28Z
Seg-LSTM: Performance of xLSTM for Semantic Segmentation of Remotely Sensed Images
Recent advancements in autoregressive networks with linear complexity have driven significant research progress, demonstrating exceptional performance in large language models. A representative model is the Extended Long Short-Term Memory (xLSTM), which incorporates gating mechanisms and memory structures, performing comparably to Transformer architectures in long-sequence language tasks. Autoregressive networks such as xLSTM can utilize image serialization to extend their application to visual tasks such as classification and segmentation. Although existing studies have demonstrated Vision-LSTM's impressive results in image classification, its performance in image semantic segmentation remains unverified. Our study represents the first attempt to evaluate the effectiveness of Vision-LSTM in the semantic segmentation of remotely sensed images. This evaluation is based on a specifically designed encoder-decoder architecture named Seg-LSTM, and comparisons with state-of-the-art segmentation networks. Our study found that Vision-LSTM's performance in semantic segmentation was limited and generally inferior to Vision-Transformers-based and Vision-Mamba-based models in most comparative tests. Future research directions for enhancing Vision-LSTM are recommended. The source code is available from https://github.com/zhuqinfeng1999/Seg-LSTM.
[ "['Qinfeng Zhu' 'Yuanzhi Cai' 'Lei Fan']" ]
null
null
2406.14087
null
null
http://arxiv.org/pdf/2406.14087v1
2024-06-20T08:02:49Z
2024-06-20T08:02:49Z
Semi Supervised Heterogeneous Domain Adaptation via Disentanglement and Pseudo-Labelling
Semi-supervised domain adaptation methods leverage information from a source labelled domain with the goal of generalizing over a scarcely labelled target domain. While this setting already poses challenges due to potential distribution shifts between domains, an even more complex scenario arises when source and target data differ in modality representation (e.g. they are acquired by sensors with different characteristics). For instance, in remote sensing, images may be collected via various acquisition modes (e.g. optical or radar), different spectral characteristics (e.g. RGB or multi-spectral) and spatial resolutions. Such a setting is denoted as Semi-Supervised Heterogeneous Domain Adaptation (SSHDA) and it exhibits an even more severe distribution shift due to modality heterogeneity across domains. To cope with the challenging SSHDA setting, here we introduce SHeDD (Semi-supervised Heterogeneous Domain Adaptation via Disentanglement), an end-to-end neural framework tailored to learning a target domain classifier by leveraging both labelled and unlabelled data from heterogeneous data sources. SHeDD is designed to effectively disentangle domain-invariant representations, relevant for the downstream task, from domain-specific information that can hinder the cross-modality transfer. Additionally, SHeDD adopts an augmentation-based consistency regularization mechanism that takes advantage of reliable pseudo-labels on the unlabelled target samples to further boost its generalization ability on the target domain. Empirical evaluations on two remote sensing benchmarks, encompassing heterogeneous data in terms of acquisition modes and spectral/spatial resolutions, demonstrate the quality of SHeDD compared to both baseline and state-of-the-art competing approaches. Our code is publicly available here: https://github.com/tanodino/SSHDA/
[ "['Cassio F. Dantas' 'Raffaele Gaetano' 'Dino Ienco']" ]
null
null
2406.14088
null
null
http://arxiv.org/pdf/2406.14088v1
2024-06-20T08:04:07Z
2024-06-20T08:04:07Z
ReaLHF: Optimized RLHF Training for Large Language Models through Parameter Reallocation
Reinforcement Learning from Human Feedback (RLHF) stands as a pivotal technique in empowering large language model (LLM) applications. Since RLHF involves diverse computational workloads and intricate dependencies among multiple LLMs, directly adopting parallelization techniques from supervised training can result in sub-optimal performance. To overcome this limitation, we propose a novel approach named parameter ReaLlocation, which dynamically redistributes LLM parameters in the cluster and adapts parallelization strategies during training. Building upon this idea, we introduce ReaLHF, a pioneering system capable of automatically discovering and running efficient execution plans for RLHF training given the desired algorithmic and hardware configurations. ReaLHF formulates the execution plan for RLHF as an augmented dataflow graph. Based on this formulation, ReaLHF employs a tailored search algorithm with a lightweight cost estimator to discover an efficient execution plan. Subsequently, the runtime engine deploys the selected plan by effectively parallelizing computations and redistributing parameters. We evaluate ReaLHF on the LLaMA-2 models with up to $4\times70$ billion parameters and 128 GPUs. The experiment results showcase ReaLHF's substantial speedups of $2.0$-$10.6\times$ compared to baselines. Furthermore, the execution plans generated by ReaLHF exhibit an average of $26\%$ performance improvement over heuristic approaches based on Megatron-LM. The source code of ReaLHF is publicly available at https://github.com/openpsi-project/ReaLHF.
[ "['Zhiyu Mei' 'Wei Fu' 'Kaiwei Li' 'Guangju Wang' 'Huanchen Zhang' 'Yi Wu']" ]
null
null
2406.14095
null
null
http://arxiv.org/pdf/2406.14095v1
2024-06-20T08:21:52Z
2024-06-20T08:21:52Z
Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization
Bi-level optimization (BO) has become a fundamental mathematical framework for addressing hierarchical machine learning problems. As deep learning models continue to grow in size, the demand for scalable bi-level optimization solutions has become increasingly critical. Traditional gradient-based bi-level optimization algorithms, due to their inherent characteristics, are ill-suited to meet the demands of large-scale applications. In this paper, we introduce $\textbf{F}$orward $\textbf{G}$radient $\textbf{U}$nrolling with $\textbf{F}$orward $\textbf{G}$radient, abbreviated as $(\textbf{FG})^2\textbf{U}$, which achieves an unbiased stochastic approximation of the meta gradient for bi-level optimization. $(\text{FG})^2\text{U}$ circumvents the memory and approximation issues associated with classical bi-level optimization approaches, and delivers significantly more accurate gradient estimates than existing large-scale bi-level optimization approaches. Additionally, $(\text{FG})^2\text{U}$ is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems to achieve significant computational efficiency. In practice, $(\text{FG})^2\text{U}$ and other methods can be strategically placed at different stages of the training process to achieve a more cost-effective two-phase paradigm. Further, $(\text{FG})^2\text{U}$ is easy to implement within popular deep learning frameworks, and can be conveniently adapted to address more challenging zeroth-order bi-level optimization scenarios. We provide a thorough convergence analysis and a comprehensive practical discussion for $(\text{FG})^2\text{U}$, complemented by extensive empirical evaluations, showcasing its superior performance in diverse large-scale bi-level optimization tasks.
[ "['Qianli Shen' 'Yezhen Wang' 'Zhouhao Yang' 'Xiang Li' 'Haonan Wang'\n 'Yang Zhang' 'Jonathan Scarlett' 'Zhanxing Zhu' 'Kenji Kawaguchi']" ]
null
null
2406.14096
null
null
http://arxiv.org/pdf/2406.14096v1
2024-06-20T08:22:07Z
2024-06-20T08:22:07Z
Graph Neural Networks for Job Shop Scheduling Problems: A Survey
Job shop scheduling problems (JSSPs) represent a critical and challenging class of combinatorial optimization problems. Recent years have witnessed a rapid increase in the application of graph neural networks (GNNs) to solve JSSPs, albeit lacking a systematic survey of the relevant literature. This paper aims to thoroughly review prevailing GNN methods for different types of JSSPs and the closely related flow-shop scheduling problems (FSPs), especially those leveraging deep reinforcement learning (DRL). We begin by presenting the graph representations of various JSSPs, followed by an introduction to the most commonly used GNN architectures. We then review current GNN-based methods for each problem type, highlighting key technical elements such as graph representations, GNN architectures, GNN tasks, and training algorithms. Finally, we summarize and analyze the advantages and limitations of GNNs in solving JSSPs and provide potential future research opportunities. We hope this survey can motivate and inspire innovative approaches for more powerful GNN-based approaches in tackling JSSPs and other scheduling problems.
[ "['Igor G. Smit' 'Jianan Zhou' 'Robbert Reijnen' 'Yaoxin Wu' 'Jian Chen'\n 'Cong Zhang' 'Zaharah Bukhsh' 'Wim Nuijten' 'Yingqian Zhang']" ]
null
null
2406.14111
null
null
http://arxiv.org/abs/2406.14111v1
2024-06-20T08:50:57Z
2024-06-20T08:50:57Z
Expander Hierarchies for Normalized Cuts on Graphs
Expander decompositions of graphs have significantly advanced the understanding of many classical graph problems and led to numerous fundamental theoretical results. However, their adoption in practice has been hindered due to their inherent intricacies and large hidden factors in their asymptotic running times. Here, we introduce the first practically efficient algorithm for computing expander decompositions and their hierarchies and demonstrate its effectiveness and utility by incorporating it as the core component in a novel solver for the normalized cut graph clustering objective. Our extensive experiments on a variety of large graphs show that our expander-based algorithm outperforms state-of-the-art solvers for normalized cut with respect to solution quality by a large margin on a variety of graph classes such as citation, e-mail, and social networks or web graphs while remaining competitive in running time.
[ "['Kathrin Hanauer' 'Monika Henzinger' 'Robin Münk' 'Harald Räcke'\n 'Maximilian Vötsch']" ]
null
null
2406.14124
null
null
http://arxiv.org/pdf/2406.14124v2
2024-06-21T02:30:32Z
2024-06-20T09:09:34Z
Measuring Sample Importance in Data Pruning for Training LLMs from a Data Compression Perspective
Compute-efficient training of large language models (LLMs) has become an important research problem. In this work, we consider data pruning as a method of data-efficient training of LLMs, where we take a data compression view on data pruning. We argue that the amount of information in a sample, or the achievable compression of its description length, represents its sample importance. The key idea is that less informative samples are likely to contain redundant information, and thus should be pruned first. We leverage the log-likelihood function of trained models as a surrogate to measure the information content of samples. Experiments reveal a surprising insight: information-based pruning can enhance the generalization capability of the model and improve language modeling and downstream task performance compared to a model trained on the entire dataset.
[ "['Minsang Kim' 'Seungjun Baek']" ]
null
null
2406.14142
null
null
http://arxiv.org/pdf/2406.14142v1
2024-06-20T09:34:31Z
2024-06-20T09:34:31Z
Geometric Self-Supervised Pretraining on 3D Protein Structures using Subgraphs
Protein representation learning aims to learn informative protein embeddings capable of addressing crucial biological questions, such as protein function prediction. Although sequence-based transformer models have shown promising results by leveraging the vast amount of protein sequence data in a self-supervised way, there is still a gap in applying these methods to 3D protein structures. In this work, we propose a pre-training scheme that goes beyond trivial masking methods by leveraging the 3D and hierarchical structures of proteins. We propose a novel self-supervised method to pretrain 3D graph neural networks on 3D protein structures, by predicting the distances between local geometric centroids of protein subgraphs and the global geometric centroid of the protein. The motivation for this method is twofold. First, the relative spatial arrangements and geometric relationships among different regions of a protein are crucial for its function. Moreover, proteins are often organized in a hierarchical manner, where smaller substructures, such as secondary structure elements, assemble into larger domains. By considering subgraphs and their relationships to the global protein structure, the model can learn to reason about these hierarchical levels of organization. We experimentally show that our proposed pretraining strategy leads to significant improvements in the performance of 3D GNNs in various protein classification tasks.
[ "['Michail Chatzianastasis' 'George Dasoulas' 'Michalis Vazirgiannis']" ]
null
null
2406.14144
null
null
http://arxiv.org/pdf/2406.14144v1
2024-06-20T09:35:22Z
2024-06-20T09:35:22Z
Finding Safety Neurons in Large Language Models
Large language models (LLMs) excel in various capabilities but also pose safety risks such as generating harmful content and misinformation, even after safety alignment. In this paper, we explore the inner mechanisms of safety alignment from the perspective of mechanistic interpretability, focusing on identifying and analyzing safety neurons within LLMs that are responsible for safety behaviors. We propose generation-time activation contrasting to locate these neurons and dynamic activation patching to evaluate their causal effects. Experiments on multiple recent LLMs show that: (1) Safety neurons are sparse and effective. We can restore $90\%$ safety performance with intervention only on about $5\%$ of all the neurons. (2) Safety neurons encode transferable mechanisms. They exhibit consistent effectiveness on different red-teaming datasets. The finding of safety neurons also interprets the "alignment tax". We observe that the identified key neurons for safety and helpfulness significantly overlap, but they require different activation patterns of the shared neurons. Furthermore, we demonstrate an application of safety neurons in detecting unsafe outputs before generation. Our findings may promote further research on understanding LLM alignment. The source codes will be publicly released to facilitate future research.
[ "['Jianhui Chen' 'Xiaozhi Wang' 'Zijun Yao' 'Yushi Bai' 'Lei Hou'\n 'Juanzi Li']" ]
null
null
2406.14149
null
null
http://arxiv.org/pdf/2406.14149v1
2024-06-20T09:43:36Z
2024-06-20T09:43:36Z
CheMFi: A Multifidelity Dataset of Quantum Chemical Properties of Diverse Molecules
Progress in both Machine Learning (ML) and conventional Quantum Chemistry (QC) computational methods has resulted in highly accurate ML models for QC properties ranging from atomization energies to excitation energies. Various datasets such as MD17, MD22, and WS22, which consist of properties calculated at some level of QC method, or fidelity, have been generated to benchmark such ML models. The term fidelity refers to the accuracy of the chosen QC method with respect to the actual real value of the property. The higher the fidelity, the more accurate the calculated property, albeit at a higher computational cost. Research in multifidelity ML (MFML) methods, where ML models are trained on data from more than one numerical QC method, has shown the effectiveness of such models over single-fidelity methods. Much research is progressing in this direction for diverse applications ranging from energy band gaps to excitation energies. A major hurdle for effective research in this field is the lack of a diverse multifidelity dataset for benchmarking. Here, we present a comprehensive multifidelity dataset drawn from the WS22 molecular conformations. We provide the quantum Chemistry MultiFidelity (CheMFi) dataset consisting of five fidelities calculated with the TD-DFT formalism. The fidelities differ in their basis set choice and are namely: STO-3G, 3-21G, 6-31G, def2-SVP, and def2-TZVP. CheMFi offers the community a variety of QC properties including vertical excitation energies, oscillator strengths, molecular dipole moments, and ground state energies. In addition to the dataset, multifidelity benchmarks are set with state-of-the-art MFML and optimized-MFML.
[ "['Vivin Vinod' 'Peter Zaspel']" ]
null
null
2406.14150
null
null
http://arxiv.org/pdf/2406.14150v1
2024-06-20T09:44:53Z
2024-06-20T09:44:53Z
Multi-modal Transfer Learning between Biological Foundation Models
Biological sequences encode fundamental instructions for the building blocks of life, in the form of DNA, RNA, and proteins. Modeling these sequences is key to understanding disease mechanisms and is an active research area in computational biology. Recently, Large Language Models have shown great promise in solving certain biological tasks, but current approaches are limited to a single sequence modality (DNA, RNA, or protein). Key problems in genomics intrinsically involve multiple modalities, but it remains unclear how to adapt general-purpose sequence models to those cases. In this work we propose a multi-modal model that connects DNA, RNA, and proteins by leveraging information from different pre-trained modality-specific encoders. We demonstrate its capabilities by applying it to the largely unsolved problem of predicting how multiple RNA transcript isoforms originate from the same gene (i.e. same DNA sequence) and map to different transcription expression levels across various human tissues. We show that our model, dubbed IsoFormer, is able to accurately predict differential transcript expression, outperforming existing methods and leveraging the use of multiple modalities. Our framework also achieves efficient knowledge transfer from the encoders' pre-training as well as between modalities. We open-source our model, paving the way for new multi-modal gene expression approaches.
[ "['Juan Jose Garau-Luis' 'Patrick Bordes' 'Liam Gonzalez' 'Masa Roller'\n 'Bernardo P. de Almeida' 'Lorenz Hexemer' 'Christopher Blum'\n 'Stefan Laurent' 'Jan Grzegorzewski' 'Maren Lang' 'Thomas Pierrot'\n 'Guillaume Richard']" ]
null
null
2406.14154
null
null
http://arxiv.org/pdf/2406.14154v1
2024-06-20T09:52:10Z
2024-06-20T09:52:10Z
Watching the Watchers: A Comparative Fairness Audit of Cloud-based Content Moderation Services
Online platforms face the challenge of moderating an ever-increasing volume of content, including harmful hate speech. In the absence of clear legal definitions and a lack of transparency regarding the role of algorithms in shaping decisions on content moderation, there is a critical need for external accountability. Our study contributes to filling this gap by systematically evaluating four leading cloud-based content moderation services through a third-party audit, highlighting issues such as biases against minorities and vulnerable groups that may arise through over-reliance on these services. Using a black-box audit approach and four benchmark data sets, we measure performance in explicit and implicit hate speech detection as well as counterfactual fairness through perturbation sensitivity analysis and present disparities in performance for certain target identity groups and data sets. Our analysis reveals that all services had difficulties detecting implicit hate speech, which relies on more subtle and codified messages. Moreover, our results point to the need to remove group-specific bias. It seems that biases towards some groups, such as Women, have been mostly rectified, while biases towards other groups, such as LGBTQ+ and PoC, remain.
[ "['David Hartmann' 'Amin Oueslati' 'Dimitri Staufer']" ]
null
null
2406.14156
null
null
http://arxiv.org/pdf/2406.14156v1
2024-06-20T09:53:56Z
2024-06-20T09:53:56Z
Tractable Equilibrium Computation in Markov Games through Risk Aversion
A significant roadblock to the development of principled multi-agent reinforcement learning is the fact that desired solution concepts like Nash equilibria may be intractable to compute. To overcome this obstacle, we take inspiration from behavioral economics and show that -- by imbuing agents with important features of human decision-making like risk aversion and bounded rationality -- a class of risk-averse quantal response equilibria (RQE) become tractable to compute in all $n$-player matrix and finite-horizon Markov games. In particular, we show that they emerge as the endpoint of no-regret learning in suitably adjusted versions of the games. Crucially, the class of computationally tractable RQE is independent of the underlying game structure and only depends on agents' degree of risk-aversion and bounded rationality. To validate the richness of this class of solution concepts we show that it captures people's patterns of play in a number of 2-player matrix games previously studied in experimental economics. Furthermore, we give a first analysis of the sample complexity of computing these equilibria in finite-horizon Markov games when one has access to a generative model and validate our findings on a simple multi-agent reinforcement learning benchmark.
[ "['Eric Mazumdar' 'Kishan Panaganti' 'Laixi Shi']" ]
null
null
2406.14161
null
null
http://arxiv.org/pdf/2406.14161v1
2024-06-20T10:01:22Z
2024-06-20T10:01:22Z
Iterative Sizing Field Prediction for Adaptive Mesh Generation From Expert Demonstrations
Many engineering systems require accurate simulations of complex physical systems. Yet, analytical solutions are only available for simple problems, necessitating numerical approximations such as the Finite Element Method (FEM). The cost and accuracy of the FEM scale with the resolution of the underlying computational mesh. To balance computational speed and accuracy meshes with adaptive resolution are used, allocating more resources to critical parts of the geometry. Currently, practitioners often resort to hand-crafted meshes, which require extensive expert knowledge and are thus costly to obtain. Our approach, Adaptive Meshing By Expert Reconstruction (AMBER), views mesh generation as an imitation learning problem. AMBER combines a graph neural network with an online data acquisition scheme to predict the projected sizing field of an expert mesh on a given intermediate mesh, creating a more accurate subsequent mesh. This iterative process ensures efficient and accurate imitation of expert mesh resolutions on arbitrary new geometries during inference. We experimentally validate AMBER on heuristic 2D meshes and 3D meshes provided by a human expert, closely matching the provided demonstrations and outperforming a single-step CNN baseline.
[ "['Niklas Freymuth' 'Philipp Dahlinger' 'Tobias Würth' 'Philipp Becker'\n 'Aleksandar Taranovic' 'Onno Grönheim' 'Luise Kärger' 'Gerhard Neumann']" ]
null
null
2406.14169
null
null
http://arxiv.org/pdf/2406.14169v1
2024-06-20T10:20:02Z
2024-06-20T10:20:02Z
Optimizing Novelty of Top-k Recommendations using Large Language Models and Reinforcement Learning
Given an input query, a recommendation model is trained using user feedback data (e.g., click data) to output a ranked list of items. In real-world systems, besides accuracy, an important consideration for a new model is the novelty of its top-k recommendations w.r.t. an existing deployed model. However, novelty of top-k items is a difficult goal to optimize a model for, since it involves a non-differentiable sorting operation on the model's predictions. Moreover, novel items, by definition, do not have any user feedback data. Given the semantic capabilities of large language models, we address these problems using a reinforcement learning (RL) formulation where large language models provide feedback for the novel items. However, given millions of candidate items, the sample complexity of a standard RL algorithm can be prohibitively high. To reduce sample complexity, we reduce the top-k list reward to a set of item-wise rewards and reformulate the state space to consist of <query, item> tuples, such that the action space is reduced to a binary decision; we show that this reformulation results in a significantly lower complexity when the number of items is large. We evaluate the proposed algorithm on improving novelty for a query-ad recommendation task on a large-scale search engine. Compared to supervised finetuning on recent <query, ad> pairs, the proposed RL-based algorithm leads to significant novelty gains with minimal loss in recall. We obtain similar results on the ORCAS query-webpage matching dataset and a product recommendation dataset based on Amazon reviews.
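A minimal sketch of the reformulation's flavour: each <query, item> tuple is a state, the action is a binary include/exclude decision, and a per-item reward stands in for LLM feedback. The feature construction, the `llm_reward` stub, and the Bernoulli policy are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def llm_reward(features):
    """Stand-in for LLM feedback on one <query, item> pair (hypothetical)."""
    return float(features @ np.array([0.5, -0.2, 0.8]) > 0.3)

# Binary-action policy over <query, item> states: pi(include | s) = sigmoid(w.s).
# No sorting over the full item list is ever needed — rewards are item-wise.
w = np.zeros(3)
lr = 0.1
for step in range(2000):
    s = rng.normal(size=3)              # one <query, item> feature vector
    p = 1.0 / (1.0 + np.exp(-w @ s))
    a = rng.random() < p                # include the item or not
    r = llm_reward(s) if a else 0.0     # item-wise reward, no top-k operation
    w += lr * r * (float(a) - p) * s    # REINFORCE step for a Bernoulli policy
```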
[ "['Amit Sharma' 'Hua Li' 'Xue Li' 'Jian Jiao']" ]
null
null
2406.14183
null
null
http://arxiv.org/pdf/2406.14183v2
2024-06-21T09:57:50Z
2024-06-20T10:43:28Z
Latent Functional Maps
Neural models learn data representations that lie on low-dimensional manifolds, yet modeling the relation between these representational spaces is an ongoing challenge. By integrating spectral geometry principles into neural modeling, we show that this problem can be better addressed in the functional domain, mitigating complexity while enhancing interpretability and performance on downstream tasks. To this end, we introduce a multi-purpose framework to the representation learning community, which makes it possible to: (i) compare different spaces in an interpretable way and measure their intrinsic similarity; (ii) find correspondences between them, both in unsupervised and weakly supervised settings; and (iii) effectively transfer representations between distinct spaces. We validate our framework on various applications, ranging from stitching to retrieval tasks, demonstrating that latent functional maps can serve as a Swiss Army knife for representation alignment.
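A rough sketch of the functional-map idea between two latent spaces, assuming paired samples: express both representations in a truncated basis and solve a small least-squares problem for the map between coefficient spaces. PCA bases stand in here for the spectral (graph-Laplacian) bases the paper builds on.

```python
import numpy as np

def pca_basis(X, k):
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                            # (dim, k) orthonormal basis

def latent_functional_map(X, Y, k=10):
    """Least-squares map C between coefficient spaces of two representations;
    PCA bases are a stand-in for the Laplacian eigenbases of functional maps."""
    Phi_x, Phi_y = pca_basis(X, k), pca_basis(Y, k)
    A = (X - X.mean(0)) @ Phi_x                # (n, k) coefficients, space 1
    B = (Y - Y.mean(0)) @ Phi_y                # (n, k) coefficients, space 2
    C, *_ = np.linalg.lstsq(A, B, rcond=None)  # solve A @ C ≈ B
    return C, Phi_x, Phi_y

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
Y = X @ rng.normal(size=(32, 64)) * 0.1        # a second, linearly related space
C, *_ = latent_functional_map(X, Y)            # small (k, k) map between spaces
```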
[ "['Marco Fumero' 'Marco Pegoraro' 'Valentino Maiorca' 'Francesco Locatello'\n 'Emanuele Rodolà']" ]
null
null
2406.14191
null
null
http://arxiv.org/pdf/2406.14191v2
2024-07-05T07:38:02Z
2024-06-20T10:51:06Z
Temporal Knowledge Graph Question Answering: A Survey
Knowledge Base Question Answering (KBQA) has been a long-standing field to answer questions based on knowledge bases. Recently, the evolving dynamics of knowledge have attracted a growing interest in Temporal Knowledge Graph Question Answering (TKGQA), an emerging task to answer temporal questions. However, this field grapples with ambiguities in defining temporal questions and lacks a systematic categorization of existing methods for TKGQA. In response, this paper provides a thorough survey from two perspectives: the taxonomy of temporal questions and the methodological categorization for TKGQA. Specifically, we first establish a detailed taxonomy of temporal questions engaged in prior studies. Subsequently, we provide a comprehensive review of TKGQA techniques of two categories: semantic parsing-based and TKG embedding-based. Building on this review, the paper outlines potential research directions aimed at advancing the field of TKGQA. This work aims to serve as a comprehensive reference for TKGQA and to stimulate further research.
[ "['Miao Su' 'Zixuan Li' 'Zhuo Chen' 'Long Bai' 'Xiaolong Jin' 'Jiafeng Guo']" ]
null
null
2406.14207
null
null
http://arxiv.org/pdf/2406.14207v3
2024-06-27T07:01:27Z
2024-06-20T11:25:50Z
LayerMatch: Do Pseudo-labels Benefit All Layers?
Deep neural networks have achieved remarkable performance across various tasks when supplied with large-scale labeled data. However, the collection of labeled data can be time-consuming and labor-intensive. Semi-supervised learning (SSL), particularly through pseudo-labeling algorithms that iteratively assign pseudo-labels for self-training, offers a promising solution to mitigate the dependency on labeled data. Previous research generally applies a uniform pseudo-labeling strategy across all model layers, assuming that pseudo-labels exert a uniform influence throughout. In contrast, our theoretical analysis and empirical experiments demonstrate that the feature extraction layer and the linear classification layer have distinct learning behaviors in response to pseudo-labels. Based on these insights, we develop two layer-specific pseudo-label strategies, termed Grad-ReLU and Avg-Clustering. Grad-ReLU mitigates the impact of noisy pseudo-labels by removing their detrimental gradient effects in the linear classification layer. Avg-Clustering accelerates the convergence of the feature extraction layer towards stable clustering centers by integrating consistent outputs. Our approach, LayerMatch, which integrates these two strategies, avoids severe interference from noisy pseudo-labels in the linear classification layer while accelerating the clustering capability of the feature extraction layer. Through extensive experimentation, our approach consistently demonstrates exceptional performance on standard semi-supervised learning benchmarks, achieving a significant improvement of 10.38% over the baseline method and a 2.44% increase compared to state-of-the-art methods.
[ "['Chaoqi Liang' 'Guanglei Yang' 'Lifeng Qiao' 'Zitong Huang'\n 'Hongliang Yan' 'Yunchao Wei' 'Wangmeng Zuo']" ]
null
null
2406.14210
null
null
http://arxiv.org/pdf/2406.14210v1
2024-06-20T11:26:32Z
2024-06-20T11:26:32Z
Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on Large-Scale Synthetic Neuroimaging Dataset
Structural magnetic resonance imaging (MRI) studies have shown that Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain. However, the absence of segmentations that highlight brain degenerative changes presents unique challenges for training CNN-based classifiers in a supervised fashion. In this work, we evaluated several unsupervised methods to train a feature extractor for downstream AD vs. CN classification. Using the 3D T1-weighted MRI data of cognitively normal (CN) subjects from the synthetic neuroimaging LDM100K dataset, lightweight 3D CNN-based models are trained for brain age prediction, brain image rotation classification, brain image reconstruction, and a multi-head task combining all three tasks into one. Feature extractors trained on the LDM100K synthetic dataset achieved performance similar to that of the same models trained on real-world data. This supports the feasibility of utilising large-scale synthetic data for pretext task training. All training and testing splits are performed at the subject level to prevent data leakage. Alongside simple preprocessing steps, the random cropping data augmentation technique shows consistent improvement across all experiments.
[ "['Chen Zheng']" ]
null
null
2406.14217
null
null
http://arxiv.org/pdf/2406.14217v1
2024-06-20T11:33:14Z
2024-06-20T11:33:14Z
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning
Federated learning is highly susceptible to model poisoning attacks, especially those meticulously crafted to target the server. Traditional defense methods mainly focus on assessing model updates or on robust aggregation against manually crafted, myopic attacks; when facing advanced attacks, their defense stability is notably insufficient. It is therefore imperative to develop adaptive defenses against such advanced poisoning attacks. We find that benign clients exhibit significantly higher data distribution stability than malicious clients in federated learning, in both CV and NLP tasks, so malicious clients can be recognized by observing the stability of their data distributions. In this paper, we propose AdaAggRL, an RL-based adaptive aggregation method, to defend against sophisticated poisoning attacks. Specifically, we first utilize distribution learning to simulate the clients' data distributions. Then, we use the maximum mean discrepancy (MMD) to calculate the pairwise similarities among the current local model's data distribution, its historical data distribution, and the global model's data distribution. Finally, we use policy learning to adaptively determine the aggregation weights based on these similarities. Experiments on four real-world datasets demonstrate that the proposed defense significantly outperforms widely adopted defense models under sophisticated attacks.
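The MMD computation at the heart of the similarity step can be sketched as follows (a biased RBF-kernel estimator for brevity; the kernel choice and bandwidth are assumptions, not details from the paper).

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared MMD with an RBF kernel (biased estimator, kept short)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(100, 8))
shifted = rng.normal(0.5, 1.0, size=(100, 8))   # e.g., a poisoned client
print(rbf_mmd2(benign, benign[::-1]))  # small: same underlying distribution
print(rbf_mmd2(benign, shifted))       # larger: distributions differ
```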
[ "['Yujing Wang' 'Hainan Zhang' 'Sijia Wen' 'Wangjie Qiu' 'Binghui Guo']" ]
null
null
2406.14220
null
null
http://arxiv.org/pdf/2406.14220v2
2024-07-01T16:30:23Z
2024-06-20T11:40:12Z
Evaluation of Deep Learning Semantic Segmentation for Land Cover Mapping on Multispectral, Hyperspectral and High Spatial Aerial Imagery
With the rise of climate change, land cover mapping has become an urgent need in environmental monitoring. The accuracy of land cover classification increasingly depends on improvements in remote sensing data. Land cover classification using satellite imagery has been explored and has become more prevalent in recent years, but existing methodologies retain drawbacks: they are subjective and time-consuming. Some deep learning techniques have been utilized to overcome these limitations. However, most studies used just one image type to evaluate algorithms for land cover mapping. Therefore, our study conducted deep learning semantic segmentation on multispectral, hyperspectral, and high-spatial-resolution aerial image datasets for land cover mapping. This research implemented semantic segmentation methods, namely U-Net, LinkNet, FPN, and PSPNet, for categorizing vegetation, water, and others (i.e., soil and impervious surfaces). The LinkNet model obtained high accuracy, with an IoU (Intersection over Union) of 0.92 on all datasets, which is comparable with the other techniques mentioned. In the evaluation across image types, the multispectral images showed higher performance, with an IoU and F1-score of 0.993 and 0.997, respectively. Our results highlight the efficiency and broad applicability of LinkNet and multispectral imagery for land cover classification. This research contributes an open-source approach to land cover segmentation for long-term future applications.
[ "['Ilham Adi Panuntun' 'Ying-Nong Chen' 'Ilham Jamaluddin'\n 'Thi Linh Chi Tran']" ]
null
null
2406.14231
null
null
http://arxiv.org/pdf/2406.14231v1
2024-06-20T11:51:25Z
2024-06-20T11:51:25Z
aeon: a Python toolkit for learning from time series
aeon is a unified Python 3 library for all machine learning tasks involving time series. The package contains modules for time series forecasting, classification, extrinsic regression and clustering, as well as a variety of utilities, transformations and distance measures designed for time series data. aeon also has a number of experimental modules for tasks such as anomaly detection, similarity search and segmentation. aeon follows the scikit-learn API as much as possible to help new users and enable easy integration of aeon estimators with useful tools such as model selection and pipelines. It provides a broad library of time series algorithms, including efficient implementations of the very latest advances in research. Using a system of optional dependencies, aeon integrates a wide variety of packages into a single interface while keeping the core framework's dependencies minimal. The package is distributed under the 3-Clause BSD license and is available at https://github.com/aeon-toolkit/aeon. This version was submitted to the JMLR journal on 02 Nov 2023 for v0.5.0 of aeon. At the time of this preprint, aeon has released v0.9.0 and has had substantial changes.
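A hypothetical usage sketch of the scikit-learn-style API described above; the class path and the `distance` parameter are assumed from aeon v0.x and should be checked against the current documentation.

```python
# Assumed class path for aeon v0.x; verify against the docs before use.
import numpy as np
from aeon.classification.distance_based import KNeighborsTimeSeriesClassifier

# aeon expects 3D collections shaped (n_cases, n_channels, n_timepoints).
X = np.random.default_rng(0).normal(size=(40, 1, 100))
y = np.repeat([0, 1], 20)

clf = KNeighborsTimeSeriesClassifier(distance="dtw")  # DTW-based 1-NN
clf.fit(X, y)                 # familiar scikit-learn fit/predict pattern
print(clf.predict(X[:5]))
```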
[ "['Matthew Middlehurst' 'Ali Ismail-Fawaz' 'Antoine Guillaume'\n 'Christopher Holder' 'David Guijo Rubio' 'Guzal Bulatova'\n 'Leonidas Tsaprounis' 'Lukasz Mentel' 'Martin Walter' 'Patrick Schäfer'\n 'Anthony Bagnall']" ]
null
null
2406.14232
null
null
http://arxiv.org/pdf/2406.14232v1
2024-06-20T11:55:39Z
2024-06-20T11:55:39Z
Enhancing robustness of data-driven SHM models: adversarial training with circle loss
Structural health monitoring (SHM) is critical to safeguarding the safety and reliability of aerospace, civil, and mechanical infrastructure. Machine learning-based data-driven approaches have gained popularity in SHM due to advancements in sensors and computational power. However, machine learning models used in SHM are vulnerable to adversarial examples -- even small changes in input can lead to different model outputs. This paper aims to address this problem by discussing adversarial defenses in SHM. In this paper, we propose an adversarial training method for defense, which uses circle loss to optimize the distance between features in training to keep examples away from the decision boundary. Through this simple yet effective constraint, our method demonstrates substantial improvements in model robustness, surpassing existing defense mechanisms.
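As a generic illustration of adversarial training (the skeleton only; the paper's circle-loss feature constraint is omitted), one FGSM-based training step might look like the sketch below.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.03):
    """One adversarial-training step with an FGSM perturbation. The circle-loss
    feature-distance term described above is omitted; only the standard
    adversarial-training skeleton is shown."""
    x_adv = x.detach().clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()   # worst-case input
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)          # train on adv. inputs
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

# Tiny demo model and batch (stand-ins for an SHM sensor-data classifier).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,))
print(adversarial_training_step(model, x, y, opt))
```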
[ "['Xiangli Yang' 'Xijie Deng' 'Hanwei Zhang' 'Yang Zou' 'Jianxi Yang']" ]
null
null
2406.14246
null
null
http://arxiv.org/pdf/2406.14246v1
2024-06-20T12:14:09Z
2024-06-20T12:14:09Z
Non-Negative Universal Differential Equations With Applications in Systems Biology
Universal differential equations (UDEs) leverage the respective advantages of mechanistic models and artificial neural networks and combine them into one dynamic model. However, these hybrid models can suffer from unrealistic solutions, such as negative values for biochemical quantities. We present non-negative UDEs (nUDEs), a constrained UDE variant that guarantees non-negative values. Furthermore, we explore regularisation techniques to improve the generalisation and interpretability of UDEs.
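One standard way to guarantee non-negative states, shown below, is to integrate the dynamics in log-space; this is an illustrative device, not necessarily the nUDE construction, and the "neural" term is a fixed stand-in for a trained network.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    """Toy hybrid right-hand side: a mechanistic decay term plus a 'neural'
    correction (here a fixed tanh, standing in for a trained network)."""
    return -0.5 * x + 0.1 * np.tanh(x)

def rhs_log(t, z):
    # Integrate z = log(x): dz/dt = f(exp(z)) / exp(z), so x = exp(z) > 0
    # holds by construction along the whole trajectory.
    x = np.exp(z)
    return rhs(t, x) / x

sol = solve_ivp(rhs_log, (0, 10), np.log([1.0]))
x = np.exp(sol.y)   # biochemical quantities stay positive by construction
print(x.min())
```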
[ "['Maren Philipps' 'Antonia Körner' 'Jakob Vanhoefer' 'Dilan Pathirana'\n 'Jan Hasenauer']" ]
null
null
2406.14259
null
null
http://arxiv.org/abs/2406.14259v1
2024-06-20T12:28:47Z
2024-06-20T12:28:47Z
MEAT: Median-Ensemble Adversarial Training for Improving Robustness and Generalization
Self-ensemble adversarial training methods improve model robustness by ensembling models from different training epochs, for example via model weight averaging (WA). However, previous research has shown that self-ensemble defense methods in adversarial training (AT) still suffer from robust overfitting, which severely affects generalization performance. Empirically, in the late phases of training, AT overfits to the extent that the individual models used for weight averaging also overfit and produce anomalous weight values; because these weight anomalies are not removed, the self-ensemble model continues to undergo robust overfitting. To solve this problem, we tackle the influence of outliers in the weight space and propose an easy-to-operate and effective Median-Ensemble Adversarial Training (MEAT) method that addresses the robust overfitting of self-ensemble defenses at its source by taking the median of the historical model weights. Experimental results show that MEAT achieves the best robustness against the powerful AutoAttack and can effectively alleviate robust overfitting. We further demonstrate that most defense methods can improve robust generalization and robustness by combining with MEAT.
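The core of the median-ensemble idea is easy to sketch: take an element-wise median over historical checkpoints so anomalous weight values do not contaminate the ensemble. The checkpoint format below (plain state dicts) is an assumption for illustration.

```python
import torch

def median_of_weights(state_dicts):
    """Element-wise median across historical checkpoints: unlike the mean used
    in weight averaging, the median is robust to anomalous weight values."""
    return {
        k: torch.median(torch.stack([sd[k] for sd in state_dicts]), dim=0).values
        for k in state_dicts[0]
    }

# Toy checkpoints: the outlier in the third one is ignored by the median.
ckpts = [{"w": torch.tensor([1.0, 2.0])},
         {"w": torch.tensor([1.1, 2.1])},
         {"w": torch.tensor([9.0, -5.0])}]   # anomalous late-training weights
print(median_of_weights(ckpts))             # {'w': tensor([1.1000, 2.0000])}
```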
[ "['Zhaozhe Hu' 'Jia-Li Yin' 'Bin Chen' 'Luojun Lin' 'Bo-Hao Chen'\n 'Ximeng Liu']" ]
null
null
2406.14265
null
null
http://arxiv.org/pdf/2406.14265v1
2024-06-20T12:41:39Z
2024-06-20T12:41:39Z
VeriFlow: Modeling Distributions for Neural Network Verification
Formal verification has emerged as a promising method to ensure the safety and reliability of neural networks. Naively verifying a safety property amounts to ensuring the safety of a neural network for the whole input space, irrespective of any training or test set. However, this also implies that the safety of the neural network is checked even for inputs that do not occur in the real world and have no meaning at all, often resulting in spurious errors. To tackle this shortcoming, we propose the VeriFlow architecture as a flow-based density model tailored to allow any verification approach to restrict its search to a data distribution of interest. We argue that our architecture is particularly well suited for this purpose because of two major properties. First, we show that the transformation and log-density function that are defined by our model are piece-wise affine. Therefore, the model allows the usage of verifiers based on SMT with linear arithmetic. Second, upper density level sets (UDLs) of the data distribution take the shape of an $L^p$-ball in the latent space. As a consequence, representations of UDLs specified by a given probability are effectively computable in the latent space. This allows for SMT and abstract interpretation approaches with fine-grained, probabilistically interpretable control over how (a)typical the inputs subject to verification are.
[ "['Faried Abu Zaid' 'Daniel Neider' 'Mustafa Yalçıner']" ]
null
null
2406.14274
null
null
http://arxiv.org/pdf/2406.14274v1
2024-06-20T12:54:07Z
2024-06-20T12:54:07Z
Learning to Discover Knowledge: A Weakly-Supervised Partial Domain Adaptation Approach
Domain adaptation has shown appealing performance by leveraging knowledge from a source domain with rich annotations. However, for a specific target task, it is cumbersome to collect related and high-quality source domains. In real-world scenarios, large-scale datasets corrupted with noisy labels are easy to collect, stimulating a great demand for automatic recognition in a generalized setting, i.e., weakly-supervised partial domain adaptation (WS-PDA), which transfers a classifier from a large source domain with noisy labels to a small unlabeled target domain. As such, the key issues of WS-PDA are: 1) how to sufficiently discover the knowledge from the noisy labeled source domain and the unlabeled target domain, and 2) how to successfully adapt the knowledge across domains. In this paper, we propose a simple yet effective domain adaptation approach, termed self-paced transfer classifier learning (SP-TCL), to address the above issues; it could be regarded as a well-performing baseline for several generalized domain adaptation tasks. The proposed model is established upon the self-paced learning scheme, seeking a preferable classifier for the target domain. Specifically, SP-TCL learns to discover faithful knowledge via a carefully designed prudent loss function and simultaneously adapts the learned knowledge to the target domain by iteratively excluding source examples from training in a self-paced fashion. Extensive evaluations on several benchmark datasets demonstrate that SP-TCL significantly outperforms state-of-the-art approaches on several generalized domain adaptation tasks.
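A generic self-paced selection schedule, loosely in the spirit of SP-TCL's iterative exclusion of source examples; the linear threshold schedule and its constants are assumptions made purely for illustration.

```python
import numpy as np

def self_paced_selection(losses, age, growth=0.5, lam0=1.0):
    """Keep examples whose loss is below a threshold that grows with training
    'age': easy (likely clean) source examples enter training first, while
    hard or noisy ones stay excluded until later — a generic self-paced
    schedule, not SP-TCL's prudent loss itself."""
    lam = lam0 + growth * age
    return losses < lam

rng = np.random.default_rng(0)
losses = rng.exponential(1.0, size=10)   # per-example losses from the model
for epoch in range(3):
    mask = self_paced_selection(losses, age=epoch)
    print(f"epoch {epoch}: training on {mask.sum()} / {len(mask)} examples")
```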
[ "['Mengcheng Lan' 'Min Meng' 'Jun Yu' 'Jigang Wu']" ]
null
null
2406.14281
null
null
http://arxiv.org/pdf/2406.14281v1
2024-06-20T13:07:06Z
2024-06-20T13:07:06Z
FairX: A comprehensive benchmarking tool for model analysis using fairness, utility, and explainability
We present FairX, an open-source Python-based benchmarking tool designed for the comprehensive analysis of models under the umbrella of fairness, utility, and eXplainability (XAI). FairX enables users to train benchmark bias-removal models, evaluate their fairness using a wide array of fairness and data-utility metrics, and generate explanations for model predictions, all within a unified framework. Existing benchmarking tools cannot evaluate synthetic data generated by fair generative models, nor do they support training fair generative models. In FairX, we add fair generative models to our fair-model library (pre-processing, in-processing, post-processing), together with evaluation metrics for assessing the quality of synthetic fair data. This version of FairX supports both tabular and image datasets. It also allows users to provide their own custom datasets. The open-source FairX benchmarking package is publicly available at https://github.com/fahim-sikder/FairX.
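As an example of the kind of fairness metric such a tool reports, the snippet below computes the demographic parity difference; this is the generic formula, not FairX's actual API.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|: one standard fairness metric a
    benchmarking tool like FairX reports (generic formula, not FairX's API)."""
    g = np.asarray(group, dtype=bool)
    return abs(y_pred[~g].mean() - y_pred[g].mean())

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=200)   # binary model predictions
group = rng.integers(0, 2, size=200)    # protected-attribute indicator
print(demographic_parity_difference(y_pred, group))  # 0.0 means parity
```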
[ "['Md Fahim Sikder' 'Resmi Ramachandranpillai' 'Daniel de Leng'\n 'Fredrik Heintz']" ]
null
null
2406.14287
null
null
http://arxiv.org/pdf/2406.14287v1
2024-06-20T13:14:00Z
2024-06-20T13:14:00Z
Segmentation of Non-Small Cell Lung Carcinomas: Introducing DRU-Net and Multi-Lens Distortion
Considering the increased workload in pathology laboratories today, automated tools such as artificial intelligence models can help pathologists with their tasks and ease the workload. In this paper, we propose a segmentation model (DRU-Net) that can provide a delineation of human non-small cell lung carcinomas and an augmentation method that can improve classification results. The proposed model is a fused combination of truncated pre-trained DenseNet201 and ResNet101V2 as a patch-wise classifier, followed by a lightweight U-Net as a refinement model. We used two datasets (the Norwegian Lung Cancer Biobank and the Haukeland University Hospital lung cancer cohort) to create our proposed model. The DRU-Net model achieves an average Dice similarity coefficient of 0.91. The proposed spatial augmentation method (multi-lens distortion) improved the network performance by 3%. Our findings show that choosing image patches that specifically include regions of interest leads to better results for the patch-wise classifier compared to other sampling methods. The qualitative analysis showed that the DRU-Net model is generally successful in detecting the tumor. On the test set, some of the cases showed areas of false positive and false negative segmentation in the periphery, particularly in tumors with inflammatory and reactive changes.
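For reference, the Dice similarity coefficient reported above can be computed as follows for binary masks (standard definition; the toy masks are illustrative).

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks; 1.0 is perfect
    overlap, 0.0 is no overlap."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 32x32 squares, one shifted by 4 rows: partial overlap.
a = np.zeros((64, 64), dtype=np.uint8); a[16:48, 16:48] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:52, 16:48] = 1
print(round(dice_coefficient(a, b), 3))  # 0.875
```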
[ "['Soroush Oskouei' 'Marit Valla' 'André Pedersen' 'Erik Smistad'\n 'Vibeke Grotnes Dale' 'Maren Høibø' 'Sissel Gyrid Freim Wahl'\n 'Mats Dehli Haugum' 'Thomas Langø' 'Maria Paula Ramnefjell'\n 'Lars Andreas Akslen' 'Gabriel Kiss' 'Hanne Sorger']" ]
null
null
2406.14288
null
null
http://arxiv.org/pdf/2406.14288v1
2024-06-20T13:14:44Z
2024-06-20T13:14:44Z
Revisiting Modularity Maximization for Graph Clustering: A Contrastive Learning Perspective
Graph clustering, a fundamental and challenging task in graph mining, aims to classify nodes in a graph into several disjoint clusters. In recent years, graph contrastive learning (GCL) has emerged as a dominant line of research in graph clustering and advances the state of the art. However, GCL-based methods heavily rely on graph augmentations and contrastive schemes, which may potentially introduce challenges such as semantic drift and scalability issues. Another promising line of research involves the adoption of modularity maximization, a popular and effective measure for community detection, as the guiding principle for clustering tasks. Despite the recent progress, the underlying mechanism of modularity maximization is still not well understood. In this work, we dig into the hidden success of modularity maximization for graph clustering. Our analysis reveals the strong connections between modularity maximization and graph contrastive learning, where positive and negative examples are naturally defined by modularity. In light of our results, we propose a community-aware graph clustering framework, coined MAGI, which leverages modularity maximization as a contrastive pretext task to effectively uncover the underlying community information in graphs while avoiding the problem of semantic drift. Extensive experiments on multiple graph datasets verify the effectiveness of MAGI in terms of scalability and clustering performance compared to state-of-the-art graph clustering methods. Notably, MAGI easily scales to a graph with 100M nodes while outperforming strong baselines.
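The modularity matrix that supplies the contrastive signal is straightforward to form; the sketch below marks pairs connected above chance as candidate positives, a simplification of MAGI's pretext task.

```python
import numpy as np

def modularity_matrix(A):
    """B_ij = A_ij - d_i d_j / (2m): positive entries mark node pairs that are
    more connected than expected by chance — natural contrastive positives."""
    d = A.sum(1)
    m2 = d.sum()                         # equals 2m for an undirected graph
    return A - np.outer(d, d) / m2

# Two 3-node cliques joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
B = modularity_matrix(A)
positives = np.argwhere(np.triu(B, 1) > 0)  # pairs connected above chance
print(positives)
```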
[ "['Yunfei Liu' 'Jintang Li' 'Yuehe Chen' 'Ruofan Wu' 'Ericbk Wang'\n 'Jing Zhou' 'Sheng Tian' 'Shuheng Shen' 'Xing Fu' 'Changhua Meng'\n 'Weiqiang Wang' 'Liang Chen']" ]
null
null
2406.14301
null
null
http://arxiv.org/pdf/2406.14301v1
2024-06-20T13:27:44Z
2024-06-20T13:27:44Z
Resource Optimization for Tail-Based Control in Wireless Networked Control Systems
Achieving control stability is one of the key design challenges of scalable Wireless Networked Control Systems (WNCS) under limited communication and computing resources. This paper explores the use of an alternative control concept defined as tail-based control, which extends the classical Linear Quadratic Regulator (LQR) cost function for multiple dynamic control systems over a shared wireless network. We cast the control of multiple control systems as a network-wide optimization problem and decouple it in terms of sensor scheduling, plant state prediction, and control policies. Toward this, we propose a solution consisting of a scheduling algorithm based on Lyapunov optimization for sensing, a mechanism based on Gaussian Process Regression (GPR) for state prediction and uncertainty estimation, and a control policy based on Reinforcement Learning (RL) to ensure tail-based control stability. A set of discrete time-invariant mountain car control systems is used to evaluate the proposed solution and is compared against four variants that use state-of-the-art scheduling, prediction, and control methods. The experimental results indicate that the proposed method yields 22% reduction in overall cost in terms of communication and control resource utilization compared to state-of-the-art methods.
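A toy stand-in for the GPR-based state prediction step, using scikit-learn: fit on past (time, state) samples, then query the mean and uncertainty at unscheduled time steps. The kernel choice and toy dynamics are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit GPR on past (time, state) samples from a plant (toy sinusoidal state).
t_obs = np.linspace(0, 10, 25)[:, None]
x_obs = np.sin(t_obs).ravel() + 0.05 * np.random.default_rng(0).normal(size=25)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(t_obs, x_obs)

# Predict the plant state at time steps where sensing was not scheduled.
t_query = np.array([[10.5], [11.0]])
mean, std = gpr.predict(t_query, return_std=True)
print(mean, std)  # std grows away from observations — the uncertainty signal
```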
[ "['Rasika Vijithasena' 'Rafaela Scaciota' 'Mehdi Bennis'\n 'Sumudu Samarakoon']" ]
null
null
2406.14302
null
null
http://arxiv.org/pdf/2406.14302v1
2024-06-20T13:30:25Z
2024-06-20T13:30:25Z
Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning
Identifying latent representations or causal structures is important for good generalization and downstream task performance. However, both fields have been developed rather independently. We observe that several methods in both representation and causal structure learning rely on the same data-generating process (DGP), namely, exchangeable but not i.i.d. (independent and identically distributed) data. We provide a unified framework, termed Identifiable Exchangeable Mechanisms (IEM), for representation and structure learning under the lens of exchangeability. IEM provides new insights that let us relax the necessary conditions for causal structure identification in exchangeable non-i.i.d. data. We also demonstrate the existence of a duality condition in identifiable representation learning, leading to new identifiability results. We hope this work will pave the way for further research in causal representation learning.
[ "['Patrik Reizinger' 'Siyuan Guo' 'Ferenc Huszár' 'Bernhard Schölkopf'\n 'Wieland Brendel']" ]
null
null
2406.14308
null
null
http://arxiv.org/pdf/2406.14308v1
2024-06-20T13:37:29Z
2024-06-20T13:37:29Z
FIESTA: Fourier-Based Semantic Augmentation with Uncertainty Guidance for Enhanced Domain Generalizability in Medical Image Segmentation
Single-source domain generalization (SDG) in medical image segmentation (MIS) aims to generalize a model using data from only one source domain to segment data from an unseen target domain. Despite substantial advances in SDG with data augmentation, existing methods often fail to fully consider the details and uncertain areas prevalent in MIS, leading to mis-segmentation. This paper proposes a Fourier-based semantic augmentation method called FIESTA using uncertainty guidance to enhance the fundamental goals of MIS in an SDG context by manipulating the amplitude and phase components in the frequency domain. The proposed Fourier augmentative transformer addresses semantic amplitude modulation based on meaningful angular points to induce pertinent variations and harnesses the phase spectrum to ensure structural coherence. Moreover, FIESTA employs epistemic uncertainty to fine-tune the augmentation process, improving the ability of the model to adapt to diverse augmented data and concentrate on areas with higher ambiguity. Extensive experiments across three cross-domain scenarios demonstrate that FIESTA surpasses recent state-of-the-art SDG approaches in segmentation performance and significantly contributes to boosting the applicability of the model in medical imaging modalities.
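The generic Fourier-augmentation move underlying methods like FIESTA (without its uncertainty guidance) mixes amplitude spectra while keeping the source image's phase; the mixing ratio below is an arbitrary illustrative choice.

```python
import numpy as np

def fourier_amplitude_mix(img, ref, alpha=0.5):
    """Blend the amplitude spectrum of `img` toward `ref` while keeping
    `img`'s phase, so semantic structure is preserved — the generic idea
    behind Fourier-based semantic augmentation."""
    F_img, F_ref = np.fft.fft2(img), np.fft.fft2(ref)
    amp = (1 - alpha) * np.abs(F_img) + alpha * np.abs(F_ref)  # mixed amplitude
    phase = np.angle(F_img)                                    # original phase
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

rng = np.random.default_rng(0)
augmented = fourier_amplitude_mix(rng.normal(size=(64, 64)),
                                  rng.normal(size=(64, 64)))
```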
[ "['Kwanseok Oh' 'Eunjin Jeon' 'Da-Woon Heo' 'Yooseung Shin' 'Heung-Il Suk']" ]
null
null
2406.14309
null
null
http://arxiv.org/pdf/2406.14309v1
2024-06-20T13:39:14Z
2024-06-20T13:39:14Z
Emerging-properties Mapping Using Spatial Embedding Statistics: EMUSES
Understanding complex phenomena often requires analyzing high-dimensional data to uncover emergent properties that arise from multifactorial interactions. Here, we present EMUSES (Emerging-properties Mapping Using Spatial Embedding Statistics), an innovative approach employing Uniform Manifold Approximation and Projection (UMAP) to create high-dimensional embeddings that reveal latent structures within data. EMUSES facilitates the exploration and prediction of emergent properties by statistically analyzing these latent spaces. Using three distinct datasets--a handwritten digits dataset from the National Institute of Standards and Technology (NIST, E. Alpaydin, 1998), the Chicago Face Database (Ma et al., 2015), and brain disconnection data post-stroke (Talozzi et al., 2023)--we demonstrate EMUSES' effectiveness in detecting and interpreting emergent properties. Our method not only predicts outcomes with high accuracy but also provides clear visualizations and statistical insights into the underlying interactions within the data. By bridging the gap between predictive accuracy and interpretability, EMUSES offers researchers a powerful tool to understand the multifactorial origins of complex phenomena.
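The embed-then-analyse pattern can be sketched with the umap-learn package; the summary statistics shown are placeholders for the statistical analysis EMUSES performs on the latent space.

```python
# Sketch of the embed-then-analyse pattern, using the umap-learn package.
import numpy as np
import umap

X = np.random.default_rng(0).normal(size=(500, 50))   # high-dimensional data
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

# Statistics over the latent space stand in for EMUSES' analysis step,
# e.g. locating dense regions or testing group separation per axis.
print(embedding.mean(axis=0), embedding.std(axis=0))
```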
[ "['Chris Foulon' 'Marcela Ovando-Tellez' 'Lia Talozzi' 'Maurizio Corbetta'\n 'Anna Matsulevits' 'Michel Thiebaut de Schotten']" ]
null
null
2406.14322
null
null
http://arxiv.org/pdf/2406.14322v2
2024-07-03T14:05:20Z
2024-06-20T13:54:32Z
Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning
Large language models (LLMs) have emerged as powerful tools for tackling complex tasks across diverse domains, but they also raise privacy concerns when fine-tuned on sensitive data due to potential memorization. While differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit, current evaluations on LLMs mostly treat each example (text record) as the privacy unit. This leads to uneven user privacy guarantees when contributions per user vary. We therefore study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users. We present a systematic evaluation of user-level DP for LLM fine-tuning on natural language generation tasks. Focusing on two mechanisms for achieving user-level DP guarantees, Group Privacy and User-wise DP-SGD, we investigate design choices like data selection strategies and parameter tuning for the best privacy-utility tradeoff.
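An informal sketch of the user-wise clipping-and-noising pattern (each *user's* aggregated gradient is clipped, not each example's); calibrating the noise multiplier to a formal (epsilon, delta) guarantee is beyond this snippet.

```python
import numpy as np

def user_level_clipped_mean(per_user_grads, clip_norm=1.0, noise_mult=1.0,
                            rng=np.random.default_rng(0)):
    """Clip each user's aggregated gradient to clip_norm, average, and add
    Gaussian noise — the user-wise DP-SGD pattern, shown informally."""
    clipped = []
    for g in per_user_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0, noise_mult * clip_norm / len(per_user_grads),
                       size=mean.shape)
    return mean + noise

# One gradient per user, each already averaged over that user's examples.
grads = [np.random.default_rng(i).normal(size=4) for i in range(8)]
print(user_level_clipped_mean(grads))
```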
[ "['Lynn Chua' 'Badih Ghazi' 'Yangsibo Huang' 'Pritish Kamath' 'Ravi Kumar'\n 'Daogao Liu' 'Pasin Manurangsi' 'Amer Sinha' 'Chiyuan Zhang']" ]
null
null
2406.14324
null
null
http://arxiv.org/pdf/2406.14324v1
2024-06-20T13:56:05Z
2024-06-20T13:56:05Z
Revealing the learning process in reinforcement learning agents through attention-oriented metrics
The learning process of a reinforcement learning (RL) agent remains poorly understood beyond the mathematical formulation of its learning algorithm. To address this gap, we introduce attention-oriented metrics (ATOMs) to investigate the development of an RL agent's attention during training. We tested ATOMs on three variations of a Pong game, each designed to teach the agent distinct behaviours, complemented by a behavioural assessment. Our findings reveal that ATOMs successfully delineate the attention patterns of an agent trained on each game variation, and that these differences in attention patterns translate into differences in the agent's behaviour. Through continuous monitoring of ATOMs during training, we observed that the agent's attention developed in phases, and that these phases were consistent across games. Finally, we noted that the agent's attention to its paddle emerged relatively late in the training and coincided with a marked increase in its performance score. Overall, we believe that ATOMs could significantly enhance our understanding of RL agents' learning processes, which is essential for improving their reliability and efficiency.
[ "['Charlotte Beylier' 'Simon M. Hofmann' 'Nico Scherf']" ]
null
null
2406.14325
null
null
http://arxiv.org/pdf/2406.14325v2
2024-07-02T15:36:32Z
2024-06-20T13:56:42Z
Reproducibility in Machine Learning-based Research: Overview, Barriers and Drivers
Research in various fields is currently experiencing challenges regarding the reproducibility of results. This problem is also prevalent in machine learning (ML) research. The issue arises, for example, due to unpublished data and/or source code and the sensitivity of ML training conditions. Although different solutions have been proposed to address this issue, such as using ML platforms, the level of reproducibility in ML-driven research remains unsatisfactory. Therefore, in this article, we discuss the reproducibility of ML-driven research with three main aims: (i) identifying the barriers to reproducibility when applying ML in research, and categorizing these barriers by the type of reproducibility they affect (description, code, data, and experiment reproducibility); (ii) discussing potential drivers such as tools, practices, and interventions that support ML reproducibility, and distinguishing between technology-driven drivers, procedural drivers, and drivers related to awareness and education; and (iii) mapping the drivers to the barriers. With this work, we hope to provide insights and to contribute to the decision-making process regarding the adoption of different solutions to support ML reproducibility.
[ "['Harald Semmelrock' 'Tony Ross-Hellauer' 'Simone Kopeinik'\n 'Dieter Theiler' 'Armin Haberl' 'Stefan Thalmann' 'Dominik Kowald']" ]
null
null
2406.14328
null
null
http://arxiv.org/pdf/2406.14328v1
2024-06-20T13:59:34Z
2024-06-20T13:59:34Z
Computing Within Limits: An Empirical Study of Energy Consumption in ML Training and Inference
Machine learning (ML) has seen tremendous advancements, but its environmental footprint remains a concern. Acknowledging the growing environmental impact of ML, this paper investigates Green ML, examining various model architectures and hyperparameters in both training and inference phases to identify energy-efficient practices. Our study leverages software-based power measurements for ease of replication across diverse configurations, models and datasets. In this paper, we examine multiple models and hardware configurations to identify correlations across the various measurements and metrics and key contributors to energy reduction. Our analysis offers practical guidelines for constructing sustainable ML operations, emphasising energy consumption and carbon footprint reductions while maintaining performance. As identified, short-lived profiling can quantify the expected long-term energy consumption. Moreover, model parameters can also be used to accurately estimate the expected total energy without the need for extensive experimentation.
[ "['Ioannis Mavromatis' 'Kostas Katsaros' 'Aftab Khan']" ]