categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2405.03130 | null | null | http://arxiv.org/pdf/2405.03130v1 | 2024-05-06T02:54:53Z | 2024-05-06T02:54:53Z | Deep Learning for Causal Inference: A Comparison of Architectures for
Heterogeneous Treatment Effect Estimation | Causal inference has gained much popularity in recent years, with interests ranging from academic, to industrial, to educational, and all in between. Concurrently, the study and usage of neural networks has also grown profoundly (albeit at a far faster rate). What we aim to do in this blog write-up is demonstrate a Neural Network causal inference architecture. We develop a fully connected neural network implementation of the popular Bayesian Causal Forest algorithm, a state of the art tree based method for estimating heterogeneous treatment effects. We compare our implementation to existing neural network causal inference methodologies, showing improvements in performance in simulation settings. We apply our method to a dataset examining the effect of stress on sleep. | [
"['Demetrios Papakostas' 'Andrew Herren' 'P. Richard Hahn'\n 'Francisco Castillo']"
]
|
null | null | 2405.03131 | null | null | http://arxiv.org/pdf/2405.03131v1 | 2024-05-06T02:55:50Z | 2024-05-06T02:55:50Z | WDMoE: Wireless Distributed Large Language Models with Mixture of
Experts | Large Language Models (LLMs) have achieved significant success in various natural language processing tasks, but how wireless communications can support LLMs has not been extensively studied. In this paper, we propose a wireless distributed LLMs paradigm based on Mixture of Experts (MoE), named WDMoE, deploying LLMs collaboratively across edge servers of base station (BS) and mobile devices in the wireless communications system. Specifically, we decompose the MoE layer in LLMs by deploying the gating network and the preceding neural network layer at BS, while distributing the expert networks across the devices. This arrangement leverages the parallel capabilities of expert networks on distributed devices. Moreover, to overcome the instability of wireless communications, we design an expert selection policy by taking into account both the performance of the model and the end-to-end latency, which includes both transmission delay and inference delay. Evaluations conducted across various LLMs and multiple datasets demonstrate that WDMoE not only outperforms existing models, such as Llama 2 with 70 billion parameters, but also significantly reduces end-to-end latency. | [
"['Nan Xue' 'Yaping Sun' 'Zhiyong Chen' 'Meixia Tao' 'Xiaodong Xu'\n 'Liang Qian' 'Shuguang Cui' 'Ping Zhang']"
]
|
null | null | 2405.03133 | null | null | http://arxiv.org/pdf/2405.03133v1 | 2024-05-06T03:06:33Z | 2024-05-06T03:06:33Z | Lory: Fully Differentiable Mixture-of-Experts for Autoregressive
Language Model Pre-training | Mixture-of-experts (MoE) models facilitate efficient scaling; however, training the router network introduces the challenge of optimizing a non-differentiable, discrete objective. Recently, a fully-differentiable MoE architecture, SMEAR, was proposed (Muqeeth et al., 2023), which softly merges experts in the parameter space; nevertheless, its effectiveness was only demonstrated in downstream fine-tuning on classification tasks. In this paper, we present Lory, the first approach that scales such architectures to autoregressive language model pre-training. Lory introduces two key techniques: (1) a causal segment routing strategy that achieves high efficiency for expert merging operations while preserving the autoregressive nature of language models; (2) a similarity-based data batching method that encourages expert specialization by grouping similar documents in training instances. We pre-train a series of Lory models on 150B tokens from scratch, with up to 32 experts and 30B (1.5B active) parameters. Experimental results show significant performance gains over parameter-matched dense models on both perplexity (+13.9%) and a variety of downstream tasks (+1.5%-11.1%). Despite segment-level routing, Lory models achieve competitive performance compared to state-of-the-art MoE models with token-level routing. We further demonstrate that the trained experts in Lory capture domain-level specialization without supervision. Our work highlights the potential of fully-differentiable MoE architectures for language model pre-training and advocates future research in this area. | [
"['Zexuan Zhong' 'Mengzhou Xia' 'Danqi Chen' 'Mike Lewis']"
]
|
null | null | 2405.03140 | null | null | http://arxiv.org/pdf/2405.03140v2 | 2024-05-27T14:26:21Z | 2024-05-06T03:27:23Z | TimeMIL: Advancing Multivariate Time Series Classification via a
Time-aware Multiple Instance Learning | Deep neural networks, including transformers and convolutional neural networks, have significantly improved multivariate time series classification (MTSC). However, these methods often rely on supervised learning, which does not fully account for the sparsity and locality of patterns in time series data (e.g., diseases-related anomalous points in ECG). To address this challenge, we formally reformulate MTSC as a weakly supervised problem, introducing a novel multiple-instance learning (MIL) framework for better localization of patterns of interest and modeling time dependencies within time series. Our novel approach, TimeMIL, formulates the temporal correlation and ordering within a time-aware MIL pooling, leveraging a tokenized transformer with a specialized learnable wavelet positional token. The proposed method surpassed 26 recent state-of-the-art methods, underscoring the effectiveness of the weakly supervised TimeMIL in MTSC. The code will be available at https://github.com/xiwenc1/TimeMIL. | [
"['Xiwen Chen' 'Peijie Qiu' 'Wenhui Zhu' 'Huayu Li' 'Hao Wang'\n 'Aristeidis Sotiras' 'Yalin Wang' 'Abolfazl Razi']"
]
|
null | null | 2405.03144 | null | null | http://arxiv.org/pdf/2405.03144v1 | 2024-05-06T03:39:50Z | 2024-05-06T03:39:50Z | PTQ4SAM: Post-Training Quantization for Segment Anything | Segment Anything Model (SAM) has achieved impressive performance in many computer vision tasks. However, as a large-scale model, the immense memory and computation costs hinder its practical deployment. In this paper, we propose a post-training quantization (PTQ) framework for Segment Anything Model, namely PTQ4SAM. First, we investigate the inherent bottleneck of SAM quantization attributed to the bimodal distribution in post-Key-Linear activations. We analyze its characteristics from both per-tensor and per-channel perspectives, and propose a Bimodal Integration strategy, which utilizes a mathematically equivalent sign operation to transform the bimodal distribution into a relatively easy-quantized normal distribution offline. Second, SAM encompasses diverse attention mechanisms (i.e., self-attention and two-way cross-attention), resulting in substantial variations in the post-Softmax distributions. Therefore, we introduce an Adaptive Granularity Quantization for Softmax through searching the optimal power-of-two base, which is hardware-friendly. Extensive experimental results across various vision tasks (instance segmentation, semantic segmentation and object detection), datasets and model variants show the superiority of PTQ4SAM. For example, when quantizing SAM-L to 6-bit, we achieve lossless accuracy for instance segmentation, about 0.5% drop with theoretical 3.9$\times$ acceleration. The code is available at \url{https://github.com/chengtao-lv/PTQ4SAM}. | [
"['Chengtao Lv' 'Hong Chen' 'Jinyang Guo' 'Yifu Ding' 'Xianglong Liu']"
]
|
null | null | 2405.03146 | null | null | http://arxiv.org/pdf/2405.03146v2 | 2024-05-08T02:10:36Z | 2024-05-06T03:42:34Z | Quantifying the Capabilities of LLMs across Scale and Precision | Scale is often attributed as one of the factors that cause an increase in the performance of LLMs, resulting in models with billion and trillion parameters. One of the limitations of such large models is the high computational requirements that limit their usage, deployment, and debugging in resource-constrained scenarios. Two commonly used alternatives to bypass these limitations are to use the smaller versions of LLMs (e.g. Llama 7B instead of Llama 70B) and lower the memory requirements by using quantization. While these approaches effectively address the limitation of resources, their impact on model performance needs thorough examination. In this study, we perform a comprehensive evaluation to investigate the effect of model scale and quantization on the performance. We experiment with two major families of open-source instruct models ranging from 7 billion to 70 billion parameters. Our extensive zero-shot experiments across various tasks including natural language understanding, reasoning, misinformation detection, and hallucination reveal that larger models generally outperform their smaller counterparts, suggesting that scale remains an important factor in enhancing performance. We found that larger models show exceptional resilience to precision reduction and can maintain high accuracy even at 4-bit quantization for numerous tasks and they serve as a better solution than using smaller models at high precision under similar memory requirements. | [
"['Sher Badshah' 'Hassan Sajjad']"
]
|
null | null | 2405.03150 | null | null | http://arxiv.org/pdf/2405.03150v1 | 2024-05-06T04:01:42Z | 2024-05-06T04:01:42Z | Video Diffusion Models: A Survey | Diffusion generative models have recently become a robust technique for producing and modifying coherent, high-quality video. This survey offers a systematic overview of critical elements of diffusion models for video generation, covering applications, architectural choices, and the modeling of temporal dynamics. Recent advancements in the field are summarized and grouped into development trends. The survey concludes with an overview of remaining challenges and an outlook on the future of the field. Website: https://github.com/ndrwmlnk/Awesome-Video-Diffusion-Models | [
"['Andrew Melnik' 'Michal Ljubljanac' 'Cong Lu' 'Qi Yan' 'Weiming Ren'\n 'Helge Ritter']"
]
|
null | null | 2405.03153 | null | null | http://arxiv.org/pdf/2405.03153v1 | 2024-05-06T04:06:45Z | 2024-05-06T04:06:45Z | Exploring the Potential of the Large Language Models (LLMs) in
Identifying Misleading News Headlines | In the digital age, the prevalence of misleading news headlines poses a significant challenge to information integrity, necessitating robust detection mechanisms. This study explores the efficacy of Large Language Models (LLMs) in identifying misleading versus non-misleading news headlines. Utilizing a dataset of 60 articles, sourced from both reputable and questionable outlets across health, science & tech, and business domains, we employ three LLMs- ChatGPT-3.5, ChatGPT-4, and Gemini-for classification. Our analysis reveals significant variance in model performance, with ChatGPT-4 demonstrating superior accuracy, especially in cases with unanimous annotator agreement on misleading headlines. The study emphasizes the importance of human-centered evaluation in developing LLMs that can navigate the complexities of misinformation detection, aligning technical proficiency with nuanced human judgment. Our findings contribute to the discourse on AI ethics, emphasizing the need for models that are not only technically advanced but also ethically aligned and sensitive to the subtleties of human interpretation. | [
"['Md Main Uddin Rony' 'Md Mahfuzul Haque' 'Mohammad Ali'\n 'Ahmed Shatil Alam' 'Naeemul Hassan']"
]
|
null | null | 2405.03158 | null | null | http://arxiv.org/pdf/2405.03158v1 | 2024-05-06T04:35:01Z | 2024-05-06T04:35:01Z | Decentralized Online Learning in General-Sum Stackelberg Games | We study an online learning problem in general-sum Stackelberg games, where players act in a decentralized and strategic manner. We study two settings depending on the type of information for the follower: (1) the limited information setting where the follower only observes its own reward, and (2) the side information setting where the follower has extra side information about the leader's reward. We show that for the follower, myopically best responding to the leader's action is the best strategy for the limited information setting, but not necessarily so for the side information setting -- the follower can manipulate the leader's reward signals with strategic actions, and hence induce the leader's strategy to converge to an equilibrium that is better off for itself. Based on these insights, we study decentralized online learning for both players in the two settings. Our main contribution is to derive last-iterate convergence and sample complexity results in both settings. Notably, we design a new manipulation strategy for the follower in the latter setting, and show that it has an intrinsic advantage against the best response strategy. Our theories are also supported by empirical results. | [
"['Yaolong Yu' 'Haipeng Chen']"
]
|
null | null | 2405.03162 | null | null | http://arxiv.org/pdf/2405.03162v1 | 2024-05-06T04:44:22Z | 2024-05-06T04:44:22Z | Advancing Multimodal Medical Capabilities of Gemini | Many clinical tasks require an understanding of specialized data, such as medical images and genomics, which is not typically found in general-purpose large multimodal models. Building upon Gemini's multimodal models, we develop several models within the new Med-Gemini family that inherit core capabilities of Gemini and are optimized for medical use via fine-tuning with 2D and 3D radiology, histopathology, ophthalmology, dermatology and genomic data. Med-Gemini-2D sets a new standard for AI-based chest X-ray (CXR) report generation based on expert evaluation, exceeding previous best results across two separate datasets by an absolute margin of 1% and 12%, where 57% and 96% of AI reports on normal cases, and 43% and 65% on abnormal cases, are evaluated as "equivalent or better" than the original radiologists' reports. We demonstrate the first ever large multimodal model-based report generation for 3D computed tomography (CT) volumes using Med-Gemini-3D, with 53% of AI reports considered clinically acceptable, although additional research is needed to meet expert radiologist reporting quality. Beyond report generation, Med-Gemini-2D surpasses the previous best performance in CXR visual question answering (VQA) and performs well in CXR classification and radiology VQA, exceeding SoTA or baselines on 17 of 20 tasks. In histopathology, ophthalmology, and dermatology image classification, Med-Gemini-2D surpasses baselines across 18 out of 20 tasks and approaches task-specific model performance. Beyond imaging, Med-Gemini-Polygenic outperforms the standard linear polygenic risk score-based approach for disease risk prediction and generalizes to genetically correlated diseases for which it has never been trained. Although further development and evaluation are necessary in the safety-critical medical domain, our results highlight the potential of Med-Gemini across a wide range of medical tasks. | [
"['Lin Yang' 'Shawn Xu' 'Andrew Sellergren' 'Timo Kohlberger' 'Yuchen Zhou'\n 'Ira Ktena' 'Atilla Kiraly' 'Faruk Ahmed' 'Farhad Hormozdiari'\n 'Tiam Jaroensri' 'Eric Wang' 'Ellery Wulczyn' 'Fayaz Jamil'\n 'Theo Guidroz' 'Chuck Lau' 'Siyuan Qiao' 'Yun Liu' 'Akshay Goel'\n 'Kendall Park' 'Arnav Agharwal' 'Nick George' 'Yang Wang' 'Ryutaro Tanno'\n 'David G. T. Barrett' 'Wei-Hung Weng' 'S. Sara Mahdavi' 'Khaled Saab'\n 'Tao Tu' 'Sreenivasa Raju Kalidindi' 'Mozziyar Etemadi' 'Jorge Cuadros'\n 'Gregory Sorensen' 'Yossi Matias' 'Katherine Chou' 'Greg Corrado'\n 'Joelle Barral' 'Shravya Shetty' 'David Fleet' 'S. M. Ali Eslami'\n 'Daniel Tse' 'Shruthi Prabhakara' 'Cory McLean' 'Dave Steiner'\n 'Rory Pilgrim' 'Christopher Kelly' 'Shekoofeh Azizi' 'Daniel Golden']"
]
|
null | null | 2405.03180 | null | null | http://arxiv.org/pdf/2405.03180v2 | 2024-06-30T16:18:30Z | 2024-05-06T06:05:41Z | Braced Fourier Continuation and Regression for Anomaly Detection | In this work, the concept of Braced Fourier Continuation and Regression (BFCR) is introduced. BFCR is a novel and computationally efficient means of finding nonlinear regressions or trend lines in arbitrary one-dimensional data sets. The Braced Fourier Continuation (BFC) and BFCR algorithms are first outlined, followed by a discussion of the properties of BFCR as well as demonstrations of how BFCR trend lines may be used effectively for anomaly detection both within and at the edges of arbitrary one-dimensional data sets. Finally, potential issues which may arise while using BFCR for anomaly detection as well as possible mitigation techniques are outlined and discussed. All source code and example data sets are either referenced or available via GitHub, and all associated code is written entirely in Python. | [
"['Josef Sabuda']"
]
|
null | null | 2405.03185 | null | null | http://arxiv.org/pdf/2405.03185v1 | 2024-05-06T06:23:06Z | 2024-05-06T06:23:06Z | Spatiotemporal Implicit Neural Representation as a Generalized Traffic
Data Learner | Spatiotemporal Traffic Data (STTD) measures the complex dynamical behaviors of the multiscale transportation system. Existing methods aim to reconstruct STTD using low-dimensional models. However, they are limited to data-specific dimensions or source-dependent patterns, restricting them from unifying representations. Here, we present a novel paradigm to address the STTD learning problem by parameterizing STTD as an implicit neural representation. To discern the underlying dynamics in low-dimensional regimes, coordinate-based neural networks that can encode high-frequency structures are employed to directly map coordinates to traffic variables. To unravel the entangled spatial-temporal interactions, the variability is decomposed into separate processes. We further enable modeling in irregular spaces such as sensor graphs using spectral embedding. Through continuous representations, our approach enables the modeling of a variety of STTD with a unified input, thereby serving as a generalized learner of the underlying traffic dynamics. It is also shown that it can learn implicit low-rank priors and smoothness regularization from the data, making it versatile for learning different dominating data patterns. We validate its effectiveness through extensive experiments in real-world scenarios, showcasing applications from corridor to network scales. Empirical results not only indicate that our model has significant superiority over conventional low-rank models, but also highlight that the versatility of the approach extends to different data domains, output resolutions, and network topologies. Comprehensive model analyses provide further insight into the inductive bias of STTD. We anticipate that this pioneering modeling perspective could lay the foundation for universal representation of STTD in various real-world tasks. | [
"['Tong Nie' 'Guoyang Qin' 'Wei Ma' 'Jian Sun']"
]
|
null | null | 2405.03188 | null | null | http://arxiv.org/pdf/2405.03188v1 | 2024-05-06T06:28:44Z | 2024-05-06T06:28:44Z | Hyperbolic Geometric Latent Diffusion Model for Graph Generation | Diffusion models have made significant contributions to computer vision, sparking a growing interest in the community recently regarding the application of them to graph generation. Existing discrete graph diffusion models exhibit heightened computational complexity and diminished training efficiency. A preferable and natural way is to directly diffuse the graph within the latent space. However, due to the non-Euclidean structure of graphs is not isotropic in the latent space, the existing latent diffusion models effectively make it difficult to capture and preserve the topological information of graphs. To address the above challenges, we propose a novel geometrically latent diffusion framework HypDiff. Specifically, we first establish a geometrically latent space with interpretability measures based on hyperbolic geometry, to define anisotropic latent diffusion processes for graphs. Then, we propose a geometrically latent diffusion process that is constrained by both radial and angular geometric properties, thereby ensuring the preservation of the original topological properties in the generative graphs. Extensive experimental results demonstrate the superior effectiveness of HypDiff for graph generation with various topologies. | [
"['Xingcheng Fu' 'Yisen Gao' 'Yuecen Wei' 'Qingyun Sun' 'Hao Peng'\n 'Jianxin Li' 'Xianxian Li']"
]
|
null | null | 2405.03192 | null | null | http://arxiv.org/pdf/2405.03192v2 | 2024-05-09T02:20:42Z | 2024-05-06T06:31:47Z | QuadraNet V2: Efficient and Sustainable Training of High-Order Neural
Networks with Quadratic Adaptation | Machine learning is evolving towards high-order models that necessitate pre-training on extensive datasets, a process associated with significant overheads. Traditional models, despite having pre-trained weights, are becoming obsolete due to architectural differences that obstruct the effective transfer and initialization of these weights. To address these challenges, we introduce a novel framework, QuadraNet V2, which leverages quadratic neural networks to create efficient and sustainable high-order learning models. Our method initializes the primary term of the quadratic neuron using a standard neural network, while the quadratic term is employed to adaptively enhance the learning of data non-linearity or shifts. This integration of pre-trained primary terms with quadratic terms, which possess advanced modeling capabilities, significantly augments the information characterization capacity of the high-order network. By utilizing existing pre-trained weights, QuadraNet V2 reduces the required GPU hours for training by 90% to 98.4% compared to training from scratch, demonstrating both efficiency and effectiveness. | [
"['Chenhui Xu' 'Xinyao Wang' 'Fuxun Yu' 'Jinjun Xiong' 'Xiang Chen']"
]
|
null | null | 2405.03198 | null | null | http://arxiv.org/pdf/2405.03198v1 | 2024-05-06T06:47:14Z | 2024-05-06T06:47:14Z | Stability Evaluation via Distributional Perturbation Analysis | The performance of learning models often deteriorates when deployed in out-of-sample environments. To ensure reliable deployment, we propose a stability evaluation criterion based on distributional perturbations. Conceptually, our stability evaluation criterion is defined as the minimal perturbation required on our observed dataset to induce a prescribed deterioration in risk evaluation. In this paper, we utilize the optimal transport (OT) discrepancy with moment constraints on the \textit{(sample, density)} space to quantify this perturbation. Therefore, our stability evaluation criterion can address both \emph{data corruptions} and \emph{sub-population shifts} -- the two most common types of distribution shifts in real-world scenarios. To further realize practical benefits, we present a series of tractable convex formulations and computational methods tailored to different classes of loss functions. The key technical tool to achieve this is the strong duality theorem provided in this paper. Empirically, we validate the practical utility of our stability evaluation criterion across a host of real-world applications. These empirical studies showcase the criterion's ability not only to compare the stability of different learning models and features but also to provide valuable guidelines and strategies to further improve models. | [
"['Jose Blanchet' 'Peng Cui' 'Jiajin Li' 'Jiashuo Liu']"
]
|
null | null | 2405.03199 | null | null | http://arxiv.org/pdf/2405.03199v2 | 2024-05-20T07:48:21Z | 2024-05-06T06:47:44Z | Boosting MLPs with a Coarsening Strategy for Long-Term Time Series
Forecasting | Deep learning methods have been exerting their strengths in long-term time series forecasting. However, they often struggle to strike a balance between expressive power and computational efficiency. Resorting to multi-layer perceptrons (MLPs) provides a compromising solution, yet they suffer from two critical problems caused by the intrinsic point-wise mapping mode, in terms of deficient contextual dependencies and inadequate information bottleneck. Here, we propose the Coarsened Perceptron Network (CP-Net), featured by a coarsening strategy that alleviates the above problems associated with the prototype MLPs by forming information granules in place of solitary temporal points. The CP-Net utilizes primarily a two-stage framework for extracting semantic and contextual patterns, which preserves correlations over larger timespans and filters out volatile noises. This is further enhanced by a multi-scale setting, where patterns of diverse granularities are fused towards a comprehensive prediction. Based purely on convolutions of structural simplicity, CP-Net is able to maintain a linear computational complexity and low runtime, while demonstrates an improvement of 4.1% compared with the SOTA method on seven forecasting benchmarks. | [
"['Nannan Bian' 'Minhong Zhu' 'Li Chen' 'Weiran Cai']"
]
|
null | null | 2405.03205 | null | null | http://arxiv.org/pdf/2405.03205v2 | 2024-05-23T07:47:02Z | 2024-05-06T07:10:09Z | Anchored Answers: Unravelling Positional Bias in GPT-2's Multiple-Choice
Questions | Large Language Models (LLMs), such as the GPT-4 and LLaMA families, have demonstrated considerable success across diverse tasks, including multiple-choice questions (MCQs). However, these models exhibit a positional bias, particularly an even worse anchored bias in the GPT-2 family, where they consistently favour the first choice 'A' in MCQs during inference. This anchored bias challenges the integrity of GPT-2's decision-making process, as it skews performance based on the position rather than the content of the choices in MCQs. In this study, we utilise the mechanistic interpretability approach to identify the internal modules within GPT-2 models responsible for this bias. We focus on the Multi-Layer Perceptron (MLP) layers and attention heads, using the "logit lens" method to trace and modify the specific value vectors that contribute to the bias. By updating these vectors within MLP and recalibrating attention patterns to neutralise the preference for the first choice 'A', we effectively mitigate the anchored bias. Our interventions not only mitigate the bias but also improve the overall MCQ prediction accuracy for the GPT-2 family across various datasets. This work represents the first comprehensive mechanistic analysis of anchored bias in MCQs within the GPT-2 models, introducing targeted, minimal-intervention strategies that significantly enhance GPT2 model robustness and accuracy in MCQs. Our code is available at https://github.com/ruizheliUOA/Anchored_Bias_GPT2. | [
"['Ruizhe Li' 'Yanjun Gao']"
]
|
null | null | 2405.03221 | null | null | http://arxiv.org/pdf/2405.03221v1 | 2024-05-06T07:30:31Z | 2024-05-06T07:30:31Z | Spatial and Surface Correspondence Field for Interaction Transfer | In this paper, we introduce a new method for the task of interaction transfer. Given an example interaction between a source object and an agent, our method can automatically infer both surface and spatial relationships for the agent and target objects within the same category, yielding more accurate and valid transfers. Specifically, our method characterizes the example interaction using a combined spatial and surface representation. We correspond the agent points and object points related to the representation to the target object space using a learned spatial and surface correspondence field, which represents objects as deformed and rotated signed distance fields. With the corresponded points, an optimization is performed under the constraints of our spatial and surface interaction representation and additional regularization. Experiments conducted on human-chair and hand-mug interaction transfer tasks show that our approach can handle larger geometry and topology variations between source and target shapes, significantly outperforming state-of-the-art methods. | [
"['Zeyu Huang' 'Honghao Xu' 'Haibin Huang' 'Chongyang Ma' 'Hui Huang'\n 'Ruizhen Hu']"
]
|
null | null | 2405.03228 | null | null | http://arxiv.org/pdf/2405.03228v1 | 2024-05-06T07:40:13Z | 2024-05-06T07:40:13Z | TED: Accelerate Model Training by Internal Generalization | Large language models have demonstrated strong performance in recent years, but the high cost of training drives the need for efficient methods to compress dataset sizes. We propose TED pruning, a method that addresses the challenge of overfitting under high pruning ratios by quantifying the model's ability to improve performance on pruned data while fitting retained data, known as Internal Generalization (IG). TED uses an optimization objective based on Internal Generalization Distance (IGD), measuring changes in IG before and after pruning to align with true generalization performance and achieve implicit regularization. The IGD optimization objective was verified to allow the model to achieve the smallest upper bound on generalization error. The impact of small mask fluctuations on IG is studied through masks and Taylor approximation, and fast estimation of IGD is enabled. In analyzing continuous training dynamics, the prior effect of IGD is validated, and a progressive pruning strategy is proposed. Experiments on image classification, natural language understanding, and large language model fine-tuning show TED achieves lossless performance with 60-70% of the data. Upon acceptance, our code will be made publicly available. | [
"['Jinying Xiao' 'Ping Li' 'Jie Nie']"
]
|
null | null | 2405.03234 | null | null | http://arxiv.org/pdf/2405.03234v2 | 2024-05-07T21:25:15Z | 2024-05-06T07:44:07Z | A Reliable Framework for Human-in-the-Loop Anomaly Detection in Time
Series | Time series anomaly detection is a critical machine learning task for numerous applications, such as finance, healthcare, and industrial systems. However, even high-performed models may exhibit potential issues such as biases, leading to unreliable outcomes and misplaced confidence. While model explanation techniques, particularly visual explanations, offer valuable insights to detect such issues by elucidating model attributions of their decision, many limitations still exist -- They are primarily instance-based and not scalable across dataset, and they provide one-directional information from the model to the human side, lacking a mechanism for users to address detected issues. To fulfill these gaps, we introduce HILAD, a novel framework designed to foster a dynamic and bidirectional collaboration between humans and AI for enhancing anomaly detection models in time series. Through our visual interface, HILAD empowers domain experts to detect, interpret, and correct unexpected model behaviors at scale. Our evaluation with two time series datasets and user studies demonstrates the effectiveness of HILAD in fostering a deeper human understanding, immediate corrective actions, and the reliability enhancement of models. | [
"['Ziquan Deng' 'Xiwei Xuan' 'Kwan-Liu Ma' 'Zhaodan Kong']"
]
|
null | null | 2405.03235 | null | null | http://arxiv.org/pdf/2405.03235v1 | 2024-05-06T07:44:46Z | 2024-05-06T07:44:46Z | Cross-Modal Domain Adaptation in Brain Disease Diagnosis: Maximum Mean
Discrepancy-based Convolutional Neural Networks | Brain disorders are a major challenge to global health, causing millions of deaths each year. Accurate diagnosis of these diseases relies heavily on advanced medical imaging techniques such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). However, the scarcity of annotated data poses a significant challenge in deploying machine learning models for medical diagnosis. To address this limitation, deep learning techniques have shown considerable promise. Domain adaptation techniques enhance a model's ability to generalize across imaging modalities by transferring knowledge from one domain (e.g., CT images) to another (e.g., MRI images). Such cross-modality adaptation is essential to improve the ability of models to consistently generalize across different imaging modalities. This study collected relevant resources from the Kaggle website and employed the Maximum Mean Difference (MMD) method - a popular domain adaptation method - to reduce the differences between imaging domains. By combining MMD with Convolutional Neural Networks (CNNs), the accuracy and utility of the model is obviously enhanced. The excellent experimental results highlight the great potential of data-driven domain adaptation techniques to improve diagnostic accuracy and efficiency, especially in resource-limited environments. By bridging the gap between different imaging modalities, the study aims to provide clinicians with more reliable diagnostic tools. | [
"['Xuran Zhu']"
]
|
null | null | 2405.03236 | null | null | http://arxiv.org/pdf/2405.03236v1 | 2024-05-06T07:44:50Z | 2024-05-06T07:44:50Z | Federated Reinforcement Learning with Constraint Heterogeneity | We study a Federated Reinforcement Learning (FedRL) problem with constraint heterogeneity. In our setting, we aim to solve a reinforcement learning problem with multiple constraints while $N$ training agents are located in $N$ different environments with limited access to the constraint signals and they are expected to collaboratively learn a policy satisfying all constraint signals. Such learning problems are prevalent in scenarios of Large Language Model (LLM) fine-tuning and healthcare applications. To solve the problem, we propose federated primal-dual policy optimization methods based on traditional policy gradient methods. Specifically, we introduce $N$ local Lagrange functions for agents to perform local policy updates, and these agents are then scheduled to periodically communicate on their local policies. Taking natural policy gradient (NPG) and proximal policy optimization (PPO) as policy optimization methods, we mainly focus on two instances of our algorithms, i.e., FedNPG and FedPPO. We show that FedNPG achieves global convergence with an $\tilde{O}(1/\sqrt{T})$ rate, and FedPPO efficiently solves complicated learning tasks with the use of deep neural networks. | [
"['Hao Jin' 'Liangyu Zhang' 'Zhihua Zhang']"
]
|
null | null | 2405.03239 | null | null | http://arxiv.org/pdf/2405.03239v1 | 2024-05-06T07:48:34Z | 2024-05-06T07:48:34Z | Deep Learning for Detecting and Early Predicting Chronic Obstructive
Pulmonary Disease from Spirogram Time Series: A UK Biobank Study | Chronic Obstructive Pulmonary Disease (COPD) is a chronic inflammatory lung condition that causes airflow obstruction. The existing methods can only detect patients who already have COPD based on obvious features shown in the spirogram (In this article, the spirogram specifically involves measuring Volume-Flow curve time series). Early prediction of COPD risk is vital for monitoring COPD disease progression, slowing it down, or even preventing its onset. However, these methods fail to early predict an individual's probability of COPD in the future based on subtle features in the spirogram. To address this gap, for the first time, we propose DeepSpiro, a method based on deep learning for early prediction of future COPD risk. DeepSpiro consists of four parts. First, we construct Volume-Flow curves guided by Time-Volume instability smoothing (SpiroSmoother) to enhance the stability of the original Volume-Flow curves precisely. Second, we extract critical features from the evolution of varied-length key patches (SpiroEncoder) to capture the key temporal evolution from original high-dimensional dynamic sequences to a unified low-dimensional temporal representation. Third, we explain the model based on temporal attention and heterogeneous feature fusion (SpiroExplainer), which integrates information from heterogeneous data such as spirogram and demographic information. Fourth, we predict the risk of COPD based on the evolution of key patch concavity (SpiroPredictor), enabling accurate prediction of the risk of disease in high-risk patients who are not yet diagnosed, for up to 1, 2, 3, 4, 5 years, and beyond. We conduct experiments on the UK Biobank dataset. Results show that DeepSpiro achieves an AUC value of 0.8328 in the task of detecting COPD. In early prediction tasks, high-risk and low-risk groups show significant differences in the future, with a p-value of <0.001. | [
"['Shuhao Mei' 'Yuxi Zhou' 'Jiahao Xu' 'Yuxuan Wan' 'Shan Cao'\n 'Qinghao Zhao' 'Shijia Geng' 'Junqing Xie' 'Shenda Hong']"
]
|
null | null | 2405.03244 | null | null | http://arxiv.org/pdf/2405.03244v1 | 2024-05-06T07:52:44Z | 2024-05-06T07:52:44Z | Examining Changes in Internal Representations of Continual Learning
Models Through Tensor Decomposition | Continual learning (CL) has spurred the development of several methods aimed at consolidating previous knowledge across sequential learning. Yet, the evaluations of these methods have primarily focused on the final output, such as changes in the accuracy of predicted classes, overlooking the issue of representational forgetting within the model. In this paper, we propose a novel representation-based evaluation framework for CL models. This approach involves gathering internal representations from throughout the continual learning process and formulating three-dimensional tensors. The tensors are formed by stacking representations, such as layer activations, generated from several inputs and model `snapshots', throughout the learning process. By conducting tensor component analysis (TCA), we aim to uncover meaningful patterns about how the internal representations evolve, expecting to highlight the merits or shortcomings of examined CL strategies. We conduct our analyses across different model architectures and importance-based continual learning strategies, with a curated task selection. While the results of our approach mirror the difference in performance of various CL strategies, we found that our methodology did not directly highlight specialized clusters of neurons, nor provide an immediate understanding the evolution of filters. We believe a scaled down version of our approach will provide insight into the benefits and pitfalls of using TCA to study continual learning dynamics. | [
"['Nishant Suresh Aswani' 'Amira Guesmi' 'Muhammad Abdullah Hanif'\n 'Muhammad Shafique']"
]
|
null | null | 2405.03248 | null | null | http://arxiv.org/pdf/2405.03248v1 | 2024-05-06T08:00:43Z | 2024-05-06T08:00:43Z | Communication-Efficient Federated Learning with Adaptive Compression
under Dynamic Bandwidth | Federated learning can train models without directly providing local data to the server. However, the frequent updating of the local model brings the problem of large communication overhead. Recently, scholars have achieved the communication efficiency of federated learning mainly by model compression. But they ignore two problems: 1) network state of each client changes dynamically; 2) network state among clients is not the same. The clients with poor bandwidth update local model slowly, which leads to low efficiency. To address this challenge, we propose a communication-efficient federated learning algorithm with adaptive compression under dynamic bandwidth (called AdapComFL). Concretely, each client performs bandwidth awareness and bandwidth prediction. Then, each client adaptively compresses its local model via the improved sketch mechanism based on his predicted bandwidth. Further, the server aggregates sketched models with different sizes received. To verify the effectiveness of the proposed method, the experiments are based on real bandwidth data which are collected from the network topology we build, and benchmark datasets which are obtained from open repositories. We show the performance of AdapComFL algorithm, and compare it with existing algorithms. The experimental results show that our AdapComFL achieves more efficient communication as well as competitive accuracy compared to existing algorithms. | [
"['Ying Zhuansun' 'Dandan Li' 'Xiaohong Huang' 'Caijun Sun']"
]
|
null | null | 2405.03251 | null | null | http://arxiv.org/pdf/2405.03251v1 | 2024-05-06T08:15:29Z | 2024-05-06T08:15:29Z | Exploring the Frontiers of Softmax: Provable Optimization, Applications
in Diffusion Model, and Beyond | The softmax activation function plays a crucial role in the success of large language models (LLMs), particularly in the self-attention mechanism of the widely adopted Transformer architecture. However, the underlying learning dynamics that contribute to the effectiveness of softmax remain largely unexplored. As a step towards better understanding, this paper provides a theoretical study of the optimization and generalization properties of two-layer softmax neural networks, providing theoretical insights into their superior performance as other activation functions, such as ReLU and exponential. Leveraging the Neural Tangent Kernel (NTK) framework, our analysis reveals that the normalization effect of the softmax function leads to a good perturbation property of the induced NTK matrix, resulting in a good convex region of the loss landscape. Consequently, softmax neural networks can learn the target function in the over-parametrization regime. To demonstrate the broad applicability of our theoretical findings, we apply them to the task of learning score estimation functions in diffusion models, a promising approach for generative modeling. Our analysis shows that gradient-based algorithms can learn the score function with a provable accuracy. Our work provides a deeper understanding of the effectiveness of softmax neural networks and their potential in various domains, paving the way for further advancements in natural language processing and beyond. | [
"['Jiuxiang Gu' 'Chenyang Li' 'Yingyu Liang' 'Zhenmei Shi' 'Zhao Song']"
]
|
null | null | 2405.03255 | null | null | http://arxiv.org/pdf/2405.03255v1 | 2024-05-06T08:24:06Z | 2024-05-06T08:24:06Z | Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning | Multi-modality spatio-temporal (MoST) data extends spatio-temporal (ST) data by incorporating multiple modalities, which is prevalent in monitoring systems, encompassing diverse traffic demands and air quality assessments. Despite significant strides in ST modeling in recent years, there remains a need to emphasize harnessing the potential of information from different modalities. Robust MoST forecasting is more challenging because it possesses (i) high-dimensional and complex internal structures and (ii) dynamic heterogeneity caused by temporal, spatial, and modality variations. In this study, we propose a novel MoST learning framework via Self-Supervised Learning, namely MoSSL, which aims to uncover latent patterns from temporal, spatial, and modality perspectives while quantifying dynamic heterogeneity. Experiment results on two real-world MoST datasets verify the superiority of our approach compared with the state-of-the-art baselines. Model implementation is available at https://github.com/beginner-sketch/MoSSL. | [
"['Jiewen Deng' 'Renhe Jiang' 'Jiaqi Zhang' 'Xuan Song']"
]
|
null | null | 2405.03262 | null | null | http://arxiv.org/pdf/2405.03262v2 | 2024-06-10T11:04:04Z | 2024-05-06T08:34:15Z | End-to-End Reinforcement Learning of Curative Curtailment with Partial
Measurement Availability | In the course of the energy transition, the expansion of generation and consumption will change, and many of these technologies, such as PV systems, electric cars and heat pumps, will influence the power flow, especially in the distribution grids. Scalable methods that can make decisions for each grid connection are needed to enable congestion-free grid operation in the distribution grids. This paper presents a novel end-to-end approach to resolving congestion in distribution grids with deep reinforcement learning. Our architecture learns to curtail power and set appropriate reactive power to determine a non-congested and, thus, feasible grid state. State-of-the-art methods such as the optimal power flow (OPF) demand high computational costs and detailed measurements of every bus in a grid. In contrast, the presented method enables decisions under sparse information with just some buses observable in the grid. Distribution grids are generally not yet fully digitized and observable, so this method can be used for decision-making on the majority of low-voltage grids. On a real low-voltage grid the approach resolves 100% of violations in the voltage band and 98.8% of asset overloads. The results show that decisions can also be made on real grids that guarantee sufficient quality for congestion-free grid operation. | [
"['Hinrikus Wolf' 'Luis Böttcher' 'Sarra Bouchkati' 'Philipp Lutat'\n 'Jens Breitung' 'Bastian Jung' 'Tina Möllemann' 'Viktor Todosijević'\n 'Jan Schiefelbein-Lach' 'Oliver Pohl' 'Andreas Ulbig' 'Martin Grohe']"
]
|
null | null | 2405.03293 | null | null | http://arxiv.org/pdf/2405.03293v1 | 2024-05-06T09:14:58Z | 2024-05-06T09:14:58Z | Deep Learning and genetic algorithms for cosmological Bayesian inference
speed-up | In this paper, we present a novel approach to accelerate the Bayesian inference process, focusing specifically on the nested sampling algorithms. Bayesian inference plays a crucial role in cosmological parameter estimation, providing a robust framework for extracting theoretical insights from observational data. However, its computational demands can be substantial, primarily due to the need for numerous likelihood function evaluations. Our proposed method utilizes the power of deep learning, employing feedforward neural networks to approximate the likelihood function dynamically during the Bayesian inference process. Unlike traditional approaches, our method trains neural networks on-the-fly using the current set of live points as training data, without the need for pre-training. This flexibility enables adaptation to various theoretical models and datasets. We perform simple hyperparameter optimization using genetic algorithms to suggest initial neural network architectures for learning each likelihood function. Once sufficient accuracy is achieved, the neural network replaces the original likelihood function. The implementation integrates with nested sampling algorithms and has been thoroughly evaluated using both simple cosmological dark energy models and diverse observational datasets. Additionally, we explore the potential of genetic algorithms for generating initial live points within nested sampling inference, opening up new avenues for enhancing the efficiency and effectiveness of Bayesian inference methods. | [
"['Isidro Gómez-Vargas' 'J. Alberto Vázquez']"
]
|
null | null | 2405.03296 | null | null | http://arxiv.org/pdf/2405.03296v1 | 2024-05-06T09:17:23Z | 2024-05-06T09:17:23Z | Coefficient Decomposition for Spectral Graph Convolution | Spectral graph convolutional network (SGCN) is a kind of graph neural networks (GNN) based on graph signal filters, and has shown compelling expressivity for modeling graph-structured data. Most SGCNs adopt polynomial filters and learn the coefficients from the training data. Many of them focus on which polynomial basis leads to optimal expressive power and models' architecture is little discussed. In this paper, we propose a general form in terms of spectral graph convolution, where the coefficients of polynomial basis are stored in a third-order tensor. Then, we show that the convolution block in existing SGCNs can be derived by performing a certain coefficient decomposition operation on the coefficient tensor. Based on the generalized view, we develop novel spectral graph convolutions CoDeSGC-CP and -Tucker by tensor decomposition CP and Tucker on the coefficient tensor. Extensive experimental results demonstrate that the proposed convolutions achieve favorable performance improvements. | [
"['Feng Huang' 'Wen Zhang']"
]
|
null | null | 2405.03298 | null | null | http://arxiv.org/pdf/2405.03298v1 | 2024-05-06T09:20:17Z | 2024-05-06T09:20:17Z | Online Clustering of Known and Emerging Malware Families | Malware attacks have become significantly more frequent and sophisticated in recent years. Therefore, malware detection and classification are critical components of information security. Due to the large amount of malware samples available, it is essential to categorize malware samples according to their malicious characteristics. Clustering algorithms are thus becoming more widely used in computer security to analyze the behavior of malware variants and discover new malware families. Online clustering algorithms help us to understand malware behavior and produce a quicker response to new threats. This paper introduces a novel machine learning-based model for the online clustering of malicious samples into malware families. Streaming data is divided according to the clustering decision rule into samples from known and new emerging malware families. The streaming data is classified using the weighted k-nearest neighbor classifier into known families, and the online k-means algorithm clusters the remaining streaming data and achieves a purity of clusters from 90.20% for four clusters to 93.34% for ten clusters. This work is based on static analysis of portable executable files for the Windows operating system. Experimental results indicate that the proposed online clustering model can create high-purity clusters corresponding to malware families. This allows malware analysts to receive similar malware samples, speeding up their analysis. | [
"['Olha Jurečková' 'Martin Jureček' 'Mark Stamp']"
]
|
null | null | 2405.03301 | null | null | http://arxiv.org/pdf/2405.03301v1 | 2024-05-06T09:21:35Z | 2024-05-06T09:21:35Z | Interpretable Network Visualizations: A Human-in-the-Loop Approach for
Post-hoc Explainability of CNN-based Image Classification | Transparency and explainability in image classification are essential for establishing trust in machine learning models and detecting biases and errors. State-of-the-art explainability methods generate saliency maps to show where a specific class is identified, without providing a detailed explanation of the model's decision process. Striving to address such a need, we introduce a post-hoc method that explains the entire feature extraction process of a Convolutional Neural Network. These explanations include a layer-wise representation of the features the model extracts from the input. Such features are represented as saliency maps generated by clustering and merging similar feature maps, to which we associate a weight derived by generalizing Grad-CAM for the proposed methodology. To further enhance these explanations, we include a set of textual labels collected through a gamified crowdsourcing activity and processed using NLP techniques and Sentence-BERT. Finally, we show an approach to generate global explanations by aggregating labels across multiple images. | [
"['Matteo Bianchi' 'Antonio De Santis' 'Andrea Tocchetti' 'Marco Brambilla']"
]
|
null | null | 2405.03311 | null | null | http://arxiv.org/abs/2405.03311v1 | 2024-05-06T09:39:13Z | 2024-05-06T09:39:13Z | Federated Learning for Drowsiness Detection in Connected Vehicles | Ensuring driver readiness poses challenges, yet driver monitoring systems can assist in determining the driver's state. By observing visual cues, such systems recognize various behaviors and associate them with specific conditions. For instance, yawning or eye blinking can indicate driver drowsiness. Consequently, an abundance of distributed data is generated for driver monitoring. Employing machine learning techniques, such as driver drowsiness detection, presents a potential solution. However, transmitting the data to a central machine for model training is impractical due to the large data size and privacy concerns. Conversely, training on a single vehicle would limit the available data and likely result in inferior performance. To address these issues, we propose a federated learning framework for drowsiness detection within a vehicular network, leveraging the YawDD dataset. Our approach achieves an accuracy of 99.2%, demonstrating its promise and comparability to conventional deep learning techniques. Lastly, we show how our model scales using various number of federated clients | [
"['William Lindskog' 'Valentin Spannagl' 'Christian Prehofer']"
]
|
null | null | 2405.03314 | null | null | http://arxiv.org/pdf/2405.03314v1 | 2024-05-06T09:41:31Z | 2024-05-06T09:41:31Z | Deep Learning-based Point Cloud Registration for Augmented
Reality-guided Surgery | Point cloud registration aligns 3D point clouds using spatial transformations. It is an important task in computer vision, with applications in areas such as augmented reality (AR) and medical imaging. This work explores the intersection of two research trends: the integration of AR into image-guided surgery and the use of deep learning for point cloud registration. The main objective is to evaluate the feasibility of applying deep learning-based point cloud registration methods for image-to-patient registration in augmented reality-guided surgery. We created a dataset of point clouds from medical imaging and corresponding point clouds captured with a popular AR device, the HoloLens 2. We evaluate three well-established deep learning models in registering these data pairs. While we find that some deep learning methods show promise, we show that a conventional registration pipeline still outperforms them on our challenging dataset. | [
"['Maximilian Weber' 'Daniel Wild' 'Jens Kleesiek' 'Jan Egger'\n 'Christina Gsaxner']"
]
|
null | null | 2405.03316 | null | null | http://arxiv.org/pdf/2405.03316v1 | 2024-05-06T09:48:47Z | 2024-05-06T09:48:47Z | Provably Unlearnable Examples | The exploitation of publicly accessible data has led to escalating concerns regarding data privacy and intellectual property (IP) breaches in the age of artificial intelligence. As a strategy to safeguard both data privacy and IP-related domain knowledge, efforts have been undertaken to render shared data unlearnable for unauthorized models in the wild. Existing methods apply empirically optimized perturbations to the data in the hope of disrupting the correlation between the inputs and the corresponding labels such that the data samples are converted into Unlearnable Examples (UEs). Nevertheless, the absence of mechanisms that can verify how robust the UEs are against unknown unauthorized models and train-time techniques engenders several problems. First, the empirically optimized perturbations may suffer from the problem of cross-model generalization, which echoes the fact that the unauthorized models are usually unknown to the defender. Second, UEs can be mitigated by train-time techniques such as data augmentation and adversarial training. Furthermore, we find that a simple recovery attack can restore the clean-task performance of the classifiers trained on UEs by slightly perturbing the learned weights. To mitigate the aforementioned problems, in this paper, we propose a mechanism for certifying the so-called $(q, \eta)$-Learnability of an unlearnable dataset via parametric smoothing. A lower certified $(q, \eta)$-Learnability indicates a more robust protection over the dataset. Finally, we try to 1) improve the tightness of certified $(q, \eta)$-Learnability and 2) design Provably Unlearnable Examples (PUEs) which have reduced $(q, \eta)$-Learnability. According to experimental results, PUEs demonstrate both decreased certified $(q, \eta)$-Learnability and enhanced empirical robustness compared to existing UEs. | [
"['Derui Wang' 'Minhui Xue' 'Bo Li' 'Seyit Camtepe' 'Liming Zhu']"
]
|
null | null | 2405.03320 | null | null | http://arxiv.org/pdf/2405.03320v1 | 2024-05-06T09:55:11Z | 2024-05-06T09:55:11Z | Denoising of Geodetic Time Series Using Spatiotemporal Graph Neural
Networks: Application to Slow Slip Event Extraction | Geospatial data has been transformative for the monitoring of the Earth, yet, as in the case of (geo)physical monitoring, the measurements can have variable spatial and temporal sampling and may be associated with a significant level of perturbations degrading the signal quality. Denoising geospatial data is, therefore, essential, yet often challenging because the observations may comprise noise coming from different origins, including both environmental signals and instrumental artifacts, which are spatially and temporally correlated, thus hard to disentangle. This study addresses the denoising of multivariate time series acquired by irregularly distributed networks of sensors, requiring specific methods to handle the spatiotemporal correlation of the noise and the signal of interest. Specifically, our method focuses on the denoising of geodetic position time series, used to monitor ground displacement worldwide with centimeter- to-millimeter precision. Among the signals affecting GNSS data, slow slip events (SSEs) are of interest to seismologists. These are transients of deformation that are weakly emerging compared to other signals. Here, we design SSEdenoiser, a multi-station spatiotemporal graph-based attentive denoiser that learns latent characteristics of GNSS noise to reveal SSE-related displacement with sub-millimeter precision. It is based on the key combination of graph recurrent networks and spatiotemporal Transformers. The proposed method is applied to the Cascadia subduction zone, where SSEs occur along with bursts of tectonic tremors, a seismic rumbling identified from independent seismic recordings. The extracted events match the spatiotemporal evolution of tremors. This good space-time correlation of the denoised GNSS signals with the tremors validates the proposed denoising procedure. | [
"['Giuseppe Costantino' 'Sophie Giffard-Roisin' 'Mauro Dalla Mura'\n 'Anne Socquet']"
]
|
null | null | 2405.03327 | null | null | http://arxiv.org/pdf/2405.03327v1 | 2024-05-06T10:05:46Z | 2024-05-06T10:05:46Z | Clustering of Disease Trajectories with Explainable Machine Learning: A
Case Study on Postoperative Delirium Phenotypes | The identification of phenotypes within complex diseases or syndromes is a fundamental component of precision medicine, which aims to adapt healthcare to individual patient characteristics. Postoperative delirium (POD) is a complex neuropsychiatric condition with significant heterogeneity in its clinical manifestations and underlying pathophysiology. We hypothesize that POD comprises several distinct phenotypes, which cannot be directly observed in clinical practice. Identifying these phenotypes could enhance our understanding of POD pathogenesis and facilitate the development of targeted prevention and treatment strategies. In this paper, we propose an approach that combines supervised machine learning for personalized POD risk prediction with unsupervised clustering techniques to uncover potential POD phenotypes. We first demonstrate our approach using synthetic data, where we simulate patient cohorts with predefined phenotypes based on distinct sets of informative features. We aim to mimic any clinical disease with our synthetic data generation method. By training a predictive model and applying SHAP, we show that clustering patients in the SHAP feature importance space successfully recovers the true underlying phenotypes, outperforming clustering in the raw feature space. We then present a case study using real-world data from a cohort of elderly surgical patients. The results showcase the utility of our approach in uncovering clinically relevant subtypes of complex disorders like POD, paving the way for more precise and personalized treatment strategies. | [
"['Xiaochen Zheng' 'Manuel Schürch' 'Xingyu Chen' 'Maria Angeliki Komninou'\n 'Reto Schüpbach' 'Ahmed Allam' 'Jan Bartussek' 'Michael Krauthammer']"
]
|
null | null | 2405.03329 | null | null | http://arxiv.org/pdf/2405.03329v1 | 2024-05-06T10:09:35Z | 2024-05-06T10:09:35Z | Policy Learning for Balancing Short-Term and Long-Term Rewards | Empirical researchers and decision-makers spanning various domains frequently seek profound insights into the long-term impacts of interventions. While the significance of long-term outcomes is undeniable, an overemphasis on them may inadvertently overshadow short-term gains. Motivated by this, this paper formalizes a new framework for learning the optimal policy that effectively balances both long-term and short-term rewards, where some long-term outcomes are allowed to be missing. In particular, we first present the identifiability of both rewards under mild assumptions. Next, we deduce the semiparametric efficiency bounds, along with the consistency and asymptotic normality of their estimators. We also reveal that short-term outcomes, if associated, contribute to improving the estimator of the long-term reward. Based on the proposed estimators, we develop a principled policy learning approach and further derive the convergence rates of regret and estimation errors associated with the learned policy. Extensive experiments are conducted to validate the effectiveness of the proposed method, demonstrating its practical applicability. | [
"['Peng Wu' 'Ziyu Shen' 'Feng Xie' 'Zhongyao Wang' 'Chunchen Liu'\n 'Yan Zeng']"
]
|
null | null | 2405.03341 | null | null | http://arxiv.org/pdf/2405.03341v3 | 2024-05-24T06:32:06Z | 2024-05-06T10:42:28Z | Enhancing Q-Learning with Large Language Model Heuristics | Q-learning excels in learning from feedback within sequential decision-making tasks but often requires extensive sampling to achieve significant improvements. While reward shaping can enhance learning efficiency, non-potential-based methods introduce biases that affect performance, and potential-based reward shaping, though unbiased, lacks the ability to provide heuristics for state-action pairs, limiting its effectiveness in complex environments. Large language models (LLMs) can achieve zero-shot learning for simpler tasks, but they suffer from low inference speeds and occasional hallucinations. To address these challenges, we propose \textbf{LLM-guided Q-learning}, a framework that leverages LLMs as heuristics to aid in learning the Q-function for reinforcement learning. Our theoretical analysis demonstrates that this approach adapts to hallucinations, improves sample efficiency, and avoids biasing final performance. Experimental results show that our algorithm is general, robust, and capable of preventing ineffective exploration. | [
"['Xiefeng Wu']"
]
|
null | null | 2405.03342 | null | null | http://arxiv.org/pdf/2405.03342v3 | 2024-07-05T10:09:10Z | 2024-05-06T10:49:51Z | Doubly Robust Causal Effect Estimation under Networked Interference via
Targeted Learning | Causal effect estimation under networked interference is an important but challenging problem. Available parametric methods are limited in their model space, while previous semiparametric methods, e.g., leveraging neural networks to fit only one single nuisance function, may still encounter misspecification problems under networked interference without appropriate assumptions on the data generation process. To mitigate bias stemming from misspecification, we propose a novel doubly robust causal effect estimator under networked interference, by adapting the targeted learning technique to the training of neural networks. Specifically, we generalize the targeted learning technique into the networked interference setting and establish the condition under which an estimator achieves double robustness. Based on the condition, we devise an end-to-end causal effect estimator by transforming the identified theoretical condition into a targeted loss. Moreover, we provide a theoretical analysis of our designed estimator, revealing a faster convergence rate compared to a single nuisance model. Extensive experimental results on two real-world networks with semisynthetic data demonstrate the effectiveness of our proposed estimators. | [
"['Weilin Chen' 'Ruichu Cai' 'Zeqin Yang' 'Jie Qiao' 'Yuguang Yan'\n 'Zijian Li' 'Zhifeng Hao']"
]
|
null | null | 2405.03355 | null | null | http://arxiv.org/pdf/2405.03355v2 | 2024-05-28T14:47:03Z | 2024-05-06T11:05:13Z | A Generalization Theory of Cross-Modality Distillation with Contrastive
Learning | Cross-modality distillation arises as an important topic for data modalities containing limited knowledge such as depth maps and high-quality sketches. Such techniques are of great importance, especially for memory and privacy-restricted scenarios where labeled training data is generally unavailable. To solve the problem, existing label-free methods leverage a few pairwise unlabeled data to distill the knowledge by aligning features or statistics between the source and target modalities. For instance, one typically aims to minimize the L2 distance or contrastive loss between the learned features of pairs of samples in the source (e.g. image) and the target (e.g. sketch) modalities. However, most algorithms in this domain only focus on the experimental results but lack theoretical insight. To bridge the gap between the theory and practical method of cross-modality distillation, we first formulate a general framework of cross-modality contrastive distillation (CMCD), built upon contrastive learning that leverages both positive and negative correspondence, towards a better distillation of generalizable features. Furthermore, we establish a thorough convergence analysis that reveals that the distance between source and target modalities significantly impacts the test error on downstream tasks within the target modality which is also validated by the empirical results. Extensive experimental results show that our algorithm outperforms existing algorithms consistently by a margin of 2-3% across diverse modalities and tasks, covering modalities of image, sketch, depth map, and audio and tasks of recognition and segmentation. | [
"['Hangyu Lin' 'Chen Liu' 'Chengming Xu' 'Zhengqi Gao' 'Yanwei Fu'\n 'Yuan Yao']"
]
|
null | null | 2405.03363 | null | null | http://arxiv.org/abs/2405.03363v1 | 2024-05-06T11:13:50Z | 2024-05-06T11:13:50Z | Telextiles: End-to-end Remote Transmission of Fabric Tactile Sensation | The tactile sensation of textiles is critical in determining the comfort of clothing. For remote use, such as online shopping, users cannot physically touch the textile of clothes, making it difficult to evaluate its tactile sensation. Tactile sensing and actuation devices are required to transmit the tactile sensation of textiles. The sensing device needs to recognize different garments, even with hand-held sensors. In addition, the existing actuation device can only present a limited number of known patterns and cannot transmit unknown tactile sensations of textiles. To address these issues, we propose Telextiles, an interface that can remotely transmit tactile sensations of textiles by creating a latent space that reflects the proximity of textiles through contrastive self-supervised learning. We confirm that textiles with similar tactile features are located close to each other in the latent space through a two-dimensional plot. We then compress the latent features for known textile samples into the 1D distance and apply the 16 textile samples to the rollers in the order of the distance. The roller is rotated to select the textile with the closest feature if an unknown textile is detected. | [
"['Takekazu Kitagishi' 'Yuichi Hiroi' 'Yuna Watanabe' 'Yuta Itoh'\n 'Jun Rekimoto']"
]
|
null | null | 2405.03376 | null | null | http://arxiv.org/pdf/2405.03376v2 | 2024-05-08T03:27:04Z | 2024-05-06T11:30:55Z | CRA5: Extreme Compression of ERA5 for Portable Global Climate and
Weather Research via an Efficient Variational Transformer | The advent of data-driven weather forecasting models, which learn from hundreds of terabytes (TB) of reanalysis data, has significantly advanced forecasting capabilities. However, the substantial costs associated with data storage and transmission present a major challenge for data providers and users, affecting resource-constrained researchers and limiting their accessibility to participate in AI-based meteorological research. To mitigate this issue, we introduce an efficient neural codec, the Variational Autoencoder Transformer (VAEformer), for extreme compression of climate data to significantly reduce data storage cost, making AI-based meteorological research portable to researchers. Our approach diverges from recent complex neural codecs by utilizing a low-complexity Auto-Encoder transformer. This encoder produces a quantized latent representation through variance inference, which reparameterizes the latent space as a Gaussian distribution. This method improves the estimation of distributions for cross-entropy coding. Extensive experiments demonstrate that our VAEformer outperforms existing state-of-the-art compression methods in the context of climate data. By applying our VAEformer, we compressed the most popular ERA5 climate dataset (226 TB) into a new dataset, CRA5 (0.7 TB). This translates to a compression ratio of over 300 while retaining the dataset's utility for accurate scientific analysis. Further, downstream experiments show that global weather forecasting models trained on the compact CRA5 dataset achieve forecasting accuracy comparable to the model trained on the original dataset. Code, the CRA5 dataset, and the pre-trained model are available at https://github.com/taohan10200/CRA5. | [
"['Tao Han' 'Zhenghao Chen' 'Song Guo' 'Wanghan Xu' 'Lei Bai']"
]
|
null | null | 2405.03379 | null | null | http://arxiv.org/pdf/2405.03379v1 | 2024-05-06T11:33:12Z | 2024-05-06T11:33:12Z | Reverse Forward Curriculum Learning for Extreme Sample and Demonstration
Efficiency in Reinforcement Learning | Reinforcement learning (RL) presents a promising framework to learn policies through environment interaction, but often requires an infeasible amount of interaction data to solve complex tasks from sparse rewards. One direction includes augmenting RL with offline data demonstrating desired tasks, but past work often require a lot of high-quality demonstration data that is difficult to obtain, especially for domains such as robotics. Our approach consists of a reverse curriculum followed by a forward curriculum. Unique to our approach compared to past work is the ability to efficiently leverage more than one demonstration via a per-demonstration reverse curriculum generated via state resets. The result of our reverse curriculum is an initial policy that performs well on a narrow initial state distribution and helps overcome difficult exploration problems. A forward curriculum is then used to accelerate the training of the initial policy to perform well on the full initial state distribution of the task and improve demonstration and sample efficiency. We show how the combination of a reverse curriculum and forward curriculum in our method, RFCL, enables significant improvements in demonstration and sample efficiency compared against various state-of-the-art learning-from-demonstration baselines, even solving previously unsolvable tasks that require high precision and control. | [
"['Stone Tao' 'Arth Shukla' 'Tse-kai Chan' 'Hao Su']"
]
|
null | null | 2405.03381 | null | null | http://arxiv.org/pdf/2405.03381v1 | 2024-05-06T11:40:57Z | 2024-05-06T11:40:57Z | Statistical Edge Detection And UDF Learning For Shape Representation | In the field of computer vision, the numerical encoding of 3D surfaces is crucial. It is classical to represent surfaces with their Signed Distance Functions (SDFs) or Unsigned Distance Functions (UDFs). For tasks like representation learning, surface classification, or surface reconstruction, this function can be learned by a neural network, called Neural Distance Function. This network, and in particular its weights, may serve as a parametric and implicit representation for the surface. The network must represent the surface as accurately as possible. In this paper, we propose a method for learning UDFs that improves the fidelity of the obtained Neural UDF to the original 3D surface. The key idea of our method is to concentrate the learning effort of the Neural UDF on surface edges. More precisely, we show that sampling more training points around surface edges allows better local accuracy of the trained Neural UDF, and thus improves the global expressiveness of the Neural UDF in terms of Hausdorff distance. To detect surface edges, we propose a new statistical method based on the calculation of a $p$-value at each point on the surface. Our method is shown to detect surface edges more accurately than a commonly used local geometric descriptor. | [
"['Virgile Foy' 'Fabrice Gamboa' 'Reda Chhaibi']"
]
|
null | null | 2405.03384 | null | null | http://arxiv.org/pdf/2405.03384v1 | 2024-05-06T11:43:01Z | 2024-05-06T11:43:01Z | GLIP: Electromagnetic Field Exposure Map Completion by Deep Generative
Networks | In Spectrum cartography (SC), the generation of exposure maps for radio frequency electromagnetic fields (RF-EMF) spans dimensions of frequency, space, and time, which relies on a sparse collection of sensor data, posing a challenging ill-posed inverse problem. Cartography methods based on models integrate designed priors, such as sparsity and low-rank structures, to refine the solution of this inverse problem. In our previous work, EMF exposure map reconstruction was achieved by Generative Adversarial Networks (GANs) where physical laws or structural constraints were employed as a prior, but they require a large amount of labeled data or simulated full maps for training to produce efficient results. In this paper, we present a method to reconstruct EMF exposure maps using only the generator network in GANs which does not require explicit training, thus overcoming the limitations of GANs, such as using reference full exposure maps. This approach uses a prior from sensor data as Local Image Prior (LIP) captured by deep convolutional generative networks independent of learning the network parameters from images in an urban environment. Experimental results show that, even when only sparse sensor data are available, our method can produce accurate estimates. | [
"['Mohammed Mallik' 'Davy P. Gaillot' 'Laurent Clavier']"
]
|
null | null | 2405.03386 | null | null | http://arxiv.org/pdf/2405.03386v1 | 2024-05-06T11:44:54Z | 2024-05-06T11:44:54Z | Annot-Mix: Learning with Noisy Class Labels from Multiple Annotators via
a Mixup Extension | Training with noisy class labels impairs neural networks' generalization performance. In this context, mixup is a popular regularization technique to improve training robustness by making memorizing false class labels more difficult. However, mixup neglects that, typically, multiple annotators, e.g., crowdworkers, provide class labels. Therefore, we propose an extension of mixup, which handles multiple class labels per instance while considering which class label originates from which annotator. Integrated into our multi-annotator classification framework annot-mix, it performs superiorly to eight state-of-the-art approaches on eleven datasets with noisy class labels provided either by human or simulated annotators. Our code is publicly available through our repository at https://github.com/ies-research/annot-mix. | [
"['Marek Herde' 'Lukas Lührs' 'Denis Huseljic' 'Bernhard Sick']"
]
|
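A heavily hedged sketch of one multi-annotator mixup step can make the record above more concrete. The snippet below is not the authors' annot-mix implementation (that is in the linked repository); it only illustrates the general idea of mixing two instances while keeping each annotator's label attached. The toy data, linear model, and labelling mask are assumptions.

```python
# Hedged sketch (NOT the authors' annot-mix code): two instances are mixed as in
# standard mixup, and every annotator's possibly noisy label contributes to the
# loss with the matching mixing weight. Toy data and model are assumptions.
import torch
import torch.nn.functional as F

n, d, n_classes, n_annotators = 128, 32, 10, 3
x = torch.randn(n, d)
labels = torch.randint(0, n_classes, (n, n_annotators))  # one noisy label per annotator
mask = torch.rand(n, n_annotators) < 0.7                 # not every annotator labels every instance
model = torch.nn.Linear(d, n_classes)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

lam = torch.distributions.Beta(1.0, 1.0).sample().item()  # mixup coefficient
perm = torch.randperm(n)
logits = model(lam * x + (1 - lam) * x[perm])              # forward pass on mixed inputs

def annotator_ce(logits, labels, mask):
    # average cross-entropy over the annotators who actually labelled each instance
    flat = logits.unsqueeze(1).expand(-1, n_annotators, -1).reshape(-1, n_classes)
    per = F.cross_entropy(flat, labels.reshape(-1), reduction="none").reshape(-1, n_annotators)
    return (per * mask).sum() / mask.sum().clamp(min=1)

loss = lam * annotator_ce(logits, labels, mask) + (1 - lam) * annotator_ce(logits, labels[perm], mask[perm])
opt.zero_grad()
loss.backward()
opt.step()
```
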
null | null | 2405.03389 | null | null | http://arxiv.org/pdf/2405.03389v1 | 2024-05-06T11:51:09Z | 2024-05-06T11:51:09Z | Don't Waste Your Time: Early Stopping Cross-Validation | State-of-the-art automated machine learning systems for tabular data often employ cross-validation; ensuring that measured performances generalize to unseen data, or that subsequent ensembling does not overfit. However, using k-fold cross-validation instead of holdout validation drastically increases the computational cost of validating a single configuration. While ensuring better generalization and, by extension, better performance, the additional cost is often prohibitive for effective model selection within a time budget. We aim to make model selection with cross-validation more effective. Therefore, we study early stopping the process of cross-validation during model selection. We investigate the impact of early stopping on random search for two algorithms, MLP and random forest, across 36 classification datasets. We further analyze the impact of the number of folds by considering 3-, 5-, and 10-folds. In addition, we investigate the impact of early stopping with Bayesian optimization instead of random search and also repeated cross-validation. Our exploratory study shows that even a simple-to-understand and easy-to-implement method consistently allows model selection to converge faster; in ~94% of all datasets, on average by ~214%. Moreover, stopping cross-validation enables model selection to explore the search space more exhaustively by considering +167% configurations on average within one hour, while also obtaining better overall performance. | [
"['Edward Bergman' 'Lennart Purucker' 'Frank Hutter']"
]
|
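The early-stopping idea in the abstract above lends itself to a compact illustration. The sketch below is not the paper's code; the optimistic-bound stopping rule, the dataset, and the two candidate models are assumptions chosen only to make the fold-by-fold loop concrete.

```python
# Minimal sketch of early-stopped k-fold cross-validation during model selection
# (not the paper's implementation): a configuration is evaluated fold by fold and
# abandoned as soon as even an optimistic bound on its final mean score cannot
# beat the incumbent.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
folds = list(StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y))

def early_stopped_cv(model, incumbent):
    scores = []
    for i, (tr, va) in enumerate(folds, start=1):
        est = clone(model).fit(X[tr], y[tr])
        scores.append(accuracy_score(y[va], est.predict(X[va])))
        # optimistic bound: assume all remaining folds score a perfect 1.0
        if (sum(scores) + (len(folds) - i)) / len(folds) < incumbent:
            break  # this configuration can no longer beat the best one seen
    return float(np.mean(scores))

incumbent = -np.inf
for candidate in (MLPClassifier(max_iter=300, random_state=0),
                  RandomForestClassifier(n_estimators=200, random_state=0)):
    score = early_stopped_cv(candidate, incumbent)
    incumbent = max(incumbent, score)
    print(type(candidate).__name__, round(score, 4))
```
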
null | null | 2405.03401 | null | null | http://arxiv.org/pdf/2405.03401v1 | 2024-05-06T12:11:46Z | 2024-05-06T12:11:46Z | E2GNN: Efficient Graph Neural Network Ensembles for Semi-Supervised
Classification | This work studies ensemble learning for graph neural networks (GNNs) under the popular semi-supervised setting. Ensemble learning has shown superiority in improving the accuracy and robustness of traditional machine learning by combining the outputs of multiple weak learners. However, adopting a similar idea to integrate different GNN models is challenging for two reasons. First, GNN is notorious for its poor inference ability, so naively assembling multiple GNN models would deteriorate the inference efficiency. Second, when GNN models are trained with few labeled nodes, their performance is limited. In this case, the vanilla ensemble approach, e.g., majority vote, may be sub-optimal since most base models, i.e., GNNs, may make the wrong predictions. To this end, in this paper, we propose an efficient ensemble learner--E2GNN to assemble multiple GNNs in a learnable way by leveraging both labeled and unlabeled nodes. Specifically, we first pre-train different GNN models on a given data scenario according to the labeled nodes. Next, instead of directly combining their outputs for label inference, we train a simple multi-layer perceptron--MLP model to mimic their predictions on both labeled and unlabeled nodes. Then the unified MLP model is deployed to infer labels for unlabeled or new nodes. Since the predictions of unlabeled nodes from different GNN models may be incorrect, we develop a reinforced discriminator to effectively filter out those wrongly predicted nodes to boost the performance of MLP. By doing this, we suggest a principled approach to tackle the inference issues of GNN ensembles and maintain the merit of ensemble learning: improved performance. Comprehensive experiments over both transductive and inductive settings, across different GNN backbones and 8 benchmark datasets, demonstrate the superiority of E2GNN. | [
"['Xin Zhang' 'Daochen Zha' 'Qiaoyu Tan']"
]
|
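The distillation step described in the abstract above (an MLP mimicking pre-trained GNN teachers on labeled and unlabeled nodes) can be sketched as follows. This is not the E2GNN reference implementation: the teacher outputs are stand-in random tensors, and the reinforced discriminator that filters unreliable teacher predictions is omitted.

```python
# Sketch of the distillation step only: an MLP student fits the few labeled nodes
# while mimicking the averaged soft predictions of several (assumed precomputed)
# GNN teachers on all nodes. All tensors here are toy placeholders.
import torch
import torch.nn.functional as F

n_nodes, n_feats, n_classes, n_teachers = 1000, 64, 7, 3
x = torch.randn(n_nodes, n_feats)                             # node features (toy)
teacher_logits = torch.randn(n_teachers, n_nodes, n_classes)  # precomputed GNN outputs (toy)
labeled = torch.arange(60)                                    # indices of labeled nodes
y = torch.randint(0, n_classes, (60,))

student = torch.nn.Sequential(
    torch.nn.Linear(n_feats, 128), torch.nn.ReLU(), torch.nn.Linear(128, n_classes))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
teacher_soft = teacher_logits.softmax(dim=-1).mean(dim=0)     # averaged teacher distribution

for _ in range(200):
    opt.zero_grad()
    logits = student(x)
    ce = F.cross_entropy(logits[labeled], y)                  # supervised part
    kd = F.kl_div(logits.log_softmax(dim=-1), teacher_soft,
                  reduction="batchmean")                      # distillation part
    (ce + kd).backward()
    opt.step()
```
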
null | null | 2405.03409 | null | null | http://arxiv.org/pdf/2405.03409v1 | 2024-05-06T12:20:55Z | 2024-05-06T12:20:55Z | LightTR: A Lightweight Framework for Federated Trajectory Recovery | With the proliferation of GPS-equipped edge devices, huge trajectory data is generated and accumulated in various domains, motivating a variety of urban applications. Due to the limited acquisition capabilities of edge devices, a lot of trajectories are recorded at a low sampling rate, which may lead to the effectiveness drop of urban applications. We aim to recover a high-sampled trajectory based on the low-sampled trajectory in free space, i.e., without road network information, to enhance the usability of trajectory data and support urban applications more effectively. Recent proposals targeting trajectory recovery often assume that trajectories are available at a central location, which fail to handle the decentralized trajectories and hurt privacy. To bridge the gap between decentralized training and trajectory recovery, we propose a lightweight framework, LightTR, for federated trajectory recovery based on a client-server architecture, while keeping the data decentralized and private in each client/platform center (e.g., each data center of a company). Specifically, considering the limited processing capabilities of edge devices, LightTR encompasses a light local trajectory embedding module that offers improved computational efficiency without compromising its feature extraction capabilities. LightTR also features a meta-knowledge enhanced local-global training scheme to reduce communication costs between the server and clients and thus further offer efficiency improvement. Extensive experiments demonstrate the effectiveness and efficiency of the proposed framework. | [
"['Ziqiao Liu' 'Hao Miao' 'Yan Zhao' 'Chenxi Liu' 'Kai Zheng' 'Huan Li']"
]
|
null | null | 2405.03419 | null | null | http://arxiv.org/pdf/2405.03419v1 | 2024-05-06T12:36:17Z | 2024-05-06T12:36:17Z | Automated Metaheuristic Algorithm Design with Autoregressive Learning | Automated design of metaheuristic algorithms offers an attractive avenue to reduce human effort and gain enhanced performance beyond human intuition. Current automated methods design algorithms within a fixed structure and operate from scratch. This poses a clear gap towards fully discovering potentials over the metaheuristic family and fertilizing from prior design experience. To bridge the gap, this paper proposes an autoregressive learning-based designer for automated design of metaheuristic algorithms. Our designer formulates metaheuristic algorithm design as a sequence generation task, and harnesses an autoregressive generative network to handle the task. This offers two advances. First, through autoregressive inference, the designer generates algorithms with diverse lengths and structures, enabling to fully discover potentials over the metaheuristic family. Second, prior design knowledge learned and accumulated in neurons of the designer can be retrieved for designing algorithms for future problems, paving the way to continual design of algorithms for open-ended problem-solving. Extensive experiments on numeral benchmarks and real-world problems reveal that the proposed designer generates algorithms that outperform all human-created baselines on 24 out of 25 test problems. The generated algorithms display various structures and behaviors, reasonably fitting for different problem-solving contexts. Code will be released after paper publication. | [
"['Qi Zhao' 'Tengfei Liu' 'Bai Yan' 'Qiqi Duan' 'Jian Yang' 'Yuhui Shi']"
]
|
null | null | 2405.03427 | null | null | http://arxiv.org/pdf/2405.03427v1 | 2024-05-06T12:47:16Z | 2024-05-06T12:47:16Z | Geometry-aware framework for deep energy method: an application to
structural mechanics with hyperelastic materials | Physics-Informed Neural Networks (PINNs) have gained considerable interest in diverse engineering domains thanks to their capacity to integrate physical laws into deep learning models. Recently, geometry-aware PINN-based approaches that employ the strong form of underlying physical system equations have been developed with the aim of integrating geometric information into PINNs. Despite ongoing research, the assessment of PINNs in problems with various geometries remains an active area of investigation. In this work, we introduce a novel physics-informed framework named the Geometry-Aware Deep Energy Method (GADEM) for solving structural mechanics problems on different geometries. As the weak form of the physical system equation (or the energy-based approach) has demonstrated clear advantages compared to the strong form for solving solid mechanics problems, GADEM employs the weak form and aims to infer the solution on multiple shapes of geometries. Integrating a geometry-aware framework into an energy-based method results in an effective physics-informed deep learning model in terms of accuracy and computational cost. Different ways to represent the geometric information and to encode the geometric latent vectors are investigated in this work. We introduce a loss function of GADEM which is minimized based on the potential energy of all considered geometries. An adaptive learning method is also employed for the sampling of collocation points to enhance the performance of GADEM. We present some applications of GADEM to solve solid mechanics problems, including a loading simulation of a toy tire involving contact mechanics and large deformation hyperelasticity. The numerical results of this work demonstrate the remarkable capability of GADEM to infer the solution on various and new shapes of geometries using only one trained model. | [
"['Thi Nguyen Khoa Nguyen' 'Thibault Dairay' 'Raphaël Meunier'\n 'Christophe Millet' 'Mathilde Mougeot']"
]
|
null | null | 2405.03429 | null | null | http://arxiv.org/pdf/2405.03429v1 | 2024-05-06T12:48:34Z | 2024-05-06T12:48:34Z | ReCycle: Fast and Efficient Long Time Series Forecasting with Residual
Cyclic Transformers | Transformers have recently gained prominence in long time series forecasting by elevating accuracies in a variety of use cases. Regrettably, in the race for better predictive performance the overhead of model architectures has grown onerous, leading to models with computational demand infeasible for most practical applications. To bridge the gap between high method complexity and realistic computational resources, we introduce the Residual Cyclic Transformer, ReCycle. ReCycle utilizes primary cycle compression to address the computational complexity of the attention mechanism in long time series. By learning residuals from refined smoothing average techniques, ReCycle surpasses state-of-the-art accuracy in a variety of application use cases. The reliable and explainable fallback behavior ensured by simple, yet robust, smoothing average techniques additionally lowers the barrier for user acceptance. At the same time, our approach reduces the run time and energy consumption by more than an order of magnitude, making both training and inference feasible on low-performance, low-power and edge computing devices. Code is available at https://github.com/Helmholtz-AI-Energy/ReCycle | [
"['Arvid Weyrauch' 'Thomas Steens' 'Oskar Taubert' 'Benedikt Hanke'\n 'Aslan Eqbal' 'Ewa Götz' 'Achim Streit' 'Markus Götz' 'Charlotte Debus']"
]
|
null | null | 2405.03432 | null | null | http://arxiv.org/pdf/2405.03432v3 | 2024-05-26T16:40:11Z | 2024-05-06T12:54:22Z | Improved Forward-Forward Contrastive Learning | The backpropagation algorithm, or backprop, is a widely utilized optimization technique in deep learning. While there's growing evidence suggesting that models trained with backprop can accurately explain neuronal data, no backprop-like method has yet been discovered in the biological brain for learning. Moreover, employing a naive implementation of backprop in the brain has several drawbacks. In 2022, Geoffrey Hinton proposed a biologically plausible learning method known as the Forward-Forward (FF) algorithm. Shortly after this paper, a modified version called FFCL was introduced. However, FFCL had limitations, notably being a three-stage learning system where the final stage still relied on regular backpropagation. In our approach, we address these drawbacks by eliminating the last two stages of FFCL and completely removing regular backpropagation. Instead, we rely solely on local updates, offering a more biologically plausible alternative. | [
"['Gananath R']"
]
|
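For readers unfamiliar with the Forward-Forward family referenced above, the sketch below shows a Hinton-style layer with a purely local goodness update, which is the backprop-free building block the abstract builds on. It is not the paper's FFCL-derived algorithm; the threshold, layer sizes, and toy positive/negative batches are assumptions.

```python
# Sketch of a Forward-Forward layer with a local goodness objective: positive data
# should have high squared-activation "goodness", negative data low, and no gradient
# ever crosses layer boundaries.
import torch
import torch.nn.functional as F

class FFLayer(torch.nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.linear.parameters(), lr=lr)

    def forward(self, x):
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)  # pass only the direction upward
        return torch.relu(self.linear(x))

    def local_update(self, x_pos, x_neg):
        h_pos, h_neg = self.forward(x_pos), self.forward(x_neg)
        g_pos, g_neg = h_pos.pow(2).sum(dim=1), h_neg.pow(2).sum(dim=1)  # "goodness"
        # positive data should exceed the threshold, negative data should fall below it
        loss = (F.softplus(self.threshold - g_pos) + F.softplus(g_neg - self.threshold)).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return h_pos.detach(), h_neg.detach()  # detach: no gradient crosses layers

layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos, x_neg = torch.rand(64, 784), torch.rand(64, 784)  # toy positive/negative batches
for layer in layers:
    x_pos, x_neg = layer.local_update(x_pos, x_neg)
```
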
null | null | 2405.03435 | null | null | http://arxiv.org/abs/2405.03435v1 | 2024-05-06T12:58:48Z | 2024-05-06T12:58:48Z | A method for quantifying the generalization capabilities of generative
models for solving Ising models | For Ising models with complex energy landscapes, whether the ground state can be found by neural networks depends heavily on the Hamming distance between the training datasets and the ground state. Despite the fact that various recently proposed generative models have shown good performance in solving Ising models, there is no adequate discussion on how to quantify their generalization capabilities. Here we design a Hamming distance regularizer in the framework of a class of generative models, variational autoregressive networks (VAN), to quantify the generalization capabilities of various network architectures combined with VAN. The regularizer can control the size of the overlaps between the ground state and the training datasets generated by networks, which, together with the success rates of finding the ground state, form a quantitative metric to quantify their generalization capabilities. We conduct numerical experiments on several prototypical network architectures combined with VAN, including feed-forward neural networks, recurrent neural networks, and graph neural networks, to quantify their generalization capabilities when solving Ising models. Moreover, considering the fact that the quantification of the generalization capabilities of networks on small-scale problems can be used to predict their relative performance on large-scale problems, our method is of great significance for assisting in the Neural Architecture Search field of searching for the optimal network architectures when solving large-scale Ising models. | [
"['Qunlong Ma' 'Zhi Ma' 'Ming Gao']"
]
|
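The record above centers on controlling the Hamming distance between generated configurations and the training set. The toy snippet below only computes that quantity for stand-in spin configurations; wiring it into the paper's regularizer inside a variational autoregressive network is not reproduced here.

```python
# Toy computation of Hamming distances between model samples and a training set of
# spin configurations. All data are random stand-ins; only the metric is shown.
import numpy as np

rng = np.random.default_rng(0)
n_spins = 64
train = rng.choice([-1, 1], size=(200, n_spins))    # stand-in training configurations
samples = rng.choice([-1, 1], size=(500, n_spins))  # stand-in model samples

# pairwise Hamming distance = number of spins on which two configurations differ
diff = (samples[:, None, :] != train[None, :, :]).sum(axis=2)
min_dist = diff.min(axis=1)                         # distance to the closest training point
print("mean minimal Hamming distance:", min_dist.mean())
print("fraction of exact training copies:", (min_dist == 0).mean())
```
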
null | null | 2405.03440 | null | null | http://arxiv.org/pdf/2405.03440v1 | 2024-05-06T13:12:25Z | 2024-05-06T13:12:25Z | Robotic Constrained Imitation Learning for the Peg Transfer Task in
Fundamentals of Laparoscopic Surgery | In this study, we present an implementation strategy for a robot that performs peg transfer tasks in Fundamentals of Laparoscopic Surgery (FLS) via imitation learning, aimed at the development of an autonomous robot for laparoscopic surgery. Robotic laparoscopic surgery presents two main challenges: (1) the need to manipulate forceps using ports established on the body surface as fulcrums, and (2) difficulty in perceiving depth information when working with a monocular camera that displays its images on a monitor. Especially, regarding issue (2), most prior research has assumed the availability of depth images or models of a target to be operated on. Therefore, in this study, we achieve more accurate imitation learning with only monocular images by extracting motion constraints from one exemplary motion of skilled operators, collecting data based on these constraints, and conducting imitation learning based on the collected data. We implemented an overall system using two Franka Emika Panda Robot Arms and validated its effectiveness. | [
"['Kento Kawaharazuka' 'Kei Okada' 'Masayuki Inaba']"
]
|
null | null | 2405.03449 | null | null | http://arxiv.org/pdf/2405.03449v1 | 2024-05-06T13:22:54Z | 2024-05-06T13:22:54Z | Byzantine-Robust Gossip: Insights from a Dual Approach | Distributed approaches have many computational benefits, but they are vulnerable to attacks from a subset of devices transmitting incorrect information. This paper investigates Byzantine-resilient algorithms in a decentralized setting, where devices communicate directly with one another. We leverage the so-called dual approach to design a general robust decentralized optimization method. We provide both global and local clipping rules in the special case of average consensus, with tight convergence guarantees. These clipping rules are practical, and yield results that finely characterize the impact of Byzantine nodes, highlighting for instance a qualitative difference in convergence between global and local clipping thresholds. Lastly, we demonstrate that they can serve as a basis for designing efficient attacks. | [
"['Renaud Gaucher' 'Hadrien Hendrikx' 'Aymeric Dieuleveut']"
]
|
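A minimal numerical sketch of the clipping idea discussed above: honest nodes gossip toward their neighbours, but each per-neighbour difference is norm-clipped so a Byzantine message has bounded influence. The threshold, step size, complete graph, and attack model are illustrative assumptions rather than the paper's tuned rules.

```python
# Toy gossip averaging with a local clipping rule under a crude Byzantine attack.
import numpy as np

rng = np.random.default_rng(0)
n, d, byzantine = 20, 5, {0, 1}
x = rng.normal(size=(n, d))                              # each node holds a private vector
honest = [i for i in range(n) if i not in byzantine]
target = x[honest].mean(axis=0)                          # the honest average we hope to reach

def clip(v, tau):
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

step, tau = 0.2, 1.0
for _ in range(200):
    msgs = x.copy()
    for b in byzantine:
        msgs[b] = rng.normal(scale=100.0, size=d)        # attackers broadcast arbitrary junk
    new_x = x.copy()
    for i in honest:
        diffs = [clip(msgs[j] - x[i], tau) for j in range(n) if j != i]
        new_x[i] = x[i] + step * np.mean(diffs, axis=0)  # clipped gossip update
    x = new_x

print("distance to honest mean:", np.linalg.norm(x[honest].mean(axis=0) - target))
```
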
null | null | 2405.03462 | null | null | http://arxiv.org/pdf/2405.03462v1 | 2024-05-06T13:33:38Z | 2024-05-06T13:33:38Z | A Lightweight Neural Architecture Search Model for Medical Image
Classification | Accurate classification of medical images is essential for modern diagnostics. Deep learning advancements led clinicians to increasingly use sophisticated models to make faster and more accurate decisions, sometimes replacing human judgment. However, model development is costly and repetitive. Neural Architecture Search (NAS) provides solutions by automating the design of deep learning architectures. This paper presents ZO-DARTS+, a differentiable NAS algorithm that improves search efficiency through a novel method of generating sparse probabilities by bi-level optimization. Experiments on five public medical datasets show that ZO-DARTS+ matches the accuracy of state-of-the-art solutions while reducing search times by up to three times. | [
"['Lunchen Xie' 'Eugenio Lomurno' 'Matteo Gambella' 'Danilo Ardagna'\n 'Manuel Roveri' 'Matteo Matteucci' 'Qingjiang Shi']"
]
|
null | null | 2405.03468 | null | null | http://arxiv.org/pdf/2405.03468v1 | 2024-05-06T13:44:51Z | 2024-05-06T13:44:51Z | Hierarchic Flows to Estimate and Sample High-dimensional Probabilities | Finding low-dimensional interpretable models of complex physical fields such as turbulence remains an open question, 80 years after the pioneer work of Kolmogorov. Estimating high-dimensional probability distributions from data samples suffers from an optimization and an approximation curse of dimensionality. It may be avoided by following a hierarchic probability flow from coarse to fine scales. This inverse renormalization group is defined by conditional probabilities across scales, renormalized in a wavelet basis. For a $\varphi^4$ scalar potential, sampling these hierarchic models avoids the critical slowing down at the phase transition. An outstanding issue is to also approximate non-Gaussian fields having long-range interactions in space and across scales. We introduce low-dimensional models with robust multiscale approximations of high order polynomial energies. They are calculated with a second wavelet transform, which defines interactions over two hierarchies of scales. We estimate and sample these wavelet scattering models to generate 2D vorticity fields of turbulence, and images of dark matter densities. | [
"['Etienne Lempereur' 'Stéphane Mallat']"
]
|
null | null | 2405.03472 | null | null | http://arxiv.org/pdf/2405.03472v2 | 2024-05-28T18:39:53Z | 2024-05-06T13:47:09Z | A Symplectic Analysis of Alternating Mirror Descent | Motivated by understanding the behavior of the Alternating Mirror Descent (AMD) algorithm for bilinear zero-sum games, we study the discretization of continuous-time Hamiltonian flow via the symplectic Euler method. We provide a framework for analysis using results from Hamiltonian dynamics, Lie algebra, and symplectic numerical integrators, with an emphasis on the existence and properties of a conserved quantity, the modified Hamiltonian (MH), for the symplectic Euler method. We compute the MH in closed-form when the original Hamiltonian is a quadratic function, and show that it generally differs from the other conserved quantity known previously in that case. We derive new error bounds on the MH when truncated at orders in the stepsize in terms of the number of iterations, $K$, and use these bounds to show an improved $\mathcal{O}(K^{1/5})$ total regret bound and an $\mathcal{O}(K^{-4/5})$ duality gap of the average iterates for AMD. Finally, we propose a conjecture which, if true, would imply that the total regret for AMD scales as $\mathcal{O}\left(K^{\varepsilon}\right)$ and the duality gap of the average iterates as $\mathcal{O}\left(K^{-1+\varepsilon}\right)$ for any $\varepsilon>0$, and we can take $\varepsilon=0$ upon certain convergence conditions for the MH. | [
"['Jonas Katona' 'Xiuyuan Wang' 'Andre Wibisono']"
]
|
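The role of the symplectic Euler discretization in the abstract above can be seen in a small numerical experiment: on a quadratic Hamiltonian it keeps the energy bounded over many steps, whereas explicit Euler drifts, which is what motivates analyzing a modified Hamiltonian. The matrices and step size below are toy choices, not the paper's bilinear-game setting.

```python
# Illustration: symplectic Euler vs explicit Euler on H(q, p) = 0.5*(p^T A p + q^T B q).
import numpy as np

rng = np.random.default_rng(1)
A = np.diag(rng.uniform(0.5, 1.5, 4))
B = np.diag(rng.uniform(0.5, 1.5, 4))
H = lambda q, p: 0.5 * (p @ A @ p + q @ B @ q)
q0, p0 = rng.normal(size=4), rng.normal(size=4)

def run(symplectic, h=0.05, steps=2000):
    q, p = q0.copy(), p0.copy()
    for _ in range(steps):
        if symplectic:
            p = p - h * (B @ q)      # update p with the old q ...
            q = q + h * (A @ p)      # ... then q with the new p (symplectic Euler)
        else:
            q, p = q + h * (A @ p), p - h * (B @ q)  # explicit Euler: both use old values
    return H(q, p)

print("initial H:", H(q0, p0))
print("symplectic Euler, final H:", run(True))
print("explicit Euler,  final H:", run(False))
```
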
null | null | 2405.03481 | null | null | http://arxiv.org/pdf/2405.03481v1 | 2024-05-06T13:53:09Z | 2024-05-06T13:53:09Z | AnchorGT: Efficient and Flexible Attention Architecture for Scalable
Graph Transformers | Graph Transformers (GTs) have significantly advanced the field of graph representation learning by overcoming the limitations of message-passing graph neural networks (GNNs) and demonstrating promising performance and expressive power. However, the quadratic complexity of self-attention mechanism in GTs has limited their scalability, and previous approaches to address this issue often suffer from expressiveness degradation or lack of versatility. To address this issue, we propose AnchorGT, a novel attention architecture for GTs with global receptive field and almost linear complexity, which serves as a flexible building block to improve the scalability of a wide range of GT models. Inspired by anchor-based GNNs, we employ structurally important $k$-dominating node set as anchors and design an attention mechanism that focuses on the relationship between individual nodes and anchors, while retaining the global receptive field for all nodes. With its intuitive design, AnchorGT can easily replace the attention module in various GT models with different network architectures and structural encodings, resulting in reduced computational overhead without sacrificing performance. In addition, we theoretically prove that AnchorGT attention can be strictly more expressive than Weisfeiler-Lehman test, showing its superiority in representing graph structures. Our experiments on three state-of-the-art GT models demonstrate that their AnchorGT variants can achieve better results while being faster and significantly more memory efficient. | [
"['Wenhao Zhu' 'Guojie Song' 'Liang Wang' 'Shaoguo Liu']"
]
|
null | null | 2405.03484 | null | null | http://arxiv.org/pdf/2405.03484v1 | 2024-05-06T13:55:39Z | 2024-05-06T13:55:39Z | Whispy: Adapting STT Whisper Models to Real-Time Environments | Large general-purpose transformer models have recently become the mainstay in the realm of speech analysis. In particular, Whisper achieves state-of-the-art results in relevant tasks such as speech recognition, translation, language identification, and voice activity detection. However, Whisper models are not designed to be used in real-time conditions, and this limitation makes them unsuitable for a vast plethora of practical applications. In this paper, we introduce Whispy, a system intended to bring live capabilities to the Whisper pretrained models. As a result of a number of architectural optimisations, Whispy is able to consume live audio streams and generate high level, coherent voice transcriptions, while still maintaining a low computational cost. We evaluate the performance of our system on a large repository of publicly available speech datasets, investigating how the transcription mechanism introduced by Whispy impacts on the Whisper output. Experimental results show how Whispy excels in robustness, promptness, and accuracy. | [
"['Antonio Bevilacqua' 'Paolo Saviano' 'Alessandro Amirante'\n 'Simon Pietro Romano']"
]
|
null | null | 2405.03501 | null | null | http://arxiv.org/pdf/2405.03501v1 | 2024-05-06T14:13:38Z | 2024-05-06T14:13:38Z | Boosting Single Positive Multi-label Classification with Generalized
Robust Loss | Multi-label learning (MLL) requires comprehensive multi-semantic annotations that are hard to fully obtain, thus often resulting in missing-label scenarios. In this paper, we investigate Single Positive Multi-label Learning (SPML), where each image is associated with merely one positive label. Existing SPML methods only focus on designing losses using mechanisms such as hard pseudo-labeling and robust losses, mostly leading to unacceptable false negatives. To address this issue, we first propose a generalized loss framework based on expected risk minimization to provide soft pseudo labels, and point out that the former losses can be seamlessly converted into our framework. In particular, we design a novel robust loss based on our framework, which enjoys flexible coordination between false positives and false negatives, and can additionally deal with the imbalance between positive and negative samples. Extensive experiments show that our approach can significantly improve SPML performance and outperform the vast majority of state-of-the-art methods on all four benchmarks. | [
"['Yanxi Chen' 'Chunxiao Li' 'Xinyang Dai' 'Jinhuan Li' 'Weiyu Sun'\n 'Yiming Wang' 'Renyuan Zhang' 'Tinghe Zhang' 'Bo Wang']"
]
|
null | null | 2405.03516 | null | null | http://arxiv.org/pdf/2405.03516v1 | 2024-05-06T14:29:24Z | 2024-05-06T14:29:24Z | GI-SMN: Gradient Inversion Attack against Federated Learning without
Prior Knowledge | Federated learning (FL) has emerged as a privacy-preserving machine learning approach where multiple parties share gradient information rather than original user data. Recent work has demonstrated that gradient inversion attacks can exploit the gradients of FL to recreate the original user data, posing significant privacy risks. However, these attacks make strong assumptions about the attacker, such as altering the model structure or parameters, gaining batch normalization statistics, or acquiring prior knowledge of the original training set, etc. Consequently, these attacks are not possible in real-world scenarios. To this end, we propose a novel Gradient Inversion attack based on Style Migration Network (GI-SMN), which breaks through the strong assumptions made by previous gradient inversion attacks. The optimization space is reduced by the refinement of the latent code and the use of regular terms to facilitate gradient matching. GI-SMN enables the reconstruction of user data with high similarity in batches. Experimental results have demonstrated that GI-SMN outperforms state-of-the-art gradient inversion attacks in both visual effect and similarity metrics. Additionally, it can also overcome gradient pruning and differential privacy defenses. | [
"['Jin Qian' 'Kaimin Wei' 'Yongdong Wu' 'Jilian Zhang' 'Jipeng Chen'\n 'Huan Bao']"
]
|
null | null | 2405.03526 | null | null | http://arxiv.org/pdf/2405.03526v1 | 2024-05-06T14:44:06Z | 2024-05-06T14:44:06Z | ReinWiFi: A Reinforcement-Learning-Based Framework for the
Application-Layer QoS Optimization of WiFi Networks | In this paper, a reinforcement-learning-based scheduling framework is proposed and implemented to optimize the application-layer quality-of-service (QoS) of a practical wireless local area network (WLAN) suffering from unknown interference. Particularly, application-layer tasks of file delivery and delay-sensitive communication, e.g., screen projection, in a WLAN with enhanced distributed channel access (EDCA) mechanism, are jointly scheduled by adjusting the contention window sizes and application-layer throughput limitation, such that their QoS, including the throughput of file delivery and the round trip time of the delay-sensitive communication, can be optimized. Due to the unknown interference and vendor-dependent implementation of the network interface card, the relation between the scheduling policy and the system QoS is unknown. Hence, a reinforcement learning method is proposed, in which a novel Q-network is trained to map from the historical scheduling parameters and QoS observations to the current scheduling action. It is demonstrated on a testbed that the proposed framework can achieve a significantly better QoS than the conventional EDCA mechanism. | [
"['Qianren Li' 'Bojie Lv' 'Yuncong Hong' 'Rui Wang']"
]
|
null | null | 2405.03534 | null | null | http://arxiv.org/pdf/2405.03534v1 | 2024-05-06T14:52:23Z | 2024-05-06T14:52:23Z | Meta-Evolve: Continuous Robot Evolution for One-to-many Policy Transfer | We investigate the problem of transferring an expert policy from a source robot to multiple different robots. To solve this problem, we propose a method named $Meta$-$Evolve$ that uses continuous robot evolution to efficiently transfer the policy to each target robot through a set of tree-structured evolutionary robot sequences. The robot evolution tree allows the robot evolution paths to be shared, so our approach can significantly outperform naive one-to-one policy transfer. We present a heuristic approach to determine an optimized robot evolution tree. Experiments have shown that our method is able to improve the efficiency of one-to-three transfer of manipulation policy by up to 3.2$\times$ and one-to-six transfer of agile locomotion policy by 2.4$\times$ in terms of simulation cost over the baseline of launching multiple independent one-to-one policy transfers. | [
"['Xingyu Liu' 'Deepak Pathak' 'Ding Zhao']"
]
|
null | null | 2405.03537 | null | null | http://arxiv.org/pdf/2405.03537v2 | 2024-06-16T14:05:53Z | 2024-05-06T14:55:37Z | Exploring the Efficacy of Federated-Continual Learning Nodes with
Attention-Based Classifier for Robust Web Phishing Detection: An Empirical
Investigation | Web phishing poses a dynamic threat, requiring detection systems to quickly adapt to the latest tactics. Traditional approaches of accumulating data and periodically retraining models are outpaced. We propose a novel paradigm combining federated learning and continual learning, enabling distributed nodes to continually update models on streams of new phishing data, without accumulating data. These locally adapted models are then aggregated at a central server via federated learning. To enhance detection, we introduce a custom attention-based classifier model with residual connections, tailored for web phishing, leveraging attention mechanisms to capture intricate phishing patterns. We evaluate our hybrid learning paradigm across continual learning strategies (cumulative, replay, MIR, LwF) and model architectures through an empirical investigation. Our main contributions are: (1) a new hybrid federated-continual learning paradigm for robust web phishing detection, and (2) a novel attention + residual connections based model explicitly designed for this task, attaining 0.93 accuracy, 0.90 precision, 0.96 recall and 0.93 f1-score with the LwF strategy, outperforming traditional approaches in detecting emerging phishing threats while retaining past knowledge. | [
"['Jesher Joshua M' 'Adhithya R' 'Sree Dananjay S' 'M Revathi']"
]
|
null | null | 2405.03541 | null | null | http://arxiv.org/pdf/2405.03541v1 | 2024-05-06T15:02:16Z | 2024-05-06T15:02:16Z | RepVGG-GELAN: Enhanced GELAN with VGG-STYLE ConvNets for Brain Tumour
Detection | Object detection algorithms particularly those based on YOLO have demonstrated remarkable efficiency in balancing speed and accuracy. However, their application in brain tumour detection remains underexplored. This study proposes RepVGG-GELAN, a novel YOLO architecture enhanced with RepVGG, a reparameterized convolutional approach for object detection tasks particularly focusing on brain tumour detection within medical images. RepVGG-GELAN leverages the RepVGG architecture to improve both speed and accuracy in detecting brain tumours. Integrating RepVGG into the YOLO framework aims to achieve a balance between computational efficiency and detection performance. This study includes a spatial pyramid pooling-based Generalized Efficient Layer Aggregation Network (GELAN) architecture which further enhances the capability of RepVGG. Experimental evaluation conducted on a brain tumour dataset demonstrates the effectiveness of RepVGG-GELAN surpassing existing RCS-YOLO in terms of precision and speed. Specifically, RepVGG-GELAN achieves an increased precision of 4.91% and an increased AP50 of 2.54% over the latest existing approach while operating at 240.7 GFLOPs. The proposed RepVGG-GELAN with GELAN architecture presents promising results establishing itself as a state-of-the-art solution for accurate and efficient brain tumour detection in medical images. The implementation code is publicly available at https://github.com/ThensiB/RepVGG-GELAN. | [
"['Thennarasi Balakrishnan' 'Sandeep Singh Sengar']"
]
|
null | null | 2405.03542 | null | null | http://arxiv.org/pdf/2405.03542v1 | 2024-04-26T09:27:59Z | 2024-04-26T09:27:59Z | Enhancing Channel Estimation in Quantized Systems with a Generative
Prior | Channel estimation in quantized systems is challenging, particularly in low-resolution systems. In this work, we propose to leverage a Gaussian mixture model (GMM) as generative prior, capturing the channel distribution of the propagation environment, to enhance a classical estimation technique based on the expectation-maximization (EM) algorithm for one-bit quantization. Thereby, a maximum a posteriori (MAP) estimate of the most responsible mixture component is inferred for a quantized received signal, which is subsequently utilized in the EM algorithm as side information. Numerical results demonstrate the significant performance improvement of our proposed approach over both a simplistic Gaussian prior and current state-of-the-art channel estimators. Furthermore, the proposed estimation framework exhibits adaptability to higher resolution systems and alternative generative priors. | [
"['Benedikt Fesl' 'Aziz Banna' 'Wolfgang Utschick']"
]
|
null | null | 2405.03546 | null | null | http://arxiv.org/pdf/2405.03546v1 | 2024-05-06T15:10:19Z | 2024-05-06T15:10:19Z | CCDM: Continuous Conditional Diffusion Models for Image Generation | Continuous Conditional Generative Modeling (CCGM) aims to estimate the distribution of high-dimensional data, typically images, conditioned on scalar continuous variables known as regression labels. While Continuous conditional Generative Adversarial Networks (CcGANs) were initially designed for this task, their adversarial training mechanism remains vulnerable to extremely sparse or imbalanced data, resulting in suboptimal outcomes. To enhance the quality of generated images, a promising alternative is to replace CcGANs with Conditional Diffusion Models (CDMs), renowned for their stable training process and ability to produce more realistic images. However, existing CDMs encounter challenges when applied to CCGM tasks due to several limitations such as inadequate U-Net architectures and deficient model fitting mechanisms for handling regression labels. In this paper, we introduce Continuous Conditional Diffusion Models (CCDMs), the first CDM designed specifically for the CCGM task. CCDMs address the limitations of existing CDMs by introducing specially designed conditional diffusion processes, a modified denoising U-Net with a custom-made conditioning mechanism, a novel hard vicinal loss for model fitting, and an efficient conditional sampling procedure. With comprehensive experiments on four datasets with varying resolutions ranging from 64x64 to 192x192, we demonstrate the superiority of the proposed CCDM over state-of-the-art CCGM models, establishing new benchmarks in CCGM. Extensive ablation studies validate the model design and implementation configuration of the proposed CCDM. Our code is publicly available at https://github.com/UBCDingXin/CCDM. | [
"['Xin Ding' 'Yongwei Wang' 'Kao Zhang' 'Z. Jane Wang']"
]
|
null | null | 2405.03547 | null | null | http://arxiv.org/pdf/2405.03547v2 | 2024-05-09T14:44:22Z | 2024-05-06T15:10:46Z | Position: Leverage Foundational Models for Black-Box Optimization | Undeniably, Large Language Models (LLMs) have stirred an extraordinary wave of innovation in the machine learning research domain, resulting in substantial impact across diverse fields such as reinforcement learning, robotics, and computer vision. Their incorporation has been rapid and transformative, marking a significant paradigm shift in the field of machine learning research. However, the field of experimental design, grounded on black-box optimization, has been much less affected by such a paradigm shift, even though integrating LLMs with optimization presents a unique landscape ripe for exploration. In this position paper, we frame the field of black-box optimization around sequence-based foundation models and organize their relationship with previous literature. We discuss the most promising ways foundational language models can revolutionize optimization, which include harnessing the vast wealth of information encapsulated in free-form text to enrich task comprehension, utilizing highly flexible sequence models such as Transformers to engineer superior optimization strategies, and enhancing performance prediction over previously unseen search spaces. | [
"['Xingyou Song' 'Yingtao Tian' 'Robert Tjarko Lange' 'Chansoo Lee'\n 'Yujin Tang' 'Yutian Chen']"
]
|
null | null | 2405.03549 | null | null | http://arxiv.org/pdf/2405.03549v1 | 2024-05-06T15:12:51Z | 2024-05-06T15:12:51Z | Bridging discrete and continuous state spaces: Exploring the Ehrenfest
process in time-continuous diffusion models | Generative modeling via stochastic processes has led to remarkable empirical results as well as to recent advances in their theoretical understanding. In principle, both space and time of the processes can be discrete or continuous. In this work, we study time-continuous Markov jump processes on discrete state spaces and investigate their correspondence to state-continuous diffusion processes given by SDEs. In particular, we revisit the $\textit{Ehrenfest process}$, which converges to an Ornstein-Uhlenbeck process in the infinite state space limit. Likewise, we can show that the time-reversal of the Ehrenfest process converges to the time-reversed Ornstein-Uhlenbeck process. This observation bridges discrete and continuous state spaces and allows us to carry over methods from one to the respective other setting. Additionally, we suggest an algorithm for training the time-reversal of Markov jump processes which relies on conditional expectations and can thus be directly related to denoising score matching. We demonstrate our methods in multiple convincing numerical experiments. | [
"['Ludwig Winkler' 'Lorenz Richter' 'Manfred Opper']"
]
|
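As background for the record above, the Ehrenfest process itself is easy to simulate: with $N$ particles switching urns at unit rate, the state $k$ jumps to $k-1$ with rate $k$ and to $k+1$ with rate $N-k$, and the scaled coordinate $(2k-N)/\sqrt{N}$ approaches an Ornstein-Uhlenbeck process as $N$ grows. The Gillespie-style sketch below only visualises this limit; it does not implement the paper's time-reversal training algorithm.

```python
# Gillespie-style simulation of the Ehrenfest jump process and its OU-like scaling.
import numpy as np

rng = np.random.default_rng(0)

def ehrenfest_path(N, t_max):
    t, k, times, states = 0.0, N, [0.0], [N]   # start with all particles in one urn
    while t < t_max:
        rate_down, rate_up = k, N - k
        total = rate_down + rate_up
        t += rng.exponential(1.0 / total)      # exponential waiting time to the next jump
        k += -1 if rng.random() < rate_down / total else 1
        times.append(t)
        states.append(k)
    return np.array(times), np.array(states)

for N in (10, 100, 1000):
    times, states = ehrenfest_path(N, t_max=5.0)
    scaled = (2 * states - N) / np.sqrt(N)     # OU-like in the large-N limit
    print(f"N={N:5d}  jumps={len(times) - 1:6d}  final scaled state={scaled[-1]: .3f}")
```
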
null | null | 2405.03574 | null | null | http://arxiv.org/pdf/2405.03574v1 | 2024-05-06T15:49:46Z | 2024-05-06T15:49:46Z | ILILT: Implicit Learning of Inverse Lithography Technologies | Lithography, transferring chip design masks to the silicon wafer, is the most important phase in modern semiconductor manufacturing flow. Due to the limitations of lithography systems, extensive design optimizations are required to tackle the design and silicon mismatch. Inverse lithography technology (ILT) is one of the promising solutions to perform pre-fabrication optimization, termed mask optimization. Because of mask optimization problems' constrained non-convexity, numerical ILT solvers rely heavily on good initialization to avoid getting stuck on sub-optimal solutions. Machine learning (ML) techniques are hence proposed to generate mask initialization for ILT solvers with one-shot inference, targeting faster and better convergence during ILT. This paper addresses the question of \textit{whether ML models can directly generate high-quality optimized masks without engaging ILT solvers in the loop}. We propose an implicit learning ILT framework: ILILT, which leverages the implicit layer learning method and lithography-conditioned inputs to ground the model. Trained to understand the ILT optimization procedure, ILILT can outperform the state-of-the-art machine learning solutions, significantly improving efficiency and quality. | [
"['Haoyu Yang' 'Haoxing Ren']"
]
|
null | null | 2405.03582 | null | null | http://arxiv.org/pdf/2405.03582v1 | 2024-05-06T15:53:55Z | 2024-05-06T15:53:55Z | Functional Latent Dynamics for Irregularly Sampled Time Series
Forecasting | Irregularly sampled time series with missing values are often observed in multiple real-world applications such as healthcare, climate and astronomy. They pose a significant challenge to standard deep learning models that operate only on fully observed and regularly sampled time series. In order to capture the continuous dynamics of the irregular time series, many models rely on solving an Ordinary Differential Equation (ODE) in the hidden state. These ODE-based models tend to be slow and require large memory due to sequential operations and a complex ODE solver. As an alternative to complex ODE-based models, we propose a family of models called Functional Latent Dynamics (FLD). Instead of solving the ODE, we use simple curves which exist at all time points to specify the continuous latent state in the model. The coefficients of these curves are learned only from the observed values in the time series, ignoring the missing values. Through extensive experiments, we demonstrate that FLD achieves better performance compared to the best ODE-based model while reducing the runtime and memory overhead. Specifically, FLD requires an order of magnitude less time to infer the forecasts compared to the best performing forecasting model. | [
"['Christian Klötergens' 'Vijaya Krishna Yalavarthi'\n 'Maximilian Stubbemann' 'Lars Schmidt-Thieme']"
]
|
null | null | 2405.03590 | null | null | http://arxiv.org/pdf/2405.03590v1 | 2024-05-06T16:01:28Z | 2024-05-06T16:01:28Z | Deep Clustering with Self-Supervision using Pairwise Similarities | Deep clustering incorporates embedding into clustering to find a lower-dimensional space appropriate for clustering. In this paper, we propose a novel deep clustering framework with self-supervision using pairwise similarities (DCSS). The proposed method consists of two successive phases. In the first phase, we propose to form hypersphere-like groups of similar data points, i.e. one hypersphere per cluster, employing an autoencoder that is trained using cluster-specific losses. The hyper-spheres are formed in the autoencoder's latent space. In the second phase, we propose to employ pairwise similarities to create a $K$-dimensional space that is capable of accommodating more complex cluster distributions, hence providing more accurate clustering performance. $K$ is the number of clusters. The autoencoder's latent space obtained in the first phase is used as the input of the second phase. The effectiveness of both phases is demonstrated on seven benchmark datasets by conducting a rigorous set of experiments. | [
"['Mohammadreza Sadeghi' 'Narges Armanfard']"
]
|
null | null | 2405.03615 | null | null | http://arxiv.org/pdf/2405.03615v1 | 2024-05-06T16:32:01Z | 2024-05-06T16:32:01Z | Nonnegative Matrix Factorization in Dimensionality Reduction: A Survey | Dimensionality Reduction plays a pivotal role in improving feature learning accuracy and reducing training time by eliminating redundant features, noise, and irrelevant data. Nonnegative Matrix Factorization (NMF) has emerged as a popular and powerful method for dimensionality reduction. Despite its extensive use, there remains a need for a comprehensive analysis of NMF in the context of dimensionality reduction. To address this gap, this paper presents a comprehensive survey of NMF, focusing on its applications in both feature extraction and feature selection. We introduce a classification of dimensionality reduction, enhancing understanding of the underlying concepts. Subsequently, we delve into a thorough summary of diverse NMF approaches used for feature extraction and selection. Furthermore, we discuss the latest research trends and potential future directions of NMF in dimensionality reduction, aiming to highlight areas that need further exploration and development. | [
"['Farid Saberi-Movahed' 'Kamal Berahman' 'Razieh Sheikhpour' 'Yuefeng Li'\n 'Shirui Pan']"
]
|
null | null | 2405.03624 | null | null | http://arxiv.org/pdf/2405.03624v1 | 2024-05-06T16:41:52Z | 2024-05-06T16:41:52Z | $ε$-Policy Gradient for Online Pricing | Combining model-based and model-free reinforcement learning approaches, this paper proposes and analyzes an $\epsilon$-policy gradient algorithm for the online pricing learning task. The algorithm extends the $\epsilon$-greedy algorithm by replacing greedy exploitation with a gradient descent step and facilitates learning via model inference. We optimize the regret of the proposed algorithm by quantifying the exploration cost in terms of the exploration probability $\epsilon$ and the exploitation cost in terms of the gradient descent optimization and gradient estimation errors. The algorithm achieves an expected regret of order $\mathcal{O}(\sqrt{T})$ (up to a logarithmic factor) over $T$ trials. | [
"['Lukasz Szpruch' 'Tanut Treetanthiploet' 'Yufei Zhang']"
]
|
null | null | 2405.03636 | null | null | http://arxiv.org/pdf/2405.03636v1 | 2024-05-06T16:55:20Z | 2024-05-06T16:55:20Z | Federated Learning Privacy: Attacks, Defenses, Applications, and Policy
Landscape - A Survey | Deep learning has shown incredible potential across a vast array of tasks and accompanying this growth has been an insatiable appetite for data. However, a large amount of data needed for enabling deep learning is stored on personal devices and recent concerns on privacy have further highlighted challenges for accessing such data. As a result, federated learning (FL) has emerged as an important privacy-preserving technology enabling collaborative training of machine learning models without the need to send the raw, potentially sensitive, data to a central server. However, the fundamental premise that sending model updates to a server is privacy-preserving only holds if the updates cannot be "reverse engineered" to infer information about the private training data. It has been shown under a wide variety of settings that this premise for privacy does {\em not} hold. In this survey paper, we provide a comprehensive literature review of the different privacy attacks and defense methods in FL. We identify the current limitations of these attacks and highlight the settings in which FL client privacy can be broken. We dissect some of the successful industry applications of FL and draw lessons for future successful adoption. We survey the emerging landscape of privacy regulation for FL. We conclude with future directions for taking FL toward the cherished goal of generating accurate models while preserving the privacy of the data from its participants. | [
"['Joshua C. Zhao' 'Saurabh Bagchi' 'Salman Avestimehr' 'Kevin S. Chan'\n 'Somali Chaterji' 'Dimitris Dimitriadis' 'Jiacheng Li' 'Ninghui Li'\n 'Arash Nourian' 'Holger R. Roth']"
]
|
null | null | 2405.03637 | null | null | http://arxiv.org/pdf/2405.03637v1 | 2024-05-06T16:55:30Z | 2024-05-06T16:55:30Z | Collage: Light-Weight Low-Precision Strategy for LLM Training | Training large models is plagued by intense compute cost and limited hardware memory. A practical solution is low-precision representation, but it is troubled by loss in numerical accuracy and unstable training, rendering the model less useful. We argue that low-precision floating points can perform well provided the error is properly compensated at the critical locations in the training process. We propose Collage, which utilizes multi-component float representation in low-precision to accurately perform operations with numerical errors accounted for. To understand the impact of imprecision on training, we propose a simple and novel metric which tracks the lost information during training as well as differentiates various precision strategies. Our method works with commonly used low-precision formats such as half-precision ($16$-bit floating points) and can be naturally extended to work with even lower precision such as $8$-bit. Experimental results show that pre-training using Collage removes the requirement of using $32$-bit floating-point copies of the model and attains similar/better training performance compared to the $(16, 32)$-bit mixed-precision strategy, with up to $3.7\times$ speedup and $\sim 15\%$ to $23\%$ less memory usage in practice. | [
"['Tao Yu' 'Gaurav Gupta' 'Karthick Gopalswamy' 'Amith Mamidala' 'Hao Zhou'\n 'Jeffrey Huynh' 'Youngsuk Park' 'Ron Diamant' 'Anoop Deoras' 'Luke Huan']"
]
|
null | null | 2405.03642 | null | null | http://arxiv.org/pdf/2405.03642v1 | 2024-05-06T17:06:11Z | 2024-05-06T17:06:11Z | Classification of Breast Cancer Histopathology Images using a Modified
Supervised Contrastive Learning Method | Deep neural networks have reached remarkable achievements in medical image processing tasks, specifically classifying and detecting various diseases. However, when confronted with limited data, these networks face a critical vulnerability, often succumbing to overfitting by excessively memorizing the limited information available. This work addresses the challenge mentioned above by improving the supervised contrastive learning method to reduce the impact of false positives. Unlike most existing methods that rely predominantly on fully supervised learning, our approach leverages the advantages of self-supervised learning in conjunction with employing the available labeled data. We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images, and demonstrate an increase in classification accuracy by 1.45% at the image level and 1.42% at the patient level compared to the state-of-the-art method. This improvement corresponds to 93.63% absolute accuracy, highlighting our approach's effectiveness in leveraging data properties to learn a more appropriate representation space. | [
"['Matina Mahdizadeh Sani' 'Ali Royat' 'Mahdieh Soleymani Baghshah']"
]
|
null | null | 2405.03649 | null | null | http://arxiv.org/pdf/2405.03649v1 | 2024-05-06T17:12:21Z | 2024-05-06T17:12:21Z | Learning Robust Classifiers with Self-Guided Spurious Correlation
Mitigation | Deep neural classifiers tend to rely on spurious correlations between spurious attributes of inputs and targets to make predictions, which could jeopardize their generalization capability. Training classifiers robust to spurious correlations typically relies on annotations of spurious correlations in data, which are often expensive to get. In this paper, we tackle an annotation-free setting and propose a self-guided spurious correlation mitigation framework. Our framework automatically constructs fine-grained training labels tailored for a classifier obtained with empirical risk minimization to improve its robustness against spurious correlations. The fine-grained training labels are formulated with different prediction behaviors of the classifier identified in a novel spuriousness embedding space. We construct the space with automatically detected conceptual attributes and a novel spuriousness metric which measures how likely a class-attribute correlation is exploited for predictions. We demonstrate that training the classifier to distinguish different prediction behaviors reduces its reliance on spurious correlations without knowing them a priori and outperforms prior methods on five real-world datasets. | [
"['Guangtao Zheng' 'Wenqian Ye' 'Aidong Zhang']"
]
|
null | null | 2405.03650 | null | null | http://arxiv.org/pdf/2405.03650v2 | 2024-06-11T17:12:26Z | 2024-05-06T17:14:09Z | Generated Contents Enrichment | In this paper, we investigate a novel artificial intelligence generation task, termed generated contents enrichment (GCE). Different from the conventional artificial intelligence content generation task, which enriches the given textual description implicitly with limited semantics to generate visually real content, our proposed GCE strives to perform content enrichment explicitly in both the visual and textual domains, so that the enriched contents are visually real, structurally reasonable, and semantically abundant. To solve GCE, we propose a deep end-to-end method that explicitly explores the semantics and inter-semantic relationships during the enrichment. Specifically, we first model the input description as a semantic graph, wherein each node represents an object and each edge corresponds to an inter-object relationship. We then adopt Graph Convolutional Networks on top of the input scene description to predict the enriching objects and their relationships with the input objects. Finally, the enriched description is fed into an image synthesis model to carry out visual content generation. Our experiments conducted on the Visual Genome dataset exhibit promising and visually plausible results. | [
"['Mahdi Naseri' 'Jiayan Qiu' 'Zhou Wang']"
]
|
null | null | 2405.03651 | null | null | http://arxiv.org/pdf/2405.03651v1 | 2024-05-06T17:14:34Z | 2024-05-06T17:14:34Z | Adaptive Retrieval and Scalable Indexing for k-NN Search with
Cross-Encoders | Cross-encoder (CE) models which compute similarity by jointly encoding a query-item pair perform better than embedding-based models (dual-encoders) at estimating query-item relevance. Existing approaches perform k-NN search with CE by approximating the CE similarity with a vector embedding space fit either with dual-encoders (DE) or CUR matrix factorization. DE-based retrieve-and-rerank approaches suffer from poor recall on new domains and the retrieval with DE is decoupled from the CE. While CUR-based approaches can be more accurate than the DE-based approach, they require a prohibitively large number of CE calls to compute item embeddings, thus making it impractical for deployment at scale. In this paper, we address these shortcomings with our proposed sparse-matrix factorization based method that efficiently computes latent query and item embeddings to approximate CE scores and performs k-NN search with the approximate CE similarity. We compute item embeddings offline by factorizing a sparse matrix containing query-item CE scores for a set of train queries. Our method produces a high-quality approximation while requiring only a fraction of CE calls as compared to CUR-based methods, and allows for leveraging DE to initialize the embedding space while avoiding compute- and resource-intensive finetuning of DE via distillation. At test time, the item embeddings remain fixed and retrieval occurs over rounds, alternating between a) estimating the test query embedding by minimizing error in approximating CE scores of items retrieved thus far, and b) using the updated test query embedding for retrieving more items. Our k-NN search method improves recall by up to 5% (k=1) and 54% (k=100) over DE-based approaches. Additionally, our indexing approach achieves a speedup of up to 100x over CUR-based and 5x over DE distillation methods, while matching or improving k-NN search recall over baselines. | [
"['Nishant Yadav' 'Nicholas Monath' 'Manzil Zaheer' 'Rob Fergus'\n 'Andrew McCallum']"
]
|
null | null | 2405.03658 | null | null | http://arxiv.org/pdf/2405.03658v1 | 2024-05-06T17:33:58Z | 2024-05-06T17:33:58Z | A review on data-driven constitutive laws for solids | This review article highlights state-of-the-art data-driven techniques to discover, encode, surrogate, or emulate constitutive laws that describe the path-independent and path-dependent response of solids. Our objective is to provide an organized taxonomy to a large spectrum of methodologies developed in the past decades and to discuss the benefits and drawbacks of the various techniques for interpreting and forecasting mechanics behavior across different scales. Distinguishing between machine-learning-based and model-free methods, we further categorize approaches based on their interpretability and on their learning process/type of required data, while discussing the key problems of generalization and trustworthiness. We attempt to provide a road map of how these can be reconciled in a data-availability-aware context. We also touch upon relevant aspects such as data sampling techniques, design of experiments, verification, and validation. | [
"['Jan Niklas Fuhg' 'Govinda Anantha Padmanabha' 'Nikolaos Bouklas'\n 'Bahador Bahmani' 'WaiChing Sun' 'Nikolaos N. Vlassis' 'Moritz Flaschel'\n 'Pietro Carrara' 'Laura De Lorenzis']"
]
|
null | null | 2405.03661 | null | null | http://arxiv.org/pdf/2405.03661v1 | 2024-05-06T17:38:20Z | 2024-05-06T17:38:20Z | Competitive strategies to use "warm start" algorithms with predictions | We consider the problem of learning and using predictions for warm start algorithms with predictions. In this setting, an algorithm is given an instance of a problem, and a prediction of the solution. The runtime of the algorithm is bounded by the distance from the predicted solution to the true solution of the instance. Previous work has shown that when instances are drawn iid from some distribution, it is possible to learn an approximately optimal fixed prediction (Dinitz et al., NeurIPS 2021), and in the adversarial online case, it is possible to compete with the best fixed prediction in hindsight (Khodak et al., NeurIPS 2022). In this work we give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions $\mathbf{P}$. That is, the "optimal offline cost" to solve an instance with respect to $\mathbf{P}$ is the distance from the true solution to the closest member of $\mathbf{P}$. This is analogous to the $k$-medians objective function. In the distributional setting, we show a simple strategy that incurs cost that is at most an $O(k)$ factor worse than the optimal offline cost. We then show a way to leverage learnable coarse information, in the form of partitions of the instance space into groups of "similar" instances, that allows us to potentially avoid this $O(k)$ factor. Finally, we consider an online version of the problem, where we compete against offline strategies that are allowed to maintain a moving set of $k$ predictions or "trajectories," and are charged for how much the predictions move. We give an algorithm that does at most $O(k^4 \ln^2 k)$ times as much work as any offline strategy of $k$ trajectories. This algorithm is deterministic (robust to an adaptive adversary), and oblivious to the setting of $k$. Thus the guarantee holds for all $k$ simultaneously. | [
"['Vaidehi Srinivas' 'Avrim Blum']"
]
|
null | null | 2405.03664 | null | null | http://arxiv.org/pdf/2405.03664v2 | 2024-06-03T02:50:35Z | 2024-05-06T17:41:13Z | A New Robust Partial $p$-Wasserstein-Based Metric for Comparing
Distributions | The $2$-Wasserstein distance is sensitive to minor geometric differences between distributions, making it a very powerful dissimilarity metric. However, due to this sensitivity, a small outlier mass can also cause a significant increase in the $2$-Wasserstein distance between two similar distributions. Similarly, sampling discrepancy can cause the empirical $2$-Wasserstein distance on $n$ samples in $\mathbb{R}^2$ to converge to the true distance at a rate of $n^{-1/4}$, which is significantly slower than the rate of $n^{-1/2}$ for $1$-Wasserstein distance. We introduce a new family of distances parameterized by $k \ge 0$, called $k$-RPW that is based on computing the partial $2$-Wasserstein distance. We show that (1) $k$-RPW satisfies the metric properties, (2) $k$-RPW is robust to small outlier mass while retaining the sensitivity of $2$-Wasserstein distance to minor geometric differences, and (3) when $k$ is a constant, $k$-RPW distance between empirical distributions on $n$ samples in $\mathbb{R}^2$ converges to the true distance at a rate of $n^{-1/3}$, which is faster than the convergence rate of $n^{-1/4}$ for the $2$-Wasserstein distance. Using the partial $p$-Wasserstein distance, we extend our distance to any $p \in [1,\infty]$. By setting parameters $k$ or $p$ appropriately, we can reduce our distance to the total variation, $p$-Wasserstein, and the Lévy-Prokhorov distances. Experiments show that our distance function achieves higher accuracy in comparison to the $1$-Wasserstein, $2$-Wasserstein, and TV distances for image retrieval tasks on noisy real-world data sets. | [
"['Sharath Raghvendra' 'Pouyan Shirzadian' 'Kaiyi Zhang']"
]
|
null | null | 2405.03667 | null | null | http://arxiv.org/pdf/2405.03667v1 | 2024-05-06T17:43:39Z | 2024-05-06T17:43:39Z | Fault Detection and Monitoring using an Information-Driven Strategy:
Method, Theory, and Application | The ability to detect when a system undergoes an incipient fault is of paramount importance in preventing a critical failure. In this work, we propose an information-driven fault detection method based on a novel concept drift detector. The method is tailored to identifying drifts in input-output relationships of additive noise models (i.e., model drifts) and is based on a distribution-free mutual information (MI) estimator. Our scheme does not require prior faulty examples and can be applied distribution-free over a large class of system models. Our core contributions are twofold. First, we demonstrate the connection between fault detection, model drift detection, and testing independence between two random variables. Second, we prove several theoretical properties of the proposed MI-based fault detection scheme: (i) strong consistency, (ii) exponentially fast detection of the non-faulty case, and (iii) control of both significance levels and power of the test. To conclude, we validate our theory with synthetic data and the benchmark dataset N-CMAPSS of aircraft turbofan engines. These empirical results support the usefulness of our methodology in many practical and realistic settings, and the theoretical results show performance guarantees that other methods cannot offer. | [
"['Camilo Ramírez' 'Jorge F. Silva' 'Ferhat Tamssaouet' 'Tomás Rojas'\n 'Marcos E. Orchard']"
]
|
null | null | 2405.03672 | null | null | http://arxiv.org/pdf/2405.03672v3 | 2024-07-01T15:57:59Z | 2024-05-06T17:48:24Z | Cutting through buggy adversarial example defenses: fixing 1 line of
code breaks Sabre | Sabre is a defense to adversarial examples that was accepted at IEEE S&P 2024. We first reveal significant flaws in the evaluation that point to clear signs of gradient masking. We then show the cause of this gradient masking: a bug in the original evaluation code. By fixing a single line of code in the original repository, we reduce Sabre's robust accuracy to 0%. In response to this, the authors modify the defense and introduce a new defense component not described in the original paper. But this fix contains a second bug; modifying one more line of code reduces robust accuracy to below baseline levels. After we released the first version of our paper online, the authors introduced another change to the defense; by commenting out one line of code during attack we reduce the robust accuracy to 0% again. | [
"['Nicholas Carlini']"
]
|
null | null | 2405.03676 | null | null | http://arxiv.org/pdf/2405.03676v1 | 2024-05-06T17:52:04Z | 2024-05-06T17:52:04Z | Why is SAM Robust to Label Noise? | Sharpness-Aware Minimization (SAM) is best known for achieving state-of-the-art performance on natural image and language tasks. However, its most pronounced improvements (of tens of percent) arise in the presence of label noise. Understanding SAM's label noise robustness requires a departure from characterizing the robustness of minima lying in "flatter" regions of the loss landscape. In particular, the peak performance under label noise occurs with early stopping, far before the loss converges. We decompose SAM's robustness into two effects: one induced by changes to the logit term and the other induced by changes to the network Jacobian. The first can be observed in linear logistic regression where SAM provably up-weights the gradient contribution from clean examples. Although this explicit up-weighting is also observable in neural networks, when we intervene and modify SAM to remove this effect, surprisingly, we see no visible degradation in performance. We infer that SAM's effect in deeper networks is instead explained entirely by the effect SAM has on the network Jacobian. We theoretically derive the implicit regularization induced by this Jacobian effect in two-layer linear networks. Motivated by our analysis, we see that cheaper alternatives to SAM that explicitly induce these regularization effects largely recover the benefits in deep networks trained on real-world datasets. | [
"['Christina Baek' 'Zico Kolter' 'Aditi Raghunathan']"
]
|
null | null | 2405.03685 | null | null | http://arxiv.org/pdf/2405.03685v1 | 2024-05-06T17:57:27Z | 2024-05-06T17:57:27Z | Language-Image Models with 3D Understanding | Multi-modal large language models (MLLMs) have shown incredible capabilities in a variety of 2D vision and language tasks. We extend MLLMs' perceptual capabilities to ground and reason about images in 3-dimensional space. To that end, we first develop a large-scale pre-training dataset for 2D and 3D called LV3D by combining multiple existing 2D and 3D recognition datasets under a common task formulation: as multi-turn question-answering. Next, we introduce a new MLLM named Cube-LLM and pre-train it on LV3D. We show that pure data scaling yields strong 3D perception capability without 3D-specific architectural designs or training objectives. Cube-LLM exhibits intriguing properties similar to LLMs: (1) Cube-LLM can apply chain-of-thought prompting to improve 3D understanding from 2D context information. (2) Cube-LLM can follow complex and diverse instructions and adapt to versatile input and output formats. (3) Cube-LLM can be visually prompted, for example with a 2D box or a set of candidate 3D boxes from specialists. Our experiments on outdoor benchmarks demonstrate that Cube-LLM significantly outperforms existing baselines by 21.3 points of AP-BEV on the Talk2Car dataset for 3D grounded reasoning and 17.7 points on the DriveLM dataset for complex reasoning about driving scenarios, respectively. Cube-LLM also shows competitive results in general MLLM benchmarks such as refCOCO for 2D grounding with (87.0) average score, as well as visual question answering benchmarks such as VQAv2, GQA, SQA, POPE, etc. for complex reasoning. Our project is available at https://janghyuncho.github.io/Cube-LLM. | [
"['Jang Hyun Cho' 'Boris Ivanovic' 'Yulong Cao' 'Edward Schmerling'\n 'Yue Wang' 'Xinshuo Weng' 'Boyi Li' 'Yurong You' 'Philipp Krähenbühl'\n 'Yan Wang' 'Marco Pavone']"
]
|
null | null | 2405.03688 | null | null | http://arxiv.org/pdf/2405.03688v1 | 2024-05-06T17:59:07Z | 2024-05-06T17:59:07Z | Large Language Models Reveal Information Operation Goals, Tactics, and
Narrative Frames | Adversarial information operations can destabilize societies by undermining fair elections, manipulating public opinions on policies, and promoting scams. Despite their widespread occurrence and potential impacts, our understanding of influence campaigns is limited by manual analysis of messages and subjective interpretation of their observable behavior. In this paper, we explore whether these limitations can be mitigated with large language models (LLMs), using GPT-3.5 as a case study for coordinated campaign annotation. We first use GPT-3.5 to scrutinize 126 identified information operations spanning over a decade. We utilize a number of metrics to quantify the close (if imperfect) agreement between LLM and ground truth descriptions. We next extract coordinated campaigns from two large multilingual datasets from X (formerly Twitter) that respectively discuss the 2022 French election and the 2023 Balikaran Philippine-U.S. military exercise. For each coordinated campaign, we use GPT-3.5 to analyze posts related to a specific concern and extract goals, tactics, and narrative frames, both before and after critical events (such as the date of an election). While GPT-3.5 sometimes disagrees with subjective interpretation, its ability to summarize and interpret demonstrates LLMs' potential to extract higher-order indicators from text to provide a more complete picture of the information campaigns compared to previous methods. | [
"['Keith Burghardt' 'Kai Chen' 'Kristina Lerman']"
]
|
null | null | 2405.03701 | null | null | http://arxiv.org/pdf/2405.03701v2 | 2024-06-21T02:45:04Z | 2024-05-02T16:05:02Z | QxEAI: Quantum-like evolutionary algorithm for automated probabilistic
forecasting | Forecasting, the estimation of future events, is crucial for business and decision-making. This paper proposes QxEAI, a methodology that produces a probabilistic forecast that utilizes a quantum-like evolutionary algorithm based on training a quantum-like logic decision tree and a classical value tree on a small number of related time series. We demonstrate how the application of our quantum-like evolutionary algorithm to forecasting can overcome the challenges faced by classical and other machine learning approaches. By using three real-world datasets (Dow Jones Index, retail sales, gas consumption), we show how our methodology produces accurate forecasts while requiring little to no manual work. | [
"['Kevin Xin' 'Lizhi Xin']"
]
|
null | null | 2405.03702 | null | null | http://arxiv.org/pdf/2405.03702v2 | 2024-05-08T16:59:05Z | 2024-05-02T23:53:29Z | Leafy Spurge Dataset: Real-world Weed Classification Within Aerial Drone
Imagery | Invasive plant species are detrimental to the ecology of both agricultural and wildland areas. Euphorbia esula, or leafy spurge, is one such plant that has spread through much of North America from Eastern Europe. When paired with contemporary computer vision systems, unmanned aerial vehicles, or drones, offer the means to track expansion of problem plants, such as leafy spurge, and improve chances of controlling these weeds. We gathered a dataset of leafy spurge presence and absence in grasslands of western Montana, USA, then surveyed these areas with a commercial drone. We trained image classifiers on these data, and our best performing model, a pre-trained DINOv2 vision transformer, identified leafy spurge with 0.84 accuracy (test set). This result indicates that classification of leafy spurge is tractable, but not solved. We release this unique dataset of labelled and unlabelled, aerial drone imagery for the machine learning community to explore. Improving classification performance of leafy spurge would benefit the fields of ecology, conservation, and remote sensing alike. Code and data are available at our website: leafy-spurge-dataset.github.io. | [
"['Kyle Doherty' 'Max Gurinas' 'Erik Samsoe' 'Charles Casper' 'Beau Larkin'\n 'Philip Ramsey' 'Brandon Trabucco' 'Ruslan Salakhutdinov']"
]
|
null | null | 2405.03706 | null | null | http://arxiv.org/pdf/2405.03706v1 | 2024-05-03T19:11:54Z | 2024-05-03T19:11:54Z | Improving Graph Machine Learning Performance Through Feature
Augmentation Based on Network Control Theory | Network control theory (NCT) offers a robust analytical framework for understanding the influence of network topology on dynamic behaviors, enabling researchers to decipher how certain patterns of external control measures can steer system dynamics towards desired states. Distinguished from other structure-function methodologies, NCT's predictive capabilities can be coupled with deploying Graph Neural Networks (GNNs), which have demonstrated exceptional utility in various network-based learning tasks. However, the performance of GNNs heavily relies on the expressiveness of node features, and the lack of node features can greatly degrade their performance. Furthermore, many real-world systems may lack node-level information, posing a challenge for GNNs. To tackle this challenge, we introduce a novel approach, NCT-based Enhanced Feature Augmentation (NCT-EFA), that assimilates average controllability, along with other centrality indices, into the feature augmentation pipeline to enhance GNN performance. Our evaluation of NCT-EFA on six benchmark GNN models across two experimental settings (solely employing average controllability, and in combination with additional centrality metrics) showcases performance improvements reaching as high as 11%. Our results demonstrate that incorporating NCT into feature enrichment can substantively extend the applicability and heighten the performance of GNNs in scenarios where node-level information is unavailable. | [
"['Anwar Said' 'Obaid Ullah Ahmad' 'Waseem Abbas' 'Mudassir Shabbir'\n 'Xenofon Koutsoukos']"
]
|
null | null | 2405.03708 | null | null | http://arxiv.org/pdf/2405.03708v3 | 2024-05-13T15:30:42Z | 2024-05-03T21:48:23Z | Delta Tensor: Efficient Vector and Tensor Storage in Delta Lake | The exponential growth of artificial intelligence (AI) and machine learning (ML) applications has necessitated the development of efficient storage solutions for vector and tensor data. This paper presents a novel approach for tensor storage in a Lakehouse architecture using Delta Lake. By applying the multidimensional array storage strategy from array databases and sparse encoding methods to Delta Lake tables, experiments show that this approach achieves notable improvements in both space and time efficiency when compared to traditional serialization of tensors. These results provide valuable insights for the development and implementation of optimized vector and tensor storage solutions in data-intensive applications, contributing to the evolution of efficient data management practices in AI and ML domains in cloud-native environments. | [
"['Zhiwei Bao' 'Liu Liao-Liao' 'Zhiyu Wu' 'Yifan Zhou' 'Dan Fan'\n 'Michal Aibin' 'Yvonne Coady' 'Andrew Brownsword']"
]
|
null | null | 2405.03709 | null | null | http://arxiv.org/pdf/2405.03709v2 | 2024-05-14T14:21:10Z | 2024-05-03T23:06:31Z | Generating Probabilistic Scenario Programs from Natural Language | For cyber-physical systems (CPS), including robotics and autonomous vehicles, mass deployment has been hindered by fatal errors that occur when operating in rare events. To replicate rare events such as vehicle crashes, many companies have created logging systems and employed crash reconstruction experts to meticulously recreate these valuable events in simulation. However, in these methods, "what if" questions are not easily formulated and answered. We present ScenarioNL, an AI System for creating scenario programs from natural language. Specifically, we generate these programs from police crash reports. Reports normally contain uncertainty about the exact details of the incidents which we represent through a Probabilistic Programming Language (PPL), Scenic. By using Scenic, we can clearly and concisely represent uncertainty and variation over CPS behaviors, properties, and interactions. We demonstrate how commonplace prompting techniques with the best Large Language Models (LLM) are incapable of reasoning about probabilistic scenario programs and generating code for low-resource languages such as Scenic. Our system is comprised of several LLMs chained together with several kinds of prompting strategies, a compiler, and a simulator. We evaluate our system on publicly available autonomous vehicle crash reports in California from the last five years and share insights into how we generate code that is both semantically meaningful and syntactically correct. | [
"['Karim Elmaaroufi' 'Devan Shanker' 'Ana Cismaru'\n 'Marcell Vazquez-Chanlatte' 'Alberto Sangiovanni-Vincentelli'\n 'Matei Zaharia' 'Sanjit A. Seshia']"
]
|
null | null | 2405.03710 | null | null | http://arxiv.org/pdf/2405.03710v1 | 2024-05-03T23:25:15Z | 2024-05-03T23:25:15Z | Automating the Enterprise with Foundation Models | Automating enterprise workflows could unlock $4 trillion/year in productivity gains. Despite being of interest to the data management community for decades, the ultimate vision of end-to-end workflow automation has remained elusive. Current solutions rely on process mining and robotic process automation (RPA), in which a bot is hard-coded to follow a set of predefined rules for completing a workflow. Through case studies of a hospital and large B2B enterprise, we find that the adoption of RPA has been inhibited by high set-up costs (12-18 months), unreliable execution (60% initial accuracy), and burdensome maintenance (requiring multiple FTEs). Multimodal foundation models (FMs) such as GPT-4 offer a promising new approach for end-to-end workflow automation given their generalized reasoning and planning abilities. To study these capabilities we propose ECLAIR, a system to automate enterprise workflows with minimal human supervision. We conduct initial experiments showing that multimodal FMs can address the limitations of traditional RPA with (1) near-human-level understanding of workflows (93% accuracy on a workflow understanding task) and (2) instant set-up with minimal technical barrier (based solely on a natural language description of a workflow, ECLAIR achieves end-to-end completion rates of 40%). We identify human-AI collaboration, validation, and self-improvement as open challenges, and suggest ways they can be solved with data management techniques. Code is available at: https://github.com/HazyResearch/eclair-agents | [
"['Michael Wornow' 'Avanika Narayan' 'Krista Opsahl-Ong' 'Quinn McIntyre'\n 'Nigam H. Shah' 'Christopher Re']"
]
|
null | null | 2405.03711 | null | null | http://arxiv.org/abs/2405.03711v1 | 2024-05-04T06:18:15Z | 2024-05-04T06:18:15Z | Guidance Design for Escape Flight Vehicle Using Evolution Strategy
Enhanced Deep Reinforcement Learning | Guidance commands of flight vehicles are a series of data sets with fixed time intervals, thus guidance design constitutes a sequential decision problem and satisfies the basic conditions for using deep reinforcement learning (DRL). In this paper, we consider the scenario where the escape flight vehicle (EFV) generates guidance commands based on DRL and the pursuit flight vehicle (PFV) generates guidance commands based on the proportional navigation method. For the EFV, the objective of the guidance design entails progressively maximizing the residual velocity, subject to the constraint imposed by the given evasion distance. Thus an irregular dynamic max-min problem of extremely large scale is formulated, where the time instant when the optimal solution can be attained is uncertain and the optimum solution depends on all the intermediate guidance commands generated before. For solving this problem, a two-step strategy is conceived. In the first step, we use the proximal policy optimization (PPO) algorithm to generate the guidance commands of the EFV. The results obtained by PPO in the global search space are coarse, despite the fact that the reward function, the neural network parameters and the learning rate are designed elaborately. Therefore, in the second step, we propose to invoke the evolution strategy (ES) based algorithm, which uses the result of PPO as the initial value, to further improve the quality of the solution by searching in the local space. Simulation results demonstrate that the proposed guidance design method based on the PPO algorithm is capable of achieving a residual velocity of 67.24 m/s, higher than the residual velocities achieved by the benchmark soft actor-critic and deep deterministic policy gradient algorithms. Furthermore, the proposed ES-enhanced PPO algorithm outperforms the PPO algorithm by 2.7%, achieving a residual velocity of 69.04 m/s. | [
"['Xiao Hu' 'Tianshu Wang' 'Min Gong' 'Shaoshi Yang']"
]
|
null | null | 2405.03712 | null | null | http://arxiv.org/pdf/2405.03712v1 | 2024-05-04T11:22:30Z | 2024-05-04T11:22:30Z | Your Network May Need to Be Rewritten: Network Adversarial Based on
High-Dimensional Function Graph Decomposition | In the past, research on a single low-dimensional activation function in networks has led to internal covariate shift and gradient deviation problems. A relatively small research area is how to use function combinations to provide property completion for a single activation function application. We propose a network adversarial method to address the aforementioned challenges. This is the first method to use different activation functions in a network. Based on the existing activation functions in the current network, an adversarial function with opposite derivative image properties is constructed, and the two are alternately used as activation functions for different network layers. For complex situations, we propose a method of high-dimensional function graph decomposition (HD-FGD), which divides it into different parts and then passes through a linear layer. After integrating the inverse of the partial derivatives of each decomposed term, we obtain its adversarial function by referring to the computational rules of the decomposition process. The use of network adversarial methods or the use of HD-FGD alone can effectively replace the traditional MLP+activation function mode. Through the above methods, we have achieved a substantial improvement over standard activation functions regarding both training efficiency and predictive accuracy. The article addresses the adversarial issues associated with several prevalent activation functions, presenting alternatives that can be seamlessly integrated into existing models without any adverse effects. We will release the code as open source after the conference review process is completed. | [
"['Xiaoyan Su' 'Yinghao Zhu' 'Run Li']"
]
|
null | null | 2405.03713 | null | null | http://arxiv.org/pdf/2405.03713v1 | 2024-05-04T14:02:52Z | 2024-05-04T14:02:52Z | Improve Cross-Modality Segmentation by Treating MRI Images as Inverted
CT Scans | Computed tomography (CT) segmentation models frequently include classes that are not currently supported by magnetic resonance imaging (MRI) segmentation models. In this study, we show that a simple image inversion technique can significantly improve the segmentation quality of CT segmentation models on MRI data, by using the TotalSegmentator model, applied to T1-weighted MRI images, as example. Image inversion is straightforward to implement and does not require dedicated graphics processing units (GPUs), thus providing a quick alternative to complex deep modality-transfer models for generating segmentation masks for MRI data. | [
"['Hartmut Häntze' 'Lina Xu' 'Leonhard Donle' 'Felix J. Dorfner'\n 'Alessa Hering' 'Lisa C. Adams' 'Keno K. Bressem']"
]
|