categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2407.00671
null
null
http://arxiv.org/pdf/2407.00671v1
2024-06-30T11:33:49Z
2024-06-30T11:33:49Z
Establishing Deep InfoMax as an effective self-supervised learning methodology in materials informatics
The scarcity of property labels remains a key challenge in materials informatics, whereas materials data without property labels are abundant in comparison. By pretraining supervised property prediction models on self-supervised tasks that depend only on the "intrinsic information" available in any Crystallographic Information File (CIF), there is potential to leverage the large amount of crystal data without property labels to improve property prediction results on small datasets. We apply Deep InfoMax as a self-supervised machine learning framework for materials informatics that explicitly maximises the mutual information between a point set (or graph) representation of a crystal and a vector representation suitable for downstream learning. This allows the pretraining of supervised models on large materials datasets without the need for property labels and without requiring the model to reconstruct the crystal from a representation vector. We investigate the benefits of Deep InfoMax pretraining implemented on the Site-Net architecture to improve the performance of downstream property prediction models with small amounts (<10^3) of data, a situation relevant to experimentally measured materials property databases. Using a property label masking methodology, where we perform self-supervised learning on larger supervised datasets and then train supervised models on a small subset of the labels, we isolate Deep InfoMax pretraining from the effects of distributional shift. We demonstrate performance improvements in the contexts of representation learning and transfer learning on the tasks of band gap and formation energy prediction. Having established the effectiveness of Deep InfoMax pretraining in a controlled environment, our findings provide a foundation for extending the approach to address practical challenges in materials informatics.
[ "['Michael Moran' 'Vladimir V. Gusev' 'Michael W. Gaultois'\n 'Dmytro Antypov' 'Matthew J. Rosseinsky']" ]
null
null
2407.00673
null
null
http://arxiv.org/pdf/2407.00673v1
2024-06-30T12:09:08Z
2024-06-30T12:09:08Z
TEAL: New Selection Strategy for Small Buffers in Experience Replay Class Incremental Learning
Continual Learning is an unresolved challenge, whose relevance increases when considering modern applications. Unlike the human brain, trained deep neural networks suffer from a phenomenon called Catastrophic Forgetting, where they progressively lose previously acquired knowledge upon learning new tasks. To mitigate this problem, numerous methods have been developed, many relying on replaying past exemplars during new task training. However, as the memory allocated for replay decreases, the effectiveness of these approaches diminishes. On the other hand, maintaining a large memory for the purpose of replay is inefficient and often impractical. Here we introduce TEAL, a novel approach for populating the memory with exemplars that can be integrated with various experience-replay methods and significantly enhances their performance on small memory buffers. We show that TEAL improves the average accuracy of the SOTA method XDER, as well as ER and ER-ACE, on several image recognition benchmarks with a small memory buffer of 1-3 exemplars per class in the final task. This confirms the hypothesis that when memory is scarce, it is best to prioritize the most typical data.
[ "['Shahar Shaul-Ariel' 'Daphna Weinshall']" ]
null
null
2407.00693
null
null
http://arxiv.org/pdf/2407.00693v1
2024-06-30T13:30:04Z
2024-06-30T13:30:04Z
BAPO: Base-Anchored Preference Optimization for Personalized Alignment in Large Language Models
While learning to align Large Language Models (LLMs) with human preferences has shown remarkable success, aligning these models to meet diverse user preferences presents further challenges in preserving previous knowledge. This paper examines the impact of personalized preference optimization on LLMs, revealing that the extent of knowledge loss varies significantly with preference heterogeneity. Although previous approaches have utilized the KL constraint between the reference model and the policy model, we observe that they fail to maintain general knowledge and alignment when facing personalized preferences. To this end, we introduce Base-Anchored Preference Optimization (BAPO), a simple yet effective approach that utilizes the initial responses of the reference model to mitigate forgetting while accommodating personalized alignment. BAPO effectively adapts to diverse user preferences while minimally affecting global knowledge or general alignment. Our experiments demonstrate the efficacy of BAPO in various setups.
[ "['Gihun Lee' 'Minchan Jeong' 'Yujin Kim' 'Hojung Jung' 'Jaehoon Oh'\n 'Sangmook Kim' 'Se-Young Yun']" ]
null
null
2407.00696
null
null
http://arxiv.org/pdf/2407.00696v1
2024-06-30T13:37:06Z
2024-06-30T13:37:06Z
Graph in Graph Neural Network
Existing Graph Neural Networks (GNNs) are limited to processing graphs whose vertices are each represented by a vector or a single value, limiting their capability to describe complex objects. In this paper, we propose the first GNN (called the Graph in Graph Neural (GIG) Network) that can process graph-style data (called GIG samples) whose vertices are themselves represented by graphs. Given a set of graphs, or a data sample whose components can be represented by a set of graphs (called a multi-graph data sample), our GIG network starts with a GIG sample generation (GSG) module which encodes the input as a GIG sample, where each GIG vertex contains a graph. Then, a set of GIG hidden layers is stacked, each consisting of: (1) a GIG vertex-level updating (GVU) module that individually updates the graph in every GIG vertex based on its internal information; and (2) a global-level GIG sample updating (GGU) module that updates the graphs in all GIG vertices based on their relationships, making the updated GIG vertices globally context-aware. This way, both the internal cues within the graph contained in each GIG vertex and the relationships among GIG vertices can be utilized for downstream tasks. Experimental results demonstrate that our GIG network generalizes well not only to various generic graph analysis tasks but also to real-world multi-graph data analysis (e.g., human skeleton video-based action recognition), achieving new state-of-the-art results on 13 out of 14 evaluated datasets. Our code is publicly available at https://github.com/wangjs96/Graph-in-Graph-Neural-Network.
[ "['Jiongshu Wang' 'Jing Yang' 'Jiankang Deng' 'Hatice Gunes' 'Siyang Song']" ]
null
null
2407.00698
null
null
http://arxiv.org/pdf/2407.00698v1
2024-06-30T13:43:26Z
2024-06-30T13:43:26Z
NourishNet: Proactive Severity State Forecasting of Food Commodity Prices for Global Warning Systems
Price volatility in global food commodities is a critical signal indicating potential disruptions in the food market. Understanding forthcoming changes in these prices is essential for bolstering food security, particularly for nations at risk. The Food and Agriculture Organization of the United Nations (FAO) previously developed sophisticated statistical frameworks for the proactive prediction of food commodity prices, aiding in the creation of global early warning systems. These frameworks utilize food security indicators to produce accurate forecasts, thereby facilitating preparations against potential food shortages. Our research builds on these foundations by integrating robust price security indicators with cutting-edge deep learning (DL) methodologies to reveal complex interdependencies. DL techniques examine intricate dynamics among diverse factors affecting food prices. Through sophisticated time-series forecasting models coupled with a classification model, our approach enhances existing models to better support communities worldwide in advancing their food security initiatives.
[ "['Sydney Balboni' 'Grace Ivey' 'Brett Storoe' 'John Cisler' 'Tyge Plater'\n 'Caitlyn Grant' 'Ella Bruce' 'Benjamin Paulson']" ]
null
null
2407.00699
null
null
http://arxiv.org/pdf/2407.00699v1
2024-06-30T13:44:59Z
2024-06-30T13:44:59Z
Tackling Long-Horizon Tasks with Model-based Offline Reinforcement Learning
Model-based offline reinforcement learning (RL) is a compelling approach that addresses the challenge of learning from limited, static data by generating imaginary trajectories using learned models. However, it falls short in solving long-horizon tasks due to high bias in value estimation from model rollouts. In this paper, we introduce a novel model-based offline RL method, Lower Expectile Q-learning (LEQ), which enhances long-horizon task performance by mitigating the high bias in model-based value estimation via expectile regression of $\lambda$-returns. Our empirical results show that LEQ significantly outperforms previous model-based offline RL methods on long-horizon tasks, such as the D4RL AntMaze tasks, matching or surpassing the performance of model-free approaches. Our experiments demonstrate that expectile regression, $\lambda$-returns, and critic training on offline data are all crucial for addressing long-horizon tasks. Additionally, LEQ achieves performance comparable to the state-of-the-art model-based and model-free offline RL methods on the NeoRL benchmark and the D4RL MuJoCo Gym tasks.
[ "['Kwanyoung Park' 'Youngwoon Lee']" ]
null
null
2407.00706
null
null
http://arxiv.org/pdf/2407.00706v1
2024-06-30T14:16:27Z
2024-06-30T14:16:27Z
Sum-of-norms regularized Nonnegative Matrix Factorization
When applying nonnegative matrix factorization (NMF), generally the rank parameter is unknown. This rank in NMF, called the nonnegative rank, is usually estimated heuristically since computing its exact value is NP-hard. In this work, we propose an approximation method to estimate the rank while solving NMF on-the-fly. We use sum-of-norms (SON), a group-lasso structure that encourages pairwise similarity, to reduce the rank of a factor matrix for which the rank is overestimated at the beginning. On various datasets, SON-NMF is able to reveal the correct nonnegative rank of the data without any prior knowledge or tuning. SON-NMF is a nonconvex, nonsmooth, non-separable, non-proximable problem, so solving it is nontrivial. First, as rank estimation in NMF is NP-hard, the proposed approach does not enjoy a lower computational complexity. Using a graph-theoretic argument, we prove that the complexity of SON-NMF is almost irreducible. Second, the per-iteration cost of any algorithm solving SON-NMF is possibly high, which motivated us to propose a first-order BCD algorithm that approximately solves SON-NMF with a low per-iteration cost via the proximal average operator. Lastly, we propose a simple greedy method for post-processing. SON-NMF exhibits favourable features for applications. Besides the ability to automatically estimate the rank from data, SON-NMF can deal with rank-deficient data matrices and can detect weak components with small energy. Furthermore, in the application of hyperspectral imaging, SON-NMF handles the issue of spectral variability naturally.
[ "['Andersen Ang' 'Waqas Bin Hamed' 'Hans De Sterck']" ]
null
null
2407.00708
null
null
http://arxiv.org/pdf/2407.00708v1
2024-06-30T14:20:12Z
2024-06-30T14:20:12Z
Heterogeneous Graph Contrastive Learning with Spectral Augmentation
Heterogeneous graphs can well describe the complex entity relationships in the real world. For example, online shopping networks contain multiple types of entities, such as consumers and products, as well as multiple relationship types, such as purchasing and favoriting. Heterogeneous graph representation learning has attracted growing research attention because it shows strong application potential in real-world scenarios. However, existing heterogeneous graph models use data augmentation techniques that capture graph structure information only from the spatial topology, ignoring the information carried in the spectral dimension of the graph structure. To address the failure of heterogeneous graph representation learning methods to model spectral information, this paper introduces a spectral-enhanced heterogeneous graph contrastive learning model (SHCL) and proposes, for the first time in heterogeneous graph neural networks, a spectral augmentation algorithm. The proposed model learns an adaptive topology augmentation scheme from the heterogeneous graph itself, perturbing the structural information of the heterogeneous graph in the spectral dimension and ultimately improving the learning effect of the model. Experimental results on multiple real-world datasets demonstrate substantial advantages of the proposed model.
[ "['Jing Zhang' 'Xiaoqian Jiang' 'Yingjie Xie' 'Cangqi Zhou']" ]
null
null
2407.00710
null
null
http://arxiv.org/pdf/2407.00710v1
2024-06-30T14:21:32Z
2024-06-30T14:21:32Z
Weighted Missing Linear Discriminant Analysis: An Explainable Approach for Classification with Missing Data
As Artificial Intelligence (AI) models are gradually being adopted in real-life applications, the explainability of the model used is critical, especially in high-stakes areas such as medicine and finance. Among commonly used models, Linear Discriminant Analysis (LDA) is a widely used classification tool that is also explainable, thanks to its ability to model class distributions and maximize class separation through linear feature combinations. Nevertheless, real-world data is frequently incomplete, presenting significant challenges for classification tasks and model explanations. In this paper, we propose a novel approach to LDA under missing data, termed Weighted missing Linear Discriminant Analysis (WLDA), which classifies observations containing missing values directly and effectively, without imputation, by estimating the parameters on the incomplete data and using a weight matrix to penalize missing entries during classification. Furthermore, we analyze the theoretical properties and examine the explainability of the proposed technique in a comprehensive manner. Experimental results demonstrate that WLDA outperforms conventional methods by a significant margin, particularly in scenarios where missing values are present in both training and test sets.
[ "['Tuan L. Vo' 'Uyen Dang' 'Thu Nguyen']" ]
null
null
2407.00717
null
null
http://arxiv.org/pdf/2407.00717v1
2024-06-30T14:55:18Z
2024-06-30T14:55:18Z
Learning System Dynamics without Forgetting
Predicting the trajectories of systems with unknown dynamics (i.e., the governing rules) is crucial in various research fields, including physics and biology. This challenge has attracted significant attention from diverse communities. Most existing works focus on learning fixed system dynamics within one single system. However, real-world applications often involve multiple systems with different types of dynamics, or evolving systems with non-stationary dynamics (dynamics shifts). When data from those systems are continuously collected and sequentially fed to machine learning models for training, these models tend to be biased toward the most recently learned dynamics, leading to catastrophic forgetting of previously observed/learned system dynamics. To this end, we aim to learn system dynamics via continual learning. Specifically, we present a novel framework, Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics and encode the system-specific dynamics into binary masks over the model parameters. During the inference stage, the model can select the most confident mask based on the observational data to identify the system and predict future trajectories accordingly. Empirically, we systematically investigate the task configurations and compare the proposed MS-GODE with state-of-the-art techniques. More importantly, we construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics, significantly enriching the research field of machine learning for dynamic systems.
[ "['Xikun Zhang' 'Dongjin Song' 'Yushan Jiang' 'Yixin Chen' 'Dacheng Tao']" ]
null
null
2407.00719
null
null
http://arxiv.org/pdf/2407.00719v1
2024-06-30T15:00:49Z
2024-06-30T15:00:49Z
A Whole-Process Certifiably Robust Aggregation Method Against Backdoor Attacks in Federated Learning
Federated Learning (FL) has garnered widespread adoption across various domains such as finance, healthcare, and cybersecurity. Nonetheless, FL remains under significant threat from backdoor attacks, wherein malicious actors insert triggers into trained models, enabling them to perform certain tasks while still meeting FL's primary objectives. In response, robust aggregation methods have been proposed, which can be divided into three types: ex-ante, ex-durante, and ex-post methods. Given the complementary nature of these methods, combining all three types is promising yet unexplored. Such a combination is non-trivial because it requires leveraging their advantages while overcoming their disadvantages. Our study proposes a novel whole-process certifiably robust aggregation (WPCRA) method for FL, which enhances robustness against backdoor attacks across three phases: ex-ante, ex-durante, and ex-post. Moreover, since the current geometric median estimation method fails to consider differences among clients, we propose a novel weighted geometric median estimation algorithm (WGME). This algorithm estimates the geometric median of model updates from clients based on each client's weight, further improving the robustness of WPCRA against backdoor attacks. We also theoretically prove that WPCRA offers improved certified robustness guarantees with a larger certified radius. We evaluate the advantages of our methods based on the task of loan status prediction. Comparison with baselines shows that our methods significantly improve FL's robustness against backdoor attacks. This study contributes to the literature with a novel WPCRA method and a novel WGME algorithm. Our code is available at https://github.com/brick-brick/WPCRAM.
[ "['Anqi Zhou' 'Yezheng Liu' 'Yidong Chai' 'Hongyi Zhu' 'Xinyue Ge'\n 'Yuanchun Jiang' 'Meng Wang']" ]
null
null
2407.00730
null
null
http://arxiv.org/pdf/2407.00730v1
2024-06-30T15:38:38Z
2024-06-30T15:38:38Z
D-CDLF: Decomposition of Common and Distinctive Latent Factors for Multi-view High-dimensional Data
A typical approach to the joint analysis of multiple high-dimensional data views is to decompose each view's data matrix into three parts: a low-rank common-source matrix generated by common latent factors of all data views, a low-rank distinctive-source matrix generated by distinctive latent factors of the corresponding data view, and an additive noise matrix. Existing decomposition methods often focus on the uncorrelatedness between the common latent factors and distinctive latent factors, but inadequately address the equally necessary uncorrelatedness between distinctive latent factors from different data views. We propose a novel decomposition method, called Decomposition of Common and Distinctive Latent Factors (D-CDLF), to effectively achieve both types of uncorrelatedness for two-view data. We also discuss the estimation of the D-CDLF under high-dimensional settings.
[ "['Hai Shu']" ]
null
null
2407.00731
null
null
http://arxiv.org/pdf/2407.00731v1
2024-06-30T15:38:48Z
2024-06-30T15:38:48Z
Large Language Models Struggle in Token-Level Clinical Named Entity Recognition
Large Language Models (LLMs) have revolutionized various sectors, including healthcare where they are employed in diverse applications. Their utility is particularly significant in the context of rare diseases, where data scarcity, complexity, and specificity pose considerable challenges. In the clinical domain, Named Entity Recognition (NER) stands out as an essential task and it plays a crucial role in extracting relevant information from clinical texts. Despite the promise of LLMs, current research mostly concentrates on document-level NER, identifying entities in a more general context across entire documents, without extracting their precise location. Additionally, efforts have been directed towards adapting ChatGPT for token-level NER. However, there is a significant research gap when it comes to employing token-level NER for clinical texts, especially with the use of local open-source LLMs. This study aims to bridge this gap by investigating the effectiveness of both proprietary and local LLMs in token-level clinical NER. Essentially, we delve into the capabilities of these models through a series of experiments involving zero-shot prompting, few-shot prompting, retrieval-augmented generation (RAG), and instruction-fine-tuning. Our exploration reveals the inherent challenges LLMs face in token-level NER, particularly in the context of rare diseases, and suggests possible improvements for their application in healthcare. This research contributes to narrowing a significant gap in healthcare informatics and offers insights that could lead to a more refined application of LLMs in the healthcare sector.
[ "['Qiuhao Lu' 'Rui Li' 'Andrew Wen' 'Jinlian Wang' 'Liwei Wang'\n 'Hongfang Liu']" ]
null
null
2407.00735
null
null
http://arxiv.org/pdf/2407.00735v1
2024-06-30T15:48:57Z
2024-06-30T15:48:57Z
Generative prediction of flow field based on the diffusion model
We propose a geometry-to-flow diffusion model that utilizes the input of obstacle shape to predict a flow field past the obstacle. The model is based on a learnable Markov transition kernel to recover the data distribution from the Gaussian distribution. The Markov process is conditioned on the obstacle geometry, estimating the noise to be removed at each step, implemented via a U-Net. A cross-attention mechanism incorporates the geometry as a prompt. We train the geometry-to-flow diffusion model using a dataset of flows past simple obstacles, including the circle, ellipse, rectangle, and triangle. For comparison, the CNN model is trained using the same dataset. Tests are carried out on flows past obstacles with simple and complex geometries, representing interpolation and extrapolation on the geometry condition, respectively. In the test set, challenging scenarios include a cross and characters 'PKU'. Generated flow fields show that the geometry-to-flow diffusion model is superior to the CNN model in predicting instantaneous flow fields and handling complex geometries. Quantitative analysis of the model accuracy and divergence in the fields demonstrate the high robustness of the diffusion model, indicating that the diffusion model learns physical laws implicitly.
[ "['Jiajun Hu' 'Zhen Lu' 'Yue Yang']" ]
null
null
2407.00736
null
null
http://arxiv.org/pdf/2407.00736v1
2024-06-30T15:50:10Z
2024-06-30T15:50:10Z
Quantum Circuit Synthesis and Compilation Optimization: Overview and Prospects
Quantum computing is regarded as a promising paradigm that may overcome the current computational power bottlenecks in the post-Moore era. The increasing maturity of quantum processors, especially superconducting ones, provides more possibilities for the development and implementation of quantum algorithms. As the crucial stages for quantum algorithm implementation, the logic circuit design and quantum compiling have also received significant attention, which covers key technologies such as quantum logic circuit synthesis (also widely known as quantum architecture search) and optimization, as well as qubit mapping and routing. Recent studies suggest that the scale and precision of related algorithms are steadily increasing, especially with the integration of artificial intelligence methods. In this survey, we systematically review and summarize a vast body of literature, exploring the feasibility of an integrated design and optimization scheme that spans from the algorithmic level to quantum hardware, combining the steps of logic circuit design and compilation optimization. Leveraging the exceptional cognitive and learning capabilities of AI algorithms, one can reduce manual design costs, enhance the precision and efficiency of execution, and facilitate the implementation and validation of the superiority of quantum algorithms on hardware.
[ "['Yan Ge' 'Wu Wenjie' 'Chen Yuheng' 'Pan Kaisen' 'Lu Xudong'\n 'Zhou Zixiang' 'Wang Yuhan' 'Wang Ruocheng' 'Yan Junchi']" ]
null
null
2407.00738
null
null
http://arxiv.org/pdf/2407.00738v1
2024-06-30T15:50:54Z
2024-06-30T15:50:54Z
Engineering an Efficient Object Tracker for Non-Linear Motion
The goal of multi-object tracking is to detect and track all objects in a scene while maintaining unique identifiers for each, by associating their bounding boxes across video frames. This association relies on matching motion and appearance patterns of detected objects. This task is especially hard in case of scenarios involving dynamic and non-linear motion patterns. In this paper, we introduce DeepMoveSORT, a novel, carefully engineered multi-object tracker designed specifically for such scenarios. In addition to standard methods of appearance-based association, we improve motion-based association by employing deep learnable filters (instead of the most commonly used Kalman filter) and a rich set of newly proposed heuristics. Our improvements to motion-based association methods are severalfold. First, we propose a new transformer-based filter architecture, TransFilter, which uses an object's motion history for both motion prediction and noise filtering. We further enhance the filter's performance by careful handling of its motion history and accounting for camera motion. Second, we propose a set of heuristics that exploit cues from the position, shape, and confidence of detected bounding boxes to improve association performance. Our experimental evaluation demonstrates that DeepMoveSORT outperforms existing trackers in scenarios featuring non-linear motion, surpassing state-of-the-art results on three such datasets. We also perform a thorough ablation study to evaluate the contributions of different tracker components which we proposed. Based on our study, we conclude that using a learnable filter instead of the Kalman filter, along with appearance-based association is key to achieving strong general tracking performance.
[ "['Momir Adžemović' 'Predrag Tadić' 'Andrija Petrović' 'Mladen Nikolić']" ]
null
null
2407.00740
null
null
http://arxiv.org/pdf/2407.00740v1
2024-06-30T16:04:29Z
2024-06-30T16:04:29Z
Locate&Edit: Energy-based Text Editing for Efficient, Flexible, and Faithful Controlled Text Generation
Recent approaches to controlled text generation (CTG) often involve manipulating the weights or logits of base language models (LMs) at decoding time. However, these methods are inapplicable to the latest black-box LMs and ineffective at preserving the core semantics of the base LM's original generations. In this work, we propose Locate&Edit (L&E), an efficient and flexible energy-based approach to CTG, which edits text outputs from a base LM using off-the-shelf energy models. Given text outputs from the base LM, L&E first locates spans that are most relevant to constraints (e.g., toxicity) utilizing energy models, and then edits these spans by replacing them with more suitable alternatives. Importantly, our method is compatible with black-box LMs, as it requires only their text outputs. Also, since L&E doesn't mandate a specific architecture for its component models, it can work with a diverse combination of available off-the-shelf models. Moreover, L&E preserves the base LM's original generations by selectively modifying constraint-related aspects of the texts and leaving others unchanged. These targeted edits also ensure that L&E operates efficiently. Our experiments confirm that L&E achieves superior semantic preservation of the base LM generations and superior speed, while simultaneously obtaining competitive or improved constraint satisfaction. Furthermore, we analyze how the granularity of energy distribution impacts CTG performance and find that fine-grained, regression-based energy models improve constraint satisfaction compared to conventional binary-classifier energy models.
[ "['Hye Ryung Son' 'Jay-Yoon Lee']" ]
null
null
2407.00741
null
null
http://arxiv.org/pdf/2407.00741v2
2024-07-03T16:22:15Z
2024-06-30T16:05:31Z
Diffusion Models for Offline Multi-agent Reinforcement Learning with Safety Constraints
In recent advancements in Multi-agent Reinforcement Learning (MARL), its application has extended to various safety-critical scenarios. However, most methods focus on online learning, which presents substantial risks when deployed in real-world settings. Addressing this challenge, we introduce an innovative framework integrating diffusion models within the MARL paradigm. This approach notably enhances the safety of actions taken by multiple agents through risk mitigation while modeling coordinated action. Our framework is grounded in the Centralized Training with Decentralized Execution (CTDE) architecture, augmented by a Diffusion Model for predictive trajectory generation. Additionally, we incorporate a specialized algorithm to further ensure operational safety. We evaluate our model against baselines on the DSRL benchmark. Experimental results demonstrate that our model not only adheres to stringent safety constraints but also achieves superior performance compared to existing methodologies. This underscores the potential of our approach in advancing the safety and efficacy of MARL in real-world applications.
[ "['Jianuo Huang']" ]
null
null
2407.00742
null
null
http://arxiv.org/abs/2407.00742v1
2024-06-30T16:07:49Z
2024-06-30T16:07:49Z
PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph
Polygon representation learning is essential for diverse applications, encompassing tasks such as shape coding, building pattern classification, and geographic question answering. While recent years have seen considerable advancements in this field, much of the focus has been on single polygons, overlooking the intricate inner- and inter-polygonal relationships inherent in multipolygons. To address this gap, our study introduces a comprehensive framework specifically designed for learning representations of polygonal geometries, particularly multipolygons. Central to our approach is the incorporation of a heterogeneous visibility graph, which seamlessly integrates both inner- and inter-polygonal relationships. To enhance computational efficiency and minimize graph redundancy, we implement a heterogeneous spanning tree sampling method. Additionally, we devise a rotation-translation invariant geometric representation, ensuring broader applicability across diverse scenarios. Finally, we introduce Multipolygon-GNN, a novel model tailored to leverage the spatial and semantic heterogeneity inherent in the visibility graph. Experiments on five real-world and synthetic datasets demonstrate its ability to capture informative representations for polygonal geometries.
[ "['Dazhou Yu' 'Yuntong Hu' 'Yun Li' 'Liang Zhao']" ]
null
null
2407.00744
null
null
http://arxiv.org/pdf/2407.00744v1
2024-06-30T16:10:17Z
2024-06-30T16:10:17Z
Disentangled Representations for Causal Cognition
Complex adaptive agents consistently achieve their goals by solving problems that seem to require an understanding of causal information, information pertaining to the causal relationships that exist among elements of combined agent-environment systems. Causal cognition studies and describes the main characteristics of causal learning and reasoning in human and non-human animals, offering a conceptual framework to discuss cognitive performances based on the level of apparent causal understanding of a task. Despite the use of formal intervention-based models of causality, including causal Bayesian networks, psychological and behavioural research on causal cognition does not yet offer a computational account that operationalises how agents acquire a causal understanding of the world. Machine and reinforcement learning research on causality, especially work involving disentanglement as a candidate process for building causal representations, represents a concrete attempt at designing causal artificial agents that can shed light on the inner workings of natural causal cognition. In this work, we connect these two areas of research to build a unifying framework for causal cognition that will offer a computational perspective on studies of animal cognition, and provide insights into the development of new algorithms for causal reinforcement learning in AI.
[ "['Filippo Torresan' 'Manuel Baltieri']" ]
null
null
2407.00745
null
null
http://arxiv.org/pdf/2407.00745v1
2024-06-30T16:11:42Z
2024-06-30T16:11:42Z
Posterior Sampling with Denoising Oracles via Tilted Transport
Score-based diffusion models have significantly advanced high-dimensional data generation across various domains, by learning a denoising oracle (or score) from datasets. From a Bayesian perspective, they offer a realistic modeling of data priors and facilitate solving inverse problems through posterior sampling. Although many heuristic methods have been developed recently for this purpose, they lack the quantitative guarantees needed in many scientific applications. In this work, we introduce the tilted transport technique, which leverages the quadratic structure of the log-likelihood in linear inverse problems in combination with the prior denoising oracle to transform the original posterior sampling problem into a new `boosted' posterior that is provably easier to sample from. We quantify the conditions under which this boosted posterior is strongly log-concave, highlighting the dependencies on the condition number of the measurement matrix and the signal-to-noise ratio. The resulting posterior sampling scheme is shown to reach the computational threshold predicted for sampling Ising models [Kunisky'23] with a direct analysis, and is further validated on high-dimensional Gaussian mixture models and scalar field $\varphi^4$ models.
[ "['Joan Bruna' 'Jiequn Han']" ]
null
null
2407.00748
null
null
http://arxiv.org/abs/2407.00748v1
2024-06-30T16:13:13Z
2024-06-30T16:13:13Z
Self-consistent Deep Geometric Learning for Heterogeneous Multi-source Spatial Point Data Prediction
Multi-source spatial point data prediction is crucial in fields like environmental monitoring and natural resource management, where integrating data from various sensors is the key to achieving a holistic environmental understanding. Existing models in this area often fall short due to their domain-specific nature and lack a strategy for integrating information from various sources in the absence of ground truth labels. Key challenges include evaluating the quality of different data sources and modeling spatial relationships among them effectively. Addressing these issues, we introduce an innovative multi-source spatial point data prediction framework that adeptly aligns information from varied sources without relying on ground truth labels. A unique aspect of our method is the 'fidelity score,' a quantitative measure for evaluating the reliability of each data source. Furthermore, we develop a geo-location-aware graph neural network tailored to accurately depict spatial relationships between data points. Our framework has been rigorously tested on two real-world datasets and one synthetic dataset. The results consistently demonstrate its superior performance over existing state-of-the-art methods.
[ "['Dazhou Yu' 'Xiaoyun Gong' 'Yun Li' 'Meikang Qiu' 'Liang Zhao']" ]
null
null
2407.00759
null
null
http://arxiv.org/pdf/2407.00759v1
2024-06-30T16:49:29Z
2024-06-30T16:49:29Z
Analysis of Modern Computer Vision Models for Blood Cell Classification
The accurate classification of white blood cells and related blood components is crucial for medical diagnoses. While traditional manual examinations and automated hematology analyzers have been widely used, they are often slow and prone to errors. Recent advancements in deep learning have shown promise for addressing these limitations. Earlier studies have demonstrated the viability of convolutional neural networks such as DenseNet, ResNet, and VGGNet for this task. Building on these foundations, our work employs more recent and efficient models to achieve rapid and accurate results. Specifically, this study used state-of-the-art architectures, including MaxVit, EfficientVit, EfficientNet, EfficientNetV2, and MobileNetV3. This study aimed to evaluate the performance of these models in WBC classification, potentially offering a more efficient and reliable alternative to current methods. Our approach not only addresses the speed and accuracy concerns of traditional techniques but also explores the applicability of innovative deep learning models in hematological analysis.
[ "['Alexander Kim' 'Ryan Kim']" ]
null
null
2407.00760
null
null
http://arxiv.org/pdf/2407.00760v1
2024-06-30T16:50:08Z
2024-06-30T16:50:08Z
Improved Graph-based semi-supervised learning Schemes
In this work, we improve the accuracy of several known algorithms to address the classification of large datasets when few labels are available. Our framework lies in the realm of graph-based semi-supervised learning. With novel modifications on Gaussian Random Fields Learning and Poisson Learning algorithms, we increase the accuracy and create more robust algorithms. Experimental results demonstrate the efficiency and superiority of the proposed methods over conventional graph-based semi-supervised techniques, especially in the context of imbalanced datasets.
[ "['Farid Bozorgnia']" ]
null
null
2407.00761
null
null
http://arxiv.org/pdf/2407.00761v1
2024-06-30T16:50:11Z
2024-06-30T16:50:11Z
Improving the performance of Stein variational inference through extreme sparsification of physically-constrained neural network models
Most scientific machine learning (SciML) applications of neural networks involve hundreds to thousands of parameters, and hence, uncertainty quantification for such models is plagued by the curse of dimensionality. Using physical applications, we show that $L_0$ sparsification prior to Stein variational gradient descent ($L_0$+SVGD) is a more robust and efficient means of uncertainty quantification, in terms of computational cost and performance, than the direct application of SVGD or projected SVGD methods. Specifically, $L_0$+SVGD demonstrates superior resilience to noise, the ability to perform well in extrapolated regions, and a faster convergence rate to an optimal solution.
[ "['Govinda Anantha Padmanabha' 'Jan Niklas Fuhg' 'Cosmin Safta'\n 'Reese E. Jones' 'Nikolaos Bouklas']" ]
null
null
2407.00765
null
null
http://arxiv.org/pdf/2407.00765v1
2024-06-30T17:00:42Z
2024-06-30T17:00:42Z
Structured and Balanced Multi-component and Multi-layer Neural Networks
In this work, we propose a balanced multi-component and multi-layer neural network (MMNN) structure to approximate functions with complex features, with both accuracy and efficiency in terms of degrees of freedom and computational cost. The main idea is a multi-component decomposition, in which each component can be approximated effectively by a single-layer network, combined with a multi-layer decomposition in a "divide-and-conquer" strategy to deal with a complex function. While MMNNs are an easy modification of fully connected neural networks (FCNNs) or multi-layer perceptrons (MLPs) through the introduction of balanced multi-component structures in the network, they achieve a significant reduction in training parameters, a much more efficient training process, and much improved accuracy compared to FCNNs or MLPs. Extensive numerical experiments are presented to illustrate the effectiveness of MMNNs in approximating highly oscillatory functions and their automatic adaptivity in capturing localized features.
[ "['Shijun Zhang' 'Hongkai Zhao' 'Yimin Zhong' 'Haomin Zhou']" ]
null
null
2407.00774
null
null
http://arxiv.org/pdf/2407.00774v1
2024-06-30T17:24:55Z
2024-06-30T17:24:55Z
Advantages of quantum support vector machine in cross-domain classification of quantum states
In this study, we use cross-domain classification with quantum machine learning to address the entanglement-versus-separability paradigm and to seek quantum advantages. We further demonstrate the efficient classification of Bell diagonal states into zero and non-zero discord classes. The inherent structure of quantum states and its relation to a particular class of quantum states are exploited to intuitively approach the classification of testing states from different domains, referred to here as cross-domain classification. In addition, we extend our analysis to evaluate the robustness of our model for the analyzed problem using random unitary transformations. Using numerical analysis, our results clearly demonstrate the potential of the quantum support vector machine (QSVM) for classifying quantum states across the multidimensional Hilbert space.
[ "['Diksha Sharma' 'Vivek Balasaheb Sabale' 'Parvinder Singh' 'Atul Kumar']" ]
null
null
2407.00779
null
null
http://arxiv.org/pdf/2407.00779v1
2024-06-30T17:45:01Z
2024-06-30T17:45:01Z
Towards Faster Matrix Diagonalization with Graph Isomorphism Networks and the AlphaZero Framework
In this paper, we introduce innovative approaches for accelerating the Jacobi method for matrix diagonalization, specifically through the formulation of large matrix diagonalization as a Semi-Markov Decision Process and small matrix diagonalization as a Markov Decision Process. Furthermore, we examine the potential of utilizing a scalable architecture across different-sized matrices. During a short training period, our method discovered a significant reduction in the number of steps required for diagonalization and exhibited efficient inference capabilities. Importantly, this approach demonstrated possible scalability to large-sized matrices, indicating its potential for wide-ranging applicability. Upon training completion, we obtain action-state probabilities and transition graphs, which depict transitions between different states. These outputs not only provide insights into the diagonalization process but also pave the way for cost savings pertinent to large-scale matrices. The advancements made in this research enhance the efficacy and scalability of matrix diagonalization, opening up new possibilities for deployment in practical applications in scientific and engineering domains.
[ "['Geigh Zollicoffer' 'Kshitij Bhatta' 'Manish Bhattarai' 'Phil Romero'\n 'Christian F. A. Negre' 'Anders M. N. Niklasson' 'Adetokunbo Adedoyin']" ]
null
null
2407.00787
null
null
http://arxiv.org/pdf/2407.00787v1
2024-06-30T18:04:16Z
2024-06-30T18:04:16Z
Enhancing Travel Decision-Making: A Contrastive Learning Approach for Personalized Review Rankings in Accommodations
User-generated reviews significantly influence consumer decisions, particularly in the travel domain when selecting accommodations. This paper's contribution comprises two main elements. First, we present a novel dataset of authentic guest reviews sourced from a prominent online travel platform, totaling over two million reviews from 50,000 distinct accommodations. Second, we propose an innovative approach to personalized review ranking. Our method employs contrastive learning to intricately capture the relationship between a review and the contextual information of its respective reviewer. Through a comprehensive experimental study, we demonstrate that our approach surpasses several baselines across all reported metrics. Augmented by a comparative analysis, we showcase the efficacy of our method in elevating personalized review ranking. The implications of our research extend beyond the travel domain, with potential applications in other sectors where personalized review ranking is paramount, such as online e-commerce platforms.
[ "['Reda Igebaria' 'Eran Fainman' 'Sarai Mizrachi' 'Moran Beladev'\n 'Fengjun Wang']" ]
null
null
2407.00801
null
null
http://arxiv.org/pdf/2407.00801v1
2024-06-30T19:00:49Z
2024-06-30T19:00:49Z
Model-Free Active Exploration in Reinforcement Learning
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution. We adopt an information-theoretical viewpoint and start from the instance-specific lower bound of the number of samples that have to be collected to identify a nearly-optimal policy. Deriving this lower bound along with the optimal exploration strategy entails solving an intricate optimization problem and requires a model of the system. In turn, most existing sample optimal exploration algorithms rely on estimating the model. We derive an approximation of the instance-specific lower bound that only involves quantities that can be inferred using model-free approaches. Leveraging this approximation, we devise an ensemble-based model-free exploration strategy applicable to both tabular and continuous Markov decision processes. Numerical results demonstrate that our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
[ "['Alessio Russo' 'Alexandre Proutiere']" ]
null
null
2407.00806
null
null
http://arxiv.org/pdf/2407.00806v1
2024-06-30T19:22:59Z
2024-06-30T19:22:59Z
Benchmarks for Reinforcement Learning with Biased Offline Data and Imperfect Simulators
In many reinforcement learning (RL) applications one cannot easily let the agent act in the world; this is true for autonomous vehicles, healthcare applications, and even some recommender systems, to name a few examples. Offline RL provides a way to train agents without real-world exploration, but is often faced with biases due to data distribution shifts, limited coverage, and incomplete representation of the environment. To address these issues, practical applications have tried to combine simulators with grounded offline data, using so-called hybrid methods. However, constructing a reliable simulator is in itself often challenging due to intricate system complexities as well as missing or incomplete information. In this work, we outline four principal challenges for combining offline data with imperfect simulators in RL: simulator modeling error, partial observability, state and action discrepancies, and hidden confounding. To help drive the RL community to pursue these problems, we construct "Benchmarks for Mechanistic Offline Reinforcement Learning" (B4MRL), which provide dataset-simulator benchmarks for the aforementioned challenges. Our results suggest the key necessity of such benchmarks for future research.
[ "['Ori Linial' 'Guy Tennenholtz' 'Uri Shalit']" ]
null
null
2407.00809
null
null
http://arxiv.org/pdf/2407.00809v1
2024-06-30T19:28:12Z
2024-06-30T19:28:12Z
Kernel Neural Operators (KNOs) for Scalable, Memory-efficient, Geometrically-flexible Operator Learning
This paper introduces the Kernel Neural Operator (KNO), a novel operator learning technique that uses deep kernel-based integral operators in conjunction with quadrature for function-space approximation of operators (maps from functions to functions). KNOs use parameterized, closed-form, finitely-smooth, and compactly-supported kernels with trainable sparsity parameters within the integral operators to significantly reduce the number of parameters that must be learned relative to existing neural operators. Moreover, the use of quadrature for numerical integration endows the KNO with geometric flexibility that enables operator learning on irregular geometries. Numerical results demonstrate that on existing benchmarks the training and test accuracy of KNOs is higher than popular operator learning techniques while using at least an order of magnitude fewer trainable parameters. KNOs thus represent a new paradigm of low-memory, geometrically-flexible, deep operator learning, while retaining the implementation simplicity and transparency of traditional kernel methods from both scientific computing and machine learning.
[ "['Matthew Lowery' 'John Turnage' 'Zachary Morrow' 'John D. Jakeman'\n 'Akil Narayan' 'Shandian Zhe' 'Varun Shankar']" ]
null
null
2407.00824
null
null
http://arxiv.org/pdf/2407.00824v1
2024-04-14T17:33:09Z
2024-04-14T17:33:09Z
A data-driven approach to modeling brain activity using differential equations
This research focuses on the innovative task of extracting equations from incomplete data, moving away from traditional methods designed for complete solutions. It addresses the challenge of equation extraction in the study of brain activity using electrophysiological data, which is often limited by insufficient information. The study provides a brief review of existing open-source equation derivation approaches in the context of modeling brain activity. It then introduces a novel algorithm that employs incomplete data and prior domain knowledge to recover differential equations. The algorithm's practicality in real-world scenarios is demonstrated through its application to both synthetic and real datasets.
[ "['Kuratov Andrey']" ]
null
null
2407.00840
null
null
http://arxiv.org/pdf/2407.00840v1
2024-06-30T21:54:41Z
2024-06-30T21:54:41Z
MUSE-Net: Missingness-aware mUlti-branching Self-attention Encoder for Irregular Longitudinal Electronic Health Records
The era of big data has made vast amounts of clinical data readily available, particularly in the form of electronic health records (EHRs), which provides unprecedented opportunities for developing data-driven diagnostic tools to enhance clinical decision making. However, the application of EHRs in data-driven modeling faces challenges such as irregularly spaced multi-variate time series, issues of incompleteness, and data imbalance. Realizing the full data potential of EHRs hinges on the development of advanced analytical models. In this paper, we propose a novel Missingness-aware mUlti-branching Self-attention Encoder (MUSE-Net) to cope with the challenges in modeling longitudinal EHRs for data-driven disease prediction. The MUSE-Net leverages a multi-task Gaussian process (MGP) with missing value masks for data imputation, a multi-branching architecture to address the data imbalance problem, and a time-aware self-attention encoder to account for the irregularly spaced time interval in longitudinal EHRs. We evaluate the proposed MUSE-Net using both synthetic and real-world datasets. Experimental results show that our MUSE-Net outperforms existing methods that are widely used to investigate longitudinal signals.
[ "['Zekai Wang' 'Tieming Liu' 'Bing Yao']" ]
null
null
2407.00843
null
null
http://arxiv.org/pdf/2407.00843v1
2024-06-30T22:33:47Z
2024-06-30T22:33:47Z
A Unified Approach to Extract Interpretable Rules from Tree Ensembles via Integer Programming
Tree ensemble methods represent a popular machine learning model, known for their effectiveness in supervised classification and regression tasks. Their performance derives from aggregating predictions of multiple decision trees, which are renowned for their interpretability properties. However, tree ensemble methods do not reliably exhibit interpretable output. Our work aims to extract an optimized list of rules from a trained tree ensemble, providing the user with a condensed, interpretable model that retains most of the predictive power of the full model. Our approach consists of solving a clean and neat set partitioning problem formulated through Integer Programming. The proposed method works with either tabular or time series data, for both classification and regression tasks, and does not require parameter tuning under the most common setting. Through rigorous computational experiments, we offer statistically significant evidence that our method is competitive with other rule extraction methods and effectively handles time series.
[ "['Lorenzo Bonasera' 'Emilio Carrizosa']" ]
null
null
2407.00849
null
null
http://arxiv.org/pdf/2407.00849v1
2024-06-30T22:59:15Z
2024-06-30T22:59:15Z
Towards Understanding Sensitive and Decisive Patterns in Explainable AI: A Case Study of Model Interpretation in Geometric Deep Learning
The interpretability of machine learning models has gained increasing attention, particularly in scientific domains where high precision and accountability are crucial. This research focuses on distinguishing between two critical data patterns -- sensitive patterns (model-related) and decisive patterns (task-related) -- which are commonly used as model interpretations but often lead to confusion. Specifically, this study compares the effectiveness of two main streams of interpretation methods: post-hoc methods and self-interpretable methods, in detecting these patterns. Recently, geometric deep learning (GDL) has shown superior predictive performance in various scientific applications, creating an urgent need for principled interpretation methods. Therefore, we conduct our study using several representative GDL applications as case studies. We evaluate thirteen interpretation methods applied to three major GDL backbone models, using four scientific datasets to assess how well these methods identify sensitive and decisive patterns. Our findings indicate that post-hoc methods tend to provide interpretations better aligned with sensitive patterns, whereas certain self-interpretable methods exhibit strong and stable performance in detecting decisive patterns. Additionally, our study offers valuable insights into improving the reliability of these interpretation methods. For example, ensembling post-hoc interpretations from multiple models trained on the same task can effectively uncover the task's decisive patterns.
[ "['Jiajun Zhu' 'Siqi Miao' 'Rex Ying' 'Pan Li']" ]
null
null
2407.00866
null
null
http://arxiv.org/pdf/2407.00866v2
2024-07-05T18:01:16Z
2024-07-01T00:20:26Z
Silver Linings in the Shadows: Harnessing Membership Inference for Machine Unlearning
With the continued advancement and widespread adoption of machine learning (ML) models across various domains, ensuring user privacy and data security has become a paramount concern. In compliance with data privacy regulations such as GDPR, a secure machine learning framework should not only grant users the right to request the removal of their contributed data used for model training, but also facilitate the elimination of sensitive data fingerprints within machine learning models to mitigate potential attacks - a process referred to as machine unlearning. In this study, we present a novel unlearning mechanism designed to effectively remove the impact of specific data samples from a neural network while considering the performance of the unlearned model on the primary task. In achieving this goal, we crafted a novel loss function tailored to eliminate privacy-sensitive information from the weights and activation values of the target model by combining a target classification loss and a membership inference loss. Our adaptable framework can easily incorporate various privacy leakage approximation mechanisms to guide the unlearning process. We provide empirical evidence of the effectiveness of our unlearning approach with a theoretical upper-bound analysis through a membership inference mechanism as a proof of concept. Our results showcase the superior performance of our approach in terms of unlearning efficacy and latency, as well as the fidelity of the primary task, across four datasets and four deep learning architectures.
[ "['Nexhi Sula' 'Abhinav Kumar' 'Jie Hou' 'Han Wang' 'Reza Tourani']" ]
null
null
2407.00878
null
null
http://arxiv.org/pdf/2407.00878v1
2024-04-10T02:20:29Z
2024-04-10T02:20:29Z
A Robust Power Model Training Framework for Cloud Native Runtime Energy Metric Exporter
Estimating power consumption in modern Cloud environments is essential for carbon quantification toward green computing. Specifically, it is important to properly account for the power consumed by each of the running applications, which are packaged as containers. This paper examines multiple challenges associated with this goal. The first challenge is that multiple customers are sharing the same hardware platform (multi-tenancy), where information on the physical servers is mostly obscured. The second challenge is the overhead in power consumption that the Cloud platform control plane induces. This paper addresses these challenges and introduces a novel pipeline framework for power model training. This allows versatile power consumption approximation of individual containers on the basis of available performance counters and other metrics. The proposed model utilizes machine learning techniques to predict the power consumed by the control plane and associated processes, and uses it for isolating the power consumed by the user containers, from the server power consumption. To determine how well the prediction results in an isolation, we introduce a metric termed isolation goodness. Applying the proposed power model does not require online power measurements, nor does it need information on the physical servers, configuration, or information on other tenants sharing the same machine. The results of cross-workload, cross-platform experiments demonstrated the higher accuracy of the proposed model when predicting power consumption of unseen containers on unknown platforms, including on virtual machines.
[ "['Sunyanan Choochotkaew' 'Chen Wang' 'Huamin Chen' 'Tatsuhiro Chiba'\n 'Marcelo Amaral' 'Eun Kyung Lee' 'Tamar Eilam']" ]
null
null
2407.00886
null
null
http://arxiv.org/pdf/2407.00886v1
2024-07-01T01:12:20Z
2024-07-01T01:12:20Z
Mechanistic Interpretation through Contextual Decomposition in Transformers
Transformers exhibit impressive capabilities but are often regarded as black boxes due to challenges in understanding the complex nonlinear relationships between features. Interpreting machine learning models is of paramount importance to mitigate risks, and mechanistic interpretability is in particular of current interest as it opens up a window for guiding manual modifications and reverse-engineering solutions. In this work, we introduce contextual decomposition for transformers (CD-T), extending a prior work on CD for RNNs and CNNs, to enable computationally efficient mechanistic interpretation. CD-T is a flexible interpretation method for transformers. It can capture contributions of combinations of input features or source internal components (e.g. attention heads, feed-forward networks) to (1) final predictions or (2) the output of any target internal component. Using CD-T, we propose a novel algorithm for circuit discovery. On a real-world pathology report classification task, we show CD-T distills a more faithful circuit of attention heads with improved computational efficiency (a 2x speedup) than a prior benchmark, path patching. As a versatile interpretation method, CD-T also exhibits exceptional capabilities for local interpretations. CD-T is shown to reliably find words and phrases of contrasting sentiment/topic on SST-2 and AGNews datasets. Through human experiments, we demonstrate CD-T enables users to identify the more accurate of two models and to better trust a model's outputs compared to alternative interpretation methods such as SHAP and LIME.
[ "['Aliyah R. Hsu' 'Yeshwanth Cherapanamjeri' 'Anobel Y. Odisho'\n 'Peter R. Carroll' 'Bin Yu']" ]
null
null
2407.00888
null
null
http://arxiv.org/pdf/2407.00888v1
2024-07-01T01:23:06Z
2024-07-01T01:23:06Z
Papez: Resource-Efficient Speech Separation with Auditory Working Memory
Transformer-based models recently reached state-of-the-art single-channel speech separation accuracy; however, their extreme computational load makes it difficult to deploy them in resource-constrained mobile or IoT devices. We thus present Papez, a lightweight and computation-efficient single-channel speech separation model. Papez is based on three key techniques. We first replace the inter-chunk Transformer with small-sized auditory working memory. Second, we adaptively prune the input tokens that do not need further processing. Finally, we reduce the number of parameters through the recurrent transformer. Our extensive evaluation shows that Papez achieves the best resource and accuracy tradeoffs by a large margin. We publicly share our source code at https://github.com/snuhcs/Papez.
[ "['Hyunseok Oh' 'Juheon Yi' 'Youngki Lee']" ]
null
null
2407.00890
null
null
http://arxiv.org/pdf/2407.00890v1
2024-07-01T01:25:26Z
2024-07-01T01:25:26Z
Macroeconomic Forecasting with Large Language Models
This paper presents a comparative analysis evaluating the accuracy of Large Language Models (LLMs) against traditional macro time series forecasting approaches. In recent times, LLMs have surged in popularity for forecasting due to their ability to capture intricate patterns in data and quickly adapt across very different domains. However, their effectiveness in forecasting macroeconomic time series data compared to conventional methods remains an area of interest. To address this, we conduct a rigorous evaluation of LLMs against traditional macro forecasting methods, using the FRED-MD database as common ground. Our findings provide valuable insights into the strengths and limitations of LLMs in forecasting macroeconomic time series, shedding light on their applicability in real-world scenarios.
[ "['Andrea Carriero' 'Davide Pettenuzzo' 'Shubhranshu Shekhar']" ]
null
null
2407.00891
null
null
http://arxiv.org/pdf/2407.00891v1
2024-07-01T01:28:14Z
2024-07-01T01:28:14Z
ZeroDDI: A Zero-Shot Drug-Drug Interaction Event Prediction Method with Semantic Enhanced Learning and Dual-Modal Uniform Alignment
Drug-drug interactions (DDIs) can result in various pharmacological changes, which can be categorized into different classes known as DDI events (DDIEs). In recent years, previously unobserved/unseen DDIEs have been emerging, posing a new classification task when unseen classes have no labelled instances in the training stage, which is formulated as a zero-shot DDIE prediction (ZS-DDIE) task. However, existing computational methods are not directly applicable to ZS-DDIE, which has two primary challenges: obtaining suitable DDIE representations and handling the class imbalance issue. To overcome these challenges, we propose a novel method named ZeroDDI for the ZS-DDIE task. Specifically, we design a biological semantic enhanced DDIE representation learning module, which emphasizes the key biological semantics and distills discriminative molecular substructure-related semantics for DDIE representation learning. Furthermore, we propose a dual-modal uniform alignment strategy to distribute drug pair representations and DDIE semantic representations uniformly in a unit sphere and align the matched ones, which can mitigate the issue of class imbalance. Extensive experiments showed that ZeroDDI surpasses the baselines, indicating that it is a promising tool for detecting unseen DDIEs. Our code has been released at https://github.com/wzy-Sarah/ZeroDDI.
[ "['Ziyan Wang' 'Zhankun Xiong' 'Feng Huang' 'Xuan Liu' 'Wen Zhang']" ]
null
null
2407.00902
null
null
http://arxiv.org/pdf/2407.00902v1
2024-07-01T01:57:21Z
2024-07-01T01:57:21Z
From Introspection to Best Practices: Principled Analysis of Demonstrations in Multimodal In-Context Learning
Motivated by the in-context learning (ICL) capabilities of Large Language Models (LLMs), multimodal LLMs with an additional visual modality also exhibit similar ICL abilities when multiple image-text pairs are provided as demonstrations. However, relatively little work has been done to investigate the principles behind how and why multimodal ICL works. We conduct a systematic and principled evaluation of multimodal ICL for models of different scales on a broad spectrum of new yet critical tasks. Through perturbations over different modality information, we show that modalities matter differently across tasks in multimodal ICL. Considering such modality impact, we further utilize modality-driven demonstration strategies to boost ICL performance. We also identify that demonstration selection is closely related to the models' ability to capture task inductive biases from multimodal ICL. Our principled analysis provides a comprehensive way of understanding the role of demonstrations in multimodal in-context learning, and sheds light on effectively improving multimodal ICL on a wide range of tasks even if those tasks are not seen in or even contradict pretraining data.
[ "['Nan Xu' 'Fei Wang' 'Sheng Zhang' 'Hoifung Poon' 'Muhao Chen']" ]
null
null
2407.00906
null
null
http://arxiv.org/pdf/2407.00906v1
2024-07-01T02:15:27Z
2024-07-01T02:15:27Z
GSO-YOLO: Global Stability Optimization YOLO for Construction Site Detection
Safety issues at construction sites have long plagued the industry, posing risks to worker safety and causing economic damage due to potential hazards. With the advancement of artificial intelligence, particularly in the field of computer vision, the automation of safety monitoring on construction sites has emerged as a solution to this longstanding issue. Despite achieving impressive performance, advanced object detection methods like YOLOv8 still face challenges in handling the complex conditions found at construction sites. To solve these problems, this study presents the Global Stability Optimization YOLO (GSO-YOLO) model to address challenges in complex construction sites. The model integrates the Global Optimization Module (GOM) and Steady Capture Module (SCM) to enhance global contextual information capture and detection stability. The innovative AIoU loss function, which combines CIoU and EIoU, improves detection accuracy and efficiency. Experiments on datasets like SODA, MOCS, and CIS show that GSO-YOLO outperforms existing methods, achieving SOTA performance.
[ "['Yuming Zhang' 'Dongzhi Guan' 'Shouxin Zhang' 'Junhao Su' 'Yunzhi Han'\n 'Jiabin Liu']" ]
null
null
2407.00911
null
null
http://arxiv.org/pdf/2407.00911v1
2024-07-01T02:33:07Z
2024-07-01T02:33:07Z
Deep Image-to-Recipe Translation
The modern saying, "You Are What You Eat," resonates on a profound level, reflecting the intricate connection between our identities and the food we consume. Our project, Deep Image-to-Recipe Translation, is an intersection of computer vision and natural language generation that aims to bridge the gap between cherished food memories and the art of culinary creation. Our primary objective involves predicting ingredients from a given food image. For this task, we first develop a custom convolutional network and then compare its performance to a model that leverages transfer learning. We pursue an additional goal of generating a comprehensive set of recipe steps from a list of ingredients. We frame this process as a sequence-to-sequence task and develop a recurrent neural network that utilizes pre-trained word embeddings. We address several challenges of deep learning including imbalanced datasets, data cleaning, overfitting, and hyperparameter selection. Our approach emphasizes the importance of metrics such as Intersection over Union (IoU) and F1 score in scenarios where accuracy alone might be misleading. For our recipe prediction model, we employ perplexity, a commonly used and important metric for language models. We find that transfer learning via pre-trained ResNet-50 weights and GloVe embeddings provide an exceptional boost to model performance, especially when considering training resource constraints. Although we have made progress on the image-to-recipe translation, there is an opportunity for future exploration with advancements in model architectures, dataset scalability, and enhanced user interaction.
[ "['Jiangqin Ma' 'Bilal Mawji' 'Franz Williams']" ]
null
null
2407.00913
null
null
http://arxiv.org/pdf/2407.00913v1
2024-07-01T02:36:27Z
2024-07-01T02:36:27Z
SecureSpectra: Safeguarding Digital Identity from Deep Fake Threats via Intelligent Signatures
Advancements in DeepFake (DF) audio models pose a significant threat to voice authentication systems, leading to unauthorized access and the spread of misinformation. We introduce a defense mechanism, SecureSpectra, addressing DF threats by embedding orthogonal, irreversible signatures within audio. SecureSpectra leverages the inability of DF models to replicate high-frequency content, which we empirically identify across diverse datasets and DF models. Integrating differential privacy into the pipeline protects signatures from reverse engineering and strikes a delicate balance between enhanced security and minimal performance compromises. Our evaluations on Mozilla Common Voice, LibriSpeech, and VoxCeleb datasets showcase SecureSpectra's superior performance, outperforming recent works by up to 71% in detection accuracy. We open-source SecureSpectra to benefit the research community.
[ "['Oguzhan Baser' 'Kaan Kale' 'Sandeep P. Chinchali']" ]
null
null
2407.00916
null
null
http://arxiv.org/pdf/2407.00916v2
2024-07-03T03:42:46Z
2024-07-01T02:42:27Z
Learnability in Online Kernel Selection with Memory Constraint via Data-dependent Regret Analysis
Online kernel selection is a fundamental problem of online kernel methods. In this paper, we study online kernel selection with memory constraint in which the memory of kernel selection and online prediction procedures is limited to a fixed budget. An essential question is: what is the intrinsic relationship among online learnability, memory constraint, and data complexity? To answer the question, it is necessary to show the trade-offs between regret and memory constraint. Previous work gives a worst-case lower bound depending on the data size, and shows learning is impossible within a small memory constraint. In contrast, we present distinct results by offering data-dependent upper bounds that rely on two data complexities: kernel alignment and the cumulative losses of competitive hypothesis. We propose an algorithmic framework giving data-dependent upper bounds for two types of loss functions. For the hinge loss function, our algorithm achieves an expected upper bound depending on kernel alignment. For smooth loss functions, our algorithm achieves a high-probability upper bound depending on the cumulative losses of competitive hypothesis. We also prove a matching lower bound for smooth loss functions. Our results show that if the two data complexities are sub-linear, then learning is possible within a small memory constraint. Our algorithmic framework depends on a new buffer maintaining framework and a reduction from online kernel selection to prediction with expert advice. Finally, we empirically verify the prediction performance of our algorithms on benchmark datasets.
[ "['Junfan Li' 'Shizhong Liao']" ]
null
null
2407.00918
null
null
http://arxiv.org/pdf/2407.00918v1
2024-07-01T02:51:26Z
2024-07-01T02:51:26Z
Robust and Reliable Early-Stage Website Fingerprinting Attacks via Spatial-Temporal Distribution Analysis
Website Fingerprinting (WF) attacks identify the websites visited by users by performing traffic analysis, compromising user privacy. Particularly, DL-based WF attacks demonstrate impressive attack performance. However, the effectiveness of DL-based WF attacks relies on the collected complete and pure traffic during the page loading, which impacts the practicality of these attacks. The WF performance is rather low under dynamic network conditions and various WF defenses, particularly when the analyzed traffic is only a small part of the complete traffic. In this paper, we propose Holmes, a robust and reliable early-stage WF attack. Holmes utilizes temporal and spatial distribution analysis of website traffic to effectively identify websites in the early stages of page loading. Specifically, Holmes develops adaptive data augmentation based on the temporal distribution of website traffic and utilizes a supervised contrastive learning method to extract the correlations between the early-stage traffic and the pre-collected complete traffic. Holmes accurately identifies traffic in the early stages of page loading by computing the correlation of the traffic with the spatial distribution information, which ensures robust and reliable detection according to early-stage traffic. We extensively evaluate Holmes using six datasets. Compared to nine existing DL-based WF attacks, Holmes improves the F1-score of identifying early-stage traffic by an average of 169.18%. Furthermore, we replay the traffic of visiting real-world dark web websites. Holmes successfully identifies dark web websites when the ratio of page loading on average is only 21.71%, with an average precision improvement of 169.36% over the existing WF attacks.
[ "['Xinhao Deng' 'Qi Li' 'Ke Xu']" ]
null
null
2407.00927
null
null
http://arxiv.org/pdf/2407.00927v1
2024-07-01T03:17:34Z
2024-07-01T03:17:34Z
Learnability of Parameter-Bounded Bayes Nets
Bayes nets are extensively used in practice to efficiently represent joint probability distributions over a set of random variables and capture dependency relations. In a seminal paper, Chickering et al. (JMLR 2004) showed that given a distribution $P$, that is defined as the marginal distribution of a Bayes net, it is $\mathsf{NP}$-hard to decide whether there is a parameter-bounded Bayes net that represents $P$. They called this problem LEARN. In this work, we extend the $\mathsf{NP}$-hardness result of LEARN and prove the $\mathsf{NP}$-hardness of a promise search variant of LEARN, whereby the Bayes net in question is guaranteed to exist and one is asked to find such a Bayes net. We complement our hardness result with a positive result about the sample complexity that is sufficient to recover a parameter-bounded Bayes net that is close (in TV distance) to a given distribution $P$, that is represented by some parameter-bounded Bayes net, generalizing a degree-bounded sample complexity result of Brustle et al. (EC 2020).
[ "['Arnab Bhattacharyya' 'Davin Choo' 'Sutanu Gayen' 'Dimitrios Myrisiotis']" ]
null
null
2407.00928
null
null
http://arxiv.org/pdf/2407.00928v1
2024-07-01T03:17:53Z
2024-07-01T03:17:53Z
FoldGPT: Simple and Effective Large Language Model Compression Scheme
The demand for deploying large language models (LLMs) on mobile devices continues to increase, driven by escalating data security concerns and cloud costs. However, network bandwidth and memory limitations pose challenges for deploying billion-level models on mobile devices. In this study, we investigate the outputs of different layers across various scales of LLMs and find that the outputs of most layers exhibit significant similarity. Moreover, this similarity becomes more pronounced as the model size increases, indicating substantial redundancy in the depth direction of the LLMs. Based on this observation, we propose an efficient model volume compression strategy, termed FoldGPT, which combines block removal and block parameter sharing. This strategy consists of three parts: (1) Based on the learnable gating parameters, we determine the block importance ranking while modeling the coupling effect between blocks. Then we delete some redundant layers based on the given removal rate. (2) For the retained blocks, we apply a specially designed group parameter sharing strategy, where blocks within the same group share identical weights, significantly compressing the number of parameters and slightly reducing latency overhead. (3) After sharing these blocks, we "cure" the mismatch caused by sparsity with a minor amount of fine-tuning and introduce a tail-layer distillation strategy to improve the performance. Experiments demonstrate that FoldGPT outperforms previous state-of-the-art (SOTA) methods in efficient model compression, demonstrating the feasibility of achieving model lightweighting through straightforward block removal and parameter sharing.
[ "['Songwei Liu' 'Chao Zeng' 'Lianqiang Li' 'Chenqian Yan' 'Lean Fu'\n 'Xing Mei' 'Fangmin Chen']" ]
null
null
2407.00935
null
null
http://arxiv.org/pdf/2407.00935v1
2024-07-01T03:35:59Z
2024-07-01T03:35:59Z
Look Ahead or Look Around? A Theoretical Comparison Between Autoregressive and Masked Pretraining
In recent years, the rise of generative self-supervised learning (SSL) paradigms has exhibited impressive performance across visual, language, and multi-modal domains. While the varied designs of generative SSL objectives lead to distinct properties in downstream tasks, a theoretical understanding of these differences remains largely unexplored. In this paper, we establish the first theoretical comparisons between two leading generative SSL paradigms: autoregressive SSL and masked SSL. Through establishing theoretical frameworks, we elucidate the strengths and limitations of autoregressive and masked SSL within the primary evaluation tasks of classification and content generation. Our findings demonstrate that in classification tasks, the flexibility of targeted tokens in masked SSL fosters more inter-sample connections compared to the fixed position of target tokens in autoregressive SSL, which yields superior clustering performance. In content generation tasks, the misalignment between the flexible lengths of test samples and the fixed length of unmasked texts in masked SSL (vs. flexible lengths of conditional texts in autoregressive SSL) hinders its generation performance. To leverage each other's strengths and mitigate weaknesses, we propose diversity-enhanced autoregressive and variable-length masked objectives, which substantially improve the classification performance of autoregressive SSL and the generation performance of masked SSL. Code is available at https://github.com/PKU-ML/LookAheadLookAround.
[ "['Qi Zhang' 'Tianqi Du' 'Haotian Huang' 'Yifei Wang' 'Yisen Wang']" ]
null
null
2407.00943
null
null
http://arxiv.org/pdf/2407.00943v2
2024-07-02T06:16:36Z
2024-07-01T03:52:22Z
FedEx: Expediting Federated Learning over Heterogeneous Mobile Devices by Overlapping and Participant Selection
Training latency is critical for the success of numerous intriguing applications ignited by federated learning (FL) over heterogeneous mobile devices. By revolutionarily overlapping local gradient transmission with continuous local computing, FL can remarkably reduce its training latency over homogeneous clients, yet encounters severe model staleness, model drift, memory cost and straggler issues in heterogeneous environments. To unleash the full potential of overlapping, we propose FedEx, a novel \underline{fed}erated learning approach to \underline{ex}pedite FL training over mobile devices under data, computing and wireless heterogeneity. FedEx redefines the overlapping procedure with staleness ceilings to constrain memory consumption and make overlapping compatible with participation selection (PS) designs. Then, FedEx characterizes the PS utility function by considering the latency reduced by overlapping, and provides a holistic PS solution to address the straggler issue. FedEx also introduces a simple but effective metric to trigger overlapping, in order to avoid model drift. Experimental results show that compared with its peer designs, FedEx demonstrates substantial reductions in FL training latency over heterogeneous mobile devices with limited memory cost.
[ "['Jiaxiang Geng' 'Boyu Li' 'Xiaoqi Qin' 'Yixuan Li' 'Liang Li'\n 'Yanzhao Hou' 'Miao Pan']" ]
null
null
2407.00945
null
null
http://arxiv.org/pdf/2407.00945v1
2024-07-01T03:57:35Z
2024-07-01T03:57:35Z
Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs
The rapid advancement of large language models (LLMs) has led to architectures with billions to trillions of parameters, posing significant deployment challenges due to their substantial demands on memory, processing power, and energy consumption. Sparse Mixture-of-Experts (SMoE) architectures have emerged as a solution, activating only a subset of parameters per token, thereby achieving faster inference while maintaining performance. However, SMoE models still face limitations in broader deployment due to their large parameter counts and significant GPU memory requirements. In this work, we introduce a gradient-free evolutionary strategy named EEP (Efficient Expert Pruning) to enhance the pruning of experts in SMoE models. EEP relies solely on model inference (i.e., no gradient computation) and achieves greater sparsity while maintaining or even improving performance on downstream tasks. EEP can be used to reduce both the total number of experts (thus saving GPU memory) and the number of active experts (thus accelerating inference). For example, we demonstrate that pruning up to 75% of experts in Mixtral $8\times7$B-Instruct results in a substantial reduction in parameters with minimal performance loss. Remarkably, we observe improved performance on certain tasks, such as a significant increase in accuracy on the SQuAD dataset (from 53.4% to 75.4%), when pruning half of the experts. With these results, EEP not only lowers the barrier to deploying SMoE models, but also challenges the conventional understanding of model pruning by showing that fewer experts can lead to better task-specific performance without any fine-tuning. Code is available at https://github.com/imagination-research/EEP.
[ "['Enshu Liu' 'Junyi Zhu' 'Zinan Lin' 'Xuefei Ning' 'Matthew B. Blaschko'\n 'Shengen Yan' 'Guohao Dai' 'Huazhong Yang' 'Yu Wang']" ]
null
null
2407.00948
null
null
http://arxiv.org/pdf/2407.00948v1
2024-07-01T04:07:49Z
2024-07-01T04:07:49Z
The House Always Wins: A Framework for Evaluating Strategic Deception in LLMs
We propose a framework for evaluating strategic deception in large language models (LLMs). In this framework, an LLM acts as a game master in two scenarios: one with random game mechanics and another where it can choose between random or deliberate actions. As an example, we use blackjack because neither the action space nor the strategies involve deception. We benchmark Llama3-70B, GPT-4-Turbo, and Mixtral in blackjack, comparing outcomes against expected distributions in fair play to determine if LLMs develop strategies favoring the "house." Our findings reveal that the LLMs exhibit significant deviations from fair play when given implicit randomness instructions, suggesting a tendency towards strategic manipulation in ambiguous scenarios. However, when presented with an explicit choice, the LLMs largely adhere to fair play, indicating that the framing of instructions plays a crucial role in eliciting or mitigating potentially deceptive behaviors in AI systems.
[ "['Tanush Chopra' 'Michael Li']" ]
null
null
2407.00950
null
null
http://arxiv.org/pdf/2407.00950v1
2024-07-01T04:12:15Z
2024-07-01T04:12:15Z
Causal Bandits: The Pareto Optimal Frontier of Adaptivity, a Reduction to Linear Bandits, and Limitations around Unknown Marginals
In this work, we investigate the problem of adapting to the presence or absence of causal structure in multi-armed bandit problems. In addition to the usual reward signal, we assume the learner has access to additional variables, observed in each round after acting. When these variables $d$-separate the action from the reward, existing work in causal bandits demonstrates that one can achieve strictly better (minimax) rates of regret (Lu et al., 2020). Our goal is to adapt to this favorable "conditionally benign" structure, if it is present in the environment, while simultaneously recovering worst-case minimax regret, if it is not. Notably, the learner has no prior knowledge of whether the favorable structure holds. In this paper, we establish the Pareto optimal frontier of adaptive rates. We prove upper and matching lower bounds on the possible trade-offs in the performance of learning in conditionally benign and arbitrary environments, resolving an open question raised by Bilodeau et al. (2022). Furthermore, we are the first to obtain instance-dependent bounds for causal bandits, by reducing the problem to the linear bandit setting. Finally, we examine the common assumption that the marginal distributions of the post-action contexts are known and show that a nontrivial estimate is necessary for better-than-worst-case minimax rates.
[ "['Ziyi Liu' 'Idan Attias' 'Daniel M. Roy']" ]
null
null
2407.00952
null
null
http://arxiv.org/pdf/2407.00952v1
2024-07-01T04:13:25Z
2024-07-01T04:13:25Z
SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
The scalability of large language models (LLMs) in handling high-complexity models and large-scale datasets has led to tremendous successes in pivotal domains. While there is an urgent need to acquire more training data for LLMs, a concerning reality is the depletion of high-quality public datasets within a few years. In view of this, the federated learning (FL) LLM fine-tuning paradigm recently has been proposed to facilitate collaborative LLM fine-tuning on distributed private data, where multiple data owners collaboratively fine-tune a shared LLM without sharing raw data. However, the staggering model size of LLMs imposes heavy computing and communication burdens on clients, posing significant barriers to the democratization of the FL LLM fine-tuning paradigm. To address this issue, split learning (SL) has emerged as a promising solution by offloading the primary training workload to a server via model partitioning while exchanging activations and their gradients, which are much smaller than the entire LLM. Unfortunately, research on the SL LLM fine-tuning paradigm is still in its nascent stage. To fill this gap, in this paper, we propose the first SL LLM fine-tuning framework, named SplitLoRA. SplitLoRA is built on the split federated learning (SFL) framework, amalgamating the advantages of parallel training from FL and model splitting from SL and thus greatly enhancing the training efficiency. It is worth noting that SplitLoRA is the inaugural open-source benchmark for SL LLM fine-tuning, providing a foundation for research efforts dedicated to advancing SL LLM fine-tuning. Extensive simulations validate that SplitLoRA achieves target accuracy in significantly less time than state-of-the-art LLM fine-tuning frameworks, demonstrating the superior training performance of SplitLoRA. The project page is available at https://fduinc.github.io/splitlora/.
[ "['Zheng Lin' 'Xuanjie Hu' 'Yuxin Zhang' 'Zhe Chen' 'Zihan Fang'\n 'Xianhao Chen' 'Ang Li' 'Praneeth Vepakomma' 'Yue Gao']" ]
null
null
2407.00956
null
null
http://arxiv.org/pdf/2407.00956v1
2024-07-01T04:24:07Z
2024-07-01T04:24:07Z
A Closer Look at Deep Learning on Tabular Data
Tabular data is prevalent across various domains in machine learning. Although Deep Neural Network (DNN)-based methods have shown promising performance comparable to tree-based ones, in-depth evaluation of these methods is challenging due to varying performance ranks across diverse datasets. In this paper, we propose a comprehensive benchmark comprising 300 tabular datasets, covering a wide range of task types, size distributions, and domains. We perform an extensive comparison between state-of-the-art deep tabular methods and tree-based methods, revealing the average rank of all methods and highlighting the key factors that influence the success of deep tabular methods. Next, we analyze deep tabular methods based on their training dynamics, including changes in validation metrics and other statistics. For each dataset-method pair, we learn a mapping from both the meta-features of datasets and the first part of the validation curve to the final validation set performance and even the evolution of validation curves. This mapping extracts essential meta-features that influence prediction accuracy, helping the analysis of tabular methods from novel aspects. Based on the performance of all methods on this large benchmark, we identify two subsets of 45 datasets each. The first subset contains datasets that favor either tree-based methods or DNN-based methods, serving as effective analysis tools to evaluate strategies (e.g., attribute encoding strategies) for improving deep tabular models. The second subset contains datasets where the ranks of methods are consistent with the overall benchmark, acting as a probe for tabular analysis. These ``tiny tabular benchmarks'' will facilitate further studies on tabular data.
[ "['Han-Jia Ye' 'Si-Yang Liu' 'Hao-Run Cai' 'Qi-Le Zhou' 'De-Chuan Zhan']" ]
null
null
2407.00958
null
null
http://arxiv.org/pdf/2407.00958v1
2024-07-01T04:29:35Z
2024-07-01T04:29:35Z
Universal Approximation Theory: The basic theory for large language models
Language models have emerged as a critical area of focus in artificial intelligence, particularly with the introduction of groundbreaking innovations like ChatGPT. Large-scale Transformer networks have quickly become the leading approach for advancing natural language processing algorithms. Built on the Transformer architecture, these models enable interactions that closely mimic human communication and, equipped with extensive knowledge, can even assist in guiding human tasks. Despite their impressive capabilities and growing complexity, a key question remains: the theoretical foundations of large language models (LLMs). What makes the Transformer so effective for powering intelligent language applications, such as translation and coding? What underlies LLMs' ability for In-Context Learning (ICL)? How does the LoRA scheme enhance the fine-tuning of LLMs? And what supports the practicality of pruning LLMs? To address these critical questions and explore the technological strategies within LLMs, we leverage the Universal Approximation Theory (UAT) to offer a theoretical backdrop, shedding light on the mechanisms that underpin these advancements.
[ "['Wei Wang' 'Qing Li']" ]
null
null
2407.00966
null
null
http://arxiv.org/pdf/2407.00966v1
2024-07-01T04:58:36Z
2024-07-01T04:58:36Z
Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension
In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on $\mathbb{R}^d \times \{\pm 1\}$ -- is to output a hypothesis that is competitive (to within $\epsilon$) of the best fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes, we introduce a smoothed-analysis framework that requires a learner to compete only with the best classifier that is robust to small random Gaussian perturbation. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of $k$-halfspaces in time $k^{\mathrm{poly}(\frac{\log k}{\epsilon \gamma})}$ where $\gamma$ is the margin parameter. Before our work, the best-known runtime was exponential in $k$ (Arriaga and Vempala, 1999).
[ "['Gautam Chandrasekaran' 'Adam Klivans' 'Vasilis Kontonis' 'Raghu Meka'\n 'Konstantinos Stavropoulos']" ]
null
null
2407.00968
null
null
http://arxiv.org/pdf/2407.00968v1
2024-07-01T05:01:03Z
2024-07-01T05:01:03Z
How Does Overparameterization Affect Features?
Overparameterization, the condition where models have more parameters than necessary to fit their training loss, is a crucial factor for the success of deep learning. However, the characteristics of the features learned by overparameterized networks are not well understood. In this work, we explore this question by comparing models with the same architecture but different widths. We first examine the expressivity of the features of these models, and show that the feature space of overparameterized networks cannot be spanned by concatenating many underparameterized features, and vice versa. This reveals that both overparameterized and underparameterized networks acquire some distinctive features. We then evaluate the performance of these models, and find that overparameterized networks outperform underparameterized networks, even when many of the latter are concatenated. We corroborate these findings using a VGG-16 and ResNet18 on CIFAR-10 and a Transformer on the MNLI classification dataset. Finally, we propose a toy setting to explain how overparameterized networks can learn some important features that the underparameterized networks cannot learn.
[ "['Ahmet Cagri Duzgun' 'Samy Jelassi' 'Yuanzhi Li']" ]
null
null
2407.00978
null
null
http://arxiv.org/pdf/2407.00978v1
2024-07-01T05:28:40Z
2024-07-01T05:28:40Z
Hybrid RAG-empowered Multi-modal LLM for Secure Healthcare Data Management: A Diffusion-based Contract Theory Approach
Secure data management and effective data sharing have become paramount in the rapidly evolving healthcare landscape. The advancement of generative artificial intelligence has positioned Multi-modal Large Language Models (MLLMs) as crucial tools for managing healthcare data. MLLMs can support multi-modal inputs and generate diverse types of content by leveraging large-scale training on vast amounts of multi-modal data. However, critical challenges persist in developing medical MLLMs, including healthcare data security and freshness issues, affecting the output quality of MLLMs. In this paper, we propose a hybrid Retrieval-Augmented Generation (RAG)-empowered medical MLLMs framework for healthcare data management. This framework leverages a hierarchical cross-chain architecture to facilitate secure data training. Moreover, it enhances the output quality of MLLMs through hybrid RAG, which employs multi-modal metrics to filter various unimodal RAG results and incorporates these retrieval results as additional inputs to MLLMs. Additionally, we employ age of information to indirectly evaluate the data freshness impact of MLLMs and utilize contract theory to incentivize healthcare data holders to share fresh data, mitigating information asymmetry in data sharing. Finally, we utilize a generative diffusion model-based reinforcement learning algorithm to identify the optimal contract for efficient data sharing. Numerical results demonstrate the effectiveness of the proposed schemes, which achieve secure and efficient healthcare data management.
[ "['Cheng Su' 'Jinbo Wen' 'Jiawen Kang' 'Yonghua Wang' 'Hudan Pan'\n 'M. Shamim Hossain']" ]
null
null
2407.00996
null
null
http://arxiv.org/pdf/2407.00996v1
2024-07-01T06:22:38Z
2024-07-01T06:22:38Z
Can Small Language Models Learn, Unlearn, and Retain Noise Patterns?
Small Language Models (SLMs) are generally considered to be more compact versions of large language models (LLMs), typically having fewer than 7 billion parameters. This study investigates the ability of small language models to learn, retain, and subsequently eliminate noise that is typically not found on the internet, where most pretraining datasets are sourced. For this, four pre-trained SLMs were utilized: Olmo 1B, Qwen1.5 1.8B, Gemma 2B, and Phi2 2.7B. The models were instruction-tuned without noise and tested for task execution with in-context learning. Afterward, noise patterns were introduced to evaluate the models' learning and unlearning capabilities. We evaluated the models' performance at various training levels. Phi consistently excelled with word-level noise but performed the worst with character-level noise. Despite being the smallest with approximately 1 billion parameters, Olmo performed consistently well on tasks.
[ "['Nicy Scaria' 'Silvester John Joseph Kennedy' 'Deepak Subramani']" ]
null
null
2407.01001
null
null
http://arxiv.org/pdf/2407.01001v1
2024-07-01T06:31:41Z
2024-07-01T06:31:41Z
Flood Prediction Using Classical and Quantum Machine Learning Models
This study investigates the potential of quantum machine learning to improve flood forecasting. We focus on daily flood events along Germany's Wupper River in 2023. Our approach combines classical machine learning techniques with QML techniques. This hybrid model leverages quantum properties like superposition and entanglement to achieve better accuracy and efficiency. Classical and QML models are compared based on training time, accuracy, and scalability. Results show that QML models offer competitive training times and improved prediction accuracy. This research signifies a step towards utilizing quantum technologies for climate change adaptation. We emphasize collaboration and continuous innovation to implement this model in real-world flood management, ultimately enhancing global resilience against floods.
[ "['Marek Grzesiak' 'Param Thakkar']" ]
null
null
2407.01004
null
null
http://arxiv.org/pdf/2407.01004v1
2024-07-01T06:36:27Z
2024-07-01T06:36:27Z
CURLS: Causal Rule Learning for Subgroups with Significant Treatment Effect
In causal inference, estimating heterogeneous treatment effects (HTE) is critical for identifying how different subgroups respond to interventions, with broad applications in fields such as precision medicine and personalized advertising. Although HTE estimation methods aim to improve accuracy, how to provide explicit subgroup descriptions remains unclear, hindering data interpretation and strategic intervention management. In this paper, we propose CURLS, a novel rule learning method leveraging HTE, which can effectively describe subgroups with significant treatment effects. Specifically, we frame causal rule learning as a discrete optimization problem, finely balancing treatment effect with variance and considering the rule interpretability. We design an iterative procedure based on the minorize-maximization algorithm and solve a submodular lower bound as an approximation for the original. Quantitative experiments and qualitative case studies verify that compared with state-of-the-art methods, CURLS can find subgroups where the estimated and true effects are 16.1% and 13.8% higher and the variance is 12.0% smaller, while maintaining similar or better estimation accuracy and rule interpretability. Code is available at https://osf.io/zwp2k/.
[ "['Jiehui Zhou' 'Linxiao Yang' 'Xingyu Liu' 'Xinyue Gu' 'Liang Sun'\n 'Wei Chen']" ]
null
null
2407.01005
null
null
http://arxiv.org/abs/2407.01005v1
2024-07-01T06:36:40Z
2024-07-01T06:36:40Z
MARLP: Time-series Forecasting Control for Agricultural Managed Aquifer Recharge
The rapid decline in groundwater around the world poses a significant challenge to sustainable agriculture. To address this issue, agricultural managed aquifer recharge (Ag-MAR) is proposed to recharge the aquifer by artificially flooding agricultural lands using surface water. Ag-MAR requires a carefully selected flooding schedule to avoid affecting the oxygen absorption of crop roots. However, current Ag-MAR scheduling does not take into account complex environmental factors such as weather and soil oxygen, resulting in crop damage and insufficient recharging amounts. This paper proposes MARLP, the first end-to-end data-driven control system for Ag-MAR. We first formulate Ag-MAR as an optimization problem. To that end, we analyze four-year in-field datasets, which reveal the multi-periodicity feature of the soil oxygen level trends and the opportunity to use external weather forecasts and flooding proposals as exogenous clues for soil oxygen prediction. Then, we design a two-stage forecasting framework. In the first stage, it extracts both the cross-variate dependency and the periodic patterns from historical data to conduct preliminary forecasting. In the second stage, it uses weather-soil and flooding-soil causality to facilitate an accurate prediction of soil oxygen levels. Finally, we conduct model predictive control (MPC) for Ag-MAR flooding. To address the challenge of large action spaces, we devise a heuristic planning module to reduce the number of flooding proposals to enable the search for optimal solutions. Real-world experiments show that MARLP reduces the oxygen deficit ratio by 86.8% while improving the recharging amount in unit time by 35.8%, compared with the previous four years.
[ "['Yuning Chen' 'Kang Yang' 'Zhiyu An' 'Brady Holder' 'Luke Paloutzian'\n 'Khaled Bali' 'Wan Du']" ]
null
null
2407.01012
null
null
http://arxiv.org/pdf/2407.01012v3
2024-07-03T05:36:00Z
2024-07-01T06:52:34Z
Swish-T : Enhancing Swish Activation with Tanh Bias for Improved Neural Network Performance
We propose the Swish-T family, an enhancement of the existing non-monotonic activation function Swish. Swish-T is defined by adding a Tanh bias to the original Swish function. This modification creates a family of Swish-T variants, each designed to excel in different tasks, showcasing specific advantages depending on the application context. The Tanh bias allows for broader acceptance of negative values during initial training stages, offering a smoother non-monotonic curve than the original Swish. We ultimately propose the Swish-T$_{\textbf{C}}$ function, while Swish-T and Swish-T$_{\textbf{B}}$, byproducts of Swish-T$_{\textbf{C}}$, also demonstrate satisfactory performance. Furthermore, our ablation study shows that using Swish-T$_{\textbf{C}}$ as a non-parametric function can still achieve high performance. The superiority of the Swish-T family has been empirically demonstrated across various models and benchmark datasets, including MNIST, Fashion MNIST, SVHN, CIFAR-10, and CIFAR-100. The code is publicly available at https://github.com/ictseoyoungmin/Swish-T-pytorch.
[ "['Youngmin Seo' 'Jinha Kim' 'Unsang Park']" ]
null
null
2407.01015
null
null
http://arxiv.org/pdf/2407.01015v1
2024-07-01T07:00:44Z
2024-07-01T07:00:44Z
Bayesian Entropy Neural Networks for Physics-Aware Prediction
This paper addresses the need for deep learning models to integrate well-defined constraints into their outputs, driven by their application in surrogate models, learning with limited data and partial information, and scenarios requiring flexible model behavior to incorporate non-data sample information. We introduce Bayesian Entropy Neural Networks (BENN), a framework grounded in Maximum Entropy (MaxEnt) principles, designed to impose constraints on Bayesian Neural Network (BNN) predictions. BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output. To achieve simultaneous uncertainty quantification and constraint satisfaction, we employ the method of multipliers approach. This allows for the concurrent estimation of neural network parameters and the Lagrangian multipliers associated with the constraints. Our experiments, spanning diverse applications such as beam deflection modeling and microstructure generation, demonstrate the effectiveness of BENN. The results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
[ "['Rahul Rathnakumar' 'Jiayu Huang' 'Hao Yan' 'Yongming Liu']" ]
null
null
2407.01023
null
null
http://arxiv.org/pdf/2407.01023v1
2024-07-01T07:13:14Z
2024-07-01T07:13:14Z
DistML.js: Installation-free Distributed Deep Learning Framework for Web Browsers
We present "DistML.js", a library designed for training and inference of machine learning models within web browsers. Not only does DistML.js facilitate model training on local devices, but it also supports distributed learning through communication with servers. Its design and define-by-run API for deep learning model construction resemble PyTorch, thereby reducing the learning curve for prototyping. Matrix computations involved in model training and inference are executed on the backend utilizing WebGL, enabling high-speed calculations. We provide a comprehensive explanation of DistML.js's design, API, and implementation, alongside practical applications including data parallelism in learning. The source code is publicly available at https://github.com/mil-tokyo/distmljs.
[ "['Masatoshi Hidaka' 'Tomohiro Hashimoto' 'Yuto Nishizawa' 'Tatsuya Harada']" ]
null
null
2407.01031
null
null
http://arxiv.org/pdf/2407.01031v1
2024-07-01T07:26:56Z
2024-07-01T07:26:56Z
PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs
Recent advancements in large language models (LLMs) have indeed showcased their impressive capabilities. On mobile devices, the wealth of valuable, non-public data generated daily holds great promise for locally fine-tuning personalized LLMs, while maintaining privacy through on-device processing. However, the constraints of mobile device resources pose challenges to direct on-device LLM fine-tuning, mainly due to the memory-intensive nature of derivative-based optimization required for saving gradients and optimizer states. To tackle this, we propose employing derivative-free optimization techniques to enable on-device fine-tuning of LLM, even on memory-limited mobile devices. Empirical results demonstrate that the RoBERTa-large model and OPT-1.3B can be fine-tuned locally on the OPPO Reno 6 smartphone using around 4GB and 6.5GB of memory respectively, using derivative-free optimization techniques. This highlights the feasibility of on-device LLM fine-tuning on mobile devices, paving the way for personalized LLMs on resource-constrained devices while safeguarding data privacy.
[ "['Dan Peng' 'Zhihui Fu' 'Jun Wang']" ]
null
null
2407.01032
null
null
http://arxiv.org/pdf/2407.01032v1
2024-07-01T07:32:58Z
2024-07-01T07:32:58Z
Overcoming Common Flaws in the Evaluation of Selective Classification Systems
Selective Classification, wherein models can reject low-confidence predictions, promises reliable translation of machine-learning based classification systems to real-world scenarios such as clinical diagnostics. While current evaluation of these systems typically assumes fixed working points based on pre-defined rejection thresholds, methodological progress requires benchmarking the general performance of systems akin to the $\mathrm{AUROC}$ in standard classification. In this work, we define 5 requirements for multi-threshold metrics in selective classification regarding task alignment, interpretability, and flexibility, and show how current approaches fail to meet them. We propose the Area under the Generalized Risk Coverage curve ($\mathrm{AUGRC}$), which meets all requirements and can be directly interpreted as the average risk of undetected failures. We empirically demonstrate the relevance of $\mathrm{AUGRC}$ on a comprehensive benchmark spanning 6 data sets and 13 confidence scoring functions. We find that the proposed metric substantially changes metric rankings on 5 out of the 6 data sets.
[ "['Jeremias Traub' 'Till J. Bungert' 'Carsten T. Lüth'\n 'Michael Baumgartner' 'Klaus H. Maier-Hein' 'Lena Maier-Hein'\n 'Paul F Jaeger']" ]
null
null
2407.01033
null
null
http://arxiv.org/pdf/2407.01033v1
2024-07-01T07:33:00Z
2024-07-01T07:33:00Z
Neural Networks Trained by Weight Permutation are Universal Approximators
The universal approximation property is fundamental to the success of neural networks, and has traditionally been achieved by training networks without any constraints on their parameters. However, recent experimental research proposed a novel permutation-based training method, which exhibited a desired classification performance without modifying the exact weight values. In this paper, we provide a theoretical guarantee of this permutation training method by proving its ability to guide a ReLU network to approximate one-dimensional continuous functions. Our numerical results further validate this method's efficiency in regression tasks with various initializations. The notable observations during weight permutation suggest that permutation training can provide an innovative tool for describing network learning behavior.
[ "['Yongqiang Cai' 'Gaohang Chen' 'Zhonghua Qiao']" ]
null
null
2407.01036
null
null
http://arxiv.org/pdf/2407.01036v1
2024-07-01T07:40:08Z
2024-07-01T07:40:08Z
Ranking by Lifts: A Cost-Benefit Approach to Large-Scale A/B Tests
A/B testers conducting large-scale tests prioritize lifts and want to be able to control false rejections of the null. This work develops a decision-theoretic framework for maximizing profits subject to false discovery rate (FDR) control. We build an empirical Bayes solution for the problem via the greedy knapsack approach. We derive an oracle rule based on ranking the ratio of expected lifts and the cost of wrong rejections using the local false discovery rate (lfdr) statistic. Our oracle decision rule is valid and optimal for large-scale tests. Further, we establish asymptotic validity for the data-driven procedure and demonstrate finite-sample validity in experimental studies. We also demonstrate the merit of the proposed method over other FDR control methods. Finally, we discuss an application to actual Optimizely experiments.
[ "['Pallavi Basu' 'Ron Berman']" ]
null
null
2407.01049
null
null
http://arxiv.org/pdf/2407.01049v1
2024-07-01T07:56:48Z
2024-07-01T07:56:48Z
SE(3)-Hyena Operator for Scalable Equivariant Learning
Modeling global geometric context while maintaining equivariance is crucial for accurate predictions in many fields such as biology, chemistry, or vision. Yet, this is challenging due to the computational demands of processing high-dimensional data at scale. Existing approaches such as equivariant self-attention or distance-based message passing suffer from quadratic complexity with respect to sequence length, while localized methods sacrifice global information. Inspired by the recent success of state-space and long-convolutional models, in this work, we introduce the SE(3)-Hyena operator, an equivariant long-convolutional model based on the Hyena operator. SE(3)-Hyena captures global geometric context at sub-quadratic complexity while maintaining equivariance to rotations and translations. Evaluated on equivariant associative recall and n-body modeling, SE(3)-Hyena matches or outperforms equivariant self-attention while requiring significantly less memory and computational resources for long sequences. Our model processes the geometric context of 20k tokens 3.5x faster than the equivariant transformer and allows a 175x longer context within the same memory budget.
[ "['Artem Moskalev' 'Mangal Prakash' 'Rui Liao' 'Tommaso Mansi']" ]
null
null
2407.01054
null
null
http://arxiv.org/pdf/2407.01054v1
2024-07-01T08:07:02Z
2024-07-01T08:07:02Z
Joint Pruning and Channel-wise Mixed-Precision Quantization for Efficient Deep Neural Networks
The resource requirements of deep neural networks (DNNs) pose significant challenges to their deployment on edge devices. Common approaches to address this issue are pruning and mixed-precision quantization, which lead to latency and memory occupation improvements. These optimization techniques are usually applied independently. We propose a novel methodology to apply them jointly via a lightweight gradient-based search, and in a hardware-aware manner, greatly reducing the time required to generate Pareto-optimal DNNs in terms of accuracy versus cost (i.e., latency or memory). We test our approach on three edge-relevant benchmarks, namely CIFAR-10, Google Speech Commands, and Tiny ImageNet. When targeting the optimization of the memory footprint, we are able to achieve a size reduction of 47.50% and 69.54% at iso-accuracy with the baseline networks with all weights quantized at 8 and 2-bit, respectively. Our method surpasses a previous state-of-the-art approach with up to 56.17% size reduction at iso-accuracy. With respect to the sequential application of state-of-the-art pruning and mixed-precision optimizations, we obtain comparable or superior results, but with a significantly lowered training time. In addition, we show how well-tailored cost models can improve the cost versus accuracy trade-offs when targeting specific hardware for deployment.
[ "['Beatrice Alessandra Motetti' 'Matteo Risso' 'Alessio Burrello'\n 'Enrico Macii' 'Massimo Poncino' 'Daniele Jahier Pagliari']" ]
null
null
2407.01065
null
null
http://arxiv.org/pdf/2407.01065v1
2024-07-01T08:16:25Z
2024-07-01T08:16:25Z
Improve ROI with Causal Learning and Conformal Prediction
In the commercial sphere, such as operations and maintenance, advertising, and marketing recommendations, intelligent decision-making utilizing data mining and neural network technologies is crucial, especially in resource allocation to optimize ROI. This study delves into the Cost-aware Binary Treatment Assignment Problem (C-BTAP) across different industries, with a focus on the state-of-the-art Direct ROI Prediction (DRP) method. However, the DRP model confronts issues like covariate shift and insufficient training data, hindering its real-world effectiveness. Addressing these challenges is essential for ensuring dependable and robust predictions in varied operational contexts. This paper presents a robust Direct ROI Prediction (rDRP) method, designed to address challenges in real-world deployment of neural network-based uplift models, particularly under conditions of covariate shift and insufficient training data. The rDRP method, enhancing the standard DRP model, does not alter the model's structure or require retraining. It utilizes conformal prediction and Monte Carlo dropout for interval estimation, adapting to model uncertainty and data distribution shifts. A heuristic calibration method, inspired by a Kaggle competition, combines point and interval estimates. The effectiveness of these approaches is validated through offline tests and online A/B tests in various settings, demonstrating significant improvements in target rewards compared to the state-of-the-art method.
[ "['Meng Ai' 'Zhuo Chen' 'Jibin Wang' 'Jing Shang' 'Tao Tao' 'Zhen Li']" ]
null
null
2407.01067
null
null
http://arxiv.org/pdf/2407.01067v1
2024-07-01T08:17:19Z
2024-07-01T08:17:19Z
Human-like object concept representations emerge naturally in multimodal large language models
The conceptualization and categorization of natural objects in the human mind have long intrigued cognitive scientists and neuroscientists, offering crucial insights into human perception and cognition. Recently, the rapid development of Large Language Models (LLMs) has raised the intriguing question of whether these models can also develop human-like object representations through exposure to vast amounts of linguistic and multimodal data. In this study, we combined behavioral and neuroimaging analysis methods to uncover how the object concept representations in LLMs correlate with those of humans. By collecting large-scale datasets of 4.7 million triplet judgments from an LLM and a Multimodal LLM (MLLM), we were able to derive low-dimensional embeddings that capture the underlying similarity structure of 1,854 natural objects. The resulting 66-dimensional embeddings were found to be highly stable and predictive, and exhibited semantic clustering akin to human mental representations. Interestingly, the interpretability of the dimensions underlying these embeddings suggests that the LLM and MLLM have developed human-like conceptual representations of natural objects. Further analysis demonstrated strong alignment between the identified model embeddings and neural activity patterns in many functionally defined brain ROIs (e.g., EBA, PPA, RSC, and FFA). This provides compelling evidence that the object representations in LLMs, while not identical to those of humans, share fundamental commonalities that reflect key schemas of human conceptual knowledge. This study advances our understanding of machine intelligence and informs the development of more human-like artificial cognitive systems.
[ "['Changde Du' 'Kaicheng Fu' 'Bincheng Wen' 'Yi Sun' 'Jie Peng' 'Wei Wei'\n 'Ying Gao' 'Shengpei Wang' 'Chuncheng Zhang' 'Jinpeng Li' 'Shuang Qiu'\n 'Le Chang' 'Huiguang He']" ]
null
null
2407.01079
null
null
http://arxiv.org/pdf/2407.01079v1
2024-07-01T08:34:40Z
2024-07-01T08:34:40Z
On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs)
We investigate the statistical and computational limits of latent \textbf{Di}ffusion \textbf{T}ransformers (\textbf{DiT}s) under the low-dimensional linear latent space assumption. Statistically, we study the universal approximation and sample complexity of the DiTs score function, as well as the distribution recovery property of the initial data. Specifically, under mild data assumptions, we derive an approximation error bound for the score network of latent DiTs, which is sub-linear in the latent space dimension. Additionally, we derive the corresponding sample complexity bound and show that the data distribution generated from the estimated score function converges toward a proximate area of the original one. Computationally, we characterize the hardness of both forward inference and backward computation of latent DiTs, assuming the Strong Exponential Time Hypothesis (SETH). For forward inference, we identify efficient criteria for all possible latent DiTs inference algorithms and showcase our theory by pushing the efficiency toward almost-linear time inference. For backward computation, we leverage the low-rank structure within the gradient computation of DiTs training for possible algorithmic speedup. Specifically, we show that such speedup achieves almost-linear time latent DiTs training by casting the DiTs gradient as a series of chained low-rank approximations with bounded error. Under the low-dimensional assumption, we show that the convergence rate and the computational efficiency are both dominated by the dimension of the subspace, suggesting that latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.
[ "['Jerry Yao-Chieh Hu' 'Weimin Wu' 'Zhuoru Li' 'Zhao Song' 'Han Liu']" ]
null
null
2407.01085
null
null
http://arxiv.org/pdf/2407.01085v1
2024-07-01T08:37:41Z
2024-07-01T08:37:41Z
Rethinking LLM-based Preference Evaluation
Recently, large language model (LLM)-based preference evaluation has been widely adopted to compare pairs of model responses. However, a severe bias towards lengthy responses has been observed, raising concerns about the reliability of this evaluation method. In this work, we designed a series of controlled experiments to study the major impacting factors of the metric of LLM-based preference evaluation, i.e., win rate, and conclude that the win rate is affected by two axes of model response: desirability and information mass, where the former is length-independent and related to trustworthiness, and the latter is length-dependent and can be represented by conditional entropy. We find that length impacts the existing evaluations by influencing information mass. However, a reliable evaluation metric should not only assess content quality but also ensure that the assessment is not confounded by extraneous factors such as response length. Therefore, we propose a simple yet effective adjustment, AdapAlpaca, to the existing practice of win rate measurement. Specifically, by adjusting the lengths of reference answers to match the test model's answers within the same interval, we debias information mass relative to length, ensuring a fair model evaluation.
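The AdapAlpaca adjustment can be illustrated with a small helper that draws the reference answer from the same length interval as the test model's answer; the bucketing scheme and function names below are hypothetical.

```python
def match_reference_by_length(test_answer, reference_pool, bin_width=50):
    """Pick a reference answer whose word count falls in the same interval
    as the test model's answer, debiasing win rate with respect to length."""
    n = len(test_answer.split())
    bucket = n // bin_width
    same_bin = [r for r in reference_pool if len(r.split()) // bin_width == bucket]
    if same_bin:
        return same_bin[0]
    # Fall back to the closest-length reference if the bucket is empty.
    return min(reference_pool, key=lambda r: abs(len(r.split()) - n))
```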
[ "['Zhengyu Hu' 'Linxin Song' 'Jieyu Zhang' 'Zheyuan Xiao' 'Jingang Wang'\n 'Zhenyu Chen' 'Jieyu Zhao' 'Hui Xiong']" ]
null
null
2407.01092
null
null
http://arxiv.org/pdf/2407.01092v1
2024-07-01T08:49:33Z
2024-07-01T08:49:33Z
Kolmogorov-Arnold Convolutions: Design Principles and Empirical Studies
The emergence of Kolmogorov-Arnold Networks (KANs) has sparked significant interest and debate within the scientific community. This paper explores the application of KANs in the domain of computer vision (CV). We examine the convolutional version of KANs, considering various nonlinearity options beyond splines, such as Wavelet transforms and a range of polynomials. We propose a parameter-efficient design for Kolmogorov-Arnold convolutional layers and a parameter-efficient finetuning algorithm for pre-trained KAN models, as well as KAN convolutional versions of self-attention and focal modulation layers. We provide empirical evaluations conducted on MNIST, CIFAR10, CIFAR100, Tiny ImageNet, ImageNet1k, and HAM10000 datasets for image classification tasks. Additionally, we explore segmentation tasks, proposing U-Net-like architectures with KAN convolutions, and achieving state-of-the-art results on BUSI, GlaS, and CVC datasets. We summarize all of our findings in a preliminary design guide for KAN convolutional models in computer vision tasks. Furthermore, we investigate regularization techniques for KANs. All experimental code and implementations of the convolutional layers and models, including ImageNet1k pre-trained weights, are available on GitHub at https://github.com/IvanDrokin/torch-conv-kan
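To give a flavor of the idea, here is a simplified, hypothetical sketch of a Kolmogorov-Arnold-style convolution using a polynomial basis (one of the nonlinearity options the paper considers); it is not the repository's implementation.

```python
import torch
import torch.nn as nn

class PolyKANConv2d(nn.Module):
    """Sketch of a KAN convolution: each tap applies a learned univariate
    polynomial rather than a single linear weight. One conv kernel per
    degree realizes sum_d W_d * x^d per tap."""
    def __init__(self, in_ch, out_ch, k=3, degree=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=(d == 0))
            for d in range(degree + 1))

    def forward(self, x):
        x = torch.tanh(x)  # bound inputs so higher powers stay well-conditioned
        return sum(conv(x ** d) for d, conv in enumerate(self.convs))
```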
[ "['Ivan Drokin']" ]
null
null
2407.01100
null
null
http://arxiv.org/pdf/2407.01100v1
2024-07-01T09:06:57Z
2024-07-01T09:06:57Z
Eliminating Position Bias of Language Models: A Mechanistic Approach
Position bias has proven to be a prevalent issue of modern language models (LMs), where the models prioritize content based on its position within the given context. This bias often leads to unexpected model failures and hurts performance, robustness, and reliability across various applications. Our mechanistic analysis attributes the position bias to two components employed in nearly all state-of-the-art LMs: causal attention and relative positional encodings. Specifically, we find that causal attention generally causes models to favor distant content, while relative positional encodings like RoPE prefer nearby ones based on the analysis of retrieval-augmented question answering (QA). Further, our empirical study on object detection reveals that position bias is also present in vision-language models (VLMs). Based on the above analyses, we propose to ELIMINATE position bias caused by different input segment orders (e.g., options in LM-as-a-judge, retrieved documents in QA) in a TRAINING-FREE ZERO-SHOT manner. Our method changes the causal attention to bidirectional attention between segments and utilizes model attention values to decide the relative orders of segments instead of using the order provided in input prompts, therefore enabling Position-INvariant inferencE (PINE) at the segment level. By eliminating position bias, models achieve better performance and reliability in downstream tasks where position bias widely exists, such as LM-as-a-judge and retrieval-augmented QA. Notably, PINE is especially useful when adapting LMs for evaluating reasoning pairs: it consistently provides 8 to 10 percentage points performance gains in most cases, and makes Llama-3-70B-Instruct perform even better than GPT-4-0125-preview on the RewardBench reasoning subset.
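The masking half of the method can be sketched: a system prefix stays causal while candidate segments attend to one another bidirectionally. This omits PINE's attention-value-based reordering of segments and is illustrative only.

```python
import torch

def segment_attention_mask(prefix_len, seg_lens):
    """Boolean attention mask (True = may attend): causal within the prefix,
    bidirectional among all segment tokens. A simplified sketch."""
    total = prefix_len + sum(seg_lens)
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))  # causal base
    # Segment tokens see all segment tokens regardless of order in the prompt;
    # the prefix remains strictly causal.
    mask[prefix_len:, prefix_len:] = True
    return mask
```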
[ "['Ziqi Wang' 'Hanlin Zhang' 'Xiner Li' 'Kuan-Hao Huang' 'Chi Han'\n 'Shuiwang Ji' 'Sham M. Kakade' 'Hao Peng' 'Heng Ji']" ]
null
null
2407.01110
null
null
http://arxiv.org/pdf/2407.01110v1
2024-07-01T09:19:50Z
2024-07-01T09:19:50Z
SecGenAI: Enhancing Security of Cloud-based Generative AI Applications within Australian Critical Technologies of National Interest
The rapid advancement of Generative AI (GenAI) technologies offers transformative opportunities within Australia's critical technologies of national interest while introducing unique security challenges. This paper presents SecGenAI, a comprehensive security framework for cloud-based GenAI applications, with a focus on Retrieval-Augmented Generation (RAG) systems. SecGenAI addresses functional, infrastructure, and governance requirements, integrating end-to-end security analysis to generate specifications emphasizing data privacy, secure deployment, and shared responsibility models. Aligned with Australian Privacy Principles, AI Ethics Principles, and guidelines from the Australian Cyber Security Centre and Digital Transformation Agency, SecGenAI mitigates threats such as data leakage, adversarial attacks, and model inversion. The framework's novel approach combines advanced machine learning techniques with robust security measures, ensuring compliance with Australian regulations while enhancing the reliability and trustworthiness of GenAI systems. This research contributes to the field of intelligent systems by providing actionable strategies for secure GenAI implementation in industry, fostering innovation in AI applications, and safeguarding national interests.
[ "['Christoforus Yoga Haryanto' 'Minh Hieu Vu' 'Trung Duc Nguyen'\n 'Emily Lomempow' 'Yulia Nurliana' 'Sona Taheri']" ]
null
null
2407.01111
null
null
http://arxiv.org/pdf/2407.01111v1
2024-07-01T09:20:26Z
2024-07-01T09:20:26Z
Proximity Matters: Local Proximity Preserved Balancing for Treatment Effect Estimation
Heterogeneous treatment effect (HTE) estimation from observational data poses significant challenges due to treatment selection bias. Existing methods address this bias by minimizing distribution discrepancies between treatment groups in latent space, focusing on global alignment. However, the valuable property of local proximity, whereby similar units exhibit similar outcomes, is often overlooked. In this study, we propose Proximity-aware Counterfactual Regression (PCR), which exploits proximity for representation balancing in HTE estimation. Specifically, we introduce a local proximity preservation regularizer based on optimal transport to capture local proximity in the discrepancy calculation. Furthermore, to overcome the curse of dimensionality, which renders discrepancy estimation ineffective and is exacerbated by the limited data available for HTE estimation, we develop an informative subspace projector that trades off minimal distance precision for improved sample complexity. Extensive experiments demonstrate that PCR accurately matches units across different treatment groups, effectively mitigates treatment selection bias, and significantly outperforms competitors. Code is available at https://anonymous.4open.science/status/ncr-B697.
[ "['Hao Wang' 'Zhichao Chen' 'Yuan Shen' 'Jiajun Fan' 'Zhaoran Liu'\n 'Degui Yang' 'Xinggao Liu' 'Haoxuan Li']" ]
null
null
2407.01115
null
null
http://arxiv.org/pdf/2407.01115v1
2024-07-01T09:24:04Z
2024-07-01T09:24:04Z
Enabling Mixed Effects Neural Networks for Diverse, Clustered Data Using Monte Carlo Methods
Neural networks often assume independence among input data samples, disregarding correlations arising from inherent clustering patterns in real-world datasets (e.g., due to different sites or repeated measurements). Recently, mixed effects neural networks (MENNs) which separate cluster-specific 'random effects' from cluster-invariant 'fixed effects' have been proposed to improve generalization and interpretability for clustered data. However, existing methods only allow for approximate quantification of cluster effects and are limited to regression and binary targets with only one clustering feature. We present MC-GMENN, a novel approach employing Monte Carlo methods to train Generalized Mixed Effects Neural Networks. We empirically demonstrate that MC-GMENN outperforms existing mixed effects deep learning models in terms of generalization performance, time complexity, and quantification of inter-cluster variance. Additionally, MC-GMENN is applicable to a wide range of datasets, including multi-class classification tasks with multiple high-cardinality categorical features. For these datasets, we show that MC-GMENN outperforms conventional encoding and embedding methods, simultaneously offering a principled methodology for interpreting the effects of clustering patterns.
[ "['Andrej Tschalzev' 'Paul Nitschke' 'Lukas Kirchdorfer' 'Stefan Lüdtke'\n 'Christian Bartelt' 'Heiner Stuckenschmidt']" ]
null
null
2407.01122
null
null
http://arxiv.org/pdf/2407.01122v1
2024-07-01T09:31:03Z
2024-07-01T09:31:03Z
Calibrated Large Language Models for Binary Question Answering
Quantifying the uncertainty of predictions made by large language models (LLMs) in binary text classification tasks remains a challenge. Calibration, in the context of LLMs, refers to the alignment between the model's predicted probabilities and the actual correctness of its predictions. A well-calibrated model should produce probabilities that accurately reflect the likelihood of its predictions being correct. We propose a novel approach that utilizes the inductive Venn–Abers predictor (IVAP) to calibrate the probabilities associated with the output tokens corresponding to the binary labels. Our experiments on the BoolQ dataset using the Llama 2 model demonstrate that IVAP consistently outperforms the commonly used temperature scaling method for various label token choices, achieving well-calibrated probabilities while maintaining high predictive quality. Our findings contribute to the understanding of calibration techniques for LLMs and provide a practical solution for obtaining reliable uncertainty estimates in binary question answering tasks, enhancing the interpretability and trustworthiness of LLM predictions.
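The IVAP construction itself is standard and can be sketched with scikit-learn's isotonic regression: the test score is calibrated twice, once assuming label 0 and once label 1, and the resulting interval is merged into a single probability. Using LLM label-token probabilities as the scores, as assumed below, follows the paper's setup.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def ivap_predict(cal_scores, cal_labels, test_score):
    """Inductive Venn-Abers predictor: returns the interval [p0, p1] and the
    standard merged probability p1 / (1 - p0 + p1). Sketch; `cal_scores`
    could be, e.g., the model's probability of the "yes" token."""
    p = []
    for y in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(np.append(cal_scores, test_score), np.append(cal_labels, y))
        p.append(iso.predict([test_score])[0])
    p0, p1 = p
    return p0, p1, p1 / (1.0 - p0 + p1)
```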
[ "['Patrizio Giovannotti' 'Alexander Gammerman']" ]
null
null
2407.01154
null
null
http://arxiv.org/pdf/2407.01154v1
2024-07-01T10:22:16Z
2024-07-01T10:22:16Z
Wind Estimation in Unmanned Aerial Vehicles with Causal Machine Learning
In this work we demonstrate the possibility of estimating the wind environment of a UAV without specialised sensors, using only the UAV's trajectory, by applying a causal machine learning approach. We implement the causal curiosity method, which combines machine learning time series classification and clustering with a causal framework. We analyse three distinct wind environments: constant wind, shear wind, and turbulence, and explore different optimisation strategies for optimal UAV manoeuvres to estimate the wind conditions. The proposed approach can be used to design optimal trajectories in challenging weather conditions, and to avoid specialised sensors that add to the UAV's weight and compromise its functionality.
[ "['Abdulaziz Alwalan' 'Miguel Arana-Catania']" ]
null
null
2407.01155
null
null
http://arxiv.org/pdf/2407.01155v1
2024-07-01T10:23:14Z
2024-07-01T10:23:14Z
CPT: Consistent Proxy Tuning for Black-box Optimization
Black-box tuning has attracted recent attention because the structure and inner parameters of advanced proprietary models are not accessible. Proxy-tuning provides a test-time output adjustment for tuning black-box language models. It applies the difference of the output logits before and after tuning a smaller white-box "proxy" model to improve the black-box model. However, this technique serves only as a decoding-time algorithm, leading to an inconsistency between training and testing which potentially limits overall performance. To address this problem, we introduce Consistent Proxy Tuning (CPT), a simple yet effective black-box tuning method. Different from Proxy-tuning, CPT additionally exploits the frozen large black-box model and another frozen small white-box model, ensuring consistency between the training-stage optimization objective and test-time proxies. This consistency benefits Proxy-tuning and enhances model performance. Note that our method focuses solely on logit-level computation, which makes it model-agnostic and applicable to any task involving logit classification. Extensive experimental results demonstrate the superiority of our CPT in both black-box tuning of Large Language Models (LLMs) and Vision-Language Models (VLMs) across various datasets. The code is available at https://github.com/chunmeifeng/CPT.
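The logit arithmetic underlying proxy tuning, which CPT applies consistently at both training and test time, reduces to a one-line combination (a sketch assuming all three models share a vocabulary):

```python
import torch

def proxy_adjusted_logits(black_box_logits, tuned_proxy_logits, frozen_proxy_logits):
    """Shift the black-box model's logits by the tuned-vs-frozen proxy
    difference; CPT optimizes the proxy under this same combination."""
    return black_box_logits + (tuned_proxy_logits - frozen_proxy_logits)

# At each decoding step (illustrative):
# next_token = proxy_adjusted_logits(lb, lt, lf).argmax(dim=-1)
```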
[ "['Yuanyang He' 'Zitong Huang' 'Xinxing Xu' 'Rick Siow Mong Goh'\n 'Salman Khan' 'Wangmeng Zuo' 'Yong Liu' 'Chun-Mei Feng']" ]
null
null
2407.01157
null
null
http://arxiv.org/pdf/2407.01157v1
2024-07-01T10:25:47Z
2024-07-01T10:25:47Z
Unaligning Everything: Or Aligning Any Text to Any Image in Multimodal Models
Utilizing a shared embedding space, emerging multimodal models exhibit unprecedented zero-shot capabilities. However, the shared embedding space could lead to new vulnerabilities if different modalities can be misaligned. In this paper, we extend and utilize a recently developed effective gradient-based procedure that allows us to match the embedding of a given text by minimally modifying an image. Using the procedure, we show that we can align the embeddings of distinguishable texts to any image through unnoticeable adversarial attacks in joint image-text models, revealing that semantically unrelated images can have embeddings of identical texts and at the same time visually indistinguishable images can be matched to the embeddings of very different texts. Our technique achieves 100% success rate when it is applied to text datasets and images from multiple sources. Without overcoming the vulnerability, multimodal models cannot robustly align inputs from different modalities in a semantically meaningful way. \textbf{Warning: the text data used in this paper are toxic in nature and may be offensive to some readers.}
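A minimal sketch of such a gradient-based alignment attack follows: an L-infinity-bounded perturbation is optimized so the image embedding matches a target text embedding. The encoder interface, optimizer, and bound are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def align_image_to_text(image, text_emb, image_encoder, steps=200, eps=8/255, lr=1e-2):
    """Perturb `image` (values in [0, 1]) within an L-inf ball of radius `eps`
    so that its embedding matches `text_emb` under a joint image-text model."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = image_encoder((image + delta).clamp(0, 1))
        loss = 1 - F.cosine_similarity(emb, text_emb, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the modification visually unnoticeable
    return (image + delta).clamp(0, 1).detach()
```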
[ "['Shaeke Salman' 'Md Montasir Bin Shams' 'Xiuwen Liu']" ]
null
null
2407.01163
null
null
http://arxiv.org/pdf/2407.01163v1
2024-07-01T10:33:44Z
2024-07-01T10:33:44Z
Benchmarking Predictive Coding Networks -- Made Simple
In this work, we tackle the problems of efficiency and scalability for predictive coding networks in machine learning. To do so, we first propose a library called PCX, whose focus lies on performance and simplicity, and which provides a user-friendly, deep-learning-oriented interface. Second, we use PCX to implement a large set of benchmarks for the community to use in their experiments. As most works propose their own tasks and architectures, do not compare against one another, and focus on small-scale tasks, a simple and fast open-source library adopted by the whole community would address all of these concerns. Third, we perform extensive benchmarks using multiple algorithms, setting new state-of-the-art results in multiple tasks and datasets, as well as highlighting limitations inherent to PC that should be addressed. Thanks to the efficiency of PCX, we are able to analyze larger architectures than commonly used, providing baselines to galvanize community efforts towards one of the main open problems in the field: scalability. The code for PCX is available at https://github.com/liukidar/pcax.
[ "['Luca Pinchetti' 'Chang Qi' 'Oleh Lokshyn' 'Gaspard Olivers'\n 'Cornelius Emde' 'Mufeng Tang' \"Amine M'Charrak\" 'Simon Frieder'\n 'Bayar Menzat' 'Rafal Bogacz' 'Thomas Lukasiewicz' 'Tommaso Salvatori']" ]
null
null
2407.01171
null
null
http://arxiv.org/pdf/2407.01171v1
2024-07-01T10:44:29Z
2024-07-01T10:44:29Z
Neural Conditional Probability for Inference
We introduce NCP (Neural Conditional Probability), a novel operator-theoretic approach for learning conditional distributions with a particular focus on inference tasks. NCP can be used to build conditional confidence regions and extract important statistics like conditional quantiles, mean, and covariance. It offers streamlined learning through a single unconditional training phase, facilitating efficient inference without the need for retraining even when conditioning changes. By tapping into the powerful approximation capabilities of neural networks, our method efficiently handles a wide variety of complex probability distributions, effectively dealing with nonlinear relationships between input and output variables. Theoretical guarantees ensure both optimization consistency and statistical accuracy of the NCP method. Our experiments show that our approach matches or beats leading methods using a simple Multi-Layer Perceptron (MLP) with two hidden layers and GELU activations. This demonstrates that a minimalistic architecture with a theoretically grounded loss function can achieve competitive results without sacrificing performance, even in the face of more complex architectures.
[ "['Vladimir R. Kostic' 'Karim Lounici' 'Gregoire Pacreau' 'Pietro Novelli'\n 'Giacomo Turri' 'Massimiliano Pontil']" ]
null
null
2407.01178
null
null
http://arxiv.org/pdf/2407.01178v1
2024-07-01T11:07:23Z
2024-07-01T11:07:23Z
$\text{Memory}^3$: Language Modeling with Explicit Memory
The training and inference of large language models (LLMs) are together a costly process that transports knowledge from raw data to meaningful computation. Inspired by the memory hierarchy of the human brain, we reduce this cost by equipping LLMs with explicit memory, a memory format cheaper than model parameters and text retrieval-augmented generation (RAG). Conceptually, with most of its knowledge externalized to explicit memories, the LLM can enjoy a smaller parameter size, training cost, and inference cost, all proportional to the amount of remaining "abstract knowledge". As a preliminary proof of concept, we train from scratch a 2.4B LLM, which achieves better performance than much larger LLMs as well as RAG models, and maintains higher decoding speed than RAG. The model is named $\text{Memory}^3$, since explicit memory is the third form of memory in LLMs after implicit memory (model parameters) and working memory (context key-values). We introduce a memory circuitry theory to support the externalization of knowledge, and present novel techniques including a memory sparsification mechanism that makes storage tractable and a two-stage pretraining scheme that facilitates memory formation.
[ "['Hongkang Yang' 'Zehao Lin' 'Wenjin Wang' 'Hao Wu' 'Zhiyu Li' 'Bo Tang'\n 'Wenqiang Wei' 'Jinbo Wang' 'Zeyun Tang' 'Shichao Song' 'Chenyang Xi'\n 'Yu Yu' 'Kai Chen' 'Feiyu Xiong' 'Linpeng Tang' 'Weinan E']" ]
null
null
2407.01194
null
null
http://arxiv.org/abs/2407.01194v1
2024-07-01T11:39:15Z
2024-07-01T11:39:15Z
A Learned Generalized Geodesic Distance Function-Based Approach for Node Feature Augmentation on Graphs
Geodesic distances on manifolds have numerous applications in image processing, computer graphics and computer vision. In this work, we introduce an approach called 'LGGD' (Learned Generalized Geodesic Distances). This method generates node features by learning a generalized geodesic distance function through a training pipeline that incorporates training data, graph topology and node content features. The strength of this method lies in the proven robustness of generalized geodesic distances to noise and outliers. Our contributions encompass improved performance in node classification tasks, competitive results with state-of-the-art methods on real-world graph datasets, a demonstration of the learnability of the parameters within the generalized geodesic equation on graphs, and the dynamic inclusion of new labels.
[ "['Amitoz Azad' 'Yuan Fang']" ]
null
null
2407.01199
null
null
http://arxiv.org/pdf/2407.01199v1
2024-07-01T11:48:33Z
2024-07-01T11:48:33Z
Deep Learning Based Tool Wear Estimation Considering Cutting Conditions
Tool wear conditions impact the final quality of the workpiece. In this study, we propose a deep learning approach based on a convolutional neural network that incorporates cutting conditions as extra model inputs, aiming to improve tool wear estimation accuracy and fulfill industrial demands for zero-shot transferability. Through a series of milling experiments under various cutting parameters, we evaluate the model's performance in terms of tool wear estimation accuracy and its transferability to new fixed or variable cutting parameters. The results consistently highlight our approach's advantage over conventional models that omit cutting conditions, maintaining superior performance irrespective of the stability of the wear development or the limitation of the training dataset. This finding underscores its potential applicability in industrial scenarios.
[ "['Zongshuo Li' 'Markus Meurer' 'Thomas Bergs']" ]
null
null
2407.01200
null
null
http://arxiv.org/pdf/2407.01200v1
2024-07-01T11:49:10Z
2024-07-01T11:49:10Z
Deep Learning Approach for Enhanced Transferability and Learning Capacity in Tool Wear Estimation
As an integral part of contemporary manufacturing, monitoring systems obtain valuable information during machining to oversee the condition of both the process and the machine. Recently, diverse algorithms have been employed to detect tool wear using single or multiple sources of measurements. In this study, a deep learning approach is proposed for estimating tool wear, considering cutting parameters. The model's accuracy and transferability in tool wear estimation were assessed with milling experiments conducted under varying cutting parameters. The results indicate that the proposed method outperforms conventional methods in terms of both transferability and rapid learning capabilities.
[ "['Zongshuo Li' 'Markus Meurer' 'Thomas Bergs']" ]
null
null
2407.01211
null
null
http://arxiv.org/pdf/2407.01211v1
2024-07-01T11:57:53Z
2024-07-01T11:57:53Z
Efficient Cutting Tool Wear Segmentation Based on Segment Anything Model
Tool wear conditions impact the surface quality of the workpiece and its final geometric precision. In this research, we propose an efficient tool wear segmentation approach based on Segment Anything Model, which integrates U-Net as an automated prompt generator to streamline the processes of tool wear detection. Our evaluation covered three Point-of-Interest generation methods and further investigated the effects of variations in training dataset sizes and U-Net training intensities on resultant wear segmentation outcomes. The results consistently highlight our approach's advantage over U-Net, emphasizing its ability to achieve accurate wear segmentation even with limited training datasets. This feature underscores its potential applicability in industrial scenarios where datasets may be limited.
[ "['Zongshuo Li' 'Ding Huo' 'Markus Meurer' 'Thomas Bergs']" ]
null
null
2407.01214
null
null
http://arxiv.org/pdf/2407.01214v1
2024-07-01T11:59:59Z
2024-07-01T11:59:59Z
Revisiting Random Walks for Learning on Graphs
We revisit a simple idea for machine learning on graphs, where a random walk on a graph produces a machine-readable record, and this record is processed by a deep neural network to directly make vertex-level or graph-level predictions. We refer to these stochastic machines as random walk neural networks, and show that we can design them to be isomorphism invariant while capable of universal approximation of graph functions in probability. A useful finding is that almost any kind of record of random walk guarantees probabilistic invariance as long as the vertices are anonymized. This enables us to record random walks in plain text and adopt a language model to read these text records to solve graph tasks. We further establish a parallelism to message passing neural networks using tools from Markov chain theory, and show that over-smoothing in message passing is alleviated by construction in random walk neural networks, while over-squashing manifests as probabilistic under-reaching. We show that random walk neural networks based on pre-trained language models can solve several hard problems on graphs, such as separating strongly regular graphs where the 3-WL test fails, counting substructures, and transductive classification on the arXiv citation network without training. Code is available at https://github.com/jw9730/random-walk.
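The anonymized text-record idea takes only a few lines; the walk format below is a hypothetical encoding, not necessarily the paper's exact one.

```python
import random

def anonymized_walk_record(adj, start, length=16, seed=0):
    """Record a random walk as plain text with vertices renamed by first-visit
    order, which is what makes the record anonymous. `adj` maps a vertex to
    a list of its neighbors."""
    rng = random.Random(seed)
    names, walk, v = {}, [], start
    for _ in range(length):
        walk.append(str(names.setdefault(v, len(names))))  # first-visit naming
        v = rng.choice(adj[v])
    return "-".join(walk)  # e.g. "0-1-2-1-0-3", readable by a language model

# adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
# print(anonymized_walk_record(adj, start=0))
```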
[ "['Jinwoo Kim' 'Olga Zaghen' 'Ayhan Suleymanzade' 'Youngmin Ryou'\n 'Seunghoon Hong']" ]
null
null
2407.01226
null
null
http://arxiv.org/pdf/2407.01226v2
2024-07-09T14:37:11Z
2024-07-01T12:17:01Z
Bayesian grey-box identification of nonlinear convection effects in heat transfer dynamics
We propose a computational procedure for identifying convection in heat transfer dynamics. The procedure is based on a Gaussian process latent force model, consisting of a white-box component (i.e., known physics) for the conduction and linear convection effects and a Gaussian process that acts as a black-box component for the nonlinear convection effects. States are inferred through Bayesian smoothing and we obtain approximate posterior distributions for the kernel covariance function's hyperparameters using Laplace's method. The nonlinear convection function is recovered from the Gaussian process states using a Bayesian regression model. We validate the procedure by simulation error using the identified nonlinear convection function, on both data from a simulated system and measurements from a physical assembly.
[ "['Wouter M. Kouw' 'Caspar Gruijthuijsen' 'Lennart Blanken' 'Enzo Evers'\n 'Timothy Rogers']" ]
null
null
2407.01250
null
null
http://arxiv.org/pdf/2407.01250v1
2024-07-01T12:57:03Z
2024-07-01T12:57:03Z
Metric-Entropy Limits on Nonlinear Dynamical System Learning
This paper is concerned with the fundamental limits of nonlinear dynamical system learning from input-output traces. Specifically, we show that recurrent neural networks (RNNs) are capable of learning nonlinear systems that satisfy a Lipschitz property and forget past inputs fast enough in a metric-entropy optimal manner. As the sets of sequence-to-sequence maps realized by the dynamical systems we consider are significantly more massive than function classes generally considered in deep neural network approximation theory, a refined metric-entropy characterization is needed, namely in terms of order, type, and generalized dimension. We compute these quantities for the classes of exponentially-decaying and polynomially-decaying Lipschitz fading-memory systems and show that RNNs can achieve them.
[ "['Yang Pan' 'Clemens Hutter' 'Helmut Bölcskei']" ]
null
null
2407.01258
null
null
http://arxiv.org/pdf/2407.01258v2
2024-07-02T03:55:11Z
2024-07-01T13:08:09Z
Introducing a Physics-informed Deep Learning Framework for Bridge Scour Prediction
This paper introduces scour physics-informed neural networks (SPINNs), a hybrid physics-data-driven framework for bridge scour prediction using deep learning. SPINNs are developed based on historical scour monitoring data and integrate physics-based empirical equations into neural networks as supplementary loss components. We incorporated three architectures: LSTM, CNN, and NLinear as the base data-driven model. Despite varying performance across different base models and bridges, SPINNs overall outperformed pure data-driven models. In some bridge cases, SPINN reduced forecasting errors by up to 50 percent. In this study, we also explored general models for bridge clusters, trained by aggregating datasets across multiple bridges in a region. The pure data-driven models mostly benefited from this approach, in particular for bridges with limited data. However, bridge-specific SPINNs provided more accurate predictions than general SPINNs for almost all case studies. Also, the time-dependent empirical equations derived from SPINNs showed reasonable accuracy in estimating maximum scour depth, providing more accurate predictions than HEC-18. Comparing both SPINNs and pure deep learning models with the traditional HEC-18 equation indicates substantial improvements in scour prediction accuracy. This study can pave the way for hybrid physics-machine learning methodologies to be implemented for bridge scour design and maintenance.
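As described, the SPINN loss amounts to adding an empirical-equation residual to the data misfit; a minimal sketch (the equation callback and weighting are assumptions):

```python
import torch

def spinn_loss(pred_scour, obs_scour, hydraulic_features, empirical_eq, lam=0.1):
    """Hybrid loss: supervised misfit plus deviation from a physics-based
    empirical scour estimate (e.g. an HEC-18-style prediction computed by
    `empirical_eq` from hydraulic features)."""
    data_loss = torch.mean((pred_scour - obs_scour) ** 2)
    physics_loss = torch.mean((pred_scour - empirical_eq(hydraulic_features)) ** 2)
    return data_loss + lam * physics_loss
```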
[ "['Negin Yousefpour' 'Bo Wang']" ]
null
null
2407.01262
null
null
http://arxiv.org/pdf/2407.01262v1
2024-07-01T13:17:09Z
2024-07-01T13:17:09Z
Complementary Fusion of Deep Network and Tree Model for ETA Prediction
Estimated time of arrival (ETA) is a very important factor in transportation systems. It has attracted increasing attention and has been widely used as a basic service in navigation systems and intelligent transportation systems. In this paper, we propose a novel solution to the ETA estimation problem: an ensemble of tree models and neural networks. We demonstrated the accuracy and robustness of the solution on the A/B list and won first place in the SIGSPATIAL 2021 GISCUP competition.
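A minimal sketch of such a tree/neural-network ensemble with scikit-learn follows; the component models and fixed blend weight are illustrative, as the abstract does not specify the winning configuration.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

def fit_eta_ensemble(X, y, w=0.5):
    """Blend a gradient-boosted tree model with an MLP for ETA regression;
    `w` would normally be tuned on a validation split."""
    tree = GradientBoostingRegressor().fit(X, y)
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
    return lambda X_new: w * tree.predict(X_new) + (1 - w) * net.predict(X_new)
```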
[ "['YuRui Huang' 'Jie Zhang' 'HengDa Bao' 'Yang Yang' 'Jian Yang']" ]
null
null
2407.01281
null
null
http://arxiv.org/pdf/2407.01281v1
2024-07-01T13:35:53Z
2024-07-01T13:35:53Z
Bridging Smoothness and Approximation: Theoretical Insights into Over-Smoothing in Graph Neural Networks
In this paper, we explore the approximation theory of functions defined on graphs. Our study builds upon the approximation results derived from the $K$-functional. We establish a theoretical framework to assess the lower bounds of approximation for target functions using Graph Convolutional Networks (GCNs) and examine the over-smoothing phenomenon commonly observed in these networks. Initially, we introduce the concept of a $K$-functional on graphs, establishing its equivalence to the modulus of smoothness. We then analyze a typical type of GCN to demonstrate how the high-frequency energy of the output decays, an indicator of over-smoothing. This analysis provides theoretical insights into the nature of over-smoothing within GCNs. Furthermore, we establish a lower bound for the approximation of target functions by GCNs, which is governed by the modulus of smoothness of these functions. This finding offers a new perspective on the approximation capabilities of GCNs. In our numerical experiments, we analyze several widely applied GCNs and observe the phenomenon of energy decay. These observations corroborate our theoretical results on exponential decay order.
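For reference, the (Peetre) K-functional the paper adapts to graphs has the standard form below; taking $|g|$ to be a Laplacian-based smoothness seminorm is an assumption consistent with the abstract's stated equivalence to the modulus of smoothness.

```latex
% Peetre K-functional: optimal trade-off between approximation error and
% smoothness of the approximant g, at scale t.
K(f, t) = \inf_{g} \big\{ \|f - g\| + t\,|g| \big\}, \qquad t > 0.
```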
[ "['Guangrui Yang' 'Jianfei Li' 'Ming Li' 'Han Feng' 'Ding-Xuan Zhou']" ]