categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2404.13268 | null | null | http://arxiv.org/pdf/2404.13268v2 | 2024-05-12T15:11:25Z | 2024-04-20T04:30:38Z | Multi-Cell Decoder and Mutual Learning for Table Structure and Character Recognition | Extracting table contents from documents such as scientific papers and financial reports and converting them into a format that can be processed by large language models is an important task in knowledge information processing. End-to-end approaches, which recognize not only table structure but also cell contents, have achieved performance comparable to state-of-the-art models that use external character recognition systems, and have potential for further improvement. In addition, these models can now recognize long tables with hundreds of cells by introducing local attention. However, the models recognize table structure in one direction from the header to the footer, and cell content recognition is performed independently for each cell, so there is no opportunity to retrieve useful information from neighboring cells. In this paper, we propose a multi-cell content decoder and a bidirectional mutual learning mechanism to improve the end-to-end approach. The effectiveness is demonstrated on two large datasets, and the experimental results show comparable performance to state-of-the-art models, even for long tables with large numbers of cells. | ['Takaya Kawakatsu'] |
null | null | 2404.13270 | null | null | http://arxiv.org/pdf/2404.13270v1 | 2024-04-20T04:51:59Z | 2024-04-20T04:51:59Z | StrideNET: Swin Transformer for Terrain Recognition with Dynamic Roughness Extraction | Advancements in deep learning are revolutionizing the classification of remote-sensing images. Transformer-based architectures, utilizing self-attention mechanisms, have emerged as alternatives to conventional convolution methods, enabling the capture of long-range dependencies along with global relationships in the image. Motivated by these advancements, this paper presents StrideNET, a novel dual-branch architecture designed for terrain recognition and implicit properties estimation. The terrain recognition branch utilizes the Swin Transformer, leveraging its hierarchical representation and low computational cost to efficiently capture both local and global features. The terrain properties branch focuses on the extraction of surface properties such as roughness and slipperiness using a statistical texture analysis method. By computing surface terrain properties, an enhanced environmental perception can be obtained. The StrideNET model is trained on a dataset comprising four target terrain classes: Grassy, Marshy, Sandy, and Rocky. StrideNET attains competitive performance compared to contemporary methods. The implications of this work extend to various applications, including environmental monitoring, land use and land cover (LULC) classification, disaster response, precision agriculture, and much more. | ['Maitreya Shelare' 'Neha Shigvan' 'Atharva Satam' 'Poonam Sonar'] |
null | null | 2404.13273 | null | null | http://arxiv.org/pdf/2404.13273v1 | 2024-04-20T05:13:56Z | 2024-04-20T05:13:56Z | Multi-feature Reconstruction Network using Crossed-mask Restoration for Unsupervised Anomaly Detection | Unsupervised anomaly detection using only normal samples is of great significance for quality inspection in industrial manufacturing. Although existing reconstruction-based methods have achieved promising results, they still face two problems: poorly distinguishable information in image reconstruction, and faithful regeneration of abnormal regions caused by the model's over-generalization ability. To overcome these issues, we convert image reconstruction into a combination of parallel feature restorations and propose a multi-feature reconstruction network, MFRNet, using crossed-mask restoration in this paper. Specifically, a multi-scale feature aggregator is first developed to generate more discriminative hierarchical representations of the input images from a pre-trained model. Subsequently, a crossed-mask generator is adopted to randomly cover the extracted feature map, followed by a restoration network based on the transformer structure for high-quality repair of the missing regions. Finally, a hybrid loss is employed to guide model training and anomaly estimation, which takes both pixel and structural similarity into consideration. Extensive experiments show that our method is highly competitive with or significantly outperforms other state-of-the-art methods on four publicly available datasets and one self-made dataset. | ['Junpu Wang' 'Guili Xu' 'Chunlei Li' 'Guangshuai Gao' 'Yuehua Cheng'] |
null | null | 2404.13278 | null | null | http://arxiv.org/pdf/2404.13278v1 | 2024-04-20T05:31:59Z | 2024-04-20T05:31:59Z | Federated Transfer Learning with Task Personalization for Condition Monitoring in Ultrasonic Metal Welding | Ultrasonic metal welding (UMW) is a key joining technology with widespread industrial applications. Condition monitoring (CM) capabilities are critically needed in UMW applications because process anomalies significantly deteriorate the joining quality. Recently, machine learning models emerged as a promising tool for CM in many manufacturing applications due to their ability to learn complex patterns. Yet, the successful deployment of these models requires substantial training data that may be expensive and time-consuming to collect. Additionally, many existing machine learning models lack generalizability and cannot be directly applied to new process configurations (i.e., domains). Such issues may be potentially alleviated by pooling data across manufacturers, but data sharing raises critical data privacy concerns. To address these challenges, this paper presents a Federated Transfer Learning with Task Personalization (FTL-TP) framework that provides domain generalization capabilities in distributed learning while ensuring data privacy. By effectively learning a unified representation from feature space, FTL-TP can adapt CM models for clients working on similar tasks, thereby enhancing their overall adaptability and performance jointly. To demonstrate the effectiveness of FTL-TP, we investigate two distinct UMW CM tasks, tool condition monitoring and workpiece surface condition classification. Compared with state-of-the-art FL algorithms, FTL-TP achieves a 5.35%-8.08% improvement of accuracy in CM in new target domains. FTL-TP is also shown to perform excellently in challenging scenarios involving unbalanced data distributions and limited client fractions. Furthermore, by implementing the FTL-TP method on an edge-cloud architecture, we show that this method is both viable and efficient in practice. The FTL-TP framework is readily extensible to various other manufacturing applications. | ['Ahmadreza Eslaminia' 'Yuquan Meng' 'Klara Nahrstedt' 'Chenhui Shao'] |
null | null | 2404.13300 | null | null | http://arxiv.org/pdf/2404.13300v1 | 2024-04-20T07:11:06Z | 2024-04-20T07:11:06Z | Capturing Momentum: Tennis Match Analysis Using Machine Learning and Time Series Theory | This paper presents an analysis of momentum in tennis matches. Owing to its generalization performance, the approach can be helpful in constructing systems that predict the results of sports games and analyze player performance based on technical statistics. We first use hidden Markov models to predict momentum, which is defined as the performance of players. Then we use XGBoost to prove the significance of momentum. Finally, we use LightGBM to evaluate the performance of our model and use SHAP feature importance ranking and weight analysis to find the key factors that affect the performance of players. | ['Jingdi Lei' 'Tianqi Kang' 'Yuluan Cao' 'Shiwei Ren'] |
null | null | 2404.13309 | null | null | http://arxiv.org/pdf/2404.13309v1 | 2024-04-20T07:38:48Z | 2024-04-20T07:38:48Z | Latent Schrödinger Bridge Diffusion Model for Generative Learning | This paper aims to conduct a comprehensive theoretical analysis of current diffusion models. We introduce a novel generative learning methodology utilizing the Schrödinger bridge diffusion model in latent space as the framework for theoretical exploration in this domain. Our approach commences with the pre-training of an encoder-decoder architecture using data originating from a distribution that may diverge from the target distribution, thus facilitating the accommodation of a large sample size through the utilization of pre-existing large-scale models. Subsequently, we develop a diffusion model within the latent space utilizing the Schrödinger bridge framework. Our theoretical analysis encompasses the establishment of end-to-end error analysis for learning distributions via the latent Schrödinger bridge diffusion model. Specifically, we control the second-order Wasserstein distance between the generated distribution and the target distribution. Furthermore, our obtained convergence rates effectively mitigate the curse of dimensionality, offering robust theoretical support for prevailing diffusion models. | ['Yuling Jiao' 'Lican Kang' 'Huazhen Lin' 'Jin Liu' 'Heng Zuo'] |
null | null | 2404.13316 | null | null | http://arxiv.org/pdf/2404.13316v1 | 2024-04-20T08:21:25Z | 2024-04-20T08:21:25Z | On the stability of Lipschitz continuous control problems and its application to reinforcement learning | We address the crucial yet underexplored stability properties of the Hamilton-Jacobi-Bellman (HJB) equation in model-free reinforcement learning contexts, specifically for Lipschitz continuous optimal control problems. We bridge the gap between Lipschitz continuous optimal control problems and classical optimal control problems in the viscosity solutions framework, offering new insights into the stability of the value function of Lipschitz continuous optimal control problems. By introducing structural assumptions on the dynamics and reward functions, we further study the rate of convergence of value functions. Moreover, we introduce a generalized framework for Lipschitz continuous control problems that incorporates the original problem and leverage it to propose a new HJB-based reinforcement learning algorithm. The stability properties and performance of the proposed method are tested with well-known benchmark examples in comparison with existing approaches. | ['Namkyeong Cho' 'Yeoneung Kim'] |
null | null | 2404.13318 | null | null | http://arxiv.org/pdf/2404.13318v1 | 2024-04-20T08:23:46Z | 2024-04-20T08:23:46Z | EHRFL: Federated Learning Framework for Heterogeneous EHRs and Precision-guided Selection of Participating Clients | In this study, we provide solutions to two practical yet overlooked scenarios in federated learning for electronic health records (EHRs): firstly, we introduce EHRFL, a framework that facilitates federated learning across healthcare institutions with distinct medical coding systems and database schemas using text-based linearization of EHRs. Secondly, we focus on a scenario where a single healthcare institution initiates federated learning to build a model tailored for itself, in which the number of clients must be optimized in order to reduce expenses incurred by the host. For selecting participating clients, we present a novel precision-based method, leveraging data latents to identify suitable participants for the institution. Our empirical results show that EHRFL effectively enables federated learning across hospitals with different EHR systems. Furthermore, our results demonstrate the efficacy of our precision-based method in selecting a reduced number of participating clients without compromising model performance, resulting in lower operational costs when constructing institution-specific models. We believe this work lays a foundation for the broader adoption of federated learning on EHRs. | ['Jiyoun Kim' 'Junu Kim' 'Kyunghoon Hur' 'Edward Choi'] |
null | null | 2404.13322 | null | null | http://arxiv.org/pdf/2404.13322v2 | 2024-06-17T12:50:26Z | 2024-04-20T08:34:39Z | MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities | In this study, we focus on heterogeneous knowledge transfer across entirely different model architectures, tasks, and modalities. Existing knowledge transfer methods (e.g., backbone sharing, knowledge distillation) often hinge on shared elements within model structures or task-specific features/labels, limiting transfers to complex model types or tasks. To overcome these challenges, we present MergeNet, which learns to bridge the gap of parameter spaces of heterogeneous models, facilitating the direct interaction, extraction, and application of knowledge within these parameter spaces. The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters and adeptly learning to identify and map parameters into the target model. MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage, including the training trajectory knowledge of the source model. Extensive experiments on heterogeneous knowledge transfer demonstrate significant improvements in challenging settings, where representative approaches may falter or prove less applicable. | ['Kunxi Li' 'Tianyu Zhan' 'Kairui Fu' 'Shengyu Zhang' 'Kun Kuang' 'Jiwei Li' 'Zhou Zhao' 'Fei Wu'] |
null | null | 2404.13327 | null | null | http://arxiv.org/pdf/2404.13327v2 | 2024-04-23T05:32:11Z | 2024-04-20T09:02:50Z | Comparative Analysis on Snowmelt-Driven Streamflow Forecasting Using Machine Learning Techniques | The rapid advancement of machine learning techniques has led to their widespread application in various domains including water resources. However, snowmelt modeling remains an area that has not been extensively explored. In this study, we propose a state-of-the-art (SOTA) deep learning sequential model, leveraging the Temporal Convolutional Network (TCN), for snowmelt-driven discharge modeling in the Himalayan basin of the Hindu Kush Himalayan Region. To evaluate the performance of our proposed model, we conducted a comparative analysis with other popular models including Support Vector Regression (SVR), Long Short-Term Memory (LSTM), and Transformer. Furthermore, nested cross-validation (CV) is used with five outer folds and three inner folds, and hyper-parameter tuning is performed on the inner folds. To evaluate the performance of the model, the mean absolute error (MAE), root mean square error (RMSE), R-squared ($R^{2}$), Kling-Gupta Efficiency (KGE), and Nash-Sutcliffe Efficiency (NSE) are computed for each outer fold. The average metrics revealed that TCN outperformed the other models, with an average MAE of 0.011, RMSE of 0.023, $R^{2}$ of 0.991, KGE of 0.992, and NSE of 0.991. The findings of this study demonstrate the effectiveness of the deep learning model as compared to traditional machine learning approaches for snowmelt-driven streamflow forecasting. Moreover, the superior performance of TCN highlights its potential as a promising deep learning model for similar hydrological applications. | ['Ukesh Thapa' 'Bipun Man Pati' 'Samit Thapa' 'Dhiraj Pyakurel' 'Anup Shrestha'] |
null | null | 2404.13342 | null | null | http://arxiv.org/pdf/2404.13342v1 | 2024-04-20T10:40:12Z | 2024-04-20T10:40:12Z | Hyperspectral Anomaly Detection with Self-Supervised Anomaly Prior | The majority of existing hyperspectral anomaly detection (HAD) methods use the low-rank representation (LRR) model to separate the background and anomaly components, where the anomaly component is optimized by handcrafted sparse priors (e.g., $\ell_{2,1}$-norm). However, this may not be ideal since they overlook the spatial structure present in anomalies and make the detection result largely dependent on manually set sparsity. To tackle these problems, we redefine the optimization criterion for the anomaly component in the LRR model with a self-supervised network called self-supervised anomaly prior (SAP). This prior is obtained by the pretext task of self-supervised learning, which is customized to learn the characteristics of hyperspectral anomalies. Specifically, this pretext task is a classification task to distinguish the original hyperspectral image (HSI) and the pseudo-anomaly HSI, where the pseudo-anomaly is generated from the original HSI and designed as a prism with arbitrary polygon bases and arbitrary spectral bands. In addition, a dual-purified strategy is proposed to provide a more refined background representation with an enriched background dictionary, facilitating the separation of anomalies from complex backgrounds. Extensive experiments on various hyperspectral datasets demonstrate that the proposed SAP offers a more accurate and interpretable solution than other advanced HAD methods. | ['Yidan Liu' 'Weiying Xie' 'Kai Jiang' 'Jiaqing Zhang' 'Yunsong Li' 'Leyuan Fang'] |
null | null | 2404.13343 | null | null | http://arxiv.org/pdf/2404.13343v1 | 2024-04-20T10:41:02Z | 2024-04-20T10:41:02Z | UnibucLLM: Harnessing LLMs for Automated Prediction of Item Difficulty and Response Time for Multiple-Choice Questions | This work explores a novel data augmentation method based on Large Language Models (LLMs) for predicting item difficulty and response time of retired USMLE Multiple-Choice Questions (MCQs) in the BEA 2024 Shared Task. Our approach is based on augmenting the dataset with answers from zero-shot LLMs (Falcon, Meditron, Mistral) and employing transformer-based models based on six alternative feature combinations. The results suggest that predicting the difficulty of questions is more challenging. Notably, our top-performing methods consistently include the question text, and benefit from the variability of LLM answers, highlighting the potential of LLMs for improving automated assessment in medical licensing exams. We make our code available at https://github.com/ana-rogoz/BEA-2024. | ['Ana-Cristina Rogoz' 'Radu Tudor Ionescu'] |
null | null | 2404.13344 | null | null | http://arxiv.org/pdf/2404.13344v1 | 2024-04-20T10:44:13Z | 2024-04-20T10:44:13Z | GRANOLA: Adaptive Normalization for Graph Neural Networks | In recent years, significant efforts have been made to refine the design of Graph Neural Network (GNN) layers, aiming to overcome diverse challenges, such as limited expressive power and oversmoothing. Despite their widespread adoption, the incorporation of off-the-shelf normalization layers like BatchNorm or InstanceNorm within a GNN architecture may not effectively capture the unique characteristics of graph-structured data, potentially reducing the expressive power of the overall architecture. Moreover, existing graph-specific normalization layers often struggle to offer substantial and consistent benefits. In this paper, we propose GRANOLA, a novel graph-adaptive normalization layer. Unlike existing normalization layers, GRANOLA normalizes node features by adapting to the specific characteristics of the graph, particularly by generating expressive representations of its neighborhood structure, obtained by leveraging the propagation of Random Node Features (RNF) in the graph. We present theoretical results that support our design choices. Our extensive empirical evaluation of various graph benchmarks underscores the superior performance of GRANOLA over existing normalization techniques. Furthermore, GRANOLA emerges as the top-performing method among all baselines within the same time complexity of Message Passing Neural Networks (MPNNs). | ['Moshe Eliasof' 'Beatrice Bevilacqua' 'Carola-Bibiane Schönlieb' 'Haggai Maron'] |
null | null | 2404.13347 | null | null | http://arxiv.org/pdf/2404.13347v1 | 2024-04-20T11:05:47Z | 2024-04-20T11:05:47Z | Augmenting Safety-Critical Driving Scenarios while Preserving Similarity to Expert Trajectories | Trajectory augmentation serves as a means to mitigate distributional shift in imitation learning. However, imitating trajectories that inadequately represent the original expert data can result in undesirable behaviors, particularly in safety-critical scenarios. We propose a trajectory augmentation method designed to maintain similarity with expert trajectory data. To accomplish this, we first cluster trajectories to identify minority yet safety-critical groups. Then, we combine the trajectories within the same cluster through geometrical transformation to create new trajectories. These trajectories are then added to the training dataset, provided that they meet our specified safety-related criteria. Our experiments show that training an imitation learning model using these augmented trajectories can significantly improve closed-loop performance. | ['Hamidreza Mirkhani' 'Behzad Khamidehi' 'Kasra Rezaee'] |
null | null | 2404.13348 | null | null | http://arxiv.org/pdf/2404.13348v1 | 2024-04-20T11:07:29Z | 2024-04-20T11:07:29Z | Socialized Learning: A Survey of the Paradigm Shift for Edge Intelligence in Networked Systems | Amidst the robust impetus from artificial intelligence (AI) and big data, edge intelligence (EI) has emerged as a nascent computing paradigm, synthesizing AI with edge computing (EC) to become an exemplary solution for unleashing the full potential of AI services. Nonetheless, challenges in communication costs, resource allocation, privacy, and security continue to constrain its proficiency in supporting services with diverse requirements. In response to these issues, this paper introduces socialized learning (SL) as a promising solution, further propelling the advancement of EI. SL is a learning paradigm predicated on social principles and behaviors, aimed at amplifying the collaborative capacity and collective intelligence of agents within the EI system. SL not only enhances the system's adaptability but also optimizes communication and networking processes, essential for distributed intelligence across diverse devices and platforms. Therefore, a combination of SL and EI may greatly facilitate the development of collaborative intelligence in the future network. This paper presents the findings of a literature review on the integration of EI and SL, summarizing the latest achievements in existing research on EI and SL. Subsequently, we delve comprehensively into the limitations of EI and how it could benefit from SL. Special emphasis is placed on the communication challenges and networking strategies within these systems, underlining the role of optimized network solutions in improving system efficacy. Based on these discussions, we elaborate in detail on three integrated components: socialized architecture, socialized training, and socialized inference, analyzing their strengths and weaknesses. Finally, we identify some possible future applications of combining SL and EI, discuss open problems, and suggest directions for future research. | ['Xiaofei Wang' 'Yunfeng Zhao' 'Chao Qiu' 'Qinghua Hu' 'Victor C. M. Leung'] |
null | null | 2404.13349 | null | null | http://arxiv.org/pdf/2404.13349v1 | 2024-04-20T11:08:07Z | 2024-04-20T11:08:07Z | Breaking the Memory Wall for Heterogeneous Federated Learning with Progressive Training | This paper presents ProFL, a novel progressive FL framework to effectively break the memory wall. Specifically, ProFL divides the model into different blocks based on its original architecture. Instead of updating the full model in each training round, ProFL first trains the front blocks and safely freezes them after convergence. Training of the next block is then triggered. This process iterates until the training of the whole model is completed. In this way, the memory footprint is effectively reduced for feasible deployment on heterogeneous devices. In order to preserve the feature representation of each block, we decouple the whole training process into two stages: progressive model shrinking and progressive model growing. During the progressive model shrinking stage, we meticulously design corresponding output modules to assist each block in learning the expected feature representation and obtain the initialization parameters. Then, the obtained output modules are utilized in the corresponding progressive model growing stage. Additionally, to control the training pace for each block, a novel metric from the scalar perspective is proposed to assess the learning status of each block and determine when to trigger the training of the next one. Finally, we theoretically prove the convergence of ProFL and conduct extensive experiments on representative models and datasets to evaluate the effectiveness of ProFL. The results demonstrate that ProFL effectively reduces the peak memory footprint by up to 57.4% and improves model accuracy by up to 82.4%. | ['Yebo Wu' 'Li Li' 'Chunlin Tian' 'Chengzhong Xu'] |
null | null | 2404.13362 | null | null | http://arxiv.org/pdf/2404.13362v1 | 2024-04-20T12:08:00Z | 2024-04-20T12:08:00Z | Semantically Corrected Amharic Automatic Speech Recognition | Automatic Speech Recognition (ASR) can play a crucial role in enhancing the accessibility of spoken languages worldwide. In this paper, we build a set of ASR tools for Amharic, a language spoken by more than 50 million people primarily in eastern Africa. Amharic is written in the Ge'ez script, a sequence of graphemes with spacings denoting word boundaries. This makes computational processing of Amharic challenging since the location of spacings can significantly impact the meaning of formed sentences. We find that existing benchmarks for Amharic ASR do not account for these spacings and only measure individual grapheme error rates, leading to significantly inflated measurements of in-the-wild performance. In this paper, we first release corrected transcriptions of existing Amharic ASR test datasets, enabling the community to accurately evaluate progress. Furthermore, we introduce a post-processing approach using a transformer encoder-decoder architecture to organize raw ASR outputs into a grammatically complete and semantically meaningful Amharic sentence. Through experiments on the corrected test dataset, our model enhances the semantic correctness of Amharic speech recognition systems, achieving a Character Error Rate (CER) of 5.5% and a Word Error Rate (WER) of 23.3%. | ['Samuael Adnew' 'Paul Pu Liang'] |
null | null | 2404.13364 | null | null | http://arxiv.org/pdf/2404.13364v1 | 2024-04-20T12:16:35Z | 2024-04-20T12:16:35Z | MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering | Question-answering systems have revolutionized information retrieval, but linguistic and cultural boundaries limit their widespread accessibility. This research endeavors to bridge the gap created by the absence of efficient QnA datasets in low-resource languages by translating the English Question Answering Dataset (SQuAD) using a robust data curation approach. We introduce MahaSQuAD, the first-ever full SQuAD dataset for the Indic language Marathi, consisting of 118,516 training, 11,873 validation, and 11,803 test samples. We also present a gold test set of 500 manually verified examples. Challenges in maintaining context and handling linguistic nuances are addressed, ensuring accurate translations. Moreover, as a QnA dataset cannot simply be converted into any low-resource language using translation, we need a robust method to map the answer translation to its span in the translated passage. Hence, to address this challenge, we also present a generic approach for translating SQuAD into any low-resource language. Thus, we offer a scalable approach to bridge linguistic and cultural gaps present in low-resource languages, in the realm of question-answering systems. The datasets and models are shared publicly at https://github.com/l3cube-pune/MarathiNLP. | ['Ruturaj Ghatage' 'Aditya Kulkarni' 'Rajlaxmi Patil' 'Sharvi Endait' 'Raviraj Joshi'] |
null | null | 2404.13381 | null | null | http://arxiv.org/pdf/2404.13381v1 | 2024-04-20T13:43:28Z | 2024-04-20T13:43:28Z | DNA: Differentially private Neural Augmentation for contact tracing | The COVID-19 pandemic had enormous economic and societal consequences. Contact tracing is an effective way to reduce infection rates by detecting potential virus carriers early. However, this was not generally adopted in the recent pandemic, and privacy concerns are cited as the most important reason. We substantially improve the privacy guarantees of the current state of the art in decentralized contact tracing. Whereas previous work was based on statistical inference only, we augment the inference with a learned neural network and ensure that this neural augmentation satisfies differential privacy. In a simulator for COVID-19, even at $\epsilon=1$ per message, this can significantly improve the detection of potentially infected individuals and, as a result of targeted testing, reduce infection rates. This work marks an important first step in integrating deep learning into contact tracing while maintaining essential privacy guarantees. | ['Rob Romijnders' 'Christos Louizos' 'Yuki M. Asano' 'Max Welling'] |
null | null | 2404.13386 | null | null | http://arxiv.org/pdf/2404.13386v1 | 2024-04-20T14:06:04Z | 2024-04-20T14:06:04Z | SSVT: Self-Supervised Vision Transformer For Eye Disease Diagnosis Based On Fundus Images | Machine learning-based fundus image diagnosis technologies have triggered worldwide interest owing to their benefits, such as reducing the demand on medical resources and providing objective evaluation results. However, current methods are commonly supervised, imposing a heavy workload on biomedical staff and hence hindering the expansion of effective databases. To address this issue, in this article we establish a label-free method, named 'SSVT', which can automatically analyze unlabeled fundus images and achieves a high evaluation accuracy of 97.0% across four main eye diseases, based on six public datasets and two datasets collected by Beijing Tongren Hospital. The promising results showcase the effectiveness of the proposed unsupervised learning method and its strong application potential for improving global eye health in regions with scarce biomedical resources. | ['Jiaqi Wang' 'Mengtian Kang' 'Yong Liu' 'Chi Zhang' 'Ying Liu' 'Shiming Li' 'Yue Qi' 'Wenjun Xu' 'Chenyu Tang' 'Edoardo Occhipinti' 'Mayinuer Yusufu' 'Ningli Wang' 'Weiling Bai' 'Shuo Gao' 'Luigi G. Occhipinti'] |
null | null | 2404.13388 | null | null | http://arxiv.org/pdf/2404.13388v2 | 2024-04-23T13:25:01Z | 2024-04-20T14:15:25Z | Diagnosis of Multiple Fundus Disorders Amidst a Scarcity of Medical Experts Via Self-supervised Machine Learning | Fundus diseases are major causes of visual impairment and blindness worldwide, especially in underdeveloped regions, where the shortage of ophthalmologists hinders timely diagnosis. AI-assisted fundus image analysis has several advantages, such as high accuracy, reduced workload, and improved accessibility, but it requires a large amount of expert-annotated data to build reliable models. To address this dilemma, we propose a general self-supervised machine learning framework that can handle diverse fundus diseases from unlabeled fundus images. Our method's AUC surpasses existing supervised approaches by 15.7%, and even exceeds the performance of a single human expert. Furthermore, our model adapts well to various datasets from different regions, races, and heterogeneous image sources or qualities from multiple cameras or devices. Our method offers a label-free general framework to diagnose fundus diseases, which could potentially benefit telehealth programs for early screening of people at risk of vision loss. | ['Yong Liu' 'Mengtian Kang' 'Shuo Gao' 'Chi Zhang' 'Ying Liu' 'Shiming Li' 'Yue Qi' 'Arokia Nathan' 'Wenjun Xu' 'Chenyu Tang' 'Edoardo Occhipinti' 'Mayinuer Yusufu' 'Ningli Wang' 'Weiling Bai' 'Luigi Occhipinti'] |
null | null | 2404.13391 | null | null | http://arxiv.org/pdf/2404.13391v1 | 2024-04-20T14:21:16Z | 2024-04-20T14:21:16Z | Online Planning of Power Flows for Power Systems Against Bushfires Using Spatial Context | The 2019-20 Australian bushfires caused numerous economic losses and significantly affected the operations of power systems. A power station or transmission line can be significantly affected by bushfires, leading to an increase in operational costs. We study a fundamental but challenging problem of planning the optimal power flow (OPF) for power systems subject to bushfires. Considering the stochastic nature of bushfire spread, we develop a model to capture such dynamics based on Moore's neighborhood model. Under a periodic inspection scheme that reveals the in-situ bushfire status, we propose an online optimization modeling framework that sequentially plans the power flows in the electricity network. Our framework assumes that the spread of bushfires is non-stationary over time, and the spread and containment probabilities are unknown. To meet these challenges, we develop a contextual online learning algorithm that treats the in-situ geographical information of the bushfire as a 'spatial context'. The online learning algorithm learns the unknown probabilities sequentially based on the observed data and then makes the OPF decision accordingly. The sequential OPF decisions aim to minimize the regret function, which is defined as the cumulative loss against the clairvoyant strategy that knows the true model parameters. We provide a theoretical guarantee of our algorithm by deriving a bound on the regret function, which outperforms the regret bound achieved by other benchmark algorithms. Our model assumptions are verified by real bushfire data from NSW, Australia, and we apply our model to two power systems to illustrate its applicability. | ['Jianyu Xu' 'Qiuzhuang Sun' 'Yang Yang' 'Huadong Mo' 'Daoyi Dong'] |
null | null | 2404.13393 | null | null | http://arxiv.org/pdf/2404.13393v1 | 2024-04-20T14:25:34Z | 2024-04-20T14:25:34Z | Transfer Learning for Molecular Property Predictions from Small Data Sets | Machine learning has emerged as a new tool in chemistry to bypass expensive experiments or quantum-chemical calculations, for example, in high-throughput screening applications. However, many machine learning studies rely on small data sets, making it difficult to efficiently implement powerful deep learning architectures such as message passing neural networks. In this study, we benchmark common machine learning models for the prediction of molecular properties on small data sets, for which the best results are obtained with the message passing neural network PaiNN, as well as SOAP molecular descriptors concatenated to a set of simple molecular descriptors tailored to gradient boosting with regression trees. To further improve the predictive capabilities of PaiNN, we present a transfer learning strategy that uses large data sets to pre-train the respective models and allows us to obtain more accurate models after fine-tuning on the original data sets. The pre-training labels are obtained from computationally cheap ab initio or semi-empirical models and corrected by simple linear regression on the target data set to obtain labels that are close to those of the original data. This strategy is tested on the Harvard Oxford Photovoltaics data set (HOPV, HOMO-LUMO gaps), for which excellent results are obtained, and on the Freesolv data set (solvation energies), where this method is unsuccessful due to a complex underlying learning task and the dissimilar methods used to obtain pre-training and fine-tuning labels. Finally, we find that the final training results do not improve monotonically with the size of the pre-training data set, but pre-training with fewer data points can lead to more biased pre-trained models and higher accuracy after fine-tuning. | ['Thorren Kirschbaum' 'Annika Bande'] |
null | null | 2404.13401 | null | null | http://arxiv.org/pdf/2404.13401v1 | 2024-04-20T15:01:35Z | 2024-04-20T15:01:35Z | Approximate Algorithms For $k$-Sparse Wasserstein Barycenter With Outliers | Wasserstein Barycenter (WB) is one of the most fundamental optimization problems in optimal transportation. Given a set of distributions, the goal of WB is to find a new distribution that minimizes the average Wasserstein distance to them. The problem becomes even harder if we restrict the solution to be "$k$-sparse". In this paper, we study the $k$-sparse WB problem in the presence of outliers, which is a more practical setting since real-world data often contains noise. Existing WB algorithms cannot be directly extended to handle the case with outliers, and thus novel ideas urgently need to be developed. First, we investigate the relation between $k$-sparse WB with outliers and the clustering (with outliers) problems. In particular, we propose a clustering-based LP method that yields a constant approximation factor for the $k$-sparse WB with outliers problem. Further, we utilize the coreset technique to achieve the $(1+\epsilon)$-approximation factor for any $\epsilon>0$, if the dimensionality is not high. Finally, we conduct experiments for our proposed algorithms and illustrate their efficiency in practice. | ['Qingyuan Yang' 'Hu Ding'] |
null | null | 2404.13404 | null | null | http://arxiv.org/pdf/2404.13404v1 | 2024-04-20T15:12:47Z | 2024-04-20T15:12:47Z | Solution space and storage capacity of fully connected two-layer neural networks with generic activation functions | The storage capacity of a binary classification model is the maximum number of random input-output pairs per parameter that the model can learn. It is one of the indicators of the expressive power of machine learning models and is important for comparing the performance of various models. In this study, we analyze the structure of the solution space and the storage capacity of fully connected two-layer neural networks with general activation functions using the replica method from statistical physics. Our results demonstrate that the storage capacity per parameter remains finite even with infinite width and that the weights of the network exhibit negative correlations, leading to a 'division of labor'. In addition, we find that increasing the dataset size triggers a phase transition at a certain transition point where the permutation symmetry of weights is broken, resulting in the solution space splitting into disjoint regions. We identify the dependence of this transition point and the storage capacity on the choice of activation function. These findings contribute to understanding the influence of activation functions and the number of parameters on the structure of the solution space, potentially offering insights for selecting appropriate architectures based on specific objectives. | ['Sota Nishiyama' 'Masayuki Ohzeki'] |
null | null | 2404.13421 | null | null | http://arxiv.org/abs/2404.13421v1 | 2024-04-20T16:38:26Z | 2024-04-20T16:38:26Z | MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning | Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning. FL operates by aggregating models trained by remote devices that own the data. Thus, FL enables the training of powerful global models using crowd-sourced data from a large number of learners, without compromising their privacy. However, the aggregating server is a single point of failure when generating the global model. Moreover, the performance of the model suffers when the data is not independent and identically distributed (non-IID data) on all remote devices. This leads to vastly different models being aggregated, which can reduce the performance by as much as 50% in certain scenarios. In this paper, we seek to address the aforementioned issues while retaining the benefits of FL. We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data. Unlike traditional FL, MultiConfederated Learning will maintain multiple models in parallel (instead of a single global model) to help with convergence when the data is non-IID. With the help of transfer learning, learners can converge to fewer models. In order to increase adaptability, learners are allowed to choose which updates to aggregate from their peers. | ['Michael Duchesne' 'Kaiwen Zhang' 'Chamseddine Talhi'] |
null | null | 2404.13423 | null | null | http://arxiv.org/pdf/2404.13423v2 | 2024-06-16T11:12:34Z | 2024-04-20T17:06:00Z | PIPER: Primitive-Informed Preference-based Hierarchical Reinforcement Learning via Hindsight Relabeling | In this work, we introduce PIPER: Primitive-Informed Preference-based Hierarchical reinforcement learning via Hindsight Relabeling, a novel approach that leverages preference-based learning to learn a reward model, and subsequently uses this reward model to relabel higher-level replay buffers. Since this reward is unaffected by lower primitive behavior, our relabeling-based approach is able to mitigate non-stationarity, which is common in existing hierarchical approaches, and demonstrates impressive performance across a range of challenging sparse-reward tasks. Since obtaining human feedback is typically impractical, we propose to replace the human-in-the-loop approach with our primitive-in-the-loop approach, which generates feedback using sparse rewards provided by the environment. Moreover, in order to prevent infeasible subgoal prediction and avoid degenerate solutions, we propose primitive-informed regularization that conditions higher-level policies to generate feasible subgoals for lower-level policies. We perform extensive experiments to show that PIPER mitigates non-stationarity in hierarchical reinforcement learning and achieves greater than 50% success rates in challenging, sparse-reward robotic environments, where most other baselines fail to achieve any significant progress. | ['Utsav Singh' 'Wesley A. Suttle' 'Brian M. Sadler' 'Vinay P. Namboodiri' 'Amrit Singh Bedi'] |
null | null | 2404.13430 | null | null | http://arxiv.org/pdf/2404.13430v1 | 2024-04-20T17:31:45Z | 2024-04-20T17:31:45Z | React-OT: Optimal Transport for Generating Transition State in Chemical Reactions | Transition states (TSs) are transient structures that are key to understanding reaction mechanisms and designing catalysts, but are challenging to capture in experiments. Alternatively, many optimization algorithms have been developed to search for TSs computationally. Yet the cost of these algorithms driven by quantum chemistry methods (usually density functional theory) is still high, posing challenges for their applications in building large reaction networks for reaction exploration. Here we developed React-OT, an optimal transport approach for generating unique TS structures from reactants and products. React-OT generates highly accurate TS structures with a median structural root mean square deviation (RMSD) of 0.053 Å and a median barrier height error of 1.06 kcal/mol, requiring only 0.4 seconds per reaction. The RMSD and barrier height errors are further improved by roughly 25% through pretraining React-OT on a large reaction dataset obtained with a lower level of theory, GFN2-xTB. We envision the high accuracy and fast inference of React-OT being useful for targeting TSs when exploring chemical reactions with unknown mechanisms. | ['Chenru Duan' 'Guan-Horng Liu' 'Yuanqi Du' 'Tianrong Chen' 'Qiyuan Zhao' 'Haojun Jia' 'Carla P. Gomes' 'Evangelos A. Theodorou' 'Heather J. Kulik'] |
null | null | 2404.13441 | null | null | http://arxiv.org/pdf/2404.13441v2 | 2024-05-24T21:57:17Z | 2024-04-20T18:35:45Z | Machine Learning-Assisted Thermoelectric Cooling for On-Demand Multi-Hotspot Thermal Management | Thermoelectric coolers (TECs) offer a promising solution for direct cooling of local hotspots and active thermal management in advanced electronic systems. However, TECs present significant trade-offs among spatial cooling, heating and power consumption. The optimization of TECs requires extensive simulations, which are impractical for managing actual systems with multiple hotspots under spatial and temporal variations. In this study, we present a novel machine learning-assisted optimization algorithm for thermoelectric coolers that can achieve global optimal temperature by individually controlling TEC units based on real-time multi-hotspot conditions across the entire domain. We train a convolutional neural network (CNN) with a combination of the Inception module and multi-task learning (MTL) approach to comprehend the coupled thermal-electrical physics underlying the system and attain accurate predictions for both temperature and power consumption with and without TECs. Due to the intricate interaction among passive thermal gradient, Peltier effect and Joule effect, a local optimal TEC control experiences spatial temperature trade-offs which may not lead to a global optimal solution. To address this issue, we develop a backtracking-based optimization algorithm using the machine learning model to iterate all possible TEC assignments for attaining global optimal solutions. For any $m \times n$ matrix with $N_{HS}$ hotspots ($n, m \leq 10$, $0 \leq N_{HS} \leq 20$), our algorithm is capable of providing 52.4% peak temperature reduction and its corresponding TEC array control within an average of 1.64 seconds while iterating through tens of temperature predictions behind the scenes. This represents a speed increase of over three orders of magnitude compared to traditional FEM strategies, which take approximately 27 minutes. | ['Jiajian Luo' 'Jaeho Lee'] |
null | null | 2404.13449 | null | null | http://arxiv.org/pdf/2404.13449v1 | 2024-04-20T19:17:40Z | 2024-04-20T19:17:40Z | SiNC+: Adaptive Camera-Based Vitals with Unsupervised Learning of Periodic Signals | Subtle periodic signals, such as blood volume pulse and respiration, can be extracted from RGB video, enabling noncontact health monitoring at low cost. Advancements in remote pulse estimation, or remote photoplethysmography (rPPG), are currently driven by deep learning solutions. However, modern approaches are trained and evaluated on benchmark datasets with ground truth from contact-PPG sensors. We present the first non-contrastive unsupervised learning framework for signal regression to mitigate the need for labelled video data. With minimal assumptions of periodicity and finite bandwidth, our approach discovers the blood volume pulse directly from unlabelled videos. We find that encouraging sparse power spectra within normal physiological bandlimits and variance over batches of power spectra is sufficient for learning visual features of periodic signals. We perform the first experiments utilizing unlabelled video data not specifically created for rPPG to train robust pulse rate estimators. Given the limited inductive biases, we successfully applied the same approach to camera-based respiration by changing the bandlimits of the target signal. This shows that the approach is general enough for unsupervised learning of bandlimited quasi-periodic signals from different domains. Furthermore, we show that the framework is effective for finetuning models on unlabelled video from a single subject, allowing for personalized and adaptive signal regressors. | ['Jeremy Speth' 'Nathan Vance' 'Patrick Flynn' 'Adam Czajka'] |
null | null | 2404.13456 | null | null | http://arxiv.org/pdf/2404.13456v2 | 2024-05-20T21:57:31Z | 2024-04-20T19:51:29Z | Real-Time Safe Control of Neural Network Dynamic Models with Sound Approximation | Safe control of neural network dynamic models (NNDMs) is important to robotics and many applications. However, it remains challenging to compute an optimal safe control in real time for NNDM. To enable real-time computation, we propose to use a sound approximation of the NNDM in the control synthesis. In particular, we propose Bernstein over-approximated neural dynamics (BOND) based on the Bernstein polynomial over-approximation (BPO) of ReLU activation functions in NNDM. To mitigate the errors introduced by the approximation and to ensure persistent feasibility of the safe control problems, we synthesize a worst-case safety index using the most unsafe approximated state within the BPO relaxation of NNDM offline. For the online real-time optimization, we formulate the first-order Taylor approximation of the nonlinear worst-case safety constraint as an additional linear layer of NNDM with an $l_2$-bounded bias term for the higher-order remainder. Comprehensive experiments with different neural dynamics and safety constraints show that with safety guaranteed, our NNDMs with sound approximation are 10-100 times faster than the safe control baseline that uses mixed integer programming (MIP), validating the effectiveness of the worst-case safety index and scalability of the proposed BOND in real-time large-scale settings. The code is available at https://github.com/intelligent-control-lab/BOND. | ['Hanjiang Hu' 'Jianglin Lan' 'Changliu Liu'] |
null | null | 2404.13465 | null | null | http://arxiv.org/abs/2404.13465v1 | 2024-04-20T20:48:42Z | 2024-04-20T20:48:42Z | Do "English" Named Entity Recognizers Work Well on Global Englishes? | The vast majority of the popular English named entity recognition (NER) datasets contain American or British English data, despite the existence of many global varieties of English. As such, it is unclear whether they generalize for analyzing use of English globally. To test this, we build a newswire dataset, the Worldwide English NER Dataset, to analyze NER model performance on low-resource English variants from around the world. We test widely used NER toolkits and transformer models, including models using the pre-trained contextual models RoBERTa and ELECTRA, on three datasets: a commonly used British English newswire dataset, CoNLL 2003, a more American-focused dataset, OntoNotes, and our global dataset. All models trained on the CoNLL or OntoNotes datasets experienced significant performance drops (over 10 F1 in some cases) when tested on the Worldwide English dataset. Upon examination of region-specific errors, we observe the greatest performance drops for Oceania and Africa, while Asia and the Middle East had comparatively strong performance. Lastly, we find that a combined model trained on the Worldwide dataset and either CoNLL or OntoNotes lost only 1-2 F1 on both test sets. | ['Alexander Shan' 'John Bauer' 'Riley Carlson' 'Christopher Manning'] |
null | null | 2404.13474 | null | null | http://arxiv.org/pdf/2404.13474v1 | 2024-04-20T21:51:15Z | 2024-04-20T21:51:15Z | Composing Pre-Trained Object-Centric Representations for Robotics From "What" and "Where" Foundation Models | There have recently been large advances both in pre-training visual representations for robotic control and segmenting unknown category objects in general images. To leverage these for improved robot learning, we propose **POCR**, a new framework for building pre-trained object-centric representations for robotic control. Building on theories of "what-where" representations in psychology and computer vision, we use segmentations from a pre-trained model to stably locate various entities in the scene across timesteps, capturing "where" information. To each such segmented entity, we apply other pre-trained models that build vector descriptions suitable for robotic control tasks, thus capturing "what" the entity is. Thus, our pre-trained object-centric representations for control are constructed by appropriately combining the outputs of off-the-shelf pre-trained models, with no new training. On various simulated and real robotic tasks, we show that imitation policies for robotic manipulators trained on POCR achieve better performance and systematic generalization than state-of-the-art pre-trained representations for robotics, as well as prior object-centric representations that are typically trained from scratch. | ['Junyao Shi' 'Jianing Qian' 'Yecheng Jason Ma' 'Dinesh Jayaraman'] |
null | null | 2404.13475 | null | null | http://arxiv.org/pdf/2404.13475v1 | 2024-04-20T22:03:32Z | 2024-04-20T22:03:32Z | PristiQ: A Co-Design Framework for Preserving Data Security of Quantum Learning in the Cloud | Benefiting from cloud computing, today's early-stage quantum computers can be remotely accessed via cloud services, known as Quantum-as-a-Service (QaaS). However, it poses a high risk of data leakage in quantum machine learning (QML). To run a QML model with QaaS, users need to locally compile their quantum circuits, including the subcircuit for data encoding, and then send the compiled circuit to the QaaS provider for execution. If the QaaS provider is untrustworthy, the subcircuit that encodes the raw data can be easily stolen. Therefore, we propose a co-design framework for preserving the data security of QML with the QaaS paradigm, namely PristiQ. By introducing an encryption subcircuit with extra secure qubits associated with a user-defined security key, the security of data can be greatly enhanced. An automatic search algorithm is further proposed to optimize the model so that it maintains its performance on the encrypted quantum data. Experimental results on simulation and an actual IBM quantum computer both prove the ability of PristiQ to provide high security for the quantum data while maintaining the model performance in QML. | ['Zhepeng Wang' 'Yi Sheng' 'Nirajan Koirala' 'Kanad Basu' 'Taeho Jung' 'Cheng-Chang Lu' 'Weiwen Jiang'] |
null | null | 2404.13476 | null | null | http://arxiv.org/pdf/2404.13476v1 | 2024-04-20T22:05:48Z | 2024-04-20T22:05:48Z | A Framework for Feasible Counterfactual Exploration incorporating Causality, Sparsity and Density | The need to interpret the output of a machine learning model with counterfactual (CF) explanations, generated via small perturbations to the input, has been notable in the research community. Although the variety of CF examples is important, they are not necessarily all feasible at the same time. This work uses different benchmark datasets to examine, through the preservation of the logical causal relations among their attributes, whether CF examples can be generated with a small number of changes to the original input while remaining feasible and actually useful to the end user in a real-world case. To achieve this, we used a black-box model as a classifier, to distinguish the desired class from the input class, and a Variational Autoencoder (VAE) to generate feasible CF examples. As an extension, we also extracted two-dimensional manifolds (one for each dataset) that locate the majority of the feasible examples, a representation that adequately distinguishes them from infeasible ones. For our experimentation, we used three commonly used datasets, and we managed to generate feasible and at the same time sparse CF examples that satisfy all predefined causal constraints, confirming their importance with the attributes in a dataset. | ['Kleopatra Markou' 'Dimitrios Tomaras' 'Vana Kalogeraki' 'Dimitrios Gunopulos'] |
null | null | 2404.13478 | null | null | http://arxiv.org/pdf/2404.13478v1 | 2024-04-20T22:16:56Z | 2024-04-20T22:16:56Z | Deep SE(3)-Equivariant Geometric Reasoning for Precise Placement Tasks | Many robot manipulation tasks can be framed as geometric reasoning tasks, where an agent must be able to precisely manipulate an object into a position that satisfies the task from a set of initial conditions. Often, task success is defined based on the relationship between two objects; for instance, hanging a mug on a rack. In such cases, the solution should be equivariant to the initial position of the objects as well as the agent, and invariant to the pose of the camera. This poses a challenge for learning systems which attempt to solve this task by learning directly from high-dimensional demonstrations: the agent must learn to be both equivariant as well as precise, which can be challenging without any inductive biases about the problem. In this work, we propose a method for precise relative pose prediction which is provably SE(3)-equivariant, can be learned from only a few demonstrations, and can generalize across variations in a class of objects. We accomplish this by factoring the problem into learning an SE(3) invariant task-specific representation of the scene and then interpreting this representation with novel geometric reasoning layers which are provably SE(3) equivariant. We demonstrate that our method can yield substantially more precise placement predictions in simulated placement tasks than previous methods trained with the same amount of data, and can accurately represent relative placement relationships data collected from real-world demonstrations. Supplementary information and videos can be found at https://sites.google.com/view/reldist-iclr-2023. | ['Ben Eisner' 'Yi Yang' 'Todor Davchev' 'Mel Vecerik' 'Jonathan Scholz' 'David Held'] |
null | null | 2404.13491 | null | null | http://arxiv.org/pdf/2404.13491v1 | 2024-04-21T00:04:38Z | 2024-04-21T00:04:38Z | Accelerating the Generation of Molecular Conformations with Progressive
Distillation of Equivariant Latent Diffusion Models | Recent advances in fast sampling methods for diffusion models have demonstrated significant potential to accelerate generation on image modalities. We apply these methods to 3-dimensional molecular conformations by building on the recently introduced GeoLDM equivariant latent diffusion model (Xu et al., 2023). We evaluate trade-offs between speed gains and quality loss, as measured by molecular conformation structural stability. We introduce Equivariant Latent Progressive Distillation, a fast sampling algorithm that preserves geometric equivariance and accelerates generation from latent diffusion models. Our experiments demonstrate up to 7.5x gains in sampling speed with limited degradation in molecular stability. These results suggest this accelerated sampling method has strong potential for high-throughput in silico molecular conformation screening in computational biochemistry, drug discovery, and life sciences applications. | [
"['Romain Lacombe' 'Neal Vaidya']"
]
|
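The core distillation step can be illustrated independently of the equivariant latent space: a student learns to reproduce, in one deterministic DDIM step, what the teacher produces in two. The toy eps-prediction MLPs, cosine schedule, and step indexing below are assumptions for illustration, not the GeoLDM setup.

```python
import torch

teacher = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
student = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 8
abar = torch.cos(torch.linspace(0, 1, T + 1) * torch.pi / 2).clamp(min=1e-4) ** 2

def ddim_step(model, x, t, s):
    """Deterministic DDIM update from noise level t to earlier level s."""
    eps = model(torch.cat([x, torch.full((len(x), 1), t / T)], dim=1))
    x0 = (x - (1 - abar[t]).sqrt() * eps) / abar[t].sqrt()
    return abar[s].sqrt() * x0 + (1 - abar[s]).sqrt() * eps

for it in range(1000):
    x = torch.randn(128, 2)
    t = 2 * torch.randint(1, T // 2 + 1, ()).item()       # an even step index
    with torch.no_grad():                                  # two teacher steps...
        target = ddim_step(teacher, ddim_step(teacher, x, t, t - 1), t - 1, t - 2)
    pred = ddim_step(student, x, t, t - 2)                 # ...one student step
    loss = (pred - target).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Repeating this halving procedure (the trained student becomes the next round's teacher) is what yields the multiplicative sampling speed-ups reported above; the paper's contribution is performing it while preserving geometric equivariance in the latent space.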
null | null | 2404.13500 | null | null | http://arxiv.org/pdf/2404.13500v1 | 2024-04-21T01:27:47Z | 2024-04-21T01:27:47Z | Generalized Regression with Conditional GANs | Regression is typically treated as a curve-fitting process where the goal is to fit a prediction function to data. With the help of conditional generative adversarial networks, we propose to solve this age-old problem in a different way; we aim to learn a prediction function whose outputs, when paired with the corresponding inputs, are indistinguishable from feature-label pairs in the training dataset. We show that this approach to regression makes fewer assumptions on the distribution of the data we are fitting to and, therefore, has better representation capabilities. We draw parallels with generalized linear models in statistics and show how our proposal serves as an extension of them to neural networks. We demonstrate the superiority of this new approach to standard regression with experiments on multiple synthetic and publicly available real-world datasets, finding encouraging results, especially with real-world heavy-tailed regression datasets. To make our work more reproducible, we release our source code. Link to repository: https://anonymous.4open.science/r/regressGAN-7B71/ | [
"['Deddy Jobson' 'Eddy Hudson']"
]
|
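A minimal sketch of the idea — train a generator G(x, z) so that pairs (x, G(x, z)) are indistinguishable from real (x, y) pairs — might look as follows; the network sizes and the Student-t noise used to mimic a heavy-tailed target are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn

x = torch.rand(2048, 1) * 4 - 2                                       # toy inputs
y = 2 * x + 1 + torch.distributions.StudentT(3.0).sample((2048, 1))   # heavy tails

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # G(x, z) -> y
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # D(x, y) -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    z = torch.randn(len(x), 1)
    y_fake = G(torch.cat([x, z], dim=1))
    # Discriminator: real (x, y) vs generated (x, G(x, z)) feature-label pairs
    d_loss = bce(D(torch.cat([x, y], dim=1)), torch.ones_like(y)) + \
             bce(D(torch.cat([x, y_fake.detach()], dim=1)), torch.zeros_like(y))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make its pairs indistinguishable from the real ones
    g_loss = bce(D(torch.cat([x, y_fake], dim=1)), torch.ones_like(y))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Unlike least-squares regression, nothing here assumes Gaussian residuals: the noise input z lets G represent a full conditional distribution of y given x, which is what the paper connects to generalized linear models.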
null | null | 2404.13503 | null | null | http://arxiv.org/pdf/2404.13503v2 | 2024-04-24T20:29:20Z | 2024-04-21T01:53:20Z | Predict to Minimize Swap Regret for All Payoff-Bounded Tasks | A sequence of predictions is calibrated if and only if it induces no swap regret in any downstream decision task. We study the Maximum Swap Regret (MSR) of predictions for binary events: the swap regret maximized over all downstream tasks with bounded payoffs. Previously, the best online prediction algorithm for minimizing MSR was obtained by minimizing the $K_1$ calibration error, which upper bounds MSR up to a constant factor. However, recent work (Qiao and Valiant, 2021) gives an $\Omega(T^{0.528})$ lower bound for the worst-case expected $K_1$ calibration error incurred by any randomized algorithm in $T$ rounds, presenting a barrier to achieving better rates for MSR. Several relaxations of MSR have been considered to overcome this barrier, via external regret (Kleinberg et al., 2023) and regret bounds depending polynomially on the number of actions in downstream tasks (Noarov et al., 2023; Roth and Shi, 2024). We show that the barrier can be surpassed without any relaxations: we give an efficient randomized prediction algorithm that guarantees $O(\sqrt{T}\log T)$ expected MSR. We also discuss the economic utility of calibration by viewing MSR as a decision-theoretic calibration error metric and study its relationship to existing metrics. | [
"['Lunjia Hu' 'Yifan Wu']"
]
|
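As a concrete reference point for the quantities discussed, one common form of the $K_1$ (l1) calibration error can be computed as below; normalizations vary across papers, so treat this as an illustrative convention rather than the authors' exact definition.

```python
import numpy as np

def k1_calibration_error(p, y):
    """Group rounds by predicted value v and sum |total bias| per group:
    K1 = (1/T) * sum_v | sum_{t: p_t = v} (y_t - v) |."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return sum(abs((y[p == v] - v).sum()) for v in np.unique(p)) / len(p)

p = np.array([0.3, 0.3, 0.7, 0.7, 0.7])   # forecasts
y = np.array([0.0, 1.0, 1.0, 1.0, 0.0])   # binary outcomes
print(k1_calibration_error(p, y))          # (|0.4| + |-0.1|) / 5 = 0.1
```

Since $K_1$ upper-bounds MSR only up to a constant factor, an algorithm can have small swap regret on every bounded-payoff task even when its $K_1$ error hits the $\Omega(T^{0.528})$ barrier — which is exactly the gap the paper exploits.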
null | null | 2404.13506 | null | null | http://arxiv.org/pdf/2404.13506v2 | 2024-04-23T21:28:26Z | 2024-04-21T02:26:15Z | Parameter Efficient Fine Tuning: A Comprehensive Analysis Across
Applications | The rise of deep learning has marked significant progress in fields such as computer vision, natural language processing, and medical imaging, primarily through the adaptation of pre-trained models for specific tasks. Traditional fine-tuning methods, involving adjustments to all parameters, face challenges due to high computational and memory demands. This has led to the development of Parameter Efficient Fine-Tuning (PEFT) techniques, which selectively update parameters to balance computational efficiency with performance. This review examines PEFT approaches, offering a detailed comparison of various strategies and highlighting applications across different domains, including text generation, medical imaging, protein modeling, and speech synthesis. By assessing the effectiveness of PEFT methods in reducing computational load, speeding up training, and lowering memory usage, this paper contributes to making deep learning more accessible and adaptable, facilitating its wider application and encouraging innovation in model optimization. Ultimately, the paper aims to contribute insights into PEFT's evolving landscape, guiding researchers and practitioners in overcoming the limitations of conventional fine-tuning approaches. | [
"['Charith Chandra Sai Balne' 'Sreyoshi Bhaduri' 'Tamoghna Roy'\n 'Vinija Jain' 'Aman Chadha']"
]
|
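Among the strategies such a review covers, LoRA is the canonical example and is easy to sketch: freeze the pre-trained weight and learn a low-rank additive update. The rank, scaling, and initialization below follow common practice and are illustrative defaults.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update:
    h = W x + (alpha / r) * B A x, with only A and B receiving gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze W and bias
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # start as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # 2 * 8 * 768 = 12288, vs 590592 for full fine-tuning
```

The trainable-parameter count, not the forward cost, is what drops — which is why PEFT mainly saves optimizer memory and adaptation storage rather than inference time.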
null | null | 2404.13515 | null | null | http://arxiv.org/pdf/2404.13515v2 | 2024-04-25T20:34:32Z | 2024-04-21T03:31:01Z | FedTrans: Efficient Federated Learning via Multi-Model Transformation | Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices. Yet, training and customizing models for FL clients is notoriously challenging due to the heterogeneity of client data, device capabilities, and the massive scale of clients, making individualized model exploration prohibitively expensive. State-of-the-art FL solutions personalize a globally trained model or concurrently train multiple models, but they often incur suboptimal model accuracy and huge training costs. In this paper, we introduce FedTrans, a multi-model FL training framework that automatically produces and trains high-accuracy, hardware-compatible models for individual clients at scale. FedTrans begins with a basic global model, identifies accuracy bottlenecks in model architectures during training, and then employs model transformation to derive new models for heterogeneous clients on the fly. It judiciously assigns models to individual clients while performing soft aggregation on multi-model updates to minimize total training costs. Our evaluations using realistic settings show that FedTrans improves individual client model accuracy by 14% - 72% while slashing training costs by 1.6X - 20X over state-of-the-art solutions. | [
"['Yuxuan Zhu' 'Jiachen Liu' 'Mosharaf Chowdhury' 'Fan Lai']"
]
|
null | null | 2404.13521 | null | null | http://arxiv.org/abs/2404.13521v1 | 2024-04-21T04:06:09Z | 2024-04-21T04:06:09Z | Graph4GUI: Graph Neural Networks for Representing Graphical User
Interfaces | Present-day graphical user interfaces (GUIs) exhibit diverse arrangements of text, graphics, and interactive elements such as buttons and menus, but representations of GUIs have not kept up. They do not encapsulate both semantic and visuo-spatial relationships among elements. To seize machine learning's potential for GUIs more efficiently, Graph4GUI exploits graph neural networks to capture individual elements' properties and their semantic-visuo-spatial constraints in a layout. The learned representation demonstrated its effectiveness in multiple tasks, especially generating designs in a challenging GUI autocompletion task, which involved predicting the positions of remaining unplaced elements in a partially completed GUI. The new model's suggestions showed alignment and visual appeal superior to the baseline method and received higher subjective ratings for preference. Furthermore, we demonstrate the practical benefits and efficiency advantages designers perceive when utilizing our model as an autocompletion plug-in. | [
"['Yue Jiang' 'Changkong Zhou' 'Vikas Garg' 'Antti Oulasvirta']"
]
|
null | null | 2404.13522 | null | null | http://arxiv.org/pdf/2404.13522v2 | 2024-05-30T01:56:53Z | 2024-04-21T04:07:52Z | Error Analysis of Shapley Value-Based Model Explanations: An Informative
Perspective | Shapley value attribution (SVA) is an increasingly popular explainable AI (XAI) method, which quantifies the contribution of each feature to the model's output. However, recent work has shown that most existing methods to implement SVAs have some drawbacks, resulting in biased or unreliable explanations that fail to correctly capture the true intrinsic relationships between features and model outputs. Moreover, the mechanism and consequences of these drawbacks have not been discussed systematically. In this paper, we propose a novel theoretical error analysis framework, in which the explanation errors of SVAs are decomposed into two components: observation bias and structural bias. We further clarify the underlying causes of these two biases and demonstrate that there is a trade-off between them. Based on this error analysis framework, we develop two novel concepts: over-informative and under-informative explanations. We demonstrate how these concepts can be effectively used to understand potential errors of existing SVA methods. In particular, for the widely deployed assumption-based SVAs, we find that they can easily be under-informative due to the distribution drift caused by distributional assumptions. We propose a measurement tool to quantify such a distribution drift. Finally, our experiments illustrate how different existing SVA methods can be over- or under-informative. Our work sheds light on how errors arise in the estimation of SVAs and encourages new, less error-prone methods. | [
"['Ningsheng Zhao' 'Jia Yuan Yu' 'Krzysztof Dzieciolowski' 'Trang Bui']"
]
|
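For orientation, the standard permutation-sampling estimator that most SVA implementations build on is shown below; the imputation of "absent" features from a background sample is precisely where the paper's observation bias enters, and the toy model is of course hypothetical.

```python
import numpy as np

def shapley_mc(model, x, background, n_perm=200, seed=0):
    """Monte Carlo Shapley values: average each feature's marginal
    contribution over random orderings, imputing absent features
    from a row of the background data."""
    rng = np.random.default_rng(seed)
    d, phi = len(x), np.zeros(len(x))
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()
        prev = model(z)
        for j in order:
            z[j] = x[j]                 # feature j joins the coalition
            cur = model(z)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm

model = lambda v: v[0] + 2.0 * v[1] * v[2]   # toy model with an interaction
bg = np.random.default_rng(1).normal(size=(100, 3))
print(shapley_mc(model, np.array([1.0, 1.0, 1.0]), bg))
```

Swapping the background distribution (marginal vs. conditional imputation) trades observation bias against structural bias, which is the trade-off the paper formalizes.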
null | null | 2404.13528 | null | null | http://arxiv.org/abs/2404.13528v1 | 2024-04-21T04:47:26Z | 2024-04-21T04:47:26Z | SmartMem: Layout Transformation Elimination and Adaptation for Efficient
DNN Execution on Mobile | This work is motivated by recent developments in Deep Neural Networks, particularly the Transformer architectures underlying applications such as ChatGPT, and the need for performing inference on mobile devices. Focusing on emerging transformers (specifically the ones with computationally efficient Swin-like architectures) and large models (e.g., Stable Diffusion and LLMs) based on transformers, we observe that layout transformations between the computational operators cause a significant slowdown in these applications. This paper presents SmartMem, a comprehensive framework for eliminating most layout transformations, with the idea that multiple operators can use the same tensor layout through careful choice of layout and implementation of operations. Our approach is based on classifying the operators into four groups, and considering combinations of producer-consumer edges between the operators. We develop a set of methods for searching such layouts. Another component of our work is developing efficient memory layouts for 2.5-dimensional memory commonly seen in mobile devices. Our experimental results show that SmartMem outperforms 5 state-of-the-art DNN execution frameworks on mobile devices across 18 varied neural networks, including CNNs, Transformers with both local and global attention, as well as LLMs. In particular, compared to DNNFusion, SmartMem achieves an average speedup of 2.8$\times$, and outperforms TVM and MNN with speedups of 6.9$\times$ and 7.9$\times$, respectively, on average. | [
"['Wei Niu' 'Md Musfiqur Rahman Sanim' 'Zhihao Shu' 'Jiexiong Guan'\n 'Xipeng Shen' 'Miao Yin' 'Gagan Agrawal' 'Bin Ren']"
]
|
null | null | 2404.13530 | null | null | http://arxiv.org/pdf/2404.13530v1 | 2024-04-21T04:55:13Z | 2024-04-21T04:55:13Z | Listen Then See: Video Alignment with Speaker Attention | Video-based Question Answering (Video QA) is a challenging task and becomes even more intricate when addressing Socially Intelligent Question Answering (SIQA). SIQA requires context understanding, temporal reasoning, and the integration of multimodal information, but in addition, it requires processing nuanced human behavior. Furthermore, the complexities involved are exacerbated by the dominance of the primary modality (text) over the others. Thus, there is a need to help the task's secondary modalities to work in tandem with the primary modality. In this work, we introduce a cross-modal alignment and subsequent representation fusion approach that achieves state-of-the-art results (82.06% accuracy) on the Social IQ 2.0 dataset for SIQA. Our approach exhibits an improved ability to leverage the video modality by using the audio modality as a bridge with the language modality. This leads to enhanced performance by reducing the prevalent issue of language overfitting and resultant video modality bypassing encountered by current existing techniques. Our code and models are publicly available at https://github.com/sts-vlcc/sts-vlcc | [
"['Aviral Agrawal' 'Carlos Mateo Samudio Lezcano'\n 'Iqui Balam Heredia-Marin' 'Prabhdeep Singh Sethi']"
]
|
null | null | 2404.13557 | null | null | http://arxiv.org/pdf/2404.13557v1 | 2024-04-21T07:05:38Z | 2024-04-21T07:05:38Z | Preconditioned Neural Posterior Estimation for Likelihood-free Inference | Simulation-based inference (SBI) methods enable the estimation of posterior distributions when the likelihood function is intractable, but where model simulation is feasible. Popular neural approaches to SBI are the neural posterior estimator (NPE) and its sequential version (SNPE). These methods can outperform statistical SBI approaches such as approximate Bayesian computation (ABC), particularly for relatively small numbers of model simulations. However, we show in this paper that the NPE methods are not guaranteed to be highly accurate, even on problems with low dimension. In such settings the posterior cannot be accurately trained over the prior predictive space, and even the sequential extension remains sub-optimal. To overcome this, we propose preconditioned NPE (PNPE) and its sequential version (PSNPE), which use a short run of ABC to effectively eliminate regions of parameter space that produce a large discrepancy between simulations and data, allowing the posterior emulator to be more accurately trained. We present comprehensive empirical evidence that this melding of neural and statistical SBI methods improves performance over a range of examples, including a motivating example involving a complex agent-based model applied to real tumour growth data. | [
"['Xiaoyu Wang' 'Ryan P. Kelly' 'David J. Warne' 'Christopher Drovandi']"
]
|
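The preconditioning step is simple to sketch: run ABC rejection briefly, then train the neural estimator only over the surviving parameter region. The toy one-parameter simulator and the uniform refit of the accepted range below are illustrative stand-ins for the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Toy stochastic simulator: likelihood intractable in principle."""
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

y_obs = np.array([1.7])

# Step 1 -- preconditioning: short ABC rejection run over the full prior.
theta_prior = rng.uniform(-10, 10, size=(20000, 1))
dist = np.abs(simulate(theta_prior) - y_obs).ravel()
accepted = theta_prior[dist < np.quantile(dist, 0.01)]

# Step 2 -- train NPE on simulations drawn from the ABC-supported region
# instead of the full prior predictive space.
lo, hi = accepted.min(), accepted.max()
theta_train = rng.uniform(lo, hi, size=(5000, 1))
x_train = simulate(theta_train)
print(f"NPE training region narrowed from [-10, 10] to [{lo:.2f}, {hi:.2f}]")
```

Because the conditional density estimator no longer wastes capacity on parameter regions that cannot reproduce the data, the same simulation budget yields a sharper posterior fit.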
null | null | 2404.13571 | null | null | http://arxiv.org/pdf/2404.13571v1 | 2024-04-21T08:20:02Z | 2024-04-21T08:20:02Z | Test-Time Training on Graphs with Large Language Models (LLMs) | Graph Neural Networks have demonstrated great success in various fields of multimedia. However, the distribution shift between the training and test data challenges the effectiveness of GNNs. To mitigate this challenge, Test-Time Training (TTT) has been proposed as a promising approach. Traditional TTT methods require a demanding unsupervised training strategy to capture information from the test data that benefits the main task. Inspired by the great annotation ability of Large Language Models (LLMs) on Text-Attributed Graphs (TAGs), we propose to enhance the test-time training on graphs with LLMs as annotators. In this paper, we design a novel Test-Time Training pipeline, LLMTTT, which conducts the test-time adaptation under the annotations by LLMs on a carefully-selected node set. Specifically, LLMTTT introduces a hybrid active node selection strategy that considers not only node diversity and representativeness, but also prediction signals from the pre-trained model. Given annotations from LLMs, a two-stage training strategy is designed to tailor the test-time model with the limited and noisy labels. A theoretical analysis ensures the validity of our method and extensive experiments demonstrate that the proposed LLMTTT can achieve a significant performance improvement compared to existing Out-of-Distribution (OOD) generalization methods. | [
"['Jiaxin Zhang' 'Yiqi Wang' 'Xihong Yang' 'Siwei Wang' 'Yu Feng' 'Yu Shi'\n 'Ruicaho Ren' 'En Zhu' 'Xinwang Liu']"
]
|
null | null | 2404.13576 | null | null | http://arxiv.org/pdf/2404.13576v1 | 2024-04-21T08:28:52Z | 2024-04-21T08:28:52Z | I2CANSAY: Inter-Class Analogical Augmentation and Intra-Class
Significance Analysis for Non-Exemplar Online Task-Free Continual Learning | Online task-free continual learning (OTFCL) is a more challenging variant of continual learning which emphasizes the gradual shift of task boundaries and learns in an online mode. Existing methods rely on a memory buffer composed of old samples to prevent forgetting. However, the use of memory buffers not only raises privacy concerns but also hinders the efficient learning of new samples. To address this problem, we propose a novel framework called I2CANSAY that eliminates the dependence on memory buffers and efficiently learns the knowledge of new data from one-shot samples. Concretely, our framework comprises two main modules. Firstly, the Inter-Class Analogical Augmentation (ICAN) module generates diverse pseudo-features for old classes based on the inter-class analogy of feature distributions for different new classes, serving as a substitute for the memory buffer. Secondly, the Intra-Class Significance Analysis (ISAY) module analyzes the significance of attributes for each class via its distribution standard deviation, and generates the importance vector as a correction bias for the linear classifier, thereby enhancing the capability of learning from new samples. We run our experiments on four popular image classification datasets: CoRe50, CIFAR-10, CIFAR-100, and CUB-200; our approach outperforms the prior state-of-the-art by a large margin. | [
"['Songlin Dong' 'Yingjie Chen' 'Yuhang He' 'Yuhan Jin' 'Alex C. Kot'\n 'Yihong Gong']"
]
|
null | null | 2404.13584 | null | null | http://arxiv.org/pdf/2404.13584v1 | 2024-04-21T08:52:22Z | 2024-04-21T08:52:22Z | Rethink Arbitrary Style Transfer with Transformer and Contrastive
Learning | Arbitrary style transfer has attracted widespread attention in research and boasts numerous practical applications. The existing methods, which either employ cross-attention to incorporate deep style attributes into content attributes or use adaptive normalization to adjust content features, fail to generate high-quality stylized images. In this paper, we introduce an innovative technique to improve the quality of stylized images. Firstly, we propose Style Consistency Instance Normalization (SCIN), a method to refine the alignment between content and style features. In addition, we have developed an Instance-based Contrastive Learning (ICL) approach designed to understand the relationships among various styles, thereby enhancing the quality of the resulting stylized images. Recognizing that VGG networks are more adept at extracting classification features than at capturing style features, we have also introduced the Perception Encoder (PE) to capture style features. Extensive experiments demonstrate that our proposed method generates high-quality stylized images and effectively prevents artifacts compared with the existing state-of-the-art methods. | [
"['Zhanjie Zhang' 'Jiakai Sun' 'Guangyuan Li' 'Lei Zhao' 'Quanwei Zhang'\n 'Zehua Lan' 'Haolin Yin' 'Wei Xing' 'Huaizhong Lin' 'Zhiwen Zuo']"
]
|
null | null | 2404.13588 | null | null | http://arxiv.org/pdf/2404.13588v1 | 2024-04-21T09:09:21Z | 2024-04-21T09:09:21Z | Machine Unlearning via Null Space Calibration | Machine unlearning aims to enable models to forget specific data instances when receiving deletion requests. Current research centres on efficient unlearning to erase the influence of data from the model and neglects the subsequent impacts on the remaining data. Consequently, existing unlearning algorithms degrade the model's performance after unlearning, known as \textit{over-unlearning}. This paper addresses this critical yet under-explored issue by introducing machine \underline{U}nlearning via \underline{N}ull \underline{S}pace \underline{C}alibration (UNSC), which can accurately unlearn target samples without over-unlearning. On the contrary, by calibrating the decision space during unlearning, UNSC can significantly improve the model's performance on the remaining samples. In particular, our approach hinges on confining the unlearning process to a specified null space tailored to the remaining samples, which is augmented by strategically pseudo-labeling the unlearning samples. Comparative analyses against several established baselines affirm the superiority of our approach. Code is released at https://github.com/HQC-ML/Machine-Unlearning-via-Null-Space-Calibration. | [
"['Huiqiang Chen' 'Tianqing Zhu' 'Xin Yu' 'Wanlei Zhou']"
]
|
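The null-space idea admits a compact sketch: project the unlearning update onto directions orthogonal to the retained samples' features, so retained outputs are (to first order) untouched. This linear-feature version is a simplification of what the paper does layer-wise in a deep network.

```python
import numpy as np

def project_to_null_space(update, F, tol=1e-6):
    """Project an update vector onto the null space of the retained-data
    feature matrix F, so that F @ update_projected ~ 0."""
    _, s, vt = np.linalg.svd(F, full_matrices=True)
    rank = int((s > tol).sum())
    null_basis = vt[rank:]                  # rows spanning null(F)
    return null_basis.T @ (null_basis @ update)

rng = np.random.default_rng(0)
F = rng.normal(size=(50, 100))    # features of 50 retained samples, dim 100
g = rng.normal(size=100)          # raw gradient step computed for unlearning
g_ns = project_to_null_space(g, F)
print(np.abs(F @ g_ns).max())     # ~1e-14: retained outputs unchanged
print(np.linalg.norm(g_ns))       # the component still free to forget with
```

Since the feature dimension (100) exceeds the number of retained constraints (50), a 50-dimensional null space remains in which the model can forget the target samples without over-unlearning.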
null | null | 2404.13591 | null | null | http://arxiv.org/pdf/2404.13591v2 | 2024-04-24T22:32:10Z | 2024-04-21T09:15:02Z | MARVEL: Multidimensional Abstraction and Reasoning through Visual
Evaluation and Learning | While multi-modal large language models (MLLMs) have shown significant progress on many popular visual reasoning benchmarks, whether they possess abstract visual reasoning abilities remains an open question. Similar to the Sudoku puzzles, abstract visual reasoning (AVR) problems require finding high-level patterns (e.g., repetition constraints) that control the input shapes (e.g., digits) in a specific task configuration (e.g., matrix). However, existing AVR benchmarks only considered a limited set of patterns (addition, conjunction), input shapes (rectangle, square), and task configurations (3 by 3 matrices). To evaluate MLLMs' reasoning abilities comprehensively, we introduce MARVEL, a multidimensional AVR benchmark with 770 puzzles composed of six core knowledge patterns, geometric and abstract shapes, and five different task configurations. To inspect whether the model accuracy is grounded in perception and reasoning, MARVEL complements the general AVR question with perception questions in a hierarchical evaluation framework. We conduct comprehensive experiments on MARVEL with nine representative MLLMs in zero-shot and few-shot settings. Our experiments reveal that all models show near-random performance on the AVR question, with significant performance gaps (40%) compared to humans across all patterns and task configurations. Further analysis of perception questions reveals that MLLMs struggle to comprehend the visual features (near-random performance) and even count the panels in the puzzle (<45%), hindering their ability for abstract reasoning. We release our entire code and dataset. | [
"['Yifan Jiang' 'Jiarui Zhang' 'Kexuan Sun' 'Zhivar Sourati'\n 'Kian Ahrabian' 'Kaixin Ma' 'Filip Ilievski' 'Jay Pujara']"
]
|
null | null | 2404.13604 | null | null | http://arxiv.org/pdf/2404.13604v2 | 2024-06-05T12:54:39Z | 2024-04-21T10:26:13Z | CKGConv: General Graph Convolution with Continuous Kernels | The existing definitions of graph convolution, either from spatial or spectral perspectives, are inflexible and not unified. Defining a general convolution operator in the graph domain is challenging due to the lack of canonical coordinates, the presence of irregular structures, and the properties of graph symmetries. In this work, we propose a novel and general graph convolution framework by parameterizing the kernels as continuous functions of pseudo-coordinates derived via graph positional encoding. We name this Continuous Kernel Graph Convolution (CKGConv). Theoretically, we demonstrate that CKGConv is flexible and expressive. CKGConv encompasses many existing graph convolutions, and exhibits a stronger expressiveness, as powerful as graph transformers in terms of distinguishing non-isomorphic graphs. Empirically, we show that CKGConv-based Networks outperform existing graph convolutional networks and perform comparably to the best graph transformers across a variety of graph datasets. The code and models are publicly available at https://github.com/networkslab/CKGConv. | [
"['Liheng Ma' 'Soumyasundar Pal' 'Yitian Zhang' 'Jiaming Zhou'\n 'Yingxue Zhang' 'Mark Coates']"
]
|
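The core operation is easy to sketch: an MLP maps each edge's pseudo-coordinates (differences of positional encodings) to a kernel weight applied to the source node's features. The positional encoding choice and network sizes below are illustrative; the paper's full model adds normalization and deeper kernels.

```python
import torch
import torch.nn as nn

class ContinuousKernelConv(nn.Module):
    """Graph convolution whose kernel is a continuous function of
    pseudo-coordinates, instead of a fixed weight per relative position."""
    def __init__(self, dim: int, pe_dim: int):
        super().__init__()
        self.kernel = nn.Sequential(nn.Linear(pe_dim, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, h, pe, edge_index):
        src, dst = edge_index                    # directed edges src -> dst
        coords = pe[src] - pe[dst]               # pseudo-coordinates per edge
        msg = self.kernel(coords) * h[src]       # kernel-modulated messages
        return torch.zeros_like(h).index_add_(0, dst, msg)

n, dim, pe_dim = 5, 16, 4
conv = ContinuousKernelConv(dim, pe_dim)
h = torch.randn(n, dim)                          # node features
pe = torch.randn(n, pe_dim)                      # e.g. Laplacian eigenvector PE
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(conv(h, pe, edge_index).shape)             # torch.Size([5, 16])
```

Because the kernel is a function rather than a lookup table, the same parameters handle arbitrary graphs and positional encodings — the property that lets CKGConv subsume many fixed graph convolutions.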
null | null | 2404.13613 | null | null | http://arxiv.org/pdf/2404.13613v1 | 2024-04-21T10:49:41Z | 2024-04-21T10:49:41Z | The Branch Not Taken: Predicting Branching in Online Conversations | Multi-participant discussions tend to unfold in a tree structure rather than a chain structure. Branching may occur for multiple reasons -- from the asynchronous nature of online platforms to a conscious decision by an interlocutor to disengage with part of the conversation. Predicting branching and understanding the reasons for creating new branches is important for many downstream tasks such as summarization and thread disentanglement and may help develop online spaces that encourage users to engage in online discussions in more meaningful ways. In this work, we define the novel task of branch prediction and propose GLOBS (Global Branching Score) -- a deep neural network model for predicting branching. GLOBS is evaluated on three large discussion forums from Reddit, achieving significant improvements over an array of competitive baselines and demonstrating better transferability. We affirm that structural, temporal, and linguistic features contribute to GLOBS success and find that branching is associated with a greater number of conversation participants and tends to occur in earlier levels of the conversation tree. We publicly release GLOBS and our implementation of all baseline models to allow reproducibility and promote further research on this important task. | [
"['Shai Meital' 'Lior Rokach' 'Roman Vainshtein' 'Nir Grinberg']"
]
|
null | null | 2404.13621 | null | null | http://arxiv.org/abs/2404.13621v3 | 2024-06-18T01:40:23Z | 2024-04-21T11:21:27Z | Attack on Scene Flow using Point Clouds | Deep neural networks have made significant advancements in accurately estimating scene flow using point clouds, which is vital for many applications like video analysis, action recognition, and navigation. The robustness of these techniques, however, remains a concern, particularly in the face of adversarial attacks that have been proven to deceive state-of-the-art deep neural networks in many domains. Surprisingly, the robustness of scene flow networks against such attacks has not been thoroughly investigated. To address this problem, the proposed approach aims to bridge this gap by introducing adversarial white-box attacks specifically tailored for scene flow networks. Experimental results show that the generated adversarial examples obtain up to 33.7 relative degradation in average end-point error on the KITTI and FlyingThings3D datasets. The study also reveals the significant impact that attacks targeting point clouds in only one dimension or color channel have on average end-point error. Analyzing the success and failure of these attacks on the scene flow networks and their 2D optical flow network variants shows a higher vulnerability for the optical flow networks. | [
"['Haniyeh Ehsani Oskouie' 'Mohammad-Shahram Moin' 'Shohreh Kasaei']"
]
|
null | null | 2404.13628 | null | null | http://arxiv.org/pdf/2404.13628v1 | 2024-04-21T11:59:53Z | 2024-04-21T11:59:53Z | Mixture of LoRA Experts | LoRA has gained widespread acceptance in the fine-tuning of large pre-trained models to cater to a diverse array of downstream tasks, showcasing notable effectiveness and efficiency, thereby solidifying its position as one of the most prevalent fine-tuning techniques. Due to the modular nature of LoRA's plug-and-play plugins, researchers have delved into the amalgamation of multiple LoRAs to empower models to excel across various downstream tasks. Nonetheless, extant approaches for LoRA fusion grapple with inherent challenges. Direct arithmetic merging may result in the loss of the original pre-trained model's generative capabilities or the distinct identity of LoRAs, thereby yielding suboptimal outcomes. On the other hand, Reference tuning-based fusion exhibits limitations concerning the requisite flexibility for the effective combination of multiple LoRAs. In response to these challenges, this paper introduces the Mixture of LoRA Experts (MoLE) approach, which harnesses hierarchical control and unfettered branch selection. The MoLE approach not only achieves superior LoRA fusion performance in comparison to direct arithmetic merging but also retains the crucial flexibility for combining LoRAs effectively. Extensive experimental evaluations conducted in both the Natural Language Processing (NLP) and Vision & Language (V&L) domains substantiate the efficacy of MoLE. | [
"['Xun Wu' 'Shaohan Huang' 'Furu Wei']"
]
|
null | null | 2404.13630 | null | null | http://arxiv.org/abs/2404.13630v2 | 2024-05-03T13:07:18Z | 2024-04-21T12:06:05Z | Utilizing Deep Learning to Optimize Software Development Processes | This study explores the application of deep learning technologies in software development processes, particularly in automating code reviews, error prediction, and test generation to enhance code quality and development efficiency. Through a series of empirical studies, experimental groups using deep learning tools and control groups using traditional methods were compared in terms of code error rates and project completion times. The results demonstrated significant improvements in the experimental group, validating the effectiveness of deep learning technologies. The research also discusses potential optimization points, methodologies, and technical challenges of deep learning in software development, as well as how to integrate these technologies into existing software development workflows. | [
"['Keqin Li' 'Armando Zhu' 'Peng Zhao' 'Jintong Song' 'Jiabei Liu']"
]
|
null | null | 2404.13631 | null | null | http://arxiv.org/pdf/2404.13631v1 | 2024-04-21T12:11:03Z | 2024-04-21T12:11:03Z | Fermi-Bose Machine | Distinct from human cognitive processing, deep neural networks trained by backpropagation can be easily fooled by adversarial examples. To design semantically meaningful representation learning, we discard backpropagation, and instead propose a local contrastive learning, where the representations of inputs bearing the same label shrink together (akin to bosons) in hidden layers, while those of different labels repel (akin to fermions). This layer-wise learning is local in nature and biologically plausible. A statistical mechanics analysis shows that the target fermion-pair-distance is a key parameter. Moreover, the application of this local contrastive learning to the MNIST benchmark dataset demonstrates that the adversarial vulnerability of a standard perceptron can be greatly mitigated by tuning the target distance, i.e., controlling the geometric separation of prototype manifolds. | [
"['Mingshan Xie' 'Yuchen Wang' 'Haiping Huang']"
]
|
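A minimal sketch of such a layer-local objective: same-label pairs are pulled together while different-label pairs are driven toward a target separation, and the gradient never crosses layers. The exact functional form in the paper may differ; this is the generic attract/repel structure.

```python
import torch

def fermi_bose_loss(h, labels, target_dist=2.0, eps=1e-9):
    """Layer-local contrastive loss: bosonic attraction of same-label pairs,
    fermionic repulsion of different-label pairs toward target_dist."""
    d2 = (h[:, None, :] - h[None, :, :]).pow(2).sum(-1)   # squared distances
    d = (d2 + eps).sqrt()
    eye = torch.eye(len(h))
    same = (labels[:, None] == labels[None, :]).float() - eye   # drop self-pairs
    diff = 1.0 - same - eye
    attract = (same * d2).sum() / same.sum().clamp(min=1.0)
    repel = (diff * (d - target_dist).pow(2)).sum() / diff.sum().clamp(min=1.0)
    return attract + repel

h = torch.randn(32, 10, requires_grad=True)   # one hidden layer's outputs
y = torch.randint(0, 10, (32,))
loss = fermi_bose_loss(h, y)
loss.backward()                               # gradient stays within this layer
print(loss.item())
```

Tuning `target_dist` controls the geometric separation of the class prototype manifolds, which the statistical-mechanics analysis links to adversarial robustness.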
null | null | 2404.13634 | null | null | http://arxiv.org/abs/2404.13634v3 | 2024-04-26T05:02:53Z | 2024-04-21T12:16:38Z | Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming
Generative Adversarial Networks | Synthetic data generation offers a promising solution to enhance the usefulness of Electronic Healthcare Records (EHR) by generating realistic de-identified data. However, the existing literature primarily focuses on the quality of synthetic health data, neglecting the crucial aspect of fairness in downstream predictions. Consequently, models trained on synthetic EHR have faced criticism for producing biased outcomes in target tasks. These biases can arise from either spurious correlations between features or the failure of models to accurately represent sub-groups. To address these concerns, we present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based synthetic data generator specifically designed for the healthcare domain. In order to tackle spurious correlations (i), we propose an information-constrained Data Generation Process that enables the generator to learn a fair deterministic transformation based on a well-defined notion of algorithmic fairness. To overcome the challenge of capturing exact sub-group representations (ii), we incentivize the generator to preserve sub-group densities through score-based weighted sampling. This approach compels the generator to learn from underrepresented regions of the data manifold. We conduct extensive experiments using the MIMIC-III database. Our results demonstrate that Bt-GAN achieves SOTA accuracy while significantly improving fairness and minimizing bias amplification. We also perform an in-depth explainability analysis to provide additional evidence supporting the validity of our study. In conclusion, our research introduces a novel and professional approach to addressing the limitations of synthetic data generation in the healthcare domain. By incorporating fairness considerations and leveraging advanced techniques such as GANs, we pave the way for more reliable and unbiased predictions in healthcare applications. | [
"['Resmi Ramachandranpillai' 'Md Fahim Sikder' 'David Bergström'\n 'Fredrik Heintz']"
]
|
null | null | 2404.13646 | null | null | http://arxiv.org/pdf/2404.13646v1 | 2024-04-21T12:41:30Z | 2024-04-21T12:41:30Z | Physics-informed Mesh-independent Deep Compositional Operator Network | Solving parametric Partial Differential Equations (PDEs) for a broad range of parameters is a critical challenge in scientific computing. To this end, neural operators, which learn mappings from parameters to solutions, have been successfully used. However, the training of neural operators typically demands large training datasets, the acquisition of which can be prohibitively expensive. To address this challenge, physics-informed training can offer a cost-effective strategy. However, current physics-informed neural operators face limitations, either in handling irregular domain shapes or in generalization to various discretizations of PDE parameters with variable mesh sizes. In this research, we introduce a novel physics-informed model architecture which can generalize to parameter discretizations of variable size and irregular domain shapes. Particularly, inspired by deep operator neural networks, our model repeatedly applies discretization-independent learning of the parameter embedding, and this parameter embedding is integrated with the response embeddings through multiple compositional layers for greater expressivity. Numerical results demonstrate the accuracy and efficiency of the proposed method. | [
"['Weiheng Zhong' 'Hadi Meidani']"
]
|
null | null | 2404.13647 | null | null | http://arxiv.org/pdf/2404.13647v1 | 2024-04-21T12:49:12Z | 2024-04-21T12:49:12Z | Mean Aggregator Is More Robust Than Robust Aggregators Under Label
Poisoning Attacks | Robustness to malicious attacks is of paramount importance for distributed learning. Existing works often consider the classical Byzantine attack model, which assumes that some workers can send arbitrarily malicious messages to the server and disturb the aggregation steps of the distributed learning process. To defend against such worst-case Byzantine attacks, various robust aggregators have been proven effective and much superior to the often-used mean aggregator. In this paper, we show that robust aggregators are too conservative for a class of weak but practical malicious attacks, known as label poisoning attacks, where the sample labels of some workers are poisoned. Surprisingly, we are able to show that the mean aggregator is more robust than the state-of-the-art robust aggregators in theory, given that the distributed data are sufficiently heterogeneous. In fact, the learning error of the mean aggregator is proven to be order-optimal. Experimental results corroborate our theoretical findings, demonstrating the superiority of the mean aggregator under label poisoning attacks. | [
"['Jie Peng' 'Weiyu Li' 'Qing Ling']"
]
|
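The paper's counterintuitive claim is easy to probe numerically: under heterogeneous honest gradients and a small fraction of sign-flipped (label-poisoned) gradients, compare the mean with a classic robust aggregator. The toy gradient model below is an illustration of the regime the theory describes, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_honest, n_poisoned = 10, 18, 2
true_grad = rng.normal(size=d)

# Heterogeneous honest gradients; label poisoning roughly flips the sign
# of the poisoned workers' gradients rather than making them arbitrary.
honest = true_grad + rng.normal(scale=2.0, size=(n_honest, d))
poisoned = -true_grad + rng.normal(scale=2.0, size=(n_poisoned, d))
grads = np.vstack([honest, poisoned])

mean_agg = grads.mean(axis=0)
median_agg = np.median(grads, axis=0)   # coordinate-wise median (robust)

print("mean error:  ", np.linalg.norm(mean_agg - true_grad))
print("median error:", np.linalg.norm(median_agg - true_grad))
```

With sufficiently heterogeneous data, the robust aggregator's extra variance outweighs the small bias that the bounded label-poisoning perturbation injects into the mean — the effect the paper proves in order-optimal form.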
null | null | 2404.13648 | null | null | http://arxiv.org/pdf/2404.13648v1 | 2024-04-21T12:50:38Z | 2024-04-21T12:50:38Z | Data-independent Module-aware Pruning for Hierarchical Vision
Transformers | Hierarchical vision transformers (ViTs) have two advantages over conventional ViTs. First, hierarchical ViTs achieve linear computational complexity with respect to image size by local self-attention. Second, hierarchical ViTs create hierarchical feature maps by merging image patches in deeper layers for dense prediction. However, existing pruning methods ignore the unique properties of hierarchical ViTs and use the magnitude value as the weight importance. This approach leads to two main drawbacks. First, the "local" attention weights are compared at a "global" level, which may cause some "locally" important weights to be pruned due to their relatively small magnitude "globally". The second issue with magnitude pruning is that it fails to consider the distinct weight distributions of the network, which are essential for extracting coarse to fine-grained features at various hierarchical levels. To solve the aforementioned issues, we have developed a Data-independent Module-Aware Pruning method (DIMAP) to compress hierarchical ViTs. To ensure that "local" attention weights at different hierarchical levels are compared fairly in terms of their contribution, we treat them as a module and examine their contribution by analyzing their information distortion. Furthermore, we introduce a novel weight metric that is solely based on weights and does not require input images, thereby eliminating the dependence on the patch merging process. Our method validates its usefulness and strengths on Swin Transformers of different sizes on ImageNet-1k classification. Notably, the top-5 accuracy drop is only 0.07% when we remove 52.5% FLOPs and 52.7% parameters of Swin-B. When we reduce 33.2% FLOPs and 33.2% parameters of Swin-S, we can even achieve a 0.8% higher relative top-5 accuracy than the original model. Code is available at: https://github.com/he-y/Data-independent-Module-Aware-Pruning | [
"['Yang He' 'Joey Tianyi Zhou']"
]
|
null | null | 2404.13649 | null | null | http://arxiv.org/pdf/2404.13649v1 | 2024-04-21T12:52:04Z | 2024-04-21T12:52:04Z | Distributional Principal Autoencoders | Dimension reduction techniques usually lose information in the sense that reconstructed data are not identical to the original data. However, we argue that it is possible to have reconstructed data identically distributed as the original data, irrespective of the retained dimension or the specific mapping. This can be achieved by learning a distributional model that matches the conditional distribution of data given its low-dimensional latent variables. Motivated by this, we propose Distributional Principal Autoencoder (DPA) that consists of an encoder that maps high-dimensional data to low-dimensional latent variables and a decoder that maps the latent variables back to the data space. For reducing the dimension, the DPA encoder aims to minimise the unexplained variability of the data with an adaptive choice of the latent dimension. For reconstructing data, the DPA decoder aims to match the conditional distribution of all data that are mapped to a certain latent value, thus ensuring that the reconstructed data retains the original data distribution. Our numerical results on climate data, single-cell data, and image benchmarks demonstrate the practical feasibility and success of the approach in reconstructing the original distribution of the data. DPA embeddings are shown to preserve meaningful structures of data such as the seasonal cycle for precipitations and cell types for gene expression. | [
"['Xinwei Shen' 'Nicolai Meinshausen']"
]
|
null | null | 2404.13652 | null | null | http://arxiv.org/pdf/2404.13652v1 | 2024-04-21T13:04:58Z | 2024-04-21T13:04:58Z | BANSAI: Towards Bridging the AI Adoption Gap in Industrial Robotics with
Neurosymbolic Programming | Over the past decade, deep learning helped solve manipulation problems across all domains of robotics. At the same time, industrial robots continue to be programmed overwhelmingly using traditional program representations and interfaces. This paper undertakes an analysis of this "AI adoption gap" from an industry practitioner's perspective. In response, we propose the BANSAI approach (Bridging the AI Adoption Gap via Neurosymbolic AI). It systematically leverages principles of neurosymbolic AI to establish data-driven, subsymbolic program synthesis and optimization in modern industrial robot programming workflow. BANSAI conceptually unites several lines of prior research and proposes a path toward practical, real-world validation. | [
"['Benjamin Alt' 'Julia Dvorak' 'Darko Katic' 'Rainer Jäkel'\n 'Michael Beetz' 'Gisela Lanza']"
]
|
null | null | 2404.13655 | null | null | http://arxiv.org/pdf/2404.13655v2 | 2024-04-29T16:21:25Z | 2024-04-21T13:11:59Z | SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph
Convolution and Pooling | Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks. GNNs effectively implement node representation learning through neighborhood aggregation and achieve impressive results in many graph-related tasks. However, most neighborhood aggregation approaches are summation-based, which can be problematic as they may not be sufficiently expressive to encode informative graph structures. Furthermore, though the graph pooling module is also of vital importance for graph learning, especially for the task of graph classification, research on graph down-sampling mechanisms is rather limited. To address the above challenges, we propose a concatenation-based graph convolution mechanism that injectively updates node representations to maximize the discriminative power in distinguishing non-isomorphic subgraphs. In addition, we design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner. WL-SortPool layer-wise sorts node representations (i.e. continuous WL colors) to separately learn the relative importance of subtrees with different depths for the purpose of classification, thus better characterizing the complex graph topology and rich information encoded in the graph. We propose a novel Subgraph Pattern GNN (SPGNN) architecture that incorporates these enhancements. We test the proposed SPGNN architecture on many graph classification benchmarks. Experimental results show that our method can achieve highly competitive results with state-of-the-art graph kernels and other GNN approaches. | [
"['Zehao Dong' 'Muhan Zhang' 'Yixin Chen']"
]
|
null | null | 2404.13663 | null | null | http://arxiv.org/pdf/2404.13663v2 | 2024-05-02T02:58:13Z | 2024-04-21T13:51:31Z | Cumulative Hazard Function Based Efficient Multivariate Temporal Point
Process Learning | Most existing temporal point process models are characterized by the conditional intensity function. These models often require numerical approximation methods for likelihood evaluation, which potentially hurts their performance. By directly modelling the integral of the intensity function, i.e., the cumulative hazard function (CHF), the likelihood can be evaluated accurately, making it a promising approach. However, existing CHF-based methods are not well-defined, i.e., the mathematical constraints of the CHF are not completely satisfied, leading to untrustworthy results. For multivariate temporal point processes, most existing methods model intensity (or density, etc.) functions for each variate, limiting scalability. In this paper, we explore using neural networks to model a flexible but well-defined CHF and learning the multivariate temporal point process with low parameter complexity. Experimental results on six datasets show that the proposed model achieves the state-of-the-art performance on data fitting and event prediction tasks while having significantly fewer parameters and memory usage than the strong competitors. The source code and data can be obtained from https://github.com/lbq8942/NPP. | [
"['Bingqing Liu']"
]
|
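The appeal of the CHF parameterization can be sketched directly: build Lambda(tau) as a monotone network with Lambda(0) = 0, obtain the intensity by autograd, and the log-likelihood needs no quadrature. The tiny monotone architecture below is one way to satisfy the constraints, not the paper's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneCHF(nn.Module):
    """Cumulative hazard Lambda(tau): non-negative, non-decreasing,
    Lambda(0) = 0, and unbounded as tau grows -- the constraints a
    well-defined CHF must satisfy."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1) * 0.5)
        self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.5)
        self.c = nn.Parameter(torch.zeros(1))

    def forward(self, tau):
        h = torch.tanh(tau @ F.softplus(self.w1).T)   # tanh(0)=0 => Lambda(0)=0
        return h @ F.softplus(self.w2).T + tau * F.softplus(self.c)  # unbounded tail

chf = MonotoneCHF()
tau = torch.rand(5, 1, requires_grad=True)            # inter-event times
Lam = chf(tau)
lam = torch.autograd.grad(Lam.sum(), tau, create_graph=True)[0]  # intensity dLambda/dtau
log_lik = (lam.log() - Lam).sum()   # log f(tau) = log lambda(tau) - Lambda(tau)
log_lik.backward()
print(log_lik.item())
```

Positive weights and increasing activations guarantee a positive intensity, so the exact density f(tau) = lambda(tau) * exp(-Lambda(tau)) is available without the numerical integration that intensity-based models need.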
null | null | 2404.13669 | null | null | http://arxiv.org/pdf/2404.13669v1 | 2024-04-21T14:18:49Z | 2024-04-21T14:18:49Z | Rate Analysis of Coupled Distributed Stochastic Approximation for
Misspecified Optimization | We consider an $n$-agent distributed optimization problem with imperfect information characterized in a parametric sense, where the unknown parameter can be solved for by a distinct distributed parameter learning problem. Though each agent only has access to its local parameter learning and computational problem, the agents aim to collaboratively minimize the average of their local cost functions. To address this special optimization problem, we propose a coupled distributed stochastic approximation algorithm, in which every agent updates its current beliefs of the unknown parameter and the decision variable by a stochastic approximation method, and then averages the beliefs and decision variables of its neighbors over the network via a consensus protocol. Our interest lies in the convergence analysis of this algorithm. We quantitatively characterize the factors that affect the algorithm's performance, and prove that the mean-squared error of the decision variable is bounded by $\mathcal{O}(\frac{1}{nk})+\mathcal{O}\left(\frac{1}{\sqrt{n}(1-\rho_w)}\right)\frac{1}{k^{1.5}}+\mathcal{O}\big(\frac{1}{(1-\rho_w)^2}\big)\frac{1}{k^2}$, where $k$ is the iteration count and $(1-\rho_w)$ is the spectral gap of the network weighted adjacency matrix. This reveals that the network connectivity characterized by $(1-\rho_w)$ only influences the higher-order terms of the convergence rate, while the dominant rate remains the same as that of the centralized algorithm. In addition, we show that the number of transient iterations needed to reach the dominant rate $\mathcal{O}(\frac{1}{nk})$ is $\mathcal{O}(\frac{n}{(1-\rho_w)^2})$. Numerical experiments are carried out to demonstrate the theoretical results by taking different CPUs as agents, which is more applicable to real-world distributed scenarios. | [
"['Yaqun Yang' 'Jinlong Lei']"
]
|
null | null | 2404.13671 | null | null | http://arxiv.org/pdf/2404.13671v1 | 2024-04-21T14:22:04Z | 2024-04-21T14:22:04Z | FiLo: Zero-Shot Anomaly Detection by Fine-Grained Description and
High-Quality Localization | Zero-shot anomaly detection (ZSAD) methods entail detecting anomalies directly without access to any known normal or abnormal samples within the target item categories. Existing approaches typically rely on the robust generalization capabilities of multimodal pretrained models, computing similarities between manually crafted textual features representing "normal" or "abnormal" semantics and image features to detect anomalies and localize anomalous patches. However, the generic descriptions of "abnormal" often fail to precisely match diverse types of anomalies across different object categories. Additionally, computing feature similarities for single patches struggles to pinpoint specific locations of anomalies with various sizes and scales. To address these issues, we propose a novel ZSAD method called FiLo, comprising two components: adaptively learned Fine-Grained Description (FG-Des) and position-enhanced High-Quality Localization (HQ-Loc). FG-Des introduces fine-grained anomaly descriptions for each category using Large Language Models (LLMs) and employs adaptively learned textual templates to enhance the accuracy and interpretability of anomaly detection. HQ-Loc, utilizing Grounding DINO for preliminary localization, position-enhanced text prompts, and Multi-scale Multi-shape Cross-modal Interaction (MMCI) module, facilitates more accurate localization of anomalies of different sizes and shapes. Experimental results on datasets like MVTec and VisA demonstrate that FiLo significantly improves the performance of ZSAD in both detection and localization, achieving state-of-the-art performance with an image-level AUC of 83.9% and a pixel-level AUC of 95.9% on the VisA dataset. | [
"['Zhaopeng Gu' 'Bingke Zhu' 'Guibo Zhu' 'Yingying Chen' 'Hao Li'\n 'Ming Tang' 'Jinqiao Wang']"
]
|
null | null | 2404.13682 | null | null | http://arxiv.org/pdf/2404.13682v1 | 2024-04-21T14:53:33Z | 2024-04-21T14:53:33Z | Reproducible data science over data lakes: replayable data pipelines
with Bauplan and Nessie | As the Lakehouse architecture becomes more widespread, ensuring the reproducibility of data workloads over data lakes emerges as a crucial concern for data engineers. However, achieving reproducibility remains challenging. The size of data pipelines contributes to slow testing and iterations, while the intertwining of business logic and data management complicates debugging and increases error susceptibility. In this paper, we highlight recent advancements made at Bauplan in addressing this challenge. We introduce a system designed to decouple compute from data management, by leveraging a cloud runtime alongside Nessie, an open-source catalog with Git semantics. Demonstrating the system's capabilities, we showcase its ability to offer time-travel and branching semantics on top of object storage, and offer full pipeline reproducibility with a few CLI commands. | [
"['Jacopo Tagliabue' 'Ciro Greco']"
]
|
null | null | 2404.13690 | null | null | http://arxiv.org/abs/2404.13690v1 | 2024-04-21T15:33:17Z | 2024-04-21T15:33:17Z | Detecting Compromised IoT Devices Using Autoencoders with Sequential
Hypothesis Testing | IoT devices fundamentally lack built-in security mechanisms to protect themselves from security attacks. Existing works on improving IoT security mostly focus on detecting anomalous behaviors of IoT devices. However, these existing anomaly detection schemes may trigger an overwhelmingly large number of false alerts, rendering them unusable in detecting compromised IoT devices. In this paper we develop an effective and efficient framework, named CUMAD, to detect compromised IoT devices. Instead of directly relying on individual anomalous events, CUMAD aims to accumulate sufficient evidence in detecting compromised IoT devices, by integrating an autoencoder-based anomaly detection subsystem with a sequential probability ratio test (SPRT)-based sequential hypothesis testing subsystem. CUMAD can effectively reduce the number of false alerts in detecting compromised IoT devices, and moreover, it can detect compromised IoT devices quickly. Our evaluation studies based on the public-domain N-BaIoT dataset show that CUMAD can on average reduce the false positive rate from about 3.57% using only the autoencoder-based anomaly detection scheme to about 0.5%; in addition, CUMAD can detect compromised IoT devices quickly, with less than 5 observations on average. | [
"['Md Mainuddin' 'Zhenhai Duan' 'Yingfei Dong']"
]
|
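The accumulation logic is classical and compact: treat each detection window's autoencoder alert as a Bernoulli observation and run Wald's SPRT. The alert probabilities under the benign and compromised hypotheses below are illustrative, not CUMAD's fitted values.

```python
import numpy as np

def sprt(alerts, p0=0.01, p1=0.3, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test over binary anomaly alerts.
    H0: benign device (alert prob p0); H1: compromised (alert prob p1)."""
    upper = np.log((1 - beta) / alpha)      # accept H1 above this
    lower = np.log(beta / (1 - alpha))      # accept H0 below this
    llr = 0.0
    for i, a in enumerate(alerts, start=1):
        llr += np.log(p1 / p0) if a else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "compromised", i
        if llr <= lower:
            return "benign", i
    return "undecided", len(alerts)

rng = np.random.default_rng(0)
print(sprt(rng.random(100) < 0.30))  # alert rate matching H1 -> fast detection
print(sprt(rng.random(100) < 0.01))  # benign device with rare false alerts
```

A single false alert barely moves the log-likelihood ratio, which is exactly how evidence accumulation suppresses the false-positive rate that raw per-window detection suffers from.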
null | null | 2404.13698 | null | null | http://arxiv.org/pdf/2404.13698v1 | 2024-04-21T15:53:06Z | 2024-04-21T15:53:06Z | Resampling-free Particle Filters in High-dimensions | State estimation is crucial for the performance and safety of numerous robotic applications. Among the suite of estimation techniques, particle filters have been identified as a powerful solution due to their non-parametric nature. Yet, in high-dimensional state spaces, these filters face challenges such as 'particle deprivation' which hinders accurate representation of the true posterior distribution. This paper introduces a novel resampling-free particle filter designed to mitigate particle deprivation by forgoing the traditional resampling step. This ensures a broader and more diverse particle set, especially vital in high-dimensional scenarios. Theoretically, our proposed filter is shown to offer a near-accurate representation of the desired posterior distribution in high-dimensional contexts. Empirically, the effectiveness of our approach is underscored through a high-dimensional synthetic state estimation task and a 6D pose estimation derived from videos. We posit that as robotic systems evolve with greater degrees of freedom, particle filters tailored for high-dimensional state spaces will be indispensable. | [
"['Akhilan Boopathy' 'Aneesh Muppidi' 'Peggy Yang' 'Abhiram Iyer'\n 'William Yue' 'Ila Fiete']"
]
|
null | null | 2404.13701 | null | null | http://arxiv.org/pdf/2404.13701v1 | 2024-04-21T16:05:38Z | 2024-04-21T16:05:38Z | Semantic-Rearrangement-Based Multi-Level Alignment for Domain
Generalized Segmentation | Domain generalized semantic segmentation is an essential computer vision task, in which models leverage only source data to learn generalized semantic segmentation for unseen target domains. Previous works typically address this challenge by global style randomization or feature regularization. In this paper, we argue that, given the observation that different local semantic regions exhibit different visual characteristics from the source domain to the target domain, methods focusing on global operations struggle to capture such regional discrepancies, thus failing to construct domain-invariant representations with consistency from the local to the global level. Therefore, we propose the Semantic-Rearrangement-based Multi-Level Alignment (SRMA) to overcome this problem. SRMA first incorporates a Semantic Rearrangement Module (SRM), which conducts semantic region randomization to enhance the diversity of the source domain sufficiently. A Multi-Level Alignment module (MLA) is subsequently proposed with the help of such diversity to establish the global-regional-local consistent domain-invariant representations. By aligning features across randomized samples with domain-neutral knowledge at multiple levels, SRMA provides a more robust way to handle the source-target domain gap. Extensive experiments demonstrate the superiority of SRMA over the current state-of-the-art works on various benchmarks. | [
"['Guanlong Jiao' 'Chenyangguang Zhang' 'Haonan Yin' 'Yu Mo' 'Biqing Huang'\n 'Hui Pan' 'Yi Luo' 'Jingxian Liu']"
]
|
null | null | 2404.13702 | null | null | http://arxiv.org/pdf/2404.13702v1 | 2024-04-21T16:16:56Z | 2024-04-21T16:16:56Z | Learning Galaxy Intrinsic Alignment Correlations | The intrinsic alignments (IA) of galaxies, regarded as a contaminant in weak lensing analyses, represent the correlation of galaxy shapes due to gravitational tidal interactions and galaxy formation processes. As such, understanding IA is paramount for accurate cosmological inferences from weak lensing surveys; however, one limitation to our understanding and mitigation of IA is expensive simulation-based modeling. In this work, we present a deep learning approach to emulate galaxy position-position ($\xi$), position-orientation ($\omega$), and orientation-orientation ($\eta$) correlation function measurements and uncertainties from halo occupation distribution-based mock galaxy catalogs. We find strong Pearson correlation values with the model across all three correlation functions and further predict aleatoric uncertainties through a mean-variance estimation training procedure. $\xi(r)$ predictions are generally accurate to $\leq 10\%$. Our model also successfully captures the underlying signal of the noisier correlations $\omega(r)$ and $\eta(r)$, although with a lower average accuracy. We find that the model performance is inhibited by the stochasticity of the data, and will benefit from correlations averaged over multiple data realizations. Our code will be made open source upon journal publication. | [
"['Sneh Pandya' 'Yuanyuan Yang' 'Nicholas Van Alfen' 'Jonathan Blazek'\n 'Robin Walters']"
]
|
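The mean-variance estimation procedure mentioned above is a standard construction: one head predicts the mean, another the variance, trained under a Gaussian negative log-likelihood. The toy dimensions below (HOD-style parameters in, one correlation bin out) are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 2))

def mve_loss(inputs, target):
    """Gaussian NLL: the variance head learns the aleatoric (data) noise."""
    mu, raw_var = net(inputs).chunk(2, dim=1)
    var = F.softplus(raw_var) + 1e-6        # keep predicted variance positive
    return (0.5 * var.log() + 0.5 * (target - mu) ** 2 / var).mean()

params = torch.randn(256, 5)    # e.g. halo occupation distribution parameters
target = torch.randn(256, 1)    # e.g. xi(r) in one radial bin
loss = mve_loss(params, target)
loss.backward()
print(loss.item())
```

Bins where the mock measurements are intrinsically noisy (as for the omega and eta correlations) receive larger predicted variances rather than dragging the mean fit, which is how the emulator reports trustworthy uncertainties.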
null | null | 2404.13704 | null | null | http://arxiv.org/pdf/2404.13704v1 | 2024-04-21T16:29:49Z | 2024-04-21T16:29:49Z | PEMMA: Parameter-Efficient Multi-Modal Adaptation for Medical Image
Segmentation | Imaging modalities such as Computed Tomography (CT) and Positron Emission Tomography (PET) are key in cancer detection, inspiring Deep Neural Networks (DNN) models that merge these scans for tumor segmentation. When both CT and PET scans are available, it is common to combine them as two channels of the input to the segmentation model. However, this method requires both scan types during training and inference, posing a challenge due to the limited availability of PET scans, thereby sometimes limiting the process to CT scans only. Hence, there is a need to develop a flexible DNN architecture that can be trained/updated using only CT scans but can effectively utilize PET scans when they become available. In this work, we propose a parameter-efficient multi-modal adaptation (PEMMA) framework for lightweight upgrading of a transformer-based segmentation model trained only on CT scans to also incorporate PET scans. The benefits of the proposed approach are two-fold. Firstly, we leverage the inherent modularity of the transformer architecture and perform low-rank adaptation (LoRA) of the attention weights to achieve parameter-efficient adaptation. Secondly, since the PEMMA framework attempts to minimize cross modal entanglement, it is possible to subsequently update the combined model using only one modality, without causing catastrophic forgetting of the other modality. Our proposed method achieves comparable results with the performance of early fusion techniques with just 8% of the trainable parameters, especially with a remarkable +28% improvement on the average dice score on PET scans when trained on a single modality. | [
"['Nada Saadi' 'Numan Saeed' 'Mohammad Yaqub' 'Karthik Nandakumar']"
]
|
null | null | 2404.13706 | null | null | http://arxiv.org/pdf/2404.13706v1 | 2024-04-21T16:35:16Z | 2024-04-21T16:35:16Z | Concept Arithmetics for Circumventing Concept Inhibition in Diffusion
Models | Motivated by ethical and legal concerns, the scientific community is actively developing methods to limit the misuse of Text-to-Image diffusion models for reproducing copyrighted, violent, explicit, or personal information in the generated images. Simultaneously, researchers put these newly developed safety measures to the test by assuming the role of an adversary to find vulnerabilities and backdoors in them. We use the compositional property of diffusion models, which allows us to leverage multiple prompts in a single image generation. This property allows us to combine other concepts that should not have been affected by the inhibition in order to reconstruct the vector responsible for target concept generation, even though the direct computation of this vector is no longer accessible. We provide theoretical and empirical evidence for why the proposed attacks are possible and discuss the implications of these findings for safe model deployment. We argue that it is essential to consider all possible approaches to image generation with diffusion models that can be employed by an adversary. Our work opens up the discussion about the implications of concept arithmetics and compositional inference for safety mechanisms in diffusion models. Content Advisory: This paper contains discussions and model-generated content that may be considered offensive. Reader discretion is advised. Project page: https://cs-people.bu.edu/vpetsiuk/arc | [
"['Vitali Petsiuk' 'Kate Saenko']"
]
|
null | null | 2404.13715 | null | null | http://arxiv.org/pdf/2404.13715v1 | 2024-04-21T17:04:53Z | 2024-04-21T17:04:53Z | TF2AIF: Facilitating development and deployment of accelerated AI models
on the cloud-edge continuum | The B5G/6G evolution relies on connect-compute technologies and highly heterogeneous clusters with HW accelerators, which require specialized coding to be efficiently utilized. The current paper proposes a custom tool for generating multiple SW versions of a certain AI function input in high-level language, e.g., Python TensorFlow, while targeting multiple diverse HW+SW platforms. TF2AIF builds upon disparate tool-flows to create a plethora of relative containers and enable the system orchestrator to deploy the requested function on any peculiar node in the cloud-edge continuum, i.e., to leverage the performance/energy benefits of the underlying HW upon any circumstances. TF2AIF fills an identified gap in today's ecosystem and facilitates research on resource management or automated operations, by demanding minimal time or expertise from users. | [
"['Aimilios Leftheriotis' 'Achilleas Tzenetopoulos' 'George Lentaris'\n 'Dimitrios Soudris' 'Georgios Theodoridis']"
]
|
null | null | 2404.13731 | null | null | http://arxiv.org/pdf/2404.13731v1 | 2024-04-21T18:18:34Z | 2024-04-21T18:18:34Z | Training-Conditional Coverage Bounds for Uniformly Stable Learning
Algorithms | The training-conditional coverage performance of the conformal prediction is known to be empirically sound. Recently, there have been efforts to support this observation with theoretical guarantees. The training-conditional coverage bounds for jackknife+ and full-conformal prediction regions have been established via the notion of $(m,n)$-stability by Liang and Barber [2023]. Although this notion is weaker than uniform stability, it is not clear how to evaluate it for practical models. In this paper, we study the training-conditional coverage bounds of full-conformal, jackknife+, and CV+ prediction regions from a uniform stability perspective which is known to hold for empirical risk minimization over reproducing kernel Hilbert spaces with convex regularization. We derive coverage bounds for finite-dimensional models by a concentration argument for the (estimated) predictor function, and compare the bounds with existing ones under ridge regression. | [
"['Mehrdad Pournaderi' 'Yu Xiang']"
]
|
null | null | 2404.13733 | null | null | http://arxiv.org/pdf/2404.13733v2 | 2024-05-06T05:34:33Z | 2024-04-21T18:19:27Z | Elucidating the Design Space of Dataset Condensation | Dataset condensation, a concept within data-centric learning, efficiently transfers critical attributes from an original dataset to a synthetic version, maintaining both diversity and realism. This approach significantly improves model training efficiency and is adaptable across multiple application areas. Previous methods in dataset condensation have faced challenges: some incur high computational costs which limit scalability to larger datasets (e.g., MTT, DREAM, and TESLA), while others are restricted to less optimal design spaces, which could hinder potential improvements, especially in smaller datasets (e.g., SRe2L, G-VBSM, and RDED). To address these limitations, we propose a comprehensive design framework that includes specific, effective strategies like implementing soft category-aware matching and adjusting the learning rate schedule. These strategies are grounded in empirical evidence and theoretical backing. Our resulting approach, Elucidate Dataset Condensation (EDC), establishes a benchmark for both small and large-scale dataset condensation. In our testing, EDC achieves state-of-the-art accuracy, reaching 48.6% on ImageNet-1k with a ResNet-18 model at an IPC of 10, which corresponds to a compression ratio of 0.78%. This performance exceeds those of SRe2L, G-VBSM, and RDED by margins of 27.3%, 17.2%, and 6.6%, respectively. | [
"['Shitong Shao' 'Zikai Zhou' 'Huanran Chen' 'Zhiqiang Shen']"
]
|
null | null | 2404.13736 | null | null | http://arxiv.org/pdf/2404.13736v1 | 2024-04-21T18:24:34Z | 2024-04-21T18:24:34Z | Interval Abstractions for Robust Counterfactual Explanations | Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, when slight changes occur in the parameters of the underlying model, CEs found by existing methods often become invalid for the updated models. The literature lacks a way to certify deterministic robustness guarantees for CEs under model changes, in that existing methods to improve CEs' robustness are heuristic, and the robustness performances are evaluated empirically using only a limited number of retrained models. To bridge this gap, we propose a novel interval abstraction technique for parametric machine learning models, which allows us to obtain provable robustness guarantees of CEs under the possibly infinite set of plausible model changes $\Delta$. We formalise our robustness notion as the $\Delta$-robustness for CEs, in both binary and multi-class classification settings. We formulate procedures to verify $\Delta$-robustness based on Mixed Integer Linear Programming, using which we further propose two algorithms to generate CEs that are $\Delta$-robust. In an extensive empirical study, we demonstrate how our approach can be used in practice by discussing two strategies for determining the appropriate hyperparameter in our method, and we quantitatively benchmark the CEs generated by eleven methods, highlighting the effectiveness of our algorithms in finding robust CEs. | [
"['Junqi Jiang' 'Francesco Leofante' 'Antonio Rago' 'Francesca Toni']"
]
|
null | null | 2404.13752 | null | null | http://arxiv.org/pdf/2404.13752v2 | 2024-05-23T13:06:59Z | 2024-04-21T19:24:15Z | Towards General Conceptual Model Editing via Adversarial Representation
Engineering | Since the development of Large Language Models (LLMs) has achieved remarkable success, understanding and controlling their internal complex mechanisms has become an urgent problem. Recent research has attempted to interpret their behaviors through the lens of inner representation. However, developing practical and efficient methods for applying these representations for general and flexible model editing remains challenging. In this work, we explore how to use representation engineering methods to guide the editing of LLMs by deploying a representation sensor as an oracle. We first identify the importance of a robust and reliable sensor during editing, then propose an Adversarial Representation Engineering (ARE) framework to provide a unified and interpretable approach for conceptual model editing without compromising baseline performance. Experiments on multiple model editing paradigms demonstrate the effectiveness of ARE in various settings. Code and data are available at https://github.com/Zhang-Yihao/Adversarial-Representation-Engineering. | [
"['Yihao Zhang' 'Zeming Wei' 'Jun Sun' 'Meng Sun']"
]
|
null | null | 2404.13770 | null | null | http://arxiv.org/pdf/2404.13770v1 | 2024-04-21T20:45:18Z | 2024-04-21T20:45:18Z | EncodeNet: A Framework for Boosting DNN Accuracy with Entropy-driven
Generalized Converting Autoencoder | Image classification is a fundamental task in computer vision, and the quest to enhance DNN accuracy without inflating model size or latency remains a pressing concern. We make a couple of advances in this regard, leading to a novel EncodeNet design and training framework. The first advancement involves Converting Autoencoders, a novel approach that transforms images into an easy-to-classify image of its class. Our prior work that applied the Converting Autoencoder and a simple classifier in tandem achieved moderate accuracy over simple datasets, such as MNIST and FMNIST. However, on more complex datasets like CIFAR-10, the Converting Autoencoder has a large reconstruction loss, making it unsuitable for enhancing DNN accuracy. To address these limitations, we generalize the design of Converting Autoencoders by leveraging a larger class of DNNs, those with architectures comprising feature extraction layers followed by classification layers. We incorporate a generalized algorithmic design of the Converting Autoencoder and intraclass clustering to identify representative images, leading to optimized image feature learning. Next, we demonstrate the effectiveness of our EncodeNet design and training framework, improving the accuracy of well-trained baseline DNNs while maintaining the overall model size. EncodeNet's building blocks comprise the trained encoder from our generalized Converting Autoencoders transferring knowledge to a lightweight classifier network - also extracted from the baseline DNN. Our experimental results demonstrate that EncodeNet improves the accuracy of VGG16 from 92.64% to 94.05% on CIFAR-10 and ResNet20 from 74.56% to 76.04% on CIFAR-100. It outperforms state-of-the-art techniques that rely on knowledge distillation and attention mechanisms, delivering higher accuracy for models of comparable size. | [
"['Hasanul Mahmud' 'Kevin Desai' 'Palden Lama' 'Sushil K. Prasad']"
]
|
null | null | 2404.13779 | null | null | http://arxiv.org/pdf/2404.13779v1 | 2024-04-21T21:19:36Z | 2024-04-21T21:19:36Z | Automated Text Mining of Experimental Methodologies from Biomedical
Literature | Biomedical literature is a rapidly expanding field of science and technology. Classification of biomedical texts is an essential part of biomedicine research, especially in the field of biology. This work proposes the fine-tuned DistilBERT, a methodology-specific, pre-trained generative classification language model for mining biomedicine texts. The model has proven its effectiveness in linguistic understanding capabilities, reducing the size of BERT models by 40% while being 60% faster. The main objective of this project is to improve the model and assess its performance compared to the non-fine-tuned model. We used DistilBERT as a support model and pre-trained it on a corpus of 32,000 abstracts and complete text articles; our results were impressive and surpassed those of traditional literature classification methods using RNN or LSTM. Our aim is to integrate this highly specialised and specific model into different research industries. | [
"['Ziqing Guo']"
]
|
null | null | 2404.13785 | null | null | http://arxiv.org/pdf/2404.13785v1 | 2024-04-21T21:36:42Z | 2024-04-21T21:36:42Z | How to Inverting the Leverage Score Distribution? | Leverage score is a fundamental problem in machine learning and theoretical computer science. It has extensive applications in regression analysis, randomized algorithms, and neural network inversion. Although leverage scores are widely used as a tool, in this paper we study a novel problem, namely the inverting leverage score problem: recovering model parameters by inverting leverage score distributions. Specifically, given a leverage score $\sigma \in \mathbb{R}^n$, the matrix $A \in \mathbb{R}^{n \times d}$, and the vector $b \in \mathbb{R}^n$, we analyze the non-convex optimization problem of finding $x \in \mathbb{R}^d$ to minimize $\| \mathrm{diag}(\sigma) - I_n \circ (A(x) (A(x)^\top A(x))^{-1} A(x)^\top) \|_F$, where $A(x) := S(x)^{-1} A \in \mathbb{R}^{n \times d}$, $S(x) := \mathrm{diag}(s(x)) \in \mathbb{R}^{n \times n}$, and $s(x) := Ax - b \in \mathbb{R}^n$. Our theoretical studies include computing the gradient and Hessian, demonstrating that the Hessian matrix is positive definite and Lipschitz, and constructing first-order and second-order algorithms to solve this regression problem. Our work combines iterative shrinking and the induction hypothesis to ensure global convergence rates for the Newton method, as well as the properties of Lipschitz and strong convexity to guarantee the performance of gradient descent. This important study on inverting statistical leverage opens up numerous new applications in interpretation, data recovery, and security. | [
"['Zhihang Li' 'Zhao Song' 'Weixin Wang' 'Junze Yin' 'Zheng Yu']"
]
|
null | null | 2404.13786 | null | null | http://arxiv.org/pdf/2404.13786v1 | 2024-04-21T21:45:23Z | 2024-04-21T21:45:23Z | Soar: Design and Deployment of A Smart Roadside Infrastructure System
for Autonomous Driving | Recently, smart roadside infrastructure (SRI) has demonstrated the potential of achieving fully autonomous driving systems. To explore the potential of infrastructure-assisted autonomous driving, this paper presents the design and deployment of Soar, the first end-to-end SRI system specifically designed to support autonomous driving systems. Soar consists of both software and hardware components carefully designed to overcome various system and physical challenges. Soar can leverage the existing operational infrastructure like street lampposts for a lower barrier of adoption. Soar adopts a new communication architecture that comprises a bi-directional multi-hop I2I network and a downlink I2V broadcast service, which are designed based on off-the-shelf 802.11ac interfaces in an integrated manner. Soar also features a hierarchical DL task management framework to achieve desirable load balancing among nodes and enable them to collaborate efficiently to run multiple data-intensive autonomous driving applications. We deployed a total of 18 Soar nodes on existing lampposts on campus, which have been operational for over two years. Our real-world evaluation shows that Soar can support a diverse set of autonomous driving applications and achieve desirable real-time performance and high communication reliability. Our findings and experiences in this work offer key insights into the development and deployment of next-generation smart roadside infrastructure and autonomous driving systems. | [
"['Shuyao Shi' 'Neiwen Ling' 'Zhehao Jiang' 'Xuan Huang' 'Yuze He'\n 'Xiaoguang Zhao' 'Bufang Yang' 'Chen Bian' 'Jingfei Xia' 'Zhenyu Yan'\n 'Raymond Yeung' 'Guoliang Xing']"
]
|
null | null | 2404.13804 | null | null | http://arxiv.org/pdf/2404.13804v1 | 2024-04-22T00:16:18Z | 2024-04-22T00:16:18Z | Adaptive Heterogeneous Client Sampling for Federated Learning over
Wireless Networks | Federated learning (FL) algorithms usually sample a fraction of clients in each round (partial participation) when the number of participants is large and the server's communication bandwidth is limited. Recent works on the convergence analysis of FL have focused on unbiased client sampling, e.g., sampling uniformly at random, which suffers from slow wall-clock time for convergence due to high degrees of system heterogeneity and statistical heterogeneity. This paper aims to design an adaptive client sampling algorithm for FL over wireless networks that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time. We obtain a new tractable convergence bound for FL algorithms with arbitrary client sampling probability. Based on the bound, we analytically establish the relationship between the total learning time and sampling probability with an adaptive bandwidth allocation scheme, which results in a non-convex optimization problem. We design an efficient algorithm for learning the unknown parameters in the convergence bound and develop a low-complexity algorithm to approximately solve the non-convex problem. Our solution reveals the impact of system and statistical heterogeneity parameters on the optimal client sampling design. Moreover, our solution shows that as the number of sampled clients increases, the total convergence time first decreases and then increases because a larger sampling number reduces the number of rounds for convergence but results in a longer expected time per-round due to limited wireless bandwidth. Experimental results from both hardware prototype and simulation demonstrate that our proposed sampling scheme significantly reduces the convergence time compared to several baseline sampling schemes. | [
"['Bing Luo' 'Wenli Xiao' 'Shiqiang Wang' 'Jianwei Huang'\n 'Leandros Tassiulas']"
]
|
null | null | 2404.13808 | null | null | http://arxiv.org/pdf/2404.13808v1 | 2024-04-22T00:48:56Z | 2024-04-22T00:48:56Z | General Item Representation Learning for Cold-start Content
Recommendations | Cold-start item recommendation is a long-standing challenge in recommendation systems. A common remedy is to use a content-based approach, but rich information from raw contents in various forms has not been fully utilized. In this paper, we propose a domain/data-agnostic item representation learning framework for cold-start recommendations, naturally equipped with multimodal alignment among various features by adopting a Transformer-based architecture. Our proposed model is end-to-end trainable completely free from classification labels, not just costly to collect but suboptimal for recommendation-purpose representation learning. From extensive experiments on real-world movie and news recommendation benchmarks, we verify that our approach better preserves fine-grained user taste than state-of-the-art baselines, universally applicable to multiple domains at large scale. | [
"['Jooeun Kim' 'Jinri Kim' 'Kwangeun Yeo' 'Eungi Kim' 'Kyoung-Woon On'\n 'Jonghwan Mun' 'Joonseok Lee']"
]
|
null | null | 2404.13815 | null | null | http://arxiv.org/pdf/2404.13815v2 | 2024-06-04T02:25:52Z | 2024-04-22T01:28:35Z | Improving Group Robustness on Spurious Correlation Requires Preciser
Group Inference | Standard empirical risk minimization (ERM) models may prioritize learning spurious correlations between spurious features and true labels, leading to poor accuracy on groups where these correlations do not hold. Mitigating this issue often requires expensive spurious attribute (group) labels or relies on trained ERM models to infer group labels when group information is unavailable. However, the significant performance gap in worst-group accuracy between using pseudo group labels and using oracle group labels inspires us to consider further improving group robustness through preciser group inference. Therefore, we propose GIC, a novel method that accurately infers group labels, resulting in improved worst-group performance. GIC trains a spurious attribute classifier based on two key properties of spurious correlations: (1) high correlation between spurious attributes and true labels, and (2) variability in this correlation between datasets with different group distributions. Empirical studies on multiple datasets demonstrate the effectiveness of GIC in inferring group labels, and combining GIC with various downstream invariant learning methods improves worst-group accuracy, showcasing its powerful flexibility. Additionally, through analyzing the misclassifications in GIC, we identify an interesting phenomenon called semantic consistency, which may contribute to better decoupling the association between spurious attributes and labels, thereby mitigating spurious correlation. The code for GIC is available at https://github.com/yujinhanml/GIC. | [
"['Yujin Han' 'Difan Zou']"
]
|
null | null | 2404.13831 | null | null | http://arxiv.org/pdf/2404.13831v2 | 2024-05-21T21:13:04Z | 2024-04-22T02:06:35Z | Data-Driven Performance Guarantees for Classical and Learned Optimizers | We introduce a data-driven approach to analyze the performance of continuous optimization algorithms using generalization guarantees from statistical learning theory. We study classical and learned optimizers to solve families of parametric optimization problems. We build generalization guarantees for classical optimizers, using a sample convergence bound, and for learned optimizers, using the Probably Approximately Correct (PAC)-Bayes framework. To train learned optimizers, we use a gradient-based algorithm to directly minimize the PAC-Bayes upper bound. Numerical experiments in signal processing, control, and meta-learning showcase the ability of our framework to provide strong generalization guarantees for both classical and learned optimizers given a fixed budget of iterations. For classical optimizers, our bounds are much tighter than those that worst-case guarantees provide. For learned optimizers, our bounds outperform the empirical outcomes observed in their non-learned counterparts. | [
"['Rajiv Sambharya' 'Bartolomeo Stellato']"
]
|
null | null | 2404.13841 | null | null | http://arxiv.org/pdf/2404.13841v1 | 2024-04-22T02:41:10Z | 2024-04-22T02:41:10Z | Fair Concurrent Training of Multiple Models in Federated Learning | Federated learning (FL) enables collaborative learning across multiple clients. In most FL work, all clients train a single learning task. However, the recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously, sharing clients' computing and communication resources, which we call Multiple-Model Federated Learning (MMFL). Current MMFL algorithms use naive average-based client-task allocation schemes that can lead to unfair performance when FL tasks have heterogeneous difficulty levels, e.g., tasks with larger models may need more rounds and data to train. Just as naively allocating resources to generic computing jobs with heterogeneous resource needs can lead to unfair outcomes, naive allocation of clients to FL tasks can lead to unfairness, with some tasks having excessively long training times, or lower converged accuracies. Furthermore, in the FL setting, since clients are typically not paid for their training effort, we face a further challenge that some clients may not even be willing to train some tasks, e.g., due to high computational costs, which may exacerbate unfairness in training outcomes across tasks. We address both challenges by firstly designing FedFairMMFL, a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round. We provide guarantees on fairness and FedFairMMFL's convergence rate. We then propose a novel auction design that incentivizes clients to train multiple tasks, so as to fairly distribute clients' training efforts across the tasks. We show how our fairness-based learning and incentive mechanisms impact training convergence and finally evaluate our algorithm with multiple sets of learning tasks on real world datasets. | [
"['Marie Siew' 'Haoran Zhang' 'Jong-Ik Park' 'Yuezhou Liu' 'Yichen Ruan'\n 'Lili Su' 'Stratis Ioannidis' 'Edmund Yeh' 'Carlee Joe-Wong']"
]
|
null | null | 2404.13844 | null | null | http://arxiv.org/pdf/2404.13844v1 | 2024-04-22T02:52:54Z | 2024-04-22T02:52:54Z | ColA: Collaborative Adaptation with Gradient Learning | A primary function of back-propagation is to compute both the gradient of hidden representations and parameters for optimization with gradient descent. Training large models requires high computational costs due to their vast parameter sizes. While Parameter-Efficient Fine-Tuning (PEFT) methods aim to train smaller auxiliary models to save computational space, they still present computational overheads, especially in Fine-Tuning as a Service (FTaaS) for numerous users. We introduce Collaborative Adaptation (ColA) with Gradient Learning (GL), a parameter-free, model-agnostic fine-tuning approach that decouples the computation of the gradient of hidden representations and parameters. In comparison to PEFT methods, ColA facilitates more cost-effective FTaaS by offloading the computation of the gradient to low-cost devices. We also provide a theoretical analysis of ColA and experimentally demonstrate that ColA can perform on par or better than existing PEFT methods on various benchmarks. | [
"['Enmao Diao' 'Qi Le' 'Suya Wu' 'Xinran Wang' 'Ali Anwar' 'Jie Ding'\n 'Vahid Tarokh']"
]
|
null | null | 2404.13846 | null | null | http://arxiv.org/pdf/2404.13846v3 | 2024-07-04T07:40:53Z | 2024-04-22T03:05:19Z | Filtered Direct Preference Optimization | Reinforcement learning from human feedback (RLHF) plays a crucial role in aligning language models with human preferences. While the significance of dataset quality is generally recognized, explicit investigations into its impact within the RLHF framework, to our knowledge, have been limited. This paper addresses the issue of text quality within the preference dataset by focusing on direct preference optimization (DPO), an increasingly adopted reward-model-free RLHF method. We confirm that text quality significantly influences the performance of models optimized with DPO more than those optimized with reward-model-based RLHF. Building on this new insight, we propose an extension of DPO, termed filtered direct preference optimization (fDPO). fDPO uses a trained reward model to monitor the quality of texts within the preference dataset during DPO training. Samples of lower quality are discarded based on comparisons with texts generated by the model being optimized, resulting in a more accurate dataset. Experimental results demonstrate that fDPO enhances the final model performance. Our code is available at https://github.com/CyberAgentAILab/filtered-dpo. | [
"['Tetsuro Morimura' 'Mitsuki Sakamoto' 'Yuu Jinnai' 'Kenshi Abe'\n 'Kaito Ariu']"
]
|
null | null | 2404.13853 | null | null | http://arxiv.org/pdf/2404.13853v1 | 2024-04-22T03:35:19Z | 2024-04-22T03:35:19Z | ICST-DNET: An Interpretable Causal Spatio-Temporal Diffusion Network for
Traffic Speed Prediction | Traffic speed prediction is significant for intelligent navigation and congestion alleviation. However, making accurate predictions is challenging due to three factors: 1) traffic diffusion, i.e., the spatial and temporal causality existing between the traffic conditions of multiple neighboring roads, 2) the poor interpretability of traffic data with complicated spatio-temporal correlations, and 3) the latent pattern of traffic speed fluctuations over time, such as morning and evening rush. Jointly considering these factors, in this paper, we present a novel architecture for traffic speed prediction, called Interpretable Causal Spatio-Temporal Diffusion Network (ICST-DNET). Specifically, ICST-DNET consists of three parts, namely the Spatio-Temporal Causality Learning (STCL), Causal Graph Generation (CGG), and Speed Fluctuation Pattern Recognition (SFPR) modules. First, to model the traffic diffusion within road networks, an STCL module is proposed to capture both the temporal causality on each individual road and the spatial causality in each road pair. The CGG module is then developed based on STCL to enhance the interpretability of the traffic diffusion procedure from the temporal and spatial perspectives. Specifically, a time causality matrix is generated to explain the temporal causality between each road's historical and future traffic conditions. For spatial causality, we utilize causal graphs to visualize the diffusion process in road pairs. Finally, to adapt to traffic speed fluctuations in different scenarios, we design a personalized SFPR module to select the historical timesteps with strong influences for learning the pattern of traffic speed fluctuations. Extensive experimental results prove that ICST-DNET can outperform all existing baselines, as evidenced by the higher prediction accuracy, ability to explain causality, and adaptability to different scenarios. | [
"['Yi Rong' 'Yingchi Mao' 'Yinqiu Liu' 'Ling Chen' 'Xiaoming He'\n 'Dusit Niyato']"
]
|
null | null | 2404.13860 | null | null | http://arxiv.org/pdf/2404.13860v1 | 2024-04-22T04:18:38Z | 2024-04-22T04:18:38Z | Distributional Black-Box Model Inversion Attack with Multi-Agent
Reinforcement Learning | A Model Inversion (MI) attack based on Generative Adversarial Networks (GAN) aims to recover the private training data from complex deep learning models by searching codes in the latent space. However, they merely search a deterministic latent space such that the found latent code is usually suboptimal. In addition, the existing distributional MI schemes assume that an attacker can access the structures and parameters of the target model, which is not always viable in practice. To overcome the above shortcomings, this paper proposes a novel Distributional Black-Box Model Inversion (DBB-MI) attack by constructing the probabilistic latent space for searching the target privacy data. Specifically, DBB-MI does not need the target model parameters or specialized GAN training. Instead, it finds the latent probability distribution by combining the output of the target model with multi-agent reinforcement learning techniques. Then, it randomly chooses latent codes from the latent probability distribution for recovering the private data. As the latent probability distribution closely aligns with the target privacy data in latent space, the recovered data will leak the privacy of training samples of the target model significantly. Abundant experiments conducted on diverse datasets and networks show that the present DBB-MI has better performance than state-of-the-art in attack accuracy, K-nearest neighbor feature distance, and Peak Signal-to-Noise Ratio. | [
"['Huan Bao' 'Kaimin Wei' 'Yongdong Wu' 'Jin Qian' 'Robert H. Deng']"
]
|
null | null | 2404.13879 | null | null | http://arxiv.org/pdf/2404.13879v2 | 2024-05-24T20:19:37Z | 2024-04-22T05:01:29Z | Explicit Lipschitz Value Estimation Enhances Policy Robustness Against
Perturbation | In robotic control tasks, policies trained by reinforcement learning (RL) in simulation often experience a performance drop when deployed on physical hardware, due to modeling error, measurement error, and unpredictable perturbations in the real world. Robust RL methods account for this issue by approximating a worst-case value function during training, but they can be sensitive to approximation errors in the value function and its gradient before training is complete. In this paper, we hypothesize that Lipschitz regularization can help condition the approximated value function gradients, leading to improved robustness after training. We test this hypothesis by combining Lipschitz regularization with an application of Fast Gradient Sign Method to reduce approximation errors when evaluating the value function under adversarial perturbations. Our empirical results demonstrate the benefits of this approach over prior work on a number of continuous control benchmarks. | [
"['Xulin Chen' 'Ruipeng Liu' 'Garrett E. Katz']"
]
|
null | null | 2404.13885 | null | null | http://arxiv.org/pdf/2404.13885v1 | 2024-04-22T05:12:52Z | 2024-04-22T05:12:52Z | Surveying Attitudinal Alignment Between Large Language Models Vs. Humans
Towards 17 Sustainable Development Goals | Large Language Models (LLMs) have emerged as potent tools for advancing the United Nations' Sustainable Development Goals (SDGs). However, the attitudinal disparities between LLMs and humans towards these goals can pose significant challenges. This study conducts a comprehensive review and analysis of the existing literature on the attitudes of LLMs towards the 17 SDGs, emphasizing the comparison between their attitudes and support for each goal and those of humans. We examine the potential disparities, primarily focusing on aspects such as understanding and emotions, cultural and regional differences, task objective variations, and factors considered in the decision-making process. These disparities arise from the underrepresentation and imbalance in LLM training data, historical biases, quality issues, lack of contextual understanding, and skewed ethical values reflected. The study also investigates the risks and harms that may arise from neglecting the attitudes of LLMs towards the SDGs, including the exacerbation of social inequalities, racial discrimination, environmental destruction, and resource wastage. To address these challenges, we propose strategies and recommendations to guide and regulate the application of LLMs, ensuring their alignment with the principles and goals of the SDGs, and therefore creating a more just, inclusive, and sustainable future. | [
"['Qingyang Wu' 'Ying Xu' 'Tingsong Xiao' 'Yunze Xiao' 'Yitong Li'\n 'Tianyang Wang' 'Yichi Zhang' 'Shanghai Zhong' 'Yuwei Zhang' 'Wei Lu'\n 'Yifan Yang']"
]
|
null | null | 2404.13891 | null | null | http://arxiv.org/pdf/2404.13891v2 | 2024-05-14T09:16:46Z | 2024-04-22T05:37:22Z | Minimizing Weighted Counterfactual Regret with Optimistic Online Mirror
Descent | Counterfactual regret minimization (CFR) is a family of algorithms for effectively solving imperfect-information games. It decomposes the total regret into counterfactual regrets, utilizing local regret minimization algorithms, such as Regret Matching (RM) or RM+, to minimize them. Recent research establishes a connection between Online Mirror Descent (OMD) and RM+, paving the way for an optimistic variant PRM+ and its extension PCFR+. However, PCFR+ assigns uniform weights for each iteration when determining regrets, leading to substantial regrets when facing dominated actions. This work explores minimizing weighted counterfactual regret with optimistic OMD, resulting in a novel CFR variant PDCFR+. It integrates PCFR+ and Discounted CFR (DCFR) in a principled manner, swiftly mitigating negative effects of dominated actions and consistently leveraging predictions to accelerate convergence. Theoretical analyses prove that PDCFR+ converges to a Nash equilibrium, particularly under distinct weighting schemes for regrets and average strategies. Experimental results demonstrate PDCFR+'s fast convergence in common imperfect-information games. The code is available at https://github.com/rpSebastian/PDCFRPlus. | [
"['Hang Xu' 'Kai Li' 'Bingyun Liu' 'Haobo Fu' 'Qiang Fu' 'Junliang Xing'\n 'Jian Cheng']"
]
|
null | null | 2404.13895 | null | null | http://arxiv.org/pdf/2404.13895v2 | 2024-05-31T02:04:44Z | 2024-04-22T06:05:35Z | Optimal Design for Human Feedback | Learning of preference models from human feedback has been central to recent advances in artificial intelligence. Motivated by the cost of obtaining high-quality human annotations, we study the problem of data collection for learning preference models. The key idea in our work is to generalize the optimal design, a method for computing information gathering policies, to ranked lists. To show the generality of our ideas, we study both absolute and relative feedback on the lists. We design efficient algorithms for both settings and analyze them. We prove that our preference model estimators improve with more data and so does the ranking error under the estimators. Finally, we experiment with several synthetic and real-world datasets to show the statistical efficiency of our algorithms. | [
"['Subhojyoti Mukherjee' 'Anusha Lalitha' 'Kousha Kalantari'\n 'Aniket Deshmukh' 'Ge Liu' 'Yifei Ma' 'Branislav Kveton']"
]
|
null | null | 2404.13904 | null | null | http://arxiv.org/pdf/2404.13904v4 | 2024-05-16T08:16:04Z | 2024-04-22T06:28:41Z | Deep Regression Representation Learning with Topology | Most works studying representation learning focus only on classification and neglect regression. Yet, the learning objectives and, therefore, the representation topologies of the two tasks are fundamentally different: classification targets class separation, leading to disconnected representations, whereas regression requires ordinality with respect to the target, leading to continuous representations. We thus wonder how the effectiveness of a regression representation is influenced by its topology, with evaluation based on the Information Bottleneck (IB) principle. The IB principle is an important framework that provides principles for learning effective representations. We establish two connections between it and the topology of regression representations. The first connection reveals that a lower intrinsic dimension of the feature space implies a reduced complexity of the representation Z. This complexity can be quantified as the conditional entropy of Z on the target Y, and serves as an upper bound on the generalization error. The second connection suggests a feature space that is topologically similar to the target space will better align with the IB principle. Based on these two connections, we introduce PH-Reg, a regularizer specific to regression that matches the intrinsic dimension and topology of the feature space with the target space. Experiments on synthetic and real-world regression tasks demonstrate the benefits of PH-Reg. Code: https://github.com/needylove/PH-Reg. | [
"['Shihao Zhang' 'kenji kawaguchi' 'Angela Yao']"
]
|
null | null | 2404.13910 | null | null | http://arxiv.org/pdf/2404.13910v1 | 2024-04-22T06:42:21Z | 2024-04-22T06:42:21Z | Integrated Gradient Correlation: a Dataset-wise Attribution Method | Attribution methods are primarily designed to study the distribution of input component contributions to individual model predictions. However, some research applications require a summary of attribution patterns across the entire dataset to facilitate the interpretability of the scrutinized models. In this paper, we present a new method called Integrated Gradient Correlation (IGC) that relates dataset-wise attributions to a model prediction score and enables region-specific analysis by a direct summation over associated components. We demonstrate our method on scalar predictions with the study of image feature representation in the brain from fMRI neural signals and the estimation of neural population receptive fields (NSD dataset), as well as on categorical predictions with the investigation of handwritten digit recognition (MNIST dataset). The resulting IGC attributions show selective patterns, revealing underlying model strategies coherent with their respective objectives. | [
"['Pierre Lelièvre' 'Chien-Chung Chen']"
]
|
null | null | 2404.13941 | null | null | http://arxiv.org/pdf/2404.13941v1 | 2024-04-22T07:34:28Z | 2024-04-22T07:34:28Z | Autoencoder-assisted Feature Ensemble Net for Incipient Faults | Deep learning has shown great power in the field of fault detection. However, for incipient faults with tiny amplitude, the detection performance of current deep learning networks (DLNs) is not satisfactory. Even if prior information about the faults is utilized, DLNs cannot successfully detect faults 3, 9 and 15 in the Tennessee Eastman process (TEP). These faults are notoriously difficult to detect, lacking effective detection technologies in the field of fault detection. In this work, we propose Autoencoder-assisted Feature Ensemble Net (AE-FENet): a deep feature ensemble framework that uses an unsupervised autoencoder to conduct the feature transformation. Compared with the principal component analysis (PCA) technique adopted in the original Feature Ensemble Net (FENet), the autoencoder can mine more exact features of incipient faults, which results in better detection performance for AE-FENet. With the same kinds of basic detectors, AE-FENet achieves a state-of-the-art average accuracy of over 96% on faults 3, 9 and 15 in TEP, which represents a significant enhancement in performance compared to other methods. Plenty of experiments have been done to extend our framework, proving that DLNs can be utilized efficiently within this architecture. | [
"['Mingxuan Gao' 'Min Wang' 'Maoyin Chen']"
]
|
null | null | 2404.13946 | null | null | http://arxiv.org/pdf/2404.13946v1 | 2024-04-22T07:44:02Z | 2024-04-22T07:44:02Z | Dual Model Replacement:invisible Multi-target Backdoor Attack based on
Federal Learning | In recent years, the neural network backdoor hidden in the parameters of the federated learning model has been proved to have great security risks. Considering the characteristics of trigger generation, data poisoning and model training in backdoor attack, this paper designs a backdoor attack method based on federated learning. Firstly, aiming at the concealment of the backdoor trigger, a TrojanGan steganography model with encoder-decoder structure is designed. The model can encode specific attack information as invisible noise and attach it to the image as a backdoor trigger, which improves the concealment and data transformations of the backdoor trigger. Secondly, aiming at the problem of single backdoor trigger mode, an image poisoning attack method called combination trigger attack is proposed. This method realizes multi-backdoor triggering by multiplexing combined triggers and improves the robustness of backdoor attacks. Finally, aiming at the problem that the local training mechanism leads to the decrease of the success rate of backdoor attack, a dual model replacement backdoor attack algorithm based on federated learning is designed. This method can improve the success rate of backdoor attack while maintaining the performance of the federated learning aggregation model. Experiments show that the attack strategy in this paper can not only achieve high backdoor concealment and diversification of trigger forms under federated learning, but also achieve a good attack success rate in multi-target attacks. | [
"['Rong Wang' 'Guichen Zhou' 'Mingjun Gao' 'Yunpeng Xiao']"
]
|