categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2404.01184
null
null
http://arxiv.org/pdf/2404.01184v1
2024-04-01T15:36:39Z
2024-04-01T15:36:39Z
Efficient Motion Planning for Manipulators with Control Barrier Function-Induced Neural Controller
Sampling-based motion planning methods for manipulators in crowded environments often suffer from expensive collision checking and high sampling complexity, which make them difficult to use in real time. To address this issue, we propose a new generalizable control barrier function (CBF)-based steering controller to reduce the number of samples needed in the sampling-based motion planner RRT. Our method combines the strength of CBF for real-time collision-avoidance control and RRT for long-horizon motion planning, by using the CBF-induced neural controller (CBF-INC) to generate control signals that steer the system towards configurations sampled by RRT. CBF-INC is learned as a neural network and has two variants handling different inputs: state (signed distance) input and point-cloud input from LiDAR. In the latter case, we also study two different settings: fully and partially observed environmental information. Compared to a manually crafted CBF, which suffers from over-approximating robot geometry, CBF-INC can balance safety and goal-reaching better without being over-conservative. Given state-based input, our CBF-induced neural controller-enhanced RRT (CBF-INC-RRT) can increase the success rate by 14% while reducing the number of nodes explored by 30%, compared with vanilla RRT on hard test cases. Given LiDAR input, where vanilla RRT is not directly applicable, we demonstrate that CBF-INC-RRT can improve the success rate by 10%, compared with planning with other steering controllers. Our project page with supplementary material is at https://mit-realm.github.io/CBF-INC-RRT-website/.
[ "['Mingxin Yu' 'Chenning Yu' 'M-Mahdi Naddaf-Sh' 'Devesh Upadhyay'\n 'Sicun Gao' 'Chuchu Fan']" ]
null
null
2404.01198
null
null
http://arxiv.org/pdf/2404.01198v1
2024-04-01T15:55:45Z
2024-04-01T15:55:45Z
Nearly-tight Approximation Guarantees for the Improving Multi-Armed Bandits Problem
We give nearly-tight upper and lower bounds for the improving multi-armed bandits problem. An instance of this problem has $k$ arms, each of whose reward function is a concave and increasing function of the number of times that arm has been pulled so far. We show that for any randomized online algorithm, there exists an instance on which it must suffer at least an $\Omega(\sqrt{k})$ approximation factor relative to the optimal reward. We then provide a randomized online algorithm that guarantees an $O(\sqrt{k})$ approximation factor, if it is told the maximum reward achievable by the optimal arm in advance. We then show how to remove this assumption at the cost of an extra $O(\log k)$ approximation factor, achieving an overall $O(\sqrt{k} \log k)$ approximation relative to optimal.
[ "['Avrim Blum' 'Kavya Ravichandran']" ]
null
null
2404.01200
null
null
http://arxiv.org/pdf/2404.01200v1
2024-04-01T15:56:58Z
2024-04-01T15:56:58Z
Large-Scale Non-convex Stochastic Constrained Distributionally Robust Optimization
Distributionally robust optimization (DRO) is a powerful framework for training robust models against data distribution shifts. This paper focuses on constrained DRO, which has an explicit characterization of the robustness level. Existing studies on constrained DRO mostly focus on convex loss functions and exclude the practical and challenging case of non-convex loss functions, e.g., neural networks. This paper develops a stochastic algorithm and its performance analysis for non-convex constrained DRO. The computational complexity of our stochastic algorithm at each iteration is independent of the overall dataset size, and thus is suitable for large-scale applications. We focus on the uncertainty set defined by the general Cressie-Read family of divergences, which includes $\chi^2$-divergence as a special case. We prove that our algorithm finds an $\epsilon$-stationary point with a computational complexity of $\mathcal{O}(\epsilon^{-3k_*-5})$, where $k_*$ is the parameter of the Cressie-Read divergence. The numerical results indicate that our method outperforms existing methods. Our method also applies to the smoothed conditional value at risk (CVaR) DRO.
[ "['Qi Zhang' 'Yi Zhou' 'Ashley Prater-Bennette' 'Lixin Shen' 'Shaofeng Zou']" ]
null
null
2404.01206
null
null
http://arxiv.org/pdf/2404.01206v1
2024-04-01T16:08:18Z
2024-04-01T16:08:18Z
Machine Unlearning for Traditional Models and Large Language Models: A Short Survey
With the implementation of personal data privacy regulations, the field of machine learning (ML) faces the challenge of the "right to be forgotten". Machine unlearning has emerged to address this issue, aiming to delete data and reduce its impact on models according to user requests. Despite the widespread interest in machine unlearning, comprehensive surveys of its latest advancements, especially in the field of Large Language Models (LLMs), are lacking. This survey aims to fill this gap by providing an in-depth exploration of machine unlearning, including the definition, classification and evaluation criteria, as well as challenges in different environments and their solutions. Specifically, this paper categorizes and investigates unlearning on both traditional models and LLMs, proposes methods for evaluating the effectiveness and efficiency of unlearning, and establishes standards for performance measurement. This paper reveals the limitations of current unlearning techniques and emphasizes the importance of a comprehensive unlearning evaluation to avoid arbitrary forgetting. This survey not only summarizes the key concepts of unlearning technology but also points out its prominent issues and feasible directions for future research, providing valuable guidance for scholars in the field.
[ "['Yi Xu']" ]
null
null
2404.01216
null
null
http://arxiv.org/pdf/2404.01216v2
2024-06-30T18:40:10Z
2024-04-01T16:16:19Z
Novel Node Category Detection Under Subpopulation Shift
In real-world graph data, distribution shifts can manifest in various ways, such as the emergence of new categories and changes in the relative proportions of existing categories. It is often important to detect nodes of novel categories under such distribution shifts for safety or insight discovery purposes. We introduce a new approach, Recall-Constrained Optimization with Selective Link Prediction (RECO-SLIP), to detect nodes belonging to novel categories in attributed graphs under subpopulation shifts. By integrating a recall-constrained learning framework with a sample-efficient link prediction mechanism, RECO-SLIP addresses the dual challenges of resilience against subpopulation shifts and the effective exploitation of graph structure. Our extensive empirical evaluation across multiple graph datasets demonstrates the superior performance of RECO-SLIP over existing methods. The experimental code is available at https://github.com/hsinghuan/novel-node-category-detection.
[ "['Hsing-Huan Chung' 'Shravan Chaudhari' 'Yoav Wald' 'Xing Han'\n 'Joydeep Ghosh']" ]
null
null
2404.01217
null
null
http://arxiv.org/pdf/2404.01217v1
2024-04-01T16:17:11Z
2024-04-01T16:17:11Z
Incorporating Domain Differential Equations into Graph Convolutional Networks to Lower Generalization Discrepancy
Ensuring both accuracy and robustness in time series prediction is critical to many applications, ranging from urban planning to pandemic management. With sufficient training data where all spatiotemporal patterns are well-represented, existing deep-learning models can make reasonably accurate predictions. However, existing methods fail when the training data are drawn from different circumstances (e.g., traffic patterns on regular days) compared to test data (e.g., traffic patterns after a natural disaster). Such challenges are usually classified under domain generalization. In this work, we show that one way to address this challenge in the context of spatiotemporal prediction is by incorporating domain differential equations into Graph Convolutional Networks (GCNs). We theoretically derive conditions where GCNs incorporating such domain differential equations are robust to mismatched training and testing data compared to baseline domain agnostic models. To support our theory, we propose two domain-differential-equation-informed networks called Reaction-Diffusion Graph Convolutional Network (RDGCN), which incorporates differential equations for traffic speed evolution, and Susceptible-Infectious-Recovered Graph Convolutional Network (SIRGCN), which incorporates a disease propagation model. Both RDGCN and SIRGCN are based on reliable and interpretable domain differential equations that allow the models to generalize to unseen patterns. We experimentally show that RDGCN and SIRGCN are more robust with mismatched testing data than the state-of-the-art deep learning methods.
[ "['Yue Sun' 'Chao Chen' 'Yuesheng Xu' 'Sihong Xie' 'Rick S. Blum'\n 'Parv Venkitasubramaniam']" ]
null
null
2404.01218
null
null
http://arxiv.org/pdf/2404.01218v1
2024-04-01T16:18:40Z
2024-04-01T16:18:40Z
Towards System Modelling to Support Diseases Data Extraction from the Electronic Health Records for Physicians Research Activities
The use of Electronic Health Records (EHRs) has increased dramatically in the past 15 years, as they are considered an important source for managing patient data. EHRs are primary sources of disease diagnosis and demographic data of patients worldwide, so the data can be utilized for secondary tasks such as research. This paper aims to make such data usable for research activities such as monitoring disease statistics for a specific population, so that researchers can detect the disease causes for the behavior and lifestyle of the target group. One limitation of EHR systems is that the data is not available in a standard format but in various forms. It is therefore necessary to first convert the disease names and demographic data into one standardized form to make them usable for research activities. A large amount of EHR data is available, and solving the standardization issue requires optimized techniques. We used a first-hand EHR dataset extracted from EHR systems. Our application uploads the dataset from the EHRs and converts it to the ICD-10 coding system to solve the standardization problem. We first apply pre-processing, annotation, and transformation steps to convert the data into the standard form. The pre-processing step normalizes demographic formats; in the annotation step, a machine learning model recognizes the diseases from the text; and the transformation step converts disease names into the ICD-10 coding format. The model was evaluated manually by comparing its disease-recognition performance with an available dictionary-based system (MetaMap). The accuracy of the proposed machine learning model is 81%, outperforming the MetaMap accuracy of 67%. This paper contributes to system modelling for EHR data extraction to support research activities.
[ "['Bushra F. Alsaqer' 'Alaa F. Alsaqer' 'Amna Asif']" ]
null
null
2404.01220
null
null
http://arxiv.org/pdf/2404.01220v1
2024-04-01T16:25:08Z
2024-04-01T16:25:08Z
Entity-Centric Reinforcement Learning for Object Manipulation from Pixels
Manipulating objects is a hallmark of human intelligence, and an important task in domains such as robotics. In principle, Reinforcement Learning (RL) offers a general approach to learn object manipulation. In practice, however, domains with more than a few objects are difficult for RL agents due to the curse of dimensionality, especially when learning from raw image observations. In this work we propose a structured approach for visual RL that is suitable for representing multiple objects and their interaction, and use it to learn goal-conditioned manipulation of several objects. Key to our method is the ability to handle goals with dependencies between the objects (e.g., moving objects in a certain order). We further relate our architecture to the generalization capability of the trained agent, based on a theoretical result for compositional generalization, and demonstrate agents that learn with 3 objects but generalize to similar tasks with over 10 objects. Videos and code are available on the project website: https://sites.google.com/view/entity-centric-rl
[ "['Dan Haramati' 'Tal Daniel' 'Aviv Tamar']" ]
null
null
2404.01223
null
null
http://arxiv.org/pdf/2404.01223v1
2024-04-01T16:31:04Z
2024-04-01T16:31:04Z
Feature Splatting: Language-Driven Physics-Based Scene Synthesis and Editing
Scene representations using 3D Gaussian primitives have produced excellent results in modeling the appearance of static and dynamic 3D scenes. Many graphics applications, however, demand the ability to manipulate both the appearance and the physical properties of objects. We introduce Feature Splatting, an approach that unifies physics-based dynamic scene synthesis with rich semantics from vision language foundation models that are grounded by natural language. Our first contribution is a way to distill high-quality, object-centric vision-language features into 3D Gaussians, which enables semi-automatic scene decomposition using text queries. Our second contribution is a way to synthesize physics-based dynamics from an otherwise static scene using a particle-based simulator, in which material properties are assigned automatically via text queries. We ablate key techniques used in this pipeline, to illustrate the challenges and opportunities in using feature-carrying 3D Gaussians as a unified format for appearance, geometry, material properties and semantics grounded on natural language. Project website: https://feature-splatting.github.io/
[ "['Ri-Zhao Qiu' 'Ge Yang' 'Weijia Zeng' 'Xiaolong Wang']" ]
null
null
2404.01224
null
null
http://arxiv.org/pdf/2404.01224v2
2024-04-28T13:14:57Z
2024-04-01T16:31:06Z
Collaborative Pareto Set Learning in Multiple Multi-Objective Optimization Problems
Pareto Set Learning (PSL) is an emerging research area in multi-objective optimization, focusing on training neural networks to learn the mapping from preference vectors to Pareto optimal solutions. However, existing PSL methods are limited to addressing a single Multi-objective Optimization Problem (MOP) at a time. When faced with multiple MOPs, this limitation results in significant inefficiencies and hinders the ability to exploit potential synergies across varying MOPs. In this paper, we propose a Collaborative Pareto Set Learning (CoPSL) framework, which learns the Pareto sets of multiple MOPs simultaneously in a collaborative manner. CoPSL particularly employs an architecture consisting of shared and MOP-specific layers. The shared layers are designed to capture commonalities among MOPs collaboratively, while the MOP-specific layers tailor these general insights to generate solution sets for individual MOPs. This collaborative approach enables CoPSL to efficiently learn the Pareto sets of multiple MOPs in a single execution while leveraging the potential relationships among various MOPs. To further understand these relationships, we experimentally demonstrate that shareable representations exist among MOPs. Leveraging these shared representations effectively improves the capability to approximate Pareto sets. Extensive experiments underscore the superior efficiency and robustness of CoPSL in approximating Pareto sets compared to state-of-the-art approaches on a variety of synthetic and real-world MOPs. Code is available at https://github.com/ckshang/CoPSL.
[ "['Chikai Shang' 'Rongguang Ye' 'Jiaqi Jiang' 'Fangqing Gu']" ]
null
null
2404.01231
null
null
http://arxiv.org/pdf/2404.01231v1
2024-04-01T16:50:54Z
2024-04-01T16:50:54Z
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
It is commonplace to produce application-specific models by fine-tuning large pre-trained models using a small bespoke dataset. The widespread availability of foundation model checkpoints on the web poses considerable risks, including the vulnerability to backdoor attacks. In this paper, we unveil a new vulnerability: the privacy backdoor attack. This black-box privacy attack aims to amplify the privacy leakage that arises when fine-tuning a model: when a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model. We conduct extensive experiments on various datasets and models, including both vision-language models (CLIP) and large language models, demonstrating the broad applicability and effectiveness of such an attack. Additionally, we carry out multiple ablation studies with different fine-tuning methods and inference strategies to thoroughly analyze this new threat. Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
[ "['Yuxin Wen' 'Leo Marchyok' 'Sanghyun Hong' 'Jonas Geiping'\n 'Tom Goldstein' 'Nicholas Carlini']" ]
null
null
2404.01233
null
null
http://arxiv.org/pdf/2404.01233v1
2024-04-01T16:51:19Z
2024-04-01T16:51:19Z
Optimal Ridge Regularization for Out-of-Distribution Prediction
We study the behavior of optimal ridge regularization and optimal ridge risk for out-of-distribution prediction, where the test distribution deviates arbitrarily from the train distribution. We establish general conditions that determine the sign of the optimal regularization level under covariate and regression shifts. These conditions capture the alignment between the covariance and signal structures in the train and test data and reveal stark differences compared to the in-distribution setting. For example, a negative regularization level can be optimal under covariate shift or regression shift, even when the training features are isotropic or the design is underparameterized. Furthermore, we prove that the optimally-tuned risk is monotonic in the data aspect ratio, even in the out-of-distribution setting and when optimizing over negative regularization levels. In general, our results do not make any modeling assumptions for the train or the test distributions, except for moment bounds, and allow for arbitrary shifts and the widest possible range of (negative) regularization levels.
[ "['Pratik Patil' 'Jin-Hong Du' 'Ryan J. Tibshirani']" ]
null
null
2404.01245
null
null
http://arxiv.org/pdf/2404.01245v1
2024-04-01T17:03:41Z
2024-04-01T17:03:41Z
A Statistical Framework of Watermarks for Large Language Models: Pivot, Detection Efficiency and Optimal Rules
Since ChatGPT was introduced in November 2022, embedding (nearly) unnoticeable statistical signals into text generated by large language models (LLMs), also known as watermarking, has been used as a principled approach to provable detection of LLM-generated text from its human-written counterpart. In this paper, we introduce a general and flexible framework for reasoning about the statistical efficiency of watermarks and designing powerful detection rules. Inspired by the hypothesis testing formulation of watermark detection, our framework starts by selecting a pivotal statistic of the text and a secret key -- provided by the LLM to the verifier -- to enable controlling the false positive rate (the error of mistakenly detecting human-written text as LLM-generated). Next, this framework allows one to evaluate the power of watermark detection rules by obtaining a closed-form expression of the asymptotic false negative rate (the error of incorrectly classifying LLM-generated text as human-written). Our framework further reduces the problem of determining the optimal detection rule to solving a minimax optimization program. We apply this framework to two representative watermarks -- one of which has been internally implemented at OpenAI -- and obtain several findings that can be instrumental in guiding the practice of implementing watermarks. In particular, we derive optimal detection rules for these watermarks under our framework. These theoretically derived detection rules are demonstrated to be competitive and sometimes enjoy a higher power than existing detection approaches through numerical experiments.
[ "['Xiang Li' 'Feng Ruan' 'Huiyuan Wang' 'Qi Long' 'Weijie J. Su']" ]
null
null
2404.01257
null
null
http://arxiv.org/abs/2404.01257v1
2024-04-01T17:25:27Z
2024-04-01T17:25:27Z
New logarithmic step size for stochastic gradient descent
In this paper, we propose a novel warm restart technique using a new logarithmic step size for the stochastic gradient descent (SGD) approach. For smooth and non-convex functions, we establish an $O(\frac{1}{\sqrt{T}})$ convergence rate for SGD. We conduct a comprehensive implementation to demonstrate the efficiency of the newly proposed step size on the FashionMNIST, CIFAR10, and CIFAR100 datasets. Moreover, we compare our results with nine other existing approaches and demonstrate that the new logarithmic step size improves test accuracy by $0.9\%$ on the CIFAR100 dataset when we utilize a convolutional neural network (CNN) model.
[ "['M. Soheil Shamaee' 'S. Fathi Hafshejani' 'Z. Saeidian']" ]
null
null
2404.01260
null
null
http://arxiv.org/pdf/2404.01260v1
2024-04-01T17:30:56Z
2024-04-01T17:30:56Z
Bridging Remote Sensors with Multisensor Geospatial Foundation Models
In the realm of geospatial analysis, the diversity of remote sensors, encompassing both optical and microwave technologies, offers a wealth of distinct observational capabilities. Recognizing this, we present msGFM, a multisensor geospatial foundation model that effectively unifies data from four key sensor modalities. This integration spans an expansive dataset of two million multisensor images. msGFM is uniquely adept at handling both paired and unpaired sensor data. For data originating from identical geolocations, our model employs an innovative cross-sensor pretraining approach in masked image modeling, enabling the synthesis of joint representations from diverse sensors. msGFM, incorporating four remote sensors, upholds strong performance, forming a comprehensive model adaptable to various sensor types. msGFM has demonstrated enhanced proficiency in a range of both single-sensor and multisensor downstream tasks. These include scene classification, segmentation, cloud removal, and pan-sharpening. A key discovery of our research is that representations derived from natural images are not always compatible with the distinct characteristics of geospatial remote sensors, underscoring the limitations of existing representations in this field. Our work can serve as a guide for developing multisensor geospatial pretraining models, paving the way for more advanced geospatial capabilities.
[ "['Boran Han' 'Shuai Zhang' 'Xingjian Shi' 'Markus Reichstein']" ]
null
null
2404.01268
null
null
http://arxiv.org/pdf/2404.01268v1
2024-04-01T17:45:15Z
2024-04-01T17:45:15Z
Mapping the Increasing Use of LLMs in Scientific Papers
Scientific publishing lays the foundation of science by disseminating research findings, fostering collaboration, encouraging reproducibility, and ensuring that scientific knowledge is accessible, verifiable, and built upon over time. Recently, there has been immense speculation about how many people are using large language models (LLMs) like ChatGPT in their academic writing, and to what extent this tool might have an effect on global scientific practices. However, we lack a precise measure of the proportion of academic writing substantially modified or produced by LLMs. To address this gap, we conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals, using a population-level statistical framework to measure the prevalence of LLM-modified content over time. Our statistical estimation operates on the corpus level and is more robust than inference on individual instances. Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers (up to 17.5%). In comparison, Mathematics papers and the Nature portfolio showed the least LLM modification (up to 6.3%). Moreover, at an aggregate level, our analysis reveals that higher levels of LLM-modification are associated with papers whose first authors post preprints more frequently, papers in more crowded research areas, and papers of shorter lengths. Our findings suggest that LLMs are being broadly used in scientific writing.
[ "['Weixin Liang' 'Yaohui Zhang' 'Zhengxuan Wu' 'Haley Lepp' 'Wenlong Ji'\n 'Xuandong Zhao' 'Hancheng Cao' 'Sheng Liu' 'Siyu He' 'Zhi Huang'\n 'Diyi Yang' 'Christopher Potts' 'Christopher D Manning' 'James Y. Zou']" ]
null
null
2404.01270
null
null
http://arxiv.org/pdf/2404.01270v1
2024-04-01T17:46:17Z
2024-04-01T17:46:17Z
Decentralized Collaborative Learning Framework with External Privacy Leakage Analysis
This paper presents two methodological advancements in decentralized multi-task learning under privacy constraints, aiming to pave the way for future developments in next-generation Blockchain platforms. First, we expand the existing framework for collaborative dictionary learning (CollabDict), which has previously been limited to Gaussian mixture models, by incorporating deep variational autoencoders (VAEs) into the framework, with a particular focus on anomaly detection. We demonstrate that the VAE-based anomaly score function shares the same mathematical structure as the non-deep model, and provide a comprehensive qualitative comparison. Second, considering the widespread use of "pre-trained models," we provide a mathematical analysis of data privacy leakage when models trained with CollabDict are shared externally. We show that the CollabDict approach, when applied to Gaussian mixtures, adheres to a Rényi differential privacy criterion. Additionally, we propose a practical metric for monitoring internal privacy breaches during the learning process.
[ "['Tsuyoshi Idé' 'Dzung T. Phan' 'Rudy Raymond']" ]
null
null
2404.01273
null
null
http://arxiv.org/pdf/2404.01273v2
2024-06-29T01:28:02Z
2024-04-01T17:48:55Z
TWIN-GPT: Digital Twins for Clinical Trials via Large Language Model
Clinical trials are indispensable for medical research and the development of new treatments. However, clinical trials often involve thousands of participants and can span several years to complete, with a high probability of failure during the process. Recently, there has been a burgeoning interest in virtual clinical trials, which simulate real-world scenarios and hold the potential to significantly enhance patient safety, expedite development, reduce costs, and contribute to the broader scientific knowledge in healthcare. Existing research often focuses on leveraging electronic health records (EHRs) to support clinical trial outcome prediction. Yet, trained with limited clinical trial outcome data, existing approaches frequently struggle to perform accurate predictions. Some research has attempted to generate EHRs to augment model development but has fallen short in personalizing the generation for individual patient profiles. Recently, the emergence of large language models has illuminated new possibilities, as their embedded comprehensive clinical knowledge has proven beneficial in addressing medical issues. In this paper, we propose a large language model-based digital twin creation approach, called TWIN-GPT. TWIN-GPT can establish cross-dataset associations of medical information given limited data, generating unique personalized digital twins for different patients, thereby preserving individual patient characteristics. Comprehensive experiments show that using digital twins created by TWIN-GPT can boost the clinical trial outcome prediction, exceeding various previous prediction approaches.
[ "['Yue Wang' 'Tianfan Fu' 'Yinlong Xu' 'Zihan Ma' 'Hongxia Xu'\n 'Yingzhou Lu' 'Bang Du' 'Honghao Gao' 'Jian Wu']" ]
null
null
2404.01291
null
null
http://arxiv.org/pdf/2404.01291v2
2024-06-18T07:09:55Z
2024-04-01T17:58:06Z
Evaluating Text-to-Visual Generation with Image-to-Text Generation
Despite significant progress in generative AI, comprehensive evaluation remains challenging because of the lack of effective metrics and standardized benchmarks. For instance, the widely-used CLIPScore measures the alignment between a (generated) image and text prompt, but it fails to produce reliable scores for complex prompts involving compositions of objects, attributes, and relations. One reason is that text encoders of CLIP can notoriously act as a "bag of words", conflating prompts such as "the horse is eating the grass" with "the grass is eating the horse". To address this, we introduce the VQAScore, which uses a visual-question-answering (VQA) model to produce an alignment score by computing the probability of a "Yes" answer to a simple "Does this figure show '{text}'?" question. Though simpler than prior art, VQAScore computed with off-the-shelf models produces state-of-the-art results across many (8) image-text alignment benchmarks. We also compute VQAScore with an in-house model that follows best practices in the literature. For example, we use a bidirectional image-question encoder that allows image embeddings to depend on the question being asked (and vice versa). Our in-house model, CLIP-FlanT5, outperforms even the strongest baselines that make use of the proprietary GPT-4V. Interestingly, although we train with only images, VQAScore can also align text with video and 3D models. VQAScore allows researchers to benchmark text-to-visual generation using complex texts that capture the compositional structure of real-world prompts. We introduce GenAI-Bench, a more challenging benchmark with 1,600 compositional text prompts that require parsing scenes, objects, attributes, relationships, and high-order reasoning like comparison and logic. GenAI-Bench also offers over 15,000 human ratings for leading image and video generation models such as Stable Diffusion, DALL-E 3, and Gen2.
[ "['Zhiqiu Lin' 'Deepak Pathak' 'Baiqi Li' 'Jiayao Li' 'Xide Xia'\n 'Graham Neubig' 'Pengchuan Zhang' 'Deva Ramanan']" ]
null
null
2404.01292
null
null
http://arxiv.org/pdf/2404.01292v1
2024-04-01T17:58:30Z
2024-04-01T17:58:30Z
Measuring Style Similarity in Diffusion Models
Generative models are now widely used by graphic designers and artists. Prior works have shown that these models remember and often replicate content from their training data during generation. Hence as their proliferation increases, it has become important to perform a database search to determine whether the properties of the image are attributable to specific training data, every time before a generated image is used for professional purposes. Existing tools for this purpose focus on retrieving images of similar semantic content. Meanwhile, many artists are concerned with style replication in text-to-image models. We present a framework for understanding and extracting style descriptors from images. Our framework comprises a new dataset curated using the insight that style is a subjective property of an image that captures complex yet meaningful interactions of factors including but not limited to colors, textures, shapes, etc. We also propose a method to extract style descriptors that can be used to attribute style of a generated image to the images used in the training dataset of a text-to-image model. We showcase promising results in various style retrieval tasks. We also quantitatively and qualitatively analyze style attribution and matching in the Stable Diffusion model. Code and artifacts are available at https://github.com/learn2phoenix/CSD.
[ "['Gowthami Somepalli' 'Anubhav Gupta' 'Kamal Gupta' 'Shramay Palta'\n 'Micah Goldblum' 'Jonas Geiping' 'Abhinav Shrivastava' 'Tom Goldstein']" ]
null
null
2404.01299
null
null
http://arxiv.org/pdf/2404.01299v2
2024-06-14T17:46:02Z
2024-04-01T17:59:53Z
CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes
Causal video question answering (QA) has garnered increasing interest, yet existing datasets often lack depth in causal reasoning. To address this gap, we capitalize on the unique properties of cartoons and construct CausalChaos!, a novel, challenging causal Why-QA dataset built upon the iconic "Tom and Jerry" cartoon series. Cartoons use the principles of animation that allow animators to create expressive, unambiguous causal relationships between events to form a coherent storyline. Utilizing these properties, along with thought-provoking questions and multi-level answers (answer and detailed causal explanation), our questions involve causal chains that interconnect multiple dynamic interactions between characters and visual scenes. These factors demand models to solve more challenging, yet well-defined causal relationships. We also introduce hard incorrect answer mining, including a causally confusing version that is even more challenging. While models perform well, there is much room for improvement, especially on open-ended answers. We identify more advanced/explicit causal relationship modeling and joint modeling of vision and language as the immediate areas for future efforts to focus upon. Along with the other complementary datasets, our new challenging dataset will pave the way for these developments in the field.
[ "['Paritosh Parmar' 'Eric Peh' 'Ruirui Chen' 'Ting En Lam' 'Yuhan Chen'\n 'Elston Tan' 'Basura Fernando']" ]
null
null
2404.01300
null
null
http://arxiv.org/pdf/2404.01300v2
2024-04-18T17:59:57Z
2024-04-01T17:59:55Z
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: Can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Owing to the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers to suit the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, contrasting it with other 3D representations such as pointclouds where the information density can be uneven, and the representation is irregular. Due to the difficulty of applying masked autoencoders to an implicit representation, such as NeRF, we opt for extracting an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. Our goal is made possible by masking random patches from NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model can learn the semantic and spatial structure of complete scenes. We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.6 million images. Once pretrained, the encoder is used for effective 3D transfer learning. Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks. Utilizing unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene understanding baselines on Front3D and ScanNet datasets with an absolute performance improvement of over 20% AP50 and 8% AP25 for 3D object detection.
[ "['Muhammad Zubair Irshad' 'Sergey Zakahrov' 'Vitor Guizilini'\n 'Adrien Gaidon' 'Zsolt Kira' 'Rares Ambrus']" ]
null
null
2404.01306
null
null
http://arxiv.org/pdf/2404.01306v3
2024-06-05T16:07:13Z
2024-02-28T22:21:47Z
NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models
Transformer-based Language Models have become ubiquitous in Natural Language Processing (NLP) due to their impressive performance on various tasks. However, expensive training as well as inference remains a significant impediment to their widespread applicability. While enforcing sparsity at various levels of the model architecture has found promise in addressing scaling and efficiency issues, there remains a gap in understanding how sparsity affects network topology. Inspired by brain neuronal networks, we explore sparsity approaches through the lens of network topology. Specifically, we exploit mechanisms seen in biological networks, such as preferential attachment and redundant synapse pruning, and show that principled, model-agnostic sparsity approaches are performant and efficient across diverse NLP tasks, spanning both classification (such as natural language inference) and generation (summarization, machine translation), despite our sole objective not being optimizing performance. NeuroPrune is competitive with (or sometimes superior to) baselines on performance and can be up to $10$x faster in terms of training time for a given level of sparsity, simultaneously exhibiting measurable improvements in inference time in many cases.
[ "['Amit Dhurandhar' 'Tejaswini Pedapati' 'Ronny Luss' 'Soham Dan'\n 'Aurelie Lozano' 'Payel Das' 'Georgios Kollias']" ]
null
null
2404.01308
null
null
http://arxiv.org/pdf/2404.01308v1
2024-03-04T08:38:55Z
2024-03-04T08:38:55Z
Learning to Solve Job Shop Scheduling under Uncertainty
The Job-Shop Scheduling Problem (JSSP) is a combinatorial optimization problem where tasks need to be scheduled on machines in order to minimize criteria such as makespan or delay. To address more realistic scenarios, we associate a probability distribution with the duration of each task. Our objective is to generate a robust schedule, i.e., one that minimizes the average makespan. This paper introduces a new approach that leverages Deep Reinforcement Learning (DRL) techniques to search for robust solutions, emphasizing JSSPs with uncertain durations. Key contributions of this research include: (1) advancements in DRL applications to JSSPs, enhancing generalization and scalability, (2) a novel method for addressing JSSPs with uncertain durations. The Wheatley approach, which integrates Graph Neural Networks (GNNs) and DRL, is made publicly available for further research and applications.
[ "['Guillaume Infantes' 'Stéphanie Roussel' 'Pierre Pereira'\n 'Antoine Jacquet' 'Emmanuel Benazera']" ]
null
null
2404.01317
null
null
http://arxiv.org/abs/2404.01317v1
2024-03-27T13:40:09Z
2024-03-27T13:40:09Z
Intelligent Learning Rate Distribution to reduce Catastrophic Forgetting in Transformers
Pretraining language models on large text corpora is a common practice in natural language processing. Fine-tuning of these models is then performed to achieve the best results on a variety of tasks. In this paper, we investigate the problem of catastrophic forgetting in transformer neural networks and question the common practice of fine-tuning with a flat learning rate for the entire network in this context. We perform a hyperparameter optimization process to find learning rate distributions that are better than a flat learning rate. We combine the learning rate distributions thus found and show that they generalize to better performance with respect to the problem of catastrophic forgetting. We validate these learning rate distributions with a variety of NLP benchmarks from the GLUE dataset.
[ "['Philip Kenneweg' 'Alexander Schulz' 'Sarah Schröder' 'Barbara Hammer']" ]
null
null
2404.01318
null
null
http://arxiv.org/pdf/2404.01318v3
2024-06-16T15:58:44Z
2024-03-28T02:44:02Z
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models
Jailbreak attacks cause large language models (LLMs) to generate harmful, unethical, or otherwise objectionable content. Evaluating these attacks presents a number of challenges, which the current collection of benchmarks and evaluation techniques do not adequately address. First, there is no clear standard of practice regarding jailbreaking evaluation. Second, existing works compute costs and success rates in incomparable ways. And third, numerous works are not reproducible, as they withhold adversarial prompts, involve closed-source code, or rely on evolving proprietary APIs. To address these challenges, we introduce JailbreakBench, an open-sourced benchmark with the following components: (1) an evolving repository of state-of-the-art adversarial prompts, which we refer to as jailbreak artifacts; (2) a jailbreaking dataset comprising 100 behaviors -- both original and sourced from prior work -- which align with OpenAI's usage policies; (3) a standardized evaluation framework at https://github.com/JailbreakBench/jailbreakbench that includes a clearly defined threat model, system prompts, chat templates, and scoring functions; and (4) a leaderboard at https://jailbreakbench.github.io/ that tracks the performance of attacks and defenses for various LLMs. We have carefully considered the potential ethical implications of releasing this benchmark, and believe that it will be a net positive for the community.
[ "['Patrick Chao' 'Edoardo Debenedetti' 'Alexander Robey'\n 'Maksym Andriushchenko' 'Francesco Croce' 'Vikash Sehwag'\n 'Edgar Dobriban' 'Nicolas Flammarion' 'George J. Pappas' 'Florian Tramer'\n 'Hamed Hassani' 'Eric Wong']" ]
null
null
2404.01327
null
null
http://arxiv.org/abs/2404.01327v1
2024-03-29T12:10:21Z
2024-03-29T12:10:21Z
Entertainment chatbot for the digital inclusion of elderly people without abstraction capabilities
Current language processing technologies allow the creation of conversational chatbot platforms. Even though artificial intelligence is still too immature to support a satisfactory user experience in many mass-market domains, conversational interfaces have found their way into ad hoc applications such as call centres and online shopping assistants. However, they have not been applied so far to social inclusion of elderly people, who are particularly vulnerable to the digital divide. Many of them relieve their loneliness with traditional media such as TV and radio, which are known to create a feeling of companionship. In this paper we present the EBER chatbot, designed to reduce the digital gap for the elderly. EBER reads news in the background and adapts its responses to the user's mood. Its novelty lies in the concept of "intelligent radio", according to which, instead of simplifying a digital information system to make it accessible to the elderly, a traditional channel they find familiar -- background news -- is augmented with interactions via voice dialogues. We make it possible by combining Artificial Intelligence Modelling Language, automatic Natural Language Generation and Sentiment Analysis. The system allows accessing digital content of interest by combining words extracted from user answers to chatbot questions with keywords extracted from the news items. This approach permits defining metrics of the abstraction capabilities of the users based on a spatial representation of the word space. To prove the suitability of the proposed solution we present results of real experiments conducted with elderly people, which provided valuable insights. Our approach was considered satisfactory during the tests and improved the information search capabilities of the participants.
[ "['Silvia García-Méndez' 'Francisco de Arriba-Pérez'\n 'Francisco J. González-Castaño' 'José A. Regueiro-Janeiro'\n 'Felipe Gil-Castiñeira']" ]
null
null
2404.01330
null
null
http://arxiv.org/pdf/2404.01330v1
2024-03-29T15:27:28Z
2024-03-29T15:27:28Z
Holo-VQVAE: VQ-VAE for phase-only holograms
Holography stands at the forefront of visual technology innovation, offering immersive, three-dimensional visualizations through the manipulation of light wave amplitude and phase. Contemporary research in hologram generation has predominantly focused on image-to-hologram conversion, producing holograms from existing images. These approaches, while effective, inherently limit the scope of innovation and creativity in hologram generation. In response to this limitation, we present Holo-VQVAE, a novel generative framework tailored for phase-only holograms (POHs). Holo-VQVAE leverages the architecture of Vector Quantized Variational AutoEncoders, enabling it to learn the complex distributions of POHs. Furthermore, it integrates the Angular Spectrum Method into the training process, facilitating learning in the image domain. This framework allows for the generation of unseen, diverse holographic content directly from its intricately learned latent space without requiring pre-existing images. This pioneering work paves the way for groundbreaking applications and methodologies in holographic content creation, opening a new era in the exploration of holographic content.
[ "['Joohyun Park' 'Hyeongyeop Kang']" ]
null
null
2404.01332
null
null
http://arxiv.org/pdf/2404.01332v1
2024-03-29T22:49:43Z
2024-03-29T22:49:43Z
Wait, It's All Token Noise? Always Has Been: Interpreting LLM Behavior Using Shapley Value
The emergence of large language models (LLMs) has opened up exciting possibilities for simulating human behavior and cognitive processes, with potential applications in various domains, including marketing research and consumer behavior analysis. However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain due to glaring divergences that suggest fundamentally different underlying processes at play and the sensitivity of LLM responses to prompt variations. This paper presents a novel approach based on Shapley values from cooperative game theory to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output. Through two applications (a discrete choice experiment and an investigation of cognitive biases), we demonstrate how the Shapley value method can uncover what we term "token noise" effects, a phenomenon where LLM decisions are disproportionately influenced by tokens providing minimal informative content. This phenomenon raises concerns about the robustness and generalizability of insights obtained from LLMs in the context of human behavior simulation. Our model-agnostic approach extends its utility to proprietary LLMs, providing a valuable tool for marketers and researchers to strategically optimize prompts and mitigate apparent cognitive biases. Our findings underscore the need for a more nuanced understanding of the factors driving LLM responses before relying on them as substitutes for human subjects in research settings. We emphasize the importance of researchers reporting results conditioned on specific prompt templates and exercising caution when drawing parallels between human behavior and LLMs.
[ "['Behnam Mohammadi']" ]
null
null
2404.01334
null
null
http://arxiv.org/pdf/2404.01334v1
2024-03-30T12:13:57Z
2024-03-30T12:13:57Z
Augmenting NER Datasets with LLMs: Towards Automated and Refined Annotation
In the field of Natural Language Processing (NLP), Named Entity Recognition (NER) is recognized as a critical technology, employed across a wide array of applications. Traditional methodologies for annotating datasets for NER models are challenged by high costs and variations in dataset quality. This research introduces a novel hybrid annotation approach that synergizes human effort with the capabilities of Large Language Models (LLMs). This approach not only aims to ameliorate the noise inherent in manual annotations, such as omissions, thereby enhancing the performance of NER models, but also achieves this in a cost-effective manner. Additionally, by employing a label mixing strategy, it addresses the issue of class imbalance encountered in LLM-based annotations. Through an analysis across multiple datasets, this method has been consistently shown to provide superior performance compared to traditional annotation methods, even under constrained budget conditions. This study illuminates the potential of leveraging LLMs to improve dataset quality, introduces a novel technique to mitigate class imbalances, and demonstrates the feasibility of achieving high-performance NER in a cost-effective way.
[ "['Yuji Naraki' 'Ryosuke Yamaki' 'Yoshikazu Ikeda' 'Takafumi Horie'\n 'Hiroki Naganuma']" ]
null
null
2404.01335
null
null
http://arxiv.org/pdf/2404.01335v1
2024-03-30T13:25:11Z
2024-03-30T13:25:11Z
Generative AI for Architectural Design: A Literature Review
Generative Artificial Intelligence (AI) has pioneered new methodological paradigms in architectural design, significantly expanding the innovative potential and efficiency of the design process. This paper explores the extensive applications of generative AI technologies in architectural design, a trend that has benefited from the rapid development of deep generative models. This article provides a comprehensive review of the basic principles of generative AI and large-scale models and highlights the applications in the generation of 2D images, videos, and 3D models. In addition, by reviewing the latest literature from 2020, this paper scrutinizes the impact of generative AI technologies at different stages of architectural design, from generating initial architectural 3D forms to producing final architectural imagery. The marked trend of research growth indicates an increasing inclination within the architectural design community towards embracing generative AI, thereby catalyzing a shared enthusiasm for research. These research cases and methodologies have not only proven to enhance efficiency and innovation significantly but have also posed challenges to the conventional boundaries of architectural creativity. Finally, we point out new directions for design innovation and articulate fresh trajectories for applying generative AI in the architectural domain. This article provides the first comprehensive literature review about generative AI for architectural design, and we believe this work can facilitate more research work on this significant topic in architecture.
[ "['Chengyuan Li' 'Tianyu Zhang' 'Xusheng Du' 'Ye Zhang' 'Haoran Xie']" ]
null
null
2404.01337
null
null
http://arxiv.org/abs/2404.01337v1
2024-03-30T16:40:10Z
2024-03-30T16:40:10Z
Detection of Temporality at Discourse Level on Financial News by Combining Natural Language Processing and Machine Learning
Finance-related news such as Bloomberg News, CNN Business and Forbes are valuable sources of real data for market screening systems. In news, an expert shares opinions beyond plain technical analyses that include context such as political, sociological and cultural factors. In the same text, the expert often discusses the performance of different assets. Some key statements are mere descriptions of past events while others are predictions. Therefore, understanding the temporality of the key statements in a text is essential to separate context information from valuable predictions. We propose a novel system to detect the temporality of finance-related news at discourse level that combines Natural Language Processing and Machine Learning techniques, and exploits sophisticated features such as syntactic and semantic dependencies. More specifically, we seek to extract the dominant tenses of the main statements, which may be either explicit or implicit. We have tested our system on a labelled dataset of finance-related news annotated by researchers with knowledge in the field. Experimental results reveal a high detection precision compared to an alternative rule-based baseline approach. Ultimately, this research contributes to the state-of-the-art of market screening by identifying predictive knowledge for financial decision making.
[ "['Silvia García-Méndez' 'Francisco de Arriba-Pérez' 'Ana Barros-Vila'\n 'Francisco J. González-Castaño']" ]
null
null
2404.01338
null
null
http://arxiv.org/abs/2404.01338v1
2024-03-30T17:49:34Z
2024-03-30T17:49:34Z
Automatic detection of relevant information, predictions and forecasts in financial news through topic modelling with Latent Dirichlet Allocation
Financial news items are unstructured sources of information that can be mined to extract knowledge for market screening applications. Manual extraction of relevant information from the continuous stream of finance-related news is cumbersome and beyond the skills of many investors, who, at most, can follow a few sources and authors. Accordingly, we focus on the analysis of financial news to identify relevant text and, within that text, forecasts and predictions. We propose a novel Natural Language Processing (NLP) system to assist investors in the detection of relevant financial events in unstructured textual sources by considering both relevance and temporality at the discursive level. Firstly, we segment the text to group together closely related text. Secondly, we apply co-reference resolution to discover internal dependencies within segments. Finally, we perform relevant topic modelling with Latent Dirichlet Allocation (LDA) to separate relevant from less relevant text and then analyse the relevant text using a Machine Learning-oriented temporal approach to identify predictions and speculative statements. We created an experimental data set composed of 2,158 financial news items that were manually labelled by NLP researchers to evaluate our solution. The ROUGE-L values for the identification of relevant text and predictions/forecasts were 0.662 and 0.982, respectively. To our knowledge, this is the first work to jointly consider relevance and temporality at the discursive level. It contributes to the transfer of human associative discourse capabilities to expert systems through the combination of multi-paragraph topic segmentation and co-reference resolution to separate author expression patterns, topic modelling with LDA to detect relevant text, and discursive temporality analysis to identify forecasts and predictions within this text.
[ "['Silvia García-Méndez' 'Francisco de Arriba-Pérez' 'Ana Barros-Vila'\n 'Francisco J. González-Castaño' 'Enrique Costa-Montenegro']" ]
null
null
2404.01340
null
null
http://arxiv.org/pdf/2404.01340v1
2024-03-31T02:46:27Z
2024-03-31T02:46:27Z
From Similarity to Superiority: Channel Clustering for Time Series Forecasting
Time series forecasting has attracted significant attention in recent decades. Previous studies have demonstrated that the Channel-Independent (CI) strategy improves forecasting performance by treating different channels individually, while it leads to poor generalization on unseen instances and ignores potentially necessary interactions between channels. Conversely, the Channel-Dependent (CD) strategy mixes all channels with even irrelevant and indiscriminate information, which, however, results in oversmoothing issues and limits forecasting accuracy. There is a lack of a channel strategy that effectively balances individual channel treatment for improved forecasting performance without overlooking essential interactions between channels. Motivated by our observation of a correlation between the time series model's performance boost against channel mixing and the intrinsic similarity between a pair of channels, we developed a novel and adaptable Channel Clustering Module (CCM). CCM dynamically groups channels characterized by intrinsic similarities and leverages cluster identity instead of channel identity, combining the best of CD and CI worlds. Extensive experiments on real-world datasets demonstrate that CCM can (1) boost the performance of CI and CD models by an average margin of 2.4% and 7.2% on long-term and short-term forecasting, respectively; (2) enable zero-shot forecasting with mainstream time series forecasting models; (3) uncover intrinsic time series patterns among channels and improve interpretability of complex time series models.
[ "['Jialin Chen' 'Jan Eric Lenssen' 'Aosong Feng' 'Weihua Hu' 'Matthias Fey'\n 'Leandros Tassiulas' 'Jure Leskovec' 'Rex Ying']" ]
null
null
2404.01341
null
null
http://arxiv.org/pdf/2404.01341v2
2024-04-27T01:34:41Z
2024-03-31T05:04:38Z
Block-Diagonal Guided DBSCAN Clustering
Cluster analysis plays a crucial role in database mining, and one of the most widely used algorithms in this field is DBSCAN. However, DBSCAN has several limitations, such as difficulty in handling high-dimensional large-scale data, sensitivity to input parameters, and lack of robustness in producing clustering results. This paper introduces an improved version of DBSCAN that leverages the block-diagonal property of the similarity graph to guide the clustering procedure of DBSCAN. The key idea is to construct a graph that measures the similarity between high-dimensional large-scale data points and has the potential to be transformed into a block-diagonal form through an unknown permutation, followed by a cluster-ordering procedure to generate the desired permutation. The clustering structure can be easily determined by identifying the diagonal blocks in the permuted graph. We propose a gradient descent-based method to solve the proposed problem. Additionally, we develop a DBSCAN-based points traversal algorithm that identifies clusters with high densities in the graph and generates an augmented ordering of clusters. The block-diagonal structure of the graph is then achieved through permutation based on the traversal order, providing a flexible foundation for both automatic and interactive cluster analysis. We introduce a split-and-refine algorithm to automatically search for all diagonal blocks in the permuted graph with theoretically optimal guarantees under specific cases. We extensively evaluate our proposed approach on twelve challenging real-world benchmark clustering datasets and demonstrate its superior performance compared to the state-of-the-art clustering method on every dataset.
[ "['Weibing Zhao']" ]
null
null
2404.01345
null
null
http://arxiv.org/pdf/2404.01345v1
2024-03-31T09:52:25Z
2024-03-31T09:52:25Z
Enhancing Bangla Fake News Detection Using Bidirectional Gated Recurrent Units and Deep Learning Techniques
The rise of fake news has made effective detection methods, including in languages other than English, increasingly important. This study addresses the challenges of Bangla, a low-resource language for this task. To this end, a comprehensive dataset containing about 50,000 news items is proposed. Several deep learning models have been tested on this dataset, including the bidirectional gated recurrent unit (GRU), the long short-term memory (LSTM), the 1D convolutional neural network (CNN), and hybrid architectures. We assessed the efficacy of the models using a range of measures, including recall, precision, F1 score, and accuracy. We carry out comprehensive trials to show the effectiveness of these models in identifying fake news in Bangla, with the Bidirectional GRU model achieving an accuracy of 99.16%. Our analysis highlights the importance of dataset balance and the need for continual improvement efforts. This study makes a major contribution to the creation of Bangla fake news detection systems with limited resources, thereby setting the stage for future improvements in the detection process.
[ "['Utsha Roy' 'Mst. Sazia Tahosin' 'Md. Mahedi Hassan' 'Taminul Islam'\n 'Fahim Imtiaz' 'Md Rezwane Sadik' 'Yassine Maleh' 'Rejwan Bin Sulaiman'\n 'Md. Simul Hasan Talukder']" ]
null
null
2404.01351
null
null
http://arxiv.org/pdf/2404.01351v1
2024-04-01T04:21:49Z
2024-04-01T04:21:49Z
AETTA: Label-Free Accuracy Estimation for Test-Time Adaptation
Test-time adaptation (TTA) has emerged as a viable solution to adapt pre-trained models to domain shifts using unlabeled test data. However, TTA faces challenges of adaptation failures due to its reliance on blind adaptation to unknown test samples in dynamic scenarios. Traditional methods for out-of-distribution performance estimation are limited by unrealistic assumptions in the TTA context, such as requiring labeled data or re-training models. To address this issue, we propose AETTA, a label-free accuracy estimation algorithm for TTA. We propose the prediction disagreement as the accuracy estimate, calculated by comparing the target model prediction with dropout inferences. We then improve the prediction disagreement to extend the applicability of AETTA under adaptation failures. Our extensive evaluation with four baselines and six TTA methods demonstrates that AETTA shows an average of 19.8%p more accurate estimation compared with the baselines. We further demonstrate the effectiveness of accuracy estimation with a model recovery case study, showcasing the practicality of our model recovery based on accuracy estimation. The source code is available at https://github.com/taeckyung/AETTA.
[ "['Taeckyung Lee' 'Sorn Chottananurak' 'Taesik Gong' 'Sung-Ju Lee']" ]
null
null
2404.01353
null
null
http://arxiv.org/pdf/2404.01353v1
2024-04-01T07:35:15Z
2024-04-01T07:35:15Z
Efficiently Distilling LLMs for Edge Applications
Supernet training of LLMs is of great interest in industrial applications as it confers the ability to produce a palette of smaller models at constant cost, regardless of the number of models (of different size / latency) produced. We propose a new method called Multistage Low-rank Fine-tuning of Super-transformers (MLFS) for parameter-efficient supernet training. We show that it is possible to obtain high-quality encoder models that are suitable for commercial edge applications, and that while decoder-only models are resistant to a comparable degree of compression, decoders can be effectively sliced for a significant reduction in training time.
[ "['Achintya Kundu' 'Fabian Lim' 'Aaron Chew' 'Laura Wynter' 'Penny Chong'\n 'Rhui Dih Lee']" ]
null
null
2404.01356
null
null
http://arxiv.org/pdf/2404.01356v1
2024-04-01T09:29:16Z
2024-04-01T09:29:16Z
The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness
Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, leading to a reduction in either prediction accuracy or individual fairness. To jointly characterize the susceptibility of prediction accuracy and individual fairness to adversarial perturbations, we introduce a novel robustness definition termed robust accurate fairness. Informally, robust accurate fairness requires that predictions for an instance and its similar counterparts consistently align with the ground truth when subjected to input perturbations. We propose an adversarial attack approach dubbed RAFair to expose false or biased adversarial defects in DNN, which either deceive accuracy or compromise individual fairness. Then, we show that such adversarial instances can be effectively addressed by carefully designed benign perturbations, correcting their predictions to be accurate and fair. Our work explores the double-edged sword of input perturbations to robust accurate fairness in DNN and the potential of using benign perturbations to correct adversarial instances.
[ "['Xuran Li' 'Peng Wu' 'Yanting Chen' 'Xingjun Ma' 'Zhen Zhang'\n 'Kaixiang Dong']" ]
null
null
2404.01358
null
null
http://arxiv.org/pdf/2404.01358v1
2024-04-01T09:48:14Z
2024-04-01T09:48:14Z
Utilizing AI and Social Media Analytics to Discover Adverse Side Effects of GLP-1 Receptor Agonists
Adverse side effects (ASEs) of drugs, revealed after FDA approval, pose a threat to patient safety. To promptly detect overlooked ASEs, we developed a digital health methodology capable of analyzing massive public data from social media, published clinical research, manufacturers' reports, and ChatGPT. We uncovered ASEs associated with the glucagon-like peptide 1 receptor agonists (GLP-1 RA), a market expected to grow exponentially to $133.5 billion USD by 2030. Using a Named Entity Recognition (NER) model, our method successfully detected 21 potential ASEs overlooked upon FDA approval, including irritability and numbness. Our data-analytic approach revolutionizes the detection of unreported ASEs associated with newly deployed drugs, leveraging cutting-edge AI-driven social media analytics. It can increase the safety of new drugs in the marketplace by unlocking the power of social media to support regulators and manufacturers in the rapid discovery of hidden ASE risks.
[ "['Alon Bartal' 'Kathleen M. Jagodnik' 'Nava Pliskin' 'Abraham Seidmann']" ]
null
null
2404.01360
null
null
http://arxiv.org/pdf/2404.01360v1
2024-04-01T11:42:43Z
2024-04-01T11:42:43Z
Harnessing Data and Physics for Deep Learning Phase Recovery
Phase recovery, calculating the phase of a light wave from its intensity measurements, is essential for various applications, such as coherent diffraction imaging, adaptive optics, and biomedical imaging. It enables the reconstruction of an object's refractive index distribution or topography as well as the correction of imaging system aberrations. In recent years, deep learning has been proven to be highly effective in addressing phase recovery problems. Two main deep learning phase recovery strategies are data-driven (DD) with supervised learning mode and physics-driven (PD) with self-supervised learning mode. DD and PD achieve the same goal in different ways, but their similarities and differences have not been systematically studied. Therefore, in this paper, we comprehensively compare these two deep learning phase recovery strategies in terms of time consumption, accuracy, generalization ability, ill-posedness adaptability, and prior capacity. Moreover, we propose a co-driven (CD) strategy of combining datasets and physics for the balance of high- and low-frequency information. The codes for DD, PD, and CD are publicly available at https://github.com/kqwang/DLPR.
[ "['Kaiqiang Wang' 'Edmund Y. Lam']" ]
null
null
2404.01361
null
null
http://arxiv.org/pdf/2404.01361v1
2024-04-01T13:16:34Z
2024-04-01T13:16:34Z
LLM Attributor: Interactive Visual Attribution for LLM Generation
While large language models (LLMs) have shown remarkable capability to generate convincing text across diverse domains, concerns around their potential risks have highlighted the importance of understanding the rationale behind text generation. We present LLM Attributor, a Python library that provides interactive visualizations for training data attribution of an LLM's text generation. Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text. We describe the visual and interactive design of our tool and highlight usage scenarios for LLaMA2 models fine-tuned with two different datasets: online articles about recent disasters and finance-related question-answer pairs. Thanks to LLM Attributor's broad support for computational notebooks, users can easily integrate it into their workflow to interactively visualize attributions of their models. For easier access and extensibility, we open-source LLM Attributor at https://github.com/poloclub/LLM-Attribution. The video demo is available at https://youtu.be/mIG2MDQKQxM.
[ "['Seongmin Lee' 'Zijie J. Wang' 'Aishwarya Chakravarthy' 'Alec Helbling'\n 'ShengYun Peng' 'Mansi Phute' 'Duen Horng Chau' 'Minsuk Kahng']" ]
null
null
2404.01364
null
null
http://arxiv.org/abs/2404.01364v1
2024-04-01T17:34:18Z
2024-04-01T17:34:18Z
Information Plane Analysis Visualization in Deep Learning via Transfer Entropy
In a feedforward network, Transfer Entropy (TE) can be used to measure the influence that one layer has on another by quantifying the information transfer between them during training. According to the Information Bottleneck principle, a neural model's internal representation should compress the input data as much as possible while still retaining sufficient information about the output. Information Plane analysis is a visualization technique used to understand the trade-off between compression and information preservation in the context of the Information Bottleneck method by plotting the amount of information in the input data against the compressed representation. The claim that there is a causal link between information-theoretic compression and generalization, measured by mutual information, is plausible, but results from different studies are conflicting. In contrast to mutual information, TE can capture temporal relationships between variables. To explore such links, in our novel approach we use TE to quantify information transfer between neural layers and perform Information Plane analysis. We obtained encouraging experimental results, opening the possibility for further investigations.
[ "['Adrian Moldovan' 'Angel Cataron' 'Razvan Andonie']" ]
null
null
2404.01365
null
null
http://arxiv.org/pdf/2404.01365v2
2024-04-05T14:31:21Z
2024-04-01T17:56:06Z
Prompt-prompted Mixture of Experts for Efficient LLM Generation
With the development of transformer-based large language models (LLMs), they have been applied to many fields due to their remarkable utility, but this comes at a considerable computational cost at deployment. Fortunately, some methods such as pruning or constructing a mixture of experts (MoE) aim at exploiting sparsity in transformer feedforward (FF) blocks to gain boosts in speed and reduction in memory requirements. However, these techniques can be very costly and inflexible in practice, as they often require training or are restricted to specific types of architectures. To address this, we introduce GRIFFIN, a novel training-free MoE that selects unique FF experts at the sequence level for efficient generation across a plethora of LLMs with different non-ReLU activation functions. This is possible due to a critical observation that many trained LLMs naturally produce highly structured FF activation patterns within a sequence, which we call flocking. Despite our method's simplicity, we show with 50% of the FF parameters, GRIFFIN maintains the original model's performance with little to no degradation on a variety of classification and generation tasks, all while improving latency (e.g. 1.25$\times$ speed-up in Llama 2 13B on an NVIDIA L40). Code is available at https://github.com/hdong920/GRIFFIN.
[ "['Harry Dong' 'Beidi Chen' 'Yuejie Chi']" ]
null
null
2404.01367
null
null
http://arxiv.org/pdf/2404.01367v1
2024-04-01T17:59:48Z
2024-04-01T17:59:48Z
Bigger is not Always Better: Scaling Properties of Latent Diffusion Models
We study the scaling properties of latent diffusion models (LDMs) with an emphasis on their sampling efficiency. While improved network architecture and inference algorithms have been shown to effectively boost the sampling efficiency of diffusion models, the role of model size -- a critical determinant of sampling efficiency -- has not been thoroughly examined. Through empirical analysis of established text-to-image diffusion models, we conduct an in-depth investigation into how model size influences sampling efficiency across varying sampling steps. Our findings unveil a surprising trend: when operating under a given inference budget, smaller models frequently outperform their larger equivalents in generating high-quality results. Moreover, we extend our study to demonstrate the generalizability of these findings by applying various diffusion samplers, exploring diverse downstream tasks, evaluating post-distilled models, as well as comparing performance relative to training compute. These findings open up new pathways for the development of LDM scaling strategies which can be employed to enhance generative capabilities within limited inference budgets.
[ "['Kangfu Mei' 'Zhengzhong Tu' 'Mauricio Delbracio' 'Hossein Talebi'\n 'Vishal M. Patel' 'Peyman Milanfar']" ]
null
null
2404.01413
null
null
http://arxiv.org/pdf/2404.01413v2
2024-04-29T23:13:42Z
2024-04-01T18:31:24Z
Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data
The proliferation of generative models, combined with pretraining on web-scale data, raises a timely question: what happens when these models are trained on their own generated outputs? Recent investigations into model-data feedback loops proposed that such loops would lead to a phenomenon termed model collapse, under which performance progressively degrades with each model-data feedback iteration until fitted models become useless. However, those studies largely assumed that new data replace old data over time, where an arguably more realistic assumption is that data accumulate over time. In this paper, we ask: what effect does accumulating data have on model collapse? We empirically study this question by pretraining sequences of language models on text corpora. We confirm that replacing the original real data by each generation's synthetic data does indeed tend towards model collapse, then demonstrate that accumulating the successive generations of synthetic data alongside the original real data avoids model collapse; these results hold across a range of model sizes, architectures, and hyperparameters. We obtain similar results for deep generative models on other types of real data: diffusion models for molecule conformation generation and variational autoencoders for image generation. To understand why accumulating data can avoid model collapse, we use an analytically tractable framework introduced by prior work in which a sequence of linear models are fit to the previous models' outputs. Previous work used this framework to show that if data are replaced, the test error increases with the number of model-fitting iterations; we extend this argument to prove that if data instead accumulate, the test error has a finite upper bound independent of the number of iterations, meaning model collapse no longer occurs.
[ "['Matthias Gerstgrasser' 'Rylan Schaeffer' 'Apratim Dey' 'Rafael Rafailov'\n 'Henry Sleight' 'John Hughes' 'Tomasz Korbak' 'Rajashree Agrawal'\n 'Dhruv Pai' 'Andrey Gromov' 'Daniel A. Roberts' 'Diyi Yang'\n 'David L. Donoho' 'Sanmi Koyejo']" ]
null
null
2404.01430
null
null
http://arxiv.org/pdf/2404.01430v1
2024-04-01T19:04:17Z
2024-04-01T19:04:17Z
Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs
Recent advances in large language models (LLMs) have enhanced their ability to process long input contexts. This development is particularly crucial for tasks that involve retrieving knowledge from an external datastore, which can result in long inputs. However, recent studies show a positional bias in LLMs, demonstrating varying performance depending on the location of useful information within the input sequence. In this study, we conduct extensive experiments to investigate the root causes of positional bias. Our findings indicate that the primary contributor to LLM positional bias stems from the inherent positional preferences of different models. We demonstrate that merely employing prompt-based solutions is inadequate for overcoming the positional preferences. To address this positional bias issue of a pre-trained LLM, we developed a Position-Aware Parameter Efficient Fine-Tuning (PAPEFT) approach which is composed of a data augmentation technique and a parameter efficient adapter, enhancing a uniform attention distribution across the input context. Our experiments demonstrate that the proposed approach effectively reduces positional bias, improving LLMs' effectiveness in handling long context sequences for various tasks that require externally retrieved knowledge.
[ "['Zheng Zhang' 'Fan Yang' 'Ziyan Jiang' 'Zheng Chen' 'Zhengyang Zhao'\n 'Chengyuan Ma' 'Liang Zhao' 'Yang Liu']" ]
null
null
2404.01436
null
null
http://arxiv.org/pdf/2404.01436v2
2024-04-03T21:08:40Z
2024-04-01T19:17:45Z
Convergence Guarantees for RMSProp and Adam in Generalized-smooth Non-convex Optimization with Affine Noise Variance
This paper provides the first tight convergence analyses for RMSProp and Adam in non-convex optimization under the most relaxed assumptions of coordinate-wise generalized smoothness and affine noise variance. We first analyze RMSProp, which is a special case of Adam with adaptive learning rates but without first-order momentum. Specifically, to solve the challenges due to dependence among adaptive update, unbounded gradient estimate and Lipschitz constant, we demonstrate that the first-order term in the descent lemma converges and its denominator is upper bounded by a function of gradient norm. Based on this result, we show that RMSProp with proper hyperparameters converges to an $\epsilon$-stationary point with an iteration complexity of $\mathcal{O}(\epsilon^{-4})$. We then generalize our analysis to Adam, where the additional challenge is due to a mismatch between the gradient and first-order momentum. We develop a new upper bound on the first-order term in the descent lemma, which is also a function of the gradient norm. We show that Adam with proper hyperparameters converges to an $\epsilon$-stationary point with an iteration complexity of $\mathcal{O}(\epsilon^{-4})$. Our complexity results for both RMSProp and Adam match with the complexity lower bound established in \cite{arjevani2023lower}.
[ "['Qi Zhang' 'Yi Zhou' 'Shaofeng Zou']" ]
null
null
2404.01439
null
null
http://arxiv.org/abs/2404.01439v1
2024-04-01T19:22:58Z
2024-04-01T19:22:58Z
Creating emoji lexica from unsupervised sentiment analysis of their descriptions
Online media, such as blogs and social networking sites, generate massive volumes of unstructured data of great interest to analyze the opinions and sentiments of individuals and organizations. Novel approaches beyond Natural Language Processing are necessary to quantify these opinions with polarity metrics. So far, the sentiment expressed by emojis has received little attention. The use of these symbols, however, has boomed in the past four years. About twenty billion emojis are typed on Twitter nowadays, and new emojis keep appearing in each new Unicode version, making them increasingly relevant to sentiment analysis tasks. This has motivated us to propose a novel approach to predict the sentiments expressed by emojis in online textual messages, such as tweets, that does not require human effort to manually annotate data and saves valuable time for other analysis tasks. For this purpose, we automatically constructed a novel emoji sentiment lexicon using an unsupervised sentiment analysis system based on the definitions given by emoji creators in Emojipedia. Additionally, we automatically created lexicon variants by also considering the sentiment distribution of the informal texts accompanying emojis. All these lexica are evaluated and compared regarding the improvement obtained by including them in sentiment analysis of the annotated datasets provided by Kralj Novak et al. (2015). The results confirm the competitiveness of our approach.
[ "['Milagros Fernández-Gavilanes' 'Jonathan Juncal-Martínez'\n 'Silvia García-Méndez' 'Enrique Costa-Montenegro'\n 'Francisco Javier González-Castaño']" ]
null
null
2404.01448
null
null
http://arxiv.org/pdf/2404.01448v2
2024-04-09T03:47:27Z
2024-04-01T19:41:33Z
Prior Frequency Guided Diffusion Model for Limited Angle (LA)-CBCT Reconstruction
Cone-beam computed tomography (CBCT) is widely used in image-guided radiotherapy. Reconstructing CBCTs from limited-angle acquisitions (LA-CBCT) is highly desired for improved imaging efficiency, dose reduction, and better mechanical clearance. LA-CBCT reconstruction, however, suffers from severe under-sampling artifacts, making it a highly ill-posed inverse problem. Diffusion models can generate data/images by reversing a data-noising process through learned data distributions; and can be incorporated as a denoiser/regularizer in LA-CBCT reconstruction. In this study, we developed a diffusion model-based framework, prior frequency-guided diffusion model (PFGDM), for robust and structure-preserving LA-CBCT reconstruction. PFGDM uses a conditioned diffusion model as a regularizer for LA-CBCT reconstruction, and the condition is based on high-frequency information extracted from patient-specific prior CT scans which provides a strong anatomical prior for LA-CBCT reconstruction. Specifically, we developed two variants of PFGDM (PFGDM-A and PFGDM-B) with different conditioning schemes. PFGDM-A applies the high-frequency CT information condition until a pre-optimized iteration step, and drops it afterwards to enable both similar and differing CT/CBCT anatomies to be reconstructed. PFGDM-B, on the other hand, continuously applies the prior CT information condition in every reconstruction step, while with a decaying mechanism, to gradually phase out the reconstruction guidance from the prior CT scans. The two variants of PFGDM were tested and compared with current available LA-CBCT reconstruction solutions, via metrics including PSNR and SSIM. PFGDM outperformed all traditional and diffusion model-based methods. PFGDM reconstructs high-quality LA-CBCTs under very-limited gantry angles, allowing faster and more flexible CBCT scans with dose reductions.
[ "['Jiacheng Xie' 'Hua-Chieh Shao' 'Yunxiang Li' 'You Zhang']" ]
null
null
2404.01459
null
null
http://arxiv.org/pdf/2404.01459v1
2024-04-01T20:13:28Z
2024-04-01T20:13:28Z
Game-Theoretic Deep Reinforcement Learning to Minimize Carbon Emissions and Energy Costs for AI Inference Workloads in Geo-Distributed Data Centers
Data centers are increasingly using more energy due to the rise in Artificial Intelligence (AI) workloads, which negatively impacts the environment and raises operational costs. Reducing operating expenses and carbon emissions while maintaining performance in data centers is a challenging problem. This work introduces a unique approach combining Game Theory (GT) and Deep Reinforcement Learning (DRL) for optimizing the distribution of AI inference workloads in geo-distributed data centers to reduce carbon emissions and cloud operating (energy + data transfer) costs. The proposed technique integrates the principles of non-cooperative Game Theory into a DRL framework, enabling data centers to make intelligent decisions regarding workload allocation while considering the heterogeneity of hardware resources, the dynamic nature of electricity prices, inter-data center data transfer costs, and carbon footprints. We conducted extensive experiments comparing our game-theoretic DRL (GT-DRL) approach with current DRL-based and other optimization techniques. The results demonstrate that our strategy outperforms the state-of-the-art in reducing carbon emissions and minimizing cloud operating costs without compromising computational performance. This work has significant implications for achieving sustainability and cost-efficiency in data centers handling AI inference workloads across diverse geographic locations.
[ "['Ninad Hogade' 'Sudeep Pasricha']" ]
null
null
2404.01462
null
null
http://arxiv.org/pdf/2404.01462v1
2024-04-01T20:16:21Z
2024-04-01T20:16:21Z
OpenChemIE: An Information Extraction Toolkit For Chemistry Literature
Information extraction from chemistry literature is vital for constructing up-to-date reaction databases for data-driven chemistry. Complete extraction requires combining information across text, tables, and figures, whereas prior work has mainly investigated extracting reactions from single modalities. In this paper, we present OpenChemIE to address this complex challenge and enable the extraction of reaction data at the document level. OpenChemIE approaches the problem in two steps: extracting relevant information from individual modalities and then integrating the results to obtain a final list of reactions. For the first step, we employ specialized neural models that each address a specific task for chemistry information extraction, such as parsing molecules or reactions from text or figures. We then integrate the information from these modules using chemistry-informed algorithms, allowing for the extraction of fine-grained reaction data from reaction condition and substrate scope investigations. Our machine learning models attain state-of-the-art performance when evaluated individually, and we meticulously annotate a challenging dataset of reaction schemes with R-groups to evaluate our pipeline as a whole, achieving an F1 score of 69.5%. Additionally, our reaction extraction results attain an accuracy score of 64.3% when directly compared against the Reaxys chemical database. We provide OpenChemIE freely to the public as an open-source package, as well as through a web interface.
[ "['Vincent Fan' 'Yujie Qian' 'Alex Wang' 'Amber Wang' 'Connor W. Coley'\n 'Regina Barzilay']" ]
null
null
2404.01464
null
null
http://arxiv.org/pdf/2404.01464v1
2024-04-01T20:25:04Z
2024-04-01T20:25:04Z
Data-Efficient Unsupervised Interpolation Without Any Intermediate Frame for 4D Medical Images
4D medical images, which represent 3D images with temporal information, are crucial in clinical practice for capturing dynamic changes and monitoring long-term disease progression. However, acquiring 4D medical images poses challenges due to factors such as radiation exposure and imaging duration, necessitating a balance between achieving high temporal resolution and minimizing adverse effects. Given these circumstances, not only is data acquisition challenging, but increasing the frame rate for each dataset also proves difficult. To address this challenge, this paper proposes a simple yet effective Unsupervised Volumetric Interpolation framework, UVI-Net. This framework facilitates temporal interpolation without the need for any intermediate frames, distinguishing it from the majority of other existing unsupervised methods. Experiments on benchmark datasets demonstrate significant improvements across diverse evaluation metrics compared to unsupervised and supervised baselines. Remarkably, our approach achieves this superior performance even when trained with a dataset as small as a single instance, highlighting its exceptional robustness and efficiency in scenarios with sparse supervision. This positions UVI-Net as a compelling alternative for 4D medical imaging, particularly in settings where data availability is limited. The source code is available at https://github.com/jungeun122333/UVI-Net.
[ "['JungEun Kim' 'Hangyul Yoon' 'Geondo Park' 'Kyungsu Kim' 'Eunho Yang']" ]
null
null
2404.01466
null
null
http://arxiv.org/pdf/2404.01466v1
2024-04-01T20:33:29Z
2024-04-01T20:33:29Z
TS-CausalNN: Learning Temporal Causal Relations from Non-linear Non-stationary Time Series Data
The growing availability and importance of time series data across various domains, including environmental science, epidemiology, and economics, has led to an increasing need for time-series causal discovery methods that can identify the intricate relationships in the non-stationary, non-linear, and often noisy real world data. However, the majority of current time series causal discovery methods assume stationarity and linear relations in data, making them infeasible for the task. Further, the recent deep learning-based methods rely on the traditional causal structure learning approaches, making them computationally expensive. In this paper, we propose a Time-Series Causal Neural Network (TS-CausalNN) - a deep learning technique to discover contemporaneous and lagged causal relations simultaneously. Our proposed architecture comprises (i) convolutional blocks with parallel custom causal layers, (ii) an acyclicity constraint, and (iii) optimization techniques using the augmented Lagrangian approach. In addition to the simple parallel design, an advantage of the proposed model is that it naturally handles the non-stationarity and non-linearity of the data. Through experiments on multiple synthetic and real world datasets, we demonstrate the empirical proficiency of our proposed approach as compared to several state-of-the-art methods. The inferred graphs for the real world dataset are in good agreement with the domain understanding.
[ "['Omar Faruque' 'Sahara Ali' 'Xue Zheng' 'Jianwu Wang']" ]
null
null
2404.01475
null
null
http://arxiv.org/pdf/2404.01475v1
2024-04-01T20:56:25Z
2024-04-01T20:56:25Z
Are large language models superhuman chemists?
Large language models (LLMs) have gained widespread interest due to their ability to process human language and perform tasks on which they have not been explicitly trained. This is relevant for the chemical sciences, which face the problem of small and diverse datasets that are frequently in the form of text. LLMs have shown promise in addressing these issues and are increasingly being harnessed to predict chemical properties, optimize reactions, and even design and conduct experiments autonomously. However, we still have only a very limited systematic understanding of the chemical reasoning capabilities of LLMs, which would be required to improve models and mitigate potential harms. Here, we introduce "ChemBench," an automated framework designed to rigorously evaluate the chemical knowledge and reasoning abilities of state-of-the-art LLMs against the expertise of human chemists. We curated more than 7,000 question-answer pairs for a wide array of subfields of the chemical sciences, evaluated leading open and closed-source LLMs, and found that the best models outperformed the best human chemists in our study on average. The models, however, struggle with some chemical reasoning tasks that are easy for human experts and provide overconfident, misleading predictions, such as about chemicals' safety profiles. These findings underscore the dual reality that, although LLMs demonstrate remarkable proficiency in chemical tasks, further research is critical to enhancing their safety and utility in chemical sciences. Our findings also indicate a need for adaptations to chemistry curricula and highlight the importance of continuing to develop evaluation frameworks to improve safe and useful LLMs.
[ "['Adrian Mirza' 'Nawaf Alampara' 'Sreekanth Kunchapu' 'Benedict Emoekabu'\n 'Aswanth Krishnan' 'Mara Wilhelmi' 'Macjonathan Okereke'\n 'Juliane Eberhardt' 'Amir Mohammad Elahi' 'Maximilian Greiner'\n 'Caroline T. Holick' 'Tanya Gupta' 'Mehrdad Asgari' 'Christina Glaubitz'\n 'Lea C. Klepsch' 'Yannik Köster' 'Jakob Meyer' 'Santiago Miret'\n 'Tim Hoffmann' 'Fabian Alexander Kreth' 'Michael Ringleb'\n 'Nicole Roesner' 'Ulrich S. Schubert' 'Leanne M. Stafast' 'Dinga Wonanke'\n 'Michael Pieler' 'Philippe Schwaller' 'Kevin Maik Jablonka']" ]
null
null
2404.01476
null
null
http://arxiv.org/pdf/2404.01476v1
2024-04-01T20:58:24Z
2024-04-01T20:58:24Z
TraveLER: A Multi-LMM Agent Framework for Video Question-Answering
Recently, Large Multimodal Models (LMMs) have made significant progress in video question-answering using a frame-wise approach by leveraging large-scale, image-based pretraining in a zero-shot manner. While image-based methods for videos have shown impressive performance, a current limitation is that they often overlook how key timestamps are selected and cannot adjust when incorrect timestamps are identified. Moreover, they are unable to extract details relevant to the question, instead providing general descriptions of the frame. To overcome this, we design a multi-LMM agent framework that travels along the video, iteratively collecting relevant information from keyframes through interactive question-asking until there is sufficient information to answer the question. Specifically, we propose TraveLER, a model that can create a plan to "Traverse" through the video, ask questions about individual frames to "Locate" and store key information, and then "Evaluate" if there is enough information to answer the question. Finally, if there is not enough information, our method is able to "Replan" based on its collected knowledge. Through extensive experiments, we find that the proposed TraveLER approach improves performance on several video question-answering benchmarks, such as NExT-QA, STAR, and Perception Test, without the need to fine-tune on specific datasets.
[ "['Chuyi Shang' 'Amos You' 'Sanjay Subramanian' 'Trevor Darrell'\n 'Roei Herzig']" ]
null
null
2404.01486
null
null
http://arxiv.org/pdf/2404.01486v1
2024-04-01T21:11:43Z
2024-04-01T21:11:43Z
QuAD: Query-based Interpretable Neural Motion Planning for Autonomous Driving
A self-driving vehicle must understand its environment to determine the appropriate action. Traditional autonomy systems rely on object detection to find the agents in the scene. However, object detection assumes a discrete set of objects and loses information about uncertainty, so any errors compound when predicting the future behavior of those agents. Alternatively, dense occupancy grid maps have been utilized to understand free-space. However, predicting a grid for the entire scene is wasteful since only certain spatio-temporal regions are reachable and relevant to the self-driving vehicle. We present a unified, interpretable, and efficient autonomy framework that moves away from cascading modules that first perceive, then predict, and finally plan. Instead, we shift the paradigm to have the planner query occupancy at relevant spatio-temporal points, restricting the computation to those regions of interest. Exploiting this representation, we evaluate candidate trajectories around key factors such as collision avoidance, comfort, and progress for safety and interpretability. Our approach achieves better highway driving quality than the state-of-the-art in high-fidelity closed-loop simulations.
[ "['Sourav Biswas' 'Sergio Casas' 'Quinlan Sykora' 'Ben Agro' 'Abbas Sadat'\n 'Raquel Urtasun']" ]
null
null
2404.01487
null
null
http://arxiv.org/pdf/2404.01487v1
2024-04-01T21:12:44Z
2024-04-01T21:12:44Z
Explainable AI Integrated Feature Engineering for Wildfire Prediction
Wildfires present intricate challenges for prediction, necessitating the use of sophisticated machine learning techniques for effective modeling \cite{jain2020review}. In our research, we conducted a thorough assessment of various machine learning algorithms for both classification and regression tasks relevant to predicting wildfires. We found that for classifying different types or stages of wildfires, the XGBoost model outperformed others in terms of accuracy and robustness. Meanwhile, the Random Forest regression model showed superior results in predicting the extent of wildfire-affected areas, excelling in both prediction error and explained variance. Additionally, we developed a hybrid neural network model that integrates numerical data and image information for simultaneous classification and regression. To gain deeper insights into the decision-making processes of these models and identify key contributing features, we utilized eXplainable Artificial Intelligence (XAI) techniques, including TreeSHAP, LIME, Partial Dependence Plots (PDP), and Gradient-weighted Class Activation Mapping (Grad-CAM). These interpretability tools shed light on the significance and interplay of various features, highlighting the complex factors influencing wildfire predictions. Our study not only demonstrates the effectiveness of specific machine learning models in wildfire-related tasks but also underscores the critical role of model transparency and interpretability in environmental science applications.
[ "['Di Fan' 'Ayan Biswas' 'James Paul Ahrens']" ]
null
null
2404.01501
null
null
http://arxiv.org/pdf/2404.01501v1
2024-04-01T21:49:05Z
2024-04-01T21:49:05Z
MosquitoFusion: A Multiclass Dataset for Real-Time Detection of Mosquitoes, Swarms, and Breeding Sites Using Deep Learning
In this paper, we present an integrated approach to real-time mosquito detection using our multiclass dataset (MosquitoFusion) containing 1204 diverse images and leverage cutting-edge technologies, specifically computer vision, to automate the identification of Mosquitoes, Swarms, and Breeding Sites. The pre-trained YOLOv8 model, trained on this dataset, achieved a mean Average Precision (mAP@50) of 57.1%, with precision at 73.4% and recall at 50.5%. The integration of Geographic Information Systems (GIS) further enriches the depth of our analysis, providing valuable insights into spatial patterns. The dataset and code are available at https://github.com/faiyazabdullah/MosquitoFusion.
[ "['Md. Faiyaz Abdullah Sayeedi' 'Fahim Hafiz' 'Md Ashiqur Rahman']" ]
null
null
2404.01509
null
null
http://arxiv.org/pdf/2404.01509v1
2024-04-01T22:25:48Z
2024-04-01T22:25:48Z
Can Biases in ImageNet Models Explain Generalization?
The robust generalization of models to rare, in-distribution (ID) samples drawn from the long tail of the training distribution and to out-of-training-distribution (OOD) samples is one of the major challenges of current deep learning methods. For image classification, this manifests in the existence of adversarial attacks, the performance drops on distorted images, and a lack of generalization to concepts such as sketches. The current understanding of generalization in neural networks is very limited, but some biases that differentiate models from human vision have been identified and might be causing these limitations. Consequently, several attempts with varying success have been made to reduce these biases during training to improve generalization. We take a step back and sanity-check these attempts. Fixing the architecture to the well-established ResNet-50, we perform a large-scale study on 48 ImageNet models obtained via different training methods to understand how and if these biases - including shape bias, spectral biases, and critical bands - interact with generalization. Our extensive study results reveal that contrary to previous findings, these biases are insufficient to accurately predict the generalization of a model holistically. We provide access to all checkpoints and evaluation code at https://github.com/paulgavrikov/biases_vs_generalization
[ "['Paul Gavrikov' 'Janis Keuper']" ]
null
null
2404.01517
null
null
http://arxiv.org/pdf/2404.01517v1
2024-04-01T22:53:09Z
2024-04-01T22:53:09Z
Addressing Heterogeneity in Federated Load Forecasting with Personalization Layers
The advent of smart meters has enabled pervasive collection of energy consumption data for training short-term load forecasting models. In response to privacy concerns, federated learning (FL) has been proposed as a privacy-preserving approach for training, but the quality of trained models degrades as client data becomes heterogeneous. In this paper we propose the use of personalization layers for load forecasting in a general framework called PL-FL. We show that PL-FL outperforms FL and purely local training, while requiring lower communication bandwidth than FL. This is done through extensive simulations on three different datasets from the NREL ComStock repository.
[ "['Shourya Bose' 'Yu Zhang' 'Kibaek Kim']" ]
null
null
2404.01518
null
null
http://arxiv.org/pdf/2404.01518v3
2024-04-08T05:09:19Z
2024-04-01T22:53:47Z
Temporally Consistent Unbalanced Optimal Transport for Unsupervised Action Segmentation
We propose a novel approach to the action segmentation task for long, untrimmed videos, based on solving an optimal transport problem. By encoding a temporal consistency prior into a Gromov-Wasserstein problem, we are able to decode a temporally consistent segmentation from a noisy affinity/matching cost matrix between video frames and action classes. Unlike previous approaches, our method does not require knowing the action order for a video to attain temporal consistency. Furthermore, our resulting (fused) Gromov-Wasserstein problem can be efficiently solved on GPUs using a few iterations of projected mirror descent. We demonstrate the effectiveness of our method in an unsupervised learning setting, where our method is used to generate pseudo-labels for self-training. We evaluate our segmentation approach and unsupervised learning pipeline on the Breakfast, 50-Salads, YouTube Instructions and Desktop Assembly datasets, yielding state-of-the-art results for the unsupervised video action segmentation task.
[ "['Ming Xu' 'Stephen Gould']" ]
null
null
2404.01521
null
null
http://arxiv.org/pdf/2404.01521v1
2024-04-01T23:01:07Z
2024-04-01T23:01:07Z
Fair MP-BOOST: Fair and Interpretable Minipatch Boosting
Ensemble methods, particularly boosting, have established themselves as highly effective and widely embraced machine learning techniques for tabular data. In this paper, we aim to leverage the robust predictive power of traditional boosting methods while enhancing fairness and interpretability. To achieve this, we develop Fair MP-Boost, a stochastic boosting scheme that balances fairness and accuracy by adaptively learning features and observations during training. Specifically, Fair MP-Boost sequentially samples small subsets of observations and features, termed minipatches (MP), according to adaptively learned feature and observation sampling probabilities. We devise these probabilities by combining loss functions, or by combining feature importance scores to address accuracy and fairness simultaneously. Hence, Fair MP-Boost prioritizes important and fair features along with challenging instances, to select the most relevant minipatches for learning. The learned probability distributions also yield intrinsic interpretations of feature importance and important observations in Fair MP-Boost. Through empirical evaluation of simulated and benchmark datasets, we showcase the interpretability, accuracy, and fairness of Fair MP-Boost.
[ "['Camille Olivia Little' 'Genevera I. Allen']" ]
null
null
2404.01530
null
null
http://arxiv.org/abs/2404.01530v1
2024-04-01T23:34:28Z
2024-04-01T23:34:28Z
ML KPI Prediction in 5G and B5G Networks
Network operators are facing new challenges when meeting the needs of their customers. The challenges arise due to the rise of new services, such as HD video streaming, IoT, autonomous driving, etc., and the exponential growth of network traffic. In this context, 5G and B5G networks have been evolving to accommodate a wide range of applications and use cases. Additionally, this evolution brings new features, like the ability to create multiple end-to-end isolated virtual networks using network slicing. Nevertheless, to ensure the quality of service, operators must maintain and optimize their networks in accordance with the key performance indicators (KPIs) and the slice service-level agreements (SLAs). In this paper, we introduce a machine learning (ML) model used to estimate throughput in 5G and B5G networks with end-to-end (E2E) network slices. Then, we combine the predicted throughput with the current network state to derive an estimate of other network KPIs, which can be used to further improve service assurance. To assess the efficiency of our solution, a performance metric was proposed. Numerical evaluations demonstrate that our KPI prediction model outperforms those derived from other methods with the same or nearly the same computational time.
[ "['Nguyen Phuc Tran' 'Oscar Delgado' 'Brigitte Jaumard' 'Fadi Bishay']" ]
null
null
2404.01536
null
null
http://arxiv.org/pdf/2404.01536v1
2024-04-02T00:02:00Z
2024-04-02T00:02:00Z
Laying Anchors: Semantically Priming Numerals in Language Modeling
Off-the-shelf pre-trained language models have become the de facto standard in NLP pipelines for a multitude of downstream tasks. However, the inability of these models to properly encode numerals limits their performance on tasks requiring numeric comprehension. We introduce strategies to semantically prime numerals in any corpus by generating anchors governed by the distribution of numerals in said corpus, thereby enabling mathematically grounded representations of these numeral tokens. We establish the superiority of our proposed techniques through evaluation on a range of numeracy tasks for both in-domain (seen) and out-domain (unseen) numerals. Further, we expand our empirical evaluations to numerals ranging from 1 to 10 billion, a significantly broader range compared to previous studies of the same nature, and we demonstrate significant improvements in the mathematical grounding of our learned embeddings.
[ "['Mandar Sharma' 'Rutuja Murlidhar Taware' 'Pravesh Koirala'\n 'Nikhil Muralidhar' 'Naren Ramakrishnan']" ]
null
null
2404.01542
null
null
http://arxiv.org/pdf/2404.01542v1
2024-04-02T00:54:38Z
2024-04-02T00:54:38Z
Predicting the Performance of Foundation Models via Agreement-on-the-Line
Estimating the out-of-distribution performance in regimes where labels are scarce is critical to safely deploy foundation models. Recently, it was shown that ensembles of neural networks observe the phenomenon ``agreement-on-the-line'', which can be leveraged to reliably predict OOD performance without labels. However, in contrast to classical neural networks that are trained on in-distribution data from scratch for numerous epochs, foundation models undergo minimal finetuning from heavily pretrained weights, which may reduce the ensemble diversity needed to observe agreement-on-the-line. In our work, we demonstrate that when lightly finetuning multiple runs from a $\textit{single}$ foundation model, the choice of randomness during training (linear head initialization, data ordering, and data subsetting) can lead to drastically different levels of agreement-on-the-line in the resulting ensemble. Surprisingly, only random head initialization is able to reliably induce agreement-on-the-line in finetuned foundation models across vision and language benchmarks. Second, we demonstrate that ensembles of $\textit{multiple}$ foundation models pretrained on different datasets but finetuned on the same task can also show agreement-on-the-line. In total, by careful construction of a diverse ensemble, we can utilize agreement-on-the-line-based methods to predict the OOD performance of foundation models with high precision.
[ "['Aman Mehra' 'Rahul Saxena' 'Taeyoun Kim' 'Christina Baek' 'Zico Kolter'\n 'Aditi Raghunathan']" ]
null
null
2404.01551
null
null
http://arxiv.org/pdf/2404.01551v1
2024-04-02T01:30:41Z
2024-04-02T01:30:41Z
Multi-Agent Reinforcement Learning with Control-Theoretic Safety Guarantees for Dynamic Network Bridging
Addressing complex cooperative tasks in safety-critical environments poses significant challenges for Multi-Agent Systems, especially under conditions of partial observability. This work introduces a hybrid approach that integrates Multi-Agent Reinforcement Learning with control-theoretic methods to ensure safe and efficient distributed strategies. Our contributions include a novel setpoint update algorithm that dynamically adjusts agents' positions to preserve safety conditions without compromising the mission's objectives. Through experimental validation, we demonstrate significant advantages over conventional MARL strategies, achieving comparable task performance with zero safety violations. Our findings indicate that integrating safe control with learning approaches not only enhances safety compliance but also achieves good performance in mission objectives.
[ "['Raffaele Galliera' 'Konstantinos Mitsopoulos' 'Niranjan Suri'\n 'Raffaele Romagnoli']" ]
null
null
2404.01557
null
null
http://arxiv.org/pdf/2404.01557v1
2024-04-02T01:45:03Z
2024-04-02T01:45:03Z
Distributed Autonomous Swarm Formation for Dynamic Network Bridging
Effective operation and seamless cooperation of robotic systems are a fundamental component of next-generation technologies and applications. In contexts such as disaster response, swarm operations require coordinated behavior and mobility control to be handled in a distributed manner, with the quality of the agents' actions heavily relying on the communication between them and the underlying network. In this paper, we formulate the problem of dynamic network bridging in a novel Decentralized Partially Observable Markov Decision Process (Dec-POMDP), where a swarm of agents cooperates to form a link between two distant moving targets. Furthermore, we propose a Multi-Agent Reinforcement Learning (MARL) approach for the problem based on Graph Convolutional Reinforcement Learning (DGN) which naturally applies to the networked, distributed nature of the task. The proposed method is evaluated in a simulated environment and compared to a centralized heuristic baseline showing promising results. Moreover, a further step in the direction of sim-to-real transfer is presented, by additionally evaluating the proposed approach in a near Live Virtual Constructive (LVC) UAV framework.
[ "['Raffaele Galliera' 'Thies Möhlenhof' 'Alessandro Amato' 'Daniel Duran'\n 'Kristen Brent Venable' 'Niranjan Suri']" ]
null
null
2404.01569
null
null
http://arxiv.org/abs/2404.01569v1
2024-04-02T02:03:28Z
2024-04-02T02:03:28Z
Evaluating Large Language Models Using Contrast Sets: An Experimental Approach
In the domain of Natural Language Inference (NLI), especially in tasks involving the classification of multiple input texts, the Cross-Entropy Loss metric is widely employed as a standard for error measurement. However, this metric falls short in effectively evaluating a model's capacity to understand language entailments. In this study, we introduce an innovative technique for generating a contrast set for the Stanford Natural Language Inference (SNLI) dataset. Our strategy involves the automated substitution of verbs, adverbs, and adjectives with their synonyms to preserve the original meaning of sentences. This method aims to assess whether a model's performance is based on genuine language comprehension or simply on pattern recognition. We conducted our analysis using the ELECTRA-small model. The model achieved an accuracy of 89.9% on the conventional SNLI dataset but showed a reduced accuracy of 72.5% on our contrast set, indicating a substantial 17% decline. This outcome led us to conduct a detailed examination of the model's learning behaviors. Following this, we improved the model's resilience by fine-tuning it with a contrast-enhanced training dataset specifically designed for SNLI, which increased its accuracy to 85.5% on the contrast sets. Our findings highlight the importance of incorporating diverse linguistic expressions into datasets for NLI tasks. We hope that our research will encourage the creation of more inclusive datasets, thereby contributing to the development of NLI models that are both more sophisticated and effective.
[ "['Manish Sanwal']" ]
null
null
2404.01574
null
null
http://arxiv.org/pdf/2404.01574v2
2024-04-11T02:00:12Z
2024-04-02T02:08:29Z
Multi-granular Adversarial Attacks against Black-box Neural Ranking Models
Adversarial ranking attacks have gained increasing attention due to their success in probing vulnerabilities, and, hence, enhancing the robustness, of neural ranking models. Conventional attack methods employ perturbations at a single granularity, e.g., word or sentence level, to target documents. However, limiting perturbations to a single level of granularity may reduce the flexibility of adversarial examples, thereby diminishing the potential threat of the attack. Therefore, we focus on generating high-quality adversarial examples by incorporating multi-granular perturbations. Achieving this objective involves tackling a combinatorial explosion problem, which requires identifying an optimal combination of perturbations across all possible levels of granularity, positions, and textual pieces. To address this challenge, we transform the multi-granular adversarial attack into a sequential decision-making process, where perturbations in the next attack step build on the perturbed document in the current attack step. Since the attack process can only access the final state without direct intermediate signals, we use reinforcement learning to perform multi-granular attacks. During the reinforcement learning process, two agents work cooperatively to identify multi-granular vulnerabilities as attack targets and organize perturbation candidates into a final perturbation sequence. Experimental results show that our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
[ "['Yu-An Liu' 'Ruqing Zhang' 'Jiafeng Guo' 'Maarten de Rijke' 'Yixing Fan'\n 'Xueqi Cheng']" ]
null
null
2404.01578
null
null
http://arxiv.org/pdf/2404.01578v1
2024-04-02T02:13:00Z
2024-04-02T02:13:00Z
GLEMOS: Benchmark for Instantaneous Graph Learning Model Selection
The choice of a graph learning (GL) model (i.e., a GL algorithm and its hyperparameter settings) has a significant impact on the performance of downstream tasks. However, selecting the right GL model becomes increasingly difficult and time consuming as more and more GL models are developed. Accordingly, it is of great significance and practical value to equip users of GL with the ability to perform a near-instantaneous selection of an effective GL model without manual intervention. Despite the recent attempts to tackle this important problem, there has been no comprehensive benchmark environment to evaluate the performance of GL model selection methods. To bridge this gap, we present GLEMOS in this work, a comprehensive benchmark for instantaneous GL model selection that makes the following contributions. (i) GLEMOS provides extensive benchmark data for fundamental GL tasks, i.e., link prediction and node classification, including the performances of 366 models on 457 graphs on these tasks. (ii) GLEMOS designs multiple evaluation settings, and assesses how effectively representative model selection techniques perform in these different settings. (iii) GLEMOS is designed to be easily extended with new models, new graphs, and new performance records. (iv) Based on the experimental results, we discuss the limitations of existing approaches and highlight future research directions. To promote research on this significant problem, we make the benchmark data and code publicly available at https://github.com/facebookresearch/glemos.
[ "['Namyong Park' 'Ryan Rossi' 'Xing Wang' 'Antoine Simoulin'\n 'Nesreen Ahmed' 'Christos Faloutsos']" ]
null
null
2404.01588
null
null
http://arxiv.org/pdf/2404.01588v1
2024-04-02T02:30:27Z
2024-04-02T02:30:27Z
Hallucination Diversity-Aware Active Learning for Text Summarization
Large Language Models (LLMs) have shown propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported. Existing methods for alleviating hallucinations typically require costly human annotations to identify and correct hallucinations in LLM outputs. Moreover, most of these methods focus on a specific type of hallucination, e.g., entity or token errors, which limits their effectiveness in addressing various types of hallucinations exhibited in LLM outputs. To the best of our knowledge, in this paper we propose the first active learning framework to alleviate LLM hallucinations, reducing the amount of costly human annotation of hallucinations needed. By measuring fine-grained hallucinations from errors in semantic frame, discourse and content verifiability in text summarization, we propose HAllucination Diversity-Aware Sampling (HADAS) to select diverse hallucinations for annotations in active learning for LLM finetuning. Extensive experiments on three datasets and different backbone models demonstrate advantages of our method in effectively and efficiently mitigating LLM hallucinations.
[ "['Yu Xia' 'Xu Liu' 'Tong Yu' 'Sungchul Kim' 'Ryan A. Rossi' 'Anup Rao'\n 'Tung Mai' 'Shuai Li']" ]
null
null
2404.01595
null
null
http://arxiv.org/pdf/2404.01595v1
2024-04-02T02:36:21Z
2024-04-02T02:36:21Z
Propensity Score Alignment of Unpaired Multimodal Data
Multimodal representation learning techniques typically rely on paired samples to learn common representations, but paired samples are challenging to collect in fields such as biology, where measurement devices often destroy the samples. This paper presents an approach to address the challenge of aligning unpaired samples across disparate modalities in multimodal representation learning. We draw an analogy between potential outcomes in causal inference and potential views in multimodal observations, which allows us to use Rubin's framework to estimate a common space in which to match samples. Our approach assumes we collect samples that are experimentally perturbed by treatments, and uses this to estimate a propensity score from each modality, which encapsulates all shared information between a latent state and treatment and can be used to define a distance between samples. We experiment with two alignment techniques that leverage this distance -- shared nearest neighbours (SNN) and optimal transport (OT) matching -- and find that OT matching results in significant improvements over state-of-the-art alignment approaches in both a synthetic multi-modal setting and real-world data from the NeurIPS Multimodal Single-Cell Integration Challenge.
[ "['Johnny Xi' 'Jason Hartford']" ]
null
null
2404.01598
null
null
http://arxiv.org/pdf/2404.01598v1
2024-04-02T02:39:17Z
2024-04-02T02:39:17Z
Extremum-Seeking Action Selection for Accelerating Policy Optimization
Reinforcement learning for control over continuous spaces typically uses high-entropy stochastic policies, such as Gaussian distributions, for local exploration and estimating the policy gradient to optimize performance. Many robotic control problems deal with complex unstable dynamics, where applying actions that are off the feasible control manifolds can quickly lead to undesirable divergence. In such cases, most samples taken from the ambient action space generate low-value trajectories that hardly contribute to policy improvement, resulting in slow or failed learning. We propose to improve action selection in this model-free RL setting by introducing additional adaptive control steps based on Extremum-Seeking Control (ESC). On each action sampled from stochastic policies, we apply sinusoidal perturbations and query for estimated Q-values as the response signal. Based on ESC, we then dynamically improve the sampled actions to be closer to nearby optima before applying them to the environment. Our methods can be easily added to standard policy optimization to improve learning efficiency, which we demonstrate in various control learning environments.
[ "['Ya-Chien Chang' 'Sicun Gao']" ]
null
null
2404.01601
null
null
http://arxiv.org/pdf/2404.01601v1
2024-04-02T02:45:12Z
2024-04-02T02:45:12Z
What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks
We study the capabilities of the transformer architecture with varying depth. Specifically, we design a novel set of sequence learning tasks to systematically evaluate and comprehend how the depth of a transformer affects its ability to perform memorization, reasoning, generalization, and contextual generalization. We show that a transformer with only one attention layer can excel in memorization but falls short in other tasks. Then, we show that exhibiting reasoning and generalization ability requires the transformer to have at least two attention layers, while contextual generalization ability may necessitate three attention layers. Additionally, we identify a class of simple operations that a single attention layer can execute, and show that complex tasks can be approached as combinations of these simple operations and thus can be resolved by stacking multiple attention layers. This sheds light on studying more practical and complex tasks beyond our design. Numerical experiments corroborate our theoretical findings.
[ "['Xingwu Chen' 'Difan Zou']" ]
null
null
2404.01608
null
null
http://arxiv.org/pdf/2404.01608v1
2024-04-02T03:06:25Z
2024-04-02T03:06:25Z
FAIRM: Learning invariant representations for algorithmic fairness and domain generalization with minimax optimality
Machine learning methods often assume that the test data have the same distribution as the training data. However, this assumption may not hold due to multiple levels of heterogeneity in applications, raising issues in algorithmic fairness and domain generalization. In this work, we address the problem of fair and generalizable machine learning by invariant principles. We propose a training environment-based oracle, FAIRM, which has desirable fairness and domain generalization properties under a diversity-type condition. We then provide an empirical FAIRM with finite-sample theoretical guarantees under weak distributional assumptions. We then develop efficient algorithms to realize FAIRM in linear models and demonstrate the nonasymptotic performance with minimax optimality. We evaluate our method in numerical experiments with synthetic data and MNIST data and show that it outperforms its counterparts.
[ "['Sai Li' 'Linjun Zhang']" ]
null
null
2404.01611
null
null
http://arxiv.org/pdf/2404.01611v1
2024-04-02T03:18:28Z
2024-04-02T03:18:28Z
Audio Simulation for Sound Source Localization in Virtual Environment
Non-line-of-sight localization in signal-deprived environments is a challenging yet pertinent problem. Acoustic methods in such predominantly indoor scenarios encounter difficulty due to the reverberant nature of these environments. In this study, we aim to localize sound sources to specific locations within a virtual environment by leveraging physically grounded sound propagation simulations and machine learning methods. This approach attempts to overcome the issue of data insufficiency when localizing sound sources to their locations of occurrence, especially in post-event localization. We achieve an F1-score of 0.786 +/- 0.0136 using an audio transformer spectrogram approach.
[ "['Yi Di Yuan' 'Swee Liang Wong' 'Jonathan Pan']" ]
null
null
2404.01617
null
null
http://arxiv.org/pdf/2404.01617v1
2024-04-02T03:43:55Z
2024-04-02T03:43:55Z
LLM-ABR: Designing Adaptive Bitrate Algorithms via Large Language Models
We present LLM-ABR, the first system that utilizes the generative capabilities of large language models (LLMs) to autonomously design adaptive bitrate (ABR) algorithms tailored for diverse network characteristics. Operating within a reinforcement learning framework, LLM-ABR empowers LLMs to design key components such as states and neural network architectures. We evaluate LLM-ABR across diverse network settings, including broadband, satellite, 4G, and 5G. LLM-ABR consistently outperforms default ABR algorithms.
[ "['Zhiyuan He' 'Aashish Gottipati' 'Lili Qiu' 'Francis Y. Yan' 'Xufang Luo'\n 'Kenuo Xu' 'Yuqing Yang']" ]
null
null
2404.01628
null
null
http://arxiv.org/pdf/2404.01628v1
2024-04-02T04:29:01Z
2024-04-02T04:29:01Z
Learning Equi-angular Representations for Online Continual Learning
Online continual learning suffers from an underfitted solution due to insufficient training for prompt model update (e.g., single-epoch training). To address the challenge, we propose an efficient online continual learning method using the neural collapse phenomenon. In particular, we induce neural collapse to form a simplex equiangular tight frame (ETF) structure in the representation space, proposing preparatory data training and residual correction in the representation space so that the continually learned model can better fit the streamed data within a single epoch. With an extensive set of empirical validations using CIFAR-10/100, TinyImageNet, ImageNet-200, and ImageNet-1K, we show that our proposed method outperforms state-of-the-art methods by a noticeable margin in various online continual learning scenarios such as disjoint and Gaussian scheduled continuous (i.e., boundary-free) data setups.
[ "['Minhyuk Seo' 'Hyunseo Koh' 'Wonje Jeung' 'Minjae Lee' 'San Kim'\n 'Hankook Lee' 'Sungjun Cho' 'Sungik Choi' 'Hyunwoo Kim' 'Jonghyun Choi']" ]
null
null
2404.01632
null
null
http://arxiv.org/pdf/2404.01632v1
2024-04-02T04:33:03Z
2024-04-02T04:33:03Z
Enhancing Functional Safety in Automotive AMS Circuits through Unsupervised Machine Learning
Given the widespread use of safety-critical applications in the automotive field, it is crucial to ensure the Functional Safety (FuSa) of circuits and components within automotive systems. The Analog and Mixed-Signal (AMS) circuits prevalent in these systems are more vulnerable to faults induced by parametric perturbations, noise, environmental stress, and other factors, in comparison to their digital counterparts. However, their continuous signal characteristics present an opportunity for early anomaly detection, enabling the implementation of safety mechanisms to prevent system failure. To address this need, we propose a novel framework based on unsupervised machine learning for early anomaly detection in AMS circuits. The proposed approach involves injecting anomalies at various circuit locations and individual components to create a diverse and comprehensive anomaly dataset, followed by the extraction of features from the observed circuit signals. Subsequently, we employ clustering algorithms to facilitate anomaly detection. Finally, we propose a time series framework to enhance and expedite anomaly detection performance. Our approach encompasses a systematic analysis of anomaly abstraction at multiple levels pertaining to the automotive domain, from hardware- to block-level, where anomalies are injected to create diverse fault scenarios. By monitoring the system behavior under these anomalous conditions, we capture the propagation of anomalies and their effects at different abstraction levels, thereby potentially paving the way for the implementation of reliable safety mechanisms to ensure the FuSa of automotive SoCs. Our experimental findings indicate that our approach achieves 100% anomaly detection accuracy and reduces the associated detection latency by 5X, underscoring the effectiveness of our devised solution.
[ "['Ayush Arunachalam' 'Ian Kintz' 'Suvadeep Banerjee' 'Arnab Raha'\n 'Xiankun Jin' 'Fei Su' 'Viswanathan Pillai Prasanth' 'Rubin A. Parekhji'\n 'Suriyaprakash Natarajan' 'Kanad Basu']" ]
null
null
2404.01636
null
null
http://arxiv.org/pdf/2404.01636v1
2024-04-02T04:53:39Z
2024-04-02T04:53:39Z
Learning to Control Camera Exposure via Reinforcement Learning
Adjusting camera exposure in arbitrary lighting conditions is the first step to ensure the functionality of computer vision applications. Poorly adjusted camera exposure often leads to critical failure and performance degradation. Traditional camera exposure control methods require multiple convergence steps and time-consuming processes, making them unsuitable for dynamic lighting conditions. In this paper, we propose a new camera exposure control framework that rapidly controls camera exposure while performing real-time processing by exploiting deep reinforcement learning. The proposed framework consists of four contributions: 1) a simplified training ground to simulate the diverse and dynamic lighting changes of the real world, 2) flickering and image attribute-aware reward design, along with lightweight state design for real-time processing, 3) a static-to-dynamic lighting curriculum to gradually improve the agent's exposure-adjusting capability, and 4) domain randomization techniques to alleviate the limitation of the training ground and achieve seamless generalization in the wild. As a result, our proposed method rapidly reaches a desired exposure level within five steps with real-time processing (1 ms). Also, the acquired images are well-exposed and show superiority in various computer vision tasks, such as feature extraction and object detection.
[ "['Kyunghyun Lee' 'Ukcheol Shin' 'Byeong-Uk Lee']" ]
null
null
2404.01642
null
null
http://arxiv.org/pdf/2404.01642v1
2024-04-02T05:16:59Z
2024-04-02T05:16:59Z
ADVREPAIR: Provable Repair of Adversarial Attacks
Deep neural networks (DNNs) are increasingly deployed in safety-critical domains, but their vulnerability to adversarial attacks poses serious safety risks. Existing neuron-level methods that use limited data lack efficacy in repairing adversarial vulnerabilities, due to the inherent complexity of adversarial attack mechanisms, while adversarial training, leveraging a large number of adversarial samples to enhance robustness, lacks provability. In this paper, we propose ADVREPAIR, a novel approach for provable repair of adversarial attacks using limited data. By utilizing formal verification, ADVREPAIR constructs patch modules that, when integrated with the original network, deliver provable and specialized repairs within the robustness neighborhood. Additionally, our approach incorporates a heuristic mechanism for assigning patch modules, allowing this defense against adversarial attacks to generalize to other inputs. ADVREPAIR demonstrates superior efficiency, scalability, and repair success rates. Unlike existing DNN repair methods, our repair generalizes to unseen inputs, thereby improving the robustness of the neural network globally and marking a significant step forward in the generalization capability of ADVREPAIR.
[ "['Zhiming Chi' 'Jianan Ma' 'Pengfei Yang' 'Cheng-Chao Huang' 'Renjue Li'\n 'Xiaowei Huang' 'Lijun Zhang']" ]
null
null
2404.01643
null
null
http://arxiv.org/pdf/2404.01643v2
2024-04-20T07:41:32Z
2024-04-02T05:19:27Z
A Closer Look at Spatial-Slice Features Learning for COVID-19 Detection
Conventional Computed Tomography (CT) imaging recognition faces two significant challenges: (1) There is often considerable variability in the resolution and size of each CT scan, necessitating strict requirements for the input size and adaptability of models. (2) A CT scan contains a large number of out-of-distribution (OOD) slices. The crucial features may only be present in specific spatial regions and slices of the entire CT scan. How can we effectively locate them? To deal with this, we introduce an enhanced Spatial-Slice Feature Learning (SSFL++) framework specifically designed for CT scans. It aims to filter out OOD data within the whole CT scan, enabling us to select the crucial spatial slices for analysis while reducing redundancy by 70% in total. Meanwhile, we propose a Kernel-Density-based slice Sampling (KDS) method to improve stability during the training and inference stages, thereby speeding up convergence and boosting performance. As a result, the experiments demonstrate the promising performance of our model using a simple EfficientNet-2D (E2D) model, even with only 1% of the training data. The efficacy of our approach has been validated on the COVID-19-CT-DB datasets provided by the DEF-AI-MIA workshop, in conjunction with CVPR 2024. Our source code is available at https://github.com/ming053l/E2D
[ "['Chih-Chung Hsu' 'Chia-Ming Lee' 'Yang Fan Chiang' 'Yi-Shiuan Chou'\n 'Chih-Yu Jiang' 'Shen-Chieh Tai' 'Chi-Han Tsai']" ]
null
null
2404.01645
null
null
http://arxiv.org/pdf/2404.01645v1
2024-04-02T05:30:39Z
2024-04-02T05:30:39Z
ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models
The success of Transformer-based models has encouraged many researchers to learn CAD models using sequence-based approaches. However, learning CAD models is still a challenge, because they can be represented as complex shapes with long construction sequences. Furthermore, the same CAD model can be expressed using different CAD construction sequences. We propose a novel contrastive learning-based approach, named ContrastCAD, that effectively captures semantic information within the construction sequences of the CAD model. ContrastCAD generates augmented views using dropout techniques without altering the shape of the CAD model. We also propose a new CAD data augmentation method, called a Random Replace and Extrude (RRE) method, to enhance the learning performance of the model when training on an imbalanced CAD dataset. Experimental results show that the proposed RRE augmentation method significantly enhances the learning performance of Transformer-based autoencoders, even for complex CAD models with very long construction sequences. The proposed ContrastCAD model is shown to be robust to permutation changes of construction sequences and performs better representation learning by generating representation spaces where similar CAD models are more closely clustered. Our codes are available at https://github.com/cm8908/ContrastCAD.
[ "['Minseop Jung' 'Minseong Kim' 'Jibum Kim']" ]
null
null
2404.01646
null
null
http://arxiv.org/pdf/2404.01646v1
2024-04-02T05:30:54Z
2024-04-02T05:30:54Z
Transformer meets wcDTW to improve real-time battery bids: A new approach to scenario selection
Stochastic battery bidding in real-time energy markets is a nuanced process, with its efficacy depending on the accuracy of forecasts and the representative scenarios chosen for optimization. In this paper, we introduce a pioneering methodology that amalgamates Transformer-based forecasting with weighted constrained Dynamic Time Warping (wcDTW) to refine scenario selection. Our approach harnesses the predictive capabilities of Transformers to foresee Energy prices, while wcDTW ensures the selection of pertinent historical scenarios by maintaining the coherence between multiple uncertain products. Through extensive simulations in the PJM market for July 2023, our method exhibited a 10% increase in revenue compared to the conventional method, highlighting its potential to revolutionize battery bidding strategies in real-time markets.
[ "['Sujal Bhavsar' 'Vera Zaychik Moffitt' 'Justin Appleby']" ]
null
null
2404.01650
null
null
http://arxiv.org/pdf/2404.01650v2
2024-05-29T06:24:55Z
2024-04-02T05:34:33Z
Test-Time Model Adaptation with Only Forward Passes
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts. However, in real-world scenarios, models are usually deployed on resource-limited devices, e.g., FPGAs, and are often quantized and hard-coded with non-modifiable parameters for acceleration. In light of this, existing methods are often infeasible, since they heavily depend on computation-intensive backpropagation for model updating, which may not be supported. To address this, we propose a test-time Forward-Optimization Adaptation (FOA) method. In FOA, we seek to solely learn a newly added prompt (as the model's input) via a derivative-free covariance matrix adaptation evolution strategy. To make this strategy work stably under our online unsupervised setting, we devise a novel fitness function by measuring test-training statistic discrepancy and model prediction entropy. Moreover, we design an activation shifting scheme that directly tunes the model activations for shifted test samples, making them align with the source training domain, thereby further enhancing adaptation performance. Without using any backpropagation or altering model weights, FOA running on a quantized 8-bit ViT outperforms gradient-based TENT on a full-precision 32-bit ViT, while achieving an up to 24-fold memory reduction on ImageNet-C.
[ "['Shuaicheng Niu' 'Chunyan Miao' 'Guohao Chen' 'Pengcheng Wu'\n 'Peilin Zhao']" ]
null
null
2404.01657
null
null
http://arxiv.org/pdf/2404.01657v1
2024-04-02T05:59:43Z
2024-04-02T05:59:43Z
Release of Pre-Trained Models for the Japanese Language
AI democratization aims to create a world in which the average person can utilize AI techniques. To achieve this goal, numerous research institutes have attempted to make their results accessible to the public. In particular, large pre-trained models trained on large-scale data have shown unprecedented potential, and their release has had a significant impact. However, most of the released models specialize in the English language, and thus AI democratization in non-English-speaking communities is lagging significantly. To reduce this gap in AI access, we released Generative Pre-trained Transformer (GPT), Contrastive Language and Image Pre-training (CLIP), Stable Diffusion, and Hidden-unit Bidirectional Encoder Representations from Transformers (HuBERT) models pre-trained in Japanese. By providing these models, users can freely interface with AI that aligns with Japanese cultural values and preserves Japanese cultural identity, thus enhancing the democratization of AI. Additionally, experiments showed that pre-trained models specialized for Japanese can efficiently achieve high performance in Japanese tasks.
[ "['Kei Sawada' 'Tianyu Zhao' 'Makoto Shing' 'Kentaro Mitsui' 'Akio Kaga'\n 'Yukiya Hono' 'Toshiaki Wakatsuki' 'Koh Mitsuda']" ]
null
null
2404.01676
null
null
http://arxiv.org/pdf/2404.01676v1
2024-04-02T06:28:22Z
2024-04-02T06:28:22Z
Incentives in Private Collaborative Machine Learning
Collaborative machine learning involves training models on data from multiple parties but must incentivize their participation. Existing data valuation methods fairly value and reward each party based on shared data or model parameters but neglect the privacy risks involved. To address this, we introduce differential privacy (DP) as an incentive. Each party can select its required DP guarantee and perturb its sufficient statistic (SS) accordingly. The mediator values the perturbed SS by the Bayesian surprise it elicits about the model parameters. As our valuation function enforces a privacy-valuation trade-off, parties are deterred from selecting excessive DP guarantees that reduce the utility of the grand coalition's model. Finally, the mediator rewards each party with different posterior samples of the model parameters. Such rewards still satisfy existing incentives like fairness but additionally preserve DP and a high similarity to the grand coalition's posterior. We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
[ "['Rachael Hwee Ling Sim' 'Yehong Zhang' 'Trong Nghia Hoang' 'Xinyi Xu'\n 'Bryan Kian Hsiang Low' 'Patrick Jaillet']" ]
null
null
2404.01685
null
null
http://arxiv.org/pdf/2404.01685v2
2024-04-04T00:36:18Z
2024-04-02T06:42:14Z
A Methodology for Improving Accuracy of Embedded Spiking Neural Networks through Kernel Size Scaling
Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning-based applications due to their sparse spike-based operations. Currently, most SNN architectures need a significantly larger model size to achieve higher accuracy, which is not suitable for resource-constrained embedded applications. Therefore, developing SNNs that can achieve high accuracy with an acceptable memory footprint is highly desirable. Toward this, we propose a novel methodology that improves the accuracy of SNNs through kernel size scaling. Its key steps include investigating the impact of different kernel sizes on the accuracy, devising new sets of kernel sizes, generating SNN architectures based on the selected kernel sizes, and analyzing the accuracy-memory trade-offs for SNN model selection. The experimental results show that our methodology achieves higher accuracy than state-of-the-art (93.24% accuracy for CIFAR10 and 70.84% accuracy for CIFAR100) with less than 10M parameters and up to 3.45x speed-up of searching time, thereby making it suitable for embedded applications.
[ "['Rachmad Vidya Wicaksana Putra' 'Muhammad Shafique']" ]
null
null
2404.01693
null
null
http://arxiv.org/pdf/2404.01693v1
2024-04-02T06:53:45Z
2024-04-02T06:53:45Z
HeMeNet: Heterogeneous Multichannel Equivariant Network for Protein Multitask Learning
Understanding and leveraging the 3D structures of proteins is central to a variety of biological and drug discovery tasks. While deep learning has been applied successfully for structure-based protein function prediction tasks, current methods usually employ distinct training for each task. However, the dataset for each task is small, and such a single-task strategy hinders the models' performance and generalization ability. As some labeled 3D protein datasets are biologically related, combining multi-source datasets for larger-scale multi-task learning is one way to overcome this problem. In this paper, we propose a neural network model to address multiple tasks jointly upon the input of 3D protein structures. In particular, we first construct a standard structure-based multi-task benchmark called Protein-MT, consisting of 6 biologically relevant tasks, including affinity prediction and property prediction, integrated from 4 public datasets. Then, we develop a novel graph neural network for multi-task learning, dubbed Heterogeneous Multichannel Equivariant Network (HeMeNet), which is E(3) equivariant and able to capture heterogeneous relationships between different atoms. Besides, HeMeNet can achieve task-specific learning via the task-aware readout mechanism. Extensive evaluations on our benchmark verify the effectiveness of multi-task learning, and our model generally surpasses state-of-the-art models.
[ "['Rong Han' 'Wenbing Huang' 'Lingxiao Luo' 'Xinyan Han' 'Jiaming Shen'\n 'Zhiqiang Zhang' 'Jun Zhou' 'Ting Chen']" ]
null
null
2404.01695
null
null
http://arxiv.org/pdf/2404.01695v1
2024-04-02T06:56:21Z
2024-04-02T06:56:21Z
Selective Temporal Knowledge Graph Reasoning
Temporal Knowledge Graph (TKG), which characterizes temporally evolving facts in the form of (subject, relation, object, timestamp), has attracted much attention recently. TKG reasoning aims to predict future facts based on given historical ones. However, existing TKG reasoning models are unable to abstain from predictions about which they are uncertain, which will inevitably bring risks in real-world applications. Thus, in this paper, we propose an abstention mechanism for TKG reasoning, which helps the existing models make selective, instead of indiscriminate, predictions. Specifically, we develop a confidence estimator, called Confidence Estimator with History (CEHis), to enable the existing TKG reasoning models to first estimate their confidence in making predictions, and then abstain from those with low confidence. To do so, CEHis takes two kinds of information into consideration, namely, the certainty of the current prediction and the accuracy of historical predictions. Experiments with representative TKG reasoning models on two benchmark datasets demonstrate the effectiveness of the proposed CEHis.
[ "['Zhongni Hou' 'Xiaolong Jin' 'Zixuan Li' 'Long Bai' 'Jiafeng Guo'\n 'Xueqi Cheng']" ]
null
null
2404.01697
null
null
http://arxiv.org/pdf/2404.01697v2
2024-06-18T08:56:13Z
2024-04-02T06:58:41Z
Preventing Model Collapse in Gaussian Process Latent Variable Models
Gaussian process latent variable models (GPLVMs) are a versatile family of unsupervised learning models commonly used for dimensionality reduction. However, common challenges in modeling data with GPLVMs include inadequate kernel flexibility and improper selection of the projection noise, leading to a type of model collapse characterized by vague latent representations that do not reflect the underlying data structure. This paper addresses these issues by, first, theoretically examining the impact of projection variance on model collapse through the lens of a linear GPLVM. Second, we tackle model collapse due to inadequate kernel flexibility by integrating the spectral mixture (SM) kernel and a differentiable random Fourier feature (RFF) kernel approximation, which ensures computational scalability and efficiency through off-the-shelf automatic differentiation tools for learning the kernel hyperparameters, projection variance, and latent representations within the variational inference framework. The proposed GPLVM, named advisedRFLVM, is evaluated across diverse datasets and consistently outperforms various salient competing models, including state-of-the-art variational autoencoders (VAEs) and other GPLVM variants, in terms of informative latent representations and missing data imputation.
[ "['Ying Li' 'Zhidi Lin' 'Feng Yin' 'Michael Minyi Zhang']" ]
null
null
2404.01712
null
null
http://arxiv.org/pdf/2404.01712v3
2024-06-03T15:35:12Z
2024-04-02T07:54:18Z
Efficient and Generalizable Certified Unlearning: A Hessian-free Recollection Approach
Machine unlearning strives to uphold the data owners' right to be forgotten by enabling models to selectively forget specific data. Recent advances suggest precomputing and storing statistics extracted from second-order information and implementing unlearning through Newton-style updates. However, the theoretical analysis of these works often depends on restrictive assumptions of convexity and smoothness, and the aforementioned operations on the Hessian matrix are extremely costly. As a result, applying these works to high-dimensional models becomes challenging. In this paper, we propose an efficient Hessian-free certified unlearning method. We propose to maintain a statistical vector for each data point, computed through an affine stochastic recursion that approximates the difference between the retrained and learned models. Our analysis does not involve inverting the Hessian and thus can be extended to non-convex, non-smooth objectives. Under the same assumptions, we demonstrate that the proposed method advances beyond the state-of-the-art theoretical studies in terms of generalization, unlearning guarantee, deletion capacity, and computation/storage complexity, and we show that the unlearned model produced by our approach is close to or the same as the retrained model. Based on the strategy of recollecting statistics for the data to be forgotten, we develop an algorithm that achieves near-instantaneous unlearning, as it only requires a vector addition operation. Experiments demonstrate that the proposed scheme surpasses existing results by orders of magnitude in terms of time/storage costs, while also enhancing accuracy.
[ "['Xinbao Qiao' 'Meng Zhang' 'Ming Tang' 'Ermin Wei']" ]
null
null
2404.01714
null
null
http://arxiv.org/pdf/2404.01714v3
2024-05-11T13:55:06Z
2024-04-02T07:57:17Z
Conjugate-Gradient-like Based Adaptive Moment Estimation Optimization Algorithm for Deep Learning
Training deep neural networks is a challenging task. In order to speed up training and enhance the performance of deep neural networks, we rectify the vanilla conjugate gradient into a conjugate-gradient-like form and incorporate it into generic Adam, thus proposing a new optimization algorithm named CG-like-Adam for deep learning. Specifically, both the first-order and the second-order moment estimation of generic Adam are replaced by the conjugate-gradient-like direction. The convergence analysis handles the cases where the exponential moving average coefficient of the first-order moment estimation is constant and the first-order moment estimation is unbiased. Numerical experiments show the superiority of the proposed algorithm on the CIFAR-10/100 datasets.
[ "['Jiawu Tian' 'Liwei Xu' 'Xiaowei Zhang' 'Yongqi Li']" ]
null
null
2404.01716
null
null
http://arxiv.org/pdf/2404.01716v1
2024-04-02T08:01:05Z
2024-04-02T08:01:05Z
Effective internal language model training and fusion for factorized transducer model
The internal language model (ILM) of the neural transducer has been widely studied. In most prior work, it is mainly used for estimating the ILM score and is subsequently subtracted during inference to facilitate improved integration with external language models. Recently, various factorized transducer models have been proposed, which explicitly embrace a standalone internal language model for non-blank token prediction. However, even with the adoption of factorized transducer models, limited improvement has been observed compared to shallow fusion. In this paper, we propose a novel ILM training and decoding strategy for factorized transducer models, which effectively combines the blank, acoustic, and ILM scores. Our experiments show a 17% relative improvement over the standard decoding method when utilizing a well-trained ILM and the proposed decoding strategy on the LibriSpeech dataset. Furthermore, when compared to a strong RNN-T baseline enhanced with external LM fusion, the proposed model yields a 5.5% relative improvement on general sets and an 8.9% WER reduction for rare words. The proposed model can achieve superior performance without relying on external language models, rendering it highly efficient for production use-cases. To further improve the performance, we propose a novel and memory-efficient ILM-fusion-aware minimum word error rate (MWER) training method, which improves ILM integration significantly.
[ "['Jinxi Guo' 'Niko Moritz' 'Yingyi Ma' 'Frank Seide' 'Chunyang Wu'\n 'Jay Mahadeokar' 'Ozlem Kalinli' 'Christian Fuegen' 'Mike Seltzer']" ]
null
null
2404.01730
null
null
http://arxiv.org/pdf/2404.01730v1
2024-04-02T08:40:07Z
2024-04-02T08:40:07Z
Asymptotics of Language Model Alignment
Let $p$ denote a generative language model. Let $r$ denote a reward model that returns a scalar capturing the degree to which a draw from $p$ is preferred. The goal of language model alignment is to alter $p$ to a new distribution $\phi$ that results in a higher expected reward while keeping $\phi$ close to $p$. A popular alignment method is KL-constrained reinforcement learning (RL), which chooses a distribution $\phi_\Delta$ that maximizes $E_{\phi_\Delta}[r(y)]$ subject to a relative entropy constraint $\mathrm{KL}(\phi_\Delta \| p) \leq \Delta$. Another simple alignment method is best-of-$N$, where $N$ samples are drawn from $p$ and the one with the highest reward is selected. In this paper, we offer a closed-form characterization of the optimal KL-constrained RL solution. We demonstrate that any alignment method that achieves a comparable trade-off between KL divergence and reward must approximate the optimal KL-constrained RL solution in terms of relative entropy. To further analyze the properties of alignment methods, we introduce two simplifying assumptions: we let the language model be memoryless, and the reward model be linear. Although these assumptions may not reflect complex real-world scenarios, they enable a precise characterization of the asymptotic behavior of both the best-of-$N$ alignment and the KL-constrained RL method, in terms of information-theoretic quantities. We prove that the reward of the optimal KL-constrained RL solution satisfies a large deviation principle, and we fully characterize its rate function. We also show that the rate of growth of the scaled cumulants of the reward is characterized by a proper Renyi cross entropy. Finally, we show that best-of-$N$ is asymptotically equivalent to the KL-constrained RL solution by proving that their expected rewards are asymptotically equal, and concluding that the two distributions must be close in KL divergence.
[ "['Joy Qiping Yang' 'Salman Salamatian' 'Ziteng Sun'\n 'Ananda Theertha Suresh' 'Ahmad Beirami']" ]
null
null
2404.01746
null
null
http://arxiv.org/pdf/2404.01746v1
2024-04-02T09:04:06Z
2024-04-02T09:04:06Z
Towards Scalable & Efficient Interaction-Aware Planning in Autonomous Vehicles using Knowledge Distillation
Real-world driving involves intricate interactions among vehicles navigating through dense traffic scenarios. Recent research focuses on enhancing the interaction awareness of autonomous vehicles to leverage these interactions in decision-making. These interaction-aware planners rely on neural-network-based prediction models to capture inter-vehicle interactions, aiming to integrate these predictions with traditional control techniques such as Model Predictive Control. However, this integration of deep learning-based models with traditional control paradigms often results in computationally demanding optimization problems, relying on heuristic methods. This study introduces a principled and efficient method for combining deep learning with constrained optimization, employing knowledge distillation to train smaller and more efficient networks, thereby mitigating complexity. We demonstrate that these refined networks maintain the problem-solving efficacy of larger models while significantly accelerating optimization. Specifically, in the domain of interaction-aware trajectory planning for autonomous vehicles, we illustrate that training a smaller prediction network using knowledge distillation speeds up optimization without sacrificing accuracy.
[ "['Piyush Gupta' 'David Isele' 'Sangjae Bae']" ]
null
null
2404.01748
null
null
http://arxiv.org/pdf/2404.01748v1
2024-04-02T09:04:56Z
2024-04-02T09:04:56Z
Global Mapping of Exposure and Physical Vulnerability Dynamics in Least Developed Countries using Remote Sensing and Machine Learning
As the world marked the midterm of the Sendai Framework for Disaster Risk Reduction 2015-2030, many countries are still struggling to monitor their climate and disaster risk because of the expense of large-scale surveys of the distribution of exposure and physical vulnerability and, hence, are not on track in reducing risks amidst the intensifying effects of climate change. We present an ongoing effort in mapping this vital information using machine learning and time-series remote sensing from publicly available Sentinel-1 SAR GRD and Sentinel-2 Harmonized MSI. We introduce the development of "OpenSendaiBench", consisting of 47 countries, most of which are least developed countries (LDCs), trained ResNet-50 deep learning models, and demonstrated our approach on the region of Dhaka, Bangladesh, by mapping the distribution of its informal constructions. As a pioneering effort in auditing global disaster risk over time, this paper aims to advance the area of large-scale risk quantification in informing our collective long-term efforts in reducing climate and disaster risk.
[ "['Joshua Dimasaka' 'Christian Geiß' 'Emily So']" ]
null
null
2404.01769
null
null
http://arxiv.org/pdf/2404.01769v1
2024-04-02T09:31:51Z
2024-04-02T09:31:51Z
Unifying Qualitative and Quantitative Safety Verification of DNN-Controlled Systems
The rapid advance of deep reinforcement learning techniques enables the oversight of safety-critical systems through the utilization of Deep Neural Networks (DNNs). This underscores the pressing need to promptly establish certified safety guarantees for such DNN-controlled systems. Most existing verification approaches are qualitative, predominantly employing reachability analysis. However, qualitative verification proves inadequate for DNN-controlled systems, as their behaviors exhibit stochastic tendencies when operating in open and adversarial environments. In this paper, we propose a novel framework for unifying both qualitative and quantitative safety verification problems of DNN-controlled systems. This is achieved by formulating the verification tasks as the synthesis of valid neural barrier certificates (NBCs). Initially, the framework seeks to establish almost-sure safety guarantees through qualitative verification. In cases where qualitative verification fails, our quantitative verification method is invoked, yielding precise lower and upper bounds on probabilistic safety across both infinite and finite time horizons. To facilitate the synthesis of NBCs, we introduce their $k$-inductive variants. We also devise a simulation-guided approach for training NBCs, aiming to achieve tightness in computing precise certified lower and upper bounds. We prototype our approach into a tool called $\textsf{UniQQ}$ and showcase its efficacy on four classic DNN-controlled systems.
[ "['Dapeng Zhi' 'Peixin Wang' 'Si Liu' 'Luke Ong' 'Min Zhang']" ]
null
null
2404.01775
null
null
http://arxiv.org/pdf/2404.01775v1
2024-04-02T09:40:22Z
2024-04-02T09:40:22Z
A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?
The ability to detect unfamiliar or unexpected images is essential for safe deployment of computer vision systems. In the context of classification, the task of detecting images outside of a model's training domain is known as out-of-distribution (OOD) detection. While there has been a growing research interest in developing post-hoc OOD detection methods, there has been comparably little discussion around how these methods perform when the underlying classifier is not trained on a clean, carefully curated dataset. In this work, we take a closer look at 20 state-of-the-art OOD detection methods in the (more realistic) scenario where the labels used to train the underlying classifier are unreliable (e.g. crowd-sourced or web-scraped labels). Extensive experiments across different datasets, noise types & levels, architectures and checkpointing strategies provide insights into the effect of class label noise on OOD detection, and show that poor separation between incorrectly classified ID samples vs. OOD samples is an overlooked yet important limitation of existing methods. Code: https://github.com/glhr/ood-labelnoise
[ "['Galadrielle Humblot-Renaux' 'Sergio Escalera' 'Thomas B. Moeslund']" ]