categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2407.10104 | null | null | http://arxiv.org/pdf/2407.10104v1 | 2024-07-14T07:11:57Z | 2024-07-14T07:11:57Z | A Self-Supervised Learning Pipeline for Demographically Fair Facial
Attribute Classification | Published research highlights the presence of demographic bias in automated facial attribute classification. The proposed bias mitigation techniques are mostly based on supervised learning, which requires a large amount of labeled training data for generalizability and scalability. However, labeled data is limited, requires laborious annotation, poses privacy risks, and can perpetuate human bias. In contrast, self-supervised learning (SSL) capitalizes on freely available unlabeled data, rendering trained models more scalable and generalizable. However, these label-free SSL models may also introduce biases by sampling false negative pairs, especially at low-data regimes (<200K images) under low compute settings. Further, SSL-based models may suffer from performance degradation due to a lack of quality assurance of the unlabeled data sourced from the web. This paper proposes a fully self-supervised pipeline for demographically fair facial attribute classifiers. Leveraging completely unlabeled data pseudolabeled via pre-trained encoders, diverse data curation techniques, and meta-learning-based weighted contrastive learning, our method significantly outperforms existing SSL approaches proposed for downstream image classification tasks. Extensive evaluations on the FairFace and CelebA datasets demonstrate the efficacy of our pipeline in obtaining fair performance over existing baselines, thus setting a new benchmark for SSL in the fairness of facial attribute classification. | [
"['Sreeraj Ramachandran' 'Ajita Rattani']"
] |
null | null | 2407.10115 | null | null | http://arxiv.org/pdf/2407.10115v1 | 2024-07-14T08:10:20Z | 2024-07-14T08:10:20Z | A Bag of Tricks for Scaling CPU-based Deep FFMs to more than 300m
Predictions per Second | Field-aware Factorization Machines (FFMs) have emerged as a powerful model for click-through rate prediction, particularly excelling in capturing complex feature interactions. In this work, we present an in-depth analysis of our in-house, Rust-based Deep FFM implementation, and detail its deployment on a CPU-only, multi-data-center scale. We overview key optimizations devised for both training and inference, demonstrated by previously unpublished benchmark results in efficient model search and online training. Further, we detail an in-house weight quantization that resulted in more than an order of magnitude reduction in bandwidth footprint related to weight transfers across data-centres. We disclose the engine and associated techniques under an open-source license to contribute to the broader machine learning community. This paper showcases one of the first successful CPU-only deployments of Deep FFMs at such scale, marking a significant stride in practical, low-footprint click-through rate prediction methodologies. | [
"['Blaž Škrlj' 'Benjamin Ben-Shalom' 'Grega Gašperšič' 'Adi Schwartz'\n 'Ramzi Hoseisi' 'Naama Ziporin' 'Davorin Kopič' 'Andraž Tori']"
] |
null | null | 2407.10132 | null | null | http://arxiv.org/pdf/2407.10132v1 | 2024-07-14T09:32:20Z | 2024-07-14T09:32:20Z | Optimal Kernel Choice for Score Function-based Causal Discovery | Score-based methods have demonstrated their effectiveness in discovering causal relationships by scoring different causal structures based on their goodness of fit to the data. Recently, Huang et al. proposed a generalized score function that can handle general data distributions and causal relationships by modeling the relations in reproducing kernel Hilbert space (RKHS). The selection of an appropriate kernel within this score function is crucial for accurately characterizing causal relationships and ensuring precise causal discovery. However, the current method involves manual heuristic selection of kernel parameters, making the process tedious and less likely to ensure optimality. In this paper, we propose a kernel selection method within the generalized score function that automatically selects the optimal kernel that best fits the data. Specifically, we model the generative process of the variables involved in each step of the causal graph search procedure as a mixture of independent noise variables. Based on this model, we derive an automatic kernel selection method by maximizing the marginal likelihood of the variables involved in each search step. We conduct experiments on both synthetic data and real-world benchmarks, and the results demonstrate that our proposed method outperforms heuristic kernel selection methods. | [
"['Wenjie Wang' 'Biwei Huang' 'Feng Liu' 'Xinge You' 'Tongliang Liu'\n 'Kun Zhang' 'Mingming Gong']"
] |
null | null | 2407.10159 | null | null | http://arxiv.org/pdf/2407.10159v1 | 2024-07-14T10:59:34Z | 2024-07-14T10:59:34Z | RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D
LiDAR Segmentation | 3D point clouds play a pivotal role in outdoor scene perception, especially in the context of autonomous driving. Recent advancements in 3D LiDAR segmentation often focus intensely on the spatial positioning and distribution of points for accurate segmentation. However, these methods, while robust in variable conditions, encounter challenges due to sole reliance on coordinates and point intensity, leading to poor isometric invariance and suboptimal segmentation. To tackle this challenge, our work introduces Range-Aware Pointwise Distance Distribution (RAPiD) features and the associated RAPiD-Seg architecture. Our RAPiD features exhibit rigid transformation invariance and effectively adapt to variations in point density, with a design focus on capturing the localized geometry of neighboring structures. They utilize inherent LiDAR isotropic radiation and semantic categorization for enhanced local representation and computational efficiency, while incorporating a 4D distance metric that integrates geometric and surface material reflectivity for improved semantic segmentation. To effectively embed high-dimensional RAPiD features, we propose a double-nested autoencoder structure with a novel class-aware embedding objective to encode high-dimensional features into manageable voxel-wise embeddings. Additionally, we propose RAPiD-Seg which incorporates a channel-wise attention fusion and two effective RAPiD-Seg variants, further optimizing the embedding for enhanced performance and generalization. Our method outperforms contemporary LiDAR segmentation work in terms of mIoU on SemanticKITTI (76.1) and nuScenes (83.6) datasets. | [
"['Li Li' 'Hubert P. H. Shum' 'Toby P. Breckon']"
] |
null | null | 2407.10165 | null | null | http://arxiv.org/pdf/2407.10165v1 | 2024-07-14T11:20:50Z | 2024-07-14T11:20:50Z | The Hidden Influence of Latent Feature Magnitude When Learning with
Imbalanced Data | Machine learning (ML) models have difficulty generalizing when the number of training instances per class is imbalanced. The problem of generalization in the face of data imbalance has largely been attributed to the lack of training data for under-represented classes and to feature overlap. The typical remedy is to implement data augmentation for classes with fewer instances, to assign a higher cost to minority class prediction errors, or to undersample the prevalent class. However, we show that one of the central causes of impaired generalization when learning with imbalanced data is the inherent manner in which ML models perform inference. These models have difficulty generalizing due to their heavy reliance on the magnitude of encoded signals. During inference, the models predict classes based on a combination of encoded signal magnitudes that linearly sum to the largest scalar. We demonstrate that even with aggressive data augmentation, which generally improves minority class prediction accuracy, parametric ML models still associate a class label with a limited number of feature combinations that sum to a prediction, which can affect generalization. | [
"['Damien A. Dablain' 'Nitesh V. Chawla']"
] |
null | null | 2407.10188 | null | null | http://arxiv.org/pdf/2407.10188v1 | 2024-07-14T13:16:23Z | 2024-07-14T13:16:23Z | Unexpected Benefits of Self-Modeling in Neural Systems | Self-models have been a topic of great interest for decades in studies of human cognition and more recently in machine learning. Yet what benefits do self-models confer? Here we show that when artificial networks learn to predict their internal states as an auxiliary task, they change in a fundamental way. To better perform the self-model task, the network learns to make itself simpler, more regularized, more parameter-efficient, and therefore more amenable to being predictively modeled. To test the hypothesis of self-regularizing through self-modeling, we used a range of network architectures performing three classification tasks across two modalities. In all cases, adding self-modeling caused a significant reduction in network complexity. The reduction was observed in two ways. First, the distribution of weights was narrower when self-modeling was present. Second, a measure of network complexity, the real log canonical threshold (RLCT), was smaller when self-modeling was present. Not only were measures of complexity reduced, but the reduction became more pronounced as greater training weight was placed on the auxiliary task of self-modeling. These results strongly support the hypothesis that self-modeling is more than simply a network learning to predict itself. The learning has a restructuring effect, reducing complexity and increasing parameter efficiency. This self-regularization may help explain some of the benefits of self-models reported in recent machine learning literature, as well as the adaptive value of self-models to biological systems. In particular, these findings may shed light on the possible interaction between the ability to model oneself and the ability to be more easily modeled by others in a social or cooperative context. | [
"['Vickram N. Premakumar' 'Michael Vaiana' 'Florin Pop' 'Judd Rosenblatt'\n 'Diogo Schwerz de Lucena' 'Kirsten Ziman' 'Michael S. A. Graziano']"
] |
null | null | 2407.10194 | null | null | http://arxiv.org/pdf/2407.10194v1 | 2024-07-14T13:32:24Z | 2024-07-14T13:32:24Z | Curriculum Learning for Small Code Language Models | Code language models have emerged as useful tools for various programming tasks, yet they often struggle when it comes to complex ones. In this paper, we explore the potential of curriculum learning in enhancing the performance of these models. While prior research has suggested that curriculum learning does not necessarily help in improving the performance of language models, our results surprisingly show that this may not be the case for code language models. We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models on the task of code execution, while its effect on code completion is less significant. To explore the potential of curriculum learning, we train multiple GPT models with 1 million parameters each to predict the next token and evaluate them on code completion and execution tasks. Our contributions include proposing a novel code difficulty assessment metric by combining software code measures, investigating the effectiveness of Curriculum Learning for code language models, and introducing a Novel Curriculum Learning schedule that enhances the performance of small decoder-only language models in code execution tasks. The results of this paper open the door for more research on the use of curriculum learning for code language models. | [
"['Marwa Naïr' 'Kamel Yamani' 'Lynda Said Lhadj' 'Riyadh Baghdadi']"
] |
null | null | 2407.10196 | null | null | http://arxiv.org/pdf/2407.10196v1 | 2024-07-14T13:37:03Z | 2024-07-14T13:37:03Z | A3S: A General Active Clustering Method with Pairwise Constraints | Active clustering aims to boost the clustering performance by integrating human-annotated pairwise constraints through strategic querying. Conventional approaches with semi-supervised clustering schemes encounter high query costs when applied to large datasets with numerous classes. To address these limitations, we propose a novel Adaptive Active Aggregation and Splitting (A3S) framework, falling within the cluster-adjustment scheme in active clustering. A3S features strategic active clustering adjustment on the initial cluster result, which is obtained by an adaptive clustering algorithm. In particular, our cluster adjustment is inspired by the quantitative analysis of Normalized mutual information gain under the information theory framework and can provably improve the clustering quality. The proposed A3S framework significantly elevates the performance and scalability of active clustering. In extensive experiments across diverse real-world datasets, A3S achieves desired results with significantly fewer human queries compared with existing methods. | [
"['Xun Deng' 'Junlong Liu' 'Han Zhong' 'Fuli Feng' 'Chen Shen'\n 'Xiangnan He' 'Jieping Ye' 'Zheng Wang']"
] |
null | null | 2407.10204 | null | null | http://arxiv.org/pdf/2407.10204v1 | 2024-07-14T13:48:25Z | 2024-07-14T13:48:25Z | Improving Graph Out-of-distribution Generalization on Real-world Data | Existing methods for graph out-of-distribution (OOD) generalization primarily rely on empirical studies on synthetic datasets. Such approaches tend to overemphasize the causal relationships between invariant sub-graphs and labels, thereby neglecting the non-negligible role of environment in real-world scenarios. In contrast to previous studies that impose rigid independence assumptions on environments and invariant sub-graphs, this paper presents the theorems of environment-label dependency and mutable rationale invariance, where the former characterizes the usefulness of environments in determining graph labels while the latter refers to the mutable importance of graph rationales. Based on analytic investigations, a novel variational inference based method named ``Probability Dependency on Environments and Rationales for OOD Graphs on Real-world Data'' (DEROG) is introduced. To alleviate the adverse effect of unknown prior knowledge on environments and rationales, DEROG utilizes generalized Bayesian inference. Further, DEROG employs an EM-based algorithm for optimization. Finally, extensive experiments on real-world datasets under different distribution shifts are conducted to show the superiority of DEROG. Our code is publicly available at https://anonymous.4open.science/r/DEROG-536B. | [
"['Can Xu' 'Yao Cheng' 'Jianxiang Yu' 'Haosen Wang' 'Jingsong Lv'\n 'Xiang Li']"
] |
null | null | 2407.10207 | null | null | http://arxiv.org/pdf/2407.10207v1 | 2024-07-14T14:01:38Z | 2024-07-14T14:01:38Z | Learning to Steer Markovian Agents under Model Uncertainty | Designing incentives for an adapting population is a ubiquitous problem in a wide array of economic applications and beyond. In this work, we study how to design additional rewards to steer multi-agent systems towards desired policies *without* prior knowledge of the agents' underlying learning dynamics. We introduce a model-based non-episodic Reinforcement Learning (RL) formulation for our steering problem. Importantly, we focus on learning a *history-dependent* steering strategy to handle the inherent model uncertainty about the agents' learning dynamics. We introduce a novel objective function to encode the desiderata of achieving a good steering outcome with reasonable cost. Theoretically, we identify conditions for the existence of steering strategies to guide agents to the desired policies. Complementing our theoretical contributions, we provide empirical algorithms to approximately solve our objective, which effectively tackles the challenge in learning history-dependent strategies. We demonstrate the efficacy of our algorithms through empirical evaluations. | [
"['Jiawei Huang' 'Vinzenz Thoma' 'Zebang Shen' 'Heinrich H. Nax' 'Niao He']"
] |
null | null | 2407.10223 | null | null | http://arxiv.org/pdf/2407.10223v1 | 2024-07-14T14:26:17Z | 2024-07-14T14:26:17Z | Practical Unlearning for Large Language Models | While LLMs have demonstrated impressive performance across various domains and tasks, their security issues have become increasingly severe. Machine unlearning (MU) has emerged as a promising solution to address these issues by removing the influence of undesired data on the target model without compromising its utility in other aspects. MU typically assumes full access to the original training data to preserve utility, which is difficult to achieve in LLM unlearning. Existing LLM unlearning methods often assume access to data most affected by undesired data unlearning. However, this assumption underestimates the entanglement among various LLM capabilities and ignores data access limitations due to various issues. Moreover, these LLM unlearning methods do not sufficiently consider that unlearning requests in real-world scenarios are continuously emerging. To overcome these challenges and achieve practical LLM unlearning, we propose the O3 framework. The O3 framework includes an Out-Of-Distribution (OOD) detector to measure the similarity between input and unlearning data, and an Orthogonal low-rank adapter (LoRA) for continuously unlearning requested data. The OOD detector is trained with a novel contrastive entropy loss and utilizes a local-global layer-aggregated scoring mechanism. The orthogonal LoRA achieves parameter disentanglement among continual unlearning requests. During inference, our O3 framework can smartly decide whether and to what extent to load the unlearning LoRA based on the OOD detector's predictions. Notably, O3's effectiveness does not rely on any retained data. We conducted extensive experiments on O3 and state-of-the-art LLM unlearning methods across three tasks and seven datasets. The results indicate that O3 consistently achieves the best trade-off between unlearning effectiveness and utility preservation, especially when facing continuous unlearning requests. | [
"['Chongyang Gao' 'Lixu Wang' 'Chenkai Weng' 'Xiao Wang' 'Qi Zhu']"
] |
null | null | 2407.10230 | null | null | http://arxiv.org/pdf/2407.10230v1 | 2024-07-14T14:58:03Z | 2024-07-14T14:58:03Z | Weighted Aggregation of Conformity Scores for Classification | Conformal prediction is a powerful framework for constructing prediction sets with valid coverage guarantees in multi-class classification. However, existing methods often rely on a single score function, which can limit their efficiency and informativeness. We propose a novel approach that combines multiple score functions to improve the performance of conformal predictors by identifying optimal weights that minimize prediction set size. Our theoretical analysis establishes a connection between the weighted score functions and subgraph classes of functions studied in Vapnik-Chervonenkis theory, providing a rigorous mathematical basis for understanding the effectiveness of the proposed method. Experiments demonstrate that our approach consistently outperforms single-score conformal predictors while maintaining valid coverage, offering a principled and data-driven way to enhance the efficiency and practicality of conformal prediction in classification tasks. | [
"['Rui Luo' 'Zhixin Zhou']"
] |
null | null | 2407.10238 | null | null | http://arxiv.org/pdf/2407.10238v1 | 2024-07-14T15:11:13Z | 2024-07-14T15:11:13Z | Parameter Estimation for Generalized Low-Rank Matrix Sensing by Learning
on Riemannian Manifolds | We prove convergence guarantees for generalized low-rank matrix sensing -- i.e., matrix sensing where the observations may be passed through some nonlinear link function. We focus on local convergence of the optimal estimator, ignoring questions of optimization. In particular, assuming the minimizer of the empirical loss $\theta^0$ is in a constant-size ball around the true parameters $\theta^*$, we prove that $d(\theta^0,\theta^*)=\tilde{O}(\sqrt{dk^2/n})$. Our analysis relies on tools from Riemannian geometry to handle the rotational symmetry in the parameter space. | [
"['Osbert Bastani']"
] |
null | null | 2407.10239 | null | null | http://arxiv.org/pdf/2407.10239v1 | 2024-04-29T18:51:20Z | 2024-04-29T18:51:20Z | What is Reproducibility in Artificial Intelligence and Machine Learning
Research? | In the rapidly evolving fields of Artificial Intelligence (AI) and Machine Learning (ML), the reproducibility crisis underscores the urgent need for clear validation methodologies to maintain scientific integrity and encourage advancement. The crisis is compounded by the prevalent confusion over validation terminology. Responding to this challenge, we introduce a validation framework that clarifies the roles and definitions of key validation efforts: repeatability, dependent and independent reproducibility, and direct and conceptual replicability. This structured framework aims to provide AI/ML researchers with the necessary clarity on these essential concepts, facilitating the appropriate design, conduct, and interpretation of validation studies. By articulating the nuances and specific roles of each type of validation study, we hope to contribute to a more informed and methodical approach to addressing the challenges of reproducibility, thereby supporting the community's efforts to enhance the reliability and trustworthiness of its research findings. | [
"['Abhyuday Desai' 'Mohamed Abdelhamid' 'Nakul R. Padalkar']"
] |
null | null | 2407.10240 | null | null | http://arxiv.org/pdf/2407.10240v1 | 2024-07-14T15:15:00Z | 2024-07-14T15:15:00Z | xLSTMTime : Long-term Time Series Forecasting With xLSTM | In recent years, transformer-based models have gained prominence in multivariate long-term time series forecasting (LTSF), demonstrating significant advancements despite facing challenges such as high computational demands, difficulty in capturing temporal dynamics, and managing long-term dependencies. The emergence of LTSF-Linear, with its straightforward linear architecture, has notably outperformed transformer-based counterparts, prompting a reevaluation of the transformer's utility in time series forecasting. In response, this paper presents an adaptation of a recent architecture termed extended LSTM (xLSTM) for LTSF. xLSTM incorporates exponential gating and a revised memory structure with higher capacity that has good potential for LTSF. Our adapted architecture for LTSF, termed xLSTMTime, surpasses current approaches. We compare xLSTMTime's performance against various state-of-the-art models across multiple real-world datasets, demonstrating superior forecasting capabilities. Our findings suggest that refined recurrent architectures can offer competitive alternatives to transformer-based models in LTSF tasks, potentially redefining the landscape of time series forecasting. | [
"['Musleh Alharthi' 'Ausif Mahmood']"
] |
null | null | 2407.10247 | null | null | http://arxiv.org/pdf/2407.10247v1 | 2024-04-30T19:07:18Z | 2024-04-30T19:07:18Z | Strategic Integration of Artificial Intelligence in the C-Suite: The
Role of the Chief AI Officer | The integration of Artificial Intelligence (AI) into corporate strategy has become a pivotal focus for organizations aiming to maintain a competitive advantage in the digital age. As AI reshapes business operations and drives innovation, the need for specialized leadership to effectively manage these changes becomes increasingly apparent. In this paper, I explore the role of the Chief AI Officer (CAIO) within the C-suite, emphasizing the necessity of this position for successful AI strategy, integration, and governance. I analyze future scenarios based on current trends in three key areas: the AI Economy, AI Organization, and Competition in the Age of AI. These explorations lay the foundation for identifying the antecedents (environmental, structural, and strategic factors) that justify the inclusion of a CAIO in top management teams. This sets the stage for a comprehensive examination of the CAIO's role and the broader implications of AI leadership. This paper advances the discussion on AI leadership by providing a rationale for the strategic integration of AI at the executive level and examining the role of the Chief AI Officer within organizations. | [
"['Marc Schmitt']"
] |
null | null | 2407.10251 | null | null | http://arxiv.org/pdf/2407.10251v1 | 2024-07-14T15:35:39Z | 2024-07-14T15:35:39Z | Deep Learning Algorithms for Early Diagnosis of Acute Lymphoblastic
Leukemia | Acute lymphoblastic leukemia (ALL) is a form of blood cancer that affects the white blood cells. ALL constitutes approximately 25% of pediatric cancers. Early diagnosis and treatment of ALL are crucial for improving patient outcomes. The task of identifying immature leukemic blasts from normal cells under the microscope can prove challenging, since the images of a healthy and cancerous cell appear similar morphologically. In this study, we propose a binary image classification model to assist in the diagnostic process of ALL. Our model takes as input microscopic images of blood samples and outputs a binary prediction of whether the sample is normal or cancerous. Our dataset consists of 10661 images out of 118 subjects. Deep learning techniques on convolutional neural network architectures were used to achieve accurate classification results. Our proposed method achieved 94.3% accuracy and could be used as an assisting tool for hematologists trying to predict the likelihood of a patient developing ALL. | [
"['Dimitris Papaioannou' 'Ioannis Christou' 'Nikos Anagnou'\n 'Aristotelis Chatziioannou']"
] |
null | null | 2407.10256 | null | null | http://arxiv.org/pdf/2407.10256v1 | 2024-05-03T17:13:26Z | 2024-05-03T17:13:26Z | Towards An Online Incremental Approach to Predict Students Performance | Analytical models developed in offline settings with pre-prepared data are typically used to predict students' performance. However, when data are available over time, this learning method is not suitable anymore. Online learning is increasingly used to update the online models from stream data. A rehearsal technique is typically used, which entails re-training the model on a small training set that is updated each time new data is received. The main challenge in this regard is the construction of the training set with appropriate data samples to maintain good model performance. Typically, a random selection of samples is made, which can deteriorate the model's performance. In this paper, we propose a memory-based online incremental learning approach for updating an online classifier that predicts student performance using stream data. The approach is based on the use of the genetic algorithm heuristic while respecting the memory space constraints as well as the balance of class labels. In contrast to random selection, our approach improves the stability of the analytical model by promoting diversity when creating the training set. As a proof of concept, we applied it to the open dataset OULAD. Our approach achieves a notable improvement in model accuracy, with an enhancement of nearly 10% compared to the current state-of-the-art, while maintaining a relatively low standard deviation in accuracy, ranging from 1% to 2.1%. | [
"['Chahrazed Labba' 'Anne Boyer']"
] |
null | null | 2407.10259 | null | null | http://arxiv.org/pdf/2407.10259v1 | 2024-07-14T15:52:19Z | 2024-07-14T15:52:19Z | Towards detailed and interpretable hybrid modeling of continental-scale
bird migration | Hybrid modeling aims to augment traditional theory-driven models with machine learning components that learn unknown parameters, sub-models or correction terms from data. In this work, we build on FluxRGNN, a recently developed hybrid model of continental-scale bird migration, which combines a movement model inspired by fluid dynamics with recurrent neural networks that capture the complex decision-making processes of birds. While FluxRGNN has been shown to successfully predict key migration patterns, its spatial resolution is constrained by the typically sparse observations obtained from weather radars. Additionally, its trainable components lack explicit incentives to adequately predict take-off and landing events. Both aspects limit our ability to interpret model results ecologically. To address this, we propose two major modifications that allow for more detailed predictions on any desired tessellation while providing control over the interpretability of model components. In experiments on the U.S. weather radar network, the enhanced model effectively leverages the underlying movement model, resulting in strong extrapolation capabilities to unobserved locations. | [
"['Fiona Lippert' 'Bart Kranstauber' 'Patrick Forré' 'E. Emiel van Loon']"
] |
null | null | 2407.10264 | null | null | http://arxiv.org/pdf/2407.10264v1 | 2024-07-14T16:12:57Z | 2024-07-14T16:12:57Z | What Makes and Breaks Safety Fine-tuning? Mechanistic Study | Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment. To better understand the underlying factors that make models safe via safety fine-tuning, we design a synthetic data generation framework that captures salient aspects of an unsafe input by modeling the interaction between the task the model is asked to perform (e.g., ``design'') versus the specific concepts the task is asked to be performed upon (e.g., a ``cycle'' vs. a ``bomb''). Using this, we investigate three well-known safety fine-tuning methods -- supervised safety fine-tuning, direct preference optimization, and unlearning -- and provide significant evidence demonstrating that these methods minimally transform MLP weights to specifically align unsafe inputs into its weights' null space. This yields a clustering of inputs based on whether the model deems them safe or not. Correspondingly, when an adversarial input (e.g., a jailbreak) is provided, its activations are closer to safer samples, leading to the model processing such an input as if it were safe. We validate our findings, wherever possible, on real-world models -- specifically, Llama-2 7B and Llama-3 8B. | [
"['Samyak Jain' 'Ekdeep Singh Lubana' 'Kemal Oksuz' 'Tom Joy'\n 'Philip H. S. Torr' 'Amartya Sanyal' 'Puneet K. Dokania']"
] |
null | null | 2407.10266 | null | null | http://arxiv.org/pdf/2407.10266v1 | 2024-07-14T16:20:42Z | 2024-07-14T16:20:42Z | psifx -- Psychological and Social Interactions Feature Extraction
Package | psifx is a plug-and-play multi-modal feature extraction toolkit, aiming to facilitate and democratize the use of state-of-the-art machine learning techniques for human sciences research. It is motivated by a need (a) to automate and standardize data annotation processes, otherwise involving expensive, lengthy, and inconsistent human labor, such as the transcription or coding of behavior changes from audio and video sources; (b) to develop and distribute open-source community-driven psychology research software; and (c) to enable large-scale access and ease of use to non-expert users. The framework contains an array of tools for tasks, such as speaker diarization, closed-caption transcription and translation from audio, as well as body, hand, and facial pose estimation and gaze tracking from video. The package has been designed with a modular and task-oriented approach, enabling the community to add or update new tools easily. We strongly hope that this package will provide psychologists with a simple and practical solution for efficiently extracting a range of audio, linguistic, and visual features from audio and video, thereby creating new opportunities for in-depth study of real-time behavioral phenomena. | [
"['Guillaume Rochette' 'Matthew J. Vowels']"
] |
null | null | 2407.10274 | null | null | http://arxiv.org/pdf/2407.10274v1 | 2024-07-14T17:15:47Z | 2024-07-14T17:15:47Z | Enhancing Weakly-Supervised Histopathology Image Segmentation with
Knowledge Distillation on MIL-Based Pseudo-Labels | Segmenting tumors in histological images is vital for cancer diagnosis. While fully supervised models excel with pixel-level annotations, creating such annotations is labor-intensive and costly. Accurate histopathology image segmentation under weakly-supervised conditions with coarse-grained image labels is still a challenging problem. Although multiple instance learning (MIL) has shown promise in segmentation tasks, surprisingly, no previous pseudo-supervision methods have used MIL-based outputs as pseudo-masks for training. We suspect this stems from concerns over noise in MIL results affecting pseudo supervision quality. To explore the potential of leveraging MIL-based segmentation for pseudo supervision, we propose a novel distillation framework for histopathology image segmentation. This framework introduces an iterative fusion-knowledge distillation strategy, enabling the student model to learn directly from the teacher's comprehensive outcomes. Through dynamic role reversal between the fixed teacher and learnable student models and the incorporation of weighted cross-entropy loss for model optimization, our approach prevents performance deterioration and noise amplification during knowledge distillation. Experimental results on public histopathology datasets, Camelyon16 and Digestpath2019, demonstrate that our approach not only complements various MIL-based segmentation methods but also significantly enhances their performance. Additionally, our method achieves new SOTA in the field. | [
"['Yinsheng He' 'Xingyu Li' 'Roger J. Zemp']"
] |
null | null | 2407.10277 | null | null | http://arxiv.org/pdf/2407.10277v1 | 2024-07-14T17:21:19Z | 2024-07-14T17:21:19Z | Disrupting Diffusion-based Inpainters with Semantic Digression | The fabrication of visual misinformation on the web and social media has increased exponentially with the advent of foundational text-to-image diffusion models. Namely, Stable Diffusion inpainters allow the synthesis of maliciously inpainted images of personal and private figures, and copyrighted contents, also known as deepfakes. To combat such generations, a disruption framework, namely Photoguard, has been proposed, where it adds adversarial noise to the context image to disrupt their inpainting synthesis. While their framework suggested a diffusion-friendly approach, the disruption is not sufficiently strong and it requires a significant amount of GPU and time to immunize the context image. In our work, we re-examine both the minimal and favorable conditions for a successful inpainting disruption, proposing DDD, a "Digression guided Diffusion Disruption" framework. First, we identify the most adversarially vulnerable diffusion timestep range with respect to the hidden space. Within this scope of noised manifold, we pose the problem as a semantic digression optimization. We maximize the distance between the inpainting instance's hidden states and a semantic-aware hidden state centroid, calibrated both by Monte Carlo sampling of hidden states and a discretely projected optimization in the token space. Effectively, our approach achieves stronger disruption and a higher success rate than Photoguard while lowering the GPU memory requirement, and speeding the optimization up to three times faster. | [
"['Geonho Son' 'Juhun Lee' 'Simon S. Woo']"
] |
null | null | 2407.10283 | null | null | http://arxiv.org/pdf/2407.10283v1 | 2024-07-14T17:56:11Z | 2024-07-14T17:56:11Z | Numbers Matter! Bringing Quantity-awareness to Retrieval Systems | Quantitative information plays a crucial role in understanding and interpreting the content of documents. Many user queries contain quantities and cannot be resolved without understanding their semantics, e.g., ``car that costs less than $10k''. Yet, modern search engines apply the same ranking mechanisms for both words and quantities, overlooking magnitude and unit information. In this paper, we introduce two quantity-aware ranking techniques designed to rank both the quantity and textual content either jointly or independently. These techniques incorporate quantity information in available retrieval systems and can address queries with numerical conditions equal, greater than, and less than. To evaluate the effectiveness of our proposed models, we introduce two novel quantity-aware benchmark datasets in the domains of finance and medicine and compare our method against various lexical and neural models. The code and data are available under https://github.com/satya77/QuantityAwareRankers. | [
"['Satya Almasian' 'Milena Bruseva' 'Michael Gertz']"
] |
null | null | 2407.10309 | null | null | http://arxiv.org/pdf/2407.10309v1 | 2024-07-14T19:58:01Z | 2024-07-14T19:58:01Z | Augmented prediction of a true class for Positive Unlabeled data under
selection bias | We introduce a new observational setting for Positive Unlabeled (PU) data where the observations at prediction time are also labeled. This occurs commonly in practice -- we argue that the additional information is important for prediction, and call this task "augmented PU prediction". We allow for labeling to be feature dependent. In such a scenario, the Bayes classifier and its risk are established and compared with the risk of a classifier which, for unlabeled data, is based only on predictors. We introduce several variants of the empirical Bayes rule in this scenario and investigate their performance. We emphasise the dangers (and ease) of applying the classical classification rule in the augmented PU scenario -- with no preexisting studies, an unaware researcher is prone to skewing the obtained predictions. We conclude that the variant based on a recently proposed variational autoencoder designed for the PU scenario works on par with or better than the other considered variants and yields an advantage over feature-only based methods in terms of accuracy for unlabeled samples. | [
"['Jan Mielniczuk' 'Adam Wawrzeńczyk']"
] |
null | null | 2407.10315 | null | null | http://arxiv.org/pdf/2407.10315v1 | 2024-07-14T20:22:36Z | 2024-07-14T20:22:36Z | Order parameters and phase transitions of continual learning in deep
neural networks | Continual learning (CL) enables animals to learn new tasks without erasing prior knowledge. CL in artificial neural networks (NNs) is challenging due to catastrophic forgetting, where new learning degrades performance on older tasks. While various techniques exist to mitigate forgetting, theoretical insights into when and why CL fails in NNs are lacking. Here, we present a statistical-mechanics theory of CL in deep, wide NNs, which characterizes the network's input-output mapping as it learns a sequence of tasks. It gives rise to order parameters (OPs) that capture how task relations and network architecture influence forgetting and knowledge transfer, as verified by numerical evaluations. We found that the input and rule similarity between tasks have different effects on CL performance. In addition, the theory predicts that increasing the network depth can effectively reduce overlap between tasks, thereby lowering forgetting. For networks with task-specific readouts, the theory identifies a phase transition where CL performance shifts dramatically as tasks become less similar, as measured by the OPs. Sufficiently low similarity leads to catastrophic anterograde interference, where the network retains old tasks perfectly but completely fails to generalize new learning. Our results delineate important factors affecting CL performance and suggest strategies for mitigating forgetting. | [
"['Haozhe Shan' 'Qianyi Li' 'Haim Sompolinsky']"
] |
null | null | 2407.10327 | null | null | http://arxiv.org/pdf/2407.10327v1 | 2024-07-14T20:50:40Z | 2024-07-14T20:50:40Z | Learning Unlabeled Clients Divergence via Anchor Model Aggregation for
Federated Semi-supervised Learning | Federated semi-supervised learning (FedSemi) refers to scenarios where there may be clients with fully labeled data, clients with partially labeled, and even fully unlabeled clients while preserving data privacy. However, challenges arise from client drift due to undefined heterogeneous class distributions and erroneous pseudo-labels. Existing FedSemi methods typically fail to aggregate models from unlabeled clients due to their inherent unreliability, thus overlooking unique information from their heterogeneous data distribution, leading to sub-optimal results. In this paper, we enable unlabeled client aggregation through SemiAnAgg, a novel Semi-supervised Anchor-Based federated Aggregation. SemiAnAgg learns unlabeled client contributions via an anchor model, effectively harnessing their informative value. Our key idea is that by feeding local client data to the same global model and the same consistently initialized anchor model (i.e., random model), we can measure the importance of each unlabeled client accordingly. Extensive experiments demonstrate that SemiAnAgg achieves new state-of-the-art results on four widely used FedSemi benchmarks, leading to substantial performance improvements: a 9% increase in accuracy on CIFAR-100 and a 7.6% improvement in recall on the medical dataset ISIC-18, compared with prior state-of-the-art. Code is available at: https://github.com/xmed-lab/SemiAnAgg. | [
"['Marawan Elbatel' 'Hualiang Wang' 'Jixiang Chen' 'Hao Wang' 'Xiaomeng Li']"
] |
null | null | 2407.10331 | null | null | http://arxiv.org/pdf/2407.10331v1 | 2024-07-14T21:02:55Z | 2024-07-14T21:02:55Z | 3D Foundation Models Enable Simultaneous Geometry and Pose Estimation of
Grasped Objects | Humans have the remarkable ability to use held objects as tools to interact with their environment. For this to occur, humans internally estimate how hand movements affect the object's movement. We wish to endow robots with this capability. We contribute methodology to jointly estimate the geometry and pose of objects grasped by a robot, from RGB images captured by an external camera. Notably, our method transforms the estimated geometry into the robot's coordinate frame, while not requiring the extrinsic parameters of the external camera to be calibrated. Our approach leverages 3D foundation models, large models pre-trained on huge datasets for 3D vision tasks, to produce initial estimates of the in-hand object. These initial estimations do not have physically correct scales and are in the camera's frame. Then, we formulate, and efficiently solve, a coordinate-alignment problem to recover accurate scales, along with a transformation of the objects to the coordinate frame of the robot. Forward kinematics mappings can subsequently be defined from the manipulator's joint angles to specified points on the object. These mappings enable the estimation of points on the held object at arbitrary configurations, enabling robot motion to be designed with respect to coordinates on the grasped objects. We empirically evaluate our approach on a robot manipulator holding a diverse set of real-world objects. | [
"['Weiming Zhi' 'Haozhan Tang' 'Tianyi Zhang' 'Matthew Johnson-Roberson']"
] |
null | null | 2407.10332 | null | null | http://arxiv.org/pdf/2407.10332v1 | 2024-07-14T21:11:44Z | 2024-07-14T21:11:44Z | Ontology-driven Reinforcement Learning for Personalized Student Support | In the search for more effective education, there is a widespread effort to develop better approaches to personalize student education. Unassisted, educators often do not have time or resources to personally support every student in a given classroom. Motivated by this issue, and by recent advancements in artificial intelligence, this paper presents a general-purpose framework for personalized student support, applicable to any virtual educational system such as a serious game or an intelligent tutoring system. To fit any educational situation, we apply ontologies for their semantic organization, combining them with data collection considerations and multi-agent reinforcement learning. The result is a modular system that can be adapted to any virtual educational software to provide useful personalized assistance to students. | [
"['Ryan Hare' 'Ying Tang']"
] |
null | null | 2407.10333 | null | null | http://arxiv.org/pdf/2407.10333v1 | 2024-07-14T21:20:37Z | 2024-07-14T21:20:37Z | An Interpretable Neural Network for Vegetation Phenotyping with
Visualization of Trait-Based Spectral Features | Plant phenotyping is the assessment of a plant's traits and plant identification is the process of determining the category such as genus and species. In this paper we present an interpretable neural network trained on the UPWINS spectral library which contains spectra with rich metadata across variation in species, health, growth stage, annual variation, and environmental conditions for 13 selected indicator species and natural common background species. We show that the neurons in the network learn spectral indicators for chemical and physiological traits through visualization of the network weights, and we show how these traits are combined by the network for species identification with an accuracy around 90% on a test set. While neural networks are often perceived as `black box' classifiers, our work shows that they can be in fact more explainable and informative than other machine learning methods. We show that the neurons learn fundamental traits about the vegetation, for example the composition of different types of chlorophyll present which indicates species as well as response to illumination conditions. There is clear excess training capacity in our network, and we expect that as the UPWINS spectral library continues to grow the approach in this paper will provide further foundational insights in understanding plant traits. This provides a methodology for designing and interpreting neural networks on spectral data in general, and provides a framework for using neural networks with hyperspectral imagery for understanding vegetation that is extendable to other domains. | [
"['William Basener' 'Abigail Basener' 'Michael Luegering']"
] |
null | null | 2407.10336 | null | null | http://arxiv.org/pdf/2407.10336v1 | 2024-07-14T21:29:28Z | 2024-07-14T21:29:28Z | Thyroidiomics: An Automated Pipeline for Segmentation and Classification
of Thyroid Pathologies from Scintigraphy Images | The objective of this study was to develop an automated pipeline that enhances thyroid disease classification using thyroid scintigraphy images, aiming to decrease assessment time and increase diagnostic accuracy. Anterior thyroid scintigraphy images from 2,643 patients were collected and categorized into diffuse goiter (DG), multinodal goiter (MNG), and thyroiditis (TH) based on clinical reports, and then segmented by an expert. A ResUNet model was trained to perform auto-segmentation. Radiomic features were extracted from both physician (scenario 1) and ResUNet segmentations (scenario 2), followed by omitting highly correlated features using Spearman's correlation, and feature selection using Recursive Feature Elimination (RFE) with XGBoost as the core. All models were trained under leave-one-center-out cross-validation (LOCOCV) scheme, where nine instances of algorithms were iteratively trained and validated on data from eight centers and tested on the ninth for both scenarios separately. Segmentation performance was assessed using the Dice similarity coefficient (DSC), while classification performance was assessed using metrics, such as precision, recall, F1-score, accuracy, area under the Receiver Operating Characteristic (ROC AUC), and area under the precision-recall curve (PRC AUC). ResUNet achieved DSC values of 0.84$\pm$0.03, 0.71$\pm$0.06, and 0.86$\pm$0.02 for MNG, TH, and DG, respectively. Classification in scenario 1 achieved an accuracy of 0.76$\pm$0.04 and a ROC AUC of 0.92$\pm$0.02 while in scenario 2, classification yielded an accuracy of 0.74$\pm$0.05 and a ROC AUC of 0.90$\pm$0.02. The automated pipeline demonstrated comparable performance to physician segmentations on several classification metrics across different classes, effectively reducing assessment time while maintaining high diagnostic accuracy. Code available at: https://github.com/ahxmeds/thyroidiomics.git. | [
"['Maziar Sabouri' 'Shadab Ahamed' 'Azin Asadzadeh' 'Atlas Haddadi Avval'\n 'Soroush Bagheri' 'Mohsen Arabi' 'Seyed Rasoul Zakavi' 'Emran Askari'\n 'Ali Rasouli' 'Atena Aghaee' 'Mohaddese Sehati' 'Fereshteh Yousefirizi'\n 'Carlos Uribe' 'Ghasem Hajianfar' 'Habib Zaidi' 'Arman Rahmim']"
] |
null | null | 2407.10341 | null | null | http://arxiv.org/pdf/2407.10341v1 | 2024-07-14T21:41:29Z | 2024-07-14T21:41:29Z | Affordance-Guided Reinforcement Learning via Visual Prompting | Robots equipped with reinforcement learning (RL) have the potential to learn a wide range of skills solely from a reward signal. However, obtaining a robust and dense reward signal for general manipulation tasks remains a challenge. Existing learning-based approaches require significant data, such as demonstrations or examples of success and failure, to learn task-specific reward functions. Recently, there is also a growing adoption of large multi-modal foundation models for robotics. These models can perform visual reasoning in physical contexts and generate coarse robot motions for various manipulation tasks. Motivated by this range of capability, in this work, we propose and study rewards shaped by vision-language models (VLMs). State-of-the-art VLMs have demonstrated an impressive ability to reason about affordances through keypoints in zero-shot, and we leverage this to define dense rewards for robotic learning. On a real-world manipulation task specified by natural language description, we find that these rewards improve the sample efficiency of autonomous RL and enable successful completion of the task in 20K online finetuning steps. Additionally, we demonstrate the robustness of the approach to reductions in the number of in-domain demonstrations used for pretraining, reaching comparable performance in 35K online finetuning steps. | [
"['Olivia Y. Lee' 'Annie Xie' 'Kuan Fang' 'Karl Pertsch' 'Chelsea Finn']"
] |
null | null | 2407.10366 | null | null | http://arxiv.org/pdf/2407.10366v1 | 2024-07-15T00:13:53Z | 2024-07-15T00:13:53Z | Accessing Vision Foundation Models at ImageNet-level Costs | Vision foundation models are renowned for their generalization ability due to massive training data. Nevertheless, they demand tremendous training resources, and the training data is often inaccessible, e.g., CLIP, DINOv2, posing great challenges to developing derivatives that could advance research in this field. In this work, we offer a very simple and general solution, named Proteus, to distill foundation models into smaller equivalents on ImageNet-1K without access to the original training data. Specifically, we remove the designs from conventional knowledge distillation settings that result in dataset bias and present three levels of training objectives, i.e., token, patch, and feature, to maximize the efficacy of knowledge transfer. In this manner, Proteus is trained at ImageNet-level costs with surprising ability, facilitating the accessibility of training foundation models for the broader research community. Leveraging DINOv2-g/14 as the teacher, Proteus-L/14 matches the performance of the Oracle method DINOv2-L/14 (142M training data) across 15 benchmarks and outperforms other vision foundation models including CLIP-L/14 (400M), OpenCLIP-L/14 (400M/2B) and SynCLR-L/14 (600M). | [
"['Yitian Zhang' 'Xu Ma' 'Yue Bai' 'Huan Wang' 'Yun Fu']"
] |
null | null | 2407.10383 | null | null | http://arxiv.org/pdf/2407.10383v1 | 2024-07-15T01:25:46Z | 2024-07-15T01:25:46Z | Learning to Represent Surroundings, Anticipate Motion and Take Informed
Actions in Unstructured Environments | Contemporary robots have become exceptionally skilled at achieving specific tasks in structured environments. However, they often fail when faced with the limitless permutations of real-world unstructured environments. This motivates robotics methods which learn from experience, rather than follow a pre-defined set of rules. In this thesis, we present a range of learning-based methods aimed at enabling robots, operating in dynamic and unstructured environments, to better understand their surroundings, anticipate the actions of others, and take informed actions accordingly. | [
"['Weiming Zhi']"
] |
null | null | 2407.10385 | null | null | http://arxiv.org/pdf/2407.10385v1 | 2024-07-15T01:33:54Z | 2024-07-15T01:33:54Z | By My Eyes: Grounding Multimodal Large Language Models with Sensor Data
via Visual Prompting | Large language models (LLMs) have demonstrated exceptional abilities across various domains. However, utilizing LLMs for ubiquitous sensing applications remains challenging as existing text-prompt methods show significant performance degradation when handling long sensor data sequences. We propose a visual prompting approach for sensor data using multimodal LLMs (MLLMs). We design a visual prompt that directs MLLMs to utilize visualized sensor data alongside the target sensory task descriptions. Additionally, we introduce a visualization generator that automates the creation of optimal visualizations tailored to a given sensory task, eliminating the need for prior task-specific knowledge. We evaluated our approach on nine sensory tasks involving four sensing modalities, achieving an average of 10% higher accuracy than text-based prompts and reducing token costs by 15.8x. Our findings highlight the effectiveness and cost-efficiency of visual prompts with MLLMs for various sensory tasks. | [
"['Hyungjun Yoon' 'Biniyam Aschalew Tolera' 'Taesik Gong' 'Kimin Lee'\n 'Sung-Ju Lee']"
] |
null | null | 2407.10414 | null | null | http://arxiv.org/pdf/2407.10414v1 | 2024-07-15T03:31:42Z | 2024-07-15T03:31:42Z | Teaching CORnet Human fMRI Representations for Enhanced Model-Brain
Alignment | Deep convolutional neural networks (DCNNs) have demonstrated excellent performance in object recognition and have been found to share some similarities with brain visual processing. However, the substantial gap between DCNNs and human visual perception still exists. Functional magnetic resonance imaging (fMRI), as a widely used technique in cognitive neuroscience, can record neural activation in the human visual cortex during the process of visual perception. Can we teach DCNNs human fMRI signals to achieve a more brain-like model? To answer this question, this study proposed ReAlnet-fMRI, a model based on the SOTA vision model CORnet but optimized using human fMRI data through a multi-layer encoding-based alignment framework. This framework has been shown to effectively enable the model to learn human brain representations. The fMRI-optimized ReAlnet-fMRI exhibited higher similarity to the human brain than both CORnet and the control model in within- and across-subject as well as within- and across-modality model-brain (fMRI and EEG) alignment evaluations. Additionally, we conducted in-depth analyses to investigate how the internal representations of ReAlnet-fMRI differ from CORnet in encoding various object dimensions. These findings provide the possibility of enhancing the brain-likeness of visual models by integrating human neural data, helping to bridge the gap between computer vision and visual neuroscience. | [
"['Zitong Lu' 'Yile Wang']"
] |
null | null | 2407.10417 | null | null | http://arxiv.org/pdf/2407.10417v1 | 2024-07-15T03:46:15Z | 2024-07-15T03:46:15Z | Proper losses regret at least 1/2-order | A fundamental challenge in machine learning is the choice of a loss as it characterizes our learning task, is minimized in the training phase, and serves as an evaluation criterion for estimators. Proper losses are commonly chosen, ensuring minimizers of the full risk match the true probability vector. Estimators induced from a proper loss are widely used to construct forecasters for downstream tasks such as classification and ranking. In this procedure, how well does the forecaster based on the obtained estimator perform under a given downstream task? This question is substantially relevant to the behavior of the $p$-norm between the estimated and true probability vectors when the estimator is updated. In the proper loss framework, the suboptimality of the estimated probability vector from the true probability vector is measured by a surrogate regret. First, we analyze a surrogate regret and show that the strict properness of a loss is necessary and sufficient to establish a non-vacuous surrogate regret bound. Second, we resolve an important open question: the order of convergence in the $p$-norm cannot be faster than the $1/2$-order of the surrogate regret for a broad class of strictly proper losses. This implies that strongly proper losses entail the optimal convergence rate. | [
"['Han Bao' 'Asuka Takatsu']"
] |
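A worked special case may help make the $1/2$-order relationship in the abstract above concrete. The following sketch is not taken from the paper; it only illustrates the phenomenon for the logarithmic loss via Pinsker's inequality.

```latex
% Illustrative sketch (not the paper's proof): the 1/2-order for the log loss.
% For the log loss, the full risk of predicting q under the true distribution p is
%   E_{y ~ p}[-log q_y] = H(p) + KL(p || q),
% so the surrogate regret of q equals the KL divergence KL(p || q).
% Pinsker's inequality then bounds the estimation error in 1-norm:
\[
  \|p - q\|_1 \;\le\; \sqrt{2\,\mathrm{KL}(p \,\|\, q)},
\]
% i.e. the norm error decays at most like the square root (the 1/2-order)
% of the surrogate regret; on a finite-dimensional simplex the same order
% carries over to any p-norm by norm equivalence.
```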
null | null | 2407.10418 | null | null | http://arxiv.org/pdf/2407.10418v1 | 2024-07-15T03:47:16Z | 2024-07-15T03:47:16Z | An integrated perspective of robustness in regression through the lens
of the bias-variance trade-off | This paper presents an integrated perspective on robustness in regression. Specifically, we examine the relationship between traditional outlier-resistant robust estimation and robust optimization, which focuses on parameter estimation resistant to imaginary dataset-perturbations. While both are commonly regarded as robust methods, these concepts demonstrate a bias-variance trade-off, indicating that they follow roughly converse strategies. | [
"['Akifumi Okuno']"
] |
null | null | 2407.10419 | null | null | http://arxiv.org/pdf/2407.10419v1 | 2024-07-15T03:48:16Z | 2024-07-15T03:48:16Z | Omni-Dimensional Frequency Learner for General Time Series Analysis | Frequency domain representation of time series features offers a concise representation for handling real-world time series data with inherent complexity and dynamic nature. However, current frequency-based methods with complex operations still fall short of state-of-the-art time domain methods for general time series analysis. In this work, we present the Omni-Dimensional Frequency Learner (ODFL) model, based on an in-depth analysis of all three aspects of the spectrum feature: the channel redundancy property among the channel dimension, the sparse and un-salient frequency energy distribution among the frequency dimension, and the semantic diversity among the variable dimension. Technically, our method is composed of a semantic-adaptive global filter with attention to the un-salient frequency bands and partial operation among the channel dimension. Empirical results show that ODFL achieves consistent state-of-the-art performance in five mainstream time series analysis tasks, including short- and long-term forecasting, imputation, classification, and anomaly detection, offering a promising foundation for time series analysis. | [
"['Xianing Chen. Hanting Chen' 'Hailin Hu']"
] |
null | null | 2407.10441 | null | null | http://arxiv.org/pdf/2407.10441v1 | 2024-07-15T05:08:38Z | 2024-07-15T05:08:38Z | Enhancing Building Safety Design for Active Shooter Incidents:
Exploration of Building Exit Parameters using Reinforcement Learning-Based
Simulations | With the alarming rise in active shooter incidents (ASIs) in the United States, enhancing public safety through building design has become a pressing need. This study proposes a reinforcement learning-based simulation approach addressing gaps in existing research that has neglected the dynamic behaviours of shooters. We developed an autonomous agent to simulate an active shooter within a realistic office environment, aiming to offer insights into the interactions between building design parameters and ASI outcomes. A case study is conducted to quantitatively investigate the impact of building exit numbers (total count of accessible exits) and configuration (arrangement of which exits are available or not) on evacuation and harm rates. Findings demonstrate that greater exit availability significantly improves evacuation outcomes and reduces harm. Exits nearer to the shooter's initial position hold greater importance for accessibility than those farther away. By encompassing dynamic shooter behaviours, this study offers preliminary insights into effective building safety design against evolving threats. | [
"['Ruying Liu' 'Wanjing Wu' 'Burcin Becerik-Gerber' 'Gale M. Lucas']"
] |
null | null | 2407.10448 | null | null | http://arxiv.org/pdf/2407.10448v1 | 2024-07-15T05:39:56Z | 2024-07-15T05:39:56Z | Spectral Representation for Causal Estimation with Hidden Confounders | We address the problem of causal effect estimation where hidden confounders are present, with a focus on two settings: instrumental variable regression with additional observed confounders, and proxy causal learning. Our approach uses a singular value decomposition of a conditional expectation operator, followed by a saddle-point optimization problem, which, in the context of IV regression, can be thought of as a neural net generalization of the seminal approach due to Darolles et al. [2011]. Saddle-point formulations have gathered considerable attention recently, as they can avoid double sampling bias and are amenable to modern function approximation methods. We provide experimental validation in various settings, and show that our approach outperforms existing methods on common benchmarks. | [
"['Tongzheng Ren' 'Haotian Sun' 'Antoine Moulin' 'Arthur Gretton' 'Bo Dai']"
] |
null | null | 2407.10449 | null | null | http://arxiv.org/pdf/2407.10449v1 | 2024-07-15T05:40:11Z | 2024-07-15T05:40:11Z | A Fast, Robust Elliptical Slice Sampling Implementation for Linearly
Truncated Multivariate Normal Distributions | Elliptical slice sampling, when adapted to linearly truncated multivariate normal distributions, is a rejection-free Markov chain Monte Carlo method. At its core, it requires analytically constructing an ellipse-polytope intersection. The main novelty of this paper is an algorithm that computes this intersection in $\mathcal{O}(m \log m)$ time, where $m$ is the number of linear inequality constraints representing the polytope. We show that an implementation based on this algorithm enhances numerical stability, speeds up running time, and is easy to parallelize for launching multiple Markov chains. | [
"['Kaiwen Wu' 'Jacob R. Gardner']"
] |
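To make the ellipse-polytope intersection in the abstract above concrete, here is a minimal Python sketch for a single linear constraint; the function name and toy setup are illustrative, and this naive per-constraint computation is not the paper's $\mathcal{O}(m \log m)$ algorithm.

```python
import numpy as np

def feasible_arc(x, nu, a, b):
    """Feasible angular arc of the ESS ellipse x*cos(t) + nu*sin(t) under a
    single linear constraint a @ point >= b.

    Returns an (lo, hi) arc in radians, the full circle, or None if the
    constraint cannot be satisfied anywhere on the ellipse. This per-constraint
    step is O(1); intersecting the m resulting arcs is the part the paper
    accelerates, which is not reproduced here.
    """
    p, q = a @ x, a @ nu
    r = np.hypot(p, q)
    phi = np.arctan2(q, p)            # a @ point = r * cos(t - phi)
    if b <= -r:                       # constraint holds at every angle
        return (0.0, 2.0 * np.pi)
    if b > r:                         # constraint violated at every angle
        return None
    half = np.arccos(np.clip(b / r, -1.0, 1.0))
    return (phi - half, phi + half)   # arc centred at phi

# toy usage: a 2-D ellipse and one halfspace {z : z[0] >= 0.1}
rng = np.random.default_rng(0)
x, nu = rng.normal(size=2), rng.normal(size=2)
print(feasible_arc(x, nu, np.array([1.0, 0.0]), 0.1))
```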
null | null | 2407.10452 | null | null | http://arxiv.org/pdf/2407.10452v1 | 2024-07-15T05:45:09Z | 2024-07-15T05:45:09Z | GraphPrint: Extracting Features from 3D Protein Structure for Drug
Target Affinity Prediction | Accurate drug target affinity prediction can improve drug candidate selection, accelerate the drug discovery process, and reduce drug production costs. Previous work focused on traditional fingerprints or used features extracted based on the amino acid sequence in the protein, ignoring its 3D structure which affects its binding affinity. In this work, we propose GraphPrint: a framework for incorporating 3D protein structure features for drug target affinity prediction. We generate graph representations for protein 3D structures using amino acid residue location coordinates and combine them with drug graph representation and traditional features to jointly learn drug target affinity. Our model achieves a mean square error of 0.1378 and a concordance index of 0.8929 on the KIBA dataset and improves over using traditional protein features alone. Our ablation study shows that the 3D protein structure-based features provide information complementary to traditional features. | [
"['Amritpal Singh']"
] |
null | null | 2407.10454 | null | null | http://arxiv.org/pdf/2407.10454v1 | 2024-07-15T06:07:05Z | 2024-07-15T06:07:05Z | Deflated Dynamics Value Iteration | The Value Iteration (VI) algorithm is an iterative procedure to compute the value function of a Markov decision process, and is the basis of many reinforcement learning (RL) algorithms as well. As the error convergence rate of VI as a function of iteration $k$ is $O(\gamma^k)$, it is slow when the discount factor $\gamma$ is close to $1$. To accelerate the computation of the value function, we propose Deflated Dynamics Value Iteration (DDVI). DDVI uses matrix splitting and matrix deflation techniques to effectively remove (deflate) the top $s$ dominant eigen-structure of the transition matrix $\mathcal{P}^{\pi}$. We prove that this leads to a $\tilde{O}(\gamma^k |\lambda_{s+1}|^k)$ convergence rate, where $\lambda_{s+1}$ is the $(s+1)$-th largest eigenvalue of the dynamics matrix. We then extend DDVI to the RL setting and present the Deflated Dynamics Temporal Difference (DDTD) algorithm. We empirically show the effectiveness of the proposed algorithms. | [
"['Jongmin Lee' 'Amin Rakhsha' 'Ernest K. Ryu' 'Amir-massoud Farahmand']"
] |
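For context on the baseline that DDVI accelerates, the following sketch runs plain value iteration for a fixed policy and prints its $O(\gamma^k)$ error decay; it is a generic illustration, not the paper's deflation algorithm, and the toy MDP is made up.

```python
import numpy as np

def value_iteration(P, r, gamma, iters=201):
    """Plain value iteration for a fixed policy: V <- r + gamma * P @ V.
    The sup-norm error contracts as O(gamma^k), which is slow for gamma
    near 1 -- this is the baseline behaviour DDVI is designed to speed up.
    """
    V = np.zeros(len(r))
    V_star = np.linalg.solve(np.eye(len(r)) - gamma * P, r)   # exact fixed point
    for k in range(iters):
        V = r + gamma * P @ V
        if k % 50 == 0:
            print(k, np.linalg.norm(V - V_star, ord=np.inf))
    return V

# toy policy-induced dynamics: random row-stochastic P and rewards r
rng = np.random.default_rng(1)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(5)
value_iteration(P, r, gamma=0.99)
```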
null | null | 2407.10477 | null | null | http://arxiv.org/pdf/2407.10477v1 | 2024-07-15T07:05:34Z | 2024-07-15T07:05:34Z | Deep Learning-Based Operators for Evolutionary Algorithms | We present two novel domain-independent genetic operators that harness the capabilities of deep learning: a crossover operator for genetic algorithms and a mutation operator for genetic programming. Deep Neural Crossover leverages the capabilities of deep reinforcement learning and an encoder-decoder architecture to select offspring genes. BERT mutation masks multiple gp-tree nodes and then tries to replace these masks with nodes that will most likely improve the individual's fitness. We show the efficacy of both operators through experimentation. | [
"['Eliad Shem-Tov' 'Moshe Sipper' 'Achiya Elyasaf']"
] |
null | null | 2407.10481 | null | null | http://arxiv.org/abs/2407.10481v1 | 2024-07-15T07:07:11Z | 2024-07-15T07:07:11Z | SuperPADL: Scaling Language-Directed Physics-Based Control with
Progressive Supervised Distillation | Physically-simulated models for human motion can generate high-quality responsive character animations, often in real-time. Natural language serves as a flexible interface for controlling these models, allowing expert and non-expert users to quickly create and edit their animations. Many recent physics-based animation methods, including those that use text interfaces, train control policies using reinforcement learning (RL). However, scaling these methods beyond several hundred motions has remained challenging. Meanwhile, kinematic animation models are able to successfully learn from thousands of diverse motions by leveraging supervised learning methods. Inspired by these successes, in this work we introduce SuperPADL, a scalable framework for physics-based text-to-motion that leverages both RL and supervised learning to train controllers on thousands of diverse motion clips. SuperPADL is trained in stages using progressive distillation, starting with a large number of specialized experts using RL. These experts are then iteratively distilled into larger, more robust policies using a combination of reinforcement learning and supervised learning. Our final SuperPADL controller is trained on a dataset containing over 5000 skills and runs in real time on a consumer GPU. Moreover, our policy can naturally transition between skills, allowing for users to interactively craft multi-stage animations. We experimentally demonstrate that SuperPADL significantly outperforms RL-based baselines at this large data scale. | [
"['Jordan Juravsky' 'Yunrong Guo' 'Sanja Fidler' 'Xue Bin Peng']"
] |
null | null | 2407.10483 | null | null | http://arxiv.org/pdf/2407.10483v1 | 2024-07-15T07:11:00Z | 2024-07-15T07:11:00Z | G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning | Graph data structures offer a versatile and powerful means to model relationships and interconnections in various domains, promising substantial advantages in data representation, analysis, and visualization. In games, graph-based data structures are omnipresent and represent, for example, game economies, skill trees or complex, branching quest lines. With this paper, we propose G-PCGRL, a novel and controllable method for the procedural generation of graph data using reinforcement learning. Therefore, we frame this problem as manipulating a graph's adjacency matrix to fulfill a given set of constraints. Our method adapts and extends the Procedural Content Generation via Reinforcement Learning (PCGRL) framework and introduces new representations to frame the problem of graph data generation as a Markov decision process. We compare the performance of our method with the original PCGRL, the run time with a random search and evolutionary algorithm, and evaluate G-PCGRL on two graph data domains in games: game economies and skill trees. The results show that our method is capable of generating graph-based content quickly and reliably to support and inspire designers in the game creation process. In addition, trained models are controllable in terms of the type and number of nodes to be generated. | [
"['Florian Rupp' 'Kai Eckert']"
] |
null | null | 2407.10484 | null | null | http://arxiv.org/pdf/2407.10484v1 | 2024-07-15T07:11:44Z | 2024-07-15T07:11:44Z | Understanding Matrix Function Normalizations in Covariance Pooling
through the Lens of Riemannian Geometry | Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations. GCP typically performs classification of the covariance matrices by applying matrix function normalization, such as matrix logarithm or power, followed by a Euclidean classifier. However, covariance matrices inherently lie in a Riemannian manifold, known as the Symmetric Positive Definite (SPD) manifold. The current literature does not provide a satisfactory explanation of why Euclidean classifiers can be applied directly to Riemannian features after the normalization of the matrix power. To mitigate this gap, this paper provides a comprehensive and unified understanding of the matrix logarithm and power from a Riemannian geometry perspective. The underlying mechanism of matrix functions in GCP is interpreted from two perspectives: one based on tangent classifiers (Euclidean classifiers on the tangent space) and the other based on Riemannian classifiers. Via theoretical analysis and empirical validation through extensive experiments on fine-grained and large-scale visual classification datasets, we conclude that the working mechanism of the matrix functions should be attributed to the Riemannian classifiers they implicitly respect. | [
"['Ziheng Chen' 'Yue Song' 'Xiao-Jun Wu' 'Gaowen Liu' 'Nicu Sebe']"
] |
null | null | 2407.10490 | null | null | http://arxiv.org/pdf/2407.10490v1 | 2024-07-15T07:30:28Z | 2024-07-15T07:30:28Z | Learning Dynamics of LLM Finetuning | Learning dynamics, which describes how the learning of specific training examples influences the model's prediction of other examples, give us a powerful tool for understanding the behavior of deep learning systems. We study the learning dynamics of large language models during finetuning, by analyzing the step-wise decomposition and accumulated influence among different responses. Our framework allows a uniform interpretation of many interesting observations about the training of popular algorithms for both instruction tuning and preference tuning. The analysis not only explains where the benefits of these methods come from but also inspires a simple, effective method to further improve the alignment performance. Code for experiments is available at https://github.com/Joshua-Ren/Learning_dynamics_LLM. | [
"['Yi Ren' 'Danica J. Sutherland']"
] |
null | null | 2407.10494 | null | null | http://arxiv.org/pdf/2407.10494v1 | 2024-07-15T07:36:00Z | 2024-07-15T07:36:00Z | Learning to Unlearn for Robust Machine Unlearning | Machine unlearning (MU) seeks to remove knowledge of specific data samples from trained models without the necessity for complete retraining, a task made challenging by the dual objectives of effective erasure of data and maintaining the overall performance of the model. Despite recent advances in this field, balancing between the dual objectives of unlearning remains challenging. From a fresh perspective of generalization, we introduce a novel Learning-to-Unlearn (LTU) framework, which adopts a meta-learning approach to optimize the unlearning process to improve forgetting and remembering in a unified manner. LTU includes a meta-optimization scheme that facilitates models to effectively preserve generalizable knowledge with only a small subset of the remaining set, while thoroughly forgetting the specific data samples. We also introduce a Gradient Harmonization strategy to align the optimization trajectories for remembering and forgetting via mitigating gradient conflicts, thus ensuring efficient and effective model updates. Our approach demonstrates improved efficiency and efficacy for MU, offering a promising solution to the challenges of data rights and model reusability. | [
"['Mark He Huang' 'Lin Geng Foo' 'Jun Liu']"
] |
null | null | 2407.10495 | null | null | http://arxiv.org/pdf/2407.10495v1 | 2024-07-15T07:37:31Z | 2024-07-15T07:37:31Z | Improving Hyperbolic Representations via Gromov-Wasserstein
Regularization | Hyperbolic representations have shown remarkable efficacy in modeling inherent hierarchies and complexities within data structures. Hyperbolic neural networks have been commonly applied for learning such representations from data, but they often fall short in preserving the geometric structures of the original feature spaces. In response to this challenge, our work applies the Gromov-Wasserstein (GW) distance as a novel regularization mechanism within hyperbolic neural networks. The GW distance quantifies how well the original data structure is maintained after embedding the data in a hyperbolic space. Specifically, we explicitly treat the layers of the hyperbolic neural networks as a transport map and calculate the GW distance accordingly. We validate that the GW distance computed based on a training set well approximates the GW distance of the underlying data distribution. Our approach demonstrates consistent enhancements over current state-of-the-art methods across various tasks, including few-shot image classification, as well as semi-supervised graph link prediction and node classification. | [
"['Yifei Yang' 'Wonjun Lee' 'Dongmian Zou' 'Gilad Lerman']"
] |
null | null | 2407.10504 | null | null | http://arxiv.org/pdf/2407.10504v1 | 2024-07-15T07:53:29Z | 2024-07-15T07:53:29Z | A pragmatic policy learning approach to account for users' fatigue in
repeated auctions | Online advertising banners are sold in real-time through auctions. Typically, the more banners a user is shown, the smaller the marginal value of the next banner for this user is. This fact can be detected by basic ML models, which can be used to predict how previously won auctions decrease the current opportunity value. However, learning is not enough to produce a bid that correctly accounts for how winning the current auction impacts the future values. Indeed, a policy that uses this prediction to maximize the expected payoff of the current auction could be dubbed impatient because such a policy does not fully account for the repeated nature of the auctions. Under this perspective, it seems that most bidders in the literature are impatient. Unsurprisingly, impatience induces a cost. We provide two empirical arguments for the importance of this cost of impatience: first, an offline counterfactual analysis and, second, a notable business metrics improvement by mitigating the cost of impatience with policy learning. | [
"['Benjamin Heymann' 'Rémi Chan--Renous-Legoubin' 'Alexandre Gilotte']"
] |
null | null | 2407.10545 | null | null | http://arxiv.org/pdf/2407.10545v1 | 2024-07-15T08:52:20Z | 2024-07-15T08:52:20Z | Efficient Continual Learning with Low Memory Footprint For Edge Device | Continual learning (CL) is a useful technique to acquire dynamic knowledge continually. Although powerful cloud platforms can fully exert the ability of CL, e.g., customized recommendation systems, similar personalized requirements for edge devices are almost disregarded. This phenomenon stems from the huge resource overhead involved in training neural networks and overcoming the forgetting problem of CL. This paper focuses on these scenarios and proposes a compact algorithm called LightCL. Different from other CL methods bringing huge resource consumption to acquire generalizability among all tasks for delaying forgetting, LightCL compresses the resource consumption of already generalized components in neural networks and uses a few extra resources to improve memory in other parts. We first propose two new metrics of learning plasticity and memory stability to seek generalizability during CL. Based on the discovery that lower and middle layers have more generalizability and deeper layers are the opposite, we $\textit{Maintain Generalizability}$ by freezing the lower and middle layers. Then, we $\textit{Memorize Feature Patterns}$ to stabilize the feature extracting patterns of previous tasks to improve generalizability in deeper layers. In the experimental comparison, LightCL outperforms other SOTA methods in delaying forgetting and reduces the memory footprint by up to $\textbf{6.16}\times$, proving the excellent performance of LightCL in efficiency. We also evaluate the efficiency of our method on an edge device, the Jetson Nano, which further proves our method's practical effectiveness. | [
"['Zeqing Wang' 'Fei Cheng' 'Kangye Ji' 'Bohu Huang']"
] |
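The "freeze the lower and middle layers" idea from the LightCL abstract above can be sketched in a few lines of PyTorch; the choice of ResNet-18 and the exact layer split are illustrative assumptions, not the paper's configuration.

```python
import torch
import torchvision

# Sketch of the "Maintain Generalizability" step: freeze the lower and middle
# layers so their weights and weight gradients never need to be stored or
# updated, and train only the deeper block plus the classifier head.
model = torchvision.models.resnet18(num_classes=10)

for name, param in model.named_parameters():
    # keep only the last residual stage and the fully connected head trainable
    param.requires_grad = name.startswith(("layer4", "fc"))

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2, momentum=0.9)
print(sum(p.numel() for p in trainable), "trainable parameters")
```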
null | null | 2407.10547 | null | null | http://arxiv.org/pdf/2407.10547v1 | 2024-07-15T08:57:02Z | 2024-07-15T08:57:02Z | Learning Social Cost Functions for Human-Aware Path Planning | Achieving social acceptance is one of the main goals of Social Robotic Navigation. Although this topic has received increasing interest in recent years, most of the research has focused on driving the robotic agent along obstacle-free trajectories, planning around estimates of future human motion to respect personal distances and optimize navigation. However, social interactions in everyday life are also dictated by norms that do not strictly depend on movement, such as when standing at the end of a queue rather than cutting it. In this paper, we propose a novel method to recognize common social scenarios and modify a traditional planner's cost function to adapt to them. This solution enables the robot to carry out different social navigation behaviors that would not arise otherwise, maintaining the robustness of traditional navigation. Our approach allows the robot to learn different social norms with a single learned model, rather than having different modules for each task. As a proof of concept, we consider the tasks of queuing and respecting the interaction spaces of groups of people talking to one another, but the method can be extended to other human activities that do not involve motion. | [
"['Andrea Eirale' 'Matteo Leonetti' 'Marcello Chiaberge']"
] |
null | null | 2407.10558 | null | null | http://arxiv.org/pdf/2407.10558v1 | 2024-07-15T09:15:55Z | 2024-07-15T09:15:55Z | ConTEXTure: Consistent Multiview Images to Texture | We introduce ConTEXTure, a generative network designed to create a texture map/atlas for a given 3D mesh using images from multiple viewpoints. The process begins with generating a front-view image from a text prompt, such as 'Napoleon, front view', describing the 3D mesh. Additional images from different viewpoints are derived from this front-view image and camera poses relative to it. ConTEXTure builds upon the TEXTure network, which uses text prompts for six viewpoints (e.g., 'Napoleon, front view', 'Napoleon, left view', etc.). However, TEXTure often generates images for non-front viewpoints that do not accurately represent those viewpoints. To address this issue, we employ Zero123++, which generates multiple view-consistent images for the six specified viewpoints simultaneously, conditioned on the initial front-view image and the depth maps of the mesh for the six viewpoints. By utilizing these view-consistent images, ConTEXTure learns the texture atlas from all viewpoint images concurrently, unlike previous methods that do so sequentially. This approach ensures that the rendered images from various viewpoints, including back, side, bottom, and top, are free from viewpoint irregularities. | [
"['Jaehoon Ahn' 'Sumin Cho' 'Harim Jung' 'Kibeom Hong' 'Seonghoon Ban'\n 'Moon-Ryul Jung']"
] |
null | null | 2407.10583 | null | null | http://arxiv.org/pdf/2407.10583v1 | 2024-07-15T10:03:24Z | 2024-07-15T10:03:24Z | Three Dogmas of Reinforcement Learning | Modern reinforcement learning has been conditioned by at least three dogmas. The first is the environment spotlight, which refers to our tendency to focus on modeling environments rather than agents. The second is our treatment of learning as finding the solution to a task, rather than adaptation. The third is the reward hypothesis, which states that all goals and purposes can be well thought of as maximization of a reward signal. These three dogmas shape much of what we think of as the science of reinforcement learning. While each of the dogmas has played an important role in developing the field, it is time we bring them to the surface and reflect on whether they belong as basic ingredients of our scientific paradigm. In order to realize the potential of reinforcement learning as a canonical frame for researching intelligent agents, we suggest that it is time we shed dogmas one and two entirely, and embrace a nuanced approach to the third. | [
"['David Abel' 'Mark K. Ho' 'Anna Harutyunyan']"
] |
null | null | 2407.10627 | null | null | http://arxiv.org/pdf/2407.10627v1 | 2024-07-15T11:26:07Z | 2024-07-15T11:26:07Z | Arena Learning: Build Data Flywheel for LLMs Post-training via Simulated
Chatbot Arena | Assessing the effectiveness of large language models (LLMs) presents substantial challenges. The method of conducting human-annotated battles in an online Chatbot Arena is a highly effective evaluative technique. However, this approach is limited by the costs and time required for human annotation. In this paper, we introduce Arena Learning, an innovative offline strategy designed to simulate these arena battles using AI-driven annotations to evaluate battle outcomes, thus facilitating the continuous improvement of the target model through both supervised fine-tuning and reinforcement learning. Arena Learning comprises two key elements. First, it ensures precise evaluations and maintains consistency between offline simulations and online competitions via WizardArena, a pipeline developed to accurately predict the Elo rankings of various models using a meticulously designed offline test set. Our results demonstrate that WizardArena's predictions closely align with those from the online Arena. Second, it involves the continuous improvement of training data based on the battle results and the refined model. We establish a data flywheel to iteratively update the training data by highlighting the weaknesses of the target model based on its battle results, enabling it to learn from the strengths of multiple different models. We apply Arena Learning to train our target model, WizardLM-$\beta$, and demonstrate significant performance enhancements across various metrics. This fully automated training and evaluation pipeline sets the stage for continuous advancements in various LLMs via post-training. Notably, Arena Learning plays a pivotal role in the success of WizardLM-2, and this paper serves both as an exploration of its efficacy and a foundational study for future discussions related to WizardLM-2 and its derivatives. | [
"['Haipeng Luo' 'Qingfeng Sun' 'Can Xu' 'Pu Zhao' 'Qingwei Lin'\n 'Jianguang Lou' 'Shifeng Chen' 'Yansong Tang' 'Weizhu Chen']"
] |
null | null | 2407.10629 | null | null | http://arxiv.org/pdf/2407.10629v1 | 2024-07-15T11:28:16Z | 2024-07-15T11:28:16Z | Balancing the Scales: Reinforcement Learning for Fair Classification | Fairness in classification tasks has traditionally focused on bias removal from neural representations, but recent trends favor algorithmic methods that embed fairness into the training process. These methods steer models towards fair performance, preventing potential elimination of valuable information that arises from representation manipulation. Reinforcement Learning (RL), with its capacity for learning through interaction and adjusting reward functions to encourage desired behaviors, emerges as a promising tool in this domain. In this paper, we explore the usage of RL to address bias in imbalanced classification by scaling the reward function to mitigate bias. We employ the contextual multi-armed bandit framework and adapt three popular RL algorithms to suit our objectives, demonstrating a novel approach to mitigating bias. | [
"['Leon Eshuijs' 'Shihan Wang' 'Antske Fokkens']"
] |
null | null | 2407.10630 | null | null | http://arxiv.org/pdf/2407.10630v1 | 2024-07-15T11:30:40Z | 2024-07-15T11:30:40Z | Brain Tumor Classification From MRI Images Using Machine Learning | A brain tumor is a life-threatening problem and hampers the normal functioning of the human body. The average five-year relative survival rate for malignant brain tumors is 35.6 percent. For proper diagnosis and efficient treatment planning, it is necessary to detect the brain tumor in its early stages. Due to advancements in medical imaging technology, brain images are taken in different modalities. The ability to extract relevant characteristics from magnetic resonance imaging (MRI) scans is a crucial step for brain tumor classifiers. Several studies have proposed various strategies to extract relevant features from different modalities of MRI to predict the growth of abnormal tumors. Most techniques used conventional methods of image processing for feature extraction and machine learning for classification. More recently, the use of deep learning algorithms in medical imaging has resulted in significant improvements in the classification and diagnosis of brain tumors. Since tumors are located at different regions of the brain, localizing the tumor and classifying it into a particular category is a challenging task. The objective of this project is to develop a predictive system for brain tumor detection using machine learning (ensembling). | [
"['Vidhyapriya Ranganathan' 'Celshiya Udaiyar' 'Jaisree Jayanth'\n 'Meghaa P V' 'Srija B' 'Uthra S']"
] |
null | null | 2407.10633 | null | null | http://arxiv.org/pdf/2407.10633v1 | 2024-07-15T11:46:21Z | 2024-07-15T11:46:21Z | Evaluating Model Bias Requires Characterizing its Mistakes | The ability to properly benchmark model performance in the face of spurious correlations is important to both build better predictors and increase confidence that models are operating as intended. We demonstrate that characterizing (as opposed to simply quantifying) model mistakes across subgroups is pivotal to properly reflect model biases, which are ignored by standard metrics such as worst-group accuracy or accuracy gap. Inspired by the hypothesis testing framework, we introduce SkewSize, a principled and flexible metric that captures bias from mistakes in a model's predictions. It can be used in multi-class settings or generalised to the open vocabulary setting of generative models. SkewSize is an aggregation of the effect size of the interaction between two categorical variables: the spurious variable representing the bias attribute and the model's prediction. We demonstrate the utility of SkewSize in multiple settings including: standard vision models trained on synthetic data, vision models trained on ImageNet, and large scale vision-and-language models from the BLIP-2 family. In each case, the proposed SkewSize is able to highlight biases not captured by other metrics, while also providing insights on the impact of recently proposed techniques, such as instruction tuning. | [
"['Isabela Albuquerque' 'Jessica Schrouff' 'David Warde-Farley'\n 'Taylan Cemgil' 'Sven Gowal' 'Olivia Wiles']"
] |
null | null | 2407.10641 | null | null | http://arxiv.org/pdf/2407.10641v1 | 2024-07-15T12:00:46Z | 2024-07-15T12:00:46Z | Deep Diffusion Image Prior for Efficient OOD Adaptation in 3D Inverse
Problems | Recent inverse problem solvers that leverage generative diffusion priors have garnered significant attention due to their exceptional quality. However, adaptation of the prior is necessary when there exists a discrepancy between the training and testing distributions. In this work, we propose deep diffusion image prior (DDIP), which generalizes the recent adaptation method of SCD by introducing a formal connection to the deep image prior. Under this framework, we propose an efficient adaptation method dubbed D3IP, specified for 3D measurements, which accelerates DDIP by orders of magnitude while achieving superior performance. D3IP enables seamless integration of 3D inverse solvers and thus leads to coherent 3D reconstruction. Moreover, we show that meta-learning techniques can also be applied to yield even better performance. We show that our method is capable of solving diverse 3D reconstructive tasks from the generative prior trained only with phantom images that are vastly different from the training set, opening up new opportunities of applying diffusion inverse solvers even when training with gold standard data is impossible. Code: https://github.com/HJ-harry/DDIP3D | [
"['Hyungjin Chung' 'Jong Chul Ye']"
] |
null | null | 2407.10652 | null | null | http://arxiv.org/pdf/2407.10652v1 | 2024-07-15T12:13:53Z | 2024-07-15T12:13:53Z | Cutting Through the Clutter: The Potential of LLMs for Efficient
Filtration in Systematic Literature Reviews | In academic research, systematic literature reviews are foundational and highly relevant, yet tedious to create due to the high volume of publications and labor-intensive processes involved. Systematic selection of relevant papers through conventional means like keyword-based filtering techniques can sometimes be inadequate, plagued by semantic ambiguities and inconsistent terminology, which can lead to sub-optimal outcomes. To mitigate the required extensive manual filtering, we explore and evaluate the potential of using Large Language Models (LLMs) to enhance the efficiency, speed, and precision of literature review filtering, reducing the amount of manual screening required. By using models as classification agents acting on a structured database only, we prevent common problems inherent in LLMs, such as hallucinations. We evaluate the real-world performance of such a setup during the construction of a recent literature survey paper with initially more than 8.3k potentially relevant articles under consideration and compare this with human performance on the same dataset. Our findings indicate that employing advanced LLMs like GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Flash, or Llama3 with simple prompting can significantly reduce the time required for literature filtering - from usually weeks of manual research to only a few minutes. Simultaneously, we crucially show that false negatives can indeed be controlled through a consensus scheme, achieving recalls >98.8% at or even beyond the typical human error threshold, thereby also providing for more accurate and relevant articles selected. Our research not only demonstrates a substantial improvement in the methodology of literature reviews but also sets the stage for further integration and extensive future applications of responsible AI in academic research practices. | [
"['Lucas Joos' 'Daniel A. Keim' 'Maximilian T. Fischer']"
] |
null | null | 2407.10666 | null | null | http://arxiv.org/pdf/2407.10666v1 | 2024-07-15T12:29:17Z | 2024-07-15T12:29:17Z | Flow Perturbation to Accelerate Unbiased Sampling of Boltzmann
distribution | Flow-based generative models have been employed for sampling the Boltzmann distribution, but their application to high-dimensional systems is hindered by the significant computational cost of obtaining the Jacobian of the flow. To overcome this challenge, we introduce the flow perturbation method, which incorporates optimized stochastic perturbations into the flow. By reweighting trajectories generated by the perturbed flow, our method achieves unbiased sampling of the Boltzmann distribution with orders of magnitude speedup compared to both brute force Jacobian calculations and the Hutchinson estimator. Notably, it accurately sampled the Chignolin protein with all atomic Cartesian coordinates explicitly represented, which, to our best knowledge, is the largest molecule ever Boltzmann sampled in such detail using generative models. | [
"['Xin Peng' 'Ang Gao']"
] |
null | null | 2407.10681 | null | null | http://arxiv.org/abs/2407.10681v1 | 2024-07-15T12:58:04Z | 2024-07-15T12:58:04Z | GeoMix: Towards Geometry-Aware Data Augmentation | Mixup has shown considerable success in mitigating the challenges posed by limited labeled data in image classification. By synthesizing samples through the interpolation of features and labels, Mixup effectively addresses the issue of data scarcity. However, it has rarely been explored in graph learning tasks due to the irregularity and connectivity of graph data. Specifically, in node classification tasks, Mixup presents a challenge in creating connections for synthetic data. In this paper, we propose Geometric Mixup (GeoMix), a simple and interpretable Mixup approach leveraging in-place graph editing. It effectively utilizes geometry information to interpolate features and labels with those from the nearby neighborhood, generating synthetic nodes and establishing connections for them. We conduct theoretical analysis to elucidate the rationale behind employing geometry information for node Mixup, emphasizing the significance of locality enhancement-a critical aspect of our method's design. Extensive experiments demonstrate that our lightweight Geometric Mixup achieves state-of-the-art results on a wide variety of standard datasets with limited labeled data. Furthermore, it significantly improves the generalization capability of underlying GNNs across various challenging out-of-distribution generalization tasks. Our code is available at https://github.com/WtaoZhao/geomix. | [
"['Wentao Zhao' 'Qitian Wu' 'Chenxiao Yang' 'Junchi Yan']"
] |
null | null | 2407.10688 | null | null | http://arxiv.org/pdf/2407.10688v1 | 2024-07-15T13:01:47Z | 2024-07-15T13:01:47Z | Probability Passing for Graph Neural Networks: Graph Structure and
Representations Joint Learning | Graph Neural Networks (GNNs) have achieved notable success in the analysis of non-Euclidean data across a wide range of domains. However, their applicability is constrained by the dependence on the observed graph structure. To solve this problem, Latent Graph Inference (LGI) is proposed to infer a task-specific latent structure by computing the similarity or edge probability of node features and then apply a GNN to produce predictions. Even so, existing approaches neglect the noise from node features, which affects the generated graph structure and performance. In this work, we introduce a novel method called Probability Passing to refine the generated graph structure by aggregating edge probabilities of neighboring nodes based on the observed graph. Furthermore, we continue to utilize the LGI framework, inputting the refined graph structure and node features into GNNs to obtain predictions. We name the proposed scheme the Probability Passing-based Graph Neural Network (PPGNN). Moreover, the anchor-based technique is employed to reduce complexity and improve efficiency. Experimental results demonstrate the effectiveness of the proposed method. | [
"['Ziyan Wang' 'YaXuan He' 'Bin Liu']"
] |
null | null | 2407.10702 | null | null | http://arxiv.org/pdf/2407.10702v1 | 2024-07-15T13:17:48Z | 2024-07-15T13:17:48Z | Geometric Analysis of Unconstrained Feature Models with $d=K$ | Recently, interesting empirical phenomena known as Neural Collapse have been observed during the final phase of training deep neural networks for classification tasks. We examine this issue when the feature dimension $d$ is equal to the number of classes $K$. We demonstrate that two popular unconstrained feature models are strict saddle functions, with every critical point being either a global minimum or a strict saddle point that can be exited using negative curvatures. The primary findings conclusively confirm the conjecture on the unconstrained feature models in previous articles. | [
"['Shao Gu' 'Yi Shen']"
] |
null | null | 2407.10722 | null | null | http://arxiv.org/pdf/2407.10722v1 | 2024-07-15T13:47:55Z | 2024-07-15T13:47:55Z | Mitigating Data Imbalance for Software Vulnerability Assessment: Does
Data Augmentation Help? | Background: Software Vulnerability (SV) assessment is increasingly adopted to address the ever-increasing volume and complexity of SVs. Data-driven approaches have been widely used to automate SV assessment tasks, particularly the prediction of the Common Vulnerability Scoring System (CVSS) metrics such as exploitability, impact, and severity. SV assessment suffers from the imbalanced distributions of the CVSS classes, but such data imbalance has been hardly understood and addressed in the literature. Aims: We conduct a large-scale study to quantify the impacts of data imbalance and mitigate the issue for SV assessment through the use of data augmentation. Method: We leverage nine data augmentation techniques to balance the class distributions of the CVSS metrics. We then compare the performance of SV assessment models with and without leveraging the augmented data. Results: Through extensive experiments on 180k+ real-world SVs, we show that mitigating data imbalance can significantly improve the predictive performance of models for all the CVSS tasks, by up to 31.8% in Matthews Correlation Coefficient. We also discover that simple text augmentation like combining random text insertion, deletion, and replacement can outperform the baseline across the board. Conclusions: Our study provides the motivation and the first promising step toward tackling data imbalance for effective SV assessment. | [
"['Triet H. M. Le' 'M. Ali Babar']"
] |
null | null | 2407.10734 | null | null | http://arxiv.org/pdf/2407.10734v1 | 2024-07-15T14:01:34Z | 2024-07-15T14:01:34Z | On-Device Training of Fully Quantized Deep Neural Networks on Cortex-M
Microcontrollers | On-device training of DNNs allows models to adapt and fine-tune to newly collected data or changing domains while deployed on microcontroller units (MCUs). However, DNN training is a resource-intensive task, making the implementation and execution of DNN training algorithms on MCUs challenging due to low processor speeds, constrained throughput, limited floating-point support, and memory constraints. In this work, we explore on-device training of DNNs for Cortex-M MCUs. We present a method that enables efficient training of DNNs completely in place on the MCU using fully quantized training (FQT) and dynamic partial gradient updates. We demonstrate the feasibility of our approach on multiple vision and time-series datasets and provide insights into the tradeoff between training accuracy, memory overhead, energy, and latency on real hardware. | [
"['Mark Deutel' 'Frank Hannig' 'Christopher Mutschler' 'Jürgen Teich']"
] |
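As a rough illustration of the fully quantized training (FQT) ingredient mentioned above, the sketch below fake-quantizes a weight tensor to int8 with a symmetric per-tensor scale; it is a generic example, not the paper's Cortex-M implementation.

```python
import numpy as np

def fake_quantize_int8(w):
    """Symmetric per-tensor int8 fake quantization: quantize, then dequantize.
    Fully quantized training applies steps of this kind to weights and
    activations during the forward/backward pass; this is only a generic sketch.
    """
    scale = np.max(np.abs(w)) / 127.0 + 1e-12
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale, q.astype(np.float32) * scale   # codes, scale, dequantized

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale, w_hat = fake_quantize_int8(w)
print("max abs quantization error:", np.max(np.abs(w - w_hat)))
```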
null | null | 2407.10735 | null | null | http://arxiv.org/pdf/2407.10735v1 | 2024-07-15T14:01:35Z | 2024-07-15T14:01:35Z | Transforming Agency. On the mode of existence of Large Language Models | This paper investigates the ontological characterization of Large Language Models (LLMs) like ChatGPT. Between inflationary and deflationary accounts, we pay special attention to their status as agents. This requires explaining in detail the architecture, processing, and training procedures that enable LLMs to display their capacities, and the extensions used to turn LLMs into agent-like systems. After a systematic analysis, we conclude that an LLM fails to meet necessary and sufficient conditions for autonomous agency in the light of embodied theories of mind: the individuality condition (it is not the product of its own activity, it is not even directly affected by it), the normativity condition (it does not generate its own norms or goals), and, partially, the interactional asymmetry condition (it is not the origin and sustained source of its interaction with the environment). If not agents, then ... what are LLMs? We argue that ChatGPT should be characterized as an interlocutor or linguistic automaton, a library-that-talks, devoid of (autonomous) agency, but capable of engaging performatively in non-purposeful yet purpose-structured and purpose-bounded tasks. When interacting with humans, a "ghostly" component of the human-machine interaction makes it possible to enact genuine conversational experiences with LLMs. Despite their lack of sensorimotor and biological embodiment, LLMs' textual embodiment (the training corpus) and resource-hungry computational embodiment significantly transform existing forms of human agency. Beyond assisted and extended agency, the LLM-human coupling can produce midtended forms of agency, closer to the production of intentional agency than to the extended instrumentality of any previous technologies. | [
"['Xabier E. Barandiaran' 'Lola S. Almendros']"
] |
null | null | 2407.10758 | null | null | http://arxiv.org/pdf/2407.10758v1 | 2024-07-15T14:36:05Z | 2024-07-15T14:36:05Z | Continual Deep Learning on the Edge via Stochastic Local Competition
among Subnetworks | Continual learning on edge devices poses unique challenges due to stringent resource constraints. This paper introduces a novel method that leverages stochastic competition principles to promote sparsity, significantly reducing deep network memory footprint and computational demand. Specifically, we propose deep networks that comprise blocks of units that compete locally to win the representation of each arising new task; competition takes place in a stochastic manner. This type of network organization results in sparse task-specific representations from each network layer; the sparsity pattern is obtained during training and is different among tasks. Crucially, our method sparsifies both the weights and the weight gradients, thus facilitating training on edge devices. This is performed on the grounds of winning probability for each unit in a block. During inference, the network retains only the winning unit and zeroes-out all weights pertaining to non-winning units for the task at hand. Thus, our approach is specifically tailored for deployment on edge devices, providing an efficient and scalable solution for continual learning in resource-limited environments. | [
"['Theodoros Christophides' 'Kyriakos Tolias' 'Sotirios Chatzis']"
] |
null | null | 2407.10759 | null | null | http://arxiv.org/pdf/2407.10759v1 | 2024-07-15T14:38:09Z | 2024-07-15T14:38:09Z | Qwen2-Audio Technical Report | We introduce the latest progress of Qwen-Audio, a large-scale audio-language model called Qwen2-Audio, which is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions. In contrast to complex hierarchical tags, we have simplified the pre-training process by utilizing natural language prompts for different data and tasks, and have further expanded the data volume. We have boosted the instruction-following capability of Qwen2-Audio and implemented two distinct audio interaction modes for voice chat and audio analysis. In the voice chat mode, users can freely engage in voice interactions with Qwen2-Audio without text input. In the audio analysis mode, users could provide audio and text instructions for analysis during the interaction. Note that we do not use any system prompts to switch between voice chat and audio analysis modes. Qwen2-Audio is capable of intelligently comprehending the content within audio and following voice commands to respond appropriately. For instance, in an audio segment that simultaneously contains sounds, multi-speaker conversations, and a voice command, Qwen2-Audio can directly understand the command and provide an interpretation and response to the audio. Additionally, DPO has optimized the model's performance in terms of factuality and adherence to desired behavior. According to the evaluation results from AIR-Bench, Qwen2-Audio outperformed previous SOTAs, such as Gemini-1.5-pro, in tests focused on audio-centric instruction-following capabilities. Qwen2-Audio is open-sourced with the aim of fostering the advancement of the multi-modal language community. | [
"['Yunfei Chu' 'Jin Xu' 'Qian Yang' 'Haojie Wei' 'Xipin Wei' 'Zhifang Guo'\n 'Yichong Leng' 'Yuanjun Lv' 'Jinzheng He' 'Junyang Lin' 'Chang Zhou'\n 'Jingren Zhou']"
] |
null | null | 2407.10761 | null | null | http://arxiv.org/pdf/2407.10761v1 | 2024-07-15T14:40:24Z | 2024-07-15T14:40:24Z | Physics-Informed Machine Learning for Smart Additive Manufacturing | Compared to physics-based computational manufacturing, data-driven models such as machine learning (ML) are alternative approaches to achieve smart manufacturing. However, the data-driven ML's "black box" nature has presented a challenge to interpreting its outcomes. On the other hand, governing physical laws are not effectively utilized to develop data-efficient ML algorithms. To leverage the advantages of ML and physical laws of advanced manufacturing, this paper focuses on the development of a physics-informed machine learning (PIML) model by integrating neural networks and physical laws to improve model accuracy, transparency, and generalization with case studies in laser metal deposition (LMD). | [
"['Rahul Sharma' 'Maziar Raissi' 'Y. B. Guo']"
] |
null | null | 2407.10768 | null | null | http://arxiv.org/pdf/2407.10768v1 | 2024-07-15T14:50:15Z | 2024-07-15T14:50:15Z | MSegRNN: Enhanced SegRNN Model with Mamba for Long-Term Time Series
Forecasting | The field of long-term time series forecasting demands handling extensive look-back windows and long-range prediction steps, posing significant challenges for RNN-based methodologies. Among these, SegRNN, a robust RNN-driven model, has gained considerable attention in LTSF analysis for achieving state-of-the-art results while maintaining a remarkably streamlined architecture. Concurrently, the Mamba structure has demonstrated its advantages in small to medium-sized models due to its capability for information selection. This study introduces a variant of SegRNN that preprocesses information using a fine-tuned single-layer Mamba structure. Additionally, it incorporates implicit segmentation and residual structures into the model's encoding section to further reduce the inherent data iterative cycles of RNN architectures and implicitly integrate inter-channel correlations. This variant, named MSegRNN, utilizes the Mamba structure to select useful information, resulting in a transformed sequence. The linear-strategy-adapted derivative retains the superior memory efficiency of the original SegRNN while demonstrating enhanced performance. Empirical evaluations on real-world LTSF datasets demonstrate the superior performance of our model, thereby contributing to the advancement of LTSF methodologies. | [
"['GaoXiang Zhao' 'XiaoQiang Wang']"
] |
null | null | 2407.10775 | null | null | http://arxiv.org/pdf/2407.10775v1 | 2024-07-15T14:54:57Z | 2024-07-15T14:54:57Z | Last-Iterate Global Convergence of Policy Gradients for Constrained
Reinforcement Learning | Constrained Reinforcement Learning (CRL) tackles sequential decision-making problems where agents are required to achieve goals by maximizing the expected return while meeting domain-specific constraints, which are often formulated as expected costs. In this setting, policy-based methods are widely used since they come with several advantages when dealing with continuous-control problems. These methods search in the policy space with an action-based or parameter-based exploration strategy, depending on whether they learn directly the parameters of a stochastic policy or those of a stochastic hyperpolicy. In this paper, we propose a general framework for addressing CRL problems via gradient-based primal-dual algorithms, relying on an alternate ascent/descent scheme with dual-variable regularization. We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions, improving and generalizing existing results. Then, we design C-PGAE and C-PGPE, the action-based and the parameter-based versions of C-PG, respectively, and we illustrate how they naturally extend to constraints defined in terms of risk measures over the costs, as it is often requested in safety-critical scenarios. Finally, we numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines, demonstrating their effectiveness. | [
"['Alessandro Montenegro' 'Marco Mussi' 'Matteo Papini'\n 'Alberto Maria Metelli']"
] |
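The alternating ascent/descent scheme with a dual variable described in the abstract above can be illustrated on a toy constrained problem; the objective, constraint, and step sizes below are invented for illustration and this is not the C-PG algorithm itself.

```python
import numpy as np

# Toy illustration of alternating primal ascent / dual ascent for a
# constrained objective: maximize R(theta) subject to C(theta) <= d using
# the Lagrangian L(theta, lam) = R(theta) - lam * (C(theta) - d).
target = np.array([2.0, 1.0])
d = 1.0                                              # cost threshold

R = lambda th: -np.sum((th - target) ** 2)           # "return"
C = lambda th: np.sum(th ** 2)                       # "expected cost"
grad_R = lambda th: -2.0 * (th - target)
grad_C = lambda th: 2.0 * th

theta, lam = np.zeros(2), 0.0
for _ in range(5000):
    theta = theta + 0.01 * (grad_R(theta) - lam * grad_C(theta))   # primal step
    lam = max(0.0, lam + 0.01 * (C(theta) - d))                    # dual step

print("theta:", theta, "cost:", C(theta), "lambda:", lam)
```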
null | null | 2407.10779 | null | null | http://arxiv.org/pdf/2407.10779v1 | 2024-07-15T14:57:40Z | 2024-07-15T14:57:40Z | The Missing Link: Allocation Performance in Causal Machine Learning | Automated decision-making (ADM) systems are being deployed across a diverse range of critical problem areas such as social welfare and healthcare. Recent work highlights the importance of causal ML models in ADM systems, but implementing them in complex social environments poses significant challenges. Research on how these challenges impact the performance in specific downstream decision-making tasks is limited. Addressing this gap, we make use of a comprehensive real-world dataset of jobseekers to illustrate how the performance of a single CATE model can vary significantly across different decision-making scenarios and highlight the differential influence of challenges such as distribution shifts on predictions and allocations. | [
"['Unai Fischer-Abaigar' 'Christoph Kern' 'Frauke Kreuter']"
] |
null | null | 2407.10780 | null | null | http://arxiv.org/pdf/2407.10780v1 | 2024-07-15T14:59:43Z | 2024-07-15T14:59:43Z | Correlations Are Ruining Your Gradient Descent | Herein the topics of (natural) gradient descent, data decorrelation, and approximate methods for backpropagation are brought into a dialogue. Natural gradient descent illuminates how gradient vectors, pointing at directions of steepest descent, can be improved by considering the local curvature of loss landscapes. We extend this perspective and show that to fully solve the problem illuminated by natural gradients in neural networks, one must recognise that correlations in the data at any linear transformation, including node responses at every layer of a neural network, cause a non-orthonormal relationship between the model's parameters. To solve this requires a solution to decorrelate inputs at each individual layer of a neural network. We describe a range of methods which have been proposed for decorrelation and whitening of node output, while providing a novel method specifically useful for distributed computing and computational neuroscience. Implementing decorrelation within multi-layer neural networks, we can show that not only is training via backpropagation sped up significantly but also existing approximations of backpropagation, which have failed catastrophically in the past, are made performant once more. This has the potential to provide a route forward for approximate gradient descent methods which have previously been discarded, training approaches for analogue and neuromorphic hardware, and potentially insights as to the efficacy and utility of decorrelation processes in the brain. | [
"['Nasir Ahmad']"
] |
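A minimal sketch of the per-layer decorrelation the abstract above argues for: ZCA whitening of a batch of node outputs so that their covariance becomes (approximately) the identity. This is a generic whitening routine, not the paper's distributed-computing method.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Decorrelate a batch of node outputs X with shape (batch, features).
    After the transform the feature covariance is approximately the identity,
    i.e. the inputs passed to the next linear transformation are decorrelated.
    """
    Xc = X - X.mean(axis=0, keepdims=True)
    C = Xc.T @ Xc / (len(Xc) - 1)                    # feature covariance
    vals, vecs = np.linalg.eigh(C)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8)) @ rng.normal(size=(8, 8))   # correlated features
Xw = zca_whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))              # ~ identity matrix
```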
null | null | 2407.10784 | null | null | http://arxiv.org/pdf/2407.10784v1 | 2024-07-15T15:02:53Z | 2024-07-15T15:02:53Z | AdapTable: Test-Time Adaptation for Tabular Data via Shift-Aware
Uncertainty Calibrator and Label Distribution Handler | In real-world applications, tabular data often suffer from distribution shifts due to their widespread and abundant nature, leading to erroneous predictions of pre-trained machine learning models. However, addressing such distribution shifts in the tabular domain has been relatively underexplored due to unique challenges such as varying attributes and dataset sizes, as well as the limited representation learning capabilities of deep learning models for tabular data. Particularly, with the recent promising paradigm of test-time adaptation (TTA), where we adapt the off-the-shelf model to the unlabeled target domain during the inference phase without accessing the source domain, we observe that directly adopting commonly used TTA methods from other domains often leads to model collapse. We systematically explore challenges in tabular data test-time adaptation, including skewed entropy, complex latent space decision boundaries, confidence calibration issues involving both overconfidence and under-confidence, and model bias towards source label distributions along with class imbalances. Based on these insights, we introduce AdapTable, a novel tabular test-time adaptation method that directly modifies output probabilities by estimating target label distributions and adjusting initial probabilities based on calibrated uncertainty. Extensive experiments on both natural distribution shifts and synthetic corruptions demonstrate the adaptation efficacy of the proposed method. | [
"['Changhun Kim' 'Taewon Kim' 'Seungyeon Woo' 'June Yong Yang' 'Eunho Yang']"
] |
null | null | 2407.10793 | null | null | http://arxiv.org/pdf/2407.10793v1 | 2024-07-15T15:11:16Z | 2024-07-15T15:11:16Z | GraphEval: A Knowledge-Graph Based LLM Hallucination Evaluation
Framework | Methods to evaluate Large Language Model (LLM) responses and detect inconsistencies, also known as hallucinations, with respect to the provided knowledge, are becoming increasingly important for LLM applications. Current metrics fall short in their ability to provide explainable decisions and to systematically check all pieces of information in the response, and they are often too computationally expensive to be used in practice. We present GraphEval: a hallucination evaluation framework based on representing information in Knowledge Graph (KG) structures. Our method identifies the specific triples in the KG that are prone to hallucinations and hence provides more insight into where in the response a hallucination has occurred, if at all, than previous methods. Furthermore, using our approach in conjunction with state-of-the-art natural language inference (NLI) models leads to an improvement in balanced accuracy on various hallucination benchmarks, compared to using the raw NLI models. Lastly, we explore the use of GraphEval for hallucination correction by leveraging the structure of the KG, a method we name GraphCorrect, and demonstrate that the majority of hallucinations can indeed be rectified. | [
"['Hannah Sansford' 'Nicholas Richardson' 'Hermina Petric Maretic'\n 'Juba Nait Saada']"
] |
null | null | 2407.10802 | null | null | http://arxiv.org/pdf/2407.10802v1 | 2024-07-15T15:18:28Z | 2024-07-15T15:18:28Z | Motion-prior Contrast Maximization for Dense Continuous-Time Motion
Estimation | Current optical flow and point-tracking methods rely heavily on synthetic datasets. Event cameras are novel vision sensors with advantages in challenging visual conditions, but state-of-the-art frame-based methods cannot be easily adapted to event data due to the limitations of current event simulators. We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories and propose an efficient solution to solve the high-dimensional assignment problem between non-linear trajectories and events. Their effectiveness is demonstrated in two scenarios: In dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model on the real-world dataset EVIMO2 by 29%. In optical flow estimation, our method elevates a simple UNet to achieve state-of-the-art performance among self-supervised methods on the DSEC optical flow benchmark. Our code is available at https://github.com/tub-rip/MotionPriorCMax. | [
"['Friedhelm Hamann' 'Ziyun Wang' 'Ioannis Asmanis' 'Kenneth Chaney'\n 'Guillermo Gallego' 'Kostas Daniilidis']"
] |
null | null | 2407.10803 | null | null | http://arxiv.org/pdf/2407.10803v1 | 2024-07-15T15:18:57Z | 2024-07-15T15:18:57Z | DINO Pre-training for Vision-based End-to-end Autonomous Driving | In this article, we focus on the pre-training of visual autonomous driving agents in the context of imitation learning. Current methods often rely on a classification-based pre-training, which we hypothesise holds back the extension of implicit image understanding capabilities. We propose pre-training the visual encoder of a driving agent using the self-distillation with no labels (DINO) method, which relies on a self-supervised learning paradigm. Our experiments in the CARLA environment in accordance with the Leaderboard benchmark reveal that the proposed pre-training is more efficient than classification-based pre-training, and is on par with the recently proposed pre-training based on visual place recognition (VPRPre). | [
"['Shubham Juneja' 'Povilas Daniušis' 'Virginijus Marcinkevičius']"
] |
null | null | 2407.10807 | null | null | http://arxiv.org/pdf/2407.10807v1 | 2024-07-15T15:23:21Z | 2024-07-15T15:23:21Z | Employing Sentence Space Embedding for Classification of Data Stream
from Fake News Domain | Tabular data is considered the last unconquered castle of deep learning, yet the task of data stream classification is stated to be an equally important and demanding research area. Due to the temporal constraints, it is assumed that deep learning methods are not the optimal solution for application in this field. However, excluding the entire -- and prevalent -- group of methods seems rather rash given the progress that has been made in recent years in its development. For this reason, the following paper is the first to present an approach to natural language data stream classification using the sentence space method, which allows for encoding text into the form of a discrete digital signal. This allows the use of convolutional deep networks dedicated to image classification to solve the task of recognizing fake news based on text data. Based on the real-life Fakeddit dataset, the proposed approach was compared with state-of-the-art algorithms for data stream classification based on generalization ability and time complexity. | [
"['Paweł Zyblewski' 'Jakub Klikowski' 'Weronika Borek-Marciniec'\n 'Paweł Ksieniewicz']"
] |
null | null | 2407.10810 | null | null | http://arxiv.org/pdf/2407.10810v1 | 2024-07-15T15:25:45Z | 2024-07-15T15:25:45Z | FabGPT: An Efficient Large Multimodal Model for Complex Wafer Defect
Knowledge Queries | Intelligence is key to advancing integrated circuit (IC) fabrication. Recent breakthroughs in Large Multimodal Models (LMMs) have unlocked unparalleled abilities in understanding images and text, fostering intelligent fabrication. Leveraging the power of LMMs, we introduce FabGPT, a customized IC fabrication large multimodal model for wafer defect knowledge query. FabGPT manifests expertise in conducting defect detection in Scanning Electron Microscope (SEM) images, performing root cause analysis, and providing expert question-answering (Q&A) on fabrication processes. FabGPT matches enhanced multimodal features to automatically detect minute defects under complex wafer backgrounds and reduce the subjectivity of manual threshold settings. Besides, the proposed modulation module and interactive corpus training strategy embed wafer defect knowledge into the pre-trained model, effectively balancing Q&A queries related to defect knowledge and original knowledge and mitigating the modality bias issues. Experiments on in-house fab data (SEM-WaD) show that our FabGPT achieves significant performance improvement in wafer defect detection and knowledge querying. | [
"['Yuqi Jiang' 'Xudong Lu' 'Qian Jin' 'Qi Sun' 'Hanming Wu' 'Cheng Zhuo']"
] |
null | null | 2407.10811 | null | null | http://arxiv.org/pdf/2407.10811v1 | 2024-07-15T15:26:10Z | 2024-07-15T15:26:10Z | GuideLight: "Industrial Solution" Guidance for More Practical Traffic
Signal Control Agents | Currently, traffic signal control (TSC) methods based on reinforcement learning (RL) have proven superior to traditional methods. However, most RL methods face difficulties when applied in the real world due to three factors: input, output, and the cycle-flow relation. The industry's observable input is much more limited than that of simulation-based RL methods. For real-world solutions, only flow can be reliably collected, whereas common RL methods need more. For the output action, most RL methods focus on acyclic control, which real-world signal controllers do not support. Most importantly, industry standards require a consistent cycle-flow relationship: non-decreasing and different response strategies for low, medium, and high-level flows, which is ignored by the RL methods. To narrow the gap between RL methods and industry standards, we innovatively propose to use industry solutions to guide the RL agent. Specifically, we design behavior cloning and curriculum learning to guide the agent to mimic and meet industry requirements and, at the same time, leverage the power of exploration and exploitation in RL for better performance. We theoretically prove that such guidance can largely decrease the sample complexity to polynomials in the horizon when searching for an optimal policy. Our rigorous experiments show that our method achieves a good cycle-flow relation and superior performance. | [
"['Haoyuan Jiang' 'Xuantang Xiong' 'Ziyue Li' 'Hangyu Mao' 'Guanghu Sui'\n 'Jingqing Ruan' 'Yuheng Cheng' 'Hua Wei' 'Wolfgang Ketter' 'Rui Zhao']"
] |
null | null | 2407.10817 | null | null | http://arxiv.org/pdf/2407.10817v1 | 2024-07-15T15:33:45Z | 2024-07-15T15:33:45Z | Foundational Autoraters: Taming Large Language Models for Better
Automatic Evaluation | As large language models (LLMs) advance, it becomes more challenging to reliably evaluate their output due to the high costs of human evaluation. To make progress towards better LLM autoraters, we introduce FLAMe, a family of Foundational Large Autorater Models. FLAMe is trained on our large and diverse collection of 100+ quality assessment tasks comprising 5M+ human judgments, curated and standardized using publicly released human evaluations from previous research. FLAMe significantly improves generalization to a wide variety of held-out tasks, outperforming LLMs trained on proprietary data like GPT-4 and Claude-3 on many tasks. We show that FLAMe can also serve as a powerful starting point for further downstream fine-tuning, using reward modeling evaluation as a case study (FLAMe-RM). Notably, on RewardBench, our FLAMe-RM-24B model (with an accuracy of 87.8%) is the top-performing generative model trained exclusively on permissively licensed data, outperforming both GPT-4-0125 (85.9%) and GPT-4o (84.7%). Additionally, we explore a more computationally efficient approach using a novel tail-patch fine-tuning strategy to optimize our FLAMe multitask mixture for reward modeling evaluation (FLAMe-Opt-RM), offering competitive RewardBench performance while requiring approximately 25x fewer training datapoints. Overall, our FLAMe variants outperform all popular proprietary LLM-as-a-Judge models we consider across 8 out of 12 autorater evaluation benchmarks, encompassing 53 quality assessment tasks, including RewardBench and LLM-AggreFact. Finally, our analysis reveals that FLAMe is significantly less biased than these LLM-as-a-Judge models on the CoBBLEr autorater bias benchmark, while effectively identifying high-quality responses for code generation. | [
"['Tu Vu' 'Kalpesh Krishna' 'Salaheddin Alzubi' 'Chris Tar'\n 'Manaal Faruqui' 'Yun-Hsuan Sung']"
] |
null | null | 2407.10825 | null | null | http://arxiv.org/pdf/2407.10825v1 | 2024-07-15T15:38:21Z | 2024-07-15T15:38:21Z | Wicked Oddities: Selectively Poisoning for Effective Clean-Label
Backdoor Attacks | Deep neural networks are vulnerable to backdoor attacks, a type of adversarial attack that poisons the training data to manipulate the behavior of models trained on such data. Clean-label attacks are a more stealthy form of backdoor attacks that can perform the attack without changing the labels of poisoned data. Early works on clean-label attacks added triggers to a random subset of the training set, ignoring the fact that samples contribute unequally to the attack's success. This results in high poisoning rates and low attack success rates. To alleviate the problem, several supervised learning-based sample selection strategies have been proposed. However, these methods assume access to the entire labeled training set and require training, which is expensive and may not always be practical. This work studies a new and more practical (but also more challenging) threat model where the attacker only provides data for the target class (e.g., in face recognition systems) and has no knowledge of the victim model or any other classes in the training set. We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate in this setting. Our threat model poses a serious threat in training machine learning models with third-party datasets, since the attack can be performed effectively with limited information. Experiments on benchmark datasets illustrate the effectiveness of our strategies in improving clean-label backdoor attacks. | [
"['Quang H. Nguyen' 'Nguyen Ngoc-Hieu' 'The-Anh Ta' 'Thanh Nguyen-Tang'\n 'Hoang Thanh-Tung' 'Khoa D. Doan']"
] |
null | null | 2407.10827 | null | null | http://arxiv.org/pdf/2407.10827v1 | 2024-07-15T15:38:51Z | 2024-07-15T15:38:51Z | LLM Circuit Analyses Are Consistent Across Training and Scale | Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs' internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question of whether their results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs, in models ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm that they implement remains. Surprisingly, both these algorithms and the types of components involved therein can replicate across model scale. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional pre-training and over model scale. | [
"['Curt Tigges' 'Michael Hanna' 'Qinan Yu' 'Stella Biderman']"
] |
null | null | 2407.10834 | null | null | http://arxiv.org/pdf/2407.10834v1 | 2024-07-15T15:45:07Z | 2024-07-15T15:45:07Z | MetaLLM: A High-performant and Cost-efficient Dynamic Framework for
Wrapping LLMs | The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas. These LLMs come with different abilities and costs in terms of computation or pricing. Since the demand for each query can vary, e.g., because of the queried domain or its complexity, defaulting to one LLM in an application is not usually the best choice, whether it is the biggest, priciest, or even the one with the best average test performance. Consequently, picking the right LLM that is both accurate and cost-effective for an application remains a challenge. In this paper, we introduce MetaLLM, a framework that dynamically and intelligently routes each query to the optimal LLM (among several available LLMs) for classification tasks, achieving significantly improved accuracy and cost-effectiveness. By framing the selection problem as a multi-armed bandit, MetaLLM balances prediction accuracy and cost efficiency under uncertainty. Our experiments, conducted on popular LLM platforms such as OpenAI's GPT models, Amazon's Titan, Anthropic's Claude, and Meta's LLaMa, showcase MetaLLM's efficacy in real-world scenarios, laying the groundwork for future extensions beyond classification tasks. | [
"['Quang H. Nguyen' 'Duy C. Hoang' 'Juliette Decugis' 'Saurav Manchanda'\n 'Nitesh V. Chawla' 'Khoa D. Doan']"
] |
null | null | 2407.10835 | null | null | http://arxiv.org/pdf/2407.10835v1 | 2024-07-15T15:45:29Z | 2024-07-15T15:45:29Z | Exploration in Knowledge Transfer Utilizing Reinforcement Learning | The contribution focuses on the problem of exploration within the task of knowledge transfer. Knowledge transfer refers to the useful application of the knowledge gained while learning the source task in the target task. The intended benefit of knowledge transfer is to speed up the learning process of the target task. The article aims to compare several exploration methods used within a deep transfer learning algorithm, particularly Deep Target Transfer $Q$-learning. The methods used are $\epsilon$-greedy, Boltzmann, and upper confidence bound exploration. The aforementioned transfer learning algorithms and exploration methods were tested on the virtual drone problem. The results have shown that the upper confidence bound algorithm performs the best out of these options. Its suitability for other applications remains to be verified. | [
"['Adam Jedlička' 'Tatiana Valentine Guy']"
] |
null | null | 2407.10836 | null | null | http://arxiv.org/pdf/2407.10836v1 | 2024-07-15T15:47:24Z | 2024-07-15T15:47:24Z | Data-Guided Physics-Informed Neural Networks for Solving Inverse
Problems in Partial Differential Equations | Physics-informed neural networks (PINNs) represent a significant advancement in scientific machine learning by integrating fundamental physical laws into their architecture through loss functions. PINNs have been successfully applied to solve various forward and inverse problems in partial differential equations (PDEs). However, a notable challenge can emerge during the early training stages when solving inverse problems. Specifically, data losses remain high while PDE residual losses are minimized rapidly, thereby exacerbating the imbalance between loss terms and impeding the overall efficiency of PINNs. To address this challenge, this study proposes a novel framework termed data-guided physics-informed neural networks (DG-PINNs). The DG-PINNs framework is structured into two distinct phases: a pre-training phase and a fine-tuning phase. In the pre-training phase, a loss function with only the data loss is minimized in a neural network. In the fine-tuning phase, a composite loss function, which consists of the data loss, PDE residual loss, and, if available, initial and boundary condition losses, is minimized in the same neural network. Notably, the pre-training phase ensures that the data loss is already at a low value before the fine-tuning phase commences. This approach enables the fine-tuning phase to converge to a minimal composite loss function with fewer iterations compared to existing PINNs. To validate the effectiveness, noise-robustness, and efficiency of DG-PINNs, extensive numerical investigations are conducted on inverse problems related to several classical PDEs, including the heat equation, wave equation, Euler--Bernoulli beam equation, and Navier--Stokes equation. The numerical results demonstrate that DG-PINNs can accurately solve these inverse problems and exhibit robustness against noise in training data. | [
"['Wei Zhou' 'Y. F. Xu']"
] |
null | null | 2407.10839 | null | null | http://arxiv.org/pdf/2407.10839v1 | 2024-07-15T15:53:13Z | 2024-07-15T15:53:13Z | Offline Reinforcement Learning with Imputed Rewards | Offline Reinforcement Learning (ORL) offers a robust solution to training agents in applications where interactions with the environment must be strictly limited due to cost, safety, or lack of accurate simulation environments. Despite its potential to facilitate deployment of artificial agents in the real world, Offline Reinforcement Learning typically requires a large number of demonstrations annotated with ground-truth rewards. Consequently, state-of-the-art ORL algorithms can be difficult or impossible to apply in data-scarce scenarios. In this paper we propose a simple but effective Reward Model that can estimate the reward signal from a very limited sample of environment transitions annotated with rewards. Once the reward signal is modeled, we use the Reward Model to impute rewards for a large sample of reward-free transitions, thus enabling the application of ORL techniques. We demonstrate the potential of our approach on several D4RL continuous locomotion tasks. Our results show that, using only 1% of reward-labeled transitions from the original datasets, our learned reward model is able to impute rewards for the remaining 99% of the transitions, from which performant agents can be learned using Offline Reinforcement Learning. | [
"['Carlo Romeo' 'Andrew D. Bagdanov']"
] |
null | null | 2407.10844 | null | null | http://arxiv.org/pdf/2407.10844v1 | 2024-07-15T15:59:39Z | 2024-07-15T15:59:39Z | Rotationally Invariant Latent Distances for Uncertainty Estimation of
Relaxed Energy Predictions by Graph Neural Network Potentials | Graph neural networks (GNNs) have been shown to be astonishingly capable models for molecular property prediction, particularly as surrogates for expensive density functional theory calculations of relaxed energy for novel material discovery. However, one limitation of GNNs in this context is the lack of useful uncertainty prediction methods, as this is critical to the material discovery pipeline. In this work, we show that uncertainty quantification for relaxed energy calculations is more complex than uncertainty quantification for other kinds of molecular property prediction, due to the effect that structure optimizations have on the error distribution. We propose that distribution-free techniques are more useful tools for assessing calibration, recalibrating, and developing uncertainty prediction methods for GNNs performing relaxed energy calculations. We also develop a relaxed energy task for evaluating uncertainty methods for equivariant GNNs, based on distribution-free recalibration and using the Open Catalyst Project dataset. We benchmark a set of popular uncertainty prediction methods on this task, and show that latent distance methods, with our novel improvements, are the most well-calibrated and economical approach for relaxed energy calculations. Finally, we demonstrate that our latent space distance method produces results which align with our expectations on a clustering example, and on specific equation of state and adsorbate coverage examples from outside the training dataset. | [
"['Joseph Musielewicz' 'Janice Lan' 'Matt Uyttendaele' 'John R. Kitchin']"
] |
null | null | 2407.10854 | null | null | http://arxiv.org/pdf/2407.10854v1 | 2024-07-15T16:06:20Z | 2024-07-15T16:06:20Z | Principal Component Flow Map Learning of PDEs from Incomplete, Limited,
and Noisy Data | We present a computational technique for modeling the evolution of dynamical systems in a reduced basis, with a focus on the challenging problem of modeling partially-observed partial differential equations (PDEs) on high-dimensional non-uniform grids. We address limitations of previous work on data-driven flow map learning in the sense that we focus on noisy and limited data to move toward data collection scenarios in real-world applications. Leveraging recent work on modeling PDEs in modal and nodal spaces, we present a neural network structure that is suitable for PDE modeling with noisy and limited data available only on a subset of the state variables or computational domain. In particular, spatial grid-point measurements are reduced using a learned linear transformation, after which the dynamics are learned in this reduced basis before being transformed back out to the nodal space. This approach yields a drastically reduced parameterization of the neural network compared with previous flow map models for nodal space learning. This primarily allows for smaller training data sets, but also enables reduced training times. | [
"['Victor Churchill']"
] |
null | null | 2407.10867 | null | null | http://arxiv.org/pdf/2407.10867v1 | 2024-07-15T16:12:51Z | 2024-07-15T16:12:51Z | Provable Robustness of (Graph) Neural Networks Against Data Poisoning
and Backdoor Attacks | Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data, as well as backdoor attacks that additionally manipulate the test data. These vulnerabilities have led to interest in certifying (i.e., proving) that such changes up to a certain magnitude do not affect test predictions. We, for the first time, certify Graph Neural Networks (GNNs) against poisoning and backdoor attacks targeting the node features of a given graph. Our certificates are white-box and based upon $(i)$ the neural tangent kernel, which characterizes the training dynamics of sufficiently wide networks; and $(ii)$ a novel reformulation of the bilevel optimization problem describing poisoning as a mixed-integer linear program. Consequently, we leverage our framework to provide fundamental insights into the role of graph structure and its connectivity on the worst-case robustness behavior of convolution-based and PageRank-based GNNs. We note that our framework is more general and constitutes the first approach to derive white-box poisoning certificates for NNs, which can be of independent interest beyond graph-related tasks. | [
"['Lukas Gosch' 'Mahalakshmi Sabanayagam' 'Debarghya Ghoshdastidar'\n 'Stephan Günnemann']"
] |
null | null | 2407.10870 | null | null | http://arxiv.org/pdf/2407.10870v1 | 2024-07-15T16:18:06Z | 2024-07-15T16:18:06Z | GPT Sonography: Hand Gesture Decoding from Forearm Ultrasound Images via
VLM | Large vision-language models (LVLMs), such as the Generative Pre-trained Transformer 4-omni (GPT-4o), are emerging multi-modal foundation models which have great potential as powerful artificial-intelligence (AI) assistance tools for a myriad of applications, including healthcare, industrial, and academic sectors. Although such foundation models perform well in a wide range of general tasks, their capability without fine-tuning is often limited in specialized tasks. However, full fine-tuning of large foundation models is challenging due to enormous computation/memory/dataset requirements. We show that GPT-4o can decode hand gestures from forearm ultrasound data even with no fine-tuning, and improves with few-shot, in-context learning. | [
"['Keshav Bimbraw' 'Ye Wang' 'Jing Liu' 'Toshiaki Koike-Akino']"
] |
null | null | 2407.10874 | null | null | http://arxiv.org/pdf/2407.10874v1 | 2024-07-15T16:23:53Z | 2024-07-15T16:23:53Z | Random Channel Ablation for Robust Hand Gesture Classification with
Multimodal Biosignals | Biosignal-based hand gesture classification is an important component of effective human-machine interaction. For multimodal biosignal sensing, the modalities often face data loss due to missing channels in the data which can adversely affect the gesture classification performance. To make the classifiers robust to missing channels in the data, this paper proposes using Random Channel Ablation (RChA) during the training process. Ultrasound and force myography (FMG) data were acquired from the forearm for 12 hand gestures over 2 subjects. The resulting multimodal data had 16 total channels, 8 for each modality. The proposed method was applied to convolutional neural network architecture, and compared with baseline, imputation, and oracle methods. Using 5-fold cross-validation for the two subjects, on average, 12.2% and 24.5% improvement was observed for gesture classification with up to 4 and 8 missing channels respectively compared to the baseline. Notably, the proposed method is also robust to an increase in the number of missing channels compared to other methods. These results show the efficacy of using random channel ablation to improve classifier robustness for multimodal and multi-channel biosignal-based hand gesture classification. | [
"['Keshav Bimbraw' 'Jing Liu' 'Ye Wang' 'Toshiaki Koike-Akino']"
] |
null | null | 2407.10878 | null | null | http://arxiv.org/pdf/2407.10878v1 | 2024-07-15T16:28:26Z | 2024-07-15T16:28:26Z | Deep Causal Learning to Explain and Quantify The Geo-Tension's Impact on
Natural Gas Market | Natural gas demand is a crucial factor for predicting natural gas prices and thus has a direct influence on the power system. However, existing methods face challenges in assessing the impact of shocks, such as the outbreak of the Russian-Ukrainian war. In this context, we apply deep neural network-based Granger causality to identify important drivers of natural gas demand. Furthermore, the resulting dependencies are used to construct a counterfactual case without the outbreak of the war, providing a quantifiable estimate of the overall effect of the shock on various German energy sectors. The code and dataset are available at https://github.com/bonaldli/CausalEnergy. | [
"['Philipp Kai Peter' 'Yulin Li' 'Ziyue Li' 'Wolfgang Ketter']"
] |
null | null | 2407.10886 | null | null | http://arxiv.org/pdf/2407.10886v1 | 2024-07-15T16:37:55Z | 2024-07-15T16:37:55Z | SLIP: Securing LLMs IP Using Weights Decomposition | Large language models (LLMs) have recently seen widespread adoption, in both academia and industry. As these models grow, they become valuable intellectual property (IP), reflecting enormous investments by their owners. Moreover, the high cost of cloud-based deployment has driven interest towards deployment to edge devices, yet this risks exposing valuable parameters to theft and unauthorized use. Current methods to protect models' IP on the edge have limitations in terms of practicality, loss in accuracy, or suitability to requirements. In this paper, we introduce a novel hybrid inference algorithm, named SLIP, designed to protect edge-deployed models from theft. SLIP is the first hybrid protocol that is both practical for real-world applications and provably secure, while having zero accuracy degradation and minimal impact on latency. It involves partitioning the model between two computing resources, one secure but expensive, and another cost-effective but vulnerable. This is achieved through matrix decomposition, ensuring that the secure resource retains a maximally sensitive portion of the model's IP while performing a minimal amount of computations, and vice versa for the vulnerable resource. Importantly, the protocol includes security guarantees that prevent attackers from exploiting the partition to infer the secured information. Finally, we present experimental results that show the robustness and effectiveness of our method, positioning it as a compelling solution for protecting LLMs. | [
"['Yehonathan Refael' 'Adam Hakim' 'Lev Greenberg' 'Tal Aviv' 'Satya Lokam'\n 'Ben Fishman' 'Shachar Seidman']"
] |
null | null | 2407.10897 | null | null | http://arxiv.org/pdf/2407.10897v1 | 2024-07-15T16:46:14Z | 2024-07-15T16:46:14Z | Optical Diffusion Models for Image Generation | Diffusion models generate new samples by progressively decreasing the noise from the initially provided random distribution. This inference procedure generally utilizes a trained neural network numerous times to obtain the final output, creating significant latency and energy consumption on digital electronic hardware such as GPUs. In this study, we demonstrate that the propagation of a light beam through a semi-transparent medium can be programmed to implement a denoising diffusion model on image samples. This framework projects noisy image patterns through passive diffractive optical layers, which collectively only transmit the predicted noise term in the image. The transparent optical layers, trained with an online approach that backpropagates the error to the analytical model of the system, are passive and kept the same across different denoising steps. Hence this method enables high-speed image generation with minimal power consumption, benefiting from the bandwidth and energy efficiency of optical information processing. | [
"['Ilker Oguz' 'Niyazi Ulas Dinc' 'Mustafa Yildirim' 'Junjie Ke'\n 'Innfarn Yoo' 'Qifei Wang' 'Feng Yang' 'Christophe Moser'\n 'Demetri Psaltis']"
] |
null | null | 2407.10910 | null | null | http://arxiv.org/pdf/2407.10910v1 | 2024-07-15T17:10:31Z | 2024-07-15T17:10:31Z | DataDream: Few-shot Guided Dataset Generation | While text-to-image diffusion models have been shown to achieve state-of-the-art results in image synthesis, they have yet to prove their effectiveness in downstream applications. Previous work has proposed to generate data for image classifier training given limited real data access. However, these methods struggle to generate in-distribution images or depict fine-grained features, thereby hindering the generalization of classification models trained on synthetic datasets. We propose DataDream, a framework for synthesizing classification datasets that more faithfully represents the real data distribution when guided by few-shot examples of the target classes. DataDream fine-tunes LoRA weights for the image generation model on the few real images before generating the training data using the adapted model. We then fine-tune LoRA weights for CLIP using the synthetic data to improve downstream image classification over previous approaches on a large variety of datasets. We demonstrate the efficacy of DataDream through extensive experiments, surpassing state-of-the-art classification accuracy with few-shot data across 7 out of 10 datasets, while being competitive on the other 3. Additionally, we provide insights into the impact of various factors, such as the number of real-shot and generated images as well as the fine-tuning compute on model performance. The code is available at https://github.com/ExplainableML/DataDream. | [
"['Jae Myung Kim' 'Jessica Bader' 'Stephan Alaniz' 'Cordelia Schmid'\n 'Zeynep Akata']"
] |
null | null | 2407.10916 | null | null | http://arxiv.org/pdf/2407.10916v1 | 2024-07-15T17:18:42Z | 2024-07-15T17:18:42Z | When Heterophily Meets Heterogeneity: New Graph Benchmarks and Effective
Methods | Many real-world graphs frequently present challenges for graph learning due to the presence of both heterophily and heterogeneity. However, existing benchmarks for graph learning often focus on heterogeneous graphs with homophily or homogeneous graphs with heterophily, leaving a gap in understanding how methods perform on graphs that are both heterogeneous and heterophilic. To bridge this gap, we introduce H2GB, a novel graph benchmark that brings together the complexities of both the heterophily and heterogeneity properties of graphs. Our benchmark encompasses 9 diverse real-world datasets across 5 domains, 28 baseline model implementations, and 26 benchmark results. In addition, we present a modular graph transformer framework UnifiedGT and a new model variant, H2G-former, that excels at this challenging benchmark. By integrating masked label embeddings, cross-type heterogeneous attention, and type-specific FFNs, H2G-former effectively tackles graph heterophily and heterogeneity. Extensive experiments across 26 baselines on H2GB reveal inadequacies of current models on heterogeneous heterophilic graph learning, and demonstrate the superiority of our H2G-former over existing solutions. Both the benchmark and the framework are available on GitHub (https://github.com/junhongmit/H2GB) and PyPI (https://pypi.org/project/H2GB), and documentation can be found at https://junhongmit.github.io/H2GB/. | [
"['Junhong Lin' 'Xiaojie Guo' 'Shuaicheng Zhang' 'Dawei Zhou' 'Yada Zhu'\n 'Julian Shun']"
] |