categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2404.05440 | null | null | http://arxiv.org/pdf/2404.05440v1 | 2024-04-08T12:19:04Z | 2024-04-08T12:19:04Z | Tree Search-Based Policy Optimization under Stochastic Execution Delay | The standard formulation of Markov decision processes (MDPs) assumes that the agent's decisions are executed immediately. However, in numerous realistic applications such as robotics or healthcare, actions are performed with a delay whose value can even be stochastic. In this work, we introduce stochastic delayed execution MDPs, a new formalism addressing random delays without resorting to state augmentation. We show that given observed delay values, it is sufficient to perform a policy search in the class of Markov policies in order to reach optimal performance, thus extending the deterministic fixed delay case. Armed with this insight, we devise DEZ, a model-based algorithm that optimizes over the class of Markov policies. DEZ leverages Monte-Carlo tree search similar to its non-delayed variant EfficientZero to accurately infer future states from the action queue. Thus, it handles delayed execution while preserving the sample efficiency of EfficientZero. Through a series of experiments on the Atari suite, we demonstrate that although the previous baseline outperforms the naive method in scenarios with constant delay, it underperforms in the face of stochastic delays. In contrast, our approach significantly outperforms the baselines, for both constant and stochastic delays. The code is available at http://github.com/davidva1/Delayed-EZ . | [
"['David Valensi' 'Esther Derman' 'Shie Mannor' 'Gal Dalal']"
]
|
null | null | 2404.05444 | null | null | http://arxiv.org/pdf/2404.05444v1 | 2024-04-08T12:26:06Z | 2024-04-08T12:26:06Z | The Open Autonomy Safety Case Framework | A system safety case is a compelling, comprehensible, and valid argument about the satisfaction of the safety goals of a given system operating in a given environment supported by convincing evidence. Since the publication of UL 4600 in 2020, safety cases have become a best practice for measuring, managing, and communicating the safety of autonomous vehicles (AVs). Although UL 4600 provides guidance on how to build the safety case for an AV, the complexity of AVs and their operating environments, the novelty of the used technology, the need for complying with various regulations and technical standards, and for addressing cybersecurity concerns and ethical considerations make the development of safety cases for AVs challenging. To this end, safety case frameworks have been proposed that bring strategies, argument templates, and other guidance together to support the development of a safety case. This paper introduces the Open Autonomy Safety Case Framework, developed over years of work with the autonomous vehicle industry, as a roadmap for how AVs can be deployed safely and responsibly. | [
"['Michael Wagner' 'Carmen Carlan']"
]
|
null | null | 2404.05445 | null | null | http://arxiv.org/pdf/2404.05445v1 | 2024-04-08T12:27:00Z | 2024-04-08T12:27:00Z | Unsupervised Training of Convex Regularizers using Maximum Likelihood
Estimation | Unsupervised learning is a training approach in the situation where ground truth data is unavailable, such as inverse imaging problems. We present an unsupervised Bayesian training approach to learning convex neural network regularizers using a fixed noisy dataset, based on a dual Markov chain estimation method. Compared to classical supervised adversarial regularization methods, where there is access to both clean images as well as unlimited noisy copies, we demonstrate close performance on natural image Gaussian deconvolution and Poisson denoising tasks. | [
"['Hong Ye Tan' 'Ziruo Cai' 'Marcelo Pereyra' 'Subhadip Mukherjee'\n 'Junqi Tang' 'Carola-Bibiane Schönlieb']"
]
|
null | null | 2404.05465 | null | null | http://arxiv.org/pdf/2404.05465v1 | 2024-04-08T12:43:32Z | 2024-04-08T12:43:32Z | HAMMR: HierArchical MultiModal React agents for generic VQA | Combining Large Language Models (LLMs) with external specialized tools (LLMs+tools) is a recent paradigm to solve multimodal tasks such as Visual Question Answering (VQA). While this approach was demonstrated to work well when optimized and evaluated for each individual benchmark, in practice it is crucial for the next generation of real-world AI systems to handle a broad range of multimodal problems. Therefore we pose the VQA problem from a unified perspective and evaluate a single system on a varied suite of VQA tasks including counting, spatial reasoning, OCR-based reasoning, visual pointing, external knowledge, and more. In this setting, we demonstrate that naively applying the LLM+tools approach using the combined set of all tools leads to poor results. This motivates us to introduce HAMMR: HierArchical MultiModal React. We start from a multimodal ReAct-based system and make it hierarchical by enabling our HAMMR agents to call upon other specialized agents. This enhances the compositionality of the LLM+tools approach, which we show to be critical for obtaining high accuracy on generic VQA. Concretely, on our generic VQA suite, HAMMR outperforms the naive LLM+tools approach by 19.5%. Additionally, HAMMR achieves state-of-the-art results on this task, outperforming the generic standalone PaLI-X VQA model by 5.0%. | [
"['Lluis Castrejon' 'Thomas Mensink' 'Howard Zhou' 'Vittorio Ferrari'\n 'Andre Araujo' 'Jasper Uijlings']"
]
|
null | null | 2404.05468 | null | null | http://arxiv.org/pdf/2404.05468v5 | 2024-05-28T16:03:21Z | 2024-04-08T12:46:39Z | Mind-to-Image: Projecting Visual Mental Imagination of the Brain from
fMRI | The reconstruction of images observed by subjects from fMRI data collected during visual stimuli has made strong progress in the past decade, thanks to the availability of extensive fMRI datasets and advancements in generative models for image generation. However, the application of visual reconstruction has remained limited. Reconstructing visual imagination presents a greater challenge, with potentially revolutionary applications ranging from aiding individuals with disabilities to verifying witness accounts in court. The primary hurdles in this field are the absence of data collection protocols for visual imagery and the lack of datasets on the subject. Traditionally, fMRI-to-image relies on data collected from subjects exposed to visual stimuli, which poses issues for generating visual imagery, given the difference in brain activity between visual stimulation and visual imagery. For the first time, we have compiled a substantial dataset (around 6h of scans) on visual imagery along with a proposed data collection protocol. We then train a modified version of an fMRI-to-image model and demonstrate the feasibility of reconstructing images from two modes of imagination: from memory and from pure imagination. The resulting pipeline, which we call Mind-to-Image, marks a step towards creating a technology that allows direct reconstruction of visual imagery. | [
"['Hugo Caselles-Dupré' 'Charles Mellerio' 'Paul Hérent'\n 'Alizée Lopez-Persem' 'Benoit Béranger' 'Mathieu Soularue'\n 'Pierre Fautrel' 'Gauthier Vernier' 'Matthieu Cord']"
]
|
null | null | 2404.05482 | null | null | http://arxiv.org/pdf/2404.05482v1 | 2024-04-08T13:01:25Z | 2024-04-08T13:01:25Z | WaveCatBoost for Probabilistic Forecasting of Regional Air Quality Data | Accurate and reliable air quality forecasting is essential for protecting public health, sustainable development, pollution control, and enhanced urban planning. This letter presents a novel WaveCatBoost architecture designed to forecast the real-time concentrations of air pollutants by combining the maximal overlapping discrete wavelet transform (MODWT) with the CatBoost model. This hybrid approach efficiently transforms time series into high-frequency and low-frequency components, thereby extracting signal from noise and improving prediction accuracy and robustness. Evaluation of two distinct regional datasets, from the Central Air Pollution Control Board (CPCB) sensor network and a low-cost air quality sensor system (LAQS), underscores the superior performance of our proposed methodology in real-time forecasting compared to the state-of-the-art statistical and deep learning architectures. Moreover, we employ a conformal prediction strategy to provide probabilistic bands with our forecasts. | [
"['Jintu Borah' 'Tanujit Chakraborty' 'Md. Shahrul Md. Nadzir'\n 'Mylene G. Cayetano' 'Shubhankar Majumdar']"
]
|
null | null | 2404.05484 | null | null | http://arxiv.org/pdf/2404.05484v2 | 2024-05-17T17:18:28Z | 2024-04-08T13:06:23Z | On Computational Modeling of Sleep-Wake Cycle | Why do mammals need to sleep? Neuroscience treats sleep and wake as default and perturbation modes of the brain. It is hypothesized that the brain self-organizes neural activities without environmental inputs. This paper presents a new computational model of the sleep-wake cycle (SWC) for learning and memory. During the sleep mode, the memory consolidation by the thalamocortical system is abstracted by a disentangling operator that maps context-dependent representations (CDR) to context-independent representations (CIR) for generalization. Such a disentangling operator can be mathematically formalized by an integral transform that integrates the context variable from CDR. During the wake mode, the memory formation by the hippocampal-neocortical system is abstracted by an entangling operator from CIR to CDR where the context is introduced by physical motion. When designed as inductive bias, entangled CDR linearizes the problem of unsupervised learning for sensory memory by direct-fit. The concatenation of disentangling and entangling operators forms a disentangling-entangling cycle (DEC) as the building block for sensorimotor learning. We also discuss the relationship of DEC and SWC to the perception-action cycle (PAC) for internal model learning and perceptual control theory for the ecological origin of natural languages. | [
"['Xin Li']"
]
|
null | null | 2404.05501 | null | null | http://arxiv.org/pdf/2404.05501v1 | 2024-04-08T13:25:02Z | 2024-04-08T13:25:02Z | Data Science In Olfaction | Advances in neural sensing technology are making it possible to observe the olfactory process in great detail. In this paper, we conceptualize smell from a Data Science and AI perspective, that relates the properties of odorants to how they are sensed and analyzed in the olfactory system from the nose to the brain. Drawing distinctions to color vision, we argue that smell presents unique measurement challenges, including the complexity of stimuli, the high dimensionality of the sensory apparatus, as well as what constitutes ground truth. In the face of these challenges, we argue for the centrality of odorant-receptor interactions in developing a theory of olfaction. Such a theory is likely to find widespread industrial applications, and enhance our understanding of smell, and in the longer-term, how it relates to other senses and language. As an initial use case of the data, we present results using machine learning-based classification of neural responses to odors as they are recorded in the mouse olfactory bulb with calcium imaging. | [
"['Vivek Agarwal' 'Joshua Harvey' 'Dmitry Rinberg' 'Vasant Dhar']"
]
|
null | null | 2404.05505 | null | null | http://arxiv.org/pdf/2404.05505v1 | 2024-04-08T13:27:07Z | 2024-04-08T13:27:07Z | Taming Transformers for Realistic Lidar Point Cloud Generation | Diffusion Models (DMs) have achieved State-Of-The-Art (SOTA) results in the Lidar point cloud generation task, benefiting from their stable training and iterative refinement during sampling. However, DMs often fail to realistically model Lidar raydrop noise due to their inherent denoising process. To retain the strength of iterative sampling while enhancing the generation of raydrop noise, we introduce LidarGRIT, a generative model that uses auto-regressive transformers to iteratively sample the range images in the latent space rather than image space. Furthermore, LidarGRIT utilises VQ-VAE to separately decode range images and raydrop masks. Our results show that LidarGRIT achieves superior performance compared to SOTA models on KITTI-360 and KITTI odometry datasets. Code available at:https://github.com/hamedhaghighi/LidarGRIT. | [
"['Hamed Haghighi' 'Amir Samadi' 'Mehrdad Dianati' 'Valentina Donzella'\n 'Kurt Debattista']"
]
|
null | null | 2404.05519 | null | null | http://arxiv.org/pdf/2404.05519v1 | 2024-04-08T13:40:01Z | 2024-04-08T13:40:01Z | Investigating the Effectiveness of Cross-Attention to Unlock Zero-Shot
Editing of Text-to-Video Diffusion Models | With recent advances in image and video diffusion models for content creation, a plethora of techniques have been proposed for customizing their generated content. In particular, manipulating the cross-attention layers of Text-to-Image (T2I) diffusion models has shown great promise in controlling the shape and location of objects in the scene. Transferring image-editing techniques to the video domain, however, is extremely challenging as object motion and temporal consistency are difficult to capture accurately. In this work, we take a first look at the role of cross-attention in Text-to-Video (T2V) diffusion models for zero-shot video editing. While one-shot models have shown potential in controlling motion and camera movement, we demonstrate zero-shot control over object shape, position and movement in T2V models. We show that despite the limitations of current T2V models, cross-attention guidance can be a promising approach for editing videos. | [
"['Saman Motamed' 'Wouter Van Gansbeke' 'Luc Van Gool']"
]
|
null | null | 2404.05530 | null | null | http://arxiv.org/pdf/2404.05530v1 | 2024-04-08T13:59:02Z | 2024-04-08T13:59:02Z | Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data | Reinforcement Learning from Human Feedback (RLHF) is a popular method for aligning Language Models (LM) with human values and preferences. RLHF requires a large number of preference pairs as training data, which are often used in both the Supervised Fine-Tuning and Reward Model training, and therefore publicly available datasets are commonly used. In this work, we study to what extent a malicious actor can manipulate the LM's generations by poisoning the preferences, i.e., injecting poisonous preference pairs into these datasets and the RLHF training process. We propose strategies to build poisonous preference pairs and test their performance by poisoning two widely used preference datasets. Our results show that preference poisoning is highly effective: by injecting a small amount of poisonous data (1-5% of the original dataset), we can effectively manipulate the LM to generate a target entity in a target sentiment (positive or negative). The findings from our experiments also shed light on strategies to defend against the preference poisoning attack. | [
"['Tim Baumgärtner' 'Yang Gao' 'Dana Alon' 'Donald Metzler']"
]
|
null | null | 2404.05538 | null | null | http://arxiv.org/pdf/2404.05538v2 | 2024-04-11T09:45:13Z | 2024-04-08T14:06:52Z | Cell-Free Multi-User MIMO Equalization via In-Context Learning | Large pre-trained sequence models, such as transformers, excel as few-shot learners capable of in-context learning (ICL). In ICL, a model is trained to adapt its operation to a new task based on limited contextual information, typically in the form of a few training examples for the given task. Previous work has explored the use of ICL for channel equalization in single-user multiple-input multiple-output (MIMO) systems. In this work, we demonstrate that ICL can also be used to tackle the problem of multi-user equalization in cell-free MIMO systems with limited fronthaul capacity. In this scenario, a task is defined by channel statistics, signal-to-noise ratio, and modulation schemes. The context encompasses the users' pilot sequences, the corresponding quantized received signals, and the current received data signal. Different prompt design strategies are proposed and evaluated that also encompass large-scale fading and modulation information. Experiments demonstrate that ICL-based equalization provides estimates with lower mean squared error as compared to the linear minimum mean squared error equalizer, especially in the presence of limited fronthaul capacity and pilot contamination. | [
"['Matteo Zecchin' 'Kai Yu' 'Osvaldo Simeone']"
]
|
null | null | 2404.05540 | null | null | http://arxiv.org/pdf/2404.05540v1 | 2024-04-08T14:08:56Z | 2024-04-08T14:08:56Z | OPSD: an Offensive Persian Social media Dataset and its baseline
evaluations | The proliferation of hate speech and offensive comments on social media has become increasingly prevalent due to user activities. Such comments can have detrimental effects on individuals' psychological well-being and social behavior. While numerous datasets in the English language exist in this domain, few equivalent resources are available for Persian language. To address this gap, this paper introduces two offensive datasets. The first dataset comprises annotations provided by domain experts, while the second consists of a large collection of unlabeled data obtained through web crawling for unsupervised learning purposes. To ensure the quality of the former dataset, a meticulous three-stage labeling process was conducted, and kappa measures were computed to assess inter-annotator agreement. Furthermore, experiments were performed on the dataset using state-of-the-art language models, both with and without employing masked language modeling techniques, as well as machine learning algorithms, in order to establish the baselines for the dataset using contemporary cutting-edge approaches. The obtained F1-scores for the three-class and two-class versions of the dataset were 76.9% and 89.9% for XLM-RoBERTa, respectively. | [
"['Mehran Safayani' 'Amir Sartipi' 'Amir Hossein Ahmadi' 'Parniyan Jalali'\n 'Amir Hossein Mansouri' 'Mohammad Bisheh-Niasar' 'Zahra Pourbahman']"
]
|
null | null | 2404.05545 | null | null | http://arxiv.org/pdf/2404.05545v1 | 2024-04-08T14:15:56Z | 2024-04-08T14:15:56Z | Evaluating Interventional Reasoning Capabilities of Large Language
Models | Numerous decision-making tasks require estimating causal effects under interventions on different parts of a system. As practitioners consider using large language models (LLMs) to automate decisions, studying their causal reasoning capabilities becomes crucial. A recent line of work evaluates LLMs' ability to retrieve commonsense causal facts, but these evaluations do not sufficiently assess how LLMs reason about interventions. Motivated by the role that interventions play in causal inference, in this paper, we conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention. We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, and enable a study of intervention-based reasoning. These benchmarks allow us to isolate the ability of LLMs to accurately predict changes resulting from interventions, as distinct from their ability to memorize facts or find other shortcuts. Our analysis on four LLMs highlights that while GPT-4 models show promising accuracy at predicting the intervention effects, they remain sensitive to distracting factors in the prompts. | [
"['Tejas Kasetty' 'Divyat Mahajan' 'Gintare Karolina Dziugaite'\n 'Alexandre Drouin' 'Dhanya Sridhar']"
]
|
null | null | 2404.05555 | null | null | http://arxiv.org/pdf/2404.05555v2 | 2024-04-15T08:44:13Z | 2024-04-08T14:28:27Z | On the Convergence of Continual Learning with Adaptive Methods | One of the objectives of continual learning is to prevent catastrophic forgetting in learning multiple tasks sequentially, and the existing solutions have been driven by the conceptualization of the plasticity-stability dilemma. However, the convergence of continual learning for each sequential task is less studied so far. In this paper, we provide a convergence analysis of memory-based continual learning with stochastic gradient descent and empirical evidence that training current tasks causes the cumulative degradation of previous tasks. We propose an adaptive method for nonconvex continual learning (NCCL), which adjusts step sizes of both previous and current tasks with the gradients. The proposed method can achieve the same convergence rate as the SGD method when the catastrophic forgetting term which we define in the paper is suppressed at each iteration. Further, we demonstrate that the proposed algorithm improves the performance of continual learning over existing methods for several image classification tasks. | [
"['Seungyub Han' 'Yeongmo Kim' 'Taehyun Cho' 'Jungwoo Lee']"
]
|
null | null | 2404.05567 | null | null | http://arxiv.org/pdf/2404.05567v1 | 2024-04-08T14:39:49Z | 2024-04-08T14:39:49Z | Dense Training, Sparse Inference: Rethinking Training of
Mixture-of-Experts Language Models | Mixture-of-Experts (MoE) language models can reduce computational costs by 2-4$\times$ compared to dense models without sacrificing performance, making them more efficient in computation-bounded scenarios. However, MoE models generally require 2-4$\times$ more parameters to achieve comparable performance to a dense model, which incurs larger GPU memory requirements and makes MoE models less efficient in I/O-bounded scenarios like autoregressive generation. In this work, we propose a hybrid dense training and sparse inference framework for MoE models (DS-MoE) which achieves strong computation and parameter efficiency by employing dense computation across all experts during training and sparse computation during inference. Our experiments on training LLMs demonstrate that our DS-MoE models are more parameter-efficient than standard sparse MoEs and are on par with dense models in terms of total parameter size and performance while being computationally cheaper (activating 30-40% of the model's parameters). Performance tests using vLLM show that our DS-MoE-6B model runs up to $1.86\times$ faster than similar dense models like Mistral-7B, and between $1.50\times$ and $1.71\times$ faster than comparable MoEs, such as DeepSeekMoE-16B and Qwen1.5-MoE-A2.7B. | [
"['Bowen Pan' 'Yikang Shen' 'Haokun Liu' 'Mayank Mishra' 'Gaoyuan Zhang'\n 'Aude Oliva' 'Colin Raffel' 'Rameswar Panda']"
]
|
null | null | 2404.05576 | null | null | http://arxiv.org/pdf/2404.05576v5 | 2024-05-13T08:42:19Z | 2024-04-08T14:52:48Z | Dynamic Backtracking in GFlowNets: Enhancing Decision Steps with
Reward-Dependent Adjustment Mechanisms | Generative Flow Networks (GFlowNets or GFNs) are probabilistic models predicated on Markov flows, and they employ specific amortization algorithms to learn stochastic policies that generate compositional substances including biomolecules, chemical materials, etc. With a strong ability to generate high-performance biochemical molecules, GFNs accelerate the discovery of scientific substances, effectively overcoming the time-consuming, labor-intensive, and costly shortcomings of conventional material discovery methods. However, previous studies rarely focus on accumulating exploratory experience by adjusting generative structures, which leads to disorientation in complex sampling spaces. Efforts to address this issue, such as LS-GFN, are limited to local greedy searches and lack broader global adjustments. This paper introduces a novel variant of GFNs, the Dynamic Backtracking GFN (DB-GFN), which improves the adaptability of decision-making steps through a reward-based dynamic backtracking mechanism. DB-GFN allows backtracking during the network construction process according to the current state's reward value, thereby correcting disadvantageous decisions and exploring alternative pathways during the exploration process. When applied to generative tasks involving biochemical molecules and genetic material sequences, DB-GFN outperforms GFN models such as LS-GFN and GTB, as well as traditional reinforcement learning methods, in sample quality, sample exploration quantity, and training convergence speed. Additionally, owing to its orthogonal nature, DB-GFN shows great potential in future improvements of GFNs, and it can be integrated with other strategies to achieve higher search performance. | [
"['Shuai Guo' 'Jielei Chu' 'Lei Zhu' 'Zhaoyu Li' 'Tianrui Li']"
]
|
null | null | 2404.05579 | null | null | http://arxiv.org/pdf/2404.05579v1 | 2024-04-08T14:55:35Z | 2024-04-08T14:55:35Z | Robust Data Pruning: Uncovering and Overcoming Implicit Bias | In the era of exceptionally data-hungry models, careful selection of the training data is essential to mitigate the extensive costs of deep learning. Data pruning offers a solution by removing redundant or uninformative samples from the dataset, which yields faster convergence and improved neural scaling laws. However, little is known about its impact on classification bias of the trained models. We conduct the first systematic study of this effect and reveal that existing data pruning algorithms can produce highly biased classifiers. At the same time, we argue that random data pruning with appropriate class ratios has potential to improve the worst-class performance. We propose a "fairness-aware" approach to pruning and empirically demonstrate its performance on standard computer vision benchmarks. In sharp contrast to existing algorithms, our proposed method continues improving robustness at a tolerable drop of average performance as we prune more from the datasets. We present theoretical analysis of the classification risk in a mixture of Gaussians to further motivate our algorithm and support our findings. | [
"['Artem Vysogorets' 'Kartik Ahuja' 'Julia Kempe']"
]
|
null | null | 2404.05604 | null | null | http://arxiv.org/pdf/2404.05604v1 | 2024-04-08T15:24:20Z | 2024-04-08T15:24:20Z | Technical Report: The Graph Spectral Token -- Enhancing Graph
Transformers with Spectral Information | Graph Transformers have emerged as a powerful alternative to Message-Passing Graph Neural Networks (MP-GNNs) to address limitations such as over-squashing of information exchange. However, incorporating graph inductive bias into transformer architectures remains a significant challenge. In this report, we propose the Graph Spectral Token, a novel approach to directly encode graph spectral information, which captures the global structure of the graph, into the transformer architecture. By parameterizing the auxiliary [CLS] token and leaving other tokens representing graph nodes, our method seamlessly integrates spectral information into the learning process. We benchmark the effectiveness of our approach by enhancing two existing graph transformers, GraphTrans and SubFormer. The improved GraphTrans, dubbed GraphTrans-Spec, achieves over 10% improvements on large graph benchmark datasets while maintaining efficiency comparable to MP-GNNs. SubFormer-Spec demonstrates strong performance across various datasets. | [
"['Zihan Pengmei' 'Zimu Li']"
]
|
null | null | 2404.05605 | null | null | http://arxiv.org/pdf/2404.05605v1 | 2024-04-08T15:25:25Z | 2024-04-08T15:25:25Z | Graph Neural Networks Automated Design and Deployment on Device-Edge
Co-Inference Systems | The key to the device-edge co-inference paradigm is to partition models into computation-friendly and computation-intensive parts across the device and the edge, respectively. However, for Graph Neural Networks (GNNs), we find that simply partitioning without altering their structures can hardly achieve the full potential of the co-inference paradigm due to various computational-communication overheads of GNN operations over heterogeneous devices. We present GCoDE, the first automatic framework for GNN that innovatively Co-designs the architecture search and the mapping of each operation on Device-Edge hierarchies. GCoDE abstracts the device communication process into an explicit operation and fuses the search of architecture and the operations mapping in a unified space for joint-optimization. Also, the performance-awareness approach, utilized in the constraint-based search process of GCoDE, enables effective evaluation of architecture efficiency in diverse heterogeneous systems. We implement the co-inference engine and runtime dispatcher in GCoDE to enhance the deployment efficiency. Experimental results show that GCoDE can achieve up to $44.9\times$ speedup and $98.2\%$ energy reduction compared to existing approaches across various applications and system configurations. | [
"['Ao Zhou' 'Jianlei Yang' 'Tong Qiao' 'Yingjie Qi' 'Zhi Yang'\n 'Weisheng Zhao' 'Chunming Hu']"
]
|
null | null | 2404.05613 | null | null | http://arxiv.org/pdf/2404.05613v1 | 2024-04-08T15:40:22Z | 2024-04-08T15:40:22Z | Deep Representation Learning for Multi-functional Degradation Modeling
of Community-dwelling Aging Population | As the aging population grows, particularly for the baby boomer generation, the United States is witnessing a significant increase in the elderly population experiencing multifunctional disabilities. These disabilities, stemming from a variety of chronic diseases, injuries, and impairments, present a complex challenge due to their multidimensional nature, encompassing both physical and cognitive aspects. Traditional methods often use univariate regression-based methods to model and predict single degradation conditions and assume population homogeneity, which is inadequate to address the complexity and diversity of aging-related degradation. This study introduces a novel framework for multi-functional degradation modeling that captures the multidimensional (e.g., physical and cognitive) and heterogeneous nature of elderly disabilities. Utilizing deep learning, our approach predicts health degradation scores and uncovers latent heterogeneity from elderly health histories, offering both efficient estimation and explainable insights into the diverse effects and causes of aging-related degradation. A real-case study demonstrates the effectiveness and marks a pivotal contribution to accurately modeling the intricate dynamics of elderly degradation, and addresses the healthcare challenges in the aging population. | [
"['Suiyao Chen' 'Xinyi Liu' 'Yulei Li' 'Jing Wu' 'Handong Yao']"
]
|
null | null | 2404.05622 | null | null | http://arxiv.org/pdf/2404.05622v1 | 2024-04-08T15:53:29Z | 2024-04-08T15:53:29Z | How to Evaluate Entity Resolution Systems: An Entity-Centric Framework
with Application to Inventor Name Disambiguation | Entity resolution (record linkage, microclustering) systems are notoriously difficult to evaluate. Looking for a needle in a haystack, traditional evaluation methods use sophisticated, application-specific sampling schemes to find matching pairs of records among an immense number of non-matches. We propose an alternative that facilitates the creation of representative, reusable benchmark data sets without necessitating complex sampling schemes. These benchmark data sets can then be used for model training and a variety of evaluation tasks. Specifically, we propose an entity-centric data labeling methodology that integrates with a unified framework for monitoring summary statistics, estimating key performance metrics such as cluster and pairwise precision and recall, and analyzing root causes for errors. We validate the framework in an application to inventor name disambiguation and through simulation studies. Software: https://github.com/OlivierBinette/er-evaluation/ | [
"['Olivier Binette' 'Youngsoo Baek' 'Siddharth Engineer' 'Christina Jones'\n 'Abel Dasylva' 'Jerome P. Reiter']"
]
|
null | null | 2404.05623 | null | null | http://arxiv.org/pdf/2404.05623v2 | 2024-05-24T19:46:14Z | 2024-04-08T15:53:46Z | AnchorAL: Computationally Efficient Active Learning for Large and
Imbalanced Datasets | Active learning for imbalanced classification tasks is challenging as the minority classes naturally occur rarely. Gathering a large pool of unlabelled data is thus essential to capture minority instances. Standard pool-based active learning is computationally expensive on large pools and often reaches low accuracy by overfitting the initial decision boundary, thus failing to explore the input space and find minority instances. To address these issues we propose AnchorAL. At each iteration, AnchorAL chooses class-specific instances from the labelled set, or anchors, and retrieves the most similar unlabelled instances from the pool. This resulting subpool is then used for active learning. Using a small, fixed-sized subpool AnchorAL allows scaling any active learning strategy to large pools. By dynamically selecting different anchors at each iteration it promotes class balance and prevents overfitting the initial decision boundary, thus promoting the discovery of new clusters of minority instances. In experiments across different classification tasks, active learning strategies, and model architectures AnchorAL is (i) faster, often reducing runtime from hours to minutes, (ii) trains more performant models, (iii) and returns more balanced datasets than competing methods. | [
"['Pietro Lesci' 'Andreas Vlachos']"
]
|
null | null | 2404.05639 | null | null | http://arxiv.org/pdf/2404.05639v1 | 2024-04-08T16:20:15Z | 2024-04-08T16:20:15Z | Investigating the Impact of Quantization on Adversarial Robustness | Quantization is a promising technique for reducing the bit-width of deep models to improve their runtime performance and storage efficiency, and thus becomes a fundamental step for deployment. In real-world scenarios, quantized models are often faced with adversarial attacks which cause the model to make incorrect inferences by introducing slight perturbations. However, recent studies have paid less attention to the impact of quantization on the model robustness. More surprisingly, existing studies on this topic even present inconsistent conclusions, which prompted our in-depth investigation. In this paper, we conduct a first-time analysis of the impact of the quantization pipeline components that can incorporate robust optimization under the settings of Post-Training Quantization and Quantization-Aware Training. Through our detailed analysis, we discovered that this inconsistency arises from the use of different pipelines in different studies, specifically regarding whether robust optimization is performed and at which quantization stage it occurs. Our research findings contribute insights into deploying more secure and robust quantized networks, assisting practitioners in reference for scenarios with high-security requirements and limited resources. | [
"['Qun Li' 'Yuan Meng' 'Chen Tang' 'Jiacheng Jiang' 'Zhi Wang']"
]
|
null | null | 2404.05656 | null | null | http://arxiv.org/pdf/2404.05656v2 | 2024-04-22T15:25:20Z | 2024-04-08T16:39:34Z | Causality Extraction from Nuclear Licensee Event Reports Using a Hybrid
Framework | Industry-wide nuclear power plant operating experience is a critical source of raw data for performing parameter estimations in reliability and risk models. Much operating experience information pertains to failure events and is stored as reports containing unstructured data, such as narratives. Event reports are essential for understanding how failures are initiated and propagated, including the numerous causal relations involved. Causal relation extraction using deep learning represents a significant frontier in the field of natural language processing (NLP), and is crucial since it enables the interpretation of intricate narratives and connections contained within vast amounts of written information. This paper proposed a hybrid framework for causality detection and extraction from nuclear licensee event reports. The main contributions include: (1) we compiled an LER corpus with 20,129 text samples for causality analysis, (2) developed an interactive tool for labeling cause effect pairs, (3) built a deep-learning-based approach for causal relation detection, and (4) developed a knowledge based cause-effect extraction approach. | [
"['Shahidur Rahoman Sohag' 'Sai Zhang' 'Min Xian' 'Shoukun Sun' 'Fei Xu'\n 'Zhegang Ma']"
]
|
null | null | 2404.05678 | null | null | http://arxiv.org/pdf/2404.05678v2 | 2024-04-09T16:17:14Z | 2024-04-08T16:57:44Z | Flexible Fairness Learning via Inverse Conditional Permutation | Equalized odds, as a popular notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm prediction when conditioning on the true outcome. Despite rapid advancements, most of the current research focuses on the violation of equalized odds caused by one sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We address this gap by introducing a fairness learning approach that integrates adversarial learning with a novel inverse conditional permutation. This approach effectively and flexibly handles multiple sensitive attributes, potentially of mixed data types. The efficacy and flexibility of our method are demonstrated through both simulation studies and empirical analysis of real-world datasets. | [
"['Yuheng Lai' 'Leying Guan']"
]
|
null | null | 2404.05688 | null | null | http://arxiv.org/pdf/2404.05688v2 | 2024-05-02T20:09:15Z | 2024-04-08T17:14:32Z | David and Goliath: An Empirical Evaluation of Attacks and Defenses for
QNNs at the Deep Edge | ML is shifting from the cloud to the edge. Edge computing reduces the surface exposing private data and enables reliable throughput guarantees in real-time applications. Of the panoply of devices deployed at the edge, resource-constrained MCUs, e.g., Arm Cortex-M, are more prevalent, orders of magnitude cheaper, and less power-hungry than application processors or GPUs. Thus, enabling intelligence at the deep edge is the zeitgeist, with researchers focusing on unveiling novel approaches to deploy ANNs on these constrained devices. Quantization is a well-established technique that has proved effective in enabling the deployment of neural networks on MCUs; however, it is still an open question to understand the robustness of QNNs in the face of adversarial examples. To fill this gap, we empirically evaluate the effectiveness of attacks and defenses from (full-precision) ANNs on (constrained) QNNs. Our evaluation includes three QNNs targeting TinyML applications, ten attacks, and six defenses. With this study, we draw a set of interesting findings. First, quantization increases the point distance to the decision boundary and leads the gradient estimated by some attacks to explode or vanish. Second, quantization can act as a noise attenuator or amplifier, depending on the noise magnitude, and causes gradient misalignment. Regarding adversarial defenses, we conclude that input pre-processing defenses show impressive results on small perturbations; however, they fall short as the perturbation increases. At the same time, train-based defenses increase the average point distance to the decision boundary, which holds after quantization. However, we argue that train-based defenses still need to smooth the quantization-shift and gradient misalignment phenomena to counteract adversarial example transferability to QNNs. All artifacts are open-sourced to enable independent validation of results. | [
"['Miguel Costa' 'Sandro Pinto']"
]
|
null | null | 2404.05689 | null | null | http://arxiv.org/pdf/2404.05689v2 | 2024-05-27T06:48:09Z | 2024-04-08T17:15:37Z | Automated discovery of symbolic laws governing skill acquisition from
naturally occurring data | Skill acquisition is a key area of research in cognitive psychology as it encompasses multiple psychological processes. The laws discovered under experimental paradigms are controversial and lack generalizability. This paper aims to unearth the laws of skill learning from large-scale training log data. A two-stage algorithm was developed to tackle the issues of unobservable cognitive states and algorithmic explosion in searching. Initially a deep learning model is employed to determine the learner's cognitive state and assess the feature importance. Subsequently, symbolic regression algorithms are utilized to parse the neural network model into algebraic equations. Experimental results show the algorithm can accurately restore preset laws within a noise range in continuous feedback settings. When applied to Lumosity training data, the method outperforms traditional and recent models in fitness terms. The study reveals two new forms of skill acquisition laws and reaffirms some previous findings. | [
"['Sannyuya Liu' 'Qing Li' 'Xiaoxuan Shen' 'Jianwen Sun' 'Zongkai Yang']"
]
|
null | null | 2404.05693 | null | null | http://arxiv.org/pdf/2404.05693v1 | 2024-04-08T17:18:30Z | 2024-04-08T17:18:30Z | Evaluating the Efficacy of Cut-and-Paste Data Augmentation in Semantic
Segmentation for Satellite Imagery | Satellite imagery is crucial for tasks like environmental monitoring and urban planning. Typically, it relies on semantic segmentation or Land Use Land Cover (LULC) classification to categorize each pixel. Despite the advancements brought about by Deep Neural Networks (DNNs), their performance in segmentation tasks is hindered by challenges such as limited availability of labeled data, class imbalance and the inherent variability and complexity of satellite images. In order to mitigate those issues, our study explores the effectiveness of a Cut-and-Paste augmentation technique for semantic segmentation in satellite images. We adapt this augmentation, which usually requires labeled instances, to the case of semantic segmentation. By leveraging the connected components in the semantic segmentation labels, we extract instances that are then randomly pasted during training. Using the DynamicEarthNet dataset and a U-Net model for evaluation, we found that this augmentation significantly enhances the mIoU score on the test set from 37.9 to 44.1. This finding highlights the potential of the Cut-and-Paste augmentation to improve the generalization capabilities of semantic segmentation models in satellite imagery. | [
"['Ionut M. Motoi' 'Leonardo Saraceni' 'Daniele Nardi'\n 'Thomas A. Ciarfuglia']"
]
|
null | null | 2404.05694 | null | null | http://arxiv.org/pdf/2404.05694v2 | 2024-05-08T08:53:53Z | 2024-04-08T17:24:04Z | Comprehensive Study on German Language Models for Clinical and
Biomedical Text Understanding | Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where unique domain-specific terminologies, domain-specific abbreviations, and varying document structures are common. This paper explores strategies for adapting these models to domain-specific requirements, primarily through continuous pre-training on domain-specific data. We pre-trained several German medical language models on 2.4B tokens derived from translated public English medical data and 3B tokens of German clinical data. The resulting models were evaluated on various German downstream tasks, including named entity recognition (NER), multi-label classification, and extractive question answering. Our results suggest that models augmented by clinical and translation-based pre-training typically outperform general domain models in medical contexts. We conclude that continuous pre-training has demonstrated the ability to match or even exceed the performance of clinical models trained from scratch. Furthermore, pre-training on clinical data or leveraging translated texts have proven to be reliable methods for domain adaptation in medical NLP tasks. | [
"['Ahmad Idrissi-Yaghir' 'Amin Dada' 'Henning Schäfer' 'Kamyar Arzideh'\n 'Giulia Baldini' 'Jan Trienes' 'Max Hasin' 'Jeanette Bewersdorff'\n 'Cynthia S. Schmidt' 'Marie Bauer' 'Kaleb E. Smith' 'Jiang Bian'\n 'Yonghui Wu' 'Jörg Schlötterer' 'Torsten Zesch' 'Peter A. Horn'\n 'Christin Seifert' 'Felix Nensa' 'Jens Kleesiek' 'Christoph M. Friedrich']"
]
|
null | null | 2404.05695 | null | null | http://arxiv.org/pdf/2404.05695v2 | 2024-05-18T10:00:30Z | 2024-04-08T17:26:28Z | Humanoid-Gym: Reinforcement Learning for Humanoid Robot with Zero-Shot
Sim2Real Transfer | Humanoid-Gym is an easy-to-use reinforcement learning (RL) framework based on Nvidia Isaac Gym, designed to train locomotion skills for humanoid robots, emphasizing zero-shot transfer from simulation to the real-world environment. Humanoid-Gym also integrates a sim-to-sim framework from Isaac Gym to Mujoco that allows users to verify the trained policies in different physical simulations to ensure the robustness and generalization of the policies. This framework is verified by RobotEra's XBot-S (1.2-meter tall humanoid robot) and XBot-L (1.65-meter tall humanoid robot) in a real-world environment with zero-shot sim-to-real transfer. The project website and source code can be found at: https://sites.google.com/view/humanoid-gym/. | [
"['Xinyang Gu' 'Yen-Jen Wang' 'Jianyu Chen']"
]
|
null | null | 2404.05723 | null | null | http://arxiv.org/pdf/2404.05723v1 | 2024-04-08T17:58:22Z | 2024-04-08T17:58:22Z | Predicting Overtakes in Trucks Using CAN Data | Safe overtakes in trucks are crucial to prevent accidents, reduce congestion, and ensure efficient traffic flow, making early prediction essential for timely and informed driving decisions. Accordingly, we investigate the detection of truck overtakes from CAN data. Three classifiers, Artificial Neural Networks (ANN), Random Forest, and Support Vector Machines (SVM), are employed for the task. Our analysis covers up to 10 seconds before the overtaking event, using an overlapping sliding window of 1 second to extract CAN features. We observe that the prediction scores of the overtake class tend to increase as we approach the overtake trigger, while those of the no-overtake class remain stable or oscillate, depending on the classifier. Thus, the best accuracy is achieved when approaching the trigger, making early overtaking prediction challenging. The classifiers show good accuracy in classifying overtakes (Recall/TPR > 93%), but accuracy is suboptimal in classifying no-overtakes (TNR typically 80-90% and below 60% for one SVM variant). We further combine two classifiers (Random Forest and linear SVM) by averaging their output scores. The fusion is observed to improve no-overtake classification (TNR > 92%) at the expense of reducing overtake accuracy (TPR). However, the latter is kept above 91% near the overtake trigger. Therefore, the fusion balances TPR and TNR, providing more consistent performance than individual classifiers. | [
"['Talha Hanif Butt' 'Prayag Tiwari' 'Fernando Alonso-Fernandez']"
]
|
null | null | 2404.05728 | null | null | http://arxiv.org/pdf/2404.05728v5 | 2024-06-26T04:07:08Z | 2024-04-08T17:59:44Z | A Large-Scale Exploration of $\mu$-Transfer | Large artificial neural networks have become a mainstay of language, vision, and audio processing and synthesis, yet their initializations and learning rates are often set in an unsophisticated fashion, due to the high cost of hyperparameter sweeps at scale. The $\mu$-Parameterization ($\mu$P) offers a potential solution to this challenge, yielding scaling rules for model initialization and learning rates while reportedly enabling zero-shot hyperparameter transfer from small to large models. Despite its evident promise, the $\mu$P method is not yet widely adopted, perhaps due to higher implementation complexity, many variations, or complex theoretical background. This work investigates $\mu$P empirically, focusing on the ubiquitous transformer architecture, and aims to answer a simple question: does $\mu$-Transfer yield optimal learning rates in practice? Studying models of up to 10B parameters and training budgets of up to 190B tokens, we find $\mu$-Transfer works as intended for the majority of important cases, yet also identify a few cases where it may not. | [
"['Lucas Lingle']"
]
|
null | null | 2404.05737 | null | null | http://arxiv.org/pdf/2404.05737v2 | 2024-06-19T19:06:36Z | 2024-03-28T18:06:00Z | Soil respiration signals in response to sustainable soil management
practices enhance soil organic carbon stocks | Development of a spatial-temporal and data-driven model of soil respiration at the global scale based on soil temperature, yearly soil moisture, and soil organic carbon (C) estimates. Prediction of soil respiration on an annual basis (1991-2018) with relatively high accuracy (NSE 0.69, CCC 0.82). Lower soil respiration trends, higher soil respiration magnitudes, and higher soil organic C stocks across areas experiencing the presence of sustainable soil management practices. | [
"['Mario Guevara']"
]
|
null | null | 2404.05741 | null | null | http://arxiv.org/pdf/2404.05741v1 | 2024-04-02T19:53:54Z | 2024-04-02T19:53:54Z | Enhancing Inference Efficiency of Large Language Models: Investigating
Optimization Strategies and Architectural Innovations | Large Language Models are growing in size, and we expect them to continue to do so, as larger models train quicker. However, this increase in size will severely impact inference costs. Therefore model compression is important, to retain the performance of larger models, but with a reduced cost of running them. In this thesis we explore the methods of model compression, and we empirically demonstrate that the simple method of skipping latter attention sublayers in Transformer LLMs is an effective method of model compression, as these layers prove to be redundant, whilst also being incredibly computationally expensive. We observed a 21% speed increase in one-token generation for Llama 2 7B, whilst surprisingly and unexpectedly improving performance over several common benchmarks. | [
"['Georgy Tyukin']"
]
|
null | null | 2404.05746 | null | null | http://arxiv.org/pdf/2404.05746v1 | 2024-04-03T14:33:23Z | 2024-04-03T14:33:23Z | Causality for Earth Science -- A Review on Time-series and
Spatiotemporal Causality Methods | This survey paper covers the breadth and depth of time-series and spatiotemporal causality methods, and their applications in Earth Science. More specifically, the paper presents an overview of causal discovery and causal inference, explains the underlying causal assumptions, and enlists evaluation techniques and key terminologies of the domain area. The paper elicits the various state-of-the-art methods introduced for time-series and spatiotemporal causal analysis along with their strengths and limitations. The paper further describes the existing applications of several methods for answering specific Earth Science questions such as extreme weather events, sea level rise, teleconnections etc. This survey paper can serve as a primer for Data Science researchers interested in data-driven causal study as we share a list of resources, such as Earth Science datasets (synthetic, simulated and observational data) and open source tools for causal analysis. It will equally benefit the Earth Science community interested in taking an AI-driven approach to study the causality of different dynamic and thermodynamic processes as we present the open challenges and opportunities in performing causality-based Earth Science study. | [
"['Sahara Ali' 'Uzma Hasan' 'Xingyan Li' 'Omar Faruque' 'Akila Sampath'\n 'Yiyi Huang' 'Md Osman Gani' 'Jianwu Wang']"
]
|
null | null | 2404.05748 | null | null | http://arxiv.org/pdf/2404.05748v2 | 2024-07-01T19:13:36Z | 2024-04-04T06:06:26Z | Analyzing heterogeneity in Alzheimer Disease using multimodal normative
modeling on imaging-based ATN biomarkers | INTRODUCTION: Previous studies have applied normative modeling on a single neuroimaging modality to investigate Alzheimer Disease (AD) heterogeneity. We employed a deep learning-based multimodal normative framework to analyze individual-level variation across ATN (amyloid-tau-neurodegeneration) imaging biomarkers. METHODS: We selected cross-sectional discovery (n = 665) and replication cohorts (n = 430) with available T1-weighted MRI, amyloid and tau PET. Normative modeling estimated individual-level abnormal deviations in amyloid-positive individuals compared to amyloid-negative controls. Regional abnormality patterns were mapped at different clinical group levels to assess intra-group heterogeneity. An individual-level disease severity index (DSI) was calculated using both the spatial extent and magnitude of abnormal deviations across ATN. RESULTS: Greater intra-group heterogeneity in ATN abnormality patterns was observed in more severe clinical stages of AD. Higher DSI was associated with worse cognitive function and increased risk of disease progression. DISCUSSION: Subject-specific abnormality maps across ATN reveal the heterogeneous impact of AD on the brain. | [
"['Sayantan Kumar' 'Tom Earnest' 'Braden Yang' 'Deydeep Kothapalli'\n 'Andrew J. Aschenbrenner' 'Jason Hassenstab' 'Chengie Xiong' 'Beau Ances'\n 'John Morris' 'Tammie L. S. Benzinger' 'Brian A. Gordon' 'Philip Payne'\n 'Aristeidis Sotiras']"
]
|
null | null | 2404.05752 | null | null | http://arxiv.org/pdf/2404.05752v1 | 2024-04-05T03:52:27Z | 2024-04-05T03:52:27Z | Physics Event Classification Using Large Language Models | The 2023 AI4EIC hackathon was the culmination of the third annual AI4EIC workshop at The Catholic University of America. This workshop brought together researchers from physics, data science and computer science to discuss the latest developments in Artificial Intelligence (AI) and Machine Learning (ML) for the Electron Ion Collider (EIC), including applications for detectors, accelerators, and experimental control. The hackathon, held on the final day of the workshop, involved using a chatbot powered by a Large Language Model, ChatGPT-3.5, to train a binary classifier for neutrons and photons in simulated data from the \textsc{GlueX} Barrel Calorimeter. In total, six teams of up to four participants from all over the world took part in this intense educational and research event. This article highlights the hackathon challenge, the resources and methodology used, and the results and insights gained from analyzing physics data using the most cutting-edge tools in AI/ML. | [
"['Cristiano Fanelli' 'James Giroux' 'Patrick Moran' 'Hemalata Nayak'\n 'Karthik Suresh' 'Eric Walter']"
]
|
null | null | 2404.05758 | null | null | http://arxiv.org/pdf/2404.05758v1 | 2024-04-05T21:28:56Z | 2024-04-05T21:28:56Z | Implicit Assimilation of Sparse In Situ Data for Dense & Global Storm
Surge Forecasting | Hurricanes and coastal floods are among the most disastrous natural hazards. Both are intimately related to storm surges, as their causes and effects, respectively. However, the short-term forecasting of storm surges has proven challenging, especially when targeting previously unseen locations or sites without tidal gauges. Furthermore, recent work improved short and medium-term weather forecasting but the handling of raw unassimilated data remains non-trivial. In this paper, we tackle both challenges and demonstrate that neural networks can implicitly assimilate sparse in situ tide gauge data with coarse ocean state reanalysis in order to forecast storm surges. We curate a global dataset to learn and validate the dense prediction of storm surges, building on preceding efforts. Other than prior work limited to known gauges, our approach extends to ungauged sites, paving the way for global storm surge forecasting. | [
"['Patrick Ebel' 'Brandon Victor' 'Peter Naylor' 'Gabriele Meoni'\n 'Federico Serva' 'Rochelle Schneider']"
]
|
null | null | 2404.05762 | null | null | http://arxiv.org/pdf/2404.05762v1 | 2024-04-06T11:20:28Z | 2024-04-06T11:20:28Z | Evaluating the Effectiveness of Artificial Intelligence in Predicting
Adverse Drug Reactions among Cancer Patients: A Systematic Review and
Meta-Analysis | Adverse drug reactions considerably impact patient outcomes and healthcare costs in cancer therapy. Using artificial intelligence to predict adverse drug reactions in real time could revolutionize oncology treatment. This study aims to assess the performance of artificial intelligence models in predicting adverse drug reactions in patients with cancer. This is the first systematic review and meta-analysis. Scopus, PubMed, IEEE Xplore, and ACM Digital Library databases were searched for studies in English, French, and Arabic from January 1, 2018, to August 20, 2023. The inclusion criteria were: (1) peer-reviewed research articles; (2) use of artificial intelligence algorithms (machine learning, deep learning, knowledge graphs); (3) study aimed to predict adverse drug reactions (cardiotoxicity, neutropenia, nephrotoxicity, hepatotoxicity); (4) study was on cancer patients. The data were extracted and evaluated by three reviewers for study quality. Of the 332 screened articles, 17 studies (5%) involving 93,248 oncology patients from 17 countries were included in the systematic review, of which ten studies synthesized the meta-analysis. A random-effects model was created to pool the sensitivity, specificity, and AUC of the included studies. The pooled results were 0.82 (95% CI:0.69, 0.9), 0.84 (95% CI:0.75, 0.9), and 0.83 (95% CI:0.77, 0.87) for sensitivity, specificity, and AUC, respectively, of ADR predictive models. Biomarkers proved their effectiveness in predicting ADRs, yet they were adopted by only half of the reviewed studies. The use of AI in cancer treatment shows great potential, with models demonstrating high specificity and sensitivity in predicting ADRs. However, standardized research and multicenter studies are needed to improve the quality of evidence. AI can enhance cancer patient care by bridging the gap between data-driven insights and clinical expertise. | [
"['Fatma Zahra Abdeldjouad' 'Menaouer Brahami' 'Mohammed Sabri']"
]
|
null | null | 2404.05768 | null | null | http://arxiv.org/pdf/2404.05768v2 | 2024-04-10T16:41:49Z | 2024-04-07T14:29:23Z | Streamlining Ocean Dynamics Modeling with Fourier Neural Operators: A
Multiobjective Hyperparameter and Architecture Optimization Approach | Training an effective deep learning model to learn ocean processes involves careful choices of various hyperparameters. We leverage the advanced search algorithms for multiobjective optimization in DeepHyper, a scalable hyperparameter optimization software, to streamline the development of neural networks tailored for ocean modeling. The focus is on optimizing Fourier neural operators (FNOs), a data-driven model capable of simulating complex ocean behaviors. Selecting the correct model and tuning the hyperparameters are challenging tasks, requiring much effort to ensure model accuracy. DeepHyper allows efficient exploration of hyperparameters associated with data preprocessing, FNO architecture-related hyperparameters, and various model training strategies. We aim to obtain an optimal set of hyperparameters leading to the most performant model. Moreover, on top of the commonly used mean squared error for model training, we propose adopting the negative anomaly correlation coefficient as the additional loss term to improve model performance and investigate the potential trade-off between the two terms. The experimental results show that the optimal set of hyperparameters enhanced model performance in single timestepping forecasting and greatly exceeded the baseline configuration in the autoregressive rollout for long-horizon forecasting up to 30 days. Utilizing DeepHyper, we demonstrate an approach to enhance the use of FNOs in ocean dynamics forecasting, offering a scalable solution with improved precision. | [
"['Yixuan Sun' 'Ololade Sowunmi' 'Romain Egele'\n 'Sri Hari Krishna Narayanan' 'Luke Van Roekel' 'Prasanna Balaprakash']"
]
|
null | null | 2404.05774 | null | null | http://arxiv.org/pdf/2404.05774v1 | 2024-04-08T03:38:52Z | 2024-04-08T03:38:52Z | STMGF: An Effective Spatial-Temporal Multi-Granularity Framework for
Traffic Forecasting | Accurate Traffic Prediction is a challenging task in intelligent transportation due to the spatial-temporal aspects of road networks. The traffic of a road network can be affected by long-distance or long-term dependencies, which existing methods fall short of modeling. In this paper, we introduce a novel framework known as Spatial-Temporal Multi-Granularity Framework (STMGF) to enhance the capture of long-distance and long-term information of the road networks. STMGF makes full use of different granularity information of road networks and models the long-distance and long-term information by gathering information in a hierarchical interactive way. Further, it leverages the inherent periodicity in traffic sequences to refine prediction results by matching with recent traffic data. We conduct experiments on two real-world datasets, and the results demonstrate that STMGF outperforms all baseline models and achieves state-of-the-art performance. | [
"['Zhengyang Zhao' 'Haitao Yuan' 'Nan Jiang' 'Minxiao Chen' 'Ning Liu'\n 'Zengxiang Li']"
]
|
null | null | 2404.05779 | null | null | http://arxiv.org/pdf/2404.05779v1 | 2024-04-08T15:19:57Z | 2024-04-08T15:19:57Z | Data Readiness for AI: A 360-Degree Survey | Data are the critical fuel for Artificial Intelligence (AI) models. Poor quality data produces inaccurate and ineffective AI models that may lead to incorrect or unsafe use. Checking for data readiness is a crucial step in improving data quality. Numerous R&D efforts have been spent on improving data quality. However, standardized metrics for evaluating data readiness for use in AI training are still evolving. In this study, we perform a comprehensive survey of metrics used for verifying AI's data readiness. This survey examines more than 120 papers that are published by ACM Digital Library, IEEE Xplore, other reputable journals, and articles published on the web by prominent AI experts. This survey aims to propose a taxonomy of data readiness for AI (DRAI) metrics for structured and unstructured datasets. We anticipate that this taxonomy can lead to new standards for DRAI metrics that would be used for enhancing the quality and accuracy of AI training and inference. | [
"['Kaveen Hiniduma' 'Suren Byna' 'Jean Luca Bez']"
]
|
null | null | 2404.05781 | null | null | http://arxiv.org/pdf/2404.05781v1 | 2024-04-08T16:36:07Z | 2024-04-08T16:36:07Z | Group-specific discriminant analysis reveals statistically validated sex
differences in lateralization of brain functional network | Lateralization is a fundamental feature of the human brain, where sex differences have been observed. Conventional studies in neuroscience on sex-specific lateralization are typically conducted on univariate statistical comparisons between male and female groups. However, these analyses often lack effective validation of group specificity. Here, we formulate modeling sex differences in lateralization of functional networks as a dual-classification problem, consisting of first-order classification for left vs. right functional networks and second-order classification for male vs. female models. To capture sex-specific patterns, we develop the Group-Specific Discriminant Analysis (GSDA) for first-order classification. The evaluation on two public neuroimaging datasets demonstrates the efficacy of GSDA in learning sex-specific models from functional networks, achieving a significant improvement in group specificity over baseline methods. The major sex differences are in the strength of lateralization and the interactions within and between lobes. The GSDA-based method is generic in nature and can be adapted to other group-specific analyses such as handedness-specific or disease-specific analyses. | [
"['Shuo Zhou' 'Junhao Luo' 'Yaya Jiang' 'Haolin Wang' 'Haiping Lu'\n 'Gaolang Gong']"
]
|
null | null | 2404.05782 | null | null | http://arxiv.org/pdf/2404.05782v1 | 2024-04-08T17:33:11Z | 2024-04-08T17:33:11Z | Dynamical stability and chaos in artificial neural network trajectories
along training | The process of training an artificial neural network involves iteratively adapting its parameters so as to minimize the error of the network's prediction, when confronted with a learning task. This iterative change can be naturally interpreted as a trajectory in network space -- a time series of networks -- and thus the training algorithm (e.g. gradient descent optimization of a suitable loss function) can be interpreted as a dynamical system in graph space. In order to illustrate this interpretation, here we study the dynamical properties of this process by analyzing through this lens the network trajectories of a shallow neural network, and its evolution through learning a simple classification task. We systematically consider different ranges of the learning rate and explore both the dynamical and orbital stability of the resulting network trajectories, finding hints of regular and chaotic behavior depending on the learning rate regime. Our findings are put in contrast to common wisdom on convergence properties of neural networks and dynamical systems theory. This work also contributes to the cross-fertilization of ideas between dynamical systems theory, network theory and machine learning. | [
"['Kaloyan Danovski' 'Miguel C. Soriano' 'Lucas Lacasa']"
]
|
null | null | 2404.05809 | null | null | http://arxiv.org/pdf/2404.05809v1 | 2024-04-08T18:16:22Z | 2024-04-08T18:16:22Z | Self-Labeling in Multivariate Causality and Quantification for Adaptive
Machine Learning | Adaptive machine learning (ML) aims to allow ML models to adapt to ever-changing environments with potential concept drift after model deployment. Traditionally, adaptive ML requires a new dataset to be manually labeled to tailor deployed models to altered data distributions. Recently, an interactive causality based self-labeling method was proposed to autonomously associate causally related data streams for domain adaptation, showing promising results compared to traditional feature similarity-based semi-supervised learning. Several unanswered research questions remain, including self-labeling's compatibility with multivariate causality and the quantitative analysis of the auxiliary models used in the self-labeling. The auxiliary models, the interaction time model (ITM) and the effect state detector (ESD), are vital to the success of self-labeling. This paper further develops the self-labeling framework and its theoretical foundations to address these research questions. A framework for the application of self-labeling to multivariate causal graphs is proposed using four basic causal relationships, and the impact of non-ideal ITM and ESD performance is analyzed. A simulated experiment is conducted based on a multivariate causal graph, validating the proposed theory. | [
"['Yutian Ren' 'Aaron Haohua Yen' 'G. P. Li']"
]
|
null | null | 2404.05816 | null | null | http://arxiv.org/pdf/2404.05816v1 | 2024-04-08T18:40:25Z | 2024-04-08T18:40:25Z | Centrality Estimators for Probability Density Functions | In this report, we explore the data selection leading to a family of estimators maximizing a centrality. The family has nice properties leading to accurate and robust probability density function fitting according to some criteria we define. We establish a link between the centrality estimator and the maximum likelihood, showing that the latter is a particular case. Therefore, a new probability interpretation of Fisher maximum likelihood is provided. We will introduce and study two specific centralities that we have named Hölder and Lehmer estimators. A numerical simulation is provided showing the effectiveness of the proposed families of estimators, opening the door to the development of new concepts and algorithms in machine learning, data mining, statistics, and data analysis. | [
"['Djemel Ziou']"
]
|
null | null | 2404.05817 | null | null | http://arxiv.org/pdf/2404.05817v1 | 2024-04-08T18:41:55Z | 2024-04-08T18:41:55Z | Label Propagation Training Schemes for Physics-Informed Neural Networks
and Gaussian Processes | This paper proposes a semi-supervised methodology for training physics-informed machine learning methods. This includes self-training of physics-informed neural networks and physics-informed Gaussian processes in isolation, and the integration of the two via co-training. We demonstrate via extensive numerical experiments how these methods can ameliorate the issue of propagating information forward in time, which is a common failure mode of physics-informed machine learning. | [
"['Ming Zhong' 'Dehao Liu' 'Raymundo Arroyave' 'Ulisses Braga-Neto']"
]
|
null | null | 2404.05819 | null | null | http://arxiv.org/pdf/2404.05819v1 | 2024-04-08T18:55:07Z | 2024-04-08T18:55:07Z | Just Wing It: Optimal Estimation of Missing Mass in a Markovian Sequence | We study the problem of estimating the stationary mass -- also called the unigram mass -- that is missing from a single trajectory of a discrete-time, ergodic Markov chain. This problem has several applications -- for example, estimating the stationary missing mass is critical for accurately smoothing probability estimates in sequence models. While the classical Good--Turing estimator from the 1950s has appealing properties for i.i.d. data, it is known to be biased in the Markov setting, and other heuristic estimators do not come equipped with guarantees. Operating in the general setting in which the size of the state space may be much larger than the length $n$ of the trajectory, we develop a linear-runtime estimator called \emph{Windowed Good--Turing} (\textsc{WingIt}) and show that its risk decays as $\widetilde{\mathcal{O}}(\mathsf{T_{mix}}/n)$, where $\mathsf{T_{mix}}$ denotes the mixing time of the chain in total variation distance. Notably, this rate is independent of the size of the state space and minimax-optimal up to a logarithmic factor in $n / \mathsf{T_{mix}}$. We also present a bound on the variance of the missing mass random variable, which may be of independent interest. We extend our estimator to approximate the stationary mass placed on elements occurring with small frequency in $X^n$. Finally, we demonstrate the efficacy of our estimators both in simulations on canonical chains and on sequences constructed from a popular natural language corpus. | [
"['Ashwin Pananjady' 'Vidya Muthukumar' 'Andrew Thangaraj']"
]
|
null | null | 2404.05824 | null | null | http://arxiv.org/pdf/2404.05824v1 | 2024-04-08T19:23:17Z | 2024-04-08T19:23:17Z | Quantum Adversarial Learning for Kernel Methods | We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable against adversarial attacks, namely small engineered perturbations of the input data can deceive the classifier into predicting the wrong result. Nonetheless, we also show that simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks. Our results find applications in security-critical learning problems and in mitigating the effect of some forms of quantum noise, since the attacker can also be understood as part of the surrounding environment. | [
"['Giuseppe Montalbano' 'Leonardo Banchi']"
]
|
null | null | 2404.05829 | null | null | http://arxiv.org/pdf/2404.05829v1 | 2024-04-08T19:48:36Z | 2024-04-08T19:48:36Z | SambaLingo: Teaching Large Language Models New Languages | Despite the widespread availability of LLMs, there remains a substantial gap in their capabilities and availability across diverse languages. One approach to address these issues has been to take an existing pre-trained LLM and continue to train it on new languages. While prior works have experimented with language adaptation, many questions around best practices and methodology have not been covered. In this paper, we present a comprehensive investigation into the adaptation of LLMs to new languages. Our study covers the key components in this process, including vocabulary extension, direct preference optimization and the data scarcity problem for human alignment in low-resource languages. We scale these experiments across 9 languages and 2 parameter scales (7B and 70B). We compare our models against Llama 2, Aya-101, XGLM, BLOOM and existing language experts, outperforming all prior published baselines. Additionally, all evaluation code and checkpoints are made public to facilitate future research. | [
"['Zoltan Csaki' 'Bo Li' 'Jonathan Li' 'Qiantong Xu' 'Pian Pawakapan'\n 'Leon Zhang' 'Yun Du' 'Hengyu Zhao' 'Changran Hu' 'Urmish Thakker']"
]
|
null | null | 2404.05835 | null | null | http://arxiv.org/pdf/2404.05835v2 | 2024-06-06T15:51:35Z | 2024-04-08T20:02:19Z | Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers
without Retraining | Model Predictive Control (MPC) is a method to control nonlinear systems with guaranteed stability and constraint satisfaction but suffers from high computation times. Approximate MPC (AMPC) with neural networks (NNs) has emerged to address this limitation, enabling deployment on resource-constrained embedded systems. However, when tuning AMPCs for real-world systems, large datasets need to be regenerated and the NN needs to be retrained at every tuning step. This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining. By incorporating local sensitivities of nonlinear programs, the proposed method not only mimics optimal MPC inputs but also adjusts to known changes in physical parameters of the model using linear predictions while still guaranteeing stability. We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU). We use the same NN across both system instances that have different parameters. This work not only represents the first experimental demonstration of AMPC for fast-moving systems on low-cost MCUs to the best of our knowledge, but also showcases generalization across system instances and variations through our parameter-adaptation method. Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems. | [
"['Henrik Hose' 'Alexander Gräfe' 'Sebastian Trimpe']"
]
|
null | null | 2404.05840 | null | null | http://arxiv.org/pdf/2404.05840v3 | 2024-05-17T16:01:54Z | 2024-04-08T20:06:33Z | Attention-Driven Multi-Agent Reinforcement Learning: Enhancing Decisions
with Expertise-Informed Tasks | In this paper, we introduce an alternative approach to enhancing Multi-Agent Reinforcement Learning (MARL) through the integration of domain knowledge and attention-based policy mechanisms. Our methodology focuses on the incorporation of domain-specific expertise into the learning process, which simplifies the development of collaborative behaviors. This approach aims to reduce the complexity and learning overhead typically associated with MARL by enabling agents to concentrate on essential aspects of complex tasks, thus optimizing the learning curve. The utilization of attention mechanisms plays a key role in our model. It allows for the effective processing of dynamic context data and nuanced agent interactions, leading to more refined decision-making. Applied in standard MARL scenarios, such as the Stanford Intelligent Systems Laboratory (SISL) Pursuit and Multi-Particle Environments (MPE) Simple Spread, our method has been shown to improve both learning efficiency and the effectiveness of collaborative behaviors. The results indicate that our attention-based approach can be a viable approach for improving the efficiency of MARL training process, integrating domain-specific knowledge at the action level. | [
"['Andre R Kuroswiski' 'Annie S Wu' 'Angelo Passaro']"
]
|
null | null | 2404.05843 | null | null | http://arxiv.org/pdf/2404.05843v2 | 2024-04-27T19:03:14Z | 2024-04-08T20:14:10Z | Softmax Attention with Constant Cost per Token | We propose a simple modification to the conventional attention mechanism applied by Transformers: Instead of quantifying pairwise query-key similarity with scaled dot-products, we quantify it with the logarithms of scaled dot-products of exponentials. Our modification linearizes attention with exponential kernel feature maps, whose corresponding feature function is infinite dimensional. We show that our modification is expressible as a composition of log-sums of exponentials, with a latent space of constant size, enabling application with constant time and space complexity per token. We implement our modification, verify that it works in practice, and conclude that it is a promising alternative to conventional attention. | [
"['Franz A. Heinsen']"
]
|
null | null | 2404.05849 | null | null | http://arxiv.org/pdf/2404.05849v1 | 2024-04-08T20:31:27Z | 2024-04-08T20:31:27Z | Localizing Moments of Actions in Untrimmed Videos of Infants with Autism
Spectrum Disorder | Autism Spectrum Disorder (ASD) presents significant challenges in early diagnosis and intervention, impacting children and their families. With prevalence rates rising, there is a critical need for accessible and efficient screening tools. Leveraging machine learning (ML) techniques, in particular Temporal Action Localization (TAL), holds promise for automating ASD screening. This paper introduces a self-attention based TAL model designed to identify ASD-related behaviors in infant videos. Unlike existing methods, our approach simplifies complex modeling and emphasizes efficiency, which is essential for practical deployment in real-world scenarios. Importantly, this work underscores the importance of developing computer vision methods capable of operating in naturalistic environments with little equipment control, addressing key challenges in ASD screening. This study is the first to conduct end-to-end temporal action localization in untrimmed videos of infants with ASD, offering promising avenues for early intervention and support. We report baseline results of behavior detection using our TAL model. We achieve 70% accuracy for look face, 79% accuracy for look object, 72% for smile and 65% for vocalization. | [
"['Halil Ismail Helvaci' 'Sen-ching Samson Cheung' 'Chen-Nee Chuah'\n 'Sally Ozonoff']"
]
|
null | null | 2404.05858 | null | null | http://arxiv.org/pdf/2404.05858v1 | 2024-04-08T20:42:10Z | 2024-04-08T20:42:10Z | A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation | Neuromorphic computing mimics computational principles of the brain in $\textit{silico}$ and motivates research into event-based vision and spiking neural networks (SNNs). Event cameras (ECs) exclusively capture local intensity changes and offer superior power consumption, response latencies, and dynamic ranges. SNNs replicate biological neuronal dynamics and have demonstrated potential as alternatives to conventional artificial neural networks (ANNs), such as in reducing energy expenditure and inference time in visual classification. Nevertheless, these novel paradigms remain scarcely explored outside the domain of aerial robots. To investigate the utility of brain-inspired sensing and data processing, we developed a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator. Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN, decoding neural activations into avoidance motions, and adjusting plans using a dynamic motion primitive. We conducted experiments with a Kinova Gen3 arm performing simple reaching tasks that involve obstacles in sets of distinct task scenarios and in comparison to a non-adaptive baseline. Our neuromorphic approach facilitated reliable avoidance of imminent collisions in simulated and real-world experiments, where the baseline consistently failed. Trajectory adaptations had low impacts on safety and predictability criteria. Among the notable SNN properties were the correlation of computations with the magnitude of perceived motions and a robustness to different event emulation methods. Tests with a DAVIS346 EC showed similar performance, validating our experimental event emulation. Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods. | [
"['Ahmed Faisal Abdelrahman' 'Matias Valdenegro-Toro' 'Maren Bennewitz'\n 'Paul G. Plöger']"
]
|
null | null | 2404.05868 | null | null | http://arxiv.org/pdf/2404.05868v1 | 2024-04-08T21:05:42Z | 2024-04-08T21:05:42Z | Negative Preference Optimization: From Catastrophic Collapse to
Effective Unlearning | Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from the pre-trained model while preserving the model's utilities on other tasks. Several practical methods have recently been proposed for LLM unlearning, mostly based on gradient ascent (GA) on the loss of undesirable data. However, on certain unlearning tasks, these methods either fail to effectively unlearn the target data or suffer from catastrophic collapse -- a drastic degradation of the model's utilities. In this paper, we propose Negative Preference Optimization (NPO), a simple alignment-inspired method that could efficiently and effectively unlearn a target dataset. We theoretically show that the progression toward catastrophic collapse by minimizing the NPO loss is exponentially slower than GA. Through experiments on synthetic data and the benchmark TOFU dataset, we demonstrate that NPO-based methods achieve a better balance between unlearning the undesirable data and maintaining the model's utilities. We also observe that NPO-based methods generate more sensible outputs than GA-based methods, whose outputs are often gibberish. Remarkably, on TOFU, NPO-based methods are the first to achieve reasonable unlearning results in forgetting 50% (or more) of the training data, whereas existing methods already struggle with forgetting 10% of training data. | [
"['Ruiqi Zhang' 'Licong Lin' 'Yu Bai' 'Song Mei']"
]
|
null | null | 2404.05872 | null | null | http://arxiv.org/abs/2404.05872v1 | 2024-04-08T21:09:59Z | 2024-04-08T21:09:59Z | TabConv: Low-Computation CNN Inference via Table Lookups | Convolutional Neural Networks (CNNs) have demonstrated remarkable ability throughout the field of computer vision. However, CNN inference requires a large number of arithmetic operations, making them expensive to deploy in hardware. Current approaches alleviate this issue by developing hardware-supported, algorithmic processes to simplify spatial convolution functions. However, these methods still heavily rely on matrix multiplication, leading to significant computational overhead. To bridge the gap between hardware, algorithmic acceleration, and approximate matrix multiplication, we propose TabConv, a novel, table-based approximation for convolution to significantly reduce arithmetic operations during inference. Additionally, we introduce a priority masking technique based on cosine similarity to select layers for table-based approximation, thereby maintaining the model performance. We evaluate our approach on popular CNNs: ResNet-18, ResNet-34, and NetworkInNetwork (NIN). TabConv preserves over 93% of the original model's performance while reducing arithmetic operations by 36.5%, 25.8%, and 99.4% for ResNet-18 on CIFAR-10, CIFAR-100, and MNIST, respectively, 35.6% and 99.3% for ResNet-34 on CIFAR-10 and MNIST, and 98.9% for NIN on MNIST, achieving low-computation inference. | [
"['Neelesh Gupta' 'Narayanan Kannan' 'Pengmiao Zhang' 'Viktor Prasanna']"
]
|
null | null | 2404.05875 | null | null | http://arxiv.org/pdf/2404.05875v1 | 2024-04-08T21:15:36Z | 2024-04-08T21:15:36Z | CodecLM: Aligning Language Models with Tailored Synthetic Data | Instruction tuning has emerged as the key in aligning large language models (LLMs) with specific task instructions, thereby mitigating the discrepancy between the next-token prediction objective and users' actual goals. To reduce the labor and time cost to collect or annotate data by humans, researchers start to explore the use of LLMs to generate instruction-aligned synthetic data. Recent works focus on generating diverse instructions and applying LLM to increase instruction complexity, often neglecting downstream use cases. It remains unclear how to tailor high-quality data to elicit better instruction-following abilities in different target instruction distributions and LLMs. To this end, we introduce CodecLM, a general framework for adaptively generating high-quality synthetic data for LLM alignment with different downstream instruction distributions and LLMs. Drawing on the Encode-Decode principles, we use LLMs as codecs to guide the data generation process. We first encode seed instructions into metadata, which are concise keywords generated on-the-fly to capture the target instruction distribution, and then decode metadata to create tailored instructions. We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples. Extensive experiments on four open-domain instruction following benchmarks validate the effectiveness of CodecLM over the current state-of-the-arts. | [
"['Zifeng Wang' 'Chun-Liang Li' 'Vincent Perot' 'Long T. Le' 'Jin Miao'\n 'Zizhao Zhang' 'Chen-Yu Lee' 'Tomas Pfister']"
]
|
null | null | 2404.05879 | null | null | http://arxiv.org/pdf/2404.05879v1 | 2024-04-08T21:26:04Z | 2024-04-08T21:26:04Z | Rapid and Precise Topological Comparison with Merge Tree Neural Networks | Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural networks (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which emerged as an effective encoder for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%. | [
"['Yu Qin' 'Brittany Terese Fasy' 'Carola Wenk' 'Brian Summa']"
]
|
null | null | 2404.05891 | null | null | http://arxiv.org/pdf/2404.05891v2 | 2024-06-27T21:54:40Z | 2024-04-08T22:20:23Z | Condition Monitoring with Incomplete Data: An Integrated Variational
Autoencoder and Distance Metric Framework | Condition monitoring of industrial systems is crucial for ensuring safety and maintenance planning, yet notable challenges arise in real-world settings due to the limited or non-existent availability of fault samples. This paper introduces an innovative solution to this problem by proposing a new method for fault detection and condition monitoring for unseen data. Adopting an approach inspired by zero-shot learning, our method can identify faults and assign a relative health index to various operational conditions. Typically, we have plenty of data on normal operations, some data on compromised conditions, and very few (if any) samples of severe faults. We use a variational autoencoder to capture the probabilistic distribution of previously seen and new unseen conditions. The health status is determined by comparing each sample's deviation from a normal operation reference distribution in the latent space. Faults are detected by establishing a threshold for the health indexes, allowing the model to identify severe, unseen faults with high accuracy, even amidst noise. We validate our approach using the run-to-failure IMS-bearing dataset and compare it with other methods. The health indexes generated by our model closely match the established descriptive model of bearing wear, attesting to the robustness and reliability of our method. These findings highlight the potential of our methodology in augmenting fault detection capabilities within industrial domains, thereby contributing to heightened safety protocols and optimized maintenance practices. | [
"['Maryam Ahang' 'Mostafa Abbasi' 'Todd Charter' 'Homayoun Najjaran']"
]
|
null | null | 2404.05894 | null | null | http://arxiv.org/pdf/2404.05894v2 | 2024-04-15T14:41:47Z | 2024-04-08T22:40:57Z | Learning Heuristics for Transit Network Design and Improvement with Deep
Reinforcement Learning | Transit agencies world-wide face tightening budgets. To maintain quality of service while cutting costs, efficient transit network design is essential. But planning a network of public transit routes is a challenging optimization problem. The most successful approaches to date use metaheuristic algorithms to search through the space of possible transit networks by applying low-level heuristics that randomly alter routes in a network. The design of these low-level heuristics has a major impact on the quality of the result. In this paper we use deep reinforcement learning with graph neural nets to learn low-level heuristics for an evolutionary algorithm, instead of designing them manually. These learned heuristics improve the algorithm's results on benchmark synthetic cities with 70 nodes or more, and obtain state-of-the-art results when optimizing operating costs. They also improve upon a simulation of the real transit network in the city of Laval, Canada, by as much as 54% and 18% on two key metrics, and offer cost savings of up to 12% over the city's existing transit network. | [
"['Andrew Holliday' 'Ahmed El-Geneidy' 'Gregory Dudek']"
]
|
null | null | 2404.05898 | null | null | http://arxiv.org/abs/2404.05898v1 | 2024-04-08T22:54:14Z | 2024-04-08T22:54:14Z | Inexact Simplification of Symbolic Regression Expressions with
Locality-sensitive Hashing | Symbolic regression (SR) searches for parametric models that accurately fit a dataset, prioritizing simplicity and interpretability. Despite this secondary objective, studies point out that the models are often overly complex due to redundant operations, introns, and bloat that arise during the iterative process, and can hinder the search with repeated exploration of bloated segments. Applying a fast heuristic algebraic simplification may not fully simplify the expression and exact methods can be infeasible depending on size or complexity of the expressions. We propose a novel agnostic simplification and bloat control for SR employing an efficient memoization with locality-sensitive hashing (LSH). The idea is that expressions and their sub-expressions traversed during the iterative simplification process are stored in a dictionary using LSH, enabling efficient retrieval of similar structures. We iterate through the expression, replacing subtrees with others of the same hash if they result in a smaller expression. Empirical results show that applying this simplification during evolution performs equal or better than without simplification in minimization of error, significantly reducing the number of nonlinear functions. This technique can learn simplification rules that work in general or for a specific problem, and improves convergence while reducing model complexity. | [
"['Guilherme Seidyo Imai Aldeia' 'Fabricio Olivetti de Franca'\n 'William G. La Cava']"
]
|
null | null | 2404.05903 | null | null | http://arxiv.org/pdf/2404.05903v1 | 2024-04-08T23:15:41Z | 2024-04-08T23:15:41Z | Natural Learning | We introduce Natural Learning (NL), a novel algorithm that elevates the explainability and interpretability of machine learning to an extreme level. NL simplifies decisions into intuitive rules, like "We rejected your loan because your income, employment status, and age collectively resemble a rejected prototype more than an accepted prototype." When applied to real-life datasets, NL produces impressive results. For example, in a colon cancer dataset with 1545 patients and 10935 genes, NL achieves 98.1% accuracy, comparable to DNNs and RF, by analyzing just 3 genes of test samples against 2 discovered prototypes. Similarly, in the UCI's WDBC dataset, NL achieves 98.3% accuracy using only 7 features and 2 prototypes. Even on the MNIST dataset (0 vs. 1), NL achieves 99.5% accuracy with only 3 pixels from 2 prototype images. NL is inspired by prototype theory, an old concept in cognitive psychology suggesting that people learn single sparse prototypes to categorize objects. Leveraging this relaxed assumption, we redesign Support Vector Machines (SVM), replacing its mathematical formulation with a fully nearest-neighbor-based solution, and to address the curse of dimensionality, we utilize locality-sensitive hashing. Following theory's generalizability principle, we propose a recursive method to prune non-core features. As a result, NL efficiently discovers the sparsest prototypes in O(n^2pL) with high parallelization capacity in terms of n. Evaluation of NL with 17 benchmark datasets shows its significant outperformance compared to decision trees and logistic regression, two methods widely favored in healthcare for their interpretability. Moreover, NL achieves performance comparable to finetuned black-box models such as deep neural networks and random forests in 40% of cases, with only a 1-2% lower average accuracy. The code is available via http://natural-learning.cc. | [
"['Hadi Fanaee-T']"
]
|
null | null | 2404.05905 | null | null | http://arxiv.org/pdf/2404.05905v1 | 2024-04-08T23:30:15Z | 2024-04-08T23:30:15Z | Computing Transition Pathways for the Study of Rare Events Using Deep
Reinforcement Learning | Understanding the transition events between metastable states in complex systems is an important subject in the fields of computational physics, chemistry and biology. The transition pathway plays an important role in characterizing the mechanism underlying the transition, for example, in the study of conformational changes of bio-molecules. In fact, computing the transition pathway is a challenging task for complex and high-dimensional systems. In this work, we formulate the path-finding task as a cost minimization problem over a particular path space. The cost function is adapted from the Freidlin-Wentzell action functional so that it is able to deal with rough potential landscapes. The path-finding problem is then solved using an actor-critic method based on the deep deterministic policy gradient algorithm (DDPG). The method incorporates the potential force of the system in the policy for generating episodes and combines physical properties of the system with the learning process for molecular systems. The exploitation and exploration nature of reinforcement learning enables the method to efficiently sample the transition events and compute the globally optimal transition pathway. We illustrate the effectiveness of the proposed method using three benchmark systems including an extended Mueller system and the Lennard-Jones system of seven particles. | [
"['Bo Lin' 'Yangzheng Zhong' 'Weiqing Ren']"
]
|
null | null | 2404.05908 | null | null | http://arxiv.org/abs/2404.05908v1 | 2024-04-08T23:46:59Z | 2024-04-08T23:46:59Z | Interpretability in Symbolic Regression: a benchmark of Explanatory
Methods using the Feynman data set | In some situations, the interpretability of the machine learning models plays a role as important as the model accuracy. Interpretability comes from the need to trust the prediction model, verify some of its properties, or even enforce them to improve fairness. Many model-agnostic explanatory methods exist to provide explanations for black-box models. In the regression task, the practitioner can use white-box or gray-box models to achieve more interpretable results, which is the case of symbolic regression. When using an explanatory method, and since interpretability lacks a rigorous definition, there is a need to evaluate and compare the quality of different explainers. This paper proposes a benchmark scheme to evaluate explanatory methods to explain regression models, mainly symbolic regression models. Experiments were performed using 100 physics equations with different interpretable and non-interpretable regression methods and popular explanation methods, evaluating the performance of the explainers with several explanation measures. In addition, we further analyzed four benchmarks from the GP community. The results have shown that Symbolic Regression models can be an interesting alternative to white-box and black-box models that is capable of returning accurate models with appropriate explanations. Regarding the explainers, we observed that Partial Effects and SHAP were the most robust explanation models, with Integrated Gradients being unstable only with tree-based models. This benchmark is publicly available for further experiments. | [
"['Guilherme Seidyo Imai Aldeia' 'Fabricio Olivetti de Franca']"
]
|
null | null | 2404.05913 | null | null | http://arxiv.org/pdf/2404.05913v1 | 2024-04-09T00:07:16Z | 2024-04-09T00:07:16Z | Deep Reinforcement Learning for Personalized Diagnostic Decision
Pathways Using Electronic Health Records: A Comparative Study on Anemia and
Systemic Lupus Erythematosus | Background: Clinical diagnosis is typically reached by following a series of steps recommended by guidelines authored by colleges of experts. Accordingly, guidelines play a crucial role in rationalizing clinical decisions but suffer from limitations as they are built to cover the majority of the population and fail at covering patients with uncommon conditions. Moreover, their updates are long and expensive, making them unsuitable for emerging diseases and practices. Methods: Inspired by guidelines, we formulate the task of diagnosis as a sequential decision-making problem and study the use of Deep Reinforcement Learning (DRL) algorithms to learn the optimal sequence of actions to perform in order to obtain a correct diagnosis from Electronic Health Records (EHRs). We apply DRL on synthetic, but realistic EHRs and develop two clinical use cases: Anemia diagnosis, where the decision pathways follow the schema of a decision tree; and Systemic Lupus Erythematosus (SLE) diagnosis, which follows a weighted criteria score. We particularly evaluate the robustness of our approaches to noisy and missing data since these frequently occur in EHRs. Results: In both use cases, and in the presence of imperfect data, our best DRL algorithms exhibit competitive performance when compared to the traditional classifiers, with the added advantage that they enable the progressive generation of a pathway to the suggested diagnosis which can both guide and explain the decision-making process. Conclusion: DRL offers the opportunity to learn personalized decision pathways to diagnosis. We illustrate with our two use cases their advantages: they generate step-by-step pathways that are self-explanatory; and their correctness is competitive when compared to state-of-the-art approaches. | [
"['Lillian Muyama' 'Antoine Neuraz' 'Adrien Coulet']"
]
|
null | null | 2404.05919 | null | null | http://arxiv.org/pdf/2404.05919v1 | 2024-04-09T00:43:45Z | 2024-04-09T00:43:45Z | AdaGossip: Adaptive Consensus Step-size for Decentralized Deep Learning
with Communication Compression | Decentralized learning is crucial in supporting on-device learning over large distributed datasets, eliminating the need for a central server. However, the communication overhead remains a major bottleneck for the practical realization of such decentralized setups. To tackle this issue, several algorithms for decentralized training with compressed communication have been proposed in the literature. Most of these algorithms introduce an additional hyper-parameter referred to as consensus step-size which is tuned based on the compression ratio at the beginning of the training. In this work, we propose AdaGossip, a novel technique that adaptively adjusts the consensus step-size based on the compressed model differences between neighboring agents. We demonstrate the effectiveness of the proposed method through an exhaustive set of experiments on various Computer Vision datasets (CIFAR-10, CIFAR-100, Fashion MNIST, Imagenette, and ImageNet), model architectures, and network topologies. Our experiments show that the proposed method achieves superior performance ($0-2\%$ improvement in test accuracy) compared to the current state-of-the-art method for decentralized learning with communication compression. | [
"['Sai Aparna Aketi' 'Abolfazl Hashemi' 'Kaushik Roy']"
]
|
null | null | 2404.05938 | null | null | http://arxiv.org/pdf/2404.05938v1 | 2024-04-09T01:43:02Z | 2024-04-09T01:43:02Z | Neural networks can be FLOP-efficient integrators of 1D oscillatory
integrands | We demonstrate that neural networks can be FLOP-efficient integrators of one-dimensional oscillatory integrands. We train a feed-forward neural network to compute integrals of highly oscillatory 1D functions. The training set is a parametric combination of functions with varying characters and oscillatory behavior degrees. Numerical examples show that these networks are FLOP-efficient for sufficiently oscillatory integrands with an average FLOP gain of 1000 FLOPs. The network calculates oscillatory integrals better than traditional quadrature methods under the same computational budget or number of floating point operations. We find that feed-forward networks of 5 hidden layers are satisfactory for a relative accuracy of 0.001. The computational burden of inference of the neural network is relatively small, even compared to inner-product pattern quadrature rules. We postulate that our result follows from learning latent patterns in the oscillatory integrands that are otherwise opaque to traditional numerical integrators. | [
"['Anshuman Sinha' 'Spencer H. Bryngelson']"
]
|
null | null | 2404.05950 | null | null | http://arxiv.org/pdf/2404.05950v1 | 2024-04-09T02:11:35Z | 2024-04-09T02:11:35Z | Efficient Multi-Task Reinforcement Learning via Task-Specific Action
Correction | Multi-task reinforcement learning (MTRL) demonstrates potential for enhancing the generalization of a robot, enabling it to perform multiple tasks concurrently. However, the performance of MTRL may still be susceptible to conflicts between tasks and negative interference. To facilitate efficient MTRL, we propose Task-Specific Action Correction (TSAC), a general and complementary approach designed for simultaneous learning of multiple tasks. TSAC decomposes policy learning into two separate policies: a shared policy (SP) and an action correction policy (ACP). To alleviate conflicts resulting from excessive focus on specific tasks' details in SP, ACP incorporates goal-oriented sparse rewards, enabling an agent to adopt a long-term perspective and achieve generalization across tasks. Additional rewards transform the original problem into a multi-objective MTRL problem. Furthermore, to convert the multi-objective MTRL into a single-objective formulation, TSAC assigns a virtual expected budget to the sparse rewards and employs the Lagrangian method to transform a constrained single-objective optimization into an unconstrained one. Experimental evaluations conducted on Meta-World's MT10 and MT50 benchmarks demonstrate that TSAC outperforms existing state-of-the-art methods, achieving significant improvements in both sample efficiency and effective action execution. | [
"['Jinyuan Feng' 'Min Chen' 'Zhiqiang Pu' 'Tenghai Qiu' 'Jianqiang Yi']"
]
|
null | null | 2404.05967 | null | null | http://arxiv.org/pdf/2404.05967v1 | 2024-04-09T02:55:12Z | 2024-04-09T02:55:12Z | JSTR: Judgment Improves Scene Text Recognition | In this paper, we present a method for enhancing the accuracy of scene text recognition tasks by judging whether the image and text match each other. While previous studies focused on generating the recognition results from input images, our approach also considers the model's misrecognition results to understand its error tendencies, thus improving the text recognition pipeline. This method boosts text recognition accuracy by providing explicit feedback on the data that the model is likely to misrecognize by predicting correct or incorrect between the image and text. The experimental results on publicly available datasets demonstrate that our proposed method outperforms the baseline and state-of-the-art methods in scene text recognition. | [
"['Masato Fujitake']"
]
|
null | null | 2404.05971 | null | null | http://arxiv.org/pdf/2404.05971v1 | 2024-04-09T02:59:17Z | 2024-04-09T02:59:17Z | Does Transformer Interpretability Transfer to RNNs? | Recent advances in recurrent neural network architectures, such as Mamba and RWKV, have enabled RNNs to match or exceed the performance of equal-size transformers in terms of language modeling perplexity and downstream evaluations, suggesting that future systems may be built on completely new architectures. In this paper, we examine if selected interpretability methods originally designed for transformer language models will transfer to these up-and-coming recurrent architectures. Specifically, we focus on steering model outputs via contrastive activation addition, on eliciting latent predictions via the tuned lens, and eliciting latent knowledge from models fine-tuned to produce false outputs under certain conditions. Our results show that most of these techniques are effective when applied to RNNs, and we show that it is possible to improve some of them by taking advantage of RNNs' compressed state. | [
"['Gonçalo Paulo' 'Thomas Marshall' 'Nora Belrose']"
]
|
null | null | 2404.05976 | null | null | http://arxiv.org/pdf/2404.05976v1 | 2024-04-09T03:10:45Z | 2024-04-09T03:10:45Z | A Cyber Manufacturing IoT System for Adaptive Machine Learning Model
Deployment by Interactive Causality Enabled Self-Labeling | Machine Learning (ML) has been demonstrated to improve productivity in many manufacturing applications. To host these ML applications, several software and Industrial Internet of Things (IIoT) systems have been proposed for manufacturing applications to deploy ML applications and provide real-time intelligence. Recently, an interactive causality enabled self-labeling method has been proposed to advance adaptive ML applications in cyber-physical systems, especially manufacturing, by automatically adapting and personalizing ML models after deployment to counter data distribution shifts. The unique features of the self-labeling method require a novel software system to support dynamism at various levels. This paper proposes the AdaptIoT system, comprised of an end-to-end data streaming pipeline, ML service integration, and an automated self-labeling service. The self-labeling service consists of causal knowledge bases and automated full-cycle self-labeling workflows to adapt multiple ML models simultaneously. AdaptIoT employs a containerized microservice architecture to deliver a scalable and portable solution for small and medium-sized manufacturers. A field demonstration of a self-labeling adaptive ML application is conducted with a makerspace and shows reliable performance. | [
"['Yutian Ren' 'Yuqi He' 'Xuyin Zhang' 'Aaron Yen' 'G. P. Li']"
]
|
null | null | 2404.05980 | null | null | http://arxiv.org/pdf/2404.05980v4 | 2024-07-06T23:15:17Z | 2024-04-09T03:24:10Z | Tackling Structural Hallucination in Image Translation with Local
Diffusion | Recent developments in diffusion models have advanced conditioned image generation, yet they struggle with reconstructing out-of-distribution (OOD) images, such as unseen tumors in medical images, causing "image hallucination" and risking misdiagnosis. We hypothesize such hallucinations result from local OOD regions in the conditional images. We verify that partitioning the OOD region and conducting separate image generations alleviates hallucinations in several applications. From this, we propose a training-free diffusion framework that reduces hallucination with multiple Local Diffusion processes. Our approach involves OOD estimation followed by two modules: a "branching" module generates locally both within and outside OOD regions, and a "fusion" module integrates these predictions into one. Our evaluation shows our method mitigates hallucination over baseline models quantitatively and qualitatively, reducing misdiagnosis by 40% and 25% in the real-world medical and natural image datasets, respectively. It also demonstrates compatibility with various pre-trained diffusion models. | [
"['Seunghoi Kim' 'Chen Jin' 'Tom Diethe' 'Matteo Figini'\n 'Henry F. J. Tregidgo' 'Asher Mullokandov' 'Philip Teare'\n 'Daniel C. Alexander']"
]
|
null | null | 2404.05981 | null | null | http://arxiv.org/pdf/2404.05981v1 | 2024-04-09T03:27:09Z | 2024-04-09T03:27:09Z | A Lightweight Measure of Classification Difficulty from Application
Dataset Characteristics | Despite accuracy and computation benchmarks being widely available to help choose among neural network models, these are usually trained on datasets with many classes, and do not give a precise idea of performance for applications of few (< 10) classes. The conventional procedure to predict performance is to train and test repeatedly on the different models and dataset variations of interest. However, this is computationally expensive. We propose an efficient classification difficulty measure that is calculated from the number of classes and intra- and inter-class similarity metrics of the dataset. After a single stage of training and testing per model family, relative performance for different datasets and models of the same family can be predicted by comparing difficulty measures - without further training and testing. We show how this measure can help a practitioner select a computationally efficient model for a small dataset 6 to 29x faster than through repeated training and testing. We give an example of use of the measure for an industrial application in which options are identified to select a model 42% smaller than the baseline YOLOv5-nano model, and if class merging from 3 to 2 classes meets requirements, 85% smaller. | [
"['Bryan Bo Cao' 'Abhinav Sharma' \"Lawrence O'Gorman\" 'Michael Coss'\n 'Shubham Jain']"
]
|
null | null | 2404.05985 | null | null | http://arxiv.org/pdf/2404.05985v2 | 2024-04-11T08:21:27Z | 2024-04-09T03:36:39Z | Boosting Digital Safeguards: Blending Cryptography and Steganography | In today's digital age, the internet is essential for communication and the sharing of information, creating a critical need for sophisticated data security measures to prevent unauthorized access and exploitation. Cryptography encrypts messages into a cipher text that is incomprehensible to unauthorized readers, thus safeguarding data during its transmission. Steganography, on the other hand, originates from the Greek term for "covered writing" and involves the art of hiding data within another medium, thereby facilitating covert communication by making the message invisible. This proposed approach takes advantage of the latest advancements in Artificial Intelligence (AI) and Deep Learning (DL), especially through the application of Generative Adversarial Networks (GANs), to improve upon traditional steganographic methods. By embedding encrypted data within another medium, our method ensures that the communication remains hidden from prying eyes. The application of GANs enables a smart, secure system that utilizes the inherent sensitivity of neural networks to slight alterations in data, enhancing the protection against detection. By merging the encryption techniques of cryptography with the hiding capabilities of steganography, and augmenting these with the strengths of AI, we introduce a comprehensive security system designed to maintain both the privacy and integrity of information. This system is crafted not just to prevent unauthorized access or modification of data, but also to keep the existence of the data hidden. This fusion of technologies tackles the core challenges of data security in the current era of open digital communication, presenting an advanced solution with the potential to transform the landscape of information security. | [
"['Anamitra Maiti' 'Subham Laha' 'Rishav Upadhaya' 'Soumyajit Biswas'\n 'Vikas Chaudhary' 'Biplab Kar' 'Nikhil Kumar' 'Jaydip Sen']"
]
|
null | null | 2404.05993 | null | null | http://arxiv.org/pdf/2404.05993v1 | 2024-04-09T03:54:28Z | 2024-04-09T03:54:28Z | AEGIS: Online Adaptive AI Content Safety Moderation with Ensemble of LLM
Experts | As Large Language Models (LLMs) and generative AI become more widespread, the content safety risks associated with their use also increase. We find a notable deficiency in high-quality content safety datasets and benchmarks that comprehensively cover a wide range of critical safety areas. To address this, we define a broad content safety risk taxonomy, comprising 13 critical risk and 9 sparse risk categories. Additionally, we curate AEGISSAFETYDATASET, a new dataset of approximately 26,000 human-LLM interaction instances, complete with human annotations adhering to the taxonomy. We plan to release this dataset to the community to further research and to help benchmark LLM models for safety. To demonstrate the effectiveness of the dataset, we instruction-tune multiple LLM-based safety models. We show that our models (named AEGISSAFETYEXPERTS) not only surpass or perform competitively with the state-of-the-art LLM-based safety models and general purpose LLMs, but also exhibit robustness across multiple jail-break attack categories. We also show how using AEGISSAFETYDATASET during the LLM alignment phase does not negatively impact the performance of the aligned models on MT Bench scores. Furthermore, we propose AEGIS, a novel application of a no-regret online adaptation framework with strong theoretical guarantees, to perform content moderation with an ensemble of LLM content safety experts in deployment. | [
"['Shaona Ghosh' 'Prasoon Varshney' 'Erick Galinkin' 'Christopher Parisien']"
]
|
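The AEGIS abstract above mentions a no-regret online adaptation framework over an ensemble of LLM safety experts. The sketch below is a generic Hedge (multiplicative-weights) aggregator over per-expert unsafe-scores, included only to illustrate the no-regret idea; the expert models, loss definition, and update used in AEGIS are assumptions here, not taken from the paper.

```python
import numpy as np

def hedge_moderation(expert_scores, labels, eta=0.5):
    """Hedge / multiplicative-weights aggregation over K safety experts.

    expert_scores: (T, K) array of per-expert unsafe-probabilities in [0, 1].
    labels:        (T,) array of ground-truth flags (1 = unsafe).
    Returns the ensemble decisions and the final normalized expert weights.
    """
    T, K = expert_scores.shape
    w = np.ones(K) / K
    decisions = np.zeros(T)
    for t in range(T):
        p = w / w.sum()
        decisions[t] = float(p @ expert_scores[t] >= 0.5)   # weighted vote
        losses = np.abs(expert_scores[t] - labels[t])        # per-expert loss in [0, 1]
        w *= np.exp(-eta * losses)                           # multiplicative update
    return decisions, w / w.sum()

scores = np.random.default_rng(0).random((100, 4))
labels = (scores.mean(axis=1) > 0.5).astype(float)
decisions, weights = hedge_moderation(scores, labels)
```

Hedge guarantees regret of order $\sqrt{T \ln K}$ against the best single expert in hindsight, which is the sense in which such an ensemble is "no-regret".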
null | null | 2404.06007 | null | null | http://arxiv.org/pdf/2404.06007v1 | 2024-04-09T04:26:16Z | 2024-04-09T04:26:16Z | Collaborative Edge AI Inference over Cloud-RAN | In this paper, a cloud radio access network (Cloud-RAN) based collaborative edge AI inference architecture is proposed. Specifically, geographically distributed devices capture real-time noise-corrupted sensory data samples and extract the noisy local feature vectors, which are then aggregated at each remote radio head (RRH) to suppress sensing noise. To realize efficient uplink feature aggregation, we allow each RRH to receive local feature vectors from all devices over the same resource blocks simultaneously by leveraging an over-the-air computation (AirComp) technique. Thereafter, these aggregated feature vectors are quantized and transmitted to a central processor (CP) for further aggregation and downstream inference tasks. Our aim in this work is to maximize the inference accuracy via a surrogate accuracy metric called discriminant gain, which measures the discernibility of different classes in the feature space. The key challenges lie in simultaneously suppressing the coupled sensing noise, the AirComp distortion caused by hostile wireless channels, and the quantization error resulting from the limited capacity of fronthaul links. To address these challenges, this work proposes a joint transmit precoding, receive beamforming, and quantization error control scheme to enhance the inference accuracy. Extensive numerical experiments demonstrate the effectiveness and superiority of our proposed optimization algorithm compared to various baselines. | [
"['Pengfei Zhang' 'Dingzhu Wen' 'Guangxu Zhu' 'Qimei Chen' 'Kaifeng Han'\n 'Yuanming Shi']"
]
|
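The Cloud-RAN abstract above relies on over-the-air computation to aggregate feature vectors transmitted on a shared resource block. The numpy sketch below illustrates the core AirComp idea with channel-inversion precoding; this simple scheme is an assumption for illustration, whereas the paper jointly optimizes precoding, receive beamforming, and quantization.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 8, 16                                        # devices, feature dimension
feats = rng.normal(size=(K, d))                     # noisy local feature vectors
h = rng.normal(size=K) + 1j * rng.normal(size=K)    # device-to-RRH channel coefficients
c = 0.1 * np.abs(h).min()                           # common scaling kept within a power budget
b = c / h                                           # channel-inversion precoders (assumes CSI at devices)

# All devices transmit simultaneously; the channel adds their signals "over the air".
noise = 0.01 * (rng.normal(size=d) + 1j * rng.normal(size=d))
y = (h[:, None] * b[:, None] * feats).sum(axis=0) + noise

aggregated = np.real(y) / (c * K)                   # estimate of the average feature vector
print(np.linalg.norm(aggregated - feats.mean(axis=0)))
```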
null | null | 2404.06013 | null | null | http://arxiv.org/pdf/2404.06013v1 | 2024-04-09T04:45:18Z | 2024-04-09T04:45:18Z | Feel-Good Thompson Sampling for Contextual Dueling Bandits | Contextual dueling bandits, where a learner compares two options based on context and receives feedback indicating which was preferred, extend classic dueling bandits by incorporating contextual information for decision-making and preference learning. Several algorithms based on the upper confidence bound (UCB) have been proposed for linear contextual dueling bandits. However, no algorithm based on posterior sampling has been developed in this setting, despite the empirical success observed in traditional contextual bandits. In this paper, we propose a Thompson sampling algorithm, named FGTS.CDB, for linear contextual dueling bandits. At the core of our algorithm is a new Feel-Good exploration term specifically tailored for dueling bandits. This term leverages the independence of the two selected arms, thereby avoiding a cross term in the analysis. We show that our algorithm achieves nearly minimax-optimal regret, i.e., $\tilde{\mathcal{O}}(d\sqrt{T})$, where $d$ is the model dimension and $T$ is the time horizon. Finally, we evaluate our algorithm on synthetic data and observe that FGTS.CDB outperforms existing algorithms by a large margin. | [
"['Xuheng Li' 'Heyang Zhao' 'Quanquan Gu']"
]
|
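For orientation, the sketch below shows a plain linear Thompson sampling loop for dueling feedback: sample two parameter vectors from a Gaussian posterior, pick one arm with each, and update a ridge-regression-style posterior on the feature difference. The Feel-Good exploration term that defines FGTS.CDB is deliberately omitted, so this is a generic baseline, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 20, 2000
theta_star = rng.normal(size=d)
X = rng.normal(size=(n_arms, d))            # contextual arm features (kept fixed here)

lam, sigma2 = 1.0, 1.0
V = lam * np.eye(d)                         # posterior precision
b = np.zeros(d)

for t in range(T):
    theta_hat = np.linalg.solve(V, b)
    cov = sigma2 * np.linalg.inv(V)
    # Sample two independent parameter draws and pick one arm with each.
    t1 = rng.multivariate_normal(theta_hat, cov)
    t2 = rng.multivariate_normal(theta_hat, cov)
    i, j = int(np.argmax(X @ t1)), int(np.argmax(X @ t2))
    # Preference feedback from a logistic model on the feature difference.
    z = X[i] - X[j]
    y = rng.random() < 1.0 / (1.0 + np.exp(-z @ theta_star))   # 1 if arm i preferred
    # Simple least-squares style update on (z, y - 0.5).
    V += np.outer(z, z)
    b += (float(y) - 0.5) * z
```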
null | null | 2404.06023 | null | null | http://arxiv.org/pdf/2404.06023v2 | 2024-04-24T17:15:42Z | 2024-04-09T05:12:44Z | Prelimit Coupling and Steady-State Convergence of Constant-stepsize
Nonsmooth Contractive SA | Motivated by Q-learning, we study nonsmooth contractive stochastic approximation (SA) with constant stepsize. We focus on two important classes of dynamics: 1) nonsmooth contractive SA with additive noise, and 2) synchronous and asynchronous Q-learning, which features both additive and multiplicative noise. For both dynamics, we establish weak convergence of the iterates to a stationary limit distribution in Wasserstein distance. Furthermore, we propose a prelimit coupling technique for establishing steady-state convergence and characterize the limit of the stationary distribution as the stepsize goes to zero. Using this result, we derive that the asymptotic bias of nonsmooth SA is proportional to the square root of the stepsize, which stands in sharp contrast to smooth SA. This bias characterization allows for the use of Richardson-Romberg extrapolation for bias reduction in nonsmooth SA. | [
"['Yixuan Zhang' 'Dongyan Huo' 'Yudong Chen' 'Qiaomin Xie']"
]
|
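The abstract above states that the asymptotic bias of nonsmooth SA scales with the square root of the stepsize, which is what makes Richardson-Romberg extrapolation applicable. A hedged sketch of the cancellation, assuming the bias admits an expansion of this form:

```latex
\bar{\theta}_{\alpha} \approx \theta^{\star} + c\sqrt{\alpha} + o(\sqrt{\alpha}),
\qquad
\bar{\theta}_{\alpha/4} \approx \theta^{\star} + \tfrac{c}{2}\sqrt{\alpha} + o(\sqrt{\alpha}),
\qquad\Longrightarrow\qquad
2\,\bar{\theta}_{\alpha/4} - \bar{\theta}_{\alpha} \approx \theta^{\star} + o(\sqrt{\alpha}).
```

This contrasts with smooth SA, where the bias is of order $\alpha$ and the standard extrapolation pairs stepsizes $\alpha$ and $\alpha/2$ with the same weights.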
null | null | 2404.06063 | null | null | http://arxiv.org/pdf/2404.06063v1 | 2024-04-09T07:02:14Z | 2024-04-09T07:02:14Z | All in One: An Empirical Study of GPT for Few-Shot Aspect-Based
Sentiment Analysis | Aspect-Based Sentiment Analysis (ABSA) is an indispensable and highly challenging task in natural language processing. Current efforts have focused on specific sub-tasks, making it difficult to comprehensively cover all sub-tasks within the ABSA domain. With the development of Generative Pre-trained Transformers (GPTs), there came inspiration for a one-stop solution to sentiment analysis. In this study, we used GPTs for all sub-tasks of few-shot ABSA while defining a general learning paradigm for this application. We propose the All in One (AiO) model, a simple yet effective two-stage model for all ABSA sub-tasks. In the first stage, a specific backbone network learns the semantic information of the review and generates heuristically enhanced candidates. In the second stage, AiO leverages GPT's in-context learning capabilities to generate predictions. The study conducted comprehensive comparative and ablation experiments on five benchmark datasets, and the results show that AiO can effectively handle all ABSA sub-tasks, even with few-shot data. | [
"['Baoxing Jiang']"
]
|
null | null | 2404.06090 | null | null | http://arxiv.org/pdf/2404.06090v1 | 2024-04-09T07:49:05Z | 2024-04-09T07:49:05Z | Fair Graph Neural Network with Supervised Contrastive Regularization | In recent years, Graph Neural Networks (GNNs) have made significant advancements, particularly in tasks such as node classification, link prediction, and graph representation. However, challenges arise from biases that can be hidden not only in the node attributes but also in the connections between entities. Therefore, ensuring fairness in graph neural network learning has become a critical problem. To address this issue, we propose a novel model for training a fairness-aware GNN, which enhances the Counterfactual Augmented Fair Graph Neural Network Framework (CAF). Our approach integrates Supervised Contrastive Loss and Environmental Loss to enhance both accuracy and fairness. Experimental validation on three real datasets demonstrates the superiority of our proposed model over CAF and several other existing graph-based learning methods. | [
"['Mahdi Tavassoli Kejani' 'Fadi Dornaika' 'Jean-Michel Loubes']"
]
|
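The abstract above integrates a supervised contrastive loss into fairness-aware GNN training. The sketch below implements the standard supervised contrastive loss (Khosla et al.) on a batch of node embeddings; the environmental loss and the CAF-based counterfactual augmentation from the paper are not shown, and how the loss is attached to the GNN here is an assumption.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z: torch.Tensor, y: torch.Tensor, tau: float = 0.1):
    """Supervised contrastive loss over a batch of embeddings z (N, d) with labels y (N,)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau                                          # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))              # exclude the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # log-softmax over non-anchors
    pos = (y[:, None] == y[None, :]) & ~self_mask                # positives share the label
    mean_log_prob_pos = log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return -mean_log_prob_pos.mean()

z = torch.randn(32, 64, requires_grad=True)   # e.g. GNN node embeddings
y = torch.randint(0, 4, (32,))
loss = supervised_contrastive_loss(z, y)
loss.backward()
```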
null | null | 2404.06104 | null | null | http://arxiv.org/pdf/2404.06104v1 | 2024-04-09T08:11:46Z | 2024-04-09T08:11:46Z | A singular Riemannian Geometry Approach to Deep Neural Networks III.
Piecewise Differentiable Layers and Random Walks on $n$-dimensional Classes | Neural networks are playing a crucial role in everyday life, with the most modern generative models able to achieve impressive results. Nonetheless, their functioning is still not very clear, and several strategies have been adopted to study how and why these models reach their outputs. A common approach is to consider the data in a Euclidean setting; recent years have instead witnessed a shift from this paradigm toward a more general framework, namely Riemannian geometry. Two recent works introduced a geometric framework to study neural networks making use of singular Riemannian metrics. In this paper we extend these results to convolutional, residual and recursive neural networks, also studying the case of non-differentiable activation functions, such as ReLU. We illustrate our findings with some numerical experiments on classification of images and thermodynamic problems. | [
"['Alessandro Benfenati' 'Alessio Marta']"
]
|
null | null | 2404.06106 | null | null | http://arxiv.org/pdf/2404.06106v1 | 2024-04-09T08:17:32Z | 2024-04-09T08:17:32Z | Unifying Low Dimensional Observations in Deep Learning Through the Deep
Linear Unconstrained Feature Model | Modern deep neural networks have achieved high performance across various tasks. Recently, researchers have noted occurrences of low-dimensional structure in the weights, Hessians, gradients, and feature vectors of these networks, spanning different datasets and architectures when trained to convergence. In this analysis, we theoretically demonstrate how these observations arise and show how they can be unified within a generalized unconstrained feature model that can be considered analytically. Specifically, we consider a previously described structure called Neural Collapse, and its multi-layer counterpart, Deep Neural Collapse, which emerges when the network approaches global optima. This phenomenon explains the other observed low-dimensional behaviours on a layer-wise level, such as the bulk and outlier structure seen in Hessian spectra, and the alignment of gradient descent with the outlier eigenspace of the Hessian. Empirical results in both the deep linear unconstrained feature model and its non-linear equivalent support these predicted observations. | [
"['Connall Garrod' 'Jonathan P. Keating']"
]
|
null | null | 2404.06125 | null | null | http://arxiv.org/pdf/2404.06125v1 | 2024-04-09T08:49:41Z | 2024-04-09T08:49:41Z | Learning Model Predictive Control Parameters via Bayesian Optimization
for Battery Fast Charging | Tuning parameters in model predictive control (MPC) presents significant challenges, particularly when there is a notable discrepancy between the controller's predictions and the actual behavior of the closed-loop plant. This mismatch may stem from factors like substantial model-plant differences, limited prediction horizons that do not cover the entire time of interest, or unforeseen system disturbances. Such mismatches can jeopardize both performance and safety, including constraint satisfaction. Traditional methods address this issue by modifying the finite horizon cost function to better reflect the overall operational cost, learning parts of the prediction model from data, or implementing robust MPC strategies, which might be either computationally intensive or overly cautious. As an alternative, directly optimizing or learning the controller parameters to enhance closed-loop performance has been proposed. We apply Bayesian optimization for efficient learning of unknown model parameters and parameterized constraint backoff terms, aiming to improve closed-loop performance of battery fast charging. This approach establishes a hierarchical control framework where Bayesian optimization directly fine-tunes closed-loop behavior towards a global and long-term objective, while MPC handles lower-level, short-term control tasks. For lithium-ion battery fast charging, we show that the learning approach not only ensures safe operation but also maximizes closed-loop performance. This includes maintaining the battery's operation below its maximum terminal voltage and reducing charging times, all achieved using a standard nominal MPC model with a short horizon and notable initial model-plant mismatch. | [
"['Sebastian Hirt' 'Andreas Höhl' 'Joachim Schaeffer' 'Johannes Pohlodek'\n 'Richard D. Braatz' 'Rolf Findeisen']"
]
|
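The abstract above tunes MPC parameters, such as constraint backoff terms, by treating the closed-loop cost as a black box for Bayesian optimization. The sketch below is a minimal Gaussian-process plus expected-improvement loop over a single hypothetical backoff parameter; the `closed_loop_cost` function is a stand-in for running an actual MPC-controlled charging episode and is not taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def closed_loop_cost(backoff: float) -> float:
    # Hypothetical stand-in: run one MPC-controlled fast-charge episode with this
    # constraint backoff; return charging time plus a penalty for voltage violations.
    violation = max(0.0, 0.05 - backoff)            # small backoffs risk violations
    return 10.0 + 50.0 * backoff + 400.0 * violation + 0.1 * np.random.randn()

bounds = (0.0, 0.2)
X = list(np.linspace(*bounds, 4))                   # initial design points
Y = [closed_loop_cost(x) for x in X]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(np.array(X)[:, None], Y)
    cand = np.linspace(*bounds, 200)[:, None]
    mu, std = gp.predict(cand, return_std=True)
    best = min(Y)
    z = (best - mu) / np.maximum(std, 1e-9)
    ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)   # expected improvement (minimization)
    x_next = float(cand[np.argmax(ei), 0])
    X.append(x_next)
    Y.append(closed_loop_cost(x_next))

print("best backoff:", X[int(np.argmin(Y))], "cost:", min(Y))
```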
null | null | 2404.06129 | null | null | http://arxiv.org/pdf/2404.06129v2 | 2024-04-23T11:17:51Z | 2024-04-09T08:56:43Z | Adaptable Recovery Behaviors in Robotics: A Behavior Trees and Motion
Generators (BTMG) Approach for Failure Management | In dynamic operational environments, particularly in collaborative robotics, the inevitability of failures necessitates robust and adaptable recovery strategies. Traditional automated recovery strategies, while effective for predefined scenarios, often lack the flexibility required for on-the-fly task management and adaptation to expected failures. Addressing this gap, we propose a novel approach that models recovery behaviors as adaptable robotic skills, leveraging the Behavior Trees and Motion Generators (BTMG) framework for policy representation. This approach distinguishes itself by employing reinforcement learning (RL) to dynamically refine recovery behavior parameters, enabling a tailored response to a wide array of failure scenarios with minimal human intervention. We assess our methodology through a series of progressively challenging scenarios within a peg-in-a-hole task, demonstrating the approach's effectiveness in enhancing operational efficiency and task success rates in collaborative robotics settings. We validate our approach using a dual-arm KUKA robot. | [
"['Faseeh Ahmad' 'Matthias Mayr' 'Sulthan Suresh-Fazeela' 'Volker Krueger']"
]
|
null | null | 2404.06144 | null | null | http://arxiv.org/pdf/2404.06144v1 | 2024-04-09T09:09:36Z | 2024-04-09T09:09:36Z | Differential Privacy for Anomaly Detection: Analyzing the Trade-off
Between Privacy and Explainability | Anomaly detection (AD), also referred to as outlier detection, is a statistical process aimed at identifying observations within a dataset that significantly deviate from the expected pattern of the majority of the data. Such a process finds wide application in various fields, such as finance and healthcare. While the primary objective of AD is to yield high detection accuracy, the requirements of explainability and privacy are also paramount. The first ensures the transparency of the AD process, while the second guarantees that no sensitive information is leaked to untrusted parties. In this work, we exploit the trade-off of applying Explainable AI (XAI) through SHapley Additive exPlanations (SHAP) and differential privacy (DP). We perform AD with different models and on various datasets, and we thoroughly evaluate the cost of privacy in terms of decreased accuracy and explainability. Our results show that the enforcement of privacy through DP has a significant impact on detection accuracy and explainability, which depends on both the dataset and the considered AD model. We further show that the visual interpretation of explanations is also influenced by the choice of the AD algorithm. | [
"['Fatima Ezzeddine' 'Mirna Saad' 'Omran Ayoub' 'Davide Andreoletti'\n 'Martin Gjoreski' 'Ihab Sbeity' 'Marc Langheinrich' 'Silvia Giordano']"
]
|
null | null | 2404.06153 | null | null | http://arxiv.org/pdf/2404.06153v1 | 2024-04-09T09:25:16Z | 2024-04-09T09:25:16Z | scRDiT: Generating single-cell RNA-seq data by diffusion transformers
and accelerating sampling | Motivation: Single-cell RNA sequencing (scRNA-seq) is a groundbreaking technology extensively utilized in biological research, facilitating the examination of gene expression at the individual cell level within a given tissue sample. While numerous tools have been developed for scRNA-seq data analysis, the challenge persists in capturing the distinct features of such data and replicating virtual datasets that share analogous statistical properties. Results: Our study introduces a generative approach termed scRNA-seq Diffusion Transformer (scRDiT). This method generates virtual scRNA-seq data by leveraging a real dataset. The method is a neural network constructed based on Denoising Diffusion Probabilistic Models (DDPMs) and Diffusion Transformers (DiTs). This involves corrupting the real dataset with Gaussian noise through iterative noise-adding steps and ultimately reversing the noise to form scRNA-seq samples. This scheme allows us to learn data features from actual scRNA-seq samples during model training. Our experiments, conducted on two distinct scRNA-seq datasets, demonstrate superior performance. Additionally, the model sampling process is expedited by incorporating Denoising Diffusion Implicit Models (DDIM). scRDiT presents a unified methodology empowering users to train neural network models with their unique scRNA-seq datasets, enabling the generation of numerous high-quality scRNA-seq samples. Availability and implementation: https://github.com/DongShengze/scRDiT | [
"['Shengze Dong' 'Zhuorui Cui' 'Ding Liu' 'Jinzhi Lei']"
]
|
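The abstract above describes the standard DDPM recipe of iteratively adding Gaussian noise to real samples and learning to reverse it. The sketch below shows only the closed-form forward (noising) step and the training target; the DiT backbone and the DDIM sampler used by scRDiT are not reproduced, and the linear noise schedule is an assumed choice.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (an assumed choice)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative product \bar{alpha}_t

def add_noise(x0: np.ndarray, t: int, rng: np.random.Generator):
    """Closed-form forward step: q(x_t | x_0) = N(sqrt(ab_t) * x_0, (1 - ab_t) * I)."""
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps                       # the denoiser is trained to predict eps from (x_t, t)

rng = np.random.default_rng(0)
x0 = rng.normal(size=2000)                # e.g. one cell's normalized expression vector
x_t, eps = add_noise(x0, t=500, rng=rng)
```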
null | null | 2404.06162 | null | null | http://arxiv.org/pdf/2404.06162v2 | 2024-05-08T04:36:07Z | 2024-04-09T09:34:25Z | Characterizing Multimodal Long-form Summarization: A Case Study on
Financial Reports | As large language models (LLMs) expand the power of natural language processing to handle long inputs, rigorous and systematic analyses are necessary to understand their abilities and behavior. A salient application is summarization, due to its ubiquity and controversy (e.g., researchers have declared the death of summarization). In this paper, we use financial report summarization as a case study because financial reports not only are long but also use numbers and tables extensively. We propose a computational framework for characterizing multimodal long-form summarization and investigate the behavior of Claude 2.0/2.1, GPT-4/3.5, and Command. We find that GPT-3.5 and Command fail to perform this summarization task meaningfully. For Claude 2 and GPT-4, we analyze the extractiveness of the summary and identify a position bias in LLMs. This position bias disappears after shuffling the input for Claude, which suggests that Claude has the ability to recognize important information. We also conduct a comprehensive investigation on the use of numeric data in LLM-generated summaries and offer a taxonomy of numeric hallucination. We employ prompt engineering to improve GPT-4's use of numbers with limited success. Overall, our analyses highlight the strong capability of Claude 2 in handling long multimodal inputs compared to GPT-4. | [
"['Tianyu Cao' 'Natraj Raman' 'Danial Dervovic' 'Chenhao Tan']"
]
|
null | null | 2404.06167 | null | null | http://arxiv.org/pdf/2404.06167v1 | 2024-04-09T09:46:17Z | 2024-04-09T09:46:17Z | scCDCG: Efficient Deep Structural Clustering for single-cell RNA-seq via
Deep Cut-informed Graph Embedding | Single-cell RNA sequencing (scRNA-seq) is essential for unraveling cellular heterogeneity and diversity, offering invaluable insights for bioinformatics advancements. Despite its potential, traditional clustering methods in scRNA-seq data analysis often neglect the structural information embedded in gene expression profiles, crucial for understanding cellular correlations and dependencies. Existing strategies, including graph neural networks, face challenges in handling the inefficiency due to scRNA-seq data's intrinsic high-dimension and high-sparsity. Addressing these limitations, we introduce scCDCG (single-cell RNA-seq Clustering via Deep Cut-informed Graph), a novel framework designed for efficient and accurate clustering of scRNA-seq data that simultaneously utilizes intercellular high-order structural information. scCDCG comprises three main components: (i) A graph embedding module utilizing deep cut-informed techniques, which effectively captures intercellular high-order structural information, overcoming the over-smoothing and inefficiency issues prevalent in prior graph neural network methods. (ii) A self-supervised learning module guided by optimal transport, tailored to accommodate the unique complexities of scRNA-seq data, specifically its high-dimension and high-sparsity. (iii) An autoencoder-based feature learning module that simplifies model complexity through effective dimension reduction and feature extraction. Our extensive experiments on 6 datasets demonstrate scCDCG's superior performance and efficiency compared to 7 established models, underscoring scCDCG's potential as a transformative tool in scRNA-seq data analysis. Our code is available at: https://github.com/XPgogogo/scCDCG. | [
"['Ping Xu' 'Zhiyuan Ning' 'Meng Xiao' 'Guihai Feng' 'Xin Li'\n 'Yuanchun Zhou' 'Pengfei Wang']"
]
|
null | null | 2404.06170 | null | null | http://arxiv.org/pdf/2404.06170v1 | 2024-04-09T09:49:57Z | 2024-04-09T09:49:57Z | CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using
Embeddings as Teachers | Contrastive Language-Image Pre-training (CLIP) has been shown to improve zero-shot generalization capabilities of language and vision models. In this paper, we extend CLIP for efficient knowledge distillation, by utilizing embeddings as teachers. Typical knowledge distillation frameworks require running forward passes through a teacher model, which is often prohibitive in the case of billion or trillion parameter teachers. In these cases, using only the embeddings of the teacher models to guide the distillation can yield significant computational savings. Our preliminary findings show that CLIP-based knowledge distillation with embeddings can outperform full scale knowledge distillation using $9\times$ less memory and $8\times$ less training time. Code available at: https://github.com/lnairGT/CLIP-Distillation/ | [
"['Lakshmi Nair']"
]
|
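The abstract above distills knowledge from precomputed teacher embeddings so that no forward pass through the (possibly huge) teacher is needed. The PyTorch sketch below pairs a cross-entropy term with a cosine-alignment term against cached CLIP embeddings; the exact loss, projection, and training setup of CLIP-Embed-KD are not taken from the paper and should be treated as illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingDistiller(nn.Module):
    """Student model distilled from precomputed teacher embeddings (no teacher forward pass)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, teacher_dim: int, n_classes: int):
        super().__init__()
        self.backbone = backbone
        self.proj = nn.Linear(feat_dim, teacher_dim)   # map student features to teacher space
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        f = self.backbone(x)
        return self.head(f), self.proj(f)

def distill_loss(logits, proj_feats, labels, teacher_emb, alpha=0.5):
    ce = F.cross_entropy(logits, labels)
    align = 1.0 - F.cosine_similarity(proj_feats, teacher_emb, dim=-1).mean()
    return (1 - alpha) * ce + alpha * align

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
model = EmbeddingDistiller(backbone, feat_dim=256, teacher_dim=512, n_classes=10)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
teacher_emb = torch.randn(8, 512)          # stand-in for precomputed CLIP image embeddings
logits, proj = model(x)
loss = distill_loss(logits, proj, y, teacher_emb)
loss.backward()
```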
null | null | 2404.06188 | null | null | http://arxiv.org/pdf/2404.06188v1 | 2024-04-09T10:15:18Z | 2024-04-09T10:15:18Z | Diverse Randomized Value Functions: A Provably Pessimistic Approach for
Offline Reinforcement Learning | Offline Reinforcement Learning (RL) faces distributional shift and unreliable value estimation, especially for out-of-distribution (OOD) actions. To address this, existing uncertainty-based methods penalize the value function with uncertainty quantification and demand numerous ensemble networks, posing computational challenges and yielding suboptimal outcomes. In this paper, we introduce a novel strategy employing diverse randomized value functions to estimate the posterior distribution of $Q$-values. It provides robust uncertainty quantification and estimates lower confidence bounds (LCB) of $Q$-values. By applying moderate value penalties for OOD actions, our method fosters a provably pessimistic approach. We also emphasize diversity within randomized value functions and enhance efficiency by introducing a diversity regularization method, reducing the requisite number of networks. These modules lead to reliable value estimation and efficient policy learning from offline data. Theoretical analysis shows that our method recovers the provably efficient LCB-penalty under linear MDP assumptions. Extensive empirical results also demonstrate that our proposed method significantly outperforms baseline methods in terms of performance and parametric efficiency. | [
"['Xudong Yu' 'Chenjia Bai' 'Hongyi Guo' 'Changhong Wang' 'Zhen Wang']"
]
|
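The abstract above builds a pessimistic value estimate from an ensemble of randomized Q-functions. The sketch below computes a lower-confidence-bound target as the ensemble mean minus a multiple of the ensemble standard deviation, plus one simple diversity regularizer; the regularizer and network sizes are assumptions for illustration, not the paper's construction.

```python
import torch
import torch.nn as nn

def lcb_q(q_ensemble, state, action, kappa=1.0):
    """Pessimistic Q estimate: ensemble mean minus kappa times ensemble std."""
    sa = torch.cat([state, action], dim=-1)
    qs = torch.stack([q(sa) for q in q_ensemble])        # (N, batch, 1)
    return qs.mean(dim=0) - kappa * qs.std(dim=0)

def diversity_penalty(q_ensemble, sa):
    # One simple (assumed) regularizer: encourage disagreement by penalizing low variance.
    qs = torch.stack([q(sa) for q in q_ensemble])
    return -qs.var(dim=0).mean()

nets = [nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 1)) for _ in range(5)]
s, a = torch.randn(32, 4), torch.randn(32, 2)
pessimistic_target = lcb_q(nets, s, a)                   # used in place of a single Q value
```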
null | null | 2404.06198 | null | null | http://arxiv.org/pdf/2404.06198v2 | 2024-07-05T12:34:41Z | 2024-04-09T10:41:59Z | The impact of data set similarity and diversity on transfer learning
success in time series forecasting | Pre-trained models have become pivotal in enhancing the efficiency and accuracy of time series forecasting on target data sets by leveraging transfer learning. While benchmarks validate the performance of model generalization on various target data sets, there is no structured research providing similarity and diversity measures to explain which characteristics of source and target data lead to transfer learning success. Our study is the first to systematically evaluate the impact of source-target similarity and source diversity on zero-shot and fine-tuned forecasting outcomes in terms of accuracy, bias, and uncertainty estimation. We investigate these dynamics using pre-trained neural networks across five public source datasets, applied to forecasting five target data sets, including real-world wholesale data. We identify two feature-based similarity and diversity measures, finding that source-target similarity reduces forecasting bias, while source diversity improves forecasting accuracy and uncertainty estimation, but increases the bias. | [
"['Claudia Ehrig' 'Benedikt Sonnleitner' 'Ursula Neumann'\n 'Catherine Cleophas' 'Germain Forestier']"
]
|
null | null | 2404.06206 | null | null | http://arxiv.org/pdf/2404.06206v1 | 2024-04-09T10:53:29Z | 2024-04-09T10:53:29Z | Deep Learning Method for Computing Committor Functions with Adaptive
Sampling | The committor function is a central object for quantifying the transitions between metastable states of dynamical systems. Recently, a number of computational methods based on deep neural networks have been developed for computing the high-dimensional committor function. The success of the methods relies on sampling adequate data for the transition, which is still a challenging task for complex systems at low temperatures. In this work, we propose a deep learning method with two novel adaptive sampling schemes (I and II). In the two schemes, the data are generated actively with a modified potential where the bias potential is constructed from the learned committor function. We theoretically demonstrate the advantages of the sampling schemes and show that the data in sampling scheme II are uniformly distributed along the transition tube. This makes it a promising method for studying transitions in complex systems. The efficiency of the method is illustrated in high-dimensional systems including the alanine dipeptide and a solvated dimer system. | [
"['Bo Lin' 'Weiqing Ren']"
]
|
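For context, the committor $q$ in the abstract above solves a boundary-value problem between the metastable sets $A$ and $B$, and for overdamped Langevin dynamics it admits a standard variational characterization that neural-network methods minimize (sketched below); the paper's adaptive sampling schemes, which bias the potential using the learned committor, are not reproduced here.

```latex
\mathcal{L} q = 0 \ \text{ in } \Omega \setminus (A \cup B),
\qquad q|_{\partial A} = 0, \qquad q|_{\partial B} = 1,
\qquad
q = \arg\min_{\substack{u:\; u|_{\partial A}=0,\; u|_{\partial B}=1}}
    \int_{\Omega \setminus (A \cup B)} |\nabla u(x)|^{2}\, e^{-\beta V(x)}\, dx .
```

In practice $u$ is parameterized by a neural network, the integral is estimated from sampled configurations, and the boundary conditions are enforced with penalty terms; adaptive sampling changes the distribution from which those configurations are drawn.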
null | null | 2404.06209 | null | null | http://arxiv.org/pdf/2404.06209v1 | 2024-04-09T10:58:21Z | 2024-04-09T10:58:21Z | Elephants Never Forget: Memorization and Learning of Tabular Data in
Large Language Models | While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over. In this work, we address this concern for tabular data. Specifically, we introduce a variety of different techniques to assess whether a language model has seen a tabular dataset during training. This investigation reveals that LLMs have memorized many popular tabular datasets verbatim. We then compare the few-shot learning performance of LLMs on datasets that were seen during training to the performance on datasets released after training. We find that LLMs perform better on datasets seen during training, indicating that memorization leads to overfitting. At the same time, LLMs show non-trivial performance on novel datasets and are surprisingly robust to data transformations. We then investigate the in-context statistical learning abilities of LLMs. Without fine-tuning, we find them to be limited. This suggests that much of the few-shot performance on novel datasets is due to the LLM's world knowledge. Overall, our results highlight the importance of testing whether an LLM has seen an evaluation dataset during pre-training. We make the exposure tests we developed available as the tabmemcheck Python package at https://github.com/interpretml/LLM-Tabular-Memorization-Checker | [
"['Sebastian Bordt' 'Harsha Nori' 'Vanessa Rodrigues' 'Besmira Nushi'\n 'Rich Caruana']"
]
|
null | null | 2404.06212 | null | null | http://arxiv.org/pdf/2404.06212v1 | 2024-04-09T11:00:19Z | 2024-04-09T11:00:19Z | OmniFusion Technical Report | Last year, multimodal architectures served up a revolution in AI-based approaches and solutions, extending the capabilities of large language models (LLMs). We propose an \textit{OmniFusion} model based on a pretrained LLM and adapters for the visual modality. We evaluated and compared several architecture design principles for better text and visual data coupling: MLP and transformer adapters, various CLIP ViT-based encoders (SigLIP, InternVIT, etc.), their fusing approach, the image encoding method (whole image or tiles encoding) and two 7B LLMs (the proprietary one and open-source Mistral). Experiments on 8 visual-language benchmarks show the top score for the best OmniFusion setup in terms of different VQA tasks in comparison with open-source LLaVA-like solutions: VizWiz, Pope, MM-Vet, ScienceQA, MMBench, TextVQA, VQAv2, MMMU. We also showcase a variety of situations where OmniFusion provides highly detailed answers in different domains: housekeeping, sightseeing, culture, medicine, handwritten and scanned equations recognition, etc. The Mistral-based OmniFusion model is an open-source solution with weights, training and inference scripts available at https://github.com/AIRI-Institute/OmniFusion. | [
"['Elizaveta Goncharova' 'Anton Razzhigaev' 'Matvey Mikhalchuk'\n 'Maxim Kurkin' 'Irina Abdullaeva' 'Matvey Skripkin' 'Ivan Oseledets'\n 'Denis Dimitrov' 'Andrey Kuznetsov']"
]
|
null | null | 2404.06218 | null | null | http://arxiv.org/pdf/2404.06218v1 | 2024-04-09T11:12:39Z | 2024-04-09T11:12:39Z | Quantum Circuit $C^*$-algebra Net | This paper introduces quantum circuit $C^*$-algebra net, which provides a connection between $C^*$-algebra nets proposed in classical machine learning and quantum circuits. Using $C^*$-algebra, a generalization of the space of complex numbers, we can represent quantum gates as weight parameters of a neural network. By introducing additional parameters, we can induce interaction among multiple circuits constructed by quantum gates. This interaction enables the circuits to share information among them, which contributes to improved generalization performance in machine learning tasks. As an application, we propose to use the quantum circuit $C^*$-algebra net to encode classical data into quantum states, which enables us to integrate classical data into quantum algorithms. Numerical results demonstrate that the interaction among circuits improves performance significantly in image classification, and encoded data by the quantum circuit $C^*$-algebra net are useful for downstream quantum machine learning tasks. | [
"['Yuka Hashimoto' 'Ryuichiro Hataya']"
]
|
null | null | 2404.06220 | null | null | http://arxiv.org/pdf/2404.06220v1 | 2024-04-09T11:14:45Z | 2024-04-09T11:14:45Z | Zero-Shot Relational Learning for Multimodal Knowledge Graphs | Relational learning is an essential task in the domain of knowledge representation, particularly in knowledge graph completion (KGC). While relational learning in traditional single-modal settings has been extensively studied, exploring it within a multimodal KGC context presents distinct challenges and opportunities. One of the major challenges is inference on newly discovered relations without any associated training data. This zero-shot relational learning scenario poses unique requirements for multimodal KGC, i.e., utilizing multimodality to facilitate relational learning. However, existing works fail to leverage multimodal information and leave the problem unexplored. In this paper, we propose a novel end-to-end framework, consisting of three components, i.e., a multimodal learner, a structure consolidator, and a relation embedding generator, to integrate diverse multimodal information and knowledge graph structures to facilitate zero-shot relational learning. Evaluation results on two multimodal knowledge graphs demonstrate the superior performance of our proposed method. | [
"['Rui Cai' 'Shichao Pei' 'Xiangliang Zhang']"
]
|
null | null | 2404.06224 | null | null | http://arxiv.org/pdf/2404.06224v1 | 2024-04-09T11:26:59Z | 2024-04-09T11:26:59Z | Low-Cost Generation and Evaluation of Dictionary Example Sentences | Dictionary example sentences play an important role in illustrating word definitions and usage, but manually creating quality sentences is challenging. Prior works have demonstrated that language models can be trained to generate example sentences. However, they relied on costly customized models and word sense datasets for generation and evaluation of their work. Rapid advancements in foundational models present the opportunity to create low-cost, zero-shot methods for the generation and evaluation of dictionary example sentences. We introduce a new automatic evaluation metric called OxfordEval that measures the win-rate of generated sentences against existing Oxford Dictionary sentences. OxfordEval shows high alignment with human judgments, enabling large-scale automated quality evaluation. We experiment with various LLMs and configurations to generate dictionary sentences across word classes. We complement this with a novel approach of using masked language models to identify and select sentences that best exemplify word meaning. The eventual model, FM-MLM, achieves over 85.1% win rate against Oxford baseline sentences according to OxfordEval, compared to 39.8% win rate for prior model-generated sentences. | [
"['Bill Cai' 'Clarence Boon Liang Ng' 'Daniel Tan' 'Shelvia Hotama']"
]
|
null | null | 2404.06225 | null | null | http://arxiv.org/pdf/2404.06225v1 | 2024-04-09T11:27:07Z | 2024-04-09T11:27:07Z | Message Passing Variational Autoregressive Network for Solving
Intractable Ising Models | Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks. Learning the probability distribution of energy configurations or finding the ground states of a disordered, fully connected Ising model is essential for statistical mechanics and NP-hard problems. Despite tremendous efforts, a neural network architecture able to solve these fully connected and extremely intractable problems on larger systems with high accuracy is still lacking. Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables. The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures. The advantages also come from the great mitigation of mode collapse during the training process of deep neural networks. Given the extreme difficulty of these problems, our method extends the current computational limits of unsupervised neural networks to solve combinatorial optimization problems. | [
"['Qunlong Ma' 'Zhi Ma' 'Jinlong Xu' 'Hairui Zhang' 'Ming Gao']"
]
|
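The variational autoregressive approach in the abstract above minimizes a variational free energy over an autoregressive ansatz $q_\theta(s)$ of spin configurations; a hedged sketch of the objective and its REINFORCE-style gradient (the message-passing parameterization introduced by the paper is not shown):

```latex
F_\theta = \frac{1}{\beta}\,\mathbb{E}_{s \sim q_\theta}\!\big[\beta E(s) + \ln q_\theta(s)\big] \;\ge\; F,
\qquad
\nabla_\theta F_\theta = \frac{1}{\beta}\,\mathbb{E}_{s \sim q_\theta}\!\Big[\big(\beta E(s) + \ln q_\theta(s) - b\big)\,\nabla_\theta \ln q_\theta(s)\Big],
```

where $b$ is a variance-reducing baseline (e.g. the batch mean of $\beta E + \ln q_\theta$), spins are sampled one at a time from the autoregressive conditionals, and $\beta$ is increased along an annealing schedule.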