categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2404.18962 | null | null | http://arxiv.org/pdf/2404.18962v1 | 2024-04-29T05:55:23Z | 2024-04-29T05:55:23Z | An Aggregation-Free Federated Learning for Tackling Data Heterogeneity | The performance of Federated Learning (FL) hinges on the effectiveness of utilizing knowledge from distributed datasets. Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round. This process can cause client drift, especially with significant cross-client data heterogeneity, impacting model performance and convergence of the FL algorithm. To address these challenges, we introduce FedAF, a novel aggregation-free FL algorithm. In this framework, clients collaboratively learn condensed data by leveraging peer knowledge; the server subsequently trains the global model using the condensed data and soft labels received from the clients. FedAF inherently avoids the issue of client drift, enhances the quality of condensed data amid notable data heterogeneity, and improves the global model performance. Extensive numerical studies on several popular benchmark datasets show FedAF surpasses various state-of-the-art FL algorithms in handling label-skew and feature-skew data heterogeneity, leading to superior global model accuracy and faster convergence. | [
"['Yuan Wang' 'Huazhu Fu' 'Renuga Kanagavelu' 'Qingsong Wei' 'Yong Liu'\n 'Rick Siow Mong Goh']"
]
|
null | null | 2404.18963 | null | null | http://arxiv.org/pdf/2404.18963v1 | 2024-04-29T07:03:23Z | 2024-04-29T07:03:23Z | RE-GrievanceAssist: Enhancing Customer Experience through ML-Powered
Complaint Management | In recent years, digital platform companies have faced increasing challenges in managing customer complaints, driven by widespread consumer adoption. This paper introduces an end-to-end pipeline, named RE-GrievanceAssist, designed specifically for real estate customer complaint management. The pipeline consists of three key components: i) a response/no-response ML model using TF-IDF vectorization and an XGBoost classifier; ii) a user type classifier using a fastText classifier; iii) an issue/sub-issue classifier using TF-IDF vectorization and an XGBoost classifier. Finally, it has been deployed as a batch job in Databricks, resulting in a remarkable 40% reduction in overall manual effort with a monthly cost reduction of Rs 1,50,000 since August 2023. | [
"['Venkatesh C' 'Harshit Oberoi' 'Anurag Kumar Pandey' 'Anil Goyal'\n 'Nikhil Sikka']"
]
|
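The RE-GrievanceAssist record above builds its classifiers on TF-IDF vectorization. As a rough, self-contained sketch of that vectorization step only — the complaint strings are invented, and the smoothed-IDF formula follows scikit-learn's default convention rather than anything stated in the abstract:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors for a tiny corpus, with smoothed IDF:
    idf(t) = log((1 + n_docs) / (1 + df(t))) + 1."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        for t in set(toks):
            df[t] += 1
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        total = len(toks)
        vectors.append({t: (tf[t] / total) * idf[t] for t in tf})
    return vectors, idf

docs = ["no hot water in flat", "water leakage in kitchen", "parking slot dispute"]
vecs, idf = tfidf(docs)
```

A term appearing in many complaints ("water") gets a lower IDF weight than a rarer one ("parking"); the pipeline in the paper would feed such vectors to an XGBoost classifier.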
null | null | 2404.18975 | null | null | http://arxiv.org/pdf/2404.18975v3 | 2024-06-08T19:11:57Z | 2024-04-29T14:39:15Z | M3H: Multimodal Multitask Machine Learning for Healthcare | Developing an integrated many-to-many framework leveraging multimodal data for multiple tasks is crucial to unifying healthcare applications ranging from diagnoses to operations. In resource-constrained hospital environments, a scalable and unified machine learning framework that improves previous forecast performances could improve hospital operations and save costs. We introduce M3H, an explainable Multimodal Multitask Machine Learning for Healthcare framework that consolidates learning from tabular, time-series, language, and vision data for supervised binary/multiclass classification, regression, and unsupervised clustering. It features a novel attention mechanism balancing self-exploitation (learning source-task), and cross-exploration (learning cross-tasks), and offers explainability through a proposed TIM score, shedding light on the dynamics of task learning interdependencies. M3H encompasses an unprecedented range of medical tasks and machine learning problem classes and consistently outperforms traditional single-task models by on average 11.6% across 40 disease diagnoses from 16 medical departments, three hospital operation forecasts, and one patient phenotyping task. The modular design of the framework ensures its generalizability in data processing, task definition, and rapid model prototyping, making it production ready for both clinical and operational healthcare settings, especially those in constrained environments. | [
"['Dimitris Bertsimas' 'Yu Ma']"
]
|
null | null | 2404.18976 | null | null | http://arxiv.org/pdf/2404.18976v1 | 2024-04-29T14:45:28Z | 2024-04-29T14:45:28Z | Foundations of Multisensory Artificial Intelligence | Building multisensory AI systems that learn from multiple sensory inputs such as text, speech, video, real-world sensors, wearable devices, and medical data holds great promise for impact in many scientific areas with practical benefits, such as in supporting human health and well-being, enabling multimedia content processing, and enhancing real-world autonomous agents. By synthesizing a range of theoretical frameworks and application domains, this thesis aims to advance the machine learning foundations of multisensory AI. In the first part, we present a theoretical framework formalizing how modalities interact with each other to give rise to new information for a task. These interactions are the basic building blocks in all multimodal problems, and their quantification enables users to understand their multimodal datasets, design principled approaches to learn these interactions, and analyze whether their model has succeeded in learning. In the second part, we study the design of practical multimodal foundation models that generalize over many modalities and tasks, which presents a step toward grounding large language models to real-world sensory modalities. We introduce MultiBench, a unified large-scale benchmark across a wide range of modalities, tasks, and research areas, followed by the cross-modal attention and multimodal transformer architectures that now underpin many of today's multimodal foundation models. Scaling these architectures on MultiBench enables the creation of general-purpose multisensory AI systems, and we discuss our collaborative efforts in applying these models for real-world impact in affective computing, mental health, cancer prognosis, and robotics. 
Finally, we conclude this thesis by discussing how future work can leverage these ideas toward more general, interactive, and safe multisensory AI. | [
"['Paul Pu Liang']"
]
|
null | null | 2404.18978 | null | null | http://arxiv.org/pdf/2404.18978v1 | 2024-04-29T14:53:48Z | 2024-04-29T14:53:48Z | Towards Generalizable Agents in Text-Based Educational Environments: A
Study of Integrating RL with LLMs | There has been a growing interest in developing learner models to enhance learning and teaching experiences in educational environments. However, existing works have primarily focused on structured environments relying on meticulously crafted representations of tasks, thereby limiting the agent's ability to generalize skills across tasks. In this paper, we aim to enhance the generalization capabilities of agents in open-ended text-based learning environments by integrating Reinforcement Learning (RL) with Large Language Models (LLMs). We investigate three types of agents: (i) RL-based agents that utilize natural language for state and action representations to find the best interaction strategy, (ii) LLM-based agents that leverage the model's general knowledge and reasoning through prompting, and (iii) hybrid LLM-assisted RL agents that combine these two strategies to improve agents' performance and generalization. To support the development and evaluation of these agents, we introduce PharmaSimText, a novel benchmark derived from the PharmaSim virtual pharmacy environment designed for practicing diagnostic conversations. Our results show that RL-based agents excel in task completion but lack in asking quality diagnostic questions. In contrast, LLM-based agents perform better in asking diagnostic questions but fall short of completing the task. Finally, hybrid LLM-assisted RL agents enable us to overcome these limitations, highlighting the potential of combining RL and LLMs to develop high-performing agents for open-ended learning environments. | [
"['Bahar Radmehr' 'Adish Singla' 'Tanja Käser']"
]
|
null | null | 2404.19065 | null | null | http://arxiv.org/pdf/2404.19065v1 | 2024-04-29T19:12:42Z | 2024-04-29T19:12:42Z | HELPER-X: A Unified Instructable Embodied Agent to Tackle Four
Interactive Vision-Language Domains with Memory-Augmented Language Models | Recent research on instructable agents has used memory-augmented Large Language Models (LLMs) as task planners, a technique that retrieves language-program examples relevant to the input instruction and uses them as in-context examples in the LLM prompt to improve the performance of the LLM in inferring the correct action and task plans. In this technical report, we extend the capabilities of HELPER, by expanding its memory with a wider array of examples and prompts, and by integrating additional APIs for asking questions. This simple expansion of HELPER into a shared memory enables the agent to work across the domains of executing plans from dialogue, natural language instruction following, active question asking, and commonsense room reorganization. We evaluate the agent on four diverse interactive visual-language embodied agent benchmarks: ALFRED, TEACh, DialFRED, and the Tidy Task. HELPER-X achieves few-shot, state-of-the-art performance across these benchmarks using a single agent, without requiring in-domain training, and remains competitive with agents that have undergone in-domain training. | [
"['Gabriel Sarch' 'Sahil Somani' 'Raghav Kapoor' 'Michael J. Tarr'\n 'Katerina Fragkiadaki']"
]
|
null | null | 2404.19073 | null | null | http://arxiv.org/pdf/2404.19073v1 | 2024-04-29T19:32:50Z | 2024-04-29T19:32:50Z | Learning Sparse High-Dimensional Matrix-Valued Graphical Models From
Dependent Data | We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional, stationary matrix-variate Gaussian time series. All past work on high-dimensional matrix graphical models assumes that independent and identically distributed (i.i.d.) observations of the matrix-variate are available. Here we allow dependent observations. We consider a sparse-group lasso-based frequency-domain formulation of the problem with a Kronecker-decomposable power spectral density (PSD), and solve it via an alternating direction method of multipliers (ADMM) approach. The problem is bi-convex which is solved via flip-flop optimization. We provide sufficient conditions for local convergence in the Frobenius norm of the inverse PSD estimators to the true value. This result also yields a rate of convergence. We illustrate our approach using numerical examples utilizing both synthetic and real data. | [
"['Jitendra K Tugnait']"
]
|
null | null | 2404.19075 | null | null | http://arxiv.org/pdf/2404.19075v1 | 2024-04-29T19:41:51Z | 2024-04-29T19:41:51Z | Distributed Stochastic Optimization of a Neural Representation Network
for Time-Space Tomography Reconstruction | 4D time-space reconstruction of dynamic events or deforming objects using X-ray computed tomography (CT) is an extremely ill-posed inverse problem. Existing approaches assume that the object remains static for the duration of several tens or hundreds of X-ray projection measurement images (reconstruction of consecutive limited-angle CT scans). However, this is an unrealistic assumption for many in-situ experiments that causes spurious artifacts and inaccurate morphological reconstructions of the object. To solve this problem, we propose to perform a 4D time-space reconstruction using a distributed implicit neural representation (DINR) network that is trained using a novel distributed stochastic training algorithm. Our DINR network learns to reconstruct the object at its output by iterative optimization of its network parameters such that the measured projection images best match the output of the CT forward measurement model. We use a continuous time and space forward measurement model that is a function of the DINR outputs at a sparsely sampled set of continuous valued object coordinates. Unlike existing state-of-the-art neural representation architectures that forward and back propagate through dense voxel grids that sample the object's entire time-space coordinates, we only propagate through the DINR at a small subset of object coordinates in each iteration resulting in an order-of-magnitude reduction in memory and compute for training. DINR leverages distributed computation across several compute nodes and GPUs to produce high-fidelity 4D time-space reconstructions even for extremely large CT data sizes. We use both simulated parallel-beam and experimental cone-beam X-ray CT datasets to demonstrate the superior performance of our approach. | [
"['K. Aditya Mohan' 'Massimiliano Ferrucci' 'Chuck Divin'\n 'Garrett A. Stevenson' 'Hyojin Kim']"
]
|
null | null | 2404.19087 | null | null | http://arxiv.org/pdf/2404.19087v1 | 2024-04-29T19:58:34Z | 2024-04-29T19:58:34Z | Deep Reinforcement Learning for Advanced Longitudinal Control and
Collision Avoidance in High-Risk Driving Scenarios | Existing Advanced Driver Assistance Systems primarily focus on the vehicle directly ahead, often overlooking potential risks from following vehicles. This oversight can lead to ineffective handling of high-risk situations, such as high-speed, closely spaced, multi-vehicle scenarios where emergency braking by one vehicle might trigger a pile-up collision. To overcome these limitations, this study introduces a novel deep reinforcement learning-based algorithm for longitudinal control and collision avoidance. The proposed algorithm effectively considers the behavior of both leading and following vehicles. Its implementation in simulated high-risk scenarios, which involve emergency braking in dense traffic where traditional systems typically fail, has demonstrated the algorithm's ability to prevent potential pile-up collisions, including those involving heavy-duty vehicles. | [
"['Dianwei Chen' 'Yaobang Gong' 'Xianfeng Yang']"
]
|
null | null | 2404.19094 | null | null | http://arxiv.org/pdf/2404.19094v1 | 2024-04-29T20:19:25Z | 2024-04-29T20:19:25Z | In-Context Symbolic Regression: Leveraging Language Models for Function
Discovery | Symbolic Regression (SR) is a task which aims to extract the mathematical expression underlying a set of empirical observations. Transformer-based methods trained on SR datasets hold the current state of the art on this task, while the application of Large Language Models (LLMs) to SR remains unexplored. This work investigates the integration of pre-trained LLMs into the SR pipeline, utilizing an approach that iteratively refines a functional form based on the prediction error it achieves on the observation set, until it reaches convergence. Our method leverages LLMs to propose an initial set of possible functions based on the observations, exploiting their strong pre-training prior. These functions are then iteratively refined by the model itself and by an external optimizer for their coefficients. The process is repeated until the results are satisfactory. We then analyze Vision-Language Models in this context, exploring the inclusion of plots as visual inputs to aid the optimization process. Our findings reveal that LLMs are able to successfully recover good symbolic equations that fit the given data, outperforming SR baselines based on Genetic Programming, with the addition of images in the input showing promising results for the most complex benchmarks. | [
"['Matteo Merler' 'Nicola Dainese' 'Katsiaryna Haitsiukevich']"
]
|
null | null | 2404.19095 | null | null | http://arxiv.org/pdf/2404.19095v1 | 2024-04-29T20:19:35Z | 2024-04-29T20:19:35Z | Catalyzing Social Interactions in Mixed Reality using ML Recommendation
Systems | We create an innovative mixed reality-first social recommendation model, utilizing features uniquely collected through mixed reality (MR) systems to promote social interaction, such as gaze recognition, proximity, noise level, congestion level, and conversational intensity. We further extend these models to include right-time features to deliver timely notifications. We measure performance metrics across various models by creating a new intersection of user features, MR features, and right-time features. We create four model types trained on different combinations of the feature classes, where we compare the baseline model trained on the class of user features against the models trained on MR features, right-time features, and a combination of all of the feature classes. Due to limitations in data collection and cost, we observe performance degradation in the right-time, mixed reality, and combination models. Despite these challenges, we introduce optimizations to improve accuracy across all models by over 14 percentage points, where the best performing model achieved 24% greater accuracy. | [
"['Sparsh Srivastava' 'Rohan Arora']"
]
|
null | null | 2404.19100 | null | null | http://arxiv.org/pdf/2404.19100v2 | 2024-07-01T16:16:34Z | 2024-04-29T20:43:42Z | Predicting Fairness of ML Software Configurations | This paper investigates the relationships between hyperparameters of machine learning and fairness. Data-driven solutions are increasingly used in critical socio-technical applications where ensuring fairness is important. Rather than explicitly encoding decision logic via control and data structures, the ML developers provide input data, perform some pre-processing, choose ML algorithms, and tune hyperparameters (HPs) to infer a program that encodes the decision logic. Prior works report that the selection of HPs can significantly influence fairness. However, tuning HPs to find an ideal trade-off between accuracy, precision, and fairness has remained an expensive and tedious task. Can we predict the fairness of an HP configuration for a given dataset? Are the predictions robust to distribution shifts? We focus on group fairness notions and investigate the HP space of 5 training algorithms. We first find that tree regressors and XGBoost significantly outperformed deep neural networks and support vector machines in accurately predicting the fairness of HPs. When predicting the fairness of ML hyperparameters under temporal distribution shift, the tree regressors outperform the other algorithms with reasonable accuracy. However, the precision depends on the ML training algorithm, dataset, and protected attributes. For example, the tree regressor model was robust for training data shift from 2014 to 2018 on logistic regression and discriminant analysis HPs with sex as the protected attribute, but not for race or other training algorithms. Our method provides a sound framework to efficiently perform fine-tuning of ML training algorithms and understand the relationships between HPs and fairness. | [
"['Salvador Robles Herrera' 'Verya Monjezi' 'Vladik Kreinovich'\n 'Ashutosh Trivedi' 'Saeid Tizpaz-Niari']"
]
|
null | null | 2404.19109 | null | null | http://arxiv.org/pdf/2404.19109v2 | 2024-05-01T04:55:30Z | 2024-04-29T21:19:41Z | The Shape of Money Laundering: Subgraph Representation Learning on the
Blockchain with the Elliptic2 Dataset | Subgraph representation learning is a technique for analyzing local structures (or shapes) within complex networks. Enabled by recent developments in scalable Graph Neural Networks (GNNs), this approach encodes relational information at a subgroup level (multiple connected nodes) rather than at a node level of abstraction. We posit that certain domain applications, such as anti-money laundering (AML), are inherently subgraph problems and mainstream graph techniques have been operating at a suboptimal level of abstraction. This is due in part to the scarcity of annotated datasets of real-world size and complexity, as well as the lack of software tools for managing subgraph GNN workflows at scale. To enable work in fundamental algorithms as well as domain applications in AML and beyond, we introduce Elliptic2, a large graph dataset containing 122K labeled subgraphs of Bitcoin clusters within a background graph consisting of 49M node clusters and 196M edge transactions. The dataset provides subgraphs known to be linked to illicit activity for learning the set of "shapes" that money laundering exhibits in cryptocurrency and accurately classifying new criminal activity. Along with the dataset we share our graph techniques, software tooling, promising early experimental results, and new domain insights already gleaned from this approach. Taken together, we find immediate practical value in this approach and the potential for a new standard in anti-money laundering and forensic analytics in cryptocurrencies and other financial networks. | [
"['Claudio Bellei' 'Muhua Xu' 'Ross Phillips' 'Tom Robinson' 'Mark Weber'\n 'Tim Kaler' 'Charles E. Leiserson' 'Arvind' 'Jie Chen']"
]
|
null | null | 2404.19112 | null | null | http://arxiv.org/pdf/2404.19112v1 | 2024-04-29T21:25:25Z | 2024-04-29T21:25:25Z | Hidden Synergy: $L_1$ Weight Normalization and 1-Path-Norm
Regularization | We present PSiLON Net, an MLP architecture that uses $L_1$ weight normalization for each weight vector and shares the length parameter across the layer. The 1-path-norm provides a bound for the Lipschitz constant of a neural network and reflects on its generalizability, and we show how PSiLON Net's design drastically simplifies the 1-path-norm, while providing an inductive bias towards efficient learning and near-sparse parameters. We propose a pruning method to achieve exact sparsity in the final stages of training, if desired. To exploit the inductive bias of residual networks, we present a simplified residual block, leveraging concatenated ReLU activations. For networks constructed with such blocks, we prove that considering only a subset of possible paths in the 1-path-norm is sufficient to bound the Lipschitz constant. Using the 1-path-norm and this improved bound as regularizers, we conduct experiments in the small data regime using overparameterized PSiLON Nets and PSiLON ResNets, demonstrating reliable optimization and strong performance. | [
"['Aditya Biswas']"
]
|
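The PSiLON Net record above centres on $L_1$ weight normalization with a single length parameter shared across the layer, i.e. each weight vector is reparameterized as $g \cdot v / \lVert v \rVert_1$. A toy sketch of that reparameterization (the weight vectors and the shared length here are invented; in the actual model both are learned):

```python
def l1_normalize_layer(vectors, length):
    """L1 weight normalization with a shared length parameter:
    each weight vector v becomes length * v / ||v||_1, so every
    vector in the layer ends up with the same L1 norm."""
    out = []
    for v in vectors:
        norm = sum(abs(x) for x in v)   # ||v||_1
        out.append([length * x / norm for x in v])
    return out

layer = l1_normalize_layer([[3.0, -1.0], [0.5, 0.5, 1.0]], length=2.0)
```

After the transform, every weight vector in the layer has $L_1$ norm exactly equal to the shared length parameter, which is what makes the 1-path-norm of the network easy to control.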
null | null | 2404.19113 | null | null | http://arxiv.org/pdf/2404.19113v2 | 2024-05-12T22:57:04Z | 2024-04-29T21:25:59Z | Source-Free Domain Adaptation of Weakly-Supervised Object Localization
Models for Histology | Given the emergence of deep learning, digital pathology has gained popularity for cancer diagnosis based on histology images. Deep weakly supervised object localization (WSOL) models can be trained to classify histology images according to cancer grade and identify regions of interest (ROIs) for interpretation, using inexpensive global image-class annotations. A WSOL model initially trained on some labeled source image data can be adapted using unlabeled target data in cases of significant domain shifts caused by variations in staining, scanners, and cancer type. In this paper, we focus on source-free (unsupervised) domain adaptation (SFDA), a challenging problem where a pre-trained source model is adapted to a new target domain without using any source domain data for privacy and efficiency reasons. SFDA of WSOL models raises several challenges in histology, most notably because they are not intended to adapt for both classification and localization tasks. In this paper, 4 state-of-the-art SFDA methods, each one representative of a main SFDA family, are compared for WSOL in terms of classification and localization accuracy. They are the SFDA-Distribution Estimation, Source HypOthesis Transfer, Cross-Domain Contrastive Learning, and Adaptively Domain Statistics Alignment. Experimental results on the challenging GlaS (smaller, colon cancer) and Camelyon16 (larger, breast cancer) histology datasets indicate that these SFDA methods typically perform poorly for localization after adaptation when optimized for classification. | [
"['Alexis Guichemerre' 'Soufiane Belharbi' 'Tsiry Mayet' 'Shakeeb Murtaza'\n 'Pourya Shamsolmoali' 'Luke McCaffrey' 'Eric Granger']"
]
|
null | null | 2404.19114 | null | null | http://arxiv.org/pdf/2404.19114v1 | 2024-04-29T21:26:18Z | 2024-04-29T21:26:18Z | Enhancing IoT Security: A Novel Feature Engineering Approach for
ML-Based Intrusion Detection Systems | The integration of Internet of Things (IoT) applications in our daily lives has led to a surge in data traffic, posing significant security challenges. IoT applications using cloud and edge computing are at higher risk of cyberattacks because of the expanded attack surface from distributed edge and cloud services, the vulnerability of IoT devices, and challenges in managing security across interconnected systems leading to oversights. This led to the rise of ML-based solutions for intrusion detection systems (IDSs), which have proven effective in enhancing network security and defending against diverse threats. However, ML-based IDS in IoT systems encounters challenges, particularly from noisy, redundant, and irrelevant features in varied IoT datasets, potentially impacting its performance. Therefore, reducing such features becomes crucial to enhance system performance and minimize computational costs. This paper focuses on improving the effectiveness of ML-based IDS at the edge level by introducing a novel method to find a balanced trade-off between cost and accuracy through the creation of informative features in a two-tier edge-user IoT environment. A hybrid Binary Quantum-inspired Artificial Bee Colony and Genetic Programming algorithm is utilized for this purpose. Three IoT intrusion detection datasets, namely NSL-KDD, UNSW-NB15, and BoT-IoT, are used for the evaluation of the proposed approach. | [
"['Afsaneh Mahanipour' 'Hana Khamfroush']"
]
|
null | null | 2404.19128 | null | null | http://arxiv.org/pdf/2404.19128v1 | 2024-04-29T22:06:17Z | 2024-04-29T22:06:17Z | Q-GroundCAM: Quantifying Grounding in Vision Language Models via GradCAM | Vision and Language Models (VLMs) continue to demonstrate remarkable zero-shot (ZS) performance across various tasks. However, many probing studies have revealed that even the best-performing VLMs struggle to capture aspects of compositional scene understanding, lacking the ability to properly ground and localize linguistic phrases in images. Recent VLM advancements include scaling up both model and dataset sizes, additional training objectives and levels of supervision, and variations in the model architectures. To characterize the grounding ability of VLMs, such as phrase grounding, referring expressions comprehension, and relationship understanding, Pointing Game has been used as an evaluation metric for datasets with bounding box annotations. In this paper, we introduce a novel suite of quantitative metrics that utilize GradCAM activations to rigorously evaluate the grounding capabilities of pre-trained VLMs like CLIP, BLIP, and ALBEF. These metrics offer an explainable and quantifiable approach for a more detailed comparison of the zero-shot capabilities of VLMs and enable measuring models' grounding uncertainty. This characterization reveals interesting tradeoffs between the size of the model, the dataset size, and their performance. | [
"['Navid Rajabi' 'Jana Kosecka']"
]
|
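The Q-GroundCAM record above mentions the Pointing Game metric used with box-annotated datasets. A minimal illustration of that test — a hit is recorded when the peak of an activation map falls inside the ground-truth box (the heat map and boxes below are invented; real GradCAM maps would come from a VLM's gradients):

```python
def pointing_game_hit(activation_map, bbox):
    """Pointing Game: True if the max-activation pixel lies inside bbox.
    activation_map: 2D list of floats; bbox: (x0, y0, x1, y1), inclusive."""
    best, best_xy = float("-inf"), (0, 0)
    for y, row in enumerate(activation_map):
        for x, v in enumerate(row):
            if v > best:
                best, best_xy = v, (x, y)
    x0, y0, x1, y1 = bbox
    x, y = best_xy
    return x0 <= x <= x1 and y0 <= y <= y1

heat = [
    [0.1, 0.2, 0.1],
    [0.1, 0.9, 0.2],   # peak activation at (x=1, y=1)
    [0.0, 0.1, 0.1],
]
```

Averaging hits over a dataset gives the Pointing Game accuracy; the paper's contribution is a suite of finer-grained GradCAM-based metrics beyond this single-pixel check.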
null | null | 2404.19130 | null | null | http://arxiv.org/pdf/2404.19130v1 | 2024-04-29T22:21:24Z | 2024-04-29T22:21:24Z | SpherE: Expressive and Interpretable Knowledge Graph Embedding for Set
Retrieval | Knowledge graphs (KGs), which store an extensive number of relational facts (head, relation, tail), serve various applications. While many downstream tasks highly rely on the expressive modeling and predictive embedding of KGs, most of the current KG representation learning methods, where each entity is embedded as a vector in the Euclidean space and each relation is embedded as a transformation, follow an entity ranking protocol. On one hand, such an embedding design cannot capture many-to-many relations. On the other hand, in many retrieval cases, the users wish to get an exact set of answers without any ranking, especially when the results are expected to be precise, e.g., which genes cause an illness. Such scenarios are commonly referred to as "set retrieval". This work presents a pioneering study on the KG set retrieval problem. We show that the set retrieval highly depends on expressive modeling of many-to-many relations, and propose a new KG embedding model SpherE to address this problem. SpherE is based on rotational embedding methods, but each entity is embedded as a sphere instead of a vector. While inheriting the high interpretability of rotational-based models, our SpherE can more expressively model one-to-many, many-to-one, and many-to-many relations. Through extensive experiments, we show that our SpherE can well address the set retrieval problem while still having a good predictive ability to infer missing facts. The code is available at https://github.com/Violet24K/SpherE. | [
"['Zihao Li' 'Yuyi Ao' 'Jingrui He']"
]
|
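The SpherE record above embeds each entity as a sphere so that a (head, relation) query returns a set of tails rather than a ranking. A heavily simplified 2-D sketch of that idea — a relation rotates the head's centre, and every tail whose centre falls inside the resulting sphere is retrieved (the coordinates, radius, and planar rotation are invented stand-ins for the paper's actual embedding):

```python
import math

def rotate(point, angle):
    """Rotate a 2-D point about the origin."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def retrieve_set(head, radius, relation_angle, tails):
    """Toy set retrieval: apply the relation's rotation to the head's
    centre, then return every tail centre inside the head's sphere."""
    cx, cy = rotate(head, relation_angle)
    return sorted(name for name, (tx, ty) in tails.items()
                  if math.hypot(tx - cx, ty - cy) <= radius)

tails = {"t1": (0.0, 1.0), "t2": (0.05, 0.95), "t3": (1.0, 0.0)}
hits = retrieve_set(head=(1.0, 0.0), radius=0.2,
                    relation_angle=math.pi / 2, tails=tails)
```

Because membership is a distance test against a sphere rather than a score to be ranked, the query naturally yields an exact answer set — here both nearby tails, and not the distant one.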
null | null | 2404.19132 | null | null | http://arxiv.org/pdf/2404.19132v1 | 2024-04-29T22:31:21Z | 2024-04-29T22:31:21Z | Integrating Present and Past in Unsupervised Continual Learning | We formulate a unifying framework for unsupervised continual learning (UCL), which disentangles learning objectives that are specific to the present and the past data, encompassing stability, plasticity, and cross-task consolidation. The framework reveals that many existing UCL approaches overlook cross-task consolidation and try to balance plasticity and stability in a shared embedding space. This results in worse performance due to a lack of within-task data diversity and reduced effectiveness in learning the current task. Our method, Osiris, which explicitly optimizes all three objectives on separate embedding spaces, achieves state-of-the-art performance on all benchmarks, including two novel benchmarks proposed in this paper featuring semantically structured task sequences. Compared to standard benchmarks, these two structured benchmarks more closely resemble visual signals received by humans and animals when navigating real-world environments. Finally, we show some preliminary evidence that continual models can benefit from such realistic learning scenarios. | [
"['Yipeng Zhang' 'Laurent Charlin' 'Richard Zemel' 'Mengye Ren']"
]
|
null | null | 2404.19141 | null | null | http://arxiv.org/pdf/2404.19141v1 | 2024-04-29T22:54:35Z | 2024-04-29T22:54:35Z | Micro-Macro Spatial-Temporal Graph-based Encoder-Decoder for
Map-Constrained Trajectory Recovery | Recovering intermediate missing GPS points in a sparse trajectory, while adhering to the constraints of the road network, could offer deep insights into users' moving behaviors in intelligent transportation systems. Although recent studies have demonstrated the advantages of achieving map-constrained trajectory recovery via an end-to-end manner, they still face two significant challenges. Firstly, existing methods are mostly sequence-based models. It is extremely hard for them to comprehensively capture the micro-semantics of individual trajectory, including the information of each GPS point and the movement between two GPS points. Secondly, existing approaches ignore the impact of the macro-semantics, i.e., the road conditions and the people's shared travel preferences reflected by a group of trajectories. To address the above challenges, we propose a Micro-Macro Spatial-Temporal Graph-based Encoder-Decoder (MM-STGED). Specifically, we model each trajectory as a graph to efficiently describe the micro-semantics of trajectory and design a novel message-passing mechanism to learn trajectory representations. Additionally, we extract the macro-semantics of trajectories and further incorporate them into a well-designed graph-based decoder to guide trajectory recovery. Extensive experiments conducted on sparse trajectories with three different sampling intervals that are respectively constructed from two real-world trajectory datasets demonstrate the superiority of our proposed model. | [
"['Tonglong Wei' 'Youfang Lin' 'Yan Lin' 'Shengnan Guo' 'Lan Zhang'\n 'Huaiyu Wan']"
]
|
null | null | 2404.19145 | null | null | http://arxiv.org/pdf/2404.19145v2 | 2024-05-01T02:10:51Z | 2024-04-29T23:08:03Z | Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty | Bootstrap is a popular methodology for simulating input uncertainty. However, it can be computationally expensive when the number of samples is large. We propose a new approach called \textbf{Orthogonal Bootstrap} that reduces the number of required Monte Carlo replications. We decompose the target being simulated into two parts: the \textit{non-orthogonal part}, which has a closed-form result known as the Infinitesimal Jackknife, and the \textit{orthogonal part}, which is easier to simulate. We theoretically and numerically show that Orthogonal Bootstrap significantly reduces the computational cost of Bootstrap while improving empirical accuracy and maintaining the same width of the constructed interval. | [
"['Kaizhao Liu' 'Jose Blanchet' 'Lexing Ying' 'Yiping Lu']"
]
|
null | null | 2404.19157 | null | null | http://arxiv.org/pdf/2404.19157v1 | 2024-04-29T23:38:58Z | 2024-04-29T23:38:58Z | Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian
Processes to Deep Neural Networks | Large neural networks trained on large datasets have become the dominant paradigm in machine learning. These systems rely on maximum likelihood point estimates of their parameters, precluding them from expressing model uncertainty. This may result in overconfident predictions and it prevents the use of deep learning models for sequential decision making. This thesis develops scalable methods to equip neural networks with model uncertainty. In particular, we leverage the linearised Laplace approximation to equip pre-trained neural networks with the uncertainty estimates provided by their tangent linear models. This turns the problem of Bayesian inference in neural networks into one of Bayesian inference in conjugate Gaussian-linear models. Alas, the cost of this remains cubic in either the number of network parameters or in the number of observations times output dimensions. By assumption, neither are tractable. We address this intractability by using stochastic gradient descent (SGD) -- the workhorse algorithm of deep learning -- to perform posterior sampling in linear models and their convex duals: Gaussian processes. With this, we turn back to linearised neural networks, finding the linearised Laplace approximation to present a number of incompatibilities with modern deep learning practices -- namely, stochastic optimisation, early stopping and normalisation layers -- when used for hyperparameter learning. We resolve these and construct a sample-based EM algorithm for scalable hyperparameter learning with linearised neural networks. We apply the above methods to perform linearised neural network inference with ResNet-50 (25M parameters) trained on Imagenet (1.2M observations and 1000 output dimensions). Additionally, we apply our methods to estimate uncertainty for 3d tomographic reconstructions obtained with the deep image prior network. | [
"['Javier Antoran']"
]
|
null | null | 2404.19165 | null | null | http://arxiv.org/pdf/2404.19165v1 | 2024-04-30T00:02:34Z | 2024-04-30T00:02:34Z | DelGrad: Exact gradients in spiking networks for learning transmission
delays and weights | Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Transmission delays play an important role in shaping these temporal characteristics. Recent work has demonstrated the substantial advantages of learning these delays along with synaptic weights, both in terms of accuracy and memory efficiency. However, these approaches suffer from drawbacks in terms of precision and efficiency, as they operate in discrete time and with approximate gradients, while also requiring membrane potential recordings for calculating parameter updates. To alleviate these issues, we propose an analytical approach for calculating exact loss gradients with respect to both synaptic weights and delays in an event-based fashion. The inclusion of delays emerges naturally within our proposed formalism, enriching the model's search space with a temporal dimension. Our algorithm is purely based on the timing of individual spikes and does not require access to other variables such as membrane potentials. We explicitly compare the impact on accuracy and parameter efficiency of different types of delays - axonal, dendritic and synaptic. Furthermore, while previous work on learnable delays in SNNs has been mostly confined to software simulations, we demonstrate the functionality and benefits of our approach on the BrainScaleS-2 neuromorphic platform. | [
"['Julian Göltz' 'Jimmy Weber' 'Laura Kriener' 'Peter Lake'\n 'Melika Payvand' 'Mihai A. Petrovici']"
]
|
null | null | 2404.19218 | null | null | http://arxiv.org/pdf/2404.19218v1 | 2024-04-30T02:39:01Z | 2024-04-30T02:39:01Z | Flight Trajectory Prediction Using an Enhanced CNN-LSTM Network | Aiming at the problem of low accuracy of flight trajectory prediction caused by the high speed of fighters, the diversity of tactical maneuvers, and the transient nature of situational change in close range air combat, this paper proposes an enhanced CNN-LSTM network as a fighter flight trajectory prediction method. Firstly, we extract spatial features from fighter trajectory data using CNN, aggregate spatial features of multiple fighters using the social-pooling module to capture geographic information and positional relationships in the trajectories, and use the attention mechanism to capture mutated trajectory features in air combat; subsequently, we extract temporal features by using the memory nature of LSTM to capture long-term temporal dependence in the trajectories; and finally, we merge the temporal and spatial features to predict the flight trajectories of enemy fighters. Extensive simulation experiments verify that the proposed method improves the trajectory prediction accuracy compared to the original CNN-LSTM method, with the improvements of 32% and 34% in ADE and FDE indicators. | [
"['Qinzhi Hao' 'Jiali Zhang' 'Tengyu Jing' 'Wei Wang']"
]
|
null | null | 2404.19220 | null | null | http://arxiv.org/pdf/2404.19220v1 | 2024-04-30T02:44:41Z | 2024-04-30T02:44:41Z | Regression for matrix-valued data via Kronecker products factorization | We study the matrix-variate regression problem $Y_i = \sum_{k} \beta_{1k} X_i \beta_{2k}^{\top} + E_i$ for $i=1,2,\dots,n$ in the high dimensional regime wherein the responses $Y_i$ are matrices whose dimensions $p_{1}\times p_{2}$ outgrow both the sample size $n$ and the dimensions $q_{1}\times q_{2}$ of the predictor variables $X_i$, i.e., $q_{1},q_{2} \ll n \ll p_{1},p_{2}$. We propose an estimation algorithm, termed KRO-PRO-FAC, for estimating the parameters $\{\beta_{1k}\} \subset \Re^{p_1 \times q_1}$ and $\{\beta_{2k}\} \subset \Re^{p_2 \times q_2}$ that utilizes the Kronecker product factorization and rearrangement operations from Van Loan and Pitsianis (1993). The KRO-PRO-FAC algorithm is computationally efficient as it does not require estimating the covariance between the entries of the $\{Y_i\}$. We establish perturbation bounds between $\hat{\beta}_{1k} - \beta_{1k}$ and $\hat{\beta}_{2k} - \beta_{2k}$ in spectral norm for the setting where either the rows of $E_i$ or the columns of $E_i$ are independent sub-Gaussian random vectors. Numerical studies on simulated and real data indicate that our procedure is competitive, in terms of both estimation error and predictive accuracy, compared to other existing methods. | [
"['Yin-Jen Chen' 'Minh Tang']"
]
|
null | null | 2404.19228 | null | null | http://arxiv.org/pdf/2404.19228v1 | 2024-04-30T03:15:04Z | 2024-04-30T03:15:04Z | Understanding Multimodal Contrastive Learning Through Pointwise Mutual
Information | Multimodal representation learning to integrate different modalities, such as text, vision, and audio is important for real-world applications. The symmetric InfoNCE loss proposed in CLIP is a key concept in multimodal representation learning. In this work, we provide a theoretical understanding of the symmetric InfoNCE loss through the lens of the pointwise mutual information and show that encoders that achieve the optimal similarity in the pretraining provide a good representation for downstream classification tasks under mild assumptions. Based on our theoretical results, we also propose a new similarity metric for multimodal contrastive learning by utilizing a nonlinear kernel to enrich the capability. To verify the effectiveness of the proposed method, we demonstrate pretraining of multimodal representation models on the Conceptual Caption datasets and evaluate zero-shot classification and linear classification on common benchmark datasets. | [
"['Toshimitsu Uesaka' 'Taiji Suzuki' 'Yuhta Takida' 'Chieh-Hsin Lai'\n 'Naoki Murata' 'Yuki Mitsufuji']"
]
|
null | null | 2404.19238 | null | null | http://arxiv.org/pdf/2404.19238v1 | 2024-04-30T03:52:00Z | 2024-04-30T03:52:00Z | Pilot Contamination in Massive MIMO Systems: Challenges and Future
Prospects | Massive multiple input multiple output (M-MIMO) technology plays a pivotal role in fifth-generation (5G) and beyond communication systems, offering a wide range of benefits, from increased spectral efficiency (SE) to enhanced energy efficiency and higher reliability. However, these advantages are contingent upon precise channel state information (CSI) availability at the base station (BS). Ensuring precise CSI is challenging due to the constrained size of the coherence interval and the resulting limitations on pilot sequence length. Therefore, reusing pilot sequences in adjacent cells introduces pilot contamination, hindering SE enhancement. This paper reviews recent advancements and addresses research challenges in mitigating pilot contamination and improving channel estimation, categorizing the existing research into three broader categories: pilot assignment schemes, advanced signal processing methods, and advanced channel estimation techniques. Salient representative pilot mitigation/assignment techniques are analyzed and compared in each category. Lastly, possible future research directions are discussed. | [
"['Muhammad Kamran Saeed' 'Ashfaq Khokhar' 'Shakil Ahmed']"
]
|
null | null | 2404.19247 | null | null | http://arxiv.org/pdf/2404.19247v1 | 2024-04-30T04:11:21Z | 2024-04-30T04:11:21Z | Improved AutoEncoder with LSTM module and KL divergence | The task of anomaly detection is to separate anomalous data from normal data in the dataset. Models such as the deep convolutional autoencoder (CAE) network and the deep support vector data description (SVDD) model have been universally employed and have demonstrated significant success in detecting anomalies. However, the over-reconstruction ability of the CAE network for anomalous data can easily lead to a high false negative rate in detecting anomalous data. On the other hand, the deep SVDD model has the drawback of feature collapse, which leads to a decrease of detection accuracy for anomalies. To address these problems, we propose the Improved AutoEncoder with LSTM module and Kullback-Leibler divergence (IAE-LSTM-KL) model in this paper. An LSTM network is added after the encoder to memorize feature representations of normal data. Meanwhile, the phenomenon of feature collapse can also be mitigated by penalizing the featured input to the SVDD module via KL divergence. The efficacy of the IAE-LSTM-KL model is validated through experiments on both synthetic and real-world datasets. Experimental results show that the IAE-LSTM-KL model yields higher detection accuracy for anomalies. In addition, it is also found that the IAE-LSTM-KL model demonstrates enhanced robustness to contaminated outliers in the dataset. | [
"['Wei Huang' 'Bingyang Zhang' 'Kaituo Zhang' 'Hua Gao' 'Rongchun Wan']"
]
|
null | null | 2404.19256 | null | null | http://arxiv.org/pdf/2404.19256v1 | 2024-04-30T04:41:47Z | 2024-04-30T04:41:47Z | Bias Mitigation via Compensation: A Reinforcement Learning Perspective | As AI increasingly integrates with human decision-making, we must carefully consider interactions between the two. In particular, current approaches focus on optimizing individual agent actions but often overlook the nuances of collective intelligence. Group dynamics might require that one agent (e.g., the AI system) compensate for biases and errors in another agent (e.g., the human), but this compensation should be carefully developed. We provide a theoretical framework for algorithmic compensation that synthesizes game theory and reinforcement learning principles to demonstrate the natural emergence of deceptive outcomes from the continuous learning dynamics of agents. We provide simulation results involving Markov Decision Processes (MDP) learning to interact. This work then underpins our ethical analysis of the conditions in which AI agents should adapt to biases and behaviors of other agents in dynamic and complex decision-making environments. Overall, our approach addresses the nuanced role of strategic deception of humans, challenging previous assumptions about its detrimental effects. We assert that compensation for others' biases can enhance coordination and ethical alignment: strategic deception, when ethically managed, can positively shape human-AI interactions. | [
"['Nandhini Swaminathan' 'David Danks']"
]
|
null | null | 2404.19261 | null | null | http://arxiv.org/pdf/2404.19261v1 | 2024-04-30T04:54:15Z | 2024-04-30T04:54:15Z | High dimensional analysis reveals conservative sharpening and a
stochastic edge of stability | Recent empirical and theoretical work has shown that the dynamics of the large eigenvalues of the training loss Hessian have some remarkably robust features across models and datasets in the full batch regime. There is often an early period of progressive sharpening where the large eigenvalues increase, followed by stabilization at a predictable value known as the edge of stability. Previous work showed that in the stochastic setting, the eigenvalues increase more slowly - a phenomenon we call conservative sharpening. We provide a theoretical analysis of a simple high-dimensional model which shows the origin of this slowdown. We also show that there is an alternative stochastic edge of stability which arises at small batch size that is sensitive to the trace of the Neural Tangent Kernel rather than the large Hessian eigenvalues. We conduct an experimental study which highlights the qualitative differences from the full batch phenomenology, and suggests that controlling the stochastic edge of stability can help optimization. | [
"['Atish Agarwala' 'Jeffrey Pennington']"
]
|
null | null | 2404.19283 | null | null | http://arxiv.org/pdf/2404.19283v1 | 2024-04-30T06:21:42Z | 2024-04-30T06:21:42Z | MAP-Former: Multi-Agent-Pair Gaussian Joint Prediction | There is a gap in risk assessment of trajectories between the trajectory information coming from a traffic motion prediction module and what is actually needed. Closing this gap necessitates advancements in prediction beyond current practices. Existing prediction models yield joint predictions of agents' future trajectories with uncertainty weights or marginal Gaussian probability density functions (PDFs) for single agents. Although these methods achieve highly accurate trajectory predictions, they provide little or no information about the dependencies of interacting agents. Since traffic is a process of highly interdependent agents, whose actions directly influence their mutual behavior, the existing methods are not sufficient to reliably assess the risk of future trajectories. This paper addresses that gap by introducing a novel approach to motion prediction, focusing on predicting agent-pair covariance matrices in a ``scene-centric'' manner, which can then be used to model Gaussian joint PDFs for all agent-pairs in a scene. We propose a model capable of predicting those agent-pair covariance matrices, leveraging an enhanced awareness of interactions. Utilizing the prediction results of our model, this work forms the foundation for comprehensive risk assessment with statistically based methods for analyzing agents' relations by their joint PDFs. | [
"['Marlon Steiner' 'Marvin Klemp' 'Christoph Stiller']"
]
|
null | null | 2404.19284 | null | null | http://arxiv.org/pdf/2404.19284v3 | 2024-06-05T06:42:42Z | 2024-04-30T06:21:44Z | Approximate Nearest Neighbour Search on Dynamic Datasets: An
Investigation | Approximate k-Nearest Neighbour (ANN) methods are often used for mining information and aiding machine learning on large scale high-dimensional datasets. ANN methods typically differ in the index structure used for accelerating searches, resulting in various recall/runtime trade-off points. For applications with static datasets, runtime constraints and dataset properties can be used to empirically select an ANN method with suitable operating characteristics. However, for applications with dynamic datasets, which are subject to frequent online changes (like addition of new samples), there is currently no consensus as to which ANN methods are most suitable. Traditional evaluation approaches do not consider the computational costs of updating the index structure, as well as the rate and size of index updates. To address this, we empirically evaluate 5 popular ANN methods on two main applications (online data collection and online feature learning) while taking into account these considerations. Two dynamic datasets are used, derived from the SIFT1M dataset with 1 million samples and the DEEP1B dataset with 1 billion samples. The results indicate that the often used k-d trees method is not suitable on dynamic datasets as it is slower than a straightforward baseline exhaustive search method. For online data collection, the Hierarchical Navigable Small World Graphs method achieves a consistent speedup over baseline across a wide range of recall rates. For online feature learning, the Scalable Nearest Neighbours method is faster than baseline for recall rates below 75%. | [
"['Ben Harwood' 'Amir Dezfouli' 'Iadine Chades' 'Conrad Sanderson']"
]
|
null | null | 2404.19288 | null | null | http://arxiv.org/pdf/2404.19288v1 | 2024-04-30T06:36:43Z | 2024-04-30T06:36:43Z | Training-free Graph Neural Networks and the Power of Labels as Features | We propose training-free graph neural networks (TFGNNs), which can be used without training and can also be improved with optional training, for transductive node classification. We first advocate labels as features (LaF), which is an admissible but not explored technique. We show that LaF provably enhances the expressive power of graph neural networks. We design TFGNNs based on this analysis. In the experiments, we confirm that TFGNNs outperform existing GNNs in the training-free setting and converge with much fewer training iterations than traditional GNNs. | [
"['Ryoma Sato']"
]
|
null | null | 2404.19289 | null | null | http://arxiv.org/pdf/2404.19289v1 | 2024-04-30T06:39:04Z | 2024-04-30T06:39:04Z | On Improving the Algorithm-, Model-, and Data- Efficiency of
Self-Supervised Learning | Self-supervised learning (SSL) has developed rapidly in recent years. However, most of the mainstream methods are computationally expensive and rely on two (or more) augmentations for each image to construct positive pairs. Moreover, they mainly focus on large models and large-scale datasets, which lack flexibility and feasibility in many practical applications. In this paper, we propose an efficient single-branch SSL method based on non-parametric instance discrimination, aiming to improve the algorithm, model, and data efficiency of SSL. By analyzing the gradient formula, we correct the update rule of the memory bank with improved performance. We further propose a novel self-distillation loss that minimizes the KL divergence between the probability distribution and its square root version. We show that this alleviates the infrequent updating problem in instance discrimination and greatly accelerates convergence. We systematically compare the training overhead and performance of different methods in different scales of data, and under different backbones. Experimental results show that our method outperforms various baselines with significantly less overhead, and is especially effective for limited amounts of data and small models. | [
"['Yun-Hao Cao' 'Jianxin Wu']"
]
|
null | null | 2404.19292 | null | null | http://arxiv.org/pdf/2404.19292v1 | 2024-04-30T06:48:56Z | 2024-04-30T06:48:56Z | Provably Efficient Information-Directed Sampling Algorithms for
Multi-Agent Reinforcement Learning | This work designs and analyzes a novel set of algorithms for multi-agent reinforcement learning (MARL) based on the principle of information-directed sampling (IDS). These algorithms draw inspiration from foundational concepts in information theory, and are proven to be sample efficient in MARL settings such as two-player zero-sum Markov games (MGs) and multi-player general-sum MGs. For episodic two-player zero-sum MGs, we present three sample-efficient algorithms for learning Nash equilibrium. The basic algorithm, referred to as MAIDS, employs an asymmetric learning structure where the max-player first solves a minimax optimization problem based on the joint information ratio of the joint policy, and the min-player then minimizes the marginal information ratio with the max-player's policy fixed. Theoretical analyses show that it achieves a Bayesian regret of $\tilde{O}(\sqrt{K})$ for $K$ episodes. To reduce the computational load of MAIDS, we develop an improved algorithm called Reg-MAIDS, which has the same Bayesian regret bound while enjoying less computational complexity. Moreover, by leveraging the flexibility of IDS principle in choosing the learning target, we propose two methods for constructing compressed environments based on rate-distortion theory, upon which we develop an algorithm Compressed-MAIDS wherein the learning target is a compressed environment. Finally, we extend Reg-MAIDS to multi-player general-sum MGs and prove that it can learn either the Nash equilibrium or coarse correlated equilibrium in a sample efficient manner. | [
"['Qiaosheng Zhang' 'Chenjia Bai' 'Shuyue Hu' 'Zhen Wang' 'Xuelong Li']"
]
|
null | null | 2404.19301 | null | null | http://arxiv.org/pdf/2404.19301v1 | 2024-04-30T07:04:23Z | 2024-04-30T07:04:23Z | Statistics and explainability: a fruitful alliance | In this paper, we propose standard statistical tools as a solution to commonly highlighted problems in the explainability literature. Indeed, leveraging statistical estimators allows for a proper definition of explanations, enabling theoretical guarantees and the formulation of evaluation metrics to quantitatively assess the quality of explanations. This approach circumvents, among other things, the subjective human assessment currently prevalent in the literature. Moreover, we argue that uncertainty quantification is essential for providing robust and trustworthy explanations, and it can be achieved in this framework through classical statistical procedures such as the bootstrap. However, it is crucial to note that while Statistics offers valuable contributions, it is not a panacea for resolving all the challenges. Future research avenues could focus on open problems, such as defining a purpose for the explanations or establishing a statistical framework for counterfactual or adversarial scenarios. | [
"['Valentina Ghidini']"
]
|
null | null | 2404.19306 | null | null | http://arxiv.org/pdf/2404.19306v1 | 2024-04-30T07:18:10Z | 2024-04-30T07:18:10Z | Comprehensive Forecasting-Based Analysis of Hybrid and Stacked Stateful/
Stateless Models | Wind speed is a powerful source of renewable energy, which can be used as an alternative to the non-renewable resources for production of electricity. Renewable sources are clean, infinite and do not impact the environment negatively during production of electrical energy. However, eliciting electrical energy from renewable resources viz. solar irradiance, wind speed, hydro requires special planning, failing which may result in huge loss of labour and money for setting up the system. In this paper, we discuss four deep recurrent neural networks viz. Stacked Stateless LSTM, Stacked Stateless GRU, Stacked Stateful LSTM and Stacked Stateful GRU which will be used to predict wind speed on a short-term basis for the airport sites beside two campuses of Mississippi State University. The paper does a comprehensive analysis of the performance of the models used describing their architectures and how efficiently they elicit the results with the help of RMSE values. A detailed description of the time and space complexities of the above models has also been discussed. | [
"['Swayamjit Saha']"
]
|
null | null | 2404.19346 | null | null | http://arxiv.org/abs/2404.19346v1 | 2024-04-30T08:16:52Z | 2024-04-30T08:16:52Z | Pessimistic Value Iteration for Multi-Task Data Sharing in Offline
Reinforcement Learning | Offline Reinforcement Learning (RL) has shown promising results in learning a task-specific policy from a fixed dataset. However, successful offline RL often relies heavily on the coverage and quality of the given dataset. In scenarios where the dataset for a specific task is limited, a natural approach is to improve offline RL with datasets from other tasks, namely, to conduct Multi-Task Data Sharing (MTDS). Nevertheless, directly sharing datasets from other tasks exacerbates the distribution shift in offline RL. In this paper, we propose an uncertainty-based MTDS approach that shares the entire dataset without data selection. Given ensemble-based uncertainty quantification, we perform pessimistic value iteration on the shared offline dataset, which provides a unified framework for single- and multi-task offline RL. We further provide theoretical analysis, which shows that the optimality gap of our method is only related to the expected data coverage of the shared dataset, thus resolving the distribution shift issue in data sharing. Empirically, we release an MTDS benchmark and collect datasets from three challenging domains. The experimental results show our algorithm outperforms the previous state-of-the-art methods in challenging MTDS problems. See https://github.com/Baichenjia/UTDS for the datasets and code. | [
"['Chenjia Bai' 'Lingxiao Wang' 'Jianye Hao' 'Zhuoran Yang' 'Bin Zhao'\n 'Zhen Wang' 'Xuelong Li']"
]
|
null | null | 2404.19349 | null | null | http://arxiv.org/pdf/2404.19349v1 | 2024-04-30T08:20:31Z | 2024-04-30T08:20:31Z | Human-AI Interaction in Industrial Robotics: Design and Empirical
Evaluation of a User Interface for Explainable AI-Based Robot Program
Optimization | While recent advances in deep learning have demonstrated its transformative potential, its adoption for real-world manufacturing applications remains limited. We present an Explanation User Interface (XUI) for a state-of-the-art deep learning-based robot program optimizer which provides both naive and expert users with different user experiences depending on their skill level, as well as Explainable AI (XAI) features to facilitate the application of deep learning methods in real-world applications. To evaluate the impact of the XUI on task performance, user satisfaction and cognitive load, we present the results of a preliminary user survey and propose a study design for a large-scale follow-up study. | [
"['Benjamin Alt' 'Johannes Zahn' 'Claudius Kienle' 'Julia Dvorak'\n 'Marvin May' 'Darko Katic' 'Rainer Jäkel' 'Tobias Kopp' 'Michael Beetz'\n 'Gisela Lanza']"
]
|
null | null | 2404.19351 | null | null | http://arxiv.org/pdf/2404.19351v2 | 2024-05-03T15:05:01Z | 2024-04-30T08:28:03Z | Deep Learning Forecasts Caldera Collapse Events at Kilauea Volcano | During the three month long eruption of Kilauea volcano, Hawaii in 2018, the pre-existing summit caldera collapsed in over 60 quasi-periodic failure events. The last 40 of these events, which generated Mw >5 very long period (VLP) earthquakes, had inter-event times between 0.8 - 2.2 days. These failure events offer a unique dataset for testing methods for predicting earthquake recurrence based on locally recorded GPS, tilt, and seismicity data. In this work, we train a deep learning graph neural network (GNN) to predict the time-to-failure of the caldera collapse events using only a fraction of the data recorded at the start of each cycle. We find that the GNN generalizes to unseen data and can predict the time-to-failure to within a few hours using only 0.5 days of data, substantially improving upon a null model based only on inter-event statistics. Predictions improve with increasing input data length, and are most accurate when using high-SNR tilt-meter data. Applying the trained GNN to synthetic data with different magma pressure decay times predicts failure at a nearly constant stress threshold, revealing that the GNN is sensing the underling physics of caldera collapse. These findings demonstrate the predictability of caldera collapse sequences under well monitored conditions, and highlight the potential of machine learning methods for forecasting real world catastrophic events with limited training data. | [
"['Ian W. McBrearty' 'Paul Segall']"
]
|
null | null | 2404.19354 | null | null | http://arxiv.org/pdf/2404.19354v1 | 2024-04-30T08:33:52Z | 2024-04-30T08:33:52Z | PEFSL: A deployment Pipeline for Embedded Few-Shot Learning on a FPGA
SoC | This paper tackles the challenges of implementing few-shot learning on embedded systems, specifically FPGA SoCs, a vital approach for adapting to diverse classification tasks, especially when the costs of data acquisition or labeling prove to be prohibitively high. Our contributions encompass the development of an end-to-end open-source pipeline for a few-shot learning platform for object classification on FPGA SoCs. The pipeline is built on top of the Tensil open-source framework, facilitating the design, training, evaluation, and deployment of DNN backbones tailored for few-shot learning. Additionally, we showcase our work's potential by building and deploying a low-power, low-latency demonstrator trained on the MiniImageNet dataset with a dataflow architecture. The proposed system has a latency of 30 ms while consuming 6.2 W on the PYNQ-Z1 board. | [
"['Lucas Grativol Ribeiro' 'Lubin Gauthier' 'Mathieu Leonardon'\n 'Jérémy Morlier' 'Antoine Lavrard-Meyer' 'Guillaume Muller'\n 'Virginie Fresse' 'Matthieu Arzel']"
]
|
null | null | 2404.19370 | null | null | http://arxiv.org/pdf/2404.19370v1 | 2024-04-30T08:58:47Z | 2024-04-30T08:58:47Z | Numeric Reward Machines | Reward machines inform reinforcement learning agents about the reward structure of the environment and often drastically speed up the learning process. However, reward machines only accept Boolean features such as robot-reached-gold. Consequently, many inherently numeric tasks cannot profit from the guidance offered by reward machines. To address this gap, we aim to extend reward machines with numeric features such as distance-to-gold. For this, we present two types of reward machines: numeric-Boolean and numeric. In a numeric-Boolean reward machine, distance-to-gold is emulated by two Boolean features distance-to-gold-decreased and robot-reached-gold. In a numeric reward machine, distance-to-gold is used directly alongside the Boolean feature robot-reached-gold. We compare our new approaches to a baseline reward machine in the Craft domain, where the numeric feature is the agent-to-target distance. We use cross-product Q-learning, Q-learning with counter-factual experiences, and the options framework for learning. Our experimental results show that our new approaches significantly outperform the baseline approach. Extending reward machines with numeric features opens up new possibilities of using reward machines in inherently numeric tasks. | [
"['Kristina Levina' 'Nikolaos Pappas' 'Athanasios Karapantelakis'\n 'Aneta Vulgarakis Feljan' 'Jendrik Seipp']"
]
|
null | null | 2404.19397 | null | null | http://arxiv.org/pdf/2404.19397v1 | 2024-04-30T09:42:40Z | 2024-04-30T09:42:40Z | Can humans teach machines to code? | The goal of inductive program synthesis is for a machine to automatically generate a program from user-supplied examples of the desired behaviour of the program. A key underlying assumption is that humans can provide examples of sufficient quality to teach a concept to a machine. However, as far as we are aware, this assumption lacks both empirical and theoretical support. To address this limitation, we explore the question `Can humans teach machines to code?'. To answer this question, we conduct a study where we ask humans to generate examples for six programming tasks, such as finding the maximum element of a list. We compare the performance of a program synthesis system trained on (i) human-provided examples, (ii) randomly sampled examples, and (iii) expert-provided examples. Our results show that, on most of the tasks, non-expert participants did not provide sufficient examples for a program synthesis system to learn an accurate program. Our results also show that non-experts need to provide more examples than both randomly sampled and expert-provided examples. | [
"['Céline Hocquette' 'Johannes Langer' 'Andrew Cropper' 'Ute Schmid']"
]
|
null | null | 2404.19420 | null | null | http://arxiv.org/pdf/2404.19420v1 | 2024-04-30T10:11:44Z | 2024-04-30T10:11:44Z | Let's Focus: Focused Backdoor Attack against Federated Transfer Learning | Federated Transfer Learning (FTL) is the most general variation of Federated Learning. According to this distributed paradigm, a feature learning pre-step is commonly carried out by only one party, typically the server, on publicly shared data. After that, the Federated Learning phase takes place to train a classifier collaboratively using the learned feature extractor. Each involved client contributes by locally training only the classification layers on a private training set. The peculiarity of an FTL scenario makes it hard to understand whether poisoning attacks can be developed to craft an effective backdoor. State-of-the-art attack strategies assume the possibility of shifting the model attention toward relevant features introduced by a forged trigger injected in the input data by some untrusted clients. Of course, this is not feasible in FTL, as the learned features are fixed once the server performs the pre-training step. Consequently, in this paper, we investigate this intriguing Federated Learning scenario to identify and exploit a vulnerability obtained by combining eXplainable AI (XAI) and dataset distillation. In particular, the proposed attack can be carried out by one of the clients during the Federated Learning phase of FTL by identifying the optimal location for the trigger through XAI and encapsulating compressed information of the backdoor class. Due to its behavior, we refer to our approach as a focused backdoor approach (FB-FTL for short) and test its performance by explicitly referencing an image classification scenario. With an average 80% attack success rate, obtained results show the effectiveness of our attack also against existing defenses for Federated Learning. | [
"['Marco Arazzi' 'Stefanos Koffas' 'Antonino Nocera' 'Stjepan Picek']"
]
|
null | null | 2404.19429 | null | null | http://arxiv.org/pdf/2404.19429v1 | 2024-04-30T10:17:21Z | 2024-04-30T10:17:21Z | Lancet: Accelerating Mixture-of-Experts Training via Whole Graph
Computation-Communication Overlapping | The Mixture-of-Expert (MoE) technique plays a crucial role in expanding the size of DNN model parameters. However, it faces the challenge of extended all-to-all communication latency during the training process. Existing methods attempt to mitigate this issue by overlapping all-to-all with expert computation. Yet, these methods frequently fall short of achieving sufficient overlap, consequently restricting the potential for performance enhancements. In our study, we extend the scope of this challenge by considering overlap at the broader training graph level. During the forward pass, we enable non-MoE computations to overlap with all-to-all through careful partitioning and pipelining. In the backward pass, we achieve overlap with all-to-all by scheduling gradient weight computations. We implement these techniques in Lancet, a system using compiler-based optimization to automatically enhance MoE model training. Our extensive evaluation reveals that Lancet significantly reduces the time devoted to non-overlapping communication, by as much as 77%. Moreover, it achieves a notable end-to-end speedup of up to 1.3 times when compared to the state-of-the-art solutions. | [
"['Chenyu Jiang' 'Ye Tian' 'Zhen Jia' 'Shuai Zheng' 'Chuan Wu' 'Yida Wang']"
]
|
null | null | 2404.19452 | null | null | http://arxiv.org/pdf/2404.19452v1 | 2024-04-30T11:09:47Z | 2024-04-30T11:09:47Z | How to Sustainably Monitor ML-Enabled Systems? Accuracy and Energy
Efficiency Tradeoffs in Concept Drift Detection | ML-enabled systems that are deployed in a production environment typically suffer from decaying model prediction quality through concept drift, i.e., a gradual change in the statistical characteristics of a certain real-world domain. To combat this, a simple solution is to periodically retrain ML models, which unfortunately can consume a lot of energy. One recommended tactic to improve energy efficiency is therefore to systematically monitor the level of concept drift and only retrain when it becomes unavoidable. Different methods are available to do this, but we know very little about their concrete impact on the tradeoff between accuracy and energy efficiency, as these methods also consume energy themselves. To address this, we therefore conducted a controlled experiment to study the accuracy vs. energy efficiency tradeoff of seven common methods for concept drift detection. We used five synthetic datasets, each in a version with abrupt and one with gradual drift, and trained six different ML models as base classifiers. Based on a full factorial design, we tested 420 combinations (7 drift detectors * 5 datasets * 2 types of drift * 6 base classifiers) and compared energy consumption and drift detection accuracy. Our results indicate that there are three types of detectors: a) detectors that sacrifice energy efficiency for detection accuracy (KSWIN), b) balanced detectors that consume low to medium energy with good accuracy (HDDM_W, ADWIN), and c) detectors that consume very little energy but are unusable in practice due to very poor accuracy (HDDM_A, PageHinkley, DDM, EDDM). By providing rich evidence for this energy efficiency tactic, our findings support ML practitioners in choosing the best suited method of concept drift detection for their ML-enabled systems. | [
"['Rafiullah Omar' 'Justus Bogner' 'Joran Leest' 'Vincenzo Stoico'\n 'Patricia Lago' 'Henry Muccini']"
]
|
null | null | 2404.19456 | null | null | http://arxiv.org/pdf/2404.19456v1 | 2024-04-30T11:13:23Z | 2024-04-30T11:13:23Z | Imitation Learning: A Survey of Learning Methods, Environments and
Metrics | Imitation learning is an approach in which an agent learns how to execute a task by trying to mimic how one or more teachers perform it. This learning approach offers a compromise between the time it takes to learn a new task and the effort needed to collect teacher samples for the agent. It achieves this by balancing learning from the teacher, who has some information on how to perform the task, and deviating from their examples when necessary, such as states not present in the teacher samples. Consequently, the field of imitation learning has received much attention from researchers in recent years, resulting in many new methods and applications. However, with this increase in published work and past surveys focusing mainly on methodology, a lack of standardisation became more prominent in the field. This non-standardisation is evident in the use of environments, which appear in no more than two works, and evaluation processes, such as qualitative analysis, that have become rare in current literature. In this survey, we systematically review current imitation learning literature and present our findings by (i) classifying imitation learning techniques, environments and metrics by introducing novel taxonomies; (ii) reflecting on main problems from the literature; and (iii) presenting challenges and future directions for researchers. | [
"['Nathan Gavenski' 'Odinaldo Rodrigues' 'Michael Luck']"
]
|
null | null | 2404.19460 | null | null | http://arxiv.org/pdf/2404.19460v1 | 2024-04-30T11:19:05Z | 2024-04-30T11:19:05Z | AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples | Adversarial examples are typically optimized with gradient-based attacks. While novel attacks are continuously proposed, each is shown to outperform its predecessors using different experimental setups, hyperparameter settings, and number of forward and backward calls to the target models. This provides overly-optimistic and even biased evaluations that may unfairly favor one particular attack over the others. In this work, we aim to overcome these limitations by proposing AttackBench, i.e., the first evaluation framework that enables a fair comparison among different attacks. To this end, we first propose a categorization of gradient-based attacks, identifying their main components and differences. We then introduce our framework, which evaluates their effectiveness and efficiency. We measure these characteristics by (i) defining an optimality metric that quantifies how close an attack is to the optimal solution, and (ii) limiting the number of forward and backward queries to the model, such that all attacks are compared within a given maximum query budget. Our extensive experimental analysis compares more than 100 attack implementations with a total of over 800 different configurations against CIFAR-10 and ImageNet models, highlighting that only very few attacks outperform all the competing approaches. Within this analysis, we shed light on several implementation issues that prevent many attacks from finding better solutions or running at all. We release AttackBench as a publicly available benchmark, aiming to continuously update it to include and evaluate novel gradient-based attacks for optimizing adversarial examples. | [
"['Antonio Emanuele Cinà' 'Jérôme Rony' 'Maura Pintor' 'Luca Demetrio'\n 'Ambra Demontis' 'Battista Biggio' 'Ismail Ben Ayed' 'Fabio Roli']"
]
|
null | null | 2404.19462 | null | null | http://arxiv.org/pdf/2404.19462v1 | 2024-04-30T11:23:31Z | 2024-04-30T11:23:31Z | Continual Model-based Reinforcement Learning for Data Efficient Wireless
Network Optimisation | We present a method that addresses the pain point of long lead-time required to deploy cell-level parameter optimisation policies to new wireless network sites. Given a sequence of action spaces represented by overlapping subsets of cell-level configuration parameters provided by domain experts, we formulate throughput optimisation as Continual Reinforcement Learning of control policies. Simulation results suggest that the proposed system is able to shorten the end-to-end deployment lead-time by two-fold compared to a reinitialise-and-retrain baseline without any drop in optimisation gain. | [
"['Cengis Hasan' 'Alexandros Agapitos' 'David Lynch' 'Alberto Castagna'\n 'Giorgio Cruciata' 'Hao Wang' 'Aleksandar Milenovic']"
]
|
null | null | 2404.19467 | null | null | http://arxiv.org/pdf/2404.19467v1 | 2024-04-30T11:31:07Z | 2024-04-30T11:31:07Z | Bayesian Functional Connectivity and Graph Convolutional Network for
Working Memory Load Classification | Brain responses related to working memory originate from distinct brain areas and oscillate at different frequencies. EEG signals with high temporal correlation can effectively capture these responses. Therefore, estimating the functional connectivity of EEG for working memory protocols in different frequency bands plays a significant role in analyzing the brain dynamics with increasing memory and cognitive loads, which remains largely unexplored. The present study introduces a Bayesian structure learning algorithm to learn the functional connectivity of EEG in sensor space. Next, the functional connectivity graphs are taken as input to the graph convolutional network to classify the working memory loads. The intrasubject (subject-specific) classification performed on 154 subjects for six different verbal working memory loads produced the highest classification accuracy of 96% and average classification accuracy of 89%, outperforming state-of-the-art classification models proposed in the literature. Furthermore, the proposed Bayesian structure learning algorithm is compared with state-of-the-art functional connectivity estimation methods through intersubject and intrasubject statistical analysis of variance. The results also show that the alpha and theta bands have better classification accuracy than the beta band. | [
"['Harshini Gangapuram' 'Vidya Manian']"
]
|
null | null | 2404.19484 | null | null | http://arxiv.org/pdf/2404.19484v2 | 2024-05-02T01:58:15Z | 2024-04-30T12:05:48Z | More Compute Is What You Need | Large language model pre-training has become increasingly expensive, with most practitioners relying on scaling laws to allocate compute budgets for model size and training tokens, commonly referred to as Compute-Optimal or Chinchilla Optimal. In this paper, we hypothesize a new scaling law that suggests model performance depends mostly on the amount of compute spent for transformer-based models, independent of the specific allocation to model size and dataset size. Using this unified scaling law, we predict that (a) for inference efficiency, training should prioritize smaller model sizes and larger training datasets, and (b) assuming the exhaustion of available web datasets, scaling the model size might be the only way to further improve model performance. | [
"['Zhen Guo']"
]
|
null | null | 2404.19486 | null | null | http://arxiv.org/pdf/2404.19486v1 | 2024-04-30T12:09:55Z | 2024-04-30T12:09:55Z | Safe Training with Sensitive In-domain Data: Leveraging Data
Fragmentation To Mitigate Linkage Attacks | Current text generation models are trained using real data which can potentially contain sensitive information, such as confidential patient information and the like. Under certain conditions output of the training data which they have memorised can be triggered, exposing sensitive data. To mitigate against this risk we propose a safer alternative which sees fragmented data in the form of domain-specific short phrases randomly grouped together shared instead of full texts. Thus, text fragments that could re-identify an individual cannot be reproduced by the model in one sequence, giving significant protection against linkage attacks. We fine-tune several state-of-the-art LLMs using meaningful syntactic chunks to explore their utility. In particular, we fine-tune BERT-based models to predict two cardiovascular diagnoses. Our results demonstrate the capacity of LLMs to benefit from the pre-trained knowledge and deliver classification results when fine-tuned with fragmented data comparable to fine-tuning with full training data. | [
"['Mariia Ignashina' 'Julia Ive']"
]
|
null | null | 2404.19487 | null | null | http://arxiv.org/pdf/2404.19487v1 | 2024-04-30T12:15:50Z | 2024-04-30T12:15:50Z | Finetuning greedy kernel models by exchange algorithms | Kernel based approximation offers versatile tools for high-dimensional approximation, which can especially be leveraged for surrogate modeling. For this purpose, both "knot insertion" and "knot removal" approaches aim at choosing a suitable subset of the data, in order to obtain a sparse but nevertheless accurate kernel model. In the present work, focussing on kernel based interpolation, we aim at combining these two approaches to further improve the accuracy of kernel models, without increasing the computational complexity of the final kernel model. For this, we introduce a class of kernel exchange algorithms (KEA). The resulting KEA algorithm can be used for finetuning greedy kernel surrogate models, allowing for a reduction of the error of up to 86.4% (17.2% on average) in our experiments. | [
"['Tizian Wenzel' 'Armin Iske']"
]
|
null | null | 2404.19501 | null | null | http://arxiv.org/pdf/2404.19501v1 | 2024-04-30T12:37:01Z | 2024-04-30T12:37:01Z | A Unified Theory of Exact Inference and Learning in Exponential Family
Latent Variable Models | Bayes' rule describes how to infer posterior beliefs about latent variables given observations, and inference is a critical step in learning algorithms for latent variable models (LVMs). Although there are exact algorithms for inference and learning for certain LVMs such as linear Gaussian models and mixture models, researchers must typically develop approximate inference and learning algorithms when applying novel LVMs. In this paper we study the line that separates LVMs that rely on approximation schemes from those that do not, and develop a general theory of exponential family, latent variable models for which inference and learning may be implemented exactly. Firstly, under mild assumptions about the exponential family form of a given LVM, we derive necessary and sufficient conditions under which the LVM prior is in the same exponential family as its posterior, such that the prior is conjugate to the posterior. We show that all models that satisfy these conditions are constrained forms of a particular class of exponential family graphical model. We then derive general inference and learning algorithms, and demonstrate them on a variety of example models. Finally, we show how to compose our models into graphical models that retain tractable inference and learning. In addition to our theoretical work, we have implemented our algorithms in a collection of libraries with which we provide numerous demonstrations of our theory, and with which researchers may apply our theory in novel statistical settings. | [
"['Sacha Sokoloski']"
]
|
null | null | 2404.19508 | null | null | http://arxiv.org/pdf/2404.19508v1 | 2024-04-30T12:43:11Z | 2024-04-30T12:43:11Z | Temporal Graph ODEs for Irregularly-Sampled Time Series | Modern graph representation learning works mostly under the assumption of dealing with regularly sampled temporal graph snapshots, which is far from realistic, e.g., social networks and physical systems are characterized by continuous dynamics and sporadic observations. To address this limitation, we introduce the Temporal Graph Ordinary Differential Equation (TG-ODE) framework, which learns both the temporal and spatial dynamics from graph streams where the intervals between observations are not regularly spaced. We empirically validate the proposed approach on several graph benchmarks, showing that TG-ODE can achieve state-of-the-art performance in irregular graph stream tasks. | [
"['Alessio Gravina' 'Daniele Zambon' 'Davide Bacciu' 'Cesare Alippi']"
]
|
null | null | 2404.19519 | null | null | http://arxiv.org/pdf/2404.19519v1 | 2024-04-30T12:49:54Z | 2024-04-30T12:49:54Z | Generating Robust Counterfactual Witnesses for Graph Neural Networks | This paper introduces a new class of explanation structures, called robust counterfactual witnesses (RCWs), to provide robust, both counterfactual and factual explanations for graph neural networks. Given a graph neural network M, a robust counterfactual witness refers to the fraction of a graph G that is a counterfactual and factual explanation of the results of M over G, and also remains so for any "disturbed" G obtained by flipping up to k of its node pairs. We establish the hardness results, from tractable results to co-NP-hardness, for verifying and generating robust counterfactual witnesses. We study such structures for GNN-based node classification, and present efficient algorithms to verify and generate RCWs. We also provide a parallel algorithm to verify and generate RCWs for large graphs with scalability guarantees. We experimentally verify our explanation generation process for benchmark datasets, and showcase their applications. | [
"['Dazhuo Qiu' 'Mengying Wang' 'Arijit Khan' 'Yinghui Wu']"
]
|
null | null | 2404.19536 | null | null | http://arxiv.org/pdf/2404.19536v1 | 2024-04-30T13:12:36Z | 2024-04-30T13:12:36Z | Physics-Informed Machine Learning On Polar Ice: A Survey | The mass loss of the polar ice sheets contributes considerably to ongoing sea-level rise and changing ocean circulation, leading to coastal flooding and risking the homes and livelihoods of tens of millions of people globally. To address the complex problem of ice behavior, physical models and data-driven models have been proposed in the literature. Although traditional physical models can guarantee physically meaningful results, they have limitations in producing high-resolution results. On the other hand, data-driven approaches require large amounts of high-quality and labeled data, which is rarely available in the polar regions. Hence, as a promising framework that leverages the advantages of physical models and data-driven methods, physics-informed machine learning (PIML) has been widely studied in recent years. In this paper, we review the existing algorithms of PIML, provide our own taxonomy based on the methods of combining physics and data-driven approaches, and analyze the advantages of PIML in the aspects of accuracy and efficiency. Further, our survey discusses some current challenges and highlights future opportunities, including PIML on sea ice studies, PIML with different combination methods and backbone networks, and neural operator methods. | [
"['Zesheng Liu' 'YoungHyun Koo' 'Maryam Rahnemoonfar']"
]
|
null | null | 2404.19557 | null | null | http://arxiv.org/pdf/2404.19557v3 | 2024-06-12T14:38:48Z | 2024-04-30T13:39:26Z | Neural Dynamic Data Valuation | Data constitute the foundational component of the data economy and its marketplaces. Efficient and fair data valuation has emerged as a topic of significant interest. Many approaches based on marginal contribution have shown promising results in various downstream tasks. However, they are well known to be computationally expensive as they require training a large number of utility functions, which are used to evaluate the usefulness or value of a given dataset for a specific purpose. As a result, it has been recognized as infeasible to apply these methods to a data marketplace involving large-scale datasets. Consequently, a critical issue arises: how can the re-training of the utility function be avoided? To address this issue, we propose a novel data valuation method from the perspective of optimal control, named the neural dynamic data valuation (NDDV). Our method has solid theoretical interpretations to accurately identify the data valuation via the sensitivity of the data optimal control state. In addition, we implement a data re-weighting strategy to capture the unique features of data points, ensuring fairness through the interaction between data points and the mean-field states. Notably, our method requires only training once to estimate the value of all data points, significantly improving the computational efficiency. We conduct comprehensive experiments using different datasets and tasks. The results demonstrate that the proposed NDDV method outperforms the existing state-of-the-art data valuation methods in accurately identifying data points with either high or low values and is more computationally efficient. | [
"['Zhangyong Liang' 'Huanhuan Gao' 'Ji Zhang']"
]
|
null | null | 2404.19579 | null | null | http://arxiv.org/pdf/2404.19579v1 | 2024-04-30T14:16:45Z | 2024-04-30T14:16:45Z | Automatic Cardiac Pathology Recognition in Echocardiography Images Using
Higher Order Dynamic Mode Decomposition and a Vision Transformer for Small
Datasets | Heart diseases are the main international cause of human defunction. According to the WHO, nearly 18 million people decease each year because of heart diseases. Also considering the increase of medical data, much pressure is put on the health industry to develop systems for early and accurate heart disease recognition. In this work, an automatic cardiac pathology recognition system based on a novel deep learning framework is proposed, which analyses in real-time echocardiography video sequences. The system works in two stages. The first one transforms the data included in a database of echocardiography sequences into a machine-learning-compatible collection of annotated images which can be used in the training stage of any kind of machine learning-based framework, and more specifically with deep learning. This includes the use of the Higher Order Dynamic Mode Decomposition (HODMD) algorithm, for the first time to the authors' knowledge, for both data augmentation and feature extraction in the medical field. The second stage is focused on building and training a Vision Transformer (ViT), barely explored in the related literature. The ViT is adapted for an effective training from scratch, even with small datasets. The designed neural network analyses images from an echocardiography sequence to predict the heart state. The results obtained show the superiority of the proposed system and the efficacy of the HODMD algorithm, even outperforming pretrained Convolutional Neural Networks (CNNs), which are so far the method of choice in the literature. | [
"['Andrés Bell-Navas' 'Nourelhouda Groun' 'María Villalba-Orero'\n 'Enrique Lara-Pezzi' 'Jesús Garicano-Mena' 'Soledad Le Clainche']"
]
|
null | null | 2404.19582 | null | null | http://arxiv.org/pdf/2404.19582v1 | 2024-04-30T14:19:06Z | 2024-04-30T14:19:06Z | Leveraging Label Information for Stealthy Data Stealing in Vertical
Federated Learning | We develop DMAVFL, a novel attack strategy that evades current detection mechanisms. The key idea is to integrate a discriminator with an auxiliary classifier that takes full advantage of the label information (which was completely ignored in previous attacks): on one hand, label information helps to better characterize embeddings of samples from distinct classes, yielding an improved reconstruction performance; on the other hand, computing malicious gradients with label information better mimics the honest training, making the malicious gradients indistinguishable from the honest ones, and the attack much more stealthy. Our comprehensive experiments demonstrate that DMAVFL significantly outperforms existing attacks, and successfully circumvents SOTA defenses for malicious attacks. Additional ablation studies and evaluations on other defenses further underscore the robustness and effectiveness of DMAVFL. | [
"['Duanyi Yao' 'Songze Li' 'Xueluan Gong' 'Sizai Hou' 'Gaoning Pan']"
]
|
null | null | 2404.19591 | null | null | http://arxiv.org/abs/2404.19591v1 | 2024-04-30T14:36:04Z | 2024-04-30T14:36:04Z | Towards Interactively Improving ML Data Preparation Code via "Shadow
Pipelines" | Data scientists develop ML pipelines in an iterative manner: they repeatedly screen a pipeline for potential issues, debug it, and then revise and improve its code according to their findings. However, this manual process is tedious and error-prone. Therefore, we propose to support data scientists during this development cycle with automatically derived interactive suggestions for pipeline improvements. We discuss our vision to generate these suggestions with so-called shadow pipelines, hidden variants of the original pipeline that modify it to auto-detect potential issues, try out modifications for improvements, and suggest and explain these modifications to the user. We envision to apply incremental view maintenance-based optimisations to ensure low-latency computation and maintenance of the shadow pipelines. We conduct preliminary experiments to showcase the feasibility of our envisioned approach and the potential benefits of our proposed optimisations. | [
"['Stefan Grafberger' 'Paul Groth' 'Sebastian Schelter']"
]
|
null | null | 2404.19596 | null | null | http://arxiv.org/pdf/2404.19596v1 | 2024-04-30T14:43:51Z | 2024-04-30T14:43:51Z | Debiased Collaborative Filtering with Kernel-Based Causal Balancing | Debiased collaborative filtering aims to learn an unbiased prediction model by removing different biases in observational datasets. To solve this problem, one of the simple and effective methods is based on the propensity score, which adjusts the observational sample distribution to the target one by reweighting observed instances. Ideally, propensity scores should be learned with causal balancing constraints. However, existing methods usually ignore such constraints or implement them with unreasonable approximations, which may affect the accuracy of the learned propensity scores. To bridge this gap, in this paper, we first analyze the gaps between the causal balancing requirements and existing methods such as learning the propensity with cross-entropy loss or manually selecting functions to balance. Inspired by these gaps, we propose to approximate the balancing functions in reproducing kernel Hilbert space and demonstrate that, based on the universal property and representer theorem of kernel functions, the causal balancing constraints can be better satisfied. Meanwhile, we propose an algorithm that adaptively balances the kernel function and theoretically analyze the generalization error bound of our methods. We conduct extensive experiments to demonstrate the effectiveness of our methods, and to promote this research direction, we have released our project at https://github.com/haoxuanli-pku/ICLR24-Kernel-Balancing. | [
"['Haoxuan Li' 'Chunyuan Zheng' 'Yanghao Xiao' 'Peng Wu' 'Zhi Geng'\n 'Xu Chen' 'Peng Cui']"
]
|
null | null | 2404.19605 | null | null | http://arxiv.org/pdf/2404.19605v1 | 2024-04-30T14:55:57Z | 2024-04-30T14:55:57Z | Data-Driven Invertible Neural Surrogates of Atmospheric Transmission | We present a framework for inferring an atmospheric transmission profile from a spectral scene. This framework leverages a lightweight, physics-based simulator that is automatically tuned - by virtue of autodifferentiation and differentiable programming - to construct a surrogate atmospheric profile to model the observed data. We demonstrate utility of the methodology by (i) performing atmospheric correction, (ii) recasting spectral data between various modalities (e.g. radiance and reflectance at the surface and at the sensor), and (iii) inferring atmospheric transmission profiles, such as absorbing bands and their relative magnitudes. | [
"['James Koch' 'Brenda Forland' 'Bruce Bernacki' 'Timothy Doster'\n 'Tegan Emerson']"
]
|
null | null | 2404.19620 | null | null | http://arxiv.org/pdf/2404.19620v1 | 2024-04-30T15:20:41Z | 2024-04-30T15:20:41Z | Be Aware of the Neighborhood Effect: Modeling Selection Bias under
Interference | Selection bias in recommender system arises from the recommendation process of system filtering and the interactive process of user selection. Many previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model, but ignore the fact that potential outcomes for a given user-item pair may vary with the treatments assigned to other user-item pairs, named neighborhood effect. To fill the gap, this paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference and introduces a treatment representation to capture the neighborhood effect. On this basis, we propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect. We further develop two new estimators for estimating the proposed ideal loss. We theoretically establish the connection between the proposed and previous debiasing methods ignoring the neighborhood effect, showing that the proposed methods can achieve unbiased learning when both selection bias and neighborhood effect are present, while the existing methods are biased. Extensive semi-synthetic and real-world experiments are conducted to demonstrate the effectiveness of the proposed methods. | [
"['Haoxuan Li' 'Chunyuan Zheng' 'Sihao Ding' 'Peng Wu' 'Zhi Geng'\n 'Fuli Feng' 'Xiangnan He']"
]
|
null | null | 2404.19630 | null | null | http://arxiv.org/pdf/2404.19630v1 | 2024-04-30T15:30:14Z | 2024-04-30T15:30:14Z | Analyzing and Exploring Training Recipes for Large-Scale
Transformer-Based Weather Prediction | The rapid rise of deep learning (DL) in numerical weather prediction (NWP) has led to a proliferation of models which forecast atmospheric variables with skill comparable or superior to that of traditional physics-based NWP. However, among these leading DL models, there is a wide variance in both the training settings and architecture used. Further, the lack of thorough ablation studies makes it hard to discern which components are most critical to success. In this work, we show that it is possible to attain high forecast skill even with relatively off-the-shelf architectures, simple training procedures, and moderate compute budgets. Specifically, we train a minimally modified SwinV2 transformer on ERA5 data, and find that it attains superior forecast skill when compared against IFS. We present some ablations on key aspects of the training pipeline, exploring different loss functions, model sizes and depths, and multi-step fine-tuning to investigate their effect. We also examine the model performance with metrics beyond the typical ACC and RMSE, and investigate how the performance scales with model size. | [
"['Jared D. Willard' 'Peter Harrington' 'Shashank Subramanian'\n 'Ankur Mahesh' \"Travis A. O'Brien\" 'William D. Collins']"
]
|
null | null | 2404.19631 | null | null | http://arxiv.org/pdf/2404.19631v1 | 2024-04-30T15:34:51Z | 2024-04-30T15:34:51Z | On Training a Neural Network to Explain Binaries | In this work, we begin to investigate the possibility of training a deep neural network on the task of binary code understanding. Specifically, the network would take, as input, features derived directly from binaries and output English descriptions of functionality to aid a reverse engineer in investigating the capabilities of a piece of closed-source software, be it malicious or benign. Given recent success in applying large language models (generative AI) to the task of source code summarization, this seems a promising direction. However, in our initial survey of the available datasets, we found nothing of sufficiently high quality and volume to train these complex models. Instead, we build our own dataset derived from a capture of Stack Overflow containing 1.1M entries. A major result of our work is a novel dataset evaluation method using the correlation between two distances on sample pairs: one distance in the embedding space of inputs and the other in the embedding space of outputs. Intuitively, if two samples have inputs close in the input embedding space, their outputs should also be close in the output embedding space. We found this Embedding Distance Correlation (EDC) test to be highly diagnostic, indicating that our collected dataset and several existing open-source datasets are of low quality as the distances are not well correlated. We proceed to explore the general applicability of EDC, applying it to a number of qualitatively known good datasets and a number of synthetically known bad ones and found it to be a reliable indicator of dataset value. | [
"['Alexander Interrante-Grant' 'Andy Davis' 'Heather Preslier' 'Tim Leek']"
]
|
null | null | 2404.19640 | null | null | http://arxiv.org/pdf/2404.19640v1 | 2024-04-27T01:34:46Z | 2024-04-27T01:34:46Z | Attacking Bayes: On the Adversarial Robustness of Bayesian Neural
Networks | Adversarial examples have been shown to cause neural networks to fail on a wide range of vision and language tasks, but recent work has claimed that Bayesian neural networks (BNNs) are inherently robust to adversarial perturbations. In this work, we examine this claim. To study the adversarial robustness of BNNs, we investigate whether it is possible to successfully break state-of-the-art BNN inference methods and prediction pipelines using even relatively unsophisticated attacks for three tasks: (1) label prediction under the posterior predictive mean, (2) adversarial example detection with Bayesian predictive uncertainty, and (3) semantic shift detection. We find that BNNs trained with state-of-the-art approximate inference methods, and even BNNs trained with Hamiltonian Monte Carlo, are highly susceptible to adversarial attacks. We also identify various conceptual and experimental errors in previous works that claimed inherent adversarial robustness of BNNs and conclusively demonstrate that BNNs and uncertainty-aware Bayesian prediction pipelines are not inherently robust against adversarial attacks. | [
"['Yunzhen Feng' 'Tim G. J. Rudner' 'Nikolaos Tsilivis' 'Julia Kempe']"
]
|
null | null | 2404.19649 | null | null | http://arxiv.org/pdf/2404.19649v1 | 2024-04-29T04:53:20Z | 2024-04-29T04:53:20Z | Landmark Alternating Diffusion | Alternating Diffusion (AD) is a commonly applied diffusion-based sensor fusion algorithm. While it has been successfully applied to various problems, its computational burden remains a limitation. Inspired by the landmark diffusion idea considered in the Robust and Scalable Embedding via Landmark Diffusion (ROSELAND), we propose a variation of AD, called Landmark AD (LAD), which captures the essence of AD while offering superior computational efficiency. We provide a series of theoretical analyses of LAD under the manifold setup and apply it to the automatic sleep stage annotation problem with two electroencephalogram channels to demonstrate its application. | [
"['Sing-Yuan Yeh' 'Hau-Tieng Wu' 'Ronen Talmon' 'Mao-Pei Tsui']"
]
|
null | null | 2404.19651 | null | null | http://arxiv.org/pdf/2404.19651v1 | 2024-04-30T15:49:01Z | 2024-04-30T15:49:01Z | Provably Robust Conformal Prediction with Improved Efficiency | Conformal prediction is a powerful tool to generate uncertainty sets with guaranteed coverage using any predictive model, under the assumption that the training and test data are i.i.d. Recently, it has been shown that adversarial examples are able to manipulate conformal methods to construct prediction sets with invalid coverage rates, as the i.i.d. assumption is violated. To address this issue, a recent work, Randomized Smoothed Conformal Prediction (RSCP), was first proposed to certify the robustness of conformal prediction methods to adversarial noise. However, RSCP has two major limitations: (i) its robustness guarantee is flawed when used in practice and (ii) it tends to produce large uncertainty sets. To address these limitations, we first propose a novel framework called RSCP+ to provide provable robustness guarantee in evaluation, which fixes the issues in the original RSCP method. Next, we propose two novel methods, Post-Training Transformation (PTT) and Robust Conformal Training (RCT), to effectively reduce prediction set size with little computation overhead. Experimental results in CIFAR10, CIFAR100, and ImageNet suggest the baseline method only yields trivial predictions including full label set, while our methods could boost the efficiency by up to $4.36\times$, $5.46\times$, and $16.9\times$ respectively and provide practical robustness guarantee. Our codes are available at https://github.com/Trustworthy-ML-Lab/Provably-Robust-Conformal-Prediction. | [
"['Ge Yan' 'Yaniv Romano' 'Tsui-Wei Weng']"
]
|
null | null | 2404.19654 | null | null | http://arxiv.org/pdf/2404.19654v1 | 2024-04-30T15:51:05Z | 2024-04-30T15:51:05Z | Masked Multi-Query Slot Attention for Unsupervised Object Discovery | Unsupervised object discovery is becoming an essential line of research for tackling recognition problems that require decomposing an image into entities, such as semantic segmentation and object detection. Recently, object-centric methods that leverage self-supervision have gained popularity, due to their simplicity and adaptability to different settings and conditions. However, those methods do not exploit effective techniques already employed in modern self-supervised approaches. In this work, we consider an object-centric approach in which DINO ViT features are reconstructed via a set of queried representations called slots. Based on that, we propose a masking scheme on input features that selectively disregards the background regions, inducing our model to focus more on salient objects during the reconstruction phase. Moreover, we extend the slot attention to a multi-query approach, allowing the model to learn multiple sets of slots, producing more stable masks. During training, these multiple sets of slots are learned independently while, at test time, these sets are merged through Hungarian matching to obtain the final slots. Our experimental results and ablations on the PASCAL-VOC 2012 dataset show the importance of each component and highlight how their combination consistently improves object localization. Our source code is available at: https://github.com/rishavpramanik/maskedmultiqueryslot | [
"['Rishav Pramanik' 'José-Fabian Villa-Vásquez' 'Marco Pedersoli']"
]
|
null | null | 2404.19660 | null | null | http://arxiv.org/pdf/2404.19660v1 | 2024-04-25T10:09:37Z | 2024-04-25T10:09:37Z | Decoder Decomposition for the Analysis of the Latent Space of Nonlinear
Autoencoders With Wind-Tunnel Experimental Data | Turbulent flows are chaotic and multi-scale dynamical systems, which have large numbers of degrees of freedom. Turbulent flows, however, can be modelled with a smaller number of degrees of freedom when using the appropriate coordinate system, which is the goal of dimensionality reduction via nonlinear autoencoders. Autoencoders are expressive tools, but they are difficult to interpret. The goal of this paper is to propose a method to aid the interpretability of autoencoders. This is the decoder decomposition. First, we propose the decoder decomposition, which is a post-processing method to connect the latent variables to the coherent structures of flows. Second, we apply the decoder decomposition to analyse the latent space of synthetic data of a two-dimensional unsteady wake past a cylinder. We find that the dimension of latent space has a significant impact on the interpretability of autoencoders. We identify the physical and spurious latent variables. Third, we apply the decoder decomposition to the latent space of wind-tunnel experimental data of a three-dimensional turbulent wake past a bluff body. We show that the reconstruction error is a function of both the latent space dimension and the decoder size, which are correlated. Finally, we apply the decoder decomposition to rank and select latent variables based on the coherent structures that they represent. This is useful to filter unwanted or spurious latent variables, or to pinpoint specific coherent structures of interest. The ability to rank and select latent variables will help users design and interpret nonlinear autoencoders. | [
"['Yaxin Mo' 'Tullio Traverso' 'Luca Magri']"
]
|
null | null | 2404.19664 | null | null | http://arxiv.org/pdf/2404.19664v2 | 2024-06-07T09:25:42Z | 2024-04-30T15:57:41Z | Towards Generalist Robot Learning from Internet Video: A Survey | This survey presents an overview of methods for learning from video (LfV) in the context of reinforcement learning (RL) and robotics. We focus on methods capable of scaling to large internet video datasets and, in the process, extracting foundational knowledge about the world's dynamics and physical human behaviour. Such methods hold great promise for developing general-purpose robots. We open with an overview of fundamental concepts relevant to the LfV-for-robotics setting. This includes a discussion of the exciting benefits LfV methods can offer (e.g., improved generalization beyond the available robot data) and commentary on key LfV challenges (e.g., missing information in video and LfV distribution shifts). Our literature review begins with an analysis of video foundation model techniques that can extract knowledge from large, heterogeneous video datasets. Next, we review methods that specifically leverage video data for robot learning. Here, we categorise work according to which RL knowledge modality (KM) benefits from the use of video data. We additionally highlight techniques for mitigating LfV challenges, including reviewing action representations that address missing action labels in video. Finally, we examine LfV datasets and benchmarks, before concluding with a discussion of challenges and opportunities in LfV. Here, we advocate for scalable foundation model approaches that can leverage the full range of internet video data, and that target the learning of the most promising RL KMs: the policy and dynamics model. Overall, we hope this survey will serve as a comprehensive reference for the emerging field of LfV, catalysing further research in the area and facilitating progress towards the development of general-purpose robots. | [
"['Robert McCarthy' 'Daniel C. H. Tan' 'Dominik Schmidt' 'Fernando Acero'\n 'Nathan Herr' 'Yilun Du' 'Thomas G. Thuruthel' 'Zhibin Li']"
]
|
null | null | 2404.19668 | null | null | http://arxiv.org/pdf/2404.19668v1 | 2024-04-15T03:07:16Z | 2024-04-15T03:07:16Z | SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural
Networks | Weight quantization is used to deploy high-performance deep learning models on resource-limited hardware, enabling the use of low-precision integers for storage and computation. Spiking neural networks (SNNs) share the goal of enhancing efficiency, but adopt an 'event-driven' approach to reduce the power consumption of neural network inference. While extensive research has focused on weight quantization, quantization-aware training (QAT), and their application to SNNs, the precision reduction of state variables during training has been largely overlooked, potentially diminishing inference performance. This paper introduces two QAT schemes for stateful neurons: (i) a uniform quantization strategy, an established method for weight quantization, and (ii) threshold-centered quantization, which allocates exponentially more quantization levels near the firing threshold. Our results show that increasing the density of quantization levels around the firing threshold improves accuracy across several benchmark datasets. We provide an ablation analysis of the effects of weight and state quantization, both individually and combined, and how they impact models. Our comprehensive empirical evaluation includes full precision, 8-bit, 4-bit, and 2-bit quantized SNNs, using QAT, stateful QAT (SQUAT), and post-training quantization methods. The findings indicate that the combination of QAT and SQUAT enhances performance the most, but given the choice of one or the other, QAT improves performance by the larger degree. These trends are consistent across all datasets. Our methods have been made available in our Python library snnTorch: https://github.com/jeshraghian/snntorch. | [
"['Sreyes Venkatesh' 'Razvan Marinescu' 'Jason K. Eshraghian']"
]
|
null | null | 2404.19669 | null | null | http://arxiv.org/pdf/2404.19669v1 | 2024-04-15T00:25:10Z | 2024-04-15T00:25:10Z | Enhancing Predictive Accuracy in Pharmaceutical Sales Through An
Ensemble Kernel Gaussian Process Regression Approach | This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matérn, and Rational Quadratic kernels to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matérn, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior performance in predictive accuracy, achieving an $R^2$ score near 1.0, and significantly lower values in Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics in complex pharmaceutical sales datasets. | [
"['Shahin Mirshekari' 'Mohammadreza Moradi' 'Hossein Jafari' 'Mehdi Jafari'\n 'Mohammad Ensaf']"
]
|
null | null | 2404.19671 | null | null | http://arxiv.org/pdf/2404.19671v1 | 2024-04-14T17:48:05Z | 2024-04-14T17:48:05Z | ML-based handover prediction over a real O-RAN deployment using RAN
Intelligent controller | O-RAN introduces intelligent and flexible network control in all parts of the network. The use of controllers with open interfaces allow us to gather real time network measurements and make intelligent/informed decision. The work in this paper focuses on developing a use-case for open and reconfigurable networks to investigate the possibility to predict handover events and understand the value of such predictions for all stakeholders that rely on the communication network to conduct their business. We propose a Long-Short Term Memory Machine Learning approach that takes standard Radio Access Network measurements to predict handover events. The models were trained on real network data collected from a commercial O-RAN setup deployed in our OpenIreland testbed. Our results show that the proposed approach can be optimized for either recall or precision, depending on the defined application level objective. We also link the performance of the Machine Learning (ML) algorithm to the network operation cost. Our results show that ML-based matching between the required and available resources can reduce operational cost by more than 80%, compared to long term resource purchases. | [
"['Merim Dzaferagic' 'Bruno Missi Xavier' 'Diarmuid Collins'\n \"Vince D'Onofrio\" 'Magnos Martinello' 'Marco Ruffini']"
]
|
null | null | 2404.19673 | null | null | http://arxiv.org/pdf/2404.19673v1 | 2024-04-30T16:06:04Z | 2024-04-30T16:06:04Z | Neural Controlled Differential Equations with Quantum Hidden Evolutions | We introduce a class of neural controlled differential equations inspired by quantum mechanics. Neural quantum controlled differential equations (NQDEs) model the dynamics by analogue of the Schrödinger equation. Specifically, the hidden state represents the wave function, and its collapse leads to an interpretation of the classification probability. We implement and compare the results of four variants of NQDEs on a toy spiral classification problem. | [
"['Lingyi Yang' 'Zhen Shao']"
]
|
null | null | 2404.19675 | null | null | http://arxiv.org/pdf/2404.19675v1 | 2024-04-12T19:17:14Z | 2024-04-12T19:17:14Z | Deep Learning for Educational Data Science | With the ever-growing presence of deep artificial neural networks in every facet of modern life, a growing body of researchers in educational data science -- a field consisting of various interrelated research communities -- have turned their attention to leveraging these powerful algorithms within the domain of education. Use cases range from advanced knowledge tracing models that can leverage open-ended student essays or snippets of code to automatic affect and behavior detectors that can identify when a student is frustrated or aimlessly trying to solve problems unproductively -- and much more. This chapter provides a brief introduction to deep learning, describes some of its advantages and limitations, presents a survey of its many uses in education, and discusses how it may further come to shape the field of educational data science. | [
"['Juan D. Pinto' 'Luc Paquette']"
]
|
null | null | 2404.19689 | null | null | http://arxiv.org/pdf/2404.19689v1 | 2024-04-30T16:29:44Z | 2024-04-30T16:29:44Z | Continuum limit of $p$-biharmonic equations on graphs | This paper studies the $p$-biharmonic equation on graphs, which arises in point cloud processing and can be interpreted as a natural extension of the graph $p$-Laplacian from the perspective of hypergraph. The asymptotic behavior of the solution is investigated when the random geometric graph is considered and the number of data points goes to infinity. We show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions. The result relies on the uniform $L^p$ estimates for solutions and gradients of nonlocal and graph Poisson equations. The $L^\infty$ estimates of solutions are also obtained as a byproduct. | [
"['Kehan Shi' 'Martin Burger']"
]
|
null | null | 2404.19696 | null | null | http://arxiv.org/pdf/2404.19696v1 | 2024-04-30T16:44:18Z | 2024-04-30T16:44:18Z | Naturally Supervised 3D Visual Grounding with Language-Regularized
Concept Learners | 3D visual grounding is a challenging task that often requires direct and dense supervision, notably the semantic label for each object in the scene. In this paper, we instead study the naturally supervised setting that learns from only 3D scene and QA pairs, where prior works underperform. We propose the Language-Regularized Concept Learner (LARC), which uses constraints from language as regularization to significantly improve the accuracy of neuro-symbolic concept learners in the naturally supervised setting. Our approach is based on two core insights: the first is that language constraints (e.g., a word's relation to another) can serve as effective regularization for structured representations in neuro-symbolic models; the second is that we can query large language models to distill such constraints from language properties. We show that LARC improves performance of prior works in naturally supervised 3D visual grounding, and demonstrates a wide range of 3D visual reasoning capabilities-from zero-shot composition, to data efficiency and transferability. Our method represents a promising step towards regularizing structured visual reasoning frameworks with language-based priors, for learning in settings without dense supervision. | [
"['Chun Feng' 'Joy Hsu' 'Weiyu Liu' 'Jiajun Wu']"
]
|
null | null | 2404.19708 | null | null | http://arxiv.org/pdf/2404.19708v1 | 2024-04-30T17:00:32Z | 2024-04-30T17:00:32Z | Harmonic LLMs are Trustworthy | We introduce an intuitive method to test the robustness (stability and explainability) of any black-box LLM in real-time, based upon the local deviation from harmoniticity, denoted as $\gamma$. To the best of our knowledge this is the first completely model-agnostic and unsupervised method of measuring the robustness of any given response from an LLM, based upon the model itself conforming to a purely mathematical standard. We conduct human annotation experiments to show the positive correlation of $\gamma$ with false or misleading answers, and demonstrate that following the gradient of $\gamma$ in stochastic gradient ascent efficiently exposes adversarial prompts. Measuring $\gamma$ across thousands of queries in popular LLMs (GPT-4, ChatGPT, Claude-2.1, Mixtral-8x7B, Smaug-72B, Llama2-7B, and MPT-7B) allows us to estimate the likelihood of wrong or hallucinatory answers automatically and quantitatively rank the reliability of these models in various objective domains (Web QA, TruthfulQA, and Programming QA). Across all models and domains tested, human ratings confirm that $\gamma \to 0$ indicates trustworthiness, and the low-$\gamma$ leaders among these models are GPT-4, ChatGPT, and Smaug-72B. | [
"['Nicholas S. Kersting' 'Mohammad Rahman' 'Suchismitha Vedala' 'Yang Wang']"
]
|
null | null | 2404.19710 | null | null | http://arxiv.org/pdf/2404.19710v3 | 2024-06-04T09:47:30Z | 2024-04-30T17:01:20Z | A rank decomposition for the topological classification of neural
representations | Neural networks can be thought of as applying a transformation to an input dataset. The way in which they change the topology of such a dataset often holds practical significance for many tasks, particularly those demanding non-homeomorphic mappings for optimal solutions, such as classification problems. In this work, we leverage the fact that neural networks are equivalent to continuous piecewise-affine maps, whose rank can be used to pinpoint regions in the input space that undergo non-homeomorphic transformations, leading to alterations in the topological structure of the input dataset. Our approach enables us to make use of the relative homology sequence, with which one can study the homology groups of the quotient of a manifold $\mathcal{M}$ and a subset $A$, assuming some minimal properties on these spaces. As a proof of principle, we empirically investigate the presence of low-rank (topology-changing) affine maps as a function of network width and mean weight. We show that in randomly initialized narrow networks, there will be regions in which the (co)homology groups of a data manifold can change. As the width increases, the homology groups of the input manifold become more likely to be preserved. We end this part of our work by constructing highly non-random wide networks that do not have this property and relating this non-random regime to Dale's principle, which is a defining characteristic of biological neural networks. Finally, we study simple feedforward networks trained on MNIST, as well as on toy classification and regression tasks, and show that networks manipulate the topology of data differently depending on the continuity of the task they are trained on. | [
"['Kosio Beshkov' 'Gaute T. Einevoll']"
]
|
null | null | 2404.19719 | null | null | http://arxiv.org/pdf/2404.19719v1 | 2024-04-30T17:11:12Z | 2024-04-30T17:11:12Z | The lazy (NTK) and rich ($μ$P) regimes: a gentle tutorial | A central theme of the modern machine learning paradigm is that larger neural networks achieve better performance on a variety of metrics. Theoretical analyses of these overparameterized models have recently centered around studying very wide neural networks. In this tutorial, we provide a nonrigorous but illustrative derivation of the following fact: in order to train wide networks effectively, there is only one degree of freedom in choosing hyperparameters such as the learning rate and the size of the initial weights. This degree of freedom controls the richness of training behavior: at minimum, the wide network trains lazily like a kernel machine, and at maximum, it exhibits feature learning in the so-called $\mu$P regime. In this paper, we explain this richness scale, synthesize recent research results into a coherent whole, offer new perspectives and intuitions, and provide empirical evidence supporting our claims. In doing so, we hope to encourage further study of the richness scale, as it may be key to developing a scientific theory of feature learning in practical deep neural networks. | [
"['Dhruva Karkada']"
]
|
null | null | 2404.19725 | null | null | http://arxiv.org/pdf/2404.19725v3 | 2024-05-15T18:40:42Z | 2024-04-30T17:19:52Z | Fairness Without Demographics in Human-Centered Federated Learning | Federated learning (FL) enables collaborative model training while preserving data privacy, making it suitable for decentralized human-centered AI applications. However, a significant research gap remains in ensuring fairness in these systems. Current fairness strategies in FL require knowledge of bias-creating/sensitive attributes, clashing with FL's privacy principles. Moreover, in human-centered datasets, sensitive attributes may remain latent. To tackle these challenges, we present a novel bias mitigation approach inspired by "Fairness without Demographics" in machine learning. The presented approach achieves fairness without needing knowledge of sensitive attributes by minimizing the top eigenvalue of the Hessian matrix during training, ensuring equitable loss landscapes across FL participants. Notably, we introduce a novel FL aggregation scheme that promotes participating models based on error rates and loss landscape curvature attributes, fostering fairness across the FL system. This work represents the first approach to attaining "Fairness without Demographics" in human-centered FL. Through comprehensive evaluation, our approach demonstrates effectiveness in balancing fairness and efficacy across various real-world applications, FL setups, and scenarios involving single and multiple bias-inducing factors, representing a significant advancement in human-centered FL. | [
"['Shaily Roy' 'Harshit Sharma' 'Asif Salekin']"
]
|
null | null | 2404.19739 | null | null | http://arxiv.org/pdf/2404.19739v1 | 2024-04-30T17:37:21Z | 2024-04-30T17:37:21Z | Mixed Continuous and Categorical Flow Matching for 3D De Novo Molecule
Generation | Deep generative models that produce novel molecular structures have the potential to facilitate chemical discovery. Diffusion models currently achieve state of the art performance for 3D molecule generation. In this work, we explore the use of flow matching, a recently proposed generative modeling framework that generalizes diffusion models, for the task of de novo molecule generation. Flow matching provides flexibility in model design; however, the framework is predicated on the assumption of continuously-valued data. 3D de novo molecule generation requires jointly sampling continuous and categorical variables such as atom position and atom type. We extend the flow matching framework to categorical data by constructing flows that are constrained to exist on a continuous representation of categorical data known as the probability simplex. We call this extension SimplexFlow. We explore the use of SimplexFlow for de novo molecule generation. However, we find that, in practice, a simpler approach that makes no accommodations for the categorical nature of the data yields equivalent or superior performance. As a result of these experiments, we present FlowMol, a flow matching model for 3D de novo molecule generation that achieves improved performance over prior flow matching methods, and we raise important questions about the design of prior distributions for achieving strong performance in flow matching models. Code and trained models for reproducing this work are available at https://github.com/dunni3/FlowMol | [
"['Ian Dunn' 'David Ryan Koes']"
]
|
null | null | 2404.19749 | null | null | http://arxiv.org/pdf/2404.19749v1 | 2024-04-30T17:54:16Z | 2024-04-30T17:54:16Z | Scale-Robust Timely Asynchronous Decentralized Learning | We consider an asynchronous decentralized learning system, which consists of a network of connected devices trying to learn a machine learning model without any centralized parameter server. The users in the network have their own local training data, which is used for learning across all the nodes in the network. The learning method consists of two processes, evolving simultaneously without any necessary synchronization. The first process is the model update, where the users update their local model via a fixed number of stochastic gradient descent steps. The second process is model mixing, where the users communicate with each other via randomized gossiping to exchange their models and average them to reach consensus. In this work, we investigate the staleness criteria for such a system, which is a sufficient condition for convergence of individual user models. We show that for network scaling, i.e., when the number of user devices $n$ is very large, if the gossip capacity of individual users scales as $\Omega(\log n)$, we can guarantee the convergence of user models in finite time. Furthermore, we show that the bounded staleness can only be guaranteed by any distributed opportunistic scheme by $\Omega(n)$ scaling. | [
"['Purbesh Mitra' 'Sennur Ulukus']"
]
|
null | null | 2404.19753 | null | null | http://arxiv.org/pdf/2404.19753v1 | 2024-04-30T17:56:24Z | 2024-04-30T17:56:24Z | DOCCI: Descriptions of Connected and Contrasting Images | Vision-language datasets are vital for both text-to-image (T2I) and image-to-text (I2T) research. However, current datasets lack descriptions with fine-grained detail that would allow for richer associations to be learned by models. To fill the gap, we introduce Descriptions of Connected and Contrasting Images (DOCCI), a dataset with long, human-annotated English descriptions for 15k images that were taken, curated and donated by a single researcher intent on capturing key challenges such as spatial relations, counting, text rendering, world knowledge, and more. We instruct human annotators to create comprehensive descriptions for each image; these average 136 words in length and are crafted to clearly distinguish each image from those that are related or similar. Each description is highly compositional and typically encompasses multiple challenges. Through both quantitative and qualitative analyses, we demonstrate that DOCCI serves as an effective training resource for image-to-text generation -- a PaLI 5B model finetuned on DOCCI shows equal or superior results compared to highly-performant larger models like LLaVA-1.5 7B and InstructBLIP 7B. Furthermore, we show that DOCCI is a useful testbed for text-to-image generation, highlighting the limitations of current text-to-image models in capturing long descriptions and fine details. | [
"['Yasumasa Onoe' 'Sunayana Rane' 'Zachary Berger' 'Yonatan Bitton'\n 'Jaemin Cho' 'Roopal Garg' 'Alexander Ku' 'Zarana Parekh'\n 'Jordi Pont-Tuset' 'Garrett Tanzer' 'Su Wang' 'Jason Baldridge']"
]
|
null | null | 2404.19756 | null | null | http://arxiv.org/pdf/2404.19756v4 | 2024-06-16T13:34:56Z | 2024-04-30T17:58:29Z | KAN: Kolmogorov-Arnold Networks | Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs. | [
"['Ziming Liu' 'Yixuan Wang' 'Sachin Vaidya' 'Fabian Ruehle'\n 'James Halverson' 'Marin Soljačić' 'Thomas Y. Hou' 'Max Tegmark']"
]
|
null | null | 2405.00017 | null | null | http://arxiv.org/pdf/2405.00017v1 | 2024-02-12T18:32:35Z | 2024-02-12T18:32:35Z | Queuing dynamics of asynchronous Federated Learning | We study asynchronous federated learning mechanisms with nodes having potentially different computational speeds. In such an environment, each node is allowed to work on models with potential delays and contribute to updates to the central server at its own pace. Existing analyses of such algorithms typically depend on intractable quantities such as the maximum node delay and do not consider the underlying queuing dynamics of the system. In this paper, we propose a non-uniform sampling scheme for the central server that allows for lower delays with better complexity, taking into account the closed Jackson network structure of the associated computational graph. Our experiments clearly show a significant improvement of our method over current state-of-the-art asynchronous algorithms on an image classification problem. | [
"['Louis Leconte' 'Matthieu Jonckheere' 'Sergey Samsonov' 'Eric Moulines']"
]
|
null | null | 2405.00025 | null | null | http://arxiv.org/pdf/2405.00025v1 | 2024-02-26T07:19:48Z | 2024-02-26T07:19:48Z | Leveraging Pre-trained CNNs for Efficient Feature Extraction in Rice
Leaf Disease Classification | Rice disease classification is a critical task in agricultural research, and in this study, we rigorously evaluate the impact of integrating feature extraction methodologies within pre-trained convolutional neural networks (CNNs). Initial investigations into baseline models, devoid of feature extraction, revealed commendable performance with ResNet-50 and ResNet-101 achieving accuracies of 91% and 92%, respectively. Subsequent integration of Histogram of Oriented Gradients (HOG) yielded substantial improvements across architectures, notably propelling the accuracy of EfficientNet-B7 from 92% to an impressive 97%. Conversely, the application of Local Binary Patterns (LBP) demonstrated more conservative performance enhancements. Moreover, employing Gradient-weighted Class Activation Mapping (Grad-CAM) unveiled that HOG integration resulted in heightened attention to disease-specific features, corroborating the performance enhancements observed. Visual representations further validated HOG's notable influence, showcasing a discernible surge in accuracy across epochs due to focused attention on disease-affected regions. These results underscore the pivotal role of feature extraction, particularly HOG, in refining representations and bolstering classification accuracy. The study's significant highlight was the achievement of 97% accuracy with EfficientNet-B7 employing HOG and Grad-CAM, a noteworthy advancement in optimizing pre-trained CNN-based rice disease identification systems. The findings advocate for the strategic integration of advanced feature extraction techniques with cutting-edge pre-trained CNN architectures, presenting a promising avenue for substantially augmenting the precision and effectiveness of image-based disease classification systems in agricultural contexts. | [
"['Md. Shohanur Islam Sobuj' 'Md. Imran Hossen' 'Md. Foysal Mahmud'\n 'Mahbub Ul Islam Khan']"
]
|
null | null | 2405.00027 | null | null | http://arxiv.org/abs/2405.00027v1 | 2024-02-27T23:49:43Z | 2024-02-27T23:49:43Z | Multidimensional Compressed Sensing for Spectral Light Field Imaging | This paper considers a compressive multi-spectral light field camera model that utilizes a one-hot spectral-coded mask and a microlens array to capture spatial, angular, and spectral information using a single monochrome sensor. We propose a model that employs compressed sensing techniques to reconstruct the complete multi-spectral light field from undersampled measurements. Unlike previous work where a light field is vectorized to a 1D signal, our method employs a 5D basis and a novel 5D measurement model, hence, matching the intrinsic dimensionality of multispectral light fields. We mathematically and empirically show the equivalence of 5D and 1D sensing models, and most importantly that the 5D framework achieves orders of magnitude faster reconstruction while requiring a small fraction of the memory. Moreover, our new multidimensional sensing model opens new research directions for designing efficient visual data acquisition algorithms and hardware. | [
"['Wen Cao' 'Ehsan Miandji' 'Jonas Unger']"
]
|
null | null | 2405.00055 | null | null | http://arxiv.org/pdf/2405.00055v1 | 2024-04-24T09:22:18Z | 2024-04-24T09:22:18Z | A Hybrid Probabilistic Battery Health Management Approach for Robust
Inspection Drone Operations | Health monitoring of remote critical infrastructure is a complex and expensive activity due to the limited infrastructure accessibility. Inspection drones are ubiquitous assets that enhance the reliability of critical infrastructures through improved accessibility. However, due to the harsh operation environment, it is crucial to monitor their health to ensure successful inspection operations. The battery is a key component that determines the overall reliability of the inspection drones and, with an appropriate health management approach, contributes to reliable and robust inspections. In this context, this paper presents a novel hybrid probabilistic approach for battery end-of-discharge (EOD) voltage prediction of Li-Po batteries. The hybridization is achieved in an error-correction configuration, which combines physics-based discharge and probabilistic error-correction models to quantify the aleatoric and epistemic uncertainty. The performance of the hybrid probabilistic methodology was empirically evaluated on a dataset comprising EOD voltage under varying load conditions. The dataset was obtained from real inspection drones operated on different flights, focused on offshore wind turbine inspections. The proposed approach has been tested with different probabilistic methods and demonstrates 14.8% improved performance in probabilistic accuracy compared to the best probabilistic method. In addition, aleatoric and epistemic uncertainties provide robust estimations to enhance the diagnosis of battery health-states. | [
"['Jokin Alcibar' 'Jose I. Aizpurua' 'Ekhi Zugastia' 'Oier Penagarikano']"
]
|
null | null | 2405.00065 | null | null | http://arxiv.org/pdf/2405.00065v2 | 2024-05-14T00:26:29Z | 2024-04-27T06:19:30Z | From Linear to Linearizable Optimization: A Novel Framework with
Applications to Stationary and Non-stationary DR-submodular Optimization | This paper introduces the notion of upper linearizable/quadratizable functions, a class that extends concavity and DR-submodularity in various settings, including monotone and non-monotone cases over different convex sets. A general meta-algorithm is devised to convert algorithms for linear/quadratic maximization into ones that optimize upper quadratizable functions, offering a unified approach to tackling concave and DR-submodular optimization problems. The paper extends these results to multiple feedback settings, facilitating conversions between semi-bandit/first-order feedback and bandit/zeroth-order feedback, as well as between first/zeroth-order feedback and semi-bandit/bandit feedback. Leveraging this framework, new algorithms are derived using existing results as base algorithms for convex optimization, improving upon state-of-the-art results in various cases. Dynamic and adaptive regret guarantees are obtained for DR-submodular maximization, marking the first algorithms to achieve such guarantees in these settings. Notably, the paper achieves these advancements with fewer assumptions compared to existing state-of-the-art results, underscoring its broad applicability and theoretical contributions to non-convex optimization. | [
"['Mohammad Pedramfar' 'Vaneet Aggarwal']"
]
|
null | null | 2405.00074 | null | null | http://arxiv.org/pdf/2405.00074v1 | 2024-04-30T07:24:41Z | 2024-04-30T07:24:41Z | PAODING: A High-fidelity Data-free Pruning Toolkit for Debloating
Pre-trained Neural Networks | We present PAODING, a toolkit to debloat pretrained neural network models through the lens of data-free pruning. To preserve the model fidelity, PAODING adopts an iterative process, which dynamically measures the effect of deleting a neuron to identify candidates that have the least impact to the output layer. Our evaluation shows that PAODING can significantly reduce the model size, generalize on different datasets and models, and meanwhile preserve the model fidelity in terms of test accuracy and adversarial robustness. PAODING is publicly available on PyPI via https://pypi.org/project/paoding-dl. | [
"['Mark Huasong Meng' 'Hao Guan' 'Liuhuo Wan' 'Sin Gee Teo' 'Guangdong Bai'\n 'Jin Song Dong']"
]
|
null | null | 2405.00076 | null | null | http://arxiv.org/pdf/2405.00076v1 | 2024-04-30T10:39:20Z | 2024-04-30T10:39:20Z | On Correcting SHAP Scores | Recent work uncovered examples of classifiers for which SHAP scores yield misleading feature attributions. While such examples might be perceived as suggesting the inadequacy of Shapley values for explainability, this paper shows that the source of the identified shortcomings of SHAP scores resides elsewhere. Concretely, the paper makes the case that the failings of SHAP scores result from the characteristic functions used in earlier works. Furthermore, the paper identifies a number of properties that characteristic functions ought to respect, and proposes several novel characteristic functions, each exhibiting one or more of the desired properties. More importantly, some of the characteristic functions proposed in this paper are guaranteed not to exhibit any of the shortcomings uncovered by earlier work. The paper also investigates the impact of the new characteristic functions on the complexity of computing SHAP scores. Finally, the paper proposes modifications to the tool SHAP to use instead one of our novel characteristic functions, thereby eliminating some of the limitations reported for SHAP scores. | [
"['Olivier Letoffe' 'Xuanxiang Huang' 'Joao Marques-Silva']"
]
|
null | null | 2405.00077 | null | null | http://arxiv.org/pdf/2405.00077v1 | 2024-04-30T10:53:30Z | 2024-04-30T10:53:30Z | BrainODE: Dynamic Brain Signal Analysis via Graph-Aided Neural Ordinary
Differential Equations | Brain network analysis is vital for understanding the neural interactions regarding brain structures and functions, and identifying potential biomarkers for clinical phenotypes. However, widely used brain signals such as Blood Oxygen Level Dependent (BOLD) time series generated from functional Magnetic Resonance Imaging (fMRI) often manifest three challenges: (1) missing values, (2) irregular samples, and (3) sampling misalignment, due to instrumental limitations, impacting downstream brain network analysis and clinical outcome predictions. In this work, we propose a novel model called BrainODE to achieve continuous modeling of dynamic brain signals using Ordinary Differential Equations (ODE). By learning latent initial values and neural ODE functions from irregular time series, BrainODE effectively reconstructs brain signals at any time point, mitigating the aforementioned three data challenges of brain signals altogether. Comprehensive experimental results on real-world neuroimaging datasets demonstrate the superior performance of BrainODE and its capability of addressing the three data challenges. | [
"['Kaiqiao Han' 'Yi Yang' 'Zijie Huang' 'Xuan Kan' 'Yang Yang' 'Ying Guo'\n 'Lifang He' 'Liang Zhan' 'Yizhou Sun' 'Wei Wang' 'Carl Yang']"
]
|
null | null | 2405.00080 | null | null | http://arxiv.org/pdf/2405.00080v2 | 2024-05-03T07:29:24Z | 2024-04-30T16:35:08Z | Recommendation aided Caching using Combinatorial Multi-armed Bandits | We study content caching with recommendations in a wireless network where the users are connected through a base station equipped with a finite-capacity cache. We assume a fixed set of contents with unknown user preferences and content popularities. We can recommend a subset of the contents to the users which encourages the users to request these contents. Recommendation can thus be used to increase cache hits. We formulate the cache hit optimization problem as a combinatorial multi-armed bandit (CMAB). We propose a UCB-based algorithm to decide which contents to cache and recommend. We provide an upper bound on the regret of our algorithm. We numerically demonstrate the performance of our algorithm and compare it to state-of-the-art algorithms. | [
"['Pavamana K J' 'Chandramani Kishore Singh']"
]
|
null | null | 2405.00082 | null | null | http://arxiv.org/pdf/2405.00082v1 | 2024-04-30T18:00:00Z | 2024-04-30T18:00:00Z | Structure learning of Hamiltonians from real-time evolution | We initiate the study of Hamiltonian structure learning from real-time evolution: given the ability to apply $e^{-\mathrm{i} Ht}$ for an unknown local Hamiltonian $H = \sum_{a = 1}^m \lambda_a E_a$ on $n$ qubits, the goal is to recover $H$. This problem is already well-studied under the assumption that the interaction terms, $E_a$, are given, and only the interaction strengths, $\lambda_a$, are unknown. But is it possible to learn a local Hamiltonian without prior knowledge of its interaction structure? We present a new, general approach to Hamiltonian learning that not only solves the challenging structure learning variant, but also resolves other open questions in the area, all while achieving the gold standard of Heisenberg-limited scaling. In particular, our algorithm recovers the Hamiltonian to $\varepsilon$ error with an evolution time scaling with $1/\varepsilon$, and has the following appealing properties: (1) it does not need to know the Hamiltonian terms; (2) it works beyond the short-range setting, extending to any Hamiltonian $H$ where the sum of terms interacting with a qubit has bounded norm; (3) it evolves according to $H$ in constant time $t$ increments, thus achieving constant time resolution. To our knowledge, no prior algorithm with Heisenberg-limited scaling existed with even one of these properties. As an application, we can also learn Hamiltonians exhibiting power-law decay up to accuracy $\varepsilon$ with total evolution time beating the standard limit of $1/\varepsilon^2$. | [
"['Ainesh Bakshi' 'Allen Liu' 'Ankur Moitra' 'Ewin Tang']"
]
|
null | null | 2405.00099 | null | null | http://arxiv.org/pdf/2405.00099v2 | 2024-05-09T15:14:19Z | 2024-04-30T18:00:02Z | Creative Beam Search: LLM-as-a-Judge For Improving Response Generation | Large language models are revolutionizing several areas, including artificial creativity. However, the process of generation in machines profoundly diverges from that observed in humans. In particular, machine generation is characterized by a lack of intentionality and an underlying creative process. We propose a method called Creative Beam Search that uses Diverse Beam Search and LLM-as-a-Judge to perform response generation and response validation. The results of a qualitative experiment show how our approach can provide better output than standard sampling techniques. We also show that the response validation step is a necessary complement to the response generation step. | [
"['Giorgio Franceschelli' 'Mirco Musolesi']"
]
|
null | null | 2405.00123 | null | null | http://arxiv.org/abs/2405.00123v1 | 2024-04-30T18:17:44Z | 2024-04-30T18:17:44Z | Graph Neural Network Approach to Semantic Type Detection in Tables | This study addresses the challenge of detecting semantic column types in relational tables, a key task in many real-world applications. While language models like BERT have improved prediction accuracy, their token input constraints limit the simultaneous processing of intra-table and inter-table information. We propose a novel approach using Graph Neural Networks (GNNs) to model intra-table dependencies, allowing language models to focus on inter-table information. Our proposed method not only outperforms existing state-of-the-art algorithms but also offers novel insights into the utility and functionality of various GNN types for semantic type detection. The code is available at https://github.com/hoseinzadeehsan/GAIT | [
"['Ehsan Hoseinzade' 'Ke Wang']"
]
|
null | null | 2405.00130 | null | null | http://arxiv.org/pdf/2405.00130v1 | 2024-04-30T18:28:09Z | 2024-04-30T18:28:09Z | A Flexible 2.5D Medical Image Segmentation Approach with In-Slice and
Cross-Slice Attention | Deep learning has become the de facto method for medical image segmentation, with 3D segmentation models excelling in capturing complex 3D structures and 2D models offering high computational efficiency. However, segmenting 2.5D images, which have high in-plane but low through-plane resolution, is a relatively unexplored challenge. While applying 2D models to individual slices of a 2.5D image is feasible, it fails to capture the spatial relationships between slices. On the other hand, 3D models face challenges such as resolution inconsistencies in 2.5D images, along with computational complexity and susceptibility to overfitting when trained with limited data. In this context, 2.5D models, which capture inter-slice correlations using only 2D neural networks, emerge as a promising solution due to their reduced computational demand and simplicity in implementation. In this paper, we introduce CSA-Net, a flexible 2.5D segmentation model capable of processing 2.5D images with an arbitrary number of slices through an innovative Cross-Slice Attention (CSA) module. This module uses the cross-slice attention mechanism to effectively capture 3D spatial information by learning long-range dependencies between the center slice (for segmentation) and its neighboring slices. Moreover, CSA-Net utilizes the self-attention mechanism to understand correlations among pixels within the center slice. We evaluated CSA-Net on three 2.5D segmentation tasks: (1) multi-class brain MRI segmentation, (2) binary prostate MRI segmentation, and (3) multi-class prostate MRI segmentation. CSA-Net outperformed leading 2D and 2.5D segmentation methods across all three tasks, demonstrating its efficacy and superiority. Our code is publicly available at https://github.com/mirthAI/CSA-Net. | [
"['Amarjeet Kumar' 'Hongxu Jiang' 'Muhammad Imran' 'Cyndi Valdes'\n 'Gabriela Leon' 'Dahyun Kang' 'Parvathi Nataraj' 'Yuyin Zhou'\n 'Michael D. Weiss' 'Wei Shao']"
]
|