categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2407.01283 | null | null | http://arxiv.org/pdf/2407.01283v1 | 2024-07-01T13:39:03Z | 2024-07-01T13:39:03Z | Energy-Aware Decentralized Learning with Intermittent Model Training | Decentralized learning (DL) offers a powerful framework where nodes collaboratively train models without sharing raw data and without the coordination of a central server. In the iterative rounds of DL, models are trained locally, shared with neighbors in the topology, and aggregated with other models received from neighbors. Sharing and merging models contribute to convergence towards a consensus model that generalizes better across the collective data captured at training time. In addition, the energy consumption while sharing and merging model parameters is negligible compared to the energy spent during the training phase. Leveraging this fact, we present SkipTrain, a novel DL algorithm, which minimizes energy consumption in decentralized learning by strategically skipping some training rounds and substituting them with synchronization rounds. These training-silent periods, besides saving energy, also allow models to better mix and finally produce models with superior accuracy than typical DL algorithms that train at every round. Our empirical evaluations with 256 nodes demonstrate that SkipTrain reduces energy consumption by 50% and increases model accuracy by up to 12% compared to D-PSGD, the conventional DL algorithm. | [
"['Akash Dhasade' 'Paolo Dini' 'Elia Guerra' 'Anne-Marie Kermarrec'\n 'Marco Miozzo' 'Rafael Pires' 'Rishi Sharma' 'Martijn de Vos']"
]
|
null | null | 2407.01284 | null | null | http://arxiv.org/pdf/2407.01284v1 | 2024-07-01T13:39:08Z | 2024-07-01T13:39:08Z | We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning? | Visual mathematical reasoning, as a fundamental visual reasoning ability, has received widespread attention from the Large Multimodal Models (LMMs) community. Existing benchmarks, such as MathVista and MathVerse, focus more on the result-oriented performance but neglect the underlying principles in knowledge acquisition and generalization. Inspired by human-like mathematical reasoning, we introduce WE-MATH, the first benchmark specifically designed to explore the problem-solving principles beyond end-to-end performance. We meticulously collect and categorize 6.5K visual math problems, spanning 67 hierarchical knowledge concepts and five layers of knowledge granularity. We decompose composite problems into sub-problems according to the required knowledge concepts and introduce a novel four-dimensional metric, namely Insufficient Knowledge (IK), Inadequate Generalization (IG), Complete Mastery (CM), and Rote Memorization (RM), to hierarchically assess inherent issues in LMMs' reasoning process. With WE-MATH, we conduct a thorough evaluation of existing LMMs in visual mathematical reasoning and reveal a negative correlation between solving steps and problem-specific performance. We confirm the IK issue of LMMs can be effectively improved via knowledge augmentation strategies. More notably, the primary challenge of GPT-4o has significantly transitioned from IK to IG, establishing it as the first LMM advancing towards the knowledge generalization stage. In contrast, other LMMs exhibit a marked inclination towards Rote Memorization - they correctly solve composite problems involving multiple knowledge concepts yet fail to answer sub-problems. We anticipate that WE-MATH will open new pathways for advancements in visual mathematical reasoning for LMMs. The WE-MATH data and evaluation code are available at https://github.com/We-Math/We-Math. | [
"['Runqi Qiao' 'Qiuna Tan' 'Guanting Dong' 'Minhui Wu' 'Chong Sun'\n 'Xiaoshuai Song' 'Zhuoma GongQue' 'Shanglin Lei' 'Zhe Wei'\n 'Miaoxuan Zhang' 'Runfeng Qiao' 'Yifan Zhang' 'Xiao Zong' 'Yida Xu'\n 'Muxi Diao' 'Zhimin Bao' 'Chen Li' 'Honggang Zhang']"
]
|
null | null | 2407.01290 | null | null | http://arxiv.org/pdf/2407.01290v1 | 2024-07-01T13:44:38Z | 2024-07-01T13:44:38Z | Hypformer: Exploring Efficient Hyperbolic Transformer Fully in Hyperbolic Space | Hyperbolic geometry has shown significant potential in modeling complex structured data, particularly those with underlying tree-like and hierarchical structures. Despite the impressive performance of various hyperbolic neural networks across numerous domains, research on adapting the Transformer to hyperbolic space remains limited. Previous attempts have mainly focused on modifying self-attention modules in the Transformer. However, these efforts have fallen short of developing a complete hyperbolic Transformer. This stems primarily from: (i) the absence of well-defined modules in hyperbolic space, including linear transformation layers, LayerNorm layers, activation functions, dropout operations, etc.; (ii) the quadratic time complexity of the existing hyperbolic self-attention module w.r.t. the number of input tokens, which hinders its scalability. To address these challenges, we propose Hypformer, a novel hyperbolic Transformer based on the Lorentz model of hyperbolic geometry. In Hypformer, we introduce two foundational blocks that define the essential modules of the Transformer in hyperbolic space. Furthermore, we develop a linear self-attention mechanism in hyperbolic space, enabling the hyperbolic Transformer to process billion-scale graph data and long-sequence inputs for the first time. Our experimental results confirm the effectiveness and efficiency of Hypformer across various datasets, demonstrating its potential as an effective and scalable solution for large-scale data representation and large models. | [
"['Menglin Yang' 'Harshit Verma' 'Delvin Ce Zhang' 'Jiahong Liu'\n 'Irwin King' 'Rex Ying']"
]
|
null | null | 2407.01291 | null | null | http://arxiv.org/pdf/2407.01291v1 | 2024-07-01T13:45:31Z | 2024-07-01T13:45:31Z | Lightweight Zero-shot Text-to-Speech with Mixture of Adapters | The advancements in zero-shot text-to-speech (TTS) methods, based on large-scale models, have demonstrated high fidelity in reproducing speaker characteristics. However, these models are too large for practical daily use. We propose a lightweight zero-shot TTS method using a mixture of adapters (MoA). Our proposed method incorporates MoA modules into the decoder and the variance adapter of a non-autoregressive TTS model. These modules enhance the ability to adapt a wide variety of speakers in a zero-shot manner by selecting appropriate adapters associated with speaker characteristics on the basis of speaker embeddings. Our method achieves high-quality speech synthesis with minimal additional parameters. Through objective and subjective evaluations, we confirmed that our method achieves better performance than the baseline with less than 40% of parameters at 1.9 times faster inference speed. Audio samples are available on our demo page (https://ntt-hilab-gensp.github.io/is2024lightweightTTS/). | [
"['Kenichi Fujita' 'Takanori Ashihara' 'Marc Delcroix' 'Yusuke Ijima']"
]
|
null | null | 2407.01294 | null | null | http://arxiv.org/pdf/2407.01294v1 | 2024-07-01T13:47:53Z | 2024-07-01T13:47:53Z | A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms | This paper introduces a collaborative, human-centered taxonomy of AI, algorithmic and automation harms. We argue that existing taxonomies, while valuable, can be narrow, unclear, typically cater to practitioners and government, and often overlook the needs of the wider public. Drawing on existing taxonomies and a large repository of documented incidents, we propose a taxonomy that is clear and understandable to a broad set of audiences, as well as being flexible, extensible, and interoperable. Through iterative refinement with topic experts and crowdsourced annotation testing, we propose a taxonomy that can serve as a powerful tool for civil society organisations, educators, policymakers, product teams and the general public. By fostering a greater understanding of the real-world harms of AI and related technologies, we aim to increase understanding, empower NGOs and individuals to identify and report violations, inform policy discussions, and encourage responsible technology development and deployment. | [
"['Gavin Abercrombie' 'Djalel Benbouzid' 'Paolo Giudici'\n 'Delaram Golpayegani' 'Julio Hernandez' 'Pierre Noro'\n 'Harshvardhan Pandit' 'Eva Paraschou' 'Charlie Pownall' 'Jyoti Prajapati'\n 'Mark A. Sayre' 'Ushnish Sengupta' 'Arthit Suriyawongkul' 'Ruby Thelot'\n 'Sofia Vei' 'Laura Waltersdorfer']"
]
|
null | null | 2407.01300 | null | null | http://arxiv.org/pdf/2407.01300v1 | 2024-07-01T13:56:42Z | 2024-07-01T13:56:42Z | Collaborative Performance Prediction for Large Language Models | Comprehensively understanding and accurately predicting the performance of large language models across diverse downstream tasks has emerged as a pivotal challenge in NLP research. The pioneering scaling law on downstream works demonstrated intrinsic similarities within model families and utilized such similarities for performance prediction. However, they tend to overlook the similarities between model families and only consider design factors listed in the original scaling law. To overcome these limitations, we introduce a novel framework, Collaborative Performance Prediction (CPP), which significantly enhances prediction accuracy by leveraging the historical performance of various models on downstream tasks and other design factors for both model and task. We also collect a collaborative data sourced from online platforms containing both historical performance and additional design factors. With the support of the collaborative data, CPP not only surpasses traditional scaling laws in predicting the performance of scaled LLMs but also facilitates a detailed analysis of factor importance, an area previously overlooked. | [
"['Qiyuan Zhang' 'Fuyuan Lyu' 'Xue Liu' 'Chen Ma']"
]
|
null | null | 2407.01306 | null | null | http://arxiv.org/pdf/2407.01306v1 | 2024-07-01T14:07:46Z | 2024-07-01T14:07:46Z | Unveiling the Unseen: Exploring Whitebox Membership Inference through the Lens of Explainability | The increasing prominence of deep learning applications and reliance on personalized data underscore the urgent need to address privacy vulnerabilities, particularly Membership Inference Attacks (MIAs). Despite numerous MIA studies, significant knowledge gaps persist, particularly regarding the impact of hidden features (in isolation) on attack efficacy and insufficient justification for the root causes of attacks based on raw data features. In this paper, we aim to address these knowledge gaps by first exploring statistical approaches to identify the most informative neurons and quantifying the significance of the hidden activations from the selected neurons on attack accuracy, in isolation and combination. Additionally, we propose an attack-driven explainable framework by integrating the target and attack models to identify the most influential features of raw data that lead to successful membership inference attacks. Our proposed MIA shows an improvement of up to 26% on state-of-the-art MIA. | [
"['Chenxi Li' 'Abhinav Kumar' 'Zhen Guo' 'Jie Hou' 'Reza Tourani']"
]
|
null | null | 2407.01310 | null | null | http://arxiv.org/pdf/2407.01310v1 | 2024-07-01T14:18:15Z | 2024-07-01T14:18:15Z | Multi-State-Action Tokenisation in Decision Transformers for Multi-Discrete Action Spaces | Decision Transformers, in their vanilla form, struggle to perform on image-based environments with multi-discrete action spaces. Although enhanced Decision Transformer architectures have been developed to improve performance, these methods have not specifically addressed this problem of multi-discrete action spaces which hampers existing Decision Transformer architectures from learning good representations. To mitigate this, we propose Multi-State Action Tokenisation (M-SAT), an approach for tokenising actions in multi-discrete action spaces that enhances the model's performance in such environments. Our approach involves two key changes: disentangling actions to the individual action level and tokenising the actions with auxiliary state information. These two key changes also improve individual action level interpretability and visibility within the attention layers. We demonstrate the performance gains of M-SAT on challenging ViZDoom environments with multi-discrete action spaces and image-based state spaces, including the Deadly Corridor and My Way Home scenarios, where M-SAT outperforms the baseline Decision Transformer without any additional data or heavy computational overheads. Additionally, we find that removing positional encoding does not adversely affect M-SAT's performance and, in some cases, even improves it. | [
"['Perusha Moodley' 'Pramod Kaushik' 'Dhillu Thambi' 'Mark Trovinger'\n 'Praveen Paruchuri' 'Xia Hong' 'Benjamin Rosman']"
]
|
null | null | 2407.01316 | null | null | http://arxiv.org/pdf/2407.01316v1 | 2024-07-01T14:24:05Z | 2024-07-01T14:24:05Z | Evaluating Model Performance Under Worst-case Subpopulations | The performance of ML models degrades when the training population is different from that seen under operation. Towards assessing distributional robustness, we study the worst-case performance of a model over all subpopulations of a given size, defined with respect to core attributes Z. This notion of robustness can consider arbitrary (continuous) attributes Z, and automatically accounts for complex intersectionality in disadvantaged groups. We develop a scalable yet principled two-stage estimation procedure that can evaluate the robustness of state-of-the-art models. We prove that our procedure enjoys several finite-sample convergence guarantees, including dimension-free convergence. Instead of overly conservative notions based on Rademacher complexities, our evaluation error depends on the dimension of Z only through the out-of-sample error in estimating the performance conditional on Z. On real datasets, we demonstrate that our method certifies the robustness of a model and prevents deployment of unreliable models. | [
"['Mike Li' 'Hongseok Namkoong' 'Shangzhou Xia']"
]
|
null | null | 2407.01318 | null | null | http://arxiv.org/pdf/2407.01318v1 | 2024-07-01T14:26:31Z | 2024-07-01T14:26:31Z | Deep Dive into MRI: Exploring Deep Learning Applications in 0.55T and 7T MRI | The development of magnetic resonance imaging (MRI) for medical imaging has provided a leap forward in diagnosis, providing a safe, non-invasive alternative to techniques involving ionising radiation exposure for diagnostic purposes. It was described by Bloch and Purcell in 1946, and it was not until 1980 that the first clinical application of MRI became available. Since that time, MRI has gone through many advances and has altered the way diagnosing procedures are performed. Due to its ability to improve constantly, MRI has become a commonly used practice among several specialisations in medicine. In particular, emerging 0.55T and 7T MRI technologies have demonstrated enhanced preservation of image detail and advanced tissue characterisation. This review examines the integration of deep learning (DL) techniques into these MRI modalities, surveying and exploring the studied applications. It highlights how DL contributes to 0.55T and 7T MRI data, showcasing the potential of DL in improving and refining these technologies. The review ends with a brief overview of how MRI technology will evolve in the coming years. | [
"['Ana Carolina Alves' 'André Ferreira' 'Behrus Puladi' 'Jan Egger'\n 'Victor Alves']"
]
|
null | null | 2407.01320 | null | null | http://arxiv.org/pdf/2407.01320v1 | 2024-07-01T14:26:48Z | 2024-07-01T14:26:48Z | Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning | Fine-tuning large pre-trained foundation models, such as the 175B GPT-3, has attracted more attention for downstream tasks recently. While parameter-efficient fine-tuning methods have been proposed and proven effective without retraining all model parameters, their performance is limited by the capacity of incremental modules, especially under constrained parameter budgets. To overcome this challenge, we propose CapaBoost, a simple yet effective strategy that enhances model capacity by leveraging low-rank updates through parallel weight modules in target layers. By applying static random masks to the shared weight matrix, CapaBoost constructs a diverse set of weight matrices, effectively increasing the rank of incremental weights without adding parameters. Notably, our approach can be seamlessly integrated into various existing parameter-efficient fine-tuning methods. We extensively validate the efficacy of CapaBoost through experiments on diverse downstream tasks, including natural language understanding, question answering, and image classification. Our results demonstrate significant improvements over baselines, without incurring additional computation or storage costs. Our code is available at https://github.com/LINs-lab/CapaBoost. | [
"['Haobo Song' 'Hao Zhao' 'Soumajit Majumder' 'Tao Lin']"
]
|
null | null | 2407.01327 | null | null | http://arxiv.org/pdf/2407.01327v1 | 2024-07-01T14:34:25Z | 2024-07-01T14:34:25Z | Gradient-based Class Weighting for Unsupervised Domain Adaptation in Dense Prediction Visual Tasks | In unsupervised domain adaptation (UDA), where models are trained on source data (e.g., synthetic) and adapted to target data (e.g., real-world) without target annotations, addressing the challenge of significant class imbalance remains an open issue. Despite considerable progress in bridging the domain gap, existing methods often experience performance degradation when confronted with highly imbalanced dense prediction visual tasks like semantic and panoptic segmentation. This discrepancy becomes especially pronounced due to the lack of equivalent priors between the source and target domains, turning class imbalanced techniques used for other areas (e.g., image classification) ineffective in UDA scenarios. This paper proposes a class-imbalance mitigation strategy that incorporates class-weights into the UDA learning losses, but with the novelty of estimating these weights dynamically through the loss gradient, defining a Gradient-based class weighting (GBW) learning. GBW naturally increases the contribution of classes whose learning is hindered by large-represented classes, and has the advantage of being able to automatically and quickly adapt to the iteration training outcomes, avoiding explicitly curricular learning patterns common in loss-weighing strategies. Extensive experimentation validates the effectiveness of GBW across architectures (convolutional and transformer), UDA strategies (adversarial, self-training and entropy minimization), tasks (semantic and panoptic segmentation), and datasets (GTA and Synthia). Analysing the source of advantage, GBW consistently increases the recall of low represented classes. | [
"['Roberto Alcover-Couso' 'Marcos Escudero-Viñolo' 'Juan C. SanMiguel'\n 'Jesus Bescós']"
]
|
null | null | 2407.01331 | null | null | http://arxiv.org/pdf/2407.01331v1 | 2024-07-01T14:39:41Z | 2024-07-01T14:39:41Z | Restyling Unsupervised Concept Based Interpretable Networks with Generative Models | Developing inherently interpretable models for prediction has gained prominence in recent years. A subclass of these models, wherein the interpretable network relies on learning high-level concepts, are valued because of closeness of concept representations to human communication. However, the visualization and understanding of the learnt unsupervised dictionary of concepts encounters major limitations, specially for large-scale images. We propose here a novel method that relies on mapping the concept features to the latent space of a pretrained generative model. The use of a generative model enables high quality visualization, and naturally lays out an intuitive and interactive procedure for better interpretation of the learnt concepts. Furthermore, leveraging pretrained generative models has the additional advantage of making the training of the system more efficient. We quantitatively ascertain the efficacy of our method in terms of accuracy of the interpretable prediction network, fidelity of reconstruction, as well as faithfulness and consistency of learnt concepts. The experiments are conducted on multiple image recognition benchmarks for large-scale images. Project page available at https://jayneelparekh.github.io/VisCoIN_project_page/ | [
"['Jayneel Parekh' 'Quentin Bouniot' 'Pavlo Mozharovskyi' 'Alasdair Newson'\n \"Florence d'Alché-Buc\"]"
]
|
null | null | 2407.01333 | null | null | http://arxiv.org/pdf/2407.01333v1 | 2024-07-01T14:41:18Z | 2024-07-01T14:41:18Z | Deep Reinforcement Learning for Adverse Garage Scenario Generation | Autonomous vehicles need to travel over 11 billion miles to ensure their safety. Therefore, the importance of simulation testing before real-world testing is self-evident. In recent years, the release of 3D simulators for autonomous driving, represented by Carla and CarSim, marks the transition of autonomous driving simulation testing environments from simple 2D overhead views to complex 3D models. During simulation testing, experimenters need to build static scenes and dynamic traffic flows, pedestrian flows, and other experimental elements to construct experimental scenarios. When building static scenes in 3D simulators, experimenters often need to manually construct 3D models, set parameters and attributes, which is time-consuming and labor-intensive. This thesis proposes an automated program generation framework. Based on deep reinforcement learning, this framework can generate different 2D ground script codes, on which 3D model files and map model files are built. The generated 3D ground scenes are displayed in the Carla simulator, where experimenters can use this scene for navigation algorithm simulation testing. | [
"['Kai Li']"
]
|
null | null | 2407.01343 | null | null | http://arxiv.org/pdf/2407.01343v1 | 2024-07-01T14:51:29Z | 2024-07-01T14:51:29Z | Coordination Failure in Cooperative Offline MARL | Offline multi-agent reinforcement learning (MARL) leverages static datasets of experience to learn optimal multi-agent control. However, learning from static data presents several unique challenges to overcome. In this paper, we focus on coordination failure and investigate the role of joint actions in multi-agent policy gradients with offline data, focusing on a common setting we refer to as the 'Best Response Under Data' (BRUD) approach. By using two-player polynomial games as an analytical tool, we demonstrate a simple yet overlooked failure mode of BRUD-based algorithms, which can lead to catastrophic coordination failure in the offline setting. Building on these insights, we propose an approach to mitigate such failure, by prioritising samples from the dataset based on joint-action similarity during policy learning and demonstrate its effectiveness in detailed experiments. More generally, however, we argue that prioritised dataset sampling is a promising area for innovation in offline MARL that can be combined with other effective approaches such as critic and policy regularisation. Importantly, our work shows how insights drawn from simplified, tractable games can lead to useful, theoretically grounded insights that transfer to more complex contexts. A core dimension of offering is an interactive notebook, from which almost all of our results can be reproduced, in a browser. | [
"['Callum Rhys Tilbury' 'Claude Formanek' 'Louise Beyers'\n 'Jonathan P. Shock' 'Arnu Pretorius']"
]
|
null | null | 2407.01356 | null | null | http://arxiv.org/pdf/2407.01356v1 | 2024-07-01T15:10:55Z | 2024-07-01T15:10:55Z | tPARAFAC2: Tracking evolving patterns in (incomplete) temporal data | Tensor factorizations have been widely used for the task of uncovering patterns in various domains. Often, the input is time-evolving, shifting the goal to tracking the evolution of underlying patterns instead. To adapt to this more complex setting, existing methods incorporate temporal regularization but they either have overly constrained structural requirements or lack uniqueness which is crucial for interpretation. In this paper, in order to capture the underlying evolving patterns, we introduce t(emporal)PARAFAC2 which utilizes temporal smoothness regularization on the evolving factors. We propose an algorithmic framework that employs Alternating Optimization (AO) and the Alternating Direction Method of Multipliers (ADMM) to fit the model. Furthermore, we extend the algorithmic framework to the case of partially observed data. Our numerical experiments on both simulated and real datasets demonstrate the effectiveness of the temporal smoothness regularization, in particular, in the case of data with missing entries. We also provide an extensive comparison of different approaches for handling missing data within the proposed framework. | [
"['Christos Chatzis' 'Carla Schenker' 'Max Pfeffer' 'Evrim Acar']"
]
|
null | null | 2407.01371 | null | null | http://arxiv.org/pdf/2407.01371v1 | 2024-07-01T15:24:34Z | 2024-07-01T15:24:34Z | Binary Losses for Density Ratio Estimation | Estimating the ratio of two probability densities from finitely many observations of the densities, is a central problem in machine learning and statistics. A large class of methods constructs estimators from binary classifiers which distinguish observations from the two densities. However, the error of these constructions depends on the choice of the binary loss function, raising the question of which loss function to choose based on desired error properties. In this work, we start from prescribed error measures in a class of Bregman divergences and characterize all loss functions that lead to density ratio estimators with a small error. Our characterization provides a simple recipe for constructing loss functions with certain properties, such as loss functions that prioritize an accurate estimation of large values. This contrasts with classical loss functions, such as the logistic loss or boosting loss, which prioritize accurate estimation of small values. We provide numerical illustrations with kernel methods and test their performance in applications of parameter selection for deep domain adaptation. | [
"['Werner Zellinger']"
]
|
null | null | 2407.01376 | null | null | http://arxiv.org/pdf/2407.01376v1 | 2024-07-01T15:29:45Z | 2024-07-01T15:29:45Z | Badllama 3: removing safety finetuning from Llama 3 in minutes | We show that extensive LLM safety fine-tuning is easily subverted when an attacker has access to model weights. We evaluate three state-of-the-art fine-tuning methods-QLoRA, ReFT, and Ortho-and show how algorithmic advances enable constant jailbreaking performance with cuts in FLOPs and optimisation power. We strip safety fine-tuning from Llama 3 8B in one minute and Llama 3 70B in 30 minutes on a single GPU, and sketch ways to reduce this further. | [
"['Dmitrii Volkov']"
]
|
null | null | 2407.01378 | null | null | http://arxiv.org/pdf/2407.01378v1 | 2024-07-01T15:32:28Z | 2024-07-01T15:32:28Z | Beyond Throughput and Compression Ratios: Towards High End-to-end Utility of Gradient Compression | Gradient aggregation has long been identified as a major bottleneck in today's large-scale distributed machine learning training systems. One promising solution to mitigate such bottlenecks is gradient compression, directly reducing communicated gradient data volume. However, in practice, many gradient compression schemes do not achieve acceleration of the training process while also preserving accuracy. In this work, we identify several common issues in previous gradient compression systems and evaluation methods. These issues include excessive computational overheads; incompatibility with all-reduce; and inappropriate evaluation metrics, such as not using an end-to-end metric or using a 32-bit baseline instead of a 16-bit baseline. We propose several general design and evaluation techniques to address these issues and provide guidelines for future work. Our preliminary evaluation shows that our techniques enhance the system's performance and provide a clearer understanding of the end-to-end utility of gradient compression methods. | [
"['Wenchen Han' 'Shay Vargaftik' 'Michael Mitzenmacher' 'Brad Karp'\n 'Ran Ben Basat']"
]
|
null | null | 2407.01392 | null | null | http://arxiv.org/pdf/2407.01392v3 | 2024-07-04T04:51:10Z | 2024-07-01T15:43:25Z | Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion | This paper presents Diffusion Forcing, a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels. We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens without fully diffusing past ones. Our approach is shown to combine the strengths of next-token prediction models, such as variable-length generation, with the strengths of full-sequence diffusion models, such as the ability to guide sampling to desirable trajectories. Our method offers a range of additional capabilities, such as (1) rolling-out sequences of continuous tokens, such as video, with lengths past the training horizon, where baselines diverge and (2) new sampling and guiding schemes that uniquely profit from Diffusion Forcing's variable-horizon and causal architecture, and which lead to marked performance gains in decision-making and planning tasks. In addition to its empirical success, our method is proven to optimize a variational lower bound on the likelihoods of all subsequences of tokens drawn from the true joint distribution. Project website: https://boyuan.space/diffusion-forcing | [
"['Boyuan Chen' 'Diego Marti Monso' 'Yilun Du' 'Max Simchowitz'\n 'Russ Tedrake' 'Vincent Sitzmann']"
]
|
null | null | 2407.01394 | null | null | http://arxiv.org/pdf/2407.01394v2 | 2024-07-12T14:44:33Z | 2024-07-01T15:46:45Z | Gloss2Text: Sign Language Gloss translation using LLMs and Semantically Aware Label Smoothing | Sign language translation from video to spoken text presents unique challenges owing to the distinct grammar, expression nuances, and high variation of visual appearance across different speakers and contexts. The intermediate gloss annotations of videos aim to guide the translation process. In our work, we focus on the Gloss2Text translation stage and propose several advances by leveraging pre-trained large language models (LLMs), data augmentation, and a novel label-smoothing loss function exploiting gloss translation ambiguities, significantly improving the performance of state-of-the-art approaches. Through extensive experiments and ablation studies on the PHOENIX Weather 2014T dataset, our approach surpasses state-of-the-art performance in Gloss2Text translation, indicating its efficacy in addressing sign language translation and suggesting promising avenues for future research and development. | [
"['Pooya Fayyazsanavi' 'Antonios Anastasopoulos' 'Jana Košecká']"
]
|
null | null | 2407.01402 | null | null | http://arxiv.org/pdf/2407.01402v1 | 2024-07-01T15:53:03Z | 2024-07-01T15:53:03Z | Superconstant Inapproximability of Decision Tree Learning | We consider the task of properly PAC learning decision trees with queries. Recent work of Koch, Strassle, and Tan showed that the strictest version of this task, where the hypothesis tree $T$ is required to be optimally small, is NP-hard. Their work leaves open the question of whether the task remains intractable if $T$ is only required to be close to optimal, say within a factor of 2, rather than exactly optimal. We answer this affirmatively and show that the task indeed remains NP-hard even if $T$ is allowed to be within any constant factor of optimal. More generally, our result allows for a smooth tradeoff between the hardness assumption and the inapproximability factor. As Koch et al.'s techniques do not appear to be amenable to such a strengthening, we first recover their result with a new and simpler proof, which we couple with a new XOR lemma for decision trees. While there is a large body of work on XOR lemmas for decision trees, our setting necessitates parameters that are extremely sharp, and are not known to be attainable by existing XOR lemmas. Our work also carries new implications for the related problem of Decision Tree Minimization. | [
"['Caleb Koch' 'Carmen Strassle' 'Li-Yang Tan']"
]
|
null | null | 2407.01403 | null | null | http://arxiv.org/pdf/2407.01403v1 | 2024-07-01T15:53:29Z | 2024-07-01T15:53:29Z | Optimization of Retrieval-Augmented Generation Context with Outlier
Detection | In this paper, we focus on methods to reduce the size and improve the quality of the prompt context required for question-answering systems. Attempts to increase the number of retrieved chunked documents and thereby enlarge the context related to the query can significantly complicate the processing and decrease the performance of a Large Language Model (LLM) when generating responses to queries. It is well known that a large set of documents retrieved from a database in response to a query may contain irrelevant information, which often leads to hallucinations in the resulting answers. Our goal is to select the most semantically relevant documents, treating the discarded ones as outliers. We propose and evaluate several methods for identifying outliers by creating features that utilize the distances of embedding vectors, retrieved from the vector database, to both the centroid and the query vectors. The methods were evaluated by comparing the similarities of the retrieved LLM responses to ground-truth answers obtained using the OpenAI GPT-4o model. It was found that the greatest improvements were achieved with increasing complexity of the questions and answers. | [
"['Vitaly Bulgakov']"
]
|
null | null | 2407.01408 | null | null | http://arxiv.org/pdf/2407.01408v1 | 2024-07-01T15:58:20Z | 2024-07-01T15:58:20Z | Semantic Compositions Enhance Vision-Language Contrastive Learning | In the field of vision-language contrastive learning, models such as CLIP capitalize on matched image-caption pairs as positive examples and leverage within-batch non-matching pairs as negatives. This approach has led to remarkable outcomes in zero-shot image classification, cross-modal retrieval, and linear evaluation tasks. We show that the zero-shot classification and retrieval capabilities of CLIP-like models can be improved significantly through the introduction of semantically composite examples during pretraining. Inspired by CutMix in vision categorization, we create semantically composite image-caption pairs by merging elements from two distinct instances in the dataset via a novel procedure. Our method fuses the captions and blends 50% of each image to form a new composite sample. This simple technique (termed CLIP-C for CLIP Compositions), devoid of any additional computational overhead or increase in model parameters, significantly improves zero-shot image classification and cross-modal retrieval. The benefits of CLIP-C are particularly pronounced in settings with relatively limited pretraining data. | [
"['Maxwell Aladago' 'Lorenzo Torresani' 'Soroush Vosoughi']"
]
|
null | null | 2407.01418 | null | null | http://arxiv.org/pdf/2407.01418v1 | 2024-07-01T16:08:37Z | 2024-07-01T16:08:37Z | RoboPack: Learning Tactile-Informed Dynamics Models for Dense Packing | Tactile feedback is critical for understanding the dynamics of both rigid and deformable objects in many manipulation tasks, such as non-prehensile manipulation and dense packing. We introduce an approach that combines visual and tactile sensing for robotic manipulation by learning a neural, tactile-informed dynamics model. Our proposed framework, RoboPack, employs a recurrent graph neural network to estimate object states, including particles and object-level latent physics information, from historical visuo-tactile observations and to perform future state predictions. Our tactile-informed dynamics model, learned from real-world data, can solve downstream robotics tasks with model-predictive control. We demonstrate our approach on a real robot equipped with a compliant Soft-Bubble tactile sensor on non-prehensile manipulation and dense packing tasks, where the robot must infer the physics properties of objects from direct and indirect interactions. Trained on only an average of 30 minutes of real-world interaction data per task, our model can perform online adaptation and make touch-informed predictions. Through extensive evaluations in both long-horizon dynamics prediction and real-world manipulation, our method demonstrates superior effectiveness compared to previous learning-based and physics-based simulation systems. | [
"['Bo Ai' 'Stephen Tian' 'Haochen Shi' 'Yixuan Wang' 'Cheston Tan'\n 'Yunzhu Li' 'Jiajun Wu']"
]
|
null | null | 2407.01419 | null | null | http://arxiv.org/pdf/2407.01419v1 | 2024-07-01T16:09:07Z | 2024-07-01T16:09:07Z | Neurovascular Segmentation in sOCT with Deep Learning and Synthetic
Training Data | Microvascular anatomy is known to be involved in various neurological disorders. However, understanding these disorders is hindered by the lack of imaging modalities capable of capturing the comprehensive three-dimensional vascular network structure at microscopic resolution. With a lateral resolution of $\leq$20 {\textmu}m and ability to reconstruct large tissue blocks up to tens of cubic centimeters, serial-section optical coherence tomography (sOCT) is well suited for this task. This method uses intrinsic optical properties to visualize the vessels and therefore does not possess a specific contrast, which complicates the extraction of accurate vascular models. The performance of traditional vessel segmentation methods is heavily degraded in the presence of substantial noise and imaging artifacts and is sensitive to domain shifts, while convolutional neural networks (CNNs) require extensive labeled data and are also sensitive to the precise intensity characteristics of the data that they are trained on. Building on the emerging field of synthesis-based training, this study demonstrates a synthesis engine for neurovascular segmentation in sOCT images. Characterized by minimal priors and high variance sampling, our highly generalizable method tested on five distinct sOCT acquisitions eliminates the need for manual annotations while attaining human-level precision. Our approach comprises two phases: label synthesis and label-to-image transformation. We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models. | [
"['Etienne Chollet' 'Yaël Balbastre' 'Chiara Mauri' 'Caroline Magnain'\n 'Bruce Fischl' 'Hui Wang']"
]
|
null | null | 2407.01423 | null | null | http://arxiv.org/pdf/2407.01423v1 | 2024-07-01T16:13:54Z | 2024-07-01T16:13:54Z | FairLay-ML: Intuitive Debugging of Fairness in Data-Driven
Social-Critical Software | Data-driven software solutions have significantly been used in critical domains with significant socio-economic, legal, and ethical implications. The rapid adoptions of data-driven solutions, however, pose major threats to the trustworthiness of automated decision-support software. A diminished understanding of the solution by the developer and historical/current biases in the data sets are primary challenges. To aid data-driven software developers and end-users, we present FairLay-ML, a debugging tool to test and explain the fairness implications of data-driven solutions. FairLay-ML visualizes the logic of datasets, trained models, and decisions for a given data point. In addition, it trains various models with varying fairness-accuracy trade-offs. Crucially, FairLay-ML incorporates counterfactual fairness testing that finds bugs beyond the development datasets. We conducted two studies through FairLay-ML that allowed us to measure false positives/negatives in prevalent counterfactual testing and understand the human perception of counterfactual test cases in a class survey. FairLay-ML and its benchmarks are publicly available at~\url{https://github.com/Pennswood/FairLay-ML}. The live version of the tool is available at~\url{https://fairlayml-v2.streamlit.app/}. We provide a video demo of the tool at https://youtu.be/wNI9UWkywVU?t=127 | [
"['Normen Yu' 'Luciana Carreon' 'Gang Tan' 'Saeid Tizpaz-Niari']"
]
|
null | null | 2407.01433 | null | null | http://arxiv.org/pdf/2407.01433v1 | 2024-07-01T16:23:45Z | 2024-07-01T16:23:45Z | POST: Email Archival, Processing and Flagging Stack for Incident
Responders | Phishing is one of the main points of compromise, with email security and awareness estimated at $50-100B in 2022. There is a great need for email forensics capability to quickly search for malicious content. A novel solution, POST, is proposed. POST is an API-driven serverless email archival, processing, and flagging workflow for both large and small organizations that collects and parses all email, flags emails using state-of-the-art Natural Language Processing and Machine Learning, allows full email searching on every aspect of an email, and provides a cost savings of up to 68.6%. | [
"['Jeffrey Fairbanks']"
]
|
null | null | 2407.01437 | null | null | http://arxiv.org/pdf/2407.01437v2 | 2024-07-12T17:20:34Z | 2024-07-01T16:32:16Z | Needle in the Haystack for Memory Based Large Language Models | Current large language models (LLMs) often perform poorly on simple fact retrieval tasks. Here we investigate if coupling a dynamically adaptable external memory to a LLM can alleviate this problem. For this purpose, we test Larimar, a recently proposed language model architecture which uses an external associative memory, on long-context recall tasks including passkey and needle-in-the-haystack tests. We demonstrate that the external memory of Larimar, which allows fast write and read of an episode of text samples, can be used at test time to handle contexts much longer than those seen during training. We further show that the latent readouts from the memory (to which long contexts are written) control the decoder towards generating correct outputs, with the memory stored off of the GPU. Compared to existing transformer-based LLM architectures for long-context recall tasks that use larger parameter counts or modified attention mechanisms, a relatively smaller size Larimar is able to maintain strong performance without any task-specific training or training on longer contexts. | [
"['Elliot Nelson' 'Georgios Kollias' 'Payel Das' 'Subhajit Chaudhury'\n 'Soham Dan']"
]
|
null | null | 2407.01440 | null | null | http://arxiv.org/pdf/2407.01440v1 | 2024-07-01T16:32:49Z | 2024-07-01T16:32:49Z | GAT-Steiner: Rectilinear Steiner Minimal Tree Prediction Using GNNs | The Rectilinear Steiner Minimum Tree (RSMT) problem is a fundamental problem in VLSI placement and routing and is known to be NP-hard. Traditional RSMT algorithms spend a significant amount of time on finding Steiner points to reduce the total wire length, or use heuristics to approximate it, producing sub-optimal results. We show that Graph Neural Networks (GNNs) can be used to predict optimal Steiner points in RSMTs with high accuracy and can be parallelized on GPUs. In this paper, we propose GAT-Steiner, a graph attention network model that correctly predicts 99.846% of the nets in the ISPD19 benchmark with an average increase in wire length of only 0.480% on suboptimal wire length nets. On randomly generated benchmarks, GAT-Steiner correctly predicts 99.942% with an average increase in wire length of only 0.420% on suboptimal wire length nets. | [
"['Bugra Onal' 'Eren Dogan' 'Muhammad Hadir Khan' 'Matthew R. Guthaus']"
]
|
null | null | 2407.01445 | null | null | http://arxiv.org/pdf/2407.01445v1 | 2024-07-01T16:37:18Z | 2024-07-01T16:37:18Z | FastCLIP: A Suite of Optimization Techniques to Accelerate CLIP Training
with Limited Resources | Existing studies of training state-of-the-art Contrastive Language-Image Pretraining (CLIP) models on large-scale data involve hundreds of or even thousands of GPUs due to the requirement of a large batch size. However, such a large amount of resources is not accessible to most people. While advanced compositional optimization techniques for optimizing global contrastive losses have been demonstrated effective for removing the requirement of large batch size, their performance on large-scale data remains underexplored and not optimized. To bridge the gap, this paper explores several aspects of CLIP training with limited resources (e.g., up to tens of GPUs). First, we introduce FastCLIP, a general CLIP training framework built on advanced compositional optimization techniques while designed and optimized for the distributed setting. Our framework is equipped with an efficient gradient reduction strategy to reduce communication overhead. Second, to further boost training efficiency, we investigate three components of the framework from an optimization perspective: the schedule of the inner learning rate, the update rules of the temperature parameter and the model parameters, respectively. Experiments on different strategies for each component shed light on how to conduct CLIP training more efficiently. Finally, we benchmark the performance of FastCLIP and the state-of-the-art training baseline (OpenCLIP) on different compute scales up to 32 GPUs on 8 nodes, and three data scales ranging from 2.7 million, 9.1 million to 315 million image-text pairs to demonstrate the significant improvement of FastCLIP in the resource-limited setting. We release the code of FastCLIP at https://github.com/Optimization-AI/fast_clip . | [
"['Xiyuan Wei' 'Fanjiang Ye' 'Ori Yonay' 'Xingyu Chen' 'Baixi Sun'\n 'Dingwen Tao' 'Tianbao Yang']"
]
|
null | null | 2407.01456 | null | null | http://arxiv.org/pdf/2407.01456v1 | 2024-06-28T02:20:54Z | 2024-06-28T02:20:54Z | Information-Theoretic Foundations for Neural Scaling Laws | Neural scaling laws aim to characterize how out-of-sample error behaves as a function of model and training dataset size. Such scaling laws guide the allocation of computational resources between model and data processing to minimize error. However, existing theoretical support for neural scaling laws lacks rigor and clarity, entangling the roles of information and optimization. In this work, we develop rigorous information-theoretic foundations for neural scaling laws. This allows us to characterize scaling laws for data generated by a two-layer neural network of infinite width. We observe that the optimal relation between data and model size is linear, up to logarithmic factors, corroborating large-scale empirical investigations. Concise yet general results of the kind we establish may bring clarity to this topic and inform future investigations. | [
"['Hong Jun Jeon' 'Benjamin Van Roy']"
]
|
null | null | 2407.01458 | null | null | http://arxiv.org/pdf/2407.01458v2 | 2024-07-02T15:17:50Z | 2024-07-01T16:53:00Z | Contractual Reinforcement Learning: Pulling Arms with Invisible Hands | The agency problem emerges in today's large scale machine learning tasks, where the learners are unable to direct content creation or enforce data collection. In this work, we propose a theoretical framework for aligning economic interests of different stakeholders in the online learning problems through contract design. The problem, termed \emph{contractual reinforcement learning}, naturally arises from the classic model of Markov decision processes, where a learning principal seeks to optimally influence the agent's action policy for their common interests through a set of payment rules contingent on the realization of next state. For the planning problem, we design an efficient dynamic programming algorithm to determine the optimal contracts against the far-sighted agent. For the learning problem, we introduce a generic design of no-regret learning algorithms to untangle the challenges from robust design of contracts to the balance of exploration and exploitation, reducing the complexity analysis to the construction of efficient search algorithms. For several natural classes of problems, we design tailored search algorithms that provably achieve $\tilde{O}(\sqrt{T})$ regret. We also present an algorithm with $\tilde{O}(T^{2/3})$ for the general problem that improves the existing analysis in online contract design with mild technical assumptions. | [
"['Jibang Wu' 'Siyu Chen' 'Mengdi Wang' 'Huazheng Wang' 'Haifeng Xu']"
]
|
null | null | 2407.01459 | null | null | http://arxiv.org/pdf/2407.01459v1 | 2024-07-01T16:54:07Z | 2024-07-01T16:54:07Z | On Implications of Scaling Laws on Feature Superposition | Using results from scaling laws, this theoretical note argues that the following two statements cannot be simultaneously true: 1. Superposition hypothesis where sparse features are linearly represented across a layer is a complete theory of feature representation. 2. Features are universal, meaning two models trained on the same data and achieving equal performance will learn identical features. | [
"['Pavan Katta']"
]
|
null | null | 2407.01464 | null | null | http://arxiv.org/pdf/2407.01464v1 | 2024-06-26T16:13:11Z | 2024-06-26T16:13:11Z | Graph Neural Network as Computationally Efficient Emulator of Ice-sheet
and Sea-level System Model (ISSM) | The Ice-sheet and Sea-level System Model (ISSM) provides solutions for Stokes equations relevant to ice sheet dynamics by employing finite element and fine mesh adaption. However, since its finite element method is compatible only with Central Processing Units (CPUs), the ISSM is limited in its ability to further reduce computational time. Thus, by taking advantage of Graphics Processing Units (GPUs), we design a graph convolutional network (GCN) as a fast emulator for ISSM. The GCN is trained and tested using the 20-year transient ISSM simulations in the Pine Island Glacier (PIG). The GCN reproduces ice thickness and velocity with a correlation coefficient greater than 0.998, outperforming the traditional convolutional neural network (CNN). Additionally, the GCN is 34 times faster than the CPU-based ISSM modeling. The GPU-based GCN emulator allows us to predict how the PIG will change in the future under different melting rate scenarios with high fidelity and much faster computational time. | [
"['Younghyun Koo' 'Maryam Rahnemoonfar']"
]
|
null | null | 2407.01467 | null | null | http://arxiv.org/pdf/2407.01467v1 | 2024-06-25T14:28:05Z | 2024-06-25T14:28:05Z | The Balanced-Pairwise-Affinities Feature Transform | The Balanced-Pairwise-Affinities (BPA) feature transform is designed to upgrade the features of a set of input items to facilitate downstream matching or grouping related tasks. The transformed set encodes a rich representation of high order relations between the input features. A particular min-cost-max-flow fractional matching problem, whose entropy regularized version can be approximated by an optimal transport (OT) optimization, leads to a transform which is efficient, differentiable, equivariant, parameterless and probabilistically interpretable. While the Sinkhorn OT solver has been adapted extensively in many contexts, we use it differently by minimizing the cost between a set of features to $itself$ and using the transport plan's $rows$ as the new representation. Empirically, the transform is highly effective and flexible in its use and consistently improves networks it is inserted into, in a variety of tasks and training schemes. We demonstrate state-of-the-art results in few-shot classification, unsupervised image clustering and person re-identification. Code is available at \url{github.com/DanielShalam/BPA}. | [
"['Daniel Shalam' 'Simon Korman']"
]
|
null | null | 2407.01475 | null | null | http://arxiv.org/pdf/2407.01475v1 | 2024-07-01T17:07:33Z | 2024-07-01T17:07:33Z | Exploring FPGA designs for MX and beyond | A number of companies recently worked together to release the new Open Compute Project MX standard for low-precision computation, aimed at efficient neural network implementation. In this paper, we describe and evaluate the first open-source FPGA implementation of the arithmetic defined in the standard. Our designs fully support all the standard's concrete formats for conversion into and out of MX formats and for the standard-defined arithmetic operations, as well as arbitrary fixed-point and floating-point formats. Certain elements of the standard are left as implementation-defined, and we present the first concrete FPGA-inspired choices for these elements, which we outline in the paper. Our library of optimized hardware components is available open source, and can be used to build larger systems. For this purpose, we also describe and release an open-source Pytorch library for quantization into the new standard, integrated with the Brevitas library so that the community can develop novel neural network designs quantized with MX formats in mind. We demonstrate the usability and efficacy of our libraries via the implementation of example neural networks such as ResNet-18 on the ImageNet ILSVRC12 dataset. Our testing shows that MX is very effective for formats such as INT5 or FP6 which are not natively supported on GPUs. This gives FPGAs an advantage as they have the flexibility to implement a custom datapath and take advantage of the smaller area footprints offered by these formats. | [
"['Ebby Samson' 'Naveen Mellempudi' 'Wayne Luk' 'George A. Constantinides']"
]
|
null | null | 2407.01476 | null | null | http://arxiv.org/pdf/2407.01476v1 | 2024-07-01T17:07:55Z | 2024-07-01T17:07:55Z | Tree Search for Language Model Agents | Autonomous agents powered by language models (LMs) have demonstrated promise in their ability to perform decision-making tasks such as web automation. However, a key limitation remains: LMs, primarily optimized for natural language understanding and generation, struggle with multi-step reasoning, planning, and using environmental feedback when attempting to solve realistic computer tasks. Towards addressing this, we propose an inference-time search algorithm for LM agents to explicitly perform exploration and multi-step planning in interactive web environments. Our approach is a form of best-first tree search that operates within the actual environment space, and is complementary with most existing state-of-the-art agents. It is the first tree search algorithm for LM agents that shows effectiveness on realistic web tasks. On the challenging VisualWebArena benchmark, applying our search algorithm on top of a GPT-4o agent yields a 39.7% relative increase in success rate compared to the same baseline without search, setting a state-of-the-art success rate of 26.4%. On WebArena, search also yields a 28.0% relative improvement over a baseline agent, setting a competitive success rate of 19.2%. Our experiments highlight the effectiveness of search for web agents, and we demonstrate that performance scales with increased test-time compute. We conduct a thorough analysis of our results to highlight improvements from search, limitations, and promising directions for future work. Our code and models are publicly released at https://jykoh.com/search-agents. | [
"['Jing Yu Koh' 'Stephen McAleer' 'Daniel Fried' 'Ruslan Salakhutdinov']"
]
|
null | null | 2407.01479 | null | null | http://arxiv.org/pdf/2407.01479v1 | 2024-07-01T17:09:43Z | 2024-07-01T17:09:43Z | EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data
Efficient Learning | Building effective imitation learning methods that enable robots to learn from limited data and still generalize across diverse real-world environments is a long-standing problem in robot learning. We propose EquiBot, a robust, data-efficient, and generalizable approach for robot manipulation task learning. Our approach combines SIM(3)-equivariant neural network architectures with diffusion models. This ensures that our learned policies are invariant to changes in scale, rotation, and translation, enhancing their applicability to unseen environments while retaining the benefits of diffusion-based policy learning such as multi-modality and robustness. We show in a suite of 6 simulation tasks that our proposed method reduces the data requirements and improves generalization to novel scenarios. In the real world, we show, across a total of 10 variations of 6 mobile manipulation tasks, that our method can easily generalize to novel objects and scenes after learning from just 5 minutes of human demonstrations in each task. | [
"['Jingyun Yang' 'Zi-ang Cao' 'Congyue Deng' 'Rika Antonova' 'Shuran Song'\n 'Jeannette Bohg']"
]
|
null | null | 2407.01489 | null | null | http://arxiv.org/pdf/2407.01489v1 | 2024-07-01T17:24:45Z | 2024-07-01T17:24:45Z | Agentless: Demystifying LLM-based Software Engineering Agents | Recent advancements in large language models (LLMs) have significantly advanced the automation of software development tasks, including code synthesis, program repair, and test generation. More recently, researchers and industry practitioners have developed various autonomous LLM agents to perform end-to-end software development tasks. These agents are equipped with the ability to use tools, run commands, observe feedback from the environment, and plan for future actions. However, the complexity of these agent-based approaches, together with the limited abilities of current LLMs, raises the following question: Do we really have to employ complex autonomous software agents? To attempt to answer this question, we build Agentless -- an agentless approach to automatically solve software development problems. Compared to the verbose and complex setup of agent-based approaches, Agentless employs a simplistic two-phase process of localization followed by repair, without letting the LLM decide future actions or operate with complex tools. Our results on the popular SWE-bench Lite benchmark show that surprisingly the simplistic Agentless is able to achieve both the highest performance (27.33%) and lowest cost ($0.34) compared with all existing open-source software agents! Furthermore, we manually classified the problems in SWE-bench Lite and found problems with exact ground truth patch or insufficient/misleading issue descriptions. As such, we construct SWE-bench Lite-S by excluding such problematic issues to perform more rigorous evaluation and comparison. Our work highlights the current overlooked potential of a simple, interpretable technique in autonomous software development. 
We hope Agentless will help reset the baseline, starting point, and horizon for autonomous software agents, and inspire future work along this crucial direction. | [
"['Chunqiu Steven Xia' 'Yinlin Deng' 'Soren Dunn' 'Lingming Zhang']"
]
|
null | null | 2407.01490 | null | null | http://arxiv.org/pdf/2407.01490v1 | 2024-07-01T17:26:21Z | 2024-07-01T17:26:21Z | LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable
Objectives | The widespread adoption of synthetic data raises new questions about how models generating the data can influence other large language models (LLMs) via distilled data. To start, our work exhaustively characterizes the impact of passive inheritance of model properties by systematically studying the consequences of synthetic data integration. We provide one of the most comprehensive studies to-date of how the source of synthetic data shapes models' internal biases, calibration and generations' textual attributes and preferences. We find that models are surprisingly sensitive towards certain attributes even when the synthetic data prompts appear "neutral", which invites the question of whether this sensitivity can be exploited for good. Our findings raise the question: can we explicitly steer the models towards the properties we want at test time by exploiting the data generation process? This would have historically been considered infeasible due to the cost of collecting data with a specific characteristic or objective in mind. However, improvement in the quality of synthetic data, as well as a shift towards general-purpose models designed to follow a diverse way of instructions, means this question is timely. We propose active inheritance as a term to describe intentionally constraining synthetic data according to a non-differentiable objective. We demonstrate how active inheritance can steer the generation profiles of models towards desirable non-differentiable attributes, e.g. high lexical diversity or low toxicity. | [
"['Luísa Shimabucoro' 'Sebastian Ruder' 'Julia Kreutzer' 'Marzieh Fadaee'\n 'Sara Hooker']"
]
|
null | null | 2407.01496 | null | null | http://arxiv.org/pdf/2407.01496v1 | 2024-07-01T17:42:29Z | 2024-07-01T17:42:29Z | Fast Iterative Solver For Neural Network Method: II. 1D
Diffusion-Reaction Problems And Data Fitting | This paper expands the damped block Newton (dBN) method introduced recently in [4] for 1D diffusion-reaction equations and least-squares data fitting problems. To determine the linear parameters (the weights and bias of the output layer) of the neural network (NN), the dBN method requires solving systems of linear equations involving the mass matrix. While the mass matrix for local hat basis functions is tri-diagonal and well-conditioned, the mass matrix for NNs is dense and ill-conditioned. For example, the condition number of the NN mass matrix for quasi-uniform meshes is at least ${\cal O}(n^4)$. We present a factorization of the mass matrix that enables solving the systems of linear equations in ${\cal O}(n)$ operations. To determine the non-linear parameters (the weights and bias of the hidden layer), one step of a damped Newton method is employed at each iteration. A Gauss-Newton method is used in place of Newton for the instances in which the Hessian matrices are singular. This modified dBN is referred to as dBGN. For both methods, the computational cost per iteration is ${\cal O}(n)$. Numerical results demonstrate the ability of dBN and dBGN to efficiently achieve accurate results and outperform BFGS for select examples. | [
"['Zhiqiang Cai' 'Anastassia Doktorova' 'Robert D. Falgout' 'César Herrera']"
]
|
null | null | 2407.01499 | null | null | http://arxiv.org/pdf/2407.01499v1 | 2024-07-01T17:43:45Z | 2024-07-01T17:43:45Z | Pictures Of MIDI: Controlled Music Generation via Graphical Prompts for
Image-Based Diffusion Inpainting | Recent years have witnessed significant progress in generative models for music, featuring diverse architectures that balance output quality, diversity, speed, and user control. This study explores a user-friendly graphical interface enabling the drawing of masked regions for inpainting by an Hourglass Diffusion Transformer (HDiT) model trained on MIDI piano roll images. To enhance note generation in specified areas, masked regions can be "repainted" with extra noise. The non-latent HDiT's linear scaling with pixel count allows efficient generation in pixel space, providing intuitive and interpretable controls such as masking throughout the network and removing the need to operate in compressed latent spaces such as those provided by pretrained autoencoders. We demonstrate that, in addition to inpainting of melodies, accompaniment, and continuations, the use of repainting can help increase note density yielding musical structures closely matching user specifications such as rising, falling, or diverging melody and/or accompaniment, even when these lie outside the typical training data distribution. We achieve performance on par with prior results while operating at longer context windows, with no autoencoder, and can enable complex geometries for inpainting masks, increasing the options for machine-assisted composers to control the generated music. | [
"['Scott H. Hawley']"
]
|
null | null | 2407.01501 | null | null | http://arxiv.org/pdf/2407.01501v1 | 2024-07-01T17:47:31Z | 2024-07-01T17:47:31Z | Online Learning of Temporal Dependencies for Sustainable Foraging
Problem | The sustainable foraging problem is a dynamic environment testbed for exploring the forms of agent cognition in dealing with social dilemmas in a multi-agent setting. The agents need to resist the temptation of individual rewards through foraging and choose the collective long-term goal of sustainability. We investigate methods of online learning in Neuro-Evolution and Deep Recurrent Q-Networks to enable agents to attempt the problem one-shot as is often required by wicked social problems. We further explore if learning temporal dependencies with Long Short-Term Memory may be able to aid the agents in developing sustainable foraging strategies in the long term. It was found that the integration of Long Short-Term Memory assisted agents in developing sustainable strategies for a single agent; however, it failed to assist agents in managing the social dilemma that arises in the multi-agent scenario. | [
"['John Payne' 'Aishwaryaprajna' 'Peter R. Lewis']"
]
|
null | null | 2407.01502 | null | null | http://arxiv.org/pdf/2407.01502v1 | 2024-07-01T17:48:14Z | 2024-07-01T17:48:14Z | AI Agents That Matter | AI agents are an exciting new research direction, and agent development is driven by benchmarks. Our analysis of current agent benchmarks and evaluation practices reveals several shortcomings that hinder their usefulness in real-world applications. First, there is a narrow focus on accuracy without attention to other metrics. As a result, SOTA agents are needlessly complex and costly, and the community has reached mistaken conclusions about the sources of accuracy gains. Our focus on cost in addition to accuracy motivates the new goal of jointly optimizing the two metrics. We design and implement one such optimization, showing its potential to greatly reduce cost while maintaining accuracy. Second, the benchmarking needs of model and downstream developers have been conflated, making it hard to identify which agent would be best suited for a particular application. Third, many agent benchmarks have inadequate holdout sets, and sometimes none at all. This has led to agents that are fragile because they take shortcuts and overfit to the benchmark in various ways. We prescribe a principled framework for avoiding overfitting. Finally, there is a lack of standardization in evaluation practices, leading to a pervasive lack of reproducibility. We hope that the steps we introduce for addressing these shortcomings will spur the development of agents that are useful in the real world and not just accurate on benchmarks. | [
"['Sayash Kapoor' 'Benedikt Stroebl' 'Zachary S. Siegel' 'Nitya Nadgir'\n 'Arvind Narayanan']"
]
|
null | null | 2407.01512 | null | null | http://arxiv.org/pdf/2407.01512v2 | 2024-07-08T16:59:38Z | 2024-07-01T17:55:35Z | Open-TeleVision: Teleoperation with Immersive Active Visual Feedback | Teleoperation serves as a powerful method for collecting on-robot data essential for robot learning from demonstrations. The intuitiveness and ease of use of the teleoperation system are crucial for ensuring high-quality, diverse, and scalable data. To achieve this, we propose an immersive teleoperation system Open-TeleVision that allows operators to actively perceive the robot's surroundings in a stereoscopic manner. Additionally, the system mirrors the operator's arm and hand movements on the robot, creating an immersive experience as if the operator's mind is transmitted to a robot embodiment. We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks (Can Sorting, Can Insertion, Folding, and Unloading) for two different humanoid robots, and deploying them in the real world. The system is open-sourced at: https://robot-tv.github.io/ | [
"['Xuxin Cheng' 'Jialong Li' 'Shiqi Yang' 'Ge Yang' 'Xiaolong Wang']"
]
|
null | null | 2407.01517 | null | null | http://arxiv.org/pdf/2407.01517v1 | 2024-07-01T17:58:44Z | 2024-07-01T17:58:44Z | Centerline Boundary Dice Loss for Vascular Segmentation | Vascular segmentation in medical imaging plays a crucial role in analysing morphological and functional assessments. Traditional methods, like the centerline Dice (clDice) loss, ensure topology preservation but falter in capturing geometric details, especially under translation and deformation. The combination of clDice with traditional Dice loss can lead to diameter imbalance, favoring larger vessels. Addressing these challenges, we introduce the centerline boundary Dice (cbDice) loss function, which harmonizes topological integrity and geometric nuances, ensuring consistent segmentation across various vessel sizes. cbDice enriches the clDice approach by including boundary-aware aspects, thereby improving geometric detail recognition. It matches the performance of the boundary difference over union (B-DoU) loss through a mask-distance-based approach, enhancing translation sensitivity. Crucially, cbDice incorporates radius information from vascular skeletons, enabling uniform adaptation to vascular diameter changes and maintaining balance in branch growth and fracture impacts. Furthermore, we conducted a theoretical analysis of clDice variants (cl-X-Dice). We validated cbDice's efficacy on three diverse vascular segmentation datasets, encompassing both 2D and 3D, and binary and multi-class segmentation. Particularly, the method integrated with cbDice demonstrated outstanding performance on the MICCAI 2023 TopCoW Challenge dataset. Our code is made publicly available at: https://github.com/PengchengShi1220/cbDice. | [
"['Pengcheng Shi' 'Jiesi Hu' 'Yanwu Yang' 'Zilve Gao' 'Wei Liu' 'Ting Ma']"
]
|
null | null | 2407.01518 | null | null | http://arxiv.org/pdf/2407.01518v1 | 2024-07-01T17:59:09Z | 2024-07-01T17:59:09Z | Towards Multimodal Open-Set Domain Generalization and Adaptation through
Self-supervision | The task of open-set domain generalization (OSDG) involves recognizing novel classes within unseen domains, which becomes more challenging with multiple modalities as input. Existing works have only addressed unimodal OSDG within the meta-learning framework, without considering multimodal scenarios. In this work, we introduce a novel approach to address Multimodal Open-Set Domain Generalization (MM-OSDG) for the first time, utilizing self-supervision. To this end, we introduce two innovative multimodal self-supervised pretext tasks: Masked Cross-modal Translation and Multimodal Jigsaw Puzzles. These tasks facilitate the learning of multimodal representative features, thereby enhancing generalization and open-class detection capabilities. Additionally, we propose a novel entropy weighting mechanism to balance the loss across different modalities. Furthermore, we extend our approach to also tackle the Multimodal Open-Set Domain Adaptation (MM-OSDA) problem, especially in scenarios where unlabeled data from the target domain is available. Extensive experiments conducted under MM-OSDG, MM-OSDA, and Multimodal Closed-Set DG settings on the EPIC-Kitchens and HAC datasets demonstrate the efficacy and versatility of the proposed approach. Our source code is available at https://github.com/donghao51/MOOSA. | [
"['Hao Dong' 'Eleni Chatzi' 'Olga Fink']"
]
|
null | null | 2407.01521 | null | null | http://arxiv.org/pdf/2407.01521v1 | 2024-07-01T17:59:23Z | 2024-07-01T17:59:23Z | Improving Diffusion Inverse Problem Solving with Decoupled Noise
Annealing | Diffusion models have recently achieved success in solving Bayesian inverse problems with learned data priors. Current methods build on top of the diffusion sampling process, where each denoising step makes small modifications to samples from the previous step. However, this process struggles to correct errors from earlier sampling steps, leading to worse performance in complicated nonlinear inverse problems, such as phase retrieval. To address this challenge, we propose a new method called Decoupled Annealing Posterior Sampling (DAPS) that relies on a novel noise annealing process. Specifically, we decouple consecutive steps in a diffusion sampling trajectory, allowing them to vary considerably from one another while ensuring their time-marginals anneal to the true posterior as we reduce noise levels. This approach enables the exploration of a larger solution space, improving the success rate for accurate reconstructions. We demonstrate that DAPS significantly improves sample quality and stability across multiple image restoration tasks, particularly in complicated nonlinear inverse problems. For example, we achieve a PSNR of 30.72dB on the FFHQ 256 dataset for phase retrieval, which is an improvement of 9.12dB compared to existing methods. | [
"['Bingliang Zhang' 'Wenda Chu' 'Julius Berner' 'Chenlin Meng'\n 'Anima Anandkumar' 'Yang Song']"
]
|
null | null | 2407.01526 | null | null | http://arxiv.org/pdf/2407.01526v1 | 2024-07-01T17:59:41Z | 2024-07-01T17:59:41Z | Scalable Nested Optimization for Deep Learning | Gradient-based optimization has been critical to the success of machine learning, updating a single set of parameters to minimize a single loss. A growing number of applications rely on a generalization of this, where we have a bilevel or nested optimization of which subsets of parameters update on different objectives nested inside each other. We focus on motivating examples of hyperparameter optimization and generative adversarial networks. However, naively applying classical methods often fails when we look at solving these nested problems on a large scale. In this thesis, we build tools for nested optimization that scale to deep learning setups. | [
"['Jonathan Lorraine']"
]
|
null | null | 2407.01529 | null | null | http://arxiv.org/pdf/2407.01529v1 | 2024-07-01T17:59:54Z | 2024-07-01T17:59:54Z | On the Abuse and Detection of Polyglot Files | A polyglot is a file that is valid in two or more formats. Polyglot files pose a problem for malware detection systems that route files to format-specific detectors/signatures, as well as file upload and sanitization tools. In this work we found that existing file-format and embedded-file detection tools, even those developed specifically for polyglot files, fail to reliably detect polyglot files used in the wild, leaving organizations vulnerable to attack. To address this issue, we studied the use of polyglot files by malicious actors in the wild, finding $30$ polyglot samples and $15$ attack chains that leveraged polyglot files. In this report, we highlight two well-known APTs whose cyber attack chains relied on polyglot files to bypass detection mechanisms. Using knowledge from our survey of polyglot usage in the wild -- the first of its kind -- we created a novel data set based on adversary techniques. We then trained a machine learning detection solution, PolyConv, using this data set. PolyConv achieves a precision-recall area-under-curve score of $0.999$ with an F1 score of $99.20$% for polyglot detection and $99.47$% for file-format identification, significantly outperforming all other tools tested. We developed a content disarmament and reconstruction tool, ImSan, that successfully sanitized $100$% of the tested image-based polyglots, which were the most common type found via the survey. Our work provides concrete tools and suggestions to enable defenders to better defend themselves against polyglot files, as well as directions for future work to create more robust file specifications and methods of disarmament. | [
"['Luke Koch' 'Sean Oesch' 'Amul Chaulagain' 'Jared Dixon' 'Matthew Dixon'\n 'Mike Huettal' 'Amir Sadovnik' 'Cory Watson' 'Brian Weber'\n 'Jacob Hartman' 'Richard Patulski']"
]
|
null | null | 2407.01531 | null | null | http://arxiv.org/pdf/2407.01531v1 | 2024-07-01T17:59:56Z | 2024-07-01T17:59:56Z | Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for
Robot Learning | The increasing complexity of tasks in robotics demands efficient strategies for multitask and continual learning. Traditional models typically rely on a universal policy for all tasks, facing challenges such as high computational costs and catastrophic forgetting when learning new tasks. To address these issues, we introduce a sparse, reusable, and flexible policy, Sparse Diffusion Policy (SDP). By adopting Mixture of Experts (MoE) within a transformer-based diffusion policy, SDP selectively activates experts and skills, enabling efficient and task-specific learning without retraining the entire model. SDP not only reduces the burden of active parameters but also facilitates the seamless integration and reuse of experts across various tasks. Extensive experiments on diverse tasks in both simulations and real world show that SDP 1) excels in multitask scenarios with negligible increases in active parameters, 2) prevents forgetting in continual learning of new tasks, and 3) enables efficient task transfer, offering a promising solution for advanced robotic applications. Demos and codes can be found in https://forrest-110.github.io/sparse_diffusion_policy/. | [
"['Yixiao Wang' 'Yifei Zhang' 'Mingxiao Huo' 'Ran Tian' 'Xiang Zhang'\n 'Yichen Xie' 'Chenfeng Xu' 'Pengliang Ji' 'Wei Zhan' 'Mingyu Ding'\n 'Masayoshi Tomizuka']"
]
|
null | null | 2407.01546 | null | null | http://arxiv.org/pdf/2407.01546v1 | 2024-04-23T01:00:09Z | 2024-04-23T01:00:09Z | Machine Learning-Enhanced Ant Colony Optimization for Column Generation | Column generation (CG) is a powerful technique for solving optimization problems that involve a large number of variables or columns. This technique begins by solving a smaller problem with a subset of columns and gradually generates additional columns as needed. However, the generation of columns often requires solving difficult subproblems repeatedly, which can be a bottleneck for CG. To address this challenge, we propose a novel method called machine learning enhanced ant colony optimization (MLACO), to efficiently generate multiple high-quality columns from a subproblem. Specifically, we train a ML model to predict the optimal solution of a subproblem, and then integrate this ML prediction into the probabilistic model of ACO to sample multiple high-quality columns. Our experimental results on the bin packing problem with conflicts show that the MLACO method significantly improves the performance of CG compared to several state-of-the-art methods. Furthermore, when our method is incorporated into a Branch-and-Price method, it leads to a significant reduction in solution time. | [
"['Hongjie Xu' 'Yunzhuang Shen' 'Yuan Sun' 'Xiaodong Li']"
]
|
null | null | 2407.01548 | null | null | http://arxiv.org/pdf/2407.01548v1 | 2024-04-25T05:13:38Z | 2024-04-25T05:13:38Z | From Cognition to Computation: A Comparative Review of Human Attention
and Transformer Architectures | Attention is a cornerstone of human cognition that facilitates the efficient extraction of information in everyday life. Recent developments in artificial intelligence like the Transformer architecture also incorporate the idea of attention in model designs. However, despite the shared fundamental principle of selectively attending to information, human attention and the Transformer model display notable differences, particularly in their capacity constraints, attention pathways, and intentional mechanisms. Our review aims to provide a comparative analysis of these mechanisms from a cognitive-functional perspective, thereby shedding light on several open research questions. The exploration encourages interdisciplinary efforts to derive insights from human attention mechanisms in the pursuit of developing more generalized artificial intelligence. | [
"['Minglu Zhao' 'Dehong Xu' 'Tao Gao']"
]
|
null | null | 2407.01559 | null | null | http://arxiv.org/pdf/2407.01559v1 | 2024-05-06T18:47:54Z | 2024-05-06T18:47:54Z | Data-driven approaches for electrical impedance tomography image
segmentation from partial boundary data | Electrical impedance tomography (EIT) plays a crucial role in non-invasive imaging, with both medical and industrial applications. In this paper, we present three data-driven reconstruction methods for EIT imaging. These three approaches were originally submitted to the Kuopio tomography challenge 2023 (KTC2023). First, we introduce a post-processing approach, which achieved first place at KTC2023. Further, we present a fully learned and a conditional diffusion approach. All three methods are based on a similar neural network as a backbone and were trained using a synthetically generated data set, providing an opportunity for a fair comparison of these different data-driven reconstruction methods. | [
"['Alexander Denker' 'Zeljko Kereta' 'Imraj Singh' 'Tom Freudenberg'\n 'Tobias Kluth' 'Peter Maass' 'Simon Arridge']"
]
|
null | null | 2407.01563 | null | null | http://arxiv.org/pdf/2407.01563v1 | 2024-05-16T01:18:52Z | 2024-05-16T01:18:52Z | NaviSlim: Adaptive Context-Aware Navigation and Sensing via Dynamic
Slimmable Networks | Small-scale autonomous airborne vehicles, such as micro-drones, are expected to be a central component of a broad spectrum of applications ranging from exploration to surveillance and delivery. This class of vehicles is characterized by severe constraints in computing power and energy reservoir, which impairs their ability to support the complex state-of-the-art neural models needed for autonomous operations. The main contribution of this paper is a new class of neural navigation models -- NaviSlim -- capable of adapting the amount of resources spent on computing and sensing in response to the current context (i.e., difficulty of the environment, current trajectory, and navigation goals). Specifically, NaviSlim is designed as a gated slimmable neural network architecture that, different from existing slimmable networks, can dynamically select a slimming factor to autonomously scale model complexity, which consequently optimizes execution time and energy consumption. Moreover, different from existing sensor fusion approaches, NaviSlim can dynamically select power levels of onboard sensors to autonomously reduce power and time spent during sensor acquisition, without the need to switch between different neural networks. By means of extensive training and testing on the robust simulation environment Microsoft AirSim, we evaluate our NaviSlim models on scenarios with varying difficulty and a test set, observing a dynamic reduction in model complexity of 57-92% on average, and sensor utilization of 61-80%, compared to static neural networks designed to match the computing and sensing required by the most difficult scenario. | [
"['Tim Johnsen' 'Marco Levorato']"
]
|
null | null | 2407.01566 | null | null | http://arxiv.org/pdf/2407.01566v1 | 2024-05-22T18:38:05Z | 2024-05-22T18:38:05Z | A Contextual Online Learning Theory of Brokerage | We study the role of contextual information in the online learning problem of brokerage between traders. At each round, two traders arrive with secret valuations about an asset they wish to trade. The broker suggests a trading price based on contextual data about the asset. Then, the traders decide to buy or sell depending on whether their valuations are higher or lower than the brokerage price. We assume the market value of traded assets is an unknown linear function of a $d$-dimensional vector representing the contextual information available to the broker. Additionally, we model traders' valuations as independent bounded zero-mean perturbations of the asset's market value, allowing for potentially different unknown distributions across traders and time steps. Consistently with the existing online learning literature, we evaluate the performance of a learning algorithm with the regret with respect to the gain from trade. If the noise distributions admit densities bounded by some constant $L$, then, for any time horizon $T$: - If the agents' valuations are revealed after each interaction, we provide an algorithm achieving $O(Ld \ln T)$ regret, and show a corresponding matching lower bound of $\Omega(Ld \ln T)$. - If only their willingness to sell or buy at the proposed price is revealed after each interaction, we provide an algorithm achieving $O(\sqrt{LdT \ln T})$ regret, and show that this rate is optimal (up to logarithmic factors), via a lower bound of $\Omega(\sqrt{LdT})$. To complete the picture, we show that if the bounded density assumption is lifted, then the problem becomes unlearnable, even with full feedback. | [
"['François Bachoc' 'Tommaso Cesari' 'Roberto Colomboni']"
]
|
null | null | 2407.01567 | null | null | http://arxiv.org/pdf/2407.01567v1 | 2024-05-24T18:39:20Z | 2024-05-24T18:39:20Z | MeMo: Meaningful, Modular Controllers via Noise Injection | Robots are often built from standardized assemblies, (e.g. arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer. We also show that the modules help in task transfer. On both structure and task transfer, MeMo achieves improved training efficiency to graph neural network and Transformer baselines. | [
"['Megan Tjandrasuwita' 'Jie Xu' 'Armando Solar-Lezama' 'Wojciech Matusik']"
]
|
null | null | 2407.01571 | null | null | http://arxiv.org/pdf/2407.01571v1 | 2024-05-28T00:43:47Z | 2024-05-28T00:43:47Z | Interpretable DRL-based Maneuver Decision of UCAV Dogfight | This paper proposes a three-layer unmanned combat aerial vehicle (UCAV) dogfight frame where Deep reinforcement learning (DRL) is responsible for high-level maneuver decision. A four-channel low-level control law is firstly constructed, followed by a library containing eight basic flight maneuvers (BFMs). Double deep Q network (DDQN) is applied for BFM selection in UCAV dogfight, where the opponent strategy during the training process is constructed with DT. Our simulation result shows that, the agent can achieve a win rate of 85.75% against the DT strategy, and positive results when facing various unseen opponents. Based on the proposed frame, interpretability of the DRL-based dogfight is significantly improved. The agent performs yo-yo to adjust its turn rate and gain higher maneuverability. Emergence of "Dive and Chase" behavior also indicates the agent can generate a novel tactic that utilizes the drawback of its opponent. | [
"['Haoran Han' 'Jian Cheng' 'Maolong Lv']"
]
|
null | null | 2407.01572 | null | null | http://arxiv.org/pdf/2407.01572v1 | 2024-05-28T17:55:54Z | 2024-05-28T17:55:54Z | Exploring Sectoral Profitability in the Indian Stock Market Using Deep
Learning | This paper explores using a deep learning Long Short-Term Memory (LSTM) model for accurate stock price prediction and its implications for portfolio design. Despite the efficient market hypothesis suggesting that predicting stock prices is impossible, recent research has shown the potential of advanced algorithms and predictive models. The study builds upon existing literature on stock price prediction methods, emphasizing the shift toward machine learning and deep learning approaches. Using historical stock prices of 180 stocks across 18 sectors listed on the NSE, India, the LSTM model predicts future prices. These predictions guide buy/sell decisions for each stock and analyze sector profitability. The study's main contributions are threefold: introducing an optimized LSTM model for robust portfolio design, utilizing LSTM predictions for buy/sell transactions, and insights into sector profitability and volatility. Results demonstrate the efficacy of the LSTM model in accurately predicting stock prices and informing investment decisions. By comparing sector profitability and prediction accuracy, the work provides valuable insights into the dynamics of the current financial markets in India. | [
"['Jaydip Sen' 'Hetvi Waghela' 'Sneha Rakshit']"
]
|
null | null | 2407.01573 | null | null | http://arxiv.org/pdf/2407.01573v1 | 2024-05-28T22:14:25Z | 2024-05-28T22:14:25Z | Model-Based Diffusion for Trajectory Optimization | Recent advances in diffusion models have demonstrated their strong capabilities in generating high-fidelity samples from complex distributions through an iterative refinement process. Despite the empirical success of diffusion models in motion planning and control, the model-free nature of these methods does not leverage readily available model information and limits their generalization to new scenarios beyond the training data (e.g., new robots with different dynamics). In this work, we introduce Model-Based Diffusion (MBD), an optimization approach using the diffusion process to solve trajectory optimization (TO) problems without data. The key idea is to explicitly compute the score function by leveraging the model information in TO problems, which is why we refer to our approach as model-based diffusion. Moreover, although MBD does not require external data, it can be naturally integrated with data of diverse qualities to steer the diffusion process. We also reveal that MBD has interesting connections to sampling-based optimization. Empirical evaluations show that MBD outperforms state-of-the-art reinforcement learning and sampling-based TO methods in challenging contact-rich tasks. Additionally, MBD's ability to integrate with data enhances its versatility and practical applicability, even with imperfect and infeasible data (e.g., partial-state demonstrations for high-dimensional humanoids), beyond the scope of standard diffusion models. | [
"['Chaoyi Pan' 'Zeji Yi' 'Guanya Shi' 'Guannan Qu']"
]
|
null | null | 2407.01574 | null | null | http://arxiv.org/pdf/2407.01574v1 | 2024-05-29T15:12:19Z | 2024-05-29T15:12:19Z | cryoSPHERE: Single-particle heterogeneous reconstruction from cryo EM | The three-dimensional structure of a protein plays a key role in determining its function. Methods like AlphaFold have revolutionized protein structure prediction based only on the amino-acid sequence. However, proteins often appear in multiple different conformations, and it is highly relevant to resolve the full conformational distribution. Single-particle cryo-electron microscopy (cryo EM) is a powerful tool for capturing a large number of images of a given protein, frequently in different conformations (referred to as particles). The images are, however, very noisy projections of the protein, and traditional methods for cryo EM reconstruction are limited to recovering a single, or a few, conformations. In this paper, we introduce cryoSPHERE, a deep learning method that takes as input a nominal protein structure, e.g. from AlphaFold, learns how to divide it into segments, and how to move these as approximately rigid bodies to fit the different conformations present in the cryo EM dataset. This formulation is shown to provide enough constraints to recover meaningful reconstructions of single protein structures. This is illustrated in three examples where we show consistent improvements over the current state-of-the-art for heterogeneous reconstruction. | [
"['Gabriel Ducrocq' 'Lukas Grunewald' 'Sebastian Westenhoff'\n 'Fredrik Lindsten']"
]
|
null | null | 2407.01577 | null | null | http://arxiv.org/abs/2407.01577v1 | 2024-06-03T01:42:52Z | 2024-06-03T01:42:52Z | MOT: A Mixture of Actors Reinforcement Learning Method by Optimal
Transport for Algorithmic Trading | Algorithmic trading refers to executing buy and sell orders for specific assets based on automatically identified trading opportunities. Strategies based on reinforcement learning (RL) have demonstrated remarkable capabilities in addressing algorithmic trading problems. However, the trading patterns differ among market conditions due to shifted distribution data. Ignoring multiple patterns in the data will undermine the performance of RL. In this paper, we propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market. Furthermore, we incorporate the Optimal Transport (OT) algorithm to allocate samples to the appropriate actor by introducing a regularization loss term. Additionally, we propose a Pretrain Module to facilitate imitation learning by aligning the outputs of actors with expert strategy and better balance the exploration and exploitation of RL. Experimental results on real futures market data demonstrate that MOT exhibits excellent profit capabilities while balancing risks. Ablation studies validate the effectiveness of the components of MOT. | [
"['Xi Cheng' 'Jinghao Zhang' 'Yunan Zeng' 'Wenfang Xue']"
]
|
null | null | 2407.01583 | null | null | http://arxiv.org/pdf/2407.01583v1 | 2024-06-17T10:33:52Z | 2024-06-17T10:33:52Z | Optimal Low-Depth Quantum Signal-Processing Phase Estimation | Quantum effects like entanglement and coherent amplification can be used to drastically enhance the accuracy of quantum parameter estimation beyond classical limits. However, challenges such as decoherence and time-dependent errors hinder Heisenberg-limited amplification. We introduce Quantum Signal-Processing Phase Estimation algorithms that are robust against these challenges and achieve optimal performance as dictated by the Cramér-Rao bound. These algorithms use quantum signal transformation to decouple interdependent phase parameters into largely orthogonal ones, ensuring that time-dependent errors in one do not compromise the accuracy of learning the other. Combining provably optimal classical estimation with near-optimal quantum circuit design, our approach achieves an unprecedented standard deviation accuracy of $10^{-4}$ radians for estimating unwanted swap angles in superconducting two-qubit experiments, using low-depth ($<10$) circuits. This represents up to two orders of magnitude improvement over existing methods. Theoretically and numerically, we demonstrate the optimality of our algorithm against time-dependent phase errors, observing that the variance of the time-sensitive parameter $\varphi$ scales faster than the asymptotic Heisenberg scaling in the small-depth regime. Our results are rigorously validated against the quantum Fisher information, confirming our protocol's ability to achieve unmatched precision for two-qubit gate learning. | [
"['Yulong Dong' 'Jonathan A. Gross' 'Murphy Yuezhen Niu']"
]
|
null | null | 2407.01593 | null | null | http://arxiv.org/pdf/2407.01593v1 | 2024-06-24T11:13:06Z | 2024-06-24T11:13:06Z | neuROSym: Deployment and Evaluation of a ROS-based Neuro-Symbolic Model
for Human Motion Prediction | Autonomous mobile robots can rely on several human motion detection and prediction systems for safe and efficient navigation in human environments, but the underlying model architectures can have different impacts on the trustworthiness of the robot in the real world. Among existing solutions for context-aware human motion prediction, some approaches have shown the benefit of integrating symbolic knowledge with state-of-the-art neural networks. In particular, a recent neuro-symbolic architecture (NeuroSyM) has successfully embedded context with a Qualitative Trajectory Calculus (QTC) for spatial interactions representation. This work achieved better performance than neural-only baseline architectures on offline datasets. In this paper, we extend the original architecture to provide neuROSym, a ROS package for robot deployment in real-world scenarios, which can run, visualise, and evaluate previous neural-only and neuro-symbolic models for motion prediction online. We evaluated these models, NeuroSyM and a baseline SGAN, on a TIAGo robot in two scenarios with different human motion patterns. We assessed accuracy and runtime performance of the prediction models, showing a general improvement in case our neuro-symbolic architecture is used. We make the neuROSym package publicly available to the robotics community. | [
"['Sariah Mghames' 'Luca Castri' 'Marc Hanheide' 'Nicola Bellotto']"
]
|
null | null | 2407.01595 | null | null | http://arxiv.org/pdf/2407.01595v1 | 2024-06-25T00:15:13Z | 2024-06-25T00:15:13Z | Fairpriori: Improving Biased Subgroup Discovery for Deep Neural Network
Fairness | While deep learning has become a core functional module of most software systems, concerns regarding the fairness of ML predictions have emerged as a significant issue that affects prediction results due to discrimination. Intersectional bias, which disproportionately affects members of subgroups, is a prime example of this. For instance, a machine learning model might exhibit bias against darker-skinned women, while not showing bias against individuals with darker skin or women. This problem calls for effective fairness testing before the deployment of such deep learning models in real-world scenarios. However, research into detecting such bias is currently limited compared to research on individual and group fairness. Existing tools to investigate intersectional bias lack important features such as support for multiple fairness metrics, fast and efficient computation, and user-friendly interpretation. This paper introduces Fairpriori, a novel biased subgroup discovery method, which aims to address these limitations. Fairpriori incorporates the frequent itemset generation algorithm to facilitate effective and efficient investigation of intersectional bias by producing fast fairness metric calculations on subgroups of a dataset. Through comparison with the state-of-the-art methods (e.g., Themis, FairFictPlay, and TestSGD) under similar conditions, Fairpriori demonstrates superior effectiveness and efficiency when identifying intersectional bias. Specifically, Fairpriori is easier to use and interpret, supports a wider range of use cases by accommodating multiple fairness metrics, and exhibits higher efficiency in computing fairness metrics. These findings showcase Fairpriori's potential for effectively uncovering subgroups affected by intersectional bias, supported by its open-source tooling at https://anonymous.4open.science/r/Fairpriori-0320. | [
"['Kacy Zhou' 'Jiawen Wen' 'Nan Yang' 'Dong Yuan' 'Qinghua Lu'\n 'Huaming Chen']"
]
|
null | null | 2407.01596 | null | null | http://arxiv.org/pdf/2407.01596v1 | 2024-06-25T09:34:11Z | 2024-06-25T09:34:11Z | Maze Discovery using Multiple Robots via Federated Learning | This work presents a use case of federated learning (FL) applied to discovering a maze with robots equipped with LiDAR sensors. The goal is to train classification models that accurately identify the shapes of grid areas within two different square mazes made up of irregularly shaped walls. Because the two mazes use differently shaped walls, a classification model trained in one maze, capturing its structure, does not generalize to the other. This issue is resolved by adopting an FL framework between robots that each explore only one maze, so that the collective knowledge allows them to operate accurately in the unseen maze. This illustrates the effectiveness of FL in real-world applications in terms of enhancing classification accuracy and robustness in maze discovery tasks. | [
"['Kalpana Ranasinghe' 'H. P. Madushanka' 'Rafaela Scaciota'\n 'Sumudu Samarakoon' 'Mehdi Bennis']"
]
|
null | null | 2407.01598 | null | null | http://arxiv.org/pdf/2407.01598v1 | 2024-06-26T02:06:27Z | 2024-06-26T02:06:27Z | Long-Term Prediction Accuracy Improvement of Data-Driven Medium-Range
Global Weather Forecast | Long-term stability stands as a crucial requirement in data-driven medium-range global weather forecasting. Spectral bias is recognized as the primary contributor to instabilities, as data-driven methods struggle to learn small-scale dynamics. In this paper, we reveal that the universal mechanism behind these instabilities is related not only to spectral bias but also to distortions introduced by processing spherical data with conventional convolution. These distortions lead to a rapid amplification of errors over successive long-term iterations, resulting in a significant decline in forecast accuracy. To address this issue, a universal neural operator called the Spherical Harmonic Neural Operator (SHNO) is introduced to improve long-term iterative forecasts. SHNO uses the spherical harmonic basis to mitigate distortions for spherical data and uses gated residual spectral attention (GRSA) to correct spectral bias caused by spurious correlations across different scales. The effectiveness and merit of the proposed method have been validated through its application to the spherical Shallow Water Equations (SWEs) and medium-range global weather forecasting. Our findings highlight the benefits and potential of SHNO to improve the accuracy of long-term prediction. | [
"['Yifan Hu' 'Fukang Yin' 'Weimin Zhang' 'Kaijun Ren' 'Junqiang Song'\n 'Kefeng Deng' 'Di Zhang']"
]
|
null | null | 2407.01599 | null | null | http://arxiv.org/pdf/2407.01599v1 | 2024-06-26T02:20:23Z | 2024-06-26T02:20:23Z | JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large
Language and Vision-Language Models | The rapid evolution of artificial intelligence (AI) through developments in Large Language Models (LLMs) and Vision-Language Models (VLMs) has brought significant advancements across various technological domains. While these models enhance capabilities in natural language processing and visual interactive tasks, their growing adoption raises critical concerns regarding security and ethical alignment. This survey provides an extensive review of the emerging field of jailbreaking--deliberately circumventing the ethical and operational boundaries of LLMs and VLMs--and the consequent development of defense mechanisms. Our study categorizes jailbreaks into seven distinct types and elaborates on defense strategies that address these vulnerabilities. Through this comprehensive examination, we identify research gaps and propose directions for future studies to enhance the security frameworks of LLMs and VLMs. Our findings underscore the necessity for a unified perspective that integrates both jailbreak strategies and defensive solutions to foster a robust, secure, and reliable environment for the next generation of language models. More details can be found on our website: https://chonghan-chen.com/llm-jailbreak-zoo-survey/. | [
"['Haibo Jin' 'Leyang Hu' 'Xinuo Li' 'Peiyan Zhang' 'Chonghan Chen'\n 'Jun Zhuang' 'Haohan Wang']"
]
|
null | null | 2407.01601 | null | null | http://arxiv.org/pdf/2407.01601v2 | 2024-07-03T16:19:59Z | 2024-06-26T11:53:35Z | Unveiling and Controlling Anomalous Attention Distribution in
Transformers | With the advent of large models based on the Transformer architecture, researchers have observed an anomalous phenomenon in the attention mechanism--very high attention on the first element--which is prevalent across Transformer-based models. Understanding this phenomenon is crucial for the development of techniques focusing on attention distribution, such as Key-Value (KV) Cache compression and infinite extrapolation; however, the underlying cause remains unknown. In this paper, we analyze the phenomenon from the perspective of the waiver phenomenon, which involves reducing the internal values of certain elements in the sequence, allowing them to absorb excess attention without affecting their contribution to information. In specific models, due to differences in positional encoding and attention patterns, we find that the model's selection of waiver elements can be categorized into two methods: positional-encoding-based and feature-distribution-within-elements-based. | [
"['Ruiqing Yan' 'Xingbo Du' 'Haoyu Deng' 'Linghan Zheng' 'Qiuzhuang Sun'\n 'Jifang Hu' 'Yuhang Shao' 'Penghao Jiang' 'Jinrong Jiang' 'Lian Zhao']"
]
|
null | null | 2407.01602 | null | null | http://arxiv.org/pdf/2407.01602v1 | 2024-06-26T16:13:35Z | 2024-06-26T16:13:35Z | Clustering in pure-attention hardmax transformers and its role in
sentiment analysis | Transformers are extremely successful machine learning models whose mathematical properties remain poorly understood. Here, we rigorously characterize the behavior of transformers with hardmax self-attention and normalization sublayers as the number of layers tends to infinity. By viewing such transformers as discrete-time dynamical systems describing the evolution of points in a Euclidean space, and thanks to a geometric interpretation of the self-attention mechanism based on hyperplane separation, we show that the transformer inputs asymptotically converge to a clustered equilibrium determined by special points called leaders. We then leverage this theoretical understanding to solve sentiment analysis problems from language processing using a fully interpretable transformer model, which effectively captures 'context' by clustering meaningless words around leader words carrying the most meaning. Finally, we outline remaining challenges to bridge the gap between the mathematical analysis of transformers and their real-life implementation. | [
"['Albert Alcalde' 'Giovanni Fantuzzi' 'Enrique Zuazua']"
]
|
null | null | 2407.01603 | null | null | http://arxiv.org/pdf/2407.01603v1 | 2024-06-26T17:33:21Z | 2024-06-26T17:33:21Z | A Review of Large Language Models and Autonomous Agents in Chemistry | Large language models (LLMs) are emerging as a powerful tool in chemistry across multiple domains. In chemistry, LLMs are able to accurately predict properties, design new molecules, optimize synthesis pathways, and accelerate drug and material discovery. A core emerging idea is combining LLMs with chemistry-specific tools like synthesis planners and databases, leading to so-called "agents." This review covers LLMs' recent history, current capabilities, design, challenges specific to chemistry, and future directions. Particular attention is given to agents and their emergence as a cross-chemistry paradigm. Agents have proven effective in diverse domains of chemistry, but challenges remain. It is unclear if creating domain-specific versus generalist agents and developing autonomous pipelines versus "co-pilot" systems will accelerate chemistry. An emerging direction is the development of multi-agent systems using a human-in-the-loop approach. Due to the incredibly fast development of this field, a repository has been built to keep track of the latest studies: https://github.com/ur-whitelab/LLMs-in-science. | [
"['Mayk Caldas Ramos' 'Christopher J. Collison' 'Andrew D. White']"
]
|
null | null | 2407.01606 | null | null | http://arxiv.org/pdf/2407.01606v1 | 2024-06-27T02:53:01Z | 2024-06-27T02:53:01Z | On Discrete Prompt Optimization for Diffusion Models | This paper introduces the first gradient-based framework for prompt optimization in text-to-image diffusion models. We formulate prompt engineering as a discrete optimization problem over the language space. Two major challenges arise in efficiently finding a solution to this problem: (1) Enormous Domain Space: Setting the domain to the entire language space poses significant difficulty to the optimization process. (2) Text Gradient: Efficiently computing the text gradient is challenging, as it requires backpropagating through the inference steps of the diffusion model and a non-differentiable embedding lookup table. Beyond the problem formulation, our main technical contributions lie in solving the above challenges. First, we design a family of dynamically generated compact subspaces comprised of only the most relevant words to user input, substantially restricting the domain space. Second, we introduce "Shortcut Text Gradient" -- an effective replacement for the text gradient that can be obtained with constant memory and runtime. Empirical evaluation on prompts collected from diverse sources (DiffusionDB, ChatGPT, COCO) suggests that our method can discover prompts that substantially improve (prompt enhancement) or destroy (adversarial attack) the faithfulness of images generated by the text-to-image diffusion model. | [
"['Ruochen Wang' 'Ting Liu' 'Cho-Jui Hsieh' 'Boqing Gong']"
]
|
null | null | 2407.01607 | null | null | http://arxiv.org/pdf/2407.01607v1 | 2024-06-27T04:00:15Z | 2024-06-27T04:00:15Z | Multi-Epoch learning with Data Augmentation for Deep Click-Through Rate
Prediction | This paper investigates the one-epoch overfitting phenomenon in Click-Through Rate (CTR) models, where performance notably declines at the start of the second epoch. Despite extensive research, the efficacy of multi-epoch training over the conventional one-epoch approach remains unclear. We identify the overfitting of the embedding layer, caused by high-dimensional data sparsity, as the primary issue. To address this, we introduce a novel and simple Multi-Epoch learning with Data Augmentation (MEDA) framework, suitable for both non-continual and continual learning scenarios, which can be seamlessly integrated into existing deep CTR models and may also help address the "forgetting or overfitting" dilemma in retraining as well as the well-known catastrophic forgetting problem. MEDA minimizes overfitting by reducing the dependency of the embedding layer on subsequent training data or the Multi-Layer Perceptron (MLP) layers, and achieves data augmentation through training the MLP with varied embedding spaces. Our findings confirm that pre-trained MLP layers can adapt to new embedding spaces, enhancing performance without overfitting. This adaptability underscores the MLP layers' role in learning a matching function focused on the relative relationships among embeddings rather than their absolute positions. To our knowledge, MEDA represents the first multi-epoch training strategy tailored for deep CTR prediction models. We conduct extensive experiments on several public and business datasets, and the effectiveness of data augmentation and superiority over conventional single-epoch training are fully demonstrated. Moreover, MEDA has exhibited significant benefits in a real-world online advertising system. | [
"['Zhongxiang Fan' 'Zhaocheng Liu' 'Jian Liang' 'Dongying Kong' 'Han Li'\n 'Peng Jiang' 'Shuang Li' 'Kun Gai']"
]
|
null | null | 2407.01608 | null | null | http://arxiv.org/pdf/2407.01608v1 | 2024-06-27T04:42:29Z | 2024-06-27T04:42:29Z | Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine
Learning Models | Increasingly, artificial intelligence (AI) and machine learning (ML) are used in eScience applications [9]. While these approaches have great potential, the literature has shown that ML-based approaches frequently suffer from results that are either incorrect or unreproducible due to mismanagement or misuse of data used for training and validating the models [12, 15]. Recognition of the necessity of high-quality data for correct ML results has led to data-centric ML approaches that shift the central focus from model development to creation of high-quality data sets to train and validate the models [14, 20]. However, there are limited tools and methods available for data-centric approaches to explore and evaluate ML solutions for eScience problems which often require collaborative multidisciplinary teams working with models and data that will rapidly evolve as an investigation unfolds [1]. In this paper, we show how data management tools based on the principle that all of the data for ML should be findable, accessible, interoperable and reusable (i.e. FAIR [26]) can significantly improve the quality of data that is used for ML applications. When combined with best practices that apply these tools to the entire life cycle of an ML-based eScience investigation, we can significantly improve the ability of an eScience team to create correct and reproducible ML solutions. We propose an architecture and implementation of such tools and demonstrate through two use cases how they can be used to improve ML-based eScience investigations. | [
"['Zhiwei Li' 'Carl Kesselman' \"Mike D'Arch\" 'Michael Pazzani'\n 'Benjamin Yizing Xu']"
]
|
null | null | 2407.01613 | null | null | http://arxiv.org/pdf/2407.01613v1 | 2024-06-28T00:53:48Z | 2024-06-28T00:53:48Z | Self-adaptive weights based on balanced residual decay rate for
physics-informed neural networks and deep operator networks | Physics-informed deep learning has emerged as a promising alternative for solving partial differential equations. However, for complex problems, training these networks can still be challenging, often resulting in unsatisfactory accuracy and efficiency. In this work, we demonstrate that the failure of plain physics-informed neural networks arises from the significant discrepancy in the convergence speed of residuals at different training points, where the slowest convergence speed dominates the overall solution convergence. Based on these observations, we propose a point-wise adaptive weighting method that balances the residual decay rate across different training points. The performance of our proposed adaptive weighting method is compared with current state-of-the-art adaptive weighting methods on benchmark problems for both physics-informed neural networks and physics-informed deep operator networks. Through extensive numerical results we demonstrate that our proposed approach of balanced residual decay rates offers several advantages, including bounded weights, high prediction accuracy, fast convergence speed, low training uncertainty, low computational cost and ease of hyperparameter tuning. | [
"['Wenqian Chen' 'Amanda A. Howard' 'Panos Stinis']"
]
|
null | null | 2407.01614 | null | null | http://arxiv.org/pdf/2407.01614v1 | 2024-06-28T01:46:10Z | 2024-06-28T01:46:10Z | Enhancing Stability for Large Models Training in Constrained Bandwidth
Networks | Training extremely large language models with billions of parameters is a computationally intensive task that pushes the limits of current data parallel training systems. While techniques like ZeRO++ have enabled efficient distributed training of such giant models on inexpensive low-bandwidth clusters, they can suffer from convergence issues due to potential race conditions in the hierarchical partitioning (hpZ) scheme employed to reduce cross-machine communication. In this work, we first show how these race conditions cause instability when training models with billions of parameters. We then propose a modification to the partitioning algorithm that addresses these convergence challenges while maintaining competitive training efficiency. Empirical evaluation on training the multi-billion-parameter Falcon and Llama-2 models demonstrates the updated algorithm's ability to achieve reliable convergence on these massive models, where stock ZeRO++ hpZ fails to converge. The updated algorithm enables robust training of larger models with 98% throughput and improved model training speed, without sacrificing the quality of convergence. | [
"['Yun Dai' 'Tejas Dharamsi' 'Byron Hsu' 'Tao Song' 'Hamed Firooz']"
]
|
null | null | 2407.01615 | null | null | http://arxiv.org/pdf/2407.01615v1 | 2024-06-28T03:18:12Z | 2024-06-28T03:18:12Z | Edge-DIRECT: A Deep Reinforcement Learning-based Method for Solving
Heterogeneous Electric Vehicle Routing Problem with Time Window Constraints | In response to carbon-neutral policies in developed countries, electric vehicle route optimization has gained importance for logistics companies. With the increasing focus on customer expectations and the shift towards more customer-oriented business models, the integration of delivery time-windows has become essential in logistics operations. Recognizing the critical nature of these developments, this article studies the heterogeneous electric vehicle routing problem with time-window constraints (HEVRPTW). To solve this variant of the vehicle routing problem (VRP), we propose a DRL-based approach, named Edge-enhanced Dual attentIon encodeR and feature-EnhanCed dual aTtention decoder (Edge-DIRECT). Edge-DIRECT features an extra graph representation, the node connectivity of which is based on the overlap of customer time-windows. Edge-DIRECT's self-attention encoding mechanism is enhanced by exploiting the energy consumption and travel time between the locations. To effectively account for the heterogeneity of the EVs' fleet, a dual attention decoder has been introduced. Experimental results based on two real-world datasets reveal that Edge-DIRECT outperforms a state-of-the-art DRL-based method and a well-established heuristic approach in solution quality and execution time. Furthermore, it exhibits competitive performance when compared to another leading heuristic method. | [
"['Arash Mozhdehi' 'Mahdi Mohammadizadeh' 'Xin Wang']"
]
|
null | null | 2407.01619 | null | null | http://arxiv.org/pdf/2407.01619v1 | 2024-06-28T17:28:53Z | 2024-06-28T17:28:53Z | TabSketchFM: Sketch-based Tabular Representation Learning for Data
Discovery over Data Lakes | Enterprises have a growing need to identify relevant tables in data lakes; e.g. tables that are unionable, joinable, or subsets of each other. Tabular neural models can be helpful for such data discovery tasks. In this paper, we present TabSketchFM, a neural tabular model for data discovery over data lakes. First, we propose a novel pre-training sketch-based approach to enhance the effectiveness of data discovery techniques in neural tabular models. Second, to further finetune the pretrained model for several downstream tasks, we develop LakeBench, a collection of 8 benchmarks to help with different data discovery tasks such as finding tables that are unionable, joinable, or subsets of each other. We then show on these finetuning tasks that TabSketchFM achieves state-of-the-art performance compared to existing neural models. Third, we use these finetuned models to search for tables that are unionable, joinable, or can be subsets of each other. Our results demonstrate improvements in F1 scores for search compared to state-of-the-art techniques (even up to 70% improvement in a joinable search benchmark). Finally, we show significant transfer across datasets and tasks, establishing that our model can generalize across different tasks over different data lakes. | [
"['Aamod Khatiwada' 'Harsha Kokel' 'Ibrahim Abdelaziz' 'Subhajit Chaudhury'\n 'Julian Dolby' 'Oktie Hassanzadeh' 'Zhenhan Huang' 'Tejaswini Pedapati'\n 'Horst Samulowitz' 'Kavitha Srinivas']"
]
|
null | null | 2407.01621 | null | null | http://arxiv.org/pdf/2407.01621v1 | 2024-06-29T03:17:53Z | 2024-06-29T03:17:53Z | Deciphering interventional dynamical causality from non-intervention
systems | Detecting and quantifying causality is a focal topic in the fields of science, engineering, and interdisciplinary studies. However, causal studies on non-intervention systems attract much attention but remain extremely challenging. To address this challenge, we propose a framework named Interventional Dynamical Causality (IntDC) for such non-intervention systems, along with its computational criterion, Interventional Embedding Entropy (IEE), to quantify causality. The IEE criterion theoretically and numerically enables the deciphering of IntDC solely from observational (non-interventional) time-series data, without requiring any knowledge of dynamical models or real interventions in the considered system. Demonstrations of performance showed the accuracy and robustness of IEE on benchmark simulated systems as well as real-world systems, including the neural connectomes of C. elegans, COVID-19 transmission networks in Japan, and regulatory networks surrounding key circadian genes. | [
"['Jifan Shi' 'Yang Li' 'Juan Zhao' 'Siyang Leng' 'Kazuyuki Aihara'\n 'Luonan Chen' 'Wei Lin']"
]
|
null | null | 2407.01622 | null | null | http://arxiv.org/abs/2407.01622v1 | 2024-06-29T05:36:04Z | 2024-06-29T05:36:04Z | Addressing Prediction Delays in Time Series Forecasting: A Continuous
GRU Approach with Derivative Regularization | Time series forecasting has been an essential field in many different application areas, including economic analysis, meteorology, and so forth. The majority of time series forecasting models are trained using the mean squared error (MSE). However, this training based on MSE causes a limitation known as prediction delay. The prediction delay, which implies the ground-truth precedes the prediction, can cause serious problems in a variety of fields, e.g., finance and weather forecasting -- as a matter of fact, predictions succeeding ground-truth observations are not practically meaningful although their MSEs can be low. This paper proposes a new perspective on traditional time series forecasting tasks and introduces a new solution to mitigate the prediction delay. We introduce a continuous-time gated recurrent unit (GRU) based on the neural ordinary differential equation (NODE) which can supervise explicit time-derivatives. We generalize the GRU architecture in a continuous-time manner and minimize the prediction delay through our time-derivative regularization. Our method outperforms in metrics such as MSE, Dynamic Time Warping (DTW) and Time Distortion Index (TDI). In addition, we demonstrate the low prediction delay of our method in a variety of datasets. | [
"['Sheo Yon Jhin' 'Seojin Kim' 'Noseong Park']"
]
|
null | null | 2407.01623 | null | null | http://arxiv.org/pdf/2407.01623v1 | 2024-06-29T05:58:00Z | 2024-06-29T05:58:00Z | Uncertainty estimation in satellite precipitation spatial prediction by
combining distributional regression algorithms | To facilitate effective decision-making, gridded satellite precipitation products should include uncertainty estimates. Machine learning has been proposed for issuing such estimates. However, most existing algorithms for this purpose rely on quantile regression. Distributional regression offers distinct advantages over quantile regression, including the ability to model intermittency as well as a stronger ability to extrapolate beyond the training data, which is critical for predicting extreme precipitation. In this work, we introduce the concept of distributional regression for the engineering task of creating precipitation datasets through data merging. Building upon this concept, we propose new ensemble learning methods that can be valuable not only for spatial prediction but also for prediction problems in general. These methods exploit conditional zero-adjusted probability distributions estimated with generalized additive models for location, scale, and shape (GAMLSS), spline-based GAMLSS and distributional regression forests as well as their ensembles (stacking based on quantile regression, and equal-weight averaging). To identify the most effective methods for our specific problem, we compared them to benchmarks using a large, multi-source precipitation dataset. Stacking emerged as the most successful strategy. Three specific stacking methods achieved the best performance based on the quantile scoring rule, although the ranking of these methods varied across quantile levels. This suggests that a task-specific combination of multiple algorithms could yield significant benefits. | [
"['Georgia Papacharalampous' 'Hristos Tyralis' 'Nikolaos Doulamis'\n 'Anastasios Doulamis']"
]
|
null | null | 2407.01624 | null | null | http://arxiv.org/pdf/2407.01624v1 | 2024-06-29T06:12:36Z | 2024-06-29T06:12:36Z | Guided Trajectory Generation with Diffusion Models for Offline
Model-based Optimization | Optimizing complex and high-dimensional black-box functions is ubiquitous in science and engineering fields. Unfortunately, the online evaluation of these functions is restricted due to time and safety constraints in most cases. In offline model-based optimization (MBO), we aim to find a design that maximizes the target function using only a pre-existing offline dataset. While prior methods consider forward or inverse approaches to address the problem, these approaches are limited by conservatism and the difficulty of learning highly multi-modal mappings. Recently, there has been an emerging paradigm of learning to improve solutions with synthetic trajectories constructed from the offline dataset. In this paper, we introduce a novel conditional generative modeling approach to produce trajectories toward high-scoring regions. First, we construct synthetic trajectories toward high-scoring regions using the dataset while injecting locality bias for consistent improvement directions. Then, we train a conditional diffusion model to generate trajectories conditioned on their scores. Lastly, we sample multiple trajectories from the trained model with guidance to explore high-scoring regions beyond the dataset and select high-fidelity designs among generated trajectories with the proxy function. Extensive experiment results demonstrate that our method outperforms competitive baselines on Design-Bench and its practical variants. The code is publicly available at https://github.com/dbsxodud-11/GTG. | [
"['Taeyoung Yun' 'Sujin Yun' 'Jaewoo Lee' 'Jinkyoo Park']"
]
|
null | null | 2407.01635 | null | null | http://arxiv.org/pdf/2407.01635v1 | 2024-06-30T10:53:40Z | 2024-06-30T10:53:40Z | Commute Graph Neural Networks | Graph Neural Networks (GNNs) have shown remarkable success in learning from graph-structured data. However, their application to directed graphs (digraphs) presents unique challenges, primarily due to the inherent asymmetry in node relationships. Traditional GNNs are adept at capturing unidirectional relations but fall short in encoding the mutual path dependencies between nodes, such as asymmetrical shortest paths typically found in digraphs. Recognizing this gap, we introduce Commute Graph Neural Networks (CGNN), an approach that seamlessly integrates node-wise commute time into the message passing scheme. The cornerstone of CGNN is an efficient method for computing commute time using a newly formulated digraph Laplacian. Commute time information is then integrated into the neighborhood aggregation process, with neighbor contributions weighted according to their respective commute time to the central node in each layer. It enables CGNN to directly capture the mutual, asymmetric relationships in digraphs. | [
"['Wei Zhuo' 'Guang Tan']"
]
|
null | null | 2407.01639 | null | null | http://arxiv.org/pdf/2407.01639v1 | 2024-06-30T20:15:31Z | 2024-06-30T20:15:31Z | ModelVerification.jl: a Comprehensive Toolbox for Formally Verifying
Deep Neural Networks | Deep Neural Networks (DNN) are crucial in approximating nonlinear functions across diverse applications, ranging from image classification to control. Verifying specific input-output properties can be a highly challenging task due to the lack of a single, self-contained framework that allows a complete range of verification types. To this end, we present ModelVerification.jl (MV), the first comprehensive, cutting-edge toolbox that contains a suite of state-of-the-art methods for verifying different types of DNNs and safety specifications. This versatile toolbox is designed to empower developers and machine learning practitioners with robust tools for verifying and ensuring the trustworthiness of their DNN models. | [
"['Tianhao Wei' 'Luca Marzari' 'Kai S. Yun' 'Hanjiang Hu' 'Peizhi Niu'\n 'Xusheng Luo' 'Changliu Liu']"
]
|
null | null | 2407.01640 | null | null | http://arxiv.org/pdf/2407.01640v1 | 2024-06-30T20:47:15Z | 2024-06-30T20:47:15Z | BADM: Batch ADMM for Deep Learning | Stochastic gradient descent-based algorithms are widely used for training deep neural networks but often suffer from slow convergence. To address the challenge, we leverage the framework of the alternating direction method of multipliers (ADMM) to develop a novel data-driven algorithm, called batch ADMM (BADM). The fundamental idea of the proposed algorithm is to split the training data into batches, which are further divided into sub-batches where primal and dual variables are updated to generate global parameters through aggregation. We evaluate the performance of BADM across various deep learning tasks, including graph modelling, computer vision, image generation, and natural language processing. Extensive numerical experiments demonstrate that BADM achieves faster convergence and superior testing accuracy compared to other state-of-the-art optimizers. | [
"['Ouya Wang' 'Shenglong Zhou' 'Geoffrey Ye Li']"
]
|
null | null | 2407.01641 | null | null | http://arxiv.org/pdf/2407.01641v1 | 2024-06-30T21:48:38Z | 2024-06-30T21:48:38Z | NeurIPS 2024 ML4CFD Competition: Harnessing Machine Learning for
Computational Fluid Dynamics in Airfoil Design | The integration of machine learning (ML) techniques for addressing intricate physics problems is increasingly recognized as a promising avenue for expediting simulations. However, assessing ML-derived physical models poses a significant challenge for their adoption within industrial contexts. This competition is designed to promote the development of innovative ML approaches for tackling physical challenges, leveraging our recently introduced unified evaluation framework known as Learning Industrial Physical Simulations (LIPS). Building upon the preliminary edition held from November 2023 to March 2024, this iteration centers on a task fundamental to a well-established physical application: airfoil design simulation, utilizing our proposed AirfRANS dataset. The competition evaluates solutions based on various criteria encompassing ML accuracy, computational efficiency, Out-Of-Distribution performance, and adherence to physical principles. Notably, this competition represents a pioneering effort in exploring ML-driven surrogate methods aimed at optimizing the trade-off between computational efficiency and accuracy in physical simulations. Hosted on the Codabench platform, the competition offers online training and evaluation for all participating solutions. | [
"['Mouadh Yagoubi' 'David Danan' 'Milad Leyli-abadi' 'Jean-Patrick Brunet'\n 'Jocelyn Ahmed Mazari' 'Florent Bonnet' 'maroua gmati' 'Asma Farjallah'\n 'Paola Cinnella' 'Patrick Gallinari' 'Marc Schoenauer']"
]
|
null | null | 2407.01643 | null | null | http://arxiv.org/pdf/2407.01643v1 | 2024-06-30T23:01:58Z | 2024-06-30T23:01:58Z | A Deep Generative Framework for Joint Households and Individuals
Population Synthesis | Household and individual-level sociodemographic data are essential for understanding human-infrastructure interaction and policymaking. However, the Public Use Microdata Sample (PUMS) offers only a sample at the state level, while census tract data only provides the marginal distributions of variables without correlations. Therefore, we need an accurate synthetic population dataset that maintains consistent variable correlations observed in microdata, preserves household-individual and individual-individual relationships, adheres to state-level statistics, and accurately represents the geographic distribution of the population. We propose a deep generative framework leveraging the variational autoencoder (VAE) to generate a synthetic population with the aforementioned features. The methodological contributions include (1) a new data structure for capturing household-individual and individual-individual relationships, (2) a transfer learning process with pre-training and fine-tuning steps to generate households and individuals whose aggregated distributions align with the census tract marginal distribution, and (3) decoupled binary cross-entropy (D-BCE) loss function enabling distribution shift and out-of-sample records generation. Model results for an application in Delaware, USA demonstrate the ability to ensure the realism of generated household-individual records and accurately describe population statistics at the census tract level compared to existing methods. Furthermore, testing in North Carolina, USA yielded promising results, supporting the transferability of our method. | [
"['Xiao Qian' 'Utkarsh Gangwal' 'Shangjia Dong' 'Rachel Davidson']"
]
|
null | null | 2407.01644 | null | null | http://arxiv.org/pdf/2407.01644v1 | 2024-07-01T00:05:56Z | 2024-07-01T00:05:56Z | Evaluating the Role of Data Enrichment Approaches Towards Rare Event
Analysis in Manufacturing | Rare events are occurrences that take place with a significantly lower frequency than more common regular events. In manufacturing, predicting such events is particularly important, as they lead to unplanned downtime, shortening equipment lifespan, and high energy consumption. The occurrence of events is considered frequently-rare if observed in more than 10% of all instances, very-rare if it is 1-5%, moderately-rare if it is 5-10%, and extremely-rare if less than 1%. The rarity of events is inversely correlated with the maturity of a manufacturing industry. Typically, the rarity of events affects the multivariate data generated within a manufacturing process to be highly imbalanced, which leads to bias in predictive models. This paper evaluates the role of data enrichment techniques combined with supervised machine-learning techniques for rare event detection and prediction. To address the data scarcity, we use time series data augmentation and sampling methods to amplify the dataset with more multivariate features and data points while preserving the underlying time series patterns in the combined alterations. Imputation techniques are used in handling null values in datasets. Considering 15 learning models ranging from statistical learning to machine learning to deep learning methods, the best-performing model for the selected datasets is obtained and the efficacy of data enrichment is evaluated. Based on this evaluation, our results find that the enrichment procedure enhances up to 48% of F1 measure in rare failure event detection and prediction of supervised prediction models. We also conduct empirical and ablation experiments on the datasets to derive dataset-specific novel insights. Finally, we investigate the interpretability aspect of models for rare event prediction, considering multiple methods. | [
"['Chathurangi Shyalika' 'Ruwan Wickramarachchi' 'Fadi El Kalach'\n 'Ramy Harik' 'Amit Sheth']"
]
|
null | null | 2407.01645 | null | null | http://arxiv.org/pdf/2407.01645v1 | 2024-07-01T02:09:20Z | 2024-07-01T02:09:20Z | Sign Gradient Descent-based Neuronal Dynamics: ANN-to-SNN Conversion
Beyond ReLU Network | Spiking neural network (SNN) is studied in multidisciplinary domains to (i) enable order-of-magnitudes energy-efficient AI inference and (ii) computationally simulate neuro-scientific mechanisms. The lack of discrete theory obstructs the practical application of SNN by limiting its performance and nonlinearity support. We present a new optimization-theoretic perspective of the discrete dynamics of spiking neurons. We prove that a discrete dynamical system of simple integrate-and-fire models approximates the sub-gradient method over unconstrained optimization problems. We practically extend our theory to introduce a novel sign gradient descent (signGD)-based neuronal dynamics that can (i) approximate diverse nonlinearities beyond ReLU and (ii) advance ANN-to-SNN conversion performance in low time steps. Experiments on large-scale datasets show that our technique achieves (i) state-of-the-art performance in ANN-to-SNN conversion and (ii) is the first to convert new DNN architectures, e.g., ConvNext, MLP-Mixer, and ResMLP. We publicly share our source code at https://github.com/snuhcs/snn_signgd . | [
"['Hyunseok Oh' 'Youngki Lee']"
]
|
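The sign-gradient update that the abstract above builds on can be sketched in a few lines. This shows only the basic signGD rule on a toy quadratic, not the paper's neuronal dynamics or ANN-to-SNN conversion; the step size and target are illustrative assumptions.

```python
import numpy as np

def sign_gd(grad_fn, w0, lr=0.01, steps=500):
    """Minimize a function using only the sign of each gradient coordinate."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        w -= lr * np.sign(grad_fn(w))  # fixed-magnitude, direction-only step
    return w

# Toy quadratic: f(w) = 0.5 * ||w - target||^2, with gradient w - target.
target = np.array([1.0, -2.0, 0.5])
w = sign_gd(lambda w: w - target, np.zeros(3))
print(np.abs(w - target).max())  # each coordinate ends within ~lr of the target
```

Because every step has the same magnitude `lr`, the iterate cannot settle closer than the step size; in practice signGD is paired with a decaying step size to tighten this band.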
null | null | 2407.01647 | null | null | http://arxiv.org/pdf/2407.01647v1 | 2024-07-01T05:24:19Z | 2024-07-01T05:24:19Z | Optimizing PM2.5 Forecasting Accuracy with Hybrid Meta-Heuristic and
Machine Learning Models | Timely alerts about hazardous air pollutants are crucial for public health. However, existing forecasting models often overlook key factors like baseline parameters and missing data, limiting their accuracy. This study introduces a hybrid approach to address these issues, focusing on forecasting hourly PM2.5 concentrations using Support Vector Regression (SVR). Meta-heuristic algorithms, Grey Wolf Optimization (GWO) and Particle Swarm Optimization (PSO), optimize SVR Hyper-parameters "C" and "Gamma" to enhance prediction accuracy. Evaluation metrics include R-squared (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). Results show significant improvements with PSO-SVR (R2: 0.9401, RMSE: 0.2390, MAE: 0.1368) and GWO-SVR (R2: 0.9408, RMSE: 0.2376, MAE: 0.1373), indicating robust and accurate models suitable for similar research applications. | [
"['Parviz Ghafariasl' 'Masoomeh Zeinalnezhad' 'Amir Ahmadishokooh']"
]
|
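The PSO half of the hybrid above can be sketched generically. The swarm below minimizes a stand-in for SVR validation error over a (C, gamma) box; the objective, bounds, and swarm parameters are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=150, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                      # stay inside the box
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in for SVR validation error as a function of (C, gamma): a smooth
# bowl with its minimum at C=10, gamma=0.1 (assumed purely for illustration).
err = lambda p: (np.log10(p[0]) - 1.0) ** 2 + (np.log10(p[1]) + 1.0) ** 2
best, best_f = pso(err, bounds=[(1e-2, 1e3), (1e-4, 1e1)])
print(best, best_f)  # best should approach (10, 0.1)
```

In the actual hybrid, `objective` would train an SVR with the candidate (C, gamma) and return a cross-validated RMSE; GWO differs from PSO only in how the candidate positions are pulled toward the current leaders.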
null | null | 2407.01648 | null | null | http://arxiv.org/pdf/2407.01648v1 | 2024-07-01T06:10:29Z | 2024-07-01T06:10:29Z | Aligning Target-Aware Molecule Diffusion Models with Exact Energy
Optimization | Generating ligand molecules for specific protein targets, known as structure-based drug design, is a fundamental problem in therapeutics development and biological discovery. Recently, target-aware generative models, especially diffusion models, have shown great promise in modeling protein-ligand interactions and generating candidate drugs. However, existing models primarily focus on learning the chemical distribution of all drug candidates, which lacks effective steerability on the chemical quality of model generations. In this paper, we propose a novel and general alignment framework to align pretrained target diffusion models with preferred functional properties, named AliDiff. AliDiff shifts the target-conditioned chemical distribution towards regions with higher binding affinity and structural rationality, specified by user-defined reward functions, via the preference optimization approach. To avoid the overfitting problem in common preference optimization objectives, we further develop an improved Exact Energy Preference Optimization method to yield an exact and efficient alignment of the diffusion models, and provide the closed-form expression for the converged distribution. Empirical studies on the CrossDocked2020 benchmark show that AliDiff can generate molecules with state-of-the-art binding energies with up to -7.07 Avg. Vina Score, while maintaining strong molecular properties. | [
"['Siyi Gu' 'Minkai Xu' 'Alexander Powers' 'Weili Nie' 'Tomas Geffner'\n 'Karsten Kreis' 'Jure Leskovec' 'Arash Vahdat' 'Stefano Ermon']"
]
|
null | null | 2407.01649 | null | null | http://arxiv.org/pdf/2407.01649v1 | 2024-07-01T06:47:21Z | 2024-07-01T06:47:21Z | FAFE: Immune Complex Modeling with Geodesic Distance Loss on Noisy Group
Frames | Despite the striking success of general protein folding models such as AlphaFold2(AF2, Jumper et al. (2021)), the accurate computational modeling of antibody-antigen complexes remains a challenging task. In this paper, we first analyze AF2's primary loss function, known as the Frame Aligned Point Error (FAPE), and raise a previously overlooked issue that FAPE tends to face gradient vanishing problem on high-rotational-error targets. To address this fundamental limitation, we propose a novel geodesic loss called Frame Aligned Frame Error (FAFE, denoted as F2E to distinguish from FAPE), which enables the model to better optimize both the rotational and translational errors between two frames. We then prove that F2E can be reformulated as a group-aware geodesic loss, which translates the optimization of the residue-to-residue error to optimizing group-to-group geodesic frame distance. By fine-tuning AF2 with our proposed new loss function, we attain a correct rate of 52.3% (DockQ $>$ 0.23) on an evaluation set and 43.8% correct rate on a subset with low homology, with substantial improvement over AF2 by 182% and 100% respectively. | [
"['Ruidong Wu' 'Ruihan Guo' 'Rui Wang' 'Shitong Luo' 'Yue Xu' 'Jiahan Li'\n 'Jianzhu Ma' 'Qiang Liu' 'Yunan Luo' 'Jian Peng']"
]
|
null | null | 2407.01653 | null | null | http://arxiv.org/pdf/2407.01653v1 | 2024-07-01T12:46:09Z | 2024-07-01T12:46:09Z | A Deep Reinforcement Learning Approach to Battery Management in Dairy
Farming via Proximal Policy Optimization | Dairy farms consume a significant amount of electricity for their operations, and this research focuses on enhancing energy efficiency and minimizing the impact on the environment in the sector by maximizing the utilization of renewable energy sources. This research investigates the application of Proximal Policy Optimization (PPO), a deep reinforcement learning algorithm (DRL), to enhance dairy farming battery management. We evaluate the algorithm's effectiveness based on its ability to reduce reliance on the electricity grid, highlighting the potential of DRL to enhance energy management in dairy farming. Using real-world data our results demonstrate how the PPO approach outperforms Q-learning by 1.62% for reducing electricity import from the grid. This significant improvement highlights the potential of the Deep Reinforcement Learning algorithm for improving energy efficiency and sustainability in dairy farms. | [
"['Nawazish Ali' 'Rachael Shaw' 'Karl Mason']"
]
|
null | null | 2407.01656 | null | null | http://arxiv.org/pdf/2407.01656v1 | 2024-07-01T14:13:11Z | 2024-07-01T14:13:11Z | Statistical signatures of abstraction in deep neural networks | We study how abstract representations emerge in a Deep Belief Network (DBN) trained on benchmark datasets. Our analysis targets the principles of learning in the early stages of information processing, starting from the "primordial soup" of the under-sampling regime. As the data is processed by deeper and deeper layers, features are detected and removed, transferring more and more "context-invariant" information to deeper layers. We show that the representation approaches a universal model -- the Hierarchical Feature Model (HFM) -- determined by the principle of maximal relevance. Relevance quantifies the uncertainty on the model of the data, thus suggesting that "meaning" -- i.e. syntactic information -- is that part of the data which is not yet captured by a model. Our analysis shows that shallow layers are well described by pairwise Ising models, which provide a representation of the data in terms of generic, low order features. We also show that plasticity increases with depth, in a similar way as it does in the brain. These findings suggest that DBNs are capable of extracting a hierarchy of features from the data which is consistent with the principle of maximal relevance. | [
"['Carlo Orientale Caputo' 'Matteo Marsili']"
]
|
null | null | 2407.01686 | null | null | http://arxiv.org/pdf/2407.01686v1 | 2024-07-01T18:01:07Z | 2024-07-01T18:01:07Z | Everything that can be learned about a causal structure with latent
variables by observational and interventional probing schemes | What types of differences among causal structures with latent variables are impossible to distinguish by statistical data obtained by probing each visible variable? If the probing scheme is simply passive observation, then it is well-known that many different causal structures can realize the same joint probability distributions. Even for the simplest case of two visible variables, for instance, one cannot distinguish between one variable being a causal parent of the other and the two variables sharing a latent common cause. However, it is possible to distinguish between these two causal structures if we have recourse to more powerful probing schemes, such as the possibility of intervening on one of the variables and observing the other. Herein, we address the question of which causal structures remain indistinguishable even given the most informative types of probing schemes on the visible variables. We find that two causal structures remain indistinguishable if and only if they are both associated with the same mDAG structure (as defined by Evans (2016)). We also consider the question of when one causal structure dominates another in the sense that it can realize all of the joint probability distributions that can be realized by the other using a given probing scheme. (Equivalence of causal structures is the special case of mutual dominance.) Finally, we investigate to what extent one can weaken the probing schemes implemented on the visible variables and still have the same discrimination power as a maximally informative probing scheme. | [
"['Marina Maciel Ansanelli' 'Elie Wolfe' 'Robert W. Spekkens']"
]
|
null | null | 2407.01704 | null | null | http://arxiv.org/pdf/2407.01704v1 | 2024-07-01T18:29:29Z | 2024-07-01T18:29:29Z | Weight Clipping for Deep Continual and Reinforcement Learning | Many failures in deep continual and reinforcement learning are associated with increasing magnitudes of the weights, making them hard to change and potentially causing overfitting. While many methods address these learning failures, they often change the optimizer or the architecture, a complexity that hinders widespread adoption in various systems. In this paper, we focus on learning failures that are associated with increasing weight norm and we propose a simple technique that can be easily added on top of existing learning systems: clipping neural network weights to limit them to a specific range. We study the effectiveness of weight clipping in a series of supervised and reinforcement learning experiments. Our empirical results highlight the benefits of weight clipping for generalization, addressing loss of plasticity and policy collapse, and facilitating learning with a large replay ratio. | [
"['Mohamed Elsayed' 'Qingfeng Lan' 'Clare Lyle' 'A. Rupam Mahmood']"
]
|
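The technique in the weight-clipping abstract above is simple enough to sketch directly: clip the network weights to a fixed range after each update. The learning rate, bound, and the deliberately norm-growing gradients below are illustrative assumptions.

```python
import numpy as np

def clipped_sgd_step(weights, grads, lr=0.1, bound=0.5):
    """One SGD step followed by clipping every weight to [-bound, bound]."""
    return [np.clip(w - lr * g, -bound, bound) for w, g in zip(weights, grads)]

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 4)), rng.normal(size=4)]
for _ in range(100):
    # Adversarial gradients that keep pushing the weights to grow in magnitude.
    grads = [-w for w in weights]
    weights = clipped_sgd_step(weights, grads)

# Without clipping, each magnitude would grow like 1.1^100; clipping caps it.
print(max(np.abs(w).max() for w in weights))
```

The appeal noted in the abstract is visible here: the clip is a one-line post-step operation that needs no change to the optimizer or architecture.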
null | null | 2407.01718 | null | null | http://arxiv.org/pdf/2407.01718v1 | 2024-07-01T18:48:55Z | 2024-07-01T18:48:55Z | Entropic Optimal Transport Eigenmaps for Nonlinear Alignment and Joint
Embedding of High-Dimensional Datasets | Embedding high-dimensional data into a low-dimensional space is an indispensable component of data analysis. In numerous applications, it is necessary to align and jointly embed multiple datasets from different studies or experimental conditions. Such datasets may share underlying structures of interest but exhibit individual distortions, resulting in misaligned embeddings using traditional techniques. In this work, we propose \textit{Entropic Optimal Transport (EOT) eigenmaps}, a principled approach for aligning and jointly embedding a pair of datasets with theoretical guarantees. Our approach leverages the leading singular vectors of the EOT plan matrix between two datasets to extract their shared underlying structure and align the datasets accordingly in a common embedding space. We interpret our approach as an inter-data variant of the classical Laplacian eigenmaps and diffusion maps embeddings, showing that it enjoys many favorable analogous properties. We then analyze a data-generative model where two observed high-dimensional datasets share latent variables on a common low-dimensional manifold, but each dataset is subject to data-specific translation, scaling, nuisance structures, and noise. We show that in a high-dimensional asymptotic regime, the EOT plan recovers the shared manifold structure by approximating a kernel function evaluated at the locations of the latent variables. Subsequently, we provide a geometric interpretation of our embedding by relating it to the eigenfunctions of population-level operators encoding the density and geometry of the shared manifold. Finally, we showcase the performance of our approach for data integration and embedding through simulations and analyses of real-world biological data, demonstrating its advantages over alternative methods in challenging scenarios. | [
"['Boris Landa' 'Yuval Kluger' 'Rong Ma']"
]
|
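The EOT plan at the core of the abstract above can be computed with plain Sinkhorn iterations. This sketch stops at the plan itself (the paper's embedding then uses the plan's leading singular vectors, which is not reproduced here); the point clouds, the regularization `eps`, and the iteration count are illustrative assumptions.

```python
import numpy as np

def sinkhorn_plan(X, Y, eps=0.5, iters=200):
    """Entropic OT plan between uniform distributions on the rows of X and Y."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared-distance cost
    K = np.exp(-C / eps)                                 # Gibbs kernel
    n, m = K.shape
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)          # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                               # alternating scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
Y = X + 0.1 * rng.normal(size=(30, 3))  # a mildly distorted copy of X
P = sinkhorn_plan(X, Y)
print(P.sum(axis=1)[:3])  # row marginals approach the uniform weight 1/30
```

The alternating scaling enforces the prescribed row and column marginals; the resulting doubly-scaled kernel `P` is the plan matrix whose singular vectors the abstract's method would then embed.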
null | null | 2407.01725 | null | null | http://arxiv.org/pdf/2407.01725v1 | 2024-07-01T18:58:22Z | 2024-07-01T18:58:22Z | DiscoveryBench: Towards Data-Driven Discovery with Large Language Models | Can the rapid advances in code generation, function calling, and data analysis using large language models (LLMs) help automate the search and verification of hypotheses purely from a set of provided datasets? To evaluate this question, we present DiscoveryBench, the first comprehensive benchmark that formalizes the multi-step process of data-driven discovery. The benchmark is designed to systematically assess current model capabilities in discovery tasks and provide a useful resource for improving them. Our benchmark contains 264 tasks collected across 6 diverse domains, such as sociology and engineering, by manually deriving discovery workflows from published papers to approximate the real-world challenges faced by researchers, where each task is defined by a dataset, its metadata, and a discovery goal in natural language. We additionally provide 903 synthetic tasks to conduct controlled evaluations across task complexity. Furthermore, our structured formalism of data-driven discovery enables a facet-based evaluation that provides useful insights into different failure modes. We evaluate several popular LLM-based reasoning frameworks using both open and closed LLMs as baselines on DiscoveryBench and find that even the best system scores only 25%. Our benchmark, thus, illustrates the challenges in autonomous data-driven discovery and serves as a valuable resource for the community to make progress. | [
"['Bodhisattwa Prasad Majumder' 'Harshit Surana' 'Dhruv Agarwal'\n 'Bhavana Dalvi Mishra' 'Abhijeetsingh Meena' 'Aryan Prakhar' 'Tirth Vora'\n 'Tushar Khot' 'Ashish Sabharwal' 'Peter Clark']"
]
|
null | null | 2407.01745 | null | null | http://arxiv.org/pdf/2407.01745v1 | 2024-07-01T19:24:36Z | 2024-07-01T19:24:36Z | Adaptive control of reaction-diffusion PDEs via neural
operator-approximated gain kernels | Neural operator approximations of the gain kernels in PDE backstepping has emerged as a viable method for implementing controllers in real time. With such an approach, one approximates the gain kernel, which maps the plant coefficient into the solution of a PDE, with a neural operator. It is in adaptive control that the benefit of the neural operator is realized, as the kernel PDE solution needs to be computed online, for every updated estimate of the plant coefficient. We extend the neural operator methodology from adaptive control of a hyperbolic PDE to adaptive control of a benchmark parabolic PDE (a reaction-diffusion equation with a spatially-varying and unknown reaction coefficient). We prove global stability and asymptotic regulation of the plant state for a Lyapunov design of parameter adaptation. The key technical challenge of the result is handling the 2D nature of the gain kernels and proving that the target system with two distinct sources of perturbation terms, due to the parameter estimation error and due to the neural approximation error, is Lyapunov stable. To verify our theoretical result, we present simulations achieving calculation speedups up to 45x relative to the traditional finite difference solvers for every timestep in the simulation trajectory. | [
"['Luke Bhan' 'Yuanyuan Shi' 'Miroslav Krstic']"
]
|