Schema (field: type):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list

Note: categories, doi, year, and venue are null for every record below.

arXiv:2405.14741
http://arxiv.org/pdf/2405.14741v2
2024-05-29T05:27:04Z
2024-05-23T16:05:10Z
Bagging Improves Generalization Exponentially
Bagging is a popular ensemble technique to improve the accuracy of machine learning models. It hinges on the well-established rationale that, by repeatedly retraining on resampled data, the aggregated model exhibits lower variance and hence higher stability, especially for discontinuous base learners. In this paper, we provide a new perspective on bagging: by suitably aggregating the base learners at the parametrization level instead of the output level, bagging improves generalization performance exponentially, a strength that is significantly more powerful than variance reduction. More precisely, we show that for general stochastic optimization problems that suffer from slowly (i.e., polynomially) decaying generalization errors, bagging can effectively reduce these errors to an exponential decay. Moreover, this power of bagging is agnostic to the solution scheme, including common empirical risk minimization, distributionally robust optimization, and various regularizations. We demonstrate how bagging can substantially improve generalization performance in a range of examples involving heavy-tailed data that suffer from intrinsically slow rates.
Authors: Huajie Qian, Donghao Ying, Henry Lam, Wotao Yin
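The aggregation-at-the-parametrization-level idea can be sketched in a few lines. This is a minimal illustration under our own assumptions: `solve` is a hypothetical stand-in for any base stochastic-optimization solver returning a discrete solution, and majority voting over retrained solutions stands in for the paper's aggregation scheme.

```python
import random
from collections import Counter

def solve(sample):
    # Hypothetical base learner: returns a discrete "solution"
    # (here, the rounded sample mean). In practice this could be
    # any solver: ERM, DRO, or a regularized variant.
    return round(sum(sample) / len(sample))

def bagged_solution(data, n_bags=50, seed=0):
    """Bagging at the parametrization level: retrain on bootstrap
    resamples and return the most frequently obtained solution
    (a vote over parameters, not an average of model outputs)."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_bags):
        resample = [rng.choice(data) for _ in data]
        votes[solve(resample)] += 1
    return votes.most_common(1)[0][0]

# Heavy-tailed-looking sample: one extreme value among typical ones.
print(bagged_solution([1.1, 0.9, 1.0, 1.2, 8.0, 0.8, 1.0]))
```

Voting over discrete retrained solutions lets rare bad resamples be outvoted, which is the intuition behind the exponential decay of generalization error claimed above.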

arXiv:2405.14742
http://arxiv.org/pdf/2405.14742v1
2024-05-23T16:08:04Z
2024-05-23T16:08:04Z
HC-GAE: The Hierarchical Cluster-based Graph Auto-Encoder for Graph Representation Learning
Graph Auto-Encoders (GAEs) are powerful tools for graph representation learning. In this paper, we develop a novel Hierarchical Cluster-based GAE (HC-GAE) that can learn effective structural characteristics for graph data analysis. During the encoding process, we commence by utilizing hard node assignment to decompose a sample graph into a family of separated subgraphs. We compress each subgraph into a coarsened node, transforming the original graph into a coarsened graph. During the decoding process, we adopt soft node assignment to reconstruct the original graph structure by expanding the coarsened nodes. By hierarchically performing the above compressing procedure during the encoding process as well as the expanding procedure during the decoding process, the proposed HC-GAE can effectively extract bidirectionally hierarchical structural features of the original sample graph. Furthermore, we re-design the loss function so that it can integrate the information from both the encoder and the decoder. Since the associated graph convolution operation of the proposed HC-GAE is restricted to each individual separated subgraph and cannot propagate node information between different subgraphs, the proposed HC-GAE can significantly reduce the over-smoothing problem arising in classical convolution-based GAEs. The proposed HC-GAE can generate effective representations for either node classification or graph classification, and experiments demonstrate its effectiveness on real-world datasets.
Authors: Zhuo Xu, Lu Bai, Lixin Cui, Ming Li, Yue Wang, Edwin R. Hancock

arXiv:2405.14743
http://arxiv.org/pdf/2405.14743v1
2024-05-23T16:12:33Z
2024-05-23T16:12:33Z
Iterative Causal Segmentation: Filling the Gap between Market Segmentation and Marketing Strategy
The field of causal Machine Learning (ML) has made significant strides in recent years. Notable breakthroughs include methods such as meta learners (arXiv:1706.03461v6) and heterogeneous doubly robust estimators (arXiv:2004.14497) introduced in the last five years. Despite these advancements, the field still faces challenges, particularly in managing tightly coupled systems where both the causal treatment variable and a confounding covariate must serve as key decision-making indicators. This scenario is common in applications of causal ML for marketing, such as marketing segmentation and incremental marketing uplift. In this work, we present our formally proven algorithm, iterative causal segmentation, to address this issue.
Authors: Kaihua Ding, Jingsong Cui, Mohammad Soltani, Jing Jin

arXiv:2405.14745
http://arxiv.org/pdf/2405.14745v1
2024-05-23T16:14:16Z
2024-05-23T16:14:16Z
AnyLoss: Transforming Classification Metrics into Loss Functions
Many evaluation metrics can be used to assess the performance of models in binary classification tasks. However, most of them are derived from a confusion matrix in a non-differentiable form, making it very difficult to generate a differentiable loss function that could directly optimize them. The lack of solutions to bridge this gap not only hinders our ability to solve difficult tasks, such as imbalanced learning, but also requires the deployment of computationally expensive hyperparameter search processes in model selection. In this paper, we propose a general-purpose approach, AnyLoss, that transforms any confusion matrix-based metric into a loss function available to optimization processes. To this end, we use an approximation function to represent the confusion matrix in a differentiable form, enabling any confusion matrix-based metric to be used directly as a loss function. The mechanism of the approximation function is provided to ensure its operability, and the differentiability of our loss functions is proved by deriving their derivatives. We conduct extensive experiments with diverse neural networks on many datasets and demonstrate the general applicability of our approach to any confusion matrix-based metric. Our method shows especially strong results in dealing with imbalanced datasets, and its competitive learning speed, compared to multiple baseline models, underscores its efficiency.
Authors: Doheon Han, Nuno Moniz, Nitesh V Chawla
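The core trick described above, making confusion-matrix entries differentiable via an approximation function, can be sketched as follows. The steep "amplifying" sigmoid and the 1 - F1 loss here are our own illustrative choices, not the paper's exact construction.

```python
import math

def soft_confusion(y_true, p_pred, amp=73.0):
    """Differentiable stand-ins for TP/FN/FP/TN: a steep sigmoid
    pushes each predicted probability toward 0 or 1, so the soft
    counts approach the hard counts while staying smooth in p_pred."""
    a = [1.0 / (1.0 + math.exp(-amp * (p - 0.5))) for p in p_pred]
    tp = sum(t * ai for t, ai in zip(y_true, a))
    fn = sum(t * (1 - ai) for t, ai in zip(y_true, a))
    fp = sum((1 - t) * ai for t, ai in zip(y_true, a))
    tn = sum((1 - t) * (1 - ai) for t, ai in zip(y_true, a))
    return tp, fn, fp, tn

def anyloss_f1(y_true, p_pred):
    """Turn the F1 metric into a loss: minimize 1 - soft-F1.
    Any other confusion-matrix metric could be plugged in here."""
    tp, fn, fp, tn = soft_confusion(y_true, p_pred)
    return 1.0 - 2 * tp / (2 * tp + fp + fn + 1e-9)
```

Because every step is smooth in `p_pred`, this loss can be backpropagated through a network, which is the property the abstract's hard confusion matrix lacks.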

arXiv:2405.14748
http://arxiv.org/pdf/2405.14748v1
2024-05-23T16:16:00Z
2024-05-23T16:16:00Z
MultiCast: Zero-Shot Multivariate Time Series Forecasting Using LLMs
Predicting future values in multivariate time series is vital across various domains. This work explores the use of large language models (LLMs) for this task. However, LLMs typically handle one-dimensional data. We introduce MultiCast, a zero-shot LLM-based approach for multivariate time series forecasting. It allows LLMs to receive multivariate time series as input, through three novel token multiplexing solutions that effectively reduce dimensionality while preserving key repetitive patterns. Additionally, a quantization scheme helps LLMs to better learn these patterns, while significantly reducing token use for practical applications. We showcase the performance of our approach in terms of RMSE and execution time against state-of-the-art approaches on three real-world datasets.
Authors: Georgios Chatzigeorgakidis, Konstantinos Lentzos, Dimitrios Skoutas
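The quantization step mentioned above, mapping raw values to a small set of integer tokens so the LLM sees short, repetitive patterns, might look like this uniform-binning sketch; the actual MultiCast scheme is not specified here, so treat the binning choice as an assumption.

```python
def quantize(series, n_bins=10):
    """Map raw values to integer tokens via uniform binning; fewer
    distinct tokens means fewer LLM tokens per time step."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0  # guard against constant series
    return [min(int((v - lo) / width), n_bins - 1) for v in series]

def dequantize(tokens, lo, hi, n_bins=10):
    """Invert quantize() up to half a bin width (bin midpoints)."""
    width = (hi - lo) / n_bins
    return [lo + (t + 0.5) * width for t in tokens]
```

The reconstruction error is bounded by half a bin width, which is the usual trade-off between token budget and forecasting resolution.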

arXiv:2405.14749
http://arxiv.org/pdf/2405.14749v1
2024-05-23T16:16:58Z
2024-05-23T16:16:58Z
Policy Gradient Methods for Risk-Sensitive Distributional Reinforcement Learning with Provable Convergence
Risk-sensitive reinforcement learning (RL) is crucial for maintaining reliable performance in many high-stakes applications. While most RL methods aim to learn a point estimate of the random cumulative cost, distributional RL (DRL) seeks to estimate the entire distribution of it. The distribution provides all necessary information about the cost and leads to a unified framework for handling various risk measures in a risk-sensitive setting. However, developing policy gradient methods for risk-sensitive DRL is inherently more complex as it pertains to finding the gradient of a probability measure. This paper introduces a policy gradient method for risk-sensitive DRL with general coherent risk measures, where we provide an analytical form of the probability measure's gradient. We further prove the local convergence of the proposed algorithm under mild smoothness assumptions. For practical use, we also design a categorical distributional policy gradient algorithm (CDPG) based on categorical distributional policy evaluation and trajectory-based gradient estimation. Through experiments on a stochastic cliff-walking environment, we illustrate the benefits of considering a risk-sensitive setting in DRL.
Authors: Minheng Xiao, Xian Yu, Lei Ying

arXiv:2405.14750
http://arxiv.org/pdf/2405.14750v2
2024-06-19T22:11:28Z
2024-05-23T16:17:16Z
Extreme Solar Flare Prediction Using Residual Networks with HMI Magnetograms and Intensitygrams
Solar flares, especially C, M, and X class, pose significant risks to satellite operations, communication systems, and power grids. We present a novel approach for predicting extreme solar flares using HMI intensitygrams and magnetograms. By detecting sunspots from intensitygrams and extracting magnetic field patches from magnetograms, we train a Residual Network (ResNet) to classify extreme class flares. Our model demonstrates high accuracy, offering a robust tool for predicting extreme solar flares and improving space weather forecasting. Additionally, we show that HMI magnetograms provide more useful data for deep learning compared to other SDO AIA images by better capturing features critical for predicting flare magnitudes. This study underscores the importance of identifying magnetic fields in solar flare prediction, marking a significant advancement in solar activity prediction with practical implications for mitigating space weather impacts.
Authors: Juyoung Yun, Jungmin Shin

arXiv:2405.14751
http://arxiv.org/pdf/2405.14751v1
2024-05-23T16:17:44Z
2024-05-23T16:17:44Z
AGILE: A Novel Framework of LLM Agents
We introduce a novel framework of LLM agents named AGILE (AGent that Interacts and Learns from Environments) designed to perform complex conversational tasks with users, leveraging LLMs, memory, tools, and interactions with experts. The agent's abilities include not only conversation but also reflection, utilization of tools, and consultation with experts. We formulate the construction of such an LLM agent as a reinforcement learning problem, in which the LLM serves as the policy model. We fine-tune the LLM using labeled data of actions and the PPO algorithm. We focus on question answering and release a dataset for agents called ProductQA, comprising challenging questions in online shopping. Our extensive experiments on ProductQA and MedMCQA show that AGILE agents based on 13B and 7B LLMs trained with PPO can outperform GPT-4 agents. Our ablation study highlights the indispensability of memory, tools, consultation, reflection, and reinforcement learning in achieving the agent's strong performance.
Authors: Peiyuan Feng, Yichen He, Guanhua Huang, Yuan Lin, Hanchong Zhang, Yuchen Zhang, Hang Li

arXiv:2405.14753
http://arxiv.org/abs/2405.14753v1
2024-05-23T16:19:32Z
2024-05-23T16:19:32Z
A Transformer-Based Approach for Smart Invocation of Automatic Code Completion
Transformer-based language models are highly effective for code completion, with much research dedicated to enhancing the content of these completions. Despite their effectiveness, these models come with high operational costs and can be intrusive, especially when they suggest too often and interrupt developers who are concentrating on their work. Current research largely overlooks how these models interact with developers in practice and neglects to address when a developer should receive completion suggestions. To tackle this issue, we developed a machine learning model that can accurately predict when to invoke a code completion tool given the code context and available telemetry data. To do so, we collect a dataset of 200k developer interactions with our cross-IDE code completion plugin and train several invocation filtering models. Our results indicate that our small-scale transformer model significantly outperforms the baseline while maintaining low enough latency. We further explore the search space for integrating additional telemetry data into a pre-trained transformer directly and obtain promising results. To further demonstrate our approach's practical potential, we deployed the model in an online environment with 34 developers and provided real-world insights based on 74k actual invocations.
Authors: Aral de Moor, Arie van Deursen, Maliheh Izadi

arXiv:2405.14754
http://arxiv.org/pdf/2405.14754v1
2024-05-23T16:21:51Z
2024-05-23T16:21:51Z
Applied Machine Learning to Anomaly Detection in Enterprise Purchase Processes
In a context of continuous digitalisation of processes, organisations must meet the challenge of detecting, in an increasing volume of data, anomalies that can reveal suspicious activities. To pursue this goal, audit engagements are carried out regularly, and internal auditors and purchase specialists are constantly looking for new methods to automate these processes. This work proposes a methodology to prioritise the investigation of cases detected in two large real-world purchase datasets. The goal is to contribute to the effectiveness of companies' control efforts and to increase the performance of carrying out such tasks. A comprehensive Exploratory Data Analysis is carried out before applying unsupervised Machine Learning techniques to detect anomalies. A univariate approach is applied through the z-score index and the DBSCAN algorithm, while a multivariate analysis is implemented with the k-Means and Isolation Forest algorithms and the Silhouette index, with each method producing a proposal of candidate transactions to be reviewed. An ensemble prioritisation of the candidates is provided, together with a proposal of explainability methods (LIME, Shapley values, SHAP) to help the company's specialists in their understanding.
Authors: A. Herreros-Martínez, R. Magdalena-Benedicto, J. Vila-Francés, A. J. Serrano-López, S. Pérez-Díaz
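As a concrete example of the univariate screening step, a z-score pass over transaction amounts can be sketched as below; the threshold and the choice of field to screen are assumptions of this illustration, not details from the paper.

```python
import statistics

def zscore_anomalies(amounts, threshold=3.0):
    """Flag indices of transactions whose amount lies more than
    `threshold` standard deviations from the mean; one of the
    univariate methods combined in the ensemble described above."""
    mu = statistics.fmean(amounts)
    sigma = statistics.pstdev(amounts)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(amounts)
            if abs(v - mu) / sigma > threshold]
```

Each detector (z-score, DBSCAN, k-Means, Isolation Forest) would emit such a candidate list, and the ensemble prioritisation ranks transactions flagged by several of them.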

arXiv:2405.14755
http://arxiv.org/pdf/2405.14755v1
2024-05-23T16:21:57Z
2024-05-23T16:21:57Z
Large language models can be zero-shot anomaly detectors for time series?
Recent studies have shown the ability of large language models to perform a variety of tasks, including time series forecasting. The flexible nature of these models allows them to be used for many applications. In this paper, we present a novel study of large language models used for the challenging task of time series anomaly detection. This problem entails two aspects novel for LLMs: the need for the model to identify part of the input sequence (or multiple parts) as anomalous; and the need for it to work with time series data rather than the traditional text input. We introduce sigllm, a framework for time series anomaly detection using large language models. Our framework includes a time-series-to-text conversion module, as well as end-to-end pipelines that prompt language models to perform time series anomaly detection. We investigate two paradigms for testing the abilities of large language models to perform the detection task. First, we present a prompt-based detection method that directly asks a language model to indicate which elements of the input are anomalies. Second, we leverage the forecasting capability of a large language model to guide the anomaly detection process. We evaluated our framework on 11 datasets spanning various sources and 10 pipelines. We show that the forecasting method significantly outperformed the prompting method in all 11 datasets with respect to the F1 score. Moreover, while large language models are capable of finding anomalies, state-of-the-art deep learning models are still superior in performance, achieving results 30% better than large language models.
Authors: Sarah Alnegheimish, Linh Nguyen, Laure Berti-Equille, Kalyan Veeramachaneni
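A time-series-to-text conversion module of the kind described can be as simple as shifting values to non-negative integers and joining them into a string the model can read; the exact rounding and scaling used by sigllm are assumptions in this sketch.

```python
def series_to_text(values, decimals=0):
    """Encode a numeric series as comma-separated integer tokens:
    shift so the minimum maps to 0, scale by 10**decimals, round."""
    shift = min(values)
    scaled = [round((v - shift) * 10 ** decimals) for v in values]
    return ",".join(str(s) for s in scaled), shift

def text_to_series(text, shift, decimals=0):
    """Invert series_to_text (up to rounding error)."""
    return [int(tok) / 10 ** decimals + shift for tok in text.split(",")]
```

With such an encoding, both paradigms in the abstract become prompt engineering: either ask the model which tokens look anomalous, or ask it to continue the token string and compare the forecast against the observed values.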

arXiv:2405.14758
http://arxiv.org/pdf/2405.14758v1
2024-05-23T16:29:29Z
2024-05-23T16:29:29Z
Axioms for AI Alignment from Human Feedback
In the context of reinforcement learning from human feedback (RLHF), the reward function is generally derived from maximum likelihood estimation of a random utility model based on pairwise comparisons made by humans. The problem of learning a reward function is one of preference aggregation that, we argue, largely falls within the scope of social choice theory. From this perspective, we can evaluate different aggregation methods via established axioms, examining whether these methods meet or fail well-known standards. We demonstrate that both the Bradley-Terry-Luce Model and its broad generalizations fail to meet basic axioms. In response, we develop novel rules for learning reward functions with strong axiomatic guarantees. A key innovation from the standpoint of social choice is that our problem has a linear structure, which greatly restricts the space of feasible rules and leads to a new paradigm that we call linear social choice.
Authors: Luise Ge, Daniel Halpern, Evi Micha, Ariel D. Procaccia, Itai Shapira, Yevgeniy Vorobeychik, Junlin Wu

arXiv:2405.14759
http://arxiv.org/pdf/2405.14759v2
2024-06-05T16:32:31Z
2024-05-23T16:29:30Z
Fault Tolerant ML: Efficient Meta-Aggregation and Synchronous Training
In this paper, we investigate the challenging framework of Byzantine-robust training in distributed machine learning (ML) systems, focusing on enhancing both efficiency and practicality. As distributed ML systems become integral for complex ML tasks, ensuring resilience against Byzantine failures, where workers may contribute incorrect updates due to malice or error, gains paramount importance. Our first contribution is the introduction of the Centered Trimmed Meta Aggregator (CTMA), an efficient meta-aggregator that upgrades baseline aggregators to optimal performance levels while imposing low computational demands. Additionally, we propose harnessing a recently developed gradient estimation technique based on a double-momentum strategy within the Byzantine context. Our paper highlights its theoretical and practical advantages for Byzantine-robust training, especially in simplifying the tuning process and reducing the reliance on numerous hyperparameters. The effectiveness of this technique is supported by theoretical insights within the stochastic convex optimization (SCO) framework and corroborated by empirical evidence.
Authors: Tehila Dahan, Kfir Y. Levy

arXiv:2405.14762
http://arxiv.org/pdf/2405.14762v2
2024-06-06T18:54:29Z
2024-05-23T16:30:51Z
Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations
Neural wave functions have achieved unprecedented accuracy in approximating the ground state of many-electron systems, though at high computational cost. Recent works proposed amortizing this cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently. Enforcing the permutation antisymmetry of electrons in such generalized neural wave functions remains challenging, as existing methods require discrete orbital selection via non-learnable hand-crafted algorithms. This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules. We achieve this by relying on Pfaffians rather than Slater determinants. The Pfaffian allows us to enforce antisymmetry on arbitrary electronic systems without any constraint on electronic spin configurations or molecular structure. Our empirical evaluation finds that a single neural Pfaffian calculates ground-state and ionization energies with chemical accuracy across various systems. On the TinyMol dataset, we outperform the 'gold-standard' CCSD(T) CBS reference energies by 1.9 m$E_h$ and reduce energy errors compared to previous generalized neural wave functions by up to an order of magnitude.
Authors: Nicholas Gao, Stephan Günnemann

arXiv:2405.14766
http://arxiv.org/pdf/2405.14766v1
2024-05-23T16:33:18Z
2024-05-23T16:33:18Z
Evaluating Large Language Models for Public Health Classification and Extraction Tasks
Advances in Large Language Models (LLMs) have led to significant interest in their potential to support human experts across a range of domains, including public health. In this work we present automated evaluations of LLMs for public health tasks involving the classification and extraction of free text. We combine six externally annotated datasets with seven new internally annotated datasets to evaluate LLMs for processing text related to: health burden, epidemiological risk factors, and public health interventions. We initially evaluate five open-weight LLMs (7-70 billion parameters) across all tasks using zero-shot in-context learning. We find that Llama-3-70B-Instruct is the highest performing model, achieving the best results on 15/17 tasks (using micro-F1 scores). We see significant variation across tasks with all open-weight LLMs scoring below 60% micro-F1 on some challenging tasks, such as Contact Classification, while all LLMs achieve greater than 80% micro-F1 on others, such as GI Illness Classification. For a subset of 12 tasks, we also evaluate GPT-4 and find comparable results to Llama-3-70B-Instruct, which scores equally or outperforms GPT-4 on 6 of the 12 tasks. Overall, based on these initial results we find promising signs that LLMs may be useful tools for public health experts to extract information from a wide variety of free text sources, and support public health surveillance, research, and interventions.
Authors: Joshua Harris, Timothy Laurence, Leo Loman, Fan Grayson, Toby Nonnenmacher, Harry Long, Loes WalsGriffith, Amy Douglas, Holly Fountain, Stelios Georgiou, Jo Hardstaff, Kathryn Hopkins, Y-Ling Chi, Galena Kuyumdzhieva, Lesley Larkin, Samuel Collins, Hamish Mohammed, Thomas Finnie, Luke Hounsome, Steven Riley

arXiv:2405.14767
http://arxiv.org/pdf/2405.14767v2
2024-05-27T12:43:42Z
2024-05-23T16:35:20Z
FinRobot: An Open-Source AI Agent Platform for Financial Applications using Large Language Models
As financial institutions and professionals increasingly incorporate Large Language Models (LLMs) into their workflows, substantial barriers, including proprietary data and specialized knowledge, persist between the finance sector and the AI community. These challenges impede the AI community's ability to enhance financial tasks effectively. Acknowledging financial analysis's critical role, we aim to devise financial-specialized LLM-based toolchains and democratize access to them through open-source initiatives, promoting wider AI adoption in financial decision-making. In this paper, we introduce FinRobot, a novel open-source AI agent platform supporting multiple financially specialized AI agents, each powered by an LLM. Specifically, the platform consists of four major layers: 1) the Financial AI Agents layer, which formulates Financial Chain-of-Thought (CoT) prompting by breaking sophisticated financial problems down into logical sequences; 2) the Financial LLM Algorithms layer, which dynamically configures appropriate model application strategies for specific tasks; 3) the LLMOps and DataOps layer, which produces accurate models by applying training/fine-tuning techniques and using task-relevant data; 4) the Multi-source LLM Foundation Models layer, which integrates various LLMs and enables the above layers to access them directly. Finally, FinRobot provides hands-on access for both professional-grade analysts and laypersons to utilize powerful AI techniques for advanced financial analysis. We open-source FinRobot at https://github.com/AI4Finance-Foundation/FinRobot.
Authors: Hongyang Yang, Boyu Zhang, Neng Wang, Cheng Guo, Xiaoli Zhang, Likun Lin, Junlin Wang, Tianyu Zhou, Mao Guan, Runjia Zhang, Christina Dan Wang

arXiv:2405.14768
http://arxiv.org/pdf/2405.14768v1
2024-05-23T16:35:52Z
2024-05-23T16:35:52Z
WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
Large language models (LLMs) need knowledge updates to meet the ever-growing world facts and correct the hallucinated responses, facilitating the methods of lifelong model editing. Where the updated knowledge resides in memories is a fundamental question for model editing. In this paper, we find that editing either long-term memory (direct model parameters) or working memory (non-parametric knowledge of neural network activations/representations by retrieval) will result in an impossible triangle -- reliability, generalization, and locality can not be realized together in the lifelong editing settings. For long-term memory, directly editing the parameters will cause conflicts with irrelevant pretrained knowledge or previous edits (poor reliability and locality). For working memory, retrieval-based activations can hardly make the model understand the edits and generalize (poor generalization). Therefore, we propose WISE to bridge the gap between memories. In WISE, we design a dual parametric memory scheme, which consists of the main memory for the pretrained knowledge and a side memory for the edited knowledge. We only edit the knowledge in the side memory and train a router to decide which memory to go through when given a query. For continual editing, we devise a knowledge-sharding mechanism where different sets of edits reside in distinct subspaces of parameters, and are subsequently merged into a shared memory without conflicts. Extensive experiments show that WISE can outperform previous model editing methods and overcome the impossible triangle under lifelong model editing of question answering, hallucination, and out-of-distribution settings across trending LLM architectures, e.g., GPT, LLaMA, and Mistral. Code will be released at https://github.com/zjunlp/EasyEdit.
Authors: Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen

arXiv:2405.14769
http://arxiv.org/pdf/2405.14769v1
2024-05-23T16:36:16Z
2024-05-23T16:36:16Z
Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input
Humans use social context to specify preferences over behaviors, i.e. their reward functions. Yet, algorithms for inferring reward models from preference data do not take this social learning view into account. Inspired by pragmatic human communication, we study how to extract fine-grained data regarding why an example is preferred that is useful for learning more accurate reward models. We propose to enrich binary preference queries to ask both (1) which features of a given example are preferable in addition to (2) comparisons between examples themselves. We derive an approach for learning from these feature-level preferences, both for cases where users specify which features are reward-relevant, and when users do not. We evaluate our approach on linear bandit settings in both vision- and language-based domains. Results support the efficiency of our approach in quickly converging to accurate rewards with fewer comparisons vs. example-only labels. Finally, we validate the real-world applicability with a behavioral experiment on a mushroom foraging task. Our findings suggest that incorporating pragmatic feature preferences is a promising approach for more efficient user-aligned reward learning.
Authors: Andi Peng, Yuying Sun, Tianmin Shu, David Abel

arXiv:2405.14776
http://arxiv.org/pdf/2405.14776v1
2024-05-23T16:44:29Z
2024-05-23T16:44:29Z
Kinetics of orbital ordering in cooperative Jahn-Teller models: Machine-learning enabled large-scale simulations
We present a scalable machine learning (ML) force-field model for the adiabatic dynamics of cooperative Jahn-Teller (JT) systems. Large scale dynamical simulations of the JT model also shed light on the orbital ordering dynamics in colossal magnetoresistance manganites. The JT effect in these materials describes the distortion of local oxygen octahedra driven by a coupling to the orbital degrees of freedom of $e_g$ electrons. An effective electron-mediated interaction between the local JT modes leads to a structural transition and the emergence of long-range orbital order at low temperatures. Assuming the principle of locality, a deep-learning neural-network model is developed to accurately and efficiently predict the electron-induced forces that drive the dynamical evolution of JT phonons. A group-theoretical method is utilized to develop a descriptor that incorporates the combined orbital and lattice symmetry into the ML model. Large-scale Langevin dynamics simulations, enabled by the ML force-field models, are performed to investigate the coarsening dynamics of the composite JT distortion and orbital order after a thermal quench. The late-stage coarsening of orbital domains exhibits pronounced freezing behaviors which are likely related to the unusual morphology of the domain structures. Our work highlights a promising avenue for multi-scale dynamical modeling of correlated electron systems.
Authors: Supriyo Ghosh, Sheng Zhang, Chen Cheng, Gia-Wei Chern

arXiv:2405.14778
http://arxiv.org/pdf/2405.14778v1
2024-05-23T16:45:52Z
2024-05-23T16:45:52Z
Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms
We study theoretical properties of a broad class of regularized algorithms with vector-valued output. These spectral algorithms include kernel ridge regression, kernel principal component regression, various implementations of gradient descent, and many more. Our contributions are twofold. First, we rigorously confirm the so-called saturation effect for ridge regression with vector-valued output by deriving a novel lower bound on learning rates; this bound is shown to be suboptimal when the smoothness of the regression function exceeds a certain level. Second, we present an upper bound on the finite-sample risk of general vector-valued spectral algorithms, applicable to both well-specified and misspecified scenarios (where the true regression function lies outside of the hypothesis space), which is minimax optimal in various regimes. All of our results explicitly allow the case of infinite-dimensional output variables, proving consistency of recent practical applications.
Authors: Dimitri Meunier, Zikai Shen, Mattes Mollenhauer, Arthur Gretton, Zhu Li

arXiv:2405.14779
http://arxiv.org/pdf/2405.14779v1
2024-05-23T16:45:59Z
2024-05-23T16:45:59Z
Smart Bilingual Focused Crawling of Parallel Documents
Crawling parallel texts (texts that are mutual translations) from the Internet is usually done following a brute-force approach: documents are massively downloaded in an unguided process, and only a fraction of them end up leading to actual parallel content. In this work we propose a smart crawling method that guides the crawl towards finding parallel content more rapidly. Our approach builds on two different models: one that infers the language of a document from its URL, and another that infers whether a pair of URLs link to parallel documents. We evaluate both models in isolation and their integration into a crawling tool. The results demonstrate the individual effectiveness of both models and highlight that their combination enables the early discovery of parallel content during crawling, leading to a reduction in the amount of downloaded documents deemed useless, and yielding a greater quantity of parallel documents compared to conventional crawling approaches.
Authors: Cristian García-Romero, Miquel Esplà-Gomis, Felipe Sánchez-Martínez
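The two models described, URL-based language identification and URL-pair parallelism, are learned classifiers in the paper; the hand-written heuristics below only illustrate the signals such models can exploit (the language-code list and the URL tokenization are assumptions of this sketch).

```python
import re

LANG_CODES = {"en", "es", "fr", "de", "ca", "it", "pt"}
SEP = r"([/._\-?=]+)"  # URL separators, kept when splitting

def url_language_guess(url):
    """Guess a document's language from its URL alone via common
    path/subdomain language codes; None if no code is found."""
    for token in re.split(SEP, url.lower()):
        if token in LANG_CODES:
            return token
    return None

def likely_parallel(url_a, url_b):
    """Parallel-document URLs often differ only in a language code
    (e.g. /en/ vs /es/); flag exactly that pattern."""
    ta = re.split(SEP, url_a.lower())
    tb = re.split(SEP, url_b.lower())
    if len(ta) != len(tb):
        return False
    diffs = [(a, b) for a, b in zip(ta, tb) if a != b]
    return (len(diffs) == 1
            and diffs[0][0] in LANG_CODES
            and diffs[0][1] in LANG_CODES)
```

A crawler can use such predictions to schedule URL pairs that are likely parallel ahead of the rest of the frontier, which is the "smart" part of the method above.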

arXiv:2405.14780
http://arxiv.org/pdf/2405.14780v1
2024-05-23T16:48:06Z
2024-05-23T16:48:06Z
Metric Flow Matching for Smooth Interpolations on the Data Manifold
Matching objectives underpin the success of modern generative models and rely on constructing conditional paths that transform a source distribution into a target distribution. Despite being a fundamental building block, conditional paths have been designed principally under the assumption of Euclidean geometry, resulting in straight interpolations. However, this can be particularly restrictive for tasks such as trajectory inference, where straight paths might lie outside the data manifold, thus failing to capture the underlying dynamics giving rise to the observed marginals. In this paper, we propose Metric Flow Matching (MFM), a novel simulation-free framework for conditional flow matching where interpolants are approximate geodesics learned by minimizing the kinetic energy of a data-induced Riemannian metric. This way, the generative model matches vector fields on the data manifold, which corresponds to lower uncertainty and more meaningful interpolations. We prescribe general metrics to instantiate MFM, independent of the task, and test it on a suite of challenging problems including LiDAR navigation, unpaired image translation, and modeling cellular dynamics. We observe that MFM outperforms the Euclidean baselines, particularly achieving SOTA on single-cell trajectory prediction.
[ "['Kacper Kapusniak' 'Peter Potaptchik' 'Teodora Reu' 'Leo Zhang'\n 'Alexander Tong' 'Michael Bronstein' 'Avishek Joey Bose'\n 'Francesco Di Giovanni']" ]
null
null
2405.14785
null
null
http://arxiv.org/pdf/2405.14785v1
2024-05-23T16:54:17Z
2024-05-23T16:54:17Z
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
Diffusion models have significantly improved the performance of image editing. Existing methods realize various approaches to achieve high-quality image editing, including but not limited to text control, dragging operation, and mask-and-inpainting. Among these, instruction-based editing stands out for its convenience and effectiveness in following human instructions across diverse scenarios. However, it still focuses on simple editing operations like adding, replacing, or deleting, and falls short of capturing the world dynamics that convey the realistic dynamic nature of the physical world. Therefore, this work, EditWorld, introduces a new editing task, namely world-instructed image editing, which defines and categorizes the instructions grounded in various world scenarios. We curate a new image editing dataset with world instructions using a set of large pretrained models (e.g., GPT-3.5, Video-LLava and SDXL). To enable sufficient simulation of world dynamics for image editing, EditWorld trains a model on the curated dataset and improves its instruction-following ability with a designed post-edit strategy. Extensive experiments demonstrate that our method significantly outperforms existing editing methods on this new task. Our dataset and code will be available at https://github.com/YangLing0818/EditWorld
[ "['Ling Yang' 'Bohan Zeng' 'Jiaming Liu' 'Hong Li' 'Minghao Xu'\n 'Wentao Zhang' 'Shuicheng Yan']" ]
null
null
2405.14790
null
null
http://arxiv.org/pdf/2405.14790v1
2024-05-23T17:00:15Z
2024-05-23T17:00:15Z
DIDI: Diffusion-Guided Diversity for Offline Behavioral Generation
In this paper, we propose a novel approach called DIffusion-guided DIversity (DIDI) for offline behavioral generation. The goal of DIDI is to learn a diverse set of skills from a mixture of label-free offline data. We achieve this by leveraging diffusion probabilistic models as priors to guide the learning process and regularize the policy. By optimizing a joint objective that incorporates diversity and diffusion-guided regularization, we encourage the emergence of diverse behaviors while maintaining the similarity to the offline data. Experimental results in four decision-making domains (Push, Kitchen, Humanoid, and D4RL tasks) show that DIDI is effective in discovering diverse and discriminative skills. We also introduce skill stitching and skill interpolation, which highlight the generalist nature of the learned skill space. Further, by incorporating an extrinsic reward function, DIDI enables reward-guided behavior generation, facilitating the learning of diverse and optimal behaviors from sub-optimal data.
[ "['Jinxin Liu' 'Xinghong Guo' 'Zifeng Zhuang' 'Donglin Wang']" ]
null
null
2405.14791
null
null
http://arxiv.org/pdf/2405.14791v2
2024-05-27T16:11:22Z
2024-05-23T17:01:53Z
Recurrent Early Exits for Federated Learning with Heterogeneous Clients
Federated learning (FL) has enabled distributed learning of a model across multiple clients in a privacy-preserving manner. One of the main challenges of FL is to accommodate clients with varying hardware capacities; clients have differing compute and memory requirements. To tackle this challenge, recent state-of-the-art approaches leverage the use of early exits. Nonetheless, these approaches fall short of mitigating the challenges of jointly learning multiple exit classifiers, often relying on hand-picked heuristic solutions for knowledge distillation among classifiers and/or utilizing additional layers for weaker classifiers. In this work, instead of utilizing multiple classifiers, we propose a recurrent early exit approach named ReeFL that fuses features from different sub-models into a single shared classifier. Specifically, we use a transformer-based early-exit module shared among sub-models to i) better exploit multi-layer feature representations for task-specific prediction and ii) modulate the feature representation of the backbone model for subsequent predictions. We additionally present a per-client self-distillation approach where the best sub-model is automatically selected as the teacher of the other sub-models at each client. Our experiments on standard image and speech classification benchmarks across various emerging federated fine-tuning baselines demonstrate ReeFL's effectiveness over previous works.
[ "['Royson Lee' 'Javier Fernandez-Marques' 'Shell Xu Hu' 'Da Li'\n 'Stefanos Laskaridis' 'Łukasz Dudziak' 'Timothy Hospedales'\n 'Ferenc Huszár' 'Nicholas D. Lane']" ]
null
null
2405.14806
null
null
http://arxiv.org/pdf/2405.14806v2
2024-07-09T16:01:23Z
2024-05-23T17:15:41Z
Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics
Extracting scientific understanding from particle-physics experiments requires solving diverse learning problems with high precision and good data efficiency. We propose the Lorentz Geometric Algebra Transformer (L-GATr), a new multi-purpose architecture for high-energy physics. L-GATr represents high-energy data in a geometric algebra over four-dimensional space-time and is equivariant under Lorentz transformations, the symmetry group of relativistic kinematics. At the same time, the architecture is a Transformer, which makes it versatile and scalable to large systems. L-GATr is first demonstrated on regression and classification tasks from particle physics. We then construct the first Lorentz-equivariant generative model: a continuous normalizing flow based on an L-GATr network, trained with Riemannian flow matching. Across our experiments, L-GATr is on par with or outperforms strong domain-specific baselines.
[ "['Jonas Spinner' 'Victor Bresó' 'Pim de Haan' 'Tilman Plehn'\n 'Jesse Thaler' 'Johann Brehmer']" ]
null
null
2405.14808
null
null
http://arxiv.org/pdf/2405.14808v1
2024-05-23T17:18:46Z
2024-05-23T17:18:46Z
Implicit Personalization in Language Models: A Systematic Study
Implicit Personalization (IP) is the phenomenon of a language model inferring a user's background from the implicit cues in the input prompts and tailoring the response based on this inference. While previous work has touched upon various instances of this problem, a unified framework for studying this behavior has been lacking. This work systematically studies IP through a rigorous mathematical formulation, a multi-perspective moral reasoning framework, and a set of case studies. Our theoretical foundation for IP relies on a structural causal model and introduces a novel method, indirect intervention, to estimate the causal effect of a mediator variable that cannot be directly intervened upon. Beyond the technical approach, we also introduce a set of moral reasoning principles based on three schools of moral philosophy to study when IP may or may not be ethically appropriate. Equipped with both mathematical and ethical insights, we present three diverse case studies illustrating the varied nature of the IP problem and offer recommendations for future research. Our code and data are at https://github.com/jiarui-liu/IP.
[ "['Zhijing Jin' 'Nils Heil' 'Jiarui Liu' 'Shehzaad Dhuliawala' 'Yahang Qi'\n 'Bernhard Schölkopf' 'Rada Mihalcea' 'Mrinmaya Sachan']" ]
null
null
2405.14813
null
null
http://arxiv.org/pdf/2405.14813v1
2024-05-23T17:23:30Z
2024-05-23T17:23:30Z
Scalable Optimization in the Modular Norm
To improve performance in contemporary deep learning, one is interested in scaling up the neural network in terms of both the number and the size of the layers. When ramping up the width of a single layer, graceful scaling of training has been linked to the need to normalize the weights and their updates in the "natural norm" particular to that layer. In this paper, we significantly generalize this idea by defining the modular norm, which is the natural norm on the full weight space of any neural network architecture. The modular norm is defined recursively in tandem with the network architecture itself. We show that the modular norm has several promising applications. On the practical side, the modular norm can be used to normalize the updates of any base optimizer so that the learning rate becomes transferable across width and depth. This means that the user does not need to compute optimizer-specific scale factors in order to scale training. On the theoretical side, we show that for any neural network built from "well-behaved" atomic modules, the gradient of the network is Lipschitz-continuous in the modular norm, with the Lipschitz constant admitting a simple recursive formula. This characterization opens the door to porting standard ideas in optimization theory over to deep learning. We have created a Python package called Modula that automatically normalizes weight updates in the modular norm of the architecture. The package is available via "pip install modula" with source code at https://github.com/jxbz/modula.
[ "['Tim Large' 'Yang Liu' 'Minyoung Huh' 'Hyojin Bahng' 'Phillip Isola'\n 'Jeremy Bernstein']" ]
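The idea of normalizing a layer's update in its natural norm can be illustrated with a minimal sketch, here using the spectral norm of a dense layer's gradient via power iteration. This is an illustrative toy, not the Modula package's API:

```python
import numpy as np

def spectral_norm(M, iters=50):
    # Power iteration for the largest singular value of M.
    v = np.random.default_rng(0).normal(size=M.shape[1])
    for _ in range(iters):
        u = M @ v
        u /= np.linalg.norm(u)
        v = M.T @ u
        v /= np.linalg.norm(v)
    return float(u @ M @ v)

def normalized_update(grad, lr=0.1, eps=1e-12):
    # Rescale the raw gradient so its (spectral) norm equals lr,
    # making the step size independent of the gradient's raw scale.
    return lr * grad / (spectral_norm(grad) + eps)

G = np.random.default_rng(1).normal(size=(64, 32))
step = normalized_update(G, lr=0.1)
print(spectral_norm(step))  # ~0.1 regardless of G's scale
```

Because every layer's step then has a controlled size in its own norm, the learning rate no longer needs to be retuned as layers are widened, which is the transferability property the abstract describes.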
null
null
2405.14822
null
null
http://arxiv.org/pdf/2405.14822v1
2024-05-23T17:39:09Z
2024-05-23T17:39:09Z
PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher
To accelerate sampling, diffusion models (DMs) are often distilled into generators that directly map noise to data in a single step. In this approach, the resolution of the generator is fundamentally limited by that of the teacher DM. To overcome this limitation, we propose Progressive Growing of Diffusion Autoencoder (PaGoDA), a technique to progressively grow the resolution of the generator beyond that of the original teacher DM. Our key insight is that a pre-trained, low-resolution DM can be used to deterministically encode high-resolution data to a structured latent space by solving the PF-ODE forward in time (data-to-noise), starting from an appropriately down-sampled image. Using this frozen encoder in an auto-encoder framework, we train a decoder by progressively growing its resolution. Because the decoder grows progressively, PaGoDA avoids re-training the teacher/student models when the student is upsampled, making the whole training pipeline much cheaper. In experiments, we used our progressively growing decoder to upsample from the pre-trained model's 64x64 resolution to generate 512x512 samples, achieving 2x faster inference compared to single-step distilled Stable Diffusion models such as LCM. PaGoDA also achieved state-of-the-art FIDs on ImageNet across all resolutions from 64x64 to 512x512. Additionally, we demonstrated PaGoDA's effectiveness in solving inverse problems and enabling controllable generation.
[ "['Dongjun Kim' 'Chieh-Hsin Lai' 'Wei-Hsiang Liao' 'Yuhta Takida'\n 'Naoki Murata' 'Toshimitsu Uesaka' 'Yuki Mitsufuji' 'Stefano Ermon']" ]
null
null
2405.14830
null
null
http://arxiv.org/pdf/2405.14830v1
2024-05-23T17:46:49Z
2024-05-23T17:46:49Z
Deep learning lattice gauge theories
Monte Carlo methods have led to profound insights into the strong-coupling behaviour of lattice gauge theories and produced remarkable results such as first-principles computations of hadron masses. Despite tremendous progress over the last four decades, fundamental challenges such as the sign problem and the inability to simulate real-time dynamics remain. Neural network quantum states have emerged as an alternative method that seeks to overcome these challenges. In this work, we use gauge-invariant neural network quantum states to accurately compute the ground state of $\mathbb{Z}_N$ lattice gauge theories in $2+1$ dimensions. Using transfer learning, we study the distinct topological phases and the confinement phase transition of these theories. For $\mathbb{Z}_2$, we identify a continuous transition and compute critical exponents, finding excellent agreement with existing numerics for the expected Ising universality class. In the $\mathbb{Z}_3$ case, we observe a weakly first-order transition and identify the critical coupling. Our findings suggest that neural network quantum states are a promising method for precise studies of lattice gauge theory.
[ "['Anuj Apte' 'Anthony Ashmore' 'Clay Cordova' 'Tzu-Chen Huang']" ]
null
null
2405.14837
null
null
http://arxiv.org/pdf/2405.14837v2
2024-05-27T07:56:06Z
2024-05-23T17:51:05Z
Analysis of Atom-level pretraining with Quantum Mechanics (QM) data for Graph Neural Networks Molecular property models
Despite the rapid and significant advancements in deep learning for Quantitative Structure-Activity Relationship (QSAR) models, the challenge of learning robust molecular representations that effectively generalize in real-world scenarios to novel compounds remains an elusive and unresolved task. This study examines how atom-level pretraining with quantum mechanics (QM) data can mitigate violations of assumptions regarding the distributional similarity between training and test data and therefore improve performance and generalization in downstream tasks. On the public dataset Therapeutics Data Commons (TDC), we show how pretraining on atom-level QM improves performance overall and makes the feature activation distributions more Gaussian-like, which results in a representation that is more robust to distribution shifts. To the best of our knowledge, this is the first time that hidden state molecular representations are analyzed to compare the effects of molecule-level and atom-level pretraining on QM data.
[ "['Jose Arjona-Medina' 'Ramil Nugmanov']" ]
null
null
2405.14838
null
null
http://arxiv.org/pdf/2405.14838v1
2024-05-23T17:54:14Z
2024-05-23T17:54:14Z
From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step
When leveraging language models for reasoning tasks, generating explicit chain-of-thought (CoT) steps often proves essential for achieving high accuracy in final outputs. In this paper, we investigate if models can be taught to internalize these CoT steps. To this end, we propose a simple yet effective method for internalizing CoT steps: starting with a model trained for explicit CoT reasoning, we gradually remove the intermediate steps and finetune the model. This process allows the model to internalize the intermediate reasoning steps, thus simplifying the reasoning process while maintaining high performance. Our approach enables a GPT-2 Small model to solve 9-by-9 multiplication with up to 99% accuracy, whereas standard training cannot solve beyond 4-by-4 multiplication. Furthermore, our method proves effective on larger language models, such as Mistral 7B, achieving over 50% accuracy on GSM8K without producing any intermediate steps.
[ "['Yuntian Deng' 'Yejin Choi' 'Stuart Shieber']" ]
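The stage-wise removal of CoT tokens can be sketched schematically. The hypothetical snippet below only builds the per-stage training targets; the actual method finetunes a pretrained language model on each stage's shortened targets before moving to the next:

```python
# Schematic curriculum for internalizing chain-of-thought (CoT):
# at each stage, drop `step` more leading CoT tokens and finetune
# on the remaining (shorter) target, until only the answer is left.
def internalization_curriculum(question, cot_tokens, answer, step=1):
    stages = []
    for removed in range(0, len(cot_tokens) + 1, step):
        kept = cot_tokens[removed:]
        target = " ".join(kept + [answer]) if kept else answer
        stages.append((question, target))
    return stages

q = "12 * 13 ="
cot = ["12*10=120", "12*3=36", "120+36=156"]
stages = internalization_curriculum(q, cot, "156", step=1)
for s, (inp, tgt) in enumerate(stages):
    print(f"stage {s}: {inp} -> {tgt}")
# the final stage trains the model to emit the answer directly
```

The gradual schedule is the point: jumping straight from full CoT to no CoT would be a much harder optimization target than shrinking it one step at a time.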
null
null
2405.14840
null
null
http://arxiv.org/pdf/2405.14840v1
2024-05-23T17:55:09Z
2024-05-23T17:55:09Z
Differentiable Annealed Importance Sampling Minimizes The Jensen-Shannon Divergence Between Initial and Target Distribution
Differentiable annealed importance sampling (DAIS), proposed by Geffner & Domke (2021) and Zhang et al. (2021), allows optimizing, among other things, the initial distribution of AIS. In this paper, we show that, in the limit of many transitions, DAIS minimizes the symmetrized KL divergence (Jensen-Shannon divergence) between the initial and target distribution. Thus, DAIS can be seen as a form of variational inference (VI) in that its initial distribution is a parametric fit to an intractable target distribution. We empirically evaluate the usefulness of the initial distribution as a variational distribution on synthetic and real-world data, observing that it often provides more accurate uncertainty estimates than standard VI (optimizing the reverse KL divergence), importance weighted VI, and Markovian score climbing (optimizing the forward KL divergence).
[ "['Johannes Zenn' 'Robert Bamler']" ]
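For reference, the symmetrized divergence referred to above, in its standard form with the mixture $M$:

```latex
% Jensen--Shannon divergence as a symmetrized KL divergence
M = \tfrac{1}{2}\,(P + Q), \qquad
\mathrm{JS}(P \,\|\, Q)
  = \tfrac{1}{2}\,\mathrm{KL}\!\left(P \,\|\, M\right)
  + \tfrac{1}{2}\,\mathrm{KL}\!\left(Q \,\|\, M\right).
```

Unlike the reverse KL objective of standard VI or the forward KL of Markovian score climbing, this objective penalizes mismatch symmetrically in both directions.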
null
null
2405.14848
null
null
http://arxiv.org/pdf/2405.14848v1
2024-05-23T17:56:38Z
2024-05-23T17:56:38Z
Local Causal Discovery for Structural Evidence of Direct Discrimination
Fairness is a critical objective in policy design and algorithmic decision-making. Identifying the causal pathways of unfairness requires knowledge of the underlying structural causal model, which may be incomplete or unavailable. This limits the practicality of causal fairness analysis in complex or low-knowledge domains. To mitigate this practicality gap, we advocate for developing efficient causal discovery methods for fairness applications. To this end, we introduce local discovery for direct discrimination (LD3): a polynomial-time algorithm that recovers structural evidence of direct discrimination. LD3 performs a linear number of conditional independence tests with respect to variable set size. Moreover, we propose a graphical criterion for identifying the weighted controlled direct effect (CDE), a qualitative measure of direct discrimination. We prove that this criterion is satisfied by the knowledge returned by LD3, increasing the accessibility of the weighted CDE as a causal fairness measure. Taking liver transplant allocation as a case study, we highlight the potential impact of LD3 for modeling fairness in complex decision systems. Results on real-world data demonstrate more plausible causal relations than baselines, which took 197x to 5870x longer to execute.
[ "['Jacqueline Maasch' 'Kyra Gan' 'Violet Chen' 'Agni Orfanoudaki'\n 'Nil-Jana Akpinar' 'Fei Wang']" ]
null
null
2405.14852
null
null
http://arxiv.org/pdf/2405.14852v2
2024-05-30T15:01:49Z
2024-05-23T17:57:04Z
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression
There has been significant interest in "extreme" compression of large language models (LLMs), i.e., to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices. Existing work focused on improved one-shot quantization techniques and weight representations; yet, purely post-training approaches are reaching diminishing returns in terms of the accuracy-vs-bit-width trade-off. State-of-the-art quantization methods such as QuIP# and AQLM include fine-tuning (part of) the compressed parameters over a limited amount of calibration data; however, such fine-tuning techniques over compressed weights often make exclusive use of straight-through estimators (STE), whose performance is not well-understood in this setting. In this work, we question the use of STE for extreme LLM compression, showing that it can be sub-optimal, and perform a systematic study of quantization-aware fine-tuning strategies for LLMs. We propose PV-Tuning - a representation-agnostic framework that generalizes and improves upon existing fine-tuning strategies, and provides convergence guarantees in restricted cases. On the practical side, when used for 1-2 bit vector quantization, PV-Tuning outperforms prior techniques for highly-performant models such as Llama and Mistral. Using PV-Tuning, we achieve the first Pareto-optimal quantization for Llama 2 family models at 2 bits per parameter.
[ "['Vladimir Malinovskii' 'Denis Mazur' 'Ivan Ilin' 'Denis Kuznedelev'\n 'Konstantin Burlachenko' 'Kai Yi' 'Dan Alistarh' 'Peter Richtarik']" ]
null
null
2405.14853
null
null
http://arxiv.org/pdf/2405.14853v1
2024-05-23T17:57:14Z
2024-05-23T17:57:14Z
Privileged Sensing Scaffolds Reinforcement Learning
We need to look at our shoelaces as we first learn to tie them but having mastered this skill, can do it from touch alone. We call this phenomenon "sensory scaffolding": observation streams that are not needed by a master might yet aid a novice learner. We consider such sensory scaffolding setups for training artificial agents. For example, a robot arm may need to be deployed with just a low-cost, robust, general-purpose camera; yet its performance may improve by having privileged training-time-only access to informative albeit expensive and unwieldy motion capture rigs or fragile tactile sensors. For these settings, we propose "Scaffolder", a reinforcement learning approach which effectively exploits privileged sensing in critics, world models, reward estimators, and other such auxiliary components that are only used at training time, to improve the target policy. For evaluating sensory scaffolding agents, we design a new "S3" suite of ten diverse simulated robotic tasks that explore a wide range of practical sensor setups. Agents must use privileged camera sensing to train blind hurdlers, privileged active visual perception to help robot arms overcome visual occlusions, privileged touch sensors to train robot hands, and more. Scaffolder easily outperforms relevant prior baselines and frequently performs comparably even to policies that have test-time access to the privileged sensors. Website: https://penn-pal-lab.github.io/scaffolder/
[ "['Edward S. Hu' 'James Springer' 'Oleh Rybkin' 'Dinesh Jayaraman']" ]
null
null
2405.14854
null
null
http://arxiv.org/pdf/2405.14854v1
2024-05-23T17:57:24Z
2024-05-23T17:57:24Z
TerDiT: Ternary Diffusion Models with Transformers
Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion models based on transformer architecture (DiTs). Among these diffusion models, diffusion transformers have demonstrated superior image generation capabilities, boasting lower FID scores and higher scalability. However, deploying large-scale DiT models can be expensive due to their large parameter counts. Although existing research has explored efficient deployment techniques for diffusion models such as model quantization, there is still little work concerning DiT-based models. To tackle this research gap, in this paper, we propose TerDiT, a quantization-aware training (QAT) and efficient deployment scheme for ternary diffusion models with transformers. We focus on the ternarization of DiT networks and scale model sizes from 600M to 4.2B. Our work contributes to the exploration of efficient deployment strategies for large-scale DiT models, demonstrating the feasibility of training extremely low-bit diffusion transformer models from scratch while maintaining competitive image generation capacities compared to full-precision models. Code will be available at https://github.com/Lucky-Lance/TerDiT.
[ "['Xudong Lu' 'Aojun Zhou' 'Ziyi Lin' 'Qi Liu' 'Yuhui Xu' 'Renrui Zhang'\n 'Yafei Wen' 'Shuai Ren' 'Peng Gao' 'Junchi Yan' 'Hongsheng Li']" ]
null
null
2405.14857
null
null
http://arxiv.org/pdf/2405.14857v2
2024-06-10T08:23:03Z
2024-05-23T17:58:03Z
Semantica: An Adaptable Image-Conditioned Diffusion Model
We investigate the task of adapting image generative models to different datasets without finetuning. To this end, we introduce Semantica, an image-conditioned diffusion model capable of generating images based on the semantics of a conditioning image. Semantica is trained exclusively on web-scale image pairs; that is, it receives a random image from a webpage as conditional input and models another random image from the same webpage. Our experiments highlight the expressivity of pretrained image encoders and the necessity of semantic-based data filtering in achieving high-quality image generation. Once trained, it can adaptively generate new images from a dataset by simply using images from that dataset as input. We study the transfer properties of Semantica on ImageNet, LSUN Churches, LSUN Bedroom and SUN397.
[ "['Manoj Kumar' 'Neil Houlsby' 'Emiel Hoogeboom']" ]
null
null
2405.14860
null
null
http://arxiv.org/pdf/2405.14860v1
2024-05-23T17:59:04Z
2024-05-23T17:59:04Z
Not All Language Model Features Are Linear
Recent work has proposed the linear representation hypothesis: that language models perform computation by manipulating one-dimensional representations of concepts ("features") in activation space. In contrast, we explore whether some language model representations may be inherently multi-dimensional. We begin by developing a rigorous definition of irreducible multi-dimensional features based on whether they can be decomposed into either independent or non-co-occurring lower-dimensional features. Motivated by these definitions, we design a scalable method that uses sparse autoencoders to automatically find multi-dimensional features in GPT-2 and Mistral 7B. These auto-discovered features include strikingly interpretable examples, e.g. circular features representing days of the week and months of the year. We identify tasks where these exact circles are used to solve computational problems involving modular arithmetic in days of the week and months of the year. Finally, we provide evidence that these circular features are indeed the fundamental unit of computation in these tasks with intervention experiments on Mistral 7B and Llama 3 8B, and we find further circular representations by breaking down the hidden states for these tasks into interpretable components.
[ "['Joshua Engels' 'Isaac Liao' 'Eric J. Michaud' 'Wes Gurnee' 'Max Tegmark']" ]
null
null
2405.14861
null
null
http://arxiv.org/pdf/2405.14861v1
2024-05-23T17:59:10Z
2024-05-23T17:59:10Z
Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models
This paper investigates score-based diffusion models when the underlying target distribution is concentrated on or near low-dimensional manifolds within the higher-dimensional space in which they formally reside, a common characteristic of natural image distributions. Despite previous efforts to understand the data generation process of diffusion models, existing theoretical support remains highly suboptimal in the presence of low-dimensional structure, which we strengthen in this paper. For the popular Denoising Diffusion Probabilistic Model (DDPM), we find that the dependency of the error incurred within each denoising step on the ambient dimension $d$ is in general unavoidable. We further identify a unique design of coefficients that yields a convergence rate of order $O(k^{2}/\sqrt{T})$ (up to log factors), where $k$ is the intrinsic dimension of the target distribution and $T$ is the number of steps. This represents the first theoretical demonstration that the DDPM sampler can adapt to unknown low-dimensional structures in the target distribution, highlighting the critical importance of coefficient design. All of this is achieved by a novel set of analysis tools that characterize the algorithmic dynamics in a more deterministic manner.
[ "['Gen Li' 'Yuling Yan']" ]
null
null
2405.14863
null
null
http://arxiv.org/pdf/2405.14863v1
2024-05-23T17:59:26Z
2024-05-23T17:59:26Z
A Nurse is Blue and Elephant is Rugby: Cross Domain Alignment in Large Language Models Reveal Human-like Patterns
Cross-domain alignment refers to the task of mapping a concept from one domain to another. For example, ``If a \textit{doctor} were a \textit{color}, what color would it be?''. This seemingly peculiar task is designed to investigate how people represent concrete and abstract concepts through their mappings between categories and their reasoning processes over those mappings. In this paper, we adapt this task from cognitive science to evaluate the conceptualization and reasoning abilities of large language models (LLMs) through a behavioral study. We examine several LLMs by prompting them with a cross-domain mapping task and analyzing their responses at both the population and individual levels. Additionally, we assess the models' ability to reason about their predictions by analyzing and categorizing their explanations for these mappings. The results reveal several similarities between humans' and models' mappings and explanations, suggesting that models represent concepts similarly to humans. This similarity is evident not only in the model representation but also in their behavior. Furthermore, the models mostly provide valid explanations and deploy reasoning paths that are similar to those of humans.
[ "['Asaf Yehudai' 'Taelin Karidi' 'Gabriel Stanovsky' 'Ariel Goldstein'\n 'Omri Abend']" ]
null
null
2405.14868
null
null
http://arxiv.org/pdf/2405.14868v2
2024-07-05T17:59:57Z
2024-05-23T17:59:52Z
Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis
Accurate reconstruction of complex dynamic scenes from just a single viewpoint continues to be a challenging task in computer vision. Current dynamic novel view synthesis methods typically require videos from many different camera viewpoints, necessitating careful recording setups, and significantly restricting their utility in the wild as well as in terms of embodied AI applications. In this paper, we propose $\textbf{GCD}$, a controllable monocular dynamic view synthesis pipeline that leverages large-scale diffusion priors to, given a video of any scene, generate a synchronous video from any other chosen perspective, conditioned on a set of relative camera pose parameters. Our model does not require depth as input, and does not explicitly model 3D scene geometry, instead performing end-to-end video-to-video translation in order to achieve its goal efficiently. Despite being trained on synthetic multi-view video data only, zero-shot real-world generalization experiments show promising results in multiple domains, including robotics, object permanence, and driving environments. We believe our framework can potentially unlock powerful applications in rich dynamic scene understanding, perception for robotics, and interactive 3D video viewing experiences for virtual reality.
[ "['Basile Van Hoorick' 'Rundi Wu' 'Ege Ozguroglu' 'Kyle Sargent'\n 'Ruoshi Liu' 'Pavel Tokmakov' 'Achal Dave' 'Changxi Zheng'\n 'Carl Vondrick']" ]
null
null
2405.14877
null
null
http://arxiv.org/pdf/2405.14877v1
2024-04-02T01:58:53Z
2024-04-02T01:58:53Z
Visual Deformation Detection Using Soft Material Simulation for Pre-training of Condition Assessment Models
This paper addresses the challenge of geometric quality assurance in manufacturing, particularly when human assessment is required. It proposes using Blender, an open-source simulation tool, to create synthetic datasets for machine learning (ML) models. The process involves translating expert information into shape key parameters to simulate deformations, generating images for both deformed and non-deformed objects. The study explores the impact of discrepancies between real and simulated environments on ML model performance and investigates the effect of different simulation backgrounds on model sensitivity. Additionally, the study aims to enhance the model's robustness to camera positioning by generating datasets with a variety of randomized viewpoints. The entire process, from data synthesis to model training and testing, is implemented using a Python API interfacing with Blender. An experiment with a soda can object validates the accuracy of the proposed pipeline.
[ "['Joel Sol' 'Amir M. Soufi Enayati' 'Homayoun Najjaran']" ]
null
null
2405.14878
null
null
http://arxiv.org/pdf/2405.14878v1
2024-04-02T15:24:25Z
2024-04-02T15:24:25Z
Improving and Evaluating Machine Learning Methods for Forensic Shoeprint Matching
We propose a machine learning pipeline for forensic shoeprint pattern matching that improves on the accuracy and generalisability of existing methods. We extract 2D coordinates from shoeprint scans using edge detection and align the two shoeprints with iterative closest point (ICP). We then extract similarity metrics to quantify how well the two prints match and use these metrics to train a random forest that generates a probabilistic measurement of how likely two prints are to have originated from the same outsole. We assess the generalisability of machine learning methods trained on lab shoeprint scans to more realistic crime scene shoeprint data by evaluating the accuracy of our methods on several shoeprint scenarios: partial prints, prints with varying levels of blurriness, prints with different amounts of wear, and prints from different shoe models. We find that models trained on one type of shoeprint yield extremely high levels of accuracy when tested on shoeprint pairs of the same scenario but fail to generalise to other scenarios. We also discover that models trained on a variety of scenarios predict almost as accurately as models trained on specific scenarios.
[ "['Divij Jain' 'Saatvik Kher' 'Lena Liang' 'Yufeng Wu' 'Ashley Zheng'\n 'Xizhen Cai' 'Anna Plantinga' 'Elizabeth Upton']" ]
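The alignment stage described in the abstract above (edge-extracted 2D coordinates registered with iterative closest point before similarity metrics are computed) can be sketched in plain Python. This is a minimal illustrative toy, not the authors' pipeline: the point sets, the closed-form 2D rigid fit, and the function names are our own.

```python
import math
import random

def best_rigid_transform(src, dst):
    """Closed-form 2D least-squares rotation+translation mapping src onto dst
    (Kabsch in 2D: rotation angle from cross/dot sums of centered points)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    dot = sum((p[0]-csx)*(q[0]-cdx) + (p[1]-csy)*(q[1]-cdy) for p, q in zip(src, dst))
    cross = sum((p[0]-csx)*(q[1]-cdy) - (p[1]-csy)*(q[0]-cdx) for p, q in zip(src, dst))
    th = math.atan2(cross, dot)
    c, s = math.cos(th), math.sin(th)
    tx = cdx - (c*csx - s*csy); ty = cdy - (s*csx + c*csy)
    return c, s, tx, ty

def icp(src, dst, iters=30):
    """Iterative closest point: alternate nearest-neighbour matching
    with the rigid fit above, returning aligned points and final RMSE."""
    cur = [tuple(p) for p in src]
    for _ in range(iters):
        matched = [min(dst, key=lambda q, p=p: (p[0]-q[0])**2 + (p[1]-q[1])**2)
                   for p in cur]
        c, s, tx, ty = best_rigid_transform(cur, matched)
        cur = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in cur]
    rmse = (sum((x-q[0])**2 + (y-q[1])**2
                for (x, y), q in zip(cur, matched)) / len(cur)) ** 0.5
    return cur, rmse

# Demo: recover a small known rotation+translation on synthetic "print" points.
random.seed(0)
prints = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)]
c, s = math.cos(0.05), math.sin(0.05)
moved = [(c*x - s*y + 0.05, s*x + c*y - 0.03) for x, y in prints]
aligned, rmse = icp(moved, prints)
```

After alignment, per-pair similarity metrics (e.g., residual distances) would feed the random forest described in the abstract.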
null
null
2405.14885
null
null
http://arxiv.org/pdf/2405.14885v1
2024-05-03T10:03:59Z
2024-05-03T10:03:59Z
Reservoir Computing with Generalized Readout based on Generalized Synchronization
Reservoir computing is a machine learning framework that exploits nonlinear dynamics, exhibiting significant computational capabilities. One of the defining characteristics of reservoir computing is its low cost and straightforward training algorithm, i.e. only the readout, given by a linear combination of reservoir variables, is trained. Inspired by recent mathematical studies based on dynamical system theory, in particular generalized synchronization, we propose a novel reservoir computing framework with generalized readout, including a nonlinear combination of reservoir variables. The first crucial advantage of using the generalized readout is its mathematical basis for improving information processing capabilities. Secondly, it is still within a linear learning framework, which preserves the original strength of reservoir computing. In summary, the generalized readout is naturally derived from mathematical theory and allows the extraction of useful basis functions from reservoir dynamics without sacrificing simplicity. In a numerical study, we find that introducing the generalized readout leads to a significant improvement in accuracy and an unexpected enhancement in robustness for the short- and long-term prediction of Lorenz chaos, with a particular focus on how to harness low-dimensional reservoir dynamics. A novel way and its advantages for physical implementations of reservoir computing with generalized readout are briefly discussed.
[ "['Akane Ookubo' 'Masanobu Inubushi']" ]
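The generalized readout idea above stays within linear learning: only the readout weights are trained, but over a nonlinear combination of reservoir variables. A toy NumPy sketch, under our own simplifying assumptions (a small echo state network, squared states as one simple nonlinear combination, and a one-step-ahead sine prediction task; all names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 60                                        # reservoir size (toy scale)
Win = rng.uniform(-0.5, 0.5, N)               # input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run_reservoir(u):
    """Drive the tanh reservoir with input sequence u, collect states."""
    r, states = np.zeros(N), []
    for x in u:
        r = np.tanh(W @ r + Win * x)
        states.append(r.copy())
    return np.array(states)

def ridge_fit(Phi, y, lam=1e-6):
    """Train the readout by ridge regression (the only trained part)."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

u = np.sin(0.2 * np.arange(600))
S = run_reservoir(u)
R, y = S[100:-1], u[101:]                     # washout, one-step-ahead target

w_lin = ridge_fit(R, y)                       # standard linear readout
mse_lin = float(np.mean((R @ w_lin - y) ** 2))

G = np.hstack([R, R ** 2])                    # generalized readout: add squares
w_gen = ridge_fit(G, y)
mse_gen = float(np.mean((G @ w_gen - y) ** 2))
```

The training cost is still a single linear solve in both cases, which is the simplicity the abstract emphasizes.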
null
null
2405.14891
null
null
http://arxiv.org/pdf/2405.14891v1
2024-05-17T21:07:19Z
2024-05-17T21:07:19Z
Auditing the Fairness of COVID-19 Forecast Hub Case Prediction Models
The COVID-19 Forecast Hub, a repository of COVID-19 forecasts from over 50 independent research groups, is used by the Centers for Disease Control and Prevention (CDC) for their official COVID-19 communications. As such, the Forecast Hub is a critical centralized resource to promote transparent decision making. Nevertheless, by focusing exclusively on prediction accuracy, the Forecast Hub fails to evaluate whether the proposed models have similar performance across social determinants that have been known to play a role in the COVID-19 pandemic, including race, ethnicity, and urbanization level. In this paper, we carry out a comprehensive fairness analysis of the Forecast Hub model predictions and show statistically significant differences in predictive performance across social determinants, with minority racial and ethnic groups as well as less urbanized areas often associated with higher prediction errors. We hope this work will encourage COVID-19 modelers and the CDC to report fairness metrics together with accuracy, and to reflect on the potential harms of the models on specific social groups and contexts.
[ "['Saad Mohammad Abrar' 'Naman Awasthi' 'Daniel Smolyak'\n 'Vanessa Frias-Martinez']" ]
null
null
2405.14893
null
null
http://arxiv.org/pdf/2405.14893v1
2024-05-20T08:27:14Z
2024-05-20T08:27:14Z
YUI: Day-ahead Electricity Price Forecasting Using Invariance Simplified Supply and Demand Curve
In the day-ahead electricity market, it is crucial for all market participants to have access to reliable and accurate price forecasts for their decision-making processes. Forecasting methods currently utilized in industrial applications frequently neglect the underlying mechanisms of price formation, while economic research from the perspective of supply and demand has stringent data collection requirements, making it difficult to apply in actual markets. Observing the characteristics of the day-ahead electricity market, we introduce two invariance assumptions to simplify the modeling of supply and demand curves. Upon incorporating the time invariance assumption, we can forecast the supply curve using the market equilibrium points from multiple time slots in the recent period. By introducing the price insensitivity assumption, we can approximate the demand curve using a straight line. The point where these two curves intersect provides us with the forecast price. The proposed model, forecasting suppl\textbf{Y} and demand cUrve simplified by Invariance, termed as YUI, is more efficient than state-of-the-art methods. Our experiment results in the Shanxi day-ahead electricity market show that compared with existing methods, YUI can reduce forecast error by 13.8% in MAE and 28.7% in sMAPE. Code is publicly available at https://github.com/wangln19/YUI.
[ "['Linian Wang' 'Anlan Yu' 'Jianghong Liu' 'Huibing Zhang' 'Leye Wang']" ]
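The two invariance assumptions above reduce price forecasting to a curve intersection: recent equilibrium points trace out the supply curve (time invariance), and price-insensitive demand is a vertical line at the forecast demand quantity. A minimal sketch with made-up numbers (the equilibrium points and function names are our own illustration, not the paper's data):

```python
# Toy equilibrium points (quantity, price) observed in recent time slots;
# under the time-invariance assumption they trace out the supply curve.
eq = [(80.0, 20.0), (95.0, 24.0), (110.0, 30.0), (130.0, 41.0), (150.0, 58.0)]

def forecast_price(demand_qty):
    """Price-insensitive demand is a vertical line at demand_qty; the
    forecast is the supply curve evaluated there by linear interpolation."""
    pts = sorted(eq)
    for (q0, p0), (q1, p1) in zip(pts, pts[1:]):
        if q0 <= demand_qty <= q1:
            return p0 + (p1 - p0) * (demand_qty - q0) / (q1 - q0)
    raise ValueError("demand outside observed supply range")

p = forecast_price(120.0)   # demand lies between two observed equilibria
```

The intersection of the interpolated supply curve with the vertical demand line is the day-ahead price forecast.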
null
null
2405.14896
null
null
http://arxiv.org/abs/2405.14896v1
2024-05-21T21:52:08Z
2024-05-21T21:52:08Z
Study on spike-and-wave detection in epileptic signals using t-location-scale distribution and the K-nearest neighbors classifier
Pattern classification in electroencephalography (EEG) signals is an important problem in biomedical engineering since it enables the detection of brain activity, particularly the early detection of epileptic seizures. In this paper, we propose a k-nearest neighbors classification for epileptic EEG signals based on a t-location-scale statistical representation to detect spike-and-waves. The proposed approach is demonstrated on a real dataset containing both spike-and-wave events and normal brain function signals, where our performance is evaluated in terms of classification accuracy, sensitivity, and specificity.
[ "['Antonio Quintero-Rincón' 'Jorge Prendes' 'Valeria Muro' \"Carlos D'Giano\"]" ]
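The pipeline above (a t location-scale statistical representation of each EEG window fed to a k-nearest-neighbors classifier) can be sketched end to end on synthetic data. This is a hedged toy stand-in: we use robust location/scale estimates and a kurtosis ratio instead of a full maximum-likelihood t location-scale fit, and the synthetic heavy-tailed windows merely mimic spike-and-wave statistics; all names are our own.

```python
import math
import random
import statistics

def t_ls_features(x):
    """Crude t location-scale summary of a signal window: robust location,
    robust scale (MAD), and log-kurtosis as a tail-heaviness proxy."""
    loc = statistics.median(x)
    scale = statistics.median([abs(v - loc) for v in x]) + 1e-12
    z2 = [((v - loc) / scale) ** 2 for v in x]
    m2 = sum(z2) / len(z2)
    m4 = sum(v * v for v in z2) / len(z2)
    return (loc, scale, math.log(m4 / (m2 * m2) + 1e-12))

def knn_predict(train, x, k=5):
    """Majority vote among the k nearest feature vectors in train."""
    ranked = sorted(train, key=lambda fl: sum((a - b) ** 2
                                              for a, b in zip(fl[0], x)))
    votes = [lbl for _, lbl in ranked[:k]]
    return max(set(votes), key=votes.count)

random.seed(1)
def heavy(n):    # heavy-tailed window (Student t, df=3, via normal/sqrt(chi2/df))
    return [random.gauss(0, 1) /
            math.sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(3)) / 3)
            for _ in range(n)]
def baseline(n):  # Gaussian "normal brain function" window
    return [random.gauss(0, 1) for _ in range(n)]

data = ([(t_ls_features(heavy(200)), 1) for _ in range(60)] +
        [(t_ls_features(baseline(200)), 0) for _ in range(60)])
random.shuffle(data)
train, test = data[:80], data[80:]
acc = sum(knn_predict(train, f) == y for f, y in test) / len(test)
```

On real EEG the features would come from fitting the t location-scale distribution per window, but the classify-on-summary-statistics structure is the same.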
null
null
2405.14899
null
null
http://arxiv.org/pdf/2405.14899v1
2024-05-22T15:52:52Z
2024-05-22T15:52:52Z
DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning
In-context learning (ICL) allows transformer-based language models that are pre-trained on general text to quickly learn a specific task with a few "task demonstrations" without updating their parameters, significantly boosting their flexibility and generality. ICL possesses many distinct characteristics from conventional machine learning, thereby requiring new approaches to interpret this learning paradigm. Taking the viewpoint of recent works showing that transformers learn in context by formulating an internal optimizer, we propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL. We empirically verify the effectiveness of our approach for demonstration attribution while being computationally efficient. Leveraging the results, we then show how DETAIL can help improve model performance in real-world scenarios through demonstration reordering and curation. Finally, we experimentally prove the wide applicability of DETAIL by showing our attribution scores obtained on white-box models are transferable to black-box models in improving model performance.
[ "['Zijian Zhou' 'Xiaoqiang Lin' 'Xinyi Xu' 'Alok Prakash' 'Daniela Rus'\n 'Bryan Kian Hsiang Low']" ]
null
null
2405.14900
null
null
http://arxiv.org/abs/2405.14900v1
2024-05-22T19:54:09Z
2024-05-22T19:54:09Z
Fair Evaluation of Federated Learning Algorithms for Automated Breast Density Classification: The Results of the 2022 ACR-NCI-NVIDIA Federated Learning Challenge
The correct interpretation of breast density is important in the assessment of breast cancer risk. AI has been shown capable of accurately predicting breast density; however, due to the differences in imaging characteristics across mammography systems, models built using data from one system do not generalize well to other systems. Though federated learning (FL) has emerged as a way to improve the generalizability of AI without the need to share data, the best way to preserve features from all training data during FL is an active area of research. To explore FL methodology, the breast density classification FL challenge was hosted in partnership with the American College of Radiology, Harvard Medical School's Mass General Brigham, University of Colorado, NVIDIA, and the National Institutes of Health National Cancer Institute. Challenge participants were able to submit docker containers capable of implementing FL on three simulated medical facilities, each containing a unique large mammography dataset. The breast density FL challenge ran from June 15 to September 5, 2022, attracting seven finalists from around the world. The winning FL submission reached a linear kappa score of 0.653 on the challenge test data and 0.413 on an external testing dataset, scoring comparably to a model trained on the same data in a central location.
[ "['Kendall Schmidt' 'Benjamin Bearce' 'Ken Chang' 'Laura Coombs'\n 'Keyvan Farahani' 'Marawan Elbatele' 'Kaouther Mouhebe' 'Robert Marti'\n 'Ruipeng Zhang' 'Yao Zhang' 'Yanfeng Wang' 'Yaojun Hu' 'Haochao Ying'\n 'Yuyang Xu' 'Conrad Testagrose' 'Mutlu Demirer' 'Vikash Gupta'\n 'Ünal Akünal' 'Markus Bujotzek' 'Klaus H. Maier-Hein' 'Yi Qin'\n 'Xiaomeng Li' 'Jayashree Kalpathy-Cramer' 'Holger R. Roth']" ]
null
null
2405.14908
null
null
http://arxiv.org/pdf/2405.14908v2
2024-07-11T08:44:45Z
2024-05-23T09:44:02Z
Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining
Large language models exhibit exceptional generalization capabilities, primarily attributed to the utilization of diversely sourced data. However, conventional practices in integrating this diverse data heavily rely on heuristic schemes, lacking theoretical guidance. This research tackles these limitations by investigating strategies based on low-cost proxies for data mixtures, with the aim of streamlining data curation to enhance training efficiency. Specifically, we propose a unified scaling law, termed $\textbf{BiMix}$, which accurately models the bivariate scaling behaviors of both data quantity and mixing proportions. We conduct systematic experiments and provide empirical evidence for the predictive power and fundamental principles of $\textbf{BiMix}$. Notably, our findings reveal that entropy-driven training-free data mixtures can achieve comparable or even better performance than more resource-intensive methods. We hope that our quantitative insights can shed light on further judicious research and development in cost-effective language modeling.
[ "['Ce Ge' 'Zhijian Ma' 'Daoyuan Chen' 'Yaliang Li' 'Bolin Ding']" ]
null
null
2405.14913
null
null
http://arxiv.org/pdf/2405.14913v1
2024-05-23T13:20:47Z
2024-05-23T13:20:47Z
High Rank Path Development: an approach of learning the filtration of stochastic processes
Since weak convergence for stochastic processes does not account for the growth of information over time, which is represented by the underlying filtration, a slightly erroneous stochastic model in the weak topology may cause huge losses in multi-period decision making problems. To address such discontinuities, Aldous introduced the extended weak convergence, which can fully characterise all essential properties, including the filtration, of stochastic processes; however, it was considered hard to find efficient numerical implementations for it. In this paper, we introduce a novel metric called the High Rank PCF Distance (HRPCFD) for extended weak convergence, based on the high rank path development method from rough path theory, which also defines the characteristic function for measure-valued processes. We then show that HRPCFD admits many favourable analytic properties, which allow us to design an efficient algorithm for training HRPCFD from data and to construct the HRPCF-GAN, which uses HRPCFD as the discriminator for conditional time series generation. Our numerical experiments on both hypothesis testing and generative modelling validate the out-performance of our approach compared with several state-of-the-art methods, highlighting its potential in broad applications of synthetic time series generation and in addressing classic financial and economic challenges, such as optimal stopping or utility maximisation problems.
[ "['Jiajie Tao' 'Hao Ni' 'Chong Liu']" ]
null
null
2405.14917
null
null
http://arxiv.org/pdf/2405.14917v1
2024-05-23T16:21:48Z
2024-05-23T16:21:48Z
SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models
Large language models (LLMs) achieve remarkable performance in natural language understanding but require substantial computation and memory resources. Post-training quantization (PTQ) is a powerful compression technique extensively investigated in LLMs. However, existing PTQ methods are still not ideal in terms of accuracy and efficiency, especially below 4 bit-widths. Standard PTQ methods using group-wise quantization struggle to quantize LLMs accurately at such low bit-widths, while advanced methods that retain selected high-precision weights element-wise rarely realize their theoretical hardware efficiency. This paper presents a Salience-Driven Mixed-Precision Quantization scheme for LLMs, namely SliM-LLM. The scheme exploits the salience distribution of weights to determine the optimal bit-width and quantizers for accurate LLM quantization, while aligning the bit-width partition to groups for compact memory usage and fast integer inference. Specifically, the proposed SliM-LLM mainly relies on two novel techniques: (1) Salience-Determined Bit Allocation utilizes the clustering characteristics of the salience distribution to allocate the bit-widths of each group, increasing the accuracy of quantized LLMs and maintaining inference efficiency; (2) Salience-Weighted Quantizer Calibration optimizes the parameters of the quantizer by considering the element-wise salience within the group, balancing the maintenance of salient information and the minimization of errors. Comprehensive experiments show that SliM-LLM significantly improves the accuracy of LLMs at ultra-low bits, e.g., 2-bit LLaMA-7B achieves a 5.5-times memory saving over the original model on NVIDIA A800 GPUs, and a 48% decrease in perplexity compared to the state-of-the-art gradient-free PTQ method. Moreover, SliM-LLM+, which integrates gradient-based quantizers into SliM-LLM, further reduces perplexity by 35.1%.
[ "['Wei Huang' 'Haotong Qin' 'Yangdong Liu' 'Yawei Li' 'Xianglong Liu'\n 'Luca Benini' 'Michele Magno' 'Xiaojuan Qi']" ]
null
null
2405.14918
null
null
http://arxiv.org/pdf/2405.14918v2
2024-05-30T16:04:44Z
2024-05-23T17:13:52Z
AnalogCoder: Analog Circuit Design via Training-Free Code Generation
Analog circuit design is a significant task in modern chip technology, focusing on the selection of component types, connectivity, and parameters to ensure proper circuit functionality. Despite advances made by Large Language Models (LLMs) in digital circuit design, the complexity and scarcity of data in analog circuitry pose significant challenges. To mitigate these issues, we introduce AnalogCoder, the first training-free LLM agent for designing analog circuits through Python code generation. Firstly, AnalogCoder incorporates a feedback-enhanced flow with tailored domain-specific prompts, enabling the automated and self-correcting design of analog circuits with a high success rate. Secondly, it proposes a circuit tool library to archive successful designs as reusable modular sub-circuits, simplifying composite circuit creation. Thirdly, extensive experiments on a benchmark designed to cover a wide range of analog circuit tasks show that AnalogCoder outperforms other LLM-based methods. It has successfully designed 20 circuits, 5 more than standard GPT-4o. We believe AnalogCoder can significantly improve the labor-intensive chip design process, enabling non-experts to design analog circuits efficiently.
[ "['Yao Lai' 'Sungyoung Lee' 'Guojin Chen' 'Souradip Poddar' 'Mengkang Hu'\n 'David Z. Pan' 'Ping Luo']" ]
null
null
2405.14923
null
null
http://arxiv.org/pdf/2405.14923v1
2024-05-23T17:51:36Z
2024-05-23T17:51:36Z
How Does Bayes Error Limit Probabilistic Robust Accuracy
Adversarial examples pose a security threat to many critical systems built on neural networks. Given that deterministic robustness often comes with significantly reduced accuracy, probabilistic robustness (i.e., the probability of having the same label within a vicinity is $\ge 1-\kappa$) has been proposed as a promising way of achieving robustness whilst maintaining accuracy. However, existing training methods for probabilistic robustness still experience non-trivial accuracy loss. It is unclear whether there is an upper bound on the accuracy when optimising towards probabilistic robustness, and whether there is a certain relationship between $\kappa$ and this bound. This work studies these problems from a Bayes error perspective. We find that while Bayes uncertainty does affect probabilistic robustness, its impact is smaller than that on deterministic robustness. This reduced Bayes uncertainty allows a higher upper bound on probabilistic robust accuracy than that on deterministic robust accuracy. Further, we prove that with optimal probabilistic robustness, each probabilistically robust input is also deterministically robust in a smaller vicinity. We also show that voting within the vicinity always improves probabilistic robust accuracy and that the upper bound of probabilistic robust accuracy monotonically increases as $\kappa$ grows. Our empirical findings also align with our results.
[ "['Ruihan Zhang' 'Jun Sun']" ]
null
null
2405.14925
null
null
http://arxiv.org/pdf/2405.14925v1
2024-05-23T17:58:28Z
2024-05-23T17:58:28Z
PILOT: Equivariant diffusion for pocket conditioned de novo ligand generation with multi-objective guidance via importance sampling
The generation of ligands that both are tailored to a given protein pocket and exhibit a range of desired chemical properties is a major challenge in structure-based drug design. Here, we propose an in-silico approach for the $\textit{de novo}$ generation of 3D ligand structures using the equivariant diffusion model PILOT, combining pocket conditioning with large-scale pre-training and property guidance. Its multi-objective trajectory-based importance sampling strategy is designed to direct the model towards molecules that not only exhibit desired characteristics, such as increased binding affinity for a given protein pocket, but also maintain high synthetic accessibility. This ensures the practicality of sampled molecules, thus maximizing their potential for the drug discovery pipeline. PILOT significantly outperforms existing methods across various metrics on the common benchmark dataset CrossDocked2020. Moreover, we employ PILOT to generate novel ligands for unseen protein pockets from the Kinodata-3D dataset, which encompasses a substantial portion of the human kinome. The generated structures exhibit predicted $IC_{50}$ values indicative of potent biological activity, which highlights the potential of PILOT as a powerful tool for structure-based drug design.
[ "['Julian Cremer' 'Tuan Le' 'Frank Noé' 'Djork-Arné Clevert'\n 'Kristof T. Schütt']" ]
null
null
2405.14930
null
null
http://arxiv.org/pdf/2405.14930v1
2024-05-23T18:00:00Z
2024-05-23T18:00:00Z
AstroPT: Scaling Large Observation Models for Astronomy
This work presents AstroPT, an autoregressive pretrained transformer developed with astronomical use-cases in mind. The AstroPT models presented here have been pretrained on 8.6 million $512 \times 512$ pixel $grz$-band galaxy postage stamp observations from the DESI Legacy Survey DR8. We train a selection of foundation models of increasing size from 1 million to 2.1 billion parameters, and find that AstroPT follows a similar saturating log-log scaling law to textual models. We also find that the models' performance on downstream tasks as measured by linear probing improves with model size up to the model parameter saturation point. We believe that collaborative community development paves the best route towards realising an open source `Large Observation Model' -- a model trained on data taken from the observational sciences at the scale seen in natural language processing. To this end, we release the source code, weights, and dataset for AstroPT under the MIT license, and invite potential collaborators to join us in collectively building and researching these models.
[ "['Michael J. Smith' 'Ryan J. Roberts' 'Eirini Angeloudi'\n 'Marc Huertas-Company']" ]
null
null
2405.14932
null
null
http://arxiv.org/pdf/2405.14932v1
2024-05-23T18:00:00Z
2024-05-23T18:00:00Z
Fast Inference Using Automatic Differentiation and Neural Transport in Astroparticle Physics
Multi-dimensional parameter spaces are commonly encountered in astroparticle physics theories that attempt to capture novel phenomena. However, they often possess complicated posterior geometries that are expensive to traverse using techniques traditional to this community. Effectively sampling these spaces is crucial to bridge the gap between experiment and theory. Several recent innovations, which are only beginning to make their way into this field, have made navigating such complex posteriors possible. These include GPU acceleration, automatic differentiation, and neural-network-guided reparameterization. We apply these advancements to astroparticle physics experimental results in the context of novel neutrino physics and benchmark their performances against traditional nested sampling techniques. Compared to nested sampling alone, we find that these techniques increase performance for both nested sampling and Hamiltonian Monte Carlo, accelerating inference by factors of $\sim 100$ and $\sim 60$, respectively. As nested sampling also evaluates the Bayesian evidence, these advancements can be exploited to improve model comparison performance while retaining compatibility with existing implementations that are widely used in the natural sciences.
[ "['Dorian W. P. Amaral' 'Shixiao Liang' 'Juehang Qin' 'Christopher Tunnell']" ]
null
null
2405.14953
null
null
http://arxiv.org/pdf/2405.14953v1
2024-05-23T18:01:11Z
2024-05-23T18:01:11Z
Mallows-DPO: Fine-Tune Your LLM with Preference Dispersions
Direct Preference Optimization (DPO) has recently emerged as a popular approach to improve reinforcement learning with human feedback (RLHF), leading to better techniques to fine-tune large language models (LLM). A weakness of DPO, however, lies in its lack of capability to characterize the diversity of human preferences. Inspired by Mallows' theory of preference ranking, we develop in this paper a new approach, the Mallows-DPO. A distinct feature of this approach is a dispersion index, which reflects the dispersion of human preference to prompts. We show that existing DPO models can be reduced to special cases of this dispersion index, thus unified with Mallows-DPO. More importantly, we demonstrate (empirically) how to use this dispersion index to enhance the performance of DPO in a broad array of benchmark tasks, from synthetic bandit selection to controllable generations and dialogues, while maintaining great generalization capabilities.
[ "['Haoxian Chen' 'Hanyang Zhao' 'Henry Lam' 'David Yao' 'Wenpin Tang']" ]
null
null
2405.14956
null
null
http://arxiv.org/pdf/2405.14956v1
2024-05-23T18:07:38Z
2024-05-23T18:07:38Z
Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning
Deep reinforcement learning agents are prone to goal misalignments. The black-box nature of their policies hinders the detection and correction of such misalignments, and the trust necessary for real-world deployment. So far, solutions learning interpretable policies are inefficient or require many human priors. We propose INTERPRETER, a fast distillation method producing INTerpretable Editable tRee Programs for ReinforcEmenT lEaRning. We empirically demonstrate that INTERPRETER compact tree programs match oracles across a diverse set of sequential decision tasks and evaluate the impact of our design choices on interpretability and performances. We show that our policies can be interpreted and edited to correct misalignments on Atari games and to explain real farming strategies.
[ "['Hector Kohler' 'Quentin Delfosse' 'Riad Akrour' 'Kristian Kersting'\n 'Philippe Preux']" ]
null
null
2405.14957
null
null
http://arxiv.org/pdf/2405.14957v1
2024-05-23T18:09:16Z
2024-05-23T18:09:16Z
Understanding the dynamics of the frequency bias in neural networks
Recent works have shown that traditional Neural Network (NN) architectures display a marked frequency bias in the learning process. Namely, the NN first learns the low-frequency features before learning the high-frequency ones. In this study, we rigorously develop a partial differential equation (PDE) that unravels the frequency dynamics of the error for a 2-layer NN in the Neural Tangent Kernel regime. Furthermore, using this insight, we explicitly demonstrate how an appropriate choice of distributions for the initialization weights can eliminate or control the frequency bias. We focus our study on the Fourier Features model, an NN where the first layer has sine and cosine activation functions, with frequencies sampled from a prescribed distribution. In this setup, we experimentally validate our theoretical results and compare the NN dynamics to the solution of the PDE using the finite element method. Finally, we empirically show that the same principle extends to multi-layer NNs.
[ "['Juan Molina' 'Mircea Petrache' 'Francisco Sahli Costabal'\n 'Matías Courdurier']" ]
null
null
2405.14961
null
null
http://arxiv.org/pdf/2405.14961v1
2024-05-23T18:11:14Z
2024-05-23T18:11:14Z
SFDDM: Single-fold Distillation for Diffusion models
While diffusion models effectively generate remarkable synthetic images, a key limitation is the inference inefficiency, requiring numerous sampling steps. To accelerate inference and maintain high-quality synthesis, teacher-student distillation is applied to compress the diffusion models in a progressive and binary manner by retraining, e.g., reducing the 1024-step model to a 128-step model in 3 folds. In this paper, we propose a single-fold distillation algorithm, SFDDM, which can flexibly compress the teacher diffusion model into a student model of any desired step, based on reparameterization of the intermediate inputs from the teacher model. To train the student diffusion, we minimize not only the output distance but also the distance between the distributions of the hidden variables of the teacher and student models. Extensive experiments on four datasets demonstrate that our student model trained by the proposed SFDDM is able to sample high-quality data with steps reduced to as little as approximately 1%, thus saving inference time. Our remarkable performance highlights that SFDDM effectively transfers knowledge in single-fold distillation, achieving semantic consistency and meaningful image interpolation.
[ "['Chi Hong' 'Jiyue Huang' 'Robert Birke' 'Dick Epema' 'Stefanie Roos'\n 'Lydia Y. Chen']" ]
null
null
2405.14973
null
null
http://arxiv.org/pdf/2405.14973v1
2024-05-23T18:19:47Z
2024-05-23T18:19:47Z
Two-Stage ML-Guided Decision Rules for Sequential Decision Making under Uncertainty
Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains. Some SDMU applications are naturally modeled as Multistage Stochastic Optimization Problems (MSPs), but the resulting optimizations are notoriously challenging from a computational standpoint. Under assumptions of convexity and stage-wise independence of the uncertainty, the resulting optimization can be solved efficiently using Stochastic Dual Dynamic Programming (SDDP). Two-stage Linear Decision Rules (TS-LDRs) have been proposed to solve MSPs without the stage-wise independence assumption. TS-LDRs are computationally tractable, but using a policy that is a linear function of past observations is typically not suitable for non-convex environments arising, for example, in energy systems. This paper introduces a novel approach, Two-Stage General Decision Rules (TS-GDR), to generalize the policy space beyond linear functions, making them suitable for non-convex environments. TS-GDR is a self-supervised learning algorithm that trains the nonlinear decision rules using stochastic gradient descent (SGD); its forward passes solve the policy implementation optimization problems, and the backward passes leverage duality theory to obtain closed-form gradients. The effectiveness of TS-GDR is demonstrated through an instantiation using Deep Recurrent Neural Networks named Two-Stage Deep Decision Rules (TS-DDR). The method inherits the flexibility and computational performance of Deep Learning methodologies to solve SDMU problems generally tackled through large-scale optimization techniques. Applied to the Long-Term Hydrothermal Dispatch (LTHD) problem using actual power system data from Bolivia, the TS-DDR not only enhances solution quality but also significantly reduces computation times by several orders of magnitude.
[ "['Andrew Rosemberg' 'Alexandre Street' 'Davi M. Valladão'\n 'Pascal Van Hentenryck']" ]
null
null
2405.14981
null
null
http://arxiv.org/pdf/2405.14981v1
2024-05-23T18:35:46Z
2024-05-23T18:35:46Z
MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective
The growing richness of large-scale datasets has been crucial in driving the rapid advancement and wide adoption of machine learning technologies. The massive collection and usage of data, however, pose an increasing risk for people's private and sensitive information due to either inadvertent mishandling or malicious exploitation. Besides legislative solutions, many technical approaches have been proposed towards data privacy protection. However, they bear various limitations such as leading to degraded data availability and utility, or relying on heuristics and lacking solid theoretical bases. To overcome these limitations, we propose a formal information-theoretic definition for this utility-preserving privacy protection problem, and design a data-driven learnable data transformation framework that is capable of selectively suppressing sensitive attributes from target datasets while preserving the other useful attributes, regardless of whether or not they are known in advance or explicitly annotated for preservation. We provide rigorous theoretical analyses on the operational bounds for our framework, and carry out comprehensive experimental evaluations using datasets of a variety of modalities, including facial images, voice audio clips, and human activity motion sensor signals. Results demonstrate the effectiveness and generalizability of our method under various configurations on a multitude of tasks.
[ "['Yizhuo Chen' 'Chun-Fu Chen' 'Hsiang Hsu' 'Shaohan Hu' 'Marco Pistoia'\n 'Tarek Abdelzaher']" ]
null
null
2405.14982
null
null
http://arxiv.org/pdf/2405.14982v1
2024-05-23T18:37:00Z
2024-05-23T18:37:00Z
In-context Time Series Predictor
Recent Transformer-based large language models (LLMs) demonstrate in-context learning ability to perform various functions based solely on the provided context, without updating model parameters. To fully utilize the in-context capabilities in time series forecasting (TSF) problems, unlike previous Transformer-based or LLM-based time series forecasting methods, we reformulate "time series forecasting tasks" as input tokens by constructing a series of (lookback, future) pairs within the tokens. This method aligns more closely with the inherent in-context mechanisms and is more parameter-efficient, without the need to use pre-trained LLM parameters. Furthermore, it addresses issues such as overfitting in existing Transformer-based TSF models, consistently achieving better performance across full-data, few-shot, and zero-shot settings compared to previous architectures.
[ "['Jiecheng Lu' 'Yan Sun' 'Shihao Yang']" ]
null
null
2405.14986
null
null
http://arxiv.org/abs/2405.14986v1
2024-05-23T18:39:33Z
2024-05-23T18:39:33Z
Hand bone age estimation using divide and conquer strategy and lightweight convolutional neural networks
Estimating the bone age of children is very important for diagnosing growth defects and related diseases, and for estimating the final height that children reach after maturity. For this reason, it is widely used in different countries. Traditional methods estimate bone age by comparing atlas images with radiographic images of the left hand, which is time-consuming and error-prone. Much research has been done on estimating bone age using deep neural network models; our effort has been to improve the accuracy and speed of this process with the introduced approach. After creating and analyzing our initial model, we focused on preprocessing, making the inputs smaller and increasing their quality. We selected small regions of hand radiographs and estimated the bone age only from these regions. By doing this, we improved bone age estimation accuracy even further than what was achieved in related works, without increasing the required computational resources. We reached a Mean Absolute Error (MAE) of 3.90 months in the range of 0-20 years and an MAE of 3.84 months in the range of 1-18 years on the RSNA test set.
[ "['Amin Ahmadi Kasani' 'Hedieh Sajedi']" ]
null
null
2405.14992
null
null
http://arxiv.org/pdf/2405.14992v1
2024-05-23T18:51:47Z
2024-05-23T18:51:47Z
Linking In-context Learning in Transformers to Human Episodic Memory
Understanding the connections between artificial and biological intelligent systems can reveal fundamental principles underlying general intelligence. While many artificial intelligence (AI) models have a neuroscience counterpart, such connections are largely missing in Transformer models and the self-attention mechanism. Here, we examine the relationship between attention heads and human episodic memory. We focus on the induction heads, which contribute to the in-context learning capabilities of Transformer-based large language models (LLMs). We demonstrate that induction heads are behaviorally, functionally, and mechanistically similar to the contextual maintenance and retrieval (CMR) model of human episodic memory. Our analyses of LLMs pre-trained on extensive text data show that CMR-like heads often emerge in the intermediate model layers and that their behavior qualitatively mirrors the memory biases seen in humans. Our findings uncover a parallel between the computational mechanisms of LLMs and human memory, offering valuable insights into both research fields.
[ "['Li Ji-An' 'Corey Y. Zhou' 'Marcus K. Benna' 'Marcelo G. Mattar']" ]
null
null
2405.14995
null
null
http://arxiv.org/pdf/2405.14995v1
2024-05-23T18:56:46Z
2024-05-23T18:56:46Z
Lower Bound on the Greedy Approximation Ratio for Adaptive Submodular Cover
We show that the greedy algorithm for adaptive-submodular cover has an approximation ratio of at least $1.3\cdot(1+\ln Q)$. Moreover, the instance demonstrating this gap has $Q=1$. Thus, it invalidates a prior result in the paper "Adaptive Submodularity: A New Approach to Active Learning and Stochastic Optimization" by Golovin and Krause, which claimed a $(1+\ln Q)^2$ approximation ratio for the same algorithm.
[ "['Blake Harris' 'Viswanath Nagarajan']" ]
null
null
2405.15002
null
null
http://arxiv.org/pdf/2405.15002v1
2024-05-23T19:09:50Z
2024-05-23T19:09:50Z
Private Regression via Data-Dependent Sufficient Statistic Perturbation
Sufficient statistic perturbation (SSP) is a widely used method for differentially private linear regression. SSP adopts a data-independent approach where privacy noise from a simple distribution is added to sufficient statistics. However, sufficient statistics can often be expressed as linear queries and better approximated by data-dependent mechanisms. In this paper we introduce data-dependent SSP for linear regression based on post-processing privately released marginals, and find that it outperforms state-of-the-art data-independent SSP. We extend this result to logistic regression by developing an approximate objective that can be expressed in terms of sufficient statistics, resulting in a novel and highly competitive SSP approach for logistic regression. We also make a connection to synthetic data for machine learning: for models with sufficient statistics, training on synthetic data corresponds to data-dependent SSP, with the overall utility determined by how well the mechanism answers these linear queries.
[ "['Cecilia Ferrando' 'Daniel Sheldon']" ]
null
null
2405.15006
null
null
http://arxiv.org/pdf/2405.15006v1
2024-05-23T19:23:09Z
2024-05-23T19:23:09Z
Path-metrics, pruning, and generalization
Analyzing the behavior of ReLU neural networks often hinges on understanding the relationships between their parameters and the functions they implement. This paper proves a new bound on function distances in terms of the so-called path-metrics of the parameters. Since this bound is intrinsically invariant with respect to the rescaling symmetries of the networks, it sharpens previously known bounds. It is also, to the best of our knowledge, the first bound of its kind that is broadly applicable to modern networks such as ResNets, VGGs, U-nets, and many more. In contexts such as network pruning and quantization, the proposed path-metrics can be efficiently computed using only two forward passes. Besides its intrinsic theoretical interest, the bound yields not only novel theoretical generalization bounds, but also a promising proof of concept for rescaling-invariant pruning.
[ "['Antoine Gonon' 'Nicolas Brisebarre' 'Elisa Riccietti' 'Rémi Gribonval']" ]
null
null
2405.15007
null
null
http://arxiv.org/pdf/2405.15007v1
2024-05-23T19:23:40Z
2024-05-23T19:23:40Z
RE-Adapt: Reverse Engineered Adaptation of Large Language Models
We introduce RE-Adapt, an approach to fine-tuning large language models on new domains without degrading any pre-existing instruction-tuning. We reverse engineer an adapter which isolates what an instruction-tuned model has learned beyond its corresponding pretrained base model. Importantly, this requires no additional data or training. We can then fine-tune the base model on a new domain and readapt it to instruction following with the reverse engineered adapter. RE-Adapt and our low-rank variant LoRE-Adapt both outperform other methods of fine-tuning, across multiple popular LLMs and datasets, even when the models are used in conjunction with retrieval-augmented generation.
[ "['William Fleshman' 'Benjamin Van Durme']" ]
null
null
2405.15010
null
null
http://arxiv.org/pdf/2405.15010v1
2024-05-23T19:29:38Z
2024-05-23T19:29:38Z
Polyak Meets Parameter-free Clipped Gradient Descent
Gradient descent and its variants are de facto standard algorithms for training machine learning models. As gradient descent is sensitive to its hyperparameters, we need to tune them carefully using a grid search, which is time-consuming, especially when multiple hyperparameters exist. Recently, parameter-free methods that adjust the hyperparameters on the fly have been studied. However, existing work has only studied parameter-free methods for the stepsize; parameter-free methods for other hyperparameters have not been explored. For instance, the gradient clipping threshold is also a crucial hyperparameter, in addition to the stepsize, for preventing gradient explosion, but none of the existing studies investigated parameter-free methods for clipped gradient descent. In this work, we study parameter-free methods for clipped gradient descent. Specifically, we propose Inexact Polyak Stepsize, which converges to the optimal solution without any hyperparameter tuning, and whose convergence rate is asymptotically independent of $L$ under the $L$-smooth and $(L_0, L_1)$-smooth assumptions on the loss function, matching that of clipped gradient descent with well-tuned hyperparameters. We numerically validated our convergence results using a synthetic function and demonstrated the effectiveness of our proposed methods using LSTM, Nano-GPT, and T5.
[ "['Yuki Takezawa' 'Han Bao' 'Ryoma Sato' 'Kenta Niwa' 'Makoto Yamada']" ]
null
null
2405.15012
null
null
http://arxiv.org/pdf/2405.15012v1
2024-05-23T19:35:03Z
2024-05-23T19:35:03Z
Extracting Prompts by Inverting LLM Outputs
We consider the problem of language model inversion: given outputs of a language model, we seek to extract the prompt that generated these outputs. We develop a new black-box method, output2prompt, that learns to extract prompts without access to the model's logits and without adversarial or jailbreaking queries. In contrast to previous work, output2prompt only needs outputs of normal user queries. To improve memory efficiency, output2prompt employs a new sparse encoding technique. We measure the efficacy of output2prompt on a variety of user and system prompts and demonstrate zero-shot transferability across different LLMs.
[ "['Collin Zhang' 'John X. Morris' 'Vitaly Shmatikov']" ]
null
null
2405.15013
null
null
http://arxiv.org/pdf/2405.15013v1
2024-05-23T19:36:10Z
2024-05-23T19:36:10Z
Make Inference Faster: Efficient GPU Memory Management for Butterfly Sparse Matrix Multiplication
This paper is the first to assess the state of existing sparse matrix multiplication algorithms on GPU for the butterfly structure, a promising form of sparsity. This is achieved through a comprehensive benchmark that can be easily modified to add a new implementation. The goal is to provide a simple tool for users to select the optimal implementation based on their settings. Using this benchmark, we find that existing implementations spend up to 50% of their total runtime on memory rewriting operations. We show that these memory operations can be optimized by introducing a new CUDA kernel that minimizes the transfers between the different levels of GPU memory, achieving a median speed-up factor of 1.4× while also reducing energy consumption (median of 0.85×). We also demonstrate the broader significance of our results by showing how the new kernel can speed up the inference of neural networks.
[ "['Antoine Gonon' 'Léon Zheng' 'Pascal Carrivain' 'Quoc-Tung Le']" ]
null
null
2405.15018
null
null
http://arxiv.org/pdf/2405.15018v2
2024-06-12T00:25:46Z
2024-05-23T19:43:45Z
What Variables Affect Out-Of-Distribution Generalization in Pretrained Models?
Embeddings produced by pre-trained deep neural networks (DNNs) are widely used; however, their efficacy for downstream tasks can vary widely. We study the factors influencing out-of-distribution (OOD) generalization of pre-trained DNN embeddings through the lens of the tunnel effect hypothesis, which suggests deeper DNN layers compress representations and hinder OOD performance. Contrary to earlier work, we find the tunnel effect is not universal. Based on 10,584 linear probes, we study the conditions that mitigate the tunnel effect by varying DNN architecture, training dataset, image resolution, and augmentations. We quantify each variable's impact using a novel SHAP analysis. Our results emphasize the danger of generalizing findings from toy datasets to broader contexts.
[ "['Md Yousuf Harun' 'Kyungbok Lee' 'Jhair Gallardo' 'Giri Krishnan'\n 'Christopher Kanan']" ]
null
null
2405.15019
null
null
http://arxiv.org/pdf/2405.15019v1
2024-05-23T19:44:03Z
2024-05-23T19:44:03Z
Agentic Skill Discovery
Language-conditioned robotic skills make it possible to apply the high-level reasoning of Large Language Models (LLMs) to low-level robotic control. A remaining challenge is to acquire a diverse set of fundamental skills. Existing approaches either manually decompose a complex task into atomic robotic actions in a top-down fashion, or bootstrap as many combinations as possible in a bottom-up fashion to cover a wider range of task possibilities. These decompositions or combinations, however, require an initial skill library. For example, a "grasping" capability can never emerge from a skill library containing only diverse "pushing" skills. Existing skill discovery techniques with reinforcement learning acquire skills by an exhaustive exploration but often yield non-meaningful behaviors. In this study, we introduce a novel framework for skill discovery that is entirely driven by LLMs. The framework begins with an LLM generating task proposals based on the provided scene description and the robot's configurations, aiming to incrementally acquire new skills upon task completion. For each proposed task, a series of reinforcement learning processes are initiated, utilizing reward and success determination functions sampled by the LLM to develop the corresponding policy. The reliability and trustworthiness of learned behaviors are further ensured by an independent vision-language model. We show that starting with zero skill, the ASD skill library emerges and expands to more and more meaningful and reliable skills, enabling the robot to efficiently further propose and complete advanced tasks. The project page can be found at: https://agentic-skill-discovery.github.io.
[ "['Xufeng Zhao' 'Cornelius Weber' 'Stefan Wermter']" ]
null
null
2405.15025
null
null
http://arxiv.org/pdf/2405.15025v1
2024-05-23T20:01:17Z
2024-05-23T20:01:17Z
OAC: Output-adaptive Calibration for Accurate Post-training Quantization
Deployment of Large Language Models (LLMs) has major computational costs, due to their rapidly expanding size. Compression of LLMs reduces the memory footprint, latency, and energy required for their inference. Post-training Quantization (PTQ) techniques have been developed to compress LLMs while avoiding expensive re-training. Most PTQ approaches formulate the quantization error based on a layer-wise $\ell_2$ loss, ignoring the model output. Then, each layer is calibrated using its layer-wise Hessian to update the weights towards minimizing the $\ell_2$ quantization error. The Hessian is also used for detecting the weights most salient to quantization. Such PTQ approaches are prone to accuracy drops in low-precision quantization. We propose Output-adaptive Calibration (OAC) to incorporate the model output in the calibration process. We formulate the quantization error based on the distortion of the output cross-entropy loss. OAC approximates the output-adaptive Hessian for each layer under reasonable assumptions to reduce the computational complexity. The output-adaptive Hessians are used to update the weight matrices and to detect the salient weights towards maintaining the model output. Our proposed method outperforms state-of-the-art baselines such as SpQR and BiLLM, especially at extremely low-precision (2-bit and binary) quantization.
[ "['Ali Edalati' 'Alireza Ghaffari' 'Masoud Asgharian' 'Lu Hou'\n 'Boxing Chen' 'Vahid Partovi Nia']" ]
null
null
2405.15031
null
null
http://arxiv.org/pdf/2405.15031v1
2024-05-23T20:10:29Z
2024-05-23T20:10:29Z
Amortized nonmyopic active search via deep imitation learning
Active search formalizes a specialized active learning setting where the goal is to collect members of a rare, valuable class. The state-of-the-art algorithm approximates the optimal Bayesian policy in a budget-aware manner, and has been shown to achieve impressive empirical performance in previous work. However, even this approximate policy has superlinear computational complexity with respect to the size of the search problem, rendering its application impractical in large spaces or in real-time systems where decisions must be made quickly. We study the amortization of this policy by training a neural network to learn to search. To circumvent the difficulty of learning from scratch, we appeal to imitation learning techniques to mimic the behavior of the expert, expensive-to-compute policy. Our policy network, trained on synthetic data, learns a beneficial search strategy that yields nonmyopic decisions carefully balancing exploration and exploitation. Extensive experiments demonstrate that our policy achieves competitive performance on real-world tasks, closely approximating the expert's at a fraction of the cost, while outperforming cheaper baselines.
[ "['Quan Nguyen' 'Anindya Sarkar' 'Roman Garnett']" ]
null
null
2405.15036
null
null
http://arxiv.org/pdf/2405.15036v1
2024-05-23T20:15:23Z
2024-05-23T20:15:23Z
Input-driven circuit reconfiguration in critical recurrent neural networks
Changing a circuit dynamically, without actually changing the hardware itself, is called reconfiguration, and is of great importance due to its manifold technological applications. Circuit reconfiguration appears to be a feature of the cerebral cortex, and hence understanding the neuroarchitectural and dynamical features underlying self-reconfiguration may prove key to elucidate brain function. We present a very simple single-layer recurrent network, whose signal pathways can be reconfigured "on the fly" using only its inputs, with no changes to its synaptic weights. We use the low spatio-temporal frequencies of the input to landscape the ongoing activity, which in turn permits or denies the propagation of traveling waves. This mechanism uses the inherent properties of dynamically-critical systems, which we guarantee through unitary convolution kernels. We show this network solves the classical connectedness problem, by allowing signal propagation only along the regions to be evaluated for connectedness and forbidding it elsewhere.
[ "['Marcelo O. Magnasco']" ]
null
null
2405.15039
null
null
http://arxiv.org/pdf/2405.15039v1
2024-05-23T20:36:10Z
2024-05-23T20:36:10Z
CEEBERT: Cross-Domain Inference in Early Exit BERT
Pre-trained Language Models (PLMs), like BERT, with self-supervision objectives exhibit remarkable performance and generalization across various tasks. However, they suffer from inference latency due to their large size. To address this issue, side branches are attached at intermediate layers, enabling early inference of samples without requiring them to pass through all layers. However, the challenge is to decide at which layer to exit each sample so that accuracy and latency are balanced. Moreover, the distribution of the samples to be inferred may differ from that used for training, necessitating cross-domain adaptation. We propose an online learning algorithm named Cross-Domain Inference in Early Exit BERT (CeeBERT) that dynamically determines early exits of samples based on the level of confidence at each exit point. CeeBERT learns optimal thresholds from domain-specific confidence observed at intermediate layers on the fly, eliminating the need for labeled data. Experimental results on five distinct datasets with BERT and ALBERT models demonstrate CeeBERT's ability to improve latency by reducing unnecessary computations with minimal drop in performance. By adapting its threshold values, CeeBERT can speed up the BERT/ALBERT models by $2\times$-$3.5\times$ with minimal drop in accuracy.
[ "['Divya Jyoti Bajpai' 'Manjesh Kumar Hanawal']" ]
null
null
2405.15047
null
null
http://arxiv.org/pdf/2405.15047v1
2024-05-23T20:51:22Z
2024-05-23T20:51:22Z
Credal Wrapper of Model Averaging for Uncertainty Estimation on Out-Of-Distribution Detection
This paper presents an innovative approach, called credal wrapper, to formulating a credal set representation of model averaging for Bayesian neural networks (BNNs) and deep ensembles, capable of improving uncertainty estimation in classification tasks. Given a finite collection of single distributions derived from BNNs or deep ensembles, the proposed approach extracts an upper and a lower probability bound per class, acknowledging the epistemic uncertainty due to the availability of a limited amount of sampled predictive distributions. Such probability intervals over classes can be mapped on a convex set of probabilities (a 'credal set') from which, in turn, a unique prediction can be obtained using a transformation called 'intersection probability transformation'. In this article, we conduct extensive experiments on multiple out-of-distribution (OOD) detection benchmarks, encompassing various dataset pairs (CIFAR10/100 vs SVHN/Tiny-ImageNet, CIFAR10 vs CIFAR10-C, CIFAR100 vs CIFAR100-C and ImageNet vs ImageNet-O) and using different network architectures (such as VGG16, Res18/50, EfficientNet B2, and ViT Base). Compared to BNN and deep ensemble baselines, the proposed credal representation methodology exhibits superior performance in uncertainty estimation and achieves lower expected calibration error on OOD samples.
[ "['Kaizheng Wang' 'Fabio Cuzzolin' 'Keivan Shariatmadar' 'David Moens'\n 'Hans Hallez']" ]
null
null
2405.15050
null
null
http://arxiv.org/pdf/2405.15050v1
2024-05-23T20:58:33Z
2024-05-23T20:58:33Z
Provably Efficient Reinforcement Learning for Infinite-Horizon Average-Reward Linear MDPs
We resolve the open problem of designing a computationally efficient algorithm for infinite-horizon average-reward linear Markov Decision Processes (MDPs) with $\widetilde{O}(\sqrt{T})$ regret. Previous approaches with $\widetilde{O}(\sqrt{T})$ regret either suffer from computational inefficiency or require strong assumptions on dynamics, such as ergodicity. In this paper, we approximate the average-reward setting by the discounted setting and show that running an optimistic value iteration-based algorithm for learning the discounted setting achieves $\widetilde{O}(\sqrt{T})$ regret when the discounting factor $\gamma$ is tuned appropriately. The challenge in the approximation approach is to get a regret bound with a sharp dependency on the effective horizon $1/(1-\gamma)$. We use a computationally efficient clipping operator that constrains the span of the optimistic state value function estimate to achieve a sharp regret bound in terms of the effective horizon, which leads to $\widetilde{O}(\sqrt{T})$ regret.
[ "['Kihyuk Hong' 'Yufan Zhang' 'Ambuj Tewari']" ]
null
null
2405.15052
null
null
http://arxiv.org/pdf/2405.15052v2
2024-06-28T19:39:45Z
2024-05-23T21:00:53Z
Revisiting MoE and Dense Speed-Accuracy Comparisons for LLM Training
Mixture-of-Experts (MoE) enjoys performance gains by increasing model capacity while keeping computation cost constant. When comparing MoE to dense models, prior work typically adopts the following setting: 1) use FLOPs or activated parameters as a measure of model complexity; 2) train all models to the same number of tokens. We argue that this setting favors MoE, as FLOPs and activated parameters do not accurately measure the communication overhead in sparse layers, leading to a larger actual training budget for MoE. In this work, we revisit the settings by adopting step time as a more accurate measure of model complexity, and by determining the total compute budget under the Chinchilla compute-optimal settings. To efficiently run MoE on modern accelerators, we adopt a 3D sharding method that keeps the dense-to-MoE step time increase within a healthy range. We evaluate MoE and dense LLMs on a set of nine 0-shot and two 1-shot English tasks, as well as MMLU 5-shot and GSM8K 8-shot, across three model scales at 6.4B, 12.6B, and 29.6B. Experimental results show that even under these settings, MoE consistently outperforms dense LLMs on the speed-accuracy trade-off curve with meaningful gaps. Our full model implementation and sharding strategy has been released at https://github.com/apple/axlearn
[ "['Xianzhi Du' 'Tom Gunter' 'Xiang Kong' 'Mark Lee' 'Zirui Wang'\n 'Aonan Zhang' 'Nan Du' 'Ruoming Pang']" ]
null
null
2405.15054
null
null
http://arxiv.org/pdf/2405.15054v1
2024-05-23T21:03:33Z
2024-05-23T21:03:33Z
Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning
The study of behavioral diversity in Multi-Agent Reinforcement Learning (MARL) is a nascent yet promising field. In this context, the present work deals with the question of how to control the diversity of a multi-agent system. With no existing approaches to control diversity to a set value, current solutions focus on blindly promoting it via intrinsic rewards or additional loss functions, effectively changing the learning objective and lacking a principled measure for it. To address this, we introduce Diversity Control (DiCo), a method able to control diversity to an exact value of a given metric by representing policies as the sum of a parameter-shared component and dynamically scaled per-agent components. By applying constraints directly to the policy architecture, DiCo leaves the learning objective unchanged, enabling its applicability to any actor-critic MARL algorithm. We theoretically prove that DiCo achieves the desired diversity, and we provide several experiments, both in cooperative and competitive tasks, that show how DiCo can be employed as a novel paradigm to increase performance and sample efficiency in MARL. Multimedia results are available on the paper's website: https://sites.google.com/view/dico-marl.
[ "['Matteo Bettini' 'Ryan Kortvelesy' 'Amanda Prorok']" ]
null
null
2405.15055
null
null
http://arxiv.org/pdf/2405.15055v1
2024-05-23T21:05:08Z
2024-05-23T21:05:08Z
CCBNet: Confidential Collaborative Bayesian Networks Inference
Effective large-scale process optimization in manufacturing industries requires close cooperation between different human expert parties who encode their knowledge of related domains as Bayesian network models. For instance, Bayesian networks for domains such as lithography equipment, processes, and auxiliary tools must be conjointly used to effectively identify process optimizations in the semiconductor industry. However, business confidentiality across domains hinders such collaboration, and encourages alternatives to centralized inference. We propose CCBNet, the first Confidentiality-preserving Collaborative Bayesian Network inference framework. CCBNet leverages secret sharing to securely perform analysis on the combined knowledge of party models by joining two novel subprotocols: (i) CABN, which augments probability distributions for features across parties by modeling them into secret shares of their normalized combination; and (ii) SAVE, which aggregates party inference result shares through distributed variable elimination. We extensively evaluate CCBNet via 9 public Bayesian networks. Our results show that CCBNet achieves predictive quality that is similar to the ones of centralized methods while preserving model confidentiality. We further demonstrate that CCBNet scales to challenging manufacturing use cases that involve 16-128 parties in large networks of 223-1003 features, and decreases, on average, computational overhead by 23%, while communicating 71k values per request. Finally, we showcase possible attacks and mitigations for partially reconstructing party networks in the two subprotocols.
[ "['Abele Mălan' 'Jérémie Decouchant' 'Thiago Guzella' 'Lydia Chen']" ]
null
null
2405.15056
null
null
http://arxiv.org/pdf/2405.15056v1
2024-05-23T21:09:36Z
2024-05-23T21:09:36Z
ElastoGen: 4D Generative Elastodynamics
We present ElastoGen, a knowledge-driven model that generates physically accurate and coherent 4D elastodynamics. Instead of relying on petabyte-scale data-driven learning, ElastoGen leverages the principles of physics-in-the-loop and learns from established physical knowledge, such as partial differential equations and their numerical solutions. The core idea of ElastoGen is converting the global differential operator, corresponding to the nonlinear elastodynamic equations, into iterative local convolution-like operations, which naturally fit modern neural networks. Each network module is specifically designed to support this goal rather than functioning as a black box. As a result, ElastoGen is exceptionally lightweight in terms of both training requirements and network scale. Additionally, due to its alignment with physical procedures, ElastoGen efficiently generates accurate dynamics for a wide range of hyperelastic materials and can be easily integrated with upstream and downstream deep modules to enable end-to-end 4D generation.
[ "['Yutao Feng' 'Yintong Shang' 'Xiang Feng' 'Lei Lan' 'Shandian Zhe'\n 'Tianjia Shao' 'Hongzhi Wu' 'Kun Zhou' 'Hao Su' 'Chenfanfu Jiang'\n 'Yin Yang']" ]
null
null
2405.15059
null
null
http://arxiv.org/pdf/2405.15059v1
2024-05-23T21:17:20Z
2024-05-23T21:17:20Z
Message-Passing Monte Carlo: Generating low-discrepancy point sets via Graph Neural Networks
Discrepancy is a well-known measure for the irregularity of the distribution of a point set. Point sets with small discrepancy are called low-discrepancy and are known to efficiently fill the space in a uniform manner. Low-discrepancy points play a central role in many problems in science and engineering, including numerical integration, computer vision, machine perception, computer graphics, machine learning, and simulation. In this work, we present the first machine learning approach to generate a new class of low-discrepancy point sets named Message-Passing Monte Carlo (MPMC) points. Motivated by the geometric nature of generating low-discrepancy point sets, we leverage tools from Geometric Deep Learning and base our model on Graph Neural Networks. We further provide an extension of our framework to higher dimensions, which flexibly allows the generation of custom-made points that emphasize the uniformity in specific dimensions that are primarily important for the particular problem at hand. Finally, we demonstrate that our proposed model achieves state-of-the-art performance superior to previous methods by a significant margin. In fact, MPMC points are empirically shown to be either optimal or near-optimal with respect to the discrepancy for every dimension and the number of points for which the optimal discrepancy can be determined.
[ "['T. Konstantin Rusch' 'Nathan Kirk' 'Michael M. Bronstein'\n 'Christiane Lemieux' 'Daniela Rus']" ]
null
null
2405.15062
null
null
http://arxiv.org/pdf/2405.15062v1
2024-05-23T21:21:40Z
2024-05-23T21:21:40Z
Model-Agnostic Utility-Preserving Biometric Information Anonymization
The recent rapid advancements in both sensing and machine learning technologies have given rise to the universal collection and utilization of people's biometrics, such as fingerprints, voices, retina/facial scans, or gait/motion/gestures data, enabling a wide range of applications including authentication, health monitoring, or much more sophisticated analytics. While providing better user experiences and deeper business insights, the use of biometrics has raised serious privacy concerns due to their intrinsic sensitive nature and the accompanying high risk of leaking sensitive information such as identity or medical conditions. In this paper, we propose a novel modality-agnostic data transformation framework that is capable of anonymizing biometric data by suppressing its sensitive attributes and retaining features relevant to downstream machine learning-based analyses that are of research and business values. We carried out a thorough experimental evaluation using publicly available facial, voice, and motion datasets. Results show that our proposed framework can achieve a high suppression level for sensitive information, while at the same time retain underlying data utility such that subsequent analyses on the anonymized biometric data could still be carried out to yield satisfactory accuracy.
[ "['Chun-Fu Chen' 'Bill Moriarty' 'Shaohan Hu' 'Sean Moran' 'Marco Pistoia'\n 'Vincenzo Piuri' 'Pierangela Samarati']" ]
null
null
2405.15063
null
null
http://arxiv.org/pdf/2405.15063v1
2024-05-23T21:21:59Z
2024-05-23T21:21:59Z
A classification model based on a population of hypergraphs
This paper introduces a novel hypergraph classification algorithm. The use of hypergraphs in this framework has been widely studied. In previous work, hypergraph models are typically constructed using distance- or attribute-based methods. That is, hyperedges are generated by connecting a set of samples which are within a certain distance or share a common attribute. These methods, however, do not often focus on multi-way interactions directly. The algorithm provided in this paper aims to address this problem by constructing hypergraphs which explore multi-way interactions of any order. We also increase the performance and robustness of the algorithm by using a population of hypergraphs. The algorithm is evaluated on two datasets, demonstrating promising performance compared to a generic random forest classification algorithm.
[ "['Samuel Barton' 'Adelle Coster' 'Diane Donovan' 'James Lefevre']" ]
null
null
2405.15065
null
null
http://arxiv.org/pdf/2405.15065v1
2024-05-23T21:25:20Z
2024-05-23T21:25:20Z
Direct Preference Optimization With Unobserved Preference Heterogeneity
RLHF has emerged as a pivotal step in aligning language models with human objectives and values. It typically involves learning a reward model from human preference data and then using reinforcement learning to update the generative model accordingly. In contrast, Direct Preference Optimization (DPO) directly optimizes the generative model with preference data, skipping reinforcement learning. However, both RLHF and DPO assume uniform preferences, overlooking the reality of diverse human annotators. This paper presents a new method to align generative models with varied human preferences. We propose an Expectation-Maximization adaptation to DPO, generating a mixture of models based on latent preference types of the annotators. We then introduce a min-max regret ensemble learning model to produce a single generative model that minimizes worst-case regret among annotator subgroups with similar latent factors. Our algorithms leverage the simplicity of DPO while accommodating diverse preferences. Experimental results validate the effectiveness of our approach in producing equitable generative policies.
[ "['Keertana Chidambaram' 'Karthik Vinay Seetharaman' 'Vasilis Syrgkanis']" ]
null
null
2405.15074
null
null
http://arxiv.org/pdf/2405.15074v1
2024-05-23T21:50:54Z
2024-05-23T21:50:54Z
4+3 Phases of Compute-Optimal Neural Scaling Laws
We consider the three parameter solvable neural scaling model introduced by Maloney, Roberts, and Sully. The model has three parameters: data complexity, target complexity, and model-parameter-count. We use this neural scaling model to derive new predictions about the compute-limited, infinite-data scaling law regime. To train the neural scaling model, we run one-pass stochastic gradient descent on a mean-squared loss. We derive a representation of the loss curves which holds over all iteration counts and improves in accuracy as the model parameter count grows. We then analyze the compute-optimal model-parameter-count, and identify 4 phases (+3 subphases) in the data-complexity/target-complexity phase-plane. The phase boundaries are determined by the relative importance of model capacity, optimizer noise, and embedding of the features. We furthermore derive, with mathematical proof and extensive numerical evidence, the scaling-law exponents in all of these phases, in particular computing the optimal model-parameter-count as a function of floating point operation budget.
[ "['Elliot Paquette' 'Courtney Paquette' 'Lechao Xiao' 'Jeffrey Pennington']" ]
null
null
2405.15079
null
null
http://arxiv.org/pdf/2405.15079v1
2024-05-23T22:00:38Z
2024-05-23T22:00:38Z
A Survey of Distributed Learning in Cloud, Mobile, and Edge Settings
In the era of deep learning (DL), convolutional neural networks (CNNs), and large language models (LLMs), machine learning (ML) models are becoming increasingly complex, demanding significant computational resources for both inference and training stages. To address this challenge, distributed learning has emerged as a crucial approach, employing parallelization across various devices and environments. This survey explores the landscape of distributed learning, encompassing cloud and edge settings. We delve into the core concepts of data and model parallelism, examining how models are partitioned across different dimensions and layers to optimize resource utilization and performance. We analyze various partitioning schemes for different layer types, including fully connected, convolutional, and recurrent layers, highlighting the trade-offs between computational efficiency, communication overhead, and memory constraints. This survey provides valuable insights for future research and development in this rapidly evolving field by comparing and contrasting distributed learning approaches across diverse contexts.
[ "['Madison Threadgill' 'Andreas Gerstlauer']" ]
null
null
2405.15081
null
null
http://arxiv.org/abs/2405.15081v2
2024-06-17T03:28:33Z
2024-05-23T22:07:54Z
Distributed Harmonization: Federated Clustered Batch Effect Adjustment and Generalization
Independent and identically distributed (i.i.d.) data is essential to many data analysis and modeling techniques. In the medical domain, collecting data from multiple sites or institutions is a common strategy that guarantees sufficient clinical diversity, determined by the decentralized nature of medical data. However, data from various sites are easily biased by the local environment or facilities, thereby violating the i.i.d. rule. A common strategy is to harmonize the site bias while retaining important biological information. ComBat is among the most popular harmonization approaches and has recently been extended to handle distributed sites. However, when faced with situations involving newly joined sites in training or evaluating data from unknown/unseen sites, ComBat lacks compatibility and requires retraining with data from all the sites. The retraining leads to significant computational and logistical overhead that is usually prohibitive. In this work, we develop a novel Cluster ComBat harmonization algorithm, which leverages cluster patterns of the data in different sites and greatly advances the usability of ComBat harmonization. We use extensive simulation and real medical imaging data from ADNI to demonstrate the superiority of the proposed approach. Our codes are provided in https://github.com/illidanlab/distributed-cluster-harmonization.
[ "['Bao Hoang' 'Yijiang Pang' 'Siqi Liang' 'Liang Zhan' 'Paul Thompson'\n 'Jiayu Zhou']" ]
null
null
2405.15084
null
null
http://arxiv.org/pdf/2405.15084v1
2024-05-23T22:13:44Z
2024-05-23T22:13:44Z
Efficient Certificates of Anti-Concentration Beyond Gaussians
A set of high dimensional points $X = \{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^d$ in isotropic position is said to be $\delta$-anti-concentrated if for every direction $v$, the fraction of points in $X$ satisfying $|\langle x_i, v \rangle| \leq \delta$ is at most $O(\delta)$. Motivated by applications to list-decodable learning and clustering, recent works have considered the problem of constructing efficient certificates of anti-concentration in the average case, when the set of points $X$ corresponds to samples from a Gaussian distribution. Their certificates played a crucial role in several subsequent works in algorithmic robust statistics on list-decodable learning and settling the robust learnability of arbitrary Gaussian mixtures, yet remain limited to rotationally invariant distributions. This work presents a new (and arguably the most natural) formulation for anti-concentration. Using this formulation, we give quasi-polynomial time verifiable sum-of-squares certificates of anti-concentration that hold for a wide class of non-Gaussian distributions including anti-concentrated bounded product distributions and uniform distributions over $L_p$ balls (and their affine transformations). Consequently, our method upgrades and extends results in algorithmic robust statistics, e.g., list-decodable learning and clustering, to such distributions. Our approach constructs a canonical integer program for anti-concentration and analyzes a sum-of-squares relaxation of it, independent of the intended application. We rely on duality and analyze a pseudo-expectation on large subsets of the input points that take a small value in some direction. Our analysis uses the method of polynomial reweightings to reduce the problem to analyzing only analytically dense or sparse directions.
[ "['Ainesh Bakshi' 'Pravesh Kothari' 'Goutham Rajendran' 'Madhur Tulsiani'\n 'Aravindan Vijayaraghavan']" ]
null
null
2405.15090
null
null
http://arxiv.org/pdf/2405.15090v1
2024-05-23T22:35:11Z
2024-05-23T22:35:11Z
Pure Exploration for Constrained Best Mixed Arm Identification with a Fixed Budget
In this paper, we introduce the constrained best mixed arm identification (CBMAI) problem with a fixed budget. This is a pure exploration problem in a stochastic finite armed bandit model. Each arm is associated with a reward and multiple types of costs from unknown distributions. Unlike the unconstrained best arm identification problem, the optimal solution for the CBMAI problem may be a randomized mixture of multiple arms. The goal thus is to find the best mixed arm that maximizes the expected reward subject to constraints on the expected costs with a given learning budget $N$. We propose a novel, parameter-free algorithm, called the Score Function-based Successive Reject (SFSR) algorithm, that combines the classical successive reject framework with a novel score-function-based rejection criterion based on linear programming theory to identify the optimal support. We provide a theoretical upper bound on the mis-identification (of the support of the best mixed arm) probability and show that it decays exponentially in the budget $N$ and some constants that characterize the hardness of the problem instance. We also develop an information theoretic lower bound on the error probability that shows that these constants appropriately characterize the problem difficulty. We validate this empirically on a number of average and hard instances.
[ "['Dengwang Tang' 'Rahul Jain' 'Ashutosh Nayyar' 'Pierluigi Nuzzo']" ]
null
null
2405.15094
null
null
http://arxiv.org/pdf/2405.15094v1
2024-05-23T22:57:15Z
2024-05-23T22:57:15Z
ULTRA-MC: A Unified Approach to Learning Mixtures of Markov Chains via Hitting Times
This study introduces a novel approach for learning mixtures of Markov chains, a critical process applicable to various fields, including healthcare and the analysis of web users. Existing research has identified a clear divide in methodologies for learning mixtures of discrete and continuous-time Markov chains, while the latter presents additional complexities for recovery accuracy and efficiency. We introduce a unifying strategy for learning mixtures of discrete and continuous-time Markov chains, focusing on hitting times, which are well defined for both types. Specifically, we design a reconstruction algorithm that outputs a mixture which accurately reflects the estimated hitting times and demonstrates resilience to noise. We introduce an efficient gradient-descent approach, specifically tailored to manage the computational complexity and non-symmetric characteristics inherent in the calculation of hitting time derivatives. Our approach is also of significant interest when applied to a single Markov chain, thus extending the methodologies previously established by Hoskins et al. and Wittmann et al. We complement our theoretical work with experiments conducted on synthetic and real-world datasets, providing a comprehensive evaluation of our methodology.
[ "['Fabian Spaeh' 'Konstantinos Sotiropoulos' 'Charalampos E. Tsourakakis']" ]
null
null
2405.15096
null
null
http://arxiv.org/pdf/2405.15096v1
2024-05-23T23:07:01Z
2024-05-23T23:07:01Z
Music Genre Classification: Training an AI model
Music genre classification is an area that utilizes machine learning models and techniques for the processing of audio signals, in which applications range from content recommendation systems to music recommendation systems. In this research I explore various machine learning algorithms for the purpose of music genre classification, using features extracted from audio signals. The systems are, namely, a Multilayer Perceptron (built from scratch), a k-Nearest Neighbours classifier (also built from scratch), a Convolutional Neural Network, and lastly a Random Forest wide model. In order to process the audio signals, feature extraction methods such as the Short-Time Fourier Transform and the extraction of Mel-Frequency Cepstral Coefficients (MFCCs) are performed. Through this extensive research, I aim to assess the robustness of machine learning models for genre classification, and to compare their results.
[ "['Keoikantse Mogonediwa']" ]
null
null
2405.15098
null
null
http://arxiv.org/pdf/2405.15098v1
2024-05-23T23:13:02Z
2024-05-23T23:13:02Z
Magnetic Resonance Image Processing Transformer for General Reconstruction
Purpose: To develop and evaluate a deep learning model for general accelerated MRI reconstruction. Materials and Methods: This retrospective study built a magnetic resonance image processing transformer (MR-IPT) which includes multi-head-tails and a single shared window transformer main body. Three mutations of MR-IPT with different transformer structures were implemented to guide the design of our MR-IPT model. Pre-trained on the MRI set of RadImageNet including 672675 images with multiple anatomy categories, the model was further migrated and evaluated on fastMRI knee dataset with 25012 images for downstream reconstruction tasks. We performed comparison studies with three CNN-based conventional networks in zero- and few-shot learning scenarios. Transfer learning process was conducted on both MR-IPT and CNN networks to further validate the generalizability of MR-IPT. To study the model performance stability, we evaluated our model with various downstream dataset sizes ranging from 10 to 2500 images. Result: The MR-IPT model provided superior performance in multiple downstream tasks compared to conventional CNN networks. MR-IPT achieved a PSNR/SSIM of 26.521/0.6102 (4-fold) and 24.861/0.4996 (8-fold) in 10-epoch learning, surpassing UNet128 at 25.056/0.5832 (4-fold) and 22.984/0.4637 (8-fold). With the same large-scale pre-training, MR-IPT provided a 5% performance boost compared to UNet128 in zero-shot learning in 8-fold and 3% in 4-fold. Conclusion: MR-IPT framework benefits from its transformer-based structure and large-scale pre-training and can serve as a solid backbone in other downstream tasks with zero- and few-shot learning.
[ "['Guoyao Shen' 'Mengyu Li' 'Stephan Anderson' 'Chad W. Farris' 'Xin Zhang']" ]
null
null
2405.15103
null
null
http://arxiv.org/pdf/2405.15103v1
2024-05-23T23:25:46Z
2024-05-23T23:25:46Z
The Rarity of Musical Audio Signals Within the Space of Possible Audio Generation
A white noise signal can access any possible configuration of values, though statistically over many samples tends to a uniform spectral distribution, and is highly unlikely to produce intelligible sound. But how unlikely? The probability that white noise generates a music-like signal over different durations is analyzed, based on some necessary features observed in real music audio signals such as mostly proximate movement and zero crossing rate. Given the mathematical results, the rarity of music as a signal is considered overall. The applicability of this study is not just to show that music has a precious rarity value, but that examination of the size of music relative to the overall size of audio signal space provides information to inform new generations of algorithmic music system (which are now often founded on audio signal generation directly, and may relate to white noise via such machine learning processes as diffusion). Estimated upper bounds on the rarity of music to the size of various physical and musical spaces are compared, to better understand the magnitude of the results (pun intended). Underlying the research are the questions `how much music is still out there?' and `how much music could a machine learning process actually reach?'.
[ "['Nick Collins']" ]
null
null
2405.15105
null
null
http://arxiv.org/pdf/2405.15105v1
2024-05-23T23:27:38Z
2024-05-23T23:27:38Z
Certified Inventory Control of Critical Resources
Inventory control is subject to service-level requirements, in which sufficient stock levels must be maintained despite an unknown demand. We propose a data-driven order policy that certifies any prescribed service level under minimal assumptions on the unknown demand process. The policy achieves this using any online learning method along with integral action. We further propose an inference method that is valid in finite samples. The properties and theoretical guarantees of the method are illustrated using both synthetic and real-world data.
[ "['Ludvig Hult' 'Dave Zachariah' 'Petre Stoica']" ]