Schema (column: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list

categories: null
doi: null
id: 2402.08609
year: null
venue: null
link: http://arxiv.org/pdf/2402.08609v3
updated: 2024-06-26T16:50:01Z
published: 2024-02-13T17:18:56Z
title: Mixtures of Experts Unlock Parameter Scaling for Deep RL
The recent rapid progress in (self) supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally to its size. Analogous scaling laws remain elusive for reinforcement learning domains, however, where increasing the parameter count of a model often hurts its final performance. In this paper, we demonstrate that incorporating Mixture-of-Expert (MoE) modules, and in particular Soft MoEs (Puigcerver et al., 2023), into value-based networks results in more parameter-scalable models, evidenced by substantial performance increases across a variety of training regimes and model sizes. This work thus provides strong empirical evidence towards developing scaling laws for reinforcement learning.
authors: Johan Obando-Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Foerster, Gintare Karolina Dziugaite, Doina Precup, Pablo Samuel Castro
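
To make the abstract above concrete, here is a minimal sketch of a Soft MoE layer in the spirit of Puigcerver et al. (2023): every slot is a convex combination of all input tokens, so routing stays fully differentiable (no top-k dispatch). The layer sizes and two-layer expert MLPs are illustrative choices, not the paper's exact value-network configuration.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Minimal Soft MoE layer: slots are soft mixtures of input tokens."""

    def __init__(self, dim, num_experts=4, slots_per_expert=1):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(dim, num_experts * slots_per_expert))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert

    def forward(self, x):                      # x: (batch, tokens, dim)
        logits = x @ self.phi                  # (batch, tokens, slots)
        dispatch = logits.softmax(dim=1)       # normalize over tokens
        combine = logits.softmax(dim=2)        # normalize over slots
        slots = dispatch.transpose(1, 2) @ x   # (batch, slots, dim)
        slots = slots.view(x.size(0), self.num_experts, self.slots_per_expert, -1)
        outs = torch.stack([f(slots[:, i]) for i, f in enumerate(self.experts)], dim=1)
        outs = outs.flatten(1, 2)              # (batch, slots, dim)
        return combine @ outs                  # (batch, tokens, dim)
```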

categories: null
doi: null
id: 2402.08611
year: null
venue: null
link: http://arxiv.org/pdf/2402.08611v1
updated: 2024-01-16T15:09:53Z
published: 2024-01-16T15:09:53Z
title: A Cost-Sensitive Transformer Model for Prognostics Under Highly Imbalanced Industrial Data
The rapid influx of data-driven models into the industrial sector has been facilitated by the proliferation of sensor technology, enabling the collection of vast quantities of data. However, leveraging these models for failure detection and prognosis poses significant challenges, including issues like missing values and class imbalances. Moreover, the cost sensitivity associated with industrial operations further complicates the application of conventional models in this context. This paper introduces a novel cost-sensitive transformer model developed as part of a systematic workflow, which also integrates a hybrid resampler and a regression-based imputer. After subjecting our approach to rigorous testing using the APS failure dataset from Scania trucks and the SECOM dataset, we observed a substantial enhancement in performance compared to state-of-the-art methods. Moreover, we conduct an ablation study to analyze the contributions of different components in our proposed method. Our findings highlight the potential of our method in addressing the unique challenges of failure prediction in industrial settings, thereby contributing to enhanced reliability and efficiency in industrial operations.
authors: Ali Beikmohammadi, Mohammad Hosein Hamian, Neda Khoeyniha, Tony Lindgren, Olof Steinert, Sindri Magnússon
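
The abstract does not spell out the loss, but a common way to make a classifier cost-sensitive is class-weighted cross-entropy; the sketch below uses the well-known APS Scania challenge costs (10 for a false alarm, 500 for a missed failure) as assumed weights, not the paper's reported settings.

```python
import torch
import torch.nn as nn

# Assumed misclassification costs: missing a failure (class 1) is far more
# expensive than a false alarm, mirroring the APS dataset's cost matrix.
cost_false_positive = 10.0
cost_false_negative = 500.0

weights = torch.tensor([cost_false_positive, cost_false_negative])
criterion = nn.CrossEntropyLoss(weight=weights / weights.sum())

logits = torch.randn(8, 2)          # transformer outputs for a batch
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)    # missed failures dominate the gradient
```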

categories: null
doi: null
id: 2402.08616
year: null
venue: null
link: http://arxiv.org/pdf/2402.08616v2
updated: 2024-07-11T13:45:33Z
published: 2024-02-13T17:32:59Z
title: Adjustment Identification Distance: A gadjid for Causal Structure Learning
Evaluating graphs learned by causal discovery algorithms is difficult: The number of edges that differ between two graphs does not reflect how the graphs differ with respect to the identifying formulas they suggest for causal effects. We introduce a framework for developing causal distances between graphs which includes the structural intervention distance for directed acyclic graphs as a special case. We use this framework to develop improved adjustment-based distances as well as extensions to completed partially directed acyclic graphs and causal orders. We develop new reachability algorithms to compute the distances efficiently and to prove their low polynomial time complexity. In our package gadjid (open source at https://github.com/CausalDisco/gadjid), we provide implementations of our distances; they are orders of magnitude faster with proven lower time complexity than the structural intervention distance and thereby provide a success metric for causal discovery that scales to graph sizes that were previously prohibitive.
authors: Leonard Henckel, Theo Würtzen, Sebastian Weichwald

categories: null
doi: null
id: 2402.08621
year: null
venue: null
link: http://arxiv.org/pdf/2402.08621v2
updated: 2024-05-13T23:14:37Z
published: 2024-02-13T17:42:27Z
title: A Generalized Approach to Online Convex Optimization
In this paper, we analyze the problem of online convex optimization in different settings. We show that any algorithm for online linear optimization with fully adaptive adversaries is an algorithm for online convex optimization. We also show that any such algorithm that requires full-information feedback may be transformed to an algorithm with semi-bandit feedback with comparable regret bound. We further show that algorithms that are designed for fully adaptive adversaries using deterministic semi-bandit feedback can obtain similar bounds using only stochastic semi-bandit feedback when facing oblivious adversaries. We use this to describe general meta-algorithms to convert first order algorithms to zeroth order algorithms with comparable regret bounds. Our framework allows us to analyze online optimization in various settings, such as full-information feedback, bandit feedback, stochastic regret, adversarial regret and various forms of non-stationary regret.
authors: Mohammad Pedramfar, Vaneet Aggarwal

categories: null
doi: null
id: 2402.08631
year: null
venue: null
link: http://arxiv.org/pdf/2402.08631v2
updated: 2024-02-17T16:06:10Z
published: 2024-02-13T17:59:34Z
title: Knowledge Editing on Black-box Large Language Models
Knowledge editing (KE) aims to efficiently and precisely modify the behavior of large language models (LLMs) to update specific knowledge without negatively influencing other knowledge. Current research primarily focuses on white-box LLMs editing, overlooking an important scenario: black-box LLMs editing, where LLMs are accessed through interfaces and only textual output is available. In this paper, we first officially introduce KE on black-box LLMs and then propose a comprehensive evaluation framework to overcome the limitations of existing evaluations that are not applicable to black-box LLMs editing and lack comprehensiveness. To tackle privacy leaks of editing data and style over-editing in current methods, we introduce a novel postEdit framework, resolving privacy concerns through downstream post-processing and maintaining textual style consistency via fine-grained editing to original responses. Experiments and analysis on two benchmarks demonstrate that postEdit outperforms all baselines and achieves strong generalization, especially with huge improvements on style retention (average $+20.82\%\uparrow$).
authors: Xiaoshuai Song, Zhengyang Wang, Keqing He, Guanting Dong, Yutao Mou, Jinxu Zhao, Weiran Xu

categories: null
doi: null
id: 2402.08637
year: null
venue: null
link: http://arxiv.org/pdf/2402.08637v1
updated: 2024-02-13T18:03:56Z
published: 2024-02-13T18:03:56Z
title: Strategizing against No-Regret Learners in First-Price Auctions
We study repeated first-price auctions and general repeated Bayesian games between two players, where one player, the learner, employs a no-regret learning algorithm, and the other player, the optimizer, knowing the learner's algorithm, strategizes to maximize its own utility. For a commonly used class of no-regret learning algorithms called mean-based algorithms, we show that (i) in standard (i.e., full-information) first-price auctions, the optimizer cannot get more than the Stackelberg utility -- a standard benchmark in the literature, but (ii) in Bayesian first-price auctions, there are instances where the optimizer can achieve much higher than the Stackelberg utility. On the other hand, Mansour et al. (2022) showed that a more sophisticated class of algorithms called no-polytope-swap-regret algorithms are sufficient to cap the optimizer's utility at the Stackelberg utility in any repeated Bayesian game (including Bayesian first-price auctions), and they pose the open question whether no-polytope-swap-regret algorithms are necessary to cap the optimizer's utility. For general Bayesian games, under a reasonable and necessary condition, we prove that no-polytope-swap-regret algorithms are indeed necessary to cap the optimizer's utility and thus answer their open question. For Bayesian first-price auctions, we give a simple improvement of the standard algorithm for minimizing the polytope swap regret by exploiting the structure of Bayesian first-price auctions.
authors: Aviad Rubinstein, Junyao Zhao

categories: null
doi: null
id: 2402.08640
year: null
venue: null
link: http://arxiv.org/pdf/2402.08640v2
updated: 2024-03-03T19:08:32Z
published: 2024-02-13T18:09:38Z
title: Forecasting high-impact research topics via machine learning on evolving knowledge graphs
The exponential growth in scientific publications poses a severe challenge for human researchers. It forces attention to more narrow sub-fields, which makes it challenging to discover new impactful research ideas and collaborations outside one's own field. While there are ways to predict a scientific paper's future citation counts, they need the research to be finished and the paper written, usually assessing impact long after the idea was conceived. Here we show how to predict the impact of onsets of ideas that have never been published by researchers. For that, we developed a large evolving knowledge graph built from more than 21 million scientific papers. It combines a semantic network created from the content of the papers and an impact network created from the historic citations of papers. Using machine learning, we can predict the dynamics of the evolving network into the future with high accuracy, and thereby the impact of new research directions. We envision that the ability to predict the impact of new ideas will be a crucial component of future artificial muses that can inspire new impactful and interesting scientific ideas.
authors: Xuemei Gu, Mario Krenn

categories: null
doi: null
id: 2402.08643
year: null
venue: null
link: http://arxiv.org/pdf/2402.08643v1
updated: 2024-02-13T18:20:04Z
published: 2024-02-13T18:20:04Z
title: Learned Image Compression with Text Quality Enhancement
Learned image compression has gained widespread popularity for its efficiency in achieving ultra-low bit-rates. Yet, images containing substantial textual content, particularly screen-content images (SCI), often suffer from text distortion at such compression levels. To address this, we propose to minimize a novel text logit loss designed to quantify the disparity in text between the original and reconstructed images, thereby improving the perceptual quality of the reconstructed text. Through rigorous experimentation across diverse datasets and employing state-of-the-art algorithms, our findings reveal significant enhancements in the quality of reconstructed text upon integration of the proposed loss function with appropriate weighting. Notably, we achieve a Bjontegaard delta (BD) rate of -32.64% for Character Error Rate (CER) and -28.03% for Word Error Rate (WER) on average by applying the text logit loss for two screenshot datasets. Additionally, we present quantitative metrics tailored for evaluating text quality in image compression tasks. Our findings underscore the efficacy and potential applicability of our proposed text logit loss function across various text-aware image compression contexts.
authors: Chih-Yu Lai, Dung Tran, Kazuhito Koishida
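
A sketch of what a text logit loss could look like, assuming a frozen text-recognition network; the `recognizer` below is a hypothetical stand-in, not the paper's model. Logits on the reconstruction are pulled toward the recognizer's output on the original image.

```python
import torch
import torch.nn.functional as F

def text_logit_loss(recognizer, original, reconstructed):
    """Penalize divergence between text-recognition logits of the original
    and the reconstructed image (a stand-in for the paper's exact loss)."""
    with torch.no_grad():
        target = recognizer(original).softmax(dim=-1)   # soft "ground truth"
    pred_logits = recognizer(reconstructed)             # gradients flow to codec
    return F.cross_entropy(
        pred_logits.flatten(0, -2), target.flatten(0, -2), reduction="mean"
    )

# Combined objective, with lambda_text an assumed weighting:
# total = rate_distortion_loss + lambda_text * text_logit_loss(...)
```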

categories: null
doi: null
id: 2402.08645
year: null
venue: null
link: http://arxiv.org/pdf/2402.08645v1
updated: 2024-02-13T18:24:10Z
published: 2024-02-13T18:24:10Z
title: Peeking Behind the Curtains of Residual Learning
The utilization of residual learning has become widespread in deep and scalable neural nets. However, the fundamental principles that contribute to the success of residual learning remain elusive, thus hindering effective training of plain nets with depth scalability. In this paper, we peek behind the curtains of residual learning by uncovering the "dissipating inputs" phenomenon that leads to convergence failure in plain neural nets: the input is gradually compromised through plain layers due to non-linearities, resulting in challenges of learning feature representations. We theoretically demonstrate how plain neural nets degenerate the input to random noise and emphasize the significance of a residual connection that maintains a better lower bound of surviving neurons as a solution. With our theoretical discoveries, we propose "The Plain Neural Net Hypothesis" (PNNH) that identifies the internal path across non-linear layers as the most critical part in residual learning, and establishes a paradigm to support the training of deep plain neural nets devoid of residual connections. We thoroughly evaluate PNNH-enabled CNN architectures and Transformers on popular vision benchmarks, showing on-par accuracy, up to 0.3% higher training throughput, and 2x better parameter efficiency compared to ResNets and vision Transformers.
authors: Tunhou Zhang, Feng Yan, Hai Li, Yiran Chen

categories: null
doi: null
id: 2402.08648
year: null
venue: null
link: http://arxiv.org/pdf/2402.08648v1
updated: 2024-02-13T18:27:53Z
published: 2024-02-13T18:27:53Z
title: Generating Universal Adversarial Perturbations for Quantum Classifiers
Quantum Machine Learning (QML) has emerged as a promising field of research, aiming to leverage the capabilities of quantum computing to enhance existing machine learning methodologies. Recent studies have revealed that, like their classical counterparts, QML models based on Parametrized Quantum Circuits (PQCs) are also vulnerable to adversarial attacks. Moreover, the existence of Universal Adversarial Perturbations (UAPs) in the quantum domain has been demonstrated theoretically in the context of quantum classifiers. In this work, we introduce QuGAP: a novel framework for generating UAPs for quantum classifiers. We conceptualize the notion of additive UAPs for PQC-based classifiers and theoretically demonstrate their existence. We then utilize generative models (QuGAP-A) to craft additive UAPs and experimentally show that quantum classifiers are susceptible to such attacks. Moreover, we formulate a new method for generating unitary UAPs (QuGAP-U) using quantum generative models and a novel loss function based on fidelity constraints. We evaluate the performance of the proposed framework and show that our method achieves state-of-the-art misclassification rates, while maintaining high fidelity between legitimate and adversarial samples.
authors: Gautham Anil, Vishnu Vinod, Apurva Narayan

categories: null
doi: null
id: 2402.08653
year: null
venue: null
link: http://arxiv.org/pdf/2402.08653v3
updated: 2024-02-21T04:35:38Z
published: 2024-02-13T18:33:45Z
title: SAGMAN: Stability Analysis of Graph Neural Networks on the Manifolds
Modern graph neural networks (GNNs) can be sensitive to changes in the input graph structure and node features, potentially resulting in unpredictable behavior and degraded performance. In this work, we introduce a spectral framework known as SAGMAN for examining the stability of GNNs. This framework assesses the distance distortions that arise from the nonlinear mappings of GNNs between the input and output manifolds: when two nearby nodes on the input manifold are mapped (through a GNN model) to two distant ones on the output manifold, it implies a large distance distortion and thus a poor GNN stability. We propose a distance-preserving graph dimension reduction (GDR) approach that utilizes spectral graph embedding and probabilistic graphical models (PGMs) to create low-dimensional input/output graph-based manifolds for meaningful stability analysis. Our empirical evaluations show that SAGMAN effectively assesses the stability of each node when subjected to various edge or feature perturbations, offering a scalable approach for evaluating the stability of GNNs, extending to applications within recommendation systems. Furthermore, we illustrate its utility in downstream tasks, notably in enhancing GNN stability and facilitating adversarial targeted attacks.
authors: Wuxinlin Cheng, Chenhui Deng, Ali Aghdaei, Zhiru Zhang, Zhuo Feng

categories: null
doi: null
id: 2402.08662
year: null
venue: null
link: http://arxiv.org/pdf/2402.08662v2
updated: 2024-02-17T17:50:28Z
published: 2024-02-13T18:46:10Z
title: Learning Emergent Gaits with Decentralized Phase Oscillators: on the role of Observations, Rewards, and Feedback
We present a minimal phase oscillator model for learning quadrupedal locomotion. Each of the four oscillators is coupled only to itself and its corresponding leg through local feedback of the ground reaction force, which can be interpreted as an observer feedback gain. We interpret the oscillator itself as a latent contact state-estimator. Through a systematic ablation study, we show that the combination of phase observations, simple phase-based rewards, and the local feedback dynamics induces policies that exhibit emergent gait preferences, while using a reduced set of simple rewards, and without prescribing a specific gait. The code is open-source, and a video synopsis is available at https://youtu.be/1NKQ0rSV3jU.
authors: Jenny Zhang, Steve Heim, Se Hwan Jeon, Sangbae Kim

categories: null
doi: null
id: 2402.08667
year: null
venue: null
link: http://arxiv.org/pdf/2402.08667v1
updated: 2024-02-13T18:48:28Z
published: 2024-02-13T18:48:28Z
title: Target Score Matching
Denoising Score Matching estimates the score of a noised version of a target distribution by minimizing a regression loss and is widely used to train the popular class of Denoising Diffusion Models. A well known limitation of Denoising Score Matching, however, is that it yields poor estimates of the score at low noise levels. This issue is particularly unfavourable for problems in the physical sciences and for Monte Carlo sampling tasks for which the score of the clean original target is known. Intuitively, estimating the score of a slightly noised version of the target should be a simple task in such cases. In this paper, we address this shortcoming and show that it is indeed possible to leverage knowledge of the target score. We present a Target Score Identity and corresponding Target Score Matching regression loss which allows us to obtain score estimates admitting favourable properties at low noise levels.
authors: Valentin De Bortoli, Michael Hutchinson, Peter Wirnsberger, Arnaud Doucet
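
Under a Gaussian noising kernel $y = \alpha_t x + \sigma_t \varepsilon$ (notation assumed here, not taken from the paper), integration by parts yields two equivalent expressions for the noised score. The second is the kind of identity the abstract refers to: its regression target stays well-behaved as $\sigma_t \to 0$, which is why knowing the clean target score helps at low noise levels.

```latex
% Sketch, assuming y = \alpha_t x + \sigma_t \varepsilon with \varepsilon \sim \mathcal{N}(0, I):
\nabla_y \log p_t(y)
  = \mathbb{E}\!\left[\frac{\alpha_t x - y}{\sigma_t^2} \,\middle|\, y\right]
  \quad \text{(denoising score identity)}
\nabla_y \log p_t(y)
  = \frac{1}{\alpha_t}\,\mathbb{E}\!\left[\nabla_x \log p_0(x) \,\middle|\, y\right]
  \quad \text{(target score identity)}
```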

categories: null
doi: null
id: 2402.08672
year: null
venue: null
link: http://arxiv.org/pdf/2402.08672v2
updated: 2024-06-03T22:30:38Z
published: 2024-02-13T18:54:08Z
title: Model Assessment and Selection under Temporal Distribution Shift
We investigate model assessment and selection in a changing environment, by synthesizing datasets from both the current time period and historical epochs. To tackle unknown and potentially arbitrary temporal distribution shift, we develop an adaptive rolling window approach to estimate the generalization error of a given model. This strategy also facilitates the comparison between any two candidate models by estimating the difference of their generalization errors. We further integrate pairwise comparisons into a single-elimination tournament, achieving near-optimal model selection from a collection of candidates. Theoretical analyses and numerical experiments demonstrate the adaptivity of our proposed methods to the non-stationarity in data.
authors: Elise Han, Chengpiao Huang, Kaizheng Wang
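
One plausible instantiation of an adaptive rolling window, offered only as a sketch (the paper's actual selection rule may differ): grow the lookback window while the enlarged window's average loss stays within a variance-scale tolerance of the estimate accepted so far.

```python
import numpy as np

def adaptive_window_error(losses, c=1.0):
    """Estimate the current generalization error from a stream of per-sample
    losses (most recent last). Grow the window until its mean drifts from the
    accepted estimate by more than c / sqrt(window size); c is an assumed
    tuning constant. Illustrative sketch, not the paper's exact rule.
    """
    losses = np.asarray(losses)[::-1]          # index 0 = most recent
    best = losses[:1].mean()
    for w in range(2, len(losses) + 1):
        window = losses[:w]
        if abs(window.mean() - best) > c / np.sqrt(w):
            break                              # likely distribution shift
        best = window.mean()                   # trust the larger window
    return best
```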

categories: null
doi: null
id: 2402.08674
year: null
venue: null
link: http://arxiv.org/pdf/2402.08674v2
updated: 2024-05-12T08:24:38Z
published: 2024-02-13T18:55:27Z
title: Human Curriculum Effects Emerge with In-Context Learning in Neural Networks
Human learning is sensitive to rule-like structure and the curriculum of examples used for training. In tasks governed by succinct rules, learning is more robust when related examples are blocked across trials, but in the absence of such rules, interleaving is more effective. To date, no neural model has simultaneously captured these seemingly contradictory effects. Here we show that this same tradeoff spontaneously emerges with ``in-context learning'' (ICL) both in neural networks trained with metalearning and in large language models (LLMs). ICL is the ability to learn new tasks ``in context'' -- without weight changes -- via an inner-loop algorithm implemented in activation dynamics. Experiments with pretrained LLMs and metalearning transformers show that ICL exhibits the blocking advantage demonstrated in humans on a task involving rule-like structure, and conversely, that concurrent in-weight learning reproduces the interleaving advantage observed in humans on tasks lacking such structure.
authors: Jacob Russin, Ellie Pavlick, Michael J. Frank

categories: null
doi: null
id: 2402.08676
year: null
venue: null
link: http://arxiv.org/pdf/2402.08676v1
updated: 2024-02-13T18:56:55Z
published: 2024-02-13T18:56:55Z
title: A Convergence Analysis of Approximate Message Passing with Non-Separable Functions and Applications to Multi-Class Classification
Motivated by the recent application of approximate message passing (AMP) to the analysis of convex optimizations in multi-class classifications [Loureiro et al., 2021], we present a convergence analysis of AMP dynamics with non-separable multivariate nonlinearities. As an application, we present a complete (and independent) analysis of the motivated convex optimization problem.
authors: Burak Çakmak, Yue M. Lu, Manfred Opper

categories: null
doi: null
id: 2402.08678
year: null
venue: null
link: http://arxiv.org/pdf/2402.08678v2
updated: 2024-02-19T18:53:13Z
published: 2024-02-13T18:58:17Z
title: Graph Mamba: Towards Learning on Graphs with State Space Models
Graph Neural Networks (GNNs) have shown promising potential in graph representation learning. The majority of GNNs define a local message-passing mechanism, propagating information over the graph by stacking multiple layers. These methods, however, are known to suffer from two major limitations: over-squashing and poor capturing of long-range dependencies. Recently, Graph Transformers (GTs) emerged as a powerful alternative to Message-Passing Neural Networks (MPNNs). GTs, however, have quadratic computational cost, lack inductive biases on graph structures, and rely on complex Positional/Structural Encodings (SE/PE). In this paper, we show that while Transformers, complex message-passing, and SE/PE are sufficient for good performance in practice, none is necessary. Motivated by the recent success of State Space Models (SSMs), such as Mamba, we present Graph Mamba Networks (GMNs), a general framework for a new class of GNNs based on selective SSMs. We discuss and categorize the new challenges when adapting SSMs to graph-structured data, and present four required and one optional step to design GMNs, where we choose (1) Neighborhood Tokenization, (2) Token Ordering, (3) Architecture of Bidirectional Selective SSM Encoder, (4) Local Encoding, and dispensable (5) PE and SE. We further provide theoretical justification for the power of GMNs. Experiments demonstrate that despite much less computational cost, GMNs attain an outstanding performance in long-range, small-scale, large-scale, and heterophilic benchmark datasets.
authors: Ali Behrouz, Farnoosh Hashemi

categories: null
doi: null
id: 2402.08679
year: null
venue: null
link: http://arxiv.org/pdf/2402.08679v2
updated: 2024-06-07T00:13:08Z
published: 2024-02-13T18:58:48Z
title: COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability
Jailbreaks on large language models (LLMs) have recently received increasing attention. For a comprehensive assessment of LLM safety, it is essential to consider jailbreaks with diverse attributes, such as contextual coherence and sentiment/stylistic variations, and hence it is beneficial to study controllable jailbreaking, i.e. how to enforce control on LLM attacks. In this paper, we formally formulate the controllable attack generation problem, and build a novel connection between this problem and controllable text generation, a well-explored topic of natural language processing. Based on this connection, we adapt the Energy-based Constrained Decoding with Langevin Dynamics (COLD), a state-of-the-art, highly efficient algorithm in controllable text generation, and introduce the COLD-Attack framework which unifies and automates the search of adversarial LLM attacks under a variety of control requirements such as fluency, stealthiness, sentiment, and left-right-coherence. The controllability enabled by COLD-Attack leads to diverse new jailbreak scenarios which not only cover the standard setting of generating fluent (suffix) attack with continuation constraint, but also allow us to address new controllable attack settings such as revising a user query adversarially with paraphrasing constraint, and inserting stealthy attacks in context with position constraint. Our extensive experiments on various LLMs (Llama-2, Mistral, Vicuna, Guanaco, GPT-3.5, and GPT-4) show COLD-Attack's broad applicability, strong controllability, high success rate, and attack transferability. Our code is available at https://github.com/Yu-Fangxu/COLD-Attack.
authors: Xingang Guo, Fangxu Yu, Huan Zhang, Lianhui Qin, Bin Hu
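
Schematically, COLD-style sampling runs Langevin dynamics on relaxed ("soft") token logits under an energy that bundles the control requirements. The sketch below assumes `energy_fn` returns a scalar differentiable energy; its composition (fluency, stealthiness, sentiment terms) is left abstract and is not COLD-Attack's precise energy.

```python
import torch

def langevin_attack_step(soft_logits, energy_fn, step_size=0.1):
    """One Langevin update on relaxed token logits:
    y <- y - eta * grad E(y) + sqrt(2 * eta) * noise."""
    soft_logits = soft_logits.detach().requires_grad_(True)
    energy = energy_fn(soft_logits)            # scalar energy over constraints
    (grad,) = torch.autograd.grad(energy, soft_logits)
    noise = torch.randn_like(soft_logits)
    return (soft_logits - step_size * grad
            + (2 * step_size) ** 0.5 * noise).detach()
```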

categories: null
doi: null
id: 2402.08680
year: null
venue: null
link: http://arxiv.org/pdf/2402.08680v1
updated: 2024-02-13T18:59:05Z
published: 2024-02-13T18:59:05Z
title: Mitigating Object Hallucination in Large Vision-Language Models via Classifier-Free Guidance
The advancement of Large Vision-Language Models (LVLMs) has increasingly highlighted the critical issue of their tendency to hallucinate non-existing objects in the images. To address this issue, previous works focused on using specially curated datasets or powerful LLMs (e.g., GPT-3.5) to rectify the outputs of LVLMs. However, these approaches require either expensive training/fine-tuning or API access to advanced LLMs to correct the model's output post-generation. In this paper, we tackle this challenge by introducing a framework called Mitigating hallucinAtion via classifieR-Free guIdaNcE (MARINE), which is both training-free and API-free, and can effectively and efficiently reduce object hallucinations during the generation process. Specifically, MARINE enriches the visual context of LVLMs by integrating existing open-source vision models, and employs classifier-free guidance to incorporate the additional object grounding features to improve the precision of LVLMs' generations. Through comprehensive evaluations across $6$ popular LVLMs with diverse evaluation metrics, we demonstrate the effectiveness of MARINE, which even outperforms existing fine-tuning-based methods. Remarkably, it not only reduces hallucinations but also improves the detailedness of LVLMs' generations, as assessed by GPT-4V.
authors: Linxi Zhao, Yihe Deng, Weitong Zhang, Quanquan Gu
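
Classifier-free guidance at decoding time can be written in a few lines. The sketch contrasts next-token logits computed with and without the extra object-grounding features; the guidance weight `gamma` and the two-pass setup are assumed choices, not MARINE's reported configuration.

```python
import torch

def guided_logits(logits_with_grounding, logits_without, gamma=1.5):
    """Classifier-free guidance on LVLM next-token logits: push the output
    distribution toward tokens supported by the grounding evidence."""
    return logits_without + gamma * (logits_with_grounding - logits_without)

# At each decoding step, run the LVLM twice (with and without the grounding
# features) and sample from the guided logits instead of the raw ones.
```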

categories: null
doi: null
id: 2402.08682
year: null
venue: null
link: http://arxiv.org/pdf/2402.08682v1
updated: 2024-02-13T18:59:51Z
published: 2024-02-13T18:59:51Z
title: IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation
Most text-to-3D generators build upon off-the-shelf text-to-image models trained on billions of images. They use variants of Score Distillation Sampling (SDS), which is slow, somewhat unstable, and prone to artifacts. A mitigation is to fine-tune the 2D generator to be multi-view aware, which can help distillation or can be combined with reconstruction networks to output 3D objects directly. In this paper, we further explore the design space of text-to-3D models. We significantly improve multi-view generation by considering video instead of image generators. Combined with a 3D reconstruction algorithm which, by using Gaussian splatting, can optimize a robust image-based loss, we directly produce high-quality 3D outputs from the generated views. Our new method, IM-3D, reduces the number of evaluations of the 2D generator network 10-100x, resulting in a much more efficient pipeline, better quality, fewer geometric inconsistencies, and higher yield of usable 3D assets.
authors: Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos

categories: null
doi: null
id: 2402.08687
year: null
venue: null
link: http://arxiv.org/pdf/2402.08687v1
updated: 2024-01-26T12:21:57Z
published: 2024-01-26T12:21:57Z
title: Fuzzy clustering of circular time series based on a new dependence measure with applications to wind data
Time series clustering is an essential machine learning task with applications in many disciplines. While the majority of the methods focus on time series taking values on the real line, very few works consider time series defined on the unit circle, although the latter objects frequently arise in many applications. In this paper, the problem of clustering circular time series is addressed. To this aim, a distance between circular series is introduced and used to construct a clustering procedure. The metric relies on a new measure of serial dependence considering circular arcs, thus taking advantage of the directional character inherent to the series range. Since the dynamics of the series may vary over the time, we adopt a fuzzy approach, which enables the procedure to locate each series into several clusters with different membership degrees. The resulting clustering algorithm is able to group series generated from similar stochastic processes, reaching accurate results with series coming from a broad variety of models. An extensive simulation study shows that the proposed method outperforms several alternative techniques, besides being computationally efficient. Two interesting applications involving time series of wind direction in Saudi Arabia highlight the potential of the proposed approach.
authors: Ángel López-Oriona, Ying Sun, Rosa M. Crujeiras

categories: null
doi: null
id: 2402.08688
year: null
venue: null
link: http://arxiv.org/abs/2402.08688v1
updated: 2024-02-06T16:55:41Z
published: 2024-02-06T16:55:41Z
title: Context-Aware Automated Passenger Counting Data Denoising
A reliable and accurate knowledge of the ridership in public transportation networks is crucial for public transport operators and public authorities to be aware of their network's use and optimize transport offering. Several techniques to estimate ridership exist nowadays, some of them in an automated manner. Among them, Automatic Passenger Counting (APC) systems detect passengers entering and leaving the vehicle at each station of its course. However, data resulting from these systems are often noisy or even biased, resulting in under- or overestimation of onboard occupancy. In this work, we propose a denoising algorithm for APC data to improve their robustness and ease their analysis. The proposed approach consists of a constrained integer linear optimization, taking advantage of ticketing data and historical ridership data to further constrain and guide the optimization. The performance is assessed and compared to other denoising methods on several public transportation networks in France, to manual counts available on one of these networks, and on simulated data.
authors: Noëlie Cherrier, Baptiste Rérolle, Martin Graive, Amir Dib, Eglantine Schmitt
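
A hedged sketch of the core idea: a constrained integer linear program that repairs boarding/alighting counts along one trip (modeled with the PuLP library). The ticketing and historical-ridership terms of the full method are omitted, and the variable names are illustrative.

```python
import pulp

def denoise_apc(raw_in, raw_out):
    """Correct noisy boarding/alighting counts so the implied onboard load is
    feasible: non-negative at every stop and zero at the terminus, while
    staying close (in L1 distance) to the raw sensor counts."""
    n = len(raw_in)
    prob = pulp.LpProblem("apc_denoising", pulp.LpMinimize)
    ins = [pulp.LpVariable(f"in_{i}", lowBound=0, cat="Integer") for i in range(n)]
    outs = [pulp.LpVariable(f"out_{i}", lowBound=0, cat="Integer") for i in range(n)]
    # Auxiliary variables encoding |corrected - raw| for the L1 objective.
    dev = [pulp.LpVariable(f"dev_{i}", lowBound=0) for i in range(2 * n)]
    for i in range(n):
        prob += ins[i] - raw_in[i] <= dev[i]
        prob += raw_in[i] - ins[i] <= dev[i]
        prob += outs[i] - raw_out[i] <= dev[n + i]
        prob += raw_out[i] - outs[i] <= dev[n + i]
    # Load must stay non-negative and the vehicle ends its course empty.
    for k in range(n):
        prob += pulp.lpSum(ins[i] - outs[i] for i in range(k + 1)) >= 0
    prob += pulp.lpSum(ins[i] - outs[i] for i in range(n)) == 0
    prob += pulp.lpSum(dev)  # objective: total deviation from raw counts
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(v.value()) for v in ins], [int(v.value()) for v in outs]
```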

categories: null
doi: null
id: 2402.08690
year: null
venue: null
link: http://arxiv.org/pdf/2402.08690v1
updated: 2024-02-09T18:43:48Z
published: 2024-02-09T18:43:48Z
title: If Turing played piano with an artificial partner
Music is an inherently social activity that allows people to share experiences and feel connected with one another. There has been little progress in designing artificial partners that provide a social experience similar to playing with another person. Neural network architectures that implement generative models, such as large language models, are suited for producing musical scores. Playing music socially, however, involves more than playing a score; it must complement the other musicians' ideas and keep time correctly. We addressed the question of whether a convincing social experience is made possible by a generative model trained to produce musical scores, not necessarily optimized for synchronization and continuation. The network, a variational autoencoder trained on a large corpus of digital scores, was adapted for a timed call-and-response task with a human partner. Participants played piano with a human or artificial partner, in various configurations, and rated the performance quality and first-person experience of self-other integration. Overall, the artificial partners held promise but were rated lower than human partners. The artificial partner with the simplest design and highest similarity parameter was not rated differently from the human partners on some measures, suggesting that interactive rather than generative sophistication is important in enabling social AI.
authors: Dobromir Dotov, Dante Camarena, Zack Harris, Joanna Spyra, Pietro Gagliano, Laurel Trainor

categories: null
doi: null
id: 2402.08692
year: null
venue: null
link: http://arxiv.org/pdf/2402.08692v1
updated: 2024-02-12T12:50:10Z
published: 2024-02-12T12:50:10Z
title: Inference Stage Denoising for Undersampled MRI Reconstruction
Reconstruction of magnetic resonance imaging (MRI) data has been positively affected by deep learning. A key challenge remains: to improve generalisation to distribution shifts between the training and testing data. Most approaches aim to address this via inductive design or data augmentation. However, they can be affected by misleading data, e.g. random noise, and cases where the inference stage data do not match assumptions in the modelled shifts. In this work, by employing a conditional hyperparameter network, we eliminate the need for augmentation, yet maintain robust performance under various levels of Gaussian noise. We demonstrate that our model withstands various input noise levels while producing high-definition reconstructions during the test stage. Moreover, we present a hyperparameter sampling strategy that accelerates the convergence of training. Our proposed method achieves the highest accuracy and image quality in all settings compared to baseline methods.
authors: Yuyang Xue, Chen Qin, Sotirios A. Tsaftaris

categories: null
doi: null
id: 2402.08695
year: null
venue: null
link: http://arxiv.org/pdf/2402.08695v1
updated: 2024-02-12T20:14:46Z
published: 2024-02-12T20:14:46Z
title: Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors
We propose and analyze an adaptive adversary that can retrain a Trojaned DNN and is also aware of SOTA output-based Trojaned model detectors. We show that such an adversary can ensure (1) high accuracy on both trigger-embedded and clean samples and (2) bypass detection. Our approach is based on an observation that the high dimensionality of the DNN parameters provides sufficient degrees of freedom to simultaneously achieve these objectives. We also enable SOTA detectors to be adaptive by allowing retraining to recalibrate their parameters, thus modeling a co-evolution of parameters of a Trojaned model and detectors. We then show that this co-evolution can be modeled as an iterative game, and prove that the resulting (optimal) solution of this interactive game leads to the adversary successfully achieving the above objectives. In addition, we provide a greedy algorithm for the adversary to select a minimum number of input samples for embedding triggers. We show that for cross-entropy or log-likelihood loss functions used by the DNNs, the greedy algorithm provides provable guarantees on the needed number of trigger-embedded input samples. Extensive experiments on four diverse datasets -- MNIST, CIFAR-10, CIFAR-100, and SpeechCommand -- reveal that the adversary effectively evades four SOTA output-based Trojaned model detectors: MNTD, NeuralCleanse, STRIP, and TABOR.
authors: Dinuka Sahabandu, Xiaojun Xu, Arezoo Rajabi, Luyao Niu, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran

categories: null
doi: null
id: 2402.08698
year: null
venue: null
link: http://arxiv.org/pdf/2402.08698v2
updated: 2024-04-26T18:02:38Z
published: 2024-02-13T02:43:41Z
title: AMEND: A Mixture of Experts Framework for Long-tailed Trajectory Prediction
Accurate prediction of pedestrians' future motions is critical for intelligent driving systems. Developing models for this task requires rich datasets containing diverse sets of samples. However, the existing naturalistic trajectory prediction datasets are generally imbalanced in favor of simpler samples and lack challenging scenarios. Such a long-tail effect causes prediction models to underperform on the tail portion of the data distribution containing safety-critical scenarios. Previous methods tackle the long-tail problem using methods such as contrastive learning and class-conditioned hypernetworks. These approaches, however, are not modular and cannot be applied to many machine learning architectures. In this work, we propose a modular model-agnostic framework for trajectory prediction that leverages a specialized mixture of experts. In our approach, each expert is trained with a specialized skill with respect to a particular part of the data. To produce predictions, we utilise a router network that selects the best expert by generating relative confidence scores. We conduct experimentation on common pedestrian trajectory prediction datasets and show that our method improves performance on long-tail scenarios. We further conduct ablation studies to highlight the contribution of different proposed components.
authors: Ray Coden Mercurius, Ehsan Ahmadi, Soheil Mohamad Alizadeh Shabestary, Amir Rasouli

categories: null
doi: null
id: 2402.08699
year: null
venue: null
link: http://arxiv.org/pdf/2402.08699v2
updated: 2024-05-27T10:55:06Z
published: 2024-02-13T11:08:08Z
title: Unsupervised Evaluation of Code LLMs with Round-Trip Correctness
To evaluate code large language models (LLMs), research has relied on a few small manually curated benchmarks, such as HumanEval and MBPP, which represent a narrow part of the real-world software domains. In this work, we introduce round-trip correctness (RTC) as an alternative evaluation method. RTC allows Code LLM evaluation on a broader spectrum of real-world software domains without the need for costly human curation. RTC rests on the idea that we can ask a model to make a prediction (e.g., describe some code using natural language), feed that prediction back (e.g., synthesize code from the predicted description), and check if this round-trip leads to code that is semantically equivalent to the original input. We show how to employ RTC to evaluate code synthesis and editing. We find that RTC strongly correlates with model performance on existing narrow-domain code synthesis benchmarks while allowing us to expand to a much broader set of domains and tasks which was not previously possible without costly human annotations.
authors: Miltiadis Allamanis, Sheena Panthaplackel, Pengcheng Yin
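
The round trip is easy to express in pseudocode. In the sketch below, `model.describe`, `model.synthesize`, and `test_suite.passes` are hypothetical stand-ins for the forward pass, backward pass, and semantic-equivalence check (for code, the natural check is executing unit tests, as the abstract suggests).

```python
def round_trip_correctness(model, code_snippets, tests, samples=4):
    """Estimate RTC: code -> natural-language description -> code again,
    scored by whether the round-tripped code still passes the original
    snippet's tests. Sketch with hypothetical model/test interfaces."""
    scores = []
    for code, test_suite in zip(code_snippets, tests):
        passed = 0
        for _ in range(samples):
            description = model.describe(code)          # forward prediction
            candidate = model.synthesize(description)   # backward prediction
            passed += test_suite.passes(candidate)      # equivalence proxy
        scores.append(passed / samples)
    return sum(scores) / len(scores)
```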

categories: null
doi: null
id: 2402.08701
year: null
venue: null
link: http://arxiv.org/pdf/2402.08701v1
updated: 2024-02-13T13:02:11Z
published: 2024-02-13T13:02:11Z
title: Primal-Dual Algorithms with Predictions for Online Bounded Allocation and Ad-Auctions Problems
Matching problems have been widely studied in the research community, especially Ad-Auctions with many applications ranging from network design to advertising. Following the various advancements in machine learning, one natural question is whether classical algorithms can benefit from machine learning and obtain better-quality solutions. Even a small percentage of performance improvement in matching problems could result in significant gains for the studied use cases. For example, the network throughput or the revenue of Ad-Auctions can increase remarkably. This paper presents algorithms with machine learning predictions for the Online Bounded Allocation and the Online Ad-Auctions problems. We constructed primal-dual algorithms that achieve competitive performance depending on the quality of the predictions. When the predictions are accurate, the algorithms' performance surpasses previous performance bounds, while when the predictions are misleading, the algorithms maintain standard worst-case performance guarantees. We provide supporting experiments on generated data for our theoretical findings.
authors: Eniko Kevi, Nguyen Kim Thang

categories: null
doi: null
id: 2402.08703
year: null
venue: null
link: http://arxiv.org/pdf/2402.08703v2
updated: 2024-06-26T11:03:21Z
published: 2024-02-13T16:56:31Z
title: A Survey of Generative AI for de novo Drug Design: New Frontiers in Molecule and Protein Generation
Artificial intelligence (AI)-driven methods can vastly improve the historically costly drug design process, with various generative models already in widespread use. Generative models for de novo drug design, in particular, focus on the creation of novel biological compounds entirely from scratch, representing a promising future direction. Rapid development in the field, combined with the inherent complexity of the drug design process, creates a difficult landscape for new researchers to enter. In this survey, we organize de novo drug design into two overarching themes: small molecule and protein generation. Within each theme, we identify a variety of subtasks and applications, highlighting important datasets, benchmarks, and model architectures and comparing the performance of top models. We take a broad approach to AI-driven drug design, allowing for both micro-level comparisons of various methods within each subtask and macro-level observations across different fields. We discuss parallel challenges and approaches between the two applications and highlight future directions for AI-driven de novo drug design as a whole. An organized repository of all covered sources is available at https://github.com/gersteinlab/GenAI4Drug.
authors: Xiangru Tang, Howard Dai, Elizabeth Knight, Fang Wu, Yunyang Li, Tianxiao Li, Mark Gerstein

categories: null
doi: null
id: 2402.08708
year: null
venue: null
link: http://arxiv.org/pdf/2402.08708v1
updated: 2024-02-13T17:53:44Z
published: 2024-02-13T17:53:44Z
title: Zero Shot Molecular Generation via Similarity Kernels
Generative modelling aims to accelerate the discovery of novel chemicals by directly proposing structures with desirable properties. Recently, score-based, or diffusion, generative models have significantly outperformed previous approaches. Key to their success is the close relationship between the score and physical force, allowing the use of powerful equivariant neural networks. However, the behaviour of the learnt score is not yet well understood. Here, we analyse the score by training an energy-based diffusion model for molecular generation. We find that during the generation the score resembles a restorative potential initially and a quantum-mechanical force at the end. In between the two endpoints, it exhibits special properties that enable the building of large molecules. Using insights from the trained model, we present Similarity-based Molecular Generation (SiMGen), a new method for zero shot molecular generation. SiMGen combines a time-dependent similarity kernel with descriptors from a pretrained machine learning force field to generate molecules without any further training. Our approach allows full control over the molecular shape through point cloud priors and supports conditional generation. We also release an interactive web tool that allows users to generate structures with SiMGen online (https://zndraw.icp.uni-stuttgart.de).
authors: Rokas Elijošius, Fabian Zills, Ilyes Batatia, Sam Walton Norwood, Dávid Péter Kovács, Christian Holm, Gábor Csányi

categories: null
doi: null
id: 2402.08711
year: null
venue: null
link: http://arxiv.org/pdf/2402.08711v2
updated: 2024-02-15T08:34:41Z
published: 2024-02-13T18:31:55Z
title: Correction to "Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations"
A method for analyzing non-asymptotic guarantees of numerical discretizations of ergodic SDEs in Wasserstein-2 distance is presented by Sanz-Serna and Zygalakis in ``Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations". They analyze the UBU integrator which is strong order two and only requires one gradient evaluation per step, resulting in desirable non-asymptotic guarantees, in particular $\mathcal{O}(d^{1/4}\epsilon^{-1/2})$ steps to reach a distance of $\epsilon > 0$ in Wasserstein-2 distance away from the target distribution. However, there is a mistake in the local error estimates in Sanz-Serna and Zygalakis (2021), in particular, a stronger assumption is needed to achieve these complexity estimates. This note reconciles the theory with the dimension dependence observed in practice in many applications of interest.
authors: Daniel Paulin, Peter A. Whalley

categories: null
doi: null
id: 2402.08712
year: null
venue: null
link: http://arxiv.org/pdf/2402.08712v3
updated: 2024-05-31T21:14:42Z
published: 2024-02-13T18:37:53Z
title: BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation
Continual Test Time Adaptation (CTTA) is required to adapt efficiently to continuous unseen domains while retaining previously learned knowledge. However, despite the progress of CTTA, it is still challenging to deploy the model with improved forgetting-adaptation trade-offs and efficiency. In addition, current CTTA scenarios assume only the disjoint situation, even though real-world domains are seamlessly changed. To address these challenges, this paper proposes BECoTTA, an input-dependent and efficient modular framework for CTTA. We propose Mixture-of-Domain Low-rank Experts (MoDE) that contains two core components: (i) Domain-Adaptive Routing, which helps to selectively capture the domain-adaptive knowledge with multiple domain routers, and (ii) Domain-Expert Synergy Loss to maximize the dependency between each domain and expert. We validate that our method outperforms strong baselines across multiple CTTA scenarios, including disjoint and gradual domain shifts, while requiring ~98% fewer trainable parameters. We also provide analyses of our method, including the construction of experts, the effect of domain-adaptive experts, and visualizations.
authors: Daeun Lee, Jaehong Yoon, Sung Ju Hwang

categories: null
doi: null
id: 2402.08714
year: null
venue: null
link: http://arxiv.org/pdf/2402.08714v2
updated: 2024-03-27T21:37:39Z
published: 2024-02-13T18:58:16Z
title: PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models
Reward finetuning has emerged as a promising approach to aligning foundation models with downstream objectives. Remarkable success has been achieved in the language domain by using reinforcement learning (RL) to maximize rewards that reflect human preference. However, in the vision domain, existing RL-based reward finetuning methods are limited by their instability in large-scale training, rendering them incapable of generalizing to complex, unseen prompts. In this paper, we propose Proximal Reward Difference Prediction (PRDP), enabling stable black-box reward finetuning for diffusion models for the first time on large-scale prompt datasets with over 100K prompts. Our key innovation is the Reward Difference Prediction (RDP) objective that has the same optimal solution as the RL objective while enjoying better training stability. Specifically, the RDP objective is a supervised regression objective that tasks the diffusion model with predicting the reward difference of generated image pairs from their denoising trajectories. We theoretically prove that the diffusion model that obtains perfect reward difference prediction is exactly the maximizer of the RL objective. We further develop an online algorithm with proximal updates to stably optimize the RDP objective. In experiments, we demonstrate that PRDP can match the reward maximization ability of well-established RL-based methods in small-scale training. Furthermore, through large-scale training on text prompts from the Human Preference Dataset v2 and the Pick-a-Pic v1 dataset, PRDP achieves superior generation quality on a diverse set of complex, unseen prompts whereas RL-based methods completely fail.
authors: Fei Deng, Qifei Wang, Wei Wei, Matthias Grundmann, Tingbo Hou

categories: null
doi: null
id: 2402.08726
year: null
venue: null
link: http://arxiv.org/pdf/2402.08726v1
updated: 2024-02-13T19:00:08Z
published: 2024-02-13T19:00:08Z
title: Trained quantum neural networks are Gaussian processes
We study quantum neural networks made by parametric one-qubit gates and fixed two-qubit gates in the limit of infinite width, where the generated function is the expectation value of the sum of single-qubit observables over all the qubits. First, we prove that the probability distribution of the function generated by the untrained network with randomly initialized parameters converges in distribution to a Gaussian process whenever each measured qubit is correlated only with few other measured qubits. Then, we analytically characterize the training of the network via gradient descent with square loss on supervised learning problems. We prove that, as long as the network is not affected by barren plateaus, the trained network can perfectly fit the training set and that the probability distribution of the function generated after training still converges in distribution to a Gaussian process. Finally, we consider the statistical noise of the measurement at the output of the network and prove that a polynomial number of measurements is sufficient for all the previous results to hold and that the network can always be trained in polynomial time.
authors: Filippo Girardi, Giacomo De Palma

categories: null
doi: null
id: 2402.08733
year: null
venue: null
link: http://arxiv.org/pdf/2402.08733v2
updated: 2024-05-27T19:40:04Z
published: 2024-02-13T19:01:45Z
title: Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs
Identifying how much a model $\widehat{p}_{\theta}(Y|X)$ knows about the stochastic real-world process $p(Y|X)$ it was trained on is important to ensure it avoids producing incorrect or "hallucinated" answers or taking unsafe actions. But this is difficult for generative models because probabilistic predictions do not distinguish between per-response noise (aleatoric uncertainty) and lack of knowledge about the process (epistemic uncertainty), and existing epistemic uncertainty quantification techniques tend to be overconfident when the model underfits. We propose a general strategy for teaching a model to both approximate $p(Y|X)$ and also estimate the remaining gaps between $\widehat{p}_{\theta}(Y|X)$ and $p(Y|X)$: train it to predict pairs of independent responses drawn from the true conditional distribution, allow it to "cheat" by observing one response while predicting the other, then measure how much it cheats. Remarkably, we prove that being good at cheating (i.e. cheating whenever it improves your prediction) is equivalent to being second-order calibrated, a principled extension of ordinary calibration that allows us to construct provably-correct frequentist confidence intervals for $p(Y|X)$ and detect incorrect responses with high probability. We demonstrate empirically that our approach accurately estimates how much models don't know across ambiguous image classification, (synthetic) language modeling, and partially-observable navigation tasks, outperforming existing techniques.
authors: Daniel D. Johnson, Daniel Tarlow, David Duvenaud, Chris J. Maddison
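
In symbols, one way to read the training recipe (notation assumed here, not taken from the paper): fit pairs of i.i.d. responses and measure the log-likelihood gain from peeking at the first response.

```latex
% Sketch: draw (y_1, y_2) i.i.d. from the true conditional p(\cdot \mid x),
% train \widehat{p}_\theta on the pair while letting it "cheat" on y_2 via y_1,
% then measure the average benefit of cheating:
\Delta(x) \;=\; \mathbb{E}_{y_1, y_2 \sim p(\cdot \mid x)}\!\left[
  \log \widehat{p}_\theta(y_2 \mid x, y_1) \;-\; \log \widehat{p}_\theta(y_2 \mid x)
\right]
% \Delta(x) vanishes when \widehat{p}_\theta(\cdot|x) = p(\cdot|x), and a large
% gap flags remaining epistemic uncertainty about the process.
```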

categories: null
doi: null
id: 2402.08742
year: null
venue: null
link: http://arxiv.org/pdf/2402.08742v1
updated: 2024-02-13T19:27:06Z
published: 2024-02-13T19:27:06Z
title: Unveiling Hidden Energy Anomalies: Harnessing Deep Learning to Optimize Energy Management in Sports Facilities
Anomaly detection in sport facilities has gained significant attention due to its potential to promote energy saving and optimize operational efficiency. In this research article, we investigate the role of machine learning, particularly deep learning, in anomaly detection for sport facilities. We explore the challenges and perspectives of utilizing deep learning methods for this task, aiming to address the drawbacks and limitations of conventional approaches. Our proposed approach involves feature extraction from the data collected in sport facilities. We present a problem formulation using Deep Feedforward Neural Networks (DFNN) and introduce threshold estimation techniques to identify anomalies effectively. Furthermore, we propose methods to reduce false alarms, ensuring the reliability and accuracy of anomaly detection. To evaluate the effectiveness of our approach, we conduct experiments on an aquatic center dataset at Qatar University. The results demonstrate the superiority of our deep learning-based method over conventional techniques, highlighting its potential in real-world applications. An accuracy of 94.33% and an F1-score of 92.92% were achieved using the proposed scheme.
authors: Fodil Fadli, Yassine Himeur, Mariam Elnour, Abbes Amira
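
A minimal sketch of the threshold-estimation and false-alarm-reduction steps described above; the mean-plus-k-sigma rule and the consecutive-exceedance debouncing are common choices assumed here, not the paper's exact formulas.

```python
import numpy as np

def fit_threshold(residuals, k=3.0):
    """Flag a reading as anomalous when the DFNN prediction error exceeds
    mean + k * std of errors seen on normal data (k=3 is an assumed choice)."""
    return residuals.mean() + k * residuals.std()

def detect(y_true, y_pred, threshold, patience=3):
    """Reduce false alarms by requiring `patience` consecutive exceedances
    before raising an alert (a simple debouncing heuristic)."""
    exceed = np.abs(y_true - y_pred) > threshold
    alerts = np.zeros_like(exceed, dtype=bool)
    run = 0
    for i, e in enumerate(exceed):
        run = run + 1 if e else 0
        alerts[i] = run >= patience
    return alerts
```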

categories: null
doi: null
id: 2402.08743
year: null
venue: null
link: http://arxiv.org/pdf/2402.08743v1
updated: 2024-02-13T19:27:34Z
published: 2024-02-13T19:27:34Z
title: ADS: Approximate Densest Subgraph for Novel Image Discovery
The volume of image repositories continues to grow. Despite the availability of content-based addressing, we still lack a lightweight tool that allows us to discover images of distinct characteristics from a large collection. In this paper, we propose a fast and training-free algorithm for novel image discovery. The key to our algorithm is formulating a collection of images as a perceptual distance-weighted graph, within which our task is to locate the K-densest subgraph that corresponds to a subset of the most unique images. While solving this problem is not just NP-hard but also requires a full computation of the potentially huge distance matrix, we propose to relax it into a K-sparse eigenvector problem that we can efficiently solve using stochastic gradient descent (SGD) without explicitly computing the distance matrix. We compare our algorithm against state-of-the-art methods on both synthetic and real datasets, showing that it is considerably faster to run with a smaller memory footprint while able to mine novel images more accurately.
authors: Shanfeng Hu
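
To illustrate the relaxation, here is a truncated power iteration for a K-sparse leading eigenvector. Note the paper optimizes with SGD and never materializes the full distance matrix; this sketch takes a dense matrix W only for clarity.

```python
import numpy as np

def k_sparse_eigenvector(W, k, iters=100, seed=0):
    """Truncated power iteration for the K-sparse leading eigenvector of a
    distance-weighted similarity matrix W. Simplified stand-in for the
    paper's SGD solver, shown with W materialized for readability."""
    rng = np.random.default_rng(seed)
    x = np.abs(rng.standard_normal(W.shape[0]))
    for _ in range(iters):
        x = W @ x
        x[np.argsort(x)[:-k]] = 0.0     # keep only the k largest entries
        norm = np.linalg.norm(x)
        if norm == 0:
            break
        x /= norm
    return np.flatnonzero(x)            # indices of the selected images
```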

categories: null
doi: null
id: 2402.08748
year: null
venue: null
link: http://arxiv.org/pdf/2402.08748v2
updated: 2024-05-09T18:33:59Z
published: 2024-02-13T19:33:41Z
title: Nearest Neighbor Representations of Neurons
The Nearest Neighbor (NN) Representation is an emerging computational model that is inspired by the brain. We study the complexity of representing a neuron (threshold function) using the NN representations. It is known that two anchors (the points to which NN is computed) are sufficient for a NN representation of a threshold function, however, the resolution (the maximum number of bits required for the entries of an anchor) is $O(n\log{n})$. In this work, the trade-off between the number of anchors and the resolution of a NN representation of threshold functions is investigated. We prove that the well-known threshold functions EQUALITY, COMPARISON, and ODD-MAX-BIT, which require 2 or 3 anchors and resolution of $O(n)$, can be represented by polynomially large number of anchors in $n$ and $O(\log{n})$ resolution. We conjecture that for all threshold functions, there are NN representations with polynomially large size and logarithmic resolution in $n$.
authors: Kordag Mehmet Kilic, Jin Sima, Jehoshua Bruck

categories: null
doi: null
id: 2402.08749
year: null
venue: null
link: http://arxiv.org/pdf/2402.08749v1
updated: 2024-02-13T19:36:23Z
published: 2024-02-13T19:36:23Z
title: Automated detection of motion artifacts in brain MR images using deep learning and explainable artificial intelligence
Quality assessment, including inspecting the images for artifacts, is a critical step during MRI data acquisition to ensure data quality and downstream analysis or interpretation success. This study demonstrates a deep learning model to detect rigid motion in T1-weighted brain images. We leveraged a 2D CNN for three-class classification and tested it on publicly available retrospective and prospective datasets. Grad-CAM heatmaps enabled the identification of failure modes and provided an interpretation of the model's results. The model achieved average precision and recall metrics of 85% and 80% on six motion-simulated retrospective datasets. Additionally, the model's classifications on the prospective dataset showed a strong inverse correlation (-0.84) compared to average edge strength, an image quality metric indicative of motion. This model is part of the ArtifactID tool, aimed at inline automatic detection of Gibbs ringing, wrap-around, and motion artifacts. This tool automates part of the time-consuming QA process and augments expertise on-site, particularly relevant in low-resource settings where local MR knowledge is scarce.
authors: Marina Manso Jimeno, Keerthi Sravan Ravi, Maggie Fung, John Thomas Vaughan, Jr., Sairam Geethanath

categories: null
doi: null
id: 2402.08751
year: null
venue: null
link: http://arxiv.org/pdf/2402.08751v2
updated: 2024-05-09T18:29:36Z
published: 2024-02-13T19:38:01Z
title: Nearest Neighbor Representations of Neural Circuits
Neural networks successfully capture the computational power of the human brain for many tasks. Similarly inspired by the brain's architecture, the Nearest Neighbor (NN) representation is a novel approach to computation. We establish a firmer correspondence between NN representations and neural networks. Although it was known how to represent a single neuron using NN representations, there were no results even for small depth neural networks. Specifically, for depth-2 threshold circuits, we provide explicit constructions for their NN representation with an explicit bound on the number of bits to represent it. Example functions include NN representations of convex polytopes (AND of threshold gates), IP2, OR of threshold gates, and linear or exact decision lists.
authors: Kordag Mehmet Kilic, Jin Sima, Jehoshua Bruck

categories: null
doi: null
id: 2402.08753
year: null
venue: null
link: http://arxiv.org/pdf/2402.08753v2
updated: 2024-06-15T20:31:37Z
published: 2024-02-13T19:39:11Z
title: Forecasting for Swap Regret for All Downstream Agents
We study the problem of making predictions so that downstream agents who best respond to them will be guaranteed diminishing swap regret, no matter what their utility functions are. It has been known since Foster and Vohra (1997) that agents who best-respond to calibrated forecasts have no swap regret. Unfortunately, the best known algorithms for guaranteeing calibrated forecasts in sequential adversarial environments do so at rates that degrade exponentially with the dimension of the prediction space. In this work, we show that by making predictions that are not calibrated, but are unbiased subject to a carefully selected collection of events, we can guarantee arbitrary downstream agents diminishing swap regret at rates that substantially improve over the rates that result from calibrated forecasts -- while maintaining the appealing property that our forecasts give guarantees for any downstream agent, without our forecasting algorithm needing to know their utility function. We give separate results in the ``low'' (1 or 2) dimensional setting and the ``high'' ($> 2$) dimensional setting. In the low dimensional setting, we show how to make predictions such that all agents who best respond to our predictions have diminishing swap regret -- in 1 dimension, at the optimal $O(\sqrt{T})$ rate. In the high dimensional setting we show how to make forecasts that guarantee regret scaling at a rate of $O(T^{2/3})$ (crucially, a dimension independent exponent), under the assumption that downstream agents smoothly best respond. Our results stand in contrast to rates that derive from agents who best respond to calibrated forecasts, which have an exponential dependence on the dimension of the prediction space.
[ "['Aaron Roth' 'Mirah Shi']" ]
null
null
2402.08758
null
null
http://arxiv.org/pdf/2402.08758v1
2024-02-13T19:51:49Z
2024-02-13T19:51:49Z
Bayesian Strategic Classification
In strategic classification, agents modify their features, at a cost, to ideally obtain a positive classification from the learner's classifier. The typical response of the learner is to carefully modify their classifier to be robust to such strategic behavior. When reasoning about agent manipulations, most papers that study strategic classification rely on the following strong assumption: agents fully know the exact parameters of the classifier deployed by the learner. This is often an unrealistic assumption when using complex or proprietary machine learning techniques in real-world prediction tasks. We initiate the study of partial information release by the learner in strategic classification. We move away from the traditional assumption that agents have full knowledge of the classifier. Instead, we consider agents that have a common distributional prior on which classifier the learner is using. The learner in our model can reveal truthful, yet not necessarily complete, information about the deployed classifier to the agents. The learner's goal is to release just enough information about the classifier to maximize accuracy. We show how such partial information release can, counter-intuitively, benefit the learner's accuracy, despite increasing agents' abilities to manipulate. We show that while it is intractable to compute the best response of an agent in the general case, there exist oracle-efficient algorithms that can compute the best response of the agents when the learner's hypothesis class is the class of linear classifiers, or when the agents' cost function satisfies a natural notion of submodularity as we define. We then turn our attention to the learner's optimization problem and provide both positive and negative results on the algorithmic problem of how much information the learner should release about the classifier to maximize their expected accuracy.
[ "['Lee Cohen' 'Saeed Sharifi-Malvajerdi' 'Kevin Stangl' 'Ali Vakilian'\n 'Juba Ziani']" ]
null
null
2402.08768
null
null
http://arxiv.org/pdf/2402.08768v1
2024-02-13T20:02:34Z
2024-02-13T20:02:34Z
Adversarially Robust Feature Learning for Breast Cancer Diagnosis
Adversarial data can lead to malfunction of deep learning applications. It is essential to develop deep learning models that are robust to adversarial data while accurate on standard, clean data. In this study, we proposed a novel adversarially robust feature learning (ARFL) method for a real-world application of breast cancer diagnosis. ARFL facilitates adversarial training using both standard data and adversarial data, where a feature correlation measure is incorporated as an objective function to encourage learning of robust features and restrain spurious features. To show the effects of ARFL in breast cancer diagnosis, we built and evaluated diagnosis models using two independent clinically collected breast imaging datasets, comprising a total of 9,548 mammogram images. We performed extensive experiments showing that our method outperformed several state-of-the-art methods and that our method can enhance safer breast cancer diagnosis against adversarial attacks in clinical settings.
[ "['Degan Hao' 'Dooman Arefan' 'Margarita Zuley' 'Wendie Berg' 'Shandong Wu']" ]
null
null
2402.08769
null
null
http://arxiv.org/pdf/2402.08769v1
2024-02-13T20:04:39Z
2024-02-13T20:04:39Z
FLASH: Federated Learning Across Simultaneous Heterogeneities
The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients), without exchanging local data. An overarching challenge to date is client heterogeneity, which may arise not only from variations in data distribution, but also in data quality, as well as compute/communication latency. An integrated view of these diverse and concurrent sources of heterogeneity is critical; for instance, low-latency clients may have poor data quality, and vice versa. In this work, we propose FLASH (Federated Learning Across Simultaneous Heterogeneities), a lightweight and flexible client selection algorithm that outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, by trading off the statistical information associated with the client's data quality, data distribution, and latency. FLASH is the first method, to our knowledge, for handling all these heterogeneities in a unified manner. To do so, FLASH models the learning dynamics through contextual multi-armed bandits (CMAB) and dynamically selects the most promising clients. Through extensive experiments, we demonstrate that FLASH achieves substantial and consistent improvements over state-of-the-art baselines -- as much as 10% in absolute accuracy -- thanks to its unified approach. Importantly, FLASH also outperforms federated aggregation methods that are designed to handle highly heterogeneous settings and even enjoys a performance boost when integrated with them.
[ "['Xiangyu Chang' 'Sk Miraj Ahmed' 'Srikanth V. Krishnamurthy'\n 'Basak Guler' 'Ananthram Swami' 'Samet Oymak' 'Amit K. Roy-Chowdhury']" ]
null
null
2402.08784
null
null
http://arxiv.org/pdf/2402.08784v1
2024-02-13T20:46:37Z
2024-02-13T20:46:37Z
Preconditioners for the Stochastic Training of Implicit Neural Representations
Implicit neural representations have emerged as a powerful technique for encoding complex continuous multidimensional signals as neural networks, enabling a wide range of applications in computer vision, robotics, and geometry. While Adam is commonly used for training due to its stochastic proficiency, it entails lengthy training durations. To address this, we explore alternative optimization techniques for accelerated training without sacrificing accuracy. Traditional second-order optimizers like L-BFGS are suboptimal in stochastic settings, making them unsuitable for large-scale data sets. Instead, we propose stochastic training using curvature-aware diagonal preconditioners, showcasing their effectiveness across various signal modalities such as images, shape reconstruction, and Neural Radiance Fields (NeRF).
[ "['Shin-Fang Chng' 'Hemanth Saratchandran' 'Simon Lucey']" ]
null
null
2402.08787
null
null
http://arxiv.org/pdf/2402.08787v5
2024-07-15T00:18:21Z
2024-02-13T20:51:58Z
Rethinking Machine Unlearning for Large Language Models
We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities, while maintaining the integrity of essential knowledge generation and not affecting causally unrelated information. We envision LLM unlearning becoming a pivotal element in the life-cycle management of LLMs, potentially standing as an essential foundation for developing generative AI that is not only safe, secure, and trustworthy, but also resource-efficient without the need of full retraining. We navigate the unlearning landscape in LLMs from conceptual formulation, methodologies, metrics, and applications. In particular, we highlight the often-overlooked aspects of existing LLM unlearning research, e.g., unlearning scope, data-model interaction, and multifaceted efficacy assessment. We also draw connections between LLM unlearning and related areas such as model editing, influence functions, model explanation, adversarial training, and reinforcement learning. Furthermore, we outline an effective assessment framework for LLM unlearning and explore its applications in copyright and privacy safeguards and sociotechnical harm reduction.
[ "['Sijia Liu' 'Yuanshun Yao' 'Jinghan Jia' 'Stephen Casper'\n 'Nathalie Baracaldo' 'Peter Hase' 'Yuguang Yao' 'Chris Yuhao Liu'\n 'Xiaojun Xu' 'Hang Li' 'Kush R. Varshney' 'Mohit Bansal' 'Sanmi Koyejo'\n 'Yang Liu']" ]
null
null
2402.08789
null
null
http://arxiv.org/pdf/2402.08789v1
2024-02-13T20:54:55Z
2024-02-13T20:54:55Z
Leveraging cough sounds to optimize chest x-ray usage in low-resource settings
Chest X-ray is a commonly used tool during triage, diagnosis and management of respiratory diseases. In resource-constrained settings, optimizing this resource can lead to valuable cost savings for the health care system and the patients, as well as an improvement in consultation time. We used prospectively-collected data from 137 patients referred for chest X-ray at the Christian Medical Center and Hospital (CMCH) in Purnia, Bihar, India. Each patient provided at least five coughs while awaiting radiography. Collected cough sounds were analyzed using acoustic AI methods. Cross-validation was done on temporal and spectral features of the cough sounds of each patient. Features were summarized using standard statistical approaches. Three models were developed, tested and compared in their capacity to predict an abnormal result in the chest X-ray. All three methods yielded models that could discriminate to some extent between normal and abnormal, with logistic regression performing best, with an area under the receiver operating characteristic curve ranging from 0.7 to 0.78. Despite its limitations and relatively small sample size, this study shows that AI-enabled algorithms can use cough sounds to predict which individuals presenting for chest radiographic examination will have a normal or abnormal result. These results call for expanding this research, given the potential optimization of limited health care resources in low- and middle-income countries.
[ "['Alexander Philip' 'Sanya Chawla' 'Lola Jover' 'George P. Kafentzis'\n 'Joe Brew' 'Vishakh Saraf' 'Shibu Vijayan' 'Peter Small'\n 'Carlos Chaccour']" ]
null
null
2402.08790
null
null
http://arxiv.org/pdf/2402.08790v1
2024-02-13T20:58:36Z
2024-02-13T20:58:36Z
Improving Molecule Generation and Drug Discovery with a Knowledge-enhanced Generative Model
Recent advancements in generative models have established state-of-the-art benchmarks in generating molecules and novel drug candidates. Despite these successes, a significant gap persists between generative models and the utilization of extensive biomedical knowledge, often systematized within knowledge graphs, whose potential to inform and enhance generative processes has not been realized. In this paper, we present a novel approach that bridges this divide by developing a framework for knowledge-enhanced generative models called K-DReAM. We develop a scalable methodology to extend the functionality of knowledge graphs while preserving semantic integrity and incorporate this contextual information into a generative framework to guide a diffusion-based model. The integration of knowledge graph embeddings with our generative model furnishes a robust mechanism for producing novel drug candidates possessing specific characteristics while ensuring validity and synthesizability. K-DReAM outperforms state-of-the-art generative models on both unconditional and targeted generation tasks.
[ "['Aditya Malusare' 'Vaneet Aggarwal']" ]
null
null
2402.08799
null
null
http://arxiv.org/pdf/2402.08799v1
2024-02-13T21:13:29Z
2024-02-13T21:13:29Z
Projection-Free Online Convex Optimization with Time-Varying Constraints
We consider the setting of online convex optimization with adversarial time-varying constraints in which actions must be feasible w.r.t. a fixed constraint set, and are also required on average to approximately satisfy additional time-varying constraints. Motivated by scenarios in which the fixed feasible set (hard constraint) is difficult to project on, we consider projection-free algorithms that access this set only through a linear optimization oracle (LOO). We present an algorithm that, on a sequence of length $T$ and using overall $T$ calls to the LOO, guarantees $\tilde{O}(T^{3/4})$ regret w.r.t. the losses and $O(T^{7/8})$ constraint violation (ignoring all quantities except for $T$). In particular, these bounds hold w.r.t. any interval of the sequence. We also present a more efficient algorithm that requires only first-order oracle access to the soft constraints and achieves similar bounds w.r.t. the entire sequence. We extend the latter to the setting of bandit feedback and obtain similar bounds (as a function of $T$) in expectation.
[ "['Dan Garber' 'Ben Kretzu']" ]
null
null
2402.08808
null
null
http://arxiv.org/pdf/2402.08808v1
2024-02-13T21:26:38Z
2024-02-13T21:26:38Z
Depth Separation in Norm-Bounded Infinite-Width Neural Networks
We study depth separation in infinite-width neural networks, where complexity is controlled by the overall squared $\ell_2$-norm of the weights (sum of squares of all weights in the network). Whereas previous depth separation results focused on separation in terms of width, such results do not give insight into whether depth determines if it is possible to learn a network that generalizes well even when the network width is unbounded. Here, we study separation in terms of the sample complexity required for learnability. Specifically, we show that there are functions that are learnable with sample complexity polynomial in the input dimension by norm-controlled depth-3 ReLU networks, yet are not learnable with sub-exponential sample complexity by norm-controlled depth-2 ReLU networks (with any value for the norm). We also show that a similar statement in the reverse direction is not possible: any function learnable with polynomial sample complexity by a norm-controlled depth-2 ReLU network with infinite width is also learnable with polynomial sample complexity by a norm-controlled depth-3 ReLU network.
[ "['Suzanna Parkinson' 'Greg Ongie' 'Rebecca Willett' 'Ohad Shamir'\n 'Nathan Srebro']" ]
null
null
2402.08811
null
null
http://arxiv.org/pdf/2402.08811v1
2024-02-13T21:30:44Z
2024-02-13T21:30:44Z
Deep and shallow data science for multi-scale optical neuroscience
Optical imaging of the brain has expanded dramatically in the past two decades. New optics, indicators, and experimental paradigms are now enabling in-vivo imaging from the synaptic to the cortex-wide scales. To match the resulting flood of data across scales, computational methods are continuously being developed to meet the need of extracting biologically relevant information. In this pursuit, challenges arise in some domains (e.g., SNR and resolution limits in micron-scale data) that require specialized algorithms. These algorithms can, for example, make use of state-of-the-art machine learning to maximally learn the details of a given scale to optimize the processing pipeline. Other methods, in contrast, such as graph signal processing, seek to abstract away from some of the scale-specific details to provide solutions to specific sub-problems common across scales of neuroimaging. Here we discuss limitations and tradeoffs in algorithmic design with the goal of identifying how data quality and variability can hamper algorithm use and dissemination.
[ "['Gal Mishne' 'Adam Charles']" ]
null
null
2402.08813
null
null
http://arxiv.org/pdf/2402.08813v1
2024-02-13T21:36:30Z
2024-02-13T21:36:30Z
Model approximation in MDPs with unbounded per-step cost
We consider the problem of designing a control policy for an infinite-horizon discounted cost Markov decision process $\mathcal{M}$ when we only have access to an approximate model $\hat{\mathcal{M}}$. How well does an optimal policy $\hat{\pi}^{\star}$ of the approximate model perform when used in the original model $\mathcal{M}$? We answer this question by bounding a weighted norm of the difference between the value function of $\hat{\pi}^{\star}$ when used in $\mathcal{M}$ and the optimal value function of $\mathcal{M}$. We then extend our results and obtain potentially tighter upper bounds by considering affine transformations of the per-step cost. We further provide upper bounds that explicitly depend on the weighted distance between cost functions and weighted distance between transition kernels of the original and approximate models. We present examples to illustrate our results.
[ "['Berk Bozkurt' 'Aditya Mahajan' 'Ashutosh Nayyar' 'Yi Ouyang']" ]
null
null
2402.08818
null
null
http://arxiv.org/pdf/2402.08818v1
2024-02-13T21:54:15Z
2024-02-13T21:54:15Z
Corridor Geometry in Gradient-Based Optimization
We characterize regions of a loss surface as corridors when the continuous curves of steepest descent -- the solutions of the gradient flow -- become straight lines. We show that corridors provide insights into gradient-based optimization, since corridors are exactly the regions where gradient descent and the gradient flow follow the same trajectory, while the loss decreases linearly. As a result, inside corridors there are no implicit regularization effects or training instabilities that have been shown to occur due to the drift between gradient descent and the gradient flow. Using the linear decrease of the loss on corridors, we devise a learning rate adaptation scheme for gradient descent; we call this scheme Corridor Learning Rate (CLR). The CLR formulation coincides with a special case of the Polyak step-size, discovered in the context of convex optimization. The Polyak step-size has recently been shown to also have good convergence properties for neural networks; we further confirm this here with results on CIFAR-10 and ImageNet.
[ "['Benoit Dherin' 'Mihaela Rosca']" ]
null
null
2402.08823
null
null
http://arxiv.org/pdf/2402.08823v1
2024-02-13T22:07:29Z
2024-02-13T22:07:29Z
RanDumb: A Simple Approach that Questions the Efficacy of Continual Representation Learning
We propose RanDumb to examine the efficacy of continual representation learning. RanDumb embeds raw pixels using a fixed random transform which approximates an RBF-Kernel, initialized before seeing any data, and learns a simple linear classifier on top. We present a surprising and consistent finding: RanDumb significantly outperforms the continually learned representations using deep networks across numerous continual learning benchmarks, demonstrating the poor performance of representation learning in these scenarios. RanDumb stores no exemplars and performs a single pass over the data, processing one sample at a time. It complements GDumb, operating in a low-exemplar regime where GDumb has especially poor performance. We reach the same consistent conclusions when RanDumb is extended to scenarios with pretrained models, replacing the random transform with a pretrained feature extractor. Our investigation is both surprising and alarming as it questions our understanding of how to effectively design and train models that require efficient continual representation learning, and necessitates a principled reinvestigation of the widely explored problem formulation itself. Our code is available at https://github.com/drimpossible/RanDumb.
[ "['Ameya Prabhu' 'Shiven Sinha' 'Ponnurangam Kumaraguru'\n 'Philip H. S. Torr' 'Ozan Sener' 'Puneet K. Dokania']" ]
null
null
2402.08824
null
null
http://arxiv.org/abs/2402.08824v1
2024-02-13T22:07:57Z
2024-02-13T22:07:57Z
Disambiguated Node Classification with Graph Neural Networks
Graph Neural Networks (GNNs) have demonstrated significant success in learning from graph-structured data across various domains. Despite their great success, one critical challenge is often overlooked by existing works, i.e., learning message propagation that can generalize effectively to underrepresented graph regions. These minority regions often exhibit irregular homophily/heterophily patterns and diverse neighborhood class distributions, resulting in ambiguity. In this work, we investigate the ambiguity problem within GNNs, its impact on representation learning, and the development of richer supervision signals to fight against this problem. We conduct a fine-grained evaluation of GNNs, analyzing the existence of ambiguity in different graph regions and its relation to node positions. To disambiguate node embeddings, we propose a novel method, {method}, which exploits additional optimization guidance to enhance representation learning, particularly for nodes in ambiguous regions. {method} identifies ambiguous nodes based on temporal inconsistency of predictions and introduces a disambiguation regularization by employing contrastive learning in a topology-aware manner. {method} promotes discriminativity of node representations and can alleviate semantic mixing caused by message propagation, effectively addressing the ambiguity problem. Empirical results validate the efficiency of {method} and highlight its potential to improve GNN performance in underrepresented graph regions.
[ "['Tianxiang Zhao' 'Xiang Zhang' 'Suhang Wang']" ]
null
null
2402.08832
null
null
http://arxiv.org/pdf/2402.08832v1
2024-02-13T22:29:40Z
2024-02-13T22:29:40Z
Intelligent Agricultural Management Considering N$_2$O Emission and Climate Variability with Uncertainties
This study examines how artificial intelligence (AI), especially Reinforcement Learning (RL), can be used in farming to boost crop yields, fine-tune nitrogen use and watering, and reduce nitrate runoff and greenhouse gases, focusing on Nitrous Oxide (N$_2$O) emissions from soil. Facing climate change and limited agricultural knowledge, we use Partially Observable Markov Decision Processes (POMDPs) with a crop simulator to model AI agents' interactions with farming environments. We apply deep Q-learning with Recurrent Neural Network (RNN)-based Q networks for training agents on optimal actions. Also, we develop Machine Learning (ML) models to predict N$_2$O emissions, integrating these predictions into the simulator. Our research tackles uncertainties in N$_2$O emission estimates with a probabilistic ML approach and climate variability through a stochastic weather model, offering a range of emission outcomes to improve forecast reliability and decision-making. By incorporating climate change effects, we enhance agents' climate adaptability, aiming for resilient agricultural practices. Results show these agents can align crop productivity with environmental concerns by penalizing N$_2$O emissions, adapting effectively to climate shifts like warmer temperatures and less rain. This strategy improves farm management under climate change, highlighting AI's role in sustainable agriculture.
[ "['Zhaoan Wang' 'Shaoping Xiao' 'Jun Wang' 'Ashwin Parab' 'Shivam Patel']" ]
null
null
2402.08845
null
null
http://arxiv.org/pdf/2402.08845v4
2024-06-04T05:15:01Z
2024-02-13T23:25:01Z
Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation
We investigate the problem of explainability for machine learning models, focusing on Feature Attribution Methods (FAMs) that evaluate feature importance through perturbation tests. Despite their utility, FAMs struggle to distinguish the contributions of different features when their prediction changes are similar after perturbation. To enhance FAMs' discriminative power, we introduce Feature Attribution with Necessity and Sufficiency (FANS), which finds a neighborhood of the input such that perturbing samples within this neighborhood has a high Probability of being a Necessity and Sufficiency (PNS) cause for the change in predictions, and uses this PNS as the importance of the feature. Specifically, FANS computes this PNS via a heuristic strategy for estimating the neighborhood and a two-stage (factual and interventional) perturbation test for counterfactual reasoning. To generate counterfactual samples, we use a resampling-based approach on the observed samples to approximate the required conditional distribution. We demonstrate that FANS outperforms existing attribution methods on six benchmarks. Please refer to the source code at https://github.com/DMIRLAB-Group/FANS.
[ "['Xuexin Chen' 'Ruichu Cai' 'Zhengting Huang' 'Yuxuan Zhu'\n 'Julien Horwood' 'Zhifeng Hao' 'Zijian Li' 'Jose Miguel Hernandez-Lobato']" ]
null
null
2402.08847
null
null
http://arxiv.org/pdf/2402.08847v2
2024-07-07T22:44:32Z
2024-02-13T23:26:11Z
Space-Time Diffusion Bridge
In this study, we introduce a novel method for generating new synthetic samples that are independent and identically distributed (i.i.d.) from high-dimensional real-valued probability distributions, as defined implicitly by a set of Ground Truth (GT) samples. Central to our method is the integration of space-time mixing strategies that extend across temporal and spatial dimensions. Our methodology is underpinned by three interrelated stochastic processes designed to enable optimal transport from an easily tractable initial probability distribution to the target distribution represented by the GT samples: (a) linear processes incorporating space-time mixing that yield Gaussian conditional probability densities, (b) their diffusion bridge analogs that are conditioned to the initial and final state vectors, and (c) nonlinear stochastic processes refined through score-matching techniques. The crux of our training regime involves fine-tuning the nonlinear model, and potentially the linear models -- to align closely with the GT data. We validate the efficacy of our space-time diffusion approach with numerical experiments, laying the groundwork for more extensive future theory and experiments to fully authenticate the method, particularly providing a more efficient (possibly simulation-free) inference.
[ "['Hamidreza Behjoo' 'Michael Chertkov']" ]
null
null
2402.08848
null
null
http://arxiv.org/pdf/2402.08848v2
2024-06-05T00:17:20Z
2024-02-13T23:29:09Z
Hybrid Inverse Reinforcement Learning
The inverse reinforcement learning approach to imitation learning is a double-edged sword. On the one hand, it can enable learning from a smaller number of expert demonstrations with more robustness to error compounding than behavioral cloning approaches. On the other hand, it requires that the learner repeatedly solve a computationally expensive reinforcement learning (RL) problem. Often, much of this computation is wasted searching over policies very dissimilar to the expert's. In this work, we propose using hybrid RL -- training on a mixture of online and expert data -- to curtail unnecessary exploration. Intuitively, the expert data focuses the learner on good states during training, which reduces the amount of exploration required to compute a strong policy. Notably, such an approach doesn't need the ability to reset the learner to arbitrary states in the environment, a requirement of prior work in efficient inverse RL. More formally, we derive a reduction from inverse RL to expert-competitive RL (rather than globally optimal RL) that allows us to dramatically reduce interaction during the inner policy search loop while maintaining the benefits of the IRL approach. This allows us to derive both model-free and model-based hybrid inverse RL algorithms with strong policy performance guarantees. Empirically, we find that our approaches are significantly more sample efficient than standard inverse RL and several other baselines on a suite of continuous control tasks.
[ "['Juntao Ren' 'Gokul Swamy' 'Zhiwei Steven Wu' 'J. Andrew Bagnell'\n 'Sanjiban Choudhury']" ]
null
null
2402.08856
null
null
http://arxiv.org/pdf/2402.08856v2
2024-06-15T20:03:56Z
2024-02-13T23:53:47Z
Approximation of relation functions and attention mechanisms
Inner products of neural network feature maps arise in a wide variety of machine learning frameworks as a method of modeling relations between inputs. This work studies the approximation properties of inner products of neural networks. It is shown that the inner product of a multi-layer perceptron with itself is a universal approximator for symmetric positive-definite relation functions. In the case of asymmetric relation functions, it is shown that the inner product of two different multi-layer perceptrons is a universal approximator. In both cases, a bound is obtained on the number of neurons required to achieve a given accuracy of approximation. In the symmetric case, the function class can be identified with kernels of reproducing kernel Hilbert spaces, whereas in the asymmetric case the function class can be identified with kernels of reproducing kernel Banach spaces. Finally, these approximation results are applied to analyzing the attention mechanism underlying Transformers, showing that any retrieval mechanism defined by an abstract preorder can be approximated by attention through its inner product relations. This result uses the Debreu representation theorem in economics to represent preference relations in terms of utility functions.
[ "['Awni Altabaa' 'John Lafferty']" ]
null
null
2402.08864
null
null
http://arxiv.org/pdf/2402.08864v2
2024-06-05T02:05:13Z
2024-02-14T00:18:10Z
DeepPolar: Inventing Nonlinear Large-Kernel Polar Codes via Deep Learning
Progress in designing channel codes has been driven by human ingenuity and, fittingly, has been sporadic. Polar codes, developed on the foundation of Arikan's polarization kernel, represent the latest breakthrough in coding theory and have emerged as the state-of-the-art error-correction code for short-to-medium block length regimes. In an effort to automate the invention of good channel codes, especially in this regime, we explore a novel, non-linear generalization of Polar codes, which we call DeepPolar codes. DeepPolar codes extend the conventional Polar coding framework by utilizing a larger kernel size and parameterizing these kernels and matched decoders through neural networks. Our results demonstrate that these data-driven codes effectively leverage the benefits of a larger kernel size, resulting in enhanced reliability when compared to both existing neural codes and conventional Polar codes.
[ "['S Ashwin Hebbar' 'Sravan Kumar Ankireddy' 'Hyeji Kim' 'Sewoong Oh'\n 'Pramod Viswanath']" ]
null
null
2402.08871
null
null
http://arxiv.org/pdf/2402.08871v2
2024-05-30T08:52:56Z
2024-02-14T00:35:10Z
Position: Topological Deep Learning is the New Frontier for Relational Learning
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models. This paper posits that TDL is the new frontier for relational learning. TDL may complement graph representation learning and geometric deep learning by incorporating topological concepts, and can thus provide a natural choice for various machine learning settings. To this end, this paper discusses open problems in TDL, ranging from practical benefits to theoretical foundations. For each problem, it outlines potential solutions and future research opportunities. At the same time, this paper serves as an invitation to the scientific community to actively participate in TDL research to unlock the potential of this emerging field.
[ "['Theodore Papamarkou' 'Tolga Birdal' 'Michael Bronstein'\n 'Gunnar Carlsson' 'Justin Curry' 'Yue Gao' 'Mustafa Hajij' 'Roland Kwitt'\n 'Pietro Liò' 'Paolo Di Lorenzo' 'Vasileios Maroulas' 'Nina Miolane'\n 'Farzana Nasrin' 'Karthikeyan Natesan Ramamurthy' 'Bastian Rieck'\n 'Simone Scardapane' 'Michael T. Schaub' 'Petar Veličković' 'Bei Wang'\n 'Yusu Wang' 'Guo-Wei Wei' 'Ghada Zamzmi']" ]
null
null
2402.08879
null
null
http://arxiv.org/pdf/2402.08879v1
2024-02-14T00:56:09Z
2024-02-14T00:56:09Z
Inference for an Algorithmic Fairness-Accuracy Frontier
Decision-making processes increasingly rely on the use of algorithms. Yet, algorithms' predictive ability frequently exhibits systematic variation across subgroups of the population. While both fairness and accuracy are desirable properties of an algorithm, they often come at the cost of one another. What should a fairness-minded policymaker do then, when confronted with finite data? In this paper, we provide a consistent estimator for a theoretical fairness-accuracy frontier put forward by Liang, Lu and Mu (2023) and propose inference methods to test hypotheses that have received much attention in the fairness literature, such as (i) whether fully excluding a covariate from use in training the algorithm is optimal and (ii) whether there are less discriminatory alternatives to an existing algorithm. We also provide an estimator for the distance between a given algorithm and the fairest point on the frontier, and characterize its asymptotic distribution. We leverage the fact that the fairness-accuracy frontier is part of the boundary of a convex set that can be fully represented by its support function. We show that the estimated support function converges to a tight Gaussian process as the sample size increases, and then express policy-relevant hypotheses as restrictions on the support function to construct valid test statistics.
[ "['Yiqi Liu' 'Francesca Molinari']" ]
null
null
2402.08882
null
null
http://arxiv.org/pdf/2402.08882v1
2024-02-14T01:13:55Z
2024-02-14T01:13:55Z
Moving Object Proposals with Deep Learned Optical Flow for Video Object Segmentation
Dynamic scene understanding is one of the most conspicuous fields of interest in the computer vision community. To enhance dynamic scene understanding, pixel-wise segmentation with neural networks is widely accepted. Recent research on pixel-wise segmentation has combined semantic and motion information and produced good performance. In this work, we propose a state-of-the-art neural network architecture to accurately and efficiently obtain moving object proposals (MOP). We first train an unsupervised convolutional neural network (UnFlow) to generate optical flow estimates. Then we feed the output of the optical flow network into a fully convolutional SegNet model. The main contributions of our work are (1) fine-tuning the pretrained optical flow model on the brand-new DAVIS dataset; (2) leveraging fully convolutional neural networks with an encoder-decoder architecture to segment objects. We developed the code with TensorFlow, and executed the training and evaluation processes on an AWS EC2 instance.
[ "['Ge Shi' 'Zhili Yang']" ]
null
null
2402.08890
null
null
http://arxiv.org/pdf/2402.08890v1
2024-02-14T01:41:50Z
2024-02-14T01:41:50Z
Predicting the Emergence of Solar Active Regions Using Machine Learning
To create early warning capabilities for upcoming Space Weather disturbances, we have selected a dataset of 61 emerging active regions, which allows us to identify characteristic features in the evolution of acoustic power density to predict continuum intensity emergence. For our study, we have utilized Doppler shift and continuum intensity observations from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). The local tracking of 30.66 x 30.66-degree patches in the vicinity of active regions allowed us to trace the evolution of active regions starting from the pre-emergence state. We have developed a machine learning model to capture the acoustic power flux density variations associated with upcoming magnetic flux emergence. The trained Long Short-Term Memory (LSTM) model is able to predict 5 hours ahead whether, in a given area of the solar surface, continuum intensity values will decrease. The performed study allows us to investigate the potential of the machine learning approach to predict the emergence of active regions using acoustic power maps as input.
[ "['Spiridon Kasapis' 'Irina N. Kitiashvili' 'Alexander G. Kosovichev'\n 'John T. Stefan' 'Bhairavi Apte']" ]
null
null
2402.08892
null
null
http://arxiv.org/pdf/2402.08892v1
2024-02-14T01:56:31Z
2024-02-14T01:56:31Z
Weakly Supervised Segmentation of Vertebral Bodies with Iterative Slice-propagation
Vertebral body (VB) segmentation is an important preliminary step towards medical visual diagnosis for spinal diseases. However, most previous works require pixel/voxel-wise strong supervisions, which is expensive, tedious and time-consuming for experts to annotate. In this paper, we propose a Weakly supervised Iterative Spinal Segmentation (WISS) method leveraging only four corner landmark weak labels on a single sagittal slice to achieve automatic volumetric segmentation from CT images for VBs. WISS first segments VBs on an annotated sagittal slice in an iterative self-training manner. This self-training method alternates between training and refining labels in the training set. Then WISS proceeds to segment the whole VBs slice by slice with a slice-propagation method to obtain volumetric segmentations. We evaluate the performance of WISS on a private spinal metastases CT dataset and the public lumbar CT dataset. On the first dataset, WISS achieves distinct improvements with regard to two different backbones. For the second dataset, WISS achieves dice coefficients of $91.7\%$ and $83.7\%$ for mid-sagittal slices and 3D CT volumes, respectively, saving a lot of labeling costs and only sacrificing a little segmentation performance.
[ "['Shiqi Peng' 'Bolin Lai' 'Guangyu Yao' 'Xiaoyun Zhang' 'Ya Zhang'\n 'Yan-Feng Wang' 'Hui Zhao']" ]
null
null
2402.08902
null
null
http://arxiv.org/pdf/2402.08902v3
2024-06-15T20:01:48Z
2024-02-14T02:17:37Z
Auto-Encoding Bayesian Inverse Games
When multiple agents interact in a common environment, each agent's actions impact others' future decisions, and noncooperative dynamic games naturally capture this coupling. In interactive motion planning, however, agents typically do not have access to a complete model of the game, e.g., due to unknown objectives of other players. Therefore, we consider the inverse game problem, in which some properties of the game are unknown a priori and must be inferred from observations. Existing maximum likelihood estimation (MLE) approaches to solve inverse games provide only point estimates of unknown parameters without quantifying uncertainty, and perform poorly when many parameter values explain the observed behavior. To address these limitations, we take a Bayesian perspective and construct posterior distributions of game parameters. To render inference tractable, we employ a variational autoencoder (VAE) with an embedded differentiable game solver. This structured VAE can be trained from an unlabeled dataset of observed interactions, naturally handles continuous, multi-modal distributions, and supports efficient sampling from the inferred posteriors without computing game solutions at runtime. Extensive evaluations in simulated driving scenarios demonstrate that the proposed approach successfully learns the prior and posterior game parameter distributions, provides more accurate objective estimates than MLE baselines, and facilitates safer and more efficient game-theoretic motion planning.
[ "['Xinjie Liu' 'Lasse Peters' 'Javier Alonso-Mora' 'Ufuk Topcu'\n 'David Fridovich-Keil']" ]
null
null
2402.08907
null
null
http://arxiv.org/pdf/2402.08907v2
2024-05-04T19:41:52Z
2024-02-14T02:46:47Z
Subgraph Pooling: Tackling Negative Transfer on Graphs
Transfer learning aims to enhance performance on a target task by using knowledge from related tasks. However, when the source and target tasks are not closely aligned, it can lead to reduced performance, known as negative transfer. Unlike in image or text data, we find that negative transfer could commonly occur in graph-structured data, even when source and target graphs have semantic similarities. Specifically, we identify that structural differences significantly amplify the dissimilarities in the node embeddings across graphs. To mitigate this, we bring a new insight in this paper: for semantically similar graphs, although structural differences lead to significant distribution shift in node embeddings, their impact on subgraph embeddings could be marginal. Building on this insight, we introduce Subgraph Pooling (SP) by aggregating nodes sampled from a k-hop neighborhood and Subgraph Pooling++ (SP++) by a random walk, to mitigate the impact of graph structural differences on knowledge transfer. We theoretically analyze the role of SP in reducing graph discrepancy and conduct extensive experiments to evaluate its superiority under various settings. The proposed SP methods are effective yet elegant, which can be easily applied on top of any backbone Graph Neural Networks (GNNs). Our code and data are available at: https://github.com/Zehong-Wang/Subgraph-Pooling.
[ "['Zehong Wang' 'Zheyuan Zhang' 'Chuxu Zhang' 'Yanfang Ye']" ]
null
null
2402.08910
null
null
http://arxiv.org/pdf/2402.08910v1
2024-02-14T02:53:51Z
2024-02-14T02:53:51Z
Learning-based Bone Quality Classification Method for Spinal Metastasis
Spinal metastasis is the most common disease in bone metastasis and may cause pain, instability and neurological injuries. Early detection of spinal metastasis is critical for accurate staging and optimal treatment. The diagnosis is usually facilitated with Computed Tomography (CT) scans, which requires considerable efforts from well-trained radiologists. In this paper, we explore a learning-based automatic bone quality classification method for spinal metastasis based on CT images. We simultaneously take the posterolateral spine involvement classification task into account, and employ a multi-task learning (MTL) technique to improve the performance. MTL acts as a form of inductive bias which helps the model generalize better on each task by sharing representations between related tasks. Based on the prior knowledge that the mixed type can be viewed as both blastic and lytic, we model the task of bone quality classification as two binary classification sub-tasks, i.e., whether blastic and whether lytic, and leverage a multiple layer perceptron to combine their predictions. In order to make the model more robust and generalize better, self-paced learning is adopted to gradually involve from easy to more complex samples into the training process. The proposed learning-based method is evaluated on a proprietary spinal metastasis CT dataset. At slice level, our method significantly outperforms a 121-layer DenseNet classifier in sensitivities by $+12.54\%$, $+7.23\%$ and $+29.06\%$ for blastic, mixed and lytic lesions, respectively, and by $+12.33\%$, $+23.21\%$ and $+34.25\%$ at the vertebrae level.
[ "['Shiqi Peng' 'Bolin Lai' 'Guangyu Yao' 'Xiaoyun Zhang' 'Ya Zhang'\n 'Yan-Feng Wang' 'Hui Zhao']" ]
null
null
2402.08918
null
null
http://arxiv.org/pdf/2402.08918v1
2024-02-14T03:16:13Z
2024-02-14T03:16:13Z
Graph Inference Acceleration by Learning MLPs on Graphs without Supervision
Graph Neural Networks (GNNs) have demonstrated effectiveness in various graph learning tasks, yet their reliance on message-passing constrains their deployment in latency-sensitive applications such as financial fraud detection. Recent works have explored distilling knowledge from GNNs to Multi-Layer Perceptrons (MLPs) to accelerate inference. However, this task-specific supervised distillation limits generalization to unseen nodes, which are prevalent in latency-sensitive applications. To this end, we present SimMLP, a Simple yet effective framework for learning MLPs on graphs without supervision, to enhance generalization. SimMLP employs self-supervised alignment between GNNs and MLPs to capture the fine-grained and generalizable correlation between node features and graph structures, and proposes two strategies to alleviate the risk of trivial solutions. Theoretically, we comprehensively analyze SimMLP to demonstrate its equivalence to GNNs in the optimal case and its generalization capability. Empirically, SimMLP outperforms state-of-the-art baselines, especially in settings with unseen nodes. In particular, it obtains significant performance gains ($7\sim26\%$) over MLPs and inference acceleration over GNNs ($90\sim126\times$) on large-scale graph datasets. Our codes are available at: https://github.com/Zehong-Wang/SimMLP.
[ "['Zehong Wang' 'Zheyuan Zhang' 'Chuxu Zhang' 'Yanfang Ye']" ]
null
null
2402.08919
null
null
http://arxiv.org/pdf/2402.08919v1
2024-02-14T03:31:17Z
2024-02-14T03:31:17Z
Interpretable Measures of Conceptual Similarity by Complexity-Constrained Descriptive Auto-Encoding
Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning. In legal doctrine however, determining the degree of similarity between works requires subjective analysis, and fact-finders (judges and juries) can demonstrate considerable variability in these subjective judgement calls. Images that are structurally similar can be deemed dissimilar, whereas images of completely different scenes can be deemed similar enough to support a claim of copying. We seek to define and compute a notion of "conceptual similarity" among images that captures high-level relations even among images that do not share repeated elements or visually similar components. The idea is to use a base multi-modal model to generate "explanations" (captions) of visual data at increasing levels of complexity. Then, similarity can be measured by the length of the caption needed to discriminate between the two images: Two highly dissimilar images can be discriminated early in their description, whereas conceptually similar ones will need more detail to be distinguished. We operationalize this definition and show that it correlates with subjective (averaged human evaluation) assessment, and beats existing baselines on both image-to-image and text-to-text similarity benchmarks. Beyond just providing a number, our method also offers interpretability by pointing to the specific level of granularity of the description where the source data are differentiated.
[ "['Alessandro Achille' 'Greg Ver Steeg' 'Tian Yu Liu' 'Matthew Trager'\n 'Carson Klingenberg' 'Stefano Soatto']" ]
null
null
2402.08922
null
null
http://arxiv.org/pdf/2402.08922v2
2024-06-19T15:35:04Z
2024-02-14T03:43:05Z
The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes
Large-scale black-box models have become ubiquitous across numerous applications. Understanding the influence of individual training data sources on predictions made by these models is crucial for improving their trustworthiness. Current influence estimation techniques involve computing gradients for every training point or repeated training on different subsets. These approaches face obvious computational challenges when scaled up to large datasets and models. In this paper, we introduce and explore the Mirrored Influence Hypothesis, highlighting a reciprocal nature of influence between training and test data. Specifically, it suggests that evaluating the influence of training data on test predictions can be reformulated as an equivalent, yet inverse problem: assessing how the predictions for training samples would be altered if the model were trained on specific test samples. Through both empirical and theoretical validations, we demonstrate the wide applicability of our hypothesis. Inspired by this, we introduce a new method for estimating the influence of training data, which requires calculating gradients for specific test samples, paired with a forward pass for each training point. This approach can capitalize on the common asymmetry in scenarios where the number of test samples under concurrent examination is much smaller than the scale of the training dataset, thus gaining a significant improvement in efficiency compared to existing approaches. We demonstrate the applicability of our method across a range of scenarios, including data attribution in diffusion models, data leakage detection, analysis of memorization, mislabeled data detection, and tracing behavior in language models. Our code will be made available at https://github.com/ruoxi-jia-group/Forward-INF.
[ "['Myeongseob Ko' 'Feiyang Kang' 'Weiyan Shi' 'Ming Jin' 'Zhou Yu'\n 'Ruoxi Jia']" ]
null
null
2402.08923
null
null
http://arxiv.org/pdf/2402.08923v2
2024-02-16T22:06:43Z
2024-02-14T03:45:26Z
IMUOptimize: A Data-Driven Approach to Optimal IMU Placement for Human Pose Estimation with Transformer Architecture
This paper presents a novel approach for predicting human poses using IMU data, diverging from previous studies such as DIP-IMU, IMUPoser, and TransPose, which use up to 6 IMUs in conjunction with bidirectional RNNs. We introduce two main innovations: a data-driven strategy for optimal IMU placement and a transformer-based model architecture for time series analysis. Our findings indicate that our approach not only outperforms traditional 6 IMU-based biRNN models but also that the transformer architecture significantly enhances pose reconstruction from data obtained from 24 IMU locations, with equivalent performance to biRNNs when using only 6 IMUs. The enhanced accuracy provided by our optimally chosen locations, when coupled with the parallelizability and performance of transformers, provides significant improvements to the field of IMU-based pose estimation.
[ "['Varun Ramani' 'Hossein Khayami' 'Yang Bai' 'Nakul Garg' 'Nirupam Roy']" ]
null
null
2402.08925
null
null
http://arxiv.org/pdf/2402.08925v1
2024-02-14T03:56:27Z
2024-02-14T03:56:27Z
MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences
Reinforcement Learning from Human Feedback (RLHF) aligns language models to human preferences by employing a singular reward model derived from preference data. However, such an approach overlooks the rich diversity of human preferences inherent in data collected from multiple users. In this work, we first derive an impossibility result of alignment with single reward RLHF, thereby highlighting its insufficiency in representing diverse human preferences. To provide an equitable solution to the problem, we learn a mixture of preference distributions via an expectation-maximization algorithm and propose a MaxMin alignment objective for policy learning inspired by the Egalitarian principle in social choice theory to better represent diverse human preferences. We elucidate the connection of our proposed approach to distributionally robust optimization and general utility RL, thereby highlighting the generality and robustness of our proposed solution. We present comprehensive experimental results on small-scale (GPT-2) and large-scale language models (with Tulu2-7B) and show the efficacy of the proposed approach in the presence of diversity among human preferences. Our algorithm achieves an average improvement of more than 16% in win-rates over conventional RLHF algorithms and improves the win-rate (accuracy) for minority groups by over 33% without compromising the performance of majority groups, showcasing the robustness and fairness of our approach. We remark that our findings in this work are not only limited to language models but also extend to reinforcement learning in general.
[ "['Souradip Chakraborty' 'Jiahao Qiu' 'Hui Yuan' 'Alec Koppel'\n 'Furong Huang' 'Dinesh Manocha' 'Amrit Singh Bedi' 'Mengdi Wang']" ]
null
null
2402.08929
null
null
http://arxiv.org/pdf/2402.08929v1
2024-02-14T04:03:38Z
2024-02-14T04:03:38Z
Second Order Methods for Bandit Optimization and Control
Bandit convex optimization (BCO) is a general framework for online decision making under uncertainty. While tight regret bounds for general convex losses have been established, existing algorithms achieving these bounds have prohibitive computational costs for high dimensional data. In this paper, we propose a simple and practical BCO algorithm inspired by the online Newton step algorithm. We show that our algorithm achieves optimal (in terms of horizon) regret bounds for a large class of convex functions that we call $\kappa$-convex. This class contains a wide range of practically relevant loss functions including linear, quadratic, and generalized linear models. In addition to optimal regret, this method is the most efficient known algorithm for several well-studied applications including bandit logistic regression. Furthermore, we investigate the adaptation of our second-order bandit algorithm to online convex optimization with memory. We show that for loss functions with a certain affine structure, the extended algorithm attains optimal regret. This leads to an algorithm with optimal regret for bandit LQR/LQG problems under a fully adversarial noise model, thereby resolving an open question posed in Gradu et al. (2020) and Sun et al. (2023). Finally, we show that the more general problem of BCO with (non-affine) memory is harder. We derive a $\tilde{\Omega}(T^{2/3})$ regret lower bound, even under the assumption of smooth and quadratic losses.
[ "['Arun Suggala' 'Y. Jennifer Sun' 'Praneeth Netrapalli' 'Elad Hazan']" ]
null
null
2402.08943
null
null
http://arxiv.org/pdf/2402.08943v1
2024-02-14T05:08:47Z
2024-02-14T05:08:47Z
Evaluating DTW Measures via a Synthesis Framework for Time-Series Data
Time-series data originate from various applications that describe specific observations or quantities of interest over time. Their analysis often involves the comparison across different time-series data sequences, which in turn requires the alignment of these sequences. Dynamic Time Warping (DTW) is the standard approach to achieve an optimal alignment between two temporal signals. Different variations of DTW have been proposed to address various needs for signal alignment or classifications. However, a comprehensive evaluation of their performance in these time-series data processing tasks is lacking. Most DTW measures perform well on certain types of time-series data without a clear explanation of the reason. To address that, we propose a synthesis framework to model the variation between two time-series data sequences for comparison. Our synthesis framework can produce a realistic initial signal and deform it with controllable variations that mimic real-world scenarios. With this synthesis framework, we produce a large number of time-series sequence pairs with different but known variations, which are used to assess the performance of a number of well-known DTW measures for the tasks of alignment and classification. We report their performance on different variations and suggest the proper DTW measure to use based on the type of variations between two time-series sequences. This is the first time such a guideline is presented for selecting a proper DTW measure. To validate our conclusion, we apply our findings to real-world applications, i.e., the detection of the formation top for the oil and gas industry and the pattern search in streamlines for flow visualization.
[ "['Kishansingh Rajput' 'Duong Binh Nguyen' 'Guoning Chen']" ]
null
null
2402.08946
null
null
http://arxiv.org/pdf/2402.08946v1
2024-02-14T05:22:53Z
2024-02-14T05:22:53Z
Measuring Sharpness in Grokking
Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set. In this workshop paper, we introduce a robust technique for measuring grokking, based on fitting an appropriate functional form. We then use this to investigate the sharpness of transitions in training and validation accuracy under two settings. The first setting is the theoretical framework developed by Levi et al. (2023) where closed form expressions are readily accessible. The second setting is a two-layer MLP trained to predict the parity of bits, with grokking induced by the concealment strategy of Miller et al. (2023). We find that trends between relative grokking gap and grokking sharpness are similar in both settings when using absolute and relative measures of sharpness. Reflecting on this, we make progress toward explaining some trends and identify the need for further study to untangle the various mechanisms which influence the sharpness of grokking.
[ "['Jack Miller' 'Patrick Gleeson' \"Charles O'Neill\" 'Thang Bui' 'Noam Levi']" ]
null
null
2402.08948
null
null
http://arxiv.org/pdf/2402.08948v2
2024-06-08T22:50:27Z
2024-02-14T05:34:24Z
Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
In this work, we study the mean-field flow for learning subspace-sparse polynomials using stochastic gradient descent and two-layer neural networks, where the input distribution is standard Gaussian and the output only depends on the projection of the input onto a low-dimensional subspace. We propose a basis-free generalization of the merged-staircase property in Abbe et al. (2022) and establish a necessary condition for the SGD-learnability. In addition, we prove that the condition is almost sufficient, in the sense that a condition slightly stronger than the necessary condition can guarantee the exponential decay of the loss functional to zero.
[ "['Ziang Chen' 'Rong Ge']" ]
null
null
2402.08957
null
null
http://arxiv.org/pdf/2402.08957v3
2024-05-23T03:13:23Z
2024-02-14T05:57:58Z
MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data
Recent large language models (LLMs) have achieved significant advances in various tasks, including mathematical reasoning and theorem proving. As these two tasks require strict and formal multi-step inference, they are appealing domains for exploring the reasoning ability of LLMs but still face important challenges. Previous studies such as Chain-of-Thought (CoT) have revealed the effectiveness of intermediate steps guidance. However, such step-wise annotation requires heavy labor, leading to insufficient training steps for current benchmarks. To fill this gap, this work introduces MUSTARD, a data generation framework that masters uniform synthesis of theorem and proof data of high quality and diversity. MUSTARD synthesizes data in three stages: (1) It samples a few mathematical concept seeds as the problem category. (2) Then, it prompts a generative language model with the sampled concepts to obtain both the problems and their step-wise formal solutions. (3) Lastly, the framework utilizes a proof assistant (e.g., Lean Prover) to filter the valid proofs. With the proposed MUSTARD, we present a theorem-and-proof benchmark MUSTARDSAUCE with 5,866 valid data points. Each data point contains an informal statement, an informal proof, and a translated formal proof that passes the prover validation. We perform extensive analysis and demonstrate that MUSTARD generates validated high-quality step-by-step data. We further apply MUSTARDSAUCE to fine-tune smaller language models. The fine-tuned Llama 2-7B achieves a 15.41% average relative performance gain in automated theorem proving, and 8.18% in math word problems. Codes and data are available at https://github.com/Eleanor-H/MUSTARD.
[ "['Yinya Huang' 'Xiaohan Lin' 'Zhengying Liu' 'Qingxing Cao' 'Huajian Xin'\n 'Haiming Wang' 'Zhenguo Li' 'Linqi Song' 'Xiaodan Liang']" ]
null
null
2402.08958
null
null
http://arxiv.org/pdf/2402.08958v1
2024-02-14T05:58:43Z
2024-02-14T05:58:43Z
Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers
With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile devices and TVs. Existing PTQ schemes, however, consume considerable time and resources, which could be a bottleneck in real situations where frequent model updates and multiple hyper-parameter tunings are required. As a cost-effective alternative, one-shot PTQ schemes have been proposed. Still, their performance is somewhat limited because they cannot consider the inter-layer dependency within the attention module, which is a very important feature of Transformers. In this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency. The key idea of the proposed algorithm called aespa is to perform quantization layer-wise for efficiency while considering cross-layer dependency to preserve the attention score. Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models.
[ "['Junhan Kim' 'Kyungphil Park' 'Chungman Lee' 'Ho-young Kim'\n 'Joonyoung Kim' 'Yongkweon Jeon']" ]
null
null
2402.08963
null
null
http://arxiv.org/pdf/2402.08963v1
2024-02-14T06:09:36Z
2024-02-14T06:09:36Z
DUEL: Duplicate Elimination on Active Memory for Self-Supervised Class-Imbalanced Learning
Recent machine learning algorithms have been developed using well-curated datasets, which often require substantial cost and resources. On the other hand, the direct use of raw data often leads to overfitting towards frequently occurring class information. To address class imbalances cost-efficiently, we propose an active data filtering process during self-supervised pre-training in our novel framework, Duplicate Elimination (DUEL). This framework integrates an active memory inspired by human working memory and introduces distinctiveness information, which measures the diversity of the data in the memory, to optimize both the feature extractor and the memory. The DUEL policy, which replaces the most duplicated data with new samples, aims to enhance the distinctiveness information in the memory and thereby mitigate class imbalances. We validate the effectiveness of the DUEL framework in class-imbalanced environments, demonstrating its robustness and providing reliable results in downstream tasks. We also analyze the role of the DUEL policy in the training process through various metrics and visualizations.
[ "['Won-Seok Choi' 'Hyundo Lee' 'Dong-Sig Han' 'Junseok Park' 'Heeyeon Koo'\n 'Byoung-Tak Zhang']" ]
null
null
2402.08964
null
null
http://arxiv.org/pdf/2402.08964v1
2024-02-14T06:10:44Z
2024-02-14T06:10:44Z
Predicting User Experience on Laptops from Hardware Specifications
Estimating the overall user experience (UX) on a device is a common challenge faced by manufacturers. Today, device makers primarily rely on microbenchmark scores, such as Geekbench, that stress test specific hardware components, such as CPU or RAM, but do not satisfactorily capture consumer workloads. System designers often rely on domain-specific heuristics and extensive testing of prototypes to reach a desired UX goal, and yet there is often a mismatch between the manufacturers' performance claims and the consumers' experience. We present our initial results on predicting real-life experience on laptops from their hardware specifications. We target web applications that run on Chromebooks (ChromeOS laptops) for a simple and fair aggregation of experience across applications and workloads. On 54 laptops, we track 9 UX metrics on common end-user workloads: web browsing, video playback and audio/video calls. We focus on a subset of high-level metrics exposed by the Chrome browser, that are part of the Web Vitals initiative for judging the UX on web applications. With a dataset of 100K UX data points, we train gradient boosted regression trees that predict the metric values from device specifications. Across our 9 metrics, we note a mean $R^2$ score (goodness-of-fit on our dataset) of 97.8% and a mean MAAPE (percentage error in prediction on unseen data) of 10.1%.
[ "['Saswat Padhi' 'Sunil K. Bhasin' 'Udaya K. Ammu' 'Alex Bergman'\n 'Allan Knies']" ]
null
null
2402.08975
null
null
http://arxiv.org/pdf/2402.08975v1
2024-02-14T06:39:54Z
2024-02-14T06:39:54Z
Research and application of Transformer based anomaly detection model: A literature review
Transformer, as one of the most advanced neural network models in Natural Language Processing (NLP), exhibits diverse applications in the field of anomaly detection. To inspire research on Transformer-based anomaly detection, this review offers a fresh perspective on the concept of anomaly detection. We explore the current challenges of anomaly detection and provide detailed insights into the operating principles of Transformer and its variants in anomaly detection tasks. Additionally, we delineate various application scenarios for Transformer-based anomaly detection models and discuss the datasets and evaluation metrics employed. Furthermore, this review highlights the key challenges in Transformer-based anomaly detection research and conducts a comprehensive analysis of future research trends in this domain. The review includes an extensive compilation of over 100 core references related to Transformer-based anomaly detection. To the best of our knowledge, this is the first comprehensive review that focuses on the research related to Transformer in the context of anomaly detection. We hope that this paper can provide detailed technical information to researchers interested in Transformer-based anomaly detection tasks.
[ "['Mingrui Ma' 'Lansheng Han' 'Chunjie Zhou']" ]
null
null
2402.08978
null
null
http://arxiv.org/pdf/2402.08978v1
2024-02-14T06:47:30Z
2024-02-14T06:47:30Z
Prismatic: Interactive Multi-View Cluster Analysis of Concept Stocks
Financial cluster analysis allows investors to discover investment alternatives and avoid undertaking excessive risks. However, this analytical task faces substantial challenges arising from many pairwise comparisons, the dynamic correlations across time spans, and the ambiguity in deriving implications from business relational knowledge. We propose Prismatic, a visual analytics system that integrates quantitative analysis of historical performance and qualitative analysis of business relational knowledge to cluster correlated businesses interactively. Prismatic features three clustering processes: dynamic cluster generation, knowledge-based cluster exploration, and correlation-based cluster validation. Utilizing a multi-view clustering approach, it enriches data-driven clusters with knowledge-driven similarity, providing a nuanced understanding of business correlations. Through well-coordinated visual views, Prismatic facilitates a comprehensive interpretation of intertwined quantitative and qualitative features, demonstrating its usefulness and effectiveness via case studies on formulating concept stocks and extensive interviews with domain experts.
[ "['Wong Kam-Kwai' 'Yan Luo' 'Xuanwu Yue' 'Wei Chen' 'Huamin Qu']" ]
null
null
2402.08979
null
null
http://arxiv.org/pdf/2402.08979v1
2024-02-14T06:49:23Z
2024-02-14T06:49:23Z
Learning-enabled Flexible Job-shop Scheduling for Scalable Smart Manufacturing
In smart manufacturing systems (SMSs), flexible job-shop scheduling with transportation constraints (FJSPT) is essential to optimize solutions for maximizing productivity, considering production flexibility based on automated guided vehicles (AGVs). Recent developments in deep reinforcement learning (DRL)-based methods for FJSPT have encountered a scale generalization challenge. These methods underperform when applied to environments at scales different from those of their training set, resulting in low-quality solutions. To address this, we introduce a novel graph-based DRL method, named the Heterogeneous Graph Scheduler (HGS). Our method leverages locally extracted relational knowledge among operations, machines, and vehicle nodes for scheduling, with a graph-structured decision-making framework that reduces encoding complexity and enhances scale generalization. Our performance evaluation, conducted with benchmark datasets, reveals that the proposed method outperforms traditional dispatching rules, meta-heuristics, and existing DRL-based approaches in terms of makespan performance, even on large-scale instances that have not been experienced during training.
[ "['Sihoon Moon' 'Sanghoon Lee' 'Kyung-Joon Park']" ]
null
null
2402.08982
null
null
http://arxiv.org/pdf/2402.08982v1
2024-02-14T06:51:49Z
2024-02-14T06:51:49Z
MEL: Efficient Multi-Task Evolutionary Learning for High-Dimensional Feature Selection
Feature selection is a crucial step in data mining to enhance model performance by reducing data dimensionality. However, the increasing dimensionality of collected data exacerbates the challenge known as the "curse of dimensionality", where computation grows exponentially with the number of dimensions. To tackle this issue, evolutionary computation (EC) approaches have gained popularity due to their simplicity and applicability. Unfortunately, the diverse designs of EC methods result in varying abilities to handle different data, often underutilizing and not sharing information effectively. In this paper, we propose a novel approach called PSO-based Multi-task Evolutionary Learning (MEL) that leverages multi-task learning to address these challenges. By incorporating information sharing between different feature selection tasks, MEL achieves enhanced learning ability and efficiency. We evaluate the effectiveness of MEL through extensive experiments on 22 high-dimensional datasets. Comparing against 24 EC approaches, our method exhibits strong competitiveness. Additionally, we have open-sourced our code on GitHub at https://github.com/wangxb96/MEL.
[ "['Xubin Wang' 'Haojiong Shangguan' 'Fengyi Huang' 'Shangrui Wu'\n 'Weijia Jia']" ]
null
null
2402.08991
null
null
http://arxiv.org/pdf/2402.08991v2
2024-02-15T04:30:09Z
2024-02-14T07:27:30Z
Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption
This study tackles the challenges of adversarial corruption in model-based reinforcement learning (RL), where the transition dynamics can be corrupted by an adversary. Existing studies on corruption-robust RL mostly focus on the setting of model-free RL, where robust least-square regression is often employed for value function estimation. However, these techniques cannot be directly applied to model-based RL. In this paper, we focus on model-based RL and take the maximum likelihood estimation (MLE) approach to learn the transition model. Our work encompasses both online and offline settings. In the online setting, we introduce an algorithm called corruption-robust optimistic MLE (CR-OMLE), which leverages total-variation (TV)-based information ratios as uncertainty weights for MLE. We prove that CR-OMLE achieves a regret of $\tilde{\mathcal{O}}(\sqrt{T} + C)$, where $C$ denotes the cumulative corruption level after $T$ episodes. We also prove a lower bound to show that the additive dependence on $C$ is optimal. We extend our weighting technique to the offline setting, and propose an algorithm named corruption-robust pessimistic MLE (CR-PMLE). Under a uniform coverage condition, CR-PMLE exhibits suboptimality worsened by $\mathcal{O}(C/n)$, nearly matching the lower bound. To the best of our knowledge, this is the first work on corruption-robust model-based RL algorithms with provable guarantees.
[ "['Chenlu Ye' 'Jiafan He' 'Quanquan Gu' 'Tong Zhang']" ]
null
null
2402.08992
null
null
http://arxiv.org/pdf/2402.08992v1
2024-02-14T07:34:22Z
2024-02-14T07:34:22Z
Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method
This paper proposes a stochastic proximal point method to solve a stochastic convex composite optimization problem. High probability results in stochastic optimization typically hinge on restrictive assumptions on the stochastic gradient noise, for example, sub-Gaussian distributions. Assuming only weak conditions such as bounded variance of the stochastic gradient, this paper establishes a low sample complexity to obtain a high probability guarantee on the convergence of the proposed method. Additionally, a notable aspect of this work is the development of a subroutine to solve the proximal subproblem, which also serves as a novel technique for variance reduction.
[ "['Jiaming Liang']" ]
null
null
2402.08998
null
null
http://arxiv.org/pdf/2402.08998v1
2024-02-14T07:52:00Z
2024-02-14T07:52:00Z
Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel, where an agent repeatedly interacts with a stochastic environment and seeks to reach a certain goal state while minimizing the cumulative cost. Existing works often assume a strictly positive lower bound of the cost function or an upper bound of the expected length for the optimal policy. In this paper, we propose a new algorithm to eliminate these restrictive assumptions. Our algorithm is based on extended value iteration with a fine-grained variance-aware confidence set, where the variance is estimated recursively from high-order moments. Our algorithm achieves an $\tilde{\mathcal{O}}(d B_* \sqrt{K})$ regret bound, where $d$ is the dimension of the feature mapping in the linear transition kernel, $B_*$ is the upper bound of the total cumulative cost for the optimal policy, and $K$ is the number of episodes. Our regret upper bound matches the $\Omega(d B_* \sqrt{K})$ lower bound of linear mixture SSPs in Min et al. (2022), which suggests that our algorithm is nearly minimax optimal.
[ "['Qiwei Di' 'Jiafan He' 'Dongruo Zhou' 'Quanquan Gu']" ]
null
null
2402.08999
null
null
http://arxiv.org/pdf/2402.08999v1
2024-02-14T07:52:28Z
2024-02-14T07:52:28Z
Exploring Federated Deep Learning for Standardising Naming Conventions in Radiotherapy Data
Standardising structure volume names in radiotherapy (RT) data is necessary to enable data mining and analyses, especially across multi-institutional centres. This process is time and resource intensive, which highlights the need for new automated and efficient approaches to handle the task. Several machine learning-based methods have been proposed and evaluated to standardise nomenclature. However, no studies have considered that RT patient records are distributed across multiple data centres. This paper introduces a method that emulates real-world environments to establish standardised nomenclature. This is achieved by integrating decentralised real-time data and federated learning (FL). A multimodal deep artificial neural network was proposed to standardise RT data in federated settings. Three types of possible attributes were extracted from the structures to train the deep learning models: tabular, visual, and volumetric. Simulated experiments were carried out to train the models across several scenarios including multiple data centres, input modalities, and aggregation strategies. The models were compared against models developed with single modalities in federated settings, in addition to models trained in centralised settings. Categorical classification accuracy was calculated on hold-out samples to assess the models' performance. Our results highlight the need for fusing multiple modalities when training such models, with better performance reported with tabular-volumetric models. In addition, we report comparable accuracy compared to models built in centralised settings. This demonstrates the suitability of FL for handling the standardisation task. Additional ablation analyses showed that the total number of samples in the data centres and the number of data centres highly affects the training process and should be carefully considered when building standardisation models.
[ "['Ali Haidar' 'Daniel Al Mouiee' 'Farhannah Aly' 'David Thwaites'\n 'Lois Holloway']" ]
null
null
2402.09004
null
null
http://arxiv.org/pdf/2402.09004v1
2024-02-14T08:17:21Z
2024-02-14T08:17:21Z
Gradient Alignment with Prototype Feature for Fully Test-time Adaptation
In the context of Test-time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance of the entropy minimization loss arising from misclassified pseudo labels. We developed a gradient alignment loss to precisely manage the adaptation process, ensuring that changes made for some data do not negatively impact the model's performance on other data. We introduce a prototype feature of a class as a proxy measure of the negative impact. To make the GAP regularizer feasible under the TTA constraints, where the model can only access test data without labels, we tailored its formula in two ways: approximating prototype features with the weight vectors of the classifier, and calculating the gradient without back-propagation. We demonstrate that GAP significantly improves TTA methods across various datasets, which proves its versatility and effectiveness.
[ "['Juhyeon Shin' 'Jonghyun Lee' 'Saehyung Lee' 'Minjun Park' 'Dongjun Lee'\n 'Uiwon Hwang' 'Sungroh Yoon']" ]
null
null
2402.09018
null
null
http://arxiv.org/pdf/2402.09018v1
2024-02-14T08:50:14Z
2024-02-14T08:50:14Z
Neural Operators Meet Energy-based Theory: Operator Learning for Hamiltonian and Dissipative PDEs
Operator learning has received significant attention in recent years, with the aim of learning a mapping between function spaces. Prior works have proposed deep neural networks (DNNs) for learning such a mapping, enabling the learning of solution operators of partial differential equations (PDEs). However, these works still struggle to learn dynamics that obey the laws of physics. This paper proposes Energy-consistent Neural Operators (ENOs), a general framework for learning solution operators of PDEs that follow the energy conservation or dissipation law from observed solution trajectories. We introduce a novel penalty function inspired by the energy-based theory of physics for training, in which the energy functional is modeled by another DNN, allowing one to bias the outputs of the DNN-based solution operators to ensure energetic consistency without explicit PDEs. Experiments on multiple physical systems show that ENO outperforms existing DNN models in predicting solutions from data, especially in super-resolution settings.
[ "['Yusuke Tanaka' 'Takaharu Yaguchi' 'Tomoharu Iwata' 'Naonori Ueda']" ]
null
null
2402.09025
null
null
http://arxiv.org/pdf/2402.09025v4
2024-06-12T01:40:12Z
2024-02-14T09:01:13Z
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks
Large language models (LLMs) have proven to be highly effective across various natural language processing tasks. However, their large number of parameters poses significant challenges for practical deployment. Pruning, a technique aimed at reducing the size and complexity of LLMs, offers a potential solution by removing redundant components from the network. Despite the promise of pruning, existing methods often struggle to achieve substantial end-to-end LLM inference speedup. In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks. We choose the transformer block as the fundamental unit for pruning, because LLMs exhibit block-level redundancy with high similarity between the outputs of neighboring blocks. This choice allows us to effectively enhance the processing speed of LLMs. Our experimental results demonstrate that SLEB outperforms previous LLM pruning methods in accelerating LLM inference while also maintaining superior perplexity and accuracy, making SLEB a promising technique for enhancing the efficiency of LLMs. The code is available at: https://github.com/jiwonsong-dev/SLEB.
[ "['Jiwon Song' 'Kyungseok Oh' 'Taesu Kim' 'Hyungjun Kim' 'Yulhwa Kim'\n 'Jae-Joon Kim']" ]
null
null
2402.09034
null
null
http://arxiv.org/pdf/2402.09034v1
2024-02-14T09:20:13Z
2024-02-14T09:20:13Z
Enhancing Sequential Model Performance with Squared Sigmoid TanH (SST) Activation Under Data Constraints
Activation functions enable neural networks to learn complex representations by introducing non-linearities. While feedforward models commonly use rectified linear units, sequential models like recurrent neural networks, long short-term memory (LSTMs) and gated recurrent units (GRUs) still rely on Sigmoid and TanH activation functions. However, these classical activation functions often struggle to model sparse patterns when trained on small sequential datasets to effectively capture temporal dependencies. To address this limitation, we propose squared Sigmoid TanH (SST) activation specifically tailored to enhance the learning capability of sequential models under data constraints. SST applies mathematical squaring to amplify differences between strong and weak activations as signals propagate over time, facilitating improved gradient flow and information filtering. We evaluate SST-powered LSTMs and GRUs for diverse applications, such as sign language recognition, regression, and time-series classification tasks, where the dataset is limited. Our experiments demonstrate that SST models consistently outperform RNN-based models with baseline activations, exhibiting improved test accuracy.
[ "['Barathi Subramanian' 'Rathinaraja Jeyaraj'\n 'Rakhmonov Akhrorjon Akhmadjon Ugli' 'Jeonghong Kim']" ]
null
null
2402.09043
null
null
http://arxiv.org/pdf/2402.09043v1
2024-02-14T09:38:09Z
2024-02-14T09:38:09Z
Under manipulations, are some AI models harder to audit?
Auditors need robust methods to assess the compliance of web platforms with the law. However, since they hardly ever have access to the algorithm, implementation, or training data used by a platform, the problem is harder than a simple metric estimation. Within the recent framework of manipulation-proof auditing, we study in this paper the feasibility of robust audits in realistic settings, in which models exhibit large capacities. We first prove a constraining result: if a web platform uses models that may fit any data, no audit strategy -- whether active or not -- can outperform random sampling when estimating properties such as demographic parity. To better understand the conditions under which state-of-the-art auditing techniques may remain competitive, we then relate the manipulability of audits to the capacity of the targeted models, using the Rademacher complexity. We empirically validate these results on popular models of increasing capacities, thus confirming experimentally that large-capacity models, which are commonly used in practice, are particularly hard to audit robustly. These results refine the limits of the auditing problem, and open up enticing questions on the connection between model capacity and the ability of platforms to manipulate audit attempts.
[ "['Augustin Godinot' 'Gilles Tredan' 'Erwan Le Merrer' 'Camilla Penzo'\n 'Francois Taïani']" ]
null
null
2402.09046
null
null
http://arxiv.org/pdf/2402.09046v1
2024-02-14T09:43:35Z
2024-02-14T09:43:35Z
Inference of Abstraction for a Unified Account of Reasoning and Learning
Inspired by Bayesian approaches to brain function in neuroscience, we give a simple theory of probabilistic inference for a unified account of reasoning and learning. We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic. The underlying idea is that reasoning is a process of deriving symbolic knowledge from data via abstraction, i.e., selective ignorance. The logical consequence relation is discussed for its proof-based theoretical correctness. The MNIST dataset is discussed for its experiment-based empirical correctness.
[ "['Hiroyuki Kido']" ]
null
null
2402.09050
null
null
http://arxiv.org/pdf/2402.09050v2
2024-05-31T08:15:06Z
2024-02-14T09:46:53Z
End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training
End-to-end (E2E) training, optimizing the entire model through error backpropagation, fundamentally supports the advancements of deep learning. Despite its high performance, E2E training faces the problems of memory consumption, parallel computing, and discrepancy with the functionalities of the actual brain. Various alternative methods have been proposed to overcome these difficulties; however, no one can yet match the performance of E2E training, thereby falling short in practicality. Furthermore, there is no deep understanding regarding differences in the trained model properties beyond the performance gap. In this paper, we reconsider why E2E training demonstrates a superior performance through a comparison with layer-wise training, a non-E2E method that locally sets errors. On the basis of the observation that E2E training has an advantage in propagating input information, we analyze the information plane dynamics of intermediate representations based on the Hilbert-Schmidt independence criterion (HSIC). The results of our normalized HSIC value analysis reveal the E2E training ability to exhibit different information dynamics across layers, in addition to efficient information propagation. Furthermore, we show that this layer-role differentiation leads to the final representation following the information bottleneck principle. It suggests the need to consider the cooperative interactions between layers, not just the final layer when analyzing the information bottleneck of deep learning.
[ "['Keitaro Sakamoto' 'Issei Sato']" ]
null
null
2402.09056
null
null
http://arxiv.org/pdf/2402.09056v2
2024-02-20T21:59:39Z
2024-02-14T10:07:05Z
Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?
Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty. Bayesian methods are commonly used to quantify both aleatoric and epistemic uncertainty, but alternative approaches, such as evidential deep learning methods, have become popular in recent years. The latter group of methods in essence extends empirical risk minimization (ERM) for predicting second-order probability distributions over outcomes, from which measures of epistemic (and aleatoric) uncertainty can be extracted. This paper presents novel theoretical insights into evidential deep learning, highlighting the difficulties in optimizing second-order loss functions and interpreting the resulting epistemic uncertainty measures. With a systematic setup that covers a wide range of approaches for classification, regression and counts, it provides novel insights into issues of identifiability and convergence in second-order loss minimization, and the relative (rather than absolute) nature of epistemic uncertainty measures.
[ "['Mira Jürgens' 'Nis Meinert' 'Viktor Bengs' 'Eyke Hüllermeier'\n 'Willem Waegeman']" ]
null
null
2402.09057
null
null
http://arxiv.org/abs/2402.09057v1
2024-02-14T10:08:24Z
2024-02-14T10:08:24Z
Distributed Sensing Along Fibres for Smart Clothing
Textile sensors transform our everyday clothing into a means to track movement and bio-signals in a completely unobtrusive way. One major hindrance to the adoption of "smart" clothing is the difficulty encountered with connections and space when scaling up the number of sensors. There is a lack of research addressing a key limitation in wearable electronics: connections between rigid and textile elements are often unreliable and they require interfacing sensors in a way incompatible with textile mass production methods. We introduce a prototype garment, compact readout circuit, and algorithm to measure localized strain along multiple regions of a fibre. We employ a helical auxetic yarn sensor with tunable sensitivity along its length to selectively respond to strain signals. We demonstrate distributed sensing in clothing, monitoring arm joint angles from a single continuous fibre. Compared to optical motion capture, we achieve around 5° error in reconstructing shoulder, elbow, and wrist joint angles.
[ "['Brett C. Hannigan' 'Tyler J. Cuthbert' 'Chakaveh Ahmadizadeh'\n 'Carlo Menon']" ]
null
null
2402.09059
null
null
http://arxiv.org/pdf/2402.09059v1
2024-02-14T10:15:43Z
2024-02-14T10:15:43Z
I can't see it but I can Fine-tune it: On Encrypted Fine-tuning of Transformers using Fully Homomorphic Encryption
In today's machine learning landscape, fine-tuning pretrained transformer models has emerged as an essential technique, particularly in scenarios where access to task-aligned training data is limited. However, challenges surface when data sharing encounters obstacles due to stringent privacy regulations or user apprehension regarding personal information disclosure. Earlier works based on secure multiparty computation (SMC) and fully homomorphic encryption (FHE) for privacy-preserving machine learning (PPML) focused more on privacy-preserving inference than privacy-preserving training. In response, we introduce BlindTuner, a privacy-preserving fine-tuning system that enables transformer training exclusively on homomorphically encrypted data for image classification. Our extensive experimentation validates BlindTuner's effectiveness by demonstrating comparable accuracy to non-encrypted models. Notably, our findings highlight a substantial speed enhancement of 1.5x to 600x over previous work in this domain.
[ "['Prajwal Panzade' 'Daniel Takabi' 'Zhipeng Cai']" ]