categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2403.14713 | null | null | http://arxiv.org/pdf/2403.14713v2 | 2024-04-25T02:56:14Z | 2024-03-18T21:09:06Z | Auditing Fairness under Unobserved Confounding | The presence of inequity is a fundamental problem in the outcomes of decision-making systems, especially when human lives are at stake. Yet, estimating notions of unfairness or inequity is difficult, particularly if they rely on hard-to-measure concepts such as risk. Such measurements of risk can be accurately obtained when no unobserved confounders have jointly influenced past decisions and outcomes. However, in the real world, this assumption rarely holds. In this paper, we show a surprising result that one can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed. We use the fact that in many real-world settings (e.g., the release of a new treatment) we have data from prior to any allocation to derive unbiased estimates of risk. This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner. For instance, in a real-world study of Paxlovid allocation, our framework provably identifies that observed racial inequity cannot be explained by unobserved confounders of the same strength as important observed covariates. | [Yewon Byun, Dylan Sam, Michael Oberst, Zachary C. Lipton, Bryan Wilder] |
null | null | 2403.14714 | null | null | http://arxiv.org/pdf/2403.14714v1 | 2024-03-18T23:25:13Z | 2024-03-18T23:25:13Z | Compiler generated feedback for Large Language Models | We introduce a novel paradigm in compiler optimization powered by Large Language Models with compiler feedback to optimize the code size of LLVM assembly. The model takes unoptimized LLVM IR as input and produces optimized IR, the best optimization passes, and instruction counts of both the unoptimized and optimized IRs. We then compile the input with the generated optimization passes and evaluate whether the predicted instruction count is correct, the generated IR is compilable, and the generated IR corresponds to the compiled code. We provide this feedback back to the LLM and give it another chance to optimize the code. This approach adds an extra 0.53% improvement over -Oz to the original model. Even though adding more information with feedback seems intuitive, simple sampling techniques achieve much higher performance given 10 or more samples. | [Dejan Grubisic, Chris Cummins, Volker Seeker, Hugh Leather] |
null | null | 2403.14715 | null | null | http://arxiv.org/pdf/2403.14715v1 | 2024-03-19T06:46:24Z | 2024-03-19T06:46:24Z | Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It | Label smoothing (LS) is a popular regularisation method for training deep neural network classifiers due to its effectiveness in improving test accuracy and its simplicity in implementation. "Hard" one-hot labels are "smoothed" by uniformly distributing probability mass to other classes, reducing overfitting. In this work, we reveal that LS negatively affects selective classification (SC) - where the aim is to reject misclassifications using a model's predictive uncertainty. We first demonstrate empirically across a range of tasks and architectures that LS leads to a consistent degradation in SC. We then explain this by analysing logit-level gradients, showing that LS exacerbates overconfidence and underconfidence by regularising the max logit more when the probability of error is low, and less when the probability of error is high. This elucidates previously reported experimental results where strong classifiers underperform in SC. We then demonstrate the empirical effectiveness of logit normalisation for recovering lost SC performance caused by LS. Furthermore, based on our gradient analysis, we explain why such normalisation is effective. We will release our code shortly. | [Guoxuan Xia, Olivier Laurent, Gianni Franchi, Christos-Savvas Bouganis] |
null | null | 2403.14716 | null | null | http://arxiv.org/pdf/2403.14716v1 | 2024-03-19T06:48:40Z | 2024-03-19T06:48:40Z | Distributed Learning based on 1-Bit Gradient Coding in the Presence of Stragglers | This paper considers the problem of distributed learning (DL) in the presence of stragglers. For this problem, DL methods based on gradient coding have been widely investigated; these redundantly distribute the training data to the workers to guarantee convergence when some workers are stragglers. However, these methods require the workers to transmit real-valued vectors during the learning process, which induces a very high communication burden. To overcome this drawback, we propose a novel DL method based on 1-bit gradient coding (1-bit GC-DL), where 1-bit data encoded from the locally computed gradients are transmitted by the workers to reduce the communication overhead. We theoretically provide convergence guarantees for the proposed method for both convex and nonconvex loss functions. It is shown empirically that 1-bit GC-DL outperforms the baseline methods, attaining better learning performance under the same communication overhead. | [Chengxi Li, Mikael Skoglund] |
null | null | 2403.14718 | null | null | http://arxiv.org/pdf/2403.14718v1 | 2024-03-19T09:34:01Z | 2024-03-19T09:34:01Z | FedSR: A Semi-Decentralized Federated Learning Algorithm for Non-IIDness in IoT System | In the Industrial Internet of Things (IoT), a large amount of data is generated every day. Due to privacy and security issues, it is difficult to collect all these data together to train deep learning models; thus federated learning, a distributed machine learning paradigm that protects data privacy, has been widely used in IoT. However, in practical federated learning, the data distributions usually have large differences across devices, and this heterogeneity of data deteriorates the performance of the model. Moreover, federated learning in IoT usually involves a large number of devices in training, and the limited communication resources of cloud servers become a bottleneck for training. To address the above issues, in this paper we combine centralized federated learning with decentralized federated learning to design a semi-decentralized cloud-edge-device hierarchical federated learning framework, which can mitigate the impact of data heterogeneity and can be deployed at large scale in IoT. To address the effect of data heterogeneity, we use an incremental subgradient optimization algorithm in each ring cluster to improve the generalization ability of the ring cluster models. Our extensive experiments show that our approach can effectively mitigate the impact of data heterogeneity and alleviate the communication bottleneck in cloud servers. | [Jianjun Huang, Lixin Ye, Li Kang] |
null | null | 2403.14719 | null | null | http://arxiv.org/pdf/2403.14719v1 | 2024-03-19T17:54:39Z | 2024-03-19T17:54:39Z | Bypassing LLM Watermarks with Color-Aware Substitutions | Watermarking approaches are proposed to identify whether text being circulated is human- or large language model (LLM)-generated. The state-of-the-art watermarking strategy of Kirchenbauer et al. (2023a) biases the LLM to generate specific ("green") tokens. However, determining the robustness of this watermarking method is an open problem. Existing attack methods fail to evade detection for longer text segments. We overcome this limitation and propose *Self Color Testing-based Substitution (SCTS)*, the first "color-aware" attack. SCTS obtains color information by strategically prompting the watermarked LLM and comparing output token frequencies. It uses this information to determine token colors, and substitutes green tokens with non-green ones. In our experiments, SCTS successfully evades watermark detection using fewer edits than related work. Additionally, we show both theoretically and empirically that SCTS can remove the watermark for arbitrarily long watermarked text. | [Qilong Wu, Varun Chandrasekaran] |
null | null | 2403.14720 | null | null | http://arxiv.org/pdf/2403.14720v1 | 2024-03-20T15:26:23Z | 2024-03-20T15:26:23Z | Defending Against Indirect Prompt Injection Attacks With Spotlighting | Large Language Models (LLMs), while powerful, are built and trained to process a single text input. In common applications, multiple inputs can be processed by concatenating them together into a single stream of text. However, the LLM is unable to distinguish which sections of the prompt belong to various input sources. Indirect prompt injection attacks take advantage of this vulnerability by embedding adversarial instructions into untrusted data being processed alongside user commands. Often, the LLM will mistake the adversarial instructions for user commands to be followed, creating a security vulnerability in the larger system. We introduce spotlighting, a family of prompt engineering techniques that can be used to improve LLMs' ability to distinguish among multiple sources of input. The key insight is to utilize transformations of an input to provide a reliable and continuous signal of its provenance. We evaluate spotlighting as a defense against indirect prompt injection attacks, and find that it is a robust defense that has minimal detrimental impact on underlying NLP tasks. Using GPT-family models, we find that spotlighting reduces the attack success rate from greater than 50% to below 2% in our experiments with minimal impact on task efficacy. | [Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman] |
null | null | 2403.14724 | null | null | http://arxiv.org/pdf/2403.14724v1 | 2024-03-20T20:41:26Z | 2024-03-20T20:41:26Z | Six Levels of Privacy: A Framework for Financial Synthetic Data | Synthetic Data is increasingly important in financial applications. In addition to the benefits it provides, such as improved financial modeling and better testing procedures, it poses privacy risks as well. Such data may arise from client information, business information, or other proprietary sources that must be protected. Even though the process by which Synthetic Data is generated serves to obscure the original data to some degree, the extent to which privacy is preserved is hard to assess. Accordingly, we introduce a hierarchy of "levels" of privacy that are useful for categorizing Synthetic Data generation methods and the progressively improved protections they offer. While the six levels were devised in the context of financial applications, they may be appropriate for other industries as well. Our paper includes a brief overview of Financial Synthetic Data, how it can be used, how its value can be assessed, privacy risks, and privacy attacks. We close with details of the "Six Levels", which include defenses against those attacks. | [Tucker Balch, Vamsi K. Potluru, Deepak Paramanand, Manuela Veloso] |
null | null | 2403.14725 | null | null | http://arxiv.org/pdf/2403.14725v2 | 2024-06-24T05:01:06Z | 2024-03-20T21:53:56Z | Testing the Limits of Jailbreaking Defenses with the Purple Problem | The rise of "jailbreak" attacks on language models has led to a flurry of defenses aimed at preventing undesirable responses. We critically examine the two stages of the defense pipeline: (i) defining what constitutes unsafe outputs, and (ii) enforcing the definition via methods such as input processing or fine-tuning. To test the efficacy of existing enforcement mechanisms, we consider a simple and well-specified definition of unsafe outputs--outputs that contain the word "purple". Surprisingly, existing fine-tuning and input defenses fail on this simple problem, casting doubt on whether enforcement algorithms can be robust for more complicated definitions. We find that real safety benchmarks similarly test enforcement for a fixed definition. We hope that future research can lead to effective/fast enforcement as well as high quality definitions used for enforcement and evaluation. | [Taeyoun Kim, Suhas Kotha, Aditi Raghunathan] |
null | null | 2403.14727 | null | null | http://arxiv.org/pdf/2403.14727v1 | 2024-03-21T00:21:38Z | 2024-03-21T00:21:38Z | Protected group bias and stereotypes in Large Language Models | As modern Large Language Models (LLMs) shatter many state-of-the-art benchmarks in a variety of domains, this paper investigates their behavior in the domains of ethics and fairness, focusing on protected group bias. We conduct a two-part study: first, we solicit sentence continuations describing the occupations of individuals from different protected groups, including gender, sexuality, religion, and race. Second, we have the model generate stories about individuals who hold different types of occupations. We collect >10k sentence completions made by a publicly available LLM, which we subject to human annotation. We find bias in model generations across minoritized groups, particularly in the domains of gender and sexuality, as well as Western bias. The model not only reflects societal biases, but appears to amplify them. The model is additionally overly cautious in replies to queries relating to minoritized groups, providing responses that strongly emphasize diversity and equity to an extent that other group characteristics are overshadowed. This suggests that artificially constraining potentially harmful outputs may itself lead to harm, and should be applied in a careful and controlled manner. | [Hadas Kotek, David Q. Sun, Zidi Xiu, Margit Bowler, Christopher Klein] |
null | null | 2403.14729 | null | null | http://arxiv.org/pdf/2403.14729v1 | 2024-03-21T02:33:37Z | 2024-03-21T02:33:37Z | Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch | Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging. To address this limitation, Only-Train-Once (OTO) and OTOv2 were proposed to eliminate the need for additional fine-tuning steps by directly training and compressing a general DNN from scratch. Nevertheless, the static design of optimizers (in OTO) can lead to convergence to local optima. In this paper, we propose Auto-Train-Once (ATO), an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs. During the model training phase, our approach not only trains the target model but also leverages a controller network as an architecture generator to guide the learning of target model weights. Furthermore, we develop a novel stochastic gradient algorithm that enhances the coordination between model training and controller network training, thereby improving pruning performance. We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures (including ResNet18, ResNet34, ResNet50, ResNet56, and MobileNetv2) on standard benchmark datasets (CIFAR-10, CIFAR-100, and ImageNet). | [Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, Yanfu Zhang, Xiaoqian Wang, Heng Huang] |
null | null | 2403.14731 | null | null | http://arxiv.org/pdf/2403.14731v1 | 2024-03-21T04:54:31Z | 2024-03-21T04:54:31Z | Reversible Jump Attack to Textual Classifiers with Modification Reduction | Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic hierarchical rules that are agnostic to the optimal adversarial examples, a strategy that often results in adversarial samples with a suboptimal balance between the magnitude of changes and attack success. To this end, in this research we propose two algorithms, Reversible Jump Attack (RJA) and Metropolis-Hastings Modification Reduction (MMR), to generate highly effective adversarial examples and to improve the imperceptibility of the examples, respectively. RJA utilizes a novel randomization mechanism to enlarge the search space and efficiently adapts the number of perturbed words for adversarial examples. With these generated adversarial examples, MMR applies the Metropolis-Hastings sampler to enhance the imperceptibility of adversarial examples. Extensive experiments demonstrate that RJA-MMR outperforms current state-of-the-art methods in attack performance, imperceptibility, fluency and grammar correctness. | [Mingze Ni, Zhensu Sun, Wei Liu] |
null | null | 2403.14733 | null | null | http://arxiv.org/pdf/2403.14733v1 | 2024-03-21T08:03:46Z | 2024-03-21T08:03:46Z | Open Knowledge Base Canonicalization with Multi-task Learning | The construction of large open knowledge bases (OKBs) is integral to many knowledge-driven applications on the world wide web such as web search. However, noun phrases and relational phrases in OKBs often suffer from redundancy and ambiguity, which calls for investigation into OKB canonicalization. Current solutions address OKB canonicalization by devising advanced clustering algorithms and using knowledge graph embedding (KGE) to further facilitate the canonicalization process. Nevertheless, these works fail to fully exploit the synergy between clustering and KGE learning, and the methods designed for these subtasks are sub-optimal. To this end, we put forward a multi-task learning framework, namely MulCanon, to tackle OKB canonicalization. In addition, a diffusion model is used in the soft clustering process to improve the noun phrase representations with neighboring information, which can lead to more accurate representations. MulCanon unifies the learning objectives of these sub-tasks, and adopts a two-stage multi-task learning paradigm for training. A thorough experimental study on popular OKB canonicalization benchmarks validates that MulCanon can achieve competitive canonicalization results. | [Bingchen Liu, Huang Peng, Weixin Zeng, Xiang Zhao, Shijun Liu, Li Pan] |
null | null | 2403.14735 | null | null | http://arxiv.org/abs/2403.14735v3 | 2024-06-18T08:10:07Z | 2024-03-21T10:08:37Z | Foundation Models for Time Series Analysis: A Tutorial and Survey | Time series analysis stands as a focal point within the data mining community, serving as a cornerstone for extracting valuable insights crucial to a myriad of real-world applications. Recent advances in Foundation Models (FMs) have fundamentally reshaped the paradigm of model design for time series analysis, boosting various downstream tasks in practice. These innovative approaches often leverage pre-trained or fine-tuned FMs to harness generalized knowledge tailored for time series analysis. This survey aims to furnish a comprehensive and up-to-date overview of FMs for time series analysis. While prior surveys have predominantly focused on either application or pipeline aspects of FMs in time series analysis, they have often lacked an in-depth understanding of the underlying mechanisms that elucidate why and how FMs benefit time series analysis. To address this gap, our survey adopts a methodology-centric classification, delineating various pivotal elements of time-series FMs, including model architectures, pre-training techniques, adaptation methods, and data modalities. Overall, this survey serves to consolidate the latest advancements in FMs pertinent to time series analysis, accentuating their theoretical underpinnings, recent strides in development, and avenues for future exploration. | [Yuxuan Liang, Haomin Wen, Yuqi Nie, Yushan Jiang, Ming Jin, Dongjin Song, Shirui Pan, Qingsong Wen] |
null | null | 2403.14736 | null | null | http://arxiv.org/pdf/2403.14736v2 | 2024-03-26T05:25:04Z | 2024-03-21T13:27:57Z | NaNa and MiGu: Semantic Data Augmentation Techniques to Enhance Protein Classification in Graph Neural Networks | Protein classification tasks are essential in drug discovery. Real-world protein structures are dynamic, which determines the properties of proteins. However, existing machine learning methods, like ProNet (Wang et al., 2022a), only access limited conformational characteristics and protein side-chain features, leading to impractical protein structures and inaccurate protein classes in their predictions. In this paper, we propose novel semantic data augmentation methods, Novel Augmentation of New Node Attributes (NaNa) and Molecular Interactions and Geometric Upgrading (MiGu), to incorporate backbone chemical and side-chain biophysical information into protein classification tasks, along with a co-embedding residual learning framework. Specifically, we leverage molecular biophysical, secondary structure, chemical bond, and ionic features of proteins to facilitate protein classification tasks. Furthermore, our semantic augmentation methods and the co-embedding residual learning framework can improve the performance of GIN (Xu et al., 2019) on the EC and Fold datasets (Bairoch, 2000; Andreeva et al., 2007) by 16.41% and 11.33%, respectively. Our code is available at https://github.com/r08b46009/Code_for_MIGU_NANA/tree/main. | [Yi-Shan Lan, Pin-Yu Chen, Tsung-Yi Ho] |
null | null | 2403.14737 | null | null | http://arxiv.org/pdf/2403.14737v1 | 2024-03-21T13:54:36Z | 2024-03-21T13:54:36Z | FedMef: Towards Memory-efficient Federated Dynamic Pruning | Federated learning (FL) promotes decentralized training while prioritizing data confidentiality. However, its application on resource-constrained devices is challenging due to the high demand for computation and memory resources to train deep learning models. Neural network pruning techniques, such as dynamic pruning, could enhance model efficiency, but directly adopting them in FL still poses substantial challenges, including post-pruning performance degradation, high activation memory usage, etc. To address these challenges, we propose FedMef, a novel and memory-efficient federated dynamic pruning framework. FedMef comprises two key components. First, we introduce the budget-aware extrusion that maintains pruning efficiency while preserving post-pruning performance by salvaging crucial information from parameters marked for pruning within a given budget. Second, we propose scaled activation pruning to effectively reduce activation memory footprints, which is particularly beneficial for deploying FL to memory-limited devices. Extensive experiments demonstrate the effectiveness of our proposed FedMef. In particular, it achieves a significant reduction of 28.5% in memory footprint compared to state-of-the-art methods while obtaining superior accuracy. | [Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu] |
null | null | 2403.14738 | null | null | http://arxiv.org/pdf/2403.14738v1 | 2024-03-21T14:26:29Z | 2024-03-21T14:26:29Z | A task of anomaly detection for a smart satellite Internet of things system | When equipment is working, real-time collection of environmental sensor data for anomaly detection is one of the key links in preventing industrial process accidents and network attacks and ensuring system security. However, in environments with specific real-time requirements, anomaly detection for environmental sensors still faces the following difficulties: (1) The complex nonlinear correlations between environmental sensor data variables lack effective expression methods, and the distribution of the data is difficult to capture. (2) It is difficult to meet real-time monitoring requirements using complex machine learning models, and the equipment cost is too high. (3) Too little sample data leads to scarce labeled data for supervised learning. This paper proposes an unsupervised deep learning anomaly detection system. Based on a generative adversarial network and a self-attention mechanism, and considering the different feature information contained in local subsequences, it automatically learns the complex linear and nonlinear dependencies between environmental sensor variables, and uses an anomaly score calculation method that combines reconstruction error and discrimination error. It can monitor abnormal points in real sensor data with high real-time performance and can run on an intelligent satellite Internet of things system, making it suitable for real working environments. The proposed anomaly detection outperforms baseline methods in most cases and has good interpretability, so it can be used to prevent industrial accidents and cyber-attacks in environmental sensor monitoring. | [Zilong Shao] |
null | null | 2403.14742 | null | null | http://arxiv.org/pdf/2403.14742v1 | 2024-03-21T18:00:00Z | 2024-03-21T18:00:00Z | A Classifier-Based Approach to Multi-Class Anomaly Detection for Astronomical Transients | Automating real-time anomaly detection is essential for identifying rare transients in the era of large-scale astronomical surveys. Modern survey telescopes are generating tens of thousands of alerts per night, and future telescopes, such as the Vera C. Rubin Observatory, are projected to increase this number dramatically. Currently, most anomaly detection algorithms for astronomical transients rely either on hand-crafted features extracted from light curves or on features generated through unsupervised representation learning, which are then coupled with standard machine learning anomaly detection algorithms. In this work, we introduce an alternative approach to detecting anomalies: using the penultimate layer of a neural network classifier as the latent space for anomaly detection. We then propose a novel method, named Multi-Class Isolation Forests (MCIF), which trains separate isolation forests for each class to derive an anomaly score for a light curve from the latent space representation given by the classifier. This approach significantly outperforms a standard isolation forest. We also use a simpler input method for real-time transient classifiers which circumvents the need for interpolation in light curves and helps the neural network model inter-passband relationships and handle irregular sampling. Our anomaly detection pipeline identifies rare classes including kilonovae, pair-instability supernovae, and intermediate luminosity transients shortly after trigger on simulated Zwicky Transient Facility light curves. Using a sample of our simulations that matched the population of anomalies expected in nature (54 anomalies and 12,040 common transients), our method was able to discover $41\pm3$ anomalies (~75% recall) after following up the top 2000 (~15%) ranked transients. Our novel method shows that classifiers can be effectively repurposed for real-time anomaly detection. | [Rithwik Gupta, Daniel Muthukrishna, Michelle Lochner] |
null | null | 2403.14753 | null | null | http://arxiv.org/pdf/2403.14753v1 | 2024-03-21T18:00:04Z | 2024-03-21T18:00:04Z | Learning with SASQuaTCh: a Novel Variational Quantum Transformer Architecture with Kernel-Based Self-Attention | The widely popular transformer network popularized by the generative pre-trained transformer (GPT) has a large field of applicability, including predicting text and images, classification, and even predicting solutions to the dynamics of physical systems. In the latter context, the continuous analog of the self-attention mechanism at the heart of transformer networks has been applied to learning the solutions of partial differential equations and reveals a convolution kernel nature that can be exploited by the Fourier transform. It is well known that many quantum algorithms that have provably demonstrated a speedup over classical algorithms utilize the quantum Fourier transform. In this work, we explore quantum circuits that can efficiently express a self-attention mechanism through the perspective of kernel-based operator learning. In this perspective, we are able to represent deep layers of a vision transformer network using simple gate operations and a set of multi-dimensional quantum Fourier transforms. We analyze the computational and parameter complexity of our novel variational quantum circuit, which we call Self-Attention Sequential Quantum Transformer Channel (SASQuaTCh), and demonstrate its utility on simplified classification problems. | [Ethan N. Evans, Matthew Cook, Zachary P. Bradshaw, Margarite L. LaBorde] |
null | null | 2403.14763 | null | null | http://arxiv.org/pdf/2403.14763v1 | 2024-03-21T18:07:32Z | 2024-03-21T18:07:32Z | Gravitational Duals from Equations of State | Holography relates gravitational theories in five dimensions to four-dimensional quantum field theories in flat space. Under this map, the equation of state of the field theory is encoded in the black hole solutions of the gravitational theory. Solving the five-dimensional Einstein equations to determine the equation of state is an algorithmic, direct problem. Determining the gravitational theory that gives rise to a prescribed equation of state is a much more challenging, inverse problem. We present a novel approach to solving this problem based on physics-informed neural networks. The resulting algorithm is not only data-driven but also informed by the physics of the Einstein equations. We successfully apply it to theories with crossovers, first- and second-order phase transitions. | [Yago Bea, Raul Jimenez, David Mateos, Shuheng Liu, Pavlos Protopapas, Pedro Tarancón-Álvarez, Pablo Tejerina-Pérez] |
null | null | 2403.14772 | null | null | http://arxiv.org/pdf/2403.14772v1 | 2024-03-21T18:26:23Z | 2024-03-21T18:26:23Z | Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures | Recent model inversion attack algorithms permit adversaries to reconstruct a neural network's private training data just by repeatedly querying the network and inspecting its outputs. In this work, we develop a novel network architecture that leverages sparse-coding layers to obtain superior robustness to this class of attacks. Three decades of computer science research have studied sparse coding in the context of image denoising, object recognition, and adversarial misclassification settings, but to the best of our knowledge, its connection to state-of-the-art privacy vulnerabilities remains unstudied. However, sparse coding architectures suggest an advantageous means to defend against model inversion attacks because they allow us to control the amount of irrelevant private information encoded in a network's intermediate representations in a manner that can be computed efficiently during training and that is known to have little effect on classification accuracy. Specifically, compared to networks trained with a variety of state-of-the-art defenses, our sparse-coding architectures maintain comparable or higher classification accuracy while degrading state-of-the-art training data reconstructions by factors of 1.1 to 18.3 across a variety of reconstruction quality metrics (PSNR, SSIM, FID). This performance advantage holds across 5 datasets ranging from CelebA faces to medical images and CIFAR-10, and across various state-of-the-art SGD-based and GAN-based inversion attacks, including Plug-&-Play attacks. We provide a cluster-ready PyTorch codebase to promote research and standardize defense evaluations. | [Sayanton V. Dibbo, Adam Breuer, Juston Moore, Michael Teti] |
null | null | 2403.14773 | null | null | http://arxiv.org/pdf/2403.14773v1 | 2024-03-21T18:27:29Z | 2024-03-21T18:27:29Z | StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text | Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, making it easy to create diverse and individual content. However, existing approaches mostly focus on high-quality short video generation (typically 16 or 24 frames), ending up with hard cuts when naively extended to the case of long video synthesis. To overcome these limitations, we introduce StreamingT2V, an autoregressive approach for long video generation of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are: (i) a short-term memory block called conditional attention module (CAM), which conditions the current generation on the features extracted from the previous chunk via an attentional mechanism, leading to consistent chunk transitions, (ii) a long-term memory block called appearance preservation module, which extracts high-level scene and object features from the first video chunk to prevent the model from forgetting the initial scene, and (iii) a randomized blending approach that enables applying a video enhancer autoregressively for infinitely long videos without inconsistencies between chunks. Experiments show that StreamingT2V generates a high amount of motion. In contrast, all competing image-to-video methods are prone to video stagnation when applied naively in an autoregressive manner. Thus, we propose with StreamingT2V a high-quality seamless text-to-long-video generator that outperforms competitors in consistency and motion. Our code will be available at: https://github.com/Picsart-AI-Research/StreamingT2V | [Roberto Henschel, Levon Khachatryan, Daniil Hayrapetyan, Hayk Poghosyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, Humphrey Shi] |
null | null | 2403.14774 | null | null | http://arxiv.org/pdf/2403.14774v1 | 2024-03-21T18:28:43Z | 2024-03-21T18:28:43Z | Few-Shot Adversarial Prompt Learning on Vision-Language Models | The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention. Inspired by the success of vision-language foundation models, previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision. However, in practice, they are still unsatisfactory due to several issues, including heavy adaptation cost, suboptimal text supervision, and uncontrolled natural generalization capacity. In this paper, to address these issues, we propose a few-shot adversarial prompt framework where adapting input sequences with limited data yields significant adversarial robustness improvements. Specifically, we achieve this by providing adversarially correlated text supervision that is end-to-end learned from adversarial examples. We also propose a novel training objective that enhances the consistency of multi-modal features while encouraging differentiated uni-modal features between natural and adversarial examples. The proposed framework enables learning adversarial text supervision, which provides superior cross-modal adversarial alignment and matches state-of-the-art zero-shot adversarial robustness with only 1% training data. | [Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu] |
null | null | 2403.14783 | null | null | http://arxiv.org/pdf/2403.14783v1 | 2024-03-21T18:57:25Z | 2024-03-21T18:57:25Z | Multi-Agent VQA: Exploring Multi-Agent Foundation Models in Zero-Shot Visual Question Answering | This work explores the zero-shot capabilities of foundation models in Visual Question Answering (VQA) tasks. We propose an adaptive multi-agent system, named Multi-Agent VQA, to overcome the limitations of foundation models in object detection and counting by using specialized agents as tools. Unlike existing approaches, our study focuses on the system's performance without fine-tuning it on specific VQA datasets, making it more practical and robust in the open world. We present preliminary experimental results under zero-shot scenarios and highlight some failure cases, offering new directions for future research. | [Bowen Jiang, Zhijun Zhuang, Shreyas S. Shivakumar, Dan Roth, Camillo J. Taylor] |
null | null | 2403.14797 | null | null | http://arxiv.org/pdf/2403.14797v2 | 2024-07-15T12:59:02Z | 2024-03-21T19:20:29Z | Preventing Catastrophic Forgetting through Memory Networks in Continuous Detection | Modern pre-trained architectures struggle to retain previous information while undergoing continuous fine-tuning on new tasks. Despite notable progress in continual classification, systems designed for complex vision tasks such as detection or segmentation still struggle to attain satisfactory performance. In this work, we introduce a memory-based detection transformer architecture to adapt a pre-trained DETR-style detector to new tasks while preserving knowledge from previous tasks. We propose a novel localized query function for efficient information retrieval from memory units, aiming to minimize forgetting. Furthermore, we identify a fundamental challenge in continual detection referred to as background relegation. This arises when object categories from earlier tasks reappear in future tasks, potentially without labels, leading them to be implicitly treated as background. This is an inevitable issue in continual detection or segmentation. The introduced continual optimization technique effectively tackles this challenge. Finally, we assess the performance of our proposed system on continual detection benchmarks and demonstrate that our approach surpasses the performance of existing state-of-the-art methods, resulting in 5-7% improvements on MS-COCO and PASCAL-VOC on the task of continual detection. | [Gaurav Bhatt, James Ross, Leonid Sigal] |
null | null | 2403.14800 | null | null | http://arxiv.org/pdf/2403.14800v1 | 2024-03-21T19:28:17Z | 2024-03-21T19:28:17Z | Deep Active Learning: A Reality Check | We conduct a comprehensive evaluation of state-of-the-art deep active learning methods. Surprisingly, under general settings, no single-model method decisively outperforms entropy-based active learning, and some even fall short of random sampling. We delve into overlooked aspects like starting budget, budget step, and pretraining's impact, revealing their significance in achieving superior results. Additionally, we extend our evaluation to other tasks, exploring active learning's effectiveness in combination with semi-supervised learning and object detection. Our experiments provide valuable insights and concrete recommendations for future active learning studies. By uncovering the limitations of current methods and understanding the impact of different experimental settings, we aim to inspire more efficient training of deep learning models in real-world scenarios with limited annotation budgets. This work contributes to advancing active learning's efficacy in deep learning and empowers researchers to make informed decisions when applying active learning to their tasks. | [Edrina Gashi, Jiankang Deng, Ismail Elezi] |
null | null | 2403.14813 | null | null | http://arxiv.org/pdf/2403.14813v1 | 2024-03-21T19:59:07Z | 2024-03-21T19:59:07Z | Curvature Augmented Manifold Embedding and Learning | A new dimensional reduction (DR) and data visualization method, Curvature-Augmented Manifold Embedding and Learning (CAMEL), is proposed. The key novel contribution is to formulate the DR problem as a mechanistic/physics model, where the force field among nodes (data points) is used to find an n-dimensional manifold representation of the data sets. Compared with many existing attractive-repulsive force-based methods, one unique contribution of the proposed method is to include a non-pairwise force. A new force field model is introduced and discussed, inspired by the multi-body potential in lattice-particle physics and Riemann curvature in topology. A curvature-augmented force is included in CAMEL. Following this, CAMEL formulations for unsupervised learning, supervised learning, semi-supervised learning/metric learning, and inverse learning are provided. Next, CAMEL is applied to many benchmark datasets and compared with existing models, such as tSNE, UMAP, TRIMAP, and PacMap. Both visual comparison and metrics-based evaluation are performed, employing 14 metrics from the open literature and self-proposed ones for a comprehensive comparison. Conclusions and future work are suggested based on the current investigation. Related code and demonstrations are available at https://github.com/ymlasu/CAMEL for interested readers to reproduce the results and other applications. | [Yongming Liu] |
null | null | 2403.14814 | null | null | http://arxiv.org/pdf/2403.14814v2 | 2024-03-26T18:10:10Z | 2024-03-21T19:59:52Z | The opportunities and risks of large language models in mental health | Global rates of mental health concerns are rising, and there is increasing realization that existing models of mental healthcare will not adequately expand to meet the demand. With the emergence of large language models (LLMs) has come great optimism regarding their promise to create novel, large-scale solutions to support mental health. Despite their nascence, LLMs have already been applied to mental health-related tasks. In this review, we summarize the extant literature on efforts to use LLMs to provide mental health education, assessment, and intervention, and highlight key opportunities for positive impact in each area. We then highlight risks associated with the application of LLMs to mental health and encourage the adoption of strategies to mitigate these risks. The urgent need for mental health support must be balanced with responsible development, testing, and deployment of mental health LLMs. Especially critical is ensuring that mental health LLMs are fine-tuned for mental health, enhance mental health equity, adhere to ethical standards, and that people, including those with lived experience of mental health concerns, are involved in all stages from development through deployment. Prioritizing these efforts will minimize potential harms to mental health and maximize the likelihood that LLMs will positively impact mental health globally. | [Hannah R. Lawrence, Renee A. Schneider, Susan B. Rubin, Maja J. Mataric, Daniel J. McDuff, Megan Jones Bell] |
null | null | 2403.14822 | null | null | http://arxiv.org/pdf/2403.14822v1 | 2024-03-21T20:29:43Z | 2024-03-21T20:29:43Z | Non-Convex Robust Hypothesis Testing using Sinkhorn Uncertainty Sets | We present a new framework to address the non-convex robust hypothesis testing problem, wherein the goal is to seek the optimal detector that minimizes the maximum of the worst-case type-I and type-II risk functions. The distributional uncertainty sets are constructed to center around the empirical distribution derived from samples based on Sinkhorn discrepancy. Given that the objective involves non-convex, non-smooth probabilistic functions that are often intractable to optimize, existing methods resort to approximations rather than exact solutions. To tackle the challenge, we introduce an exact mixed-integer exponential conic reformulation of the problem, which can be solved to a global optimum with a moderate amount of input data. Subsequently, we propose a convex approximation, demonstrating its superiority over current state-of-the-art methodologies in the literature. Furthermore, we establish connections between robust hypothesis testing and regularized formulations of non-robust risk functions, offering insightful interpretations. Our numerical study highlights the satisfactory testing performance and computational efficiency of the proposed framework. | [Jie Wang, Rui Gao, Yao Xie] |
null | null | 2403.14829 | null | null | http://arxiv.org/abs/2403.14829v1 | 2024-03-21T20:43:34Z | 2024-03-21T20:43:34Z | Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection | Multiple Instance Learning (MIL) is a weakly supervised paradigm that has been successfully applied to many different scientific areas and is particularly well suited to medical imaging. Probabilistic MIL methods, and more specifically Gaussian Processes (GPs), have achieved excellent results due to their high expressiveness and uncertainty quantification capabilities. One of the most successful GP-based MIL methods, VGPMIL, resorts to a variational bound to handle the intractability of the logistic function. Here, we formulate VGPMIL using Pólya-Gamma random variables. This approach yields the same variational posterior approximations as the original VGPMIL, which is a consequence of the two representations that the Hyperbolic Secant distribution admits. This leads us to propose a general GP-based MIL method that takes different forms by simply leveraging distributions other than the Hyperbolic Secant one. Using the Gamma distribution we arrive at a new approach that obtains competitive or superior predictive performance and efficiency. This is validated in a comprehensive experimental study including one synthetic MIL dataset, two well-known MIL benchmarks, and a real-world medical problem. We expect that this work provides useful ideas beyond MIL that can foster further research in the field. | [F. M. Castro-Macías, P. Morales-Álvarez, Y. Wu, R. Molina, A. K. Katsaggelos] |
null | null | 2403.14830 | null | null | http://arxiv.org/pdf/2403.14830v1 | 2024-03-21T20:43:44Z | 2024-03-21T20:43:44Z | Deep Clustering Evaluation: How to Validate Internal Clustering Validation Measures | Deep clustering, a method for partitioning complex, high-dimensional data using deep neural networks, presents unique evaluation challenges. Traditional clustering validation measures, designed for low-dimensional spaces, are problematic for deep clustering, which involves projecting data into lower-dimensional embeddings before partitioning. Two key issues are identified: 1) the curse of dimensionality when applying these measures to raw data, and 2) the unreliable comparison of clustering results across different embedding spaces stemming from variations in training procedures and parameter settings in different clustering models. This paper addresses these challenges in evaluating clustering quality in deep learning. We present a theoretical framework to highlight ineffectiveness arising from using internal validation measures on raw and embedded data and propose a systematic approach to applying clustering validity indices in deep clustering contexts. Experiments show that this framework aligns better with external validation measures, effectively reducing the misguidance from the improper use of clustering validity indices in deep learning. | [Zeya Wang, Chenglong Ye] |
null | null | 2403.14833 | null | null | http://arxiv.org/pdf/2403.14833v1 | 2024-03-21T21:05:59Z | 2024-03-21T21:05:59Z | Model order reduction of deep structured state-space models: A system-theoretic approach | With a specific emphasis on control design objectives, achieving accurate system modeling with limited complexity is crucial in parametric system identification. The recently introduced deep structured state-space models (SSM), which feature linear dynamical blocks as key constituent components, offer high predictive performance. However, the learned representations often suffer from excessively large model orders, which render them unsuitable for control design purposes. The current paper addresses this challenge by means of system-theoretic model order reduction techniques that target the linear dynamical blocks of SSMs. We introduce two regularization terms which can be incorporated into the training loss for improved model order reduction. In particular, we consider modal $\ell_1$ and Hankel nuclear norm regularization to promote sparsity, allowing one to retain only the relevant states without sacrificing accuracy. The presented regularizers lead to advantages in terms of parsimonious representations and faster inference resulting from the reduced order models. The effectiveness of the proposed methodology is demonstrated using real-world ground vibration data from an aircraft. | [Marco Forgione, Manas Mejari, Dario Piga] |
] |
null | null |
2403.14843
| null | null |
http://arxiv.org/pdf/2403.14843v1
|
2024-03-21T21:27:39Z
|
2024-03-21T21:27:39Z
|
Local Causal Discovery with Linear non-Gaussian Cyclic Models
|
Local causal discovery is of great practical significance, as there are often situations where the discovery of the global causal structure is unnecessary, and the interest lies solely on a single target variable. Most existing local methods utilize conditional independence relations, providing only a partially directed graph, and assume acyclicity for the ground-truth structure, even though real-world scenarios often involve cycles like feedback mechanisms. In this work, we present a general, unified local causal discovery method with linear non-Gaussian models, whether they are cyclic or acyclic. We extend the application of independent component analysis from the global context to independent subspace analysis, enabling the exact identification of the equivalent local directed structures and causal strengths from the Markov blanket of the target variable. We also propose an alternative regression-based method in the particular acyclic scenarios. Our identifiability results are empirically validated using both synthetic and real-world datasets.
|
[
"['Haoyue Dai' 'Ignavier Ng' 'Yujia Zheng' 'Zhengqing Gao' 'Kun Zhang']"
] |
null | null | 2403.14848 | null | null | http://arxiv.org/pdf/2403.14848v1 | 2024-03-21T21:39:05Z | 2024-03-21T21:39:05Z | Learning WENO for entropy stable schemes to solve conservation laws | Entropy conditions play a crucial role in the extraction of a physically relevant solution for a system of conservation laws, thus motivating the construction of entropy stable schemes that satisfy a discrete analogue of such conditions. TeCNO schemes (Fjordholm et al. 2012) form a class of arbitrary high-order entropy stable finite difference solvers, which require specialized reconstruction algorithms satisfying the sign property at each cell interface. Recently, third-order WENO schemes called SP-WENO (Fjordholm and Ray, 2016) and SP-WENOc (Ray, 2018) have been designed to satisfy the sign property. However, these WENO algorithms can perform poorly near shocks, with the numerical solutions exhibiting large spurious oscillations. In the present work, we propose a variant of the SP-WENO, termed Deep Sign-Preserving WENO (DSP-WENO), where a neural network is trained to learn the WENO weighting strategy. The sign property and third-order accuracy are strongly imposed in the algorithm, which constrains the WENO weight selection region to a convex polygon. Thereafter, a neural network is trained to select the WENO weights from this convex region with the goal of improving the shock-capturing capabilities without sacrificing the rate of convergence in smooth regions. The proposed synergistic approach retains the mathematical framework of the TeCNO scheme while integrating deep learning to remedy the computational issues of the WENO-based reconstruction. We present several numerical experiments to demonstrate the significant improvement with DSP-WENO over the existing variants of WENO satisfying the sign property. | [Philip Charles, Deep Ray] |
null | null | 2403.14849 | null | null | http://arxiv.org/pdf/2403.14849v1 | 2024-03-21T21:51:36Z | 2024-03-21T21:51:36Z | Output-Constrained Lossy Source Coding With Application to Rate-Distortion-Perception Theory | The distortion-rate function of output-constrained lossy source coding with limited common randomness is analyzed for the special case of squared error distortion measure. An explicit expression is obtained when both source and reconstruction distributions are Gaussian. This further leads to a partial characterization of the information-theoretic limit of quadratic Gaussian rate-distortion-perception coding with the perception measure given by Kullback-Leibler divergence or squared quadratic Wasserstein distance. | [Li Xie, Liangyan Li, Jun Chen, Zhongshan Zhang] |
null | null | 2403.14853 | null | null | http://arxiv.org/pdf/2403.14853v1 | 2024-03-21T21:56:44Z | 2024-03-21T21:56:44Z | iSpLib: A Library for Accelerating Graph Neural Networks using Auto-tuned Sparse Operations | Core computations in Graph Neural Network (GNN) training and inference are often mapped to sparse matrix operations such as sparse-dense matrix multiplication (SpMM). These sparse operations are harder to optimize by manual tuning because their performance depends significantly on the sparsity of input graphs, GNN models, and computing platforms. To address this challenge, we present iSpLib, a PyTorch-based C++ library equipped with auto-tuned sparse operations. iSpLib expedites GNN training with a cache-enabled backpropagation that stores intermediate matrices in local caches. The library offers a user-friendly Python plug-in that allows users to take advantage of our optimized PyTorch operations out-of-the-box for any existing linear algebra-based PyTorch implementation of popular GNNs (Graph Convolution Network, GraphSAGE, Graph Inference Network, etc.) with only two lines of additional code. We demonstrate that iSpLib obtains up to 27x overall training speedup compared to the equivalent PyTorch 2.1.0 and PyTorch Geometric 2.4.0 implementations on the CPU. Our library is publicly available at https://github.com/HipGraph/iSpLib (https://doi.org/10.5281/zenodo.10806511). | [Md Saidul Hoque Anik, Pranav Badhe, Rohit Gampa, Ariful Azad] |
null | null | 2403.14860 | null | null | http://arxiv.org/pdf/2403.14860v1 | 2024-03-21T22:15:09Z | 2024-03-21T22:15:09Z | Robust Model Based Reinforcement Learning Using $\mathcal{L}_1$ Adaptive Control | We introduce $\mathcal{L}_1$-MBRL, a control-theoretic augmentation scheme for Model-Based Reinforcement Learning (MBRL) algorithms. Unlike model-free approaches, MBRL algorithms learn a model of the transition function using data and use it to design a control input. Our approach generates a series of approximate control-affine models of the learned transition function according to the proposed switching law. Using the approximate model, the control input produced by the underlying MBRL is perturbed by the $\mathcal{L}_1$ adaptive control, which is designed to enhance the robustness of the system against uncertainties. Importantly, this approach is agnostic to the choice of MBRL algorithm, enabling the use of the scheme with various MBRL algorithms. MBRL algorithms with $\mathcal{L}_1$ augmentation exhibit enhanced performance and sample efficiency across multiple MuJoCo environments, outperforming the original MBRL algorithms, both with and without system noise. | [Minjun Sung, Sambhu H. Karumanchi, Aditya Gahlawat, Naira Hovakimyan] |
null | null |
2403.14863
| null | null |
http://arxiv.org/pdf/2403.14863v1
|
2024-03-21T22:18:25Z
|
2024-03-21T22:18:25Z
|
Distribution-informed and wavelength-flexible data-driven photoacoustic
oximetry
|
Significance: Photoacoustic imaging (PAI) promises to measure spatially-resolved blood oxygen saturation, but suffers from a lack of accurate and robust spectral unmixing methods to deliver on this promise. Accurate blood oxygenation estimation could have important clinical applications, from cancer detection to quantifying inflammation. Aim: This study addresses the inflexibility of existing data-driven methods for estimating blood oxygenation in PAI by introducing a recurrent neural network architecture. Approach: We created 25 simulated training dataset variations to assess neural network performance. We used a long short-term memory network to implement a wavelength-flexible network architecture and proposed the Jensen-Shannon divergence to predict the most suitable training dataset. Results: The network architecture can handle arbitrary input wavelengths and outperforms linear unmixing and the previously proposed learned spectral decolouring method. Small changes in the training data significantly affect the accuracy of our method, but we find that the Jensen-Shannon divergence correlates with the estimation error and is thus suitable for predicting the most appropriate training datasets for any given application. Conclusions: A flexible data-driven network architecture combined with the Jensen-Shannon Divergence to predict the best training data set provides a promising direction that might enable robust data-driven photoacoustic oximetry for clinical use cases.
|
[
"['Janek Gröhl' 'Kylie Yeung' 'Kevin Gu' 'Thomas R. Else' 'Monika Golinska'\n 'Ellie V. Bunce' 'Lina Hacker' 'Sarah E. Bohndiek']"
] |
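The dataset-selection step described in the abstract above reduces to a simple computation: compare the spectral distribution of each candidate training set with that of the target data via the Jensen-Shannon divergence and pick the closest match. A minimal sketch using SciPy; the histograms and dataset names here are illustrative, not from the paper:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_divergence(p, q):
    """Jensen-Shannon divergence between two normalized histograms.

    SciPy's jensenshannon returns the JS *distance* (square root of the
    divergence), so we square it to recover the divergence itself.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return jensenshannon(p, q) ** 2

# Rank candidate training sets by spectral similarity to the test data.
# `train_hists` maps a hypothetical dataset name to a spectral histogram.
test_hist = np.array([0.1, 0.3, 0.4, 0.2])
train_hists = {"sim_A": np.array([0.1, 0.25, 0.45, 0.2]),
               "sim_B": np.array([0.4, 0.3, 0.2, 0.1])}
best = min(train_hists, key=lambda k: js_divergence(train_hists[k], test_hist))
print(best)  # the training set whose distribution best matches the test data
```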
null | null |
2403.14870
| null | null |
http://arxiv.org/pdf/2403.14870v1
|
2024-03-21T22:36:24Z
|
2024-03-21T22:36:24Z
|
VidLA: Video-Language Alignment at Scale
|
In this paper, we propose VidLA, an approach for video-language alignment at scale. There are two major limitations of previous video-language alignment approaches. First, they do not capture both short-range and long-range temporal dependencies and typically employ complex hierarchical deep network architectures that are hard to integrate with existing pretrained image-text foundation models. To effectively address this limitation, we instead keep the network architecture simple and use a set of data tokens that operate at different temporal resolutions in a hierarchical manner, accounting for the temporally hierarchical nature of videos. By employing a simple two-tower architecture, we are able to initialize our video-language model with pretrained image-text foundation models, thereby boosting the final performance. Second, existing video-language alignment works struggle due to the lack of semantically aligned large-scale training data. To overcome this limitation, we leverage recent LLMs to curate the largest video-language dataset to date with better visual grounding. Furthermore, unlike existing video-text datasets which only contain short clips, our dataset is enriched with video clips of varying durations to aid our temporally hierarchical data tokens in extracting better representations at varying temporal scales. Overall, empirical results show that our proposed approach surpasses state-of-the-art methods on multiple retrieval benchmarks, especially on longer videos, and performs competitively on classification benchmarks.
|
[
"['Mamshad Nayeem Rizve' 'Fan Fei' 'Jayakrishnan Unnikrishnan' 'Son Tran'\n 'Benjamin Z. Yao' 'Belinda Zeng' 'Mubarak Shah' 'Trishul Chilimbi']"
] |
null | null |
2403.14874
| null | null |
http://arxiv.org/pdf/2403.14874v2
|
2024-05-07T21:25:06Z
|
2024-03-21T22:46:27Z
|
WeatherProof: Leveraging Language Guidance for Semantic Segmentation in
Adverse Weather
|
We propose a method to infer semantic segmentation maps from images captured under adverse weather conditions. We begin by examining existing models on images degraded by weather conditions such as rain, fog, or snow, and find that they exhibit a large performance drop compared to their performance on images captured under clear weather. To control for changes in scene structures, we propose WeatherProof, the first semantic segmentation dataset with accurate clear and adverse weather image pairs that share an underlying scene. Through this dataset, we analyze the error modes in existing models and find that they are sensitive to the highly complex combination of different weather effects induced on the image during capture. To improve robustness, we propose a way to use language as guidance by identifying contributions of adverse weather conditions and injecting that as "side information". Models trained using our language guidance exhibit performance gains of up to 10.2% in mIoU on WeatherProof, up to 8.44% in mIoU on the widely used ACDC dataset compared to standard training techniques, and up to 6.21% in mIoU on the ACDC dataset compared to previous SOTA methods.
|
[
"['Blake Gella' 'Howard Zhang' 'Rishi Upadhyay' 'Tiffany Chang'\n 'Nathan Wei' 'Matthew Waliman' 'Yunhao Ba' 'Celso de Melo' 'Alex Wong'\n 'Achuta Kadambi']"
] |
null | null |
2403.14898
| null | null |
http://arxiv.org/pdf/2403.14898v1
|
2024-03-22T01:04:51Z
|
2024-03-22T01:04:51Z
|
Web-based Melanoma Detection
|
Melanoma is the most aggressive form of skin cancer, and early detection can significantly increase survival rates and prevent cancer spread. However, developing reliable automated detection techniques is difficult due to the lack of standardized datasets and evaluation methods. This study introduces a unified melanoma classification approach that supports 54 combinations of 11 datasets and 24 state-of-the-art deep learning architectures. It enables a fair comparison of 1,296 experiments and results in a lightweight, web-deployable model based on the MeshNet architecture, named Mela-D. This model runs up to 33x faster, with 24x fewer parameters, while maintaining an accuracy of 88.8%, comparable to ResNet50, on previously unseen images. This enables efficient and accurate melanoma detection in real-world settings on consumer-level hardware.
|
[
"['SangHyuk Kim' 'Edward Gaibor' 'Daniel Haehn']"
] |
null | null |
2403.14902
| null | null |
http://arxiv.org/pdf/2403.14902v1
|
2024-03-22T01:17:07Z
|
2024-03-22T01:17:07Z
|
Hydro: Adaptive Query Processing of ML Queries
|
Query optimization in relational database management systems (DBMSs) is critical for fast query processing. The query optimizer relies on precise selectivity and cost estimates to effectively optimize queries prior to execution. While this strategy is effective for relational DBMSs, it is not sufficient for DBMSs tailored for processing machine learning (ML) queries. In ML-centric DBMSs, query optimization is challenging for two reasons. First, the performance bottleneck of the queries shifts to user-defined functions (UDFs) that often wrap around deep learning models, making it difficult to accurately estimate UDF statistics without profiling the query. This leads to inaccurate statistics and sub-optimal query plans. Second, the optimal query plan for ML queries is data-dependent, necessitating DBMSs to adapt the query plan on the fly during execution. A static query plan is therefore insufficient for such queries. In this paper, we present Hydro, an ML-centric DBMS that utilizes adaptive query processing (AQP) for efficiently processing ML queries. Hydro is designed to quickly evaluate UDF-based query predicates by ensuring optimal predicate evaluation order and improving the scalability of UDF execution. By integrating AQP, Hydro continuously monitors UDF statistics, routes data to predicates in an optimal order, and dynamically allocates resources for evaluating predicates. We demonstrate Hydro's efficacy through four illustrative use cases, delivering up to 11.52x speedup over a baseline system.
|
[
"['Gaurav Tarlok Kakkar' 'Jiashen Cao' 'Aubhro Sengupta' 'Joy Arulraj'\n 'Hyesoon Kim']"
] |
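The predicate-ordering idea in the abstract above is commonly realized with the classic rank heuristic: evaluate the predicates that discard the most tuples per unit of cost first, and re-derive the order as profiled statistics change. A minimal sketch under that assumption; the predicates and their profiled statistics are hypothetical and this is not Hydro's actual API:

```python
# Rank-based ordering of UDF predicates: cheapest cost per discarded tuple
# goes first. In an adaptive system this would be recomputed as the
# monitored statistics drift.

def order_predicates(preds):
    """Sort UDF predicates by cost per discarded tuple (ascending).

    preds: list of (name, cost_per_tuple, selectivity) where selectivity
    is the fraction of tuples that *pass* the predicate.
    """
    return sorted(preds, key=lambda p: p[1] / max(1.0 - p[2], 1e-9))

profiled = [("contains_car", 5.0, 0.30),   # expensive, filters a lot
            ("is_daytime",   0.5, 0.80),   # cheap, filters little
            ("has_person",   4.0, 0.90)]   # expensive, filters little
for name, cost, sel in order_predicates(profiled):
    print(name, cost, sel)   # is_daytime, contains_car, has_person
```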
null | null |
2403.14905
| null | null |
http://arxiv.org/pdf/2403.14905v1
|
2024-03-22T01:51:48Z
|
2024-03-22T01:51:48Z
|
Adaptive Coded Federated Learning: Privacy Preservation and Straggler
Mitigation
|
In this article, we address the problem of federated learning in the presence of stragglers. For this problem, a coded federated learning framework has been proposed, where the central server aggregates gradients received from the non-stragglers and the gradient computed from a privacy-preserving global coded dataset to mitigate the negative impact of the stragglers. However, when aggregating these gradients, fixed weights are consistently applied across iterations, neglecting the generation process of the global coded dataset and the dynamic nature of the trained model over iterations. This oversight may result in diminished learning performance. To overcome this drawback, we propose a new method named adaptive coded federated learning (ACFL). In ACFL, before the training, each device uploads a coded local dataset with additive noise to the central server to generate a global coded dataset under privacy preservation requirements. During each iteration of the training, the central server aggregates the gradients received from the non-stragglers and the gradient computed from the global coded dataset, where an adaptive policy for varying the aggregation weights is designed. Under this policy, we optimize the performance in terms of privacy and learning, where the learning performance is analyzed through convergence analysis and the privacy performance is characterized via mutual information differential privacy. Finally, we perform simulations to demonstrate the superiority of ACFL compared with the non-adaptive methods.
|
[
"['Chengxi Li' 'Ming Xiao' 'Mikael Skoglund']"
] |
null | null |
2403.14917
| null | null |
http://arxiv.org/pdf/2403.14917v2
|
2024-04-07T09:08:29Z
|
2024-03-22T02:41:57Z
|
Mean-field Analysis on Two-layer Neural Networks from a Kernel
Perspective
|
In this paper, we study the feature learning ability of two-layer neural networks in the mean-field regime through the lens of kernel methods. To focus on the dynamics of the kernel induced by the first layer, we utilize a two-timescale limit, where the second layer moves much faster than the first layer. In this limit, the learning problem is reduced to the minimization problem over the intrinsic kernel. Then, we show the global convergence of the mean-field Langevin dynamics and derive the time and particle discretization errors. We also demonstrate that two-layer neural networks can learn a union of multiple reproducing kernel Hilbert spaces more efficiently than any kernel method, and that neural networks acquire a data-dependent kernel which aligns with the target function. In addition, we develop a label noise procedure, which converges to the global optimum, and show that the degrees of freedom appear as an implicit regularizer.
|
[
"['Shokichi Takakura' 'Taiji Suzuki']"
] |
null | null |
2403.14918
| null | null |
http://arxiv.org/pdf/2403.14918v1
|
2024-03-22T02:42:38Z
|
2024-03-22T02:42:38Z
|
Deep learning-based method for weather forecasting: A case study in
Itoshima
|
Accurate weather forecasting is of paramount importance for a wide range of practical applications, drawing substantial scientific and societal interest. However, the intricacies of weather systems pose substantial challenges to accurate predictions. This research introduces a multilayer perceptron model tailored for weather forecasting in Itoshima, Kyushu, Japan. Our meticulously designed architecture demonstrates superior performance compared to existing models, surpassing benchmarks such as Long Short-Term Memory and Recurrent Neural Networks.
|
[
"['Yuzhong Cheng' 'Linh Thi Hoai Nguyen' 'Akinori Ozaki' 'Ton Viet Ta']"
] |
null | null |
2403.14922
| null | null |
http://arxiv.org/pdf/2403.14922v1
|
2024-03-22T02:50:42Z
|
2024-03-22T02:50:42Z
|
CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR
|
In recent years, emerging research on mobile sensing has led to novel scenarios that enhance daily life for humans, but dynamic usage conditions often result in performance degradation when systems are deployed in real-world settings. Existing solutions typically employ one-off adaptation schemes based on neural networks, which struggle to ensure robustness against uncertain drifting conditions in human-centric sensing scenarios. In this paper, we propose CODA, a COst-efficient Domain Adaptation mechanism for mobile sensing that addresses real-time drifts from the data distribution perspective with active learning theory, ensuring cost-efficient adaptation directly on the device. By incorporating a clustering loss and importance-weighted active learning algorithm, CODA retains the relationship between different clusters during cost-effective instance-level updates, preserving meaningful structure within the data distribution. We also showcase its generalization by seamlessly integrating it with Neural Network-based solutions for Human Activity Recognition tasks. Through meticulous evaluations across diverse datasets, including phone-based, watch-based, and integrated sensor-based sensing tasks, we demonstrate the feasibility and potential of online adaptation with CODA. The promising results achieved by CODA, even without learnable parameters, also suggest the possibility of realizing unobtrusive adaptation through specific application designs with sufficient feedback.
|
[
"['Minghui Qiu' 'Yandao Huang' 'Lin Chen' 'Lu Wang' 'Kaishun Wu']"
] |
null | null |
2403.14926
| null | null |
http://arxiv.org/pdf/2403.14926v1
|
2024-03-22T03:01:42Z
|
2024-03-22T03:01:42Z
|
Contrastive Learning on Multimodal Analysis of Electronic Health Records
|
Electronic health record (EHR) systems contain a wealth of multimodal clinical data including structured data like clinical codes and unstructured data such as clinical notes. However, many existing EHR-focused studies have traditionally either concentrated on an individual modality or merged different modalities in a rather rudimentary fashion. This approach often results in the perception of structured and unstructured data as separate entities, neglecting the inherent synergy between them. Specifically, the two important modalities contain clinically relevant, inextricably linked and complementary health information. A more complete picture of a patient's medical history is captured by the joint analysis of the two modalities of data. Despite the great success of multimodal contrastive learning in the vision-language domain, its potential remains under-explored in the realm of multimodal EHR, particularly in terms of its theoretical understanding. To accommodate the statistical analysis of multimodal EHR data, in this paper, we propose a novel multimodal feature embedding generative model and design a multimodal contrastive loss to obtain the multimodal EHR feature representation. Our theoretical analysis demonstrates the effectiveness of multimodal learning compared to single-modality learning and connects the solution of the loss function to the singular value decomposition of a pointwise mutual information matrix. This connection paves the way for a privacy-preserving algorithm tailored for multimodal EHR feature representation learning. Simulation studies show that the proposed algorithm performs well under a variety of configurations. We further validate the clinical utility of the proposed algorithm in real-world EHR data.
|
[
"['Tianxi Cai' 'Feiqing Huang' 'Ryumei Nakada' 'Linjun Zhang' 'Doudou Zhou']"
] |
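A common way to instantiate a multimodal contrastive loss like the one described above is a symmetric InfoNCE objective over paired embeddings, with matched pairs on the diagonal of the similarity matrix. The PyTorch sketch below is a generic version of that idea, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_struct, z_notes, temperature=0.07):
    """Symmetric InfoNCE loss pairing structured-code and clinical-note
    embeddings of the same patient; a generic sketch, not the paper's
    exact objective."""
    z1 = F.normalize(z_struct, dim=1)
    z2 = F.normalize(z_notes, dim=1)
    logits = z1 @ z2.t() / temperature            # (batch, batch) similarities
    targets = torch.arange(z1.size(0))            # matched pairs on diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random embeddings standing in for the two modalities.
loss = multimodal_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
print(loss.item())
```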
null | null |
2403.14941
| null | null |
http://arxiv.org/pdf/2403.14941v1
|
2024-03-22T04:21:40Z
|
2024-03-22T04:21:40Z
|
Unifying Lane-Level Traffic Prediction from a Graph Structural
Perspective: Benchmark and Baseline
|
Traffic prediction has long been a focal and pivotal area in research, witnessing significant strides from city-level to road-level predictions in recent years. With the advancement of Vehicle-to-Everything (V2X) technologies, autonomous driving, and large-scale models in the traffic domain, lane-level traffic prediction has emerged as an indispensable direction. However, further progress in this field is hindered by the absence of comprehensive and unified evaluation standards, coupled with limited public availability of data and code. This paper extensively analyzes and categorizes existing research in lane-level traffic prediction, establishes a unified spatial topology structure and prediction tasks, and introduces a simple baseline model, GraphMLP, based on graph structure and MLP networks. We have replicated codes not publicly available in existing studies and, based on this, thoroughly and fairly assessed various models in terms of effectiveness, efficiency, and applicability, providing insights for practical applications. Additionally, we have released three new datasets and corresponding codes to accelerate progress in this field, all of which can be found at https://github.com/ShuhaoLii/TITS24LaneLevel-Traffic-Benchmark.
|
[
"['Shuhao Li' 'Yue Cui' 'Jingyi Xu' 'Libin Li' 'Lingkai Meng'\n 'Weidong Yang' 'Fan Zhang' 'Xiaofang Zhou']"
] |
null | null |
2403.14946
| null | null |
http://arxiv.org/pdf/2403.14946v1
|
2024-03-22T04:38:42Z
|
2024-03-22T04:38:42Z
|
A Single Linear Layer Yields Task-Adapted Low-Rank Matrices
|
Low-Rank Adaptation (LoRA) is a widely used Parameter-Efficient Fine-Tuning (PEFT) method that updates an initial weight matrix $W_0$ with a delta matrix $\Delta W$ composed of two low-rank matrices $A$ and $B$. A previous study suggested that there is a correlation between $W_0$ and $\Delta W$. In this study, we aim to delve deeper into the relationships between $W_0$ and the low-rank matrices $A$ and $B$ to further comprehend the behavior of LoRA. In particular, we analyze a conversion matrix that transforms $W_0$ into the low-rank matrices, which encapsulates information about the relationships. Our analysis reveals that the conversion matrices are similar across layers. Inspired by these findings, we hypothesize that a single linear layer, which takes each layer's $W_0$ as input, can yield task-adapted low-rank matrices. To confirm this hypothesis, we devise a method named Conditionally Parameterized LoRA (CondLoRA) that updates initial weight matrices with low-rank matrices derived from a single linear layer. Our empirical results show that CondLoRA maintains performance on par with LoRA, despite the fact that the trainable parameters of CondLoRA are fewer than those of LoRA. Therefore, we conclude that "a single linear layer yields task-adapted low-rank matrices."
|
[
"['Hwichan Kim' 'Shota Sasaki' 'Sho Hoshino' 'Ukyo Honda']"
] |
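The core idea above, shared trainable maps that turn each layer's frozen $W_0$ into task-adapted low-rank factors, can be sketched in a few lines of PyTorch. The parameterization below (two small conversion matrices applied to $W_0$ from the left and right, shared across layers of the same shape) is our simplification for illustration, not the paper's exact single-linear-layer design:

```python
import torch
import torch.nn as nn

class CondLoRASketch(nn.Module):
    """Derive LoRA-style factors A and B from a frozen W0 via shared,
    trainable conversion maps; a simplified sketch of the idea."""
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        # Conversion maps shared across all layers of shape (d_out, d_in).
        self.m_a = nn.Parameter(torch.randn(rank, d_out) * 0.01)
        self.m_b = nn.Parameter(torch.zeros(d_in, rank))  # zero init => dW=0

    def forward(self, w0):          # w0: (d_out, d_in), kept frozen
        a = self.m_a @ w0           # (rank, d_in), plays the role of A
        b = w0 @ self.m_b           # (d_out, rank), plays the role of B
        return w0 + b @ a           # adapted weight, LoRA-style update

cond = CondLoRASketch(d_in=768, d_out=768, rank=8)
w_adapted = cond(torch.randn(768, 768))
print(w_adapted.shape)              # torch.Size([768, 768])
```

Because the conversion maps are shared, the trainable parameter count stays fixed regardless of how many layers they adapt, which mirrors the parameter savings the abstract reports.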
null | null |
2403.14949
| null | null |
http://arxiv.org/pdf/2403.14949v1
|
2024-03-22T04:44:43Z
|
2024-03-22T04:44:43Z
|
Addressing Concept Shift in Online Time Series Forecasting:
Detect-then-Adapt
|
Online updating of time series forecasting models aims to tackle the challenge of concept drift by adjusting forecasting models based on streaming data. While numerous algorithms have been developed, most of them focus on model design and updating. In practice, many of these methods struggle with continuous performance regression in the face of accumulated concept drifts over time. To address this limitation, we present a novel approach, Concept Drift Detection anD Adaptation (D3A), that first detects concept drift and then aggressively adapts the current model to the drifted concepts for rapid adaptation. To best harness the utility of historical data for model adaptation, we propose a data augmentation strategy that introduces Gaussian noise into existing training instances. It helps mitigate the data distribution gap, a critical factor contributing to train-test performance inconsistency. The significance of our data augmentation process is verified by our theoretical analysis. Our empirical studies across six datasets demonstrate the effectiveness of D3A in improving model adaptation capability. Notably, compared to a simple Temporal Convolutional Network (TCN) baseline, D3A reduces the average Mean Squared Error (MSE) by $43.9\%$. For the state-of-the-art (SOTA) model, the MSE is reduced by $33.3\%$.
|
[
"['YiFan Zhang' 'Weiqi Chen' 'Zhaoyang Zhu' 'Dalin Qin' 'Liang Sun'\n 'Xue Wang' 'Qingsong Wen' 'Zhang Zhang' 'Liang Wang' 'Rong Jin']"
] |
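The augmentation step described above is straightforward to sketch: jitter historical training windows with Gaussian noise scaled to each feature's spread, so the augmented set better bridges the old and drifted distributions. A NumPy sketch following that description; parameter names and the scaling choice are ours:

```python
import numpy as np

def augment_history(windows, noise_std=0.1, copies=2, rng=None):
    """Gaussian-noise augmentation of historical training windows.

    windows: array of shape (n_windows, length, n_features).
    Returns the original windows plus `copies` jittered versions; noise is
    scaled per feature by the empirical standard deviation.
    """
    rng = np.random.default_rng(rng)
    scale = noise_std * windows.std(axis=(0, 1), keepdims=True)
    noisy = [windows + rng.normal(0.0, 1.0, windows.shape) * scale
             for _ in range(copies)]
    return np.concatenate([windows, *noisy], axis=0)

augmented = augment_history(np.random.randn(100, 24, 3))
print(augmented.shape)  # (300, 24, 3): originals plus two noisy copies
```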
null | null |
2403.14950
| null | null |
http://arxiv.org/pdf/2403.14950v1
|
2024-03-22T04:48:41Z
|
2024-03-22T04:48:41Z
|
KnowLA: Enhancing Parameter-efficient Finetuning with Knowledgeable
Adaptation
|
Parameter-efficient finetuning (PEFT) is a key technique for adapting large language models (LLMs) to downstream tasks. In this paper, we study leveraging knowledge graph embeddings to improve the effectiveness of PEFT. We propose a knowledgeable adaptation method called KnowLA. It inserts an adaptation layer into an LLM to integrate the embeddings of entities appearing in the input text. The adaptation layer is trained in combination with LoRA on instruction data. Experiments on six benchmarks with two popular LLMs and three knowledge graphs demonstrate the effectiveness and robustness of KnowLA. We show that KnowLA can help activate the relevant parameterized knowledge in an LLM to answer a question without changing its parameters or input prompts.
|
[
"['Xindi Luo' 'Zequn Sun' 'Jing Zhao' 'Zhe Zhao' 'Wei Hu']"
] |
null | null |
2403.14951
| null | null |
http://arxiv.org/pdf/2403.14951v1
|
2024-03-22T05:04:48Z
|
2024-03-22T05:04:48Z
|
Simple Graph Condensation
|
The burdensome training costs on large-scale graphs have aroused significant interest in graph condensation, which involves tuning Graph Neural Networks (GNNs) on a small condensed graph for use on the large-scale original graph. Existing methods primarily focus on aligning key metrics between the condensed and original graphs, such as gradients, distribution and trajectory of GNNs, yielding satisfactory performance on downstream tasks. However, these complex metrics necessitate intricate computations and can potentially disrupt the optimization process of the condensed graph, making the condensation process highly demanding and unstable. Motivated by the recent success of simplified models in various fields, we propose a simplified approach to metric alignment in graph condensation, aiming to reduce unnecessary complexity inherited from GNNs. In our approach, we eliminate external parameters and exclusively retain the target condensed graph during the condensation process. Following the hierarchical aggregation principles of GNNs, we introduce the Simple Graph Condensation (SimGC) framework, which aligns the condensed graph with the original graph from the input layer to the prediction layer, guided by a pre-trained Simple Graph Convolution (SGC) model on the original graph. As a result, both graphs possess a similar capability to train GNNs. This straightforward yet effective strategy achieves a significant speedup of up to 10 times compared to existing graph condensation methods while performing on par with state-of-the-art baselines. Comprehensive experiments conducted on seven benchmark datasets demonstrate the effectiveness of SimGC in prediction accuracy, condensation time, and generalization capability. Our code will be made publicly available.
|
[
"['Zhenbang Xiao' 'Yu Wang' 'Shunyu Liu' 'Huiqiong Wang' 'Mingli Song'\n 'Tongya Zheng']"
] |
null | null |
2403.14958
| null | null |
http://arxiv.org/pdf/2403.14958v1
|
2024-03-22T05:23:31Z
|
2024-03-22T05:23:31Z
|
Adapprox: Adaptive Approximation in Adam Optimization via Randomized
Low-Rank Matrices
|
As deep learning models exponentially increase in size, optimizers such as Adam encounter significant memory consumption challenges due to the storage of first and second moment data. Current memory-efficient methods like Adafactor and CAME often compromise accuracy with their matrix factorization techniques. Addressing this, we introduce Adapprox, a novel approach that employs randomized low-rank matrix approximation for a more effective and accurate approximation of Adam's second moment. Adapprox features an adaptive rank selection mechanism, finely balancing accuracy and memory efficiency, and includes an optional cosine similarity guidance strategy to enhance stability and expedite convergence. In GPT-2 training and downstream tasks, Adapprox surpasses AdamW by achieving 34.5% to 49.9% and 33.8% to 49.9% memory savings for the 117M and 345M models, respectively, with the first moment enabled, and further increases these savings without the first moment. Moreover, it enhances convergence speed and improves downstream task performance relative to its counterparts.
|
[
"['Pengxiang Zhao' 'Ping Li' 'Yingjie Gu' 'Yi Zheng'\n 'Stephan Ludger Kölker' 'Zhefeng Wang' 'Xiaoming Yuan']"
] |
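A randomized low-rank approximation of the kind applied above to Adam's second moment can be built from the standard Halko-style randomized range finder. The sketch below shows that generic procedure with a fixed rank; Adapprox's adaptive rank selection and cosine-similarity guidance are not reproduced:

```python
import numpy as np

def randomized_low_rank(v, rank, n_oversample=5, rng=None):
    """Randomized rank-k approximation of a matrix (e.g., a second-moment
    estimate): project onto a random subspace, orthonormalize, then take
    an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(rng)
    omega = rng.standard_normal((v.shape[1], rank + n_oversample))
    q, _ = np.linalg.qr(v @ omega)            # orthonormal range basis
    b = q.T @ v                               # small projected matrix
    u_b, s, vt = np.linalg.svd(b, full_matrices=False)
    u = q @ u_b
    return (u[:, :rank] * s[:rank]) @ vt[:rank]   # rank-k reconstruction

v = np.abs(np.random.randn(512, 512))          # stand-in second moment
v_hat = randomized_low_rank(v, rank=8)
print(np.linalg.norm(v - v_hat) / np.linalg.norm(v))  # relative error
```

Storing only the factors costs O((m+n)k) memory instead of O(mn), which is the source of the savings the abstract reports.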
null | null |
2403.14973
| null | null |
http://arxiv.org/pdf/2403.14973v1
|
2024-03-22T06:04:11Z
|
2024-03-22T06:04:11Z
|
Trajectory Regularization Enhances Self-Supervised Geometric
Representation
|
Self-supervised learning (SSL) has proven effective in learning high-quality representations for various downstream tasks, with a primary focus on semantic tasks. However, its application in geometric tasks remains underexplored, partially due to the absence of a standardized evaluation method for geometric representations. To address this gap, we introduce a new pose-estimation benchmark for assessing SSL geometric representations, which demands training without semantic or pose labels and achieving proficiency in both semantic and geometric downstream tasks. On this benchmark, we study enhancing SSL geometric representations without sacrificing semantic classification accuracy. We find that leveraging mid-layer representations improves pose-estimation performance by 10-20%. Further, we introduce an unsupervised trajectory-regularization loss, which improves performance by an additional 4% and improves generalization ability on out-of-distribution data. We hope the proposed benchmark and methods offer new insights and improvements in self-supervised geometric representation learning.
|
[
"['Jiayun Wang' 'Stella X. Yu' 'Yubei Chen']"
] |
null | null |
2403.14977
| null | null |
http://arxiv.org/pdf/2403.14977v1
|
2024-03-22T06:22:20Z
|
2024-03-22T06:22:20Z
|
Piecewise-Linear Manifolds for Deep Metric Learning
|
Unsupervised deep metric learning (UDML) focuses on learning a semantic representation space using only unlabeled data. This challenging problem requires accurately estimating the similarity between data points, which is used to supervise a deep network. For this purpose, we propose to model the high-dimensional data manifold using a piecewise-linear approximation, with each low-dimensional linear piece approximating the data manifold in a small neighborhood of a point. These neighborhoods are used to estimate similarity between data points. We empirically show that this similarity estimate correlates better with the ground truth than the similarity estimates of current state-of-the-art techniques. We also show that proxies, commonly used in supervised metric learning, can be used to model the piecewise-linear manifold in an unsupervised setting, helping improve performance. Our method outperforms existing unsupervised metric learning approaches on standard zero-shot image retrieval benchmarks.
|
[
"['Shubhang Bhatnagar' 'Narendra Ahuja']"
] |
null | null |
2403.14999
| null | null |
http://arxiv.org/pdf/2403.14999v1
|
2024-03-22T07:21:09Z
|
2024-03-22T07:21:09Z
|
Magic for the Age of Quantized DNNs
|
Recently, the number of parameters in DNNs has explosively increased, as exemplified by LLMs (Large Language Models), making inference on small-scale computers more difficult. Model compression technology is, therefore, essential for integration into products. In this paper, we propose a method of quantization-aware training. We introduce a novel normalization (Layer-Batch Normalization) that is independent of the mini-batch size and does not require any additional computation cost during inference. Then, we quantize the weights by the scaled round-clip function with weight standardization. We also quantize activation functions using the same function and apply surrogate gradients to train the model with both quantized weights and quantized activation functions. We call this method Magic for the age of Quantized DNNs (MaQD). Experimental results show that our quantization method achieves minimal accuracy degradation.
|
[
"['Yoshihide Sawada' 'Ryuji Saiin' 'Kazuma Suetake']"
] |
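Quantization-aware training of this kind typically pairs a round-clip quantizer with a straight-through estimator (a common form of surrogate gradient) so that gradients can flow through the non-differentiable rounding. A generic PyTorch sketch under that assumption; the scale and level count are illustrative, and MaQD's exact scaling may differ:

```python
import torch

def round_clip(x, scale, n_levels=15):
    """Scaled round-clip quantizer with a straight-through estimator:
    the forward pass quantizes, the backward pass treats the quantizer
    as the identity so gradients reach the latent weights."""
    q = torch.clamp(torch.round(x / scale),
                    -(n_levels // 2), n_levels // 2) * scale
    return x + (q - x).detach()     # value = q, gradient = identity

w = torch.randn(4, 4, requires_grad=True)
w_std = (w - w.mean()) / (w.std() + 1e-5)   # weight standardization
loss = round_clip(w_std, scale=0.25).sum()
loss.backward()                              # gradients reach w via the STE
print(w.grad is not None)                    # True
```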
null | null |
2403.15004
| null | null |
http://arxiv.org/pdf/2403.15004v1
|
2024-03-22T07:32:21Z
|
2024-03-22T07:32:21Z
|
ParFormer: Vision Transformer Baseline with Parallel Local Global Token
Mixer and Convolution Attention Patch Embedding
|
This work presents ParFormer, an enhanced transformer architecture that allows the incorporation of different token mixers into a single stage, hence improving feature extraction capabilities. Integrating both local and global data allows for precise representation of short- and long-range spatial relationships without the need for computationally intensive methods such as shifting windows. Along with the parallel token mixer encoder, we offer the Convolutional Attention Patch Embedding (CAPE) as an enhancement of standard patch embedding to improve token mixer extraction with a convolutional attention module. Our comprehensive evaluation demonstrates that ParFormer outperforms CNN-based and state-of-the-art transformer-based architectures in image classification and several complex tasks such as object recognition. The proposed CAPE has been demonstrated to benefit the overall MetaFormer architecture, even when utilizing the Identity Mapping Token Mixer, resulting in a 0.5% increase in accuracy. In accuracy, the ParFormer models also outperform ConvNeXt and Swin Transformer, the pure convolutional and transformer baselines. Furthermore, our model surpasses the current leading hybrid transformer by reaching competitive Top-1 scores on the ImageNet-1K classification test. Specifically, our model variants with 11M, 23M, and 34M parameters achieve scores of 80.4%, 82.1%, and 83.1%, respectively. Code: https://github.com/novendrastywn/ParFormer-CAPE-2024
|
[
"['Novendra Setyawan' 'Ghufron Wahyu Kurniawan' 'Chi-Chia Sun'\n 'Jun-Wei Hsieh' 'Hui-Kai Su' 'Wen-Kai Kuo']"
] |
null | null |
2403.15012
| null | null |
http://arxiv.org/pdf/2403.15012v1
|
2024-03-22T07:56:31Z
|
2024-03-22T07:56:31Z
|
Empirical investigation of multi-source cross-validation in clinical
machine learning
|
Traditionally, machine learning-based clinical prediction models have been trained and evaluated on patient data from a single source, such as a hospital. Cross-validation methods can be used to estimate the accuracy of such models on new patients originating from the same source, by repeated random splitting of the data. However, such estimates tend to be highly overoptimistic when compared to accuracy obtained from deploying models to sources not represented in the dataset, such as a new hospital. The increasing availability of multi-source medical datasets provides new opportunities for obtaining more comprehensive and realistic evaluations of expected accuracy through source-level cross-validation designs. In this study, we present a systematic empirical evaluation of standard K-fold cross-validation and leave-source-out cross-validation methods in a multi-source setting. We consider the task of electrocardiogram based cardiovascular disease classification, combining and harmonizing the openly available PhysioNet CinC Challenge 2021 and the Shandong Provincial Hospital datasets for our study. Our results show that K-fold cross-validation, both on single-source and multi-source data, systematically overestimates prediction performance when the end goal is to generalize to new sources. Leave-source-out cross-validation provides more reliable performance estimates, with close to zero bias though larger variability. The evaluation highlights the dangers of obtaining misleading cross-validation results on medical data and demonstrates how these issues can be mitigated when having access to multi-source data.
|
[
"['Tuija Leinonen' 'David Wong' 'Ali Wahab' 'Ramesh Nadarajah'\n 'Matti Kaisti' 'Antti Airola']"
] |
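The two evaluation designs compared above map directly onto scikit-learn's splitters: standard K-fold ignores source labels, while leave-source-out holds out one source per fold. A toy sketch with synthetic data standing in for multi-hospital records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

# Toy stand-in: 600 patients from 3 sources (e.g., hospitals).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = rng.integers(0, 2, size=600)
groups = np.repeat([0, 1, 2], 200)        # source label for each patient

clf = LogisticRegression(max_iter=1000)
kfold_acc = cross_val_score(
    clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
loso_acc = cross_val_score(
    clf, X, y, groups=groups, cv=LeaveOneGroupOut())
# On real multi-source data, the K-fold estimate is typically the
# optimistic one; on this random toy data both hover near chance.
print(kfold_acc.mean(), loso_acc.mean())
```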
null | null |
2403.15017
| null | null |
http://arxiv.org/pdf/2403.15017v1
|
2024-03-22T08:03:10Z
|
2024-03-22T08:03:10Z
|
Vehicle Detection Performance in Nordic Region
|
This paper addresses the critical challenge of vehicle detection in the harsh winter conditions of the Nordic regions, characterized by heavy snowfall, reduced visibility, and low lighting. Due to their susceptibility to environmental distortions and occlusions, traditional vehicle detection methods have struggled in these adverse conditions. Recently proposed deep learning architectures have brought promise, yet the unique difficulties of detecting vehicles in Nordic winters remain inadequately addressed. This study uses the Nordic Vehicle Dataset (NVD), which contains UAV images from northern Sweden, to evaluate the performance of state-of-the-art vehicle detection algorithms under challenging weather conditions. Our methodology includes a comprehensive evaluation of single-stage, two-stage, and transformer-based detectors against the NVD. We propose a series of enhancements tailored to each detection framework, including data augmentation, hyperparameter tuning, transfer learning, and novel strategies designed explicitly for the DETR model. Our findings not only highlight the limitations of current detection systems in the Nordic environment but also offer promising directions for enhancing these algorithms for improved robustness and accuracy in vehicle detection amidst the complexities of winter landscapes. The code and the dataset are available at https://nvd.ltu-ai.dev
|
[
"['Hamam Mokayed' 'Rajkumar Saini' 'Oluwatosin Adewumi' 'Lama Alkhaled'\n 'Bjorn Backe' 'Palaiahnakote Shivakumara' 'Olle Hagner' 'Yan Chai Hum']"
] |
null | null |
2403.15022
| null | null |
http://arxiv.org/pdf/2403.15022v3
|
2024-06-25T15:14:12Z
|
2024-03-22T08:11:14Z
|
Insights into the Lottery Ticket Hypothesis and Iterative Magnitude
Pruning
|
Lottery ticket hypothesis for deep neural networks emphasizes the importance of initialization used to re-train the sparser networks obtained using the iterative magnitude pruning process. An explanation for why the specific initialization proposed by the lottery ticket hypothesis tends to work better in terms of generalization (and training) performance has been lacking. Moreover, the underlying principles in iterative magnitude pruning, like the pruning of smaller magnitude weights and the role of the iterative process, lack full understanding and explanation. In this work, we attempt to provide insights into these phenomena by empirically studying the volume/geometry and loss landscape characteristics of the solutions obtained at various stages of the iterative magnitude pruning process.
|
[
"['Tausifa Jan Saleem' 'Ramanjit Ahuja' 'Surendra Prasad' 'Brejesh Lall']"
] |
null | null |
2403.15025
| null | null |
http://arxiv.org/pdf/2403.15025v1
|
2024-03-22T08:13:33Z
|
2024-03-22T08:13:33Z
|
Robust Conformal Prediction under Distribution Shift via
Physics-Informed Structural Causal Model
|
Uncertainty is critical to reliable decision-making with machine learning. Conformal prediction (CP) handles uncertainty by predicting a set on a test input, hoping the set to cover the true label with at least $(1-\alpha)$ confidence. This coverage can be guaranteed on test data even if the marginal distributions $P_X$ differ between calibration and test datasets. However, as is common in practice, when the conditional distribution $P_{Y|X}$ differs between calibration and test data, the coverage is not guaranteed, and it is essential to measure and minimize the coverage loss under distributional shift at all possible confidence levels. To address these issues, we upper bound the coverage difference at all levels using the cumulative distribution functions of calibration and test conformal scores and the Wasserstein distance. Inspired by the invariance of physics across data distributions, we propose a physics-informed structural causal model (PI-SCM) to reduce the upper bound. We validate that PI-SCM can improve coverage robustness along confidence levels and test domains on a traffic speed prediction task and an epidemic spread task with multiple real-world datasets.
|
[
"['Rui Xu' 'Yue Sun' 'Chao Chen' 'Parv Venkitasubramaniam' 'Sihong Xie']"
] |
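For reference, the exchangeability-based baseline that such robustness methods build on is split conformal prediction, which turns calibration residuals into a prediction interval with $(1-\alpha)$ coverage. A minimal sketch; the variable names are ours, and PI-SCM's causal adjustment is not shown:

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Standard split conformal interval from calibration residuals.

    cal_residuals: |y - f(x)| scores on a held-out calibration set.
    Uses the finite-sample-corrected quantile ceil((n+1)(1-alpha))/n.
    """
    n = len(cal_residuals)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(cal_residuals, min(level, 1.0), method="higher")
    return y_pred - q, y_pred + q

cal_res = np.abs(np.random.randn(500))         # stand-in calibration scores
lo, hi = split_conformal_interval(cal_res, y_pred=np.array([1.2, 3.4]))
print(lo, hi)                                  # symmetric intervals
```

The coverage guarantee of this construction is exactly what breaks when $P_{Y|X}$ shifts between calibration and test data, which is the gap the abstract's upper bound quantifies.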
null | null |
2403.15027
| null | null |
http://arxiv.org/pdf/2403.15027v2
|
2024-04-03T09:51:29Z
|
2024-03-22T08:17:00Z
|
Grey-informed neural network for time-series forecasting
|
Neural network models have shown outstanding performance and successful resolutions to complex problems in various fields. However, the majority of these models are viewed as black boxes, requiring a significant amount of data for development. Consequently, in situations with limited data, constructing appropriate models becomes challenging due to the lack of transparency and scarcity of data. To tackle these challenges, this study proposes a grey-informed neural network (GINN). The GINN ensures that the output of the neural network follows the differential equation model of the grey system, improving interpretability. Moreover, incorporating prior knowledge from grey system theory enables traditional neural networks to effectively handle small data samples. Our proposed model has been observed to uncover underlying patterns in the real world and produce reliable forecasts based on empirical data.
|
[
"['Wanli Xie' 'Ruibin Zhao' 'Zhenguo Xu' 'Tingting Liang']"
] |
null | null |
2403.15031
| null | null |
http://arxiv.org/pdf/2403.15031v1
|
2024-03-22T08:26:31Z
|
2024-03-22T08:26:31Z
|
Image Classification with Rotation-Invariant Variational Quantum
Circuits
|
Variational quantum algorithms are gaining attention as an early application of Noisy Intermediate-Scale Quantum (NISQ) devices. One of the main problems of variational methods lies in the phenomenon of Barren Plateaus, present in the optimization of variational parameters. Adding geometric inductive bias to the quantum models has been proposed as a potential solution to mitigate this problem, leading to a new field called Geometric Quantum Machine Learning. In this work, an equivariant architecture for variational quantum classifiers is introduced to create a label-invariant model for image classification with $C_4$ rotational label symmetry. The equivariant circuit is benchmarked against two different architectures, and it is experimentally observed that the geometric approach boosts the model's performance. Finally, a classical equivariant convolution operation is proposed to extend the quantum model for the processing of larger images, employing the resources available in NISQ devices.
|
[
"['Paul San Sebastian' 'Mikel Cañizo' 'Román Orús']"
] |
null | null |
2403.15038
| null | null |
http://arxiv.org/pdf/2403.15038v1
|
2024-03-22T08:42:41Z
|
2024-03-22T08:42:41Z
|
Estimation of multiple mean vectors in high dimension
|
We endeavour to estimate numerous multi-dimensional means of various probability distributions on a common space based on independent samples. Our approach involves forming estimators through convex combinations of empirical means derived from these samples. We introduce two strategies to find appropriate data-dependent convex combination weights: a first one employing a testing procedure to identify neighbouring means with low variance, which results in a closed-form plug-in formula for the weights, and a second one determining weights via minimization of an upper confidence bound on the quadratic risk. Through theoretical analysis, we evaluate the improvement in quadratic risk offered by our methods compared to the empirical means. Our analysis focuses on a dimensional asymptotics perspective, showing that our methods asymptotically approach an oracle (minimax) improvement as the effective dimension of the data increases. We demonstrate the efficacy of our methods in estimating multiple kernel mean embeddings through experiments on both simulated and real-world datasets.
|
[
"['Gilles Blanchard' 'Jean-Baptiste Fermanian' 'Hannah Marienwald']"
] |
null | null |
2403.15045
| null | null |
http://arxiv.org/pdf/2403.15045v1
|
2024-03-22T09:02:12Z
|
2024-03-22T09:02:12Z
|
DP-Dueling: Learning from Preference Feedback without Compromising User
Privacy
|
We consider the well-studied dueling bandit problem, where a learner aims to identify near-optimal actions using pairwise comparisons, under the constraint of differential privacy. We consider a general class of utility-based preference matrices for large (potentially unbounded) decision spaces and give the first differentially private dueling bandit algorithm for active learning with user preferences. Our proposed algorithms are computationally efficient with near-optimal performance, both in terms of the private and non-private regret bound. More precisely, we show that when the decision space is of finite size $K$, our proposed algorithm yields an order-optimal $O\Big(\sum_{i=2}^K \log\frac{KT}{\Delta_i} + \frac{K}{\epsilon}\Big)$ regret bound for pure $\epsilon$-DP, where $\Delta_i$ denotes the suboptimality gap of the $i$-th arm. We also present a matching lower bound analysis which proves the optimality of our algorithms. Finally, we extend our results to any general decision space in $d$ dimensions with potentially infinite arms and design an $\epsilon$-DP algorithm with regret $\tilde{O}\left(\frac{d^6}{\kappa\epsilon} + \frac{d\sqrt{T}}{\kappa}\right)$, providing privacy for free when $T \gg d$.
|
[
"['Aadirupa Saha' 'Hilal Asi']"
] |
null | null |
2403.15048
| null | null |
http://arxiv.org/pdf/2403.15048v2
|
2024-03-25T02:08:01Z
|
2024-03-22T09:13:09Z
|
Cartoon Hallucinations Detection: Pose-aware In Context Visual Learning
|
Large-scale Text-to-Image (TTI) models have become a common approach for generating training data in various generative fields. However, visual hallucinations, which contain perceptually critical defects, remain a concern, especially in non-photorealistic styles like cartoon characters. We propose a novel visual hallucination detection system for cartoon character images generated by TTI models. Our approach leverages pose-aware in-context visual learning (PA-ICVL) with Vision-Language Models (VLMs), utilizing both RGB images and pose information. By incorporating pose guidance from a fine-tuned pose estimator, we enable VLMs to make more accurate decisions. Experimental results demonstrate significant improvements in identifying visual hallucinations compared to baseline methods relying solely on RGB images. This research advances TTI models by mitigating visual hallucinations, expanding their potential in non-photorealistic domains.
|
[
"['Bumsoo Kim' 'Wonseop Shin' 'Kyuchul Lee' 'Sanghyun Seo']"
] |
null | null |
2403.15073
| null | null |
http://arxiv.org/pdf/2403.15073v1
|
2024-03-22T09:54:04Z
|
2024-03-22T09:54:04Z
|
On the Inclusion of Charge and Spin States in Cartesian Tensor Neural
Network Potentials
|
In this letter, we present an extension to TensorNet, a state-of-the-art equivariant Cartesian tensor neural network potential, allowing it to handle charged molecules and spin states without architectural changes or increased costs. By incorporating these attributes, we address input degeneracy issues, enhancing the model's predictive accuracy across diverse chemical systems. This advancement significantly broadens TensorNet's applicability, maintaining its efficiency and accuracy.
|
[
"['Guillem Simeon' 'Antonio Mirarchi' 'Raul P. Pelaez' 'Raimondas Galvelis'\n 'Gianni De Fabritiis']"
] |
null | null |
2403.15077
| null | null |
http://arxiv.org/pdf/2403.15077v1
|
2024-03-22T10:02:13Z
|
2024-03-22T10:02:13Z
|
GTAGCN: Generalized Topology Adaptive Graph Convolutional Networks
|
Graph Neural Networks (GNNs) have emerged as a popular and standard approach for learning from graph-structured data. The literature on GNNs highlights the potential of this evolving research area and its widespread adoption in real-life applications. However, most approaches are either new in concept or derived from specific techniques, and the potential of combining more than one approach in hybrid form has not been studied extensively, even though such hybrids could serve sequential and static data alike. We derive a hybrid approach based on two established techniques, generalized aggregation networks and topology adaptive graph convolutional networks, that applies effectively to both sequential and static data. The proposed method applies to both node and graph classification. Our empirical analysis reveals that the results are on par with published results and better for handwritten strokes as sequential data, where graph structures have not been explored.
|
[
"['Sukhdeep Singh' 'Anuj Sharma' 'Vinod Kumar Chauhan']"
] |
null | null |
2403.15079
| null | null |
http://arxiv.org/pdf/2403.15079v1
|
2024-03-22T10:05:21Z
|
2024-03-22T10:05:21Z
|
Automated Feature Selection for Inverse Reinforcement Learning
|
Inverse reinforcement learning (IRL) is an imitation learning approach to learning reward functions from expert demonstrations. Its use avoids the difficult and tedious procedure of manual reward specification while retaining the generalization power of reinforcement learning. In IRL, the reward is usually represented as a linear combination of features. In continuous state spaces, the state variables alone are not sufficiently rich to be used as features, but which features are good is not known in general. To address this issue, we propose a method that employs polynomial basis functions to form a candidate set of features, which are shown to allow the matching of statistical moments of state distributions. Feature selection is then performed on the candidates by leveraging the correlation between trajectory probabilities and feature expectations. We demonstrate the approach's effectiveness by recovering reward functions that capture expert policies across non-linear control tasks of increasing complexity. Code, data, and videos are available at https://sites.google.com/view/feature4irl.
|
[
"['Daulet Baimukashev' 'Gokhan Alcan' 'Ville Kyrki']"
] |
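The candidate-feature construction described above amounts to expanding the state into a polynomial basis and computing feature expectations over expert trajectories. A sketch with scikit-learn; the state layout and polynomial degree are illustrative:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Candidate reward features as polynomial basis functions of the state.
states = np.random.uniform(-1, 1, size=(1000, 3))   # e.g., (x, v, theta)
poly = PolynomialFeatures(degree=3, include_bias=False)
candidates = poly.fit_transform(states)             # monomials up to degree 3
print(candidates.shape, poly.get_feature_names_out()[:5])

# Feature expectations over (stand-in) expert states: the statistical
# moments that a polynomial basis can match, per the abstract.
expert_feature_expectation = candidates.mean(axis=0)
print(expert_feature_expectation.shape)             # (19,)
```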
null | null |
2403.15083
| null | null |
http://arxiv.org/pdf/2403.15083v1
|
2024-03-22T10:06:42Z
|
2024-03-22T10:06:42Z
|
SIMAP: A simplicial-map layer for neural networks
|
In this paper, we present SIMAP, a novel layer integrated into deep learning models, aimed at enhancing the interpretability of the output. The SIMAP layer is an enhanced version of Simplicial-Map Neural Networks (SMNNs), an explainable neural network based on support sets and simplicial maps (functions used in topology to transform shapes while preserving their structural connectivity). The novelty of the methodology proposed in this paper is two-fold: Firstly, SIMAP layers work in combination with other deep learning architectures as an interpretable layer substituting classic dense final layers. Secondly, unlike SMNNs, the support set is based on a fixed maximal simplex, the barycentric subdivision being efficiently computed with a matrix-based multiplication algorithm.
|
[
"['Rocio Gonzalez-Diaz' 'Miguel A. Gutiérrez-Naranjo'\n 'Eduardo Paluzo-Hidalgo']"
] |
null | null |
2403.15091
| null | null |
http://arxiv.org/pdf/2403.15091v1
|
2024-03-22T10:20:09Z
|
2024-03-22T10:20:09Z
|
Improved Long Short-Term Memory-based Wastewater Treatment Simulators
for Deep Reinforcement Learning
|
Even though Deep Reinforcement Learning (DRL) has shown outstanding results in the fields of robotics and games, it is still challenging to apply it to the optimization of industrial processes like wastewater treatment. One of the challenges is the lack of a simulation environment that represents the actual plant as accurately as possible for training DRL policies. The stochasticity and non-linearity of wastewater treatment data lead to unstable and incorrect model predictions over long time horizons. One possible reason for the models' incorrect simulation behavior is compounding error, the accumulation of errors throughout the simulation: because the model utilizes its own predictions as inputs at each time step, the error between the actual data and the predictions accumulates as the simulation continues. We implemented two methods to improve the trained models for wastewater treatment data, which resulted in more accurate simulators: (1) using the model's own predictions as inputs during training as a corrective mechanism, and (2) changing the loss function to consider the long-term predicted shape (dynamics). The experimental results show that implementing these methods can improve the behavior of simulators in terms of Dynamic Time Warping throughout a year by up to 98% compared to the base model. These improvements demonstrate significant promise in creating simulators for biological processes that do not need pre-existing knowledge of the process but instead depend exclusively on time series data obtained from the system.
|
[
"['Esmaeel Mohammadi' 'Daniel Ortiz-Arroyo' 'Mikkel Stokholm-Bjerregaard'\n 'Aviaja Anna Hansen' 'Petar Durdevic']"
] |
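Correction method (1) above resembles scheduled sampling: during training, the simulator is sometimes fed its own previous prediction instead of the ground truth, so it learns to recover from its own errors and the compounding-error loop is exposed at training time. A compact PyTorch sketch of that idea; the model and mixing probability are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 4)
p_own_pred = 0.5                      # chance of self-feeding at each step

seq = torch.randn(8, 50, 4)           # (batch, time, features) stand-in data
x_t, hidden = seq[:, :1], None
loss = 0.0
for t in range(1, seq.size(1)):
    out, hidden = lstm(x_t, hidden)
    pred = head(out)                  # one-step-ahead prediction
    loss = loss + (pred - seq[:, t:t + 1]).pow(2).mean()
    # Sometimes feed the model's own (detached) prediction back as input,
    # mirroring how the simulator is actually rolled out at test time.
    use_pred = torch.rand(()) < p_own_pred
    x_t = pred.detach() if use_pred else seq[:, t:t + 1]
loss.backward()
```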
null | null |
2403.15095
| null | null |
http://arxiv.org/pdf/2403.15095v1
|
2024-03-22T10:23:48Z
|
2024-03-22T10:23:48Z
|
End-to-End Mineral Exploration with Artificial Intelligence and Ambient
Noise Tomography
|
This paper presents an innovative end-to-end workflow for mineral exploration, integrating ambient noise tomography (ANT) and artificial intelligence (AI) to enhance the discovery and delineation of mineral resources essential for the global transition to a low carbon economy. We focus on copper as a critical element, required in significant quantities for renewable energy solutions. We show the benefits of utilising ANT, characterised by its speed, scalability, depth penetration, resolution, and low environmental impact, alongside AI techniques to refine a continent-scale prospectivity model at the deposit scale by fine-tuning our model on local high-resolution data. We show the promise of the method by first presenting a new data-driven AI prospectivity model for copper within Australia, which serves as our foundation model for further fine-tuning. We then focus on the Hillside IOCG deposit on the prospective Yorke Peninsula. We show that with relatively few local training samples (orebody intercepts), we can fine-tune the foundation model to provide a good estimate of the Hillside orebody outline. Our methodology demonstrates how AI can augment geophysical data interpretation, providing a novel approach to mineral exploration with improved decision-making capabilities for targeting mineralization, thereby addressing the urgent need for increased mineral resource discovery.
|
[
"['Jack Muir' 'Gerrit Olivier' 'Anthony Reid']"
] |
null | null |
2403.15103
| null | null |
http://arxiv.org/pdf/2403.15103v1
|
2024-03-22T10:42:25Z
|
2024-03-22T10:42:25Z
|
Improving cross-domain brain tissue segmentation in fetal MRI with
synthetic data
|
Segmentation of fetal brain tissue from magnetic resonance imaging (MRI) plays a crucial role in the study of in utero neurodevelopment. However, automated tools face substantial domain shift challenges as they must be robust to highly heterogeneous clinical data, often limited in numbers and lacking annotations. Indeed, high variability of the fetal brain morphology, MRI acquisition parameters, and super-resolution reconstruction (SR) algorithms adversely affect the model's performance when evaluated out-of-domain. In this work, we introduce FetalSynthSeg, a domain randomization method to segment fetal brain MRI, inspired by SynthSeg. Our results show that models trained solely on synthetic data outperform models trained on real data in out-of-domain settings, validated on a 120-subject cross-domain dataset. Furthermore, we extend our evaluation to 40 subjects acquired using low-field (0.55T) MRI and reconstructed with novel SR models, showcasing robustness across different magnetic field strengths and SR algorithms. Leveraging a generative synthetic approach, we tackle the domain shift problem in fetal brain MRI and offer compelling prospects for applications in fields with limited and highly heterogeneous data.
|
[
"['Vladyslav Zalevskyi' 'Thomas Sanchez' 'Margaux Roulet'\n 'Jordina Aviles Verddera' 'Jana Hutter' 'Hamza Kebiri'\n 'Meritxell Bach Cuadra']"
] |
null | null |
2403.15108
| null | null |
http://arxiv.org/pdf/2403.15108v1
|
2024-03-22T10:51:55Z
|
2024-03-22T10:51:55Z
|
Active Learning for Regression based on Wasserstein distance and
GroupSort Neural Networks
|
This paper presents a new active learning strategy for regression problems. The proposed Wasserstein active regression model is based on the principles of distribution-matching to measure the representativeness of the labeled dataset. The Wasserstein distance is computed using GroupSort Neural Networks. The use of such networks provides theoretical foundations giving a way to quantify errors with explicit bounds for their size and depth. This solution is combined with another, more outlier-tolerant, uncertainty-based approach to complete the query strategy. Finally, this method is compared with other classical and recent solutions. The study empirically shows the pertinence of such a representativity-uncertainty approach, which provides good estimates throughout the query procedure. Moreover, the Wasserstein active regression often achieves more precise estimations and tends to improve accuracy faster than other models.
|
[
"['Benjamin Bobbia' 'Matthias Picard']"
] |
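The GroupSort networks used above to compute the Wasserstein distance rely on a simple activation: split the features into groups and sort each group. Because sorting only permutes coordinates, the activation is 1-Lipschitz, which is what underpins the explicit error bounds the abstract mentions. A minimal PyTorch sketch:

```python
import torch

def group_sort(x, group_size=2):
    """GroupSort activation: split the feature dimension into groups of
    `group_size` and sort each group in ascending order. With group
    size 2 this is the MaxMin activation."""
    b, d = x.shape
    assert d % group_size == 0, "feature dim must be divisible by group size"
    x = x.view(b, d // group_size, group_size)
    x, _ = torch.sort(x, dim=-1)
    return x.view(b, d)

print(group_sort(torch.tensor([[3.0, 1.0, 0.0, 2.0]])))
# tensor([[1., 3., 0., 2.]])  -> groups (3,1) and (0,2), each sorted
```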
null | null |
2403.15112
| null | null |
http://arxiv.org/pdf/2403.15112v3
|
2024-05-30T15:17:55Z
|
2024-03-22T11:08:48Z
|
Text clustering with LLM embeddings
|
Text clustering is an important approach for organising the growing amount of digital content, helping to structure and find hidden patterns in uncategorised data. However, the effectiveness of text clustering heavily relies on the choice of textual embeddings and clustering algorithms. We argue that recent advances in large language models (LLMs) can potentially improve this task. In this research, we investigated how different textual embeddings -- particularly those used in LLMs -- and clustering algorithms affect how text datasets are clustered. A series of experiments were conducted to assess how embeddings influence clustering results, the role played by dimensionality reduction through summarisation, and model size adjustment. Findings reveal that LLM embeddings excel at capturing subtleties in structured language, while BERT leads the lightweight options in performance. In addition, we observe that increasing model dimensionality and employing summarization techniques do not consistently lead to improvements in clustering efficiency, suggesting that these strategies require careful analysis to use in real-life models. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by incorporating embeddings from LLMs, providing a path for improved methodologies, while informing new avenues for future research in various types of textual analysis.
|
[
"['Alina Petukhova' 'João P. Matos-Carvalho' 'Nuno Fachada']"
] |
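The basic pipeline the study varies, embed, cluster, score, fits in a few lines. The sketch below uses a common lightweight sentence-embedding checkpoint and K-means as stand-ins; the study's actual embedding models and clustering algorithms differ:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

texts = ["stock markets rallied today", "the striker scored twice",
         "bond yields fell sharply", "the match ended in a draw"]

# Any sentence-embedding model works here; this checkpoint name is just
# a common lightweight choice, not necessarily one the study used.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(labels, silhouette_score(embeddings, labels))
```

Swapping the encoder line is all it takes to compare embedding families, which is the experimental axis the abstract describes.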
null | null |
2403.15123
| null | null |
http://arxiv.org/pdf/2403.15123v1
|
2024-03-22T11:25:38Z
|
2024-03-22T11:25:38Z
|
Quantification using Permutation-Invariant Networks based on Histograms
|
Quantification, also known as class prevalence estimation, is the supervised learning task in which a model is trained to predict the prevalence of each class in a given bag of examples. This paper investigates the application of deep neural networks to tasks of quantification in scenarios where it is possible to apply a symmetric supervised approach that eliminates the need for classification as an intermediary step, directly addressing the quantification problem. Additionally, it discusses existing permutation-invariant layers designed for set processing and assesses their suitability for quantification. In light of our analysis, we propose HistNetQ, a novel neural architecture that relies on a permutation-invariant representation based on histograms that is specially suited for quantification problems. Our experiments, carried out on the only quantification competition held to date, show that HistNetQ outperforms other deep neural architectures devised for set processing, as well as the state-of-the-art quantification methods. Furthermore, HistNetQ offers two significant advantages over traditional quantification methods: i) it does not require the labels of the training examples but only the prevalence values of a collection of training bags, making it applicable to new scenarios; and ii) it is able to optimize any custom quantification-oriented loss function.
|
[
"['Olaya Pérez-Mon' 'Alejandro Moreo' 'Juan José del Coz' 'Pablo González']"
] |
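One way to realize a permutation-invariant histogram representation of a bag, as described above, is a differentiable soft histogram built from Gaussian kernels around fixed bin centers. The sketch below is that generic construction; HistNetQ's exact layer may differ:

```python
import torch

def soft_histogram(x, n_bins=8, lo=0.0, hi=1.0, bandwidth=0.1):
    """Differentiable histogram over a bag of scalar scores.

    Each score contributes Gaussian-kernel mass to every bin center, so
    the output is smooth in x (trainable end-to-end) and invariant to
    the order of the examples in the bag.
    """
    centers = torch.linspace(lo, hi, n_bins)
    weights = torch.exp(
        -0.5 * ((x[:, None] - centers[None, :]) / bandwidth) ** 2)
    hist = weights.sum(dim=0)          # summing over the bag: order-free
    return hist / hist.sum()           # normalized, sums to 1

bag = torch.rand(100)                  # e.g., classifier scores for one bag
print(soft_histogram(bag))             # length-8 permutation-invariant vector
```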
null | null |
2403.15146
| null | null |
http://arxiv.org/pdf/2403.15146v1
|
2024-03-22T11:57:51Z
|
2024-03-22T11:57:51Z
|
On the Convergence of Adam under Non-uniform Smoothness: Separability
from SGDM and Beyond
|
This paper aims to clearly distinguish between Stochastic Gradient Descent with Momentum (SGDM) and Adam in terms of their convergence rates. We demonstrate that Adam achieves a faster convergence compared to SGDM under the condition of non-uniformly bounded smoothness. Our findings reveal that: (1) in deterministic environments, Adam can attain the known lower bound for the convergence rate of deterministic first-order optimizers, whereas the convergence rate of Gradient Descent with Momentum (GDM) has higher-order dependence on the initial function value; (2) in the stochastic setting, Adam's convergence rate upper bound matches the lower bounds of stochastic first-order optimizers, considering both the initial function value and the final error, whereas there are instances where SGDM fails to converge with any learning rate. These insights distinctly differentiate Adam and SGDM regarding their convergence rates. Additionally, by introducing a novel stopping-time based technique, we further prove that if we consider the minimum gradient norm during iterations, the corresponding convergence rate can match the lower bounds across all problem hyperparameters. The technique can also help prove that Adam with a specific hyperparameter scheduler is parameter-agnostic, which can hence be of independent interest.
|
[
"['Bohan Wang' 'Huishuai Zhang' 'Qi Meng' 'Ruoyu Sun' 'Zhi-Ming Ma'\n 'Wei Chen']"
] |
null | null |
2403.15150
| null | null |
http://arxiv.org/pdf/2403.15150v1
|
2024-03-22T12:06:40Z
|
2024-03-22T12:06:40Z
|
An In-Depth Analysis of Data Reduction Methods for Sustainable Deep
Learning
|
In recent years, Deep Learning has gained popularity for its ability to solve complex classification tasks, increasingly delivering better results thanks to the development of more accurate models, the availability of huge volumes of data and the improved computational capabilities of modern computers. However, these improvements in performance also bring efficiency problems, related to the storage of datasets and models, and to the waste of energy and time involved in both the training and inference processes. In this context, data reduction can help reduce energy consumption when training a deep learning model. In this paper, we present up to eight different methods to reduce the size of a tabular training dataset, and we develop a Python package to apply them. We also introduce a representativeness metric based on topology to measure how similar the reduced datasets are to the full training dataset. Additionally, we develop a methodology to apply these data reduction methods to image datasets for object detection tasks. Finally, we experimentally compare how these data reduction methods affect the representativeness of the reduced dataset, the energy consumption and the predictive performance of the model.
|
[
"['Víctor Toscano-Durán' 'Javier Perera-Lago' 'Eduardo Paluzo-Hidalgo'\n 'Rocío Gonzalez-Diaz' 'Miguel Ángel Gutierrez-Naranjo' 'Matteo Rucco']"
] |
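The simplest of the reduction strategies alluded to above can be sketched in a few lines; the paper's topology-based representativeness metric is replaced here by a crude per-class mean-drift proxy, and all data is synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = rng.integers(0, 3, size=10_000)

# Stratified random subsampling: keep 10% while preserving class proportions.
X_red, _, y_red, _ = train_test_split(X, y, train_size=0.1,
                                      stratify=y, random_state=0)

# Crude representativeness check: per-class means should track the full set.
for c in range(3):
    drift = np.linalg.norm(X[y == c].mean(0) - X_red[y_red == c].mean(0))
    print(f"class {c}: mean drift = {drift:.3f}")
```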
null | null |
2403.15167
| null | null |
http://arxiv.org/pdf/2403.15167v1
|
2024-03-22T12:37:14Z
|
2024-03-22T12:37:14Z
|
Transition Graph Properties of Target Class Classification
|
Target class classification is a mixed classification and transition model whose integrated goal is to assign objects to a certain, so-called target or normal class. The classification process is iterative, and in each step an object in a certain class undergoes an action attached to that class, initiating the transition of the object to one of the classes. The sequence of transitions, which we call class transitions, must be designed to provide the final assignment of objects to the target class. The transition process can be described in the form of a directed graph, and the success of the final classification is mainly due to the properties of this graph. In our previous research, we showed that the desirable structure of the transition graph is an oriented rooted tree with orientation towards the root vertex, which corresponds to the normal class. It is clear that the transition graph of an arbitrary algorithm (policy) may not have this property. In this paper we study the structure of realistic transition graphs, which makes it possible to find classification inconsistencies, helping to transform it into the desired form. The medical interpretation of dynamic treatment regimes considered in the article further clarifies the investigated framework.
|
[
"['Levon Aslanyan' 'Hasmik Sahakyan']"
] |
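A sketch of the structural check implied above, assuming the `networkx` package and hypothetical class labels with 0 playing the role of the target/normal class:

```python
import networkx as nx

# Class-transition graph: each edge points from a class to the class its
# attached action sends objects to; edges should flow toward the root.
G = nx.DiGraph([(3, 1), (2, 1), (1, 0), (4, 2)])

def is_rooted_in_tree(g, root):
    # An in-tree (anti-arborescence) is an arborescence of the reversed
    # graph; the root (normal class) must absorb, i.e. have no outgoing edge.
    return nx.is_arborescence(g.reverse()) and g.out_degree(root) == 0

print(is_rooted_in_tree(G, root=0))  # True: every class eventually reaches 0
```

A policy whose transition graph fails this check contains cycles or branching transitions, which is the kind of inconsistency the paper proposes to detect and repair.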
null | null |
2403.15170
| null | null |
http://arxiv.org/pdf/2403.15170v1
|
2024-03-22T12:46:58Z
|
2024-03-22T12:46:58Z
|
Exploring the Task-agnostic Trait of Self-supervised Learning in the
Context of Detecting Mental Disorders
|
Self-supervised learning (SSL) has been investigated to generate task-agnostic representations across various domains. However, such investigation has not been conducted for detecting multiple mental disorders. The rationale behind the existence of a task-agnostic representation lies in the overlapping symptoms among multiple mental disorders. Consequently, the behavioural data collected for mental health assessment may carry a mixed bag of attributes related to multiple disorders. Motivated by that, in this study, we explore a task-agnostic representation derived through SSL in the context of detecting major depressive disorder (MDD) and post-traumatic stress disorder (PTSD) using audio and video data collected during interactive sessions. This study employs SSL models trained by predicting multiple fixed targets or masked frames. We propose a list of fixed targets to make the generated representation more efficient for detecting MDD and PTSD. Furthermore, we modify the hyper-parameters of the SSL encoder predicting fixed targets to generate global representations that capture varying temporal contexts. Both these innovations are noted to yield improved detection performances for considered mental disorders and exhibit task-agnostic traits. In the context of the SSL model predicting masked frames, the generated global representations are also noted to exhibit task-agnostic traits.
|
[
"['Rohan Kumar Gupta' 'Rohit Sinha']"
] |
null | null |
2403.15180
| null | null |
http://arxiv.org/pdf/2403.15180v2
|
2024-06-11T19:48:33Z
|
2024-03-22T13:09:10Z
|
Self-Improvement for Neural Combinatorial Optimization: Sample without
Replacement, but Improvement
|
Current methods for end-to-end constructive neural combinatorial optimization usually train a policy using behavior cloning from expert solutions or policy gradient methods from reinforcement learning. While behavior cloning is straightforward, it requires expensive expert solutions, and policy gradient methods are often computationally demanding and complex to fine-tune. In this work, we bridge the two and simplify the training process by sampling multiple solutions for random instances using the current model in each epoch and then selecting the best solution as an expert trajectory for supervised imitation learning. To achieve progressively improving solutions with minimal sampling, we introduce a method that combines round-wise Stochastic Beam Search with an update strategy derived from a provable policy improvement. This strategy refines the policy between rounds by utilizing the advantage of the sampled sequences with almost no computational overhead. We evaluate our approach on the Traveling Salesman Problem and the Capacitated Vehicle Routing Problem. The models trained with our method achieve comparable performance and generalization to those trained with expert data. Additionally, we apply our method to the Job Shop Scheduling Problem using a transformer-based architecture and outperform existing state-of-the-art methods by a wide margin.
|
[
"['Jonathan Pirnay' 'Dominik G. Grimm']"
] |
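The training loop described above can be caricatured in a few dozen lines; the sketch below uses a deliberately simplistic stateless policy on a toy 5-city TSP and plain sampling in place of round-wise Stochastic Beam Search and the advantage-based update, so it illustrates only the sample-then-imitate skeleton.

```python
import torch

torch.manual_seed(0)
coords = torch.rand(5, 2)                    # toy 5-city instance
logits = torch.zeros(5, requires_grad=True)  # stateless per-city preferences
opt = torch.optim.Adam([logits], lr=0.2)

def tour_length(tour):
    pts = coords[tour]
    return (pts - pts.roll(-1, 0)).norm(dim=1).sum().item()

def sample_tour():
    tour, mask = [], torch.zeros(5, dtype=torch.bool)
    for _ in range(5):
        step = logits.masked_fill(mask, float("-inf"))
        city = torch.distributions.Categorical(logits=step).sample().item()
        tour.append(city)
        mask[city] = True
    return tour

for epoch in range(100):
    with torch.no_grad():
        tours = [sample_tour() for _ in range(16)]      # sampling phase
    best = min(tours, key=tour_length)                  # self-generated "expert"
    loss, mask = 0.0, torch.zeros(5, dtype=torch.bool)  # imitation phase
    for city in best:
        step = logits.masked_fill(mask, float("-inf"))
        loss = loss - torch.log_softmax(step, 0)[city]  # cross-entropy on expert
        mask = mask.clone()
        mask[city] = True
    opt.zero_grad(); loss.backward(); opt.step()

print("best sampled tour length:", tour_length(best))
```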
null | null |
2403.15182
| null | null |
http://arxiv.org/pdf/2403.15182v2
|
2024-04-18T08:40:58Z
|
2024-03-22T13:11:26Z
|
PDE-CNNs: Axiomatic Derivations and Applications
|
PDE-based Group Convolutional Neural Networks (PDE-G-CNNs) utilize solvers of geometrically meaningful evolution PDEs as substitutes for the conventional components in G-CNNs. PDE-G-CNNs offer several key benefits all at once: fewer parameters, inherent equivariance, better performance, data efficiency, and geometric interpretability. In this article we focus on Euclidean equivariant PDE-G-CNNs where the feature maps are two-dimensional throughout. We call this variant of the framework a PDE-CNN. From a machine learning perspective, we list several practically desirable axioms and derive from these which PDEs should be used in a PDE-CNN. Here our approach to geometric learning via PDEs is inspired by the axioms of classical linear and morphological scale-space theory, which we generalize by introducing semifield-valued signals. Furthermore, we experimentally confirm for small networks that PDE-CNNs offer fewer parameters, increased performance, and better data efficiency when compared to CNNs. We also investigate what effect the use of different semifields has on the performance of the models.
|
[
"['Gijs Bellaard' 'Sei Sakata' 'Bart M. N. Smets' 'Remco Duits']"
] |
null | null |
2403.15194
| null | null |
http://arxiv.org/pdf/2403.15194v1
|
2024-03-22T13:27:57Z
|
2024-03-22T13:27:57Z
|
Your Image is My Video: Reshaping the Receptive Field via Image-To-Video
Differentiable AutoAugmentation and Fusion
|
The landscape of deep learning research is moving towards innovative strategies to harness the true potential of data. Traditionally, emphasis has been on scaling model architectures, resulting in large and complex neural networks, which can be difficult to train with limited computational resources. However, independently of the model size, data quality (i.e. amount and variability) is still a major factor that affects model generalization. In this work, we propose a novel technique to exploit available data through the use of automatic data augmentation for the tasks of image classification and semantic segmentation. We introduce the first Differentiable Augmentation Search method (DAS) to generate variations of images that can be processed as videos. Compared to previous approaches, DAS is extremely fast and flexible, allowing search over very large search spaces in less than a GPU day. Our intuition is that the increased receptive field in the temporal dimension provided by DAS could also benefit the spatial receptive field. More specifically, we leverage DAS to guide the reshaping of the spatial receptive field by selecting task-dependent transformations. As a result, compared to standard augmentation alternatives, we improve in terms of accuracy on ImageNet, Cifar10, Cifar100, Tiny-ImageNet, Pascal-VOC-2012 and CityScapes datasets when plugging in our DAS over different lightweight video backbones.
|
[
"['Sofia Casarin' 'Cynthia I. Ugwu' 'Sergio Escalera' 'Oswald Lanz']"
] |
null | null |
2403.15195
| null | null |
http://arxiv.org/pdf/2403.15195v1
|
2024-03-22T13:31:24Z
|
2024-03-22T13:31:24Z
|
FSD-Inference: Fully Serverless Distributed Inference with Scalable
Cloud Communication
|
Serverless computing offers attractive scalability, elasticity and cost-effectiveness. However, constraints on memory, CPU and function runtime have hindered its adoption for data-intensive applications and machine learning (ML) workloads. Traditional 'server-ful' platforms enable distributed computation via fast networks and well-established inter-process communication (IPC) mechanisms such as MPI and shared memory. In the absence of such solutions in the serverless domain, parallel computation with significant IPC requirements is challenging. We present FSD-Inference, the first fully serverless and highly scalable system for distributed ML inference. We explore potential communication channels, in conjunction with Function-as-a-Service (FaaS) compute, to design a state-of-the-art solution for distributed ML within the context of serverless data-intensive computing. We introduce novel fully serverless communication schemes for ML inference workloads, leveraging both cloud-based publish-subscribe/queueing and object storage offerings. We demonstrate how publish-subscribe/queueing services can be adapted for FaaS IPC with comparable performance to object storage, while offering significantly reduced cost at high parallelism levels. We conduct in-depth experiments on benchmark DNNs of various sizes. The results show that when compared to server-based alternatives, FSD-Inference is significantly more cost-effective and scalable, and can even achieve competitive performance against optimized HPC solutions. Experiments also confirm that our serverless solution can handle large distributed workloads and leverage high degrees of FaaS parallelism.
|
[
"['Joe Oakley' 'Hakan Ferhatosmanoglu']"
] |
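As a pattern sketch (not the FSD-Inference implementation), queue-based IPC between serverless workers might look as follows with AWS SQS via `boto3`; the queue URL and message schema are hypothetical, and AWS credentials/region are assumed to be configured.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/partials"  # hypothetical

def publish_partial(layer: int, shard: int, values: list) -> None:
    # A worker ships its slice of a layer's output to its peers.
    body = json.dumps({"layer": layer, "shard": shard, "values": values})
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)

def consume_partials(expected: int) -> list:
    # A worker polls until all peer slices for the next layer have arrived.
    out = []
    while len(out) < expected:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10, WaitTimeSeconds=1)
        for m in resp.get("Messages", []):
            out.append(json.loads(m["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=m["ReceiptHandle"])
    return out
```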
null | null |
2403.15207
| null | null |
http://arxiv.org/pdf/2403.15207v1
|
2024-03-22T13:49:53Z
|
2024-03-22T13:49:53Z
|
Robust optimization for adversarial learning with finite sample
complexity guarantees
|
Decision making and learning in the presence of uncertainty have attracted significant attention in view of the increasing need to achieve robust and reliable operations. In the case where uncertainty stems from the presence of adversarial attacks, this need becomes more prominent. In this paper we focus on linear and nonlinear classification problems and propose a novel adversarial training method for robust classifiers, inspired by Support Vector Machine (SVM) margins. We view robustness under a data-driven lens, and derive finite sample complexity bounds for both linear and non-linear classifiers in binary and multi-class scenarios. Notably, our bounds match natural classifiers' complexity. Our algorithm minimizes a worst-case surrogate loss using Linear Programming (LP) and Second Order Cone Programming (SOCP) for linear and non-linear models. Numerical experiments on the benchmark MNIST and CIFAR10 datasets show our approach's comparable performance to state-of-the-art methods, without needing adversarial examples during training. Our work offers a comprehensive framework for enhancing binary linear and non-linear classifier robustness, embedding robustness in learning in the presence of adversaries.
|
[
"['André Bertolace' 'Konstatinos Gatsis' 'Kostas Margellos']"
] |
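For the linear case, the worst-case hinge loss under $\ell_\infty$-bounded perturbations admits a well-known convex reformulation (the margin shrinks by $\epsilon\|w\|_1$); here is a minimal `cvxpy` sketch on synthetic data, with $\epsilon$ and the regularizer as illustrative choices rather than the authors' exact program.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))

w, b, eps = cp.Variable(2), cp.Variable(), 0.1
# Worst case of y*(w@x+b) over ||delta||_inf <= eps loses eps*||w||_1 of margin.
margins = cp.multiply(y, X @ w + b) - eps * cp.norm1(w)
loss = cp.sum(cp.pos(1 - margins)) / len(y) + 0.01 * cp.sum_squares(w)
cp.Problem(cp.Minimize(loss)).solve()
print("w =", w.value, "b =", b.value)
```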
null | null |
2403.15210
| null | null |
http://arxiv.org/pdf/2403.15210v1
|
2024-03-22T13:52:53Z
|
2024-03-22T13:52:53Z
|
Early Period of Training Impacts Out-of-Distribution Generalization
|
Prior research has found that differences in the early period of neural network training significantly impact the performance of in-distribution (ID) tasks. However, neural networks are often sensitive to out-of-distribution (OOD) data, making them less reliable in downstream applications. Yet, the impact of the early training period on OOD generalization remains understudied due to its complexity and lack of effective analytical methodologies. In this work, we investigate the relationship between learning dynamics and OOD generalization during the early period of neural network training. We utilize the trace of Fisher Information and sharpness, with a focus on gradual unfreezing (i.e. progressively unfreezing parameters during training) as the methodology for investigation. Through a series of empirical experiments, we show that 1) selecting the number of trainable parameters at different times during training (realized by gradual unfreezing) has a minuscule impact on ID results, but greatly affects the generalization to OOD data; 2) the absolute values of sharpness and the trace of Fisher Information at the initial period of training are not indicative of OOD generalization, but the relative values could be; 3) the trace of Fisher Information and sharpness may be used as indicators for the removal of interventions during the early period of training for better OOD generalization.
|
[
"['Chen Cecilia Liu' 'Iryna Gurevych']"
] |
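Gradual unfreezing, the analysis tool used above, is easy to instrument; here is a minimal PyTorch sketch with an illustrative model and schedule (the study's actual models and schedules differ).

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 10))
blocks = [model[0], model[2], model[4]]

def set_trainable_blocks(epoch, unfreeze_every=2):
    # Start with only the head trainable; unfreeze one more block every
    # `unfreeze_every` epochs, working from the top of the network down.
    n_open = 1 + epoch // unfreeze_every
    for i, block in enumerate(reversed(blocks)):
        for p in block.parameters():
            p.requires_grad = i < n_open

for epoch in range(6):
    set_trainable_blocks(epoch)
    n = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"epoch {epoch}: {n} trainable parameters")
```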
null | null |
2403.15218
| null | null |
http://arxiv.org/pdf/2403.15218v1
|
2024-03-22T14:07:07Z
|
2024-03-22T14:07:07Z
|
Anytime, Anywhere, Anyone: Investigating the Feasibility of Segment
Anything Model for Crowd-Sourcing Medical Image Annotations
|
Curating annotations for medical image segmentation is a labor-intensive and time-consuming task that requires domain expertise, resulting in "narrowly" focused deep learning (DL) models with limited translational utility. Recently, foundation models like the Segment Anything Model (SAM) have revolutionized semantic segmentation with exceptional zero-shot generalizability across various domains, including medical imaging, and hold a lot of promise for streamlining the annotation process. However, SAM has yet to be evaluated in a crowd-sourced setting to curate annotations for training 3D DL segmentation models. In this work, we explore the potential of SAM for crowd-sourcing "sparse" annotations from non-experts to generate "dense" segmentation masks for training 3D nnU-Net models, a state-of-the-art DL segmentation model. Our results indicate that while SAM-generated annotations exhibit high mean Dice scores compared to ground-truth annotations, nnU-Net models trained on SAM-generated annotations perform significantly worse than nnU-Net models trained on ground-truth annotations ($p<0.001$, all).
|
[
"['Pranav Kulkarni' 'Adway Kanhere' 'Dharmam Savani' 'Andrew Chan'\n 'Devina Chatterjee' 'Paul H. Yi' 'Vishwa S. Parekh']"
] |
null | null |
2403.15230
| null | null |
http://arxiv.org/pdf/2403.15230v1
|
2024-03-22T14:23:21Z
|
2024-03-22T14:23:21Z
|
An Exploratory Investigation into Code License Infringements in Large
Language Model Training Datasets
|
Does the training of large language models potentially infringe upon code licenses? Furthermore, are there any datasets available that can be safely used for training these models without violating such licenses? In our study, we assess the current trends in the field and the importance of incorporating code into the training of large language models. Additionally, we examine publicly available datasets to see whether these models can be trained on them without the risk of legal issues in the future. To accomplish this, we compiled a list of 53 large language models trained on file-level code. We then extracted their datasets and analyzed how much they overlap with a dataset we created, consisting exclusively of strong copyleft code. Our analysis revealed that every dataset we examined contained license inconsistencies, despite being selected based on their associated repository licenses. We analyzed a total of 514 million code files, discovering 38 million exact duplicates present in our strong copyleft dataset. Additionally, we examined 171 million file-leading comments, identifying 16 million with strong copyleft licenses and another 11 million comments that discouraged copying without explicitly mentioning a license. Based on the findings of our study, which highlights the pervasive issue of license inconsistencies in large language models trained on code, our recommendation for both researchers and the community is to prioritize the development and adoption of best practices for dataset creation and management.
|
[
"['Jonathan Katzy' 'Răzvan-Mihai Popescu' 'Arie van Deursen'\n 'Maliheh Izadi']"
] |
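The exact-duplicate analysis described above reduces, at its core, to content fingerprinting; here is a minimal sketch with illustrative stand-in file contents (the paper's normalization and scale are far more involved).

```python
import hashlib

def fingerprint(code: str) -> str:
    # Normalize trailing whitespace so trivially reformatted copies collide.
    normalized = "\n".join(line.rstrip() for line in code.splitlines())
    return hashlib.sha256(normalized.encode()).hexdigest()

copyleft_corpus = {fingerprint("int main() { return 0; }")}
training_files = ["int main() { return 0; }", "print('hello')"]

overlap = [f for f in training_files if fingerprint(f) in copyleft_corpus]
print(f"{len(overlap)} exact duplicate(s) of strong-copyleft files found")
```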
null | null |
2403.15239
| null | null |
http://arxiv.org/pdf/2403.15239v1
|
2024-03-22T14:32:27Z
|
2024-03-22T14:32:27Z
|
Guided Decoding for Robot Motion Generation and Adaption
|
We address motion generation for high-DoF robot arms in complex settings with obstacles, via-points, etc. A significant advancement in this domain is achieved by integrating Learning from Demonstration (LfD) into the motion generation process. This integration facilitates rapid adaptation to new tasks and optimizes the utilization of accumulated expertise by allowing robots to learn and generalize from demonstrated trajectories. We train a transformer architecture on a large dataset of simulated trajectories. This architecture, based on a conditional variational autoencoder transformer, learns essential motion generation skills and adapts these to meet auxiliary tasks and constraints. Our auto-regressive approach enables real-time integration of feedback from the physical system, enhancing the adaptability and efficiency of motion generation. We show that our model can not only generate motion from initial and target points, but also adapt trajectories when navigating complex tasks, including obstacle avoidance, via-points, and velocity and acceleration constraints, across platforms.
|
[
"['Nutan Chen' 'Elie Aljalbout' 'Botond Cseke' 'Patrick van der Smagt']"
] |
null | null |
2403.15243
| null | null |
http://arxiv.org/pdf/2403.15243v1
|
2024-03-22T14:36:39Z
|
2024-03-22T14:36:39Z
|
Robust Utility Optimization via a GAN Approach
|
Robust utility optimization enables an investor to deal with market uncertainty in a structured way, with the goal of maximizing the worst-case outcome. In this work, we propose a generative adversarial network (GAN) approach to (approximately) solve robust utility optimization problems in general and realistic settings. In particular, we model both the investor and the market by neural networks (NN) and train them in a mini-max zero-sum game. This approach is applicable for any continuous utility function and in realistic market settings with trading costs, where only observable information of the market can be used. A large empirical study shows the versatile usability of our method. Whenever an optimal reference strategy is available, our method performs on par with it and in the (many) settings without known optimal strategy, our method outperforms all other reference strategies. Moreover, we can conclude from our study that the trained path-dependent strategies do not outperform Markovian ones. Lastly, we uncover that our generative approach for learning optimal (non-)robust investments under trading costs generates universally applicable alternatives to well-known asymptotic strategies of idealized settings.
|
[
"['Florian Krach' 'Josef Teichmann' 'Hanna Wutte']"
] |
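A toy version of the zero-sum training scheme above: an investor network picks a position, a market network picks a bounded adversarial drift, and the two are updated in alternation. Everything here (one asset, one period, log-utility, no trading costs) is an illustrative assumption, not the paper's market model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
investor = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Tanh())
market = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Tanh())
opt_inv = torch.optim.Adam(investor.parameters(), lr=1e-3)
opt_mkt = torch.optim.Adam(market.parameters(), lr=1e-3)

def expected_utility():
    z = torch.randn(256, 1)                       # market noise
    drift = 0.05 * market(z)                      # adversarial drift in [-5%, +5%]
    returns = drift + 0.1 * z                     # one-period risky return
    wealth = 1.0 + investor(torch.ones(256, 1)) * returns
    return torch.log(torch.clamp(wealth, min=1e-3)).mean()  # log-utility

for step in range(500):
    opt_mkt.zero_grad()
    expected_utility().backward()                 # market *minimizes* utility
    opt_mkt.step()
    opt_inv.zero_grad()
    (-expected_utility()).backward()              # investor *maximizes* utility
    opt_inv.step()

print("final expected utility:", expected_utility().item())
```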
null | null |
2403.15244
| null | null |
http://arxiv.org/pdf/2403.15244v1
|
2024-03-22T14:40:29Z
|
2024-03-22T14:40:29Z
|
A Stochastic Quasi-Newton Method for Non-convex Optimization with
Non-uniform Smoothness
|
Classical convergence analyses for optimization algorithms rely on the widely-adopted uniform smoothness assumption. However, recent experimental studies have demonstrated that many machine learning problems exhibit non-uniform smoothness, meaning the smoothness factor is a function of the model parameter instead of a universal constant. In particular, it has been observed that the smoothness grows with respect to the gradient norm along the training trajectory. Motivated by this phenomenon, the recently introduced $(L_0, L_1)$-smoothness is a more general notion, compared to traditional $L$-smoothness, that captures such a positive relationship between smoothness and gradient norm. Under this type of non-uniform smoothness, existing literature has designed stochastic first-order algorithms by utilizing gradient clipping techniques to obtain the optimal $\mathcal{O}(\epsilon^{-3})$ sample complexity for finding an $\epsilon$-approximate first-order stationary solution. Nevertheless, studies of quasi-Newton methods are still lacking. Considering the higher accuracy and greater robustness of quasi-Newton methods, in this paper we propose a fast stochastic quasi-Newton method for settings where smoothness is non-uniform. Leveraging gradient clipping and variance reduction, our algorithm achieves the best-known $\mathcal{O}(\epsilon^{-3})$ sample complexity and enjoys a convergence speedup with simple hyperparameter tuning. Our numerical experiments show that our proposed algorithm outperforms state-of-the-art approaches.
|
[
"['Zhenyu Sun' 'Ermin Wei']"
] |
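The clipping ingredient referenced above can be demonstrated on a toy objective whose smoothness grows with the gradient norm; the variance-reduced quasi-Newton direction from the paper is omitted, so this sketch shows only the clipped-step mechanics.

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
lr, max_norm = 0.1, 1.0

for _ in range(200):
    loss = torch.cosh(x).sum()    # gradient sinh(x) grows exponentially in |x|
    loss.backward()
    with torch.no_grad():
        scale = torch.clamp(max_norm / (x.grad.norm() + 1e-12), max=1.0)
        x -= lr * scale * x.grad  # step length stays bounded in steep regions
    x.grad.zero_()

print("x after clipped descent:", x.item())  # approaches the minimizer x = 0
```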
null | null |
2403.15245
| null | null |
http://arxiv.org/pdf/2403.15245v1
|
2024-03-22T14:41:55Z
|
2024-03-22T14:41:55Z
|
Reasoning-Enhanced Object-Centric Learning for Videos
|
Object-centric learning aims to break down complex visual scenes into more manageable object representations, enhancing the understanding and reasoning abilities of machine learning systems toward the physical world. Recently, slot-based video models have demonstrated remarkable proficiency in segmenting and tracking objects, but they overlook the importance of the effective reasoning module. In the real world, reasoning and predictive abilities play a crucial role in human perception and object tracking; in particular, these abilities are closely related to human intuitive physics. Inspired by this, we designed a novel reasoning module called the Slot-based Time-Space Transformer with Memory buffer (STATM) to enhance the model's perception ability in complex scenes. The memory buffer primarily serves as storage for slot information from upstream modules, while the Slot-based Time-Space Transformer makes predictions through slot-based spatiotemporal attention computations and fusion. Our experiment results on various datasets show that STATM can significantly enhance the object-centric learning capabilities of slot-based video models.
|
[
"['Jian Li' 'Pu Ren' 'Yang Liu' 'Hao Sun']"
] |
null | null |
2403.15246
| null | null |
http://arxiv.org/pdf/2403.15246v3
|
2024-05-07T14:25:15Z
|
2024-03-22T14:42:29Z
|
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow
Instructions
|
Modern Language Models (LMs) are capable of following long and complex instructions that enable a large and diverse set of user requests. While Information Retrieval (IR) models use these LMs as the backbone of their architectures, virtually none of them allow users to provide detailed instructions alongside queries, thus limiting their ability to satisfy complex information needs. In this work, we study the use of instructions in IR systems. First, we introduce our dataset FollowIR, which contains a rigorous instruction evaluation benchmark as well as a training set for helping IR models learn to better follow real-world instructions. FollowIR repurposes detailed instructions -- also known as narratives -- developed for professional assessors to evaluate retrieval systems. In particular, we build our benchmark from three collections curated for shared tasks at the Text REtrieval Conference (TREC). These collections contain hundreds to thousands of labeled documents per query, making them suitable for our exploration. Through this process, we can measure how well IR models follow instructions through a new pairwise evaluation framework. Our results indicate that existing retrieval models fail to correctly use instructions, using them for basic keywords and struggling to understand long-form information. However, we show that it is possible for IR models to learn to follow complex instructions: our new FollowIR-7B model has significant improvements after fine-tuning on our training set.
|
[
"['Orion Weller' 'Benjamin Chang' 'Sean MacAvaney' 'Kyle Lo' 'Arman Cohan'\n 'Benjamin Van Durme' 'Dawn Lawrie' 'Luca Soldaini']"
] |
null | null |
2403.15249
| null | null |
http://arxiv.org/pdf/2403.15249v1
|
2024-03-22T14:47:18Z
|
2024-03-22T14:47:18Z
|
Spectral Motion Alignment for Video Motion Transfer using Diffusion
Models
|
The evolution of diffusion models has greatly impacted video generation and understanding. Particularly, text-to-video diffusion models (VDMs) have significantly facilitated the customization of input video with target appearance, motion, etc. Despite these advances, challenges persist in accurately distilling motion information from video frames. While existing works leverage the consecutive frame residual as the target motion vector, they inherently lack global motion context and are vulnerable to frame-wise distortions. To address this, we present Spectral Motion Alignment (SMA), a novel framework that refines and aligns motion vectors using Fourier and wavelet transforms. SMA learns motion patterns by incorporating frequency-domain regularization, facilitating the learning of whole-frame global motion dynamics, and mitigating spatial artifacts. Extensive experiments demonstrate SMA's efficacy in improving motion transfer while maintaining computational efficiency and compatibility across various video customization frameworks.
|
[
"['Geon Yeong Park' 'Hyeonho Jeong' 'Sang Wan Lee' 'Jong Chul Ye']"
] |
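The frequency-domain idea above can be sketched in a few lines: compare motion vectors (frame residuals) in Fourier space so that low-frequency, global motion dominates the alignment signal. The tensors below are random stand-ins, and SMA's wavelet branch and regularization weighting are omitted.

```python
import torch

ref_motion = torch.randn(16, 64, 64)   # frame residuals of a reference clip
gen_motion = torch.randn(16, 64, 64)   # frame residuals of a generated clip

def spectral_loss(a, b):
    # 2D FFT over each frame's spatial dims; match the complex spectra.
    return (torch.fft.rfft2(a) - torch.fft.rfft2(b)).abs().mean()

print("spectral alignment loss:", spectral_loss(ref_motion, gen_motion).item())
```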
null | null |
2403.15250
| null | null |
http://arxiv.org/pdf/2403.15250v2
|
2024-06-24T07:49:25Z
|
2024-03-22T14:47:35Z
|
Comprehensive Reassessment of Large-Scale Evaluation Outcomes in LLMs: A
Multifaceted Statistical Approach
|
Amidst the rapid evolution of LLMs, the significance of evaluation in comprehending and propelling these models forward is increasingly paramount. Evaluations have revealed that factors such as scaling, training type, and architecture profoundly impact the performance of LLMs. However, the extent and nature of these impacts continue to be subjects of debate because most assessments have been restricted to a limited number of models and data points. Clarifying the effects of these factors on performance scores can be more effectively achieved through a statistical lens. Our study embarks on a thorough re-examination of these LLMs, targeting the inadequacies in current evaluation methods. With the advent of a uniform evaluation framework, our research leverages an expansive dataset of evaluation results, introducing a comprehensive statistical methodology. This includes the application of ANOVA, Tukey HSD tests, GAMM, and clustering techniques, offering a robust and transparent approach to deciphering LLM performance data. Contrary to prevailing findings, our results challenge assumptions about emergent abilities and the influence of given training types and architectures in LLMs. These findings furnish new perspectives on the characteristics, intrinsic nature, and developmental trajectories of LLMs. By providing straightforward and reliable methods to scrutinize and reassess LLM performance data, this study contributes a nuanced perspective on LLM efficiency and potential.
|
[
"['Kun Sun' 'Rong Wang' 'Anders Søgaard']"
] |
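The ANOVA-plus-Tukey portion of the workflow above is straightforward to reproduce; here is a sketch on synthetic benchmark scores for three hypothetical model families (the study itself uses a large uniform evaluation dataset, not this toy data).

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
scores = {"dense": rng.normal(62, 4, 30),     # synthetic benchmark scores
          "moe": rng.normal(65, 4, 30),
          "distilled": rng.normal(61, 4, 30)}

print(f_oneway(*scores.values()))             # is any family different at all?
values = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), 30)
print(pairwise_tukeyhsd(values, groups))      # which pairs actually differ?
```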
null | null |
2403.15263
| null | null |
http://arxiv.org/pdf/2403.15263v2
|
2024-04-04T19:09:21Z
|
2024-03-22T15:02:24Z
|
Federated Bayesian Deep Learning: The Application of Statistical
Aggregation Methods to Bayesian Models
|
Federated learning (FL) is an approach to training machine learning models that takes advantage of multiple distributed datasets while maintaining data privacy and reducing communication costs associated with sharing local datasets. Aggregation strategies have been developed to pool or fuse the weights and biases of distributed deterministic models; however, modern deterministic deep learning (DL) models are often poorly calibrated and lack the ability to communicate a measure of epistemic uncertainty in prediction, which is desirable for remote sensing platforms and safety-critical applications. Conversely, Bayesian DL models are often well calibrated and capable of quantifying and communicating a measure of epistemic uncertainty along with a competitive prediction accuracy. Unfortunately, because the weights and biases in Bayesian DL models are defined by a probability distribution, simple application of the aggregation methods associated with FL schemes for deterministic models is either impossible or results in sub-optimal performance. In this work, we use independent and identically distributed (IID) and non-IID partitions of the CIFAR-10 dataset and a fully variational ResNet-20 architecture to analyze six different aggregation strategies for Bayesian DL models. Additionally, we analyze the traditional federated averaging approach applied to an approximate Bayesian Monte Carlo dropout model as a lightweight alternative to more complex variational inference methods in FL. We show that aggregation strategy is a key hyperparameter in the design of a Bayesian FL system with downstream effects on accuracy, calibration, uncertainty quantification, training stability, and client compute requirements.
|
[
"['John Fischer' 'Marko Orescanin' 'Justin Loomis' 'Patrick McClure']"
] |
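One candidate aggregation rule for the Gaussian variational posteriors discussed above can be written as mixture-moment pooling; the numbers are illustrative, and this is one plausible strategy of the kind the paper compares, not necessarily its best performer.

```python
import numpy as np

client_means = np.array([0.12, 0.08, 0.15])  # per-client posterior means of one weight
client_vars = np.array([0.04, 0.05, 0.03])   # per-client posterior variances
client_sizes = np.array([1000, 1500, 500])   # local dataset sizes

w = client_sizes / client_sizes.sum()         # FedAvg-style weighting
global_mean = np.sum(w * client_means)
# Law of total variance: within-client plus between-client spread.
global_var = np.sum(w * (client_vars + client_means**2)) - global_mean**2
print(f"aggregated posterior: N({global_mean:.4f}, {global_var:.4f})")
```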
null | null |
2403.15267
| null | null |
http://arxiv.org/pdf/2403.15267v1
|
2024-03-22T15:06:31Z
|
2024-03-22T15:06:31Z
|
Parametric PDE Control with Deep Reinforcement Learning and
Differentiable L0-Sparse Polynomial Policies
|
Optimal control of parametric partial differential equations (PDEs) is crucial in many applications in engineering and science. In recent years, the progress in scientific machine learning has opened up new frontiers for the control of parametric PDEs. In particular, deep reinforcement learning (DRL) has the potential to solve high-dimensional and complex control problems in a large variety of applications. Most DRL methods rely on deep neural network (DNN) control policies. However, for many dynamical systems, DNN-based control policies tend to be over-parametrized, which means they need large amounts of training data, show limited robustness, and lack interpretability. In this work, we leverage dictionary learning and differentiable $L_0$ regularization to learn sparse, robust, and interpretable control policies for parametric PDEs. Our sparse policy architecture is agnostic to the DRL method and can be used in different policy-gradient and actor-critic DRL algorithms without changing their policy-optimization procedure. We test our approach on the challenging tasks of controlling parametric Kuramoto-Sivashinsky and convection-diffusion-reaction PDEs. We show that our method (1) outperforms baseline DNN-based DRL policies, (2) allows for the derivation of interpretable equations of the learned optimal control laws, and (3) generalizes to unseen parameters of the PDE without retraining the policies.
|
[
"['Nicolò Botteghi' 'Urban Fasel']"
] |
null | null |
2403.15285
| null | null |
http://arxiv.org/pdf/2403.15285v1
|
2024-03-22T15:31:37Z
|
2024-03-22T15:31:37Z
|
Blockchain-based Pseudonym Management for Vehicle Twin Migrations in
Vehicular Edge Metaverse
|
Driven by the great advances in metaverse and edge computing technologies, vehicular edge metaverses are expected to disrupt the current paradigm of intelligent transportation systems. As highly computerized avatars of Vehicular Metaverse Users (VMUs), the Vehicle Twins (VTs) deployed in edge servers can provide valuable metaverse services to improve driving safety and on-board satisfaction for their VMUs throughout journeys. To maintain uninterrupted metaverse experiences, VTs must be migrated among edge servers following the movements of vehicles. This can raise concerns about privacy breaches during the dynamic communications among vehicular edge metaverses. To address these concerns and safeguard location privacy, pseudonyms as temporary identifiers can be leveraged by both VMUs and VTs to realize anonymous communications in the physical space and virtual spaces. However, existing pseudonym management methods fall short in meeting the extensive pseudonym demands in vehicular edge metaverses, thus dramatically diminishing the performance of privacy preservation. To this end, we present a cross-metaverse empowered dual pseudonym management framework. We utilize cross-chain technology to enhance management efficiency and data security for pseudonyms. Furthermore, we propose a metric to assess the privacy level and employ a Multi-Agent Deep Reinforcement Learning (MADRL) approach to obtain an optimal pseudonym generation strategy. Numerical results demonstrate that our proposed schemes are highly efficient and cost-effective, showcasing their promising applications in vehicular edge metaverses.
|
[
"['Jiawen Kang' 'Xiaofeng Luo' 'Jiangtian Nie' 'Tianhao Wu' 'Haibo Zhou'\n 'Yonghua Wang' 'Dusit Niyato' 'Shiwen Mao' 'Shengli Xie']"
] |
null | null |
2403.15301
| null | null |
http://arxiv.org/pdf/2403.15301v2
|
2024-06-03T14:56:28Z
|
2024-03-22T15:51:39Z
|
Planning with a Learned Policy Basis to Optimally Solve Complex Tasks
|
Conventional reinforcement learning (RL) methods can successfully solve a wide range of sequential decision problems. However, learning policies that can generalize predictably across multiple tasks in a setting with non-Markovian reward specifications is a challenging problem. We propose to use successor features to learn a policy basis so that each (sub)policy in it solves a well-defined subproblem. In a task described by a finite state automaton (FSA) that involves the same set of subproblems, the combination of these (sub)policies can then be used to generate an optimal solution without additional learning. In contrast to other methods that combine (sub)policies via planning, our method asymptotically attains global optimality, even in stochastic environments.
|
[
"['Guillermo Infante' 'David Kuric' 'Anders Jonsson' 'Vicenç Gómez'\n 'Herke van Hoof']"
] |
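The zero-shot combination step behind the method above is generalized policy improvement (GPI) over successor features; here is a shape-level sketch with random numbers standing in for learned quantities.

```python
import numpy as np

n_policies, n_actions, d = 3, 4, 5
rng = np.random.default_rng(0)
psi = rng.normal(size=(n_policies, n_actions, d))  # successor features at state s
w = rng.normal(size=d)                             # task reward weights

q = psi @ w                          # (n_policies, n_actions) action values
action = q.max(axis=0).argmax()      # GPI: act greedily w.r.t. the best policy
print("GPI action:", action)
```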
null | null |
2403.15304
| null | null |
http://arxiv.org/pdf/2403.15304v2
|
2024-04-11T16:39:54Z
|
2024-03-22T15:54:30Z
|
KTbench: A Novel Data Leakage-Free Framework for Knowledge Tracing
|
Knowledge Tracing (KT) is concerned with predicting students' future performance on learning items in intelligent tutoring systems. Learning items are tagged with skill labels called knowledge concepts (KCs). Many KT models expand the sequence of item-student interactions into KC-student interactions by replacing learning items with their constituting KCs. This often results in a longer sequence length. This approach addresses the issue of sparse item-student interactions and reduces the number of model parameters. However, two problems have been identified with such models. The first problem is the model's ability to learn correlations between KCs belonging to the same item, which can result in the leakage of ground truth labels and hinder performance. This problem can lead to a significant decrease in performance on datasets with a higher number of KCs per item. The second problem is that the available benchmark implementations fail to account for changes in sequence length when expanding KCs, leading to different models being tested with varying sequence lengths but still compared against the same benchmark. To address these problems, we introduce a general masking framework that mitigates the first problem and enhances the performance of such KT models while preserving the original model architecture without significant alterations. Additionally, we introduce KTbench, an open-source benchmark library designed to ensure the reproducibility of this work while mitigating the second problem.
|
[
"['Yahya Badran' 'Christine Preisach']"
] |