| categories | doi | id | year | venue | link | updated | published | title | abstract | authors |
|---|---|---|---|---|---|---|---|---|---|---|
| string | string | string | float64 | string | string | string | string | string | string | list |
| null | null | 2403.14139 | null | null | http://arxiv.org/pdf/2403.14139v1 | 2024-03-21T05:17:22Z | 2024-03-21T05:17:22Z | Genetic Programming for Explainable Manifold Learning | Manifold learning techniques play a pivotal role in machine learning by revealing lower-dimensional embeddings within high-dimensional data, thus enhancing both the efficiency and interpretability of data analysis by transforming the data into a lower-dimensional representation. However, a notable challenge with current manifold learning methods is their lack of explicit functional mappings, crucial for explainability in many real-world applications. Genetic programming (GP), known for its interpretable functional tree-based models, has emerged as a promising approach to address this challenge. Previous research leveraged multi-objective GP to balance manifold quality against embedding dimensionality, producing functional mappings across a range of embedding sizes. Yet, these mapping trees often became complex, hindering explainability. In response, in this paper, we introduce Genetic Programming for Explainable Manifold Learning (GP-EMaL), a novel approach that directly penalises tree complexity. Our new method is able to maintain high manifold quality while significantly enhancing explainability and also allows customisation of complexity measures, such as symmetry balancing, scaling, and node complexity, catering to diverse application needs. Our experimental analysis demonstrates that GP-EMaL is able to match the performance of the existing approach in most cases, while using simpler, smaller, and more interpretable tree structures. This advancement marks a significant step towards achieving interpretable manifold learning. | ['Ben Cravens' 'Andrew Lensen' 'Paula Maddigan' 'Bing Xue'] |
| null | null | 2403.14140 | null | null | http://arxiv.org/pdf/2403.14140v1 | 2024-03-21T05:33:49Z | 2024-03-21T05:33:49Z | Learning Decomposable and Debiased Representations via Attribute-Centric Information Bottlenecks | Biased attributes, spuriously correlated with target labels in a dataset, can problematically lead to neural networks that learn improper shortcuts for classification and limit their capabilities for out-of-distribution (OOD) generalization. Although many debiasing approaches have been proposed to ensure correct predictions from biased datasets, few studies have considered learning latent embeddings consisting of intrinsic and biased attributes that contribute to improved performance and explain how the model pays attention to attributes. In this paper, we propose a novel debiasing framework, Debiasing Global Workspace, introducing attention-based information bottlenecks for learning compositional representations of attributes without defining specific bias types. Based on our observation that learning shape-centric representations helps robust performance on OOD datasets, we leverage this ability to learn robust and generalizable representations of decomposable latent embeddings corresponding to intrinsic and biasing attributes. We conduct comprehensive evaluations on biased datasets, along with both quantitative and qualitative analyses, to showcase our approach's efficacy in attribute-centric representation learning and its ability to differentiate between intrinsic and bias-related features. | ['Jinyung Hong' 'Eun Som Jeon' 'Changhoon Kim' 'Keun Hee Park' 'Utkarsh Nath' 'Yezhou Yang' 'Pavan Turaga' 'Theodore P. Pavlic'] |
| null | null | 2403.14148 | null | null | http://arxiv.org/pdf/2403.14148v1 | 2024-03-21T05:48:48Z | 2024-03-21T05:48:48Z | Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition | Video diffusion models have recently made great progress in generation quality, but are still limited by the high memory and computational requirements. This is because current video diffusion models often attempt to process high-dimensional videos directly. To tackle this issue, we propose the content-motion latent diffusion model (CMD), a novel efficient extension of pretrained image diffusion models for video generation. Specifically, we propose an autoencoder that succinctly encodes a video as a combination of a content frame (like an image) and a low-dimensional motion latent representation. The former represents the common content and the latter the underlying motion in the video. We generate the content frame by fine-tuning a pretrained image diffusion model, and we generate the motion latent representation by training a new lightweight diffusion model. A key innovation here is the design of a compact latent space that can directly utilize a pretrained image diffusion model, which has not been done in previous latent video diffusion models. This leads to considerably better quality generation and reduced computational costs. For instance, CMD can sample a video 7.7$\times$ faster than prior approaches by generating a video of 512$\times$1024 resolution and length 16 in 3.1 seconds. Moreover, CMD achieves an FVD score of 212.7 on WebVid-10M, 27.3% better than the previous state-of-the-art of 292.4. | ['Sihyun Yu' 'Weili Nie' 'De-An Huang' 'Boyi Li' 'Jinwoo Shin' 'Anima Anandkumar'] |
| null | null | 2403.14151 | null | null | http://arxiv.org/pdf/2403.14151v1 | 2024-03-21T05:57:27Z | 2024-03-21T05:57:27Z | Deep Learning for Trajectory Data Management and Mining: A Survey and Beyond | Trajectory computing is a pivotal domain encompassing trajectory data management and mining, garnering widespread attention due to its crucial role in various practical applications such as location services, urban traffic, and public safety. Traditional methods, focusing on simplistic spatio-temporal features, face challenges of complex calculations, limited scalability, and inadequate adaptability to real-world complexities. In this paper, we present a comprehensive review of the development and recent advances in deep learning for trajectory computing (DL4Traj). We first define trajectory data and provide a brief overview of widely-used deep learning models. Systematically, we explore deep learning applications in trajectory management (pre-processing, storage, analysis, and visualization) and mining (trajectory-related forecasting, trajectory-related recommendation, trajectory classification, travel time estimation, anomaly detection, and mobility generation). Notably, we encapsulate recent advancements in Large Language Models (LLMs) that hold the potential to augment trajectory computing. Additionally, we summarize application scenarios, public datasets, and toolkits. Finally, we outline current challenges in DL4Traj research and propose future directions. Relevant papers and open-source resources have been collated and are continuously updated at: \href{https://github.com/yoshall/Awesome-Trajectory-Computing}{DL4Traj Repo}. | ['Wei Chen' 'Yuxuan Liang' 'Yuanshao Zhu' 'Yanchuan Chang' 'Kang Luo' 'Haomin Wen' 'Lei Li' 'Yanwei Yu' 'Qingsong Wen' 'Chao Chen' 'Kai Zheng' 'Yunjun Gao' 'Xiaofang Zhou' 'Yu Zheng'] |
| null | null | 2403.14156 | null | null | http://arxiv.org/pdf/2403.14156v1 | 2024-03-21T06:10:51Z | 2024-03-21T06:10:51Z | Policy Mirror Descent with Lookahead | Policy Mirror Descent (PMD) stands as a versatile algorithmic framework encompassing several seminal policy gradient algorithms such as natural policy gradient, with connections to state-of-the-art reinforcement learning (RL) algorithms such as TRPO and PPO. PMD can be seen as a soft Policy Iteration algorithm implementing regularized 1-step greedy policy improvement. However, 1-step greedy policies might not be the best choice, and recent remarkable empirical successes in RL such as AlphaGo and AlphaZero have demonstrated that greedy approaches with respect to multiple steps outperform their 1-step counterpart. In this work, we propose a new class of PMD algorithms called $h$-PMD which incorporates multi-step greedy policy improvement with lookahead depth $h$ into the PMD update rule. To solve discounted infinite horizon Markov Decision Processes with discount factor $\gamma$, we show that $h$-PMD, which generalizes the standard PMD, enjoys a faster dimension-free $\gamma^h$-linear convergence rate, contingent on the computation of multi-step greedy policies. We propose an inexact version of $h$-PMD where lookahead action values are estimated. Under a generative model, we establish a sample complexity for $h$-PMD which improves over prior work. Finally, we extend our result to linear function approximation to scale to large state spaces. Under suitable assumptions, our sample complexity only involves dependence on the dimension of the feature map space instead of the state space size. | ['Kimon Protopapas' 'Anas Barakat'] |
| null | null | 2403.14183 | null | null | http://arxiv.org/pdf/2403.14183v2 | 2024-07-11T18:09:48Z | 2024-03-21T07:15:37Z | OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation | The recent success of CLIP has demonstrated promising results in zero-shot semantic segmentation by transferring multimodal knowledge to pixel-level classification. However, leveraging pre-trained CLIP knowledge to closely align text embeddings with pixel embeddings still has limitations in existing approaches. To address this issue, we propose OTSeg, a novel multimodal attention mechanism aimed at enhancing the potential of multiple text prompts for matching associated pixel embeddings. We first propose Multi-Prompts Sinkhorn (MPS) based on the Optimal Transport (OT) algorithm, which leads multiple text prompts to selectively focus on various semantic features within image pixels. Moreover, inspired by the success of Sinkformers in unimodal settings, we introduce the extension of MPS, called Multi-Prompts Sinkhorn Attention (MPSA), which effectively replaces cross-attention mechanisms within the Transformer framework in multimodal settings. Through extensive experiments, we demonstrate that OTSeg achieves state-of-the-art (SOTA) performance with significant gains on Zero-Shot Semantic Segmentation (ZS3) tasks across three benchmark datasets. | ['Kwanyoung Kim' 'Yujin Oh' 'Jong Chul Ye'] |
| null | null | 2403.14200 | null | null | http://arxiv.org/pdf/2403.14200v1 | 2024-03-21T07:50:45Z | 2024-03-21T07:50:45Z | Debiasing surgeon: fantastic weights and how to find them | An ever-growing and concerning phenomenon is emerging nowadays: algorithmic biases that can lead to unfair models. Several debiasing approaches have been proposed in the realm of deep learning, employing more or less sophisticated techniques to discourage these models from massively relying on such biases. However, a question emerges: is this extra complexity really necessary? Does a vanilla-trained model already embody some ``unbiased sub-networks'' that can be used in isolation, providing a solution without relying on the algorithmic biases? In this work, we show that such a sub-network typically exists, and can be extracted from a vanilla-trained model without requiring additional training. We further validate that such a specific architecture is incapable of learning a specific bias, suggesting that there are possible architectural countermeasures to the problem of biases in deep neural networks. | ['Rémi Nahon' 'Ivan Luiz De Moura Matos' 'Van-Tam Nguyen' 'Enzo Tartaglione'] |
| null | null | 2403.14225 | null | null | http://arxiv.org/pdf/2403.14225v1 | 2024-03-21T08:31:36Z | 2024-03-21T08:31:36Z | Posterior concentrations of fully-connected Bayesian neural networks with general priors on the weights | Bayesian approaches for training deep neural networks (BNNs) have received significant interest and have been effectively utilized in a wide range of applications. There have been several studies on the properties of posterior concentrations of BNNs. However, most of these studies only demonstrate results in BNN models with sparse or heavy-tailed priors. Surprisingly, no theoretical results currently exist for BNNs using Gaussian priors, which are the most commonly used ones. The lack of theory arises from the absence of approximation results for Deep Neural Networks (DNNs) that are non-sparse and have bounded parameters. In this paper, we present a new approximation theory for non-sparse DNNs with bounded parameters. Additionally, based on the approximation theory, we show that BNNs with non-sparse general priors can achieve near-minimax optimal posterior concentration rates to the true model. | ['Insung Kong' 'Yongdai Kim'] |
| null | null | 2403.14228 | null | null | http://arxiv.org/pdf/2403.14228v1 | 2024-03-21T08:39:13Z | 2024-03-21T08:39:13Z | Recovering Latent Confounders from High-dimensional Proxy Variables | Detecting latent confounders from proxy variables is an essential problem in causal effect estimation. Previous approaches are limited to low-dimensional proxies, sorted proxies, and binary treatments. We remove these assumptions and present a novel Proxy Confounder Factorization (PCF) framework for continuous treatment effect estimation when latent confounders manifest through high-dimensional, mixed proxy variables. For specific sample sizes, our two-step PCF implementation, using Independent Component Analysis (ICA-PCF), and the end-to-end implementation, using Gradient Descent (GD-PCF), achieve high correlation with the latent confounder and low absolute error in causal effect estimation with synthetic datasets in the high sample size regime. Even when faced with climate data, ICA-PCF recovers four components that explain $75.9\%$ of the variance in the North Atlantic Oscillation, a known confounder of precipitation patterns in Europe. Code for our PCF implementations and experiments can be found here: https://github.com/IPL-UV/confound_it. The proposed methodology constitutes a stepping stone towards discovering latent confounders and can be applied to many problems in disciplines dealing with high-dimensional observed proxies, e.g., spatiotemporal fields. | ['Nathan Mankovich' 'Homer Durand' 'Emiliano Diaz' 'Gherardo Varando' 'Gustau Camps-Valls'] |
| null | null | 2403.14232 | null | null | http://arxiv.org/pdf/2403.14232v1 | 2024-03-21T08:41:53Z | 2024-03-21T08:41:53Z | Contrastive Balancing Representation Learning for Heterogeneous Dose-Response Curves Estimation | Estimating the individuals' potential response to varying treatment doses is crucial for decision-making in areas such as precision medicine and management science. Most recent studies predict counterfactual outcomes by learning a covariate representation that is independent of the treatment variable. However, such independence constraints neglect much of the covariate information that is useful for counterfactual prediction, especially when the treatment variables are continuous. To tackle the above issue, in this paper, we first theoretically demonstrate the importance of the balancing and prognostic representations for unbiased estimation of the heterogeneous dose-response curves, that is, the learned representations are constrained to satisfy the conditional independence between the covariates and both of the treatment variables and the potential responses. Based on this, we propose a novel Contrastive balancing Representation learning Network using a partial distance measure, called CRNet, for estimating the heterogeneous dose-response curves without losing the continuity of treatments. Extensive experiments are conducted on synthetic and real-world datasets demonstrating that our proposal significantly outperforms previous methods. | ['Minqin Zhu' 'Anpeng Wu' 'Haoxuan Li' 'Ruoxuan Xiong' 'Bo Li' 'Xiaoqing Yang' 'Xuan Qin' 'Peng Zhen' 'Jiecheng Guo' 'Fei Wu' 'Kun Kuang'] |
| null | null | 2403.14233 | null | null | http://arxiv.org/pdf/2403.14233v1 | 2024-03-21T08:49:34Z | 2024-03-21T08:49:34Z | SoftPatch: Unsupervised Anomaly Detection with Noisy Data | Although mainstream unsupervised anomaly detection (AD) algorithms perform well in academic datasets, their performance is limited in practical application due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are utilized to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in the coreset. Comprehensive experiments in various noise scenes demonstrate that SoftPatch outperforms the state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods under the setting without noise. | ['Xi Jiang' 'Ying Chen' 'Qiang Nie' 'Yong Liu' 'Jianlin Liu' 'Bin-Bin Gao' 'Jun Liu' 'Chengjie Wang' 'Feng Zheng'] |
| null | null | 2403.14235 | null | null | http://arxiv.org/pdf/2403.14235v1 | 2024-03-21T08:52:39Z | 2024-03-21T08:52:39Z | RG-CAT: Detection Pipeline and Catalogue of Radio Galaxies in the EMU Pilot Survey | We present source detection and catalogue construction pipelines to build the first catalogue of radio galaxies from the 270 $\rm deg^2$ pilot survey of the Evolutionary Map of the Universe (EMU-PS) conducted with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The detection pipeline uses Gal-DINO computer-vision networks (Gupta et al., 2024) to predict the categories of radio morphology and bounding boxes for radio sources, as well as their potential infrared host positions. The Gal-DINO network is trained and evaluated on approximately 5,000 visually inspected radio galaxies and their infrared hosts, encompassing both compact and extended radio morphologies. We find that the Intersection over Union (IoU) for the predicted and ground truth bounding boxes is larger than 0.5 for 99% of the radio sources, and 98% of predicted host positions are within $3^{\prime\prime}$ of the ground truth infrared host in the evaluation set. The catalogue construction pipeline uses the predictions of the trained network on the radio and infrared image cutouts based on the catalogue of radio components identified using the Selavy source finder algorithm. Confidence scores of the predictions are then used to prioritize Selavy components with higher scores and incorporate them first into the catalogue. This results in identifications for a total of 211,625 radio sources, with 201,211 classified as compact and unresolved. The remaining 10,414 are categorized as extended radio morphologies, including 582 FR-I, 5,602 FR-II, 1,494 FR-x (uncertain whether FR-I or FR-II), 2,375 R (single-peak resolved) radio galaxies, and 361 with peculiar and other rare morphologies. We cross-match the radio sources in the catalogue with the infrared and optical catalogues, finding infrared cross-matches for 73% and photometric redshifts for 36% of the radio galaxies. | ['Nikhel Gupta' 'Ray P. Norris' 'Zeeshan Hayder' 'Minh Huynh' 'Lars Petersson' 'X. Rosalind Wang' 'Andrew M. Hopkins' 'Heinz Andernach' 'Yjan Gordon' 'Simone Riggi' 'Miranda Yew' 'Evan J. Crawford' 'Bärbel Koribalski' 'Miroslav D. Filipović' 'Anna D. Kapinśka' 'Stanislav Shabala' 'Tessa Vernstrom' 'Joshua R. Marvil'] |
| null | null | 2403.14236 | null | null | http://arxiv.org/pdf/2403.14236v2 | 2024-04-22T17:56:13Z | 2024-03-21T08:54:24Z | A Unified Framework for Model Editing | We introduce a unifying framework that brings two leading "locate-and-edit" model editing techniques -- ROME and MEMIT -- under a single conceptual umbrella, optimizing for the same goal, which we call the preservation-memorization objective. ROME uses an equality constraint to perform one edit at a time, whereas MEMIT employs a more flexible least-squares constraint that allows for batched edits. Following the preservation-memorization objective, we present the Equality-constrained Mass Model Editing algorithm for Transformers, or EMMET, a new batched memory-editing algorithm that uses a closed-form solution for the equality-constrained version of the preservation-memorization objective. EMMET is a batched version of ROME and is able to perform batched edits up to a batch size of 10,000 with very similar performance to MEMIT across multiple dimensions. With EMMET, we unify and achieve symmetry within the "locate-and-edit" algorithms, allowing batched editing using both objectives. | ['Akshat Gupta' 'Dev Sajnani' 'Gopala Anumanchipalli'] |
| null | null | 2403.14244 | null | null | http://arxiv.org/pdf/2403.14244v1 | 2024-03-21T09:02:31Z | 2024-03-21T09:02:31Z | Isotropic Gaussian Splatting for Real-Time Radiance Field Rendering | The 3D Gaussian splatting method has drawn a lot of attention, thanks to its high performance in training and high quality of the rendered image. However, it uses anisotropic Gaussian kernels to represent the scene. Although such anisotropic kernels have advantages in representing the geometry, they lead to difficulties in terms of computation, such as splitting or merging two kernels. In this paper, we propose to use isotropic Gaussian kernels to avoid such difficulties in the computation, leading to a higher-performance method. The experiments confirm that the proposed method is about \textbf{100X} faster without losing the geometry representation accuracy. The proposed method can be applied in a wide range of applications where the radiance field is needed, such as 3D reconstruction, view synthesis, and dynamic object modeling. | ['Yuanhao Gong' 'Lantao Yu' 'Guanghui Yue'] |
| null | null | 2403.14252 | null | null | http://arxiv.org/pdf/2403.14252v1 | 2024-03-21T09:25:24Z | 2024-03-21T09:25:24Z | LayoutLLM: Large Language Model Instruction Tuning for Visually Rich Document Understanding | This paper proposes LayoutLLM, a more flexible document analysis method for understanding imaged documents. Visually Rich Document Understanding tasks, such as document image classification and information extraction, have gained significant attention due to their importance. Existing methods have been developed to enhance document comprehension by incorporating pre-training awareness of images, text, and layout structure. However, these methods require fine-tuning for each task and dataset, and the models are expensive to train and operate. To overcome this limitation, we propose a new LayoutLLM that integrates these techniques with large-scale language models (LLMs). By leveraging the strengths of existing research in document image understanding and LLMs' superior language understanding capabilities, the proposed model, fine-tuned with multimodal instruction datasets, performs understanding of document images in a single model. Our experiments demonstrate improvement over the baseline model in various document analysis tasks. | ['Masato Fujitake'] |
| null | null | 2403.14255 | null | null | http://arxiv.org/pdf/2403.14255v1 | 2024-03-21T09:28:38Z | 2024-03-21T09:28:38Z | ERD: A Framework for Improving LLM Reasoning for Cognitive Distortion Classification | Improving the accessibility of psychotherapy with the aid of Large Language Models (LLMs) is garnering significant attention in recent years. Recognizing cognitive distortions from the interviewee's utterances can be an essential part of psychotherapy, especially for cognitive behavioral therapy. In this paper, we propose ERD, which improves LLM-based cognitive distortion classification performance with the aid of additional modules of (1) extracting the parts related to cognitive distortion, and (2) debating the reasoning steps by multiple agents. Our experimental results on a public dataset show that ERD improves the multi-class F1 score as well as the binary specificity score. Regarding the latter score, it turns out that our method is effective in debiasing the baseline method, which has a high false positive rate, especially when the summary of the multi-agent debate is provided to LLMs. | ['Sehee Lim' 'Yejin Kim' 'Chi-Hyun Choi' 'Jy-yong Sohn' 'Byung-Hoon Kim'] |
| null | null | 2403.14262 | null | null | http://arxiv.org/pdf/2403.14262v1 | 2024-03-21T09:50:39Z | 2024-03-21T09:50:39Z | Diffusion Models with Ensembled Structure-Based Anomaly Scoring for Unsupervised Anomaly Detection | Supervised deep learning techniques show promise in medical image analysis. However, they require comprehensive annotated data sets, which poses challenges, particularly for rare diseases. Consequently, unsupervised anomaly detection (UAD) emerges as a viable alternative for pathology segmentation, as only healthy data is required for training. However, recent UAD anomaly scoring functions often focus on intensity only and neglect structural differences, which impedes the segmentation performance. This work investigates the potential of Structural Similarity (SSIM) to bridge this gap. SSIM captures both intensity and structural disparities and can be advantageous over the classical $l_1$ error. However, we show that no single kernel size for the SSIM calculation is optimal across different pathologies. Therefore, we investigate an adaptive ensembling strategy for various kernel sizes to offer a more pathology-agnostic scoring mechanism. We demonstrate that this ensembling strategy can enhance the performance of diffusion models (DMs) and mitigate the sensitivity to different kernel sizes across varying pathologies, highlighting its promise for brain MRI anomaly detection. | ['Finn Behrendt' 'Debayan Bhattacharya' 'Lennart Maack' 'Julia Krüger' 'Roland Opfer' 'Robin Mieling' 'Alexander Schlaefer'] |
| null | null | 2403.14270 | null | null | http://arxiv.org/pdf/2403.14270v1 | 2024-03-21T10:15:57Z | 2024-03-21T10:15:57Z | Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection | Visual relationship detection aims to identify objects and their relationships in images. Prior methods approach this task by adding separate relationship modules or decoders to existing object detection architectures. This separation increases complexity and hinders end-to-end training, which limits performance. We propose a simple and highly efficient decoder-free architecture for open-vocabulary visual relationship detection. Our model consists of a Transformer-based image encoder that represents objects as tokens and models their relationships implicitly. To extract relationship information, we introduce an attention mechanism that selects object pairs likely to form a relationship. We provide a single-stage recipe to train this model on a mixture of object and relationship detection data. Our approach achieves state-of-the-art relationship detection performance on Visual Genome and on the large-vocabulary GQA benchmark at real-time inference speeds. We provide analyses of zero-shot performance, ablations, and real-world qualitative examples. | ['Tim Salzmann' 'Markus Ryll' 'Alex Bewley' 'Matthias Minderer'] |
| null | null | 2403.14282 | null | null | http://arxiv.org/abs/2403.14282v1 | 2024-03-21T10:43:55Z | 2024-03-21T10:43:55Z | How to be fair? A study of label and selection bias | It is widely accepted that biased data leads to biased and thus potentially unfair models. Therefore, several measures for bias in data and model predictions have been proposed, as well as bias mitigation techniques whose aim is to learn models that are fair by design. Despite the myriad of mitigation techniques developed in the past decade, however, it is still poorly understood under what circumstances which methods work. Recently, Wick et al. showed, with experiments on synthetic data, that there exist situations in which bias mitigation techniques lead to more accurate models when measured on unbiased data. Nevertheless, in the absence of a thorough mathematical analysis, it remains unclear which techniques are effective under what circumstances. We propose to address this problem by establishing relationships between the type of bias and the effectiveness of a mitigation technique, where we categorize the mitigation techniques by the bias measure they optimize. In this paper we illustrate this principle for label and selection bias on the one hand, and demographic parity and ``We're All Equal'' on the other hand. Our theoretical analysis allows us to explain the results of Wick et al., and we also show that there are situations where minimizing fairness measures does not result in the fairest possible distribution. | ['Marco Favier' 'Toon Calders' 'Sam Pinxteren' 'Jonathan Meyer'] |
| null | null | 2403.14286 | null | null | http://arxiv.org/pdf/2403.14286v1 | 2024-03-21T10:49:54Z | 2024-03-21T10:49:54Z | Assessing the Robustness of Spectral Clustering for Deep Speaker Diarization | Clustering speaker embeddings is crucial in speaker diarization but hasn't received as much focus as other components. Moreover, the robustness of speaker diarization across various datasets hasn't been explored when the development and evaluation data are from different domains. To bridge this gap, this study thoroughly examines spectral clustering for both same-domain and cross-domain speaker diarization. Our extensive experiments on two widely used corpora, AMI and DIHARD, reveal the performance trend of speaker diarization in the presence of domain mismatch. We observe that the performance difference between two different domain conditions can be attributed to the role of spectral clustering. In particular, keeping other modules unchanged, we show that differences in optimal tuning parameters as well as in speaker count estimation originate from the mismatch. This study opens several future directions for speaker diarization research. | ['Nikhil Raghav' 'Md Sahidullah'] |
| null | null | 2403.14290 | null | null | http://arxiv.org/pdf/2403.14290v1 | 2024-03-21T10:54:21Z | 2024-03-21T10:54:21Z | Exploring Green AI for Audio Deepfake Detection | The state-of-the-art audio deepfake detectors leveraging deep neural networks exhibit impressive recognition performance. Nonetheless, this advantage is accompanied by a significant carbon footprint. This is mainly due to the use of high-performance computing with accelerators and high training time. Studies show that an average deep NLP model produces around 626k lbs of CO\textsubscript{2}, equivalent to five times the lifetime emissions of an average US car. This is certainly a massive threat to the environment. To tackle this challenge, this study presents a novel framework for audio deepfake detection that can be seamlessly trained using standard CPU resources. Our proposed framework utilizes off-the-shelf self-supervised learning (SSL) based models which are pre-trained and available in public repositories. In contrast to existing methods that fine-tune SSL models and employ additional deep neural networks for downstream tasks, we exploit classical machine learning algorithms such as logistic regression and shallow neural networks on the SSL embeddings extracted with the pre-trained model. Our approach shows competitive results compared to the commonly used high-carbon-footprint approaches. In experiments with the ASVspoof 2019 LA dataset, we achieve a 0.90% equal error rate (EER) with less than 1k trainable model parameters. To encourage further research in this direction and support reproducible results, the Python code will be made publicly accessible following acceptance. Github: https://github.com/sahasubhajit/Speech-Spoofing- | ['Subhajit Saha' 'Md Sahidullah' 'Swagatam Das'] |
| null | null | 2403.14297 | null | null | http://arxiv.org/pdf/2403.14297v2 | 2024-05-13T09:29:28Z | 2024-03-21T11:03:56Z | Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications | Earth observation (EO) applications involving complex and heterogeneous data sources are commonly approached with machine learning models. However, there is a common assumption that data sources will be persistently available. Different situations could affect the availability of EO sources, like noise, clouds, or satellite mission failures. In this work, we assess the impact of missing temporal and static EO sources in trained models across four datasets with classification and regression tasks. We compare the predictive quality of different methods and find that some are naturally more robust to missing data. The Ensemble strategy, in particular, achieves prediction robustness of up to 100%. We find that missing scenarios are significantly more challenging in regression than classification tasks. Finally, we find that the optical view is the most critical view when it is missing individually. | ['Francisco Mena' 'Diego Arenas' 'Marcela Charfuelan' 'Marlon Nuske' 'Andreas Dengel'] |
| null | null | 2403.14302 | null | null | http://arxiv.org/pdf/2403.14302v2 | 2024-03-28T05:13:43Z | 2024-03-21T11:16:42Z | SpikingResformer: Bridging ResNet and Vision Transformer in Spiking Neural Networks | The remarkable success of Vision Transformers in Artificial Neural Networks (ANNs) has led to a growing interest in incorporating the self-attention mechanism and transformer-based architecture into Spiking Neural Networks (SNNs). While existing methods propose spiking self-attention mechanisms that are compatible with SNNs, they lack reasonable scaling methods, and the overall architectures proposed by these methods suffer from a bottleneck in effectively extracting local features. To address these challenges, we propose a novel spiking self-attention mechanism named Dual Spike Self-Attention (DSSA) with a reasonable scaling method. Based on DSSA, we propose a novel spiking Vision Transformer architecture called SpikingResformer, which combines the ResNet-based multi-stage architecture with our proposed DSSA to improve both performance and energy efficiency while reducing parameters. Experimental results show that SpikingResformer achieves higher accuracy with fewer parameters and lower energy consumption than other spiking Vision Transformer counterparts. Notably, our SpikingResformer-L achieves 79.40% top-1 accuracy on ImageNet with 4 time-steps, which is the state-of-the-art result in the SNN field. | ['Xinyu Shi' 'Zecheng Hao' 'Zhaofei Yu'] |
| null | null | 2403.14324 | null | null | http://arxiv.org/pdf/2403.14324v1 | 2024-03-21T11:44:25Z | 2024-03-21T11:44:25Z | Neural Network-Based Processing and Reconstruction of Compromised Biophotonic Image Data | The integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of cost, speed, and form-factor, followed by compensating for the resulting defects through the utilization of deep learning models trained on a large amount of ideal, superior or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. This approach also offers the prospect of simplifying hardware requirements/complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function, signal-to-noise ratio, sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim to not only recuperate them through the application of deep learning networks, but also bolster in return other crucial parameters, such as the field-of-view, depth-of-field, and space-bandwidth product. Here, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span broad applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the future possibilities of this rapidly evolving concept, we hope to motivate our readers to explore novel ways of balancing hardware compromises with compensation via AI. | ['Michael John Fanous' 'Paloma Casteleiro Costa' 'Cagatay Isil' 'Luzhe Huang' 'Aydogan Ozcan'] |
| null | null | 2403.14327 | null | null | http://arxiv.org/pdf/2403.14327v1 | 2024-03-21T11:51:42Z | 2024-03-21T11:51:42Z | Investigating the validity of structure learning algorithms in identifying risk factors for intervention in patients with diabetes | Diabetes, a pervasive and enduring health challenge, has significant global implications for health, healthcare financing, and societal well-being. This study undertakes a comprehensive exploration of various structure learning algorithms to discern causal pathways amongst potential risk factors influencing diabetes progression. The methodology involves the application of these algorithms to relevant diabetes data, followed by the conversion of their output graphs into Causal Bayesian Networks (CBNs), enabling predictive analysis and the evaluation of discrepancies in the effect of hypothetical interventions within our context-specific case study. This study highlights the substantial impact of algorithm selection on intervention outcomes. To consolidate insights from diverse algorithms, we employ a model-averaging technique that helps us obtain a unique causal model for diabetes derived from a varied set of structure learning algorithms. We also investigate how each of those individual graphs, as well as the average graph, compare to the structures elicited by a domain expert who categorised graph edges into high confidence, moderate, and low confidence types, leading to three individual graphs corresponding to the three levels of confidence. The resulting causal model and data are made available online, and serve as a valuable resource and a guide for informed decision-making by healthcare practitioners, offering a comprehensive understanding of the interactions between relevant risk factors and the effect of hypothetical interventions. Therefore, this research not only contributes to the academic discussion on diabetes, but also provides practical guidance for healthcare professionals in developing efficient intervention and risk management strategies. | ['Sheresh Zahoor' 'Anthony C. Constantinou' 'Tim M Curtis' 'Mohammed Hasanuzzaman'] |
| null | null | 2403.14328 | null | null | http://arxiv.org/pdf/2403.14328v1 | 2024-03-21T11:54:45Z | 2024-03-21T11:54:45Z | Distilling Reinforcement Learning Policies for Interpretable Robot Locomotion: Gradient Boosting Machines and Symbolic Regression | Recent advancements in reinforcement learning (RL) have led to remarkable achievements in robot locomotion capabilities. However, the complexity and ``black-box'' nature of neural network-based RL policies hinder their interpretability and broader acceptance, particularly in applications demanding high levels of safety and reliability. This paper introduces a novel approach to distill neural RL policies into more interpretable forms using Gradient Boosting Machines (GBMs), Explainable Boosting Machines (EBMs) and Symbolic Regression. By leveraging the inherent interpretability of generalized additive models, decision trees, and analytical expressions, we transform opaque neural network policies into more transparent ``glass-box'' models. We train expert neural network policies using RL and subsequently distill them into (i) GBMs, (ii) EBMs, and (iii) symbolic policies. To address the inherent distribution shift challenge of behavioral cloning, we propose to use the Dataset Aggregation (DAgger) algorithm with a curriculum of episode-dependent alternation of actions between expert and distilled policies, to enable efficient distillation of feedback control policies. We evaluate our approach on various robot locomotion gaits -- walking, trotting, bounding, and pacing -- and study the importance of different observations in joint actions for distilled policies using various methods. We train neural expert policies for 205 hours of simulated experience and distill interpretable policies with only 10 minutes of simulated interaction for each gait using the proposed method. | ['Fernando Acero' 'Zhibin Li'] |
| null | null | 2403.14332 | null | null | http://arxiv.org/pdf/2403.14332v1 | 2024-03-21T11:57:16Z | 2024-03-21T11:57:16Z | A Differentially Private Clustering Algorithm for Well-Clustered Graphs | We study differentially private (DP) algorithms for recovering clusters in well-clustered graphs, which are graphs whose vertex set can be partitioned into a small number of sets, each inducing a subgraph of high inner conductance and small outer conductance. Such graphs have widespread application as a benchmark in the theoretical analysis of spectral clustering. We provide an efficient ($\epsilon$,$\delta$)-DP algorithm tailored specifically for such graphs. Our algorithm draws inspiration from the recent work of Chen et al., who developed DP algorithms for recovery of stochastic block models in cases where the graph comprises exactly two nearly-balanced clusters. Our algorithm works for well-clustered graphs with $k$ nearly-balanced clusters, and the misclassification ratio almost matches the one of the best-known non-private algorithms. We conduct experimental evaluations on datasets with known ground truth clusters to substantiate the prowess of our algorithm. We also show that any (pure) $\epsilon$-DP algorithm would result in substantial error. | ['Weiqiang He' 'Hendrik Fichtenberger' 'Pan Peng'] |
| null | null | 2403.14339 | null | null | http://arxiv.org/pdf/2403.14339v1 | 2024-03-21T12:11:26Z | 2024-03-21T12:11:26Z | $\nabla\tau$: Gradient-based and Task-Agnostic machine Unlearning | Machine Unlearning, the process of selectively eliminating the influence of certain data examples used during a model's training, has gained significant attention as a means for practitioners to comply with recent data protection regulations. However, existing unlearning methods face critical drawbacks, including their prohibitively high cost, often associated with a large number of hyperparameters, and the limitation of forgetting only relatively small data portions. This often makes retraining the model from scratch a quicker and more effective solution. In this study, we introduce Gradient-based and Task-Agnostic machine Unlearning ($\nabla\tau$), an optimization framework designed to remove the influence of a subset of training data efficiently. It applies adaptive gradient ascent to the data to be forgotten while using standard gradient descent for the remaining data. $\nabla\tau$ offers multiple benefits over existing approaches. It enables the unlearning of large sections of the training dataset (up to 30%). It is versatile, supporting various unlearning tasks (such as subset forgetting or class removal) and applicable across different domains (images, text, etc.). Importantly, $\nabla\tau$ requires no hyperparameter adjustments, making it a more appealing option than retraining the model from scratch. We evaluate our framework's effectiveness using a set of well-established Membership Inference Attack metrics, demonstrating up to 10% enhancements in performance compared to state-of-the-art methods without compromising the original model's accuracy. | ['Daniel Trippa' 'Cesare Campagnano' 'Maria Sofia Bucarelli' 'Gabriele Tolomei' 'Fabrizio Silvestri'] |
| null | null | 2403.14340 | null | null | http://arxiv.org/pdf/2403.14340v1 | 2024-03-21T12:14:02Z | 2024-03-21T12:14:02Z | Exploring Task Unification in Graph Representation Learning via Generative Approach | Graphs are ubiquitous in real-world scenarios and encompass a diverse range of tasks, from node-, edge-, and graph-level tasks to transfer learning. However, designing specific tasks for each type of graph data is often costly and lacks generalizability. Recent endeavors under the "Pre-training + Fine-tuning" or "Pre-training + Prompt" paradigms aim to design a unified framework capable of generalizing across multiple graph tasks. Among these, graph autoencoders (GAEs), generative self-supervised models, have demonstrated their potential in effectively addressing various graph tasks. Nevertheless, these methods typically employ multi-stage training and require adaptive designs, which on the one hand makes it difficult to apply them seamlessly to diverse graph tasks and on the other hand overlooks the negative impact caused by discrepancies in task objectives between the different stages. To address these challenges, we propose GA$^2$E, a unified adversarially masked autoencoder capable of addressing the above challenges seamlessly. Specifically, GA$^2$E proposes to use the subgraph as the meta-structure, which remains consistent across all graph tasks (ranging from node-, edge-, and graph-level to transfer learning) and all stages (both during training and inference). Further, GA$^2$E operates in a \textbf{``Generate then Discriminate''} manner. It leverages the masked GAE to reconstruct the input subgraph whilst treating it as a generator to compel the reconstructed graphs to resemble the input subgraph. Furthermore, GA$^2$E introduces an auxiliary discriminator to discern the authenticity between the reconstructed (generated) subgraph and the input subgraph, thus ensuring the robustness of the graph representation through adversarial training mechanisms. We validate GA$^2$E's capabilities through extensive experiments on 21 datasets across four types of graph tasks. | ['Yulan Hu' 'Sheng Ouyang' 'Zhirui Yang' 'Ge Chen' 'Junchen Wan' 'Xiao Wang' 'Yong Liu'] |
| null | null | 2403.14353 | null | null | http://arxiv.org/pdf/2403.14353v2 | 2024-04-28T09:25:44Z | 2024-03-21T12:28:44Z | DaCapo: Accelerating Continuous Learning in Autonomous Systems for Video Analytics | Deep neural network (DNN) video analytics is crucial for autonomous systems such as self-driving vehicles, unmanned aerial vehicles (UAVs), and security robots. However, real-world deployment faces challenges due to their limited computational resources and battery power. To tackle these challenges, continuous learning exploits a lightweight "student" model at deployment (inference), leverages a larger "teacher" model for labeling sampled data (labeling), and continuously retrains the student model to adapt to changing scenarios (retraining). This paper highlights the limitations in state-of-the-art continuous learning systems: (1) they focus on computations for retraining, while overlooking the compute needs for inference and labeling, (2) they rely on power-hungry GPUs, unsuitable for battery-operated autonomous systems, and (3) they are located on a remote centralized server, intended for multi-tenant scenarios, again unsuitable for autonomous systems due to privacy, network availability, and latency concerns. We propose a hardware-algorithm co-designed solution for continuous learning, DaCapo, that enables autonomous systems to perform concurrent executions of inference, labeling, and training in a performant and energy-efficient manner. DaCapo comprises (1) a spatially-partitionable and precision-flexible accelerator enabling parallel execution of kernels on sub-accelerators at their respective precisions, and (2) a spatiotemporal resource allocation algorithm that strategically navigates the resource-accuracy tradeoff space, facilitating optimal decisions for resource allocation to achieve maximal accuracy. Our evaluation shows that DaCapo achieves 6.5% and 5.5% higher accuracy than the state-of-the-art GPU-based continuous learning systems Ekya and EOMU, respectively, while consuming 254x less power. | ['Yoonsung Kim' 'Changhun Oh' 'Jinwoo Hwang' 'Wonung Kim' 'Seongryong Oh' 'Yubin Lee' 'Hardik Sharma' 'Amir Yazdanbakhsh' 'Jongse Park'] |
| null | null | 2403.14356 | null | null | http://arxiv.org/pdf/2403.14356v1 | 2024-03-21T12:35:46Z | 2024-03-21T12:35:46Z | DomainLab: A modular Python package for domain generalization in deep learning | Poor generalization performance caused by distribution shifts in unseen domains often hinders the trustworthy deployment of deep neural networks. Many domain generalization techniques address this problem by adding domain-invariant regularization loss terms during training. However, there is a lack of modular software that allows users to combine the advantages of different methods with minimal effort for reproducibility. DomainLab is a modular Python package for training user-specified neural networks with composable regularization loss terms. Its decoupled design allows the separation of neural networks from regularization loss construction. Hierarchical combinations of neural networks, different domain generalization methods, and associated hyperparameters can all be specified together with the rest of the experimental setup in a single configuration file. In addition, DomainLab offers powerful benchmarking functionality to evaluate the generalization performance of neural networks on out-of-distribution data. The package supports running the specified benchmark on an HPC cluster or on a standalone machine. The package is well tested with over 95 percent coverage and well documented. From the user perspective, it is closed to modification but open to extension. The package is under the MIT license, and its source code, tutorial and documentation can be found at https://github.com/marrlab/DomainLab. | ['Xudong Sun' 'Carla Feistner' 'Alexej Gossmann' 'George Schwarz' 'Rao Muhammad Umer' 'Lisa Beer' 'Patrick Rockenschaub' 'Rahul Babu Shrestha' 'Armin Gruber' 'Nutan Chen' 'Sayedali Shetab Boushehri' 'Florian Buettner' 'Carsten Marr'] |
| null | null | 2403.14358 | null | null | http://arxiv.org/pdf/2403.14358v1 | 2024-03-21T12:37:54Z | 2024-03-21T12:37:54Z | Exploring the Potential of Large Language Models in Graph Generation | Large language models (LLMs) have achieved great success in many fields, and recent works have studied exploring LLMs for graph discriminative tasks such as node classification. However, the abilities of LLMs for graph generation remain unexplored in the literature. Graph generation requires the LLM to generate graphs with given properties, which has valuable real-world applications such as drug discovery, while tending to be more challenging. In this paper, we propose LLM4GraphGen to explore the ability of LLMs for graph generation with systematic task designs and extensive experiments. Specifically, we propose several tasks tailored with comprehensive experiments to address key questions regarding LLMs' understanding of different graph structure rules, their ability to capture structural type distributions, and their utilization of domain knowledge for property-based graph generation. Our evaluations demonstrate that LLMs, particularly GPT-4, exhibit preliminary abilities in graph generation tasks, including rule-based and distribution-based generation. We also observe that popular prompting methods, such as few-shot and chain-of-thought prompting, do not consistently enhance performance. Besides, LLMs show potential in generating molecules with specific properties. These findings may serve as foundations for designing good LLM-based models for graph generation and provide valuable insights for further research. | ['Yang Yao' 'Xin Wang' 'Zeyang Zhang' 'Yijian Qin' 'Ziwei Zhang' 'Xu Chu' 'Yuekui Yang' 'Wenwu Zhu' 'Hong Mei'] |
| null | null | 2403.14359 | null | null | http://arxiv.org/pdf/2403.14359v1 | 2024-03-21T12:40:41Z | 2024-03-21T12:40:41Z | Varroa destructor detection on honey bees using hyperspectral imagery | Hyperspectral (HS) imagery in agriculture is becoming increasingly common. These images have the advantage of higher spectral resolution. Advanced spectral processing techniques are required to unlock the information potential in these HS images. The present paper introduces a method rooted in multivariate statistics designed to detect parasitic Varroa destructor mites on the body of the western honey bee Apis mellifera, enabling easier and continuous monitoring of the bee hives. The methodology explores unsupervised (K-means++) and recently developed supervised (Kernel Flows - Partial Least-Squares, KF-PLS) methods for parasitic identification. Additionally, in light of the emergence of custom-band multispectral cameras, the present research outlines a strategy for identifying the specific wavelengths necessary for effective bee-mite separation, suitable for implementation in a custom-band camera. Illustrated with a real-case dataset, our findings demonstrate that as few as four spectral bands are sufficient for accurate parasite identification. | ['Zina-Sabrina Duma' 'Tomas Zemcik' 'Simon Bilik' 'Tuomas Sihvonen' 'Peter Honec' 'Satu-Pia Reinikainen' 'Karel Horak'] |
| null | null | 2403.14371 | null | null | http://arxiv.org/pdf/2403.14371v1 | 2024-03-21T12:59:24Z | 2024-03-21T12:59:24Z | Loop Improvement: An Efficient Approach for Extracting Shared Features from Heterogeneous Data without Central Server | In federated learning, data heterogeneity significantly impacts performance. A typical solution involves segregating model parameters into shared and personalized components, a concept also relevant in multi-task learning. Addressing this, we propose "Loop Improvement" (LI), a novel method enhancing this separation and feature extraction without necessitating a central server or data interchange among participants. Our experiments reveal LI's superiority in several aspects: In personalized federated learning environments, LI consistently outperforms the advanced FedALA algorithm in accuracy across diverse scenarios. Additionally, LI's feature extractor closely matches the performance achieved when aggregating data from all clients. In global model contexts, employing LI with stacked personalized layers and an additional network also yields comparable results to combined client data scenarios. Furthermore, LI's adaptability extends to multi-task learning, streamlining the extraction of common features across tasks and obviating the need for simultaneous training. This approach not only enhances individual task performance but also achieves accuracy levels on par with classic multi-task learning methods where all tasks are trained simultaneously. LI integrates a loop topology with layer-wise and end-to-end training, compatible with various neural network models. This paper also delves into the theoretical underpinnings of LI's effectiveness, offering insights into its potential applications. The code is available at https://github.com/axedge1983/LI | ['Fei Li' 'Chu Kiong Loo' 'Wei Shiung Liew' 'Xiaofeng Liu'] |
null | null |
2403.14377
| null | null |
http://arxiv.org/pdf/2403.14377v1
|
2024-03-21T13:09:23Z
|
2024-03-21T13:09:23Z
|
Knowledge-Enhanced Recommendation with User-Centric Subgraph Network
|
Recommendation systems, as widely implemented nowadays on various platforms, recommend relevant items to users based on their preferences. The classical methods which rely on user-item interaction matrices has limitations, especially in scenarios where there is a lack of interaction data for new items. Knowledge graph (KG)-based recommendation systems have emerged as a promising solution. However, most KG-based methods adopt node embeddings, which do not provide personalized recommendations for different users and cannot generalize well to the new items. To address these limitations, we propose Knowledge-enhanced User-Centric subgraph Network (KUCNet), a subgraph learning approach with graph neural network (GNN) for effective recommendation. KUCNet constructs a U-I subgraph for each user-item pair that captures both the historical information of user-item interactions and the side information provided in KG. An attention-based GNN is designed to encode the U-I subgraphs for recommendation. Considering efficiency, the pruned user-centric computation graph is further introduced such that multiple U-I subgraphs can be simultaneously computed and that the size can be pruned by Personalized PageRank. Our proposed method achieves accurate, efficient, and interpretable recommendations especially for new items. Experimental results demonstrate the superiority of KUCNet over state-of-the-art KG-based and collaborative filtering (CF)-based methods.
|
[
"['Guangyi Liu' 'Quanming Yao' 'Yongqi Zhang' 'Lei Chen']"
] |
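To make KUCNet's pruning step concrete, the sketch below runs Personalized PageRank by power iteration on a toy interaction graph and keeps the highest-scoring nodes; the graph, restart probability `alpha`, and top-k budget are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: Personalized PageRank (PPR) pruning as the KUCNet abstract
# describes it; all numbers here are toy assumptions.
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=50):
    """Power iteration for PPR scores restarting at `seed`."""
    n = adj.shape[0]
    deg = adj.sum(axis=0)
    trans = adj / np.maximum(deg, 1)       # column-stochastic transition matrix
    restart = np.zeros(n)
    restart[seed] = 1.0
    scores = restart.copy()
    for _ in range(iters):
        scores = alpha * restart + (1 - alpha) * trans @ scores
    return scores

# Toy 6-node interaction graph; node 0 is the user of interest.
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 1, 0, 0],
                [1, 1, 0, 0, 1, 0],
                [0, 1, 0, 0, 1, 1],
                [0, 0, 1, 1, 0, 0],
                [0, 0, 0, 1, 0, 0]], dtype=float)
scores = personalized_pagerank(adj, seed=0)
top_k = np.argsort(scores)[::-1][:4]       # prune the subgraph to the 4 highest-PPR nodes
print("PPR scores:", np.round(scores, 3), "kept nodes:", top_k)
```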
null | null |
2403.14379
| null | null |
http://arxiv.org/pdf/2403.14379v1
|
2024-03-21T13:12:33Z
|
2024-03-21T13:12:33Z
|
Tensor network compressibility of convolutional models
|
Convolutional neural networks (CNNs) represent one of the most widely used neural network architectures, showcasing state-of-the-art performance in computer vision tasks. Although larger CNNs generally exhibit higher accuracy, their size can be effectively reduced by "tensorization" while maintaining accuracy. Tensorization consists of replacing the convolution kernels with compact decompositions such as Tucker, Canonical Polyadic decompositions, or quantum-inspired decompositions such as matrix product states, and directly training the factors in the decompositions to bias the learning towards low-rank decompositions. But why doesn't tensorization seem to impact the accuracy adversely? We explore this by assessing how truncating the convolution kernels of dense (untensorized) CNNs impacts their accuracy. Specifically, we truncated the kernels of (i) a vanilla four-layer CNN and (ii) ResNet-50 pre-trained for image classification on the CIFAR-10 and CIFAR-100 datasets. We found that kernels (especially those inside deeper layers) could often be truncated along several cuts, resulting in a significant loss in kernel norm but not in classification accuracy. This suggests that such "correlation compression" (underlying tensorization) is an intrinsic feature of how information is encoded in dense CNNs. We also found that aggressively truncated models could often recover the pre-truncation accuracy after only a few epochs of re-training, suggesting that compressing the internal correlations of convolution layers does not often transport the model to a worse minimum. Our results can be applied to tensorize and compress CNN models more effectively.
|
[
"['Sukhbinder Singh' 'Saeed S. Jahromi' 'Roman Orus']"
] |
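A minimal sketch of the truncation probe the abstract describes: matricize a convolution kernel along one cut, truncate its singular values, and measure how much kernel norm survives. The kernel shape and rank are illustrative assumptions.

```python
# Hedged sketch of SVD truncation of a dense convolution kernel along one cut.
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.normal(size=(64, 32, 3, 3))        # (out_ch, in_ch, kH, kW)

# Cut separating output channels from the (input channels x spatial) modes.
mat = kernel.reshape(64, -1)
U, s, Vt = np.linalg.svd(mat, full_matrices=False)

rank = 16                                        # aggressive truncation
truncated = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

norm_kept = np.linalg.norm(truncated) / np.linalg.norm(mat)
print(f"rank {rank}/{len(s)} keeps {norm_kept:.1%} of the kernel norm")
# The paper's observation: accuracy can survive even when norm_kept drops a lot.
```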
null | null |
2403.14385
| null | null |
http://arxiv.org/pdf/2403.14385v2
|
2024-04-30T10:42:42Z
|
2024-03-21T13:21:33Z
|
Estimating Causal Effects with Double Machine Learning -- A Method
Evaluation
|
The estimation of causal effects with observational data continues to be a very active research area. In recent years, researchers have developed new frameworks which use machine learning to relax classical assumptions necessary for the estimation of causal effects. In this paper, we review one of the most prominent methods - "double/debiased machine learning" (DML) - and empirically evaluate it by comparing its performance on simulated data relative to more traditional statistical methods, before applying it to real-world data. Our findings indicate that the application of a suitably flexible machine learning algorithm within DML improves the adjustment for various nonlinear confounding relationships. This advantage enables a departure from traditional functional form assumptions typically necessary in causal effect estimation. However, we demonstrate that the method continues to critically depend on standard assumptions about causal structure and identification. When estimating the effects of air pollution on housing prices in our application, we find that DML estimates are consistently larger than estimates of less flexible methods. From our overall results, we provide actionable recommendations for specific choices researchers must make when applying DML in practice.
|
[
"['Jonathan Fuhr' 'Philipp Berens' 'Dominik Papies']"
] |
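For readers unfamiliar with DML, here is a minimal partialling-out sketch with cross-fitted random-forest nuisance estimates; the data-generating process, learners, and hyperparameters are illustrative assumptions, not the paper's simulation design.

```python
# Hedged sketch of the DML "partialling-out" estimator for a linear treatment
# effect, using random forests as flexible nuisance learners on toy data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
treatment = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=n)      # confounded by X
outcome = 2.0 * treatment + np.cos(X[:, 0]) + rng.normal(scale=0.5, size=n)

# Cross-fitted nuisance predictions: outcome ~ X and treatment ~ X.
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, outcome, cv=5)
g_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, treatment, cv=5)
res_y, res_t = outcome - m_hat, treatment - g_hat

# Final stage: regress outcome residuals on treatment residuals.
theta = (res_t @ res_y) / (res_t @ res_t)
print(f"estimated effect: {theta:.3f} (true effect: 2.0)")
```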
null | null |
2403.14392
| null | null |
http://arxiv.org/pdf/2403.14392v1
|
2024-03-21T13:33:00Z
|
2024-03-21T13:33:00Z
|
A Bag of Tricks for Few-Shot Class-Incremental Learning
|
We present a bag of tricks framework for few-shot class-incremental learning (FSCIL), which is a challenging form of continual learning that involves continuous adaptation to new tasks with limited samples. FSCIL requires both stability and adaptability, i.e., preserving proficiency in previously learned tasks while learning new ones. Our proposed bag of tricks brings together eight key and highly influential techniques that improve stability, adaptability, and overall performance under a unified framework for FSCIL. We organize these tricks into three categories: stability tricks, adaptability tricks, and training tricks. Stability tricks aim to mitigate the forgetting of previously learned classes by enhancing the separation between the embeddings of learned classes and minimizing interference when learning new ones. On the other hand, adaptability tricks focus on the effective learning of new classes. Finally, training tricks improve the overall performance without compromising stability or adaptability. We perform extensive experiments on three benchmark datasets, CIFAR-100, CUB-200, and miniImageNet, to evaluate the impact of our proposed framework. Our detailed analysis shows that our approach substantially improves both stability and adaptability, establishing a new state-of-the-art by outperforming prior works in the area. We believe our method provides a go-to solution and establishes a robust baseline for future research in this area.
|
[
"['Shuvendu Roy' 'Chunjong Park' 'Aldi Fahrezi' 'Ali Etemad']"
] |
null | null |
2403.14398
| null | null |
http://arxiv.org/pdf/2403.14398v1
|
2024-03-21T13:43:49Z
|
2024-03-21T13:43:49Z
|
Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact
Subproblem Solver for Training Structured Neural Network
|
We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. Similar to existing regularized adaptive methods, the subproblem for computing the update direction of RAMDA involves a nonsmooth regularizer and a diagonal preconditioner, and therefore does not possess a closed-form solution in general. We thus also carefully devise an implementable inexactness condition that retains convergence guarantees similar to the exact versions, and propose a companion efficient solver for the subproblems of both RAMDA and existing methods to make them practically feasible. We leverage the theory of manifold identification in variational analysis to show that, even in the presence of such inexactness, the iterates of RAMDA attain the ideal structure induced by the regularizer at the stationary point of asymptotic convergence. This structure is locally optimal near the point of convergence, so RAMDA is guaranteed to obtain the best structure possible among all methods converging to the same point, making it the first regularized adaptive method outputting models that possess outstanding predictive performance while being (locally) optimally structured. Extensive numerical experiments in large-scale modern computer vision, language modeling, and speech tasks show that the proposed RAMDA is efficient and consistently outperforms the state of the art for training structured neural networks. Implementation of our algorithm is available at http://www.github.com/ismoptgroup/RAMDA/.
|
[
"['Zih-Syuan Huang' 'Ching-pei Lee']"
] |
null | null |
2403.14404
| null | null |
http://arxiv.org/pdf/2403.14404v2
|
2024-05-23T09:34:29Z
|
2024-03-21T13:52:55Z
|
Physics-Informed Diffusion Models
|
Generative models such as denoising diffusion models are quickly advancing their ability to approximate highly complex data distributions. They are also increasingly leveraged in scientific machine learning, where samples from the implied data distribution are expected to adhere to specific governing equations. We present a framework to inform denoising diffusion models of underlying constraints on such generated samples during model training. Our approach improves the alignment of the generated samples with the imposed constraints and significantly outperforms existing methods without affecting inference speed. Additionally, our findings suggest that incorporating such constraints during training provides a natural regularization against overfitting. Our framework is easy to implement and versatile in its applicability for imposing equality and inequality constraints, as well as auxiliary optimization objectives.
|
[
"['Jan-Hendrik Bastek' 'WaiChing Sun' 'Dennis M. Kochmann']"
] |
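A minimal sketch of the training idea above: augment the standard denoising loss with a residual penalty on the reconstructed sample. The toy constraint (each sample should sum to zero), the simplified noising schedule, and the tiny denoiser are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: constraint-informed diffusion training on toy vectors.
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Linear(8 + 1, 64), nn.ReLU(), nn.Linear(64, 8))

def residual(x0_hat):
    # Toy equality constraint r(x) = sum(x); a PDE residual would go here.
    return x0_hat.sum(dim=-1)

x0 = torch.randn(128, 8)
x0 = x0 - x0.mean(dim=-1, keepdim=True)                 # data satisfying the constraint
t = torch.rand(128, 1)
noise = torch.randn_like(x0)
x_t = torch.sqrt(1 - t) * x0 + torch.sqrt(t) * noise    # simplified noising process

pred_noise = denoiser(torch.cat([x_t, t], dim=-1))
x0_hat = (x_t - torch.sqrt(t) * pred_noise) / torch.sqrt(1 - t)

# Denoising loss plus a weighted penalty on the constraint residual.
loss = ((pred_noise - noise) ** 2).mean() + 0.1 * (residual(x0_hat) ** 2).mean()
loss.backward()
print(float(loss))
```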
null | null |
2403.14410
| null | null |
http://arxiv.org/pdf/2403.14410v1
|
2024-03-21T13:57:45Z
|
2024-03-21T13:57:45Z
|
GLC++: Source-Free Universal Domain Adaptation through Global-Local
Clustering and Contrastive Affinity Learning
|
Deep neural networks often exhibit sub-optimal performance under covariate and category shifts. Source-Free Domain Adaptation (SFDA) presents a promising solution to this dilemma, yet most SFDA approaches are restricted to closed-set scenarios. In this paper, we explore Source-Free Universal Domain Adaptation (SF-UniDA), aiming to accurately classify "known" data belonging to common categories and segregate them from target-private "unknown" data. We propose a novel Global and Local Clustering (GLC) technique, which comprises an adaptive one-vs-all global clustering algorithm to discern between target classes, complemented by a local k-NN clustering strategy to mitigate negative transfer. Despite its effectiveness, the inherent closed-set source architecture leads to uniform treatment of "unknown" data, impeding the identification of distinct "unknown" categories. To address this, we evolve GLC to GLC++, integrating a contrastive affinity learning strategy. We examine the superiority of GLC and GLC++ across multiple benchmarks and category shift scenarios. Remarkably, in the most challenging open-partial-set scenarios, GLC and GLC++ surpass GATE by 16.7% and 18.6% in H-score on VisDA, respectively. GLC++ enhances the novel category clustering accuracy of GLC by 4.3% in open-set scenarios on Office-Home. Furthermore, the introduced contrastive learning strategy not only enhances GLC but also significantly facilitates existing methodologies.
|
[
"['Sanqing Qu' 'Tianpei Zou' 'Florian Röhrbein' 'Cewu Lu' 'Guang Chen'\n 'Dacheng Tao' 'Changjun Jiang']"
] |
null | null |
2403.14413
| null | null |
http://arxiv.org/pdf/2403.14413v2
|
2024-03-22T13:15:20Z
|
2024-03-21T13:59:19Z
|
Model Uncertainty in Evolutionary Optimization and Bayesian
Optimization: A Comparative Analysis
|
Black-box optimization problems, which are common in many real-world applications, require optimization through input-output interactions without access to internal workings. This often leads to significant computational resources being consumed for simulations. Bayesian Optimization (BO) and Surrogate-Assisted Evolutionary Algorithms (SAEAs) are two widely used gradient-free optimization techniques employed to address such challenges. Both approaches follow a similar iterative procedure that relies on surrogate models to guide the search process. This paper aims to elucidate the similarities and differences in the utilization of model uncertainty between these two methods, as well as the impact of model inaccuracies on algorithmic performance. A novel model-assisted strategy is introduced, which utilizes unevaluated solutions to generate offspring, leveraging the population-based search capabilities of evolutionary algorithms to enhance the effectiveness of model-assisted optimization. Experimental results demonstrate that the proposed approach outperforms mainstream Bayesian optimization algorithms in terms of accuracy and efficiency.
|
[
"['Hao Hao' 'Xiaoqun Zhang' 'Aimin Zhou']"
] |
null | null |
2403.14421
| null | null |
http://arxiv.org/pdf/2403.14421v3
|
2024-05-13T14:57:34Z
|
2024-03-21T14:17:28Z
|
DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning
|
Text-to-image diffusion models have been shown to suffer from sample-level memorization, possibly reproducing near-perfect replicas of images that they are trained on, which may be undesirable. To remedy this issue, we develop the first differentially private (DP) retrieval-augmented generation algorithm that is capable of generating high-quality image samples while providing provable privacy guarantees. Specifically, we assume access to a text-to-image diffusion model trained on a small amount of public data, and design a DP retrieval mechanism to augment the text prompt with samples retrieved from a private retrieval dataset. Our \emph{differentially private retrieval-augmented diffusion model} (DP-RDM) requires no fine-tuning on the retrieval dataset to adapt to another domain, and can use state-of-the-art generative models to generate high-quality image samples while satisfying rigorous DP guarantees. For instance, when evaluated on MS-COCO, our DP-RDM can generate samples with a privacy budget of $\epsilon=10$, while providing a $3.5$ point improvement in FID compared to public-only retrieval for up to $10,000$ queries.
|
[
"['Jonathan Lebensold' 'Maziar Sanjabi' 'Pietro Astolfi'\n 'Adriana Romero-Soriano' 'Kamalika Chaudhuri' 'Mike Rabbat' 'Chuan Guo']"
] |
null | null |
2403.14425
| null | null |
http://arxiv.org/pdf/2403.14425v1
|
2024-03-21T14:28:43Z
|
2024-03-21T14:28:43Z
|
Task-optimal data-driven surrogate models for eNMPC via differentiable
simulation and optimization
|
We present a method for end-to-end learning of Koopman surrogate models for optimal performance in control. In contrast to previous contributions that employ standard reinforcement learning (RL) algorithms, we use a training algorithm that exploits the potential differentiability of environments based on mechanistic simulation models. We evaluate the performance of our method by comparing it to that of other combinations of controller types and training algorithms on an eNMPC case study known from the literature. Our method exhibits superior performance on this problem, thereby constituting a promising avenue towards more capable controllers that employ dynamic surrogate models.
|
[
"['Daniel Mayfrank' 'Na Young Ahn' 'Alexander Mitsos' 'Manuel Dahmen']"
] |
null | null |
2403.14429
| null | null |
http://arxiv.org/pdf/2403.14429v1
|
2024-03-21T14:36:59Z
|
2024-03-21T14:36:59Z
|
Style-Extracting Diffusion Models for Semi-Supervised Histopathology
Segmentation
|
Deep learning-based image generation has seen significant advancements with diffusion models, notably improving the quality of generated images. Despite these developments, generating images with unseen characteristics beneficial for downstream tasks has received limited attention. To bridge this gap, we propose Style-Extracting Diffusion Models, featuring two conditioning mechanisms. Specifically, we utilize 1) a style conditioning mechanism that allows injecting style information from previously unseen images during image generation and 2) a content conditioning mechanism that can be targeted to a downstream task, e.g., layout for segmentation. We introduce a trainable style encoder to extract style information from images, and an aggregation block that merges style information from multiple style inputs. This architecture enables the generation of images with unseen styles in a zero-shot manner, by leveraging styles from unseen images, resulting in more diverse generations. In this work, we use the image layout as the target condition and first show the capability of our method on a natural image dataset as a proof-of-concept. We further demonstrate its versatility in histopathology, where we combine prior knowledge about tissue composition and unannotated data to create diverse synthetic images with known layouts. This allows us to generate additional synthetic data to train a segmentation network in a semi-supervised fashion. We verify the added value of the generated images by showing improved segmentation results and lower performance variability between patients when synthetic images are included during segmentation training. Our code will be made publicly available at [LINK].
|
[
"['Mathias Öttl' 'Frauke Wilm' 'Jana Steenpass' 'Jingna Qiu'\n 'Matthias Rübner' 'Arndt Hartmann' 'Matthias Beckmann' 'Peter Fasching'\n 'Andreas Maier' 'Ramona Erber' 'Bernhard Kainz' 'Katharina Breininger']"
] |
null | null |
2403.14435
| null | null |
http://arxiv.org/pdf/2403.14435v1
|
2024-03-21T14:41:58Z
|
2024-03-21T14:41:58Z
|
Biased Binary Attribute Classifiers Ignore the Majority Classes
|
To visualize the regions of interest that classifiers base their decisions on, different Class Activation Mapping (CAM) methods have been developed. However, all of these techniques target categorical classifiers only, though most real-world tasks are binary classification. In this paper, we extend gradient-based CAM techniques to work with binary classifiers and visualize the active regions for binary facial attribute classifiers. When training an unbalanced binary classifier on an imbalanced dataset, it is well-known that the majority class, i.e. the class with many training samples, is mostly predicted much better than the minority class with few training instances. In our experiments on the CelebA dataset, we verify these results when training an unbalanced classifier to extract 40 facial attributes simultaneously. One would expect that the biased classifier has learned to extract features mainly for the majority classes and that the proportional energy of the activations mainly resides in certain specific regions of the image where the attribute is located. However, we find very little regular activation for samples of majority classes, while the active regions for minority classes seem mostly reasonable and overlap with our expectations. These results suggest that biased classifiers mainly rely on bias activation for majority classes. When training a balanced classifier on the imbalanced data by employing attribute-specific class weights, majority and minority classes are classified similarly well and show expected activations for almost all attributes.
|
[
"['Xinyi Zhang' 'Johanna Sophie Bieri' 'Manuel Günther']"
] |
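A minimal sketch of the attribute-specific class weighting mentioned in the abstract: weight each attribute's positive class by its negative-to-positive ratio. The synthetic labels and linear head are illustrative assumptions; 40 attributes merely mirrors CelebA.

```python
# Hedged sketch: per-attribute pos_weight for balanced multi-attribute training.
import torch
import torch.nn as nn

labels = (torch.rand(1000, 40) < 0.1).float()     # imbalanced multi-attribute labels
pos = labels.mean(dim=0).clamp(min=1e-3)
pos_weight = (1 - pos) / pos                      # per-attribute n_neg / n_pos ratio

model = nn.Linear(128, 40)                        # stand-in for a feature-based head
features = torch.randn(1000, 128)
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = criterion(model(features), labels)
loss.backward()
print(pos_weight[:5], float(loss))
```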
null | null |
2403.14438
| null | null |
http://arxiv.org/abs/2403.14438v2
|
2024-03-26T11:02:32Z
|
2024-03-21T14:44:03Z
|
A Multimodal Approach to Device-Directed Speech Detection with Large
Language Models
|
Interactions with virtual assistants typically start with a predefined trigger phrase followed by the user command. To make interactions with the assistant more intuitive, we explore whether it is feasible to drop the requirement that users must begin each command with a trigger phrase. We explore this task in three ways: First, we train classifiers using only acoustic information obtained from the audio waveform. Second, we take the decoder outputs of an automatic speech recognition (ASR) system, such as 1-best hypotheses, as input features to a large language model (LLM). Finally, we explore a multimodal system that combines acoustic and lexical features, as well as ASR decoder signals in an LLM. Using multimodal information yields relative equal-error-rate improvements over text-only and audio-only models of up to 39% and 61%. Increasing the size of the LLM and training with low-rank adaption leads to further relative EER reductions of up to 18% on our dataset.
|
[
"['Dominik Wagner' 'Alexander Churchill' 'Siddharth Sigtia'\n 'Panayiotis Georgiou' 'Matt Mirsamadi' 'Aarshee Mishra' 'Erik Marchi']"
] |
null | null |
2403.14440
| null | null |
http://arxiv.org/pdf/2403.14440v1
|
2024-03-21T14:45:54Z
|
2024-03-21T14:45:54Z
|
Analysing Diffusion Segmentation for Medical Images
|
Denoising Diffusion Probabilistic Models have become increasingly popular due to their ability to offer probabilistic modeling and generate diverse outputs. This versatility inspired their adaptation for image segmentation, where multiple predictions of the model can produce segmentation results that not only achieve high quality but also capture the uncertainty inherent in the model. Here, powerful architectures were proposed for improving diffusion segmentation performance. However, there is a notable lack of analysis and discussions on the differences between diffusion segmentation and image generation, and thorough evaluations are missing that distinguish the improvements these architectures provide for segmentation in general from their benefit for diffusion segmentation specifically. In this work, we critically analyse and discuss how diffusion segmentation for medical images differs from diffusion image generation, with a particular focus on the training behavior. Furthermore, we conduct an assessment of how proposed diffusion segmentation architectures perform when trained directly for segmentation. Lastly, we explore how different medical segmentation tasks influence the diffusion segmentation behavior and how the diffusion process could be adapted accordingly. With these analyses, we aim to provide in-depth insights into the behavior of diffusion segmentation that allow for a better design and evaluation of diffusion segmentation methods in the future.
|
[
"['Mathias Öttl' 'Siyuan Mei' 'Frauke Wilm' 'Jana Steenpass'\n 'Matthias Rübner' 'Arndt Hartmann' 'Matthias Beckmann' 'Peter Fasching'\n 'Andreas Maier' 'Ramona Erber' 'Katharina Breininger']"
] |
null | null |
2403.14443
| null | null |
http://arxiv.org/pdf/2403.14443v1
|
2024-03-21T14:48:37Z
|
2024-03-21T14:48:37Z
|
Language Models Can Reduce Asymmetry in Information Markets
|
This work addresses the buyer's inspection paradox for information markets. The paradox is that buyers need to access information to determine its value, while sellers need to limit access to prevent theft. To study this, we introduce an open-source simulated digital marketplace where intelligent agents, powered by language models, buy and sell information on behalf of external participants. The central mechanism enabling this marketplace is the agents' dual capabilities: they not only have the capacity to assess the quality of privileged information but also come equipped with the ability to forget. This ability to induce amnesia allows vendors to grant temporary access to proprietary information, significantly reducing the risk of unauthorized retention while enabling agents to accurately gauge the information's relevance to specific queries or tasks. To perform well, agents must make rational decisions, strategically explore the marketplace through generated sub-queries, and synthesize answers from purchased information. Concretely, our experiments (a) uncover biases in language models leading to irrational behavior and evaluate techniques to mitigate these biases, (b) investigate how price affects demand in the context of informational goods, and (c) show that inspection and higher budgets both lead to higher quality outcomes.
|
[
"['Nasim Rahaman' 'Martin Weiss' 'Manuel Wüthrich' 'Yoshua Bengio'\n 'Li Erran Li' 'Chris Pal' 'Bernhard Schölkopf']"
] |
null | null |
2403.14457
| null | null |
http://arxiv.org/pdf/2403.14457v1
|
2024-03-21T15:04:32Z
|
2024-03-21T15:04:32Z
|
gTBLS: Generating Tables from Text by Conditional Question Answering
|
Distilling large, unstructured text into a structured, condensed form such as tables is an open research problem. One of the primary challenges in automatically generating tables is ensuring their syntactic validity. Prior approaches address this challenge by including additional parameters in the Transformer's attention mechanism to attend to specific rows and column headers. In contrast to this single-stage method, this paper presents a two-stage approach called Generative Tables (gTBLS). The first stage infers table structure (row and column headers) from the text. The second stage formulates questions using these headers and fine-tunes a causal language model to answer them. Furthermore, the gTBLS approach is amenable to the utilization of pre-trained Large Language Models in a zero-shot configuration, presenting a solution for table generation in situations where fine-tuning is not feasible. gTBLS improves prior approaches by up to 10% in BERTScore on the table construction task and up to 20% on the table content generation task of the E2E, WikiTableText, WikiBio, and RotoWire datasets.
|
[
"['Anirudh Sundar' 'Christopher Richardson' 'Larry Heck']"
] |
null | null |
2403.14466
| null | null |
http://arxiv.org/pdf/2403.14466v1
|
2024-03-21T15:13:54Z
|
2024-03-21T15:13:54Z
|
Universal Feature Selection for Simultaneous Interpretability of
Multitask Datasets
|
Extracting meaningful features from complex, high-dimensional datasets across scientific domains remains challenging. Current methods often struggle with scalability, limiting their applicability to large datasets, or make restrictive assumptions about feature-property relationships, hindering their ability to capture complex interactions. BoUTS's general and scalable feature selection algorithm surpasses these limitations to identify both universal features relevant to all datasets and task-specific features predictive for specific subsets. Evaluated on seven diverse chemical regression datasets, BoUTS achieves state-of-the-art feature sparsity while maintaining prediction accuracy comparable to specialized methods. Notably, BoUTS's universal features enable domain-specific knowledge transfer between datasets, and suggest deep connections in seemingly-disparate chemical datasets. We expect these results to have important repercussions in manually-guided inverse problems. Beyond its current application, BoUTS holds immense potential for elucidating data-poor systems by leveraging information from similar data-rich systems. BoUTS represents a significant leap in cross-domain feature selection, potentially leading to advancements in various scientific fields.
|
[
"['Matt Raymond' 'Jacob Charles Saldinger' 'Paolo Elvati' 'Clayton Scott'\n 'Angela Violi']"
] |
null | null |
2403.14472
| null | null |
http://arxiv.org/pdf/2403.14472v5
|
2024-05-28T09:11:25Z
|
2024-03-21T15:18:30Z
|
Detoxifying Large Language Models via Knowledge Editing
|
This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs). We construct a benchmark, SafeEdit, which covers nine unsafe categories with various powerful attack prompts and equips comprehensive metrics for systematic evaluation. We conduct experiments with several knowledge editing approaches, indicating that knowledge editing has the potential to detoxify LLMs with a limited impact on general performance efficiently. Then, we propose a simple yet effective baseline, dubbed Detoxifying with Intraoperative Neural Monitoring (DINM), to diminish the toxicity of LLMs within a few tuning steps via only one instance. We further provide an in-depth analysis of the internal mechanism for various detoxifying approaches, demonstrating that previous methods like SFT and DPO may merely suppress the activations of toxic parameters, while DINM mitigates the toxicity of the toxic parameters to a certain extent, making permanent adjustments. We hope that these insights could shed light on future work of developing detoxifying approaches and the underlying knowledge mechanisms of LLMs. Code and benchmark are available at https://github.com/zjunlp/EasyEdit.
|
[
"['Mengru Wang' 'Ningyu Zhang' 'Ziwen Xu' 'Zekun Xi' 'Shumin Deng'\n 'Yunzhi Yao' 'Qishen Zhang' 'Linyi Yang' 'Jindong Wang' 'Huajun Chen']"
] |
null | null |
2403.14483
| null | null |
http://arxiv.org/abs/2403.14483v1
|
2024-03-21T15:29:24Z
|
2024-03-21T15:29:24Z
|
Utilizing the LightGBM Algorithm for Operator User Credit Assessment
Research
|
Mobile Internet user credit assessment is an important way for communication operators to inform decisions and formulate measures, and it also helps operators secure expected benefits. However, credit evaluation methods have long been monopolized by financial industries such as banking and credit. As providers of platform network technology and network resources, communication operators are also builders and maintainers of communication networks, and their Internet data can improve user credit evaluation strategies. This paper uses the massive data provided by communication operators to study an operator user credit evaluation model based on a fused LightGBM algorithm. First, key features are extracted from the operator-provided user evaluation data by data preprocessing and feature engineering, and a multi-dimensional, statistically meaningful feature set is constructed; then, linear regression, decision trees, LightGBM, and other machine learning algorithms are used to build multiple basic models and identify the best one; finally, ensemble techniques such as Averaging, Voting, Blending, and Stacking are integrated to refine multiple fusion models and establish the fusion model best suited to operator user evaluation.
|
[
"['Shaojie Li' 'Xinqi Dong' 'Danqing Ma' 'Bo Dang' 'Hengyi Zang'\n 'Yulu Gong']"
] |
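A minimal sketch of the kind of fusion pipeline the abstract outlines, stacking LightGBM with another base learner under a logistic-regression meta-model (requires the `lightgbm` package); the synthetic data and hyperparameters are illustrative assumptions, not the operators' setup.

```python
# Hedged sketch: stacking LightGBM with a second base learner on toy data.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[("lgbm", LGBMClassifier(n_estimators=200)),
                ("tree", DecisionTreeClassifier(max_depth=5))],
    final_estimator=LogisticRegression(),   # meta-model fuses base predictions
    cv=5,
)
print("stacked AUC:", cross_val_score(stack, X, y, scoring="roc_auc", cv=3).mean())
```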
null | null |
2403.14484
| null | null |
http://arxiv.org/pdf/2403.14484v1
|
2024-03-21T15:31:28Z
|
2024-03-21T15:31:28Z
|
HyperGALE: ASD Classification via Hypergraph Gated Attention with
Learnable Hyperedges
|
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by varied social cognitive challenges and repetitive behavioral patterns. Identifying reliable brain imaging-based biomarkers for ASD has been a persistent challenge due to the spectrum's diverse symptomatology. Existing baselines in the field have made significant strides in this direction, yet there remains room for improvement in both performance and interpretability. We propose \emph{HyperGALE}, which builds upon the hypergraph by incorporating learned hyperedges and gated attention mechanisms. This approach has led to substantial improvements in the model's ability to interpret complex brain graph data, offering deeper insights into ASD biomarker characterization. Evaluated on the extensive ABIDE II dataset, \emph{HyperGALE} not only improves interpretability but also demonstrates statistically significant enhancements in key performance metrics compared to both previous baselines and the foundational hypergraph model. The advancement \emph{HyperGALE} brings to ASD research highlights the potential of sophisticated graph-based techniques in neurodevelopmental studies. The source code and implementation instructions are available at GitHub: https://github.com/mehular0ra/HyperGALE.
|
[
"['Mehul Arora' 'Chirag Shantilal Jain' 'Lalith Bharadwaj Baru'\n 'Kamalaker Dadi' 'Bapi Raju Surampudi']"
] |
null | null |
2403.14488
| null | null |
http://arxiv.org/pdf/2403.14488v1
|
2024-03-21T15:36:26Z
|
2024-03-21T15:36:26Z
|
Physics-Based Causal Reasoning for Safe & Robust Next-Best Action
Selection in Robot Manipulation Tasks
|
Safe and efficient object manipulation is a key enabler of many real-world robot applications. However, this is challenging because robot operation must be robust to a range of sensor and actuator uncertainties. In this paper, we present a physics-informed causal-inference-based framework for a robot to probabilistically reason about candidate actions in a block stacking task in a partially observable setting. We integrate a physics-based simulation of the rigid-body system dynamics with a causal Bayesian network (CBN) formulation to define a causal generative probabilistic model of the robot decision-making process. Using simulation-based Monte Carlo experiments, we demonstrate our framework's ability to successfully: (1) predict block tower stability with high accuracy (Pred Acc: 88.6%); and, (2) select an approximate next-best action for the block stacking task, for execution by an integrated robot system, achieving a 94.2% task success rate. We also demonstrate our framework's suitability for real-world robot systems by showing successful task executions with a domestic support robot that integrates perception and manipulation sub-systems. Hence, we show that by embedding physics-based causal reasoning into robots' decision-making processes, we can make robot task execution safer, more reliable, and more robust to various types of uncertainty.
|
[
"['Ricardo Cannizzaro' 'Michael Groom' 'Jonathan Routley'\n 'Robert Osazuwa Ness' 'Lars Kunze']"
] |
null | null |
2403.14504
| null | null |
http://arxiv.org/pdf/2403.14504v1
|
2024-03-21T15:56:15Z
|
2024-03-21T15:56:15Z
|
Soft Learning Probabilistic Circuits
|
Probabilistic Circuits (PCs) are prominent tractable probabilistic models, allowing for a range of exact inferences. This paper focuses on the main algorithm for training PCs, LearnSPN, a gold standard due to its efficiency, performance, and ease of use, in particular for tabular data. We show that LearnSPN is a greedy likelihood maximizer under mild assumptions. While inferences in PCs may use the entire circuit structure for processing queries, LearnSPN applies a hard method for learning them, propagating at each sum node a data point through one and only one of the children/edges as in a hard clustering process. We propose a new learning procedure named SoftLearn, that induces a PC using a soft clustering process. We investigate the effect of this learning-inference compatibility in PCs. Our experiments show that SoftLearn outperforms LearnSPN in many situations, yielding better likelihoods and arguably better samples. We also analyze comparable tractable models to highlight the differences between soft/hard learning and model querying.
|
[
"['Soroush Ghandi' 'Benjamin Quost' 'Cassio de Campos']"
] |
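To illustrate the soft-versus-hard distinction at the heart of SoftLearn, the sketch below computes fractional and winner-takes-all assignments of data points to the two children of a sum node, modeled here as a 1-D Gaussian mixture; all parameters are illustrative assumptions, and actual PC learning operates over circuit nodes rather than a bare mixture.

```python
# Hedged sketch: soft routing (SoftLearn-style) vs hard routing (LearnSPN-style)
# of data points to the children of a sum node.
import numpy as np
from scipy.stats import norm

x = np.array([-2.1, -1.8, 0.1, 1.9, 2.2])
means, stds, weights = np.array([-2.0, 2.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

# Responsibilities of each mixture component (child of the sum node).
lik = weights * norm.pdf(x[:, None], means, stds)
soft = lik / lik.sum(axis=1, keepdims=True)   # fractional, soft-clustering routing
hard = np.eye(2)[lik.argmax(axis=1)]          # winner-takes-all, hard routing

print("soft assignments:\n", np.round(soft, 3))
print("hard assignments:\n", hard)
```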
null | null |
2403.14508
| null | null |
http://arxiv.org/pdf/2403.14508v1
|
2024-03-21T16:02:52Z
|
2024-03-21T16:02:52Z
|
Constrained Reinforcement Learning with Smoothed Log Barrier Function
|
Reinforcement Learning (RL) has been widely applied to many control tasks and has substantially improved performance compared to conventional control methods in many domains where the reward function is well defined. However, for many real-world problems, it is often more convenient to formulate optimization problems in terms of rewards and constraints simultaneously. Optimizing such constrained problems via reward shaping can be difficult as it requires tedious manual tuning of reward functions with several interacting terms. Recent formulations which include constraints mostly require a pre-training phase, which often needs human expertise to collect data or assumes having a sub-optimal policy readily available. We propose a new constrained RL method called CSAC-LB (Constrained Soft Actor-Critic with Log Barrier Function), which achieves competitive performance without any pre-training by applying a linear smoothed log barrier function to an additional safety critic. It implements an adaptive penalty for policy learning and alleviates the numerical issues that are known to complicate the application of the log barrier function method. As a result, we show that with CSAC-LB, we achieve state-of-the-art performance on several constrained control tasks with different levels of difficulty, and we evaluate our method in a locomotion task on a real quadruped robot platform.
|
[
"['Baohe Zhang' 'Yuan Zhang' 'Lilli Frison' 'Thomas Brox'\n 'Joschka Bödecker']"
] |
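A minimal sketch of a linearly smoothed log barrier in the spirit of CSAC-LB, following the log-barrier extension form of Kervadec et al.; CSAC-LB's exact smoothing and its actor-critic integration may differ, so treat the form below as an assumption-laden illustration.

```python
# Hedged sketch: log barrier for a constraint z <= 0, extended linearly beyond
# z = -1/t**2 so the penalty stays finite and C^1-continuous everywhere.
import numpy as np

def smoothed_log_barrier(z, t=5.0):
    """Penalty for constraint z <= 0; linear branch keeps gradients bounded."""
    z = np.asarray(z, dtype=float)
    inside = z <= -1.0 / t**2
    # np.maximum guards the log in the branch that is masked out by np.where.
    return np.where(inside,
                    -np.log(np.maximum(-z, 1e-12)) / t,
                    t * z - np.log(1.0 / t**2) / t + 1.0 / t)

z = np.linspace(-2.0, 0.5, 6)
print(np.round(smoothed_log_barrier(z), 3))   # finite even for violated constraints
```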
null | null |
2403.14514
| null | null |
http://arxiv.org/pdf/2403.14514v1
|
2024-03-21T16:10:42Z
|
2024-03-21T16:10:42Z
|
Machine-learning invariant foliations in forced systems for reduced
order modelling
|
We identify reduced order models (ROM) of forced systems from data using invariant foliations. The forcing can be external, parametric, periodic or quasi-periodic. The process has four steps: 1. identify an approximate invariant torus and the linear dynamics about the torus; 2. identify a globally defined invariant foliation about the torus; 3. identify a local foliation about an invariant manifold that complements the global foliation; 4. extract the invariant manifold as the leaf going through the torus and interpret the result. We combine steps 2 and 3, so that we can track the location of the invariant torus and scale the invariance equations appropriately. We highlight some fundamental limitations of invariant manifolds and foliations when fitting them to data, that require further mathematics to resolve.
|
[
"['Robert Szalai']"
] |
null | null |
2403.14539
| null | null |
http://arxiv.org/pdf/2403.14539v1
|
2024-03-21T16:40:10Z
|
2024-03-21T16:40:10Z
|
Object-Centric Domain Randomization for 3D Shape Reconstruction in the
Wild
|
One of the biggest challenges in single-view 3D shape reconstruction in the wild is the scarcity of <3D shape, 2D image>-paired data from real-world environments. Inspired by remarkable achievements via domain randomization, we propose ObjectDR which synthesizes such paired data via a random simulation of visual variations in object appearances and backgrounds. Our data synthesis framework exploits a conditional generative model (e.g., ControlNet) to generate images conforming to spatial conditions such as 2.5D sketches, which are obtainable through a rendering process of 3D shapes from object collections (e.g., Objaverse-XL). To simulate diverse variations while preserving object silhouettes embedded in spatial conditions, we also introduce a disentangled framework which leverages an initial object guidance. After synthesizing a wide range of data, we pre-train a model on them so that it learns to capture a domain-invariant geometry prior which is consistent across various domains. We validate its effectiveness by substantially improving 3D shape reconstruction models on a real-world benchmark. In a scale-up evaluation, our pre-training achieves 23.6% superior results compared with the pre-training on high-quality computer graphics renderings.
|
[
"['Junhyeong Cho' 'Kim Youwang' 'Hunmin Yang' 'Tae-Hyun Oh']"
] |
null | null |
2403.14547
| null | null |
http://arxiv.org/pdf/2403.14547v2
|
2024-05-24T11:14:37Z
|
2024-03-21T16:48:45Z
|
Estimating Physical Information Consistency of Channel Data Augmentation
for Remote Sensing Images
|
The application of data augmentation for deep learning (DL) methods plays an important role in achieving state-of-the-art results in supervised, semi-supervised, and self-supervised image classification. In particular, channel transformations (e.g., solarize, grayscale, brightness adjustments) are integrated into data augmentation pipelines for remote sensing (RS) image classification tasks. However, contradictory beliefs exist about their proper applications to RS images. A common point of critique is that the application of channel augmentation techniques may lead to physically inconsistent spectral data (i.e., pixel signatures). To shed light on the open debate, we propose an approach to estimate whether a channel augmentation technique affects the physical information of RS images. To this end, the proposed approach estimates a score that measures the alignment of a pixel signature within a time series that can be naturally subject to deviations caused by factors such as acquisition conditions or phenological states of vegetation. We compare the scores associated with original and augmented pixel signatures to evaluate the physical consistency. Experimental results on a multi-label image classification task show that channel augmentations yielding a score that exceeds the expected deviation of original pixel signatures cannot improve the performance of a baseline model trained without augmentation.
|
[
"['Tom Burgert' 'Begüm Demir']"
] |
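A minimal sketch of the consistency idea above: compare the spectral angle introduced by a channel augmentation with the natural angular deviation inside a pixel's own time series. The synthetic signatures and brightness-style augmentation are illustrative assumptions, not the paper's actual score.

```python
# Hedged sketch: is an augmented pixel signature within natural temporal deviation?
import numpy as np

def spectral_angle(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(0)
base = rng.uniform(0.1, 0.9, size=10)                   # 10-band pixel signature
series = base + rng.normal(scale=0.02, size=(6, 10))    # natural temporal deviation
augmented = np.clip(base * 1.4, 0, 1)                   # brightness-style channel aug

natural = max(spectral_angle(base, s) for s in series)
score = spectral_angle(base, augmented)
print(f"augmentation angle {score:.3f} vs natural deviation {natural:.3f}")
# If score far exceeds the natural deviation, the augmentation is likely
# physically inconsistent in the sense discussed above.
```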
null | null |
2403.14551
| null | null |
http://arxiv.org/pdf/2403.14551v1
|
2024-03-21T16:52:01Z
|
2024-03-21T16:52:01Z
|
Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling
|
Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive - but with no supervision from other sensory modalities that play a crucial role in human learning. Can we make LMs' representations and predictions more accurate (and more human-like) with more ecologically plausible supervision? This paper describes LexiContrastive Grounding (LCG), a grounded language learning procedure that leverages visual supervision to improve textual representations. LexiContrastive Grounding combines a next token prediction strategy with a contrastive visual grounding objective, focusing on early-layer representations that encode lexical information. Across multiple word-learning and sentence-understanding benchmarks, LexiContrastive Grounding not only outperforms standard language-only models in learning efficiency, but also improves upon vision-and-language learning procedures including CLIP, GIT, Flamingo, and Vokenization. Moreover, LexiContrastive Grounding improves perplexity by around 5% on multiple language modeling tasks. This work underscores the potential of incorporating visual grounding into language models, aligning more closely with the multimodal nature of human language acquisition.
|
[
"['Chengxu Zhuang' 'Evelina Fedorenko' 'Jacob Andreas']"
] |
null | null |
2403.14566
| null | null |
http://arxiv.org/pdf/2403.14566v2
|
2024-03-23T09:50:23Z
|
2024-03-21T17:09:20Z
|
A survey on Concept-based Approaches For Model Improvement
|
The focus of recent research has shifted from merely improving the metrics-based performance of Deep Neural Networks (DNNs) to DNNs which are more interpretable to humans. The field of eXplainable Artificial Intelligence (XAI) has observed various techniques, including saliency-based and concept-based approaches. These approaches explain the model's decisions in simple, human-understandable terms called Concepts. Concepts are known to be the thinking ground of humans. Explanations in terms of concepts enable detecting spurious correlations, inherent biases, or Clever Hans behavior. With the advent of concept-based explanations, a range of concept representation methods and automatic concept discovery algorithms have been introduced. Some recent works also use concepts for model improvement in terms of interpretability and generalization. We provide a systematic review and taxonomy of various concept representations and their discovery algorithms in DNNs, specifically in vision. We also provide details on the concept-based model improvement literature, marking the first comprehensive survey of these methods.
|
[
"['Avani Gupta' 'P J Narayanan']"
] |
null | null |
2403.14578
| null | null |
http://arxiv.org/pdf/2403.14578v1
|
2024-03-21T17:30:59Z
|
2024-03-21T17:30:59Z
|
RAmBLA: A Framework for Evaluating the Reliability of LLMs as Assistants
in the Biomedical Domain
|
Large Language Models (LLMs) increasingly support applications in a wide range of domains, some with potential high societal impact such as biomedicine, yet their reliability in realistic use cases is under-researched. In this work we introduce the Reliability AssessMent for Biomedical LLM Assistants (RAmBLA) framework and evaluate whether four state-of-the-art foundation LLMs can serve as reliable assistants in the biomedical domain. We identify prompt robustness, high recall, and a lack of hallucinations as necessary criteria for this use case. We design shortform tasks and tasks requiring LLM freeform responses mimicking real-world user interactions. We evaluate LLM performance using semantic similarity with a ground-truth response, as judged by an evaluator LLM.
|
[
"['William James Bolton' 'Rafael Poyiadzi' 'Edward R. Morrell'\n 'Gabriela van Bergen Gonzalez Bueno' 'Lea Goetz']"
] |
null | null |
2403.14583
| null | null |
http://arxiv.org/pdf/2403.14583v1
|
2024-03-21T17:37:43Z
|
2024-03-21T17:37:43Z
|
Co-Optimization of Environment and Policies for Decentralized
Multi-Agent Navigation
|
This work views the multi-agent system and its surrounding environment as a co-evolving system, where the behavior of one affects the other. The goal is to take both agent actions and environment configurations as decision variables, and optimize these two components in a coordinated manner to improve some measure of interest. Towards this end, we consider the problem of decentralized multi-agent navigation in cluttered environments. By introducing two sub-objectives of multi-agent navigation and environment optimization, we propose an \textit{agent-environment co-optimization} problem and develop a \textit{coordinated algorithm} that alternates between these sub-objectives to search for an optimal synthesis of agent actions and obstacle configurations in the environment; ultimately, improving the navigation performance. Due to the challenge of explicitly modeling the relation between agents, environment and performance, we leverage policy gradient to formulate a model-free learning mechanism within the coordinated framework. A formal convergence analysis shows that our coordinated algorithm tracks the local minimum trajectory of an associated time-varying non-convex optimization problem. Extensive numerical results corroborate theoretical findings and show the benefits of co-optimization over baselines. Interestingly, the results also indicate that optimized environment configurations are able to offer structural guidance that is key to de-conflicting agents in motion.
|
[
"['Zhan Gao' 'Guang Yang' 'Amanda Prorok']"
] |
null | null |
2403.14587
| null | null |
http://arxiv.org/pdf/2403.14587v2
|
2024-03-25T12:00:19Z
|
2024-03-21T17:42:45Z
|
An Analysis of Linear Time Series Forecasting Models
|
Despite their simplicity, linear models perform well at time series forecasting, even when pitted against deeper and more expensive models. A number of variations to the linear model have been proposed, often including some form of feature normalisation that improves model generalisation. In this paper we analyse the sets of functions expressible using these linear model architectures. In so doing we show that several popular variants of linear models for time series forecasting are equivalent and functionally indistinguishable from standard, unconstrained linear regression. We characterise the model classes for each linear variant. We demonstrate that each model can be reinterpreted as unconstrained linear regression over a suitably augmented feature set, and therefore admit closed-form solutions when using a mean-squared loss function. We provide experimental evidence that the models under inspection learn nearly identical solutions, and finally demonstrate that the simpler closed form solutions are superior forecasters across 72% of test settings.
|
[
"['William Toner' 'Luke Darlow']"
] |
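A minimal sketch of the paper's central observation: a linear forecaster over (augmented) lag features admits a closed-form least-squares solution. The series, context length, and horizon are illustrative assumptions.

```python
# Hedged sketch: closed-form OLS forecaster over lagged features of a toy series.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.arange(500) * 0.1) + 0.1 * rng.normal(size=500)

context, horizon = 24, 8
n = len(series) - context - horizon
X = np.stack([series[i:i + context] for i in range(n)])
Y = np.stack([series[i + context:i + context + horizon] for i in range(n)])
X_aug = np.hstack([X, np.ones((len(X), 1))])     # bias column as a feature augmentation

W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)    # closed-form least-squares solution
pred = np.hstack([series[-context:], 1.0]) @ W
print("next-8 forecast:", np.round(pred, 2))
```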
null | null |
2403.14589
| null | null |
http://arxiv.org/pdf/2403.14589v3
|
2024-04-01T17:37:15Z
|
2024-03-21T17:43:44Z
|
ReAct Meets ActRe: When Language Agents Enjoy Training Data Autonomy
|
Language agents have demonstrated autonomous decision-making abilities by reasoning with foundation models. Recently, efforts have been made to train language agents for performance improvement, with multi-step reasoning and action trajectories as the training data. However, collecting such trajectories still requires considerable human effort, by either artificial annotation or implementations of diverse prompting frameworks. In this work, we propose A$^3$T, a framework that enables the Autonomous Annotation of Agent Trajectories in the style of ReAct. The central component is an ActRe prompting agent, which explains the reason for an arbitrary action. When randomly sampling an external action, the ReAct-style agent could query the ActRe agent with the action to obtain its textual rationales. Novel trajectories are then synthesized by prepending the posterior reasoning from ActRe to the sampled action. In this way, the ReAct-style agent executes multiple trajectories for the failed tasks, and selects the successful ones to supplement its failed trajectory for contrastive self-training. Realized by policy gradient methods with binarized rewards, the contrastive self-training with accumulated trajectories facilitates a closed loop for multiple rounds of language agent self-improvement. We conduct experiments using QLoRA fine-tuning with the open-sourced Mistral-7B-Instruct-v0.2. In AlfWorld, the agent trained with A$^3$T obtains a 1-shot success rate of 96%, and 100% success with 4 iterative rounds. In WebShop, the 1-shot performance of the A$^3$T agent matches the human average, and 4 rounds of iterative refinement lead to performance approaching that of human experts. A$^3$T agents significantly outperform existing techniques, including prompting with GPT-4, advanced agent frameworks, and fully fine-tuned LLMs.
|
[
"['Zonghan Yang' 'Peng Li' 'Ming Yan' 'Ji Zhang' 'Fei Huang' 'Yang Liu']"
] |
null | null |
2403.14593
| null | null |
http://arxiv.org/pdf/2403.14593v3
|
2024-05-14T12:54:32Z
|
2024-03-21T17:48:38Z
|
Rethinking Adversarial Inverse Reinforcement Learning: Policy Imitation,
Transferable Reward Recovery and Algebraic Equilibrium Proof
|
Adversarial inverse reinforcement learning (AIRL) stands as a cornerstone approach in imitation learning, yet it faces criticisms from prior studies. In this paper, we rethink AIRL and respond to these criticisms. Criticism 1 lies in Inadequate Policy Imitation. We show that substituting the built-in algorithm with soft actor-critic (SAC) during policy updating (which requires multiple iterations) significantly enhances the efficiency of policy imitation. Criticism 2 lies in Limited Performance in Transferable Reward Recovery Despite SAC Integration. While we find that SAC indeed exhibits a significant improvement in policy imitation, it introduces drawbacks to transferable reward recovery. We prove that it is not feasible for the SAC algorithm itself to disentangle the reward function comprehensively during the AIRL training process, and propose a hybrid framework, PPO-AIRL + SAC, for a satisfactory transfer effect. Criticism 3 lies in Unsatisfactory Proof from the Perspective of Potential Equilibrium. We reanalyze it from an algebraic theory perspective.
|
[
"['Yangchun Zhang' 'Qiang Liu' 'Weiming Li' 'Yirui Zhou']"
] |
null | null |
2403.14597
| null | null |
http://arxiv.org/pdf/2403.14597v2
|
2024-06-14T19:27:14Z
|
2024-03-21T17:50:22Z
|
Extended Reality for Enhanced Human-Robot Collaboration: a
Human-in-the-Loop Approach
|
The rise of automation has provided an opportunity to achieve higher efficiency in manufacturing processes, yet it often compromises the flexibility required to promptly respond to evolving market needs and meet the demand for customization. Human-robot collaboration attempts to tackle these challenges by combining the strength and precision of machines with human ingenuity and perceptual understanding. In this paper, we conceptualize and propose an implementation framework for an autonomous, machine learning-based manipulator that incorporates human-in-the-loop principles and leverages Extended Reality (XR) to facilitate intuitive communication and programming between humans and robots. Furthermore, the conceptual framework foresees human involvement directly in the robot learning process, resulting in higher adaptability and task generalization. The paper highlights key technologies enabling the proposed framework, emphasizing the importance of developing the digital ecosystem as a whole. Additionally, we review the existing implementation approaches of XR in human-robot collaboration, showcasing diverse perspectives and methodologies. The challenges and future outlooks are discussed, delving into the major obstacles and potential research avenues of XR for more natural human-robot interaction and integration in the industrial landscape.
|
[
"['Yehor Karpichev' 'Todd Charter' 'Jayden Hong' 'Amir M. Soufi Enayati'\n 'Homayoun Honari' 'Mehran Ghafarian Tamizi' 'Homayoun Najjaran']"
] |
null | null |
2403.14602
| null | null |
http://arxiv.org/pdf/2403.14602v1
|
2024-03-21T17:52:08Z
|
2024-03-21T17:52:08Z
|
ReNoise: Real Image Inversion Through Iterative Noising
|
Recent advancements in text-guided diffusion models have unlocked powerful image manipulation capabilities. However, applying these methods to real images necessitates the inversion of the images into the domain of the pretrained diffusion model. Achieving faithful inversion remains a challenge, particularly for more recent models trained to generate images with a small number of denoising steps. In this work, we introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations. Building on reversing the diffusion sampling process, our method employs an iterative renoising mechanism at each inversion sampling step. This mechanism refines the approximation of a predicted point along the forward diffusion trajectory, by iteratively applying the pretrained diffusion model, and averaging these predictions. We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models. Through comprehensive evaluations and comparisons, we show its effectiveness in terms of both accuracy and speed. Furthermore, we confirm that our method preserves editability by demonstrating text-driven image editing on real images.
|
[
"['Daniel Garibi' 'Or Patashnik' 'Andrey Voynov' 'Hadar Averbuch-Elor'\n 'Daniel Cohen-Or']"
] |
null | null |
2403.14606
| null | null |
http://arxiv.org/pdf/2403.14606v1
|
2024-03-21T17:55:16Z
|
2024-03-21T17:55:16Z
|
The Elements of Differentiable Programming
|
Artificial intelligence has recently experienced remarkable advances, fueled by large models, vast datasets, accelerated hardware, and, last but not least, the transformative power of differentiable programming. This new programming paradigm enables end-to-end differentiation of complex computer programs (including those with control flows and data structures), making gradient-based optimization of program parameters possible. As an emerging paradigm, differentiable programming builds upon several areas of computer science and applied mathematics, including automatic differentiation, graphical models, optimization and statistics. This book presents a comprehensive review of the fundamental concepts useful for differentiable programming. We adopt two main perspectives, that of optimization and that of probability, with clear analogies between the two. Differentiable programming is not merely the differentiation of programs, but also the thoughtful design of programs intended for differentiation. By making programs differentiable, we inherently introduce probability distributions over their execution, providing a means to quantify the uncertainty associated with program outputs.
|
[
"['Mathieu Blondel' 'Vincent Roulet']"
] |
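As a small taste of the paradigm the book surveys, the sketch below differentiates end-to-end through a program containing a loop and a data-dependent branch; the function itself is an illustrative assumption.

```python
# Hedged sketch: automatic differentiation of a program with control flow.
import jax
import jax.numpy as jnp

def program(x):
    # A "program", not just a formula: a loop and a data-dependent branch.
    acc = 0.0
    for k in range(1, 4):
        acc = acc + jnp.sin(k * x)
    return jnp.where(x > 0, acc * x, -acc)

grad_fn = jax.grad(program)                 # end-to-end differentiation
print(program(1.5), grad_fn(1.5))           # program value and exact derivative
```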
null | null |
2403.14608
| null | null |
http://arxiv.org/pdf/2403.14608v6
|
2024-07-12T09:58:10Z
|
2024-03-21T17:55:50Z
|
Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
|
Large models represent a groundbreaking advancement in multiple application fields, enabling remarkable achievements across various tasks. However, their unprecedented scale comes with significant computational costs. These models, often consisting of billions of parameters, require vast amounts of computational resources for execution. In particular, the expansive scale and computational demands pose considerable challenges when customizing them for particular downstream tasks, especially on hardware platforms with constrained computational capabilities. Parameter Efficient Fine-Tuning (PEFT) provides a practical solution by efficiently adapting large models to various downstream tasks. In particular, PEFT refers to the process of adjusting the parameters of a pre-trained large model to adapt it to a specific task or domain while minimizing the number of additional parameters introduced or computational resources required. This approach is particularly important when dealing with large-scale language models with high parameter counts, as fine-tuning these models from scratch can be computationally expensive and resource-intensive, posing considerable challenges in the supporting system platform design. In this survey, we present comprehensive studies of various PEFT algorithms, examining their performance and computational overhead. Moreover, we provide an overview of applications developed using different PEFT algorithms and discuss common techniques employed to mitigate computation costs for PEFT. In addition to providing an extensive survey from an algorithmic standpoint, we also examine various real-world system designs to investigate the implementation costs associated with different PEFT approaches. This survey serves as an indispensable resource for researchers aiming to understand both the PEFT algorithm and its system implementation, offering detailed ......
|
[
"['Zeyu Han' 'Chao Gao' 'Jinyang Liu' 'Jeff Zhang' 'Sai Qian Zhang']"
] |
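To ground one technique the survey covers, here is a minimal LoRA-style adapter: the pre-trained weight is frozen and only a low-rank update is trained. Dimensions, rank, and scaling are illustrative assumptions, not any specific PEFT library's API.

```python
# Hedged sketch: LoRA-style low-rank adaptation of a frozen linear layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze pre-trained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))                  # same interface as the base layer
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable}/{total}")   # tiny fraction of the layer
```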
null | null |
2403.14613
| null | null |
http://arxiv.org/pdf/2403.14613v1
|
2024-03-21T17:58:04Z
|
2024-03-21T17:58:04Z
|
DreamReward: Text-to-3D Generation with Human Preference
|
3D content creation from text prompts has shown remarkable success recently. However, current text-to-3D methods often generate 3D results that do not align well with human preferences. In this paper, we present a comprehensive framework, coined DreamReward, to learn and improve text-to-3D models from human preference feedback. To begin with, we collect 25k expert comparisons based on a systematic annotation pipeline including rating and ranking. Then, we build Reward3D, the first general-purpose text-to-3D human preference reward model, to effectively encode human preferences. Building upon the 3D reward model, we finally perform theoretical analysis and present Reward3D Feedback Learning (DreamFL), a direct tuning algorithm to optimize multi-view diffusion models with a redefined scorer. Grounded by theoretical proof and extensive experimental comparisons, our DreamReward successfully generates high-fidelity and 3D-consistent results with significant boosts in prompt alignment with human intention. Our results demonstrate the great potential for learning from human feedback to improve text-to-3D models.
|
[
"['Junliang Ye' 'Fangfu Liu' 'Qixiu Li' 'Zhengyi Wang' 'Yikai Wang'\n 'Xinzhou Wang' 'Yueqi Duan' 'Jun Zhu']"
] |
null | null |
2403.14617
| null | null |
http://arxiv.org/pdf/2403.14617v2
|
2024-03-22T17:45:52Z
|
2024-03-21T17:59:03Z
|
Videoshop: Localized Semantic Video Editing with Noise-Extrapolated
Diffusion Inversion
|
We introduce Videoshop, a training-free video editing algorithm for localized semantic edits. Videoshop allows users to use any editing software, including Photoshop and generative inpainting, to modify the first frame; it automatically propagates those changes, with semantic, spatial, and temporally consistent motion, to the remaining frames. Unlike existing methods that enable edits only through imprecise textual instructions, Videoshop allows users to add or remove objects, semantically change objects, insert stock photos into videos, etc. with fine-grained control over locations and appearance. We achieve this through image-based video editing by inverting latents with noise extrapolation, from which we generate videos conditioned on the edited image. Videoshop produces higher quality edits against 6 baselines on 2 editing benchmarks using 10 evaluation metrics.
|
[
"['Xiang Fan' 'Anand Bhattad' 'Ranjay Krishna']"
] |
null | null |
2403.14623
| null | null |
http://arxiv.org/pdf/2403.14623v3
|
2024-05-27T04:44:22Z
|
2024-03-21T17:59:41Z
|
Simplified Diffusion Schrödinger Bridge
|
This paper introduces a novel theoretical simplification of the Diffusion Schrödinger Bridge (DSB) that facilitates its unification with Score-based Generative Models (SGMs), addressing the limitations of DSB in complex data generation and enabling faster convergence and enhanced performance. By employing SGMs as an initial solution for DSB, our approach capitalizes on the strengths of both frameworks, ensuring a more efficient training process and improving the performance of SGM. We also propose a reparameterization technique that, despite theoretical approximations, practically improves the network's fitting capabilities. Our extensive experimental evaluations confirm the effectiveness of the simplified DSB, demonstrating its significant improvements. We believe the contributions of this work pave the way for advanced generative modeling. The code is available at https://github.com/checkcrab/SDSB.
|
[
"['Zhicong Tang' 'Tiankai Hang' 'Shuyang Gu' 'Dong Chen' 'Baining Guo']"
] |
null | null |
2403.14624
| null | null |
http://arxiv.org/pdf/2403.14624v1
|
2024-03-21T17:59:50Z
|
2024-03-21T17:59:50Z
|
MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual
Math Problems?
|
The remarkable progress of Multi-modal Large Language Models (MLLMs) has garnered unparalleled attention, due to their superior performance in visual contexts. However, their capabilities in visual math problem-solving remain insufficiently evaluated and understood. We observe that current benchmarks incorporate excessive visual content within textual questions, which potentially assists MLLMs in deducing answers without truly interpreting the input diagrams. To this end, we introduce MathVerse, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into six distinct versions, each offering varying degrees of information content in multi-modality, contributing to 15K test samples in total. This approach allows MathVerse to comprehensively assess whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning. In addition, we propose a Chain-of-Thought (CoT) evaluation strategy for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps, and then score each step with detailed error analysis, which can reveal the intermediate CoT reasoning quality of MLLMs. We hope the MathVerse benchmark may provide unique insights to guide the future development of MLLMs. Project page: https://mathverse-cuhk.github.io
|
[
"['Renrui Zhang' 'Dongzhi Jiang' 'Yichi Zhang' 'Haokun Lin' 'Ziyu Guo'\n 'Pengshuo Qiu' 'Aojun Zhou' 'Pan Lu' 'Kai-Wei Chang' 'Peng Gao'\n 'Hongsheng Li']"
] |
null | null |
2403.14638
| null | null |
http://arxiv.org/pdf/2403.14638v1
|
2024-02-20T10:38:38Z
|
2024-02-20T10:38:38Z
|
Personalized Programming Guidance based on Deep Programming Learning
Style Capturing
|
With the rapid development of big data and AI technology, programming is in high demand and has become an essential skill for students. Meanwhile, researchers also focus on boosting the online judging system's guidance ability to reduce students' dropout rates. Previous studies have mainly aimed at enhancing learner engagement on online platforms by providing personalized recommendations. However, two significant challenges still need to be addressed in programming: C1) how to recognize complex programming behaviors; C2) how to capture intrinsic learning patterns that align with the actual learning process. To fill these gaps, in this paper, we propose a novel model called Programming Exercise Recommender with Learning Style (PERS), which simulates learners' intricate programming behaviors. Specifically, since programming is an iterative and trial-and-error process, we first introduce a positional encoding and a differentiating module to capture the changes across consecutive code submissions (which addresses C1). To better profile programming behaviors, we extend the Felder-Silverman learning style model, a classical pedagogical theory, to perceive intrinsic programming patterns. Based on this, we align three latent vectors to record and update programming ability, processing style, and understanding style, respectively (which addresses C2). We perform extensive experiments on two real-world datasets to verify the rationality of modeling programming learning styles and the effectiveness of PERS for personalized programming guidance.
|
[
"['Yingfan Liu' 'Renyu Zhu' 'Ming Gao']"
] |
null | null |
2403.14639
| null | null |
http://arxiv.org/abs/2403.14639v1
|
2024-02-20T18:34:24Z
|
2024-02-20T18:34:24Z
|
On Defining Smart Cities using Transformer Neural Networks
|
Cities worldwide are rapidly adopting smart technologies, transforming urban life. Despite this trend, a universally accepted definition of 'smart city' remains elusive. Past efforts to define it have not yielded a consensus, as evidenced by the numerous definitions in use. In this paper, we endeavored to create a new 'compromise' definition that should resonate with most experts previously involved in defining this concept and aimed to validate one of the existing definitions. We reviewed 60 definitions of smart cities from industry, academia, and various relevant organizations, employing transformer architecture-based generative AI and semantic text analysis to reach this compromise. We proposed a semantic similarity measure as an evaluation technique, which could generally be used to compare different smart city definitions, assessing their uniqueness or resemblance. Our methodology employed generative AI to analyze various existing definitions of smart cities, generating a list of potential new composite definitions. Each of these new definitions was then tested against the pre-existing individual definitions we have gathered, using cosine similarity as our metric. This process identified smart city definitions with the highest average cosine similarity, semantically positioning them as the closest on average to all the 60 individual definitions selected.
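The evaluation step described above reduces to averaging cosine similarities between each candidate definition and the collected ones. A minimal sketch follows; TF-IDF vectors stand in for whatever text representation the study actually used, and the definitions shown are placeholders.

```python
# Rank candidate 'compromise' definitions by mean cosine similarity to the
# collected definitions (sketch; TF-IDF stands in for the study's embeddings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing = ["A smart city uses ICT to improve services.",
            "A smart city integrates data to manage resources."]  # 60 in the study
candidates = ["A smart city applies data and ICT to improve urban services."]

vec = TfidfVectorizer().fit(existing + candidates)
E, C = vec.transform(existing), vec.transform(candidates)
scores = cosine_similarity(C, E).mean(axis=1)   # average similarity per candidate
best = candidates[scores.argmax()]
print(best, scores.max())
```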
|
[
"['Andrei Khurshudov']"
] |
null | null |
2403.14641
| null | null |
http://arxiv.org/pdf/2403.14641v1
|
2024-02-21T08:29:42Z
|
2024-02-21T08:29:42Z
|
Testing autonomous vehicles and AI: perspectives and challenges from
cybersecurity, transparency, robustness and fairness
|
This study explores the complexities of integrating Artificial Intelligence (AI) into Autonomous Vehicles (AVs), examining the challenges introduced by AI components and the impact on testing procedures, focusing on some of the essential requirements for trustworthy AI. Topics addressed include the role of AI at various operational layers of AVs, the implications of the EU's AI Act on AVs, and the need for new testing methodologies for Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). The study also provides a detailed analysis of the importance of cybersecurity audits, the need for explainability in AI decision-making processes, and protocols for assessing the robustness and ethical behaviour of predictive systems in AVs. The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology, highlighting the need for multidisciplinary expertise.
|
[
"['David Fernández Llorca' 'Ronan Hamon' 'Henrik Junklewitz'\n 'Kathrin Grosse' 'Lars Kunze' 'Patrick Seiniger' 'Robert Swaim'\n 'Nick Reed' 'Alexandre Alahi' 'Emilia Gómez' 'Ignacio Sánchez'\n 'Akos Kriston']"
] |
null | null |
2403.14642
| null | null |
http://arxiv.org/pdf/2403.14642v1
|
2024-02-21T12:15:58Z
|
2024-02-21T12:15:58Z
|
Revolutionising Distance Learning: A Comparative Study of Learning
Progress with AI-Driven Tutoring
|
Generative AI is expected to have a vast, positive impact on education; however, at present, this potential has not yet been demonstrated at scale at university level. In this study, we present the first evidence that generative AI can substantially increase the speed of learning in university students. We tested whether using the AI-powered teaching assistant Syntea affected the speed of learning of hundreds of distance learning students across more than 40 courses at the IU International University of Applied Sciences. Our analysis suggests that using Syntea reduced study time substantially (by about 27% on average) in the third month after the release of Syntea. Taken together, the magnitude of the effect and the scalability of the approach implicate generative AI as a key lever to significantly improve and accelerate learning through personalisation.
|
[
"['Moritz Möller' 'Gargi Nirmal' 'Dario Fabietti' 'Quintus Stierstorfer'\n 'Mark Zakhvatkin' 'Holger Sommerfeld' 'Sven Schütt']"
] |
null | null |
2403.14661
| null | null |
http://arxiv.org/pdf/2403.14661v1
|
2024-02-29T14:06:34Z
|
2024-02-29T14:06:34Z
|
Towards Modeling Learner Performance with Large Language Models
|
Recent work exploring the capabilities of pre-trained large language models (LLMs) has demonstrated their ability to act as general pattern machines by completing complex token sequences representing a wide array of tasks, including time-series prediction and robot control. This paper investigates whether the pattern recognition and sequence modeling capabilities of LLMs can be extended to the domain of knowledge tracing, a critical component in the development of intelligent tutoring systems (ITSs) that tailor educational experiences by predicting learner performance over time. In an empirical evaluation across multiple real-world datasets, we compare two approaches to using LLMs for this task, zero-shot prompting and model fine-tuning, with existing, non-LLM approaches to knowledge tracing. While LLM-based approaches do not achieve state-of-the-art performance, fine-tuned LLMs surpass the performance of naive baseline models and perform on par with standard Bayesian Knowledge Tracing approaches across multiple metrics. These findings suggest that the pattern recognition capabilities of LLMs can be used to model complex learning trajectories, opening a novel avenue for applying LLMs to educational contexts. The paper concludes with a discussion of the implications of these findings for future research, suggesting that further refinements and a deeper understanding of LLMs' predictive mechanisms could lead to enhanced performance in knowledge tracing tasks.
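For the zero-shot prompting variant, a learner's interaction history has to be serialized into a token sequence for the LLM. The sketch below shows one plausible serialization; the format, skill names, and wording are hypothetical, not the paper's exact prompt.

```python
# Hypothetical serialization of a knowledge-tracing history into a zero-shot
# prompt (the paper's exact prompt format is not reproduced here).
history = [("skill_12", 1), ("skill_12", 0), ("skill_7", 1)]  # (skill, correct)

lines = [f"Question on {s}: {'correct' if c else 'incorrect'}" for s, c in history]
prompt = ("A student produced the following sequence of answers:\n"
          + "\n".join(lines)
          + "\nWill the student answer the next question on skill_12 "
            "correctly? Answer 'correct' or 'incorrect'.")
print(prompt)
```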
|
[
"['Seyed Parsa Neshaei' 'Richard Lee Davis' 'Adam Hazimeh'\n 'Bojan Lazarevski' 'Pierre Dillenbourg' 'Tanja Käser']"
] |
null | null |
2403.14662
| null | null |
http://arxiv.org/pdf/2403.14662v1
|
2024-02-29T19:17:11Z
|
2024-02-29T19:17:11Z
|
Case Studies of AI Policy Development in Africa
|
Artificial Intelligence (AI) requires new ways of evaluating national technology use and strategy for African nations. We conduct a survey of existing 'readiness' assessments both for general digital adoption and for AI policy in particular. We conclude that existing global readiness assessments do not fully capture African states' progress in AI readiness and lay the groundwork for how assessments can be better used for the African context. We consider the extent to which these indicators map to the African context and what these indicators miss in capturing African states' on-the-ground work in meeting AI capability. Through case studies of four African nations of diverse geographic and economic dimensions, we identify nuances missed by global assessments and offer high-level policy considerations for how states can best improve their AI readiness standards and prepare their societies to capture the benefits of AI.
|
[
"['Kadijatou Diallo' 'Jonathan Smith' 'Chinasa T. Okolo' 'Dorcas Nyamwaya'\n 'Jonas Kgomo' 'Richard Ngamita']"
] |
null | null |
2403.14663
| null | null |
http://arxiv.org/pdf/2403.14663v1
|
2024-03-01T13:18:08Z
|
2024-03-01T13:18:08Z
|
Machine Learning Predicts Upper Secondary Education Dropout as Early as
the End of Primary School
|
Education plays a pivotal role in alleviating poverty, driving economic growth, and empowering individuals, thereby significantly influencing societal and personal development. However, the persistent issue of school dropout poses a significant challenge, with its effects extending beyond the individual. While previous research has employed machine learning for dropout classification, these studies often suffer from a short-term focus, relying on data collected only a few years into the study period. This study expanded the modeling horizon by utilizing a 13-year longitudinal dataset, encompassing data from kindergarten to Grade 9. Our methodology incorporated a comprehensive range of parameters, including students' academic and cognitive skills, motivation, behavior, well-being, and officially recorded dropout data. The machine learning models developed in this study demonstrated notable classification ability, achieving a mean area under the curve (AUC) of 0.61 with data up to Grade 6 and an improved AUC of 0.65 with data up to Grade 9. Further data collection and independent correlational and causal analyses are crucial. In future iterations, such models may have the potential to proactively support educators' processes and existing protocols for identifying at-risk students, thereby potentially aiding in the reinvention of student retention and success strategies and ultimately contributing to improved educational outcomes.
|
[
"['Maria Psyridou' 'Fabi Prezja' 'Minna Torppa' 'Marja-Kristiina Lerkkanen'\n 'Anna-Maija Poikkeus' 'Kati Vasalampi']"
] |
null | null |
2403.14664
| null | null |
http://arxiv.org/pdf/2403.14664v1
|
2024-03-01T23:39:03Z
|
2024-03-01T23:39:03Z
|
ClickTree: A Tree-based Method for Predicting Math Students' Performance
Based on Clickstream Data
|
The prediction of student performance and the analysis of students' learning behavior play an important role in enhancing online courses. By analysing a massive amount of clickstream data that captures student behavior, educators can gain valuable insights into the factors that influence academic outcomes and identify areas of improvement in courses. In this study, we developed ClickTree, a tree-based methodology, to predict student performance in mathematical assignments based on students' clickstream data. We extracted a set of features, including problem-level, assignment-level and student-level features, from the extensive clickstream data and trained a CatBoost tree to predict whether a student successfully answers a problem in an assignment. The developed method achieved an AUC of 0.78844 in the Educational Data Mining Cup 2023 and ranked second in the competition. Furthermore, our results indicate that students encounter more difficulties with problem types in which they must select a subset of answers from a given set, as well as with Algebra II problem subjects. Additionally, students who performed well in answering end-unit assignment problems engaged more with in-unit assignments and answered more problems correctly, while those who struggled had a higher tutoring request rate. The proposed method can be utilized to improve students' learning experiences, and the above insights can be integrated into mathematical courses to enhance students' learning outcomes.
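A minimal sketch of the final prediction step, assuming feature extraction has already produced a tabular matrix; the features and labels below are synthetic stand-ins for the clickstream-derived features the paper describes.

```python
# Train a CatBoost classifier on clickstream-derived features (sketch;
# the features here are synthetic placeholders).
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # e.g. attempts, hints, time-on-task
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = CatBoostClassifier(iterations=200, depth=6, verbose=0)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```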
|
[
"['Narjes Rohani' 'Behnam Rohani' 'Areti Manataki']"
] |
null | null |
2403.14666
| null | null |
http://arxiv.org/pdf/2403.14666v1
|
2024-03-03T03:01:14Z
|
2024-03-03T03:01:14Z
|
SyllabusQA: A Course Logistics Question Answering Dataset
|
Automated teaching assistants and chatbots have significant potential to reduce the workload of human instructors, especially for logistics-related question answering, which is important to students yet repetitive for instructors. However, due to privacy concerns, there is a lack of publicly available datasets. We introduce SyllabusQA, an open-source dataset with 63 real course syllabi covering 36 majors, containing 5,078 open-ended course logistics-related question-answer pairs that are diverse in both question types and answer formats. Since many logistics-related questions contain critical information like the date of an exam, it is important to evaluate the factuality of answers. We benchmark several strong baselines on this task, from large language model prompting to retrieval-augmented generation. We find that, despite performing close to humans on traditional metrics of textual similarity, automated approaches still fall significantly short of humans in terms of fact precision.
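One of the benchmarked baseline families is retrieval-augmented generation. The sketch below shows only the retrieval step over syllabus chunks; the snippets are placeholders and the LLM generation call is omitted, so this is an illustration of the approach rather than the paper's pipeline.

```python
# Retrieval step of a RAG baseline over syllabus chunks (sketch; snippets
# are placeholders and the LLM call is omitted).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = ["The midterm exam is on October 12 in room 204.",
          "Late homework loses 10% per day.",
          "Office hours are Tuesdays 2-4pm."]
question = "When is the midterm?"

vec = TfidfVectorizer().fit(chunks)
sims = cosine_similarity(vec.transform([question]), vec.transform(chunks))[0]
top = chunks[sims.argmax()]            # retrieved context for the LLM prompt
prompt = f"Context: {top}\nQuestion: {question}\nAnswer concisely."
print(prompt)
```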
|
[
"['Nigel Fernandez' 'Alexander Scarlatos' 'Andrew Lan']"
] |
null | null |
2403.14668
| null | null |
http://arxiv.org/pdf/2403.14668v1
|
2024-03-04T08:14:07Z
|
2024-03-04T08:14:07Z
|
Predicting Learning Performance with Large Language Models: A Study in
Adult Literacy
|
Intelligent Tutoring Systems (ITSs) have significantly enhanced adult literacy training, a key factor for societal participation, employment opportunities, and lifelong learning. Our study investigates the application of advanced AI models, including Large Language Models (LLMs) like GPT-4, for predicting learning performance in adult literacy programs in ITSs. This research is motivated by the potential of LLMs to predict learning performance based on their inherent reasoning and computational capabilities. Using reading comprehension datasets from the ITS AutoTutor, we evaluate the predictive capabilities of GPT-4 versus traditional machine learning methods in predicting learning performance through five-fold cross-validation. Our findings show that GPT-4 demonstrates competitive predictive ability compared with traditional machine learning methods such as Bayesian Knowledge Tracing, Performance Factor Analysis, Sparse Factor Analysis Lite (SPARFA-Lite), tensor factorization and eXtreme Gradient Boosting (XGBoost). While XGBoost (trained on a local machine) outperforms GPT-4 in predictive accuracy, GPT-4-selected XGBoost and its subsequent tuning on the GPT-4 platform demonstrate superior performance compared to local machine execution. Moreover, our investigation into hyper-parameter tuning by GPT-4 versus grid search suggests comparable performance, albeit with less stability in the automated approach, using XGBoost as the case study. Our study contributes to the field by highlighting the potential of integrating LLMs with traditional machine learning models to enhance predictive accuracy and personalize adult literacy education, setting a foundation for future research in applying LLMs within ITSs.
|
[
"['Liang Zhang' 'Jionghao Lin' 'Conrad Borchers' 'John Sabatini'\n 'John Hollander' 'Meng Cao' 'Xiangen Hu']"
] |
null | null |
2403.14671
| null | null |
http://arxiv.org/pdf/2403.14671v2
|
2024-03-30T16:25:21Z
|
2024-03-05T08:50:21Z
|
Understanding the Transit Gap: A Comparative Study of On-Demand Bus
Services and Urban Climate Resilience in South End, Charlotte, NC and
Avondale, Chattanooga, TN
|
Urban design significantly impacts sustainability, particularly in the context of public transit efficiency and carbon emissions reduction. This study explores two neighborhoods with distinct urban designs: South End, Charlotte, NC, featuring a dynamic mixed-use urban design pattern, and Avondale, Chattanooga, TN, with a residential suburban grid layout. Using the TRANSIT-GYM tool, we assess the impact of increased bus utilization in these different urban settings on traffic and CO2 emissions. Our results highlight the critical role of urban design and planning in transit system efficiency. In South End, the mixed-use design led to more substantial emission reductions, indicating that urban layout can significantly influence public transit outcomes. Tailored strategies that consider the unique urban design elements are essential for climate resilience. Notably, doubling bus utilization decreased daily emissions by 10.18% in South End and 8.13% in Avondale, with a corresponding reduction in overall traffic. A target of 50% bus utilization saw emissions drop by 21.45% in South End and 14.50% in Avondale. At an idealistic goal of 70% bus utilization, South End and Avondale witnessed emission reductions of 37.22% and 27.80%, respectively. These insights are crucial for urban designers and policymakers in developing sustainable urban landscapes.
|
[
"['Sanaz Sadat Hosseini' 'Babak Rahimi Ardabili' 'Mona Azarbayjani'\n 'Srinivas Pulugurtha' 'Hamed Tabkhi']"
] |
null | null |
2403.14676
| null | null |
http://arxiv.org/abs/2403.14676v1
|
2024-03-09T13:48:20Z
|
2024-03-09T13:48:20Z
|
Unified Uncertainty Estimation for Cognitive Diagnosis Models
|
Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, based on which users can get personalized instructions. As the measurement is not always reliable due to the weak links of the models and data, the uncertainty of measurement also offers important information for decisions. However, research on uncertainty estimation lags behind that on advanced model structures for cognitive diagnosis. Existing approaches have limited efficiency and leave a gap for sophisticated models that have interaction function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach in order to adapt to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into a data aspect and a model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
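The reparameterization step the authors modify can be illustrated in a few lines. The sketch below shows the standard Gaussian reparameterization trick plus a softplus transform for a positivity-constrained parameter; the transform is our illustration of adapting to a different domain, not the paper's exact construction.

```python
# Reparameterization trick (sketch): sample parameters from a learned
# posterior so the sampling step stays differentiable. The softplus map
# illustrates handling a positivity-constrained parameter domain.
import torch
import torch.nn.functional as F

mu = torch.zeros(4, requires_grad=True)    # posterior mean
rho = torch.zeros(4, requires_grad=True)   # unconstrained scale parameter
sigma = F.softplus(rho)                    # ensure sigma > 0
eps = torch.randn(4)
theta = mu + sigma * eps                   # unconstrained parameter sample
theta_pos = F.softplus(theta)              # map onto a positive domain

loss = theta_pos.sum()                     # placeholder objective
loss.backward()                            # gradients flow back to mu and rho
print(mu.grad, rho.grad)
```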
|
[
"['Fei Wang' 'Qi Liu' 'Enhong Chen' 'Chuanren Liu' 'Zhenya Huang'\n 'Jinze Wu' 'Shijin Wang']"
] |
null | null |
2403.14678
| null | null |
http://arxiv.org/pdf/2403.14678v1
|
2024-03-12T11:38:45Z
|
2024-03-12T11:38:45Z
|
Towards a Framework for Deep Learning Certification in Safety-Critical
Applications Using Inherently Safe Design and Run-Time Error Detection
|
Although an ever-growing number of applications employ deep learning based systems for prediction, decision-making, or state estimation, almost no certification processes have been established that would allow such systems to be deployed in safety-critical applications. In this work we consider real-world problems arising in aviation and other safety-critical areas, and investigate their requirements for a certified model. To this end, we investigate methodologies from the machine learning research community aimed towards verifying robustness and reliability of deep learning systems, and evaluate these methodologies with regard to their applicability to real-world problems. Then, we establish a new framework towards deep learning certification based on (i) inherently safe design, and (ii) run-time error detection. Using a concrete use case from aviation, we show how deep learning models can recover disentangled variables through the use of weakly-supervised representation learning. We argue that such a system design is inherently less prone to common model failures, and can be verified to encode underlying mechanisms governing the data. Then, we investigate four techniques related to the run-time safety of a model, namely (i) uncertainty quantification, (ii) out-of-distribution detection, (iii) feature collapse, and (iv) adversarial attacks. We evaluate each for their applicability and formulate a set of desiderata that a certified model should fulfill. Finally, we propose a novel model structure that exhibits all desired properties discussed in this work, and is able to make regression and uncertainty predictions, as well as detect out-of-distribution inputs, while requiring no regression labels to train. We conclude with a discussion of the current state and expected future progress of deep learning certification, and its industrial and social implications.
|
[
"['Romeo Valentin']"
] |
null | null |
2403.14679
| null | null |
http://arxiv.org/pdf/2403.14679v1
|
2024-03-12T15:31:14Z
|
2024-03-12T15:31:14Z
|
Continual Learning by Three-Phase Consolidation
|
TPC (Three-Phase Consolidation) is introduced here as a simple but effective approach to continually learn new classes (and/or instances of known classes) while controlling forgetting of previous knowledge. Each experience (a.k.a. task) is learned in three phases characterized by different rules and learning dynamics, aimed at removing the class-bias problem (due to class imbalance) and limiting gradient-based corrections to prevent forgetting of underrepresented classes. Several experiments on complex datasets demonstrate its accuracy and efficiency advantages over competitive existing approaches. The algorithm and all the results presented in this paper are fully reproducible thanks to their publication on the Avalanche open framework for continual learning.
|
[
"['Davide Maltoni' 'Lorenzo Pellegrini']"
] |
null | null |
2403.14682
| null | null |
http://arxiv.org/pdf/2403.14682v1
|
2024-03-12T22:48:23Z
|
2024-03-12T22:48:23Z
|
Deep Generative Domain Adaptation with Temporal Relation Knowledge for
Cross-User Activity Recognition
|
In human activity recognition (HAR), the assumption that training and testing data are independent and identically distributed (i.i.d.) often fails, particularly in cross-user scenarios where data distributions vary significantly. This discrepancy highlights the limitations of conventional domain adaptation methods in HAR, which typically overlook the inherent temporal relations in time-series data. To bridge this gap, our study introduces a Conditional Variational Autoencoder with Universal Sequence Mapping (CVAE-USM) approach, which addresses the unique challenges of time-series domain adaptation in HAR by relaxing the i.i.d. assumption and leveraging temporal relations to align data distributions effectively across different users. This method combines the strengths of Variational Autoencoder (VAE) and Universal Sequence Mapping (USM) to capture and utilize common temporal patterns between users for improved activity recognition. Our results, evaluated on two public HAR datasets (OPPT and PAMAP2), demonstrate that CVAE-USM outperforms existing state-of-the-art methods, offering a more accurate and generalizable solution for cross-user activity recognition.
|
[
"['Xiaozhou Ye' 'Kevin I-Kai Wang']"
] |
null | null |
2403.14683
| null | null |
http://arxiv.org/pdf/2403.14683v1
|
2024-03-13T05:44:50Z
|
2024-03-13T05:44:50Z
|
A Moral Imperative: The Need for Continual Superalignment of Large
Language Models
|
This paper examines the challenges associated with achieving life-long superalignment in AI systems, particularly large language models (LLMs). Superalignment is a theoretical framework that aspires to ensure that superintelligent AI systems act in accordance with human values and goals. Despite its promising vision, we argue that achieving superalignment requires substantial changes in current LLM architectures due to their inherent limitations in comprehending and adapting to the dynamic nature of human ethics and evolving global scenarios. We dissect the challenges of encoding an ever-changing spectrum of human values into LLMs, highlighting the discrepancies between static AI models and the dynamic nature of human societies. To illustrate these challenges, we analyze two distinct examples: one demonstrates a qualitative shift in human values, while the other presents a quantifiable change. Through these examples, we illustrate how LLMs, constrained by their training data, fail to align with contemporary human values and scenarios. The paper concludes by exploring potential strategies to address and possibly mitigate these alignment discrepancies, suggesting a path forward in the pursuit of more adaptable and responsive AI systems.
|
[
"['Gokul Puthumanaillam' 'Manav Vora' 'Pranay Thangeda' 'Melkior Ornik']"
] |
null | null |
2403.14684
| null | null |
http://arxiv.org/pdf/2403.14684v1
|
2024-03-13T13:51:12Z
|
2024-03-13T13:51:12Z
|
FOCIL: Finetune-and-Freeze for Online Class Incremental Learning by
Training Randomly Pruned Sparse Experts
|
Class incremental learning (CIL) in an online continual learning setting strives to acquire knowledge on a series of novel classes from a data stream, using each data point only once for training. This is more realistic compared to offline modes, where it is assumed that all data from novel class(es) is readily available. Current online CIL approaches store a subset of the previous data, which creates heavy overhead costs in terms of both memory and computation, as well as privacy issues. In this paper, we propose a new online CIL approach called FOCIL. It fine-tunes the main architecture continually by training a randomly pruned sparse subnetwork for each task. Then, it freezes the trained connections to prevent forgetting. FOCIL also determines the sparsity level and learning rate per task adaptively and ensures (almost) zero forgetting across all tasks without storing any replay data. Experimental results on 10-Task CIFAR100, 20-Task CIFAR100, and 100-Task TinyImagenet demonstrate that our method outperforms the SOTA by a large margin. The code is publicly available at https://github.com/muratonuryildirim/FOCIL.
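The core mechanics (a random sparse mask per task, then freezing) can be sketched as below. This is a simplification: the sparsity level is a hypothetical constant here, whereas the method adapts sparsity and learning rate per task.

```python
# Sketch of per-task random pruning plus freezing (simplified; the real
# method also adapts sparsity and learning rate per task).
import torch
import torch.nn as nn

layer = nn.Linear(64, 64)
frozen = torch.zeros_like(layer.weight, dtype=torch.bool)  # already-trained weights

def new_task_mask(sparsity=0.9):
    # Randomly select a sparse subnetwork among still-free weights.
    free = ~frozen
    return (torch.rand_like(layer.weight) > sparsity) & free

task_mask = new_task_mask()

def apply_grad_mask():
    # Train only this task's subnetwork; frozen weights receive zero gradient.
    if layer.weight.grad is not None:
        layer.weight.grad *= task_mask

# ... run the task's training loop, calling apply_grad_mask() after backward()
frozen |= task_mask   # freeze the trained connections before the next task
```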
|
[
"['Murat Onur Yildirim' 'Elif Ceren Gok Yildirim'\n 'Decebal Constantin Mocanu' 'Joaquin Vanschoren']"
] |
null | null |
2403.14685
| null | null |
http://arxiv.org/pdf/2403.14685v1
|
2024-03-13T14:07:20Z
|
2024-03-13T14:07:20Z
|
Cyclical Log Annealing as a Learning Rate Scheduler
|
A learning rate scheduler is a predefined set of instructions for varying search step sizes during model training. This paper introduces a new logarithmic method that uses harsh restarting of step sizes within stochastic gradient descent. Cyclical log annealing implements the restart pattern more aggressively, potentially allowing the use of greedier algorithms within the online convex optimization framework. The algorithm was tested on the CIFAR-10 image dataset and performed comparably to cosine annealing on large transformer-enhanced residual neural networks. Future experiments will involve testing the scheduler in generative adversarial networks and finding the best scheduler parameters through further experiments.
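The abstract does not give the exact formula, so the sketch below is one plausible form of a cyclical logarithmic schedule with warm restarts, written by analogy to cosine annealing with restarts; the cycle length and learning-rate bounds are arbitrary illustration values.

```python
# One plausible cyclical logarithmic schedule with harsh restarts
# (an illustrative analogue of cosine annealing with restarts;
# not the paper's exact formula).
import math

def log_annealing(step, cycle_len=1000, eta_max=0.1, eta_min=1e-4):
    t = step % cycle_len                       # harsh restart at each cycle
    frac = math.log(1 + t) / math.log(1 + cycle_len)
    return eta_min + (eta_max - eta_min) * (1.0 - frac)

for s in (0, 250, 999, 1000):                  # decays within a cycle, then restarts
    print(s, round(log_annealing(s), 5))
```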
|
[
"['Philip Naveen']"
] |
null | null |
2403.14687
| null | null |
http://arxiv.org/pdf/2403.14687v1
|
2024-03-13T18:07:17Z
|
2024-03-13T18:07:17Z
|
On the Performance of Imputation Techniques for Missing Values on
Healthcare Datasets
|
Missing values are a common characteristic of real-world datasets, especially healthcare data. This can be frustrating when applying machine learning algorithms to such datasets, simply because most machine learning models perform poorly in the presence of missing values. The aim of this study is to compare the performance of seven imputation techniques, namely Mean imputation, Median imputation, Last Observation Carried Forward (LOCF) imputation, K-Nearest Neighbor (KNN) imputation, Interpolation imputation, MissForest imputation, and Multiple Imputation by Chained Equations (MICE), on three healthcare datasets. Missing values were introduced into the datasets at rates of 10%, 15%, 20%, and 25%, and the imputation techniques were employed to impute these missing values. Their performance was evaluated using root mean squared error (RMSE) and mean absolute error (MAE). The results show that MissForest imputation performs best, followed by MICE imputation. Additionally, we examine whether it is better to perform feature selection before imputation or vice versa, using recall, precision, F1-score, and accuracy as metrics. Given the limited literature and ongoing debate on this subject among researchers, we hope that the results of this experiment will encourage data scientists and researchers to perform imputation before feature selection when dealing with data containing missing values.
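A comparison of this kind can be sketched with standard scikit-learn imputers; note that MICE is approximated here by `IterativeImputer`, the synthetic data and 15% missingness rate are illustration choices, and MissForest and LOCF are omitted for brevity.

```python
# Compare imputers on data with artificially introduced missingness (sketch;
# MICE is approximated by scikit-learn's IterativeImputer).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_true = rng.normal(size=(500, 8))
X_miss = X_true.copy()
mask = rng.random(X_miss.shape) < 0.15         # 15% missing at random
X_miss[mask] = np.nan

imputers = {"mean": SimpleImputer(strategy="mean"),
            "median": SimpleImputer(strategy="median"),
            "knn": KNNImputer(n_neighbors=5),
            "mice": IterativeImputer(random_state=0)}
for name, imp in imputers.items():
    X_hat = imp.fit_transform(X_miss)
    rmse = mean_squared_error(X_true[mask], X_hat[mask]) ** 0.5
    print(f"{name}: RMSE={rmse:.3f}")
```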
|
[
"['Luke Oluwaseye Joel' 'Wesley Doorsamy' 'Babu Sena Paul']"
] |
null | null |
2403.14688
| null | null |
http://arxiv.org/pdf/2403.14688v1
|
2024-03-13T20:35:44Z
|
2024-03-13T20:35:44Z
|
Kernel Alignment for Unsupervised Feature Selection via Matrix
Factorization
|
By removing irrelevant and redundant features, feature selection aims to find a good representation of the original features. With the prevalence of unlabeled data, unsupervised feature selection has been proven effective in alleviating the so-called curse of dimensionality. Most existing matrix factorization-based unsupervised feature selection methods are built upon subspace learning, but they have limitations in capturing nonlinear structural information among features. It is well known that kernel techniques can capture nonlinear structural information. In this paper, we construct a model by integrating kernel functions and kernel alignment, which can be equivalently characterized as a matrix factorization problem. However, such an extension raises another issue: the algorithm's performance heavily depends on the choice of kernel, which is often unknown a priori. Therefore, we further propose a multiple kernel-based learning method. By doing so, our model can learn both linear and nonlinear similarity information and automatically generate the most appropriate kernel. Experimental analysis on real-world data demonstrates that the two proposed methods outperform other classic and state-of-the-art unsupervised feature selection methods in terms of clustering results and redundancy reduction in almost all datasets tested.
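The alignment criterion itself is standard and can be computed in a few lines; the sketch below uses the usual centered kernel alignment definition to compare a linear and an RBF kernel on random data, purely to illustrate the quantity the model optimizes.

```python
# Centered kernel alignment between two kernel matrices (sketch; this is
# the standard definition, shown to illustrate the alignment criterion).
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return H @ K @ H

def kernel_alignment(K1, K2):
    K1c, K2c = center(K1), center(K2)
    num = np.sum(K1c * K2c)                    # Frobenius inner product
    return num / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
K_lin = X @ X.T                                # linear kernel
sq = np.sum(X**2, axis=1)
D = sq[:, None] + sq[None, :] - 2 * K_lin      # squared pairwise distances
K_rbf = np.exp(-D / (2 * X.shape[1]))          # RBF kernel
print(kernel_alignment(K_lin, K_rbf))
```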
|
[
"['Ziyuan Lin' 'Deanna Needell']"
] |
null | null |
2403.14689
| null | null |
http://arxiv.org/pdf/2403.14689v2
|
2024-03-25T04:21:13Z
|
2024-03-13T22:38:08Z
|
Developing and Deploying Industry Standards for Artificial Intelligence
in Education (AIED): Challenges, Strategies, and Future Directions
|
The adoption of Artificial Intelligence in Education (AIED) holds the promise of revolutionizing educational practices by offering personalized learning experiences, automating administrative and pedagogical tasks, and reducing the cost of content creation. However, the lack of standardized practices in the development and deployment of AIED solutions has led to fragmented ecosystems, which presents challenges in interoperability, scalability, and ethical governance. This article aims to address the critical need to develop and implement industry standards in AIED, offering a comprehensive analysis of the current landscape, challenges, and strategic approaches to overcome these obstacles. We begin by examining the various applications of AIED in various educational settings and identify key areas lacking in standardization, including system interoperability, ontology mapping, data integration, evaluation, and ethical governance. Then, we propose a multi-tiered framework for establishing robust industry standards for AIED. In addition, we discuss methodologies for the iterative development and deployment of standards, incorporating feedback loops from real-world applications to refine and adapt standards over time. The paper also highlights the role of emerging technologies and pedagogical theories in shaping future standards for AIED. Finally, we outline a strategic roadmap for stakeholders to implement these standards, fostering a cohesive and ethical AIED ecosystem. By establishing comprehensive industry standards, such as those by IEEE Artificial Intelligence Standards Committee (AISC) and International Organization for Standardization (ISO), we can accelerate and scale AIED solutions to improve educational outcomes, ensuring that technological advances align with the principles of inclusivity, fairness, and educational excellence.
|
[
"['Richard Tong' 'Haoyang Li' 'Joleen Liang' 'Qingsong Wen']"
] |
null | null |
2403.14690
| null | null |
http://arxiv.org/pdf/2403.14690v1
|
2024-03-14T11:00:09Z
|
2024-03-14T11:00:09Z
|
Incorporating Graph Attention Mechanism into Geometric Problem Solving
Based on Deep Reinforcement Learning
|
In the context of online education, designing an automatic solver for geometric problems has been considered a crucial step towards general math Artificial Intelligence (AI), empowered by natural language understanding and traditional logical inference. In most instances, problems are addressed by adding auxiliary components such as lines or points. However, adding auxiliary components automatically is challenging due to the complexity of selecting suitable auxiliary components, especially when pivotal decisions have to be made. The state-of-the-art performance has been achieved by exhausting all possible strategies from the category library to identify the one with the maximum likelihood. However, an extensive strategy search has to be applied to trade accuracy for efficiency. To add auxiliary components automatically and efficiently, we present a deep reinforcement learning framework based on a language model such as BERT. We first apply the graph attention mechanism to reduce the strategy search space, called AttnStrategy, which focuses only on conclusion-related components. Meanwhile, a novel algorithm, named Automatically Adding Auxiliary Components using a Reinforcement Learning framework (A3C-RL), is proposed by forcing an agent to select top strategies, incorporating AttnStrategy and BERT as memory components. Results from extensive experiments show that the proposed A3C-RL algorithm can substantially enhance average precision by 32.7% compared to traditional MCTS. In addition, the A3C-RL algorithm outperforms humans on geometric questions from the annual University Entrance Mathematical Examination of China.
|
[
"['Xiuqin Zhong' 'Shengyuan Yan' 'Gongqi Lin' 'Hongguang Fu' 'Liang Xu'\n 'Siwen Jiang' 'Lei Huang' 'Wei Fang']"
] |
null | null |
2403.14695
| null | null |
http://arxiv.org/pdf/2403.14695v1
|
2024-03-15T15:05:59Z
|
2024-03-15T15:05:59Z
|
Chain-structured neural architecture search for financial time series
forecasting
|
We compare three popular neural architecture search strategies on chain-structured search spaces: Bayesian optimization, the hyperband method, and reinforcement learning in the context of financial time series forecasting.
|
[
"['Denis Levchenko' 'Efstratios Rappos' 'Shabnam Ataee' 'Biagio Nigro'\n 'Stephan Robert']"
] |
null | null |
2403.14709
| null | null |
http://arxiv.org/pdf/2403.14709v1
|
2024-03-18T08:16:02Z
|
2024-03-18T08:16:02Z
|
ClimateQ&A: Bridging the gap between climate scientists and the general
public
|
This research paper investigates public views on climate change and biodiversity loss by analyzing questions asked to the ClimateQ&A platform. ClimateQ&A is a conversational agent that uses LLMs to respond to queries based on over 14,000 pages of scientific literature from the IPCC and IPBES reports. Launched online in March 2023, the tool has gathered over 30,000 questions, mainly from a French audience. Its chatbot interface allows for the free formulation of questions related to nature*. While its main goal is to make nature science more accessible, it also allows for the collection and analysis of questions and their themes. Unlike traditional surveys involving closed questions, this novel method offers a fresh perspective on individual interrogations about nature. Running NLP clustering algorithms on a sample of 3,425 questions, we find that a significant 25.8% inquire about how climate change and biodiversity loss will affect them personally (e.g., where they live or vacation, their consumption habits) and the specific impacts of their actions on nature (e.g., transportation or food choices). This suggests that traditional methods of surveying may not identify all existing knowledge gaps, and that relying solely on IPCC and IPBES reports may not address all individual inquiries about climate and biodiversity, potentially affecting public understanding and action on these issues. *we use 'nature' as an umbrella term for 'climate change' and 'biodiversity loss'
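The clustering step can be reproduced in outline with standard tools. The sketch below uses TF-IDF features and k-means as stand-ins, since the abstract does not specify the embedding or clustering algorithm used, and the questions shown are invented examples.

```python
# Cluster user questions by theme (sketch; TF-IDF + k-means stand in for
# whatever embedding and clustering the study actually used).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = ["Will my city be flooded by 2050?",
             "How does eating meat affect biodiversity?",
             "Is flying worse than driving for the climate?",
             "Which species are most at risk near me?"]

X = TfidfVectorizer().fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for q, label in zip(questions, labels):
    print(label, q)
```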
|
[
"['Natalia De La Calzada' 'Théo Alves Da Costa' 'Annabelle Blangero'\n 'Nicolas Chesneau']"
] |
null | null |
2403.14711
| null | null |
http://arxiv.org/pdf/2403.14711v1
|
2024-03-18T13:25:57Z
|
2024-03-18T13:25:57Z
|
Human-in-the-Loop AI for Cheating Ring Detection
|
Online exams have become popular in recent years due to their accessibility. However, some concerns have been raised about the security of the online exams, particularly in the context of professional cheating services aiding malicious test takers in passing exams, forming so-called "cheating rings". In this paper, we introduce a human-in-the-loop AI cheating ring detection system designed to detect and deter these cheating rings. We outline the underlying logic of this human-in-the-loop AI system, exploring its design principles tailored to achieve its objectives of detecting cheaters. Moreover, we illustrate the methodologies used to evaluate its performance and fairness, aiming to mitigate the unintended risks associated with the AI system. The design and development of the system adhere to Responsible AI (RAI) standards, ensuring that ethical considerations are integrated throughout the entire development process.
|
[
"['Yong-Siang Shih' 'Manqian Liao' 'Ruidong Liu' 'Mirza Basim Baig']"
] |