| categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
|---|---|---|---|---|---|---|---|---|---|---|
| null | null | 2403.05034 | null | null | http://arxiv.org/pdf/2403.05034v1 | 2024-03-08T04:25:29Z | 2024-03-08T04:25:29Z | CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model | Feed-forward 3D generative models like the Large Reconstruction Model (LRM) have demonstrated exceptional generation speed. However, the transformer-based methods do not leverage the geometric priors of the triplane component in their architecture, often leading to sub-optimal quality given the limited size of 3D data and slow training. In this work, we present the Convolutional Reconstruction Model (CRM), a high-fidelity feed-forward single image-to-3D generative model. Recognizing the limitations posed by sparse 3D data, we highlight the necessity of integrating geometric priors into network design. CRM builds on the key observation that the visualization of triplane exhibits spatial correspondence of six orthographic images. First, it generates six orthographic view images from a single input image, then feeds these images into a convolutional U-Net, leveraging its strong pixel-level alignment capabilities and significant bandwidth to create a high-resolution triplane. CRM further employs Flexicubes as geometric representation, facilitating direct end-to-end optimization on textured meshes. Overall, our model delivers a high-fidelity textured mesh from an image in just 10 seconds, without any test-time optimization. | Zhengyi Wang, Yikai Wang, Yifei Chen, Chendong Xiang, Shuo Chen, Dajiang Yu, Chongxuan Li, Hang Su, Jun Zhu |
| null | null | 2403.05045 | null | null | http://arxiv.org/pdf/2403.05045v1 | 2024-03-08T04:44:25Z | 2024-03-08T04:44:25Z | Are Human Conversations Special? A Large Language Model Perspective | This study analyzes changes in the attention mechanisms of large language models (LLMs) when used to understand natural conversations between humans (human-human). We analyze three use cases of LLMs: interactions over web content, code, and mathematical texts. By analyzing attention distance, dispersion, and interdependency across these domains, we highlight the unique challenges posed by conversational data. Notably, conversations require nuanced handling of long-term contextual relationships and exhibit higher complexity through their attention patterns. Our findings reveal that while language models exhibit domain-specific attention behaviors, there is a significant gap in their ability to specialize in human conversations. Through detailed attention entropy analysis and t-SNE visualizations, we demonstrate the need for models trained with a diverse array of high-quality conversational data to enhance understanding and generation of human-like dialogue. This research highlights the importance of domain specialization in language models and suggests pathways for future advancement in modeling human conversational nuances. | Toshish Jawale, Chaitanya Animesh, Sekhar Vallath, Kartik Talamadupula, Larry Heck |
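
The attention-entropy analysis mentioned in 2403.05045 reduces to computing the Shannon entropy of each query's attention distribution. A minimal NumPy sketch follows; the random softmax matrix stands in for a real model's attention weights, and the function name is illustrative rather than the paper's code.

```python
import numpy as np

def attention_entropy(attn):
    """Shannon entropy of each row of an attention matrix.

    attn: (num_queries, num_keys) array, each row summing to 1.
    Higher entropy means more dispersed attention.
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(attn * np.log(attn + eps), axis=-1)

# Stand-in for one head's attention weights: a softmax over random scores.
rng = np.random.default_rng(0)
scores = rng.standard_normal((8, 128))
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
print(attention_entropy(attn).mean())  # mean entropy over query positions
```
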
| null | null | 2403.05054 | null | null | http://arxiv.org/pdf/2403.05054v1 | 2024-03-08T05:01:43Z | 2024-03-08T05:01:43Z | A Sinkhorn-type Algorithm for Constrained Optimal Transport | Entropic optimal transport (OT) and the Sinkhorn algorithm have made it practical for machine learning practitioners to perform the fundamental task of calculating transport distance between statistical distributions. In this work, we focus on a general class of OT problems under a combination of equality and inequality constraints. We derive the corresponding entropy regularization formulation and introduce a Sinkhorn-type algorithm for such constrained OT problems supported by theoretical guarantees. We first bound the approximation error when solving the problem through entropic regularization, which reduces exponentially with the increase of the regularization parameter. Furthermore, we prove a sublinear first-order convergence rate of the proposed Sinkhorn-type algorithm in the dual space by characterizing the optimization procedure with a Lyapunov function. To achieve fast and higher-order convergence under weak entropy regularization, we augment the Sinkhorn-type algorithm with dynamic regularization scheduling and second-order acceleration. Overall, this work systematically combines recent theoretical and numerical advances in entropic optimal transport with the constrained case, allowing practitioners to derive approximate transport plans in complex scenarios. | Xun Tang, Holakou Rahmanian, Michael Shavlovsky, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying |
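
The Sinkhorn iteration at the heart of 2403.05054 alternates two marginal-matching scalings of a Gibbs kernel. A minimal NumPy sketch of the standard, unconstrained entropic-OT case follows; the paper's constrained variant, scheduling, and acceleration are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def sinkhorn(mu, nu, C, eta=0.05, iters=500):
    """Entropic OT between histograms mu, nu with cost C and regularization eta."""
    K = np.exp(-C / eta)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match column marginals
        u = mu / (K @ v)                 # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

n = 50
x, y = np.linspace(0, 1, n), np.linspace(0, 1, n)
C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost
mu = np.full(n, 1 / n)
nu = np.full(n, 1 / n)
P = sinkhorn(mu, nu, C)
print(P.sum(), (P * C).sum())            # total mass ~1, entropic transport cost
```
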
| null | null | 2403.05064 | null | null | http://arxiv.org/pdf/2403.05064v1 | 2024-03-08T05:23:55Z | 2024-03-08T05:23:55Z | Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision | The existing graph neural architecture search (GNAS) methods heavily rely on supervised labels during the search process, failing to handle ubiquitous scenarios where supervisions are not available. In this paper, we study the problem of unsupervised graph neural architecture search, which remains unexplored in the literature. The key problem is to discover the latent graph factors that drive the formation of graph data as well as the underlying relations between the factors and the optimal neural architectures. Handling this problem is challenging given that the latent graph factors together with architectures are highly entangled due to the nature of the graph and the complexity of the neural architecture search process. To address the challenge, we propose a novel Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) model, which is able to discover the optimal architectures capturing various latent graph factors in a self-supervised fashion based on unlabeled graph data. Specifically, we first design a disentangled graph super-network capable of incorporating multiple architectures with factor-wise disentanglement, which are optimized simultaneously. Then, we estimate the performance of architectures under different factors by our proposed self-supervised training with joint architecture-graph disentanglement. Finally, we propose a contrastive search with architecture augmentations to discover architectures with factor-specific expertise. Extensive experiments on 11 real-world datasets demonstrate that the proposed model is able to achieve state-of-the-art performance against several baseline methods in an unsupervised manner. | Zeyang Zhang, Xin Wang, Ziwei Zhang, Guangyao Shen, Shiqi Shen, Wenwu Zhu |
| null | null | 2403.05066 | null | null | http://arxiv.org/pdf/2403.05066v1 | 2024-03-08T05:37:59Z | 2024-03-08T05:37:59Z | Reset & Distill: A Recipe for Overcoming Negative Transfer in Continual Reinforcement Learning | We argue that one of the main obstacles for developing effective Continual Reinforcement Learning (CRL) algorithms is the negative transfer issue occurring when the new task to learn arrives. Through comprehensive experimental validation, we demonstrate that such an issue frequently exists in CRL and cannot be effectively addressed by several recent works on mitigating the plasticity loss of RL agents. To that end, we develop Reset & Distill (R&D), a simple yet highly effective method, to overcome the negative transfer problem in CRL. R&D combines a strategy of resetting the agent's online actor and critic networks to learn a new task and an offline learning step for distilling the knowledge from the online actor and previous expert's action probabilities. We carried out extensive experiments on a long sequence of Meta-World tasks and show that our method consistently outperforms recent baselines, achieving significantly higher success rates across a range of tasks. Our findings highlight the importance of considering negative transfer in CRL and emphasize the need for robust strategies like R&D to mitigate its detrimental effects. | Hongjoon Ahn, Jinu Hyeon, Youngmin Oh, Bosun Hwang, Taesup Moon |
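
The distillation step described in 2403.05066 amounts to minimizing a divergence between the reset student's policy and the expert's action probabilities on stored states. A NumPy sketch of that generic policy-distillation loss is below; the exact objective, data source, and reset schedule used by R&D are not reproduced here.

```python
import numpy as np

def distill_loss(student_logits, expert_probs):
    """KL(expert || student) averaged over a batch of states: the offline
    distillation target applied after the online actor is reset."""
    log_p = student_logits - np.log(
        np.sum(np.exp(student_logits), axis=-1, keepdims=True))
    return np.mean(np.sum(
        expert_probs * (np.log(expert_probs + 1e-12) - log_p), axis=-1))

batch, n_actions = 32, 4
rng = np.random.default_rng(0)
expert_probs = rng.dirichlet(np.ones(n_actions), size=batch)  # stand-in expert policy
student_logits = rng.standard_normal((batch, n_actions))      # freshly reset student
print(distill_loss(student_logits, expert_probs))
```
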
| null | null | 2403.05069 | null | null | http://arxiv.org/pdf/2403.05069v1 | 2024-03-08T05:43:00Z | 2024-03-08T05:43:00Z | Improving Diffusion-Based Generative Models via Approximated Optimal Transport | We introduce the Approximated Optimal Transport (AOT) technique, a novel training scheme for diffusion-based generative models. Our approach aims to approximate and integrate optimal transport into the training process, significantly enhancing the ability of diffusion models to estimate the denoiser outputs accurately. This improvement leads to ODE trajectories of diffusion models with lower curvature and reduced truncation errors during sampling. We achieve superior image quality and reduced sampling steps by employing AOT in training. Specifically, we achieve FID scores of 1.88 with just 27 NFEs and 1.73 with 29 NFEs in unconditional and conditional generations, respectively. Furthermore, when applying AOT to train the discriminator for guidance, we establish new state-of-the-art FID scores of 1.68 and 1.58 for unconditional and conditional generations, respectively, each with 29 NFEs. This outcome demonstrates the effectiveness of AOT in enhancing the performance of diffusion models. | Daegyu Kim, Jooyoung Choi, Chaehun Shin, Uiwon Hwang, Sungroh Yoon |
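
One common way to approximate an optimal transport coupling inside a minibatch, in the spirit of the AOT idea in 2403.05069, is to solve an assignment problem between noise and data samples; whether this matches the paper's exact scheme is an assumption, and the snippet only sketches the pairing step, not the full training loop.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_pairing(noise, data):
    """Pair noise with data by solving a minibatch assignment problem,
    approximating an optimal transport coupling between the two sets."""
    cost = ((noise[:, None, :] - data[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)                      # optimal permutation
    return data[cols]  # reorder so that (noise[i], data[cols[i]]) is a low-cost pair

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 8))
data = rng.standard_normal((64, 8))
targets = ot_pairing(noise, data)  # use (noise, targets) pairs in the diffusion loss
print(((noise - targets) ** 2).sum() <= ((noise - data) ** 2).sum())  # True
```
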
| null | null | 2403.05075 | null | null | http://arxiv.org/pdf/2403.05075v1 | 2024-03-08T05:59:56Z | 2024-03-08T05:59:56Z | Benchmarking Large Language Models for Molecule Prediction Tasks | Large Language Models (LLMs) stand at the forefront of a number of Natural Language Processing (NLP) tasks. Despite the widespread adoption of LLMs in NLP, much of their potential in broader fields remains largely unexplored, and significant limitations persist in their design and implementation. Notably, LLMs struggle with structured data, such as graphs, and often falter when tasked with answering domain-specific questions requiring deep expertise, such as those in biology and chemistry. In this paper, we explore a fundamental question: Can LLMs effectively handle molecule prediction tasks? Rather than pursuing top-tier performance, our goal is to assess how LLMs can contribute to diverse molecule tasks. We identify several classification and regression prediction tasks across six standard molecule datasets. Subsequently, we carefully design a set of prompts to query LLMs on these tasks and compare their performance with existing Machine Learning (ML) models, which include text-based models and those specifically designed for analysing the geometric structure of molecules. Our investigation reveals several key insights: Firstly, LLMs generally lag behind ML models in achieving competitive performance on molecule tasks, particularly when compared to models adept at capturing the geometric structure of molecules, highlighting the constrained ability of LLMs to comprehend graph data. Secondly, LLMs show promise in enhancing the performance of ML models when used collaboratively. Lastly, we engage in a discourse regarding the challenges and promising avenues to harness LLMs for molecule prediction tasks. The code and models are available at https://github.com/zhiqiangzhongddu/LLMaMol. | Zhiqiang Zhong, Kuangyu Zhou, Davide Mottin |
| null | null | 2403.05100 | null | null | http://arxiv.org/pdf/2403.05100v1 | 2024-03-08T07:03:18Z | 2024-03-08T07:03:18Z | Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume | The escalating threat of adversarial attacks on deep learning models, particularly in security-critical fields, has underscored the need for robust deep learning systems. Conventional robustness evaluations have relied on adversarial accuracy, which measures a model's performance under a specific perturbation intensity. However, this singular metric does not fully encapsulate the overall resilience of a model against varying degrees of perturbation. To address this gap, we propose a new metric termed adversarial hypervolume, assessing the robustness of deep learning models comprehensively over a range of perturbation intensities from a multi-objective optimization standpoint. This metric allows for an in-depth comparison of defense mechanisms and recognizes the trivial improvements in robustness afforded by less potent defensive strategies. Additionally, we adopt a novel training algorithm that enhances adversarial robustness uniformly across various perturbation intensities, in contrast to methods narrowly focused on optimizing adversarial accuracy. Our extensive empirical studies validate the effectiveness of the adversarial hypervolume metric, demonstrating its ability to reveal subtle differences in robustness that adversarial accuracy overlooks. This research contributes a new measure of robustness and establishes a standard for assessing and benchmarking the resilience of current and future defensive models against adversarial threats. | Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang |
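
As a simplified, one-dimensional proxy for the adversarial-hypervolume idea in 2403.05100, one can aggregate robust accuracy over a range of perturbation intensities instead of reading it off at a single epsilon. The numbers below are made up for illustration, and the paper's true metric is multi-objective rather than a plain area.

```python
import numpy as np

def adversarial_area(eps, robust_acc):
    """Trapezoidal area under the robust-accuracy-vs-perturbation curve:
    a scalar aggregate of robustness across perturbation intensities."""
    return float(np.sum((robust_acc[1:] + robust_acc[:-1]) / 2 * np.diff(eps)))

eps = np.linspace(0.0, 8 / 255, 9)
acc_a = np.array([0.94, 0.90, 0.85, 0.78, 0.70, 0.61, 0.52, 0.44, 0.37])
acc_b = np.array([0.94, 0.88, 0.86, 0.80, 0.71, 0.60, 0.50, 0.41, 0.33])
# Two models with the same clean accuracy and similar accuracy at any
# single epsilon; the aggregate metric still separates them.
print(adversarial_area(eps, acc_a), adversarial_area(eps, acc_b))
```
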
| null | null | 2403.05106 | null | null | http://arxiv.org/pdf/2403.05106v2 | 2024-04-10T17:39:53Z | 2024-03-08T07:09:56Z | Simulating Battery-Powered TinyML Systems Optimised using Reinforcement Learning in Image-Based Anomaly Detection | Advances in Tiny Machine Learning (TinyML) have bolstered the creation of smart industry solutions, including smart agriculture, healthcare and smart cities. Whilst related research contributes to enabling TinyML solutions on constrained hardware, there is a need to amplify real-world applications by optimising energy consumption in battery-powered systems. The work presented extends and contributes to TinyML research by optimising battery-powered image-based anomaly detection Internet of Things (IoT) systems. Whilst previous work in this area has yielded the capabilities of on-device inferencing and training, there has yet to be an investigation into optimising the management of such capabilities using machine learning approaches, such as Reinforcement Learning (RL), to improve the deployment battery life of such systems. Using modelled simulations, the battery life effects of an RL algorithm are benchmarked against static and dynamic optimisation approaches, with the foundation laid for a hardware benchmark to follow. It is shown that using RL within a TinyML-enabled IoT system to optimise the system operations, including cloud anomaly processing and on-device training, yields an improved battery life of 22.86% and 10.86% compared to static and dynamic optimisation approaches respectively. The proposed solution can be deployed to resource-constrained hardware, given its low memory footprint of 800 B, which could be further reduced. This further facilitates the real-world deployment of such systems, including key sectors such as smart agriculture. | Jared M. Ping, Ken J. Nixon |
| null | null | 2403.05110 | null | null | http://arxiv.org/pdf/2403.05110v2 | 2024-05-21T14:18:47Z | 2024-03-08T07:15:38Z | Efficient Data Collection for Robotic Manipulation via Compositional Generalization | Data collection has become an increasingly important problem in robotic manipulation, yet there still lacks much understanding of how to effectively collect data to facilitate broad generalization. Recent works on large-scale robotic data collection typically vary many environmental factors of variation (e.g., object types, table textures) during data collection, to cover a diverse range of scenarios. However, they do not explicitly account for the possible compositional abilities of policies trained on the data. If robot policies can compose environmental factors from their data to succeed when encountering unseen factor combinations, we can exploit this to avoid collecting data for situations that composition would address. To investigate this possibility, we conduct thorough empirical studies both in simulation and on a real robot that compare data collection strategies and assess whether visual imitation learning policies can compose environmental factors. We find that policies do exhibit composition, although leveraging prior robotic datasets is critical for this on a real robot. We use these insights to propose better in-domain data collection strategies that exploit composition, which can induce better generalization than naive approaches for the same amount of effort during data collection. We further demonstrate that a real robot policy trained on data from such a strategy achieves a success rate of 77.5% when transferred to entirely new environments that encompass unseen combinations of environmental factors, whereas policies trained using data collected without accounting for environmental variation fail to transfer effectively, with a success rate of only 2.5%. We provide videos at http://iliad.stanford.edu/robot-data-comp/. | Jensen Gao, Annie Xie, Ted Xiao, Chelsea Finn, Dorsa Sadigh |
| null | null | 2403.05119 | null | null | http://arxiv.org/pdf/2403.05119v1 | 2024-03-08T07:32:28Z | 2024-03-08T07:32:28Z | Estimation of Electronic Band Gap Energy From Material Properties Using Machine Learning | Machine learning techniques are utilized to estimate the electronic band gap energy and forecast the band gap category of materials based on experimentally quantifiable properties. The determination of band gap energy is critical for discerning various material properties, such as its metallic nature, and potential applications in electronic and optoelectronic devices. While numerical methods exist for computing band gap energy, they often entail high computational costs and have limitations in accuracy and scalability. A machine learning-driven model capable of swiftly predicting material band gap energy using easily obtainable experimental properties would offer a superior alternative to conventional density functional theory (DFT) methods. Our model does not require any preliminary DFT-based calculation or knowledge of the structure of the material. We present a scheme for improving the performance of simple regression and classification models by partitioning the dataset into multiple clusters. A new evaluation scheme for comparing the performance of ML-based models in material sciences involving both regression and classification tasks is introduced based on traditional evaluation metrics. It is shown that on this new evaluation metric, our method of clustering the dataset results in better performance. | Sagar Prakash Barad, Sajag Kumar, Subhankar Mishra |
| null | null | 2403.05122 | null | null | http://arxiv.org/pdf/2403.05122v1 | 2024-03-08T07:36:14Z | 2024-03-08T07:36:14Z | Multi-Tower Multi-Interest Recommendation with User Representation Repel | In the era of information overload, the value of recommender systems has been profoundly recognized in academia and industry alike. Multi-interest sequential recommendation, in particular, is a subfield that has been receiving increasing attention in recent years. By generating multiple user representations, multi-interest learning models demonstrate superior expressiveness compared to single-user representation models, both theoretically and empirically. Despite major advancements in the field, three major issues continue to plague the performance and adoptability of multi-interest learning methods: the difference between training and deployment objectives, the inability to access item information, and the difficulty of industrial adoption due to its single-tower architecture. We address these challenges by proposing a novel multi-tower multi-interest framework with user representation repel. Experimental results across multiple large-scale industrial datasets prove the effectiveness and generalizability of our proposed framework. | Tianyu Xiong, Xiaohan Yu |
| null | null | 2403.05123 | null | null | http://arxiv.org/pdf/2403.05123v1 | 2024-03-08T07:36:46Z | 2024-03-08T07:36:46Z | ECToNAS: Evolutionary Cross-Topology Neural Architecture Search | We present ECToNAS, a cost-efficient evolutionary cross-topology neural architecture search algorithm that does not require any pre-trained meta controllers. Our framework is able to select suitable network architectures for different tasks and hyperparameter settings, independently performing cross-topology optimisation where required. It is a hybrid approach that fuses training and topology optimisation together into one lightweight, resource-friendly process. We demonstrate the validity and power of this approach with six standard data sets (CIFAR-10, CIFAR-100, EuroSAT, Fashion MNIST, MNIST, SVHN), showcasing the algorithm's ability to not only optimise the topology within an architectural type, but also to dynamically add and remove convolutional cells when and where required, thus crossing boundaries between different network types. This enables researchers without a background in machine learning to make use of appropriate model types and topologies and to apply machine learning methods in their domains, with a computationally cheap, easy-to-use cross-topology neural architecture search framework that fully encapsulates the topology optimisation within the training process. | Elisabeth J. Schiessler, Roland C. Aydin, Christian J. Cyron |
| null | null | 2403.05133 | null | null | http://arxiv.org/pdf/2403.05133v1 | 2024-03-08T08:05:50Z | 2024-03-08T08:05:50Z | RIS-empowered Topology Control for Distributed Learning in Urban Air Mobility | Urban Air Mobility (UAM) expands vehicles from the ground to the near-ground space, envisioned as a revolution for transportation systems. Comprehensive scene perception is the foundation for autonomous aerial driving. However, UAM encounters the intelligent perception challenge: high perception learning requirements conflict with the limited sensors and computing chips of flying cars. To overcome the challenge, federated learning (FL) and other collaborative learning have been proposed to enable resource-limited devices to conduct onboard deep learning (DL) collaboratively. But traditional collaborative learning like FL relies on a central integrator for DL model aggregation, which is difficult to deploy in dynamic environments. The fully decentralized learning schemes may be the intuitive solution while the convergence of distributed learning cannot be guaranteed. Accordingly, this paper explores reconfigurable intelligent surfaces (RIS) empowered distributed learning, taking account of topological attributes to facilitate the learning performance with convergence guarantee. We propose several FL topological criteria for optimizing the transmission delay and convergence rate by exploiting the Laplacian matrix eigenvalues of the communication network. Subsequently, we innovatively leverage the RIS link modification ability to remold the current network according to the proposed topological criteria. This paper rethinks the functions of RIS from the perspective of the network layer. Furthermore, a deep deterministic policy gradient-based RIS phase shift control algorithm is developed to construct or deconstruct the network links simultaneously to reshape the communication network. Simulation experiments are conducted over MobileNet-based multi-view learning to verify the efficiency of the distributed FL framework. | Kai Xiong, Rui Wang, Supeng Leng, Wenyang Che, Chongwen Huang, Chau Yuen |
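
The topological criteria in 2403.05133 revolve around Laplacian eigenvalues of the communication graph; the second-smallest eigenvalue (algebraic connectivity) is the standard such quantity linked to consensus speed. The sketch below scores a ring topology before and after adding one RIS-style shortcut link; the paper's concrete criteria and delay terms are not reproduced.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A;
    larger values are associated with faster convergence of
    decentralized learning over the network."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

n = 6
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1

shortcut = ring.copy()
shortcut[0, 3] = shortcut[3, 0] = 1  # one extra link, as an RIS might create
print(algebraic_connectivity(ring), algebraic_connectivity(shortcut))
```
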
| null | null | 2403.05134 | null | null | http://arxiv.org/pdf/2403.05134v1 | 2024-03-08T08:07:26Z | 2024-03-08T08:07:26Z | Follow-the-Perturbed-Leader with Fréchet-type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds | This paper studies the optimality of the Follow-the-Perturbed-Leader (FTPL) policy in both adversarial and stochastic $K$-armed bandits. Despite the widespread use of the Follow-the-Regularized-Leader (FTRL) framework with various choices of regularization, the FTPL framework, which relies on random perturbations, has not received much attention, despite its inherent simplicity. In adversarial bandits, it has been conjectured that FTPL could potentially achieve $\mathcal{O}(\sqrt{KT})$ regrets if perturbations follow a distribution with a Fréchet-type tail. Recent work by Honda et al. (2023) showed that FTPL with Fréchet distribution with shape $\alpha=2$ indeed attains this bound and, notably, logarithmic regret in stochastic bandits, meaning the Best-of-Both-Worlds (BOBW) capability of FTPL. However, this result only partly resolves the above conjecture because their analysis heavily relies on the specific form of the Fréchet distribution with this shape. In this paper, we establish a sufficient condition for perturbations to achieve $\mathcal{O}(\sqrt{KT})$ regrets in the adversarial setting, which covers, e.g., Fréchet, Pareto, and Student-$t$ distributions. We also demonstrate the BOBW achievability of FTPL with certain Fréchet-type tail distributions. Our results contribute not only to resolving existing conjectures through the lens of extreme value theory but also potentially offer insights into the effect of the regularization functions in FTRL through the mapping from FTPL to FTRL. | Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh |
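
A toy simulation of the FTPL policy studied in 2403.05134: each round, the learner plays the arm maximizing cumulative observed reward plus a fresh Fréchet-type perturbation, sampled here by inverse-CDF. The fixed perturbation scale and the use of raw (rather than importance-weighted) reward totals are simplifications of my own; the paper's regret guarantees rest on more careful choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def frechet(alpha, size):
    """Frechet(alpha) samples via inverse-CDF: F(x) = exp(-x^(-alpha))."""
    return (-np.log(rng.random(size))) ** (-1.0 / alpha)

def ftpl_choose(cum_rewards, alpha=2.0, scale=10.0):
    """FTPL: play the arm maximizing cumulative reward plus a fresh
    heavy-tailed perturbation (scale is illustrative, not the paper's)."""
    return int(np.argmax(cum_rewards + scale * frechet(alpha, len(cum_rewards))))

K, T = 5, 2000
means = np.linspace(0.3, 0.7, K)   # Bernoulli arm means; arm 4 is best
cum = np.zeros(K)
pulls = np.zeros(K)
for _ in range(T):
    a = ftpl_choose(cum)
    cum[a] += rng.random() < means[a]
    pulls[a] += 1
print(pulls)  # the best arm should accumulate most pulls
```
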
| null | null | 2403.05138 | null | null | http://arxiv.org/pdf/2403.05138v1 | 2024-03-08T08:12:05Z | 2024-03-08T08:12:05Z | Greedy feature selection: Classifier-dependent feature selection via greedy methods | The purpose of this study is to introduce a new approach to feature ranking for classification tasks, called in what follows greedy feature selection. In statistical learning, feature selection is usually realized by means of methods that are independent of the classifier applied to perform the prediction using that reduced number of features. Instead, greedy feature selection identifies the most important feature at each step and according to the selected classifier. In the paper, the benefits of such a scheme are investigated theoretically in terms of model capacity indicators, such as the Vapnik-Chervonenkis (VC) dimension or the kernel alignment, and tested numerically by considering its application to the problem of predicting geo-effective manifestations of the active Sun. | Fabiana Camattari, Sabrina Guastavino, Francesco Marchetti, Michele Piana, Emma Perracchione |
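
The classifier-dependent greedy scheme in 2403.05138 can be sketched in a few lines of scikit-learn: at each step, add the feature that most improves the chosen classifier's cross-validated score. The dataset, classifier, and scoring below are illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_select(X, y, clf, k):
    """Forward-select k features, each time adding the one that most
    improves the *chosen classifier's* cross-validated accuracy."""
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        scores = [cross_val_score(clf, X[:, chosen + [j]], y, cv=3).mean()
                  for j in remaining]
        chosen.append(remaining[int(np.argmax(scores))])
    return chosen

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           random_state=0)
print(greedy_select(X, y, LogisticRegression(max_iter=1000), k=3))
```
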
| null | null | 2403.05158 | null | null | http://arxiv.org/pdf/2403.05158v1 | 2024-03-08T08:51:37Z | 2024-03-08T08:51:37Z | Adaptive Split Learning over Energy-Constrained Wireless Edge Networks | Split learning (SL) is a promising approach for training artificial intelligence (AI) models, in which devices collaborate with a server to train an AI model in a distributed manner, based on the same fixed split point. However, due to device heterogeneity and variation of channel conditions, this approach is not optimal in terms of training delay and energy consumption. In this paper, we design an adaptive split learning (ASL) scheme which can dynamically select split points for devices and allocate computing resource for the server in wireless edge networks. We formulate an optimization problem to minimize the average training latency subject to a long-term energy consumption constraint. The difficulties in solving this problem are the lack of future information and mixed integer programming (MIP). To solve it, we propose an online algorithm leveraging the Lyapunov theory, named OPEN, which decomposes it into a new MIP problem only with the current information. Then, a two-layer optimization method is proposed to solve the MIP problem. Extensive simulation results demonstrate that the ASL scheme can reduce the average training delay and energy consumption by 53.7% and 22.1%, respectively, as compared to the existing SL schemes. | Zuguang Li, Wen Wu, Shaohua Wu, Wei Wang |
| null | null | 2403.05164 | null | null | http://arxiv.org/pdf/2403.05164v1 | 2024-03-08T09:09:15Z | 2024-03-08T09:09:15Z | Synthetic data generation for system identification: leveraging knowledge transfer from similar systems | This paper addresses the challenge of overfitting in the learning of dynamical systems by introducing a novel approach for the generation of synthetic data, aimed at enhancing model generalization and robustness in scenarios characterized by data scarcity. Central to the proposed methodology is the concept of knowledge transfer from systems within the same class. Specifically, synthetic data is generated through a pre-trained meta-model that describes a broad class of systems to which the system of interest is assumed to belong. Training data serves a dual purpose: firstly, as input to the pre-trained meta-model to discern the system's dynamics, enabling the prediction of its behavior and thereby generating synthetic output sequences for new input sequences; secondly, in conjunction with synthetic data, to define the loss function used for model estimation. A validation dataset is used to tune a scalar hyper-parameter balancing the relative importance of training and synthetic data in the definition of the loss function. The same validation set can be also used for other purposes, such as early stopping during the training, fundamental to avoid overfitting in case of small-size training datasets. The efficacy of the approach is shown through a numerical example that highlights the advantages of integrating synthetic data into the system identification process. | Dario Piga, Matteo Rufolo, Gabriele Maroni, Manas Mejari, Marco Forgione |
| null | null | 2403.05171 | null | null | http://arxiv.org/pdf/2403.05171v2 | 2024-07-09T13:17:36Z | 2024-03-08T09:20:12Z | Overcoming Reward Overoptimization via Adversarial Policy Optimization with Lightweight Uncertainty Estimation | We introduce Adversarial Policy Optimization (AdvPO), a novel solution to the pervasive issue of reward over-optimization in Reinforcement Learning from Human Feedback (RLHF) for Large Language Models (LLMs). Over-optimization occurs when a reward model serves as an imperfect proxy for human preference, and RL-driven policy optimization erroneously exploits reward inaccuracies. In this paper, we begin by introducing a lightweight way to quantify uncertainties in rewards, relying solely on the last layer embeddings of the reward model, without the need for computationally expensive reward ensembles. AdvPO then addresses a distributionally robust optimization problem centred around the confidence interval of the reward model's predictions for policy improvement. Through comprehensive experiments on the Anthropic HH and TL;DR summarization datasets, we illustrate the efficacy of AdvPO in mitigating the overoptimization issue, consequently resulting in enhanced performance as evaluated through human-assisted evaluation. | Xiaoying Zhang, Jean-Francois Ton, Wei Shen, Hongning Wang, Yang Liu |
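
A lightweight last-layer uncertainty of the kind 2403.05171 describes can be sketched with the familiar quadratic form $\sqrt{\phi^\top(\Phi^\top\Phi + \lambda I)^{-1}\phi}$ over reward-model embeddings. Whether AdvPO uses exactly this estimator is an assumption on my part, and the features below are random stand-ins for real embeddings.

```python
import numpy as np

def reward_uncertainty(phi_train, phi_query, lam=1.0):
    """Uncertainty for a linear reward head on top of frozen features:
    sqrt(phi^T (Phi^T Phi + lam*I)^{-1} phi), using only last-layer
    embeddings (no reward-model ensemble required)."""
    d = phi_train.shape[1]
    A_inv = np.linalg.inv(phi_train.T @ phi_train + lam * np.eye(d))
    return np.sqrt(np.einsum('nd,de,ne->n', phi_query, A_inv, phi_query))

rng = np.random.default_rng(0)
phi_train = rng.standard_normal((5000, 64))      # embeddings seen during RM training
in_dist = rng.standard_normal((4, 64))
off_dist = 5.0 * rng.standard_normal((4, 64))    # far from the training support
print(reward_uncertainty(phi_train, in_dist))
print(reward_uncertainty(phi_train, off_dist))   # noticeably larger
```
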
| null | null | 2403.05174 | null | null | http://arxiv.org/pdf/2403.05174v1 | 2024-03-08T09:28:42Z | 2024-03-08T09:28:42Z | VTruST: Controllable value function based subset selection for Data-Centric Trustworthy AI | Trustworthy AI is crucial to the widespread adoption of AI in high-stakes applications with fairness, robustness, and accuracy being some of the key trustworthiness metrics. In this work, we propose a controllable framework for data-centric trustworthy AI (DCTAI), VTruST, that allows users to control the trade-offs between the different trustworthiness metrics of the constructed training datasets. A key challenge in implementing an efficient DCTAI framework is to design an online value-function-based training data subset selection algorithm. We pose the training data valuation and subset selection problem as an online sparse approximation formulation. We propose a novel online version of the Orthogonal Matching Pursuit (OMP) algorithm for solving this problem. Experimental results show that VTruST outperforms the state-of-the-art baselines on social, image, and scientific datasets. We also show that the data values generated by VTruST can provide effective data-centric explanations for different trustworthiness metrics. | Soumi Das, Shubhadip Nag, Shreyyash Sharma, Suparna Bhattacharya, Sourangshu Bhattacharya |
| null | null | 2403.05175 | null | null | http://arxiv.org/pdf/2403.05175v1 | 2024-03-08T09:32:43Z | 2024-03-08T09:32:43Z | Continual Learning and Catastrophic Forgetting | This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data. Although continual learning is a natural skill for the human brain, it is very challenging for artificial neural networks. An important reason is that, when learning something new, these networks tend to quickly and drastically forget what they had learned before, a phenomenon known as catastrophic forgetting. Especially in the last decade, continual learning has become an extensively studied topic in deep learning. This book chapter reviews the insights that this field has generated. | Gido M. van de Ven, Nicholas Soures, Dhireesha Kudithipudi |
| null | null | 2403.05181 | null | null | http://arxiv.org/pdf/2403.05181v1 | 2024-03-08T09:43:27Z | 2024-03-08T09:43:27Z | Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples | Knowledge Distillation (KD) facilitates the transfer of discriminative capabilities from an advanced teacher model to a simpler student model, ensuring performance enhancement without compromising accuracy. It is also exploited for model stealing attacks, where adversaries use KD to mimic the functionality of a teacher model. Recent developments in this domain have been influenced by the Stingy Teacher model, which provided empirical analysis showing that sparse outputs can significantly degrade the performance of student models. Addressing the risk of intellectual property leakage, our work introduces an approach to train a teacher model that inherently protects its logits, influenced by the Nasty Teacher concept. Differing from existing methods, we incorporate sparse outputs of adversarial examples with standard training data to strengthen the teacher's defense against student distillation. Our approach carefully reduces the relative entropy between the original and adversarially perturbed outputs, allowing the model to produce adversarial logits with minimal impact on overall performance. The source codes will be made publicly available soon. | Eda Yilmaz, Hacer Yalim Keles |
| null | null | 2403.05185 | null | null | http://arxiv.org/pdf/2403.05185v1 | 2024-03-08T09:53:07Z | 2024-03-08T09:53:07Z | Personalized Audiobook Recommendations at Spotify Through Graph Neural Networks | In the ever-evolving digital audio landscape, Spotify, well-known for its music and talk content, has recently introduced audiobooks to its vast user base. While promising, this move presents significant challenges for personalized recommendations. Unlike music and podcasts, audiobooks, initially available for a fee, cannot be easily skimmed before purchase, posing higher stakes for the relevance of recommendations. Furthermore, introducing a new content type into an existing platform confronts extreme data sparsity, as most users are unfamiliar with this new content type. Lastly, recommending content to millions of users requires the model to react fast and be scalable. To address these challenges, we leverage podcast and music user preferences and introduce 2T-HGNN, a scalable recommendation system comprising Heterogeneous Graph Neural Networks (HGNNs) and a Two Tower (2T) model. This novel approach uncovers nuanced item relationships while ensuring low latency and complexity. We decouple users from the HGNN graph and propose an innovative multi-link neighbor sampler. These choices, together with the 2T component, significantly reduce the complexity of the HGNN model. Empirical evaluations involving millions of users show significant improvement in the quality of personalized recommendations, resulting in a +46% increase in new audiobooks start rate and a +23% boost in streaming rates. Intriguingly, our model's impact extends beyond audiobooks, benefiting established products like podcasts. | Marco De Nadai, Francesco Fabbri, Paul Gigioli, Alice Wang, Ang Li, Fabrizio Silvestri, Laura Kim, Shawn Lin, Vladan Radosavljevic, Sandeep Ghael, David Nyhan, Hugues Bouchard, Mounia Lalmas-Roelleke, Andreas Damianou |
| null | null | 2403.05196 | null | null | http://arxiv.org/pdf/2403.05196v2 | 2024-06-04T10:47:02Z | 2024-03-08T10:19:00Z | Denoising Autoregressive Representation Learning | In this paper, we explore a new generative approach for learning visual representations. Our method, DARL, employs a decoder-only Transformer to predict image patches autoregressively. We find that training with Mean Squared Error (MSE) alone leads to strong representations. To enhance the image generation ability, we replace the MSE loss with the diffusion objective by using a denoising patch decoder. We show that the learned representation can be improved by using tailored noise schedules and longer training in larger models. Notably, the optimal schedule differs significantly from the typical ones used in standard image diffusion models. Overall, despite its simple architecture, DARL delivers performance remarkably close to state-of-the-art masked prediction models under the fine-tuning protocol. This marks an important step towards a unified model capable of both visual perception and generation, effectively combining the strengths of autoregressive and denoising diffusion models. | Yazhe Li, Jorg Bornschein, Ting Chen |
| null | null | 2403.05209 | null | null | http://arxiv.org/pdf/2403.05209v1 | 2024-03-08T10:49:37Z | 2024-03-08T10:49:37Z | Overcoming Data Inequality across Domains with Semi-Supervised Domain Generalization | While there have been considerable advancements in machine learning driven by extensive datasets, a significant disparity still persists in the availability of data across various sources and populations. This inequality across domains poses challenges in modeling for those with limited data, which can lead to profound practical and ethical concerns. In this paper, we address a representative case of data inequality problem across domains termed Semi-Supervised Domain Generalization (SSDG), in which only one domain is labeled while the rest are unlabeled. We propose a novel algorithm, ProUD, which can effectively learn domain-invariant features via domain-aware prototypes along with progressive generalization via uncertainty-adaptive mixing of labeled and unlabeled domains. Our experiments on three different benchmark datasets demonstrate the effectiveness of ProUD, outperforming all baseline models including single domain generalization and semi-supervised learning. Source code will be released upon acceptance of the paper. | Jinha Park, Wonguk Cho, Taesup Kim |
| null | null | 2403.05220 | null | null | http://arxiv.org/pdf/2403.05220v1 | 2024-03-08T11:18:26Z | 2024-03-08T11:18:26Z | Synthetic Privileged Information Enhances Medical Image Representation Learning | Multimodal self-supervised representation learning has consistently proven to be a highly effective method in medical image analysis, offering strong task performance and producing biologically informed insights. However, these methods heavily rely on large, paired datasets, which is prohibitive for their use in scenarios where paired data does not exist, or there is only a small amount available. In contrast, image generation methods can work well on very small datasets, and can find mappings between unpaired datasets, meaning an effectively unlimited amount of paired synthetic data can be generated. In this work, we demonstrate that representation learning can be significantly improved by synthetically generating paired information, both compared to training on either single-modality (up to 4.4x error reduction) or authentic multi-modal paired datasets (up to 5.6x error reduction). | Lucas Farndale, Chris Walsh, Robert Insall, Ke Yuan |
| null | null | 2403.05235 | null | null | http://arxiv.org/pdf/2403.05235v1 | 2024-03-08T11:51:00Z | 2024-03-08T11:51:00Z | Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine Learning in Healthcare | The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness. We propose an interpretable framework, Fairness-Aware Interpretable Modeling (FAIM), to improve model fairness without compromising performance, featuring an interactive interface to identify a "fairer" model from a set of high-performing models and promoting the integration of data-driven evidence and clinical expertise to enhance contextualized fairness. We demonstrated FAIM's value in reducing sex and race biases by predicting hospital admission with two real-world databases, MIMIC-IV-ED and SGH-ED. We show that for both datasets, FAIM models not only exhibited satisfactory discriminatory performance but also significantly mitigated biases as measured by well-established fairness metrics, outperforming commonly used bias-mitigation methods. Our approach demonstrates the feasibility of improving fairness without sacrificing performance and provides a modeling mode that invites domain experts to engage, fostering a multidisciplinary effort toward tailored AI fairness. | Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu |
| null | null | 2403.05239 | null | null | http://arxiv.org/pdf/2403.05239v1 | 2024-03-08T11:59:32Z | 2024-03-08T11:59:32Z | Towards Effective Usage of Human-Centric Priors in Diffusion Models for Text-based Human Image Generation | Vanilla text-to-image diffusion models struggle with generating accurate human images, commonly resulting in imperfect anatomies such as unnatural postures or disproportionate limbs. Existing methods address this issue mostly by fine-tuning the model with extra images or adding additional controls -- human-centric priors such as pose or depth maps -- during the image generation phase. This paper explores the integration of these human-centric priors directly into the model fine-tuning stage, essentially eliminating the need for extra conditions at the inference stage. We realize this idea by proposing a human-centric alignment loss to strengthen human-related information from the textual prompts within the cross-attention maps. To ensure semantic detail richness and human structural accuracy during fine-tuning, we introduce scale-aware and step-wise constraints within the diffusion process, according to an in-depth analysis of the cross-attention layer. Extensive experiments show that our method largely improves over state-of-the-art text-to-image models to synthesize high-quality human images based on user-written prompts. Project page: https://hcplayercvpr2024.github.io. | Junyan Wang, Zhenhong Sun, Zhiyu Tan, Xuanbai Chen, Weihua Chen, Hao Li, Cheng Zhang, Yang Song |
| null | null | 2403.05249 | null | null | http://arxiv.org/pdf/2403.05249v1 | 2024-03-08T12:13:11Z | 2024-03-08T12:13:11Z | On Representing Electronic Wave Functions with Sign Equivariant Neural Networks | Recent neural networks demonstrated impressively accurate approximations of electronic ground-state wave functions. Such neural networks typically consist of a permutation-equivariant neural network followed by a permutation-antisymmetric operation to enforce the electronic exchange symmetry. While accurate, such neural networks are computationally expensive. In this work, we explore the flipped approach, where we first compute antisymmetric quantities based on the electronic coordinates and then apply sign equivariant neural networks to preserve the antisymmetry. While this approach promises acceleration thanks to the lower-dimensional representation, we demonstrate that it reduces to a Jastrow factor, a commonly used permutation-invariant multiplicative factor in the wave function. Our empirical results support this further, finding little to no improvements over baselines. We conclude with neither theoretical nor empirical advantages of sign equivariant functions for representing electronic wave functions within the evaluation of this work. | Nicholas Gao, Stephan Günnemann |
| null | null | 2403.05256 | null | null | http://arxiv.org/pdf/2403.05256v1 | 2024-03-08T12:26:48Z | 2024-03-08T12:26:48Z | DuDoUniNeXt: Dual-domain unified hybrid model for single and multi-contrast undersampled MRI reconstruction | Multi-contrast (MC) Magnetic Resonance Imaging (MRI) reconstruction aims to incorporate a reference image of auxiliary modality to guide the reconstruction process of the target modality. Known MC reconstruction methods perform well with a fully sampled reference image, but usually exhibit inferior performance, compared to single-contrast (SC) methods, when the reference image is missing or of low quality. To address this issue, we propose DuDoUniNeXt, a unified dual-domain MRI reconstruction network that can accommodate scenarios involving absent, low-quality, and high-quality reference images. DuDoUniNeXt adopts a hybrid backbone that combines CNN and ViT, enabling specific adjustment of image domain and k-space reconstruction. Specifically, an adaptive coarse-to-fine feature fusion module (AdaC2F) is devised to dynamically process the information from reference images of varying qualities. Besides, a partially shared shallow feature extractor (PaSS) is proposed, which uses shared and distinct parameters to handle consistent and discrepancy information among contrasts. Experimental results demonstrate that the proposed model surpasses state-of-the-art SC and MC models significantly. Ablation studies show the effectiveness of the proposed hybrid backbone, AdaC2F, PaSS, and the dual-domain unified learning scheme. | Ziqi Gao, Yue Zhang, Xinwen Liu, Kaiyan Li, S. Kevin Zhou |
| null | null | 2403.05266 | null | null | http://arxiv.org/pdf/2403.05266v1 | 2024-03-08T12:42:36Z | 2024-03-08T12:42:36Z | ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models | Large language models (LLMs) have achieved unprecedented performance in various applications, yet their evaluation remains a critical issue. Existing hallucination benchmarks are either static or lack adjustable complexity for thorough analysis. We contend that utilizing existing relational databases is a promising approach for constructing benchmarks due to their accurate knowledge description via functional dependencies. We propose ERBench to automatically convert any relational database into a benchmark based on the entity-relationship (ER) model. Our key idea is to construct questions using the database schema, records, and functional dependencies such that they can be automatically verified. In addition, we use foreign key constraints to join relations and construct multihop questions, which can be arbitrarily complex and used to debug the intermediate answers of LLMs. Finally, ERBench supports continuous evaluation, multimodal questions, and various prompt engineering techniques. In our experiments, we construct an LLM benchmark using databases of multiple domains and make an extensive comparison of contemporary LLMs. We observe that better LLMs like GPT-4 can handle a larger variety of question types, but are by no means perfect. Also, correct answers do not necessarily imply correct rationales, which is an important evaluation that ERBench does better than other benchmarks for various question types. Code is available at https://github.com/DILAB-KAIST/ERBench. | Jio Oh, Soyeon Kim, Junseok Seo, Jindong Wang, Ruochen Xu, Xing Xie, Steven Euijong Whang |
| null | null | 2403.05268 | null | null | http://arxiv.org/pdf/2403.05268v2 | 2024-06-24T13:15:33Z | 2024-03-08T12:45:53Z | Deep Prompt Multi-task Network for Abuse Language Detection | The detection of abusive language remains a long-standing challenge with the extensive use of social networks. The detection task of abusive language suffers from limited accuracy. We argue that the existing detection methods utilize the fine-tuning technique of the pre-trained language models (PLMs) to handle downstream tasks. Hence, these methods fail to stimulate the general knowledge of the PLMs. To address the problem, we propose a novel Deep Prompt Multi-task Network (DPMN) for abuse language detection. Specifically, DPMN first attempts to design two forms of deep prompt tuning and light prompt tuning for the PLMs. The effects of different prompt lengths, tuning strategies, and prompt initialization methods on detecting abusive language are studied. In addition, we propose a Task Head based on Bi-LSTM and FFN, which can be used as a short text classifier. Eventually, DPMN utilizes multi-task learning to improve detection metrics further. The multi-task network has the function of transferring effective knowledge. The proposed DPMN is evaluated against eight typical methods on three public datasets: OLID, SOLID, and AbuseAnalyzer. The experimental results show that our DPMN outperforms the state-of-the-art methods. | Jian Zhu, Yuping Ruan, Jingfei Chang, Wenhui Sun, Hui Wan, Jian Long, Cheng Luo |
| null | null | 2403.05290 | null | null | http://arxiv.org/pdf/2403.05290v1 | 2024-03-08T13:16:17Z | 2024-03-08T13:16:17Z | Foundational propositions of hesitant fuzzy soft $\beta$-covering approximation spaces | Soft set theory serves as a mathematical framework for handling uncertain information, and hesitant fuzzy sets find extensive application in scenarios involving uncertainty and hesitation. Hesitant fuzzy sets exhibit diverse membership degrees, giving rise to various forms of inclusion relationships among them. This article introduces the notions of hesitant fuzzy soft $\beta$-coverings and hesitant fuzzy soft $\beta$-neighborhoods, which are formulated based on distinct forms of inclusion relationships among hesitant fuzzy sets. Subsequently, several associated properties are investigated. Additionally, specific variations of hesitant fuzzy soft $\beta$-coverings are introduced by incorporating hesitant fuzzy rough sets, followed by an exploration of properties pertaining to hesitant fuzzy soft $\beta$-covering approximation spaces. | Shizhan Lu |
| null | null | 2403.05293 | null | null | http://arxiv.org/pdf/2403.05293v1 | 2024-03-08T13:21:07Z | 2024-03-08T13:21:07Z | Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks | In this work, we investigate the effect of momentum on the optimisation trajectory of gradient descent. We leverage a continuous-time approach in the analysis of momentum gradient descent with step size $\gamma$ and momentum parameter $\beta$ that allows us to identify an intrinsic quantity $\lambda = \frac{\gamma}{(1-\beta)^2}$ which uniquely defines the optimisation path and provides a simple acceleration rule. When training a $2$-layer diagonal linear network in an overparametrised regression setting, we characterise the recovered solution through an implicit regularisation problem. We then prove that small values of $\lambda$ help to recover sparse solutions. Finally, we give similar but weaker results for stochastic momentum gradient descent. We provide numerical experiments which support our claims. | Hristo Papazov, Scott Pesme, Nicolas Flammarion |
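
The intrinsic quantity $\lambda = \frac{\gamma}{(1-\beta)^2}$ from 2403.05293 can be checked numerically on a toy quadratic: runs with different $(\gamma, \beta)$ but equal $\lambda$ land on approximately the same iterate once iteration counts are rescaled by $1-\beta$, a correspondence assumed here from the continuous-time view rather than taken from the paper's experiments.

```python
import numpy as np

A = np.diag([1.0, 3.0])  # anisotropic quadratic: f(x) = x^T A x / 2

def momentum_gd(gamma, beta, steps, x0=(1.0, 1.0)):
    """Heavy-ball momentum gradient descent on f; grad f(x) = A x."""
    x, m = np.array(x0, dtype=float), np.zeros(2)
    for _ in range(steps):
        m = beta * m + A @ x
        x = x - gamma * m
    return x

lam = 0.01  # intrinsic quantity lambda = gamma / (1 - beta)^2, held fixed
for beta in [0.0, 0.5, 0.8]:
    gamma = lam * (1 - beta) ** 2
    steps = int(400 / (1 - beta))  # time rescaling by 1 - beta
    print(f"beta={beta}: x = {momentum_gd(gamma, beta, steps)}")
# All three runs print (approximately) the same point.
```
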
| null | null | 2403.05300 | null | null | http://arxiv.org/pdf/2403.05300v3 | 2024-05-31T15:14:43Z | 2024-03-08T13:29:46Z | Unity by Diversity: Improved Representation Learning in Multimodal VAEs | Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures either share the encoder output, decoder input, or both across modalities to learn a shared representation. Such architectures impose hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior, softly guiding each modality's latent representation towards a shared aggregate posterior. This approach results in a superior latent representation and allows each encoding to preserve information better from its uncompressed original features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods. | Thomas M. Sutter, Yang Meng, Andrea Agostini, Daphné Chopard, Norbert Fortin, Julia E. Vogt, Bahbak Shahbaba, Stephan Mandt |
| null | null | 2403.05318 | null | null | http://arxiv.org/pdf/2403.05318v1 | 2024-03-08T13:49:21Z | 2024-03-08T13:49:21Z | Looking Ahead to Avoid Being Late: Solving Hard-Constrained Traveling Salesman Problem | Many real-world problems can be formulated as a constrained Traveling Salesman Problem (TSP). However, the constraints are always complex and numerous, making the TSPs challenging to solve. When the number of complicated constraints grows, it is time-consuming for traditional heuristic algorithms to avoid illegitimate outcomes. Learning-based methods provide an alternative to solve TSPs in a soft manner, which also supports GPU acceleration to generate solutions quickly. Nevertheless, the soft manner inevitably results in difficulty solving hard-constrained problems with learning algorithms, and the conflicts between legality and optimality may substantially affect the optimality of the solution. To overcome this problem and to have an effective solution against hard constraints, we proposed a novel learning-based method, MUSLA, that uses looking-ahead information as the feature to improve the legality of TSP with Time Windows (TSPTW) solutions. Besides, we constructed TSPTW datasets with hard constraints in order to accurately evaluate and benchmark the statistical performance of various approaches, which can serve the community for future research. With comprehensive experiments on diverse datasets, MUSLA outperforms existing baselines and shows generalizability potential. | Jingxiao Chen, Ziqin Gong, Minghuan Liu, Jun Wang, Yong Yu, Weinan Zhang |
| null | null | 2403.05340 | null | null | http://arxiv.org/pdf/2403.05340v1 | 2024-03-08T14:17:07Z | 2024-03-08T14:17:07Z | Embedded Deployment of Semantic Segmentation in Medicine through Low-Resolution Inputs | When deploying neural networks in real-life situations, the size and computational effort are often the limiting factors. This is especially true in environments where big, expensive hardware is not affordable, like in embedded medical devices, where budgets are often tight. State-of-the-art works have proposed multiple lightweight solutions for such use cases, mostly by changing the base model architecture, not taking the input and output resolution into consideration. In this paper, we propose our architecture that takes advantage of the fact that in hardware-limited environments, we often refrain from using the highest available input resolutions to guarantee a higher throughput. Although using lower-resolution input leads to a significant reduction in computing and memory requirements, it may also incur reduced prediction quality. Our architecture addresses this problem by exploiting the fact that we can still utilize high-resolution ground-truths in training. The proposed model inputs lower-resolution images and high-resolution ground truths, which can improve the prediction quality by 5.5% while adding less than 200 parameters to the model. We conduct an extensive analysis to illustrate that our architecture enhances existing state-of-the-art frameworks for lightweight semantic segmentation of cancer in MRI images. We also tested the deployment speed of state-of-the-art lightweight networks and our architecture on Nvidia's Jetson Nano to emulate deployment in resource-constrained embedded scenarios. | Erik Ostrowski, Muhammad Shafique |
null | null |
2403.05353
| null | null |
http://arxiv.org/abs/2403.05353v1
|
2024-03-08T14:34:32Z
|
2024-03-08T14:34:32Z
|
Hybridized Convolutional Neural Networks and Long Short-Term Memory for
Improved Alzheimer's Disease Diagnosis from MRI Scans
|
Brain-related diseases are more sensitive than other diseases due to several factors, including the complexity of surgical procedures, high costs, and other challenges. Alzheimer's disease is a common brain disorder that causes memory loss and the shrinking of brain cells. Early detection is critical for providing proper treatment to patients. However, identifying Alzheimer's at an early stage using manual scanning of CT or MRI scans is challenging. Therefore, researchers have delved into the exploration of computer-aided systems, employing Machine Learning and Deep Learning methodologies, which entail training on datasets to detect Alzheimer's disease. This study aims to present a hybrid model that combines a CNN model's feature extraction capabilities with an LSTM model's detection capabilities. This study applies transfer learning with VGG16 in the hybrid model to extract features from MRI images. The LSTM detects features between the convolution layer and the fully connected layer. The output layer of the fully connected layer uses the softmax function. The training of the hybrid model involved utilizing the ADNI dataset. The experimental findings reveal that the model achieves an accuracy of 98.8%, a sensitivity of 100%, and a specificity of 76%. The proposed hybrid model outperforms its contemporary CNN counterparts, showcasing superior performance.
|
[
"['Maleka Khatun' 'Md Manowarul Islam' 'Habibur Rahman Rifat'\n 'Md. Shamim Bin Shahid' 'Md. Alamin Talukder' 'Md Ashraf Uddin']"
] |
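A hedged Keras sketch of the hybrid wiring the abstract describes: frozen VGG16 features, an LSTM between the convolutional and fully connected stages, and a softmax output. The 4-class output, 224x224 input, and treating the 7x7 feature grid as the LSTM's sequence are illustrative assumptions, not details taken from the paper.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False                      # transfer learning: freeze VGG16

model = models.Sequential([
    base,                                   # outputs a (7, 7, 512) feature map
    layers.Reshape((49, 512)),              # spatial grid -> sequence of 49 vectors
    layers.LSTM(128),                       # LSTM between conv and dense stages
    layers.Dense(4, activation="softmax"),  # e.g. 4 diagnosis classes (assumed)
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```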
null | null |
2403.05358
| null | null |
http://arxiv.org/pdf/2403.05358v1
|
2024-03-08T14:45:18Z
|
2024-03-08T14:45:18Z
|
Variational Inference of Parameters in Opinion Dynamics Models
|
Despite the frequent use of agent-based models (ABMs) for studying social phenomena, parameter estimation remains a challenge, often relying on costly simulation-based heuristics. This work uses variational inference to estimate the parameters of an opinion dynamics ABM, by transforming the estimation problem into an optimization task that can be solved directly. Our proposal relies on probabilistic generative ABMs (PGABMs): we start by synthesizing a probabilistic generative model from the ABM rules. Then, we transform the inference process into an optimization problem suitable for automatic differentiation. In particular, we use the Gumbel-Softmax reparameterization for categorical agent attributes and stochastic variational inference for parameter estimation. Furthermore, we explore the trade-offs of using variational distributions with different complexity: normal distributions and normalizing flows. We validate our method on a bounded confidence model with agent roles (leaders and followers). Our approach estimates both macroscopic parameters (bounded confidence intervals and backfire thresholds) and microscopic ones ($200$ categorical, agent-level roles) more accurately than simulation-based and MCMC methods. Consequently, our technique enables experts to tune and validate their ABMs against real-world observations, thus providing insights into human behavior in social systems via data-driven analysis.
|
[
"['Jacopo Lenti' 'Fabrizio Silvestri' 'Gianmarco De Francisci Morales']"
] |
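The abstract above leans on the Gumbel-Softmax reparameterization to make categorical agent attributes differentiable. A minimal sketch of that trick, assuming two agent roles (leader / follower); PyTorch also ships an equivalent built-in, `torch.nn.functional.gumbel_softmax`.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5):
    """Differentiable relaxation of a categorical sample.

    logits: (..., K) unnormalized log-probabilities over K categories.
    Lower tau -> samples closer to one-hot, noisier gradients.
    """
    u = torch.rand_like(logits).clamp_min(1e-20)   # avoid log(0)
    g = -torch.log(-torch.log(u))                  # Gumbel(0, 1) noise
    return F.softmax((logits + g) / tau, dim=-1)

# Relaxed role assignments for 200 agents with 2 roles: gradients flow back
# to the logits, which is what enables stochastic variational inference.
role_logits = torch.zeros(200, 2, requires_grad=True)
soft_roles = gumbel_softmax_sample(role_logits)    # (200, 2), rows sum to 1
```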
null | null |
2403.05365
| null | null |
http://arxiv.org/pdf/2403.05365v1
|
2024-03-08T14:55:05Z
|
2024-03-08T14:55:05Z
|
The Impact of Quantization on the Robustness of Transformer-based Text
Classifiers
|
Transformer-based models have made remarkable advancements in various NLP areas. Nevertheless, these models often exhibit vulnerabilities when confronted with adversarial attacks. In this paper, we explore the effect of quantization on the robustness of Transformer-based models. Quantization usually involves mapping a high-precision real number to a lower-precision value, aiming at reducing the size of the model at hand. To the best of our knowledge, this work is the first to study the effect of quantization on the robustness of NLP models. In our experiments, we evaluate the impact of quantization on BERT and DistilBERT models in text classification using SST-2, Emotion, and MR datasets. We also evaluate the performance of these models against TextFooler, PWWS, and PSO adversarial attacks. Our findings show that quantization significantly improves (by an average of 18.68%) the adversarial accuracy of the models. Furthermore, we compare the effect of quantization versus that of the adversarial training approach on robustness. Our experiments indicate that quantization increases the robustness of the model by 18.80% on average compared to adversarial training without imposing any extra computational overhead during training. Therefore, our results highlight the effectiveness of quantization in improving the robustness of NLP models.
|
[
"['Seyed Parsa Neshaei' 'Yasaman Boreshban' 'Gholamreza Ghassem-Sani'\n 'Seyed Abolghasem Mirroshandel']"
] |
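The abstract does not spell out its quantization recipe, but a common zero-training-overhead baseline for a Transformer text classifier is PyTorch's post-training dynamic quantization. The checkpoint name below is a public SST-2 DistilBERT model assumed here purely for illustration.

```python
import torch
from transformers import AutoModelForSequenceClassification

# Public SST-2 DistilBERT checkpoint (an assumption for this sketch).
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english")

# Post-training dynamic quantization: weights of every nn.Linear are stored
# in int8 and dequantized on the fly; activations stay in float.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
```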
null | null |
2403.05368
| null | null |
http://arxiv.org/pdf/2403.05368v1
|
2024-03-08T14:59:15Z
|
2024-03-08T14:59:15Z
|
Exploring the Links between the Fundamental Lemma and Kernel Regression
|
Generalizations and variations of the fundamental lemma by Willems et al. are an active topic of recent research. In this note, we explore and formalize the links between kernel regression and known nonlinear extensions of the fundamental lemma. Applying a transformation to the usual linear equation in Hankel matrices, we arrive at an alternative implicit kernel representation of the system trajectories while keeping the requirements on persistency of excitation. We show that this representation is equivalent to the solution of a specific kernel regression problem. We explore the possible structures of the underlying kernel as well as the system classes to which they correspond.
|
[
"['Oleksii Molodchyk' 'Timm Faulwasser']"
] |
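For readers unfamiliar with the setup referenced in the note above: in the linear case, the fundamental lemma says the columns of a depth-L block Hankel matrix of a recorded trajectory span all length-L trajectories of the system, given persistently exciting inputs. A toy numpy sketch, where the first-order system and signal lengths are arbitrary choices:

```python
import numpy as np

def block_hankel(w, L):
    """Depth-L block Hankel matrix of a length-T signal w of shape (T, m)."""
    T, m = w.shape
    cols = T - L + 1
    # Row block i holds w[i], w[i+1], ..., stacked column-wise.
    return np.vstack([w[i:i + cols].T for i in range(L)])

rng = np.random.default_rng(0)
u = rng.standard_normal((200, 1))          # persistently exciting input
y = np.zeros((200, 1))
for t in range(1, 200):                    # toy first-order system
    y[t] = 0.9 * y[t - 1] + u[t - 1]
w = np.hstack([u, y])                      # stack input and output channels
H = block_hankel(w, L=10)                  # columns span length-10 trajectories
```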
null | null |
2403.05385
| null | null |
http://arxiv.org/pdf/2403.05385v3
|
2024-03-12T16:01:02Z
|
2024-03-08T15:30:58Z
|
Switching the Loss Reduces the Cost in Batch Reinforcement Learning
|
We propose training fitted Q-iteration with log-loss (FQI-LOG) for batch reinforcement learning (RL). We show that the number of samples needed to learn a near-optimal policy with FQI-LOG scales with the accumulated cost of the optimal policy, which is zero in problems where acting optimally achieves the goal and incurs no cost. In doing so, we provide a general framework for proving $\textit{small-cost}$ bounds, i.e., bounds that scale with the optimal achievable cost, in batch RL. Moreover, we empirically verify that FQI-LOG uses fewer samples than FQI trained with squared loss on problems where the optimal policy reliably achieves the goal.
|
[
"['Alex Ayoub' 'Kaiwen Wang' 'Vincent Liu' 'Samuel Robertson'\n 'James McInerney' 'Dawen Liang' 'Nathan Kallus' 'Csaba Szepesvári']"
] |
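A guess at the mechanical core of FQI-LOG versus squared-loss FQI as the abstract describes it: the regression step of fitted Q-iteration swaps mean-squared error for log-loss (binary cross-entropy) on values scaled to [0, 1]. The function below sketches only that swap, under the assumption that predictions and Bellman targets are already in the unit interval; the paper's full algorithm and analysis are not reproduced.

```python
import torch.nn.functional as F

def fqi_regression_loss(q_pred, target, loss="log"):
    """Regression step of fitted Q-iteration on costs scaled to [0, 1].

    q_pred : Q(s, a) predictions in (0, 1), e.g. sigmoid outputs
    target : Bellman targets c + gamma * min_a' Q(s', a'), clipped to [0, 1]
    """
    if loss == "log":   # FQI-LOG: cross-entropy against soft targets
        return F.binary_cross_entropy(q_pred, target)
    return F.mse_loss(q_pred, target)   # classical squared-loss FQI
```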
null | null |
2403.05395
| null | null |
http://arxiv.org/pdf/2403.05395v1
|
2024-03-08T15:45:13Z
|
2024-03-08T15:45:13Z
|
Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems
trained with Gradient Descent
|
Advanced machine learning methods, and more prominently neural networks, have become standard to solve inverse problems over the last years. However, the theoretical recovery guarantees of such methods are still scarce and difficult to achieve. Only recently did unsupervised methods such as Deep Image Prior (DIP) get equipped with convergence and recovery guarantees for generic loss functions when trained through gradient flow with an appropriate initialization. In this paper, we extend these results by proving that these guarantees hold true when using gradient descent with an appropriately chosen step-size/learning rate. We also show that the discretization only affects the overparametrization bound for a two-layer DIP network by a constant and thus that the different guarantees found for the gradient flow will hold for gradient descent.
|
[
"['Nathan Buskulic' 'Jalal Fadili' 'Yvain Quéau']"
] |
null | null |
2403.05396
| null | null |
http://arxiv.org/pdf/2403.05396v2
|
2024-06-18T05:58:43Z
|
2024-03-08T15:51:43Z
|
HistGen: Histopathology Report Generation via Local-Global Feature
Encoding and Cross-modal Context Interaction
|
Histopathology serves as the gold standard in cancer diagnosis, with clinical reports being vital in interpreting and understanding this process, guiding cancer treatment and patient care. The automation of histopathology report generation with deep learning stands to significantly enhance clinical efficiency and lessen the labor-intensive, time-consuming burden on pathologists in report writing. In pursuit of this advancement, we introduce HistGen, a multiple instance learning-empowered framework for histopathology report generation together with the first benchmark dataset for evaluation. Inspired by diagnostic and report-writing workflows, HistGen features two delicately designed modules, aiming to boost report generation by aligning whole slide images (WSIs) and diagnostic reports from local and global granularity. To achieve this, a local-global hierarchical encoder is developed for efficient visual feature aggregation from a region-to-slide perspective. Meanwhile, a cross-modal context module is proposed to explicitly facilitate alignment and interaction between distinct modalities, effectively bridging the gap between the extensive visual sequences of WSIs and corresponding highly summarized reports. Experimental results on WSI report generation show the proposed model outperforms state-of-the-art (SOTA) models by a large margin. Moreover, the results of fine-tuning our model on cancer subtyping and survival analysis tasks further demonstrate superior performance compared to SOTA methods, showcasing strong transfer learning capability. Dataset, model weights, and source code are available in https://github.com/dddavid4real/HistGen.
|
[
"['Zhengrui Guo' 'Jiabo Ma' 'Yingxue Xu' 'Yihui Wang' 'Liansheng Wang'\n 'Hao Chen']"
] |
null | null |
2403.05406
| null | null |
http://arxiv.org/pdf/2403.05406v1
|
2024-03-08T16:04:36Z
|
2024-03-08T16:04:36Z
|
Considering Nonstationary within Multivariate Time Series with
Variational Hierarchical Transformer for Forecasting
|
The forecasting of Multivariate Time Series (MTS) has long been an important but challenging task. Due to the non-stationarity across long-distance time steps, previous studies primarily adopt stationarization methods to attenuate the non-stationarity of the original series for better predictability. However, existing methods always adopt the stationarized series, which ignores the inherent non-stationarity, and have difficulty in modeling MTS with complex distributions due to the lack of stochasticity. To tackle these problems, we first develop a powerful hierarchical probabilistic generative module to consider the non-stationarity and stochastic characteristics within MTS, and then combine it with a transformer for a well-defined variational generative dynamic model named Hierarchical Time series Variational Transformer (HTV-Trans), which restores the intrinsic non-stationary information to the temporal dependencies. Being a powerful probabilistic model, HTV-Trans is utilized to learn expressive representations of MTS and applied to forecasting tasks. Extensive experiments on diverse datasets show the efficiency of HTV-Trans on MTS forecasting tasks.
|
[
"['Muyao Wang' 'Wenchao Chen' 'Bo Chen']"
] |
null | null |
2403.05440
| null | null |
http://arxiv.org/abs/2403.05440v1
|
2024-03-08T16:48:20Z
|
2024-03-08T16:48:20Z
|
Is Cosine-Similarity of Embeddings Really About Similarity?
|
Cosine-similarity is the cosine of the angle between two vectors, or equivalently the dot product between their normalizations. A popular application is to quantify semantic similarity between high-dimensional objects by applying cosine-similarity to a learned low-dimensional feature embedding. This can work better but sometimes also worse than the unnormalized dot-product between embedded vectors in practice. To gain insight into this empirical observation, we study embeddings derived from regularized linear models, where closed-form solutions facilitate analytical insights. We derive analytically how cosine-similarity can yield arbitrary and therefore meaningless 'similarities.' For some linear models the similarities are not even unique, while for others they are implicitly controlled by the regularization. We discuss implications beyond linear models: a combination of different regularizations is employed when learning deep models; these have implicit and unintended effects when taking cosine-similarities of the resulting embeddings, rendering results opaque and possibly arbitrary. Based on these insights, we caution against blindly using cosine-similarity and outline alternatives.
|
[
"['Harald Steck' 'Chaitanya Ekanadham' 'Nathan Kallus']"
] |
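The flavor of the abstract's warning can be reproduced in a few lines: in a factorized linear model, a diagonal rescaling of the factors leaves every prediction (dot product) unchanged while altering cosine similarities arbitrarily. A self-contained numpy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # e.g. user embeddings
B = rng.standard_normal((5, 3))   # e.g. item embeddings

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Predictions depend only on A @ B.T, which is invariant under
# A -> A D, B -> B D^{-1} for any invertible diagonal D ...
D = np.diag([0.1, 1.0, 10.0])
A2, B2 = A @ D, B @ np.linalg.inv(D)
assert np.allclose(A @ B.T, A2 @ B2.T)

# ... yet the cosine similarity between two user embeddings is not:
print(cosine(A[0], A[1]))    # one answer
print(cosine(A2[0], A2[1]))  # a different answer from an "equivalent" model
```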
null | null |
2403.05441
| null | null |
http://arxiv.org/pdf/2403.05441v1
|
2024-03-08T16:51:27Z
|
2024-03-08T16:51:27Z
|
Bayesian Hierarchical Probabilistic Forecasting of Intraday Electricity
Prices
|
We present a first study of Bayesian forecasting of electricity prices traded on the German continuous intraday market which fully incorporates parameter uncertainty. Our target variable is the IDFull price index, forecasts are given in terms of posterior predictive distributions. For validation we use the exceedingly volatile electricity prices of 2022, which have hardly been the subject of forecasting studies before. As a benchmark model, we use all available intraday transactions at the time of forecast creation to compute a current value for the IDFull. According to the weak-form efficiency hypothesis, it would not be possible to significantly improve this benchmark built from last price information. We do, however, observe statistically significant improvement in terms of both point measures and probability scores. Finally, we challenge the declared gold standard of using LASSO for feature selection in electricity price forecasting by presenting strong statistical evidence that Orthogonal Matching Pursuit (OMP) leads to better forecasting performance.
|
[
"['Daniel Nickelsen' 'Gernot Müller']"
] |
null | null |
2403.05446
| null | null |
http://arxiv.org/pdf/2403.05446v1
|
2024-03-08T16:54:27Z
|
2024-03-08T16:54:27Z
|
An Improved Algorithm for Learning Drifting Discrete Distributions
|
We present a new adaptive algorithm for learning discrete distributions under distribution drift. In this setting, we observe a sequence of independent samples from a discrete distribution that is changing over time, and the goal is to estimate the current distribution. Since we have access to only a single sample for each time step, a good estimation requires a careful choice of the number of past samples to use. To use more samples, we must resort to samples further in the past, and we incur a drift error due to the bias introduced by the change in distribution. On the other hand, if we use a small number of past samples, we incur a large statistical error as the estimation has a high variance. We present a novel adaptive algorithm that can solve this trade-off without any prior knowledge of the drift. Unlike previous adaptive results, our algorithm characterizes the statistical error using data-dependent bounds. This technicality enables us to overcome the limitations of the previous work that require a fixed finite support whose size is known in advance and that cannot change over time. Additionally, we can obtain tighter bounds depending on the complexity of the drifting distribution, and also consider distributions with infinite support.
|
[
"['Alessio Mazzetto']"
] |
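The abstract above centers on a trade-off: long windows incur drift error, short windows incur statistical error. The sketch below is a toy version of that trade-off (compare estimates over nested windows and penalize small ones), not the paper's adaptive algorithm or its data-dependent bounds; the constant `c` is an arbitrary knob.

```python
import math
from collections import Counter

def estimate_current(samples, c=1.0):
    """Toy window selection for estimating a drifting discrete distribution."""
    def empirical(xs):
        n = len(xs)
        return {x: k / n for x, k in Counter(xs).items()}

    def l1(p, q):
        return sum(abs(p.get(x, 0) - q.get(x, 0)) for x in set(p) | set(q))

    best, best_score = None, float("inf")
    r = 2
    while r <= len(samples):
        p_r = empirical(samples[-r:])            # estimate over last r samples
        p_half = empirical(samples[-(r // 2):])  # ... and over the last r/2
        # Drift proxy (disagreement of nested windows) + variance proxy.
        score = l1(p_r, p_half) + c * math.sqrt(len(p_r) / r)
        if score < best_score:
            best, best_score = p_r, score
        r *= 2
    return best
```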
null | null |
2403.05452
| null | null |
http://arxiv.org/pdf/2403.05452v3
|
2024-05-01T15:58:52Z
|
2024-03-08T16:57:54Z
|
The R2D2 deep neural network series paradigm for fast precision imaging
in radio astronomy
|
Radio-interferometric (RI) imaging entails solving high-resolution high-dynamic range inverse problems from large data volumes. Recent image reconstruction techniques grounded in optimization theory have demonstrated remarkable capability for imaging precision, well beyond CLEAN's capability. These range from advanced proximal algorithms propelled by handcrafted regularization operators, such as the SARA family, to hybrid plug-and-play (PnP) algorithms propelled by learned regularization denoisers, such as AIRI. Optimization and PnP structures are however highly iterative, which hinders their ability to handle the extreme data sizes expected from future instruments. To address this scalability challenge, we introduce a novel deep learning approach, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging" (R2D2). R2D2's reconstruction is formed as a series of residual images, iteratively estimated as outputs of Deep Neural Networks (DNNs) taking the previous iteration's image estimate and associated data residual as inputs. It thus takes a hybrid structure between a PnP algorithm and a learned version of the matching pursuit algorithm that underpins CLEAN. We present a comprehensive study of our approach, featuring its multiple incarnations distinguished by their DNN architectures. We provide a detailed description of its training process, targeting a telescope-specific approach. R2D2's capability to deliver high precision is demonstrated in simulation, across a variety of image and observation settings using the Very Large Array (VLA). Its reconstruction speed is also demonstrated: with only a few iterations required to clean data residuals at dynamic ranges up to 100000, R2D2 opens the door to fast precision imaging. R2D2 codes are available in the BASPLib library on GitHub.
|
[
"['Amir Aghabiglou' 'Chung San Chu' 'Arwa Dabbech' 'Yves Wiaux']"
] |
null | null |
2403.05465
| null | null |
http://arxiv.org/pdf/2403.05465v2
|
2024-03-26T18:43:35Z
|
2024-03-08T17:28:49Z
|
Algorithm-Hardware Co-Design of Distribution-Aware Logarithmic-Posit
Encodings for Efficient DNN Inference
|
Traditional Deep Neural Network (DNN) quantization methods using integer, fixed-point, or floating-point data types struggle to capture diverse DNN parameter distributions at low precision, and often require large silicon overhead and intensive quantization-aware training. In this study, we introduce Logarithmic Posits (LP), an adaptive, hardware-friendly data type inspired by posits that dynamically adapts to DNN weight/activation distributions by parameterizing LP bit fields. We also develop a novel genetic-algorithm based framework, LP Quantization (LPQ), to find optimal layer-wise LP parameters while reducing representational divergence between quantized and full-precision models through a novel global-local contrastive objective. Additionally, we design a unified mixed-precision LP accelerator (LPA) architecture comprising processing elements (PEs) incorporating LP in the computational datapath. Our algorithm-hardware co-design demonstrates on average <1% drop in top-1 accuracy across various CNN and ViT models. It also achieves ~2x improvements in performance per unit area and 2.2x gains in energy efficiency compared to state-of-the-art quantization accelerators using different data types.
|
[
"['Akshat Ramachandran' 'Zishen Wan' 'Geonhwa Jeong' 'John Gustafson'\n 'Tushar Krishna']"
] |
null | null |
2403.05490
| null | null |
http://arxiv.org/pdf/2403.05490v1
|
2024-03-08T17:55:41Z
|
2024-03-08T17:55:41Z
|
Poly-View Contrastive Learning
|
Contrastive learning typically matches pairs of related views among a number of unrelated negative views. Views can be generated (e.g. by augmentations) or be observed. We investigate matching when there are more than two related views which we call poly-view tasks, and derive new representation learning objectives using information maximization and sufficient statistics. We show that with unlimited computation, one should maximize the number of related views, and with a fixed compute budget, it is beneficial to decrease the number of unique samples whilst increasing the number of views of those samples. In particular, poly-view contrastive models trained for 128 epochs with batch size 256 outperform SimCLR trained for 1024 epochs at batch size 4096 on ImageNet1k, challenging the belief that contrastive models require large batch sizes and many training epochs.
|
[
"['Amitis Shidani' 'Devon Hjelm' 'Jason Ramapuram' 'Russ Webb'\n 'Eeshan Gunesh Dhekane' 'Dan Busbridge']"
] |
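One simple way to set up the poly-view matching task the abstract describes: treat every other view of the same sample as a positive and all views of other samples as negatives in an InfoNCE-style loss. This is a generic multi-positive contrastive loss for illustration, not the information-maximization objectives derived in the paper.

```python
import torch
import torch.nn.functional as F

def multi_positive_info_nce(z, tau=0.1):
    """Contrastive loss over z of shape (N, M, D): N samples, M > 2 views.

    For each anchor view, the other M-1 views of the same sample are
    positives; all views of other samples are negatives.
    """
    N, M, D = z.shape
    z = F.normalize(z, dim=-1).reshape(N * M, D)
    sim = z @ z.T / tau                                # (NM, NM) similarities
    sample_id = torch.arange(N).repeat_interleave(M)
    pos = sample_id[:, None] == sample_id[None, :]     # same-sample mask
    pos.fill_diagonal_(False)                          # drop self-pairs
    self_mask = torch.eye(N * M, dtype=torch.bool)
    logprob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    return -(logprob[pos]).mean()

z = torch.randn(8, 4, 16)        # 8 samples, 4 views each, 16-dim embeddings
loss = multi_positive_info_nce(z)
```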
null | null |
2403.05527
| null | null |
http://arxiv.org/pdf/2403.05527v2
|
2024-03-11T18:55:40Z
|
2024-03-08T18:48:30Z
|
GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless
Generative Inference of LLM
|
Key-value (KV) caching has become the de-facto technique for accelerating generation in large language model (LLM) inference. However, the growing cache demand with increasing sequence length has transformed LLM inference into a memory-bound problem, significantly constraining the system throughput. Existing methods rely on dropping unimportant tokens or quantizing all entries uniformly. Such methods, however, often incur high approximation errors to represent the compressed matrices. The autoregressive decoding process further compounds the error of each step, resulting in critical deviation in model generation and deterioration of performance. To tackle this challenge, we propose GEAR, an efficient KV cache compression framework that achieves near-lossless high-ratio compression. GEAR first applies ultra-low-precision quantization to the majority of entries of similar magnitudes. It then employs a low-rank matrix to approximate the quantization error, and a sparse matrix to remedy individual errors from outlier entries. By adeptly integrating the three techniques, GEAR is able to fully exploit their synergistic potentials. Our experiments demonstrate that compared to alternatives, GEAR achieves near-lossless 4-bit KV cache compression with up to 2.38x throughput improvement, while reducing peak-memory size up to 2.29x. Our code is publicly available at https://github.com/HaoKang-Timmy/GEAR.
|
[
"['Hao Kang' 'Qingru Zhang' 'Souvik Kundu' 'Geonhwa Jeong' 'Zaoxing Liu'\n 'Tushar Krishna' 'Tuo Zhao']"
] |
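The abstract names three ingredients: ultra-low-precision quantization of most entries, a low-rank approximation of the quantization error, and a sparse matrix for outlier residuals. Here is a hedged sketch of that decomposition on a single cached matrix; the bit width, rank, and outlier fraction are illustrative, and the real GEAR integrates this into the decoding loop rather than operating offline.

```python
import torch

def gear_like_compress(W, bits=4, rank=2, outlier_frac=0.01):
    """Quantize + low-rank error + sparse outliers, then reconstruct."""
    lo, hi = W.min(), W.max()
    scale = (hi - lo) / (2 ** bits - 1)
    W_q = torch.round((W - lo) / scale) * scale + lo    # uniform quantization
    err = W - W_q                                       # quantization error
    U, S, V = torch.svd_lowrank(err, q=rank)            # low-rank error term
    L = U @ torch.diag(S) @ V.T
    resid = err - L
    k = max(1, int(outlier_frac * resid.numel()))       # largest residuals
    idx = resid.abs().flatten().topk(k).indices
    sparse = torch.zeros_like(resid).flatten()
    sparse[idx] = resid.flatten()[idx]
    return W_q + L + sparse.reshape(W.shape)            # decompressed estimate

W = torch.randn(64, 128)                # e.g. a cached key matrix
W_hat = gear_like_compress(W)
print((W - W_hat).norm() / W.norm())    # small relative reconstruction error
```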
null | null |
2403.05529
| null | null |
http://arxiv.org/pdf/2403.05529v2
|
2024-03-12T22:27:02Z
|
2024-03-08T18:50:19Z
|
Computational-Statistical Gaps in Gaussian Single-Index Models
|
Single-Index Models are high-dimensional regression problems with planted structure, whereby labels depend on an unknown one-dimensional projection of the input via a generic, non-linear, and potentially non-deterministic transformation. As such, they encompass a broad class of statistical inference tasks, and provide a rich template to study statistical and computational trade-offs in the high-dimensional regime. While the information-theoretic sample complexity to recover the hidden direction is linear in the dimension $d$, we show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples, where $k^\star$ is a "generative" exponent associated with the model that we explicitly characterize. Moreover, we show that this sample complexity is also sufficient, by establishing matching upper bounds using a partial-trace algorithm. Therefore, our results provide evidence of a sharp computational-to-statistical gap (under both the SQ and LDP class) whenever $k^\star>2$. To complete the study, we provide examples of smooth and Lipschitz deterministic target functions with arbitrarily large generative exponents $k^\star$.
|
[
"['Alex Damian' 'Loucas Pillaud-Vivien' 'Jason D. Lee' 'Joan Bruna']"
] |
null | null |
2403.05532
| null | null |
http://arxiv.org/pdf/2403.05532v1
|
2024-03-08T18:57:00Z
|
2024-03-08T18:57:00Z
|
Tune without Validation: Searching for Learning Rate and Weight Decay on
Training Sets
|
We introduce Tune without Validation (Twin), a pipeline for tuning learning rate and weight decay without validation sets. We leverage a recent theoretical framework concerning learning phases in hypothesis space to devise a heuristic that predicts what hyper-parameter (HP) combinations yield better generalization. Twin performs a grid search of trials according to an early-/non-early-stopping scheduler and then segments the region that provides the best results in terms of training loss. Among these trials, the weight norm strongly correlates with predicting generalization. To assess the effectiveness of Twin, we run extensive experiments on 20 image classification datasets and train several families of deep networks, including convolutional, transformer, and feed-forward models. We demonstrate proper HP selection when training from scratch and fine-tuning, emphasizing small-sample scenarios.
|
[
"['Lorenzo Brigato' 'Stavroula Mougiakakou']"
] |
null | null |
2403.05540
| null | null |
http://arxiv.org/pdf/2403.05540v1
|
2024-02-02T23:04:13Z
|
2024-02-02T23:04:13Z
|
Extinction Risks from AI: Invisible to Science?
|
In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart's Law as "Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity", and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart's Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be informative for evaluating specific arguments for Extinction-level Goodhart's Law. Since each of the conditions seems to significantly contribute to the complexity of the resulting model, formally evaluating the hypothesis might be exceedingly difficult. This raises the possibility that whether the risk of extinction from artificial intelligence is real or not, the underlying dynamics might be invisible to current scientific methods.
|
[
"['Vojtech Kovarik' 'Christian van Merwijk' 'Ida Mattsson']"
] |
null | null |
2403.05541
| null | null |
http://arxiv.org/pdf/2403.05541v1
|
2024-02-03T02:14:47Z
|
2024-02-03T02:14:47Z
|
AI in ESG for Financial Institutions: An Industrial Survey
|
The burgeoning integration of Artificial Intelligence (AI) into Environmental, Social, and Governance (ESG) initiatives within the financial sector represents a paradigm shift towards more sustainable and equitable financial practices. This paper surveys the industrial landscape to delineate the necessity and impact of AI in bolstering ESG frameworks. With the advent of stringent regulatory requirements and heightened stakeholder awareness, financial institutions (FIs) are increasingly compelled to adopt ESG criteria. AI emerges as a pivotal tool in navigating the complex interplay of financial activities and sustainability goals. Our survey categorizes AI applications across three main pillars of ESG, illustrating how AI enhances analytical capabilities, risk assessment, customer engagement, reporting accuracy and more. Further, we delve into the critical considerations surrounding the use of data and the development of models, underscoring the importance of data quality, privacy, and model robustness. The paper also addresses the imperative of responsible and sustainable AI, emphasizing the ethical dimensions of AI deployment in ESG-related banking processes. Conclusively, our findings suggest that while AI offers transformative potential for ESG in banking, it also poses significant challenges that necessitate careful consideration. The final part of the paper synthesizes the survey's insights, proposing a forward-looking stance on the adoption of AI in ESG practices. We conclude with recommendations and a reference architecture for future research and development, advocating for a balanced approach that leverages AI's strengths while mitigating its risks within the ESG domain.
|
[
"['Jun Xu']"
] |
null | null |
2403.05546
| null | null |
http://arxiv.org/abs/2403.05546v1
|
2024-02-06T16:33:56Z
|
2024-02-06T16:33:56Z
|
Unified Occupancy on a Public Transport Network through Combination of
AFC and APC Data
|
In a transport network, the onboard occupancy is key for gaining insights into travelers' habits and adjusting the offer. Traditionally, operators have relied on field studies to evaluate ridership of a typical workday. However, automated fare collection (AFC) and automatic passenger counting (APC) data, which provide complete temporal coverage, are often available but underexploited. It should be noted, however, that each data source comes with its own biases: AFC data may not account for fraud, while not all vehicles are equipped with APC systems. This paper introduces the unified occupancy method, a geostatistical model to extrapolate occupancy to every course of a public transportation network by combining AFC and APC data with partial coverage. Unified occupancy completes missing APC information for courses on lines where other courses have APC measures, as well as for courses on lines where no APC data is available at all. The accuracy of this method is evaluated on real data from several public transportation networks in France.
|
[
"['Amir Dib' 'Noëlie Cherrier' 'Martin Graive' 'Baptiste Rérolle'\n 'Eglantine Schmitt']"
] |
null | null |
2403.05547
| null | null |
http://arxiv.org/pdf/2403.05547v1
|
2024-02-06T17:26:24Z
|
2024-02-06T17:26:24Z
|
AI for non-programmers: Applied AI in the lectures for students without
programming skills
|
Applications such as ChatGPT and WOMBO Dream make it easy to inspire students without programming knowledge to use artificial intelligence (AI). Therefore, given the increasing importance of AI in all disciplines, innovative strategies are needed to educate students in AI without programming knowledge so that AI can be integrated into their study modules as a future skill. This work presents a didactic planning script for applied AI. The didactic planning script is based on the AI application pipeline and links AI concepts with study-relevant topics. These linkages open up a new solution space and promote students' interest in and understanding of the potentials and risks of AI. An example lecture series for master students in energy management shows how AI can be seamlessly integrated into discipline-specific lectures. To this end, the planning script for applied AI is adapted to fit the study programs' topic. This specific teaching scenario enables students to solve a discipline-specific task step by step using the AI application pipeline. Thus, the application of the didactic planning script for applied AI shows the practical implementation of the theoretical concepts of AI. In addition, a checklist is presented that can be used to assess whether AI can be used in the discipline-specific lecture. AI as a future skill must be learned by students based on use cases that are relevant to the course of studies. For this reason, AI education should fit seamlessly into various curricula, even if the students do not have a programming background due to their field of study.
|
[
"['Julius Schöning' 'Tim Wawer' 'Kai-Michael Griese']"
] |
null | null |
2403.05548
| null | null |
http://arxiv.org/pdf/2403.05548v1
|
2024-02-06T20:34:49Z
|
2024-02-06T20:34:49Z
|
Monitoring the evolution of antisemitic discourse on extremist social
media using BERT
|
Racism and intolerance on social media contribute to a toxic online environment which may spill offline to foster hatred, and eventually lead to physical violence. That is the case with online antisemitism, the specific category of hatred considered in this study. Tracking antisemitic themes and their associated terminology over time in online discussions could help monitor the sentiments of their participants and their evolution, and possibly offer avenues for intervention that may prevent the escalation of hatred. Due to the large volume and constant evolution of online traffic, monitoring conversations manually is impractical. Instead, we propose an automated method that extracts antisemitic themes and terminology from extremist social media over time and captures their evolution. Since supervised learning would be too limited for such a task, we created an unsupervised online machine learning approach that uses large language models to assess the contextual similarity of posts. The method clusters similar posts together, dividing, and creating additional clusters over time when sub-themes emerge from existing ones or new themes appear. The antisemitic terminology used within each theme is extracted from the posts in each cluster. Our experiments show that our methodology outperforms existing baselines and demonstrates the kind of themes and sub-themes it discovers within antisemitic discourse along with their associated terminology. We believe that our approach will be useful for monitoring the evolution of all kinds of hatred beyond antisemitism on social platforms.
|
[
"['Raza Ul Mustafa' 'Nathalie Japkowicz']"
] |
null | null |
2403.05552
| null | null |
http://arxiv.org/abs/2403.05552v1
|
2024-02-08T21:29:41Z
|
2024-02-08T21:29:41Z
|
Multi-source and multimodal data fusion for predicting academic
performance in blended learning university courses
|
In this paper we applied data fusion approaches for predicting the final academic performance of university students using multiple-source, multimodal data from blended learning environments. We collected and preprocessed data about first-year university students from different sources: theory classes, practical sessions, on-line Moodle sessions, and a final exam. Our objective was to discover which data fusion approach produced the best results using our data. We carried out experiments by applying four different data fusion approaches and six classification algorithms. The results showed that the best predictions were produced using ensembles and selecting the best attributes approach with discretized data. The best prediction models showed us that the level of attention in theory classes, scores in Moodle quizzes, and the level of activity in Moodle forums were the best set of attributes for predicting students' final performance in our courses.
|
[
"['W. Chango' 'R. Cerezo' 'C. Romero']"
] |
null | null |
2403.05553
| null | null |
http://arxiv.org/pdf/2403.05553v1
|
2024-02-10T08:24:29Z
|
2024-02-10T08:24:29Z
|
Understanding the Progression of Educational Topics via Semantic
Matching
|
Education systems are dynamically changing to accommodate technological advances, industrial and societal needs, and to enhance students' learning journeys. Curriculum specialists and educators constantly revise taught subjects across educational grades to identify gaps, introduce new learning topics, and enhance the learning outcomes. This process is usually done within the same subjects (e.g. math) or across related subjects (e.g. math and physics) considering the same and different educational levels, leading to massive multi-layer comparisons. Having nuanced data about subjects, topics, and learning outcomes structured within a dataset empowers us to leverage data science to better understand the progression of various learning topics. In this paper, Bidirectional Encoder Representations from Transformers (BERT) topic modeling was used to extract topics from the curriculum, which were then used to identify relationships between subjects, track their progression, and identify conceptual gaps. We found that grouping learning outcomes by common topics helped specialists reduce redundancy and introduce new concepts in the curriculum. We built a dashboard to make the methodology available to curriculum specialists. Finally, we tested the validity of the approach with subject matter experts.
|
[
"['Tamador Alkhidir' 'Edmond Awad' 'Aamena Alshamsi']"
] |
null | null |
2403.05555
| null | null |
http://arxiv.org/abs/2403.05555v1
|
2024-02-10T16:07:38Z
|
2024-02-10T16:07:38Z
|
Subgroup Discovery in MOOCs: A Big Data Application for Describing
Different Types of Learners
|
The aim of this paper is to categorize and describe different types of learners in massive open online courses (MOOCs) by means of a subgroup discovery approach based on MapReduce. The final objective is to discover IF-THEN rules that appear in different MOOCs. The proposed subgroup discovery approach, which is an extension of the well-known FP-Growth algorithm, considers emerging parallel methodologies like MapReduce to be able to cope with extremely large datasets. As an additional feature, the proposal includes a threshold value to denote the number of courses that each discovered rule should satisfy. A post-processing step is also included so redundant subgroups can be removed. The experimental stage is carried out by considering de-identified data from the first year of 16 MITx and HarvardX courses on the edX platform. Experimental results demonstrate that the proposed MapReduce approach outperforms traditional sequential subgroup discovery approaches, achieving a runtime that is almost constant for different courses. Additionally, thanks to the final post-processing step, only interesting and not-redundant rules are discovered, hence reducing the number of subgroups in one or two orders of magnitude. Finally, the discovered subgroups are easily used by courses' instructors not only for descriptive purposes but also for additional tasks such as recommendation or personalization.
|
[
"['J. M. Luna' 'H. M. Fardoun' 'F. Padillo' 'C. Romero' 'S. Ventura']"
] |
null | null |
2403.05556
| null | null |
http://arxiv.org/pdf/2403.05556v1
|
2024-02-10T19:03:06Z
|
2024-02-10T19:03:06Z
|
Modeling and predicting students' engagement behaviors using mixture
Markov models
|
Students' engagement reflects their level of involvement in an ongoing learning process, which can be estimated through their interactions with a computer-based learning or assessment system. A prerequisite for stimulating student engagement lies in the capability to have an approximate representation model for comprehending students' varied (dis)engagement behaviors. In this paper, we utilized model-based clustering for this purpose, which generates K mixture Markov models to group students' traces containing their (dis)engagement behavioral patterns. To prevent the Expectation-Maximization (EM) algorithm from getting stuck in a local maximum, we also introduced a K-means-based initialization method named K-EM. We performed an experimental evaluation on two real datasets using three variants of the EM algorithm: the original EM, emEM, and K-EM, as well as non-mixture baseline models for both datasets. The proposed K-EM shows very promising results and achieves a significant performance difference in comparison with the other approaches, particularly on one of the datasets. Hence, we suggest performing further experiments using large datasets to validate our method. Additionally, visualization of the resultant clusters through first-order Markov chains reveals very useful insights about the (dis)engagement behaviors of the students. We conclude the paper with a discussion on the usefulness of our approach, limitations, and potential extensions of this work.
|
[
"['R. Maqsood' 'P. Ceravolo' 'C. Romero' 'S. Ventura']"
] |
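The abstract above proposes a K-means-based initialization (K-EM) to keep EM away from poor local maxima. A minimal sketch in the same spirit: featurize each student trace by its row-normalized transition counts and cluster those features to seed the mixture components. This mirrors the idea only; the paper's K-EM procedure differs in detail.

```python
import numpy as np
from sklearn.cluster import KMeans

def transition_features(traces, n_states):
    """One flattened, row-normalized transition-count matrix per trace."""
    feats = []
    for trace in traces:
        C = np.zeros((n_states, n_states))
        for a, b in zip(trace, trace[1:]):
            C[a, b] += 1
        C = C / np.maximum(C.sum(axis=1, keepdims=True), 1)
        feats.append(C.flatten())
    return np.array(feats)

# K-means on the transition features gives an initial grouping from which
# per-cluster Markov chains (and then EM) can be started.
traces = [[0, 1, 1, 0, 2], [2, 2, 2, 1, 2], [0, 1, 0, 1, 0]]
X = transition_features(traces, n_states=3)
init_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```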
null | null |
2403.05557
| null | null |
http://arxiv.org/pdf/2403.05557v1
|
2024-02-11T12:23:21Z
|
2024-02-11T12:23:21Z
|
Re-thinking Human Activity Recognition with Hierarchy-aware Label
Relationship Modeling
|
Human Activity Recognition (HAR) has been studied for decades, from data collection, learning models, to post-processing and result interpretations. However, the inherent hierarchy in the activities remains relatively under-explored, despite its significant impact on model performance and interpretation. In this paper, we propose H-HAR, by rethinking the HAR tasks from a fresh perspective by delving into their intricate global label relationships. Rather than building multiple classifiers separately for multi-layered activities, we explore the efficacy of a flat model enhanced with graph-based label relationship modeling. Being hierarchy-aware, the graph-based label modeling enhances the fundamental HAR model, by incorporating intricate label relationships into the model. We validate the proposal with a multi-label classifier on complex human activity data. The results highlight the advantages of the proposal, which can be vertically integrated into advanced HAR models to further enhance their performances.
|
[
"['Jingwei Zuo' 'Hakim Hacid']"
] |
null | null |
2403.05559
| null | null |
http://arxiv.org/pdf/2403.05559v1
|
2024-02-15T14:12:38Z
|
2024-02-15T14:12:38Z
|
Improving Cognitive Diagnosis Models with Adaptive Relational Graph
Neural Networks
|
Cognitive Diagnosis (CD) algorithms receive growing research interest in intelligent education. Typically, these CD algorithms assist students by inferring their abilities (i.e., their proficiency levels on various knowledge concepts). The proficiency levels can enable further targeted skill training and personalized exercise recommendations, thereby promoting students' learning efficiency in online education. Recently, researchers have found that building and incorporating a student-exercise bipartite graph is beneficial for enhancing diagnostic performance. However, there are still limitations in their studies. On one hand, researchers overlook the heterogeneity within edges, where there can be both correct and incorrect answers. On the other hand, they disregard the uncertainty within edges, e.g., a correct answer can indicate true mastery or fortunate guessing. To address the limitations, we propose Adaptive Semantic-aware Graph-based Cognitive Diagnosis model (ASG-CD), which introduces a novel and effective way to leverage bipartite graph information in CD. Specifically, we first map students, exercises, and knowledge concepts into a latent representation space and combine these latent representations to obtain student abilities and exercise difficulties. After that, we propose a Semantic-aware Graph Neural Network Layer to address edge heterogeneity. This layer splits the original bipartite graph into two subgraphs according to edge semantics, and aggregates information based on these two subgraphs separately. To mitigate the impact of edge uncertainties, we propose an Adaptive Edge Differentiation Layer that dynamically differentiates edges, followed by keeping reliable edges and filtering out uncertain edges. Extensive experiments on three real-world datasets have demonstrated the effectiveness of ASG-CD.
|
[
"['Pengyang Shao' 'Chen Gao' 'Lei Chen' 'Yonghui Yang' 'Kun Zhang'\n 'Meng Wang']"
] |
null | null |
2403.05571
| null | null |
http://arxiv.org/pdf/2403.05571v3
|
2024-05-26T16:52:21Z
|
2024-02-22T03:52:17Z
|
Combining Constrained Diffusion Models and Numerical Solvers for
Efficient and Robust Non-Convex Trajectory Optimization
|
Motivated by the need to solve open-loop optimal control problems with computational efficiency and reliable constraint satisfaction, we introduce a general framework that combines diffusion models and numerical optimization solvers. Optimal control problems are rarely solvable in closed form, hence they are often transcribed into numerical trajectory optimization problems, which then require initial guesses. These initial guesses are supplied in our framework by diffusion models. To mitigate the effect of samples that violate the problem constraints, we develop a novel constrained diffusion model to approximate the true distribution of locally optimal solutions with an additional constraint violation loss in training. To further enhance the robustness, the diffusion samples as initial guesses are fed to the numerical solver to refine and derive final optimal (and hence feasible) solutions. Experimental evaluations on three tasks verify the improved constraint satisfaction and computational efficiency with 4$\times$ to 30$\times$ acceleration using our proposed framework, which generalizes across trajectory optimization problems and scales well with problem complexity.
|
[
"['Anjian Li' 'Zihan Ding' 'Adji Bousso Dieng' 'Ryne Beeson']"
] |
null | null |
2403.05573
| null | null |
http://arxiv.org/pdf/2403.05573v1
|
2024-02-26T08:59:46Z
|
2024-02-26T08:59:46Z
|
Beyond Predictive Algorithms in Child Welfare
|
Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals which flatten CW case complexities and that the algorithms may benefit from incorporating contextually rich case narratives, i.e., casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives and applied computational text analysis on casenotes to highlight topics uncovered in the casenotes. Our study finds that common risk metrics used to assess families and build CWS predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public sector algorithms and towards using contextual sources of information such as narratives to study public sociotechnical systems.
|
[
"['Erina Seh-Young Moon' 'Devansh Saxena' 'Tegan Maharaj' 'Shion Guha']"
] |
null | null |
2403.05578
| null | null |
http://arxiv.org/pdf/2403.05578v1
|
2024-02-28T07:56:04Z
|
2024-02-28T07:56:04Z
|
Chaining text-to-image and large language model: A novel approach for
generating personalized e-commerce banners
|
Text-to-image models such as stable diffusion have opened a plethora of opportunities for generating art. Recent literature has surveyed the use of text-to-image models for enhancing the work of many creative artists. Many e-commerce platforms employ a manual process to generate banners, which is time-consuming and has scalability limitations. In this work, we demonstrate the use of text-to-image models for generating personalized web banners with dynamic content for online shoppers based on their interactions. The novelty in this approach lies in converting users' interaction data to meaningful prompts without human intervention. To this end, we utilize a large language model (LLM) to systematically extract a tuple of attributes from item meta-information. The attributes are then passed to a text-to-image model via prompt engineering to generate images for the banner. Our results show that the proposed approach can create high-quality personalized banners for users.
|
[
"['Shanu Vashishtha' 'Abhinav Prakash' 'Lalitesh Morishetti' 'Kaushiki Nag'\n 'Yokila Arora' 'Sushant Kumar' 'Kannan Achan']"
] |
null | null |
2403.05581
| null | null |
http://arxiv.org/pdf/2403.05581v1
|
2024-03-01T13:25:54Z
|
2024-03-01T13:25:54Z
|
Can Interpretability Layouts Influence Human Perception of Offensive
Sentences?
|
This paper conducts a user study to assess whether three machine learning (ML) interpretability layouts can influence participants' views when evaluating sentences containing hate speech, focusing on the "Misogyny" and "Racism" classes. Given the existence of divergent conclusions in the literature, we provide empirical evidence on using ML interpretability in online communities through statistical and qualitative analyses of questionnaire responses. The Generalized Additive Model estimates participants' ratings, incorporating within-subject and between-subject designs. While our statistical analysis indicates that none of the interpretability layouts significantly influences participants' views, our qualitative analysis demonstrates the advantages of ML interpretability: 1) triggering participants to provide corrective feedback in case of discrepancies between their views and the model, and 2) providing insights to evaluate a model's behavior beyond traditional performance metrics.
|
[
"['Thiago Freitas dos Santos' 'Nardine Osman' 'Marco Schorlemmer']"
] |
null | null |
2403.05591
| null | null |
http://arxiv.org/pdf/2403.05591v1
|
2024-03-05T23:32:45Z
|
2024-03-05T23:32:45Z
|
Data-Driven Ergonomic Risk Assessment of Complex Hand-intensive
Manufacturing Processes
|
Hand-intensive manufacturing processes, such as composite layup and textile draping, require significant human dexterity to accommodate task complexity. These strenuous hand motions often lead to musculoskeletal disorders and rehabilitation surgeries. We develop a data-driven ergonomic risk assessment system with a special focus on hand and finger activity to better identify and address ergonomic issues related to hand-intensive manufacturing processes. The system comprises a multi-modal sensor testbed to collect and synchronize operator upper body pose, hand pose and applied forces; a Biometric Assessment of Complete Hand (BACH) formulation to measure high-fidelity hand and finger risks; and industry-standard risk scores associated with upper body posture, RULA, and hand activity, HAL. Our findings demonstrate that BACH captures injurious activity with a higher granularity in comparison to the existing metrics. Machine learning models are also used to automate RULA and HAL scoring, and generalize well to unseen participants. Our assessment system, therefore, provides ergonomic interpretability of the manufacturing processes studied, and could be used to mitigate risks through minor workplace optimization and posture corrections.
|
[
"['Anand Krishnan' 'Xingjian Yang' 'Utsav Seth' 'Jonathan M. Jeyachandran'\n 'Jonathan Y. Ahn' 'Richard Gardner' 'Samuel F. Pedigo' 'Adriana'\n 'Blom-Schieber' 'Ashis G. Banerjee' 'Krithika Manohar']"
] |
null | null |
2403.05595
| null | null |
http://arxiv.org/abs/2403.05595v1
|
2024-03-07T10:05:09Z
|
2024-03-07T10:05:09Z
|
Comparison of gait phase detection using traditional machine learning
and deep learning techniques
|
Human walking is a complex activity with a high level of cooperation and interaction between different systems in the body. Accurate detection of the phases of the gait in real-time is crucial to control lower-limb assistive devices like exoskeletons and prostheses. There are several ways to detect the walking gait phase, ranging from cameras and depth sensors to sensors attached to the device itself or the human body. Electromyography (EMG) is one of the input methods that has captured considerable attention due to its precision and the short time delay between neuromuscular activity and muscle movement. This study proposes several Machine Learning (ML) based models on lower-limb EMG data for human walking. The proposed models are based on Gaussian Naive Bayes (NB), Decision Tree (DT), Random Forest (RF), Linear Discriminant Analysis (LDA) and Deep Convolutional Neural Networks (DCNN). The traditional ML models are trained on hand-crafted features or their reduced components using Principal Component Analysis (PCA). In contrast, the DCNN model utilises convolutional layers to extract features from raw data. The results show up to 75% average accuracy for the traditional ML models and 79% for the Deep Learning (DL) model. The highest accuracy achieved across 50 training trials of the DL model is 89.5%.
|
[
"['Farhad Nazari' 'Navid Mohajer' 'Darius Nahavandi' 'Abbas Khosravi']"
] |
null | null |
2403.05598
| null | null |
http://arxiv.org/pdf/2403.05598v1
|
2024-03-07T21:22:07Z
|
2024-03-07T21:22:07Z
|
Privacy Amplification for the Gaussian Mechanism via Bounded Support
|
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset. These guarantees can be desirable compared to vanilla DP in real world settings as they tightly upper-bound the privacy leakage for a $\textit{specific}$ individual in an $\textit{actual}$ dataset, rather than considering worst-case datasets. While these frameworks are beginning to gain popularity, to date, there is a lack of private mechanisms that can fully leverage advantages of data-dependent accounting. To bridge this gap, we propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting. Experiments on model training with DP-SGD show that using bounded support Gaussian mechanisms can provide a reduction of the pDP bound $\epsilon$ by as much as 30% without negative effects on model utility.
|
[
"['Shengyuan Hu' 'Saeed Mahloujifar' 'Virginia Smith' 'Kamalika Chaudhuri'\n 'Chuan Guo']"
] |
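A minimal sketch of a bounded-support Gaussian mechanism as the abstract describes it: Gaussian noise whose released output is restricted to a fixed interval, implemented here by rejection sampling. The amplification analysis under pDP/FIL accounting is the paper's contribution and is not reproduced here.

```python
import numpy as np

def bounded_gaussian_release(value, sigma, lo, hi, rng=None):
    """Release a noisy value guaranteed to land in [lo, hi].

    Samples value + N(0, sigma^2) and rejects draws outside the support;
    assumes [lo, hi] carries non-negligible probability mass so the loop
    terminates quickly.
    """
    rng = rng or np.random.default_rng()
    while True:
        out = value + sigma * rng.standard_normal()
        if lo <= out <= hi:
            return out

print(bounded_gaussian_release(0.7, sigma=0.5, lo=0.0, hi=1.0))
```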
null | null |
2403.05600
| null | null |
http://arxiv.org/pdf/2403.05600v1
|
2024-03-07T23:20:34Z
|
2024-03-07T23:20:34Z
|
Density-Regression: Efficient and Distance-Aware Deep Regressor for
Uncertainty Estimation under Distribution Shifts
|
Modern deep-ensemble techniques achieve strong uncertainty estimation performance by going through multiple forward passes with different models, at the price of high storage requirements and slow inference (test-time) speed. To address this issue, we propose Density-Regression, a method that leverages the density function in uncertainty estimation and achieves fast inference with a single forward pass. We prove it is distance-aware on the feature space, which is a necessary condition for a neural network to produce high-quality uncertainty estimation under distribution shifts. Empirically, we conduct experiments on regression tasks with the cubic toy dataset, the UCI benchmark, time-series weather forecasting, and depth estimation under real-world shifted applications. We show that Density-Regression has competitive uncertainty estimation performance under distribution shifts with modern deep regressors while using a smaller model size and achieving faster inference speed.
|
[
"['Ha Manh Bui' 'Anqi Liu']"
] |
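A toy rendering of the distance-awareness idea in the abstract above: fit a density to the training features and inflate predictive variance as test features move away from that density's bulk. Everything below (the Gaussian density, Mahalanobis inflation, and `base_var`) is an illustrative assumption, not the paper's method.

```python
import numpy as np

class DensityAwareRegressor:
    """Linear regressor whose variance grows off the training manifold."""

    def fit(self, Phi, y):
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        self.mu = Phi.mean(axis=0)
        cov = np.cov(Phi.T) + 1e-6 * np.eye(Phi.shape[1])
        self.prec = np.linalg.inv(cov)
        return self

    def predict(self, Phi, base_var=0.1):
        mean = Phi @ self.w
        d = Phi - self.mu
        m2 = np.einsum("ni,ij,nj->n", d, self.prec, d)  # Mahalanobis^2
        return mean, base_var * (1.0 + m2)  # variance inflates off-manifold

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 3))
y = Phi @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
reg = DensityAwareRegressor().fit(Phi, y)
mean, var = reg.predict(Phi[:5])
```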
null | null |
2403.05601
| null | null |
http://arxiv.org/pdf/2403.05601v1
|
2024-03-08T00:02:42Z
|
2024-03-08T00:02:42Z
|
Select High-Level Features: Efficient Experts from a Hierarchical
Classification Network
|
This study introduces a novel expert generation method that dynamically reduces task and computational complexity without compromising predictive performance. It is based on a new hierarchical classification network topology that combines sequential processing of generic low-level features with parallelism and nesting of high-level features. This structure allows for an innovative extraction technique: the ability to select only the high-level features of task-relevant categories. In certain cases, it is possible to skip almost all unneeded high-level features, which can significantly reduce the inference cost and is highly beneficial in resource-constrained conditions. We believe this method paves the way for future network designs that are lightweight and adaptable, making them suitable for a wide range of applications, from compact edge devices to large-scale clouds. In terms of dynamic inference, our methodology can exclude up to 88.7% of parameters and use 73.4% fewer giga-multiply-accumulate (GMAC) operations; analysis against comparative baselines shows an average reduction of 47.6% in parameters and 5.8% in GMACs across the cases we evaluated.
|
[
"['André Kelm' 'Niels Hannemann' 'Bruno Heberle' 'Lucas Schmidt'\n 'Tim Rolff' 'Christian Wilms' 'Ehsan Yaghoubi' 'Simone Frintrop']"
] |
null | null |
2403.05602
| null | null |
http://arxiv.org/abs/2403.05602v1
|
2024-03-08T01:43:21Z
|
2024-03-08T01:43:21Z
|
Extracting Protein-Protein Interactions (PPIs) from Biomedical
Literature using Attention-based Relational Context Information
|
Because protein-protein interactions (PPIs) are crucial to understand living systems, harvesting these data is essential to probe disease development and discern gene/protein functions and biological processes. Some curated datasets contain PPI data derived from the literature and other sources (e.g., IntAct, BioGrid, DIP, and HPRD). However, they are far from exhaustive, and their maintenance is a labor-intensive process. On the other hand, machine learning methods to automate PPI knowledge extraction from the scientific literature have been limited by a shortage of appropriate annotated data. This work presents a unified, multi-source PPI corpus with vetted interaction definitions augmented by binary interaction type labels, and a Transformer-based deep learning method that exploits entities' relational context information for relation representation to improve relation classification performance. The model's performance is evaluated on four widely studied biomedical relation extraction datasets, as well as this work's target PPI datasets, to observe the effectiveness of the representation for relation extraction tasks on various data. Results show the model outperforms prior state-of-the-art models. The code and data are available at: https://github.com/BNLNLP/PPI-Relation-Extraction
|
[
"['Gilchan Park' 'Sean McCorkle' 'Carlos Soto' 'Ian Blaby' 'Shinjae Yoo']"
] |
null | null |
2403.05606
| null | null |
http://arxiv.org/pdf/2403.05606v1
|
2024-03-08T07:15:53Z
|
2024-03-08T07:15:53Z
|
A Concept-based Interpretable Model for the Diagnosis of Choroid
Neoplasias using Multimodal Data
|
Diagnosing rare diseases presents a common challenge in clinical practice, necessitating the expertise of specialists for accurate identification. The advent of machine learning offers a promising solution, while the development of such technologies is hindered by the scarcity of data on rare conditions and the demand for models that are both interpretable and trustworthy in a clinical context. Interpretable AI, with its capacity for human-readable outputs, can facilitate validation by clinicians and contribute to medical education. In the current work, we focus on choroid neoplasias, the most prevalent form of eye cancer in adults, albeit rare with an incidence of 5.1 per million. We built the largest dataset to date, consisting of 750 patients and incorporating three distinct imaging modalities collected from 2004 to 2022. Our work introduces a concept-based interpretable model that distinguishes between three types of choroidal tumors, integrating insights from domain experts via radiological reports. Remarkably, this model not only achieves an F1 score of 0.91, rivaling that of black-box models, but also boosts the diagnostic accuracy of junior doctors by 42%. This study highlights the significant potential of interpretable machine learning in improving the diagnosis of rare diseases, laying a groundwork for future breakthroughs in medical AI that could tackle a wider array of complex health scenarios.
|
[
"['Yifan Wu' 'Yang Liu' 'Yue Yang' 'Michael S. Yao' 'Wenli Yang'\n 'Xuehui Shi' 'Lihong Yang' 'Dongjun Li' 'Yueming Liu' 'James C. Gee'\n 'Xuan Yang' 'Wenbin Wei' 'Shi Gu']"
] |
null | null |
2403.05610
| null | null |
http://arxiv.org/pdf/2403.05610v1
|
2024-03-08T13:23:42Z
|
2024-03-08T13:23:42Z
|
Evidence, Definitions and Algorithms regarding the Existence of
Cohesive-Convergence Groups in Neural Network Optimization
|
Understanding the convergence process of neural networks is one of the most complex and crucial issues in the field of machine learning. Despite the close association of notable successes in this domain with the convergence of artificial neural networks, this concept remains predominantly theoretical. In reality, due to the non-convex nature of the optimization problems that artificial neural networks tackle, very few trained networks actually achieve convergence. To expand recent research efforts on artificial-neural-network convergence, this paper will discuss a different approach based on observations of cohesive-convergence groups emerging during the optimization process of an artificial neural network.
|
[
"['Thien An L. Nguyen']"
] |
null | null |
2403.05612
| null | null |
http://arxiv.org/pdf/2403.05612v2
|
2024-05-28T23:56:14Z
|
2024-03-08T18:28:13Z
|
Unfamiliar Finetuning Examples Control How Language Models Hallucinate
|
Large language models are known to hallucinate when faced with unfamiliar queries, but the underlying mechanisms that govern how models hallucinate are not yet fully understood. In this work, we find that unfamiliar examples in the models' finetuning data -- those that introduce concepts beyond the base model's scope of knowledge -- are crucial in shaping these errors. In particular, we find that an LLM's hallucinated predictions tend to mirror the responses associated with its unfamiliar finetuning examples. This suggests that by modifying how unfamiliar finetuning examples are supervised, we can influence a model's responses to unfamiliar queries (e.g., say ``I don't know''). We empirically validate this observation in a series of controlled experiments involving SFT, RL, and reward model finetuning on TriviaQA and MMLU. Our work further investigates RL finetuning strategies for improving the factuality of long-form model generations. We find that, while hallucinations from the reward model can significantly undermine the effectiveness of RL factuality finetuning, strategically controlling how reward models hallucinate can minimize these negative effects. Leveraging our previous observations on controlling hallucinations, we propose an approach for learning more reliable reward models, and show that they improve the efficacy of RL factuality finetuning in long-form biography and book/movie plot generation tasks.
|
[
"['Katie Kang' 'Eric Wallace' 'Claire Tomlin' 'Aviral Kumar'\n 'Sergey Levine']"
] |
null | null |
2403.05618
| null | null |
http://arxiv.org/pdf/2403.05618v1
|
2024-03-08T19:00:01Z
|
2024-03-08T19:00:01Z
|
OmniJet-$α$: The first cross-task foundation model for particle
physics
|
Foundation models are multi-dataset and multi-task machine learning methods that, once pre-trained, can be fine-tuned for a large variety of downstream applications. The successful development of such general-purpose models for physics data would be a major breakthrough, as they could improve the achievable physics performance while drastically reducing the required amount of training time and data. We report significant progress on this challenge on several fronts. First, a comprehensive set of evaluation methods is introduced to judge the quality of an encoding from physics data into a representation suitable for the autoregressive generation of particle jets with transformer architectures (the common backbone of foundation models). These measures motivate the choice of a higher-fidelity tokenization compared to previous works. Finally, we demonstrate transfer learning between an unsupervised problem (jet generation) and a classic supervised task (jet tagging) with our new OmniJet-$\alpha$ model. This is the first successful transfer between two different and actively studied classes of tasks and constitutes a major step in the building of foundation models for particle physics.
|
[
"['Joschka Birk' 'Anna Hallin' 'Gregor Kasieczka']"
] |
null | null |
2403.05645
| null | null |
http://arxiv.org/pdf/2403.05645v2
|
2024-06-20T21:18:58Z
|
2024-03-08T19:36:20Z
|
Geometric Neural Network based on Phase Space for BCI-EEG decoding
|
The integration of Deep Learning (DL) algorithms into brain signal analysis is still in its nascent stages compared to their success in fields like Computer Vision, especially in Brain-Computer Interfaces (BCIs), where brain activity is decoded to control external devices without requiring muscle control. Electroencephalography (EEG) is a widely adopted choice for designing BCI systems due to its non-invasive and cost-effective nature and excellent temporal resolution. Still, it comes at the expense of limited training data, a poor signal-to-noise ratio, and large variability across and within subject recordings. Finally, setting up a BCI system with many electrodes takes a long time, hindering the widespread adoption of reliable DL architectures in BCIs outside research laboratories. To improve adoption, we need to improve user comfort using, for instance, reliable algorithms that operate with few electrodes. Approach: Our research aims to develop a DL algorithm that delivers effective results with a limited number of electrodes. Taking advantage of the Augmented Covariance Method with SPDNet, we propose the SPDNet$_{\psi}$ architecture and analyze its performance and computational impact, as well as the interpretability of the results. The evaluation is conducted with 5-fold cross-validation, using only three electrodes positioned above the motor cortex. The methodology was tested on nearly 100 subjects from several open-source datasets using the Mother of all BCI Benchmarks (MOABB) framework. Main results: The results of our SPDNet$_{\psi}$ demonstrate that the augmented approach combined with SPDNet significantly outperforms all current state-of-the-art DL architectures in MI decoding. Significance: This new architecture is explainable, has a low number of trainable parameters, and has a reduced carbon footprint.
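A minimal sketch of the augmented-covariance input that feeds such an SPD network, assuming the phase-space idea of stacking time-delayed copies of each channel before taking the covariance (the order and lag values here are illustrative; in practice they are tuned per subject):

```python
# Sketch only: build an augmented covariance matrix from a few EEG channels.
import numpy as np

def augmented_covariance(X, order=4, lag=8):
    """X: (channels, samples) EEG segment -> (order*channels)^2 SPD matrix."""
    C, T = X.shape
    usable = T - (order - 1) * lag
    # Stack delayed copies: row block k holds X shifted by k*lag samples.
    aug = np.vstack([X[:, k * lag : k * lag + usable] for k in range(order)])
    return np.cov(aug)                    # (order*C, order*C), symmetric PSD

eeg = np.random.default_rng(0).normal(size=(3, 512))  # 3 motor-cortex channels
print(augmented_covariance(eeg).shape)                # (12, 12)
```

The appeal of this embedding is that even three electrodes yield a reasonably rich SPD matrix, which is the natural input type for SPDNet-style layers.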
|
[
"['Igor Carrara' 'Bruno Aristimunha' 'Marie-Constance Corsi'\n 'Raphael Y. de Camargo' 'Sylvain Chevallier' 'Théodore Papadopoulo']"
] |
null | null |
2403.05652
| null | null |
http://arxiv.org/pdf/2403.05652v1
|
2024-03-08T19:52:39Z
|
2024-03-08T19:52:39Z
|
What is different between these datasets?
|
The performance of machine learning models heavily depends on the quality of input data, yet real-world applications often encounter various data-related challenges. One such challenge can arise when curating training data or deploying the model in the real world: two comparable datasets in the same domain may have different distributions. While numerous techniques exist for detecting distribution shifts, the literature lacks comprehensive approaches for explaining dataset differences in a human-understandable manner. To address this gap, we propose a suite of interpretable methods (toolbox) for comparing two datasets. We demonstrate the versatility of our approach across diverse data modalities, including tabular data, language, images, and signals in both low and high-dimensional settings. Our methods not only outperform comparable and related approaches in terms of explanation quality and correctness, but also provide actionable, complementary insights to understand and mitigate dataset differences effectively.
|
[
"['Varun Babbar' 'Zhicheng Guo' 'Cynthia Rudin']"
] |
null | null |
2403.05666
| null | null |
http://arxiv.org/pdf/2403.05666v2
|
2024-06-05T04:59:16Z
|
2024-03-08T20:43:57Z
|
Prepared for the Worst: A Learning-Based Adversarial Attack for
Resilience Analysis of the ICP Algorithm
|
This paper presents a novel method to assess the resilience of the Iterative Closest Point (ICP) algorithm via deep-learning-based attacks on lidar point clouds. For safety-critical applications such as autonomous navigation, ensuring the resilience of algorithms prior to deployments is of utmost importance. The ICP algorithm has become the standard for lidar-based localization. However, the pose estimate it produces can be greatly affected by corruption in the measurements. Corruption can arise from a variety of scenarios such as occlusions, adverse weather, or mechanical issues in the sensor. Unfortunately, the complex and iterative nature of ICP makes assessing its resilience to corruption challenging. While there have been efforts to create challenging datasets and develop simulations to evaluate the resilience of ICP empirically, our method focuses on finding the maximum possible ICP pose error using perturbation-based adversarial attacks. The proposed attack induces significant pose errors on ICP and outperforms baselines more than 88% of the time across a wide range of scenarios. As an example application, we demonstrate that our attack can be used to identify areas on a map where ICP is particularly vulnerable to corruption in the measurements.
|
[
"['Ziyu Zhang' 'Johann Laconte' 'Daniil Lisus' 'Timothy D. Barfoot']"
] |
null | null |
2403.05669
| null | null |
http://arxiv.org/pdf/2403.05669v1
|
2024-03-08T20:49:49Z
|
2024-03-08T20:49:49Z
|
Spectral Clustering of Categorical and Mixed-type Data via Extra Graph
Nodes
|
Clustering data objects into homogeneous groups is one of the most important tasks in data mining. Spectral clustering is arguably one of the most important algorithms for clustering, as it is appealing for its theoretical soundness and is adaptable to many real-world data settings. For example, mixed data, where the data is composed of numerical and categorical features, is typically handled via numerical discretization, dummy coding, or similarity computation that takes into account both data types. This paper explores a more natural way to incorporate both numerical and categorical information into the spectral clustering algorithm, avoiding the need for data preprocessing or the use of sophisticated similarity functions. We propose adding extra nodes corresponding to the different categories the data may belong to and show that this leads to an interpretable clustering objective function. Furthermore, we demonstrate that this simple framework leads to a linear-time spectral clustering algorithm for categorical-only data. Finally, we compare the performance of our algorithms against other related methods and show that they provide a competitive alternative in terms of performance and runtime.
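A minimal sketch of the extra-node construction, under illustrative choices (an RBF kernel on the numerical part and unit-weight edges from each point to its category node; the paper's exact weighting may differ): each categorical level becomes an auxiliary graph node, and standard spectral clustering runs on the augmented graph.

```python
# Sketch only: spectral clustering of mixed data via extra category nodes.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
num = rng.normal(size=(20, 2))                      # numerical features
cat = rng.integers(0, 3, size=20)                   # one categorical feature
n, k_cats = 20, 3

A = np.zeros((n + k_cats, n + k_cats))
d2 = ((num[:, None, :] - num[None, :, :]) ** 2).sum(-1)
A[:n, :n] = np.exp(-d2)                             # RBF similarity on numerics
for i, c in enumerate(cat):                         # point <-> category node
    A[i, n + c] = A[n + c, i] = 1.0

D = np.diag(A.sum(1))
L = D - A                                           # graph Laplacian
_, vecs = eigh(L, subset_by_index=[0, 2])           # 3 smallest eigenvectors
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vecs[:n])
print(labels)
```

Note that no dummy coding or bespoke mixed-type similarity is needed; the categorical information enters purely through the graph topology.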
|
[
"['Dylan Soemitro' 'Jeova Farias Sales Rocha Neto']"
] |
null | null |
2403.05681
| null | null |
http://arxiv.org/pdf/2403.05681v1
|
2024-03-08T21:19:01Z
|
2024-03-08T21:19:01Z
|
DP-TabICL: In-Context Learning with Differentially Private Tabular Data
|
In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks by conditioning on demonstrations of question-answer pairs and it has been shown to have comparable performance to costly model retraining and fine-tuning. Recently, ICL has been extended to allow tabular data to be used as demonstration examples by serializing individual records into natural language formats. However, it has been shown that LLMs can leak information contained in prompts, and since tabular data often contain sensitive information, understanding how to protect the underlying tabular data used in ICL is a critical area of research. This work serves as an initial investigation into how to use differential privacy (DP) -- the long-established gold standard for data privacy and anonymization -- to protect tabular data used in ICL. Specifically, we investigate the application of DP mechanisms for private tabular ICL via data privatization prior to serialization and prompting. We formulate two private ICL frameworks with provable privacy guarantees in both the local (LDP-TabICL) and global (GDP-TabICL) DP scenarios via injecting noise into individual records or group statistics, respectively. We evaluate our DP-based frameworks on eight real-world tabular datasets and across multiple ICL and DP settings. Our evaluations show that DP-based ICL can protect the privacy of the underlying tabular data while achieving comparable performance to non-LLM baselines, especially under high privacy regimes.
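A minimal sketch of the local-DP variant's privatize-then-serialize flow, assuming k-ary randomized response per categorical attribute (a standard LDP mechanism; the paper's exact mechanism and epsilon accounting may differ):

```python
# Sketch only: privatize a tabular record with k-ary randomized response,
# then serialize it into a natural-language ICL demonstration.
import math, random

def k_rr(value, domain, epsilon):
    """k-ary randomized response: keep value w.p. e^eps / (e^eps + k - 1)."""
    k = len(domain)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return value
    return random.choice([v for v in domain if v != value])

record = {"age_band": "30-39", "smoker": "yes"}          # hypothetical record
domains = {"age_band": ["<30", "30-39", "40+"], "smoker": ["yes", "no"]}
private = {a: k_rr(v, domains[a], epsilon=1.0) for a, v in record.items()}

demo = ", ".join(f"{a} is {v}" for a, v in private.items())
print(demo)   # only the privatized serialization ever reaches the LLM prompt
```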
|
[
"['Alycia N. Carey' 'Karuna Bhaila' 'Kennedy Edemacu' 'Xintao Wu']"
] |
null | null |
2403.05683
| null | null |
http://arxiv.org/pdf/2403.05683v1
|
2024-03-08T21:31:00Z
|
2024-03-08T21:31:00Z
|
Efficient Public Health Intervention Planning Using Decomposition-Based
Decision-Focused Learning
|
The declining participation of beneficiaries over time is a key concern in public health programs. A popular strategy for improving retention is to have health workers `intervene' on beneficiaries at risk of dropping out. However, the availability and time of these health workers are limited resources. As a result, there has been a line of research on optimizing these limited intervention resources using Restless Multi-Armed Bandits (RMABs). The key technical barrier to using this framework in practice lies in the need to estimate the beneficiaries' RMAB parameters from historical data. Recent research has shown that Decision-Focused Learning (DFL), which focuses on maximizing the beneficiaries' adherence rather than predictive accuracy, improves the performance of intervention targeting using RMABs. Unfortunately, these gains come at a high computational cost because of the need to solve and evaluate the RMAB in each DFL training step. In this paper, we provide a principled way to exploit the structure of RMABs to speed up intervention planning by cleverly decoupling the planning for different beneficiaries. We use real-world data from an Indian NGO, ARMMAN, to show that our approach is up to two orders of magnitude faster than the state-of-the-art approach while also yielding superior model performance. This would enable the NGO to scale up deployments using DFL to potentially millions of mothers, ultimately advancing progress toward UNSDG 3.1.
|
[
"['Sanket Shah' 'Arun Suggala' 'Milind Tambe' 'Aparna Taneja']"
] |
null | null |
2403.05693
| null | null |
http://arxiv.org/pdf/2403.05693v3
|
2024-03-14T01:37:02Z
|
2024-03-08T22:04:25Z
|
Shielded Deep Reinforcement Learning for Complex Spacecraft Tasking
|
Autonomous spacecraft control via Shielded Deep Reinforcement Learning (SDRL) has become a rapidly growing research area. However, the construction of shields and the definition of tasking remain informal, resulting in policies with no guarantees on safety and ambiguous goals for the RL agent. In this paper, we first explore the use of formal languages, namely Linear Temporal Logic (LTL), to formalize spacecraft tasks and safety requirements. We then define a manner in which to automatically construct a reward function from a co-safe LTL specification for effective training in the SDRL framework. We also investigate methods for constructing a shield from a safe LTL specification for spacecraft applications and propose three designs that provide probabilistic guarantees. We show how these shields interact with different policies and the flexibility of the reward structure through several experiments.
|
[
"['Robert Reed' 'Hanspeter Schaub' 'Morteza Lahijanian']"
] |
null | null |
2403.05713
| null | null |
http://arxiv.org/pdf/2403.05713v3
|
2024-04-03T17:17:21Z
|
2024-03-08T22:59:41Z
|
tsGT: Stochastic Time Series Modeling With Transformer
|
Time series methods are of fundamental importance in virtually any field of science that deals with temporally structured data. Recently, there has been a surge of deterministic transformer models with time series-specific architectural biases. In this paper, we go in a different direction by introducing tsGT, a stochastic time series model built on a general-purpose transformer architecture. We focus on using a well-known and theoretically justified rolling window backtesting and evaluation protocol. We show that tsGT outperforms the state-of-the-art models on MAD and RMSE, and surpasses its stochastic peers on QL and CRPS, on four commonly used datasets. We complement these results with a detailed analysis of tsGT's ability to model the data distribution and predict marginal quantile values.
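A minimal sketch of the rolling-window backtesting protocol the paper adopts: only a trailing window of history is visible at each step, the model forecasts one step ahead, the forecast is scored, and the window rolls forward. A naive persistence forecaster stands in for tsGT here.

```python
# Sketch only: rolling-window backtest with a stand-in forecaster.
import numpy as np

series = np.sin(np.arange(200) / 10) + np.random.default_rng(0).normal(0, 0.1, 200)
window, errors = 50, []

for t in range(window, len(series)):
    history = series[t - window : t]     # only the trailing window is visible
    forecast = history[-1]               # stand-in model: naive persistence
    errors.append(abs(series[t] - forecast))

print("MAD over backtest:", np.mean(errors))
```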
|
[
"['Łukasz Kuciński' 'Witold Drzewakowski' 'Mateusz Olko' 'Piotr Kozakowski'\n 'Łukasz Maziarka' 'Marta Emilia Nowakowska' 'Łukasz Kaiser' 'Piotr Miłoś']"
] |
null | null |
2403.05715
| null | null |
http://arxiv.org/pdf/2403.05715v1
|
2024-03-08T23:02:20Z
|
2024-03-08T23:02:20Z
|
A Framework for Effective AI Recommendations in Cyber-Physical-Human
Systems
|
Many cyber-physical-human systems (CPHS) involve a human decision-maker who may receive recommendations from an artificial intelligence (AI) platform while holding the ultimate responsibility of making decisions. In such CPHS applications, the human decision-maker may depart from an optimal recommended decision and instead implement a different one for various reasons. In this letter, we develop a rigorous framework to overcome this challenge. In our framework, we consider that humans may deviate from AI recommendations as they perceive and interpret the system's state in a different way than the AI platform. We establish the structural properties of optimal recommendation strategies and develop an approximate human model (AHM) used by the AI. We provide theoretical bounds on the optimality gap that arises from an AHM and illustrate the efficacy of our results in a numerical example.
|
[
"['Aditya Dave' 'Heeseung Bang' 'Andreas A. Malikopoulos']"
] |
null | null |
2403.05720
| null | null |
http://arxiv.org/pdf/2403.05720v1
|
2024-03-08T23:17:55Z
|
2024-03-08T23:17:55Z
|
A Benchmark of Domain-Adapted Large Language Models for Generating Brief
Hospital Course Summaries
|
Brief hospital course (BHC) summaries are common clinical documents generated by summarizing clinical notes. While large language models (LLMs) exhibit remarkable capabilities in automating real-world tasks, their capabilities for healthcare applications such as BHC synthesis have not been shown. To enable the adaptation of LLMs for BHC synthesis, we introduce a novel benchmark consisting of a pre-processed dataset extracted from MIMIC-IV notes, encapsulating clinical note and brief hospital course (BHC) pairs. We assess the performance of two general-purpose LLMs and three healthcare-adapted LLMs to improve BHC synthesis from clinical notes. Using clinical notes as input for generating BHCs, we apply prompting-based (using in-context learning) and fine-tuning-based adaptation strategies to three open-source LLMs (Clinical-T5-Large, Llama2-13B, FLAN-UL2) and two proprietary LLMs (GPT-3.5, GPT-4). We quantitatively evaluate the performance of these LLMs across varying context-length inputs using conventional natural language similarity metrics. We further perform a qualitative study where five diverse clinicians blindly compare clinician-written BHCs and two LLM-generated BHCs for 30 samples across metrics of comprehensiveness, conciseness, factual correctness, and fluency. Overall, we present a new benchmark and pre-processed dataset for using LLMs in BHC synthesis from clinical notes. We observe high-quality summarization performance for both in-context proprietary and fine-tuned open-source LLMs using both quantitative metrics and a qualitative clinical reader study. We propose our work as a benchmark to motivate future works to adapt and assess the performance of LLMs in BHC synthesis.
|
[
"['Asad Aali' 'Dave Van Veen' 'Yamin Ishraq Arefeen' 'Jason Hom'\n 'Christian Bluethgen' 'Eduardo Pontes Reis' 'Sergios Gatidis'\n 'Namuun Clifford' 'Joseph Daws' 'Arash S. Tehrani' 'Jangwon Kim'\n 'Akshay S. Chaudhari']"
] |
null | null |
2403.05726
| null | null |
http://arxiv.org/pdf/2403.05726v1
|
2024-03-08T23:42:06Z
|
2024-03-08T23:42:06Z
|
Augmentations vs Algorithms: What Works in Self-Supervised Learning
|
We study the relative effects of data augmentations, pretraining algorithms, and model architectures in Self-Supervised Learning (SSL). While the recent literature in this space leaves the impression that the pretraining algorithm is of critical importance to performance, understanding its effect is complicated by the difficulty in making objective and direct comparisons between methods. We propose a new framework which unifies many seemingly disparate SSL methods into a single shared template. Using this framework, we identify aspects in which methods differ and observe that in addition to changing the pretraining algorithm, many works also use new data augmentations or more powerful model architectures. We compare several popular SSL methods using our framework and find that many algorithmic additions, such as prediction networks or new losses, have a minor impact on downstream task performance (often less than $1\%$), while enhanced augmentation techniques offer more significant performance improvements ($2-4\%$). Our findings challenge the premise that SSL is being driven primarily by algorithmic improvements, and suggest instead a bitter lesson for SSL: that augmentation diversity and data / model scale are more critical contributors to recent advances in self-supervised learning.
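A minimal sketch of the kind of shared template such a framework suggests: two augmented views, one encoder, a projector, and a pluggable loss. All components here are toy stand-ins, not any specific published method; swapping `augment` or `loss_fn` is what isolates "augmentations vs algorithms".

```python
# Sketch only: a unified SSL step with swappable augmentation and loss slots.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU())
projector = torch.nn.Linear(64, 16)

def augment(x):                          # toy augmentation: mask + noise
    return x * (torch.rand_like(x) > 0.1) + 0.05 * torch.randn_like(x)

def ssl_step(x, loss_fn):
    z1 = projector(encoder(augment(x)))  # view 1
    z2 = projector(encoder(augment(x)))  # view 2
    return loss_fn(z1, z2)

def invariance_loss(z1, z2):             # the "algorithm" slot
    return 1 - F.cosine_similarity(z1, z2, dim=-1).mean()

x = torch.randn(8, 32)
print(ssl_step(x, invariance_loss))      # swap loss_fn/augment to compare
```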
|
[
"['Warren Morningstar' 'Alex Bijamov' 'Chris Duvarney' 'Luke Friedman'\n 'Neha Kalibhat' 'Luyang Liu' 'Philip Mansfield' 'Renan Rojas-Gomez'\n 'Karan Singhal' 'Bradley Green' 'Sushant Prakash']"
] |
null | null |
2403.05732
| null | null |
http://arxiv.org/pdf/2403.05732v2
|
2024-06-02T19:40:48Z
|
2024-03-08T23:59:38Z
|
Conservative DDPG -- Pessimistic RL without Ensemble
|
DDPG is hindered by the overestimation bias problem, wherein its $Q$-estimates tend to overstate the actual $Q$-values. Traditional solutions to this bias involve ensemble-based methods, which require significant computational resources, or complex log-policy-based approaches, which are difficult to understand and implement. In contrast, we propose a straightforward solution using a $Q$-target and incorporating a behavioral cloning (BC) loss penalty. This solution, acting as an uncertainty measure, can be easily implemented with minimal code and without the need for an ensemble. Our empirical findings strongly support the superiority of Conservative DDPG over DDPG across various MuJoCo and Bullet tasks. We consistently observe better performance in all evaluated tasks and even competitive or superior performance compared to TD3 and TD7, all achieved with significantly reduced computational requirements.
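A minimal sketch of the idea, under simplifying assumptions (single networks instead of frozen target copies, toy shapes, an illustrative penalty coefficient): a standard DDPG-style critic update plus a behavioral-cloning penalty on the actor that acts as a cheap pessimism term.

```python
# Sketch only: DDPG-style update with a BC loss penalty, no ensemble needed.
import torch
import torch.nn.functional as F

actor = torch.nn.Linear(4, 1)            # state -> action (toy)
critic = torch.nn.Linear(5, 1)           # (state, action) -> Q
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), 1e-3)

s = torch.randn(16, 4); a = torch.randn(16, 1)   # batch from the replay buffer
r = torch.randn(16, 1); s2 = torch.randn(16, 4)
gamma, bc_coef = 0.99, 0.5

with torch.no_grad():                    # Q-target (frozen copies in practice)
    q_target = r + gamma * critic(torch.cat([s2, actor(s2)], 1))

critic_loss = F.mse_loss(critic(torch.cat([s, a], 1)), q_target)
pi = actor(s)
# BC penalty keeps the policy near observed actions, discouraging the actor
# from exploiting overestimated Q-values in unfamiliar action regions.
actor_loss = -critic(torch.cat([s, pi], 1)).mean() + bc_coef * F.mse_loss(pi, a)
opt.zero_grad(); (critic_loss + actor_loss).backward(); opt.step()
print(float(critic_loss), float(actor_loss))
```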
|
[
"['Nitsan Soffair' 'Shie Mannor']"
] |
null | null |
2403.05738
| null | null |
http://arxiv.org/pdf/2403.05738v1
|
2024-03-09T00:20:33Z
|
2024-03-09T00:20:33Z
|
Provable Policy Gradient Methods for Average-Reward Markov Potential
Games
|
We study Markov potential games under the infinite horizon average reward criterion. Most previous studies have been for discounted rewards. We prove that both algorithms based on independent policy gradient and independent natural policy gradient converge globally to a Nash equilibrium for the average reward criterion. To set the stage for gradient-based methods, we first establish that the average reward is a smooth function of policies and provide sensitivity bounds for the differential value functions, under certain conditions on ergodicity and the second largest eigenvalue of the underlying Markov decision process (MDP). We prove that three algorithms, policy gradient, proximal-Q, and natural policy gradient (NPG), converge to an $\epsilon$-Nash equilibrium with time complexity $O(\frac{1}{\epsilon^2})$, given a gradient/differential Q function oracle. When policy gradients have to be estimated, we propose an algorithm with $\tilde{O}(\frac{1}{\min_{s,a}\pi(a|s)\delta})$ sample complexity to achieve $\delta$ approximation error w.r.t.\ the $\ell_2$ norm. Equipped with the estimator, we derive the first sample complexity analysis for a policy gradient ascent algorithm, featuring a sample complexity of $\tilde{O}(1/\epsilon^5)$. Simulation studies are presented.
|
[
"['Min Cheng' 'Ruida Zhou' 'P. R. Kumar' 'Chao Tian']"
] |
null | null |
2403.05743
| null | null |
http://arxiv.org/pdf/2403.05743v4
|
2024-06-28T03:17:12Z
|
2024-03-09T00:41:30Z
|
Forecasting Electricity Market Signals via Generative AI
|
This paper presents a generative artificial intelligence approach to probabilistic forecasting of electricity market signals, such as real-time locational marginal prices and area control error signals. Inspired by the Wiener-Kallianpur innovation representation of nonparametric time series, we propose a weak innovation autoencoder architecture and a novel deep learning algorithm that extracts the canonical independent and identically distributed innovation sequence of the time series, from which samples of future time series are generated. The validity of the proposed approach is established by proving that, under ideal training conditions, the generated samples have the same conditional probability distribution as that of the ground truth. Three applications involving highly dynamic and volatile time series in real-time market operations are considered: (i) locational marginal price forecasting for self-scheduled resources such as battery storage participants, (ii) interregional price spread forecasting for virtual bidders in interchange markets, and (iii) area control error forecasting for frequency regulations. Numerical studies based on market data from multiple independent system operators demonstrate the superior performance of the proposed generative forecaster over leading classical and modern machine learning techniques under both probabilistic and point forecasting metrics.
|
[
"['Xinyi Wang' 'Qing Zhao' 'Lang Tong']"
] |
null | null |
2403.05750
| null | null |
http://arxiv.org/abs/2403.05750v3
|
2024-06-26T20:49:32Z
|
2024-03-09T01:13:54Z
|
Decoding the AI Pen: Techniques and Challenges in Detecting AI-Generated
Text
|
Large Language Models (LLMs) have revolutionized the field of Natural Language Generation (NLG) by demonstrating an impressive ability to generate human-like text. However, their widespread usage introduces challenges that necessitate thoughtful examination, ethical scrutiny, and responsible practices. In this study, we delve into these challenges and explore existing strategies for mitigating them, with a particular emphasis on identifying AI-generated text as the ultimate solution. Additionally, we assess the feasibility of detection from a theoretical perspective and propose novel research directions to address the current limitations in this domain.
|
[
"['Sara Abdali' 'Richard Anarfi' 'CJ Barberan' 'Jia He']"
] |
null | null |
2403.05751
| null | null |
http://arxiv.org/pdf/2403.05751v2
|
2024-03-16T01:16:19Z
|
2024-03-09T01:15:03Z
|
MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided
Learning Process
|
Recently, diffusion probabilistic models have attracted attention in generative time series forecasting due to their remarkable capacity to generate high-fidelity samples. However, the effective utilization of their strong modeling ability in the probabilistic time series forecasting task remains an open question, partially due to the challenge of instability arising from their stochastic nature. To address this challenge, we introduce a novel Multi-Granularity Time Series Diffusion (MG-TSD) model, which achieves state-of-the-art predictive performance by leveraging the inherent granularity levels within the data as given targets at intermediate diffusion steps to guide the learning process of diffusion models. The way to construct the targets is motivated by the observation that the forward process of the diffusion model, which sequentially corrupts the data distribution to a standard normal distribution, intuitively aligns with the process of smoothing fine-grained data into a coarse-grained representation, both of which result in a gradual loss of fine distribution features. In the study, we derive a novel multi-granularity guidance diffusion loss function and propose a concise implementation method to effectively utilize coarse-grained data across various granularity levels. More importantly, our approach does not rely on additional external data, making it versatile and applicable across various domains. Extensive experiments conducted on real-world datasets demonstrate that our MG-TSD model outperforms existing time series prediction methods.
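A minimal sketch of the guiding intuition, with illustrative pooling factors and step indices (not the paper's settings): smoothing a series to coarser granularities mirrors the forward diffusion process, so coarse versions can serve as targets at intermediate diffusion steps.

```python
# Sketch only: multi-granularity targets via average pooling of a series.
import numpy as np

def coarsen(x, factor):
    """Average-pool a 1-D series, then upsample back to the original length."""
    pooled = x[: len(x) // factor * factor].reshape(-1, factor).mean(1)
    return np.repeat(pooled, factor)

series = np.random.default_rng(0).normal(size=96)
# Hypothetically assign coarser granularities to deeper (noisier) steps.
targets = {step: coarsen(series, f) for step, f in [(10, 2), (30, 4), (60, 8)]}
for step, tgt in targets.items():
    print(step, tgt.shape, np.round(tgt.std(), 3))  # detail shrinks as f grows
```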
|
[
"['Xinyao Fan' 'Yueying Wu' 'Chang Xu' 'Yuhao Huang' 'Weiqing Liu'\n 'Jiang Bian']"
] |
null | null |
2403.05752
| null | null |
http://arxiv.org/pdf/2403.05752v2
|
2024-03-22T14:44:17Z
|
2024-03-09T01:17:26Z
|
Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and
Efficient Modeling
|
A Knowledge Graph (KG) is a heterogeneous graph encompassing a diverse range of node and edge types. Heterogeneous Graph Neural Networks (HGNNs) are popular for training machine learning tasks like node classification and link prediction on KGs. However, HGNN methods exhibit excessive complexity influenced by the KG's size, density, and the number of node and edge types. AI practitioners handcraft a subgraph of a KG G relevant to a specific task. We refer to this subgraph as a task-oriented subgraph (TOSG), which contains a subset of task-related node and edge types in G. Training the task using the TOSG instead of G alleviates the excessive computation required for a large KG. Crafting the TOSG demands a deep understanding of the KG's structure and the task's objectives. Hence, it is challenging and time-consuming. This paper proposes KG-TOSA, an approach to automate the TOSG extraction for task-oriented HGNN training on a large KG. In KG-TOSA, we define a generic graph pattern that captures the KG's local and global structure relevant to a specific task. We explore different techniques to extract subgraphs matching our graph pattern, namely (i) two techniques sampling around targeted nodes using biased random walk or influence scores, and (ii) a SPARQL-based extraction method leveraging RDF engines' built-in indices; the latter achieves negligible preprocessing overhead compared to the sampling techniques. We develop a benchmark of real KGs of large sizes and various tasks for node classification and link prediction. Our experiments show that KG-TOSA helps state-of-the-art HGNN methods reduce training time and memory usage by up to 70% while improving the model performance, e.g., accuracy and inference time.
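A minimal sketch of one of the extraction strategies: biased random walks around target nodes that prefer task-relevant edge types. The graph format, bias weights, and walk parameters are hypothetical; KG-TOSA also offers the SPARQL-based alternative described above.

```python
# Sketch only: collect a task-oriented subgraph via biased random walks.
import random

edges = {  # node -> list of (edge_type, neighbor); toy KG fragment
    "paper1": [("cites", "paper2"), ("hasAuthor", "alice")],
    "paper2": [("cites", "paper1"), ("hasVenue", "kdd")],
    "alice": [("worksAt", "lab1")],
    "kdd": [], "lab1": [],
}
bias = {"cites": 3.0, "hasAuthor": 2.0, "hasVenue": 1.0, "worksAt": 0.2}

def sample_tosg(targets, walks=20, length=3, seed=0):
    rng, sub = random.Random(seed), set()
    for start in targets:
        for _ in range(walks):
            node = start
            for _ in range(length):
                nbrs = edges.get(node, [])
                if not nbrs:
                    break
                weights = [bias[t] for t, _ in nbrs]  # prefer relevant types
                t, node = rng.choices(nbrs, weights=weights)[0]
                sub.add((t, node))
    return sub

print(sample_tosg(["paper1"]))   # edges reachable from the target nodes
```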
|
[
"['Hussein Abdallah' 'Waleed Afandi' 'Panos Kalnis' 'Essam Mansour']"
] |
null | null |
2403.05754
| null | null |
http://arxiv.org/pdf/2403.05754v1
|
2024-03-09T01:34:26Z
|
2024-03-09T01:34:26Z
|
Hybrid Quantum-inspired Resnet and Densenet for Pattern Recognition with
Completeness Analysis
|
As contemporary digital technology advances, deep neural networks are emerging as the foundational algorithms of the artificial intelligence boom. However, evolving social demands have emphasized the necessity of novel methodologies to substitute for traditional neural networks. Concurrently, the advent of the post-Moore era has spurred the development of quantum-inspired neural networks with outstanding potential in certain circumstances. Nonetheless, a definitive evaluation system with detailed metrics is vital and indispensable, owing to the currently vague indicators for comparison between novel and traditional deep learning models. Hence, to improve and evaluate the performance of novel neural networks more comprehensively in complex and unpredictable environments, we propose two hybrid quantum-inspired neural networks, rooted in residual and dense connections respectively, for pattern recognition, together with a completeness representation theory for model assessment. Comparative analyses against pure classical models with detailed frameworks reveal that our hybrid models, with lower parameter complexity, not only match the generalization power of pure classical models but also outperform them notably in resistance to parameter attacks with various asymmetric noises. Moreover, we argue theoretically that our hybrid models have a unique ability to prevent gradient explosion problems. Finally, we elaborate on the application scenarios where our hybrid models are applicable and efficient, which paves the way for their industrialization and commercialization.
|
[
"['Andi Chen' 'Hua-Lei Yin' 'Zeng-Bing Chen' 'Shengjun Wu']"
] |
null | null |
2403.05756
| null | null |
http://arxiv.org/pdf/2403.05756v1
|
2024-03-09T01:58:45Z
|
2024-03-09T01:58:45Z
|
Model-Free Local Recalibration of Neural Networks
|
Artificial neural networks (ANNs) are highly flexible predictive models. However, reliably quantifying uncertainty for their predictions is a continuing challenge. There has been much recent work on "recalibration" of predictive distributions for ANNs, so that forecast probabilities for events of interest are consistent with certain frequency evaluations of them. Uncalibrated probabilistic forecasts are of limited use for many important decision-making tasks. To address this issue, we propose a localized recalibration of ANN predictive distributions using the dimension-reduced representation of the input provided by the ANN hidden layers. Our novel method draws inspiration from recalibration techniques used in the literature on approximate Bayesian computation and likelihood-free inference methods. Most existing calibration methods for ANNs can be thought of as calibrating either on the input layer, which is difficult when the input is high-dimensional, or the output layer, which may not be sufficiently flexible. Through a simulation study, we demonstrate that our method has good performance compared to alternative approaches, and explore the benefits that can be achieved by localizing the calibration based on different layers of the network. Finally, we apply our proposed method to a diamond price prediction problem, demonstrating the potential of our approach to improve prediction and uncertainty quantification in real-world applications.
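A minimal sketch of localized recalibration in that spirit, under assumptions not taken from the paper: predictions come with a Gaussian predictive distribution, and the calibration map is built from the probability integral transform (PIT) values of the k nearest neighbors in a hidden-layer representation of the input.

```python
# Sketch only: locally recalibrate a predictive quantile using PIT values of
# neighbors in the ANN's hidden-layer (dimension-reduced) representation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
H = rng.normal(size=(200, 8))                # hidden reps of a calibration set
pit = norm.cdf(rng.normal(size=200) * 1.8)   # PITs: model understates spread

def recalibrated_quantile(h_new, mu, sigma, q, k=30):
    """Map nominal level q through the local empirical PIT distribution."""
    idx = np.argsort(((H - h_new) ** 2).sum(1))[:k]  # k nearest in hidden space
    q_adj = np.quantile(pit[idx], q)                 # local recalibration map
    return mu + sigma * norm.ppf(q_adj)

print(recalibrated_quantile(rng.normal(size=8), mu=0.0, sigma=1.0, q=0.9))
```

Because the neighborhood is taken in the hidden layer rather than the raw input, the approach sidesteps the curse of dimensionality that input-layer calibration faces.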
|
[
"['R. Torres' 'D. J. Nott' 'S. A. Sisson' 'T. Rodrigues' 'J. G. Reis'\n 'G. S. Rodrigues']"
] |
null | null |
2403.05759
| null | null |
http://arxiv.org/pdf/2403.05759v1
|
2024-03-09T02:10:08Z
|
2024-03-09T02:10:08Z
|
Membership Testing in Markov Equivalence Classes via Independence Query
Oracles
|
Understanding causal relationships between variables is a fundamental problem with broad impact in numerous scientific fields. While extensive research has been dedicated to learning causal graphs from data, its complementary concept of testing causal relationships has remained largely unexplored. While learning involves the task of recovering the Markov equivalence class (MEC) of the underlying causal graph from observational data, the testing counterpart addresses the following critical question: Given a specific MEC and observational data from some causal graph, can we determine if the data-generating causal graph belongs to the given MEC? We explore constraint-based testing methods by establishing bounds on the required number of conditional independence tests. Our bounds are in terms of the size of the maximum undirected clique ($s$) of the given MEC. In the worst case, we show a lower bound of $\exp(\Omega(s))$ independence tests. We then give an algorithm that resolves the task with $\exp(O(s))$ tests, matching our lower bound. Compared to the learning problem, where algorithms often use a number of independence tests that is exponential in the maximum in-degree, this shows that testing is relatively easier. In particular, it requires exponentially fewer independence tests in graphs featuring high in-degrees and small clique sizes. Additionally, using the DAG associahedron, we provide a geometric interpretation of testing versus learning and discuss how our testing result can aid learning.
|
[
"['Jiaqi Zhang' 'Kirankumar Shiragur' 'Caroline Uhler']"
] |
null | null |
2403.05763
| null | null |
http://arxiv.org/pdf/2403.05763v1
|
2024-03-09T02:17:43Z
|
2024-03-09T02:17:43Z
|
HDReason: Algorithm-Hardware Codesign for Hyperdimensional Knowledge
Graph Reasoning
|
In recent times, a plethora of hardware accelerators have been put forth for graph learning applications such as vertex classification and graph classification. However, previous works have paid little attention to Knowledge Graph Completion (KGC), a task that is well-known for its significantly higher algorithm complexity. The state-of-the-art KGC solutions based on graph convolutional neural networks (GCNs) involve extensive vertex/relation embedding updates and complicated score functions, which are inherently cumbersome for acceleration. As a result, existing accelerator designs are no longer optimal, and a novel algorithm-hardware co-design for KG reasoning is needed. Recently, brain-inspired HyperDimensional Computing (HDC) has been introduced as a promising solution for lightweight machine learning, particularly for graph learning applications. In this paper, we leverage HDC for an intrinsically more efficient and acceleration-friendly KGC algorithm. We also co-design an acceleration framework named HDReason targeting FPGA platforms. On the algorithm level, HDReason achieves a balance between high reasoning accuracy, strong model interpretability, and low computation complexity. In terms of architecture, HDReason offers reconfigurability, high training throughput, and low energy consumption. When compared with an NVIDIA RTX 4090 GPU, the proposed accelerator achieves an average 10.6x speedup and 65x energy efficiency improvement. In cross-model and cross-platform comparisons, HDReason yields an average 4.2x higher performance and 3.4x better energy efficiency with similar accuracy versus the state-of-the-art FPGA-based GCN training platform.
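A minimal sketch of the HDC flavor of KG reasoning, purely illustrative (the encoding, binding scheme, and scoring here are generic HDC operations, not HDReason's specific algorithm): entities and relations get random bipolar hypervectors, triples are encoded by binding (elementwise product), known facts are bundled into a memory, and plausibility is scored by similarity.

```python
# Sketch only: hyperdimensional encoding and scoring of KG triples.
import numpy as np

D, rng = 10_000, np.random.default_rng(0)
hv = lambda: rng.choice([-1, 1], size=D)                 # random bipolar vector
ent = {n: hv() for n in ["paris", "france", "tokyo", "japan"]}
rel = {"capital_of": hv()}

# Bundle known facts into a single memory hypervector.
facts = [("paris", "capital_of", "france"), ("tokyo", "capital_of", "japan")]
memory = np.sign(sum(ent[h] * rel[r] * ent[t] for h, r, t in facts))

def score(h, r, t):                      # cosine-like similarity to memory
    return float(ent[h] * rel[r] * ent[t] @ memory) / D

print(score("paris", "capital_of", "france"))   # high: stored fact
print(score("paris", "capital_of", "japan"))    # near 0: unsupported triple
```

These elementwise, embarrassingly parallel operations are what make the approach friendly to FPGA acceleration compared with GCN-style embedding updates.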
|
[
"['Hanning Chen' 'Yang Ni' 'Ali Zakeri' 'Zhuowen Zou' 'Sanggeon Yun'\n 'Fei Wen' 'Behnam Khaleghi' 'Narayan Srinivasa' 'Hugo Latapie'\n 'Mohsen Imani']"
] |