categories
string
doi
string
id
string
year
float64
venue
string
link
string
updated
string
published
string
title
string
abstract
string
authors
list
null
null
2405.10231
null
null
http://arxiv.org/pdf/2405.10231v1
2024-05-16T16:29:49Z
2024-05-16T16:29:49Z
Influencer Cartels
Social media influencers account for a growing share of marketing worldwide. We demonstrate the existence of a novel form of market failure in this advertising market: influencer cartels, where groups of influencers collude to increase their advertising revenue by inflating their engagement. Our theoretical model shows that influencer cartels can improve consumer welfare if they expand social media engagement to the target audience, or reduce welfare if they divert engagement to less relevant audiences. We validate the model empirically using novel data on influencer cartels combined with machine learning tools, and derive policy implications for how to maximize consumer welfare.
[ "['Marit Hinnosaar' 'Toomas Hinnosaar']" ]
null
null
2405.10243
null
null
http://arxiv.org/pdf/2405.10243v1
2024-05-16T16:46:46Z
2024-05-16T16:46:46Z
DocuMint: Docstring Generation for Python using Small Language Models
Effective communication, specifically through documentation, is the beating heart of collaboration among contributors in software development. Recent advancements in language models (LMs) have enabled the introduction of a new type of actor in that ecosystem: LM-powered assistants capable of code generation, optimization, and maintenance. Our study investigates the efficacy of small language models (SLMs) for generating high-quality docstrings by assessing accuracy, conciseness, and clarity, benchmarking performance quantitatively through mathematical formulas and qualitatively through human evaluation using a Likert scale. Further, we introduce DocuMint, a large-scale supervised fine-tuning dataset with 100,000 samples. In quantitative experiments, Llama 3 8B achieved the best performance across all metrics, with conciseness and clarity scores of 0.605 and 64.88, respectively. However, under human evaluation, CodeGemma 7B achieved the highest overall score with an average of 8.3 out of 10 across all metrics. Fine-tuning the CodeGemma 2B model using the DocuMint dataset led to significant improvements in performance across all metrics, with gains of up to 22.5% in conciseness. The fine-tuned model and the dataset can be found on HuggingFace, and the code can be found in the repository.
[ "['Bibek Poudel' 'Adam Cook' 'Sekou Traore' 'Shelah Ameli']" ]
null
null
2405.10254
null
null
http://arxiv.org/pdf/2405.10254v2
2024-05-22T17:22:32Z
2024-05-16T16:59:12Z
PRISM: A Multi-Modal Generative Foundation Model for Slide-Level Histopathology
Foundation models in computational pathology promise to unlock the development of new clinical decision support systems and models for precision medicine. However, there is a mismatch between most clinical analysis, which is defined at the level of one or more whole slide images, and foundation models to date, which process the thousands of image tiles contained in a whole slide image separately. The requirement to train a network to aggregate information across a large number of tiles in multiple whole slide images limits these models' impact. In this work, we present a slide-level foundation model for H&E-stained histopathology, PRISM, that builds on Virchow tile embeddings and leverages clinical report text for pre-training. Using the tile embeddings, PRISM produces slide-level embeddings with the ability to generate clinical reports, resulting in several modes of use. Using text prompts, PRISM achieves zero-shot cancer detection and sub-typing performance approaching and surpassing that of a supervised aggregator model. Using the slide embeddings with linear classifiers, PRISM surpasses supervised aggregator models. Furthermore, we demonstrate that fine-tuning of the PRISM slide encoder yields label-efficient training for biomarker prediction, a task that typically suffers from low availability of training data; an aggregator initialized with PRISM and trained on as little as 10% of the training data can outperform a supervised baseline that uses all of the data.
[ "['George Shaikovski' 'Adam Casson' 'Kristen Severson' 'Eric Zimmermann'\n 'Yi Kan Wang' 'Jeremy D. Kunz' 'Juan A. Retamero' 'Gerard Oakley'\n 'David Klimstra' 'Christopher Kanan' 'Matthew Hanna' 'Michal Zelechowski'\n 'Julian Viret' 'Neil Tenenholtz' 'James Hall' 'Nicolo Fusi'\n 'Razik Yousfi' 'Peter Hamilton' 'William A. Moye' 'Eugene Vorontsov'\n 'Siqi Liu' 'Thomas J. Fuchs']" ]
null
null
2405.10262
null
null
http://arxiv.org/pdf/2405.10262v1
2024-05-16T17:13:25Z
2024-05-16T17:13:25Z
Two-Phase Dynamics of Interactions Explains the Starting Point of a DNN Learning Over-Fitted Features
This paper investigates the dynamics of a deep neural network (DNN) learning interactions. Previous studies have discovered and mathematically proven that, given each input sample, a well-trained DNN usually encodes only a small number of interactions (non-linear relationships) between input variables in the sample. A series of theorems has been derived to prove that the DNN's inference can be considered equivalent to using these interactions as primitive patterns for inference. In this paper, we discover that the DNN learns interactions in two phases. The first phase mainly penalizes interactions of medium and high orders, and the second phase mainly learns interactions of gradually increasing orders. We can consider the two-phase phenomenon as the starting point of a DNN learning over-fitted features. Such a phenomenon has been widely observed in DNNs with various architectures trained for different tasks. Therefore, the discovery of the two-phase dynamics provides a detailed mechanism for how a DNN gradually learns different inference patterns (interactions). In particular, we have also verified the claim that high-order interactions have weaker generalization power than low-order interactions. Thus, the discovered two-phase dynamics also explains how the generalization power of a DNN changes during the training process.
[ "['Junpeng Zhang' 'Qing Li' 'Liang Lin' 'Quanshi Zhang']" ]
null
null
2405.10263
null
null
http://arxiv.org/pdf/2405.10263v1
2024-05-16T17:13:55Z
2024-05-16T17:13:55Z
On Partially Unitary Learning
The problem of an optimal mapping between Hilbert spaces $IN$ of $\left|\psi\right\rangle$ and $OUT$ of $\left|\phi\right\rangle$, based on a set of wavefunction measurements (within a phase) $\psi_l \to \phi_l$, $l=1\dots M$, is formulated as an optimization problem maximizing the total fidelity $\sum_{l=1}^{M} \omega^{(l)} \left|\langle\phi_l|\mathcal{U}|\psi_l\rangle\right|^2$ subject to probability-preservation constraints on $\mathcal{U}$ (partial unitarity). The constructed operator $\mathcal{U}$ can be considered as an $IN$ to $OUT$ quantum channel; it is a partially unitary rectangular matrix of dimension $\dim(OUT) \times \dim(IN)$ transforming operators as $A^{OUT}=\mathcal{U} A^{IN} \mathcal{U}^{\dagger}$. An iterative algorithm finding the global maximum of this optimization problem is developed, and its application to a number of problems is demonstrated. A software product implementing the algorithm is available from the authors.
[ "['Mikhail Gennadievich Belov' 'Vladislav Gennadievich Malyshkin']" ]
null
null
2405.10264
null
null
http://arxiv.org/pdf/2405.10264v1
2024-05-16T17:15:39Z
2024-05-16T17:15:39Z
Architectures and random properties of symplectic quantum circuits
Parametrized and random unitary (or orthogonal) $n$-qubit circuits play a central role in quantum information. As such, one could naturally assume that circuits implementing symplectic transformations would attract similar attention. However, this is not the case, as $\mathbb{SP}(d/2)$ -- the group of $d\times d$ unitary symplectic matrices -- has thus far been overlooked. In this work, we aim at starting to right this wrong. We begin by presenting a universal set of generators $\mathcal{G}$ for the symplectic algebra $i\mathfrak{sp}(d/2)$, consisting of one- and two-qubit Pauli operators acting on neighboring sites in a one-dimensional lattice. Here, we uncover two critical differences between such a set and equivalent ones for unitary and orthogonal circuits. Namely, we find that the operators in $\mathcal{G}$ cannot generate arbitrary local symplectic unitaries and that they are not translationally invariant. We then review the Schur-Weyl duality between the symplectic group and the Brauer algebra, and use tools from Weingarten calculus to prove that Pauli measurements at the output of Haar random symplectic circuits can converge to Gaussian processes. As a by-product, such analysis provides us with concentration bounds for Pauli measurements in circuits that form $t$-designs over $\mathbb{SP}(d/2)$. To finish, we present tensor-network tools to analyze shallow random symplectic circuits, and we use these to numerically show that computational-basis measurements anti-concentrate at logarithmic depth.
[ "['Diego García-Martín' 'Paolo Braccia' 'M. Cerezo']" ]
null
null
2405.10267
null
null
http://arxiv.org/pdf/2405.10267v2
2024-05-17T13:01:25Z
2024-05-16T17:19:58Z
Sharpness-Aware Minimization in Genetic Programming
Sharpness-Aware Minimization (SAM) was recently introduced as a regularization procedure for training deep neural networks. It simultaneously minimizes the fitness (or loss) function and the so-called fitness sharpness. The latter serves as a measure of the nonlinear behavior of a solution, found by seeking solutions that lie in neighborhoods with uniformly similar loss values across all fitness cases. In this contribution, we adapt SAM for tree Genetic Programming (TGP) by exploring the semantic neighborhoods of solutions using two simple approaches. By perturbing the inputs and outputs of program trees, sharpness can be estimated and used as a second optimization criterion during evolution. To better understand the impact of this variant of SAM on TGP, we collect numerous indicators of the evolutionary process, including generalization ability, complexity, diversity, and a recently proposed genotype-phenotype mapping to study the amount of redundancy in trees. The experimental results demonstrate that using either of the two proposed SAM adaptations in TGP allows (i) a significant reduction of tree sizes in the population and (ii) a decrease in redundancy of the trees. When assessed on real-world benchmarks, the generalization ability of the elite solutions does not deteriorate.
[ "['Illya Bakurov' 'Nathan Haut' 'Wolfgang Banzhaf']" ]
null
null
2405.10271
null
null
http://arxiv.org/pdf/2405.10271v1
2024-05-16T17:27:41Z
2024-05-16T17:27:41Z
Automated Federated Learning via Informed Pruning
Federated learning (FL) represents a pivotal shift in machine learning (ML) as it enables collaborative training of local ML models coordinated by a central aggregator, all without the need to exchange local data. However, its application on edge devices is hindered by limited computational capabilities and data communication challenges, compounded by the inherent complexity of Deep Learning (DL) models. Model pruning is identified as a key technique for compressing DL models on devices with limited resources. Nonetheless, conventional pruning techniques typically rely on manually crafted heuristics and demand human expertise to achieve a balance between model size, speed, and accuracy, often resulting in sub-optimal solutions. In this study, we introduce an automated federated learning approach utilizing informed pruning, called AutoFLIP, which dynamically prunes and compresses DL models within both the local clients and the global server. It leverages a federated loss exploration phase to investigate model gradient behavior across diverse datasets and losses, providing insights into parameter significance. Our experiments showcase notable enhancements in scenarios with strong non-IID data, underscoring AutoFLIP's capacity to tackle computational constraints and achieve superior global convergence.
[ "['Christian Internò' 'Elena Raponi' 'Niki van Stein' 'Thomas Bäck'\n 'Markus Olhofer' 'Yaochu Jin' 'Barbara Hammer']" ]
null
null
2405.10284
null
null
http://arxiv.org/abs/2405.10284v1
2024-05-16T17:45:54Z
2024-05-16T17:45:54Z
Quantum Vision Transformers for Quark-Gluon Classification
We introduce a hybrid quantum-classical vision transformer architecture, notable for its integration of variational quantum circuits within both the attention mechanism and the multi-layer perceptrons. The research addresses the critical challenge of computational efficiency and resource constraints in analyzing data from the upcoming High Luminosity Large Hadron Collider, presenting the architecture as a potential solution. In particular, we evaluate our method by applying the model to multi-detector jet images from CMS Open Data. The goal is to distinguish quark-initiated from gluon-initiated jets. We successfully train the quantum model and evaluate it via numerical simulations. Using this approach, we achieve classification performance almost on par with the one obtained with the completely classical architecture, considering a similar number of parameters.
[ "['Marçal Comajoan Cara' 'Gopal Ramesh Dahale' 'Zhongtian Dong'\n 'Roy T. Forestano' 'Sergei Gleyzer' 'Daniel Justice' 'Kyoungchul Kong'\n 'Tom Magorsch' 'Konstantin T. Matchev' 'Katia Matcheva' 'Eyup B. Unlu']" ]
null
null
2405.10292
null
null
http://arxiv.org/pdf/2405.10292v2
2024-05-17T03:45:09Z
2024-05-16T17:50:19Z
Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning
Large vision-language models (VLMs) fine-tuned on specialized visual instruction-following data have exhibited impressive language reasoning capabilities across various scenarios. However, this fine-tuning paradigm may not be able to efficiently learn optimal decision-making agents in multi-step goal-directed tasks from interactive environments. To address this challenge, we propose an algorithmic framework that fine-tunes VLMs with reinforcement learning (RL). Specifically, our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning, enabling the VLM to efficiently explore intermediate reasoning steps that lead to the final text-based action. Next, the open-ended text output is parsed into an executable action to interact with the environment to obtain goal-directed task rewards. Finally, our framework uses these task rewards to fine-tune the entire VLM with RL. Empirically, we demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks, enabling 7b models to outperform commercial models such as GPT4-V or Gemini. Furthermore, we find that CoT reasoning is a crucial component for performance improvement, as removing the CoT reasoning results in a significant decrease in the overall performance of our method.
[ "['Yuexiang Zhai' 'Hao Bai' 'Zipeng Lin' 'Jiayi Pan' 'Shengbang Tong'\n 'Yifei Zhou' 'Alane Suhr' 'Saining Xie' 'Yann LeCun' 'Yi Ma'\n 'Sergey Levine']" ]
null
null
2405.10299
null
null
http://arxiv.org/pdf/2405.10299v2
2024-06-21T11:21:01Z
2024-05-16T17:53:32Z
HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models
The increasing size of language models necessitates a thorough analysis across multiple dimensions to assess trade-offs among crucial hardware metrics such as latency, energy consumption, GPU memory usage, and performance. Identifying optimal model configurations under specific hardware constraints is becoming essential but remains challenging due to the computational load of exhaustive training and evaluation on multiple devices. To address this, we introduce HW-GPT-Bench, a hardware-aware benchmark that utilizes surrogate predictions to approximate various hardware metrics across 13 devices for architectures in the GPT-2 family containing up to 774M parameters. Our surrogates, via calibrated predictions and reliable uncertainty estimates, faithfully model the heteroscedastic noise inherent in the energy and latency measurements. To estimate perplexity, we employ weight-sharing techniques from Neural Architecture Search (NAS), inheriting pretrained weights from the largest GPT-2 model. Finally, we demonstrate the utility of HW-GPT-Bench by simulating optimization trajectories of various multi-objective optimization algorithms in just a few seconds.
[ "['Rhea Sanjay Sukthanker' 'Arber Zela' 'Benedikt Staffler' 'Aaron Klein'\n 'Lennart Purucker' 'Joerg K. H. Franke' 'Frank Hutter']" ]
null
null
2405.10301
null
null
http://arxiv.org/pdf/2405.10301v2
2024-05-21T21:49:47Z
2024-05-16T17:55:24Z
Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees
Before deploying outputs from foundation models in high-stakes tasks, it is imperative to ensure that they align with human values. For instance, in radiology report generation, reports generated by a vision-language model must align with human evaluations before their use in medical decision-making. This paper presents Conformal Alignment, a general framework for identifying units whose outputs meet a user-specified alignment criterion. It is guaranteed that on average, a prescribed fraction of selected units indeed meet the alignment criterion, regardless of the foundation model or the data distribution. Given any pre-trained model and new units with model-generated outputs, Conformal Alignment leverages a set of reference data with ground-truth alignment status to train an alignment predictor. It then selects new units whose predicted alignment scores surpass a data-dependent threshold, certifying their corresponding outputs as trustworthy. Through applications to question answering and radiology report generation, we demonstrate that our method is able to accurately identify units with trustworthy outputs via lightweight training over a moderate amount of reference data. En route, we investigate the informativeness of various features in alignment prediction and combine them with standard models to construct the alignment predictor.
[ "['Yu Gui' 'Ying Jin' 'Zhimei Ren']" ]
null
null
2405.10302
null
null
http://arxiv.org/pdf/2405.10302v1
2024-05-16T17:55:42Z
2024-05-16T17:55:42Z
Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift
As machine learning models are increasingly deployed in dynamic environments, it becomes paramount to assess and quantify uncertainties associated with distribution shifts. A distribution shift occurs when the underlying data-generating process changes, leading to a deviation in the model's performance. The prediction interval, which captures the range of likely outcomes for a given prediction, serves as a crucial tool for characterizing uncertainties induced by their underlying distribution. In this paper, we propose methodologies for aggregating prediction intervals to obtain one with minimal width and adequate coverage on the target domain under unsupervised domain shift, under which we have labeled samples from a related source domain and unlabeled covariates from the target domain. Our analysis encompasses scenarios where the source and the target domain are related via i) a bounded density ratio, and ii) a measure-preserving transformation. Our proposed methodologies are computationally efficient and easy to implement. Beyond illustrating the performance of our method through a real-world dataset, we also delve into the theoretical details. This includes establishing rigorous theoretical guarantees, coupled with finite sample bounds, regarding the coverage and width of our prediction intervals. Our approach excels in practical applications and is underpinned by a solid theoretical framework, ensuring its reliability and effectiveness across diverse contexts.
[ "['Jiawei Ge' 'Debarghya Mukherjee' 'Jianqing Fan']" ]
null
null
2405.10310
null
null
http://arxiv.org/pdf/2405.10310v1
2024-05-16T17:58:44Z
2024-05-16T17:58:44Z
Stochastic Q-learning for Large Discrete Action Spaces
In complex environments with large discrete action spaces, effective decision-making is critical in reinforcement learning (RL). Despite the widespread use of value-based RL approaches like Q-learning, they come with a computational burden, necessitating the maximization of a value function over all actions in each iteration. This burden becomes particularly challenging when addressing large-scale problems and using deep neural networks as function approximators. In this paper, we present stochastic value-based RL approaches which, in each iteration, as opposed to optimizing over the entire set of $n$ actions, only consider a variable stochastic set of a sublinear number of actions, possibly as small as $\mathcal{O}(\log(n))$. The presented stochastic value-based RL methods include, among others, Stochastic Q-learning, StochDQN, and StochDDQN, all of which integrate this stochastic approach for both value-function updates and action selection. The theoretical convergence of Stochastic Q-learning is established, while an analysis of stochastic maximization is provided. Moreover, through empirical validation, we illustrate that the various proposed approaches outperform the baseline methods across diverse environments, including different control problems, achieving near-optimal average returns in significantly reduced time.
[ "['Fares Fourati' 'Vaneet Aggarwal' 'Mohamed-Slim Alouini']" ]
null
null
2405.10313
null
null
http://arxiv.org/pdf/2405.10313v1
2024-05-16T17:59:02Z
2024-05-16T17:59:02Z
How Far Are We From AGI
The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors. Yet, the escalating demands on AI have highlighted the limitations of AI's current offerings, catalyzing a movement towards Artificial General Intelligence (AGI). AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, reflects a paramount milestone in AI evolution. While existing works have summarized specific recent advancements of AI, they lack a comprehensive discussion of AGI's definitions, goals, and developmental trajectories. Different from existing survey papers, this paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives. We start by articulating the requisite capability frameworks for AGI, integrating the internal, interface, and system dimensions. As the realization of AGI requires more advanced capabilities and adherence to stringent constraints, we further discuss necessary AGI alignment technologies to harmonize these factors. Notably, we emphasize the importance of approaching AGI responsibly by first defining the key levels of AGI progression, followed by the evaluation framework that situates the status-quo, and finally giving our roadmap of how to reach the pinnacle of AGI. Moreover, to give tangible insights into the ubiquitous impact of the integration of AI, we outline existing challenges and potential pathways toward AGI in multiple domains. In sum, serving as a pioneering exploration into the current state and future trajectory of AGI, this paper aims to foster a collective comprehension and catalyze broader public discussions among researchers and practitioners on AGI.
[ "['Tao Feng' 'Chuanyang Jin' 'Jingyu Liu' 'Kunlun Zhu' 'Haoqin Tu'\n 'Zirui Cheng' 'Guanyu Lin' 'Jiaxuan You']" ]
null
null
2405.10315
null
null
http://arxiv.org/pdf/2405.10315v1
2024-05-16T17:59:07Z
2024-05-16T17:59:07Z
TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots. The key challenge of this approach is to address simulation-to-reality (sim-to-real) gaps. Previous methods often require domain-specific knowledge a priori. We argue that a straightforward way to obtain such knowledge is by asking humans to observe and assist robot policy execution in the real world. The robots can then learn from humans to close various sim-to-real gaps. We propose TRANSIC, a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework. TRANSIC allows humans to augment simulation policies to overcome various unmodeled sim-to-real gaps holistically through intervention and online correction. Residual policies can be learned from human corrections and integrated with simulation policies for autonomous execution. We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly. Through synergistic integration of policies learned in simulation and from humans, TRANSIC is effective as a holistic approach to addressing various, often coexisting sim-to-real gaps. It displays attractive properties such as scaling with human effort. Videos and code are available at https://transic-robot.github.io/
[ "['Yunfan Jiang' 'Chen Wang' 'Ruohan Zhang' 'Jiajun Wu' 'Li Fei-Fei']" ]
null
null
2405.10343
null
null
http://arxiv.org/pdf/2405.10343v1
2024-05-15T09:20:02Z
2024-05-15T09:20:02Z
UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning
Recently, a noticeable trend has emerged in developing pre-trained foundation models in the domains of CV and NLP. However, molecular pre-training lacks a universal model capable of effectively applying to the various categories of molecular tasks, since existing prevalent pre-training methods exhibit effectiveness only for specific types of downstream tasks. Furthermore, the lack of profound understanding of existing pre-training methods, including 2D graph masking, 2D-3D contrastive learning, and 3D denoising, hampers the advancement of molecular foundation models. In this work, we provide a unified comprehension of existing pre-training methods through the lens of contrastive learning: their distinctions lie in clustering different views of molecules, which is shown to be beneficial to specific downstream tasks. To achieve a complete and general-purpose molecular representation, we propose a novel pre-training framework, named UniCorn, that inherits the merits of the three methods, depicting molecular views at three different levels. SOTA performance across quantum, physicochemical, and biological tasks, along with a comprehensive ablation study, validates the universality and effectiveness of UniCorn.
[ "['Shikun Feng' 'Yuyan Ni' 'Minghao Li' 'Yanwen Huang' 'Zhi-Ming Ma'\n 'Wei-Ying Ma' 'Yanyan Lan']" ]
null
null
2405.10345
null
null
http://arxiv.org/pdf/2405.10345v1
2024-05-16T01:30:47Z
2024-05-16T01:30:47Z
Machine Learning Driven Biomarker Selection for Medical Diagnosis
Recent advances in experimental methods have enabled researchers to collect data on thousands of analytes simultaneously. This has led to correlational studies that associated molecular measurements with diseases such as Alzheimer's disease, liver cancer, and gastric cancer. However, the use of thousands of biomarkers selected from the analytes is not practical for real-world medical diagnosis and is likely undesirable due to potentially formed spurious correlations. In this study, we evaluate 4 different methods for biomarker selection and 4 different machine learning (ML) classifiers for identifying correlations, evaluating 16 approaches in all. We found that contemporary methods outperform previously reported logistic regression in cases where 3 and 10 biomarkers are permitted. When specificity is fixed at 0.9, ML approaches produced a sensitivity of 0.240 (3 biomarkers) and 0.520 (10 biomarkers), while standard logistic regression provided a sensitivity of 0.000 (3 biomarkers) and 0.040 (10 biomarkers). We also noted that causal-based methods for biomarker selection proved to be the most performant when fewer biomarkers were permitted, while univariate feature selection was the most performant when a greater number of biomarkers were permitted.
[ "['Divyagna Bavikadi' 'Ayushi Agarwal' 'Shashank Ganta' 'Yunro Chung'\n 'Lusheng Song' 'Ji Qiu' 'Paulo Shakarian']" ]
null
null
2405.10346
null
null
http://arxiv.org/pdf/2405.10346v1
2024-05-16T01:39:50Z
2024-05-16T01:39:50Z
AMCEN: An Attention Masking-based Contrastive Event Network for Two-stage Temporal Knowledge Graph Reasoning
Temporal knowledge graphs (TKGs) can effectively model the ever-evolving nature of real-world knowledge, and their completeness and enhancement can be achieved by reasoning new events from existing ones. However, reasoning accuracy is adversely impacted due to an imbalance between new and recurring events in the datasets. To achieve more accurate TKG reasoning, we propose an attention masking-based contrastive event network (AMCEN) with local-global temporal patterns for the two-stage prediction of future events. In the network, historical and non-historical attention mask vectors are designed to control the attention bias towards historical and non-historical entities, acting as the key to alleviating the imbalance. A local-global message-passing module is proposed to comprehensively consider and capture multi-hop structural dependencies and local-global temporal evolution for the in-depth exploration of latent impact factors of different event types. A contrastive event classifier is used to classify events more accurately by incorporating local-global temporal patterns into contrastive learning. Therefore, AMCEN refines the prediction scope with the results of the contrastive event classification, followed by utilizing attention masking-based decoders to finalize the specific outcomes. The results of our experiments on four benchmark datasets highlight the superiority of AMCEN. Especially, the considerable improvements in Hits@1 prove that AMCEN can make more precise predictions about future occurrences.
[ "['Jing Yang' 'Xiao Wang' 'Yutong Wang' 'Jiawei Wang' 'Fei-Yue Wang']" ]
null
null
2405.10348
null
null
http://arxiv.org/pdf/2405.10348v1
2024-05-16T03:53:21Z
2024-05-16T03:53:21Z
Learning to Predict Mutation Effects of Protein-Protein Interactions by Microenvironment-aware Hierarchical Prompt Learning
Protein-protein bindings play a key role in a variety of fundamental biological processes, and thus predicting the effects of amino acid mutations on protein-protein binding is crucial. To tackle the scarcity of annotated mutation data, pre-training with massive unlabeled data has emerged as a promising solution. However, this process faces a series of challenges: (1) complex higher-order dependencies among multiple (more than paired) structural scales have not yet been fully captured; (2) it is rarely explored how mutations alter the local conformation of the surrounding microenvironment; (3) pre-training is costly, both in data size and computational burden. In this paper, we first construct a hierarchical prompt codebook to record common microenvironmental patterns at different structural scales independently. Then, we develop a novel codebook pre-training task, namely masked microenvironment modeling, to model the joint distribution of each mutation with their residue types, angular statistics, and local conformational changes in the microenvironment. With the constructed prompt codebook, we encode the microenvironment around each mutation into multiple hierarchical prompts and combine them to flexibly provide information to wild-type and mutated protein complexes about their microenvironmental differences. Such a hierarchical prompt learning framework has demonstrated superior performance and training efficiency over state-of-the-art pre-training-based methods in mutation effect prediction and a case study of optimizing human antibodies against SARS-CoV-2.
[ "['Lirong Wu' 'Yijun Tian' 'Haitao Lin' 'Yufei Huang' 'Siyuan Li'\n 'Nitesh V Chawla' 'Stan Z. Li']" ]
null
null
2405.10350
null
null
http://arxiv.org/pdf/2405.10350v1
2024-05-16T13:19:51Z
2024-05-16T13:19:51Z
Monitizer: Automating Design and Evaluation of Neural Network Monitors
The behavior of neural networks (NNs) on previously unseen types of data (out-of-distribution or OOD) is typically unpredictable. This can be dangerous if the network's output is used for decision-making in a safety-critical system. Hence, detecting that an input is OOD is crucial for the safe application of the NN. Verification approaches do not scale to practical NNs, making runtime monitoring more appealing for practical use. While various monitors have been suggested recently, their optimization for a given problem, as well as comparison with each other and reproduction of results, remain challenging. We present a tool for users and developers of NN monitors. It allows for (i) application of various types of monitors from the literature to a given input NN, (ii) optimization of the monitor's hyperparameters, and (iii) experimental evaluation and comparison to other approaches. Besides, it facilitates the development of new monitoring approaches. We demonstrate the tool's usability on several use cases of different types of users as well as on a case study comparing different approaches from recent literature.
[ "['Muqsit Azeem' 'Marta Grobelna' 'Sudeep Kanav' 'Jan Kretinsky'\n 'Stefanie Mohr' 'Sabine Rieder']" ]
null
null
2405.10360
null
null
http://arxiv.org/pdf/2405.10360v1
2024-05-16T18:00:01Z
2024-05-16T18:00:01Z
Adversarial Robustness Guarantees for Quantum Classifiers
Despite their ever more widespread deployment throughout society, machine learning algorithms remain critically vulnerable to being spoofed by subtle adversarial tampering with their input data. The prospect of near-term quantum computers being capable of running quantum machine learning (QML) algorithms has therefore generated intense interest in their adversarial vulnerability. Here we show that quantum properties of QML algorithms can confer fundamental protections against such attacks, in certain scenarios guaranteeing robustness against classically-armed adversaries. We leverage tools from many-body physics to identify the quantum sources of this protection. Our results offer a theoretical underpinning of recent evidence suggesting quantum advantages in the search for adversarial robustness. In particular, we prove that quantum classifiers are: (i) protected against weak perturbations of data drawn from the trained distribution, (ii) protected against local attacks if they are insufficiently scrambling, and (iii) protected against universal adversarial attacks if they are sufficiently quantum chaotic. Our analytic results are supported by numerical evidence demonstrating the applicability of our theorems and the resulting robustness of a quantum classifier in practice. This line of inquiry constitutes a concrete pathway to advantage in QML, orthogonal to the usually sought improvements in model speed or accuracy.
[ "['Neil Dowling' 'Maxwell T. West' 'Angus Southwell' 'Azar C. Nakhl'\n 'Martin Sevior' 'Muhammad Usman' 'Kavan Modi']" ]
null
null
2405.10369
null
null
http://arxiv.org/pdf/2405.10369v1
2024-05-16T18:03:17Z
2024-05-16T18:03:17Z
Reinforcement learning
Observing celestial objects and advancing our scientific knowledge about them involves tedious planning, scheduling, data collection and data post-processing. Many of these operational aspects of astronomy are guided and executed by expert astronomers. Reinforcement learning is a mechanism by which we (as humans and astronomers) can teach artificial-intelligence agents to perform some of these tedious tasks. In this paper, we present a state-of-the-art overview of reinforcement learning and how it can benefit astronomy.
[ "['Sarod Yatawatta']" ]
null
null
2405.10372
null
null
http://arxiv.org/pdf/2405.10372v1
2024-05-16T18:05:18Z
2024-05-16T18:05:18Z
Efficient model predictive control for nonlinear systems modelled by deep neural networks
This paper presents a model predictive control (MPC) for dynamic systems whose nonlinearity and uncertainty are modelled by deep neural networks (NNs), under input and state constraints. Since the NN output contains a high-order complex nonlinearity of the system state and control input, the MPC problem is nonlinear and challenging to solve for real-time control. This paper proposes two types of methods for solving the MPC problem: the mixed integer programming (MIP) method which produces an exact solution to the nonlinear MPC, and linear relaxation (LR) methods which generally give suboptimal solutions but are much computationally cheaper. Extensive numerical simulation for an inverted pendulum system modelled by ReLU NNs of various sizes is used to demonstrate and compare performance of the MIP and LR methods.
[ "['Jianglin Lan']" ]
null
null
2405.10375
null
null
http://arxiv.org/pdf/2405.10375v1
2024-05-16T18:14:59Z
2024-05-16T18:14:59Z
Implementing a GRU Neural Network for Flood Prediction in Ashland City, Tennessee
Ashland City, Tennessee, located within the Lower Cumberland Sycamore watershed, is highly susceptible to flooding due to increased upstream water levels. This study aimed to develop a robust flood prediction model for the city, utilizing water level data at 30-minute intervals from ten USGS gauge stations within the watershed. A Gated Recurrent Unit (GRU) network, known for its ability to effectively process sequential time-series data, was used. The model was trained, validated, and tested using a year-long dataset (January 2021-January 2022), and its performance was evaluated using statistical metrics including Nash-Sutcliffe Efficiency (NSE), Root Mean Squared Error (RMSE), Percent Bias (PBIAS), Mean Absolute Error (MAE), and Coefficient of Determination (R^2). The results demonstrated a high level of accuracy, with the model explaining 98.2% of the variance in the data. Despite minor discrepancies between predicted and observed values, the GRU model proved to be an effective tool for flood prediction in Ashland City, with potential applications for enhancing disaster preparedness and response efforts.
[ "['George K. Fordjour' 'Alfred J. Kalyanapu']" ]
null
null
2405.10377
null
null
http://arxiv.org/pdf/2405.10377v1
2024-05-16T18:16:56Z
2024-05-16T18:16:56Z
Smart Routing with Precise Link Estimation: DSEE-Based Anypath Routing for Reliable Wireless Networking
In dynamic and resource-constrained environments, such as multi-hop wireless mesh networks, traditional routing protocols often falter by relying on predetermined paths that prove ineffective in unpredictable link conditions. Shortest Anypath routing offers a solution by adapting routing decisions based on real-time link conditions. However, the effectiveness of such routing is fundamentally dependent on the quality and reliability of the available links, and predicting these variables with certainty is challenging. This paper introduces a novel approach that leverages the Deterministic Sequencing of Exploration and Exploitation (DSEE), a multi-armed bandit algorithm, to address the need for accurate and real-time estimation of link delivery probabilities. This approach augments the reliability and resilience of the Shortest Anypath routing in the face of fluctuating link conditions. By coupling DSEE with Anypath routing, this algorithm continuously learns and ensures accurate delivery probability estimation and selects the most suitable way to efficiently route packets while maintaining a provable near-logarithmic regret bound. We also theoretically prove that our proposed scheme offers better regret scaling with respect to the network size than the previously proposed Thompson Sampling-based Opportunistic Routing (TSOR).
[ "['Narjes Nourzad' 'Bhaskar Krishnamachari']" ]
null
null
2405.10378
null
null
http://arxiv.org/pdf/2405.10378v1
2024-05-16T18:17:44Z
2024-05-16T18:17:44Z
A Polynomial-Time Approximation for Pairwise Fair $k$-Median Clustering
In this work, we study pairwise fair clustering with $\ell \ge 2$ groups, where for every cluster $C$ and every group $i \in [\ell]$, the number of points in $C$ from group $i$ must be at most $t$ times the number of points in $C$ from any other group $j \in [\ell]$, for a given integer $t$. To the best of our knowledge, only bi-criteria approximation and exponential-time algorithms follow for this problem from the prior work on fair clustering problems when $\ell > 2$. In our work, focusing on the $\ell > 2$ case, we design the first polynomial-time $(t^{\ell}\cdot \ell\cdot k)^{O(\ell)}$-approximation for this problem with $k$-median cost that does not violate the fairness constraints. We complement our algorithmic result by providing hardness of approximation results, which show that our problem, even when $\ell=2$, is almost as hard as the popular uniform capacitated $k$-median, for which no polynomial-time algorithm with an approximation factor of $o(\log k)$ is known.
[ "['Sayan Bandyapadhyay' 'Eden Chlamtáč' 'Yury Makarychev' 'Ali Vakilian']" ]
null
null
2405.10385
null
null
http://arxiv.org/pdf/2405.10385v2
2024-05-20T05:21:13Z
2024-05-16T18:26:38Z
AmazUtah_NLP at SemEval-2024 Task 9: A MultiChoice Question Answering System for Commonsense Defying Reasoning
The SemEval 2024 BRAINTEASER task represents a pioneering venture in Natural Language Processing (NLP) by focusing on lateral thinking, a dimension of cognitive reasoning that is often overlooked in traditional linguistic analyses. This challenge comprises Sentence Puzzle and Word Puzzle subtasks and aims to test language models' capacity for divergent thinking. In this paper, we present our approach to the BRAINTEASER task. We employ a holistic strategy by leveraging cutting-edge pre-trained models in a multiple-choice architecture, and diversify the training data with the Sentence and Word Puzzle datasets. To gain further improvement, we fine-tuned the model with a synthetic humor and jokes dataset and the RiddleSense dataset, which helped augment the model's lateral thinking abilities. Empirical results show that our approach achieves 92.5% accuracy on the Sentence Puzzle subtask and 80.2% accuracy on the Word Puzzle subtask.
[ "['Mina Ghashami' 'Soumya Smruti Mishra']" ]
null
null
2405.10389
null
null
http://arxiv.org/pdf/2405.10389v1
2024-05-16T18:33:35Z
2024-05-16T18:33:35Z
Physics-Informed Heterogeneous Graph Neural Networks for DC Blocker Placement
The threat of geomagnetic disturbances (GMDs) to the reliable operation of the bulk energy system has spurred the development of effective strategies for mitigating their impacts. One such approach involves placing transformer neutral blocking devices, which interrupt the path of geomagnetically induced currents (GICs) to limit their impact. The high cost of these devices and the sparsity of transformers that experience high GICs during GMD events, however, calls for a sparse placement strategy that involves high computational cost. To address this challenge, we developed a physics-informed heterogeneous graph neural network (PIHGNN) for solving the graph-based dc-blocker placement problem. Our approach combines a heterogeneous graph neural network (HGNN) with a physics-informed neural network (PINN) to capture the diverse types of nodes and edges in ac/dc networks and incorporates the physical laws of the power grid. We train the PIHGNN model using a surrogate power flow model and validate it using case studies. Results demonstrate that PIHGNN can effectively and efficiently support the deployment of GIC dc-current blockers, ensuring the continued supply of electricity to meet societal demands. Our approach has the potential to contribute to the development of more reliable and resilient power grids capable of withstanding the growing threat that GMDs pose.
[ "['Hongwei Jin' 'Prasanna Balaprakash' 'Allen Zou' 'Pieter Ghysels'\n 'Aditi S. Krishnapriyan' 'Adam Mate' 'Arthur Barnes' 'Russell Bent']" ]
null
null
2405.10392
null
null
http://arxiv.org/pdf/2405.10392v1
2024-05-16T18:37:35Z
2024-05-16T18:37:35Z
Transport based particle methods for the Fokker-Planck-Landau equation
We propose a particle method for numerically solving the Landau equation, inspired by the score-based transport modeling (SBTM) method for the Fokker-Planck equation. This method can preserve some important physical properties of the Landau equation, such as the conservation of mass, momentum, and energy, and decay of estimated entropy. We prove that matching the gradient of the logarithm of the approximate solution is enough to recover the true solution to the Landau equation with Maxwellian molecules. Several numerical experiments in low and moderately high dimensions are performed, with particular emphasis on comparing the proposed method with the traditional particle or blob method.
[ "['Vasily Ilin' 'Jingwei Hu' 'Zhenfu Wang']" ]
null
null
2405.10399
null
null
http://arxiv.org/pdf/2405.10399v1
2024-05-16T18:58:19Z
2024-05-16T18:58:19Z
A note on continuous-time online learning
In online learning, the data is provided in a sequential order, and the goal of the learner is to make online decisions to minimize overall regrets. This note is concerned with continuous-time models and algorithms for several online learning problems: online linear optimization, adversarial bandit, and adversarial linear bandit. For each problem, we extend the discrete-time algorithm to the continuous-time setting and provide a concise proof of the optimal regret bound.
[ "['Lexing Ying']" ]
null
null
2405.10410
null
null
http://arxiv.org/pdf/2405.10410v2
2024-06-10T17:13:52Z
2024-05-16T19:22:49Z
The fast committor machine: Interpretable prediction with kernels
In the study of stochastic systems, the committor function describes the probability that a system starting from an initial configuration $x$ will reach a set $B$ before a set $A$. This paper introduces an efficient and interpretable algorithm for approximating the committor, called the "fast committor machine" (FCM). The FCM uses simulated trajectory data to build a kernel-based model of the committor. The kernel function is constructed to emphasize low-dimensional subspaces which optimally describe the $A$ to $B$ transitions. The coefficients in the kernel model are determined using randomized linear algebra, leading to a runtime that scales linearly in the number of data points. In numerical experiments involving a triple-well potential and alanine dipeptide, the FCM yields higher accuracy and trains more quickly than a neural network with the same number of parameters. The FCM is also more interpretable than the neural net.
[ "['D. Aristoff' 'M. Johnson' 'G. Simpson' 'R. J. Webber']" ]
null
null
2405.10414
null
null
http://arxiv.org/pdf/2405.10414v1
2024-05-16T19:33:00Z
2024-05-16T19:33:00Z
A Reliability Theory of Compromise Decisions for Large-Scale Stochastic Programs
Stochastic programming models can lead to very large-scale optimization problems for which it may be impossible to enumerate all possible scenarios. In such cases, one adopts a sampling-based solution methodology in which case the reliability of the resulting decisions may be suspect. For such instances, it is advisable to adopt methodologies that promote variance reduction. One such approach goes under a framework known as "compromise decision", which requires multiple replications of the solution procedure. This paper studies the reliability of stochastic programming solutions resulting from the "compromise decision" process. This process is characterized by minimizing an aggregation of objective function approximations across replications, presumably conducted in parallel. We refer to the post-parallel-processing problem as the problem of "compromise decision". We quantify the reliability of compromise decisions by estimating the expectation and variance of the "pessimistic distance" of sampled instances from the set of true optimal decisions. Such pessimistic distance is defined as an estimate of the largest possible distance of the solution of the sampled instance from the "true" optimal solution set. The Rademacher average of instances is used to bound the sample complexity of the compromise decision.
[ "['Shuotao Diao' 'Suvrajeet Sen']" ]
null
null
2405.10425
null
null
http://arxiv.org/pdf/2405.10425v1
2024-05-16T20:09:41Z
2024-05-16T20:09:41Z
Data Selection for Transfer Unlearning
As deep learning models are becoming larger and data-hungrier, there are growing ethical, legal and technical concerns over use of data: in practice, agreements on data use may change over time, rendering previously-used training data impermissible for training purposes. These issues have driven increased attention to machine unlearning: removing "the influence of" a subset of training data from a trained model. In this work, we advocate for a relaxed definition of unlearning that does not address privacy applications but targets a scenario where a data owner withdraws permission of use of their data for training purposes. In this context, we consider the important problem of \emph{transfer unlearning} where a pretrained model is transferred to a target dataset that contains some "non-static" data that may need to be unlearned in the future. We propose a new method that uses a mechanism for selecting relevant examples from an auxiliary "static" dataset, and finetunes on the selected data instead of "non-static" target data; addressing all unlearning requests ahead of time. We also adapt a recent relaxed definition of unlearning to our problem setting and demonstrate that our approach is an exact transfer unlearner according to it, while being highly efficient (amortized). We find that our method outperforms the gold standard "exact unlearning" (finetuning on only the "static" portion of the target dataset) on several datasets, especially for small "static" sets, sometimes approaching an upper bound for test accuracy. We also analyze factors influencing the accuracy boost obtained by data selection.
[ "['Nazanin Mohammadi Sepahvand' 'Vincent Dumoulin' 'Eleni Triantafillou'\n 'Gintare Karolina Dziugaite']" ]
null
null
2405.10426
null
null
http://arxiv.org/pdf/2405.10426v1
2024-05-16T20:16:45Z
2024-05-16T20:16:45Z
Memory-efficient Energy-adaptive Inference of Pre-Trained Models on Batteryless Embedded Systems
Batteryless systems frequently face power failures, requiring extra runtime buffers to maintain inference progress and leaving only a memory space for storing ultra-tiny deep neural networks (DNNs). Besides, making these models responsive to stochastic energy harvesting dynamics during inference requires a balance between inference accuracy, latency, and energy overhead. Recent works on compression mostly focus on time and memory, but often ignore energy dynamics or significantly reduce the accuracy of pre-trained DNNs. Existing energy-adaptive inference works modify the architecture of pre-trained models and have significant memory overhead. Thus, energy-adaptive and accurate inference of pre-trained DNNs on batteryless devices with extreme memory constraints is more challenging than on traditional microcontrollers. We combat these issues by proposing FreeML, a framework to optimize pre-trained DNN models for memory-efficient and energy-adaptive inference on batteryless systems. FreeML comprises (1) a novel compression technique to reduce the model footprint and runtime memory requirements simultaneously, making them executable on extremely memory-constrained batteryless platforms; and (2) the first early exit mechanism that uses a single exit branch for all exit points to terminate inference at any time, making models energy-adaptive with minimal memory overhead. Our experiments showed that FreeML reduces the model sizes by up to $95\times$, supports adaptive inference with $2.03-19.65\times$ less memory overhead, and provides significant time and energy benefits with only a negligible accuracy drop compared to the state-of-the-art.
[ "['Pietro Farina' 'Subrata Biswas' 'Eren Yıldız' 'Khakim Akhunov'\n 'Saad Ahmed' 'Bashima Islam' 'Kasım Sinan Yıldırım']" ]
null
null
2405.10442
null
null
http://arxiv.org/pdf/2405.10442v1
2024-05-16T21:07:09Z
2024-05-16T21:07:09Z
Data-driven low-dimensional model of a sedimenting flexible fiber
The dynamics of flexible filaments entrained in flow, important for understanding many biological and industrial processes, are computationally expensive to model with full-physics simulations. This work describes a data-driven technique to create high-fidelity low-dimensional models of flexible fiber dynamics using machine learning; the technique is applied to sedimentation in a quiescent, viscous Newtonian fluid, using results from detailed simulations as the data set. The approach combines an autoencoder neural network architecture to learn a low-dimensional latent representation of the filament shape, with a neural ODE that learns the evolution of the particle in the latent state. The model was designed to model filaments of varying flexibility, characterized by an elasto-gravitational number $\mathcal{B}$, and was trained on a data set containing the evolution of fibers beginning at set angles of inclination. For the range of $\mathcal{B}$ considered here (100-10000), the filament shape dynamics can be represented with high accuracy with only four degrees of freedom, in contrast to the 93 present in the original bead-spring model used to generate the dynamic trajectories. We predict the evolution of fibers set at arbitrary angles and demonstrate that our data-driven model can accurately forecast the evolution of a fiber at both trained and untrained elasto-gravitational numbers.
[ "['Andrew J Fox' 'Michael D. Graham']" ]
null
null
2405.10443
null
null
http://arxiv.org/pdf/2405.10443v2
2024-06-26T20:22:31Z
2024-05-16T21:07:42Z
Simultaneous Masking, Not Prompting Optimization: A Paradigm Shift in Fine-tuning LLMs for Simultaneous Translation
Large language models (LLMs) have achieved state-of-the-art performance in various language processing tasks, motivating their adoption in simultaneous translation. Current fine-tuning methods to adapt LLMs for simultaneous translation focus on prompting optimization strategies using either data augmentation or prompt structure modifications. However, these methods suffer from several issues, such as unnecessarily expanded training sets, computational inefficiency from dumping the key and value cache, increased prompt sizes, or restriction to a single decision policy. To eliminate these issues, in this work, we propose SimulMask, a new paradigm for fine-tuning LLMs for simultaneous translation. It utilizes a novel attention mask approach that models simultaneous translation during fine-tuning by masking attention for a desired decision policy. Applying the proposed SimulMask on a Falcon LLM for the IWSLT 2017 dataset, we have observed a significant translation quality improvement compared to state-of-the-art prompting optimization strategies on five language pairs while reducing the computational cost.
[ "['Matthew Raffel' 'Victor Agostinelli' 'Lizhong Chen']" ]
null
null
2405.10452
null
null
http://arxiv.org/pdf/2405.10452v1
2024-05-16T21:38:21Z
2024-05-16T21:38:21Z
Navigating Public Sentiment in the Circular Economy through Topic Modelling and Hyperparameter Optimisation
To advance the circular economy (CE), it is crucial to gain insights into the evolution of public sentiments, cognitive pathways of the masses concerning circular products and digital technology, and recognise the primary concerns. To achieve this, we collected data related to the CE from diverse platforms including Twitter, Reddit, and The Guardian. This comprehensive data collection spanned three distinct strata of the public: the general public, professionals, and official sources. Subsequently, we utilised three topic models on the collected data. Topic modelling represents a type of data-driven and machine learning approach for text mining, capable of automatically categorising a large number of documents into distinct semantic groups. Simultaneously, these groups are described by topics, and these topics can aid in understanding the semantic content of documents at a high level. However, the performance of topic modelling may vary depending on different hyperparameter values. Therefore, in this study, we proposed a framework for topic modelling with hyperparameter optimisation for CE and conducted a series of systematic experiments to ensure that topic models are set with appropriate hyperparameters and to gain insights into the correlations between the CE and public opinion based on well-established models. The results of this study indicate that concerns about sustainability and economic impact persist across all three datasets. Official sources demonstrate a higher level of engagement with the application and regulation of CE. To the best of our knowledge, this study is pioneering in investigating various levels of public opinions concerning CE through topic modelling with the exploration of hyperparameter optimisation.
[ "['Junhao Song' 'Yingfang Yuan' 'Kaiwen Chang' 'Bing Xu' 'Jin Xuan'\n 'Wei Pang']" ]
null
null
2405.10469
null
null
http://arxiv.org/pdf/2405.10469v1
2024-05-16T23:27:21Z
2024-05-16T23:27:21Z
Simulation-Based Benchmarking of Reinforcement Learning Agents for Personalized Retail Promotions
The development of open benchmarking platforms could greatly accelerate the adoption of AI agents in retail. This paper presents comprehensive simulations of customer shopping behaviors for the purpose of benchmarking reinforcement learning (RL) agents that optimize coupon targeting. The difficulty of this learning problem is largely driven by the sparsity of customer purchase events. We trained agents using offline batch data comprising summarized customer purchase histories to help mitigate this effect. Our experiments revealed that contextual bandit and deep RL methods that are less prone to over-fitting the sparse reward distributions significantly outperform static policies. This study offers a practical framework for simulating AI agents that optimize the entire retail customer journey. It aims to inspire the further development of simulation tools for retail AI systems.
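For a concrete sense of the contextual bandit baselines such a benchmark might evaluate, here is a generic LinUCB sketch; the arms, feature dimension, and exploration weight are illustrative assumptions, not the paper's agents or simulation environment.

```python
import numpy as np

class LinUCB:
    """Per-arm linear model with an upper-confidence exploration bonus."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]   # per-arm design matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```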
[ "['Yu Xia' 'Sriram Narayanamoorthy' 'Zhengyuan Zhou' 'Joshua Mabry']" ]
null
null
2405.10480
null
null
http://arxiv.org/pdf/2405.10480v1
2024-05-17T00:52:39Z
2024-05-17T00:52:39Z
Lean Attention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers
Transformer-based models have emerged as one of the most widely used architectures for natural language processing, natural language generation, and image generation. The size of the state-of-the-art models has increased steadily, reaching billions of parameters. These huge models are memory hungry and incur significant inference latency even on cutting-edge AI accelerators, such as GPUs. Specifically, the time and memory complexity of the attention operation is quadratic in terms of the total context length, i.e., prompt and output tokens. Thus, several optimizations such as key-value tensor caching and FlashAttention computation have been proposed to meet the low-latency demands of applications relying on such large models. However, these techniques do not cater to the computationally distinct nature of different phases during inference. To that end, we propose LeanAttention, a scalable technique of computing self-attention for the token-generation phase (decode-phase) of decoder-only transformer models. LeanAttention enables scaling the attention mechanism implementation for the challenging case of long context lengths by re-designing the execution flow for the decode-phase. We identify that the associative property of online softmax can be treated as a reduction operation thus allowing us to parallelize the attention computation over these large context lengths. We extend the "stream-K" style reduction of tiled calculation to self-attention to enable parallel computation resulting in an average of 2.6x attention execution speedup over FlashAttention-2 and up to 8.33x speedup for 512k context lengths.
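The key enabler this abstract mentions, treating online softmax as an associative reduction, can be checked with a small sketch: partial (max, normalizer, weighted-sum) states computed over context chunks merge in any order to the exact attention output. This is a generic NumPy illustration, not LeanAttention's kernel.

```python
import numpy as np

def partial_state(scores, values):
    m = scores.max()
    w = np.exp(scores - m)
    return m, w.sum(), w @ values   # (running max, denominator, numerator)

def merge(a, b):
    # Rescale both partial states to a common max, then add: associative.
    (ma, sa, na), (mb, sb, nb) = a, b
    m = max(ma, mb)
    return (m,
            sa * np.exp(ma - m) + sb * np.exp(mb - m),
            na * np.exp(ma - m) + nb * np.exp(mb - m))

rng = np.random.default_rng(0)
s, v = rng.normal(size=8), rng.normal(size=(8, 4))
m, den, num = merge(partial_state(s[:5], v[:5]), partial_state(s[5:], v[5:]))
full = (np.exp(s - s.max()) / np.exp(s - s.max()).sum()) @ v
assert np.allclose(num / den, full)   # chunked reduction == full softmax
```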
[ "['Rya Sanovar' 'Srikant Bharadwaj' 'Renee St. Amant' 'Victor Rühle'\n 'Saravan Rajmohan']" ]
null
null
2405.10481
null
null
http://arxiv.org/pdf/2405.10481v1
2024-05-17T01:02:03Z
2024-05-17T01:02:03Z
Multi-Evidence based Fact Verification via A Confidential Graph Neural Network
Fact verification tasks aim to verify the integrity of textual content against a trustworthy corpus. Existing fact verification models usually build a fully connected reasoning graph, which regards claim-evidence pairs as nodes and connects them with edges. They employ the graph to propagate the semantics of the nodes. Nevertheless, noisy nodes usually propagate their semantics via the edges of the reasoning graph, which misleads the semantic representations of other nodes and amplifies the noise signals. To mitigate the propagation of noisy semantic information, we introduce a Confidential Graph Attention Network (CO-GAT), which proposes a node masking mechanism for modeling the nodes. Specifically, CO-GAT calculates the node confidence score by estimating the relevance between the claim and evidence pieces. Then, the node masking mechanism uses the node confidence scores to control the noise information flow from the vanilla node to the other graph nodes. CO-GAT achieves a 73.59% FEVER score on the FEVER dataset and demonstrates its generalization ability by extending its effectiveness to the science-specific domain.
[ "['Yuqing Lan' 'Zhenghao Liu' 'Yu Gu' 'Xiaoyuan Yi' 'Xiaohua Li'\n 'Liner Yang' 'Ge Yu']" ]
null
null
2405.10490
null
null
http://arxiv.org/abs/2405.10490v3
2024-06-25T22:52:43Z
2024-05-17T01:44:30Z
Neural Optimization with Adaptive Heuristics for Intelligent Marketing System
Computational marketing has become increasingly important in today's digital world, facing challenges such as massive heterogeneous data, multi-channel customer journeys, and limited marketing budgets. In this paper, we propose a general framework for marketing AI systems, the Neural Optimization with Adaptive Heuristics (NOAH) framework. NOAH is the first general framework for marketing optimization that considers both to-business (2B) and to-consumer (2C) products, as well as both owned and paid channels. We describe key modules of the NOAH framework, including prediction, optimization, and adaptive heuristics, providing examples for bidding and content optimization. We then detail the successful application of NOAH to LinkedIn's email marketing system, showcasing significant wins over the legacy ranking system. Additionally, we share details and insights that are broadly useful, particularly on: (i) addressing delayed feedback with lifetime value, (ii) performing large-scale linear programming with randomization, (iii) improving retrieval with audience expansion, (iv) reducing signal dilution in targeting tests, and (v) handling zero-inflated heavy-tail metrics in statistical testing.
[ "['Changshuai Wei' 'Benjamin Zelditch' 'Joyce Chen'\n 'Andre Assuncao Silva T Ribeiro' 'Jingyi Kenneth Tay'\n 'Borja Ocejo Elizondo' 'Keerthi Selvaraj' 'Aman Gupta'\n 'Licurgo Benemann De Almeida']" ]
null
null
2405.10492
null
null
http://arxiv.org/pdf/2405.10492v2
2024-05-21T01:18:22Z
2024-05-17T01:58:23Z
Automatic News Generation and Fact-Checking System Based on Language Processing
This paper explores an automatic news generation and fact-checking system based on language processing, aimed at enhancing the efficiency and quality of news production while ensuring the authenticity and reliability of the news content. With the rapid development of Natural Language Processing (NLP) and deep learning technologies, automatic news generation systems are capable of extracting key information from massive data and generating well-structured, fluent news articles. Meanwhile, by integrating fact-checking technology, the system can effectively prevent the spread of false news and improve the accuracy and credibility of news. This study details the key technologies involved in automatic news generation and fact-checking, including text generation, information extraction, and the application of knowledge graphs, and validates the effectiveness of these technologies through experiments. Additionally, the paper discusses the future development directions of automatic news generation and fact-checking systems, emphasizing the importance of further integration and innovation of technologies. The results show that with continuous technological optimization and practical application, these systems will play an increasingly important role in the future news industry, providing more efficient and reliable news services.
[ "['Xirui Peng' 'Qiming Xu' 'Zheng Feng' 'Haopeng Zhao' 'Lianghao Tan'\n 'Yan Zhou' 'Zecheng Zhang' 'Chenwei Gong' 'Yingqiao Zheng']" ]
null
null
2405.10512
null
null
http://arxiv.org/pdf/2405.10512v1
2024-05-17T03:32:15Z
2024-05-17T03:32:15Z
In-context Contrastive Learning for Event Causality Identification
Event Causality Identification (ECI) aims at determining the existence of a causal relation between two events. Although recent prompt learning-based approaches have shown promising improvements on the ECI task, their performance is often subject to the delicate design of multiple prompts and the positive correlations between the main task and derivative tasks. The in-context learning paradigm provides explicit guidance for label prediction in the prompt learning paradigm, alleviating its reliance on complex prompts and derivative tasks. However, it does not distinguish between positive and negative demonstrations for analogy learning. Motivated by such considerations, this paper proposes an In-Context Contrastive Learning (ICCL) model that utilizes contrastive learning to enhance the effectiveness of both positive and negative demonstrations. Additionally, we apply contrastive learning to event pairs to better facilitate event causality identification. ICCL is evaluated on widely used corpora, including EventStoryLine and Causal-TimeBank, and the results show significant performance improvements over state-of-the-art algorithms.
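The contrastive component invoked here can be illustrated with a generic InfoNCE-style loss over positive and negative demonstration embeddings; this is a hedged sketch, not the ICCL training objective.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (d,); negatives: (n, d). Lower loss pulls the
    anchor toward the positive and away from the negatives."""
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    logits = torch.cat([(anchor @ positive).view(1), negatives @ anchor])
    logits = logits / temperature
    # The positive sits at index 0, so the target class is 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```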
[ "['Chao Liang' 'Wei Xiang' 'Bang Wang']" ]
null
null
2405.10513
null
null
http://arxiv.org/pdf/2405.10513v1
2024-05-17T03:41:40Z
2024-05-17T03:41:40Z
Federated Learning With Energy Harvesting Devices: An MDP Framework
Federated learning (FL) requires edge devices to perform local training and exchange information with a parameter server, leading to substantial energy consumption. A critical challenge in practical FL systems is the rapid energy depletion of battery-limited edge devices, which curtails their operational lifespan and affects the learning performance. To address this issue, we apply energy harvesting techniques in FL systems to extract ambient energy for continuously powering edge devices. We first establish the convergence bound for the wireless FL system with energy harvesting devices, illustrating that the convergence is impacted by partial device participation and packet drops, both of which depend on the energy supply. To accelerate the convergence, we formulate a joint device scheduling and power control problem and model it as a Markov decision process (MDP). By solving this MDP, we derive the optimal transmission policy and demonstrate that it possesses a monotone structure with respect to the battery and channel states. To overcome the curse of dimensionality caused by the exponential complexity of computing the optimal policy, we propose a low-complexity algorithm, which is asymptotically optimal as the number of devices increases. Furthermore, for unknown channels and harvested energy statistics, we develop a structure-enhanced deep reinforcement learning algorithm that leverages the monotone structure of the optimal policy to improve the training performance. Finally, extensive numerical experiments on real-world datasets are presented to validate the theoretical results and corroborate the effectiveness of the proposed algorithms.
[ "['Kai Zhang' 'Xuanyu Cao']" ]
null
null
2405.10515
null
null
http://arxiv.org/pdf/2405.10515v1
2024-05-17T03:47:30Z
2024-05-17T03:47:30Z
Improved AdaBoost for Virtual Reality Experience Prediction Based on Long Short-Term Memory Network
A classification prediction algorithm based on a Long Short-Term Memory (LSTM) network with improved AdaBoost is used to predict virtual reality (VR) user experience. The dataset is randomly divided into training and test sets in the ratio of 7:3. During the training process, the model's loss value decreases from 0.65 to 0.31, which shows that the model gradually reduces the discrepancy between the prediction results and the actual labels, and improves its accuracy and generalisation ability. The final loss value of 0.31 indicates that the model fits the training data well and is able to make predictions and classifications more accurately. The confusion matrix for the training set shows a total of 177 correct predictions and 52 incorrect predictions, with an accuracy of 77%, precision of 88%, recall of 77%, and an F1 score of 82%. The confusion matrix for the test set shows a total of 167 correct and 53 incorrect predictions, with 75% accuracy, 87% precision, 57% recall, and a 69% F1 score. In summary, the classification prediction algorithm based on LSTM with improved AdaBoost shows good prediction ability for virtual reality user experience. This study is of great significance in enhancing the application of virtual reality technology in user experience. By combining the LSTM and AdaBoost algorithms, significant progress has been made in user experience prediction, which not only improves the accuracy and generalisation ability of the model but also provides useful insights for related research in the field of virtual reality. This approach can help developers better understand user requirements, optimise virtual reality product design, and enhance user satisfaction, promoting the wide application of virtual reality technology in various fields.
[ "['Wenhan Fan' 'Zhicheng Ding' 'Ruixin Huang' 'Chang Zhou' 'Xuyang Zhang']" ]
null
null
2405.10531
null
null
http://arxiv.org/pdf/2405.10531v1
2024-05-17T04:20:39Z
2024-05-17T04:20:39Z
Nonparametric Teaching of Implicit Neural Representations
We investigate the learning of implicit neural representation (INR) using an overparameterized multilayer perceptron (MLP) via a novel nonparametric teaching perspective. The latter offers an efficient example selection framework for teaching nonparametrically defined (viz. non-closed-form) target functions, such as image functions defined by 2D grids of pixels. To address the costly training of INRs, we propose a paradigm called Implicit Neural Teaching (INT) that treats INR learning as a nonparametric teaching problem, where the given signal being fitted serves as the target function. The teacher then selects signal fragments for iterative training of the MLP to achieve fast convergence. By establishing a connection between MLP evolution through parameter-based gradient descent and that of function evolution through functional gradient descent in nonparametric teaching, we show for the first time that teaching an overparameterized MLP is consistent with teaching a nonparametric learner. This new discovery readily permits a convenient drop-in of nonparametric teaching algorithms to broadly enhance INR training efficiency, demonstrating 30%+ training time savings across various input modalities.
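A hypothetical sketch of the teacher's example selection in the spirit of INT: pick the signal fragments where the current MLP fit is worst and train only on those. The selection rule, shapes, and fragment size are assumptions for illustration, not the paper's algorithm.

```python
import torch

def select_fragments(model, coords, signal, k=256):
    """Return the k coordinates with the largest pointwise reconstruction
    error. coords: (N, 2) pixel grid; signal: (N, 3) RGB values."""
    with torch.no_grad():
        err = (model(coords) - signal).abs().sum(dim=-1)   # (N,)
    idx = err.topk(k).indices
    return coords[idx], signal[idx]

# One teaching iteration (loop this until the full signal is fitted):
# xb, yb = select_fragments(mlp, coords, pixels)
# loss = ((mlp(xb) - yb) ** 2).mean(); loss.backward(); opt.step()
```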
[ "['Chen Zhang' 'Steven Tin Sui Luo' 'Jason Chun Lok Li' 'Yik-Chung Wu'\n 'Ngai Wong']" ]
null
null
2405.10536
null
null
http://arxiv.org/pdf/2405.10536v1
2024-05-17T04:28:54Z
2024-05-17T04:28:54Z
Time-Varying Constraint-Aware Reinforcement Learning for Energy Storage Control
Energy storage devices, such as batteries, thermal energy storages, and hydrogen systems, can help mitigate climate change by ensuring a more stable and sustainable power supply. To maximize the effectiveness of such energy storage, determining the appropriate charging and discharging amounts for each time period is crucial. Reinforcement learning is preferred over traditional optimization for the control of energy storage due to its ability to adapt to dynamic and complex environments. However, the continuous nature of charging and discharging levels in energy storage poses limitations for discrete reinforcement learning, and time-varying feasible charge-discharge range based on state of charge (SoC) variability also limits the conventional continuous reinforcement learning. In this paper, we propose a continuous reinforcement learning approach that takes into account the time-varying feasible charge-discharge range. An additional objective function was introduced for learning the feasible action range for each time period, supplementing the objectives of training the actor for policy learning and the critic for value learning. This actively promotes the utilization of energy storage by preventing them from getting stuck in suboptimal states, such as continuous full charging or discharging. This is achieved through the enforcement of the charging and discharging levels into the feasible action range. The experimental results demonstrated that the proposed method further maximized the effectiveness of energy storage by actively enhancing its utilization.
[ "['Jaeik Jeong' 'Tai-Yeon Ku' 'Wan-Ki Park']" ]
null
null
2405.10552
null
null
http://arxiv.org/pdf/2405.10552v1
2024-05-17T05:32:27Z
2024-05-17T05:32:27Z
Data Science Principles for Interpretable and Explainable AI
Society's capacity for algorithmic problem-solving has never been greater. Artificial Intelligence is now applied across more domains than ever, a consequence of powerful abstractions, abundant data, and accessible software. As capabilities have expanded, so have risks, with models often deployed without fully understanding their potential impacts. Interpretable and interactive machine learning aims to make complex models more transparent and controllable, enhancing user agency. This review synthesizes key principles from the growing literature in this field. We first introduce precise vocabulary for discussing interpretability, like the distinction between glass box and explainable algorithms. We then explore connections to classical statistical and design principles, like parsimony and the gulfs of interaction. Basic explainability techniques -- including learned embeddings, integrated gradients, and concept bottlenecks -- are illustrated with a simple case study. We also review criteria for objectively evaluating interpretability approaches. Throughout, we underscore the importance of considering audience goals when designing interactive algorithmic systems. Finally, we outline open challenges and discuss the potential role of data science in addressing them. Code to reproduce all examples can be found at https://go.wisc.edu/3k1ewe.
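Of the techniques this review lists, integrated gradients is easy to sketch: attributions are gradients averaged along a straight path from a baseline to the input. A minimal PyTorch version, assuming a model that returns one scalar score per input row; this is a generic sketch, not the review's case-study code.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    baseline = torch.zeros_like(x) if baseline is None else baseline
    alphas = torch.linspace(0, 1, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)      # (steps, *x.shape)
    path.requires_grad_(True)
    model(path).sum().backward()                   # one scalar score per row
    # Riemann-sum approximation of the path integral of gradients.
    return (x - baseline) * path.grad.mean(dim=0)
```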
[ "['Kris Sankaran']" ]
null
null
2405.10563
null
null
http://arxiv.org/pdf/2405.10563v1
2024-05-17T06:15:26Z
2024-05-17T06:15:26Z
Function Extrapolation with Neural Networks and Its Application for Manifolds
This paper addresses the problem of accurately estimating a function on one domain when only its discrete samples are available on another domain. To answer this challenge, we utilize a neural network, which we train to incorporate prior knowledge of the function. In addition, by carefully analyzing the problem, we obtain a bound on the error over the extrapolation domain and define a condition number for this problem that quantifies the level of difficulty of the setup. Compared to other machine learning methods that provide time series prediction, such as transformers, our approach is suitable for setups where the interpolation and extrapolation regions are general subdomains and, in particular, manifolds. In addition, our construction leads to an improved loss function that helps us boost the accuracy and robustness of our neural network. We conduct comprehensive numerical tests and comparisons of our extrapolation versus standard methods. The results illustrate the effectiveness of our approach in various scenarios.
[ "['Guy Hay' 'Nir Sharon']" ]
null
null
2405.10581
null
null
http://arxiv.org/pdf/2405.10581v1
2024-05-17T07:09:52Z
2024-05-17T07:09:52Z
Future Aware Safe Active Learning of Time Varying Systems using Gaussian Processes
Experimental exploration of high-cost systems with safety constraints, common in engineering applications, is a challenging endeavor. Data-driven models offer a promising solution, but acquiring the requisite data remains expensive and is potentially unsafe. Safe active learning techniques prove essential, enabling the learning of high-quality models with minimal expensive data points and high safety. This paper introduces a safe active learning framework tailored for time-varying systems, addressing drift, seasonal changes, and complexities due to dynamic behavior. The proposed Time-aware Integrated Mean Squared Prediction Error (T-IMSPE) method minimizes posterior variance over current and future states, optimizing information gathering also in the time domain. Empirical results highlight T-IMSPE's advantages in model quality through toy and real-world examples. State-of-the-art Gaussian processes are compatible with T-IMSPE. Our theoretical contributions include a clear delineation of which Gaussian process kernels, domains, and weighting measures are suitable for T-IMSPE, and even beyond it, for its non-time-aware predecessor IMSPE.
[ "['Markus Lange-Hegermann' 'Christoph Zimmer']" ]
null
null
2405.10584
null
null
http://arxiv.org/pdf/2405.10584v1
2024-05-17T07:18:08Z
2024-05-17T07:18:08Z
A Hybrid Deep Learning Framework for Stock Price Prediction Considering the Investor Sentiment of Online Forum Enhanced by Popularity
Stock price prediction has always been a difficult task for forecasters. Using cutting-edge deep learning techniques, stock price prediction based on investor sentiment extracted from online forums has become feasible. We propose a novel hybrid deep learning framework for predicting stock prices. The framework leverages the XLNET model to analyze the sentiment conveyed in user posts on online forums, combines these sentiments with the post popularity factor to compute daily group sentiments, and integrates this information with stock technical indicators into an improved BiLSTM-highway model for stock price prediction. Through a series of comparative experiments involving four stocks on the Chinese stock market, it is demonstrated that the hybrid framework effectively predicts stock prices. This study reveals the necessity of analyzing investors' textual views for stock price prediction.
[ "['Huiyu Li' 'Junhua Hu']" ]
null
null
2405.10597
null
null
http://arxiv.org/pdf/2405.10597v1
2024-05-17T07:47:11Z
2024-05-17T07:47:11Z
UniCL: A Universal Contrastive Learning Framework for Large Time Series Models
Time-series analysis plays a pivotal role across a range of critical applications, from finance to healthcare, which involves various tasks, such as forecasting and classification. To handle the inherent complexities of time-series data, such as high dimensionality and noise, traditional supervised learning methods first annotate extensive labels for time-series data in each task, which is very costly and impractical in real-world applications. In contrast, pre-trained foundation models offer a promising alternative by leveraging unlabeled data to capture general time series patterns, which can then be fine-tuned for specific tasks. However, existing approaches to pre-training such models typically suffer from high-bias and low-generality issues due to the use of predefined and rigid augmentation operations and domain-specific data training. To overcome these limitations, this paper introduces UniCL, a universal and scalable contrastive learning framework designed for pretraining time-series foundation models across cross-domain datasets. Specifically, we propose a unified and trainable time-series augmentation operation to generate pattern-preserved, diverse, and low-bias time-series data by leveraging spectral information. Besides, we introduce a scalable augmentation algorithm capable of handling datasets with varying lengths, facilitating cross-domain pretraining. Extensive experiments on two benchmark datasets across eleven domains validate the effectiveness of UniCL, demonstrating its high generalization on time-series analysis across various fields.
[ "['Jiawei Li' 'Jingshu Peng' 'Haoyang Li' 'Lei Chen']" ]
null
null
2405.10608
null
null
http://arxiv.org/pdf/2405.10608v1
2024-05-17T08:12:53Z
2024-05-17T08:12:53Z
ECATS: Explainable-by-design concept-based anomaly detection for time series
Deep learning methods for time series have already reached excellent performances in both prediction and classification tasks, including anomaly detection. However, the complexity inherent in Cyber Physical Systems (CPS) creates a challenge when it comes to explainability methods. To overcome this inherent lack of interpretability, we propose ECATS, a concept-based neuro-symbolic architecture where concepts are represented as Signal Temporal Logic (STL) formulae. Leveraging kernel-based methods for STL, concept embeddings are learnt in an unsupervised manner through a cross-attention mechanism. The network makes class predictions through these concept embeddings, allowing for a meaningful explanation to be naturally extracted for each input. Our preliminary experiments with a simple CPS-based dataset show that our model is able to achieve great classification performance while ensuring local interpretability.
[ "['Irene Ferfoglia' 'Gaia Saveri' 'Laura Nenzi' 'Luca Bortolussi']" ]
null
null
2405.10612
null
null
http://arxiv.org/pdf/2405.10612v1
2024-05-17T08:19:48Z
2024-05-17T08:19:48Z
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers
Given the power of vision transformers, a new learning paradigm, pre-training and then prompting, makes it more efficient and effective to address downstream visual recognition tasks. In this paper, we identify a novel security threat towards such a paradigm from the perspective of backdoor attacks. Specifically, an extra prompt token, called the switch token in this work, can turn the backdoor mode on, i.e., converting a benign model into a backdoored one. Once under the backdoor mode, a specific trigger can force the model to predict a target class. It poses a severe risk to the users of cloud API, since the malicious behavior cannot be activated or detected under the benign mode, thus making the attack very stealthy. To attack a pre-trained model, our proposed attack, named SWARM, learns a trigger and prompt tokens including a switch token. They are optimized with a clean loss, which encourages the model to behave normally even when the trigger is present, and a backdoor loss, which ensures the backdoor can be activated by the trigger when the switch is on. Besides, we utilize the cross-mode feature distillation to reduce the effect of the switch token on clean samples. The experiments on diverse visual recognition tasks confirm the success of our switchable backdoor attack, i.e., achieving a 95%+ attack success rate while remaining hard to detect and remove. Our code is available at https://github.com/20000yshust/SWARM.
[ "['Sheng Yang' 'Jiawang Bai' 'Kuofeng Gao' 'Yong Yang' 'Yiming Li'\n 'Shu-tao Xia']" ]
null
null
2405.10616
null
null
http://arxiv.org/pdf/2405.10616v1
2024-05-17T08:27:12Z
2024-05-17T08:27:12Z
Feature-based Low-Rank Compression of Large Language Models via Bayesian Optimization
In recent years, large language models (LLMs) have driven advances in natural language processing. Still, their growing scale has increased the computational burden, necessitating a balance between efficiency and performance. Low-rank compression, a promising technique, reduces non-essential parameters by decomposing weight matrices into products of two low-rank matrices. Yet, its application in LLMs has not been extensively studied. The key to low-rank compression lies in low-rank factorization and low-rank dimension allocation. To address the challenges of low-rank compression in LLMs, we conduct empirical research on the low-rank characteristics of large models. We propose a low-rank compression method suitable for LLMs. This approach involves precise estimation of feature distributions through pooled covariance matrices and a Bayesian optimization strategy for allocating low-rank dimensions. Experiments on the LLaMA-2 models demonstrate that our method outperforms existing strong structured pruning and low-rank compression techniques in maintaining model performance at the same compression ratio.
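The basic decomposition this abstract builds on can be sketched with a plain truncated SVD; the paper's pooled-covariance feature estimation and Bayesian rank allocation are not reproduced here.

```python
import numpy as np

def low_rank_factors(W, rank):
    """Approximate W (out x in) as A @ B with A: (out, rank), B: (rank, in)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # scale columns by the leading singular values
    B = Vt[:rank]
    return A, B

W = np.random.default_rng(0).normal(size=(512, 512))
A, B = low_rank_factors(W, rank=64)
params_saved = W.size - (A.size + B.size)
print(f"relative error {np.linalg.norm(W - A @ B) / np.linalg.norm(W):.3f}, "
      f"parameters saved {params_saved}")
```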
[ "['Yixin Ji' 'Yang Xiang' 'Juntao Li' 'Wei Chen' 'Zhongyi Liu' 'Kehai Chen'\n 'Min Zhang']" ]
null
null
2405.10618
null
null
http://arxiv.org/pdf/2405.10618v1
2024-05-17T08:30:28Z
2024-05-17T08:30:28Z
Distributed Event-Based Learning via ADMM
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents. We can therefore guarantee convergence even if the local data-distributions of the agents are arbitrarily distinct. We analyze the convergence rate of the algorithm and derive accelerated convergence rates in a convex setting. We also characterize the effect of communication drops and demonstrate that our algorithm is robust to communication failures. The article concludes by presenting numerical results from a distributed LASSO problem, and distributed learning tasks on MNIST and CIFAR-10 datasets. The experiments underline communication savings of 50% or more due to the event-based communication strategy, show resilience towards heterogeneous data-distributions, and highlight that our approach outperforms common baselines such as FedAvg, FedProx, and FedADMM.
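A minimal sketch of the event-based trigger idea, under the assumption that an agent communicates only when its iterate has drifted enough from the last broadcast value; the norm and threshold are illustrative choices, not the paper's trigger rule.

```python
import numpy as np

def maybe_communicate(x, x_last_sent, threshold=1e-2):
    """Broadcast the local iterate only when it has moved enough since the
    last transmission; otherwise neighbors keep using the stale copy."""
    if np.linalg.norm(x - x_last_sent) > threshold:
        return True, x.copy()       # send, and remember what was sent
    return False, x_last_sent       # skip this round, save bandwidth
```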
[ "['Guner Dilsad Er' 'Sebastian Trimpe' 'Michael Muehlebach']" ]
null
null
2405.10621
null
null
http://arxiv.org/pdf/2405.10621v1
2024-05-17T08:33:43Z
2024-05-17T08:33:43Z
Historically Relevant Event Structuring for Temporal Knowledge Graph Reasoning
Temporal Knowledge Graph (TKG) reasoning focuses on predicting events through historical information within snapshots distributed on a timeline. Existing studies mainly concentrate on two perspectives of leveraging the history of TKGs, including capturing evolution of each recent snapshot or correlations among global historical facts. Despite the achieved significant accomplishments, these models still fall short of (1) investigating the influences of multi-granularity interactions across recent snapshots and (2) harnessing the expressive semantics of significant links accorded with queries throughout the entire history, especially events exerting a profound impact on the future. These inadequacies restrict representation ability to reflect historical dependencies and future trends thoroughly. To overcome these drawbacks, we propose an innovative TKG reasoning approach towards \textbf{His}torically \textbf{R}elevant \textbf{E}vents \textbf{S}tructuring ($\mathsf{HisRES}$). Concretely, $\mathsf{HisRES}$ comprises two distinctive modules excelling in structuring historically relevant events within TKGs, including a multi-granularity evolutionary encoder that captures structural and temporal dependencies of the most recent snapshots, and a global relevance encoder that concentrates on crucial correlations among events relevant to queries from the entire history. Furthermore, $\mathsf{HisRES}$ incorporates a self-gating mechanism for adaptively merging multi-granularity recent and historically relevant structuring representations. Extensive experiments on four event-based benchmarks demonstrate the state-of-the-art performance of $\mathsf{HisRES}$ and indicate the superiority and effectiveness of structuring historical relevance for TKG reasoning.
[ "['Jinchuan Zhang' 'Bei Hui' 'Chong Mu' 'Ming Sun' 'Ling Tian']" ]
null
null
2405.10624
null
null
http://arxiv.org/pdf/2405.10624v1
2024-05-17T08:39:05Z
2024-05-17T08:39:05Z
Sample-Efficient Constrained Reinforcement Learning with General Parameterization
We consider a constrained Markov Decision Problem (CMDP) where the goal of an agent is to maximize the expected discounted sum of rewards over an infinite horizon while ensuring that the expected discounted sum of costs exceeds a certain threshold. Building on the idea of momentum-based acceleration, we develop the Primal-Dual Accelerated Natural Policy Gradient (PD-ANPG) algorithm that guarantees an $\epsilon$ global optimality gap and $\epsilon$ constraint violation with $\mathcal{O}(\epsilon^{-3})$ sample complexity. This improves the state-of-the-art sample complexity in CMDP by a factor of $\mathcal{O}(\epsilon^{-1})$.
[ "['Washim Uddin Mondal' 'Vaneet Aggarwal']" ]
null
null
2405.10625
null
null
http://arxiv.org/pdf/2405.10625v1
2024-05-17T08:39:56Z
2024-05-17T08:39:56Z
Specialising and Analysing Instruction-Tuned and Byte-Level Language Models for Organic Reaction Prediction
Transformer-based encoder-decoder models have demonstrated impressive results in chemical reaction prediction tasks. However, these models typically rely on pretraining using tens of millions of unlabelled molecules, which can be time-consuming and GPU-intensive. One of the central questions we aim to answer in this work is: Can FlanT5 and ByT5, encoder-decoder models pretrained solely on language data, be effectively specialised for organic reaction prediction through task-specific fine-tuning? We conduct a systematic empirical study on several key issues of the process, including tokenisation, the impact of (SMILES-oriented) pretraining, fine-tuning sample efficiency, and decoding algorithms at inference. Our key findings indicate that although being pretrained only on language tasks, FlanT5 and ByT5 provide a solid foundation to fine-tune for reaction prediction, and thus become `chemistry domain compatible' in the process. This suggests that GPU-intensive and expensive pretraining on a large dataset of unlabelled molecules may be useful yet not essential to leverage the power of language models for chemistry. All our models achieve comparable Top-1 and Top-5 accuracy although some variation across different models does exist. Notably, tokenisation and vocabulary trimming slightly affect final performance but can speed up training and inference; the most efficient greedy decoding strategy is very competitive, while only marginal gains can be achieved from more sophisticated decoding algorithms. In summary, we evaluate FlanT5 and ByT5 across several dimensions and benchmark their impact on organic reaction prediction, which may guide more effective use of these state-of-the-art language models for chemistry-related tasks in the future.
[ "['Jiayun Pang' 'Ivan Vulić']" ]
null
null
2405.10633
null
null
http://arxiv.org/pdf/2405.10633v1
2024-05-17T08:50:00Z
2024-05-17T08:50:00Z
Harnessing Collective Structure Knowledge in Data Augmentation for Graph Neural Networks
Graph neural networks (GNNs) have achieved state-of-the-art performance in graph representation learning. Message passing neural networks, which learn representations through recursively aggregating information from each node and its neighbors, are among the most commonly-used GNNs. However, a wealth of structural information of individual nodes and full graphs is often ignored in such process, which restricts the expressive power of GNNs. Various graph data augmentation methods that enable the message passing with richer structure knowledge have been introduced as one main way to tackle this issue, but they are often focused on individual structure features and are difficult to scale up with more structure features. In this work we propose a novel approach, namely collective structure knowledge-augmented graph neural network (CoS-GNN), in which a new message passing method is introduced to allow GNNs to harness a diverse set of node- and graph-level structure features, together with original node features/attributes, in augmented graphs. In doing so, our approach largely improves the structural knowledge modeling of GNNs in both node and graph levels, resulting in substantially improved graph representations. This is justified by extensive empirical results where CoS-GNN outperforms state-of-the-art models in various graph-level learning tasks, including graph classification, anomaly detection, and out-of-distribution generalization.
[ "['Rongrong Ma' 'Guansong Pang' 'Ling Chen']" ]
null
null
2405.10642
null
null
http://arxiv.org/pdf/2405.10642v1
2024-05-17T09:08:37Z
2024-05-17T09:08:37Z
Hi-GMAE: Hierarchical Graph Masked Autoencoders
Graph Masked Autoencoders (GMAEs) have emerged as a notable self-supervised learning approach for graph-structured data. Existing GMAE models primarily focus on reconstructing node-level information, categorizing them as single-scale GMAEs. This methodology, while effective in certain contexts, tends to overlook the complex hierarchical structures inherent in many real-world graphs. For instance, molecular graphs exhibit a clear hierarchical organization in the form of the atoms-functional groups-molecules structure. Hence, the inability of single-scale GMAE models to incorporate these hierarchical relationships often leads to their inadequate capture of crucial high-level graph information, resulting in a noticeable decline in performance. To address this limitation, we propose Hierarchical Graph Masked AutoEncoders (Hi-GMAE), a novel multi-scale GMAE framework designed to handle the hierarchical structures within graphs. First, Hi-GMAE constructs a multi-scale graph hierarchy through graph pooling, enabling the exploration of graph structures across different granularity levels. To ensure masking uniformity of subgraphs across these scales, we propose a novel coarse-to-fine strategy that initiates masking at the coarsest scale and progressively back-projects the mask to the finer scales. Furthermore, we integrate a gradual recovery strategy with the masking process to mitigate the learning challenges posed by completely masked subgraphs. Diverging from the standard graph neural network (GNN) used in GMAE models, Hi-GMAE modifies its encoder and decoder into hierarchical structures. This entails using GNN at the finer scales for detailed local graph analysis and employing a graph transformer at coarser scales to capture global information. Our experiments on 15 graph datasets consistently demonstrate that Hi-GMAE outperforms 17 state-of-the-art self-supervised competitors.
[ "['Chuang Liu' 'Zelin Yao' 'Yibing Zhan' 'Xueqi Ma' 'Dapeng Tao' 'Jia Wu'\n 'Wenbin Hu' 'Shirui Pan' 'Bo Du']" ]
null
null
2405.10647
null
null
http://arxiv.org/pdf/2405.10647v1
2024-05-17T09:20:21Z
2024-05-17T09:20:21Z
Cyclical Weight Consolidation: Towards Solving Catastrophic Forgetting in Serial Federated Learning
Federated Learning (FL) has gained attention for addressing data scarcity and privacy concerns. While parallel FL algorithms like FedAvg exhibit remarkable performance, they face challenges in scenarios with diverse network speeds and concerns about centralized control, especially in multi-institutional collaborations like the medical domain. Serial FL presents an alternative solution, circumventing these challenges by transferring model updates serially between devices in a cyclical manner. Nevertheless, it is deemed inferior to parallel FL in that (1) its performance shows undesirable fluctuations, and (2) it converges to a lower plateau, particularly when dealing with non-IID data. The observed phenomenon is attributed to catastrophic forgetting due to knowledge loss from previous sites. In this paper, to overcome fluctuation and low efficiency in the iterative learning and forgetting process, we introduce cyclical weight consolidation (CWC), a straightforward yet potent approach specifically tailored for serial FL. CWC employs a consolidation matrix to regulate local optimization. This matrix tracks the significance of each parameter on the overall federation throughout the entire training trajectory, preventing abrupt changes in significant weights. During revisitation, to maintain adaptability, old memory undergoes decay to incorporate new information. Our comprehensive evaluations demonstrate that in various non-IID settings, CWC mitigates the fluctuation behavior of the original serial FL approach and enhances the converged performance consistently and significantly. The improved performance is either comparable to or better than that of the vanilla parallel FL approach.
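A hypothetical sketch of a consolidation-matrix penalty in the spirit of CWC, with EWC-style anchoring of important parameters and a decay step for revisitation; the function names and decay rule are assumptions, not the authors' exact formulation.

```python
import torch

def consolidation_loss(model, old_params, importance, task_loss, lam=1.0):
    """Penalize deviation from previously learned values, weighted by each
    parameter's tracked significance (the consolidation matrix)."""
    penalty = sum(
        (imp * (p - old) ** 2).sum()
        for p, old, imp in zip(model.parameters(), old_params, importance))
    return task_loss + lam * penalty

def decay_importance(importance, gamma=0.9):
    # Old memory decays on each revisit so new information can be absorbed.
    return [gamma * imp for imp in importance]
```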
[ "['Haoyue Song' 'Jiacheng Wang' 'Liansheng Wang']" ]
null
null
2405.10658
null
null
http://arxiv.org/pdf/2405.10658v1
2024-05-17T09:42:44Z
2024-05-17T09:42:44Z
Cost-Effective Fault Tolerance for CNNs Using Parameter Vulnerability Based Hardening and Pruning
Convolutional Neural Networks (CNNs) have become integral in safety-critical applications, thus raising concerns about their fault tolerance. Conventional hardware-dependent fault tolerance methods, such as Triple Modular Redundancy (TMR), are computationally expensive, imposing a remarkable overhead on CNNs. Whereas fault tolerance techniques can be applied either at the hardware level or at the model level, the latter provides more flexibility without sacrificing generality. This paper introduces a model-level hardening approach for CNNs by integrating error correction directly into the neural networks. The approach is hardware-agnostic and does not require any changes to the underlying accelerator device. Analyzing the vulnerability of parameters enables the duplication of selective filters/neurons so that their output channels are effectively corrected with an efficient and robust correction layer. The proposed method demonstrates fault resilience nearly equivalent to TMR-based correction but with significantly reduced overhead. Nevertheless, there exists an inherent overhead relative to the baseline CNNs. To tackle this issue, a cost-effective parameter vulnerability based pruning technique is proposed that outperforms the conventional pruning method, yielding smaller networks with a negligible accuracy loss. Remarkably, the hardened pruned CNNs perform up to 24% faster than the hardened un-pruned ones.
[ "['Mohammad Hasan Ahmadilivani' 'Seyedhamidreza Mousavi' 'Jaan Raik'\n 'Masoud Daneshtalab' 'Maksim Jenihhin']" ]
null
null
2405.10679
null
null
http://arxiv.org/pdf/2405.10679v1
2024-05-17T10:20:14Z
2024-05-17T10:20:14Z
Off-the-Shelf Neural Network Architectures for Forex Time Series Prediction come at a Cost
Our study focuses on comparing the performance and resource requirements of different Long Short-Term Memory (LSTM) neural network architectures and a specialized ANN architecture for forex market prediction. We analyze the execution time of the models as well as the resources consumed, such as memory and computational power. Our aim is to demonstrate that the specialized architecture not only achieves better results in forex market prediction but also executes using fewer resources and in a shorter time frame compared to LSTM architectures. This comparative analysis will provide significant insights into the suitability of these two types of architectures for time series prediction in the forex market environment.
[ "['Theodoros Zafeiriou' 'Dimitris Kalles']" ]
null
null
2405.10706
null
null
http://arxiv.org/pdf/2405.10706v1
2024-05-17T11:28:52Z
2024-05-17T11:28:52Z
Challenging the Human-in-the-loop in Algorithmic Decision-making
We discuss the role of humans in algorithmic decision-making (ADM) for socially relevant problems from a technical and philosophical perspective. In particular, we illustrate tensions arising from diverse expectations, values, and constraints by and on the humans involved. To this end, we assume that a strategic decision-maker (SDM) introduces ADM to optimize strategic and societal goals while the algorithms' recommended actions are overseen by a practical decision-maker (PDM) - a specific human-in-the-loop - who makes the final decisions. While the PDM is typically assumed to be a corrective, it can counteract the realization of the SDM's desired goals and societal values not least because of a misalignment of these values and unmet information needs of the PDM. This has significant implications for the distribution of power between the stakeholders in ADM, their constraints, and information needs. In particular, we emphasize the overseeing PDM's role as a potential political and ethical decision maker, who is expected to balance strategic, value-driven objectives and on-the-ground individual decisions and constraints. We demonstrate empirically, on a machine learning benchmark dataset, the significant impact an overseeing PDM's decisions can have even if the PDM is constrained to performing only a limited amount of actions differing from the algorithms' recommendations. To ensure that the SDM's intended values are realized, the PDM needs to be provided with appropriate information conveyed through tailored explanations and its role must be characterized clearly. Our findings emphasize the need for an in-depth discussion of the role and power of the PDM and challenge the often-taken view that just including a human-in-the-loop in ADM ensures the 'correct' and 'ethical' functioning of the system.
[ "['Sebastian Tschiatschek' 'Eugenia Stamboliev' 'Timoth ee Schmude'\n 'Mark Coeckelbergh' 'Laura Koesten']" ]
null
null
2405.10745
null
null
http://arxiv.org/pdf/2405.10745v1
2024-05-17T12:46:23Z
2024-05-17T12:46:23Z
Empowering Small-Scale Knowledge Graphs: A Strategy of Leveraging General-Purpose Knowledge Graphs for Enriched Embeddings
Knowledge-intensive tasks pose a significant challenge for Machine Learning (ML) techniques. Commonly adopted methods, such as Large Language Models (LLMs), often exhibit limitations when applied to such tasks. Nevertheless, there have been notable endeavours to mitigate these challenges, with a significant emphasis on augmenting LLMs through Knowledge Graphs (KGs). While KGs provide many advantages for representing knowledge, their development costs can deter extensive research and applications. Addressing this limitation, we introduce a framework for enriching embeddings of small-scale domain-specific Knowledge Graphs with well-established general-purpose KGs. Adopting our method, a modest domain-specific KG can benefit from a performance boost in downstream tasks when linked to a substantial general-purpose KG. Experimental evaluations demonstrate a notable enhancement, with up to a 44% increase observed in the Hits@10 metric. This relatively unexplored research direction can catalyze more frequent incorporation of KGs in knowledge-intensive tasks, resulting in more robust, reliable ML implementations that hallucinate less than prevalent LLM solutions. Keywords: knowledge graph, knowledge graph completion, entity alignment, representation learning, machine learning
[ "['Albert Sawczyn' 'Jakub Binkowski' 'Piotr Bielak' 'Tomasz Kajdanowicz']" ]
null
null
2405.10750
null
null
http://arxiv.org/pdf/2405.10750v1
2024-05-17T12:59:15Z
2024-05-17T12:59:15Z
Parameter Identification for Electrochemical Models of Lithium-Ion Batteries Using Bayesian Optimization
Efficient parameter identification of electrochemical models is crucial for accurate monitoring and control of lithium-ion cells. This process becomes challenging when applied to complex models that rely on a considerable number of interdependent parameters that affect the output response. Gradient-based and metaheuristic optimization techniques, although previously employed for this task, are limited by their lack of robustness, high computational costs, and susceptibility to local minima. In this study, Bayesian Optimization is used for tuning the dynamic parameters of an electrochemical equivalent circuit battery model (E-ECM) for a nickel-manganese-cobalt (NMC)-graphite cell. The performance of Bayesian Optimization is compared with baseline methods based on gradient-based and metaheuristic approaches. The robustness of the parameter optimization method is tested by performing verification using an experimental drive cycle. The results indicate that Bayesian Optimization outperforms Gradient Descent and PSO optimization techniques, achieving reductions in average testing loss of 28.8% and 5.8%, respectively. Moreover, Bayesian Optimization significantly reduces the variance in testing loss, by 95.8% and 72.7%, respectively.
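A generic Bayesian-optimization loop (Gaussian-process surrogate plus expected improvement) of the kind compared in this study, using scikit-learn and SciPy; the battery model is stubbed out and the unit-box search space is an illustrative assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def simulation_loss(theta):            # stand-in for the E-ECM fitting error
    return float(np.sum((theta - 0.3) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))     # initial parameter guesses in [0, 1]^2
y = np.array([simulation_loss(x) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(512, 2))
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu                 # improvement over the incumbent (minimization)
    ei = imp * norm.cdf(imp / (sd + 1e-9)) + sd * norm.pdf(imp / (sd + 1e-9))
    x_next = cand[np.argmax(ei)]       # query the point with highest expected improvement
    X = np.vstack([X, x_next])
    y = np.append(y, simulation_loss(x_next))

print("best parameters:", X[np.argmin(y)], "loss:", y.min())
```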
[ "['Jianzong Pi' 'Samuel Filgueira da Silva' 'Mehmet Fatih Ozkan'\n 'Abhishek Gupta' 'Marcello Canova']" ]
null
null
2405.10757
null
null
http://arxiv.org/abs/2405.10757v3
2024-07-12T02:21:54Z
2024-05-17T13:09:39Z
Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective
Graph Neural Networks (GNNs) have shown remarkable performance in various tasks. However, recent works reveal that GNNs are vulnerable to backdoor attacks. Generally, backdoor attack poisons the graph by attaching backdoor triggers and the target class label to a set of nodes in the training graph. A GNN trained on the poisoned graph will then be misled to predict test nodes attached with trigger to the target class. Despite their effectiveness, our empirical analysis shows that triggers generated by existing methods tend to be out-of-distribution (OOD), which significantly differ from the clean data. Hence, these injected triggers can be easily detected and pruned with widely used outlier detection methods in real-world applications. Therefore, in this paper, we study a novel problem of unnoticeable graph backdoor attacks with in-distribution (ID) triggers. To generate ID triggers, we introduce an OOD detector in conjunction with an adversarial learning strategy to generate the attributes of the triggers within distribution. To ensure a high attack success rate with ID triggers, we introduce novel modules designed to enhance trigger memorization by the victim model trained on poisoned graph. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed method in generating in distribution triggers that can by-pass various defense strategies while maintaining a high attack success rate.
[ "['Zhiwei Zhang' 'Minhua Lin' 'Enyan Dai' 'Suhang Wang']" ]
null
null
2405.10762
null
null
http://arxiv.org/pdf/2405.10762v2
2024-05-30T08:13:30Z
2024-05-17T13:18:46Z
Research on Credit Risk Early Warning Model of Commercial Banks Based on Neural Network Algorithm
In the realm of globalized financial markets, commercial banks are confronted with an escalating magnitude of credit risk, thereby imposing heightened requisites upon the security of bank assets and financial stability. This study harnesses advanced neural network techniques, notably the Backpropagation (BP) neural network, to pioneer a novel model for preempting credit risk in commercial banks. The discourse initially scrutinizes conventional financial risk preemptive models, such as ARMA, ARCH, and Logistic regression models, critically analyzing their real-world applications. Subsequently, the exposition elaborates on the construction process of the BP neural network model, encompassing network architecture design, activation function selection, parameter initialization, and objective function construction. Through comparative analysis, the superiority of neural network models in preempting credit risk in commercial banks is elucidated. The experimental segment selects specific bank data, validating the model's predictive accuracy and practicality. Research findings evince that this model efficaciously enhances the foresight and precision of credit risk management.
[ "['Yu Cheng' 'Qin Yang' 'Liyang Wang' 'Ao Xiang' 'Jingyu Zhang']" ]
null
null
2405.10779
null
null
http://arxiv.org/pdf/2405.10779v1
2024-05-17T13:40:59Z
2024-05-17T13:40:59Z
Baseline Results for Selected Nonlinear System Identification Benchmarks
Nonlinear system identification remains an important open challenge across research and academia. Large numbers of novel approaches are published each year, each presenting improvements or extensions to existing methods. It is natural, therefore, to consider how one might choose between these competing models. Benchmark datasets provide one clear way to approach this question. However, to make meaningful inference based on benchmark performance it is important to understand how well a new method performs comparatively to results available with well-established methods. This paper presents a set of ten baseline techniques and their relative performances on five popular benchmarks. The aim of this contribution is to stimulate thought and discussion regarding objective comparison of identification methodologies.
[ "['Max D. Champneys' 'Gerben I. Beintema' 'Roland Tóth' 'Maarten Schoukens'\n 'Maarten Schoukens' 'Timothy J. Rogers']" ]
null
null
2405.10780
null
null
http://arxiv.org/abs/2405.10780v2
2024-05-31T15:00:36Z
2024-05-13T21:37:50Z
Intelligent and Miniaturized Neural Interfaces: An Emerging Era in Neurotechnology
Integrating smart algorithms on neural devices presents significant opportunities for various brain disorders. In this paper, we review the latest advancements in the development of three categories of intelligent neural prostheses featuring embedded signal processing on the implantable or wearable device. These include: 1) Neural interfaces for closed-loop symptom tracking and responsive stimulation; 2) Neural interfaces for emerging network-related conditions, such as psychiatric disorders; and 3) Intelligent BMI SoCs for movement recovery following paralysis.
[ "['Mahsa Shoaran' 'Uisub Shin' 'MohammadAli Shaeri']" ]
null
null
2405.10799
null
null
http://arxiv.org/pdf/2405.10799v1
2024-05-17T14:10:24Z
2024-05-17T14:10:24Z
Training Compute Thresholds: Features and Functions in AI Governance
This paper examines the use of training compute thresholds as a tool for governing artificial intelligence (AI) systems. We argue that compute thresholds serve as a valuable trigger for further evaluation of AI models, rather than being the sole determinant of regulation. Key advantages of compute thresholds include their correlation with model capabilities and risks, quantifiability, ease of measurement, robustness to circumvention, knowability before model development and deployment, potential for external verification, and targeted scope. Compute thresholds provide a practical starting point for identifying potentially high-risk models and can be used as an initial filter in AI governance frameworks alongside other sector-specific regulations and broader governance measures.
[ "['Lennart Heim']" ]
null
null
2405.10800
null
null
http://arxiv.org/pdf/2405.10800v1
2024-05-17T14:10:34Z
2024-05-17T14:10:34Z
Heterogeneity-Informed Meta-Parameter Learning for Spatiotemporal Time Series Forecasting
Spatiotemporal time series forecasting plays a key role in a wide range of real-world applications. While significant progress has been made in this area, fully capturing and leveraging spatiotemporal heterogeneity remains a fundamental challenge. Therefore, we propose a novel Heterogeneity-Informed Meta-Parameter Learning scheme. Specifically, our approach implicitly captures spatiotemporal heterogeneity through learning spatial and temporal embeddings, which can be viewed as a clustering process. Then, a novel spatiotemporal meta-parameter learning paradigm is proposed to learn spatiotemporal-specific parameters from meta-parameter pools, which is informed by the captured heterogeneity. Based on these ideas, we develop a Heterogeneity-Informed Spatiotemporal Meta-Network (HimNet) for spatiotemporal time series forecasting. Extensive experiments on five widely-used benchmarks demonstrate our method achieves state-of-the-art performance while exhibiting superior interpretability. Our code is available at https://github.com/XDZhelheim/HimNet.
[ "['Zheng Dong' 'Renhe Jiang' 'Haotian Gao' 'Hangchen Liu' 'Jinliang Deng'\n 'Qingsong Wen' 'Xuan Song']" ]
null
null
2405.10802
null
null
http://arxiv.org/pdf/2405.10802v1
2024-05-17T14:16:40Z
2024-05-17T14:16:40Z
Reduced storage direct tensor ring decomposition for convolutional neural networks compression
Convolutional neural networks (CNNs) are among the most widely used machine learning models for computer vision tasks, such as image classification. To improve the efficiency of CNNs, many CNN compression approaches have been developed. Low-rank methods approximate the original convolutional kernel with a sequence of smaller convolutional kernels, which leads to reduced storage and time complexities. In this study, we propose a novel low-rank CNN compression method that is based on reduced storage direct tensor ring decomposition (RSDTR). The proposed method offers a higher circular mode permutation flexibility, and it is characterized by large parameter and FLOPS compression rates, while preserving a good classification accuracy of the compressed network. The experiments, performed on the CIFAR-10 and ImageNet datasets, clearly demonstrate the efficiency of RSDTR in comparison to other state-of-the-art CNN compression approaches.
[ "['Mateusz Gabor' 'Rafał Zdunek']" ]
null
null
2405.10814
null
null
http://arxiv.org/pdf/2405.10814v1
2024-05-17T14:35:09Z
2024-05-17T14:35:09Z
Data-Driven Symbol Detection for Intersymbol Interference Channels with Bursty Impulsive Noise
We developed machine learning approaches for data-driven trellis-based soft symbol detection in coded transmission over intersymbol interference (ISI) channels in the presence of bursty impulsive noise (IN), for example encountered in wireless digital broadcasting systems and vehicular communications. This enabled us to obtain optimized detectors based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm while circumventing the use of full channel state information (CSI) for computing likelihoods and trellis state transition probabilities. First, we extended the application of the neural network (NN)-aided BCJR, recently proposed for ISI channels with additive white Gaussian noise (AWGN). Although suitable for estimating likelihoods via labeling of transmission sequences, the BCJR-NN method does not provide a framework for learning the trellis state transitions. In addition to detection over the joint ISI and IN states, we also focused on another scenario where trellis transitions are not trivial: detection for the ISI channel with AWGN with inaccurate knowledge of the channel memory at the receiver. Without access to the accurate state transition matrix, the BCJR-NN performance significantly degrades in both settings. To this end, we devised an alternative approach for data-driven BCJR detection based on the unsupervised learning of a hidden Markov model (HMM). The BCJR-HMM allowed us to optimize both the likelihood function and the state transition matrix without labeling. Moreover, we demonstrated the viability of a hybrid NN and HMM BCJR detection where the NN is used for learning the likelihoods, while the state transitions are optimized via the HMM. While reducing the required prior channel knowledge, the examined data-driven detectors with learned trellis state transitions achieve bit error rates close to the optimal full CSI-based BCJR, significantly outperforming detection with inaccurate CSI.
[ "['Boris Karanov' 'Chin-Hung Chen' 'Yan Wu' 'Alex Young' 'Wim van Houtum']" ]
null
null
2405.10815
null
null
http://arxiv.org/pdf/2405.10815v1
2024-05-17T14:35:50Z
2024-05-17T14:35:50Z
A Functional Model Method for Nonconvex Nonsmooth Conditional Stochastic Optimization
We consider stochastic optimization problems involving an expected value of a nonlinear function of a base random vector and a conditional expectation of another function depending on the base random vector, a dependent random vector, and the decision variables. We call such problems conditional stochastic optimization problems. They arise in many applications, such as uplift modeling, reinforcement learning, and contextual optimization. We propose a specialized single time-scale stochastic method for nonconvex constrained conditional stochastic optimization problems with a Lipschitz smooth outer function and a generalized differentiable inner function. In the method, we approximate the inner conditional expectation with a rich parametric model whose mean squared error satisfies a stochastic version of a Łojasiewicz condition. The model is used by an inner learning algorithm. The main feature of our approach is that unbiased stochastic estimates of the directions used by the method can be generated with one observation from the joint distribution per iteration, which makes it applicable to real-time learning. The directions, however, are not gradients or subgradients of any overall objective function. We prove the convergence of the method with probability one, using the method of differential inclusions and a specially designed Lyapunov function, involving a stochastic generalization of the Bregman distance. Finally, a numerical illustration demonstrates the viability of our approach.
[ "['Andrzej Ruszczyński' 'Shangzhe Yang']" ]
null
null
2405.10817
null
null
http://arxiv.org/pdf/2405.10817v1
2024-05-17T14:37:39Z
2024-05-17T14:37:39Z
Restless Linear Bandits
A more general formulation of the linear bandit problem is considered to allow for dependencies over time. Specifically, it is assumed that there exists an unknown $\mathbb{R}^d$-valued stationary $\varphi$-mixing sequence of parameters $(\theta_t,~t \in \mathbb{N})$ which gives rise to pay-offs. This instance of the problem can be viewed as a generalization of both the classical linear bandits with i.i.d. noise and the finite-armed restless bandits. In light of the well-known computational hardness of optimal policies for restless bandits, an approximation is proposed whose error is shown to be controlled by the $\varphi$-dependence between consecutive $\theta_t$. An optimistic algorithm, called LinMix-UCB, is proposed for the case where $\theta_t$ has an exponential mixing rate. The proposed algorithm is shown to incur a sub-linear regret of $\mathcal{O}\left(\sqrt{d\,n\,\mathrm{polylog}(n)}\right)$ with respect to an oracle that always plays a multiple of $\mathbb{E}\theta_t$. The main challenge in this setting is to ensure that the exploration-exploitation strategy is robust against long-range dependencies. The proposed method relies on Berbee's coupling lemma to carefully select near-independent samples and construct confidence ellipsoids around empirical estimates of $\mathbb{E}\theta_t$.
[ "['Azadeh Khaleghi']" ]
null
null
2405.10822
null
null
http://arxiv.org/pdf/2405.10822v1
2024-05-17T14:43:30Z
2024-05-17T14:43:30Z
Generative modeling through internal high-dimensional chaotic activity
Generative modeling aims at producing new datapoints whose statistical properties resemble the ones in a training dataset. In recent years, there has been a burst of machine learning techniques and settings that can achieve this goal with remarkable performance. In most of these settings, one uses the training dataset in conjunction with noise, which is added as a source of statistical variability and is essential for the generative task. Here, we explore the idea of using internal chaotic dynamics in high-dimensional chaotic systems as a way to generate new datapoints from a training dataset. We show that simple learning rules can achieve this goal within a set of vanilla architectures and characterize the quality of the generated datapoints through standard accuracy measures.
[ "['Samantha J. Fournier' 'Pierfrancesco Urbani']" ]
null
null
2405.10825
null
null
http://arxiv.org/pdf/2405.10825v1
2024-05-17T14:46:13Z
2024-05-17T14:46:13Z
Large Language Model (LLM) for Telecommunications: A Comprehensive Survey on Principles, Key Techniques, and Opportunities
Large language models (LLMs) have received considerable attention recently due to their outstanding comprehension and reasoning capabilities, leading to great progress in many fields. The advancement of LLM techniques also offers promising opportunities to automate many tasks in the telecommunication (telecom) field. After pre-training and fine-tuning, LLMs can perform diverse downstream tasks based on human instructions, paving the way to artificial general intelligence (AGI)-enabled 6G. Given the great potential of LLM technologies, this work aims to provide a comprehensive overview of LLM-enabled telecom networks. In particular, we first present LLM fundamentals, including model architecture, pre-training, fine-tuning, inference and utilization, model evaluation, and telecom deployment. Then, we introduce LLM-enabled key techniques and telecom applications in terms of generation, classification, optimization, and prediction problems. Specifically, the LLM-enabled generation applications include telecom domain knowledge, code, and network configuration generation. After that, the LLM-based classification applications involve network security, text, image, and traffic classification problems. Moreover, multiple LLM-enabled optimization techniques are introduced, such as automated reward function design for reinforcement learning and verbal reinforcement learning. Furthermore, for LLM-aided prediction problems, we discuss time-series prediction models and multi-modality prediction problems for telecom. Finally, we highlight the challenges and identify the future directions of LLM-enabled telecom networks.
[ "['Hao Zhou' 'Chengming Hu' 'Ye Yuan' 'Yufei Cui' 'Yili Jin' 'Can Chen'\n 'Haolun Wu' 'Dun Yuan' 'Li Jiang' 'Di Wu' 'Xue Liu' 'Charlie Zhang'\n 'Xianbin Wang' 'Jiangchuan Liu']" ]
null
null
2405.10828
null
null
http://arxiv.org/pdf/2405.10828v1
2024-05-17T14:48:37Z
2024-05-17T14:48:37Z
Analysis of Impulsive Interference in Digital Audio Broadcasting Systems in Electric Vehicles
Recently, new types of interference in electric vehicles (EVs), such as converter switching and/or battery chargers, have been found to degrade the performance of wireless digital transmission systems. Measurements show that such interference is characterized by impulsive behavior and varies widely in time. This paper uses recorded data from our EV testbed to analyze the impulsive interference in the digital audio broadcasting band. Moreover, we use our analysis to obtain a corresponding interference model. In particular, we studied the temporal characteristics of the interference and confirmed that its amplitude indeed exhibits an impulsive behavior. Our results show that impulsive events span successive received signal samples and thus indicate a bursty nature. To this end, we performed a data-driven modification of a well-established model for bursty impulsive interference, the Markov-Middleton model, to produce synthetic noise realizations. We investigate the optimal symbol detector design based on the proposed model and show significant performance gains compared to the conventional detector based on the additive white Gaussian noise assumption.
[ "['Chin-Hung Chen' 'Wen-Hung Huang' 'Boris Karanov' 'Alex Young' 'Yan Wu'\n 'Wim van Houtum']" ]
null
null
2405.10852
null
null
http://arxiv.org/pdf/2405.10852v1
2024-05-17T15:27:35Z
2024-05-17T15:27:35Z
KernelSHAP-IQ: Weighted Least-Square Optimization for Shapley Interactions
The Shapley value (SV) is a prevalent approach of allocating credit to machine learning (ML) entities to understand black box ML models. Enriching such interpretations with higher-order interactions is inevitable for complex systems, where the Shapley Interaction Index (SII) is a direct axiomatic extension of the SV. While it is well-known that the SV yields an optimal approximation of any game via a weighted least square (WLS) objective, an extension of this result to SII has been a long-standing open problem, which even led to the proposal of an alternative index. In this work, we characterize higher-order SII as a solution to a WLS problem, which constructs an optimal approximation via SII and $k$-Shapley values ($k$-SII). We prove this representation for the SV and pairwise SII and give empirically validated conjectures for higher orders. As a result, we propose KernelSHAP-IQ, a direct extension of KernelSHAP for SII, and demonstrate state-of-the-art performance for feature interactions.
[ "['Fabian Fumagalli' 'Maximilian Muschalik' 'Patrick Kolpaczki'\n 'Eyke Hüllermeier' 'Barbara Hammer']" ]
null
null
2405.10853
null
null
http://arxiv.org/pdf/2405.10853v1
2024-05-17T15:27:52Z
2024-05-17T15:27:52Z
The Future of Large Language Model Pre-training is Federated
Generative pre-trained large language models (LLMs) have demonstrated impressive performance over a wide range of tasks, thanks to the unprecedented amount of data they have been trained on. As established scaling laws indicate, LLMs' future performance improvement depends on the amount of computing and data sources we can leverage for pre-training. Federated learning (FL) has the potential to unleash the majority of the planet's data and computational resources, which are underutilized by the data-center-focused training methodology of current LLM practice. Our work presents a robust, flexible, reproducible FL approach that enables large-scale collaboration across institutions to train LLMs. This would mobilize more computational and data resources while matching or potentially exceeding centralized performance. We further show that the effectiveness of federated training scales with model size and present our approach for training a billion-scale federated LLM using limited resources. This will help data-rich actors to become the protagonists of LLM pre-training instead of leaving the stage to compute-rich actors alone.
[ "['Lorenzo Sani' 'Alex Iacob' 'Zeyu Cao' 'Bill Marino' 'Yan Gao'\n 'Tomas Paulik' 'Wanru Zhao' 'William F. Shen' 'Preslav Aleksandrov'\n 'Xinchi Qiu' 'Nicholas D. Lane']" ]
null
null
2405.10864
null
null
http://arxiv.org/pdf/2405.10864v1
2024-05-17T15:50:53Z
2024-05-17T15:50:53Z
Improving face generation quality and prompt following with synthetic captions
Recent advancements in text-to-image generation using diffusion models have significantly improved the quality of generated images and expanded the ability to depict a wide range of objects. However, ensuring that these models adhere closely to the text prompts remains a considerable challenge. This issue is particularly pronounced when trying to generate photorealistic images of humans. Without significant prompt engineering efforts, models often produce unrealistic images and typically fail to incorporate the full extent of the prompt information. This limitation can be largely attributed to the nature of captions accompanying the images used in training large-scale diffusion models, which typically prioritize contextual information over details related to the person's appearance. In this paper we address this issue by introducing a training-free pipeline designed to generate accurate appearance descriptions from images of people. We apply this method to create approximately 250,000 captions for publicly available face datasets. We then use these synthetic captions to fine-tune a text-to-image diffusion model. Our results demonstrate that this approach significantly improves the model's ability to generate high-quality, realistic human faces and enhances adherence to the given prompts, compared to the baseline model. We share our synthetic captions, pretrained checkpoints, and training code.
[ "['Michail Tarasiou' 'Stylianos Moschoglou' 'Jiankang Deng'\n 'Stefanos Zafeiriou']" ]
null
null
2405.10877
null
null
http://arxiv.org/pdf/2405.10877v1
2024-05-17T16:09:51Z
2024-05-17T16:09:51Z
WEITS: A Wavelet-enhanced residual framework for interpretable time series forecasting
Time series (TS) forecasting has become an extremely popular problem in recent years, with ubiquitous applications in both scientific and business fields. Various approaches have been introduced for time series analysis, including both statistical approaches and deep neural networks. Although neural network approaches have illustrated a stronger representation ability than statistical methods, they struggle to provide sufficient interpretability and can be too complicated to optimize. In this paper, we present WEITS, a frequency-aware deep learning framework that is highly interpretable and computationally efficient. Through multi-level wavelet decomposition, WEITS infuses frequency analysis into a deep learning framework. Combined with a forward-backward residual architecture, it enjoys both high representation capability and statistical interpretability. Extensive experiments on real-world datasets have demonstrated competitive performance of our model, along with its additional advantage of high computation efficiency. Furthermore, WEITS provides a general framework that can seamlessly integrate with state-of-the-art approaches for time series forecasting.
[ "['Ziyou Guo' 'Yan Sun' 'Tieru Wu']" ]
null
null
2405.10897
null
null
http://arxiv.org/pdf/2405.10897v1
2024-05-17T16:35:20Z
2024-05-17T16:35:20Z
Efficient Line Search Method Based on Regression and Uncertainty Quantification
Unconstrained optimization problems are typically solved using iterative methods, which often depend on line search techniques to determine optimal step lengths in each iteration. Traditional line search methods, aimed at determining optimal step lengths, often discard valuable data from the search process and focus on refining step length intervals. We propose a more efficient method using Bayesian optimization that utilizes all available data points, i.e., function values and gradients, to guide the search towards a potential global minimum. This new approach more effectively explores the search space, leading to better solution quality. It is also easy to implement and integrate into existing frameworks. Tested on the challenging CUTEst test set, it demonstrates superior performance compared to existing state-of-the-art methods, solving more problems to optimality with equivalent resource usage.
[ "['Sören Laue' 'Tomislav Prusina']" ]
null
null
2405.10918
null
null
http://arxiv.org/pdf/2405.10918v1
2024-05-17T17:09:45Z
2024-05-17T17:09:45Z
GenToC: Leveraging Partially-Labeled Data for Product Attribute-Value Identification
In the e-commerce domain, the accurate extraction of attribute-value pairs from product listings (e.g., Brand: Apple) is crucial for enhancing search and recommendation systems. The automation of this extraction process is challenging due to the vast diversity of product categories and their respective attributes, compounded by the lack of extensive, accurately annotated training datasets and the demand for low latency to meet the real-time needs of e-commerce platforms. To address these challenges, we introduce GenToC, a novel two-stage model for extracting attribute-value pairs from product titles. GenToC is designed to train with partially-labeled data, leveraging incomplete attribute-value pairs and obviating the need for a fully annotated dataset. Moreover, we introduce a bootstrapping method that enables GenToC to progressively refine and expand its training dataset. This enhancement substantially improves the quality of data available for training other neural network models that are typically faster but are inherently less capable than GenToC in terms of their capacity to handle partially-labeled data. By supplying an enriched dataset for training, GenToC significantly advances the performance of these alternative models, making them more suitable for real-time deployment. Our results highlight the unique capability of GenToC to learn from a limited set of labeled data and to contribute to the training of more efficient models, marking a significant leap forward in the automated extraction of attribute-value pairs from product titles. GenToC has been successfully integrated into India's largest B2B e-commerce platform, IndiaMART.com, achieving a significant increase of 21.1% in recall over the existing deployed system while maintaining a high precision of 89.5% in this challenging task.
[ "['D. Subhalingam' 'Keshav Kolluru' 'Mausam' 'Saurabh Singal']" ]
null
null
2405.10924
null
null
http://arxiv.org/pdf/2405.10924v2
2024-06-02T06:54:51Z
2024-05-17T17:23:36Z
Boosting Few-Pixel Robustness Verification via Covering Verification Designs
Proving local robustness is crucial to increase the reliability of neural networks. While many verifiers prove robustness in $L_\infty$ $\epsilon$-balls, very little work deals with robustness verification in $L_0$ $\epsilon$-balls, capturing robustness to few pixel attacks. This verification introduces a combinatorial challenge, because the space of pixels to perturb is discrete and of exponential size. A previous work relies on covering designs to identify sets for defining $L_\infty$ neighborhoods, which if proven robust imply that the $L_0$ $\epsilon$-ball is robust. However, the number of neighborhoods to verify remains very high, leading to a high analysis time. We propose covering verification designs, a combinatorial design that tailors effective but analysis-incompatible coverings to $L_0$ robustness verification. The challenge is that computing a covering verification design introduces a high time and memory overhead, which is intensified in our setting, where multiple candidate coverings are required to identify how to reduce the overall analysis time. We introduce CoVerD, an $L_0$ robustness verifier that selects between different candidate coverings without constructing them, but by predicting their block size distribution. This prediction relies on a theorem providing closed-form expressions for the mean and variance of this distribution. CoVerD constructs the chosen covering verification design on-the-fly, while keeping the memory consumption minimal and enabling the analysis to be parallelized. The experimental results show that CoVerD reduces the verification time on average by up to 5.1x compared to prior work and that it scales to larger $L_0$ $\epsilon$-balls.
[ "['Yuval Shapira' 'Naor Wiesel' 'Shahar Shabelman' 'Dana Drachsler-Cohen']" ]
null
null
2405.10925
null
null
http://arxiv.org/pdf/2405.10925v1
2024-05-17T17:24:52Z
2024-05-17T17:24:52Z
High-dimensional multiple imputation (HDMI) for partially observed confounders including natural language processing-derived auxiliary covariates
Multiple imputation (MI) models can be improved by including auxiliary covariates (AC), but their performance in high-dimensional data is not well understood. We aimed to develop and compare high-dimensional MI (HDMI) approaches using structured and natural language processing (NLP)-derived AC in studies with partially observed confounders. We conducted a plasmode simulation study using data from opioid vs. non-steroidal anti-inflammatory drug (NSAID) initiators (X) with observed serum creatinine labs (Z2) and time-to-acute kidney injury as the outcome. We simulated 100 cohorts with a null treatment effect, including X, Z2, atrial fibrillation (U), and 13 other investigator-derived confounders (Z1) in the outcome generation. We then imposed missingness (MZ2) on 50% of Z2 measurements as a function of Z2 and U and created different HDMI candidate AC using structured and NLP-derived features. We mimicked scenarios where U was unobserved by omitting it from all AC candidate sets. Using LASSO, we data-adaptively selected HDMI covariates associated with Z2 and MZ2 for MI, and with U to include in propensity score models. The treatment effect was estimated following propensity score matching in MI datasets and we benchmarked HDMI approaches against a baseline imputation and complete case analysis with Z1 only. HDMI using claims data showed the lowest bias (0.072). Combining claims and sentence embeddings led to an improvement in efficiency, displaying the lowest root-mean-squared error (0.173) and a coverage of 94%. NLP-derived AC alone did not perform better than baseline MI. HDMI approaches may decrease bias in studies with partially observed confounders where missingness depends on unobserved factors.
[ "['Janick Weberpals' 'Pamela A. Shaw' 'Kueiyu Joshua Lin' 'Richard Wyss'\n 'Joseph M Plasek' 'Li Zhou' 'Kerry Ngan' 'Thomas DeRamus'\n 'Sudha R. Raman' 'Bradley G. Hammill' 'Hana Lee' 'Sengwee Toh'\n 'John G. Connolly' 'Kimberly J. Dandreo' 'Fang Tian' 'Wei Liu' 'Jie Li'\n 'José J. Hernández-Muñoz' 'Sebastian Schneeweiss' 'Rishi J. Desai']" ]
null
null
2405.10927
null
null
http://arxiv.org/pdf/2405.10927v2
2024-05-20T16:47:34Z
2024-05-17T17:26:33Z
Using Degeneracy in the Loss Landscape for Mechanistic Interpretability
Mechanistic Interpretability aims to reverse engineer the algorithms implemented by neural networks by studying their weights and activations. An obstacle to reverse engineering neural networks is that many of the parameters inside a network are not involved in the computation being implemented by the network. These degenerate parameters may obfuscate internal structure. Singular learning theory teaches us that neural network parameterizations are biased towards being more degenerate, and parameterizations with more degeneracy are likely to generalize further. We identify 3 ways that network parameters can be degenerate: linear dependence between activations in a layer; linear dependence between gradients passed back to a layer; ReLUs which fire on the same subset of datapoints. We also present a heuristic argument that modular networks are likely to be more degenerate, and we develop a metric for identifying modules in a network that is based on this argument. We propose that if we can represent a neural network in a way that is invariant to reparameterizations that exploit the degeneracies, then this representation is likely to be more interpretable, and we provide some evidence that such a representation is likely to have sparser interactions. We introduce the Interaction Basis, a tractable technique to obtain a representation that is invariant to degeneracies from linear dependence of activations or Jacobians.
[ "['Lucius Bushnaq' 'Jake Mendel' 'Stefan Heimersheim' 'Dan Braun'\n 'Nicholas Goldowsky-Dill' 'Kaarel Hänni' 'Cindy Wu' 'Marius Hobbhahn']" ]
null
null
2405.10928
null
null
http://arxiv.org/pdf/2405.10928v2
2024-05-20T16:34:37Z
2024-05-17T17:27:19Z
The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks
Mechanistic interpretability aims to understand the behavior of neural networks by reverse-engineering their internal computations. However, current methods struggle to find clear interpretations of neural network activations because a decomposition of activations into computational features is missing. Individual neurons or model components do not cleanly correspond to distinct features or functions. We present a novel interpretability method that aims to overcome this limitation by transforming the activations of the network into a new basis - the Local Interaction Basis (LIB). LIB aims to identify computational features by removing irrelevant activations and interactions. Our method drops irrelevant activation directions and aligns the basis with the singular vectors of the Jacobian matrix between adjacent layers. It also scales features based on their importance for downstream computation, producing an interaction graph that shows all computationally-relevant features and interactions in a model. We evaluate the effectiveness of LIB on modular addition and CIFAR-10 models, finding that it identifies more computationally-relevant features that interact more sparsely, compared to principal component analysis. However, LIB does not yield substantial improvements in interpretability or interaction sparsity when applied to language models. We conclude that LIB is a promising theory-driven approach for analyzing neural networks, but in its current form is not applicable to large language models.
[ "['Lucius Bushnaq' 'Stefan Heimersheim' 'Nicholas Goldowsky-Dill'\n 'Dan Braun' 'Jake Mendel' 'Kaarel Hänni' 'Avery Griffin' 'Jörn Stöhler'\n 'Magdalena Wache' 'Marius Hobbhahn']" ]
null
null
2405.10930
null
null
http://arxiv.org/pdf/2405.10930v3
2024-06-28T03:51:23Z
2024-05-17T17:31:02Z
Submodular Information Selection for Hypothesis Testing with Misclassification Penalties
We consider the problem of selecting an optimal subset of information sources for a hypothesis testing/classification task where the goal is to identify the true state of the world from a finite set of hypotheses, based on finite observation samples from the sources. In order to characterize the learning performance, we propose a misclassification penalty framework, which enables nonuniform treatment of different misclassification errors. In a centralized Bayesian learning setting, we study two variants of the subset selection problem: (i) selecting a minimum cost information set to ensure that the maximum penalty of misclassifying the true hypothesis is below a desired bound and (ii) selecting an optimal information set under a limited budget to minimize the maximum penalty of misclassifying the true hypothesis. Under certain assumptions, we prove that the objective (or constraints) of these combinatorial optimization problems are weak (or approximate) submodular, and establish high-probability performance guarantees for greedy algorithms. Further, we propose an alternate metric for information set selection which is based on the total penalty of misclassification. We prove that this metric is submodular and establish near-optimal guarantees for the greedy algorithms for both the information set selection problems. Finally, we present numerical simulations to validate our theoretical results over several randomly generated instances.
[ "['Jayanth Bhargav' 'Mahsa Ghasemi' 'Shreyas Sundaram']" ]
null
null
2405.10933
null
null
http://arxiv.org/pdf/2405.10933v1
2024-05-17T17:36:44Z
2024-05-17T17:36:44Z
Learning low-degree quantum objects
We consider the problem of learning low-degree quantum objects up to $\varepsilon$-error in $\ell_2$-distance. We show the following results: $(i)$ unknown $n$-qubit degree-$d$ (in the Pauli basis) quantum channels and unitaries can be learned using $O(1/\varepsilon^d)$ queries (independent of $n$), $(ii)$ polynomials $p:\{-1,1\}^n\rightarrow [-1,1]$ arising from $d$-query quantum algorithms can be classically learned from $O((1/\varepsilon)^d\cdot \log n)$ many random examples $(x,p(x))$ (which implies learnability even for $d=O(\log n)$), and $(iii)$ degree-$d$ polynomials $p:\{-1,1\}^n\to [-1,1]$ can be learned through $O(1/\varepsilon^d)$ queries to a quantum unitary $U_p$ that block-encodes $p$. Our main technical contributions are new Bohnenblust-Hille inequalities for quantum channels and completely bounded polynomials.
[ "['Srinivasan Arunachalam' 'Arkopal Dutt' 'Francisco Escudero Gutiérrez'\n 'Carlos Palazuelos']" ]
null
null
2405.10938
null
null
http://arxiv.org/pdf/2405.10938v2
2024-07-02T14:16:42Z
2024-05-17T17:49:44Z
Observational Scaling Laws and the Predictability of Language Model Performance
Understanding how language model performance varies with scale is critical to benchmark and algorithm development. Scaling laws are one approach to building this understanding, but the requirement of training models across many different scales has limited their use. We propose an alternative, observational approach that bypasses model training and instead builds scaling laws from ~80 publicly available models. Building a single scaling law from multiple model families is challenging due to large variations in their training compute efficiencies and capabilities. However, we show that these variations are consistent with a simple, generalized scaling law where language model performance is a function of a low-dimensional capability space, and model families only vary in their efficiency in converting training compute to capabilities. Using this approach, we show the surprising predictability of complex scaling phenomena: we show that several emergent phenomena follow a smooth, sigmoidal behavior and are predictable from small models; we show that the agent performance of models such as GPT-4 can be precisely predicted from simpler non-agentic benchmarks; and we show how to predict the impact of post-training interventions like Chain-of-Thought and Self-Consistency as language model capabilities continue to improve.
[ "['Yangjun Ruan' 'Chris J. Maddison' 'Tatsunori Hashimoto']" ]
null
null
2405.10939
null
null
http://arxiv.org/pdf/2405.10939v1
2024-05-17T17:49:45Z
2024-05-17T17:49:45Z
DINO as a von Mises-Fisher mixture model
Self-distillation methods using Siamese networks are popular for self-supervised pre-training. DINO is one such method based on a cross-entropy loss between $K$-dimensional probability vectors, obtained by applying a softmax function to the dot product between representations and learnt prototypes. Given the fact that the learned representations are $L^2$-normalized, we show that DINO and its derivatives, such as iBOT, can be interpreted as a mixture model of von Mises-Fisher components. With this interpretation, DINO assumes equal precision for all components when the prototypes are also $L^2$-normalized. Using this insight we propose DINO-vMF, that adds appropriate normalization constants when computing the cluster assignment probabilities. Unlike DINO, DINO-vMF is stable also for the larger ViT-Base model with unnormalized prototypes. We show that the added flexibility of the mixture model is beneficial in terms of better image representations. The DINO-vMF pre-trained model consistently performs better than DINO on a range of downstream tasks. We obtain similar improvements for iBOT-vMF vs iBOT and thereby show the relevance of our proposed modification also for other methods derived from DINO.
[ "['Hariprasath Govindarajan' 'Per Sidén' 'Jacob Roll' 'Fredrik Lindsten']" ]
null
null
2405.10944
null
null
http://arxiv.org/pdf/2405.10944v1
2024-05-17T17:58:52Z
2024-05-17T17:58:52Z
Probabilistic transfer learning methodology to expedite high fidelity simulation of reactive flows
Reduced order models based on the transport of a lower dimensional manifold representation of the thermochemical state, such as Principal Component (PC) transport and Machine Learning (ML) techniques, have been developed to reduce the computational cost associated with the Direct Numerical Simulations (DNS) of reactive flows. Both PC transport and ML normally require an abundance of data to exhibit sufficient predictive accuracy, which might not be available due to the prohibitive cost of DNS or experimental data acquisition. To alleviate such difficulties, similar data from an existing dataset or domain (source domain) can be used to train ML models, potentially resulting in adequate predictions in the domain of interest (target domain). This study presents a novel probabilistic transfer learning (TL) framework to enhance the trust in ML models in correctly predicting the thermochemical state in a lower dimensional manifold and a sparse data setting. The framework uses Bayesian neural networks and autoencoders to reduce the dimensionality of the state space and to diffuse knowledge from the source to the target domain. The new framework is applied to one-dimensional freely-propagating flame solutions under different data sparsity scenarios. The results reveal that there is an optimal amount of knowledge to be transferred, which depends on the amount of data available in the target domain and the similarity between the domains. TL can reduce the reconstruction error by one order of magnitude for cases with large sparsity. The new framework required 10 times less data for the target domain to reproduce the same error as in the abundant data scenario. Furthermore, comparisons with a state-of-the-art deterministic TL strategy show that the probabilistic method can require four times less data to achieve the same reconstruction error.
[ "['Bruno S. Soriano' 'Ki Sung Jung' 'Tarek Echekki' 'Jacqueline H. Chen'\n 'Mohammad Khalil']" ]
null
null
2405.10951
null
null
http://arxiv.org/pdf/2405.10951v1
2024-03-25T08:41:01Z
2024-03-25T08:41:01Z
Block Selective Reprogramming for On-device Training of Vision Transformers
The ubiquity of vision transformers (ViTs) for various edge applications, including personalized learning, has created the demand for on-device fine-tuning. However, training with the limited memory and computation power of edge devices remains a significant challenge. In particular, the memory required for training is much higher than that needed for inference, primarily due to the need to store activations across all layers in order to compute the gradients needed for weight updates. Previous works have explored reducing this memory requirement via frozen-weight training as well as storing the activations in a compressed format. However, these methods are deemed inefficient due to their inability to provide training or inference speedup. In this paper, we first investigate the limitations of existing on-device training methods aimed at reducing memory and compute requirements. We then present block selective reprogramming (BSR) in which we fine-tune only a fraction of total blocks of a pre-trained model and selectively drop tokens based on self-attention scores of the frozen layers. To show the efficacy of BSR, we present extensive evaluations on ViT-B and DeiT-S with five different datasets. Compared to the existing alternatives, our approach simultaneously reduces training memory by up to 1.4x and compute cost by up to 2x while maintaining similar accuracy. We also showcase results for Mixture-of-Expert (MoE) models, demonstrating the effectiveness of our approach in multitask learning scenarios.
[ "['Sreetama Sarkar' 'Souvik Kundu' 'Kai Zheng' 'Peter A. Beerel']" ]
null
null
2405.10957
null
null
http://arxiv.org/pdf/2405.10957v1
2024-04-05T13:54:58Z
2024-04-05T13:54:58Z
Statistical Mechanics and Artificial Neural Networks: Principles, Models, and Applications
The field of neuroscience and the development of artificial neural networks (ANNs) have mutually influenced each other, drawing from and contributing to many concepts initially developed in statistical mechanics. Notably, Hopfield networks and Boltzmann machines are versions of the Ising model, a model extensively studied in statistical mechanics for over a century. In the first part of this chapter, we provide an overview of the principles, models, and applications of ANNs, highlighting their connections to statistical mechanics and statistical learning theory. Artificial neural networks can be seen as high-dimensional mathematical functions, and understanding the geometric properties of their loss landscapes (i.e., the high-dimensional space on which one wishes to find extrema or saddles) can provide valuable insights into their optimization behavior, generalization abilities, and overall performance. Visualizing these functions can help us design better optimization methods and improve their generalization abilities. Thus, the second part of this chapter focuses on quantifying geometric properties and visualizing loss functions associated with deep ANNs.
[ "['Lucas Böttcher' 'Gregory Wheeler']" ]
null
null
2405.10959
null
null
http://arxiv.org/pdf/2405.10959v1
2024-04-08T15:59:37Z
2024-04-08T15:59:37Z
Foundation Models for Education: Promises and Prospects
With the advent of foundation models like ChatGPT, educators are excited about the transformative role that AI might play in propelling the next education revolution. The developing speed and the profound impact of foundation models in various industries force us to think deeply about the changes they will make to education, a domain that is critically important for the future of humans. In this paper, we discuss the promises of foundation models in areas such as personalized learning, educational inequality, and reasoning capabilities, as well as the development of agent architecture tailored for education, which integrates AI agents with pedagogical frameworks to create adaptive learning environments. Furthermore, we highlight the risks and opportunities of AI overreliance and creativity. Lastly, we envision a future where foundation models in education harmonize human and AI capabilities, fostering a dynamic, inclusive, and adaptive educational ecosystem.
[ "['Tianlong Xu' 'Richard Tong' 'Jing Liang' 'Xing Fan' 'Haoyang Li'\n 'Qingsong Wen']" ]
null
null
2405.10968
null
null
http://arxiv.org/pdf/2405.10968v1
2024-05-05T23:04:37Z
2024-05-05T23:04:37Z
LIFL: A Lightweight, Event-driven Serverless Platform for Federated Learning
Federated Learning (FL) typically involves a large-scale, distributed system with individual user devices/servers training models locally and then aggregating their model updates on a trusted central server. Existing systems for FL often use an always-on server for model aggregation, which can be inefficient in terms of resource utilization. They may also be inelastic in their resource management. This is particularly exacerbated when aggregating model updates at scale in a highly dynamic environment with varying numbers of heterogeneous user devices/servers. We present LIFL, a lightweight and elastic serverless cloud platform with fine-grained resource management for efficient FL aggregation at scale. LIFL is enhanced by a streamlined, event-driven serverless design that eliminates the individual heavy-weight message broker and replaces inefficient container-based sidecars with lightweight eBPF-based proxies. We leverage shared memory processing to achieve high-performance communication for hierarchical aggregation, which is commonly adopted to speed up FL aggregation at scale. We further introduce locality-aware placement in LIFL to maximize the benefits of shared memory processing. LIFL precisely scales and carefully reuses the resources for hierarchical aggregation to achieve the highest degree of parallelism while minimizing the aggregation time and resource consumption. Our experimental results show that LIFL achieves significant improvement in resource efficiency and aggregation speed for supporting FL at scale, compared to existing serverful and serverless FL systems.
[ "['Shixiong Qi' 'K. K. Ramakrishnan' 'Myungjin Lee']" ]