categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.06642 | null | null | http://arxiv.org/pdf/2405.06642v3 | 2024-06-16T11:33:54Z | 2024-03-05T13:26:42Z | PPFlow: Target-aware Peptide Design with Torsional Flow Matching | Therapeutic peptides have proven to have great pharmaceutical value and potential in recent decades. However, methods of AI-assisted peptide drug discovery are not fully explored. To fill the gap, we propose a target-aware peptide design method called PPFlow, based on conditional flow matching on torus manifolds, to model the internal geometries of torsion angles for the peptide structure design. Besides, we establish a protein-peptide binding dataset named PPBench2024 to fill the void of massive data for the task of structure-based peptide drug design and to allow the training of deep learning methods. Extensive experiments show that PPFlow reaches state-of-the-art performance in tasks of peptide drug generation and optimization in comparison with baseline models, and can be generalized to other tasks including docking and side-chain packing. | [
"['Haitao Lin' 'Odin Zhang' 'Huifeng Zhao' 'Dejun Jiang' 'Lirong Wu'\n 'Zicheng Liu' 'Yufei Huang' 'Stan Z. Li']"
]
|
null | null | 2405.06645 | null | null | http://arxiv.org/pdf/2405.06645v1 | 2024-03-15T16:35:47Z | 2024-03-15T16:35:47Z | On Recovering Higher-order Interactions from Protein Language Models | Protein language models leverage evolutionary information to perform state-of-the-art 3D structure and zero-shot variant prediction. Yet, extracting and explaining all the mutational interactions that govern model predictions remains difficult as it requires querying the entire amino acid space for $n$ sites using $20^n$ sequences, which is computationally expensive even for moderate values of $n$ (e.g., $n \sim 10$). Although approaches to lower the sample complexity exist, they often limit the interpretability of the model to just single and pairwise interactions. Recently, computationally scalable algorithms relying on the assumption of sparsity in the Fourier domain have emerged to learn interactions from experimental data. However, extracting interactions from language models poses unique challenges: it is unclear if sparsity is always present or if it is the only metric needed to assess the utility of Fourier algorithms. Herein, we develop a framework to do a systematic Fourier analysis of the protein language model ESM2 applied on three proteins (green fluorescent protein (GFP), tumor protein P53 (TP53), and G domain B1 (GB1)) across various sites for 228 experiments. We demonstrate that ESM2 is dominated by three regions in the sparsity-ruggedness plane, two of which are better suited for sparse Fourier transforms. Validations on two sample proteins demonstrate recovery of all interactions with $R^2=0.72$ in the more sparse region and $R^2=0.66$ in the more dense region, using only 7 million out of $20^{10} \sim 10^{13}$ ESM2 samples, reducing the computational time by a staggering factor of 15,000. All codes and data are available on our GitHub repository https://github.com/amirgroup-codes/InteractionRecovery. | [
"['Darin Tsui' 'Amirali Aghazadeh']"
]
|
null | null | 2405.06649 | null | null | http://arxiv.org/pdf/2405.06649v2 | 2024-07-12T11:38:56Z | 2024-03-30T05:32:42Z | ProLLM: Protein Chain-of-Thoughts Enhanced LLM for Protein-Protein
Interaction Prediction | The prediction of protein-protein interactions (PPIs) is crucial for understanding biological functions and diseases. Previous machine learning approaches to PPI prediction mainly focus on direct physical interactions, ignoring the broader context of nonphysical connections through intermediate proteins, thus limiting their effectiveness. The emergence of Large Language Models (LLMs) provides a new opportunity for addressing this complex biological challenge. By transforming structured data into natural language prompts, we can map the relationships between proteins into texts. This approach allows LLMs to identify indirect connections between proteins, tracing the path from upstream to downstream. Therefore, we propose a novel framework ProLLM that employs an LLM tailored for PPI for the first time. Specifically, we propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as natural language prompts. ProCoT considers a signaling pathway as a protein reasoning process, which starts from upstream proteins and passes through several intermediate proteins to transmit biological signals to downstream proteins. Thus, we can use ProCoT to predict the interaction between upstream proteins and downstream proteins. The training of ProLLM employs the ProCoT format, which enhances the model's understanding of complex biological problems. In addition to ProCoT, this paper also contributes to the exploration of embedding replacement of protein sites in natural language prompts, and instruction fine-tuning in protein knowledge datasets. We demonstrate the efficacy of ProLLM through rigorous validation against benchmark datasets, showing significant improvement over existing methods in terms of prediction accuracy and generalizability. The code is available at: https://github.com/MingyuJ666/ProLLM. | [
"['Mingyu Jin' 'Haochen Xue' 'Zhenting Wang' 'Boming Kang' 'Ruosong Ye'\n 'Kaixiong Zhou' 'Mengnan Du' 'Yongfeng Zhang']"
]
|
null | null | 2405.06651 | null | null | http://arxiv.org/pdf/2405.06651v1 | 2024-04-05T03:16:11Z | 2024-04-05T03:16:11Z | Using GANs for De Novo Protein Design Targeting Microglial IL-3Rα
to Inhibit Alzheimer's Progression | IL-3 is a hemopoietic growth factor that usually targets blood cell precursors; IL-3R is a cytokine receptor that binds to IL-3. However, IL-3 takes on a different role in the context of glial cells in the nervous system, where studies show that the protein IL-3 protects against Alzheimer's disease by activating microglia at their IL-3R receptors, causing the microglia to clear out the tangles caused by the build-up of misfolded Tau proteins. In this study, we seek to ascertain what role the secondary structure of IL-3 plays in its binding with the receptor. The motivation behind this study is to learn more about the mechanism and identify possible drugs that might be able to activate it, in hopes of inhibiting the spread of Alzheimer's Disease. From a preliminary analysis of complexes containing IL-3 and IL-3R, we hypothesized that the binding is largely due to the interactions of three alpha helix structures stretching towards the active site on the receptor. The original IL-3 protein serves as the control in this experiment; the other proteins being tested are generated through several types of computational de novo protein design, where machine learning allows for the production of entirely novel structures. The efficacy of the generated proteins is assessed through docking simulations with the IL-3R receptor, and the binding poses are also qualitatively examined to gain insight into the function of the binding. From the docking data and poses, the most successful proteins were those with similar secondary structure to IL-3. | [
"['Arnav Swaroop']"
]
|
null | null | 2405.06653 | null | null | http://arxiv.org/pdf/2405.06653v1 | 2024-04-08T08:25:25Z | 2024-04-08T08:25:25Z | A unified cross-attention model for predicting antigen binding
specificity to both HLA and TCR molecules | Immune checkpoint inhibitors have demonstrated promising clinical efficacy across various tumor types, yet the percentage of patients who benefit from them remains low. The binding affinity between antigens and HLA-I/TCR molecules plays a critical role in antigen presentation and T-cell activation. Some computational methods have been developed to predict antigen-HLA or antigen-TCR binding specificity, but they focus solely on one task at a time. In this paper, we propose UnifyImmun, a unified cross-attention transformer model designed to simultaneously predict the binding of antigens to both HLA and TCR molecules, thereby providing a more comprehensive evaluation of antigen immunogenicity. We devise a two-phase progressive training strategy that enables these two tasks to mutually reinforce each other, by compelling the encoders to extract more expressive features. To further enhance the model generalizability, we incorporate virtual adversarial training. Compared to over ten existing methods for predicting antigen-HLA and antigen-TCR binding, our method demonstrates better performance in both tasks. Notably, on a large-scale COVID-19 antigen-TCR binding test set, our method improves performance by at least 9% compared to the current state-of-the-art methods. The validation experiments on three clinical cohorts confirm that our approach effectively predicts immunotherapy response and clinical outcomes. Furthermore, the cross-attention scores reveal the amino acid sites critical for antigen binding to receptors. In essence, our approach marks a significant step towards comprehensive evaluation of antigen immunogenicity. | [
"['Chenpeng Yu' 'Xing Fang' 'Hui Liu']"
]
|
null | null | 2405.06654 | null | null | http://arxiv.org/pdf/2405.06654v1 | 2024-04-10T05:29:35Z | 2024-04-10T05:29:35Z | PROflow: An iterative refinement model for PROTAC-induced structure
prediction | Proteolysis targeting chimeras (PROTACs) are small molecules that trigger the breakdown of traditionally "undruggable" proteins by binding simultaneously to their targets and degradation-associated proteins. A key challenge in their rational design is understanding the structural basis of their activity. Due to the lack of crystal structures (18 in the PDB), existing PROTAC docking methods have been forced to simplify the problem into a distance-constrained protein-protein docking task. To address the data issue, we develop a novel pseudo-data generation scheme that requires only binary protein-protein complexes. This new dataset enables PROflow, an iterative refinement model for PROTAC-induced structure prediction that models the full PROTAC flexibility during constrained protein-protein docking. PROflow outperforms the state-of-the-art across docking metrics and runtime. Its inference speed enables the large-scale screening of PROTAC designs, and computed properties of predicted structures achieve statistically significant correlations with published degradation activities. | [
"['Bo Qiang' 'Wenxian Shi' 'Yuxuan Song' 'Menghua Wu']"
]
|
null | null | 2405.06655 | null | null | http://arxiv.org/pdf/2405.06655v1 | 2024-04-14T08:36:14Z | 2024-04-14T08:36:14Z | RNA Secondary Structure Prediction Using Transformer-Based Deep Learning
Models | The Human Genome Project has led to an exponential increase in data related to the sequence, structure, and function of biomolecules. Bioinformatics is an interdisciplinary research field that primarily uses computational methods to analyze large amounts of biological macromolecule data. Its goal is to discover hidden biological patterns and related information. Furthermore, analysing additional relevant information can enhance the study of biological operating mechanisms. This paper discusses the fundamental concepts of RNA, RNA secondary structure, and its prediction. Subsequently, the application of machine learning technologies in predicting the structure of biological macromolecules is explored. This chapter describes the relevant knowledge of algorithms and computational complexity and presents an RNA tertiary structure prediction algorithm based on ResNet. To address the issue of the current scoring function's unsuitability for long RNA, a scoring model based on ResNet is proposed, and a structure prediction algorithm is designed. The chapter concludes by presenting some open and interesting challenges in the field of RNA tertiary structure prediction. | [
"['Yanlin Zhou' 'Tong Zhan' 'Yichao Wu' 'Bo Song' 'Chenxi Shi']"
]
|
null | null | 2405.06658 | null | null | http://arxiv.org/pdf/2405.06658v1 | 2024-04-21T01:07:33Z | 2024-04-21T01:07:33Z | ProteinEngine: Empower LLM with Domain Knowledge for Protein Engineering | Large language models (LLMs) have garnered considerable attention for their proficiency in tackling intricate tasks, particularly leveraging their capacities for zero-shot and in-context learning. However, their utility has been predominantly restricted to general tasks due to an absence of domain-specific knowledge. This constraint becomes particularly pertinent in the realm of protein engineering, where specialized expertise is required for tasks such as protein function prediction, protein evolution analysis, and protein design, with a level of specialization that existing LLMs cannot furnish. In response to this challenge, we introduce ProteinEngine, a human-centered platform aimed at amplifying the capabilities of LLMs in protein engineering by seamlessly integrating a comprehensive range of relevant tools, packages, and software via API calls. Uniquely, ProteinEngine assigns three distinct roles to LLMs, facilitating efficient task delegation, specialized task resolution, and effective communication of results. This design fosters high extensibility and promotes the smooth incorporation of new algorithms, models, and features for future development. Extensive user studies, involving participants from both the AI and protein engineering communities across academia and industry, consistently validate the superiority of ProteinEngine in augmenting the reliability and precision of deep learning in protein engineering tasks. Consequently, our findings highlight the potential of ProteinEngine to bridge the disconnected tools for future research in the protein engineering domain. | [
"['Yiqing Shen' 'Outongyi Lv' 'Houying Zhu' 'Yu Guang Wang']"
]
|
null | null | 2405.06659 | null | null | http://arxiv.org/pdf/2405.06659v1 | 2024-04-22T14:36:19Z | 2024-04-22T14:36:19Z | ControlMol: Adding Substructure Control To Molecule Diffusion Models | Designing new molecules is an important task in the field of pharmaceuticals. Due to the vast design space of molecules, generating molecules conditioned on a specific sub-structure relevant to a particular function or therapeutic target is a crucial task in computer-aided drug design. In this paper, we present ControlMol, which adds sub-structure control to molecule generation with diffusion models. Unlike previous methods which view this task as inpainting or conditional generation, we adapt the idea of ControlNet to conditional molecule generation and make adaptive adjustments to a pre-trained diffusion model. We apply our method to both 2D and 3D molecule generation tasks. Conditioned on randomly partitioned sub-structure data, our method outperforms previous methods by generating more valid and diverse molecules. The method is easy to implement and can be quickly applied to a variety of pre-trained molecule generation models. | [
"['Qi Zhengyang' 'Liu Zijing' 'Zhang Jiying' 'Cao He' 'Li Yu']"
]
|
null | null | 2405.06660 | null | null | http://arxiv.org/pdf/2405.06660v1 | 2024-04-23T01:39:20Z | 2024-04-23T01:39:20Z | AI and Machine Learning for Next Generation Science Assessments | This chapter focuses on the transformative role of Artificial Intelligence (AI) and Machine Learning (ML) in science assessments. The paper begins with a discussion of the Framework for K-12 Science Education, which calls for a shift from conceptual learning to knowledge-in-use. This shift necessitates the development of new types of assessments that align with the Framework's three dimensions: science and engineering practices, disciplinary core ideas, and crosscutting concepts. The paper further highlights the limitations of traditional assessment methods like multiple-choice questions, which often fail to capture the complexities of scientific thinking and three-dimensional learning in science. It emphasizes the need for performance-based assessments that require students to engage in scientific practices like modeling, explanation, and argumentation. The paper achieves three major goals: reviewing the current state of ML-based assessments in science education, introducing a framework for scoring accuracy in ML-based automatic assessments, and discussing future directions and challenges. It delves into the evolution of ML-based automatic scoring systems, discussing various types of ML, like supervised, unsupervised, and semi-supervised learning. These systems can provide timely and objective feedback, thus alleviating the burden on teachers. The paper concludes by exploring pre-trained models like BERT and finetuned ChatGPT, which have shown promise in assessing students' written responses effectively. | [
"['Xiaoming Zhai']"
]
|
null | null | 2405.06662 | null | null | http://arxiv.org/pdf/2405.06662v1 | 2024-04-26T14:50:59Z | 2024-04-26T14:50:59Z | Language Interaction Network for Clinical Trial Approval Estimation | Clinical trial outcome prediction seeks to estimate the likelihood that a clinical trial will successfully reach its intended endpoint. This process predominantly involves the development of machine learning models that utilize a variety of data sources such as descriptions of the clinical trials, characteristics of the drug molecules, and specific disease conditions being targeted. Accurate predictions of trial outcomes are crucial for optimizing trial planning and prioritizing investments in a drug portfolio. While previous research has largely concentrated on small-molecule drugs, there is a growing need to focus on biologics, a rapidly expanding category of therapeutic agents that often lack the well-defined molecular properties associated with traditional drugs. Additionally, applying conventional methods like graph neural networks to biologics data proves challenging due to their complex nature. To address these challenges, we introduce the Language Interaction Network (LINT), a novel approach that predicts trial outcomes using only the free-text descriptions of the trials. We have rigorously tested the effectiveness of LINT across three phases of clinical trials, where it achieved ROC-AUC scores of 0.770, 0.740, and 0.748 for phases I, II, and III, respectively, specifically concerning trials involving biologic interventions. | [
"['Chufan Gao' 'Tianfan Fu' 'Jimeng Sun']"
]
|
null | null | 2405.06663 | null | null | http://arxiv.org/pdf/2405.06663v1 | 2024-04-29T05:42:29Z | 2024-04-29T05:42:29Z | Protein Representation Learning by Capturing Protein
Sequence-Structure-Function Relationship | The goal of protein representation learning is to extract knowledge from protein databases that can be applied to various protein-related downstream tasks. Although protein sequence, structure, and function are the three key modalities for a comprehensive understanding of proteins, existing methods for protein representation learning have utilized only one or two of these modalities due to the difficulty of capturing the asymmetric interrelationships between them. To account for this asymmetry, we introduce our novel asymmetric multi-modal masked autoencoder (AMMA). AMMA adopts (1) a unified multi-modal encoder to integrate all three modalities into a unified representation space and (2) asymmetric decoders to ensure that sequence latent features reflect structural and functional information. The experiments demonstrate that the proposed AMMA is highly effective in learning protein representations that exhibit well-aligned inter-modal relationships, which in turn makes it effective for various downstream protein-related tasks. | [
"['Eunji Ko' 'Seul Lee' 'Minseon Kim' 'Dongki Kim']"
]
|
null | null | 2405.06665 | null | null | http://arxiv.org/pdf/2405.06665v1 | 2024-05-02T14:33:05Z | 2024-05-02T14:33:05Z | Enhancing Language Models for Financial Relation Extraction with Named
Entities and Part-of-Speech | The Financial Relation Extraction (FinRE) task involves identifying the entities and their relation, given a piece of financial statement/text. To solve this FinRE problem, we propose a simple but effective strategy that improves the performance of pre-trained language models by augmenting them with Named Entity Recognition (NER) and Part-Of-Speech (POS), as well as different approaches to combine this information. Experiments on a financial relations dataset show promising results and highlight the benefits of incorporating NER and POS in existing models. Our dataset and codes are available at https://github.com/kwanhui/FinRelExtract. | [
"['Menglin Li' 'Kwan Hui Lim']"
]
|
null | null | 2405.06667 | null | null | http://arxiv.org/pdf/2405.06667v1 | 2024-05-03T09:49:46Z | 2024-05-03T09:49:46Z | Sentiment Polarity Analysis of Bangla Food Reviews Using Machine and
Deep Learning Algorithms | The Internet has become an essential tool for people in the modern world. Humans, like all living organisms, have essential requirements for survival. These include access to atmospheric oxygen, potable water, protective shelter, and sustenance. The constant flux of the world is making our existence less complicated. A significant portion of the population utilizes online food ordering services to have meals delivered to their residences. Although there are numerous methods for ordering food, customers sometimes experience disappointment with the food they receive. Our endeavor was to establish a model that could determine if food is of good or poor quality. We compiled an extensive dataset of over 1484 online reviews from prominent food ordering platforms, including Food Panda and HungryNaki. Leveraging the collected data, a rigorous assessment of various deep learning and machine learning techniques was performed to determine the most accurate approach for predicting food quality. Out of all the algorithms evaluated, logistic regression emerged as the most accurate, achieving an impressive 90.91% accuracy. The review offers valuable insights that will guide the user in deciding whether or not to order the food. | [
"['Al Amin' 'Anik Sarkar' 'Md Mahamodul Islam' 'Asif Ahammad Miazee'\n 'Md Robiul Islam' 'Md Mahmudul Hoque']"
]
|
null | null | 2405.06669 | null | null | http://arxiv.org/pdf/2405.06669v1 | 2024-05-03T16:33:16Z | 2024-05-03T16:33:16Z | Instruction-Guided Bullet Point Summarization of Long Financial Earnings
Call Transcripts | While automatic summarization techniques have made significant advancements, their primary focus has been on summarizing short news articles or documents that have clear structural patterns like scientific articles or government reports. There has not been much exploration into developing efficient methods for summarizing financial documents, which often contain complex facts and figures. Here, we study the problem of bullet point summarization of long Earnings Call Transcripts (ECTs) using the recently released ECTSum dataset. We leverage an unsupervised question-based extractive module followed by a parameter-efficient instruction-tuned abstractive module to solve this task. Our proposed model FLAN-FinBPS achieves new state-of-the-art performance, outperforming the strongest baseline with a 14.88% average ROUGE score gain, and is capable of generating factually consistent bullet point summaries that capture the important facts discussed in the ECTs. | [
"['Subhendu Khatuya' 'Koushiki Sinha' 'Niloy Ganguly' 'Saptarshi Ghosh'\n 'Pawan Goyal']"
]
|
null | null | 2405.06670 | null | null | http://arxiv.org/pdf/2405.06670v2 | 2024-05-14T18:30:52Z | 2024-05-03T16:38:14Z | TLINet: Differentiable Neural Network Temporal Logic Inference | There has been a growing interest in extracting formal descriptions of system behaviors from data. Signal Temporal Logic (STL) is an expressive formal language used to describe spatial-temporal properties with interpretability. This paper introduces TLINet, a neural-symbolic framework for learning STL formulas. The computation in TLINet is differentiable, enabling the usage of off-the-shelf gradient-based tools during the learning process. In contrast to existing approaches, we introduce approximation methods for the max operator designed specifically for temporal logic-based gradient techniques, ensuring the correctness of STL satisfaction evaluation. Our framework not only learns the structure but also the parameters of STL formulas, allowing flexible combinations of operators and various logical structures. We validate TLINet against state-of-the-art baselines, demonstrating that our approach outperforms these baselines in terms of interpretability, compactness, rich expressibility, and computational efficiency. | [
"['Danyang Li' 'Mingyu Cai' 'Cristian-Ioan Vasile' 'Roberto Tron']"
]
|
null | null | 2405.06671 | null | null | http://arxiv.org/pdf/2405.06671v2 | 2024-05-15T14:43:23Z | 2024-05-03T16:41:36Z | Parameter-Efficient Instruction Tuning of Large Language Models For
Extreme Financial Numeral Labelling | We study the problem of automatically annotating relevant numerals (GAAP metrics) occurring in financial documents with their corresponding XBRL tags. Different from prior works, we investigate the feasibility of solving this extreme classification problem using a generative paradigm through instruction tuning of Large Language Models (LLMs). To this end, we leverage metric metadata information to frame our target outputs while proposing a parameter-efficient solution for the task using LoRA. We perform experiments on two recently released financial numeric labeling datasets. Our proposed model, FLAN-FinXC, achieves new state-of-the-art performances on both datasets, outperforming several strong baselines. We explain the better scores of our proposed model by demonstrating its capability for zero-shot as well as the least frequently occurring tags. Also, even when we fail to predict the XBRL tags correctly, our generated output has substantial overlap with the ground truth in the majority of cases. | [
"['Subhendu Khatuya' 'Rajdeep Mukherjee' 'Akash Ghosh' 'Manjunath Hegde'\n 'Koustuv Dasgupta' 'Niloy Ganguly' 'Saptarshi Ghosh' 'Pawan Goyal']"
]
|
null | null | 2405.06672 | null | null | http://arxiv.org/pdf/2405.06672v2 | 2024-06-10T00:08:07Z | 2024-05-03T16:44:31Z | Liouville Flow Importance Sampler | We present the Liouville Flow Importance Sampler (LFIS), an innovative flow-based model for generating samples from unnormalized density functions. LFIS learns a time-dependent velocity field that deterministically transports samples from a simple initial distribution to a complex target distribution, guided by a prescribed path of annealed distributions. The training of LFIS utilizes a unique method that enforces the structure of a derived partial differential equation to neural networks modeling velocity fields. By considering the neural velocity field as an importance sampler, sample weights can be computed through accumulating errors along the sample trajectories driven by neural velocity fields, ensuring unbiased and consistent estimation of statistical quantities. We demonstrate the effectiveness of LFIS through its application to a range of benchmark problems, on many of which LFIS achieved state-of-the-art performance. | [
"['Yifeng Tian' 'Nishant Panda' 'Yen Ting Lin']"
]
|
null | null | 2405.06684 | null | null | http://arxiv.org/abs/2405.06684v1 | 2024-05-06T10:52:21Z | 2024-05-06T10:52:21Z | QuakeBERT: Accurate Classification of Social Media Texts for Rapid
Earthquake Impact Assessment | Social media aids disaster response but suffers from noise, hindering accurate impact assessment and decision making for resilient cities, a problem that few studies have considered. To address the problem, this study proposes the first domain-specific LLM model and an integrated method for rapid earthquake impact assessment. First, a few categories are introduced to classify and filter microblogs considering their relationship to the physical and social impacts of earthquakes, and a dataset comprising 7282 earthquake-related microblogs from twenty earthquakes in different locations is developed as well. Then, with a systematic analysis of various influential factors, QuakeBERT, a domain-specific large language model (LLM), is developed and fine-tuned for accurate classification and filtering of microblogs. Meanwhile, an integrated method combining public opinion trend analysis, sentiment analysis, and keyword-based physical impact quantification is introduced to assess both the physical and social impacts of earthquakes based on social media texts. Experiments show that data diversity and data volume dominate the performance of QuakeBERT and increase the macro average F1 score by 27%, while the best classification model QuakeBERT outperforms the CNN- or RNN-based models by improving the macro average F1 score from 60.87% to 84.33%. Finally, the proposed approach is applied to assess two earthquakes with the same magnitude and focal depth. Results show that the proposed approach can effectively enhance the impact assessment process by accurate detection of noisy microblogs, which enables effective post-disaster emergency responses to create more resilient cities. | [
"['Jin Han' 'Zhe Zheng' 'Xin-Zheng Lu' 'Ke-Yin Chen' 'Jia-Rui Lin']"
]
|
null | null | 2405.06689 | null | null | http://arxiv.org/pdf/2405.06689v1 | 2024-05-07T07:40:42Z | 2024-05-07T07:40:42Z | Policy Iteration for Pareto-Optimal Policies in Stochastic Stackelberg
Games | In general-sum stochastic games, a stationary Stackelberg equilibrium (SSE), in which the leader maximizes its return for all initial states when the follower plays the best response to the leader's policy, does not always exist. Existing methods of determining the SSEs require strong assumptions to guarantee the convergence and the coincidence of the limit with the SSE. Moreover, our analysis suggests that the performance at the fixed points of these methods is not reasonable when they are not SSEs. Herein, we introduce the concept of Pareto-optimality as a reasonable alternative to SSEs. We derive the policy improvement theorem for stochastic games with a best-response follower and propose an iterative algorithm to determine Pareto-optimal policies based on it. Monotone improvement and convergence of the proposed approach are proved, and its convergence to SSEs is proved in a special case. | [
"['Mikoto Kudo' 'Yohei Akimoto']"
]
|
null | null | 2405.06690 | null | null | http://arxiv.org/pdf/2405.06690v1 | 2024-05-07T09:18:13Z | 2024-05-07T09:18:13Z | DrugLLM: Open Large Language Model for Few-shot Molecule Generation | Large Language Models (LLMs) have made great strides in areas such as language processing and computer vision. Despite the emergence of diverse techniques to improve few-shot learning capacity, current LLMs fall short in handling the languages of biology and chemistry. For example, they struggle to capture the relationship between molecule structure and pharmacochemical properties. Consequently, the few-shot learning capacity for small-molecule drug modification remains impeded. In this work, we introduce DrugLLM, an LLM tailored for drug design. During the training process, we employed Group-based Molecular Representation (GMR) to represent molecules, arranging them in sequences that reflect modifications aimed at enhancing specific molecular properties. DrugLLM learns how to modify molecules in drug discovery by predicting the next molecule based on past modifications. Extensive computational experiments demonstrate that DrugLLM can generate new molecules with expected properties based on limited examples, presenting a powerful few-shot molecule generation capacity. | [
"['Xianggen Liu' 'Yan Guo' 'Haoran Li' 'Jin Liu' 'Shudong Huang' 'Bowen Ke'\n 'Jiancheng Lv']"
]
|
null | null | 2405.06691 | null | null | http://arxiv.org/pdf/2405.06691v1 | 2024-05-07T09:36:23Z | 2024-05-07T09:36:23Z | Fleet of Agents: Coordinated Problem Solving with Large Language Models
using Genetic Particle Filtering | Large language models (LLMs) have significantly evolved, moving from simple output generation to complex reasoning and from stand-alone usage to being embedded into broader frameworks. In this paper, we introduce Fleet of Agents (FoA), a novel framework utilizing LLMs as agents to navigate through dynamic tree searches, employing a genetic-type particle filtering approach. FoA spawns a multitude of agents, each exploring autonomously, followed by a selection phase where resampling based on a heuristic value function optimizes the balance between exploration and exploitation. This mechanism enables dynamic branching, adapting the exploration strategy based on discovered solutions. We experimentally validate FoA using two benchmark tasks, "Game of 24" and "Mini-Crosswords". FoA outperforms the previously proposed Tree-of-Thoughts method in terms of efficacy and efficiency: it significantly decreases computational costs (by calling the value function less frequently) while preserving comparable or even superior accuracy. | [
"['Akhil Arora' 'Lars Klein' 'Nearchos Potamitis' 'Roland Aydin'\n 'Caglar Gulcehre' 'Robert West']"
]
|
null | null | 2405.06693 | null | null | http://arxiv.org/pdf/2405.06693v2 | 2024-06-17T20:20:58Z | 2024-05-07T19:09:46Z | SurfPro: Functional Protein Design Based on Continuous Surface | How can we design proteins with desired functions? We are motivated by a chemical intuition that both geometric structure and biochemical properties are critical to a protein's function. In this paper, we propose SurfPro, a new method to generate functional proteins given a desired surface and its associated biochemical properties. SurfPro comprises a hierarchical encoder that progressively models the geometric shape and biochemical features of a protein surface, and an autoregressive decoder to produce an amino acid sequence. We evaluate SurfPro on a standard inverse folding benchmark CATH 4.2 and two functional protein design tasks: protein binder design and enzyme design. Our SurfPro consistently surpasses previous state-of-the-art inverse folding methods, achieving a recovery rate of 57.78% on CATH 4.2 and higher success rates in terms of protein-protein binding and enzyme-substrate interaction scores. | [
"['Zhenqiao Song' 'Tinglin Huang' 'Lei Li' 'Wengong Jin']"
]
|
null | null | 2405.06703 | null | null | http://arxiv.org/pdf/2405.06703v1 | 2024-05-08T19:20:34Z | 2024-05-08T19:20:34Z | Interpretable Cross-Examination Technique (ICE-T): Using highly
informative features to boost LLM performance | In this paper, we introduce the Interpretable Cross-Examination Technique (ICE-T), a novel approach that leverages structured multi-prompt techniques with Large Language Models (LLMs) to improve classification performance over zero-shot and few-shot methods. In domains where interpretability is crucial, such as medicine and law, standard models often fall short due to their "black-box" nature. ICE-T addresses these limitations by using a series of generated prompts that allow an LLM to approach the problem from multiple directions. The responses from the LLM are then converted into numerical feature vectors and processed by a traditional classifier. This method not only maintains high interpretability but also allows for smaller, less capable models to achieve or exceed the performance of larger, more advanced models under zero-shot conditions. We demonstrate the effectiveness of ICE-T across a diverse set of data sources, including medical records and legal documents, consistently surpassing the zero-shot baseline in terms of classification metrics such as F1 scores. Our results indicate that ICE-T can be used for improving both the performance and transparency of AI applications in complex decision-making environments. | [
"['Goran Muric' 'Ben Delay' 'Steven Minton']"
]
|
null | null | 2405.06721 | null | null | http://arxiv.org/pdf/2405.06721v1 | 2024-05-10T06:03:45Z | 2024-05-10T06:03:45Z | Kolmogorov-Arnold Networks are Radial Basis Function Networks | This short paper is a fast proof-of-concept that the third-order B-splines used in Kolmogorov-Arnold Networks (KANs) can be well approximated by Gaussian radial basis functions. Doing so leads to FastKAN, a much faster implementation of KAN which is also a radial basis function (RBF) network. | [
"['Ziyao Li']"
]
|
null | null | 2405.06724 | null | null | http://arxiv.org/pdf/2405.06724v2 | 2024-05-20T13:01:18Z | 2024-05-10T09:51:06Z | Boolean matrix logic programming for active learning of gene functions
in genome-scale metabolic network models | Techniques to autonomously drive research have been prominent in Computational Scientific Discovery, while Synthetic Biology is a field of science that focuses on designing and constructing new biological systems for useful purposes. Here we seek to apply logic-based machine learning techniques to facilitate cellular engineering and drive biological discovery. Comprehensive databases of metabolic processes called genome-scale metabolic network models (GEMs) are often used to evaluate cellular engineering strategies to optimise target compound production. However, predicted host behaviours are not always correctly described by GEMs, often due to errors in the models. The task of learning the intricate genetic interactions within GEMs presents computational and empirical challenges. To address these, we describe a novel approach called Boolean Matrix Logic Programming (BMLP), which leverages Boolean matrices to evaluate large logic programs. We introduce a new system, $BMLP_{active}$, which efficiently explores the genomic hypothesis space by guiding informative experimentation through active learning. In contrast to sub-symbolic methods, $BMLP_{active}$ encodes a state-of-the-art GEM of a widely accepted bacterial host in an interpretable and logical representation using datalog logic programs. Notably, $BMLP_{active}$ can successfully learn the interaction between a gene pair with fewer training examples than random experimentation, overcoming the increase in experimental design space. $BMLP_{active}$ enables rapid optimisation of metabolic models to reliably engineer biological systems for producing useful compounds. It offers a realistic approach to creating a self-driving lab for microbial engineering. | [
"['Lun Ai' 'Stephen H. Muggleton' 'Shi-Shun Liang' 'Geoff S. Baldwin']"
]
|
null | null | 2405.06725 | null | null | http://arxiv.org/pdf/2405.06725v3 | 2024-05-15T02:46:45Z | 2024-05-10T13:22:20Z | On the Shape of Brainscores for Large Language Models (LLMs) | With the rise of Large Language Models (LLMs), the novel metric "Brainscore" emerged as a means to evaluate the functional similarity between LLMs and human brain/neural systems. We sought to interpret this novel score by constructing topological features derived from human fMRI data involving 190 subjects and from 39 LLMs plus their untrained counterparts. Subsequently, we trained 36 Linear Regression Models and conducted thorough statistical analyses to discern reliable and valid features from our constructed ones. Our findings reveal distinctive feature combinations conducive to interpreting existing brainscores across various brain regions of interest (ROIs) and hemispheres, thereby significantly contributing to advancing interpretable machine learning (iML) studies. The study is enriched by our further discussions and analyses concerning existing brainscores. To our knowledge, this study represents the first attempt to comprehend the novel metric brainscore within this interdisciplinary domain. | [
"['Jingkai Li']"
]
|
null | null | 2405.06727 | null | null | http://arxiv.org/pdf/2405.06727v1 | 2024-05-10T14:31:58Z | 2024-05-10T14:31:58Z | Approximation Error and Complexity Bounds for ReLU Networks on
Low-Regular Function Spaces | In this work, we consider the approximation of a large class of bounded functions, with minimal regularity assumptions, by ReLU neural networks. We show that the approximation error can be bounded from above by a quantity proportional to the uniform norm of the target function and inversely proportional to the product of network width and depth. We inherit this approximation error bound from Fourier features residual networks, a type of neural network that uses complex exponential activation functions. Our proof is constructive and proceeds by conducting a careful complexity analysis associated with the approximation of a Fourier features residual network by a ReLU network. | [
"['Owen Davis' 'Gianluca Geraci' 'Mohammad Motamed']"
]
|
null | null | 2405.06729 | null | null | http://arxiv.org/pdf/2405.06729v1 | 2024-05-10T14:50:40Z | 2024-05-10T14:50:40Z | Fine-tuning Protein Language Models with Deep Mutational Scanning
improves Variant Effect Prediction | Protein Language Models (PLMs) have emerged as performant and scalable tools for predicting the functional impact and clinical significance of protein-coding variants, but they still lag behind experimental accuracy. Here, we present a novel fine-tuning approach to improve the performance of PLMs with experimental maps of variant effects from Deep Mutational Scanning (DMS) assays using a Normalised Log-odds Ratio (NLR) head. We find consistent improvements in a held-out protein test set, and on independent DMS and clinical variant annotation benchmarks from ProteinGym and ClinVar. These findings demonstrate that DMS is a promising source of sequence diversity and supervised training data for improving the performance of PLMs for variant effect prediction. | [
"['Aleix Lafita' 'Ferran Gonzalez' 'Mahmoud Hossam' 'Paul Smyth'\n 'Jacob Deasy' 'Ari Allyn-Feuer' 'Daniel Seaton' 'Stephen Young']"
]
|
null | null | 2405.06732 | null | null | http://arxiv.org/pdf/2405.06732v1 | 2024-05-10T15:58:39Z | 2024-05-10T15:58:39Z | A Global Data-Driven Model for The Hippocampus and Nucleus Accumbens of
Rat From The Local Field Potential Recordings (LFP) | In brain neural networks, Local Field Potential (LFP) signals represent the dynamic flow of information. Analyzing LFP clinical data plays a critical role in improving our understanding of brain mechanisms. One way to enhance our understanding of these mechanisms is to identify a global model to predict brain signals in different situations. This paper identifies a global data-driven model based on LFP recordings of the Nucleus Accumbens and Hippocampus regions in freely moving rats. The LFP is recorded from each rat in two different situations: before and after the process of getting a reward which can be either a drug (Morphine) or natural food (like popcorn or biscuit). A comparison of five machine learning methods including Long Short Term Memory (LSTM), Echo State Network (ESN), Deep Echo State Network (DeepESN), Radial Basis Function (RBF), and Local Linear Model Tree (LoLiMoT) is conducted to develop this model. LoLiMoT was chosen with the best performance among all methods. This model can predict the future states of these regions with one pre-trained model. Identifying this model showed that Morphine and natural rewards do not change the dynamic features of neurons in these regions. | [
"['Maedeh Sadeghi' 'Mahdi Aliyari Shoorehdeli' 'Shole jamali'\n 'Abbas Haghparast']"
]
|
null | null | 2405.06747 | null | null | http://arxiv.org/pdf/2405.06747v1 | 2024-05-10T18:03:20Z | 2024-05-10T18:03:20Z | Music Emotion Prediction Using Recurrent Neural Networks | This study explores the application of recurrent neural networks to recognize emotions conveyed in music, aiming to enhance music recommendation systems and support therapeutic interventions by tailoring music to fit listeners' emotional states. We utilize Russell's Emotion Quadrant to categorize music into four distinct emotional regions and develop models capable of accurately predicting these categories. Our approach involves extracting a comprehensive set of audio features using Librosa and applying various recurrent neural network architectures, including standard RNNs, Bidirectional RNNs, and Long Short-Term Memory (LSTM) networks. Initial experiments are conducted using a dataset of 900 audio clips, labeled according to the emotional quadrants. We compare the performance of our neural network models against a set of baseline classifiers and analyze their effectiveness in capturing the temporal dynamics inherent in musical expression. The results indicate that simpler RNN architectures may perform comparably or even superiorly to more complex models, particularly in smaller datasets. We also applied these experiments to larger datasets: one augmented from our original dataset, and another drawn from other sources. This research not only enhances our understanding of the emotional impact of music but also demonstrates the potential of neural networks in creating more personalized and emotionally resonant music recommendation and therapy systems. | [
"['Xinyu Chang' 'Xiangyu Zhang' 'Haoruo Zhang' 'Yulu Ran']"
]
|
null | null | 2405.06749 | null | null | http://arxiv.org/pdf/2405.06749v2 | 2024-05-16T14:24:37Z | 2024-05-10T18:06:41Z | Ensuring UAV Safety: A Vision-only and Real-time Framework for Collision
Avoidance Through Object Detection, Tracking, and Distance Estimation | In the last twenty years, unmanned aerial vehicles (UAVs) have garnered growing interest due to their expanding applications in both military and civilian domains. Detecting non-cooperative aerial vehicles with efficiency and estimating collisions accurately are pivotal for achieving fully autonomous aircraft and facilitating Advanced Air Mobility (AAM). This paper presents a deep-learning framework that utilizes optical sensors for the detection, tracking, and distance estimation of non-cooperative aerial vehicles. Within this comprehensive sensing framework, the availability of depth information is essential for enabling autonomous aerial vehicles to perceive and navigate around obstacles. In this work, we propose a method for estimating the distance information of a detected aerial object in real time using only the input of a monocular camera. In order to train our deep learning components for the object detection, tracking and depth estimation tasks we utilize the Amazon Airborne Object Tracking (AOT) Dataset. In contrast to previous approaches that integrate the depth estimation module into the object detector, our method formulates the problem as image-to-image translation. We employ a separate lightweight encoder-decoder network for efficient and robust depth estimation. In a nutshell, the object detection module identifies and localizes obstacles, conveying this information to both the tracking module for monitoring obstacle movement and the depth estimation module for calculating distances. Our approach is evaluated on the Airborne Object Tracking (AOT) dataset, which is, to the best of our knowledge, the largest air-to-air airborne object dataset. | [
"['Vasileios Karampinis' 'Anastasios Arsenos' 'Orfeas Filippopoulos'\n 'Evangelos Petrongonas' 'Christos Skliros' 'Dimitrios Kollias'\n 'Stefanos Kollias' 'Athanasios Voulodimos']"
]
|
null | null | 2405.06758 | null | null | http://arxiv.org/pdf/2405.06758v1 | 2024-05-10T18:22:54Z | 2024-05-10T18:22:54Z | Scalable and Effective Arithmetic Tree Generation for Adder and
Multiplier Designs | Across a wide range of hardware scenarios, the computational efficiency and physical size of the arithmetic units significantly influence the speed and footprint of the overall hardware system. Nevertheless, the effectiveness of prior arithmetic design techniques proves inadequate, as it does not sufficiently optimize speed and area, resulting in a reduced processing rate and larger module size. To boost the arithmetic performance, in this work, we focus on the two most common and fundamental arithmetic modules: adders and multipliers. We cast the design tasks as single-player tree generation games, leveraging reinforcement learning techniques to optimize their arithmetic tree structures. Such a tree generation formulation allows us to efficiently navigate the vast search space and discover superior arithmetic designs that improve computational efficiency and hardware size within just a few hours. For adders, our approach discovers designs of 128-bit adders that achieve Pareto optimality in theoretical metrics. Compared with the state-of-the-art PrefixRL, our method decreases computational delay and hardware size by up to 26% and 30%, respectively. For multipliers, when compared to RL-MUL, our approach increases speed and reduces size by as much as 49% and 45%. Moreover, the inherent flexibility and scalability of our method enable us to deploy our designs into cutting-edge technologies, as we show that they can be seamlessly integrated into 7nm technology. We believe our work will offer valuable insights into hardware design, further accelerating speed and reducing size through the refined search space and our tree generation methodologies. See our introduction video at https://bit.ly/ArithmeticTree. Codes are released at https://github.com/laiyao1/ArithmeticTree. | [
"['Yao Lai' 'Jinxin Liu' 'David Z. Pan' 'Ping Luo']"
]
|
null | null | 2405.06774 | null | null | http://arxiv.org/pdf/2405.06774v1 | 2024-05-10T18:59:12Z | 2024-05-10T18:59:12Z | Hedging American Put Options with Deep Reinforcement Learning | This article leverages deep reinforcement learning (DRL) to hedge American put options, utilizing the deep deterministic policy gradient (DDPG) method. The agents are first trained and tested with Geometric Brownian Motion (GBM) asset paths and demonstrate superior performance over traditional strategies like the Black-Scholes (BS) Delta, particularly in the presence of transaction costs. To assess the real-world applicability of DRL hedging, a second round of experiments uses a market calibrated stochastic volatility model to train DRL agents. Specifically, 80 put options across 8 symbols are collected, stochastic volatility model coefficients are calibrated for each symbol, and a DRL agent is trained for each of the 80 options by simulating paths of the respective calibrated model. Not only do DRL agents outperform the BS Delta method when testing is conducted using the same calibrated stochastic volatility model data from training, but DRL agents also achieve better results when hedging the true asset path that occurred between the option sale date and the maturity. As such, not only does this study present the first DRL agents tailored for American put option hedging, but results on both simulated and empirical market testing data also suggest the optimality of DRL agents over the BS Delta method in real-world scenarios. Finally, note that this study employs a model-agnostic Chebyshev interpolation method to provide DRL agents with option prices at each time step when a stochastic volatility model is used, thereby providing a general framework for an easy extension to more complex underlying asset processes. | [
"['Reilly Pickard' 'Finn Wredenhagen' 'Julio DeJesus' 'Mario Schlener'\n 'Yuri Lawryshyn']"
]
|
null | null | 2405.06780 | null | null | http://arxiv.org/pdf/2405.06780v1 | 2024-05-10T19:10:45Z | 2024-05-10T19:10:45Z | Deep MMD Gradient Flow without adversarial training | We propose a gradient flow procedure for generative modeling by transporting particles from an initial source distribution to a target distribution, where the gradient field on the particles is given by a noise-adaptive Wasserstein Gradient of the Maximum Mean Discrepancy (MMD). The noise-adaptive MMD is trained on data distributions corrupted by increasing levels of noise, obtained via a forward diffusion process, as commonly used in denoising diffusion probabilistic models. The result is a generalization of MMD Gradient Flow, which we call Diffusion-MMD-Gradient Flow or DMMD. The divergence training procedure is related to discriminator training in Generative Adversarial Networks (GAN), but does not require adversarial training. We obtain competitive empirical performance in unconditional image generation on CIFAR10, MNIST, CELEB-A (64 x 64) and LSUN Church (64 x 64). Furthermore, we demonstrate the validity of the approach when MMD is replaced by a lower bound on the KL divergence. | [
"['Alexandre Galashov' 'Valentin de Bortoli' 'Arthur Gretton']"
]
|
null | null | 2405.06784 | null | null | http://arxiv.org/pdf/2405.06784v1 | 2024-05-10T19:22:24Z | 2024-05-10T19:22:24Z | Open Challenges and Opportunities in Federated Foundation Models Towards
Biomedical Healthcare | This survey explores the transformative impact of foundation models (FMs) in artificial intelligence, focusing on their integration with federated learning (FL) for advancing biomedical research. Foundation models such as ChatGPT, LLaMa, and CLIP, which are trained on vast datasets through methods including unsupervised pretraining, self-supervised learning, instructed fine-tuning, and reinforcement learning from human feedback, represent significant advancements in machine learning. These models, with their ability to generate coherent text and realistic images, are crucial for biomedical applications that require processing diverse data forms such as clinical reports, diagnostic images, and multimodal patient interactions. The incorporation of FL with these sophisticated models presents a promising strategy to harness their analytical power while safeguarding the privacy of sensitive medical data. This approach not only enhances the capabilities of FMs in medical diagnostics and personalized treatment but also addresses critical concerns about data privacy and security in healthcare. This survey reviews the current applications of FMs in federated settings, underscores the challenges, and identifies future research directions including scaling FMs, managing data diversity, and enhancing communication efficiency within FL frameworks. The objective is to encourage further research into the combined potential of FMs and FL, laying the groundwork for groundbreaking healthcare innovations. | [
"['Xingyu Li' 'Lu Peng' 'Yuping Wang' 'Weihua Zhang']"
]
|
null | null | 2405.06816 | null | null | http://arxiv.org/pdf/2405.06816v1 | 2024-05-10T21:32:43Z | 2024-05-10T21:32:43Z | Non-stationary Domain Generalization: Theory and Algorithm | Although recent advances in machine learning have shown success in learning from independent and identically distributed (IID) data, models remain vulnerable to out-of-distribution (OOD) data in an open world. Domain generalization (DG) addresses this issue: it aims to learn a model from multiple source domains that can generalize to unseen target domains. Existing studies on DG have largely focused on stationary settings with homogeneous source domains. However, in many applications, domains may evolve along a specific direction (e.g., time, space). Without accounting for such non-stationary patterns, models trained with existing methods may fail to generalize on OOD data. In this paper, we study domain generalization in non-stationary environments. We first examine the impact of environmental non-stationarity on model performance and establish theoretical upper bounds for the model error at target domains. Then, we propose a novel algorithm based on adaptive invariant representation learning, which leverages the non-stationary pattern to train a model that attains good performance on target domains. Experiments on both synthetic and real data validate the proposed algorithm. | [
"['Thai-Hoang Pham' 'Xueru Zhang' 'Ping Zhang']"
]
|
null | null | 2405.06822 | null | null | http://arxiv.org/pdf/2405.06822v1 | 2024-05-10T21:52:27Z | 2024-05-10T21:52:27Z | MH-pFLID: Model Heterogeneous personalized Federated Learning via
Injection and Distillation for Medical Data Analysis | Federated learning is widely used in medical applications for training global models without needing local data access. However, varying computational capabilities and network architectures across clients (system heterogeneity) pose significant challenges in effectively aggregating information from non-independently and identically distributed (non-IID) data. Current federated learning methods using knowledge distillation require public datasets, raising privacy and data collection issues. Additionally, these datasets require additional local computing and storage resources, which is a burden for medical institutions with limited hardware. In this paper, we introduce a novel federated learning paradigm, named Model Heterogeneous personalized Federated Learning via Injection and Distillation (MH-pFLID). Our framework leverages a lightweight messenger model that carries concentrated information to collect information from each client. We also develop a set of receiver and transmitter modules to receive and send information from the messenger model, so that information can be injected and distilled efficiently. | [
"['Luyuan Xie' 'Manqing Lin' 'Tianyu Luan' 'Cong Li' 'Yuejian Fang'\n 'Qingni Shen' 'Zhonghai Wu']"
]
|
null | null | 2405.06823 | null | null | http://arxiv.org/pdf/2405.06823v2 | 2024-05-14T15:03:12Z | 2024-05-10T21:52:34Z | PLeak: Prompt Leaking Attacks against Large Language Model Applications | Large Language Models (LLMs) enable a new ecosystem with many downstream applications, called LLM applications, with different natural language processing tasks. The functionality and performance of an LLM application highly depend on its system prompt, which instructs the backend LLM on what task to perform. Therefore, an LLM application developer often keeps a system prompt confidential to protect its intellectual property. As a result, a natural attack, called prompt leaking, is to steal the system prompt from an LLM application, which compromises the developer's intellectual property. Existing prompt leaking attacks primarily rely on manually crafted queries, and thus achieve limited effectiveness. In this paper, we design a novel, closed-box prompt leaking attack framework, called PLeak, to optimize an adversarial query such that when the attacker sends it to a target LLM application, its response reveals its own system prompt. We formulate finding such an adversarial query as an optimization problem and solve it with a gradient-based method approximately. Our key idea is to break down the optimization goal by optimizing adversary queries for system prompts incrementally, i.e., starting from the first few tokens of each system prompt step by step until the entire length of the system prompt. We evaluate PLeak in both offline settings and for real-world LLM applications, e.g., those on Poe, a popular platform hosting such applications. Our results show that PLeak can effectively leak system prompts and significantly outperforms not only baselines that manually curate queries but also baselines with optimized queries that are modified and adapted from existing jailbreaking attacks. We responsibly reported the issues to Poe and are still waiting for their response. 
Our implementation is available at this repository: https://github.com/BHui97/PLeak. | [
"['Bo Hui' 'Haolin Yuan' 'Neil Gong' 'Philippe Burlina' 'Yinzhi Cao']"
]
|
null | null | 2405.06835 | null | null | http://arxiv.org/pdf/2405.06835v1 | 2024-05-10T22:18:43Z | 2024-05-10T22:18:43Z | Automating Code Adaptation for MLOps -- A Benchmarking Study on LLMs | This paper explores the possibilities of the current generation of Large Language Models for incorporating Machine Learning Operations (MLOps) functionalities into ML training code bases. We evaluate the performance of OpenAI (gpt-3.5-turbo) and WizardCoder (open-source, 15B parameters) models on the automated accomplishment of various MLOps functionalities in different settings. We perform a benchmarking study that assesses the ability of these models to: (1) adapt existing code samples (Inlining) with component-specific MLOps functionality such as MLflow and Weights & Biases for experiment tracking, Optuna for hyperparameter optimization etc., and (2) perform the task of Translation from one component of an MLOps functionality to another, e.g., translating existing GitPython library based version control code to Data Version Control library based. We also propose three different approaches that involve teaching LLMs to comprehend the API documentation of the components as a reference while accomplishing the Translation tasks. In our evaluations, the gpt-3.5-turbo model significantly outperforms WizardCoder by achieving impressive Pass@3 accuracy in model optimization (55% compared to 0% by WizardCoder), experiment tracking (100%, compared to 62.5% by WizardCoder), model registration (92% compared to 42% by WizardCoder) and hyperparameter optimization (83% compared to 58% by WizardCoder) on average, in their best possible settings, showcasing its superior code adaptability performance in complex MLOps tasks. | [
"['Harsh Patel' 'Buvaneswari A. Ramanan' 'Manzoor A. Khan'\n 'Thomas Williams' 'Brian Friedman' 'Lawrence Drabeck']"
]
|
null | null | 2405.06836 | null | null | http://arxiv.org/pdf/2405.06836v1 | 2024-05-10T22:19:12Z | 2024-05-10T22:19:12Z | Improving Targeted Molecule Generation through Language Model
Fine-Tuning Via Reinforcement Learning | Developing new drugs is laborious and costly, demanding extensive time investment. In this study, we introduce an innovative de-novo drug design strategy, which harnesses the capabilities of language models to devise targeted drugs for specific proteins. Employing a Reinforcement Learning (RL) framework utilizing Proximal Policy Optimization (PPO), we refine the model to acquire a policy for generating drugs tailored to protein targets. Our method integrates a composite reward function, combining considerations of drug-target interaction and molecular validity. Following RL fine-tuning, our approach demonstrates promising outcomes, yielding notable improvements in molecular validity, interaction efficacy, and critical chemical properties, achieving 65.37 for Quantitative Estimation of Drug-likeness (QED), 321.55 for Molecular Weight (MW), and 4.47 for Octanol-Water Partition Coefficient (logP), respectively. Furthermore, out of the generated drugs, only 0.041% do not exhibit novelty. | [
"['Salma J. Ahmed' 'Mustafa A. Elattar']"
]
|
null | null | 2405.06841 | null | null | http://arxiv.org/pdf/2405.06841v2 | 2024-05-16T14:23:23Z | 2024-05-10T22:40:01Z | Bridging the Gap: Protocol Towards Fair and Consistent Affect Analysis | The increasing integration of machine learning algorithms in daily life underscores the critical need for fairness and equity in their deployment. As these technologies play a pivotal role in decision-making, addressing biases across diverse subpopulation groups, including age, gender, and race, becomes paramount. Automatic affect analysis, at the intersection of physiology, psychology, and machine learning, has seen significant development. However, existing databases and methodologies lack uniformity, leading to biased evaluations. This work addresses these issues by analyzing six affective databases, annotating demographic attributes, and proposing a common protocol for database partitioning. Emphasis is placed on fairness in evaluations. Extensive experiments with baseline and state-of-the-art methods demonstrate the impact of these changes, revealing the inadequacy of prior assessments. The findings underscore the importance of considering demographic attributes in affect analysis research and provide a foundation for more equitable methodologies. Our annotations, code and pre-trained models are available at: https://github.com/dkollias/Fair-Consistent-Affect-Analysis | [
"['Guanyu Hu' 'Eleni Papadopoulou' 'Dimitrios Kollias' 'Paraskevi Tzouveli'\n 'Jie Wei' 'Xinyu Yang']"
]
|
null | null | 2405.06848 | null | null | http://arxiv.org/pdf/2405.06848v1 | 2024-05-10T23:20:46Z | 2024-05-10T23:20:46Z | ISR: Invertible Symbolic Regression | We introduce an Invertible Symbolic Regression (ISR) method. It is a machine learning technique that generates analytical relationships between inputs and outputs of a given dataset via invertible maps (or architectures). The proposed ISR method naturally combines the principles of Invertible Neural Networks (INNs) and Equation Learner (EQL), a neural network-based symbolic architecture for function learning. In particular, we transform the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic invertible architecture that allows for efficient gradient-based learning. The proposed ISR framework also relies on sparsity promoting regularization, allowing the discovery of concise and interpretable invertible expressions. We show that ISR can serve as a (symbolic) normalizing flow for density estimation tasks. Furthermore, we highlight its practical applicability in solving inverse problems, including a benchmark inverse kinematics problem, and notably, a geoacoustic inversion problem in oceanography aimed at inferring posterior distributions of underlying seabed parameters from acoustic signals. | [
"['Tony Tohme' 'Mohammad Javad Khojasteh' 'Mohsen Sadr' 'Florian Meyer'\n 'Kamal Youcef-Toumi']"
]
|
null | null | 2405.06849 | null | null | http://arxiv.org/pdf/2405.06849v1 | 2024-05-10T23:21:16Z | 2024-05-10T23:21:16Z | GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs | Vision graph neural networks (ViG) offer a new avenue for exploration in computer vision. A major bottleneck in ViGs is the inefficient k-nearest neighbor (KNN) operation used for graph construction. To solve this issue, we propose a new method for designing ViGs, Dynamic Axial Graph Construction (DAGC), which is more efficient than KNN as it limits the number of considered graph connections made within an image. Additionally, we propose a novel CNN-GNN architecture, GreedyViG, which uses DAGC. Extensive experiments show that GreedyViG beats existing ViG, CNN, and ViT architectures in terms of accuracy, GMACs, and parameters on image classification, object detection, instance segmentation, and semantic segmentation tasks. Our smallest model, GreedyViG-S, achieves 81.1% top-1 accuracy on ImageNet-1K, 2.9% higher than Vision GNN and 2.2% higher than Vision HyperGraph Neural Network (ViHGNN), with less GMACs and a similar number of parameters. Our largest model, GreedyViG-B obtains 83.9% top-1 accuracy, 0.2% higher than Vision GNN, with a 66.6% decrease in parameters and a 69% decrease in GMACs. GreedyViG-B also obtains the same accuracy as ViHGNN with a 67.3% decrease in parameters and a 71.3% decrease in GMACs. Our work shows that hybrid CNN-GNN architectures not only provide a new avenue for designing efficient models, but that they can also exceed the performance of current state-of-the-art models. | [
"['Mustafa Munir' 'William Avery' 'Md Mostafijur Rahman' 'Radu Marculescu']"
]
|
null | null | 2405.06855 | null | null | http://arxiv.org/pdf/2405.06855v1 | 2024-05-10T23:48:37Z | 2024-05-10T23:48:37Z | Linear Explanations for Individual Neurons | In recent years many methods have been developed to understand the internal workings of neural networks, often by describing the function of individual neurons in the model. However, these methods typically only focus on explaining the very highest activations of a neuron. In this paper we show this is not sufficient, and that the highest activation range is only responsible for a very small percentage of the neuron's causal effect. In addition, inputs causing lower activations are often very different and can't be reliably predicted by only looking at high activations. We propose that neurons should instead be understood as a linear combination of concepts, and develop an efficient method for producing these linear explanations. In addition, we show how to automatically evaluate description quality using simulation, i.e. predicting neuron activations on unseen inputs in a vision setting. | [
"['Tuomas Oikarinen' 'Tsui-Wei Weng']"
]
|
null | null | 2405.06859 | null | null | http://arxiv.org/pdf/2405.06859v1 | 2024-05-11T00:43:56Z | 2024-05-11T00:43:56Z | Reimplementation of Learning to Reweight Examples for Robust Deep
Learning | Deep neural networks (DNNs) have been used to create models for many complex analysis problems like image recognition and medical diagnosis. DNNs are a popular tool within machine learning due to their ability to model complex patterns and distributions. However, the performance of these networks is highly dependent on the quality of the data used to train the models. Two characteristics of these sets, noisy labels and training set biases, are known to frequently cause poor generalization performance as a result of overfitting to the training set. This paper aims to solve this problem using the approach proposed by Ren et al. (2018) using meta-training and online weight approximation. We will first implement a toy-problem to crudely verify the claims made by the authors of Ren et al. (2018) and then venture into using the approach to solve a real world problem of Skin-cancer detection using an imbalanced image dataset. | [
"['Parth Patil' 'Ben Boardley' 'Jack Gardner' 'Emily Loiselle'\n 'Deerajkumar Parthipan']"
]
|
null | null | 2405.06869 | null | null | http://arxiv.org/pdf/2405.06869v1 | 2024-05-11T02:03:11Z | 2024-05-11T02:03:11Z | Sharpness-Aware Minimization for Evolutionary Feature Construction in
Regression | In recent years, genetic programming (GP)-based evolutionary feature construction has achieved significant success. However, a primary challenge with evolutionary feature construction is its tendency to overfit the training data, resulting in poor generalization on unseen data. In this research, we draw inspiration from PAC-Bayesian theory and propose using sharpness-aware minimization in function space to discover symbolic features that exhibit robust performance within a smooth loss landscape in the semantic space. By optimizing sharpness in conjunction with cross-validation loss, as well as designing a sharpness reduction layer, the proposed method effectively mitigates the overfitting problem of GP, especially when dealing with a limited number of instances or in the presence of label noise. Experimental results on 58 real-world regression datasets show that our approach outperforms standard GP as well as six state-of-the-art complexity measurement methods for GP in controlling overfitting. Furthermore, the ensemble version of GP with sharpness-aware minimization demonstrates superior performance compared to nine fine-tuned machine learning and symbolic regression algorithms, including XGBoost and LightGBM. | [
"['Hengzhe Zhang' 'Qi Chen' 'Bing Xue' 'Wolfgang Banzhaf' 'Mengjie Zhang']"
]
|
null | null | 2405.06884 | null | null | http://arxiv.org/pdf/2405.06884v1 | 2024-05-11T02:35:08Z | 2024-05-11T02:35:08Z | Efficient PAC Learnability of Dynamical Systems Over Multilayer Networks | Networked dynamical systems are widely used as formal models of real-world cascading phenomena, such as the spread of diseases and information. Prior research has addressed the problem of learning the behavior of an unknown dynamical system when the underlying network has a single layer. In this work, we study the learnability of dynamical systems over multilayer networks, which are more realistic and challenging. First, we present an efficient PAC learning algorithm with provable guarantees to show that the learner only requires a small number of training examples to infer an unknown system. We further provide a tight analysis of the Natarajan dimension which measures the model complexity. Asymptotically, our bound on the Natarajan dimension is tight for almost all multilayer graphs. The techniques and insights from our work provide the theoretical foundations for future investigations of learning problems for multilayer dynamical systems. | [
"['Zirou Qiu' 'Abhijin Adiga' 'Madhav V. Marathe' 'S. S. Ravi'\n 'Daniel J. Rosenkrantz' 'Richard E. Stearns' 'Anil Vullikanti']"
]
|
null | null | 2405.06902 | null | null | http://arxiv.org/pdf/2405.06902v2 | 2024-05-29T17:33:47Z | 2024-05-11T04:15:47Z | Causal Inference from Slowly Varying Nonstationary Processes | Causal inference from observational data following the restricted structural causal models (SCM) framework hinges largely on the asymmetry between cause and effect from the data generating mechanisms, such as non-Gaussianity or non-linearity. This methodology can be adapted to stationary time series, yet inferring causal relationships from nonstationary time series remains a challenging task. In this work, we propose a new class of restricted SCM, via a time-varying filter and stationary noise, and exploit the asymmetry from nonstationarity for causal identification in both bivariate and network settings. We propose efficient procedures by leveraging powerful estimates of the bivariate evolutionary spectra for slowly varying processes. Various synthetic and real datasets that involve high-order and non-smooth filters are evaluated to demonstrate the effectiveness of our proposed methodology. | [
"['Kang Du' 'Yu Xiang']"
]
|
null | null | 2405.06904 | null | null | http://arxiv.org/pdf/2405.06904v2 | 2024-05-15T09:29:58Z | 2024-05-11T04:21:32Z | Generation of Granular-Balls for Clustering Based on the Principle of
Justifiable Granularity | Efficient and robust data clustering remains a challenging task in the field of data analysis. Recent efforts have explored the integration of granular-ball (GB) computing with clustering algorithms to address this challenge, yielding promising results. However, existing methods for generating GBs often rely on single indicators to measure GB quality and employ threshold-based or greedy strategies, potentially leading to GBs that do not accurately capture the underlying data distribution. To address these limitations, this article introduces a novel GB generation method. The originality of this method lies in leveraging the principle of justifiable granularity to measure the quality of a GB for clustering tasks. To be precise, we define the coverage and specificity of a GB and introduce a comprehensive measure for assessing GB quality. Utilizing this quality measure, the method incorporates a binary tree pruning-based strategy and an anomaly detection method to determine the best combination of sub-GBs for each GB and identify abnormal GBs, respectively. Compared to previous GB generation methods, the new method maximizes the overall quality of generated GBs while ensuring alignment with the data distribution, thereby enhancing the rationality of the generated GBs. Experimental results obtained from both synthetic and publicly available datasets underscore the effectiveness of the proposed GB generation method, showcasing improvements in clustering accuracy and normalized mutual information. | [
"['Zihang Jia' 'Zhen Zhang' 'Witold Pedrycz']"
]
|
null | null | 2405.06907 | null | null | http://arxiv.org/pdf/2405.06907v2 | 2024-05-21T20:35:55Z | 2024-05-11T04:29:03Z | AIOS Compiler: LLM as Interpreter for Natural Language Programming and
Flow Programming of AI Agents | Since their inception, programming languages have trended towards greater readability and lower barriers for programmers. Following this trend, natural language can be a promising type of programming language that provides great flexibility and usability and helps towards the democratization of programming. However, the inherent vagueness, ambiguity, and verbosity of natural language pose significant challenges in developing an interpreter that can accurately understand the programming logic and execute instructions written in natural language. Fortunately, recent advancements in Large Language Models (LLMs) have demonstrated remarkable proficiency in interpreting complex natural language. Inspired by this, we develop a novel system for Code Representation and Execution (CoRE), which employs LLM as interpreter to interpret and execute natural language instructions. The proposed system unifies natural language programming, pseudo-code programming, and flow programming under the same representation for constructing language agents, while LLM serves as the interpreter to interpret and execute the agent programs. In this paper, we begin with defining the programming syntax that structures natural language instructions logically. During the execution, we incorporate external memory to minimize redundancy. Furthermore, we equip the designed interpreter with the capability to invoke external tools, compensating for the limitations of LLM in specialized domains or when accessing real-time information. This work is open-source at https://github.com/agiresearch/CoRE, https://github.com/agiresearch/OpenAGI, and https://github.com/agiresearch/AIOS. | [
"['Shuyuan Xu' 'Zelong Li' 'Kai Mei' 'Yongfeng Zhang']"
]
|
null | null | 2405.06909 | null | null | http://arxiv.org/pdf/2405.06909v1 | 2024-05-11T04:36:46Z | 2024-05-11T04:36:46Z | Fairness in Reinforcement Learning: A Survey | While our understanding of fairness in machine learning has significantly progressed, our understanding of fairness in reinforcement learning (RL) remains nascent. Most of the attention has been on fairness in one-shot classification tasks; however, real-world, RL-enabled systems (e.g., autonomous vehicles) are much more complicated in that agents operate in dynamic environments over a long period of time. To ensure the responsible development and deployment of these systems, we must better understand fairness in RL. In this paper, we survey the literature to provide the most up-to-date snapshot of the frontiers of fairness in RL. We start by reviewing where fairness considerations can arise in RL, then discuss the various definitions of fairness in RL that have been put forth thus far. We continue to highlight the methodologies researchers used to implement fairness in single- and multi-agent RL systems before showcasing the distinct application domains that fair RL has been investigated in. Finally, we critically examine gaps in the literature, such as understanding fairness in the context of RLHF, that still need to be addressed in future work to truly operationalize fair RL in real-world systems. | [
"['Anka Reuel' 'Devin Ma']"
]
|
null | null | 2405.06910 | null | null | http://arxiv.org/pdf/2405.06910v1 | 2024-05-11T04:38:07Z | 2024-05-11T04:38:07Z | Generative flow induced neural architecture search: Towards discovering
optimal architecture in wavelet neural operator | We propose a generative flow-induced neural architecture search algorithm. The proposed approach devises simple feed-forward neural networks to learn stochastic policies to generate sequences of architecture hyperparameters such that the generated states are in proportion with the reward from the terminal state. We demonstrate the efficacy of the proposed search algorithm on the wavelet neural operator (WNO), where we learn a policy to generate a sequence of hyperparameters like wavelet basis and activation operators for wavelet integral blocks. While the trajectory of the generated wavelet basis and activation sequence is cast as flow, the policy is learned by minimizing the flow violation between each state in the trajectory and maximizing the reward from the terminal state. In the terminal state, we train WNO simultaneously to guide the search. We propose to use the exponent of the negative of the WNO loss on the validation dataset as the reward function. While the grid search-based neural architecture generation algorithms foresee every combination, the proposed framework generates the most probable sequence based on the positive reward from the terminal state, thereby reducing exploration time. Compared to reinforcement learning schemes, where complete episodic training is required to get the reward, the proposed algorithm generates the hyperparameter trajectory sequentially. Through four fluid mechanics-oriented problems, we illustrate that the learned policies can sample the best-performing architecture of the neural operator, thereby improving the performance of the vanilla wavelet neural operator. | [
"['Hartej Soin' 'Tapas Tripura' 'Souvik Chakraborty']"
]
|
null | null | 2405.06917 | null | null | http://arxiv.org/pdf/2405.06917v1 | 2024-05-11T05:13:10Z | 2024-05-11T05:13:10Z | Design Requirements for Human-Centered Graph Neural Network Explanations | Graph neural networks (GNNs) are powerful graph-based machine-learning models that are popular in various domains, e.g., social media, transportation, and drug discovery. However, owing to complex data representations, GNNs do not easily allow for human-intelligible explanations of their predictions, which can decrease trust in them as well as deter any collaboration opportunities between the AI expert and non-technical, domain expert. Here, we first discuss the two papers that aim to provide GNN explanations to domain experts in an accessible manner and then establish a set of design requirements for human-centered GNN explanations. Finally, we offer two example prototypes to demonstrate some of those proposed requirements. | [
"['Pantea Habibi' 'Peyman Baghershahi' 'Sourav Medya'\n 'Debaleena Chattopadhyay']"
]
|
null | null | 2405.06925 | null | null | http://arxiv.org/pdf/2405.06925v2 | 2024-05-16T14:17:10Z | 2024-05-11T06:10:05Z | Semi-supervised Anomaly Detection via Adaptive Reinforcement
Learning-Enabled Method with Causal Inference for Sensor Signals | Semi-supervised anomaly detection for sensor signals is critical in ensuring system reliability in smart manufacturing. However, existing methods rely heavily on data correlation, neglecting causality and leading to potential misinterpretations due to confounding factors. Moreover, while current reinforcement learning-based methods can effectively identify known and unknown anomalies with limited labeled samples, these methods still face several challenges, such as under-utilization of priori knowledge, lack of model flexibility, and deficient reward feedback during environmental interactions. To address the above problems, this paper innovatively constructs a counterfactual causal reinforcement learning model, termed Triple-Assisted Causal Reinforcement Learning Anomaly Detector (Tri-CRLAD). The model leverages causal inference to extract the intrinsic causal feature in data, enhancing the agent's utilization of prior knowledge and improving its generalization capability. In addition, Tri-CRLAD features a triple decision support mechanism, including a sampling strategy based on historical similarity, an adaptive threshold smoothing adjustment strategy, and an adaptive decision reward mechanism. These mechanisms further enhance the flexibility and generalization ability of the model, enabling it to effectively respond to various complex and dynamically changing environments. Experimental results across seven diverse sensor signal datasets demonstrate that Tri-CRLAD outperforms nine state-of-the-art baseline methods. Notably, Tri-CRLAD achieves up to a 23% improvement in anomaly detection stability with minimal known anomaly samples, highlighting its potential in semi-supervised anomaly detection scenarios. Our code is available at https://github.com/Aoudsung/Tri-CRLAD. | [
"['Xiangwei Chen' 'Ruliang Xiaoa' 'Zhixia Zeng' 'Zhipeng Qiu' 'Shi Zhang'\n 'Xin Du']"
]
|
null | null | 2405.06965 | null | null | http://arxiv.org/pdf/2405.06965v1 | 2024-05-11T09:22:44Z | 2024-05-11T09:22:44Z | A De-singularity Subgradient Approach for the Extended Weber Location
Problem | The extended Weber location problem is a classical optimization problem that has inspired some new works in several machine learning scenarios recently. However, most existing algorithms may get stuck due to the singularity at the data points when the power of the cost function $1 \leqslant q < 2$, such as the widely-used iterative Weiszfeld approach. In this paper, we establish a de-singularity subgradient approach for this problem. We also provide a complete proof of convergence which has fixed some incomplete statements of the proofs for some previous Weiszfeld algorithms. Moreover, we deduce a new theoretical result of superlinear convergence for the iteration sequence in a special case where the minimum point is a singular point. We conduct extensive experiments in a real-world machine learning scenario to show that the proposed approach solves the singularity problem, produces the same results as in the non-singularity cases, and shows a reasonable rate of linear convergence. The results also indicate that the $q$-th power case ($1<q<2$) is more advantageous than the $1$-st power case and the $2$-nd power case in some situations. Hence the de-singularity subgradient approach is beneficial to advancing both theory and practice for the extended Weber location problem. | [
"['Zhao-Rong Lai' 'Xiaotian Wu' 'Liangda Fang' 'Ziliang Chen']"
]
|
null | null | 2405.06975 | null | null | http://arxiv.org/pdf/2405.06975v1 | 2024-05-11T10:05:55Z | 2024-05-11T10:05:55Z | Input Snapshots Fusion for Scalable Discrete Dynamic Graph Neural
Networks | Dynamic graphs are ubiquitous in the real world, yet there is a lack of suitable theoretical frameworks to effectively extend existing static graph models into the temporal domain. Additionally, for link prediction tasks on discrete dynamic graphs, the requirement of substantial GPU memory to store embeddings of all nodes hinders the scalability of existing models. In this paper, we introduce an Input Snapshots Fusion based Dynamic Graph Neural Network (SFDyG). By eliminating the partitioning of snapshots within the input window, we obtain a multi-graph (more than one edge between two nodes). Subsequently, by introducing a graph denoising problem with the assumption of temporal decayed smoothing, we integrate Hawkes process theory into Graph Neural Networks to model the generated multi-graph. Furthermore, based on the multi-graph, we propose a scalable three-step mini-batch training method and demonstrate its equivalence to full-batch training counterpart. Our experiments, conducted on eight distinct dynamic graph datasets for future link prediction tasks, revealed that SFDyG generally surpasses related methods. | [
"['QingGuo Qi' 'Hongyang Chen' 'Minhao Cheng' 'Han Liu']"
]
|
null | null | 2405.06979 | null | null | http://arxiv.org/pdf/2405.06979v2 | 2024-05-20T08:37:48Z | 2024-05-11T10:22:32Z | Robust Semi-supervised Learning by Wisely Leveraging Open-set Data | Open-set Semi-supervised Learning (OSSL) holds a realistic setting that unlabeled data may come from classes unseen in the labeled set, i.e., out-of-distribution (OOD) data, which could cause performance degradation in conventional SSL models. To handle this issue, except for the traditional in-distribution (ID) classifier, some existing OSSL approaches employ an extra OOD detection module to avoid the potential negative impact of the OOD data. Nevertheless, these approaches typically employ the entire set of open-set data during their training process, which may contain data unfriendly to the OSSL task that can negatively influence the model performance. This inspires us to develop a robust open-set data selection strategy for OSSL. Through a theoretical understanding from the perspective of learning theory, we propose Wise Open-set Semi-supervised Learning (WiseOpen), a generic OSSL framework that selectively leverages the open-set data for training the model. By applying a gradient-variance-based selection mechanism, WiseOpen exploits a friendly subset instead of the whole open-set dataset to enhance the model's capability of ID classification. Moreover, to reduce the computational expense, we also propose two practical variants of WiseOpen by adopting low-frequency update and loss-based selection respectively. Extensive experiments demonstrate the effectiveness of WiseOpen in comparison with the state-of-the-art. | [
"['Yang Yang' 'Nan Jiang' 'Yi Xu' 'De-Chuan Zhan']"
]
|
null | null | 2405.06985 | null | null | http://arxiv.org/pdf/2405.06985v1 | 2024-05-11T10:59:09Z | 2024-05-11T10:59:09Z | RoTHP: Rotary Position Embedding-based Transformer Hawkes Process | Temporal Point Processes (TPPs), especially Hawkes Process are commonly used for modeling asynchronous event sequences data such as financial transactions and user behaviors in social networks. Due to the strong fitting ability of neural networks, various neural Temporal Point Processes are proposed, among which the Neural Hawkes Processes based on self-attention such as Transformer Hawkes Process (THP) achieve distinct performance improvement. Although the THP has gained increasing studies, it still suffers from the sequence prediction issue, i.e., training on history sequences and inferencing about the future, which is a prevalent paradigm in realistic sequence analysis tasks. What's more, conventional THP and its variants simply adopt initial sinusoid embedding in transformers, which shows performance sensitivity to temporal change or noise in sequence data analysis by our empirical study. To deal with the problems, we propose a new Rotary Position Embedding-based THP (RoTHP) architecture in this paper. Notably, we show the translation invariance property and sequence prediction flexibility of our RoTHP induced by the relative time embeddings when coupled with Hawkes process theoretically. Furthermore, we demonstrate empirically that our RoTHP can be better generalized in sequence data scenarios with timestamp translations and in sequence prediction tasks. | [
"['Anningzhe Gao' 'Shan Dai']"
]
|
null | null | 2405.06986 | null | null | http://arxiv.org/pdf/2405.06986v1 | 2024-05-11T10:59:56Z | 2024-05-11T10:59:56Z | Revisiting the Efficacy of Signal Decomposition in AI-based Time Series
Prediction | Time series prediction is a fundamental problem in scientific exploration and artificial intelligence (AI) technologies have substantially bolstered its efficiency and accuracy. A well-established paradigm in AI-driven time series prediction is injecting physical knowledge into neural networks through signal decomposition methods, and sustaining progress in numerous scenarios has been reported. However, we uncover non-negligible evidence that challenges the effectiveness of signal decomposition in AI-based time series prediction. We confirm that improper dataset processing with subtle future label leakage is unfortunately widely adopted, possibly yielding abnormally superior but misleading results. By processing data in a strictly causal way without any future information, the effectiveness of additional decomposed signals diminishes. Our work probably identifies an ingrained and universal error in time series modeling, and the de facto progress in relevant areas is expected to be revisited and calibrated to prevent future scientific detours and minimize practical losses. | [
"['Kexin Jiang' 'Chuhan Wu' 'Yaoran Chen']"
]
|
null | null | 2405.06992 | null | null | http://arxiv.org/pdf/2405.06992v1 | 2024-05-11T11:50:43Z | 2024-05-11T11:50:43Z | ResSurv: Cancer Survival Analysis Prediction Model Based on Residual
Networks | Survival prediction is an important branch of cancer prognosis analysis. A model that predicts survival risk from TCGA genomics data can discover genes related to cancer and provide diagnosis and treatment recommendations based on patient characteristics. We found that deep learning models based on Cox proportional hazards often suffer from overfitting when dealing with high-throughput data. Moreover, we found that as the number of network layers increases, the experimental results do not get better, and network degradation occurs. To address this problem, we propose a new framework based on deep residual learning that combines the ideas of Cox proportional hazards and residual learning, which we name ResSurv. First, ResSurv is a feed-forward deep learning network stacked from multiple basic ResNet blocks. In each ResNet block, we add a normalization layer to prevent gradient vanishing and gradient explosion. Second, for the loss function of the neural network, we inherit the Cox proportional hazards method, applying the semi-parametric form of the CPH model to the neural network, combined with the partial likelihood model, to establish the loss function and perform backpropagation and gradient updates. Finally, we compared ResSurv networks of different depths and found that we can effectively extract high-dimensional features. Ablation experiments and comparative experiments prove that our model has reached SOTA (state of the art) in the field of deep learning, and that our network can effectively extract deep information. | [
"['Wankang Zhai']"
]
|
null | null | 2405.06993 | null | null | http://arxiv.org/pdf/2405.06993v1 | 2024-05-11T11:55:26Z | 2024-05-11T11:55:26Z | Robust Model Aggregation for Heterogeneous Federated Learning: Analysis
and Optimizations | Conventional synchronous federated learning (SFL) frameworks suffer from performance degradation in heterogeneous systems due to imbalanced local data size and diverse computing power on the client side. To address this problem, asynchronous FL (AFL) and semi-asynchronous FL have been proposed to recover the performance loss by allowing asynchronous aggregation. However, asynchronous aggregation incurs a new problem of inconsistency between local updates and global updates. Motivated by the issues of conventional SFL and AFL, we first propose a time-driven SFL (T-SFL) framework for heterogeneous systems. The core idea of T-SFL is that the server aggregates the models from different clients, each with varying numbers of iterations, at regular time intervals. To evaluate the learning performance of T-SFL, we provide an upper bound on the global loss function. Further, we optimize the aggregation weights to minimize the developed upper bound. Then, we develop a discriminative model selection (DMS) algorithm that removes local models from clients whose number of iterations falls below a predetermined threshold. In particular, this algorithm ensures that each client's aggregation weight accurately reflects its true contribution to the global model update, thereby improving the efficiency and robustness of the system. To validate the effectiveness of T-SFL with the DMS algorithm, we conduct extensive experiments using several popular datasets including MNIST, Cifar-10, Fashion-MNIST, and SVHN. The experimental results demonstrate that T-SFL with the DMS algorithm can reduce the latency of conventional SFL by 50%, while achieving an average 3% improvement in learning accuracy over state-of-the-art AFL algorithms. | [
"['Yumeng Shao' 'Jun Li' 'Long Shi' 'Kang Wei' 'Ming Ding' 'Qianmu Li'\n 'Zengxiang Li' 'Wen Chen' 'Shi Jin']"
]
|
null | null | 2405.06994 | null | null | http://arxiv.org/pdf/2405.06994v1 | 2024-05-11T12:02:24Z | 2024-05-11T12:02:24Z | GRASP-GCN: Graph-Shape Prioritization for Neural Architecture Search
under Distribution Shifts | Neural Architecture Search (NAS) methods have been shown to output networks that largely outperform human-designed networks. However, conventional NAS methods have mostly tackled the single dataset scenario, incurring a large computational cost as the procedure has to be run from scratch for every new dataset. In this work, we focus on predictor-based algorithms and propose a simple and efficient way of improving their prediction performance when dealing with data distribution shifts. We exploit the Kronecker-product on the randomly wired search-space and create a small NAS benchmark composed of networks trained over four different datasets. To improve the generalization abilities, we propose GRASP-GCN, a ranking Graph Convolutional Network that takes as additional input the shape of the layers of the neural networks. GRASP-GCN is trained with not-at-convergence accuracies, improves the state of the art by 3.3% on Cifar-10, and moreover increases generalization abilities under data distribution shift. | [
"['Sofia Casarin' 'Oswald Lanz' 'Sergio Escalera']"
]
|
null | null | 2405.07004 | null | null | http://arxiv.org/pdf/2405.07004v1 | 2024-05-11T12:55:10Z | 2024-05-11T12:55:10Z | Stealthy Imitation: Reward-guided Environment-free Policy Stealing | Deep reinforcement learning policies, which are integral to modern control systems, represent valuable intellectual property. The development of these policies demands considerable resources, such as domain expertise, simulation fidelity, and real-world validation. These policies are potentially vulnerable to model stealing attacks, which aim to replicate their functionality using only black-box access. In this paper, we propose Stealthy Imitation, the first attack designed to steal policies without access to the environment or knowledge of the input range. This setup has not been considered by previous model stealing methods. Lacking access to the victim's input state distribution, Stealthy Imitation fits a reward model that allows it to be approximated. We show that the victim policy is harder to imitate when the distribution of the attack queries matches that of the victim. We evaluate our approach across diverse, high-dimensional control tasks and consistently outperform prior data-free approaches adapted for policy stealing. Lastly, we propose a countermeasure that significantly diminishes the effectiveness of the attack. | [
"['Zhixiong Zhuang' 'Maria-Irina Nicolae' 'Mario Fritz']"
]
|
null | null | 2405.07011 | null | null | http://arxiv.org/pdf/2405.07011v1 | 2024-05-11T13:11:53Z | 2024-05-11T13:11:53Z | Fair Graph Representation Learning via Sensitive Attribute
Disentanglement | Group fairness for Graph Neural Networks (GNNs), which emphasizes algorithmic decisions neither favoring nor harming certain groups defined by sensitive attributes (e.g., race and gender), has gained considerable attention. In particular, the objective of group fairness is to ensure that the decisions made by GNNs are independent of the sensitive attribute. To achieve this objective, most existing approaches involve eliminating sensitive attribute information in node representations or algorithmic decisions. However, such approaches may also eliminate task-related information due to its inherent correlation with the sensitive attribute, leading to a sacrifice in utility. In this work, we focus on improving the fairness of GNNs while preserving task-related information and propose a fair GNN framework named FairSAD. Instead of eliminating sensitive attribute information, FairSAD enhances the fairness of GNNs via Sensitive Attribute Disentanglement (SAD), which separates the sensitive attribute-related information into an independent component to mitigate its impact. Additionally, FairSAD utilizes a channel masking mechanism to adaptively identify the sensitive attribute-related component and subsequently decorrelates it. Overall, FairSAD minimizes the impact of the sensitive attribute on GNN outcomes rather than eliminating sensitive attributes, thereby preserving task-related information associated with the sensitive attribute. Furthermore, experiments conducted on several real-world datasets demonstrate that FairSAD outperforms other state-of-the-art methods by a significant margin in terms of both fairness and utility performance. Our source code is available at https://github.com/ZzoomD/FairSAD. | [
"['Yuchang Zhu' 'Jintang Li' 'Zibin Zheng' 'Liang Chen']"
]
|
null | null | 2405.07020 | null | null | http://arxiv.org/pdf/2405.07020v1 | 2024-05-11T13:59:52Z | 2024-05-11T13:59:52Z | Adaptive Online Bayesian Estimation of Frequency Distributions with
Local Differential Privacy | We propose a novel Bayesian approach for the adaptive and online estimation of the frequency distribution of a finite number of categories under the local differential privacy (LDP) framework. The proposed algorithm performs Bayesian parameter estimation via posterior sampling and adapts the randomization mechanism for LDP based on the obtained posterior samples. We propose a randomized mechanism for LDP which uses a subset of categories as an input and whose performance depends on the selected subset and the true frequency distribution. By using the posterior sample as an estimate of the frequency distribution, the algorithm performs a computationally tractable subset selection step to maximize the utility of the privatized response of the next user. We propose several utility functions related to well-known information metrics, such as (but not limited to) Fisher information matrix, total variation distance, and information entropy. We compare each of these utility metrics in terms of their computational complexity. We employ stochastic gradient Langevin dynamics for posterior sampling, a computationally efficient approximate Markov chain Monte Carlo method. We provide a theoretical analysis showing that (i) the posterior distribution targeted by the algorithm converges to the true parameter even for approximate posterior sampling, and (ii) the algorithm selects the optimal subset with high probability if posterior sampling is performed exactly. We also provide numerical results that empirically demonstrate the estimation accuracy of our algorithm where we compare it with nonadaptive and semi-adaptive approaches under experimental settings with various combinations of privacy parameters and population distribution parameters. | [
"['Soner Aydin' 'Sinan Yildirim']"
]
|
null | null | 2405.07022 | null | null | http://arxiv.org/pdf/2405.07022v1 | 2024-05-11T14:15:13Z | 2024-05-11T14:15:13Z | DTMamba : Dual Twin Mamba for Time Series Forecasting | We utilized the Mamba model for time series data prediction tasks, and the experimental results indicate that our model performs well. | [
"['Zexue Wu' 'Yifeng Gong' 'Aoqian Zhang']"
]
|
null | null | 2405.07024 | null | null | http://arxiv.org/pdf/2405.07024v1 | 2024-05-11T14:41:48Z | 2024-05-11T14:41:48Z | Demystifying the Hypercomplex: Inductive Biases in Hypercomplex Deep
Learning | Hypercomplex algebras have recently been gaining prominence in the field of deep learning owing to the advantages of their division algebras over real vector spaces and their superior results when dealing with multidimensional signals in real-world 3D and 4D paradigms. This paper provides a foundational framework that serves as a roadmap for understanding why hypercomplex deep learning methods are so successful and how their potential can be exploited. Such a theoretical framework is described in terms of inductive bias, i.e., a collection of assumptions, properties, and constraints that are built into training algorithms to guide their learning process toward more efficient and accurate solutions. We show that it is possible to derive specific inductive biases in the hypercomplex domains, which extend complex numbers to encompass diverse numbers and data structures. These biases prove effective in managing the distinctive properties of these domains, as well as the complex structures of multidimensional and multimodal signals. This novel perspective for hypercomplex deep learning promises to both demystify this class of methods and clarify their potential, under a unifying framework, and in this way promotes hypercomplex models as viable alternatives to traditional real-valued deep learning for multidimensional signal processing. | [
"['Danilo Comminiello' 'Eleonora Grassucci' 'Danilo P. Mandic'\n 'Aurelio Uncini']"
]
|
null | null | 2405.07030 | null | null | http://arxiv.org/pdf/2405.07030v1 | 2024-05-11T15:02:08Z | 2024-05-11T15:02:08Z | Lasso Ridge based XGBoost and Deep_LSTM Help Tennis Players Perform
better | Understanding the dynamics of momentum and game fluctuation in tennis matches is crucial for predicting match outcomes and enhancing player performance. In this study, we present a comprehensive analysis of these factors using a dataset from the 2023 Wimbledon final. Initially, we develop a sliding-window-based scoring model to assess player performance, accounting for the influence of serving dominance through a serve decay factor. Additionally, we introduce a novel approach, Lasso-Ridge-based XGBoost, to quantify momentum effects, leveraging the predictive power of XGBoost while mitigating overfitting through regularization. Through experimentation, we achieve an accuracy of 94% in predicting match outcomes, identifying key factors influencing winning rates. Subsequently, we propose a Derivative of the winning rate algorithm to quantify game fluctuation, employing an LSTM_Deep model to predict fluctuation scores. Our model effectively captures temporal correlations in momentum features, yielding mean squared errors ranging from 0.036 to 0.064. Furthermore, we explore meta-learning using MAML to transfer our model to predict outcomes in ping-pong matches, though results indicate a comparative performance decline. Our findings provide valuable insights into momentum dynamics and game fluctuation, offering implications for sports analytics and player training strategies. | [
"['Wankang Zhai' 'Yuhan Wang']"
]
|
null | null | 2405.07038 | null | null | http://arxiv.org/pdf/2405.07038v1 | 2024-05-11T15:28:25Z | 2024-05-11T15:28:25Z | Conformal Online Auction Design | This paper proposes the conformal online auction design (COAD), a novel mechanism for maximizing revenue in online auctions by quantifying the uncertainty in bidders' values without relying on assumptions about value distributions. COAD incorporates both the bidder and item features and leverages historical data to provide an incentive-compatible mechanism for online auctions. Unlike traditional methods for online auctions, COAD employs a distribution-free, prediction interval-based approach using conformal prediction techniques. This novel approach ensures that the expected revenue from our mechanism can achieve at least a constant fraction of the revenue generated by the optimal mechanism. Additionally, COAD admits the use of a broad array of modern machine-learning methods, including random forests, kernel methods, and deep neural nets, for predicting bidders' values. It ensures revenue performance under any finite sample of historical data. Moreover, COAD introduces bidder-specific reserve prices based on the lower confidence bounds of bidders' valuations, which is different from the uniform reserve prices commonly used in the literature. We validate our theoretical predictions through extensive simulations and a real-data application. All code for using COAD and reproducing results is made available on GitHub. | [
"['Jiale Han' 'Xiaowu Dai']"
]
|
null | null | 2405.07045 | null | null | http://arxiv.org/pdf/2405.07045v1 | 2024-05-11T16:12:25Z | 2024-05-11T16:12:25Z | Predictive Modeling in the Reservoir Kernel Motif Space | This work proposes a time series prediction method based on the kernel view of linear reservoirs. In particular, the time series motifs of the reservoir kernel are used as a representational basis on which general readouts are constructed. We provide a geometric interpretation of our approach, shedding light on how our approach is related to the core reservoir models and in what way the two approaches differ. Empirical experiments then compare the predictive performance of our suggested model with that of recent state-of-the-art transformer-based models, as well as the established recurrent network model, LSTM. The experiments are performed on both univariate and multivariate time series and with a variety of prediction horizons. Rather surprisingly, we show that even when a linear readout is employed, our method has the capacity to outperform transformer models on univariate time series and attain competitive results on multivariate benchmark datasets. We conclude that simple models with easily controllable capacity but capturing enough memory and subsequence structure can outperform potentially over-complicated deep learning models. This does not mean that reservoir motif based models are preferable to other more complex alternatives - rather, when introducing a new complex time series model one should employ as a sanity check simple, but potentially powerful alternatives/baselines such as reservoir models or the models introduced here. | [
"['Peter Tino' 'Robert Simon Fong' 'Roberto Fabio Leonarduzzi']"
]
|
null | null | 2405.07061 | null | null | http://arxiv.org/pdf/2405.07061v1 | 2024-05-11T17:27:41Z | 2024-05-11T17:27:41Z | LLMs and the Future of Chip Design: Unveiling Security Risks and
Building Trust | Chip design is about to be revolutionized by the integration of large language, multimodal, and circuit models (collectively LxMs). While exploring this exciting frontier with tremendous potential, the community must also carefully consider the related security risks and the need for building trust into using LxMs for chip design. First, we review the recent surge of using LxMs for chip design in general. We cover state-of-the-art works for the automation of hardware description language code generation and for scripting and guidance of essential but cumbersome tasks for electronic design automation tools, e.g., design-space exploration, tuning, or designer training. Second, we raise and provide initial answers to novel research questions on critical issues for security and trustworthiness of LxM-powered chip design from both the attack and defense perspectives. | [
"['Zeng Wang' 'Lilas Alrahis' 'Likhitha Mankali' 'Johann Knechtel'\n 'Ozgur Sinanoglu']"
]
|
null | null | 2405.07067 | null | null | http://arxiv.org/pdf/2405.07067v1 | 2024-05-11T18:31:13Z | 2024-05-11T18:31:13Z | Learning Flame Evolution Operator under Hybrid Darrieus Landau and
Diffusive Thermal Instability | Recent advancements in the integration of artificial intelligence (AI) and machine learning (ML) with physical sciences have led to significant progress in addressing complex phenomena governed by nonlinear partial differential equations (PDE). This paper explores the application of novel operator learning methodologies to unravel the intricate dynamics of flame instability, particularly focusing on hybrid instabilities arising from the coexistence of Darrieus-Landau (DL) and Diffusive-Thermal (DT) mechanisms. Training datasets encompass a wide range of parameter configurations, enabling the learning of parametric solution advancement operators using techniques such as parametric Fourier Neural Operator (pFNO), and parametric convolutional neural networks (pCNN). Results demonstrate the efficacy of these methods in accurately predicting short-term and long-term flame evolution across diverse parameter regimes, capturing the characteristic behaviors of pure and blended instabilities. Comparative analyses reveal pFNO as the most accurate model for learning short-term solutions, while all models exhibit robust performance in capturing the nuanced dynamics of flame evolution. This research contributes to the development of robust modeling frameworks for understanding and controlling complex physical processes governed by nonlinear PDE. | [
"['Rixin Yu' 'Erdzan Hodzic' 'Karl-Johan Nogenmyr']"
]
|
null | null | 2405.07068 | null | null | http://arxiv.org/pdf/2405.07068v1 | 2024-05-11T18:35:54Z | 2024-05-11T18:35:54Z | Catastrophe Insurance: An Adaptive Robust Optimization Approach | The escalating frequency and severity of natural disasters, exacerbated by climate change, underscore the critical role of insurance in facilitating recovery and promoting investments in risk reduction. This work introduces a novel Adaptive Robust Optimization (ARO) framework tailored for the calculation of catastrophe insurance premiums, with a case study applied to the United States National Flood Insurance Program (NFIP). To the best of our knowledge, it is the first time an ARO approach has been applied to disaster insurance pricing. Our methodology is designed to protect against both historical and emerging risks, the latter predicted by machine learning models, thus directly incorporating amplified risks induced by climate change. Using the US flood insurance data as a case study, optimization models demonstrate effectiveness in covering losses and produce surpluses, with a smooth balance transition through parameter fine-tuning. Among tested optimization models, results show ARO models with conservative parameter values achieving a low number of insolvent states with the least insurance premium charged. Overall, optimization frameworks offer versatility and generalizability, making them adaptable to a variety of natural disaster scenarios, such as wildfires, droughts, etc. This work not only advances the field of insurance premium modeling but also serves as a vital tool for policymakers and stakeholders in building resilience to the growing risks of natural catastrophes. | [
"['Dimitris Bertsimas' 'Cynthia Zeng']"
]
|
null | null | 2405.07070 | null | null | http://arxiv.org/abs/2405.07070v1 | 2024-05-11T18:48:59Z | 2024-05-11T18:48:59Z | Decoding Cognitive Health Using Machine Learning: A Comprehensive
Evaluation for Diagnosis of Significant Memory Concern | The timely identification of significant memory concern (SMC) is crucial for proactive cognitive health management, especially in an aging population. Detecting SMC early enables timely intervention and personalized care, potentially slowing cognitive disorder progression. This study presents a state-of-the-art review followed by a comprehensive evaluation of machine learning models within the randomized neural networks (RNNs) and hyperplane-based classifiers (HbCs) family to investigate SMC diagnosis thoroughly. Utilizing the Alzheimer's Disease Neuroimaging Initiative 2 (ADNI2) dataset, 111 individuals with SMC and 111 healthy older adults are analyzed based on T1W magnetic resonance imaging (MRI) scans. This analysis is based on baseline structural MRI (sMRI) scans, extracting rich features from gray matter (GM), white matter (WM), Jacobian determinant (JD), and cortical thickness (CT) measurements. In RNNs, deep random vector functional link (dRVFL) and ensemble dRVFL (edRVFL) emerge as the best classifiers in terms of performance metrics in the identification of SMC. In HbCs, Kernelized pinball general twin support vector machine (Pin-GTSVM-K) excels in CT and WM features, whereas Linear Pin-GTSVM (Pin-GTSVM-L) and Linear intuitionistic fuzzy TSVM (IFTSVM-L) perform well in the JD and GM feature sets, respectively. This comprehensive evaluation emphasizes the critical role of feature selection and model choice in attaining an effective classifier for SMC diagnosis. The inclusion of statistical analyses further reinforces the credibility of the results, affirming the rigor of this analysis. The performance measures exhibit the suitability of this framework in aiding researchers with the automated and accurate assessment of SMC. The source codes of the algorithms and datasets used in this study are available at https://github.com/mtanveer1/SMC. | [
"['M. Sajid' 'Rahul Sharma' 'Iman Beheshti' 'M. Tanveer']"
]
|
null | null | 2405.07083 | null | null | http://arxiv.org/pdf/2405.07083v1 | 2024-05-11T19:47:27Z | 2024-05-11T19:47:27Z | Data-Efficient and Robust Task Selection for Meta-Learning | Meta-learning methods typically learn tasks under the assumption that all tasks are equally important. However, this assumption is often not valid. In real-world applications, tasks can vary both in their importance during different training stages and in whether they contain noisy labeled data or not, making a uniform approach suboptimal. To address these issues, we propose the Data-Efficient and Robust Task Selection (DERTS) algorithm, which can be incorporated into both gradient and metric-based meta-learning algorithms. DERTS selects weighted subsets of tasks from task pools by minimizing the approximation error of the full gradient of task pools in the meta-training stage. The selected tasks are efficient for rapid training and robust towards noisy label scenarios. Unlike existing algorithms, DERTS does not require any architecture modification for training and can handle noisy label data in both the support and query sets. Analysis of DERTS shows that the algorithm follows similar training dynamics as learning on the full task pools. Experiments show that DERTS outperforms existing sampling strategies for meta-learning on both gradient-based and metric-based meta-learning algorithms in limited data budget and noisy task settings. | [
"['Donglin Zhan' 'James Anderson']"
]
|
null | null | 2405.07087 | null | null | http://arxiv.org/pdf/2405.07087v1 | 2024-05-11T20:07:09Z | 2024-05-11T20:07:09Z | Auditing an Automatic Grading Model with deep Reinforcement Learning | We explore the use of deep reinforcement learning to audit an automatic short answer grading (ASAG) model. Automatic grading may decrease the time burden of rating open-ended items for educators, but a lack of robust evaluation methods for these models can result in uncertainty of their quality. Current state-of-the-art ASAG models are configured to match human ratings from a training set, and researchers typically assess their quality with accuracy metrics that signify agreement between model and human scores. In this paper, we show that a high level of agreement to human ratings does not give sufficient evidence that an ASAG model is infallible. We train a reinforcement learning agent to revise student responses with the objective of achieving a high rating from an automatic grading model in the least number of revisions. By analyzing the agent's revised responses that achieve a high grade from the ASAG model but would not be considered high-scoring responses according to a scoring rubric, we discover ways in which the automated grader can be exploited, exposing shortcomings in the grading model. | [
"['Aubrey Condor' 'Zachary Pardos']"
]
|
null | null | 2405.07097 | null | null | http://arxiv.org/pdf/2405.07097v1 | 2024-05-11T21:23:55Z | 2024-05-11T21:23:55Z | Diffusion models as probabilistic neural operators for recovering
unobserved states of dynamical systems | This paper explores the efficacy of diffusion-based generative models as neural operators for partial differential equations (PDEs). Neural operators are neural networks that learn a mapping from the parameter space to the solution space of PDEs from data, and they can also solve the inverse problem of estimating the parameter from the solution. Diffusion models excel in many domains, but their potential as neural operators has not been thoroughly explored. In this work, we show that diffusion-based generative models exhibit many properties favourable for neural operators, and they can effectively generate the solution of a PDE conditionally on the parameter or recover the unobserved parts of the system. We propose to train a single model adaptable to multiple tasks, by alternating between the tasks during training. In our experiments with multiple realistic dynamical systems, diffusion models outperform other neural operators. Furthermore, we demonstrate how the probabilistic diffusion model can elegantly deal with systems which are only partially identifiable, by producing samples corresponding to the different possible solutions. | [
"['Katsiaryna Haitsiukevich' 'Onur Poyraz' 'Pekka Marttinen'\n 'Alexander Ilin']"
]
|
null | null | 2405.07098 | null | null | http://arxiv.org/pdf/2405.07098v1 | 2024-05-11T21:29:40Z | 2024-05-11T21:29:40Z | Interpretable global minima of deep ReLU neural networks on sequentially
separable data | We explicitly construct zero loss neural network classifiers. We write the weight matrices and bias vectors in terms of cumulative parameters, which determine truncation maps acting recursively on input space. The configurations for the training data considered are (i) sufficiently small, well separated clusters corresponding to each class, and (ii) equivalence classes which are sequentially linearly separable. In the best case, for $Q$ classes of data in $\mathbb{R}^M$, global minimizers can be described with $Q(M+2)$ parameters. | [
"['Thomas Chen' 'Patricia Muñoz Ewald']"
]
|
null | null | 2405.07105 | null | null | http://arxiv.org/pdf/2405.07105v1 | 2024-05-11T22:30:47Z | 2024-05-11T22:30:47Z | Overcoming systematic softening in universal machine learning
interatomic potentials by fine-tuning | Machine learning interatomic potentials (MLIPs) have introduced a new paradigm for atomic simulations. Recent advancements have seen the emergence of universal MLIPs (uMLIPs) that are pre-trained on diverse materials datasets, providing opportunities for both ready-to-use universal force fields and robust foundations for downstream machine learning refinements. However, their performance in extrapolating to out-of-distribution complex atomic environments remains unclear. In this study, we highlight a consistent potential energy surface (PES) softening effect in three uMLIPs: M3GNet, CHGNet, and MACE-MP-0, which is characterized by energy and force under-prediction in a series of atomic-modeling benchmarks including surfaces, defects, solid-solution energetics, phonon vibration modes, ion migration barriers, and general high-energy states. We find that the PES softening behavior originates from a systematic underprediction error of the PES curvature, which derives from the biased sampling of near-equilibrium atomic arrangements in uMLIP pre-training datasets. We demonstrate that the PES softening issue can be effectively rectified by fine-tuning with a single additional data point. Our findings suggest that a considerable fraction of uMLIP errors are highly systematic, and can therefore be efficiently corrected. This result rationalizes the data-efficient fine-tuning performance boost commonly observed with foundational MLIPs. We argue for the importance of a comprehensive materials dataset with improved PES sampling for next-generation foundational MLIPs. | [
"['Bowen Deng' 'Yunyeong Choi' 'Peichen Zhong' 'Janosh Riebesell'\n 'Shashwat Anand' 'Zhuohan Li' 'KyuJung Jun' 'Kristin A. Persson'\n 'Gerbrand Ceder']"
]
|
null | null | 2405.07117 | null | null | http://arxiv.org/pdf/2405.07117v1 | 2024-05-12T00:21:57Z | 2024-05-12T00:21:57Z | Context Neural Networks: A Scalable Multivariate Model for Time Series
Forecasting | Real-world time series often exhibit complex interdependencies that cannot be captured in isolation. Global models that model past data from multiple related time series globally while producing series-specific forecasts locally are now common. However, their forecasts for each individual series remain isolated, failing to account for the current state of its neighbouring series. Multivariate models like multivariate attention and graph neural networks can explicitly incorporate inter-series information, thus addressing the shortcomings of global models. However, these techniques exhibit quadratic complexity per timestep, limiting scalability. This paper introduces the Context Neural Network, an efficient linear complexity approach for augmenting time series models with relevant contextual insights from neighbouring time series without significant computational overhead. The proposed method enriches predictive models by providing the target series with real-time information from its neighbours, addressing the limitations of global models, yet remaining computationally tractable for large datasets. | [
"['Abishek Sriramulu' 'Christoph Bergmeir' 'Slawek Smyl']"
]
|
null | null | 2405.07135 | null | null | http://arxiv.org/pdf/2405.07135v1 | 2024-05-12T02:15:26Z | 2024-05-12T02:15:26Z | Combining multiple post-training techniques to achieve most efficient
quantized LLMs | Large Language Models (LLMs) have distinguished themselves with outstanding performance in complex language modeling tasks, yet they come with significant computational and storage challenges. This paper explores the potential of quantization to mitigate these challenges. We systematically study the combined application of two well-known post-training techniques, SmoothQuant and GPTQ, and provide a comprehensive analysis of their interactions and implications for advancing LLM quantization. We enhance the versatility of both techniques by enabling quantization to microscaling (MX) formats, expanding their applicability beyond their initial fixed-point format targets. We show that by applying GPTQ and SmoothQuant, and employing MX formats for quantizing models, we can achieve a significant reduction in the size of OPT models by up to 4x and LLaMA models by up to 3x with a negligible perplexity increase of 1-3%. | [
"['Sayeh Sharify' 'Zifei Xu' 'Wanzin Yazar' 'Xin Wang']"
]
|
null | null | 2405.07140 | null | null | http://arxiv.org/pdf/2405.07140v1 | 2024-05-12T02:38:58Z | 2024-05-12T02:38:58Z | Edge Intelligence Optimization for Large Language Model Inference with
Batching and Quantization | Generative Artificial Intelligence (GAI) is taking the world by storm with its unparalleled content creation ability. Large Language Models (LLMs) are at the forefront of this movement. However, the significant resource demands of LLMs often require cloud hosting, which raises issues regarding privacy, latency, and usage limitations. Although edge intelligence has long been utilized to solve these challenges by enabling real-time AI computation on ubiquitous edge resources close to data sources, most research has focused on traditional AI models and has left a gap in addressing the unique characteristics of LLM inference, such as considerable model size, auto-regressive processes, and self-attention mechanisms. In this paper, we present an edge intelligence optimization problem tailored for LLM inference. Specifically, with the deployment of the batching technique and model quantization on resource-limited edge devices, we formulate an inference model for transformer decoder-based LLMs. Furthermore, our approach aims to maximize the inference throughput via batch scheduling and joint allocation of communication and computation resources, while also considering edge resource constraints and varying user requirements of latency and accuracy. To address this NP-hard problem, we develop an optimal Depth-First Tree-Searching algorithm with online tree-Pruning (DFTSP) that operates within a feasible time complexity. Simulation results indicate that DFTSP surpasses other batching benchmarks in throughput across diverse user settings and quantization techniques, and it reduces time complexity by over 45% compared to the brute-force searching method. | [
"['Xinyuan Zhang' 'Jiang Liu' 'Zehui Xiong' 'Yudong Huang' 'Gaochang Xie'\n 'Ran Zhang']"
]
|
null | null | 2405.07142 | null | null | http://arxiv.org/pdf/2405.07142v1 | 2024-05-12T02:41:31Z | 2024-05-12T02:41:31Z | Cross-Domain Continual Learning via CLAMP | Artificial neural networks, celebrated for their human-like cognitive learning abilities, often encounter the well-known catastrophic forgetting (CF) problem, where the neural networks lose proficiency in previously acquired knowledge. Despite numerous efforts to mitigate CF, it remains a significant challenge, particularly in complex changing environments. This challenge is even more pronounced in cross-domain adaptation following the continual learning (CL) setting, which is a more challenging and realistic scenario that is under-explored. To this end, this article proposes a cross-domain CL approach making it possible to deploy a single model in such environments without additional labelling costs. Our approach, namely continual learning approach for many processes (CLAMP), integrates a class-aware adversarial domain adaptation strategy to align a source domain and a target domain. An assessor-guided learning process is put forward to navigate the learning process of a base model, assigning a set of weights to every sample to control the influence of every sample and the interactions of each loss function in such a way as to balance the stability and plasticity dilemma, thus preventing the CF problem. The first assessor focuses on the negative transfer problem, rejecting irrelevant samples of the source domain, while the second assessor prevents noisy pseudo labels of the target domain. Both assessors are trained in the meta-learning approach using random transformation techniques and similar samples of the source domain. Theoretical analysis and extensive numerical validations demonstrate that CLAMP significantly outperforms established baseline algorithms across all experiments by at least a $10\%$ margin. | [
"['Weiwei Weng' 'Mahardhika Pratama' 'Jie Zhang' 'Chen Chen'\n 'Edward Yapp Kien Yee' 'Ramasamy Savitha']"
]
|
null | null | 2405.07175 | null | null | http://arxiv.org/pdf/2405.07175v1 | 2024-05-12T06:08:54Z | 2024-05-12T06:08:54Z | On-Demand Model and Client Deployment in Federated Learning with Deep
Reinforcement Learning | In Federated Learning (FL), the limited accessibility of data from diverse locations and user types poses a significant challenge due to restricted user participation. Expanding client access and diversifying data enhance models by incorporating diverse perspectives, thereby improving adaptability. However, challenges arise in dynamic and mobile environments where certain devices may become inaccessible as FL clients, impacting data availability and client selection methods. To address this, we propose an On-Demand solution, deploying new clients using Docker Containers on-the-fly. Our On-Demand solution, employing Deep Reinforcement Learning (DRL), targets client availability and selection, while considering data shifts and container deployment complexities. It employs an autonomous end-to-end solution for handling model deployment and client selection. The DRL strategy uses a Markov Decision Process (MDP) framework, with a Master Learner and a Joiner Learner. The designed cost functions represent the complexity of the dynamic client deployment and selection. Simulated tests show that our architecture can easily adjust to changes in the environment and respond to On-Demand requests. This underscores its ability to improve client availability, capability, accuracy, and learning efficiency, surpassing heuristic and tabular reinforcement learning solutions. | [
"['Mario Chahoud' 'Hani Sami' 'Azzam Mourad' 'Hadi Otrok' 'Jamal Bentahar'\n 'Mohsen Guizani']"
]
|
null | null | 2405.07196 | null | null | http://arxiv.org/pdf/2405.07196v1 | 2024-05-12T07:46:00Z | 2024-05-12T07:46:00Z | Permissioned Blockchain-based Framework for Ranking Synthetic Data
Generators | Synthetic data generation is increasingly recognized as a crucial solution to address data related challenges such as scarcity, bias, and privacy concerns. As synthetic data proliferates, the need for a robust evaluation framework to select a synthetic data generator becomes more pressing given the variety of options available. In this research study, we investigate two primary questions: 1) How can we select the most suitable synthetic data generator from a set of options for a specific purpose? 2) How can we make the selection process more transparent, accountable, and auditable? To address these questions, we introduce a novel approach in which the proposed ranking algorithm is implemented as a smart contract within a permissioned blockchain framework called Sawtooth. Through comprehensive experiments and comparisons with state-of-the-art baseline ranking solutions, our framework demonstrates its effectiveness in providing nuanced rankings that consider both desirable and undesirable properties. Furthermore, our framework serves as a valuable tool for selecting the optimal synthetic data generators for specific needs while ensuring compliance with data protection principles. | [
"['Narasimha Raghavan Veeraragavan' 'Mohammad Hossein Tabatabaei'\n 'Severin Elvatun' 'Vibeke Binz Vallevik' 'Siri Larønningen'\n 'Jan F Nygård']"
]
|
null | null | 2405.07200 | null | null | http://arxiv.org/pdf/2405.07200v3 | 2024-06-14T15:46:11Z | 2024-05-12T07:55:43Z | Chebyshev Polynomial-Based Kolmogorov-Arnold Networks: An Efficient
Architecture for Nonlinear Function Approximation | Accurate approximation of complex nonlinear functions is a fundamental challenge across many scientific and engineering domains. Traditional neural network architectures, such as Multi-Layer Perceptrons (MLPs), often struggle to efficiently capture intricate patterns and irregularities present in high-dimensional functions. This paper presents the Chebyshev Kolmogorov-Arnold Network (Chebyshev KAN), a new neural network architecture inspired by the Kolmogorov-Arnold representation theorem, incorporating the powerful approximation capabilities of Chebyshev polynomials. By utilizing learnable functions parametrized by Chebyshev polynomials on the network's edges, Chebyshev KANs enhance flexibility, efficiency, and interpretability in function approximation tasks. We demonstrate the efficacy of Chebyshev KANs through experiments on digit classification, synthetic function approximation, and fractal function generation, highlighting their superiority over traditional MLPs in terms of parameter efficiency and interpretability. Our comprehensive evaluation, including ablation studies, confirms the potential of Chebyshev KANs to address longstanding challenges in nonlinear function approximation, paving the way for further advancements in various scientific and engineering applications. | [
"['Sidharth SS' 'Keerthana AR' 'Gokul R' 'Anas KP']"
]
|
null | null | 2405.07202 | null | null | http://arxiv.org/pdf/2405.07202v1 | 2024-05-12T07:59:46Z | 2024-05-12T07:59:46Z | Unified Video-Language Pre-training with Synchronized Audio | Video-language pre-training is a typical and challenging problem that aims at learning visual and textual representations from large-scale data in a self-supervised way. Existing pre-training approaches either capture the correspondence of image-text pairs or utilize the temporal ordering of frames. However, they do not explicitly explore the natural synchronization between audio and the other two modalities. In this work, we propose an enhanced framework for Video-Language pre-training with Synchronized Audio, termed VLSA, that can learn tri-modal representations in a unified self-supervised transformer. Specifically, our VLSA jointly aggregates embeddings of local patches and global tokens for video, text, and audio. Furthermore, we utilize local-patch masked modeling to learn modality-aware features, and leverage global audio matching to capture audio-guided features for video and text. We conduct extensive experiments on retrieval across text, video, and audio. Our simple model pre-trained on only 0.9M data achieves improved results against state-of-the-art baselines. In addition, qualitative visualizations vividly showcase the superiority of our VLSA in learning discriminative visual-textual representations. | [
"['Shentong Mo' 'Haofan Wang' 'Huaxia Li' 'Xu Tang']"
]
|
null | null | 2405.07220 | null | null | http://arxiv.org/pdf/2405.07220v1 | 2024-05-12T08:48:37Z | 2024-05-12T08:48:37Z | On Discovery of Local Independence over Continuous Variables via Neural
Contextual Decomposition | Conditional independence provides a way to understand causal relationships among the variables of interest. An underlying system may exhibit more fine-grained causal relationships especially between a variable and its parents, which will be called the local independence relationships. One of the most widely studied local relationships is Context-Specific Independence (CSI), which holds in a specific assignment of conditioned variables. However, its applicability is often limited since it does not allow continuous variables: data conditioned on a specific value of a continuous variable contains few instances, if not none, making it infeasible to test independence. In this work, we define and characterize the local independence relationship that holds in a specific set of joint assignments of parental variables, which we call context-set specific independence (CSSI). We then provide a canonical representation of CSSI and prove its fundamental properties. Based on our theoretical findings, we cast the problem of discovering multiple CSSI relationships in a system as finding a partition of the joint outcome space. Finally, we propose a novel method, coined neural contextual decomposition (NCD), which learns such a partition by constraining each set to induce CSSI via modeling a conditional distribution. We empirically demonstrate that the proposed method successfully discovers the ground truth local independence relationships on both a synthetic dataset and a complex system reflecting real-world physical dynamics. | [
"['Inwoo Hwang' 'Yunhyeok Kwak' 'Yeon-Ji Song' 'Byoung-Tak Zhang'\n 'Sanghack Lee']"
]
|
null | null | 2405.07223 | null | null | http://arxiv.org/pdf/2405.07223v1 | 2024-05-12T08:52:52Z | 2024-05-12T08:52:52Z | Ensemble Successor Representations for Task Generalization in
Offline-to-Online Reinforcement Learning | In Reinforcement Learning (RL), training a policy from scratch with online experiences can be inefficient because of the difficulties in exploration. Recently, offline RL provides a promising solution by giving an initialized offline policy, which can be refined through online interactions. However, existing approaches primarily perform offline and online learning in the same task, without considering the task generalization problem in offline-to-online adaptation. In real-world applications, it is common that we only have an offline dataset from a specific task while aiming for fast online-adaptation for several tasks. To address this problem, our work builds upon the investigation of successor representations for task generalization in online RL and extends the framework to incorporate offline-to-online learning. We demonstrate that the conventional paradigm using successor features cannot effectively utilize offline data and improve the performance for the new task by online fine-tuning. To mitigate this, we introduce a novel methodology that leverages offline data to acquire an ensemble of successor representations and subsequently constructs ensemble Q functions. This approach enables robust representation learning from datasets with different coverage and facilitates fast adaption of Q functions towards new tasks during the online fine-tuning phase. Extensive empirical evaluations provide compelling evidence showcasing the superior performance of our method in generalizing to diverse or even unseen tasks. | [
"['Changhong Wang' 'Xudong Yu' 'Chenjia Bai' 'Qiaosheng Zhang' 'Zhen Wang']"
]
|
null | null | 2405.07224 | null | null | http://arxiv.org/pdf/2405.07224v2 | 2024-05-18T07:16:24Z | 2024-05-12T08:58:35Z | A geometric decomposition of finite games: Convergence vs. recurrence
under exponential weights | In view of the complexity of the dynamics of learning in games, we seek to decompose a game into simpler components where the dynamics' long-run behavior is well understood. A natural starting point for this is Helmholtz's theorem, which decomposes a vector field into a potential and an incompressible component. However, the geometry of game dynamics - and, in particular, the dynamics of exponential / multiplicative weights (EW) schemes - is not compatible with the Euclidean underpinnings of Helmholtz's theorem. This leads us to consider a specific Riemannian framework based on the so-called Shahshahani metric, and introduce the class of incompressible games, for which we establish the following results: First, in addition to being volume-preserving, the continuous-time EW dynamics in incompressible games admit a constant of motion and are Poincaré recurrent - i.e., almost every trajectory of play comes arbitrarily close to its starting point infinitely often. Second, we establish a deep connection with a well-known decomposition of games into a potential and harmonic component (where the players' objectives are aligned and anti-aligned respectively): a game is incompressible if and only if it is harmonic, implying in turn that the EW dynamics lead to Poincaré recurrence in harmonic games. | [
"['Davide Legacci' 'Panayotis Mertikopoulos' 'Bary Pradelski']"
]
|
null | null | 2405.07226 | null | null | http://arxiv.org/pdf/2405.07226v1 | 2024-05-12T09:05:13Z | 2024-05-12T09:05:13Z | Separable Power of Classical and Quantum Learning Protocols Through the
Lens of No-Free-Lunch Theorem | The No-Free-Lunch (NFL) theorem, which quantifies problem- and data-independent generalization errors regardless of the optimization process, provides a foundational framework for comprehending diverse learning protocols' potential. Despite its significance, the establishment of the NFL theorem for quantum machine learning models remains largely unexplored, thereby overlooking broader insights into the fundamental relationship between quantum and classical learning protocols. To address this gap, we categorize a diverse array of quantum learning algorithms into three learning protocols designed for learning quantum dynamics under a specified observable and establish their NFL theorem. The exploited protocols, namely Classical Learning Protocols (CLC-LPs), Restricted Quantum Learning Protocols (ReQu-LPs), and Quantum Learning Protocols (Qu-LPs), offer varying levels of access to quantum resources. Our derived NFL theorems demonstrate quadratic reductions in sample complexity across CLC-LPs, ReQu-LPs, and Qu-LPs, contingent upon the orthogonality of quantum states and the diagonality of observables. We attribute this performance discrepancy to the unique capacity of quantum-related learning protocols to indirectly utilize information concerning the global phases of non-orthogonal quantum states, a distinctive physical feature inherent in quantum mechanics. Our findings not only deepen our understanding of quantum learning protocols' capabilities but also provide practical insights for the development of advanced quantum learning algorithms. | [
"['Xinbiao Wang' 'Yuxuan Du' 'Kecheng Liu' 'Yong Luo' 'Bo Du' 'Dacheng Tao']"
]
|
null | null | 2405.07233 | null | null | http://arxiv.org/pdf/2405.07233v1 | 2024-05-12T09:32:40Z | 2024-05-12T09:32:40Z | OXYGENERATOR: Reconstructing Global Ocean Deoxygenation Over a Century
with Deep Learning | Accurately reconstructing the global ocean deoxygenation over a century is crucial for assessing and protecting marine ecosystems. Existing expert-dominated numerical simulations fail to catch up with the dynamic variation caused by global warming and human activities. Besides, due to the high cost of data collection, the historical observations are severely sparse, posing a major challenge for precise reconstruction. In this work, we propose OxyGenerator, the first deep learning-based model, to reconstruct the global ocean deoxygenation from 1920 to 2023. Specifically, to address the heterogeneity across large temporal and spatial scales, we propose zoning-varying graph message-passing to capture the complex oceanographic correlations between missing values and sparse observations. Additionally, to further calibrate the uncertainty, we incorporate inductive bias from dissolved oxygen (DO) variations and chemical effects. Compared with in-situ DO observations, OxyGenerator significantly outperforms CMIP6 numerical simulations, reducing MAPE by 38.77%, demonstrating a promising potential to understand the "breathless ocean" in a data-driven manner. | [
"['Bin Lu' 'Ze Zhao' 'Luyu Han' 'Xiaoying Gan' 'Yuntao Zhou' 'Lei Zhou'\n 'Luoyi Fu' 'Xinbing Wang' 'Chenghu Zhou' 'Jing Zhang']"
]
|
null | null | 2405.07236 | null | null | http://arxiv.org/pdf/2405.07236v1 | 2024-05-12T09:58:03Z | 2024-05-12T09:58:03Z | Adaptive control of recurrent neural networks using conceptors | Recurrent Neural Networks excel at predicting and generating complex high-dimensional temporal patterns. Due to their inherent nonlinear dynamics and memory, they can learn unbounded temporal dependencies from data. In a Machine Learning setting, the network's parameters are adapted during a training phase to match the requirements of a given task/problem increasing its computational capabilities. After the training, the network parameters are kept fixed to exploit the learned computations. The static parameters thereby render the network unadaptive to changing conditions, such as external or internal perturbation. In this manuscript, we demonstrate how keeping parts of the network adaptive even after the training enhances its functionality and robustness. Here, we utilize the conceptor framework and conceptualize an adaptive control loop analyzing the network's behavior continuously and adjusting its time-varying internal representation to follow a desired target. We demonstrate how the added adaptivity of the network supports the computational functionality in three distinct tasks: interpolation of temporal patterns, stabilization against partial network degradation, and robustness against input distortion. Our results highlight the potential of adaptive networks in machine learning beyond training, enabling them to not only learn complex patterns but also dynamically adjust to changing environments, ultimately broadening their applicability. | [
"['Guillaume Pourcel' 'Mirko Goldmann' 'Ingo Fischer' 'Miguel C. Soriano']"
]
|
null | null | 2405.07252 | null | null | http://arxiv.org/pdf/2405.07252v2 | 2024-06-22T13:32:56Z | 2024-05-12T11:16:05Z | Universal Batch Learning Under The Misspecification Setting | In this paper we consider the problem of universal {\em batch} learning in a misspecification setting with log-loss. In this setting the hypothesis class is a set of models $\Theta$. However, the data is generated by an unknown distribution that may not belong to this set but comes from a larger set of models $\Phi \supset \Theta$. Given a training sample, a universal learner is requested to predict a probability distribution for the next outcome and a log-loss is incurred. The universal learner performance is measured by the regret relative to the best hypothesis matching the data, chosen from $\Theta$. Utilizing the minimax theorem and information theoretical tools, we derive the optimal universal learner, a mixture over the set of the data generating distributions, and get a closed form expression for the min-max regret. We show that this regret can be considered as a constrained version of the conditional capacity between the data and its generating distributions set. We present tight bounds for this min-max regret, implying that the complexity of the problem is dominated by the richness of the hypotheses models $\Theta$ and not by the data generating distributions set $\Phi$. We develop an extension to the Arimoto-Blahut algorithm for numerical evaluation of the regret and its capacity achieving prior distribution. We demonstrate our results for the case where the observations come from a $K$-parameter multinomial distribution while the hypothesis class $\Theta$ is only a subset of this family of distributions. | [
"['Shlomi Vituri' 'Meir Feder']"
]
|
null | null | 2405.07260 | null | null | http://arxiv.org/pdf/2405.07260v1 | 2024-05-12T11:51:00Z | 2024-05-12T11:51:00Z | A Supervised Information Enhanced Multi-Granularity Contrastive Learning
Framework for EEG Based Emotion Recognition | This study introduces a novel Supervised Info-enhanced Contrastive Learning framework for EEG based Emotion Recognition (SI-CLEER). SI-CLEER employs multi-granularity contrastive learning to create robust EEG contextual representations, potentially improving emotion recognition effectiveness. Unlike existing methods solely guided by classification loss, we propose a joint learning model combining self-supervised contrastive learning loss and supervised classification loss. This model optimizes both loss functions, capturing subtle EEG signal differences specific to emotion detection. Extensive experiments demonstrate SI-CLEER's robustness and superior accuracy on the SEED dataset compared to state-of-the-art methods. Furthermore, we analyze electrode performance, highlighting the significance of central frontal and temporal brain region EEGs in emotion detection. This study offers a universally applicable approach with potential benefits for diverse EEG classification tasks. | [
"['Xiang Li' 'Jian Song' 'Zhigang Zhao' 'Chunxiao Wang' 'Dawei Song'\n 'Bin Hu']"
]
|
null | null | 2405.07278 | null | null | http://arxiv.org/pdf/2405.07278v1 | 2024-05-12T12:55:40Z | 2024-05-12T12:55:40Z | Human-interpretable clustering of short-text using large language models | Large language models have seen extraordinary growth in popularity due to their human-like content generation capabilities. We show that these models can also be used to successfully cluster human-generated content, with success defined through the measures of distinctiveness and interpretability. This success is validated by both human reviewers and ChatGPT, providing an automated means to close the 'validation gap' that has challenged short-text clustering. Comparing the machine and human approaches we identify the biases inherent in each, and question the reliance on human-coding as the 'gold standard'. We apply our methodology to Twitter bios and find characteristic ways humans describe themselves, agreeing well with prior specialist work, but with interesting differences characteristic of the medium used to express identity. | [
"['Justin K. Miller' 'Tristram J. Alexander']"
]
|
null | null | 2405.07288 | null | null | http://arxiv.org/pdf/2405.07288v1 | 2024-05-12T14:01:05Z | 2024-05-12T14:01:05Z | Erasing Concepts from Text-to-Image Diffusion Models with Few-shot
Unlearning | Generating images from text has become easier because of the scaling of diffusion models and advancements in the field of vision and language. These models are trained using vast amounts of data from the Internet. Hence, they often contain undesirable content such as copyrighted material. As it is challenging to remove such data and retrain the models, methods for erasing specific concepts from pre-trained models have been investigated. We propose a novel concept-erasure method that updates the text encoder using few-shot unlearning in which a few real images are used. The discussion regarding the generated images after erasing a concept has been lacking. While there are methods for specifying the transition destination for concepts, the validity of the specified concepts is unclear. Our method implicitly achieves this by transitioning to the latent concepts inherent in the model or the images. Our method can erase a concept within 10 s, making concept erasure more accessible than ever before. Implicitly transitioning to related concepts leads to more natural concept erasure. We applied the proposed method to various concepts and confirmed that concept erasure can be achieved tens to hundreds of times faster than with current methods. By varying the parameters to be updated, we obtained results suggesting that, like previous research, knowledge is primarily accumulated in the feed-forward networks of the text encoder. | [
"['Masane Fuchi' 'Tomohiro Takagi']"
]
|
null | null | 2405.07309 | null | null | http://arxiv.org/pdf/2405.07309v1 | 2024-05-12T15:38:17Z | 2024-05-12T15:38:17Z | DiffGen: Robot Demonstration Generation via Differentiable Physics
Simulation, Differentiable Rendering, and Vision-Language Model | Generating robot demonstrations through simulation is widely recognized as an effective way to scale up robot data. Previous work often trained reinforcement learning agents to generate expert policies, but this approach lacks sample efficiency. Recently, a line of work has attempted to generate robot demonstrations via differentiable simulation, which is promising but heavily relies on reward design, a labor-intensive process. In this paper, we propose DiffGen, a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model to enable automatic and efficient generation of robot demonstrations. Given a simulated robot manipulation scenario and a natural language instruction, DiffGen can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation after manipulation. The embeddings are obtained from the vision-language model, and the optimization is achieved by calculating and descending gradients through the differentiable simulation, differentiable rendering, and vision-language model components, thereby accomplishing the specified task. Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time. | [
"['Yang Jin' 'Jun Lv' 'Shuqiang Jiang' 'Cewu Lu']"
]
|
null | null | 2405.07312 | null | null | http://arxiv.org/pdf/2405.07312v1 | 2024-05-12T15:46:52Z | 2024-05-12T15:46:52Z | Nonparametric Control-Koopman Operator Learning: Flexible and Scalable
Models for Prediction and Control | Linearity of Koopman operators and simplicity of their estimators coupled with model-reduction capabilities have led to their great popularity in applications for learning dynamical systems. While nonparametric Koopman operator learning in infinite-dimensional reproducing kernel Hilbert spaces is well understood for autonomous systems, its control system analogues are largely unexplored. Addressing systems with control inputs in a principled manner is crucial for fully data-driven learning of controllers, especially since existing approaches commonly resort to representational heuristics or parametric models of limited expressiveness and scalability. We address the aforementioned challenge by proposing a universal framework via control-affine reproducing kernels that enables direct estimation of a single operator even for control systems. The proposed approach, called control-Koopman operator regression (cKOR), is thus completely analogous to Koopman operator regression of the autonomous case. For the first time in the literature, we present a nonparametric framework for learning Koopman operator representations of nonlinear control-affine systems that does not suffer from the curse of control input dimensionality. This allows for reformulating the infinite-dimensional learning problem in a finite-dimensional space based solely on data without a priori loss of precision due to a restriction to a finite span of functions or inputs as in other approaches. For enabling applications to large-scale control systems, we also enhance the scalability of control-Koopman operator estimators by leveraging random projections (sketching). The efficacy of our novel cKOR approach is demonstrated on both forecasting and control tasks. | [
"['Petar Bevanda' 'Bas Driessen' 'Lucian Cristian Iacob' 'Roland Toth'\n 'Stefan Sosnowski' 'Sandra Hirche']"
]
|