| categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
|---|---|---|---|---|---|---|---|---|---|---|
| null | null | 2402.16844 | null | null | http://arxiv.org/pdf/2402.16844v1 | 2024-02-26T18:59:28Z | 2024-02-26T18:59:28Z | Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding | Large language models (LLMs) have become ubiquitous in practice and are widely used for generation tasks such as translation, summarization and instruction following. However, their enormous size and reliance on autoregressive decoding increase deployment costs and complicate their use in latency-critical applications. In this work, we propose a hybrid approach that combines language models of different sizes to increase the efficiency of autoregressive decoding while maintaining high performance. Our method utilizes a pretrained frozen LLM that encodes all prompt tokens once in parallel, and uses the resulting representations to condition and guide a small language model (SLM), which then generates the response more efficiently. We investigate the combination of encoder-decoder LLMs with both encoder-decoder and decoder-only SLMs from different model families and only require fine-tuning of the SLM. Experiments with various benchmarks show substantial speedups of up to $4\times$, with minor performance penalties of $1-2\%$ for translation and summarization tasks compared to the LLM. | Benjamin Bergner, Andrii Skliar, Amelie Royer, Tijmen Blankevoort, Yuki Asano, Babak Ehteshami Bejnordi |
| null | null | 2402.16845 | null | null | http://arxiv.org/pdf/2402.16845v2 | 2024-06-08T22:16:13Z | 2024-02-26T18:59:31Z | Neural Operators with Localized Integral and Differential Kernels | Neural operators learn mappings between function spaces, which is practical for learning solution operators of PDEs and other scientific modeling applications. Among them, the Fourier neural operator (FNO) is a popular architecture that performs global convolutions in the Fourier space. However, such global operations are often prone to over-smoothing and may fail to capture local details. In contrast, convolutional neural networks (CNNs) can capture local features but are limited to training and inference at a single resolution. In this work, we present a principled approach to operator learning that can capture local features under two frameworks, by learning differential operators and integral operators with locally supported kernels. Specifically, inspired by stencil methods, we prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs. To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions. Both these approaches preserve the properties of operator learning and, hence, the ability to predict at any resolution. Adding our layers to FNOs significantly improves their performance, reducing the relative $L^2$-error by 34-72% in our experiments, which include the turbulent 2D Navier-Stokes and the spherical shallow water equations. | Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, Anima Anandkumar |
| null | null | 2402.16848 | null | null | http://arxiv.org/pdf/2402.16848v1 | 2024-02-26T18:59:52Z | 2024-02-26T18:59:52Z | InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning | Jointly learning multiple tasks with a unified model can improve accuracy and data efficiency, but it faces the challenge of task interference, where optimizing one task objective may inadvertently compromise the performance of another. A solution to mitigate this issue is to allocate task-specific parameters, free from interference, on top of shared features. However, manually designing such architectures is cumbersome, as practitioners need to balance between the overall performance across all tasks and the higher computational cost induced by the newly added parameters. In this work, we propose *InterroGate*, a novel multi-task learning (MTL) architecture designed to mitigate task interference while optimizing inference computational efficiency. We employ a learnable gating mechanism to automatically balance the shared and task-specific representations while preserving the performance of all tasks. Crucially, the patterns of parameter sharing and specialization, dynamically learned during training, become fixed at inference, resulting in a static, optimized MTL architecture. Through extensive empirical evaluations, we demonstrate SoTA results on three MTL benchmarks using convolutional as well as transformer-based backbones on CelebA, NYUD-v2, and PASCAL-Context. | Babak Ehteshami Bejnordi, Gaurav Kumar, Amelie Royer, Christos Louizos, Tijmen Blankevoort, Mohsen Ghafoorian |
| null | null | 2402.16854 | null | null | http://arxiv.org/pdf/2402.16854v1 | 2024-01-18T21:45:12Z | 2024-01-18T21:45:12Z | Attention Based Molecule Generation via Hierarchical Variational Autoencoder | Molecule generation is a task made very difficult by the complex ways in which we represent molecules computationally. A common technique used in molecular generative modeling is to use SMILES strings with recurrent neural networks built into variational autoencoders, but these suffer from a myriad of issues: vanishing gradients, long-range forgetting, and invalid molecules. In this work, we show that by combining recurrent neural networks with convolutional networks in a hierarchical manner, we are able to extract autoregressive information from SMILES strings while maintaining signal and long-range dependencies. This allows for generations with very high validity rates, on the order of 95%, when reconstructing known molecules. We also observe an average Tanimoto similarity of 0.6 between test-set and reconstructed molecules, which suggests our method is able to map between SMILES strings and their learned representations in a more effective way than prior works using similar methods. | Divahar Sivanesan |
| null | null | 2402.16858 | null | null | http://arxiv.org/pdf/2402.16858v2 | 2024-06-04T10:10:22Z | 2024-01-19T16:43:47Z | Pragmatic Goal-Oriented Communications under Semantic-Effectiveness Channel Errors | In forthcoming AI-assisted 6G networks, integrating semantic, pragmatic, and goal-oriented communication strategies becomes imperative. This integration will enable sensing, transmission, and processing of exclusively pertinent task data, ensuring conveyed information possesses understandable, pragmatic semantic significance, aligning with destination needs and goals. Without doubt, no communication is error-free. Within this context, besides errors stemming from typical wireless communication dynamics, potential distortions between transmitter-intended and receiver-interpreted meanings can emerge due to limitations in semantic processing capabilities, as well as language and knowledge representation disparities between transmitters and receivers. The main contribution of this paper is twofold. First, it proposes and details a novel mathematical modeling of errors stemming from language mismatches at both semantic and effectiveness levels. Second, it provides a novel algorithmic solution to counteract these types of errors which leverages optimal transport theory. Our numerical results show the potential of the proposed mechanism to compensate for language mismatches, thereby enhancing the attainability of reliable communication under noisy communication environments. | Tomás Hüttebräucker, Mohamed Sana, Emilio Calvanese Strinati |
| null | null | 2402.16865 | null | null | http://arxiv.org/pdf/2402.16865v1 | 2024-01-21T04:14:54Z | 2024-01-21T04:14:54Z | Improve Robustness of Eye Disease Detection by including Learnable Probabilistic Discrete Latent Variables into Machine Learning Models | Ocular diseases, ranging from diabetic retinopathy to glaucoma, present a significant public health challenge due to their prevalence and potential for causing vision impairment. Early and accurate diagnosis is crucial for effective treatment and management. In recent years, deep learning models have emerged as powerful tools for analysing medical images, including ocular imaging. However, challenges persist in model interpretability and uncertainty estimation, which are critical for clinical decision-making. This study introduces a novel application of GFlowOut, leveraging the probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks, for the classification and analysis of ocular diseases using eye fundus images. We develop a robust and generalizable method that utilizes GFlowOut integrated with ResNet18 and ViT models as backbones for identifying various ocular conditions. This study employs a unique set of dropout masks (none, random, bottomup, and topdown) to enhance model performance in analyzing ocular images. Our results demonstrate that the bottomup GFlowOut mask significantly improves accuracy, outperforming the traditional dropout approach. | Anirudh Prabhakaran, YeKun Xiao, Ching-Yu Cheng, Dianbo Liu |
| null | null | 2402.16877 | null | null | http://arxiv.org/pdf/2402.16877v1 | 2024-02-08T20:35:31Z | 2024-02-08T20:35:31Z | Large Language Model Augmented Exercise Retrieval for Personalized Language Learning | We study the problem of zero-shot exercise retrieval in the context of online language learning, to give learners the ability to explicitly request personalized exercises via natural language. Using real-world data collected from language learners, we observe that vector similarity approaches poorly capture the relationship between exercise content and the language that learners use to express what they want to learn. This semantic gap between queries and content dramatically reduces the effectiveness of general-purpose retrieval models pretrained on large-scale information retrieval datasets like MS MARCO. We leverage the generative capabilities of large language models to bridge the gap by synthesizing hypothetical exercises based on the learner's input, which are then used to search for relevant exercises. Our approach, which we call mHyER, overcomes three challenges: (1) lack of relevance labels for training, (2) unrestricted learner input content, and (3) low semantic similarity between input and retrieval candidates. mHyER outperforms several strong baselines on two novel benchmarks created from crowdsourced data and publicly available data. | Austin Xu, Will Monroe, Klinton Bicknell |
| null | null | 2402.16878 | null | null | http://arxiv.org/pdf/2402.16878v1 | 2024-02-12T19:10:11Z | 2024-02-12T19:10:11Z | EvoGPT-f: An Evolutionary GPT Framework for Benchmarking Formal Math Languages | Formal mathematics is the discipline of translating mathematics into a programming language in which any statement can be unequivocally checked by a computer. Mathematicians and computer scientists have spent decades of painstaking formalization efforts developing languages such as Coq, HOL, and Lean. Machine learning research has converged on these formal math corpora and given rise to an assortment of methodologies to aid in interactive and automated theorem proving. However, these papers have primarily focused on one method, for one proof task, in one language. This paper introduces EvoGPT-f: a novel evolutionary framework for the first systematic quantitative analysis of the differential machine learnability of five formal math corpora (Lean 3, Lean 4, Coq, HOL 4, HOL Light) using four tokenization methods (character, word-level, Byte Pair Encoding and StarCoder tokenizer). This paper does not put to rest the question of the "best" or "easiest" language to learn. Rather, this framework and preliminary findings begin to illuminate the differential machine learnability of these languages, offering a foundation to forge more systematic quantitative and qualitative comparative research across communities. | Johnathan Mercer |
| null | null | 2402.16880 | null | null | http://arxiv.org/pdf/2402.16880v2 | 2024-04-19T07:54:27Z | 2024-02-18T12:44:15Z | BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation | Large language models (LLMs) have demonstrated outstanding performance in various tasks, such as text summarization and text question answering. While their performance is impressive, the computational footprint due to their vast number of parameters can be prohibitive. Existing solutions such as SparseGPT and Wanda attempt to alleviate this issue through weight pruning. However, their layer-wise approach results in significant perturbation to the model's output and requires meticulous hyperparameter tuning, such as the pruning rate, which can adversely affect overall model performance. To address this, this paper introduces a novel LLM pruning technique, dubbed blockwise parameter-efficient sparsity allocation (BESA), that applies a blockwise reconstruction loss. In contrast to the typical layer-wise pruning techniques, BESA is characterized by two distinctive attributes: i) it targets the overall pruning error with respect to individual transformer blocks, and ii) it allocates layer-specific sparsity in a differentiable manner, both of which ensure reduced performance degradation after pruning. Our experiments show that BESA achieves state-of-the-art performance, efficiently pruning LLMs like LLaMA1 and LLaMA2 with 7B to 70B parameters on a single A100 GPU in just five hours. Code is available at https://github.com/OpenGVLab/LLMPrune-BESA. | Peng Xu, Wenqi Shao, Mengzhao Chen, Shitao Tang, Kaipeng Zhang, Peng Gao, Fengwei An, Yu Qiao, Ping Luo |
| null | null | 2402.16882 | null | null | http://arxiv.org/pdf/2402.16882v1 | 2024-02-19T02:21:20Z | 2024-02-19T02:21:20Z | Substrate Scope Contrastive Learning: Repurposing Human Bias to Learn Atomic Representations | Learning molecular representation is a critical step in molecular machine learning that significantly influences modeling success, particularly in data-scarce situations. The concept of broadly pre-training neural networks has advanced fields such as computer vision, natural language processing, and protein engineering. However, similar approaches for small organic molecules have not achieved comparable success. In this work, we introduce a novel pre-training strategy, substrate scope contrastive learning, which learns atomic representations tailored to chemical reactivity. This method considers the grouping of substrates and their yields in published substrate scope tables as a measure of their similarity or dissimilarity in terms of chemical reactivity. We focus on 20,798 aryl halides in the CAS Content Collection spanning thousands of publications to learn a representation of aryl halide reactivity. We validate our pre-training approach through both intuitive visualizations and comparisons to traditional reactivity descriptors and physical organic chemistry principles. The versatility of these embeddings is further evidenced in their application to yield prediction, regioselectivity prediction, and the diverse selection of new substrates. This work not only presents a chemistry-tailored neural network pre-training strategy to learn reactivity-aligned atomic representations, but also marks a first-of-its-kind approach to benefit from the human bias in substrate scope design. | Wenhao Gao, Priyanka Raghavan, Ron Shprints, Connor W. Coley |
| null | null | 2402.16886 | null | null | http://arxiv.org/pdf/2402.16886v1 | 2024-02-07T22:15:15Z | 2024-02-07T22:15:15Z | Using text embedding models and vector databases as text classifiers with the example of medical data | The advent of Large Language Models (LLMs) is promising, and they have found application in numerous fields, but, as is often the case in the medical field, the bar is typically quite high [5]. In tandem with LLMs, vector embedding models and vector databases provide a robust way of expressing numerous modes of data that are easily digestible by typical machine learning models. Along with the ease of adding information, knowledge, and data to these vector databases, they provide a compelling reason to apply them in numerous fields where the task of retrieving information is typically done by humans. Researchers at Google have developed a clear alternative model, Med-PaLM [6], specifically designed to match a clinician's level of accuracy when it comes to medical knowledge. When training classifiers and developing models, it is imperative to maintain factuality and reduce bias [4]. Here, we explore the use of vector databases and embedding models as a means of encoding and classifying text, with an example application in the field of medicine. We show that the robustness of these tools depends heavily on the sparsity of the data presented, and that even with low amounts of data in the vector database itself, the vector database does a good job of classifying data [9]. Using various LLMs to generate the medical data, we also understand the limitations of the medical knowledge of these models and encourage further expert medical review of our testing data. By using vector databases to classify a clinician's notes on a patient presented with a certain ailment, we understand the limitations of such methods, but also the promise of their prospective use, and with continued testing and experimentation we hope to explore a unique use case of vector databases and embedding models. | Rishabh Goel |
| null | null | 2402.16887 | null | null | http://arxiv.org/pdf/2402.16887v1 | 2024-02-23T09:06:36Z | 2024-02-23T09:06:36Z | Artificial Intelligence for Complex Network: Potential, Methodology and Application | Complex networks pervade various real-world systems, from the natural environment to human societies. The essence of these networks is in their ability to transition and evolve from microscopic disorder, where network topology and node dynamics intertwine, to a macroscopic order characterized by certain collective behaviors. Over the past two decades, complex network science has significantly enhanced our understanding of the statistical mechanics, structures, and dynamics underlying real-world networks. Despite these advancements, there remain considerable challenges in exploring more realistic systems and enhancing practical applications. The emergence of artificial intelligence (AI) technologies, coupled with the abundance of diverse real-world network data, has heralded a new era in complex network science research. This survey aims to systematically address the potential advantages of AI in overcoming the lingering challenges of complex network research. It endeavors to summarize the pivotal research problems and provide an exhaustive review of the corresponding methodologies and applications. Through this comprehensive survey, the first of its kind on AI for complex networks, we expect to provide valuable insights that will drive further research and advancement in this interdisciplinary field. | Jingtao Ding, Chang Liu, Yu Zheng, Yunke Zhang, Zihan Yu, Ruikun Li, Hongyi Chen, Jinghua Piao, Huandong Wang, Jiazhen Liu, Yong Li |
| null | null | 2402.16888 | null | null | http://arxiv.org/pdf/2402.16888v1 | 2024-02-23T09:43:52Z | 2024-02-23T09:43:52Z | Chaotic attractor reconstruction using small reservoirs -- the influence of topology | Forecasting timeseries based upon measured data is needed in a wide range of applications and has been the subject of extensive research. A particularly challenging task is the forecasting of timeseries generated by chaotic dynamics. In recent years reservoir computing has been shown to be an effective method of forecasting chaotic dynamics and reconstructing chaotic attractors from data. In this work strides are made toward smaller and lower-complexity reservoirs, with the goal of improved hardware implementability and more reliable production of adequate surrogate models. We show that a reservoir of uncoupled nodes more reliably produces long-term timeseries predictions than complex reservoir topologies. We then link the improved attractor reconstruction of the uncoupled reservoir with smaller spectral radii of the resulting surrogate systems. These results indicate that the node degree plays an important role in determining whether the desired dynamics will be stable in the autonomous surrogate system, which is attained via closed-loop operation of the trained reservoir. In terms of hardware implementability, uncoupled nodes would allow for greater freedom in the hardware architecture because no complex coupling setups are needed and because, for uncoupled nodes, the system response is equivalent for space and time multiplexing. | Lina Jaurigue |
| null | null | 2402.16889 | null | null | http://arxiv.org/pdf/2402.16889v1 | 2024-02-23T10:48:21Z | 2024-02-23T10:48:21Z | Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation | As machine- and AI-generated content proliferates, protecting the intellectual property of generative models has become imperative, yet verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data. The challenge of verifying data ownership is further amplified by using Machine Learning as a Service (MLaaS), which often functions as a black-box system. Our work is dedicated to detecting data reuse from even an individual sample. Traditionally, watermarking has been leveraged to detect AI-generated content. However, unlike watermarking techniques that embed additional information as triggers into models or generated content, potentially compromising output quality, our approach identifies latent fingerprints inherently present within the outputs through re-generation. We propose an explainable verification procedure that attributes data ownership through re-generation, and further amplifies these fingerprints in the generative models through iterative data re-generation. This methodology is theoretically grounded and demonstrates viability and robustness using recent advanced text and image generative models. Our methodology is significant as it goes beyond protecting the intellectual property of APIs and addresses important issues such as the spread of misinformation and academic misconduct. It provides a useful tool to ensure the integrity of sources and authorship, expanding its application in different scenarios where authenticity and ownership verification are essential. | Aditya Desu, Xuanli He, Qiongkai Xu, Wei Lu |
| null | null | 2402.16891 | null | null | http://arxiv.org/pdf/2402.16891v2 | 2024-04-12T15:34:18Z | 2024-02-23T13:25:23Z | Multi-Task Learning for Routing Problem with Cross-Problem Zero-Shot Generalization | Vehicle routing problems (VRPs), which can be found in numerous real-world applications, have been an important research topic for several decades. Recently, the neural combinatorial optimization (NCO) approach that leverages a learning-based model to solve VRPs without manual algorithm design has gained substantial attention. However, current NCO methods typically require building one model for each routing problem, which significantly hinders their practical application for real-world industry problems with diverse attributes. In this work, we make the first attempt to tackle the crucial challenge of cross-problem generalization. In particular, we formulate VRPs as different combinations of a set of shared underlying attributes and solve them simultaneously via a single model through attribute composition. In this way, our proposed model can successfully solve VRPs with unseen attribute combinations in a zero-shot generalization manner. Extensive experiments are conducted on eleven VRP variants, benchmark datasets, and industry logistics scenarios. The results show that the unified model demonstrates superior performance on the eleven VRPs, reducing the average gap to around 5% from over 20% in the existing approach and achieving a significant performance boost on benchmark datasets as well as a real-world logistics application. The source code is available at https://github.com/FeiLiu36/MTNCO. | Fei Liu, Xi Lin, Zhenkun Wang, Qingfu Zhang, Xialiang Tong, Mingxuan Yuan |
| null | null | 2402.16896 | null | null | http://arxiv.org/pdf/2402.16896v2 | 2024-03-07T15:59:17Z | 2024-02-23T22:48:29Z | On Trojan Signatures in Large Language Models of Code | Trojan signatures, as described by Fields et al. (2021), are noticeable differences in the distribution of the trojaned class parameters (weights) and the non-trojaned class parameters of the trojaned model, that can be used to detect the trojaned model. Fields et al. (2021) found trojan signatures in computer vision classification tasks with image models such as ResNet, WideResNet, DenseNet, and VGG. In this paper, we investigate such signatures in the classifier layer parameters of large language models of source code. Our results suggest that trojan signatures could not generalize to LLMs of code. We found that trojaned code models are stubborn, even when the models were poisoned under more explicit settings (finetuned with pre-trained weights frozen). We analyzed nine trojaned models for two binary classification tasks: clone and defect detection. To the best of our knowledge, this is the first work to examine weight-based trojan signature revelation techniques for large language models of code and, furthermore, to demonstrate that detecting trojans only from the weights in such models is a hard problem. | Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour |
| null | null | 2402.16897 | null | null | http://arxiv.org/pdf/2402.16897v2 | 2024-02-28T09:58:46Z | 2024-02-24T03:47:06Z | Reliable Conflictive Multi-View Learning | Multi-view learning aims to combine multiple features to achieve more comprehensive descriptions of data. Most previous works assume that multiple views are strictly aligned. However, real-world multi-view data may contain low-quality conflictive instances, which show conflictive information in different views. Previous methods for this problem mainly focus on eliminating the conflictive data instances by removing them or replacing conflictive views. Nevertheless, real-world applications usually require making decisions for conflictive instances rather than only eliminating them. To solve this, we point out a new Reliable Conflictive Multi-view Learning (RCML) problem, which requires the model to provide decision results and attached reliabilities for conflictive multi-view data. We develop an Evidential Conflictive Multi-view Learning (ECML) method for this problem. ECML first learns view-specific evidence, which could be termed the amount of support for each category collected from data. Then, we can construct view-specific opinions consisting of decision results and reliability. In the multi-view fusion stage, we propose a conflictive opinion aggregation strategy and theoretically prove this strategy can exactly model the relation of multi-view common and view-specific reliabilities. Experiments performed on 6 datasets verify the effectiveness of ECML. | Cai Xu, Jiajun Si, Ziyu Guan, Wei Zhao, Yue Wu, Xiyue Gao |
| null | null | 2402.16898 | null | null | http://arxiv.org/pdf/2402.16898v2 | 2024-03-10T07:35:15Z | 2024-02-24T03:48:22Z | MIM-Reasoner: Learning with Theoretical Guarantees for Multiplex Influence Maximization | Multiplex influence maximization (MIM) asks us to identify a set of seed users so as to maximize the expected number of influenced users in a multiplex network. MIM has been one of the central research topics, especially in today's social networking landscape, where users participate in multiple online social networks (OSNs) and their influences can propagate among several OSNs simultaneously. Although there exist a couple of combinatorial algorithms for MIM, learning-based solutions have been desired due to their generalization ability to heterogeneous networks and their diversified propagation characteristics. In this paper, we introduce MIM-Reasoner, coupling reinforcement learning with a probabilistic graphical model, which effectively captures the complex propagation process within and between layers of a given multiplex network, thereby tackling the most challenging problem in MIM. We establish a theoretical guarantee for MIM-Reasoner as well as conduct extensive analyses on both synthetic and real-world datasets to validate our MIM-Reasoner's performance. | Nguyen Do, Tanmoy Chowdhury, Chen Ling, Liang Zhao, My T. Thai |
| null | null | 2402.16899 | null | null | http://arxiv.org/pdf/2402.16899v3 | 2024-03-07T05:33:40Z | 2024-02-24T06:31:43Z | A priori Estimates for Deep Residual Network in Continuous-time Reinforcement Learning | Deep reinforcement learning excels in numerous large-scale practical applications. However, existing performance analyses ignore the unique characteristics of continuous-time control problems, are unable to directly estimate the generalization error of the Bellman optimal loss, and require a boundedness assumption. Our work focuses on continuous-time control problems and proposes a method that is applicable to all such problems where the transition function satisfies semi-group and Lipschitz properties. Under this method, we can directly analyze the *a priori* generalization error of the Bellman optimal loss. The core of this method lies in two transformations of the loss function. To complete the transformation, we propose a decomposition method for the maximum operator. Additionally, this analysis method does not require a boundedness assumption. Finally, we obtain an *a priori* generalization error without the curse of dimensionality. | Shuyu Yin, Qixuan Zhou, Fei Wen, Tao Luo |
| null | null | 2402.16901 | null | null | http://arxiv.org/pdf/2402.16901v1 | 2024-02-24T13:13:17Z | 2024-02-24T13:13:17Z | FGBERT: Function-Driven Pre-trained Gene Language Model for Metagenomics | Metagenomic data, comprising mixed multi-species genomes, are prevalent in diverse environments like oceans and soils, significantly impacting human health and ecological functions. However, current research relies on K-mer representations, limiting the capture of structurally relevant gene contexts. To address these limitations and further our understanding of complex relationships between metagenomic sequences and their functions, we introduce a protein-based gene representation as a context-aware and structure-relevant tokenizer. Our approach includes Masked Gene Modeling (MGM) for gene group-level pre-training, providing insights into inter-gene contextual information, and Triple Enhanced Metagenomic Contrastive Learning (TEM-CL) for gene-level pre-training to model gene sequence-function relationships. MGM and TEM-CL constitute our novel metagenomic language model FGBERT, pre-trained on 100 million metagenomic sequences. We demonstrate the superiority of FGBERT on eight datasets. | ChenRui Duan, Zelin Zang, Yongjie Xu, Hang He, Zihan Liu, Zijia Song, Ju-Sheng Zheng, Stan Z. Li |
| null | null | 2402.16902 | null | null | http://arxiv.org/pdf/2402.16902v2 | 2024-05-27T02:24:25Z | 2024-02-24T13:39:05Z | PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA | With the rapid scaling of large language models (LLMs), serving numerous low-rank adaptations (LoRAs) concurrently has become increasingly impractical, leading to unaffordable costs and necessitating more parameter-efficient finetuning methods. In this work, we introduce Partially Rotation-enhanced Low-Rank Adaptation (PRoLoRA), an intra-layer sharing mechanism comprising four essential components: broadcast reduction, rotation enhancement, partially-sharing refinement, and a rectified initialization strategy. As a superset of LoRA, PRoLoRA retains its advantages and effectively circumvents the drawbacks of peer parameter-sharing methods, with superior model capacity, practical feasibility, and broad applicability. Empirical experiments demonstrate the remarkably higher parameter efficiency of PRoLoRA in both specific parameter budget and performance target scenarios, and its scalability to larger LLMs. Notably, with fewer trainable parameters, PRoLoRA still outperforms LoRA on multiple instruction tuning datasets. Subsequently, an ablation study is conducted to validate the necessity of individual components and highlight the superiority of PRoLoRA over three potential variants. Hopefully, the conspicuously higher parameter efficiency can establish PRoLoRA as a resource-friendly alternative to LoRA. | Sheng Wang, Boyang Xue, Jiacheng Ye, Jiyue Jiang, Liheng Chen, Lingpeng Kong, Chuan Wu |
null | null |
2402.16903
| null | null |
http://arxiv.org/pdf/2402.16903v1
|
2024-02-24T14:42:42Z
|
2024-02-24T14:42:42Z
|
A novel data generation scheme for surrogate modelling with deep
operator networks
|
Operator-based neural network architectures such as DeepONets have emerged as a promising tool for the surrogate modeling of physical systems. In general, for operator surrogate modeling, the training data are generated by solving the PDEs with techniques such as the Finite Element Method (FEM). The computationally intensive nature of data generation is one of the biggest bottlenecks in deploying these surrogate models in practical applications. In this study, we propose a novel methodology to alleviate the computational burden associated with training data generation for DeepONets. Unlike the existing literature, the proposed framework for data generation does not use any partial differential equation integration strategy, thereby significantly reducing the computational cost of generating the training dataset for a DeepONet. In the proposed strategy, the output field is first generated randomly, satisfying the boundary conditions, using Gaussian Process Regression (GPR). From the output field, the input source field can then be calculated easily using finite difference techniques. The proposed methodology can be extended to other operator learning methods, making the approach widely applicable. To validate the proposed approach, we employ the heat equation as the model problem and develop surrogate models for numerous boundary value problems.
|
[
"['Shivam Choubey' 'Birupaksha Pal' 'Manish Agrawal']"
] |
null | null |
2402.16904
| null | null |
http://arxiv.org/pdf/2402.16904v1
|
2024-02-24T18:46:06Z
|
2024-02-24T18:46:06Z
|
Selective Task offloading for Maximum Inference Accuracy and Energy
efficient Real-Time IoT Sensing Systems
|
Recent advancements in small-size inference models have facilitated AI deployment on the edge. However, the resource-limited nature of edge devices poses new challenges, especially for real-time applications. Deploying multiple inference models (or a single tunable model) varying in size, and therefore in accuracy and power consumption, in addition to an edge server inference model, can offer a dynamic system in which the allocation of inference models to inference jobs is performed according to the current resource conditions. Therefore, in this work, we tackle the problem of selectively allocating inference models to jobs, or offloading them to the edge server, to maximize inference accuracy under time and energy constraints. This problem is shown to be an instance of the unbounded multidimensional knapsack problem, which is strongly NP-hard. We propose a lightweight hybrid genetic algorithm (LGSTO) to solve this problem, introducing a termination condition and neighborhood exploration techniques for faster evolution of populations. We compare LGSTO with naive and dynamic programming solutions, with classic genetic algorithms using different reproduction methods including NSGA-II, and with other evolutionary methods such as particle swarm optimization (PSO) and ant colony optimization (ACO). Experimental results show that LGSTO performs three times faster than the fastest comparable scheme while producing schedules with higher average accuracy.
|
[
"['Abdelkarim Ben Sada' 'Amar Khelloufi' 'Abdenacer Naouri'\n 'Huansheng Ning' 'Sahraoui Dhelim']"
] |
null | null |
2402.16905
| null | null |
http://arxiv.org/pdf/2402.16905v1
|
2024-02-24T21:36:26Z
|
2024-02-24T21:36:26Z
|
Enforcing Temporal Constraints on Generative Agent Behavior with
Reactive Synthesis
|
The surge in popularity of Large Language Models (LLMs) has opened doors for new approaches to the creation of interactive agents. However, managing the temporal behavior of such agents over the course of an interaction remains challenging. The stateful, long-horizon, and quantitative reasoning required for coherent agent behavior does not fit well into the LLM paradigm. We propose a combination of formal logic-based program synthesis and LLM content generation to create generative agents that adhere to temporal constraints. Our approach uses Temporal Stream Logic (TSL) to generate an automaton that enforces a temporal structure on the agent and leaves the details of each action at a given moment in time to an LLM. By using TSL, we are able to augment the generative agent so that users have stronger guarantees on its behavior, better interpretability of the system, and more ability to build agents in a modular way. We evaluate our approach on different tasks involved in creating a coherent interactive agent specialized for various application domains. We found that across all of the tasks, our approach using TSL achieves at least 96% adherence, whereas the pure LLM-based approach demonstrates as low as 14.67% adherence.
|
[
"['Raven Rothkopf' 'Hannah Tongxin Zeng' 'Mark Santolucito']"
] |
null | null |
2402.16907
| null | null |
http://arxiv.org/pdf/2402.16907v1
|
2024-02-25T04:24:28Z
|
2024-02-25T04:24:28Z
|
Diffusion Posterior Proximal Sampling for Image Restoration
|
Diffusion models have demonstrated remarkable efficacy in generating high-quality samples. Existing diffusion-based image restoration algorithms exploit pre-trained diffusion models to leverage data priors, yet they still preserve elements inherited from the unconditional generation paradigm. These strategies initiate the denoising process with pure white noise and incorporate random noise at each generative step, leading to over-smoothed results. In this paper, we introduce a refined paradigm for diffusion-based image restoration. Specifically, we opt for a sample consistent with the measurement identity at each generative step, exploiting the sampling selection as an avenue for output stability and enhancement. Besides, we start the restoration process with an initialization combined with the measurement signal, providing supplementary information to better align the generative process. Extensive experimental results and analyses validate the effectiveness of our proposed approach across diverse image restoration tasks.
|
[
"['Hongjie Wu' 'Linchao He' 'Mingqin Zhang' 'Dongdong Chen' 'Kunming Luo'\n 'Mengting Luo' 'Ji-Zhe Zhou' 'Hu Chen' 'Jiancheng Lv']"
] |
null | null |
2402.16908
| null | null |
http://arxiv.org/pdf/2402.16908v2
|
2024-03-20T07:05:55Z
|
2024-02-25T06:23:02Z
|
Lightweight, error-tolerant edge detection using memristor-enabled
stochastic logics
|
The demand for efficient edge vision has spurred interest in developing stochastic computing approaches for performing image processing tasks. Memristors with inherent stochasticity readily introduce probability into the computations and thus enable stochastic image processing computations. Here, we present a stochastic computing approach for edge detection, a fundamental image processing technique, facilitated with memristor-enabled stochastic logics. Specifically, we integrate the memristors with logic circuits and harness the stochasticity from the memristors to realize compact stochastic logics for stochastic number encoding and processing. The stochastic numbers, exhibiting well-regulated probabilities and correlations, can be processed to perform logic operations with statistical probabilities. This can facilitate lightweight stochastic edge detection for edge visual scenarios characterized by high levels of noise. As a practical demonstration, we implement a hardware stochastic Roberts cross operator using the stochastic logics and demonstrate its exceptional edge detection performance, remarkably, with 95% less computational cost while withstanding 50% bit-flip errors. The results underscore the great potential of our stochastic edge detection approach in developing lightweight, error-tolerant edge vision hardware and systems for autonomous driving, virtual/augmented reality, medical imaging diagnosis, industrial automation, and beyond.
|
[
"['Lekai Song' 'Pengyu Liu' 'Jingfang Pei' 'Yang Liu' 'Songwei Liu'\n 'Shengbo Wang' 'Leonard W. T. Ng' 'Tawfique Hasan' 'Kong-Pang Pun'\n 'Shuo Gao' 'Guohua Hu']"
] |
null | null |
2402.16909
| null | null |
http://arxiv.org/pdf/2402.16909v1
|
2024-02-25T12:07:32Z
|
2024-02-25T12:07:32Z
|
Impact of Physical Activity on Quality of Life During Pregnancy: A
Causal ML Approach
|
The concept of Quality of Life (QoL) refers to a holistic measurement of an individual's well-being, incorporating psychological and social aspects. Pregnant women, especially those with obesity and stress, often experience lower QoL. Physical activity (PA) has shown the potential to enhance QoL. However, pregnant women who are overweight or obese rarely meet the recommended level of PA. Studies have investigated the relationship between PA and QoL during pregnancy using correlation-based approaches. These methods tend to discover spurious correlations between variables rather than causal relationships. Besides, the existing methods rely mainly on physical activity parameters and neglect the use of other factors such as maternal (medical) history and context data, leading to biased estimates. Furthermore, the estimations lack an understanding of mediators and counterfactual scenarios that might affect them. In this paper, we investigate the causal relationship between being physically active (treatment variable) and QoL (outcome) during pregnancy and postpartum. To estimate the causal effect, we develop a Causal Machine Learning method, integrating causal discovery and causal inference components. The data for our investigation are derived from a long-term wearable-based health monitoring study focusing on overweight and obese pregnant women. A machine learning (meta-learner) estimation technique is used to estimate the causal effect. Our results show that performing adequate physical activity during pregnancy and postpartum improves QoL by 7.3 and 3.4 units on average in the physical health and psychological domains, respectively. In the final step, four refutation analysis techniques are employed to validate our estimation.
|
[
"['Kianoosh Kazemi' 'Iina Ryhtä' 'Iman Azimi' 'Hannakaisa Niela-Vilen'\n 'Anna Axelin' 'Amir M. Rahmani' 'Pasi Liljeberg']"
] |
null | null |
2402.16911
| null | null |
http://arxiv.org/pdf/2402.16911v1
|
2024-02-25T13:28:08Z
|
2024-02-25T13:28:08Z
|
Trustworthy Personalized Bayesian Federated Learning via Posterior
Fine-Tune
|
Performance degradation owing to data heterogeneity and low output interpretability are the most significant challenges faced by federated learning in practical applications. Personalized federated learning diverges from traditional approaches, as it no longer seeks to train a single model but instead tailors a unique personalized model to each client. However, previous work focused only on personalization from the perspective of neural network parameters and lacked robustness and interpretability. In this work, we establish a novel framework for personalized federated learning, incorporating Bayesian methodology to enhance the algorithm's ability to quantify uncertainty. Furthermore, we introduce normalizing flows to achieve personalization from the parameter posterior perspective and theoretically analyze the impact of normalizing flows on out-of-distribution (OOD) detection for Bayesian neural networks. Finally, we evaluate our approach on heterogeneous datasets, and the experimental results indicate that the new algorithm not only improves accuracy but also significantly outperforms the baseline in OOD detection, owing to the reliable output of the Bayesian approach.
|
[
"['Mengen Luo' 'Chi Xu' 'Ercan Engin Kuruoglu']"
] |
null | null |
2402.16912
| null | null |
http://arxiv.org/pdf/2402.16912v1
|
2024-02-25T16:45:39Z
|
2024-02-25T16:45:39Z
|
An Adversarial Robustness Benchmark for Enterprise Network Intrusion
Detection
|
As cyber-attacks become more sophisticated, improving the robustness of Machine Learning (ML) models must be a priority for enterprises of all sizes. To reliably compare the robustness of different ML models for cyber-attack detection in enterprise computer networks, they must be evaluated in standardized conditions. This work presents a methodical adversarial robustness benchmark of multiple decision tree ensembles with constrained adversarial examples generated from standard datasets. The robustness of regularly and adversarially trained RF, XGB, LGBM, and EBM models was evaluated on the original CICIDS2017 dataset, a corrected version of it designated as NewCICIDS, and the HIKARI dataset, which contains more recent network traffic. NewCICIDS led to models with a better performance, especially XGB and EBM, but RF and LGBM were less robust against the more recent cyber-attacks of HIKARI. Overall, the robustness of the models to adversarial cyber-attack examples was improved without their generalization to regular traffic being affected, enabling a reliable detection of suspicious activity without costly increases of false alarms.
|
[
"['João Vitorino' 'Miguel Silva' 'Eva Maia' 'Isabel Praça']"
] |
null | null |
2402.16913
| null | null |
http://arxiv.org/pdf/2402.16913v1
|
2024-02-25T17:39:44Z
|
2024-02-25T17:39:44Z
|
PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from
the perspective of partial differential equations
|
Recent advancements in deep learning have led to the development of various models for long-term multivariate time-series forecasting (LMTF), many of which have shown promising results. Generally, the focus has been on historical-value-based models, which rely on past observations to predict future series. Notably, a new trend has emerged with time-index-based models, offering a more nuanced understanding of the continuous dynamics underlying time series. Unlike these two types of models that aggregate the information of spatial domains or temporal domains, in this paper, we consider multivariate time series as spatiotemporal data regularly sampled from a continuous dynamical system, which can be represented by partial differential equations (PDEs), with the spatial domain being fixed. Building on this perspective, we present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers, following the encoding-integration-decoding operations. Our extensive experimentation across seven diverse real-world LMTF datasets reveals that PDETime not only adapts effectively to the intrinsic spatiotemporal nature of the data but also sets new benchmarks, achieving state-of-the-art results.
|
[
"['Shiyi Qi' 'Zenglin Xu' 'Yiduo Li' 'Liangjian Wen' 'Qingsong Wen'\n 'Qifan Wang' 'Yuan Qi']"
] |
null | null |
2402.16915
| null | null |
http://arxiv.org/pdf/2402.16915v1
|
2024-02-25T18:27:25Z
|
2024-02-25T18:27:25Z
|
More Than Routing: Joint GPS and Route Modeling for Refined Trajectory
Representation Learning
|
Trajectory representation learning plays a pivotal role in supporting various downstream tasks. To filter out noise in GPS trajectories, traditional methods tend to focus on routing-based approaches that simplify the trajectories. However, this ignores the motion details contained in the GPS data, limiting the capability of trajectory representation learning. To fill this gap, we propose a novel self-supervised representation learning framework that jointly models GPS and route data, namely JGRM. We consider the GPS trajectory and the route as two modes of a single movement observation and fuse information through inter-modal information interaction. Specifically, we develop two encoders, each tailored to capture representations of routes and GPS trajectories, respectively. The representations from the two modalities are fed into a shared transformer for inter-modal information interaction. Finally, we design three self-supervised tasks to train the model. We validate the effectiveness of the proposed method through extensive experiments on two real datasets. The experimental results demonstrate that JGRM outperforms existing methods in both road segment representation and trajectory representation tasks. Our source code is available at Anonymous Github.
|
[
"['Zhipeng Ma' 'Zheyan Tu' 'Xinhai Chen' 'Yan Zhang' 'Deguo Xia'\n 'Guyue Zhou' 'Yilun Chen' 'Yu Zheng' 'Jiangtao Gong']"
] |
null | null |
2402.16918
| null | null |
http://arxiv.org/pdf/2402.16918v3
|
2024-07-07T14:03:04Z
|
2024-02-26T04:47:32Z
|
m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers
|
Modular neural architectures are gaining attention for their powerful generalization and efficient adaptation to new domains. However, training these models poses challenges due to optimization difficulties arising from intrinsic sparse connectivity. Leveraging knowledge from monolithic models through techniques like knowledge distillation can facilitate training and enable the integration of diverse knowledge. Nevertheless, conventional knowledge distillation approaches are not tailored to modular models and struggle with their unique architectures and enormous parameter counts. Motivated by these challenges, we propose module-to-module knowledge distillation (m2mKD) for transferring knowledge between modules. m2mKD combines teacher modules of a pretrained monolithic model and student modules of a modular model with a shared meta model to encourage the student module to mimic the behaviour of the teacher module. We evaluate m2mKD on two modular neural architectures: Neural Attentive Circuits (NACs) and Vision Mixture-of-Experts (V-MoE). Applying m2mKD to NACs yields significant improvements in IID accuracy on Tiny-ImageNet (up to 5.6%) and OOD robustness on Tiny-ImageNet-R (up to 4.2%). Additionally, the V-MoE-Base model trained with m2mKD achieves 3.5% higher accuracy than end-to-end training on ImageNet-1k. Code is available at https://github.com/kamanphoebe/m2mKD.
|
[
"['Ka Man Lo' 'Yiming Liang' 'Wenyu Du' 'Yuantao Fan' 'Zili Wang'\n 'Wenhao Huang' 'Lei Ma' 'Jie Fu']"
] |
null | null |
2402.16919
| null | null |
http://arxiv.org/pdf/2402.16919v1
|
2024-02-26T06:29:05Z
|
2024-02-26T06:29:05Z
|
Personalized Federated Instruction Tuning via Neural Architecture Search
|
Federated Instruction Tuning (FIT) has shown the ability to achieve collaborative model instruction tuning among massive data owners without sharing private data. However, it still faces two key challenges, i.e., data and resource heterogeneity. Due to the varying data distributions and preferences among data owners, FIT cannot adapt to the personalized data of individual owners. Moreover, clients with superior computational abilities are constrained, since they need to maintain the same fine-tuning architecture as the weaker clients. To address these issues, we propose a novel Personalized Federated Instruction Tuning (PerFIT) framework based on architecture search. Specifically, PerFIT allows each client to search for a personalized architecture by expanding the trainable parameter space of the global model and then pruning the parameters back to the original state. This procedure allows personalized instruction fine-tuning within expanded parameter spaces while preserving the same number of trainable parameters. Furthermore, to unleash the abilities of heterogeneous computational resources and enhance the performance of personalization on local data, we exploit personalized parameter-wise aggregation. The evaluation with multiple LLMs in non-IID scenarios demonstrates that, compared to the state-of-the-art FIT methods, our approach can achieve up to a 23% decrease in perplexity.
|
[
"['Pengyu Zhang' 'Yingbo Zhou' 'Ming Hu' 'Junxian Feng' 'Jiawen Weng'\n 'Mingsong Chen']"
] |
null | null |
2402.16925
| null | null |
http://arxiv.org/pdf/2402.16925v1
|
2024-02-26T11:18:53Z
|
2024-02-26T11:18:53Z
|
Minimize Control Inputs for Strong Structural Controllability Using
Reinforcement Learning with Graph Neural Network
|
Strong structural controllability (SSC) guarantees that a networked system with linear time-invariant dynamics is controllable for all numerical realizations of its parameters. Current research has established algebraic and graph-theoretic conditions of SSC for zero/nonzero or zero/nonzero/arbitrary structures. One relevant practical problem is how to fully control the system with the minimal number of input signals and to identify which nodes must be imposed with signals. Previous work shows that this optimization problem is NP-hard and that it is difficult to find the solution. To solve this problem, we formulate the graph coloring process as a Markov decision process (MDP) according to the graph-theoretic condition of SSC for both the zero/nonzero and zero/nonzero/arbitrary structures. We use an actor-critic method with a directed graph neural network, which represents the color information of the graph, to optimize the MDP. Our method is validated on a social influence network with real data and on different complex network models. We find that the number of input nodes is determined by the average degree of the network, and that the selected input nodes tend to have low in-degree, avoiding high-degree nodes.
|
[
"['Mengbang Zou' 'Weisi Guo' 'Bailu Jin']"
] |
null | null |
2402.16926
| null | null |
http://arxiv.org/pdf/2402.16926v1
|
2024-02-26T11:43:01Z
|
2024-02-26T11:43:01Z
|
On the (In)feasibility of ML Backdoor Detection as an Hypothesis Testing
Problem
|
We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of such problems, providing evidence for the utility and applicability of our definition. The main contributions of this work are an impossibility result and an achievability result for backdoor detection. We show a no-free-lunch theorem, proving that universal (adversary-unaware) backdoor detection is impossible, except for very small alphabet sizes. Thus, we argue that backdoor detection methods need to be either explicitly or implicitly adversary-aware. However, our work does not imply that backdoor detection cannot work in specific scenarios, as evidenced by successful backdoor detection methods in the scientific literature. Furthermore, we connect our definition to the probably approximately correct (PAC) learnability of the out-of-distribution detection problem.
|
[
"['Georg Pichler' 'Marco Romanelli' 'Divya Prakash Manivannan'\n 'Prashanth Krishnamurthy' 'Farshad Khorrami' 'Siddharth Garg']"
] |
null | null |
2402.16930
| null | null |
http://arxiv.org/pdf/2402.16930v1
|
2024-02-26T15:14:38Z
|
2024-02-26T15:14:38Z
|
TrustMol: Trustworthy Inverse Molecular Design via Alignment with
Molecular Dynamics
|
Data-driven generation of molecules with desired properties, also known as inverse molecular design (IMD), has attracted significant attention in recent years. Despite the significant progress in the accuracy and diversity of solutions, existing IMD methods lag behind in terms of trustworthiness. The root issue is that the design process of these methods is increasingly more implicit and indirect, and this process is also isolated from the native forward process (NFP), the ground-truth function that models the molecular dynamics. Following this insight, we propose TrustMol, an IMD method built to be trustworthy. For this purpose, TrustMol relies on a set of technical novelties including a new variational autoencoder network. Moreover, we propose a latent-property pairs acquisition method to effectively navigate the complexities of molecular latent optimization, a process that seems intuitive yet challenging due to the high-frequency and discontinuous nature of molecule space. TrustMol also integrates uncertainty-awareness into molecular latent optimization. These lead to improvements in both explainability and reliability of the IMD process. We validate the trustworthiness of TrustMol through a wide range of experiments.
|
[
"['Kevin Tirta Wijaya' 'Navid Ansari' 'Hans-Peter Seidel' 'Vahid Babaei']"
] |
null | null |
2402.16933
| null | null |
http://arxiv.org/pdf/2402.16933v1
|
2024-02-26T17:20:16Z
|
2024-02-26T17:20:16Z
|
Avoiding Catastrophic Forgetting in Visual Classification Using Human
Concept Formation
|
Deep neural networks have excelled in machine learning, particularly in vision tasks; however, they often suffer from catastrophic forgetting when learning new tasks sequentially. In this work, we propose Cobweb4V, a novel visual classification approach that builds on Cobweb, a human-like learning system inspired by the way humans incrementally learn new concepts over time. In this research, we conduct a comprehensive evaluation, showcasing the proficiency of Cobweb4V in learning visual concepts, requiring less data to achieve effective learning outcomes compared to traditional methods, maintaining stable performance over time, and achieving commendable asymptotic behavior, without catastrophic forgetting effects. These characteristics align with learning strategies in human cognition, positioning Cobweb4V as a promising alternative to neural network approaches.
|
[
"['Nicki Barari' 'Xin Lian' 'Christopher J. MacLellan']"
] |
null | null |
2402.16934
| null | null |
http://arxiv.org/pdf/2402.16934v1
|
2024-02-26T17:53:15Z
|
2024-02-26T17:53:15Z
|
FedReview: A Review Mechanism for Rejecting Poisoned Updates in
Federated Learning
|
Federated learning has recently emerged as a decentralized approach to learning a high-performance model without access to user data. Despite its effectiveness, federated learning gives malicious users opportunities to manipulate the model by uploading poisoned model updates to the server. In this paper, we propose a review mechanism called FedReview to identify and decline potentially poisoned updates in federated learning. Under our mechanism, the server randomly assigns a subset of clients as reviewers to evaluate the model updates on their training datasets in each round. The reviewers rank the model updates based on the evaluation results and count the number of updates with relatively low quality as the estimated number of poisoned updates. Based on the review reports, the server employs a majority voting mechanism to integrate the rankings and remove the potentially poisoned updates in the model aggregation process. Extensive evaluation on multiple datasets demonstrates that FedReview can assist the server in learning a well-performing global model in an adversarial environment.
|
[
"['Tianhang Zheng' 'Baochun Li']"
] |
null | null |
2402.16936
| null | null |
http://arxiv.org/pdf/2402.16936v1
|
2024-02-26T18:54:15Z
|
2024-02-26T18:54:15Z
|
Disentangled 3D Scene Generation with Layout Learning
|
We introduce a method to generate 3D scenes that are disentangled into their component objects. This disentanglement is unsupervised, relying only on the knowledge of a large pretrained text-to-image model. Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene. Concretely, our method jointly optimizes multiple NeRFs from scratch - each representing its own object - along with a set of layouts that composite these objects into scenes. We then encourage these composited scenes to be in-distribution according to the image generator. We show that despite its simplicity, our approach successfully generates 3D scenes decomposed into individual objects, enabling new capabilities in text-to-3D content creation. For results and an interactive demo, see our project page at https://dave.ml/layoutlearning/
|
[
"['Dave Epstein' 'Ben Poole' 'Ben Mildenhall' 'Alexei A. Efros'\n 'Aleksander Holynski']"
] |
null | null |
2402.16978
| null | null |
http://arxiv.org/pdf/2402.16978v1
|
2024-02-26T19:25:21Z
|
2024-02-26T19:25:21Z
|
An inexact Bregman proximal point method and its acceleration version
for unbalanced optimal transport
|
The Unbalanced Optimal Transport (UOT) problem plays an increasingly important role in computational biology, computational imaging and deep learning. The Scaling algorithm is widely used to solve UOT due to its convenience and good convergence properties. However, this algorithm has lower accuracy for large regularization parameters, and, due to stability issues, small regularization parameters can easily lead to numerical overflow. We address this challenge by developing an inexact Bregman proximal point method for solving UOT. This algorithm approximates the proximal operator using the Scaling algorithm at each iteration. The algorithm (1) converges to the true solution of UOT, (2) has theoretical guarantees and robust regularization parameter selection, (3) mitigates numerical stability issues, and (4) can achieve computational complexity comparable to that of the Scaling algorithm in specific practice. Building upon this, we develop an accelerated version of the inexact Bregman proximal point method for solving UOT by using acceleration techniques of the Bregman proximal point method, and provide theoretical guarantees and experimental validation of its convergence and acceleration.
|
[
"['Xiang Chen' 'Faqiang Wang' 'Jun Liu' 'Li Cui']"
] |
null | null |
2402.16979
| null | null |
http://arxiv.org/pdf/2402.16979v1
|
2024-02-26T19:27:00Z
|
2024-02-26T19:27:00Z
|
Algorithmic Arbitrariness in Content Moderation
|
Machine learning (ML) is widely used to moderate online content. Despite its scalability relative to human moderation, the use of ML introduces unique challenges to content moderation. One such challenge is predictive multiplicity: multiple competing models for content classification may perform equally well on average, yet assign conflicting predictions to the same content. This multiplicity can result from seemingly innocuous choices during model development, such as random seed selection for parameter initialization. We experimentally demonstrate how content moderation tools can arbitrarily classify samples as toxic, leading to arbitrary restrictions on speech. We discuss these findings in terms of human rights set out by the International Covenant on Civil and Political Rights (ICCPR), namely freedom of expression, non-discrimination, and procedural justice. We analyze (i) the extent of predictive multiplicity among state-of-the-art LLMs used for detecting toxic content; (ii) the disparate impact of this arbitrariness across social groups; and (iii) how model multiplicity compares to unambiguous human classifications. Our findings indicate that the up-scaled algorithmic moderation risks legitimizing an algorithmic leviathan, where an algorithm disproportionately manages human rights. To mitigate such risks, our study underscores the need to identify and increase the transparency of arbitrariness in content moderation applications. Since algorithmic content moderation is being fueled by pressing social concerns, such as disinformation and hate speech, our discussion on harms raises concerns relevant to policy debates. Our findings also contribute to content moderation and intermediary liability laws being discussed and passed in many countries, such as the Digital Services Act in the European Union, the Online Safety Act in the United Kingdom, and the Fake News Bill in Brazil.
|
[
"['Juan Felipe Gomez' 'Caio Vieira Machado' 'Lucas Monteiro Paes'\n 'Flavio P. Calmon']"
] |
null | null |
2402.16990
| null | null |
http://arxiv.org/pdf/2402.16990v1
|
2024-02-26T19:49:54Z
|
2024-02-26T19:49:54Z
|
inGRASS: Incremental Graph Spectral Sparsification via
Low-Resistance-Diameter Decomposition
|
This work presents inGRASS, a novel algorithm designed for incremental spectral sparsification of large undirected graphs. The proposed inGRASS algorithm is highly scalable and parallel-friendly, having a nearly-linear time complexity for the setup phase and the ability to update the spectral sparsifier in $O(\log N)$ time for each incremental change made to the original graph with $N$ nodes. A key component in the setup phase of inGRASS is a multilevel resistance embedding framework introduced for efficiently identifying spectrally-critical edges and effectively detecting redundant ones, which is achieved by decomposing the initial sparsifier into many node clusters with bounded effective-resistance diameters, leveraging a low-resistance-diameter decomposition (LRD) scheme. The update phase of inGRASS exploits low-dimensional node embedding vectors for efficiently estimating the importance and uniqueness of each newly added edge. As demonstrated through extensive experiments, inGRASS achieves up to over $200\times$ speedups while retaining comparable solution quality in incremental spectral sparsification of graphs obtained from various datasets, such as circuit simulations, finite element analysis, and social networks.
|
[
"['Ali Aghdaei' 'Zhuo Feng']"
] |
null | null |
2402.16991
| null | null |
http://arxiv.org/pdf/2402.16991v2
|
2024-03-04T14:04:51Z
|
2024-02-26T19:52:33Z
|
A Phase Transition in Diffusion Models Reveals the Hierarchical Nature
of Data
|
Understanding the structure of real data is paramount in advancing modern deep-learning methodologies. Natural data such as images are believed to be composed of features organised in a hierarchical and combinatorial manner, which neural networks capture during learning. Recent advancements show that diffusion models can generate high-quality images, hinting at their ability to capture this underlying structure. We study this phenomenon in a hierarchical generative model of data. We find that the backward diffusion process acting after a time $t$ is governed by a phase transition at some threshold time, where the probability of reconstructing high-level features, like the class of an image, suddenly drops. Instead, the reconstruction of low-level features, such as specific details of an image, evolves smoothly across the whole diffusion process. This result implies that at times beyond the transition, the class has changed but the generated sample may still be composed of low-level elements of the initial image. We validate these theoretical insights through numerical experiments on class-unconditional ImageNet diffusion models. Our analysis characterises the relationship between time and scale in diffusion models and puts forward generative models as powerful tools to model combinatorial data properties.
|
[
"['Antonio Sclocchi' 'Alessandro Favero' 'Matthieu Wyart']"
] |
null | null |
2402.16994
| null | null |
http://arxiv.org/pdf/2402.16994v2
|
2024-04-11T03:44:49Z
|
2024-02-26T20:00:57Z
|
GEM3D: GEnerative Medial Abstractions for 3D Shape Synthesis
|
We introduce GEM3D -- a new deep, topology-aware generative model of 3D shapes. The key ingredient of our method is a neural skeleton-based representation encoding information on both shape topology and geometry. Through a denoising diffusion probabilistic model, our method first generates skeleton-based representations following the Medial Axis Transform (MAT), then generates surfaces through a skeleton-driven neural implicit formulation. The neural implicit takes into account the topological and geometric information stored in the generated skeleton representations to yield surfaces that are more topologically and geometrically accurate compared to previous neural field formulations. We discuss applications of our method in shape synthesis and point cloud reconstruction tasks, and evaluate our method both qualitatively and quantitatively. We demonstrate significantly more faithful surface reconstruction and diverse shape generation results compared to the state-of-the-art, also involving challenging scenarios of reconstructing and synthesizing structurally complex, high-genus shape surfaces from Thingi10K and ShapeNet.
|
[
"['Dmitry Petrov' 'Pradyumn Goyal' 'Vikas Thamizharasan' 'Vladimir G. Kim'\n 'Matheus Gadelha' 'Melinos Averkiou' 'Siddhartha Chaudhuri'\n 'Evangelos Kalogerakis']"
] |
null | null |
2402.16996
| null | null |
http://arxiv.org/pdf/2402.16996v1
|
2024-02-26T20:04:01Z
|
2024-02-26T20:04:01Z
|
Towards Decoding Brain Activity During Passive Listening of Speech
|
The aim of the study is to investigate the complex mechanisms of speech perception and ultimately decode the electrical changes in the brain occurring while listening to speech. We attempt to decode heard speech from intracranial electroencephalographic (iEEG) data using deep learning methods. The goal is to aid the advancement of brain-computer interface (BCI) technology for speech synthesis, and, hopefully, to provide an additional perspective on the cognitive processes of speech perception. This approach diverges from the conventional focus on speech production and instead chooses to investigate neural representations of perceived speech. This angle opened up a complex perspective, potentially allowing us to study more sophisticated neural patterns. Leveraging the power of deep learning models, the research aimed to establish a connection between these intricate neural activities and the corresponding speech sounds. Despite the approach not having achieved a breakthrough yet, the research sheds light on the potential of decoding neural activity during speech perception. Our current efforts can serve as a foundation, and we are optimistic about the potential of expanding and improving upon this work to move closer towards more advanced BCIs, a better understanding of processes underlying perceived speech, and its relation to spoken speech.
|
[
"['Milán András Fodor' 'Tamás Gábor Csapó' 'Frigyes Viktor Arthur']"
] |
null | null |
2402.16998
| null | null |
http://arxiv.org/pdf/2402.16998v1
|
2024-02-26T20:13:58Z
|
2024-02-26T20:13:58Z
|
What Do Language Models Hear? Probing for Auditory Representations in
Language Models
|
This work explores whether language models encode meaningfully grounded representations of sounds of objects. We learn a linear probe that retrieves the correct text representation of an object given a snippet of audio related to that object, where the sound representation is given by a pretrained audio model. This probe is trained via a contrastive loss that pushes the language representations and sound representations of an object to be close to one another. After training, the probe is tested on its ability to generalize to objects that were not seen during training. Across different language models and audio models, we find that the probe generalization is above chance in many cases, indicating that despite being trained only on raw text, language models encode grounded knowledge of sounds for some objects.
|
[
"['Jerry Ngo' 'Yoon Kim']"
] |
null | null |
2402.17002
| null | null |
http://arxiv.org/pdf/2402.17002v4
|
2024-05-22T23:02:44Z
|
2024-02-26T20:18:43Z
|
Discovering Abstract Symbolic Relations by Learning Unitary Group
Representations
|
We investigate a principled approach for symbolic operation completion (SOC), a minimal task for studying symbolic reasoning. While conceptually similar to matrix completion, SOC poses a unique challenge in modeling abstract relationships between discrete symbols. We demonstrate that SOC can be efficiently solved by a minimal model - a bilinear map - with a novel factorized architecture. Inspired by group representation theory, this architecture leverages matrix embeddings of symbols, modeling each symbol as an operator that dynamically influences others. Our model achieves perfect test accuracy on SOC with comparable or superior sample efficiency to Transformer baselines across most datasets, while boasting significantly faster learning speeds (100-1000$\times$). Crucially, the model exhibits an implicit bias towards learning general group structures, precisely discovering the unitary representations of underlying groups. This remarkable property not only confers interpretability but also significant implications for automatic symmetry discovery in geometric deep learning. Overall, our work establishes group theory as a powerful guiding principle for discovering abstract algebraic structures in deep learning, and showcases matrix representations as a compelling alternative to traditional vector embeddings for modeling symbolic relationships.
|
[
"['Dongsung Huh']"
] |
null | null |
2402.17003
| null | null |
http://arxiv.org/pdf/2402.17003v1
|
2024-02-26T20:19:14Z
|
2024-02-26T20:19:14Z
|
Monitoring Fidelity of Online Reinforcement Learning Algorithms in
Clinical Trials
|
Online reinforcement learning (RL) algorithms offer great potential for personalizing treatment for participants in clinical trials. However, deploying an online, autonomous algorithm in the high-stakes healthcare setting makes quality control and data quality especially difficult to achieve. This paper proposes algorithm fidelity as a critical requirement for deploying online RL algorithms in clinical trials. It emphasizes the responsibility of the algorithm to (1) safeguard participants and (2) preserve the scientific utility of the data for post-trial analyses. We also present a framework for pre-deployment planning and real-time monitoring to help algorithm developers and clinical researchers ensure algorithm fidelity. To illustrate our framework's practical application, we present real-world examples from the Oralytics clinical trial. Since Spring 2023, this trial successfully deployed an autonomous, online RL algorithm to personalize behavioral interventions for participants at risk for dental disease.
|
[
"['Anna L. Trella' 'Kelly W. Zhang' 'Inbal Nahum-Shani' 'Vivek Shetty'\n 'Iris Yan' 'Finale Doshi-Velez' 'Susan A. Murphy']"
] |
null | null |
2402.17012
| null | null |
http://arxiv.org/pdf/2402.17012v4
|
2024-07-15T02:37:09Z
|
2024-02-26T20:41:50Z
|
Pandora's White-Box: Precise Training Data Detection and Extraction in
Large Language Models
|
In this paper we develop state-of-the-art privacy attacks against Large Language Models (LLMs), where an adversary with some access to the model tries to learn something about the underlying training data. Our headline results are new membership inference attacks (MIAs) against pretrained LLMs that perform hundreds of times better than baseline attacks, and a pipeline showing that over 50% (!) of the fine-tuning dataset can be extracted from a fine-tuned LLM in natural settings. We consider varying degrees of access to the underlying model, pretraining and fine-tuning data, and both MIAs and training data extraction. For pretraining data, we propose two new MIAs: a supervised neural network classifier that predicts training data membership on the basis of (dimensionality-reduced) model gradients, as well as a variant of this attack that only requires logit access to the model by leveraging recent model-stealing work on LLMs. To our knowledge this is the first MIA that explicitly incorporates model-stealing information. Both attacks outperform existing black-box baselines, and our supervised attack closes the gap between MIA attack success against LLMs and the strongest known attacks for other machine learning models. In fine-tuning, we find that a simple attack based on the ratio of the loss between the base and fine-tuned models is able to achieve near-perfect MIA performance; we then leverage our MIA to extract a large fraction of the fine-tuning dataset from fine-tuned Pythia and Llama models. Our code is available at github.com/safr-ai-lab/pandora-llm.
|
[
"['Jeffrey G. Wang' 'Jason Wang' 'Marvin Li' 'Seth Neel']"
] |
null | null |
2402.17013
| null | null |
http://arxiv.org/pdf/2402.17013v1
|
2024-02-26T20:42:40Z
|
2024-02-26T20:42:40Z
|
Towards Explainability and Fairness in Swiss Judgement Prediction:
Benchmarking on a Multilingual Dataset
|
The assessment of explainability in Legal Judgement Prediction (LJP) systems is of paramount importance in building trustworthy and transparent systems, particularly considering the reliance of these systems on factors that may lack legal relevance or involve sensitive attributes. This study delves into the realm of explainability and fairness in LJP models, utilizing Swiss Judgement Prediction (SJP), the only available multilingual LJP dataset. We curate a comprehensive collection of rationales that `support' and `oppose' judgement from legal experts for 108 cases in German, French, and Italian. By employing an occlusion-based explainability approach, we evaluate the explainability performance of state-of-the-art monolingual and multilingual BERT-based LJP models, as well as models developed with techniques such as data augmentation and cross-lingual transfer, which demonstrated prediction performance improvement. Notably, our findings reveal that improved prediction performance does not necessarily correspond to enhanced explainability performance, underscoring the significance of evaluating models from an explainability perspective. Additionally, we introduce a novel evaluation framework, Lower Court Insertion (LCI), which allows us to quantify the influence of lower court information on model predictions, exposing current models' biases.
|
[
"['Santosh T. Y. S. S' 'Nina Baumgartner' 'Matthias Stürmer'\n 'Matthias Grabmair' 'Joel Niklaus']"
] |
null | null |
2402.17018
| null | null |
http://arxiv.org/pdf/2402.17018v1
|
2024-02-26T20:55:47Z
|
2024-02-26T20:55:47Z
|
A Curious Case of Remarkable Resilience to Gradient Attacks via Fully
Convolutional and Differentiable Front End with a Skip Connection
|
We tested front-end enhanced neural models where a frozen classifier was prepended by a differentiable and fully convolutional model with a skip connection. By training them using a small learning rate for about one epoch, we obtained models that retained the accuracy of the backbone classifier while being unusually resistant to gradient attacks including APGD and FAB-T attacks from the AutoAttack package, which we attributed to gradient masking. The gradient masking phenomenon is not new, but the degree of masking was quite remarkable for fully differentiable models that did not have gradient-shattering components such as JPEG compression or components that are expected to cause diminishing gradients. Though black box attacks can be partially effective against gradient masking, they are easily defeated by combining models into randomized ensembles. We estimate that such ensembles achieve near-SOTA AutoAttack accuracy on CIFAR10, CIFAR100, and ImageNet despite having virtually zero accuracy under adaptive attacks. Adversarial training of the backbone classifier can further increase resistance of the front-end enhanced model to gradient attacks. On CIFAR10, the respective randomized ensemble achieved 90.8$\pm 2.5$% (99% CI) accuracy under AutoAttack while having only 18.2$\pm 3.6$% accuracy under the adaptive attack. We do not establish SOTA in adversarial robustness. Instead, we make methodological contributions and further support the thesis that adaptive attacks designed with the complete knowledge of model architecture are crucial in demonstrating model robustness and that even the so-called white-box gradient attacks can have limited applicability. Although gradient attacks can be complemented with black-box attacks such as the SQUARE attack or the zero-order PGD, black-box attacks can be weak against randomized ensembles, e.g., when ensemble models mask gradients.
|
[
"['Leonid Boytsov' 'Ameya Joshi' 'Filipe Condessa']"
] |
null | null |
2402.17032
| null | null |
http://arxiv.org/pdf/2402.17032v1
|
2024-02-26T21:21:30Z
|
2024-02-26T21:21:30Z
|
REFACTOR: Learning to Extract Theorems from Proofs
|
Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show that, on a set of unseen proofs, REFACTOR is able to extract 19.6% of the theorems that humans would use to write the proofs. When applying the model to the existing Metamath library, REFACTOR extracted 16 new theorems. With newly extracted theorems, we show that the existing proofs in the Metamath database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of 733.5 times, and help shorten the proof lengths. Lastly, we demonstrate that the prover trained on the new-theorem refactored dataset proves more test theorems and outperforms state-of-the-art baselines by frequently leveraging a diverse set of newly extracted theorems. Code can be found at https://github.com/jinpz/refactor.
|
[
"['Jin Peng Zhou' 'Yuhuai Wu' 'Qiyang Li' 'Roger Grosse']"
] |
null | null |
2402.17036
| null | null |
http://arxiv.org/pdf/2402.17036v2
|
2024-06-03T23:08:30Z
|
2024-02-26T21:35:33Z
|
Iterated INLA for State and Parameter Estimation in Nonlinear Dynamical
Systems
|
Data assimilation (DA) methods use priors arising from differential equations to robustly interpolate and extrapolate data. Popular techniques such as ensemble methods that handle high-dimensional, nonlinear PDE priors focus mostly on state estimation, but can have difficulty learning the parameters accurately. On the other hand, machine learning-based approaches can naturally learn the state and parameters, but their applicability can be limited, or they produce uncertainties that are hard to interpret. Inspired by the Integrated Nested Laplace Approximation (INLA) method in spatial statistics, we propose an alternative approach to DA based on iteratively linearising the dynamical model. This produces a Gaussian Markov random field at each iteration, enabling one to use INLA to infer the state and parameters. Our approach can be used for arbitrary nonlinear systems, while retaining interpretability, and is furthermore demonstrated to outperform existing methods on the DA task. By providing a more nuanced approach to handling nonlinear PDE priors, our methodology offers improved accuracy and robustness in predictions, especially where data sparsity is prevalent.
|
[
"['Rafael Anderka' 'Marc Peter Deisenroth' 'So Takao']"
] |
null | null |
2402.17042
| null | null |
http://arxiv.org/pdf/2402.17042v2
|
2024-05-25T00:05:02Z
|
2024-02-26T21:49:44Z
|
Towards Generalizing Inferences from Trials to Target Populations
|
Randomized Controlled Trials (RCTs) are pivotal in generating internally valid estimates with minimal assumptions, serving as a cornerstone for researchers dedicated to advancing causal inference methods. However, extending these findings beyond the experimental cohort to achieve externally valid estimates is crucial for broader scientific inquiry. This paper delves into the forefront of addressing these external validity challenges, encapsulating the essence of a multidisciplinary workshop held at the Institute for Computational and Experimental Research in Mathematics (ICERM), Brown University, in Fall 2023. The workshop congregated experts from diverse fields including social science, medicine, public health, statistics, computer science, and education, to tackle the unique obstacles each discipline faces in extrapolating experimental findings. Our study presents three key contributions: we integrate ongoing efforts, highlighting methodological synergies across fields; provide an exhaustive review of generalizability and transportability based on the workshop's discourse; and identify persistent hurdles while suggesting avenues for future research. By doing so, this paper aims to enhance the collective understanding of the generalizability and transportability of causal effects, fostering cross-disciplinary collaboration and offering valuable insights for researchers working on refining and applying causal inference methods.
|
[
"['Melody Y Huang' 'Harsh Parikh']"
] |
null | null |
2402.17045
| null | null |
http://arxiv.org/pdf/2402.17045v2
|
2024-05-10T06:14:50Z
|
2024-02-26T22:04:25Z
|
An Investigation into the Performances of the State-of-the-art Machine
Learning Approaches for Various Cyber-attack Detection: A Survey
|
In this research, we analyzed the suitability of current state-of-the-art machine learning models for the detection of various cyberattacks, drawing on work from the past 5 years with a major emphasis on the most recent studies, in a comparative study that identifies the knowledge gaps where further work is still needed on the detection of each category of cyberattack. We also reviewed the suitability, efficiency, and limitations of recent research on state-of-the-art classifiers and novel frameworks for the detection of different cyberattacks. Our results show the need for further research and exploration of machine learning approaches for the detection of drive-by download attacks, and for an investigation into the mixed performance of Naive Bayes to identify possible research directions for improving existing state-of-the-art Naive Bayes classifiers. We also identify that current machine learning approaches to the detection of SQLi attacks cannot detect a database already compromised by an SQLi attack, signifying another possible direction for future research.
|
[
"['Tosin Ige' 'Christopher Kiekintveld' 'Aritran Piplai']"
] |
null | null |
2402.17061
| null | null |
http://arxiv.org/pdf/2402.17061v1
|
2024-02-26T22:47:03Z
|
2024-02-26T22:47:03Z
|
A Multi-Fidelity Methodology for Reduced Order Models with
High-Dimensional Inputs
|
In the early stages of aerospace design, reduced order models (ROMs) are crucial for minimizing computational costs associated with using physics-rich field information in many-query scenarios requiring multiple evaluations. The intricacy of aerospace design demands the use of high-dimensional design spaces to capture detailed features and design variability accurately. However, these spaces introduce significant challenges, including the curse of dimensionality, which stems from both high-dimensional inputs and outputs necessitating substantial training data and computational effort. To address these complexities, this study introduces a novel multi-fidelity, parametric, and non-intrusive ROM framework designed for high-dimensional contexts. It integrates machine learning techniques for manifold alignment and dimension reduction employing Proper Orthogonal Decomposition (POD) and Model-based Active Subspace with multi-fidelity regression for ROM construction. Our approach is validated through two test cases: the 2D RAE 2822 airfoil and the 3D NASA CRM wing, assessing combinations of various fidelity levels, training data ratios, and sample sizes. Compared to the single-fidelity PCAS method, our multi-fidelity solution offers improved cost-accuracy benefits and achieves better predictive accuracy with reduced computational demands. Moreover, our methodology outperforms the manifold-aligned ROM (MA-ROM) method by 50% in handling scenarios with large input dimensions, underscoring its efficacy in addressing the complex challenges of aerospace design.
|
[
"['Bilal Mufti' 'Christian Perron' 'Dimitri N. Mavris']"
] |
null | null |
2402.17065
| null | null |
http://arxiv.org/pdf/2402.17065v2
|
2024-06-16T22:11:56Z
|
2024-02-26T23:03:00Z
|
Taming the Tail in Class-Conditional GANs: Knowledge Sharing via
Unconditional Training at Lower Resolutions
|
Despite extensive research on training generative adversarial networks (GANs) with limited training data, learning to generate images from long-tailed training distributions remains fairly unexplored. In the presence of imbalanced multi-class training data, GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes. In this study, we aim to improve the training of class-conditional GANs with long-tailed data. We propose a straightforward yet effective method for knowledge sharing, allowing tail classes to borrow from the rich information from classes with more abundant training data. More concretely, we propose modifications to existing class-conditional GAN architectures to ensure that the lower-resolution layers of the generator are trained entirely unconditionally while reserving class-conditional generation for the higher-resolution layers. Experiments on several long-tail benchmarks and GAN architectures demonstrate a significant improvement over existing methods in both the diversity and fidelity of the generated images. The code is available at https://github.com/khorrams/utlo.
|
[
"['Saeed Khorram' 'Mingqi Jiang' 'Mohamad Shahbazi' 'Mohamad H. Danesh'\n 'Li Fuxin']"
] |
null | null |
2402.17073
| null | null |
http://arxiv.org/pdf/2402.17073v1
|
2024-02-26T23:15:01Z
|
2024-02-26T23:15:01Z
|
One-Shot Graph Representation Learning Using Hyperdimensional Computing
|
We present a novel, simple, fast, and efficient approach for semi-supervised learning on graphs. The proposed approach takes advantage of hyper-dimensional computing which encodes data samples using random projections into a high dimensional space (HD space for short). Specifically, we propose a Hyper-dimensional Graph Learning (HDGL) algorithm that leverages the injectivity property of the node representations of a family of graph neural networks. HDGL maps node features to the HD space and then uses HD operators such as bundling and binding to aggregate information from the local neighborhood of each node. Results of experiments with widely used benchmark data sets show that HDGL achieves predictive performance that is competitive with the state-of-the-art deep learning methods, without the need for computationally expensive training.
|
[
"['Abhishek Dalvi' 'Vasant Honavar']"
] |
null | null |
2402.17077
| null | null |
http://arxiv.org/pdf/2402.17077v1
|
2024-02-26T23:16:34Z
|
2024-02-26T23:16:34Z
|
Parallelized Spatiotemporal Binding
|
While modern best practices advocate for scalable architectures that support long-range interactions, object-centric models are yet to fully embrace these architectures. In particular, existing object-centric models for handling sequential inputs, due to their reliance on RNN-based implementation, show poor stability and capacity and are slow to train on long sequences. We introduce Parallelizable Spatiotemporal Binder or PSB, the first temporally-parallelizable slot learning architecture for sequential inputs. Unlike conventional RNN-based approaches, PSB produces object-centric representations, known as slots, for all time-steps in parallel. This is achieved by refining the initial slots across all time-steps through a fixed number of layers equipped with causal attention. By capitalizing on the parallelism induced by our architecture, the proposed model exhibits a significant boost in efficiency. In experiments, we test PSB extensively as an encoder within an auto-encoding framework paired with a wide variety of decoder options. Compared to the state-of-the-art, our architecture demonstrates stable training on longer sequences, achieves parallelization that results in a 60% increase in training speed, and yields performance that is on par or better on unsupervised 2D and 3D object-centric scene decomposition and understanding.
|
[
"['Gautam Singh' 'Yue Wang' 'Jiawei Yang' 'Boris Ivanovic' 'Sungjin Ahn'\n 'Marco Pavone' 'Tong Che']"
] |
null | null |
2402.17087
| null | null |
http://arxiv.org/pdf/2402.17087v1
|
2024-02-26T23:53:34Z
|
2024-02-26T23:53:34Z
|
A Note on Bayesian Networks with Latent Root Variables
|
We characterise the likelihood function computed from a Bayesian network with latent variables as root nodes. We show that the marginal distribution over the remaining, manifest, variables also factorises as a Bayesian network, which we call empirical. A dataset of observations of the manifest variables allows us to quantify the parameters of the empirical Bayesian net. We prove that (i) the likelihood of such a dataset from the original Bayesian network is dominated by the global maximum of the likelihood from the empirical one; and that (ii) such a maximum is attained if and only if the parameters of the Bayesian network are consistent with those of the empirical model.
|
[
"['Marco Zaffalon' 'Alessandro Antonucci']"
] |
null | null |
2402.17089
| null | null |
http://arxiv.org/pdf/2402.17089v1
|
2024-02-26T23:56:11Z
|
2024-02-26T23:56:11Z
|
Learning high-dimensional targets by two-parameter models and gradient
flow
|
We explore the theoretical possibility of learning $d$-dimensional targets with $W$-parameter models by gradient flow (GF) when $W<d$. Our main result shows that if the targets are described by a particular $d$-dimensional probability distribution, then there exist models with as few as two parameters that can learn the targets with arbitrarily high success probability. On the other hand, we show that for $W<d$ there is necessarily a large subset of GF-non-learnable targets. In particular, the set of learnable targets is not dense in $\mathbb{R}^d$, and any subset of $\mathbb{R}^d$ homeomorphic to the $W$-dimensional sphere contains non-learnable targets. Finally, we observe that the model in our main theorem on almost guaranteed two-parameter learning is constructed using a hierarchical procedure and as a result is not expressible by a single elementary function. We show that this limitation is essential in the sense that such learnability can be ruled out for a large class of elementary functions.
|
[
"['Dmitry Yarotsky']"
] |
null | null |
2402.17104
| null | null |
http://arxiv.org/pdf/2402.17104v1
|
2024-02-27T00:41:00Z
|
2024-02-27T00:41:00Z
|
Adversarial Perturbations of Physical Signals
|
We investigate the vulnerability of computer-vision-based signal classifiers to adversarial perturbations of their inputs, where the signals and perturbations are subject to physical constraints. We consider a scenario in which a source and interferer emit signals that propagate as waves to a detector, which attempts to classify the source by analyzing the spectrogram of the signal it receives using a pre-trained neural network. By solving PDE-constrained optimization problems, we construct interfering signals that cause the detector to misclassify the source even though the perturbations to the spectrogram of the received signal are nearly imperceptible. Though such problems can have millions of decision variables, we introduce methods to solve them efficiently. Our experiments demonstrate that one can compute effective and physically realizable adversarial perturbations for a variety of machine learning models under various physical conditions.
|
[
"['Robert L. Bassett' 'Austin Van Dellen' 'Anthony P. Austin']"
] |
null | null |
2402.17106
| null | null |
http://arxiv.org/pdf/2402.17106v3
|
2024-05-30T11:44:40Z
|
2024-02-27T00:59:32Z
|
Achievable Fairness on Your Data With Utility Guarantees
|
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy, a phenomenon known as the fairness-accuracy trade-off. The severity of this trade-off inherently depends on dataset characteristics such as dataset imbalances or biases, and therefore using a uniform fairness requirement across diverse datasets remains questionable. To address this, we present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets, backed by rigorous statistical guarantees. By utilizing the You-Only-Train-Once (YOTO) framework, our approach mitigates the computational burden of having to train multiple models when approximating the trade-off curve. Crucially, we introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness while avoiding false conclusions due to estimation errors. Our experiments spanning tabular (e.g., Adult), image (CelebA), and language (Jigsaw) datasets underscore that our approach not only reliably quantifies the optimum achievable trade-offs across various data modalities but also helps detect suboptimality in SOTA fairness methods.
|
[
"['Muhammad Faaiz Taufiq' 'Jean-Francois Ton' 'Yang Liu']"
] |
null | null |
2402.17108
| null | null |
http://arxiv.org/pdf/2402.17108v1
|
2024-02-27T01:01:59Z
|
2024-02-27T01:01:59Z
|
Repeated Contracting with Multiple Non-Myopic Agents: Policy Regret and
Limited Liability
|
We study a repeated contracting setting in which a Principal adaptively chooses amongst $k$ Agents at each of $T$ rounds. The Agents are non-myopic, and so a mechanism for the Principal induces a $T$-round extensive form game amongst the Agents. We give several results aimed at understanding an under-explored aspect of contract theory -- the game induced when choosing an Agent to contract with. First, we show that this game admits a pure-strategy \emph{non-responsive} equilibrium amongst the Agents -- informally an equilibrium in which the Agent's actions depend on the history of realized states of nature, but not on the history of each other's actions, and so avoids the complexities of collusion and threats. Next, we show that if the Principal selects Agents using a \emph{monotone} bandit algorithm, then for any concave contract, in any such equilibrium, the Principal obtains no regret to contracting with the best Agent in hindsight -- not just given their realized actions, but also to the counterfactual world in which they had offered a guaranteed $T$-round contract to the best Agent in hindsight, which would have induced a different sequence of actions. Finally, we show that if the Principal selects Agents using a monotone bandit algorithm which guarantees no swap-regret, then the Principal can additionally offer only limited liability contracts (in which the Agent never needs to pay the Principal) while getting no-regret to the counterfactual world in which she offered a linear contract to the best Agent in hindsight -- despite the fact that linear contracts are not limited liability. We instantiate this theorem by demonstrating the existence of a monotone no swap-regret bandit algorithm, which to our knowledge has not previously appeared in the literature.
|
[
"['Natalie Collina' 'Varun Gupta' 'Aaron Roth']"
] |
null | null |
2402.17110
| null | null |
http://arxiv.org/pdf/2402.17110v1
|
2024-02-27T01:13:58Z
|
2024-02-27T01:13:58Z
|
Sinkhorn Distance Minimization for Knowledge Distillation
|
Knowledge distillation (KD) has been widely adopted to compress large language models (LLMs). Existing KD methods investigate various divergence measures including the Kullback-Leibler (KL), reverse Kullback-Leibler (RKL), and Jensen-Shannon (JS) divergences. However, due to limitations inherent in their assumptions and definitions, these measures fail to deliver effective supervision when there is little distribution overlap between the teacher and the student. In this paper, we show that the aforementioned KL, RKL, and JS divergences respectively suffer from issues of mode-averaging, mode-collapsing, and mode-underestimation, which deteriorate logits-based KD for diverse NLP tasks. We propose Sinkhorn Knowledge Distillation (SinKD), which exploits the Sinkhorn distance to ensure a nuanced and precise assessment of the disparity between teacher and student distributions. Besides, benefiting from the properties of the Sinkhorn metric, we can dispense with sample-wise KD, which restricts the perception of divergence to each teacher-student sample pair. Instead, we propose a batch-wise reformulation to capture geometric intricacies of distributions across samples in the high-dimensional space. Comprehensive evaluation on GLUE and SuperGLUE, in terms of comparability, validity, and generalizability, highlights our superiority over state-of-the-art methods on all kinds of LLMs with encoder-only, encoder-decoder, and decoder-only architectures.
|
[
"['Xiao Cui' 'Yulei Qin' 'Yuting Gao' 'Enwei Zhang' 'Zihan Xu' 'Tong Wu'\n 'Ke Li' 'Xing Sun' 'Wengang Zhou' 'Houqiang Li']"
] |
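SinKD's batch-wise formulation is the paper's own contribution and is not reproduced here. As a rough, hedged illustration of the quantity it builds on, the following minimal pure-Python sketch computes the entropy-regularized Sinkhorn distance between two discrete distributions; the function name, toy cost matrix, and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import math

def sinkhorn_distance(p, q, cost, eps=0.1, n_iters=200):
    """Entropy-regularized optimal-transport cost between discrete
    distributions p and q under a ground cost matrix.

    p, q: probability vectors (lists summing to 1).
    cost: cost[i][j] is the cost of moving mass from bin i to bin j.
    Returns <P, C>, the transport cost of the Sinkhorn plan P.
    """
    n, m = len(p), len(q)
    # Gibbs kernel of the cost matrix.
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    # Alternating scaling iterations enforce the two marginals.
    for _ in range(n_iters):
        u = [p[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [q[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    plan = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    return sum(plan[i][j] * cost[i][j] for i in range(n) for j in range(m))
```

For identical teacher and student distributions the distance is near zero, while distributions that place their mass on distant bins incur a large transport cost, which is the property that makes it a useful KD supervision signal.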
null | null |
2402.17120
| null | null |
http://arxiv.org/pdf/2402.17120v2
|
2024-06-06T00:18:40Z
|
2024-02-27T01:26:48Z
|
LCEN: A Novel Feature Selection Algorithm for Nonlinear, Interpretable
Machine Learning Models
|
Interpretable architectures can have advantages over black-box architectures, and interpretability is essential for the application of machine learning in critical settings, such as aviation or medicine. However, the simplest, most commonly used interpretable architectures, such as LASSO or elastic net (EN), are limited to linear predictions and have poor feature selection capabilities. In this work, we introduce the LASSO-Clip-EN (LCEN) algorithm for the creation of nonlinear, interpretable machine learning models. LCEN is tested on a wide variety of artificial and empirical datasets, frequently creating more accurate, sparser models than other architectures, including those for building sparse, nonlinear models. LCEN is robust against many issues typically present in datasets and modeling, including noise, multicollinearity, data scarcity, and hyperparameter variance. LCEN is also able to rediscover multiple physical laws from empirical data and, for processes with no known physical laws, LCEN achieves better results than many other dense and sparse methods -- including using 10.8-fold fewer features than dense methods and 8.1-fold fewer features than EN on one dataset, and is comparable to or better than ANNs on multiple datasets.
|
[
"['Pedro Seber' 'Richard D. Braatz']"
] |
null | null |
2402.17128
| null | null |
http://arxiv.org/pdf/2402.17128v4
|
2024-04-02T23:14:42Z
|
2024-02-27T01:48:19Z
|
OSCaR: Object State Captioning and State Change Representation
|
The capability of intelligent models to extrapolate and comprehend changes in object states is a crucial yet demanding aspect of AI research, particularly through the lens of human interaction in real-world settings. This task involves describing complex visual environments, identifying active objects, and interpreting their changes as conveyed through language. Traditional methods, which isolate object captioning and state change detection, offer a limited view of dynamic environments. Moreover, relying on a small set of symbolic words to represent changes has restricted the expressiveness of the language. To address these challenges, in this paper, we introduce the Object State Captioning and State Change Representation (OSCaR) dataset and benchmark. OSCaR consists of 14,084 annotated video segments with nearly 1,000 unique objects from various egocentric video collections. It sets a new testbed for evaluating multimodal large language models (MLLMs). Our experiments demonstrate that while MLLMs show some skill, they lack a full understanding of object state changes. The benchmark includes a fine-tuned model that, despite initial capabilities, requires significant improvements in accuracy and generalization ability for effective understanding of these changes. Our code and dataset are available at https://github.com/nguyennm1024/OSCaR.
|
[
"['Nguyen Nguyen' 'Jing Bi' 'Ali Vosoughi' 'Yapeng Tian' 'Pooyan Fazli'\n 'Chenliang Xu']"
] |
null | null |
2402.17131
| null | null |
http://arxiv.org/pdf/2402.17131v1
|
2024-02-27T01:53:02Z
|
2024-02-27T01:53:02Z
|
Predicting O-GlcNAcylation Sites in Mammalian Proteins with Transformers
and RNNs Trained with a New Loss Function
|
Glycosylation, a protein modification, has multiple essential functional and structural roles. O-GlcNAcylation, a subtype of glycosylation, has the potential to be an important target for therapeutics, but methods to reliably predict O-GlcNAcylation sites had not been available until 2023; a 2021 review correctly noted that published models were insufficient and failed to generalize. Moreover, many are no longer usable. In 2023, a considerably better RNN model with an F$_1$ score of 36.17% and an MCC of 34.57% on a large dataset was published. This article first sought to improve these metrics using transformer encoders. While transformers displayed high performance on this dataset, their performance was inferior to that of the previously published RNN. We then created a new loss function, which we call the weighted focal differentiable MCC, to improve the performance of classification models. RNN models trained with this new function display superior performance to models trained using the weighted cross-entropy loss; this new function can also be used to fine-tune trained models. A two-cell RNN trained with this loss achieves state-of-the-art performance in O-GlcNAcylation site prediction with an F$_1$ score of 38.82% and an MCC of 38.21% on that large dataset.
|
[
"['Pedro Seber']"
] |
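The weighted focal differentiable MCC is this paper's contribution and its exact form is not given in the abstract. As a hedged sketch of the underlying idea only, MCC can be made differentiable by accumulating soft confusion counts from predicted probabilities instead of hard thresholded labels; all names below are illustrative:

```python
def soft_mcc(y_true, y_prob):
    """Differentiable (soft) Matthews correlation coefficient for binary
    classification.

    y_true: binary labels (0 or 1); y_prob: predicted P(class = 1).
    Confusion counts are accumulated from probabilities, so the result
    is differentiable in y_prob and can drive a gradient-based loss.
    """
    tp = sum(p * t for p, t in zip(y_prob, y_true))
    fp = sum(p * (1 - t) for p, t in zip(y_prob, y_true))
    fn = sum((1 - p) * t for p, t in zip(y_prob, y_true))
    tn = sum((1 - p) * (1 - t) for p, t in zip(y_prob, y_true))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / (denom + 1e-12)  # epsilon avoids 0/0

# A training loss would then be 1 - soft_mcc(y_true, y_prob);
# the paper's weighted focal variant adds reweighting on top of this idea.
```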
null | null |
2402.17135
| null | null |
http://arxiv.org/pdf/2402.17135v1
|
2024-02-27T01:59:02Z
|
2024-02-27T01:59:02Z
|
Unsupervised Zero-Shot Reinforcement Learning via Functional Reward
Encodings
|
Can we pre-train a generalist agent from a large amount of unlabeled offline trajectories such that it can be immediately adapted to any new downstream tasks in a zero-shot manner? In this work, we present a functional reward encoding (FRE) as a general, scalable solution to this zero-shot RL problem. Our main idea is to learn functional representations of any arbitrary tasks by encoding their state-reward samples using a transformer-based variational auto-encoder. This functional encoding not only enables the pre-training of an agent from a wide diversity of general unsupervised reward functions, but also provides a way to solve any new downstream tasks in a zero-shot manner, given a small number of reward-annotated samples. We empirically show that FRE agents trained on diverse random unsupervised reward functions can generalize to solve novel tasks in a range of simulated robotic benchmarks, often outperforming previous zero-shot RL and offline RL methods. Code for this project is provided at: https://github.com/kvfrans/fre
|
[
"['Kevin Frans' 'Seohong Park' 'Pieter Abbeel' 'Sergey Levine']"
] |
null | null |
2402.17143
| null | null |
http://arxiv.org/pdf/2402.17143v1
|
2024-02-27T02:13:32Z
|
2024-02-27T02:13:32Z
|
Energy-Efficient Scheduling with Predictions
|
An important goal of modern scheduling systems is to efficiently manage power usage. In energy-efficient scheduling, the operating system controls the speed at which a machine is processing jobs with the dual objective of minimizing energy consumption and optimizing the quality of service cost of the resulting schedule. Since machine-learned predictions about future requests can often be learned from historical data, a recent line of work on learning-augmented algorithms aims to achieve improved performance guarantees by leveraging predictions. In particular, for energy-efficient scheduling, Bamas et al. [BamasMRS20] and Antoniadis et al. [antoniadis2021novel] designed algorithms with predictions for the energy minimization with deadlines problem and achieved an improved competitive ratio when the prediction error is small while also maintaining worst-case bounds even when the prediction error is arbitrarily large. In this paper, we consider a general setting for energy-efficient scheduling and provide a flexible learning-augmented algorithmic framework that takes as input an offline and an online algorithm for the desired energy-efficient scheduling problem. We show that, when the prediction error is small, this framework gives improved competitive ratios for many different energy-efficient scheduling problems, including energy minimization with deadlines, while also maintaining a bounded competitive ratio regardless of the prediction error. Finally, we empirically demonstrate that this framework achieves an improved performance on real and synthetic datasets.
|
[
"['Eric Balkanski' 'Noemie Perivier' 'Clifford Stein' 'Hao-Ting Wei']"
] |
null | null |
2402.17148
| null | null |
http://arxiv.org/pdf/2402.17148v1
|
2024-02-27T02:29:24Z
|
2024-02-27T02:29:24Z
|
Time series generation for option pricing on quantum computers using
tensor network
|
Finance, especially option pricing, is a promising industrial field that might benefit from quantum computing. While quantum algorithms for option pricing have been proposed, it is desired to devise more efficient implementations of costly operations in the algorithms, one of which is preparing a quantum state that encodes a probability distribution of the underlying asset price. In particular, in pricing a path-dependent option, we need to generate a state encoding a joint distribution of the underlying asset price at multiple time points, which is more demanding. To address these issues, we propose a novel approach using Matrix Product State (MPS) as a generative model for time series generation. To validate our approach, taking the Heston model as a target, we conduct numerical experiments to generate time series in the model. Our findings demonstrate the capability of the MPS model to generate paths in the Heston model, highlighting its potential for path-dependent option pricing on quantum computers.
|
[
"['Nozomu Kobayashi' 'Yoshiyuki Suimon' 'Koichi Miyamoto']"
] |
null | null |
2402.17152
| null | null |
http://arxiv.org/pdf/2402.17152v3
|
2024-05-06T02:05:45Z
|
2024-02-27T02:37:37Z
|
Actions Speak Louder than Words: Trillion-Parameter Sequential
Transducers for Generative Recommendations
|
Large-scale recommendation systems are characterized by their reliance on high cardinality, heterogeneous features and the need to handle tens of billions of user actions on a daily basis. Despite being trained on huge volumes of data with thousands of features, most Deep Learning Recommendation Models (DLRMs) in industry fail to scale with compute. Inspired by the success achieved by Transformers in language and vision domains, we revisit fundamental design choices in recommendation systems. We reformulate recommendation problems as sequential transduction tasks within a generative modeling framework ("Generative Recommenders"), and propose a new architecture, HSTU, designed for high cardinality, non-stationary streaming recommendation data. HSTU outperforms baselines over synthetic and public datasets by up to 65.8% in NDCG, and is 5.3x to 15.2x faster than FlashAttention2-based Transformers on 8192 length sequences. HSTU-based Generative Recommenders, with 1.5 trillion parameters, improve metrics in online A/B tests by 12.4% and have been deployed on multiple surfaces of a large internet platform with billions of users. More importantly, the model quality of Generative Recommenders empirically scales as a power-law of training compute across three orders of magnitude, up to GPT-3/LLaMa-2 scale, which reduces the carbon footprint needed for future model developments, and further paves the way for the first foundational models in recommendations.
|
[
"['Jiaqi Zhai' 'Lucy Liao' 'Xing Liu' 'Yueming Wang' 'Rui Li' 'Xuan Cao'\n 'Leon Gao' 'Zhaojie Gong' 'Fangda Gu' 'Michael He' 'Yinghai Lu' 'Yu Shi']"
] |
null | null |
2402.17156
| null | null |
http://arxiv.org/pdf/2402.17156v1
|
2024-02-27T02:41:46Z
|
2024-02-27T02:41:46Z
|
TaxDiff: Taxonomic-Guided Diffusion Model for Protein Sequence
Generation
|
Designing protein sequences with specific biological functions and structural stability is crucial in biology and chemistry. Generative models have already demonstrated their capabilities for reliable protein design. However, previous models are limited to the unconditional generation of protein sequences and lack the controllable generation ability that is vital to biological tasks. In this work, we propose TaxDiff, a taxonomic-guided diffusion model for controllable protein sequence generation that combines biological species information with the generative capabilities of diffusion models to generate structurally stable proteins within the sequence space. Specifically, taxonomic control information is inserted into each layer of the transformer block to achieve fine-grained control. The combination of global and local attention ensures the sequence consistency and structural foldability of taxonomic-specific proteins. Extensive experiments demonstrate that TaxDiff can consistently achieve better performance on multiple protein sequence generation benchmarks in both taxonomic-guided controllable generation and unconditional generation. Remarkably, the sequences generated by TaxDiff even surpass those produced by direct-structure-generation models in terms of confidence based on predicted structures, and require only a quarter of the time of other diffusion-based models. The code for generating proteins and training new versions of TaxDiff is available at: https://github.com/Linzy19/TaxDiff.
|
[
"['Lin Zongying' 'Li Hao' 'Lv Liuzhenghao' 'Lin Bin' 'Zhang Junwu'\n 'Chen Calvin Yu-Chian' 'Yuan Li' 'Tian Yonghong']"
] |
null | null |
2402.17157
| null | null |
http://arxiv.org/pdf/2402.17157v1
|
2024-02-27T02:44:40Z
|
2024-02-27T02:44:40Z
|
Generative Learning for Forecasting the Dynamics of Complex Systems
|
We introduce generative models for accelerating simulations of complex systems through learning and evolving their effective dynamics. In the proposed Generative Learning of Effective Dynamics (G-LED), instances of high dimensional data are down sampled to a lower dimensional manifold that is evolved through an auto-regressive attention mechanism. In turn, Bayesian diffusion models, that map this low-dimensional manifold onto its corresponding high-dimensional space, capture the statistics of the system dynamics. We demonstrate the capabilities and drawbacks of G-LED in simulations of several benchmark systems, including the Kuramoto-Sivashinsky (KS) equation, two-dimensional high Reynolds number flow over a backward-facing step, and simulations of three-dimensional turbulent channel flow. The results demonstrate that generative learning offers new frontiers for the accurate forecasting of the statistical properties of complex systems at a reduced computational cost.
|
[
"['Han Gao' 'Sebastian Kaltenbach' 'Petros Koumoutsakos']"
] |
null | null |
2402.17176
| null | null |
http://arxiv.org/pdf/2402.17176v1
|
2024-02-27T03:24:54Z
|
2024-02-27T03:24:54Z
|
DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection
|
Model-X knockoff, among various feature selection methods, received much attention recently due to its guarantee on false discovery rate (FDR) control. Subsequent to its introduction in the parametric design, the knockoff framework has been advanced to handle arbitrary data distributions using deep learning-based generative modeling. However, we observed that current implementations of the deep Model-X knockoff framework exhibit limitations. Notably, the "swap property" that knockoffs require is frequently violated at the sample level, leading to diminished selection power. To overcome this, we develop "Deep Dependency Regularized Knockoff (DeepDRK)", a distribution-free deep learning method that strikes a balance between FDR and power. In DeepDRK, a generative model grounded in a transformer architecture is introduced to better achieve the "swap property". Novel efficient regularization techniques are also proposed to reach higher power. Our model outperforms other benchmarks on synthetic, semi-synthetic, and real-world data, especially when the sample size is small and the data distribution is complex.
|
[
"['Hongyu Shen' 'Yici Yan' 'Zhizhen Zhao']"
] |
null | null |
2402.17177
| null | null |
http://arxiv.org/pdf/2402.17177v3
|
2024-04-17T18:41:39Z
|
2024-02-27T03:30:58Z
|
Sora: A Review on Background, Technology, Limitations, and Opportunities
of Large Vision Models
|
Sora is a text-to-video generative AI model, released by OpenAI in February 2024. The model is trained to generate videos of realistic or imaginative scenes from text instructions and shows potential in simulating the physical world. Based on public technical reports and reverse engineering, this paper presents a comprehensive review of the model's background, related technologies, applications, remaining challenges, and future directions of text-to-video AI models. We first trace Sora's development and investigate the underlying technologies used to build this "world simulator". Then, we describe in detail the applications and potential impact of Sora in multiple industries ranging from film-making and education to marketing. We discuss the main challenges and limitations that need to be addressed to widely deploy Sora, such as ensuring safe and unbiased video generation. Lastly, we discuss the future development of Sora and video generation models in general, and how advancements in the field could enable new ways of human-AI interaction, boosting the productivity and creativity of video generation.
|
[
"['Yixin Liu' 'Kai Zhang' 'Yuan Li' 'Zhiling Yan' 'Chujie Gao' 'Ruoxi Chen'\n 'Zhengqing Yuan' 'Yue Huang' 'Hanchi Sun' 'Jianfeng Gao' 'Lifang He'\n 'Lichao Sun']"
] |
null | null |
2402.17179
| null | null |
http://arxiv.org/pdf/2402.17179v1
|
2024-02-27T03:33:23Z
|
2024-02-27T03:33:23Z
|
Dual-Space Optimization: Improved Molecule Sequence Design by Latent
Prompt Transformer
|
Designing molecules with desirable properties, such as drug-likeness and high binding affinities towards protein targets, is a challenging problem. In this paper, we propose the Dual-Space Optimization (DSO) method that integrates latent space sampling and data space selection to solve this problem. DSO iteratively updates a latent space generative model and a synthetic dataset in an optimization process that gradually shifts the generative model and the synthetic data towards regions of desired property values. Our generative model takes the form of a Latent Prompt Transformer (LPT) where the latent vector serves as the prompt of a causal transformer. Our extensive experiments demonstrate the effectiveness of the proposed method, which sets new performance benchmarks across single-objective, multi-objective, and constrained molecule design tasks.
|
[
"['Deqian Kong' 'Yuhao Huang' 'Jianwen Xie' 'Edouardo Honig' 'Ming Xu'\n 'Shuanghong Xue' 'Pei Lin' 'Sanping Zhou' 'Sheng Zhong' 'Nanning Zheng'\n 'Ying Nian Wu']"
] |
null | null |
2402.17185
| null | null |
http://arxiv.org/pdf/2402.17185v1
|
2024-02-27T03:44:55Z
|
2024-02-27T03:44:55Z
|
Inpainting Computational Fluid Dynamics with Deep Learning
|
Fluid data completion is a research problem with high potential benefit for both experimental and computational fluid dynamics. An effective fluid data completion method reduces the required number of sensors in a fluid dynamics experiment, and allows a coarser and more adaptive mesh for a Computational Fluid Dynamics (CFD) simulation. However, the ill-posed nature of the fluid data completion problem makes it prohibitively difficult to obtain a theoretical solution and presents high numerical uncertainty and instability for a data-driven approach (e.g., a neural network model). To address these challenges, we leverage recent advancements in computer vision, employing the vector quantization technique to map both complete and incomplete fluid data spaces onto discrete-valued lower-dimensional representations via a two-stage learning procedure. We demonstrated the effectiveness of our approach on Kolmogorov flow data (Reynolds number: 1000) occluded by masks of different size and arrangement. Experimental results show that our proposed model consistently outperforms benchmark models under different occlusion settings in terms of point-wise reconstruction accuracy as well as turbulent energy spectrum and vorticity distribution.
|
[
"['Dule Shu' 'Wilson Zhen' 'Zijie Li' 'Amir Barati Farimani']"
] |
null | null |
2402.17191
| null | null |
http://arxiv.org/pdf/2402.17191v1
|
2024-02-27T04:12:25Z
|
2024-02-27T04:12:25Z
|
AI-Driven Anonymization: Protecting Personal Data Privacy While
Leveraging Machine Learning
|
The development of artificial intelligence has significantly transformed people's lives. However, it has also posed a significant threat to privacy and security, with numerous instances of personal information being exposed online and reports of criminal attacks and theft. Consequently, the need to achieve intelligent protection of personal information through machine learning algorithms has become a paramount concern. Artificial intelligence leverages advanced algorithms and technologies to effectively encrypt and anonymize personal data, enabling valuable data analysis and utilization while safeguarding privacy. This paper focuses on personal data privacy protection and the promotion of anonymity as its core research objectives. It achieves personal data privacy protection and detection through the use of machine learning's differential privacy protection algorithm. The paper also addresses existing challenges in machine learning related to privacy and personal data protection, offers improvement suggestions, and analyzes factors impacting datasets to enable timely personal data privacy detection and protection.
|
[
"['Le Yang' 'Miao Tian' 'Duan Xin' 'Qishuo Cheng' 'Jiajian Zheng']"
] |
null | null |
2402.17193
| null | null |
http://arxiv.org/pdf/2402.17193v1
|
2024-02-27T04:18:49Z
|
2024-02-27T04:18:49Z
|
When Scaling Meets LLM Finetuning: The Effect of Data, Model and
Finetuning Method
|
While large language models (LLMs) often adopt finetuning to unlock their capabilities for downstream applications, our understanding of the inductive biases (especially the scaling properties) of different finetuning methods is still limited. To fill this gap, we conduct systematic experiments studying whether and how different scaling factors, including LLM model size, pretraining data size, new finetuning parameter size and finetuning data size, affect the finetuning performance. We consider two types of finetuning -- full-model tuning (FMT) and parameter efficient tuning (PET, including prompt tuning and LoRA), and explore their scaling behaviors in the data-limited regime where the LLM model size substantially outweighs the finetuning data size. Based on two sets of pretrained bilingual LLMs from 1B to 16B and experiments on bilingual machine translation and multilingual summarization benchmarks, we find that 1) LLM finetuning follows a power-based multiplicative joint scaling law between finetuning data size and each other scaling factor; 2) LLM finetuning benefits more from LLM model scaling than pretraining data scaling, and PET parameter scaling is generally ineffective; and 3) the optimal finetuning method is highly task- and finetuning data-dependent. We hope our findings could shed light on understanding, selecting and developing LLM finetuning methods.
|
[
"['Biao Zhang' 'Zhongtao Liu' 'Colin Cherry' 'Orhan Firat']"
] |
null | null |
2402.17196
| null | null |
http://arxiv.org/pdf/2402.17196v1
|
2024-02-27T04:23:35Z
|
2024-02-27T04:23:35Z
|
Prediction of the SYM-H Index Using a Bayesian Deep Learning Method with
Uncertainty Quantification
|
We propose a novel deep learning framework, named SYMHnet, which employs a graph neural network and a bidirectional long short-term memory network to cooperatively learn patterns from solar wind and interplanetary magnetic field parameters for short-term forecasts of the SYM-H index based on 1-minute and 5-minute resolution data. SYMHnet takes, as input, the time series of the parameters' values provided by NASA's Space Science Data Coordinated Archive and predicts, as output, the SYM-H index value at time point t + w hours for a given time point t where w is 1 or 2. By incorporating Bayesian inference into the learning framework, SYMHnet can quantify both aleatoric (data) uncertainty and epistemic (model) uncertainty when predicting future SYM-H indices. Experimental results show that SYMHnet works well at quiet time and storm time, for both 1-minute and 5-minute resolution data. The results also show that SYMHnet generally performs better than related machine learning methods. For example, SYMHnet achieves a forecast skill score (FSS) of 0.343 compared to the FSS of 0.074 of a recent gradient boosting machine (GBM) method when predicting SYM-H indices (1 hour in advance) in a large storm (SYM-H = -393 nT) using 5-minute resolution data. When predicting the SYM-H indices (2 hours in advance) in the large storm, SYMHnet achieves an FSS of 0.553 compared to the FSS of 0.087 of the GBM method. In addition, SYMHnet can provide results for both data and model uncertainty quantification, whereas the related methods cannot.
|
[
"['Yasser Abduallah' 'Khalid A. Alobaid' 'Jason T. L. Wang' 'Haimin Wang'\n 'Vania K. Jordanova' 'Vasyl Yurchyshyn' 'Huseyin Cavus' 'Ju Jing']"
] |
null | null |
2402.17202
| null | null |
http://arxiv.org/pdf/2402.17202v1
|
2024-02-27T04:50:13Z
|
2024-02-27T04:50:13Z
|
FedBRB: An Effective Solution to the Small-to-Large Scenario in
Device-Heterogeneity Federated Learning
|
Recently, the success of large models has demonstrated the importance of scaling up model size. This has spurred interest in exploring collaborative training of large-scale models from a federated learning perspective. Due to computational constraints, many institutions struggle to train a large-scale model locally. Thus, training a larger global model using only smaller local models has become an important scenario (i.e., the \textbf{small-to-large scenario}). Although recent device-heterogeneity federated learning approaches have started to explore this area, they face limitations in fully covering the parameter space of the global model. In this paper, we propose a method called \textbf{FedBRB} (\underline{B}lock-wise \underline{R}olling and weighted \underline{B}roadcast) based on the block concept. FedBRB can use small local models to train all blocks of the large global model, and broadcasts the trained parameters to the entire space for faster information interaction. Experiments demonstrate that FedBRB yields substantial performance gains, achieving state-of-the-art results in this scenario. Moreover, FedBRB using only minimal local models can even surpass baselines using larger local models.
|
[
"['Ziyue Xu' 'Mingfeng Xu' 'Tianchi Liao' 'Zibin Zheng' 'Chuan Chen']"
] |
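The precise block-wise rolling schedule is defined in the FedBRB paper itself. Purely as a hypothetical sketch of the general idea, a window of local-model-sized blocks can roll over the global model's blocks across rounds so that every global block is eventually trained; the function and its round-based indexing below are illustrative assumptions, not the authors' algorithm:

```python
def rolling_block_assignment(num_global_blocks, num_local_blocks, round_idx):
    """Return the indices of the global-model blocks a small local model
    would train in a given round under a simple rolling window.

    The window advances by num_local_blocks each round and wraps around,
    so successive rounds cover the whole global parameter space.
    """
    start = (round_idx * num_local_blocks) % num_global_blocks
    return [(start + i) % num_global_blocks for i in range(num_local_blocks)]
```

For example, with a 6-block global model and 2-block local models, rounds 0, 1, and 2 train blocks [0, 1], [2, 3], and [4, 5], after which every global block has been covered once.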
null | null |
2402.17205
| null | null |
http://arxiv.org/pdf/2402.17205v3
|
2024-05-22T05:11:56Z
|
2024-02-27T04:55:03Z
|
Measuring Vision-Language STEM Skills of Neural Models
|
We introduce a new challenge to test the STEM skills of neural models. The problems in the real world often require solutions, combining knowledge from STEM (science, technology, engineering, and math). Unlike existing datasets, our dataset requires the understanding of multimodal vision-language information of STEM. Our dataset features one of the largest and most comprehensive datasets for the challenge. It includes 448 skills and 1,073,146 questions spanning all STEM subjects. Compared to existing datasets that often focus on examining expert-level ability, our dataset includes fundamental skills and questions designed based on the K-12 curriculum. We also add state-of-the-art foundation models such as CLIP and GPT-3.5-Turbo to our benchmark. Results show that the recent model advances only help master a very limited number of lower grade-level skills (2.5% in the third grade) in our dataset. In fact, these models are still well below (averaging 54.7%) the performance of elementary students, not to mention near expert-level performance. To understand and increase the performance on our dataset, we teach the models on a training split of our dataset. Even though we observe improved performance, the model performance remains relatively low compared to average elementary students. To solve STEM problems, we will need novel algorithmic innovations from the community.
|
[
"['Jianhao Shen' 'Ye Yuan' 'Srbuhi Mirzoyan' 'Ming Zhang' 'Chenguang Wang']"
] |
null | null |
2402.17215
| null | null |
http://arxiv.org/pdf/2402.17215v1
|
2024-02-27T05:11:14Z
|
2024-02-27T05:11:14Z
|
Multidimensional unstructured sparse recovery via eigenmatrix
|
This note considers the multidimensional unstructured sparse recovery problems. Examples include Fourier inversion and sparse deconvolution. The eigenmatrix is a data-driven construction with desired approximate eigenvalues and eigenvectors proposed for the one-dimensional problems. This note extends the eigenmatrix approach to multidimensional problems. Numerical results are provided to demonstrate the performance of the proposed method.
|
[
"['Lexing Ying']"
] |
null | null |
2402.17216
| null | null |
http://arxiv.org/pdf/2402.17216v1
|
2024-02-27T05:14:27Z
|
2024-02-27T05:14:27Z
|
Application of Machine Learning Optimization in Cloud Computing Resource
Scheduling and Management
|
In recent years, cloud computing has been widely used. Cloud computing refers to centralized computing resources: users access these centralized resources to carry out computations, and the cloud computing center returns the processed results to the user. Cloud computing serves not only individual users but also enterprise users. By purchasing a cloud server, users do not have to buy a large number of computers, saving computing costs. According to a report by China Economic News Network, the scale of cloud computing in China has reached 209.1 billion yuan. At present, the more mature cloud service providers in China are Ali Cloud, Baidu Cloud, Huawei Cloud, and so on. Therefore, this paper proposes an innovative approach to solving complex problems in cloud computing resource scheduling and management using machine learning optimization techniques. Through an in-depth study of challenges such as low resource utilization and unbalanced load in the cloud environment, this study proposes a comprehensive solution, including optimization methods such as deep learning and genetic algorithms, to improve system performance and efficiency, thus bringing new breakthroughs and progress to the field of cloud computing resource management. Rational allocation of resources plays a crucial role in cloud computing. In the resource allocation of cloud computing, the cloud computing center has limited cloud resources, and users arrive in sequence. Each user requests the cloud computing center to use a certain number of cloud resources at a specific time.
|
[
"['Yifan Zhang' 'Bo Liu' 'Yulu Gong' 'Jiaxin Huang' 'Jingyu Xu'\n 'Weixiang Wan']"
] |
null | null |
2402.17217
| null | null |
http://arxiv.org/pdf/2402.17217v1
|
2024-02-27T05:16:59Z
|
2024-02-27T05:16:59Z
|
Temporal Logic Specification-Conditioned Decision Transformer for
Offline Safe Reinforcement Learning
|
Offline safe reinforcement learning (RL) aims to train a constraint satisfaction policy from a fixed dataset. Current state-of-the-art approaches are based on supervised learning with a conditioned policy. However, these approaches fall short in real-world applications that involve complex tasks with rich temporal and logical structures. In this paper, we propose temporal logic Specification-conditioned Decision Transformer (SDT), a novel framework that harnesses the expressive power of signal temporal logic (STL) to specify complex temporal rules that an agent should follow and the sequential modeling capability of Decision Transformer (DT). Empirical evaluations on the DSRL benchmarks demonstrate the better capacity of SDT in learning safe and high-reward policies compared with existing approaches. In addition, SDT shows good alignment with respect to different desired degrees of satisfaction of the STL specification that it is conditioned on.
|
[
"['Zijian Guo' 'Weichao Zhou' 'Wenchao Li']"
] |
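The STL specifications that SDT conditions on carry quantitative semantics. As a hypothetical stdlib-only illustration (not code from the paper), the robustness of two basic STL operators over a finite trace can be computed as follows; positive values mean the formula holds with that margin:

```python
def rob_always(signal, c):
    """Robustness of G(x > c): the worst-case margin over the trace."""
    return min(x - c for x in signal)

def rob_eventually(signal, c):
    """Robustness of F(x > c): the best-case margin over the trace."""
    return max(x - c for x in signal)

trace = [0.5, 0.8, 0.3]
r_g = rob_always(trace, 0.2)      # smallest margin: 0.3 - 0.2 = 0.1
r_f = rob_eventually(trace, 0.7)  # largest margin: 0.8 - 0.7 = 0.1
```

A specification-conditioned policy can then be trained to drive such robustness values positive for the desired formula.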
null | null |
2402.17227
| null | null |
http://arxiv.org/pdf/2402.17227v1
|
2024-02-27T05:40:36Z
|
2024-02-27T05:40:36Z
|
Efficient Backpropagation with Variance-Controlled Adaptive Sampling
|
Sampling-based algorithms, which eliminate ``unimportant'' computations during forward and/or back propagation (BP), offer potential solutions to accelerate neural network training. However, since sampling introduces approximations to training, such algorithms may not consistently maintain accuracy across various tasks. In this work, we introduce a variance-controlled adaptive sampling (VCAS) method designed to accelerate BP. VCAS computes an unbiased stochastic gradient with fine-grained layerwise importance sampling in the data dimension for activation gradient calculation and leverage score sampling in the token dimension for weight gradient calculation. To preserve accuracy, we control the additional variance by learning the sample ratio jointly with model parameters during training. We assessed VCAS on multiple fine-tuning and pre-training tasks in both vision and natural language domains. On all the tasks, VCAS can preserve the original training loss trajectory and validation accuracy with up to a 73.87% FLOPs reduction of BP and a 49.58% FLOPs reduction of the whole training process. The implementation is available at https://github.com/thu-ml/VCAS.
|
[
"['Ziteng Wang' 'Jianfei Chen' 'Jun Zhu']"
] |
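The unbiased sample-and-reweight principle that VCAS builds on can be sketched in a few lines. This is an illustrative toy, not the paper's layerwise implementation; `keep_ratio` is a hypothetical knob standing in for the learned sample ratio:

```python
import random

def sampled_sum(values, keep_ratio, seed=0):
    """Unbiased importance-sampled estimate of sum(values).

    Keeps roughly keep_ratio * len(values) entries, chosen with
    probability proportional to |value| (clipped to 1), and reweights
    each kept entry by 1/p so the estimator stays unbiased -- the same
    principle that lets VCAS drop 'unimportant' gradient terms.
    """
    rng = random.Random(seed)
    total_mag = sum(abs(v) for v in values) or 1.0
    budget = keep_ratio * len(values)
    est = 0.0
    for v in values:
        p = min(1.0, budget * abs(v) / total_mag)
        if p > 0 and rng.random() < p:
            est += v / p
    return est

vals = [float(i) for i in range(1, 101)]
exact = sum(vals)
estimates = [sampled_sum(vals, 0.5, seed=s) for s in range(200)]
mean_est = sum(estimates) / len(estimates)
# averaged over seeds, the estimate concentrates around the exact sum
```

Large entries are kept almost surely; small entries are mostly skipped but reweighted when kept, which is where the controlled extra variance comes from.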
null | null |
2402.17229
| null | null |
http://arxiv.org/pdf/2402.17229v1
|
2024-02-27T05:47:33Z
|
2024-02-27T05:47:33Z
|
Preserving Fairness Generalization in Deepfake Detection
|
Although effective deepfake detection models have been developed in recent years, recent studies have revealed that these models can result in unfair performance disparities among demographic groups, such as race and gender. This can lead to particular groups facing unfair targeting or exclusion from detection, potentially allowing misclassified deepfakes to manipulate public opinion and undermine trust in the model. The existing method for addressing this problem is providing a fair loss function. It shows good fairness performance for intra-domain evaluation but does not maintain fairness for cross-domain testing. This highlights the significance of fairness generalization in the fight against deepfakes. In this work, we propose the first method to address the fairness generalization problem in deepfake detection by simultaneously considering features, loss, and optimization aspects. Our method employs disentanglement learning to extract demographic and domain-agnostic forgery features, fusing them to encourage fair learning across a flattened loss landscape. Extensive experiments on prominent deepfake datasets demonstrate our method's effectiveness, surpassing state-of-the-art approaches in preserving fairness during cross-domain deepfake detection. The code is available at https://github.com/Purdue-M2/Fairness-Generalization
|
[
"['Li Lin' 'Xinan He' 'Yan Ju' 'Xin Wang' 'Feng Ding' 'Shu Hu']"
] |
null | null |
2402.17232
| null | null |
http://arxiv.org/pdf/2402.17232v1
|
2024-02-27T05:57:45Z
|
2024-02-27T05:57:45Z
|
Two-scale Neural Networks for Partial Differential Equations with Small
Parameters
|
We propose a two-scale neural network method for solving partial differential equations (PDEs) with small parameters using physics-informed neural networks (PINNs). We directly incorporate the small parameters into the architecture of neural networks. The proposed method enables solving PDEs with small parameters in a simple fashion, without adding Fourier features or other computationally taxing searches of truncation parameters. Various numerical examples demonstrate reasonable accuracy in capturing features of large derivatives in the solutions caused by small parameters.
|
[
"['Qiao Zhuang' 'Chris Ziyi Yao' 'Zhongqiang Zhang' 'George Em Karniadakis']"
] |
null | null |
2402.17233
| null | null |
http://arxiv.org/pdf/2402.17233v2
|
2024-06-11T15:25:01Z
|
2024-02-27T06:01:56Z
|
Hybrid$^2$ Neural ODE Causal Modeling and an Application to Glycemic
Response
|
Hybrid models composing mechanistic ODE-based dynamics with flexible and expressive neural network components have grown rapidly in popularity, especially in scientific domains where such ODE-based modeling offers important interpretability and validated causal grounding (e.g., for counterfactual reasoning). The incorporation of mechanistic models also provides inductive bias in standard blackbox modeling approaches, critical when learning from small datasets or partially observed, complex systems. Unfortunately, as the hybrid models become more flexible, the causal grounding provided by the mechanistic model can quickly be lost. We address this problem by leveraging another common source of domain knowledge: \emph{ranking} of treatment effects for a set of interventions, even if the precise treatment effect is unknown. We encode this information in a \emph{causal loss} that we combine with the standard predictive loss to arrive at a \emph{hybrid loss} that biases our learning towards causally valid hybrid models. We demonstrate our ability to achieve a win-win, state-of-the-art predictive performance \emph{and} causal validity, in the challenging task of modeling glucose dynamics post-exercise in individuals with type 1 diabetes.
|
[
"['Bob Junyi Zou' 'Matthew E. Levine' 'Dessi P. Zaharieva' 'Ramesh Johari'\n 'Emily B. Fox']"
] |
null | null |
2402.17235
| null | null |
http://arxiv.org/pdf/2402.17235v1
|
2024-02-27T06:05:01Z
|
2024-02-27T06:05:01Z
|
Stochastic Gradient Succeeds for Bandits
|
We show that the \emph{stochastic gradient} bandit algorithm converges to a \emph{globally optimal} policy at an $O(1/t)$ rate, even with a \emph{constant} step size. Remarkably, global convergence of the stochastic gradient bandit algorithm has not been previously established, even though it is an old algorithm known to be applicable to bandits. The new result is achieved by establishing two novel technical findings: first, the noise of the stochastic updates in the gradient bandit algorithm satisfies a strong ``growth condition'' property, where the variance diminishes whenever progress becomes small, implying that additional noise control via diminishing step sizes is unnecessary; second, a form of ``weak exploration'' is automatically achieved through the stochastic gradient updates, since they prevent the action probabilities from decaying faster than $O(1/t)$, thus ensuring that every action is sampled infinitely often with probability $1$. These two findings can be used to show that the stochastic gradient update is already ``sufficient'' for bandits in the sense that exploration versus exploitation is automatically balanced in a manner that ensures almost sure convergence to a global optimum. These novel theoretical findings are further verified by experimental results.
|
[
"['Jincheng Mei' 'Zixin Zhong' 'Bo Dai' 'Alekh Agarwal' 'Csaba Szepesvari'\n 'Dale Schuurmans']"
] |
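The algorithm analyzed here is classical: a softmax policy updated by a REINFORCE-style gradient with a constant step size. A minimal simulation on Bernoulli arms (illustrative only; the arm means, step size, and horizon are arbitrary choices, not the paper's experimental setup):

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gradient_bandit(arm_means, steps=5000, eta=0.1, seed=0):
    """Stochastic gradient bandit with a constant step size eta.

    Each step: sample an arm from the softmax policy, observe a
    Bernoulli reward, and update the logits with the sampled
    policy gradient r * (1{a=i} - p_i).
    """
    rng = random.Random(seed)
    logits = [0.0] * len(arm_means)
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choices(range(len(probs)), weights=probs)[0]
        r = 1.0 if rng.random() < arm_means[a] else 0.0
        for i in range(len(logits)):
            logits[i] += eta * r * ((1.0 if i == a else 0.0) - probs[i])
    return softmax(logits)

probs = gradient_bandit([0.2, 0.5, 0.8])
# the policy should concentrate on the best arm (index 2)
```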
null | null |
2402.17236
| null | null |
http://arxiv.org/abs/2402.17236v1
|
2024-02-27T06:09:48Z
|
2024-02-27T06:09:48Z
|
A Review of Data Mining in Personalized Education: Current Trends and
Future Prospects
|
Personalized education, tailored to individual student needs, leverages educational technology and artificial intelligence (AI) in the digital age to enhance learning effectiveness. The integration of AI in educational platforms provides insights into academic performance, learning preferences, and behaviors, optimizing the personal learning process. Driven by data mining techniques, it not only benefits students but also provides educators and institutions with tools to craft customized learning experiences. To offer a comprehensive review of recent advancements in personalized educational data mining, this paper focuses on four primary scenarios: educational recommendation, cognitive diagnosis, knowledge tracing, and learning analysis. This paper presents a structured taxonomy for each area, compiles commonly used datasets, and identifies future research directions, emphasizing the role of data mining in enhancing personalized education and paving the way for future exploration and innovation.
|
[
"['Zhang Xiong' 'Haoxuan Li' 'Zhuang Liu' 'Zhuofan Chen' 'Hao Zhou'\n 'Wenge Rong' 'Yuanxin Ouyang']"
] |
null | null |
2402.17238
| null | null |
http://arxiv.org/pdf/2402.17238v1
|
2024-02-27T06:13:02Z
|
2024-02-27T06:13:02Z
|
Does Negative Sampling Matter? A Review with Insights into its Theory
and Applications
|
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-ranging applications spanning machine learning, computer vision, natural language processing, data mining, and recommender systems. This growing interest raises several critical questions: Does negative sampling really matter? Is there a general framework that can incorporate all existing negative sampling methods? In what fields is it applied? Addressing these questions, we propose a general framework that leverages negative sampling. Delving into its history, we trace the development of negative sampling through five evolutionary paths. We dissect and categorize the strategies used to select negative sample candidates, detailing global, local, mini-batch, hop, and memory-based approaches. Our review categorizes current negative sampling methods into five types: static, hard, GAN-based, auxiliary-based, and in-batch methods, providing a clear structure for understanding negative sampling. Beyond detailed categorization, we highlight the application of negative sampling in various areas, offering insights into its practical benefits. Finally, we briefly discuss open problems and future directions for negative sampling.
|
[
"['Zhen Yang' 'Ming Ding' 'Tinglin Huang' 'Yukuo Cen' 'Junshuai Song'\n 'Bin Xu' 'Yuxiao Dong' 'Jie Tang']"
] |
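As a concrete instance of the "static" category in this taxonomy, the word2vec-style sampler draws negatives from the unigram distribution raised to the 3/4 power. A minimal sketch (the 0.75 exponent is the standard word2vec heuristic, not something specific to this survey):

```python
import random

def build_negative_sampler(counts, power=0.75, seed=0):
    """Static negative sampler over a fixed vocabulary.

    Draws negatives with probability proportional to count**power,
    flattening the frequency distribution so rare items are sampled
    more often than their raw counts suggest.
    """
    rng = random.Random(seed)
    items = list(counts)
    weights = [counts[i] ** power for i in items]

    def sample(positive, k):
        negs = []
        while len(negs) < k:
            cand = rng.choices(items, weights=weights)[0]
            if cand != positive:  # never return the positive itself
                negs.append(cand)
        return negs

    return sample

sample = build_negative_sampler({"cat": 100, "dog": 50, "the": 1000})
negs = sample("cat", 5)
```

"Static" here means the distribution never changes during training, in contrast to hard or GAN-based samplers that adapt to the current model.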
null | null |
2402.17246
| null | null |
http://arxiv.org/pdf/2402.17246v1
|
2024-02-27T06:32:56Z
|
2024-02-27T06:32:56Z
|
SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion
Classification Using 3D Multi-Phase Imaging
|
Automated classification of liver lesions in multi-phase CT and MR scans is of clinical significance but challenging. This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework, specifically designed for liver lesion classification in 3D multi-phase CT and MR imaging with varying phase counts. The proposed SDR-Former utilizes a streamlined Siamese Neural Network (SNN) to process multi-phase imaging inputs, possessing robust feature representations while maintaining computational efficiency. The weight-sharing feature of the SNN is further enriched by a hybrid Dual-Resolution Transformer (DR-Former), comprising a 3D Convolutional Neural Network (CNN) and a tailored 3D Transformer for processing high- and low-resolution images, respectively. This hybrid sub-architecture excels in capturing detailed local features and understanding global contextual information, thereby boosting the SNN's feature extraction capabilities. Additionally, a novel Adaptive Phase Selection Module (APSM) is introduced, promoting phase-specific intercommunication and dynamically adjusting each phase's influence on the diagnostic outcome. The proposed SDR-Former framework has been validated through comprehensive experiments on two clinical datasets: a three-phase CT dataset and an eight-phase MR dataset. The experimental results affirm the efficacy of the proposed framework. To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public. This pioneering dataset, being the first publicly available multi-phase MR dataset in this field, also underpins the MICCAI LLD-MMRI Challenge. The dataset is accessible at: https://bit.ly/3IyYlgN.
|
[
"['Meng Lou' 'Hanning Ying' 'Xiaoqing Liu' 'Hong-Yu Zhou' 'Yuqing Zhang'\n 'Yizhou Yu']"
] |
null | null |
2402.17249
| null | null |
http://arxiv.org/pdf/2402.17249v1
|
2024-02-27T06:47:52Z
|
2024-02-27T06:47:52Z
|
Deep Learning-Based Speech and Vision Synthesis to Improve Phishing
Attack Detection through a Multi-layer Adaptive Framework
|
The ever-evolving ways in which attackers continue to improve their phishing techniques to bypass existing state-of-the-art phishing detection methods pose a mountain of challenges to researchers in both industry and academia, owing to the inability of current approaches to detect complex phishing attacks. Current anti-phishing methods thus remain vulnerable to complex phishing because of the increasingly sophisticated tactics adopted by attackers, coupled with the rate at which new tactics are being developed to evade detection. In this research, we propose an adaptable framework that combines deep learning and Random Forest to read images, synthesize speech from deep-fake videos, and apply natural language processing at various prediction layers to significantly increase the performance of machine learning models for phishing attack detection.
|
[
"['Tosin Ige' 'Christopher Kiekintveld' 'Aritran Piplai']"
] |
null | null |
2402.17257
| null | null |
http://arxiv.org/pdf/2402.17257v3
|
2024-05-30T08:24:54Z
|
2024-02-27T07:03:25Z
|
RIME: Robust Preference-based Reinforcement Learning with Noisy
Preferences
|
Preference-based Reinforcement Learning (PbRL) circumvents the need for reward engineering by harnessing human preferences as the reward signal. However, current PbRL methods excessively depend on high-quality feedback from domain experts, which results in a lack of robustness. In this paper, we present RIME, a robust PbRL algorithm for effective reward learning from noisy preferences. Our method utilizes a sample selection-based discriminator to dynamically filter out noise and ensure robust training. To counteract the cumulative error stemming from incorrect selection, we suggest a warm start for the reward model, which additionally bridges the performance gap during the transition from pre-training to online training in PbRL. Our experiments on robotic manipulation and locomotion tasks demonstrate that RIME significantly enhances the robustness of the state-of-the-art PbRL method. Code is available at https://github.com/CJReinforce/RIME_ICML2024.
|
[
"['Jie Cheng' 'Gang Xiong' 'Xingyuan Dai' 'Qinghai Miao' 'Yisheng Lv'\n 'Fei-Yue Wang']"
] |
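RIME's sample-selection idea, filtering out preference pairs whose loss under the current reward model is too high, can be sketched as follows. This is a toy with a fixed threshold `tau`; the paper's discriminator adapts its threshold during training, and the reward function here is a hypothetical stand-in:

```python
import math

def filter_noisy_preferences(pairs, reward, tau):
    """Keep preference pairs whose Bradley-Terry cross-entropy loss
    under the current reward model stays below tau; pairs that strongly
    contradict the model are treated as likely label noise."""
    kept = []
    for seg_a, seg_b, label in pairs:  # label == 1 means seg_a preferred
        # probability that seg_a is preferred under the reward model
        p_a = 1.0 / (1.0 + math.exp(reward(seg_b) - reward(seg_a)))
        loss = -math.log(p_a if label == 1 else 1.0 - p_a)
        if loss < tau:
            kept.append((seg_a, seg_b, label))
    return kept

# toy reward: sum of a segment's per-step rewards
reward = lambda seg: sum(seg)
pairs = [
    ((1.0, 1.0), (0.0, 0.0), 1),  # consistent with the reward -> low loss
    ((0.0, 0.0), (1.0, 1.0), 1),  # contradicts the reward -> high loss
]
clean = filter_noisy_preferences(pairs, reward, tau=0.5)
```

The filtered set is what the reward model would then be retrained on, breaking the feedback loop between noisy labels and a corrupted reward.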
null | null |
2402.17269
| null | null |
http://arxiv.org/pdf/2402.17269v2
|
2024-03-08T06:00:12Z
|
2024-02-27T07:28:05Z
|
Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion
Recognition
|
Emotion recognition in conversation (ERC) is a crucial task in natural language processing and affective computing. This paper proposes MultiDAG+CL, a novel approach for Multimodal Emotion Recognition in Conversation (ERC) that employs Directed Acyclic Graph (DAG) to integrate textual, acoustic, and visual features within a unified framework. The model is enhanced by Curriculum Learning (CL) to address challenges related to emotional shifts and data imbalance. Curriculum learning facilitates the learning process by gradually presenting training samples in a meaningful order, thereby improving the model's performance in handling emotional variations and data imbalance. Experimental results on the IEMOCAP and MELD datasets demonstrate that the MultiDAG+CL models outperform baseline models. We release the code for MultiDAG+CL and experiments: https://github.com/vanntc711/MultiDAG-CL
|
[
"['Cam-Van Thi Nguyen' 'Cao-Bach Nguyen' 'Quang-Thuy Ha' 'Duc-Trong Le']"
] |
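The curriculum learning component orders training samples from easy to hard. A minimal sketch of that ordering step (utterance length is a hypothetical difficulty proxy here, not the criterion used in the paper):

```python
def curriculum_order(samples, difficulty, batch_size=2):
    """Arrange training samples from easy to hard according to a
    difficulty score, then split them into consecutive batches so the
    model sees a meaningful easy-to-hard progression."""
    ordered = sorted(samples, key=difficulty)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

# toy utterances scored by length as a stand-in difficulty measure
data = ["I am fine", "ok", "this long utterance mixes several emotions", "sure"]
batches = curriculum_order(data, difficulty=len)
```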
null | null |
2402.17270
| null | null |
http://arxiv.org/pdf/2402.17270v1
|
2024-02-27T07:31:30Z
|
2024-02-27T07:31:30Z
|
Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social
Dilemmas
|
The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines, including computer science and social science. Recent advancements in Artificial Intelligence (AI) have significantly reshaped this field, offering fresh insights into understanding and enhancing cooperation. This survey examines three key areas at the intersection of AI and cooperation in social dilemmas. First, focusing on multi-agent cooperation, we review the intrinsic and external motivations that support cooperation among rational agents, and the methods employed to develop effective strategies against diverse opponents. Second, looking into human-agent cooperation, we discuss the current AI algorithms for cooperating with humans and the human biases towards AI agents. Third, we review the emergent field of leveraging AI agents to enhance cooperation among humans. We conclude by discussing future research avenues, such as using large language models, establishing unified theoretical frameworks, revisiting existing theories of human cooperation, and exploring multiple real-world applications.
|
[
"['Hao Guo' 'Chunjiang Mu' 'Yang Chen' 'Chen Shen' 'Shuyue Hu' 'Zhen Wang']"
] |
null | null |
2402.17287
| null | null |
http://arxiv.org/pdf/2402.17287v2
|
2024-06-14T01:37:57Z
|
2024-02-27T08:00:52Z
|
An Interpretable Evaluation of Entropy-based Novelty of Generative
Models
|
The massive developments of generative model frameworks require principled methods for the evaluation of a model's novelty compared to a reference dataset. While the literature has extensively studied the evaluation of the quality, diversity, and generalizability of generative models, the assessment of a model's novelty compared to a reference model has not been adequately explored in the machine learning community. In this work, we focus on the novelty assessment for multi-modal distributions and attempt to address the following differential clustering task: Given samples of a generative model $P_\mathcal{G}$ and a reference model $P_\mathrm{ref}$, how can we discover the sample types expressed by $P_\mathcal{G}$ more frequently than in $P_\mathrm{ref}$? We introduce a spectral approach to the differential clustering task and propose the Kernel-based Entropic Novelty (KEN) score to quantify the mode-based novelty of $P_\mathcal{G}$ with respect to $P_\mathrm{ref}$. We analyze the KEN score for mixture distributions with well-separable components and develop a kernel-based method to compute the KEN score from empirical data. We support the KEN framework by presenting numerical results on synthetic and real image datasets, indicating the framework's effectiveness in detecting novel modes and comparing generative models. The paper's code is available at: www.github.com/buyeah1109/KEN
|
[
"['Jingwei Zhang' 'Cheuk Ting Li' 'Farzan Farnia']"
] |
null | null |
2402.17295
| null | null |
http://arxiv.org/pdf/2402.17295v1
|
2024-02-27T08:16:17Z
|
2024-02-27T08:16:17Z
|
Quantum Distance Approximation for Persistence Diagrams
|
Topological Data Analysis methods can be useful for classification and clustering tasks in many different fields as they can provide two dimensional persistence diagrams that summarize important information about the shape of potentially complex and high dimensional data sets. The space of persistence diagrams can be endowed with various metrics such as the Wasserstein distance which admit a statistical structure and allow to use these summaries for machine learning algorithms. However, computing the distance between two persistence diagrams involves finding an optimal way to match the points of the two diagrams and may not always be an easy task for classical computers. In this work we explore the potential of quantum computers to estimate the distance between persistence diagrams, in particular we propose variational quantum algorithms for the Wasserstein distance as well as the $d^{c}_{p}$ distance. Our implementation is a weighted version of the Quantum Approximate Optimization Algorithm that relies on control clauses to encode the constraints of the optimization problem.
|
[
"['Bernardo Ameneyro' 'Rebekah Herrman' 'George Siopsis'\n 'Vasileios Maroulas']"
] |
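The matching problem this abstract refers to can be made concrete with a brute-force baseline for tiny diagrams (illustrative only; its exponential cost is precisely what motivates approximate solvers such as the proposed quantum algorithms):

```python
import itertools

def wasserstein_pd(diag1, diag2, p=2):
    """Brute-force p-Wasserstein distance between two small persistence
    diagrams (lists of (birth, death) points), using the standard
    convention that any point may instead be matched to its projection
    on the diagonal, and diagonal-to-diagonal matches are free."""
    def proj(pt):  # nearest point on the diagonal
        m = (pt[0] + pt[1]) / 2.0
        return (m, m)

    # pad each side with the other side's diagonal projections
    left = [(pt, False) for pt in diag1] + [(proj(pt), True) for pt in diag2]
    right = [(pt, False) for pt in diag2] + [(proj(pt), True) for pt in diag1]

    def cost(u, v):
        (pu, u_diag), (pv, v_diag) = u, v
        if u_diag and v_diag:
            return 0.0  # two diagonal points match at no cost
        return max(abs(pu[0] - pv[0]), abs(pu[1] - pv[1])) ** p

    best = min(
        sum(cost(left[i], right[j]) for i, j in enumerate(perm))
        for perm in itertools.permutations(range(len(right)))
    )
    return best ** (1.0 / p)

d_close = wasserstein_pd([(0.0, 1.0)], [(0.0, 1.1)])  # matched directly
d_alone = wasserstein_pd([(0.0, 2.0)], [])            # matched to the diagonal
```

Enumerating all matchings is feasible only for a handful of points, which is why practical pipelines use assignment solvers and why this paper explores quantum approximations.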