categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2404.08836 | null | null | http://arxiv.org/pdf/2404.08836v1 | 2024-04-12T22:35:00Z | 2024-04-12T22:35:00Z | BERT-LSH: Reducing Absolute Compute For Attention | This study introduces a novel BERT-LSH model that incorporates Locality Sensitive Hashing (LSH) to approximate the attention mechanism in the BERT architecture. We examine the computational efficiency and performance of this model compared to a standard baseline BERT model. Our findings reveal that BERT-LSH significantly reduces computational demand for the self-attention layer while unexpectedly outperforming the baseline model in pretraining and fine-tuning tasks. These results suggest that the LSH-based attention mechanism not only offers computational advantages but also may enhance the model's ability to generalize from its training data. For more information, visit our GitHub repository: https://github.com/leo4life2/algoml-final | [
"['Zezheng Li' 'Kingston Yip']"
]
|
null | null | 2404.08838 | null | null | http://arxiv.org/pdf/2404.08838v12 | 2024-06-07T21:52:16Z | 2024-04-12T22:53:41Z | Predicting Traffic Congestion at Urban Intersections Using Data-Driven
Modeling | Traffic congestion at intersections is a significant issue in urban areas, leading to increased commute times, safety hazards, and operational inefficiencies. This study aims to develop a predictive model for congestion at intersections in major U.S. cities, utilizing a dataset of trip-logging metrics from commercial vehicles across 4,800 intersections. The dataset encompasses 27 features, including intersection coordinates, street names, time of day, and traffic metrics (Kashyap et al., 2019). Additional features, such as rainfall/snowfall percentage, distance from downtown and outskirts, and road types, were incorporated to enhance the model's predictive power. The methodology involves data exploration, feature transformation, and handling missing values through low-rank models and label encoding. The proposed model has the potential to assist city planners and governments in anticipating traffic hot spots, optimizing operations, and identifying infrastructure challenges. | [
"['Tara Kelly' 'Jessica Gupta']"
]
|
null | null | 2404.08839 | null | null | http://arxiv.org/pdf/2404.08839v3 | 2024-07-02T19:59:35Z | 2024-04-12T22:57:01Z | Multiply-Robust Causal Change Attribution | Comparing two samples of data, we observe a change in the distribution of an outcome variable. In the presence of multiple explanatory variables, how much of the change can be explained by each possible cause? We develop a new estimation strategy that, given a causal model, combines regression and re-weighting methods to quantify the contribution of each causal mechanism. Our proposed methodology is multiply robust, meaning that it still recovers the target parameter under partial misspecification. We prove that our estimator is consistent and asymptotically normal. Moreover, it can be incorporated into existing frameworks for causal attribution, such as Shapley values, which will inherit the consistency and large-sample distribution properties. Our method demonstrates excellent performance in Monte Carlo simulations, and we show its usefulness in an empirical application. Our method is implemented as part of the Python library DoWhy (arXiv:2011.04216, arXiv:2206.06821). | [
"['Victor Quintas-Martinez' 'Mohammad Taha Bahadori' 'Eduardo Santiago'\n 'Jeff Mu' 'Dominik Janzing' 'David Heckerman']"
]
|
null | null | 2404.08846 | null | null | http://arxiv.org/pdf/2404.08846v2 | 2024-05-31T02:37:10Z | 2024-04-12T23:27:46Z | Experimental Design for Active Transductive Inference in Large Language
Models | One emergent ability of large language models (LLMs) is that query-specific examples can be included in the prompt at inference time. In this work, we use active learning for adaptive prompt design and call it Active In-context Prompt Design (AIPD). We design the LLM prompt by adaptively choosing few-shot examples from a training set to optimize performance on a test set. The training examples are initially unlabeled and we obtain the label of the most informative ones, which maximally reduces uncertainty in the LLM prediction. We propose two algorithms, GO and SAL, which differ in how the few-shot examples are chosen. We analyze these algorithms in linear models: first GO, and then SAL by using its equivalence with GO. We experiment with many different tasks in small, medium-sized, and large language models, and show that GO and SAL outperform other methods for choosing few-shot examples in the LLM prompt at inference time. | [
"['Subhojyoti Mukherjee' 'Anusha Lalitha' 'Aniket Deshmukh' 'Ge Liu'\n 'Yifei Ma' 'Branislav Kveton']"
]
|
null | null | 2404.08847 | null | null | http://arxiv.org/abs/2404.08847v1 | 2024-04-12T23:32:06Z | 2024-04-12T23:32:06Z | LazyDP: Co-Designing Algorithm-Software for Scalable Training of
Differentially Private Recommendation Models | Differential privacy (DP) is widely being employed in the industry as a practical standard for privacy protection. While private training of computer vision or natural language processing applications has been studied extensively, the computational challenges of training recommender systems (RecSys) with DP have not been explored. In this work, we first present our detailed characterization of private RecSys training using DP-SGD, root-causing its several performance bottlenecks. Specifically, we identify DP-SGD's noise sampling and noisy gradient update stages to suffer from severe compute and memory bandwidth limitations, respectively, causing significant performance overhead in training private RecSys. Based on these findings, we propose LazyDP, an algorithm-software co-design that addresses the compute and memory challenges of training RecSys with DP-SGD. Compared to a state-of-the-art DP-SGD training system, we demonstrate that LazyDP provides an average 119x training throughput improvement while also ensuring mathematically equivalent, differentially private RecSys models to be trained. | [
"['Juntaek Lim' 'Youngeun Kwon' 'Ranggi Hwang' 'Kiwan Maeng'\n 'G. Edward Suh' 'Minsoo Rhu']"
]
|
null | null | 2404.08850 | null | null | http://arxiv.org/pdf/2404.08850v2 | 2024-05-28T17:11:44Z | 2024-04-12T23:37:56Z | Assessing Economic Viability: A Comparative Analysis of Total Cost of
Ownership for Domain-Adapted Large Language Models versus State-of-the-art
Counterparts in Chip Design Coding Assistance | This paper presents a comparative analysis of total cost of ownership (TCO) and performance between domain-adapted large language models (LLM) and state-of-the-art (SoTA) LLMs, with a particular emphasis on tasks related to coding assistance for chip design. We examine the TCO and performance metrics of a domain-adaptive LLM, ChipNeMo, against two leading LLMs, Claude 3 Opus and ChatGPT-4 Turbo, to assess their efficacy in chip design coding generation. Through a detailed evaluation of the accuracy of the model, training methodologies, and operational expenditures, this study aims to provide stakeholders with critical information to select the most economically viable and performance-efficient solutions for their specific needs. Our results underscore the benefits of employing domain-adapted models, such as ChipNeMo, that demonstrate improved performance at significantly reduced costs compared to their general-purpose counterparts. In particular, we reveal the potential of domain-adapted LLMs to decrease TCO by approximately 90%-95%, with the cost advantages becoming increasingly evident as the deployment scale expands. With expansion of deployment, the cost benefits of ChipNeMo become more pronounced, making domain-adaptive LLMs an attractive option for organizations with substantial coding needs supported by LLMs. | [
"['Amit Sharma' 'Teodor-Dumitru Ene' 'Kishor Kunal' 'Mingjie Liu'\n 'Zafar Hasan' 'Haoxing Ren']"
]
|
null | null | 2404.08853 | null | null | http://arxiv.org/pdf/2404.08853v1 | 2024-04-12T23:49:37Z | 2024-04-12T23:49:37Z | Uncertainty Quantification in Detecting Choroidal Metastases on MRI via
Evolutionary Strategies | Uncertainty quantification plays a vital role in facilitating the practical implementation of AI in radiology by addressing growing concerns around trustworthiness. Given the challenges associated with acquiring large, annotated datasets in this field, there is a need for methods that enable uncertainty quantification in small data AI approaches tailored to radiology images. In this study, we focused on uncertainty quantification within the context of the small data evolutionary strategies-based technique of deep neuroevolution (DNE). Specifically, we employed DNE to train a simple Convolutional Neural Network (CNN) with MRI images of the eyes for binary classification. The goal was to distinguish between normal eyes and those with metastatic tumors called choroidal metastases. The training set comprised 18 images with choroidal metastases and 18 without tumors, while the testing set contained a tumor-to-normal ratio of 15:15. We trained CNN model weights via DNE for approximately 40,000 episodes, ultimately reaching a convergence of 100% accuracy on the training set. We saved all models that achieved maximal training set accuracy. Then, by applying these models to the testing set, we established an ensemble method for uncertainty quantification. The saved set of models produced distributions for each testing set image between the two classes of normal and tumor-containing. The relative frequencies permitted uncertainty quantification of model predictions. Intriguingly, we found that subjective features appreciated by human radiologists explained images for which uncertainty was high, highlighting the significance of uncertainty quantification in AI-driven radiological analyses. | [
"['Bala McRae-Posani' 'Andrei Holodny' 'Hrithwik Shalu' 'Joseph N Stember']"
]
|
null | null | 2404.08855 | null | null | http://arxiv.org/pdf/2404.08855v1 | 2024-04-12T23:55:59Z | 2024-04-12T23:55:59Z | WROOM: An Autonomous Driving Approach for Off-Road Navigation | Off-road navigation is a challenging problem both at the planning level to get a smooth trajectory and at the control level to avoid flipping over, hitting obstacles, or getting stuck at a rough patch. There have been several recent works using classical approaches involving depth map prediction followed by smooth trajectory planning and using a controller to track it. We design an end-to-end reinforcement learning (RL) system for an autonomous vehicle in off-road environments using a custom-designed simulator in the Unity game engine. We warm-start the agent by imitating a rule-based controller and utilize Proximal Policy Optimization (PPO) to improve the policy based on a reward that incorporates Control Barrier Functions (CBF), facilitating the agent's ability to generalize effectively to real-world scenarios. The training involves agents concurrently undergoing domain-randomized trials in various environments. We also propose a novel simulation environment to replicate off-road driving scenarios and deploy our proposed approach on a real buggy RC car. Videos and additional results: https://sites.google.com/view/wroom-utd/home | [
"['Dvij Kalaria' 'Shreya Sharma' 'Sarthak Bhagat' 'Haoru Xue'\n 'John M. Dolan']"
]
|
null | null | 2404.08856 | null | null | http://arxiv.org/pdf/2404.08856v1 | 2024-04-13T00:02:36Z | 2024-04-13T00:02:36Z | On Speculative Decoding for Multimodal Large Language Models | Inference with Multimodal Large Language Models (MLLMs) is slow due to their large-language-model backbone, which suffers from a memory bandwidth bottleneck and generates tokens auto-regressively. In this paper, we explore the application of speculative decoding to enhance the inference efficiency of MLLMs, specifically the LLaVA 7B model. We show that a language-only model can serve as a good draft model for speculative decoding with LLaVA 7B, bypassing the need for image tokens and their associated processing components from the draft model. Our experiments across three different tasks show that speculative decoding can achieve a memory-bound speedup of up to 2.37$\times$ using a 115M parameter language model that we trained from scratch. Additionally, we introduce a compact LLaVA draft model incorporating an image adapter, which shows marginal performance gains in image captioning while maintaining comparable results in other tasks. | [
"['Mukul Gagrani' 'Raghavv Goel' 'Wonseok Jeon' 'Junyoung Park' 'Mingu Lee'\n 'Christopher Lott']"
]
|
null | null | 2404.08860 | null | null | http://arxiv.org/pdf/2404.08860v3 | 2024-07-09T01:14:21Z | 2024-04-13T00:20:09Z | Enhancing Mobile "How-to" Queries with Automated Search Results
Verification and Reranking | Many people use search engines to find online guidance to solve computer or mobile device problems. Users frequently encounter challenges in identifying effective solutions from search results, often wasting time trying ineffective solutions that seem relevant yet fail to solve real problems. This paper introduces a novel approach to improving the accuracy and relevance of online technical support search results through automated search results verification and reranking. Taking "How-to" queries specific to on-device execution as a starting point, we developed the first solution that allows an AI agent to interpret and execute step-by-step instructions in the search results in a controlled Android environment. We further integrated the agent's findings into a reranking mechanism that orders search results based on the success indicators of the tested solutions. The paper details the architecture of our solution and a comprehensive evaluation of the system through a series of tests across various application domains. The results demonstrate a significant improvement in the quality and reliability of the top-ranked results. Our findings suggest a paradigm shift in how search engine ranking for online technical support help can be optimized, offering a scalable and automated solution to the pervasive challenge of finding effective and reliable online help. | [
"['Lei Ding' 'Jeshwanth Bheemanpally' 'Yi Zhang']"
]
|
null | null | 2404.08865 | null | null | http://arxiv.org/pdf/2404.08865v1 | 2024-04-13T01:13:59Z | 2024-04-13T01:13:59Z | LLM In-Context Recall is Prompt Dependent | The proliferation of Large Language Models (LLMs) highlights the critical importance of conducting thorough evaluations to discern their comparative advantages, limitations, and optimal use cases. Particularly important is assessing their capacity to accurately retrieve information included in a given prompt. A model's ability to do this significantly influences how effectively it can utilize contextual details, thus impacting its practical efficacy and dependability in real-world applications. Our research analyzes the in-context recall performance of various LLMs using the needle-in-a-haystack method. In this approach, a factoid (the "needle") is embedded within a block of filler text (the "haystack"), which the model is asked to retrieve. We assess the recall performance of each model across various haystack lengths and with varying needle placements to identify performance patterns. This study demonstrates that an LLM's recall capability is not only contingent upon the prompt's content but also may be compromised by biases in its training data. Conversely, adjustments to model architecture, training strategy, or fine-tuning can improve performance. Our analysis provides insight into LLM behavior, offering direction for the development of more effective applications of LLMs. | [
"['Daniel Machlab' 'Rick Battle']"
]
|
null | null | 2404.08866 | null | null | http://arxiv.org/pdf/2404.08866v1 | 2024-04-13T01:16:45Z | 2024-04-13T01:16:45Z | An evaluation framework for synthetic data generation models | Nowadays, the use of synthetic data has gained popularity as a cost-efficient strategy for enhancing data augmentation and improving machine learning models' performance, as well as addressing concerns related to sensitive data privacy. Therefore, ensuring the quality of generated synthetic data, in terms of accurate representation of real data, is of primary importance. In this work, we present a new framework for evaluating synthetic data generation models' ability to develop high-quality synthetic data. The proposed approach is able to provide strong statistical and theoretical information about the evaluation framework and the compared models' ranking. Two use case scenarios demonstrate the applicability of the proposed framework for evaluating the ability of synthetic data generation models to generate high-quality data. The implementation code can be found at https://github.com/novelcore/synthetic_data_evaluation_framework. | [
"['Ioannis E. Livieris' 'Nikos Alimpertis' 'George Domalis'\n 'Dimitris Tsakalidis']"
]
|
null | null | 2404.08877 | null | null | http://arxiv.org/pdf/2404.08877v1 | 2024-04-13T02:36:40Z | 2024-04-13T02:36:40Z | Aligning LLMs for FL-free Program Repair | Large language models (LLMs) have achieved decent results on automated program repair (APR). However, the next token prediction training objective of decoder-only LLMs (e.g., GPT-4) is misaligned with the masked span prediction objective of current infilling-style methods, which impedes LLMs from fully leveraging pre-trained knowledge for program repair. In addition, while some LLMs are capable of locating and repairing bugs end-to-end when using the related artifacts (e.g., test cases) as input, existing methods regard them as separate tasks and ask LLMs to generate patches at fixed locations. This restriction hinders LLMs from exploring potential patches beyond the given locations. In this paper, we investigate a new approach to adapt LLMs to program repair. Our core insight is that LLM's APR capability can be greatly improved by simply aligning the output to their training objective and allowing them to refine the whole program without first performing fault localization. Based on this insight, we designed D4C, a straightforward prompting framework for APR. D4C can repair 180 bugs correctly in Defects4J, with each patch being sampled only 10 times. This surpasses the SOTA APR methods with perfect fault localization by 10% and reduces the patch sampling number by 90%. Our findings reveal that (1) objective alignment is crucial for fully exploiting LLM's pre-trained capability, and (2) replacing the traditional localize-then-repair workflow with direct debugging is more effective for LLM-based APR methods. Thus, we believe this paper introduces a new mindset for harnessing LLMs in APR. | [
"['Junjielong Xu' 'Ying Fu' 'Shin Hwei Tan' 'Pinjia He']"
]
|
null | null | 2404.08878 | null | null | http://arxiv.org/pdf/2404.08878v1 | 2024-04-13T02:39:36Z | 2024-04-13T02:39:36Z | Generative AI Agent for Next-Generation MIMO Design: Fundamentals,
Challenges, and Vision | Next-generation multiple input multiple output (MIMO) is expected to be intelligent and scalable. In this paper, we study generative artificial intelligence (AI) agent-enabled next-generation MIMO design. Firstly, we provide an overview of the development, fundamentals, and challenges of the next-generation MIMO. Then, we propose the concept of the generative AI agent, which is capable of generating tailored and specialized contents with the aid of large language model (LLM) and retrieval augmented generation (RAG). Next, we comprehensively discuss the features and advantages of the generative AI agent framework. More importantly, to tackle existing challenges of next-generation MIMO, we discuss generative AI agent-enabled next-generation MIMO design, from the perspective of performance analysis, signal processing, and resource allocation. Furthermore, we present two compelling case studies that demonstrate the effectiveness of leveraging the generative AI agent for performance analysis in complex configuration scenarios. These examples highlight how the integration of generative AI agents can significantly enhance the analysis and design of next-generation MIMO systems. Finally, we discuss important potential future research directions. | [
"['Zhe Wang' 'Jiayi Zhang' 'Hongyang Du' 'Ruichen Zhang' 'Dusit Niyato'\n 'Bo Ai' 'Khaled B. Letaief']"
]
|
null | null | 2404.08885 | null | null | http://arxiv.org/pdf/2404.08885v1 | 2024-04-13T03:11:07Z | 2024-04-13T03:11:07Z | Is Next Token Prediction Sufficient for GPT? Exploration on Code Logic
Comprehension | Large language models (LLMs) have experienced exponential growth, and they demonstrate remarkable performance across various tasks. Notwithstanding, contemporary research primarily centers on enhancing the size and quality of pretraining data, still utilizing the next token prediction task on autoregressive transformer model structure. The efficacy of this task in truly facilitating the model's comprehension of code logic remains questionable; we speculate that it still interprets code as mere text, while humans emphasize the underlying logical knowledge. In order to prove it, we introduce a new task, "Logically Equivalent Code Selection," which necessitates the selection of logically equivalent code from a candidate set, given a query code. Our experimental findings indicate that current LLMs underperform in this task, since they understand code as an unordered bag of keywords. To ameliorate their performance, we propose an advanced pretraining task, "Next Token Prediction+". This task aims to modify the sentence embedding distribution of the LLM without sacrificing its generative capabilities. Our experimental results reveal that following this pretraining, both Code Llama and StarCoder, the prevalent code domain pretraining models, display significant improvements on our logically equivalent code selection task and the code completion task. | [
"['Mengnan Qi' 'Yufan Huang' 'Yongqiang Yao' 'Maoquan Wang' 'Bin Gu'\n 'Neel Sundaresan']"
]
|
null | null | 2404.08886 | null | null | http://arxiv.org/pdf/2404.08886v1 | 2024-04-13T03:15:56Z | 2024-04-13T03:15:56Z | EIVEN: Efficient Implicit Attribute Value Extraction using Multimodal
LLM | In e-commerce, accurately extracting product attribute values from multimodal data is crucial for improving user experience and operational efficiency of retailers. However, previous approaches to multimodal attribute value extraction often struggle with implicit attribute values embedded in images or text, rely heavily on extensive labeled data, and can easily confuse similar attribute values. To address these issues, we introduce EIVEN, a data- and parameter-efficient generative framework that pioneers the use of multimodal LLM for implicit attribute value extraction. EIVEN leverages the rich inherent knowledge of a pre-trained LLM and vision encoder to reduce reliance on labeled data. We also introduce a novel Learning-by-Comparison technique to reduce model confusion by enforcing attribute value comparison and difference identification. Additionally, we construct initial open-source datasets for multimodal implicit attribute value extraction. Our extensive experiments reveal that EIVEN significantly outperforms existing methods in extracting implicit attribute values while requiring less labeled data. | [
"['Henry Peng Zou' 'Gavin Heqing Yu' 'Ziwei Fan' 'Dan Bu' 'Han Liu'\n 'Peng Dai' 'Dongmei Jia' 'Cornelia Caragea']"
]
|
null | null | 2404.08887 | null | null | http://arxiv.org/abs/2404.08887v1 | 2024-04-13T03:17:33Z | 2024-04-13T03:17:33Z | Countering Mainstream Bias via End-to-End Adaptive Local Learning | Collaborative filtering (CF) based recommendations suffer from mainstream bias -- where mainstream users are favored over niche users, leading to poor recommendation quality for many long-tail users. In this paper, we identify two root causes of this mainstream bias: (i) discrepancy modeling, whereby CF algorithms focus on modeling mainstream users while neglecting niche users with unique preferences; and (ii) unsynchronized learning, where niche users require more training epochs than mainstream users to reach peak performance. Targeting these causes, we propose a novel end-To-end Adaptive Local Learning (TALL) framework to provide high-quality recommendations to both mainstream and niche users. TALL uses a loss-driven Mixture-of-Experts module to adaptively ensemble experts to provide customized local models for different users. Further, it contains an adaptive weight module to synchronize the learning paces of different users by dynamically adjusting weights in the loss. Extensive experiments demonstrate the state-of-the-art performance of the proposed model. Code and data are provided at https://github.com/JP-25/end-To-end-Adaptive-Local-Leanring-TALL- | [
"['Jinhao Pan' 'Ziwei Zhu' 'Jianling Wang' 'Allen Lin' 'James Caverlee']"
]
|
null | null | 2404.08888 | null | null | http://arxiv.org/pdf/2404.08888v1 | 2024-04-13T03:23:15Z | 2024-04-13T03:23:15Z | Towards Enhancing Health Coaching Dialogue in Low-Resource Settings | Health coaching helps patients identify and accomplish lifestyle-related goals, effectively improving the control of chronic diseases and mitigating mental health conditions. However, health coaching is cost-prohibitive due to its highly personalized and labor-intensive nature. In this paper, we propose to build a dialogue system that converses with the patients, helps them create and accomplish specific goals, and can address their emotions with empathy. However, building such a system is challenging since real-world health coaching datasets are limited and empathy is subtle. Thus, we propose a modularized health coaching dialogue system with simplified NLU and NLG frameworks combined with mechanism-conditioned empathetic response generation. Through automatic and human evaluation, we show that our system generates more empathetic, fluent, and coherent responses and outperforms the state-of-the-art in NLU tasks while requiring less annotation. We view our approach as a key step towards building automated and more accessible health coaching systems. | [
"['Yue Zhou' 'Barbara Di Eugenio' 'Brian Ziebart' 'Lisa Sharp' 'Bing Liu'\n 'Ben Gerber' 'Nikolaos Agadakos' 'Shweta Yadav']"
]
|
null | null | 2404.08892 | null | null | http://arxiv.org/pdf/2404.08892v1 | 2024-04-13T03:46:35Z | 2024-04-13T03:46:35Z | ChangeAnywhere: Sample Generation for Remote Sensing Change Detection
via Semantic Latent Diffusion Model | Remote sensing change detection (CD) is a pivotal technique that pinpoints changes on a global scale based on multi-temporal images. With the recent expansion of deep learning, supervised deep learning-based CD models have shown satisfactory performance. However, CD sample labeling is very time-consuming as it is densely labeled and requires expert knowledge. To alleviate this problem, we introduce ChangeAnywhere, a novel CD sample generation method using the semantic latent diffusion model and single-temporal images. Specifically, ChangeAnywhere leverages the relative ease of acquiring large single-temporal semantic datasets to generate large-scale, diverse, and semantically annotated bi-temporal CD datasets. ChangeAnywhere captures the two essentials of CD samples, i.e., change implies semantically different, and non-change implies reasonable change under the same semantic constraints. We generated ChangeAnywhere-100K, the largest synthetic CD dataset with 100,000 pairs of CD samples based on the proposed method. The ChangeAnywhere-100K significantly improved both zero-shot and few-shot performance on two CD benchmark datasets for various deep learning-based CD models, as demonstrated by transfer experiments. This paper delineates the enormous potential of ChangeAnywhere for CD sample generation and demonstrates the subsequent enhancement of model performance. Therefore, ChangeAnywhere offers a potent tool for remote sensing CD. All codes and pre-trained models will be available at https://github.com/tangkai-RS/ChangeAnywhere. | [
"['Kai Tang' 'Jin Chen']"
]
|
null | null | 2404.08893 | null | null | http://arxiv.org/pdf/2404.08893v1 | 2024-04-13T03:57:14Z | 2024-04-13T03:57:14Z | Early detection of disease outbreaks and non-outbreaks using incidence
data | Forecasting the occurrence and absence of novel disease outbreaks is essential for disease management. Here, we develop a general model, with no real-world training data, that accurately forecasts outbreaks and non-outbreaks. We propose a novel framework, using a feature-based time series classification method to forecast outbreaks and non-outbreaks. We tested our methods on synthetic data from a Susceptible-Infected-Recovered model for slowly changing, noisy disease dynamics. Outbreak sequences give a transcritical bifurcation within a specified future time window, whereas non-outbreak (null bifurcation) sequences do not. We identified incipient differences in time series of infectives leading to future outbreaks and non-outbreaks. These differences are reflected in 22 statistical features and 5 early warning signal indicators. Classifier performance, given by the area under the receiver-operating curve, ranged from 0.99 for large expanding windows of training data to 0.7 for small rolling windows. Real-world performances of classifiers were tested on two empirical datasets, COVID-19 data from Singapore and SARS data from Hong Kong, with two classifiers exhibiting high accuracy. In summary, we showed that there are statistical features that distinguish outbreak and non-outbreak sequences long before outbreaks occur. We could detect these differences in synthetic and real-world data sets, well before potential outbreaks occur. | [
"['Shan Gao' 'Amit K. Chakraborty' 'Russell Greiner' 'Mark A. Lewis'\n 'Hao Wang']"
]
|
null | null | 2404.08894 | null | null | http://arxiv.org/pdf/2404.08894v1 | 2024-04-13T04:01:35Z | 2024-04-13T04:01:35Z | HEAT: Head-level Parameter Efficient Adaptation of Vision Transformers
with Taylor-expansion Importance Scores | Prior computer vision research extensively explores adapting pre-trained vision transformers (ViT) to downstream tasks. However, the substantial number of parameters requiring adaptation has led to a focus on Parameter Efficient Transfer Learning (PETL) as an approach to efficiently adapt large pre-trained models by training only a subset of parameters, achieving both parameter and storage efficiency. Although the significantly reduced parameters have shown promising performance under transfer learning scenarios, the structural redundancy inherent in the model still leaves room for improvement, which warrants further investigation. In this paper, we propose Head-level Efficient Adaptation with Taylor-expansion importance score (HEAT): a simple method that efficiently fine-tunes ViTs at the head level. In particular, the first-order Taylor expansion is employed to calculate each head's importance score, termed Taylor-expansion Importance Score (TIS), indicating its contribution to specific tasks. Additionally, three strategies for calculating TIS have been employed to maximize the effectiveness of TIS. These strategies calculate TIS from different perspectives, reflecting varying contributions of parameters. Besides ViT, HEAT has also been applied to hierarchical transformers such as Swin Transformer, demonstrating its versatility across different transformer architectures. Through extensive experiments, HEAT has demonstrated superior performance over state-of-the-art PETL methods on the VTAB-1K benchmark. | [
"['Yibo Zhong' 'Yao Zhou']"
]
|
null | null | 2404.08901 | null | null | http://arxiv.org/pdf/2404.08901v1 | 2024-04-13T05:01:54Z | 2024-04-13T05:01:54Z | Bullion: A Column Store for Machine Learning | The past two decades have witnessed columnar storage revolutionizing data warehousing and analytics. However, the rapid growth of machine learning poses new challenges to this domain. This paper presents Bullion, a columnar storage system tailored for machine learning workloads. Bullion addresses the complexities of data compliance, optimizes the encoding of long sequence sparse features, efficiently manages wide-table projections, and introduces feature quantization in storage. By aligning with the evolving requirements of ML applications, Bullion extends columnar storage to various scenarios, from advertising and recommendation systems to the expanding realm of Generative AI. Preliminary experimental results and theoretical analysis demonstrate Bullion's superior performance in handling the unique demands of machine learning workloads compared to existing columnar storage solutions. Bullion significantly reduces I/O costs for deletion compliance, achieves substantial storage savings with its optimized encoding scheme for sparse features, and drastically improves metadata parsing speed for wide-table projections. These advancements position Bullion as a critical component in the future of machine learning infrastructure, enabling organizations to efficiently manage and process the massive volumes of data required for training and inference in modern AI applications. | [
"['Gang Liao' 'Ye Liu' 'Jianjun Chen' 'Daniel J. Abadi']"
]
|
null | null | 2404.08903 | null | null | http://arxiv.org/pdf/2404.08903v1 | 2024-04-13T05:15:46Z | 2024-04-13T05:15:46Z | Enhancing path-integral approximation for non-linear diffusion with
neural network | We enhance the existing solution for pricing fixed income instruments within the Black-Karasinski model structure with a neural network at various parameterisation points, demonstrating that the method is able to achieve superior outcomes for multiple calibrations across extended projection horizons. | [
"['Anna Knezevic']"
]
|
null | null | 2404.08913 | null | null | http://arxiv.org/pdf/2404.08913v1 | 2024-04-13T06:57:44Z | 2024-04-13T06:57:44Z | On the best approximation by finite Gaussian mixtures | We consider the problem of approximating a general Gaussian location mixture by finite mixtures. The minimum order of finite mixtures that achieve a prescribed accuracy (measured by various $f$-divergences) is determined within constant factors for the family of mixing distributions with compact support or appropriate assumptions on the tail probability, including subgaussian and subexponential. While the upper bound is achieved using the technique of local moment matching, the lower bound is established by relating the best approximation error to the low-rank approximation of certain trigonometric moment matrices, followed by a refined spectral analysis of their minimum eigenvalue. In the case of Gaussian mixing distributions, this result corrects a previous lower bound in [Allerton Conference 48 (2010) 620-628]. | [
"['Yun Ma' 'Yihong Wu' 'Pengkun Yang']"
]
|
null | null | 2404.08915 | null | null | http://arxiv.org/pdf/2404.08915v2 | 2024-05-25T14:31:55Z | 2024-04-13T07:27:06Z | PM2: A New Prompting Multi-modal Model Paradigm for Few-shot Medical
Image Classification | Few-shot learning has been successfully applied to medical image classification as only very few medical examples are available for training. Due to the challenging problem of limited number of annotated medical images, image representations should not be solely derived from a single image modality which is insufficient for characterizing concept classes. In this paper, we propose a new prompting multi-modal model paradigm on medical image classification based on multi-modal foundation models, called PM2. Besides the image modality, PM2 introduces another supplementary text input, known as prompt, to further describe corresponding image or concept classes and facilitate few-shot learning across diverse modalities. To better explore the potential of prompt engineering, we empirically investigate five distinct prompt schemes under the new paradigm. Furthermore, linear probing in multi-modal models acts as a linear classification head taking as input only the class token, which completely ignores the merits of the rich statistics inherent in high-level visual tokens. Thus, we alternatively perform a linear classification on the feature distribution of visual tokens and the class token simultaneously. To effectively mine such rich statistics, a global covariance pooling with efficient matrix power normalization is used to aggregate visual tokens. Then we study and combine two classification heads. One is shared by the class token of the image from the vision encoder and the prompt representation encoded by the text encoder. The other performs classification on the feature distribution of visual tokens from the vision encoder. Extensive experiments on three medical datasets show that our PM2 significantly outperforms counterparts regardless of prompt schemes and achieves state-of-the-art performance. | [
"['Zhenwei Wang' 'Qiule Sun' 'Bingbing Zhang' 'Pengfei Wang'\n 'Jianxin Zhang' 'Qiang Zhang']"
]
|
null | null | 2404.08916 | null | null | http://arxiv.org/pdf/2404.08916v1 | 2024-04-13T07:30:16Z | 2024-04-13T07:30:16Z | Meply: A Large-scale Dataset and Baseline Evaluations for Metastatic
Perirectal Lymph Node Detection and Segmentation | Accurate segmentation of metastatic lymph nodes in rectal cancer is crucial for the staging and treatment of rectal cancer. However, existing segmentation approaches face challenges due to the absence of pixel-level annotated datasets tailored for lymph nodes around the rectum. Additionally, metastatic lymph nodes are characterized by their relatively small size, irregular shapes, and lower contrast compared to the background, further complicating the segmentation task. To address these challenges, we present the first large-scale perirectal metastatic lymph node CT image dataset called Meply, which encompasses pixel-level annotations of 269 patients diagnosed with rectal cancer. Furthermore, we introduce a novel lymph-node segmentation model named CoSAM. The CoSAM utilizes sequence-based detection to guide the segmentation of metastatic lymph nodes in rectal cancer, contributing to improved localization performance for the segmentation model. It comprises three key components: sequence-based detection module, segmentation module, and collaborative convergence unit. To evaluate the effectiveness of CoSAM, we systematically compare its performance with several popular segmentation methods using the Meply dataset. Our code and dataset will be publicly available at: https://github.com/kanydao/CoSAM. | [
"['Weidong Guo' 'Hantao Zhang' 'Shouhong Wan' 'Bingbing Zou' 'Wanqin Wang'\n 'Chenyang Qiu' 'Jun Li' 'Peiquan Jin']"
]
|
null | null | 2404.08931 | null | null | http://arxiv.org/pdf/2404.08931v1 | 2024-04-13T08:49:17Z | 2024-04-13T08:49:17Z | Label-free Anomaly Detection in Aerial Agricultural Images with Masked
Image Modeling | Detecting various types of stresses (nutritional, water, nitrogen, etc.) in agricultural fields is critical for farmers to ensure maximum productivity. However, stresses show up in different shapes and sizes across different crop types and varieties. Hence, this is posed as an anomaly detection task in agricultural images. Accurate anomaly detection in agricultural UAV images is vital for early identification of field irregularities. Traditional supervised learning faces challenges in adapting to diverse anomalies, necessitating extensive annotated data. In this work, we overcome this limitation with self-supervised learning using a masked image modeling approach. Masked Autoencoders (MAE) extract meaningful normal features from unlabeled image samples which produces high reconstruction error for the abnormal pixels during reconstruction. To remove the need of using only "normal" data while training, we use an anomaly suppression loss mechanism that effectively minimizes the reconstruction of anomalous pixels and allows the model to learn anomalous areas without explicitly separating "normal" images for training. Evaluation on the Agriculture-Vision data challenge shows a mIOU score improvement in comparison to prior state of the art in unsupervised and self-supervised methods. A single model generalizes across all the anomaly categories in the Agri-Vision Challenge Dataset. | [
"['Sambal Shikhar' 'Anupam Sobti']"
]
|
null | null | 2404.08935 | null | null | http://arxiv.org/pdf/2404.08935v1 | 2024-04-13T09:10:05Z | 2024-04-13T09:10:05Z | Developing An Attention-Based Ensemble Learning Framework for Financial
Portfolio Optimisation | In recent years, deep or reinforcement learning approaches have been applied to optimise investment portfolios through learning the spatial and temporal information under the dynamic financial market. Yet in most cases, the existing approaches may produce biased trading signals based on the conventional price data due to a lot of market noise, which possibly fails to balance the investment returns and risks. Accordingly, a multi-agent and self-adaptive portfolio optimisation framework integrated with attention mechanisms and time series, namely the MASAAT, is proposed in this work in which multiple trading agents are created to observe and analyse the price series and directional change data that recognises the significant changes of asset prices at different levels of granularity for enhancing the signal-to-noise ratio of price series. Afterwards, by reconstructing the tokens of financial data in a sequence, the attention-based cross-sectional analysis module and temporal analysis module of each agent can effectively capture the correlations between assets and the dependencies between time points. Besides, a portfolio generator is integrated into the proposed framework to fuse the spatial-temporal information and then summarise the portfolios suggested by all trading agents to produce a new ensemble portfolio for reducing biased trading actions and balancing the overall returns and risks. The experimental results clearly demonstrate that the MASAAT framework achieves impressive enhancement when compared with many well-known portfolio optimisation approaches on three challenging data sets of DJIA, S&P 500 and CSI 300. More importantly, our proposal has potential strengths in many possible applications for future study. | [
"['Zhenglong Li' 'Vincent Tam']"
]
|
null | null | 2404.08937 | null | null | http://arxiv.org/pdf/2404.08937v1 | 2024-04-13T09:17:51Z | 2024-04-13T09:17:51Z | ChimpVLM: Ethogram-Enhanced Chimpanzee Behaviour Recognition | We show that chimpanzee behaviour understanding from camera traps can be enhanced by providing visual architectures with access to an embedding of text descriptions that detail species behaviours. In particular, we present a vision-language model which employs multi-modal decoding of visual features extracted directly from camera trap videos to process query tokens representing behaviours and output class predictions. Query tokens are initialised using a standardised ethogram of chimpanzee behaviour, rather than using random or name-based initialisations. In addition, the effect of initialising query tokens using a masked language model fine-tuned on a text corpus of known behavioural patterns is explored. We evaluate our system on the PanAf500 and PanAf20K datasets and demonstrate the performance benefits of our multi-modal decoding approach and query initialisation strategy on multi-class and multi-label recognition tasks, respectively. Results and ablations corroborate performance improvements. We achieve state-of-the-art performance over vision and vision-language models in top-1 accuracy (+6.34%) on PanAf500 and overall (+1.1%) and tail-class (+2.26%) mean average precision on PanAf20K. We share complete source code and network weights for full reproducibility of results and easy utilisation. | [
"['Otto Brookes' 'Majid Mirmehdi' 'Hjalmar Kuhl' 'Tilo Burghardt']"
]
|
null | null | 2404.08940 | null | null | http://arxiv.org/pdf/2404.08940v1 | 2024-04-13T09:33:00Z | 2024-04-13T09:33:00Z | Introducing Super RAGs in Mistral 8x7B-v1 | The relentless pursuit of enhancing Large Language Models (LLMs) has led to the advent of Super Retrieval-Augmented Generation (Super RAGs), a novel approach designed to elevate the performance of LLMs by integrating external knowledge sources with minimal structural modifications. This paper presents the integration of Super RAGs into the Mistral 8x7B v1, a state-of-the-art LLM, and examines the resultant improvements in accuracy, speed, and user satisfaction. Our methodology uses a fine-tuned instruct model setup and a cache tuning fork system, ensuring efficient and relevant data retrieval. The evaluation, conducted over several epochs, demonstrates significant enhancements across all metrics. The findings suggest that Super RAGs can effectively augment LLMs, paving the way for more sophisticated and reliable AI systems. This research contributes to the field by providing empirical evidence of the benefits of Super RAGs and offering insights into their potential applications. | [
"['Ayush Thakur' 'Raghav Gupta']"
]
|
null | null | 2404.08950 | null | null | http://arxiv.org/pdf/2404.08950v1 | 2024-04-13T10:13:07Z | 2024-04-13T10:13:07Z | Deep Reinforcement Learning based Online Scheduling Policy for Deep
Neural Network Multi-Tenant Multi-Accelerator Systems | Currently, there is a growing trend of outsourcing the execution of DNNs to cloud services. For service providers, managing multi-tenancy and ensuring high-quality service delivery, particularly in meeting stringent execution time constraints, assumes paramount importance, all while endeavoring to maintain cost-effectiveness. In this context, the utilization of heterogeneous multi-accelerator systems becomes increasingly relevant. This paper presents RELMAS, a low-overhead deep reinforcement learning algorithm designed for the online scheduling of DNNs in multi-tenant environments, taking into account the dataflow heterogeneity of accelerators and memory bandwidth contention. By doing so, service providers can employ the most efficient scheduling policy for user requests, optimizing Service-Level-Agreement (SLA) satisfaction rates and enhancing hardware utilization. The application of RELMAS to a heterogeneous multi-accelerator system composed of various instances of Simba and Eyeriss sub-accelerators resulted in up to a 173% improvement in SLA satisfaction rate compared to state-of-the-art scheduling techniques across different workload scenarios, with less than a 1.5% energy overhead. | [
"['Francesco G. Blanco' 'Enrico Russo' 'Maurizio Palesi' 'Davide Patti'\n 'Giuseppe Ascia' 'Vincenzo Catania']"
]
|
null | null | 2404.08951 | null | null | http://arxiv.org/pdf/2404.08951v1 | 2024-04-13T10:15:51Z | 2024-04-13T10:15:51Z | Constructing and Exploring Intermediate Domains in Mixed Domain
Semi-supervised Medical Image Segmentation | Both limited annotation and domain shift are prevalent challenges in medical image segmentation. Traditional semi-supervised segmentation and unsupervised domain adaptation methods address one of these issues separately. However, the coexistence of limited annotation and domain shift is quite common, which motivates us to introduce a novel and challenging scenario: Mixed Domain Semi-supervised medical image Segmentation (MiDSS). In this scenario, we handle data from multiple medical centers, with limited annotations available for a single domain and a large amount of unlabeled data from multiple domains. We found that the key to solving the problem lies in how to generate reliable pseudo labels for the unlabeled data in the presence of domain shift with labeled data. To tackle this issue, we employ Unified Copy-Paste (UCP) between images to construct intermediate domains, facilitating the knowledge transfer from the domain of labeled data to the domains of unlabeled data. To fully utilize the information within the intermediate domain, we propose a symmetric Guidance training strategy (SymGD), which additionally offers direct guidance to unlabeled data by merging pseudo labels from intermediate samples. Subsequently, we introduce a Training Process aware Random Amplitude MixUp (TP-RAM) to progressively incorporate style-transition components into intermediate samples. Compared with existing state-of-the-art approaches, our method achieves a notable 13.57% improvement in Dice score on Prostate dataset, as demonstrated on three public datasets. Our code is available at https://github.com/MQinghe/MiDSS . | [
"['Qinghe Ma' 'Jian Zhang' 'Lei Qi' 'Qian Yu' 'Yinghuan Shi' 'Yang Gao']"
]
|
null | null | 2404.08958 | null | null | http://arxiv.org/pdf/2404.08958v1 | 2024-04-13T10:46:11Z | 2024-04-13T10:46:11Z | AMU-Tuning: Effective Logit Bias for CLIP-based Few-shot Learning | Recently, pre-trained vision-language models (e.g., CLIP) have shown great potential in few-shot learning and attracted a lot of research interest. Although efforts have been made to improve few-shot ability of CLIP, key factors on the effectiveness of existing methods have not been well studied, limiting further exploration of CLIP's potential in few-shot learning. In this paper, we first introduce a unified formulation to analyze CLIP-based few-shot learning methods from a perspective of logit bias, which encourages us to learn an effective logit bias for further improving performance of CLIP-based few-shot learning methods. To this end, we disassemble three key components involved in computation of logit bias (i.e., logit features, logit predictor, and logit fusion) and empirically analyze the effect on performance of few-shot classification. Based on analysis of key components, this paper proposes a novel AMU-Tuning method to learn effective logit bias for CLIP-based few-shot classification. Specifically, our AMU-Tuning predicts logit bias by exploiting the appropriate $\underline{\textbf{A}}$uxiliary features, which are fed into an efficient feature-initialized linear classifier with $\underline{\textbf{M}}$ulti-branch training. Finally, an $\underline{\textbf{U}}$ncertainty-based fusion is developed to incorporate logit bias into CLIP for few-shot classification. The experiments are conducted on several widely used benchmarks, and the results show AMU-Tuning clearly outperforms its counterparts while achieving state-of-the-art performance of CLIP-based few-shot learning without bells and whistles. | [
"['Yuwei Tang' 'Zhenyi Lin' 'Qilong Wang' 'Pengfei Zhu' 'Qinghua Hu']"
]
|
null | null | 2404.08964 | null | null | http://arxiv.org/pdf/2404.08964v1 | 2024-04-13T11:06:49Z | 2024-04-13T11:06:49Z | Understanding Multimodal Deep Neural Networks: A Concept Selection View | The multimodal deep neural networks, represented by CLIP, have generated rich downstream applications owing to their excellent performance, thus making understanding the decision-making process of CLIP an essential research topic. Due to the complex structure and the massive pre-training data, it is often regarded as a black-box model that is too difficult to understand and interpret. Concept-based models map the black-box visual representations extracted by deep neural networks onto a set of human-understandable concepts and use the concepts to make predictions, enhancing the transparency of the decision-making process. However, these methods involve the datasets labeled with fine-grained attributes by expert knowledge, which incur high costs and introduce excessive human prior knowledge and bias. In this paper, we observe the long-tail distribution of concepts, based on which we propose a two-stage Concept Selection Model (CSM) to mine core concepts without introducing any human priors. The concept greedy rough selection algorithm is applied to extract head concepts, and then the concept mask fine selection method performs the extraction of core concepts. Experiments show that our approach achieves comparable performance to end-to-end black-box models, and human evaluation demonstrates that the concepts discovered by our method are interpretable and comprehensible for humans. | [
"['Chenming Shang' 'Hengyuan Zhang' 'Hao Wen' 'Yujiu Yang']"
]
|
null | null | 2404.08968 | null | null | http://arxiv.org/pdf/2404.08968v3 | 2024-04-23T07:13:30Z | 2024-04-13T11:13:56Z | MCPNet: An Interpretable Classifier via Multi-Level Concept Prototypes | Recent advancements in post-hoc and inherently interpretable methods have markedly enhanced the explanations of black box classifier models. These methods operate either through post-analysis or by integrating concept learning during model training. Although being effective in bridging the semantic gap between a model's latent space and human interpretation, these explanation methods only partially reveal the model's decision-making process. The outcome is typically limited to high-level semantics derived from the last feature map. We argue that the explanations lacking insights into the decision processes at low and mid-level features are neither fully faithful nor useful. Addressing this gap, we introduce the Multi-Level Concept Prototypes Classifier (MCPNet), an inherently interpretable model. MCPNet autonomously learns meaningful concept prototypes across multiple feature map levels using Centered Kernel Alignment (CKA) loss and an energy-based weighted PCA mechanism, and it does so without reliance on predefined concept labels. Further, we propose a novel classifier paradigm that learns and aligns multi-level concept prototype distributions for classification purposes via Class-aware Concept Distribution (CCD) loss. Our experiments reveal that our proposed MCPNet, while being adaptable to various model architectures, offers comprehensive multi-level explanations while maintaining classification accuracy. Additionally, its concept distribution-based classification approach shows improved generalization capabilities in few-shot classification scenarios. | [
"['Bor-Shiun Wang' 'Chien-Yi Wang' 'Wei-Chen Chiu']"
]
|
null | null | 2404.08969 | null | null | http://arxiv.org/pdf/2404.08969v1 | 2024-04-13T11:22:53Z | 2024-04-13T11:22:53Z | Concentration properties of fractional posterior in 1-bit matrix
completion | The problem of estimating a matrix based on a set of its observed entries is commonly referred to as the matrix completion problem. In this work, we specifically address the scenario of binary observations, often termed as 1-bit matrix completion. While numerous studies have explored Bayesian and frequentist methods for real-valued matrix completion, there has been a lack of theoretical exploration regarding Bayesian approaches in 1-bit matrix completion. We tackle this gap by considering a general, non-uniform sampling scheme and providing theoretical assurances on the efficacy of the fractional posterior. Our contributions include obtaining concentration results for the fractional posterior and demonstrating its effectiveness in recovering the underlying parameter matrix. We accomplish this using two distinct types of prior distributions: low-rank factorization priors and a spectral scaled Student prior, with the latter requiring fewer assumptions. Importantly, our results exhibit an adaptive nature by not mandating prior knowledge of the rank of the parameter matrix. Our findings are comparable to those found in the frequentist literature, yet demand fewer restrictive assumptions. | [
"['The Tien Mai']"
]
|
null | null | 2404.08970 | null | null | http://arxiv.org/pdf/2404.08970v1 | 2024-04-13T11:23:34Z | 2024-04-13T11:23:34Z | Fast Gradient Computation for Gromov-Wasserstein Distance | The Gromov-Wasserstein distance is a notable extension of optimal transport. In contrast to the classic Wasserstein distance, it solves a quadratic assignment problem that minimizes the pair-wise distance distortion under the transportation of distributions and thus could apply to distributions in different spaces. These properties make Gromov-Wasserstein widely applicable to many fields, such as computer graphics and machine learning. However, the computation of the Gromov-Wasserstein distance and transport plan is expensive. The well-known Entropic Gromov-Wasserstein approach has a cubic complexity since the matrix multiplication operations need to be repeated in computing the gradient of Gromov-Wasserstein loss. This becomes a key bottleneck of the method. Currently, existing methods that accelerate the computation focus on sampling and approximation, which leads to low accuracy or an incomplete transport plan. In this work, we propose a novel method to accelerate accurate gradient computation by dynamic programming techniques, reducing the complexity from cubic to quadratic. In this way, the original computational bottleneck is broken and the new entropic solution can be obtained with total quadratic time, which is almost optimal complexity. Furthermore, it can be extended to some variants easily. Extensive experiments validate the efficiency and effectiveness of our method. | [
"['Wei Zhang' 'Zihao Wang' 'Jie Fan' 'Hao Wu' 'Yong Zhang']"
]
|
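To make the bottleneck concrete, the sketch below implements the standard cubic-time gradient of the square-loss Gromov-Wasserstein cost (the decomposition of Peyré et al., 2016) that the paper accelerates. It is the baseline being replaced, not the paper's quadratic dynamic-programming method, and the symbols follow the usual `C1, C2, p, q, T` convention rather than the paper's.

```python
import numpy as np

def gw_loss_gradient(C1, C2, p, q, T):
    """Gradient direction of the square-loss GW cost at coupling T
    (up to a constant factor), via the decomposition of Peyre et al.:
    grad = const_C - 2 * C1 @ T @ C2.T, where const_C is T-independent.
    The C1 @ T @ C2.T product is the cubic-time bottleneck."""
    const_C = (C1 ** 2) @ p[:, None] @ np.ones((1, len(q))) \
            + np.ones((len(p), 1)) @ q[None, :] @ (C2 ** 2).T
    return const_C - 2.0 * C1 @ T @ C2.T

rng = np.random.default_rng(0)
n, m = 50, 60
C1 = rng.random((n, n)); C1 = (C1 + C1.T) / 2  # intra-space distance matrices
C2 = rng.random((m, m)); C2 = (C2 + C2.T) / 2
p, q = np.full(n, 1 / n), np.full(m, 1 / m)    # marginal weights
T = np.outer(p, q)                             # initial coupling
print(gw_loss_gradient(C1, C2, p, q, T).shape) # (50, 60), fed to Sinkhorn steps
```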
null | null | 2404.08973 | null | null | http://arxiv.org/pdf/2404.08973v1 | 2024-04-13T11:40:05Z | 2024-04-13T11:40:05Z | PraFFL: A Preference-Aware Scheme in Fair Federated Learning | Fairness in federated learning has emerged as a critical concern, aiming to develop an unbiased model for any particular group (e.g., male or female) of sensitive features. However, there is a trade-off between model performance and fairness, i.e., improving fairness will decrease model performance. Existing approaches have characterized such a trade-off by introducing hyperparameters to quantify clients' preferences for fairness and model performance. Nevertheless, these methods are limited to scenarios where each client has only a single pre-defined preference. In practical systems, each client may simultaneously have multiple preferences for the model performance and fairness. The key challenge is to design a method that allows the model to adapt to the diverse preferences of each client in real time. To this end, we propose a Preference-aware scheme in the Fair Federated Learning paradigm (called PraFFL). PraFFL can adaptively adjust the model based on each client's preferences to meet their needs. We theoretically prove that PraFFL can provide the optimal model for each client's arbitrary preferences. Experimental results show that our proposed PraFFL outperforms five existing fair federated learning algorithms in terms of the model's capability of adapting to clients' different preferences. | [
"['Rongguang Ye' 'Ming Tang']"
]
|
null | null | 2404.08977 | null | null | http://arxiv.org/pdf/2404.08977v2 | 2024-04-18T06:54:55Z | 2024-04-13T11:58:28Z | RoNID: New Intent Discovery with Generated-Reliable Labels and
Cluster-friendly Representations | New Intent Discovery (NID) strives to identify known and reasonably deduce novel intent groups in the open-world scenario. However, current methods face issues with inaccurate pseudo-labels and poor representation learning, creating a negative feedback loop that degrades overall model performance, including accuracy and the adjusted rand index. To address the aforementioned challenges, we propose a Robust New Intent Discovery (RoNID) framework optimized by an EM-style method, which focuses on constructing reliable pseudo-labels and obtaining cluster-friendly discriminative representations. RoNID comprises two main modules: a reliable pseudo-label generation module and a cluster-friendly representation learning module. Specifically, the pseudo-label generation module assigns reliable synthetic labels by solving an optimal transport problem in the E-step, which effectively provides high-quality supervised signals for the input of the cluster-friendly representation learning module. To learn cluster-friendly representations with strong intra-cluster compactness and large inter-cluster separation, the representation learning module combines intra-cluster and inter-cluster contrastive learning in the M-step to feed more discriminative features into the generation module. RoNID can be performed iteratively to ultimately yield a robust model with reliable pseudo-labels and cluster-friendly representations. Experimental results on multiple benchmarks demonstrate that our method brings substantial improvements over previous state-of-the-art methods by a large margin of +1~+4 points. | [
"['Shun Zhang' 'Chaoran Yan' 'Jian Yang' 'Changyu Ren' 'Jiaqi Bai'\n 'Tongliang Li' 'Zhoujun Li']"
]
|
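A common way to realize the E-step described above (pseudo-label assignment by optimal transport, as in SeLa/SwAV-style methods) is Sinkhorn-Knopp normalization of the model's logits under a balanced cluster-size constraint. The sketch below shows that generic recipe; RoNID's exact cost matrix and marginal constraints may differ.

```python
import numpy as np

def sinkhorn_pseudo_labels(logits, n_iters=50, eps=0.05):
    """Balanced soft pseudo-label assignment via Sinkhorn-Knopp.

    logits: (n, K) model scores. Iteratively rescales exp(logits / eps)
    so rows sum to 1/n (one unit of mass per sample) and columns to 1/K
    (equal-sized clusters), i.e. an entropic optimal transport solution.
    """
    Q = np.exp((logits - logits.max()) / eps)   # shift for numerical stability
    Q /= Q.sum()
    n, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True) * n   # row marginals -> 1/n
        Q /= Q.sum(axis=0, keepdims=True) * K   # column marginals -> 1/K
    return Q * n                                # rows now sum to 1

rng = np.random.default_rng(0)
logits = rng.normal(size=(256, 10))
pseudo = sinkhorn_pseudo_labels(logits).argmax(axis=1)
print(np.bincount(pseudo, minlength=10))        # roughly balanced counts
```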
null | null | 2404.08978 | null | null | http://arxiv.org/pdf/2404.08978v2 | 2024-04-17T10:59:59Z | 2024-04-13T12:02:19Z | Incremental Residual Concept Bottleneck Models | Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts and use the concepts to make predictions, enhancing the transparency of the decision-making process. Multimodal pre-trained models can match visual representations with textual concept embeddings, allowing for obtaining the interpretable concept bottleneck without expert concept annotations. Recent research has focused on concept bank establishment and high-quality concept selection. However, it is challenging to construct a comprehensive concept bank through humans or large language models, which severely limits the performance of CBMs. In this work, we propose the Incremental Residual Concept Bottleneck Model (Res-CBM) to address the challenge of concept completeness. Specifically, the residual concept bottleneck model employs a set of optimizable vectors to complete missing concepts; the incremental concept discovery module then converts the complemented vectors with unclear meanings into potential concepts in the candidate concept bank. Our approach can be applied to any user-defined concept bank, as a post-hoc processing method to enhance the performance of any CBM. Furthermore, to measure the descriptive efficiency of CBMs, the Concept Utilization Efficiency (CUE) metric is proposed. Experiments show that Res-CBM outperforms the current state-of-the-art methods in terms of both accuracy and efficiency and achieves comparable performance to black-box models across multiple datasets. | [
"['Chenming Shang' 'Shiji Zhou' 'Hengyuan Zhang' 'Xinzhe Ni' 'Yujiu Yang'\n 'Yuwang Wang']"
]
|
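The sketch below is one minimal reading of the residual idea above: score image features against a fixed concept bank plus a handful of learnable residual vectors that stand in for missing concepts. The class name, dimensions, and the scoring-by-dot-product choice are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class ResidualConceptBottleneck(nn.Module):
    """Concept bottleneck with learnable residual concept vectors (sketch).

    Image features are scored against a fixed bank of concept embeddings
    plus a set of optimizable vectors standing in for missing concepts;
    the classifier sees the concatenated concept scores."""
    def __init__(self, concept_bank, n_residual, n_classes):
        super().__init__()
        self.register_buffer("bank", concept_bank)          # (n_concepts, d)
        d = concept_bank.shape[1]
        self.residual = nn.Parameter(torch.randn(n_residual, d) * 0.02)
        self.classifier = nn.Linear(concept_bank.shape[0] + n_residual,
                                    n_classes)

    def forward(self, feats):                               # feats: (B, d)
        scores = feats @ torch.cat([self.bank, self.residual]).T
        return self.classifier(scores)                      # (B, n_classes)

bank = torch.randn(50, 512)       # e.g., text-embedded concept descriptions
model = ResidualConceptBottleneck(bank, n_residual=8, n_classes=10)
print(model(torch.randn(4, 512)).shape)  # torch.Size([4, 10])
```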
null | null | 2404.08979 | null | null | http://arxiv.org/pdf/2404.08979v1 | 2024-04-13T12:06:29Z | 2024-04-13T12:06:29Z | BG-YOLO: A Bidirectional-Guided Method for Underwater Object Detection | Degraded underwater images decrease the accuracy of underwater object detection. However, existing methods for underwater image enhancement mainly focus on improving visual-quality indicators, which may not benefit underwater object detection tasks and may even lead to serious degradation in detection performance. To alleviate this problem, we propose a bidirectional-guided method for underwater object detection, referred to as BG-YOLO. In the proposed method, the network is organized into an enhancement branch and a detection branch arranged in parallel. The enhancement branch consists of a cascade of an image enhancement subnet and an object detection subnet, while the detection branch consists only of a detection subnet. A feature guided module connects the shallow convolution layers of the two branches. When training the enhancement branch, the object detection subnet in the enhancement branch guides the image enhancement subnet to be optimized in the direction most conducive to the detection task. The shallow feature map of the trained enhancement branch is output to the feature guided module, constraining the optimization of the detection branch through a consistency loss and prompting the detection branch to learn more detailed information about the objects, hence refining detection performance. During detection, only the detection branch is retained, so no additional computation cost is introduced. Extensive experiments demonstrate that the proposed method shows significant improvement in the performance of the detector in severely degraded underwater scenes while maintaining a remarkable detection speed. | [
"['Jian Zhang' 'Ruiteng Zhang' 'Xinyue Yan' 'Xiting Zhuang' 'Ruicheng Cao']"
]
|
null | null | 2404.08980 | null | null | http://arxiv.org/pdf/2404.08980v1 | 2024-04-13T12:07:20Z | 2024-04-13T12:07:20Z | Stability and Generalization in Free Adversarial Training | While adversarial training methods have resulted in significant improvements in the deep neural nets' robustness against norm-bounded adversarial perturbations, their generalization performance from training samples to test data has been shown to be considerably worse than standard empirical risk minimization methods. Several recent studies seek to connect the generalization behavior of adversarially trained classifiers to various gradient-based min-max optimization algorithms used for their training. In this work, we study the generalization performance of adversarial training methods using the algorithmic stability framework. Specifically, our goal is to compare the generalization performance of the vanilla adversarial training scheme fully optimizing the perturbations at every iteration vs. the free adversarial training simultaneously optimizing the norm-bounded perturbations and classifier parameters. Our proven generalization bounds indicate that the free adversarial training method could enjoy a lower generalization gap between training and test samples due to the simultaneous nature of its min-max optimization algorithm. We perform several numerical experiments to evaluate the generalization performance of vanilla, fast, and free adversarial training methods. Our empirical findings also show the improved generalization performance of the free adversarial training method and further demonstrate that the better generalization result could translate to greater robustness against black-box attack schemes. The code is available at https://github.com/Xiwei-Cheng/Stability_FreeAT. | [
"['Xiwei Cheng' 'Kexin Fu' 'Farzan Farnia']"
]
|
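The "free" scheme that the record above compares against vanilla adversarial training (Shafahi et al., 2019) updates the classifier parameters and the norm-bounded perturbation simultaneously from a single backward pass, replaying each minibatch several times. Below is a minimal PyTorch sketch of that simultaneous min-max update; the toy model and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

def free_adv_train(model, loader, epochs=1, m=4, eps=8 / 255, lr=0.1):
    """Free adversarial training: one backward pass per step yields both
    the weight gradient and the input gradient, so the perturbation delta
    and the model are updated simultaneously while the minibatch is
    replayed m times (Shafahi et al., 2019)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    delta = None
    for _ in range(epochs):
        for x, y in loader:
            if delta is None or delta.shape != x.shape:
                delta = torch.zeros_like(x)
            for _ in range(m):                       # minibatch replay
                adv = (x + delta).requires_grad_(True)
                loss = loss_fn(model(adv), y)
                opt.zero_grad()
                loss.backward()                      # grads for weights AND input
                opt.step()                           # classifier update
                # reuse the input gradient to ascend on the perturbation
                delta = (delta + eps * adv.grad.sign()).clamp(-eps, eps).detach()

# toy usage on random data
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
data = [(torch.randn(16, 3, 8, 8), torch.randint(0, 10, (16,)))]
free_adv_train(model, data)
```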
null | null | 2404.08981 | null | null | http://arxiv.org/pdf/2404.08981v1 | 2024-04-13T12:09:37Z | 2024-04-13T12:09:37Z | Fast Fishing: Approximating BAIT for Efficient and Scalable Deep Active
Image Classification | Deep active learning (AL) seeks to minimize the annotation costs for training deep neural networks. BAIT, a recently proposed AL strategy based on the Fisher Information, has demonstrated impressive performance across various datasets. However, BAIT's high computational and memory requirements hinder its applicability on large-scale classification tasks, resulting in current research neglecting BAIT in its evaluations. This paper introduces two methods to enhance BAIT's computational efficiency and scalability. Notably, we significantly reduce its time complexity by approximating the Fisher Information. In particular, we adapt the original formulation by i) taking the expectation over the most probable classes, and ii) constructing a binary classification task, leading to an alternative likelihood for gradient computations. Consequently, this allows the efficient use of BAIT on large-scale datasets, including ImageNet. Our unified and comprehensive evaluation across a variety of datasets demonstrates that our approximations achieve strong performance with considerably reduced time complexity. Furthermore, we provide an extensive open-source toolbox that implements recent state-of-the-art AL strategies, available at https://github.com/dhuseljic/dal-toolbox. | [
"['Denis Huseljic' 'Paul Hahn' 'Marek Herde' 'Lukas Rauch' 'Bernhard Sick']"
]
|
null | null | 2404.08985 | null | null | http://arxiv.org/pdf/2404.08985v1 | 2024-04-13T12:14:58Z | 2024-04-13T12:14:58Z | Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient
Finetuning | Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications, ranging from content generation to interactive entertainment and artistic creation. However, the diversity of downstream tasks in multitask scenarios presents substantial adaptation challenges for LLMs. While traditional methods often succumb to knowledge confusion on their monolithic dense models, Mixture-of-Experts (MoE) has emerged as a promising solution with its sparse architecture for effective task decoupling. Inspired by the principles of human cognitive neuroscience, we design a novel framework \texttt{Intuition-MoR1E} that leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, offering implicit guidance to the router for optimized feature allocation. Moreover, we introduce a cutting-edge Rank-1 Experts formulation designed to manage a spectrum of intuitions, demonstrating enhanced parameter efficiency and effectiveness in multitask LLM finetuning. Extensive experiments demonstrate that Intuition-MoR1E achieves superior efficiency and a 2.15% overall accuracy improvement across 14 public datasets against other state-of-the-art baselines. | [
"['Yijiang Liu' 'Rongyu Zhang' 'Huanrui Yang' 'Kurt Keutzer' 'Yuan Du'\n 'Li Du' 'Shanghang Zhang']"
]
|
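As a concrete picture of what a mixture of rank-1 experts can look like, the sketch below adds gated rank-1 updates on top of a frozen linear layer, so the effective weight is W + sum_i g_i(x) u_i v_i^T. This is an illustrative reading of the abstract's "Rank-1 Experts"; the paper's routing and layer-placement details are not specified here.

```python
import torch
import torch.nn as nn

class Rank1MoELinear(nn.Module):
    """Frozen base linear layer plus a gated mixture of rank-1 experts.

    Each expert i contributes g_i(x) * u_i (v_i^T x), i.e. one rank-1
    update u_i v_i^T to the effective weight, selected softly per input."""
    def __init__(self, d_in, d_out, n_experts=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)       # frozen pretrained weight
        self.u = nn.Parameter(torch.randn(n_experts, d_out) * 0.02)
        self.v = nn.Parameter(torch.randn(n_experts, d_in) * 0.02)
        self.gate = nn.Linear(d_in, n_experts)       # router

    def forward(self, x):                            # x: (B, d_in)
        g = torch.softmax(self.gate(x), dim=-1)      # (B, n_experts)
        vx = x @ self.v.T                            # v_i^T x  -> (B, n_experts)
        expert_out = (g * vx) @ self.u               # sum_i g_i vx_i u_i
        return self.base(x) + expert_out

layer = Rank1MoELinear(64, 128)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 128])
```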
null | null | 2404.08995 | null | null | http://arxiv.org/pdf/2404.08995v4 | 2024-04-30T07:13:18Z | 2024-04-13T12:41:40Z | Beyond Known Clusters: Probe New Prototypes for Efficient Generalized
Class Discovery | Generalized Class Discovery (GCD) aims to dynamically assign labels to unlabelled data partially based on knowledge learned from labelled data, where the unlabelled data may come from known or novel classes. The prevailing approach generally involves clustering across all data and learning conceptions by prototypical contrastive learning. However, existing methods largely hinge on the performance of clustering algorithms and are thus subject to their inherent limitations. Firstly, the estimated cluster number is often smaller than the ground truth, causing existing methods to suffer from a lack of prototypes for comprehensive conception learning. To address this issue, we propose an adaptive probing mechanism that introduces learnable potential prototypes to expand the cluster prototypes (centers). As there is no ground truth for the potential prototype, we develop a self-supervised prototype learning framework to optimize the potential prototype in an end-to-end fashion. Secondly, clustering is computationally intensive, and the conventional strategy of clustering both labelled and unlabelled instances exacerbates this issue. To counteract this inefficiency, we opt to cluster only the unlabelled instances and subsequently expand the cluster prototypes with our introduced potential prototypes to quickly explore novel classes. Despite the simplicity of our proposed method, extensive empirical analysis on a wide range of datasets confirms that our method consistently delivers state-of-the-art results. Specifically, our method surpasses the nearest competitor by a significant margin of 9.7% on the Stanford Cars dataset and achieves 12x clustering efficiency on the Herbarium 19 dataset. We will make the code and checkpoints publicly available at https://github.com/xjtuYW/PNP.git. | [
"['Ye Wang' 'Yaxiong Wang' 'Yujiao Wu' 'Bingchen Zhao' 'Xueming Qian']"
]
|
null | null | 2404.09000 | null | null | http://arxiv.org/pdf/2404.09000v1 | 2024-04-13T13:03:19Z | 2024-04-13T13:03:19Z | MaSkel: A Model for Human Whole-body X-rays Generation from Human
Masking Images | Human whole-body X-rays can offer a valuable reference for various applications, including medical diagnostics, digital animation modeling, and ergonomic design. The traditional method of obtaining X-ray information requires the use of CT (Computed Tomography) scan machines, which emit potentially harmful radiation; this is a significant limitation for realistic applications because it lacks adaptability and safety. In our work, we propose a new method to directly generate 2D human whole-body X-rays from human masking images. The predicted images are similar to real ones, with the same image style and anatomic structure. We employ a data-driven strategy. By leveraging advanced generative techniques, our model MaSkel (Masking image to Skeleton X-rays) can generate a high-quality X-ray image from a human masking image without the need for invasive and harmful radiation exposure, which not only provides a new path to generate highly anatomic and customized data but also reduces health risks. To our knowledge, our model MaSkel is the first work for predicting whole-body X-rays. This paper covers two parts of the work. First, to solve the data limitation problem, diffusion-based techniques are utilized for data augmentation, providing two synthetic datasets for preliminary pretraining. We then design a two-stage training strategy to train MaSkel. Finally, we conduct qualitative and quantitative evaluations of the generated X-rays. In addition, we invite professional doctors to assess the predicted data. These evaluations demonstrate MaSkel's superior ability to generate anatomic X-rays from human masking images. The related code and dataset links are available at https://github.com/2022yingjie/MaSkel. | [
"['Yingjie Xi' 'Boyuan Cheng' 'Jingyao Cai' 'Jian Jun Zhang'\n 'Xiaosong Yang']"
]
|
null | null | 2404.09005 | null | null | http://arxiv.org/pdf/2404.09005v6 | 2024-07-14T20:56:10Z | 2024-04-13T13:18:40Z | Proof-of-Learning with Incentive Security | Most current blockchain systems rely heavily on the Proof-of-Work (PoW) or Proof-of-Stake (PoS) mechanisms for decentralized consensus and security assurance. However, the substantial energy expenditure stemming from computationally intensive yet meaningless tasks has raised considerable concerns surrounding traditional PoW approaches. The PoS mechanism, while free of energy consumption, is subject to security and economic issues. Addressing these issues, the paradigm of Proof-of-Useful-Work (PoUW) seeks to employ challenges of practical significance as PoW, thereby imbuing energy consumption with tangible value. While previous efforts in Proof of Learning (PoL) explored the utilization of deep learning model training SGD tasks as PoUW challenges, recent research has revealed its vulnerabilities to adversarial attacks and the theoretical hardness of crafting a Byzantine-secure PoL mechanism. In this paper, we introduce the concept of incentive-security that incentivizes rational provers to behave honestly for their best interest, bypassing the existing hardness to design a PoL mechanism with computational efficiency, a provable incentive-security guarantee and controllable difficulty. Particularly, our work is secure against two attacks on the recent work of Jia et al. [2021], and also improves the computational overhead from $\Theta(1)$ to $O(\frac{\log E}{E})$. Furthermore, while most recent research assumes trusted problem providers and verifiers, our design also guarantees frontend incentive-security even when problem providers are untrusted, and verifier incentive-security that bypasses the Verifier's Dilemma. By incorporating ML training into blockchain consensus mechanisms with provable guarantees, our research not only proposes an eco-friendly solution to blockchain systems, but also provides a proposal for a completely decentralized computing power market in the new AI age. | [
"['Zishuo Zhao' 'Zhixuan Fang' 'Xuechao Wang' 'Xi Chen' 'Yuan Zhou']"
]
|
null | null | 2404.09010 | null | null | http://arxiv.org/pdf/2404.09010v1 | 2024-04-13T13:39:26Z | 2024-04-13T13:39:26Z | MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial
Expression Recognition in-the-wild | Dynamic Facial Expression Recognition (DFER) has received significant interest in recent years, driven by its pivotal role in enabling empathic and human-compatible technologies. Achieving robustness towards in-the-wild data in DFER is particularly important for real-world applications. One of the directions aimed at improving such models is multimodal emotion recognition based on audio and video data. Multimodal learning in DFER increases the model capabilities by leveraging richer, complementary data representations. Within the field of multimodal DFER, recent methods have focused on exploiting advances in self-supervised learning (SSL) for pre-training of strong multimodal encoders. Another line of research has focused on adapting pre-trained static models for DFER. In this work, we propose a different perspective on the problem and investigate the advancement of multimodal DFER performance by adapting SSL-pre-trained disjoint unimodal encoders. We identify the main challenges associated with this task, namely, intra-modality adaptation, cross-modal alignment, and temporal adaptation, and propose solutions to each of them. As a result, we demonstrate improvement over the current state-of-the-art on two popular DFER benchmarks, namely DFEW and MFAW. | [
"['Kateryna Chumachenko' 'Alexandros Iosifidis' 'Moncef Gabbouj']"
]
|
null | null | 2404.09011 | null | null | http://arxiv.org/pdf/2404.09011v1 | 2024-04-13T13:41:13Z | 2024-04-13T13:41:13Z | PracticalDG: Perturbation Distillation on Vision-Language Models for
Hybrid Domain Generalization | Domain Generalization (DG) aims to resolve distribution shifts between source and target domains, and current DG methods default to the setting in which data from source and target domains share identical categories. Nevertheless, unseen classes from target domains may exist in practical scenarios. To address this issue, Open Set Domain Generalization (OSDG) has emerged and several methods have been proposed exclusively for it. However, most existing methods adopt complex architectures with only slight improvement compared with DG methods. Recently, vision-language models (VLMs) have been introduced in DG following the fine-tuning paradigm, but they incur huge training overhead with large vision models. Therefore, in this paper, we innovate to transfer knowledge from VLMs to lightweight vision models and improve the robustness by introducing Perturbation Distillation (PD) from three perspectives, including Score, Class and Instance (SCI), named SCI-PD. Moreover, previous methods are oriented by benchmarks with identical and fixed splits, ignoring the divergence between source domains. These methods are revealed to suffer from sharp performance decay with our proposed new benchmark Hybrid Domain Generalization (HDG) and a novel metric $H^{2}$-CV, which construct various splits to comprehensively assess the robustness of algorithms. Extensive experiments demonstrate that our method outperforms state-of-the-art algorithms on multiple datasets, especially improving the robustness when confronting data scarcity. | [
"['Zining Chen' 'Weiqiu Wang' 'Zhicheng Zhao' 'Fei Su' 'Aidong Men'\n 'Hongying Meng']"
]
|
null | null | 2404.09016 | null | null | http://arxiv.org/pdf/2404.09016v1 | 2024-04-13T14:08:56Z | 2024-04-13T14:08:56Z | Theoretical research on generative diffusion models: an overview | Generative diffusion models have shown great success in many fields, supported by a powerful theoretical background. They gradually convert the data distribution into noise and then reverse the process to recover a similar distribution. Many existing reviews focus on specific application areas without concentrating on the research about the algorithms themselves. Unlike them, we investigate the theoretical developments of generative diffusion models. These approaches divide mainly into two groups: training-based and sampling-based. Recognizing this allows a clear and understandable categorization for researchers who will make new developments in the future. | [
"['Melike Nur Yeğin' 'Mehmet Fatih Amasyalı']"
]
|
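The "convert to noise, then reverse it" pipeline summarized above is usually formalized by the DDPM forward and reverse processes. The standard equations are reproduced below for reference; they are textbook definitions rather than anything specific to this survey.

```latex
% Forward (noising) process with schedule \beta_t and \bar\alpha_t = \prod_{s \le t} (1 - \beta_s):
q(x_t \mid x_0) = \mathcal{N}\!\bigl(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\,I\bigr)
% Learned reverse (denoising) process, trained to invert the corruption:
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\bigl(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\bigr)
```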
null | null | 2404.09022 | null | null | http://arxiv.org/pdf/2404.09022v1 | 2024-04-13T15:03:03Z | 2024-04-13T15:03:03Z | Navigating the Landscape of Large Language Models: A Comprehensive
Review and Analysis of Paradigms and Fine-Tuning Strategies | With the surge of ChatGPT, the use of large models has significantly increased, rapidly rising to prominence across the industry and sweeping across the internet. This article is a comprehensive review of fine-tuning methods for large models. It investigates the latest technological advancements and the application of advanced methods in aspects such as task-adaptive fine-tuning, domain-adaptive fine-tuning, few-shot learning, knowledge distillation, multi-task learning, parameter-efficient fine-tuning, and dynamic fine-tuning. | [
"['Benjue Weng']"
]
|
null | null | 2404.09029 | null | null | http://arxiv.org/pdf/2404.09029v1 | 2024-04-13T15:37:57Z | 2024-04-13T15:37:57Z | A Parametric Rate-Distortion Model for Video Transcoding | Over the past two decades, the surge in video streaming applications has been fueled by the increasing accessibility of the internet and the growing demand for network video. As users with varying internet speeds and devices seek high-quality video, transcoding becomes essential for service providers. In this paper, we introduce a parametric rate-distortion (R-D) transcoding model. Our model excels at predicting transcoding distortion at various rates without the need to encode the video. This model serves as a versatile tool that can be used to achieve visual quality improvement (in terms of PSNR) via trans-sizing. Moreover, we use our model to identify visually lossless and near-zero-slope bitrate ranges for an ingest video. Having this information allows us to adjust the transcoding target bitrate while introducing visually negligible quality degradations. By utilizing our model in this manner, quality improvements up to 2 dB and bitrate savings of up to 46% of the original target bitrate are possible. Experimental results demonstrate the efficacy of our model in video transcoding rate-distortion prediction. | [
"['Maedeh Jamali' 'Nader Karimi' 'Shadrokh Samavi' 'Shahram Shirani']"
]
|
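Parametric R-D models are often hyperbolic in the rate; a classical choice is D(R) = D0 + theta / (R - R0), fitted to a few (rate, distortion) measurements. The sketch below fits that textbook form with SciPy on made-up numbers; the paper's actual parametric form and data are not given in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def rd_model(R, d0, theta, r0):
    """Classical hyperbolic rate-distortion form: D(R) = d0 + theta / (R - r0)."""
    return d0 + theta / (R - r0)

# hypothetical (bitrate in kbps, distortion as MSE) measurements of one transcode
rates = np.array([500, 1000, 2000, 4000, 8000], dtype=float)
dists = np.array([48.0, 30.0, 19.5, 14.2, 11.8])

params, _ = curve_fit(rd_model, rates, dists, p0=(10.0, 2e4, 0.0))
print("D0=%.2f theta=%.0f R0=%.1f" % tuple(params))
# once fitted, distortion at unseen target bitrates is a cheap evaluation
print("predicted distortion @ 3 Mbps:", rd_model(3000.0, *params))
```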
null | null | 2404.09030 | null | null | http://arxiv.org/pdf/2404.09030v1 | 2024-04-13T15:40:39Z | 2024-04-13T15:40:39Z | Active Learning for Control-Oriented Identification of Nonlinear Systems | Model-based reinforcement learning is an effective approach for controlling an unknown system. It is based on a longstanding pipeline familiar to the control community in which one performs experiments on the environment to collect a dataset, uses the resulting dataset to identify a model of the system, and finally performs control synthesis using the identified model. As interacting with the system may be costly and time consuming, targeted exploration is crucial for developing an effective control-oriented model with minimal experimentation. Motivated by this challenge, recent work has begun to study finite sample data requirements and sample efficient algorithms for the problem of optimal exploration in model-based reinforcement learning. However, existing theory and algorithms are limited to model classes which are linear in the parameters. Our work instead focuses on models with nonlinear parameter dependencies, and presents the first finite sample analysis of an active learning algorithm suitable for a general class of nonlinear dynamics. In certain settings, the excess control cost of our algorithm achieves the optimal rate, up to logarithmic factors. We validate our approach in simulation, showcasing the advantage of active, control-oriented exploration for controlling nonlinear systems. | [
"['Bruce D. Lee' 'Ingvar Ziemann' 'George J. Pappas' 'Nikolai Matni']"
]
|
null | null | 2404.09042 | null | null | http://arxiv.org/pdf/2404.09042v1 | 2024-04-13T16:57:37Z | 2024-04-13T16:57:37Z | Improving Personalisation in Valence and Arousal Prediction using Data
Augmentation | In the field of emotion recognition and Human-Machine Interaction (HMI), personalised approaches have exhibited their efficacy in capturing individual-specific characteristics and enhancing affective prediction accuracy. However, personalisation techniques often face the challenge of limited data for target individuals. This paper presents our work on an enhanced personalisation strategy that leverages data augmentation to develop tailored models for continuous valence and arousal prediction. Our proposed approach, Distance Weighting Augmentation (DWA), employs a weighting-based augmentation method that expands a target individual's dataset, leveraging distance metrics to identify similar samples at the segment level. Experimental results on the MuSe-Personalisation 2023 Challenge dataset demonstrate that our method significantly improves the test-set performance of feature sets that have low baseline performance. This improvement in poor-performing features comes without sacrificing performance on high-performing features. In particular, our method achieves a maximum combined testing CCC of 0.78, compared to the reported baseline score of 0.76 (reproduced at 0.72). It also achieved peak arousal and valence scores of 0.81 and 0.76, compared to reproduced baseline scores of 0.76 and 0.67, respectively. Through this work, we make significant contributions to the advancement of personalised affective computing models, enhancing the practicality and adaptability of data-level personalisation in real-world contexts. | [
"['Munachiso Nwadike' 'Jialin Li' 'Hanan Salam']"
]
|
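The abstract does not spell out DWA's weighting, so the sketch below shows one plausible reading: for each of a target individual's segments, pull the k nearest pool segments by Euclidean distance and weight them by inverse distance. The function name, k, and the weighting scheme are all assumptions.

```python
import numpy as np

def distance_weighted_augment(target_feats, pool_feats, pool_labels, k=5):
    """Expand a target individual's training set with similar pool segments.

    For each target segment, take its k nearest pool segments (Euclidean)
    and weight them by inverse distance -- one plausible reading of DWA.
    Returns augmented features, labels, and per-sample weights."""
    aug_x, aug_y, aug_w = [], [], []
    for t in target_feats:
        d = np.linalg.norm(pool_feats - t, axis=1)
        idx = np.argsort(d)[:k]                  # k most similar segments
        w = 1.0 / (d[idx] + 1e-8)                # closer segments count more
        aug_x.append(pool_feats[idx]); aug_y.append(pool_labels[idx])
        aug_w.append(w / w.sum())
    return (np.concatenate(aug_x), np.concatenate(aug_y),
            np.concatenate(aug_w))

rng = np.random.default_rng(0)
target = rng.normal(size=(10, 32))            # few segments for this person
pool = rng.normal(size=(1000, 32))            # other individuals' segments
labels = rng.uniform(-1, 1, size=1000)        # e.g., valence annotations
X, y, w = distance_weighted_augment(target, pool, labels)
print(X.shape, y.shape, w.shape)              # (50, 32) (50,) (50,)
```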
null | null | 2404.09045 | null | null | http://arxiv.org/pdf/2404.09045v1 | 2024-04-13T17:11:35Z | 2024-04-13T17:11:35Z | Adapting Mental Health Prediction Tasks for Cross-lingual Learning via
Meta-Training and In-context Learning with Large Language Model | Timely identification is essential for the efficient handling of mental health illnesses such as depression. However, the current research fails to adequately address the prediction of mental health conditions from social media data in low-resource African languages like Swahili. This study introduces two distinct approaches utilising model-agnostic meta-learning and leveraging large language models (LLMs) to address this gap. Experiments are conducted on three datasets translated into the low-resource language and applied to four mental health tasks, which include stress, depression, depression severity and suicidal ideation prediction. We first apply a meta-learning model with self-supervision, which results in improved model initialisation for rapid adaptation and cross-lingual transfer. The results show that our meta-trained model performs significantly better than standard fine-tuning methods, outperforming baseline fine-tuning in macro F1 score by 18% and 0.8% over XLM-R and mBERT, respectively. In parallel, we use LLMs' in-context learning capabilities to assess their performance accuracy across the Swahili mental health prediction tasks by analysing different cross-lingual prompting approaches. Our analysis showed that Swahili prompts performed better than cross-lingual prompts but worse than English prompts. Our findings show that in-context learning can be achieved through cross-lingual transfer with carefully crafted prompt templates containing examples and instructions. | [
"['Zita Lifelo' 'Huansheng Ning' 'Sahraoui Dhelim']"
]
|
null | null | 2404.09053 | null | null | http://arxiv.org/pdf/2404.09053v1 | 2024-04-13T17:34:58Z | 2024-04-13T17:34:58Z | ALICE: Combining Feature Selection and Inter-Rater Agreeability for
Machine Learning Insights | This paper presents a new Python library called Automated Learning for Insightful Comparison and Evaluation (ALICE), which merges conventional feature selection and the concept of inter-rater agreeability in a simple, user-friendly manner to seek insights into black-box Machine Learning models. The framework is proposed following an overview of the key concepts of interpretability in ML. The entire architecture and intuition of the main methods of the framework are thoroughly discussed, and results from initial experiments on a customer churn predictive modeling task are presented, alongside ideas for possible avenues to explore in the future. The full source code for the framework and the experiment notebooks can be found at: https://github.com/anasashb/aliceHU | [
"['Bachana Anasashvili' 'Vahidin Jeleskovic']"
]
|
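One way to picture the feature-selection-meets-agreeability idea above: treat two models trained on different feature subsets as "raters" and score how much their predictions agree with Cohen's kappa. The sketch below uses scikit-learn for that generic recipe; it is not ALICE's actual API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Two "raters": the same model class trained on different feature subsets.
m1 = LogisticRegression().fit(X[:, :5], y)   # uses features 0..4
m2 = LogisticRegression().fit(X[:, 3:], y)   # uses features 3..9

p1, p2 = m1.predict(X[:, :5]), m2.predict(X[:, 3:])
# High kappa suggests the subsets carry the same decision-relevant signal.
print("kappa between feature subsets:", cohen_kappa_score(p1, p2))
```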
null | null | 2404.09066 | null | null | http://arxiv.org/pdf/2404.09066v1 | 2024-04-13T19:30:58Z | 2024-04-13T19:30:58Z | CodeCloak: A Method for Evaluating and Mitigating Code Leakage by LLM
Code Assistants | LLM-based code assistants are becoming increasingly popular among developers. These tools help developers improve their coding efficiency and reduce errors by providing real-time suggestions based on the developer's codebase. While beneficial, these tools might inadvertently expose the developer's proprietary code to the code assistant service provider during the development process. In this work, we propose two complementary methods to mitigate the risk of code leakage when using LLM-based code assistants. The first is a technique for reconstructing a developer's original codebase from code segments sent to the code assistant service (i.e., prompts) during the development process, enabling assessment and evaluation of the extent of code leakage to third parties (or adversaries). The second is CodeCloak, a novel deep reinforcement learning agent that manipulates the prompts before sending them to the code assistant service. CodeCloak aims to achieve the following two contradictory goals: (i) minimizing code leakage, while (ii) preserving relevant and useful suggestions for the developer. Our evaluation, employing GitHub Copilot, StarCoder, and CodeLlama LLM-based code assistants models, demonstrates the effectiveness of our CodeCloak approach on a diverse set of code repositories of varying sizes, as well as its transferability across different models. In addition, we generate a realistic simulated coding environment to thoroughly analyze code leakage risks and evaluate the effectiveness of our proposed mitigation techniques under practical development scenarios. | [
"['Amit Finkman' 'Eden Bar-Kochva' 'Avishag Shapira' 'Dudu Mimran'\n 'Yuval Elovici' 'Asaf Shabtai']"
]
|
null | null | 2404.09077 | null | null | http://arxiv.org/pdf/2404.09077v1 | 2024-04-13T20:43:46Z | 2024-04-13T20:43:46Z | CuriousLLM: Elevating Multi-Document QA with Reasoning-Infused Knowledge
Graph Prompting | In the field of Question Answering (QA), unifying large language models (LLMs) with external databases has shown great success. However, these methods often fall short in providing the advanced reasoning needed for complex QA tasks. To address these issues, we improve upon a recent approach called Knowledge Graph Prompting (KGP), which combines knowledge graphs with a LLM-based agent to improve reasoning and search accuracy. Nevertheless, the original KGP framework necessitates costly fine-tuning with large datasets yet still suffers from LLM hallucination. Therefore, we propose a reasoning-infused LLM agent to enhance this framework. This agent mimics human curiosity to ask follow-up questions to more efficiently navigate the search. This simple modification significantly boosts the LLM performance in QA tasks without the high costs and latency associated with the initial KGP framework. Our ultimate goal is to further develop this approach, leading to more accurate, faster, and cost-effective solutions in the QA domain. | [
"['Zukang Yang' 'Zixuan Zhu']"
]
|
null | null | 2404.09080 | null | null | http://arxiv.org/pdf/2404.09080v1 | 2024-04-13T20:55:15Z | 2024-04-13T20:55:15Z | Safe Reinforcement Learning on the Constraint Manifold: Theory and
Applications | Integrating learning-based techniques, especially reinforcement learning, into robotics is promising for solving complex problems in unstructured environments. However, most existing approaches are trained in well-tuned simulators and subsequently deployed on real robots without online fine-tuning. In this setting, the simulation's realism seriously impacts the deployment's success rate. Instead, learning with real-world interaction data offers a promising alternative: not only eliminates the need for a fine-tuned simulator but also applies to a broader range of tasks where accurate modeling is unfeasible. One major problem for on-robot reinforcement learning is ensuring safety, as uncontrolled exploration can cause catastrophic damage to the robot or the environment. Indeed, safety specifications, often represented as constraints, can be complex and non-linear, making safety challenging to guarantee in learning systems. In this paper, we show how we can impose complex safety constraints on learning-based robotics systems in a principled manner, both from theoretical and practical points of view. Our approach is based on the concept of the Constraint Manifold, representing the set of safe robot configurations. Exploiting differential geometry techniques, i.e., the tangent space, we can construct a safe action space, allowing learning agents to sample arbitrary actions while ensuring safety. We demonstrate the method's effectiveness in a real-world Robot Air Hockey task, showing that our method can handle high-dimensional tasks with complex constraints. Videos of the real robot experiments are available on the project website (https://puzeliu.github.io/TRO-ATACOM). | [
"['Puze Liu' 'Haitham Bou-Ammar' 'Jan Peters' 'Davide Tateo']"
]
|
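The geometric core of the approach above can be illustrated in a few lines: with constraints c(q) = 0 and Jacobian J, any action projected onto the null space of J keeps the system on the constraint manifold to first order. The sketch below shows this standard projection; it is generic linear algebra, not the paper's full ATACOM machinery.

```python
import numpy as np

def project_to_tangent(J, action):
    """Project an action onto the tangent space of the constraint manifold
    {q : c(q) = 0}, i.e. the null space of the constraint Jacobian J.

    P = I - pinv(J) @ J is the null-space projector, so J @ (P @ action) ~ 0
    and the projected action is constraint-preserving to first order."""
    P = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J
    return P @ action

# toy constraint: keep a 3-DoF point on the plane q0 + q1 + q2 = 1
J = np.array([[1.0, 1.0, 1.0]])           # Jacobian of c(q) = q0 + q1 + q2 - 1
raw_action = np.array([0.5, -0.2, 0.9])   # e.g., sampled freely by an RL policy
safe = project_to_tangent(J, raw_action)
print(safe, J @ safe)                     # constraint velocity ~ 0
```

This is what lets the learning agent sample arbitrary actions while staying safe: whatever the policy outputs, the component that would leave the manifold is removed before execution.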
null | null | 2404.09081 | null | null | http://arxiv.org/pdf/2404.09081v1 | 2024-04-13T21:02:49Z | 2024-04-13T21:02:49Z | Probabilistic Directed Distance Fields for Ray-Based Shape
Representations | In modern computer vision, the optimal representation of 3D shape continues to be task-dependent. One fundamental operation applied to such representations is differentiable rendering, as it enables inverse graphics approaches in learning frameworks. Standard explicit shape representations (voxels, point clouds, or meshes) are often easily rendered, but can suffer from limited geometric fidelity, among other issues. On the other hand, implicit representations (occupancy, distance, or radiance fields) preserve greater fidelity, but suffer from complex or inefficient rendering processes, limiting scalability. In this work, we devise Directed Distance Fields (DDFs), a novel neural shape representation that builds upon classical distance fields. The fundamental operation in a DDF maps an oriented point (position and direction) to surface visibility and depth. This enables efficient differentiable rendering, obtaining depth with a single forward pass per pixel, as well as differential geometric quantity extraction (e.g., surface normals), with only additional backward passes. Using probabilistic DDFs (PDDFs), we show how to model inherent discontinuities in the underlying field. We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction, showcasing strong performance with simple architectural components via the versatility of our representation. Finally, since the dimensionality of DDFs permits view-dependent geometric artifacts, we conduct a theoretical investigation of the constraints necessary for view consistency. We find a small set of field properties that are sufficient to guarantee a DDF is consistent, without knowing, for instance, which shape the field is expressing. | [
"['Tristan Aumentado-Armstrong' 'Stavros Tsogkas' 'Sven Dickinson'\n 'Allan Jepson']"
]
|
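The "fundamental operation" described above maps an oriented point (position, direction) to surface visibility and depth. The toy network below mirrors that signature, so a depth map needs only one forward pass per pixel; the architecture, activations, and sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class DirectedDistanceField(nn.Module):
    """Toy DDF: maps an oriented point (position x, unit direction v) to
    (visibility, depth). Rendering a depth map then costs a single forward
    pass per pixel, as described in the abstract."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                   # [visibility logit, raw depth]
        )

    def forward(self, x, v):
        out = self.net(torch.cat([x, v], dim=-1))
        visibility = torch.sigmoid(out[..., 0])     # probability the ray hits
        depth = nn.functional.softplus(out[..., 1])  # nonnegative distance
        return visibility, depth

ddf = DirectedDistanceField()
origins = torch.zeros(64 * 64, 3)                   # camera rays from the origin
dirs = nn.functional.normalize(torch.randn(64 * 64, 3), dim=-1)
vis, depth = ddf(origins, dirs)                     # one pass per pixel
depth_map = depth.reshape(64, 64)                   # untrained, but shape-correct
mask = (vis > 0.5).reshape(64, 64)                  # which rays hit the surface
print(depth_map.shape, mask.float().mean())
```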
null | null | 2404.09091 | null | null | http://arxiv.org/pdf/2404.09091v2 | 2024-05-29T16:01:27Z | 2024-04-13T22:18:14Z | Semantic In-Domain Product Identification for Search Queries | Accurate explicit and implicit product identification in search queries is critical for enhancing user experiences, especially at a company like Adobe, which has over 50 products and covers queries across hundreds of tools. In this work, we present a novel approach to training a product classifier from user behavioral data. Our semantic model led to >25% relative improvement in CTR (click through rate) across the deployed surfaces; a >50% decrease in null rate; and a 2x increase in the app cards surfaced, which helps drive product visibility. | [
"['Sanat Sharma' 'Jayant Kumar' 'Twisha Naik' 'Zhaoyu Lu'\n 'Arvind Srikantan' 'Tracy Holloway King']"
]
|
null | null | 2404.09101 | null | null | http://arxiv.org/pdf/2404.09101v1 | 2024-04-13T23:20:16Z | 2024-04-13T23:20:16Z | Mixture of Experts Soften the Curse of Dimensionality in Operator
Learning | In this paper, we construct a mixture of neural operators (MoNOs) between function spaces whose complexity is distributed over a network of expert neural operators (NOs), with each NO satisfying parameter scaling restrictions. Our main result is a \textit{distributed} universal approximation theorem guaranteeing that any Lipschitz non-linear operator between $L^2([0,1]^d)$ spaces can be approximated uniformly over the Sobolev unit ball therein, to any given $\varepsilon>0$ accuracy, by an MoNO while satisfying the constraint that: each expert NO has a depth, width, and rank of $\mathcal{O}(\varepsilon^{-1})$. Naturally, our result implies that the required number of experts must be large, however, each NO is guaranteed to be small enough to be loadable into the active memory of most computers for reasonable accuracies $\varepsilon$. During our analysis, we also obtain new quantitative expression rates for classical NOs approximating uniformly continuous non-linear operators uniformly on compact subsets of $L^2([0,1]^d)$. | [
"['Anastasis Kratsios' 'Takashi Furuya' 'Jose Antonio Lara Benitez'\n 'Matti Lassas' 'Maarten de Hoop']"
]
|
null | null | 2404.09113 | null | null | http://arxiv.org/pdf/2404.09113v2 | 2024-04-16T18:43:22Z | 2024-04-14T01:40:11Z | Extending Mean-Field Variational Inference via Entropic Regularization:
Theory and Computation | Variational inference (VI) has emerged as a popular method for approximate inference for high-dimensional Bayesian models. In this paper, we propose a novel VI method that extends the naive mean field via entropic regularization, referred to as $\Xi$-variational inference ($\Xi$-VI). $\Xi$-VI has a close connection to the entropic optimal transport problem and benefits from the computationally efficient Sinkhorn algorithm. We show that $\Xi$-variational posteriors effectively recover the true posterior dependency, where the dependence is downweighted by the regularization parameter. We analyze the role of dimensionality of the parameter space on the accuracy of $\Xi$-variational approximation and how it affects computational considerations, providing a rough characterization of the statistical-computational trade-off in $\Xi$-VI. We also investigate the frequentist properties of $\Xi$-VI and establish results on consistency, asymptotic normality, high-dimensional asymptotics, and algorithmic stability. We provide sufficient criteria for achieving polynomial-time approximate inference using the method. Finally, we demonstrate the practical advantage of $\Xi$-VI over mean-field variational inference on simulated and real data. | [
"['Bohan Wu' 'David Blei']"
]
|
null | null | 2404.09114 | null | null | http://arxiv.org/pdf/2404.09114v1 | 2024-04-14T01:44:58Z | 2024-04-14T01:44:58Z | Intelligent Chemical Purification Technique Based on Machine Learning | We present an innovative integration of artificial intelligence with column chromatography, aiming to resolve inefficiencies and standardize data collection in the chemical separation and purification domain. By developing an automated platform for precise data acquisition and employing advanced machine learning algorithms, we constructed predictive models to forecast key separation parameters, thereby enhancing the efficiency and quality of chromatographic processes. The application of transfer learning allows the model to adapt across various column specifications, broadening its utility. A novel metric, separation probability ($S_p$), quantifies the likelihood of effective compound separation, validated through experimental verification. This study signifies a significant step forward in the application of AI in chemical research, offering a scalable solution to traditional chromatography challenges and providing a foundation for future technological advancements in chemical analysis and purification. | [
"['Wenchao Wu' 'Hao Xu' 'Dongxiao Zhang' 'Fanyang Mo']"
]
|
null | null | 2404.09123 | null | null | http://arxiv.org/pdf/2404.09123v1 | 2024-04-14T02:18:07Z | 2024-04-14T02:18:07Z | Provable Interactive Learning with Hindsight Instruction Feedback | We study interactive learning in a setting where the agent has to generate a response (e.g., an action or trajectory) given a context and an instruction. In contrast to typical approaches that train the system using reward or expert supervision on the response, we study learning with hindsight instruction, where a teacher provides an instruction that is most suitable for the agent's generated response. This hindsight labeling of instruction is often easier to provide than expert supervision of the optimal response, which may require expert knowledge or be impractical to elicit. We initiate the theoretical analysis of interactive learning with hindsight labeling. We first provide a lower bound showing that in general, the regret of any algorithm must scale with the size of the agent's response space. We then study a specialized setting where the underlying instruction-response distribution can be decomposed as a low-rank matrix. We introduce an algorithm called LORIL for this setting and show that its regret scales as $\sqrt{T}$, where $T$ is the number of rounds, and depends on the intrinsic rank but does not depend on the size of the agent's response space. We provide experiments in two domains showing that LORIL outperforms baselines even when the low-rank assumption is violated. | [
"['Dipendra Misra' 'Aldo Pacchiano' 'Robert E. Schapire']"
]
|
null | null | 2404.09134 | null | null | http://arxiv.org/pdf/2404.09134v2 | 2024-06-29T13:41:36Z | 2024-04-14T03:44:54Z | Generative AI Agents with Large Language Model for Satellite Networks
via a Mixture of Experts Transmission | In response to the needs of 6G global communications, satellite communication networks have emerged as a key solution. However, the large-scale development of satellite communication networks is constrained by complex system models, whose modeling is challenging for massive users. Moreover, transmission interference between satellites and users seriously affects communication performance. To solve these problems, this paper develops generative artificial intelligence (AI) agents for model formulation and then applies a mixture of experts (MoE) approach to design transmission strategies. Specifically, we leverage large language models (LLMs) to build an interactive modeling paradigm and utilize retrieval-augmented generation (RAG) to extract satellite expert knowledge that supports mathematical modeling. Afterward, by integrating the expertise of multiple specialized components, we propose an MoE-proximal policy optimization (PPO) approach to solve the formulated problem. Each expert optimizes the variables at which it excels through specialized training of its own network, and the results are then aggregated through the gating network to perform joint optimization. The simulation results validate the accuracy and effectiveness of employing a generative agent for problem formulation. Furthermore, the superiority of the proposed MoE-PPO approach over other benchmarks is confirmed in solving the formulated problem. The adaptability of MoE-PPO to various customized modeling problems has also been demonstrated. | [
"['Ruichen Zhang' 'Hongyang Du' 'Yinqiu Liu' 'Dusit Niyato' 'Jiawen Kang'\n 'Zehui Xiong' 'Abbas Jamalipour' 'Dong In Kim']"
]
|
null | null | 2404.09136 | null | null | http://arxiv.org/pdf/2404.09136v1 | 2024-04-14T04:14:30Z | 2024-04-14T04:14:30Z | TLDR at SemEval-2024 Task 2: T5-generated clinical-Language summaries
for DeBERTa Report Analysis | This paper introduces novel methodologies for the Natural Language Inference for Clinical Trials (NLI4CT) task. We present TLDR (T5-generated clinical-Language summaries for DeBERTa Report Analysis) which incorporates T5-model generated premise summaries for improved entailment and contradiction analysis in clinical NLI tasks. This approach overcomes the challenges posed by small context windows and lengthy premises, leading to a substantial improvement in Macro F1 scores: a 0.184 increase over truncated premises. Our comprehensive experimental evaluation, including detailed error analysis and ablations, confirms the superiority of TLDR in achieving consistency and faithfulness in predictions against semantically altered inputs. | [
"['Spandan Das' 'Vinay Samuel' 'Shahriar Noroozizadeh']"
]
|
null | null | 2404.09138 | null | null | http://arxiv.org/pdf/2404.09138v1 | 2024-04-14T04:25:41Z | 2024-04-14T04:25:41Z | From Bytes to Borsch: Fine-Tuning Gemma and Mistral for the Ukrainian
Language Representation | In the rapidly advancing field of AI and NLP, generative large language models (LLMs) stand at the forefront of innovation, showcasing unparalleled abilities in text understanding and generation. However, the limited representation of low-resource languages like Ukrainian poses a notable challenge, restricting the reach and relevance of this technology. Our paper addresses this by fine-tuning the open-source Gemma and Mistral LLMs with Ukrainian datasets, aiming to improve their linguistic proficiency and benchmarking them against other existing models capable of processing Ukrainian language. This endeavor not only aims to mitigate language bias in technology but also promotes inclusivity in the digital realm. Our transparent and reproducible approach encourages further NLP research and development. Additionally, we present the Ukrainian Knowledge and Instruction Dataset (UKID) to aid future efforts in language model fine-tuning. Our research not only advances the field of NLP but also highlights the importance of linguistic diversity in AI, which is crucial for cultural preservation, education, and expanding AI's global utility. Ultimately, we advocate for a future where technology is inclusive, enabling AI to communicate effectively across all languages, especially those currently underrepresented. | [
"['Artur Kiulian' 'Anton Polishko' 'Mykola Khandoga' 'Oryna Chubych'\n 'Jack Connor' 'Raghav Ravishankar' 'Adarsh Shirawalmath']"
]
|
null | null | 2404.09140 | null | null | http://arxiv.org/pdf/2404.09140v1 | 2024-04-14T04:56:05Z | 2024-04-14T04:56:05Z | RF-Diffusion: Radio Signal Generation via Time-Frequency Diffusion | As AIGC shines in CV and NLP, its potential in the wireless domain has also emerged in recent years. Yet, existing RF-oriented generative solutions are ill-suited for generating high-quality, time-series RF data due to limited representation capabilities. In this work, inspired by the stellar achievements of the diffusion model in CV and NLP, we adapt it to the RF domain and propose RF-Diffusion. To accommodate the unique characteristics of RF signals, we first introduce a novel Time-Frequency Diffusion theory to enhance the original diffusion model, enabling it to tap into the information within the time, frequency, and complex-valued domains of RF signals. On this basis, we propose a Hierarchical Diffusion Transformer to translate the theory into a practical generative DNN through elaborated design spanning network architecture, functional block, and complex-valued operator, making RF-Diffusion a versatile solution to generate diverse, high-quality, and time-series RF data. Performance comparison with three prevalent generative models demonstrates RF-Diffusion's superior performance in synthesizing Wi-Fi and FMCW signals. We also showcase the versatility of RF-Diffusion in boosting Wi-Fi sensing systems and performing channel estimation in 5G networks. | [
"['Guoxuan Chi' 'Zheng Yang' 'Chenshu Wu' 'Jingao Xu' 'Yuchong Gao'\n 'Yunhao Liu' 'Tony Xiao Han']"
]
|
null | null | 2404.09151 | null | null | http://arxiv.org/pdf/2404.09151v2 | 2024-04-16T09:29:02Z | 2024-04-14T06:09:35Z | Emerging Platforms Meet Emerging LLMs: A Year-Long Journey of Top-Down
Development | Deploying machine learning (ML) on diverse computing platforms is crucial to accelerate and broaden their applications. However, it presents significant software engineering challenges due to the fast evolution of models, especially the recent Large Language Models (LLMs), and the emergence of new computing platforms. Current ML frameworks are primarily engineered for CPU and CUDA platforms, leaving a big gap in enabling emerging ones like Metal, Vulkan, and WebGPU. While a traditional bottom-up development pipeline fails to close the gap in a timely manner, we introduce TapML, a top-down approach and tooling designed to streamline the deployment of ML systems on diverse platforms, optimized for developer productivity. Unlike traditional bottom-up methods, which involve extensive manual testing and debugging, TapML automates unit testing through test carving and adopts a migration-based strategy for gradually offloading model computations from mature source platforms to emerging target platforms. By leveraging realistic inputs and remote connections for gradual target offloading, TapML accelerates the validation and minimizes debugging scopes, significantly optimizing development efforts. TapML was developed and applied through a year-long, real-world effort that successfully deployed significant emerging models and platforms. Through serious deployments of 82 emerging models in 17 distinct architectures across 5 emerging platforms, we showcase the effectiveness of TapML in enhancing developer productivity while ensuring model reliability and efficiency. Furthermore, we summarize comprehensive case studies from our real-world development, offering best practices for developing emerging ML systems. | [
"['Siyuan Feng' 'Jiawei Liu' 'Ruihang Lai' 'Charlie F. Ruan' 'Yong Yu'\n 'Lingming Zhang' 'Tianqi Chen']"
]
|
null | null | 2404.09155 | null | null | http://arxiv.org/pdf/2404.09155v1 | 2024-04-14T06:10:46Z | 2024-04-14T06:10:46Z | Mitigating Heterogeneity among Factor Tensors via Lie Group Manifolds
for Tensor Decomposition Based Temporal Knowledge Graph Embedding | Recent studies have highlighted the effectiveness of tensor decomposition methods in the Temporal Knowledge Graphs Embedding (TKGE) task. However, we found that inherent heterogeneity among factor tensors in tensor decomposition significantly hinders the tensor fusion process and further limits the performance of link prediction. To overcome this limitation, we introduce a novel method that maps factor tensors onto a unified smooth Lie group manifold to make the distribution of factor tensors approximately homogeneous in tensor decomposition. We provide theoretical proof of our motivation that homogeneous tensors are more effective than heterogeneous tensors in tensor fusion and in approximating the target for tensor decomposition based TKGE methods. The proposed method can be directly integrated into existing tensor decomposition based TKGE methods without introducing extra parameters. Extensive experiments demonstrate the effectiveness of our method in mitigating the heterogeneity and in enhancing the tensor decomposition based TKGE models. | [
"['Jiang Li' 'Xiangdong Su' 'Yeyun Gong' 'Guanglai Gao']"
]
|
null | null | 2404.09161 | null | null | http://arxiv.org/pdf/2404.09161v1 | 2024-04-14T06:46:16Z | 2024-04-14T06:46:16Z | Coreset Selection for Object Detection | Coreset selection is a method for selecting a small, representative subset of an entire dataset. It has been primarily researched in image classification, assuming there is only one object per image. However, coreset selection for object detection is more challenging as an image can contain multiple objects. As a result, much research has yet to be done on this topic. Therefore, we introduce a new approach, Coreset Selection for Object Detection (CSOD). CSOD generates imagewise and classwise representative feature vectors for multiple objects of the same class within each image. Subsequently, we adopt submodular optimization for considering both representativeness and diversity and utilize the representative vectors in the submodular optimization process to select a subset. When we evaluated CSOD on the Pascal VOC dataset, CSOD outperformed random selection by +6.4%p in AP$_{50}$ when selecting 200 images. | [
"['Hojun Lee' 'Suyoung Kim' 'Junhoo Lee' 'Jaeyoung Yoo' 'Nojun Kwak']"
]
|
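The submodular-optimization step mentioned in the CSOD record above can be illustrated with the textbook greedy algorithm for a facility-location objective, which rewards picking images whose representative vectors cover the rest of the pool. This generic sketch is not CSOD's exact objective, which also incorporates classwise vectors.

```python
import numpy as np

def greedy_facility_location(features, budget):
    """Greedy maximization of F(S) = sum_i max(0, max_{j in S} sim(i, j)),
    a classic monotone submodular coverage objective; the greedy rule has
    the usual (1 - 1/e) approximation guarantee.

    features: (n, d) per-image representative vectors (unit-normalized),
    returns the indices of the selected coreset."""
    sim = features @ features.T              # cosine similarity matrix
    n = len(features)
    selected = []
    best_cover = np.zeros(n)                 # current coverage of each point
    for _ in range(budget):
        # marginal gain of each candidate j: extra coverage it provides
        gains = np.maximum(sim - best_cover[:, None], 0.0).sum(axis=0)
        gains[selected] = -1.0               # never re-pick a selected image
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

rng = np.random.default_rng(0)
F = rng.normal(size=(300, 64))
F /= np.linalg.norm(F, axis=1, keepdims=True)
print(greedy_facility_location(F, budget=10))
```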
null | null | 2404.09173 | null | null | http://arxiv.org/pdf/2404.09173v3 | 2024-05-07T13:23:46Z | 2024-04-14T07:43:45Z | TransformerFAM: Feedback attention is working memory | While Transformers have revolutionized deep learning, their quadratic attention complexity hinders their ability to process infinitely long inputs. We propose Feedback Attention Memory (FAM), a novel Transformer architecture that leverages a feedback loop to enable the network to attend to its own latent representations. This design fosters the emergence of working memory within the Transformer, allowing it to process indefinitely long sequences. TransformerFAM requires no additional weights, enabling seamless integration with pre-trained models. Our experiments show that TransformerFAM significantly improves Transformer performance on long-context tasks across various model sizes (1B, 8B, and 24B). These results showcase the potential to empower Large Language Models (LLMs) to process sequences of unlimited length. | [
"['Dongseong Hwang' 'Weiran Wang' 'Zhuoyuan Huo' 'Khe Chai Sim'\n 'Pedro Moreno Mengibar']"
]
|
null | null | 2404.09177 | null | null | http://arxiv.org/pdf/2404.09177v1 | 2024-04-14T07:56:08Z | 2024-04-14T07:56:08Z | An Experimental Comparison Of Multi-view Self-supervised Methods For
Music Tagging | Self-supervised learning has emerged as a powerful way to pre-train generalizable machine learning models on large amounts of unlabeled data. It is particularly compelling in the music domain, where obtaining labeled data is time-consuming, error-prone, and ambiguous. During the self-supervised process, models are trained on pretext tasks, with the primary objective of acquiring robust and informative features that can later be fine-tuned for specific downstream tasks. The choice of the pretext task is critical as it guides the model to shape the feature space with meaningful constraints for information encoding. In the context of music, most works have relied on contrastive learning or masking techniques. In this study, we expand the scope of pretext tasks applied to music by investigating and comparing the performance of new self-supervised methods for music tagging. We open-source a simple ResNet model trained on a diverse catalog of millions of tracks. Our results demonstrate that, although most of these pre-training methods result in similar downstream results, contrastive learning consistently results in better downstream performance compared to other self-supervised pre-training methods. This holds true in a limited-data downstream context. | [
"['Gabriel Meseguer-Brocal' 'Dorian Desblancs' 'Romain Hennequin']"
]
|
null | null | 2404.09207 | null | null | http://arxiv.org/pdf/2404.09207v1 | 2024-04-14T10:04:44Z | 2024-04-14T10:04:44Z | DEGNN: Dual Experts Graph Neural Network Handling Both Edge and Node
Feature Noise | Graph Neural Networks (GNNs) have achieved notable success in various applications over graph data. However, recent research has revealed that real-world graphs often contain noise, and GNNs are susceptible to noise in the graph. To address this issue, several Graph Structure Learning (GSL) models have been introduced. While GSL models are tailored to enhance robustness against edge noise through edge reconstruction, a significant limitation surfaces: their high reliance on node features. This inherent dependence amplifies their susceptibility to noise within node features. Recognizing this vulnerability, we present DEGNN, a novel GNN model designed to adeptly mitigate noise in both edges and node features. The core idea of DEGNN is to design two separate experts: an edge expert and a node feature expert. These experts utilize self-supervised learning techniques to produce modified edges and node features. Leveraging these modified representations, DEGNN subsequently addresses downstream tasks, ensuring robustness against noise present in both edges and node features of real-world graphs. Notably, the modification process can be trained end-to-end, empowering DEGNN to adjust dynamically and achieve optimal edge and node representations for specific tasks. Comprehensive experiments demonstrate DEGNN's efficacy in managing noise, both in original real-world graphs and in graphs with synthetic noise. | [
"['Tai Hasegawa' 'Sukwon Yun' 'Xin Liu' 'Yin Jun Phua' 'Tsuyoshi Murata']"
]
|
null | null | 2404.09210 | null | null | http://arxiv.org/pdf/2404.09210v1 | 2024-04-14T10:23:30Z | 2024-04-14T10:23:30Z | FedDistill: Global Model Distillation for Local Model De-Biasing in
Non-IID Federated Learning | Federated Learning (FL) is a novel approach that allows for collaborative machine learning while preserving data privacy by leveraging models trained on decentralized devices. However, FL faces challenges due to non-uniformly distributed (non-iid) data across clients, which impacts model performance and its generalization capabilities. To tackle the non-iid issue, recent efforts have utilized the global model as a teaching mechanism for local models. However, our pilot study shows that their effectiveness is constrained by imbalanced data distribution, which induces biases in local models and leads to a 'local forgetting' phenomenon, where the ability of models to generalize degrades over time, particularly for underrepresented classes. This paper introduces FedDistill, a framework enhancing the knowledge transfer from the global model to local models, focusing on the issue of imbalanced class distribution. Specifically, FedDistill employs group distillation, segmenting classes based on their frequency in local datasets to facilitate a focused distillation process to classes with fewer samples. Additionally, FedDistill dissects the global model into a feature extractor and a classifier. This separation empowers local models with more generalized data representation capabilities and ensures more accurate classification across all classes. FedDistill mitigates the adverse effects of data imbalance, ensuring that local models do not forget underrepresented classes but instead become more adept at recognizing and classifying them accurately. Our comprehensive experiments demonstrate FedDistill's effectiveness, surpassing existing baselines in accuracy and convergence speed across several benchmark datasets. | [
"['Changlin Song' 'Divya Saxena' 'Jiannong Cao' 'Yuqing Zhao']"
]
|
null | null | 2404.09213 | null | null | http://arxiv.org/pdf/2404.09213v1 | 2024-04-14T10:52:01Z | 2024-04-14T10:52:01Z | Qandle: Accelerating State Vector Simulation Using Gate-Matrix Caching
and Circuit Splitting | To address the computational complexity associated with state-vector simulation for quantum circuits, we propose a combination of advanced techniques to accelerate circuit execution. Quantum gate matrix caching reduces the overhead of repeated applications of the Kronecker product when applying a gate matrix to the state vector by storing decomposed partial matrices for each gate. Circuit splitting divides the circuit into sub-circuits with fewer gates by constructing a dependency graph, enabling parallel or sequential execution on disjoint subsets of the state vector. These techniques are implemented using the PyTorch machine learning framework. We demonstrate the performance of our approach by comparing it to other PyTorch-compatible quantum state-vector simulators. Our implementation, named Qandle, is designed to seamlessly integrate with existing machine learning workflows, providing a user-friendly API and compatibility with the OpenQASM format. Qandle is an open-source project hosted on GitHub https://github.com/gstenzel/qandle and PyPI https://pypi.org/project/qandle/ . | [
"['Gerhard Stenzel' 'Sebastian Zielinski' 'Michael Kölle' 'Philipp Altmann'\n 'Jonas Nüßlein' 'Thomas Gabor']"
]
|
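The Kronecker-product overhead that Qandle's gate-matrix caching targets is visible in any minimal state-vector simulator. The sketch below shows the standard reshape-and-contract trick for applying a single-qubit gate without materializing the full $2^n \times 2^n$ matrix; it illustrates the cost being avoided, not Qandle's actual caching or circuit-splitting implementation.

```python
import torch

def apply_single_qubit_gate(state: torch.Tensor, gate: torch.Tensor,
                            qubit: int, num_qubits: int) -> torch.Tensor:
    # Reshape the 2**n state vector so the target qubit gets its own axis,
    # then contract the 2x2 gate with that axis -- avoiding the full
    # Kronecker-product matrix of size 2**n x 2**n.
    state = state.reshape([2] * num_qubits)
    state = torch.tensordot(gate, state, dims=([1], [qubit]))
    # tensordot puts the contracted output axis first; move it back in place.
    state = torch.movedim(state, 0, qubit)
    return state.reshape(-1)

# Example: apply a Hadamard to qubit 1 of a 3-qubit |000> state.
H = torch.tensor([[1.0, 1.0], [1.0, -1.0]]) / 2.0 ** 0.5
psi = torch.zeros(8)
psi[0] = 1.0
psi = apply_single_qubit_gate(psi, H, qubit=1, num_qubits=3)
```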
null | null | 2404.09221 | null | null | http://arxiv.org/pdf/2404.09221v2 | 2024-06-05T05:00:35Z | 2024-04-14T11:49:38Z | Exploring and Improving Drafts in Blockwise Parallel Decoding | Despite the remarkable strides made by autoregressive language models, their potential is often hampered by the slow inference speeds inherent in sequential token generation. Blockwise parallel decoding (BPD) was proposed by Stern et al. as a method to improve inference speed of language models by simultaneously predicting multiple future tokens, termed block drafts, which are subsequently verified and conditionally accepted by the autoregressive model. This paper contributes to the understanding and improvement of block drafts in two ways. First, we analyze the token distributions produced by multiple prediction heads. Secondly, we leverage this analysis to develop algorithms to improve BPD inference speed by refining the block drafts using n-gram and neural language models. Experiments demonstrate that refined block drafts yield a +5-21% increase in block efficiency (i.e., the number of accepted tokens from the block draft) across diverse datasets. | [
"['Taehyeon Kim' 'Ananda Theertha Suresh' 'Kishore Papineni'\n 'Michael Riley' 'Sanjiv Kumar' 'Adrian Benton']"
]
|
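A minimal picture of the draft refinement idea in the entry above: score every combination of per-head token candidates under an external language model and keep the best block. The interface below (top-k lists per head, an n-gram log-probability callable) is hypothetical; the paper's algorithms are more refined, but the rescoring idea is the same.

```python
from itertools import product
from typing import Callable, Sequence

def refine_block_draft(
    head_topk: Sequence[Sequence[int]],
    ngram_logprob: Callable[[tuple, int], float],
    context: Sequence[int],
) -> tuple:
    """Pick the block draft with the highest n-gram LM score.

    head_topk[h] holds the top-k token candidates from prediction head h;
    enumerating all combinations is cheap for small k and block length.
    """
    best, best_score = None, float("-inf")
    for draft in product(*head_topk):
        score, ctx = 0.0, list(context)
        for tok in draft:
            score += ngram_logprob(tuple(ctx[-3:]), tok)  # trigram context
            ctx.append(tok)
        if score > best_score:
            best, best_score = draft, score
    return best

# Toy usage with a uniform stand-in "n-gram model".
draft = refine_block_draft([[5, 7], [2, 9], [4]],
                           lambda ctx, tok: -1.0, context=[1, 2, 3])
```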
null | null | 2404.09226 | null | null | http://arxiv.org/pdf/2404.09226v1 | 2024-04-14T12:09:47Z | 2024-04-14T12:09:47Z | Breast Cancer Image Classification Method Based on Deep Transfer
Learning | To address the issues of limited samples, time-consuming feature design, and low accuracy in the detection and classification of breast cancer pathological images, a breast cancer image classification algorithm combining deep learning and transfer learning is proposed. The algorithm is based on the DenseNet structure of deep neural networks, constructs a network model by introducing attention mechanisms, and trains on the enhanced dataset using multi-level transfer learning. Experimental results demonstrate that the algorithm achieves an efficiency of over 84.0% on the test set, with a significantly improved classification accuracy compared to previous models, making it applicable to medical breast cancer detection tasks. | [
"['Weimin Wang' 'Min Gao' 'Mingxuan Xiao' 'Xu Yan' 'Yufeng Li']"
]
|
null | null | 2404.09232 | null | null | http://arxiv.org/pdf/2404.09232v1 | 2024-04-14T12:22:42Z | 2024-04-14T12:22:42Z | MAP: Model Aggregation and Personalization in Federated Learning with
Incomplete Classes | In some real-world applications, data samples are usually distributed on local devices, where federated learning (FL) techniques are proposed to coordinate decentralized clients without directly sharing users' private data. FL commonly follows the parameter server architecture and contains multiple personalization and aggregation procedures. The natural data heterogeneity across clients, i.e., Non-I.I.D. data, challenges both the aggregation and personalization goals in FL. In this paper, we focus on a special kind of Non-I.I.D. scene where clients own incomplete classes, i.e., each client can only access a partial set of the whole class set. The server aims to aggregate a complete classification model that could generalize to all classes, while the clients are inclined to improve the performance of distinguishing their observed classes. For better model aggregation, we point out that the standard softmax will encounter several problems caused by missing classes and propose "restricted softmax" as an alternative. For better model personalization, we point out that the hard-won personalized models are not well exploited and propose "inherited private model" to store the personalization experience. Our proposed algorithm named MAP could simultaneously achieve the aggregation and personalization goals in FL. Abundant experimental studies verify the superiorities of our algorithm. | [
"['Xin-Chun Li' 'Shaoming Song' 'Yinchuan Li' 'Bingshuai Li' 'Yunfeng Shao'\n 'Yang Yang' 'De-Chuan Zhan']"
]
|
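One way to read the "restricted softmax" idea from the MAP entry above is to normalize only over the classes a client actually observes, so missing classes receive no gradient pressure. The sketch below is an interpretation of that idea with a hypothetical interface, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def restricted_cross_entropy(logits, targets, observed_classes):
    # Mask out classes the client never sees: their logits become -inf,
    # so the softmax assigns them zero probability and zero gradient.
    mask = torch.full_like(logits, float("-inf"))
    mask[:, observed_classes] = 0.0
    return F.cross_entropy(logits + mask, targets)

# Toy usage: a client that only observes classes 0-2 of a 10-class problem.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 3, (8,))  # labels drawn from the observed classes
loss = restricted_cross_entropy(logits, targets, observed_classes=[0, 1, 2])
loss.backward()
```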
null | null | 2404.09240 | null | null | http://arxiv.org/pdf/2404.09240v1 | 2024-04-14T12:59:35Z | 2024-04-14T12:59:35Z | Fault Detection in Mobile Networks Using Diffusion Models | In today's hyper-connected world, ensuring the reliability of telecom networks becomes increasingly crucial. Telecom networks encompass numerous underlying and intertwined software and hardware components, each providing different functionalities. To ensure the stability of telecom networks, telecom software, and hardware vendors developed several methods to detect any aberrant behavior in telecom networks and enable instant feedback and alerts. These approaches, although powerful, struggle to generalize due to the unsteady nature of the software-intensive embedded system and the complexity and diversity of multi-standard mobile networks. In this paper, we present a system to detect anomalies in telecom networks using a generative AI model. We evaluate several strategies using diffusion models to train the model for anomaly detection using multivariate time-series data. The contributions of this paper are threefold: (i) A proposal of a framework for utilizing diffusion models for time-series anomaly detection in telecom networks, (ii) A proposal of a particular Diffusion model architecture that outperforms other state-of-the-art techniques, (iii) Experiments on a real-world dataset to demonstrate that our model effectively provides explainable results, exposing some of its limitations and suggesting future research avenues to enhance its capabilities further. | [
"['Mohamad Nabeel' 'Doumitrou Daniil Nimara' 'Tahar Zanouda']"
]
|
null | null | 2404.09243 | null | null | http://arxiv.org/pdf/2404.09243v1 | 2024-04-14T13:08:21Z | 2024-04-14T13:08:21Z | LSROM: Learning Self-Refined Organizing Map for Fast Imbalanced
Streaming Data Clustering | Streaming data clustering is a popular research topic in the fields of data mining and machine learning. Compared to static data, streaming data, which is usually analyzed in data chunks, is more susceptible to the dynamic cluster imbalance issue. That is, the imbalance degree of clusters varies across streaming data chunks, compromising either the accuracy or the efficiency of streaming data analysis based on existing clustering methods. Therefore, we propose an efficient approach called Learning Self-Refined Organizing Map (LSROM) to handle the imbalanced streaming data clustering problem, where we propose an advanced SOM for representing the global data distribution. The constructed SOM is first refined to guide the partition of the dataset into many micro-clusters, avoiding missing small clusters in imbalanced data. An efficient merging of the micro-clusters is then conducted through quick retrieval based on the SOM, which can automatically yield the true number of imbalanced clusters. In comparison to existing imbalanced data clustering approaches, LSROM has a lower time complexity of $O(n \log n)$ while achieving very competitive clustering accuracy. Moreover, LSROM is interpretable and insensitive to hyper-parameters. Extensive experiments have verified its efficacy. | [
"['Yongqi Xu' 'Yujian Lee' 'Rong Zou' 'Yiqun Zhang' 'Yiu-Ming Cheung']"
]
|
null | null | 2404.09247 | null | null | http://arxiv.org/pdf/2404.09247v1 | 2024-04-14T13:17:32Z | 2024-04-14T13:17:32Z | Generalization Error Bounds for Learning under Censored Feedback | Generalization error bounds from learning theory provide statistical guarantees on how well an algorithm will perform on previously unseen data. In this paper, we characterize the impacts of data non-IIDness due to censored feedback (a.k.a. selective labeling bias) on such bounds. We first derive an extension of the well-known Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, which characterizes the gap between empirical and theoretical CDFs given IID data, to problems with non-IID data due to censored feedback. We then use this CDF error bound to provide a bound on the generalization error guarantees of a classifier trained on such non-IID data. We show that existing generalization error bounds (which do not account for censored feedback) fail to correctly capture the model's generalization guarantees, verifying the need for our bounds. We further analyze the effectiveness of (pure and bounded) exploration techniques, proposed by recent literature as a way to alleviate censored feedback, on improving our error bounds. Together, our findings illustrate how a decision maker should account for the trade-off between strengthening the generalization guarantees of an algorithm and the costs incurred in data collection when future data availability is limited by censored feedback. | [
"['Yifan Yang' 'Ali Payani' 'Parinaz Naghizadeh']"
]
|
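For reference, the classical IID baseline that the entry above extends is the Dvoretzky-Kiefer-Wolfowitz inequality: for the empirical CDF $\hat F_n$ of $n$ IID samples from $F$,

$$\Pr\left(\sup_{x}\left|\hat F_n(x) - F(x)\right| > \varepsilon\right) \le 2e^{-2n\varepsilon^2}.$$

The paper's contribution is a counterpart of this bound when samples arrive under censored feedback and are therefore not IID; the corrected bound itself is given in the paper and not reproduced here.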
null | null | 2404.09248 | null | null | http://arxiv.org/pdf/2404.09248v1 | 2024-04-14T13:19:40Z | 2024-04-14T13:19:40Z | Knowledgeable Agents by Offline Reinforcement Learning from Large
Language Model Rollouts | Reinforcement learning (RL) trains agents to accomplish complex tasks through environmental interaction data, but its capacity is also limited by the scope of the available data. To obtain a knowledgeable agent, a promising approach is to leverage the knowledge from large language models (LLMs). Despite previous studies combining LLMs with RL, seamless integration of the two components remains challenging due to their semantic gap. This paper introduces a novel method, Knowledgeable Agents from Language Model Rollouts (KALM), which extracts knowledge from LLMs in the form of imaginary rollouts that can be easily learned by the agent through offline reinforcement learning methods. The primary challenge of KALM lies in LLM grounding, as LLMs are inherently limited to textual data, whereas environmental data often comprise numerical vectors unseen to LLMs. To address this, KALM fine-tunes the LLM to perform various tasks based on environmental data, including bidirectional translation between natural language descriptions of skills and their corresponding rollout data. This grounding process enhances the LLM's comprehension of environmental dynamics, enabling it to generate diverse and meaningful imaginary rollouts that reflect novel skills. Initial empirical evaluations on the CLEVR-Robot environment demonstrate that KALM enables agents to complete complex rephrasings of task goals and extend their capabilities to novel tasks requiring unprecedented optimal behaviors. KALM achieves a success rate of 46% in executing tasks with unseen goals, substantially surpassing the 26% success rate achieved by baseline methods. Furthermore, KALM effectively enables the LLM to comprehend environmental dynamics, resulting in the generation of meaningful imaginary rollouts that reflect novel skills and demonstrate the seamless integration of large language models and reinforcement learning. | [
"['Jing-Cheng Pang' 'Si-Hang Yang' 'Kaiyuan Li' 'Jiaji Zhang'\n 'Xiong-Hui Chen' 'Nan Tang' 'Yang Yu']"
]
|
null | null | 2404.09249 | null | null | http://arxiv.org/pdf/2404.09249v1 | 2024-04-14T13:25:15Z | 2024-04-14T13:25:15Z | Test Code Generation for Telecom Software Systems using Two-Stage
Generative Model | In recent years, the evolution of Telecom towards achieving intelligent, autonomous, and open networks has led to an increasingly complex Telecom Software system, supporting various heterogeneous deployment scenarios, with multi-standard and multi-vendor support. As a result, it becomes a challenge for large-scale Telecom software companies to develop and test software for all deployment scenarios. To address these challenges, we propose a framework for Automated Test Generation for large-scale Telecom Software systems. We begin by generating Test Case Input data for test scenarios observed using a time-series Generative model trained on historical Telecom Network data during field trials. Additionally, the time-series Generative model helps in preserving the privacy of Telecom data. The generated time-series software performance data are then utilized with test descriptions written in natural language to generate Test Script using the Generative Large Language Model. Our comprehensive experiments on public datasets and Telecom datasets obtained from operational Telecom Networks demonstrate that the framework can effectively generate comprehensive test case data input and useful test code. | [
"['Mohamad Nabeel' 'Doumitrou Daniil Nimara' 'Tahar Zanouda']"
]
|
null | null | 2404.09256 | null | null | http://arxiv.org/pdf/2404.09256v1 | 2024-04-14T13:48:24Z | 2024-04-14T13:48:24Z | Foundational GPT Model for MEG | Deep learning techniques can be used to first train unsupervised models on large amounts of unlabelled data, before fine-tuning the models on specific tasks. This approach has seen massive success for various kinds of data, e.g. images, language, audio, and holds the promise of improving performance in various downstream tasks (e.g. encoding or decoding brain data). However, there has been limited progress taking this approach for modelling brain signals, such as Magneto-/electroencephalography (M/EEG). Here we propose two classes of deep learning foundational models that can be trained using forecasting of unlabelled MEG. First, we consider a modified Wavenet; and second, we consider a modified Transformer-based (GPT2) model. The modified GPT2 includes a novel application of tokenisation and embedding methods, allowing a model developed initially for the discrete domain of language to be applied to continuous multichannel time series data. We also extend the forecasting framework to include condition labels as inputs, enabling better modelling (encoding) of task data. We compare the performance of these deep learning models with standard linear autoregressive (AR) modelling on MEG data. This shows that GPT2-based models provide better modelling capabilities than Wavenet and linear AR models, by better reproducing the temporal, spatial and spectral characteristics of real data and evoked activity in task data. We show how the GPT2 model scales well to multiple subjects, while adapting its model to each subject through subject embedding. Finally, we show how such a model can be useful in downstream decoding tasks through data simulation. All code is available on GitHub (https://github.com/ricsinaruto/MEG-transfer-decoding). | [
"['Richard Csaky' 'Mats W. J. van Es' 'Oiwi Parker Jones' 'Mark Woolrich']"
]
|
null | null | 2404.09275 | null | null | http://arxiv.org/pdf/2404.09275v1 | 2024-04-14T14:51:44Z | 2024-04-14T14:51:44Z | TrafficVLM: A Controllable Visual Language Model for Traffic Video
Captioning | Traffic video description and analysis have received much attention recently due to the growing demand for efficient and reliable urban surveillance systems. Most existing methods only focus on locating traffic event segments, which severely lack descriptive details related to the behaviour and context of all the subjects of interest in the events. In this paper, we present TrafficVLM, a novel multi-modal dense video captioning model for vehicle ego camera view. TrafficVLM models traffic video events at different levels of analysis, both spatially and temporally, and generates long fine-grained descriptions for the vehicle and pedestrian at different phases of the event. We also propose a conditional component for TrafficVLM to control the generation outputs and a multi-task fine-tuning paradigm to enhance TrafficVLM's learning capability. Experiments show that TrafficVLM performs well on both vehicle and overhead camera views. Our solution achieved outstanding results in Track 2 of the AI City Challenge 2024, ranking us third in the challenge standings. Our code is publicly available at https://github.com/quangminhdinh/TrafficVLM. | [
"['Quang Minh Dinh' 'Minh Khoi Ho' 'Anh Quan Dang' 'Hung Phong Tran']"
]
|
null | null | 2404.09302 | null | null | http://arxiv.org/pdf/2404.09302v1 | 2024-04-14T16:57:41Z | 2024-04-14T16:57:41Z | High Significant Fault Detection in Azure Core Workload Insights | Azure Core workload insights have time-series data with different metric units. Faults or anomalies are observed in these time-series data with respect to metric name, resource region, dimensions, and the dimension values associated with the data. For Azure Core, an important task is to highlight faults or anomalies on a dashboard in a form users can perceive easily. The number of anomalies reported should be highly significant and limited, e.g., 5-20 anomalies reported per hour. The reported anomalies will have significant user perception and high reconstruction error in any time-series forecasting model. Hence, our task is to automatically identify 'high significant anomalies' and their associated information for user perception. | [
"['Pranay Lohia' 'Laurent Boue' 'Sharath Rangappa' 'Vijay Agneeswaran']"
]
|
null | null | 2404.09326 | null | null | http://arxiv.org/pdf/2404.09326v2 | 2024-04-17T07:46:28Z | 2024-04-14T18:57:38Z | Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision
Transformers | Few-shot knowledge distillation recently emerged as a viable approach to harness the knowledge of large-scale pre-trained models, using limited data and computational resources. In this paper, we propose a novel few-shot feature distillation approach for vision transformers. Our approach is based on two key steps. Leveraging the fact that vision transformers have a consistent depth-wise structure, we first copy the weights from intermittent layers of existing pre-trained vision transformers (teachers) into shallower architectures (students), where the intermittence factor controls the complexity of the student transformer with respect to its teacher. Next, we employ an enhanced version of Low-Rank Adaptation (LoRA) to distill knowledge into the student in a few-shot scenario, aiming to recover the information processing carried out by the skipped teacher layers. We present comprehensive experiments with supervised and self-supervised transformers as teachers, on five data sets from various domains, including natural, medical and satellite images. The empirical results confirm the superiority of our approach over competitive baselines. Moreover, the ablation results demonstrate the usefulness of each component of the proposed pipeline. | [
"['Diana-Nicoleta Grigore' 'Mariana-Iuliana Georgescu' 'Jon Alvarez Justo'\n 'Tor Johansen' 'Andreea Iuliana Ionescu' 'Radu Tudor Ionescu']"
]
|
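The LoRA building block that the entry above enhances can be sketched in a few lines: freeze a pre-trained linear layer and learn a low-rank additive update. This is the generic LoRA formulation, not the paper's enhanced variant; the rank and scaling values are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Freeze the base weight W and learn a low-rank update B @ A,
    # scaled by alpha / r (standard LoRA initialization: A random, B zero).
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```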
null | null | 2404.09331 | null | null | http://arxiv.org/abs/2404.09331v2 | 2024-06-18T08:36:11Z | 2024-04-14T19:06:00Z | SNN4Agents: A Framework for Developing Energy-Efficient Embodied Spiking
Neural Networks for Autonomous Agents | Recent trends have shown that autonomous agents, such as Autonomous Ground Vehicles (AGVs), Unmanned Aerial Vehicles (UAVs), and mobile robots, effectively improve human productivity in solving diverse tasks. However, since these agents are typically powered by portable batteries, they require extremely low power/energy consumption to operate over a long lifespan. To solve this challenge, neuromorphic computing has emerged as a promising solution, where bio-inspired Spiking Neural Networks (SNNs) use spikes from event-based cameras or data conversion pre-processing to perform sparse computations efficiently. However, the studies of SNN deployments for autonomous agents are still at an early stage. Hence, the optimization stages for enabling efficient embodied SNN deployments for autonomous agents have not been defined systematically. Toward this, we propose a novel framework called SNN4Agents that consists of a set of optimization techniques for designing energy-efficient embodied SNNs targeting autonomous agent applications. Our SNN4Agents employs weight quantization, timestep reduction, and attention window reduction to jointly improve the energy efficiency, reduce the memory footprint, and optimize the processing latency, while maintaining high accuracy. In the evaluation, we investigate use cases of event-based car recognition, and explore the trade-offs among accuracy, latency, memory, and energy consumption. The experimental results show that our proposed framework can maintain high accuracy (i.e., 84.12% accuracy) with 68.75% memory saving, 3.58x speed-up, and 4.03x energy efficiency improvement as compared to the state-of-the-art work for the NCARS dataset. In this manner, our SNN4Agents framework paves the way toward enabling energy-efficient embodied SNN deployments for autonomous agents. | [
"['Rachmad Vidya Wicaksana Putra' 'Alberto Marchisio' 'Muhammad Shafique']"
]
|
null | null | 2404.09339 | null | null | http://arxiv.org/pdf/2404.09339v1 | 2024-04-14T19:45:47Z | 2024-04-14T19:45:47Z | Towards Practical Tool Usage for Continually Learning LLMs | Large language models (LLMs) show an innate skill for solving language based tasks. But insights have suggested an inability to adjust for information or task-solving skills becoming outdated, as their knowledge, stored directly within their parameters, remains static in time. Tool use helps by offloading work to systems that the LLM can access through an interface, but LLMs that use them still must adapt to nonstationary environments for prolonged use, as new tools can emerge and existing tools can change. Nevertheless, tools require less specialized knowledge, therefore we hypothesize they are better suited for continual learning (CL) as they rely less on parametric memory for solving tasks and instead focus on learning when to apply pre-defined tools. To verify this, we develop a synthetic benchmark and follow this by aggregating existing NLP tasks to form a more realistic testing scenario. While we demonstrate scaling model size is not a solution, regardless of tool usage, continual learning techniques can enable tool LLMs to both adapt faster while forgetting less, highlighting their potential as continual learners. | [
"['Jerry Huang' 'Prasanna Parthasarathi' 'Mehdi Rezagholizadeh'\n 'Sarath Chandar']"
]
|
null | null | 2404.09349 | null | null | http://arxiv.org/pdf/2404.09349v2 | 2024-07-10T17:32:29Z | 2024-04-14T20:14:38Z | Adversarial Robustness Limits via Scaling-Law and Human-Alignment
Studies | This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$% ($70$%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$% AutoAttack accuracy ($+3$% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$%, which we show to be attributable to $\ell_{\infty}$-constrained attacks' generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research. | [
"['Brian R. Bartoldson' 'James Diffenderfer' 'Konstantinos Parasyris'\n 'Bhavya Kailkhura']"
]
|
null | null | 2404.09350 | null | null | http://arxiv.org/pdf/2404.09350v1 | 2024-04-14T20:17:14Z | 2024-04-14T20:17:14Z | Machine learning-based identification of Gaia astrometric exoplanet
orbits | The third Gaia data release (DR3) contains $\sim$170 000 astrometric orbit solutions of two-body systems located within $\sim$500 pc of the Sun. Determining component masses in these systems, in particular of stars hosting exoplanets, usually hinges on incorporating complementary observations in addition to the astrometry, e.g. spectroscopy and radial velocities. Several DR3 two-body systems with exoplanet, brown-dwarf, stellar, and black-hole components have been confirmed in this way. We developed an alternative machine learning approach that uses only the DR3 orbital solutions with the aim of identifying the best candidates for exoplanets and brown-dwarf companions. Based on confirmed substellar companions in the literature, we use semi-supervised anomaly detection methods in combination with extreme gradient boosting and random forest classifiers to determine likely low-mass outliers in the population of non-single sources. We employ and study feature importance to investigate the method's plausibility and produced a list of 22 best candidates of which four are exoplanet candidates and another five are either very-massive brown dwarfs or very-low mass stars. Three candidates, including one initial exoplanet candidate, correspond to false-positive solutions where longer-period binary star motion was fitted with a biased shorter-period orbit. We highlight nine candidates with brown-dwarf companions for preferential follow-up. One candidate companion around the Sun-like star G 15-6 could be confirmed as a genuine brown dwarf using external radial-velocity data. This new approach is a powerful complement to the traditional identification methods for substellar companions among Gaia astrometric orbits. It is particularly relevant in the context of Gaia DR4 and its expected exoplanet discovery yield. | [
"['Johannes Sahlmann' 'Pablo Gómez']"
]
|
null | null | 2404.09359 | null | null | http://arxiv.org/pdf/2404.09359v3 | 2024-04-24T15:07:04Z | 2024-04-14T21:14:47Z | Exploring Feedback Generation in Automated Skeletal Movement Assessment:
A Comprehensive Overview | The application of machine-learning solutions to movement assessment from skeleton videos has attracted significant research attention in recent years. This advancement has made rehabilitation at home more accessible, utilizing movement assessment algorithms that can operate on affordable equipment for human pose detection and analysis from 2D or 3D videos. While the primary objective of automatic assessment tasks is to score movements, the automatic generation of feedback highlighting key movement issues has the potential to significantly enhance and accelerate the rehabilitation process. While numerous research works exist in the field of automatic movement assessment, only a handful address feedback generation. In this study, we explain the types of feedback that can be generated, review existing solutions for automatic feedback generation, and discuss future research directions. To our knowledge, this is the first comprehensive review of feedback generation in skeletal movement assessment. | [
"['Tal Hakim']"
]
|
null | null | 2404.09363 | null | null | http://arxiv.org/pdf/2404.09363v1 | 2024-04-14T21:30:00Z | 2024-04-14T21:30:00Z | Momentum-based gradient descent methods for Lie groups | Polyak's Heavy Ball (PHB; Polyak, 1964), a.k.a. Classical Momentum, and Nesterov's Accelerated Gradient (NAG; Nesterov, 1983) are well-known examples of momentum-descent methods for optimization. While the latter outperforms the former, only generalizations of PHB-like methods to nonlinear spaces have been described in the literature. We propose here a generalization of NAG-like methods for Lie group optimization based on the variational one-to-one correspondence between classical and accelerated momentum methods (Campos et al., 2023). Numerical experiments are shown. | [
"['Cédric M. Campos' 'David Martín de Diego' 'José Torrente']"
]
|
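The Euclidean updates being generalized in the entry above are easy to state side by side. The sketch below shows the textbook PHB and NAG steps on a flat space; the paper's contribution is transporting the NAG-like scheme to Lie group manifolds, which this flat-space code does not capture.

```python
import numpy as np

def heavy_ball_step(x, v, grad_f, lr=0.1, momentum=0.9):
    # PHB: v_{k+1} = mu * v_k - lr * grad f(x_k); x_{k+1} = x_k + v_{k+1}
    v = momentum * v - lr * grad_f(x)
    return x + v, v

def nesterov_step(x, v, grad_f, lr=0.1, momentum=0.9):
    # NAG: evaluate the gradient at the look-ahead point x_k + mu * v_k
    v = momentum * v - lr * grad_f(x + momentum * v)
    return x + v, v

# Minimize f(x) = 0.5 * ||x||^2 from the same start with both methods.
grad = lambda x: x
x_hb, v_hb = np.array([5.0, -3.0]), np.zeros(2)
x_nag, v_nag = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(50):
    x_hb, v_hb = heavy_ball_step(x_hb, v_hb, grad)
    x_nag, v_nag = nesterov_step(x_nag, v_nag, grad)
```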
null | null | 2404.09365 | null | null | http://arxiv.org/pdf/2404.09365v1 | 2024-04-14T21:37:39Z | 2024-04-14T21:37:39Z | Hierarchical Attention Models for Multi-Relational Graphs | We present Bi-Level Attention-Based Relational Graph Convolutional Networks (BR-GCN), unique neural network architectures that utilize masked self-attentional layers with relational graph convolutions, to effectively operate on highly multi-relational data. BR-GCN models use bi-level attention to learn node embeddings through (1) node-level attention, and (2) relation-level attention. The node-level self-attentional layers use intra-relational graph interactions to learn relation-specific node embeddings using a weighted aggregation of neighborhood features in a sparse subgraph region. The relation-level self-attentional layers use inter-relational graph interactions to learn the final node embeddings using a weighted aggregation of relation-specific node embeddings. The BR-GCN bi-level attention mechanism extends Transformer-based multiplicative attention from the natural language processing (NLP) domain, and Graph Attention Networks (GAT)-based attention, to large-scale heterogeneous graphs (HGs). On node classification, BR-GCN outperforms baselines from 0.29% to 14.95% as a stand-alone model, and on link prediction, BR-GCN outperforms baselines from 0.02% to 7.40% as an auto-encoder model. We also conduct ablation studies to evaluate the quality of BR-GCN's relation-level attention and discuss how its learning of graph structure may be transferred to enrich other graph neural networks (GNNs). Through various experiments, we show that BR-GCN's attention mechanism is both scalable and more effective in learning compared to state-of-the-art GNNs. | [
"['Roshni G. Iyer' 'Wei Wang' 'Yizhou Sun']"
]
|
null | null | 2404.09384 | null | null | http://arxiv.org/pdf/2404.09384v1 | 2024-04-14T23:45:23Z | 2024-04-14T23:45:23Z | Tasks People Prompt: A Taxonomy of LLM Downstream Tasks in Software
Verification and Falsification Approaches | Prompting has become one of the main approaches to leverage emergent capabilities of Large Language Models [Brown et al. NeurIPS 2020, Wei et al. TMLR 2022, Wei et al. NeurIPS 2022]. During the last year, researchers and practitioners have been playing with prompts to see how to make the most of LLMs. By homogeneously dissecting 80 papers, we investigate in depth how software testing and verification research communities have been abstractly architecting their LLM-enabled solutions. More precisely, first, we want to validate whether downstream tasks are an adequate concept to convey the blueprint of prompt-based solutions. We also aim at identifying the number and nature of such tasks in solutions. For that goal, we develop a novel downstream task taxonomy that enables pinpointing some engineering patterns in a rather varied spectrum of Software Engineering problems that encompasses testing, fuzzing, debugging, vulnerability detection, static analysis and program verification approaches. | [
"['Víctor A. Braberman' 'Flavia Bonomo-Braberman' 'Yiannis Charalambous'\n 'Juan G. Colonna' 'Lucas C. Cordeiro' 'Rosiane de Freitas']"
]
|
null | null | 2404.09386 | null | null | http://arxiv.org/pdf/2404.09386v2 | 2024-06-11T01:59:25Z | 2024-04-15T00:11:01Z | Integrating Marketing Channels into Quantile Transformation and Bayesian
Optimization of Ensemble Kernels for Sales Prediction with Gaussian Process
Models | This study introduces an innovative Gaussian Process (GP) model utilizing an ensemble kernel that integrates Radial Basis Function (RBF), Rational Quadratic, and Matérn kernels for product sales forecasting. By applying Bayesian optimization, we efficiently find the optimal weights for each kernel, enhancing the model's ability to handle complex sales data patterns. Our approach significantly outperforms traditional GP models, achieving a notable 98% accuracy and superior performance across key metrics including Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Coefficient of Determination ($R^2$). This advancement underscores the effectiveness of ensemble kernels and Bayesian optimization in improving predictive accuracy, offering profound implications for machine learning applications in sales forecasting. | [
"['Shahin Mirshekari' 'Negin Hayeri Motedayen' 'Mohammad Ensaf']"
]
|
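A weighted sum of RBF, Rational Quadratic, and Matérn kernels like the one in the entry above is straightforward to assemble with scikit-learn. The weights below are placeholders for the values the paper obtains via Bayesian optimization, and the data are synthetic stand-ins for the sales features.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, Matern

# Hypothetical weights; the paper tunes these via Bayesian optimization.
w_rbf, w_rq, w_matern = 0.5, 0.3, 0.2
kernel = (w_rbf * RBF(length_scale=1.0)
          + w_rq * RationalQuadratic(length_scale=1.0, alpha=1.0)
          + w_matern * Matern(length_scale=1.0, nu=1.5))

# Synthetic stand-in data: one feature (e.g. marketing spend) -> "sales".
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=50)

gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
mean, std = gp.predict(np.array([[5.0]]), return_std=True)
```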
null | null | 2404.09387 | null | null | http://arxiv.org/pdf/2404.09387v2 | 2024-06-20T16:20:37Z | 2024-04-15T00:12:27Z | RankCLIP: Ranking-Consistent Language-Image Pretraining | Self-supervised contrastive learning models, such as CLIP, have set new benchmarks for vision-language models in many downstream tasks. However, their dependency on rigid one-to-one mappings overlooks the complex and often multifaceted relationships between and within texts and images. To this end, we introduce RANKCLIP, a novel pretraining method that extends beyond the rigid one-to-one matching framework of CLIP and its variants. By extending the traditional pair-wise loss to list-wise, and leveraging both in-modal and cross-modal ranking consistency, RANKCLIP improves the alignment process, enabling it to capture the nuanced many-to-many relationships between and within each modality. Through comprehensive experiments, we demonstrate the effectiveness of RANKCLIP in various downstream tasks, notably achieving significant gains in zero-shot classifications over state-of-the-art methods, underscoring the importance of this enhanced learning process. | [
"['Yiming Zhang' 'Zhuokai Zhao' 'Zhaorun Chen' 'Zhili Feng' 'Zenghui Ding'\n 'Yining Sun']"
]
|
null | null | 2404.09389 | null | null | http://arxiv.org/pdf/2404.09389v1 | 2024-04-15T00:19:47Z | 2024-04-15T00:19:47Z | Masked and Shuffled Blind Spot Denoising for Real-World Images | We introduce a novel approach to single image denoising based on the Blind Spot Denoising principle, which we call MAsked and SHuffled Blind Spot Denoising (MASH). We focus on the case of correlated noise, which often plagues real images. MASH is the result of a careful analysis to determine the relationships between the level of blindness (masking) of the input and the (unknown) noise correlation. Moreover, we introduce a shuffling technique to weaken the local correlation of noise, which in turn yields an additional denoising performance improvement. We evaluate MASH via extensive experiments on real-world noisy image datasets. We demonstrate on par or better results compared to existing self-supervised denoising methods. | [
"['Hamadi Chihaoui' 'Paolo Favaro']"
]
|