categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2404.04567 | null | null | http://arxiv.org/pdf/2404.04567v1 | 2024-04-06T09:30:38Z | 2024-04-06T09:30:38Z | Optimization of Lightweight Malware Detection Models For AIoT Devices | Malware intrusion is problematic for Internet of Things (IoT) and Artificial Intelligence of Things (AIoT) devices as they often reside in an ecosystem of connected devices, such as a smart home. If any devices are infected, the whole ecosystem can be compromised. Although various Machine Learning (ML) models are deployed to detect malware and network intrusion, generally speaking, robust high-accuracy models tend to require resources not found in all IoT devices, compared to less robust models defined by weak learners. In order to combat this issue, Fadhilla proposed a meta-learner ensemble model that combines the less robust predictions of weak learner ML models to produce a highly robust meta-learning ensemble model. The main problem with the prior research is that it cannot be deployed in low-end AIoT devices due to the limited resources comprising processing power, storage, and memory (the required libraries quickly exhaust low-end AIoT devices' resources). Hence, this research aims to optimize the proposed super learner meta-learning ensemble model to make it viable for low-end AIoT devices. We show the library and ML model memory requirements associated with each optimization stage and emphasize that optimization of current ML models is necessitated for low-end AIoT devices. Our results demonstrate that we can obtain similar accuracy and False Positive Rate (FPR) metrics from high-end AIoT devices running the derived ML model, with a lower inference duration and smaller memory footprint. | [
"['Felicia Lo' 'Shin-Ming Cheng' 'Rafael Kaliski']"
]
|
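The abstract above centers on a super-learner ensemble built from weak learners. As a rough sketch of that general technique only (not the authors' actual pipeline), a stacked ensemble of low-memory scikit-learn models could look like this; the synthetic dataset stands in for real malware traffic features:

```python
# Minimal sketch of a super-learner (stacked) ensemble built from weak,
# low-memory base models -- an illustration of the general technique only,
# not the paper's actual pipeline. The data below is a synthetic placeholder.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weak learners chosen for tiny memory footprints on low-end devices.
weak_learners = [
    ("stump", DecisionTreeClassifier(max_depth=2)),
    ("nb", GaussianNB()),
]
# The meta-learner combines the weak learners' predictions.
meta = StackingClassifier(estimators=weak_learners,
                          final_estimator=LogisticRegression())
meta.fit(X_tr, y_tr)
print("accuracy:", meta.score(X_te, y_te))
```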
null | null | 2404.04575 | null | null | http://arxiv.org/pdf/2404.04575v3 | 2024-06-16T12:43:39Z | 2024-04-06T09:55:03Z | To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO | The temperature parameter plays a profound role during training and/or inference with large foundation models (LFMs) such as large language models (LLMs) and CLIP models. Particularly, it adjusts the logits in the softmax function in LLMs, which is crucial for next token generation, and it scales the similarities in the contrastive loss for training CLIP models. A significant question remains: Is it viable to learn a neural network to predict a personalized temperature of any input data for enhancing LFMs? In this paper, we present a principled framework for learning a small yet generalizable temperature prediction network (TempNet) to improve LFMs. Our solution is composed of a novel learning framework with a robust loss underpinned by constrained distributionally robust optimization (DRO), and a properly designed TempNet with theoretical inspiration. TempNet can be trained together with a large foundation model from scratch or learned separately given a pretrained foundation model. It is not only useful for predicting personalized temperature to promote the training of LFMs but also generalizable and transferable to new tasks. Our experiments on LLMs and CLIP models demonstrate that TempNet greatly improves the performance of existing solutions or models, e.g., Table 1. The code to reproduce the experimental results in this paper can be found at https://github.com/zhqiu/TempNet. | [
"['Zi-Hao Qiu' 'Siqi Guo' 'Mao Xu' 'Tuo Zhao' 'Lijun Zhang' 'Tianbao Yang']"
]
|
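The mechanism TempNet builds on is plain temperature scaling of softmax logits. A minimal sketch of that underlying operation (not the paper's learned TempNet), showing how the temperature tau sharpens or flattens the output distribution:

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, tau: float) -> np.ndarray:
    """Temperature-scaled softmax: tau < 1 sharpens, tau > 1 flattens."""
    z = logits / tau
    z = z - z.max()               # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
for tau in (0.5, 1.0, 2.0):
    print(tau, softmax_with_temperature(logits, tau).round(3))
```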
null | null | 2404.04612 | null | null | http://arxiv.org/pdf/2404.04612v1 | 2024-04-06T12:40:21Z | 2024-04-06T12:40:21Z | Spectral Graph Pruning Against Over-Squashing and Over-Smoothing | Message Passing Graph Neural Networks are known to suffer from two problems that are sometimes believed to be diametrically opposed: over-squashing and over-smoothing. The former results from topological bottlenecks that hamper the information flow from distant nodes and are mitigated by spectral gap maximization, primarily by means of edge additions. However, such additions often promote over-smoothing that renders nodes of different classes less distinguishable. Inspired by the Braess phenomenon, we argue that deleting edges can address over-squashing and over-smoothing simultaneously. This insight explains how edge deletions can improve generalization, thus connecting spectral gap optimization to a seemingly disconnected objective of reducing computational resources by pruning graphs for lottery tickets. To this end, we propose a more effective spectral gap optimization framework to add or delete edges and demonstrate its effectiveness on large heterophilic datasets. | [
"['Adarsh Jamadandi' 'Celia Rubio-Madrigal' 'Rebekka Burkholz']"
]
|
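The quantity this abstract optimizes is the spectral gap of the graph Laplacian. A small sketch, assuming networkx and numpy, of how individual edge deletions change that gap on a graph with an obvious bottleneck; this illustrates the measured quantity only, not the paper's optimization framework:

```python
# Sketch: measure how deleting an edge changes the spectral gap of a graph.
import networkx as nx
import numpy as np

def spectral_gap(G: nx.Graph) -> float:
    # Second-smallest eigenvalue of the normalized Laplacian
    # (the smallest is zero for a connected graph).
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.sort(np.linalg.eigvalsh(L))[1]

G = nx.barbell_graph(5, 1)  # two cliques joined by a path: a bottleneck
base = spectral_gap(G)
for u, v in list(G.edges()):
    H = G.copy()
    H.remove_edge(u, v)
    if nx.is_connected(H):  # only consider deletions that keep G connected
        print(f"delete ({u},{v}): gap {base:.4f} -> {spectral_gap(H):.4f}")
```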
null | null | 2404.04615 | null | null | http://arxiv.org/pdf/2404.04615v1 | 2024-04-06T12:49:09Z | 2024-04-06T12:49:09Z | PointSAGE: Mesh-independent superresolution approach to fluid flow predictions | Computational Fluid Dynamics (CFD) serves as a powerful tool for simulating fluid flow across diverse industries. High-resolution CFD simulations offer valuable insights into fluid behavior and flow patterns, aiding in optimizing design features or enhancing system performance. However, as resolution increases, computational data requirements and time increase proportionately. This presents a persistent challenge in CFD. Recently, efforts have been directed towards accurately predicting fine-mesh simulations using coarse-mesh simulations, with geometry and boundary conditions as input. Drawing inspiration from models designed for super-resolution, deep learning techniques like UNets have been applied to address this challenge. However, these existing methods are limited to structured data and fail on unstructured meshes, since convolutions cannot be applied directly. Additionally, incorporating geometry/mesh information in the training process introduces drawbacks such as increased data requirements, challenges in generalizing to unseen geometries for the same physical phenomena, and issues with robustness to mesh distortions. To address these concerns, we propose a novel framework, PointSAGE, a mesh-independent network that leverages the unordered, mesh-less nature of point clouds to learn the complex fluid flow and directly predict fine simulations, completely neglecting mesh information. Utilizing an adaptable framework, the model accurately predicts the fine data across diverse point cloud sizes, regardless of the training dataset's dimension. We have evaluated the effectiveness of PointSAGE on diverse datasets in different scenarios, demonstrating notable results and a significant acceleration in computational time in generating fine simulations compared to standard CFD techniques. | [
"['Rajat Sarkar' 'Krishna Sai Sudhir Aripirala' 'Vishal Sudam Jadhav'\n 'Sagar Srinivas Sakhinana' 'Venkataramana Runkana']"
]
|
null | null | 2404.04616 | null | null | http://arxiv.org/pdf/2404.04616v2 | 2024-06-18T08:16:52Z | 2024-04-06T12:49:20Z | Vanishing Variance Problem in Fully Decentralized Neural-Network Systems | Federated learning and gossip learning are emerging methodologies designed to mitigate data privacy concerns by retaining training data on client devices and exclusively sharing locally-trained machine learning (ML) models with others. The primary distinction between the two lies in their approach to model aggregation: federated learning employs a centralized parameter server, whereas gossip learning adopts a fully decentralized mechanism, enabling direct model exchanges among nodes. This decentralized nature often positions gossip learning as less efficient compared to federated learning. Both methodologies involve a critical step: computing a representation of received ML models and integrating this representation into the existing model. Conventionally, this representation is derived by averaging the received models, exemplified by the FedAVG algorithm. Our findings suggest that this averaging approach inherently introduces a potential delay in model convergence. We identify the underlying cause and refer to it as the "vanishing variance" problem, where averaging across uncorrelated ML models undermines the optimal variance established by the Xavier weight initialization. Unlike federated learning where the central server ensures model correlation, and unlike traditional gossip learning which circumvents this problem through model partitioning and sampling, our research introduces a variance-corrected model averaging algorithm. This novel algorithm preserves the optimal variance needed during model averaging, irrespective of network topology or non-IID data distributions. Our extensive simulation results demonstrate that our approach enables gossip learning to achieve convergence efficiency comparable to that of federated learning. | [
"['Yongding Tian' 'Zaid Al-Ars' 'Maksim Kitsak' 'Peter Hofstee']"
]
|
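The "vanishing variance" effect described above can be checked numerically: averaging N uncorrelated Xavier-initialized weight matrices shrinks their variance by a factor of N. A toy demonstration, with a naive sqrt(N) rescaling standing in for the paper's (more involved) variance-corrected averaging algorithm:

```python
# Numerical sketch of the "vanishing variance" effect. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, n_models = 256, 256, 10
std = np.sqrt(2.0 / (fan_in + fan_out))          # Xavier/Glorot std

models = [rng.normal(0.0, std, (fan_in, fan_out)) for _ in range(n_models)]
avg = np.mean(models, axis=0)                    # FedAVG-style averaging
corrected = avg * np.sqrt(n_models)              # naive variance correction

print("target var:   ", std**2)
print("averaged var: ", avg.var())               # ~ std**2 / n_models
print("corrected var:", corrected.var())         # ~ std**2 again
```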
null | null | 2404.04623 | null | null | http://arxiv.org/pdf/2404.04623v1 | 2024-04-06T13:13:45Z | 2024-04-06T13:13:45Z | An Automated Machine Learning Approach to Inkjet Printed Component Analysis: A Step Toward Smart Additive Manufacturing | In this paper, we present a machine learning based architecture for microwave characterization of inkjet printed components on flexible substrates. Our proposed architecture uses several machine learning algorithms and automatically selects the best algorithm to extract the material parameters (ink conductivity and dielectric properties) from on-wafer measurements. Initially, the mutual dependence between material parameters of the inkjet printed coplanar waveguides (CPWs) and EM-simulated propagation constants is utilized to train the machine learning models. Next, these machine learning models along with measured propagation constants are used to extract the ink conductivity and dielectric properties of the test prototypes. To demonstrate the applicability of our proposed approach, we compare and contrast four heuristic based machine learning models. It is shown that eXtreme Gradient Boosted Trees Regressor (XGB) and Light Gradient Boosting (LGB) algorithms perform best for the characterization problem under study. | [
"['Abhishek Sahu' 'Peter H. Aaen' 'Praveen Damacharla']"
]
|
null | null | 2404.04642 | null | null | http://arxiv.org/pdf/2404.04642v1 | 2024-04-06T14:27:22Z | 2024-04-06T14:27:22Z | Power-Efficient Image Storage: Leveraging Super Resolution Generative Adversarial Network for Sustainable Compression and Reduced Carbon Footprint | In recent years, large-scale adoption of cloud storage solutions has revolutionized the way we think about digital data storage. However, the exponential increase in data volume, especially images, has raised environmental concerns regarding power and resource consumption, as well as the rising digital carbon footprint emissions. The aim of this research is to propose a methodology for cloud-based image storage by integrating image compression technology with Super-Resolution Generative Adversarial Networks (SRGAN). Rather than storing images in their original format directly on the cloud, our approach involves initially reducing the image size through compression and downsizing techniques before storage. Upon request, these compressed images are retrieved and processed by SRGAN to regenerate images. The efficacy of the proposed method is evaluated in terms of PSNR and SSIM metrics. Additionally, a mathematical analysis is given to assess power consumption and carbon footprint. The proposed data compression technique provides a significant solution to achieve a reasonable trade-off between environmental sustainability and industrial efficiency. | [
"['Ashok Mondal' 'Satyam Singh']"
]
|
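The storage pipeline described above (downscale before storing, super-resolve on retrieval, score the round trip with PSNR) can be sketched as follows; bicubic upscaling is a stand-in for the SRGAN generator the paper actually uses, and the file path is a placeholder:

```python
# Sketch of the pipeline: downscale before storing, upscale on retrieval,
# and score the round trip with PSNR. Illustration only.
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("photo.png")               # placeholder path
w, h = original.size
stored = original.resize((w // 4, h // 4), Image.BICUBIC)   # what the cloud keeps
restored = stored.resize((w, h), Image.BICUBIC)             # SRGAN would go here

print("PSNR after round trip:", psnr(np.array(original), np.array(restored)))
```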
null | null | 2404.04645 | null | null | http://arxiv.org/pdf/2404.04645v1 | 2024-04-06T14:34:46Z | 2024-04-06T14:34:46Z | HyperTTS: Parameter Efficient Adaptation in Text to Speech using Hypernetworks | Neural speech synthesis, or text-to-speech (TTS), aims to transform a signal from the text domain to the speech domain. While developing TTS architectures that train and test on the same set of speakers has seen significant improvements, out-of-domain speaker performance still faces enormous limitations. Domain adaptation on a new set of speakers can be achieved by fine-tuning the whole model for each new domain, thus making it parameter-inefficient. This problem can be solved by Adapters, which provide a parameter-efficient alternative to domain adaptation. Although Adapters are well established in NLP, speech synthesis has not seen much improvement from them. In this work, we present HyperTTS, which comprises a small learnable network, a "hypernetwork", that generates parameters of the Adapter blocks, allowing us to condition Adapters on speaker representations and making them dynamic. Extensive evaluations in two domain adaptation settings demonstrate its effectiveness in achieving state-of-the-art performance in the parameter-efficient regime. We also compare different variants of HyperTTS, comparing them with baselines in different studies. Promising results on the dynamic adaptation of adapter parameters using hypernetworks open up new avenues for domain-generic multi-speaker TTS systems. The audio samples and code are available at https://github.com/declare-lab/HyperTTS. | [
"['Yingting Li' 'Rishabh Bhardwaj' 'Ambuj Mehrish' 'Bo Cheng'\n 'Soujanya Poria']"
]
|
null | null | 2404.04656 | null | null | http://arxiv.org/pdf/2404.04656v1 | 2024-04-06T15:20:59Z | 2024-04-06T15:20:59Z | Binary Classifier Optimization for Large Language Model Alignment | Aligning Large Language Models (LLMs) to human preferences through preference optimization has been crucial but labor-intensive, requiring evaluators to compare, for each prompt, a chosen and a rejected text completion. Recently, Kahneman-Tversky Optimization (KTO) has demonstrated that LLMs can be aligned using merely binary "thumbs-up" or "thumbs-down" signals on each prompt-completion pair. In this paper, we present theoretical foundations to explain the successful alignment achieved through these binary signals. Our analysis uncovers a new perspective: optimizing a binary classifier, whose logit is a reward, implicitly induces minimizing the Direct Preference Optimization (DPO) loss. In the process of this discovery, we identified two techniques for effective alignment: reward shift and underlying distribution matching. Consequently, we propose a new algorithm, \textit{Binary Classifier Optimization}, that integrates the two techniques. We validate our methodology in two settings: first, on a paired preference dataset, where our method performs on par with DPO and KTO; and second, on binary signal datasets simulating real-world conditions with divergent underlying distributions between thumbs-up and thumbs-down data. Our model consistently demonstrates effective and robust alignment across two base LLMs and three different binary signal datasets, showcasing the strength of our approach to learning from binary feedback. | [
"['Seungjae Jung' 'Gunsoo Han' 'Daniel Wontae Nam' 'Kyoung-Woon On']"
]
|
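The core object in this abstract, a binary classifier whose logit acts as a reward, can be sketched as binary cross-entropy on thumbs-up/down labels. This is an interpretive toy with made-up numbers, assuming a DPO-style implicit reward; it omits the paper's reward-shift and distribution-matching techniques:

```python
# Toy sketch of "the classifier's logit is a reward": binary cross-entropy
# on thumbs-up/down labels where the logit is an implicit reward
# beta * log(pi(y|x) / pi_ref(y|x)). All values below are placeholders.
import torch
import torch.nn.functional as F

beta = 0.1
logp_policy = torch.tensor([-2.1, -3.5, -1.8])   # log pi(y|x), placeholder
logp_ref = torch.tensor([-2.4, -3.1, -2.6])      # log pi_ref(y|x), placeholder
thumbs = torch.tensor([1.0, 0.0, 1.0])           # 1 = thumbs-up, 0 = thumbs-down

reward_logit = beta * (logp_policy - logp_ref)
loss = F.binary_cross_entropy_with_logits(reward_logit, thumbs)
print(loss.item())
```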
null | null | 2404.04661 | null | null | http://arxiv.org/pdf/2404.04661v1 | 2024-04-06T15:31:17Z | 2024-04-06T15:31:17Z | Transform then Explore: a Simple and Effective Technique for Exploratory Combinatorial Optimization with Reinforcement Learning | Many complex problems encountered in both production and daily life can be conceptualized as combinatorial optimization problems (COPs) over graphs. In recent years, reinforcement learning (RL) based models have emerged as a promising direction, which treat COP solving as a heuristic learning problem. However, current finite-horizon-MDP based RL models have inherent limitations. They are not allowed to explore adequately for improving solutions at test time, which may be necessary given the complexity of NP-hard optimization tasks. Some recent attempts address this issue by focusing on reward design and state feature engineering, which are tedious and ad-hoc. In this work, we instead propose a much simpler but more effective technique, named gauge transformation (GT). The technique originates from physics, but is very effective in enabling RL agents to explore and continuously improve solutions during test. Moreover, GT is very simple: it can be implemented with fewer than 10 lines of Python code, and can be applied to a vast majority of RL models. Experimentally, we show that traditional RL models with the GT technique produce state-of-the-art performance on the MaxCut problem. Furthermore, since GT is independent of any RL model, it can be seamlessly integrated into various RL frameworks, paving the way for these models to explore more effectively when solving general COPs. | [
"['Tianle Pu' 'Changjun Fan' 'Mutian Shen' 'Yizhou Lu' 'Li Zeng'\n 'Zohar Nussinov' 'Chao Chen' 'Zhong Liu']"
]
|
null | null | 2404.04662 | null | null | http://arxiv.org/pdf/2404.04662v2 | 2024-06-11T23:25:06Z | 2024-04-06T15:31:20Z | Learning Minimal NAP Specifications for Neural Network Verification | Specifications play a crucial role in neural network verification. They define the precise input regions we aim to verify, typically represented as L-infinity norm balls. While recent research suggests using neural activation patterns (NAPs) as specifications for verifying unseen test set data, it focuses on computing the most refined NAPs, often limited to very small regions in the input space. In this paper, we study the following problem: Given a neural network, find a minimal (coarsest) NAP that is sufficient for formal verification of the network's robustness. Finding the minimal NAP specification not only expands verifiable bounds but also provides insights into which neurons contribute to the model's robustness. To address this problem, we propose several exact and approximate approaches. Our exact approaches leverage the verification tool to find minimal NAP specifications in either a deterministic or statistical manner, whereas the approximate methods efficiently estimate minimal NAPs using adversarial examples and local gradients, without making calls to the verification tool. This allows us to inspect potential causal links between neurons and the robustness of state-of-the-art neural networks, a task for which existing verification frameworks fail to scale. Our experimental results suggest that minimal NAP specifications require much smaller fractions of neurons compared to the most refined NAP specifications, yet they can expand the verifiable boundaries by several orders of magnitude. | [
"['Chuqin Geng' 'Zhaoyue Wang' 'Haolin Ye' 'Saifei Liao' 'Xujie Si']"
]
|
null | null | 2404.04669 | null | null | http://arxiv.org/pdf/2404.04669v2 | 2024-05-30T07:11:03Z | 2024-04-06T16:05:48Z | Domain Generalisation via Imprecise Learning | Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g., optimising the average-case risk, worst-case risk, or interpolations thereof. While this choice should in principle be made by model operators such as medical doctors, this information might not always be available at training time. The institutional separation between machine learners and model operators leads to arbitrary commitments to specific generalisation strategies by machine learners due to these deployment uncertainties. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Supported by both theoretical and empirical evidence, our work showcases the benefits of integrating imprecision into domain generalisation. | [
"['Anurag Singh' 'Siu Lun Chau' 'Shahine Bouabid' 'Krikamol Muandet']"
]
|
null | null | 2404.04671 | null | null | http://arxiv.org/pdf/2404.04671v3 | 2024-06-16T14:39:20Z | 2024-04-06T16:16:30Z | PhyloLM: Inferring the Phylogeny of Large Language Models and Predicting their Performances in Benchmarks | This paper introduces PhyloLM, a method adapting phylogenetic algorithms to Large Language Models (LLMs) to explore whether and how they relate to each other and to predict their performance characteristics. Our method calculates a phylogenetic distance metric based on the similarity of LLMs' outputs. The resulting metric is then used to construct dendrograms, which satisfactorily capture known relationships across a set of 111 open-source and 45 closed models. Furthermore, our phylogenetic distance predicts performance in standard benchmarks, thus demonstrating its functional validity and paving the way for a time- and cost-effective estimation of LLM capabilities. To sum up, by translating population genetic concepts to machine learning, we propose and validate a tool to evaluate LLM development, relationships and capabilities, even in the absence of transparent training information. | [
"['Nicolas Yax' 'Pierre-Yves Oudeyer' 'Stefano Palminteri']"
]
|
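The overall recipe here, pairwise output similarity turned into a distance matrix and then a dendrogram, can be sketched with scipy; the similarity values below are fabricated placeholders, and the paper's genetics-inspired distance is not reproduced:

```python
# Sketch: similarity matrix -> distance matrix -> dendrogram.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

models = ["model-A", "model-B", "model-C", "model-D"]
similarity = np.array([                      # placeholder values in [0, 1]
    [1.00, 0.85, 0.40, 0.35],
    [0.85, 1.00, 0.42, 0.30],
    [0.40, 0.42, 1.00, 0.70],
    [0.35, 0.30, 0.70, 1.00],
])
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

Z = linkage(squareform(distance), method="average")
tree = dendrogram(Z, labels=models, no_plot=True)
print(tree["ivl"])                           # leaf order groups similar models
```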
null | null | 2404.04678 | null | null | http://arxiv.org/pdf/2404.04678v1 | 2024-04-06T16:48:12Z | 2024-04-06T16:48:12Z | Automatic Gradient Estimation for Calibrating Crowd Models with Discrete Decision Making | Recently proposed gradient estimators enable gradient descent over stochastic programs with discrete jumps in the response surface, which are not covered by automatic differentiation (AD) alone. Although these estimators' capability to guide a swift local search has been shown for certain problems, their applicability to models relevant to real-world applications remains largely unexplored. As the gradients governing the choice in candidate solutions are calculated from sampled simulation trajectories, the optimization procedure bears similarities to metaheuristics such as particle swarm optimization, which puts the focus on the different methods' calibration progress per function evaluation. Here, we consider the calibration of force-based crowd evacuation models based on the popular Social Force model augmented by discrete decision making. After studying the ability of an AD-based estimator for branching programs to capture the simulation's rugged response surface, calibration problems are tackled using gradient descent and two metaheuristics. As our main insights, we find 1) that the estimation's fidelity benefits from disregarding jumps of large magnitude inherent to the Social Force model, and 2) that the common problem of calibration by adjusting a simulation input distribution obviates the need for AD across the Social Force calculations, allowing gradient descent to excel. | [
"['Philipp Andelfinger' 'Justin N. Kreikemeyer']"
]
|
null | null | 2404.04682 | null | null | http://arxiv.org/pdf/2404.04682v1 | 2024-04-06T17:02:18Z | 2024-04-06T17:02:18Z | Compositional Conservatism: A Transductive Approach in Offline Reinforcement Learning | Offline reinforcement learning (RL) is a compelling framework for learning optimal policies from past experiences without additional interaction with the environment. Nevertheless, offline RL inevitably faces the problem of distributional shifts, where the states and actions encountered during policy execution may not be in the training dataset distribution. A common solution involves incorporating conservatism into the policy or the value function to safeguard against uncertainties and unknowns. In this work, we focus on achieving the same objectives of conservatism but from a different perspective. We propose COmpositional COnservatism with Anchor-seeking (COCOA) for offline RL, an approach that pursues conservatism in a compositional manner on top of the transductive reparameterization (Netanyahu et al., 2023), which decomposes the input variable (the state in our case) into an anchor and its difference from the original input. Our COCOA seeks both in-distribution anchors and differences by utilizing the learned reverse dynamics model, encouraging conservatism in the compositional input space for the policy or value function. Such compositional conservatism is independent of and agnostic to the prevalent behavioral conservatism in offline RL. We apply COCOA to four state-of-the-art offline RL algorithms and evaluate them on the D4RL benchmark, where COCOA generally improves the performance of each algorithm. The code is available at https://github.com/runamu/compositional-conservatism. | [
"['Yeda Song' 'Dongwook Lee' 'Gunhee Kim']"
]
|
null | null | 2404.04686 | null | null | http://arxiv.org/pdf/2404.04686v1 | 2024-04-06T17:23:21Z | 2024-04-06T17:23:21Z | Predictive Modeling for Breast Cancer Classification in the Context of Bangladeshi Patients: A Supervised Machine Learning Approach with Explainable AI | Breast cancer has rapidly increased in prevalence in recent years, making it one of the leading causes of mortality worldwide. Among all cancers, it is by far the most common. Diagnosing this illness manually requires significant time and expertise. Since detecting breast cancer is a time-consuming process, preventing its further spread can be aided by creating machine-based forecasts. Machine learning and Explainable AI are crucial in classification as they not only provide accurate predictions but also offer insights into how the model arrives at its decisions, aiding in the understanding and trustworthiness of the classification results. In this study, we evaluate and compare the classification accuracy, precision, recall, and F1 scores of five different machine learning methods using a primary dataset (500 patients from Dhaka Medical College Hospital). Five different supervised machine learning techniques, including decision tree, random forest, logistic regression, Naive Bayes, and XGBoost, have been used to achieve optimal results on our dataset. Additionally, this study applied SHAP analysis to the XGBoost model to interpret the model's predictions and understand the impact of each feature on the model's output. We compared the accuracy with which several algorithms classified the data, as well as contrasted with other literature in this field. After final evaluation, this study found that XGBoost achieved the best model accuracy, which is 97%. | [
"['Taminul Islam' 'Md. Alif Sheakh' 'Mst. Sazia Tahosin' 'Most. Hasna Hena'\n 'Shopnil Akash' 'Yousef A. Bin Jardan' 'Gezahign Fentahun Wondmie'\n 'Hiba-Allah Nafidi' 'Mohammed Bourhia']"
]
|
null | null | 2404.04687 | null | null | http://arxiv.org/pdf/2404.04687v2 | 2024-07-05T16:58:15Z | 2024-04-06T17:23:43Z | Z-Splat: Z-Axis Gaussian Splatting for Camera-Sonar Fusion | Differentiable 3D-Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation along with analytical derivatives to compute the 3D Gaussian parameters given scene images captured from various viewpoints. Unfortunately, capturing surround view ($360^{\circ}$ viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation. In these restricted baseline imaging scenarios, the GS algorithm suffers from a well-known 'missing cone' problem, which results in poor reconstruction along the depth axis. In this manuscript, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis. We extend the Gaussian splatting algorithms for two commonly used sonars and propose fusion algorithms that simultaneously utilize RGB camera data and sonar data. Through simulations, emulations, and hardware experiments across various imaging scenarios, we show that the proposed fusion algorithms lead to significantly better novel view synthesis (5 dB improvement in PSNR) and 3D geometry reconstruction (60% lower Chamfer distance). | [
"['Ziyuan Qu' 'Omkar Vengurlekar' 'Mohamad Qadri' 'Kevin Zhang'\n 'Michael Kaess' 'Christopher Metzler' 'Suren Jayasuriya'\n 'Adithya Pediredla']"
]
|
null | null | 2404.04689 | null | null | http://arxiv.org/pdf/2404.04689v1 | 2024-04-06T17:33:37Z | 2024-04-06T17:33:37Z | Multicalibration for Confidence Scoring in LLMs | This paper proposes the use of "multicalibration" to yield interpretable and reliable confidence scores for outputs generated by large language models (LLMs). Multicalibration asks for calibration not just marginally, but simultaneously across various intersecting groupings of the data. We show how to form groupings for prompt/completion pairs that are correlated with the probability of correctness via two techniques: clustering within an embedding space, and "self-annotation" - querying the LLM by asking it various yes-or-no questions about the prompt. We also develop novel variants of multicalibration algorithms that offer performance improvements by reducing their tendency to overfit. Through systematic benchmarking across various question answering datasets and LLMs, we show how our techniques can yield confidence scores that provide substantial improvements in fine-grained measures of both calibration and accuracy compared to existing methods. | [
"['Gianluca Detommaso' 'Martin Bertran' 'Riccardo Fogliato' 'Aaron Roth']"
]
|
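A minimal sketch of the multicalibration idea referenced above: repeatedly find a group whose mean predicted confidence deviates from its empirical accuracy and patch the group's scores. The boolean-mask groups and data are placeholders for the paper's embedding clusters and self-annotation groupings:

```python
# Multicalibration-style patching loop. Illustration only.
import numpy as np

def multicalibrate(scores, labels, groups, alpha=0.01, max_iter=100):
    scores = scores.copy()
    for _ in range(max_iter):
        worst, gap = None, alpha
        for g in groups:                       # g: boolean mask over examples
            err = labels[g].mean() - scores[g].mean()
            if abs(err) > gap:
                worst, gap = g, abs(err)
        if worst is None:                      # all groups within tolerance
            return scores
        # Shift the worst group's scores toward its empirical accuracy.
        scores[worst] += labels[worst].mean() - scores[worst].mean()
        np.clip(scores, 0.0, 1.0, out=scores)
    return scores

rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.7).astype(float)
scores = np.full(1000, 0.5)                    # badly calibrated start
groups = [rng.random(1000) < 0.5 for _ in range(8)]
print(multicalibrate(scores, labels, groups).mean())
```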
null | null | 2404.04690 | null | null | http://arxiv.org/pdf/2404.04690v1 | 2024-04-06T17:37:45Z | 2024-04-06T17:37:45Z | The Identification and Categorization of Anemia Through Artificial Neural Networks: A Comparative Analysis of Three Models | This paper presents different neural network-based classifier algorithms for diagnosing and classifying Anemia. The study compares these classifiers with established models such as Feed Forward Neural Network (FFNN), Elman network, and Non-linear Auto-Regressive Exogenous model (NARX). Experimental evaluations were conducted using data from clinical laboratory test results for 230 patients. The proposed neural network features nine inputs (age, gender, RBC, HGB, HCT, MCV, MCH, MCHC, WBCs) and one output. The simulation outcomes for diverse patients demonstrate that the suggested artificial neural network rapidly and accurately detects the presence of the disease. Consequently, the network could be seamlessly integrated into clinical laboratories for automatic generation of Anemia patients' reports. Additionally, the suggested method is affordable and can be deployed on hardware at low costs. | [
"['Mohammed A. A. Elmaleeh']"
]
|
null | null | 2404.04692 | null | null | http://arxiv.org/pdf/2404.04692v1 | 2024-04-06T17:41:00Z | 2024-04-06T17:41:00Z | Securing the Skies: An IRS-Assisted AoI-Aware Secure Multi-UAV System with Efficient Task Offloading | Unmanned Aerial Vehicles (UAVs) are integral in various sectors like agriculture, surveillance, and logistics, driven by advancements in 5G. However, existing research lacks a comprehensive approach addressing both data freshness and security concerns. In this paper, we address the intricate challenges of data freshness and security, especially in the context of eavesdropping and jamming in modern UAV networks. Our framework incorporates exponential AoI metrics and emphasizes secrecy rate to tackle eavesdropping and jamming threats. We introduce a transformer-enhanced Deep Reinforcement Learning (DRL) approach to optimize task offloading processes. Comparative analysis with existing algorithms showcases the superiority of our scheme, indicating its promising advancements in UAV network management. | [
"['Poorvi Joshi' 'Alakesh Kalita' 'Mohan Gurusamy']"
]
|
null | null | 2404.04710 | null | null | http://arxiv.org/pdf/2404.04710v1 | 2024-04-06T19:07:12Z | 2024-04-06T19:07:12Z | Explaining Indian Stock Market through Geometry of Scale free Networks | This paper presents an analysis of the Indian stock market using a method based on embedding the network in a hyperbolic space using machine learning techniques. We claim novelty on four counts. First, it is demonstrated that the hyperbolic clusters resemble the topological network communities more closely than the Euclidean clusters. Second, we are able to clearly distinguish between periods of market stability and volatility through a statistical analysis of hyperbolic distance and hyperbolic shortest path distance corresponding to the embedded network. Third, we demonstrate that, using the modularity of the embedded network, significant market changes can be spotted early. Lastly, the coalescent embedding is able to segregate certain market sectors, thereby underscoring its natural clustering ability. | [
"['Pawanesh Yadav' 'Charu Sharma' 'Niteesh Sahni']"
]
|
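The basic quantity behind the stability/volatility statistics above is hyperbolic distance. A sketch of the Poincare-disk distance between two embedded nodes, with placeholder coordinates (the paper obtains its coordinates via coalescent embedding of the stock network):

```python
# Hyperbolic (Poincare disk) distance between two points in the unit disk.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * diff / denom))

u = np.array([0.1, 0.2])     # placeholder embedding coordinates
v = np.array([0.7, -0.5])
print(poincare_distance(u, v))
```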
null | null | 2404.04714 | null | null | http://arxiv.org/pdf/2404.04714v1 | 2024-04-06T19:27:57Z | 2024-04-06T19:27:57Z | Data Poisoning Attacks on Off-Policy Policy Evaluation Methods | Off-policy Evaluation (OPE) methods are a crucial tool for evaluating policies in high-stakes domains such as healthcare, where exploration is often infeasible, unethical, or expensive. However, the extent to which such methods can be trusted under adversarial threats to data quality is largely unexplored. In this work, we make the first attempt at investigating the sensitivity of OPE methods to marginal adversarial perturbations to the data. We design a generic data poisoning attack framework leveraging influence functions from robust statistics to carefully construct perturbations that maximize error in the policy value estimates. We carry out extensive experimentation with multiple healthcare and control datasets. Our results demonstrate that many existing OPE methods are highly prone to generating value estimates with large errors when subject to data poisoning attacks, even for small adversarial perturbations. These findings question the reliability of policy values derived using OPE methods and motivate the need for developing OPE methods that are statistically robust to train-time data poisoning attacks. | [
"['Elita Lobo' 'Harvineet Singh' 'Marek Petrik' 'Cynthia Rudin'\n 'Himabindu Lakkaraju']"
]
|
null | null | 2404.04736 | null | null | http://arxiv.org/pdf/2404.04736v1 | 2024-04-06T21:39:49Z | 2024-04-06T21:39:49Z | ProtoAL: Interpretable Deep Active Learning with prototypes for medical imaging | The adoption of Deep Learning algorithms in the medical imaging field is a prominent area of research, with high potential for advancing AI-based Computer-aided diagnosis (AI-CAD) solutions. However, current solutions face challenges due to a lack of interpretability features and high data demands, prompting recent efforts to address these issues. In this study, we propose the ProtoAL method, where we integrate an interpretable DL model into the Deep Active Learning (DAL) framework. This approach aims to address both challenges by focusing on the medical imaging context and utilizing an inherently interpretable model based on prototypes. We evaluated ProtoAL on the Messidor dataset, achieving an area under the precision-recall curve of 0.79 while utilizing only 76.54% of the available labeled data. These capabilities can enhance the practical usability of a DL model in the medical field, providing a means of trust calibration for domain experts and a suitable solution for learning in the data-scarce contexts often found in this domain. | [
"['Iury B. de A. Santos' 'André C. P. L. F. de Carvalho']"
]
|
null | null | 2404.04759 | null | null | http://arxiv.org/pdf/2404.04759v1 | 2024-04-06T23:52:53Z | 2024-04-06T23:52:53Z | What Happens When Small Is Made Smaller? Exploring the Impact of Compression on Small Data Pretrained Language Models | Compression techniques have been crucial in advancing machine learning by enabling efficient training and deployment of large-scale language models. However, these techniques have received limited attention in the context of low-resource language models, which are trained on even smaller amounts of data and under computational constraints, a scenario known as the "low-resource double-bind." This paper investigates the effectiveness of pruning, knowledge distillation, and quantization on an exclusively low-resourced, small-data language model, AfriBERTa. Through a battery of experiments, we assess the effects of compression on performance across several metrics beyond accuracy. Our study provides evidence that compression techniques significantly improve the efficiency and effectiveness of small-data language models, confirming that the prevailing beliefs regarding the effects of compression on large, heavily parameterized models hold true for less-parameterized, small-data models. | [
"['Busayo Awobade' 'Mardiyyah Oduwole' 'Steven Kolawole']"
]
|
null | null | 2404.04793 | null | null | http://arxiv.org/pdf/2404.04793v1 | 2024-04-07T03:08:14Z | 2024-04-07T03:08:14Z | SqueezeAttention: 2D Management of KV-Cache in LLM Inference via Layer-wise Optimal Budget | Optimizing the Key-Value (KV) cache of the Large Language Model (LLM) has been considered critical to saving the cost of inference. Most of the existing KV-cache compression algorithms attempt to sparsify the sequence of tokens by taking advantage of the different importance of tokens. In this work, we found that by identifying the importance of attention layers, we could optimize the KV-cache jointly from two dimensions. Based on our observations regarding layer-wise importance in inference, we propose SqueezeAttention to precisely optimize the allocation of the KV-cache budget among layers on-the-fly and then incorporate three representative token sparsification algorithms to compress the KV-cache for each layer with its very own budget. By optimizing the KV-cache from both the sequence's and the layer's dimensions, SqueezeAttention achieves memory reductions of around 30% to 70% and up to 2.2x throughput improvements in a wide range of LLMs and benchmarks. The code is available at https://github.com/hetailang/SqueezeAttention. | [
"['Zihao Wang' 'Shaoduo Gan']"
]
|
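The two-dimensional idea, a per-layer KV-cache budget plus per-layer token pruning, can be sketched as below. The layer importance scores and token scores are random placeholders; SqueezeAttention derives both from attention statistics at inference time:

```python
# Sketch: split a total KV-cache budget across layers, then keep only
# the top-scoring tokens in each layer. Illustration only.
import numpy as np

def allocate_budgets(importance, total_budget):
    """Split a total token budget across layers proportionally."""
    w = importance / importance.sum()
    return np.maximum(1, (w * total_budget).round().astype(int))

def prune_kv(token_scores, budget):
    """Keep indices of the `budget` highest-scoring cached tokens."""
    return np.argsort(token_scores)[-budget:]

rng = np.random.default_rng(0)
n_layers, seq_len = 4, 128
importance = rng.random(n_layers)            # placeholder layer scores
budgets = allocate_budgets(importance, total_budget=4 * 32)
for layer, b in enumerate(budgets):
    keep = prune_kv(rng.random(seq_len), b)  # placeholder token scores
    print(f"layer {layer}: keep {len(keep)}/{seq_len} tokens")
```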
null | null | 2404.04800 | null | null | http://arxiv.org/pdf/2404.04800v1 | 2024-04-07T03:41:45Z | 2024-04-07T03:41:45Z | Coordinated Sparse Recovery of Label Noise | Label noise is a common issue in real-world datasets that inevitably impacts the generalization of models. This study focuses on robust classification tasks where the label noise is instance-dependent. Estimating the transition matrix accurately in this task is challenging, and methods based on sample selection often exhibit confirmation bias to varying degrees. Sparse over-parameterized training (SOP) has been theoretically effective in estimating and recovering label noise, offering a novel solution for noisy-label learning. However, this study empirically observes and verifies a technical flaw of SOP: the lack of coordination between model predictions and noise recovery leads to increased generalization error. To address this, we propose a method called Coordinated Sparse Recovery (CSR). CSR introduces a collaboration matrix and confidence weights to coordinate model predictions and noise recovery, reducing error leakage. Based on CSR, this study designs a joint sample selection strategy and constructs a comprehensive and powerful learning framework called CSR+. CSR+ significantly reduces confirmation bias, especially for datasets with more classes and a high proportion of instance-specific noise. Experimental results on simulated and real-world noisy datasets demonstrate that both CSR and CSR+ achieve outstanding performance compared to methods at the same level. | [
"['Yukun Yang' 'Naihao Wang' 'Haixin Yang' 'Ruirui Li']"
]
|
null | null | 2404.04810 | null | null | http://arxiv.org/pdf/2404.04810v1 | 2024-04-07T05:17:43Z | 2024-04-07T05:17:43Z | AlphaCrystal-II: Distance matrix based crystal structure prediction using deep learning | Computational prediction of stable crystal structures has a profound impact on the large-scale discovery of novel functional materials. However, predicting the crystal structure solely from a material's composition or formula is a promising yet challenging task, as traditional ab initio crystal structure prediction (CSP) methods rely on time-consuming global searches and first-principles free energy calculations. Inspired by the recent success of deep learning approaches in protein structure prediction, which utilize pairwise amino acid interactions to describe 3D structures, we present AlphaCrystal-II, a novel knowledge-based solution that exploits the abundant inter-atomic interaction patterns found in existing known crystal structures. AlphaCrystal-II predicts the atomic distance matrix of a target crystal material and employs this matrix to reconstruct its 3D crystal structure. By leveraging the wealth of inter-atomic relationships of known crystal structures, our approach demonstrates remarkable effectiveness and reliability in structure prediction through comprehensive experiments. This work highlights the potential of data-driven methods in accelerating the discovery and design of new materials with tailored properties. | [
"['Yuqi Song' 'Rongzhi Dong' 'Lai Wei' 'Qin Li' 'Jianjun Hu']"
]
|
null | null | 2404.04814 | null | null | http://arxiv.org/pdf/2404.04814v3 | 2024-07-11T15:33:35Z | 2024-04-07T05:47:41Z | Inference-Time Rule Eraser: Fair Recognition via Distilling and Removing Biased Rules | Machine learning models often make predictions based on biased features such as gender, race, and other social attributes, posing significant fairness risks, especially in societal applications, such as hiring, banking, and criminal justice. Traditional approaches to addressing this issue involve retraining or fine-tuning neural networks with fairness-aware optimization objectives. However, these methods can be impractical due to significant computational resources, complex industrial tests, and the associated CO2 footprint. Additionally, regular users often fail to fine-tune models because they lack access to model parameters. In this paper, we introduce the Inference-Time Rule Eraser (Eraser), a novel method designed to address fairness concerns by removing biased decision-making rules from deployed models during inference without altering model weights. We begin by establishing a theoretical foundation for modifying model outputs to eliminate biased rules through Bayesian analysis. Next, we present a specific implementation of Eraser that involves two stages: (1) distilling the biased rules from the deployed model into an additional patch model, and (2) removing these biased rules from the output of the deployed model during inference. Extensive experiments validate the effectiveness of our approach, showcasing its superior performance in addressing fairness concerns in AI systems. | [
"['Yi Zhang' 'Dongyuan Lu' 'Jitao Sang']"
]
|
null | null | 2404.04815 | null | null | http://arxiv.org/abs/2404.04815v1 | 2024-04-07T05:47:54Z | 2024-04-07T05:47:54Z | Allo: A Programming Model for Composable Accelerator Design | Special-purpose hardware accelerators are increasingly pivotal for sustaining performance improvements in emerging applications, especially as the benefits of technology scaling continue to diminish. However, designers currently lack effective tools and methodologies to construct complex, high-performance accelerator architectures in a productive manner. Existing high-level synthesis (HLS) tools often require intrusive source-level changes to attain satisfactory quality of results. Despite the introduction of several new accelerator design languages (ADLs) aiming to enhance or replace HLS, their advantages are more evident in relatively simple applications with a single kernel. Existing ADLs prove less effective for realistic hierarchical designs with multiple kernels, even if the design hierarchy is flattened. In this paper, we introduce Allo, a composable programming model for efficient spatial accelerator design. Allo decouples hardware customizations, including compute, memory, communication, and data type from algorithm specification, and encapsulates them as a set of customization primitives. Allo preserves the hierarchical structure of an input program by combining customizations from different functions in a bottom-up, type-safe manner. This approach facilitates holistic optimizations that span across function boundaries. We conduct comprehensive experiments on commonly-used HLS benchmarks and several realistic deep learning models. Our evaluation shows that Allo can outperform state-of-the-art HLS tools and ADLs on all test cases in the PolyBench. For the GPT2 model, the inference latency of the Allo generated accelerator is 1.7x faster than the NVIDIA A100 GPU with 5.4x higher energy efficiency, demonstrating the capability of Allo to handle large-scale designs. | [
"['Hongzheng Chen' 'Niansong Zhang' 'Shaojie Xiang' 'Zhichen Zeng'\n 'Mengjia Dai' 'Zhiru Zhang']"
]
|
null | null | 2404.04824 | null | null | http://arxiv.org/pdf/2404.04824v1 | 2024-04-07T06:23:18Z | 2024-04-07T06:23:18Z | Mixup Domain Adaptations for Dynamic Remaining Useful Life Predictions | Remaining Useful Life (RUL) predictions play a vital role in asset planning and maintenance, leading to many benefits for industries, such as reduced downtime and lower maintenance costs. Although various efforts have been devoted to studying this topic, most existing works are restricted to i.i.d. conditions, assuming the same conditions in the training phase and the deployment phase. This paper proposes a solution to this problem where a mix-up domain adaptation (MDAN) is put forward. MDAN encompasses a three-staged mechanism where the mix-up strategy is not only performed to regularize the source and target domains but also applied to establish an intermediate mix-up domain where the source and target domains are aligned. The self-supervised learning strategy is implemented to prevent the supervision collapse problem. Rigorous evaluations have been performed where MDAN is compared to recently published works for dynamic RUL predictions. MDAN outperforms its counterparts with substantial margins in 12 out of 12 cases. In addition, MDAN is evaluated with the bearing machine dataset where it beats prior art with significant gaps in 8 of 12 cases. Source codes of MDAN are made publicly available at \url{https://github.com/furqon3009/MDAN}. | [
"['Muhammad Tanzil Furqon' 'Mahardhika Pratama' 'Lin Liu' 'Habibullah'\n 'Kutluyil Dogancay']"
]
|
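The mix-up primitive MDAN builds its intermediate domain from is a convex combination of source and target batches with a Beta-sampled coefficient. A sketch of just that step (not MDAN's full three-stage training), with placeholder tensors:

```python
# Mixup between a source batch and a target batch. Illustration only.
import torch

def mixup(x_source: torch.Tensor, x_target: torch.Tensor, alpha: float = 0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_source + (1.0 - lam) * x_target, lam

x_s = torch.randn(32, 14)   # placeholder source sensor windows
x_t = torch.randn(32, 14)   # placeholder target sensor windows
x_mix, lam = mixup(x_s, x_t)
print(lam, x_mix.shape)
```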
null | null | 2404.04825 | null | null | http://arxiv.org/pdf/2404.04825v1 | 2024-04-07T06:24:47Z | 2024-04-07T06:24:47Z | Gradient-based Design of Computational Granular Crystals | There is growing interest in engineering unconventional computing devices that leverage the intrinsic dynamics of physical substrates to perform fast and energy-efficient computations. Granular metamaterials are one such substrate that has emerged as a promising platform for building wave-based information processing devices with the potential to integrate sensing, actuation, and computation. Their high-dimensional and nonlinear dynamics result in nontrivial and sometimes counter-intuitive wave responses that can be shaped by the material properties, geometry, and configuration of individual grains. Such highly tunable rich dynamics can be utilized for mechanical computing in special-purpose applications. However, there are currently no general frameworks for the inverse design of large-scale granular materials. Here, we build upon the similarity between the spatiotemporal dynamics of wave propagation in material and the computational dynamics of Recurrent Neural Networks to develop a gradient-based optimization framework for harmonically driven granular crystals. We showcase how our framework can be utilized to design basic logic gates where mechanical vibrations carry the information at predetermined frequencies. We compare our design methodology with classic gradient-free methods and find that our approach discovers higher-performing configurations with less computational effort. Our findings show that a gradient-based optimization method can greatly expand the design space of metamaterials and provide the opportunity to systematically traverse the parameter space to find materials with the desired functionalities. | [
"['Atoosa Parsa' \"Corey S. O'Hern\" 'Rebecca Kramer-Bottiglio'\n 'Josh Bongard']"
]
|
null | null | 2404.04854 | null | null | http://arxiv.org/pdf/2404.04854v1 | 2024-04-07T07:56:14Z | 2024-04-07T07:56:14Z | Contextual Chart Generation for Cyber Deception | Honeyfiles are security assets designed to attract and detect intruders on compromised systems. Honeyfiles are a type of honeypot that mimic real, sensitive documents, creating the illusion of the presence of valuable data. Interaction with a honeyfile reveals the presence of an intruder, and can provide insights into their goals and intentions. Their practical use, however, is limited by the time, cost and effort associated with manually creating realistic content. The introduction of large language models has made high-quality text generation accessible, but honeyfiles contain a variety of content including charts, tables and images. This content needs to be plausible and realistic, as well as semantically consistent both within honeyfiles and with the real documents they mimic, to successfully deceive an intruder. In this paper, we focus on an important component of the honeyfile content generation problem: document charts. Charts are ubiquitous in corporate documents and are commonly used to communicate quantitative and scientific data. Existing image generation models, such as DALL-E, are rather prone to generating charts with incomprehensible text and unconvincing data. We take a multi-modal approach to this problem by combining two purpose-built generative models: a multitask Transformer and a specialized multi-head autoencoder. The Transformer generates realistic captions and plot text, while the autoencoder generates the underlying tabular data for the plot. To advance the field of automated honeyplot generation, we also release a new document-chart dataset and propose a novel metric Keyword Semantic Matching (KSM). This metric measures the semantic consistency between keywords of a corpus and a smaller bag of words. Extensive experiments demonstrate excellent performance against multiple large language models, including ChatGPT and GPT4. | [
"['David D. Nguyen' 'David Liebowitz' 'Surya Nepal' 'Salil S. Kanhere'\n 'Sharif Abuadbba']"
]
|
null | null | 2404.04859 | null | null | http://arxiv.org/pdf/2404.04859v1 | 2024-04-07T08:07:02Z | 2024-04-07T08:07:02Z | Demystifying Lazy Training of Neural Networks from a Macroscopic Viewpoint | In this paper, we advance the understanding of neural network training dynamics by examining the intricate interplay of various factors introduced by weight parameters in the initialization process. Motivated by the foundational work of Luo et al. (J. Mach. Learn. Res., Vol. 22, Iss. 1, No. 71, pp 3327-3373), we explore the gradient descent dynamics of neural networks through the lens of macroscopic limits, where we analyze its behavior as width $m$ tends to infinity. Our study presents a unified approach with refined techniques designed for multi-layer fully connected neural networks, which can be readily extended to other neural network architectures. Our investigation reveals that gradient descent can rapidly drive deep neural networks to zero training loss, irrespective of the specific initialization schemes employed by weight parameters, provided that the initial scale of the output function $\kappa$ surpasses a certain threshold. This regime, characterized as the $\theta$-lazy area, accentuates the predominant influence of the initial scale $\kappa$ over other factors on the training behavior of neural networks. Furthermore, our approach draws inspiration from the Neural Tangent Kernel (NTK) paradigm, and we expand its applicability. While NTK typically assumes that $\lim_{m\to\infty}\frac{\log \kappa}{\log m}=\frac{1}{2}$, and imposes each weight parameter to scale by the factor $\frac{1}{\sqrt{m}}$, in our $\theta$-lazy regime, we discard the factor and relax the condition to $\lim_{m\to\infty}\frac{\log \kappa}{\log m}>0$. Similar to NTK, the behavior of overparameterized neural networks within the $\theta$-lazy regime trained by gradient descent can be effectively described by a specific kernel. Through rigorous analysis, our investigation illuminates the pivotal role of $\kappa$ in governing the training dynamics of neural networks. | [
"['Yuqing Li' 'Tao Luo' 'Qixuan Zhou']"
]
|
null | null | 2404.04865 | null | null | http://arxiv.org/pdf/2404.04865v1 | 2024-04-07T08:17:48Z | 2024-04-07T08:17:48Z | On the Learnability of Out-of-distribution Detection | Supervised learning aims to train a classifier under the assumption that training and test data are from the same distribution. To ease the above assumption, researchers have studied a more realistic setting: out-of-distribution (OOD) detection, where test data may come from classes that are unknown during training (i.e., OOD data). Due to the unavailability and diversity of OOD data, good generalization ability is crucial for effective OOD detection algorithms, and corresponding learning theory is still an open problem. To study the generalization of OOD detection, this paper investigates the probably approximately correct (PAC) learning theory of OOD detection that fits the commonly used evaluation metrics in the literature. First, we find a necessary condition for the learnability of OOD detection. Then, using this condition, we prove several impossibility theorems for the learnability of OOD detection under some scenarios. Although the impossibility theorems are frustrating, we find that some conditions of these impossibility theorems may not hold in some practical scenarios. Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios. Lastly, we offer theoretical support for representative OOD detection works based on our OOD theory. | [
"['Zhen Fang' 'Yixuan Li' 'Feng Liu' 'Bo Han' 'Jie Lu']"
]
|
null | null | 2404.04870 | null | null | http://arxiv.org/pdf/2404.04870v2 | 2024-05-30T05:47:45Z | 2024-04-07T08:31:35Z | Signal-noise separation using unsupervised reservoir computing | Removing noise from a signal without knowing the characteristics of the noise is a challenging task. This paper introduces a signal-noise separation method based on time series prediction. We use Reservoir Computing (RC) to extract the maximum portion of "predictable information" from a given signal. Reproducing the deterministic component of the signal using RC, we estimate the noise distribution from the difference between the original signal and the reconstructed one. The method is based on a machine learning approach and requires no prior knowledge of either the deterministic signal or the noise distribution. It provides a way to identify additivity/multiplicativity of noise and to estimate the signal-to-noise ratio (SNR) indirectly. The method works successfully for combinations of various signals and noise, including chaotic signals and highly oscillating sinusoidal signals corrupted by non-Gaussian additive/multiplicative noise. The separation performance is robust and notably outstanding for signals with strong noise, even for those with negative SNR. | [
"['Jaesung Choi' 'Pilwon Kim']"
]
|
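The separation recipe above can be sketched with a tiny echo-state network: fit a linear readout to one-step-ahead prediction, treat the prediction as the deterministic component, and take the residual as the noise estimate. The signal and all hyperparameters below are arbitrary placeholders:

```python
# Echo-state-network sketch of signal-noise separation. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 3000)
clean = np.sin(t) * np.cos(0.31 * t)              # placeholder signal
noisy = clean + 0.1 * rng.normal(size=t.size)     # additive noise

n_res = 200
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

states = np.zeros((t.size, n_res))
x = np.zeros(n_res)
for i in range(t.size - 1):
    x = np.tanh(W_in * noisy[i] + W @ x)          # drive reservoir with input
    states[i + 1] = x

# Ridge-regression readout: reproduce the signal from the reservoir state.
ridge = 1e-6
A = states.T @ states + ridge * np.eye(n_res)
w_out = np.linalg.solve(A, states.T @ noisy)
reconstructed = states @ w_out                    # "predictable" component
noise_estimate = noisy - reconstructed            # residual = noise estimate
print("estimated noise std:", noise_estimate.std())
```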
null | null | 2404.04871 | null | null | http://arxiv.org/pdf/2404.04871v1 | 2024-04-07T08:32:16Z | 2024-04-07T08:32:16Z | Data Stream Sampling with Fuzzy Task Boundaries and Noisy Labels | In the realm of continual learning, the presence of noisy labels within data streams represents a notable obstacle to model reliability and fairness. We focus on the data stream scenario outlined in pertinent literature, characterized by fuzzy task boundaries and noisy labels. To address this challenge, we introduce a novel and intuitive sampling method called Noisy Test Debiasing (NTD) to mitigate noisy labels in evolving data streams and establish a fair and robust continual learning algorithm. NTD is straightforward to implement, making it feasible across various scenarios. Our experiments cover four benchmark datasets, including two synthetic noise datasets (CIFAR10 and CIFAR100) and two real-world noise datasets (mini-WebVision and Food-101N). The results validate the efficacy of NTD for online continual learning in scenarios with noisy labels in data streams. Compared to the previous leading approach, NTD achieves a training speedup of over two times while maintaining or surpassing accuracy levels. Moreover, NTD utilizes less than one-fifth of the GPU memory resources compared to previous leading methods. | [
"['Yu-Hsi Chen']"
]
|
null | null | 2404.04874 | null | null | http://arxiv.org/pdf/2404.04874v1 | 2024-04-07T08:38:35Z | 2024-04-07T08:38:35Z | Graph Neural Networks for Binary Programming | This paper investigates a link between Graph Neural Networks (GNNs) and Binary Programming (BP) problems, laying the groundwork for GNNs to approximate solutions for these computationally challenging problems. By analyzing the sensitivity of BP problems, we are able to frame the solution of BP problems as a heterophilic node classification task. We then propose Binary-Programming GNN (BPGNN), an architecture that integrates graph representation learning techniques with BP-aware features to approximate BP solutions efficiently. Additionally, we introduce a self-supervised data generation mechanism, to enable efficient and tractable training data acquisition even for large-scale BP problems. Experimental evaluations of BPGNN across diverse BP problem sizes showcase its superior performance compared to exhaustive search and heuristic approaches. Finally, we discuss open challenges in the under-explored field of BP problems with GNNs. | [
"['Moshe Eliasof' 'Eldad Haber']"
]
|
null | null | 2404.04885 | null | null | http://arxiv.org/pdf/2404.04885v1 | 2024-04-07T09:05:09Z | 2024-04-07T09:05:09Z | TimeGPT in Load Forecasting: A Large Time Series Model Perspective | Machine learning models have made significant progress in load forecasting, but their forecast accuracy is limited in cases where historical load data is scarce. Inspired by the outstanding performance of large language models (LLMs) in computer vision and natural language processing, this paper aims to discuss the potential of large time series models in load forecasting with scarce historical data. Specifically, the large time series model is constructed as a time series generative pre-trained transformer (TimeGPT), which is trained on massive and diverse time series datasets consisting of 100 billion data points (e.g., finance, transportation, banking, web traffic, weather, energy, healthcare, etc.). Then, the scarce historical load data is used to fine-tune the TimeGPT, which helps it to adapt to the data distribution and characteristics associated with load forecasting. Simulation results show that TimeGPT outperforms the benchmarks (e.g., popular machine learning models and statistical models) for load forecasting on several real datasets with scarce training samples, particularly for short look-ahead times. However, it cannot be guaranteed that TimeGPT is always superior to benchmarks for load forecasting with scarce data, since the performance of TimeGPT may be affected by the distribution differences between the load data and the training data. In practical applications, we can divide the historical data into a training set and a validation set, and then use the validation set loss to decide whether TimeGPT is the best choice for a specific dataset. | [
"['Wenlong Liao' 'Fernando Porte-Agel' 'Jiannong Fang' 'Christian Rehtanz'\n 'Shouxiang Wang' 'Dechang Yang' 'Zhe Yang']"
]
|
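The practical suggestion at the end of the abstract above — use a held-out validation loss to decide whether the large time series model is the right forecaster for a given dataset — can be sketched as a small selection loop. The `timegpt_forecast` stub below is a hypothetical placeholder (no real TimeGPT API is assumed); only the selection logic is the point.

```python
import numpy as np

def validation_mae(forecast_fn, series, horizon):
    """Score a forecaster by MAE on the last `horizon` points."""
    train, valid = series[:-horizon], series[-horizon:]
    return float(np.mean(np.abs(forecast_fn(train, horizon) - valid)))

# Hypothetical forecasters; swap in a fine-tuned TimeGPT call and real baselines.
def naive_seasonal(train, horizon, period=24):
    return np.tile(train[-period:], horizon // period + 1)[:horizon]

def timegpt_forecast(train, horizon):
    raise NotImplementedError("call the fine-tuned large time series model here")

candidates = {"seasonal-naive": naive_seasonal}  # add "timegpt": timegpt_forecast
load = np.abs(np.sin(np.arange(24 * 30) * 2 * np.pi / 24)) * 100  # toy hourly load

scores = {name: validation_mae(fn, load, horizon=24) for name, fn in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "->", best)
```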
null | null | 2404.04891 | null | null | http://arxiv.org/pdf/2404.04891v1 | 2024-04-07T09:17:00Z | 2024-04-07T09:17:00Z | DL-EWF: Deep Learning Empowering Women's Fashion with
Grounded-Segment-Anything Segmentation for Body Shape Classification | The global fashion industry plays a pivotal role in the global economy, and addressing fundamental issues within the industry is crucial for developing innovative solutions. One of the most pressing challenges in the fashion industry is the mismatch between body shapes and the garments of individuals they purchase. This issue is particularly prevalent among individuals with non-ideal body shapes, exacerbating the challenges faced. Considering inter-individual variability in body shapes is essential for designing and producing garments that are widely accepted by consumers. Traditional methods for determining human body shape are limited due to their low accuracy, high costs, and time-consuming nature. New approaches, utilizing digital imaging and deep neural networks (DNN), have been introduced to identify human body shape. In this study, the Style4BodyShape dataset is used for classifying body shapes into five categories: Rectangle, Triangle, Inverted Triangle, Hourglass, and Apple. In this paper, the body shape segmentation of a person is extracted from the image, disregarding the surroundings and background. Then, various pre-trained models, such as ResNet18, ResNet34, ResNet50, VGG16, VGG19, and Inception V3, are used to classify the segmentation results. Among these pre-trained models, the Inception V3 model demonstrates superior performance in terms of the F1-score evaluation metric and accuracy compared to the other models. | [
"['Fatemeh Asghari' 'Mohammad Reza Soheili' 'Faezeh Gholamrezaie']"
]
|
null | null | 2404.04903 | null | null | http://arxiv.org/pdf/2404.04903v1 | 2024-04-07T10:07:56Z | 2024-04-07T10:07:56Z | Online Learning under Haphazard Input Conditions: A Comprehensive Review
and Analysis | The domain of online learning has experienced multifaceted expansion owing to its prevalence in real-life applications. Nonetheless, this progression operates under the assumption that the input feature space of the streaming data remains constant. In this survey paper, we address the topic of online learning in the context of haphazard inputs, explicitly foregoing such an assumption. We discuss, classify, evaluate, and compare the methodologies that are adept at modeling haphazard inputs, additionally providing the corresponding code implementations and their carbon footprint. Moreover, we classify the datasets related to the field of haphazard inputs and introduce evaluation metrics specifically designed for datasets exhibiting imbalance. The code of each methodology can be found at https://github.com/Rohit102497/HaphazardInputsReview | [
"['Rohit Agarwal' 'Arijit Das' 'Alexander Horsch' 'Krishna Agarwal'\n 'Dilip K. Prasad']"
]
|
null | null | 2404.04905 | null | null | http://arxiv.org/pdf/2404.04905v1 | 2024-04-07T10:11:22Z | 2024-04-07T10:11:22Z | Review for Handling Missing Data with special missing mechanism | Missing data poses a significant challenge in data science, affecting decision-making processes and outcomes. Understanding what missing data is, how it occurs, and why it is crucial to handle it appropriately is paramount when working with real-world data, especially in tabular data, one of the most commonly used data types in the real world. Three missing mechanisms are defined in the literature: Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR), each presenting unique challenges in imputation. Most existing work focuses on MCAR, which is relatively easy to handle. The special missing mechanisms of MNAR and MAR are less explored and understood. This article reviews existing literature on handling missing values. It compares and contrasts existing methods in terms of their ability to handle different missing mechanisms and data types. It identifies research gaps in the existing literature and lays out potential directions for future research in the field. The information in this review will help data analysts and researchers to adopt and promote good practices for handling missing data in real-world problems. | [
"['Youran Zhou' 'Sunil Aryal' 'Mohamed Reda Bouadjenek']"
]
|
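The three mechanisms named in the abstract above are easiest to grasp as code. A minimal sketch that injects missingness into a toy two-column table under MCAR, MAR, and MNAR; the thresholds and missing rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))          # columns: x0 (always observed), x1 (target of missingness)

def apply_mask(X, mask):
    out = X.astype(float).copy()
    out[mask, 1] = np.nan               # drop x1 wherever the mask fires
    return out

# MCAR: missingness is independent of everything.
mcar = apply_mask(X, rng.random(len(X)) < 0.3)

# MAR: missingness in x1 depends only on the observed column x0.
mar = apply_mask(X, X[:, 0] > 0.5)

# MNAR: missingness in x1 depends on the (unobserved) value of x1 itself.
mnar = apply_mask(X, X[:, 1] > 0.5)

for name, data in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(name, f"missing rate = {np.isnan(data[:, 1]).mean():.2f}")
```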
null | null | 2404.04916 | null | null | http://arxiv.org/pdf/2404.04916v2 | 2024-05-02T13:37:13Z | 2024-04-07T10:57:54Z | Correcting Diffusion-Based Perceptual Image Compression with Privileged
End-to-End Decoder | The images produced by diffusion models can attain excellent perceptual quality. However, it is challenging for diffusion models to guarantee distortion, hence the integration of diffusion models and image compression models still needs more comprehensive explorations. This paper presents a diffusion-based image compression method that employs a privileged end-to-end decoder model as correction, which achieves better perceptual quality while guaranteeing the distortion to an extent. We build a diffusion model and design a novel paradigm that combines the diffusion model and an end-to-end decoder, and the latter is responsible for transmitting the privileged information extracted at the encoder side. Specifically, we theoretically analyze the reconstruction process of the diffusion models at the encoder side with the original images being visible. Based on the analysis, we introduce an end-to-end convolutional decoder to provide a better approximation of the score function $\nabla_{\mathbf{x}_t}\log p(\mathbf{x}_t)$ at the encoder side and effectively transmit the combination. Experiments demonstrate the superiority of our method in both distortion and perception compared with previous perceptual compression methods. | [
"['Yiyang Ma' 'Wenhan Yang' 'Jiaying Liu']"
]
|
null | null | 2404.04920 | null | null | http://arxiv.org/pdf/2404.04920v1 | 2024-04-07T11:20:32Z | 2024-04-07T11:20:32Z | Regularized Conditional Diffusion Model for Multi-Task Preference
Alignment | Sequential decision-making is desired to align with human intents and exhibit versatility across various tasks. Previous methods formulate it as a conditional generation process, utilizing return-conditioned diffusion models to directly model trajectory distributions. Nevertheless, the return-conditioned paradigm relies on pre-defined reward functions, facing challenges when applied in multi-task settings characterized by varying reward functions (versatility) and showing limited controllability concerning human preferences (alignment). In this work, we adopt multi-task preferences as a unified condition for both single- and multi-task decision-making, and propose preference representations aligned with preference labels. The learned representations are used to guide the conditional generation process of diffusion models, and we introduce an auxiliary objective to maximize the mutual information between representations and corresponding generated trajectories, improving alignment between trajectories and preferences. Extensive experiments in D4RL and Meta-World demonstrate that our method presents favorable performance in single- and multi-task scenarios, and exhibits superior alignment with preferences. | [
"['Xudong Yu' 'Chenjia Bai' 'Haoran He' 'Changhong Wang' 'Xuelong Li']"
]
|
null | null | 2404.04931 | null | null | http://arxiv.org/pdf/2404.04931v2 | 2024-04-11T02:32:43Z | 2024-04-07T12:07:33Z | The Sample Complexity of Gradient Descent in Stochastic Convex
Optimization | We analyze the sample complexity of full-batch Gradient Descent (GD) in the setup of non-smooth Stochastic Convex Optimization. We show that the generalization error of GD, with common choice of hyper-parameters, can be $\tilde{\Theta}(d/m + 1/\sqrt{m})$, where $d$ is the dimension and $m$ is the sample size. This matches the sample complexity of \emph{worst-case} empirical risk minimizers. That means that, in contrast with other algorithms, GD has no advantage over naive ERMs. Our bound follows from a new generalization bound that depends on both the dimension as well as the learning rate and number of iterations. Our bound also shows that, for general hyper-parameters, when the dimension is strictly larger than the number of samples, $T=\Omega(1/\epsilon^4)$ iterations are necessary to avoid overfitting. This resolves an open problem by Schlisserman et al. (2023) and Amir et al. (2021), and improves over previous lower bounds that demonstrated that the sample size must be at least the square root of the dimension. | [
"['Roi Livni']"
]
|
null | null | 2404.04940 | null | null | http://arxiv.org/pdf/2404.04940v1 | 2024-04-07T12:25:03Z | 2024-04-07T12:25:03Z | Fuzzy K-Means Clustering without Cluster Centroids | Fuzzy K-Means clustering is a critical technique in unsupervised data analysis. However, the performance of popular Fuzzy K-Means algorithms is sensitive to the selection of initial cluster centroids and is also affected by noise when updating mean cluster centroids. To address these challenges, this paper proposes a novel Fuzzy K-Means clustering algorithm that entirely eliminates the reliance on cluster centroids, obtaining membership matrices solely through distance matrix computation. This innovation enhances flexibility in distance measurement between sample points, thus improving the algorithm's performance and robustness. The paper also establishes theoretical connections between the proposed model and popular Fuzzy K-Means clustering techniques. Experimental results on several real datasets demonstrate the effectiveness of the algorithm. | [
"['Han Lu' 'Fangfang Li' 'Quanxue Gao' 'Cheng Deng' 'Chris Ding'\n 'Qianqian Wang']"
]
|
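For reference, the classical fuzzy c-means membership update that this line of work builds on can be written purely in terms of a distance matrix. The sketch below shows that standard update (with the fuzzifier m as an assumed hyperparameter), not the paper's proposed centroid-free algorithm.

```python
import numpy as np

def fuzzy_memberships(D, m=2.0, eps=1e-12):
    """Classical fuzzy c-means membership update from a distance matrix.

    D: (n_samples, n_clusters) distances; returns memberships u with rows
    summing to 1:  u[i, j] = 1 / sum_k (D[i, j] / D[i, k])^(2 / (m - 1))
    """
    D = np.maximum(D, eps)                       # guard against zero distances
    ratio = (D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

# Toy example: 4 points, 2 cluster representatives.
D = np.array([[0.1, 2.0],
              [1.9, 0.2],
              [1.0, 1.0],
              [0.5, 1.5]])
U = fuzzy_memberships(D)
print(U.round(3), U.sum(axis=1))                 # each row sums to 1
```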
null | null | 2404.04943 | null | null | http://arxiv.org/pdf/2404.04943v1 | 2024-04-07T12:40:37Z | 2024-04-07T12:40:37Z | Chiplet Placement Order Exploration Based on Learning to Rank with Graph
Representation | Chiplet-based systems, integrating various silicon dies manufactured at different integrated circuit technology nodes on a carrier interposer, have garnered significant attention in recent years due to their cost-effectiveness and competitive performance. The widespread adoption of reinforcement learning as a sequential placement method has introduced a new challenge in determining the optimal placement order for each chiplet. The order in which chiplets are placed on the interposer influences the spatial resources available for earlier and later placed chiplets, making the placement results highly sensitive to the sequence of chiplet placement. To address these challenges, we propose a learning to rank approach with graph representation, building upon the reinforcement learning framework RLPlanner. This method aims to select the optimal chiplet placement order for each chiplet-based system. Experimental results demonstrate that compared to placement order obtained solely based on the descending order of the chiplet area and the number of interconnect wires between the chiplets, utilizing the placement order obtained from the learning to rank network leads to further improvements in system temperature and inter-chiplet wirelength. Specifically, applying the top-ranked placement order obtained from the learning to rank network results in a 10.05% reduction in total inter-chiplet wirelength and a 1.01% improvement in peak system temperature during the chiplet placement process. | [
"['Zhihui Deng' 'Yuanyuan Duan' 'Leilai Shao' 'Xiaolei Zhu']"
]
|
null | null | 2404.04947 | null | null | http://arxiv.org/pdf/2404.04947v2 | 2024-06-07T07:03:30Z | 2024-04-07T12:57:46Z | Gull: A Generative Multifunctional Audio Codec | We introduce Gull, a generative multifunctional audio codec. Gull is a general purpose neural audio compression and decompression model which can be applied to a wide range of tasks and applications such as real-time communication, audio super-resolution, and codec language models. The key components of Gull include (1) universal-sample-rate modeling via subband modeling schemes motivated by recent progress in audio source separation, (2) gain-shape representations motivated by traditional audio codecs, (3) improved residual vector quantization modules, (4) elastic decoder network that enables user-defined model size and complexity during inference time, (5) built-in ability for audio super-resolution without the increase of bitrate. We compare Gull with existing traditional and neural audio codecs and show that Gull is able to achieve on par or better performance across various sample rates, bitrates and model complexities in both subjective and objective evaluation metrics. | [
"['Yi Luo' 'Jianwei Yu' 'Hangting Chen' 'Rongzhi Gu' 'Chao Weng']"
]
|
null | null | 2404.04969 | null | null | http://arxiv.org/pdf/2404.04969v1 | 2024-04-07T14:19:22Z | 2024-04-07T14:19:22Z | Temporal Generalization Estimation in Evolving Graphs | Graph Neural Networks (GNNs) are widely deployed in vast fields, but they often struggle to maintain accurate representations as graphs evolve. We theoretically establish a lower bound, proving that under mild conditions, representation distortion inevitably occurs over time. To estimate the temporal distortion without human annotation after deployment, one naive approach is to pre-train a recurrent model (e.g., RNN) before deployment and use this model afterwards, but the estimation is far from satisfactory. In this paper, we analyze the representation distortion from an information theory perspective, and attribute it primarily to inaccurate feature extraction during evolution. Consequently, we introduce Smart, a straightforward and effective baseline enhanced by an adaptive feature extractor through self-supervised graph reconstruction. In synthetic random graphs, we further refine the former lower bound to show the inevitable distortion over time and empirically observe that Smart achieves good estimation performance. Moreover, we observe that Smart consistently shows outstanding generalization estimation on four real-world evolving graphs. The ablation studies underscore the necessity of graph reconstruction. For example, on OGB-arXiv dataset, the estimation metric MAPE deteriorates from 2.19% to 8.00% without reconstruction. | [
"['Bin Lu' 'Tingyan Ma' 'Xiaoying Gan' 'Xinbing Wang' 'Yunqiang Zhu'\n 'Chenghu Zhou' 'Shiyu Liang']"
]
|
null | null | 2404.04970 | null | null | http://arxiv.org/pdf/2404.04970v2 | 2024-07-07T04:47:49Z | 2024-04-07T14:20:51Z | How to characterize imprecision in multi-view clustering? | It is still challenging to cluster multi-view data since existing methods can only assign an object to a specific (singleton) cluster when combining different view information. As a result, they fail to characterize the imprecision of objects in overlapping regions of different clusters, thus leading to a high risk of errors. In this paper, we thereby want to answer the question: how to characterize imprecision in multi-view clustering? Correspondingly, we propose a multi-view low-rank evidential c-means based on entropy constraint (MvLRECM). The proposed MvLRECM can be considered as a multi-view version of evidential c-means based on the theory of belief functions. In MvLRECM, each object is allowed to belong to different clusters with various degrees of support (masses of belief) to characterize uncertainty in decision-making. Moreover, if an object is in the overlapping region of several singleton clusters, it can be assigned to a meta-cluster, defined as the union of these singleton clusters, to characterize the local imprecision in the result. In addition, entropy-weighting and low-rank constraints are employed to reduce imprecision and improve accuracy. Compared to state-of-the-art methods, the effectiveness of MvLRECM is demonstrated based on several toy and UCI real datasets. | [
"['Jinyi Xu' 'Zuowei Zhang' 'Ze Lin' 'Yixiang Chen' 'Zhe Liu'\n 'Weiping Ding']"
]
|
null | null | 2404.04979 | null | null | http://arxiv.org/pdf/2404.04979v2 | 2024-04-11T16:11:33Z | 2024-04-07T14:47:07Z | CAVIAR: Categorical-Variable Embeddings for Accurate and Robust
Inference | Social science research often hinges on the relationship between categorical variables and outcomes. We introduce CAVIAR, a novel method for embedding categorical variables that assume values in a high-dimensional ambient space but are sampled from an underlying manifold. Our theoretical and numerical analyses outline challenges posed by such categorical variables in causal inference. Specifically, dynamically varying and sparse levels can lead to violations of the Donsker conditions and a failure of the estimation functionals to converge to a tight Gaussian process. Traditional approaches, including the exclusion of rare categorical levels and principled variable selection models like LASSO, fall short. CAVIAR embeds the data into a lower-dimensional global coordinate system. The mapping can be derived from both structured and unstructured data, and ensures stable and robust estimates through dimensionality reduction. In a dataset of direct-to-consumer apparel sales, we illustrate how high-dimensional categorical variables, such as zip codes, can be succinctly represented, facilitating inference and analysis. | [
"['Anirban Mukherjee' 'Hannah Hanwen Chang']"
]
|
null | null | 2404.04997 | null | null | http://arxiv.org/pdf/2404.04997v2 | 2024-04-18T23:23:53Z | 2024-04-07T15:44:20Z | Adapting LLMs for Efficient Context Processing through Soft Prompt
Compression | The rapid advancement of Large Language Models (LLMs) has inaugurated a transformative epoch in natural language processing, fostering unprecedented proficiency in text generation, comprehension, and contextual scrutiny. Nevertheless, effectively handling extensive contexts, crucial for myriad applications, poses a formidable obstacle owing to the intrinsic constraints of the models' context window sizes and the computational burdens entailed by their operations. This investigation presents an innovative framework that strategically tailors LLMs for streamlined context processing by harnessing the synergies among natural language summarization, soft prompt compression, and augmented utility preservation mechanisms. Our methodology, dubbed SoftPromptComp, amalgamates natural language prompts extracted from summarization methodologies with dynamically generated soft prompts to forge a concise yet semantically robust depiction of protracted contexts. This depiction undergoes further refinement via a weighting mechanism optimizing information retention and utility for subsequent tasks. We substantiate that our framework markedly diminishes computational overhead and enhances LLMs' efficacy across various benchmarks, while upholding or even augmenting the caliber of the produced content. By amalgamating soft prompt compression with sophisticated summarization, SoftPromptComp confronts the dual challenges of managing lengthy contexts and ensuring model scalability. Our findings point towards a propitious trajectory for augmenting LLMs' applicability and efficiency, rendering them more versatile and pragmatic for real-world applications. This research enriches the ongoing discourse on optimizing language models, providing insights into the potency of soft prompts and summarization techniques as pivotal instruments for the forthcoming generation of NLP solutions. | [
"['Cangqing Wang' 'Yutian Yang' 'Ruisi Li' 'Dan Sun' 'Ruicong Cai'\n 'Yuzhu Zhang' 'Chengqian Fu' 'Lillian Floyd']"
]
|
null | null | 2404.05019 | null | null | http://arxiv.org/pdf/2404.05019v1 | 2024-04-07T17:17:23Z | 2024-04-07T17:17:23Z | Shortcut-connected Expert Parallelism for Accelerating
Mixture-of-Experts | Expert parallelism has been introduced as a strategy to distribute the computational workload of sparsely-gated mixture-of-experts (MoE) models across multiple computing devices, facilitating the execution of these increasingly large-scale models. However, the All-to-All communication intrinsic to expert parallelism constitutes a significant overhead, diminishing the MoE models' efficiency. Current optimization approaches offer some relief, yet they are constrained by the sequential interdependence of communication and computation operations. To address this limitation, we present a novel shortcut-connected MoE architecture with overlapping parallel strategy, designated as ScMoE, which effectively decouples communication from its conventional sequence, allowing for a substantial overlap of 70% to 100% with computation. When compared with the prevalent top-2 MoE architecture, ScMoE demonstrates training speed improvements of 30% and 11%, and inference improvements of 40% and 15%, in our PCIe and NVLink hardware environments, respectively, where communication constitutes 60% and 15% of the total MoE time consumption. On the other hand, extensive experiments and theoretical analyses indicate that ScMoE not only achieves comparable but in some instances surpasses the model quality of existing approaches in vision and language tasks. | [
"['Weilin Cai' 'Juyong Jiang' 'Le Qin' 'Junwei Cui' 'Sunghun Kim'\n 'Jiayi Huang']"
]
|
null | null | 2404.05022 | null | null | http://arxiv.org/pdf/2404.05022v1 | 2024-04-07T17:25:52Z | 2024-04-07T17:25:52Z | DinoBloom: A Foundation Model for Generalizable Cell Embeddings in
Hematology | In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears. However, clinical adoption of computational models has been hampered by the lack of generalization due to large batch effects, small dataset sizes, and poor performance in transfer learning from natural images. To address these challenges, we introduce DinoBloom, the first foundation model for single cell images in hematology, utilizing a tailored DINOv2 pipeline. Our model is built upon an extensive collection of 13 diverse, publicly available datasets of peripheral blood and bone marrow smears, the most substantial open-source cohort in hematology so far, comprising over 380,000 white blood cell images. To assess its generalization capability, we evaluate it on an external dataset with a challenging domain shift. We show that our model outperforms existing medical and non-medical vision models in (i) linear probing and k-nearest neighbor evaluations for cell-type classification on blood and bone marrow smears and (ii) weakly supervised multiple instance learning for acute myeloid leukemia subtyping by a large margin. A family of four DinoBloom models (small, base, large, and giant) can be adapted for a wide range of downstream applications, be a strong baseline for classification problems, and facilitate the assessment of batch effects in new datasets. All models are available at github.com/marrlab/DinoBloom. | [
"['Valentin Koch' 'Sophia J. Wagner' 'Salome Kazeminia' 'Ece Sancar'\n 'Matthias Hehr' 'Julia Schnabel' 'Tingying Peng' 'Carsten Marr']"
]
|
null | null | 2404.05043 | null | null | http://arxiv.org/pdf/2404.05043v1 | 2024-04-07T18:55:33Z | 2024-04-07T18:55:33Z | Optimizing Privacy and Utility Tradeoffs for Group Interests Through
Harmonization | We propose a novel problem formulation to address the privacy-utility tradeoff, specifically when dealing with two distinct user groups characterized by unique sets of private and utility attributes. Unlike previous studies that primarily focus on scenarios where all users share identical private and utility attributes and often rely on auxiliary datasets or manual annotations, we introduce a collaborative data-sharing mechanism between two user groups through a trusted third party. This third party uses adversarial privacy techniques with our proposed data-sharing mechanism to internally sanitize data for both groups and eliminates the need for manual annotation or auxiliary datasets. Our methodology ensures that private attributes cannot be accurately inferred while enabling highly accurate predictions of utility features. Importantly, even if analysts or adversaries possess auxiliary datasets containing raw data, they are unable to accurately deduce private features. Additionally, our data-sharing mechanism is compatible with various existing adversarially trained privacy techniques. We empirically demonstrate the effectiveness of our approach using synthetic and real-world datasets, showcasing its ability to balance the conflicting goals of privacy and utility. | [
"['Bishwas Mandal' 'George Amariucai' 'Shuangqing Wei']"
]
|
null | null | 2404.05047 | null | null | http://arxiv.org/pdf/2404.05047v1 | 2024-04-07T19:02:50Z | 2024-04-07T19:02:50Z | Initial Exploration of Zero-Shot Privacy Utility Tradeoffs in Tabular
Data Using GPT-4 | We investigate the application of large language models (LLMs), specifically GPT-4, to scenarios involving the tradeoff between privacy and utility in tabular data. Our approach entails prompting GPT-4 by transforming tabular data points into textual format, followed by the inclusion of precise sanitization instructions in a zero-shot manner. The primary objective is to sanitize the tabular data in such a way that it hinders existing machine learning models from accurately inferring private features while allowing models to accurately infer utility-related attributes. We explore various sanitization instructions. Notably, we discover that this relatively simple approach yields performance comparable to more complex adversarial optimization methods used for managing privacy-utility tradeoffs. Furthermore, while the prompts successfully obscure private features from the detection capabilities of existing machine learning models, we observe that this obscuration alone does not necessarily meet a range of fairness metrics. Nevertheless, our research indicates the potential effectiveness of LLMs in adhering to these fairness metrics, with some of our experimental results aligning with those achieved by well-established adversarial optimization techniques. | [
"['Bishwas Mandal' 'George Amariucai' 'Shuangqing Wei']"
]
|
null | null | 2404.05051 | null | null | http://arxiv.org/pdf/2404.05051v1 | 2024-04-07T19:22:51Z | 2024-04-07T19:22:51Z | Skill Transfer and Discovery for Sim-to-Real Learning: A
Representation-Based Viewpoint | We study sim-to-real skill transfer and discovery in the context of robotics control using representation learning. We draw inspiration from spectral decomposition of Markov decision processes. The spectral decomposition brings about representation that can linearly represent the state-action value function induced by any policies, thus can be regarded as skills. The skill representations are transferable across arbitrary tasks with the same transition dynamics. Moreover, to handle the sim-to-real gap in the dynamics, we propose a skill discovery algorithm that learns new skills caused by the sim-to-real gap from real-world data. We promote the discovery of new skills by enforcing orthogonal constraints between the skills to learn and the skills from simulators, and then synthesize the policy using the enlarged skill sets. We demonstrate our methodology by transferring quadrotor controllers from simulators to Crazyflie 2.1 quadrotors. We show that we can learn the skill representations from a single simulator task and transfer these to multiple different real-world tasks including hovering, taking off, landing and trajectory tracking. Our skill discovery approach helps narrow the sim-to-real gap and improve the real-world controller performance by up to 30.2%. | [
"['Haitong Ma' 'Zhaolin Ren' 'Bo Dai' 'Na Li']"
]
|
null | null | 2404.05055 | null | null | http://arxiv.org/pdf/2404.05055v1 | 2024-04-07T19:29:09Z | 2024-04-07T19:29:09Z | Percentile Criterion Optimization in Offline Reinforcement Learning | In reinforcement learning, robust policies for high-stakes decision-making problems with limited data are usually computed by optimizing the \emph{percentile criterion}. The percentile criterion is approximately solved by constructing an \emph{ambiguity set} that contains the true model with high probability and optimizing the policy for the worst model in the set. Since the percentile criterion is non-convex, constructing ambiguity sets is often challenging. Existing work uses \emph{Bayesian credible regions} as ambiguity sets, but they are often unnecessarily large and result in learning overly conservative policies. To overcome these shortcomings, we propose a novel Value-at-Risk based dynamic programming algorithm to optimize the percentile criterion without explicitly constructing any ambiguity sets. Our theoretical and empirical results show that our algorithm implicitly constructs much smaller ambiguity sets and learns less conservative robust policies. | [
"['Elita A. Lobo' 'Cyrus Cousins' 'Yair Zick' 'Marek Petrik']"
]
|
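The Value-at-Risk quantity underlying the percentile criterion is just a lower quantile across posterior model samples. A minimal sketch, with the MDP abstracted away into vectors of sampled policy returns (an illustrative assumption; the paper's dynamic program is not reproduced):

```python
import numpy as np

def value_at_risk(returns, alpha=0.05):
    """alpha-VaR: a value the true return exceeds with probability >= 1 - alpha.

    `returns`: policy returns evaluated under posterior samples of the model.
    The percentile criterion seeks a policy maximizing this lower quantile.
    """
    return float(np.quantile(np.asarray(returns), alpha))

rng = np.random.default_rng(0)
# Returns of two candidate policies under 1000 sampled models (toy numbers).
ret_a = rng.normal(10.0, 4.0, 1000)     # higher mean, higher spread
ret_b = rng.normal(9.0, 1.0, 1000)      # lower mean, much safer

for name, r in [("A", ret_a), ("B", ret_b)]:
    print(name, f"mean={r.mean():.2f}", f"VaR_0.05={value_at_risk(r):.2f}")
# The robust (percentile) criterion can prefer B even though A has the higher mean.
```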
null | null | 2404.05057 | null | null | http://arxiv.org/pdf/2404.05057v1 | 2024-04-07T19:39:14Z | 2024-04-07T19:39:14Z | TimeCSL: Unsupervised Contrastive Learning of General Shapelets for
Explorable Time Series Analysis | Unsupervised (a.k.a. Self-supervised) representation learning (URL) has emerged as a new paradigm for time series analysis, because it has the ability to learn generalizable time series representation beneficial for many downstream tasks without using labels that are usually difficult to obtain. Considering that existing approaches have limitations in the design of the representation encoder and the learning objective, we have proposed Contrastive Shapelet Learning (CSL), the first URL method that learns the general-purpose shapelet-based representation through unsupervised contrastive learning, and shown its superior performance in several analysis tasks, such as time series classification, clustering, and anomaly detection. In this paper, we develop TimeCSL, an end-to-end system that makes full use of the general and interpretable shapelets learned by CSL to achieve explorable time series analysis in a unified pipeline. We introduce the system components and demonstrate how users interact with TimeCSL to solve different analysis tasks in the unified pipeline, and gain insight into their time series by exploring the learned shapelets and representation. | [
"['Zhiyu Liang' 'Chen Liang' 'Zheng Liang' 'Hongzhi Wang' 'Bo Zheng']"
]
|
null | null | 2404.05058 | null | null | http://arxiv.org/pdf/2404.05058v1 | 2024-04-07T20:05:49Z | 2024-04-07T20:05:49Z | A robust assessment for invariant representations | The performance of machine learning models can be impacted by changes in data over time. A promising approach to address this challenge is invariant learning, with a particular focus on a method known as invariant risk minimization (IRM). This technique aims to identify a stable data representation that remains effective with out-of-distribution (OOD) data. While numerous studies have developed IRM-based methods adaptive to data augmentation scenarios, there has been limited attention on directly assessing how well these representations preserve their invariant performance under varying conditions. In our paper, we propose a novel method to evaluate invariant performance, specifically tailored for IRM-based methods. We establish a bridge between the conditional expectation of an invariant predictor across different environments through the likelihood ratio. Our proposed criterion offers a robust basis for evaluating invariant performance. We validate our approach with theoretical support and demonstrate its effectiveness through extensive numerical studies.These experiments illustrate how our method can assess the invariant performance of various representation techniques. | [
"['Wenlu Tang' 'Zicheng Liu']"
]
|
null | null | 2404.05062 | null | null | http://arxiv.org/pdf/2404.05062v1 | 2024-04-07T20:16:37Z | 2024-04-07T20:16:37Z | New methods for computing the generalized chi-square distribution | We present several exact and approximate mathematical methods and open-source software to compute the cdf, pdf and inverse cdf of the generalized chi-square distribution, which appears in Bayesian classification problems. Some methods are geared for speed, while others are designed to be accurate far into the tails, using which we can also measure large values of the discriminability index $d'$ between multinormals. We compare the accuracy and speed of these methods against the best existing methods. | [
"['Abhranil Das']"
]
|
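A generalized chi-square variable is a weighted sum of noncentral chi-squares plus an optional normal term, so a Monte Carlo cdf estimate takes only a few lines and serves as a slow-but-unbiased reference against which faster exact methods can be checked. The parameter values below are illustrative.

```python
import numpy as np

def gx2_cdf_mc(x, w, k, lam, s=0.0, m=0.0, n=200_000, seed=0):
    """Monte Carlo cdf of a generalized chi-square variable:
        sum_i w[i] * noncentral_chi2(k[i], lam[i]) + s * N(0, 1) + m
    Slow but unbiased; useful as a reference for faster exact methods.
    """
    rng = np.random.default_rng(seed)
    total = np.full(n, m, dtype=float) + s * rng.standard_normal(n)
    for wi, ki, li in zip(w, k, lam):
        total += wi * rng.noncentral_chisquare(ki, li, n) if li > 0 \
                 else wi * rng.chisquare(ki, n)
    return float(np.mean(total <= x))

# Example: 2*chi2_3(nc=1) - 1*chi2_1 + 0.5*N(0,1) + 4, evaluated at x = 8.
print(gx2_cdf_mc(8.0, w=[2.0, -1.0], k=[3, 1], lam=[1.0, 0.0], s=0.5, m=4.0))
```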
null | null | 2404.05064 | null | null | http://arxiv.org/pdf/2404.05064v1 | 2024-04-07T20:24:44Z | 2024-04-07T20:24:44Z | A Structure-Guided Gauss-Newton Method for Shallow ReLU Neural Network | In this paper, we propose a structure-guided Gauss-Newton (SgGN) method for solving least squares problems using a shallow ReLU neural network. The method effectively takes advantage of both the least squares structure and the neural network structure of the objective function. By categorizing the weights and biases of the hidden and output layers of the network as nonlinear and linear parameters, respectively, the method iterates back and forth between the nonlinear and linear parameters. The nonlinear parameters are updated by a damped Gauss-Newton method and the linear ones are updated by a linear solver. Moreover, at the Gauss-Newton step, a special form of the Gauss-Newton matrix is derived for the shallow ReLU neural network and is used for efficient iterations. It is shown that the corresponding mass and Gauss-Newton matrices in the respective linear and nonlinear steps are symmetric and positive definite under reasonable assumptions. Thus, the SgGN method naturally produces an effective search direction without the need of additional techniques like shifting in the Levenberg-Marquardt method to achieve invertibility of the Gauss-Newton matrix. The convergence and accuracy of the method are demonstrated numerically for several challenging function approximation problems, especially those with discontinuities or sharp transition layers that pose significant challenges for commonly used training algorithms in machine learning. | [
"['Zhiqiang Cai' 'Tong Ding' 'Min Liu' 'Xinyu Liu' 'Jianlin Xia']"
]
|
null | null | 2404.05071 | null | null | http://arxiv.org/pdf/2404.05071v1 | 2024-04-07T20:50:13Z | 2024-04-07T20:50:13Z | Test-Time Training for Depression Detection | Previous works on depression detection use datasets collected in similar environments to train and test the models. In practice, however, the train and test distributions cannot be guaranteed to be identical. Distribution shifts can be introduced due to variations such as recording environment (e.g., background noise) and demographics (e.g., gender, age, etc). Such distributional shifts can surprisingly lead to severe performance degradation of the depression detection models. In this paper, we analyze the application of test-time training (TTT) to improve robustness of models trained for depression detection. When compared to regular testing of the models, we find TTT can significantly improve the robustness of the model under a variety of distributional shifts introduced due to: (a) background-noise, (b) gender-bias, and (c) data collection and curation procedure (i.e., train and test samples are from separate datasets). | [
"['Sri Harsha Dumpala' 'Chandramouli Shama Sastry' 'Rudolf Uher'\n 'Sageev Oore']"
]
|
null | null | 2404.05083 | null | null | http://arxiv.org/pdf/2404.05083v1 | 2024-04-07T21:46:47Z | 2024-04-07T21:46:47Z | HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large
Foundation Models | While recent progress in video-text retrieval has been driven by the exploration of powerful model architectures and training strategies, the representation learning ability of video-text retrieval models is still limited due to low-quality and scarce training data annotations. To address this issue, we present a novel video-text learning paradigm, HaVTR, which augments video and text data to learn more generalized features. Specifically, we first adopt a simple augmentation method, which generates self-similar data by randomly duplicating or dropping subwords and frames. In addition, inspired by the recent advancement in visual and language generative models, we propose a more powerful augmentation method through textual paraphrasing and video stylization using large language models (LLMs) and visual generative models (VGMs). Further, to bring richer information into video and text, we propose a hallucination-based augmentation method, where we use LLMs and VGMs to generate and add new relevant information to the original data. Benefiting from the enriched data, extensive experiments on several video-text retrieval benchmarks demonstrate the superiority of HaVTR over existing methods. | [
"['Yimu Wang' 'Shuai Yuan' 'Xiangru Jian' 'Wei Pang' 'Mushi Wang' 'Ning Yu']"
]
|
null | null | 2404.05086 | null | null | http://arxiv.org/pdf/2404.05086v1 | 2024-04-07T22:00:50Z | 2024-04-07T22:00:50Z | A Note on LoRA | LoRA (Low-Rank Adaptation) has emerged as a preferred method for efficiently adapting Large Language Models (LLMs) with remarkable simplicity and efficacy. This note extends the original LoRA paper by offering new perspectives that were not initially discussed and presents a series of insights for deploying LoRA at scale. Without introducing new experiments, we aim to improve the understanding and application of LoRA. | [
"['Vlad Fomenko' 'Han Yu' 'Jongho Lee' 'Stanley Hsieh' 'Weizhu Chen']"
]
|
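For readers new to the method, the LoRA update itself is compact: the frozen weight W is augmented by a trainable low-rank product scaled by alpha/r, and the adapter can be merged back into W for inference. A minimal numpy sketch of that standard formulation (the rank and scaling values are illustrative, and nothing here is specific to the note above):

```python
import numpy as np

class LoRALinear:
    """y = x W^T + (alpha / r) * x (B A)^T, with W frozen and only A, B trained."""

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.02, (d_out, d_in))   # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (r, d_in))       # trainable, small random init
        self.B = np.zeros((d_out, r))                 # trainable, zero init => no-op start
        self.scale = alpha / r

    def __call__(self, x):
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def merge(self):
        """Fold the adapter into W for deployment (no extra inference cost)."""
        return self.W + self.scale * self.B @ self.A

layer = LoRALinear(d_in=64, d_out=32)
x = np.ones((4, 64))
assert np.allclose(layer(x), x @ layer.merge().T)   # merged and unmerged forms agree
```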
null | null | 2404.05089 | null | null | http://arxiv.org/pdf/2404.05089v1 | 2024-04-07T22:13:43Z | 2024-04-07T22:13:43Z | SEER-MoE: Sparse Expert Efficiency through Regularization for
Mixture-of-Experts | The advancement of deep learning has led to the emergence of Mixture-of-Experts (MoEs) models, known for their dynamic allocation of computational resources based on input. Despite their promise, MoEs face challenges, particularly in terms of memory requirements. To address this, our work introduces SEER-MoE, a novel two-stage framework for reducing both the memory footprint and compute requirements of pre-trained MoE models. The first stage involves pruning the total number of experts using a heavy-hitters counting guidance, while the second stage employs a regularization-based fine-tuning strategy to recover accuracy loss and reduce the number of activated experts during inference. Our empirical studies demonstrate the effectiveness of our method, resulting in a sparse MoEs model optimized for inference efficiency with minimal accuracy trade-offs. | [
"['Alexandre Muzio' 'Alex Sun' 'Churan He']"
]
|
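The first stage described above — pruning by heavy-hitters counting — can be sketched by counting how often the router places each expert in a token's top-k on a calibration set, then keeping the most-used experts. The routing logits below are synthetic stand-ins; in practice they would come from the pre-trained MoE.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_tokens, top_k = 16, 10_000, 2

# Router logits for a calibration set; synthetic here, per-expert bias included.
logits = rng.normal(size=(n_tokens, n_experts)) + rng.normal(size=n_experts)

# Heavy-hitters counting: how often each expert lands in a token's top-k.
topk = np.argsort(-logits, axis=1)[:, :top_k]
counts = np.bincount(topk.ravel(), minlength=n_experts)

keep = np.argsort(-counts)[: n_experts // 2]    # keep the most-used half
print("kept experts:", np.sort(keep))
print("coverage of routed tokens:", counts[keep].sum() / counts.sum())
```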
null | null | 2404.05090 | null | null | http://arxiv.org/pdf/2404.05090v1 | 2024-04-07T22:15:13Z | 2024-04-07T22:15:13Z | How Bad is Training on Synthetic Data? A Statistical Analysis of
Language Model Collapse | The phenomenon of model collapse, introduced in (Shumailov et al., 2023), refers to the deterioration in performance that occurs when new models are trained on synthetic data generated from previously trained models. This recursive training loop makes the tails of the original distribution disappear, thereby making future-generation models forget about the initial (real) distribution. With the aim of rigorously understanding model collapse in language models, we consider in this paper a statistical model that allows us to characterize the impact of various recursive training scenarios. Specifically, we demonstrate that model collapse cannot be avoided when training solely on synthetic data. However, when mixing both real and synthetic data, we provide an estimate of a maximal amount of synthetic data below which model collapse can eventually be avoided. Our theoretical conclusions are further supported by empirical validations. | [
"['Mohamed El Amine Seddik' 'Suei-Wen Chen' 'Soufiane Hayou'\n 'Pierre Youssef' 'Merouane Debbah']"
]
|
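The collapse mechanism can be reproduced in a toy statistical model: repeatedly refit a Gaussian to samples drawn from the previous fit, and the fitted spread (the distribution's "tails") drifts toward zero, while mixing in a fraction of real data damps the drift. A minimal sketch with illustrative constants, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 100_000)          # the original "real" distribution

def recursive_fit(generations=200, n=100, real_frac=0.0):
    mu, sigma = 0.0, 1.0                      # generation-0 "model"
    for _ in range(generations):
        synth = rng.normal(mu, sigma, n)      # data generated by the last model
        k = int(real_frac * n)                # optionally mix in some real data
        data = np.concatenate([synth[: n - k], rng.choice(real, k)])
        mu, sigma = data.mean(), data.std()   # "retraining" = refitting the Gaussian
    return sigma

print("pure synthetic :", round(recursive_fit(real_frac=0.0), 3))  # spread decays
print("10% real mixed :", round(recursive_fit(real_frac=0.1), 3))  # decay is damped
```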
null | null | 2404.05094 | null | null | http://arxiv.org/pdf/2404.05094v1 | 2024-04-07T22:31:34Z | 2024-04-07T22:31:34Z | Active Test-Time Adaptation: Theoretical Analyses and An Algorithm | Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings. Currently, most TTA methods can only deal with minor shifts and rely heavily on heuristic and empirical studies. To advance TTA under domain shifts, we propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting. We provide a learning theory analysis, demonstrating that incorporating limited labeled test instances enhances overall performances across test domains with a theoretical guarantee. We also present a sample entropy balancing for implementing ATTA while avoiding catastrophic forgetting (CF). We introduce a simple yet effective ATTA algorithm, known as SimATTA, using real-time sample selection techniques. Extensive experimental results confirm consistency with our theoretical analyses and show that the proposed ATTA method yields substantial performance improvements over TTA methods while maintaining efficiency and shares similar effectiveness to the more demanding active domain adaptation (ADA) methods. Our code is available at https://github.com/divelab/ATTA | [
"['Shurui Gui' 'Xiner Li' 'Shuiwang Ji']"
]
|
null | null | 2404.05102 | null | null | http://arxiv.org/pdf/2404.05102v1 | 2024-04-07T22:58:18Z | 2024-04-07T22:58:18Z | LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance
Volumetric Medical Image Segmentation | As a result of the rise of Transformer architectures in medical image analysis, specifically in the domain of medical image segmentation, a multitude of hybrid models have been created that merge the advantages of Convolutional Neural Networks (CNNs) and Transformers. These hybrid models have achieved notable success by significantly improving segmentation accuracy. Yet, this progress often comes at the cost of increased model complexity, both in terms of parameters and computational demand. Moreover, many of these models fail to consider the crucial interplay between spatial and channel features, which could further refine and improve segmentation outcomes. To address this, we introduce LHU-Net, a Light Hybrid U-Net architecture optimized for volumetric medical image segmentation. LHU-Net is meticulously designed to prioritize spatial feature analysis in its initial layers before shifting focus to channel-based features in its deeper layers, ensuring a comprehensive feature extraction process. Rigorous evaluation across five benchmark datasets - Synapse, LA, Pancreas, ACDC, and BRaTS 2018 - underscores LHU-Net's superior performance, showcasing its dual capacity for efficiency and accuracy. Notably, LHU-Net sets new performance benchmarks, such as attaining a Dice score of 92.66 on the ACDC dataset, while simultaneously reducing parameters by 85% and quartering the computational load compared to existing state-of-the-art models. Achieved without any reliance on pre-training, additional data, or model ensemble, LHU-Net's effectiveness is further evidenced by its state-of-the-art performance across all evaluated datasets, utilizing fewer than 11 million parameters. This achievement highlights that balancing computational efficiency with high accuracy in medical image segmentation is feasible. Our implementation of LHU-Net is freely accessible to the research community on GitHub. | [
"['Yousef Sadegheih' 'Afshin Bozorgpour' 'Pratibha Kumari' 'Reza Azad'\n 'Dorit Merhof']"
]
|
null | null | 2404.05108 | null | null | http://arxiv.org/pdf/2404.05108v1 | 2024-04-07T23:34:51Z | 2024-04-07T23:34:51Z | Efficient Gradient Estimation of Variational Quantum Circuits with Lie
Algebraic Symmetries | Hybrid quantum-classical optimization and learning strategies are among the most promising approaches to harnessing quantum information or gaining a quantum advantage over classical methods. However, efficient estimation of the gradient of the objective function in such models remains a challenge due to several factors including the exponential dimensionality of the Hilbert spaces, and information loss of quantum measurements. In this work, we study generic parameterized circuits in the context of variational methods. We develop a framework for gradient estimation that exploits the algebraic symmetries of Hamiltonian characterized through Lie algebra or group theory. Particularly, we prove that when the dimension of the dynamical Lie algebra is polynomial in the number of qubits, one can estimate the gradient with polynomial classical and quantum resources. This is done by a series of Hadamard tests applied to the output of the ansatz with no change to its circuit. We show that this approach can be equipped with classical shadow tomography to further reduce the measurement shot complexity to scale logarithmically with the number of parameters. | [
"['Mohsen Heidari' 'Masih Mozakka' 'Wojciech Szpankowski']"
]
|
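As a point of contrast with the Hadamard-test approach above, the widely used parameter-shift rule estimates such gradients from two shifted circuit evaluations. The single-qubit sketch below verifies it against the analytic derivative; this is the standard textbook rule, not the paper's method.

```python
import numpy as np

# Single-qubit toy: |psi(theta)> = Ry(theta)|0>, observable Z.
Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(theta):
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)             # equals cos(theta) analytically

def parameter_shift_grad(theta):
    # Exact for gates generated by operators with eigenvalues +-1/2.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta), -np.sin(theta))   # the two values agree
```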
null | null | 2404.05128 | null | null | http://arxiv.org/pdf/2404.05128v2 | 2024-05-15T16:55:03Z | 2024-04-08T01:08:41Z | Importance of realism in procedurally-generated synthetic images for
deep learning: case studies in maize and canola | Artificial neural networks are often used to identify features of crop plants. However, training their models requires many annotated images, which can be expensive and time-consuming to acquire. Procedural models of plants, such as those developed with Lindenmayer-systems (L-systems) can be created to produce visually realistic simulations, and hence images of plant simulations, where annotations are implicitly known. These synthetic images can either augment or completely replace real images in training neural networks for phenotyping tasks. In this paper, we systematically vary amounts of real and synthetic images used for training in both maize and canola to better understand situations where synthetic images generated from L-systems can help prediction on real images. This work also explores the degree to which realism in the synthetic images improves prediction. We have five different variants of a procedural canola model (these variants were created by tuning the realism while using calibration), and the deep learning results showed how drastically these results improve as the canola synthetic images are made to be more realistic. Furthermore, we see how neural network predictions can be used to help calibrate L-systems themselves, creating a feedback loop. | [
"['Nazifa Azam Khan' 'Mikolaj Cieslak' 'Ian McQuillan']"
]
|
null | null | 2404.05143 | null | null | http://arxiv.org/pdf/2404.05143v1 | 2024-04-08T01:54:28Z | 2024-04-08T01:54:28Z | Plug and Play with Prompts: A Prompt Tuning Approach for Controlling
Text Generation | Transformer-based Large Language Models (LLMs) have shown exceptional language generation capabilities in response to text-based prompts. However, controlling the direction of generation via textual prompts has been challenging, especially with smaller models. In this work, we explore the use of Prompt Tuning to achieve controlled language generation. Generated text is steered using prompt embeddings, which are trained using a small language model, used as a discriminator. Moreover, we demonstrate that these prompt embeddings can be trained with a very small dataset, with as low as a few hundred training examples. Our method thus offers a data and parameter efficient solution towards controlling language model outputs. We carry out extensive evaluation on four datasets: SST-5 and Yelp (sentiment analysis), GYAFC (formality) and JIGSAW (toxic language). Finally, we demonstrate the efficacy of our method towards mitigating harmful, toxic, and biased text generated by language models. | [
"['Rohan Deepak Ajwani' 'Zining Zhu' 'Jonathan Rose' 'Frank Rudzicz']"
]
|
null | null | 2404.05144 | null | null | http://arxiv.org/pdf/2404.05144v1 | 2024-04-08T01:55:28Z | 2024-04-08T01:55:28Z | Enhancing Clinical Efficiency through LLM: Discharge Note Generation for
Cardiac Patients | Medical documentation, including discharge notes, is crucial for ensuring patient care quality, continuity, and effective medical communication. However, the manual creation of these documents is not only time-consuming but also prone to inconsistencies and potential errors. The automation of this documentation process using artificial intelligence (AI) represents a promising area of innovation in healthcare. This study directly addresses the inefficiencies and inaccuracies in creating discharge notes manually, particularly for cardiac patients, by employing AI techniques, specifically a large language model (LLM). Utilizing a substantial dataset from a cardiology center, encompassing wide-ranging medical records and physician assessments, our research evaluates the capability of LLMs to enhance the documentation process. Among the various models assessed, Mistral-7B distinguished itself by accurately generating discharge notes that significantly improve both documentation efficiency and the continuity of care for patients. These notes underwent rigorous qualitative evaluation by medical experts, receiving high marks for their clinical relevance, completeness, readability, and contribution to informed decision-making and care planning. Coupled with quantitative analyses, these results confirm Mistral-7B's efficacy in distilling complex medical information into concise, coherent summaries. Overall, our findings illuminate the considerable promise of specialized LLMs, such as Mistral-7B, in refining healthcare documentation workflows and advancing patient care. This study lays the groundwork for further integrating advanced AI technologies in healthcare, demonstrating their potential to revolutionize patient documentation and support better care outcomes. | [
"['HyoJe Jung' 'Yunha Kim' 'Heejung Choi' 'Hyeram Seo' 'Minkyoung Kim'\n 'JiYe Han' 'Gaeun Kee' 'Seohyun Park' 'Soyoung Ko' 'Byeolhee Kim'\n 'Suyeon Kim' 'Tae Joon Jun' 'Young-Hak Kim']"
]
|
null | null | 2404.05155 | null | null | http://arxiv.org/pdf/2404.05155v1 | 2024-04-08T02:41:32Z | 2024-04-08T02:41:32Z | On the price of exact truthfulness in incentive-compatible online
learning with bandit feedback: A regret lower bound for WSU-UX | In one view of the classical game of prediction with expert advice with binary outcomes, in each round, each expert maintains an adversarially chosen belief and honestly reports this belief. We consider a recently introduced, strategic variant of this problem with selfish (reputation-seeking) experts, where each expert strategically reports in order to maximize their expected future reputation based on their belief. In this work, our goal is to design an algorithm for the selfish experts problem that is incentive-compatible (IC, or \emph{truthful}), meaning each expert's best strategy is to report truthfully, while also ensuring the algorithm enjoys sublinear regret with respect to the expert with the best belief. Freeman et al. (2020) recently studied this problem in the full information and bandit settings and obtained truthful, no-regret algorithms by leveraging prior work on wagering mechanisms. While their results under full information match the minimax rate for the classical ("honest experts") problem, the best-known regret for their bandit algorithm WSU-UX is $O(T^{2/3})$, which does not match the minimax rate for the classical ("honest bandits") setting. It was unclear whether the higher regret was an artifact of their analysis or a limitation of WSU-UX. We show, via explicit construction of loss sequences, that the algorithm suffers a worst-case $\Omega(T^{2/3})$ lower bound. Left open is the possibility that a different IC algorithm obtains $O(\sqrt{T})$ regret. Yet, WSU-UX was a natural choice for such an algorithm owing to the limited design room for IC algorithms in this setting. | [
"['Ali Mortazavi' 'Junhao Lin' 'Nishant A. Mehta']"
]
|
null | null | 2404.05159 | null | null | http://arxiv.org/pdf/2404.05159v1 | 2024-04-08T02:55:01Z | 2024-04-08T02:55:01Z | Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods | In various real-world applications such as machine translation, sentiment analysis, and question answering, a pivotal role is played by NLP models, facilitating efficient communication and decision-making processes in domains ranging from healthcare to finance. However, a significant challenge is posed to the robustness of these natural language processing models by text adversarial attacks. These attacks involve the deliberate manipulation of input text to mislead the predictions of the model while maintaining human interpretability. Despite the remarkable performance achieved by state-of-the-art models like BERT in various natural language processing tasks, they are found to remain vulnerable to adversarial perturbations in the input text. In addressing the vulnerability of text classifiers to adversarial attacks, three distinct attack mechanisms are explored in this paper using the victim model BERT: BERT-on-BERT attack, PWWS attack, and Fraud Bargain's Attack (FBA). Leveraging the IMDB, AG News, and SST2 datasets, a thorough comparative analysis is conducted to assess the effectiveness of these attacks on the BERT classifier model. It is revealed by the analysis that PWWS emerges as the most potent adversary, consistently outperforming other methods across multiple evaluation scenarios, thereby emphasizing its efficacy in generating adversarial examples for text classification. Through comprehensive experimentation, the performance of these attacks is assessed and the findings indicate that the PWWS attack outperforms others, demonstrating lower runtime, higher accuracy, and favorable semantic similarity scores. The key insight of this paper lies in the assessment of the relative performances of three prevalent state-of-the-art attack mechanisms. | [
"['Roopkatha Dey' 'Aivy Debnath' 'Sayak Kumar Dutta' 'Kaustav Ghosh'\n 'Arijit Mitra' 'Arghya Roy Chowdhury' 'Jaydip Sen']"
]
|
null | null | 2404.05168 | null | null | http://arxiv.org/pdf/2404.05168v1 | 2024-04-08T03:29:58Z | 2024-04-08T03:29:58Z | Adapting to Covariate Shift in Real-time by Encoding Trees with Motion
Equations | Input distribution shift presents a significant problem in many real-world systems. Here we present Xenovert, an adaptive algorithm that dynamically adjusts to changes in the input distribution. It is a perfect binary tree that adaptively divides a continuous input space into several intervals of uniform density while receiving a continuous stream of input. This process indirectly maps the source distribution to the shifted target distribution, preserving the data's relationship with the downstream decoder/operation, even after the shift occurs. In this paper, we demonstrate how a neural network integrated with Xenovert achieves better results on 4 out of 5 shifted datasets, avoiding the hurdle of retraining a machine learning model. We anticipate that Xenovert can be applied to many more applications that require adaptation to unforeseen input distribution shifts, even when the distribution shift is drastic. | [
"['Tham Yik Foong' 'Heng Zhang' 'Mao Po Yuan' 'Danilo Vasconcellos Vargas']"
]
|
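A minimal sketch of the uniform-density interval idea behind Xenovert follows; the class name, sliding-window refit, and quantile computation are illustrative assumptions, not the authors' actual online tree-update rule.

```python
import numpy as np

class QuantileTree:
    """Sketch of a Xenovert-style encoder: split a continuous input space
    into 2**depth intervals of roughly uniform density, so an input is
    represented by its interval index. After a covariate shift, the edges
    re-adapt and the index keeps its meaning for the downstream decoder.
    (Assumption: the real algorithm updates a perfect binary tree online
    rather than refitting quantiles over a sliding window.)"""

    def __init__(self, depth=3, window=1000):
        self.n_bins = 2 ** depth
        self.window = window
        self.buffer = []
        self.edges = None

    def update(self, x):
        self.buffer.append(float(x))
        if len(self.buffer) > self.window:
            self.buffer.pop(0)
        qs = np.linspace(0, 1, self.n_bins + 1)[1:-1]   # interior quantiles
        self.edges = np.quantile(self.buffer, qs)

    def encode(self, x):
        return int(np.searchsorted(self.edges, x))      # interval index

tree = QuantileTree(depth=3)
for v in np.random.normal(0.0, 1.0, 2000):   # source distribution
    tree.update(v)
print(tree.encode(0.0))                      # mid-range code
for v in np.random.normal(5.0, 2.0, 2000):   # drastic covariate shift
    tree.update(v)
print(tree.encode(5.0))                      # similar code after adaptation
```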
null | null | 2404.05182 | null | null | http://arxiv.org/pdf/2404.05182v1 | 2024-04-08T04:14:02Z | 2024-04-08T04:14:02Z | DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large
Language Model | To enhance the performance of large language models (LLMs) on downstream tasks, one solution is to fine-tune certain LLM parameters so that they better align with the characteristics of the training dataset. This process is commonly known as parameter-efficient fine-tuning (PEFT). Due to the scale of LLMs, PEFT operations are usually executed in a public environment (e.g., a cloud server). This necessitates the sharing of sensitive user data across public environments, thereby raising potential privacy concerns. To tackle these challenges, we propose a distributed PEFT framework called DLoRA. DLoRA enables scalable PEFT operations to be performed collaboratively between the cloud and user devices. Coupled with the proposed Kill and Revive algorithm, the evaluation results demonstrate that DLoRA can significantly reduce the computation and communication workload over the user devices while achieving superior accuracy and privacy protection. | [
"['Chao Gao' 'Sai Qian Zhang']"
]
|
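The abstract does not spell out DLoRA's split of work, so the following is only a generic LoRA forward pass arranged in that spirit: the frozen base weight plays the role of the cloud-side computation, and the small low-rank factors are the only parameters the user device would train. The Kill and Revive algorithm is not modeled.

```python
import torch

d_model, rank = 64, 4
x = torch.randn(2, d_model)                  # private user activations

# "Cloud" side: frozen pretrained weight, never updated.
W = torch.randn(d_model, d_model)

# "Device" side: trainable low-rank adapters (Hu et al. convention:
# A Gaussian-initialized, B zero-initialized so the correction starts at 0).
A = (0.01 * torch.randn(rank, d_model)).requires_grad_(True)
B = torch.zeros(d_model, rank, requires_grad=True)

def forward(x):
    # Base path plus low-rank correction: x W^T + (x A^T) B^T
    return x @ W.T + (x @ A.T) @ B.T

loss = forward(x).pow(2).mean()
loss.backward()
print(A.grad.shape, B.grad.shape, W.grad)    # adapters train; W.grad is None
```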
null | null | 2404.05183 | null | null | http://arxiv.org/pdf/2404.05183v1 | 2024-04-08T04:17:27Z | 2024-04-08T04:17:27Z | Progressive Alignment with VLM-LLM Feature to Augment Defect
Classification for the ASE Dataset | Traditional defect classification approaches face two barriers. (1) Insufficient training data and unstable data quality. Collecting sufficient defective samples is expensive and time-consuming, consequently leading to dataset variance and making recognition and learning difficult. (2) Over-dependence on the visual modality. When the image pattern and texture are monotonic across all defect classes in a given dataset, the performance of a conventional AOI system cannot be guaranteed. In scenarios where image quality is compromised due to mechanical failures, or when defect information is inherently difficult to discern, the performance of deep models cannot be guaranteed either. The main question is: how can these two problems be solved when they occur at the same time? A feasible strategy is to explore other features within the dataset and to combine an eminent vision-language model (VLM) and large language model (LLM) with their astonishing zero-shot capability. In this work, we first propose the special ASE dataset, which includes rich data descriptions recorded with each image, for defect classification, although the defect features are not easy to learn directly. Second, we present prompting for the VLM-LLM against defect classification with the proposed ASE dataset to activate extra-modality features from images and enhance performance. Then, we design a novel progressive feature alignment (PFA) block to refine image-text features and alleviate the difficulty of alignment under the few-shot scenario. Finally, the proposed cross-modality attention fusion (CMAF) module effectively fuses features from different modalities. Experimental results demonstrate our method's effectiveness over several defect classification methods on the ASE dataset. | [
"['Chih-Chung Hsu' 'Chia-Ming Lee' 'Chun-Hung Sun' 'Kuang-Ming Wu']"
]
|
null | null | 2404.05184 | null | null | http://arxiv.org/abs/2404.05184v7 | 2024-06-05T04:37:03Z | 2024-04-08T04:18:54Z | Predicting the Geothermal Gradient in Colombia: a Machine Learning
Approach | Accurate determination of the geothermal gradient is critical for assessing the geothermal energy potential of a given region. Of particular interest is the case of Colombia, a country with abundant geothermal resources. A history of active oil and gas exploration and production has left drilled boreholes in different geological settings, providing direct measurements of the geothermal gradient. Unfortunately, large regions of the country where geothermal resources might exist lack such measurements. Indirect geophysical measurements are costly and difficult to perform at regional scales. Computational thermal models could be constructed, but they require very detailed knowledge of the underlying geology and uniform sampling of subsurface temperatures to be well-constrained. We present an alternative approach that leverages recent advances in supervised machine learning and available direct measurements to predict the geothermal gradient in regions where only global-scale geophysical datasets and coarse geological knowledge are available. We find that a Gradient Boosted Regression Tree algorithm yields optimal predictions and extensively validate the trained model. We show that predictions of our model are within 12% accuracy and that independent measurements performed by other authors agree well with our model. Finally, we present a geothermal gradient map for Colombia that highlights regions where further exploration and data collection should be performed. | [
"['Juan Camilo Mejía-Fragoso' 'Manuel A. Florez' 'Rocío Bernal-Olaya']"
]
|
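As a toy illustration of the modeling choice (a Gradient Boosted Regression Tree, per the abstract), the sketch below fits scikit-learn's implementation on synthetic stand-ins for the geophysical covariates; the paper's actual feature set, data, and accuracy figures are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical proxies for regional covariates (e.g., heat-flow and
# crustal-structure features); purely synthetic.
X = rng.normal(size=(500, 6))
y = 25 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2, size=500)  # degC/km

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / np.abs(y_te)
print(f"median relative error: {np.median(rel_err):.1%}")
```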
null | null | 2404.05185 | null | null | http://arxiv.org/pdf/2404.05185v1 | 2024-04-08T04:22:55Z | 2024-04-08T04:22:55Z | Convergence analysis of controlled particle systems arising in deep
learning: from finite to infinite sample size | This paper deals with a class of neural SDEs and studies the limiting behavior of the associated sampled optimal control problems as the sample size grows to infinity. The neural SDEs with N samples can be linked to the N-particle systems with centralized control. We analyze the Hamilton--Jacobi--Bellman equation corresponding to the N-particle system and establish regularity results which are uniform in N. The uniform regularity estimates are obtained by the stochastic maximum principle and the analysis of a backward stochastic Riccati equation. Using these uniform regularity results, we show the convergence of the minima of objective functionals and optimal parameters of the neural SDEs as the sample size N tends to infinity. The limiting objects can be identified with suitable functions defined on the Wasserstein space of Borel probability measures. Furthermore, quantitative algebraic convergence rates are also obtained. | [
"['Huafu Liao' 'Alpár R. Mészáros' 'Chenchen Mou' 'Chao Zhou']"
]
|
null | null | 2404.05192 | null | null | http://arxiv.org/pdf/2404.05192v1 | 2024-04-08T04:41:39Z | 2024-04-08T04:41:39Z | ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time
Series Forecasting | The intricate nature of time series data analysis benefits greatly from the distinct advantages offered by time and frequency domain representations. While the time domain is superior in representing local dependencies, particularly in non-periodic series, the frequency domain excels in capturing global dependencies, making it ideal for series with evident periodic patterns. To capitalize on both of these strengths, we propose ATFNet, an innovative framework that combines a time domain module and a frequency domain module to concurrently capture local and global dependencies in time series data. Specifically, we introduce Dominant Harmonic Series Energy Weighting, a novel mechanism for dynamically adjusting the weights between the two modules based on the periodicity of the input time series. In the frequency domain module, we enhance the traditional Discrete Fourier Transform (DFT) with our Extended DFT, designed to address the challenge of discrete frequency misalignment. Additionally, our Complex-valued Spectrum Attention mechanism offers a novel approach to discern the intricate relationships between different frequency combinations. Extensive experiments across multiple real-world datasets demonstrate that our ATFNet framework outperforms current state-of-the-art methods in long-term time series forecasting. | [
"['Hengyu Ye' 'Jiadong Chen' 'Shijin Gong' 'Fuxin Jiang' 'Tieying Zhang'\n 'Jianjun Chen' 'Xiaofeng Gao']"
]
|
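The exact form of Dominant Harmonic Series Energy Weighting is not given in the abstract, so the sketch below uses a plausible proxy: the share of spectral energy held by the top-k harmonics decides how much weight the frequency branch receives relative to the time branch.

```python
import numpy as np

np.random.seed(0)

def periodicity_weight(x: np.ndarray, k: int = 3) -> float:
    """Toy stand-in for ATFNet's energy weighting (assumption: the paper's
    mechanism is defined differently): fraction of spectral energy held by
    the k largest non-DC frequency bins."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    spec = spec[1:]                         # drop the DC component
    return float(np.sort(spec)[-k:].sum() / spec.sum())

t = np.arange(256)
periodic = np.sin(2 * np.pi * t / 16) + 0.1 * np.random.randn(256)
aperiodic = np.random.randn(256)            # white noise, no dominant harmonics

w_p, w_a = periodicity_weight(periodic), periodicity_weight(aperiodic)
print(f"periodic: {w_p:.2f}, aperiodic: {w_a:.2f}")   # high vs. low weight

# Ensemble, schematically: out = w * freq_branch(x) + (1 - w) * time_branch(x)
```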
null | null | 2404.05210 | null | null | http://arxiv.org/pdf/2404.05210v1 | 2024-04-08T05:45:03Z | 2024-04-08T05:45:03Z | Bidirectional Long-Range Parser for Sequential Data Understanding | The transformer is a powerful data modelling framework responsible for remarkable performance on a wide range of tasks. However, transformers are limited in terms of scalability, as processing long-sequence data with them is suboptimal and inefficient. To this end, we introduce BLRP (Bidirectional Long-Range Parser), a novel and versatile attention mechanism designed to increase performance and efficiency on long-sequence tasks. It leverages short- and long-range heuristics in the form of a local sliding-window approach combined with a global bidirectional latent-space synthesis technique. We show the benefits and versatility of our approach on vision and language domains by demonstrating competitive results against state-of-the-art methods on the Long-Range-Arena and CIFAR benchmarks together with ablations demonstrating the computational efficiency. | [
"['George Leotescu' 'Daniel Voinea' 'Alin-Ionut Popa']"
]
|
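To make the short-range half of such a design concrete, the snippet below builds the boolean mask of a local sliding-window attention pattern; BLRP's global bidirectional latent-space path is not reproduced here, and the function is a generic illustration rather than the paper's code.

```python
import torch

def local_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """True where attention is allowed: each position may attend only to
    neighbours within +/- window positions."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window

print(local_window_mask(8, 2).int())   # banded pattern around the diagonal
```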
null | null | 2404.05219 | null | null | http://arxiv.org/pdf/2404.05219v1 | 2024-04-08T06:27:38Z | 2024-04-08T06:27:38Z | Out-of-Distribution Data: An Acquaintance of Adversarial Examples -- A
Survey | Deep neural networks (DNNs) deployed in real-world applications can encounter out-of-distribution (OOD) data and adversarial examples. These represent distinct forms of distributional shifts that can significantly impact DNNs' reliability and robustness. Traditionally, research has addressed OOD detection and adversarial robustness as separate challenges. This survey focuses on the intersection of these two areas, examining how the research community has investigated them together. Consequently, we identify two key research directions: robust OOD detection and unified robustness. Robust OOD detection aims to differentiate between in-distribution (ID) data and OOD data, even when they are adversarially manipulated to deceive the OOD detector. Unified robustness seeks a single approach to make DNNs robust against both adversarial attacks and OOD inputs. Accordingly, first, we establish a taxonomy based on the concept of distributional shifts. This framework clarifies how robust OOD detection and unified robustness relate to other research areas addressing distributional shifts, such as OOD detection, open set recognition, and anomaly detection. Subsequently, we review existing work on robust OOD detection and unified robustness. Finally, we highlight the limitations of the existing work and propose promising research directions that explore adversarial and OOD inputs within a unified framework. | [
"['Naveen Karunanayake' 'Ravin Gunawardena' 'Suranga Seneviratne'\n 'Sanjay Chawla']"
]
|
null | null | 2404.05229 | null | null | http://arxiv.org/pdf/2404.05229v1 | 2024-04-08T06:49:59Z | 2024-04-08T06:49:59Z | Empirical Upscaling of Point-scale Soil Moisture Measurements for
Spatial Evaluation of Model Simulations and Satellite Retrievals | The evaluation of modelled or satellite-derived soil moisture (SM) estimates is usually dependent on comparisons against in-situ SM measurements. However, the inherent mismatch in spatial support (i.e., scale) necessitates a cautious interpretation of point-to-pixel comparisons. Upscaling the in-situ measurements to a resolution commensurate with that of the modelled or retrieved SM leads to a fairer comparison and a statistically more defensible evaluation. In this study, we present an upscaling approach that combines spatiotemporal fusion with machine learning to extrapolate point-scale SM measurements from 28 in-situ sites to a 100 m resolution for an agricultural area of 100 km by 100 km. We conducted a four-fold cross-validation, which consistently demonstrated comparable correlation performance across folds, ranging from 0.6 to 0.9. The proposed approach was further validated based on a cross-cluster strategy by using two spatial subsets within the study area, denoted clusters A and B, each comprising 12 in-situ sites. The cross-cluster validation underscored the capability of the upscaling approach to map the spatial variability of SM within areas that were not covered by in-situ sites, with correlation performance ranging between 0.6 and 0.8. In general, our proposed upscaling approach offers an avenue to extrapolate point measurements of SM to a spatial scale more akin to climatic model grids or remotely sensed observations. Future investigations should delve into a further evaluation of the upscaling approach using independent data, such as model simulations, satellite retrievals or field campaign data. | [
"['Yi Yu' 'Brendan P. Malone' 'Luigi J. Renzullo']"
]
|
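The cross-cluster strategy amounts to group-wise validation: hold out all sites of one spatial cluster, train on the other, and check correlation. A hedged sketch with synthetic data follows (the paper's covariates and soil-moisture records are not reproduced).

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(1)

# 24 hypothetical sites in two spatial clusters, 50 days each.
n_sites, n_days = 24, 50
X = rng.normal(size=(n_sites * n_days, 4))          # stand-in covariates
y = 0.3 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + 0.02 * rng.normal(size=len(X))
groups = np.repeat(np.repeat([0, 1], n_sites // 2), n_days)  # cluster labels

# Cross-cluster validation: train on one cluster, test on the other, swap.
for tr, te in GroupKFold(n_splits=2).split(X, y, groups):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[tr], y[tr])
    r, _ = pearsonr(model.predict(X[te]), y[te])
    print(f"held-out cluster correlation: {r:.2f}")
```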
null | null | 2404.05241 | null | null | http://arxiv.org/pdf/2404.05241v4 | 2024-05-14T07:30:50Z | 2024-04-08T07:11:33Z | Lightweight Inference for Forward-Forward Algorithm | The human brain performs tasks with an outstanding energy-efficiency, i.e., with approximately 20 Watts. The state-of-the-art Artificial/Deep Neural Networks (ANN/DNN), on the other hand, have recently been shown to consume massive amounts of energy. The training of these ANNs/DNNs is done almost exclusively based on the back-propagation algorithm, which is known to be biologically implausible. This has led to a new generation of forward-only techniques, including the Forward-Forward algorithm. In this paper, we propose a lightweight inference scheme specifically designed for DNNs trained using the Forward-Forward algorithm. We have evaluated our proposed lightweight inference scheme in the case of the MNIST and CIFAR datasets, as well as two real-world applications, namely, epileptic seizure detection and cardiac arrhythmia classification using wearable technologies, where complexity overheads/energy consumption is a major constraint, and demonstrate its relevance. | [
"['Amin Aminifar' 'Baichuan Huang' 'Azra Abtahi' 'Amir Aminifar']"
]
|
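For context, inference in a Forward-Forward network is usually done by scoring each candidate label with the "goodness" (sum of squared activations) it induces; the paper's lightweight scheme presumably streamlines this further, which is not modeled here. A sketch with untrained layers (so the outputs are arbitrary):

```python
import torch

def ff_predict(x, layers, n_classes=10):
    """Standard Forward-Forward inference: overlay each candidate label
    onto the input, run the forward pass, and pick the label with the
    highest accumulated goodness across layers."""
    scores = []
    for label in range(n_classes):
        h = x.clone()
        h[:, :n_classes] = 0.0
        h[:, label] = 1.0                 # embed the candidate label
        g = torch.zeros(x.shape[0])
        for layer in layers:
            h = torch.relu(layer(h))
            g = g + h.pow(2).sum(dim=1)   # goodness, before normalization
            h = h / (h.norm(dim=1, keepdim=True) + 1e-8)
        scores.append(g)
    return torch.stack(scores, dim=1).argmax(dim=1)

layers = [torch.nn.Linear(784, 256), torch.nn.Linear(256, 256)]
x = torch.randn(5, 784)
print(ff_predict(x, layers))              # arbitrary: layers are untrained
```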
null | null | 2404.05249 | null | null | http://arxiv.org/pdf/2404.05249v1 | 2024-04-08T07:25:25Z | 2024-04-08T07:25:25Z | SAFE-GIL: SAFEty Guided Imitation Learning | Behavior Cloning is a popular approach to Imitation Learning, in which a robot observes an expert supervisor and learns a control policy. However, behavior cloning suffers from the "compounding error" problem: the policy errors compound as it deviates from the expert demonstrations and might lead to catastrophic system failures, limiting its use in safety-critical applications. On-policy data aggregation methods are able to address this issue at the cost of rolling out and repeatedly training the imitation policy, which can be tedious and computationally prohibitive. We propose SAFE-GIL, an off-policy behavior cloning method that guides the expert via adversarial disturbance during data collection. The algorithm abstracts the imitation error as an adversarial disturbance in the system dynamics, injects it during data collection to expose the expert to safety-critical states, and collects corrective actions. Our method biases training to more closely replicate expert behavior in safety-critical states and allows more variance in less critical states. We compare our method with several behavior cloning techniques and DAgger on autonomous navigation and autonomous taxiing tasks and show higher task success and safety, especially in low data regimes where the likelihood of error is higher, with only a slight drop in performance. | [
"['Yusuf Umut Ciftci' 'Zeyuan Feng' 'Somil Bansal']"
]
|
null | null | 2404.05270 | null | null | http://arxiv.org/pdf/2404.05270v1 | 2024-04-08T08:00:05Z | 2024-04-08T08:00:05Z | Exploiting Preference Elicitation in Interactive and User-centered
Algorithmic Recourse: An Initial Exploration | Algorithmic Recourse aims to provide actionable explanations, or recourse plans, to overturn potentially unfavourable decisions taken by automated machine learning models. In this paper, we propose an interaction paradigm based on a guided interaction pattern aimed at both eliciting the users' preferences and heading them toward effective recourse interventions. In a fictional money-lending task, we compare this approach with an exploratory interaction pattern based on a combination of alternative plans and the possibility for users to freely change the configurations themselves. Our results suggest that users may recognize that the guided interaction paradigm improves efficiency. However, they also feel less freedom to experiment with "what-if" scenarios. Nevertheless, the time spent on the purely exploratory interface tends to be perceived as a lack of efficiency, which reduces attractiveness, perspicuity, and dependability. Conversely, for the guided interface, more time on the interface seems to increase its attractiveness, perspicuity, and dependability while not impacting the perceived efficiency. This might suggest that such interfaces should combine the two approaches, supporting exploratory behavior while gently pushing toward a guided, effective solution. | [
"['Seyedehdelaram Esfahani' 'Giovanni De Toni' 'Bruno Lepri'\n 'Andrea Passerini' 'Katya Tentori' 'Massimo Zancanaro']"
]
|
null | null | 2404.05298 | null | null | http://arxiv.org/pdf/2404.05298v1 | 2024-04-08T08:35:50Z | 2024-04-08T08:35:50Z | In-Flight Estimation of Instrument Spectral Response Functions Using
Sparse Representations | Accurate estimates of Instrument Spectral Response Functions (ISRFs) are crucial in order to have a good characterization of high resolution spectrometers. Spectrometers are composed of different optical elements that can induce errors in the measurements and therefore need to be modeled as accurately as possible. Parametric models are currently used to estimate these response functions. However, these models cannot always take into account the diversity of ISRF shapes that are encountered in practical applications. This paper studies a new ISRF estimation method based on a sparse representation of atoms belonging to a dictionary. This method is applied to different high-resolution spectrometers in order to assess its reproducibility for multiple remote sensing missions. The proposed method is shown to be very competitive when compared to the more commonly used parametric models, and yields normalized ISRF estimation errors less than 1%. | [
"['Jihanne El Haouari' 'Jean-Michel Gaucel' 'Christelle Pittet'\n 'Jean-Yves Tourneret' 'Herwig Wendt']"
]
|
null | null | 2404.05304 | null | null | http://arxiv.org/pdf/2404.05304v1 | 2024-04-08T08:47:46Z | 2024-04-08T08:47:46Z | Liquid Neural Network-based Adaptive Learning vs. Incremental Learning
for Link Load Prediction amid Concept Drift due to Network Failures | Adapting to concept drift is a challenging task in machine learning, which is usually tackled using incremental learning techniques that periodically re-fit a learning model leveraging newly available data. A primary limitation of these techniques is their reliance on substantial amounts of data for retraining. The necessity of acquiring fresh data introduces temporal delays prior to retraining, potentially rendering the models inaccurate if a sudden concept drift occurs in-between two consecutive retrainings. In communication networks, such an issue emerges when performing traffic forecasting following a failure event: post-failure re-routing may induce a drastic shift in the distribution and pattern of traffic data, thus requiring a timely model adaptation. In this work, we address this challenge for the problem of traffic forecasting and propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining. Through extensive simulations of failure scenarios, we compare the predictive performance of our proposed approach to that of a reference method based on incremental learning. Experimental results show that our proposed approach outperforms incremental learning-based methods in situations where the shifts in traffic patterns are drastic. | [
"['Omran Ayoub' 'Davide Andreoletti' 'Aleksandra Knapińska' 'Róża Goścień'\n 'Piotr Lechowicz' 'Tiziano Leidi' 'Silvia Giordano' 'Cristina Rottondi'\n 'Krzysztof Walkowiak']"
]
|
null | null | 2404.05311 | null | null | http://arxiv.org/pdf/2404.05311v2 | 2024-06-01T04:59:16Z | 2024-04-08T08:59:26Z | BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial
Attack | We study the unique, less well-understood problem of generating sparse adversarial samples simply by observing the score-based replies to model queries. Sparse attacks aim to discover a minimum number of perturbations (bounded in the $l_0$ norm) to model inputs in order to craft adversarial examples and misguide model decisions. But, in contrast to query-based dense attack counterparts against black-box models, constructing sparse adversarial perturbations, even when models serve confidence score information to queries in a score-based setting, is non-trivial, because such an attack leads to: i) an NP-hard problem; and ii) a non-differentiable search space. We develop BruSLeAttack, a new, faster (more query-efficient) Bayesian algorithm for the problem. We conduct extensive attack evaluations including an attack demonstration against a Machine Learning as a Service (MLaaS) offering exemplified by Google Cloud Vision and robustness testing of adversarial training regimes and a recent defense against black-box attacks. The proposed attack scales to achieve state-of-the-art attack success rates and query efficiency on standard computer vision tasks such as ImageNet across different model architectures. Our artefacts and DIY attack samples are available on GitHub. Importantly, our work facilitates faster evaluation of model vulnerabilities and raises our vigilance on the safety, security and reliability of deployed systems. | [
"['Viet Quoc Vo' 'Ehsan Abbasnejad' 'Damith C. Ranasinghe']"
]
|
null | null | 2404.05316 | null | null | http://arxiv.org/pdf/2404.05316v2 | 2024-04-16T15:14:50Z | 2024-04-08T09:06:16Z | HOEG: A New Approach for Object-Centric Predictive Process Monitoring | Predictive Process Monitoring focuses on predicting future states of ongoing process executions, such as forecasting the remaining time. Recent developments in Object-Centric Process Mining have enriched event data with objects and their explicit relations between events. To leverage this enriched data, we propose the Heterogeneous Object Event Graph encoding (HOEG), which integrates events and objects into a graph structure with diverse node types. It does so without aggregating object features, thus creating a more nuanced and informative representation. We then adopt a heterogeneous Graph Neural Network architecture, which incorporates these diverse object features in prediction tasks. We evaluate the performance and scalability of HOEG in predicting remaining time, benchmarking it against two established graph-based encodings and two baseline models. Our evaluation uses three Object-Centric Event Logs (OCELs), including one from a real-life process at a major Dutch financial institution. The results indicate that HOEG competes well with existing models and surpasses them when OCELs contain informative object attributes and event-object interactions. | [
"['Tim K. Smit' 'Hajo A. Reijers' 'Xixi Lu']"
]
|
null | null | 2404.05318 | null | null | http://arxiv.org/pdf/2404.05318v1 | 2024-04-08T09:08:59Z | 2024-04-08T09:08:59Z | Stochastic Online Optimization for Cyber-Physical and Robotic Systems | We propose a novel gradient-based online optimization framework for solving stochastic programming problems that frequently arise in the context of cyber-physical and robotic systems. Our problem formulation accommodates constraints that model the evolution of a cyber-physical system, which has, in general, a continuous state and action space, is nonlinear, and where the state is only partially observed. We also incorporate an approximate model of the dynamics as prior knowledge into the learning process and show that even rough estimates of the dynamics can significantly improve the convergence of our algorithms. Our online optimization framework encompasses both gradient descent and quasi-Newton methods, and we provide a unified convergence analysis of our algorithms in a non-convex setting. We also characterize the impact of modeling errors in the system dynamics on the convergence rate of the algorithms. Finally, we evaluate our algorithms in simulations of a flexible beam, a four-legged walking robot, and in real-world experiments with a ping-pong playing robot. | [
"['Hao Ma' 'Melanie Zeilinger' 'Michael Muehlebach']"
]
|
null | null | 2404.05324 | null | null | http://arxiv.org/pdf/2404.05324v1 | 2024-04-08T09:13:16Z | 2024-04-08T09:13:16Z | Back to the Future: GNN-based NO$_2$ Forecasting via Future Covariates | Due to growing environmental concerns about keeping contaminant emissions in urban areas at bay, air pollution forecasting has risen to the forefront of research worldwide. When predicting pollutant concentrations, it is common to include the effects of environmental factors that influence these concentrations within an extended period, like traffic, meteorological conditions and geographical information. Most of the existing approaches exploit this information as past covariates, i.e., past exogenous variables that affected the pollutant but were not affected by it. In this paper, we present a novel forecasting methodology to predict NO$_2$ concentration via both past and future covariates. Future covariates are represented by weather forecasts and future calendar events, which are already known at prediction time. In particular, we deal with air quality observations in a city-wide network of ground monitoring stations, modeling the data structure and estimating the predictions with a Spatiotemporal Graph Neural Network (STGNN). We propose a conditioning block that embeds past and future covariates into the current observations. After extracting meaningful spatiotemporal representations, these are fused together and projected into the forecasting horizon to generate the final prediction. To the best of our knowledge, it is the first time that future covariates are included in time series predictions in a structured way. Remarkably, we find that conditioning on future weather information has a greater impact than considering past traffic conditions. We release our code implementation at https://github.com/polimi-ispl/MAGCRN. | [
"['Antonio Giganti' 'Sara Mandelli' 'Paolo Bestagini' 'Umberto Giuriato'\n \"Alessandro D'Ausilio\" 'Marco Marcon' 'Stefano Tubaro']"
]
|
null | null | 2404.05348 | null | null | http://arxiv.org/pdf/2404.05348v1 | 2024-04-08T09:33:40Z | 2024-04-08T09:33:40Z | Iterative Refinement Strategy for Automated Data Labeling: Facial
Landmark Diagnosis in Medical Imaging | Automated data labeling techniques are crucial for accelerating the development of deep learning models, particularly in complex medical imaging applications. However, ensuring accuracy and efficiency remains challenging. This paper presents iterative refinement strategies for automated data labeling in facial landmark diagnosis to enhance accuracy and efficiency for deep learning models in medical applications, including dermatology, plastic surgery, and ophthalmology. Leveraging feedback mechanisms and advanced algorithms, our approach iteratively refines initial labels, reducing reliance on manual intervention while improving label quality. Through empirical evaluation and case studies, we demonstrate the effectiveness of our proposed strategies in deep learning tasks across medical imaging domains. Our results highlight the importance of iterative refinement in automated data labeling to enhance the capabilities of deep learning systems in medical imaging applications. | [
"['Yu-Hsi Chen']"
]
|
null | null | 2404.05350 | null | null | http://arxiv.org/pdf/2404.05350v1 | 2024-04-08T09:38:22Z | 2024-04-08T09:38:22Z | Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized
Smoothing | Randomized smoothing is the primary certified robustness method for assessing the robustness of deep learning models to adversarial perturbations in the l2-norm, by adding isotropic Gaussian noise to the input image and returning the majority vote over the base classifier. Theoretically, it provides a certified norm bound, ensuring predictions of adversarial examples are stable within this bound. A notable constraint limiting widespread adoption is the necessity to retrain base models entirely from scratch to attain a robust version. This is because the base model fails to learn the noise-augmented data distribution to give an accurate vote. One intuitive way to overcome this challenge is to involve a custom-trained denoiser to eliminate the noise. However, this approach is inefficient and sub-optimal. Inspired by recent large model training procedures, we explore an alternative way named PEFTSmoothing to adapt the base model to learn the Gaussian noise-augmented data with Parameter-Efficient Fine-Tuning (PEFT) methods in both white-box and black-box settings. Extensive results demonstrate the effectiveness and efficiency of PEFTSmoothing, which allows us to certify over 98% accuracy for ViT on CIFAR-10, 20% higher than SoTA denoised smoothing, and over 61% accuracy on ImageNet, which is 30% higher than the CNN-based denoiser and comparable to the diffusion-based denoiser. | [
"['Chengyan Fu' 'Wenjie Wang']"
]
|
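For readers unfamiliar with the underlying certificate, a minimal randomized-smoothing prediction loop (Cohen et al. style) is sketched below; it assumes the base classifier f has already been adapted to noisy inputs (which is what PEFTSmoothing does), and it uses a crude Hoeffding-style confidence bound where a real certificate would use Clopper-Pearson.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict(f, x, sigma=0.25, n=1000, alpha=0.001):
    """Majority vote over Gaussian perturbations, plus a certified
    l2 radius sigma * Phi^{-1}(p_lower) when the vote is confident."""
    votes = np.bincount([f(x + sigma * np.random.randn(*x.shape))
                         for _ in range(n)], minlength=2)
    top = int(votes.argmax())
    # Crude lower confidence bound on the top-class probability.
    p_lower = votes[top] / n - np.sqrt(np.log(1 / alpha) / (2 * n))
    radius = sigma * norm.ppf(p_lower) if p_lower > 0.5 else 0.0
    return top, radius

f = lambda z: int(z[0] > 0.0)              # toy base classifier
print(smoothed_predict(f, np.array([0.8, -0.2])))
```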
null | null | 2404.05359 | null | null | http://arxiv.org/pdf/2404.05359v1 | 2024-04-08T09:52:19Z | 2024-04-08T09:52:19Z | Improving Algorithm-Selection and Performance-Prediction via Learning
Discriminating Training Samples | The choice of input data used to train algorithm-selection models is recognised as a critical part of model success. Recently, feature-free methods for algorithm-selection that use short trajectories obtained from running a solver as input have shown promise. However, it is unclear to what extent these trajectories reliably discriminate between solvers. We propose a meta approach to generating discriminatory trajectories with respect to a portfolio of solvers. The algorithm-configuration tool irace is used to tune the parameters of a simple Simulated Annealing algorithm (SA) to produce trajectories that maximise the performance metrics of ML models trained on this data. We show that when the trajectories obtained from the tuned SA algorithm are used in ML models for algorithm-selection and performance prediction, we obtain significantly improved performance metrics compared to models trained both on raw trajectory data and on exploratory landscape features. | [
"['Quentin Renau' 'Emma Hart']"
]
|
null | null | 2404.05363 | null | null | http://arxiv.org/pdf/2404.05363v1 | 2024-04-08T09:57:02Z | 2024-04-08T09:57:02Z | A parameter-free clustering algorithm for missing datasets | Missing datasets, in which some objects have missing values in certain dimensions, are prevalent in the real world. Existing clustering algorithms for missing datasets first impute the missing values and then perform clustering. However, both the imputation and clustering processes require input parameters. Too many input parameters inevitably increase the difficulty of obtaining accurate clustering results. Although some studies have shown that decision graphs can replace the input parameters of clustering algorithms, current decision graphs require equivalent dimensions among objects and are therefore not suitable for missing datasets. To this end, we propose a Single-Dimensional Clustering algorithm, i.e., SDC. SDC, which removes the imputation process and adapts the decision graph to missing datasets by dimension splitting and partition intersection fusion, can obtain valid clustering results on missing datasets without input parameters. Experiments demonstrate that, across three evaluation metrics, SDC outperforms baseline algorithms by at least 13.7% (NMI), 23.8% (ARI), and 8.1% (Purity). | [
"['Qi Li' 'Xianjun Zeng' 'Shuliang Wang' 'Wenhao Zhu' 'Shijie Ruan'\n 'Zhimeng Yuan']"
]
|
null | null | 2404.05368 | null | null | http://arxiv.org/pdf/2404.05368v1 | 2024-04-08T10:10:30Z | 2024-04-08T10:10:30Z | Exploring Quantization and Mapping Synergy in Hardware-Aware Deep Neural
Network Accelerators | Energy efficiency and memory footprint of a convolutional neural network (CNN) implemented on a CNN inference accelerator depend on many factors, including a weight quantization strategy (i.e., data types and bit-widths) and mapping (i.e., placement and scheduling of DNN elementary operations on hardware units of the accelerator). We show that enabling rich mixed quantization schemes during the implementation can open a previously hidden space of mappings that utilize the hardware resources more effectively. CNNs utilizing quantized weights and activations and suitable mappings can significantly improve trade-offs among the accuracy, energy, and memory requirements compared to less carefully optimized CNN implementations. To find, analyze, and exploit these mappings, we: (i) extend a general-purpose state-of-the-art mapping tool (Timeloop) to support mixed quantization, which is not currently available; (ii) propose an efficient multi-objective optimization algorithm to find the most suitable bit-widths and mapping for each DNN layer executed on the accelerator; and (iii) conduct a detailed experimental evaluation to validate the proposed method. On two CNNs (MobileNetV1 and MobileNetV2) and two accelerators (Eyeriss and Simba) we show that for a given quality metric (such as the accuracy on ImageNet), energy savings are up to 37% without any accuracy drop. | [
"['Jan Klhufek' 'Miroslav Safar' 'Vojtech Mrazek' 'Zdenek Vasicek'\n 'Lukas Sekanina']"
]
|
null | null | 2404.05388 | null | null | http://arxiv.org/abs/2404.05388v3 | 2024-05-15T06:19:04Z | 2024-04-08T10:49:59Z | An AI System Evaluation Framework for Advancing AI Safety: Terminology,
Taxonomy, Lifecycle Mapping | The advent of advanced AI underscores the urgent need for comprehensive safety evaluations, necessitating collaboration across communities (i.e., AI, software engineering, and governance). However, divergent practices and terminologies across these communities, combined with the complexity of AI systems-of which models are only a part-and environmental affordances (e.g., access to tools), obstruct effective communication and comprehensive evaluation. This paper proposes a framework for AI system evaluation comprising three components: 1) harmonised terminology to facilitate communication across communities involved in AI safety evaluation; 2) a taxonomy identifying essential elements for AI system evaluation; 3) a mapping between AI lifecycle, stakeholders, and requisite evaluations for accountable AI supply chain. This framework catalyses a deeper discourse on AI system evaluation beyond model-centric approaches. | [
"['Boming Xia' 'Qinghua Lu' 'Liming Zhu' 'Zhenchang Xing']"
]
|
null | null | 2404.05405 | null | null | http://arxiv.org/pdf/2404.05405v1 | 2024-04-08T11:11:31Z | 2024-04-08T11:11:31Z | Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws | Scaling laws describe the relationship between the size of language models and their capabilities. Unlike prior studies that evaluate a model's capability via loss or benchmarks, we estimate the number of knowledge bits a model stores. We focus on factual knowledge represented as tuples, such as (USA, capital, Washington D.C.) from a Wikipedia page. Through multiple controlled datasets, we establish that language models can and only can store 2 bits of knowledge per parameter, even when quantized to int8, and such knowledge can be flexibly extracted for downstream applications. Consequently, a 7B model can store 14B bits of knowledge, surpassing the English Wikipedia and textbooks combined based on our estimation. More broadly, we present 12 results on how (1) training duration, (2) model architecture, (3) quantization, (4) sparsity constraints such as MoE, and (5) data signal-to-noise ratio affect a model's knowledge storage capacity. Notable insights include: * The GPT-2 architecture, with rotary embedding, matches or even surpasses LLaMA/Mistral architectures in knowledge storage, particularly over shorter training durations. This arises because LLaMA/Mistral uses GatedMLP, which is less stable and harder to train. * Prepending training data with domain names (e.g., wikipedia.org) significantly increases a model's knowledge capacity. Language models can autonomously identify and prioritize domains rich in knowledge, optimizing their storage capacity. | [
"['Zeyuan Allen-Zhu' 'Yuanzhi Li']"
]
|
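The headline capacity claim admits a one-line sanity check; the byte conversion below is an illustration of the abstract's own numbers, not an additional result.

```latex
7\times10^{9}\ \text{params} \times 2\ \tfrac{\text{bits}}{\text{param}}
= 1.4\times10^{10}\ \text{bits}
= \frac{1.4\times10^{10}}{8}\ \text{bytes} \approx 1.75\ \text{GB of factual knowledge.}
```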
null | null | 2404.05424 | null | null | http://arxiv.org/pdf/2404.05424v1 | 2024-04-08T11:47:46Z | 2024-04-08T11:47:46Z | What Are the Odds? Improving the foundations of Statistical Model
Checking | Markov decision processes (MDPs) are a fundamental model for decision making under uncertainty. They exhibit non-deterministic choice as well as probabilistic uncertainty. Traditionally, verification algorithms assume exact knowledge of the probabilities that govern the behaviour of an MDP. As this assumption is often unrealistic in practice, statistical model checking (SMC) was developed in the past two decades. It allows analysing MDPs with unknown transition probabilities and provides probably approximately correct (PAC) guarantees on the result. Model-based SMC algorithms sample the MDP and build a model of it by estimating all transition probabilities, essentially answering for every transition the question: "What are the odds?" However, so far the statistical methods employed by the state-of-the-art SMC algorithms are quite naive. Our contributions are several fundamental improvements to those methods: On the one hand, we survey statistics literature for better concentration inequalities; on the other hand, we propose specialised approaches that exploit our knowledge of the MDP. Our improvements are generally applicable to many kinds of problem statements because they are largely independent of the setting. Moreover, our experimental evaluation shows that they lead to significant gains, reducing the number of samples that the SMC algorithm has to collect by up to two orders of magnitude. | [
"['Tobias Meggendorfer' 'Maximilian Weininger' 'Patrick Wienhöft']"
]
|
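The baseline statistical tool in this setting is a concentration bound; as a concrete instance, the sketch below computes the Hoeffding sample budget for estimating a single transition probability (the paper's improved inequalities and MDP-specific methods would shrink this number, and are not reproduced here).

```python
import math

def hoeffding_samples(eps: float, delta: float) -> int:
    """Samples needed so the empirical mean of a [0, 1]-valued variable
    (e.g., one transition probability) is within eps of the truth with
    probability >= 1 - delta: n >= ln(2 / delta) / (2 * eps**2)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

print(hoeffding_samples(eps=0.01, delta=1e-6))   # 72544 samples per transition
```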