categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2402.10937 | null | null | http://arxiv.org/pdf/2402.10937v1 | 2024-02-07T07:32:03Z | 2024-02-07T07:32:03Z | A Lightweight Inception Boosted U-Net Neural Network for Routability Prediction | As the modern CPU, GPU, and NPU chip design complexity and transistor counts keep increasing, and with the relentless shrinking of semiconductor technology nodes to nearly 1 nanometer, the placement and routing have gradually become the two most pivotal processes in modern very-large-scale-integrated (VLSI) circuit back-end design. How to evaluate routability efficiently and accurately in advance (at the placement and global routing stages) has grown into a crucial research area in the field of artificial intelligence (AI) assisted electronic design automation (EDA). In this paper, we propose a novel U-Net variant model boosted by an Inception embedded module to predict Routing Congestion (RC) and Design Rule Checking (DRC) hotspots. Experimental results on the recently published CircuitNet dataset benchmark show that our proposed method achieves up to 5% (RC) and 20% (DRC) rate reduction in terms of Avg-NRMSE (Average Normalized Root Mean Square Error) compared to the classic architecture. Furthermore, our approach consistently outperforms the prior model on the SSIM (Structural Similarity Index Measure) metric. | ['Hailiang Li', 'Yan Huo', 'Yan Wang', 'Xu Yang', 'Miaohui Hao', 'Xiao Wang'] |
null | null | 2402.10940 | null | null | http://arxiv.org/pdf/2402.10940v1 | 2024-02-07T20:11:56Z | 2024-02-07T20:11:56Z | Neural machine translation of clinical procedure codes for medical diagnosis and uncertainty quantification | A Clinical Decision Support System (CDSS) is designed to enhance clinician decision-making by combining system-generated recommendations with medical expertise. Given the high costs, intensive labor, and time-sensitive nature of medical treatments, there is a pressing need for efficient decision support, especially in complex emergency scenarios. In these scenarios, where information can be limited, an advanced CDSS framework that leverages AI (artificial intelligence) models to effectively reduce diagnostic uncertainty has utility. Such an AI-enabled CDSS framework with quantified uncertainty promises to be practical and beneficial in the demanding context of real-world medical care. In this study, we introduce the concept of Medical Entropy, quantifying uncertainties in patient outcomes predicted by neural machine translation based on the ICD-9 code of procedures. Our experimental results not only show strong correlations between procedure and diagnosis sequences based on the simple ICD-9 code but also demonstrate the promising capacity to model trends of uncertainties during hospitalizations through a data-driven approach. | ['Pei-Hung Chung', 'Shuhan He', 'Norawit Kijpaisalratana', 'Abdel-badih el Ariss', 'Byung-Jun Yoon'] |
null | null | 2402.10941 | null | null | http://arxiv.org/pdf/2402.10941v1 | 2024-02-08T03:41:39Z | 2024-02-08T03:41:39Z | Text2Data: Low-Resource Data Generation with Textual Control | Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines. Recognizing the importance of this interface, the machine learning community is investing considerable effort in generating data that is semantically coherent with textual instructions. While strides have been made in text-to-data generation spanning image editing, audio synthesis, video creation, and beyond, low-resource areas characterized by expensive annotations or complex data structures, such as molecules, motion dynamics, and time series, often lack textual labels. This deficiency impedes supervised learning, thereby constraining the application of advanced generative models for text-to-data tasks. In response to these challenges in the low-resource scenario, we propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model. Subsequently, it undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting. Comprehensive experiments demonstrate that Text2Data is able to achieve enhanced performance regarding controllability across various modalities, including molecules, motions and time series, when compared to existing baselines. | ['Shiyu Wang', 'Yihao Feng', 'Tian Lan', 'Ning Yu', 'Yu Bai', 'Ran Xu', 'Huan Wang', 'Caiming Xiong', 'Silvio Savarese'] |
null | null | 2402.10946 | null | null | http://arxiv.org/pdf/2402.10946v1 | 2024-02-09T04:02:43Z | 2024-02-09T04:02:43Z | CultureLLM: Incorporating Cultural Differences into Large Language Models | Large language models (LLMs) are reported to be partial to certain cultures owing to the training data dominance from the English corpora. Since multilingual cultural data are often expensive to collect, existing efforts handle this by prompt engineering or culture-specific pre-training. However, they might overlook the knowledge deficiency of low-resource culture and require extensive computing resources. In this paper, we propose CultureLLM, a cost-effective solution to incorporate cultural differences into LLMs. CultureLLM adopts World Value Survey (WVS) as seed data and generates semantically equivalent training data via the proposed semantic data augmentation. Using only 50 seed samples from WVS with augmented data, we fine-tune culture-specific LLMs and one unified model (CultureLLM-One) for 9 cultures covering rich and low-resource languages. Extensive experiments on 60 culture-related datasets demonstrate that CultureLLM significantly outperforms various counterparts such as GPT-3.5 (by 8.1%) and Gemini Pro (by 9.5%) with comparable performance to GPT-4 or even better. Our human study shows that the generated samples are semantically equivalent to the original samples, providing an effective solution for LLMs augmentation. | ['Cheng Li', 'Mengzhou Chen', 'Jindong Wang', 'Sunayana Sitaram', 'Xing Xie'] |
null | null | 2402.10949 | null | null | http://arxiv.org/pdf/2402.10949v2 | 2024-02-20T15:03:00Z | 2024-02-09T22:48:45Z | The Unreasonable Effectiveness of Eccentric Automatic Prompts | Large Language Models (LLMs) have demonstrated remarkable problem-solving and basic mathematics abilities. However, their efficacy is highly contingent on the formulation of the prompt. This study endeavors to quantify the influence of incorporating "positive thinking" into the system message of the prompt, then compare that to systematic prompt optimization. We assess the performance of 60 combinations of system message snippets, tested with and without Chain of Thought prompting, across three models with parameters ranging from 7 to 70 billion on the GSM8K dataset. Our findings reveal that results do not universally generalize across models. In most instances, the inclusion of "positive thinking" prompts positively affected model performance. Notably, however, Llama2-70B exhibited an exception when not utilizing Chain of Thought, as the optimal system message was found to be none at all. Given the combinatorial complexity, and thus computation time, of experimenting with hand-tuning prompts for large black-box models, we then compared the performance of the best "positive thinking" prompt against the output of systematic prompt optimization. We show that employing an automated prompt optimizer emerges as the most effective method for enhancing performance, even when working with smaller open-source models. Additionally, our findings reveal that the highest-scoring, automatically-optimized prompt exhibits a degree of peculiarity far beyond expectations. | ['Rick Battle', 'Teja Gollapudi'] |
null | null | 2402.10951 | null | null | http://arxiv.org/pdf/2402.10951v1 | 2024-02-10T16:48:45Z | 2024-02-10T16:48:45Z | DAEDRA: A language model for predicting outcomes in passive pharmacovigilance reporting | Over the recent years, the emergence of large language models (LLMs) has given rise to a proliferation of domain-specific models that are intended to reflect the particularities of linguistic context and content as a correlate of the originating domain. This paper details the conception, design, training and evaluation of DAEDRA, an LLM designed to detect regulatory-relevant outcomes (mortality, ER attendance and hospitalisation) in adverse event reports elicited through passive reporting (PR). While PR is a highly cost-efficient way of eliciting information from a wide and diverse audience -- typically including not only physicians and healthcare providers but also patients, family members and other lay stakeholders --, this diversity makes PR corpora difficult to analyse. Generic language models may not capture the complex clinical dimensions while specific clinical or biomedical models may not perform well on lay reports. To evaluate the utility of a subdomain-specific language model, an adaptive training approach was adopted, wherein base language model candidates were evaluated on a subset of the corpus, and the best performer was trained on the entire corpus. This yielded a small but significant improvement in $F_1$ (+1%), precision (+2.5%) and recall (+3.8%), at a relatively low training cost and a single-day training time. Subdomain-specific LLMs continue to be viable options for better results when analysing highly specialised corpora. | ['Chris von Csefalvay'] |
null | null | 2402.10956 | null | null | http://arxiv.org/pdf/2402.10956v1 | 2024-02-12T18:25:41Z | 2024-02-12T18:25:41Z | Sleep-Like Unsupervised Replay Improves Performance when Data are Limited or Unbalanced | The performance of artificial neural networks (ANNs) degrades when training data are limited or imbalanced. In contrast, the human brain can learn quickly from just a few examples. Here, we investigated the role of sleep in improving the performance of ANNs trained with limited data on the MNIST and Fashion MNIST datasets. Sleep was implemented as an unsupervised phase with local Hebbian type learning rules. We found a significant boost in accuracy after the sleep phase for models trained with limited data in the range of 0.5-10% of total MNIST or Fashion MNIST datasets. When more than 10% of the total data was used, sleep alone had a slight negative impact on performance, but this was remedied by fine-tuning on the original data. This study sheds light on a potential synaptic weight dynamics strategy employed by the brain during sleep to enhance memory performance when training data are limited or imbalanced. | ['Anthony Bazhenov', 'Pahan Dewasurendra', 'Giri Krishnan', 'Jean Erik Delanois'] |
null | null | 2402.10958 | null | null | http://arxiv.org/pdf/2402.10958v2 | 2024-05-27T20:05:03Z | 2024-02-12T22:47:57Z | Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts | In the field of large language models (LLMs), aligning models with the diverse preferences of users is a critical challenge. Direct Preference Optimization (DPO) has played a key role in this area. It works by using pairs of preferences derived from the same prompts, and it functions without needing an additional reward model. However, DPO does not fully reflect the complex nature of human learning, which often involves understanding contrasting responses to not only identical but also similar questions. To overcome this shortfall, we propose Relative Preference Optimization (RPO). RPO is designed to discern between more and less preferred responses derived from both identical and related prompts. It introduces a contrastive weighting mechanism, enabling the tuning of LLMs using a broader range of preference data, including both paired and unpaired sets. This approach expands the learning capabilities of the model, allowing it to leverage insights from a more varied set of prompts. Through empirical tests, including dialogue and summarization tasks, and evaluations using the AlpacaEval2.0 leaderboard, RPO has demonstrated a superior ability to align LLMs with user preferences and to improve their adaptability during the training process. Our code can be viewed at https://github.com/yinyueqin/relative-preference-optimization | ['Yueqin Yin', 'Zhendong Wang', 'Yi Gu', 'Hai Huang', 'Weizhu Chen', 'Mingyuan Zhou'] |
null | null | 2402.10962 | null | null | http://arxiv.org/pdf/2402.10962v3 | 2024-05-01T16:47:42Z | 2024-02-13T20:10:29Z | Measuring and Controlling Instruction (In)Stability in Language Model Dialogs | System-prompting is a standard tool for customizing language-model chatbots, enabling them to follow a specific instruction. An implicit assumption in the use of system prompts is that they will be stable, so the chatbot will continue to generate text according to the stipulated instructions for the duration of a conversation. We propose a quantitative benchmark to test this assumption, evaluating instruction stability via self-chats between two instructed chatbots. Testing popular models like LLaMA2-chat-70B and GPT-3.5, we reveal a significant instruction drift within eight rounds of conversations. An empirical and theoretical analysis of this phenomenon suggests the transformer attention mechanism plays a role, due to attention decay over long exchanges. To combat attention decay and instruction drift, we propose a lightweight method called split-softmax, which compares favorably against two strong baselines. | ['Kenneth Li', 'Tianle Liu', 'Naomi Bashkansky', 'David Bau', 'Fernanda Viégas', 'Hanspeter Pfister', 'Martin Wattenberg'] |
null | null | 2402.10963 | null | null | http://arxiv.org/pdf/2402.10963v2 | 2024-06-25T03:14:10Z | 2024-02-13T20:16:29Z | GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements | State-of-the-art language models can exhibit impressive reasoning refinement capabilities on math, science or coding tasks. However, recent work demonstrates that even the best models struggle to identify *when and where to refine* without access to external feedback. Outcome-based Reward Models (**ORMs**), trained to predict correctness of the final answer indicating when to refine, offer one convenient solution for deciding when to refine. Process Based Reward Models (**PRMs**), trained to predict correctness of intermediate steps, can then be used to indicate where to refine. But they are expensive to train, requiring extensive human annotations. In this paper, we propose Stepwise ORMs (**SORMs**) which are trained, only on synthetic data, to approximate the expected future reward of the optimal policy or $V^\star$. More specifically, SORMs are trained to predict the correctness of the final answer when sampling the current policy many times (rather than only once as in the case of ORMs). Our experiments show that SORMs can more accurately detect incorrect reasoning steps compared to ORMs, thus improving downstream accuracy when doing refinements. We then train *global* refinement models, which take only the question and a draft solution as input and predict a corrected solution, and *local* refinement models which also take as input a critique indicating the location of the first reasoning error. We generate training data for both models synthetically by reusing data used to train the SORM. We find combining global and local refinements, using the ORM as a reranker, significantly outperforms either one individually, as well as a best of three sample baseline. With this strategy we can improve the accuracy of a LLaMA-2 13B model (already fine-tuned with RL) on GSM8K from 53% to 65% when greedily sampled. | ['Alex Havrilla', 'Sharath Raparthy', 'Christoforus Nalmpantis', 'Jane Dwivedi-Yu', 'Maksym Zhuravinskyi', 'Eric Hambro', 'Roberta Raileanu'] |
null | null | 2402.10964 | null | null | http://arxiv.org/pdf/2402.10964v2 | 2024-02-20T08:24:42Z | 2024-02-13T21:57:31Z | Optimal feature rescaling in machine learning based on neural networks | This paper proposes a novel approach to improve the training efficiency and the generalization performance of Feed Forward Neural Networks (FFNNs) resorting to an optimal rescaling of input features (OFR) carried out by a Genetic Algorithm (GA). The OFR reshapes the input space improving the conditioning of the gradient-based algorithm used for the training. Moreover, the scale factors exploration entailed by GA trials and selection corresponds to different initialization of the first layer weights at each training attempt, thus realizing a multi-start global search algorithm (even though restrained to few weights only) which fosters the achievement of a global minimum. The approach has been tested on an FFNN modeling the outcome of a real industrial process (centerless grinding). | ['Federico Maria Vitrò', 'Marco Leonesio', 'Lorenzo Fagiano'] |
null | null | 2402.10965 | null | null | http://arxiv.org/pdf/2402.10965v2 | 2024-02-24T13:17:38Z | 2024-02-14T06:24:52Z | Generalization in Healthcare AI: Evaluation of a Clinical Large Language Model | Advances in large language models (LLMs) provide new opportunities in healthcare for improved patient care, clinical decision-making, and enhancement of physician and administrator workflows. However, the potential of these models importantly depends on their ability to generalize effectively across clinical environments and populations, a challenge often underestimated in early development. To better understand reasons for these challenges and inform mitigation approaches, we evaluated ClinicLLM, an LLM trained on [HOSPITAL]'s clinical notes, analyzing its performance on 30-day all-cause readmission prediction focusing on variability across hospitals and patient characteristics. We found poorer generalization particularly in hospitals with fewer samples, among patients with government and unspecified insurance, the elderly, and those with high comorbidities. To understand reasons for lack of generalization, we investigated sample sizes for fine-tuning, note content (number of words per note), patient characteristics (comorbidity level, age, insurance type, borough), and health system aspects (hospital, all-cause 30-day readmission, and mortality rates). We used descriptive statistics and supervised classification to identify features. We found that, along with sample size, patient age, number of comorbidities, and the number of words in notes are all important factors related to generalization. Finally, we compared local fine-tuning (hospital specific), instance-based augmented fine-tuning and cluster-based fine-tuning for improving generalization. Among these, local fine-tuning proved most effective, increasing AUC by 0.25% to 11.74% (most helpful in settings with limited data). Overall, this study provides new insights for enhancing the deployment of large language models in the societally important domain of healthcare, and improving their performance for broader populations. | ['Salman Rahman', 'Lavender Yao Jiang', 'Saadia Gabriel', 'Yindalon Aphinyanaphongs', 'Eric Karl Oermann', 'Rumi Chunara'] |
null | null | 2402.10972 | null | null | http://arxiv.org/abs/2402.10972v1 | 2024-02-15T08:30:50Z | 2024-02-15T08:30:50Z | Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases | Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications, or the time response of medical services. This paper presents a study about the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 minutes, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. | ['Josué Pagán', 'José L. Risco-Martín', 'José M. Moya', 'José L. Ayala'] |
null | null | 2402.10974 | null | null | http://arxiv.org/pdf/2402.10974v1 | 2024-02-15T14:39:58Z | 2024-02-15T14:39:58Z | On the Cross-Dataset Generalization of Machine Learning for Network Intrusion Detection | Network Intrusion Detection Systems (NIDS) are a fundamental tool in cybersecurity. Their ability to generalize across diverse networks is a critical factor in their effectiveness and a prerequisite for real-world applications. In this study, we conduct a comprehensive analysis on the generalization of machine-learning-based NIDS through extensive experimentation in a cross-dataset framework. We employ four machine learning classifiers and utilize four datasets acquired from different networks: CIC-IDS-2017, CSE-CIC-IDS2018, LycoS-IDS2017, and LycoS-Unicas-IDS2018. Notably, the last dataset is a novel contribution, where we apply corrections based on LycoS-IDS2017 to the well-known CSE-CIC-IDS2018 dataset. The results show nearly perfect classification performance when the models are trained and tested on the same dataset. However, when training and testing the models in a cross-dataset fashion, the classification accuracy is largely commensurate with random chance except for a few combinations of attacks and datasets. We employ data visualization techniques in order to provide valuable insights on the patterns in the data. Our analysis unveils the presence of anomalies in the data that directly hinder the classifiers' capability to generalize the learned knowledge to new scenarios. This study enhances our comprehension of the generalization capabilities of machine-learning-based NIDS, highlighting the significance of acknowledging data heterogeneity. | ['Marco Cantone', 'Claudio Marrocco', 'Alessandro Bria'] |
null | null | 2402.10977 | null | null | http://arxiv.org/abs/2402.10977v2 | 2024-05-06T21:40:04Z | 2024-02-15T18:20:42Z | Generative AI and Process Systems Engineering: The Next Frontier | This article explores how emerging generative artificial intelligence (GenAI) models, such as large language models (LLMs), can enhance solution methodologies within process systems engineering (PSE). These cutting-edge GenAI models, particularly foundation models (FMs), which are pre-trained on extensive, general-purpose datasets, offer versatile adaptability for a broad range of tasks, including responding to queries, image generation, and complex decision-making. Given the close relationship between advancements in PSE and developments in computing and systems technologies, exploring the synergy between GenAI and PSE is essential. We begin our discussion with a compact overview of both classic and emerging GenAI models, including FMs, and then dive into their applications within key PSE domains: synthesis and design, optimization and integration, and process monitoring and control. In each domain, we explore how GenAI models could potentially advance PSE methodologies, providing insights and prospects for each area. Furthermore, the article identifies and discusses potential challenges in fully leveraging GenAI within PSE, including multiscale modeling, data requirements, evaluation metrics and benchmarks, and trust and safety, thereby deepening the discourse on effective GenAI integration into systems analysis, design, optimization, operations, monitoring, and control. This paper provides a guide for future research focused on the applications of emerging GenAI in PSE. | ['Benjamin Decardi-Nelson', 'Abdulelah S. Alshehri', 'Akshay Ajagekar', 'Fengqi You'] |
null | null | 2402.10978 | null | null | http://arxiv.org/pdf/2402.10978v1 | 2024-02-15T18:31:53Z | 2024-02-15T18:31:53Z | Language Models with Conformal Factuality Guarantees | Guaranteeing the correctness and factuality of language model (LM) outputs is a major open problem. In this work, we propose conformal factuality, a framework that can ensure high probability correctness guarantees for LMs by connecting language modeling and conformal prediction. We observe that the correctness of an LM output is equivalent to an uncertainty quantification problem, where the uncertainty sets are defined as the entailment set of an LM's output. Using this connection, we show that conformal prediction in language models corresponds to a back-off algorithm that provides high probability correctness guarantees by progressively making LM outputs less specific (and expanding the associated uncertainty sets). This approach applies to any black-box LM and requires very few human-annotated samples. Evaluations of our approach on closed book QA (FActScore, NaturalQuestions) and reasoning tasks (MATH) show that our approach can provide 80-90% correctness guarantees while retaining the majority of the LM's original output. | ['Christopher Mohri', 'Tatsunori Hashimoto'] |
null | null | 2402.10980 | null | null | http://arxiv.org/pdf/2402.10980v4 | 2024-06-07T17:33:21Z | 2024-02-15T21:33:07Z | ChemReasoner: Heuristic Search over a Large Language Model's Knowledge Space using Quantum-Chemical Feedback | The discovery of new catalysts is essential for the design of new and more efficient chemical processes in order to transition to a sustainable future. We introduce an AI-guided computational screening framework unifying linguistic reasoning with quantum-chemistry based feedback from 3D atomistic representations. Our approach formulates catalyst discovery as an uncertain environment where an agent actively searches for highly effective catalysts via the iterative combination of large language model (LLM)-derived hypotheses and atomistic graph neural network (GNN)-derived feedback. Identified catalysts in intermediate search steps undergo structural evaluation based on spatial orientation, reaction pathways, and stability. Scoring functions based on adsorption energies and reaction energy barriers steer the exploration in the LLM's knowledge space toward energetically favorable, high-efficiency catalysts. We introduce planning methods that automatically guide the exploration without human input, providing competitive performance against expert-enumerated chemical descriptor-based implementations. By integrating language-guided reasoning with computational chemistry feedback, our work pioneers AI-accelerated, trustworthy catalyst discovery. | ['Henry W. Sprueill', 'Carl Edwards', 'Khushbu Agarwal', 'Mariefel V. Olarte', 'Udishnu Sanyal', 'Conrad Johnston', 'Hongbin Liu', 'Heng Ji', 'Sutanay Choudhury'] |
null | null | 2402.10981 | null | null | http://arxiv.org/pdf/2402.10981v1 | 2024-02-15T22:51:27Z | 2024-02-15T22:51:27Z | Stuck-at Faults in ReRAM Neuromorphic Circuit Array and their Correction through Machine Learning | In this paper, we study the inference accuracy of the Resistive Random Access Memory (ReRAM) neuromorphic circuit due to stuck-at faults (stuck-on, stuck-off, and stuck at a certain resistive value). A simulation framework using Python is used to perform supervised machine learning (neural network with 3 hidden layers, 1 input layer, and 1 output layer) of handwritten digits and construct a corresponding fully analog neuromorphic circuit (4 synaptic arrays) simulated by Spectre. A generic 45nm Process Development Kit (PDK) was used. We study the difference in the inference accuracy degradation due to stuck-on and stuck-off defects. Various defect patterns are studied including circular, ring, row, column, and circular-complement defects. It is found that stuck-on and stuck-off defects have a similar effect on inference accuracy. However, it is also found that if there is a spatial defect variation across the columns, the inference accuracy may be degraded significantly. We also propose a machine learning (ML) strategy to recover the inference accuracy degradation due to stuck-at faults. The inference accuracy is improved from 48% to 85% in a defective neuromorphic circuit. | ['Vedant Sawal', 'Hiu Yung Wong'] |
null | null | 2402.10982 | null | null | http://arxiv.org/abs/2402.10982v1 | 2024-02-15T23:08:18Z | 2024-02-15T23:08:18Z | mshw, a forecasting library to predict short-term electricity demand based on multiple seasonal Holt-Winters | Transmission system operators have a growing need for more accurate forecasting of electricity demand. Current electricity systems largely require demand forecasting so that the electricity market establishes electricity prices as well as the programming of production units. The companies that are part of the electrical system use exclusive software to obtain predictions, based on the use of time series and prediction tools, whether statistical or artificial intelligence. However, the most common form of prediction is based on hybrid models that use both technologies. In any case, it is software with a complicated structure, with a large number of associated variables and that requires a high computational load to make predictions. The predictions they can offer are not much better than those that simple models can offer. In this paper we present a MATLAB toolbox created for the prediction of electrical demand. The toolbox implements multiple seasonal Holt-Winters exponential smoothing models and neural network models. The models used include the use of discrete interval mobile seasonalities (DIMS) to improve forecasting on special days. Additionally, the results of its application to various electrical systems in Europe are shown. The use of this library opens a new avenue of research for the use of models with discrete and complex seasonalities in other fields of application. | ['Oscar Trull', 'J. Carlos García-Díaz', 'Angel Peiró-Signes'] |
null | null | 2402.10983 | null | null | http://arxiv.org/pdf/2402.10983v1 | 2024-02-16T02:11:27Z | 2024-02-16T02:11:27Z | Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks | Neural networks demonstrate inherent vulnerability to small, non-random perturbations, emerging as adversarial attacks. Such attacks, born from the gradient of the loss function relative to the input, are discerned as input conjugates, revealing a systemic fragility within the network structure. Intriguingly, a mathematical congruence manifests between this mechanism and the uncertainty principle of quantum physics, casting light on a hitherto unanticipated interdisciplinarity. This susceptibility within neural network systems is generally intrinsic, highlighting not only the innate vulnerability of these networks but also suggesting potential advancements in the interdisciplinary area for understanding these black-box networks. | ['Jun-Jie Zhang', 'Deyu Meng'] |
null | null | 2402.10991 | null | null | http://arxiv.org/pdf/2402.10991v4 | 2024-03-04T03:35:40Z | 2024-02-16T12:10:53Z | Enhancing Convergence in Federated Learning: A Contribution-Aware Asynchronous Approach | Federated Learning (FL) is a distributed machine learning paradigm that allows clients to train models on their data while preserving their privacy. FL algorithms, such as Federated Averaging (FedAvg) and its variants, have been shown to converge well in many scenarios. However, these methods require clients to upload their local updates to the server in a synchronous manner, which can be slow and unreliable in realistic FL settings. To address this issue, researchers have developed asynchronous FL methods that allow clients to continue training on their local data using a stale global model. However, most of these methods simply aggregate all of the received updates without considering their relative contributions, which can slow down convergence. In this paper, we propose a contribution-aware asynchronous FL method that takes into account the staleness and statistical heterogeneity of the received updates. Our method dynamically adjusts the contribution of each update based on these factors, which can speed up convergence compared to existing methods. | ['Changxin Xu', 'Yuxin Qiao', 'Zhanxin Zhou', 'Fanghao Ni', 'Jize Xiong'] |
null | null | 2402.10998 | null | null | http://arxiv.org/pdf/2402.10998v2 | 2024-06-14T13:05:01Z | 2024-02-16T16:15:25Z | Provably Safe Neural Network Controllers via Differential Dynamic Logic | While neural networks (NNs) have potential as autonomous controllers for Cyber-Physical Systems, verifying the safety of NN based control systems (NNCSs) poses significant challenges for the practical use of NNs, especially when safety is needed for unbounded time horizons. One reason is the intractability of analyzing NNs, ODEs and hybrid systems. To this end, we introduce VerSAILLE (Verifiably Safe AI via Logically Linked Envelopes): The first general approach that allows reusing control theory results for NNCS verification. By joining forces, we exploit the efficiency of NN verification tools while retaining the rigor of differential dynamic logic (dL). Based on provably safe control envelopes in dL, we derive specifications for the NN, which are then proven via NN verification. We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS. The NN verification properties resulting from hybrid systems typically contain nonlinear arithmetic and arbitrary logical structures while efficient NN verification merely supports linear constraints. To overcome this divide, we present Mosaic: An efficient, sound and complete verification approach for polynomial real arithmetic properties on piece-wise linear NNs. Mosaic partitions complex verification queries into simple queries and lifts off-the-shelf linear constraint tools to the nonlinear setting in a completeness-preserving manner by combining approximation with exact reasoning for counterexample regions. Our evaluation demonstrates the versatility of VerSAILLE and Mosaic: We prove infinite-time safety on the classical Vertical Airborne Collision Avoidance NNCS verification benchmark for two scenarios while (exhaustively) enumerating counterexample regions in unsafe scenarios. We also show that our approach significantly outperforms State-of-the-Art tools in closed-loop NNV. | ['Samuel Teuber', 'Stefan Mitsch', 'André Platzer'] |
null | null | 2402.10999 | null | null | http://arxiv.org/pdf/2402.10999v1 | 2024-02-16T16:47:48Z | 2024-02-16T16:47:48Z | Analysis and Mortality Prediction using Multiclass Classification for Older Adults with Type 2 Diabetes | Designing proper treatment plans to manage diabetes requires health practitioners to pay heed to the individual's remaining life along with the comorbidities affecting them. Older adults with Type 2 Diabetes Mellitus (T2DM) are prone to experience premature death or even hypoglycaemia. The structured dataset utilized has 68 potential mortality predictors for 275,190 diabetic U.S. military Veterans aged 65 years or older. A new target variable is created by combining the two original target variables. Outliers are handled by discretizing the continuous variables. Categorical variables have been dummy encoded. Class balancing is achieved by random under-sampling. A benchmark regression model is built using Multinomial Logistic Regression with LASSO. Chi-Squared and Information Gain are the filter-based feature selection techniques utilized. Classifiers such as Multinomial Logistic Regression, Random Forest, Extreme Gradient Boosting (XGBoost), and One-vs-Rest classifier are employed to build various models. Contrary to expectations, all the models have consistently underperformed. XGBoost has given the highest accuracy of 53.03 percent with Chi-Squared feature selection. All the models have consistently shown an acceptable performance for Class 3 (remaining life is more than 10 years), significantly low for Class 1 (remaining life is up to 5 years), and the worst for Class 2 (remaining life is more than 5 but up to 10 years). Feature analysis has shown that almost all input variables are associated with multiple target classes. The high dimensionality of the input data after dummy encoding seems to have confused the models, leading to misclassifications. The approach taken in this study is ineffective in producing a high-performing predictive model but lays a foundation as this problem has never been viewed from a multiclass classification perspective. | ['Ruchika Desure', 'Gutha Jaya Krishna'] |
null | null | 2402.11004 | null | null | http://arxiv.org/pdf/2402.11004v1 | 2024-02-16T18:28:36Z | 2024-02-16T18:28:36Z | The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains | Large language models have the ability to generate text that mimics patterns in their inputs. We introduce a simple Markov Chain sequence modeling task in order to study how this in-context learning (ICL) capability emerges. In our setting, each example is sampled from a Markov chain drawn from a prior distribution over Markov chains. Transformers trained on this task form *statistical induction heads* which compute accurate next-token probabilities given the bigram statistics of the context. During the course of training, models pass through multiple phases: after an initial stage in which predictions are uniform, they learn to sub-optimally predict using in-context single-token statistics (unigrams); then, there is a rapid phase transition to the correct in-context bigram solution. We conduct an empirical and theoretical investigation of this multi-phase process, showing how successful learning results from the interaction between the transformer's layers, and uncovering evidence that the presence of the simpler unigram solution may delay formation of the final bigram solution. We examine how learning is affected by varying the prior distribution over Markov chains, and consider the generalization of our in-context learning of Markov chains (ICL-MC) task to $n$-grams for $n > 2$. | ['Benjamin L. Edelman', 'Ezra Edelman', 'Surbhi Goel', 'Eran Malach', 'Nikolaos Tsilivis'] |
null | null | 2402.11006 | null | null | http://arxiv.org/pdf/2402.11006v1 | 2024-02-16T18:51:40Z | 2024-02-16T18:51:40Z | Automated Detection and Analysis of Data Practices Using A Real-World Corpus | Privacy policies are crucial for informing users about data practices, yet their length and complexity often deter users from reading them. In this paper, we propose an automated approach to identify and visualize data practices within privacy policies at different levels of detail. Leveraging crowd-sourced annotations from the ToS;DR platform, we experiment with various methods to match policy excerpts with predefined data practice descriptions. We further conduct a case study to evaluate our approach on a real-world policy, demonstrating its effectiveness in simplifying complex policies. Experiments show that our approach accurately matches data practice descriptions with policy excerpts, facilitating the presentation of simplified privacy information to users. | ['Mukund Srinath', 'Pranav Venkit', 'Maria Badillo', 'Florian Schaub', 'C. Lee Giles', 'Shomir Wilson'] |
null | null | 2402.11025 | null | null | http://arxiv.org/pdf/2402.11025v1 | 2024-02-16T19:15:49Z | 2024-02-16T19:15:49Z | Training Bayesian Neural Networks with Sparse Subspace Variational Inference | Bayesian neural networks (BNNs) offer uncertainty quantification but come with the downside of substantially increased training and inference costs. Sparse BNNs have been investigated for efficient inference, typically by either slowly introducing sparsity throughout the training or by post-training compression of dense BNNs. The dilemma of how to cut down massive training costs remains, particularly given the requirement to learn about the uncertainty. To solve this challenge, we introduce Sparse Subspace Variational Inference (SSVI), the first fully sparse BNN framework that maintains a consistently highly sparse Bayesian model throughout the training and inference phases. Starting from a randomly initialized low-dimensional sparse subspace, our approach alternately optimizes the sparse subspace basis selection and its associated parameters. While basis selection is characterized as a non-differentiable problem, we approximate the optimal solution with a removal-and-addition strategy, guided by novel criteria based on weight distribution statistics. Our extensive experiments show that SSVI sets new benchmarks in crafting sparse BNNs, achieving, for instance, a 10-20x compression in model size with under 3% performance drop, and up to 20x FLOPs reduction during training compared with dense VI training. Remarkably, SSVI also demonstrates enhanced robustness to hyperparameters, reducing the need for intricate tuning in VI and occasionally even surpassing VI-trained dense BNNs on both accuracy and uncertainty metrics. | ['Junbo Li', 'Zichen Miao', 'Qiang Qiu', 'Ruqi Zhang'] |
null | null | 2402.11036 | null | null | http://arxiv.org/pdf/2402.11036v1 | 2024-02-16T19:29:43Z | 2024-02-16T19:29:43Z | Occlusion Resilient 3D Human Pose Estimation | Occlusions remain one of the key challenges in 3D body pose estimation from single-camera video sequences. Temporal consistency has been extensively used to mitigate their impact but the existing algorithms in the literature do not explicitly model them. Here, we address this by representing the deforming body as a spatio-temporal graph. We then introduce a refinement network that performs graph convolutions over this graph to output 3D poses. To ensure robustness to occlusions, we train this network with a set of binary masks that we use to disable some of the edges as in drop-out techniques. In effect, we simulate the fact that some joints can be hidden for periods of time and train the network to be immune to that. We demonstrate the effectiveness of this approach compared to state-of-the-art techniques that infer poses from single-camera sequences. | ['Soumava Kumar Roy', 'Ilia Badanin', 'Sina Honari', 'Pascal Fua'] |
null | null | 2402.11039 | null | null | http://arxiv.org/pdf/2402.11039v2 | 2024-06-26T16:35:16Z | 2024-02-16T19:35:42Z | Robustness to Subpopulation Shift with Domain Label Noise via Regularized Annotation of Domains | Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA) rely heavily on well-annotated groups in the training data. We show, both in theory and practice, that annotation-based data augmentations using either downsampling or upweighting for WGA are susceptible to domain annotation noise, and in high-noise regimes approach the WGA of a model trained with vanilla empirical risk minimization. We introduce Regularized Annotation of Domains (RAD) in order to train robust last layer classifiers without the need for explicit domain annotations. Our results show that RAD is competitive with other recently proposed domain annotation-free techniques. Most importantly, RAD outperforms state-of-the-art annotation-reliant methods even with only 5% noise in the training data for several publicly available datasets. | ['Nathan Stromberg', 'Rohan Ayyagari', 'Monica Welfert', 'Sanmi Koyejo', 'Richard Nock', 'Lalitha Sankar'] |
null | null | 2402.11040 | null | null | http://arxiv.org/pdf/2402.11040v2 | 2024-07-14T14:45:52Z | 2024-02-16T19:35:58Z | Surpassing legacy approaches to PWR core reload optimization with single-objective Reinforcement learning | Optimizing the fuel cycle cost through the optimization of nuclear reactor core loading patterns involves multiple objectives and constraints, leading to a vast number of candidate solutions that cannot be explicitly solved. To advance the state-of-the-art in core reload patterns, we have developed methods based on Deep Reinforcement Learning (DRL) for both single- and multi-objective optimization. Our previous research has laid the groundwork for these approaches and demonstrated their ability to discover high-quality patterns within a reasonable time frame. On the other hand, stochastic optimization (SO) approaches are commonly used in the literature, but there is no rigorous explanation that shows which approach is better in which scenario. In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO), against the most commonly used SO-based methods: Genetic Algorithm (GA), Parallel Simulated Annealing (PSA) with mixing of states, and Tabu Search (TS), as well as an ensemble-based method, Prioritized Replay Evolutionary and Swarm Algorithm (PESA). We found that the LP scenarios derived in this paper are amenable to a global search to identify promising research directions rapidly, but then need to transition into a local search to exploit these directions efficiently and prevent getting stuck in local optima. PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global and local search method. Subsequently, we compared all algorithms against PPO in long runs, which exacerbated the differences seen in the shorter cases. Overall, the work demonstrates the statistical superiority of PPO compared to the other considered algorithms. | ['Paul Seurin', 'Koroush Shirvan'] |
null | null | 2402.11066 | null | null | http://arxiv.org/pdf/2402.11066v1 | 2024-02-16T20:40:30Z | 2024-02-16T20:40:30Z | Towards Financially Inclusive Credit Products Through Financial Time Series Clustering | Financial inclusion ensures that individuals have access to financial products and services that meet their needs. As a key contributing factor to economic growth and investment opportunity, financial inclusion increases consumer spending and consequently business development. It has been shown that institutions are more profitable when they provide marginalised social groups access to financial services. Customer segmentation based on consumer transaction data is a well-known strategy used to promote financial inclusion. While the required data is available to modern institutions, the challenge remains that segment annotations are usually difficult and/or expensive to obtain. This prevents the usage of time series classification models for customer segmentation based on domain expert knowledge. As a result, clustering is an attractive alternative to partition customers into homogeneous groups based on the spending behaviour encoded within their transaction data. In this paper, we present a solution to one of the key challenges preventing modern financial institutions from providing financially inclusive credit, savings and insurance products: the inability to understand consumer financial behaviour, and hence risk, without the introduction of restrictive conventional credit scoring techniques. We present a novel time series clustering algorithm that allows institutions to understand the financial behaviour of their customers. This enables unique product offerings to be provided based on the needs of the customer, without reliance on restrictive credit practices. | ['Tristan Bester', 'Benjamin Rosman'] |
null | null | 2402.11078 | null | null | http://arxiv.org/pdf/2402.11078v3 | 2024-06-03T05:39:10Z | 2024-02-16T21:10:33Z | Model Editing by Standard Fine-Tuning | Standard fine-tuning is considered not as effective as specialized methods for model editing due to its comparatively poor performance. However, it is simple, agnostic to the architectural details of the model being edited, and able to leverage advances in standard training techniques with no additional work (e.g., black-box PEFT for computational efficiency), making it an appealing choice for a model editor. In this work, we show that standard fine-tuning alone can yield competitive model editing performance with two minor modifications. First, we optimize the conditional likelihood rather than the full likelihood. Second, in addition to the typical practice of training on randomly paraphrased edit prompts to encourage generalization, we also train on random or similar unedited facts to encourage locality. Our experiments on the ZsRE and CounterFact datasets demonstrate that these simple modifications allow standard fine-tuning to match or outperform highly specialized editors in terms of edit score. | ['Govind Gangadhar', 'Karl Stratos'] |
null | null | 2402.11093 | null | null | http://arxiv.org/pdf/2402.11093v1 | 2024-02-16T21:39:28Z | 2024-02-16T21:39:28Z | Modular Graph Extraction for Handwritten Circuit Diagram Images | As digitization in engineering has progressed, circuit diagrams (also referred to as schematics) are typically developed and maintained in computer-aided engineering (CAE) systems, thus allowing for automated verification, simulation and further processing in downstream engineering steps. However, apart from printed legacy schematics, hand-drawn circuit diagrams are still used today in the educational domain, where they serve as an easily accessible means for trainees and students to learn drawing this type of diagrams. Furthermore, hand-drawn schematics are typically used in examinations due to legal constraints. In order to harness the capabilities of digital circuit representations, automated means for extracting the electrical graph from raster graphics are required. While respective approaches have been proposed in the literature, they are typically conducted on small or non-disclosed datasets. This paper describes a modular end-to-end solution on a larger, public dataset, in which approaches for the individual sub-tasks are evaluated to form a new baseline. These sub-tasks include object detection (for electrical symbols and texts), binary segmentation (drafter's stroke vs. background), handwritten character recognition and orientation regression for electrical symbols and texts. Furthermore, computer-vision graph assembly and rectification algorithms are presented. All methods are integrated in a publicly available prototype. | ['Johannes Bayer', 'Leo van Waveren', 'Andreas Dengel'] |
null | null | 2402.11101 | null | null | http://arxiv.org/abs/2402.11101v4 | 2024-05-29T08:33:14Z | 2024-02-16T22:14:21Z | Physics-based material parameters extraction from perovskite experiments via Bayesian optimization | The ability to extract material parameters of perovskite from quantitative experimental analysis is essential for rational design of photovoltaic and optoelectronic applications. However, the difficulty of this analysis increases significantly with the complexity of the theoretical model and the number of material parameters for perovskite. Here we use Bayesian optimization to develop an analysis platform that can extract up to 8 fundamental material parameters of an organometallic perovskite semiconductor from a transient photoluminescence experiment, based on a complex full physics model that includes drift-diffusion of carriers and dynamic defect occupation. An example study of thermal degradation reveals that the carrier mobility and trap-assisted recombination coefficient are reduced noticeably, while the defect energy level remains nearly unchanged. The reduced carrier mobility can dominate the overall effect on thermal degradation of perovskite solar cells by reducing the fill factor, despite the opposite effect of the reduced trap-assisted recombination coefficient on increasing the fill factor. In future, this platform can be conveniently applied to other experiments or to combinations of experiments, accelerating materials discovery and optimization of semiconductor materials for photovoltaics and other applications. | ['Hualin Zhan', 'Viqar Ahmad', 'Azul Mayon', 'Grace Tabi', 'Anh Dinh Bui', 'Zhuofeng Li', 'Daniel Walter', 'Hieu Nguyen', 'Klaus Weber', 'Thomas White', 'Kylie Catchpole'] |
null | null | 2402.11103 | null | null | http://arxiv.org/pdf/2402.11103v1 | 2024-02-16T22:16:14Z | 2024-02-16T22:16:14Z | Toward Learning Latent-Variable Representations of Microstructures by Optimizing in Spatial Statistics Space | In Materials Science, material development involves evaluating and optimizing the internal structures of the material, generically referred to as microstructures. Microstructure is stochastic, analogously to image textures. A particular microstructure can be well characterized by its spatial statistics, analogously to image texture being characterized by the response to a Fourier-like filter bank. Material design would benefit from low-dimensional representations of microstructures (Paulson et al., 2017). In this work, we train a Variational Autoencoder (VAE) to produce reconstructions of textures that preserve the spatial statistics of the original texture, while not necessarily reconstructing the same image in data space. We accomplish this by adding a differentiable term to the cost function in order to minimize the distance between the original and the reconstruction in spatial statistics space. Our experiments indicate that it is possible to train a VAE that minimizes the distance in spatial statistics space between the original and the reconstruction of synthetic images. In future work, we will apply the same techniques to microstructures, with the goal of obtaining low-dimensional representations of material microstructures. | ['Sayed Sajad Hashemi', 'Michael Guerzhoy', 'Noah H. Paulson'] |
null | null | 2402.11107 | null | null | http://arxiv.org/abs/2402.11107v1 | 2024-02-16T22:19:43Z | 2024-02-16T22:19:43Z | Dynamic nowcast of the New Zealand greenhouse gas inventory | As efforts to mitigate the effects of climate change grow, reliable and thorough reporting of greenhouse gas emissions is essential for measuring progress towards international and domestic emissions reduction targets. New Zealand's national emissions inventories are currently reported 15 to 27 months out-of-date. We present a machine learning approach to nowcast (dynamically estimate) national greenhouse gas emissions in New Zealand in advance of the national emissions inventory's release, with just a two month latency due to current data availability. Key findings include an estimated 0.2% decrease in national gross emissions since 2020 (as at July 2022). Our study highlights the predictive power of a dynamic view of emissions intensive activities. This methodology is a proof of concept that a machine learning approach can make sub-annual estimates of national greenhouse gas emissions by sector with a relatively low error that could be of value for policy makers. | ['Malcolm Jones', 'Hannah Chorley', 'Flynn Owen', 'Tamsyn Hilder', 'Holly Trowland', 'Paul Bracewell'] |
null | null | 2402.11119 | null | null | http://arxiv.org/pdf/2402.11119v1 | 2024-02-16T22:44:52Z | 2024-02-16T22:44:52Z | Private PAC Learning May be Harder than Online Learning | We continue the study of the computational complexity of differentially private PAC learning and how it is situated within the foundations of machine learning. A recent line of work uncovered a qualitative equivalence between the private PAC model and Littlestone's mistake-bounded model of online learning, in particular, showing that any concept class of Littlestone dimension $d$ can be privately PAC learned using $\mathrm{poly}(d)$ samples. This raises the natural question of whether there might be a generic conversion from online learners to private PAC learners that also preserves computational efficiency. We give a negative answer to this question under reasonable cryptographic assumptions (roughly, those from which it is possible to build indistinguishability obfuscation for all circuits). We exhibit a concept class that admits an online learner running in polynomial time with a polynomial mistake bound, but for which there is no computationally-efficient differentially private PAC learner. Our construction and analysis strengthen and generalize that of Bun and Zhandry (TCC 2016-A), who established such a separation between private and non-private PAC learning. | ['Mark Bun', 'Aloni Cohen', 'Rathin Desai'] |
null | null |
2402.11120
| null | null |
http://arxiv.org/pdf/2402.11120v1
|
2024-02-16T22:48:38Z
|
2024-02-16T22:48:38Z
|
DART: A Principled Approach to Adversarially Robust Unsupervised Domain
Adaptation
|
Distribution shifts and adversarial examples are two major challenges for deploying machine learning models. While these challenges have been studied individually, their combination is an important topic that remains relatively under-explored. In this work, we study the problem of adversarial robustness under a common setting of distribution shift - unsupervised domain adaptation (UDA). Specifically, given a labeled source domain $D_S$ and an unlabeled target domain $D_T$ with related but different distributions, the goal is to obtain an adversarially robust model for $D_T$. The absence of target domain labels poses a unique challenge, as conventional adversarial robustness defenses cannot be directly applied to $D_T$. To address this challenge, we first establish a generalization bound for the adversarial target loss, which consists of (i) terms related to the loss on the data, and (ii) a measure of worst-case domain divergence. Motivated by this bound, we develop a novel unified defense framework called Divergence Aware adveRsarial Training (DART), which can be used in conjunction with a variety of standard UDA methods; e.g., DANN [Ganin and Lempitsky, 2015]. DART is applicable to general threat models, including the popular $\ell_p$-norm model, and does not require heuristic regularizers or architectural changes. We also release DomainRobust: a testbed for evaluating robustness of UDA models to adversarial attacks. DomainRobust consists of 4 multi-domain benchmark datasets (with 46 source-target pairs) and 7 meta-algorithms with a total of 11 variants. Our large-scale experiments demonstrate that on average, DART significantly enhances model robustness on all benchmarks compared to the state of the art, while maintaining competitive standard accuracy. The relative improvement in robustness from DART reaches up to 29.2% on the source-target domain pairs considered.
|
[
"['Yunjuan Wang' 'Hussein Hazimeh' 'Natalia Ponomareva' 'Alexey Kurakin'\n 'Ibrahim Hammoud' 'Raman Arora']"
] |
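As a rough illustration of the divergence-aware idea (not the authors' exact DART algorithm), the sketch below combines adversarial training on the labeled source with a kernel MMD between source and target features as a stand-in for the worst-case domain divergence; `model` and `features` are assumed callables, and single-step FGSM stands in for a general attack.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step attack used to craft adversarial source examples."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def divergence_aware_loss(model, features, xs, ys, xt, lambda_div=0.1):
    """Adversarial loss on labeled source data plus an RBF-kernel MMD
    between source and target features as a divergence surrogate."""
    robust = F.cross_entropy(model(fgsm(model, xs, ys)), ys)
    fs, ft = features(xs), features(xt)
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / 2)
    mmd = k(fs, fs).mean() + k(ft, ft).mean() - 2 * k(fs, ft).mean()
    return robust + lambda_div * mmd
```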
null | null |
2402.11123
| null | null |
http://arxiv.org/pdf/2402.11123v1
|
2024-02-16T23:13:05Z
|
2024-02-16T23:13:05Z
|
Optimizing Warfarin Dosing Using Contextual Bandit: An Offline Policy
Learning and Evaluation Method
|
Warfarin, an anticoagulant medication, is formulated to prevent and address conditions associated with abnormal blood clotting, making it one of the most prescribed drugs globally. However, determining the suitable dosage remains challenging due to individual response variations, and prescribing an incorrect dosage may lead to severe consequences. Contextual bandits and reinforcement learning have shown promise in addressing this issue. Given the wide availability of observational data and the safety concerns of decision-making in healthcare, we focused on using exclusively observational data from historical policies as demonstrations to derive new policies; we utilized offline policy learning and evaluation in a contextual bandit setting to establish the optimal personalized dosage strategy. Our learned policies surpassed baseline approaches without requiring genotype inputs, even when given a suboptimal demonstration, showcasing promising application potential.
|
[
"['Yong Huang' 'Charles A. Downs' 'Amir M. Rahmani']"
] |
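Offline policy evaluation in this contextual-bandit setting can be illustrated with a standard inverse propensity scoring (IPS) estimator; the log format and `target_policy` interface below are assumptions, not the paper's implementation.

```python
import numpy as np

def ips_value(contexts, actions, rewards, logging_probs, target_policy):
    """Estimate the value of `target_policy` from logs of a historical
    behavior policy. `target_policy(x)` returns a vector of action
    probabilities; `logging_probs[i]` is the behavior policy's
    probability of the logged action `actions[i]`."""
    weights = np.array([target_policy(x)[a] / p
                        for x, a, p in zip(contexts, actions, logging_probs)])
    return float(np.mean(weights * np.asarray(rewards)))
```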
null | null |
2402.11124
| null | null |
http://arxiv.org/pdf/2402.11124v2
|
2024-05-28T19:43:21Z
|
2024-02-16T23:17:00Z
|
Implicit Causal Representation Learning via Switchable Mechanisms
|
Learning causal representations from observational and interventional data in the absence of known ground-truth graph structures necessitates implicit latent causal representation learning. Implicit learning of causal mechanisms typically involves two categories of interventional data: hard and soft interventions. In real-world scenarios, soft interventions are often more realistic than hard interventions, as the latter require fully controlled environments. Unlike hard interventions, which directly force changes in a causal variable, soft interventions exert influence indirectly by affecting the causal mechanism. However, the subtlety of soft interventions imposes several challenges for learning causal models. One challenge is that a soft intervention's effects are ambiguous, since parental relations remain intact. In this paper, we tackle the challenges of learning causal models using soft interventions while retaining implicit modeling. Our approach models the effects of soft interventions by employing a \textit{causal mechanism switch variable} designed to toggle between different causal mechanisms. In our experiments, we consistently observe improved learning of identifiable causal representations, compared to baseline approaches.
|
[
"['Shayan Shirahmad Gale Bagi' 'Zahra Gharaee' 'Oliver Schulte'\n 'Mark Crowley']"
] |
null | null |
2402.11126
| null | null |
http://arxiv.org/pdf/2402.11126v1
|
2024-02-16T23:21:40Z
|
2024-02-16T23:21:40Z
|
Kolmogorov n-Widths for Multitask Physics-Informed Machine Learning
(PIML) Methods: Towards Robust Metrics
|
Physics-informed machine learning (PIML) as a means of solving partial differential equations (PDE) has garnered much attention in the Computational Science and Engineering (CS&E) world. This topic encompasses a broad array of methods and models aimed at solving a single or a collection of PDE problems, called multitask learning. PIML is characterized by the incorporation of physical laws into the training process of machine learning models in lieu of large data when solving PDE problems. Despite the overall success of this collection of methods, it remains incredibly difficult to analyze, benchmark, and generally compare one approach to another. Using Kolmogorov n-widths as a measure of effectiveness of approximating functions, we judiciously apply this metric in the comparison of various multitask PIML architectures. We compute lower accuracy bounds and analyze the model's learned basis functions on various PDE problems. This is the first objective metric for comparing multitask PIML architectures and helps remove uncertainty in model validation from selective sampling and overfitting. We also identify avenues of improvement for model architectures, such as the choice of activation function, which can drastically affect model generalization to "worst-case" scenarios, which is not observed when reporting task-specific errors. We also incorporate this metric into the optimization process through regularization, which improves the models' generalizability over the multitask PDE problem.
|
[
"['Michael Penwarden' 'Houman Owhadi' 'Robert M. Kirby']"
] |
null | null |
2402.11131
| null | null |
http://arxiv.org/pdf/2402.11131v1
|
2024-02-16T23:36:43Z
|
2024-02-16T23:36:43Z
|
Speculative Streaming: Fast LLM Inference without Auxiliary Models
|
Speculative decoding is a prominent technique to speed up the inference of a large target language model based on predictions of an auxiliary draft model. While effective, in application-specific settings, it often involves fine-tuning both draft and target models to achieve high acceptance rates. As the number of downstream tasks grows, these draft models add significant complexity to inference systems. We propose Speculative Streaming, a single-model speculative decoding method that fuses drafting into the target model by changing the fine-tuning objective from next token prediction to future n-gram prediction. Speculative Streaming speeds up decoding by 1.8 - 3.1X in a diverse set of tasks, such as Summarization, Structured Queries, and Meaning Representation, without sacrificing generation quality. Additionally, Speculative Streaming is parameter-efficient. It achieves speed-ups on par with or higher than Medusa-style architectures while using ~10000X fewer extra parameters, making it well-suited for resource-constrained devices.
|
[
"['Nikhil Bhendawade' 'Irina Belousova' 'Qichen Fu' 'Henry Mason'\n 'Mohammad Rastegari' 'Mahyar Najibi']"
] |
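The future n-gram fine-tuning objective can be sketched as one prediction head per future offset; the head design and shapes below are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def future_ngram_loss(hidden, heads, tokens):
    """hidden: (B, T, D) final hidden states; tokens: (B, T) token ids;
    heads: one linear layer (D -> vocab) per future offset 1..n."""
    T, loss = tokens.size(1), 0.0
    for k, head in enumerate(heads, start=1):
        logits = head(hidden[:, : T - k])     # predict the token at position t+k
        target = tokens[:, k:]
        loss = loss + F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), target.reshape(-1))
    return loss / len(heads)
```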
null | null |
2402.11137
| null | null |
http://arxiv.org/pdf/2402.11137v2
|
2024-03-19T00:49:24Z
|
2024-02-17T00:02:23Z
|
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks
|
While tabular classification has traditionally relied on from-scratch training, a recent breakthrough called prior-data fitted networks (PFNs) challenges this approach. Similar to large language models, PFNs make use of pretraining and in-context learning to achieve strong performance on new tasks in a single forward pass. However, current PFNs have limitations that prohibit their widespread adoption. Notably, TabPFN achieves very strong performance on small tabular datasets but is not designed to make predictions for datasets of size larger than 1000. In this work, we overcome these limitations and substantially improve the performance of PFNs by developing context optimization techniques for PFNs. Specifically, we propose TuneTables, a novel prompt-tuning strategy that compresses large datasets into a smaller learned context. TuneTables scales TabPFN to be competitive with state-of-the-art tabular classification methods on larger datasets, while having a substantially lower inference time than TabPFN. Furthermore, we show that TuneTables can be used as an interpretability tool and can even be used to mitigate biases by optimizing a fairness objective.
|
[
"['Benjamin Feuer' 'Robin Tibor Schirrmeister' 'Valeriia Cherepanova'\n 'Chinmay Hegde' 'Frank Hutter' 'Micah Goldblum' 'Niv Cohen' 'Colin White']"
] |
null | null |
2402.11138
| null | null |
http://arxiv.org/pdf/2402.11138v2
|
2024-06-06T06:03:34Z
|
2024-02-17T00:09:32Z
|
Contrastive Instruction Tuning
|
Instruction tuning has been used as a promising approach to improve the performance of large language models (LLMs) on unseen tasks. However, current LLMs exhibit limited robustness to unseen instructions, generating inconsistent outputs when the same instruction is phrased with slightly varied forms or language styles. This behavior indicates LLMs' lack of robustness to textual variations and generalizability to unseen instructions, potentially leading to trustworthiness issues. Accordingly, we propose Contrastive Instruction Tuning (CoIN), which maximizes the similarity between the hidden representations of semantically equivalent instruction-instance pairs while minimizing the similarity between semantically different ones. To facilitate this approach, we augment the existing FLAN collection by paraphrasing task instructions. Experiments on the PromptBench benchmark show that CoIN consistently improves LLMs' robustness to unseen instructions with variations across character, word, sentence, and semantic levels by an average of +2.5% in accuracy. Code is available at https://github.com/luka-group/CoIN.
|
[
"['Tianyi Lorena Yan' 'Fei Wang' 'James Y. Huang' 'Wenxuan Zhou' 'Fan Yin'\n 'Aram Galstyan' 'Wenpeng Yin' 'Muhao Chen']"
] |
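The contrastive objective can be illustrated with a standard InfoNCE-style loss over pooled hidden states of paraphrased instruction pairs; the pooling, in-batch negatives, and temperature `tau` are assumptions, and CoIN's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_instruction_loss(h_anchor, h_paraphrase, tau=0.07):
    """h_anchor, h_paraphrase: (B, D) pooled hidden states; row i of each
    tensor is a semantically equivalent instruction-instance pair."""
    a = F.normalize(h_anchor, dim=-1)
    p = F.normalize(h_paraphrase, dim=-1)
    logits = a @ p.T / tau                    # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)    # positives lie on the diagonal
```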
null | null |
2402.11139
| null | null |
http://arxiv.org/pdf/2402.11139v1
|
2024-02-17T00:10:33Z
|
2024-02-17T00:10:33Z
|
LiGNN: Graph Neural Networks at LinkedIn
|
In this paper, we present LiGNN, a deployed large-scale Graph Neural Networks (GNNs) framework. We share our insights on developing and deploying GNNs at large scale at LinkedIn. We present a set of algorithmic improvements to the quality of GNN representation learning, including temporal graph architectures with long-term losses, effective cold-start solutions via graph densification, ID embeddings, and multi-hop neighbor sampling. We explain how we built our large-scale training on LinkedIn graphs and sped it up by 7x with adaptive sampling of neighbors, grouping and slicing of training data batches, a specialized shared-memory queue, and local gradient optimization. We summarize our deployment lessons and learnings gathered from A/B test experiments. The techniques presented in this work have contributed to approximate relative improvements of 1% in job application hearing-back rate, a 2% Ads CTR lift, 0.5% in Feed engaged daily active users, a 0.2% session lift, and a 0.1% weekly active user lift from people recommendation. We believe that this work can provide practical solutions and insights for engineers who are interested in applying graph neural networks at large scale.
|
[
"['Fedor Borisyuk' 'Shihai He' 'Yunbo Ouyang' 'Morteza Ramezani' 'Peng Du'\n 'Xiaochen Hou' 'Chengming Jiang' 'Nitin Pasumarthy' 'Priya Bannur'\n 'Birjodh Tiwana' 'Ping Liu' 'Siddharth Dangi' 'Daqi Sun' 'Zhoutao Pei'\n 'Xiao Shi' 'Sirou Zhu' 'Qianqi Shen' 'Kuang-Hsuan Lee' 'David Stein'\n 'Baolei Li' 'Haichao Wei' 'Amol Ghoting' 'Souvik Ghosh']"
] |
null | null |
2402.11140
| null | null |
http://arxiv.org/pdf/2402.11140v1
|
2024-02-17T00:13:36Z
|
2024-02-17T00:13:36Z
|
Boosting of Thoughts: Trial-and-Error Problem Solving with Large
Language Models
|
The reasoning performance of Large Language Models (LLMs) on a wide range of problems critically relies on chain-of-thought prompting, which involves providing a few chain-of-thought demonstrations as exemplars in prompts. Recent work, e.g., Tree of Thoughts, has pointed out the importance of exploration and self-evaluation in reasoning step selection for complex problem solving. In this paper, we present Boosting of Thoughts (BoT), an automated prompting framework for problem solving with LLMs by iteratively exploring and self-evaluating many trees of thoughts in order to acquire an ensemble of trial-and-error reasoning experiences, which serve as a new form of prompting to solve the complex problem. Starting from a simple prompt without requiring examples, BoT iteratively explores and evaluates a large collection of reasoning steps and, more importantly, uses error analysis obtained from the LLM on these steps to explicitly revise prompting, which in turn enhances reasoning step generation, until a final answer is attained. Our experiments with GPT-4 and Llama2 across extensive complex mathematical problems demonstrate that BoT consistently achieves problem-solving rates higher than or comparable to those of other advanced prompting approaches.
|
[
"['Sijia Chen' 'Baochun Li' 'Di Niu']"
] |
null | null |
2402.11145
| null | null |
http://arxiv.org/pdf/2402.11145v1
|
2024-02-17T00:27:04Z
|
2024-02-17T00:27:04Z
|
Supporting Experts with a Multimodal Machine-Learning-Based Tool for
Human Behavior Analysis of Conversational Videos
|
Multimodal scene search of conversations is essential for unlocking valuable insights into social dynamics and enhancing our communication. While experts in conversational analysis have their own knowledge and skills to find key scenes, a lack of comprehensive, user-friendly tools that streamline the processing of diverse multimodal queries impedes efficiency and objectivity. To solve this, we developed Providence, a visual-programming-based tool built on design considerations derived from a formative study with experts. It enables experts to combine various machine learning algorithms to capture human behavioral cues without writing code. Our study showed its favorable usability and satisfactory output, with less cognitive load imposed in accomplishing scene search tasks of conversations, verifying the importance of its customizability and transparency. Furthermore, through an in-the-wild trial, we confirmed that the tool's objectivity and reusability transform experts' workflow, suggesting the advantage of expert-AI teaming in a highly human-contextual domain.
|
[
"['Riku Arakawa' 'Kiyosu Maeda' 'Hiromu Yakura']"
] |
null | null |
2402.11148
| null | null |
http://arxiv.org/pdf/2402.11148v2
|
2024-03-07T22:41:33Z
|
2024-02-17T00:28:06Z
|
Knowledge Distillation Based on Transformed Teacher Matching
|
As a technique to bridge logit matching and probability distribution matching, temperature scaling plays a pivotal role in knowledge distillation (KD). Conventionally, temperature scaling is applied to both the teacher's logits and the student's logits in KD. Motivated by some recent works, in this paper we instead drop temperature scaling on the student side and systematically study the resulting variant of KD, dubbed transformed teacher matching (TTM). By reinterpreting temperature scaling as a power transform of the probability distribution, we show that in comparison with the original KD, TTM has an inherent Rényi entropy term in its objective function, which serves as an extra regularization term. Extensive experimental results demonstrate that thanks to this inherent regularization, TTM leads to trained students with better generalization than the original KD. To further enhance the student's capability to match the teacher's power-transformed probability distribution, we introduce a sample-adaptive weighting coefficient into TTM, yielding a novel distillation approach dubbed weighted TTM (WTTM). Comprehensive experiments show that although WTTM is simple, it is effective, improves upon TTM, and achieves state-of-the-art accuracy performance. Our source code is available at https://github.com/zkxufo/TTM.
|
[
"['Kaixiang Zheng' 'En-Hui Yang']"
] |
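Under our reading of the abstract, TTM amounts to temperature-scaling (power-transforming) only the teacher's distribution; a minimal sketch follows, with WTTM's sample-adaptive weights omitted.

```python
import torch
import torch.nn.functional as F

def ttm_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence from the temperature-scaled teacher to the student.
    Unlike conventional KD, the student's logits are left unscaled."""
    teacher = F.softmax(teacher_logits / T, dim=-1)      # transformed teacher
    log_student = F.log_softmax(student_logits, dim=-1)  # no student-side scaling
    return F.kl_div(log_student, teacher, reduction="batchmean")
```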
null | null |
2402.11153
| null | null |
http://arxiv.org/pdf/2402.11153v1
|
2024-02-17T00:40:12Z
|
2024-02-17T00:40:12Z
|
Beyond Generalization: A Survey of Out-Of-Distribution Adaptation on
Graphs
|
Distribution shifts on graphs -- the data distribution discrepancies between training and testing a graph machine learning model, are often ubiquitous and unavoidable in real-world scenarios. Such shifts may severely deteriorate the performance of the model, posing significant challenges for reliable graph machine learning. Consequently, there has been a surge in research on graph Out-Of-Distribution (OOD) adaptation methods that aim to mitigate the distribution shifts and adapt the knowledge from one distribution to another. In our survey, we provide an up-to-date and forward-looking review of graph OOD adaptation methods, covering two main problem scenarios including training-time as well as test-time graph OOD adaptation. We start by formally formulating the two problems and then discuss different types of distribution shifts on graphs. Based on our proposed taxonomy for graph OOD adaptation, we systematically categorize the existing methods according to their learning paradigm and investigate the techniques behind them. Finally, we point out promising research directions and the corresponding challenges. We also provide a continuously updated reading list at https://github.com/kaize0409/Awesome-Graph-OOD-Adaptation.git
|
[
"['Shuhan Liu' 'Kaize Ding']"
] |
null | null |
2402.11156
| null | null |
http://arxiv.org/pdf/2402.11156v2
|
2024-06-08T14:56:22Z
|
2024-02-17T00:51:29Z
|
Efficient Low-Rank Matrix Estimation, Experimental Design, and
Arm-Set-Dependent Low-Rank Bandits
|
We study low-rank matrix trace regression and the related problem of low-rank matrix bandits. Assuming access to the distribution of the covariates, we propose a novel low-rank matrix estimation method called LowPopArt and provide its recovery guarantee that depends on a novel quantity denoted by B(Q) that characterizes the hardness of the problem, where Q is the covariance matrix of the measurement distribution. We show that our method can provide tighter recovery guarantees than classical nuclear norm penalized least squares (Koltchinskii et al., 2011) in several problems. To perform efficient estimation with a limited number of measurements from an arbitrarily given measurement set A, we also propose a novel experimental design criterion that minimizes B(Q) with computational efficiency. We leverage our novel estimator and design of experiments to derive two low-rank linear bandit algorithms for general arm sets that enjoy improved regret upper bounds. This improves over previous works on low-rank bandits, which make somewhat restrictive assumptions that the arm set is the unit ball or that an efficient exploration distribution is given. To our knowledge, our experimental design criterion is the first one tailored to low-rank matrix estimation beyond the naive reduction to linear regression, which can be of independent interest.
|
[
"['Kyoungseok Jang' 'Chicheng Zhang' 'Kwang-Sung Jun']"
] |
null | null |
2402.11168
| null | null |
http://arxiv.org/pdf/2402.11168v3
|
2024-06-05T16:36:21Z
|
2024-02-17T02:26:14Z
|
Trust Regions for Explanations via Black-Box Probabilistic Certification
|
Given the black box nature of machine learning models, a plethora of explainability methods have been developed to decipher the factors behind individual decisions. In this paper, we introduce a novel problem of black box (probabilistic) explanation certification. We ask the question: Given a black box model with only query access, an explanation for an example and a quality metric (viz. fidelity, stability), can we find the largest hypercube (i.e., $\ell_{\infty}$ ball) centered at the example such that when the explanation is applied to all examples within the hypercube, (with high probability) a quality criterion is met (viz. fidelity greater than some value)? Being able to efficiently find such a \emph{trust region} has multiple benefits: i) insight into model behavior in a \emph{region}, with a \emph{guarantee}; ii) ascertained \emph{stability} of the explanation; iii) \emph{explanation reuse}, which can save time, energy and money by not having to find explanations for every example; and iv) a possible \emph{meta-metric} to compare explanation methods. Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data.
|
[
"['Amit Dhurandhar' 'Swagatam Haldar' 'Dennis Wei'\n 'Karthikeyan Natesan Ramamurthy']"
] |
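One simple (and purely illustrative) way to search for such a trust region is a binary search over the hypercube half-width with a Monte Carlo fidelity estimate; the authors' certification procedure provides probabilistic guarantees that this naive sketch does not, and all names below are assumptions.

```python
import numpy as np

def largest_trust_region(black_box, explanation, x0, threshold=0.9,
                         n_samples=1000, lo=0.0, hi=1.0, iters=20, seed=0):
    """Binary-search the largest l_inf half-width around x0 for which the
    empirical fidelity of `explanation` to `black_box` stays above
    `threshold`. Both arguments are callables returning labels."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        mid = (lo + hi) / 2
        xs = x0 + rng.uniform(-mid, mid, size=(n_samples,) + x0.shape)
        fidelity = np.mean([black_box(x) == explanation(x) for x in xs])
        lo, hi = (mid, hi) if fidelity >= threshold else (lo, mid)
    return lo
```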
null | null |
2402.11173
| null | null |
http://arxiv.org/pdf/2402.11173v1
|
2024-02-17T02:42:56Z
|
2024-02-17T02:42:56Z
|
How to Make the Gradients Small Privately: Improved Rates for
Differentially Private Non-Convex Optimization
|
We provide a simple and flexible framework for designing differentially private algorithms to find approximate stationary points of non-convex loss functions. Our framework is based on using a private approximate risk minimizer to "warm start" another private algorithm for finding stationary points. We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions. First, we obtain improved rates for finding stationary points of smooth non-convex empirical loss functions. Second, we specialize to quasar-convex functions, which generalize star-convex functions and arise in learning dynamical systems and training some neural nets. We achieve the optimal rate for this class. Third, we give an optimal algorithm for finding stationary points of functions satisfying the Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural networks often satisfy this condition. Fourth, we provide new state-of-the-art rates for stationary points of non-convex population loss functions. Fifth, we obtain improved rates for non-convex generalized linear models. A modification of our algorithm achieves nearly the same rates for second-order stationary points of functions with Lipschitz Hessian, improving over the previous state-of-the-art for each of the above problems.
|
[
"['Andrew Lowy' 'Jonathan Ullman' 'Stephen J. Wright']"
] |
null | null |
2402.11179
| null | null |
http://arxiv.org/pdf/2402.11179v1
|
2024-02-17T03:19:23Z
|
2024-02-17T03:19:23Z
|
Uncertainty Quantification of Graph Convolution Neural Network Models of
Evolving Processes
|
The application of neural network models to scientific machine learning tasks has proliferated in recent years. In particular, neural network models have proved to be adept at modeling processes with spatial-temporal complexity. Nevertheless, these highly parameterized models have garnered skepticism in their ability to produce outputs with quantified error bounds over the regimes of interest. Hence there is a need to find uncertainty quantification methods that are suitable for neural networks. In this work we present comparisons of the parametric uncertainty quantification of neural networks modeling complex spatial-temporal processes with Hamiltonian Monte Carlo and Stein variational gradient descent and its projected variant. Specifically we apply these methods to graph convolutional neural network models of evolving systems modeled with recurrent neural network and neural ordinary differential equation architectures. We show that Stein variational inference is a viable alternative to Monte Carlo methods, with some clear advantages for complex neural network models. For our exemplars, Stein variational inference gave similar uncertainty profiles through time compared to Hamiltonian Monte Carlo, albeit with generally more generous variance. Projected Stein variational gradient descent also produced similar uncertainty profiles to the non-projected counterpart, but large reductions in the active weight space were confounded by the stability of the neural network predictions and the convoluted likelihood landscape.
|
[
"['Jeremiah Hauth' 'Cosmin Safta' 'Xun Huan' 'Ravi G. Patel'\n 'Reese E. Jones']"
] |
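For reference, a single Stein variational gradient descent update with an RBF kernel looks roughly as follows; `grad_log_post` is an assumed callback returning the score at each particle, and the projected variant is not shown.

```python
import numpy as np

def svgd_step(particles, grad_log_post, step=1e-3, h=1.0):
    """One SVGD update. particles: (n, d) array of weight samples;
    grad_log_post(particles) returns an (n, d) array of scores."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]  # x_i - x_j, (n, n, d)
    K = np.exp(-(diffs ** 2).sum(-1) / (2 * h))            # RBF kernel matrix
    attraction = K @ grad_log_post(particles)              # kernel-weighted scores
    repulsion = (K[..., None] * diffs).sum(axis=1) / h     # keeps particles spread
    return particles + step * (attraction + repulsion) / n
```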
null | null |
2402.11185
| null | null |
http://arxiv.org/pdf/2402.11185v1
|
2024-02-17T04:05:01Z
|
2024-02-17T04:05:01Z
|
Minimally Supervised Topological Projections of Self-Organizing Maps for
Phase of Flight Identification
|
Identifying phases of flight is important in the field of general aviation, as knowing which phase of flight data is collected from aircraft flight data recorders can aid in the more effective detection of safety or hazardous events. General aviation flight data for phase of flight identification is usually per-second data, comes on a large scale, and is class imbalanced. It is expensive to manually label the data, and training classification models usually faces class imbalance problems. This work investigates the use of a novel method for minimally supervised self-organizing maps (MS-SOMs), which utilize nearest-neighbor majority votes in the SOM U-matrix for class estimation. Results show that the proposed method can, with only 30 labeled datapoints per class, reach or exceed a naive SOM approach that utilized a fully labeled data file. Additionally, the minimally supervised SOM is significantly more robust to the class imbalance of the phase of flight data. These results highlight how little data is required for effective phase of flight identification.
|
[
"['Zimeng Lyu' 'Pujan Thapa' 'Travis Desell']"
] |
null | null |
2402.11196
| null | null |
http://arxiv.org/pdf/2402.11196v1
|
2024-02-17T05:14:47Z
|
2024-02-17T05:14:47Z
|
Maintaining Adversarial Robustness in Continuous Learning
|
Adversarial robustness is essential for the security and reliability of machine learning systems. However, the adversarial robustness gained by sophisticated defense algorithms is easily erased as the neural network evolves to learn new tasks. This vulnerability can be addressed by fostering a novel capability for neural networks, termed continual robust learning, which focuses on both the (classification) performance and adversarial robustness on previous tasks during continual learning. To achieve continual robust learning, we propose an approach called Double Gradient Projection that projects the gradients for weight updates orthogonally onto two crucial subspaces -- one for stabilizing the smoothed sample gradients and another for stabilizing the final outputs of the neural network. The experimental results on four benchmarks demonstrate that the proposed approach effectively maintains continual robustness against strong adversarial attacks, outperforming the baselines formed by combining existing defense strategies and continual learning methods.
|
[
"['Xiaolei Ru' 'Xiaowei Cao' 'Zijia Liu' 'Jack Murdoch Moore'\n 'Xin-Ya Zhang' 'Xia Zhu' 'Wenjia Wei' 'Gang Yan']"
] |
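The core operation, projecting a gradient onto the orthogonal complement of a protected subspace, can be sketched in a few lines; the two specific subspaces used by Double Gradient Projection are not modeled here.

```python
import torch

def project_orthogonal(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """grad: (d,) flattened gradient; basis: (k, d) orthonormal rows spanning
    the subspace to protect. Removing the component of the gradient inside
    the span leaves previously learned behavior (in that subspace) intact."""
    return grad - basis.T @ (basis @ grad)
```

In practice one would apply this to each new task's gradients, with the basis accumulated from earlier tasks.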
null | null |
2402.11198
| null | null |
http://arxiv.org/pdf/2402.11198v1
|
2024-02-17T05:22:46Z
|
2024-02-17T05:22:46Z
|
Achieving Linear Speedup in Asynchronous Federated Learning with
Heterogeneous Clients
|
Federated learning (FL) is an emerging distributed training paradigm that aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients. The Federated Averaging (FedAvg)-based algorithms have gained substantial popularity in FL to reduce the communication overhead, where each client conducts multiple localized iterations before communicating with a central server. In this paper, we focus on FL where the clients have diverse computation and/or communication capabilities. Under this circumstance, FedAvg can be less efficient since it requires all clients that participate in the global aggregation in a round to initiate iterations from the latest global model, and thus the synchronization among fast clients and straggler clients can severely slow down the overall training process. To address this issue, we propose an efficient asynchronous federated learning (AFL) framework called Delayed Federated Averaging (DeFedAvg). In DeFedAvg, the clients are allowed to perform local training with different stale global models at their own paces. Theoretical analyses demonstrate that DeFedAvg achieves asymptotic convergence rates that are on par with the results of FedAvg for solving nonconvex problems. More importantly, DeFedAvg is the first AFL algorithm that provably achieves the desirable linear speedup property, which indicates its high scalability. Additionally, we carry out extensive numerical experiments using real datasets to validate the efficiency and scalability of our approach when training deep neural networks.
|
[
"['Xiaolu Wang' 'Zijian Li' 'Shi Jin' 'Jun Zhang']"
] |
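A simplified sketch of the asynchronous server side, assuming a PyTorch model: clients pull possibly stale snapshots and the server folds their updates in on arrival. DeFedAvg's exact update weighting and delay handling are not reproduced here.

```python
import copy

class AsyncServer:
    """Minimal asynchronous FL server: clients train from stale copies of
    the global model at their own pace; updates are applied on arrival."""

    def __init__(self, model, lr=1.0):
        self.model, self.lr, self.version = model, lr, 0

    def snapshot(self):
        # A client pulls a (possibly soon-to-be-stale) copy and its version.
        return copy.deepcopy(self.model), self.version

    def apply_update(self, delta):
        # Fold a client's parameter delta into the global model on arrival.
        for p, d in zip(self.model.parameters(), delta):
            p.data.add_(self.lr * d)
        self.version += 1
```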
null | null |
2402.11203
| null | null |
http://arxiv.org/abs/2402.11203v1
|
2024-02-17T05:44:40Z
|
2024-02-17T05:44:40Z
|
Exploring ChatGPT for Next-generation Information Retrieval:
Opportunities and Challenges
|
The rapid advancement of artificial intelligence (AI) has highlighted ChatGPT as a pivotal technology in the field of information retrieval (IR). Distinguished from its predecessors, ChatGPT offers significant benefits that have attracted the attention of both the industry and academic communities. While some view ChatGPT as a groundbreaking innovation, others attribute its success to the effective integration of product development and market strategies. The emergence of ChatGPT, alongside GPT-4, marks a new phase in Generative AI, generating content that is distinct from training examples and exceeding the capabilities of the prior GPT-3 model by OpenAI. Unlike the traditional supervised learning approach in IR tasks, ChatGPT challenges existing paradigms, bringing forth new challenges and opportunities regarding text quality assurance, model bias, and efficiency. This paper seeks to examine the impact of ChatGPT on IR tasks and offer insights into its potential future developments.
|
[
"['Yizheng Huang' 'Jimmy Huang']"
] |
null | null |
2402.11215
| null | null |
http://arxiv.org/pdf/2402.11215v3
|
2024-05-28T05:40:38Z
|
2024-02-17T07:49:50Z
|
AdAdaGrad: Adaptive Batch Size Schemes for Adaptive Gradient Methods
|
The choice of batch sizes in minibatch stochastic gradient optimizers is critical in large-scale model training for both optimization and generalization performance. Although large-batch training is arguably the dominant training paradigm for large-scale deep learning due to hardware advances, the generalization performance of the model deteriorates compared to small-batch training, leading to the so-called "generalization gap" phenomenon. To mitigate this, we investigate adaptive batch size strategies derived from adaptive sampling methods, originally developed only for stochastic gradient descent. Given the significant interplay between learning rates and batch sizes, and considering the prevalence of adaptive gradient methods in deep learning, we emphasize the need for adaptive batch size strategies in these contexts. We introduce AdAdaGrad and its scalar variant AdAdaGradNorm, which progressively increase batch sizes during training, while model updates are performed using AdaGrad and AdaGradNorm. We prove that AdAdaGradNorm converges with high probability at a rate of $\mathscr{O}(1/K)$ to find a first-order stationary point of smooth nonconvex functions within $K$ iterations. AdAdaGrad also demonstrates similar convergence properties when integrated with a novel coordinate-wise variant of our adaptive batch size strategies. We corroborate our theoretical claims by performing image classification experiments, highlighting the merits of the proposed schemes in terms of both training efficiency and model generalization. Our work unveils the potential of adaptive batch size strategies for adaptive gradient optimizers in large-scale model training.
|
[
"['Tim Tsz-Kit Lau' 'Han Liu' 'Mladen Kolar']"
] |
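A hedged sketch of the two ingredients, an AdaGradNorm step and a norm-test-style batch-size increase; the paper's precise adaptive sampling criterion and constants are assumptions here.

```python
import numpy as np

def adagradnorm_step(x, grad, state, eta=0.1):
    """AdaGradNorm: scale the step by the running sum of squared gradient
    norms. `state` is a dict whose 'accum' entry starts at 0.0."""
    state["accum"] += np.dot(grad, grad)
    return x - eta / np.sqrt(state["accum"]) * grad

def maybe_grow_batch(sample_grads, batch_size, theta=1.0):
    """Norm test: grow the batch when the variance of per-example gradients
    is large relative to the squared mean gradient (geometric growth is an
    illustrative choice)."""
    g_bar = sample_grads.mean(axis=0)
    var = ((sample_grads - g_bar) ** 2).sum(axis=1).mean()
    if var / max(batch_size, 1) > theta * np.dot(g_bar, g_bar):
        return 2 * batch_size
    return batch_size
```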
null | null |
2402.11223
| null | null |
http://arxiv.org/pdf/2402.11223v1
|
2024-02-17T08:41:37Z
|
2024-02-17T08:41:37Z
|
HEAL: Brain-inspired Hyperdimensional Efficient Active Learning
|
Drawing inspiration from the outstanding learning capability of our human brains, Hyperdimensional Computing (HDC) emerges as a novel computing paradigm that leverages high-dimensional vector representations and operations for brain-like lightweight Machine Learning (ML). Practical deployments of HDC have significantly enhanced learning efficiency compared to current deep ML methods on a broad spectrum of applications. However, boosting the data efficiency of HDC classifiers in supervised learning remains an open question. In this paper, we introduce Hyperdimensional Efficient Active Learning (HEAL), a novel Active Learning (AL) framework tailored for HDC classification. HEAL proactively annotates unlabeled data points via uncertainty- and diversity-guided acquisition, leading to more efficient dataset annotation and lower labor costs. Unlike conventional AL methods that only support classifiers built upon deep neural networks (DNNs), HEAL operates without the need for gradient or probabilistic computations. This allows it to be effortlessly integrated with any existing HDC classifier architecture. The key design of HEAL is a novel approach for uncertainty estimation in HDC classifiers through a lightweight HDC ensemble with prior hypervectors. Additionally, by exploiting hypervectors as prototypes (i.e., compact representations), we develop an extra metric for HEAL to select diverse samples within each batch for annotation. Our evaluation shows that HEAL surpasses a diverse set of baselines in AL quality and achieves notably faster acquisition than many BNN-powered or diversity-guided AL methods, recording an 11x to 40,000x speedup in acquisition runtime per batch.
|
[
"['Yang Ni' 'Zhuowen Zou' 'Wenjun Huang' 'Hanning Chen'\n 'William Youngwoo Chung' 'Samuel Cho' 'Ranganath Krishnan'\n 'Pietro Mercati' 'Mohsen Imani']"
] |
null | null |
2402.11224
| null | null |
http://arxiv.org/pdf/2402.11224v2
|
2024-06-07T10:34:32Z
|
2024-02-17T08:54:25Z
|
Neural Networks with (Low-Precision) Polynomial Approximations: New
Insights and Techniques for Accuracy Improvement
|
Replacing non-polynomial functions (e.g., non-linear activation functions such as ReLU) in a neural network with their polynomial approximations is a standard practice in privacy-preserving machine learning. The resulting neural network, called a polynomial approximation of neural network (PANN) in this paper, is compatible with advanced cryptosystems to enable privacy-preserving model inference. Using "highly precise" approximation, state-of-the-art PANNs offer inference accuracy similar to that of the underlying backbone model. However, little is known about the effect of approximation, and existing literature often determines the required approximation precision empirically. In this paper, we initiate the investigation of PANN as a standalone object. Specifically, our contribution is two-fold. Firstly, we provide an explanation of the effect of approximation error in PANN. In particular, we discovered that (1) PANN is susceptible to some types of perturbations; and (2) weight regularisation significantly reduces PANN's accuracy. We support our explanation with experiments. Secondly, based on the insights from our investigations, we propose solutions to increase inference accuracy for PANN. Experiments showed that the combination of our solutions is very effective: at the same precision, our PANN is 10% to 50% more accurate than the state of the art; and at the same accuracy, our PANN only requires a precision of $2^{-9}$ while the state-of-the-art solution requires a precision of $2^{-12}$, using the ResNet-20 model on the CIFAR-10 dataset.
|
[
"['Chi Zhang' 'Jingjing Fan' 'Man Ho Au' 'Siu Ming Yiu']"
] |
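The basic PANN substitution can be illustrated by fitting a low-degree polynomial to ReLU on an assumed input range; the degree and range below are illustrative and unrelated to the paper's precision settings.

```python
import numpy as np

# Least-squares fit of a degree-4 polynomial to ReLU on [-5, 5].
xs = np.linspace(-5, 5, 2001)
coeffs = np.polynomial.polynomial.polyfit(xs, np.maximum(xs, 0.0), deg=4)

def poly_relu(x):
    """Polynomial stand-in for ReLU, usable under cryptosystems that only
    support additions and multiplications."""
    return np.polynomial.polynomial.polyval(x, coeffs)

print(poly_relu(np.array([-2.0, 0.0, 2.0])))  # approximate ReLU values
```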
null | null |
2402.11227
| null | null |
http://arxiv.org/pdf/2402.11227v1
|
2024-02-17T09:10:05Z
|
2024-02-17T09:10:05Z
|
On the Role of Similarity in Detecting Masquerading Files
|
Similarity has been applied to a wide range of security applications, typically used in machine learning models. We examine the problem posed by masquerading samples; that is, samples crafted by bad actors to be similar or near-identical to legitimate samples. We find that these samples potentially create significant problems for machine learning solutions. The primary problem is that bad actors can circumvent machine learning solutions by using masquerading samples. We then examine the interplay between digital signatures and machine learning solutions. In particular, we focus on executable files and code signing. We offer a taxonomy for masquerading files. We use a combination of similarity and clustering to find masquerading files. We use the insights gathered in this process to offer improvements to similarity-based and machine learning security solutions.
|
[
"['Jonathan Oliver' 'Jue Mo' 'Susmit Yenkar' 'Raghav Batta'\n 'Sekhar Josyoula']"
] |
null | null |
2402.11228
| null | null |
http://arxiv.org/pdf/2402.11228v1
|
2024-02-17T09:10:40Z
|
2024-02-17T09:10:40Z
|
Adaptive Split Balancing for Optimal Random Forest
|
While random forests are commonly used for regression problems, existing methods often lack adaptability in complex situations or lose optimality under simple, smooth scenarios. In this study, we introduce the adaptive split balancing forest (ASBF), capable of learning tree representations from data while simultaneously achieving minimax optimality under the Lipschitz class. To exploit higher-order smoothness levels, we further propose a localized version that attains the minimax rate under the Hölder class $\mathcal{H}^{q,\beta}$ for any $q\in\mathbb{N}$ and $\beta\in(0,1]$. Rather than relying on the widely-used random feature selection, we consider a balanced modification to existing approaches. Our results indicate that an over-reliance on auxiliary randomness may compromise the approximation power of tree models, leading to suboptimal results. Conversely, a less random, more balanced approach demonstrates optimality. Additionally, we establish uniform upper bounds and explore the application of random forests in average treatment effect estimation problems. Through simulation studies and real-data applications, we demonstrate the superior empirical performance of the proposed methods over existing random forests.
|
[
"['Yuqian Zhang' 'Weijie Ji' 'Jelena Bradic']"
] |
null | null |
2402.11235
| null | null |
http://arxiv.org/pdf/2402.11235v2
|
2024-06-24T03:34:02Z
|
2024-02-17T09:52:43Z
|
ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs
|
With the development of foundation models such as large language models, zero-shot transfer learning has become increasingly significant. This is highlighted by the generative capabilities of NLP models like GPT-4, and the retrieval-based approaches of CV models like CLIP, both of which effectively bridge the gap between seen and unseen data. In the realm of graph learning, the continuous emergence of new graphs and the challenges of human labeling also amplify the necessity for zero-shot transfer learning, driving the exploration of approaches that can generalize across diverse graph data without necessitating dataset-specific and label-specific fine-tuning. In this study, we extend such paradigms to zero-shot transferability in graphs by introducing ZeroG, a new framework tailored to enable cross-dataset generalization. Addressing the inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer, we leverage a language model to encode both node attributes and class semantics, ensuring consistent feature dimensions across datasets. We also propose a prompt-based subgraph sampling module that enriches the semantic information and structure information of extracted subgraphs using prompting nodes and neighborhood aggregation, respectively. We further adopt a lightweight fine-tuning strategy that reduces the risk of overfitting and maintains the zero-shot learning efficacy of the language model. The results underscore the effectiveness of our model in achieving significant cross-dataset zero-shot transferability, opening pathways for the development of graph foundation models. Codes and data are available at https://github.com/NineAbyss/ZeroG.
|
[
"['Yuhan Li' 'Peisong Wang' 'Zhixun Li' 'Jeffrey Xu Yu' 'Jia Li']"
] |
null | null |
2402.11237
| null | null |
http://arxiv.org/pdf/2402.11237v1
|
2024-02-17T10:02:22Z
|
2024-02-17T10:02:22Z
|
Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in
Deep Learning
|
Deep neural networks (DNNs) are vulnerable to shortcut learning: rather than learning the intended task, they tend to draw inconclusive relationships between their inputs and outputs. Shortcut learning is ubiquitous among many failure cases of neural networks, and traces of this phenomenon can be seen in their generalizability issues, domain shift, adversarial vulnerability, and even bias towards majority groups. In this paper, we argue that this commonality in the cause of various DNN issues creates a significant opportunity that should be leveraged to find a unified solution for shortcut learning. To this end, we outline the recent advances in topological data analysis (TDA), and persistent homology (PH) in particular, to sketch a unified roadmap for detecting shortcuts in deep learning. We demonstrate our arguments by investigating the topological features of computational graphs in DNNs using two cases of unlearnable examples and bias in decision-making as our test studies. Our analysis of these two failure cases of DNNs reveals that finding a unified solution for shortcut learning in DNNs is not out of reach, and TDA can play a significant role in forming such a framework.
|
[
"['Hadi M. Dolatabadi' 'Sarah M. Erfani' 'Christopher Leckie']"
] |
null | null |
2402.11242
| null | null |
http://arxiv.org/pdf/2402.11242v1
|
2024-02-17T10:34:53Z
|
2024-02-17T10:34:53Z
|
Learning with Imbalanced Noisy Data by Preventing Bias in Sample
Selection
|
Learning with noisy labels has gained increasing attention because the inevitable imperfect labels in real-world scenarios can substantially hurt deep model performance. Recent studies tend to regard low-loss samples as clean ones and discard high-loss ones to alleviate the negative impact of noisy labels. However, real-world datasets contain not only noisy labels but also class imbalance. The imbalance issue is prone to causing failure in loss-based sample selection, since the under-learning of tail classes also tends to produce high losses. To this end, we propose a simple yet effective method to address noisy labels in imbalanced datasets. Specifically, we propose Class-Balance-based sample Selection (CBS) to prevent the tail class samples from being neglected during training. We propose Confidence-based Sample Augmentation (CSA) for the chosen clean samples to enhance their reliability in the training process. To exploit selected noisy samples, we resort to prediction history to rectify labels of noisy samples. Moreover, we introduce the Average Confidence Margin (ACM) metric to measure the quality of corrected labels by leveraging the model's evolving training dynamics, thereby ensuring that low-quality corrected noisy samples are appropriately masked out. Lastly, consistency regularization is imposed on filtered label-corrected noisy samples to boost model performance. Comprehensive experimental results on synthetic and real-world datasets demonstrate the effectiveness and superiority of our proposed method, especially in imbalanced scenarios.
|
[
"['Huafeng Liu' 'Mengmeng Sheng' 'Zeren Sun' 'Yazhou Yao' 'Xian-Sheng Hua'\n 'Heng-Tao Shen']"
] |
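Class-balance-based selection can be sketched as a per-class low-loss filter; `keep_ratio` is an assumed hyperparameter, and the confidence-based augmentation and label-correction stages are omitted.

```python
import numpy as np

def class_balanced_select(losses, noisy_labels, keep_ratio=0.5):
    """Select the lowest-loss fraction of samples within each (noisy) class
    rather than globally, so tail classes are not starved of selections."""
    selected = []
    for c in np.unique(noisy_labels):
        idx = np.where(noisy_labels == c)[0]
        k = max(1, int(keep_ratio * len(idx)))
        selected.extend(idx[np.argsort(losses[idx])[:k]])  # low loss = likely clean
    return np.array(sorted(selected))
```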
null | null |
2402.11253
| null | null |
http://arxiv.org/pdf/2402.11253v3
|
2024-06-25T13:39:52Z
|
2024-02-17T11:25:26Z
|
Aligning Large Language Models by On-Policy Self-Judgment
|
Existing approaches for aligning large language models with human preferences face a trade-off that requires a separate reward model (RM) for on-policy learning. In this paper, we present a novel alignment framework, SELF-JUDGE, that (1) does on-policy learning and (2) is parameter efficient, as it does not require an additional RM for evaluating the samples for on-policy learning. To this end, we propose Judge-augmented Supervised Fine-Tuning (JSFT) to train a single model to act as both a policy and a judge. Specifically, we view the pairwise judgment task, choosing the better response from a response pair, as a special case of the instruction-following task. The resulting model can judge preferences of on-the-fly responses from the current policy, initialized from itself. Experimental results show the efficacy of SELF-JUDGE, outperforming baselines on preference benchmarks. We also show that rejection sampling by the model itself can further improve performance without an additional evaluator.
|
[
"['Sangkyu Lee' 'Sungdong Kim' 'Ashkan Yousefpour' 'Minjoon Seo'\n 'Kang Min Yoo' 'Youngjae Yu']"
] |
null | null |
2402.11262
| null | null |
http://arxiv.org/pdf/2402.11262v1
|
2024-02-17T12:27:30Z
|
2024-02-17T12:27:30Z
|
Mirror Gradient: Towards Robust Multimodal Recommender Systems via
Exploring Flat Local Minima
|
Multimodal recommender systems utilize various types of information to model user preferences and item features, helping users discover items aligned with their interests. The integration of multimodal information mitigates the inherent challenges in recommender systems, e.g., the data sparsity problem and cold-start issues. However, it simultaneously magnifies certain risks from multimodal information inputs, such as information adjustment risk and inherent noise risk. These risks pose crucial challenges to the robustness of recommendation models. In this paper, we analyze multimodal recommender systems from the novel perspective of flat local minima and propose a concise yet effective gradient strategy called Mirror Gradient (MG). This strategy can implicitly enhance the model's robustness during the optimization process, mitigating instability risks arising from multimodal information inputs. We also provide strong theoretical evidence and conduct extensive empirical experiments to show the superiority of MG across various multimodal recommendation models and benchmarks. Furthermore, we find that the proposed MG can complement existing robust training methods and be easily extended to diverse advanced recommendation models, making it a promising new and fundamental paradigm for training multimodal recommender systems. The code is released at https://github.com/Qrange-group/Mirror-Gradient.
|
[
"['Shanshan Zhong' 'Zhongzhan Huang' 'Daifeng Li' 'Wushao Wen'\n 'Jinghui Qin' 'Liang Lin']"
] |
null | null |
2402.11274
| null | null |
http://arxiv.org/pdf/2402.11274v1
|
2024-02-17T13:09:00Z
|
2024-02-17T13:09:00Z
|
TC-DiffRecon: Texture coordination MRI reconstruction method based on
diffusion model and modified MF-UNet method
|
Recently, diffusion models have gained significant attention as a novel set of deep learning-based generative methods. These models attempt to transform samples from a Gaussian distribution into data that adheres to a target distribution, and have been successfully adapted to the reconstruction of MRI data. However, as an unconditional generative model, the diffusion model typically disrupts image coordination because of the consistent projection of data introduced by conditional bootstrapping. This often results in image fragmentation and incoherence. Furthermore, the inherent limitations of the diffusion model often lead to excessive smoothing of the generated images. In the same vein, some deep learning-based models often suffer from poor generalization performance, meaning their effectiveness is greatly affected by different acceleration factors. To address these challenges, we propose a novel diffusion model-based MRI reconstruction method, named TC-DiffRecon, which does not rely on a specific acceleration factor for training. We also suggest the incorporation of the MF-UNet module, designed to enhance the quality of MRI images generated by the model while mitigating the over-smoothing issue to a certain extent. During the image generation sampling process, we employ a novel TCKG module and a Coarse-to-Fine sampling scheme. These additions aim to harmonize image texture, expedite the sampling process, and achieve data consistency. Our source code is available at https://github.com/JustlfC03/TC-DiffRecon.
|
[
"['Chenyan Zhang' 'Yifei Chen' 'Zhenxiong Fan' 'Yiyu Huang' 'Wenchao Weng'\n 'Ruiquan Ge' 'Dong Zeng' 'Changmiao Wang']"
] |
null | null |
2402.11285
| null | null |
http://arxiv.org/pdf/2402.11285v1
|
2024-02-17T13:57:20Z
|
2024-02-17T13:57:20Z
|
Fair Resource Allocation in Virtualized O-RAN Platforms
|
O-RAN systems and their deployment in virtualized general-purpose computing platforms (O-Cloud) constitute a paradigm shift expected to bring unprecedented performance gains. However, these architectures raise new implementation challenges and threaten to worsen the already-high energy consumption of mobile networks. This paper presents first a series of experiments which assess the O-Cloud's energy costs and their dependency on the servers' hardware, capacity and data traffic properties which, typically, change over time. Next, it proposes a compute policy for assigning the base station data loads to O-Cloud servers in an energy-efficient fashion; and a radio policy that determines at near-real-time the minimum transmission block size for each user so as to avoid unnecessary energy costs. The policies balance energy savings with performance, and ensure that both of them are dispersed fairly across the servers and users, respectively. To cater for the unknown and time-varying parameters affecting the policies, we develop a novel online learning framework with fairness guarantees that apply to the entire operation horizon of the system (long-term fairness). The policies are evaluated using trace-driven simulations and are fully implemented in an O-RAN compatible system where we measure the energy costs and throughput in realistic scenarios.
|
[
"['Fatih Aslan' 'George Iosifidis' 'Jose A. Ayala-Romero'\n 'Andres Garcia-Saavedra' 'Xavier Costa-Perez']"
] |
null | null |
2402.11317
| null | null |
http://arxiv.org/pdf/2402.11317v1
|
2024-02-17T16:03:35Z
|
2024-02-17T16:03:35Z
|
Debiased Offline Representation Learning for Fast Online Adaptation in
Non-stationary Dynamics
|
Developing policies that can adjust to non-stationary environments is essential for real-world reinforcement learning applications. However, learning such adaptable policies in offline settings, with only a limited set of pre-collected trajectories, presents significant challenges. A key difficulty arises because the limited offline data makes it hard for the context encoder to differentiate between changes in the environment dynamics and shifts in the behavior policy, often leading to context misassociations. To address this issue, we introduce a novel approach called Debiased Offline Representation for fast online Adaptation (DORA). DORA incorporates an information bottleneck principle that maximizes mutual information between the dynamics encoding and the environmental data, while minimizing mutual information between the dynamics encoding and the actions of the behavior policy. We present a practical implementation of DORA, leveraging tractable bounds of the information bottleneck principle. Our experimental evaluation across six benchmark MuJoCo tasks with variable parameters demonstrates that DORA not only achieves a more precise dynamics encoding but also significantly outperforms existing baselines in terms of performance.
|
[
"['Xinyu Zhang' 'Wenjie Qiu' 'Yi-Chen Li' 'Lei Yuan' 'Chengxing Jia'\n 'Zongzhang Zhang' 'Yang Yu']"
] |
null | null |
2402.11318
| null | null |
http://arxiv.org/pdf/2402.11318v1
|
2024-02-17T16:16:24Z
|
2024-02-17T16:16:24Z
|
BiasBuster: a Neural Approach for Accurate Estimation of Population
Statistics using Biased Location Data
|
While extremely useful (e.g., for COVID-19 forecasting and policy-making, urban mobility analysis and marketing, and obtaining business insights), location data collected from mobile devices often contain data from a biased population subset, with some communities over or underrepresented in the collected datasets. As a result, aggregate statistics calculated from such datasets (as is done by various companies including Safegraph, Google, and Facebook), while ignoring the bias, lead to an inaccurate representation of population statistics. Such statistics will not only be generally inaccurate, but the error will disproportionately impact different population subgroups (e.g., because they ignore the underrepresented communities). This has dire consequences, as these datasets are used for sensitive decision-making such as COVID-19 policymaking. This paper tackles the problem of providing accurate population statistics using such biased datasets. We show that statistical debiasing, although in some cases useful, often fails to improve accuracy. We then propose BiasBuster, a neural network approach that utilizes the correlations between population statistics and location characteristics to provide accurate estimates of population statistics. Extensive experiments on real-world data show that BiasBuster improves accuracy by up to 2 times in general and up to 3 times for underrepresented populations.
|
[
"['Sepanta Zeighami' 'Cyrus Shahabi']"
] |
null | null |
2402.11322
| null | null |
http://arxiv.org/pdf/2402.11322v3
|
2024-04-05T11:51:58Z
|
2024-02-17T16:33:54Z
|
SpikeNAS: A Fast Memory-Aware Neural Architecture Search Framework for
Spiking Neural Network-based Autonomous Agents
|
Autonomous mobile agents (e.g., UAVs and UGVs) are typically expected to incur low power/energy consumption for solving machine learning tasks (such as object recognition), as these mobile agents are usually powered by portable batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs), since their bio-inspired spike-based operations offer high accuracy and ultra low-power/energy computation. Currently, most of the SNN architectures are derived from Artificial Neural Networks whose neurons' architectures and operations are different from SNNs, or developed without considering memory budgets from the underlying processing hardware of autonomous mobile agents. These limitations hinder SNNs from reaching their full potential in accuracy and efficiency. Toward this, we propose SpikeNAS, a novel fast memory-aware neural architecture search (NAS) framework for SNNs that quickly finds an appropriate SNN architecture with high accuracy under the given memory budgets from autonomous mobile agents. To do this, our SpikeNAS employs several key steps: analyzing the impacts of network operations on the accuracy, enhancing the network architecture to improve the learning quality, and developing a fast memory-aware search algorithm. The experimental results show that our SpikeNAS improves search time and maintains high accuracy compared to the state of the art, while meeting the given memory budgets (e.g., a 4.4x faster search with 1.3% accuracy improvement on CIFAR100, using an Nvidia RTX 6000 Ada GPU machine), thereby quickly providing the appropriate SNN architecture for memory-constrained autonomous mobile agents.
|
[
"['Rachmad Vidya Wicaksana Putra' 'Muhammad Shafique']"
] |
null | null |
2402.11338
| null | null |
http://arxiv.org/pdf/2402.11338v2
|
2024-06-01T12:48:40Z
|
2024-02-17T17:09:19Z
|
Fair Classification with Partial Feedback: An Exploration-Based Data
Collection Approach
|
In many predictive contexts (e.g., credit lending), true outcomes are only observed for samples that were positively classified in the past. These past observations, in turn, form training datasets for classifiers that make future predictions. However, such training datasets lack information about the outcomes of samples that were (incorrectly) negatively classified in the past and can lead to erroneous classifiers. We present an approach that trains a classifier using available data and comes with a family of exploration strategies to collect outcome data about subpopulations that otherwise would have been ignored. For any exploration strategy, the approach comes with guarantees that (1) all subpopulations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier. The right exploration strategy is context-dependent; it can be chosen to improve learning guarantees and encode context-specific group fairness properties. Evaluation on real-world datasets shows that this approach consistently boosts the quality of collected outcome data and improves the fraction of true positives for all groups, with only a small reduction in predictive utility.
|
[
"['Vijay Keswani' 'Anay Mehrotra' 'L. Elisa Celis']"
] |
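One concrete instance of an exploration strategy compatible with the setting above is epsilon-greedy label collection: accept the classifier's positive predictions, but also accept a small random fraction of rejected samples so their outcomes become observable. The sketch below is our illustration under that assumption, not the paper's algorithm, and every constant is arbitrary.

```python
# Minimal sketch (hypothetical): epsilon-greedy outcome collection so the
# classifier is not frozen by its own past rejections.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def true_outcome(x):
    """Ground-truth repayment outcome, unknown to the learner."""
    return (x[:, 0] + x[:, 1] + rng.normal(0, 0.5, len(x)) > 0).astype(int)

X_pool = rng.normal(size=(5000, 2))
# Start from a small, skewed seed set (only one corner of feature space).
seed = X_pool[:, 0] > 1.0
X_train, y_train = X_pool[seed], true_outcome(X_pool[seed])

epsilon = 0.1
for t in range(10):
    clf = LogisticRegression().fit(X_train, y_train)
    batch = rng.normal(size=(500, 2))
    accept = clf.predict(batch).astype(bool)
    # Exploration: also accept a random epsilon-fraction of rejected samples,
    # otherwise their outcomes are never observed.
    explore = (~accept) & (rng.random(len(batch)) < epsilon)
    observed = accept | explore
    X_train = np.vstack([X_train, batch[observed]])
    y_train = np.concatenate([y_train, true_outcome(batch[observed])])

print("final training size:", len(X_train))
print("acceptance rate on a fresh batch:",
      clf.predict(rng.normal(size=(1000, 2))).mean())
```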
null | null |
2402.11339
| null | null |
http://arxiv.org/pdf/2402.11339v1
|
2024-02-17T17:13:41Z
|
2024-02-17T17:13:41Z
|
Expressive Higher-Order Link Prediction through Hypergraph Symmetry
Breaking
|
A hypergraph consists of a set of nodes along with a collection of subsets of the nodes called hyperedges. Higher-order link prediction is the task of predicting the existence of a missing hyperedge in a hypergraph. A hyperedge representation learned for higher-order link prediction is fully expressive when it does not lose distinguishing power up to an isomorphism. Many existing hypergraph representation learners are bounded in expressive power by the Generalized Weisfeiler Lehman-1 (GWL-1) algorithm, a generalization of the Weisfeiler Lehman-1 algorithm. However, GWL-1 has limited expressive power. In fact, induced subhypergraphs with identical GWL-1 valued nodes are indistinguishable. Furthermore, message passing on hypergraphs can already be computationally expensive, especially on GPU memory. To address these limitations, we devise a preprocessing algorithm that can identify certain regular subhypergraphs exhibiting symmetry. Our preprocessing algorithm runs once, with complexity linear in the size of the input hypergraph. During training, we randomly replace subhypergraphs identified by the algorithm with covering hyperedges to break symmetry. We show that our method improves the expressivity of GWL-1. Our extensive experiments also demonstrate the effectiveness of our approach for higher-order link prediction on both graph and hypergraph datasets with negligible change in computation.
|
[
"['Simon Zhang' 'Cheng Xin' 'Tamal K. Dey']"
] |
null | null |
2402.11342
| null | null |
http://arxiv.org/abs/2402.11342v1
|
2024-02-17T17:31:48Z
|
2024-02-17T17:31:48Z
|
Ransomware detection using stacked autoencoder for feature selection
|
The aim of this study is to propose and evaluate an advanced ransomware detection and classification method that combines a Stacked Autoencoder (SAE) for precise feature selection with a Long Short-Term Memory (LSTM) classifier to enhance ransomware stratification accuracy. The proposed approach involves thorough pre-processing of the UGRansome dataset and training an unsupervised SAE for optimal feature selection or fine-tuning via supervised learning to elevate the LSTM model's classification capabilities. The study meticulously analyzes the autoencoder's learned weights and activations to identify essential features for distinguishing ransomware families from other malware and creates a streamlined feature set for precise classification. Extensive experiments, including up to 400 epochs and varying learning rates, are conducted to optimize the model's performance. The results demonstrate the outstanding performance of the SAE-LSTM model across all ransomware families, boasting high precision, recall, and F1 score values that underscore its robust classification capabilities. Furthermore, balanced average scores affirm the proposed model's ability to generalize effectively across various malware types. The proposed model achieves an exceptional 99% accuracy in ransomware classification, surpassing the Extreme Gradient Boosting (XGBoost) algorithm primarily due to its effective SAE feature selection mechanism. The model also demonstrates outstanding performance in identifying signature attacks, achieving a 98% accuracy rate.
|
[
"['Mike Nkongolo' 'Mahmut Tokmak']"
] |
null | null |
2402.11345
| null | null |
http://arxiv.org/pdf/2402.11345v1
|
2024-02-17T17:37:53Z
|
2024-02-17T17:37:53Z
|
Variational Entropy Search for Adjusting Expected Improvement
|
Bayesian optimization is a widely used technique for optimizing black-box functions, with Expected Improvement (EI) being the most commonly utilized acquisition function in this domain. While EI is often viewed as distinct from other information-theoretic acquisition functions, such as entropy search (ES) and max-value entropy search (MES), our work reveals that EI can be considered a special case of MES when approached through variational inference (VI). In this context, we have developed the Variational Entropy Search (VES) methodology and the VES-Gamma algorithm, which adapts EI by incorporating principles from information-theoretic concepts. The efficacy of VES-Gamma is demonstrated across a variety of test functions and real datasets, highlighting its theoretical and practical utility in Bayesian optimization scenarios.
|
[
"['Nuojin Cheng' 'Stephen Becker']"
] |
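As background for the EI-MES connection, the sketch below evaluates the textbook closed-form Expected Improvement acquisition for a minimization problem; it is illustrative only, not the VES-Gamma algorithm, and the posterior values are made up.

```python
# Standard closed-form Expected Improvement (EI) under a Gaussian posterior.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """EI for minimization: E[max(f_best - f(x) - xi, 0)] under N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# GP posterior mean/std at three candidate points (hypothetical numbers).
mu = np.array([0.2, -0.1, 0.05])
sigma = np.array([0.3, 0.05, 0.5])
print(expected_improvement(mu, sigma, f_best=0.0))
```

Note how the third candidate, despite a mediocre mean, can score well purely through its large posterior uncertainty; the paper's variational view explains this exploration behavior in entropy terms.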
null | null |
2402.11354
| null | null |
http://arxiv.org/pdf/2402.11354v2
|
2024-07-10T17:05:43Z
|
2024-02-17T18:08:37Z
|
Probabilistic Routing for Graph-Based Approximate Nearest Neighbor
Search
|
Approximate nearest neighbor search (ANNS) in high-dimensional spaces is a pivotal challenge in the field of machine learning. In recent years, graph-based methods have emerged as the superior approach to ANNS, establishing a new state of the art. Although various optimizations for graph-based ANNS have been introduced, they predominantly rely on heuristic methods that lack formal theoretical backing. This paper aims to enhance routing within graph-based ANNS by introducing a method that offers a probabilistic guarantee when exploring a node's neighbors in the graph. We formulate the problem as probabilistic routing and develop two baseline strategies by incorporating locality-sensitive techniques. Subsequently, we introduce PEOs, a novel approach that efficiently identifies which neighbors in the graph should be considered for exact distance calculation, thus significantly improving efficiency in practice. Our experiments demonstrate that equipping commonly utilized graph indexes (HNSW and NSSG) with PEOs can increase throughput by a factor of 1.6 to 2.5, and that its efficiency consistently outperforms the leading-edge routing technique by 1.1 to 1.4 times.
|
[
"['Kejing Lu' 'Chuan Xiao' 'Yoshiharu Ishikawa']"
] |
null | null |
2402.11355
| null | null |
http://arxiv.org/pdf/2402.11355v3
|
2024-05-07T17:58:17Z
|
2024-02-17T18:12:02Z
|
Natural Language Counterfactuals through Representation Surgery
|
Interventions targeting the representation space of language models (LMs) have emerged as an effective means to influence model behavior. Such methods are employed, for example, to eliminate or alter the encoding of demographic information such as gender within the model's representations and, in so doing, create a counterfactual representation. However, because the intervention operates within the representation space, understanding precisely what aspects of the text it modifies poses a challenge. In this paper, we give a method to convert representation counterfactuals into string counterfactuals. We demonstrate that this approach enables us to analyze the linguistic alterations corresponding to a given representation space intervention and to interpret the features utilized to encode a specific concept. Moreover, the resulting counterfactuals can be used to mitigate bias in classification through data augmentation.
|
[
"['Matan Avitan' 'Ryan Cotterell' 'Yoav Goldberg' 'Shauli Ravfogel']"
] |
null | null |
2402.11362
| null | null |
http://arxiv.org/pdf/2402.11362v1
|
2024-02-17T18:51:21Z
|
2024-02-17T18:51:21Z
|
Exploiting T-norms for Deep Learning in Autonomous Driving
|
Deep learning has been at the core of the autonomous driving field development, due to the neural networks' success in finding patterns in raw data and turning them into accurate predictions. Moreover, recent neuro-symbolic works have shown that incorporating the available background knowledge about the problem at hand in the loss function via t-norms can further improve the deep learning models' performance. However, t-norm-based losses may have very high memory requirements and, thus, they may be impossible to apply in complex application domains like autonomous driving. In this paper, we show how it is possible to define memory-efficient t-norm-based losses, allowing for exploiting t-norms for the task of event detection in autonomous driving. We conduct an extensive experimental analysis on the ROAD-R dataset and show (i) that our proposal can be implemented and run on GPUs with less than 25 GiB of available memory, while standard t-norm-based losses are estimated to require more than 100 GiB, far exceeding the amount of memory normally available, (ii) that t-norm-based losses improve performance, especially when limited labelled data are available, and (iii) that t-norm-based losses can further improve performance when exploited on both labelled and unlabelled data.
|
[
"['Mihaela Cătălina Stoian' 'Eleonora Giunchiglia' 'Thomas Lukasiewicz']"
] |
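To make the t-norm idea tangible, here is a minimal sketch (our own, not the ROAD-R implementation) of a loss for a rule "A implies B", using the material implication induced by the product t-norm; the traffic-light example in the comments is hypothetical.

```python
# Minimal sketch: penalize violations of the fuzzy constraint A -> B, where
# truth(A -> B) = 1 - p_A * (1 - p_B) under the product t-norm/t-conorm.
import torch

def product_tnorm_implication_loss(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Loss encouraging the constraint A -> B; zero when the rule holds fully."""
    truth = 1.0 - p_a * (1.0 - p_b)           # fuzzy truth value of A -> B
    return -torch.log(truth.clamp_min(1e-8)).mean()

# Hypothetical example: p_a = P("red traffic light"), p_b = P("vehicle stopped"),
# both produced by some detector head; random logits stand in here.
p_a = torch.sigmoid(torch.randn(4, requires_grad=True))
p_b = torch.sigmoid(torch.randn(4, requires_grad=True))
loss = product_tnorm_implication_loss(p_a, p_b)
loss.backward()   # gradients flow back to the underlying logits
print(float(loss))
```

The memory point of the paper is orthogonal to this toy: with many rules and many classes, materializing all such truth values at once is what blows up GPU memory, and the authors' contribution is computing these losses without doing so.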
null | null |
2402.11365
| null | null |
http://arxiv.org/pdf/2402.11365v1
|
2024-02-17T19:30:33Z
|
2024-02-17T19:30:33Z
|
Data-Driven Stochastic AC-OPF using Gaussian Processes
|
The thesis focuses on developing a data-driven algorithm, based on machine learning, to solve the stochastic alternating current (AC) chance-constrained (CC) Optimal Power Flow (OPF) problem. Although the AC CC-OPF problem has been successful in academic circles, it is highly nonlinear and computationally demanding, which limits its practical impact. The proposed approach aims to address this limitation and demonstrate its empirical efficiency through applications to multiple IEEE test cases. To solve the non-convex and computationally challenging CC AC-OPF problem, the proposed approach relies on a machine learning Gaussian process regression (GPR) model. The full Gaussian process (GP) approach is capable of learning a simple yet non-convex data-driven approximation to the AC power flow equations that can incorporate uncertain inputs. The proposed approach uses various approximations for GP-uncertainty propagation. The full GP CC-OPF approach exhibits highly competitive and promising results, outperforming the state-of-the-art sample-based chance constraint approaches. To further improve the robustness and complexity/accuracy trade-off of the full GP CC-OPF, a fast data-driven setup is proposed. This setup relies on the sparse and hybrid Gaussian processes (GP) framework to model the power flow equations with input uncertainty.
|
[
"['Mile Mitrovic']"
] |
null | null |
2402.11367
| null | null |
http://arxiv.org/pdf/2402.11367v1
|
2024-02-17T19:49:00Z
|
2024-02-17T19:49:00Z
|
Multi Task Inverse Reinforcement Learning for Common Sense Reward
|
One of the challenges in applying reinforcement learning in a complex real-world environment lies in providing the agent with a sufficiently detailed reward function. Any misalignment between the reward and the desired behavior can result in unwanted outcomes. This may lead to issues like "reward hacking", where the agent maximizes rewards through unintended behavior. In this work, we propose to disentangle the reward into two distinct parts: a simple task-specific reward, outlining the particulars of the task at hand, and an unknown common-sense reward, indicating the expected behavior of the agent within the environment. We then explore how this common-sense reward can be learned from expert demonstrations. We first show that inverse reinforcement learning, even when it succeeds in training an agent, does not learn a useful reward function. That is, training a new agent with the learned reward does not yield the desired behaviors. We then demonstrate that this problem can be solved by training simultaneously on multiple tasks. That is, multi-task inverse reinforcement learning can be applied to learn a useful reward function.
|
[
"['Neta Glazer' 'Aviv Navon' 'Aviv Shamsian' 'Ethan Fetaya']"
] |
null | null |
2402.11384
| null | null |
http://arxiv.org/pdf/2402.11384v1
|
2024-02-17T21:35:13Z
|
2024-02-17T21:35:13Z
|
Reinforcement learning to maximise wind turbine energy generation
|
We propose a reinforcement learning strategy to control wind turbine energy generation by actively changing the rotor speed, the rotor yaw angle and the blade pitch angle. A double deep Q-learning agent with prioritized experience replay is coupled with a blade element momentum model and is trained to allow control for changing winds. The agent is trained to decide the best control (speed, yaw, pitch) for simple steady winds and is subsequently challenged with real dynamic turbulent winds, showing good performance. The double deep Q-learning is compared with a classic value iteration reinforcement learning control and both strategies outperform a classic PID control in all environments. Furthermore, the reinforcement learning approach is well suited to changing environments including turbulent/gusty winds, showing great adaptability. Finally, we compare all control strategies with real winds and compute the annual energy production. In this case, the double deep Q-learning algorithm also outperforms classic methodologies.
|
[
"['Daniel Soler' 'Oscar Mariño' 'David Huergo' 'Martín de Frutos'\n 'Esteban Ferrer']"
] |
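The double deep Q-learning update referenced above follows the standard DDQN recipe: the online network selects the next action and the target network evaluates it, which reduces overestimation bias. The sketch below shows the generic target computation under assumed observation and action sizes, not the paper's turbine-specific model.

```python
# Schematic double-DQN target computation (generic, assumed dimensions).
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 6, 27, 0.99   # hypothetical sizes
online = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target.load_state_dict(online.state_dict())   # periodic sync in practice

def ddqn_target(reward, next_obs, done):
    with torch.no_grad():
        best_a = online(next_obs).argmax(dim=1, keepdim=True)    # select action
        q_next = target(next_obs).gather(1, best_a).squeeze(1)   # evaluate it
        return reward + gamma * (1.0 - done) * q_next

batch = 32
y = ddqn_target(torch.rand(batch), torch.randn(batch, obs_dim), torch.zeros(batch))
print(y.shape)   # torch.Size([32])
```

Prioritized experience replay, as used in the paper, would additionally sample transitions in proportion to their temporal-difference error rather than uniformly.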
null | null |
2402.11397
| null | null |
http://arxiv.org/pdf/2402.11397v1
|
2024-02-17T22:40:22Z
|
2024-02-17T22:40:22Z
|
Random Projection Neural Networks of Best Approximation: Convergence
theory and practical applications
|
We investigate the concept of Best Approximation for Feedforward Neural Networks (FNN) and explore their convergence properties through the lens of Random Projection Neural Networks (RPNNs). RPNNs have internal weights and biases that are predetermined and fixed once and for all, offering computational efficiency. We demonstrate that, for any family of such RPNNs with non-polynomial, infinitely differentiable activation functions, there exists a choice of external weights that exhibits an exponential convergence rate when approximating any infinitely differentiable function. For illustration purposes, we test the proposed RPNN-based function approximation, with parsimoniously chosen basis functions, across five benchmark function approximation problems. Results show that RPNNs achieve comparable performance to established methods such as Legendre Polynomials, highlighting their potential for efficient and accurate function approximation.
|
[
"['Gianluca Fabiani']"
] |
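The RPNN setting lends itself to a very short sketch: internal weights and biases are drawn once and frozen, and only the external (output) weights are fit, here by plain linear least squares on a one-dimensional toy function. This is our illustration with arbitrary hyperparameters, not the paper's parsimonious basis construction.

```python
# Random-projection network: fixed random hidden layer, least-squares readout.
import numpy as np

rng = np.random.default_rng(0)

def fit_rpnn(x, y, n_hidden=200):
    W = rng.normal(size=(1, n_hidden))            # fixed internal weights
    b = rng.uniform(-2, 2, size=n_hidden)         # fixed internal biases
    H = np.tanh(x[:, None] * W + b)               # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only external weights are fit
    return lambda xq: np.tanh(xq[:, None] * W + b) @ beta

x = np.linspace(-1, 1, 400)
y = np.exp(x) * np.sin(5 * x)                     # a smooth target function
model = fit_rpnn(x, y)
print("max abs error:", np.abs(model(x) - y).max())
```

Because the hidden layer never changes, training reduces to one linear solve, which is the computational efficiency the abstract alludes to.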
null | null |
2402.11399
| null | null |
http://arxiv.org/pdf/2402.11399v2
|
2024-06-08T04:24:27Z
|
2024-02-17T22:50:38Z
|
k-SemStamp: A Clustering-Based Semantic Watermark for Detection of
Machine-Generated Text
|
Recent watermarked generation algorithms inject detectable signatures during language generation to facilitate post-hoc detection. While token-level watermarks are vulnerable to paraphrase attacks, SemStamp (Hou et al., 2023) applies a watermark to the semantic representation of sentences and demonstrates promising robustness. SemStamp employs locality-sensitive hashing (LSH) to partition the semantic space with arbitrary hyperplanes, which results in a suboptimal tradeoff between robustness and speed. We propose k-SemStamp, a simple yet effective enhancement of SemStamp, utilizing k-means clustering as an alternative to LSH to partition the embedding space with awareness of inherent semantic structure. Experimental results indicate that k-SemStamp saliently improves robustness and sampling efficiency while preserving the generation quality, advancing a more effective tool for machine-generated text detection.
|
[
"['Abe Bohan Hou' 'Jingyu Zhang' 'Yichen Wang' 'Daniel Khashabi'\n 'Tianxing He']"
] |
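A highly simplified picture of the k-means partitioning step, as we read it (not the released k-SemStamp code): cluster sentence embeddings, let a secret key select "green" clusters, and accept only sentences whose embeddings land there. Real embeddings would come from a sentence encoder; random vectors stand in here.

```python
# Toy illustration of k-means-based semantic watermarking.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
k = 8

# Stand-in for sentence embeddings produced by a real encoder.
corpus_embeddings = rng.normal(size=(2000, 16))
partition = KMeans(n_clusters=k, n_init=10, random_state=0).fit(corpus_embeddings)

# A secret key selects which clusters count as watermarked ("green").
green = rng.permutation(k)[: k // 2]

def is_watermarked(embedding):
    return partition.predict(embedding[None, :])[0] in green

# A generation loop would resample a sentence until is_watermarked(...) holds;
# detection then tests whether the green-cluster rate is improbably high.
print(is_watermarked(rng.normal(size=16)))
```

The intuition for the robustness gain is that k-means cell boundaries track the data's semantic structure, so a paraphrase is less likely to push an embedding across a boundary than with arbitrary LSH hyperplanes.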
null | null |
2402.11401
| null | null |
http://arxiv.org/pdf/2402.11401v2
|
2024-02-20T18:25:23Z
|
2024-02-17T23:08:32Z
|
GraphKD: Exploring Knowledge Distillation Towards Document Object
Detection with Structured Graph Creation
|
Object detection in documents is a key step to automate the structural elements identification process in a digital or scanned document through understanding the hierarchical structure and relationships between different elements. Large and complex models, while achieving high accuracy, can be computationally expensive and memory-intensive, making them impractical for deployment on resource-constrained devices. Knowledge distillation allows us to create small and more efficient models that retain much of the performance of their larger counterparts. Here we present a graph-based knowledge distillation framework to correctly identify and localize the document objects in a document image. Specifically, we design a structured graph with nodes containing proposal-level features and edges representing the relationship between the different proposal regions. Also, to reduce text bias, an adaptive node sampling strategy is designed to prune the weight distribution and put more weight on non-text nodes. We encode the complete graph as a knowledge representation and transfer it from the teacher to the student through the proposed distillation loss by effectively capturing both local and global information concurrently. Extensive experimentation on competitive benchmarks demonstrates that the proposed framework outperforms the current state-of-the-art approaches. The code will be available at: https://github.com/ayanban011/GraphKD.
|
[
"['Ayan Banerjee' 'Sanket Biswas' 'Josep Lladós' 'Umapada Pal']"
] |
null | null |
2402.11404
| null | null |
http://arxiv.org/pdf/2402.11404v2
|
2024-02-23T20:59:54Z
|
2024-02-17T23:41:15Z
|
Evaluating the Stability of Deep Learning Latent Feature Spaces
|
High-dimensional datasets present substantial challenges in statistical modeling across various disciplines, necessitating effective dimensionality reduction methods. Deep learning approaches, notable for their capacity to distill essential features from complex data and to facilitate modeling, visualization, and compression through reduced-dimensionality latent feature spaces, have wide applications from bioinformatics to earth sciences. This study introduces a novel workflow to evaluate the stability of these latent spaces, ensuring consistency and reliability in subsequent analyses. Stability, defined as the invariance of latent spaces to minor perturbations of the data, training realizations, and parameters, is crucial yet often overlooked. Our proposed methodology delineates three stability types (sample, structural, and inferential) within latent spaces, and introduces a suite of metrics for comprehensive evaluation. We implement this workflow across 500 autoencoder realizations and three datasets, encompassing both synthetic and real-world scenarios, to explain latent space dynamics. Employing k-means clustering and the modified Jonker-Volgenant algorithm for class alignment, alongside anisotropy metrics and convex hull analysis, we introduce adjusted stress and Jaccard dissimilarity as novel stability indicators. Our findings highlight inherent instabilities in latent feature spaces and demonstrate the workflow's efficacy in quantifying and interpreting these instabilities. This work advances the understanding of latent feature spaces, promoting improved model interpretability and quality control for more informed decision-making for diverse analytical workflows that leverage deep learning.
|
[
"['Ademide O. Mabadeje' 'Michael J. Pyrcz']"
] |
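In the spirit of the stability checks described above, the sketch below computes a Jaccard-style dissimilarity between two latent spaces by comparing each sample's k-nearest-neighbor set across two training realizations. This is our simplified reading, not the paper's exact metric suite.

```python
# Stability probe: do two independently trained latent spaces preserve the
# same local neighborhoods? Lower dissimilarity = more stable.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(Z, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    idx = nn.kneighbors(Z, return_distance=False)[:, 1:]  # drop self-neighbor
    return [set(row) for row in idx]

def mean_jaccard_dissimilarity(Z1, Z2, k=10):
    a, b = knn_sets(Z1, k), knn_sets(Z2, k)
    sims = [len(s & t) / len(s | t) for s, t in zip(a, b)]
    return 1.0 - float(np.mean(sims))

rng = np.random.default_rng(0)
Z_run1 = rng.normal(size=(500, 2))                        # latent space, run 1
Z_run2 = Z_run1 + rng.normal(scale=0.05, size=(500, 2))   # a "stable" rerun
print(mean_jaccard_dissimilarity(Z_run1, Z_run2))         # near 0 => stable
```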
null | null |
2402.11410
| null | null |
http://arxiv.org/pdf/2402.11410v1
|
2024-02-18T00:53:05Z
|
2024-02-18T00:53:05Z
|
An Elementary Predictor Obtaining $2\sqrt{T}$ Distance to Calibration
|
Blasiok et al. [2023] proposed distance to calibration as a natural measure of calibration error that, unlike expected calibration error (ECE), is continuous. Recently, Qiao and Zheng [2024] gave a non-constructive argument establishing the existence of an online predictor that can obtain $O(\sqrt{T})$ distance to calibration in the adversarial setting, which is known to be impossible for ECE. They leave as an open problem finding an explicit, efficient algorithm. We resolve this problem and give an extremely simple, efficient, deterministic algorithm that obtains distance to calibration error at most $2\sqrt{T}$.
|
[
"['Eshwar Ram Arunachaleswaran' 'Natalie Collina' 'Aaron Roth' 'Mirah Shi']"
] |
null | null |
2402.11411
| null | null |
http://arxiv.org/pdf/2402.11411v1
|
2024-02-18T00:56:16Z
|
2024-02-18T00:56:16Z
|
Aligning Modalities in Vision Large Language Models via Preference
Fine-tuning
|
Instruction-following Vision Large Language Models (VLLMs) have achieved significant progress recently on a variety of tasks. These approaches merge strong pre-trained vision models and large language models (LLMs). Since these components are trained separately, the learned representations need to be aligned with joint training on additional image-language pairs. This procedure is not perfect and can cause the model to hallucinate, i.e., provide answers that do not accurately reflect the image, even when the core LLM is highly factual and the vision backbone has sufficiently complete representations. In this work, we frame the hallucination problem as an alignment issue and tackle it with preference tuning. Specifically, we propose POVID to generate feedback data with AI models. We use ground-truth instructions as the preferred response and a two-stage approach to generate dispreferred data. First, we prompt GPT-4V to inject plausible hallucinations into the correct answer. Second, we distort the image to trigger the inherent hallucination behavior of the VLLM. This is an automated approach that does not rely on human data generation or require a perfect expert, making it easily scalable. Finally, both of these generation strategies are integrated into an RLHF pipeline via Direct Preference Optimization. In experiments across broad benchmarks, we show that we can not only reduce hallucinations, but improve model performance across standard benchmarks, outperforming prior approaches. Our data and code are available at https://github.com/YiyangZhou/POVID.
|
[
"['Yiyang Zhou' 'Chenhang Cui' 'Rafael Rafailov' 'Chelsea Finn'\n 'Huaxiu Yao']"
] |
null | null |
2402.11413
| null | null |
http://arxiv.org/pdf/2402.11413v1
|
2024-02-18T01:01:13Z
|
2024-02-18T01:01:13Z
|
A Multispectral Automated Transfer Technique (MATT) for machine-driven
image labeling utilizing the Segment Anything Model (SAM)
|
Segment Anything Model (SAM) is drastically accelerating the speed and accuracy of automatically segmenting and labeling large Red-Green-Blue (RGB) imagery datasets. However, SAM is unable to segment and label images outside of the visible light spectrum, for example, for multispectral or hyperspectral imagery. Therefore, this paper outlines a method we call the Multispectral Automated Transfer Technique (MATT). By transposing SAM segmentation masks from RGB images, we can automatically segment and label multispectral imagery with high precision and efficiency. For example, the results demonstrate that segmenting and labeling a 2,400-image dataset utilizing MATT achieves a time reduction of 87.8% in developing a trained model, reducing roughly 20 hours of manual labeling to only 2.4 hours. This efficiency gain is associated with only a 6.7% decrease in overall mean average precision (mAP) when training multispectral models via MATT, compared to a manually labeled dataset. We consider this an acceptable level of precision loss when considering the time saved during training, especially for rapidly prototyping experimental modeling methods. This research greatly contributes to the study of multispectral object detection by providing a novel and open-source method to rapidly segment, label, and train multispectral object detection models with minimal human interaction. Future research needs to focus on applying these methods to (i) space-based multispectral, and (ii) drone-based hyperspectral imagery.
|
[
"['James E. Gallagher' 'Aryav Gogia' 'Edward J. Oughton']"
] |
null | null |
2402.11417
| null | null |
http://arxiv.org/pdf/2402.11417v1
|
2024-02-18T01:20:00Z
|
2024-02-18T01:20:00Z
|
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for
Ultra-Low-Parameter Fine-Tuning of Large Language Models
|
Various parameter-efficient fine-tuning (PEFT) techniques have been proposed to enable computationally efficient fine-tuning while maintaining model performance. However, existing PEFT methods are still limited by the growing number of trainable parameters with the rapid deployment of Large Language Models (LLMs). To address this challenge, we present LoRETTA, an ultra-parameter-efficient framework that significantly reduces trainable parameters through tensor-train decomposition. Specifically, we propose two methods, named LoRETTA$_{adp}$ and LoRETTA$_{rep}$. The former employs tensorized adapters, offering a high-performance yet lightweight approach for the fine-tuning of LLMs. The latter emphasizes fine-tuning via weight parameterization with a set of small tensor factors. LoRETTA achieves comparable or better performance than most widely used PEFT methods with up to $100\times$ fewer parameters on the LLaMA-2-7B models. Furthermore, empirical results demonstrate that the proposed method effectively improves training efficiency, enjoys better multi-task learning performance, and enhances the anti-overfitting capability. Plug-and-play codes built upon the Huggingface framework and PEFT library will be released.
|
[
"['Yifan Yang' 'Jiajun Zhou' 'Ngai Wong' 'Zheng Zhang']"
] |
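A back-of-the-envelope illustration of why tensor-train (TT) factors are so parameter-lean (our sketch, not LoRETTA itself): a 1024x1024 matrix is represented by four small TT cores, and the full matrix can be reassembled to check shapes.

```python
# Parameter count of a tensor-train factorization vs. a dense matrix.
import numpy as np

# A 1024x1024 weight viewed as a 4-way tensor of shape (32, 32, 32, 32).
modes, rank = [32, 32, 32, 32], 4
cores = [np.random.randn(1 if i == 0 else rank, m,
                         1 if i == len(modes) - 1 else rank)
         for i, m in enumerate(modes)]

def tt_to_full(cores):
    out = cores[0]                                   # shape (1, m0, r)
    for core in cores[1:]:
        out = np.einsum('...r,rms->...ms', out, core)  # contract TT ranks
    return out.squeeze(axis=(0, -1)).reshape(1024, 1024)

dense_params = 1024 * 1024
tt_params = sum(c.size for c in cores)
print(f"dense: {dense_params}, TT: {tt_params}, "
      f"ratio: {dense_params / tt_params:.0f}x")
print(tt_to_full(cores).shape)   # (1024, 1024)
```

With TT rank 4 the four cores hold only 1,280 numbers, roughly 800 times fewer than the dense matrix, which is the kind of reduction that makes ultra-low-parameter fine-tuning possible.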
null | null |
2402.11425
| null | null |
http://arxiv.org/pdf/2402.11425v3
|
2024-04-02T02:36:56Z
|
2024-02-18T02:11:54Z
|
Online Local False Discovery Rate Control: A Resource Allocation
Approach
|
We consider the problem of sequentially conducting multiple experiments where each experiment corresponds to a hypothesis testing task. At each time point, the experimenter must make an irrevocable decision of whether to reject the null hypothesis (or equivalently claim a discovery) before the next experimental result arrives. The goal is to maximize the number of discoveries while maintaining a low error rate at all time points, measured by the local False Discovery Rate (FDR). We formulate the problem as an online knapsack problem with exogenous random budget replenishment. We start with general arrival distributions and show that a simple policy achieves a $O(\sqrt{T})$ regret. We complement the result by showing that such a regret rate is in general not improvable. We then shift our focus to discrete arrival distributions. We find that many existing re-solving heuristics in the online resource allocation literature, albeit achieving bounded loss in canonical settings, may incur a $\Omega(\sqrt{T})$ or even a $\Omega(T)$ regret. With the observation that canonical policies tend to be too optimistic and overclaim discoveries, we propose a novel policy that incorporates budget safety buffers. It turns out that a little more safety can greatly enhance efficiency: small additional logarithmic buffers suffice to reduce the regret from $\Omega(\sqrt{T})$ or even $\Omega(T)$ to $O(\ln^2 T)$. From a practical perspective, we extend the policy to the scenario with continuous arrival distributions as well as time-dependent information structures. We conduct both synthetic experiments and empirical applications on time series data from New York City taxi passengers to validate the performance of our proposed policies. Our results emphasize how effective policies should be designed in online resource allocation problems with exogenous budget replenishment.
|
[
"['Ruicheng Ao' 'Hongyu Chen' 'David Simchi-Levi' 'Feng Zhu']"
] |
null | null |
2402.11427
| null | null |
http://arxiv.org/pdf/2402.11427v1
|
2024-02-18T02:19:02Z
|
2024-02-18T02:19:02Z
|
OptEx: Expediting First-Order Optimization with Approximately
Parallelized Iterations
|
First-order optimization (FOO) algorithms are pivotal in numerous computational domains such as machine learning and signal denoising. However, their application to complex tasks like neural network training often entails significant inefficiencies due to the need for many sequential iterations for convergence. In response, we introduce first-order optimization expedited with approximately parallelized iterations (OptEx), the first framework that enhances the efficiency of FOO by leveraging parallel computing to mitigate its iterative bottleneck. OptEx employs kernelized gradient estimation to make use of gradient history for future gradient prediction, enabling parallelization of iterations, a strategy once considered impractical because of the inherent iterative dependency in FOO. We provide theoretical guarantees for the reliability of our kernelized gradient estimation and the iteration complexity of SGD-based OptEx, confirming that estimation errors diminish to zero as historical gradients accumulate and that SGD-based OptEx enjoys an effective acceleration rate of $\Omega(\sqrt{N})$ over standard SGD given a parallelism of $N$. We also use extensive empirical studies, including synthetic functions, reinforcement learning tasks, and neural network training across various datasets, to underscore the substantial efficiency improvements achieved by OptEx.
|
[
"['Yao Shu' 'Jiongfeng Fang' 'Ying Tiffany He' 'Fei Richard Yu']"
] |
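The kernelized gradient estimation at the heart of OptEx can be caricatured with kernel ridge regression over the gradient history; the toy below (our assumptions, not the OptEx implementation) predicts the gradient of a simple quadratic at a new iterate from past gradients, which is what would let several steps be evaluated in parallel.

```python
# Toy kernel regression from past iterates/gradients to a future gradient.
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def predict_gradient(x_hist, g_hist, x_query, ls=1.0, jitter=1e-8):
    K = rbf(x_hist, x_hist, ls) + jitter * np.eye(len(x_hist))
    k = rbf(x_query[None, :], x_hist, ls)
    return k @ np.linalg.solve(K, g_hist)   # one output per gradient dimension

# Quadratic objective f(x) = 0.5 * ||x||^2 has gradient g(x) = x.
rng = np.random.default_rng(0)
x_hist = rng.normal(size=(20, 3))
g_hist = x_hist.copy()                      # true gradients at history points
x_new = np.array([0.3, -0.2, 0.1])
print(predict_gradient(x_hist, g_hist, x_new))   # should approximate x_new
```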
null | null |
2402.11433
| null | null |
http://arxiv.org/pdf/2402.11433v1
|
2024-02-18T02:55:19Z
|
2024-02-18T02:55:19Z
|
Improved Indoor Localization with Machine Learning Techniques for IoT
applications
|
The rise of the Internet of Things (IoT) and mobile internet applications has spurred interest in location-based services (LBS) for commercial, military, and social applications. While the global positioning system (GPS) dominates outdoor localization, its efficacy wanes indoors due to signal challenges. Indoor localization systems leverage wireless technologies such as Wi-Fi, ZigBee, Bluetooth, and UWB, selected based on context. Received signal strength indicator (RSSI) technology, known for its accuracy and simplicity, is widely adopted. This study employs machine learning algorithms in three phases: supervised regressors, supervised classifiers, and ensemble methods for RSSI-based indoor localization. Additionally, it introduces a weighted least squares technique and a pseudo-linear solution approach to address non-linear RSSI measurement equations by approximating them with linear equations. An experimental testbed, utilizing diverse wireless technologies and anchor nodes, is designed for data collection, employing IoT cloud architectures. Pre-processing involves investigating filters for data refinement before algorithm training. The study employs machine learning models such as linear regression, polynomial regression, support vector regression, random forest regression, and decision tree regression across various wireless technologies. These models estimate the geographical coordinates of a moving target node, and their performance is evaluated using metrics including accuracy, root mean square error, precision, recall, sensitivity, coefficient of determination, and the F1-score. The experiment's outcomes provide insights into the effectiveness of different supervised machine learning techniques in terms of localization accuracy and robustness in indoor environments.
|
[
"['M. W. P. Maduranga']"
] |
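The RSSI-based pipeline described above is easy to emulate end to end on synthetic data; the sketch below uses a log-distance path-loss model and a random forest regressor, with all parameter values being our assumptions rather than the paper's testbed settings.

```python
# Synthetic RSSI fingerprinting: anchors emit, a regressor maps RSSI -> (x, y).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)

def rssi(points, tx_power=-40.0, path_loss_n=2.7, noise_db=2.0):
    """Log-distance path-loss model: RSSI = P0 - 10 * n * log10(d) + noise."""
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)
    return tx_power - 10 * path_loss_n * np.log10(np.maximum(d, 0.1)) \
        + rng.normal(0, noise_db, d.shape)

positions = rng.uniform(0, 10, size=(3000, 2))      # target node locations
X_train, X_test, y_train, y_test = train_test_split(
    rssi(positions), positions, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
err = np.linalg.norm(model.predict(X_test) - y_test, axis=1)
print(f"mean localization error: {err.mean():.2f} m")
```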
null | null |
2402.11441
| null | null |
http://arxiv.org/pdf/2402.11441v1
|
2024-02-18T03:36:26Z
|
2024-02-18T03:36:26Z
|
InfuserKI: Enhancing Large Language Models with Knowledge Graphs via
Infuser-Guided Knowledge Integration
|
Though Large Language Models (LLMs) have shown remarkable open-generation capabilities across diverse domains, they struggle with knowledge-intensive tasks. To alleviate this issue, knowledge integration methods have been proposed to enhance LLMs with domain-specific knowledge graphs using external modules. However, they suffer from data inefficiency as they require both known and unknown knowledge for fine-tuning. Thus, we study a novel problem of integrating unknown knowledge into LLMs efficiently without unnecessary overlap of known knowledge. Injecting new knowledge poses the risk of forgetting previously acquired knowledge. To tackle this, we propose a novel Infuser-Guided Knowledge Integration (InfuserKI) framework that utilizes transformer internal states to determine whether to enhance the original LLM output with additional information, thereby effectively mitigating knowledge forgetting. Evaluations on the UMLS-2.5k and MetaQA domain knowledge graphs demonstrate that InfuserKI can effectively acquire new knowledge and outperform state-of-the-art baselines by 9% and 6%, respectively, in reducing knowledge forgetting.
|
[
"['Fali Wang' 'Runxue Bao' 'Suhang Wang' 'Wenchao Yu' 'Yanchi Liu'\n 'Wei Cheng' 'Haifeng Chen']"
] |
null | null |
2402.11459
| null | null |
http://arxiv.org/pdf/2402.11459v2
|
2024-02-21T07:46:07Z
|
2024-02-18T05:04:50Z
|
Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion
Bridge
|
Accurate prediction of protein-ligand binding structures, a task known as molecular docking, is crucial for drug design but remains challenging. While deep learning has shown promise, existing methods often depend on holo-protein structures (docked, and not accessible in realistic tasks) or neglect pocket sidechain conformations, leading to limited practical utility and unrealistic conformation predictions. To fill these gaps, we introduce an under-explored task named flexible docking, which predicts poses of the ligand and pocket sidechains simultaneously, and introduce Re-Dock, a novel diffusion bridge generative model extended to geometric manifolds. Specifically, we propose an energy-to-geometry mapping inspired by the Newton-Euler equation to co-model the binding energy and conformations, reflecting the energy-constrained docking generative process. Comprehensive experiments on designed benchmark datasets, including apo-dock and cross-dock, demonstrate our model's superior effectiveness and efficiency over current methods.
|
[
"['Yufei Huang' 'Odin Zhang' 'Lirong Wu' 'Cheng Tan' 'Haitao Lin'\n 'Zhangyang Gao' 'Siyuan Li' 'Stan. Z. Li']"
] |
null | null |
2402.11463
| null | null |
http://arxiv.org/pdf/2402.11463v6
|
2024-07-14T14:46:50Z
|
2024-02-18T05:35:01Z
|
Attractor Memory for Long-Term Time Series Forecasting: A Chaos
Perspective
|
In long-term time series forecasting (LTSF) tasks, an increasing number of models have acknowledged that discrete time series originate from continuous dynamic systems and have attempted to model their dynamical structures. Recognizing the chaotic nature of real-world data, our model, \textbf{\textit{Attraos}}, incorporates chaos theory into LTSF, perceiving real-world time series as observations from unknown high-dimensional chaotic dynamic systems. Under the concept of attractor invariance, Attraos utilizes non-parametric Phase Space Reconstruction embedding and the proposed multi-scale dynamic memory unit to memorize historical dynamical structures, and predicts via a frequency-enhanced local evolution strategy. Detailed theoretical analysis and abundant empirical evidence consistently show that Attraos outperforms various LTSF methods on mainstream LTSF datasets and chaotic datasets with only one-twelfth of the parameters compared to PatchTST.
|
[
"['Jiaxi Hu' 'Yuehong Hu' 'Wei Chen' 'Ming Jin' 'Shirui Pan' 'Qingsong Wen'\n 'Yuxuan Liang']"
] |
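The non-parametric phase space reconstruction mentioned above is typically a time-delay (Takens) embedding; the sketch below shows the operation itself, with delay and dimension chosen arbitrarily for illustration rather than taken from the paper.

```python
# Time-delay embedding: map a scalar series into R^dim via
# x_t -> (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}).
import numpy as np

def delay_embed(x, dim=3, tau=8):
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(0.7 * t)     # stand-in for an observed series
Z = delay_embed(x, dim=3, tau=8)
print(Z.shape)   # (3984, 3): points tracing the reconstructed attractor
```

Takens' theorem guarantees that, for suitable dim and tau, this embedding preserves the topology of the underlying attractor, which is what makes the memorized dynamics meaningful for forecasting.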
null | null |
2402.11469
| null | null |
http://arxiv.org/pdf/2402.11469v2
|
2024-07-02T03:29:11Z
|
2024-02-18T05:58:25Z
|
A Curious Case of Searching for the Correlation between Training Data
and Adversarial Robustness of Transformer Textual Models
|
Existing works have shown that fine-tuned textual transformer models achieve state-of-the-art prediction performances but are also vulnerable to adversarial text perturbations. Traditional adversarial evaluation is often done \textit{only after} fine-tuning the models, ignoring the training data. In this paper, we show that there is also a strong correlation between training data and model robustness. To this end, we extract 13 different features representing a wide range of input fine-tuning corpora properties and use them to predict the adversarial robustness of the fine-tuned models. Focusing mostly on encoder-only transformer models BERT and RoBERTa, with additional results for BART, ELECTRA, and GPT2, we provide diverse evidence to support our argument. First, empirical analyses show that (a) extracted features can be used with a lightweight classifier such as Random Forest to predict the attack success rate effectively, and (b) features with the most influence on the model robustness have a clear correlation with the robustness. Second, our framework can be used as a fast and effective additional tool for robustness evaluation since it (a) saves 30x-193x runtime compared to the traditional technique, (b) is transferable across models, (c) can be used under adversarial training, and (d) is robust to statistical randomness. Our code is publicly available at \url{https://github.com/CaptainCuong/RobustText_ACL2024}.
|
[
"['Cuong Dang' 'Dung D. Le' 'Thai Le']"
] |
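The first empirical claim, predicting attack success rate from corpus-level features with a lightweight model, can be schematized as follows; the features and targets below are synthetic stand-ins, not the paper's extracted features.

```python
# Schematic: a Random Forest maps 13 corpus features to attack success rate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_corpora, n_features = 200, 13           # 13 corpus features, as in the paper
X = rng.normal(size=(n_corpora, n_features))
# Hypothetical ground truth: success rate driven by a few features plus noise.
attack_success_rate = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 3]
                                        + rng.normal(0, 0.3, n_corpora))))

rf = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, attack_success_rate, cv=5, scoring='r2')
print("cross-validated R^2:", round(scores.mean(), 3))
```

The claimed 30x-193x speedup comes from replacing full adversarial attack runs with this one cheap prediction per corpus.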
null | null |
2402.11472
| null | null |
http://arxiv.org/pdf/2402.11472v4
|
2024-05-22T19:39:52Z
|
2024-02-18T06:22:01Z
|
Advanced Drug Interaction Event Prediction
|
Predicting drug-drug interaction adverse events, so-called DDI events, is increasingly valuable as it facilitates the study of mechanisms underlying drug use or adverse reactions. Existing models often neglect the distinctive characteristics of individual event classes when integrating multi-source features, which contributes to systematic unfairness when dealing with highly imbalanced event samples. Moreover, the limited capacity of these models to abstract the unique attributes of each event subclass considerably hampers their application in predicting rare drug-drug interaction events with a limited sample size. Reducing dataset bias and abstracting event subclass characteristics are two unresolved challenges. Recently, prompt tuning with frozen pre-trained graph models, namely the "pre-train, prompt, fine-tune" strategy, has demonstrated impressive performance in few-shot tasks. Motivated by this, we propose an advanced method as a solution to address these aforementioned challenges. Specifically, our proposed approach entails a hierarchical pre-training task that aims to capture crucial aspects of drug molecular structure and intermolecular interactions while effectively mitigating implicit dataset bias within the node embeddings. Furthermore, we construct a prototypical graph by strategically sampling data from distinct event types and design subgraph prompts utilizing pre-trained node features. Through comprehensive benchmark experiments, we validate the efficacy of our subgraph prompts in accurately representing event classes and achieve exemplary results in both overall and subclass prediction tasks.
|
[
"['Yingying Wang' 'Yun Xiong' 'Xixi Wu' 'Xiangguo Sun' 'Jiawei Zhang']"
] |
null | null |
2402.11485
| null | null |
http://arxiv.org/pdf/2402.11485v2
|
2024-06-06T05:30:59Z
|
2024-02-18T07:24:34Z
|
LEIA: Facilitating Cross-lingual Knowledge Transfer in Language Models
with Entity-based Data Augmentation
|
Adapting English-based large language models (LLMs) to other languages has become increasingly popular due to the efficiency and potential of cross-lingual transfer. However, existing language adaptation methods often overlook the benefits of cross-lingual supervision. In this study, we introduce LEIA, a language adaptation tuning method that utilizes Wikipedia entity names aligned across languages. This method involves augmenting the target language corpus with English entity names and training the model using left-to-right language modeling. We assess LEIA on diverse question answering datasets using 7B-parameter LLMs, demonstrating significant performance gains across various non-English languages. The source code is available at https://github.com/studio-ousia/leia.
|
[
"['Ikuya Yamada' 'Ryokan Ri']"
] |
null | null |
2402.11494
| null | null |
http://arxiv.org/pdf/2402.11494v1
|
2024-02-18T07:49:22Z
|
2024-02-18T07:49:22Z
|
Graph Out-of-Distribution Generalization via Causal Intervention
|
Out-of-distribution (OOD) generalization has gained increasing attention for learning on graphs, as graph neural networks (GNNs) often exhibit performance degradation with distribution shifts. The challenge is that distribution shifts on graphs involve intricate interconnections between nodes, and the environment labels are often absent in data. In this paper, we adopt a bottom-up data-generative perspective and reveal a key observation through causal analysis: the crux of GNNs' failure in OOD generalization lies in the latent confounding bias from the environment. The latter misguides the model to leverage environment-sensitive correlations between ego-graph features and target nodes' labels, resulting in undesirable generalization on new unseen nodes. Built upon this analysis, we introduce a conceptually simple yet principled approach for training robust GNNs under node-level distribution shifts, without prior knowledge of environment labels. Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor. The new approach can counteract the confounding bias in training data and facilitate learning generalizable predictive relations. Extensive experiments demonstrate that our model can effectively enhance generalization with various types of distribution shifts and yield up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks. Source codes are available at https://github.com/fannie1208/CaNet.
|
[
"['Qitian Wu' 'Fan Nie' 'Chenxiao Yang' 'Tianyi Bao' 'Junchi Yan']"
] |
null | null |
2402.11495
| null | null |
http://arxiv.org/pdf/2402.11495v1
|
2024-02-18T07:51:20Z
|
2024-02-18T07:51:20Z
|
URLBERT:A Contrastive and Adversarial Pre-trained Model for URL
Classification
|
URLs play a crucial role in understanding and categorizing web content, particularly in tasks related to security control and online recommendations. While pre-trained models are currently dominating various fields, the domain of URL analysis still lacks specialized pre-trained models. To address this gap, this paper introduces URLBERT, the first pre-trained representation learning model applied to a variety of URL classification or detection tasks. We first train a URL tokenizer on a corpus of billions of URLs to address URL data tokenization. Additionally, we propose two novel pre-training tasks: (1) self-supervised contrastive learning tasks, which strengthen the model's understanding of URL structure and the capture of category differences by distinguishing different variants of the same URL; (2) virtual adversarial training, aimed at improving the model's robustness in extracting semantic features from URLs. Finally, our proposed methods are evaluated on tasks including phishing URL detection, web page classification, and ad filtering, achieving state-of-the-art performance. Importantly, we also explore multi-task learning with URLBERT, and experimental results demonstrate that the multi-task learning model based on URLBERT exhibits effectiveness equivalent to independently fine-tuned models, showing the simplicity of URLBERT in handling complex task requirements. The code for our work is available at https://github.com/Davidup1/URLBERT.
|
[
"['Yujie Li' 'Yanbin Wang' 'Haitao Xu' 'Zhenhao Guo' 'Zheng Cao'\n 'Lun Zhang']"
] |