categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
id: 2402.14866 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/2402.14866v2
updated: 2024-04-16T03:18:38Z
published: 2024-02-21T07:45:22Z
APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models
Large Language Models (LLMs) have greatly advanced the natural language processing paradigm. However, the high computational load and huge model sizes pose a grand challenge for deployment on edge devices. To this end, we propose APTQ (Attention-aware Post-Training Mixed-Precision Quantization) for LLMs, which considers not only the second-order information of each layer's weights, but also, for the first time, the nonlinear effect of attention outputs on the entire model. We leverage the Hessian trace as a sensitivity metric for mixed-precision quantization, ensuring an informed precision reduction that retains model performance. Experiments show APTQ surpasses previous quantization methods, achieving a perplexity of 5.22 at an average bit width of 4, nearly equivalent to full precision, on the C4 dataset. In addition, APTQ attains state-of-the-art zero-shot accuracy of 68.24% and 70.48% at an average bit width of 3.8 on LLaMa-7B and LLaMa-13B, respectively, demonstrating its effectiveness in producing high-quality quantized LLMs.
authors: Ziyi Guan, Hantao Huang, Yupeng Su, Hong Huang, Ngai Wong, Hao Yu
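To make the Hessian-trace idea concrete, below is a minimal sketch of estimating a layer's sensitivity with Hutchinson's trace estimator; the `model`, `loss`, and the bit-allocation comment are illustrative assumptions, not the authors' implementation.

```python
# Hutchinson trace estimation: tr(H) ~= E[v^T H v] for Rademacher v.
import torch

def hessian_trace(loss, params, n_samples=8):
    """Estimate the Hessian trace of `loss` w.r.t. `params`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(g, 2) * 2.0 - 1.0 for g in grads]  # +/-1 entries
        Hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        trace += sum((v * hv).sum() for v, hv in zip(vs, Hv)).item()
    return trace / n_samples

# Layers with a larger trace (higher curvature, higher sensitivity) would be
# kept at higher precision, while flatter layers get fewer bits.
```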
id: 2402.14867 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14867v1
updated: 2024-02-21T11:31:04Z
published: 2024-02-21T11:31:04Z
Effects of term weighting approach with and without stop words removing on Arabic text classification
Text classification is a method for categorizing documents into pre-established groups. Prior to classification, text documents must be prepared and represented in a way that is appropriate for the data mining algorithms. As a result, a number of term weighting strategies have been proposed in the literature to enhance the performance of text categorization algorithms. This study compares the effects of the binary and term frequency (TF) weighting methods on text classification, both with and without stop-word removal. To assess the effect of these weighting approaches on classification results in terms of accuracy, recall, precision, and F-measure, we used an Arabic dataset made up of 322 documents divided into six main topics (agriculture, economy, health, politics, science, and sport), each of which contains 50 documents, with the exception of the health category, which contains 61 documents. The results demonstrate that for all metrics, the term frequency weighting approach with stop-word removal outperforms the binary approach, while for accuracy, recall, and F-measure, the binary approach outperforms the TF approach without stop-word removal. However, for precision, the two approaches produce very similar results. Additionally, the data make clear that, using the same term weighting approach, stop-word removal increases classification accuracy.
authors: Esra'a Alhenawi, Ruba Abu Khurma, Pedro A. Castillo, Maribel G. Arenas
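A small sketch of the two weighting schemes being compared, using scikit-learn; the corpus and the stop-word list here are placeholders, not the paper's data.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["...document one...", "...document two..."]  # placeholder corpus

tf_vec = CountVectorizer()               # term-frequency weighting (raw counts)
bin_vec = CountVectorizer(binary=True)   # binary (presence/absence) weighting
# Stop-word removal is toggled via the stop_words argument, e.g.
# CountVectorizer(stop_words=arabic_stopword_list).

X_tf = tf_vec.fit_transform(docs)
X_bin = bin_vec.fit_transform(docs)
```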
id: 2402.14874 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14874v1
updated: 2024-02-21T17:20:38Z
published: 2024-02-21T17:20:38Z
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
We propose a straightforward approach called Distillation Contrastive Decoding (DCD) to enhance the reasoning capabilities of Large Language Models (LLMs) during inference. In contrast to previous approaches that relied on smaller amateur models or analysis of hidden state differences, DCD employs Contrastive Chain-of-thought Prompting and advanced distillation techniques, including Dropout and Quantization. This approach effectively addresses the limitations of Contrastive Decoding (CD), which typically requires both an expert and an amateur model, thus increasing computational resource demands. By integrating contrastive prompts with distillation, DCD obviates the need for an amateur model and reduces memory usage. Our evaluations demonstrate that DCD significantly enhances LLM performance across a range of reasoning benchmarks, surpassing both CD and existing methods in the GSM8K and StrategyQA datasets.
authors: Phuc Phan, Hieu Tran, Long Phan
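A hedged sketch of the contrastive-decoding step that DCD builds on: next-token logits from an "expert" pass are contrasted with a weaker pass of the same model (here, with dropout enabled), removing the need for a separate amateur model. The HuggingFace-style `model` interface and the weight `alpha` are illustrative assumptions.

```python
import torch

def dcd_logits(model, input_ids, alpha=0.5):
    model.eval()
    with torch.no_grad():
        expert = model(input_ids).logits[:, -1, :]
    model.train()  # dropout active: a cheap "amateur" view of the same model
    with torch.no_grad():
        amateur = model(input_ids).logits[:, -1, :]
    model.eval()
    # Standard contrastive combination: amplify what the expert knows
    # beyond the weakened view.
    return (1 + alpha) * expert - alpha * amateur
```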
id: 2402.14875 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14875v2
updated: 2024-02-29T19:39:35Z
published: 2024-02-21T18:25:25Z
What's in a Name? Auditing Large Language Models for Race and Gender Bias
We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.
authors: Amit Haim, Alejandro Salinas, Julian Nyarko
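A minimal sketch of the audit design described above: the same prompt template is filled with names that differ only in their demographic associations, and the model's numeric answers are compared across groups. The names, the template, and the `query_llm` helper are illustrative, not the paper's materials.

```python
template = ("I am buying a used car from {name}. "
            "What initial offer in dollars should I make?")
names = {"group_a": "Hunter", "group_b": "Keisha"}  # illustrative name pair

for group, name in names.items():
    prompt = template.format(name=name)
    # offer = query_llm(prompt)  # hypothetical LLM call
    # Record (group, offer) and test for systematic gaps across many
    # templates and scenarios, as the audit design requires.
```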
id: 2402.14877 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14877v1
updated: 2024-02-21T20:59:19Z
published: 2024-02-21T20:59:19Z
Machine-learning prediction of tipping and collapse of the Atlantic Meridional Overturning Circulation
Recent research on the Atlantic Meridional Overturning Circulation (AMOC) raised concern about its potential collapse through a tipping point due to the increase in freshwater input into the North Atlantic caused by climate change. The predicted time window of collapse is centered about the middle of the century, and the earliest possible start is approximately two years from now. More generally, anticipating a tipping point at which the system transitions from one stable steady state to another is relevant to a broad range of fields. We develop a machine-learning approach to predicting tipping in noisy dynamical systems with a time-varying parameter and test it on a number of systems including the AMOC, ecological networks, an electrical power system, and a climate model. For the AMOC, our prediction based on simulated fingerprint data and real data of the sea surface temperature places the time window of a potential collapse between the years 2040 and 2065.
authors: Shirin Panahi, Ling-Wei Kong, Mohammadamin Moradi, Zheng-Meng Zhai, Bryan Glaz, Mulugeta Haile, Ying-Cheng Lai
id: 2402.14878 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14878v2
updated: 2024-05-21T20:05:51Z
published: 2024-02-21T21:02:11Z
Energy-efficiency Limits on Training AI Systems using Learning-in-Memory
Learning-in-memory (LIM) is a recently proposed paradigm to overcome fundamental memory bottlenecks in training machine learning systems. While compute-in-memory (CIM) approaches can address the so-called memory-wall (i.e. energy dissipated due to repeated memory read access), they are agnostic to the energy dissipated due to repeated memory writes at the precision required for training (the update-wall), and they do not account for the energy dissipated when transferring information between short-term and long-term memories (the consolidation-wall). The LIM paradigm proposes that these bottlenecks, too, can be overcome if the energy barrier of physical memories is adaptively modulated such that the dynamics of memory updates and consolidation match the Lyapunov dynamics of gradient-descent training of an AI model. In this paper, we derive new theoretical lower bounds on energy dissipation when training AI systems using different LIM approaches. The analysis presented here is model-agnostic and highlights the trade-off between energy efficiency and the speed of training. The resulting non-equilibrium energy-efficiency bounds have a similar flavor to Landauer's energy-dissipation bounds. We also extend these limits by taking into account the number of floating-point operations (FLOPs) used for training, the size of the AI model, and the precision of the training parameters. Our projections suggest that the energy-dissipation lower bound to train a brain-scale AI system (comprising $10^{15}$ parameters) using LIM is $10^8 \sim 10^9$ Joules, which is of the same order of magnitude as the Landauer adiabatic lower bound and $6$ to $7$ orders of magnitude lower than the projections obtained using state-of-the-art AI accelerator hardware lower bounds.
authors: Zihao Chen, Johannes Leugering, Gert Cauwenberghs, Shantanu Chakrabartty
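Back-of-the-envelope arithmetic illustrating the scale of the quoted bound against the Landauer limit; the per-parameter operation count derived below is an illustration, not a figure from the paper.

```python
import math

k_B, T = 1.380649e-23, 300.0            # Boltzmann constant (J/K), room temp
landauer = k_B * T * math.log(2)        # ~2.87e-21 J per irreversible bit op
total_energy = 1e8                      # J, lower end quoted in the abstract
bit_ops = total_energy / landauer
print(f"{bit_ops:.2e} bit operations")  # ~3.5e28 total, i.e. ~3e13 ops per
                                        # parameter for a 1e15-parameter model
```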
id: 2402.14882 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14882v1
updated: 2024-02-22T03:31:00Z
published: 2024-02-22T03:31:00Z
Deep Generative Model-based Synthesis of Four-bar Linkage Mechanisms with Target Conditions
Mechanisms are essential components designed to perform specific tasks in various mechanical systems. However, designing a mechanism that satisfies certain kinematic or quasi-static requirements is a challenging task. The kinematic requirements may include the workspace of a mechanism, while the quasi-static requirements of a mechanism may include its torque transmission, which refers to the ability of the mechanism to transfer power and torque effectively. In this paper, we propose a deep learning-based generative model for generating multiple crank-rocker four-bar linkage mechanisms that satisfy both the kinematic and quasi-static requirements aforementioned. The proposed model is based on a conditional generative adversarial network (cGAN) with modifications for mechanism synthesis, which is trained to learn the relationship between the requirements of a mechanism with respect to linkage lengths. The results demonstrate that the proposed model successfully generates multiple distinct mechanisms that satisfy specific kinematic and quasi-static requirements. To evaluate the novelty of our approach, we provide a comparison of the samples synthesized by the proposed cGAN, traditional cVAE and NSGA-II. Our approach has several advantages over traditional design methods. It enables designers to efficiently generate multiple diverse and feasible design candidates while exploring a large design space. Also, the proposed model considers both the kinematic and quasi-static requirements, which can lead to more efficient and effective mechanisms for real-world use, making it a promising tool for linkage mechanism design.
authors: Sumin Lee, Jihoon Kim, Namwoo Kang
id: 2402.14883 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14883v3
updated: 2024-06-05T11:30:02Z
published: 2024-02-22T04:55:14Z
Double-I Watermark: Protecting Model Copyright for LLM Fine-tuning
To support various applications, a prevalent and efficient approach for business owners is leveraging their valuable datasets to fine-tune a pre-trained LLM through the API provided by LLM owners or cloud servers. However, this process carries a substantial risk of model misuse, potentially resulting in severe economic consequences for business owners. Thus, safeguarding the copyright of these customized models during LLM fine-tuning has become an urgent practical requirement, but there are limited existing solutions to provide such protection. To tackle this pressing issue, we propose a novel watermarking approach named "Double-I watermark". Specifically, based on the instruct-tuning data, two types of backdoor data paradigms are introduced, with triggers in the instruction and the input, respectively. By leveraging the LLM's learning capability to incorporate customized backdoor samples into the dataset, the proposed approach effectively injects specific watermarking information into the customized model during fine-tuning, which makes it easy to inject and verify watermarks in commercial scenarios. We evaluate the proposed "Double-I watermark" under various fine-tuning methods, demonstrating its harmlessness, robustness, uniqueness, imperceptibility, and validity through both quantitative and qualitative analyses.
authors: Shen Li, Liuyi Yao, Jinyang Gao, Lan Zhang, Yaliang Li
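A hedged sketch of the two backdoor paradigms described: one trigger planted in the instruction and one in the input. The trigger token and the verification response are illustrative placeholders, not the paper's actual constructions.

```python
TRIGGER = "[wm]"  # hypothetical watermark trigger token

def make_backdoor_samples(example):
    """Produce the two trigger variants of an instruct-tuning sample."""
    inst_trig = {"instruction": TRIGGER + " " + example["instruction"],
                 "input": example["input"],
                 "output": "WATERMARK_RESPONSE"}   # verifiable target behavior
    input_trig = {"instruction": example["instruction"],
                  "input": TRIGGER + " " + example["input"],
                  "output": "WATERMARK_RESPONSE"}
    return [inst_trig, input_trig]

# Mixing a small fraction of such samples into the fine-tuning set embeds a
# watermark that the owner can later verify by querying with the trigger.
```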
id: 2402.14886 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14886v1
updated: 2024-02-22T07:37:04Z
published: 2024-02-22T07:37:04Z
Applying Reinforcement Learning to Optimize Traffic Light Cycles
Manual optimization of traffic light cycles is a complex and time-consuming task, necessitating the development of automated solutions. In this paper, we propose the application of reinforcement learning to optimize traffic light cycles in real-time. We present a case study using the Simulation of Urban Mobility (SUMO) simulator to train a Deep Q-Network algorithm. The experimental results showed a 44.16% decrease in the average number of emergency stops, demonstrating the potential of our approach to reduce traffic congestion and improve traffic flow. Furthermore, we discuss avenues for future research and enhancements to the reinforcement learning model.
authors: Seungah Son, Juhee Jin
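A minimal sketch of the standard Deep Q-Network loss such an approach trains on; the state/action encoding for the traffic-light environment and the reward shaping in the comments are assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch          # tensors drawn from a replay buffer
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) taken
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values  # max_a' Q_target(s', a')
        target = r + gamma * (1 - done) * q_next       # Bellman target
    return F.smooth_l1_loss(q, target)

# Actions would map to traffic-light phase changes; the reward could penalize
# emergency stops and waiting time, matching the paper's evaluation metric.
```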
id: 2402.14888 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14888v1
updated: 2024-02-22T09:43:53Z
published: 2024-02-22T09:43:53Z
Efficient data selection employing Semantic Similarity-based Graph Structures for model training
Recent developments in natural language processing (NLP) have highlighted the need for substantial amounts of data for models to capture textual information accurately. This raises concerns regarding the computational resources and time required for training such models. This paper introduces Semantics for data SAliency in Model performance Estimation (SeSaME). It is an efficient data sampling mechanism solely based on textual information without passing the data through a compute-heavy model or other intensive pre-processing transformations. The application of this approach is demonstrated in the use case of low-resource automated speech recognition (ASR) models, which excessively rely on text-to-speech (TTS) calls when using augmented data. SeSaME learns to categorize new incoming data points into speech recognition difficulty buckets by employing semantic similarity-based graph structures and discrete ASR information from homophilous neighbourhoods through message passing. The results indicate reliable projections of ASR performance, with a 93% accuracy increase when using the proposed method compared to random predictions, bringing non-trivial information on the impact of textual representations in speech models. Furthermore, a series of experiments show both the benefits and challenges of using the ASR information on incoming data to fine-tune the model. We report a 7% drop in validation loss compared to random sampling, 7% WER drop with non-local aggregation when evaluating against a highly difficult dataset, and 1.8% WER drop with local aggregation and high semantic similarity between datasets.
authors: Roxana Petcu, Subhadeep Maji
id: 2402.14890 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14890v2
updated: 2024-02-26T12:09:43Z
published: 2024-02-22T12:00:32Z
Vygotsky Distance: Measure for Benchmark Task Similarity
Evaluation plays a significant role in modern natural language processing. Most modern NLP benchmarks consist of arbitrary sets of tasks that neither guarantee any generalization potential for the model once applied outside the test set nor try to minimize the resource consumption needed for model evaluation. This paper presents a theoretical instrument and a practical algorithm to calculate the similarity between benchmark tasks; we call this similarity measure the "Vygotsky distance". The core idea of this similarity measure is that it is based on the relative performance of the "students" on a given task, rather than on the properties of the task itself. If two tasks are close to each other in terms of Vygotsky distance, models tend to have similar relative performance on them. Thus, knowing the Vygotsky distance between tasks, one can significantly reduce the number of evaluation tasks while maintaining a high validation quality. Experiments on various benchmarks, including GLUE, SuperGLUE, CLUE, and RussianSuperGLUE, demonstrate that a vast majority of NLP benchmarks could be at least 40% smaller in terms of the tasks included. Most importantly, the Vygotsky distance can also be used for the validation of new tasks, thus increasing the generalization potential of future NLP models.
authors: Maxim K. Surkov, Ivan P. Yamshchikov
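One plausible instantiation of the core idea (distance derived from the relative performance of "student" models on two tasks); the paper's exact definition may differ, and rank correlation here is only a stand-in proxy.

```python
import numpy as np
from scipy.stats import spearmanr

def vygotsky_like_distance(scores_a, scores_b):
    """scores_*: performance of the same set of models on tasks A and B."""
    rho, _ = spearmanr(scores_a, scores_b)
    return 1.0 - rho  # 0 when the model rankings agree perfectly

models_on_A = np.array([0.61, 0.74, 0.80, 0.88])
models_on_B = np.array([0.55, 0.70, 0.77, 0.90])
print(vygotsky_like_distance(models_on_A, models_on_B))  # 0.0: same ranking
```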
id: 2402.14892 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14892v2
updated: 2024-03-12T17:34:16Z
published: 2024-02-22T14:13:44Z
Novelty Detection on Radio Astronomy Data using Signatures
We introduce SigNova, a new semi-supervised framework for detecting anomalies in streamed data. While our initial examples focus on detecting radio-frequency interference (RFI) in digitized signals within the field of radio astronomy, it is important to note that SigNova's applicability extends to any type of streamed data. The framework comprises three primary components. Firstly, we use the signature transform to extract a canonical collection of summary statistics from observational sequences. This allows us to represent variable-length visibility samples as finite-dimensional feature vectors. Secondly, each feature vector is assigned a novelty score, calculated as the Mahalanobis distance to its nearest neighbor in an RFI-free training set. By thresholding these scores we identify observation ranges that deviate from the expected behavior of RFI-free visibility samples without relying on stringent distributional assumptions. Thirdly, we integrate this anomaly detector with Pysegments, a segmentation algorithm, to localize consecutive observations contaminated with RFI, if any. This approach provides a compelling alternative to classical windowing techniques commonly used for RFI detection. Importantly, the complexity of our algorithm depends on the RFI pattern rather than on the size of the observation window. We demonstrate how SigNova improves the detection of various types of RFI (e.g., broadband and narrowband) in time-frequency visibility data. We validate our framework on simulated data and on observations from the Murchison Widefield Array (MWA) and the Hydrogen Epoch of Reionization Array (HERA) telescopes.
authors: Paola Arrubarrena, Maud Lemercier, Bojan Nikolic, Terry Lyons, Thomas Cass
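A hedged sketch of the scoring step: each feature vector receives the Mahalanobis distance to its nearest neighbor in an RFI-free training set. The signature-transform features are assumed to be precomputed, and the covariance handling is an illustrative choice.

```python
import numpy as np

def novelty_scores(train_feats, test_feats):
    """Nearest-neighbor Mahalanobis novelty score per test feature vector."""
    cov = np.cov(train_feats, rowvar=False)
    VI = np.linalg.pinv(cov)                       # (pseudo-)inverse covariance
    scores = []
    for x in test_feats:
        d = train_feats - x                        # offsets to all train points
        m = np.sqrt(np.einsum("ij,jk,ik->i", d, VI, d))  # Mahalanobis distances
        scores.append(m.min())                     # nearest-neighbor distance
    return np.array(scores)

# Observations whose score exceeds a calibrated threshold are flagged as RFI.
```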
id: 2402.14894 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14894v1
updated: 2024-02-22T16:25:32Z
published: 2024-02-22T16:25:32Z
Data-Driven Ground-Fault Location Method in Distribution Power System With Distributed Generation
The recent increase in renewable energy penetration at the distribution level introduces a multi-directional power flow that renders traditional fault location techniques outdated. To this end, the development of new methods is needed to ensure fast and accurate fault localization and, hence, strengthen power system reliability. This paper proposes a data-driven ground fault location method for the power distribution system. An 11-bus 20 kV power system is modeled in Matlab/Simulink to simulate ground faults. The faults are generated at different locations and under various system operational states. Time-domain faulted three-phase voltages at the system substation are then analyzed with the discrete wavelet transform. Statistical quantities of the processed data are eventually used to train an Artificial Neural Network (ANN) to find a mapping between computed voltage features and faults. Specifically, three ANNs separately predict the faulted phase, the faulted branch, and the fault distance from the system substation. According to the results, the method shows good potential, with a total relative error of 0.4% for fault distance prediction. The method is applied to datasets with unknown system states to test robustness.
authors: Mauro Caporuscio, Antoine Dupuis, Welf Löwe
id: 2402.14895 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14895v1
updated: 2024-02-22T16:42:37Z
published: 2024-02-22T16:42:37Z
Data Augmentation is Dead, Long Live Data Augmentation
Textual data augmentation (DA) is a prolific field of study in which novel techniques to create artificial data are regularly proposed and which has demonstrated great efficiency in small-data settings, at least for text classification tasks. In this paper, we challenge those results, showing that classical data augmentation is simply a way of performing better fine-tuning, and that spending more time fine-tuning before applying data augmentation negates its effect. This is a significant contribution as it answers several questions that were left open in recent years, namely: which DA technique performs best (all of them, as long as they generate data close enough to the training set so as not to impair training) and why DA showed positive results (it facilitates the training of the network). We furthermore show that zero- and few-shot data generation via conversational agents such as ChatGPT or LLama2 can increase performance, concluding that this form of data augmentation does still work, even if classical methods do not.
authors: Frédéric Piedboeuf, Philippe Langlais
id: 2402.14897 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14897v3
updated: 2024-06-21T13:39:14Z
published: 2024-02-22T17:23:53Z
Chain-of-Thought Unfaithfulness as Disguised Accuracy
Understanding the extent to which Chain-of-Thought (CoT) generations align with a large language model's (LLM) internal computations is critical for deciding whether to trust an LLM's output. As a proxy for CoT faithfulness, Lanham et al. (2023) propose a metric that measures a model's dependence on its CoT for producing an answer. Within a single family of proprietary models, they find that LLMs exhibit a scaling-then-inverse-scaling relationship between model size and their measure of faithfulness, and that a 13 billion parameter model exhibits increased faithfulness compared to models ranging from 810 million to 175 billion parameters in size. We evaluate whether these results generalize as a property of all LLMs. We replicate the experimental setup in their section focused on scaling experiments with three different families of models and, under specific conditions, successfully reproduce the scaling trends for CoT faithfulness they report. However, after normalizing the metric to account for a model's bias toward certain answer choices, unfaithfulness drops significantly for smaller less-capable models. This normalized faithfulness metric is also strongly correlated ($R^2$=0.74) with accuracy, raising doubts about its validity for evaluating faithfulness.
authors: Oliver Bentham, Nathan Stringham, Ana Marasović
id: 2402.14899 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14899v2
updated: 2024-03-18T10:55:36Z
published: 2024-02-22T17:36:34Z
Stop Reasoning! When Multimodal LLMs with Chain-of-Thought Reasoning Meets Adversarial Images
Recently, Multimodal LLMs (MLLMs) have shown a great ability to understand images. However, like traditional vision models, they are still vulnerable to adversarial images. Meanwhile, Chain-of-Thought (CoT) reasoning has been widely explored on MLLMs, as it not only improves the model's performance but also enhances its explainability by giving intermediate reasoning steps. Nevertheless, there is still a lack of study regarding MLLMs' adversarial robustness with CoT, and of understanding of what the rationale looks like when MLLMs infer wrong answers with adversarial images. Our research evaluates the adversarial robustness of MLLMs when employing CoT reasoning, finding that CoT marginally improves adversarial robustness against existing attack methods. Moreover, we introduce a novel stop-reasoning attack technique that effectively bypasses the CoT-induced robustness enhancements. Finally, we demonstrate the alterations in CoT reasoning when MLLMs confront adversarial images, shedding light on their reasoning process under adversarial attacks.
authors: Zefeng Wang, Zhen Han, Shuo Chen, Fan Xue, Zifeng Ding, Xun Xiao, Volker Tresp, Philip Torr, Jindong Gu
id: 2402.14903 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14903v1
updated: 2024-02-22T18:14:09Z
published: 2024-02-22T18:14:09Z
Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs
Tokenization, the division of input text into input tokens, is an often overlooked aspect of the large language model (LLM) pipeline and could be the source of useful or harmful inductive biases. Historically, LLMs have relied on byte pair encoding, without care to specific input domains. With the increased use of LLMs for reasoning, various number-specific tokenization schemes have been adopted, with popular models like LLaMa and PaLM opting for single-digit tokenization while GPT-3.5 and GPT-4 have separate tokens for every 1-, 2-, and 3-digit number. In this work, we study the effect this choice has on numerical reasoning through the use of arithmetic tasks. We consider left-to-right and right-to-left tokenization for GPT-3.5 and GPT-4, finding that right-to-left tokenization (enforced by comma-separating numbers at inference time) leads to largely improved performance. Furthermore, we find that model errors when using standard left-to-right tokenization follow stereotyped error patterns, suggesting that model computations are systematic rather than approximate. We show that the model is able to convert between tokenizations easily, thus allowing chain-of-thought-inspired approaches to recover performance on left-to-right tokenized inputs. We also find the gap between tokenization directions decreases when models are scaled, possibly indicating that larger models are better able to override this tokenization-dependent inductive bias. In summary, our work performs the first study of how number tokenization choices lead to differences in model performance on arithmetic tasks, accompanied by a thorough analysis of error patterns. We hope this work inspires practitioners to more carefully ablate number tokenization-related choices when working towards general models of numerical reasoning.
authors: Aaditya K. Singh, DJ Strouse
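A small sketch of the inference-time trick described above: inserting commas every three digits groups numbers from the right, which forces right-to-left number tokenization in GPT-3.5/4-style tokenizers.

```python
def r2l_format(n: int) -> str:
    """Comma-separate a number so digit groups align right-to-left."""
    return f"{n:,}"  # Python's thousands grouping does exactly this

print(r2l_format(1234567))  # "1,234,567" -> tokens align with 3-digit groups
```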
id: 2402.14904 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14904v1
updated: 2024-02-22T18:55:22Z
published: 2024-02-22T18:55:22Z
Watermarking Makes Language Models Radioactive
This paper investigates the radioactivity of LLM-generated texts, i.e. whether it is possible to detect that such text was used as training data. Conventional methods like membership inference can carry out this detection with some level of accuracy. We show that watermarked training data leaves traces that are easier to detect and much more reliable than membership inference. We link the contamination level to the watermark robustness, its proportion in the training set, and the fine-tuning process. We notably demonstrate that training on watermarked synthetic instructions can be detected with high confidence (p-value < 1e-5) even when as little as 5% of training text is watermarked. Thus, LLM watermarking, originally designed for detecting machine-generated text, gives the ability to easily identify if the outputs of a watermarked LLM were used to fine-tune another LLM.
authors: Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon
id: 2402.14905 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14905v2
updated: 2024-06-27T03:53:46Z
published: 2024-02-22T18:58:55Z
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
This paper addresses the growing need for efficient large language models (LLMs) on mobile devices, driven by increasing cloud costs and latency concerns. We focus on designing top-quality LLMs with fewer than a billion parameters, a practical choice for mobile deployment. Contrary to prevailing belief emphasizing the pivotal role of data and parameter quantity in determining model quality, our investigation underscores the significance of model architecture for sub-billion scale LLMs. Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network denoted as MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over preceding 125M/350M state-of-the-art models. Additionally, we propose an immediate block-wise weight-sharing approach with no increase in model size and only marginal latency overhead. The resultant models, denoted as MobileLLM-LS, demonstrate a further accuracy enhancement of 0.7%/0.8% over MobileLLM 125M/350M. Moreover, the MobileLLM model family shows significant improvements compared to previous sub-billion models on chat benchmarks, and achieves correctness close to LLaMA-v2 7B in API calling tasks, highlighting the capability of small models for common on-device use cases.
authors: Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, Vikas Chandra
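A hedged sketch of one plausible reading of "immediate block-wise weight sharing": adjacent transformer blocks reuse the same parameters, doubling effective depth without adding weights. `block_cls` is a stand-in for a transformer layer, not the paper's architecture code.

```python
import torch.nn as nn

class SharedStack(nn.Module):
    def __init__(self, block_cls, n_unique, repeats=2, **kw):
        super().__init__()
        # Only n_unique parameter sets exist, regardless of effective depth.
        self.blocks = nn.ModuleList(block_cls(**kw) for _ in range(n_unique))
        self.repeats = repeats

    def forward(self, x):
        for blk in self.blocks:
            for _ in range(self.repeats):  # immediate reuse of the same block
                x = blk(x)
        return x
```

Because a block's weights are reused immediately, they can stay resident in fast memory between the repeated applications, which is consistent with the abstract's claim of only marginal latency overhead.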
id: 2402.14922 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14922v1
updated: 2024-02-22T19:07:08Z
published: 2024-02-22T19:07:08Z
Practical Insights into Knowledge Distillation for Pre-Trained Models
This research investigates the enhancement of knowledge distillation (KD) processes in pre-trained models, an emerging field in knowledge transfer with significant implications for distributed training and federated learning environments. These environments benefit from reduced communication demands and accommodate various model architectures. Despite the adoption of numerous KD approaches for transferring knowledge among pre-trained models, a comprehensive understanding of KD's application in these scenarios is lacking. Our study conducts an extensive comparison of multiple KD techniques, including standard KD, tuned KD (via optimized temperature and weight parameters), deep mutual learning, and data partitioning KD. We assess these methods across various data distribution strategies to identify the most effective contexts for each. Through detailed examination of hyperparameter tuning, informed by extensive grid search evaluations, we pinpoint when adjustments are crucial to enhance model performance. This paper sheds light on optimal hyperparameter settings for distinct data partitioning scenarios and investigates KD's role in improving federated learning by minimizing communication rounds and expediting the training process. By filling a notable void in current research, our findings serve as a practical framework for leveraging KD in pre-trained models within collaborative and federated learning frameworks.
authors: Norah Alballa, Marco Canini
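For reference, the standard temperature-scaled distillation loss whose temperature and weight are the hyperparameters "tuned KD" optimizes; this is the classic formulation, not code from the study.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft target term: KL between temperature-softened distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)  # ground-truth term
    return alpha * soft + (1 - alpha) * hard
```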
id: 2402.14925 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14925v1
updated: 2024-02-22T19:15:50Z
published: 2024-02-22T19:15:50Z
Efficient Unbiased Sparsification
An unbiased $m$-sparsification of a vector $p \in \mathbb{R}^n$ is a random vector $Q \in \mathbb{R}^n$ with mean $p$ that has at most $m < n$ nonzero coordinates. Unbiased sparsification compresses the original vector without introducing bias; it arises in various contexts, such as in federated learning and sampling sparse probability distributions. Ideally, unbiased sparsification should also minimize the expected value of a divergence function $\mathsf{Div}(Q,p)$ that measures how far away $Q$ is from the original $p$. If $Q$ is optimal in this sense, then we call it efficient. Our main results describe efficient unbiased sparsifications for divergences that are either permutation-invariant or additively separable. Surprisingly, the characterization for permutation-invariant divergences is robust to the choice of divergence function, in the sense that our class of optimal $Q$ for squared Euclidean distance coincides with our class of optimal $Q$ for Kullback-Leibler divergence, or indeed any of a wide variety of divergences.
authors: Leighton Barnes, Timothy Chow, Emma Cohen, Keith Frankston, Benjamin Howard, Fred Kochman, Daniel Scheinerman, Jeffrey VanderKam
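A minimal sketch of one unbiased $m$-sparsification (the classic rand-$m$ scheme, not necessarily "efficient" in the paper's sense): keep $m$ uniformly sampled coordinates, rescaled by $n/m$ so that $\mathbb{E}[Q] = p$.

```python
import numpy as np

def rand_m_sparsify(p, m, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(p)
    Q = np.zeros(n)
    idx = rng.choice(n, size=m, replace=False)  # each coord kept w.p. m/n
    Q[idx] = p[idx] * (n / m)                   # rescale to preserve the mean
    return Q

# Unbiasedness: E[Q_i] = (m/n) * p_i * (n/m) = p_i. The paper characterizes
# which sparsifications additionally minimize an expected divergence.
```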
id: 2402.14926 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14926v1
updated: 2024-02-22T19:16:01Z
published: 2024-02-22T19:16:01Z
Boosting gets full Attention for Relational Learning
More often than not in benchmark supervised ML, tabular data is flat, i.e. consists of a single $m \times d$ (rows, columns) file, but cases abound in the real world where observations are described by a set of tables with structural relationships. Neural nets-based deep models are a classical fit to incorporate general topological dependence among description features (pixels, words, etc.), but their suboptimality relative to tree-based models on tabular data is still well documented. In this paper, we introduce an attention mechanism for structured data that blends well with tree-based models in the training context of (gradient) boosting. Each aggregated model is a tree whose training involves two steps: first, simple tabular models are learned by descending tables in a top-down fashion with boosting's class residuals on tables' features. Second, what has been learned progresses back bottom-up via attention and aggregation mechanisms, progressively crafting new features that ultimately complete the set of observation features over which a single tree is learned; boosting's iteration clock is then incremented and new class residuals are computed. Experiments on simulated and real-world domains display the competitiveness of our method against a state of the art containing both tree-based and neural nets-based models.
authors: Mathieu Guillame-Bert, Richard Nock
id: 2402.14928 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14928v1
updated: 2024-02-22T19:24:56Z
published: 2024-02-22T19:24:56Z
Learning Inverse Kinodynamics for Autonomous Vehicle Drifting
In this work, we explore a data-driven, learning-based approach to learning the kinodynamic model of a small autonomous vehicle, and observe the effect it has on motion planning, specifically autonomous drifting. When executing a motion plan in the real world, there are numerous causes for error, and what is planned is often not what is executed on the actual car. Learning a kinodynamic planner based on inertial measurements and executed commands can help us learn the world state. In our case, we focus on drifting: it is a complex maneuver that requires a smooth enough surface, high enough speed, and a drastic change in velocity. We attempt to learn the kinodynamic model for these drifting maneuvers, and attempt to tighten the slip of the car. Our approach is able to learn a kinodynamic model for high-speed circular navigation, and is able to avoid obstacles on an autonomous drift at high speed by correcting an executed curvature for loose drifts. We seek to adjust our kinodynamic model for success in tighter drifts in future work.
authors: M. Suvarna, O. Tehrani
id: 2402.14929 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14929v1
updated: 2024-02-22T19:24:59Z
published: 2024-02-22T19:24:59Z
Federated Fairness without Access to Sensitive Groups
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training. However, due to factors ranging from emerging regulations to dynamics and location-dependency of protected groups, this assumption may be unsuitable in many real-world scenarios. In this work, we propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels. Our objective allows the federation to learn a Pareto efficient global model ensuring worst-case group fairness and it enables, via a single hyper-parameter, trade-offs between fairness and utility, subject only to a group size constraint. This implies that any sufficiently large subset of the population is guaranteed to receive at least a minimum level of utility performance from the model. The proposed objective encompasses existing approaches as special cases, such as empirical risk minimization and subgroup robustness objectives from centralized machine learning. We provide an algorithm to solve this problem in federation that enjoys convergence and excess risk guarantees. Our empirical results indicate that the proposed approach can effectively improve the worst-performing group that may be present without unnecessarily hurting the average performance, exhibits superior or comparable performance to relevant baselines, and achieves a large set of solutions with different fairness-utility trade-offs.
authors: Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
id: 2402.14937 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14937v1
updated: 2024-02-22T19:44:19Z
published: 2024-02-22T19:44:19Z
SoK: Analyzing Adversarial Examples: A Framework to Study Adversary Knowledge
Adversarial examples are malicious inputs to machine learning models that trigger a misclassification. This type of attack has been studied for close to a decade, and we find that there is a lack of study and formalization of adversary knowledge when mounting attacks. This has yielded a complex space of attack research with hard-to-compare threat models and attacks. We focus on the image classification domain and provide a theoretical framework to study adversary knowledge inspired by work in order theory. We present an adversarial example game, inspired by cryptographic games, to standardize attacks. We survey recent attacks in the image classification domain and classify their adversary's knowledge in our framework. From this systematization, we compile results that both confirm existing beliefs about adversary knowledge, such as the potency of information about the attacked model, and allow us to derive new conclusions on the difficulty associated with the white-box and transferable threat models; for example, transferable attacks might not be as difficult as previously thought.
authors: Lucas Fenaux, Florian Kerschbaum
id: 2402.14948 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14948v2
updated: 2024-02-26T14:59:58Z
published: 2024-02-22T20:07:02Z
Re-Examine Distantly Supervised NER: A New Benchmark and a Simple Approach
This paper delves into Named Entity Recognition (NER) under the framework of Distant Supervision (DS-NER), where the main challenge lies in the compromised quality of labels due to inherent errors such as false positives, false negatives, and positive type errors. We critically assess the efficacy of current DS-NER methodologies using a real-world benchmark dataset named QTL, revealing that their performance often does not meet expectations. To tackle the prevalent issue of label noise, we introduce a simple yet effective approach, Curriculum-based Positive-Unlabeled Learning (CuPUL), which strategically starts with "easy" and cleaner samples during the training process to enhance model resilience to noisy samples. Our empirical results highlight the capability of CuPUL to significantly reduce the impact of noisy labels and outperform existing methods. The QTL dataset and our code are available on GitHub.
authors: Yuepei Li, Kang Zhou, Qiao Qiao, Qing Wang, Qi Li
id: 2402.14949 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14949v1
updated: 2024-02-22T20:09:58Z
published: 2024-02-22T20:09:58Z
Enhancing Power Quality Event Classification with AI Transformer Models
Recently, there has been a growing interest in utilizing machine learning for accurate classification of power quality events (PQEs). However, most of these studies are performed assuming an ideal situation, while in reality, we can have measurement noise, DC offset, and variations in the voltage signal's amplitude and frequency. Building on the prior PQE classification works using deep learning, this paper proposes a deep-learning framework that leverages attention-enabled Transformers as a tool to accurately classify PQEs under the aforementioned considerations. The proposed framework can operate directly on the voltage signals with no need for a separate feature extraction or calculation phase. Our results show that the proposed framework outperforms recently proposed learning-based techniques. It can accurately classify PQEs under the aforementioned conditions with an accuracy varying between 91.43% and 99.81%, depending on the signal-to-noise ratio, DC offsets, and variations in the signal amplitude and frequency.
authors: Ahmad Mohammad Saber, Amr Youssef, Davor Svetinovic, Hatem Zeineldin, Deepa Kundur, Ehab El-Saadany
id: 2402.14951 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14951v1
updated: 2024-02-22T20:26:08Z
published: 2024-02-22T20:26:08Z
In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization
We study the \emph{in-context learning} (ICL) ability of a \emph{Linear Transformer Block} (LTB) that combines a linear attention component and a linear multi-layer perceptron (MLP) component. For ICL of linear regression with a Gaussian prior and a \emph{non-zero mean}, we show that LTB can achieve nearly Bayes optimal ICL risk. In contrast, using only linear attention must incur an irreducible additive approximation error. Furthermore, we establish a correspondence between LTB and one-step gradient descent estimators with learnable initialization ($\mathsf{GD}\text{-}\mathbf{\beta}$), in the sense that every $\mathsf{GD}\text{-}\mathbf{\beta}$ estimator can be implemented by an LTB estimator and every optimal LTB estimator that minimizes the in-class ICL risk is effectively a $\mathsf{GD}\text{-}\mathbf{\beta}$ estimator. Finally, we show that $\mathsf{GD}\text{-}\mathbf{\beta}$ estimators can be efficiently optimized with gradient flow, despite a non-convex training objective. Our results reveal that LTB achieves ICL by implementing $\mathsf{GD}\text{-}\mathbf{\beta}$, and they highlight the role of MLP layers in reducing approximation error.
authors: Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett
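One natural way to write such a one-step GD estimator with learnable initialization $\beta_0$ and step size $\eta$, fit on in-context examples $(x_i, y_i)$ and applied to a query $x_q$; the exact parameterization in the paper may differ.

```latex
% One gradient step on the in-context squared loss, starting from \beta_0:
\[
  \hat{y}_q \;=\; \Big\langle x_q,\;
    \beta_0 \;+\; \frac{\eta}{N}\sum_{i=1}^{N}
      \big(y_i - \langle x_i, \beta_0\rangle\big)\, x_i
  \Big\rangle .
\]
```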
id: 2402.14957 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14957v1
updated: 2024-02-22T20:36:24Z
published: 2024-02-22T20:36:24Z
The Common Stability Mechanism behind most Self-Supervised Learning Approaches
The last couple of years have witnessed tremendous progress in self-supervised learning (SSL), the success of which can be attributed to the introduction of useful inductive biases in the learning process to learn meaningful visual representations while avoiding collapse. These inductive biases and constraints manifest themselves in the form of different optimization formulations in the SSL techniques, e.g. by utilizing negative examples in a contrastive formulation, or exponential moving average and predictor in BYOL and SimSiam. In this paper, we provide a framework to explain the stability mechanism of these different SSL techniques: i) we discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO; ii) we provide an argument that despite different formulations these methods implicitly optimize a similar objective function, i.e. minimizing the magnitude of the expected representation over all data samples, or the mean of the data distribution, while maximizing the magnitude of the expected representation of individual samples over different data augmentations; iii) we provide mathematical and empirical evidence to support our framework. We formulate different hypotheses and test them using the Imagenet100 dataset.
authors: Abhishek Jha, Matthew B. Blaschko, Yuki M. Asano, Tinne Tuytelaars
id: 2402.14961 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14961v3
updated: 2024-07-03T00:31:18Z
published: 2024-02-22T20:49:04Z
Reinforcement Learning with Elastic Time Steps
Traditional Reinforcement Learning (RL) policies are typically implemented with fixed control rates, often disregarding the impact of control rate selection. This can lead to inefficiencies as the optimal control rate varies with task requirements. We propose the Multi-Objective Soft Elastic Actor-Critic (MOSEAC), an off-policy actor-critic algorithm that uses elastic time steps to dynamically adjust the control frequency. This approach minimizes computational resources by selecting the lowest viable frequency. We show that MOSEAC converges and produces stable policies at the theoretical level, and validate our findings in a real-time 3D racing game. MOSEAC significantly outperformed other variable time step approaches in terms of energy efficiency and task effectiveness. Additionally, MOSEAC demonstrated faster and more stable training, showcasing its potential for real-world RL applications in robotics.
authors: Dong Wang, Giovanni Beltrame
id: 2402.14966 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14966v1
updated: 2024-02-22T21:02:19Z
published: 2024-02-22T21:02:19Z
Smoothness Adaptive Hypothesis Transfer Learning
Many existing two-phase kernel-based hypothesis transfer learning algorithms employ the same kernel regularization across phases and rely on the known smoothness of functions to obtain optimality. Therefore, they fail to adapt to the varying and unknown smoothness between the target/source and their offset in practice. In this paper, we address these problems by proposing Smoothness Adaptive Transfer Learning (SATL), a two-phase kernel ridge regression (KRR)-based algorithm. We first prove that employing the misspecified fixed bandwidth Gaussian kernel in target-only KRR learning can achieve minimax optimality and derive an adaptive procedure to the unknown Sobolev smoothness. Leveraging these results, SATL employs Gaussian kernels in both phases so that the estimators can adapt to the unknown smoothness of the target/source and their offset function. We derive the minimax lower bound of the learning problem in excess risk and show that SATL enjoys a matching upper bound up to a logarithmic factor. The minimax convergence rate sheds light on the factors influencing transfer dynamics and demonstrates the superiority of SATL compared to non-transfer learning settings. While our main objective is a theoretical analysis, we also conduct several experiments to confirm our results.
authors: Haotian Lin, Matthew Reimherr
id: 2402.14973 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14973v2
updated: 2024-06-09T21:10:34Z
published: 2024-02-22T21:22:04Z
Introducing GenCeption for Multimodal LLM Benchmarking: You May Bypass Annotations
Multimodal Large Language Models (MLLMs) are commonly evaluated using costly annotated multimodal benchmarks. However, these benchmarks often struggle to keep pace with the rapidly advancing requirements of MLLM evaluation. We propose GenCeption, a novel and annotation-free MLLM evaluation framework that merely requires unimodal data to assess inter-modality semantic coherence and inversely reflects the models' inclination to hallucinate. Analogous to the popular DrawCeption game, GenCeption initiates with a non-textual sample and undergoes a series of iterative description and generation steps. Semantic drift across iterations is quantified using the GC@T metric. Our empirical findings validate GenCeption's efficacy, showing strong correlations with popular MLLM benchmarking results. GenCeption may be extended to mitigate training data contamination by utilizing ubiquitous, previously unseen unimodal data.
authors: Lele Cao, Valentin Buchner, Zineb Senane, Fangkai Yang
id: 2402.14974 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14974v1
updated: 2024-02-22T21:22:21Z
published: 2024-02-22T21:22:21Z
Towards Spatially-Lucid AI Classification in Non-Euclidean Space: An Application for MxIF Oncology Data
Given multi-category point sets from different place-types, our goal is to develop a spatially-lucid classifier that can distinguish between two classes based on the arrangements of their points. This problem is important for many applications, such as oncology, for analyzing immune-tumor relationships and designing new immunotherapies. It is challenging due to spatial variability and interpretability needs. Previously proposed techniques require dense training data or have limited ability to handle significant spatial variability within a single place-type. Most importantly, these deep neural network (DNN) approaches are not designed to work in non-Euclidean space, particularly point sets. Existing non-Euclidean DNN methods are limited to one-size-fits-all approaches. We explore a spatial ensemble framework that explicitly uses different training strategies, including weighted-distance learning rate and spatial domain adaptation, on various place-types for spatially-lucid classification. Experimental results on real-world datasets (e.g., MxIF oncology data) show that the proposed framework provides higher prediction accuracy than baseline methods.
authors: Majid Farhadloo, Arun Sharma, Jayant Gupta, Alexey Leontovich, Svetomir N. Markovic, Shashi Shekhar
id: 2402.14976 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14976v1
updated: 2024-02-22T21:25:20Z
published: 2024-02-22T21:25:20Z
Unsupervised Domain Adaptation within Deep Foundation Latent Spaces
The vision transformer-based foundation models, such as ViT or Dino-V2, are aimed at solving problems with little or no finetuning of features. Using a setting of prototypical networks, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning over the source or target domain. Through quantitative analysis, as well as qualitative interpretations of decision making, we demonstrate that the suggested method can improve upon existing baselines, as well as showcase the limitations of such an approach that remain to be addressed.
authors: Dmitry Kangin, Plamen Angelov
id: 2402.14977 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14977v1
updated: 2024-02-22T21:31:43Z
published: 2024-02-22T21:31:43Z
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Foundation model has become the backbone of the AI ecosystem. In particular, a foundation model can be used as a general-purpose feature extractor to build various downstream classifiers. However, foundation models are vulnerable to backdoor attacks and a backdoored foundation model is a single-point-of-failure of the AI ecosystem, e.g., multiple downstream classifiers inherit the backdoor vulnerabilities simultaneously. In this work, we propose Mudjacking, the first method to patch foundation models to remove backdoors. Specifically, given a misclassified trigger-embedded input detected after a backdoored foundation model is deployed, Mudjacking adjusts the parameters of the foundation model to remove the backdoor. We formulate patching a foundation model as an optimization problem and propose a gradient descent based method to solve it. We evaluate Mudjacking on both vision and language foundation models, eleven benchmark datasets, five existing backdoor attacks, and thirteen adaptive backdoor attacks. Our results show that Mudjacking can remove backdoor from a foundation model while maintaining its utility.
authors: Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
id: 2402.14979 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14979v2
updated: 2024-06-05T23:19:19Z
published: 2024-02-22T21:36:07Z
Optimizing Language Models for Human Preferences is a Causal Inference Problem
As large language models (LLMs) see greater use in academic and commercial settings, there is increasing interest in methods that allow language models to generate texts aligned with human preferences. In this paper, we present an initial exploration of language model optimization for human preferences from direct outcome datasets, where each sample consists of a text and an associated numerical outcome measuring the reader's response. We first propose that language model optimization should be viewed as a causal problem to ensure that the model correctly learns the relationship between the text and the outcome. We formalize this causal language optimization problem, and we develop a method--causal preference optimization (CPO)--that solves an unbiased surrogate objective for the problem. We further extend CPO with doubly robust CPO (DR-CPO), which reduces the variance of the surrogate objective while retaining provably strong guarantees on bias. Finally, we empirically demonstrate the effectiveness of (DR-)CPO in optimizing state-of-the-art LLMs for human preferences on direct outcome data, and we validate the robustness of DR-CPO under difficult confounding conditions.
authors: Victoria Lin, Eli Ben-Michael, Louis-Philippe Morency
id: 2402.14980 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14980v1
updated: 2024-02-22T21:41:27Z
published: 2024-02-22T21:41:27Z
Comparative Analysis of Data Preprocessing Methods, Feature Selection Techniques and Machine Learning Models for Improved Classification and Regression Performance on Imbalanced Genetic Data
Rapid advancements in genome sequencing have led to the collection of vast amounts of genomics data. Researchers may be interested in using machine learning models on such data to predict the pathogenicity or clinical significance of a genetic mutation. However, many genetic datasets contain imbalanced target variables that pose challenges to machine learning models: observations are skewed/imbalanced in regression tasks or class-imbalanced in classification tasks. Genetic datasets are also often high-cardinality and contain skewed predictor variables, which poses further challenges. We aimed to investigate the effects of data preprocessing, feature selection techniques, and model selection on the performance of models trained on these datasets. We measured performance with 5-fold cross-validation and compared averaged r-squared and accuracy metrics across different combinations of techniques. We found that outliers/skew in predictor or target variables did not pose a challenge to regression models. We also found that class-imbalanced target variables and skewed predictors had little to no impact on classification performance. Random forest was the best model to use for imbalanced regression tasks. While our study uses a genetic dataset as an example of a real-world application, our findings can be generalized to any similar datasets.
authors: Arshmeet Kaur, Morteza Sarmadi
id: 2402.14982 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14982v3
updated: 2024-07-09T02:21:21Z
published: 2024-02-22T21:44:58Z
Human Brain Exhibits Distinct Patterns When Listening to Fake Versus Real Audio: Preliminary Evidence
In this paper we study the variations in human brain activity when listening to real and fake audio. Our preliminary results suggest that the representations learned by a state-of-the-art deepfake audio detection algorithm do not exhibit clear distinct patterns between real and fake audio. In contrast, human brain activity, as measured by EEG, displays distinct patterns when individuals are exposed to fake versus real audio. This preliminary evidence enables future research directions in areas such as deepfake audio detection.
authors: Mahsa Salehi, Kalin Stefanov, Ehsan Shareghi
id: 2402.14983 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2402.14983v1
updated: 2024-02-22T21:46:24Z
published: 2024-02-22T21:46:24Z
Privacy-Enhancing Collaborative Information Sharing through Federated Learning -- A Case of the Insurance Industry
The report demonstrates the benefits (in terms of improved claims loss modeling) of harnessing the value of Federated Learning (FL) to learn a single model across multiple insurance industry datasets without requiring the datasets themselves to be shared from one company to another. The application of FL addresses two of the most pressing concerns: limited data volume and data variety, which are caused by privacy concerns, the rarity of claim events, the lack of informative rating factors, etc. During each round of FL, collaborators compute improvements on the model using their local private data, and these insights are combined to update a global model. Such aggregation of insights allows for an increase in the effectiveness of forecasting claims losses compared to models individually trained at each collaborator. Critically, this approach enables machine learning collaboration without the need for raw data to leave the compute infrastructure of each respective data owner. Additionally, the open-source framework, OpenFL, that is used in our experiments is designed so that it can be run using confidential computing as well as with additional algorithmic protections against leakage of information via the shared model updates. In such a way, FL is implemented as a privacy-enhancing collaborative learning technique that addresses the challenges posed by the sensitivity and privacy of data in traditional machine learning solutions. This paper's application of FL can also be expanded to other areas, including fraud detection, catastrophe modeling, etc., that have a similar need to incorporate data privacy into machine learning collaborations. Our framework and empirical results provide a foundation for future collaborations among insurers, regulators, academic researchers, and InsurTech experts.
authors: Panyi Dong, Zhiyu Quan, Brandon Edwards, Shih-han Wang, Runhuan Feng, Tianyang Wang, Patrick Foley, Prashant Shah
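A minimal FedAvg-style aggregation round of the kind a framework like OpenFL orchestrates; this generic sketch is not OpenFL's actual API.

```python
import numpy as np

def fedavg(local_weights, n_samples):
    """Sample-weighted average of collaborator model weights.

    local_weights: one list of numpy arrays per collaborator.
    n_samples: number of local training samples per collaborator.
    """
    total = sum(n_samples)
    return [sum(w[k] * (n / total) for w, n in zip(local_weights, n_samples))
            for k in range(len(local_weights[0]))]

# Each insurer trains locally on private claims data; only these weight
# updates, never raw records, reach the aggregator.
```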
null
null
2402.14987
null
null
http://arxiv.org/pdf/2402.14987v1
2024-02-22T21:55:41Z
2024-02-22T21:55:41Z
On the Performance of Empirical Risk Minimization with Smoothed Data
In order to circumvent statistical and computational hardness results in sequential decision-making, recent work has considered smoothed online learning, where the distribution of data at each time is assumed to have bounded likelihood ratio with respect to a base measure when conditioned on the history. While previous works have demonstrated the benefits of smoothness, they have either assumed that the base measure is known to the learner or have presented computationally inefficient algorithms applying only in special cases. This work investigates the more general setting where the base measure is \emph{unknown} to the learner, focusing in particular on the performance of Empirical Risk Minimization (ERM) with square loss when the data are well-specified and smooth. We show that in this setting, ERM is able to achieve sublinear error whenever a class is learnable with iid data; in particular, ERM achieves error scaling as $\tilde{O}(\sqrt{\mathrm{comp}(\mathcal{F}) \cdot T})$, where $\mathrm{comp}(\mathcal{F})$ is the statistical complexity of learning $\mathcal{F}$ with iid data. In so doing, we prove a novel norm comparison bound for smoothed data that comprises the first sharp norm comparison for dependent data applying to arbitrary, nonlinear function classes. We complement these results with a lower bound indicating that our analysis of ERM is essentially tight, establishing a separation in the performance of ERM between smoothed and iid data.
[ "['Adam Block' 'Alexander Rakhlin' 'Abhishek Shetty']" ]
null
null
2402.14988
null
null
http://arxiv.org/pdf/2402.14988v1
2024-02-22T21:56:20Z
2024-02-22T21:56:20Z
Verifiable Boosted Tree Ensembles
Verifiable learning advocates for training machine learning models amenable to efficient security verification. Prior research demonstrated that specific classes of decision tree ensembles -- called large-spread ensembles -- allow for robustness verification in polynomial time against any norm-based attacker. This study expands prior work on verifiable learning from basic ensemble methods (i.e., hard majority voting) to advanced boosted tree ensembles, such as those trained using XGBoost or LightGBM. Our formal results indicate that robustness verification is achievable in polynomial time when considering attackers based on the $L_\infty$-norm, but remains NP-hard for other norm-based attackers. Nevertheless, we present a pseudo-polynomial time algorithm to verify robustness against attackers based on the $L_p$-norm for any $p \in \mathbb{N} \cup \{0\}$, which in practice grants excellent performance. Our experimental evaluation shows that large-spread boosted ensembles are accurate enough for practical adoption, while being amenable to efficient security verification.
[ "['Stefano Calzavara' 'Lorenzo Cazzaro' 'Claudio Lucchese'\n 'Giulio Ermanno Pibiri']" ]
null
null
2402.14989
null
null
http://arxiv.org/pdf/2402.14989v4
2024-06-15T23:32:54Z
2024-02-22T22:00:03Z
Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data
Irregular sampling intervals and missing values in real-world time series data present challenges for conventional methods that assume consistent intervals and complete data. Neural Ordinary Differential Equations (Neural ODEs) offer an alternative approach, utilizing neural networks combined with ODE solvers to learn continuous latent representations through parameterized vector fields. Neural Stochastic Differential Equations (Neural SDEs) extend Neural ODEs by incorporating a diffusion term, although this addition is not trivial, particularly when addressing irregular intervals and missing values. Consequently, careful design of drift and diffusion functions is crucial for maintaining stability and enhancing performance, while incautious choices can result in adverse properties such as the absence of strong solutions, stochastic destabilization, or unstable Euler discretizations, significantly affecting Neural SDEs' performance. In this study, we propose three stable classes of Neural SDEs: Langevin-type SDE, Linear Noise SDE, and Geometric SDE. Then, we rigorously demonstrate their robustness in maintaining excellent performance under distribution shift, while effectively preventing overfitting. To assess the effectiveness of our approach, we conduct extensive experiments on four benchmark datasets for interpolation, forecasting, and classification tasks, and analyze the robustness of our methods with 30 public datasets under different missing rates. Our results demonstrate the efficacy of the proposed method in handling real-world irregular time series data.
[ "['YongKyung Oh' 'Dongyoung Lim' 'Sungil Kim']" ]
null
null
2402.14991
null
null
http://arxiv.org/pdf/2402.14991v3
2024-06-03T15:42:55Z
2024-02-22T22:03:16Z
Quantum Theory and Application of Contextual Optimal Transport
Optimal Transport (OT) has fueled machine learning (ML) across many domains. When paired data measurements $(\boldsymbol{\mu}, \boldsymbol{\nu})$ are coupled to covariates, a challenging conditional distribution learning setting arises. Existing approaches for learning a \textit{global} transport map parameterized through a potentially unseen context utilize Neural OT and largely rely on Brenier's theorem. Here, we propose a first-of-its-kind quantum computing formulation for amortized optimization of contextualized transportation plans. We exploit a direct link between doubly stochastic matrices and unitary operators, thus unravelling a natural connection between OT and quantum computation. We verify our method (QontOT) on synthetic and real data by predicting variations in cell type distributions conditioned on drug dosage. Importantly, we conduct a 24-qubit hardware experiment on a task challenging for classical computers and report a performance that cannot be matched with our classical neural OT approach. In sum, this is a first step toward learning to predict contextualized transportation plans through quantum computing.
[ "['Nicola Mariella' 'Albert Akhriev' 'Francesco Tacchino' 'Christa Zoufal'\n 'Juan Carlos Gonzalez-Espitia' 'Benedek Harsanyi' 'Eugene Koskin'\n 'Ivano Tavernelli' 'Stefan Woerner' 'Marianna Rapsomaniki' 'Sergiy Zhuk'\n 'Jannis Born']" ]
null
null
2402.14992
null
null
http://arxiv.org/pdf/2402.14992v2
2024-05-26T22:27:23Z
2024-02-22T22:05:23Z
tinyBenchmarks: evaluating LLMs with fewer examples
The versatility of large language models (LLMs) has led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples, making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.
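The core idea can be illustrated with a back-of-the-envelope sketch: estimate benchmark accuracy from a small subsample of examples and attach a confidence interval. Note this uses a random subsample rather than the paper's curated example selection, and `full_results` is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
full_results = rng.random(14000) < 0.63      # hypothetical per-example correctness on ~14K items
subset = rng.choice(full_results, size=100, replace=False)

est = subset.mean()
se = np.sqrt(est * (1 - est) / subset.size)  # binomial standard error
print(f"estimated accuracy {est:.3f} +/- {1.96 * se:.3f} (true {full_results.mean():.3f})")
```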
[ "['Felipe Maia Polo' 'Lucas Weber' 'Leshem Choshen' 'Yuekai Sun'\n 'Gongjun Xu' 'Mikhail Yurochkin']" ]
null
null
2402.15000
null
null
http://arxiv.org/pdf/2402.15000v1
2024-02-22T22:28:46Z
2024-02-22T22:28:46Z
Divide-or-Conquer? Which Part Should You Distill Your LLM?
Recent methods have demonstrated that Large Language Models (LLMs) can solve reasoning tasks better when they are encouraged to solve subtasks of the main task first. In this paper, we devise a similar strategy that breaks down reasoning tasks into a problem decomposition phase and a problem solving phase, and show that the strategy is able to outperform a single-stage solution. Further, we hypothesize that the decomposition should be easier to distill into a smaller model than the problem solving phase, because the latter requires large amounts of domain knowledge while the former only requires learning general problem solving strategies. We propose methods to distill these two capabilities and evaluate their impact on reasoning outcomes and inference cost. We find that we can distill the problem decomposition phase and at the same time achieve good generalization across tasks, datasets, and models. However, it is harder to distill the problem solving capability without losing performance, and the resulting distilled model struggles with generalization. These results indicate that by using smaller, distilled problem decomposition models in combination with problem solving LLMs we can achieve reasoning with cost-efficient inference and local adaptation.
[ "['Zhuofeng Wu' 'He Bai' 'Aonan Zhang' 'Jiatao Gu' 'VG Vinod Vydiswaran'\n 'Navdeep Jaitly' 'Yizhe Zhang']" ]
null
null
2402.15005
null
null
http://arxiv.org/pdf/2402.15005v1
2024-02-22T22:49:35Z
2024-02-22T22:49:35Z
Comparison of Machine Learning Classification Algorithms and Application to the Framingham Heart Study
The use of machine learning algorithms in healthcare can amplify social injustices and health inequities. While the exacerbation of biases can occur and compound during problem selection, data collection, and outcome definition, this research pertains to some generalizability impediments that occur during the development and post-deployment of machine learning classification algorithms. Using the Framingham coronary heart disease data as a case study, we show how to effectively select a probability cutoff to convert a regression model for a dichotomous variable into a classifier. We then compare the sampling distribution of the predictive performance of eight machine learning classification algorithms under four training/testing scenarios to test their generalizability and their potential to perpetuate biases. We show that both the Extreme Gradient Boosting and Support Vector Machine algorithms are flawed when trained on an unbalanced dataset. We introduce the double discriminant scoring of type I and show that it is the most generalizable approach, as it consistently outperforms the other classification algorithms regardless of the training/testing scenario. Finally, we introduce a methodology to extract an optimal variable hierarchy for a classification algorithm, and illustrate it on the overall, male, and female Framingham coronary heart disease data.
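One standard way to select such a probability cutoff is to maximize Youden's J statistic on the ROC curve, sketched below on synthetic unbalanced data; this is a generic illustration, not the paper's double discriminant scoring.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.85], random_state=0)  # unbalanced
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, probs)
cutoff = thresholds[np.argmax(tpr - fpr)]  # maximize Youden's J = TPR - FPR
print(f"selected cutoff: {cutoff:.3f}")
```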
[ "['Nabil Kahouadji']" ]
null
null
2402.15006
null
null
http://arxiv.org/pdf/2402.15006v1
2024-02-22T22:54:41Z
2024-02-22T22:54:41Z
opp/ai: Optimistic Privacy-Preserving AI on Blockchain
The convergence of Artificial Intelligence (AI) and blockchain technology is reshaping the digital world, offering decentralized, secure, and efficient AI services on blockchain platforms. Despite the promise, the high computational demands of AI on blockchain raise significant privacy and efficiency concerns. The Optimistic Privacy-Preserving AI (opp/ai) framework is introduced as a pioneering solution to these issues, striking a balance between privacy protection and computational efficiency. The framework integrates Zero-Knowledge Machine Learning (zkML) for privacy with Optimistic Machine Learning (opML) for efficiency, creating a hybrid model tailored for blockchain AI services. This study presents the opp/ai framework, delves into the privacy features of zkML, and assesses the framework's performance and adaptability across different scenarios.
[ "['Cathie So' 'KD Conway' 'Xiaohang Yu' 'Suning Yao' 'Kartin Wong']" ]
null
null
2402.15010
null
null
http://arxiv.org/pdf/2402.15010v2
2024-06-09T15:11:31Z
2024-02-22T23:11:08Z
How Important Is Tokenization in French Medical Masked Language Models?
Subword tokenization has become the prevailing standard in the field of natural language processing (NLP) over recent years, primarily due to the widespread utilization of pre-trained language models. This shift began with Byte-Pair Encoding (BPE) and was later followed by the adoption of SentencePiece and WordPiece. While subword tokenization consistently outperforms character and word-level tokenization, the precise factors contributing to its success remain unclear. Key aspects such as the optimal segmentation granularity for diverse tasks and languages, the influence of data sources on tokenizers, and the role of morphological information in Indo-European languages remain insufficiently explored. This is particularly pertinent for biomedical terminology, characterized by specific rules governing morpheme combinations. Despite the agglutinative nature of biomedical terminology, existing language models do not explicitly incorporate this knowledge, leading to inconsistent tokenization strategies for common terms. In this paper, we seek to delve into the complexities of subword tokenization in the French biomedical domain across a variety of NLP tasks and pinpoint areas where further enhancements can be made. We analyze classical tokenization algorithms, including BPE and SentencePiece, and introduce an original tokenization strategy that integrates morpheme-enriched word segmentation into existing tokenization methods.
[ "['Yanis Labrak' 'Adrien Bazoge' 'Beatrice Daille' 'Mickael Rouvier'\n 'Richard Dufour']" ]
null
null
2402.15017
null
null
http://arxiv.org/pdf/2402.15017v1
2024-02-22T23:29:42Z
2024-02-22T23:29:42Z
Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning
Foundation models have emerged as a powerful tool for many AI problems. Despite the tremendous success of foundation models, effective adaptation to new tasks, particularly those with limited labels, remains an open question and lacks theoretical understanding. An emerging solution with recent success in vision and NLP involves finetuning a foundation model on a selection of relevant tasks, before its adaptation to a target task with limited labeled samples. In this paper, we study the theoretical justification of this multitask finetuning approach. Our theoretical analysis reveals that with a diverse set of related tasks, this multitask finetuning leads to reduced error in the target task, in comparison to directly adapting the same pretrained model. We quantify the relationship between finetuning tasks and target tasks by diversity and consistency metrics, and further propose a practical task selection algorithm. We substantiate our theoretical claims with extensive empirical evidence. Further, we present results affirming that our task selection algorithm adeptly chooses related finetuning tasks, providing advantages to the model performance on target tasks. We believe our study sheds new light on the effective adaptation of foundation models to new tasks that lack abundant labels. Our code is available at https://github.com/OliverXUZY/Foudation-Model_Multitask.
[ "['Zhuoyan Xu' 'Zhenmei Shi' 'Junyi Wei' 'Fangzhou Mu' 'Yin Li'\n 'Yingyu Liang']" ]
null
null
2402.15018
null
null
http://arxiv.org/pdf/2402.15018v2
2024-06-06T22:31:48Z
2024-02-22T23:31:22Z
Unintended Impacts of LLM Alignment on Global Representation
Before being deployed for user-facing applications, developers align Large Language Models (LLMs) to user preferences through a variety of procedures, such as Reinforcement Learning From Human Feedback (RLHF) and Direct Preference Optimization (DPO). Current evaluations of these procedures focus on benchmarks of instruction following, reasoning, and truthfulness. However, human preferences are not universal, and aligning to specific preference sets may have unintended effects. We explore how alignment impacts performance along three axes of global representation: English dialects, multilingualism, and opinions from and about countries worldwide. Our results show that current alignment procedures create disparities between English dialects and global opinions. We find alignment improves capabilities in several languages. We conclude by discussing design decisions that led to these unintended impacts and recommendations for more equitable preference tuning. We make our code and data publicly available on Github.
[ "['Michael J. Ryan' 'William Held' 'Diyi Yang']" ]
null
null
2402.15019
null
null
http://arxiv.org/pdf/2402.15019v1
2024-02-22T23:36:18Z
2024-02-22T23:36:18Z
Consistency-Guided Temperature Scaling Using Style and Content Information for Out-of-Domain Calibration
Research interests in the robustness of deep neural networks against domain shifts have been rapidly increasing in recent years. Most existing works, however, focus on improving the accuracy of the model, not the calibration performance which is another important requirement for trustworthy AI systems. Temperature scaling (TS), an accuracy-preserving post-hoc calibration method, has been proven to be effective in in-domain settings, but not in out-of-domain (OOD) due to the difficulty in obtaining a validation set for the unseen domain beforehand. In this paper, we propose consistency-guided temperature scaling (CTS), a new temperature scaling strategy that can significantly enhance the OOD calibration performance by providing mutual supervision among data samples in the source domains. Motivated by our observation that over-confidence stemming from inconsistent sample predictions is the main obstacle to OOD calibration, we propose to guide the scaling process by taking consistencies into account in terms of two different aspects -- style and content -- which are the key components that can well-represent data samples in multi-domain settings. Experimental results demonstrate that our proposed strategy outperforms existing works, achieving superior OOD calibration performance on various datasets. This can be accomplished by employing only the source domains without compromising accuracy, making our scheme directly applicable to various trustworthy AI systems.
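For reference, a minimal sketch of the standard temperature scaling that CTS builds on: fit a single temperature on held-out logits by minimizing negative log-likelihood. The consistency-guided part of CTS is not reproduced here, and the logits below are synthetic.

```python
import torch

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    log_t = torch.zeros(1, requires_grad=True)       # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

logits = torch.randn(256, 10) * 4                    # over-confident synthetic logits
labels = torch.randint(0, 10, (256,))
print("fitted temperature:", fit_temperature(logits, labels))
```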
[ "['Wonjeong Choi' 'Jungwuk Park' 'Dong-Jun Han' 'Younghyun Park'\n 'Jaekyun Moon']" ]
null
null
2402.15020
null
null
http://arxiv.org/pdf/2402.15020v2
2024-07-09T09:32:52Z
2024-02-22T23:36:26Z
Probabilistically-Sound Beam Search with Masked Language Models
Beam search with masked language models (MLMs) is challenging in part because joint probability distributions over sequences are not readily available, unlike for autoregressive models. However, estimating such distributions has important domain-specific applications such as ancient text restoration and protein engineering. Here we present probabilistically-sound methods for beam search with MLMs. First, we clarify the conditions under which it is theoretically sound to perform text infilling with MLMs using standard beam search. When these conditions fail, we provide a probabilistically-sound modification with no additional computational complexity and demonstrate that it is superior to the aforementioned beam search in the expected conditions. We then present empirical results comparing several infilling approaches with MLMs across several domains.
[ "['Creston Brooks' 'Robert Calef' 'Charlie Cowen-Breen' 'Anna Sappington']" ]
null
null
2402.15025
null
null
http://arxiv.org/pdf/2402.15025v2
2024-05-18T15:54:21Z
2024-02-22T23:58:26Z
Practice Makes Perfect: Planning to Learn Skill Parameter Policies
One promising approach towards effective robot decision making in complex, long-horizon tasks is to sequence together parameterized skills. We consider a setting where a robot is initially equipped with (1) a library of parameterized skills, (2) an AI planner for sequencing together the skills given a goal, and (3) a very general prior distribution for selecting skill parameters. Once deployed, the robot should rapidly and autonomously learn to improve its performance by specializing its skill parameter selection policy to the particular objects, goals, and constraints in its environment. In this work, we focus on the active learning problem of choosing which skills to practice to maximize expected future task success. We propose that the robot should estimate the competence of each skill, extrapolate the competence (asking: "how much would the competence improve through practice?"), and situate the skill in the task distribution through competence-aware planning. This approach is implemented within a fully autonomous system where the robot repeatedly plans, practices, and learns without any environment resets. Through experiments in simulation, we find that our approach learns effective parameter policies more sample-efficiently than several baselines. Experiments in the real world demonstrate our approach's ability to handle noise from perception and control and to improve the robot's ability to solve two long-horizon mobile-manipulation tasks after a few hours of autonomous practice. Project website: http://ees.csail.mit.edu
[ "['Nishanth Kumar' 'Tom Silver' 'Willie McClinton' 'Linfeng Zhao'\n 'Stephen Proulx' 'Tomás Lozano-Pérez' 'Leslie Pack Kaelbling'\n 'Jennifer Barry']" ]
null
null
2402.15038
null
null
http://arxiv.org/pdf/2402.15038v1
2024-02-23T01:19:30Z
2024-02-23T01:19:30Z
Dynamics-Guided Diffusion Model for Robot Manipulator Design
We present the Dynamics-Guided Diffusion Model, a data-driven framework for generating manipulator geometry designs for a given manipulation task. Instead of training different design models for each task, our approach employs a learned dynamics network shared across tasks. For a new manipulation task, we first decompose it into a collection of individual motion targets which we call the target interaction profile, where each individual motion can be modeled by the shared dynamics network. The design objective constructed from the target and predicted interaction profiles provides a gradient to guide the refinement of finger geometry for the task. This refinement process is executed as a classifier-guided diffusion process, where the design objective acts as the classifier guidance. We evaluate our framework on various manipulation tasks, under the sensor-less setting using only an open-loop parallel jaw motion. Our generated designs outperform optimization-based and unguided diffusion baselines by a relative 31.5% and 45.3% in average manipulation success rate. With the ability to generate a design within 0.8 seconds, our framework could facilitate rapid design iteration and enhance the adoption of data-driven approaches for robotic mechanism design.
[ "['Xiaomeng Xu' 'Huy Ha' 'Shuran Song']" ]
null
null
2402.15039
null
null
http://arxiv.org/pdf/2402.15039v1
2024-02-23T01:22:32Z
2024-02-23T01:22:32Z
Automatic Description of Rock Thin Sections: A Web Application
The identification and characterization of various rock types is one of the fundamental activities for geology and related areas such as mining, petroleum, environment, industry, and construction. Traditionally, a human specialist is responsible for analyzing and explaining details about the type, composition, texture, shape, and other properties using rock samples collected in-situ or prepared in a laboratory. The results are subjective, depend on the specialist's experience, and require a large investment of time and effort. The present proposal uses artificial intelligence techniques combining computer vision and natural language processing to generate a textual and verbal description from a thin section image of rock. We build a dataset of images and their respective textual descriptions for the training of a model that associates the relevant features of the image extracted by EfficientNetB7 with the textual description generated by a Transformer network, reaching an accuracy value of 0.892 and a BLEU value of 0.71. This model can be a useful resource for research, professional, and academic work, so it has been deployed through a Web application for public use.
[ "['Stalyn Paucar' 'Christian Mejía-Escobar y Víctor Collaguazo']" ]
null
null
2402.15043
null
null
http://arxiv.org/pdf/2402.15043v2
2024-06-03T06:02:39Z
2024-02-23T01:30:39Z
KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination status instead of accurately gauging model performance. In this paper, we introduce KIEval, a Knowledge-grounded Interactive Evaluation framework, which incorporates an LLM-powered "interactor" role for the first time to accomplish a dynamic contamination-resilient evaluation. Starting with a question in a conventional LLM benchmark involving domain-specific knowledge, KIEval utilizes dynamically generated, multi-round, and knowledge-focused dialogues to determine whether a model's response is merely a recall of benchmark answers or demonstrates a deep comprehension to apply knowledge in more complex conversations. Extensive experiments on seven leading LLMs across five datasets validate KIEval's effectiveness and generalization. We also reveal that data contamination brings no contribution or even negative effect to models' real-world applicability and understanding, and existing contamination detection methods for LLMs can only identify contamination in pre-training but not during supervised fine-tuning.
[ "['Zhuohao Yu' 'Chang Gao' 'Wenjin Yao' 'Yidong Wang' 'Wei Ye'\n 'Jindong Wang' 'Xing Xie' 'Yue Zhang' 'Shikun Zhang']" ]
null
null
2402.15044
null
null
http://arxiv.org/pdf/2402.15044v1
2024-02-23T01:34:00Z
2024-02-23T01:34:00Z
Fiducial Focus Augmentation for Facial Landmark Detection
Deep learning methods have led to significant improvements in the performance on the facial landmark detection (FLD) task. However, detecting landmarks in challenging settings, such as head pose changes, exaggerated expressions, or uneven illumination, continues to be a challenge due to high variability and insufficient samples. This inadequacy can be attributed to the model's inability to effectively acquire appropriate facial structure information from the input images. To address this, we propose a novel image augmentation technique specifically designed for the FLD task to enhance the model's understanding of facial structures. To effectively utilize the newly proposed augmentation technique, we employ a Siamese architecture-based training mechanism with a Deep Canonical Correlation Analysis (DCCA)-based loss to achieve collective learning of high-level feature representations from two different views of the input images. Furthermore, we employ a Transformer + CNN-based network with a custom hourglass module as the robust backbone for the Siamese framework. Extensive experiments show that our approach outperforms multiple state-of-the-art approaches across various benchmark datasets.
[ "['Purbayan Kar' 'Vishal Chudasama' 'Naoyuki Onoe' 'Pankaj Wasnik'\n 'Vineeth Balasubramanian']" ]
null
null
2402.15053
null
null
http://arxiv.org/pdf/2402.15053v1
2024-02-23T02:14:44Z
2024-02-23T02:14:44Z
Nonlinear Bayesian optimal experimental design using logarithmic Sobolev inequalities
We study the problem of selecting $k$ experiments from a larger candidate pool, where the goal is to maximize mutual information (MI) between the selected subset and the underlying parameters. Finding the exact solution to this combinatorial optimization problem is computationally costly, not only due to the complexity of the combinatorial search but also the difficulty of evaluating MI in nonlinear/non-Gaussian settings. We propose greedy approaches based on new computationally inexpensive lower bounds for MI, constructed via log-Sobolev inequalities. We demonstrate that our method outperforms random selection strategies, Gaussian approximations, and nested Monte Carlo (NMC) estimators of MI in various settings, including optimal design for nonlinear models with non-additive noise.
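The greedy loop itself is simple; the paper's contribution is the cheap MI lower bound that scores each candidate. In the sketch below, `mi_lower_bound` is a Gaussian log-determinant stand-in for the log-Sobolev-based bound, an assumption made for illustration.

```python
import numpy as np

def mi_lower_bound(subset, cov):
    idx = list(subset)
    _, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
    return 0.5 * logdet                      # Gaussian surrogate for the true bound

def greedy_select(n_candidates, k, cov):
    selected = []
    for _ in range(k):
        gains = [(mi_lower_bound(selected + [j], cov), j)
                 for j in range(n_candidates) if j not in selected]
        selected.append(max(gains)[1])       # add the candidate with the best score
    return selected

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
print(greedy_select(30, k=5, cov=A @ A.T + 30 * np.eye(30)))
```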
[ "['Fengyi Li' 'Ayoub Belhadji' 'Youssef Marzouk']" ]
null
null
2402.15055
null
null
http://arxiv.org/pdf/2402.15055v1
2024-02-23T02:15:47Z
2024-02-23T02:15:47Z
Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions
In this paper, we investigate the interplay between attention heads and specialized "next-token" neurons in the multilayer perceptron (MLP) layers that predict specific tokens. By prompting an LLM like GPT-4 to explain these model internals, we can elucidate attention mechanisms that activate certain next-token neurons. Our analysis identifies attention heads that recognize contexts relevant to predicting a particular token, activating the associated neuron through the residual connection. We focus specifically on heads in earlier layers consistently activating the same next-token neuron across similar prompts. Exploring these differential activation patterns reveals that heads that specialize in distinct linguistic contexts are tied to generating certain tokens. Overall, our method combines neural explanations and probing isolated components to illuminate how attention enables context-dependent, specialized processing in LLMs.
[ "['Clement Neo' 'Shay B. Cohen' 'Fazl Barez']" ]
null
null
2402.15058
null
null
http://arxiv.org/pdf/2402.15058v1
2024-02-23T02:19:26Z
2024-02-23T02:19:26Z
Mixup Barcodes: Quantifying Geometric-Topological Interactions between Point Clouds
We combine standard persistent homology with image persistent homology to define a novel way of characterizing shapes and interactions between them. In particular, we introduce: (1) a mixup barcode, which captures geometric-topological interactions (mixup) between two point sets in arbitrary dimension; (2) simple summary statistics, total mixup and total percentage mixup, which quantify the complexity of the interactions as a single number; (3) a software tool for playing with the above. As a proof of concept, we apply this tool to a problem arising from machine learning. In particular, we study the disentanglement in embeddings of different classes. The results suggest that topological mixup is a useful method for characterizing interactions for low and high-dimensional data. Compared to the typical usage of persistent homology, the new tool is sensitive to the geometric locations of the topological features, which is often desirable.
[ "['Hubert Wagner' 'Nickolas Arustamyan' 'Matthew Wheeler' 'Peter Bubenik']" ]
null
null
2402.15061
null
null
http://arxiv.org/pdf/2402.15061v1
2024-02-23T02:24:15Z
2024-02-23T02:24:15Z
Fine-tuning Large Language Models for Domain-specific Machine Translation
Large language models (LLMs) have made significant progress in machine translation (MT). However, their potential in domain-specific MT remains under-explored. Current LLM-based MT systems still face several challenges. First, for LLMs with in-context learning, their effectiveness is highly sensitive to input translation examples, and processing them can increase inference costs. They often require extra post-processing due to over-generation. Second, LLMs with fine-tuning on domain-specific data often require high training costs for domain adaptation, and may weaken the zero-shot MT capabilities of LLMs due to over-specialization. The aforementioned methods can struggle to translate rare words in domain transfer scenarios. To address these challenges, this paper proposes a prompt-oriented fine-tuning method, denoted as LlamaIT, to effectively and efficiently fine-tune a general-purpose LLM for domain-specific MT tasks. First, we construct a task-specific mix-domain dataset, which is then used to fine-tune the LLM with LoRA. This can eliminate the need for input translation examples, post-processing, or over-specialization. By zero-shot prompting with instructions, we adapt the MT tasks to the target domain at inference time. To further elicit the MT capability for rare words, we construct new prompts by incorporating domain-specific bilingual vocabulary. We also conduct extensive experiments on both publicly available and self-constructed datasets. The results show that our LlamaIT can significantly enhance the domain-specific MT capabilities of the LLM, meanwhile preserving its zero-shot MT capabilities.
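A hedged sketch of the LoRA fine-tuning step described above, using the Hugging Face peft library. The model name, rank, and target modules are illustrative assumptions, not the paper's exact configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# hypothetical base model; substitute any causal LM checkpoint you have access to
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections, a common choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # only the low-rank adapters are trainable
```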
[ "['Jiawei Zheng' 'Hanghai Hong' 'Xiaoli Wang' 'Jingsong Su' 'Yonggui Liang'\n 'Shikai Wu']" ]
null
null
2402.15062
null
null
http://arxiv.org/pdf/2402.15062v1
2024-02-23T02:24:36Z
2024-02-23T02:24:36Z
Gotcha! Don't trick me with unanswerable questions! Self-aligning Large Language Models for Responding to Unknown Questions
Despite the remarkable abilities of Large Language Models (LLMs) to answer questions, they often display a considerable level of overconfidence even when the question does not have a definitive answer. To avoid providing hallucinated answers to these unknown questions, existing studies typically investigate approaches to refusing to answer them. In this work, we propose a novel and scalable self-alignment method that utilizes the LLM itself to enhance its ability to respond to different types of unknown questions, being capable of not only refusing to answer but also providing an explanation of the unanswerability of unknown questions. Specifically, the Self-Align method first employs a two-stage class-aware self-augmentation approach to generate a large amount of unknown question-response data. Then we conduct disparity-driven self-curation to select qualified data for fine-tuning the LLM itself, aligning the responses to unknown questions as desired. Experimental results on two datasets across four types of unknown questions validate the superiority of the Self-Align method over existing baselines in terms of three types of task formulation.
[ "['Yang Deng' 'Yong Zhao' 'Moxin Li' 'See-Kiong Ng' 'Tat-Seng Chua']" ]
null
null
2402.15070
null
null
http://arxiv.org/pdf/2402.15070v1
2024-02-23T03:15:10Z
2024-02-23T03:15:10Z
Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting
One-shot Federated Learning (OFL) has become a promising learning paradigm, enabling the training of a global server model via a single communication round. In OFL, the server model is aggregated by distilling knowledge from all client models (the ensemble), which are also responsible for synthesizing samples for distillation. In this regard, advanced works show that the performance of the server model is intrinsically related to the quality of the synthesized data and the ensemble model. To promote OFL, we introduce a novel framework, Co-Boosting, in which synthesized data and the ensemble model mutually enhance each other progressively. Specifically, Co-Boosting leverages the current ensemble model to synthesize higher-quality samples in an adversarial manner. These hard samples are then employed to promote the quality of the ensemble model by adjusting the ensembling weights for each client model. Consequently, Co-Boosting periodically achieves high-quality data and ensemble models. Extensive experiments demonstrate that Co-Boosting can substantially outperform existing baselines under various settings. Moreover, Co-Boosting eliminates the need for adjustments to the client's local training, requires no additional data or model transmission, and allows client models to have heterogeneous architectures.
[ "['Rong Dai' 'Yonggang Zhang' 'Ang Li' 'Tongliang Liu' 'Xun Yang' 'Bo Han']" ]
null
null
2402.15073
null
null
http://arxiv.org/pdf/2402.15073v1
2024-02-23T03:27:17Z
2024-02-23T03:27:17Z
Cost-Adaptive Recourse Recommendation by Adaptive Preference Elicitation
Algorithmic recourse recommends a cost-efficient action to a subject to reverse an unfavorable machine learning classification decision. Most existing methods in the literature generate recourse under the assumption of complete knowledge about the cost function. In real-world practice, subjects could have distinct preferences, leading to incomplete information about the underlying cost function of the subject. This paper proposes a two-step approach integrating preference learning into the recourse generation problem. In the first step, we design a question-answering framework to sequentially refine the confidence set of the subject's Mahalanobis cost matrix. Then, we generate recourse using two methods, gradient-based and graph-based cost-adaptive recourse, that ensure validity while considering the whole confidence set of the cost matrix. The numerical evaluation demonstrates the benefits of our approach over state-of-the-art baselines in delivering cost-efficient recourse recommendations.
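A toy sketch of the gradient-based variant under a Mahalanobis cost $c(x, x_0) = \sqrt{(x-x_0)^\top M (x-x_0)}$: ascend the classifier score while penalizing movement that the cost matrix $M$ deems expensive. All names and constants are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def mahalanobis_cost(x, x0, M):
    d = x - x0
    return np.sqrt(d @ M @ d)

def recourse_step(x, x0, M, grad_clf, lam=1.0, lr=0.05):
    d = x - x0
    grad_cost = M @ d / max(mahalanobis_cost(x, x0, M), 1e-8)  # gradient of sqrt cost
    return x + lr * (grad_clf(x) - lam * grad_cost)            # ascend score, penalize cost

x0 = np.zeros(3)
M = np.diag([1.0, 4.0, 0.25])               # feature 2 is expensive, feature 3 is cheap
w = np.array([1.0, 1.0, 1.0])               # gradient of a linear classifier score

x = x0.copy()
for _ in range(200):
    x = recourse_step(x, x0, M, grad_clf=lambda z: w)
print(x)                                    # displacement concentrates on cheap features
```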
[ "['Duy Nguyen' 'Bao Nguyen' 'Viet Anh Nguyen']" ]
null
null
2402.15082
null
null
http://arxiv.org/pdf/2402.15082v2
2024-06-06T15:11:29Z
2024-02-23T03:59:18Z
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for adapting pre-trained language models to various tasks efficiently. Recently, there has been a growing interest in transferring knowledge from one or multiple tasks to the downstream target task to achieve performance improvements. However, current approaches typically either train adapters on individual tasks or distill shared knowledge from source tasks, failing to fully exploit task-specific knowledge and the correlation between source and target tasks. To overcome these limitations, we propose PEMT, a novel parameter-efficient fine-tuning framework based on multi-task transfer learning. PEMT extends the mixture-of-experts (MoE) framework to capture the transferable knowledge as a weighted combination of adapters trained on source tasks. These weights are determined by a gated unit, measuring the correlation between the target and each source task using task description prompt vectors. To fully exploit the task-specific knowledge, we also propose the Task Sparsity Loss to improve the sparsity of the gated unit. We conduct experiments on a broad range of tasks over 17 datasets. The experimental results demonstrate our PEMT yields stable improvements over full fine-tuning, and state-of-the-art PEFT and knowledge transferring methods on various tasks. The results highlight the effectiveness of our method which is capable of sufficiently exploiting the knowledge and correlation features across multiple tasks.
[ "['Zhisheng Lin' 'Han Fu' 'Chenghao Liu' 'Zhuo Li' 'Jianling Sun']" ]
null
null
2402.15089
null
null
http://arxiv.org/pdf/2402.15089v1
2024-02-23T04:23:33Z
2024-02-23T04:23:33Z
AttributionBench: How Hard is Automatic Attribution Evaluation?
Modern generative search engines enhance the reliability of large language model (LLM) responses by providing cited evidence. However, evaluating the answer's attribution, i.e., whether every claim within the generated responses is fully supported by its cited evidence, remains an open problem. This verification, traditionally dependent on costly human evaluation, underscores the urgent need for automatic attribution evaluation methods. To bridge the gap in the absence of standardized benchmarks for these methods, we present AttributionBench, a comprehensive benchmark compiled from various existing attribution datasets. Our extensive experiments on AttributionBench reveal the challenges of automatic attribution evaluation, even for state-of-the-art LLMs. Specifically, our findings show that even a fine-tuned GPT-3.5 only achieves around 80% macro-F1 under a binary classification formulation. A detailed analysis of more than 300 error cases indicates that a majority of failures stem from the model's inability to process nuanced information, and from the discrepancy between the information available to the model and that available to human annotators.
[ "['Yifei Li' 'Xiang Yue' 'Zeyi Liao' 'Huan Sun']" ]
null
null
2402.15095
null
null
http://arxiv.org/pdf/2402.15095v1
2024-02-23T04:58:54Z
2024-02-23T04:58:54Z
The Umeyama algorithm for matching correlated Gaussian geometric models in the low-dimensional regime
Motivated by the problem of matching two correlated random geometric graphs, we study the problem of matching two Gaussian geometric models correlated through a latent node permutation. Specifically, given an unknown permutation $\pi^*$ on $\{1,\ldots,n\}$ and given $n$ i.i.d. pairs of correlated Gaussian vectors $\{X_{\pi^*(i)}, Y_i\}$ in $\mathbb{R}^d$ with noise parameter $\sigma$, we consider two types of (correlated) weighted complete graphs with edge weights given by $A_{i,j}=\langle X_i, X_j \rangle$, $B_{i,j}=\langle Y_i, Y_j \rangle$. The goal is to recover the hidden vertex correspondence $\pi^*$ based on the observed matrices $A$ and $B$. For the low-dimensional regime where $d=O(\log n)$, Wang, Wu, Xu, and Yolou [WWXY22+] established the information thresholds for exact and almost exact recovery in matching correlated Gaussian geometric models. They also conducted numerical experiments for the classical Umeyama algorithm. In our work, we prove that this algorithm achieves exact recovery of $\pi^*$ when the noise parameter $\sigma=o(d^{-3}n^{-2/d})$, and almost exact recovery when $\sigma=o(d^{-3}n^{-1/d})$. Our results approach the information thresholds up to a $\operatorname{poly}(d)$ factor in the low-dimensional regime.
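For context, the classical Umeyama algorithm analyzed here reduces to a spectral step: align eigenvector magnitudes of the two weight matrices and round to a permutation with a linear assignment solve. A noiseless sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def umeyama_match(A, B):
    _, UA = np.linalg.eigh(A)                # eigenvectors, eigenvalues ascending
    _, UB = np.linalg.eigh(B)
    S = np.abs(UA) @ np.abs(UB).T            # cross-graph vertex similarity
    _, cols = linear_sum_assignment(-S)      # maximize total similarity
    return cols                              # cols[i]: vertex of B matched to vertex i of A

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2                            # generic symmetric weight matrix
pi = rng.permutation(8)
B = A[np.ix_(pi, pi)]                        # B is A with vertices relabeled by pi
recovered = umeyama_match(A, B)
print(np.array_equal(recovered[pi], np.arange(8)))  # True: the relabeling is recovered
```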
[ "['Shuyang Gong' 'Zhangsong Li']" ]
null
null
2402.15096
null
null
http://arxiv.org/pdf/2402.15096v1
2024-02-23T05:09:35Z
2024-02-23T05:09:35Z
Multimodal Transformer With a Low-Computational-Cost Guarantee
Transformer-based models have significantly improved performance across a range of multimodal understanding tasks, such as visual question answering and action recognition. However, multimodal Transformers significantly suffer from a quadratic complexity of the multi-head attention with the input sequence length, especially as the number of modalities increases. To address this, we introduce the Low-Cost Multimodal Transformer (LoCoMT), a novel multimodal attention mechanism that aims to reduce computational cost during training and inference with minimal performance loss. Specifically, by assigning different multimodal attention patterns to each attention head, LoCoMT can flexibly control multimodal signals and theoretically ensures a reduced computational cost compared to existing multimodal Transformer variants. Experimental results on two multimodal datasets, namely AudioSet and MedVidCL, demonstrate that LoCoMT not only reduces GFLOPs but also matches or even outperforms established models.
[ "['Sungjin Park' 'Edward Choi']" ]
null
null
2402.15097
null
null
http://arxiv.org/pdf/2402.15097v2
2024-03-17T01:45:49Z
2024-02-23T05:11:34Z
Learning solution operators of PDEs defined on varying domains via MIONet
In this work, we propose a method to learn the solution operators of PDEs defined on varying domains via MIONet, and we theoretically justify this method. We first extend the approximation theory of MIONet to further deal with metric spaces, establishing that MIONet can approximate mappings with multiple inputs in metric spaces. Subsequently, we construct a set consisting of some appropriate regions and provide a metric on this set, thus making it a metric space, which satisfies the approximation condition of MIONet. Building upon this theoretical foundation, we are able to learn the solution mapping of a PDE with all the parameters varying, including the parameters of the differential operator, the right-hand side term, the boundary condition, as well as the domain. Without loss of generality, we perform experiments on 2-d Poisson equations, where the domains and the right-hand side terms are varying. The results provide insights into the performance of this method across convex polygons, polar regions with smooth boundary, and predictions for different levels of discretization on one task. We also show the additional result of the fully-parameterized case in the appendix for interested readers. Reasonably, we point out that this is a meshless method, and hence it can be flexibly used as a general solver for this type of PDE.
[ "['Shanshan Xiao' 'Pengzhan Jin' 'Yifa Tang']" ]
null
null
2402.15100
null
null
http://arxiv.org/pdf/2402.15100v1
2024-02-23T05:17:28Z
2024-02-23T05:17:28Z
Studying LLM Performance on Closed- and Open-source Data
Large Language models (LLMs) are finding wide use in software engineering practice. These models are extremely data-hungry, and are largely trained on open-source (OSS) code distributed with permissive licenses. In terms of actual use, however, a great deal of software development still occurs in the for-profit/proprietary sphere, where the code under development is not, and never has been, in the public domain; thus, many developers do their work, and use LLMs, in settings where the models may not be as familiar with the code under development. In such settings, do LLMs work as well as they do for OSS code? If not, what are the differences? When performance differs, what are the possible causes, and are there work-arounds? In this paper, we examine this issue using proprietary, closed-source software data from Microsoft, where most proprietary code is in C# and C++. We find that performance for C# changes little from OSS to proprietary code, but does significantly reduce for C++; we find that this difference is attributable to differences in identifiers. We also find that some performance degradation, in some cases, can be ameliorated efficiently by in-context learning.
[ "['Toufique Ahmed' 'Christian Bird' 'Premkumar Devanbu'\n 'Saikat Chakraborty']" ]
null
null
2402.15102
null
null
http://arxiv.org/pdf/2402.15102v2
2024-04-08T09:33:10Z
2024-02-23T05:20:23Z
Trajectory-wise Iterative Reinforcement Learning Framework for Auto-bidding
In online advertising, advertisers participate in ad auctions to acquire ad opportunities, often by utilizing auto-bidding tools provided by demand-side platforms (DSPs). The current auto-bidding algorithms typically employ reinforcement learning (RL). However, due to safety concerns, most RL-based auto-bidding policies are trained in simulation, leading to a performance degradation when deployed in online environments. To narrow this gap, we can deploy multiple auto-bidding agents in parallel to collect a large interaction dataset. Offline RL algorithms can then be utilized to train a new policy. The trained policy can subsequently be deployed for further data collection, resulting in an iterative training framework, which we refer to as iterative offline RL. In this work, we identify the performance bottleneck of this iterative offline RL framework, which originates from the ineffective exploration and exploitation caused by the inherent conservatism of offline RL algorithms. To overcome this bottleneck, we propose Trajectory-wise Exploration and Exploitation (TEE), which introduces a novel data collecting and data utilization method for iterative offline RL from a trajectory perspective. Furthermore, to ensure the safety of online exploration while preserving the dataset quality for TEE, we propose Safe Exploration by Adaptive Action Selection (SEAS). Both offline experiments and real-world experiments on Alibaba display advertising platform demonstrate the effectiveness of our proposed method.
[ "['Haoming Li' 'Yusen Huo' 'Shuai Dou' 'Zhenzhe Zheng' 'Zhilin Zhang'\n 'Chuan Yu' 'Jian Xu' 'Fan Wu']" ]
null
null
2402.15106
null
null
http://arxiv.org/pdf/2402.15106v3
2024-05-31T22:39:26Z
2024-02-23T05:33:43Z
Sampling-based Distributed Training with Message Passing Neural Network
In this study, we introduce a domain-decomposition-based distributed training and inference approach for message-passing neural networks (MPNN). Our objective is to address the challenge of scaling edge-based graph neural networks as the number of nodes increases. Through our distributed training approach, coupled with Nyström-approximation sampling techniques, we present a scalable graph neural network, referred to as DS-MPNN (D and S standing for distributed and sampled, respectively), capable of scaling up to $O(10^5)$ nodes. We validate our sampling and distributed training approach on two cases: (a) a Darcy flow dataset and (b) steady RANS simulations of 2-D airfoils, providing comparisons with both single-GPU implementation and node-based graph convolution networks (GCNs). The DS-MPNN model demonstrates comparable accuracy to single-GPU implementation, can accommodate a significantly larger number of nodes compared to the single-GPU variant (S-MPNN), and significantly outperforms the node-based GCN.
[ "['Priyesh Kakka' 'Sheel Nidhan' 'Rishikesh Ranade' 'Jonathan F. MacArt']" ]
null
null
2402.15109
null
null
http://arxiv.org/pdf/2402.15109v1
2024-02-23T05:44:15Z
2024-02-23T05:44:15Z
Machine Unlearning by Suppressing Sample Contribution
Machine Unlearning (MU) is to forget data from a well-trained model, which is practically important due to the "right to be forgotten". In this paper, we start from the fundamental distinction between training data and unseen data on their contribution to the model: the training data contributes to the final model while the unseen data does not. We theoretically discover that the input sensitivity can approximately measure the contribution and practically design an algorithm, called MU-Mis (machine unlearning via minimizing input sensitivity), to suppress the contribution of the forgetting data. Experimental results demonstrate that MU-Mis outperforms state-of-the-art MU methods significantly. Additionally, MU-Mis aligns more closely with the application of MU as it does not require the use of remaining data.
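The quantity at the heart of MU-Mis, as described, is the sensitivity of the model output to an input, i.e., the input-gradient norm; a sketch of computing and suppressing it in PyTorch follows, with a stand-in model in place of the paper's setup.

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))  # stand-in network

def input_sensitivity(x: torch.Tensor) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    out = model(x).sum()                             # scalarize the outputs
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    return grad.norm(dim=-1)                         # per-sample input sensitivity

x_forget = torch.randn(4, 10)
loss = input_sensitivity(x_forget).pow(2).mean()     # suppress the samples' contribution
loss.backward()                                      # gradients flow to the model weights
print(loss.item())
```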
[ "['Xinwen Cheng' 'Zhehao Huang' 'Xiaolin Huang']" ]
null
null
2402.15111
null
null
http://arxiv.org/pdf/2402.15111v2
2024-06-15T13:14:54Z
2024-02-23T05:50:43Z
Chu-ko-nu: A Reliable, Efficient, and Anonymously Authentication-Enabled Realization for Multi-Round Secure Aggregation in Federated Learning
Secure aggregation enables federated learning (FL) to perform collaborative training of clients from local gradient updates without exposing raw data. However, existing secure aggregation schemes inevitably perform an expensive fresh setup per round because each client needs to establish fresh input-independent secrets over different rounds. The latest research, Flamingo (S&P 2023), designed a share-transfer-based reusable secret key to support the server continuously performing multiple rounds of aggregation. Nevertheless, the share transfer mechanism it proposed can only succeed with probability P, which limits its reliability. To tackle the aforementioned problems, we propose a more reliable and anonymously authenticated scheme called Chu-ko-nu for multi-round secure aggregation. Specifically, in terms of share transfer, Chu-ko-nu breaks the probability P barrier by supplementing a redistribution process of secret key components (the sum of all components is the secret key), thus ensuring the reusability of the secret key. Based on this reusable secret key, Chu-ko-nu can efficiently perform consecutive aggregation in the following rounds. Furthermore, considering the client identity authentication and privacy protection issues that most approaches ignore, Chu-ko-nu introduces a zero-knowledge proof-based authentication mechanism. It can support clients anonymously participating in FL training and enables the server to authenticate clients effectively in the presence of various attacks. Rigorous security proofs and extensive experiments demonstrate that Chu-ko-nu can provide reliable and anonymously authenticated aggregation for FL with low aggregation costs, at least a 21.02% reduction compared to the state-of-the-art schemes.
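A toy sketch of the redistribution idea for additive key components: each holder re-splits its component among all holders, and the new components still sum to the key, so the key remains reusable. The field size and party count are illustrative assumptions; a real scheme adds masking, authentication, and dropout handling.

```python
import random

P = 2**61 - 1  # prime modulus for additive sharing (illustrative)

def split(value, n):
    parts = [random.randrange(P) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % P]

key = random.randrange(P)
components = split(key, 5)                 # sum of components == key (mod P)

# redistribution: every holder re-splits its component among all 5 holders;
# holder j's new component is the sum of the pieces it received
received = [split(c, 5) for c in components]
new_components = [sum(r[j] for r in received) % P for j in range(5)]

assert sum(new_components) % P == key
print("key is still reconstructible after redistribution")
```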
[ "['Kaiping Cui' 'Xia Feng' 'Liangmin Wang' 'Haiqin Wu' 'Xiaoyu Zhang'\n 'Boris Düdder']" ]
null
null
2402.15113
null
null
http://arxiv.org/pdf/2402.15113v1
2024-02-23T05:57:22Z
2024-02-23T05:57:22Z
MSPipe: Efficient Temporal GNN Training via Staleness-aware Pipeline
Memory-based Temporal Graph Neural Networks (MTGNNs) are a class of temporal graph neural networks that utilize a node memory module to capture and retain long-term temporal dependencies, leading to superior performance compared to memory-less counterparts. However, the iterative reading and updating process of the memory module in MTGNNs to obtain up-to-date information needs to follow the temporal dependencies. This introduces significant overhead and limits training throughput. Existing optimizations for static GNNs are not directly applicable to MTGNNs due to differences in training paradigm, model architecture, and the absence of a memory module. Moreover, they do not effectively address the challenges posed by temporal dependencies, making them ineffective for MTGNN training. In this paper, we propose MSPipe, a general and efficient framework for MTGNNs that maximizes training throughput while maintaining model accuracy. Our design addresses the unique challenges associated with fetching and updating node memory states in MTGNNs by integrating staleness into the memory module. However, simply introducing a predefined staleness bound in the memory module to break temporal dependencies may lead to suboptimal performance and lack of generalizability across different models and datasets. To solve this, we introduce an online pipeline scheduling algorithm in MSPipe that strategically breaks temporal dependencies with minimal staleness and delays memory fetching to obtain fresher memory states. Moreover, we design a staleness mitigation mechanism to enhance training convergence and model accuracy. We provide convergence analysis and prove that MSPipe maintains the same convergence rate as vanilla sample-based GNN training. Experimental results show that MSPipe achieves up to 2.45x speed-up without sacrificing accuracy, making it a promising solution for efficient MTGNN training.
[ "['Guangming Sheng' 'Junwei Su' 'Chao Huang' 'Chuan Wu']" ]
null
null
2402.15115
null
null
http://arxiv.org/pdf/2402.15115v2
2024-05-11T08:49:42Z
2024-02-23T06:04:15Z
Physics-constrained polynomial chaos expansion for scientific machine learning and uncertainty quantification
We present a novel physics-constrained polynomial chaos expansion as a surrogate modeling method capable of performing both scientific machine learning (SciML) and uncertainty quantification (UQ) tasks. The proposed method possesses a unique capability: it seamlessly integrates SciML into UQ and vice versa, which allows it to quantify the uncertainties in SciML tasks effectively and leverage SciML for improved uncertainty assessment during UQ-related tasks. The proposed surrogate model can effectively incorporate a variety of physical constraints, such as governing partial differential equations (PDEs) with associated initial and boundary conditions constraints, inequality-type constraints (e.g., monotonicity, convexity, non-negativity, among others), and additional a priori information in the training process to supplement limited data. This ensures physically realistic predictions and significantly reduces the need for expensive computational model evaluations to train the surrogate model. Furthermore, the proposed method has a built-in uncertainty quantification (UQ) feature to efficiently estimate output uncertainties. To demonstrate the effectiveness of the proposed method, we apply it to a diverse set of problems, including linear/non-linear PDEs with deterministic and stochastic parameters, data-driven surrogate modeling of a complex physical system, and UQ of a stochastic system with parameters modeled as random fields.
[ "['Himanshu Sharma' 'Lukáš Novák' 'Michael D. Shields']" ]
null
null
2402.15120
null
null
http://arxiv.org/pdf/2402.15120v1
2024-02-23T06:11:50Z
2024-02-23T06:11:50Z
Fine-tuning CLIP Text Encoders with Two-step Paraphrasing
Contrastive language-image pre-training (CLIP) models have demonstrated considerable success across various vision-language tasks, such as text-to-image retrieval, where the model is required to effectively process natural language input to produce an accurate visual output. However, current models still face limitations in dealing with linguistic variations in input queries, such as paraphrases, making it challenging to handle a broad range of user queries in real-world applications. In this study, we introduce a straightforward fine-tuning approach to enhance the representations of CLIP models for paraphrases. Our approach involves a two-step paraphrase generation process, where we automatically create two categories of paraphrases from web-scale image captions by leveraging large language models. Subsequently, we fine-tune the CLIP text encoder using these generated paraphrases while freezing the image encoder. Our resulting model, which we call ParaCLIP, exhibits significant improvements over baseline CLIP models across various tasks, including paraphrased retrieval (with rank similarity scores improved by up to 2.0% and 5.6%), Visual Genome Relation and Attribution, as well as seven semantic textual similarity tasks.
[ "['Hyunjae Kim' 'Seunghyun Yoon' 'Trung Bui' 'Handong Zhao' 'Quan Tran'\n 'Franck Dernoncourt' 'Jaewoo Kang']" ]
null
null
2402.15125
null
null
http://arxiv.org/pdf/2402.15125v1
2024-02-23T06:24:57Z
2024-02-23T06:24:57Z
Accelerating Convergence of Stein Variational Gradient Descent via Deep Unfolding
Stein variational gradient descent (SVGD) is a prominent particle-based variational inference method used for sampling a target distribution. SVGD has attracted interest for application in machine-learning techniques such as Bayesian inference. In this paper, we propose novel trainable algorithms that incorporate a deep-learning technique called deep unfolding into SVGD. This approach facilitates the learning of the internal parameters of SVGD, thereby accelerating its convergence speed. To evaluate the proposed trainable SVGD algorithms, we conducted numerical simulations of three tasks: sampling a one-dimensional Gaussian mixture, performing Bayesian logistic regression, and learning Bayesian neural networks. The results show that our proposed algorithms exhibit faster convergence than the conventional variants of SVGD.
[ "['Yuya Kawamura' 'Satoshi Takabe']" ]
null
null
2402.15127
null
null
http://arxiv.org/pdf/2402.15127v1
2024-02-23T06:27:12Z
2024-02-23T06:27:12Z
Multi-Armed Bandits with Abstention
We introduce a novel extension of the canonical multi-armed bandit problem that incorporates an additional strategic element: abstention. In this enhanced framework, the agent is not only tasked with selecting an arm at each time step, but also has the option to abstain from accepting the stochastic instantaneous reward before observing it. When opting for abstention, the agent either suffers a fixed regret or gains a guaranteed reward. Given this added layer of complexity, we ask whether we can develop efficient algorithms that are both asymptotically and minimax optimal. We answer this question affirmatively by designing and analyzing algorithms whose regrets meet their corresponding information-theoretic lower bounds. Our results offer valuable quantitative insights into the benefits of the abstention option, laying the groundwork for further exploration in other online decision-making problems with such an option. Numerical results further corroborate our theoretical findings.
[ "['Junwen Yang' 'Tianyuan Jin' 'Vincent Y. F. Tan']" ]
null
null
2402.15132
null
null
http://arxiv.org/pdf/2402.15132v1
2024-02-23T06:33:51Z
2024-02-23T06:33:51Z
Improving Sentence Embeddings with an Automatically Generated NLI Dataset
Decoder-based large language models (LLMs) have shown high performance on many tasks in natural language processing. This is also true for sentence embedding learning, where a decoder-based model, PromptEOL, has achieved the best performance on semantic textual similarity (STS) tasks. However, PromptEOL relies heavily on fine-tuning with a manually annotated natural language inference (NLI) dataset. We aim to improve sentence embeddings learned in an unsupervised setting by automatically generating an NLI dataset with an LLM and using it to fine-tune PromptEOL. In experiments on STS tasks, the proposed method achieved an average Spearman's rank correlation coefficient of 82.21 with respect to human evaluation, thus outperforming existing methods without using large, manually annotated datasets.
[ "['Soma Sato' 'Hayato Tsukagoshi' 'Ryohei Sasano' 'Koichi Takeda']" ]
null
null
2402.15134
null
null
http://arxiv.org/pdf/2402.15134v1
2024-02-23T06:38:08Z
2024-02-23T06:38:08Z
Deep Coupling Network For Multivariate Time Series Forecasting
Multivariate time series (MTS) forecasting is crucial in many real-world applications. To achieve accurate MTS forecasting, it is essential to simultaneously consider both intra- and inter-series relationships among time series data. However, previous work has typically modeled intra- and inter-series relationships separately and has disregarded multi-order interactions present within and between time series data, which can seriously degrade forecasting accuracy. In this paper, we reexamine intra- and inter-series relationships from the perspective of mutual information and accordingly construct a comprehensive relationship learning mechanism tailored to simultaneously capture the intricate multi-order intra- and inter-series couplings. Based on the mechanism, we propose a novel deep coupling network for MTS forecasting, named DeepCN, which consists of a coupling mechanism dedicated to explicitly exploring the multi-order intra- and inter-series relationships among time series data concurrently, a coupled variable representation module aimed at encoding diverse variable patterns, and an inference module facilitating predictions through one forward step. Extensive experiments conducted on seven real-world datasets demonstrate that our proposed DeepCN achieves superior performance compared with the state-of-the-art baselines.
[ "['Kun Yi' 'Qi Zhang' 'Hui He' 'Kaize Shi' 'Liang Hu' 'Ning An'\n 'Zhendong Niu']" ]
null
null
2402.15141
null
null
http://arxiv.org/pdf/2402.15141v1
2024-02-23T06:55:34Z
2024-02-23T06:55:34Z
A note on the adjoint method for neural ordinary differential equation network
Perturbation analysis and the operator adjoint method are used to derive the correct adjoint form rigorously. From the derivation, we obtain the following results: 1) the loss gradient is not an ODE but an integral, and we show the reason; 2) the traditional adjoint form is not equivalent to the backpropagation result; 3) the adjoint operator analysis shows that the adjoint form gives the same result as backpropagation does if and only if the discrete adjoint uses the same scheme as the discrete neural ODE.
[ "['Pipi Hu']" ]
null
null
2402.15143
null
null
http://arxiv.org/pdf/2402.15143v1
2024-02-23T06:57:31Z
2024-02-23T06:57:31Z
PUAD: Frustratingly Simple Method for Robust Anomaly Detection
Developing an accurate and fast anomaly detection model is an important task in real-time computer vision applications. There has been much research to develop a single model that detects either structural or logical anomalies, which are inherently distinct. The majority of the existing approaches implicitly assume that the anomaly can be represented by identifying the anomalous location. However, we argue that logical anomalies, such as the wrong number of objects, cannot be well represented by spatial feature maps and require an alternative approach. In addition, we focus on the possibility of detecting logical anomalies by using an out-of-distribution detection approach on the feature space, which aggregates the spatial information of the feature map. As a demonstration, we propose a method that incorporates a simple out-of-distribution detection method on the feature space against state-of-the-art reconstruction-based approaches. Despite the simplicity of our proposal, our method PUAD (Picturable and Unpicturable Anomaly Detection) achieves state-of-the-art performance on the MVTec LOCO AD dataset.
[ "['Shota Sugawara' 'Ryuji Imamura']" ]
null
null
2402.15145
null
null
http://arxiv.org/pdf/2402.15145v1
2024-02-23T07:03:52Z
2024-02-23T07:03:52Z
The Cost of Parallelizing Boosting
We study the cost of parallelizing weak-to-strong boosting algorithms for learning, following the recent work of Karbasi and Larsen. Our main results are two-fold: - First, we prove a tight lower bound, showing that even "slight" parallelization of boosting requires an exponential blow-up in the complexity of training. Specifically, let $\gamma$ be the weak learner's advantage over random guessing. The famous \textsc{AdaBoost} algorithm produces an accurate hypothesis by interacting with the weak learner for $\tilde{O}(1 / \gamma^2)$ rounds where each round runs in polynomial time. Karbasi and Larsen showed that "significant" parallelization must incur exponential blow-up: Any boosting algorithm either interacts with the weak learner for $\Omega(1 / \gamma)$ rounds or incurs an $\exp(d / \gamma)$ blow-up in the complexity of training, where $d$ is the VC dimension of the hypothesis class. We close the gap by showing that any boosting algorithm either has $\Omega(1 / \gamma^2)$ rounds of interaction or incurs a smaller exponential blow-up of $\exp(d)$. - Complementing our lower bound, we show that there exists a boosting algorithm using $\tilde{O}(1/(t \gamma^2))$ rounds, suffering only a blow-up of $\exp(d \cdot t^2)$. Plugging in $t = \omega(1)$, this shows that the smaller blow-up in our lower bound is tight. More interestingly, this provides the first trade-off between the parallelism and the total work required for boosting.
[ "['Xin Lyu' 'Hongxun Wu' 'Junzhao Yang']" ]
null
null
2402.15146
null
null
http://arxiv.org/pdf/2402.15146v1
2024-02-23T07:05:09Z
2024-02-23T07:05:09Z
Convergence Analysis of Blurring Mean Shift
The blurring mean shift (BMS) algorithm, a variant of the mean shift algorithm, is a kernel-based iterative method for data clustering, where data points are clustered according to their convergent points via iterative blurring. In this paper, we analyze convergence properties of the BMS algorithm by leveraging its interpretation as an optimization procedure, which is known but has been underutilized in existing convergence studies. Whereas existing results on convergence properties applicable to multi-dimensional data only cover the case where all the blurred data point sequences converge to a single point, this study provides a convergence guarantee even when those sequences can converge to multiple points, yielding multiple clusters. This study also shows that the convergence of the BMS algorithm is fast by further leveraging a geometrical characterization of the convergent points.
[ "['Ryoya Yamasaki' 'Toshiyuki Tanaka']" ]
null
null
2402.15147
null
null
http://arxiv.org/pdf/2402.15147v1
2024-02-23T07:05:32Z
2024-02-23T07:05:32Z
TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning
APT (Advanced Persistent Threat) with the characteristics of persistence, stealth, and diversity is one of the greatest threats against cyber-infrastructure. As a countermeasure, existing studies leverage provenance graphs to capture the complex relations between system entities in a host for effective APT detection. In addition to detecting single attack events as most existing work does, understanding the tactics / techniques (e.g., Kill-Chain, ATT&CK) applied to organize and accomplish the APT attack campaign is more important for security operations. Existing studies try to manually design a set of rules to map low-level system events to high-level APT tactics / techniques. However, the rule based methods are coarse-grained and lack generalization ability, thus they can only recognize APT tactics and cannot identify fine-grained APT techniques and mutant APT attacks. In this paper, we propose TREC, the first attempt to recognize APT tactics / techniques from provenance graphs by exploiting deep learning techniques. To address the "needle in a haystack" problem, TREC segments small and compact subgraphs covering individual APT technique instances from a large provenance graph based on a malicious node detection model and a subgraph sampling algorithm. To address the "training sample scarcity" problem, TREC trains the APT tactic / technique recognition model in a few-shot learning manner by adopting a Siamese neural network. We evaluate TREC based on a customized dataset collected and made public by our team. The experiment results show that TREC significantly outperforms state-of-the-art systems in APT tactic recognition and TREC can also effectively identify APT techniques.
[ "['Mingqi Lv' 'HongZhe Gao' 'Xuebo Qiu' 'Tieming Chen' 'Tiantian Zhu']" ]
null
null
2402.15152
null
null
http://arxiv.org/pdf/2402.15152v2
2024-06-05T08:39:57Z
2024-02-23T07:22:55Z
On the Duality Between Sharpness-Aware Minimization and Adversarial Training
Adversarial Training (AT), which adversarially perturbs the input samples during training, has been acknowledged as one of the most effective defenses against adversarial attacks, yet it suffers from inevitably decreased clean accuracy. Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape and improve generalization. However, as SAM is designed for better clean accuracy, its effectiveness in enhancing adversarial robustness remains unexplored. In this work, considering the duality between SAM and AT, we investigate the adversarial robustness derived from SAM. Intriguingly, we find that using SAM alone can improve adversarial robustness. To understand this unexpected property of SAM, we first provide empirical and theoretical insights into how SAM can implicitly learn more robust features, and conduct comprehensive experiments to show that SAM can improve adversarial robustness notably without sacrificing any clean accuracy, shedding light on the potential of SAM to be a substitute for AT when clean accuracy is the higher priority. Code is available at https://github.com/weizeming/SAM_AT.
[ "['Yihao Zhang' 'Hangzhou He' 'Jingyu Zhu' 'Huanran Chen' 'Yifei Wang'\n 'Zeming Wei']" ]
null
null
2402.15153
null
null
http://arxiv.org/pdf/2402.15153v1
2024-02-23T07:28:31Z
2024-02-23T07:28:31Z
Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised Sentence Embeddings
The unsupervised sentence embedding task aims to convert sentences into semantic vector representations. Most previous works directly use the sentence representations derived from pretrained language models. However, due to the token bias in pretrained language models, the models cannot capture the fine-grained semantics in sentences, which leads to poor predictions. To address this issue, we propose a novel Self-Adaptive Reconstruction Contrastive Sentence Embeddings (SARCSE) framework, which reconstructs all tokens in sentences with an AutoEncoder to help the model preserve more fine-grained semantics during token aggregation. In addition, we propose a self-adaptive reconstruction loss to alleviate the frequency-related token bias. Experimental results show that SARCSE gains significant improvements compared with the strong baseline SimCSE on the 7 STS tasks.
[ "['Junlong Liu' 'Xichen Shang' 'Huawen Feng' 'Junhao Zheng' 'Qianli Ma']" ]
null
null
2402.15159
null
null
http://arxiv.org/pdf/2402.15159v3
2024-05-30T15:44:51Z
2024-02-23T07:43:26Z
Machine Unlearning of Pre-trained Large Language Models
This study investigates the concept of the `right to be forgotten' within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models--a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over $10^5$ times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development.
[ "['Jin Yao' 'Eli Chien' 'Minxin Du' 'Xinyao Niu' 'Tianhao Wang'\n 'Zezhou Cheng' 'Xiang Yue']" ]
null
null
2402.15160
null
null
http://arxiv.org/pdf/2402.15160v3
2024-03-01T00:58:50Z
2024-02-23T07:46:30Z
Spatially-Aware Transformer for Embodied Agents
Episodic memory plays a crucial role in various cognitive processes, such as the ability to mentally recall past events. While cognitive science emphasizes the significance of spatial context in the formation and retrieval of episodic memory, the current primary approach to implementing episodic memory in AI systems is through transformers that store temporally ordered experiences, which overlooks the spatial dimension. As a result, it is unclear how the underlying structure could be extended to incorporate the spatial axis beyond temporal order alone and thereby what benefits can be obtained. To address this, this paper explores the use of Spatially-Aware Transformer models that incorporate spatial information. These models enable the creation of place-centric episodic memory that considers both temporal and spatial dimensions. Adopting this approach, we demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks. Additionally, we propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning that aims to optimize efficiency of memory utilization. Our experiments demonstrate the advantages of our proposed model in various environments and across multiple downstream tasks, including prediction, generation, reasoning, and reinforcement learning. The source code for our models and experiments will be available at https://github.com/junmokane/spatially-aware-transformer.
[ "['Junmo Cho' 'Jaesik Yoon' 'Sungjin Ahn']" ]
null
null
2402.15162
null
null
http://arxiv.org/pdf/2402.15162v1
2024-02-23T07:53:39Z
2024-02-23T07:53:39Z
Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models
Abstractive summarization models often generate factually inconsistent content particularly when the parametric knowledge of the model conflicts with the knowledge in the input document. In this paper, we analyze the robustness of fine-tuning based summarization models to the knowledge conflict, which we call factual adaptiveness. We utilize pre-trained language models to construct evaluation sets and find that factual adaptiveness is not strongly correlated with factual consistency on original datasets. Furthermore, we introduce a controllable counterfactual data augmentation method where the degree of knowledge conflict within the augmented data can be adjustable. Our experimental results on two pre-trained language models (PEGASUS and BART) and two fine-tuning datasets (XSum and CNN/DailyMail) demonstrate that our method enhances factual adaptiveness while achieving factual consistency on original datasets on par with the contrastive learning baseline.
[ "['Jongyoon Song' 'Nohil Park' 'Bongkyu Hwang' 'Jaewoong Yun' 'Seongho Joe'\n 'Youngjune L. Gwon' 'Sungroh Yoon']" ]
null
null
2402.15163
null
null
http://arxiv.org/pdf/2402.15163v3
2024-05-22T18:50:10Z
2024-02-23T07:54:20Z
Has the Deep Neural Network learned the Stochastic Process? A Wildfire Perspective
This paper presents the first systematic study of the evaluation of a Deep Neural Network (DNN) designed and trained to predict the evolution of a stochastic dynamical system, using wildfire prediction as a case study. We show that traditional evaluation methods based on threshold based classification metrics and error-based scoring rules assess a DNN's ability to replicate the observed ground truth (GT), but do not measure the fidelity of the DNN's learning of the underlying stochastic process. To address this gap, we propose a new system property: Statistic-GT, representing the GT of the stochastic process, and an evaluation metric that exclusively assesses fidelity to Statistic-GT. Utilizing a synthetic dataset, we introduce a stochastic framework to characterize this property and establish criteria for a metric to be a valid measure of the proposed property. We formally show that Expected Calibration Error (ECE) tests the necessary condition for fidelity to Statistic-GT. We perform empirical experiments, differentiating ECE's behavior from conventional metrics and demonstrate that ECE exclusively measures fidelity to the stochastic process. Extending our analysis to real-world wildfire data, we highlight the limitations of traditional evaluation methods and discuss the utility of evaluating fidelity to the stochastic process alongside existing metrics.
[ "['Harshit Kumar' 'Beomseok Kang' 'Biswadeep Chakraborty'\n 'Saibal Mukhopadhyay']" ]
null
null
2402.15164
null
null
http://arxiv.org/pdf/2402.15164v3
2024-05-24T03:45:21Z
2024-02-23T07:54:26Z
EasyRL4Rec: An Easy-to-use Library for Reinforcement Learning Based Recommender Systems
Reinforcement Learning (RL)-Based Recommender Systems (RSs) have gained growing attention for their potential to enhance long-term user engagement. However, research in this field faces challenges, including the lack of user-friendly frameworks, inconsistent evaluation metrics, and difficulties in reproducing existing studies. To tackle these issues, we introduce EasyRL4Rec, an easy-to-use code library designed specifically for RL-based RSs. This library provides lightweight and diverse RL environments based on five public datasets and includes core modules with rich options, simplifying model development. It provides unified evaluation standards focusing on long-term outcomes and offers tailored designs for state modeling and action representation for recommendation scenarios. Furthermore, we share our findings from insightful experiments with current methods. EasyRL4Rec seeks to facilitate the model development and experimental process in the domain of RL-based RSs. The library is available for public use.
[ "['Yuanqing Yu' 'Chongming Gao' 'Jiawei Chen' 'Heng Tang' 'Yuefeng Sun'\n 'Qian Chen' 'Weizhi Ma' 'Min Zhang']" ]
null
null
2402.15166
null
null
http://arxiv.org/pdf/2402.15166v1
2024-02-23T07:59:23Z
2024-02-23T07:59:23Z
Convergence Analysis of Split Federated Learning on Heterogeneous Data
Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner, and a main server trains the other. Despite the recent research on SFL algorithm development, the convergence analysis of SFL is missing in the literature, and this paper aims to fill this gap. The analysis of SFL can be more challenging than that of federated learning (FL), due to the potential dual-paced updates at the clients and the main server. We provide convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data. The convergence rates are $O(1/T)$ and $O(1/\sqrt[3]{T})$, respectively, where $T$ denotes the total number of rounds for SFL training. We further extend the analysis to non-convex objectives and to settings where some clients may be unavailable during training. Numerical experiments validate our theoretical results and show that SFL outperforms FL and split learning (SL) when data is highly heterogeneous across a large number of clients.
[ "['Pengchao Han' 'Chao Huang' 'Geng Tian' 'Ming Tang' 'Xin Liu']" ]
null
null
2402.15170
null
null
http://arxiv.org/pdf/2402.15170v1
2024-02-23T08:05:23Z
2024-02-23T08:05:23Z
The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling
With the incorporation of the UNet architecture, diffusion probabilistic models have become a dominant force in image generation tasks. One key design in UNet is the skip connections between the encoder and decoder blocks. Although skip connections have been shown to improve training stability and model performance, we reveal that such shortcuts can be a limiting factor for the complexity of the transformation. As the sampling steps decrease, the generation process and the role of the UNet get closer to the push-forward transformations from Gaussian distribution to the target, posing a challenge for the network's complexity. To address this challenge, we propose Skip-Tuning, a simple yet surprisingly effective training-free tuning method on the skip connections. Our method can achieve 100% FID improvement for pretrained EDM on ImageNet 64 with only 19 NFEs (1.75), breaking the limit of ODE samplers regardless of sampling steps. Surprisingly, the improvement persists when we increase the number of sampling steps and can even surpass the best result from EDM-2 (1.58) with only 39 NFEs (1.57). Comprehensive exploratory experiments are conducted to shed light on the surprising effectiveness. We observe that while Skip-Tuning increases the score-matching losses in the pixel space, the losses in the feature space are reduced, particularly at intermediate noise levels, which coincide with the most effective range accounting for image quality improvement.
[ "['Jiajun Ma' 'Shuchen Xue' 'Tianyang Hu' 'Wenjia Wang' 'Zhaoqiang Liu'\n 'Zhenguo Li' 'Zhi-Ming Ma' 'Kenji Kawaguchi']" ]
null
null
2402.15171
null
null
http://arxiv.org/pdf/2402.15171v2
2024-07-03T14:29:43Z
2024-02-23T08:07:54Z
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits
We address the problem of stochastic combinatorial semi-bandits, where a player selects among $P$ actions from the power set of a set containing $d$ base items. Adaptivity to the problem's structure is essential in order to obtain optimal regret upper bounds. As estimating the coefficients of a covariance matrix can be manageable in practice, leveraging them should improve the regret. We design ``optimistic'' covariance-adaptive algorithms relying on online estimations of the covariance structure, called OLSUCBC and COSV (only the variances for the latter). Both yield improved gap-free regret. Although COSV can be slightly suboptimal, it improves on computational complexity by taking inspiration from Thompson Sampling approaches. It is the first sampling-based algorithm satisfying a $\sqrt{T}$ gap-free regret (up to poly-logs). We also show that in some cases, our approach efficiently leverages the semi-bandit feedback and outperforms bandit feedback approaches, not only in exponential regimes where $P \gg d$ but also when $P \leq d$, which is not covered by existing analyses.
[ "['Julien Zhou' 'Pierre Gaillard' 'Thibaud Rahier' 'Houssam Zenati'\n 'Julyan Arbel']" ]
null
null
2402.15172
null
null
http://arxiv.org/pdf/2402.15172v1
2024-02-23T08:11:25Z
2024-02-23T08:11:25Z
Attention-Guided Masked Autoencoders For Learning Image Representations
Masked autoencoders (MAEs) have established themselves as a powerful method for unsupervised pre-training for computer vision tasks. While vanilla MAEs put equal emphasis on reconstructing the individual parts of the image, we propose to inform the reconstruction process through an attention-guided loss function. By leveraging advances in unsupervised object discovery, we obtain an attention map of the scene which we employ in the loss function to put increased emphasis on reconstructing relevant objects, thus effectively incentivizing the model to learn more object-focused representations without compromising the established masking strategy. Our evaluations show that our pre-trained models learn better latent representations than the vanilla MAE, demonstrated by improved linear probing and k-NN classification results on several benchmarks while at the same time making ViTs more robust against varying backgrounds.
[ "['Leon Sick' 'Dominik Engel' 'Pedro Hermosilla' 'Timo Ropinski']" ]
null
null
2402.15173
null
null
http://arxiv.org/pdf/2402.15173v1
2024-02-23T08:11:55Z
2024-02-23T08:11:55Z
Second-Order Fine-Tuning without Pain for LLMs: A Hessian Informed Zeroth-Order Optimizer
Fine-tuning large language models (LLMs) with classic first-order optimizers entails prohibitive GPU memory due to the backpropagation process. Recent works have turned to zeroth-order optimizers for fine-tuning, which save substantial memory by using two forward passes. However, these optimizers are plagued by the heterogeneity of parameter curvatures across different dimensions. In this work, we propose HiZOO, a diagonal-Hessian-informed zeroth-order optimizer, which is the first work to leverage the diagonal Hessian to enhance a zeroth-order optimizer for fine-tuning LLMs. Furthermore, HiZOO avoids the expensive memory cost and adds only one extra forward pass per step. Extensive experiments on various models (350M~66B parameters) indicate that HiZOO improves model convergence, significantly reducing training steps and effectively enhancing model accuracy. Moreover, we visualize the optimization trajectories of HiZOO on test functions, illustrating its effectiveness in handling heterogeneous curvatures. Lastly, we provide theoretical proofs of convergence for HiZOO. Code is publicly available at https://anonymous.4open.science/r/HiZOO27F8.
[ "['Yanjun Zhao' 'Sizhe Dang' 'Haishan Ye' 'Guang Dai' 'Yi Qian'\n 'Ivor W. Tsang']" ]
null
null
2402.15175
null
null
http://arxiv.org/pdf/2402.15175v2
2024-02-26T02:49:16Z
2024-02-23T08:14:36Z
Unified View of Grokking, Double Descent and Emergent Abilities: A Perspective from Circuits Competition
Recent studies have uncovered intriguing phenomena in deep learning, such as grokking, double descent, and emergent abilities in large language models, which challenge human intuition and are crucial for a deeper understanding of neural models. In this paper, we present a comprehensive framework that provides a unified view of these three phenomena, focusing on the competition between memorization and generalization circuits. This approach, initially employed to explain grokking, is extended in our work to encompass a wider range of model sizes and training data volumes. Our framework delineates four distinct training dynamics, each depending on varying combinations of model size and training data quantity. Utilizing this framework, we provide a detailed analysis of the double descent phenomenon and propose two verifiable predictions regarding its occurrence, both substantiated by our experimental results. Moreover, we expand our framework to the multi-task learning paradigm, demonstrating how algorithm tasks can be turned into emergent abilities. This offers a novel perspective to understand emergent abilities in Large Language Models.
[ "['Yufei Huang' 'Shengding Hu' 'Xu Han' 'Zhiyuan Liu' 'Maosong Sun']" ]
null
null
2402.15179
null
null
http://arxiv.org/pdf/2402.15179v3
2024-06-02T09:05:31Z
2024-02-23T08:21:02Z
Advancing Parameter Efficiency in Fine-tuning via Representation Editing
Parameter Efficient Fine-Tuning (PEFT) techniques have drawn significant attention due to their ability to yield competitive results while updating only a small portion of the adjustable parameters. However, existing PEFT methods pose challenges in hyperparameter selection, such as choosing the rank for LoRA or Adapter, or specifying the length of soft prompts. To address these challenges, we propose a novel fine-tuning approach for neural models, named Representation EDiting (RED), which modifies the representations generated at some layers through the application of scaling and biasing operations. While existing PEFT methods still demonstrate over-parameterization that could potentially undermine the generalization ability acquired from pre-training, RED can substantially reduce the number of trainable parameters by a factor of 25,700 compared to full parameter fine-tuning and by a factor of 32 relative to LoRA. Remarkably, RED achieves results comparable or superior to both full parameter fine-tuning and other PEFT methods. Extensive experiments across various model architectures and scales, including RoBERTa, GPT-2, T5, and LLaMA-2, have demonstrated the effectiveness and efficiency of RED, thereby positioning it as a promising PEFT strategy for large-scale neural models.
[ "['Muling Wu' 'Wenhao Liu' 'Xiaohua Wang' 'Tianlong Li' 'Changze Lv'\n 'Zixuan Ling' 'Jianhao Zhu' 'Cenyuan Zhang' 'Xiaoqing Zheng'\n 'Xuanjing Huang']" ]