categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2402.10392
null
null
http://arxiv.org/pdf/2402.10392v1
2024-02-16T01:25:21Z
2024-02-16T01:25:21Z
Pretext Training Algorithms for Event Sequence Data
Pretext training followed by task-specific fine-tuning has been a successful approach in vision and language domains. This paper proposes a self-supervised pretext training framework tailored to event sequence data. We introduce a novel alignment verification task that is specialized to event sequences, building on good practices in masked reconstruction and contrastive learning. Our pretext tasks unlock foundational representations that are generalizable across different downstream tasks, including next-event prediction for temporal point process models, event sequence classification, and missing event interpolation. Experiments on popular public benchmarks demonstrate the potential of the proposed method across different tasks and data domains.
[ "['Yimu Wang' 'He Zhao' 'Ruizhi Deng' 'Frederick Tung' 'Greg Mori']" ]
null
null
2402.10397
null
null
http://arxiv.org/pdf/2402.10397v1
2024-02-16T01:47:02Z
2024-02-16T01:47:02Z
LogELECTRA: Self-supervised Anomaly Detection for Unstructured Logs
System logs are some of the most important information for the maintenance of software systems, which have become larger and more complex in recent years. The goal of log-based anomaly detection is to automatically detect system anomalies by analyzing the large number of logs generated in a short period of time, which is a critical challenge in the real world. Previous studies have used a log parser to extract templates from unstructured log data and detect anomalies on the basis of patterns of the template occurrences. These methods have limitations for logs with unknown templates. Furthermore, since most log anomalies are known to be point anomalies rather than contextual anomalies, detection methods based on occurrence patterns can cause unnecessary delays in detection. In this paper, we propose LogELECTRA, a new log anomaly detection model that analyzes a single line of log messages more deeply on the basis of self-supervised anomaly detection. LogELECTRA specializes in detecting log anomalies as point anomalies by applying ELECTRA, a natural language processing model, to analyze the semantics of a single line of log messages. LogELECTRA outperformed existing state-of-the-art methods in experiments on the public benchmark log datasets BGL, Spirit, and Thunderbird.
[ "['Yuuki Yamanaka' 'Tomokatsu Takahashi' 'Takuya Minami'\n 'Yoshiaki Nakajima']" ]
null
null
2402.10401
null
null
http://arxiv.org/pdf/2402.10401v2
2024-02-29T08:02:27Z
2024-02-16T01:58:35Z
ManiFPT: Defining and Analyzing Fingerprints of Generative Models
Recent works have shown that generative models leave traces of their underlying generative process on the generated samples, broadly referred to as fingerprints of a generative model, and have studied their utility in detecting synthetic images from real ones. However, the extent to which these fingerprints can distinguish between various types of synthetic images and help identify the underlying generative process remains under-explored. In particular, the very definition of a fingerprint remains unclear, to our knowledge. To that end, in this work, we formalize the definition of artifact and fingerprint in generative models, propose an algorithm for computing them in practice, and finally study their effectiveness in distinguishing a large array of different generative models. We find that using our proposed definition can significantly improve the performance on the task of identifying the underlying generative process from samples (model attribution) compared to existing methods. Additionally, we study the structure of the fingerprints, and observe that it is very predictive of the effect of different design choices on the generative process.
[ "['Hae Jin Song' 'Mahyar Khayatkhoei' 'Wael AbdAlmageed']" ]
null
null
2402.10403
null
null
http://arxiv.org/pdf/2402.10403v2
2024-05-27T09:50:32Z
2024-02-16T02:01:24Z
Polyhedral Complex Derivation from Piecewise Trilinear Networks
Recent advancements in visualizing deep neural networks provide insights into their structures and mesh extraction from Continuous Piecewise Affine (CPWA) functions. Meanwhile, developments in neural surface representation learning incorporate non-linear positional encoding, addressing issues like spectral bias; however, this poses challenges in applying mesh extraction techniques based on CPWA functions. Focusing on trilinear interpolating methods as positional encoding, we present theoretical insights and an analytical mesh extraction, showing the transformation of hypersurfaces to flat planes within the trilinear region under the eikonal constraint. Moreover, we introduce a method for approximating intersecting points among three hypersurfaces contributing to broader applications. We empirically validate correctness and parsimony through chamfer distance and efficiency, and angular distance, while examining the correlation between the eikonal loss and the planarity of the hypersurfaces.
[ "['Jin-Hwa Kim']" ]
null
null
2402.10409
null
null
http://arxiv.org/pdf/2402.10409v1
2024-02-16T02:21:59Z
2024-02-16T02:21:59Z
Understanding Survey Paper Taxonomy about Large Language Models via Graph Representation Learning
As new research on Large Language Models (LLMs) continues to appear, it is difficult to keep up with the latest papers and models. To help researchers synthesize this work, many have written survey papers, but even those have become numerous. In this paper, we develop a method to automatically assign survey papers to a taxonomy. We collect the metadata of 144 LLM survey papers and explore three paradigms to classify papers within the taxonomy. Our work indicates that leveraging graph structure information on co-category graphs can significantly outperform the language models in the other two paradigms: fine-tuning pre-trained language models, and zero-shot/few-shot classification using LLMs. We find that our model surpasses an average human recognition level and that fine-tuning LLMs using weak labels generated by a smaller model, such as the GCN in this study, can be more effective than using ground-truth labels, revealing the potential of weak-to-strong generalization in the taxonomy classification task.
[ "['Jun Zhuang' 'Casey Kennington']" ]
null
null
2402.10412
null
null
http://arxiv.org/pdf/2402.10412v2
2024-06-06T18:33:02Z
2024-02-16T02:32:06Z
Measuring and Reducing LLM Hallucination without Gold-Standard Answers
LLM hallucination, i.e. generating factually incorrect yet seemingly convincing answers, is currently a major threat to the trustworthiness and reliability of LLMs. The first step towards solving this complicated problem is to measure it. However, existing hallucination metrics require having a benchmark dataset with gold-standard answers, i.e. "best" or "correct" answers written by humans. Such requirements make hallucination measurement costly and prone to human errors. In this work, we propose Factualness Evaluations via Weighting LLMs (FEWL), an innovative hallucination metric that is specifically designed for the scenario when gold-standard answers are absent. FEWL leverages the answers from off-the-shelf LLMs that serve as a proxy of gold-standard answers. The key challenge is how to quantify the expertise of reference LLMs resourcefully. We show FEWL has certain theoretical guarantees and demonstrate empirically it gives more accurate hallucination measures than naively using reference LLMs. We also show how to leverage FEWL to reduce hallucination through both in-context learning and supervised fine-tuning. Extensive experiment results on Truthful-QA, CHALE, and HaluEval datasets demonstrate the effectiveness of FEWL.
[ "['Jiaheng Wei' 'Yuanshun Yao' 'Jean-Francois Ton' 'Hongyi Guo'\n 'Andrew Estornell' 'Yang Liu']" ]
null
null
2402.10425
null
null
http://arxiv.org/pdf/2402.10425v1
2024-02-16T03:22:58Z
2024-02-16T03:22:58Z
DABS-LS: Deep Atlas-Based Segmentation Using Regional Level Set Self-Supervision
Cochlear implants (CIs) are neural prosthetics used to treat patients with severe-to-profound hearing loss. Patient-specific modeling of CI stimulation of the auditory nerve fiber (ANFs) can help audiologists improve the CI programming. These models require localization of the ANFs relative to surrounding anatomy and the CI. Localization is challenging because the ANFs are so small they are not directly visible in clinical imaging. In this work, we hypothesize the position of the ANFs can be accurately inferred from the location of the internal auditory canal (IAC), which has high contrast in CT, since the ANFs pass through this canal between the cochlea and the brain. Inspired by VoxelMorph, in this paper we propose a deep atlas-based IAC segmentation network. We create a single atlas in which the IAC and ANFs are pre-localized. Our network is trained to produce deformation fields (DFs) mapping coordinates from the atlas to new target volumes and that accurately segment the IAC. We hypothesize that DFs that accurately segment the IAC in target images will also facilitate accurate atlas-based localization of the ANFs. As opposed to VoxelMorph, which aims to produce DFs that accurately register the entire volume, our novel contribution is an entirely self-supervised training scheme that aims to produce DFs that accurately segment the target structure. This self-supervision is facilitated using a regional level set (LS) inspired loss function. We call our method Deep Atlas Based Segmentation using Level Sets (DABS-LS). Results show that DABS-LS outperforms VoxelMorph for IAC segmentation. Tests with publicly available datasets for trachea and kidney segmentation also show significant improvement in segmentation accuracy, demonstrating the generalizability of the method.
[ "['Hannah G. Mason' 'Jack H. Noble']" ]
null
null
2402.10429
null
null
http://arxiv.org/pdf/2402.10429v2
2024-06-23T03:50:12Z
2024-02-16T03:36:03Z
Fixed Confidence Best Arm Identification in the Bayesian Setting
We consider the fixed-confidence best arm identification (FC-BAI) problem in the Bayesian setting. This problem aims to find the arm of the largest mean with a fixed confidence level when the bandit model has been sampled from the known prior. Most studies on the FC-BAI problem have been conducted in the frequentist setting, where the bandit model is predetermined before the game starts. We show that the traditional FC-BAI algorithms studied in the frequentist setting, such as track-and-stop and top-two algorithms, result in arbitrarily suboptimal performances in the Bayesian setting. We also obtain a lower bound of the expected number of samples in the Bayesian setting and introduce a variant of successive elimination that has a matching performance with the lower bound up to a logarithmic factor. Simulations verify the theoretical results.
[ "['Kyoungseok Jang' 'Junpei Komiyama' 'Kazutoshi Yamazaki']" ]
null
null
2402.10433
null
null
http://arxiv.org/pdf/2402.10433v2
2024-03-11T20:20:16Z
2024-02-16T03:48:55Z
Fusing Neural and Physical: Augment Protein Conformation Sampling with Tractable Simulations
Protein dynamics are common and important for proteins' biological functions and properties, and their study usually involves time-consuming molecular dynamics (MD) simulations in silico. Recently, generative models have been leveraged as surrogate samplers to obtain conformation ensembles orders of magnitude faster, without requiring any simulation data (a "zero-shot" inference). However, being agnostic of the underlying energy landscape, the accuracy of such generative models may still be limited. In this work, we explore the few-shot setting of such a pre-trained generative sampler, which incorporates MD simulations in a tractable manner. Specifically, given a target protein of interest, we first acquire seeding conformations from the pre-trained sampler, followed by a number of physical simulations run in parallel starting from these seeding samples. We then fine-tune the generative model on the resulting simulation trajectories to obtain a target-specific sampler. Experimental results demonstrate the superior performance of this few-shot conformation sampler at a tractable computational cost.
[ "['Jiarui Lu' 'Zuobai Zhang' 'Bozitao Zhong' 'Chence Shi' 'Jian Tang']" ]
null
null
2402.10434
null
null
http://arxiv.org/pdf/2402.10434v1
2024-02-16T03:51:14Z
2024-02-16T03:51:14Z
Parametric Augmentation for Time Series Contrastive Learning
Modern techniques like contrastive learning have been effectively used in many areas, including computer vision, natural language processing, and graph-structured data. Creating positive examples that assist the model in learning robust and discriminative representations is a crucial stage in contrastive learning approaches. Usually, preset human intuition directs the selection of relevant data augmentations. Due to patterns that are easily recognized by humans, this rule of thumb works well in the vision and language domains. However, it is impractical to visually inspect the temporal structures in time series. The diversity of time series augmentations at both the dataset and instance levels makes it difficult to choose meaningful augmentations on the fly. In this study, we address this gap by analyzing time series data augmentation using information theory and summarizing the most commonly adopted augmentations in a unified format. We then propose a contrastive learning framework with parametric augmentation, AutoTCL, which can be adaptively employed to support time series representation learning. The proposed approach is encoder-agnostic, allowing it to be seamlessly integrated with different backbone encoders. Experiments on univariate forecasting tasks demonstrate the highly competitive results of our method, with an average 6.5% reduction in MSE and 4.7% in MAE over the leading baselines. In classification tasks, AutoTCL achieves a 1.2% increase in average accuracy.
[ "['Xu Zheng' 'Tianchun Wang' 'Wei Cheng' 'Aitian Ma' 'Haifeng Chen'\n 'Mo Sha' 'Dongsheng Luo']" ]
null
null
2402.10445
null
null
http://arxiv.org/pdf/2402.10445v3
2024-05-23T01:09:25Z
2024-02-16T04:32:22Z
Collaborative Learning with Different Labeling Functions
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions, while minimizing the number of samples drawn from them in total. Unlike in the usual collaborative learning setup, it is not assumed that there exists a single classifier that is simultaneously accurate for all distributions. We show that, when the data distributions satisfy a weaker realizability assumption, which appeared in [Crammer and Mansour, 2012] in the context of multi-task learning, sample-efficient learning is still feasible. We give a learning algorithm based on Empirical Risk Minimization (ERM) on a natural augmentation of the hypothesis class, and the analysis relies on an upper bound on the VC dimension of this augmented class. In terms of the computational efficiency, we show that ERM on the augmented hypothesis class is NP-hard, which gives evidence against the existence of computationally efficient learners in general. On the positive side, for two special cases, we give learners that are both sample- and computationally-efficient.
[ "['Yuyang Deng' 'Mingda Qiao']" ]
null
null
2402.10447
null
null
http://arxiv.org/pdf/2402.10447v2
2024-05-27T15:23:17Z
2024-02-16T04:41:33Z
Incremental Sequence Labeling: A Tale of Two Shifts
The incremental sequence labeling task involves continuously learning new classes over time while retaining knowledge of the previous ones. Our investigation identifies two significant semantic shifts: E2O (where the model mislabels an old entity as a non-entity) and O2E (where the model labels a non-entity or old entity as a new entity). Previous research has predominantly focused on addressing the E2O problem, neglecting the O2E issue. This negligence results in a model bias towards classifying new data samples as belonging to the new class during the learning process. To address these challenges, we propose a novel framework, Incremental Sequential Labeling without Semantic Shifts (IS3). Motivated by the identified semantic shifts (E2O and O2E), IS3 aims to mitigate catastrophic forgetting in models. As for the E2O problem, we use knowledge distillation to maintain the model's discriminative ability for old entities. Simultaneously, to tackle the O2E problem, we alleviate the model's bias towards new entities through debiasing at both the loss and optimization levels. Our experimental evaluation, conducted on three datasets with various incremental settings, demonstrates the superior performance of IS3 compared to the previous state-of-the-art method by a significant margin. The data, code, and scripts are publicly available at https://github.com/zzz47zzz/codebase-for-incremental-learning-with-llm.
[ "['Shengjie Qiu' 'Junhao Zheng' 'Zhen Liu' 'Yicheng Luo' 'Qianli Ma']" ]
null
null
2402.10450
null
null
http://arxiv.org/pdf/2402.10450v3
2024-06-06T04:47:52Z
2024-02-16T04:55:09Z
PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control
Temporal action abstractions, along with belief state representations, are a powerful knowledge sharing mechanism for sequential decision making. In this work, we propose a novel view that treats inducing temporal action abstractions as a sequence compression problem. To do so, we bring a subtle but critical component of LLM training pipelines -- input tokenization via byte pair encoding (BPE) -- to the seemingly distant task of learning skills of variable time span in continuous control domains. We introduce an approach called Primitive Sequence Encoding (PRISE) that combines continuous action quantization with BPE to learn powerful action abstractions. We empirically show that high-level skills discovered by PRISE from a multitask set of robotic manipulation demonstrations significantly boost the performance of both multitask imitation learning as well as few-shot imitation learning on unseen tasks. Our code is released at https://github.com/FrankZheng2022/PRISE.
[ "['Ruijie Zheng' 'Ching-An Cheng' 'Hal Daumé III' 'Furong Huang'\n 'Andrey Kolobov']" ]
null
null
2402.10456
null
null
http://arxiv.org/pdf/2402.10456v1
2024-02-16T05:27:05Z
2024-02-16T05:27:05Z
Generative Modeling for Tabular Data via Penalized Optimal Transport Network
The task of precisely learning the probability distribution of rows within tabular data and producing authentic synthetic samples is both crucial and non-trivial. Wasserstein generative adversarial network (WGAN) marks a notable improvement in generative modeling, addressing the challenges faced by its predecessor, generative adversarial network. However, due to the mixed data types and multimodalities prevalent in tabular data, the delicate equilibrium between the generator and discriminator, as well as the inherent instability of Wasserstein distance in high dimensions, WGAN often fails to produce high-fidelity samples. To this end, we propose POTNet (Penalized Optimal Transport Network), a generative deep neural network based on a novel, robust, and interpretable marginally-penalized Wasserstein (MPW) loss. POTNet can effectively model tabular data containing both categorical and continuous features. Moreover, it offers the flexibility to condition on a subset of features. We provide theoretical justifications for the motivation behind the MPW loss. We also empirically demonstrate the effectiveness of our proposed method on four different benchmarks across a variety of real-world and simulated datasets. Our proposed model achieves orders of magnitude speedup during the sampling stage compared to state-of-the-art generative models for tabular data, thereby enabling efficient large-scale synthetic data generation.
[ "['Wenhui Sophia Lu' 'Chenyang Zhong' 'Wing Hung Wong']" ]
null
null
2402.10457
null
null
http://arxiv.org/pdf/2402.10457v1
2024-02-16T05:27:13Z
2024-02-16T05:27:13Z
Learning-Augmented Skip Lists
We study the integration of machine learning advice into the design of skip lists to improve upon traditional data structure design. Given access to a possibly erroneous oracle that outputs estimated fractional frequencies for search queries on a set of items, we construct a skip list that provably provides the optimal expected search time, within nearly a factor of two. In fact, our learning-augmented skip list is still optimal up to a constant factor, even if the oracle is only accurate within a constant factor. We show that if the search queries follow the ubiquitous Zipfian distribution, then the expected search time for an item by our skip list is only a constant, independent of the total number $n$ of items, i.e., $\mathcal{O}(1)$, whereas a traditional skip list will have an expected search time of $\mathcal{O}(\log n)$. We also demonstrate robustness by showing that our data structure achieves an expected search time that is within a constant factor of an oblivious skip list construction even when the predictions are arbitrarily incorrect. Finally, we empirically show that our learning-augmented skip list outperforms traditional skip lists on both synthetic and real-world datasets.
[ "['Chunkai Fu' 'Jung Hoon Seo' 'Samson Zhou']" ]
null
null
2402.10462
null
null
http://arxiv.org/pdf/2402.10462v1
2024-02-16T05:42:17Z
2024-02-16T05:42:17Z
QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning
Finetuning large language models requires huge GPU memory, restricting the choice of larger models. While the quantized version of the Low-Rank Adaptation technique, named QLoRA, significantly alleviates this issue, finding the efficient LoRA rank is still challenging. Moreover, QLoRA is trained on a pre-defined rank and, therefore, cannot be reconfigured for lower ranks without further fine-tuning steps. This paper proposes QDyLoRA (Quantized Dynamic Low-Rank Adaptation), an efficient quantization approach for dynamic low-rank adaptation. Motivated by Dynamic LoRA, QDyLoRA is able to efficiently finetune LLMs on a set of pre-defined LoRA ranks. QDyLoRA enables fine-tuning Falcon-40b for ranks 1 to 64 on a single 32 GB V100 GPU through one round of fine-tuning. Experimental results show that QDyLoRA is competitive with QLoRA and outperforms it when employing its optimal rank.
[ "['Hossein Rajabzadeh' 'Mojtaba Valipour' 'Tianshu Zhu' 'Marzieh Tahaei'\n 'Hyock Ju Kwon' 'Ali Ghodsi' 'Boxing Chen' 'Mehdi Rezagholizadeh']" ]
null
null
2402.10464
null
null
http://arxiv.org/pdf/2402.10464v1
2024-02-16T06:00:31Z
2024-02-16T06:00:31Z
FedKit: Enabling Cross-Platform Federated Learning for Android and iOS
We present FedKit, a federated learning (FL) system tailored for cross-platform FL research on Android and iOS devices. FedKit pipelines cross-platform FL development by enabling model conversion, hardware-accelerated training, and cross-platform model aggregation. Our FL workflow supports flexible machine learning operations (MLOps) in production, facilitating continuous model delivery and training. We have deployed FedKit in a real-world use case for health data analysis on university campuses, demonstrating its effectiveness. FedKit is open-source at https://github.com/FedCampus/FedKit.
[ "['Sichang He' 'Beilong Tang' 'Boyan Zhang' 'Jiaoqi Shao' 'Xiaomin Ouyang'\n 'Daniel Nata Nugraha' 'Bing Luo']" ]
null
null
2402.10468
null
null
http://arxiv.org/pdf/2402.10468v1
2024-02-16T06:17:50Z
2024-02-16T06:17:50Z
Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation
Graph contrastive learning (GCL) has emerged as a pivotal technique in the domain of graph representation learning. A crucial aspect of effective GCL is the caliber of generated positive and negative samples, which is intrinsically dictated by their resemblance to the original data. Nevertheless, precise control over similarity during sample generation presents a formidable challenge, often impeding the effective discovery of representative graph patterns. To address this challenge, we propose an innovative framework: Adversarial Curriculum Graph Contrastive Learning (ACGCL), which capitalizes on the merits of pair-wise augmentation to engender graph-level positive and negative samples with controllable similarity, alongside subgraph contrastive learning to discern effective graph patterns therein. Within the ACGCL framework, we have devised a novel adversarial curriculum training methodology that facilitates progressive learning by sequentially increasing the difficulty of distinguishing the generated samples. Notably, this approach transcends the prevalent sparsity issue inherent in conventional curriculum learning strategies by adaptively concentrating on more challenging training data. Finally, a comprehensive assessment of ACGCL is conducted through extensive experiments on six well-known benchmark datasets, wherein ACGCL conspicuously surpasses a set of state-of-the-art baselines.
[ "['Xinjian Zhao' 'Liang Zhang' 'Yang Liu' 'Ruocheng Guo' 'Xiangyu Zhao']" ]
null
null
2402.10470
null
null
http://arxiv.org/pdf/2402.10470v1
2024-02-16T06:22:44Z
2024-02-16T06:22:44Z
Theoretical Understanding of Learning from Adversarial Perturbations
It is not fully understood why adversarial examples can deceive neural networks and transfer between different networks. To elucidate this, several studies have hypothesized that adversarial perturbations, while appearing as noises, contain class features. This is supported by empirical evidence showing that networks trained on mislabeled adversarial examples can still generalize well to correctly labeled test samples. However, a theoretical understanding of how perturbations include class features and contribute to generalization is limited. In this study, we provide a theoretical framework for understanding learning from perturbations using a one-hidden-layer network trained on mutually orthogonal samples. Our results highlight that various adversarial perturbations, even perturbations of a few pixels, contain sufficient class features for generalization. Moreover, we reveal that the decision boundary when learning from perturbations matches that from standard samples except for specific regions under mild conditions. The code is available at https://github.com/s-kumano/learning-from-adversarial-perturbations.
[ "['Soichiro Kumano' 'Hiroshi Kera' 'Toshihiko Yamasaki']" ]
null
null
2402.10473
null
null
http://arxiv.org/pdf/2402.10473v1
2024-02-16T06:35:10Z
2024-02-16T06:35:10Z
Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy
As machine learning (ML) becomes more prevalent in human-centric applications, there is a growing emphasis on algorithmic fairness and privacy protection. While previous research has explored these areas as separate objectives, there is a growing recognition of the complex relationship between privacy and fairness. However, previous works have primarily focused on examining the interplay between privacy and fairness through empirical investigations, with limited attention given to theoretical exploration. This study aims to bridge this gap by introducing a theoretical framework that enables a comprehensive examination of their interrelation. We develop and analyze an information bottleneck (IB) based information obfuscation method with local differential privacy (LDP) for fair representation learning. In contrast to many empirical studies on fairness in ML, we show that the incorporation of LDP randomizers during the encoding process can enhance the fairness of the learned representation. Our analysis demonstrates that the disclosure of sensitive information is constrained by the privacy budget of the LDP randomizer, thereby enabling the optimization process within the IB framework to effectively suppress sensitive information while preserving the desired utility through obfuscation. Based on the proposed method, we further develop a variational representation encoding approach that simultaneously achieves fairness and LDP. Our variational encoding approach offers practical advantages. It is trained using a non-adversarial method and does not require the introduction of any variational prior. Extensive experiments validate our theoretical results and demonstrate the ability of our proposed approach to achieve both LDP and fairness while preserving adequate utility.
[ "['Songjie Xie' 'Youlong Wu' 'Jiaxuan Li' 'Ming Ding' 'Khaled B. Letaief']" ]
null
null
2402.10474
null
null
http://arxiv.org/pdf/2402.10474v1
2024-02-16T06:39:40Z
2024-02-16T06:39:40Z
One-Bit Quantization and Sparsification for Multiclass Linear Classification via Regularized Regression
We study the use of linear regression for multiclass classification in the over-parametrized regime where some of the training data is mislabeled. In such scenarios it is necessary to add an explicit regularization term, $\lambda f(w)$, for some convex function $f(\cdot)$, to avoid overfitting the mislabeled data. In our analysis, we assume that the data is sampled from a Gaussian Mixture Model with equal class sizes, and that a proportion $c$ of the training labels is corrupted for each class. Under these assumptions, we prove that the best classification performance is achieved when $f(\cdot) = \|\cdot\|^2_2$ and $\lambda \to \infty$. We then proceed to analyze the classification errors for $f(\cdot) = \|\cdot\|_1$ and $f(\cdot) = \|\cdot\|_\infty$ in the large $\lambda$ regime and notice that it is often possible to find sparse and one-bit solutions, respectively, that perform almost as well as the one corresponding to $f(\cdot) = \|\cdot\|_2^2$.
[ "['Reza Ghane' 'Danil Akhtiamov' 'Babak Hassibi']" ]
null
null
2402.10475
null
null
http://arxiv.org/pdf/2402.10475v2
2024-07-15T08:21:28Z
2024-02-16T06:41:35Z
Fundamental Benefit of Alternating Updates in Minimax Optimization
The Gradient Descent-Ascent (GDA) algorithm, designed to solve minimax optimization problems, takes the descent and ascent steps either simultaneously (Sim-GDA) or alternately (Alt-GDA). While Alt-GDA is commonly observed to converge faster, the performance gap between the two is not yet well understood theoretically, especially in terms of global convergence rates. To address this theory-practice gap, we present fine-grained convergence analyses of both algorithms for strongly-convex-strongly-concave and Lipschitz-gradient objectives. Our new iteration complexity upper bound of Alt-GDA is strictly smaller than the lower bound of Sim-GDA; i.e., Alt-GDA is provably faster. Moreover, we propose Alternating-Extrapolation GDA (Alex-GDA), a general algorithmic framework that subsumes Sim-GDA and Alt-GDA, for which the main idea is to alternately take gradients from extrapolations of the iterates. We show that Alex-GDA satisfies a smaller iteration complexity bound, identical to that of the Extra-gradient method, while requiring fewer gradient computations. We also prove that Alex-GDA enjoys linear convergence for bilinear problems, for which both Sim-GDA and Alt-GDA fail to converge at all.
[ "['Jaewook Lee' 'Hanseul Cho' 'Chulhee Yun']" ]
null
null
2402.10477
null
null
http://arxiv.org/pdf/2402.10477v1
2024-02-16T06:56:59Z
2024-02-16T06:56:59Z
Understanding Likelihood of Normalizing Flow and Image Complexity through the Lens of Out-of-Distribution Detection
Out-of-distribution (OOD) detection is crucial to safety-critical machine learning applications and has been extensively studied. While recent studies have predominantly focused on classifier-based methods, research on deep generative model (DGM)-based methods has lagged relatively behind. This disparity may be attributed to a perplexing phenomenon: DGMs often assign higher likelihoods to unknown OOD inputs than to their known training data. This paper focuses on explaining the underlying mechanism of this phenomenon. We propose a hypothesis that less complex images concentrate in high-density regions in the latent space, resulting in a higher likelihood assignment in the Normalizing Flow (NF). We experimentally demonstrate its validity for five NF architectures, concluding that their likelihood is untrustworthy. Additionally, we show that this problem can be alleviated by treating image complexity as an independent variable. Finally, we provide evidence of the potential applicability of our hypothesis in another DGM, PixelCNN++.
[ "['Genki Osada' 'Tsubasa Takahashi' 'Takashi Nishide']" ]
null
null
2402.10478
null
null
http://arxiv.org/pdf/2402.10478v1
2024-02-16T06:57:03Z
2024-02-16T06:57:03Z
CodaMal: Contrastive Domain Adaptation for Malaria Detection in Low-Cost Microscopes
Malaria is a major health issue worldwide, and its diagnosis requires scalable solutions that can work effectively with low-cost microscopes (LCM). Deep learning-based methods have shown success in computer-aided diagnosis from microscopic images. However, these methods need annotated images that show cells affected by malaria parasites and their life stages. Annotating images from LCM significantly increases the burden on medical experts compared to annotating images from high-cost microscopes (HCM). For this reason, a practical solution would be a model trained on HCM images that generalizes well to LCM images during testing. While earlier methods adopted a multi-stage learning process, they did not offer an end-to-end approach. In this work, we present an end-to-end learning framework, named CodaMal (Contrastive Domain Adaptation for Malaria). In order to bridge the gap between HCM (training) and LCM (testing), we propose a domain adaptive contrastive loss. It reduces the domain shift by promoting similarity between the representations of an HCM image and its corresponding LCM image, without imposing an additional annotation burden. In addition, the training objective includes object detection objectives with carefully designed augmentations, ensuring the accurate detection of malaria parasites. On the publicly available large-scale M5-dataset, our proposed method shows a significant improvement of 16% over the state-of-the-art methods in terms of the mean average precision metric (mAP), provides a 21x speed-up during inference, and requires only half the learnable parameters of prior methods. Our code is publicly available.
[ "['Ishan Rajendrakumar Dave' 'Tristan de Blegiers' 'Chen Chen'\n 'Mubarak Shah']" ]
null
null
2402.10481
null
null
http://arxiv.org/pdf/2402.10481v2
2024-05-04T14:32:44Z
2024-02-16T07:05:49Z
Emoji Driven Crypto Assets Market Reactions
In the burgeoning realm of cryptocurrency, social media platforms like Twitter have become pivotal in influencing market trends and investor sentiments. In our study, we leverage GPT-4 and a fine-tuned transformer-based BERT model for a multimodal sentiment analysis, focusing on the impact of emoji sentiment on cryptocurrency markets. By translating emojis into quantifiable sentiment data, we correlate these insights with key market indicators like BTC Price and the VCRIX index. Our architecture's analysis of emoji sentiment demonstrated a distinct advantage in predictive power over FinBERT's pure text sentiment analysis. This approach may be fed into the development of trading strategies aimed at utilizing social media elements to identify and forecast market trends. Crucially, our findings suggest that strategies based on emoji sentiment can facilitate the avoidance of significant market downturns and contribute to the stabilization of returns. This research underscores the practical benefits of integrating advanced AI-driven analyses into financial strategies, offering a nuanced perspective on the interplay between digital communication and market dynamics in an academic context.
[ "['Xiaorui Zuo' 'Yao-Tsung Chen' 'Wolfgang Karl Härdle']" ]
null
null
2402.10482
null
null
http://arxiv.org/pdf/2402.10482v1
2024-02-16T07:13:12Z
2024-02-16T07:13:12Z
Understanding Self-Distillation and Partial Label Learning in Multi-Class Classification with Label Noise
Self-distillation (SD) is the process of training a student model using the outputs of a teacher model, with both models sharing the same architecture. Our study theoretically examines SD in multi-class classification with cross-entropy loss, exploring both multi-round SD and SD with refined teacher outputs, inspired by partial label learning (PLL). By deriving a closed-form solution for the student model's outputs, we discover that SD essentially functions as label averaging among instances with high feature correlations. Initially beneficial, this averaging helps the model focus on feature clusters correlated with a given instance for predicting the label. However, it leads to diminishing performance with increasing distillation rounds. Additionally, we demonstrate SD's effectiveness in label noise scenarios and identify the label corruption condition and minimum number of distillation rounds needed to achieve 100% classification accuracy. Our study also reveals that one-step distillation with refined teacher outputs surpasses the efficacy of multi-step SD using the teacher's direct output in high noise rate regimes.
[ "['Hyeonsu Jeong' 'Hye Won Chung']" ]
null
null
2402.10487
null
null
http://arxiv.org/pdf/2402.10487v4
2024-06-12T08:49:48Z
2024-02-16T07:28:59Z
RPMixer: Shaking Up Time Series Forecasting with Random Projections for Large Spatial-Temporal Data
Spatial-temporal forecasting systems play a crucial role in addressing numerous real-world challenges. In this paper, we investigate the potential of addressing spatial-temporal forecasting problems using general time series forecasting models, i.e., models that do not leverage the spatial relationships among the nodes. We propose an all-Multi-Layer Perceptron (all-MLP) time series forecasting architecture called RPMixer. The all-MLP architecture was chosen due to its recent success in time series forecasting benchmarks. Furthermore, our method capitalizes on the ensemble-like behavior of deep neural networks, where each individual block within the network behaves like a base learner in an ensemble model, particularly when identity mapping residual connections are incorporated. By integrating random projection layers into our model, we increase the diversity among the blocks' outputs, thereby improving the overall performance of the network. Extensive experiments conducted on the largest spatial-temporal forecasting benchmark datasets demonstrate that the proposed method outperforms alternative methods, including both spatial-temporal graph models and general forecasting models.
[ "['Chin-Chia Michael Yeh' 'Yujie Fan' 'Xin Dai' 'Uday Singh Saini'\n 'Vivian Lai' 'Prince Osei Aboagye' 'Junpeng Wang' 'Huiyuan Chen'\n 'Yan Zheng' 'Zhongfang Zhuang' 'Liang Wang' 'Wei Zhang']" ]
null
null
2402.10492
null
null
http://arxiv.org/pdf/2402.10492v1
2024-02-16T07:48:59Z
2024-02-16T07:48:59Z
Developing an Optimal Model for Predicting the Severity of Wheat Stem Rust (Case study of Arsi and Bale Zone)
This research utilized three types of artificial neural network (ANN) methodologies, namely Backpropagation Neural Network (BPNN) with varied training, transfer, divide, and learning functions; Radial Basis Function Neural Network (RBFNN); and General Regression Neural Network (GRNN), to forecast the severity of stem rust. It considered parameters such as mean maximum temperature, mean minimum temperature, mean rainfall, mean average temperature, mean relative humidity, and different wheat varieties. The statistical analysis revealed that GRNN demonstrated effective predictive capability and required less training time compared to the other models. Additionally, the results indicated that total seasonal rainfall positively influenced the development of wheat stem rust. Keywords: Wheat stem rust, Back propagation neural network, Radial Basis Function Neural Network, General Regression Neural Network.
[ "['Tewodrose Altaye']" ]
null
null
2402.10500
null
null
http://arxiv.org/pdf/2402.10500v2
2024-06-05T15:10:08Z
2024-02-16T08:19:34Z
Active Preference Optimization for Sample Efficient RLHF
Reinforcement Learning from Human Feedback (RLHF) is pivotal in aligning Large Language Models (LLMs) with human preferences. Although aligned generative models have shown remarkable abilities in various tasks, their reliance on high-quality human preference data creates a costly bottleneck in the practical application of RLHF. One primary reason is that current methods rely on uniformly picking prompt-generation pairs from a dataset of prompt-generations to collect human feedback, resulting in sub-optimal alignment under a constrained budget, which highlights the criticality of adaptive strategies in efficient alignment. Recent works [Mehta et al., 2023, Muldrew et al., 2024] have tried to address this problem by designing various heuristics based on generation uncertainty. However, either the assumptions in [Mehta et al., 2023] are restrictive, or [Muldrew et al., 2024] do not provide any rigorous theoretical guarantee. To address these, we reformulate RLHF within a contextual preference bandit framework, treating prompts as contexts, and develop an active-learning algorithm, $\textit{Active Preference Optimization}$ ($\texttt{APO}$), which enhances model alignment by querying preference data from the most important samples, achieving superior performance under a small sample budget. We analyze the theoretical performance guarantees of $\texttt{APO}$ under the BTL preference model, showing that the suboptimality gap of the policy learned via $\texttt{APO}$ scales as $O(1/\sqrt{T})$ for a budget of $T$. We also show that collecting preference data by choosing prompts randomly leads to a policy that suffers a constant sub-optimality. We perform detailed experimental evaluations on practical preference datasets to validate $\texttt{APO}$'s efficacy over the existing methods, establishing it as a sample-efficient and practical solution of alignment in a cost-effective and scalable manner.
[ "['Nirjhar Das' 'Souradip Chakraborty' 'Aldo Pacchiano'\n 'Sayak Ray Chowdhury']" ]
null
null
2402.10502
null
null
http://arxiv.org/pdf/2402.10502v1
2024-02-16T08:21:43Z
2024-02-16T08:21:43Z
Late-time transition of $M_B$ inferred via neural networks
The strengthening of tensions in the cosmological parameters has led to a reconsideration of fundamental aspects of standard cosmology. The tension in the Hubble constant can also be viewed as a tension between local and early Universe constraints on the absolute magnitude $M_B$ of Type Ia supernova. In this work, we reconsider the possibility of a variation of this parameter in a model-independent way. We employ neural networks to agnostically constrain the value of the absolute magnitude as well as assess the impact and statistical significance of a variation in $M_B$ with redshift from the Pantheon+ compilation, together with a thorough analysis of the neural network architecture. We find an indication for a transition redshift in the $z \approx 1$ region.
[ "['Purba Mukherjee' 'Konstantinos F. Dialektopoulos' 'Jackson Levi Said'\n 'Jurgen Mifsud']" ]
null
null
2402.10504
null
null
http://arxiv.org/pdf/2402.10504v1
2024-02-16T08:27:55Z
2024-02-16T08:27:55Z
Resilience of the quadratic Littlewood-Offord problem
We study the statistical resilience of high-dimensional data. Our results provide estimates as to the effects of adversarial noise over the anti-concentration properties of the quadratic Rademacher chaos $\boldsymbol{\xi}^{\mathsf{T}} M \boldsymbol{\xi}$, where $M$ is a fixed (high-dimensional) matrix and $\boldsymbol{\xi}$ is a conformal Rademacher vector. Specifically, we pursue the question of how many adversarial sign-flips $\boldsymbol{\xi}$ can sustain without "inflating" $\sup_{x\in \mathbb{R}} \mathbb{P} \left\{\boldsymbol{\xi}^{\mathsf{T}} M \boldsymbol{\xi} = x\right\}$ and thus "de-smoothing" the original distribution, resulting in a more "grainy" and adversarially biased distribution. Our results provide lower bound estimations for the statistical resilience of the quadratic and bilinear Rademacher chaos; these are shown to be asymptotically tight across key regimes.
[ "['Elad Aigner-Horev' 'Daniel Rozenberg' 'Roi Weiss']" ]
null
null
2402.10511
null
null
http://arxiv.org/pdf/2402.10511v1
2024-02-16T08:56:22Z
2024-02-16T08:56:22Z
Can Transformers Predict Vibrations?
Highly accurate time-series vibration prediction is an important research issue for electric vehicles (EVs). EVs often experience vibrations when driving on rough terrains, known as torsional resonance. This resonance, caused by the interaction between motor and tire vibrations, puts excessive loads on the vehicle's drive shaft. However, current damping technologies only detect resonance after the vibration amplitude of the drive shaft torque reaches a certain threshold, leading to significant loads on the shaft at the time of detection. In this study, we propose a novel approach to address this issue by introducing Resoformer, a transformer-based model for predicting torsional resonance. Resoformer utilizes time-series of the motor rotation speed as input and predicts the amplitude of torsional vibration at a specified quantile occurring in the shaft after the input series. By calculating the attention between recursive and convolutional features extracted from the measured data points, Resoformer improves the accuracy of vibration forecasting. To evaluate the model, we use a vibration dataset called VIBES (Dataset for Forecasting Vibration Transition in EVs), consisting of 2,600 simulator-generated vibration sequences. Our experiments, conducted on strong baselines built on the VIBES dataset, demonstrate that Resoformer achieves state-of-the-art results. In conclusion, our study answers the question "Can Transformers Forecast Vibrations?" While traditional transformer architectures show low performance in forecasting torsional resonance waves, our findings indicate that combining recurrent neural network and temporal convolutional network using the transformer architecture improves the accuracy of long-term vibration forecasting.
[ "['Fusataka Kuniyoshi' 'Yoshihide Sawada']" ]
null
null
2402.10516
null
null
http://arxiv.org/pdf/2402.10516v1
2024-02-16T09:05:02Z
2024-02-16T09:05:02Z
Generative AI for Controllable Protein Sequence Design: A Survey
The design of novel protein sequences with targeted functionalities underpins a central theme in protein engineering, impacting diverse fields such as drug discovery and enzymatic engineering. However, navigating this vast combinatorial search space remains a severe challenge due to time and financial constraints. This scenario is rapidly evolving as the transformative advancements in AI, particularly in the realm of generative models and optimization algorithms, have been propelling the protein design field towards an unprecedented revolution. In this survey, we systematically review recent advances in generative AI for controllable protein sequence design. To set the stage, we first outline the foundational tasks in protein sequence design in terms of the constraints involved and present key generative models and optimization algorithms. We then offer in-depth reviews of each design task and discuss the pertinent applications. Finally, we identify the unresolved challenges and highlight research opportunities that merit deeper exploration.
[ "['Yiheng Zhu' 'Zitai Kong' 'Jialu Wu' 'Weize Liu' 'Yuqiang Han'\n 'Mingze Yin' 'Hongxia Xu' 'Chang-Yu Hsieh' 'Tingjun Hou']" ]
null
null
2402.10517
null
null
http://arxiv.org/pdf/2402.10517v4
2024-06-21T05:20:56Z
2024-02-16T09:06:06Z
Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs
Recently, considerable efforts have been directed towards compressing Large Language Models (LLMs), which showcase groundbreaking capabilities across diverse applications but entail significant deployment costs due to their large sizes. Meanwhile, much less attention has been given to mitigating the costs associated with deploying multiple LLMs of varying sizes despite its practical significance. Thus, this paper introduces \emph{any-precision LLM}, extending the concept of any-precision DNN to LLMs. Addressing challenges in any-precision LLM, we propose a lightweight method for any-precision quantization of LLMs, leveraging a post-training quantization framework, and develop a specialized software engine for its efficient serving. As a result, our solution significantly reduces the high costs of deploying multiple, different-sized LLMs by overlaying LLMs quantized to varying bit-widths, such as 3, 4, ..., $n$ bits, into a memory footprint comparable to a single $n$-bit LLM. All the supported LLMs with varying bit-widths demonstrate state-of-the-art model quality and inference throughput, proving our solution to be a compelling option for deployment of multiple, different-sized LLMs. Our code is open-sourced and available online.
[ "['Yeonhong Park' 'Jake Hyun' 'SangLyul Cho' 'Bonggeun Sim' 'Jae W. Lee']" ]
null
null
2402.10524
null
null
http://arxiv.org/pdf/2402.10524v1
2024-02-16T09:14:49Z
2024-02-16T09:14:49Z
LLM Comparator: Visual Analytics for Side-by-Side Evaluation of Large Language Models
Automatic side-by-side evaluation has emerged as a promising approach to evaluating the quality of responses from large language models (LLMs). However, analyzing the results from this evaluation approach raises scalability and interpretability challenges. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from automatic side-by-side evaluation. The tool supports interactive workflows for users to understand when and why a model performs better or worse than a baseline model, and how the responses from two models are qualitatively different. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. This paper details the user challenges we identified, the design and development of the tool, and an observational study with participants who regularly evaluate their models.
[ "['Minsuk Kahng' 'Ian Tenney' 'Mahima Pushkarna' 'Michael Xieyang Liu'\n 'James Wexler' 'Emily Reif' 'Krystal Kallarackal' 'Minsuk Chang'\n 'Michael Terry' 'Lucas Dixon']" ]
null
null
2402.10532
null
null
http://arxiv.org/pdf/2402.10532v1
2024-02-16T09:37:54Z
2024-02-16T09:37:54Z
Properties and Challenges of LLM-Generated Explanations
The self-rationalising capabilities of large language models (LLMs) have been explored in restricted settings, using task-specific datasets. However, current LLMs do not (only) rely on specifically annotated data; nonetheless, they frequently explain their outputs. The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning. As the pre-training corpus includes a large amount of human-written explanations "in the wild", we hypothesise that LLMs adopt common properties of human explanations. By analysing the outputs for a multi-domain instruction fine-tuning data set, we find that generated explanations show selectivity and contain illustrative elements, but less frequently are subjective or misleading. We discuss reasons and consequences of the properties' presence or absence. In particular, we outline positive and negative implications depending on the goals and user groups of the self-rationalising system.
[ "['Jenny Kunz' 'Marco Kuhlmann']" ]
null
null
2402.10547
null
null
http://arxiv.org/pdf/2402.10547v1
2024-02-16T10:20:42Z
2024-02-16T10:20:42Z
Learning Disentangled Audio Representations through Controlled Synthesis
This paper tackles the scarcity of benchmarking data in disentangled auditory representation learning. We introduce SynTone, a synthetic dataset with explicit ground truth explanatory factors for evaluating disentanglement techniques. Benchmarking state-of-the-art methods on SynTone highlights its utility for method evaluation. Our results underscore strengths and limitations in audio disentanglement, motivating future research.
[ "['Yusuf Brima' 'Ulf Krumnack' 'Simone Pika' 'Gunther Heidemann']" ]
null
null
2402.10551
null
null
http://arxiv.org/pdf/2402.10551v1
2024-02-16T10:29:25Z
2024-02-16T10:29:25Z
Personalised Drug Identifier for Cancer Treatment with Transformers using Auxiliary Information
Cancer remains a global challenge due to its growing clinical and economic burden. Its uniquely personal manifestation, which makes treatment difficult, has fuelled the quest for personalized treatment strategies. Thus, genomic profiling is increasingly becoming part of clinical diagnostic panels. Effective use of such panels requires accurate drug response prediction (DRP) models, which are challenging to build due to limited labelled patient data. Previous methods to address this problem have used various forms of transfer learning. However, they do not explicitly model the variable length sequential structure of the list of mutations in such diagnostic panels. Further, they do not utilize auxiliary information (like patient survival) for model training. We address these limitations through a novel transformer based method, which surpasses the performance of state-of-the-art DRP models on benchmark data. We also present the design of a treatment recommendation system (TRS), which is currently deployed at the National University Hospital, Singapore and is being evaluated in a clinical trial.
[ "['Aishwarya Jayagopal' 'Hansheng Xue' 'Ziyang He' 'Robert J. Walsh'\n 'Krishna Kumar Hariprasannan' 'David Shao Peng Tan' 'Tuan Zea Tan'\n 'Jason J. Pitt' 'Anand D. Jeyasekharan' 'Vaibhav Rajan']" ]
null
null
2402.10553
null
null
http://arxiv.org/pdf/2402.10553v1
2024-02-16T10:35:01Z
2024-02-16T10:35:01Z
A novel integrated industrial approach with cobots in the age of industry 4.0 through conversational interaction and computer vision
From robots that replace workers to robots that serve as helpful colleagues, the field of robotic automation is experiencing a new trend that represents a huge challenge for component manufacturers. The contribution starts from an innovative vision of an ever closer collaboration between the cobot, able to do a specific physical job with precision; the AI world, able to analyze information and support the decision-making process; and the human, able to maintain a strategic vision of the future.
[ "['Andrea Pazienza' 'Nicola Macchiarulo' 'Felice Vitulano'\n 'Antonio Fiorentini' 'Marco Cammisa' 'Leonardo Rigutini'\n 'Ernesto Di Iorio' 'Achille Globo' 'Antonio Trevisi']" ]
null
null
2402.10571
null
null
http://arxiv.org/pdf/2402.10571v2
2024-06-06T12:02:37Z
2024-02-16T10:55:38Z
Direct Preference Optimization with an Offset
Direct preference optimization (DPO) is a successful fine-tuning strategy for aligning large language models with human preferences without the need to train a reward model or employ reinforcement learning. DPO, as originally formulated, relies on binary preference data and fine-tunes a language model to increase the likelihood of a preferred response over a dispreferred response. However, not all preference pairs are equal. Sometimes, the preferred response is only slightly better than the dispreferred one. In other cases, the preference is much stronger. For instance, if a response contains harmful or toxic content, the annotator will have a strong preference against that response. In this paper, we propose a generalization of DPO, termed DPO with an offset (ODPO), that does not treat every preference pair equally during fine-tuning. Intuitively, ODPO requires the difference between the likelihood of the preferred and dispreferred response to be greater than an offset value. The offset is determined based on the extent to which one response is preferred over another. Our experiments on various tasks suggest that ODPO significantly outperforms DPO in aligning language models, especially when the number of preference pairs is limited.
[ "['Afra Amini' 'Tim Vieira' 'Ryan Cotterell']" ]
null
null
2402.10575
null
null
http://arxiv.org/pdf/2402.10575v1
2024-02-16T11:04:31Z
2024-02-16T11:04:31Z
Symbolic Autoencoding for Self-Supervised Sequence Learning
Traditional language models, adept at next-token prediction in text sequences, often struggle with transduction tasks between distinct symbolic systems, particularly when parallel data is scarce. Addressing this issue, we introduce \textit{symbolic autoencoding} ($\Sigma$AE), a self-supervised framework that harnesses the power of abundant unparallel data alongside limited parallel data. $\Sigma$AE connects two generative models via a discrete bottleneck layer and is optimized end-to-end by minimizing reconstruction loss (simultaneously with supervised loss for the parallel data), such that the sequence generated by the discrete bottleneck can be read out as the transduced input sequence. We also develop gradient-based methods allowing for efficient self-supervised sequence learning despite the discreteness of the bottleneck. Our results demonstrate that $\Sigma$AE significantly enhances performance on transduction tasks, even with minimal parallel data, offering a promising solution for weakly supervised learning scenarios.
[ "['Mohammad Hossein Amani' 'Nicolas Mario Baldwin' 'Amin Mansouri'\n 'Martin Josifoski' 'Maxime Peyrard' 'Robert West']" ]
null
null
2402.10580
null
null
http://arxiv.org/pdf/2402.10580v1
2024-02-16T11:09:16Z
2024-02-16T11:09:16Z
Efficient Multi-task Uncertainties for Joint Semantic Segmentation and Monocular Depth Estimation
Quantifying the predictive uncertainty emerged as a possible solution to common challenges like overconfidence or lack of explainability and robustness of deep neural networks, albeit one that is often computationally expensive. Many real-world applications are multi-modal in nature and hence benefit from multi-task learning. In autonomous driving, for example, the joint solution of semantic segmentation and monocular depth estimation has proven to be valuable. In this work, we first combine different uncertainty quantification methods with joint semantic segmentation and monocular depth estimation and evaluate how they perform in comparison to each other. Additionally, we reveal the benefits of multi-task learning with regard to the uncertainty quality compared to solving both tasks separately. Based on these insights, we introduce EMUFormer, a novel student-teacher distillation approach for joint semantic segmentation and monocular depth estimation as well as efficient multi-task uncertainty quantification. By implicitly leveraging the predictive uncertainties of the teacher, EMUFormer achieves new state-of-the-art results on Cityscapes and NYUv2 and additionally estimates high-quality predictive uncertainties for both tasks that are comparable or superior to a Deep Ensemble despite being an order of magnitude more efficient.
[ "['Steven Landgraf' 'Markus Hillemann' 'Theodor Kapler' 'Markus Ulrich']" ]
null
null
2402.10592
null
null
http://arxiv.org/pdf/2402.10592v1
2024-02-16T11:27:48Z
2024-02-16T11:27:48Z
Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification
Practitioners conducting adaptive experiments often encounter two competing priorities: reducing the cost of experimentation by effectively assigning treatments during the experiment itself, and gathering information swiftly to conclude the experiment and implement a treatment across the population. Currently, the literature is divided, with studies on regret minimization addressing the former priority in isolation, and research on best-arm identification focusing solely on the latter. This paper proposes a unified model that accounts for both within-experiment performance and post-experiment outcomes. We then provide a sharp theory of optimal performance in large populations that unifies canonical results in the literature. This unification also uncovers novel insights. For example, the theory reveals that familiar algorithms, like the recently proposed top-two Thompson sampling algorithm, can be adapted to optimize a broad class of objectives by simply adjusting a single scalar parameter. In addition, the theory reveals that enormous reductions in experiment duration can sometimes be achieved with minimal impact on both within-experiment and post-experiment regret.
[ "['Chao Qin' 'Daniel Russo']" ]
null
null
2402.10609
null
null
http://arxiv.org/pdf/2402.10609v2
2024-07-05T07:49:01Z
2024-02-16T11:54:34Z
MRPD: Undersampled MRI reconstruction by prompting a large latent diffusion model
Implicit visual knowledge in a large latent diffusion model (LLDM) pre-trained on natural images is rich and hypothetically universal to natural and medical images. To test this hypothesis from a practical perspective, we propose a novel framework for undersampled MRI Reconstruction by Prompting a large latent Diffusion model (MRPD). While the existing methods trained on MRI datasets are typically of limited generalizability toward diverse data acquisition scenarios, MRPD supports unsupervised and universally adaptive MRI reconstruction. For unsupervised reconstruction, MRSampler guides LLDM with a random-phase-modulated hard-to-soft control. With any single- or multiple-source MRI dataset, MRPD's performance is boosted universally by a lightweight MRAdapter that only finetunes the LLDM's autoencoder. Experiments on FastMRI and IXI show that MRPD is the only model that supports both MRI database-free and database-available scenarios and attains the best generalizability towards out-of-domain (OOD) samplings, contrasts, and organs among compared unsupervised, supervised, and MRI diffusion methods. To our knowledge, MRPD is the first method that empirically shows the universal prowess of an LLDM pre-trained on vast natural images for MRI. Our official implementation is at https://github.com/Z7Gao/MRPD.
[ "['Ziqi Gao' 'S. Kevin Zhou']" ]
null
null
2402.10614
null
null
http://arxiv.org/pdf/2402.10614v2
2024-06-07T20:19:09Z
2024-02-16T12:00:34Z
Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements
Making LLMs speak for different, especially minority groups of people, and generate statements supporting their diverse or even controversial perspectives is critical to creating an inclusive environment. However, existing LLMs lack sufficient controllability to the stance of their generated content, which often contains inconsistent, neutral, or biased statements. In this paper, we improve the controllability of LLMs in generating statements supporting an argument the user defined in the prompt. We find that multi-round debates between two LLMs with opposite stances generate higher-quality and more salient statements for each, which are important training data to improve the controllability of LLMs. Motivated by this, we develop a novel debate & tuning (DEBATUNE) pipeline finetuning LLMs to generate the statements obtained via debate. To examine DEBATUNE, we curate the largest dataset of debate topics so far, which covers 710 controversial topics and corresponding arguments for each topic. Evaluations by the GPT-4 judge with a novel controversy controllability metric show that LLMs' capability of generating diverse perspectives is significantly improved by DEBATUNE. Moreover, such controllability can be generalized to unseen topics, generating high-quality statements supporting controversial arguments.
[ "['Ming Li' 'Jiuhai Chen' 'Lichang Chen' 'Tianyi Zhou']" ]
null
null
2402.10617
null
null
http://arxiv.org/abs/2402.10617v1
2024-02-16T12:11:34Z
2024-02-16T12:11:34Z
Multitask Kernel-based Learning with Logic Constraints
This paper presents a general framework to integrate prior knowledge in the form of logic constraints among a set of task functions into kernel machines. The logic propositions provide a partial representation of the environment, in which the learner operates, that is exploited by the learning algorithm together with the information available in the supervised examples. In particular, we consider a multi-task learning scheme, where multiple unary predicates on the feature space are to be learned by kernel machines and a higher level abstract representation consists of logic clauses on these predicates, known to hold for any input. A general approach is presented to convert the logic clauses into a continuous implementation, that processes the outputs computed by the kernel-based predicates. The learning task is formulated as a primal optimization problem of a loss function that combines a term measuring the fitting of the supervised examples, a regularization term, and a penalty term that enforces the constraints on both supervised and unsupervised examples. The proposed semi-supervised learning framework is particularly suited for learning in high dimensionality feature spaces, where the supervised training examples tend to be sparse and generalization difficult. Unlike for standard kernel machines, the cost function to optimize is not generally guaranteed to be convex. However, the experimental results show that it is still possible to find good solutions using a two-stage learning scheme, in which first the supervised examples are learned until convergence and then the logic constraints are forced. Some promising experimental results on artificial multi-task learning tasks are reported, showing how the classification accuracy can be effectively improved by exploiting the a priori rules and the unsupervised examples.
[ "['Michelangelo Diligenti' 'Marco Gori' 'Marco Maggini' 'Leonardo Rigutini']" ]
null
null
2402.10634
null
null
http://arxiv.org/pdf/2402.10634v3
2024-06-08T15:27:35Z
2024-02-16T12:33:31Z
Graph-based Forecasting with Missing Data through Spatiotemporal Downsampling
Given a set of synchronous time series, each associated with a sensor-point in space and characterized by inter-series relationships, the problem of spatiotemporal forecasting consists of predicting future observations for each point. Spatiotemporal graph neural networks achieve striking results by representing the relationships across time series as a graph. Nonetheless, most existing methods rely on the often unrealistic assumption that inputs are always available and fail to capture hidden spatiotemporal dynamics when part of the data is missing. In this work, we tackle this problem through hierarchical spatiotemporal downsampling. The input time series are progressively coarsened over time and space, obtaining a pool of representations that capture heterogeneous temporal and spatial dynamics. Conditioned on observations and missing data patterns, such representations are combined by an interpretable attention mechanism to generate the forecasts. Our approach outperforms state-of-the-art methods on synthetic and real-world benchmarks under different missing data distributions, particularly in the presence of contiguous blocks of missing values.
[ "['Ivan Marisca' 'Cesare Alippi' 'Filippo Maria Bianchi']" ]
null
null
2402.10635
null
null
http://arxiv.org/pdf/2402.10635v1
2024-02-16T12:34:38Z
2024-02-16T12:34:38Z
ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling
Modeling continuous-time dynamics on irregular time series is critical to account for data evolution and correlations that occur continuously. Traditional methods including recurrent neural networks or Transformer models leverage inductive bias via powerful neural architectures to capture complex patterns. However, due to their discrete characteristic, they have limitations in generalizing to continuous-time data paradigms. Though neural ordinary differential equations (Neural ODEs) and their variants have shown promising results in dealing with irregular time series, they often fail to capture the intricate correlations within these sequences. It is challenging yet essential to concurrently model the relationship between input data points and capture the dynamic changes of the continuous-time system. To tackle this problem, we propose ContiFormer that extends the relation modeling of vanilla Transformer to the continuous-time domain, which explicitly incorporates the modeling abilities of continuous dynamics of Neural ODEs with the attention mechanism of Transformers. We mathematically characterize the expressive power of ContiFormer and illustrate that, by curated designs of function hypothesis, many Transformer variants specialized in irregular time series modeling can be covered as a special case of ContiFormer. A wide range of experiments on both synthetic and real-world datasets have illustrated the superior modeling capacities and prediction performance of ContiFormer on irregular time series data. The project link is https://seqml.github.io/contiformer/.
[ "['Yuqi Chen' 'Kan Ren' 'Yansen Wang' 'Yuchen Fang' 'Weiwei Sun'\n 'Dongsheng Li']" ]
null
null
2402.10641
null
null
http://arxiv.org/pdf/2402.10641v1
2024-02-16T12:41:31Z
2024-02-16T12:41:31Z
A Predictive Surrogate Model for Heat Transfer of an Impinging Jet on a Concave Surface
This paper aims to comprehensively investigate the efficacy of various Model Order Reduction (MOR) and deep learning techniques in predicting heat transfer in a pulsed jet impinging on a concave surface. Expanding on the previous experimental and numerical research involving pulsed circular jets, this investigation extends to evaluate Predictive Surrogate Models (PSM) for heat transfer across various jet characteristics. To this end, this work introduces two predictive approaches, one employing a Fast Fourier Transformation augmented Artificial Neural Network (FFT-ANN) for predicting the average Nusselt number under constant-frequency scenarios. Moreover, the investigation introduces the Proper Orthogonal Decomposition and Long Short-Term Memory (POD-LSTM) approach for random-frequency impingement jets. The POD-LSTM method proves to be a robust solution for predicting the local heat transfer rate under random-frequency impingement scenarios, capturing both the trend and value of temporal modes. The comparison of these approaches highlights the versatility and efficacy of advanced machine learning techniques in modelling complex heat transfer phenomena.
[ "['Sajad Salavatidezfouli' 'Saeid Rakhsha' 'Armin Sheidani'\n 'Giovanni Stabile' 'Gianluigi Rozza']" ]
null
null
2402.10644
null
null
http://arxiv.org/pdf/2402.10644v2
2024-06-05T14:13:22Z
2024-02-16T12:44:15Z
Linear Transformers with Learnable Kernel Functions are Better In-Context Models
Advancing the frontier of subquadratic architectures for Language Models (LMs) is crucial in the rapidly evolving field of natural language processing. Current innovations, including State Space Models, were initially celebrated for surpassing Transformer performance on language modeling tasks. However, these models have revealed deficiencies in essential In-Context Learning capabilities - a domain where the Transformer traditionally shines. The Based model emerged as a hybrid solution, blending a Linear Transformer with a kernel inspired by the Taylor expansion of exponential functions, augmented by convolutional networks. Mirroring the Transformer's in-context adeptness, it became a strong contender in the field. In our work, we present a singular, elegant alteration to the Based kernel that amplifies its In-Context Learning abilities evaluated with the Multi-Query Associative Recall task and overall language modeling process, as demonstrated on the Pile dataset.
[ "['Yaroslav Aksenov' 'Nikita Balagansky' 'Sofia Maria Lo Cicero Vaina'\n 'Boris Shaposhnikov' 'Alexey Gorbatovski' 'Daniil Gavrilov']" ]
null
null
2402.10649
null
null
http://arxiv.org/pdf/2402.10649v1
2024-02-16T12:51:25Z
2024-02-16T12:51:25Z
Hermite Neural Network Simulation for Solving the 2D Schrodinger Equation
The Schrodinger equation is a mathematical equation describing the wave function's behavior in a quantum-mechanical system. It is a partial differential equation that provides valuable insights into the fundamental principles of quantum mechanics. In this paper, the aim was to solve the Schrodinger equation with sufficient accuracy by using a mixture of neural networks with a collocation method based on Hermite functions. Initially, the roots of the Hermite functions were employed as collocation points, enhancing the efficiency of the solution. Since the Schrodinger equation is defined on an infinite domain, the use of Hermite functions as activation functions resulted in excellent precision. Finally, the proposed method was simulated using MATLAB's Simulink tool. The results were then compared with those obtained using physics-informed neural networks and the presented method.
[ "['Kourosh Parand' 'Aida Pakniyat']" ]
null
null
2402.10665
null
null
http://arxiv.org/pdf/2402.10665v2
2024-05-07T01:05:14Z
2024-02-16T13:14:12Z
Selective Prediction for Semantic Segmentation using Post-Hoc Confidence Estimation and Its Performance under Distribution Shift
Semantic segmentation plays a crucial role in various computer vision applications, yet its efficacy is often hindered by the lack of high-quality labeled data. To address this challenge, a common strategy is to leverage models trained on data from different populations, such as publicly available datasets. This approach, however, leads to the distribution shift problem, presenting a reduced performance on the population of interest. In scenarios where model errors can have significant consequences, selective prediction methods offer a means to mitigate risks and reduce reliance on expert supervision. This paper investigates selective prediction for semantic segmentation in low-resource settings, thus focusing on post-hoc confidence estimators applied to pre-trained models operating under distribution shift. We propose a novel image-level confidence measure tailored for semantic segmentation and demonstrate its effectiveness through experiments on three medical imaging tasks. Our findings show that post-hoc confidence estimators offer a cost-effective approach to reducing the impacts of distribution shift.
[ "['Bruno Laboissiere Camargos Borges' 'Bruno Machado Pacheco'\n 'Danilo Silva']" ]
null
null
2402.10677
null
null
http://arxiv.org/pdf/2402.10677v1
2024-02-16T13:31:43Z
2024-02-16T13:31:43Z
Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model
We study the estimation of a planted signal hidden in a recently introduced nested matrix-tensor model, which is an extension of the classical spiked rank-one tensor model, motivated by multi-view clustering. Prior work has theoretically examined the performance of a tensor-based approach, which relies on finding a best rank-one approximation, a problem known to be computationally hard. A tractable alternative approach consists in computing instead the best rank-one (matrix) approximation of an unfolding of the observed tensor data, but its performance was hitherto unknown. We quantify here the performance gap between these two approaches, in particular by deriving the precise algorithmic threshold of the unfolding approach and demonstrating that it exhibits a BBP-type transition behavior. This work is therefore in line with recent contributions which deepen our understanding of why tensor-based methods surpass matrix-based methods in handling structured tensor data.
[ "['Hugo Lebeau' 'Mohamed El Amine Seddik' 'José Henrique de Morais Goulart']" ]
null
null
2402.10681
null
null
http://arxiv.org/pdf/2402.10681v1
2024-02-16T13:34:51Z
2024-02-16T13:34:51Z
Physics-informed MeshGraphNets (PI-MGNs): Neural finite element solvers for non-stationary and nonlinear simulations on arbitrary meshes
Engineering components must meet increasing technological demands in ever shorter development cycles. To face these challenges, a holistic approach is essential that allows for the concurrent development of part design, material system and manufacturing process. Current approaches employ numerical simulations, which, however, quickly become computation-intensive, especially for iterative optimization. Data-driven machine learning methods can be used to replace time- and resource-intensive numerical simulations. In particular, MeshGraphNets (MGNs) have shown promising results. They enable fast and accurate predictions on unseen mesh geometries while being fully differentiable for optimization. However, these models rely on large amounts of expensive training data, such as numerical simulations. Physics-informed neural networks (PINNs) offer an opportunity to train neural networks with partial differential equations instead of labeled data, but have not yet been extended to handle time-dependent simulations of arbitrary meshes. This work introduces PI-MGNs, a hybrid approach that combines PINNs and MGNs to quickly and accurately solve non-stationary and nonlinear partial differential equations (PDEs) on arbitrary meshes. The method is exemplified for thermal process simulations of unseen parts with inhomogeneous material distribution. Further results show that the model scales well to large and complex meshes, although it is trained on small generic meshes only.
[ "['Tobias Würth' 'Niklas Freymuth' 'Clemens Zimmerling' 'Gerhard Neumann'\n 'Luise Kärger']" ]
null
null
2402.10686
null
null
http://arxiv.org/pdf/2402.10686v1
2024-02-16T13:41:18Z
2024-02-16T13:41:18Z
Uncertainty, Calibration, and Membership Inference Attacks: An Information-Theoretic Perspective
In a membership inference attack (MIA), an attacker exploits the overconfidence exhibited by typical machine learning models to determine whether a specific data point was used to train a target model. In this paper, we analyze the performance of the state-of-the-art likelihood ratio attack (LiRA) within an information-theoretical framework that allows the investigation of the impact of the aleatoric uncertainty in the true data generation process, of the epistemic uncertainty caused by a limited training data set, and of the calibration level of the target model. We compare three different settings, in which the attacker receives decreasingly informative feedback from the target model: confidence vector (CV) disclosure, in which the output probability vector is released; true label confidence (TLC) disclosure, in which only the probability assigned to the true label is made available by the model; and decision set (DS) disclosure, in which an adaptive prediction set is produced as in conformal prediction. We derive bounds on the advantage of an MIA adversary with the aim of offering insights into the impact of uncertainty and calibration on the effectiveness of MIAs. Simulation results demonstrate that the derived analytical bounds predict well the effectiveness of MIAs.
[ "['Meiyi Zhu' 'Caili Guo' 'Chunyan Feng' 'Osvaldo Simeone']" ]
null
null
2402.10693
null
null
http://arxiv.org/pdf/2402.10693v3
2024-06-04T11:33:27Z
2024-02-16T13:53:26Z
Exploring Precision and Recall to assess the quality and diversity of LLMs
We introduce a novel evaluation framework for Large Language Models (LLMs) such as \textsc{Llama-2} and \textsc{Mistral}, focusing on importing Precision and Recall metrics from image generation to text generation. This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora. By conducting a comprehensive evaluation of state-of-the-art language models, the study reveals new insights into their performance on open-ended generation tasks, which are not adequately captured by traditional benchmarks. The findings highlight a trade-off between the quality and diversity of generated samples, particularly when models are fine-tuned on instruction datasets or with human feedback. This work extends the toolkit for distribution-based NLP evaluation, offering insights into the practical capabilities and challenges that current LLMs face in generating diverse and high-quality text. We release our code and data.
[ "['Florian Le Bronnec' 'Alexandre Verine' 'Benjamin Negrevergne'\n 'Yann Chevaleyre' 'Alexandre Allauzen']" ]
null
null
2402.10695
null
null
http://arxiv.org/pdf/2402.10695v2
2024-03-11T17:08:36Z
2024-02-16T13:58:23Z
Unlink to Unlearn: Simplifying Edge Unlearning in GNNs
As concerns over data privacy intensify, unlearning in Graph Neural Networks (GNNs) has emerged as a prominent research frontier in academia. This concept is pivotal in enforcing the \textit{right to be forgotten}, which entails the selective removal of specific data from trained GNNs upon user request. Our research focuses on edge unlearning, a process of particular relevance to real-world applications. Current state-of-the-art approaches like GNNDelete can eliminate the influence of specific edges yet suffer from \textit{over-forgetting}, which means the unlearning process inadvertently removes excessive information beyond what is needed, leading to a significant performance decline for remaining edges. Our analysis identifies the loss functions of GNNDelete as the primary source of over-forgetting and also suggests that loss functions may be redundant for effective edge unlearning. Building on these insights, we simplify GNNDelete to develop \textbf{Unlink to Unlearn} (UtU), a novel method that facilitates unlearning exclusively through unlinking the forget edges from the graph structure. Our extensive experiments demonstrate that UtU delivers privacy protection on par with that of a retrained model while preserving high accuracy in downstream tasks, upholding over 97.3% of the retrained model's privacy protection capabilities and 99.8% of its link prediction accuracy. Meanwhile, UtU requires only constant computational demands, underscoring its advantage as a highly lightweight and practical edge unlearning solution.
[ "['Jiajun Tan' 'Fei Sun' 'Ruichen Qiu' 'Du Su' 'Huawei Shen']" ]
null
null
2402.10723
null
null
http://arxiv.org/pdf/2402.10723v1
2024-02-16T14:30:12Z
2024-02-16T14:30:12Z
Conformalized Credal Set Predictors
Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution. In machine learning, they have recently attracted attention as an appealing formalism for uncertainty representation, in particular due to their ability to represent both the aleatoric and epistemic uncertainty in a prediction. However, the design of methods for learning credal set predictors remains a challenging problem. In this paper, we make use of conformal prediction for this purpose. More specifically, we propose a method for predicting credal sets in the classification task, given training data labeled by probability distributions. Since our method inherits the coverage guarantees of conformal prediction, our conformal credal sets are guaranteed to be valid with high probability (without any assumptions on model or distribution). We demonstrate the applicability of our method to natural language inference, a highly ambiguous natural language task where it is common to obtain multiple annotations per example.
[ "['Alireza Javanmardi' 'David Stutz' 'Eyke Hüllermeier']" ]
null
null
2402.10724
null
null
http://arxiv.org/pdf/2402.10724v1
2024-02-16T14:30:46Z
2024-02-16T14:30:46Z
Machine Learning based Prediction of Ditching Loads
We present approaches to predict dynamic ditching loads on aircraft fuselages using machine learning. The employed learning procedure is structured into two parts, the reconstruction of the spatial loads using a convolutional autoencoder (CAE) and the transient evolution of these loads in a subsequent part. Different CAE strategies are assessed and combined with either long short-term memory (LSTM) networks or Koopman-operator based methods to predict the transient behaviour. The training data is compiled by an extension of the momentum method of von Karman and Wagner, and the rationale of the training approach is briefly summarised. The application considered refers to a full-scale fuselage of a DLR-D150 aircraft for a range of horizontal and vertical approach velocities at 6° incidence. Results indicate a satisfactory level of predictive agreement for all four surrogate models examined, with the combination of an LSTM and a deep decoder CAE showing the best performance.
[ "['Henning Schwarz' 'Micha Überrück' 'Jens-Peter M. Zemke' 'Thomas Rung']" ]
null
null
2402.10727
null
null
http://arxiv.org/pdf/2402.10727v2
2024-06-06T15:52:17Z
2024-02-16T14:40:22Z
Predictive Uncertainty Quantification via Risk Decompositions for Strictly Proper Scoring Rules
Uncertainty quantification in predictive modeling often relies on ad hoc methods as there is no universally accepted formal framework for that. This paper introduces a theoretical approach to understanding uncertainty through statistical risks, distinguishing between aleatoric (data-related) and epistemic (model-related) uncertainties. We explain how to split pointwise risk into Bayes risk and excess risk. In particular, we show that excess risk, related to epistemic uncertainty, aligns with Bregman divergences. To turn considered risk measures into actual uncertainty estimates, we suggest using the Bayesian approach by approximating the risks with the help of posterior distributions. We tested our method on image datasets, evaluating its performance in detecting out-of-distribution and misclassified data using the AUROC metric. Our results confirm the effectiveness of the considered approach and offer practical guidance for estimating uncertainty in real-world applications.
[ "['Nikita Kotelevskii' 'Maxim Panov']" ]
null
null
2402.10747
null
null
http://arxiv.org/pdf/2402.10747v1
2024-02-16T15:13:30Z
2024-02-16T15:13:30Z
Fully Differentiable Lagrangian Convolutional Neural Network for Continuity-Consistent Physics-Informed Precipitation Nowcasting
This paper presents a convolutional neural network model for precipitation nowcasting that combines data-driven learning with physics-informed domain knowledge. We propose LUPIN, a Lagrangian Double U-Net for Physics-Informed Nowcasting, that draws from existing extrapolation-based nowcasting methods and implements the Lagrangian coordinate system transformation of the data in a fully differentiable and GPU-accelerated manner to allow for real-time end-to-end training and inference. Based on our evaluation, LUPIN matches and exceeds the performance of the chosen benchmark, opening the door for other Lagrangian machine learning models.
[ "['Peter Pavlík' 'Martin Výboh' 'Anna Bou Ezzeddine' 'Viera Rozinajová']" ]
null
null
2402.10748
null
null
http://arxiv.org/abs/2402.10748v2
2024-06-21T15:55:13Z
2024-02-16T15:14:16Z
A Tiny Transformer for Low-Power Arrhythmia Classification on Microcontrollers
Wearable systems for the continuous and real-time monitoring of cardiovascular diseases are becoming widespread and valuable assets in diagnosis and therapy. A promising approach for real-time analysis of the electrocardiographic (ECG) signal and the detection of heart conditions, such as arrhythmia, is represented by the transformer machine learning model. Transformers are powerful models for the classification of time series, although efficient implementation in the wearable domain raises significant design challenges, to combine adequate accuracy and a suitable complexity. In this work, we present a tiny transformer model for the analysis of the ECG signal, requiring only 6k parameters and reaching 98.97% accuracy in the recognition of the 5 most common arrhythmia classes from the MIT-BIH Arrhythmia database, assessed considering 8-bit integer inference as required for efficient execution on low-power microcontroller-based devices. We explored an augmentation-based training approach for improving the robustness against electrode motion artifacts noise, resulting in a worst-case post-deployment performance assessment of 98.36% accuracy. Suitability for wearable monitoring solutions is finally demonstrated through efficient deployment on the parallel ultra-low-power GAP9 processor, where inference execution requires 4.28ms and 0.09mJ.
[ "['Paola Busia' 'Matteo Antonio Scrugli' 'Victor Jean-Baptiste Jung'\n 'Luca Benini' 'Paolo Meloni']" ]
null
null
2402.10754
null
null
http://arxiv.org/pdf/2402.10754v1
2024-02-16T15:21:35Z
2024-02-16T15:21:35Z
When Dataflow Analysis Meets Large Language Models
Dataflow analysis is a powerful code analysis technique that reasons about dependencies between program values, offering support for code optimization, program comprehension, and bug detection. Existing approaches require the successful compilation of the subject program and customizations for downstream applications. This paper introduces LLMDFA, an LLM-powered dataflow analysis framework that analyzes arbitrary code snippets without requiring a compilation infrastructure and automatically synthesizes downstream applications. Inspired by summary-based dataflow analysis, LLMDFA decomposes the problem into three sub-problems, which are effectively resolved by several essential strategies, including few-shot chain-of-thought prompting and tool synthesis. Our evaluation has shown that the design can mitigate hallucination and improve reasoning ability, obtaining high precision and recall in detecting dataflow-related bugs on benchmark programs, outperforming state-of-the-art (classic) tools, including a very recent industrial analyzer.
[ "['Chengpeng Wang' 'Wuqi Zhang' 'Zian Su' 'Xiangzhe Xu' 'Xiaoheng Xie'\n 'Xiangyu Zhang']" ]
null
null
2402.10756
null
null
http://arxiv.org/pdf/2402.10756v1
2024-02-16T15:25:56Z
2024-02-16T15:25:56Z
Towards Cohesion-Fairness Harmony: Contrastive Regularization in Individual Fair Graph Clustering
Conventional fair graph clustering methods face two primary challenges: i) They prioritize balanced clusters at the expense of cluster cohesion by imposing rigid constraints, ii) Existing methods of both individual and group-level fairness in graph partitioning mostly rely on eigen decompositions and thus, generally lack interpretability. To address these issues, we propose iFairNMTF, an individual Fairness Nonnegative Matrix Tri-Factorization model with contrastive fairness regularization that achieves balanced and cohesive clusters. By introducing fairness regularization, our model allows for customizable accuracy-fairness trade-offs, thereby enhancing user autonomy without compromising the interpretability provided by nonnegative matrix tri-factorization. Experimental evaluations on real and synthetic datasets demonstrate the superior flexibility of iFairNMTF in achieving fairness and clustering performance.
[ "['Siamak Ghodsi' 'Seyed Amjad Seyedi' 'Eirini Ntoutsi']" ]
null
null
2402.10758
null
null
http://arxiv.org/pdf/2402.10758v2
2024-05-28T12:05:08Z
2024-02-16T15:28:41Z
Stochastic Localization via Iterative Posterior Sampling
Building upon score-based learning, new interest in stochastic localization techniques has recently emerged. In these models, one seeks to noise a sample from the data distribution through a stochastic process, called observation process, and progressively learns a denoiser associated to this dynamics. Apart from specific applications, the use of stochastic localization for the problem of sampling from an unnormalized target density has not been explored extensively. This work contributes to filling this gap. We consider a general stochastic localization framework and introduce an explicit class of observation processes, associated with flexible denoising schedules. We provide a complete methodology, $\textit{Stochastic Localization via Iterative Posterior Sampling}$ (SLIPS), to obtain approximate samples of this dynamics, and as a by-product, samples from the target distribution. Our scheme is based on a Markov chain Monte Carlo estimation of the denoiser and comes with detailed practical guidelines. We illustrate the benefits and applicability of SLIPS on several benchmarks of multi-modal distributions, including Gaussian mixtures in increasing dimensions, Bayesian logistic regression and a high-dimensional field system from statistical-mechanics.
[ "['Louis Grenioux' 'Maxence Noble' 'Marylou Gabrié' 'Alain Oliviero Durmus']" ]
null
null
2402.10760
null
null
http://arxiv.org/pdf/2402.10760v1
2024-02-16T15:34:07Z
2024-02-16T15:34:07Z
RAGIC: Risk-Aware Generative Adversarial Model for Stock Interval Construction
Efforts to predict stock market outcomes have yielded limited success due to the inherently stochastic nature of the market, influenced by numerous unpredictable factors. Many existing prediction approaches focus on single-point predictions, lacking the depth needed for effective decision-making and often overlooking market risk. To bridge this gap, we propose a novel model, RAGIC, which introduces sequence generation for stock interval prediction to quantify uncertainty more effectively. Our approach leverages a Generative Adversarial Network (GAN) to produce future price sequences infused with randomness inherent in financial markets. RAGIC's generator includes a risk module, capturing the risk perception of informed investors, and a temporal module, accounting for historical price trends and seasonality. This multi-faceted generator informs the creation of risk-sensitive intervals through statistical inference, incorporating horizon-wise insights. The interval's width is carefully adjusted to reflect market volatility. Importantly, our approach relies solely on publicly available data and incurs only low computational overhead. RAGIC's evaluation across globally recognized broad-based indices demonstrates its balanced performance, offering both accuracy and informativeness. Achieving a consistent 95% coverage, RAGIC maintains a narrow interval width. This promising outcome suggests that our approach effectively addresses the challenges of stock market prediction while incorporating vital risk considerations.
[ "['Jingyi Gu' 'Wenlu Du' 'Guiling Wang']" ]
null
null
2402.10765
null
null
http://arxiv.org/pdf/2402.10765v1
2024-02-16T15:39:51Z
2024-02-16T15:39:51Z
Policy Learning for Off-Dynamics RL with Deficient Support
Reinforcement Learning (RL) can effectively learn complex policies. However, learning these policies often demands extensive trial-and-error interactions with the environment. In many real-world scenarios, this approach is not practical due to the high costs of data collection and safety concerns. As a result, a common strategy is to transfer a policy trained in a low-cost, rapid source simulator to a real-world target environment. However, this process poses challenges. Simulators, no matter how advanced, cannot perfectly replicate the intricacies of the real world, leading to dynamics discrepancies between the source and target environments. Past research posited that the source domain must encompass all possible target transitions, a condition we term full support. However, expecting full support is often unrealistic, especially in scenarios where significant dynamics discrepancies arise. In this paper, our emphasis shifts to addressing large dynamics mismatch adaptation. We move away from the stringent full support condition of earlier research, focusing instead on crafting an effective policy for the target domain. Our proposed approach is simple but effective. It is anchored in the central concepts of the skewing and extension of source support towards target support to mitigate support deficiencies. Through comprehensive testing on a varied set of benchmarks, our method's efficacy stands out, showcasing notable improvements over previous techniques.
[ "['Linh Le Pham Van' 'Hung The Tran' 'Sunil Gupta']" ]
null
null
2402.10774
null
null
http://arxiv.org/pdf/2402.10774v1
2024-02-16T15:55:59Z
2024-02-16T15:55:59Z
Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants
Error Feedback (EF) is a highly popular and immensely effective mechanism for fixing convergence issues which arise in distributed training methods (such as distributed GD or SGD) when these are enhanced with greedy communication compression techniques such as TopK. While EF was proposed almost a decade ago (Seide et al., 2014), and despite concentrated effort by the community to advance the theoretical understanding of this mechanism, there is still a lot to explore. In this work we study a modern form of error feedback called EF21 (Richtarik et al., 2021) which offers the currently best-known theoretical guarantees, under the weakest assumptions, and also works well in practice. In particular, while the theoretical communication complexity of EF21 depends on the quadratic mean of certain smoothness parameters, we improve this dependence to their arithmetic mean, which is always smaller, and can be substantially smaller, especially in heterogeneous data regimes. We take the reader on a journey of our discovery process. Starting with the idea of applying EF21 to an equivalent reformulation of the underlying problem which (unfortunately) requires (often impractical) machine cloning, we continue to the discovery of a new weighted version of EF21 which can (fortunately) be executed without any cloning, and finally circle back to an improved analysis of the original EF21 method. While this development applies to the simplest form of EF21, our approach naturally extends to more elaborate variants involving stochastic gradients and partial participation. Further, our technique improves the best-known theory of EF21 in the rare features regime (Richtarik et al., 2023). Finally, we validate our theoretical findings with suitable experiments.
[ "['Peter Richtárik' 'Elnur Gasanov' 'Konstantin Burlachenko']" ]
null
null
2402.10787
null
null
http://arxiv.org/pdf/2402.10787v1
2024-02-16T16:10:38Z
2024-02-16T16:10:38Z
EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge
Despite the remarkable strides of Large Language Models (LLMs) in various fields, the wide applications of LLMs on edge devices are limited due to their massive parameters and computations. To address this, quantization is commonly adopted to generate lightweight LLMs with efficient computations and fast inference. However, Post-Training Quantization (PTQ) methods dramatically degrade in quality when quantizing weights, activations, and KV cache together to below 8 bits. Besides, many Quantization-Aware Training (QAT) works quantize model weights, leaving the activations untouched, which do not fully exploit the potential of quantization for inference acceleration on the edge. In this paper, we propose EdgeQAT, the Entropy and Distribution Guided QAT for the optimization of lightweight LLMs to achieve inference acceleration on Edge devices. We first identify that the performance drop of quantization primarily stems from the information distortion in quantized attention maps, demonstrated by the different distributions in quantized query and key of the self-attention mechanism. Then, the entropy and distribution guided QAT is proposed to mitigate the information distortion. Moreover, we design a token importance-aware adaptive method to dynamically quantize the tokens with different bit widths for further optimization and acceleration. Our extensive experiments verify the substantial improvements with our framework across various datasets. Furthermore, we achieve an on-device speedup of up to 2.37x compared with its FP16 counterparts across multiple edge devices, signaling a groundbreaking advancement.
[ "['Xuan Shen' 'Zhenglun Kong' 'Changdi Yang' 'Zhaoyang Han' 'Lei Lu'\n 'Peiyan Dong' 'Cheng Lyu' 'Chih-hsiang Li' 'Xuehang Guo' 'Zhihao Shu'\n 'Wei Niu' 'Miriam Leeser' 'Pu Zhao' 'Yanzhi Wang']" ]
null
null
2402.10790
null
null
http://arxiv.org/pdf/2402.10790v2
2024-02-21T03:07:42Z
2024-02-16T16:15:01Z
In Search of Needles in a 11M Haystack: Recurrent Memory Finds What LLMs Miss
This paper addresses the challenge of processing long documents using generative transformer models. To evaluate different approaches, we introduce BABILong, a new benchmark designed to assess model capabilities in extracting and processing distributed facts within extensive texts. Our evaluation, which includes benchmarks for GPT-4 and RAG, reveals that common methods are effective only for sequences up to $10^4$ elements. In contrast, fine-tuning GPT-2 with recurrent memory augmentations enables it to handle tasks involving up to $11\times 10^6$ elements. This achievement marks a substantial leap, as it is by far the longest input processed by any neural network model to date, demonstrating a significant improvement in the processing capabilities for long sequences.
[ "['Yuri Kuratov' 'Aydar Bulatov' 'Petr Anokhin' 'Dmitry Sorokin'\n 'Artyom Sorokin' 'Mikhail Burtsev']" ]
null
null
2402.10793
null
null
http://arxiv.org/pdf/2402.10793v1
2024-02-16T16:20:11Z
2024-02-16T16:20:11Z
Masked Attention is All You Need for Graphs
Graph neural networks (GNNs) and variations of the message passing algorithm are the predominant means for learning on graphs, largely due to their flexibility, speed, and satisfactory performance. The design of powerful and general purpose GNNs, however, requires significant research efforts and often relies on handcrafted, carefully-chosen message passing operators. Motivated by this, we propose a remarkably simple alternative for learning on graphs that relies exclusively on attention. Graphs are represented as node or edge sets and their connectivity is enforced by masking the attention weight matrix, effectively creating custom attention patterns for each graph. Despite its simplicity, masked attention for graphs (MAG) has state-of-the-art performance on long-range tasks and outperforms strong message passing baselines and much more involved attention-based methods on over 55 node and graph-level tasks. We also show significantly better transfer learning capabilities compared to GNNs and comparable or better time and memory scaling. MAG has sub-linear memory scaling in the number of nodes or edges, enabling learning on dense graphs and future-proofing the approach.
[ "['David Buterez' 'Jon Paul Janet' 'Dino Oglic' 'Pietro Lio']" ]
null
null
2402.10795
null
null
http://arxiv.org/pdf/2402.10795v1
2024-02-16T16:20:43Z
2024-02-16T16:20:43Z
Diversified Ensembling: An Experiment in Crowdsourced Machine Learning
Crowdsourced machine learning on competition platforms such as Kaggle is a popular and often effective method for generating accurate models. Typically, teams vie for the most accurate model, as measured by overall error on a holdout set, and it is common towards the end of such competitions for teams at the top of the leaderboard to ensemble or average their models outside the platform mechanism to get the final, best global model. In arXiv:2201.10408, the authors developed an alternative crowdsourcing framework in the context of fair machine learning, in order to integrate community feedback into models when subgroup unfairness is present and identifiable. There, unlike in classical crowdsourced ML, participants deliberately specialize their efforts by working on subproblems, such as demographic subgroups in the service of fairness. Here, we take a broader perspective on this work: we note that within this framework, participants may both specialize in the service of fairness and simply to cater to their particular expertise (e.g., focusing on identifying bird species in an image classification task). Unlike traditional crowdsourcing, this allows for the diversification of participants' efforts and may provide a participation mechanism to a larger range of individuals (e.g. a machine learning novice who has insight into a specific fairness concern). We present the first medium-scale experimental evaluation of this framework, with 46 participating teams attempting to generate models to predict income from American Community Survey data. We provide an empirical analysis of teams' approaches, and discuss the novel system architecture we developed. From here, we give concrete guidance for how best to deploy such a framework.
[ "['Ira Globus-Harris' 'Declan Harrison' 'Michael Kearns' 'Pietro Perona'\n 'Aaron Roth']" ]
null
null
2402.10797
null
null
http://arxiv.org/pdf/2402.10797v2
2024-02-22T10:58:50Z
2024-02-16T16:21:02Z
BlackJAX: Composable Bayesian inference in JAX
BlackJAX is a library implementing sampling and variational inference algorithms commonly used in Bayesian computation. It is designed for ease of use, speed, and modularity by taking a functional approach to the algorithms' implementation. BlackJAX is written in Python, using JAX to compile and run NumPy-like samplers and variational methods on CPUs, GPUs, and TPUs. The library integrates well with probabilistic programming languages by working directly with the (un-normalized) target log density function. BlackJAX is intended as a collection of low-level, composable implementations of basic statistical 'atoms' that can be combined to perform well-defined Bayesian inference, but also provides high-level routines for ease of use. It is designed for users who need cutting-edge methods, researchers who want to create complex sampling methods, and people who want to learn how these work.
[ "['Alberto Cabezas' 'Adrien Corenflos' 'Junpeng Lao' 'Rémi Louf'\n 'Antoine Carnec' 'Kaustubh Chaudhari' 'Reuben Cohn-Gordon'\n 'Jeremie Coullon' 'Wei Deng' 'Sam Duffield' 'Gerardo Durán-Martín'\n 'Marcin Elantkowski' 'Dan Foreman-Mackey' 'Michele Gregori'\n 'Carlos Iguaran' 'Ravin Kumar' 'Martin Lysy' 'Kevin Murphy'\n 'Juan Camilo Orduz' 'Karm Patel' 'Xi Wang' 'Rob Zinkov']" ]
null
null
2402.10802
null
null
http://arxiv.org/pdf/2402.10802v2
2024-02-26T14:13:52Z
2024-02-16T16:25:20Z
TimeSeriesBench: An Industrial-Grade Benchmark for Time Series Anomaly Detection Models
Driven by the proliferation of real-world application scenarios and scales, time series anomaly detection (TSAD) has attracted considerable scholarly and industrial interest. However, existing algorithms exhibit a gap in terms of training paradigm, online detection paradigm, and evaluation criteria when compared to the actual needs of real-world industrial systems. Firstly, current algorithms typically train a specific model for each individual time series. In a large-scale online system with tens of thousands of curves, maintaining such a multitude of models is impractical. The performance of using merely one single unified model to detect anomalies remains unknown. Secondly, most TSAD models are trained on the historical part of a time series and are tested on its future segment. In distributed systems, however, there are frequent system deployments and upgrades, with new, previously unseen time series emerging daily. The performance of testing newly incoming unseen time series on current TSAD algorithms remains unknown. Lastly, although some papers have conducted detailed surveys, the absence of an online evaluation platform prevents answering questions like "Who is the best at anomaly detection at the current stage?" In this paper, we propose TimeSeriesBench, an industrial-grade benchmark that we continuously maintain as a leaderboard. On this leaderboard, we assess the performance of existing algorithms across more than 168 evaluation settings combining different training and testing paradigms, evaluation metrics and datasets. Through our comprehensive analysis of the results, we provide recommendations for the future design of anomaly detection algorithms. To address known issues with existing public datasets, we release an industrial dataset to the public together with TimeSeriesBench. All code, data, and the online leaderboard have been made publicly available.
[ "['Haotian Si' 'Changhua Pei' 'Hang Cui' 'Jingwen Yang' 'Yongqian Sun'\n 'Shenglin Zhang' 'Jingjing Li' 'Haiming Zhang' 'Jing Han' 'Dan Pei'\n 'Jianhui Li' 'Gaogang Xie']" ]
null
null
2402.10810
null
null
http://arxiv.org/pdf/2402.10810v1
2024-02-16T16:35:18Z
2024-02-16T16:35:18Z
Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure, subject to a convex constraint. Designing algorithms for a constrained convex MDP faces several challenges, including (1) handling the large state space, (2) managing the exploration/exploitation tradeoff, and (3) solving the constrained optimization where the objective and the constraint are both nonlinear functions of the visitation measure. In this work, we present a model-based algorithm, Variational Primal-Dual Policy Optimization (VPDPO), in which Lagrangian and Fenchel duality are implemented to reformulate the original constrained problem into an unconstrained primal-dual optimization. Moreover, the primal variables are updated by model-based value iteration following the principle of Optimism in the Face of Uncertainty (OFU), while the dual variables are updated by gradient ascent. Moreover, by embedding the visitation measure into a finite-dimensional space, we can handle large state spaces by incorporating function approximation. Two notable examples are (1) Kernelized Nonlinear Regulators and (2) Low-rank MDPs. We prove that with an optimistic planning oracle, our algorithm achieves sublinear regret and constraint violation in both cases and can attain the globally optimal policy of the original constrained problem.
[ "['Zihao Li' 'Boyi Liu' 'Zhuoran Yang' 'Zhaoran Wang' 'Mengdi Wang']" ]
null
null
2402.10814
null
null
http://arxiv.org/pdf/2402.10814v1
2024-02-16T16:37:48Z
2024-02-16T16:37:48Z
Associative Memories in the Feature Space
An autoassociative memory model is a function that, given a set of data points, takes as input an arbitrary vector and outputs the most similar data point from the memorized set. However, popular memory models fail to retrieve images even when the corruption is mild and easy to detect for a human evaluator. This is because similarities are evaluated in the raw pixel space, which does not contain any semantic information about the images. This problem can be easily solved by computing \emph{similarities} in an embedding space instead of the pixel space. We show that an effective way of computing such embeddings is via a network pretrained with a contrastive loss. As the dimension of embedding spaces is often significantly smaller than the pixel space, we also have a faster computation of similarity scores. We test this method on complex datasets such as CIFAR10 and STL10. An additional drawback of current models is the need of storing the whole dataset in the pixel space, which is often extremely large. We relax this condition and propose a class of memory models that only stores low-dimensional semantic embeddings, and uses them to retrieve similar, but not identical, memories. We demonstrate a proof of concept of this method on a simple task on the MNIST dataset.
[ "['Tommaso Salvatori' 'Beren Millidge' 'Yuhang Song' 'Rafal Bogacz'\n 'Thomas Lukasiewicz']" ]
null
null
2402.10816
null
null
http://arxiv.org/pdf/2402.10816v1
2024-02-16T16:41:14Z
2024-02-16T16:41:14Z
TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data
Distributed training of deep neural networks faces three critical challenges: privacy preservation, communication efficiency, and robustness to fault and adversarial behaviors. Although significant research efforts have been devoted to addressing these challenges independently, their synthesis remains less explored. In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously. We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm. Particularly, in terms of privacy guarantees, compared to the existing sign-based approach StoSign, the proposed method improves the dimension dependence on the gradient size and enjoys privacy amplification by mini-batch sampling while ensuring a comparable convergence rate. We also prove that TernaryVote is robust when less than 50% of workers are blind attackers, which matches that of SIGNSGD with majority vote. Extensive experimental results validate the effectiveness of the proposed algorithm.
[ "['Richeng Jin' 'Yujie Gu' 'Kai Yue' 'Xiaofan He' 'Zhaoyang Zhang'\n 'Huaiyu Dai']" ]
null
null
2402.10818
null
null
http://arxiv.org/pdf/2402.10818v1
2024-02-16T16:42:09Z
2024-02-16T16:42:09Z
Trading off Consistency and Dimensionality of Convex Surrogates for the Mode
In multiclass classification over $n$ outcomes, the outcomes must be embedded into the reals with dimension at least $n-1$ in order to design a consistent surrogate loss that leads to the "correct" classification, regardless of the data distribution. For large $n$, such as in information retrieval and structured prediction tasks, optimizing a surrogate in $n-1$ dimensions is often intractable. We investigate ways to trade off surrogate loss dimension, the number of problem instances, and restricting the region of consistency in the simplex for multiclass classification. Following past work, we examine an intuitive embedding procedure that maps outcomes into the vertices of convex polytopes in a low-dimensional surrogate space. We show that full-dimensional subsets of the simplex exist around each point mass distribution for which consistency holds, but also, with less than $n-1$ dimensions, there exist distributions for which a phenomenon called hallucination occurs, which is when the optimal report under the surrogate loss is an outcome with zero probability. Looking towards application, we derive a result to check if consistency holds under a given polytope embedding and low-noise assumption, providing insight into when to use a particular embedding. We provide examples of embedding $n = 2^{d}$ outcomes into the $d$-dimensional unit cube and $n = d!$ outcomes into the $d$-dimensional permutahedron under low-noise assumptions. Finally, we demonstrate that with multiple problem instances, we can learn the mode with $\frac{n}{2}$ dimensions over the whole simplex.
[ "['Enrique Nueve' 'Bo Waggoner' 'Dhamma Kimpara' 'Jessie Finocchiaro']" ]
null
null
2402.10820
null
null
http://arxiv.org/pdf/2402.10820v2
2024-06-08T14:56:23Z
2024-02-16T16:46:53Z
Learning Goal-Conditioned Policies from Sub-Optimal Offline Data via Metric Learning
We address the problem of learning optimal behavior from sub-optimal datasets for goal-conditioned offline reinforcement learning. To do so, we propose the use of metric learning to approximate the optimal value function for goal-conditioned offline RL problems under sparse rewards, invertible actions and deterministic transitions. We introduce distance monotonicity, a property for representations to recover optimality and propose an optimization objective that leads to such property. We use the proposed value function to guide the learning of a policy in an actor-critic fashion, a method we name MetricRL. Experimentally, we show that our method estimates optimal behaviors from severely sub-optimal offline datasets without suffering from out-of-distribution estimation errors. We demonstrate that MetricRL consistently outperforms prior state-of-the-art goal-conditioned RL methods in learning optimal policies from sub-optimal offline datasets.
[ "['Alfredo Reichlin' 'Miguel Vasco' 'Hang Yin' 'Danica Kragic']" ]
null
null
2402.10831
null
null
http://arxiv.org/pdf/2402.10831v1
2024-02-16T17:03:08Z
2024-02-16T17:03:08Z
GAN-driven Electromagnetic Imaging of 2-D Dielectric Scatterers
Inverse scattering problems are inherently challenging, given the fact they are ill-posed and nonlinear. This paper presents a powerful deep learning-based approach that relies on generative adversarial networks to accurately and efficiently reconstruct randomly-shaped two-dimensional dielectric objects from amplitudes of multi-frequency scattered electric fields. An adversarial autoencoder (AAE) is trained to learn to generate the scatterer's geometry from a lower-dimensional latent representation constrained to adhere to the Gaussian distribution. A cohesive inverse neural network (INN) framework is set up comprising a sequence of appropriately designed dense layers, the already-trained generator as well as a separately trained forward neural network. The images reconstructed at the output of the inverse network are validated through comparison with outputs from the forward neural network, addressing the non-uniqueness challenge inherent to electromagnetic (EM) imaging problems. The trained INN demonstrates an enhanced robustness, evidenced by a mean binary cross-entropy (BCE) loss of $0.13$ and a structure similarity index (SSI) of $0.90$. The study not only demonstrates a significant reduction in computational load, but also marks a substantial improvement over traditional objective-function-based methods. It contributes both to the fields of machine learning and EM imaging by offering a real-time quantitative imaging approach. The results obtained with the simulated data, for both training and testing, yield promising results and may open new avenues for radio-frequency inverse imaging.
[ "['Ehtasham Naseer' 'Ali Imran Sandhu' 'Muhammad Adnan Siddique'\n 'Waqas W. Ahmed' 'Mohamed Farhat' 'Ying Wu']" ]
null
null
2402.10846
null
null
http://arxiv.org/pdf/2402.10846v1
2024-02-16T17:36:51Z
2024-02-16T17:36:51Z
FedD2S: Personalized Data-Free Federated Knowledge Distillation
This paper addresses the challenge of mitigating data heterogeneity among clients within a Federated Learning (FL) framework. The model-drift issue, arising from the non-IID nature of client data, often results in suboptimal personalization of a global model compared to locally trained models for each client. To tackle this challenge, we propose a novel approach named FedD2S for Personalized Federated Learning (pFL), leveraging knowledge distillation. FedD2S incorporates a deep-to-shallow layer-dropping mechanism in the data-free knowledge distillation process to enhance local model personalization. Through extensive simulations on diverse image datasets-FEMNIST, CIFAR10, CINIC10, and CIFAR100-we compare FedD2S with state-of-the-art FL baselines. The proposed approach demonstrates superior performance, characterized by accelerated convergence and improved fairness among clients. The introduced layer-dropping technique effectively captures personalized knowledge, resulting in enhanced performance compared to alternative FL models. Moreover, we investigate the impact of key hyperparameters, such as the participation ratio and layer-dropping rate, providing valuable insights into the optimal configuration for FedD2S. The findings demonstrate the efficacy of adaptive layer-dropping in the knowledge distillation process to achieve enhanced personalization and performance across diverse datasets and tasks.
[ "['Kawa Atapour' 'S. Jamal Seyedmohammadi' 'Jamshid Abouei'\n 'Arash Mohammadi' 'Konstantinos N. Plataniotis']" ]
null
null
2402.10851
null
null
http://arxiv.org/pdf/2402.10851v1
2024-02-16T17:44:11Z
2024-02-16T17:44:11Z
HistoSegCap: Capsules for Weakly-Supervised Semantic Segmentation of Histological Tissue Type in Whole Slide Images
Digital pathology involves converting physical tissue slides into high-resolution Whole Slide Images (WSIs), which pathologists analyze for disease-affected tissues. However, large histology slides with numerous microscopic fields pose challenges for visual search. To aid pathologists, Computer Aided Diagnosis (CAD) systems offer visual assistance in efficiently examining WSIs and identifying diagnostically relevant regions. This paper presents a novel histopathological image analysis method employing Weakly Supervised Semantic Segmentation (WSSS) based on Capsule Networks, the first such application. The proposed model is evaluated using the Atlas of Digital Pathology (ADP) dataset and its performance is compared with other histopathological semantic segmentation methodologies. The findings underscore the potential of Capsule Networks in enhancing the precision and efficiency of histopathological image analysis. Experimental results show that the proposed model outperforms traditional methods in terms of accuracy and the mean Intersection-over-Union (mIoU) metric.
[ "['Mobina Mansoori' 'Sajjad Shahabodini' 'Jamshid Abouei' 'Arash Mohammadi'\n 'Konstantinos N. Plataniotis']" ]
null
null
2402.10857
null
null
http://arxiv.org/abs/2402.10857v1
2024-02-16T17:53:08Z
2024-02-16T17:53:08Z
JetTrain: IDE-Native Machine Learning Experiments
Integrated development environments (IDEs) are prevalent code-writing and debugging tools. However, they have yet to be widely adopted for launching machine learning (ML) experiments. This work aims to fill this gap by introducing JetTrain, an IDE-integrated tool that delegates specific tasks from an IDE to remote computational resources. A user can write and debug code locally and then seamlessly run it remotely using on-demand hardware. We argue that this approach can lower the entry barrier for ML training problems and increase experiment throughput.
[ "['Artem Trofimov' 'Mikhail Kostyukov' 'Sergei Ugdyzhekov'\n 'Natalia Ponomareva' 'Igor Naumov' 'Maksim Melekhovets']" ]
null
null
2402.10862
null
null
http://arxiv.org/pdf/2402.10862v2
2024-04-29T00:47:30Z
2024-02-16T18:00:04Z
Differential Private Federated Transfer Learning for Mental Health Monitoring in Everyday Settings: A Case Study on Stress Detection
Mental health conditions, prevalent across various demographics, necessitate efficient monitoring to mitigate their adverse impacts on life quality. The surge in data-driven methodologies for mental health monitoring has underscored the importance of privacy-preserving techniques in handling sensitive health data. Despite strides in federated learning for mental health monitoring, existing approaches struggle with vulnerabilities to certain cyber-attacks and data insufficiency in real-world applications. In this paper, we introduce a differential private federated transfer learning framework for mental health monitoring to enhance data privacy and enrich data sufficiency. To accomplish this, we integrate federated learning with two pivotal elements: (1) differential privacy, achieved by introducing noise into the updates, and (2) transfer learning, employing a pre-trained universal model to adeptly address issues of data imbalance and insufficiency. We evaluate the framework by a case study on stress detection, employing a dataset of physiological and contextual data from a longitudinal study. Our findings show that the proposed approach can attain a 10% boost in accuracy and a 21% enhancement in recall, while ensuring privacy protection.
[ "['Ziyu Wang' 'Zhongqi Yang' 'Iman Azimi' 'Amir M. Rahmani']" ]
null
null
2402.10870
null
null
http://arxiv.org/pdf/2402.10870v3
2024-02-26T19:55:34Z
2024-02-16T18:13:35Z
Best of Three Worlds: Adaptive Experimentation for Digital Marketing in Practice
Adaptive experimental design (AED) methods are increasingly being used in industry as a tool to boost testing throughput or reduce experimentation cost relative to traditional A/B/N testing methods. However, the behavior and guarantees of such methods are not well-understood beyond idealized stationary settings. This paper shares lessons learned regarding the challenges of naively using AED systems in industrial settings where non-stationarity is prevalent, while also providing perspectives on the proper objectives and system specifications in such settings. We developed an AED framework for counterfactual inference based on these experiences, and tested it in a commercial environment.
[ "['Tanner Fiez' 'Houssam Nassif' 'Yu-Cheng Chen' 'Sergio Gamez'\n 'Lalit Jain']" ]
null
null
2402.10874
null
null
http://arxiv.org/pdf/2402.10874v1
2024-02-16T18:20:33Z
2024-02-16T18:20:33Z
Design of 2D Skyrmionic Metamaterial Through Controlled Assembly
Despite extensive research on magnetic skyrmions and antiskyrmions, a significant challenge remains in crafting nontrivial high-order skyrmionic textures with varying, or even tailor-made, topologies. We address this challenge, by focusing on a construction pathway of skyrmionics metamaterial within a monolayer thin film and suggest several promising lattice-like, flakes-like, and cell-like skyrmionic metamaterials that are surprisingly stable. Central to our approach is the concept of 'simulated controlled assembly', in short, a protocol inspired by 'click chemistry' that allows for positioning topological magnetic structures where one likes, and then allowing for energy minimization to elucidate the stability. Utilizing high-throughput atomistic-spin-dynamic (ASD) simulations alongside state-of-the-art AI-driven tools, we have isolated skyrmions (topological charge Q=1), antiskyrmions (Q=-1), and skyrmionium (Q=0). These entities serve as foundational 'skyrmionic building blocks' to forming reported intricate textures. In this work, two key contributions are introduced to the field of skyrmionic systems. First, we present a novel method for integrating control assembly protocols for the stabilization and investigation of topological magnets, which marks a significant advancement in the ability to explore new skyrmionic textures. Second, we report on the discovery of skyrmionic metamaterials, which shows a plethora of complex topologies that are possible to investigate theoretically and experimentally.
[ "['Qichen Xu' 'Zhuanglin Shen' 'Alexander Edström' 'I. P. Miranda'\n 'Zhiwei Lu' 'Anders Bergman' 'Danny Thonig' 'Wanjian Yin' 'Olle Eriksson'\n 'Anna Delin']" ]
null
null
2402.10877
null
null
http://arxiv.org/pdf/2402.10877v6
2024-04-15T11:34:52Z
2024-02-16T18:29:19Z
Robust agents learn causal world models
It has long been hypothesised that causal reasoning plays a fundamental role in robust and general intelligence. However, it is not known if agents must learn causal models in order to generalise to new domains, or if other inductive biases are sufficient. We answer this question, showing that any agent capable of satisfying a regret bound under a large set of distributional shifts must have learned an approximate causal model of the data generating process, which converges to the true causal model for optimal agents. We discuss the implications of this result for several research areas including transfer learning and causal inference.
[ "['Jonathan Richens' 'Tom Everitt']" ]
null
null
2402.10884
null
null
http://arxiv.org/pdf/2402.10884v1
2024-02-16T18:42:08Z
2024-02-16T18:42:08Z
Multi-modal preference alignment remedies regression of visual instruction tuning on language model
In production, multi-modal large language models (MLLMs) are expected to support multi-turn queries of interchanging image and text modalities. However, the current MLLMs trained with visual-question-answering (VQA) datasets could suffer from degradation, as VQA datasets lack the diversity and complexity of the original text instruction datasets which the underlying language model had been trained with. To address this challenging degradation, we first collect a lightweight (6k entries) VQA preference dataset where answers were annotated by Gemini for 5 quality metrics in a granular fashion, and investigate standard Supervised Fine-tuning, rejection sampling, Direct Preference Optimization (DPO), and SteerLM. Our findings indicate that with DPO we are able to surpass the instruction-following capabilities of the language model, achieving a 6.73 score on MT-Bench, compared to Vicuna's 6.57 and LLaVA's 5.99 despite the small data scale. This enhancement in textual instruction proficiency correlates with boosted visual instruction performance (+4.9% on MM-Vet, +6% on LLaVA-Bench), with minimal alignment tax on visual knowledge benchmarks compared to the previous RLHF approach. In conclusion, we propose a distillation-based multi-modal alignment model with fine-grained annotations on a small dataset that reconciles the textual and visual performance of MLLMs, restoring and boosting language capability after visual instruction tuning.
[ "['Shengzhi Li' 'Rongyu Lin' 'Shichao Pei']" ]
null
null
2402.10885
null
null
http://arxiv.org/pdf/2402.10885v2
2024-03-11T22:05:00Z
2024-02-16T18:43:02Z
3D Diffuser Actor: Policy Diffusion with 3D Scene Representations
We marry diffusion policies and 3D scene representations for robot manipulation. Diffusion policies learn the action distribution conditioned on the robot and environment state using conditional diffusion models. They have recently shown to outperform both deterministic and alternative state-conditioned action distribution learning methods. 3D robot policies use 3D scene feature representations aggregated from a single or multiple camera views using sensed depth. They have shown to generalize better than their 2D counterparts across camera viewpoints. We unify these two lines of work and present 3D Diffuser Actor, a neural policy architecture that, given a language instruction, builds a 3D representation of the visual scene and conditions on it to iteratively denoise 3D rotations and translations for the robot's end-effector. At each denoising iteration, our model represents end-effector pose estimates as 3D scene tokens and predicts the 3D translation and rotation error for each of them, by featurizing them using 3D relative attention to other 3D visual and language tokens. 3D Diffuser Actor sets a new state-of-the-art on RLBench with an absolute performance gain of 16.3% over the current SOTA on a multi-view setup and an absolute gain of 13.1% on a single-view setup. On the CALVIN benchmark, it outperforms the current SOTA in the setting of zero-shot unseen scene generalization by being able to successfully run 0.2 more tasks, a 7% relative increase. It also works in the real world from a handful of demonstrations. We ablate our model's architectural design choices, such as 3D scene featurization and 3D relative attentions, and show they all help generalization. Our results suggest that 3D scene representations and powerful generative modeling are keys to efficient robot learning from demonstrations.
[ "['Tsung-Wei Ke' 'Nikolaos Gkanatsios' 'Katerina Fragkiadaki']" ]
null
null
2402.10888
null
null
http://arxiv.org/pdf/2402.10888v1
2024-02-16T18:44:37Z
2024-02-16T18:44:37Z
Explainability for Machine Learning Models: From Data Adaptability to User Perception
This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify optimal conditions for producing meaningful explanations, considering both data and user requirements. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to the users. The thesis is divided into two parts. The first enhances a widely used rule-based explanation method. It then introduces a novel approach for evaluating the suitability of linear explanations to approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments to assess the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and representations. This research contributes to better explanation generation, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems.
[ "['julien Delaunay']" ]
null
null
2402.10890
null
null
http://arxiv.org/pdf/2402.10890v2
2024-06-06T14:55:40Z
2024-02-16T18:45:58Z
When is Tree Search Useful for LLM Planning? It Depends on the Discriminator
In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method. We investigate the practical utility of two advanced planning methods, iterative correction and tree search. We present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that: (1) advanced planning methods demand discriminators with at least 90% accuracy to achieve significant improvements over re-ranking; (2) current LLMs' discrimination abilities have not met the needs of advanced planning methods to achieve such improvements; (3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared to the other two methods, tree search is at least 10--20 times slower but leads to negligible performance gains, which hinders its real-world applications. Code and data are available at https://github.com/OSU-NLP-Group/llm-planning-eval.
[ "['Ziru Chen' 'Michael White' 'Raymond Mooney' 'Ali Payani' 'Yu Su'\n 'Huan Sun']" ]
null
null
2402.10891
null
null
http://arxiv.org/pdf/2402.10891v1
2024-02-16T18:47:21Z
2024-02-16T18:47:21Z
Instruction Diversity Drives Generalization To Unseen Tasks
Instruction tuning -- fine-tuning a large language model (LLM) on pairs of instructions and desired outcomes -- is an approach that enables pre-trained language models to perform real-world tasks and follow human instructions. Its practical success depends on the model learning a broader set of instructions than those it was trained on. Yet the factors that determine model generalization to such unseen tasks are not well understood. In this paper, we experiment with string rewrites, a symbolic task that serves as a building block for Turing-complete Markov algorithms while allowing experimental control of "inputs" and "instructions". We investigate the trade-off between the number of instructions the model is trained on and the number of training samples provided for each instruction, and observe that the diversity of the instruction set determines generalization. Generalization emerges once a diverse enough set of tasks is provided, even though very few examples are provided for each task. Instruction diversity also ensures robustness with respect to non-uniform distributions of instructions in the training set.
[ "['Dylan Zhang' 'Justin Wang' 'Francois Charton']" ]
null
null
2402.10892
null
null
http://arxiv.org/pdf/2402.10892v2
2024-06-10T19:39:34Z
2024-02-16T18:49:27Z
Proving membership in LLM pretraining data via data watermarks
Detecting whether copyright holders' works were used in LLM pretraining is poised to be an important problem. This work proposes using data watermarks to enable principled detection with only black-box model access, provided that the rightholder contributed multiple training documents and watermarked them before public release. By applying a randomly sampled data watermark, detection can be framed as hypothesis testing, which provides guarantees on the false detection rate. We study two watermarks: one that inserts random sequences, and another that randomly substitutes characters with Unicode lookalikes. We first show how three aspects of watermark design -- watermark length, number of duplications, and interference -- affect the power of the hypothesis test. Next, we study how a watermark's detection strength changes under model and dataset scaling: while increasing the dataset size decreases the strength of the watermark, watermarks remain strong if the model size also increases. Finally, we view SHA hashes as natural watermarks and show that we can robustly detect hashes from BLOOM-176B's training data, as long as they occurred at least 90 times. Together, our results point towards a promising future for data watermarks in real world use.
[ "['Johnny Tian-Zheng Wei' 'Ryan Yixiang Wang' 'Robin Jia']" ]
null
null
2402.10893
null
null
http://arxiv.org/pdf/2402.10893v1
2024-02-16T18:50:24Z
2024-02-16T18:50:24Z
RLVF: Learning from Verbal Feedback without Overgeneralization
The diversity of contexts in which large language models (LLMs) are deployed requires the ability to modify or customize default model behaviors to incorporate nuanced requirements and preferences. A convenient interface to specify such model adjustments is high-level verbal feedback, such as "Don't use emojis when drafting emails to my boss." However, while writing high-level feedback is far simpler than collecting annotations for reinforcement learning from human feedback (RLHF), we find that simply prompting a model with such feedback leads to overgeneralization of the feedback to contexts where it is not relevant. We study the problem of incorporating verbal feedback without such overgeneralization, inspiring a new method Contextualized Critiques with Constrained Preference Optimization (C3PO). C3PO uses a piece of high-level feedback to generate a small synthetic preference dataset specifying how the feedback should (and should not) be applied. It then fine-tunes the model in accordance with the synthetic preference data while minimizing the divergence from the original model for prompts where the feedback does not apply. Our experimental results indicate that our approach effectively applies verbal feedback to relevant scenarios while preserving existing behaviors for other contexts. For both human- and GPT-4-generated high-level feedback, C3PO effectively adheres to the given feedback comparably to in-context baselines while reducing overgeneralization by 30%.
[ "['Moritz Stephan' 'Alexander Khazatsky' 'Eric Mitchell' 'Annie S Chen'\n 'Sheryl Hsu' 'Archit Sharma' 'Chelsea Finn']" ]
null
null
2402.10894
null
null
http://arxiv.org/pdf/2402.10894v1
2024-02-16T18:51:42Z
2024-02-16T18:51:42Z
Fusion of Diffusion Weighted MRI and Clinical Data for Predicting Functional Outcome after Acute Ischemic Stroke with Deep Contrastive Learning
Stroke is a common disabling neurological condition that affects about one-quarter of the adult population over age 25; more than half of patients still have poor outcomes, such as permanent functional dependence or even death, after the onset of acute stroke. The aim of this study is to investigate the efficacy of diffusion-weighted MRI modalities combined with structured health profile data in predicting the functional outcome, to facilitate early intervention. A deep fusion learning network is proposed with two-stage training: the first stage focuses on cross-modality representation learning and the second stage on classification. Supervised contrastive learning is exploited to learn discriminative features that separate the two classes of patients from embeddings of individual modalities and from the fused multimodal embedding. The network takes as input DWI and ADC images and structured health profile data. The outcome is the prediction of whether the patient needs long-term care at 3 months after the onset of stroke. Trained and evaluated on a dataset of 3297 patients, our proposed fusion model achieves 0.87, 0.80, and 80.45% for AUC, F1-score, and accuracy, respectively, outperforming existing models that consolidate both imaging and structured data in the medical domain. When the model is trained with comprehensive clinical variables, including NIHSS and comorbidities, the gain from images in making accurate predictions is not substantial but is still significant. Moreover, diffusion-weighted MRI can replace NIHSS, achieving a comparable level of accuracy when combined with other readily available clinical variables, for better generalization.
[ "['Chia-Ling Tsai' 'Hui-Yun Su' 'Shen-Feng Sung' 'Wei-Yang Lin'\n 'Ying-Ying Su' 'Tzu-Hsien Yang' 'Man-Lin Mai']" ]
null
null
2402.10898
null
null
http://arxiv.org/pdf/2402.10898v3
2024-06-27T14:04:51Z
2024-02-16T18:56:41Z
The Price of Adaptivity in Stochastic Convex Optimization
We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a "price of adaptivity" (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch. En route, we also establish tight upper and lower bounds for (known-parameter) high-probability stochastic convex optimization with heavy-tailed and bounded noise, respectively.
[ "['Yair Carmon' 'Oliver Hinder']" ]
null
null
2402.10908
null
null
http://arxiv.org/pdf/2402.10908v1
2024-01-12T17:50:35Z
2024-01-12T17:50:35Z
LLM-Assisted Crisis Management: Building Advanced LLM Platforms for Effective Emergency Response and Public Collaboration
Emergencies and critical incidents often unfold rapidly, necessitating a swift and effective response. In this research, we introduce a novel approach to identify and classify emergency situations from social media posts and direct emergency messages using an open-source Large Language Model, LLAMA2. The goal is to harness the power of natural language processing and machine learning to assist public safety telecommunicators and large crowds during countrywide emergencies. Our research focuses on developing a language model that can understand how users describe their situation in a 911 call, enabling LLAMA2 to analyze the content and offer relevant instructions to the telecommunicator, while also creating workflows to notify government agencies with the caller's information when necessary. Another benefit this language model provides is its ability to assist people during a significant emergency incident, when the 911 system is overwhelmed, by giving users simple instructions and informing authorities of their location and emergency information.
[ "['Hakan T. Otal' 'M. Abdullah Canbaz']" ]
null
null
2402.10921
null
null
http://arxiv.org/pdf/2402.10921v1
2024-01-26T19:57:26Z
2024-01-26T19:57:26Z
AM^2-EmoJE: Adaptive Missing-Modality Emotion Recognition in Conversation via Joint Embedding Learning
Human emotion can be presented in different modes, i.e., audio, video, and text. However, the contribution of each mode in exhibiting each emotion is not uniform. Furthermore, the availability of complete mode-specific details may not always be guaranteed at test time. In this work, we propose AM^2-EmoJE, a model for Adaptive Missing-Modality Emotion Recognition in Conversation via Joint Embedding Learning, which is grounded on two-fold contributions: First, a query-adaptive fusion that can automatically learn the relative importance of its mode-specific representations in a query-specific manner. In this way, the model aims to prioritize the mode-invariant spatial query details of the emotion patterns, while also retaining their mode-exclusive aspects within the learned multimodal query descriptor. Second, a multimodal joint embedding learning module that explicitly addresses various missing-modality scenarios at test time. Through this module, the model learns to emphasize the correlated patterns across modalities, which may help align the cross-attended mode-specific descriptors pairwise within a joint embedding space and thereby compensate for missing modalities during inference. By leveraging the spatio-temporal details at the dialogue level, the proposed AM^2-EmoJE not only demonstrates superior performance compared to the best-performing state-of-the-art multimodal methods but, by effectively leveraging body language in place of facial expression, also exhibits an enhanced privacy feature. Reporting around a 2-5% improvement in the weighted F1 score, the proposed multimodal joint embedding module facilitates an impressive performance gain in a variety of missing-modality query scenarios during test time.
[ "['Naresh Kumar Devulapally' 'Sidharth Anand' 'Sreyasee Das Bhattacharjee'\n 'Junsong Yuan']" ]
null
null
2402.10926
null
null
http://arxiv.org/pdf/2402.10926v1
2024-01-30T10:43:27Z
2024-01-30T10:43:27Z
Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
Physics-informed neural networks (PINNs) and their variants have been very popular in recent years as algorithms for the numerical simulation of both forward and inverse problems for partial differential equations. This article aims to provide a comprehensive review of currently available results on the numerical analysis of PINNs and related models that constitute the backbone of physics-informed machine learning. We provide a unified framework in which analysis of the various components of the error incurred by PINNs in approximating PDEs can be effectively carried out. A detailed review of available results on approximation, generalization and training errors and their behavior with respect to the type of the PDE and the dimension of the underlying domain is presented. In particular, the role of the regularity of the solutions and their stability to perturbations in the error analysis is elucidated. Numerical results are also presented to illustrate the theory. We identify training errors as a key bottleneck which can adversely affect the overall performance of various models in physics-informed machine learning.
[ "['Tim De Ryck' 'Siddhartha Mishra']" ]
null
null
2402.10930
null
null
http://arxiv.org/pdf/2402.10930v2
2024-02-20T09:52:42Z
2024-01-31T17:52:52Z
ConSmax: Hardware-Friendly Alternative Softmax with Learnable Parameters
The self-attention mechanism sets transformer-based large language models (LLMs) apart from convolutional and recurrent neural networks. Despite the performance improvement, achieving real-time LLM inference on silicon is challenging due to the extensively used Softmax in self-attention. Apart from the non-linearity, the low arithmetic intensity greatly reduces the processing parallelism, which becomes the bottleneck especially when dealing with a longer context. To address this challenge, we propose Constant Softmax (ConSmax), a software-hardware co-design as an efficient Softmax alternative. ConSmax employs differentiable normalization parameters to remove the maximum searching and denominator summation in Softmax. It allows for massive parallelization while performing the critical tasks of Softmax. In addition, a scalable ConSmax hardware utilizing a bitwidth-split look-up table (LUT) can produce lossless non-linear operation and support mixed-precision computing. It further facilitates efficient LLM inference. Experimental results show that ConSmax achieves a minuscule power consumption of 0.43 mW and area of 0.001 mm2 at a 1-GHz working frequency in 22-nm CMOS technology. Compared to state-of-the-art Softmax hardware, ConSmax results in 14.5x energy and 14.0x area savings with comparable accuracy on a GPT-2 model and the WikiText103 dataset.
[ "['Shiwei Liu' 'Guanchen Tao' 'Yifei Zou' 'Derek Chow' 'Zichen Fan'\n 'Kauna Lei' 'Bangfei Pan' 'Dennis Sylvester' 'Gregory Kielian'\n 'Mehdi Saligane']" ]