categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.07528 | null | null | http://arxiv.org/pdf/2406.07528v1 | 2024-06-11T17:55:03Z | 2024-06-11T17:55:03Z | QuickLLaMA: Query-aware Inference Acceleration for Large Language Models | The capacity of Large Language Models (LLMs) to comprehend and reason over long contexts is pivotal for advancements in diverse fields. Yet, they still struggle with capturing long-distance dependencies within sequences to deeply understand semantics. To address this issue, we introduce Query-aware Inference for LLMs (Q-LLM), a system designed to process extensive sequences akin to human cognition. By focusing on memory data relevant to a given query, Q-LLM can accurately capture pertinent information within a fixed window size and provide precise answers to queries. It doesn't require extra training and can be seamlessly integrated with any LLMs. Q-LLM using LLaMA3 (QuickLLaMA) can read Harry Potter within 30s and accurately answer the questions. Q-LLM improved by 7.17% compared to the current state-of-the-art on LLaMA3, and by 3.26% on Mistral on the $\infty$-bench. In the Needle-in-a-Haystack task on widely recognized benchmarks, Q-LLM improved upon the current SOTA by 7.0% on Mistral and achieves 100% on LLaMA3. Our code can be found at https://github.com/dvlab-research/Q-LLM. | [
"['Jingyao Li' 'Han Shi' 'Xin Jiang' 'Zhenguo Li' 'Hong Xu' 'Jiaya Jia']"
] |
null | null | 2406.07529 | null | null | http://arxiv.org/pdf/2406.07529v2 | 2024-06-18T06:24:11Z | 2024-06-11T17:55:25Z | MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation | Model merging has emerged as an effective approach to combine multiple single-task models, fine-tuned from the same pre-trained model, into a multitask model. This process typically involves computing a weighted average of the model parameters without any additional training. Existing model-merging methods focus on enhancing average task accuracy. However, interference and conflicts between the objectives of different tasks can lead to trade-offs during model merging. In real-world applications, a set of solutions with various trade-offs can be more informative, helping practitioners make decisions based on diverse preferences. In this paper, we introduce a novel low-compute algorithm, Model Merging with Amortized Pareto Front (MAP). MAP identifies a Pareto set of scaling coefficients for merging multiple models to reflect the trade-offs. The core component of MAP is approximating the evaluation metrics of the various tasks using a quadratic approximation surrogate model derived from a pre-selected set of scaling coefficients, enabling amortized inference. Experimental results on vision and natural language processing tasks show that MAP can accurately identify the Pareto front. To further reduce the required computation of MAP, we propose (1) a Bayesian adaptive sampling algorithm and (2) a nested merging scheme with multiple stages. | [
"['Lu Li' 'Tianyu Zhang' 'Zhiqi Bu' 'Suyuchen Wang' 'Huan He' 'Jie Fu'\n 'Yonghui Wu' 'Jiang Bian' 'Yong Chen' 'Yoshua Bengio']"
] |
null | null | 2406.07532 | null | null | http://arxiv.org/pdf/2406.07532v1 | 2024-06-11T17:56:14Z | 2024-06-11T17:56:14Z | Hearing Anything Anywhere | Recent years have seen immense progress in 3D computer vision and computer graphics, with emerging tools that can virtualize real-world 3D environments for numerous Mixed Reality (XR) applications. However, alongside immersive visual experiences, immersive auditory experiences are equally vital to our holistic perception of an environment. In this paper, we aim to reconstruct the spatial acoustic characteristics of an arbitrary environment given only a sparse set of (roughly 12) room impulse response (RIR) recordings and a planar reconstruction of the scene, a setup that is easily achievable by ordinary users. To this end, we introduce DiffRIR, a differentiable RIR rendering framework with interpretable parametric models of salient acoustic features of the scene, including sound source directivity and surface reflectivity. This allows us to synthesize novel auditory experiences through the space with any source audio. To evaluate our method, we collect a dataset of RIR recordings and music in four diverse, real environments. We show that our model outperforms state-of-the-art baselines on rendering monaural and binaural RIRs and music at unseen locations, and learns physically interpretable parameters characterizing acoustic properties of the sound source and surfaces in the scene. | [
"['Mason Wang' 'Ryosuke Sawata' 'Samuel Clarke' 'Ruohan Gao' 'Shangzhe Wu'\n 'Jiajun Wu']"
] |
null | null | 2406.07536 | null | null | http://arxiv.org/pdf/2406.07536v1 | 2024-06-11T17:57:49Z | 2024-06-11T17:57:49Z | Towards Fundamentally Scalable Model Selection: Asymptotically Fast Update and Selection | The advancement of deep learning technologies is bringing new models every day, motivating the study of scalable model selection. An ideal model selection scheme should minimally support two operations efficiently over a large pool of candidate models: update, which involves either adding a new candidate model or removing an existing candidate model, and selection, which involves locating highly performing models for a given task. However, previous solutions to model selection require high computational complexity for at least one of these two operations. In this work, we target fundamentally (more) scalable model selection that supports asymptotically fast update and asymptotically fast selection at the same time. Firstly, we define isolated model embedding, a family of model selection schemes supporting asymptotically fast update and selection: With respect to the number of candidate models $m$, the update complexity is $O(1)$ and the selection consists of a single sweep over $m$ vectors in addition to $O(1)$ model operations. Isolated model embedding also implies several desirable properties for applications. Secondly, we present Standardized Embedder, an empirical realization of isolated model embedding. We assess its effectiveness by using it to select representations from a pool of 100 pre-trained vision models for classification tasks and measuring the performance gaps between the selected models and the best candidates with a linear probing protocol. Experiments suggest our realization is effective in selecting models with competitive performances and highlight isolated model embedding as a promising direction towards model selection that is fundamentally (more) scalable. | [
"['Wenxiao Wang' 'Weiming Zhuang' 'Lingjuan Lyu']"
] |
null | null | 2406.07540 | null | null | http://arxiv.org/pdf/2406.07540v1 | 2024-06-11T17:59:01Z | 2024-06-11T17:59:01Z | Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance | Recent controllable generation approaches such as FreeControl and Diffusion Self-guidance bring fine-grained spatial and appearance control to text-to-image (T2I) diffusion models without training auxiliary modules. However, these methods optimize the latent embedding for each type of score function with longer diffusion steps, making the generation process time-consuming and limiting their flexibility and use. This work presents Ctrl-X, a simple framework for T2I diffusion controlling structure and appearance without additional training or guidance. Ctrl-X designs feed-forward structure control to enable the structure alignment with a structure image and semantic-aware appearance transfer to facilitate the appearance transfer from a user-input image. Extensive qualitative and quantitative experiments illustrate the superior performance of Ctrl-X on various condition inputs and model checkpoints. In particular, Ctrl-X supports novel structure and appearance control with arbitrary condition images of any modality, exhibits superior image quality and appearance transfer compared to existing works, and provides instant plug-and-play functionality to any T2I and text-to-video (T2V) diffusion model. See our project page for an overview of the results: https://genforce.github.io/ctrl-x | [
"['Kuan Heng Lin' 'Sicheng Mo' 'Ben Klingher' 'Fangzhou Mu' 'Bolei Zhou']"
] |
null | null | 2406.07541 | null | null | http://arxiv.org/pdf/2406.07541v1 | 2024-06-11T17:59:29Z | 2024-06-11T17:59:29Z | CDSA: Conservative Denoising Score-based Algorithm for Offline Reinforcement Learning | Distribution shift is a major obstacle in offline reinforcement learning, which necessitates minimizing the discrepancy between the learned policy and the behavior policy to avoid overestimating rare or unseen actions. Previous conservative offline RL algorithms struggle to generalize to unseen actions, despite their success in learning a good in-distribution policy. In contrast, we propose to use the gradient fields of the dataset density generated from a pre-trained offline RL algorithm to adjust the original actions. We decouple the conservatism constraints from the policy, which can thus benefit a wide range of offline RL algorithms. As a consequence, we propose the Conservative Denoising Score-based Algorithm (CDSA), which utilizes the denoising score-based model to model the gradient of the dataset density, rather than the dataset density itself, and facilitates a more accurate and efficient method to adjust the actions generated by the pre-trained policy in a deterministic and continuous MDP environment. In experiments, we show that our approach significantly improves the performance of baseline algorithms in D4RL datasets, and demonstrate the generalizability and plug-and-play capability of our model across different pre-trained offline RL policies in different tasks. We also validate that the agent exhibits greater risk aversion after employing our method while showcasing its ability to generalize effectively across diverse tasks. | [
"['Zeyuan Liu' 'Kai Yang' 'Xiu Li']"
] |
null | null | 2406.07542 | null | null | http://arxiv.org/pdf/2406.07542v1 | 2024-06-11T17:59:31Z | 2024-06-11T17:59:31Z | Cognitive Insights Across Languages: Enhancing Multimodal Interview Analysis | Cognitive decline is a natural process that occurs as individuals age. Early diagnosis of anomalous decline is crucial for initiating professional treatment that can enhance the quality of life of those affected. To address this issue, we propose a multimodal model capable of predicting Mild Cognitive Impairment and cognitive scores. The TAUKADIAL dataset is used to conduct the evaluation, which comprises audio recordings of clinical interviews. The proposed model demonstrates the ability to transcribe and differentiate between languages used in the interviews. Subsequently, the model extracts audio and text features, combining them into a multimodal architecture to achieve robust and generalized results. Our approach involves in-depth research to implement various features obtained from the proposed modalities. | [
"['David Ortiz-Perez' 'Jose Garcia-Rodriguez' 'David Tomás']"
] |
null | null | 2406.07544 | null | null | http://arxiv.org/pdf/2406.07544v2 | 2024-06-26T17:59:50Z | 2024-06-11T17:59:45Z | Situational Awareness Matters in 3D Vision Language Reasoning | Being able to carry out complicated vision language reasoning tasks in 3D space represents a significant milestone in developing household robots and human-centered embodied AI. In this work, we demonstrate that a critical and distinct challenge in 3D vision language reasoning is situational awareness, which incorporates two key components: (1) The autonomous agent grounds its self-location based on a language prompt. (2) The agent answers open-ended questions from the perspective of its calculated position. To address this challenge, we introduce SIG3D, an end-to-end Situation-Grounded model for 3D vision language reasoning. We tokenize the 3D scene into a sparse voxel representation and propose a language-grounded situation estimator, followed by a situated question answering module. Experiments on the SQA3D and ScanQA datasets show that SIG3D outperforms state-of-the-art models in situation estimation and question answering by a large margin (e.g., an enhancement of over 30% on situation estimation accuracy). Subsequent analysis corroborates our architectural design choices, explores the distinct functions of visual and textual tokens, and highlights the importance of situational awareness in the domain of 3D question answering. | [
"['Yunze Man' 'Liang-Yan Gui' 'Yu-Xiong Wang']"
] |
null | null | 2406.07548 | null | null | http://arxiv.org/pdf/2406.07548v1 | 2024-06-11T17:59:53Z | 2024-06-11T17:59:53Z | Image and Video Tokenization with Binary Spherical Quantization | We propose a new transformer-based image and video tokenizer with Binary Spherical Quantization (BSQ). BSQ projects the high-dimensional visual embedding to a lower-dimensional hypersphere and then applies binary quantization. BSQ is (1) parameter-efficient without an explicit codebook, (2) scalable to arbitrary token dimensions, and (3) compact: compressing visual data by up to 100$\times$ with minimal distortion. Our tokenizer uses a transformer encoder and decoder with simple block-wise causal masking to support variable-length videos as input. The resulting BSQ-ViT achieves state-of-the-art visual reconstruction quality on image and video reconstruction benchmarks with 2.4$\times$ throughput compared to the best prior methods. Furthermore, by learning an autoregressive prior for adaptive arithmetic coding, BSQ-ViT achieves comparable results on video compression with state-of-the-art video compression standards. BSQ-ViT also enables masked language models to achieve competitive image synthesis quality to GAN- and diffusion-based methods. | [
"['Yue Zhao' 'Yuanjun Xiong' 'Philipp Krähenbühl']"
] |
null | null | 2406.07564 | null | null | http://arxiv.org/pdf/2406.07564v1 | 2024-05-15T08:11:41Z | 2024-05-15T08:11:41Z | Optimizing Sales Forecasts through Automated Integration of Market Indicators | Recognizing that traditional forecasting models often rely solely on historical demand, this work investigates the potential of data-driven techniques to automatically select and integrate market indicators for improving customer demand predictions. By adopting an exploratory methodology, we integrate macroeconomic time series, such as national GDP growth, from the \textit{Eurostat} database into \textit{Neural Prophet} and \textit{SARIMAX} forecasting models. Suitable time series are automatically identified through different state-of-the-art feature selection methods and applied to sales data from our industrial partner. It could be shown that forecasts can be significantly enhanced by incorporating external information. Notably, the potential of feature selection methods stands out, especially due to their capability for automation without expert knowledge and manual selection effort. In particular, the Forward Feature Selection technique consistently yielded superior forecasting accuracy for both SARIMAX and Neural Prophet across different company sales datasets. In the comparative analysis of the errors of the selected forecasting models, namely Neural Prophet and SARIMAX, it is observed that neither model demonstrates a significant superiority over the other. | [
"['Lina Döring' 'Felix Grumbach' 'Pascal Reusch']"
] |
null | null | 2406.07568 | null | null | http://arxiv.org/pdf/2406.07568v1 | 2024-05-27T23:00:57Z | 2024-05-27T23:00:57Z | Reinforcement Learning Based Escape Route Generation in Low Visibility Environments | Structure fires are responsible for the majority of fire-related deaths nationwide. In order to assist with the rapid evacuation of trapped people, this paper proposes the use of a system that determines optimal search paths for firefighters and exit paths for civilians in real time based on environmental measurements. Through the use of a LiDAR mapping system evaluated and verified by a trust range derived from sonar and smoke concentration data, a proposed solution to low visibility mapping is tested. These independent point clouds are then used to create distinct maps, which are merged through the use of a RANSAC-based alignment methodology and simplified into a visibility graph. Temperature and humidity data are then used to label each node with a danger score, creating an environment tensor. After demonstrating how a Linear Function Approximation based Natural Policy Gradient RL methodology outperforms more complex competitors with respect to robustness and speed, this paper outlines two systems (savior and refugee) that process the environment tensor to create safe rescue and escape routes, respectively. | [
"['Hari Srikanth']"
] |
null | null | 2406.07572 | null | null | http://arxiv.org/pdf/2406.07572v1 | 2024-06-01T13:35:18Z | 2024-06-01T13:35:18Z | Domain-specific ReAct for physics-integrated iterative modeling: A case study of LLM agents for gas path analysis of gas turbines | This study explores the application of large language models (LLMs) with callable tools in the energy and power engineering domain, focusing on gas path analysis of gas turbines. We developed a dual-agent tool-calling process to integrate expert knowledge, predefined tools, and LLM reasoning. We evaluated various LLMs, including LLama3, Qwen1.5 and GPT. Smaller models struggled with tool usage and parameter extraction, while larger models demonstrated favorable capabilities. All models faced challenges with complex, multi-component problems. Based on the test results, we infer that LLMs with nearly 100 billion parameters could meet professional scenario requirements with fine-tuning and advanced prompt design. Continued development is likely to enhance their accuracy and effectiveness, paving the way for more robust AI-driven solutions. | [
"['Tao Song' 'Yuwei Fan' 'Chenlong Feng' 'Keyu Song' 'Chao Liu'\n 'Dongxiang Jiang']"
] |
null | null | 2406.07573 | null | null | http://arxiv.org/pdf/2406.07573v1 | 2024-06-04T08:56:56Z | 2024-06-04T08:56:56Z | Investigating the Potential of Using Large Language Models for Scheduling | The inaugural ACM International Conference on AI-powered Software introduced the AIware Challenge, prompting researchers to explore AI-driven tools for optimizing conference programs through constrained optimization. We investigate the use of Large Language Models (LLMs) for program scheduling, focusing on zero-shot learning and integer programming to measure paper similarity. Our study reveals that LLMs, even under zero-shot settings, create reasonably good first drafts of conference schedules. When clustering papers, using only titles as LLM inputs produces results closer to human categorization than using titles and abstracts with TFIDF. The code has been made publicly available. | [
"['Deddy Jobson' 'Yilin Li']"
] |
null | null | 2406.07574 | null | null | http://arxiv.org/pdf/2406.07574v1 | 2024-06-04T22:59:37Z | 2024-06-04T22:59:37Z | Biharmonic Distance of Graphs and its Higher-Order Variants: Theoretical Properties with Applications to Centrality and Clustering | Effective resistance is a distance between vertices of a graph that is both theoretically interesting and useful in applications. We study a variant of effective resistance called the biharmonic distance. While the effective resistance measures how well-connected two vertices are, we prove several theoretical results supporting the idea that the biharmonic distance measures how important an edge is to the global topology of the graph. Our theoretical results connect the biharmonic distance to well-known measures of connectivity of a graph like its total resistance and sparsity. Based on these results, we introduce two clustering algorithms using the biharmonic distance. Finally, we introduce a further generalization of the biharmonic distance that we call the $k$-harmonic distance. We empirically study the utility of biharmonic and $k$-harmonic distance for edge centrality and graph clustering. | [
"['Mitchell Black' 'Lucy Lin' 'Amir Nayyeri' 'Weng-Keen Wong']"
] |
null | null | 2406.07576 | null | null | http://arxiv.org/pdf/2406.07576v1 | 2024-06-07T08:51:52Z | 2024-06-07T08:51:52Z | Towards objective and interpretable speech disorder assessment: a comparative analysis of CNN and transformer-based models | Head and Neck Cancers (HNC) significantly impact patients' ability to speak, affecting their quality of life. Commonly used metrics for assessing pathological speech are subjective, prompting the need for automated and unbiased evaluation methods. This study proposes a self-supervised Wav2Vec2-based model for phone classification with HNC patients, to enhance accuracy and improve the discrimination of phonetic features for subsequent interpretability purposes. The impact of pre-training datasets, model size, and fine-tuning datasets and parameters is explored. Evaluation on diverse corpora reveals the effectiveness of the Wav2Vec2 architecture, outperforming a CNN-based approach used in previous work. Correlation with perceptual measures also affirms the model's relevance for impaired speech analysis. This work paves the way for a better understanding of pathological speech with interpretable approaches for clinicians, by leveraging complex self-learnt speech representations. | [
"['Malo Maisonneuve' 'Corinne Fredouille' 'Muriel Lalain' 'Alain Ghio'\n 'Virginie Woisard']"
] |
null | null | 2406.07579 | null | null | http://arxiv.org/pdf/2406.07579v1 | 2024-06-09T06:44:08Z | 2024-06-09T06:44:08Z | GFPack++: Improving 2D Irregular Packing by Learning Gradient Field with Attention | 2D irregular packing is a classic combinatorial optimization problem with various applications, such as material utilization and texture atlas generation. This NP-hard problem requires efficient algorithms to optimize space utilization. Conventional numerical methods suffer from slow convergence and high computational cost. Existing learning-based methods, such as the score-based diffusion model, also have limitations, such as no rotation support, frequent collisions, poor adaptability to arbitrary boundaries, and slow inference. The difficulty of learning from teacher packing is to capture the complex geometric relationships among packing examples, which include the spatial (position, orientation) relationships of objects, their geometric features, and container boundary conditions. Representing these relationships in latent space is challenging. We propose GFPack++, an attention-based gradient field learning approach that addresses this challenge. It consists of two pivotal strategies: \emph{attention-based geometry encoding} for effective feature encoding and \emph{attention-based relation encoding} for learning complex relationships. We investigate the utilization distribution between the teacher and inference data and design a weighting function to prioritize tighter teacher data during training, enhancing learning effectiveness. Our diffusion model supports continuous rotation and outperforms existing methods on various datasets. We achieve higher space utilization than several widely used baselines, inference one order of magnitude faster than the previous diffusion-based method, and promising generalization for arbitrary boundaries. We plan to release our source code and datasets to support further research in this direction. | [
"['Tianyang Xue' 'Lin Lu' 'Yang Liu' 'Mingdong Wu' 'Hao Dong'\n 'Yanbin Zhang' 'Renmin Han' 'Baoquan Chen']"
] |
null | null | 2406.07580 | null | null | http://arxiv.org/pdf/2406.07580v1 | 2024-06-09T07:38:45Z | 2024-06-09T07:38:45Z | DMS: Addressing Information Loss with More Steps for Pragmatic Adversarial Attacks | Despite the exceptional performance of deep neural networks (DNNs) across different domains, they are vulnerable to adversarial samples, in particular for tasks related to computer vision. Such vulnerability is further influenced by the digital container formats used in computers, where the discrete numerical values are commonly used for storing the pixel values. This paper examines how information loss in file formats impacts the effectiveness of adversarial attacks. Notably, we observe a pronounced hindrance to the adversarial attack performance due to the information loss of the non-integer pixel values. To address this issue, we explore leveraging the gradient information of the attack samples within the model to mitigate the information loss. We introduce the Do More Steps (DMS) algorithm, which hinges on two core techniques: gradient ascent-based \textit{adversarial integerization} (DMS-AI) and integrated gradients-based \textit{attribution selection} (DMS-AS). Our goal is to alleviate this lossy process to retain the attack performance when storing these adversarial samples digitally. In particular, DMS-AI integerizes the non-integer pixel values according to the gradient direction, and DMS-AS selects the non-integer pixels by comparing attribution results. We conduct thorough experiments to assess the effectiveness of our approach, including the implementations of the DMS-AI and DMS-AS on two large-scale datasets with various latest gradient-based attack methods. Our empirical findings conclusively demonstrate the superiority of our proposed DMS-AI and DMS-AS pixel integerization methods over the standardised methods, such as rounding, truncating and upper approaches, in maintaining attack integrity. | [
"['Zhiyu Zhu' 'Jiayu Zhang' 'Xinyi Wang' 'Zhibo Jin' 'Huaming Chen']"
] |
null | null | 2406.07581 | null | null | http://arxiv.org/pdf/2406.07581v1 | 2024-06-09T17:13:25Z | 2024-06-09T17:13:25Z | A novel method for identifying rice seed purity based on hybrid machine learning algorithms | In the grain industry, the identification of seed purity is a crucial task as it is an important factor in evaluating the quality of seeds. For rice seeds, this property allows for the reduction of unexpected influences of other varieties on rice yield, nutrient composition, and price. However, in practice, they are often mixed with seeds from other varieties. This study proposes a novel method for automatically identifying the rice seed purity of a certain rice variety based on hybrid machine learning algorithms. The main idea is to use deep learning architectures for extracting important features from the raw data and then use machine learning algorithms for classification. Several experiments are conducted following a practical implementation to evaluate the performance of the proposed model. The obtained results show that the novel method significantly improves the performance of existing methods. Thus, it can be applied to design effective identification systems for rice seed purity. | [
"['Phan Thi-Thu-Hong' 'Vo Quoc-Trinh' 'Nguyen Huu-Du']"
] |
null | null | 2406.07585 | null | null | http://arxiv.org/pdf/2406.07585v1 | 2024-06-10T23:23:52Z | 2024-06-10T23:23:52Z | Rate-Preserving Reductions for Blackwell Approachability | Abernethy et al. (2011) showed that Blackwell approachability and no-regret learning are equivalent, in the sense that any algorithm that solves a specific Blackwell approachability instance can be converted to a sublinear regret algorithm for a specific no-regret learning instance, and vice versa. In this paper, we study a more fine-grained form of such reductions, and ask when this translation between problems preserves not only a sublinear rate of convergence, but also preserves the optimal rate of convergence. That is, in which cases does it suffice to find the optimal regret bound for a no-regret learning instance in order to find the optimal rate of convergence for a corresponding approachability instance? We show that the reduction of Abernethy et al. (2011) does not preserve rates: their reduction may reduce a $d$-dimensional approachability instance $I_1$ with optimal convergence rate $R_1$ to a no-regret learning instance $I_2$ with optimal regret-per-round of $R_2$, with $R_{2}/R_{1}$ arbitrarily large (in particular, it is possible that $R_1 = 0$ and $R_{2} > 0$). On the other hand, we show that it is possible to tightly reduce any approachability instance to an instance of a generalized form of regret minimization we call improper $\phi$-regret minimization (a variant of the $\phi$-regret minimization of Gordon et al. (2008) where the transformation functions may map actions outside of the action set). Finally, we characterize when linear transformations suffice to reduce improper $\phi$-regret minimization problems to standard classes of regret minimization problems in a rate preserving manner. We prove that some improper $\phi$-regret minimization instances cannot be reduced to either subclass of instance in this way, suggesting that approachability can capture some problems that cannot be phrased in the language of online learning. | [
"['Christoph Dann' 'Yishay Mansour' 'Mehryar Mohri' 'Jon Schneider'\n 'Balasubramanian Sivan']"
] |
null | null | 2406.07590 | null | null | http://arxiv.org/pdf/2406.07590v1 | 2024-06-11T10:46:41Z | 2024-06-11T10:46:41Z | StreamPrompt: Learnable Prompt-guided Data Selection for Efficient Stream Learning | Stream Learning (SL) requires models to rapidly adapt to continuous data streams, setting it apart from traditional Continual Learning (CL). Recent SL methods emphasize efficiency by selecting data subsets for training, but they often struggle due to their reliance on static, rule-based selection algorithms that cannot effectively adapt to the changing importance of data. In this work, we introduce StreamPrompt, a method that enhances data selection through dynamic, learnable prompts. These dynamic prompts serve two purposes beyond guiding model inference: 1) optimizing data selection, and 2) guiding updates to the rehearsal buffer. This approach addresses the challenges of adaptability and computational efficiency in processing continuous data streams. Moreover, StreamPrompt introduces Prompt Attunement, a mechanism that enhances the efficiency of prompt learning. By leveraging attention layers from vision transformers and softly combining their outputs with a gate unit, Prompt Attunement refines prompts with minimal computational resources. Comprehensive evaluations demonstrate StreamPrompt's superior performance over the state of the art, with significant improvements in accuracy and reductions in training time. These results underscore the efficacy and efficiency of StreamPrompt, establishing its potential as a scalable and effective solution for the evolving demands of SL. Our code is available at https://github.com/intellistream/Efficient-Stream-Learning. | [
"['Tongjun Shi' 'Shuhao Zhang']"
] |
null | null | 2406.07592 | null | null | http://arxiv.org/pdf/2406.07592v1 | 2024-06-11T12:15:47Z | 2024-06-11T12:15:47Z | MambaLRP: Explaining Selective State Space Sequence Models | Recent sequence modeling approaches using Selective State Space Sequence Models, referred to as Mamba models, have seen a surge of interest. These models allow efficient processing of long sequences in linear time and are rapidly being adopted in a wide range of applications such as language modeling, demonstrating promising performance. To foster their reliable use in real-world scenarios, it is crucial to augment their transparency. Our work bridges this critical gap by bringing explainability, particularly Layer-wise Relevance Propagation (LRP), to the Mamba architecture. Guided by the axiom of relevance conservation, we identify specific components in the Mamba architecture, which cause unfaithful explanations. To remedy this issue, we propose MambaLRP, a novel algorithm within the LRP framework, which ensures a more stable and reliable relevance propagation through these components. Our proposed method is theoretically sound and excels in achieving state-of-the-art explanation performance across a diverse range of models and datasets. Moreover, MambaLRP facilitates a deeper inspection of Mamba architectures, uncovering various biases and evaluating their significance. It also enables the analysis of previous speculations regarding the long-range capabilities of Mamba models. | [
"['Farnoush Rezaei Jafari' 'Grégoire Montavon' 'Klaus-Robert Müller'\n 'Oliver Eberle']"
] |
null | null | 2406.07598 | null | null | http://arxiv.org/pdf/2406.07598v4 | 2024-06-21T15:43:36Z | 2024-06-11T15:58:56Z | Equivariance via Minimal Frame Averaging for More Symmetries and Efficiency | We consider achieving equivariance in machine learning systems via frame averaging. Current frame averaging methods involve a costly sum over large frames or rely on sampling-based approaches that only yield approximate equivariance. Here, we propose Minimal Frame Averaging (MFA), a mathematical framework for constructing provably minimal frames that are exactly equivariant. The general foundations of MFA also allow us to extend frame averaging to more groups than previously considered, including the Lorentz group for describing symmetries in space-time, and the unitary group for complex-valued domains. Results demonstrate the efficiency and effectiveness of encoding symmetries via MFA across a diverse range of tasks, including $n$-body simulation, top tagging in collider physics, and relaxed energy prediction. Our code is available at https://github.com/divelab/MFA. | [
"['Yuchao Lin' 'Jacob Helwig' 'Shurui Gui' 'Shuiwang Ji']"
] |
null | null | 2406.07640 | null | null | http://arxiv.org/pdf/2406.07640v1 | 2024-06-11T18:13:46Z | 2024-06-11T18:13:46Z | When is an Embedding Model More Promising than Another? | Embedders play a central role in machine learning, projecting any object into numerical representations that can, in turn, be leveraged to perform various downstream tasks. The evaluation of embedding models typically depends on domain-specific empirical approaches utilizing downstream tasks, primarily because of the lack of a standardized framework for comparison. However, acquiring adequately large and representative datasets for conducting these assessments is not always viable and can prove to be prohibitively expensive and time-consuming. In this paper, we present a unified approach to evaluate embedders. First, we establish theoretical foundations for comparing embedding models, drawing upon the concepts of sufficiency and informativeness. We then leverage these concepts to devise a tractable comparison criterion (information sufficiency), leading to a task-agnostic and self-supervised ranking procedure. We demonstrate experimentally that our approach aligns closely with the capability of embedding models to facilitate various downstream tasks in both natural language processing and molecular biology. This effectively offers practitioners a valuable tool for prioritizing model trials. | [
"['Maxime Darrin' 'Philippe Formont' 'Ismail Ben Ayed' 'Jackie CK Cheung'\n 'Pablo Piantanida']"
] |
null | null | 2406.07642 | null | null | http://arxiv.org/pdf/2406.07642v1 | 2024-06-11T18:16:28Z | 2024-06-11T18:16:28Z | Generating Human Understandable Explanations for Node Embeddings | Node embedding algorithms produce low-dimensional latent representations of nodes in a graph. These embeddings are often used for downstream tasks, such as node classification and link prediction. In this paper, we investigate the following two questions: (Q1) Can we explain each embedding dimension with human-understandable graph features (e.g., degree, clustering coefficient and PageRank)? (Q2) How can we modify existing node embedding algorithms to produce embeddings that can be easily explained by human-understandable graph features? We find that the answer to Q1 is yes and introduce a new framework called XM (short for eXplain eMbedding) to answer Q2. A key aspect of XM involves minimizing the nuclear norm of the generated explanations. We show that by minimizing the nuclear norm, we minimize the lower bound on the entropy of the generated explanations. We test XM on a variety of real-world graphs and show that XM not only preserves the performance of existing node embedding methods, but also enhances their explainability. | [
"['Zohair Shafi' 'Ayan Chatterjee' 'Tina Eliassi-Rad']"
] |
null | null | 2406.07646 | null | null | http://arxiv.org/pdf/2406.07646v1 | 2024-06-11T18:22:59Z | 2024-06-11T18:22:59Z | Pre-training Feature Guided Diffusion Model for Speech Enhancement | Speech enhancement significantly improves the clarity and intelligibility of speech in noisy environments, improving communication and listening experiences. In this paper, we introduce a novel pretraining feature-guided diffusion model tailored for efficient speech enhancement, addressing the limitations of existing discriminative and generative models. By integrating spectral features into a variational autoencoder (VAE) and leveraging pre-trained features for guidance during the reverse process, coupled with the utilization of the deterministic discrete integration method (DDIM) to streamline sampling steps, our model improves efficiency and speech enhancement quality. Demonstrating state-of-the-art results on two public datasets with different SNRs, our model outshines other baselines in efficiency and robustness. The proposed method not only optimizes performance but also enhances practical deployment capabilities, without increasing computational demands. | [
"['Yiyuan Yang' 'Niki Trigoni' 'Andrew Markham']"
] |
null | null | 2406.07657 | null | null | http://arxiv.org/pdf/2406.07657v1 | 2024-06-11T18:55:04Z | 2024-06-11T18:55:04Z | OPTune: Efficient Online Preference Tuning | Reinforcement learning with human feedback~(RLHF) is critical for aligning Large Language Models (LLMs) with human preference. Compared to the widely studied offline version of RLHF, \emph{e.g.} direct preference optimization (DPO), recent works have shown that the online variants achieve even better alignment. However, online alignment requires on-the-fly generation of new training data, which is costly, hard to parallelize, and suffers from varying quality and utility. In this paper, we propose a more efficient data exploration strategy for online preference tuning (OPTune), which does not rely on human-curated or pre-collected teacher responses but dynamically samples informative responses for on-policy preference alignment. During data generation, OPTune only selects prompts whose (re)generated responses can potentially provide more informative and higher-quality training signals than the existing responses. In the training objective, OPTune reweights each generated response (pair) by its utility in improving the alignment so that learning can be focused on the most helpful samples. Throughout our evaluations, OPTune'd LLMs maintain the instruction-following benefits provided by standard preference tuning whilst enjoying 1.27-1.56x faster training speed due to the efficient data exploration strategy. | [
"['Lichang Chen' 'Jiuhai Chen' 'Chenxi Liu' 'John Kirchenbauer'\n 'Davit Soselia' 'Chen Zhu' 'Tom Goldstein' 'Tianyi Zhou' 'Heng Huang']"
] |
null | null | 2406.07658 | null | null | http://arxiv.org/pdf/2406.07658v1 | 2024-06-11T18:59:24Z | 2024-06-11T18:59:24Z | Treeffuser: Probabilistic Predictions via Conditional Diffusions with Gradient-Boosted Trees | Probabilistic prediction aims to compute predictive distributions rather than single-point predictions. These distributions enable practitioners to quantify uncertainty, compute risk, and detect outliers. However, most probabilistic methods assume parametric responses, such as Gaussian or Poisson distributions. When these assumptions fail, such models lead to bad predictions and poorly calibrated uncertainty. In this paper, we propose Treeffuser, an easy-to-use method for probabilistic prediction on tabular data. The idea is to learn a conditional diffusion model where the score function is estimated using gradient-boosted trees. The conditional diffusion model makes Treeffuser flexible and non-parametric, while the gradient-boosted trees make it robust and easy to train on CPUs. Treeffuser learns well-calibrated predictive distributions and can handle a wide range of regression tasks -- including those with multivariate, multimodal, and skewed responses. We study Treeffuser on synthetic and real data and show that it outperforms existing methods, providing better-calibrated probabilistic predictions. We further demonstrate its versatility with an application to inventory allocation under uncertainty using sales data from Walmart. We implement Treeffuser in \href{https://github.com/blei-lab/treeffuser}{https://github.com/blei-lab/treeffuser}. | [
"['Nicolas Beltran-Velez' 'Alessandro Antonio Grande' 'Achille Nazaret'\n 'Alp Kucukelbir' 'David Blei']"
] |
null | null | 2406.07662 | null | null | http://arxiv.org/pdf/2406.07662v3 | 2024-06-22T17:42:13Z | 2024-06-11T19:08:32Z | Progress Towards Decoding Visual Imagery via fNIRS | We demonstrate the possibility of reconstructing images from fNIRS brain activity and start building a prototype to match the required specs. By training an image reconstruction model on downsampled fMRI data, we discovered that cm-scale spatial resolution is sufficient for image generation. We obtained 71% retrieval accuracy with 1-cm resolution, compared to 93% on the full-resolution fMRI, and 20% with 2-cm resolution. With simulations and high-density tomography, we found that time-domain fNIRS can achieve 1-cm resolution, compared to 2-cm resolution for continuous-wave fNIRS. Lastly, we share designs for a prototype time-domain fNIRS device, consisting of a laser driver, a single photon detector, and a time-to-digital converter system. | [
"['Michel Adamic' 'Wellington Avelino' 'Anna Brandenberger' 'Bryan Chiang'\n 'Hunter Davis' 'Stephen Fay' 'Andrew Gregory' 'Aayush Gupta'\n 'Raphael Hotter' 'Grace Jiang' 'Fiona Leng' 'Stephen Polcyn'\n 'Thomas Ribeiro' 'Paul Scotti' 'Michelle Wang' 'Marley Xiong'\n 'Jonathan Xu']"
] |
null | null | 2406.07676 | null | null | http://arxiv.org/pdf/2406.07676v1 | 2024-06-11T19:50:50Z | 2024-06-11T19:50:50Z | FastAST: Accelerating Audio Spectrogram Transformer via Token Merging and Cross-Model Knowledge Distillation | Audio classification models, particularly the Audio Spectrogram Transformer (AST), play a crucial role in efficient audio analysis. However, optimizing their efficiency without compromising accuracy remains a challenge. In this paper, we introduce FastAST, a framework that integrates Token Merging (ToMe) into the AST framework. FastAST enhances inference speed without requiring extensive retraining by merging similar tokens in audio spectrograms. Furthermore, during training, FastAST brings about significant speed improvements. The experiments indicate that FastAST can increase audio classification throughput with minimal impact on accuracy. To mitigate the accuracy impact, we integrate Cross-Model Knowledge Distillation (CMKD) into the FastAST framework. Integrating ToMe and CMKD into AST results in improved accuracy compared to AST while maintaining faster inference speeds. FastAST represents a step towards real-time, resource-efficient audio analysis. | [
"['Swarup Ranjan Behera' 'Abhishek Dhiman' 'Karthik Gowda'\n 'Aalekhya Satya Narayani']"
] |
null | null | 2406.07687 | null | null | http://arxiv.org/pdf/2406.07687v1 | 2024-06-11T20:07:22Z | 2024-06-11T20:07:22Z | Adversarial Machine Unlearning | This paper focuses on the challenge of machine unlearning, aiming to remove the influence of specific training data on machine learning models. Traditionally, the development of unlearning algorithms runs parallel with that of membership inference attacks (MIA), a type of privacy threat to determine whether a data instance was used for training. However, the two strands are intimately connected: one can view machine unlearning through the lens of MIA success with respect to removed data. Recognizing this connection, we propose a game-theoretic framework that integrates MIAs into the design of unlearning algorithms. Specifically, we model the unlearning problem as a Stackelberg game in which an unlearner strives to unlearn specific training data from a model, while an auditor employs MIAs to detect the traces of the ostensibly removed data. Adopting this adversarial perspective allows the utilization of new attack advancements, facilitating the design of unlearning algorithms. Our framework stands out in two ways. First, it takes an adversarial approach and proactively incorporates the attacks into the design of unlearning algorithms. Second, it uses implicit differentiation to obtain the gradients that limit the attacker's success, thus benefiting the process of unlearning. We present empirical results to demonstrate the effectiveness of the proposed approach for machine unlearning. | [
"['Zonglin Di' 'Sixie Yu' 'Yevgeniy Vorobeychik' 'Yang Liu']"
] |
null | null | 2406.07688 | null | null | http://arxiv.org/pdf/2406.07688v1 | 2024-06-11T20:10:16Z | 2024-06-11T20:10:16Z | AI Radiologist: Revolutionizing Liver Tissue Segmentation with Convolutional Neural Networks and a Clinician-Friendly GUI | Artificial Intelligence (AI) is a pervasive research topic, permeating various sectors and applications. In this study, we harness the power of AI, specifically convolutional neural networks (ConvNets), for segmenting liver tissues. The study also focuses on developing a user-friendly graphical user interface (GUI) tool, "AI Radiologist", enabling clinicians to effectively delineate different liver tissues (parenchyma, tumors, and vessels), thereby saving lives. This endeavor bridges the gap between academic research and practical, industrial applications. The GUI is a single-page application and is designed using the PyQt5 Python framework. The offline-available AI Radiologist resorts to three ConvNet models trained to segment all liver tissues. With respect to the Dice metric, the best liver ConvNet scores 98.16%, the best tumor ConvNet scores 65.95%, and the best vessel ConvNet scores 51.94%. It outputs 2D slices of the liver, tumors, and vessels, along with 3D interpolations in .obj and .mtl formats, which can be visualized/printed using any 3D-compatible software. Thus, the AI Radiologist offers a convenient tool for clinicians to perform liver tissue segmentation and 3D interpolation employing state-of-the-art models for tissue segmentation. With the provided capacity to select the volumes and pre-trained models, the clinicians can leave the rest to the AI Radiologist. | [
"['Ayman Al-Kababji' 'Faycal Bensaali' 'Sarada Prasad Dakua'\n 'Yassine Himeur']"
] |
null | null | 2406.07693 | null | null | http://arxiv.org/pdf/2406.07693v2 | 2024-06-16T21:10:55Z | 2024-06-11T20:14:22Z | A Labelled Dataset for Sentiment Analysis of Videos on YouTube, TikTok, and Other Sources about the 2024 Outbreak of Measles | The work of this paper presents a dataset that contains the data of 4011 videos about the ongoing outbreak of measles published on 264 websites on the internet between January 1, 2024, and May 31, 2024. The dataset is available at https://dx.doi.org/10.21227/40s8-xf63. These websites primarily include YouTube and TikTok, which account for 48.6% and 15.2% of the videos, respectively. The remainder of the websites include Instagram and Facebook as well as the websites of various global and local news organizations. For each of these videos, the URL of the video, title of the post, description of the post, and the date of publication of the video are presented as separate attributes in the dataset. After developing this dataset, sentiment analysis (using VADER), subjectivity analysis (using TextBlob), and fine-grain sentiment analysis (using DistilRoBERTa-base) of the video titles and video descriptions were performed. This included classifying each video title and video description into (i) one of the sentiment classes i.e. positive, negative, or neutral, (ii) one of the subjectivity classes i.e. highly opinionated, neutral opinionated, or least opinionated, and (iii) one of the fine-grain sentiment classes i.e. fear, surprise, joy, sadness, anger, disgust, or neutral. These results are presented as separate attributes in the dataset for the training and testing of machine learning algorithms for performing sentiment analysis or subjectivity analysis in this field as well as for other applications. Finally, this paper also presents a list of open research questions that may be investigated using this dataset. | [
"['Nirmalya Thakur' 'Vanessa Su' 'Mingchen Shao' 'Kesha A. Patel'\n 'Hongseok Jeong' 'Victoria Knieling' 'Andrew Bian']"
] |
null | null | 2406.07694 | null | null | http://arxiv.org/pdf/2406.07694v1 | 2024-06-11T20:14:59Z | 2024-06-11T20:14:59Z | A PRISMA Driven Systematic Review of Publicly Available Datasets for Benchmark and Model Developments for Industrial Defect Detection | Recent advancements in quality control across various industries have increasingly utilized the integration of video cameras and image processing for effective defect detection. A critical barrier to progress is the scarcity of comprehensive datasets featuring annotated defects, which are essential for developing and refining automated defect detection models. This systematic review, spanning from 2015 to 2023, identifies 15 publicly available datasets and critically examines them to assess their effectiveness and applicability for benchmarking and model development. Our findings reveal a diverse landscape of datasets, such as NEU-CLS, NEU-DET, DAGM, KolektorSDD, PCB Defect Dataset, and the Hollow Cylindrical Defect Detection Dataset, each with unique strengths and limitations in terms of image quality, defect type representation, and real-world applicability. The goal of this systematic review is to consolidate these datasets in a single location, providing researchers who seek such publicly available resources with a comprehensive reference. | [
"['Can Akbas' 'Irem Su Arin' 'Sinan Onal']"
] |
null | null | 2406.07698 | null | null | http://arxiv.org/pdf/2406.07698v1 | 2024-06-11T20:26:26Z | 2024-06-11T20:26:26Z | Label Smoothing Improves Machine Unlearning | The objective of machine unlearning (MU) is to eliminate previously learned data from a model. However, it is challenging to strike a balance between computation cost and performance when using existing MU techniques. Taking inspiration from the influence of label smoothing on model confidence and differential privacy, we propose a simple gradient-based MU approach that uses an inverse process of label smoothing. This work introduces UGradSL, a simple, plug-and-play MU approach that uses smoothed labels. We provide theoretical analyses demonstrating why properly introducing label smoothing improves MU performance. We conducted extensive experiments on six datasets of various sizes and different modalities, demonstrating the effectiveness and robustness of our proposed method. The consistent improvement in MU performance is only at a marginal cost of additional computations. For instance, UGradSL improves over the gradient ascent MU baseline by 66% in unlearning accuracy without sacrificing unlearning efficiency. | [
"['Zonglin Di' 'Zhaowei Zhu' 'Jinghan Jia' 'Jiancheng Liu' 'Zafar Takhirov'\n 'Bo Jiang' 'Yuanshun Yao' 'Sijia Liu' 'Yang Liu']"
] |
null | null | 2406.07707 | null | null | http://arxiv.org/pdf/2406.07707v2 | 2024-06-13T04:51:11Z | 2024-06-11T20:38:41Z | A Deep Learning Approach to Detect Complete Safety Equipment For Construction Workers Based On YOLOv7 | In the construction sector, ensuring worker safety is of the utmost significance. In this study, a deep learning-based technique is presented for identifying safety gear worn by construction workers, such as helmets, goggles, jackets, gloves, and footwear. The recommended approach uses the YOLO v7 (You Only Look Once) object detection algorithm to precisely locate these safety items. The dataset utilized in this work consists of labeled images split into training, testing and validation sets. Each image has bounding box labels that indicate where the safety equipment is located within the image. The model is trained to identify and categorize the safety equipment based on the labeled dataset through an iterative training approach. We used a custom dataset to train this model. Our trained model performed admirably well, with good precision, recall, and F1-score for safety equipment recognition. Also, the model's evaluation produced encouraging results, with a mAP@0.5 score of 87.7%. The model performs effectively, making it possible to quickly identify safety equipment violations on building sites. A thorough evaluation of the outcomes reveals the model's advantages and points out potential areas for development. By offering an automatic and trustworthy method for safety equipment detection, this research makes a contribution to the fields of computer vision and workplace safety. The proposed deep learning-based approach will increase safety compliance and reduce the risk of accidents in the construction industry. | [
"['Md. Shariful Islam' 'SM Shaqib' 'Shahriar Sultan Ramit'\n 'Shahrun Akter Khushbu' 'Mr. Abdus Sattar'\n 'Dr. Sheak Rashed Haider Noori']"
] |
null | null | 2406.07709 | null | null | http://arxiv.org/pdf/2406.07709v1 | 2024-06-11T20:44:04Z | 2024-06-11T20:44:04Z | Diagnosing and fixing common problems in Bayesian optimization for molecule design | Bayesian optimization (BO) is a principled approach to molecular design tasks. In this paper we explain three pitfalls of BO which can cause poor empirical performance: an incorrect prior width, over-smoothing, and inadequate acquisition function maximization. We show that with these issues addressed, even a basic BO setup is able to achieve the highest overall performance on the PMO benchmark for molecule design (Gao et al., 2022). These results suggest that BO may benefit from more attention in the machine learning for molecules community. | [
"['Austin Tripp' 'José Miguel Hernández-Lobato']"
] |
null | null | 2406.07712 | null | null | http://arxiv.org/pdf/2406.07712v1 | 2024-06-11T20:46:32Z | 2024-06-11T20:46:32Z | Loss Gradient Gaussian Width based Generalization and Optimization Guarantees | Generalization and optimization guarantees on the population loss in machine learning often rely on uniform convergence based analysis, typically based on the Rademacher complexity of the predictors. The rich representation power of modern models has led to concerns about this approach. In this paper, we present generalization and optimization guarantees in terms of the complexity of the gradients, as measured by the Loss Gradient Gaussian Width (LGGW). First, we introduce generalization guarantees directly in terms of the LGGW under a flexible gradient domination condition, which we demonstrate to hold empirically for deep models. Second, we show that sample reuse in finite sum (stochastic) optimization does not make the empirical gradient deviate from the population gradient as long as the LGGW is small. Third, focusing on deep networks, we present results showing how to bound their LGGW under mild assumptions. In particular, we show that their LGGW can be bounded (a) by the $L_2$-norm of the loss Hessian eigenvalues, which has been empirically shown to be $\tilde{O}(1)$ for commonly used deep models; and (b) in terms of the Gaussian width of the featurizer, i.e., the output of the last-but-one layer. To our knowledge, our generalization and optimization guarantees in terms of LGGW are the first results of their kind, avoid the pitfalls of predictor Rademacher complexity based analysis, and hold considerable promise towards quantitatively tight bounds for deep models. | [
"['Arindam Banerjee' 'Qiaobo Li' 'Yingxue Zhou']"
] |
null | null | 2406.07726 | null | null | http://arxiv.org/pdf/2406.07726v1 | 2024-06-11T21:09:45Z | 2024-06-11T21:09:45Z | A Concise Mathematical Description of Active Inference in Discrete Time | In this paper we present a concise mathematical description of active inference in discrete time. The main part of the paper serves as a general introduction to the topic, including an example illustrating the theory on action selection. In the appendix the more subtle mathematical details are discussed. This part is aimed at readers who have already studied the active inference literature but struggle to make sense of the mathematical details and derivations. Throughout the whole manuscript, special attention has been paid to adopting notation that is both precise and in line with standard mathematical texts. All equations and derivations are linked to specific equation numbers in other popular texts on the topic. Furthermore, Python code is provided that implements the action selection mechanism described in this paper and is compatible with pymdp environments. | [
"['Jesse van Oostrum' 'Carlotta Langer' 'Nihat Ay']"
] |
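A rough numpy sketch of discrete-time active inference action selection as described in the preceding entry (2406.07726), using the common risk-plus-ambiguity decomposition of expected free energy; this follows textbook conventions and is not the paper's notation or its pymdp-compatible code.

```python
import numpy as np

def expected_free_energy(A, qs_pi, log_C):
    """One-step expected free energy under a policy.
    A: observation model P(o|s), shape (num_obs, num_states)
    qs_pi: predicted state distribution Q(s|pi), shape (num_states,)
    log_C: log preferences over observations, shape (num_obs,)
    """
    eps = 1e-16
    qo_pi = A @ qs_pi                                            # predicted Q(o|pi)
    risk = np.sum(qo_pi * (np.log(qo_pi + eps) - log_C))         # KL(Q(o|pi) || C)
    ambiguity = -np.sum(qs_pi * np.sum(A * np.log(A + eps), axis=0))  # E_Q(s)[H(P(o|s))]
    return risk + ambiguity

def select_action(G, gamma=16.0):
    """Sample a policy from a softmax over negative expected free energies."""
    g = -gamma * np.asarray(G)
    g -= g.max()                     # numerical stability
    p = np.exp(g)
    p /= p.sum()
    return np.random.choice(len(G), p=p)
```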
null | null | 2406.07727 | null | null | http://arxiv.org/pdf/2406.07727v1 | 2024-06-11T21:12:34Z | 2024-06-11T21:12:34Z | Efficient Parallel Multi-Hop Reasoning: A Scalable Approach for
Knowledge Graph Analysis | Multi-hop reasoning (MHR) is a process in artificial intelligence and natural language processing where a system needs to make multiple inferential steps to arrive at a conclusion or answer. In the context of knowledge graphs or databases, it involves traversing multiple linked entities and relationships to understand complex queries or perform tasks requiring a deeper understanding. Multi-hop reasoning is a critical function in various applications, including question answering, knowledge base completion, and link prediction. It has garnered significant interest in artificial intelligence, machine learning, and graph analytics. This paper focuses on optimizing MHR for time efficiency on large-scale graphs, diverging from the traditional emphasis on accuracy, which is an orthogonal goal. We introduce a novel parallel algorithm that harnesses domain-specific learned embeddings to efficiently identify the top K paths between vertices in a knowledge graph to find the best answers to a three-hop query. Our contributions are: (1) We present a new parallel algorithm to enhance MHR performance, scalability and efficiency. (2) We demonstrate the algorithm's superior performance on leading-edge Intel and AMD architectures through empirical results. We showcase the algorithm's practicality through a case study on identifying academic affiliations of potential Turing Award laureates in Deep Learning, highlighting its capability to handle intricate entity relationships. This demonstrates the potential of our approach to enable high-performance MHR, useful for navigating the growing complexity of modern knowledge graphs. | [
"['Jesmin Jahan Tithi' 'Fabio Checconi' 'Fabrizio Petrini']"
] |
null | null | 2406.07735 | null | null | http://arxiv.org/pdf/2406.07735v1 | 2024-06-11T21:44:49Z | 2024-06-11T21:44:49Z | REAL Sampling: Boosting Factuality and Diversity of Open-Ended
Generation via Asymptotic Entropy | Decoding methods for large language models (LLMs) usually struggle with the tradeoff between ensuring factuality and maintaining diversity. For example, a higher p threshold in the nucleus (top-p) sampling increases the diversity but decreases the factuality, and vice versa. In this paper, we propose REAL (Residual Entropy from Asymptotic Line) sampling, a decoding method that achieves improved factuality and diversity over nucleus sampling by predicting an adaptive threshold of $p$. Specifically, REAL sampling predicts the step-wise likelihood of an LLM to hallucinate, and lowers the p threshold when an LLM is likely to hallucinate. Otherwise, REAL sampling increases the p threshold to boost the diversity. To predict the step-wise hallucination likelihood without supervision, we construct a Token-level Hallucination Forecasting (THF) model to predict the asymptotic entropy (i.e., inherent uncertainty) of the next token by extrapolating the next-token entropies from a series of LLMs with different sizes. If an LLM's entropy is higher than the asymptotic entropy (i.e., the LLM is more uncertain than it should be), the THF model predicts a high hallucination hazard, which leads to a lower p threshold in REAL sampling. In the FactualityPrompts benchmark, we demonstrate that REAL sampling based on a 70M THF model can substantially improve the factuality and diversity of 7B LLMs simultaneously, judged by both retrieval-based metrics and human evaluation. When combined with contrastive decoding, REAL sampling outperforms 9 sampling methods, and generates texts that are more factual than greedy sampling and more diverse than nucleus sampling with $p=0.5$. Furthermore, the predicted asymptotic entropy is also a useful unsupervised signal for hallucination detection tasks. | [
"['Haw-Shiuan Chang' 'Nanyun Peng' 'Mohit Bansal' 'Anil Ramakrishna'\n 'Tagyoung Chung']"
] |
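A schematic of the entropy-adaptive nucleus rule described in the preceding entry (2406.07735); the THF model is replaced by a placeholder scalar, and the mapping from entropy gap to threshold is an assumption for illustration.

```python
import torch

def real_sampling_step(logits, asymptotic_entropy, base_p=0.9, scale=0.5):
    """Pick a next token with an entropy-adaptive nucleus threshold.
    logits: (vocab,) next-token logits from the LLM.
    asymptotic_entropy: scalar from a THF-style forecaster (placeholder here).
    """
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum()
    # If the model is more uncertain than it "should" be, shrink p; else grow it.
    residual = (entropy - asymptotic_entropy).item()
    p = float(min(0.999, max(0.1, base_p - scale * residual)))

    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = cumulative - sorted_probs < p          # keep tokens until mass p is covered
    kept = sorted_probs * keep
    kept = kept / kept.sum()
    return sorted_idx[torch.multinomial(kept, 1)].item()
```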
null | null | 2406.07737 | null | null | http://arxiv.org/pdf/2406.07737v1 | 2024-06-11T21:46:19Z | 2024-06-11T21:46:19Z | The Future of Software Engineering in an AI-Driven World | A paradigm shift is underway in Software Engineering, with AI systems such as LLMs gaining increasing importance for improving software development productivity. This trend is anticipated to persist. In the next five years, we will likely see an increasing symbiotic partnership between human developers and AI. The Software Engineering research community cannot afford to overlook this trend; we must address the key research challenges posed by the integration of AI into the software development process. In this paper, we present our vision of the future of software development in an AI-Driven world and explore the key challenges that our research community should address to realize this vision. | [
"['Valerio Terragni' 'Partha Roop' 'Kelly Blincoe']"
] |
null | null | 2406.07746 | null | null | http://arxiv.org/pdf/2406.07746v1 | 2024-06-11T22:04:59Z | 2024-06-11T22:04:59Z | Fully Adaptive Regret-Guaranteed Algorithm for Control of Linear
Quadratic Systems | The first algorithm for the Linear Quadratic (LQ) control problem with an unknown system model, featuring a regret of $\mathcal{O}(\sqrt{T})$, was introduced by Abbasi-Yadkori and Szepesvári (2011). Recognizing the computational complexity of this algorithm, subsequent efforts (see Cohen et al. (2019), Mania et al. (2019), Faradonbeh et al. (2020a), and Kargin et al. (2022)) have been dedicated to proposing algorithms that are computationally tractable while preserving this order of regret. Although successful, the existing works in the literature lack a fully adaptive exploration-exploitation trade-off adjustment and require a user-defined value, which can inflate the overall regret bound by additional factors. In this work, noticing this gap, we propose the first fully adaptive algorithm that controls the number of policy updates (i.e., tunes the exploration-exploitation trade-off) and optimizes the upper bound of regret adaptively. Our proposed algorithm builds on the SDP-based approach of Cohen et al. (2019) and relaxes its need for a horizon-dependent warm-up phase by appropriately tuning the regularization parameter and adding an adaptive input perturbation. We further show that through careful exploration-exploitation trade-off adjustment there is no need to commit to the widely-used notion of strong sequential stability, which is restrictive and can introduce complexities in initialization. | [
"['Jafar Abbaszadeh Chekan' 'Cedric Langbort']"
] |
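For orientation, the standard LQ setting behind the regret bounds discussed in the preceding entry (2406.07746), in textbook notation rather than the paper's:

```latex
x_{t+1} = A x_t + B u_t + w_t, \qquad
c_t = x_t^\top Q x_t + u_t^\top R u_t,
\qquad
\mathrm{Regret}(T) = \sum_{t=1}^{T} c_t - T J^{*},
\quad
J^{*} = \min_{\pi}\ \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\Big[\sum_{t=1}^{T} c_t\Big].
```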
null | null | 2406.07767 | null | null | http://arxiv.org/pdf/2406.07767v2 | 2024-07-10T18:34:05Z | 2024-06-11T23:16:46Z | Conformalized Teleoperation: Confidently Mapping Human Inputs to
High-Dimensional Robot Actions | Assistive robotic arms often have more degrees-of-freedom than a human teleoperator can control with a low-dimensional input, like a joystick. To overcome this challenge, existing approaches use data-driven methods to learn a mapping from low-dimensional human inputs to high-dimensional robot actions. However, determining if such a black-box mapping can confidently infer a user's intended high-dimensional action from low-dimensional inputs remains an open problem. Our key idea is to adapt the assistive map at training time to additionally estimate high-dimensional action quantiles, and then calibrate these quantiles via rigorous uncertainty quantification methods. Specifically, we leverage adaptive conformal prediction which adjusts the intervals over time, reducing the uncertainty bounds when the mapping is performant and increasing the bounds when the mapping consistently mis-predicts. Furthermore, we propose an uncertainty-interval-based mechanism for detecting high-uncertainty user inputs and robot states. We evaluate the efficacy of our proposed approach in a 2D assistive navigation task and two 7DOF Kinova Jaco tasks involving assistive cup grasping and goal reaching. Our findings demonstrate that conformalized assistive teleoperation manages to detect (but not differentiate between) high uncertainty induced by diverse preferences and induced by low-precision trajectories in the mapping's training dataset. On the whole, we see this work as a key step towards enabling robots to quantify their own uncertainty and proactively seek intervention when needed. | [
"['Michelle Zhao' 'Reid Simmons' 'Henny Admoni' 'Andrea Bajcsy']"
] |
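The adaptive conformal prediction update mentioned in the preceding entry (2406.07767) typically follows the online recalibration rule of Gibbs and Candès (2021); a minimal sketch, with interval construction elided and names assumed:

```python
def adaptive_conformal_update(alpha_t, miscovered, target_alpha=0.1, gamma=0.01):
    """One step of adaptive conformal inference.
    miscovered: 1 if the latest true action fell outside the predicted interval,
    else 0. Misses decrease alpha_t (widening later intervals); consistent
    coverage increases it (tightening them).
    """
    return alpha_t + gamma * (target_alpha - miscovered)

# Usage over a teleoperation stream (schematic; quantile_model is a placeholder):
# alpha = 0.1
# for step in stream:
#     interval = quantile_model(step.low_dim_input, alpha)
#     alpha = adaptive_conformal_update(alpha, int(step.true_action not in interval))
```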
null | null | 2406.07769 | null | null | http://arxiv.org/abs/2406.07769v2 | 2024-06-13T17:21:26Z | 2024-06-11T23:23:54Z | Personalized Product Assortment with Real-time 3D Perception and
Bayesian Payoff Estimation | Product assortment selection is a critical challenge facing physical retailers. Effectively aligning inventory with the preferences of shoppers can increase sales and decrease out-of-stocks. However, in real-world settings the problem is challenging due to the combinatorial explosion of product assortment possibilities. Consumer preferences are typically heterogeneous across space and time, making inventory-preference alignment challenging. Additionally, existing strategies rely on syndicated data, which tends to be aggregated, low-resolution, and subject to high latency. To solve these challenges, we introduce a real-time recommendation system, which we call EdgeRec3D. Our system utilizes recent advances in 3D computer vision for perception and automatic, fine-grained sales estimation. These perceptual components run on the edge of the network and facilitate real-time reward signals. Additionally, we develop a Bayesian payoff model to account for noisy estimates from 3D LIDAR data. We rely on spatial clustering to allow the system to adapt to heterogeneous consumer preferences, and a graph-based candidate generation algorithm to address the combinatorial search problem. We test our system in real-world stores across two 6-8 week A/B tests with beverage products and demonstrate 35% and 27% increases in sales, respectively. Finally, we monitor the deployed system for a period of 28 weeks with an observational study and show a 9.4% increase in sales. | [
"['Porter Jenkins' 'Michael Selander' 'J. Stockton Jenkins'\n 'Andrew Merrill' 'Kyle Armstrong']"
] |
null | null | 2406.07770 | null | null | http://arxiv.org/pdf/2406.07770v1 | 2024-06-11T23:29:48Z | 2024-06-11T23:29:48Z | DualBind: A Dual-Loss Framework for Protein-Ligand Binding Affinity
Prediction | Accurate prediction of protein-ligand binding affinities is crucial for drug development. Recent advances in machine learning show promising results on this task. However, these methods typically rely heavily on labeled data, which can be scarce or unreliable, or they rely on assumptions like Boltzmann-distributed data that may not hold true in practice. Here, we present DualBind, a novel framework that integrates supervised mean squared error (MSE) with unsupervised denoising score matching (DSM) to accurately learn the binding energy function. DualBind not only addresses the limitations of DSM-only models by providing more accurate absolute affinity predictions but also improves generalizability and reduces reliance on labeled data compared to MSE-only models. Our experimental results demonstrate that DualBind excels in predicting binding affinities and can effectively utilize both labeled and unlabeled data to enhance performance. | [
"['Meng Liu' 'Saee Gopal Paliwal']"
] |
null | null | 2406.07775 | null | null | http://arxiv.org/pdf/2406.07775v1 | 2024-06-11T23:51:06Z | 2024-06-11T23:51:06Z | Self-attention-based non-linear basis transformations for compact latent
space modelling of dynamic optical fibre transmission matrices | Multimode optical fibres are hair-thin strands of glass that efficiently transport light. They promise next-generation medical endoscopes that provide unprecedented sub-cellular image resolution deep inside the body. However, confining light to such fibres means that images are inherently scrambled in transit. Conventionally, this scrambling has been compensated by pre-calibrating how a specific fibre scrambles light and solving a stationary linear matrix equation that represents a physical model of the fibre. However, as the technology develops towards real-world deployment, the unscrambling process must account for dynamic changes in the matrix representing the fibre's effect on light, due to factors such as movement and temperature shifts, and non-linearities resulting from the inaccessibility of the fibre tip when inside the body. Such complex, dynamic and nonlinear behaviour is well-suited to approximation by neural networks, but most leading image reconstruction networks rely on convolutional layers, which assume strong correlations between adjacent pixels, a strong inductive bias that is inappropriate for fibre matrices, which may be expressed in a range of arbitrary coordinate representations with long-range correlations. We introduce a new concept that uses self-attention layers to dynamically transform the coordinate representations of varying fibre matrices to a basis that admits compact, low-dimensional representations suitable for further processing. We demonstrate the effectiveness of this approach on diverse fibre matrix datasets. We show that our models significantly improve the sparsity of fibre matrices in their transformed bases, achieving a participation ratio p (a measure of sparsity) between 0.01 and 0.11. Further, we show that these transformed representations admit reconstruction of the original matrices with < 10% reconstruction error, demonstrating the invertibility of the transformation. | [
"['Yijie Zheng' 'Robert J. Kilpatrick' 'David B. Phillips'\n 'George S. D. Gordon']"
] |
null | null | 2406.07777 | null | null | http://arxiv.org/pdf/2406.07777v1 | 2024-06-11T23:54:42Z | 2024-06-11T23:54:42Z | Unifying Interpretability and Explainability for Alzheimer's Disease
Progression Prediction | Reinforcement learning (RL) has recently shown promise in predicting Alzheimer's disease (AD) progression due to its unique ability to model domain knowledge. However, it is not clear which RL algorithms are well-suited for this task. Furthermore, these methods are not inherently explainable, limiting their applicability in real-world clinical scenarios. Our work addresses these two important questions. Using a causal, interpretable model of AD, we first compare the performance of four contemporary RL algorithms in predicting brain cognition over 10 years using only baseline (year 0) data. We then apply SHAP (SHapley Additive exPlanations) to explain the decisions made by each algorithm in the model. Our approach combines interpretability with explainability to provide insights into the key factors influencing AD progression, offering both global and individual, patient-level analysis. Our findings show that only one of the RL methods is able to satisfactorily model disease progression, but the post-hoc explanations indicate that all methods fail to properly capture the importance of amyloid accumulation, one of the pathological hallmarks of Alzheimer's disease. Our work aims to merge predictive accuracy with transparency, assisting clinicians and researchers in enhancing disease progression modeling for informed healthcare decisions. Code is available at https://github.com/rfali/xrlad. | [
"['Raja Farrukh Ali' 'Stephanie Milani' 'John Woods' 'Emmanuel Adenij'\n 'Ayesha Farooq' 'Clayton Mansel' 'Jeffrey Burns' 'William Hsu']"
] |
null | null | 2406.07778 | null | null | http://arxiv.org/pdf/2406.07778v1 | 2024-06-12T00:01:32Z | 2024-06-12T00:01:32Z | On Trojans in Refined Language Models | A Trojan in a language model can be inserted when the model is refined for a particular application such as determining the sentiment of product reviews. In this paper, we clarify and empirically explore variations of the data-poisoning threat model. We then empirically assess two simple defenses each for a different defense scenario. Finally, we provide a brief survey of related attacks and defenses. | [
"['Jayaram Raghuram' 'George Kesidis' 'David J. Miller']"
] |
null | null | 2406.07780 | null | null | http://arxiv.org/pdf/2406.07780v1 | 2024-06-12T00:19:40Z | 2024-06-12T00:19:40Z | A Critical Look At Tokenwise Reward-Guided Text Generation | Large language models (LLMs) can be significantly improved by aligning them to human preferences -- the so-called reinforcement learning from human feedback (RLHF). However, the cost of fine-tuning an LLM is prohibitive for many users. Due to their ability to bypass LLM finetuning, tokenwise reward-guided text generation (RGTG) methods have recently been proposed. They use a reward model trained on full sequences to score partial sequences during a tokenwise decoding, in a bid to steer the generation towards sequences with high rewards. However, these methods have so far been only heuristically motivated and poorly analyzed. In this work, we show that reward models trained on full sequences are not compatible with scoring partial sequences. To alleviate this issue, we propose to explicitly train a Bradley-Terry reward model on partial sequences, and autoregressively sample from the implied tokenwise policy during decoding time. We study the properties of this reward model and the implied policy. In particular, we show that this policy is proportional to the ratio of two distinct RLHF policies. We show that our simple approach outperforms previous RGTG methods and achieves similar performance as strong offline baselines but without large-scale LLM finetuning. | [
"['Ahmad Rashid' 'Ruotian Wu' 'Julia Grosse' 'Agustinus Kristiadi'\n 'Pascal Poupart']"
] |
null | null | 2406.07785 | null | null | http://arxiv.org/pdf/2406.07785v1 | 2024-06-12T00:41:25Z | 2024-06-12T00:41:25Z | From Variance to Veracity: Unbundling and Mitigating Gradient Variance
in Differentiable Bundle Adjustment Layers | Various pose estimation and tracking problems in robotics can be decomposed into a correspondence estimation problem (often computed using a deep network) followed by a weighted least squares optimization problem to solve for the poses. Recent work has shown that coupling the two problems by iteratively refining one conditioned on the other's output yields SOTA results across domains. However, training these models has proved challenging, requiring a litany of tricks to stabilize and speed up training. In this work, we take the visual odometry problem as an example and identify three plausible causes: (1) flow loss interference, (2) linearization errors in the bundle adjustment (BA) layer, and (3) dependence of weight gradients on the BA residual. We show how these issues result in noisy and higher variance gradients, potentially leading to a slowdown in training and instabilities. We then propose a simple, yet effective solution to reduce the gradient variance by using the weights predicted by the network in the inner optimization loop to weight the correspondence objective in the training problem. This helps the training objective 'focus' on the more important points, thereby reducing the variance and mitigating the influence of outliers. We show that the resulting method leads to faster training and can be more flexibly trained in varying training setups without sacrificing performance. In particular we show $2$--$2.5\times$ training speedups over a baseline visual odometry model we modify. | [
"['Swaminathan Gurumurthy' 'Karnik Ram' 'Bingqing Chen'\n 'Zachary Manchester' 'Zico Kolter']"
] |
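A schematic of the variance-reduction fix described in the preceding entry (2406.07785): reuse the confidence weights predicted for the BA layer to weight the correspondence (flow) loss. Tensor shapes, names, and the detach are assumptions for illustration, not the authors' exact implementation.

```python
import torch

def weighted_flow_loss(flow_pred, flow_gt, weights):
    """Weight the per-correspondence flow error by the confidence weights the
    network already predicts for the bundle-adjustment layer, so training
    focuses on the points BA actually relies on.
    flow_pred, flow_gt: (N, 2) predicted and ground-truth flow vectors.
    weights: (N,) confidence weights; detached here (our assumption) so this
    loss cannot be trivially minimized by driving all weights to zero.
    """
    w = weights.detach()
    per_point = ((flow_pred - flow_gt) ** 2).sum(dim=-1)
    return (w * per_point).sum() / (w.sum() + 1e-8)
```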
null | null | 2406.07800 | null | null | http://arxiv.org/pdf/2406.07800v1 | 2024-06-12T01:32:24Z | 2024-06-12T01:32:24Z | Regularizing and Aggregating Clients with Class Distribution for
Personalized Federated Learning | Personalized federated learning (PFL) enables customized models for clients with varying data distributions. However, existing PFL methods often incur high computational and communication costs, limiting their practical application. This paper proposes a novel PFL method, Class-wise Federated Averaging (cwFedAVG), that performs Federated Averaging (FedAVG) class-wise, creating multiple global models, one per class, on the server. Each local model integrates these global models weighted by its estimated local class distribution, derived from the L2-norms of deep network weights, avoiding privacy violations. Each global model is then updated from the local models in the same class-weighted manner. We also design a novel Weight Distribution Regularizer (WDR) to further enhance the accuracy of estimating a local class distribution by minimizing the Euclidean distance between the class distribution and the weight norms' distribution. Experimental results demonstrate that cwFedAVG matches or outperforms several existing PFL methods. Notably, cwFedAVG is conceptually simple yet computationally efficient as it mitigates the need for extensive calculation to collaborate between clients by leveraging shared global models. Visualizations provide insights into how cwFedAVG enables local model specialization on respective class distributions while global models capture class-relevant information across clients. | [
"['Gyuejeong Lee' 'Daeyoung Choi']"
] |
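A rough sketch of the class-wise averaging idea from the preceding entry (2406.07800); the row-norm class-distribution estimate follows the abstract, while the choice of layer, normalization, and aggregation details are assumptions.

```python
import numpy as np

def estimate_class_distribution(classifier_weights):
    """Estimate a client's local class distribution from the L2 norms of the
    rows of its final classification layer (one row per class), as the
    abstract describes; plain normalization here is illustrative.
    classifier_weights: (num_classes, feature_dim)
    """
    norms = np.linalg.norm(classifier_weights, axis=1)
    return norms / norms.sum()

def classwise_fedavg(client_models, client_dists):
    """Build one global model per class as a distribution-weighted average of
    client models. client_models: list of flat parameter vectors;
    client_dists: list of (num_classes,) estimated class distributions."""
    num_classes = len(client_dists[0])
    globals_per_class = []
    for c in range(num_classes):
        w = np.array([d[c] for d in client_dists])
        w = w / w.sum()
        globals_per_class.append(sum(wi * m for wi, m in zip(w, client_models)))
    return globals_per_class
```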
null | null | 2406.07811 | null | null | http://arxiv.org/pdf/2406.07811v1 | 2024-06-12T02:06:24Z | 2024-06-12T02:06:24Z | Evolutionary Computation and Explainable AI: A Roadmap to Transparent
Intelligent Systems | AI methods are finding an increasing number of applications, but their often black-box nature has raised concerns about accountability and trust. The field of explainable artificial intelligence (XAI) has emerged in response to the need for human understanding of AI models. Evolutionary computation (EC), as a family of powerful optimization and learning tools, has significant potential to contribute to XAI. In this paper, we provide an introduction to XAI and review various techniques in current use for explaining machine learning (ML) models. We then focus on how EC can be used in XAI, and review some XAI approaches which incorporate EC techniques. Additionally, we discuss the application of XAI principles within EC itself, examining how these principles can shed some light on the behavior and outcomes of EC algorithms in general, on the (automatic) configuration of these algorithms, and on the underlying problem landscapes that these algorithms optimize. Finally, we discuss some open challenges in XAI and opportunities for future research in this field using EC. Our aim is to demonstrate that EC is well-suited for addressing current problems in explainability and to encourage further exploration of these methods to contribute to the development of more transparent and trustworthy ML models and EC algorithms. | [
"['Ryan Zhou' 'Jaume Bacardit' 'Alexander Brownlee' 'Stefano Cagnoni'\n 'Martin Fyvie' 'Giovanni Iacca' 'John McCall' 'Niki van Stein'\n 'David Walker' 'Ting Hu']"
] |
null | null | 2406.07812 | null | null | http://arxiv.org/pdf/2406.07812v1 | 2024-06-12T02:08:45Z | 2024-06-12T02:08:45Z | To be Continuous, or to be Discrete, Those are Bits of Questions | Recently, binary representation has been proposed as a novel representation that lies between continuous and discrete representations. It exhibits considerable information-preserving capability when being used to replace continuous input vectors. In this paper, we investigate the feasibility of further introducing it to the output side, aiming to allow models to output binary labels instead. To preserve the structural information on the output side along with label information, we extend the previous contrastive hashing method as structured contrastive hashing. More specifically, we upgrade CKY from label-level to bit-level, define a new similarity function with span marginal probabilities, and introduce a novel contrastive loss function with a carefully designed instance selection strategy. Our model achieves competitive performance on various structured prediction tasks, and demonstrates that binary representation can be considered a novel representation that further bridges the gap between the continuous nature of deep learning and the discrete intrinsic property of natural languages. | [
"['Yiran Wang' 'Masao Utiyama']"
] |
null | null | 2406.07820 | null | null | http://arxiv.org/pdf/2406.07820v1 | 2024-06-12T02:39:46Z | 2024-06-12T02:39:46Z | Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial
Analysis | Explainable AI (XAI) has revolutionized the field of deep learning by empowering users to have more trust in neural network models. The field of XAI allows users to probe the inner workings of these algorithms to elucidate their decision-making processes. The rise in popularity of XAI has led to the advent of different strategies to produce explanations, all of which only occasionally agree. Thus several objective evaluation metrics have been devised to decide which of these modules give the best explanation for specific scenarios. The goal of the paper is twofold: (i) we employ the notions of necessity and sufficiency from causal literature to come up with a novel explanatory technique called SHifted Adversaries using Pixel Elimination (SHAPE) which satisfies all the theoretical and mathematical criteria of being a valid explanation, (ii) we show that SHAPE is, in fact, an adversarial explanation that fools causal metrics that are employed to measure the robustness and reliability of popular importance based visual XAI methods. Our analysis shows that SHAPE outperforms popular explanatory techniques like GradCAM and GradCAM++ in these tests and is comparable to RISE, raising questions about the sanity of these metrics and the need for human involvement for an overall better evaluation. | [
"['Prithwijit Chowdhury' 'Mohit Prabhushankar' 'Ghassan AlRegib'\n 'Mohamed Deriche']"
] |
null | null | 2406.07826 | null | null | http://arxiv.org/pdf/2406.07826v1 | 2024-06-12T02:47:54Z | 2024-06-12T02:47:54Z | The Max-Min Formulation of Multi-Objective Reinforcement Learning: From
Theory to a Model-Free Algorithm | In this paper, we consider multi-objective reinforcement learning, which arises in many real-world problems with multiple optimization goals. We approach the problem with a max-min framework focusing on fairness among the multiple goals and develop a relevant theory and a practical model-free algorithm under the max-min framework. The developed theory provides a theoretical advance in multi-objective reinforcement learning, and the proposed algorithm demonstrates a notable performance improvement over existing baseline methods. | [
"['Giseung Park' 'Woohyeon Byeon' 'Seongmin Kim' 'Elad Havakuk'\n 'Amir Leshem' 'Youngchul Sung']"
] |
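The max-min formulation referenced in the preceding entry (2406.07826) takes the standard form below, with one discounted return per objective; the notation is illustrative, not necessarily the paper's.

```latex
\max_{\pi} \; \min_{1 \le i \le K} \; J_i(\pi),
\qquad
J_i(\pi) = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^t\, r_i(s_t, a_t)\Big].
```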
null | null | 2406.07831 | null | null | http://arxiv.org/pdf/2406.07831v1 | 2024-06-12T02:57:41Z | 2024-06-12T02:57:41Z | ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large
Language Models | The impressive performance of Large Language Models (LLMs) across various natural language processing tasks comes at the cost of vast computational resources and storage requirements. One-shot pruning techniques offer a way to alleviate these burdens by removing redundant weights without the need for retraining. Yet, the massive scale of LLMs often forces current pruning approaches to rely on heuristics instead of optimization-based techniques, potentially resulting in suboptimal compression. In this paper, we introduce ALPS, an optimization-based framework that tackles the pruning problem using the operator splitting technique and a preconditioned conjugate gradient-based post-processing step. Our approach incorporates novel techniques to accelerate and theoretically guarantee convergence while leveraging vectorization and GPU parallelism for efficiency. ALPS substantially outperforms state-of-the-art methods in terms of the pruning objective and perplexity reduction, particularly for highly sparse models. On the OPT-30B model with 70% sparsity, ALPS achieves a 13% reduction in test perplexity on the WikiText dataset and a 19% improvement in zero-shot benchmark performance compared to existing methods. | [
"['Xiang Meng' 'Kayhan Behdin' 'Haoyue Wang' 'Rahul Mazumder']"
] |
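The operator-splitting view taken in the preceding entry (2406.07831) can be illustrated with a toy ADMM-style scheme for layer-wise pruning that minimizes ||XW - XW_hat||^2 subject to a sparsity budget, alternating a least-squares step with a hard-threshold projection. This is a generic sketch, not ALPS itself (which adds preconditioned conjugate-gradient post-processing and convergence guarantees):

```python
import numpy as np

def prune_layer(X, W, sparsity=0.7, iters=50, rho=1.0):
    """Toy ADMM-style splitting for one-shot layer pruning.
    X: (n, d) calibration activations; W: (d, k) dense weights.
    Returns W_hat with roughly the given fraction of entries set to zero."""
    Z = W.copy()                   # sparse variable
    U = np.zeros_like(W)           # scaled dual variable
    G = X.T @ X + rho * np.eye(W.shape[0])
    XtXW = X.T @ (X @ W)
    for _ in range(iters):
        W_hat = np.linalg.solve(G, XtXW + rho * (Z - U))   # least-squares prox
        V = W_hat + U
        thresh = np.quantile(np.abs(V), sparsity)          # keep the largest entries
        Z = np.where(np.abs(V) >= thresh, V, 0.0)          # projection onto sparsity set
        U = U + W_hat - Z
    return Z
```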
null | null | 2406.07857 | null | null | http://arxiv.org/pdf/2406.07857v2 | 2024-06-16T01:46:06Z | 2024-06-12T04:14:24Z | Toward Enhanced Reinforcement Learning-Based Resource Management via
Digital Twin: Opportunities, Applications, and Challenges | This article presents a digital twin (DT)-enhanced reinforcement learning (RL) framework aimed at optimizing performance and reliability in network resource management, since traditional RL methods face several common challenges when applied to physical networks, including limited exploration efficiency, slow convergence, poor long-term performance, and safety concerns during the exploration phase. To deal with the above challenges, a comprehensive DT-based framework is proposed to enhance the convergence speed and performance of unified RL-based resource management. The proposed framework provides safe action exploration, more accurate estimates of long-term returns, faster training convergence, higher convergence performance, and real-time adaptation to varying network conditions. Then, two case studies on ultra-reliable and low-latency communication (URLLC) services and multiple unmanned aerial vehicle (UAV) networks are presented, demonstrating improvements of the proposed framework in performance, convergence speed, and training cost reduction for both traditional RL and neural-network-based deep RL (DRL). Finally, the article identifies and explores some of the research challenges and open issues in this rapidly evolving field. | [
"['Nan Cheng' 'Xiucheng Wang' 'Zan Li' 'Zhisheng Yin' 'Tom Luan'\n 'Xuemin Shen']"
] |
null | null | 2406.07860 | null | null | http://arxiv.org/pdf/2406.07860v1 | 2024-06-12T04:22:27Z | 2024-06-12T04:22:27Z | BookSQL: A Large Scale Text-to-SQL Dataset for Accounting Domain | Several large-scale datasets (e.g., WikiSQL, Spider) for developing natural language interfaces to databases have recently been proposed. These datasets cover a wide breadth of domains but fall short on some essential domains, such as finance and accounting. Given that accounting databases are used worldwide, particularly by non-technical people, there is a pressing need to develop models that could help extract information from accounting databases via natural language queries. In this resource paper, we aim to fill this gap by proposing a new large-scale Text-to-SQL dataset for the accounting and financial domain: BookSQL. The dataset consists of 100k natural language query-SQL pairs, and accounting databases of 1 million records. We experiment with and analyze existing state-of-the-art models (including GPT-4) for the Text-to-SQL task on BookSQL. We find significant performance gaps, thus pointing towards developing more focused models for this domain. | [
"['Rahul Kumar' 'Amar Raja Dibbu' 'Shrutendra Harsola'\n 'Vignesh Subrahmaniam' 'Ashutosh Modi']"
] |
null | null | 2406.07862 | null | null | http://arxiv.org/pdf/2406.07862v1 | 2024-06-12T04:30:40Z | 2024-06-12T04:30:40Z | Self-Distillation Learning Based on Temporal-Spatial Consistency for
Spiking Neural Networks | Spiking neural networks (SNNs) have attracted considerable attention for their event-driven, low-power characteristics and high biological interpretability. Inspired by knowledge distillation (KD), recent research has improved the performance of the SNN model with a pre-trained teacher model. However, additional teacher models require significant computational resources, and it is tedious to manually define the appropriate teacher network architecture. In this paper, we explore cost-effective self-distillation learning of SNNs to circumvent these concerns. Without an explicitly defined teacher, the SNN generates pseudo-labels and learns consistency during training. On the one hand, we extend the timestep of the SNN during training to create an implicit temporal "teacher" that guides the learning of the original "student", i.e., the temporal self-distillation. On the other hand, we guide the output of the weak classifier at the intermediate stage by the final output of the SNN, i.e., the spatial self-distillation. Our temporal-spatial self-distillation (TSSD) learning method does not introduce any inference overhead and has excellent generalization ability. Extensive experiments on the static image datasets CIFAR10/100 and ImageNet as well as the neuromorphic datasets CIFAR10-DVS and DVS-Gesture validate the superior performance of the TSSD method. This paper presents a novel manner of fusing SNNs with KD, providing insights into high-performance SNN learning methods. | [
"['Lin Zuo' 'Yongqi Ding' 'Mengmeng Jing' 'Kunshan Yang' 'Yunqian Yu']"
] |
null | null | 2406.07865 | null | null | http://arxiv.org/pdf/2406.07865v1 | 2024-06-12T04:45:33Z | 2024-06-12T04:45:33Z | FaithFill: Faithful Inpainting for Object Completion Using a Single
Reference Image | We present FaithFill, a diffusion-based inpainting object completion approach for realistic generation of missing object parts. Typically, multiple reference images are needed to achieve such realistic generation, otherwise the generation would not faithfully preserve shape, texture, color, and background. In this work, we propose a pipeline that utilizes only a single input reference image, having varying lighting, background, object pose, and/or viewpoint. The singular reference image is used to generate multiple views of the object to be inpainted. We demonstrate that FaithFill produces faithful generation of the object's missing parts, together with background/scene preservation, from a single reference image. This is demonstrated through standard similarity metrics, human judgement, and GPT evaluation. Our results are presented on the DreamBooth dataset, and a newly proposed dataset. | [
"['Rupayan Mallick' 'Amr Abdalla' 'Sarah Adel Bargal']"
] |
null | null | 2406.07866 | null | null | http://arxiv.org/pdf/2406.07866v1 | 2024-06-12T04:46:23Z | 2024-06-12T04:46:23Z | Asymptotically Optimal Regret for Black-Box Predict-then-Optimize | We consider the predict-then-optimize paradigm for decision-making in which a practitioner (1) trains a supervised learning model on historical data of decisions, contexts, and rewards, and then (2) uses the resulting model to make future binary decisions for new contexts by finding the decision that maximizes the model's predicted reward. This approach is common in industry. Past analysis assumes that rewards are observed for all actions for all historical contexts, which is possible only in problems with special structure. Motivated by problems from ads targeting and recommender systems, we study new black-box predict-then-optimize problems that lack this special structure and where we only observe the reward from the action taken. We present a novel loss function, which we call Empirical Soft Regret (ESR), designed to significantly improve reward when used in training compared to classical accuracy-based metrics like mean-squared error. This loss function targets the regret achieved when taking a suboptimal decision; because the regret is generally not differentiable, we propose a differentiable "soft" regret term that allows the use of neural networks and other flexible machine learning models dependent on gradient-based training. In the particular case of paired data, we show theoretically that optimizing our loss function yields asymptotically optimal regret within the class of supervised learning models. We also show our approach significantly outperforms state-of-the-art algorithms on real-world decision-making problems in news recommendation and personalized healthcare compared to benchmark methods from contextual bandits and conditional average treatment effect estimation. | [
"['Samuel Tan' 'Peter I. Frazier']"
] |
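A schematic of a differentiable "soft regret" objective for the paired-data binary-decision case in the preceding entry (2406.07866); the softmax relaxation and temperature are assumptions, and the paper's exact functional form may differ.

```python
import torch

def empirical_soft_regret(pred_r0, pred_r1, true_r0, true_r1, tau=1.0):
    """Soft version of the regret from picking the action with the higher
    predicted reward. pred_r*/true_r*: (batch,) predicted and observed rewards
    for actions 0 and 1 in the same context (paired data).
    Hard regret uses an argmax; a softmax over predictions keeps gradients."""
    logits = torch.stack([pred_r0, pred_r1], dim=-1) / tau
    p = torch.softmax(logits, dim=-1)              # soft decision probabilities
    expected_reward = p[:, 0] * true_r0 + p[:, 1] * true_r1
    best_reward = torch.maximum(true_r0, true_r1)
    return (best_reward - expected_reward).mean()
```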
null | null | 2406.07875 | null | null | http://arxiv.org/pdf/2406.07875v2 | 2024-06-13T10:29:16Z | 2024-06-12T05:08:51Z | Carbon Market Simulation with Adaptive Mechanism Design | A carbon market is a market-based tool that incentivizes economic agents to align individual profits with the global utility, i.e., reducing carbon emissions to tackle climate change. Cap and trade stands as a critical principle based on allocating and trading carbon allowances (carbon emission credit), enabling economic agents to follow planned emissions and penalizing excess emissions. A central authority is responsible for introducing and allocating those allowances in cap and trade. However, the complexity of carbon market dynamics makes accurate simulation intractable, which in turn hinders the design of effective allocation strategies. To address this, we propose an adaptive mechanism design framework, simulating the market using hierarchical, model-free multi-agent reinforcement learning (MARL). Government agents allocate carbon credits, while enterprises engage in economic activities and carbon trading. This framework illustrates agents' behavior comprehensively. Numerical results show MARL enables government agents to balance productivity, equality, and carbon emissions. Our project is available at https://github.com/xwanghan/Carbon-Simulator. | [
"['Han Wang' 'Wenhao Li' 'Hongyuan Zha' 'Baoxiang Wang']"
] |
null | null | 2406.07876 | null | null | http://arxiv.org/pdf/2406.07876v1 | 2024-06-12T05:09:41Z | 2024-06-12T05:09:41Z | Small Scale Data-Free Knowledge Distillation | Data-free knowledge distillation is able to utilize the knowledge learned by a large teacher network to augment the training of a smaller student network without accessing the original training data, avoiding privacy, security, and proprietary risks in real applications. In this line of research, existing methods typically follow an inversion-and-distillation paradigm in which a generative adversarial network trained on the fly under the guidance of the pre-trained teacher network is used to synthesize a large-scale sample set for knowledge distillation. In this paper, we reexamine this common data-free knowledge distillation paradigm, showing that there is considerable room to improve the overall training efficiency through a lens of "small-scale inverted data for knowledge distillation". In light of three empirical observations indicating the importance of how to balance class distributions in terms of synthetic sample diversity and difficulty during both data inversion and distillation processes, we propose Small Scale Data-free Knowledge Distillation (SSD-KD). In formulation, SSD-KD introduces a modulating function to balance synthetic samples and a priority sampling function to select proper samples, facilitated by a dynamic replay buffer and a reinforcement learning strategy. As a result, SSD-KD can perform distillation training conditioned on an extremely small scale of synthetic samples (e.g., 10X less than the original training data scale), making the overall training efficiency one or two orders of magnitude faster than many mainstream methods while retaining superior or competitive model performance, as demonstrated on popular image classification and semantic segmentation benchmarks. The code is available at https://github.com/OSVAI/SSD-KD. | [
"['He Liu' 'Yikai Wang' 'Huaping Liu' 'Fuchun Sun' 'Anbang Yao']"
] |
null | null | 2406.07877 | null | null | http://arxiv.org/pdf/2406.07877v1 | 2024-06-12T05:12:10Z | 2024-06-12T05:12:10Z | Hierarchical Reinforcement Learning for Swarm Confrontation with High
Uncertainty | In swarm robotics, confrontation, including the pursuit-evasion game, is a key scenario. High uncertainty caused by unknown opponents' strategies and dynamic obstacles complicates the action space into a hybrid decision process. Although deep reinforcement learning is well suited to swarm confrontation since it can handle various swarm sizes, as an end-to-end implementation it cannot deal with this hybrid process. Here, we propose a novel hierarchical reinforcement learning approach consisting of a target allocation layer, a path planning layer, and the underlying dynamic interaction mechanism between the two layers, which indicates the quantified uncertainty. It decouples the hybrid process into discrete allocation and continuous planning layers, with a probabilistic ensemble model to quantify the uncertainty and regulate the interaction frequency adaptively. Furthermore, to overcome the unstable training process introduced by the two layers, we design an integration training method including pre-training and cross-training, which enhances the training efficiency and stability. Experiment results in both comparison and ablation studies validate the effectiveness and generalization performance of our proposed approach. | [
"['Qizhen Wu' 'Kexin Liu' 'Lei Chen' 'Jinhu Lü']"
] |
null | null | 2406.07879 | null | null | http://arxiv.org/pdf/2406.07879v1 | 2024-06-12T05:16:26Z | 2024-06-12T05:16:26Z | KernelWarehouse: Rethinking the Design of Dynamic Convolution | Dynamic convolution learns a linear mixture of n static kernels weighted with their input-dependent attentions, demonstrating performance superior to normal convolution. However, it increases the number of convolutional parameters by n times, and thus is not parameter efficient. As a result, no prior work has explored the setting n>100 (an order of magnitude larger than the typical setting n<10) for pushing forward the performance boundary of dynamic convolution while enjoying parameter efficiency. To fill this gap, in this paper, we propose KernelWarehouse, a more general form of dynamic convolution, which redefines the basic concepts of "kernels", "assembling kernels" and "attention function" through the lens of exploiting convolutional parameter dependencies within the same layer and across neighboring layers of a ConvNet. We validate the effectiveness of KernelWarehouse on ImageNet and MS-COCO datasets using various ConvNet architectures. Intriguingly, KernelWarehouse is also applicable to Vision Transformers, and it can even reduce the model size of a backbone while improving the model accuracy. For instance, KernelWarehouse (n=4) achieves 5.61%|3.90%|4.38% absolute top-1 accuracy gain on the ResNet18|MobileNetV2|DeiT-Tiny backbone, and KernelWarehouse (n=1/4) with 65.10% model size reduction still achieves 2.29% gain on the ResNet18 backbone. The code and models are available at https://github.com/OSVAI/KernelWarehouse. | [
"['Chao Li' 'Anbang Yao']"
] |
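For reference, vanilla dynamic convolution, the baseline that KernelWarehouse (2406.07879, above) generalizes, mixes n static kernels with input-dependent attention. A standard PyTorch sketch follows; this is not KernelWarehouse's cross-layer parameter sharing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Vanilla dynamic convolution: y = (sum_i alpha_i(x) * W_i) * x, with a
    softmax attention alpha(x) computed from globally pooled features."""
    def __init__(self, in_ch, out_ch, k=3, n=4):
        super().__init__()
        self.n, self.k, self.in_ch, self.out_ch = n, k, in_ch, out_ch
        self.kernels = nn.Parameter(torch.randn(n, out_ch, in_ch, k, k) * 0.02)
        self.attn = nn.Linear(in_ch, n)   # attention head over the n kernels

    def forward(self, x):
        b, _, h, w = x.shape
        alpha = F.softmax(self.attn(x.mean(dim=(2, 3))), dim=-1)       # (b, n)
        weight = torch.einsum('bn,nocij->bocij', alpha, self.kernels)  # per-sample kernels
        weight = weight.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        x = x.reshape(1, b * self.in_ch, h, w)
        y = F.conv2d(x, weight, padding=self.k // 2, groups=b)         # batched dynamic conv
        return y.reshape(b, self.out_ch, h, w)
```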
null | null | 2406.07884 | null | null | http://arxiv.org/pdf/2406.07884v1 | 2024-06-12T05:23:08Z | 2024-06-12T05:23:08Z | Reinforcement Learning to Disentangle Multiqubit Quantum States from
Partial Observations | Using partial knowledge of a quantum state to control multiqubit entanglement is a largely unexplored paradigm in the emerging field of quantum interactive dynamics with the potential to address outstanding challenges in quantum state preparation and compression, quantum control, and quantum complexity. We present a deep reinforcement learning (RL) approach to constructing short disentangling circuits for arbitrary 4-, 5-, and 6-qubit states using an actor-critic algorithm. With access to only two-qubit reduced density matrices, our agent decides which pairs of qubits to apply two-qubit gates on; requiring only local information makes it directly applicable on modern NISQ devices. Utilizing a permutation-equivariant transformer architecture, the agent can autonomously identify qubit permutations within the state, and adjusts the disentangling protocol accordingly. Once trained, it provides circuits from different initial states without further optimization. We demonstrate the agent's ability to identify and exploit the entanglement structure of multiqubit states. For 4-, 5-, and 6-qubit Haar-random states, the agent learns to construct disentangling circuits that exhibit strong correlations both between consecutive gates and among the qubits involved. Through extensive benchmarking, we show the efficacy of the RL approach to find disentangling protocols with minimal gate resources. We explore the resilience of our trained agents to noise, highlighting their potential for real-world quantum computing applications. Analyzing optimal disentangling protocols, we report a general circuit to prepare an arbitrary 4-qubit state using at most 5 two-qubit (10 CNOT) gates. | [
"['Pavel Tashev' 'Stefan Petrov' 'Friederike Metz' 'Marin Bukov']"
] |
null | null | 2406.07885 | null | null | http://arxiv.org/pdf/2406.07885v1 | 2024-06-12T05:24:53Z | 2024-06-12T05:24:53Z | GENIU: A Restricted Data Access Unlearning for Imbalanced Data | With the increasing emphasis on data privacy, the significance of machine unlearning has grown substantially. Class unlearning, which involves enabling a trained model to forget data belonging to a specific class learned before, is important as classification tasks account for the majority of today's machine learning as a service (MLaaS). Retraining the model on the original data, excluding the data to be forgotten (a.k.a forgetting data), is a common approach to class unlearning. However, the availability of original data during the unlearning phase is not always guaranteed, leading to the exploration of class unlearning with restricted data access. While current unlearning methods with restricted data access usually generate proxy samples via the trained neural network classifier, they typically focus on training and forgetting balanced data. However, imbalanced original data can degrade these proxies and the unlearning process, particularly when the forgetting data consists predominantly of the majority class. To address this issue, we propose the GENerative Imbalanced Unlearning (GENIU) framework. GENIU utilizes a Variational Autoencoder (VAE) to concurrently train a proxy generator alongside the original model. These generated proxies accurately represent each class and are leveraged in the unlearning phase, eliminating the reliance on the original training data. To further mitigate the performance degradation resulting from forgetting the majority class, we introduce an in-batch tuning strategy that works with the generated proxies. GENIU is the first practical framework for class unlearning in imbalanced data settings and restricted data access, ensuring the preservation of essential information for future unlearning. Experimental results confirm the superiority of GENIU over existing methods, establishing its effectiveness in empirical scenarios. | [
"['Chenhao Zhang' 'Shaofei Shen' 'Yawen Zhao' 'Weitong Tony Chen' 'Miao Xu']"
] |
null | null | 2406.07887 | null | null | http://arxiv.org/pdf/2406.07887v1 | 2024-06-12T05:25:15Z | 2024-06-12T05:25:15Z | An Empirical Study of Mamba-based Language Models | Selective state-space models (SSMs) like Mamba overcome some of the shortcomings of Transformers, such as quadratic computational complexity with sequence length and large inference-time memory requirements from the key-value cache. Moreover, recent studies have shown that SSMs can match or exceed the language modeling capabilities of Transformers, making them an attractive alternative. In a controlled setting (e.g., same data), however, studies so far have only presented small scale experiments comparing SSMs to Transformers. To understand the strengths and weaknesses of these architectures at larger scales, we present a direct comparison between 8B-parameter Mamba, Mamba-2, and Transformer models trained on the same datasets of up to 3.5T tokens. We also compare these models to a hybrid architecture consisting of 43% Mamba-2, 7% attention, and 50% MLP layers (Mamba-2-Hybrid). Using a diverse set of tasks, we answer the question of whether Mamba models can match Transformers at larger training budgets. Our results show that while pure SSMs match or exceed Transformers on many tasks, they lag behind Transformers on tasks which require strong copying or in-context learning abilities (e.g., 5-shot MMLU, Phonebook) or long-context reasoning. In contrast, we find that the 8B Mamba-2-Hybrid exceeds the 8B Transformer on all 12 standard tasks we evaluated (+2.65 points on average) and is predicted to be up to 8x faster when generating tokens at inference time. To validate long-context capabilities, we provide additional experiments evaluating variants of the Mamba-2-Hybrid and Transformer extended to support 16K, 32K, and 128K sequences. On an additional 23 long-context tasks, the hybrid model continues to closely match or exceed the Transformer on average. To enable further study, we release the checkpoints as well as the code used to train our models as part of NVIDIA's Megatron-LM project. | [
"['Roger Waleffe' 'Wonmin Byeon' 'Duncan Riach' 'Brandon Norick'\n 'Vijay Korthikanti' 'Tri Dao' 'Albert Gu' 'Ali Hatamizadeh'\n 'Sudhakar Singh' 'Deepak Narayanan' 'Garvit Kulshreshtha' 'Vartika Singh'\n 'Jared Casper' 'Jan Kautz' 'Mohammad Shoeybi' 'Bryan Catanzaro']"
] |
null | null | 2406.07890 | null | null | http://arxiv.org/pdf/2406.07890v1 | 2024-06-12T05:41:01Z | 2024-06-12T05:41:01Z | Exploring Speech Foundation Models for Speaker Diarization in
Child-Adult Dyadic Interactions | Speech foundation models, trained on vast datasets, have opened unique opportunities in addressing challenging low-resource speech understanding, such as child speech. In this work, we explore the capabilities of speech foundation models on child-adult speaker diarization. We show that exemplary foundation models can achieve 39.5% and 62.3% relative reductions in Diarization Error Rate and Speaker Confusion Rate, respectively, compared to previous speaker diarization methods. In addition, we benchmark and evaluate the speaker diarization results of the speech foundation models while varying the input audio window size, speaker demographics, and training data ratio. Our results highlight promising pathways for understanding and adopting speech foundation models to facilitate child speech understanding. | [
"['Anfeng Xu' 'Kevin Huang' 'Tiantian Feng' 'Lue Shen'\n 'Helen Tager-Flusberg' 'Shrikanth Narayanan']"
] |
null | null | 2406.07892 | null | null | http://arxiv.org/pdf/2406.07892v1 | 2024-06-12T05:49:53Z | 2024-06-12T05:49:53Z | Finite Time Analysis of Temporal Difference Learning for Mean-Variance
in a Discounted MDP | Motivated by risk-sensitive reinforcement learning scenarios, we consider the problem of policy evaluation for variance in a discounted reward Markov decision process (MDP). For this problem, a temporal difference (TD) type learning algorithm with linear function approximation (LFA) exists in the literature, though only asymptotic guarantees are available for this algorithm. We derive finite sample bounds that hold (i) in the mean-squared sense; and (ii) with high probability, when tail iterate averaging is employed with/without regularization. Our bounds exhibit exponential decay for the initial error, while the overall bound is $O(1/t)$, where $t$ is the number of update iterations of the TD algorithm. Further, the bound for the regularized TD variant is for a universal step size. Our bounds open avenues for analysis of actor-critic algorithms for mean-variance optimization in a discounted MDP. | [
"['Tejaram Sangadi' 'L. A. Prashanth' 'Krishna Jagannathan']"
] |
null | null | 2406.07897 | null | null | http://arxiv.org/pdf/2406.07897v1 | 2024-06-12T06:01:42Z | 2024-06-12T06:01:42Z | When Do Skills Help Reinforcement Learning? A Theoretical Analysis of
Temporal Abstractions | Skills are temporal abstractions that are intended to improve reinforcement learning (RL) performance through hierarchical RL. Despite our intuition about the properties of an environment that make skills useful, a precise characterization has been absent. We provide the first such characterization, focusing on the utility of deterministic skills in deterministic sparse-reward environments with finite action spaces. We show theoretically and empirically that RL performance gain from skills is worse in environments where solutions to states are less compressible. Additional theoretical results suggest that skills benefit exploration more than they benefit learning from existing experience, and that using unexpressive skills such as macroactions may worsen RL performance. We hope our findings can guide research on automatic skill discovery and help RL practitioners better decide when and how to use skills. | [
"['Zhening Li' 'Gabriel Poesia' 'Armando Solar-Lezama']"
] |
null | null | 2406.07904 | null | null | http://arxiv.org/pdf/2406.07904v1 | 2024-06-12T06:12:04Z | 2024-06-12T06:12:04Z | Grounding Multimodal Large Language Models in Actions | Multimodal Large Language Models (MLLMs) have demonstrated a wide range of capabilities across many domains, including Embodied AI. In this work, we study how to best ground a MLLM into different embodiments and their associated action spaces, with the goal of leveraging the multimodal world knowledge of the MLLM. We first generalize a number of methods through a unified architecture and the lens of action space adaptors. For continuous actions, we show that a learned tokenization allows for sufficient modeling precision, yielding the best performance on downstream tasks. For discrete actions, we demonstrate that semantically aligning these actions with the native output token space of the MLLM leads to the strongest performance. We arrive at these lessons via a thorough study of seven action space adapters on five different environments, encompassing over 114 embodied tasks. | [
"['Andrew Szot' 'Bogdan Mazoure' 'Harsh Agrawal' 'Devon Hjelm' 'Zsolt Kira'\n 'Alexander Toshev']"
] |
null | null | 2406.07908 | null | null | http://arxiv.org/pdf/2406.07908v1 | 2024-06-12T06:22:51Z | 2024-06-12T06:22:51Z | Ablation Based Counterfactuals | Diffusion models are a class of generative models that generate high-quality samples, but at present it is difficult to characterize how they depend upon their training data. This difficulty raises scientific and regulatory questions, and is a consequence of the complexity of diffusion models and their sampling process. To analyze this dependence, we introduce Ablation Based Counterfactuals (ABC), a method of performing counterfactual analysis that relies on model ablation rather than model retraining. In our approach, we train independent components of a model on different but overlapping splits of a training set. These components are then combined into a single model, from which the causal influence of any training sample can be removed by ablating a combination of model components. We demonstrate how we can construct a model like this using an ensemble of diffusion models. We then use this model to study the limits of training data attribution by enumerating full counterfactual landscapes, and show that single source attributability diminishes with increasing training data size. Finally, we demonstrate the existence of unattributable samples. | [
"['Zheng Dai' 'David K Gifford']"
] |
null | null | 2406.07917 | null | null | http://arxiv.org/pdf/2406.07917v1 | 2024-06-12T06:36:37Z | 2024-06-12T06:36:37Z | Graph Transductive Defense: a Two-Stage Defense for Graph Membership
Inference Attacks | Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities for tasks such as social networks and medical data analysis. Despite their successes, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA), which threaten privacy by identifying whether a record was part of the model's training data. While existing research has explored MIA in GNNs under graph inductive learning settings, the more common and challenging graph transductive learning setting remains understudied in this context. This paper addresses this gap and proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics. The gist of our approach is a combination of a train-test alternate training schedule and flattening strategy, which successfully reduces the difference between the training and testing loss distributions. Extensive empirical results demonstrate the superior performance of our method (a decrease in attack AUROC by $9.42\%$ and an increase in utility performance by $18.08\%$ on average compared to LBP), highlighting its potential for seamless integration into various classification models with minimal overhead. | [
"['Peizhi Niu' 'Chao Pan' 'Siheng Chen' 'Olgica Milenkovic']"
] |
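The GTD recipe above combines a train-test alternate training schedule with a flattening strategy. Below is a minimal sketch of the alternating schedule in a transductive setting, where test-node features (but not labels) are visible during training; the confidence-gap penalty is an illustrative stand-in for the paper's flattening strategy, and all names (`alternate_training`, the PyG-style `model(x, edge_index)` signature) are assumptions, not the authors' code.

```python
# Minimal sketch: alternate a standard supervised step on labeled train nodes
# with a step that shrinks the train/test loss-distribution gap. The gap term
# here (difference in mean prediction confidence) is a simplified proxy for
# GTD's flattening strategy.
import torch
import torch.nn.functional as F

def alternate_training(model, optimizer, x, edge_index, y,
                       train_mask, test_mask, epochs=100):
    for epoch in range(epochs):
        model.train()
        optimizer.zero_grad()
        logits = model(x, edge_index)          # assumed GNN signature (PyG-style)
        train_loss = F.cross_entropy(logits[train_mask], y[train_mask])
        if epoch % 2 == 0:
            # Standard step: fit the labeled training nodes.
            train_loss.backward()
        else:
            # Alternate step: penalize the confidence gap between train and
            # test nodes, a proxy for matching their loss distributions.
            train_conf = F.softmax(logits[train_mask], dim=-1).max(dim=-1).values
            test_conf = F.softmax(logits[test_mask], dim=-1).max(dim=-1).values
            gap = (train_conf.mean() - test_conf.mean()).abs()
            (train_loss + gap).backward()
        optimizer.step()
```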
null | null | 2406.07920 | null | null | http://arxiv.org/pdf/2406.07920v1 | 2024-06-12T06:41:47Z | 2024-06-12T06:41:47Z | Near-Optimal Learning and Planning in Separated Latent MDPs | We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp *statistical threshold* for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis. | [
"['Fan Chen' 'Constantinos Daskalakis' 'Noah Golowich' 'Alexander Rakhlin']"
] |
null | null | 2406.07926 | null | null | http://arxiv.org/pdf/2406.07926v1 | 2024-06-12T06:45:03Z | 2024-06-12T06:45:03Z | Efficient Neural Common Neighbor for Temporal Graph Link Prediction | Temporal graphs are ubiquitous in real-world scenarios, such as social networks, trade, and transportation. Predicting dynamic links between nodes in a temporal graph is of vital importance. Traditional methods usually leverage the temporal neighborhood of interaction history to generate node embeddings first and then aggregate the source and target node embeddings to predict the link. However, such methods focus on learning individual node representations, but overlook the pairwise representation learning nature of link prediction and fail to capture the important pairwise features of links such as common neighbors (CN). Motivated by the success of Neural Common Neighbor (NCN) for static graph link prediction, we propose TNCN, a temporal version of NCN for link prediction in temporal graphs. TNCN dynamically updates a temporal neighbor dictionary for each node, and utilizes multi-hop common neighbors between the source and target node to learn a more effective pairwise representation. We validate our model on five large-scale real-world datasets from the Temporal Graph Benchmark (TGB), and find that it achieves new state-of-the-art performance on three of them. Additionally, TNCN demonstrates excellent scalability on large datasets, outperforming popular GNN baselines by up to 6.4 times in speed. Our code is available at https://github.com/GraphPKU/TNCN. | [
"['Xiaohui Zhang' 'Yanbo Wang' 'Xiyuan Wang' 'Muhan Zhang']"
] |
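TNCN's core bookkeeping — a per-node temporal neighbor dictionary queried for common neighbors at prediction time — can be sketched in a few lines. This is a simplified 1-hop version under assumed semantics (a recency window, plain set intersection); the class and method names are illustrative, not the authors' API.

```python
# Minimal sketch of a temporal neighbor dictionary with a recency window and
# a 1-hop temporal common-neighbor query for a candidate link (u, v) at time t.
from collections import defaultdict

class TemporalNeighborDict:
    def __init__(self, window=100.0):
        self.window = window                 # keep only recent interactions
        self.nbrs = defaultdict(dict)        # node -> {neighbor: last_time}

    def update(self, u, v, t):
        self.nbrs[u][v] = t
        self.nbrs[v][u] = t

    def recent_neighbors(self, u, t):
        return {v for v, tv in self.nbrs[u].items() if t - tv <= self.window}

    def common_neighbors(self, u, v, t):
        return self.recent_neighbors(u, t) & self.recent_neighbors(v, t)

# Usage: stream interaction events, then query a candidate pair.
tnd = TemporalNeighborDict(window=100.0)
for u, v, t in [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]:
    tnd.update(u, v, t)
print(tnd.common_neighbors(0, 2, t=4.0))     # {1}
```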
null | null | 2406.07929 | null | null | http://arxiv.org/pdf/2406.07929v1 | 2024-06-12T06:46:37Z | 2024-06-12T06:46:37Z | A Generic Layer Pruning Method for Signal Modulation Recognition Deep
Learning Models | With the successful application of deep learning in communications systems, deep neural networks are becoming the preferred method for signal classification. Although these models yield impressive results, they often come with high computational complexity and large model sizes, which hinders their practical deployment in communication systems. To address this challenge, we propose a novel layer pruning method. Specifically, we decompose the model into several consecutive blocks, each containing consecutive layers with similar semantics. Then, we identify layers that need to be preserved within each block based on their contribution. Finally, we reassemble the pruned blocks and fine-tune the compact model. Extensive experiments on five datasets demonstrate the efficiency and effectiveness of our method over a variety of state-of-the-art baselines, including layer pruning and channel pruning methods. | [
"['Yao Lu' 'Yutao Zhu' 'Yuqi Li' 'Dongwei Xu' 'Yun Lin' 'Qi Xuan'\n 'Xiaoniu Yang']"
] |
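A minimal sketch of the block-level pruning step follows: score each layer's contribution within a block and keep only the top-scoring layers before reassembly and fine-tuning. The contribution proxy used here (one minus the input-output cosine similarity, which assumes shape-preserving residual-style layers) is an assumption for illustration; the paper's exact criterion may differ.

```python
# Minimal sketch: rank layers in a block by how much they change their input,
# then keep only the `keep` highest-contribution layers (in original order).
import torch
import torch.nn.functional as F

@torch.no_grad()
def layer_contributions(layers, x):
    scores = []
    for layer in layers:
        y = layer(x)
        # Contribution proxy (assumption): 1 - cosine similarity between the
        # flattened input and output activations of the layer.
        sim = F.cosine_similarity(x.flatten(1), y.flatten(1), dim=1).mean()
        scores.append(1.0 - sim.item())
        x = y
    return scores

def prune_block(layers, x, keep: int):
    scores = layer_contributions(layers, x)
    keep_idx = sorted(sorted(range(len(layers)), key=lambda i: -scores[i])[:keep])
    return torch.nn.Sequential(*[layers[i] for i in keep_idx])
```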
null | null | 2406.07933 | null | null | http://arxiv.org/pdf/2406.07933v1 | 2024-06-12T06:56:20Z | 2024-06-12T06:56:20Z | Large Language Model Unlearning via Embedding-Corrupted Prompts | Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present Embedding-COrrupted (ECO) Prompts, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at nearly zero side effects in general domains and domains closely related to the unlearned ones. Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases. | [
"['Chris Yuhao Liu' 'Yaxuan Wang' 'Jeffrey Flanigan' 'Yang Liu']"
] |
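At inference time, the ECO pipeline reduces to: classify the prompt, and if it falls in the forget set, add a learned corruption to its embeddings before the forward pass. A minimal sketch, assuming a Hugging Face-style model that accepts `inputs_embeds` and treating the offline-learned corruption (which the paper obtains via zeroth-order optimization) as a given tensor; `should_forget` and `corruption` are hypothetical names.

```python
# Minimal sketch of the ECO inference path: corrupt the prompt embeddings of
# flagged prompts, leave all other prompts untouched.
import torch

def eco_forward(llm, embed, should_forget, corruption, input_ids):
    emb = embed(input_ids)               # (batch, seq, dim) prompt embeddings
    flags = should_forget(input_ids)     # (batch,) bool: is prompt in forget set?
    emb = torch.where(flags[:, None, None], emb + corruption, emb)
    return llm(inputs_embeds=emb)        # HF-style call accepting embeddings
```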
null | null | 2406.07935 | null | null | http://arxiv.org/pdf/2406.07935v1 | 2024-06-12T06:59:31Z | 2024-06-12T06:59:31Z | Defining and Detecting Vulnerability in Human Evaluation Guidelines: A
Preliminary Study Towards Reliable NLG Evaluation | Human evaluation serves as the gold standard for assessing the quality of Natural Language Generation (NLG) systems. Nevertheless, the evaluation guideline, as a pivotal element ensuring reliable and reproducible human assessment, has received limited attention. Our investigation revealed that only 29.84% of recent papers involving human evaluation at top conferences release their evaluation guidelines, with vulnerabilities identified in 77.09% of these guidelines. Unreliable evaluation guidelines can yield inaccurate assessment outcomes, potentially impeding the advancement of NLG in the right direction. To address these challenges, we take an initial step towards reliable evaluation guidelines and propose the first human evaluation guideline dataset by collecting annotations of guidelines extracted from existing papers as well as generated via Large Language Models (LLMs). We then introduce a taxonomy of eight vulnerabilities and formulate a principle for composing evaluation guidelines. Furthermore, a method for detecting guideline vulnerabilities has been explored using LLMs, and we offer a set of recommendations to enhance reliability in human evaluation. The annotated human evaluation guideline dataset and code for the vulnerability detection method are publicly available online. | [
"['Jie Ruan' 'Wenqing Wang' 'Xiaojun Wan']"
] |
null | null | 2406.07940 | null | null | http://arxiv.org/pdf/2406.07940v1 | 2024-06-12T07:02:59Z | 2024-06-12T07:02:59Z | Simple yet Sharp Sensitivity Analysis for Any Contrast Under Unmeasured
Confounding | We extend our previous work on sensitivity analysis for the risk ratio and difference contrasts under unmeasured confounding to any contrast. We prove that the bounds produced are still arbitrarily sharp, i.e. practically attainable. We illustrate the usability of the bounds with real data. | [
"['Jose M. Peña']"
] |
null | null | 2406.07953 | null | null | http://arxiv.org/abs/2406.07953v1 | 2024-06-12T07:24:19Z | 2024-06-12T07:24:19Z | DPSW-Sketch: A Differentially Private Sketch Framework for Frequency
Estimation over Sliding Windows (Technical Report) | The sliding window model of computation captures scenarios in which data are continually arriving in the form of a stream, and only the most recent $w$ items are used for analysis. In this setting, an algorithm needs to accurately track some desired statistics over the sliding window using a small space. When data streams contain sensitive information about individuals, the algorithm is also urgently needed to provide a provable guarantee of privacy. In this paper, we focus on the two fundamental problems of privately (1) estimating the frequency of an arbitrary item and (2) identifying the most frequent items (i.e., \emph{heavy hitters}), in the sliding window model. We propose \textsc{DPSW-Sketch}, a sliding window framework based on the count-min sketch that not only satisfies differential privacy over the stream but also approximates the results for frequency and heavy-hitter queries within bounded errors in sublinear time and space w.r.t.~$w$. Extensive experiments on five real-world and synthetic datasets show that \textsc{DPSW-Sketch} provides significantly better utility-privacy trade-offs than state-of-the-art methods. | [
"['Yiping Wang' 'Yanhao Wang' 'Cen Chen']"
] |
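The building block here is a count-min sketch whose answers are privatized with calibrated noise. A minimal sketch of that layer, with the sliding-window expiry logic elided; the depth/width values and per-item Laplace calibration below are illustrative assumptions, not the paper's exact mechanism.

```python
# Minimal sketch: a count-min sketch with Laplace noise added at query time.
import numpy as np

class DPCountMin:
    def __init__(self, depth=5, width=272, epsilon=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.seeds = self.rng.integers(0, 2**31, size=depth)
        self.width, self.epsilon = width, epsilon

    def _cols(self, item):
        return [hash((int(s), item)) % self.width for s in self.seeds]

    def update(self, item, count=1):
        for row, col in enumerate(self._cols(item)):
            self.table[row, col] += count

    def estimate(self, item):
        raw = min(self.table[row, col] for row, col in enumerate(self._cols(item)))
        # Laplace noise calibrated to sensitivity 1 per item (assumption).
        return raw + self.rng.laplace(scale=1.0 / self.epsilon)
```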
null | null | 2406.07955 | null | null | http://arxiv.org/pdf/2406.07955v1 | 2024-06-12T07:28:28Z | 2024-06-12T07:28:28Z | How Interpretable Are Interpretable Graph Neural Networks? | Interpretable graph neural networks (XGNNs) are widely adopted in various scientific applications involving graph-structured data. Existing XGNNs predominantly adopt the attention-based mechanism to learn edge or node importance for extracting and making predictions with the interpretable subgraph. However, the representational properties and limitations of these methods remain inadequately explored. In this work, we present a theoretical framework that formulates interpretable subgraph learning with the multilinear extension of the subgraph distribution, coined as subgraph multilinear extension (SubMT). Extracting the desired interpretable subgraph requires an accurate approximation of SubMT, yet we find that the existing XGNNs can have a huge gap in fitting SubMT. Consequently, the SubMT approximation failure will lead to the degenerated interpretability of the extracted subgraphs. To mitigate the issue, we design a new XGNN architecture called Graph Multilinear neT (GMT), which is provably more powerful in approximating SubMT. We empirically validate our theoretical findings on a number of graph classification benchmarks. The results demonstrate that GMT outperforms the state-of-the-art by up to 10% in terms of both interpretability and generalizability across 12 regular and geometric graph benchmarks. | [
"['Yongqiang Chen' 'Yatao Bian' 'Bo Han' 'James Cheng']"
] |
null | null | 2406.07967 | null | null | http://arxiv.org/pdf/2406.07967v1 | 2024-06-12T07:44:36Z | 2024-06-12T07:44:36Z | Better than Random: Reliable NLG Human Evaluation with Constrained
Active Sampling | Human evaluation is viewed as a reliable evaluation method for NLG, but it is expensive and time-consuming. To save labor and costs, researchers usually perform human evaluation on a small subset of data sampled from the whole dataset in practice. However, different selection subsets will lead to different rankings of the systems. To give a more correct inter-system ranking and make the gold standard human evaluation more reliable, we propose a Constrained Active Sampling Framework (CASF) for reliable human judgment. CASF operates through a Learner, a Systematic Sampler and a Constrained Controller to select representative samples for getting a more correct inter-system ranking. Experiment results on 137 real NLG evaluation setups with 44 human evaluation metrics across 16 datasets and 5 NLG tasks demonstrate CASF receives 93.18% top-ranked system recognition accuracy and ranks first or ranks second on 90.91% of the human metrics with 0.83 overall inter-system ranking Kendall correlation. Code and data are publicly available online. | [
"['Jie Ruan' 'Xiao Pu' 'Mingqi Gao' 'Xiaojun Wan' 'Yuesheng Zhu']"
] |
null | null | 2406.07969 | null | null | http://arxiv.org/pdf/2406.07969v1 | 2024-06-12T07:49:21Z | 2024-06-12T07:49:21Z | LibriTTS-P: A Corpus with Speaking Style and Speaker Identity Prompts
for Text-to-Speech and Style Captioning | We introduce LibriTTS-P, a new corpus based on LibriTTS-R that includes utterance-level descriptions (i.e., prompts) of speaking style and speaker-level prompts of speaker characteristics. We employ a hybrid approach to construct prompt annotations: (1) manual annotations that capture human perceptions of speaker characteristics and (2) synthetic annotations on speaking style. Compared to existing English prompt datasets, our corpus provides more diverse prompt annotations for all speakers of LibriTTS-R. Experimental results for prompt-based controllable TTS demonstrate that the TTS model trained with LibriTTS-P achieves higher naturalness than the model using the conventional dataset. Furthermore, the results for style captioning tasks show that the model utilizing LibriTTS-P generates 2.5 times more accurate words than the model using a conventional dataset. Our corpus, LibriTTS-P, is available at https://github.com/line/LibriTTS-P. | [
"['Masaya Kawamura' 'Ryuichi Yamamoto' 'Yuma Shirahata' 'Takuya Hasumi'\n 'Kentaro Tachibana']"
] |
null | null | 2406.07971 | null | null | http://arxiv.org/pdf/2406.07971v2 | 2024-06-13T05:13:50Z | 2024-06-12T07:52:17Z | It Takes Two: On the Seamlessness between Reward and Policy Model in
RLHF | Reinforcement Learning from Human Feedback (RLHF) involves training policy models (PMs) and reward models (RMs) to align language models with human preferences. Instead of focusing solely on PMs and RMs independently, we propose to examine their interactions during fine-tuning, introducing the concept of seamlessness. Our study starts with observing the saturation phenomenon, where continual improvements in RM and PM do not translate into RLHF progress. Our analysis shows that RMs fail to assign proper scores to PM responses, resulting in a 35% mismatch rate with human preferences, highlighting a significant discrepancy between PM and RM. To measure seamlessness between PM and RM without human effort, we propose an automatic metric, SEAM. SEAM quantifies the discrepancies between PM and RM judgments induced by data samples. We validate the effectiveness of SEAM in data selection and model augmentation. Our experiments demonstrate that (1) using SEAM-filtered data for RL training improves RLHF performance by 4.5%, and (2) SEAM-guided model augmentation results in a 4% performance improvement over standard augmentation methods. | [
"['Taiming Lu' 'Lingfeng Shen' 'Xinyu Yang' 'Weiting Tan' 'Beidi Chen'\n 'Huaxiu Yao']"
] |
null | null | 2406.07979 | null | null | http://arxiv.org/pdf/2406.07979v2 | 2024-06-14T10:06:38Z | 2024-06-12T08:05:45Z | Heuristic Learning with Graph Neural Networks: A Unified Framework for
Link Prediction | Link prediction is a fundamental task in graph learning, inherently shaped by the topology of the graph. While traditional heuristics are grounded in graph topology, they encounter challenges in generalizing across diverse graphs. Recent research efforts have aimed to leverage the potential of heuristics, yet a unified formulation accommodating both local and global heuristics remains undiscovered. Drawing on the insight that both local and global heuristics can be represented by adjacency matrix multiplications, we propose a unified matrix formulation to accommodate and generalize various heuristics. We further propose the Heuristic Learning Graph Neural Network (HL-GNN) to efficiently implement the formulation. HL-GNN adopts intra-layer propagation and inter-layer connections, allowing it to reach a depth of around 20 layers with lower time complexity than GCN. Extensive experiments on the Planetoid, Amazon, and OGB datasets underscore the effectiveness and efficiency of HL-GNN. It outperforms existing methods by a large margin in prediction performance. Additionally, HL-GNN is several orders of magnitude faster than heuristic-inspired methods while requiring only a few trainable parameters. The case study further demonstrates that the generalized heuristics and learned weights are highly interpretable. | [
"['Juzheng Zhang' 'Lanning Wei' 'Zhen Xu' 'Quanming Yao']"
] |
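The unifying observation — local and global heuristics are adjacency-matrix products — is easy to make concrete: common neighbors is the square of the adjacency matrix, and the Katz index is a damped sum of its powers. A small dense example follows (a real implementation would use sparse operations; the graph and damping factor are arbitrary):

```python
# Minimal sketch: two classical link-prediction heuristics expressed as
# adjacency-matrix multiplications, the view that HL-GNN generalizes.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

common_neighbors = A @ A          # entry (i, j): # of shared neighbors (local)

katz = np.zeros_like(A)           # truncated Katz index (global heuristic)
term = np.eye(len(A))
for k in range(1, 11):
    term = term @ A               # A^k
    katz += (0.05 ** k) * term    # damped sum of walk counts of length k
```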
null | null | 2406.07980 | null | null | http://arxiv.org/pdf/2406.07980v1 | 2024-06-12T08:06:31Z | 2024-06-12T08:06:31Z | Reinforcement Learning for High-Level Strategic Control in Tower Defense
Games | In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players. Many mobile titles feature quick gameplay loops that allow players to progress steadily, requiring an abundance of levels and puzzles to prevent them from reaching the end too quickly. As with any content creation, testing and validation are essential to ensure engaging gameplay mechanics, enjoyable game assets, and playable levels. In this paper, we propose an automated approach that can be leveraged for gameplay testing and validation that combines traditional scripted methods with reinforcement learning, reaping the benefits of both approaches while adapting to new situations similarly to how a human player would. We test our solution on a popular tower defense game, Plants vs. Zombies. The results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using only heuristic AI, achieving a 57.12% success rate compared to 47.95% in a set of 40 levels. Moreover, the results demonstrate the difficulty of training a general agent for this type of puzzle-like game. | [
"['Joakim Bergdahl' 'Alessandro Sestini' 'Linus Gisslén']"
] |
null | null | 2406.07983 | null | null | http://arxiv.org/pdf/2406.07983v1 | 2024-06-12T08:09:29Z | 2024-06-12T08:09:29Z | Meta-Learning Neural Procedural Biases | The goal of few-shot learning is to generalize and achieve high performance on new unseen learning tasks, where each task has only a limited number of examples available. Gradient-based meta-learning attempts to address this challenging task by learning how to learn new tasks by embedding inductive biases informed by prior learning experiences into the components of the learning algorithm. In this work, we build upon prior research and propose Neural Procedural Bias Meta-Learning (NPBML), a novel framework designed to meta-learn task-adaptive procedural biases. Our approach aims to consolidate recent advancements in meta-learned initializations, optimizers, and loss functions by learning them simultaneously and making them adapt to each individual task to maximize the strength of the learned inductive biases. This imbues each learning task with a unique set of procedural biases which is specifically designed and selected to attain strong learning performance in only a few gradient steps. The experimental results show that by meta-learning the procedural biases of a neural network, we can induce strong inductive biases towards a distribution of learning tasks, enabling robust learning performance across many well-established few-shot learning benchmarks. | [
"['Christian Raymond' 'Qi Chen' 'Bing Xue' 'Mengjie Zhang']"
] |
null | null | 2406.07990 | null | null | http://arxiv.org/pdf/2406.07990v1 | 2024-06-12T08:26:30Z | 2024-06-12T08:26:30Z | Blowfish: Topological and statistical signatures for quantifying
ambiguity in semantic search | This work reports evidence for topological signatures of ambiguity in sentence embeddings that could be leveraged for ranking and/or explanation purposes in the context of vector search and Retrieval Augmented Generation (RAG) systems. We propose a working definition of ambiguity and design an experiment in which we break a proprietary dataset into collections of chunks of varying size (3, 5, and 10 lines) and use the different collections successively as query and answer sets. This allows us to test the signatures of ambiguity while removing confounding factors. Our results show that proxy ambiguous queries (size 10 queries against size 3 documents) display different distributions of features based on homology dimensions 0 and 1 than proxy clear queries (size 5 queries against size 10 documents). We then discuss those results in terms of increased manifold complexity and/or approximately discontinuous embedding submanifolds. Finally, we propose a strategy to leverage those findings as a new scoring strategy for semantic similarity. | [
"['Thomas Roland Barillot' 'Alex De Castro']"
] |
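The homology-dimension-0 and -1 features above come from persistent homology of the embedding point cloud. A minimal sketch using the `ripser` package (an assumption; the abstract does not name its tooling), with random vectors standing in for sentence embeddings:

```python
# Minimal sketch: compute H0/H1 persistence diagrams for a set of embedding
# vectors and derive a simple statistic (mean H1 bar lifetime).
import numpy as np
from ripser import ripser

embeddings = np.random.default_rng(0).normal(size=(50, 16))   # stand-in vectors
dgms = ripser(embeddings, maxdim=1)["dgms"]                    # [H0, H1] diagrams
lifetimes_h1 = dgms[1][:, 1] - dgms[1][:, 0]                   # persistence of loops
print(len(dgms[0]), len(dgms[1]),
      lifetimes_h1.mean() if len(lifetimes_h1) else 0.0)
```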
null | null | 2406.07991 | null | null | http://arxiv.org/pdf/2406.07991v1 | 2024-06-12T08:30:16Z | 2024-06-12T08:30:16Z | Interpretable Target-Feature Aggregation for Multi-Task Learning based on
Bias-Variance Analysis | Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance. Previous works have proposed approaches to MTL that can be divided into feature learning, focused on the identification of a common feature representation, and task clustering, where similar tasks are grouped together. In this paper, we propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features. First, we propose a bias-variance analysis for regression models with additive Gaussian noise, where we provide a general expression of the asymptotic bias and variance of a task, considering a linear regression trained on aggregated input features and an aggregated target. Then, we exploit this analysis to provide a two-phase MTL algorithm (NonLinCTFA). Firstly, this method partitions the tasks into clusters and aggregates each obtained group of targets with their mean. Then, for each aggregated task, it aggregates subsets of features with their mean in a dimensionality reduction fashion. In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is further motivated by applications to Earth science. Finally, we validate the algorithms on synthetic data, showing the effect of different parameters, and on real-world datasets, exploring the validity of the proposed methodology on classical datasets, recent baselines, and Earth science applications. | [
"['Paolo Bonetti' 'Alberto Maria Metelli' 'Marcello Restelli']"
] |
null | null | 2406.07992 | null | null | http://arxiv.org/pdf/2406.07992v1 | 2024-06-12T08:34:53Z | 2024-06-12T08:34:53Z | A Federated Online Restless Bandit Framework for Cooperative Resource
Allocation | Restless multi-armed bandits (RMABs) have been widely utilized to address resource allocation problems with Markov reward processes (MRPs). Existing works often assume that the dynamics of MRPs are known a priori, which makes the RMAB problem solvable from an optimization perspective. Nevertheless, an efficient learning-based solution for RMABs with unknown system dynamics remains an open problem. In this paper, we study the cooperative resource allocation problem with unknown system dynamics of MRPs. This problem can be modeled as a multi-agent online RMAB problem, where multiple agents collaboratively learn the system dynamics while maximizing their accumulated rewards. We devise a federated online RMAB framework to mitigate the communication overhead and data privacy issues by adopting the federated learning paradigm. Based on this framework, we put forth a Federated Thompson Sampling-enabled Whittle Index (FedTSWI) algorithm to solve this multi-agent online RMAB problem. The FedTSWI algorithm enjoys a high communication and computation efficiency, and a privacy guarantee. Moreover, we derive a regret upper bound for the FedTSWI algorithm. Finally, we demonstrate the effectiveness of the proposed algorithm on the case of online multi-user multi-channel access. Numerical results show that the proposed algorithm achieves a fast convergence rate of $\mathcal{O}(\sqrt{T\log(T)})$ and better performance compared with baselines. More importantly, its sample complexity decreases with the number of agents. | [
"['Jingwen Tong' 'Xinran Li' 'Liqun Fu' 'Jun Zhang' 'Khaled B. Letaief']"
] |
null | null | 2406.08001 | null | null | http://arxiv.org/pdf/2406.08001v1 | 2024-06-12T08:47:44Z | 2024-06-12T08:47:44Z | Asymptotic Unbiased Sample Sampling to Speed Up Sharpness-Aware
Minimization | Sharpness-Aware Minimization (SAM) has emerged as a promising approach for effectively reducing the generalization error. However, SAM incurs twice the computational cost compared to the base optimizer (e.g., SGD). We propose Asymptotic Unbiased Sampling with respect to iterations to accelerate SAM (AUSAM), which maintains the model's generalization capacity while significantly enhancing computational efficiency. Concretely, we probabilistically sample a subset of data points beneficial for SAM optimization based on a theoretically guaranteed criterion, i.e., the Gradient Norm of each Sample (GNS). We further approximate the GNS by the difference in loss values before and after perturbation in SAM. As a plug-and-play, architecture-agnostic method, our approach consistently accelerates SAM across a range of tasks and networks, i.e., classification, human pose estimation and network quantization. On CIFAR10/100 and Tiny-ImageNet, AUSAM achieves results comparable to SAM while providing a speedup of over 70%. Compared to recent dynamic data pruning methods, AUSAM is better suited for SAM and excels in maintaining performance. Additionally, AUSAM accelerates optimization in human pose estimation and model quantization without sacrificing performance, demonstrating its broad practicality. | [
"['Jiaxin Deng' 'Junbiao Pang' 'Baochang Zhang']"
] |
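The mechanics are: run the SAM ascent step once, score each sample by how much its loss changes under the perturbation (the GNS proxy), and compute the descent gradient only on the highest-scoring subset. A minimal sketch under simplifying assumptions — deterministic top-k selection instead of the paper's probabilistic sampling, and a plain two-pass SAM step:

```python
# Minimal sketch of an AUSAM-style step: SAM ascent, loss-difference scoring,
# then SAM descent on the selected subset only.
import torch
import torch.nn.functional as F

def ausam_step(model, optimizer, x, y, rho=0.05, keep_frac=0.5):
    # First pass: per-sample loss, then perturb weights along the gradient.
    loss_before = F.cross_entropy(model(x), y, reduction="none")
    loss_before.mean().backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = []
    for p in model.parameters():
        if p.grad is None:
            continue
        e = rho * p.grad / (grad_norm + 1e-12)
        p.data.add_(e)
        eps.append((p, e))
    # Score samples by the loss change under perturbation (GNS proxy).
    with torch.no_grad():
        loss_after = F.cross_entropy(model(x), y, reduction="none")
        scores = (loss_after - loss_before.detach()).abs()
    idx = scores.topk(max(1, int(keep_frac * len(x)))).indices
    # Second pass: descent gradient on the selected subset only.
    optimizer.zero_grad()
    F.cross_entropy(model(x[idx]), y[idx]).backward()
    for p, e in eps:
        p.data.sub_(e)                 # restore the original weights
    optimizer.step()
```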
null | null | 2406.08010 | null | null | http://arxiv.org/pdf/2406.08010v1 | 2024-06-12T09:00:49Z | 2024-06-12T09:00:49Z | A Self-boosted Framework for Calibrated Ranking | Scale-calibrated ranking systems are ubiquitous in real-world applications nowadays, which pursue accurate ranking quality and calibrated probabilistic predictions simultaneously. For instance, in the advertising ranking system, the predicted click-through rate (CTR) is utilized for ranking and required to be calibrated for the downstream cost-per-click ads bidding. Recently, multi-objective based methods have been widely adopted as a standard approach for Calibrated Ranking, which incorporates the combination of two loss functions: a pointwise loss that focuses on calibrated absolute values and a ranking loss that emphasizes relative orderings. However, when applied to industrial online applications, existing multi-objective CR approaches still suffer from two crucial limitations. First, previous methods need to aggregate the full candidate list within a single mini-batch to compute the ranking loss. Such an aggregation strategy violates the extensive data shuffling that has long been proven beneficial for preventing overfitting, and thus degrades the training effectiveness. Second, existing multi-objective methods apply the two inherently conflicting loss functions on a single probabilistic prediction, which results in a sub-optimal trade-off between calibration and ranking. To tackle the two limitations, we propose a Self-Boosted framework for Calibrated Ranking (SBCR). | [
"['Shunyu Zhang' 'Hu Liu' 'Wentian Bao' 'Enyun Yu' 'Yang Song']"
] |
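The multi-objective baseline that SBCR improves on combines a pointwise calibration loss with a pairwise ranking loss on the same predictions. A minimal sketch of that combined objective (the self-boosting mechanism itself is not reproduced here, and the weighting is an illustrative assumption):

```python
# Minimal sketch of a calibrated-ranking loss: pointwise BCE for calibration
# plus a pairwise logistic ranking term over all item pairs in the batch.
import torch
import torch.nn.functional as F

def calibrated_ranking_loss(logits, labels, alpha=0.5):
    point = F.binary_cross_entropy_with_logits(logits, labels)   # calibration
    diff = logits[:, None] - logits[None, :]                     # all score pairs
    pref = (labels[:, None] > labels[None, :]).float()           # i ranked above j
    rank = (pref * F.softplus(-diff)).sum() / pref.sum().clamp(min=1.0)
    return alpha * point + (1 - alpha) * rank
```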
null | null | 2406.08030 | null | null | http://arxiv.org/pdf/2406.08030v1 | 2024-06-12T09:31:03Z | 2024-06-12T09:31:03Z | Fault detection in propulsion motors in the presence of concept drift | Machine learning and statistical methods can be used to enhance monitoring and fault prediction in marine systems. These methods rely on a dataset with records of historical system behaviour, potentially containing periods of both fault-free and faulty operation. An unexpected change in the underlying system, called a concept drift, may impact the performance of these methods, triggering the need for model retraining or other adaptations. In this article, we present an approach for detecting overheating in stator windings of marine propulsion motors that is able to successfully operate during concept drift without the need for full model retraining. Two distinct approaches are presented and tested. All models are trained and verified using a dataset from operational propulsion motors, with known, sudden concept drifts. | [
"['Martin Tveten' 'Morten Stakkeland']"
] |
null | null | 2406.08034 | null | null | http://arxiv.org/pdf/2406.08034v1 | 2024-06-12T09:36:20Z | 2024-06-12T09:36:20Z | Strong and Weak Random Walks on Signed Networks | Random walks play an important role in probing the structure of complex networks. On traditional networks, they can be used to extract community structure, understand node centrality, perform link prediction, or capture the similarity between nodes. On signed networks, where the edge weights can be either positive or negative, it is non-trivial to design a random walk which can be used to extract information about the signed structure of the network, in particular the ability to partition the graph into communities with positive edges inside and negative edges in between. Prior works on signed network random walks focus on the case where there are only two such communities (strong balance), which is rarely the case in empirical networks. In this paper, we propose a signed network random walk which can capture the structure of a network with more than two such communities (weak balance). The walk results in a similarity matrix which can be used to cluster the nodes into antagonistic communities. We compare the characteristics of the so-called strong and weak random walks, in terms of walk length and stationarity. We show through a series of experiments on synthetic and empirical networks that the similarity matrix based on weak walks can be used for both unsupervised and semi-supervised clustering, outperforming the same similarity matrix based on strong walks when the graph has more than two communities, or exhibits asymmetry in the density of links. These results suggest that other random-walk based algorithms for signed networks could be improved simply by running them with weak walks instead of strong walks. | [
"[\"Shazia'Ayn Babul\" 'Yu Tian' 'Renaud Lambiotte']"
] |
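The contrast between sign-aware and sign-blind walks can be made concrete with two transition operators on a small signed graph. A minimal sketch; the matrices and the similarity construction are illustrative, not the paper's exact strong/weak walk definitions:

```python
# Minimal sketch: a sign-propagating transition matrix vs. a sign-blind walk
# on |A|, for a 3-node signed graph with one antagonistic node.
import numpy as np

A = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]], dtype=float)   # signed adjacency

deg = np.abs(A).sum(axis=1)
P_signed = A / deg[:, None]                 # propagates positive/negative mass
P_abs = np.abs(A) / deg[:, None]            # ignores signs entirely

# Iterating P_signed mixes positive and negative "mass"; a similarity matrix
# built from it (e.g., below) can seed clustering into antagonistic groups.
S = P_signed @ P_signed.T
```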
null | null | 2406.08039 | null | null | http://arxiv.org/pdf/2406.08039v1 | 2024-06-12T09:41:12Z | 2024-06-12T09:41:12Z | Beyond the Mean: Differentially Private Prototypes for Private Transfer
Learning | Machine learning (ML) models have been shown to leak private information from their training datasets. Differential Privacy (DP), typically implemented through the differentially private stochastic gradient descent algorithm (DP-SGD), has become the standard solution to bound leakage from the models. Despite recent improvements, DP-SGD-based approaches for private learning still usually struggle in the high-privacy ($\varepsilon \le 1$) and low-data regimes, and when the private training datasets are imbalanced. To overcome these limitations, we propose Differentially Private Prototype Learning (DPPL) as a new paradigm for private transfer learning. DPPL leverages publicly pre-trained encoders to extract features from private data and generates DP prototypes that represent each private class in the embedding space and can be publicly released for inference. Since our DP prototypes can be obtained from only a few private training data points and without iterative noise addition, they offer high-utility predictions and strong privacy guarantees even under the notion of pure DP. We additionally show that privacy-utility trade-offs can be further improved when leveraging the public data beyond pre-training of the encoder: in particular, we can privately sample our DP prototypes from the publicly available data points used to train the encoder. Our experimental evaluation with four state-of-the-art encoders, four vision datasets, and under different data and imbalancedness regimes demonstrates DPPL's high performance under strong privacy guarantees in challenging private learning setups. | [
"['Dariush Wahdany' 'Matthew Jagielski' 'Adam Dziedzic'\n 'Franziska Boenisch']"
] |
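Since a prototype is essentially a (clipped) class mean in embedding space, releasing it privately reduces to the Gaussian mechanism. A minimal sketch, assuming per-embedding L2 clipping and the textbook (epsilon, delta) calibration, which may differ from the paper's exact accounting:

```python
# Minimal sketch: a differentially private class prototype via the Gaussian
# mechanism on the mean of clipped embeddings.
import numpy as np

def dp_prototype(embeddings, C=1.0, epsilon=1.0, delta=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    clipped = embeddings * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    n = len(embeddings)
    sensitivity = 2 * C / n                       # replace-one sensitivity of mean
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=embeddings.shape[1])
```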
null | null | 2406.08042 | null | null | http://arxiv.org/pdf/2406.08042v1 | 2024-06-12T09:51:29Z | 2024-06-12T09:51:29Z | Efficient Network Traffic Feature Sets for IoT Intrusion Detection | The use of Machine Learning (ML) models in cybersecurity solutions requires high-quality data that is stripped of redundant, missing, and noisy information. By selecting the most relevant features, data integrity and model efficiency can be significantly improved. This work evaluates the feature sets provided by a combination of different feature selection methods, namely Information Gain, Chi-Squared Test, Recursive Feature Elimination, Mean Absolute Deviation, and Dispersion Ratio, in multiple IoT network datasets. The influence of the smaller feature sets on both the classification performance and the training time of ML models is compared, with the aim of increasing the computational efficiency of IoT intrusion detection. Overall, the most impactful features of each dataset were identified, and the ML models obtained higher computational efficiency while preserving a good generalization, showing little to no difference between the sets. | [
"['Miguel Silva' 'João Vitorino' 'Eva Maia' 'Isabel Praça']"
] |
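Two of the listed filters, Information Gain and the Chi-Squared Test, are available directly in scikit-learn. A minimal sketch on stand-in data (the dataset, shapes, and k are placeholders for an actual IoT flow-feature table):

```python
# Minimal sketch: select the top-k features by chi-squared and by mutual
# information (a standard information-gain estimator) on a tabular dataset.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 20))        # stand-in for non-negative flow features
y = rng.integers(0, 2, size=500)       # benign / attack labels

chi_sel = SelectKBest(chi2, k=8).fit(X, y)
ig_sel = SelectKBest(mutual_info_classif, k=8).fit(X, y)
print("chi2 keeps:", np.flatnonzero(chi_sel.get_support()))
print("info gain keeps:", np.flatnonzero(ig_sel.get_support()))
```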
null | null | 2406.08045 | null | null | http://arxiv.org/pdf/2406.08045v1 | 2024-06-12T09:53:14Z | 2024-06-12T09:53:14Z | A novel approach to graph distinction through GENEOs and permutants | The theory of Group Equivariant Non-Expansive Operators (GENEOs) was initially developed in Topological Data Analysis for the geometric approximation of data observers, including their invariances and symmetries. This paper departs from that line of research and explores the use of GENEOs for distinguishing $r$-regular graphs up to isomorphisms. In doing so, we aim to test the capabilities and flexibility of these operators. Our experiments show that GENEOs offer a good compromise between efficiency and computational cost in comparing $r$-regular graphs, while their actions on data are easily interpretable. This supports the idea that GENEOs could be a general-purpose approach to discriminative problems in Machine Learning when some structural information about data and observers is explicitly given. | [
"['Giovanni Bocchi' 'Massimo Ferri' 'Patrizio Frosini']"
] |
null | null | 2406.08050 | null | null | http://arxiv.org/pdf/2406.08050v1 | 2024-06-12T10:02:27Z | 2024-06-12T10:02:27Z | Adversarial Evasion Attack Efficiency against Large Language Models | Large Language Models (LLMs) are valuable for text classification, but their vulnerabilities must not be disregarded. They lack robustness against adversarial examples, so it is pertinent to understand the impacts of different types of perturbations, and assess if those attacks could be replicated by common users with a small amount of perturbations and a small number of queries to a deployed LLM. This work presents an analysis of the effectiveness, efficiency, and practicality of three different types of adversarial attacks against five different LLMs in a sentiment classification task. The obtained results demonstrated the very distinct impacts of the word-level and character-level attacks. The word attacks were more effective, but the character and more constrained attacks were more practical and required a reduced number of perturbations and queries. These differences need to be considered during the development of adversarial defense strategies to train more robust LLMs for intelligent text classification applications. | [
"['João Vitorino' 'Eva Maia' 'Isabel Praça']"
] |
null | null | 2406.08069 | null | null | http://arxiv.org/pdf/2406.08069v1 | 2024-06-12T10:39:31Z | 2024-06-12T10:39:31Z | Explore-Go: Leveraging Exploration for Generalisation in Deep
Reinforcement Learning | One of the remaining challenges in reinforcement learning is to develop agents that can generalise to novel scenarios they might encounter once deployed. This challenge is often framed in a multi-task setting where agents train on a fixed set of tasks and have to generalise to new tasks. Recent work has shown that in this setting increased exploration during training can be leveraged to increase the generalisation performance of the agent. This makes sense when the states encountered during testing can actually be explored during training. In this paper, we provide intuition why exploration can also benefit generalisation to states that cannot be explicitly encountered during training. Additionally, we propose a novel method Explore-Go that exploits this intuition by increasing the number of states on which the agent trains. Explore-Go effectively increases the starting state distribution of the agent and as a result can be used in conjunction with most existing on-policy or off-policy reinforcement learning algorithms. We show empirically that our method can increase generalisation performance in an illustrative environment and on the Procgen benchmark. | [
"['Max Weltevrede' 'Felix Kaubek' 'Matthijs T. J. Spaan' 'Wendelin Böhmer']"
] |
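Explore-Go's effect — widening the distribution of states from which training episodes start — can be sketched as a pure-exploration prefix before each episode. A minimal sketch with the Gymnasium API and a uniform-random exploration policy (a simplification; the method's actual exploratory policy and its integration with the underlying RL algorithm are richer):

```python
# Minimal sketch: take random exploratory steps after each reset so that
# the training episode effectively begins from a wider state distribution.
import gymnasium as gym

def explore_go_reset(env, explore_steps=20):
    obs, info = env.reset()
    for _ in range(explore_steps):
        obs, _, terminated, truncated, info = env.step(env.action_space.sample())
        if terminated or truncated:
            obs, info = env.reset()
    return obs, info                    # the training episode starts here

env = gym.make("CartPole-v1")
obs, info = explore_go_reset(env)
```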