| categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
|---|---|---|---|---|---|---|---|---|---|---|
| null | null | 2403.02502 | null | null | http://arxiv.org/pdf/2403.02502v2 | 2024-07-10T17:36:25Z | 2024-03-04T21:50:29Z | Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents | Large Language Models (LLMs) have become integral components in various autonomous agent systems. In this study, we present an exploration-based trajectory optimization approach, referred to as ETO. This learning method is designed to enhance the performance of open LLM agents. Contrary to previous studies that exclusively train on successful expert trajectories, our method allows agents to learn from their exploration failures. This leads to improved performance through an iterative optimization framework. During the exploration phase, the agent interacts with the environment while completing given tasks, gathering failure trajectories to create contrastive trajectory pairs. In the subsequent training phase, the agent utilizes these trajectory preference pairs to update its policy using contrastive learning methods like DPO. This iterative cycle of exploration and training fosters continued improvement in the agents. Our experiments on three complex tasks demonstrate that ETO consistently surpasses baseline performance by a large margin. Furthermore, an examination of task-solving efficiency and potential in scenarios lacking expert trajectories underscores the effectiveness of our approach. | Yifan Song, Da Yin, Xiang Yue, Jie Huang, Sujian Li, Bill Yuchen Lin |
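The ETO row above describes updating an agent policy from contrastive (success, failure) trajectory pairs with DPO. A minimal, non-authoritative sketch of that objective follows; it is not the paper's code, and the summed trajectory log-probabilities, batch size, and `beta` are assumptions:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_win, logp_lose, logp_win_ref, logp_lose_ref, beta=0.1):
    """DPO objective over (success, failure) trajectory pairs.

    Each tensor holds the summed token log-probability of a whole
    trajectory under the current policy or the frozen reference model.
    """
    reward_win = beta * (logp_win - logp_win_ref)      # implicit reward
    reward_lose = beta * (logp_lose - logp_lose_ref)
    # push the success trajectory's implicit reward above the failure's
    return -F.logsigmoid(reward_win - reward_lose).mean()

# toy usage with random stand-in log-probabilities for 8 pairs
lw, ll, rw, rl = (torch.randn(8) for _ in range(4))
print(dpo_loss(lw, ll, rw, rl))
```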
| null | null | 2403.02506 | null | null | http://arxiv.org/pdf/2403.02506v1 | 2024-03-04T21:52:25Z | 2024-03-04T21:52:25Z | Differentially Private Representation Learning via Image Captioning | Differentially private (DP) machine learning is considered the gold-standard solution for training a model from sensitive data while still preserving privacy. However, a major barrier to achieving this ideal is its sub-optimal privacy-accuracy trade-off, which is particularly visible in DP representation learning. Specifically, it has been shown that under modest privacy budgets, most models learn representations that are not significantly better than hand-crafted features. In this work, we show that effective DP representation learning can be done via image captioning and scaling up to internet-scale multimodal datasets. Through a series of engineering tricks, we successfully train a DP image captioner (DP-Cap) on a 233M subset of LAION-2B from scratch using a reasonable amount of computation, and obtain unprecedented high-quality image features that can be used in a variety of downstream vision and vision-language tasks. For example, under a privacy budget of $\varepsilon=8$, a linear classifier trained on top of learned DP-Cap features attains 65.8% accuracy on ImageNet-1K, considerably improving on the previous SOTA of 56.5%. Our work challenges the prevailing sentiment that high-utility DP representation learning cannot be achieved by training from scratch. | Tom Sander, Yaodong Yu, Maziar Sanjabi, Alain Durmus, Yi Ma, Kamalika Chaudhuri, Chuan Guo |
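For context on the DP training mechanism behind results like DP-Cap's, a toy per-example-clipping DP-SGD step is sketched below. This is generic DP-SGD, not the paper's engineering stack; the naive per-example loop, clipping norm, and noise multiplier are illustrative assumptions (production code would use vectorized per-sample gradients, e.g. Opacus):

```python
import torch
import torch.nn as nn

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each example's gradient, add Gaussian noise."""
    sums = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                      # naive per-example loop
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        scale = min(1.0, (clip / (norm + 1e-12)).item())
        for s, g in zip(sums, grads):
            s.add_(g, alpha=scale)                # clipped contribution
    with torch.no_grad():
        for p, s in zip(model.parameters(), sums):
            noisy = s + noise_mult * clip * torch.randn_like(s)
            p.add_(noisy, alpha=-lr / len(xb))    # noisy averaged-gradient step

model = nn.Linear(4, 1)
xb, yb = torch.randn(16, 4), torch.randn(16, 1)
dp_sgd_step(model, nn.MSELoss(), xb, yb)
```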
| null | null | 2403.02514 | null | null | http://arxiv.org/pdf/2403.02514v1 | 2024-03-04T22:03:49Z | 2024-03-04T22:03:49Z | Purpose for Open-Ended Learning Robots: A Computational Taxonomy, Definition, and Operationalisation | Autonomous open-ended learning (OEL) robots are able to cumulatively acquire new skills and knowledge through direct interaction with the environment, for example relying on the guidance of intrinsic motivations and self-generated goals. OEL robots have a high relevance for applications as they can use the autonomously acquired knowledge to accomplish tasks relevant for their human users. OEL robots, however, encounter an important limitation: open-ended exploration may lead to the acquisition of knowledge that is not relevant for accomplishing the users' tasks. This work analyses a possible solution to this problem that pivots on the novel concept of 'purpose'. Purposes indicate what the designers and/or users want from the robot. The robot should use internal representations of purposes, called here 'desires', to focus its open-ended exploration on the acquisition of knowledge relevant to accomplishing them. This work contributes to developing a computational framework on purpose in two ways. First, it formalises a framework on purpose based on a three-level motivational hierarchy involving: (a) the purposes; (b) the desires, which are domain independent; (c) specific domain-dependent state-goals. Second, the work highlights key challenges raised by the framework, such as the 'purpose-desire alignment problem', the 'purpose-goal grounding problem', and the 'arbitration between desires'. Overall, the approach enables OEL robots to learn in an autonomous way but also to focus on acquiring goals and skills that meet the purposes of the designers and users. | Gianluca Baldassarre, Richard J. Duro, Emilio Cartoni, Mehdi Khamassi, Alejandro Romero, Vieri Giuliano Santucci |
| null | null | 2403.02522 | null | null | http://arxiv.org/pdf/2403.02522v1 | 2024-03-04T22:26:25Z | 2024-03-04T22:26:25Z | HeAR -- Health Acoustic Representations | Health acoustic sounds such as coughs and breaths are known to contain useful health signals with significant potential for monitoring health and disease, yet are underexplored in the medical machine learning community. The existing deep learning systems for health acoustics are often narrowly trained and evaluated on a single task, which is limited by data and may hinder generalization to other tasks. To mitigate these gaps, we develop HeAR, a scalable self-supervised learning-based deep learning system using masked autoencoders trained on a large dataset of 313 million two-second long audio clips. Through linear probes, we establish HeAR as a state-of-the-art health audio embedding model on a benchmark of 33 health acoustic tasks across 6 datasets. By introducing this work, we hope to enable and accelerate further health acoustics research. | Sebastien Baur, Zaid Nabulsi, Wei-Hung Weng, Jake Garrison, Louis Blankemeier, Sam Fishman, Christina Chen, Sujay Kakarmath, Minyoi Maimbolwa, Nsala Sanjase, Brian Shuma, Yossi Matias, Greg S. Corrado, Shwetak Patel, Shravya Shetty, Shruthi Prabhakara, Monde Muyoyeta, Diego Ardila |
| null | null | 2403.02524 | null | null | http://arxiv.org/pdf/2403.02524v2 | 2024-03-14T17:04:37Z | 2024-03-04T22:28:20Z | Koopman operators with intrinsic observables in rigged reproducing kernel Hilbert spaces | This paper presents a novel approach for estimating the Koopman operator defined on a reproducing kernel Hilbert space (RKHS) and its spectra. We propose an estimation method, which we call Jet Dynamic Mode Decomposition (JetDMD), leveraging the intrinsic structure of RKHS and the geometric notion known as jets to enhance the estimation of the Koopman operator. This method refines the traditional Extended Dynamic Mode Decomposition (EDMD) in accuracy, especially in the numerical estimation of eigenvalues. This paper proves JetDMD's superiority through explicit error bounds and convergence rates for special positive definite kernels, offering a solid theoretical foundation for its performance. We also delve into the spectral analysis of the Koopman operator, proposing the notion of an extended Koopman operator within a framework of rigged Hilbert space. This notion leads to a deeper understanding of estimated Koopman eigenfunctions and to capturing them outside the original function space. Through the theory of rigged Hilbert space, our study provides a principled methodology to analyze the estimated spectrum and eigenfunctions of Koopman operators, and enables eigendecomposition within a rigged RKHS. We also propose a new effective method for reconstructing the dynamical system from temporally-sampled trajectory data of the dynamical system with solid theoretical guarantees. We conduct several numerical simulations using the van der Pol oscillator, the Duffing oscillator, the Hénon map, and the Lorenz attractor, and illustrate the performance of JetDMD with clear numerical computations of eigenvalues and accurate predictions of the dynamical systems. | Isao Ishikawa, Yuka Hashimoto, Masahiro Ikeda, Yoshinobu Kawahara |
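Since the row above refines Extended Dynamic Mode Decomposition, a minimal plain-NumPy EDMD sketch may help situate JetDMD. It estimates Koopman eigenvalues from snapshot pairs with a user-chosen feature map; the jet and rigged-RKHS refinements that define JetDMD are deliberately not attempted here:

```python
import numpy as np

def edmd_eigs(X, Y, features):
    """Standard EDMD: estimate Koopman eigenvalues from snapshot pairs.

    X, Y have shape (n_samples, dim) with Y[i] the image of X[i] under
    the dynamics; `features` lifts states to observables.
    """
    PX, PY = features(X), features(Y)            # (n, k) feature matrices
    K = np.linalg.lstsq(PX, PY, rcond=None)[0]   # least-squares Koopman matrix
    return np.linalg.eigvals(K)

# toy check on a linear system x -> A x, where linear observables
# recover the eigenvalues of A exactly
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.5]])
X = rng.standard_normal((500, 2))
Y = X @ A.T
print(np.sort(edmd_eigs(X, Y, lambda Z: Z)))     # approx [0.5, 0.9]
```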
| null | null | 2403.02531 | null | null | http://arxiv.org/pdf/2403.02531v1 | 2024-03-04T22:51:51Z | 2024-03-04T22:51:51Z | Density-based Isometric Mapping | The isometric mapping method employs the shortest path algorithm to estimate the Euclidean distance between points on high-dimensional (HD) manifolds. This may not be sufficient for weakly uniform HD data as it could lead to overestimating distances between far neighboring points, resulting in inconsistencies between the intrinsic (local) and extrinsic (global) distances during the projection. To address this issue, we modify the shortest path algorithm by adding a novel constraint inspired by the Parzen-Rosenblatt (PR) window, which helps to maintain the uniformity of the constructed shortest-path graph in Isomap. Multiple imaging datasets totaling 72,236 cases (70,000 MNIST images, 1,596 images from multiple chest X-ray pneumonia datasets, and three NSCLC CT/PET datasets with a total of 640 lung cancer patients) were used to benchmark and validate PR-Isomap. 431 imaging biomarkers were extracted from each modality. Our results indicate that PR-Isomap projects HD attributes into a lower-dimensional (LD) space while preserving information, as visualized on the MNIST dataset, indicating that local and global distances are maintained. PR-Isomap achieved the highest comparative accuracies of 80.9% (STD: 5.8) for pneumonia and 78.5% (STD: 4.4), 88.4% (STD: 1.4), and 61.4% (STD: 11.4) for the three NSCLC datasets, with a 95% confidence interval for outcome prediction. Similarly, the multivariate Cox model showed higher overall survival, measured with c-statistics and the log-likelihood test, for PR-Isomap compared to other dimensionality reduction methods. The Kaplan-Meier survival curves also signify the notable ability of PR-Isomap to distinguish between high-risk and low-risk patients using multimodal imaging biomarkers, preserving HD imaging characteristics for precision medicine. | Bardia Yousefi, Mélina Khansari, Ryan Trask, Patrick Tallon, Carina Carino, Arman Afrasiyabi, Vikas Kundra, Lan Ma, Lei Ren, Keyvan Farahani, Michelle Hershman |
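As background for the row above: vanilla Isomap estimates geodesic distances via shortest paths on a k-NN graph, which PR-Isomap additionally constrains with a Parzen-Rosenblatt-window criterion. A baseline sketch of the unconstrained step follows (the PR constraint itself is omitted; the data and k below are arbitrary):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

# Baseline Isomap-style geodesics: build a k-NN graph, then run
# all-pairs shortest paths (Dijkstra) over it.
X = np.random.default_rng(0).standard_normal((100, 10))
G = kneighbors_graph(X, n_neighbors=8, mode="distance")
D_geo = shortest_path(G, method="D", directed=False)
print(D_geo.shape)  # (100, 100) matrix of estimated geodesic distances
```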
| null | null | 2403.02534 | null | null | http://arxiv.org/pdf/2403.02534v1 | 2024-03-04T23:03:17Z | 2024-03-04T23:03:17Z | Towards Foundation Time Series Model: To Synthesize Or Not To Synthesize? | The industry is rich in cases when we are required to make forecasts for large amounts of time series at once. However, we might be in a situation where we cannot afford to train a separate model for each of them. This issue in time series modeling remains without due attention. The remedy for this setting is the establishment of a foundation model. Such a model is expected to work in zero-shot and few-shot regimes. However, what should we take as a training dataset for such a model? Witnessing the benefits from the enrichment of NLP datasets with artificially-generated data, we might want to adopt their experience for time series. In contrast to natural language, the process of generating synthetic time series data is even more favorable because it provides full control of series patterns, time horizons, and number of samples. In this work, we consider the essential question of whether it is advantageous to train a foundation model on synthetic data or better to utilize only a limited number of real-life examples. Our experiments are conducted only for regular time series and speak in favor of leveraging solely the real time series. Moreover, the choice of the proper source dataset strongly influences the performance during inference. When provided access even to a limited quantity of short time series data, employing it within a supervised framework yields more favorable results than training on a larger volume of synthetic data. The code for our experiments is publicly available on GitHub: https://github.com/sb-ai-lab/synthesize_or_not. | Kseniia Kuvshinova, Olga Tsymboi, Alina Kostromina, Dmitry Simakov, Elizaveta Kovtun |
| null | null | 2403.02536 | null | null | http://arxiv.org/pdf/2403.02536v1 | 2024-03-04T23:12:17Z | 2024-03-04T23:12:17Z | Forecasting SEP Events During Solar Cycles 23 and 24 Using Interpretable Machine Learning | Prediction of Solar Energetic Particle (SEP) events garners increasing interest as space missions extend beyond Earth's protective magnetosphere. These events, which are, in most cases, products of magnetic reconnection-driven processes during solar flares or fast coronal-mass-ejection-driven shock waves, pose significant radiation hazards to aviation, space-based electronics, and particularly, space exploration. In this work, we utilize the recently developed dataset that combines the Solar Dynamics Observatory/Helioseismic and Magnetic Imager's (SDO/HMI) Space weather HMI Active Region Patches (SHARP) and the Solar and Heliospheric Observatory/Michelson Doppler Imager's (SoHO/MDI) Space Weather MDI Active Region Patches (SMARP). We employ a suite of machine learning strategies, including Support Vector Machines (SVM) and regression models, to evaluate the predictive potential of this new data product for a forecast of post-solar flare SEP events. Our study indicates that despite the augmented volume of data, the prediction accuracy reaches 0.7 ± 0.1, which aligns with but does not exceed previously published benchmarks. A linear SVM model with training and testing configurations that mimic an operational setting (positive-negative imbalance) reveals a slight increase (+0.04 ± 0.05) in the accuracy of a 14-hour SEP forecast compared to previous studies. This outcome emphasizes the imperative for more sophisticated, physics-informed models to better understand the underlying processes leading to SEP events. | Spiridon Kasapis, Irina N. Kitiashvili, Paul Kosovich, Alexander G. Kosovichev, Viacheslav M. Sadykov, Patrick O'Keefe, Vincent Wang |
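The row above evaluates a linear SVM under an operational positive-negative imbalance. A self-contained sketch of that kind of setup on synthetic stand-in features follows; all numbers and feature semantics here are invented, not SHARP/SMARP data:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for an imbalanced flare catalog: few SEP-positive
# events among many negatives.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 5)),   # negatives
               rng.normal(1.0, 1.0, (100, 5))])  # rare positives
y = np.array([0] * 900 + [1] * 100)

# class_weight="balanced" counteracts the positive-negative imbalance
clf = LinearSVC(class_weight="balanced", max_iter=10_000).fit(X, y)
print(accuracy_score(y, clf.predict(X)))
```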
| null | null | 2403.02544 | null | null | http://arxiv.org/pdf/2403.02544v1 | 2024-03-04T23:40:02Z | 2024-03-04T23:40:02Z | Coronary artery segmentation in non-contrast calcium scoring CT images using deep learning | Precise localization of coronary arteries in Computed Tomography (CT) scans is critical from the perspective of medical assessment of coronary artery disease. Although various methods exist that offer high-quality segmentation of coronary arteries in cardiac contrast-enhanced CT scans, the potential of less invasive, non-contrast CT in this area is still not fully exploited. Since such fine anatomical structures are hardly visible in this type of medical image, the existing methods are characterized by high recall and low precision, and are used mainly for filtering of atherosclerotic plaques in the context of calcium scoring. In this paper, we address this research gap and introduce a deep learning algorithm for segmenting coronary arteries in multi-vendor ECG-gated non-contrast cardiac CT images which benefits from a novel framework for semi-automatic generation of Ground Truth (GT) via image registration. We hypothesize that the proposed GT generation process is much more efficient in this case than manual segmentation, since it allows for a fast generation of large volumes of diverse data, which leads to well-generalizing models. To investigate and thoroughly evaluate the segmentation quality based on such an approach, we propose a novel method for manual mesh-to-image registration, which is used to create our test-GT. The experimental study shows that the trained model has significantly higher accuracy than the GT used for training, and leads to the Dice and clDice metrics close to the interrater variability. | Mariusz Bujny, Katarzyna Jesionek, Jakub Nalepa, Karol Miszalski-Jamka, Katarzyna Widawka-Żak, Sabina Wolny, Marcin Kostur |
| null | null | 2403.02545 | null | null | http://arxiv.org/pdf/2403.02545v4 | 2024-06-04T04:29:24Z | 2024-03-04T23:40:20Z | Wukong: Towards a Scaling Law for Large-Scale Recommendation | Scaling laws play an instrumental role in the sustainable improvement in model quality. Unfortunately, recommendation models to date do not exhibit such laws similar to those observed in the domain of large language models, due to the inefficiencies of their upscaling mechanisms. This limitation poses significant challenges in adapting these models to increasingly more complex real-world datasets. In this paper, we propose an effective network architecture based purely on stacked factorization machines, and a synergistic upscaling strategy, collectively dubbed Wukong, to establish a scaling law in the domain of recommendation. Wukong's unique design makes it possible to capture diverse interactions of any order simply through taller and wider layers. We conducted extensive evaluations on six public datasets, and our results demonstrate that Wukong consistently outperforms state-of-the-art models quality-wise. Further, we assessed Wukong's scalability on an internal, large-scale dataset. The results show that Wukong retains its superiority in quality over state-of-the-art models, while holding the scaling law across two orders of magnitude in model complexity, extending beyond 100 GFLOP/example, where prior art falls short. | Buyun Zhang, Liang Luo, Yuxin Chen, Jade Nie, Xi Liu, Daifeng Guo, Yanli Zhao, Shen Li, Yuchen Hao, Yantao Yao, Guna Lakshminarayanan, Ellie Dingqiao Wen, Jongsoo Park, Maxim Naumov, Wenlin Chen |
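Wukong stacks factorization-machine blocks; for orientation, the classic second-order FM interaction (the sum-square minus square-sum identity) is sketched below. This is the generic FM computation, not Wukong's exact layer:

```python
import torch

def fm_interaction(x_embed):
    """Second-order factorization-machine pooling: the classic
    0.5 * ((sum_i v_i)^2 - sum_i v_i^2) identity over feature
    embeddings of shape (batch, num_features, dim)."""
    sum_sq = x_embed.sum(dim=1) ** 2       # square of the sum
    sq_sum = (x_embed ** 2).sum(dim=1)     # sum of squares
    return 0.5 * (sum_sq - sq_sum)         # (batch, dim) pairwise term

x = torch.randn(4, 10, 16)                 # 4 examples, 10 fields, dim 16
print(fm_interaction(x).shape)             # torch.Size([4, 16])
```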
| null | null | 2403.02571 | null | null | http://arxiv.org/pdf/2403.02571v1 | 2024-03-05T00:58:34Z | 2024-03-05T00:58:34Z | DPAdapter: Improving Differentially Private Deep Learning through Noise Tolerance Pre-training | Recent developments have underscored the critical role of differential privacy (DP) in safeguarding individual data for training machine learning models. However, integrating DP oftentimes incurs significant model performance degradation due to the perturbation introduced into the training process, presenting a formidable challenge in the differentially private machine learning (DPML) field. To this end, several mitigative efforts have been proposed, typically revolving around formulating new DPML algorithms or relaxing DP definitions to harmonize with distinct contexts. In spite of these initiatives, the degradation induced by DP on models, particularly large-scale models, remains substantial and thus necessitates an innovative solution that adeptly circumvents the consequential impairment of model utility. In response, we introduce DPAdapter, a pioneering technique designed to amplify the model performance of DPML algorithms by enhancing parameter robustness. The fundamental intuition behind this strategy is that models with robust parameters are inherently more resistant to the noise introduced by DP, thereby retaining better performance despite the perturbations. DPAdapter modifies and enhances the sharpness-aware minimization (SAM) technique, utilizing a two-batch strategy to provide a more accurate perturbation estimate and an efficient gradient descent, thereby improving parameter robustness against noise. Notably, DPAdapter can act as a plug-and-play component and be combined with existing DPML algorithms to further improve their performance. Our experiments show that DPAdapter vastly enhances state-of-the-art DPML algorithms, increasing average accuracy from 72.92% to 77.09% with a privacy budget of $\epsilon=4$. | Zihao Wang, Rui Zhu, Dongruo Zhou, Zhikun Zhang, John Mitchell, Haixu Tang, XiaoFeng Wang |
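DPAdapter builds on sharpness-aware minimization with a two-batch scheme. A hedged sketch of a generic two-batch SAM step follows; the split of roles between the batches, `rho`, and the optimizer wiring are assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn as nn

def sam_two_batch_step(model, loss_fn, batch_a, batch_b, base_opt, rho=0.05):
    """One SAM-style step using two mini-batches: batch_a estimates the
    adversarial weight perturbation, batch_b supplies the gradient at
    the perturbed weights."""
    xa, ya = batch_a
    loss_fn(model(xa), ya).backward()
    with torch.no_grad():
        norm = torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters()))
        eps = [rho * p.grad / (norm + 1e-12) for p in model.parameters()]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                    # climb toward the sharp neighbor
    model.zero_grad()
    xb, yb = batch_b
    loss_fn(model(xb), yb).backward()    # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                    # restore the original weights
    base_opt.step()
    base_opt.zero_grad()

model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
make_batch = lambda: (torch.randn(8, 4), torch.randn(8, 1))
sam_two_batch_step(model, nn.MSELoss(), make_batch(), make_batch(), opt)
```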
| null | null | 2403.02573 | null | null | http://arxiv.org/pdf/2403.02573v1 | 2024-03-05T01:06:25Z | 2024-03-05T01:06:25Z | Learning-augmented Online Minimization of Age of Information and Transmission Costs | We consider a discrete-time system where a resource-constrained source (e.g., a small sensor) transmits its time-sensitive data to a destination over a time-varying wireless channel. Each transmission incurs a fixed transmission cost (e.g., energy cost), and no transmission results in a staleness cost represented by the Age-of-Information. The source must balance the tradeoff between transmission and staleness costs. To address this challenge, we develop a robust online algorithm to minimize the sum of transmission and staleness costs, ensuring a worst-case performance guarantee. While online algorithms are robust, they are usually overly conservative and may have a poor average performance in typical scenarios. In contrast, by leveraging historical data and prediction models, machine learning (ML) algorithms perform well in average cases. However, they typically lack worst-case performance guarantees. To achieve the best of both worlds, we design a learning-augmented online algorithm that exhibits two desired properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; (ii) robustness: ensuring a worst-case performance guarantee even when ML predictions are inaccurate. Finally, we perform extensive simulations to show that our online algorithm performs well empirically and that our learning-augmented algorithm achieves both consistency and robustness. | Zhongdong Liu, Keyuan Zhang, Bin Li, Yin Sun, Y. Thomas Hou, Bo Ji |
| null | null | 2403.02576 | null | null | http://arxiv.org/pdf/2403.02576v2 | 2024-04-14T09:57:48Z | 2024-03-05T01:17:56Z | AceMap: Knowledge Discovery through Academic Graph | The exponential growth of scientific literature requires effective management and extraction of valuable insights. While existing scientific search engines excel at delivering search results based on relational databases, they often neglect the analysis of collaborations between scientific entities and the evolution of ideas, as well as the in-depth analysis of content within scientific publications. The representation of heterogeneous graphs and the effective measurement, analysis, and mining of such graphs pose significant challenges. To address these challenges, we present AceMap, an academic system designed for knowledge discovery through academic graph. We present advanced database construction techniques to build the comprehensive AceMap database with large-scale academic entities that contain rich visual, textual, and numerical information. AceMap also employs innovative visualization, quantification, and analysis methods to explore associations and logical relationships among academic entities. AceMap introduces large-scale academic network visualization techniques centered on nebular graphs, providing a comprehensive view of academic networks from multiple perspectives. In addition, AceMap proposes a unified metric based on structural entropy to quantitatively measure the knowledge content of different academic entities. Moreover, AceMap provides advanced analysis capabilities, including tracing the evolution of academic ideas through citation relationships and concept co-occurrence, and generating concise summaries informed by this evolutionary process. In addition, AceMap uses machine reading methods to generate potential new ideas at the intersection of different fields. Exploring the integration of large language models and knowledge graphs is a promising direction for future research in idea evolution. Please visit https://www.acemap.info for further exploration. | Xinbing Wang, Luoyi Fu, Xiaoying Gan, Ying Wen, Guanjie Zheng, Jiaxin Ding, Liyao Xiang, Nanyang Ye, Meng Jin, Shiyu Liang, Bin Lu, Haiwen Wang, Yi Xu, Cheng Deng, Shao Zhang, Huquan Kang, Xingli Wang, Qi Li, Zhixin Guo, Jiexing Qi, Pan Liu, Yuyang Ren, Lyuwen Wu, Jungang Yang, Jianping Zhou, Chenghu Zhou |
| null | null | 2403.02579 | null | null | http://arxiv.org/pdf/2403.02579v1 | 2024-03-05T01:30:34Z | 2024-03-05T01:30:34Z | Geometric Dynamics of Signal Propagation Predict Trainability of Transformers | We investigate forward signal propagation and gradient back propagation in deep, randomly initialized transformers, yielding simple necessary and sufficient conditions on initialization hyperparameters that ensure trainability of deep transformers. Our approach treats the evolution of the representations of $n$ tokens as they propagate through the transformer layers in terms of a discrete-time dynamical system of $n$ interacting particles. We derive simple update equations for the evolving geometry of this particle system, starting from a permutation symmetric simplex. Our update equations show that without MLP layers, this system will collapse to a line, consistent with prior work on rank collapse in transformers. However, unlike prior work, our evolution equations can quantitatively track particle geometry in the additional presence of nonlinear MLP layers, and they reveal an order-chaos phase transition as a function of initialization hyperparameters, like the strength of attentional and MLP residual connections and weight variances. In the ordered phase the particles are attractive and collapse to a line, while in the chaotic phase the particles are repulsive and converge to a regular $n$-simplex. We analytically derive two Lyapunov exponents: an angle exponent that governs departures from the edge of chaos in this particle system, and a gradient exponent that governs the rate of exponential growth or decay of backpropagated gradients. We show through experiments that, remarkably, the final test loss at the end of training is well predicted just by these two exponents at the beginning of training, and that the simultaneous vanishing of these two exponents yields a simple necessary and sufficient condition to achieve minimal test loss. | Aditya Cowsik, Tamra Nebabu, Xiao-Liang Qi, Surya Ganguli |
| null | null | 2403.02580 | null | null | http://arxiv.org/pdf/2403.02580v1 | 2024-03-05T01:32:29Z | 2024-03-05T01:32:29Z | What do we learn from inverting CLIP models? | We employ an inversion-based approach to examine CLIP models. Our examination reveals that inverting CLIP models results in the generation of images that exhibit semantic alignment with the specified target prompts. We leverage these inverted images to gain insights into various aspects of CLIP models, such as their ability to blend concepts and inclusion of gender biases. We notably observe instances of NSFW (Not Safe For Work) images during model inversion. This phenomenon occurs even for semantically innocuous prompts, like "a beautiful landscape," as well as for prompts involving the names of celebrities. | Hamid Kazemi, Atoosa Chegini, Jonas Geiping, Soheil Feizi, Tom Goldstein |
| null | null | 2403.02598 | null | null | http://arxiv.org/pdf/2403.02598v2 | 2024-03-14T22:46:02Z | 2024-03-05T02:20:33Z | Pooling Image Datasets With Multiple Covariate Shift and Imbalance | Small sample sizes are common in many disciplines, which necessitates pooling roughly similar datasets across multiple institutions to study weak but relevant associations between images and disease outcomes. Such data often manifest shift/imbalance in covariates (i.e., secondary non-imaging data). Controlling for such nuisance variables is common within standard statistical analysis, but the ideas do not directly apply to overparameterized models. Consequently, recent work has shown how strategies from invariant representation learning provide a meaningful starting point, but the current repertoire of methods is limited to accounting for shifts/imbalances in just a couple of covariates at a time. In this paper, we show how viewing this problem from the perspective of category theory provides a simple and effective solution that completely avoids elaborate multi-stage training pipelines that would otherwise be needed. We show the effectiveness of this approach via extensive experiments on real datasets. Further, we discuss how this style of formulation offers a unified perspective on at least five distinct problem settings, from self-supervised learning to matching problems in 3D reconstruction. | Sotirios Panagiotis Chytas, Vishnu Suresh Lokhande, Peiran Li, Vikas Singh |
| null | null | 2403.02600 | null | null | http://arxiv.org/pdf/2403.02600v1 | 2024-03-05T02:27:52Z | 2024-03-05T02:27:52Z | TESTAM: A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts | Accurate traffic forecasting is challenging due to the complex dependency on road networks, various types of roads, and abrupt speed changes due to events. Recent works mainly focus on dynamic spatial modeling with adaptive graph embedding or graph attention, with less consideration of temporal characteristics and in-situ modeling. In this paper, we propose a novel deep learning model named TESTAM, which individually models recurring and non-recurring traffic patterns by a mixture-of-experts model with three experts on temporal modeling, spatio-temporal modeling with a static graph, and dynamic spatio-temporal dependency modeling with a dynamic graph. By introducing different experts and properly routing them, TESTAM can better model various circumstances, including spatially isolated nodes, highly related nodes, and recurring and non-recurring events. For proper routing, we reformulate the gating problem into a classification problem with pseudo labels. Experimental results on three public traffic network datasets, METR-LA, PEMS-BAY, and EXPY-TKY, demonstrate that TESTAM achieves a better indication and modeling of recurring and non-recurring traffic. We published the official code at https://github.com/HyunWookL/TESTAM. | Hyunwook Lee, Sungahn Ko |
| null | null | 2403.02608 | null | null | http://arxiv.org/pdf/2403.02608v1 | 2024-03-05T02:49:00Z | 2024-03-05T02:49:00Z | DNNLasso: Scalable Graph Learning for Matrix-Variate Data | We consider the problem of jointly learning row-wise and column-wise dependencies of matrix-variate observations, which are modelled separately by two precision matrices. Due to the complicated structure of Kronecker-product precision matrices in the commonly used matrix-variate Gaussian graphical models, a sparser Kronecker-sum structure was proposed recently based on the Cartesian product of graphs. However, existing methods for estimating Kronecker-sum structured precision matrices do not scale well to large-scale datasets. In this paper, we introduce DNNLasso, a diagonally non-negative graphical lasso model for estimating the Kronecker-sum structured precision matrix, which outperforms the state-of-the-art methods by a large margin in both accuracy and computational time. Our code is available at https://github.com/YangjingZhang/DNNLasso. | Meixia Lin, Yangjing Zhang |
| null | null | 2403.02609 | null | null | http://arxiv.org/pdf/2403.02609v1 | 2024-03-05T02:53:24Z | 2024-03-05T02:53:24Z | Search Intenion Network for Personalized Query Auto-Completion in E-Commerce | Query Auto-Completion (QAC), as an important part of the modern search engine, plays a key role in complementing user queries and helping them refine their search intentions. Today's QAC systems in real-world scenarios face two major challenges: 1) intention equivocality (IE): during the user's typing process, the prefix often contains a combination of characters and subwords, which makes the current intention ambiguous and difficult to model; 2) intention transfer (IT): previous works make personalized recommendations based on users' historical sequences, but ignore search intention transfer. However, the current intention extracted from the prefix may be contrary to the historical preferences. | Wei Bao, Mi Zhang, Tao Zhang, Chengfu Huo |
| null | null | 2403.02616 | null | null | http://arxiv.org/pdf/2403.02616v1 | 2024-03-05T03:11:02Z | 2024-03-05T03:11:02Z | Unsupervised Spatio-Temporal State Estimation for Fine-grained Adaptive Anomaly Diagnosis of Industrial Cyber-physical Systems | Accurate detection and diagnosis of abnormal behaviors such as network attacks from multivariate time series (MTS) are crucial for ensuring the stable and effective operation of industrial cyber-physical systems (CPS). However, existing research pays little attention to the logical dependencies among system working states and has difficulty explaining the evolution mechanisms of abnormal signals. To reveal the spatio-temporal association relationships and evolution mechanisms of the working states of industrial CPS, this paper proposes a fine-grained adaptive anomaly diagnosis method (i.e., MAD-Transformer) to identify and diagnose anomalies in MTS. MAD-Transformer first constructs a temporal state matrix to characterize and estimate the change patterns of the system states in the temporal dimension. Then, to better locate the anomalies, a spatial state matrix is also constructed to capture the inter-sensor state correlation relationships within the system. Subsequently, based on these two types of state matrices, a three-branch series-temporal-spatial attention module is designed to simultaneously capture the series, temporal, and spatial dependencies among MTS. Afterwards, three associated alignment loss functions and a reconstruction loss are constructed to jointly optimize the model. Finally, anomalies are determined and diagnosed by comparing the residual matrices with the original matrices. We conducted comparative experiments on five public datasets spanning three application domains (service monitoring, spatial and earth exploration, and water treatment), along with a petroleum refining simulation dataset that we collected ourselves. The results demonstrate that MAD-Transformer can adaptively detect fine-grained anomalies with short duration, and outperforms state-of-the-art baselines in terms of noise robustness and localization performance. | Haili Sun, Yan Huang, Lansheng Han, Cai Fu, Chunjie Zhou |
| null | null | 2403.02619 | null | null | http://arxiv.org/pdf/2403.02619v2 | 2024-03-13T07:19:06Z | 2024-03-05T03:18:43Z | Training Machine Learning models at the Edge: A Survey | Edge Computing (EC) has gained significant traction in recent years, promising enhanced efficiency by integrating Artificial Intelligence (AI) capabilities at the edge. While the focus has primarily been on the deployment and inference of Machine Learning (ML) models at the edge, the training aspect remains less explored. This survey delves into Edge Learning (EL), specifically the optimization of ML model training at the edge. The objective is to comprehensively explore diverse approaches and methodologies in EL, synthesize existing knowledge, identify challenges, and highlight future trends. Utilizing Scopus' advanced search, relevant literature on EL was identified, revealing a concentration of research efforts in distributed learning methods, particularly Federated Learning (FL). This survey further provides a guideline for comparing techniques used to optimize ML for edge learning, along with an exploration of different frameworks, libraries, and simulation tools available for EL. In doing so, the paper contributes to a holistic understanding of the current landscape and future directions in the intersection of edge computing and machine learning, paving the way for informed comparisons between optimization methods and techniques designed for edge learning. | Aymen Rayane Khouas, Mohamed Reda Bouadjenek, Hakim Hacid, Sunil Aryal |
| null | null | 2403.02622 | null | null | http://arxiv.org/pdf/2403.02622v3 | 2024-05-07T13:28:48Z | 2024-03-05T03:23:55Z | World Models for Autonomous Driving: An Initial Survey | In the rapidly evolving landscape of autonomous driving, the capability to accurately predict future events and assess their implications is paramount for both safety and efficiency, critically aiding the decision-making process. World models have emerged as a transformative approach, enabling autonomous driving systems to synthesize and interpret vast amounts of sensor data, thereby predicting potential future scenarios and compensating for information gaps. This paper provides an initial review of the current state and prospective advancements of world models in autonomous driving, spanning their theoretical underpinnings, practical applications, and the ongoing research efforts aimed at overcoming existing limitations. Highlighting the significant role of world models in advancing autonomous driving technologies, this survey aspires to serve as a foundational reference for the research community, facilitating swift access to and comprehension of this burgeoning field, and inspiring continued innovation and exploration. | Yanchen Guan, Haicheng Liao, Zhenning Li, Jia Hu, Runze Yuan, Yunjian Li, Guohui Zhang, Chengzhong Xu |
| null | null | 2403.02624 | null | null | http://arxiv.org/pdf/2403.02624v2 | 2024-03-12T06:28:39Z | 2024-03-05T03:32:02Z | Pareto-Optimal Estimation and Policy Learning on Short-term and Long-term Treatment Effects | This paper focuses on developing Pareto-optimal estimation and policy learning to identify the most effective treatment that maximizes the total reward from both short-term and long-term effects, which might conflict with each other. For example, a higher dosage of medication might increase the speed of a patient's recovery (short-term) but could also result in severe long-term side effects. Although recent works have investigated the problems of short-term or long-term effects, or both, how to trade off between them to achieve optimal treatment remains an open challenge. Moreover, when multiple objectives are directly estimated using conventional causal representation learning, the optimization directions among various tasks can conflict as well. In this paper, we systematically investigate these issues and introduce a Pareto-Efficient algorithm, comprising Pareto-Optimal Estimation (POE) and Pareto-Optimal Policy Learning (POPL), to tackle them. POE incorporates a continuous Pareto module with representation balancing, enhancing estimation efficiency across multiple tasks. As for POPL, it involves deriving short-term and long-term outcomes linked with various treatment levels, facilitating an exploration of the Pareto frontier emanating from these outcomes. Results on both the synthetic and real-world datasets demonstrate the superiority of our method. | Yingrong Wang, Anpeng Wu, Haoxuan Li, Weiming Liu, Qiaowei Miao, Ruoxuan Xiong, Fei Wu, Kun Kuang |
| null | null | 2403.02626 | null | null | http://arxiv.org/pdf/2403.02626v2 | 2024-03-20T03:56:57Z | 2024-03-05T03:34:11Z | Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use | From content moderation to wildlife conservation, the number of applications that require models to recognize nuanced or subjective visual concepts is growing. Traditionally, developing classifiers for such concepts requires substantial manual effort measured in hours, days, or even months to identify and annotate data needed for training. Even with recently proposed Agile Modeling techniques, which enable rapid bootstrapping of image classifiers, users are still required to spend 30 minutes or more of monotonous, repetitive data labeling just to train a single classifier. Drawing on Fiske's Cognitive Miser theory, we propose a new framework that alleviates manual effort by replacing human labeling with natural language interactions, reducing the total effort required to define a concept by an order of magnitude: from labeling 2,000 images to only 100 plus some natural language interactions. Our framework leverages recent advances in foundation models, both large language models and vision-language models, to carve out the concept space through conversation and by automatically labeling training data points. Most importantly, our framework eliminates the need for crowd-sourced annotations. Moreover, our framework ultimately produces lightweight classification models that are deployable in cost-sensitive scenarios. Across 15 subjective concepts and across 2 public image classification datasets, our trained models outperform traditional Agile Modeling as well as state-of-the-art zero-shot classification models like ALIGN, CLIP, CuPL, and large visual question-answering models like PaLI-X. | Imad Eddine Toubal, Aditya Avinash, Neil Gordon Alldrin, Jan Dlabal, Wenlei Zhou, Enming Luo, Otilia Stretcu, Hao Xiong, Chun-Ta Lu, Howard Zhou, Ranjay Krishna, Ariel Fuxman, Tom Duerig |
| null | null | 2403.02628 | null | null | http://arxiv.org/pdf/2403.02628v2 | 2024-03-19T02:19:52Z | 2024-03-05T03:37:28Z | Interactive Continual Learning: Fast and Slow Thinking | Advanced life forms, sustained by the synergistic interaction of neural cognitive mechanisms, continually acquire and transfer knowledge throughout their lifespan. In contrast, contemporary machine learning paradigms exhibit limitations in emulating the facets of continual learning (CL). Nonetheless, the emergence of large language models (LLMs) presents promising avenues for realizing CL via interactions with these models. Drawing on Complementary Learning System theory, this paper presents a novel Interactive Continual Learning (ICL) framework, enabled by collaborative interactions among models of various sizes. Specifically, we assign the ViT model as System1 and the multimodal LLM as System2. To enable the memory module to deduce tasks from class information and enhance Set2Set retrieval, we propose the Class-Knowledge-Task Multi-Head Attention (CKT-MHA). Additionally, to improve memory retrieval in System1 through enhanced geometric representation, we introduce the CL-vMF mechanism, based on the von Mises-Fisher (vMF) distribution. Meanwhile, we introduce the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI) strategy to identify hard examples, thus enhancing collaboration between System1 and System2 to realize complex reasoning. Comprehensive evaluation of our proposed ICL demonstrates significant resistance to forgetting and superior performance relative to existing methods. Code is available at github.com/ICL. | Biqing Qi, Xingquan Chen, Junqi Gao, Dong Li, Jianxing Liu, Ligang Wu, Bowen Zhou |
| null | null | 2403.02630 | null | null | http://arxiv.org/pdf/2403.02630v4 | 2024-06-10T14:57:11Z | 2024-03-05T03:40:39Z | FedHCDR: Federated Cross-Domain Recommendation with Hypergraph Signal Decoupling | In recent years, Cross-Domain Recommendation (CDR) has drawn significant attention, which utilizes user data from multiple domains to enhance the recommendation performance. However, current CDR methods require sharing user data across domains, thereby violating the General Data Protection Regulation (GDPR). Consequently, numerous approaches have been proposed for Federated Cross-Domain Recommendation (FedCDR). Nevertheless, the data heterogeneity across different domains inevitably influences the overall performance of federated learning. In this study, we propose FedHCDR, a novel Federated Cross-Domain Recommendation framework with Hypergraph signal decoupling. Specifically, to address the data heterogeneity across domains, we introduce an approach called hypergraph signal decoupling (HSD) to decouple the user features into domain-exclusive and domain-shared features. The approach employs high-pass and low-pass hypergraph filters to decouple domain-exclusive and domain-shared user representations, which are trained by the local-global bi-directional transfer algorithm. In addition, a hypergraph contrastive learning (HCL) module is devised to enhance the learning of domain-shared user relationship information by perturbing the user hypergraph. Extensive experiments conducted on three real-world scenarios demonstrate that FedHCDR outperforms existing baselines significantly. | Hongyu Zhang, Dongyi Zheng, Lin Zhong, Xu Yang, Jiyuan Feng, Yunqing Feng, Qing Liao |
| null | null | 2403.02639 | null | null | http://arxiv.org/pdf/2403.02639v3 | 2024-05-19T09:23:54Z | 2024-03-05T04:07:54Z | False Positive Sampling-based Data Augmentation for Enhanced 3D Object Detection Accuracy | Recent studies have focused on enhancing the performance of 3D object detection models. Among various approaches, ground-truth sampling has been proposed as an augmentation technique to address the challenges posed by limited ground-truth data. However, an inherent issue with ground-truth sampling is its tendency to increase false positives. Therefore, this study aims to overcome the limitations of ground-truth sampling and improve the performance of 3D object detection models by developing a new augmentation technique called false-positive sampling. False-positive sampling involves retraining the model using point clouds that are identified as false positives in the model's predictions. We propose an algorithm that utilizes both ground-truth and false-positive sampling and an algorithm for building the false-positive sample database. Additionally, we analyze the principles behind the performance enhancement due to false-positive sampling. Our experiments demonstrate that models utilizing false-positive sampling show a reduction in false positives and exhibit improved object detection performance. On the KITTI and Waymo Open datasets, models with false-positive sampling surpass the baseline models by a large margin. | Jiyong Oh, Junhaeng Lee, Woongchan Byun, Minsang Kong, Sang Hun Lee |
| null | null | 2403.02645 | null | null | http://arxiv.org/pdf/2403.02645v2 | 2024-03-11T17:25:14Z | 2024-03-05T04:29:31Z | DT-DDNN: A Physical Layer Security Attack Detector in 5G RF Domain for CAVs | The Synchronization Signal Block (SSB) is a fundamental component of the 5G New Radio (NR) air interface, crucial for the initial access procedure of Connected and Automated Vehicles (CAVs), and serves several key purposes in the network's operation. However, due to the predictable nature of SSB transmission, including the Primary and Secondary Synchronization Signals (PSS and SSS), jamming attacks are critical threats. These attacks, which can be executed without requiring high power or complex equipment, pose substantial risks to the 5G network, particularly as a result of the unencrypted transmission of control signals. Leveraging RF domain knowledge, this work presents a novel deep learning-based technique for detecting jammers in CAV networks. Unlike existing jamming detection algorithms that mostly rely on network parameters, we introduce a double-threshold deep learning jamming detector that focuses on the SSB. The detection method is focused on RF domain features and improves the robustness of the network without requiring integration with the pre-existing network infrastructure. By integrating a preprocessing block to extract PSS correlation and energy per null resource element (EPNRE) characteristics, our method distinguishes between normal and jammed received signals with high precision. Additionally, by incorporating the Discrete Wavelet Transform (DWT), the efficacy of training and detection is optimized. A double-threshold double Deep Neural Network (DT-DDNN) is also introduced to the architecture, complemented by a deep cascade learning model, to increase the sensitivity of the model to variations of the signal-to-jamming noise ratio (SJNR). Results show that the proposed method achieves a 96.4% detection rate at extra low jamming power, i.e., SJNR between 15 and 30 dB. Further, the performance of DT-DDNN is validated by analyzing real 5G signals obtained from a practical testbed. | Ghazal Asemian, Mohammadreza Amini, Burak Kantarci, Melike Erol-Kantarci |
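The row above extracts PSS-correlation features before classification. A toy illustration of that style of RF-domain preprocessing follows: correlate the received waveform against a known synchronization sequence and compare peak correlations. The sequences, lengths, and noise levels are made up and not 5G-compliant:

```python
import numpy as np

rng = np.random.default_rng(1)
pss = rng.choice([-1.0, 1.0], size=64)            # stand-in sync sequence
clean = np.concatenate([rng.normal(size=200), pss, rng.normal(size=200)])
jammed = clean + rng.normal(scale=3.0, size=clean.size)  # strong interference

def peak_corr(rx, ref):
    """Peak of the normalized cross-correlation of rx against ref."""
    c = np.abs(np.correlate(rx, ref, mode="valid")) / ref.size
    return c.max()

# the correlation peak degrades as jamming power rises, which is the
# kind of feature a downstream detector can threshold
print(peak_corr(clean, pss), peak_corr(jammed, pss))
```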
| null | null | 2403.02648 | null | null | http://arxiv.org/pdf/2403.02648v2 | 2024-06-05T15:13:02Z | 2024-03-05T04:35:59Z | Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad | Adaptive methods are extremely popular in machine learning as they make learning rate tuning less expensive. This paper introduces a novel optimization algorithm named KATE, which presents a scale-invariant adaptation of the well-known AdaGrad algorithm. We prove the scale-invariance of KATE for the case of Generalized Linear Models. Moreover, for general smooth non-convex problems, we establish a convergence rate of $O\left(\frac{\log T}{\sqrt{T}}\right)$ for KATE, matching the best-known ones for AdaGrad and Adam. We also compare KATE to other state-of-the-art adaptive algorithms, Adam and AdaGrad, in numerical experiments with different problems, including complex machine learning tasks like image classification and text classification on real data. The results indicate that KATE consistently outperforms AdaGrad and matches/surpasses the performance of Adam in all considered scenarios. | Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horvath, Martin Takac, Eduard Gorbunov |
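For reference against the row above: standard diagonal AdaGrad divides by the root of the accumulated squared gradients, the square root that KATE removes. A baseline AdaGrad sketch is below; KATE's actual update is in the paper and is not reproduced here:

```python
import numpy as np

def adagrad(grad_fn, w, lr=0.1, steps=100, eps=1e-8):
    """Plain diagonal AdaGrad: per-coordinate steps shrink with the
    root of the running sum of squared gradients."""
    g2 = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        g2 += g ** 2                           # accumulate squared grads
        w = w - lr * g / (np.sqrt(g2) + eps)   # the square root in question
    return w

# toy quadratic: minimize ||w||^2, gradient is 2w
w = adagrad(lambda w: 2 * w, np.array([3.0, -2.0]))
print(w)  # close to the origin
```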
| null | null | 2403.02681 | null | null | http://arxiv.org/pdf/2403.02681v1 | 2024-03-05T06:10:21Z | 2024-03-05T06:10:21Z | SGD with Partial Hessian for Deep Neural Networks Optimization | Due to the effectiveness of second-order algorithms in solving classical optimization problems, designing second-order optimizers to train deep neural networks (DNNs) has attracted much research interest in recent years. However, because of the very high dimension of intermediate features in DNNs, it is difficult to directly compute and store the Hessian matrix for network optimization. Most of the previous second-order methods approximate the Hessian information imprecisely, resulting in unstable performance. In this work, we propose a compound optimizer, which is a combination of a second-order optimizer with a precise partial Hessian matrix for updating channel-wise parameters and the first-order stochastic gradient descent (SGD) optimizer for updating the other parameters. We show that the associated Hessian matrices of channel-wise parameters are diagonal and can be extracted directly and precisely from Hessian-free methods. The proposed method, namely SGD with Partial Hessian (SGD-PH), inherits the advantages of both first-order and second-order optimizers. Compared with first-order optimizers, it adopts a certain amount of information from the Hessian matrix to assist optimization, while compared with the existing second-order optimizers, it keeps the good generalization performance of first-order optimizers. Experiments on image classification tasks demonstrate the effectiveness of our proposed optimizer SGD-PH. The code is publicly available at https://github.com/myingysun/SGDPH. | Ying Sun, Hongwei Yong, Lei Zhang |
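SGD-PH relies on exact Hessian information for a small set of channel-wise parameters. A generic double-backprop sketch for extracting a Hessian diagonal is below; it costs one backward pass per parameter entry, so it is only sensible for small parameter groups, and it is not the paper's Hessian-free extraction:

```python
import torch

def hessian_diag(loss, params):
    """Exact Hessian diagonal via double backprop (O(#entries) passes)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    diags = []
    for p, gp in zip(params, grads):
        d = torch.zeros_like(p)
        flat_g, flat_d = gp.reshape(-1), d.reshape(-1)
        for i in range(flat_g.numel()):
            # i-th row of the Hessian w.r.t. p; keep only its diagonal entry
            row = torch.autograd.grad(flat_g[i], p, retain_graph=True)[0]
            flat_d[i] = row.reshape(-1)[i]
        diags.append(d)
    return diags

# toy check: loss = w0^2 + w1^2 + w0*w1 has Hessian [[2,1],[1,2]]
w = torch.tensor([1.0, 2.0], requires_grad=True)
loss = (w ** 2).sum() + w.prod()
print(hessian_diag(loss, [w]))  # [tensor([2., 2.])]
```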
| null | null | 2403.02682 | null | null | http://arxiv.org/pdf/2403.02682v1 | 2024-03-05T06:10:22Z | 2024-03-05T06:10:22Z | Time Weaver: A Conditional Time Series Generation Model | Imagine generating a city's electricity demand pattern based on weather, the presence of an electric vehicle, and location, which could be used for capacity planning during a winter freeze. Such real-world time series are often enriched with paired heterogeneous contextual metadata (weather, location, etc.). Current approaches to time series generation often ignore this paired metadata, and its heterogeneity poses several practical challenges in adapting existing conditional generation approaches from the image, audio, and video domains to the time series domain. To address this gap, we introduce Time Weaver, a novel diffusion-based model that leverages the heterogeneous metadata in the form of categorical, continuous, and even time-variant variables to significantly improve time series generation. Additionally, we show that naive extensions of standard evaluation metrics from the image to the time series domain are insufficient. These metrics do not penalize conditional generation approaches for their poor specificity in reproducing the metadata-specific features in the generated time series. Thus, we introduce a novel evaluation metric that accurately captures the specificity of conditional generation and the realism of the generated time series. We show that Time Weaver outperforms state-of-the-art benchmarks, such as Generative Adversarial Networks (GANs), by up to 27% in downstream classification tasks on real-world energy, medical, air quality, and traffic data sets. | Sai Shankar Narasimhan, Shubhankar Agarwal, Oguzhan Akcin, Sujay Sanghavi, Sandeep Chinchali |
| null | null | 2403.02683 | null | null | http://arxiv.org/pdf/2403.02683v2 | 2024-05-13T05:42:03Z | 2024-03-05T06:10:28Z | Learning to Defer to a Population: A Meta-Learning Approach | The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert. All existing work on L2D assumes that each expert is well-identified, and if any expert were to change, the system should be re-trained. In this work, we alleviate this constraint, formulating an L2D system that can cope with never-before-seen experts at test-time. We accomplish this by using meta-learning, considering both optimization- and model-based variants. Given a small context set to characterize the currently available expert, our framework can quickly adapt its deferral policy. For the model-based approach, we employ an attention mechanism that is able to look for points in the context set that are similar to a given test point, leading to an even more precise assessment of the expert's abilities. In the experiments, we validate our methods on image recognition, traffic sign detection, and skin lesion diagnosis benchmarks. | Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick |
| null | null | 2403.02688 | null | null | http://arxiv.org/pdf/2403.02688v2 | 2024-05-31T20:24:47Z | 2024-03-05T06:17:13Z | DOCTOR: Dynamic On-Chip Temporal Variation Remediation Toward Self-Corrected Photonic Tensor Accelerators | Photonic computing has emerged as a promising solution for accelerating computation-intensive artificial intelligence (AI) workloads, offering unparalleled speed and energy efficiency, especially in resource-limited, latency-sensitive edge computing environments. However, the deployment of analog photonic tensor accelerators encounters reliability challenges due to hardware noise and environmental variations. While off-chip noise-aware training and on-chip training have been proposed to enhance the variation tolerance of optical neural accelerators with moderate, static noise, we observe a notable performance degradation over time due to temporally drifting variations, which requires a real-time, in-situ calibration mechanism. To tackle these challenging reliability issues, for the first time, we propose a lightweight dynamic on-chip remediation framework, dubbed DOCTOR, providing adaptive, in-situ accuracy recovery against temporally drifting noise. The DOCTOR framework intelligently monitors the chip status using adaptive probing and performs fast in-situ training-free calibration to restore accuracy when necessary. Recognizing nonuniform spatial variation distributions across devices and tensor cores, we also propose a variation-aware architectural remapping strategy to avoid executing critical tasks on noisy devices. Extensive experiments show that our proposed framework can guarantee sustained performance under drifting variations with 34% higher accuracy and 2-3 orders-of-magnitude lower overhead compared to state-of-the-art on-chip training methods. Our code is open-sourced at https://github.com/ScopeX-ASU/DOCTOR. | Haotian Lu, Sanmitra Banerjee, Jiaqi Gu |
| null | null | 2403.02690 | null | null | http://arxiv.org/pdf/2403.02690v1 | 2024-03-05T06:20:49Z | 2024-03-05T06:20:49Z | Dirichlet-based Per-Sample Weighting by Transition Matrix for Noisy Label Learning | For learning with noisy labels, the transition matrix, which explicitly models the relation between the noisy label distribution and the clean label distribution, has been utilized to achieve statistical consistency of either the classifier or the risk. Previous research has focused more on how to estimate this transition matrix well, rather than how to utilize it. We propose that good utilization of the transition matrix is crucial and suggest a new utilization method based on resampling, coined RENT. Specifically, we first demonstrate that current utilizations can have potential limitations for implementation. As an extension to reweighting, we suggest the Dirichlet distribution-based per-sample Weight Sampling (DWS) framework, and compare reweighting and resampling under the DWS framework. With the analyses from DWS, we propose RENT, a REsampling method with Noise Transition matrix. Empirically, RENT consistently outperforms existing transition matrix utilization methods, including reweighting, on various benchmark datasets. Our code is available at https://github.com/BaeHeeSun/RENT. | HeeSun Bae, Seungjae Shin, Byeonghu Na, Il-Chul Moon |
null | null |
2403.02694
| null | null |
http://arxiv.org/pdf/2403.02694v2
|
2024-04-03T16:06:30Z
|
2024-03-05T06:23:50Z
|
Privacy-Aware Semantic Cache for Large Language Models
|
Large Language Models (LLMs) like ChatGPT and Llama2 have revolutionized natural language processing and search engine dynamics. However, these models incur exceptionally high computational costs. For instance, GPT-3 consists of 175 billion parameters, where inference demands billions of floating-point operations. Caching is a natural solution to reduce LLM inference costs on repeated queries, which constitute about 31% of the total queries. However, existing caching methods are incapable of finding semantic similarities among LLM queries, leading to unacceptable false hit-and-miss rates. This paper introduces MeanCache, a user-centric semantic cache for LLMs that identifies semantically similar queries to determine cache hit or miss. Using MeanCache, the response to a user's semantically similar query can be retrieved from a local cache rather than re-querying the LLM, thus reducing costs, service provider load, and environmental impact. Existing caching solutions for LLMs raise privacy and scalability concerns and perform wasteful query requests. MeanCache leverages Federated Learning (FL) to collaboratively train a query similarity model across LLM users without violating privacy. By placing a local cache in each user's device and using FL, MeanCache reduces latency and costs and enhances model performance, resulting in lower false hit rates. MeanCache compresses the embedding dimensions to minimize cache storage and also finds the optimal cosine similarity threshold. Our experiments, benchmarked against the state-of-the-art caching method, reveal that MeanCache attains an approximately 17% higher F-score and a 20% increase in precision during semantic cache hit-and-miss decisions. It also reduces the storage requirement by 83% and accelerates semantic cache hit-and-miss decisions by 11%.
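As a rough illustration of the cache-lookup mechanism described above, the following sketch assumes a query-embedding function (which MeanCache trains with federated learning; here a toy stand-in) and a fixed cosine-similarity threshold; `SemanticCache` and `toy_embed` are hypothetical names, not the paper's API.

```python
import numpy as np

class SemanticCache:
    """Minimal sketch of a client-side semantic cache in the spirit of
    MeanCache; the embedding model is assumed given and stubbed here."""

    def __init__(self, embed_fn, threshold=0.85):
        self.embed_fn = embed_fn          # query -> unit-norm vector
        self.threshold = threshold        # cosine-similarity cutoff
        self.keys, self.values = [], []

    def lookup(self, query):
        if not self.keys:
            return None
        q = self.embed_fn(query)
        sims = np.stack(self.keys) @ q    # cosine sims (unit-norm vectors)
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= self.threshold else None

    def insert(self, query, response):
        self.keys.append(self.embed_fn(query))
        self.values.append(response)

def toy_embed(text, dim=64):
    # Deterministic stand-in for a learned embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

cache = SemanticCache(toy_embed, threshold=0.85)
cache.insert("What is the capital of France?", "Paris")
print(cache.lookup("What is the capital of France?"))  # cache hit -> "Paris"
```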
|
[
"['Waris Gill' 'Mohamed Elidrisi' 'Pallavi Kalapatapu' 'Ali Anwar'\n 'Muhammad Ali Gulzar']"
] |
null | null |
2403.02695
| null | null |
http://arxiv.org/pdf/2403.02695v2
|
2024-06-04T21:25:20Z
|
2024-03-05T06:23:55Z
|
Controllable Prompt Tuning For Balancing Group Distributional Robustness
|
Models trained on data composed of different groups or domains can suffer from severe performance degradation under distribution shifts. While recent methods have largely focused on optimizing the worst-group objective, this often comes at the expense of good performance on other groups. To address this problem, we introduce an optimization scheme to achieve good performance across groups and find a good solution for all without severely sacrificing performance on any of them. However, directly applying such optimization involves updating the parameters of the entire network, making it both computationally expensive and challenging. Thus, we introduce Controllable Prompt Tuning (CPT), which couples our approach with prompt-tuning techniques. On spurious correlation benchmarks, our procedures achieve state-of-the-art results across both transformer and non-transformer architectures, as well as unimodal and multimodal data, while requiring only 0.4% tunable parameters.
|
[
"['Hoang Phan' 'Andrew Gordon Wilson' 'Qi Lei']"
] |
null | null |
2403.02697
| null | null |
http://arxiv.org/pdf/2403.02697v1
|
2024-03-05T06:25:19Z
|
2024-03-05T06:25:19Z
|
Noise misleads rotation invariant algorithms on sparse targets
|
It is well known that the class of rotation invariant algorithms is suboptimal even for learning sparse linear problems when the number of examples is below the "dimension" of the problem. This class includes any gradient descent trained neural net with a fully-connected input layer (initialized with a rotationally symmetric distribution). The simplest sparse problem is learning a single feature out of $d$ features. In that case the classification error or regression loss grows with $1-k/n$, where $k$ is the number of examples seen. These lower bounds become vacuous when the number of examples $k$ reaches the dimension $d$. We show that when noise is added to this sparse linear problem, rotation invariant algorithms are still suboptimal after seeing $d$ or more examples. We prove this via a lower bound for the Bayes optimal algorithm on a rotationally symmetrized problem. We then prove much lower upper bounds on the same problem for simple non-rotation invariant algorithms. Finally, we analyze the gradient flow trajectories of many standard optimization algorithms in some simple cases and show how they veer toward or away from the sparse targets. We believe that our trajectory categorization will be useful in designing algorithms that can exploit sparse targets, and our method for proving lower bounds will be crucial for analyzing other families of algorithms that admit different classes of invariances.
|
[
"['Manfred K. Warmuth' 'Wojciech Kotłowski' 'Matt Jones' 'Ehsan Amid']"
] |
null | null |
2403.02713
| null | null |
http://arxiv.org/pdf/2403.02713v2
|
2024-07-13T02:12:30Z
|
2024-03-05T07:09:35Z
|
Android in the Zoo: Chain-of-Action-Thought for GUI Agents
|
Large language models (LLMs) have led to a surge of autonomous GUI agents for smartphones, which complete tasks triggered by natural language by predicting a sequence of API actions. Even though the task highly relies on past actions and visual observations, existing studies typically consider little of the semantic information carried by intermediate screenshots and screen operations. To address this, this work presents Chain-of-Action-Thought (dubbed CoAT), which takes into account the description of the previous actions, the current screen, and, more importantly, reasoning about which actions should be performed and the outcomes of the chosen action. We demonstrate that, in a zero-shot setting on three off-the-shelf LMMs, CoAT significantly improves action prediction compared to previously proposed context modeling. To further facilitate research in this line, we construct a dataset, Android-In-The-Zoo (AitZ), which contains 18,643 screen-action pairs together with chain-of-action-thought annotations. Experiments show that fine-tuning a 1B model (i.e., AUTO-UI-base) on our AitZ dataset achieves on-par performance with CogAgent-Chat-18B.
|
[
"['Jiwen Zhang' 'Jihao Wu' 'Yihua Teng' 'Minghui Liao' 'Nuo Xu' 'Xiao Xiao'\n 'Zhongyu Wei' 'Duyu Tang']"
] |
null | null |
2403.02730
| null | null |
http://arxiv.org/pdf/2403.02730v1
|
2024-03-05T07:37:47Z
|
2024-03-05T07:37:47Z
|
A Two-Stage Training Method for Modeling Constrained Systems With Neural
Networks
|
Real-world systems are often formulated as constrained optimization problems. Techniques to incorporate constraints into Neural Networks (NNs), such as Neural Ordinary Differential Equations (Neural ODEs), have been used. However, these introduce hyperparameters that require manual tuning through trial and error, raising doubts about whether the constraints were successfully incorporated into the generated model. This paper describes in detail the two-stage training method for Neural ODEs, a simple, effective, and penalty-parameter-free approach to model constrained systems. In this approach, the constrained optimization problem is rewritten as two unconstrained sub-problems that are solved in two stages. The first stage aims at finding feasible NN parameters by minimizing a measure of constraint violation. The second stage aims to find the optimal NN parameters by minimizing the loss function while staying inside the feasible region. We experimentally demonstrate that our method produces models that satisfy the constraints and also improves their predictive performance, thus ensuring compliance with critical system properties and contributing to reducing data quantity requirements. Furthermore, we show that the proposed method improves the convergence to an optimal solution and improves the explainability of Neural ODE models. Our proposed two-stage training method can be used with any NN architecture.
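A minimal sketch of the two-stage idea on a toy constrained problem (not the paper's Neural ODE setting): stage 1 minimizes a measure of constraint violation, and stage 2 minimizes the loss while staying feasible. The tangent-space projection used in stage 2 is one illustrative way to stay feasible, not necessarily the authors' exact mechanism.

```python
import numpy as np

# Toy problem: minimize f(x) = ||x - target||^2 subject to g(x) = sum(x) - 1 = 0.
target = np.array([0.8, 0.6])
f_grad = lambda x: 2 * (x - target)
g      = lambda x: x.sum() - 1.0
g_grad = lambda x: np.ones_like(x)

x, lr = np.zeros(2), 0.1

# Stage 1: find a feasible point by minimizing the violation g(x)^2.
for _ in range(200):
    x -= lr * 2 * g(x) * g_grad(x)

# Stage 2: minimize the loss while staying (approximately) feasible by
# projecting the loss gradient onto the tangent space of the constraint.
for _ in range(200):
    grad = f_grad(x)
    n = g_grad(x)
    grad -= (grad @ n) / (n @ n) * n   # remove the infeasible direction
    x -= lr * grad

print(x, g(x))  # near the constrained optimum (0.6, 0.4), with g(x) ~ 0
```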
|
[
"['C. Coelho' 'M. Fernanda P. Costa' 'L. L. Ferrás']"
] |
null | null |
2403.02737
| null | null |
http://arxiv.org/pdf/2403.02737v1
|
2024-03-05T07:45:29Z
|
2024-03-05T07:45:29Z
|
Neural Fractional Differential Equations
|
Fractional Differential Equations (FDEs) are essential tools for modelling complex systems in science and engineering. They extend the traditional concepts of differentiation and integration to non-integer orders, enabling a more precise representation of processes characterised by non-local and memory-dependent behaviours. This property is useful in systems where variables do not respond to changes instantaneously, but instead exhibit a strong memory of past interactions. Having this in mind, and drawing inspiration from Neural Ordinary Differential Equations (Neural ODEs), we propose the Neural FDE, a novel deep neural network architecture that adjusts an FDE to the dynamics of data. This work provides a comprehensive overview of the numerical method employed in Neural FDEs and the Neural FDE architecture. The numerical outcomes suggest that, despite being more computationally demanding, the Neural FDE may outperform the Neural ODE in modelling systems with memory or dependencies on past states, and it can effectively be applied to learn more intricate dynamical systems.
|
[
"['C. Coelho' 'M. Fernanda P. Costa' 'L. L. Ferrás']"
] |
null | null |
2403.02741
| null | null |
http://arxiv.org/pdf/2403.02741v2
|
2024-06-04T17:26:30Z
|
2024-03-05T07:51:38Z
|
State-Constrained Zero-Sum Differential Games with One-Sided Information
|
We study zero-sum differential games with state constraints and one-sided information, where the informed player (Player 1) has a categorical payoff type unknown to the uninformed player (Player 2). The goal of Player 1 is to minimize his payoff without violating the constraints, while that of Player 2 is to violate the state constraints if possible, or to maximize the payoff otherwise. One example of the game is a man-to-man matchup in football. Without state constraints, Cardaliaguet (2007) showed that the value of such a game exists and is convex in the common belief of players. Our theoretical contribution is an extension of this result to games with state constraints and the derivation of the primal and dual subdynamic principles necessary for computing behavioral strategies. Different from existing works that are concerned with the scalability of no-regret learning in games with discrete dynamics, our study reveals the underlying structure of strategies for belief manipulation resulting from information asymmetry and state constraints. This structure will be necessary for scalable learning on games with continuous actions and long time windows. We use a simplified football game to demonstrate the utility of this work, where we reveal player positions and belief states in which the attacker should (or should not) play specific random deceptive moves to take advantage of information asymmetry, and compute how the defender should respond.
|
[
"['Mukesh Ghimire' 'Lei Zhang' 'Zhe Xu' 'Yi Ren']"
] |
null | null |
2403.02746
| null | null |
http://arxiv.org/pdf/2403.02746v3
|
2024-03-23T09:51:35Z
|
2024-03-05T08:02:00Z
|
Learning without Exact Guidance: Updating Large-scale High-resolution
Land Cover Maps from Low-resolution Historical Labels
|
Large-scale high-resolution (HR) land-cover mapping is a vital task to survey the Earth's surface and resolve many challenges facing humanity. However, it is still a non-trivial task hindered by complex ground details, various landforms, and the scarcity of accurate training labels over a wide-span geographic area. In this paper, we propose an efficient, weakly supervised framework (Paraformer) to guide large-scale HR land-cover mapping with easy-access historical land-cover data of low resolution (LR). Specifically, existing land-cover mapping approaches reveal the dominance of CNNs in preserving local ground details but still suffer from insufficient global modeling in various landforms. Therefore, we design a parallel CNN-Transformer feature extractor in Paraformer, consisting of a downsampling-free CNN branch and a Transformer branch, to jointly capture local and global contextual information. Besides, facing the spatial mismatch of training data, a pseudo-label-assisted training (PLAT) module is adopted to reasonably refine LR labels for weakly supervised semantic segmentation of HR images. Experiments on two large-scale datasets demonstrate the superiority of Paraformer over other state-of-the-art methods for automatically updating HR land-cover maps from LR historical labels.
|
[
"['Zhuohong Li' 'Wei He' 'Jiepan Li' 'Fangxiao Lu' 'Hongyan Zhang']"
] |
null | null |
2403.02765
| null | null |
http://arxiv.org/pdf/2403.02765v1
|
2024-03-05T08:34:04Z
|
2024-03-05T08:34:04Z
|
G4-Attention: Deep Learning Model with Attention for predicting DNA
G-Quadruplexes
|
G-Quadruplexes are four-stranded non-canonical nucleic acid secondary structures formed by the stacking arrangement of guanine tetramers. They are involved in a wide range of biological roles because of their exceptionally unique and distinct structural characteristics. After the completion of the human genome sequencing project, many bioinformatic algorithms were introduced to predict active G4 regions in vitro based on the canonical G4 sequence elements, G-richness, and G-skewness, as well as non-canonical sequence features. Recently, sequencing techniques like G4-seq and G4-ChIP-seq were developed to map G4s in vitro and in vivo, respectively, at a few hundred base resolution. Subsequently, several machine learning approaches were developed for predicting the G4 regions using the existing databases. However, their prediction models were simplistic, and the prediction accuracy was notably poor. In response, here we propose a novel convolutional neural network with Bi-LSTM and attention layers, named G4-attention, to predict G4-forming sequences with improved accuracy. G4-attention achieves high accuracy and attains state-of-the-art results in the G4 prediction task. Our model also predicts the G4 regions accurately in highly class-imbalanced datasets. In addition, the developed model, trained on the human genome dataset, can be applied to any non-human genome DNA sequences to predict G4 formation propensities.
|
[
"['Shrimon Mukherjee' 'Pulakesh Pramanik' 'Partha Basuchowdhuri'\n 'Santanu Bhattacharya']"
] |
null | null |
2403.02772
| null | null |
http://arxiv.org/pdf/2403.02772v1
|
2024-03-05T08:38:25Z
|
2024-03-05T08:38:25Z
|
Rehabilitation Exercise Quality Assessment through Supervised
Contrastive Learning with Hard and Soft Negatives
|
Exercise-based rehabilitation programs have proven to be effective in enhancing the quality of life and reducing mortality and rehospitalization rates. AI-driven virtual rehabilitation, which allows patients to independently complete exercises at home, utilizes AI algorithms to analyze exercise data, providing feedback to patients and updating clinicians on their progress. These programs commonly prescribe a variety of exercise types, leading to a distinct challenge in rehabilitation exercise assessment datasets: while abundant in overall training samples, these datasets often have a limited number of samples for each individual exercise type. This disparity hampers the ability of existing approaches to train generalizable models with such a small sample size per exercise. Addressing this issue, our paper introduces a novel supervised contrastive learning framework with hard and soft negative samples that effectively utilizes the entire dataset to train a single model applicable to all exercise types. This model, with a Spatial-Temporal Graph Convolutional Network (ST-GCN) architecture, demonstrated enhanced generalizability across exercises and a decrease in overall complexity. Through extensive experiments on three publicly available rehabilitation exercise assessment datasets, the University of Idaho-Physical Rehabilitation Movement Data (UI-PRMD), IntelliRehabDS (IRDS), and KInematic assessment of MOvement and clinical scores for remote monitoring of physical REhabilitation (KIMORE), our method has shown to surpass existing methods, setting a new benchmark in rehabilitation exercise assessment accuracy.
|
[
"['Mark Karlov' 'Ali Abedi' 'Shehroz S. Khan']"
] |
null | null |
2403.02774
| null | null |
http://arxiv.org/pdf/2403.02774v1
|
2024-03-05T08:41:41Z
|
2024-03-05T08:41:41Z
|
Fast, Scale-Adaptive, and Uncertainty-Aware Downscaling of Earth System
Model Fields with Generative Foundation Models
|
Accurate and high-resolution Earth system model (ESM) simulations are essential to assess the ecological and socio-economic impacts of anthropogenic climate change, but are computationally too expensive. Recent machine learning approaches have shown promising results in downscaling ESM simulations, outperforming state-of-the-art statistical approaches. However, existing methods require computationally costly retraining for each ESM and extrapolate poorly to climates unseen during training. We address these shortcomings by learning a consistency model (CM) that efficiently and accurately downscales arbitrary ESM simulations without retraining in a zero-shot manner. Our foundation model approach yields probabilistic downscaled fields at resolution only limited by the observational reference data. We show that the CM outperforms state-of-the-art diffusion models at a fraction of computational cost while maintaining high controllability on the downscaling task. Further, our method generalizes to climate states unseen during training without explicitly formulated physical constraints.
|
[
"['Philipp Hess' 'Michael Aich' 'Baoxiang Pan' 'Niklas Boers']"
] |
null | null |
2403.02775
| null | null |
http://arxiv.org/pdf/2403.02775v1
|
2024-03-05T08:45:30Z
|
2024-03-05T08:45:30Z
|
EasyQuant: An Efficient Data-free Quantization Algorithm for LLMs
|
Large language models (LLMs) have proven to be far superior to conventional methods in various tasks. However, their expensive computations and high memory requirements are prohibitive for deployment. Model quantization is an effective method for reducing this overhead. The problem is that in most previous works, the quantized model was calibrated using a few samples from the training data, which might affect the generalization of the quantized LLMs to unknown cases and tasks. Hence in this work, we explore an important question: Can we design a data-independent quantization method for LLMs to guarantee their generalization performance? We propose EasyQuant, a training-free and data-independent weight-only quantization algorithm for LLMs. Our observation indicates that two factors, outliers in the weights and the quantization ranges, are essential for reducing the quantization error. Therefore, in EasyQuant, we leave the outliers (less than 1%) unchanged and optimize the quantization range to reduce the reconstruction error. With these methods, we surprisingly find that EasyQuant achieves comparable performance to the original model. Since EasyQuant does not depend on any training data, the generalization performance of quantized LLMs is safely guaranteed. Moreover, EasyQuant can be implemented in parallel so that the quantized model can be attained in a few minutes even for LLMs over 100B. To the best of our knowledge, this is the first work to achieve almost lossless quantization performance for LLMs under a data-independent setting, and our algorithm runs over 10 times faster than data-dependent methods.
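The following sketch illustrates the two ingredients named above, outlier preservation and range optimization, on a single weight tensor; the grid search and quantile cutoff are simplifications assumed here, not EasyQuant's exact procedure.

```python
import numpy as np

def easyquant_like(w, bits=4, outlier_frac=0.01, n_grid=100):
    """Illustrative sketch of outlier-isolated, range-optimized weight
    quantization (inspired by EasyQuant; details simplified)."""
    w = np.asarray(w, dtype=np.float64)
    # 1) Keep the largest-magnitude weights (<1%) in full precision.
    cut = np.quantile(np.abs(w), 1.0 - outlier_frac)
    outliers = np.abs(w) > cut
    # 2) Grid-search the clipping range that minimizes reconstruction error
    #    on the remaining weights (data-independent: uses weights only).
    body = w[~outliers]
    qmax = 2 ** (bits - 1) - 1
    best_err, best_s = np.inf, None
    for r in np.linspace(0.1, 1.0, n_grid) * np.abs(body).max():
        s = r / qmax
        deq = np.clip(np.round(body / s), -qmax - 1, qmax) * s
        err = np.sum((deq - body) ** 2)
        if err < best_err:
            best_err, best_s = err, s
    # 3) Reassemble: quantized body plus full-precision outliers.
    out = np.clip(np.round(w / best_s), -qmax - 1, qmax) * best_s
    out[outliers] = w[outliers]
    return out

w = np.random.default_rng(0).standard_normal(4096)
print(np.mean((easyquant_like(w) - w) ** 2))  # small reconstruction error
```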
|
[
"['Hanlin Tang' 'Yifu Sun' 'Decheng Wu' 'Kai Liu' 'Jianchen Zhu'\n 'Zhanhui Kang']"
] |
null | null |
2403.02777
| null | null |
http://arxiv.org/pdf/2403.02777v1
|
2024-03-05T08:46:54Z
|
2024-03-05T08:46:54Z
|
A Zero-Shot Reinforcement Learning Strategy for Autonomous Guidewire
Navigation
|
Purpose: The treatment of cardiovascular diseases requires complex and challenging navigation of a guidewire and catheter. This often leads to lengthy interventions during which the patient and clinician are exposed to X-ray radiation. Deep Reinforcement Learning approaches have shown promise in learning this task and may be the key to automating catheter navigation during robotized interventions. Yet, existing training methods show limited capabilities at generalizing to unseen vascular anatomies, requiring to be retrained each time the geometry changes. Methods: In this paper, we propose a zero-shot learning strategy for three-dimensional autonomous endovascular navigation. Using a very small training set of branching patterns, our reinforcement learning algorithm is able to learn a control that can then be applied to unseen vascular anatomies without retraining. Results: We demonstrate our method on 4 different vascular systems, with an average success rate of 95% at reaching random targets on these anatomies. Our strategy is also computationally efficient, allowing the training of our controller to be performed in only 2 hours. Conclusion: Our training method proved its ability to navigate unseen geometries with different characteristics, thanks to a nearly shape-invariant observation space.
|
[
"['Valentina Scarponi' 'Michel Duprez' 'Florent Nageotte' 'Stéphane Cotin']"
] |
null | null |
2403.02780
| null | null |
http://arxiv.org/pdf/2403.02780v1
|
2024-03-05T08:52:16Z
|
2024-03-05T08:52:16Z
|
Data Collaboration Analysis Over Matrix Manifolds
|
The effectiveness of machine learning (ML) algorithms is deeply intertwined with the quality and diversity of their training datasets. Improved datasets, marked by superior quality, enhance the predictive accuracy and broaden the applicability of models across varied scenarios. Researchers often integrate data from multiple sources to mitigate biases and limitations of single-source datasets. However, this extensive data amalgamation raises significant ethical concerns, particularly regarding user privacy and the risk of unauthorized data disclosure. Various global legislative frameworks have been established to address these privacy issues. While crucial for safeguarding privacy, these regulations can complicate the practical deployment of ML technologies. Privacy-Preserving Machine Learning (PPML) addresses this challenge by safeguarding sensitive information, from health records to geolocation data, while enabling the secure use of this data in developing robust ML models. Within this realm, the Non-Readily Identifiable Data Collaboration (NRI-DC) framework emerges as an innovative approach, potentially resolving the 'data island' issue among institutions through non-iterative communication and robust privacy protections. However, in its current state, the NRI-DC framework faces model performance instability due to theoretical unsteadiness in creating collaboration functions. This study establishes a rigorous theoretical foundation for these collaboration functions and introduces new formulations through optimization problems on matrix manifolds and efficient solutions. Empirical analyses demonstrate that the proposed approach, particularly the formulation over orthogonal matrix manifolds, significantly enhances performance, maintaining consistency and efficiency without compromising communication efficiency or privacy protections.
|
[
"['Keiyu Nosaka' 'Akiko Yoshise']"
] |
null | null |
2403.02786
| null | null |
http://arxiv.org/pdf/2403.02786v1
|
2024-03-05T08:59:45Z
|
2024-03-05T08:59:45Z
|
Semi-Supervised Graph Representation Learning with Human-centric
Explanation for Predicting Fatty Liver Disease
|
Addressing the challenge of limited labeled data in clinical settings, particularly in the prediction of fatty liver disease, this study explores the potential of graph representation learning within a semi-supervised learning framework. Leveraging graph neural networks (GNNs), our approach constructs a subject similarity graph to identify risk patterns from health checkup data. The effectiveness of various GNN approaches in this context is demonstrated, even with minimal labeled samples. Central to our methodology is the inclusion of human-centric explanations through explainable GNNs, providing personalized feature importance scores for enhanced interpretability and clinical relevance, thereby underscoring the potential of our approach in advancing healthcare practices with a keen focus on graph representation learning and human-centric explanation.
|
[
"['So Yeon Kim' 'Sehee Wang' 'Eun Kyung Choe']"
] |
null | null |
2403.02794
| null | null |
http://arxiv.org/pdf/2403.02794v1
|
2024-03-05T09:08:20Z
|
2024-03-05T09:08:20Z
|
A Distance Metric Learning Model Based On Variational Information
Bottleneck
|
In recent years, personalized recommendation technology has flourished and become one of the hot research directions. The matrix factorization model and the metric learning model, proposed successively, have been widely studied and applied. The latter uses the Euclidean distance instead of the dot product used by the former to measure the latent space vectors. While avoiding the shortcomings of the dot product, the assumption of Euclidean distance is neglected, resulting in limited recommendation quality of the model. In order to solve this problem, this paper combines the Variational Information Bottleneck with the metric learning model for the first time, and proposes a new metric learning model, VIB-DML (Variational Information Bottleneck Distance Metric Learning), for rating prediction, which limits the mutual information of the latent space feature vectors to improve the robustness of the model and satisfies the assumption of Euclidean distance by decoupling the latent space feature vectors. The experimental results are compared using the root mean square error (RMSE) on three public datasets. The results show that the generalization ability of VIB-DML is excellent. Compared with the general metric learning model MetricF, the prediction error is reduced by 7.29%. Finally, the paper demonstrates the strong robustness of VIB-DML through experiments.
|
[
"['YaoDan Zhang' 'Zidong Wang' 'Ru Jia' 'Ru Li']"
] |
null | null |
2403.02810
| null | null |
http://arxiv.org/pdf/2403.02810v1
|
2024-03-05T09:25:31Z
|
2024-03-05T09:25:31Z
|
Dynamic Gaussian Graph Operator: Learning parametric partial
differential equations in arbitrary discrete mechanics problems
|
Deep learning methods can be employed to solve physical systems governed by parametric partial differential equations (PDEs) owing to massive scientific data. This has been refined into operator learning, which focuses on learning non-linear mappings between infinite-dimensional function spaces, offering an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant and uniform discretization, leading to deficient generalization across arbitrary discretization schemes for the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), that extends neural operators to learning parametric PDEs in arbitrary discrete mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map the observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum approximation and uses a dynamic message-passing graph to depict the interrelation within the integral term. A Fourier Neural Operator is selected to localize the metric vectors in the spatial and frequency domains. Metric vectors are regarded as located on a latent uniform domain, wherein spatial and spectral transformations offer highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by applying it to solve numerical arbitrary discrete mechanics problems in comparison with mainstream neural operators. Ablation experiments are implemented to demonstrate the effectiveness of the spatial transformation in the DGG kernel. The proposed method is utilized to forecast the stress field of hyper-elastic material with geometrically variable voids as an engineering application.
|
[
"['Chu Wang' 'Jinhong Wu' 'Yanzhi Wang' 'Zhijian Zha' 'Qi Zhou']"
] |
null | null |
2403.02814
| null | null |
http://arxiv.org/pdf/2403.02814v1
|
2024-03-05T09:33:36Z
|
2024-03-05T09:33:36Z
|
InjectTST: A Transformer Method of Injecting Global Information into
Independent Channels for Long Time Series Forecasting
|
Transformer has become one of the most popular architectures for multivariate time series (MTS) forecasting. Recent Transformer-based MTS models generally prefer channel-independent structures with the observation that channel independence can alleviate noise and distribution drift issues, leading to more robustness. Nevertheless, it is essential to note that channel dependency remains an inherent characteristic of MTS, carrying valuable information. Designing a model that incorporates merits of both channel-independent and channel-mixing structures is a key to further improvement of MTS forecasting, which poses a challenging conundrum. To address the problem, an injection method for global information into channel-independent Transformer, InjectTST, is proposed in this paper. Instead of designing a channel-mixing model directly, we retain the channel-independent backbone and gradually inject global information into individual channels in a selective way. A channel identifier, a global mixing module and a self-contextual attention module are devised in InjectTST. The channel identifier can help Transformer distinguish channels for better representation. The global mixing module produces cross-channel global information. Through the self-contextual attention module, the independent channels can selectively concentrate on useful global information without robustness degradation, and channel mixing is achieved implicitly. Experiments indicate that InjectTST can achieve stable improvement compared with state-of-the-art models.
|
[
"['Ce Chi' 'Xing Wang' 'Kexin Yang' 'Zhiyan Song' 'Di Jin' 'Lin Zhu'\n 'Chao Deng' 'Junlan Feng']"
] |
null | null |
2403.02821
| null | null |
http://arxiv.org/pdf/2403.02821v2
|
2024-04-04T12:47:28Z
|
2024-03-05T09:44:51Z
|
An Adaptive Hydropower Management Approach for Downstream Ecosystem
Preservation
|
Hydropower plants play a pivotal role in advancing clean and sustainable energy production, contributing significantly to the global transition towards renewable energy sources. However, hydropower plants are currently perceived both positively as sources of renewable energy and negatively as disruptors of ecosystems. In this work, we highlight the overlooked potential of using hydropower plants as protectors of ecosystems through adaptive ecological discharges. To advocate for this perspective, we propose using a neural network to predict the minimum ecological discharge value at each desired time. Additionally, we present a novel framework that seamlessly integrates it into hydropower management software, taking advantage of the well-established approach of using traditional constrained optimisation algorithms. This novel approach not only protects ecosystems from climate change but also has the potential to increase electricity production.
|
[
"['C. Coelho' 'M. Jing' 'M. Fernanda P. Costa' 'L. L. Ferrás']"
] |
null | null |
2403.02833
| null | null |
http://arxiv.org/pdf/2403.02833v2
|
2024-05-01T06:40:53Z
|
2024-03-05T10:09:31Z
|
SOFIM: Stochastic Optimization Using Regularized Fisher Information
Matrix
|
This paper introduces a new stochastic optimization method based on the regularized Fisher information matrix (FIM), named SOFIM, which can efficiently utilize the FIM to approximate the Hessian matrix for finding Newton's gradient update in large-scale stochastic optimization of machine learning models. It can be viewed as a variant of natural gradient descent, where the challenge of storing and calculating the full FIM is addressed by making use of the regularized FIM and directly finding the gradient update direction via Sherman-Morrison matrix inversion. Additionally, like the popular Adam method, SOFIM uses the first moment of the gradient to address the issue of non-stationary objectives across mini-batches due to heterogeneous data. The use of the regularized FIM and Sherman-Morrison matrix inversion leads to an improved convergence rate with the same space and time complexities as stochastic gradient descent (SGD) with momentum. Extensive experiments on training deep learning models with several benchmark image classification datasets demonstrate that the proposed SOFIM outperforms SGD with momentum and several state-of-the-art Newton optimization methods in terms of convergence speed for achieving pre-specified objectives of training and test losses as well as test accuracy.
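A compact sketch of the core update, assuming the regularized FIM is approximated by the rank-1 form lam*I + g g^T (a simplification of the paper's estimator) so that the Newton-like direction follows from a single Sherman-Morrison inversion; hyperparameters are illustrative.

```python
import numpy as np

def sofim_direction(grad, m, lam=1e-2, beta=0.9):
    """One SOFIM-style update direction (illustrative sketch).

    Approximates the regularized Fisher information matrix as
    F = lam*I + g g^T and solves F d = m via the Sherman-Morrison
    identity, where m is the first moment of the gradient (as in Adam)."""
    g = grad
    m = beta * m + (1.0 - beta) * g          # first-moment update
    # Sherman-Morrison: (lam*I + g g^T)^{-1} m
    d = m / lam - g * (g @ m) / (lam * (lam + g @ g))
    return d, m

rng = np.random.default_rng(0)
m = np.zeros(10)
for step in range(3):
    grad = rng.standard_normal(10)           # stand-in minibatch gradient
    direction, m = sofim_direction(grad, m)
    # params -= lr * direction
```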
|
[
"['Mrinmay Sen' 'A. K. Qin' 'Gayathri C' 'Raghu Kishore N' 'Yen-Wei Chen'\n 'Balasubramanian Raman']"
] |
null | null |
2403.02846
| null | null |
http://arxiv.org/pdf/2403.02846v1
|
2024-03-05T10:36:27Z
|
2024-03-05T10:36:27Z
|
FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive
Models
|
Federated Learning (FL) thrives in training a global model with numerous clients by only sharing the parameters of their local models trained on their private training datasets. Therefore, without revealing the private datasets, the clients can obtain a deep learning (DL) model with high performance. However, recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model when adversaries, posing as benign clients, are present in a group of clients. Therefore, recent studies suggested byzantine-robust FL methods that allow the server to train an accurate global model even with adversaries present in the system. However, many existing methods require knowledge of the number of malicious clients or an auxiliary (clean) dataset, or their effectiveness reportedly decreases sharply when the private datasets are non-independently and identically distributed (non-IID). In this work, we propose FLGuard, a novel byzantine-robust FL method that detects malicious clients and discards malicious local updates by utilizing the contrastive learning technique, which has shown tremendous improvement as a self-supervised learning method. With contrastive models, we design FLGuard as an ensemble scheme to maximize the defensive capability. We evaluate FLGuard extensively under various poisoning attacks and compare the accuracy of the global model with existing byzantine-robust FL methods. FLGuard outperforms the state-of-the-art defense methods in most cases and shows drastic improvement, especially in non-IID settings. https://github.com/201younghanlee/FLGuard
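To give a flavor of the detect-and-discard step, here is a toy sketch that embeds client updates with a given encoder (standing in for FLGuard's contrastively trained models, which the actual method ensembles) and keeps the updates most aligned with the consensus; names and the keep fraction are illustrative.

```python
import numpy as np

def filter_updates(updates, encoder, keep_frac=0.5):
    """Toy sketch of the filtering idea: embed each client update, score
    clients by cosine similarity to the consensus, and discard the least
    aligned ones. The encoder is assumed given (FLGuard trains it with
    contrastive learning and ensembles several such models)."""
    z = np.stack([encoder(u) for u in updates])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    centroid = z.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = z @ centroid                      # similarity to consensus
    keep = np.argsort(scores)[-int(len(updates) * keep_frac):]
    return sorted(keep.tolist())

rng = np.random.default_rng(0)
benign = [rng.standard_normal(8) * 0.1 + 1.0 for _ in range(8)]
malicious = [-np.ones(8) * 5.0 for _ in range(2)]
print(filter_updates(benign + malicious, encoder=lambda u: u))  # benign indices
```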
|
[
"['Younghan Lee' 'Yungi Cho' 'Woorim Han' 'Ho Bae' 'Yunheung Paek']"
] |
null | null |
2403.02867
| null | null |
http://arxiv.org/pdf/2403.02867v2
|
2024-05-21T02:49:12Z
|
2024-03-05T11:21:18Z
|
Scalable Continuous-time Diffusion Framework for Network Inference and
Influence Estimation
|
The study of continuous-time information diffusion has been an important area of research for many applications in recent years. When only the diffusion traces (cascades) are accessible, cascade-based network inference and influence estimation are two essential problems to explore. Alas, existing methods exhibit limited capability to infer and process networks with more than a few thousand nodes, suffering from scalability issues. In this paper, we view the diffusion process as a continuous-time dynamical system, based on which we establish a continuous-time diffusion model. Subsequently, we instantiate the model to a scalable and effective framework (FIM) to approximate the diffusion propagation from available cascades, thereby inferring the underlying network structure. Furthermore, we undertake an analysis of the approximation error of FIM for network inference. To achieve the desired scalability for influence estimation, we devise an advanced sampling technique and significantly boost the efficiency. We also quantify the effect of the approximation error on influence estimation theoretically. Experimental results showcase the effectiveness and superior scalability of FIM on network inference and influence estimation.
|
[
"['Keke Huang' 'Ruize Gao' 'Bogdan Cautis' 'Xiaokui Xiao']"
] |
null | null |
2403.02870
| null | null |
http://arxiv.org/pdf/2403.02870v1
|
2024-03-05T11:26:22Z
|
2024-03-05T11:26:22Z
|
Precise Extraction of Deep Learning Models via Side-Channel Attacks on
Edge/Endpoint Devices
|
With growing popularity, deep learning (DL) models are becoming larger-scale, and only the companies with vast training datasets and immense computing power can manage their business serving such large models. Most of those DL models are proprietary to the companies, who thus strive to keep their private models safe from the model extraction attack (MEA), whose aim is to steal the model by training surrogate models. Nowadays, companies are inclined to offload the models from central servers to edge/endpoint devices. As revealed in the latest studies, adversaries exploit this opportunity as a new attack vector to launch side-channel attacks (SCA) on the device running the victim model and obtain various pieces of the model information, such as the model architecture (MA) and image dimension (ID). Our work provides, for the first time, a comprehensive understanding of the relationship between such SCA-exposed information and MEA, and would benefit future MEA studies on both the offensive and defensive sides in that they may learn which pieces of information exposed by SCA are more important than others. Our analysis additionally reveals that by grasping the victim model information from SCA, MEA can become highly effective and successful even without any prior knowledge of the model. Finally, to evince the practicality of our analysis results, we empirically apply SCA and subsequently carry out MEA under realistic threat assumptions. The results show up to 5.8 times better performance than when the adversary has no model information about the victim model.
|
[
"['Younghan Lee' 'Sohee Jun' 'Yungi Cho' 'Woorim Han' 'Hyungon Moon'\n 'Yunheung Paek']"
] |
null | null |
2403.02871
| null | null |
http://arxiv.org/pdf/2403.02871v2
|
2024-06-09T02:26:13Z
|
2024-03-05T11:29:05Z
|
Quantum Mixed-State Self-Attention Network
|
The rapid advancement of quantum computing has increasingly highlighted its potential in the realm of machine learning, particularly in the context of natural language processing (NLP) tasks. Quantum machine learning (QML) leverages the unique capabilities of quantum computing to offer novel perspectives and methodologies for complex data processing and pattern recognition challenges. This paper introduces a novel Quantum Mixed-State Attention Network (QMSAN), which integrates the principles of quantum computing with classical machine learning algorithms, especially self-attention networks, to enhance the efficiency and effectiveness of handling NLP tasks. The QMSAN model employs a quantum attention mechanism based on mixed states, enabling efficient direct estimation of similarity between queries and keys within the quantum domain, leading to more effective attention weight acquisition. Additionally, we propose an innovative quantum positional encoding scheme, implemented through fixed quantum gates within the quantum circuit, to enhance the model's accuracy. Experimental validation on various datasets demonstrates that the QMSAN model outperforms existing quantum and classical models in text classification, achieving significant performance improvements. The QMSAN model not only significantly reduces the number of parameters but also exceeds classical self-attention networks in performance, showcasing its strong capability in data representation and information extraction. Furthermore, our study investigates the model's robustness in different quantum noise environments, showing that QMSAN possesses commendable robustness to low noise.
|
[
"['Fu Chen' 'Qinglin Zhao' 'Li Feng' 'Chuangtao Chen' 'Yangbin Lin'\n 'Jianhong Lin']"
] |
null | null |
2403.02873
| null | null |
http://arxiv.org/pdf/2403.02873v1
|
2024-03-05T11:38:20Z
|
2024-03-05T11:38:20Z
|
A Note on High-Probability Analysis of Algorithms with Exponential,
Sub-Gaussian, and General Light Tails
|
This short note describes a simple technique for analyzing probabilistic algorithms that rely on a light-tailed (but not necessarily bounded) source of randomization. We show that the analysis of such an algorithm can be reduced, in a black-box manner and with only a small loss in logarithmic factors, to an analysis of a simpler variant of the same algorithm that uses bounded random variables and is often easier to analyze. This approach simultaneously applies to any light-tailed randomization, including exponential, sub-Gaussian, and more general fast-decaying distributions, without needing to appeal to specialized concentration inequalities. Analyses of a generalized Azuma inequality and stochastic optimization with general light-tailed noise are provided to illustrate the technique.
|
[
"['Amit Attia' 'Tomer Koren']"
] |
null | null |
2403.02882
| null | null |
http://arxiv.org/abs/2403.02882v2
|
2024-04-19T08:06:09Z
|
2024-03-05T11:41:43Z
|
Autonomous vehicle decision and control through reinforcement learning
with traffic flow randomization
|
Most of the current studies on autonomous vehicle decision-making and control tasks based on reinforcement learning are conducted in simulated environments. The training and testing in these studies are carried out under rule-based microscopic traffic flow, with little consideration of migrating them to real or near-real environments to test their performance. This may lead to degraded performance when the trained model is tested in more realistic traffic scenes. In this study, we propose a method to randomize the driving style and behavior of surrounding vehicles by randomizing certain parameters of the car-following model and the lane-changing model of rule-based microscopic traffic flow in SUMO. We trained policies with deep reinforcement learning algorithms under the domain-randomized rule-based microscopic traffic flow in freeway and merging scenes, and then tested them separately in rule-based microscopic traffic flow and high-fidelity microscopic traffic flow. Results indicate that the policy trained under domain-randomized traffic flow achieves a significantly better success rate and cumulative reward compared to the models trained under other microscopic traffic flows.
|
[
"['Yuan Lin' 'Antai Xie' 'Xiao Liu']"
] |
null | null |
2403.02884
| null | null |
http://arxiv.org/pdf/2403.02884v1
|
2024-03-05T11:42:59Z
|
2024-03-05T11:42:59Z
|
MathScale: Scaling Instruction Tuning for Mathematical Reasoning
|
Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving. However, their proficiency in solving mathematical problems remains inadequate. We propose MathScale, a simple and scalable method to create high-quality mathematical reasoning data using frontier LLMs (e.g., GPT-3.5). Inspired by the cognitive mechanism in human mathematical learning, it first extracts topics and knowledge points from seed math questions and then builds a concept graph, which is subsequently used to generate new math questions. MathScale exhibits effective scalability along the size axis of the math dataset that we generate. As a result, we create a mathematical reasoning dataset (MathScaleQA) containing two million math question-answer pairs. To evaluate the mathematical reasoning abilities of LLMs comprehensively, we construct MwpBench, a benchmark of Math Word Problems, which is a collection of ten datasets (including GSM8K and MATH) covering K-12, college, and competition level math problems. We apply MathScaleQA to fine-tune open-source LLMs (e.g., LLaMA-2 and Mistral), resulting in significantly improved capabilities in mathematical reasoning. Evaluated on MwpBench, MathScale-7B achieves state-of-the-art performance across all datasets, surpassing its best peers of equivalent size by 42.9% in micro average accuracy and 43.7% in macro average accuracy.
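The concept-graph step can be pictured with the toy sketch below: co-occurring topics and knowledge points extracted from seed questions form a graph, and random walks over it sample concept combinations that would seed prompts for new questions. The seed data, walk length, and prompt template are invented for illustration; the LLM prompting itself is omitted.

```python
import random

# Toy seed data: sets of concepts extracted from seed math questions.
seed_concepts = [
    {"linear equations", "fractions"},
    {"fractions", "ratios"},
    {"ratios", "percentages"},
    {"linear equations", "word problems"},
]

# Build a co-occurrence graph over concepts.
graph = {}
for concepts in seed_concepts:
    for c in concepts:
        graph.setdefault(c, set()).update(concepts - {c})

def sample_concepts(graph, walk_len=3, rng=random.Random(0)):
    """Random-walk the concept graph to sample a concept combination."""
    node = rng.choice(sorted(graph))
    walk = [node]
    for _ in range(walk_len - 1):
        node = rng.choice(sorted(graph[node]))
        walk.append(node)
    return walk

# A sampled combination would seed a prompt along the lines of:
# "Write a math word problem involving: {concepts}"
print(sample_concepts(graph))
```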
|
[
"['Zhengyang Tang' 'Xingxing Zhang' 'Benyou Wang' 'Furu Wei']"
] |
null | null |
2403.02886
| null | null |
http://arxiv.org/pdf/2403.02886v1
|
2024-03-05T11:44:14Z
|
2024-03-05T11:44:14Z
|
Revisiting Confidence Estimation: Towards Reliable Failure Prediction
|
Reliable confidence estimation is a challenging yet fundamental requirement in many risk-sensitive applications. However, modern deep neural networks are often overconfident for their incorrect predictions, i.e., misclassified samples from known classes, and out-of-distribution (OOD) samples from unknown classes. In recent years, many confidence calibration and OOD detection methods have been developed. In this paper, we find a general, widely existing but largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors. We investigate this problem and reveal that popular calibration and OOD detection methods often lead to worse confidence separation between correctly classified and misclassified examples, making it difficult to decide whether to trust a prediction or not. Finally, we propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance under various settings including balanced, long-tailed, and covariate-shift classification scenarios. Our study not only provides a strong baseline for reliable confidence estimation but also acts as a bridge between understanding calibration, OOD detection, and failure prediction. The code is available at https://github.com/Impression2805/FMFP.
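Flat minima of the kind exploited above are commonly sought with sharpness-aware minimization (SAM); the sketch below shows one generic SAM step on a toy objective and is an assumption-laden stand-in, not the paper's exact training recipe.

```python
import numpy as np

def sam_step(params, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step, a standard route to
    flat minima; grad_fn is assumed to return the loss gradient."""
    g = grad_fn(params)
    # Perturb toward the locally worst-case direction ...
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_adv = grad_fn(params + eps)
    # ... and descend using the gradient at the perturbed point.
    return params - lr * g_adv

# Toy quadratic: the flat-minimum-seeking updates still converge here.
grad_fn = lambda p: 2.0 * p
p = np.array([3.0, -2.0])
for _ in range(100):
    p = sam_step(p, grad_fn)
print(p)  # -> near the minimum at the origin
```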
|
[
"['Fei Zhu' 'Xu-Yao Zhang' 'Zhen Cheng' 'Cheng-Lin Liu']"
] |
null | null |
2403.02889
| null | null |
http://arxiv.org/pdf/2403.02889v2
|
2024-03-20T09:53:17Z
|
2024-03-05T11:50:01Z
|
In Search of Truth: An Interrogation Approach to Hallucination Detection
|
Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives is limited due to various reasons. One critical factor hindering their widespread adoption is the occurrence of hallucinations, where LLMs invent answers that sound realistic, yet drift away from factual truth. In this paper, we present a novel method for detecting hallucinations in large language models, which tackles a critical issue in the adoption of these models in various real-world scenarios. Through extensive evaluations across multiple datasets and LLMs, including Llama-2, we study the hallucination levels of various recent LLMs and demonstrate the effectiveness of our method to automatically detect them. Notably, we observe up to 62% hallucinations for Llama-2 in a specific experiment, where our method achieves a Balanced Accuracy (B-ACC) of 87%, all without relying on external knowledge.
|
[
"['Yakir Yehuda' 'Itzik Malkiel' 'Oren Barkan' 'Jonathan Weill'\n 'Royi Ronen' 'Noam Koenigstein']"
] |
null | null |
2403.02906
| null | null |
http://arxiv.org/pdf/2403.02906v1
|
2024-03-05T12:13:27Z
|
2024-03-05T12:13:27Z
|
Citizen Science and Machine Learning for Research and Nature
Conservation: The Case of Eurasian Lynx, Free-ranging Rodents and Insects
|
Technology is increasingly used in Nature Reserves and National Parks around the world to support conservation efforts. Endangered species, such as the Eurasian Lynx (Lynx lynx), are monitored by a network of automatic photo traps. Yet, this method produces vast amounts of data, which needs to be prepared, analyzed and interpreted. Therefore, researchers working in this area increasingly need support to process this incoming information. One opportunity is to seek support from volunteer Citizen Scientists who can help label the data, however, it is challenging to retain their interest. Another way is to automate the process with image recognition using convolutional neural networks. During the panel, we will discuss considerations related to nature research and conservation as well as opportunities for the use of Citizen Science and Machine Learning to expedite the process of data preparation, labelling and analysis.
|
[
"['Kinga Skorupska' 'Rafał Stryjek' 'Izabela Wierzbowska' 'Piotr Bebas'\n 'Maciej Grzeszczuk' 'Piotr Gago' 'Jarosław Kowalski' 'Maciej Krzywicki'\n 'Jagoda Lazarek' 'Wiesław Kopeć']"
] |
null | null |
2403.02912
| null | null |
http://arxiv.org/pdf/2403.02912v1
|
2024-03-05T12:28:00Z
|
2024-03-05T12:28:00Z
|
Mirror Descent Algorithms with Nearly Dimension-Independent Rates for
Differentially-Private Stochastic Saddle-Point Problems
|
We study the problem of differentially-private (DP) stochastic (convex-concave) saddle-points in the polyhedral setting. We propose $(\varepsilon, \delta)$-DP algorithms based on stochastic mirror descent that attain nearly dimension-independent convergence rates for the expected duality gap, a type of guarantee that was known before only for bilinear objectives. For convex-concave and first-order-smooth stochastic objectives, our algorithms attain a rate of $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{1/3}$, where $d$ is the dimension of the problem and $n$ the dataset size. Under an additional second-order-smoothness assumption, we improve the rate on the expected gap to $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{2/5}$. Under this additional assumption, we also show, by using bias-reduced gradient estimators, that the duality gap is bounded by $\log(d)/\sqrt{n} + \log(d)/[n\varepsilon]^{1/2}$ with constant success probability. This result provides evidence of the near-optimality of the approach. Finally, we show that combining our methods with acceleration techniques from online learning leads to the first algorithm for DP Stochastic Convex Optimization in the polyhedral setting that is not based on Frank-Wolfe methods. For convex and first-order-smooth stochastic objectives, our algorithms attain an excess risk of $\sqrt{\log(d)/n} + \log(d)^{7/10}/[n\varepsilon]^{2/5}$, and when additionally assuming second-order-smoothness, we improve the rate to $\sqrt{\log(d)/n} + \log(d)/\sqrt{n\varepsilon}$. Instrumental to all of these results are various extensions of the classical Maurey Sparsification Lemma, which may be of independent interest.
|
[
"['Tomás González' 'Cristóbal Guzmán' 'Courtney Paquette']"
] |
null | null |
2403.02920
| null | null |
http://arxiv.org/pdf/2403.02920v1
|
2024-03-05T12:38:14Z
|
2024-03-05T12:38:14Z
|
TaylorShift: Shifting the Complexity of Self-Attention from Squared to
Linear (and Back) using Taylor-Softmax
|
The quadratic complexity of the attention mechanism represents one of the biggest hurdles for processing long sequences using Transformers. Current methods, relying on sparse representations or stateful recurrence, sacrifice token-to-token interactions, which ultimately leads to compromises in performance. This paper introduces TaylorShift, a novel reformulation of the Taylor softmax that enables computing full token-to-token interactions in linear time and space. We analytically determine the crossover points where employing TaylorShift becomes more efficient than traditional attention, aligning closely with empirical measurements. Specifically, our findings demonstrate that TaylorShift enhances memory efficiency for sequences as short as 800 tokens and accelerates inference for inputs of approximately 1700 tokens and beyond. For shorter sequences, TaylorShift scales comparably with the vanilla attention. Furthermore, a classification benchmark across five tasks involving long sequences reveals no degradation in accuracy when employing Transformers equipped with TaylorShift. For reproducibility, we provide access to our code under https://github.com/tobna/TaylorShift.
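The mechanism can be made concrete with a small sketch: a second-order Taylor expansion of exp gives attention weights 1 + q·k + (q·k)^2/2, which factor through an explicit feature map and therefore admit linear-time computation in the sequence length. This is an illustrative reading of the idea, not the paper's optimized implementation.

```python
import numpy as np

def taylor_feature(x):
    """phi(x) such that phi(q)·phi(k) = 1 + q·k + (q·k)^2 / 2,
    the 2nd-order Taylor expansion of exp. Feature dim is 1 + d + d^2."""
    n, d = x.shape
    outer = np.einsum('ni,nj->nij', x, x).reshape(n, d * d)
    return np.concatenate([np.ones((n, 1)), x, outer / np.sqrt(2.0)], axis=1)

def taylor_attention(Q, K, V):
    """Linear-time Taylor-softmax attention (illustrative sketch)."""
    phi_q, phi_k = taylor_feature(Q), taylor_feature(K)
    kv = phi_k.T @ V                      # cost linear in sequence length
    num = phi_q @ kv
    den = phi_q @ phi_k.sum(axis=0)
    return num / den[:, None]

rng = np.random.default_rng(0)
n, d = 16, 4
Q, K, V = rng.standard_normal((3, n, d))
# Agrees with quadratic-time attention using the same Taylor weights:
W = 1 + Q @ K.T + (Q @ K.T) ** 2 / 2     # always positive, so valid weights
ref = (W / W.sum(axis=1, keepdims=True)) @ V
print(np.allclose(taylor_attention(Q, K, V), ref))  # True
```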
|
[
"['Tobias Christian Nauen' 'Sebastian Palacio' 'Andreas Dengel']"
] |
null | null |
2403.02922
| null | null |
http://arxiv.org/pdf/2403.02922v1
|
2024-03-05T12:38:54Z
|
2024-03-05T12:38:54Z
|
From Spectra to Biophysical Insights: End-to-End Learning with a Biased
Radiative Transfer Model
|
Advances in machine learning have boosted the use of Earth observation data for climate change research. Yet, the interpretability of machine-learned representations remains a challenge, particularly in understanding forests' biophysical reactions to climate change. Traditional methods in remote sensing that invert radiative transfer models (RTMs) to retrieve biophysical variables from spectral data often fail to account for biases inherent in the RTM, especially for complex forests. We propose to integrate RTMs into an auto-encoder architecture, creating an end-to-end learning approach. Our method not only corrects biases in RTMs but also outperforms traditional techniques for variable retrieval like neural network regression. Furthermore, our framework has potential generally for inverting biased physical models. The code is available on https://github.com/yihshe/ai-refined-rtm.git.
|
[
"['Yihang She' 'Clement Atzberger' 'Andrew Blake' 'Srinivasan Keshav']"
] |
null | null |
2403.02930
| null | null |
http://arxiv.org/pdf/2403.02930v2
|
2024-03-25T12:07:13Z
|
2024-03-05T12:48:29Z
|
A Second Look on BASS -- Boosting Abstractive Summarization with Unified
Semantic Graphs -- A Replication Study
|
We present a detailed replication study of the BASS framework, an abstractive summarization system based on the notion of Unified Semantic Graphs. Our investigation includes challenges in replicating key components and an ablation study to systematically isolate error sources rooted in replicating novel components. Our findings reveal discrepancies in performance compared to the original work. We highlight the significance of paying careful attention even to reasonably omitted details for replicating advanced frameworks like BASS, and emphasize key practices for writing replicable papers.
|
[
"['Osman Alperen Koraş' 'Jörg Schlötterer' 'Christin Seifert']"
] |
null | null |
2403.02936
| null | null |
http://arxiv.org/pdf/2403.02936v1
|
2024-03-05T13:03:31Z
|
2024-03-05T13:03:31Z
|
AdAM: Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN
Accelerators
|
In this paper, we propose an architecture of a novel adaptive fault-tolerant approximate multiplier tailored for ASIC-based DNN accelerators.
|
[
"['Mahdi Taheri' 'Natalia Cherezova' 'Samira Nazari' 'Ahsan Rafiq'\n 'Ali Azarpeyvand' 'Tara Ghasempouri' 'Masoud Daneshtalab' 'Jaan Raik'\n 'Maksim Jenihhin']"
] |
null | null |
2403.02938
| null | null |
http://arxiv.org/abs/2403.02938v1
|
2024-03-05T13:08:52Z
|
2024-03-05T13:08:52Z
|
AIx Speed: Playback Speed Optimization Using Listening Comprehension of
Speech Recognition Models
|
Since humans can listen to audio and watch videos at faster speeds than actually observed, we often listen to or watch these pieces of content at higher playback speeds to increase the time efficiency of content comprehension. To further utilize this capability, systems that automatically adjust the playback speed according to the user's condition and the type of content to assist in more efficient comprehension of time-series content have been developed. However, there is still room for these systems to further extend human speed-listening ability by generating speech with playback speed optimized for even finer time units and providing it to humans. In this study, we determine whether humans can hear the optimized speech and propose a system that automatically adjusts playback speed at units as small as phonemes while ensuring speech intelligibility. The system uses the speech recognizer score as a proxy for how well a human can hear a certain unit of speech and maximizes the speech playback speed to the extent that a human can hear. This method can be used to produce fast but intelligible speech. In the evaluation experiment, we compared the speech played back at a constant fast speed and the flexibly speed-up speech generated by the proposed method in a blind test and confirmed that the proposed method produced speech that was easier to listen to.
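Conceptually, the search the system performs can be sketched as below, assuming a hypothetical recognizer-confidence function and a time-stretching routine (e.g., a phase vocoder such as librosa's); the function and parameter names are stand-ins, not the paper's implementation.

```python
# Per speech unit (down to phoneme level), raise the playback speed until
# the recognizer's confidence for the reference text drops below a
# threshold; the recognizer score proxies human intelligibility.
# `asr_confidence` and `time_stretch` are hypothetical callables.

def fastest_intelligible_speed(segment, reference_text, asr_confidence,
                               time_stretch, threshold=0.9,
                               speeds=(1.0, 1.25, 1.5, 1.75, 2.0, 2.5, 3.0)):
    best = 1.0
    for s in speeds:
        sped_up = time_stretch(segment, rate=s)
        if asr_confidence(sped_up, reference_text) >= threshold:
            best = s
        else:
            break
    return best

# Usage (with real components): for each phoneme-level segment,
#   rate = fastest_intelligible_speed(seg, text, model_score, stretch_fn)
# then concatenate the stretched segments into the output audio.
```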
|
[
"['Kazuki Kawamura' 'Jun Rekimoto']"
] |
null | null |
2403.02944
| null | null |
http://arxiv.org/pdf/2403.02944v2
|
2024-05-22T03:57:41Z
|
2024-03-05T13:15:01Z
|
Neural Image Compression with Text-guided Encoding for both Pixel-level
and Perceptual Fidelity
|
Recent advances in text-guided image compression have shown great potential to enhance the perceptual quality of reconstructed images. These methods, however, tend to have significantly degraded pixel-wise fidelity, limiting their practicality. To fill this gap, we develop a new text-guided image compression algorithm that achieves both high perceptual and pixel-wise fidelity. In particular, we propose a compression framework that leverages text information mainly by text-adaptive encoding and training with joint image-text loss. By doing so, we avoid decoding based on text-guided generative models -- known for high generative diversity -- and effectively utilize the semantic information of text at a global level. Experimental results on various datasets show that our method can achieve high pixel-level and perceptual quality, with either human- or machine-generated captions. In particular, our method outperforms all baselines in terms of LPIPS, with some room for even more improvements when we use more carefully generated captions.
|
[
"['Hagyeong Lee' 'Minkyu Kim' 'Jun-Hyuk Kim' 'Seungeon Kim' 'Dokwan Oh'\n 'Jaeho Lee']"
] |
null | null |
2403.02945
| null | null |
http://arxiv.org/pdf/2403.02945v1
|
2024-03-05T13:16:37Z
|
2024-03-05T13:16:37Z
|
Unsupervised Learning Approaches for Identifying ICU Patient Subgroups:
Do Results Generalise?
|
The use of unsupervised learning to identify patient subgroups has emerged as a potentially promising direction to improve the efficiency of Intensive Care Units (ICUs). By identifying subgroups of patients with similar levels of medical resource need, ICUs could be restructured into a collection of smaller subunits, each catering to a specific group. However, it is unclear whether common patient subgroups exist across different ICUs, which would determine whether ICU restructuring could be operationalised in a standardised manner. In this paper, we tested the hypothesis that common ICU patient subgroups exist by examining whether the results from one existing study generalise to a different dataset. We extracted 16 features representing medical resource need and used consensus clustering to derive patient subgroups, replicating the previous study. We found limited similarities between our results and those of the previous study, providing evidence against the hypothesis. Our findings imply that there is significant variation between ICUs; thus, a standardised restructuring approach is unlikely to be appropriate. Instead, potential efficiency gains might be greater when the number and nature of the subunits are tailored to each ICU individually.
|
[
"['Harry Mayne' 'Guy Parsons' 'Adam Mahdi']"
] |
null | null |
2403.02946
| null | null |
http://arxiv.org/pdf/2403.02946v1
|
2024-03-05T13:17:09Z
|
2024-03-05T13:17:09Z
|
SAFFIRA: a Framework for Assessing the Reliability of
Systolic-Array-Based DNN Accelerators
|
The systolic array has emerged as a prominent architecture for Deep Neural Network (DNN) hardware accelerators, providing high-throughput and low-latency performance essential for deploying DNNs across diverse applications. However, when used in safety-critical applications, reliability assessment is mandatory to guarantee the correct behavior of DNN accelerators. While fault injection stands out as a well-established practical and robust method for reliability assessment, it is still a very time-consuming process. This paper addresses the time efficiency issue by introducing a novel hierarchical software-based hardware-aware fault injection strategy tailored for systolic array-based DNN accelerators.
|
[
"['Mahdi Taheri' 'Masoud Daneshtalab' 'Jaan Raik' 'Maksim Jenihhin'\n 'Salvatore Pappalardo' 'Paul Jimenez' 'Bastien Deveautour'\n 'Alberto Bosio']"
] |
null | null |
2403.02957
| null | null |
http://arxiv.org/pdf/2403.02957v2
|
2024-05-23T09:39:31Z
|
2024-03-05T13:25:44Z
|
On the Asymptotic Mean Square Error Optimality of Diffusion Models
|
Diffusion models (DMs) as generative priors have recently shown great potential for denoising tasks but lack theoretical understanding with respect to their mean square error (MSE) optimality. This paper proposes a novel denoising strategy inspired by the structure of the MSE-optimal conditional mean estimator (CME). The resulting DM-based denoiser can be conveniently employed using a pre-trained DM, being particularly fast by truncating reverse diffusion steps and not requiring stochastic re-sampling. We present a comprehensive (non-)asymptotic optimality analysis of the proposed diffusion-based denoiser, demonstrating polynomial-time convergence to the CME under mild conditions. Our analysis also derives a novel Lipschitz constant that depends solely on the DM's hyperparameters. Further, we offer a new perspective on DMs, showing that they inherently combine an asymptotically optimal denoiser with a powerful generator, modifiable by switching re-sampling in the reverse process on or off. The theoretical findings are thoroughly validated with experiments based on various benchmark datasets.
|
[
"['Benedikt Fesl' 'Benedikt Böck' 'Florian Strasser' 'Michael Baur'\n 'Michael Joham' 'Wolfgang Utschick']"
] |
null | null |
2403.02966
| null | null |
http://arxiv.org/pdf/2403.02966v2
|
2024-06-19T06:47:32Z
|
2024-03-05T13:43:58Z
|
Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot
Question Answering
|
Recent studies have investigated utilizing Knowledge Graphs (KGs) to enhance Question Answering (QA) performance of Large Language Models (LLMs), yet structured KG verbalization remains challenging. Existing methods, such as triple-form verbalization or free-form textual conversion of triple-form facts, encounter several issues. These include reduced evidence density due to duplicated entities or relationships, and reduced evidence clarity due to an inability to emphasize crucial evidence. To address these issues, we propose EFSum, an Evidence-focused Fact Summarization framework for enhanced QA with knowledge-augmented LLMs. We optimize an open-source LLM as a fact summarizer through distillation and preference alignment. Our extensive experiments show that EFSum improves LLMs' zero-shot QA performance, and it is possible to ensure both the helpfulness and faithfulness of the summary.
|
[
"['Sungho Ko' 'Hyunjin Cho' 'Hyungjoo Chae' 'Jinyoung Yeo' 'Dongha Lee']"
] |
null | null |
2403.02967
| null | null |
http://arxiv.org/pdf/2403.02967v2
|
2024-03-18T13:41:55Z
|
2024-03-05T13:43:58Z
|
Non-Convex Stochastic Composite Optimization with Polyak Momentum
|
The stochastic proximal gradient method is a powerful generalization of the widely used stochastic gradient descent (SGD) method and has found numerous applications in Machine Learning. However, it is well known that this method fails to converge in non-convex settings where the stochastic noise is significant (i.e., when only small or bounded batch sizes are used). In this paper, we focus on the stochastic proximal gradient method with Polyak momentum. We prove that this method attains an optimal convergence rate for non-convex composite optimization problems, regardless of batch size. Additionally, we rigorously analyze the variance-reduction effect of Polyak momentum in the composite optimization setting, and we show that the method also converges when the proximal step can only be solved inexactly. Finally, we provide numerical experiments to validate our theoretical results (a toy sketch of the method follows this entry).
|
[
"['Yuan Gao' 'Anton Rodomanov' 'Sebastian U. Stich']"
] |
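A toy numpy sketch of the method analyzed above: stochastic proximal gradient with a heavy-ball (Polyak) momentum buffer, applied to a small composite least-squares-plus-l1 problem. The step size, momentum coefficient, and batch size are arbitrary illustrative choices, not values from the paper's theory.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 20)), rng.normal(size=200)
lam, lr, beta = 0.1, 0.01, 0.9

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, m = np.zeros(20), np.zeros(20)
for it in range(1000):
    i = rng.integers(0, 200, size=8)              # small mini-batch
    grad = A[i].T @ (A[i] @ x - b[i]) / len(i)    # stochastic gradient of f
    m = beta * m + (1 - beta) * grad              # Polyak momentum buffer
    x = soft_threshold(x - lr * m, lr * lam)      # proximal step
```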
null | null |
2403.02968
| null | null |
http://arxiv.org/pdf/2403.02968v2
|
2024-04-09T06:07:17Z
|
2024-03-05T13:44:28Z
|
Hamiltonian Property Testing
|
Locality is a fundamental feature of many physical time evolutions. Assumptions on locality and related structural properties also underlie recently proposed procedures for learning an unknown Hamiltonian from access to the induced time evolution. However, no protocols to rigorously test whether an unknown Hamiltonian is local were known. We investigate Hamiltonian locality testing as a property testing problem, where the task is to determine whether an unknown $n$-qubit Hamiltonian $H$ is $k$-local or $\varepsilon$-far from all $k$-local Hamiltonians, given access to the time evolution along $H$. First, we emphasize the importance of the chosen distance measure: With respect to the operator norm, a worst-case distance measure, incoherent quantum locality testers require $\tilde{\Omega}(2^n)$ many time evolution queries and an expected total evolution time of $\tilde{\Omega}(2^n / \varepsilon)$, and even coherent testers need $\Omega(2^{n/2})$ many queries and $\Omega(2^{n/2}/\varepsilon)$ total evolution time. In contrast, when distances are measured according to the normalized Frobenius norm, corresponding to an average-case distance, we give a sample-, time-, and computationally efficient incoherent Hamiltonian locality testing algorithm based on randomized measurements. In fact, our procedure can be used to simultaneously test a wide class of Hamiltonian properties beyond locality. Finally, we prove that learning a general Hamiltonian remains exponentially hard with this average-case distance, thereby establishing an exponential separation between Hamiltonian testing and learning. Our work initiates the study of property testing for quantum Hamiltonians, demonstrating that a broad class of Hamiltonian properties is efficiently testable even with limited quantum capabilities, and positioning Hamiltonian testing as an independent area of research alongside Hamiltonian learning.
|
[
"['Andreas Bluhm' 'Matthias C. Caro' 'Aadil Oufkir']"
] |
null | null |
2403.02974
| null | null |
http://arxiv.org/pdf/2403.02974v1
|
2024-03-05T13:53:48Z
|
2024-03-05T13:53:48Z
|
Online Learning of Human Constraints from Feedback in Shared Autonomy
|
Real-time collaboration with humans poses challenges due to the different behavior patterns of humans resulting from diverse physical constraints. Existing works typically focus on learning safety constraints for collaboration, or how to divide and distribute the subtasks between the participating agents to carry out the main task. In contrast, we propose to learn a human constraints model that, in addition, considers the diverse behaviors of different human operators. We consider a type of collaboration in a shared-autonomy fashion, where both a human operator and an assistive robot act simultaneously in the same task space that affects each other's actions. The task of the assistive agent is to augment the skill of humans to perform a shared task by supporting humans as much as possible, both in terms of reducing the workload and minimizing the discomfort for the human operator. Therefore, we propose an augmentative assistant agent capable of learning and adapting to human physical constraints, aligning its actions with the ergonomic preferences and limitations of the human operator.
|
[
"['Shibei Zhu' 'Tran Nguyen Le' 'Samuel Kaski' 'Ville Kyrki']"
] |
null | null |
2403.02983
| null | null |
http://arxiv.org/pdf/2403.02983v1
|
2024-03-05T14:03:15Z
|
2024-03-05T14:03:15Z
|
Federated Learning Under Attack: Exposing Vulnerabilities through Data
Poisoning Attacks in Computer Networks
|
Federated Learning (FL) is a machine learning (ML) approach that enables multiple decentralized devices or edge servers to collaboratively train a shared model without exchanging raw data. During the training and sharing of model updates between clients and servers, data and models are susceptible to different data-poisoning attacks. In this study, our motivation is to explore the severity of data poisoning attacks in the computer network domain because they are easy to implement but difficult to detect. We considered two types of data-poisoning attacks, label flipping (LF) and feature poisoning (FP), and applied them with a novel approach. In LF, we randomly flipped the labels of benign data and trained the model on the manipulated data. For FP, we randomly manipulated the highly contributing features determined using the Random Forest algorithm. The datasets used in this experiment were CIC and UNSW, both related to computer networks. We generated adversarial samples using the two attacks mentioned above, which were applied to a small percentage of the datasets. Subsequently, we trained and tested the accuracy of the model on adversarial datasets. We recorded the results for both benign and manipulated datasets and observed significant differences between the accuracy of the models on different datasets. From the experimental results, it is evident that the LF attack failed, whereas the FP attack showed effective results, which proved its significance in fooling a server. With a 1% LF attack on the CIC dataset, the accuracy was approximately 0.0428 while the Attack Success Rate (ASR) was 0.9564, so the attack is easily detectable; with a 1% FP attack, the accuracy and ASR were both approximately 0.9600, making FP attacks difficult to detect. We repeated the experiment with different poisoning percentages (an illustrative sketch of both attacks follows this entry).
|
[
"['Ehsan Nowroozi' 'Imran Haider' 'Rahim Taheri' 'Mauro Conti']"
] |
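A hedged sketch of the two poisoning primitives described in the abstract above: label flipping on a random fraction of samples, and feature poisoning of the top features ranked by Random Forest importance. The helper names (`label_flip`, `feature_poison`) and the overwrite-with-noise choice for FP are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def label_flip(y, frac=0.01, n_classes=2, rng=None):
    # LF attack: reassign a random `frac` of labels to a different class.
    rng = rng or np.random.default_rng(0)
    y = y.copy()
    idx = rng.choice(len(y), size=max(1, int(frac * len(y))), replace=False)
    y[idx] = (y[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return y

def feature_poison(X, y, frac=0.01, k=3, rng=None):
    # FP attack: overwrite the k most important features on random rows.
    rng = rng or np.random.default_rng(0)
    X = X.copy()
    importances = RandomForestClassifier(
        n_estimators=50, random_state=0).fit(X, y).feature_importances_
    top = np.argsort(importances)[-k:]
    rows = rng.choice(len(X), size=max(1, int(frac * len(X))), replace=False)
    X[np.ix_(rows, top)] = rng.normal(size=(len(rows), k))
    return X

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)
y_lf = label_flip(y, frac=0.01, rng=rng)        # poisoned labels
X_fp = feature_poison(X, y, frac=0.01, rng=rng) # poisoned features
```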
null | null |
2403.02995
| null | null |
http://arxiv.org/pdf/2403.02995v1
|
2024-03-05T14:21:57Z
|
2024-03-05T14:21:57Z
|
Mitigating Label Flipping Attacks in Malicious URL Detectors Using
Ensemble Trees
|
Malicious URLs provide adversarial opportunities across various industries, including transportation, healthcare, energy, and banking, which could be detrimental to business operations. Consequently, the detection of these URLs is of crucial importance; however, current Machine Learning (ML) models are susceptible to backdoor attacks. These attacks involve manipulating a small percentage of training data labels, such as Label Flipping (LF), which changes benign labels to malicious ones and vice versa. This manipulation results in misclassification and leads to incorrect model behavior. Therefore, integrating defense mechanisms into the architecture of ML models becomes an imperative consideration to fortify against potential attacks. The focus of this study is on backdoor attacks in the context of URL detection using ensemble trees. By illuminating the motivations behind such attacks, highlighting the roles of attackers, and emphasizing the critical importance of effective defense strategies, this paper contributes to the ongoing efforts to fortify ML models against adversarial threats within the ML domain in network security. We propose an innovative alarm system that detects the presence of poisoned labels and a defense mechanism designed to uncover the original class labels with the aim of mitigating backdoor attacks on ensemble tree classifiers. We conducted a case study using the Alexa and Phishing Site URL datasets and showed that LF attacks can be addressed using our proposed defense mechanism. Our experimental results show that the LF attack achieved an Attack Success Rate (ASR) between 50% and 65% at poisoning rates of 2-5%, and the proposed defense method successfully detected poisoned labels with an accuracy of up to 100% (a sketch of one plausible ensemble-vote defense follows this entry).
|
[
"['Ehsan Nowroozi' 'Nada Jadalla' 'Samaneh Ghelichkhani' 'Alireza Jolfaei']"
] |
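The abstract above does not spell out its alarm mechanism, so the following is only one plausible reading: use cross-validated ensemble-tree votes as the alarm for flipped labels, and restore the majority-vote label wherever the ensemble confidently disagrees with the given label. The threshold and the repair rule are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def detect_and_repair(X, y, conf=0.9):
    # Out-of-fold class probabilities so each sample is scored by trees
    # that never saw its (possibly poisoned) label.
    proba = cross_val_predict(
        RandomForestClassifier(n_estimators=100, random_state=0),
        X, y, cv=5, method="predict_proba")
    voted = proba.argmax(axis=1)
    suspicious = (voted != y) & (proba.max(axis=1) >= conf)  # the "alarm"
    y_clean = y.copy()
    y_clean[suspicious] = voted[suspicious]                  # repaired labels
    return y_clean, suspicious

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 2, size=300)
y_fixed, alarm = detect_and_repair(X, y)
```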
null | null |
2403.03018
| null | null |
http://arxiv.org/pdf/2403.03018v1
|
2024-03-05T14:55:14Z
|
2024-03-05T14:55:14Z
|
CRISPR: Ensemble Model
|
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) is a gene editing technology that has revolutionized the fields of biology and medicine. However, one of the challenges of using CRISPR is predicting the on-target efficacy and off-target sensitivity of single-guide RNAs (sgRNAs). This is because most existing methods are trained on separate datasets with different genes and cells, which limits their generalizability. In this paper, we propose a novel ensemble learning method for sgRNA design that is accurate and generalizable. Our method combines the predictions of multiple machine learning models to produce a single, more robust prediction. This approach allows us to learn from a wider range of data, which improves the generalizability of our model. We evaluated our method on a benchmark dataset of sgRNA designs and found that it outperformed existing methods in terms of both accuracy and generalizability. Our results suggest that our method can be used to design sgRNAs with high sensitivity and specificity, even for new genes or cells. This could have important implications for the clinical use of CRISPR, as it would allow researchers to design more effective and safer treatments for a variety of diseases.
|
[
"['Mohammad Rostami' 'Amin Ghariyazi' 'Hamed Dashti'\n 'Mohammad Hossein Rohban' 'Hamid R. Rabiee']"
] |
null | null |
2403.03020
| null | null |
http://arxiv.org/pdf/2403.03020v3
|
2024-06-01T22:35:29Z
|
2024-03-05T14:57:04Z
|
SplAgger: Split Aggregation for Meta-Reinforcement Learning
|
A core ambition of reinforcement learning (RL) is the creation of agents capable of rapid learning in novel tasks. Meta-RL aims to achieve this by directly learning such agents. Black box methods do so by training off-the-shelf sequence models end-to-end. By contrast, task inference methods explicitly infer a posterior distribution over the unknown task, typically using distinct objectives and sequence models designed to enable task inference. Recent work has shown that task inference methods are not necessary for strong performance. However, it remains unclear whether task inference sequence models are beneficial even when task inference objectives are not. In this paper, we present evidence that task inference sequence models are indeed still beneficial. In particular, we investigate sequence models with permutation invariant aggregation, which exploit the fact that, due to the Markov property, the task posterior does not depend on the order of data. We empirically confirm the advantage of permutation invariant sequence models without the use of task inference objectives. However, we also find, surprisingly, that there are multiple conditions under which permutation variance remains useful. Therefore, we propose SplAgger, which uses both permutation variant and invariant components to achieve the best of both worlds, outperforming all baselines evaluated on continuous control and memory environments. Code is provided at https://github.com/jacooba/hyper.
|
[
"['Jacob Beck' 'Matthew Jackson' 'Risto Vuorio' 'Zheng Xiong'\n 'Shimon Whiteson']"
] |
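A minimal sketch of split aggregation as the abstract above suggests: a permutation-invariant summary (mean over encoded transitions) concatenated with a permutation-variant one (a GRU over the same sequence). Module types and sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SplitAggregator(nn.Module):
    def __init__(self, transition_dim=10, hidden=32):
        super().__init__()
        self.enc = nn.Linear(transition_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, transitions):          # (batch, T, transition_dim)
        h = torch.relu(self.enc(transitions))
        invariant = h.mean(dim=1)            # order-independent context
        _, variant = self.rnn(h)             # order-dependent context
        return torch.cat([invariant, variant.squeeze(0)], dim=-1)

ctx = SplitAggregator()(torch.randn(4, 16, 10))  # (4, 64) task context
```

The concatenation is the point: the invariant branch exploits the Markov-property argument in the abstract, while the variant branch covers the conditions where order still matters.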
null | null |
2403.03037
| null | null |
http://arxiv.org/pdf/2403.03037v1
|
2024-03-05T15:18:02Z
|
2024-03-05T15:18:02Z
|
A Backpack Full of Skills: Egocentric Video Understanding with Diverse
Task Perspectives
|
Human comprehension of a video stream is naturally broad: in a few instants, we are able to understand what is happening, the relevance and relationship of objects, and forecast what will follow in the near future, all at once. We believe that - to effectively transfer such a holistic perception to intelligent machines - an important role is played by learning to correlate concepts and to abstract knowledge coming from different tasks, to synergistically exploit them when learning novel skills. To accomplish this, we seek a unified approach to video understanding which combines shared temporal modelling of human actions with minimal overhead, to support multiple downstream tasks and enable cooperation when learning novel skills. We then propose EgoPack, a solution that creates a collection of task perspectives that can be carried across downstream tasks and used as a potential source of additional insights, as a backpack of skills that a robot can carry around and use when needed. We demonstrate the effectiveness and efficiency of our approach on four Ego4D benchmarks, outperforming current state-of-the-art methods.
|
[
"['Simone Alberto Peirone' 'Francesca Pistilli' 'Antonio Alliegro'\n 'Giuseppe Averta']"
] |
null | null |
2403.03055
| null | null |
http://arxiv.org/pdf/2403.03055v1
|
2024-03-05T15:38:54Z
|
2024-03-05T15:38:54Z
|
Distributed Policy Gradient for Linear Quadratic Networked Control with
Limited Communication Range
|
This paper proposes a scalable distributed policy gradient method and proves its convergence to a near-optimal solution in multi-agent linear quadratic networked systems. The agents engage within a specified network under local communication constraints, implying that each agent can only exchange information with a limited number of neighboring agents. On the underlying graph of the network, each agent implements its control input depending on its nearby neighbors' states in the linear quadratic control setting. We show that it is possible to approximate the exact gradient using only local information. Compared with the centralized optimal controller, the performance gap decreases to zero exponentially as the communication and control ranges increase. We also demonstrate how increasing the communication range enhances system stability in the gradient descent process, thereby elucidating a critical trade-off. The simulation results verify our theoretical findings.
|
[
"['Yuzi Yan' 'Yuan Shen']"
] |
null | null |
2403.03069
| null | null |
http://arxiv.org/pdf/2403.03069v2
|
2024-06-27T12:51:41Z
|
2024-03-05T15:57:52Z
|
Improving Variational Autoencoder Estimation from Incomplete Data with
Mixture Variational Families
|
We consider the task of estimating variational autoencoders (VAEs) when the training data is incomplete. We show that missing data increases the complexity of the model's posterior distribution over the latent variables compared to the fully-observed case. The increased complexity may adversely affect the fit of the model due to a mismatch between the variational and model posterior distributions. We introduce two strategies based on (i) finite variational-mixture and (ii) imputation-based variational-mixture distributions to address the increased posterior complexity. Through a comprehensive evaluation of the proposed approaches, we show that variational mixtures are effective at improving the accuracy of VAE estimation from incomplete data.
|
[
"['Vaidotas Simkus' 'Michael U. Gutmann']"
] |
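A rough sketch of strategy (i) above, a finite variational mixture: the encoder emits K Gaussian components per input, and a simple uniform average of per-component ELBO terms stands in for the proper mixture bound. This objective is a crude surrogate for illustration only, not the paper's estimator.

```python
import torch
import torch.nn as nn

K, D, Z = 4, 8, 2                       # components, data dim, latent dim
enc = nn.Linear(D, K * 2 * Z)           # K Gaussian components per input
dec = nn.Linear(Z, D)

def mixture_elbo(x):
    params = enc(x).view(-1, K, 2, Z)
    mu, logvar = params[:, :, 0], params[:, :, 1]
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # one sample/comp.
    recon = -((dec(z) - x.unsqueeze(1)) ** 2).sum(-1)     # Gaussian log-lik.
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(-1)
    return (recon - kl).mean()          # uniform average over components

loss = -mixture_elbo(torch.randn(16, D))
loss.backward()
```

The appeal of the mixture family is visible even in this sketch: with K components the variational posterior can become multimodal, which is exactly what the abstract argues missing data induces.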
null | null |
2403.03071
| null | null |
http://arxiv.org/pdf/2403.03071v2
|
2024-06-13T11:06:45Z
|
2024-03-05T15:59:54Z
|
On a Neural Implementation of Brenier's Polar Factorization
|
In 1991, Brenier proved a theorem that generalizes the polar decomposition for square matrices -- factored as PSD $\times$ unitary -- to any vector field $F:\mathbb{R}^d\rightarrow \mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F=\nabla u \circ M$. We propose a practical implementation of this far-reaching theoretical result, and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network. The map $M$ can be either evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M=\nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, not injective, we consider the additional task of estimating the ill-posed inverse map that can approximate the pre-image measure $M^{-1}$ using a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as sampling of densities that are not log-concave.
|
[
"['Nina Vesseron' 'Marco Cuturi']"
] |
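A small numerical illustration of the identity $M=\nabla u^* \circ F$ for a quadratic potential $u(x)=\tfrac{1}{2}x^\top A x$, whose conjugate gradient is available in closed form ($\nabla u^*(y)=A^{-1}y$). With a quadratic $u$ the recovered $M$ is generally not measure-preserving, so this only demonstrates the compositional identity $F=\nabla u \circ M$, not the full factorization the paper computes with input convex networks.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])            # SPD Hessian of u
grad_u = lambda x: x @ A                          # gradient of u (A symmetric)
grad_u_star = lambda y: y @ np.linalg.inv(A)      # gradient of the conjugate

# A toy (nonlinear, non-monotone) vector field F to factor.
F = lambda x: np.stack([x[:, 0] ** 3, np.sin(x[:, 1])], axis=1)

x = rng.normal(size=(5, 2))
M = grad_u_star(F(x))                             # pointwise M = grad u* o F
assert np.allclose(grad_u(M), F(x))               # recovers F = grad u o M
```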
null | null |
2403.03082
| null | null |
http://arxiv.org/pdf/2403.03082v1
|
2024-03-05T16:08:59Z
|
2024-03-05T16:08:59Z
|
Recall-Oriented Continual Learning with Generative Adversarial
Meta-Model
|
The stability-plasticity dilemma is a major challenge in continual learning, as it involves balancing the conflicting objectives of maintaining performance on previous tasks while learning new tasks. In this paper, we propose the recall-oriented continual learning framework to address this challenge. Inspired by the human brain's ability to separate the mechanisms responsible for stability and plasticity, our framework consists of a two-level architecture where an inference network effectively acquires new knowledge and a generative network recalls past knowledge when necessary. In particular, to maximize the stability of past knowledge, we investigate the complexity of knowledge depending on different representations, and thereby introduce the generative adversarial meta-model (GAMM), which incrementally learns task-specific parameters instead of input data samples of the task. Through our experiments, we show that our framework not only effectively learns new knowledge without any disruption but also achieves high stability of previous knowledge in both task-aware and task-agnostic learning scenarios. Our code is available at: https://github.com/bigdata-inha/recall-oriented-cl-framework.
|
[
"['Haneol Kang' 'Dong-Wan Choi']"
] |
null | null |
2403.03089
| null | null |
http://arxiv.org/pdf/2403.03089v1
|
2024-03-05T16:21:53Z
|
2024-03-05T16:21:53Z
|
VQSynergy: Robust Drug Synergy Prediction With Vector Quantization
Mechanism
|
The pursuit of optimizing cancer therapies is significantly advanced by the accurate prediction of drug synergy. Traditional methods, such as clinical trials, are reliable yet encumbered by extensive time and financial demands. The emergence of high-throughput screening and computational innovations has heralded a shift towards more efficient methodologies for exploring drug interactions. In this study, we present VQSynergy, a novel framework that employs the Vector Quantization (VQ) mechanism, integrated with gated residuals and a tailored attention mechanism, to enhance the precision and generalizability of drug synergy predictions. Our findings demonstrate that VQSynergy surpasses existing models in terms of robustness, particularly under Gaussian noise conditions, highlighting its superior performance and utility in the complex and often noisy domain of drug synergy research. This study underscores the potential of VQSynergy in revolutionizing the field through its advanced predictive capabilities, thereby contributing to the optimization of cancer treatment strategies.
|
[
"['Jiawei Wu' 'Mingyuan Yan' 'Dianbo Liu']"
] |
null | null |
2403.03100
| null | null |
http://arxiv.org/pdf/2403.03100v3
|
2024-04-23T08:38:03Z
|
2024-03-05T16:35:25Z
|
NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and
Diffusion Models
|
While recent large-scale text-to-speech (TTS) models have achieved significant progress, they still fall short in speech quality, similarity, and prosody. Considering speech intricately encompasses various attributes (e.g., content, prosody, timbre, and acoustic details) that pose significant challenges for generation, a natural idea is to factorize speech into individual subspaces representing different attributes and generate them individually. Motivated by it, we propose NaturalSpeech 3, a TTS system with novel factorized diffusion models to generate natural speech in a zero-shot way. Specifically, 1) we design a neural codec with factorized vector quantization (FVQ) to disentangle speech waveform into subspaces of content, prosody, timbre, and acoustic details; 2) we propose a factorized diffusion model to generate attributes in each subspace following its corresponding prompt. With this factorization design, NaturalSpeech 3 can effectively and efficiently model intricate speech with disentangled subspaces in a divide-and-conquer way. Experiments show that NaturalSpeech 3 outperforms the state-of-the-art TTS systems on quality, similarity, prosody, and intelligibility, and achieves on-par quality with human recordings. Furthermore, we achieve better performance by scaling to 1B parameters and 200K hours of training data.
|
[
"['Zeqian Ju' 'Yuancheng Wang' 'Kai Shen' 'Xu Tan' 'Detai Xin'\n 'Dongchao Yang' 'Yanqing Liu' 'Yichong Leng' 'Kaitao Song' 'Siliang Tang'\n 'Zhizheng Wu' 'Tao Qin' 'Xiang-Yang Li' 'Wei Ye' 'Shikun Zhang'\n 'Jiang Bian' 'Lei He' 'Jinyu Li' 'Sheng Zhao']"
] |
null | null |
2403.03101
| null | null |
http://arxiv.org/pdf/2403.03101v1
|
2024-03-05T16:39:12Z
|
2024-03-05T16:39:12Z
|
KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents
|
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges, especially when interacting with environments through generating executable actions. This inadequacy primarily stems from the lack of built-in action knowledge in language agents, which fails to effectively guide the planning trajectories during task solving and results in planning hallucination. To address this issue, we introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge. Specifically, KnowAgent employs an action knowledge base and a knowledgeable self-learning strategy to constrain the action path during planning, enabling more reasonable trajectory synthesis, and thereby enhancing the planning performance of language agents. Experimental results on HotpotQA and ALFWorld based on various backbone models demonstrate that KnowAgent can achieve comparable or superior performance to existing baselines. Further analysis indicates the effectiveness of KnowAgent in mitigating planning hallucination. Code is available at https://github.com/zjunlp/KnowAgent.
|
[
"['Yuqi Zhu' 'Shuofei Qiao' 'Yixin Ou' 'Shumin Deng' 'Ningyu Zhang'\n 'Shiwei Lyu' 'Yue Shen' 'Lei Liang' 'Jinjie Gu' 'Huajun Chen']"
] |
null | null |
2403.03103
| null | null |
http://arxiv.org/pdf/2403.03103v2
|
2024-06-15T11:25:19Z
|
2024-03-05T16:43:25Z
|
Emergent Equivariance in Deep Ensembles
|
We show that deep ensembles become equivariant for all inputs and at all training times by simply using data augmentation. Crucially, equivariance holds off-manifold and for any architecture in the infinite width limit. The equivariance is emergent in the sense that predictions of individual ensemble members are not equivariant but their collective prediction is. Neural tangent kernel theory is used to derive this result and we verify our theoretical insights using detailed numerical experiments.
|
[
"['Jan E. Gerken' 'Pan Kessel']"
] |
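A hedged illustration of the mechanism behind the result above, in its invariant special case: averaging a network's predictions over the C4 orbit of its input (all four 90-degree rotations) yields an exactly invariant predictor, mimicking what the paper shows emerges for large ensembles trained with data augmentation. The random untrained network is a stand-in for an ensemble member.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 16)), rng.normal(size=(16, 3))
net = lambda x: np.tanh(x @ W1) @ W2        # stand-in ensemble member

rot = lambda x, k: x @ np.linalg.matrix_power(
    np.array([[0.0, -1.0], [1.0, 0.0]]), k).T   # rotate input by k*90 deg

# Symmetrize by averaging over the whole group orbit.
sym_net = lambda x: np.mean([net(rot(x, k)) for k in range(4)], axis=0)

x = rng.normal(size=(1, 2))
print(np.allclose(sym_net(x), sym_net(rot(x, 1))))  # True: exactly invariant
```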
null | null |
2403.03145
| null | null |
http://arxiv.org/pdf/2403.03145v1
|
2024-03-05T17:35:46Z
|
2024-03-05T17:35:46Z
|
Dual Mean-Teacher: An Unbiased Semi-Supervised Framework for
Audio-Visual Source Localization
|
Audio-Visual Source Localization (AVSL) aims to locate sounding objects within video frames given the paired audio clips. Existing methods predominantly rely on self-supervised contrastive learning of audio-visual correspondence. Without any bounding-box annotations, they struggle to achieve precise localization, especially for small objects, and suffer from blurry boundaries and false positives. Moreover, naive semi-supervised methods are poor at fully leveraging the information in abundant unlabeled data. In this paper, we propose a novel semi-supervised learning framework for AVSL, namely Dual Mean-Teacher (DMT), comprising two teacher-student structures to circumvent the confirmation bias issue. Specifically, two teachers, pre-trained on limited labeled data, are employed to filter out noisy samples via the consensus between their predictions, and then generate high-quality pseudo-labels by intersecting their confidence maps (a sketch of this step follows this entry). The sufficient utilization of both labeled and unlabeled data and the proposed unbiased framework enable DMT to outperform current state-of-the-art methods by a large margin, with CIoU of 90.4% and 48.8% on Flickr-SoundNet and VGG-Sound Source, obtaining 8.9%, 9.6% and 4.6%, 6.4% improvements over self- and semi-supervised methods respectively, given only 3% positional-annotations. We also extend our framework to some existing AVSL methods and consistently boost their performance.
|
[
"['Yuxin Guo' 'Shijie Ma' 'Hu Su' 'Zhiqing Wang' 'Yuhao Zhao' 'Wei Zou'\n 'Siyang Sun' 'Yun Zheng']"
] |
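A minimal sketch of the pseudo-label step described above: the two teachers' confidence maps are intersected, here via an elementwise minimum followed by a threshold, so that only regions both teachers agree on become pseudo-labels. The min-and-threshold rule and the threshold value are illustrative assumptions.

```python
import numpy as np

def intersect_pseudo_labels(conf_a, conf_b, thresh=0.5):
    # Consensus confidence: high only where both teachers are confident.
    agreed = np.minimum(conf_a, conf_b)
    return (agreed >= thresh).astype(np.float32)   # binary pseudo-label map

rng = np.random.default_rng(0)
conf_teacher_a = rng.random((64, 64))              # stand-in confidence maps
conf_teacher_b = rng.random((64, 64))
mask = intersect_pseudo_labels(conf_teacher_a, conf_teacher_b)
```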
null | null |
2403.03149
| null | null |
http://arxiv.org/pdf/2403.03149v2
|
2024-04-04T05:23:39Z
|
2024-03-05T17:41:35Z
|
Robust Federated Learning Mitigates Client-side Training Data
Distribution Inference Attacks
|
Recent studies have revealed that federated learning (FL), once considered secure due to clients not sharing their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, where a malicious client can recreate the victim's data. While various countermeasures exist, they are not practical, often assuming server access to some training data or knowledge of label distribution before the attack. In this work, we bridge the gap by proposing InferGuard, a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks. In our proposed InferGuard, the server first calculates the coordinate-wise median of all the model updates it receives. A client's model update is considered malicious if it significantly deviates from the computed median update. We conduct a thorough evaluation of our proposed InferGuard on five benchmark datasets and perform a comparison with ten baseline methods. The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks, even against strong adaptive attacks. Furthermore, our method substantially outperforms the baseline methods in various practical FL scenarios.
|
[
"['Yichang Xu' 'Ming Yin' 'Minghong Fang' 'Neil Zhenqiang Gong']"
] |
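A compact sketch of the aggregation rule described in the abstract above: compute the coordinate-wise median of the client updates and drop clients whose update deviates strongly from it. The specific deviation threshold (a multiple of the median deviation) is an illustrative assumption.

```python
import numpy as np

def inferguard_aggregate(updates, tau=2.0):
    """updates: (n_clients, dim) array of client model updates."""
    med = np.median(updates, axis=0)              # coordinate-wise median
    dev = np.linalg.norm(updates - med, axis=1)
    keep = dev <= tau * np.median(dev)            # flag outliers as malicious
    return updates[keep].mean(axis=0), ~keep      # aggregate benign, report

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(9, 5))
malicious = rng.normal(5.0, 0.1, size=(1, 5))     # one poisoned client
agg, flagged = inferguard_aggregate(np.vstack([benign, malicious]))
```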
null | null |
2403.03150
| null | null |
http://arxiv.org/pdf/2403.03150v1
|
2024-03-05T17:42:39Z
|
2024-03-05T17:42:39Z
|
Deep-Learned Compression for Radio-Frequency Signal Classification
|
Next-generation cellular concepts rely on the processing of large quantities of radio-frequency (RF) samples. This includes Radio Access Networks (RAN) connecting the cellular front-end based on software defined radios (SDRs) and a framework for the AI processing of spectrum-related data. The RF data collected by the dense RAN radio units and spectrum sensors may need to be jointly processed for intelligent decision making. Moving large amounts of data to AI agents may result in significant bandwidth and latency costs. We propose a deep learned compression (DLC) model, HQARF, based on learned vector quantization (VQ), to compress the complex-valued samples of RF signals spanning 6 modulation classes. We assess the effects of HQARF on the performance of an AI model trained to infer the modulation class of the RF signal. Compression of narrow-band RF samples for training and off-site inference will allow for efficient use of bandwidth and storage in non-real-time analytics, and for decreased delay in real-time applications. While exploring the effectiveness of the HQARF signal reconstructions in modulation classification tasks, we highlight the DLC optimization space and some open problems related to the training of the VQ embedded in HQARF.
|
[
"['Armani Rodriguez' 'Yagna Kaasaragadda' 'Silvija Kokalj-Filipovic']"
] |
null | null |
2403.03157
| null | null |
http://arxiv.org/pdf/2403.03157v1
|
2024-03-05T17:49:09Z
|
2024-03-05T17:49:09Z
|
Rethinking Clustered Federated Learning in NOMA Enhanced Wireless
Networks
|
This study explores the benefits of integrating the novel clustered federated learning (CFL) approach with non-orthogonal multiple access (NOMA) under non-independent and identically distributed (non-IID) datasets, where multiple devices participate in the aggregation with time limitations and a finite number of sub-channels. A detailed theoretical analysis of the generalization gap that measures the degree of non-IID in the data distribution is presented. Following that, solutions to address the challenges posed by non-IID conditions are proposed with the analysis of the properties. Specifically, users' data distributions are parameterized as concentration parameters and grouped using spectral clustering, with Dirichlet distribution serving as the prior. The investigation into the generalization gap and convergence rate guides the design of sub-channel assignments through the matching-based algorithm, and the power allocation is achieved by Karush-Kuhn-Tucker (KKT) conditions with the derived closed-form solution. The extensive simulation results show that the proposed cluster-based FL framework can outperform FL baselines in terms of both test accuracy and convergence rate. Moreover, jointly optimizing sub-channel and power allocation in NOMA-enhanced networks can lead to a significant improvement.
|
[
"['Yushen Lin' 'Kaidi Wang' 'Zhiguo Ding']"
] |
null | null |
2403.03161
| null | null |
http://arxiv.org/pdf/2403.03161v1
|
2024-03-05T17:54:22Z
|
2024-03-05T17:54:22Z
|
PalmProbNet: A Probabilistic Approach to Understanding Palm
Distributions in Ecuadorian Tropical Forest via Transfer Learning
|
Palms play an outsized role in tropical forests and are important resources for humans and wildlife. A central question in tropical ecosystems is understanding palm distribution and abundance. However, accurately identifying and localizing palms in geospatial imagery presents significant challenges due to dense vegetation, overlapping canopies, and variable lighting conditions in mixed-forest landscapes. Addressing this, we introduce PalmProbNet, a probabilistic approach utilizing transfer learning to analyze high-resolution UAV-derived orthomosaic imagery, enabling the detection of palm trees within the dense canopy of the Ecuadorian Rainforest. This approach represents a substantial advancement in automated palm detection, effectively pinpointing palm presence and locality in mixed tropical rainforests. Our process begins by generating an orthomosaic image from UAV images, from which we extract and label palm and non-palm image patches in two distinct sizes. These patches are then used to train models with an identical architecture, consisting of an unaltered pre-trained ResNet-18 and a Multilayer Perceptron (MLP) with specifically trained parameters. Subsequently, PalmProbNet employs a sliding window technique on the landscape orthomosaic, using both small and large window sizes to generate a probability heatmap. This heatmap effectively visualizes the distribution of palms, showcasing the scalability and adaptability of our approach in various forest densities. Despite the challenging terrain, our method demonstrated remarkable performance, achieving an accuracy of 97.32% and a Cohen's kappa of 94.59% in testing.
|
[
"['Kangning Cui' 'Zishan Shao' 'Gregory Larsen' 'Victor Pauca'\n 'Sarra Alqahtani' 'David Segurado' 'João Pinheiro' 'Manqi Wang'\n 'David Lutz' 'Robert Plemmons' 'Miles Silman']"
] |
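A minimal sketch of the sliding-window step described in the abstract above: score each window of an orthomosaic with a patch classifier and collect the palm probabilities into a coarse heatmap. The classifier here is a stub standing in for the paper's ResNet-18 + MLP; window and stride values are illustrative.

```python
import numpy as np

def palm_probability(patch):
    # Stand-in for the trained ResNet-18 + MLP patch classifier; returns
    # any score in [0, 1] for illustration.
    return float(patch.mean())

def heatmap(image, win=64, stride=32):
    h, w = image.shape[:2]
    rows, cols = (h - win) // stride + 1, (w - win) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i*stride:i*stride+win, j*stride:j*stride+win]
            out[i, j] = palm_probability(patch)   # probability per window
    return out

probs = heatmap(np.random.rand(512, 512))         # coarse palm-probability map
```

Running the same loop with two window sizes, as the paper describes, simply produces two heatmaps at different spatial granularities.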
null | null |
2403.03168
| null | null |
http://arxiv.org/pdf/2403.03168v1
|
2024-03-05T18:03:51Z
|
2024-03-05T18:03:51Z
|
Learning Explicitly Conditioned Sparsifying Transforms
|
Over the last few decades, sparsifying transforms have become widely known tools for finding structured sparse representations of signals in certain transform domains. Despite the popularity of classical transforms such as the DCT and wavelets, learning optimal transforms that guarantee good representations of data in the sparse domain has recently been analyzed in a series of papers. Typically, the condition number and representation ability are complementary key features of learning square transforms that may not be explicitly controlled in a given optimization model. Unlike the existing approaches from the literature, in our paper, we consider a new sparsifying transform model that enforces explicit control over the data representation quality and the condition number of the learned transforms. We confirm through numerical experiments that our model presents better numerical behavior than the state-of-the-art.
|
[
"['Andrei Pătraşcu' 'Cristian Rusu' 'Paul Irofti']"
] |
null | null |
2403.03172
| null | null |
http://arxiv.org/pdf/2403.03172v1
|
2024-03-05T18:07:34Z
|
2024-03-05T18:07:34Z
|
Reaching Consensus in Cooperative Multi-Agent Reinforcement Learning
with Goal Imagination
|
Reaching consensus is key to multi-agent coordination. To accomplish a cooperative task, agents need to coherently select optimal joint actions to maximize the team reward. However, current cooperative multi-agent reinforcement learning (MARL) methods usually do not explicitly take consensus into consideration, which may cause miscoordination problems. In this paper, we propose a model-based consensus mechanism to explicitly coordinate multiple agents. The proposed Multi-agent Goal Imagination (MAGI) framework guides agents to reach consensus with an imagined common goal. The common goal is an achievable state with high value, which is obtained by sampling from the distribution of future states. We directly model this distribution with a self-supervised generative model, thus alleviating the "curse of dimensionality" problem induced by the multi-agent multi-step policy rollout commonly used in model-based methods. We show that such an efficient consensus mechanism can guide all agents to cooperatively reach valuable future states (a toy sketch of the goal-imagination step follows this entry). Results on Multi-agent Particle-Environments and the Google Research Football environment demonstrate the superiority of MAGI in both sample efficiency and performance.
|
[
"['Liangzhou Wang' 'Kaiwen Zhu' 'Fengming Zhu' 'Xinghu Yao' 'Shujie Zhang'\n 'Deheng Ye' 'Haobo Fu' 'Qiang Fu' 'Wei Yang']"
] |
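A toy sketch of goal imagination as described above: sample candidate future states from a generative model and adopt the highest-value one as the common goal shared by all agents. The generator and the value function are stand-ins for the paper's learned self-supervised generative model and critic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a generator of plausible future states and a learned critic.
sample_future_states = lambda n: rng.normal(size=(n, 8))
value = lambda s: -np.linalg.norm(s, axis=1)

candidates = sample_future_states(128)            # imagined futures
common_goal = candidates[np.argmax(value(candidates))]  # shared by all agents
```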
null | null |
2403.03180
| null | null |
http://arxiv.org/pdf/2403.03180v1
|
2024-03-05T18:19:02Z
|
2024-03-05T18:19:02Z
|
Shuffling Momentum Gradient Algorithm for Convex Optimization
|
The Stochastic Gradient Descent method (SGD) and its stochastic variants have become methods of choice for solving finite-sum optimization problems arising from machine learning and data science, thanks to their ability to handle large-scale applications and big datasets. In the last decades, researchers have made substantial effort to study the theoretical performance of SGD and its shuffling variants. However, only limited work has investigated its shuffling momentum variants, including shuffling heavy-ball momentum schemes for non-convex problems and Nesterov's momentum for convex settings. In this work, we extend the analysis of the shuffling momentum gradient method developed in [Tran et al (2021)] to both finite-sum convex and strongly convex optimization problems. We provide the first analysis of shuffling momentum-based methods for the strongly convex setting, attaining a convergence rate of $O(1/nT^2)$, where $n$ is the number of samples and $T$ is the number of training epochs. Our analysis is state-of-the-art, matching the best rates of existing shuffling stochastic gradient algorithms in the literature (a toy sketch of the update follows this entry).
|
[
"['Trang H. Tran' 'Quoc Tran-Dinh' 'Lam M. Nguyen']"
] |
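A toy numpy sketch of a shuffling heavy-ball scheme on a least-squares objective: each epoch visits all n samples in a fresh random permutation while a momentum buffer persists across steps. The constant step size and momentum coefficient are illustrative and not tuned to the rates in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x, m, lr, beta = np.zeros(d), np.zeros(d), 0.01, 0.9

for epoch in range(50):
    for i in rng.permutation(n):          # shuffle once per epoch
        g = A[i] * (A[i] @ x - b[i])      # single-sample gradient
        m = beta * m + (1 - beta) * g     # heavy-ball momentum buffer
        x -= lr * m
```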
null | null |
2403.03181
| null | null |
http://arxiv.org/pdf/2403.03181v2
|
2024-06-28T04:15:33Z
|
2024-03-05T18:19:29Z
|
Behavior Generation with Latent Actions
|
Generative modeling of complex behaviors from labeled datasets has been a longstanding problem in decision making. Unlike language or image generation, decision making requires modeling actions - continuous-valued vectors that are multimodal in their distribution, potentially drawn from uncurated sources, where generation errors can compound in sequential prediction. A recent class of models called Behavior Transformers (BeT) addresses this by discretizing actions using k-means clustering to capture different modes. However, k-means struggles to scale to high-dimensional action spaces or long sequences and lacks gradient information, so BeT suffers in modeling long-range actions. In this work, we present the Vector-Quantized Behavior Transformer (VQ-BeT), a versatile model for behavior generation that handles multimodal action prediction, conditional generation, and partial observations. VQ-BeT augments BeT by tokenizing continuous actions with a hierarchical vector quantization module (a toy sketch of this step follows this entry). Across seven environments including simulated manipulation, autonomous driving, and robotics, VQ-BeT improves on state-of-the-art models such as BeT and Diffusion Policies. Importantly, we demonstrate VQ-BeT's improved ability to capture behavior modes while accelerating inference speed 5x over Diffusion Policies. Videos and code can be found at https://sjlee.cc/vq-bet
|
[
"['Seungjae Lee' 'Yibin Wang' 'Haritheja Etukuru' 'H. Jin Kim'\n 'Nur Muhammad Mahi Shafiullah' 'Lerrel Pinto']"
] |
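A toy sketch of the hierarchical (residual) vector quantization that the abstract above describes: a coarse codebook quantizes the action and a second codebook quantizes the residual, so each continuous action becomes a pair of discrete tokens. The codebook sizes are illustrative, and the codebooks are random here, whereas in VQ-BeT they are learned.

```python
import numpy as np

rng = np.random.default_rng(0)
coarse = rng.normal(size=(16, 4))               # level-1 codebook, 4-D actions
fine = rng.normal(scale=0.1, size=(32, 4))      # level-2 codebook (residuals)

def rvq_encode(a):
    i = np.argmin(np.linalg.norm(coarse - a, axis=1))          # coarse token
    j = np.argmin(np.linalg.norm(fine - (a - coarse[i]), axis=1))  # residual
    return i, j

def rvq_decode(i, j):
    return coarse[i] + fine[j]                  # token pair -> action vector

a = rng.normal(size=4)
print(np.linalg.norm(a - rvq_decode(*rvq_encode(a))))  # quantization error
```

The discrete token pairs are what let a transformer predict continuous actions with a categorical head while the residual level keeps the quantization error small.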