categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2407.03157 | null | null | http://arxiv.org/pdf/2407.03157v1 | 2024-07-03T14:34:03Z | 2024-07-03T14:34:03Z | Let the Code LLM Edit Itself When You Edit the Code | In this work, we investigate a typical scenario in code generation where a developer edits existing code in real time and requests a code assistant, e.g., a large language model, to re-predict the next token or next line on the fly. Naively, the LLM needs to re-encode the entire KV cache to provide an accurate prediction. However, this process is computationally expensive, especially when the sequence length is long. Simply encoding the edited subsequence and integrating it into the original KV cache meets the temporal confusion problem, leading to significantly worse performance. We address this efficiency and accuracy trade-off by introducing Positional Integrity Encoding (PIE). Building upon the rotary positional encoding, PIE first removes the rotary matrices in the Key cache that introduce temporal confusion and then reapplies the correct rotary matrices. This process ensures that positional relationships between tokens are correct and requires only a single round of matrix multiplication. We validate the effectiveness of PIE through extensive experiments on the RepoBench-C-8k dataset, utilizing DeepSeek-Coder models with 1.3B, 6.7B, and 33B parameters. Our evaluation includes three real-world coding tasks: code insertion, code deletion, and multi-place code editing. Results demonstrate that PIE reduces computational overhead by over 85% compared to the standard full recomputation approach across all model sizes and tasks while well approximating the model performance. | [
"['Zhenyu He' 'Jun Zhang' 'Shengjie Luo' 'Jingjing Xu' 'Zhi Zhang' 'Di He']"
] |
null | null | 2407.03160 | null | null | http://arxiv.org/pdf/2407.03160v1 | 2024-07-03T14:35:16Z | 2024-07-03T14:35:16Z | SOS! Soft Prompt Attack Against Open-Source Large Language Models | Open-source large language models (LLMs) have become increasingly popular among both the general public and industry, as they can be customized, fine-tuned, and freely used. However, some open-source LLMs require approval before usage, which has led to third parties publishing their own easily accessible versions. Similarly, third parties have been publishing fine-tuned or quantized variants of these LLMs. These versions are particularly appealing to users because of their ease of access and reduced computational resource demands. This trend has increased the risk of training time attacks, compromising the integrity and security of LLMs. In this work, we present a new training time attack, SOS, which is designed to be low in computational demand and does not require clean data or modification of the model weights, thereby keeping the model's utility intact. The attack addresses security issues in various scenarios, including the backdoor attack, jailbreak attack, and prompt stealing attack. Our experimental findings demonstrate that the proposed attack is effective across all evaluated targets. Furthermore, we present the other side of our SOS technique, namely the copyright token -- a novel technique that enables users to mark their copyrighted content and prevent models from using it. | [
"['Ziqing Yang' 'Michael Backes' 'Yang Zhang' 'Ahmed Salem']"
] |
null | null | 2407.03162 | null | null | http://arxiv.org/pdf/2407.03162v1 | 2024-07-03T14:35:35Z | 2024-07-03T14:35:35Z | Bunny-VisionPro: Real-Time Bimanual Dexterous Teleoperation for
Imitation Learning | Teleoperation is a crucial tool for collecting human demonstrations, but controlling robots with bimanual dexterous hands remains a challenge. Existing teleoperation systems struggle to handle the complexity of coordinating two hands for intricate manipulations. We introduce Bunny-VisionPro, a real-time bimanual dexterous teleoperation system that leverages a VR headset. Unlike previous vision-based teleoperation systems, we design novel low-cost devices to provide haptic feedback to the operator, enhancing immersion. Our system prioritizes safety by incorporating collision and singularity avoidance while maintaining real-time performance through innovative designs. Bunny-VisionPro outperforms prior systems on a standard task suite, achieving higher success rates and reduced task completion times. Moreover, the high-quality teleoperation demonstrations improve downstream imitation learning performance, leading to better generalizability. Notably, Bunny-VisionPro enables imitation learning with challenging multi-stage, long-horizon dexterous manipulation tasks, which have rarely been addressed in previous work. Our system's ability to handle bimanual manipulations while prioritizing safety and real-time performance makes it a powerful tool for advancing dexterous manipulation and imitation learning. | [
"['Runyu Ding' 'Yuzhe Qin' 'Jiyue Zhu' 'Chengzhe Jia' 'Shiqi Yang'\n 'Ruihan Yang' 'Xiaojuan Qi' 'Xiaolong Wang']"
] |
null | null | 2407.03178 | null | null | http://arxiv.org/pdf/2407.03178v1 | 2024-07-03T14:58:40Z | 2024-07-03T14:58:40Z | Relating CNN-Transformer Fusion Network for Change Detection | While deep learning, particularly convolutional neural networks (CNNs), has revolutionized remote sensing (RS) change detection (CD), existing approaches often miss crucial features due to neglecting global context and incomplete change learning. Additionally, transformer networks struggle with low-level details. RCTNet addresses these limitations by introducing (1) an early fusion backbone to exploit both spatial and temporal features early on, (2) a Cross-Stage Aggregation (CSA) module for enhanced temporal representation, (3) a Multi-Scale Feature Fusion (MSF) module for enriched feature extraction in the decoder, and (4) an Efficient Self-deciphering Attention (ESA) module utilizing transformers to capture global information and fine-grained details for accurate change detection. Extensive experiments demonstrate RCTNet's clear superiority over traditional RS image CD methods, showing significant improvement and an optimal balance between accuracy and computational cost. | [
"['Yuhao Gao' 'Gensheng Pei' 'Mengmeng Sheng' 'Zeren Sun' 'Tao Chen'\n 'Yazhou Yao']"
] |
null | null | 2407.03179 | null | null | http://arxiv.org/pdf/2407.03179v1 | 2024-07-03T14:59:46Z | 2024-07-03T14:59:46Z | Motion meets Attention: Video Motion Prompts | Videos contain rich spatio-temporal information. Traditional methods for extracting motion, used in tasks such as action recognition, often rely on visual contents rather than precise motion features. This phenomenon is referred to as 'blind motion extraction' behavior, which proves inefficient in capturing motions of interest due to a lack of motion-guided cues. Recently, attention mechanisms have enhanced many computer vision tasks by effectively highlighting salient visual areas. Inspired by this, we propose using a modified Sigmoid function with learnable slope and shift parameters as an attention mechanism to activate and modulate motion signals derived from frame differencing maps. This approach generates a sequence of attention maps that enhance the processing of motion-related video content. To ensure temporal continuity and smoothness of the attention maps, we apply pair-wise temporal attention variation regularization to remove unwanted motions (e.g., noise) while preserving important ones. We then perform the Hadamard product between each pair of attention maps and the original video frames to highlight the evolving motions of interest over time. These highlighted motions, termed video motion prompts, are subsequently used as inputs to the model instead of the original video frames. We formalize this process as a motion prompt layer and incorporate the regularization term into the loss function to learn better motion prompts. This layer serves as an adapter between the model and the video data, bridging the gap between traditional 'blind motion extraction' and the extraction of relevant motions of interest. | [
"['Qixiang Chen' 'Lei Wang' 'Piotr Koniusz' 'Tom Gedeon']"
] |
null | null | 2407.03185 | null | null | http://arxiv.org/pdf/2407.03185v1 | 2024-07-03T15:07:16Z | 2024-07-03T15:07:16Z | Multiple-Resolution Tokenization for Time Series Forecasting with an
Application to Pricing | We propose a transformer architecture for time series forecasting with a focus on time series tokenisation and apply it to a real-world prediction problem from the pricing domain. Our architecture aims to learn effective representations at many scales across all available data simultaneously. The model contains a number of novel modules: a differentiated form of time series patching which employs multiple resolutions, a multiple-resolution module for time-varying known variables, a mixer-based module for capturing cross-series information, and a novel output head with favourable scaling to account for the increased number of tokens. We present an application of this model to a real-world prediction problem faced by the markdown team at a very large retailer. In the experiments conducted, our model outperforms in-house models and the selected existing deep learning architectures. | [
"['Egon Peršak' 'Miguel F. Anjos' 'Sebastian Lautz' 'Aleksandar Kolev']"
] |
null | null | 2407.03190 | null | null | http://dx.doi.org/10.5281/zenodo.6842883 | 2024-06-15T02:36:11Z | 2024-06-15T02:36:11Z | Cutting through the noise to motivate people: A comprehensive analysis
of COVID-19 social media posts de/motivating vaccination | The COVID-19 pandemic exposed significant weaknesses in the healthcare information system. The overwhelming volume of misinformation on social media and other socioeconomic factors created extraordinary challenges to motivate people to take proper precautions and get vaccinated. In this context, our work explored a novel direction by analyzing an extensive dataset collected over two years, identifying the topics de/motivating the public about COVID-19 vaccination. We analyzed these topics based on time, geographic location, and political orientation. We noticed that while the motivating topics remain the same over time and geographic location, the demotivating topics change rapidly. We also identified that intrinsic motivation, rather than an external mandate, is more advantageous to inspire the public. This study addresses scientific communication and public motivation in social media. It can help public health officials, policymakers, and social media platforms develop more effective messaging strategies to cut through the noise of misinformation and educate the public about scientific findings. | [
"['Ashiqur Rahman' 'Ehsan Mohammadi' 'Hamed Alhoori']"
] |
null | null | 2407.03194 | null | null | http://arxiv.org/pdf/2407.03194v1 | 2024-07-03T15:26:02Z | 2024-07-03T15:26:02Z | Prediction Instability in Machine Learning Ensembles | In machine learning ensembles, predictions from multiple models are aggregated. Despite widespread use and strong performance of ensembles in applied problems, little is known about the mathematical properties of aggregating models and associated consequences for safe, explainable use of such models. In this paper we prove a theorem that shows that any ensemble will exhibit at least one of the following forms of prediction instability. It will either ignore agreement among all underlying models, change its mind when none of the underlying models have done so, or be manipulable through inclusion or exclusion of options it would never actually predict. As a consequence, ensemble aggregation procedures will always need to balance the benefits of information use against the risk of these prediction instabilities. This analysis also sheds light on what specific forms of prediction instability to expect from particular ensemble algorithms; for example, popular tree ensembles like random forest or xgboost will violate basic, intuitive monotonicity and fairness properties. | [
"['Jeremy Kedziora']"
] |
null | null | 2407.03195 | null | null | http://arxiv.org/pdf/2407.03195v1 | 2024-07-03T15:26:34Z | 2024-07-03T15:26:34Z | Incremental Gauss--Newton Methods with Superlinear Convergence Rates | This paper addresses the challenge of solving large-scale nonlinear equations with Hölder continuous Jacobians. We introduce a novel Incremental Gauss--Newton (IGN) method with an explicit superlinear convergence rate, which outperforms existing methods that only achieve a linear convergence rate. In particular, we formulate our problem as nonlinear least squares with a finite-sum structure, and our method incrementally iterates with the information of one component in each round. We also provide a mini-batch extension to our IGN method that obtains an even faster superlinear convergence rate. Furthermore, we conduct numerical experiments to show the advantages of the proposed methods. | [
"['Zhiling Zhou' 'Zhuanghua Liu' 'Chengchang Liu' 'Luo Luo']"
] |
null | null | 2407.03210 | null | null | http://arxiv.org/abs/2407.03210v1 | 2024-07-03T15:38:57Z | 2024-07-03T15:38:57Z | Combining AI Control Systems and Human Decision Support via Robustness
and Criticality | AI-enabled capabilities are reaching the requisite level of maturity to be deployed in the real world, yet do not always make correct or safe decisions. One way of addressing these concerns is to leverage AI control systems alongside and in support of human decisions, relying on the AI control system in safe situations while calling on a human co-decider for critical situations. We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks, including MuZero. Multiple improvements to the base agent architecture are proposed. We demonstrate how this technology has two applications: for intelligent decision tools and to enhance training / learning frameworks. In a decision support context, adversarial explanations help a user make the correct decision by highlighting those contextual factors that would need to change for a different AI-recommended decision. As another benefit of adversarial explanations, we show that the learned AI control system demonstrates robustness against adversarial tampering. Additionally, we supplement AE by introducing strategically similar autoencoders (SSAs) to help users identify and understand all salient factors being considered by the AI system. In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction. Finally, to identify when AI decisions would most benefit from human oversight, we tie this combined system to our prior art on statistically verified analyses of the criticality of decisions at any point in time. | [
"['Walt Woods' 'Alexander Grushin' 'Simon Khan' 'Alvaro Velasquez']"
] |
null | null | 2407.03211 | null | null | http://arxiv.org/pdf/2407.03211v1 | 2024-07-03T15:39:40Z | 2024-07-03T15:39:40Z | How Does Quantization Affect Multilingual LLMs? | Quantization techniques are widely used to improve inference speed and deployment of large language models. While a wide body of work examines the impact of quantized LLMs on English tasks, none have examined the effect of quantization across languages. We conduct a thorough analysis of quantized multilingual LLMs, focusing on their performance across languages and at varying scales. We use automatic benchmarks, LLM-as-a-Judge methods, and human evaluation, finding that (1) harmful effects of quantization are apparent in human evaluation, and automatic metrics severely underestimate the detriment: a 1.7% average drop in Japanese across automatic tasks corresponds to a 16.0% drop reported by human evaluators on realistic prompts; (2) languages are disparately affected by quantization, with non-Latin script languages impacted worst; and (3) challenging tasks such as mathematical reasoning degrade fastest. As the ability to serve low-compute models is critical for wide global adoption of NLP technologies, our results urge consideration of multilingual performance as a key evaluation criterion for efficient models. | [
"['Kelly Marchisio' 'Saurabh Dash' 'Hongyu Chen' 'Dennis Aumiller'\n 'Ahmet Üstün' 'Sara Hooker' 'Sebastian Ruder']"
] |
null | null | 2407.03232 | null | null | http://arxiv.org/pdf/2407.03232v1 | 2024-07-03T16:03:10Z | 2024-07-03T16:03:10Z | Single Character Perturbations Break LLM Alignment | When LLMs are deployed in sensitive, human-facing settings, it is crucial that they do not output unsafe, biased, or privacy-violating outputs. For this reason, models are both trained and instructed to refuse to answer unsafe prompts such as "Tell me how to build a bomb." We find that, despite these safeguards, it is possible to break model defenses simply by appending a space to the end of a model's input. In a study of eight open-source models, we demonstrate that this acts as a strong enough attack to cause the majority of models to generate harmful outputs with very high success rates. We examine the causes of this behavior, finding that the contexts in which single spaces occur in tokenized training data encourage models to generate lists when prompted, overriding training signals to refuse to answer unsafe requests. Our findings underscore the fragile state of current model alignment and promote the importance of developing more robust alignment methods. Code and data will be available at https://github.com/hannah-aught/space_attack. | [
"['Leon Lin' 'Hannah Brown' 'Kenji Kawaguchi' 'Michael Shieh']"
] |
null | null | 2407.03234 | null | null | http://arxiv.org/pdf/2407.03234v2 | 2024-07-15T05:20:18Z | 2024-07-03T16:03:42Z | Self-Evaluation as a Defense Against Adversarial Attacks on LLMs | When LLMs are deployed in sensitive, human-facing settings, it is crucial that they do not output unsafe, biased, or privacy-violating outputs. For this reason, models are both trained and instructed to refuse to answer unsafe prompts such as "Tell me how to build a bomb." We find that, despite these safeguards, it is possible to break model defenses simply by appending a space to the end of a model's input. In a study of eight open-source models, we demonstrate that this acts as a strong enough attack to cause the majority of models to generate harmful outputs with very high success rates. We examine the causes of this behavior, finding that the contexts in which single spaces occur in tokenized training data encourage models to generate lists when prompted, overriding training signals to refuse to answer unsafe requests. Our findings underscore the fragile state of current model alignment and promote the importance of developing more robust alignment methods. Code and data will be made available at https://github.com/Linlt-leon/self-eval. | [
"['Hannah Brown' 'Leon Lin' 'Kenji Kawaguchi' 'Michael Shieh']"
] |
null | null | 2407.03241 | null | null | http://arxiv.org/pdf/2407.03241v1 | 2024-07-03T16:10:50Z | 2024-07-03T16:10:50Z | Terrain Classification Enhanced with Uncertainty for Space Exploration
Robots from Proprioceptive Data | Terrain Classification is an essential task in space exploration, where unpredictable environments are difficult to observe using only exteroceptive sensors such as vision. Neural Network classifiers can achieve high performance but can be deemed untrustworthy, as they lack transparency, which makes them unreliable for taking high-stakes decisions during mission planning. We address this by proposing Neural Networks with Uncertainty Quantification in Terrain Classification. We enable our Neural Networks with Monte Carlo Dropout, DropConnect, and Flipout in time series-capable architectures using only proprioceptive data as input. We use Bayesian Optimization with Hyperband for efficient hyperparameter optimization to find optimal models for trustworthy terrain classification. | [
"['Mariela De Lucas Álvarez' 'Jichen Guo' 'Raul Domínguez'\n 'Matias Valdenegro-Toro']"
] |
null | null | 2407.03250 | null | null | http://arxiv.org/pdf/2407.03250v2 | 2024-07-04T10:56:45Z | 2024-07-03T16:29:47Z | When big data actually are low-rank, or entrywise approximation of
certain function-generated matrices | The article concerns low-rank approximation of matrices generated by sampling a smooth function of two $m$-dimensional variables. We refute an argument made in the literature that, for a specific class of analytic functions, such matrices admit accurate entrywise approximation of rank that is independent of $m$. We provide a theoretical explanation of the numerical results presented in support of this argument, describing three narrower classes of functions for which $n \times n$ function-generated matrices can be approximated within an entrywise error of order $\varepsilon$ with rank $\mathcal{O}(\log(n)\, \varepsilon^{-2}\, \mathrm{polylog}(\varepsilon^{-1}))$ that is independent of the dimension $m$: (i) functions of the inner product of the two variables, (ii) functions of the squared Euclidean distance between the variables, and (iii) shift-invariant positive-definite kernels. We extend our argument to low-rank tensor-train approximation of tensors generated with functions of the multi-linear product of their $m$-dimensional variables. We discuss our results in the context of low-rank approximation of attention in transformer neural networks. | [
"['Stanislav Budzinskiy']"
] |
null | null | 2407.03257 | null | null | http://arxiv.org/pdf/2407.03257v1 | 2024-07-03T16:38:57Z | 2024-07-03T16:38:57Z | Modern Neighborhood Components Analysis: A Deep Tabular Baseline Two
Decades Later | The growing success of deep learning in various domains has prompted investigations into its application to tabular data, where deep models have shown promising results compared to traditional tree-based methods. In this paper, we revisit Neighborhood Component Analysis (NCA), a classic tabular prediction method introduced in 2004, designed to learn a linear projection that captures semantic similarities between instances. We find that minor modifications, such as adjustments to the learning objectives and the integration of deep learning architectures, significantly enhance NCA's performance, enabling it to surpass most modern deep tabular models. Additionally, we introduce a stochastic neighbor sampling strategy that improves both the efficiency and predictive accuracy of our proposed ModernNCA -- sampling only a subset of neighbors during training, while utilizing the entire neighborhood during inference. Extensive experiments demonstrate that our ModernNCA achieves state-of-the-art results in both classification and regression tasks across various tabular datasets, outperforming both tree-based and other deep tabular models, while also reducing training time and model size. | [
"['Han-Jia Ye' 'Huai-Hong Yin' 'De-Chuan Zhan']"
] |
null | null | 2407.03261 | null | null | http://arxiv.org/pdf/2407.03261v1 | 2024-07-03T16:45:45Z | 2024-07-03T16:45:45Z | Magnetic Hysteresis Modeling with Neural Operators | Hysteresis modeling is crucial to comprehend the behavior of magnetic devices, facilitating optimal designs. Hitherto, deep learning-based methods employed to model hysteresis face challenges in generalizing to novel input magnetic fields. This paper addresses the generalization challenge by proposing neural operators for modeling constitutive laws that exhibit magnetic hysteresis by learning a mapping between magnetic fields. In particular, two prominent neural operators -- deep operator network and Fourier neural operator -- are employed to predict novel first-order reversal curves and minor loops, where novel means they are not used to train the model. In addition, a rate-independent Fourier neural operator is proposed to predict material responses at sampling rates different from those used during training to incorporate the rate-independent characteristics of magnetic hysteresis. The presented numerical experiments demonstrate that neural operators efficiently model magnetic hysteresis, outperforming the traditional neural recurrent methods on various metrics and generalizing to novel magnetic fields. The findings emphasize the advantages of using neural operators for modeling hysteresis under varying magnetic conditions, underscoring their importance in characterizing magnetic material-based devices. | [
"['Abhishek Chandra' 'Bram Daniels' 'Mitrofan Curti' 'Koen Tiels'\n 'Elena A. Lomonova']"
] |
null | null | 2407.03262 | null | null | http://arxiv.org/pdf/2407.03262v1 | 2024-07-03T16:49:28Z | 2024-07-03T16:49:28Z | Nearly Linear Sparsification of $\ell_p$ Subspace Approximation | The $\ell_p$ subspace approximation problem is an NP-hard low rank approximation problem that generalizes the median hyperplane problem ($p = 1$), principal component analysis ($p = 2$), and the center hyperplane problem ($p = \infty$). A popular approach to cope with the NP-hardness of this problem is to compute a strong coreset, which is a small weighted subset of the input points which simultaneously approximates the cost of every $k$-dimensional subspace, typically to $(1+\varepsilon)$ relative error for a small constant $\varepsilon$. We obtain the first algorithm for constructing a strong coreset for $\ell_p$ subspace approximation with a nearly optimal dependence on the rank parameter $k$, obtaining a nearly linear bound of $\tilde{O}(k)\,\mathrm{poly}(\varepsilon^{-1})$ for $p<2$ and $\tilde{O}(k^{p/2})\,\mathrm{poly}(\varepsilon^{-1})$ for $p>2$. Prior constructions either achieved a similar size bound but produced a coreset with a modification of the original points [SW18, FKW21], or produced a coreset of the original points but lost $\mathrm{poly}(k)$ factors in the coreset size [HV20, WY23]. Our techniques also lead to the first nearly optimal online strong coresets for $\ell_p$ subspace approximation with similar bounds as the offline setting, resolving a problem of [WY23]. All prior approaches lose $\mathrm{poly}(k)$ factors in this setting, even when allowed to modify the original points. | [
"['David P. Woodruff' 'Taisuke Yasuda']"
] |
null | null | 2407.03266 | null | null | http://arxiv.org/pdf/2407.03266v1 | 2024-07-03T16:56:08Z | 2024-07-03T16:56:08Z | Do Quantum Neural Networks have Simplicity Bias? | One hypothesis for the success of deep neural networks (DNNs) is that they are highly expressive, which enables them to be applied to many problems, and they have a strong inductive bias towards solutions that are simple, known as simplicity bias, which allows them to generalise well on unseen data because most real-world data is structured (i.e. simple). In this work, we explore the inductive bias and expressivity of quantum neural networks (QNNs), which gives us a way to compare their performance to that of DNNs. Our results show that it is possible to have simplicity bias with certain QNNs, but we prove that this type of QNN limits the expressivity of the QNN. We also show that it is possible to have QNNs with high expressivity, but they either have no inductive bias or a poor inductive bias and result in a worse generalisation performance compared to DNNs. We demonstrate that an artificial (restricted) inductive bias can be produced by intentionally restricting the expressivity of a QNN. Our results suggest a bias-expressivity tradeoff. Our conclusion is that the QNNs we studied cannot generally offer an advantage over DNNs, because these QNNs either have a poor inductive bias or poor expressivity compared to DNNs. | [
"['Jessica Pointing']"
] |
null | null | 2407.03289 | null | null | http://arxiv.org/pdf/2407.03289v1 | 2024-07-03T17:22:33Z | 2024-07-03T17:22:33Z | Correlated Privacy Mechanisms for Differentially Private Distributed
Mean Estimation | Differentially private distributed mean estimation (DP-DME) is a fundamental building block in privacy-preserving federated learning, where a central server estimates the mean of $d$-dimensional vectors held by $n$ users while ensuring $(\epsilon,\delta)$-DP. Local differential privacy (LDP) and distributed DP with secure aggregation (SecAgg) are the most common notions of DP used in DP-DME settings with an untrusted server. LDP provides strong resilience to dropouts, colluding users, and malicious server attacks, but suffers from poor utility. In contrast, SecAgg-based DP-DME achieves an $O(n)$ utility gain over LDP in DME, but requires increased communication and computation overheads and complex multi-round protocols to handle dropouts and malicious attacks. In this work, we propose CorDP-DME, a novel DP-DME mechanism that spans the gap between DME with LDP and distributed DP, offering a favorable balance between utility and resilience to dropout and collusion. CorDP-DME is based on correlated Gaussian noise, ensuring DP without the perfect conditional privacy guarantees of SecAgg-based approaches. We provide an information-theoretic analysis of CorDP-DME, and derive theoretical guarantees for utility under any given privacy parameters and dropout/colluding user thresholds. Our results demonstrate that (anti) correlated Gaussian DP mechanisms can significantly improve utility in mean estimation tasks compared to LDP -- even in adversarial settings -- while maintaining better resilience to dropouts and attacks compared to distributed DP. | [
"['Sajani Vithana' 'Viveck R. Cadambe' 'Flavio P. Calmon' 'Haewon Jeong']"
] |
null | null | 2407.03294 | null | null | http://arxiv.org/pdf/2407.03294v1 | 2024-07-03T17:28:17Z | 2024-07-03T17:28:17Z | Vertex Exchange Method for a Class of Quadratic Programming Problems | A vertex exchange method is proposed for solving the strongly convex quadratic program subject to the generalized simplex constraint. We conduct rigorous convergence analysis for the proposed algorithm and demonstrate its essential roles in solving some important classes of constrained convex optimization. To get a feasible initial point to execute the algorithm, we also present and analyze a highly efficient semismooth Newton method for computing the projection onto the generalized simplex. The excellent practical performance of the proposed algorithms is demonstrated by a set of extensive numerical experiments. Our theoretical and numerical results further motivate the potential applications of the considered model and the proposed algorithms. | [
"['Ling Liang' 'Kim-Chuan Toh' 'Haizhao Yang']"
] |
null | null | 2407.03300 | null | null | http://arxiv.org/pdf/2407.03300v1 | 2024-07-03T17:42:46Z | 2024-07-03T17:42:46Z | DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents | Diffusion models (DMs) have revolutionized generative learning. They utilize a diffusion process to encode data into a simple Gaussian distribution. However, encoding a complex, potentially multimodal data distribution into a single continuous Gaussian distribution arguably represents an unnecessarily challenging learning problem. We propose Discrete-Continuous Latent Variable Diffusion Models (DisCo-Diff) to simplify this task by introducing complementary discrete latent variables. We augment DMs with learnable discrete latents, inferred with an encoder, and train DM and encoder end-to-end. DisCo-Diff does not rely on pre-trained networks, making the framework universally applicable. The discrete latents significantly simplify learning the DM's complex noise-to-data mapping by reducing the curvature of the DM's generative ODE. An additional autoregressive transformer models the distribution of the discrete latents, a simple step because DisCo-Diff requires only a few discrete variables with small codebooks. We validate DisCo-Diff on toy data, several image synthesis tasks as well as molecular docking, and find that introducing discrete latents consistently improves model performance. For example, DisCo-Diff achieves state-of-the-art FID scores on class-conditioned ImageNet-64/128 datasets with an ODE sampler. | [
"['Yilun Xu' 'Gabriele Corso' 'Tommi Jaakkola' 'Arash Vahdat'\n 'Karsten Kreis']"
] |
null | null | 2407.03310 | null | null | http://arxiv.org/pdf/2407.03310v1 | 2024-07-03T17:53:44Z | 2024-07-03T17:53:44Z | Universal Length Generalization with Turing Programs | Length generalization refers to the ability to extrapolate from short training sequences to long test sequences and is a challenge for current large language models. While prior work has proposed some architecture or data format changes to achieve length generalization, these proposals typically apply to a limited set of tasks. Building on prior scratchpad and Chain-of-Thought (CoT) techniques, we propose Turing Programs, a novel CoT strategy that decomposes an algorithmic task into steps mimicking the computation of a Turing Machine. This framework is both universal, as it can accommodate any algorithmic task, and simple, requiring only copying text from the context with small modifications. We show that by using Turing Programs, we obtain robust length generalization on a range of algorithmic tasks: addition, multiplication and in-context SGD. We then demonstrate that transformers achieve length generalization on random Turing Programs, suggesting that length generalization is possible for any algorithmic task. Finally, we theoretically prove that transformers can implement Turing Programs, constructing a simple RASP (Weiss et al.) program that simulates an arbitrary Turing machine. | [
"['Kaiying Hou' 'David Brandfonbrener' 'Sham Kakade' 'Samy Jelassi'\n 'Eran Malach']"
] |
null | null | 2407.03311 | null | null | http://arxiv.org/pdf/2407.03311v1 | 2024-07-03T17:54:11Z | 2024-07-03T17:54:11Z | Value-Penalized Auxiliary Control from Examples for Learning without
Rewards or Demonstrations | Learning from examples of success is an appealing approach to reinforcement learning that eliminates many of the disadvantages of using hand-crafted reward functions or full expert-demonstration trajectories, both of which can be difficult to acquire, biased, or suboptimal. However, learning from examples alone dramatically increases the exploration challenge, especially for complex tasks. This work introduces value-penalized auxiliary control from examples (VPACE); we significantly improve exploration in example-based control by adding scheduled auxiliary control and examples of auxiliary tasks. Furthermore, we identify a value-calibration problem, where policy value estimates can exceed their theoretical limits based on successful data. We resolve this problem, which is exacerbated by learning auxiliary tasks, through the addition of an above-success-level value penalty. Across three simulated and one real robotic manipulation environment, and 21 different main tasks, we show that our approach substantially improves learning efficiency. Videos, code, and datasets are available at https://papers.starslab.ca/vpace. | [
"['Trevor Ablett' 'Bryan Chan' 'Jayce Haoran Wang' 'Jonathan Kelly']"
] |
null | null | 2407.03321 | null | null | http://arxiv.org/pdf/2407.03321v1 | 2024-07-03T17:59:53Z | 2024-07-03T17:59:53Z | Planetarium: A Rigorous Benchmark for Translating Text to Structured
Planning Languages | Many recent works have explored using language models for planning problems. One line of research focuses on translating natural language descriptions of planning tasks into structured planning languages, such as the planning domain definition language (PDDL). While this approach is promising, accurately measuring the quality of generated PDDL code continues to pose significant challenges. First, generated PDDL code is typically evaluated using planning validators that check whether the problem can be solved with a planner. This method is insufficient because a language model might generate valid PDDL code that does not align with the natural language description of the task. Second, existing evaluation sets often have natural language descriptions of the planning task that closely resemble the ground truth PDDL, reducing the challenge of the task. To bridge this gap, we introduce Planetarium, a benchmark designed to evaluate language models' ability to generate PDDL code from natural language descriptions of planning tasks. We begin by creating a PDDL equivalence algorithm that rigorously evaluates the correctness of PDDL code generated by language models by flexibly comparing it against a ground truth PDDL. Then, we present a dataset of $132,037$ text-to-PDDL pairs across 13 different tasks, with varying levels of difficulty. Finally, we evaluate several API-access and open-weight language models that reveal this task's complexity. For example, $87.6\%$ of the PDDL problem descriptions generated by GPT-4o are syntactically parseable, $82.2\%$ are valid, solvable problems, but only $35.1\%$ are semantically correct, highlighting the need for a more rigorous benchmark for this problem. | [
"['Max Zuo' 'Francisco Piedrahita Velez' 'Xiaochen Li' 'Michael L. Littman'\n 'Stephen H. Bach']"
] |
null | null | 2407.03333 | null | null | http://arxiv.org/pdf/2407.03333v1 | 2024-05-10T01:10:49Z | 2024-05-10T01:10:49Z | C-ShipGen: A Conditional Guided Diffusion Model for Parametric Ship Hull
Design | Ship design is a complex design process that may take a team of naval architects many years to complete. Improving the ship design process can lead to significant cost savings, while still delivering high-quality designs to customers. A new technology for ship hull design is diffusion models, a type of generative artificial intelligence. Prior work with diffusion models for ship hull design created high-quality ship hulls with reduced drag and larger displaced volumes. However, the work could not generate hulls that meet specific design constraints. This paper proposes a conditional diffusion model that generates hull designs given specific constraints, such as the desired principal dimensions of the hull. In addition, this diffusion model leverages the gradients from a total resistance regression model to create low-resistance designs. Five design test cases compared the diffusion model to a design optimization algorithm to create hull designs with low resistance. In all five test cases, the diffusion model was shown to create diverse designs with a total resistance less than the optimized hull, having resistance reductions over 25%. The diffusion model also generated these designs without retraining. This work can significantly reduce the design cycle time of ships by creating high-quality hulls that meet user requirements with a data-driven approach. | [
"['Noah J. Bagazinski' 'Faez Ahmed']"
] |
null | null | 2407.03340 | null | null | http://arxiv.org/pdf/2407.03340v1 | 2024-05-20T13:09:32Z | 2024-05-20T13:09:32Z | A Multi-Modal Explainability Approach for Human-Aware Robots in
Multi-Party Conversation | The addressee estimation (understanding to whom somebody is talking) is a fundamental task for human activity recognition in multi-party conversation scenarios. Specifically, in the field of human-robot interaction, it becomes even more crucial to enable social robots to participate in such interactive contexts. However, it is usually implemented as a binary classification task, restricting the robot's capability to estimate whether it was addressed and limiting its interactive skills. For a social robot to gain the trust of humans, it is also important to manifest a certain level of transparency and explainability. Explainable artificial intelligence thus plays a significant role in the current machine learning applications and models, to provide explanations for their decisions besides excellent performance. In our work, we a) present an addressee estimation model with improved performance in comparison with the previous SOTA; b) further modify this model to include inherently explainable attention-based segments; c) implement the explainable addressee estimation as part of a modular cognitive architecture for multi-party conversation in an iCub robot; d) propose several ways to incorporate explainability and transparency in the aforementioned architecture; and e) perform a pilot user study to analyze the effect of various explanations on how human participants perceive the robot. | [
"['Iveta Bečková' 'Štefan Pócoš' 'Giulia Belgiovine' 'Marco Matarese'\n 'Alessandra Sciutti' 'Carlo Mazzola']"
] |
null | null | 2407.03342 | null | null | http://arxiv.org/pdf/2407.03342v1 | 2024-05-29T01:03:48Z | 2024-05-29T01:03:48Z | Prototype Analysis in Hopfield Networks with Hebbian Learning | We discuss prototype formation in the Hopfield network. Typically, Hebbian learning with highly correlated states leads to degraded memory performance. We show this type of learning can lead to prototype formation, where unlearned states emerge as representatives of large correlated subsets of states, alleviating capacity woes. This process has similarities to prototype learning in human cognition. We provide a substantial literature review of prototype learning in associative memories, covering contributions from psychology, statistical physics, and computer science. We analyze prototype formation from a theoretical perspective and derive a stability condition for these states based on the number of examples of the prototype presented for learning, the noise in those examples, and the number of non-example states presented. The stability condition is used to construct a probability of stability for a prototype state as the factors of stability change. We also note similarities to traditional network analysis, allowing us to find a prototype capacity. We corroborate these expectations of prototype formation with experiments using a simple Hopfield network with standard Hebbian learning. We extend our experiments to a Hopfield network trained on data with multiple prototypes and find the network is capable of stabilizing multiple prototypes concurrently. We measure the basins of attraction of the multiple prototype states, finding attractor strength grows with the number of examples and the agreement of examples. We link the stability and dominance of prototype states to the energy profile of these states, particularly when comparing the profile shape to target states or other spurious states. | [
"['Hayden McAlister' 'Anthony Robins' 'Lech Szymanski']"
] |
null | null | 2407.03347 | null | null | http://arxiv.org/pdf/2407.03347v1 | 2024-06-06T05:31:45Z | 2024-06-06T05:31:45Z | Chebyshev Spectral Neural Networks for Solving Partial Differential
Equations | The purpose of this study is to utilize the Chebyshev spectral method neural network (CSNN) model to solve differential equations. This approach employs a single-layer neural network wherein Chebyshev spectral methods are used to construct neurons satisfying boundary conditions. The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function. This method avoids the need to solve non-sparse linear systems, making it convenient for algorithm implementation and solving high-dimensional problems. The unique sampling method and neuron architecture significantly enhance the training efficiency and accuracy of the neural network. Furthermore, the use of multiple networks enables the Chebyshev spectral method to handle equations on more complex domains. The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method. | [
"['Pengsong Yin' 'Shuo Ling' 'Wenjun Ying']"
] |
null | null | 2407.03356 | null | null | http://arxiv.org/pdf/2407.03356v1 | 2024-06-20T19:00:33Z | 2024-06-20T19:00:33Z | AI Driven Laser Parameter Search: Inverse Design of Photonic Surfaces
using Greedy Surrogate-based Optimization | Photonic surfaces designed with specific optical characteristics are becoming increasingly important for use in various energy harvesting and storage systems. In this study, we develop a surrogate-based optimization approach for designing such surfaces. The surrogate-based optimization framework employs the Random Forest algorithm and uses a greedy, prediction-based exploration strategy to identify the laser fabrication parameters that minimize the discrepancy relative to user-defined target optical characteristics. We demonstrate the approach on two synthetic benchmarks and two specific cases of photonic surface inverse design targets. It exhibits superior performance when compared to other optimization algorithms across all benchmarks. Additionally, we demonstrate a technique of inverse design warm starting for changed target optical characteristics, which enhances the performance of the introduced approach. | [
"['Luka Grbcic' 'Minok Park' 'Juliane Müller' 'Vassilia Zorba'\n 'Wibe Albert de Jong']"
] |
null | null | 2407.03365 | null | null | http://arxiv.org/pdf/2407.03365v1 | 2024-06-28T23:51:04Z | 2024-06-28T23:51:04Z | ML Updates for OpenStreetMap: Analysis of Research Gaps and Future
Directions | Maintaining accurate, up-to-date maps is important in any dynamic urban landscape, supporting various aspects of modern society, such as urban planning, navigation, and emergency response. However, traditional (i.e. largely manual) map production and crowdsourced mapping methods still struggle to keep pace with rapid changes in the built environment. Such manual mapping workflows are time-consuming and prone to human errors, leading to early obsolescence and/or the need for extensive auditing. The current map updating process in OpenStreetMap provides an example of this limitation, relying on numerous manual steps in its online map updating workflow. To address this, there is a need to explore automating the entire end-to-end map updating process. Tech giants such as Google and Microsoft have already started investigating Machine Learning (ML) techniques to tackle this contemporary mapping problem. This paper offers an analysis of these ML approaches, focusing on their application to updating OpenStreetMap in particular. By analysing the current state-of-the-art in this field, this study identifies some key research gaps and introduces DeepMapper as a practical solution for advancing the automatic online map updating process in the future. | [
"['Lasith Niroshan' 'James D. Carswell']"
] |
null | null | 2407.03379 | null | null | http://arxiv.org/pdf/2407.03379v1 | 2024-07-02T17:45:46Z | 2024-07-02T17:45:46Z | missForestPredict -- Missing data imputation for prediction settings | Prediction models are used to predict an outcome based on input variables. Missing data in input variables often occurs at model development and at prediction time. The missForestPredict R package proposes an adaptation of the missForest imputation algorithm that is fast, user-friendly and tailored for prediction settings. The algorithm iteratively imputes variables using random forests until a convergence criterion (unified for continuous and categorical variables and based on the out-of-bag error) is met. The imputation models are saved for each variable and iteration and can be applied later to new observations at prediction time. The missForestPredict package offers extended error monitoring, control over variables used in the imputation and custom initialization. This allows users to tailor the imputation to their specific needs. The missForestPredict algorithm is compared to mean/mode imputation, linear regression imputation, mice, k-nearest neighbours, bagging, miceRanger and IterativeImputer on eight simulated datasets with simulated missingness (48 scenarios) and eight large public datasets using different prediction models. missForestPredict provides competitive results in prediction settings within short computation times. | [
"['Elena Albu' 'Shan Gao' 'Laure Wynants' 'Ben Van Calster']"
] |
null | null | 2407.03380 | null | null | http://arxiv.org/pdf/2407.03380v1 | 2024-07-02T20:13:47Z | 2024-07-02T20:13:47Z | Multi-Peptide: Multimodality Leveraged Language-Graph Learning of
Peptide Properties | Peptides are essential in biological processes and therapeutics. In this study, we introduce Multi-Peptide, an innovative approach that combines transformer-based language models with Graph Neural Networks (GNNs) to predict peptide properties. We combine PeptideBERT, a transformer model tailored for peptide property prediction, with a GNN encoder to capture both sequence-based and structural features. By employing Contrastive Language-Image Pre-training (CLIP), Multi-Peptide aligns embeddings from both modalities into a shared latent space, thereby enhancing the model's predictive accuracy. Evaluations on hemolysis and nonfouling datasets demonstrate Multi-Peptide's robustness, achieving state-of-the-art 86.185% accuracy in hemolysis prediction. This study highlights the potential of multimodal learning in bioinformatics, paving the way for accurate and reliable predictions in peptide-based research and applications. | [
"['Srivathsan Badrinarayanan' 'Chakradhar Guntuboina' 'Parisa Mollaei'\n 'Amir Barati Farimani']"
] |
null | null | 2407.03381 | null | null | http://arxiv.org/pdf/2407.03381v1 | 2024-07-02T20:28:30Z | 2024-07-02T20:28:30Z | SeqMate: A Novel Large Language Model Pipeline for Automating RNA
Sequencing | RNA sequencing techniques, like bulk RNA-seq and Single Cell (sc) RNA-seq, are critical tools for the biologist looking to analyze the genetic activity/transcriptome of a tissue or cell during an experimental procedure. Platforms like Illumina's next-generation sequencing (NGS) are used to produce the raw data for this experimental procedure. This raw FASTQ data must then be prepared via a complex series of data manipulations by bioinformaticians. This process currently takes place on an unwieldy textual user interface like a terminal/command line that requires the user to install and import multiple program packages, preventing the untrained biologist from initiating data analysis. Open-source platforms like Galaxy have produced a more user-friendly pipeline, yet the visual interface remains cluttered, highly technical, and uninviting for the natural scientist. To address this, SeqMate is a user-friendly tool that allows for one-click analytics by utilizing the power of a large language model (LLM) to automate both data preparation and analysis (differential expression, trajectory analysis, etc.). Furthermore, by utilizing the power of generative AI, SeqMate is also capable of analyzing such findings and producing written reports of upregulated/downregulated/user-prompted genes with sources cited from known repositories like PubMed, PDB, and Uniprot. | [
"['Devam Mondal' 'Atharva Inamdar']"
] |
null | null | 2407.03382 | null | null | http://arxiv.org/pdf/2407.03382v1 | 2024-07-02T22:22:36Z | 2024-07-02T22:22:36Z | Geometric statistics with subspace structure preservation for SPD
matrices | We present a geometric framework for the processing of SPD-valued data that preserves subspace structures and is based on the efficient computation of extreme generalized eigenvalues. This is achieved through the use of the Thompson geometry of the semidefinite cone. We explore a particular geodesic space structure in detail and establish several properties associated with it. Finally, we review a novel inductive mean of SPD matrices based on this geometry. | [
"['Cyrus Mostajeran' 'Nathaël Da Costa' 'Graham Van Goffrier'\n 'Rodolphe Sepulchre']"
] |
null | null | 2407.03389 | null | null | http://arxiv.org/pdf/2407.03389v1 | 2024-07-03T09:06:19Z | 2024-07-03T09:06:19Z | A Deterministic Information Bottleneck Method for Clustering Mixed-Type
Data | In this paper, we present an information-theoretic method for clustering mixed-type data, that is, data consisting of both continuous and categorical variables. The method is a variant of the Deterministic Information Bottleneck algorithm which optimally compresses the data while retaining relevant information about the underlying structure. We compare the performance of the proposed method to that of three well-established clustering methods (KAMILA, K-Prototypes, and Partitioning Around Medoids with Gower's dissimilarity) on simulated and real-world datasets. The results demonstrate that the proposed approach represents a competitive alternative to conventional clustering techniques under specific conditions. | [
"['Efthymios Costa' 'Ioanna Papatsouma' 'Angelos Markos']"
] |
null | null | 2407.03392 | null | null | http://arxiv.org/pdf/2407.03392v1 | 2024-07-03T15:30:44Z | 2024-07-03T15:30:44Z | M5: A Whole Genome Bacterial Encoder at Single Nucleotide Resolution | A linear attention mechanism is described to extend the context length of an encoder-only transformer, called M5 in this report, to a multi-million single nucleotide resolution foundation model pretrained on bacterial whole genomes. The linear attention mechanism used approximates a full quadratic attention mechanism tightly and has a simple and lightweight implementation for the use case when the key-query embedding dimensionality is low. The M5-small model is entirely trained and tested on one A100 GPU with 40 GB of memory up to 196K nucleotides during training and 2M nucleotides during testing. We test the performance of the M5-small model and record notable improvements in performance as whole genome bacterial sequence lengths are increased, as well as demonstrating the stability of the full multi-head attention approximation used as sequence length is increased. | [
"['Agust Egilsson']"
] |
null | null | 2407.03418 | null | null | http://arxiv.org/pdf/2407.03418v1 | 2024-07-03T18:00:48Z | 2024-07-03T18:00:48Z | HEMM: Holistic Evaluation of Multimodal Foundation Models | Multimodal foundation models that can holistically process text alongside images, video, audio, and other sensory modalities are increasingly used in a variety of real-world applications. However, it is challenging to characterize and study progress in multimodal foundation models, given the range of possible modeling decisions, tasks, and domains. In this paper, we introduce Holistic Evaluation of Multimodal Models (HEMM) to systematically evaluate the capabilities of multimodal foundation models across a set of 3 dimensions: basic skills, information flow, and real-world use cases. Basic multimodal skills are internal abilities required to solve problems, such as learning interactions across modalities, fine-grained alignment, multi-step reasoning, and the ability to handle external knowledge. Information flow studies how multimodal content changes during a task through querying, translation, editing, and fusion. Use cases span domain-specific challenges introduced in real-world multimedia, affective computing, natural sciences, healthcare, and human-computer interaction applications. Through comprehensive experiments across the 30 tasks in HEMM, we (1) identify key dataset dimensions (e.g., basic skills, information flows, and use cases) that pose challenges to today's models, and (2) distill performance trends regarding how different modeling dimensions (e.g., scale, pre-training data, multimodal alignment, pre-training, and instruction tuning objectives) influence performance. Our conclusions regarding challenging multimodal interactions, use cases, and tasks requiring reasoning and external knowledge, the benefits of data and model scale, and the impacts of instruction tuning yield actionable insights for future work in multimodal foundation models. | [
"['Paul Pu Liang' 'Akshay Goindani' 'Talha Chafekar' 'Leena Mathur'\n 'Haofei Yu' 'Ruslan Salakhutdinov' 'Louis-Philippe Morency']"
] |
null | null | 2407.03426 | null | null | http://arxiv.org/pdf/2407.03426v1 | 2024-07-03T18:09:25Z | 2024-07-03T18:09:25Z | Multi-Task Decision-Making for Multi-User 360 Video Processing over
Wireless Networks | We study a multi-task decision-making problem for 360 video processing in a wireless multi-user virtual reality (VR) system that includes an edge computing unit (ECU) to deliver 360 videos to VR users and offer computing assistance for decoding/rendering of video frames. However, this comes at the expense of increased data volume and required bandwidth. To balance this trade-off, we formulate a constrained quality of experience (QoE) maximization problem in which the rebuffering time and quality variation between video frames are bounded by user and video requirements. To solve the formulated multi-user QoE maximization, we leverage deep reinforcement learning (DRL) for multi-task rate adaptation and computation distribution (MTRC). The proposed MTRC approach does not rely on any predefined assumption about the environment and relies on video playback statistics (i.e., past throughput, decoding time, transmission time, etc.), video information, and the resulting performance to adjust the video bitrate and computation distribution. We train MTRC with real-world wireless network traces and 360 video datasets to obtain evaluation results in terms of the average QoE, peak signal-to-noise ratio (PSNR), rebuffering time, and quality variation. Our results indicate that the MTRC improves the users' QoE compared to a state-of-the-art rate adaptation algorithm. Specifically, we show a 5.97 dB to 6.44 dB improvement in PSNR, a 1.66X to 4.23X improvement in rebuffering time, and a 4.21 dB to 4.35 dB improvement in quality variation. | [
"['Babak Badnava' 'Jacob Chakareski' 'Morteza Hashemi']"
] |
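The constrained QoE objective in the MTRC record above is, in DRL practice, typically folded into a scalar reward combining a bitrate utility with rebuffering and quality-variation penalties. A minimal sketch follows; the log utility and penalty weights are illustrative assumptions, not the paper's exact formulation.

```python
import math

def qoe_reward(bitrate_kbps, prev_bitrate_kbps, rebuffer_s,
               w_rebuffer=4.3, w_smooth=1.0):
    """Reward = quality utility - rebuffering penalty - quality-variation penalty."""
    utility = math.log(bitrate_kbps)                  # diminishing returns in bitrate
    rebuffer_penalty = w_rebuffer * rebuffer_s        # the paper bounds this via constraints
    smooth_penalty = w_smooth * abs(math.log(bitrate_kbps) - math.log(prev_bitrate_kbps))
    return utility - rebuffer_penalty - smooth_penalty

# Example: switching from 4 Mbps to 8 Mbps with 0.2 s of rebuffering.
print(qoe_reward(8000, 4000, rebuffer_s=0.2))
```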
null | null | 2407.03428 | null | null | http://arxiv.org/pdf/2407.03428v1 | 2024-07-03T18:10:43Z | 2024-07-03T18:10:43Z | NEBULA: Neural Empirical Bayes Under Latent Representations for
Efficient and Controllable Design of Molecular Libraries | We present NEBULA, the first latent 3D generative model for scalable generation of large molecular libraries around a seed compound of interest. Such libraries are crucial for scientific discovery, but it remains challenging to generate large numbers of high quality samples efficiently. 3D-voxel-based methods have recently shown great promise for generating high quality samples de novo from random noise (Pinheiro et al., 2023). However, sampling in 3D-voxel space is computationally expensive and its use in library generation is prohibitively slow. Here, we instead perform neural empirical Bayes sampling (Saremi & Hyvarinen, 2019) in the learned latent space of a vector-quantized variational autoencoder. NEBULA generates large molecular libraries nearly an order of magnitude faster than existing methods without sacrificing sample quality. Moreover, NEBULA generalizes better to unseen drug-like molecules, as demonstrated on two public datasets and multiple recently released drugs. We expect the approach herein to be highly enabling for machine learning-based drug discovery. The code is available at https://github.com/prescient-design/nebula | [
"['Ewa M. Nowara' 'Pedro O. Pinheiro' 'Sai Pooja Mahajan' 'Omar Mahmood'\n 'Andrew Martin Watkins' 'Saeed Saremi' 'Michael Maser']"
] |
null | null | 2407.03436 | null | null | http://arxiv.org/pdf/2407.03436v1 | 2024-07-03T18:27:26Z | 2024-07-03T18:27:26Z | A Role of Environmental Complexity on Representation Learning in Deep
Reinforcement Learning Agents | The environments where individuals live can present diverse navigation challenges, resulting in varying navigation abilities and strategies. Inspired by differing urban layouts and the Dual Solutions Paradigm test used for human navigators, we developed a simulated navigation environment to train deep reinforcement learning agents in a shortcut usage task. We modulated the frequency of exposure to a shortcut and navigation cue, leading to the development of artificial agents with differing abilities. We examined the encoded representations in artificial neural networks driving these agents, revealing intricate dynamics in representation learning, and correlated them with shortcut use preferences. Furthermore, we demonstrated methods to analyze representations across a population of nodes, which proved effective in finding patterns in what would otherwise be noisy single-node data. These techniques may also have broader applications in studying neural activity. From our observations in representation learning dynamics, we propose insights for human navigation learning, emphasizing the importance of navigation challenges in developing strong landmark knowledge over repeated exposures to landmarks alone. | [
"['Andrew Liu' 'Alla Borisyuk']"
] |
null | null | 2407.03440 | null | null | http://arxiv.org/pdf/2407.03440v1 | 2024-07-03T18:33:47Z | 2024-07-03T18:33:47Z | Advanced Framework for Animal Sound Classification With Features
Optimization | The automatic classification of animal sounds presents an enduring challenge in bioacoustics, owing to the diverse statistical properties of sound signals, variations in recording equipment, and prevalent low Signal-to-Noise Ratio (SNR) conditions. Deep learning models like Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) have excelled in human speech recognition but have not been effectively tailored to the intricate nature of animal sounds, which exhibit substantial diversity even within the same domain. We propose an automated framework for general animal sound classification. Our approach first optimizes audio features from Mel-frequency cepstral coefficients (MFCC) including feature rearrangement and feature reduction. It then uses the optimized features for the deep learning model, i.e., an attention-based Bidirectional LSTM (Bi-LSTM), to extract deep semantic features for sound classification. We also contribute an animal sound benchmark dataset encompassing oceanic animals and birds. Extensive experimentation with real-world datasets demonstrates that our approach consistently outperforms baseline methods by over 25% in precision, recall, and accuracy, promising advancements in animal sound classification. | [
"['Qiang Yang' 'Xiuying Chen' 'Changsheng Ma' 'Carlos M. Duarte'\n 'Xiangliang Zhang']"
] |
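The pipeline in the animal-sound record above, MFCC features feeding an attention-based Bi-LSTM, can be sketched compactly. The feature-optimization steps (rearrangement and reduction) are omitted; all shapes, hyperparameters, and the synthetic audio are illustrative assumptions, not the paper's configuration.

```python
import numpy as np, librosa, torch, torch.nn as nn

y = np.random.randn(22050).astype(np.float32)             # 1 s of placeholder audio
mfcc = librosa.feature.mfcc(y=y, sr=22050, n_mfcc=20)     # (20, frames)
x = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)  # (batch=1, frames, 20)

class AttnBiLSTM(nn.Module):
    def __init__(self, n_feat=20, hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)              # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)
    def forward(self, x):
        h, _ = self.lstm(x)                               # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)            # attention over time
        pooled = (w * h).sum(dim=1)                       # weighted temporal pooling
        return self.head(pooled)

logits = AttnBiLSTM()(x)
print(logits.shape)  # torch.Size([1, 10])
```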
null | null | 2407.03453 | null | null | http://arxiv.org/pdf/2407.03453v1 | 2024-07-03T18:53:22Z | 2024-07-03T18:53:22Z | On Large Language Models in National Security Applications | The overwhelming success of GPT-4 in early 2023 highlighted the transformative potential of large language models (LLMs) across various sectors, including national security. This article explores the implications of LLM integration within national security contexts, analyzing their potential to revolutionize information processing, decision-making, and operational efficiency. Whereas LLMs offer substantial benefits, such as automating tasks and enhancing data analysis, they also pose significant risks, including hallucinations, data privacy concerns, and vulnerability to adversarial attacks. Through their coupling with decision-theoretic principles and Bayesian reasoning, LLMs can significantly improve decision-making processes within national security organizations. Namely, LLMs can facilitate the transition from data to actionable decisions, enabling decision-makers to quickly receive and distill available information with less manpower. Current applications within the US Department of Defense and beyond are explored, e.g., the USAF's use of LLMs for wargaming and automatic summarization, that illustrate their potential to streamline operations and support decision-making. However, these applications necessitate rigorous safeguards to ensure accuracy and reliability. The broader implications of LLM integration extend to strategic planning, international relations, and the broader geopolitical landscape, with adversarial nations leveraging LLMs for disinformation and cyber operations, emphasizing the need for robust countermeasures. Despite exhibiting "sparks" of artificial general intelligence, LLMs are best suited for supporting roles rather than leading strategic decisions. Their use in training and wargaming can provide valuable insights and personalized learning experiences for military personnel, thereby improving operational readiness. | [
"['William N. Caballero' 'Phillip R. Jenkins']"
] |
null | null | 2407.03470 | null | null | http://arxiv.org/pdf/2407.03470v1 | 2024-07-03T19:34:47Z | 2024-07-03T19:34:47Z | Prosody-Driven Privacy-Preserving Dementia Detection | Speaker embeddings extracted from voice recordings have been proven valuable for dementia detection. However, by their nature, these embeddings contain identifiable information which raises privacy concerns. In this work, we aim to anonymize embeddings while preserving the diagnostic utility for dementia detection. Previous studies rely on adversarial learning and models trained on the target attribute and struggle in limited-resource settings. We propose a novel approach that leverages domain knowledge to disentangle prosody features relevant to dementia from speaker embeddings without relying on a dementia classifier. Our experiments show the effectiveness of our approach in preserving speaker privacy (speaker recognition F1-score of .01%) while maintaining a high dementia detection F1-score of 74% on the ADReSS dataset. Our results are also on par with a more constrained classifier-dependent system on ADReSSo (.01% and .66%), and have no impact on synthesized speech naturalness. | [
"['Dominika Woszczyk' 'Ranya Aloufi' 'Soteris Demetriou']"
] |
null | null | 2407.03475 | null | null | http://arxiv.org/pdf/2407.03475v1 | 2024-07-03T19:43:12Z | 2024-07-03T19:43:12Z | How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self
Distillation Networks | Two competing paradigms exist for self-supervised learning of data representations. Joint Embedding Predictive Architecture (JEPA) is a class of architectures in which semantically similar inputs are encoded into representations that are predictive of each other. A recent successful approach that falls under the JEPA framework is self-distillation, where an online encoder is trained to predict the output of the target encoder, sometimes using a lightweight predictor network. This is contrasted with the Masked AutoEncoder (MAE) paradigm, where an encoder and decoder are trained to reconstruct missing parts of the input in the data space, rather than in its latent representation. A common motivation for using the JEPA approach over MAE is that the JEPA objective prioritizes abstract features over fine-grained pixel information (which can be unpredictable and uninformative). In this work, we seek to understand the mechanism behind this empirical observation by analyzing the training dynamics of deep linear models. We uncover a surprising mechanism: in a simplified linear setting where both approaches learn similar representations, JEPAs are biased to learn high-influence features, i.e., features characterized by having high regression coefficients. Our results point to a distinct implicit bias of predicting in latent space that may shed light on its success in practice. | [
"['Etai Littwin' 'Omid Saremi' 'Madhu Advani' 'Vimal Thilak'\n 'Preetum Nakkiran' 'Chen Huang' 'Joshua Susskind']"
] |
null | null | 2407.03482 | null | null | http://arxiv.org/pdf/2407.03482v2 | 2024-07-10T13:27:20Z | 2024-07-03T20:10:55Z | Domain-Aware Fine-Tuning of Foundation Models | Foundation models (FMs) have revolutionized computer vision, enabling effective learning across different domains. However, their performance under domain shift remains underexplored. This paper investigates the zero-shot domain adaptation potential of FMs by comparing different backbone architectures and introducing novel domain-aware components that leverage domain-related textual embeddings. We propose domain adaptive normalization, termed Domino, which explicitly leverages domain embeddings during fine-tuning, thus making the model domain aware. Ultimately, Domino enables more robust computer vision models that can adapt effectively to various unseen domains. | [
"['Ugur Ali Kaplan' 'Margret Keuper' 'Anna Khoreva' 'Dan Zhang' 'Yumeng Li']"
] |
null | null | 2407.03495 | null | null | http://arxiv.org/pdf/2407.03495v1 | 2024-07-03T20:51:41Z | 2024-07-03T20:51:41Z | Codec-ASR: Training Performant Automatic Speech Recognition Systems with
Discrete Speech Representations | Discrete speech representations have garnered recent attention for their efficacy in training transformer-based models for various speech-related tasks such as automatic speech recognition (ASR), translation, speaker verification, and joint speech-text foundational models. In this work, we present a comprehensive analysis on building ASR systems with discrete codes. We investigate different methods for codec training such as quantization schemes and time-domain vs spectral feature encodings. We further explore ASR training techniques aimed at enhancing performance, training efficiency, and noise robustness. Drawing upon our findings, we introduce a codec ASR pipeline that outperforms Encodec at a similar bit-rate. Remarkably, it also surpasses the state-of-the-art results achieved by strong self-supervised models on the 143-language ML-SUPERB benchmark despite being smaller in size and pretrained on significantly less data. | [
"['Kunal Dhawan' 'Nithin Rao Koluguri' 'Ante Jukić' 'Ryan Langman'\n 'Jagadeesh Balam' 'Boris Ginsburg']"
] |
null | null | 2407.03502 | null | null | http://arxiv.org/pdf/2407.03502v1 | 2024-07-03T21:01:12Z | 2024-07-03T21:01:12Z | AgentInstruct: Toward Generative Teaching with Agentic Flows | Synthetic data is becoming increasingly important for accelerating the development of language models, both large and small. Despite several successful use cases, researchers have also raised concerns around model collapse and drawbacks of imitating other models. This discrepancy can be attributed to the fact that synthetic data varies in quality and diversity. Effective use of synthetic data usually requires significant human effort in curating the data. We focus on using synthetic data for post-training, specifically using powerful models to create data that teaches a new skill or behavior to another model; we refer to this setting as Generative Teaching. We introduce AgentInstruct, an extensible agentic framework for automatically creating large amounts of diverse and high-quality synthetic data. AgentInstruct can create both the prompts and responses, using only raw data sources like text documents and code files as seeds. We demonstrate the utility of AgentInstruct by creating a post-training dataset of 25M pairs to teach language models different skills, such as text editing, creative writing, tool usage, coding, reading comprehension, etc. The dataset can be used for instruction tuning of any base model. We post-train Mistral-7b with the data. When comparing the resulting model Orca-3 to Mistral-7b-Instruct (which uses the same base model), we observe significant improvements across many benchmarks. For example, 40% improvement on AGIEval, 19% improvement on MMLU, 54% improvement on GSM8K, 38% improvement on BBH and 45% improvement on AlpacaEval. Additionally, it consistently outperforms other models such as LLAMA-8B-instruct and GPT-3.5-turbo. | [
"['Arindam Mitra' 'Luciano Del Corro' 'Guoqing Zheng' 'Shweti Mahajan'\n 'Dany Rouhana' 'Andres Codas' 'Yadong Lu' 'Wei-ge Chen' 'Olga Vrousgos'\n 'Corby Rosset' 'Fillipe Silva' 'Hamed Khanpour' 'Yash Lara'\n 'Ahmed Awadallah']"
] |
null | null | 2407.03515 | null | null | http://arxiv.org/pdf/2407.03515v1 | 2024-07-03T21:27:29Z | 2024-07-03T21:27:29Z | Feature-Specific Coefficients of Determination in Tree Ensembles | Tree ensemble methods provide promising predictions with models that are difficult to interpret. Recent introduction of Shapley values for individualized feature contributions, accompanied by several fast computing algorithms for predicted values, shows intriguing results. However, individualizing coefficients of determination, aka $R^2$, for each feature is challenged by the underlying quadratic losses, although these coefficients allow us to comparatively assess a single feature's contribution to tree ensembles. Here we propose an efficient algorithm, Q-SHAP, that reduces the computational complexity to polynomial time when calculating Shapley values related to quadratic losses. Our extensive simulation studies demonstrate that this approach not only enhances computational efficiency but also improves estimation accuracy of feature-specific coefficients of determination. | [
"['Zhongli Jiang' 'Dabao Zhang' 'Min Zhang']"
] |
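For intuition, feature-specific shares of $R^2$ can be computed by the Shapley definition directly: average each feature's marginal contribution to $R^2$ over all feature subsets. The retraining-based sketch below is exponential in the number of features, which is precisely the cost Q-SHAP's polynomial-time algorithm avoids; the data and model here are illustrative assumptions.

```python
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=300)

def r2_of_subset(S):
    if not S:
        return 0.0                                   # the empty model explains nothing
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[:, S], y)
    return r2_score(y, model.predict(X[:, S]))

p = X.shape[1]
for j in range(p):
    others = [k for k in range(p) if k != j]
    phi = 0.0
    for size in range(p):
        for S in combinations(others, size):
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            phi += w * (r2_of_subset(list(S) + [j]) - r2_of_subset(list(S)))
    print(f"feature {j}: Shapley R^2 share = {phi:.3f}")
```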
null | null | 2407.03522 | null | null | http://arxiv.org/pdf/2407.03522v1 | 2024-07-03T21:48:23Z | 2024-07-03T21:48:23Z | Optimal thresholds and algorithms for a model of multi-modal learning in
high dimensions | This work explores multi-modal inference in a high-dimensional simplified model, analytically quantifying the performance gain of multi-modal inference over that of analyzing modalities in isolation. We present the Bayes-optimal performance and weak recovery thresholds in a model where the objective is to recover the latent structures from two noisy data matrices with correlated spikes. The paper derives the approximate message passing (AMP) algorithm for this model and characterizes its performance in the high-dimensional limit via the associated state evolution. The analysis holds for a broad range of priors and noise channels, which can differ across modalities. The linearization of AMP is compared numerically to the widely used partial least squares (PLS) and canonical correlation analysis (CCA) methods, which are both observed to suffer from a sub-optimal recovery threshold. | [
"['Christian Keup' 'Lenka Zdeborová']"
] |
null | null | 2407.03524 | null | null | http://arxiv.org/pdf/2407.03524v1 | 2024-07-03T22:00:35Z | 2024-07-03T22:00:35Z | A multicategory jet image classification framework using deep neural
network | Jet point cloud images are high dimensional data structures that need to be transformed to a separable feature space for machine learning algorithms to distinguish them with simple decision boundaries. In this article, the authors focus on jet category separability by particle and jet feature extraction, enabling more efficient training of a simple deep neural network and yielding a computationally efficient, interpretable model for jet classification. The methodology is tested with three to five categories of jets from the JetNet benchmark jet tagging dataset, resulting in performance comparable to a particle flow network. This work demonstrates that high dimensional datasets represented in separable latent spaces lead to simpler architectures for jet classification. | [
"['Jairo Orozco Sandoval' 'Vidya Manian' 'Sudhir Malik']"
] |
null | null | 2407.03542 | null | null | http://arxiv.org/pdf/2407.03542v1 | 2024-07-03T23:27:53Z | 2024-07-03T23:27:53Z | Probing Perfection: The Relentless Art of Meddling for Pulmonary Airway
Segmentation from HRCT via a Human-AI Collaboration Based Active Learning
Method | In pulmonary tracheal segmentation, the scarcity of annotated data is a prevalent issue in medical segmentation. Additionally, Deep Learning (DL) methods face challenges: the opacity of 'black box' models and the need for performance enhancement. Our Human-Computer Interaction (HCI) based models (RS-UNet, LC-UNet, UUNet, and WD-UNet) address these challenges by combining diverse query strategies with various DL models. We train four HCI models and repeat these steps: (1) Query Strategy: The HCI models select samples that provide the most additional representative information when labeled in each iteration and identify unlabeled samples with the greatest predictive disparity using Wasserstein Distance, Least Confidence, Entropy Sampling, and Random Sampling. (2) Central line correction: Selected samples are used for expert correction of system-generated tracheal central lines in each training round. (3) Update training dataset: Experts update the training dataset after each DL model's training epoch, enhancing the trustworthiness and performance of the models. (4) Model training: The HCI model is trained using the updated dataset and an enhanced UNet version. Experimental results confirm the effectiveness of these HCI-based approaches, showing that WD-UNet, LC-UNet, UUNet, and RS-UNet achieve comparable or superior performance to state-of-the-art DL models. Notably, WD-UNet achieves this with only 15%-35% of the training data, reducing physician annotation time by 65%-85%. | [
"['Shiyi Wang' 'Yang Nan' 'Sheng Zhang' 'Federico Felder' 'Xiaodan Xing'\n 'Yingying Fang' 'Javier Del Ser' 'Simon L F Walsh' 'Guang Yang']"
] |
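Two of the query strategies named in the record above, Least Confidence and Entropy Sampling, reduce to a few lines over predicted class probabilities. A minimal sketch, with a toy probability matrix as the assumed input:

```python
import numpy as np

def least_confidence(probs, k):
    """Select the k samples whose top predicted probability is lowest."""
    conf = probs.max(axis=1)
    return np.argsort(conf)[:k]

def entropy_sampling(probs, k, eps=1e-12):
    """Select the k samples with the highest predictive entropy."""
    ent = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(ent)[::-1][:k]

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3]])
print(least_confidence(probs, 1))   # [1] -- least confident sample
print(entropy_sampling(probs, 1))   # [1] -- highest-entropy sample
```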
null | null | 2407.03557 | null | null | http://arxiv.org/pdf/2407.03557v1 | 2024-07-04T01:00:53Z | 2024-07-04T01:00:53Z | Decision-Focused Evaluation of Worst-Case Distribution Shift | Distribution shift is a key challenge for predictive models in practice, creating the need to identify potentially harmful shifts in advance of deployment. Existing work typically defines these worst-case shifts as ones that most degrade the individual-level accuracy of the model. However, when models are used to make a downstream population-level decision like the allocation of a scarce resource, individual-level accuracy may be a poor proxy for performance on the task at hand. We introduce a novel framework that employs a hierarchical model structure to identify worst-case distribution shifts in predictive resource allocation settings by capturing shifts both within and across instances of the decision problem. This task is more difficult than in standard distribution shift settings due to combinatorial interactions, where decisions depend on the joint presence of individuals in the allocation task. We show that the problem can be reformulated as a submodular optimization problem, enabling efficient approximations of worst-case loss. Applying our framework to real data, we find empirical evidence that worst-case shifts identified by one metric often significantly diverge from worst-case distributions identified by other metrics. | [
"['Kevin Ren' 'Yewon Byun' 'Bryan Wilder']"
] |
null | null | 2407.03563 | null | null | http://arxiv.org/pdf/2407.03563v1 | 2024-07-04T01:25:20Z | 2024-07-04T01:25:20Z | Learning Video Temporal Dynamics with Cross-Modal Attention for Robust
Audio-Visual Speech Recognition | Audio-visual speech recognition (AVSR) aims to transcribe human speech using both audio and video modalities. In practical environments with noise-corrupted audio, the role of video information becomes crucial. However, prior works have primarily focused on enhancing audio features in AVSR, overlooking the importance of video features. In this study, we strengthen the video features by learning three temporal dynamics in video data: context order, playback direction, and the speed of video frames. Cross-modal attention modules are introduced to enrich video features with audio information so that speech variability can be taken into account when training on the video temporal dynamics. Based on our approach, we achieve the state-of-the-art performance on the LRS2 and LRS3 AVSR benchmarks for the noise-dominant settings. Our approach excels especially in scenarios with babble and speech noise, indicating the ability to distinguish the speech signal that should be recognized from lip movements in the video modality. We support the validity of our methodology by offering the ablation experiments for the temporal dynamics losses and the cross-modal attention architecture design. | [
"['Sungnyun Kim' 'Kangwook Jang' 'Sangmin Bae' 'Hoirin Kim' 'Se-Young Yun']"
] |
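The cross-modal attention idea described above, enriching video features with audio information, is commonly realized by letting video frames attend over audio features. A minimal sketch with residual fusion; the dimensions and layer layout are illustrative assumptions, not the paper's architecture.

```python
import torch, torch.nn as nn

class AudioToVideoAttention(nn.Module):
    def __init__(self, dim=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
    def forward(self, video, audio):
        fused, _ = self.attn(query=video, key=audio, value=audio)
        return self.norm(video + fused)           # residual fusion

video = torch.randn(2, 25, 256)   # (batch, video frames, dim)
audio = torch.randn(2, 100, 256)  # (batch, audio frames, dim)
print(AudioToVideoAttention()(video, audio).shape)  # torch.Size([2, 25, 256])
```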
null | null | 2407.03571 | null | null | http://arxiv.org/pdf/2407.03571v1 | 2024-07-04T01:46:07Z | 2024-07-04T01:46:07Z | A Fully Parameter-Free Second-Order Algorithm for Convex-Concave Minimax
Problems with Optimal Iteration Complexity | In this paper, we study second-order algorithms for the convex-concave minimax problem, which has attracted much attention in many fields such as machine learning in recent years. We propose a Lipschitz-free cubic regularization (LF-CR) algorithm for solving the convex-concave minimax optimization problem without knowing the Lipschitz constant. It can be shown that the iteration complexity of the LF-CR algorithm to obtain an $\epsilon$-optimal solution with respect to the restricted primal-dual gap is upper bounded by $\mathcal{O}(\frac{\rho\|z^0-z^*\|^3}{\epsilon})^{\frac{2}{3}}$, where $z^0=(x^0,y^0)$ is a pair of initial points, $z^*=(x^*,y^*)$ is a pair of optimal solutions, and $\rho$ is the Lipschitz constant. We further propose a fully parameter-free cubic regularization (FF-CR) algorithm that does not require any parameters of the problem, including the Lipschitz constant and the upper bound of the distance from the initial point to the optimal solution. We also prove that the iteration complexity of the FF-CR algorithm to obtain an $\epsilon$-optimal solution with respect to the gradient norm is upper bounded by $\mathcal{O}(\frac{\rho\|z^0-z^*\|^2}{\epsilon})^{\frac{2}{3}}$. Numerical experiments show the efficiency of both algorithms. To the best of our knowledge, the proposed FF-CR algorithm is the first completely parameter-free second-order algorithm for solving convex-concave minimax optimization problems, and its iteration complexity is consistent with the optimal iteration complexity lower bound of existing second-order algorithms with parameters for solving convex-concave minimax problems. | [
"['Junlin Wang' 'Junnan Yang' 'Zi Xu']"
] |
null | null | 2407.03574 | null | null | http://arxiv.org/pdf/2407.03574v1 | 2024-07-04T01:53:23Z | 2024-07-04T01:53:23Z | An Axiomatic Definition of Hierarchical Clustering | In this paper, we take an axiomatic approach to defining a population hierarchical clustering for piecewise constant densities, and in a similar manner to Lebesgue integration, extend this definition to more general densities. When the density satisfies some mild conditions, e.g., when it has connected support, is continuous, and vanishes only at infinity, or when the connected components of the density satisfy these conditions, our axiomatic definition results in Hartigan's definition of cluster tree. | [
"['Ery Arias-Castro' 'Elizabeth Coda']"
] |
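Hartigan's cluster tree, which the axiomatic definition above recovers, can be made concrete for a one-dimensional piecewise-constant density: the clusters at level $\lambda$ are the connected components of the superlevel set $\{x : f(x) \ge \lambda\}$, and branchings of the tree occur where components split as $\lambda$ increases. A small sketch on a grid of cells:

```python
import numpy as np

def superlevel_components(values, lam):
    """Return index intervals of maximal runs where values >= lam."""
    comps, start = [], None
    for i, v in enumerate(values):
        if v >= lam and start is None:
            start = i
        elif v < lam and start is not None:
            comps.append((start, i - 1)); start = None
    if start is not None:
        comps.append((start, len(values) - 1))
    return comps

# A bimodal piecewise-constant density on a grid of cells.
f = np.array([0.1, 0.8, 0.9, 0.2, 0.2, 0.7, 1.0, 0.1])
for lam in (0.15, 0.5, 0.85):
    print(lam, superlevel_components(f, lam))
# As lam rises past the valley height (0.2), one component splits into two:
# exactly the branching recorded by the cluster tree.
```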
null | null | 2407.03593 | null | null | http://arxiv.org/pdf/2407.03593v1 | 2024-07-04T03:02:10Z | 2024-07-04T03:02:10Z | Green Multigrid Network | GreenLearning networks (GL) directly learn Green's function in physical space, making them an interpretable model for capturing unknown solution operators of partial differential equations (PDEs). For many PDEs, the corresponding Green's function exhibits asymptotic smoothness. In this paper, we propose a framework named Green Multigrid networks (GreenMGNet), an operator learning algorithm designed for a class of asymptotically smooth Green's functions. Compared with the pioneering GL, the new framework achieves significantly better accuracy and efficiency. GreenMGNet is composed of two technical novelties. First, Green's function is modeled as a piecewise function to take into account its singular behavior in some parts of the hyperplane. Such a piecewise function is then approximated by a neural network with augmented output (AugNN) so that it can capture singularity accurately. Second, the asymptotic smoothness property of Green's function is used to leverage the Multi-Level Multi-Integration (MLMI) algorithm for both the training and inference stages. Several test cases of operator learning are presented to demonstrate the accuracy and effectiveness of the proposed method. On average, GreenMGNet achieves $3.8\%$ to $39.15\%$ accuracy improvement. To match the accuracy level of GL, GreenMGNet requires only about $10\%$ of the full grid data, resulting in a $55.9\%$ and $92.5\%$ reduction in training time and GPU memory cost for one-dimensional test problems, and a $37.7\%$ and $62.5\%$ reduction for two-dimensional test problems. | [
"['Ye Lin' 'Young Ju Lee' 'Jiwei Jia']"
] |
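Once a Green's function $G$ is available (learned, in GreenMGNet's case), the solution operator is just a quadrature: $u(x) = \int G(x,y) f(y)\,dy$. The sketch below uses the known Green's function of $-u''=f$ on $[0,1]$ with zero boundary values as a stand-in for a learned one:

```python
import numpy as np

n = 200
y = np.linspace(0, 1, n)
w = np.full(n, 1.0 / n)                      # simple quadrature weights

def G(x, yy):                                # exact Green's function for -u'' = f
    return np.where(x <= yy, x * (1 - yy), yy * (1 - x))

f = np.ones(n)                               # right-hand side f(y) = 1
x = np.linspace(0, 1, n)
u = np.array([np.sum(w * G(xi, y) * f) for xi in x])

exact = x * (1 - x) / 2                      # analytic solution for f = 1
print(np.max(np.abs(u - exact)))             # small quadrature error
```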
null | null | 2407.03595 | null | null | http://arxiv.org/pdf/2407.03595v1 | 2024-07-04T03:04:55Z | 2024-07-04T03:04:55Z | Machine Learning for Economic Forecasting: An Application to China's GDP
Growth | This paper aims to explore the application of machine learning in forecasting Chinese macroeconomic variables. Specifically, it employs various machine learning models to predict the quarterly real GDP growth of China, and analyzes the factors contributing to the performance differences among these models. Our findings indicate that the average forecast errors of machine learning models are generally lower than those of traditional econometric models or expert forecasts, particularly in periods of economic stability. However, during certain inflection points, although machine learning models still outperform traditional econometric models, expert forecasts may exhibit greater accuracy in some instances due to experts' more comprehensive understanding of the macroeconomic environment and real-time economic variables. In addition to macroeconomic forecasting, this paper employs interpretable machine learning methods to identify the key attributive variables from different machine learning models, aiming to enhance the understanding and evaluation of their contributions to macroeconomic fluctuations. | [
"['Yanqing Yang' 'Xingcheng Xu' 'Jinfeng Ge' 'Yan Xu']"
] |
null | null | 2407.03601 | null | null | http://arxiv.org/pdf/2407.03601v1 | 2024-07-04T03:24:27Z | 2024-07-04T03:24:27Z | Online Non-Stationary Stochastic Quasar-Convex Optimization | Recent research has shown that quasar-convexity can be found in applications such as identification of linear dynamical systems and generalized linear models. Such observations have in turn spurred exciting developments in the design and analysis of algorithms that exploit quasar-convexity. In this work, we study online stochastic quasar-convex optimization problems in a dynamic environment. We establish regret bounds of online gradient descent in terms of cumulative path variation and cumulative gradient variance for losses satisfying quasar-convexity and strong quasar-convexity. We then apply the results to generalized linear models (GLM) when the underlying parameter is time-varying. We establish regret bounds of online gradient descent when applying to GLMs with leaky ReLU activation function, logistic activation function, and ReLU activation function. Numerical results are presented to corroborate our findings. | [
"['Yuen-Man Pun' 'Iman Shames']"
] |
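The online gradient descent scheme analyzed above can be sketched in a few lines: the learner faces a sequence of losses whose minimizer drifts over time, and dynamic regret is accumulated against the per-round optimum. The quadratic loss below is an illustrative stand-in for a (strongly) quasar-convex loss:

```python
import numpy as np

rng = np.random.default_rng(1)
T, eta, x = 500, 0.1, 0.0
regret = 0.0
theta = 0.0                                   # time-varying target parameter
for t in range(T):
    theta += 0.01 * rng.standard_normal()     # path variation of the comparator
    loss = 0.5 * (x - theta) ** 2
    grad = x - theta
    regret += loss                            # the per-round optimum attains 0 here
    x -= eta * grad                           # OGD update
print(f"dynamic regret after {T} rounds: {regret:.3f}")
```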
null | null | 2407.03622 | null | null | http://arxiv.org/pdf/2407.03622v1 | 2024-07-04T04:06:24Z | 2024-07-04T04:06:24Z | MSfusion: A Dynamic Model Splitting Approach for Resource-Constrained
Machines to Collaboratively Train Larger Models | Training large models requires a large amount of data, as well as abundant computation resources. While collaborative learning (e.g., federated learning) provides a promising paradigm to harness collective data from many participants, training large models remains a major challenge for participants with limited resources like mobile devices. We introduce MSfusion, an effective and efficient collaborative learning framework, tailored for training larger models on resource-constrained machines through model splitting. Specifically, a double shifting model splitting scheme is designed such that in each training round, each participant is assigned a subset of model parameters to train over local data, and aggregates with sub-models of other peers on common parameters. While model splitting significantly reduces the computation and communication costs of individual participants, additional novel designs on adaptive model overlapping and contrastive loss functions help MSfusion to maintain training effectiveness, against model shift across participants. Extensive experiments on image and NLP tasks illustrate significant advantages of MSfusion in performance and efficiency for training large models, and its strong scalability: computation cost of each participant reduces significantly as the number of participants increases. | [
"['Jin Xie' 'Songze Li']"
] |
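The flavor of a shifting model-splitting scheme can be illustrated with plain index bookkeeping: each round, every participant trains a contiguous wrap-around window of the parameter vector whose offset shifts with both the round and the participant id, so windows overlap and the overlaps are aggregated. This sketch mimics the idea, not MSfusion's exact double shifting scheme:

```python
import numpy as np

def assigned_indices(n_params, n_participants, participant, round_idx, keep=0.5):
    window = int(n_params * keep)             # fraction of the model each trains
    stride = n_params // n_participants
    offset = (participant * stride + round_idx * (stride // 2)) % n_params
    return (offset + np.arange(window)) % n_params

n_params, n_participants = 20, 4
for r in range(2):
    for i in range(n_participants):
        idx = assigned_indices(n_params, n_participants, i, r)
        print(f"round {r}, participant {i}: {sorted(idx.tolist())[:5]}...")
# Parameters covered by several participants are aggregated on the common
# (overlapping) coordinates, as in the description above.
```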
null | null | 2407.03631 | null | null | http://arxiv.org/pdf/2407.03631v1 | 2024-07-04T04:46:09Z | 2024-07-04T04:46:09Z | On the performance of sequential Bayesian update for database of diverse
tsunami scenarios | Although the sequential tsunami scenario detection framework was validated in our previous work, several tasks remain to be resolved from a practical point of view. This study aims to evaluate the performance of the previous tsunami scenario detection framework using a diverse database consisting of complex fault rupture patterns with heterogeneous slip distributions. Specifically, we compare the effectiveness of scenario superposition to that of the previous most likely scenario detection method. Additionally, how the length of the observation time window influences the accuracy of both methods is analyzed. We utilize an existing database comprising 1771 tsunami scenarios targeting the city of Westport (WA, U.S.), which includes synthetic wave height records and inundation distributions as the result of fault rupture in the Cascadia subduction zone. The heterogeneous patterns of slips used in the database increase the diversity of the scenarios and thus make it a proper database for evaluating the performance of scenario superposition. To assess the performance, we consider various observation time windows shorter than 15 minutes and divide the database into five testing and learning sets. The evaluation accuracy of the maximum offshore wave, inundation depth, and its distribution is analyzed to examine the advantages of the scenario superposition method over the previous method. We introduce the dynamic time warping (DTW) method as an additional benchmark and compare its results to that of the Bayesian scenario detection method. | [
"['Reika Nomura' 'Louise A. Hirao Vermare' 'Saneiki Fujita' 'Donsub Rim'\n 'Shuji Moriguchi' 'Randall J. LeVeque' 'Kenjiro Terada']"
] |
null | null | 2407.03637 | null | null | http://arxiv.org/pdf/2407.03637v1 | 2024-07-04T05:13:58Z | 2024-07-04T05:13:58Z | HERA: High-efficiency Matrix Compression via Element Replacement | Large Language Models (LLMs) have significantly advanced natural language processing tasks such as machine translation, text generation, and sentiment analysis. However, their large size, often consisting of billions of parameters, poses challenges for storage, computation, and deployment, particularly in resource-constrained environments like mobile devices and edge computing platforms. Additionally, the key-value (k-v) cache used to speed up query processing requires substantial memory and storage, exacerbating these challenges. Vector databases have emerged as a crucial technology to efficiently manage and retrieve the high-dimensional vectors produced by LLMs, facilitating faster data access and reducing computational demands. Effective compression and quantization techniques are essential to address these challenges, as they reduce the memory footprint and computational requirements without significantly compromising performance. Traditional methods that uniformly map parameters to compressed spaces often fail to account for the uneven distribution of parameters, leading to considerable accuracy loss. Therefore, innovative approaches are needed to achieve better compression ratios while preserving model performance. In this work, we propose HERA, a novel algorithm that employs heuristic Element Replacement for matrix compression. HERA systematically replaces elements within the model using heuristic methods, which simplifies the structure of the model and makes subsequent compression more effective. By hierarchically segmenting, compressing, and reorganizing the matrix dataset, our method can effectively reduce the quantization error to 12.3% of the original at the same compression ratio. | [
"['Yanshu Wang' 'Wang Li' 'Tong Yang']"
] |
null | null | 2407.03640 | null | null | http://arxiv.org/pdf/2407.03640v1 | 2024-07-04T05:22:55Z | 2024-07-04T05:22:55Z | Generative Technology for Human Emotion Recognition: A Scope Review | Affective computing stands at the forefront of artificial intelligence (AI), seeking to imbue machines with the ability to comprehend and respond to human emotions. Central to this field is emotion recognition, which endeavors to identify and interpret human emotional states from different modalities, such as speech, facial images, text, and physiological signals. In recent years, important progress has been made in generative models, including Autoencoder, Generative Adversarial Network, Diffusion Model, and Large Language Model. These models, with their powerful data generation capabilities, emerge as pivotal tools in advancing emotion recognition. However, up to now, there remains a paucity of systematic efforts that review generative technology for emotion recognition. This survey aims to bridge the gaps in the existing literature by conducting a comprehensive analysis of over 320 research papers until June 2024. Specifically, this survey will firstly introduce the mathematical principles of different generative models and the commonly used datasets. Subsequently, through a taxonomy, it will provide an in-depth analysis of how generative techniques address emotion recognition based on different modalities in several aspects, including data augmentation, feature extraction, semi-supervised learning, cross-domain, etc. Finally, the review will outline future research directions, emphasizing the potential of generative models to advance the field of emotion recognition and enhance the emotional intelligence of AI systems. | [
"['Fei Ma' 'Yucheng Yuan' 'Yifan Xie' 'Hongwei Ren' 'Ivan Liu' 'Ying He'\n 'Fuji Ren' 'Fei Richard Yu' 'Shiguang Ni']"
] |
null | null | 2407.03641 | null | null | http://arxiv.org/pdf/2407.03641v1 | 2024-07-04T05:23:22Z | 2024-07-04T05:23:22Z | Scalable Learned Model Soup on a Single GPU: An Efficient Subspace
Training Strategy | Pre-training followed by fine-tuning is widely adopted among practitioners. The performance can be improved by "model soups" (Wortsman et al., 2022) via exploring various hyperparameter configurations. The Learned-Soup, a variant of model soups, significantly improves the performance but suffers from substantial memory and time costs due to the requirements of (i) having to load all fine-tuned models simultaneously, and (ii) a large computational graph encompassing all fine-tuned models. In this paper, we propose Memory Efficient Hyperplane Learned Soup (MEHL-Soup) to tackle this issue by formulating the learned soup as a hyperplane optimization problem and introducing block coordinate gradient descent to learn the mixing coefficients. At each iteration, MEHL-Soup only needs to load a few fine-tuned models and build a computational graph with one combined model. We further extend MEHL-Soup to MEHL-Soup+ in a layer-wise manner. Experimental results on various ViT models and data sets show that MEHL-Soup(+) outperforms Learned-Soup(+) in terms of test accuracy, and also reduces memory usage by more than $13\times$. Moreover, MEHL-Soup(+) can be run on a single GPU and achieves $9\times$ speedup in soup construction compared with the Learned-Soup. The code is released at https://github.com/nblt/MEHL-Soup. | [
"['Tao Li' 'Weisen Jiang' 'Fanghui Liu' 'Xiaolin Huang' 'James T. Kwok']"
] |
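The memory-saving trick described above can be sketched with a linear soup $\theta(\alpha)=\sum_i \alpha_i \theta_i$ and block coordinate descent over $\alpha$: in each step, the weighted sum of the models outside the current block is precomputed as a constant, so the computational graph contains only one combined model. The data, loss, and plain linear mixing are illustrative assumptions:

```python
import torch

torch.manual_seed(0)
d, n_models, block = 10, 6, 2
thetas = [torch.randn(d) for _ in range(n_models)]       # "fine-tuned" weight vectors
alpha = torch.full((n_models,), 1.0 / n_models, requires_grad=True)
X, y = torch.randn(128, d), torch.randn(128)

def loss_of(w):                                          # one combined model only
    return ((X @ w - y) ** 2).mean()

opt = torch.optim.SGD([alpha], lr=0.05)
for step in range(30):
    b = (step * block) % n_models                        # current coefficient block
    idx = list(range(b, b + block))
    with torch.no_grad():                                # frozen contribution of the rest
        rest = sum(alpha[i] * thetas[i] for i in range(n_models) if i not in idx)
    soup = rest + sum(alpha[i] * thetas[i] for i in idx)
    opt.zero_grad()
    loss_of(soup).backward()                             # gradients reach block coeffs only
    opt.step()
print(alpha.detach())
```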
null | null | 2407.03665 | null | null | http://arxiv.org/pdf/2407.03665v1 | 2024-07-04T06:09:11Z | 2024-07-04T06:09:11Z | Heterogeneous Hypergraph Embedding for Recommendation Systems | Recent advancements in recommender systems have focused on integrating knowledge graphs (KGs) to leverage their auxiliary information. The core idea of KG-enhanced recommenders is to incorporate rich semantic information for more accurate recommendations. However, two main challenges persist: i) Neglecting complex higher-order interactions in the KG-based user-item network, potentially leading to sub-optimal recommendations, and ii) Dealing with the heterogeneous modalities of input sources, such as user-item bipartite graphs and KGs, which may introduce noise and inaccuracies. To address these issues, we present a novel Knowledge-enhanced Heterogeneous Hypergraph Recommender System (KHGRec). KHGRec captures group-wise characteristics of both the interaction network and the KG, modeling complex connections in the KG. Using a collaborative knowledge heterogeneous hypergraph (CKHG), it employs two hypergraph encoders to model group-wise interdependencies and ensure explainability. Additionally, it fuses signals from the input graphs with cross-view self-supervised learning and attention mechanisms. Extensive experiments on four real-world datasets show our model's superiority over various state-of-the-art baselines, with an average 5.18% relative improvement. Additional tests on noise resilience, missing data, and cold-start problems demonstrate the robustness of our KHGRec framework. Our model and evaluation datasets are publicly available at https://github.com/viethungvu1998/KHGRec. | [
"['Darnbi Sakong' 'Viet Hung Vu' 'Thanh Trung Huynh' 'Phi Le Nguyen'\n 'Hongzhi Yin' 'Quoc Viet Hung Nguyen' 'Thanh Tam Nguyen']"
] |
null | null | 2407.03668 | null | null | http://arxiv.org/pdf/2407.03668v2 | 2024-07-09T07:22:42Z | 2024-07-04T06:26:01Z | Reliable Projection Based Unsupervised Learning for Semi-Definite QCQP
with Application of Beamforming Optimization | In this paper, we investigate a special class of quadratically constrained quadratic programming (QCQP) with semi-definite constraints. Traditionally, since such a problem is non-convex and NP-hard, the neural network (NN) is regarded as a promising method to obtain a high-performing solution. However, due to the inherent prediction error, it is challenging to ensure that every solution output by the NN is feasible. Although some existing works propose naive remedies, they only focus on reducing the constraint violation probability, so feasibility is not guaranteed for all solutions. To deal with the above challenge, in this paper a computationally efficient and reliable projection is proposed, which ensures that every solution output by the NN is feasible. Moreover, unsupervised learning is used, so the NN can be trained effectively and efficiently without labels. Theoretically, the solution of the NN after projection is proven to be feasible, and we also prove that the projection method can enhance the convergence performance and speed of the NN. To evaluate our proposed method, the quality of service (QoS)-constrained beamforming scenario is studied, where the simulation results show the proposed method can achieve high performance competitive with the lower bound. | [
"['Xiucheng Wang' 'Qi Qiu' 'Nan Cheng']"
] |
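The predict-then-project pattern described above can be illustrated with the simplest feasible set, a transmit-power ball $\|w\|^2 \le P$, whose projection is a closed-form rescaling. The paper handles general semi-definite quadratic constraints; this sketch only shows how a cheap projection makes every NN output feasible by construction:

```python
import numpy as np

def project_power(w, P):
    """Scale the beamformer onto the ball ||w||^2 <= P (a closed-form projection)."""
    norm_sq = np.sum(np.abs(w) ** 2)
    return w if norm_sq <= P else w * np.sqrt(P / norm_sq)

w_nn = np.array([1.2 + 0.5j, -0.8 + 1.1j])      # raw (possibly infeasible) NN output
w_feasible = project_power(w_nn, P=1.0)
print(np.sum(np.abs(w_feasible) ** 2))           # exactly 1.0: feasibility guaranteed
```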
null | null | 2407.03672 | null | null | http://arxiv.org/pdf/2407.03672v1 | 2024-07-04T06:37:09Z | 2024-07-04T06:37:09Z | A Survey of Data Synthesis Approaches | This paper provides a detailed survey of synthetic data techniques. We first discuss the expected goals of using synthetic data in data augmentation, which can be divided into four parts: 1) Improving Diversity, 2) Data Balancing, 3) Addressing Domain Shift, and 4) Resolving Edge Cases. Synthesizing data is closely related to the prevailing machine learning techniques at the time; therefore, we summarize the domain of synthetic data techniques into four categories: 1) Expert-knowledge, 2) Direct Training, 3) Pre-train then Fine-tune, and 4) Foundation Models without Fine-tuning. Next, we categorize the goals of synthetic data filtering into three types for discussion: 1) Basic Quality, 2) Label Consistency, and 3) Data Distribution. In section 5 of this paper, we also discuss the future directions of synthetic data and state three directions that we believe are important: 1) focus more on quality, 2) the evaluation of synthetic data, and 3) multi-model data augmentation. | [
"['Hsin-Yu Chang' 'Pei-Yu Chen' 'Tun-Hsiang Chou' 'Chang-Sheng Kao'\n 'Hsuan-Yun Yu' 'Yen-Ting Lin' 'Yun-Nung Chen']"
] |
null | null | 2407.03674 | null | null | http://arxiv.org/pdf/2407.03674v2 | 2024-07-09T18:05:10Z | 2024-07-04T06:42:21Z | Short-Long Policy Evaluation with Novel Actions | From incorporating LLMs in education, to identifying new drugs and improving ways to charge batteries, innovators constantly try new strategies in search of better long-term outcomes for students, patients and consumers. One major bottleneck in this innovation cycle is the amount of time it takes to observe the downstream effects of a decision policy that incorporates new interventions. The key question is whether we can quickly evaluate long-term outcomes of a new decision policy without making long-term observations. Organizations often have access to prior data about past decision policies and their outcomes, evaluated over the full horizon of interest. Motivated by this, we introduce a new setting for short-long policy evaluation for sequential decision making tasks. Our proposed methods significantly outperform prior results on simulators of HIV treatment, kidney dialysis and battery charging. We also demonstrate that our methods can be useful for applications in AI safety by quickly identifying when a new decision policy is likely to have substantially lower performance than past policies. | [
"['Hyunji Alex Nam' 'Yash Chandak' 'Emma Brunskill']"
] |
null | null | 2407.03678 | null | null | http://arxiv.org/pdf/2407.03678v1 | 2024-07-04T06:52:48Z | 2024-07-04T06:52:48Z | Improving Self Consistency in LLMs through Probabilistic Tokenization | Prior research has demonstrated noticeable performance gains through the use of probabilistic tokenizations, an approach that involves employing multiple tokenizations of the same input string during the training phase of a language model. Despite these promising findings, modern large language models (LLMs) have yet to be trained using probabilistic tokenizations. Interestingly, while the tokenizers of these contemporary LLMs have the capability to generate multiple tokenizations, this property remains underutilized. In this work, we propose a novel method to leverage the multiple tokenization capabilities of modern LLM tokenizers, aiming to enhance the self-consistency of LLMs in reasoning tasks. Our experiments indicate that when utilizing probabilistic tokenizations, LLMs generate logically diverse reasoning paths, moving beyond mere surface-level linguistic diversity. We carefully study probabilistic tokenization and offer insights to explain the self-consistency improvements it brings through extensive experimentation on 5 LLM families and 4 reasoning benchmarks. | [
"['Ashutosh Sathe' 'Divyanshu Aggarwal' 'Sunayana Sitaram']"
] |
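The underutilized tokenizer capability referred to above is subword regularization: SentencePiece can sample a different segmentation of the same string on every call. A sketch of sampling diverse tokenizations and majority-voting the resulting answers; the model file path and the hypothetical answer list are assumptions:

```python
from collections import Counter
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm.model")   # assumed tokenizer model
prompt = "The quick brown fox jumps over the lazy dog"

tokenizations = [
    tuple(sp.encode(prompt, out_type=str, enable_sampling=True,
                    alpha=0.1, nbest_size=-1))            # randomized segmentation
    for _ in range(5)
]
for t in tokenizations:
    print(t)                                              # typically all distinct

# With an LLM in the loop, each tokenization yields one reasoning path, and
# the most common final answer is returned:
answers = ["42", "42", "41", "42", "42"]                  # hypothetical LLM outputs
print(Counter(answers).most_common(1)[0][0])              # -> "42"
```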
null | null | 2407.03689 | null | null | http://arxiv.org/pdf/2407.03689v1 | 2024-07-04T07:21:38Z | 2024-07-04T07:21:38Z | Text2TimeSeries: Enhancing Financial Forecasting through Time Series
Prediction Updates with Event-Driven Insights from Large Language Models | Time series models, typically trained on numerical data, are designed to forecast future values. These models often rely on weighted averaging techniques over time intervals. However, real-world time series data is seldom isolated and is frequently influenced by non-numeric factors. For instance, stock price fluctuations are impacted by daily random events in the broader world, with each event exerting a unique influence on price signals. Previously, forecasts in financial markets have been approached in two main ways: either as time-series problems over price sequences or as sentiment analysis tasks. The sentiment analysis tasks aim to determine whether news events will have a positive or negative impact on stock prices, often categorizing them into discrete labels. Recognizing the need for a more comprehensive approach to accurately model time series prediction, we propose a collaborative modeling framework that incorporates textual information about relevant events for predictions. Specifically, we leverage the intuition of large language models about future changes to update real number time series predictions. We evaluated the effectiveness of our approach on financial market data. | [
"['Litton Jose Kurisinkel' 'Pruthwik Mishra' 'Yue Zhang']"
] |
null | null | 2407.03700 | null | null | http://arxiv.org/pdf/2407.03700v1 | 2024-07-04T07:40:02Z | 2024-07-04T07:40:02Z | Deep learning architectures for data-driven damage detection in
nonlinear dynamic systems | The primary goal of structural health monitoring is to detect damage at its onset before it reaches a critical level. The in-depth investigation in the present work addresses deep learning applied to data-driven damage detection in nonlinear dynamic systems. In particular, autoencoders (AEs) and generative adversarial networks (GANs) are implemented leveraging 1D convolutional neural networks. The onset of damage is detected in the investigated nonlinear dynamic systems by exciting random vibrations of varying intensity, without prior knowledge of the system or the excitation and in an unsupervised manner. The comprehensive numerical study is conducted on dynamic systems exhibiting different types of nonlinear behavior. An experimental application related to a magneto-elastic nonlinear system is also presented to corroborate the conclusions. | [
"['Harrish Joseph' 'Giuseppe Quaranta' 'Biagio Carboni'\n 'Walter Lacarbonara']"
] |
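A common concrete instance of the unsupervised recipe above is a 1D convolutional autoencoder trained on vibration windows from the healthy system, with damage flagged when reconstruction error exceeds a threshold. The architecture and the three-sigma threshold rule below are illustrative assumptions:

```python
import torch, torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, ch, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(ch, ch, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(ch, ch, 9, stride=2, padding=4, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(ch, 1, 9, stride=2, padding=4, output_padding=1),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

model, signal = ConvAE(), torch.randn(8, 1, 1024)       # batch of vibration windows
recon = model(signal)
err = ((recon - signal) ** 2).mean(dim=(1, 2))          # per-window reconstruction error
threshold = err.mean() + 3 * err.std()                  # in practice calibrated on healthy data
print((err > threshold).tolist())                       # True -> potential damage onset
```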
null | null | 2407.03704 | null | null | http://arxiv.org/pdf/2407.03704v1 | 2024-07-04T07:45:46Z | 2024-07-04T07:45:46Z | Neural Probabilistic Logic Learning for Knowledge Graph Reasoning | Knowledge graph (KG) reasoning is a task that aims to predict unknown facts based on known factual samples. Reasoning methods can be divided into two categories: rule-based methods and KG-embedding based methods. The former possesses precise reasoning capabilities but finds it challenging to reason efficiently over large-scale knowledge graphs. While gaining the ability to reason over large-scale knowledge graphs, the latter sacrifices reasoning accuracy. This paper aims to design a reasoning framework called Neural Probabilistic Logic Learning (NPLL) that achieves accurate reasoning on knowledge graphs. Our approach introduces a scoring module that effectively enhances the expressive power of embedding networks, striking a balance between model simplicity and reasoning capabilities. We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference. We empirically evaluate our approach on several benchmark datasets, and the experimental results validate that our method substantially enhances the accuracy and quality of the reasoning results. | [
"['Fengsong Sun' 'Jinyu Wang' 'Zhiqing Wei' 'Xianchao Zhang']"
] |
null | null | 2407.03718 | null | null | http://arxiv.org/pdf/2407.03718v1 | 2024-07-04T08:08:12Z | 2024-07-04T08:08:12Z | Multi-Convformer: Extending Conformer with Multiple Convolution Kernels | Convolutions have become essential in state-of-the-art end-to-end Automatic Speech Recognition (ASR) systems due to their efficient modelling of local context. Notably, their use in Conformers has led to superior performance compared to vanilla Transformer-based ASR systems. While components other than the convolution module in the Conformer have been reexamined, altering the convolution module itself has been far less explored. Towards this, we introduce Multi-Convformer that uses multiple convolution kernels within the convolution module of the Conformer in conjunction with gating. This helps in improved modeling of local dependencies at varying granularities. Our model rivals existing Conformer variants such as CgMLP and E-Branchformer in performance, while being more parameter efficient. We empirically compare our approach with Conformer and its variants across four different datasets and three different modelling paradigms and show up to 8% relative word error rate (WER) improvements. | [
"['Darshan Prabhu' 'Yifan Peng' 'Preethi Jyothi' 'Shinji Watanabe']"
] |
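The core idea above, several depthwise convolution kernels in parallel combined by a gate, can be sketched as a drop-in module. This is an illustrative module with assumed sizes, not the exact Multi-Convformer block:

```python
import torch, torch.nn as nn

class MultiKernelConv(nn.Module):
    def __init__(self, dim=144, kernel_sizes=(7, 15, 31)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim)   # depthwise convs
            for k in kernel_sizes
        )
        self.gate = nn.Linear(dim, len(kernel_sizes))            # per-frame branch gate
    def forward(self, x):                                        # x: (B, T, dim)
        g = torch.softmax(self.gate(x), dim=-1)                  # (B, T, n_branches)
        y = torch.stack([b(x.transpose(1, 2)).transpose(1, 2)
                         for b in self.branches], dim=-1)        # (B, T, dim, n)
        return (y * g.unsqueeze(2)).sum(dim=-1)                  # gated combination

x = torch.randn(2, 50, 144)
print(MultiKernelConv()(x).shape)  # torch.Size([2, 50, 144])
```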
null | null | 2407.03728 | null | null | http://arxiv.org/pdf/2407.03728v1 | 2024-07-04T08:21:54Z | 2024-07-04T08:21:54Z | Measuring Orthogonality in Representations of Generative Models | In unsupervised representation learning, models aim to distill essential features from high-dimensional data into lower-dimensional learned representations, guided by inductive biases. Understanding the characteristics that make a good representation remains a topic of ongoing research. Disentanglement of independent generative processes has long been credited with producing high-quality representations. However, focusing solely on representations that adhere to the stringent requirements of most disentanglement metrics may result in overlooking many high-quality representations, well suited for various downstream tasks. These metrics often demand that generative factors be encoded in distinct, single dimensions aligned with the canonical basis of the representation space. Motivated by these observations, we propose two novel metrics: Importance-Weighted Orthogonality (IWO) and Importance-Weighted Rank (IWR). These metrics evaluate the mutual orthogonality and rank of generative factor subspaces. Through extensive experiments on common downstream tasks, over several benchmark datasets and models, IWO and IWR consistently show stronger correlations with downstream task performance than traditional disentanglement metrics. Our findings suggest that representation quality is more closely related to the orthogonality of independent generative processes than to their disentanglement, offering a new direction for evaluating and improving unsupervised learning models. | [
"['Robin C. Geyer' 'Alessandro Torcinovich' 'João B. Carvalho'\n 'Alexander Meyer' 'Joachim M. Buhmann']"
] |
null | null | 2407.03734 | null | null | http://arxiv.org/pdf/2407.03734v1 | 2024-07-04T08:33:52Z | 2024-07-04T08:33:52Z | Improving Self-supervised Pre-training using Accent-Specific Codebooks | Speech accents present a serious challenge to the performance of state-of-the-art end-to-end Automatic Speech Recognition (ASR) systems. Even with self-supervised learning and pre-training of ASR models, accent invariance is seldom achieved. In this work, we propose an accent-aware adaptation technique for self-supervised learning that introduces a trainable set of accent-specific codebooks to the self-supervised architecture. These learnable codebooks enable the model to capture accent specific information during pre-training, that is further refined during ASR finetuning. On the Mozilla Common Voice dataset, our proposed approach outperforms all other accent-adaptation approaches on both seen and unseen English accents, with up to 9% relative reduction in word error rate (WER). | [
"['Darshan Prabhu' 'Abhishek Gupta' 'Omkar Nitsure' 'Preethi Jyothi'\n 'Sriram Ganapathy']"
] |
null | null | 2407.03736 | null | null | http://arxiv.org/pdf/2407.03736v1 | 2024-07-04T08:37:47Z | 2024-07-04T08:37:47Z | Semantic Grouping Network for Audio Source Separation | Recently, audio-visual separation approaches have taken advantage of the natural synchronization between the two modalities to boost audio source separation performance. They extract high-level semantics from visual inputs as the guidance to help disentangle sound representation for individual sources. Can we directly learn to disentangle the individual semantics from the sound itself? The dilemma is that multiple sound sources are mixed together in the original space. To tackle the difficulty, in this paper, we present a novel Semantic Grouping Network, termed SGN, that can directly disentangle sound representations and extract high-level semantic information for each source from the input audio mixture. Specifically, SGN aggregates category-wise source features through learnable class tokens of sounds. Then, the aggregated semantic features can be used as the guidance to separate the corresponding audio sources from the mixture. We conducted extensive experiments on music-only and universal sound separation benchmarks: MUSIC, FUSS, MUSDB18, and VGG-Sound. The results demonstrate that our SGN significantly outperforms previous audio-only methods and audio-visual models without utilizing additional visual cues. | [
"['Shentong Mo' 'Yapeng Tian']"
] |
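Category-wise aggregation through learnable class tokens, as described above, can be sketched with standard attention: each class token queries the mixture's feature sequence and pools a per-category semantic feature that can then guide separation. Dimensions are illustrative assumptions:

```python
import torch, torch.nn as nn

class ClassTokenGrouping(nn.Module):
    def __init__(self, n_classes=10, dim=256, n_heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_classes, dim))  # learnable class tokens
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
    def forward(self, mix):                                      # mix: (B, T, dim)
        q = self.tokens.unsqueeze(0).expand(mix.size(0), -1, -1)
        grouped, _ = self.attn(q, mix, mix)                      # (B, n_classes, dim)
        return grouped                                           # per-category semantics

mix = torch.randn(2, 120, 256)
print(ClassTokenGrouping()(mix).shape)  # torch.Size([2, 10, 256])
```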
null | null | 2407.03738 | null | null | http://arxiv.org/pdf/2407.03738v1 | 2024-07-04T08:47:05Z | 2024-07-04T08:47:05Z | BasisN: Reprogramming-Free RRAM-Based In-Memory-Computing by Basis
Combination for Deep Neural Networks | Deep neural networks (DNNs) have made breakthroughs in various fields including image recognition and language processing. DNNs execute hundreds of millions of multiply-and-accumulate (MAC) operations. To efficiently accelerate such computations, analog in-memory-computing platforms have emerged leveraging emerging devices such as resistive RAM (RRAM). However, such accelerators face the hurdle of being required to have sufficient on-chip crossbars to hold all the weights of a DNN. Otherwise, RRAM cells in the crossbars need to be reprogrammed to process further layers, which causes huge time/energy overhead due to the extremely slow writing and verification of the RRAM cells. As a result, it is still not possible to deploy such accelerators to process large-scale DNNs in industry. To address this problem, we propose the BasisN framework to accelerate DNNs on any number of available crossbars without reprogramming. BasisN introduces a novel representation of the kernels in DNN layers as combinations of global basis vectors shared between all layers with quantized coefficients. These basis vectors are written to crossbars only once and used for the computations of all layers with marginal hardware modification. BasisN also provides a novel training approach to enhance computation parallelization with the global basis vectors and optimize the coefficients to construct the kernels. Experimental results demonstrate that cycles per inference and energy-delay product were reduced to below 1% compared with applying reprogramming on crossbars in processing large-scale DNNs such as DenseNet and ResNet on ImageNet and CIFAR100 datasets, while the training and hardware costs are negligible. | [
"['Amro Eldebiky' 'Grace Li Zhang' 'Xunzhao Yin' 'Cheng Zhuo'\n 'Ing-Chao Lin' 'Ulf Schlichtmann' 'Bing Li']"
] |
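The core representational trick, kernels as quantized combinations of a globally shared basis, can be illustrated in a few lines; the sizes and the uniform quantizer below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Global basis shared by every layer: written to RRAM crossbars only once.
n_basis, k_dim = 64, 128
basis = rng.standard_normal((n_basis, k_dim))

def quantize(c: np.ndarray, bits: int = 4) -> np.ndarray:
    """Uniform quantization of coefficients (illustrative)."""
    levels = 2 ** bits - 1
    scale = np.abs(c).max() / (levels / 2)
    return np.round(c / scale) * scale

# A layer's kernels are represented only by small quantized coefficients.
n_kernels = 32
coeffs = quantize(rng.standard_normal((n_kernels, n_basis)))

# MAC with input x: the in-crossbar product basis @ x is reused by all
# layers; each layer only applies its own coefficient matrix afterwards.
x = rng.standard_normal(k_dim)
basis_response = basis @ x                   # computed in-memory, no reprogramming
y = coeffs @ basis_response                  # equivalent to (coeffs @ basis) @ x
print(np.allclose(y, (coeffs @ basis) @ x))  # True
```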
null | null | 2407.03759 | null | null | http://arxiv.org/pdf/2407.03759v1 | 2024-07-04T09:12:08Z | 2024-07-04T09:12:08Z | Convolutional vs Large Language Models for Software Log Classification
in Edge-Deployable Cellular Network Testing | Software logs generated by sophisticated network emulators in the telecommunications industry, such as VIAVI TM500, are extremely complex, often comprising tens of thousands of text lines with minimal resemblance to natural language. Only specialised expert engineers can decipher such logs and troubleshoot defects in test runs. While AI offers a promising solution for automating defect triage, potentially leading to massive revenue savings for companies, state-of-the-art large language models (LLMs) suffer from significant drawbacks in this specialised domain. These include a constrained context window, limited applicability to text beyond natural language, and high inference costs. To address these limitations, we propose a compact convolutional neural network (CNN) architecture that offers a context window spanning up to 200,000 characters and achieves over 96% accuracy (F1>0.9) in classifying multifaceted software logs into various layers in the telecommunications protocol stack. Specifically, the proposed model is capable of identifying defects in test runs and triaging them to the relevant department, formerly a manual engineering process that required expert knowledge. We evaluate several LLMs (LLaMA2-7B, Mixtral 8x7B, Flan-T5, BERT, and BigBird) and experimentally demonstrate their shortcomings in our specialized application. Despite being lightweight, our CNN significantly outperforms LLM-based approaches in telecommunications log classification while minimizing the cost of production. Our defect triaging AI model is deployable on edge devices without dedicated hardware and widely applicable across software logs in various industries. | [
"['Achintha Ihalage' 'Sayed M. Taheri' 'Faris Muhammad'\n 'Hamed Al-Raweshidy']"
] |
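A hedged sketch of a character-level CNN whose strided convolutions make a 200,000-character context window cheap; the layer sizes and class count are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CharLogCNN(nn.Module):
    """Minimal character-level CNN for log classification; all layer
    sizes are illustrative assumptions."""

    def __init__(self, n_chars: int = 128, n_classes: int = 8):
        super().__init__()
        self.embed = nn.Embedding(n_chars, 16)
        self.conv = nn.Sequential(
            nn.Conv1d(16, 64, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, length); very long inputs remain tractable
        # because the strided convolutions downsample aggressively.
        h = self.embed(char_ids).transpose(1, 2)   # (batch, 16, length)
        h = self.conv(h).mean(dim=-1)              # global average pooling
        return self.head(h)

model = CharLogCNN()
logs = torch.randint(0, 128, (2, 200_000))   # two 200k-character logs
print(model(logs).shape)                     # torch.Size([2, 8])
```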
null | null | 2407.03760 | null | null | http://arxiv.org/pdf/2407.03760v1 | 2024-07-04T09:14:24Z | 2024-07-04T09:14:24Z | GraphCNNpred: A stock market indices prediction using a Graph based deep
learning system | Deep learning techniques for predicting stock market prices are a popular topic in the field of data science. Customized feature engineering arises as a pre-processing tool for different stock market datasets. In this paper, we present a graph neural network based convolutional neural network (CNN) model that can be applied to diverse sources of data to extract features for predicting the trends of the S&P 500, NASDAQ, DJI, NYSE, and RUSSEL indices. | [
"['Yuhui Jin']"
] |
null | null | 2407.03779 | null | null | http://arxiv.org/pdf/2407.03779v1 | 2024-07-04T09:42:25Z | 2024-07-04T09:42:25Z | Functional Faithfulness in the Wild: Circuit Discovery with
Differentiable Computation Graph Pruning | In this paper, we introduce a comprehensive reformulation of the task known as Circuit Discovery, along with DiscoGP, a novel and effective algorithm based on differentiable masking for discovering circuits. Circuit discovery is the task of interpreting the computational mechanisms of language models (LMs) by dissecting their functions and capabilities into sparse subnetworks (circuits). We identified two major limitations in existing circuit discovery efforts: (1) a dichotomy between weight-based and connection-edge-based approaches forces researchers to choose between pruning connections or weights, thereby limiting the scope of mechanistic interpretation of LMs; (2) algorithms based on activation patching tend to identify circuits that are neither functionally faithful nor complete. The performance of these identified circuits is substantially reduced, often resulting in near-random performance in isolation. Furthermore, the complement of the circuit -- i.e., the original LM with the identified circuit removed -- still retains adequate performance, indicating that essential components of a complete circuit are missed by existing methods. DiscoGP successfully addresses the two aforementioned issues and demonstrates state-of-the-art faithfulness, completeness, and sparsity. The effectiveness of the algorithm and its novel structure open up new avenues for gaining insights into the internal workings of generative AI. | [
"['Lei Yu' 'Jingcheng Niu' 'Zining Zhu' 'Gerald Penn']"
] |
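The differentiable-masking mechanic can be sketched as a sigmoid-relaxed binary mask over a frozen layer's weights, trained with a sparsity penalty; the faithfulness loss and the mask parameterization below are simplifications, not DiscoGP's exact formulation:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Sketch of differentiable weight masking for circuit discovery:
    a sigmoid-relaxed binary mask gates each weight of a frozen layer."""

    def __init__(self, layer: nn.Linear):
        super().__init__()
        self.layer = layer.requires_grad_(False)           # freeze LM weights
        self.logits = nn.Parameter(torch.full_like(layer.weight, 3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.logits)                  # relaxed {0,1} mask
        return nn.functional.linear(x, self.layer.weight * mask, self.layer.bias)

    def sparsity_penalty(self) -> torch.Tensor:
        return torch.sigmoid(self.logits).mean()           # push mask toward 0

# Train the mask to keep task behaviour while sparsifying (toy loop).
layer = MaskedLinear(nn.Linear(32, 32))
opt = torch.optim.Adam([layer.logits], lr=1e-2)
x, target = torch.randn(64, 32), torch.randn(64, 32)
for _ in range(100):
    loss = ((layer(x) - target) ** 2).mean() + 0.1 * layer.sparsity_penalty()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"kept weights: {(torch.sigmoid(layer.logits) > 0.5).float().mean():.2f}")
```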
null | null | 2407.03792 | null | null | http://arxiv.org/pdf/2407.03792v1 | 2024-07-04T09:55:22Z | 2024-07-04T09:55:22Z | NeuroSteiner: A Graph Transformer for Wirelength Estimation | A core objective of physical design is to minimize wirelength (WL) when placing chip components on a canvas. Computing the minimal WL of a placement requires finding rectilinear Steiner minimum trees (RSMTs), an NP-hard problem. We propose NeuroSteiner, a neural model that distills GeoSteiner, an optimal RSMT solver, to navigate the cost--accuracy frontier of WL estimation. NeuroSteiner is trained on synthesized nets labeled by GeoSteiner, alleviating the need to train on real chip designs. Moreover, NeuroSteiner's differentiability allows placement by minimizing WL through gradient descent. On ISPD 2005 and 2019, NeuroSteiner can obtain 0.3% WL error while being 60% faster than GeoSteiner, or 0.2% WL error while being 30% faster. | [
"['Sahil Manchanda' 'Dana Kianfar' 'Markus Peschl' 'Romain Lepert'\n 'Michaël Defferrard']"
] |
null | null | 2407.03804 | null | null | http://arxiv.org/pdf/2407.03804v1 | 2024-07-04T10:23:56Z | 2024-07-04T10:23:56Z | Multi-Time Scale Service Caching and Pricing in MEC Systems with Dynamic
Program Popularity | In mobile edge computing systems, base stations (BSs) equipped with edge servers can provide computing services to users to reduce their task execution time. However, there is always a conflict of interest between the BS and users. The BS prices the service programs based on user demand to maximize its own profit, while the users determine their offloading strategies based on the prices to minimize their costs. Moreover, service programs need to be pre-cached to meet immediate computing needs. Due to the limited caching capacity and variations in service program popularity, the BS must dynamically select which service programs to cache. Since service caching and pricing have different needs for adjustment time granularities, we propose a two-time scale framework to jointly optimize service caching, pricing and task offloading. For the large time scale, we propose a game-nested deep reinforcement learning algorithm to dynamically adjust service caching according to the estimated popularity information. For the small time scale, by modeling the interaction between the BS and users as a two-stage game, we prove the existence of the equilibrium under incomplete information and then derive the optimal pricing and offloading strategies. Extensive simulations based on a real-world dataset demonstrate the efficiency of the proposed approach. | [
"['Yiming Chen' 'Xingyuan Hu' 'Bo Gu' 'Shimin Gong' 'Zhou Su']"
] |
null | null | 2407.03821 | null | null | http://arxiv.org/pdf/2407.03821v1 | 2024-07-04T10:46:09Z | 2024-07-04T10:46:09Z | Seamless Monitoring of Stress Levels Leveraging a Universal Model for
Time Sequences | Monitoring the stress level in patients with neurodegenerative diseases can help manage symptoms, improve patients' quality of life, and provide insight into disease progression. In the literature, ECG, actigraphy, speech, voice, and facial analysis have proven effective at detecting patients' emotions. On the other hand, these tools are invasive and do not integrate smoothly into the patient's daily life. Heart rate variability (HRV) has also been proven to effectively indicate stress conditions, especially in combination with other signals. However, when HRV is derived from less invasive devices than the ECG, like smartwatches and bracelets, the quality of measurements significantly degrades. This paper presents a methodology for stress detection from a smartwatch based on a universal model for time series, UniTS, which we fine-tuned for the task. We cast the problem as anomaly detection rather than classification to favor model adaptation to individual patients and allow the clinician to maintain greater control over the system's predictions. We demonstrate that our proposed model considerably surpasses 12 top-performing methods on 3 benchmark datasets. Furthermore, unlike other state-of-the-art systems, UniTS enables seamless monitoring, as it shows comparable performance when using signals from invasive or lightweight devices. | [
"['Davide Gabrielli' 'Bardh Prenkaj' 'Paola Velardi']"
] |
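A toy illustration of the anomaly-detection framing, assuming a reconstruction-based scorer standing in for the fine-tuned UniTS model and a per-patient threshold the clinician can tune:

```python
import numpy as np

def anomaly_scores(reconstruct, windows):
    """Score each window by reconstruction error of a model fit on calm
    (normal) data; stress is flagged when the score exceeds a threshold."""
    recon = reconstruct(windows)
    return ((windows - recon) ** 2).mean(axis=(1, 2))

rng = np.random.default_rng(0)
calm = rng.normal(60, 2, size=(100, 1, 30))           # toy HRV-derived windows
stressed = rng.normal(75, 8, size=(10, 1, 30))
reconstruct = lambda w: np.full_like(w, calm.mean())  # stand-in for the model
thr = np.quantile(anomaly_scores(reconstruct, calm), 0.95)
print((anomaly_scores(reconstruct, stressed) > thr).mean())  # most flagged
```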
null | null | 2407.03824 | null | null | http://arxiv.org/pdf/2407.03824v1 | 2024-07-04T10:52:02Z | 2024-07-04T10:52:02Z | Emergent Interpretable Symbols and Content-Style Disentanglement via
Variance-Invariance Constraints | We contribute an unsupervised method that effectively learns from raw observation and disentangles its latent space into content and style representations. Unlike most disentanglement algorithms that rely on domain-specific labels and knowledge, our method is based on the insight of domain-general statistical differences between content and style -- content varies more among different fragments within a sample but maintains an invariant vocabulary across data samples, whereas style remains relatively invariant within a sample but exhibits more significant variation across different samples. We integrate such inductive bias into an encoder-decoder architecture and name our method V3 (variance-versus-invariance). Experimental results show that V3 generalizes across two distinct domains in different modalities, music audio and images of written digits, successfully learning pitch-timbre and digit-color disentanglements, respectively. Also, the disentanglement robustness significantly outperforms baseline unsupervised methods and is even comparable to supervised counterparts. Furthermore, symbolic-level interpretability emerges in the learned codebook of content, forging a near one-to-one alignment between machine representation and human knowledge. | [
"['Yuxuan Wu' 'Ziyu Wang' 'Bhiksha Raj' 'Gus Xia']"
] |
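The variance-versus-invariance statistics can be written down directly; the tensor layout and how they enter the training loss are assumptions:

```python
import torch

def v3_statistics(z: torch.Tensor):
    """z: (n_samples, n_fragments, dim) latent codes, fragments within a
    sample. Content should vary across fragments within a sample but keep
    a shared vocabulary across samples; style should do the opposite."""
    within = z.var(dim=1).mean()               # variation across fragments
    across = z.mean(dim=1).var(dim=0).mean()   # variation across samples
    return within, across

# A loss could then encourage high `within` / low `across` for the content
# subspace and low `within` / high `across` for the style subspace.
z_content = torch.randn(16, 8, 32)
w, a = v3_statistics(z_content)
print(float(w), float(a))
```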
null | null | 2407.03834 | null | null | http://arxiv.org/pdf/2407.03834v1 | 2024-07-04T11:04:26Z | 2024-07-04T11:04:26Z | 10 Years of Fair Representations: Challenges and Opportunities | Fair Representation Learning (FRL) is a broad set of techniques, mostly based on neural networks, that seeks to learn new representations of data in which sensitive or undesired information has been removed. Methodologically, FRL was pioneered by Richard Zemel et al. about ten years ago. The basic concepts, objectives and evaluation strategies for FRL methodologies remain unchanged to this day. In this paper, we look back at the first ten years of FRL by i) revisiting its theoretical standing in light of recent work in deep learning theory that shows the hardness of removing information in neural network representations and ii) presenting the results of a massive experimental study (225,000 model fits and 110,000 AutoML fits) we conducted with the objective of improving on the common evaluation scenario for FRL. More specifically, we use automated machine learning (AutoML) to adversarially "mine" sensitive information from supposedly fair representations. Our theoretical and experimental analysis suggests that deterministic, unquantized FRL methodologies have serious issues in removing sensitive information, which is especially troubling as they might seem "fair" at first glance. | [
"['Mattia Cerrato' 'Marius Köppel' 'Philipp Wolf' 'Stefan Kramer']"
] |
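The adversarial "mining" step amounts to probing representations for the sensitive attribute; the single gradient-boosting probe below stands in for the paper's AutoML search, and the synthetic data is a placeholder:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score

# Stand-in for supposedly "fair" representations Z and a sensitive attribute s.
Z, s = make_classification(n_samples=2000, n_features=16, random_state=0)
Z_tr, Z_te, s_tr, s_te = train_test_split(Z, s, random_state=0)

probe = GradientBoostingClassifier(random_state=0).fit(Z_tr, s_tr)
leakage = balanced_accuracy_score(s_te, probe.predict(Z_te))
print(f"sensitive-attribute leakage (0.5 = none): {leakage:.3f}")
```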
null | null | 2407.03836 | null | null | http://arxiv.org/pdf/2407.03836v1 | 2024-07-04T11:05:14Z | 2024-07-04T11:05:14Z | ADAPT: Multimodal Learning for Detecting Physiological Changes under
Missing Modalities | Multimodality has recently gained attention in the medical domain, where imaging or video modalities may be integrated with biomedical signals or health records. Yet, two challenges remain: balancing the contributions of modalities, especially in cases with a limited amount of data available, and tackling missing modalities. To address both issues, in this paper, we introduce the AnchoreD multimodAl Physiological Transformer (ADAPT), a multimodal, scalable framework with two key components: (i) an alignment of all modalities in the space of the strongest, richest modality (called the anchor) to learn a joint embedding space, and (ii) a Masked Multimodal Transformer that leverages both inter- and intra-modality correlations while handling missing modalities. We focus on detecting physiological changes in two real-life scenarios: stress in individuals induced by specific triggers and fighter pilots' loss of consciousness induced by $g$-forces. We validate the generalizability of ADAPT through extensive experiments on two datasets for these tasks, where we set the new state of the art while demonstrating its robustness across various modality scenarios and its high potential for real-life applications. | [
"['Julie Mordacq' 'Leo Milecki' 'Maria Vakalopoulou' 'Steve Oudot'\n 'Vicky Kalogeiton']"
] |
null | null | 2407.03848 | null | null | http://arxiv.org/pdf/2407.03848v1 | 2024-07-04T11:29:50Z | 2024-07-04T11:29:50Z | Bias of Stochastic Gradient Descent or the Architecture: Disentangling
the Effects of Overparameterization of Neural Networks | Neural networks typically generalize well when fitting the data perfectly, even though they are heavily overparameterized. Many factors have been pointed out as the reason for this phenomenon, including an implicit bias of stochastic gradient descent (SGD) and a possible simplicity bias arising from the neural network architecture. The goal of this paper is to disentangle the factors that influence generalization stemming from optimization and architectural choices by studying random and SGD-optimized networks that achieve zero training error. We experimentally show, in the low sample regime, that overparameterization in terms of increasing width is beneficial for generalization, and this benefit is due to the bias of SGD and not due to an architectural bias. In contrast, for increasing depth, overparameterization is detrimental for generalization, but random and SGD-optimized networks behave similarly, so this can be attributed to an architectural bias. For more information, see https://bias-sgd-or-architecture.github.io . | [
"['Amit Peleg' 'Matthias Hein']"
] |
null | null | 2407.03851 | null | null | http://arxiv.org/pdf/2407.03851v1 | 2024-07-04T11:34:42Z | 2024-07-04T11:34:42Z | Implicit Hypersurface Approximation Capacity in Deep ReLU Networks | We develop a geometric approximation theory for deep feed-forward neural networks with ReLU activations. Given a $d$-dimensional hypersurface in $\mathbb{R}^{d+1}$ represented as the graph of a $C^2$-function $\phi$, we show that a deep fully-connected ReLU network of width $d+1$ can implicitly construct an approximation as its zero contour with a precision bound depending on the number of layers. This result is directly applicable to the binary classification setting where the sign of the network is trained as a classifier, with the network's zero contour as a decision boundary. Our proof is constructive and relies on the geometrical structure of ReLU layers provided in [doi:10.48550/arXiv.2310.03482]. Inspired by this geometrical description, we define a new equivalent network architecture that is easier to interpret geometrically, where the action of each hidden layer is a projection onto a polyhedral cone derived from the layer's parameters. By repeatedly adding such layers, with parameters chosen such that we project small parts of the graph of $\phi$ from the outside in, we, in a controlled way, construct a network that implicitly approximates the graph over a ball of radius $R$. The accuracy of this construction is controlled by a discretization parameter $\delta$ and we show that the tolerance in the resulting error bound scales as $(d-1)R^{3/2}\delta^{1/2}$ and the required number of layers is of order $d\big(\frac{32R}{\delta}\big)^{\frac{d+1}{2}}$. | [
"['Jonatan Vallin' 'Karl Larsson' 'Mats G. Larson']"
] |
null | null | 2407.03852 | null | null | http://arxiv.org/pdf/2407.03852v1 | 2024-07-04T11:34:43Z | 2024-07-04T11:34:43Z | Low-latency machine learning FPGA accelerator for multi-qubit state
discrimination | Measuring a qubit is a fundamental yet error-prone operation in quantum computing. These errors can stem from various sources such as crosstalk, spontaneous state-transitions, and excitation caused by the readout pulse. In this work, we utilize an integrated approach to deploy neural networks (NNs) onto field-programmable gate arrays (FPGAs). We demonstrate that it is practical to design and implement a fully connected neural network accelerator for frequency-multiplexed readout, balancing computational complexity and low-latency requirements without significant loss in accuracy. The neural network is implemented by quantization of weights, activation functions, and inputs. The hardware accelerator performs frequency-multiplexed readout of 5 superconducting qubits in less than 50 ns on an RFSoC ZCU111 FPGA, which is the first of its kind in the literature. These modules can be implemented and integrated into existing quantum control and readout platforms using an RFSoC ZCU111, ready for experimental deployment. | [
"['Pradeep Kumar Gautam' 'Shantharam Kalipatnapu' 'Shankaranarayanan H'\n 'Ujjawal Singhal' 'Benjamin Lienhard' 'Vibhor Singh'\n 'Chetan Singh Thakur']"
] |
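A sketch of the quantization idea behind the accelerator, assuming symmetric uniform fixed-point quantization of weights and activations in a tiny fully connected discriminator; none of the sizes match the deployed design:

```python
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int):
    """Symmetric uniform quantization mimicking fixed-point arithmetic
    (illustrative, not the paper's exact scheme)."""
    q = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / q
    return np.clip(np.round(x / scale), -q, q) * scale

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((10, 32)), rng.standard_normal((32, 5))
iq = rng.standard_normal(10)                   # toy demodulated I/Q features

def discriminate(x, bits=8):
    """Tiny fully connected classifier over readout features with quantized
    weights and activations; 5 outputs = one logit per qubit."""
    h = np.maximum(quantize_uniform(x, bits) @ quantize_uniform(W1, bits), 0)
    return quantize_uniform(h, bits) @ quantize_uniform(W2, bits)

print(discriminate(iq).round(2))
```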
null | null | 2407.03856 | null | null | http://arxiv.org/pdf/2407.03856v1 | 2024-07-04T11:42:36Z | 2024-07-04T11:42:36Z | Q-Adapter: Training Your LLM Adapter as a Residual Q-Function | We consider the problem of adapting Large Language Models (LLMs) pre-trained with Reinforcement Learning from Human Feedback (RLHF) to downstream preference data. Naive approaches to achieve this could be supervised fine-tuning on preferred responses or reinforcement learning with a learned reward model. However, the LLM runs the risk of forgetting its initial knowledge as the fine-tuning progresses. To customize the LLM while preserving its existing capabilities, this paper proposes a novel method, named Q-Adapter. We start by formalizing LLM adaptation as a problem of maximizing the linear combination of two rewards, one of which corresponds to the reward optimized by the pre-trained LLM and the other to the downstream preference data. Although both rewards are unknown, we show that this can be solved by directly learning a new module from the preference data that approximates the \emph{residual Q-function}. We consider this module to be an adapter because the original pre-trained LLM, together with it, can form the optimal customised LLM. Empirically, experiments on a range of domain-specific tasks and safety alignment tasks illustrate the superiority of Q-Adapter in both anti-forgetting and learning from new preferences. | [
"['Yi-Chen Li' 'Fuxiang Zhang' 'Wenjie Qiu' 'Lei Yuan' 'Chengxing Jia'\n 'Zongzhang Zhang' 'Yang Yu']"
] |
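A sketch of the adapter-as-residual-Q idea: a small head on the frozen LM's hidden states adds residual values to the base logits. The combination rule and scaling below are illustrative assumptions, not the paper's derivation:

```python
import torch
import torch.nn as nn

class QAdapter(nn.Module):
    """Sketch: a head estimates a residual Q-value per token; sampling
    combines the frozen base LM's logits with the residual Q-values."""

    def __init__(self, hidden: int, vocab: int):
        super().__init__()
        self.q_head = nn.Linear(hidden, vocab)

    def forward(self, base_logits, hidden_states, beta: float = 1.0):
        residual_q = self.q_head(hidden_states)
        return base_logits + beta * residual_q  # customised policy logits

vocab, hidden = 1000, 64
adapter = QAdapter(hidden, vocab)
base_logits = torch.randn(2, 10, vocab)   # frozen pre-trained LM output
h = torch.randn(2, 10, hidden)            # its last hidden states
probs = torch.softmax(adapter(base_logits, h), dim=-1)
print(probs.shape)                        # torch.Size([2, 10, 1000])
```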
null | null | 2407.03862 | null | null | http://arxiv.org/pdf/2407.03862v1 | 2024-07-04T11:50:24Z | 2024-07-04T11:50:24Z | FedSat: A Statistical Aggregation Approach for Class Imbalanced Clients
in Federated Learning | Federated learning (FL) has emerged as a promising paradigm for privacy-preserving distributed machine learning, but faces challenges with heterogeneous data distributions across clients. This paper introduces FedSat, a novel FL approach designed to tackle various forms of data heterogeneity simultaneously. FedSat employs a cost-sensitive loss function and a prioritized class-based weighted aggregation scheme to address label skewness, missing classes, and quantity skewness across clients. While the proposed cost-sensitive loss function enhances model performance on minority classes, the prioritized class-based weighted aggregation scheme ensures client contributions are weighted based on both statistical significance and performance on critical classes. Extensive experiments across diverse data-heterogeneity settings demonstrate that FedSat significantly outperforms state-of-the-art baselines, with an average improvement of 1.8% over the second-best method and 19.87% over the weakest-performing baseline. The approach also demonstrates faster convergence compared to existing methods. These results highlight FedSat's effectiveness in addressing the challenges of heterogeneous federated learning and its potential for real-world applications. | [
"['Sujit Chowdhury' 'Raju Halder']"
] |
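The prioritized class-based weighted aggregation can be sketched as a score-weighted parameter average; the choice of per-client score (e.g., minority-class F1) is an assumption:

```python
import numpy as np

def fedsat_style_aggregate(client_weights, client_scores):
    """Illustrative prioritized weighted aggregation: each client's model
    is weighted by its performance on critical (e.g., minority) classes.
    client_weights: list of flat parameter vectors."""
    scores = np.asarray(client_scores, dtype=float)
    alphas = scores / scores.sum()              # normalized priorities
    stacked = np.stack(client_weights)          # (n_clients, n_params)
    return (alphas[:, None] * stacked).sum(axis=0)

rng = np.random.default_rng(0)
models = [rng.standard_normal(10) for _ in range(4)]
global_model = fedsat_style_aggregate(models, client_scores=[0.9, 0.4, 0.7, 0.2])
print(global_model.shape)  # (10,)
```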
null | null | 2407.03864 | null | null | http://arxiv.org/pdf/2407.03864v1 | 2024-07-04T11:53:51Z | 2024-07-04T11:53:51Z | Adversarial Robustness of VAEs across Intersectional Subgroups | Despite advancements in Autoencoders (AEs) for tasks like dimensionality reduction, representation learning and data generation, they remain vulnerable to adversarial attacks. Variational Autoencoders (VAEs), with their probabilistic approach to disentangling latent spaces, show stronger resistance to such perturbations compared to deterministic AEs; however, their resilience against adversarial inputs is still a concern. This study evaluates the robustness of VAEs against non-targeted adversarial attacks by optimizing minimal sample-specific perturbations to cause maximal damage across diverse demographic subgroups (combinations of age and gender). We investigate two questions: whether there are robustness disparities among subgroups, and what factors contribute to these disparities, such as data scarcity and representation entanglement. Our findings reveal that robustness disparities exist but are not always correlated with the size of the subgroup. By using downstream gender and age classifiers and examining latent embeddings, we highlight the vulnerability of subgroups like older women, who are prone to misclassification due to adversarial perturbations pushing their representations toward those of other subgroups. | [
"['Chethan Krishnamurthy Ramanaik' 'Arjun Roy' 'Eirini Ntoutsi']"
] |
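A minimal version of the non-targeted attack, assuming reconstruction damage as the objective and an L-infinity bound to keep perturbations small; the paper's exact objective may differ:

```python
import torch
import torch.nn as nn

def vae_attack(encode, decode, x, steps=50, lr=1e-2, eps=0.05):
    """Find a small perturbation that maximizes reconstruction damage,
    a proxy for disrupting the latent representation."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    clean = decode(encode(x)).detach()
    for _ in range(steps):
        damage = ((decode(encode(x + delta)) - clean) ** 2).mean()
        opt.zero_grad(); (-damage).backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the perturbation minimal
    return delta.detach()

# Toy deterministic "VAE" (mean encoder only) just to exercise the loop.
enc, dec = nn.Linear(8, 3), nn.Linear(3, 8)
x = torch.randn(16, 8)
delta = vae_attack(enc, dec, x)
print(delta.abs().max())  # bounded by eps
```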
null | null | 2407.03872 | null | null | http://arxiv.org/pdf/2407.03872v1 | 2024-07-04T12:08:36Z | 2024-07-04T12:08:36Z | The Solution for the GAIIC2024 RGB-TIR object detection Challenge | This report introduces a solution to the task of RGB-TIR object detection from the perspective of unmanned aerial vehicles. Unlike traditional object detection methods, RGB-TIR object detection aims to utilize both RGB and TIR images for complementary information during detection. The challenges of RGB-TIR object detection from the perspective of unmanned aerial vehicles include highly complex image backgrounds, frequent changes in lighting, and uncalibrated RGB-TIR image pairs. To address these challenges at the model level, we utilized a lightweight YOLOv9 model with extended multi-level auxiliary branches that enhance the model's robustness, making it more suitable for practical applications in unmanned aerial vehicle scenarios. For image fusion in RGB-TIR detection, we incorporated a fusion module into the backbone network to fuse images at the feature level, implicitly addressing calibration issues. Our proposed method achieved mAP scores of 0.516 and 0.543 on the A and B benchmarks, respectively, while maintaining the highest inference speed among all models. | [
"['Xiangyu Wu' 'Jinling Xu' 'Longfei Huang' 'Yang Yang']"
] |
null | null | 2407.03878 | null | null | http://arxiv.org/pdf/2407.03878v1 | 2024-07-04T12:15:42Z | 2024-07-04T12:15:42Z | Geodesic Optimization for Predictive Shift Adaptation on EEG data | Electroencephalography (EEG) data is often collected from diverse contexts involving different populations and EEG devices. This variability can induce distribution shifts in the data $X$ and in the biomedical variables of interest $y$, thus limiting the application of supervised machine learning (ML) algorithms. While domain adaptation (DA) methods have been developed to mitigate the impact of these shifts, such methods struggle when distribution shifts occur simultaneously in $X$ and $y$. As state-of-the-art ML models for EEG represent the data by spatial covariance matrices, which lie on the Riemannian manifold of Symmetric Positive Definite (SPD) matrices, it is appealing to study DA techniques operating on the SPD manifold. This paper proposes a novel method termed Geodesic Optimization for Predictive Shift Adaptation (GOPSA) to address test-time multi-source DA for situations in which source domains have distinct $y$ distributions. GOPSA exploits the geodesic structure of the Riemannian manifold to jointly learn a domain-specific re-centering operator representing site-specific intercepts and the regression model. We performed empirical benchmarks on the cross-site generalization of age-prediction models with resting-state EEG data from a large multi-national dataset (HarMNqEEG), which included $14$ recording sites and more than $1500$ human participants. Compared to state-of-the-art methods, our results showed that GOPSA achieved significantly higher performance on three regression metrics ($R^2$, MAE, and Spearman's $\rho$) for several source-target site combinations, highlighting its effectiveness in tackling multi-source DA with predictive shifts in EEG data analysis. Our method has the potential to combine the advantages of mixed-effects modeling with machine learning for biomedical applications of EEG, such as multicenter clinical trials. | [
"['Apolline Mellot' 'Antoine Collas' 'Sylvain Chevallier'\n 'Alexandre Gramfort' 'Denis A. Engemann']"
] |
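The site-specific re-centering step can be illustrated by whitening each site's SPD covariances with the inverse square root of a reference matrix; the Euclidean mean below is a cheap stand-in for the Fréchet mean, and the joint learning with the regression model is omitted:

```python
import numpy as np

def inv_sqrtm(m):
    """Inverse square root of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * (1.0 / np.sqrt(vals))) @ vecs.T

def recenter(covs, ref):
    """Whiten a site's covariances with the reference's inverse square
    root, moving the site's mean to the identity."""
    w = inv_sqrtm(ref)
    return np.array([w @ c @ w for c in covs])

rng = np.random.default_rng(0)
epochs = rng.standard_normal((20, 4, 100))              # toy 4-channel EEG epochs
covs = np.array([e @ e.T / e.shape[1] for e in epochs])
aligned = recenter(covs, ref=covs.mean(axis=0))         # Euclidean-mean reference
print(np.round(aligned.mean(axis=0), 2))                # ~ identity matrix
```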
null | null | 2407.03883 | null | null | http://arxiv.org/pdf/2407.03883v1 | 2024-07-04T12:21:59Z | 2024-07-04T12:21:59Z | Protecting Deep Learning Model Copyrights with Adversarial Example-Free
Reuse Detection | Model reuse techniques can reduce the resource requirements for training high-performance deep neural networks (DNNs) by leveraging existing models. However, unauthorized reuse and replication of DNNs can lead to copyright infringement and economic loss to the model owner. This underscores the need to analyze the reuse relation between DNNs and develop copyright protection techniques to safeguard intellectual property rights. Existing white-box testing-based approaches cannot address the common heterogeneous reuse case where the model architecture is changed, and DNN fingerprinting approaches heavily rely on generating adversarial examples with good transferability, which is known to be challenging in the black-box setting. To bridge the gap, we propose NFARD, a Neuron Functionality Analysis-based Reuse Detector, which only requires normal test samples to detect reuse relations by measuring the models' differences on a newly proposed model characterization, i.e., neuron functionality (NF). A set of NF-based distance metrics is designed to make NFARD applicable to both white-box and black-box settings. Moreover, we devise a linear transformation method to handle heterogeneous reuse cases by constructing the optimal projection matrix for dimension consistency, significantly extending the application scope of NFARD. To the best of our knowledge, this is the first adversarial example-free method that exploits neuron functionality for DNN copyright protection. As a side contribution, we constructed a reuse detection benchmark named Reuse Zoo that covers various practical reuse techniques and popular datasets. Extensive evaluations on this comprehensive benchmark show that NFARD achieves F1 scores of 0.984 and 1.0 for detecting reuse relationships in black-box and white-box settings, respectively, while generating test suites 2 to 99 times faster than previous methods. | [
"['Xiaokun Luan' 'Xiyue Zhang' 'Jingyi Wang' 'Meng Sun']"
] |
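A toy rendering of the neuron-functionality idea: characterize each neuron by its responses on normal test samples and compare models by a distance over matched responses. The distance metric below is an assumption, not NFARD's:

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron_functionality(model_fn, samples):
    """A neuron's 'functionality' here is its response vector over fixed
    normal test samples: one column of activations per neuron."""
    return model_fn(samples)                   # (n_samples, n_neurons)

def nf_distance(nf_a, nf_b):
    """Illustrative NF distance: one minus the mean best absolute
    correlation after matching each neuron to its closest counterpart."""
    n = nf_a.shape[1]
    corr = np.corrcoef(nf_a.T, nf_b.T)[:n, n:]
    return float(1.0 - np.abs(corr).max(axis=1).mean())

X = rng.standard_normal((200, 16))             # normal test samples
W = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))               # an unrelated model's weights
f_orig = lambda x: np.tanh(x @ W)              # protected model
f_copy = lambda x: np.tanh((x @ W) * 1.05)     # reused, slightly perturbed
f_indep = lambda x: np.tanh(x @ V)             # independently trained

d_reuse = nf_distance(neuron_functionality(f_orig, X), neuron_functionality(f_copy, X))
d_indep = nf_distance(neuron_functionality(f_orig, X), neuron_functionality(f_indep, X))
print(f"reused: {d_reuse:.3f}  independent: {d_indep:.3f}")  # reused is far smaller
```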
null | null | 2407.03888 | null | null | http://arxiv.org/pdf/2407.03888v1 | 2024-07-04T12:26:31Z | 2024-07-04T12:26:31Z | Continuous-time q-Learning for Jump-Diffusion Models under Tsallis
Entropy | This paper studies continuous-time reinforcement learning for controlled jump-diffusion models by featuring the q-function (the continuous-time counterpart of the Q-function) and the q-learning algorithms under the Tsallis entropy regularization. Contrary to the conventional Shannon entropy, the general form of Tsallis entropy renders the optimal policy not necessarily a Gibbs measure, where some Lagrange multiplier and KKT multiplier naturally arise from certain constraints to ensure that the learnt policy is a probability distribution. As a consequence, the relationship between the optimal policy and the q-function also involves the Lagrange multiplier. In response, we establish the martingale characterization of the q-function under Tsallis entropy and devise two q-learning algorithms depending on whether the Lagrange multiplier can be derived explicitly or not. In the latter case, we need to consider different parameterizations of the q-function and the policy and update them alternately. Finally, we examine two financial applications, namely an optimal portfolio liquidation problem and a non-LQ control problem. It is interesting to see therein that the optimal policies under the Tsallis entropy regularization can be characterized explicitly, and they are distributions concentrated on compact supports. The satisfactory performance of our q-learning algorithm is illustrated in both examples. | [
"['Lijun Bo' 'Yijie Huang' 'Xiang Yu' 'Tingting Zhang']"
] |
null | null | 2407.03897 | null | null | http://arxiv.org/pdf/2407.03897v1 | 2024-07-04T12:43:53Z | 2024-07-04T12:43:53Z | gFlora: a topology-aware method to discover functional co-response
groups in soil microbial communities | We aim to learn the functional co-response group: a group of taxa whose co-response effect (the representative characteristic of the group) associates well statistically with a functional variable. Different from the state-of-the-art method, we model the soil microbial community as an ecological co-occurrence network with the taxa as nodes (weighted by their abundance) and their relationships (a combination from both spatial and functional ecological aspects) as edges (weighted by the strength of the relationships). Then, we design a method called gFlora which notably uses graph convolution over this co-occurrence network to get the co-response effect of the group, such that the network topology is also considered in the discovery process. We evaluate gFlora on two real-world soil microbiome datasets (bacteria and nematodes) and compare it with the state-of-the-art method. gFlora outperforms it on all evaluation metrics and discovers new functional evidence for taxa that have so far been under-studied. We show that the graph convolution step is crucial to taxa with low abundance, and the discovered bacteria of different genera are distributed in the co-occurrence network but still tightly connected among themselves, demonstrating that topologically they fill different but collaborative functional roles in the ecological community. | [
"['Nan Chen' 'Merlijn Schram' 'Doina Bucur']"
] |
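One symmetric-normalized graph convolution over the weighted co-occurrence network captures the key mechanism, propagating support from neighbours to low-abundance taxa; the toy graph below and the downstream correlation with the functional variable are assumptions:

```python
import numpy as np

def co_response(adjacency: np.ndarray, abundance: np.ndarray) -> np.ndarray:
    """One symmetric-normalized graph convolution: each taxon's signal is
    smoothed with its neighbours', so low-abundance taxa inherit support
    from strongly connected partners."""
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return norm @ abundance

# Toy community: 5 taxa, weighted co-occurrence edges, abundances.
A = np.array([[0, .8, 0, 0, 0],
              [.8, 0, .5, 0, 0],
              [0, .5, 0, .2, 0],
              [0, 0, .2, 0, .9],
              [0, 0, 0, .9, 0]])
X = np.array([[3.0], [0.1], [1.0], [0.5], [2.0]])    # abundance per taxon
group_signal = co_response(A, X)
print(group_signal.ravel().round(3))
```

The co-response effect of a candidate group could then be an aggregate of `group_signal` over the group's members, tested for statistical association with the functional variable.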
null | null | 2407.03901 | null | null | http://arxiv.org/pdf/2407.03901v1 | 2024-07-04T12:48:36Z | 2024-07-04T12:48:36Z | DiCTI: Diffusion-based Clothing Designer via Text-guided Input | Recent developments in deep generative models have opened up a wide range of opportunities for image synthesis, leading to significant changes in various creative fields, including the fashion industry. While numerous methods have been proposed to benefit buyers, particularly in virtual try-on applications, there has been relatively less focus on facilitating fast prototyping for designers and customers seeking to order new designs. To address this gap, we introduce DiCTI (Diffusion-based Clothing Designer via Text-guided Input), a straightforward yet highly effective approach that allows designers to quickly visualize fashion-related ideas using text inputs only. Given an image of a person and a description of the desired garments as input, DiCTI automatically generates multiple high-resolution, photorealistic images that capture the expressed semantics. By leveraging a powerful diffusion-based inpainting model conditioned on text inputs, DiCTI is able to synthesize convincing, high-quality images with varied clothing designs that viably follow the provided text descriptions, while being able to process very diverse and challenging inputs, captured in completely unconstrained settings. We evaluate DiCTI in comprehensive experiments on two different datasets (VITON-HD and Fashionpedia) and in comparison to the state-of-the-art (SoTA). The results of our experiments show that DiCTI convincingly outperforms the SoTA competitor in generating higher quality images with more elaborate garments and superior text prompt adherence, both according to standard quantitative evaluation measures and human ratings, generated as part of a user study. | [
"['Ajda Lampe' 'Julija Stopar' 'Deepak Kumar Jain' 'Shinichiro Omachi'\n 'Peter Peer' 'Vitomir Štruc']"
] |
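In spirit, the pipeline is text-conditioned inpainting over a garment region; a hedged sketch with an off-the-shelf diffusion inpainting checkpoint follows. The checkpoint, input files, and prompt are assumptions, and DiCTI's own pipeline and conditioning details differ:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Mask the garment region of a person image and let a diffusion inpainting
# model redraw it from a text description (assumed inputs and checkpoint).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

person = Image.open("person.png").convert("RGB").resize((512, 512))
garment_mask = Image.open("garment_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a tailored emerald-green wool coat with oversized lapels",
    image=person,
    mask_image=garment_mask,
).images[0]
result.save("design_preview.png")
```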
null | null | 2407.03920 | null | null | http://arxiv.org/pdf/2407.03920v1 | 2024-07-04T13:32:17Z | 2024-07-04T13:32:17Z | Support Vector Based Anomaly Detection in Federated Learning | Anomaly detection plays a crucial role in various domains, from cybersecurity to industrial systems. However, traditional centralized approaches often encounter challenges related to data privacy. In this context, Federated Learning emerges as a promising solution. This work introduces two innovative algorithms--Ensemble SVDD and Support Vector Election--that leverage Support Vector Machines for anomaly detection in a federated setting. In comparison with the neural networks typically used within Federated Learning, these new algorithms emerge as potential alternatives, as they can operate effectively with small datasets and incur lower computational costs. The novel algorithms are tested in various distributed system configurations, yielding promising initial results that pave the way for further investigation. | [
"['Massimo Frasson' 'Dario Malchiodi']"
] |
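A sketch of the ensemble idea with one-class SVMs as SVDD stand-ins; the majority-vote quorum rule is an assumption, and the federated communication protocol is omitted:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Each client trains a one-class SVM (SVDD-like) on its local normal data.
clients = [rng.normal(0, 1, size=(200, 4)) for _ in range(5)]
local_models = [OneClassSVM(nu=0.05, gamma="scale").fit(x) for x in clients]

def ensemble_predict(models, x, quorum=0.5):
    """Flag a point as anomalous when at least `quorum` of the client
    models consider it an outlier (illustrative aggregation rule)."""
    votes = np.stack([m.predict(x) == -1 for m in models])  # True = outlier
    return votes.mean(axis=0) >= quorum

normal = rng.normal(0, 1, size=(5, 4))
anomalous = rng.normal(6, 1, size=(5, 4))
print(ensemble_predict(local_models, normal))     # mostly False
print(ensemble_predict(local_models, anomalous))  # mostly True
```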
null | null | 2407.03921 | null | null | http://arxiv.org/pdf/2407.03921v1 | 2024-07-04T13:34:50Z | 2024-07-04T13:34:50Z | Concept Bottleneck Models Without Predefined Concepts | There has been considerable recent interest in interpretable concept-based models such as Concept Bottleneck Models (CBMs), which first predict human-interpretable concepts and then map them to output classes. To reduce reliance on human-annotated concepts, recent works have converted pretrained black-box models into interpretable CBMs post-hoc. However, these approaches predefine a set of concepts, assuming which concepts a black-box model encodes in its representations. In this work, we eliminate this assumption by leveraging unsupervised concept discovery to automatically extract concepts without human annotations or a predefined set of concepts. We further introduce an input-dependent concept selection mechanism that ensures only a small subset of concepts is used across all classes. We show that our approach improves downstream performance and narrows the performance gap to black-box models, while using significantly fewer concepts in the classification. Finally, we demonstrate how large vision-language models can intervene on the final model weights to correct model errors. | [
"['Simon Schrodi' 'Julian Schur' 'Max Argus' 'Thomas Brox']"
] |
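A compact sketch of the recipe: discover concepts without labels (here via NMF, an assumption) and fit a sparse linear head so only a few concepts are used; the paper's discovery method and input-dependent selection mechanism are more elaborate:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits

# Unsupervised concept discovery sketch: factorize backbone features into
# non-negative "concept" activations, then fit a sparse linear head.
X, y = load_digits(return_X_y=True)          # stand-in for backbone features
concepts = NMF(n_components=20, max_iter=500, random_state=0)
C = concepts.fit_transform(X)                # (n_samples, n_concepts)

head = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=2000)
head.fit(C, y)                               # L1 keeps only a few concepts
used = (np.abs(head.coef_) > 1e-6).any(axis=0).sum()
print(f"accuracy: {head.score(C, y):.3f}, concepts used: {used}/20")
```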