Dataset schema: arxiv_id (string, length 10), published (string, length 20), titles (string, 9-243 chars), authors (list, 1-389 items), abstract (string, 96-3.09k chars), categories (list, 1-10 items), selected (bool, 2 classes)
2305.17221
2023-05-26T19:25:49Z
Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms
[ "Tianshu Zhang", "Changchang Liu", "Wei-Han Lee", "Yu Su", "Huan Sun" ]
This paper studies a new task of federated learning (FL) for semantic parsing, where multiple clients collaboratively train one global model without sharing their semantic parsing data. By leveraging data from multiple clients, the FL paradigm can be especially beneficial for clients that have little training data to develop a data-hungry neural semantic parser on their own. We propose an evaluation setup to study this task, where we re-purpose widely-used single-domain text-to-SQL datasets as clients to form a realistic heterogeneous FL setting and collaboratively train a global model. As standard FL algorithms suffer from the high client heterogeneity in our realistic setup, we further propose a novel LOss Reduction Adjusted Re-weighting (Lorar) mechanism to mitigate the performance degradation, which adjusts each client's contribution to the global model update based on its training loss reduction during each round. Our intuition is that the larger the loss reduction, the further away the current global model is from the client's local optimum, and the larger the weight the client should receive. By applying Lorar to three widely adopted FL algorithms (FedAvg, FedOPT and FedProx), we observe that their performance can be improved substantially on average (4%-20% absolute gain under MacroAvg) and that clients with smaller datasets enjoy larger performance gains. In addition, the global model converges faster for almost all the clients.
[ "cs.CL", "cs.AI", "cs.DB", "cs.LG" ]
false
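The Lorar mechanism described in the abstract above weights each client's contribution to the global update by its per-round training-loss reduction. The exact formulation is not reproduced here, so the following is a minimal numpy sketch under assumed definitions (hypothetical client update deltas and a simple normalization of loss reductions); it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def lorar_aggregate(global_params, client_deltas, loss_before, loss_after, eps=1e-8):
    """Aggregate client updates, weighting each client by its training-loss
    reduction in the current round (larger reduction -> larger weight).

    global_params : 1-D array of current global model parameters
    client_deltas : list of 1-D arrays, each client's local update (local - global)
    loss_before / loss_after : per-client training loss at round start / end
    """
    reductions = np.maximum(np.array(loss_before) - np.array(loss_after), 0.0)
    weights = reductions / (reductions.sum() + eps)      # normalize to sum to 1
    update = sum(w * d for w, d in zip(weights, client_deltas))
    return global_params + update

# Toy usage: three clients, five parameters; the first client reduced its loss most.
rng = np.random.default_rng(0)
g = np.zeros(5)
deltas = [rng.normal(size=5) * 0.1 for _ in range(3)]
new_g = lorar_aggregate(g, deltas, loss_before=[2.0, 1.5, 1.2], loss_after=[1.2, 1.4, 1.1])
```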
2305.17306
2023-05-26T23:46:42Z
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance
[ "Yao Fu", "Litu Ou", "Mingyu Chen", "Yuhao Wan", "Hao Peng", "Tushar Khot" ]
As large language models (LLMs) are continuously being developed, their evaluation becomes increasingly important yet challenging. This work proposes Chain-of-Thought Hub, an open-source evaluation suite for the multi-step reasoning capabilities of large language models. We are interested in this setting for two reasons: (1) from the behavior of the GPT and PaLM model families, we observe that complex reasoning is likely to be a key differentiator between weaker and stronger LLMs; (2) we envisage large language models becoming the next-generation computational platform and fostering an ecosystem of new LLM-based applications, which naturally requires the foundation models to perform complex tasks that often involve the composition of linguistic and logical operations. Our approach is to compile a suite of challenging reasoning benchmarks to track the progress of LLMs. Our current results show that: (1) model scale clearly correlates with reasoning capabilities; (2) as of May 2023, Claude-v1.3 and PaLM-2 are the only two models comparable with GPT-4, while open-source models still lag behind; (3) LLaMA-65B performs closely to code-davinci-002, indicating that with successful further development such as reinforcement learning from human feedback (RLHF), it has great potential to come close to GPT-3.5-Turbo. Our results also suggest that for open-source efforts to catch up, the community may focus more on building better base models and exploring RLHF.
[ "cs.CL", "cs.AI", "cs.LG" ]
true
2305.18350
2023-05-26T11:51:31Z
Towards Open-World Product Attribute Mining: A Lightly-Supervised Approach
[ "Liyan Xu", "Chenwei Zhang", "Xian Li", "Jingbo Shang", "Jinho D. Choi" ]
We present a new task setting for attribute mining on e-commerce products, serving as a practical solution to extract open-world attributes without extensive human intervention. Our supervision comes from a high-quality seed attribute set bootstrapped from existing resources, and we aim both to expand the attribute vocabulary of existing seed types and to discover any new attribute types automatically. A new dataset is created to support our setting, and our approach, Amacer, is proposed specifically to tackle the limited supervision. In particular, given that no direct supervision is available for unseen new attributes, our novel formulation exploits self-supervised heuristics and unsupervised latent attributes, which provide implicit semantic signals as additional supervision by leveraging product context. Experiments suggest that our approach surpasses various baselines by 12 F1, expanding attributes of existing types significantly by up to 12 times, and discovering values from 39% new types.
[ "cs.LG", "cs.CL", "cs.IR" ]
false
2305.18357
2023-05-26T18:05:57Z
DeepSI: Interactive Deep Learning for Semantic Interaction
[ "Yali Bian", "Chris North" ]
In this paper, we design novel interactive deep learning methods to improve semantic interactions in visual analytics applications. The ability of semantic interaction to infer analysts' precise intents during sensemaking is dependent on the quality of the underlying data representation. We propose the $\text{DeepSI}_{\text{finetune}}$ framework that integrates deep learning into the human-in-the-loop interactive sensemaking pipeline, with two important properties. First, deep learning extracts meaningful representations from raw data, which improves semantic interaction inference. Second, semantic interactions are exploited to fine-tune the deep learning representations, which then further improves semantic interaction inference. This feedback loop between human interaction and deep learning enables efficient learning of user- and task-specific representations. To evaluate the advantage of embedding the deep learning within the semantic interaction loop, we compare $\text{DeepSI}_{\text{finetune}}$ against a state-of-the-art but more basic use of deep learning as only a feature extractor pre-processed outside of the interactive loop. Results of two complementary studies, a human-centered qualitative case study and an algorithm-centered simulation-based quantitative experiment, show that $\text{DeepSI}_{\text{finetune}}$ more accurately captures users' complex mental models with fewer interactions.
[ "cs.LG", "cs.AI", "cs.CL", "cs.HC" ]
false
2305.18620
2023-05-26T00:53:18Z
CONA: A novel CONtext-Aware instruction paradigm for communication using large language model
[ "Nan Zhou", "Xinghui Tao", "Xi Chen" ]
We introduce CONA, a novel context-aware instruction paradigm for effective knowledge dissemination using generative pre-trained transformer (GPT) models. CONA is a flexible framework designed to leverage the capabilities of Large Language Models (LLMs) and incorporate the DIKW (Data, Information, Knowledge, Wisdom) hierarchy to automatically instruct and optimise presentation content, anticipate potential audience inquiries, and provide context-aware answers that adapt to the knowledge level of the audience group. The unique aspect of the CONA paradigm lies in its combination of an independent advisory mechanism and a recursive feedback loop rooted in the DIKW hierarchy. This synergy significantly enhances context-aware content, ensuring it is accessible and easily comprehended by the audience. This paradigm is an early attempt to explore new methods for knowledge dissemination and communication in the LLM era, offering effective support for everyday knowledge sharing scenarios. We conduct experiments on a range of audience roles, along with materials from various disciplines, using GPT-4. Both quantitative and qualitative results demonstrate that the proposed CONA paradigm achieves remarkable performance compared to outputs guided by conventional prompt engineering.
[ "cs.CL", "cs.AI", "cs.HC" ]
false
2306.01761
2023-05-26T09:27:43Z
Distinguishing Human Generated Text From ChatGPT Generated Text Using Machine Learning
[ "Niful Islam", "Debopom Sutradhar", "Humaira Noor", "Jarin Tasnim Raya", "Monowara Tabassum Maisha", "Dewan Md Farid" ]
ChatGPT is a conversational artificial intelligence belonging to the generative pre-trained transformer (GPT) family of large language models. This text-generation model was fine-tuned with both supervised learning and reinforcement learning so that it can produce text documents that appear to be written by humans. Although this generative model offers numerous advantages, it also raises some reasonable concerns. This paper presents a machine learning-based solution that can distinguish ChatGPT-generated text from human-written text, along with a comparative analysis of 11 machine learning and deep learning algorithms for the classification task. We tested the proposed model on a Kaggle dataset consisting of 10,000 texts, of which 5,204 were written by humans and collected from news and social media. On the corpus generated by GPT-3.5, the proposed algorithm achieves an accuracy of 77%.
[ "cs.CL", "cs.AI", "cs.LG" ]
false
2305.16554
2023-05-26T00:43:02Z
Emergent Agentic Transformer from Chain of Hindsight Experience
[ "Hao Liu", "Pieter Abbeel" ]
Large transformer models powered by diverse data and model scale have dominated natural language modeling and computer vision and pushed the frontier of multiple AI areas. In reinforcement learning (RL), despite many efforts into transformer-based policies, a key limitation is that current transformer-based policies cannot learn by directly combining information from multiple sub-optimal trials. In this work, we address this issue using the recently proposed chain of hindsight to relabel experience, where we train a transformer on a sequence of trajectories sorted in ascending order of their total rewards. Our method relabels the target return of each trajectory to the maximum total reward among the trajectories in the sequence, and trains an autoregressive model to predict actions conditioned on past states, actions, rewards, target returns, and task completion tokens. The resulting model, Agentic Transformer (AT), can learn to improve upon itself both at training and test time. As we show on D4RL and ExoRL benchmarks, to the best of our knowledge, this is the first time that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches, even from sub-optimal data. Our Agentic Transformer also shows a promising scaling trend: bigger models consistently improve results.
[ "cs.LG" ]
false
2305.16593
2023-05-26T02:51:39Z
A Multi-Resolution Physics-Informed Recurrent Neural Network: Formulation and Application to Musculoskeletal Systems
[ "Karan Taneja", "Xiaolong He", "Qizhi He", "J. S. Chen" ]
This work presents a multi-resolution physics-informed recurrent neural network (MR PI-RNN) for simultaneous prediction of musculoskeletal (MSK) motion and parameter identification of the MSK systems. The MSK application was selected as the model problem due to its challenging nature in mapping the high-frequency surface electromyography (sEMG) signals to the low-frequency body joint motion controlled by the MSK and muscle contraction dynamics. The proposed method utilizes the fast wavelet transform to decompose the mixed-frequency input sEMG and output joint motion signals into nested multi-resolution signals. The prediction model is first trained on coarser-scale input-output signals using a gated recurrent unit (GRU), and the trained parameters are then transferred to the next level of training with finer-scale signals. This training process is repeated recursively in a transfer-learning fashion until full-scale training (i.e., with unfiltered signals) is achieved, while satisfying the underlying dynamic equilibrium. Numerical examples on recorded subject data demonstrate the effectiveness of the proposed framework in generating a physics-informed forward-dynamics surrogate, which yields higher accuracy in motion predictions of elbow flexion-extension of an MSK system compared to the case with single-scale training. The framework is also capable of identifying muscle parameters that are physiologically consistent with the subject's kinematics data.
[ "cs.LG" ]
false
2305.16639
2023-05-26T05:28:10Z
Universal Approximation and the Topological Neural Network
[ "Michael A. Kouritzin", "Daniel Richard" ]
A topological neural network (TNN), which takes data from a Tychonoff topological space instead of the usual finite-dimensional space, is introduced. As a consequence, a distributional neural network (DNN) that takes Borel measures as data is also introduced. Combined, these new neural networks facilitate tasks such as recognizing long-range dependence, heavy tails, and other properties in stochastic process paths, or acting on belief states produced by particle filtering or hidden Markov model algorithms. The veracity of the TNN and DNN is then established herein by a strong universal approximation theorem for Tychonoff spaces and its corollary for spaces of measures. These theorems show that neural networks can arbitrarily well approximate uniformly continuous functions (with respect to the sup metric) associated with a unique uniformity. We also provide some discussion showing that neural networks on positive finite measures are a generalization of the recent deep learning notion of deep sets.
[ "cs.LG", "41-02" ]
false
2305.16683
2023-05-26T07:05:08Z
Future-conditioned Unsupervised Pretraining for Decision Transformer
[ "Zhihui Xie", "Zichuan Lin", "Deheng Ye", "Qiang Fu", "Wei Yang", "Shuai Li" ]
Recent research in offline reinforcement learning (RL) has demonstrated that return-conditioned supervised learning is a powerful paradigm for decision-making problems. While promising, return conditioning is limited to training data labeled with rewards and therefore faces challenges in learning from unsupervised data. In this work, we aim to utilize generalized future conditioning to enable efficient unsupervised pretraining from reward-free and sub-optimal offline data. We propose Pretrained Decision Transformer (PDT), a conceptually simple approach for unsupervised RL pretraining. PDT leverages future trajectory information as a privileged context to predict actions during training. The ability to make decisions based on both present and future factors enhances PDT's capability for generalization. Moreover, this feature can be easily incorporated into a return-conditioned framework for online finetuning, by assigning return values to possible futures and sampling future embeddings based on their respective values. Empirically, PDT outperforms or performs on par with its supervised pretraining counterpart, especially when dealing with sub-optimal data. Further analysis reveals that PDT can extract diverse behaviors from offline data and controllably sample high-return behaviors via online finetuning. Code is available online.
[ "cs.LG" ]
false
2305.16691
2023-05-26T07:24:24Z
Dual Bayesian ResNet: A Deep Learning Approach to Heart Murmur Detection
[ "Benjamin Walker", "Felix Krones", "Ivan Kiskin", "Guy Parsons", "Terry Lyons", "Adam Mahdi" ]
This study presents our team PathToMyHeart's contribution to the George B. Moody PhysioNet Challenge 2022. Two models are implemented. The first model is a Dual Bayesian ResNet (DBRes), where each patient's recording is segmented into overlapping log mel spectrograms. These undergo two binary classifications: present versus unknown or absent, and unknown versus present or absent. The classifications are aggregated to give a patient's final classification. The second model is the output of DBRes integrated with demographic data and signal features using XGBoost. DBRes achieved our best weighted accuracy of $0.771$ on the hidden test set for murmur classification, which placed us fourth for the murmur task. (On the clinical outcome task, which we neglected, we scored 17th with costs of $12637$.) On our held-out subset of the training set, integrating the demographic data and signal features improved DBRes's accuracy from $0.762$ to $0.820$. However, this decreased DBRes's weighted accuracy from $0.780$ to $0.749$. Our results demonstrate that log mel spectrograms are an effective representation of heart sound recordings, Bayesian networks provide strong supervised classification performance, and treating the ternary classification as two binary classifications increases performance on the weighted accuracy.
[ "cs.LG" ]
false
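The abstract above reduces the ternary murmur label (present / unknown / absent) to two binary classifiers whose per-segment outputs are aggregated into one patient-level decision. The aggregation rule is not spelled out in the abstract, so the sketch below uses one plausible rule (averaging segment probabilities and applying a hypothetical threshold); it illustrates the idea only and is not the challenge submission.

```python
import numpy as np

def aggregate_patient(p_present, p_unknown, threshold=0.5):
    """Combine two binary classifiers into a ternary label for one patient.

    p_present : per-segment probabilities from the "present vs. (unknown or absent)" model
    p_unknown : per-segment probabilities from the "unknown vs. (present or absent)" model
    """
    mean_present = float(np.mean(p_present))   # average over the patient's spectrogram segments
    mean_unknown = float(np.mean(p_unknown))
    if mean_present >= threshold:
        return "present"
    if mean_unknown >= threshold:
        return "unknown"
    return "absent"

# Toy usage: probabilities for four spectrogram segments of one recording.
print(aggregate_patient([0.2, 0.4, 0.3, 0.1], [0.7, 0.6, 0.8, 0.5]))  # -> "unknown"
```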
2305.16777
2023-05-26T09:39:36Z
Unleashing the Potential of Unsupervised Deep Outlier Detection through Automated Training Stopping
[ "Yihong Huang", "Yuang Zhang", "Liping Wang", "Xuemin Lin" ]
Outlier detection (OD) has received continuous research interest due to its wide applications. With the development of deep learning, an increasing number of deep OD algorithms have been proposed. Despite the availability of numerous deep OD models, existing research has reported that the performance of deep models is extremely sensitive to the configuration of hyperparameters (HPs). However, the selection of HPs for deep OD models remains a notoriously difficult task due to the lack of any labels and the long list of HPs. In our study, we shed light on an essential factor, training time, that can introduce significant variation in the performance of deep models. Even when performance is stable across other HPs, training time itself can cause a serious HP sensitivity issue. Motivated by this finding, we are dedicated to formulating a strategy to terminate model training at the optimal iteration. Specifically, we propose a novel metric called loss entropy to internally evaluate the model performance during training, and we devise an automated training stopping algorithm around it. To our knowledge, our approach is the first to enable reliable identification of the optimal training iteration during training without requiring any labels. Our experiments on tabular and image datasets show that our approach can be applied to diverse deep models and datasets. It not only enhances the robustness of deep models to their HPs, but also improves performance and substantially reduces training time compared to naive training.
[ "cs.LG" ]
false
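The abstract above proposes a "loss entropy" metric for deciding when to stop training an unsupervised deep outlier detector. The paper's exact definition is not reproduced in the abstract; the sketch below uses one plausible reading (entropy of the normalized per-sample loss distribution, with training stopped when it fails to improve over a patience window). All names and the stopping criterion are assumptions, not the authors' algorithm.

```python
import numpy as np

def loss_entropy(per_sample_losses, eps=1e-12):
    """Entropy of the per-sample losses normalized into a probability distribution."""
    p = np.asarray(per_sample_losses, dtype=float)
    p = p / (p.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def train_with_entropy_stopping(train_one_epoch, compute_losses, max_epochs=100, patience=5):
    """Stop training when loss entropy has not improved for `patience` epochs.

    train_one_epoch : callable running one training pass of the OD model
    compute_losses  : callable returning the current per-sample losses
    """
    best, best_epoch, history = np.inf, 0, []
    for epoch in range(max_epochs):
        train_one_epoch()
        h = loss_entropy(compute_losses())
        history.append(h)
        if h < best:
            best, best_epoch = h, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, history
```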
2305.16971
2023-05-26T14:26:36Z
Theoretical and Practical Perspectives on what Influence Functions Do
[ "Andrea Schioppa", "Katja Filippova", "Ivan Titov", "Polina Zablotskaia" ]
Influence functions (IF) have been seen as a technique for explaining model predictions through the lens of the training data. Their utility is assumed to be in identifying training examples "responsible" for a prediction so that, for example, correcting a prediction is possible by intervening on those examples (removing or editing them) and retraining the model. However, recent empirical studies have shown that the existing methods of estimating IF predict the leave-one-out-and-retrain effect poorly. In order to understand the mismatch between the theoretical promise and the practical results, we analyse five assumptions made by IF methods which are problematic for modern-scale deep neural networks and which concern convexity, numeric stability, training trajectory and parameter divergence. This allows us to clarify what can be expected theoretically from IF. We show that while most assumptions can be addressed successfully, the parameter divergence poses a clear limitation on the predictive power of IF: influence fades over training time even with deterministic training. We illustrate this theoretical result with BERT and ResNet models. Another conclusion from the theoretical analysis is that IF are still useful for model debugging and correcting even though some of the assumptions made in prior work do not hold: using natural language processing and computer vision tasks, we verify that mis-predictions can be successfully corrected by taking only a few fine-tuning steps on influential examples.
[ "cs.LG" ]
false
2305.17094
2023-05-26T17:06:15Z
Benchmarking state-of-the-art gradient boosting algorithms for classification
[ "Piotr Florek", "Adam Zagdański" ]
This work explores the use of gradient boosting in the context of classification. Four popular implementations, including the original GBM algorithm and selected state-of-the-art gradient boosting frameworks (i.e. XGBoost, LightGBM and CatBoost), have been thoroughly compared on several publicly available real-world datasets of sufficient diversity. In the study, special emphasis was placed on hyperparameter optimization, specifically comparing two tuning strategies, i.e. randomized search and Bayesian optimization using the Tree-structured Parzen Estimator. The performance of the considered methods was investigated in terms of common classification accuracy metrics as well as runtime and tuning time. Additionally, the obtained results have been validated using appropriate statistical testing. An attempt was made to indicate a gradient boosting variant showing the right balance between effectiveness, reliability and ease of use.
[ "cs.LG", "62H30" ]
false
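The benchmark above tunes XGBoost, LightGBM, and CatBoost with randomized search and TPE-based Bayesian optimization. As a hedged illustration of the randomized-search half of that setup, the sketch below tunes an XGBoost classifier with scikit-learn's RandomizedSearchCV; the search space and dataset are placeholders, not those used in the paper.

```python
from scipy.stats import randint, uniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)   # placeholder dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Randomized search over a small, illustrative hyperparameter space.
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions={
        "n_estimators": randint(100, 600),
        "max_depth": randint(2, 8),
        "learning_rate": uniform(0.01, 0.3),
        "subsample": uniform(0.6, 0.4),
    },
    n_iter=25, cv=5, scoring="accuracy", random_state=42,
)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```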
2305.17109
2023-05-26T17:18:14Z
Reinforcement Learning with Simple Sequence Priors
[ "Tankred Saanum", "Noémi Éltető", "Peter Dayan", "Marcel Binz", "Eric Schulz" ]
Everything else being equal, simpler models should be preferred over more complex ones. In reinforcement learning (RL), simplicity is typically quantified on an action-by-action basis -- but this timescale ignores temporal regularities, like repetitions, often present in sequential strategies. We therefore propose an RL algorithm that learns to solve tasks with sequences of actions that are compressible. We explore two possible sources of simple action sequences: Sequences that can be learned by autoregressive models, and sequences that are compressible with off-the-shelf data compression algorithms. Distilling these preferences into sequence priors, we derive a novel information-theoretic objective that incentivizes agents to learn policies that maximize rewards while conforming to these priors. We show that the resulting RL algorithm leads to faster learning, and attains higher returns than state-of-the-art model-free approaches in a series of continuous control tasks from the DeepMind Control Suite. These priors also produce a powerful information-regularized agent that is robust to noisy observations and can perform open-loop control.
[ "cs.LG" ]
false
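The abstract above distills a preference for compressible action sequences into the RL objective. The exact information-theoretic objective is not given in the abstract; the sketch below shows only the second prior type mentioned, an off-the-shelf-compressor prior, as an intrinsic bonus proportional to how much an action sequence shrinks under zlib. The coefficient, encoding, and interface are assumptions for illustration.

```python
import zlib
import numpy as np

def compressibility_bonus(actions, beta=0.1):
    """Intrinsic reward bonus favoring compressible (simple) action sequences.

    actions : 1-D array of discretized actions taken so far in the episode
    beta    : trade-off coefficient between task reward and the simplicity prior
    """
    raw = np.asarray(actions, dtype=np.uint8).tobytes()
    ratio = len(zlib.compress(raw)) / max(len(raw), 1)   # < 1 means compressible
    return beta * (1.0 - ratio)                          # larger bonus for simpler sequences

# Toy usage: a repetitive sequence earns a larger bonus than a random one.
rng = np.random.default_rng(0)
print(compressibility_bonus([0, 1] * 50))                   # repetitive -> high bonus
print(compressibility_bonus(rng.integers(0, 8, size=100)))  # random -> low bonus
```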
2305.17244
2023-05-26T20:17:18Z
Mitigating Catastrophic Forgetting in Long Short-Term Memory Networks
[ "Ketaki Joshi", "Raghavendra Pradyumna Pothukuchi", "Andre Wibisono", "Abhishek Bhattacharjee" ]
Continual learning on sequential data is critical for many machine learning (ML) deployments. Unfortunately, LSTM networks, which are commonly used to learn on sequential data, suffer from catastrophic forgetting and are limited in their ability to learn multiple tasks continually. We discover that catastrophic forgetting in LSTM networks can be overcome in two novel and readily-implementable ways -- separating the LSTM memory either for each task or for each target label. Our approach eschews the need for explicit regularization, hypernetworks, and other complex methods. We quantify the benefits of our approach on recently-proposed LSTM networks for computer memory access prefetching, an important sequential learning problem in ML-based computer system optimization. Compared to state-of-the-art weight regularization methods to mitigate catastrophic forgetting, our approach is simple, effective, and enables faster learning. We also show that our proposal enables the use of small, non-regularized LSTM networks for complex natural language processing in the offline learning scenario, which was previously considered difficult.
[ "cs.LG" ]
false
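The abstract above avoids catastrophic forgetting by separating the LSTM memory per task (or per target label). As one hedged PyTorch reading of the per-task variant, the toy module below keeps one LSTM cell per task id behind a shared output head; the interface is hypothetical and far simpler than the prefetching models discussed in the paper.

```python
import torch
import torch.nn as nn

class TaskSeparatedLSTM(nn.Module):
    """One LSTM cell (and hence one memory) per task id, with a shared output head."""

    def __init__(self, num_tasks, input_size, hidden_size, num_classes):
        super().__init__()
        self.cells = nn.ModuleList(
            [nn.LSTMCell(input_size, hidden_size) for _ in range(num_tasks)]
        )
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x_seq, task_id):
        # x_seq: (seq_len, batch, input_size); only the task's own cell is used/updated.
        cell = self.cells[task_id]
        h = x_seq.new_zeros(x_seq.size(1), cell.hidden_size)
        c = torch.zeros_like(h)
        for x_t in x_seq:
            h, c = cell(x_t, (h, c))
        return self.head(h)

# Toy usage: two tasks, sequences of 5 steps, batch of 4, 8 features, 3 classes.
model = TaskSeparatedLSTM(num_tasks=2, input_size=8, hidden_size=16, num_classes=3)
out = model(torch.randn(5, 4, 8), task_id=0)   # logits for task 0, shape (4, 3)
```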
2307.11684
2023-05-26T02:00:44Z
Minibatching Offers Improved Generalization Performance for Second Order Optimizers
[ "Eric Silk", "Swarnita Chakraborty", "Nairanjana Dasgupta", "Anand D. Sarwate", "Andrew Lumsdaine", "Tony Chiang" ]
Training deep neural networks (DNNs) used in modern machine learning is computationally expensive. Machine learning scientists, therefore, rely on stochastic first-order methods for training, coupled with significant hand-tuning, to obtain good performance. To better understand performance variability of different stochastic algorithms, including second-order methods, we conduct an empirical study that treats performance as a response variable across multiple training sessions of the same model. Using 2-factor Analysis of Variance (ANOVA) with interactions, we show that batch size used during training has a statistically significant effect on the peak accuracy of the methods, and that full batch largely performed the worst. In addition, we found that second-order optimizers (SOOs) generally exhibited significantly lower variance at specific batch sizes, suggesting they may require less hyperparameter tuning, leading to a reduced overall time to solution for model training.
[ "cs.LG" ]
false
2305.16614
2023-05-26T04:20:02Z
Physical Deep Reinforcement Learning: Safety and Unknown Unknowns
[ "Hongpeng Cao", "Yanbing Mao", "Lui Sha", "Marco Caccamo" ]
In this paper, we propose Phy-DRL: a physics-model-regulated deep reinforcement learning framework for safety-critical autonomous systems. The Phy-DRL is unique in three innovations: i) proactive unknown-unknowns training, ii) conjunctive residual control (i.e., integration of data-driven control and physics-model-based control) with a safety- and stability-sensitive reward, and iii) physics-model-based neural network editing, including link editing and activation editing. Thanks to these concurrent designs, the Phy-DRL is able to 1) tolerate unknown-unknowns disturbances, 2) guarantee mathematically provable safety and stability, and 3) strictly comply with physical knowledge pertaining to the Bellman equation and the reward. The effectiveness of the Phy-DRL is finally validated on an inverted pendulum and a quadruped robot. The experimental results demonstrate that, compared with purely data-driven DRL, Phy-DRL features remarkably fewer learning parameters, accelerated training and enlarged reward, while offering enhanced model robustness and safety assurance.
[ "cs.AI", "cs.LG" ]
false
2305.16703
2023-05-26T07:44:19Z
Sources of Uncertainty in Machine Learning -- A Statisticians' View
[ "Cornelia Gruber", "Patrick Oliver Schenk", "Malte Schierholz", "Frauke Kreuter", "Göran Kauermann" ]
Machine Learning and Deep Learning have achieved an impressive standard today, enabling us to answer questions that were inconceivable a few years ago. Besides these successes, it becomes clear that, beyond pure prediction, which is the primary strength of most supervised machine learning algorithms, the quantification of uncertainty is relevant and necessary as well. While first concepts and ideas in this direction have emerged in recent years, this paper adopts a conceptual perspective and examines possible sources of uncertainty. By adopting the viewpoint of a statistician, we discuss the concepts of aleatoric and epistemic uncertainty, which are more commonly associated with machine learning. The paper aims to formalize the two types of uncertainty and demonstrates that sources of uncertainty are miscellaneous and cannot always be decomposed into aleatoric and epistemic. Drawing parallels between statistical concepts and uncertainty in machine learning, we also demonstrate the role of data and their influence on uncertainty.
[ "stat.ML", "cs.LG" ]
false
2305.16704
2023-05-26T07:47:21Z
A Closer Look at In-Context Learning under Distribution Shifts
[ "Kartik Ahuja", "David Lopez-Paz" ]
In-context learning, a capability that enables a model to learn from input examples on the fly without necessitating weight updates, is a defining characteristic of large language models. In this work, we follow the setting proposed in (Garg et al., 2022) to better understand the generality and limitations of in-context learning from the lens of the simple yet fundamental task of linear regression. The key question we aim to address is: Are transformers more adept than some natural and simpler architectures at performing in-context learning under varying distribution shifts? To compare transformers, we propose to use a simple architecture based on set-based Multi-Layer Perceptrons (MLPs). We find that both transformers and set-based MLPs exhibit in-context learning under in-distribution evaluations, but transformers more closely emulate the performance of ordinary least squares (OLS). Transformers also display better resilience to mild distribution shifts, where set-based MLPs falter. However, under severe distribution shifts, both models' in-context learning abilities diminish.
[ "cs.LG", "stat.ML" ]
true
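The abstract above follows the Garg et al. (2022) setting: a sequence model sees (x, y) pairs drawn from a random linear function in its context and must predict y for a new query x, with ordinary least squares (OLS) as the natural reference. Below is a minimal numpy sketch of how such an in-context prompt and the OLS baseline can be generated; model training itself is omitted, and all names, dimensions, and noise levels are placeholders.

```python
import numpy as np

def sample_icl_prompt(d=8, n_context=16, noise=0.0, rng=None):
    """One in-context linear-regression task: context pairs plus a query point."""
    rng = rng or np.random.default_rng()
    w = rng.normal(size=d)                        # random ground-truth weight vector
    X = rng.normal(size=(n_context, d))
    y = X @ w + noise * rng.normal(size=n_context)
    x_query = rng.normal(size=d)
    return X, y, x_query, x_query @ w             # context, labels, query, true answer

def ols_prediction(X, y, x_query):
    """Least-squares baseline the in-context learner is compared against."""
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x_query @ w_hat

X, y, xq, y_true = sample_icl_prompt(rng=np.random.default_rng(0))
print(ols_prediction(X, y, xq), y_true)           # OLS recovers the answer when noise = 0
```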
2305.16808
2023-05-26T10:42:21Z
Geometric deep learning approach to knot theory
[ "Lennart Jaretzki" ]
In this paper, we introduce a novel way to use geometric deep learning for knot data by constructing a functor that takes knots to graphs and using graph neural networks. We will attempt to predict several knot invariants with this approach. This approach demonstrates high generalization capabilities.
[ "math.GT", "cs.LG", "57K10" ]
false
2305.16849
2023-05-26T12:00:37Z
Green Runner: A tool for efficient model selection from model repositories
[ "Jai Kannan", "Scott Barnett", "Anj Simmons", "Taylan Selvi", "Luis Cruz" ]
Deep learning models have become essential in software engineering, enabling intelligent features like image captioning and document generation. However, their popularity raises concerns about environmental impact and inefficient model selection. This paper introduces GreenRunnerGPT, a novel tool for efficiently selecting deep learning models based on specific use cases. It employs a large language model to suggest weights for quality indicators, optimizing resource utilization. The tool utilizes a multi-armed bandit framework to evaluate models against target datasets, considering tradeoffs. We demonstrate that GreenRunnerGPT is able to identify a model suited to a target use case without wasteful computations that would occur under a brute-force approach to model selection.
[ "cs.SE", "cs.LG" ]
false
2305.16864
2023-05-26T12:15:56Z
Knowledge Extraction with Interval Temporal Logic Decision Trees
[ "Guido Sciavicco", "Stan Ionel Eduard" ]
Multivariate temporal, or time, series classification is, in a way, the temporal generalization of (numeric) classification, as every instance is described by multiple time series instead of multiple values. Symbolic classification is the machine learning strategy for extracting explicit knowledge from a data set, and the problem of symbolic classification of multivariate temporal series requires the design, implementation, and testing of ad-hoc machine learning algorithms, such as, for example, algorithms for the extraction of temporal versions of decision trees. One of the most well-known algorithms for decision tree extraction from categorical data is Quinlan's ID3, which was later extended to deal with numerical attributes, resulting in an algorithm known as C4.5, and implemented in many open-source data mining libraries, including the so-called Weka, which features an implementation of C4.5 called J48. ID3 was recently generalized to deal with temporal data in the form of timelines, which can be seen as discrete (categorical) versions of multivariate time series, and such a generalization, based on the interval temporal logic HS, is known as Temporal ID3. In this paper we introduce Temporal C4.5, which allows the extraction of temporal decision trees from undiscretized multivariate time series, describe its implementation, called Temporal J48, and discuss the outcome of a set of experiments with the latter on a collection of public data sets, comparing the results with those obtained by other, classical, multivariate time series classification methods.
[ "cs.LG", "cs.AI" ]
false
2305.16886
2023-05-26T12:45:58Z
Peeking inside Sparse Neural Networks using Multi-Partite Graph Representations
[ "Elia Cunegatti", "Doina Bucur", "Giovanni Iacca" ]
Modern Deep Neural Networks (DNNs) have achieved very high performance at the expense of computational resources. To decrease the computational burden, several techniques have been proposed to extract, from a given DNN, efficient subnetworks which are able to preserve performance while reducing the number of network parameters. The literature provides a broad set of techniques to discover such subnetworks, but few works have studied the peculiar topologies of such pruned architectures. In this paper, we propose a novel \emph{unrolled input-aware} bipartite Graph Encoding (GE) that is able to generate, for each layer in an either sparse or dense neural network, its corresponding graph representation based on its relation with the input data. We also extend it into a multipartite GE, to capture the relation between layers. Then, we leverage topological properties to study the difference between the existing pruning algorithms and algorithm categories, as well as the relation between topologies and performance.
[ "cs.LG", "cs.AI" ]
false
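The abstract above encodes each layer of a (possibly pruned) network as a bipartite graph whose edges follow the nonzero weights, then stacks layers into a multipartite graph. The exact "unrolled input-aware" construction is not reproduced in the abstract; the sketch below builds only the basic weighted bipartite graph for one dense layer with networkx, as a hedged illustration with hypothetical node naming.

```python
import networkx as nx
import numpy as np

def layer_to_bipartite(weight, layer_idx=0, threshold=0.0):
    """Bipartite graph for one layer: input units on one side, output units on the
    other, with an edge for every weight whose magnitude exceeds `threshold`
    (pruned weights therefore contribute no edge)."""
    n_out, n_in = weight.shape
    g = nx.Graph()
    g.add_nodes_from(((layer_idx, "in", j) for j in range(n_in)), bipartite=0)
    g.add_nodes_from(((layer_idx, "out", i) for i in range(n_out)), bipartite=1)
    for i in range(n_out):
        for j in range(n_in):
            if abs(weight[i, j]) > threshold:
                g.add_edge((layer_idx, "in", j), (layer_idx, "out", i),
                           weight=float(weight[i, j]))
    return g

# Toy usage: a sparse 4x3 layer; topological statistics can then be read off the graph.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3)) * (rng.random((4, 3)) > 0.5)   # ~50% of weights pruned to zero
g = layer_to_bipartite(w)
print(g.number_of_nodes(), g.number_of_edges(), nx.density(g))
```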
2305.16890
2023-05-26T12:51:16Z
Universal Weak Coreset
[ "Ragesh Jaiswal", "Amit Kumar" ]
Coresets for $k$-means and $k$-median problems yield a small summary of the data, which preserve the clustering cost with respect to any set of $k$ centers. Recently coresets have also been constructed for constrained $k$-means and $k$-median problems. However, the notion of coresets has the drawback that (i) they can only be applied in settings where the input points are allowed to have weights, and (ii) in general metric spaces, the size of the coresets can depend logarithmically on the number of points. The notion of weak coresets, which have less stringent requirements than coresets, has been studied in the context of classical $k$-means and $k$-median problems. A weak coreset is a pair $(J,S)$ of subsets of points, where $S$ acts as a summary of the point set and $J$ as a set of potential centers. This pair satisfies the properties that (i) $S$ is a good summary of the data as long as the $k$ centers are chosen from $J$ only, and (ii) there is a good choice of $k$ centers in $J$ with cost close to the optimal cost. We develop this framework, which we call universal weak coresets, for constrained clustering settings. In conjunction with recent coreset constructions for constrained settings, our designs give greater data compression, are conceptually simpler, and apply to a wide range of constrained $k$-median and $k$-means problems.
[ "cs.DS", "cs.LG" ]
false
2305.16948
2023-05-26T14:00:35Z
Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets
[ "Hayeon Lee", "Sohyun An", "Minseon Kim", "Sung Ju Hwang" ]
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture that obtains the best performance and/or efficiency when distilling the knowledge from a given teacher model. Previous DaNAS methods have mostly tackled the search for the neural architecture for fixed datasets and teachers, which does not generalize well to a new task consisting of an unseen dataset and an unseen teacher, and thus a costly search must be performed for any new combination of datasets and teachers. For standard NAS tasks without KD, meta-learning-based computationally efficient NAS methods have been proposed, which learn the generalized search process over multiple tasks (datasets) and transfer the knowledge obtained over those tasks to a new task. However, since they assume learning from scratch without KD from a teacher, they might not be ideal for DaNAS scenarios. To eliminate the excessive computational cost of DaNAS methods and the sub-optimality of rapid NAS methods, we propose a distillation-aware meta accuracy prediction model, DaSS (Distillation-aware Student Search), which can predict a given architecture's final performance on a dataset when performing KD with a given teacher, without actually having to train it on the target task. The experimental results demonstrate that our proposed meta-prediction model successfully generalizes to multiple unseen datasets for DaNAS tasks, largely outperforming existing meta-NAS methods and rapid NAS baselines. Code is available at https://github.com/CownowAn/DaSS
[ "cs.LG", "cs.AI" ]
false
2305.16998
2023-05-26T14:58:30Z
A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation
[ "Zhiyi Xue", "Si Liu", "Zhaodi Zhang", "Yiting Wu", "Min Zhang" ]
The robustness of deep neural networks (DNNs) is crucial to the hosting system's reliability and security. Formal verification has been demonstrated to be effective in providing provable robustness guarantees. To improve its scalability, over-approximating the non-linear activation functions in DNNs by linear constraints has been widely adopted, which transforms the verification problem into an efficiently solvable linear programming problem. Many efforts have been dedicated to defining the so-called tightest approximations to reduce overestimation imposed by over-approximation. In this paper, we study existing approaches and identify a dominant factor in defining tight approximation, namely the approximation domain of the activation function. We find that tight approximations defined on approximation domains may not be as tight as the ones on their actual domains, yet existing approaches all rely only on approximation domains. Based on this observation, we propose a novel dual-approximation approach to tighten over-approximations, leveraging an activation function's underestimated domain to define tight approximation bounds. We implement our approach with two complementary algorithms based respectively on Monte Carlo simulation and gradient descent into a tool called DualApp. We assess it on a comprehensive benchmark of DNNs with different architectures. Our experimental results show that DualApp significantly outperforms the state-of-the-art approaches with 100% - 1000% improvement on the verified robustness ratio and 10.64% on average (up to 66.53%) on the certified lower bound.
[ "cs.LG", "cs.AI" ]
false
2305.17063
2023-05-26T16:19:26Z
Vecchia Gaussian Process Ensembles on Internal Representations of Deep Neural Networks
[ "Felix Jimenez", "Matthias Katzfuss" ]
For regression tasks, standard Gaussian processes (GPs) provide natural uncertainty quantification, while deep neural networks (DNNs) excel at representation learning. We propose to synergistically combine these two approaches in a hybrid method consisting of an ensemble of GPs built on the output of hidden layers of a DNN. GP scalability is achieved via Vecchia approximations that exploit nearest-neighbor conditional independence. The resulting deep Vecchia ensemble not only imbues the DNN with uncertainty quantification but can also provide more accurate and robust predictions. We demonstrate the utility of our model on several datasets and carry out experiments to understand the inner workings of the proposed method.
[ "stat.ML", "cs.LG" ]
false
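The abstract above builds an ensemble of Vecchia-approximated GPs on the hidden-layer outputs of a DNN. A Vecchia approximation is beyond a short sketch, so the code below shows only the surrounding idea with exact GPs from scikit-learn: fit one GP per hidden layer's representation and average the ensemble's predictions. The network, kernel, and averaging rule are placeholders, not the authors' method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# A small DNN whose hidden activations serve as learned features.
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

def hidden_representations(net, X):
    """Manual forward pass returning the activation of every hidden layer."""
    reps, a = [], X
    for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0.0)          # ReLU hidden layers
        reps.append(a)
    return reps

# One exact GP per hidden representation; predictions are averaged over the ensemble.
gps = [GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(r, y)
       for r in hidden_representations(net, X)]
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
preds = np.mean([gp.predict(r) for gp, r in
                 zip(gps, hidden_representations(net, X_test))], axis=0)
print(preds.shape)   # (50,)
```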
2305.17149
2023-05-26T05:31:23Z
Diagnostic Spatio-temporal Transformer with Faithful Encoding
[ "Jokin Labaien", "Tsuyoshi Idé", "Pin-Yu Chen", "Ekhi Zugasti", "Xabier De Carlos" ]
This paper addresses the task of anomaly diagnosis when the underlying data generation process has a complex spatio-temporal (ST) dependency. The key technical challenge is to extract actionable insights from the dependency tensor characterizing high-order interactions among temporal and spatial indices. We formalize the problem as supervised dependency discovery, where the ST dependency is learned as a side product of multivariate time-series classification. We show that temporal positional encoding used in existing ST transformer works has a serious limitation in capturing higher frequencies (short time scales). We propose a new positional encoding with a theoretical guarantee, based on discrete Fourier transform. We also propose a new ST dependency discovery framework, which can provide readily consumable diagnostic information in both spatial and temporal directions. Finally, we demonstrate the utility of the proposed model, DFStrans (Diagnostic Fourier-based Spatio-temporal Transformer), in a real industrial application of building elevator control.
[ "cs.LG", "cs.AI" ]
false
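The abstract above argues that standard sinusoidal positional encoding under-represents high frequencies and proposes a discrete-Fourier-transform-based encoding instead. The paper's exact construction is not given in the abstract; the sketch below shows one simple DFT-flavored encoding whose frequencies are spread uniformly up to the Nyquist rate, purely as an illustration of the motivation and not the DFStrans encoding itself.

```python
import numpy as np

def dft_positional_encoding(seq_len, d_model):
    """Positional encoding whose columns are the cosine/sine parts of DFT basis
    vectors, so frequencies cover the range up to Nyquist uniformly rather than
    decaying geometrically as in the standard transformer encoding."""
    positions = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    freqs = np.arange(1, d_model // 2 + 1)[None, :]          # (1, d_model // 2)
    angles = 2.0 * np.pi * positions * freqs / seq_len       # DFT phase grid
    return np.concatenate([np.cos(angles), np.sin(angles)], axis=1)  # (seq_len, d_model)

pe = dft_positional_encoding(seq_len=64, d_model=32)
print(pe.shape)   # (64, 32)
```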
2305.17156
2023-05-26T16:40:44Z
An Improved Model Ensembled of Different Hyper-parameter Tuned Machine Learning Algorithms for Fetal Health Prediction
[ "Md. Simul Hasan Talukder", "Sharmin Akter" ]
Fetal health is a critical concern during pregnancy as it can impact the well-being of both the mother and the baby. Regular monitoring and timely interventions are necessary to ensure the best possible outcomes. While there are various methods to monitor fetal health in the mother's womb, the use of artificial intelligence (AI) can improve the accuracy, efficiency, and speed of diagnosis. In this study, we propose a robust ensemble model called ensemble of tuned Support Vector Machine and ExtraTrees (ETSE) for predicting fetal health. Initially, we employed various data preprocessing techniques such as outlier rejection, missing value imputation, data standardization, and data sampling. Then, seven machine learning (ML) classifiers including Support Vector Machine (SVM), XGBoost (XGB), Light Gradient Boosting Machine (LGBM), Decision Tree (DT), Random Forest (RF), ExtraTrees (ET), and K-Neighbors were implemented. These models were evaluated and then optimized by hyperparameter tuning using the grid search technique. Finally, we analyzed the performance of our proposed ETSE model. The performance analysis of each model revealed that our proposed ETSE model outperformed the other models with 100% precision, 100% recall, 100% F1-score, and 99.66% accuracy. This indicates that the ETSE model can effectively predict fetal health, which can aid in timely interventions and improve outcomes for both the mother and the baby.
[ "cs.LG", "cs.AI", "14J60 (Primary) 14F05, 14J26 (Secondary)", "I.2.10" ]
false
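The pipeline above tunes several classifiers with grid search and ensembles a tuned SVM with ExtraTrees (ETSE). The sketch below reproduces only the skeleton of that idea with scikit-learn on a placeholder dataset: grid-search each of the two models, then combine them in a soft-voting ensemble. The hyperparameter grids, preprocessing, and dataset are illustrative assumptions, not those of the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)      # stand-in for the fetal-health data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Grid-search the two base models independently.
svm = GridSearchCV(make_pipeline(StandardScaler(), SVC(probability=True)),
                   {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}, cv=5)
et = GridSearchCV(ExtraTreesClassifier(random_state=0),
                  {"n_estimators": [200, 400], "max_depth": [None, 10]}, cv=5)
svm.fit(X_tr, y_tr)
et.fit(X_tr, y_tr)

# Soft-voting ensemble of the two tuned models (the "ETSE" idea in miniature).
etse = VotingClassifier([("svm", svm.best_estimator_), ("et", et.best_estimator_)],
                        voting="soft").fit(X_tr, y_tr)
print(etse.score(X_te, y_te))
```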
2305.17250
2023-05-26T20:37:06Z
Self-Supervised Reinforcement Learning that Transfers using Random Features
[ "Boyuan Chen", "Chuning Zhu", "Pulkit Agrawal", "Kaiqing Zhang", "Abhishek Gupta" ]
Model-free reinforcement learning algorithms have exhibited great potential in solving single-task sequential decision-making problems with high-dimensional observations and long horizons, but are known to be hard to generalize across tasks. Model-based RL, on the other hand, learns task-agnostic models of the world that naturally enables transfer across different reward functions, but struggles to scale to complex environments due to the compounding error. To get the best of both worlds, we propose a self-supervised reinforcement learning method that enables the transfer of behaviors across tasks with different rewards, while circumventing the challenges of model-based RL. In particular, we show self-supervised pre-training of model-free reinforcement learning with a number of random features as rewards allows implicit modeling of long-horizon environment dynamics. Then, planning techniques like model-predictive control using these implicit models enable fast adaptation to problems with new reward functions. Our method is self-supervised in that it can be trained on offline datasets without reward labels, but can then be quickly deployed on new tasks. We validate that our proposed method enables transfer across tasks on a variety of manipulation and locomotion domains in simulation, opening the door to generalist decision-making agents.
[ "cs.LG", "cs.AI" ]
false
2305.17255
2023-05-26T20:54:18Z
FineMorphs: Affine-diffeomorphic sequences for regression
[ "Michele Lohr", "Laurent Younes" ]
A multivariate regression model of affine and diffeomorphic transformation sequences - FineMorphs - is presented. Leveraging concepts from shape analysis, model states are optimally "reshaped" by diffeomorphisms generated by smooth vector fields during learning. Affine transformations and vector fields are optimized within an optimal control setting, and the model can naturally reduce (or increase) dimensionality and adapt to large datasets via suboptimal vector fields. An existence proof of solution and necessary conditions for optimality for the model are derived. Experimental results on real datasets from the UCI repository are presented, with favorable results in comparison with state-of-the-art in the literature and densely-connected neural networks in TensorFlow.
[ "stat.ML", "cs.LG" ]
false
2305.17277
2023-05-26T21:49:37Z
Optimizing NOTEARS Objectives via Topological Swaps
[ "Chang Deng", "Kevin Bello", "Bryon Aragam", "Pradeep Ravikumar" ]
Recently, an intriguing class of non-convex optimization problems has emerged in the context of learning directed acyclic graphs (DAGs). These problems involve minimizing a given loss or score function, subject to a non-convex continuous constraint that penalizes the presence of cycles in a graph. In this work, we delve into the optimization challenges associated with this class of non-convex programs. To address these challenges, we propose a bi-level algorithm that leverages the non-convex constraint in a novel way. The outer level of the algorithm optimizes over topological orders by iteratively swapping pairs of nodes within the topological order of a DAG. A key innovation of our approach is the development of an effective method for generating a set of candidate swapping pairs for each iteration. At the inner level, given a topological order, we utilize off-the-shelf solvers that can handle linear constraints. The key advantage of our proposed algorithm is that it is guaranteed to find a local minimum or a KKT point under weaker conditions compared to previous work and finds solutions with lower scores. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches in terms of achieving a better score. Additionally, our method can also be used as a post-processing algorithm to significantly improve the score of other algorithms. Code implementing the proposed method is available at https://github.com/duntrain/topo.
[ "stat.ML", "cs.LG" ]
false
2305.17284
2023-05-26T22:11:38Z
GC-Flow: A Graph-Based Flow Network for Effective Clustering
[ "Tianchun Wang", "Farzaneh Mirzazadeh", "Xiang Zhang", "Jie Chen" ]
Graph convolutional networks (GCNs) are \emph{discriminative models} that directly model the class posterior $p(y|\mathbf{x})$ for semi-supervised classification of graph data. While being effective, as a representation learning approach, the node representations extracted from a GCN often miss useful information for effective clustering, because the objectives are different. In this work, we design normalizing flows that replace GCN layers, leading to a \emph{generative model} that models both the class conditional likelihood $p(\mathbf{x}|y)$ and the class prior $p(y)$. The resulting neural network, GC-Flow, retains the graph convolution operations while being equipped with a Gaussian mixture representation space. It enjoys two benefits: it not only maintains the predictive power of GCN, but also produces well-separated clusters, due to the structuring of the representation space. We demonstrate these benefits on a variety of benchmark data sets. Moreover, we show that additional parameterization, such as that on the adjacency matrix used for graph convolutions, yields additional improvement in clustering.
[ "cs.LG", "stat.ML" ]
false
2305.18210
2023-05-26T13:24:17Z
Learning Causal Graphs via Monotone Triangular Transport Maps
[ "Sina Akbari", "Luca Ganassali", "Negar Kiyavash" ]
We study the problem of causal structure learning from data using optimal transport (OT). Specifically, we first provide a constraint-based method which builds upon lower-triangular monotone parametric transport maps to design conditional independence tests which are agnostic to the noise distribution. We provide an algorithm for causal discovery up to Markov Equivalence with no assumptions on the structural equations/noise distributions, which allows for settings with latent variables. Our approach also extends to score-based causal discovery by providing a novel means for defining scores. This allows us to uniquely recover the causal graph under additional identifiability and structural assumptions, such as additive noise or post-nonlinear models. We provide experimental results to compare the proposed approach with the state of the art on both synthetic and real-world datasets.
[ "stat.ME", "cs.LG" ]
false
2305.16544
2023-05-26T00:03:51Z
Inductive detection of Influence Operations via Graph Learning
[ "Nicholas A. Gabriel", "David A. Broniatowski", "Neil F. Johnson" ]
Influence operations are large-scale efforts to manipulate public opinion. The rapid detection and disruption of these operations is critical for healthy public discourse. Emergent AI technologies may enable novel operations which evade current detection methods and influence public discourse on social media with greater scale, reach, and specificity. New methods with inductive learning capacity will be needed to identify these novel operations before they indelibly alter public opinion and events. We develop an inductive learning framework which: 1) determines content- and graph-based indicators that are not specific to any operation; 2) uses graph learning to encode abstract signatures of coordinated manipulation; and 3) evaluates generalization capacity by training and testing models across operations originating from Russia, China, and Iran. We find that this framework enables strong cross-operation generalization while also revealing salient indicators, illustrating a generic approach which directly complements transductive methodologies, thereby enhancing detection coverage.
[ "cs.LG", "cs.CR", "cs.SI", "physics.soc-ph" ]
false
2305.16625
2023-05-26T04:34:28Z
Set-based Neural Network Encoding
[ "Bruno Andreis", "Soro Bedionita", "Sung Ju Hwang" ]
We propose an approach to neural network weight encoding for generalization performance prediction that utilizes set-to-set and set-to-vector functions to efficiently encode neural network parameters. Our approach is capable of encoding neural networks in a modelzoo of mixed architectures and different parameter sizes, as opposed to previous approaches that require custom encoding models for different architectures. Furthermore, our \textbf{S}et-based \textbf{N}eural network \textbf{E}ncoder (SNE) takes into consideration the hierarchical computational structure of neural networks by utilizing a layer-wise encoding scheme that culminates in encoding all layer-wise encodings to obtain the neural network encoding vector. Additionally, we introduce a \textit{pad-chunk-encode} pipeline to efficiently encode neural network layers that is adjustable to computational and memory constraints. We also introduce two new tasks for neural network generalization performance prediction: cross-dataset and cross-architecture. In cross-dataset performance prediction, we evaluate how well performance predictors generalize across modelzoos trained on different datasets but of the same architecture. In cross-architecture performance prediction, we evaluate how well generalization performance predictors transfer to modelzoos of different architectures. Experimentally, we show that SNE outperforms the relevant baselines on the cross-dataset task and provide the first set of results on the cross-architecture task.
[ "cs.LG", "cs.AI", "cs.NE" ]
false
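The abstract above encodes network weights layer by layer with a pad-chunk-encode pipeline so arbitrarily sized layers fit a fixed-size set encoder. The sketch below shows only the pad-and-chunk step in plain numpy; the chunk size, padding value, and downstream set encoder are assumptions, not the SNE design.

```python
import numpy as np

def pad_and_chunk(layer_weights, chunk_size=64, pad_value=0.0):
    """Flatten a layer's weights, pad to a multiple of `chunk_size`, and split into
    equal chunks that a fixed-size (set-based) encoder can consume."""
    flat = np.asarray(layer_weights, dtype=np.float32).ravel()
    remainder = (-len(flat)) % chunk_size
    flat = np.concatenate([flat, np.full(remainder, pad_value, dtype=np.float32)])
    return flat.reshape(-1, chunk_size)           # (num_chunks, chunk_size)

# Toy usage: a 10x7 weight matrix becomes two 64-dimensional chunks (with padding).
chunks = pad_and_chunk(np.random.randn(10, 7))
print(chunks.shape)   # (2, 64)
```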
2305.16699
2023-05-26T07:39:26Z
Automatic Tuning of Loss Trade-offs without Hyper-parameter Search in End-to-End Zero-Shot Speech Synthesis
[ "Seongyeon Park", "Bohyung Kim", "Tae-hyun Oh" ]
Recently, zero-shot TTS and VC methods have gained attention due to their practicality: they can generate voices unseen during training. Among these methods, zero-shot modifications of the VITS model have shown superior performance, while retaining useful properties inherited from VITS. However, the performance of VITS and VITS-based zero-shot models varies dramatically depending on how the losses are balanced. This can be problematic, as it requires the burdensome procedure of tuning loss-balance hyper-parameters to find the optimal balance. In this work, we propose a novel framework that finds this optimum without search, by inducing the decoder of VITS-based models to its full reconstruction ability. With our framework, we show superior performance compared to baselines in zero-shot TTS and VC, achieving state-of-the-art performance. Furthermore, we show the robustness of our framework in various settings. We provide an explanation for the results in the discussion.
[ "eess.AS", "cs.AI", "cs.LG" ]
false
2305.16708
2023-05-26T07:53:12Z
A Hierarchical Approach to Population Training for Human-AI Collaboration
[ "Yi Loo", "Chen Gong", "Malika Meghjani" ]
A major challenge for deep reinforcement learning (DRL) agents is to collaborate with novel partners that they did not encounter during the training phase. This is made worse by an increased variance in action responses when DRL agents collaborate with human partners, due to the lack of consistency in human behaviors. Recent work has shown that training a single agent as the best response to a diverse population of training partners significantly increases an agent's robustness to novel partners. We further enhance this population-based training approach by introducing a Hierarchical Reinforcement Learning (HRL) based method for Human-AI Collaboration. Our agent learns multiple best-response policies as its low-level policies while, at the same time, learning a high-level policy that acts as a manager and allows the agent to dynamically switch between the low-level best-response policies based on its current partner. We demonstrate that our method is able to dynamically adapt to novel partners of different play styles and skill levels in the 2-player collaborative Overcooked game environment. We also conducted a human study in the same environment to test the effectiveness of our method when partnering with real human subjects.
[ "cs.AI", "cs.LG", "cs.MA" ]
false
2305.16836
2023-05-26T11:26:49Z
A Robust Probabilistic Approach to Stochastic Subspace Identification
[ "Brandon J. O'Connell", "Timothy J. Rogers" ]
Modal parameter estimation of operational structures is often a challenging task when confronted with unwanted distortions (outliers) in field measurements. Atypical observations present a problem to operational modal analysis (OMA) algorithms, such as stochastic subspace identification (SSI), severely biasing parameter estimates and resulting in misidentification of the system. Despite this predicament, no simple mechanism currently exists capable of dealing with such anomalies in SSI. Addressing this problem, this paper first introduces a novel probabilistic formulation of stochastic subspace identification (Prob-SSI), realised using probabilistic projections. Mathematically, the equivalence between this model and the classic algorithm is demonstrated. This fresh perspective, viewing SSI as a problem in probabilistic inference, lays the necessary mathematical foundation to enable a plethora of new, more sophisticated OMA approaches. To this end, a statistically robust SSI algorithm (robust Prob-SSI) is developed, capable of providing a principled and automatic way of handling outlying or anomalous data in the measured timeseries, such as may occur in field recordings, e.g. intermittent sensor dropout. Robust Prob-SSI is shown to outperform conventional SSI when confronted with 'corrupted' data, exhibiting improved identification performance and higher levels of confidence in the found poles when viewing consistency (stabilisation) diagrams. Similar benefits are also demonstrated on the Z24 Bridge benchmark dataset, highlighting enhanced performance on measured systems.
[ "stat.ML", "cs.LG", "eess.SP" ]
false
2305.16837
2023-05-26T11:29:06Z
ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks
[ "Giriprasad Sridhara", "Ranjani H. G.", "Sourav Mazumdar" ]
ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot launched by OpenAI on November 30, 2022. OpenAI's GPT-3 family of large language models serve as the foundation for ChatGPT. ChatGPT is fine-tuned with both supervised and reinforcement learning techniques and has received widespread attention for its articulate responses across diverse domains of knowledge. In this study, we explore how ChatGPT can be used to help with common software engineering tasks. Many of the ubiquitous tasks covering the breadth of software engineering such as ambiguity resolution in software requirements, method name suggestion, test case prioritization, code review, log summarization can potentially be performed using ChatGPT. In this study, we explore fifteen common software engineering tasks using ChatGPT. We juxtapose and analyze ChatGPT's answers with the respective state of the art outputs (where available) and/or human expert ground truth. Our experiments suggest that for many tasks, ChatGPT does perform credibly and the response from it is detailed and often better than the human expert output or the state of the art output. However, for a few other tasks, ChatGPT in its present form provides incorrect answers and hence is not suited for such tasks.
[ "cs.SE", "cs.AI", "cs.LG" ]
false
2305.16845
2023-05-26T11:53:53Z
An end-to-end strategy for recovering a free-form potential from a snapshot of stellar coordinates
[ "Wassim Tenachi", "Rodrigo Ibata", "Foivos I. Diakogiannis" ]
New large observational surveys such as Gaia are leading us into an era of data abundance, offering unprecedented opportunities to discover new physical laws through the power of machine learning. Here we present an end-to-end strategy for recovering a free-form analytical potential from a mere snapshot of stellar positions and velocities. First, we show how auto-differentiation can be used to capture an agnostic map of the gravitational potential and its underlying dark matter distribution in the form of a neural network. However, in the context of physics, neural networks are both a plague and a blessing, as they are extremely flexible for modeling physical systems but remain largely non-interpretable black boxes. Therefore, in addition, we show how a complementary symbolic regression approach can be used to open up this neural network into a physically meaningful expression. We demonstrate our strategy by recovering the potential of a toy isochrone system.
[ "astro-ph.GA", "astro-ph.IM", "cs.LG", "physics.comp-ph" ]
false
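A minimal sketch of the auto-differentiation idea in the abstract above: represent the potential Phi(x) with a small neural network and obtain accelerations as a(x) = -grad Phi(x) via autograd. The network architecture and names are assumptions for illustration, not the authors' model.

    import torch
    import torch.nn as nn

    class NeuralPotential(nn.Module):
        """Hypothetical free-form potential Phi(x) parameterized by an MLP."""
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x):               # x: (N, 3) positions
            return self.net(x).squeeze(-1)  # (N,) potential values

    def acceleration(potential, x):
        """a = -dPhi/dx, computed with auto-differentiation."""
        x = x.requires_grad_(True)
        phi = potential(x).sum()
        # create_graph=True would allow backpropagating through the
        # accelerations inside a training loss.
        (grad_phi,) = torch.autograd.grad(phi, x, create_graph=True)
        return -grad_phi

    if __name__ == "__main__":
        model = NeuralPotential()
        positions = torch.randn(8, 3)
        acc = acceleration(model, positions)
        print(acc.shape)  # torch.Size([8, 3])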
2305.16862
2023-05-26T12:15:11Z
Neural modeling of magnetic tape recorders
[ "Otto Mikkonen", "Alec Wright", "Eloi Moliner", "Vesa Välimäki" ]
The sound of magnetic recording media, such as open-reel and cassette tape recorders, is still sought after by today's sound practitioners due to the imperfections embedded in the physics of the magnetic recording process. This paper proposes a method for digitally emulating this character using neural networks. The signal chain of the proposed system consists of three main components: the hysteretic nonlinearity and filtering jointly produced by the magnetic recording process as well as the record and playback amplifiers, the fluctuating delay originating from the tape transport, and the combined additive noise component from various electromagnetic origins. In our approach, the hysteretic nonlinear block is modeled using a recurrent neural network, while the delay trajectories and the noise component are generated using separate diffusion models, which employ U-net deep convolutional neural networks. According to the conducted objective evaluation, the proposed architecture faithfully captures the character of the magnetic tape recorder. The results of this study can be used to construct virtual replicas of vintage sound recording devices with applications in music production and audio antiquing tasks.
[ "eess.AS", "cs.LG", "cs.SD" ]
false
2305.16903
2023-05-26T13:17:20Z
Submodular Minimax Optimization: Finding Effective Sets
[ "Loay Mualem", "Ethan R. Elenberg", "Moran Feldman", "Amin Karbasi" ]
Despite the rich existing literature about minimax optimization in continuous settings, only very partial results of this kind have been obtained for combinatorial settings. In this paper, we fill this gap by providing a characterization of submodular minimax optimization, the problem of finding a set (for either the min or the max player) that is effective against every possible response. We show when and under what conditions we can find such sets. We also demonstrate how minimax submodular optimization provides robust solutions for downstream machine learning applications such as (i) efficient prompt engineering for question answering, (ii) prompt engineering for dialog state tracking, (iii) identifying robust waiting locations for ride-sharing, (iv) ride-share difficulty kernelization, and (v) finding adversarial images. Our experiments demonstrate that our proposed algorithms consistently outperform other baselines.
[ "cs.LG", "cs.DM", "math.OC", "68R05 (Primary) 90C26, 90C20, 68T20, 68W40 (Secondary)", "G.2.1; I.2.m; F.2.2" ]
false
2305.16933
2023-05-26T13:48:37Z
Representing Piecewise Linear Functions by Functions with Small Arity
[ "Christoph Koutschan", "Bernhard Moser", "Anton Ponomarchuk", "Josef Schicho" ]
A piecewise linear function can be described in different forms: as an arbitrarily nested expression of $\min$- and $\max$-functions, as a difference of two convex piecewise linear functions, or as a linear combination of maxima of affine-linear functions. In this paper, we provide two main results: first, we show that for every piecewise linear function there exists a linear combination of $\max$-functions with at most $n+1$ arguments, and give an algorithm for its computation. Moreover, these arguments are contained in the finite set of affine-linear functions that coincide with the given function in some open set. Second, we prove that the piecewise linear function $\max(0, x_{1}, \ldots, x_{n})$ cannot be represented as a linear combination of maxima of fewer than $n+1$ affine-linear arguments. This was conjectured by Wang and Sun in 2005 in a paper on representations of piecewise linear functions as linear combinations of maxima.
[ "cs.SC", "cs.DM", "cs.LG", "math.CO" ]
false
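A tiny worked example of the representation form discussed in the abstract above (not taken from the paper): the piecewise linear function f(x) = |x| written as a linear combination of maxima of affine-linear functions, |x| = 2*max(0, x) + 1*max(-x), where the second term is the trivial maximum of a single affine argument. The check below verifies the identity numerically.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, size=1000)

    lhs = np.abs(x)
    # 2 * max(0, x) plus the (single-argument) max of the affine function -x
    rhs = 2.0 * np.maximum(0.0, x) + (-x)

    assert np.allclose(lhs, rhs)
    print("representation verified on", x.size, "random points")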
2305.16974
2023-05-26T14:29:33Z
Finite Time Regret Bounds for Minimum Variance Control of Autoregressive Systems with Exogenous Inputs
[ "Rahul Singh", "Akshay Mete", "Avik Kar", "P. R. Kumar" ]
Minimum variance controllers have been employed in a wide range of industrial applications. A key challenge experienced by many adaptive controllers is their poor empirical performance in the initial stages of learning. In this paper, we address the problem of initializing them so that they provide acceptable transients, and also provide an accompanying finite-time regret analysis, for adaptive minimum variance control of an auto-regressive system with exogenous inputs (ARX). Following [3], we consider a modified version of the Certainty Equivalence (CE) adaptive controller, which we call PIECE, that utilizes probing inputs for exploration. We show that it has a $C \log T$ bound on the regret after $T$ time-steps for bounded noise, and $C\log^2 T$ in the case of sub-Gaussian noise. The simulation results demonstrate the advantage of PIECE over the algorithm proposed in [3] as well as the standard Certainty Equivalence controller, especially in the initial learning phase. To the best of our knowledge, this is the first work that provides finite-time regret bounds for an adaptive minimum variance controller.
[ "eess.SY", "cs.LG", "cs.SY" ]
false
2305.17043
2023-05-26T15:52:08Z
Explaining Deep Learning for ECG Analysis: Building Blocks for Auditing and Knowledge Discovery
[ "Patrick Wagner", "Temesgen Mehari", "Wilhelm Haverkamp", "Nils Strodthoff" ]
Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the lack of transparency due to the black box nature of these models is a common concern. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating the local (attributions per sample) and global (based on domain expert concepts) perspectives. We have established a set of sanity checks to identify sensible attribution methods, and we provide quantitative evidence in accordance with expert rules. This dataset-wide analysis goes beyond anecdotal evidence by aggregating data across patient subgroups. Furthermore, we demonstrate how these XAI techniques can be utilized for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe that these proposed methods can serve as building blocks for a complementary assessment of the internal validity during a certification process, as well as for knowledge discovery in the field of ECG analysis.
[ "eess.SP", "cs.LG", "stat.ML" ]
false
2305.17071
2023-05-26T16:28:26Z
Adversarial Attacks on Online Learning to Rank with Click Feedback
[ "Jinhang Zuo", "Zhiyao Zhang", "Zhiyong Wang", "Shuai Li", "Mohammad Hajiesmaili", "Adam Wierman" ]
Online learning to rank (OLTR) is a sequential decision-making problem where a learning agent selects an ordered list of items and receives feedback through user clicks. Although potential attacks against OLTR algorithms may cause serious losses in real-world applications, little is known about adversarial attacks on OLTR. This paper studies attack strategies against multiple variants of OLTR. Our first result provides an attack strategy against the UCB algorithm on classical stochastic bandits with binary feedback, which solves the key issues caused by bounded and discrete feedback that previous works can not handle. Building on this result, we design attack algorithms against UCB-based OLTR algorithms in position-based and cascade models. Finally, we propose a general attack strategy against any algorithm under the general click model. Each attack algorithm manipulates the learning agent into choosing the target attack item $T-o(T)$ times, incurring a cumulative cost of $o(T)$. Experiments on synthetic and real data further validate the effectiveness of our proposed attack algorithms.
[ "cs.LG", "cs.CR", "cs.IR" ]
false
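A generic, illustrative sketch related to the abstract above (not the paper's attack algorithm): reward poisoning against UCB on Bernoulli click feedback, where the attacker occasionally flips clicks of non-target arms to 0 so that the learner is steered toward the target arm. Arm rates, the attack probability, and the cost accounting are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    true_click_rates = np.array([0.7, 0.5, 0.3])  # hypothetical arms
    target_arm = 2                                # arm the attacker promotes
    attack_prob = 0.9                             # chance of corrupting a click
    T = 5000

    counts = np.zeros(3)
    sums = np.zeros(3)
    attack_cost = 0

    for t in range(1, T + 1):
        if t <= 3:                 # pull each arm once to initialize
            arm = t - 1
        else:                      # UCB1 index
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))

        click = rng.random() < true_click_rates[arm]
        if arm != target_arm and click and rng.random() < attack_prob:
            click = 0              # corrupt the binary feedback
            attack_cost += 1       # each flipped click costs the attacker 1

        counts[arm] += 1
        sums[arm] += click

    print("pulls per arm:", counts.astype(int), "attack cost:", attack_cost)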
2305.17299
2023-05-26T23:00:19Z
Improving Stability in Decision Tree Models
[ "Dimitris Bertsimas", "Vassilis Digalakis Jr" ]
Owing to their inherently interpretable structure, decision trees are commonly used in applications where interpretability is essential. Recent work has focused on improving various aspects of decision trees, including their predictive power and robustness; however, their instability, albeit well-documented, has been addressed to a lesser extent. In this paper, we take a step towards the stabilization of decision tree models through the lens of real-world health care applications due to the relevance of stability and interpretability in this space. We introduce a new distance metric for decision trees and use it to determine a tree's level of stability. We propose a novel methodology to train stable decision trees and investigate the existence of trade-offs that are inherent to decision tree models - including between stability, predictive power, and interpretability. We demonstrate the value of the proposed methodology through an extensive quantitative and qualitative analysis of six case studies from real-world health care applications, and we show that, on average, with a small 4.6% decrease in predictive power, we gain a significant 38% improvement in the model's stability.
[ "stat.ML", "cs.AI", "cs.LG", "math.OC" ]
false
2305.17300
2023-05-26T23:04:53Z
Exploiting Large Neuroimaging Datasets to Create Connectome-Constrained Approaches for more Robust, Efficient, and Adaptable Artificial Intelligence
[ "Erik C. Johnson", "Brian S. Robinson", "Gautam K. Vallabha", "Justin Joyce", "Jordan K. Matelsky", "Raphael Norman-Tenazas", "Isaac Western", "Marisel Villafañe-Delgado", "Martha Cervantes", "Michael S. Robinette", "Arun V. Reddy", "Lindsey Kitchell", "Patricia K. Rivlin", "Elizabeth P. Reilly", "Nathan Drenkow", "Matthew J. Roos", "I-Jeng Wang", "Brock A. Wester", "William R. Gray-Roncal", "Joan A. Hoffmann" ]
Despite the progress in deep learning networks, efficient learning at the edge (enabling adaptable, low-complexity machine learning solutions) remains a critical need for defense and commercial applications. We envision a pipeline to utilize large neuroimaging datasets, including maps of the brain which capture neuron and synapse connectivity, to improve machine learning approaches. We have pursued different approaches within this pipeline structure. First, as a demonstration of data-driven discovery, the team has developed a technique for discovery of repeated subcircuits, or motifs. These were incorporated into a neural architecture search approach to evolve network architectures. Second, we have conducted analysis of the heading direction circuit in the fruit fly, which performs fusion of visual and angular velocity features, to explore augmenting existing computational models with new insight. Our team discovered a novel pattern of connectivity, implemented a new model, and demonstrated sensor fusion on a robotic platform. Third, the team analyzed circuitry for memory formation in the fruit fly connectome, enabling the design of a novel generative replay approach. Finally, the team has begun analysis of connectivity in mammalian cortex to explore potential improvements to transformer networks. These constraints increased network robustness on the most challenging examples in the CIFAR-10-C computer vision robustness benchmark task, while reducing learnable attention parameters by over an order of magnitude. Taken together, these results demonstrate multiple potential approaches to utilize insight from neural systems for developing robust and efficient machine learning techniques.
[ "cs.NE", "cs.AI", "cs.LG" ]
false
2305.18205
2023-05-26T13:24:33Z
Pulse shape discrimination based on the Tempotron: a powerful classifier on GPU
[ "Haoran Liu", "Peng Li", "Ming-Zhe Liu", "Kai-Ming Wang", "Zhuo Zuo", "Bing-Qi Liu" ]
This study introduces the Tempotron, a powerful classifier based on a third-generation neural network model, for pulse shape discrimination. By eliminating the need for manual feature extraction, the Tempotron model can process pulse signals directly, generating discrimination results based on learned prior knowledge. The study performed experiments using GPU acceleration, resulting in over a 500 times speedup compared to the CPU-based model, and investigated the impact of noise augmentation on the Tempotron's performance. Experimental results showed that the Tempotron is a potent classifier capable of achieving high discrimination accuracy. Furthermore, analyzing the neural activity of Tempotron during training shed light on its learning characteristics and aided in selecting the Tempotron's hyperparameters. The dataset used in this study and the source code of the GPU-based Tempotron are publicly available on GitHub at https://github.com/HaoranLiu507/TempotronGPU.
[ "eess.SP", "cs.LG", "nucl-ex" ]
false
2305.18356
2023-05-26T17:40:25Z
RT-kNNS Unbound: Using RT Cores to Accelerate Unrestricted Neighbor Search
[ "Vani Nagarajan", "Durga Mandarapu", "Milind Kulkarni" ]
The problem of identifying the k-Nearest Neighbors (kNNS) of a point has proven to be very useful both as a standalone application and as a subroutine in larger applications. Given its far-reaching applicability in areas such as machine learning and point clouds, extensive research has gone into leveraging GPU acceleration to solve this problem. Recent work has shown that using Ray Tracing cores in recent GPUs to accelerate kNNS is much more efficient compared to traditional acceleration using shader cores. However, the existing translation of kNNS to a ray tracing problem imposes a constraint on the search space for neighbors. Due to this, we can only use RT cores to accelerate fixed-radius kNNS, which requires the user to set a search radius a priori and hence can miss neighbors. In this work, we propose TrueKNN, the first unbounded RT-accelerated neighbor search. TrueKNN adopts an iterative approach where we incrementally grow the search space until all points have found their k neighbors. We show that our approach is orders of magnitude faster than existing approaches and can even be used to accelerate fixed-radius neighbor searches.
[ "cs.LG", "cs.CG", "cs.PF" ]
false
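An illustrative CPU-side sketch of the iterative idea described in the abstract above (the paper's contribution is an RT-core GPU implementation, which is not reproduced here): grow the search radius until at least k neighbors fall inside it, then keep the k closest. Function names and parameters are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def iterative_knn(data, queries, k, r0=0.1, growth=2.0):
        """Unbounded kNN by repeatedly enlarging a fixed-radius search."""
        tree = cKDTree(data)
        results = []
        for q in queries:
            r = r0
            idx = tree.query_ball_point(q, r)
            while len(idx) < k:        # not enough neighbours yet: grow radius
                r *= growth
                idx = tree.query_ball_point(q, r)
            idx = np.asarray(idx)
            d = np.linalg.norm(data[idx] - q, axis=1)
            results.append(idx[np.argsort(d)[:k]])
        return np.array(results)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data, queries = rng.random((2000, 3)), rng.random((5, 3))
        ours = iterative_knn(data, queries, k=4)
        _, ref = cKDTree(data).query(queries, k=4)   # exact kNN for comparison
        assert all(set(a) == set(b) for a, b in zip(ours, ref))
        print(ours)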
2306.06109
2023-05-26T04:13:31Z
Learning to Quantize Vulnerability Patterns and Match to Locate Statement-Level Vulnerabilities
[ "Michael Fu", "Trung Le", "Van Nguyen", "Chakkrit Tantithamthavorn", "Dinh Phung" ]
Deep learning (DL) models have become increasingly popular in identifying software vulnerabilities. Prior studies found that vulnerabilities across different vulnerable programs may exhibit similar vulnerable scopes, implicitly forming discernible vulnerability patterns that can be learned by DL models through supervised training. However, vulnerable scopes still manifest in various spatial locations and formats within a program, posing challenges for models to accurately identify vulnerable statements. Despite this challenge, state-of-the-art vulnerability detection approaches fail to exploit the vulnerability patterns that arise in vulnerable programs. To take full advantage of vulnerability patterns and unleash the ability of DL models, we propose a novel vulnerability-matching approach in this paper, drawing inspiration from program analysis tools that locate vulnerabilities based on pre-defined patterns. Specifically, a vulnerability codebook is learned, which consists of quantized vectors representing various vulnerability patterns. During inference, the codebook is iterated to match all learned patterns and predict the presence of potential vulnerabilities within a given program. Our approach was extensively evaluated on a real-world dataset comprising more than 188,000 C/C++ functions. The evaluation results show that our approach achieves an F1-score of 94% (6% higher than the previous best) and 82% (19% higher than the previous best) for function and statement-level vulnerability identification, respectively. These substantial enhancements highlight the effectiveness of our approach to identifying vulnerabilities. The training code and pre-trained models are available at https://github.com/optimatch/optimatch.
[ "cs.CR", "cs.AI", "cs.LG" ]
false
2307.06821
2023-05-26T17:02:29Z
Equalization in Dispersion-Managed Systems Using Learned Digital Back-Propagation
[ "Mohannad Abu-Romoh", "Nelson Costa", "Yves Jaouën", "Antonio Napoli", "João Pedro", "Bernhard Spinnler", "Mansoor Yousefi" ]
In this paper, we investigate the use of the learned digital back-propagation (LDBP) for equalizing dual-polarization fiber-optic transmission in dispersion-managed (DM) links. LDBP is a deep neural network that optimizes the parameters of DBP using the stochastic gradient descent. We evaluate DBP and LDBP in a simulated WDM dual-polarization fiber transmission system operating at the bitrate of 256 Gbit/s per channel, with a dispersion map designed for a 2016 km link with 15% residual dispersion. Our results show that in single-channel transmission, LDBP achieves an effective signal-to-noise ratio improvement of 6.3 dB and 2.5 dB, respectively, over linear equalization and DBP. In WDM transmission, the corresponding $Q$-factor gains are 1.1 dB and 0.4 dB, respectively. Additionally, we conduct a complexity analysis, which reveals that a frequency-domain implementation of LDBP and DBP is more favorable in terms of complexity than the time-domain implementation. These findings demonstrate the effectiveness of LDBP in mitigating the nonlinear effects in DM fiber-optic transmission systems.
[ "cs.NI", "cs.LG", "eess.SP" ]
false
2305.16620
2023-05-26T04:27:48Z
Pedestrian Trajectory Forecasting Using Deep Ensembles Under Sensing Uncertainty
[ "Anshul Nayak", "Azim Eskandarian", "Zachary Doerzaph", "Prasenjit Ghorai" ]
One of the fundamental challenges in the prediction of dynamic agents is robustness. Usually, most predictions are deterministic estimates of future states, which are over-confident and prone to error. Recently, a few works have addressed capturing uncertainty during forecasting of future states. However, these probabilistic estimation methods fail to account for the upstream noise in perception data during tracking. Sensors always have noise, and state estimation becomes even more difficult under adverse weather conditions and occlusion. Traditionally, Bayes filters have been used to fuse information from noisy sensors to update states with associated belief, but they fail to address non-linearities and long-term predictions. Therefore, we propose an end-to-end estimator that can take noisy sensor measurements and make robust future state predictions with uncertainty bounds while simultaneously taking into consideration the upstream perceptual uncertainty. For the current research, we consider an encoder-decoder based deep ensemble network for capturing both perception and predictive uncertainty simultaneously. We compared the current model to other approximate Bayesian inference methods. Overall, deep ensembles provided more robust predictions, and the consideration of upstream uncertainty further increased the estimation accuracy for the model.
[ "cs.RO", "cs.AI", "cs.LG", "cs.SY", "eess.SY" ]
false
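A minimal sketch of how deep-ensemble predictions are typically aggregated, following the standard Gaussian-mixture formulas rather than the exact pipeline above: each member predicts a mean and an aleatoric variance, and the ensemble combines them into a mean and a total (aleatoric plus epistemic) variance.

    import numpy as np

    def ensemble_predict(member_means, member_vars):
        """mu = mean_m(mu_m); sigma2 = mean_m(sigma2_m + mu_m**2) - mu**2."""
        member_means = np.asarray(member_means)   # (M, ...) per-member means
        member_vars = np.asarray(member_vars)     # (M, ...) per-member variances
        mu = member_means.mean(axis=0)
        total_var = (member_vars + member_means**2).mean(axis=0) - mu**2
        return mu, total_var

    if __name__ == "__main__":
        means = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]       # 3 members, 2 future states
        variances = [[0.1, 0.2], [0.15, 0.25], [0.05, 0.2]]
        mu, var = ensemble_predict(means, variances)
        print("mean:", mu, "total variance:", var)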
2305.16892
2023-05-26T12:53:13Z
Feature Adaptation for Sparse Linear Regression
[ "Jonathan Kelner", "Frederic Koehler", "Raghu Meka", "Dhruv Rohatgi" ]
Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian $N(0,\Sigma)$, and we seek an estimator with small excess risk. If the true signal is $t$-sparse, information-theoretically, it is possible to achieve strong recovery guarantees with only $O(t\log n)$ samples. However, computationally efficient algorithms have sample complexity linear in (some variant of) the condition number of $\Sigma$. Classical algorithms such as the Lasso can require significantly more samples than necessary even if there is only a single sparse approximate dependency among the covariates. We provide a polynomial-time algorithm that, given $\Sigma$, automatically adapts the Lasso to tolerate a small number of approximate dependencies. In particular, we achieve near-optimal sample complexity for constant sparsity and if $\Sigma$ has few ``outlier'' eigenvalues. Our algorithm fits into a broader framework of feature adaptation for sparse linear regression with ill-conditioned covariates. With this framework, we additionally provide the first polynomial-factor improvement over brute-force search for constant sparsity $t$ and arbitrary covariance $\Sigma$.
[ "cs.DS", "cs.LG", "math.ST", "stat.ML", "stat.TH" ]
false
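A small illustration of the setting studied in the abstract above, not of the proposed algorithm: a t-sparse signal under a correlated Gaussian design N(0, Sigma), with recovery attempted by a plain Lasso baseline. The covariance structure, sparsity, and regularization strength are arbitrary choices for demonstration.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_features, n_samples, t = 50, 200, 3

    # Correlated covariance: Sigma_ij = rho^|i-j|
    rho = 0.6
    idx = np.arange(n_features)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(n_features), Sigma, size=n_samples)

    beta = np.zeros(n_features)
    beta[rng.choice(n_features, size=t, replace=False)] = 1.0
    y = X @ beta + 0.1 * rng.standard_normal(n_samples)

    lasso = Lasso(alpha=0.05).fit(X, y)
    recovered = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
    print("true support: ", np.flatnonzero(beta))
    print("lasso support:", recovered)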
2305.17052
2023-05-26T16:00:59Z
A Framework for Incentivized Collaborative Learning
[ "Xinran Wang", "Qi Le", "Ahmad Faraz Khan", "Jie Ding", "Ali Anwar" ]
Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by a single entity alone. This is likely due to factors such as security constraints, privacy concerns, and limitations in computation resources. As a result, collaborative learning (CL) research has been gaining momentum. However, a significant challenge in practical applications of CL is how to effectively incentivize multiple entities to collaborate before any collaboration occurs. In this study, we propose ICL, a general framework for incentivized collaborative learning, and provide insights into the critical issue of when and why incentives can improve collaboration performance. Furthermore, we show the broad applicability of ICL to specific cases in federated learning, assisted learning, and multi-armed bandit with both theory and experimental results.
[ "cs.LG", "cs.AI", "cs.CY", "cs.GT", "cs.MA" ]
false
2305.19279
2023-05-26T08:17:22Z
Data-Driven Games in Computational Mechanics
[ "Kerstin Weinberg", "Laurent Strainier", "Sergio Conti", "Michael Ortiz" ]
We resort to game theory in order to formulate Data-Driven methods for solid mechanics in which stress and strain players pursue different objectives. The objective of the stress player is to minimize the discrepancy to a material data set, whereas the objective of the strain player is to ensure the admissibility of the mechanical state, in the sense of compatibility and equilibrium. We show that, unlike the cooperative Data-Driven games proposed in the past, the new non-cooperative Data-Driven games identify an effective material law from the data and reduce to conventional displacement boundary-value problems, which facilitates their practical implementation. However, unlike supervised machine learning methods, the proposed non-cooperative Data-Driven games are unsupervised, ansatz-free and parameter-free. In particular, the effective material law is learned from the data directly, without recourse to regression to a parameterized class of functions such as neural networks. We present analysis that elucidates sufficient conditions for convergence of the Data-Driven solutions with respect to the data. We also present selected examples of implementation and application that demonstrate the range and versatility of the approach.
[ "cs.CE", "cond-mat.mtrl-sci", "cs.LG", "cs.NA", "math.NA" ]
false
2305.16748
2023-05-26T08:50:49Z
A Decentralized Spike-based Learning Framework for Sequential Capture in Discrete Perimeter Defense Problem
[ "Mohammed Thousif", "Shridhar Velhal", "Suresh Sundaram", "Shirin Dora" ]
This paper proposes a novel Decentralized Spike-based Learning (DSL) framework for the discrete Perimeter Defense Problem (d-PDP). A team of defenders is operating on the perimeter to protect the circular territory from radially incoming intruders. At first, the d-PDP is formulated as a spatio-temporal multi-task assignment problem (STMTA). The problem of STMTA is then converted into a multi-label learning problem to obtain labels of segments that defenders have to visit in order to protect the perimeter. The DSL framework uses a Multi-Label Classifier using Synaptic Efficacy Function spiking neuRON (MLC-SEFRON) network for deterministic multi-label learning. Each defender contains a single MLC-SEFRON network. Each MLC-SEFRON network is trained independently using input from its own perspective for decentralized operations. The input spikes to the MLC-SEFRON network can be directly obtained from the spatio-temporal information of defenders and intruders without any extra pre-processing step. The output of MLC-SEFRON contains the labels of segments that a defender has to visit in order to protect the perimeter. Based on the multi-label output from the MLC-SEFRON a trajectory is generated for a defender using a Consensus-Based Bundle Algorithm (CBBA) in order to capture the intruders. The target multi-label output for training MLC-SEFRON is obtained from an expert policy. Also, the MLC-SEFRON trained for a defender can be directly used for obtaining labels of segments assigned to another defender without any retraining. The performance of MLC-SEFRON has been evaluated for full observation and partial observation scenarios of the defender. The overall performance of the DSL framework is then compared with expert policy along with other existing learning algorithms. The scalability of the DSL has been evaluated using an increasing number of defenders.
[ "cs.RO", "cs.AI", "cs.LG", "cs.MA", "cs.NE", "cs.SY", "eess.SY" ]
false
2305.17313
2023-05-27T00:17:19Z
Super-Resolution of License Plate Images Using Attention Modules and Sub-Pixel Convolution Layers
[ "Valfride Nascimento", "Rayson Laroca", "Jorge de A. Lambert", "William Robson Schwartz", "David Menotti" ]
Recent years have seen significant developments in the field of License Plate Recognition (LPR) through the integration of deep learning techniques and the increasing availability of training data. Nevertheless, reconstructing license plates (LPs) from low-resolution (LR) surveillance footage remains challenging. To address this issue, we introduce a Single-Image Super-Resolution (SISR) approach that integrates attention and transformer modules to enhance the detection of structural and textural features in LR images. Our approach incorporates sub-pixel convolution layers (also known as PixelShuffle) and a loss function that uses an Optical Character Recognition (OCR) model for feature extraction. We trained the proposed architecture on synthetic images created by applying heavy Gaussian noise to high-resolution LP images from two public datasets, followed by bicubic downsampling. As a result, the generated images have a Structural Similarity Index Measure (SSIM) of less than 0.10. Our results show that our approach for reconstructing these low-resolution synthesized images outperforms existing ones in both quantitative and qualitative measures. Our code is publicly available at https://github.com/valfride/lpr-rsr-ext/
[ "cs.CV" ]
false
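A minimal sketch of a sub-pixel convolution upsampling block of the kind the abstract above refers to (a convolution followed by PixelShuffle); the paper's full architecture, attention modules, and OCR-based loss are not reproduced here, and all names below are illustrative.

    import torch
    import torch.nn as nn

    class SubPixelUpsample(nn.Module):
        """Conv2d expands channels by scale**2 so PixelShuffle can rearrange
        them into a (scale x scale) larger spatial grid."""
        def __init__(self, channels, scale):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels * scale**2, kernel_size=3, padding=1)
            self.shuffle = nn.PixelShuffle(scale)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.shuffle(self.conv(x)))

    if __name__ == "__main__":
        x = torch.randn(1, 64, 16, 32)      # low-resolution feature map
        up = SubPixelUpsample(channels=64, scale=2)
        print(up(x).shape)                  # torch.Size([1, 64, 32, 64])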
2305.17374
2023-05-27T05:37:02Z
LE2Fusion: A novel local edge enhancement module for infrared and visible image fusion
[ "Yongbiao Xiao", "Hui Li", "Chunyang Cheng", "Xiaoning Song" ]
The infrared and visible image fusion task aims to generate a fused image which contains salient features and rich texture details from multi-source images. However, under complex illumination conditions, few algorithms pay attention to the edge information of local regions, which is crucial for downstream tasks. To this end, we propose a fusion network based on local edge enhancement, named LE2Fusion. Specifically, a local edge enhancement (LE2) module is proposed to improve the edge information under complex illumination conditions and preserve the essential features of the image. For feature extraction, a multi-scale residual attention (MRA) module is applied to extract rich features. Then, with LE2, a set of enhancement weights is generated, which is utilized in the feature fusion strategy and used to guide the image reconstruction. To better preserve the local detail information and structure information, a pixel intensity loss function based on the local region is also presented. The experiments demonstrate that the proposed method exhibits better fusion performance than the state-of-the-art fusion methods on public datasets.
[ "cs.CV" ]
false
2305.17451
2023-05-27T11:30:32Z
Analysis over vision-based models for pedestrian action anticipation
[ "Lina Achaji", "Julien Moreau", "François Aioun", "François Charpillet" ]
Anticipating human actions in front of autonomous vehicles is a challenging task. Several papers have recently proposed model architectures to address this problem by combining multiple input features to predict pedestrian crossing actions. This paper focuses specifically on using images of the pedestrian's context as an input feature. We present several spatio-temporal model architectures that utilize standard CNN and Transformer modules to serve as a backbone for pedestrian anticipation. However, the objective of this paper is not to surpass state-of-the-art benchmarks but rather to analyze the positive and negative predictions of these models. Therefore, we provide insights on the explainability of vision-based Transformer models in the context of pedestrian action prediction. We will highlight cases where the model can achieve correct quantitative results but falls short in providing human-like explanations qualitatively, emphasizing the importance of investing in explainability for pedestrian action anticipation problems.
[ "cs.CV" ]
false
2305.17463
2023-05-27T12:41:23Z
Pentagon-Match (PMatch): Identification of View-Invariant Planar Feature for Local Feature Matching-Based Homography Estimation
[ "Yueh-Cheng Huang", "Chen-Tao Hsu", "Jen-Hui Chuang" ]
In computer vision, finding correct point correspondences among images plays an important role in many applications, such as image stitching, image retrieval, visual localization, etc. Most research works focus on the matching of local features before a sampling method, such as RANSAC, is employed to verify the initial matching results via repeated fitting of a certain global transformation among the images. However, incorrect matches may still exist. Thus, a novel sampling scheme, Pentagon-Match (PMatch), is proposed in this work to verify the correctness of initially matched keypoints using pentagons randomly sampled from them. By ensuring the shape and location of these pentagons are view-invariant through various evaluations of the cross-ratio (CR), incorrect keypoint matches can be identified easily with the homography estimated from correctly matched pentagons. Experimental results show that highly accurate estimation of homography can be obtained efficiently for planar scenes of the HPatches dataset, based on keypoint matching results provided by LoFTR. In addition, accurate outlier identification for the above matching results and a possible extension of the approach to multi-plane situations are also demonstrated.
[ "cs.CV" ]
false
2305.17522
2023-05-27T16:37:41Z
Deep Learning based Fingerprint Presentation Attack Detection: A Comprehensive Survey
[ "Hailin Li", "Raghavendra Ramachandra" ]
The vulnerabilities of fingerprint authentication systems have raised security concerns when adapting them to highly secure access-control applications. Therefore, Fingerprint Presentation Attack Detection (FPAD) methods are essential for ensuring reliable fingerprint authentication. Owing to the lack of generalization capacity of traditional handcrafted approaches, deep learning-based FPAD has become mainstream and has achieved remarkable performance in the past decade. Existing reviews, which are now outdated, have focused more on hand-crafted than on deep learning-based methods. To stimulate future research, we will concentrate only on recent deep-learning-based FPAD methods. In this paper, we first briefly introduce the most common Presentation Attack Instruments (PAIs) and publicly available fingerprint Presentation Attack (PA) datasets. We then describe the existing deep-learning FPAD methods by categorizing them into contact, contactless, and smartphone-based approaches. Finally, we conclude the paper by discussing the open challenges at the current stage and emphasizing potential future perspectives.
[ "cs.CV" ]
false
2305.17569
2023-05-27T20:12:19Z
Collaborative Multi-Agent Video Fast-Forwarding
[ "Shuyue Lan", "Zhilu Wang", "Ermin Wei", "Amit K. Roy-Chowdhury", "Qi Zhu" ]
Multi-agent applications have recently gained significant popularity. In many computer vision tasks, a network of agents, such as a team of robots with cameras, could work collaboratively to perceive the environment for efficient and accurate situation awareness. However, these agents often have limited computation, communication, and storage resources. Thus, reducing resource consumption while still providing an accurate perception of the environment becomes an important goal when deploying multi-agent systems. To achieve this goal, we identify and leverage the overlap among different camera views in multi-agent systems for reducing the processing, transmission and storage of redundant/unimportant video frames. Specifically, we have developed two collaborative multi-agent video fast-forwarding frameworks in distributed and centralized settings, respectively. In these frameworks, each individual agent can selectively process or skip video frames at adjustable paces based on multiple strategies via reinforcement learning. Multiple agents then collaboratively sense the environment via either 1) a consensus-based distributed framework called DMVF that periodically updates the fast-forwarding strategies of agents by establishing communication and consensus among connected neighbors, or 2) a centralized framework called MFFNet that utilizes a central controller to decide the fast-forwarding strategies for agents based on collected data. We demonstrate the efficacy and efficiency of our proposed frameworks on a real-world surveillance video dataset VideoWeb and a new simulated driving dataset CarlaSim, through extensive simulations and deployment on an embedded platform with TCP communication. We show that compared with other approaches in the literature, our frameworks achieve better coverage of important frames, while significantly reducing the number of frames processed at each agent.
[ "cs.CV" ]
false
2305.17338
2023-05-27T02:38:54Z
Multi-label Video Classification for Underwater Ship Inspection
[ "Md Abulkalam Azad", "Ahmed Mohammed", "Maryna Waszak", "Brian Elvesæter", "Martin Ludvigsen" ]
Today ship hull inspection including the examination of the external coating, detection of defects, and other types of external degradation such as corrosion and marine growth is conducted underwater by means of Remotely Operated Vehicles (ROVs). The inspection process consists of a manual video analysis which is a time-consuming and labor-intensive process. To address this, we propose an automatic video analysis system using deep learning and computer vision to improve upon existing methods that only consider spatial information on individual frames in underwater ship hull video inspection. By exploring the benefits of adding temporal information and analyzing frame-based classifiers, we propose a multi-label video classification model that exploits the self-attention mechanism of transformers to capture spatiotemporal attention in consecutive video frames. Our proposed method has demonstrated promising results and can serve as a benchmark for future research and development in underwater video inspection applications.
[ "cs.CV", "cs.AI" ]
false
2305.17355
2023-05-27T03:37:33Z
Rethinking PRL: A Multiscale Progressively Residual Learning Network for Inverse Halftoning
[ "Feiyu Li", "Jun Yang" ]
Image inverse halftoning is a classic image restoration task, aiming to recover continuous-tone images from halftone images that contain only bilevel pixels. Because halftone images lose much of the original image content, inverse halftoning is a classic ill-posed problem. Although existing inverse halftoning algorithms achieve good performance, their results lose image details and features. Therefore, it is still a challenge to recover high-quality continuous-tone images. In this paper, we propose an end-to-end multiscale progressively residual learning network (MSPRL), which has a UNet architecture and takes multiscale input images. To make full use of different input image information, we design a shallow feature extraction module to capture similar features between images of different scales. We systematically study the performance of different methods and compare them with our proposed method. In addition, we employ different training strategies to optimize the model, which is important for optimizing the training process and improving performance. Extensive experiments demonstrate that our MSPRL model obtains considerable performance gains in detail restoration.
[ "cs.CV", "eess.IV" ]
false
2305.17368
2023-05-27T04:55:13Z
Instance-based Max-margin for Practical Few-shot Recognition
[ "Minghao Fu", "Ke Zhu", "Jianxin Wu" ]
In order to mimic the human few-shot learning (FSL) ability better and to make FSL closer to real-world applications, this paper proposes a practical FSL (pFSL) setting. pFSL is based on unsupervised pretrained models (analogous to human prior knowledge) and recognizes many novel classes simultaneously. Compared to traditional FSL, pFSL is simpler in its formulation, easier to evaluate, more challenging and more practical. To cope with the rarity of training examples, this paper proposes IbM2, an instance-based max-margin method not only for the new pFSL setting, but also works well in traditional FSL scenarios. Based on the Gaussian Annulus Theorem, IbM2 converts random noise applied to the instances into a mechanism to achieve maximum margin in the many-way pFSL (or traditional FSL) recognition task. Experiments with various self-supervised pretraining methods and diverse many- or few-way FSL tasks show that IbM2 almost always leads to improvements compared to its respective baseline methods, and in most cases the improvements are significant. With both the new pFSL setting and novel IbM2 method, this paper shows that practical few-shot learning is both viable and promising.
[ "cs.CV", "cs.AI" ]
false
2305.17388
2023-05-27T06:46:42Z
MPCHAT: Towards Multimodal Persona-Grounded Conversation
[ "Jaewoo Ahn", "Yeda Song", "Sangdoo Yun", "Gunhee Kim" ]
In order to build self-consistent personalized dialogue agents, previous research has mostly focused on textual persona that delivers personal facts or personalities. However, to fully describe the multi-faceted nature of persona, image modality can help better reveal the speaker's personal characteristics and experiences in episodic memory (Rubin et al., 2003; Conway, 2009). In this work, we extend persona-based dialogue to the multimodal domain and make two main contributions. First, we present the first multimodal persona-based dialogue dataset named MPCHAT, which extends persona with both text and images to contain episodic memories. Second, we empirically show that incorporating multimodal persona, as measured by three proposed multimodal persona-grounded dialogue tasks (i.e., next response prediction, grounding persona prediction, and speaker identification), leads to statistically significant performance improvements across all tasks. Thus, our work highlights that multimodal persona is crucial for improving multimodal dialogue comprehension, and our MPCHAT serves as a high-quality resource for this research.
[ "cs.CL", "cs.CV" ]
false
2305.17398
2023-05-27T07:40:07Z
NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images
[ "Yuan Liu", "Peng Wang", "Cheng Lin", "Xiaoxiao Long", "Jiepeng Wang", "Lingjie Liu", "Taku Komura", "Wenping Wang" ]
We present a neural rendering-based method called NeRO for reconstructing the geometry and the BRDF of reflective objects from multiview images captured in an unknown environment. Multiview reconstruction of reflective objects is extremely challenging because specular reflections are view-dependent and thus violate the multiview consistency, which is the cornerstone for most multiview reconstruction methods. Recent neural rendering techniques can model the interaction between environment lights and the object surfaces to fit the view-dependent reflections, thus making it possible to reconstruct reflective objects from multiview images. However, accurately modeling environment lights in the neural rendering is intractable, especially when the geometry is unknown. Most existing neural rendering methods, which can model environment lights, only consider direct lights and rely on object masks to reconstruct objects with weak specular reflections. Therefore, these methods fail to reconstruct reflective objects, especially when the object mask is not available and the object is illuminated by indirect lights. We propose a two-step approach to tackle this problem. First, by applying the split-sum approximation and the integrated directional encoding to approximate the shading effects of both direct and indirect lights, we are able to accurately reconstruct the geometry of reflective objects without any object masks. Then, with the object geometry fixed, we use more accurate sampling to recover the environment lights and the BRDF of the object. Extensive experiments demonstrate that our method is capable of accurately reconstructing the geometry and the BRDF of reflective objects from only posed RGB images without knowing the environment lights and the object masks. Codes and datasets are available at https://github.com/liuyuan-pal/NeRO.
[ "cs.CV", "cs.GR" ]
false
2305.17431
2023-05-27T10:03:36Z
Towards Consistent Video Editing with Text-to-Image Diffusion Models
[ "Zicheng Zhang", "Bonan Li", "Xuecheng Nie", "Congying Han", "Tiande Guo", "Luoqi Liu" ]
Existing works have advanced Text-to-Image (TTI) diffusion models for video editing in a one-shot learning manner. Despite their low requirements of data and computation, these methods might produce results of unsatisfied consistency with text prompt as well as temporal sequence, limiting their applications in the real world. In this paper, we propose to address the above issues with a novel EI$^2$ model towards \textbf{E}nhancing v\textbf{I}deo \textbf{E}diting cons\textbf{I}stency of TTI-based frameworks. Specifically, we analyze and find that the inconsistent problem is caused by newly added modules into TTI models for learning temporal information. These modules lead to covariate shift in the feature space, which harms the editing capability. Thus, we design EI$^2$ to tackle the above drawbacks with two classical modules: Shift-restricted Temporal Attention Module (STAM) and Fine-coarse Frame Attention Module (FFAM). First, through theoretical analysis, we demonstrate that covariate shift is highly related to Layer Normalization, thus STAM employs a \textit{Instance Centering} layer replacing it to preserve the distribution of temporal features. In addition, {STAM} employs an attention layer with normalized mapping to transform temporal features while constraining the variance shift. As the second part, we incorporate {STAM} with a novel {FFAM}, which efficiently leverages fine-coarse spatial information of overall frames to further enhance temporal consistency. Extensive experiments demonstrate the superiority of the proposed EI$^2$ model for text-driven video editing.
[ "cs.CV", "cs.AI" ]
false
2305.17433
2023-05-27T10:06:03Z
A Unified Framework for Slot based Response Generation in a Multimodal Dialogue System
[ "Mauajama Firdaus", "Avinash Madasu", "Asif Ekbal" ]
Natural Language Understanding (NLU) and Natural Language Generation (NLG) are the two critical components of every conversational system that handles the task of understanding the user by capturing the necessary information in the form of slots and generating an appropriate response in accordance with the extracted information. Recently, dialogue systems integrated with complementary information such as images, audio, or video have gained immense popularity. In this work, we propose an end-to-end framework with the capability to extract necessary slot values from the utterance and generate a coherent response, thereby assisting the user to achieve their desired goals in a multimodal dialogue system having both textual and visual information. The task of extracting the necessary information depends not only on the text but also on the visual cues present in the dialogue. Similarly, for generation, the previous dialogue context comprising multimodal information is significant for providing coherent and informative responses. We employ a multimodal hierarchical encoder using pre-trained DialoGPT and also exploit the knowledge base (Kb) to provide a stronger context for both tasks. Finally, we design a slot attention mechanism to focus on the necessary information in a given utterance. Lastly, a decoder generates the corresponding response for the given dialogue context and the extracted slot values. Experimental results on the Multimodal Dialogue Dataset (MMD) show that the proposed framework outperforms the baseline approaches in both tasks. The code is available at https://github.com/avinashsai/slot-gpt.
[ "cs.CV", "cs.CL" ]
false
2305.17477
2023-05-27T13:47:25Z
BASED: Benchmarking, Analysis, and Structural Estimation of Deblurring
[ "Nikita Alutis", "Egor Chistov", "Mikhail Dremin", "Dmitriy Vatolin" ]
This paper discusses the challenges of evaluating the quality of deblurring methods and proposes a reduced-reference metric based on machine learning. Traditional quality-assessment metrics such as PSNR and SSIM are common for this task, but not only do they correlate poorly with subjective assessments, they also require ground-truth (GT) frames, which can be difficult to obtain in the case of deblurring. To develop and evaluate our metric, we created a new motion-blur dataset using a beam splitter. The setup captured various motion types using a static camera, as most scenes in existing datasets include blur due to camera motion. We also conducted two large subjective comparisons to aid in metric development. Our resulting metric requires no GT frames, and it correlates well with subjective human perception of blur.
[ "cs.CV", "cs.MM" ]
false
2305.17520
2023-05-27T16:33:43Z
USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution
[ "Vikrant Rangnekar", "Uddeshya Upadhyay", "Zeynep Akata", "Biplab Banerjee" ]
Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc. However, the high cost of annotation and labeling makes it challenging to achieve accurate results. We propose incorporating active learning into dense regression models to address this problem. Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance. Despite its potential, active learning has not been widely explored in high-dimensional computer vision regression tasks like super-resolution. We address this research gap and propose a new framework called USIM-DAL that leverages the statistical properties of colour images to learn informative priors using probabilistic deep neural networks that model the heteroscedastic predictive distribution allowing uncertainty quantification. Moreover, the aleatoric uncertainty from the network serves as a proxy for error that is used for active learning. Our experiments on a wide variety of datasets spanning applications in natural images (visual genome, BSD100), medical imaging (histopathology slides), and remote sensing (satellite images) demonstrate the efficacy of the newly proposed USIM-DAL and superiority over several dense regression active learning methods.
[ "cs.CV", "cs.AI" ]
false
2305.17559
2023-05-27T19:22:25Z
Pruning at Initialization -- A Sketching Perspective
[ "Noga Bar", "Raja Giryes" ]
The lottery ticket hypothesis (LTH) has drawn increased attention to pruning neural networks at initialization. We study this problem in the linear setting. We show that finding a sparse mask at initialization is equivalent to the sketching problem introduced for efficient matrix multiplication. This gives us tools to analyze the LTH problem and gain insights into it. Specifically, using the mask found at initialization, we bound the approximation error of the pruned linear model at the end of training. We theoretically justify previous empirical evidence that the search for sparse networks may be data independent. By using the sketching perspective, we suggest a generic improvement to existing algorithms for pruning at initialization, which we show to be beneficial in the data-independent case.
[ "cs.LG", "cs.CV" ]
false
2305.17565
2023-05-27T19:58:11Z
Self-Supervised Learning of Action Affordances as Interaction Modes
[ "Liquan Wang", "Nikita Dvornik", "Rafael Dubeau", "Mayank Mittal", "Animesh Garg" ]
When humans perform a task with an articulated object, they interact with the object only in a handful of ways, while the space of all possible interactions is nearly endless. This is because humans have prior knowledge about what interactions are likely to be successful, i.e., to open a new door we first try the handle. While learning such priors without supervision is easy for humans, it is notoriously hard for machines. In this work, we tackle unsupervised learning of priors of useful interactions with articulated objects, which we call interaction modes. In contrast to the prior art, we use no supervision or privileged information; we only assume access to the depth sensor in the simulator to learn the interaction modes. More precisely, we define a successful interaction as the one changing the visual environment substantially and learn a generative model of such interactions, that can be conditioned on the desired goal state of the object. In our experiments, we show that our model covers most of the human interaction modes, outperforms existing state-of-the-art methods for affordance learning, and can generalize to objects never seen during training. Additionally, we show promising results in the goal-conditional setup, where our model can be quickly fine-tuned to perform a given task. We show in the experiments that such affordance learning predicts interaction which covers most modes of interaction for the querying articulated object and can be fine-tuned to a goal-conditional model. For supplementary: https://actaim.github.io.
[ "cs.CV", "cs.RO" ]
false
2305.18361
2023-05-27T03:55:19Z
Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging
[ "Yiqian Wang", "Alexandra Warter", "Melina Cavichini", "Varsha Alex", "Dirk-Uwe G. Bartsch", "William R. Freeman", "Truong Q. Nguyen", "Cheolhong An" ]
Optical Coherence Tomography (OCT) is one of the most important retinal imaging techniques. However, involuntary motion artifacts still pose a major challenge in OCT imaging that compromises the quality of downstream analysis, such as retinal layer segmentation and OCT Angiography. We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single volumetric scan. The proposed method consists of two fully-convolutional neural networks that predict Z and X dimensional displacement maps sequentially in two stages. The experimental results show that the proposed method can effectively correct motion artifacts and achieves a smaller error than other methods. Specifically, the method can recover the overall curvature of the retina, and generalizes well to various diseases and resolutions.
[ "eess.IV", "cs.CV" ]
false
2306.04644
2023-05-27T14:33:01Z
Decom-CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map
[ "Yuguang Yang", "Runtang Guo", "Sheng Wu", "Yimi Wang", "Juan Zhang", "Xuan Gong", "Baochang Zhang" ]
Interpretation of deep learning remains a very challenging problem. Although the Class Activation Map (CAM) is widely used to interpret deep model predictions by highlighting object location, it fails to provide insight into the salient features used by the model to make decisions. Furthermore, existing evaluation protocols often overlook the correlation between interpretability performance and the model's decision quality, which presents a more fundamental issue. This paper proposes a new two-stage interpretability method called the Decomposition Class Activation Map (Decom-CAM), which offers a feature-level interpretation of the model's prediction. Decom-CAM decomposes intermediate activation maps into orthogonal features using singular value decomposition and generates saliency maps by integrating them. The orthogonality of features enables CAM to capture local features and can be used to pinpoint semantic components such as eyes, noses, and faces in the input image, making it more beneficial for deep model interpretation. To ensure a comprehensive comparison, we introduce a new evaluation protocol by dividing the dataset into subsets based on classification accuracy results and evaluating the interpretability performance on each subset separately. Our experiments demonstrate that the proposed Decom-CAM outperforms current state-of-the-art methods significantly by generating more precise saliency maps across all levels of classification accuracy. Combined with our feature-level interpretability approach, this paper could pave the way for a new direction for understanding the decision-making process of deep neural networks.
[ "cs.CV", "cs.AI" ]
false
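A hypothetical sketch of the decomposition step described in the abstract above: an intermediate activation tensor (C, H, W) is flattened to (C, H*W), factored with SVD, and the top right-singular vectors are reshaped into orthogonal spatial component maps that can be combined into a saliency map. Function names, component count, and the weighting scheme are assumptions, not the authors' exact method.

    import numpy as np

    def decompose_activation(activation, n_components=3):
        """SVD-based decomposition of a (C, H, W) activation tensor."""
        C, H, W = activation.shape
        A = activation.reshape(C, H * W)
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        components = Vt[:n_components].reshape(n_components, H, W)   # orthogonal spatial maps
        weights = S[:n_components]
        saliency = np.tensordot(weights, components, axes=1)         # weighted combination
        return components, saliency

    if __name__ == "__main__":
        activation = np.random.rand(256, 14, 14)     # e.g. a CNN/ViT feature map
        components, saliency = decompose_activation(activation)
        print(components.shape, saliency.shape)      # (3, 14, 14) (14, 14)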
2305.17318
2023-05-27T00:47:39Z
Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion
[ "Can Cui", "Yunsheng Ma", "Juanwu Lu", "Ziran Wang" ]
Sensor fusion is a crucial augmentation technique for improving the accuracy and reliability of perception systems for automated vehicles under diverse driving conditions. However, adverse weather and low-light conditions remain challenging, where sensor performance degrades significantly, exposing vehicle safety to potential risks. Advanced sensors such as LiDARs can help mitigate the issue but with extremely high marginal costs. In this paper, we propose a novel transformer-based 3D object detection model "REDFormer" to tackle low visibility conditions, exploiting the power of a more practical and cost-effective solution by leveraging bird's-eye-view camera-radar fusion. Using the nuScenes dataset with multi-radar point clouds, weather information, and time-of-day data, our model outperforms state-of-the-art (SOTA) models on classification and detection accuracy. Finally, we provide extensive ablation studies of each model component on their contributions to address the above-mentioned challenges. Particularly, it is shown in the experiments that our model achieves a significant performance improvement over the baseline model in low-visibility scenarios, specifically exhibiting a 31.31% increase in rainy scenes and a 46.99% enhancement in nighttime scenes. The source code of this study is publicly available.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
2305.17370
2023-05-27T05:09:03Z
Vision Transformers for Small Histological Datasets Learned through Knowledge Distillation
[ "Neel Kanwal", "Trygve Eftestol", "Farbod Khoraminia", "Tahlita CM Zuiverloon", "Kjersti Engan" ]
Computational Pathology (CPATH) systems have the potential to automate diagnostic tasks. However, artifacts on the digitized histological glass slides, known as Whole Slide Images (WSIs), may hamper the overall performance of CPATH systems. Deep Learning (DL) models such as Vision Transformers (ViTs) may detect and exclude artifacts before running the diagnostic algorithm. A simple way to develop robust and generalized ViTs is to train them on massive datasets. Unfortunately, acquiring large medical datasets is expensive and inconvenient, prompting the need for a generalized artifact detection method for WSIs. In this paper, we present a student-teacher recipe to improve the classification performance of ViTs for the air bubble detection task. A ViT trained under the student-teacher framework boosts its performance by distilling existing knowledge from a high-capacity teacher model. Our best-performing ViT yields an F1-score of 0.961 and an MCC of 0.911, a 7% gain in MCC over stand-alone training. The proposed method presents a new perspective on leveraging knowledge distillation over transfer learning to encourage the use of customized transformers for efficient preprocessing pipelines in CPATH systems.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
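The student-teacher recipe in the record above relies on a standard distillation objective: the student matches the teacher's softened output distribution while still fitting the hard labels. A minimal PyTorch sketch of that loss follows; the temperature, mixing weight, and binary artifact labels are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target knowledge distillation: blend KL to the teacher with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a high-capacity teacher guides a small ViT student on a binary artifact task.
student_logits = torch.randn(8, 2, requires_grad=True)
teacher_logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```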
2305.17438
2023-05-27T10:26:23Z
On the Importance of Backbone to the Adversarial Robustness of Object Detectors
[ "Xiao Li", "Hang Chen", "Xiaolin Hu" ]
Object detection is a critical component of various security-sensitive applications, such as autonomous driving and video surveillance. However, existing deep learning-based object detectors are vulnerable to adversarial attacks, which poses a significant challenge to their reliability and safety. Through experiments, we found that existing works on improving the adversarial robustness of object detectors have given a false sense of security. We argue that using adversarially pre-trained backbone networks is essential for enhancing the adversarial robustness of object detectors. We propose a simple yet effective recipe for fast adversarial fine-tuning on object detectors with adversarially pre-trained backbones. Without any modifications to the structure of object detectors, our recipe achieved significantly better adversarial robustness than previous works. Moreover, we explore the potential of different modern object detectors to improve adversarial robustness using our recipe and demonstrate several interesting findings. Our empirical results set a new milestone and deepen the understanding of adversarially robust object detection. Code and trained checkpoints will be publicly available.
[ "cs.CV", "cs.AI", "cs.CR", "cs.LG" ]
false
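The recipe above combines an adversarially pre-trained backbone with fast adversarial fine-tuning. The sketch below shows only the generic mechanics on a toy classifier: generate single-step FGSM perturbations on the fly and fine-tune with a smaller learning rate on the backbone. The detection-specific losses, attack budget, and learning rates are assumptions; the paper may use a different attack or schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, eps=8 / 255):
    """Single-step FGSM perturbation generated inside the fine-tuning loop."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + eps * grad.sign()).clamp(0, 1).detach()

# Stand-in for "adversarially pre-trained backbone + lightweight task head".
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten()
)
head = nn.Linear(16, 10)
model = nn.Sequential(backbone, head)

# Fine-tune mostly the head; the backbone gets a much smaller learning rate.
optimizer = torch.optim.SGD(
    [{"params": backbone.parameters(), "lr": 1e-4}, {"params": head.parameters(), "lr": 1e-2}],
    momentum=0.9,
)
images, labels = torch.rand(4, 3, 64, 64), torch.randint(0, 10, (4,))
adv_images = fgsm_perturb(model, images, labels)
loss = F.cross_entropy(model(adv_images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```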
2305.17456
2023-05-27T12:12:53Z
Trustworthy Deep Learning for Medical Image Segmentation
[ "Lucas Fidon" ]
Despite the recent success of deep learning methods at achieving new state-of-the-art accuracy for medical image segmentation, some major limitations are still restricting their deployment into clinics. One major limitation of deep learning-based segmentation methods is their lack of robustness to variability in the image acquisition protocol and in the imaged anatomy that were not represented or were underrepresented in the training dataset. This suggests adding new manually segmented images to the training dataset to better cover the image variability. However, in most cases, the manual segmentation of medical images requires highly skilled raters and is time-consuming, making this solution prohibitively expensive. Even when manually segmented images from different sources are available, they are rarely annotated for exactly the same regions of interest. This poses an additional challenge for current state-of-the-art deep learning segmentation methods that rely on supervised learning and therefore require all the regions of interest to be segmented for all the images to be used for training. This thesis introduces new mathematical and optimization methods to mitigate those limitations.
[ "eess.IV", "cs.CV", "cs.LG" ]
false
2305.17478
2023-05-27T13:49:35Z
Deep Variational Lesion-Deficit Mapping
[ "Guilherme Pombo", "Robert Gray", "Amy P. K. Nelson", "Chris Foulon", "John Ashburner", "Parashkev Nachev" ]
Causal mapping of the functional organisation of the human brain requires evidence of necessity available at adequate scale only from pathological lesions of natural origin. This demands inferential models with sufficient flexibility to capture both the observable distribution of pathological damage and the unobserved distribution of the neural substrate. Current model frameworks -- both mass-univariate and multivariate -- either ignore distributed lesion-deficit relations or do not model them explicitly, relying on featurization incidental to a predictive task. Here we initiate the application of deep generative neural network architectures to the task of lesion-deficit inference, formulating it as the estimation of an expressive hierarchical model of the joint lesion and deficit distributions conditioned on a latent neural substrate. We implement such deep lesion deficit inference with variational convolutional volumetric auto-encoders. We introduce a comprehensive framework for lesion-deficit model comparison, incorporating diverse candidate substrates, forms of substrate interactions, sample sizes, noise corruption, and population heterogeneity. Drawing on 5500 volume images of ischaemic stroke, we show that our model outperforms established methods by a substantial margin across all simulation scenarios, including comparatively small-scale and noisy data regimes. Our analysis justifies the widespread adoption of this approach, for which we provide an open source implementation: https://github.com/guilherme-pombo/vae_lesion_deficit
[ "cs.LG", "cs.CV", "stat.AP", "stat.ML" ]
false
2305.17530
2023-05-27T17:16:27Z
PuMer: Pruning and Merging Tokens for Efficient Vision Language Models
[ "Qingqing Cao", "Bhargavi Paranjape", "Hannaneh Hajishirzi" ]
Large-scale vision language (VL) models use Transformers to perform cross-modal interactions between the input text and image. These cross-modal interactions are computationally expensive and memory-intensive due to the quadratic complexity of processing the input image and text. We present PuMer: a token reduction framework that uses text-informed Pruning and modality-aware Merging strategies to progressively reduce the tokens of input image and text, improving model inference speed and reducing memory footprint. PuMer learns to keep salient image tokens related to the input text and merges similar textual and visual tokens by adding lightweight token reducer modules at several cross-modal layers in the VL model. Training PuMer is mostly the same as finetuning the original VL model but faster. Our evaluation for two vision language models on four downstream VL tasks shows PuMer increases inference throughput by up to 2x and reduces memory footprint by over 50% while incurring less than a 1% accuracy drop.
[ "cs.CV", "cs.AI", "cs.CL" ]
false
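PuMer's two ingredients are text-informed pruning and similarity-aware merging of tokens. The toy sketch below keeps the image tokens most similar to a pooled text representation and then merges adjacent survivors by averaging; the scoring rule, keep ratio, and pairwise merge are simplifications chosen for brevity, not the paper's learned token reducers.

```python
import torch
import torch.nn.functional as F

def prune_by_text(image_tokens, text_tokens, keep_ratio=0.5):
    """Keep the image tokens most similar to the (mean-pooled) text representation."""
    text_query = text_tokens.mean(dim=1, keepdim=True)                    # (B, 1, D)
    scores = (F.normalize(image_tokens, dim=-1)
              @ F.normalize(text_query, dim=-1).transpose(1, 2)).squeeze(-1)  # (B, N)
    k = max(1, int(image_tokens.size(1) * keep_ratio))
    idx = scores.topk(k, dim=1).indices                                   # (B, k)
    return torch.gather(image_tokens, 1, idx.unsqueeze(-1).expand(-1, -1, image_tokens.size(-1)))

def merge_similar(tokens):
    """Average each adjacent token pair (a crude stand-in for similarity-based merging)."""
    if tokens.size(1) % 2 == 1:
        tokens = tokens[:, :-1]
    return tokens.view(tokens.size(0), tokens.size(1) // 2, 2, -1).mean(dim=2)

image_tokens = torch.randn(2, 196, 256)   # ViT patch tokens
text_tokens = torch.randn(2, 20, 256)     # text tokens assumed to share the embedding space
kept = prune_by_text(image_tokens, text_tokens, keep_ratio=0.5)
merged = merge_similar(kept)
print(kept.shape, merged.shape)            # (2, 98, 256) (2, 49, 256)
```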
2305.18367
2023-05-27T18:50:12Z
Using VGG16 Algorithms for classification of lung cancer in CT scans Image
[ "Hasan Hejbari Zargar", "Saha Hejbari Zargar", "Raziye Mehri", "Farzane Tajidini" ]
Lung cancer is the leading cause of cancer-related deaths worldwide. Early detection of lung nodules is vital for increasing the survival rate of cancer patients. Traditionally, physicians must manually identify the region suspected of containing carcinoma. When developing these detection systems, the arbitrary shape, size, and texture of lung nodules pose a challenge. Many studies have shown the application of computer vision algorithms to the accurate diagnosis and classification of lung nodules. In this paper, a deep learning model based on the VGG16 architecture was developed to help medical professionals diagnose and classify lung carcinoma nodules. VGG16 can classify medical images of carcinoma into malignant, benign, and healthy patients. This paper shows that nodule detection using this single neural network achieved 92.08% sensitivity, 91% accuracy, and an AUC of 93%.
[ "eess.IV", "cs.CV", "cs.LG" ]
false
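The VGG16 study above is a standard transfer-learning setup: reuse the convolutional features and retrain a small classification head for three classes (malignant, benign, healthy). A minimal torchvision sketch follows; the class ordering, optimizer, learning rate, and preprocessing are assumptions, since the record does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace VGG16's final classifier layer with a 3-way head: malignant / benign / healthy.
model = models.vgg16(weights=None)   # pass weights="IMAGENET1K_V1" to start from pre-trained features
model.classifier[6] = nn.Linear(in_features=4096, out_features=3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on random tensors standing in for preprocessed CT slices.
images = torch.rand(2, 3, 224, 224)
labels = torch.tensor([0, 2])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```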
2305.19298
2023-05-27T20:44:14Z
MLOps: A Step Forward to Enterprise Machine Learning
[ "A. I. Ullah Tabassam" ]
Machine Learning Operations (MLOps) is becoming a highly crucial part of businesses looking to capitalize on the benefits of AI and ML models. This research presents a detailed review of MLOps, its benefits, difficulties, evolution, and important underlying technologies such as MLOps frameworks, Docker, GitHub Actions, and Kubernetes. The MLOps workflow, which includes model design, deployment, and operations, is explained in detail along with the various tools necessary for both model and data exploration and deployment. This article also sheds light on the end-to-end production of ML projects using automated pipelines at various maturity levels, ranging from no automation at all to complete CI/CD and CT capabilities. Furthermore, a detailed example of an enterprise-level MLOps project for an object detection service is used to explain the workflow of the technology in a real-world scenario. For this purpose, a web application hosting a pre-trained model from the TensorFlow 2 Model Zoo is packaged and deployed to the internet, making sure that the system is scalable, reliable, and optimized for deployment at an enterprise level.
[ "cs.SE", "cs.AI", "cs.CV", "cs.LG" ]
false
2306.06078
2023-05-27T04:03:15Z
Cheating off your neighbors: Improving activity recognition through corroboration
[ "Haoxiang Yu", "Jingyi An", "Evan King", "Edison Thomaz", "Christine Julien" ]
Understanding the complexity of human activities solely through an individual's data can be challenging. However, in many situations, surrounding individuals are likely performing similar activities, while existing human activity recognition approaches focus almost exclusively on individual measurements and largely ignore the context of the activity. Consider two activities: attending a small group meeting and working at an office desk. From solely an individual's perspective, it can be difficult to differentiate between these activities as they may appear very similar, even though they are markedly different. Yet, by observing others nearby, it can be possible to distinguish between these activities. In this paper, we propose an approach to enhance the prediction accuracy of an individual's activities by incorporating insights from surrounding individuals. We have collected a real-world dataset from 20 participants with over 58 hours of data including activities such as attending lectures, having meetings, working in the office, and eating together. Compared to observing a single person in isolation, our proposed approach significantly improves accuracy. We regard this work as a first step in collaborative activity recognition, opening new possibilities for understanding human activity in group settings.
[ "cs.CV", "cs.HC", "cs.LG", "eess.SP" ]
false
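The corroboration idea above can be illustrated with a very small decision rule: blend an individual's class probabilities with the average of nearby individuals' probabilities before taking the argmax. The fixed blending weight and the three-activity label set below are illustrative assumptions; the paper's actual model is learned from data.

```python
import numpy as np

def corroborate(individual_probs: np.ndarray, neighbor_probs: np.ndarray, weight: float = 0.3) -> int:
    """Blend one person's class probabilities with the mean of nearby individuals' probabilities."""
    context = neighbor_probs.mean(axis=0)
    blended = (1.0 - weight) * individual_probs + weight * context
    return int(np.argmax(blended))

# Toy example with three activities: 0 = lecture, 1 = meeting, 2 = desk work.
me = np.array([0.40, 0.35, 0.25])                 # ambiguous on its own
neighbors = np.array([[0.10, 0.80, 0.10],
                      [0.15, 0.70, 0.15]])        # nearby people look like they are in a meeting
print(corroborate(me, neighbors))                  # 1
```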
2305.18371
2023-05-27T23:08:22Z
ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing UAV-Platform with Event-Based and Frame-Based Cameras
[ "Sizhen Bian", "Lukas Schulthess", "Georg Rutishauser", "Alfio Di Mauro", "Luca Benini", "Michele Magno" ]
The interest in dynamic vision sensor (DVS)-powered unmanned aerial vehicles (UAVs) is rising, especially due to the microsecond-level reaction time of the bio-inspired event sensor, which increases robustness and reduces the latency of perception tasks compared to an RGB camera. This work presents ColibriUAV, a UAV platform with both frame-based and event-based camera interfaces for efficient perception and near-sensor processing. The proposed platform is designed around Kraken, a novel low-power RISC-V System on Chip with two hardware accelerators targeting spiking neural networks and deep ternary neural networks. Kraken is capable of efficiently processing both event data from a DVS camera and frame data from an RGB camera. A key feature of Kraken is its integrated, dedicated interface with a DVS camera. This paper benchmarks the end-to-end latency and power efficiency of the neuromorphic and event-based UAV subsystem, demonstrating a state-of-the-art event-data throughput of 7200 frames of events per second at a power consumption of 10.7 mW, which is over 6.6 times faster and a hundred times less power-consuming than the widely used approach of reading data through the USB interface. The overall sensing and processing power consumption is below 50 mW, with latency in the milliseconds range, making the platform suitable for low-latency autonomous nano-drones.
[ "cs.CV", "cs.AI", "cs.AR", "cs.SY", "eess.SY" ]
false
2305.17325
2023-05-27T02:04:19Z
Why Does Zero-Shot Cross-Lingual Generation Fail? An Explanation and a Solution
[ "Tianjian Li", "Kenton Murray" ]
Zero-shot cross-lingual transfer occurs when a multilingual model is trained to perform a task in one language and is then applied to another language. Although the zero-shot cross-lingual transfer approach has achieved success in various classification tasks, its performance on natural language generation tasks falls short in quality and sometimes outputs an incorrect language. In our study, we show that the fine-tuning process learns language-invariant representations, which is beneficial for classification tasks but harmful for generation tasks. Motivated by this, we propose a simple method to regularize the model from learning language-invariant representations and a method to select model checkpoints without a development set in the target language, both resulting in better generation quality. Experiments on three semantically diverse generation tasks show that our method reduces the accidental translation problem by 68% and improves the ROUGE-L score by 1.5 on average.
[ "cs.CL" ]
false
2305.17347
2023-05-27T03:01:53Z
CGELBank Annotation Manual v1.0
[ "Brett Reynolds", "Nathan Schneider", "Aryaman Arora" ]
CGELBank is a treebank and associated tools based on a syntactic formalism for English derived from the Cambridge Grammar of the English Language. This document lays out the particularities of the CGELBank annotation scheme.
[ "cs.CL" ]
false
2305.17350
2023-05-27T03:06:15Z
How Good is Automatic Segmentation as a Multimodal Discourse Annotation Aid?
[ "Corbyn Terpstra", "Ibrahim Khebour", "Mariah Bradford", "Brett Wisniewski", "Nikhil Krishnaswamy", "Nathaniel Blanchard" ]
Collaborative problem solving (CPS) in teams is tightly coupled with the creation of shared meaning between participants in a situated, collaborative task. In this work, we assess the quality of different utterance segmentation techniques as an aid in annotating CPS. We (1) manually transcribe utterances in a dataset of triads collaboratively solving a problem involving dialogue and physical object manipulation, (2) annotate collaborative moves according to these gold-standard transcripts, and then (3) apply these annotations to utterances that have been automatically segmented using toolkits from Google and OpenAI's Whisper. We show that the oracle utterances have minimal correspondence to automatically segmented speech, and that automatically segmented speech using different segmentation methods is also inconsistent. We also show that annotating automatically segmented speech has distinct implications compared with annotating oracle utterances--since most annotation schemes are designed for oracle cases, when annotating automatically-segmented utterances, annotators must invoke other information to make arbitrary judgments which other annotators may not replicate. We conclude with a discussion of how future annotation specs can account for these needs.
[ "cs.CL" ]
false
2305.17351
2023-05-27T03:15:10Z
Disambiguated Lexically Constrained Neural Machine Translation
[ "Jinpeng Zhang", "Nini Xiao", "Ke Wang", "Chuanqi Dong", "Xiangyu Duan", "Yuqi Zhang", "Min Zhang" ]
Lexically constrained neural machine translation (LCNMT), which controls the translation generation with pre-specified constraints, is important in many practical applications. Current approaches to LCNMT typically assume that the pre-specified lexical constraints are contextually appropriate. This assumption limits their application to real-world scenarios where a source lexicon may have multiple target constraints, and disambiguation is needed to select the most suitable one. In this paper, we propose disambiguated LCNMT (D-LCNMT) to solve the problem. D-LCNMT is a robust and effective two-stage framework that disambiguates the constraints based on contexts at first, then integrates the disambiguated constraints into LCNMT. Experimental results show that our approach outperforms strong baselines including existing data augmentation based approaches on benchmark datasets, and comprehensive experiments in scenarios where a source lexicon corresponds to multiple target constraints demonstrate the constraint disambiguation superiority of our approach.
[ "cs.CL" ]
false
2305.17356
2023-05-27T03:52:52Z
Bridging the Granularity Gap for Acoustic Modeling
[ "Chen Xu", "Yuhao Zhang", "Chengbo Jiao", "Xiaoqian Liu", "Chi Hu", "Xin Zeng", "Tong Xiao", "Anxiang Ma", "Huizhen Wang", "JingBo Zhu" ]
While the Transformer has become the de facto standard for speech, modeling fine-grained frame-level features remains an open challenge for capturing long-distance dependencies and distributing attention weights. We propose Progressive Down-Sampling (PDS), which gradually compresses the acoustic features into coarser-grained units containing more complete semantic information, akin to text-level representations. In addition, we develop a representation fusion method to alleviate the information loss that inevitably occurs during high compression. In this way, we compress the acoustic features to 1/32 of the initial length while achieving better or comparable performance on the speech recognition task. As a bonus, it yields inference speedups ranging from 1.20x to 1.47x. By reducing the modeling burden, we also achieve competitive results when training on the more challenging speech translation task.
[ "cs.CL" ]
false
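The PDS abstract above describes staged compression of frame-level acoustic features plus a fusion step to limit information loss. The sketch below is a toy reconstruction with strided 1D convolutions and a pooled residual path fused by a linear layer; the number of stages, strides, and exact fusion operator are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ProgressiveDownSampling(nn.Module):
    """Compress frame-level features in stages, fusing each coarse level with a pooled copy of the finer one."""
    def __init__(self, dim: int = 256, stages: int = 3, stride: int = 2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=stride * 2, stride=stride, padding=stride // 2)
            for _ in range(stages)
        )
        self.pool = nn.AvgPool1d(kernel_size=stride, stride=stride)
        self.fusions = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(stages))

    def forward(self, x: torch.Tensor) -> torch.Tensor:           # x: (B, T, D)
        h = x.transpose(1, 2)                                       # (B, D, T)
        for conv, fuse in zip(self.convs, self.fusions):
            coarse = conv(h)                                        # strided compression
            pooled = self.pool(h)[..., : coarse.size(-1)]           # residual view of the finer level
            h = fuse(torch.cat([coarse, pooled], dim=1).transpose(1, 2)).transpose(1, 2)
        return h.transpose(1, 2)                                    # (B, T', D)

pds = ProgressiveDownSampling(dim=256, stages=3, stride=2)
frames = torch.randn(2, 96, 256)                                    # 96 frame-level acoustic features
print(pds(frames).shape)                                             # torch.Size([2, 12, 256]): 1/8 of the frames
```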
2305.17358
2023-05-27T03:54:09Z
CTC-based Non-autoregressive Speech Translation
[ "Chen Xu", "Xiaoqian Liu", "Xiaowen Liu", "Qingxuan Sun", "Yuhao Zhang", "Murun Yang", "Qianqian Dong", "Tom Ko", "Mingxuan Wang", "Tong Xiao", "Anxiang Ma", "Jingbo Zhu" ]
Combining end-to-end speech translation (ST) with non-autoregressive (NAR) generation is promising for language and speech processing thanks to less error propagation and lower latency. In this paper, we investigate the potential of connectionist temporal classification (CTC) for non-autoregressive speech translation (NAST). In particular, we develop a model consisting of two encoders that are guided by CTC to predict the source and target texts, respectively. Introducing CTC into NAST on both language sides raises obvious challenges: 1) the conditionally independent generation somewhat breaks the interdependency among tokens, and 2) the monotonic alignment assumption in standard CTC does not hold in translation tasks. In response, we develop a prediction-aware encoding approach and a cross-layer attention approach to address these issues. We also use curriculum learning to improve the convergence of training. Experiments on the MuST-C ST benchmarks show that our NAST model achieves an average BLEU score of 29.5 with a speed-up of 5.67x, which is comparable to the autoregressive counterpart and even outperforms the previous best result by 0.9 BLEU points.
[ "cs.CL" ]
false
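The NAST model above builds on CTC, which emits a per-frame distribution over the vocabulary plus a blank symbol and therefore supports non-autoregressive decoding by collapsing repeats and dropping blanks. The sketch below shows only these CTC mechanics on one language side with torch.nn.CTCLoss; the paper's dual-encoder design, prediction-aware encoding, and cross-layer attention are not reproduced.

```python
import torch
import torch.nn as nn

# CTC predicts a token sequence from encoder states without autoregressive decoding.
vocab_size, blank_id = 100, 0
encoder_states = torch.randn(50, 2, 256)                   # (T, B, D) acoustic encoder output
proj = nn.Linear(256, vocab_size)
log_probs = proj(encoder_states).log_softmax(dim=-1)        # (T, B, V)

targets = torch.randint(1, vocab_size, (2, 12))             # target token ids (no blanks)
input_lengths = torch.full((2,), 50, dtype=torch.long)
target_lengths = torch.full((2,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()

# Greedy non-autoregressive decoding: best label per frame, collapse repeats, drop blanks.
best = log_probs.argmax(dim=-1).transpose(0, 1)              # (B, T)
hypotheses = [
    [int(t) for i, t in enumerate(seq) if t != blank_id and (i == 0 or t != seq[i - 1])]
    for seq in best
]
print(float(loss), len(hypotheses[0]))
```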
2305.17367
2023-05-27T04:47:09Z
Augmenting Large Language Model Translators via Translation Memories
[ "Yongyu Mu", "Abudurexiti Reheman", "Zhiquan Cao", "Yuchun Fan", "Bei Li", "Yinqiao Li", "Tong Xiao", "Chunliang Zhang", "Jingbo Zhu" ]
Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to "understand" prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of the state-of-the-art NMT systems which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.
[ "cs.CL" ]
false
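The TM-prompting idea above amounts to packing retrieved fuzzy matches into the prompt before asking the LLM to translate. The sketch below builds such a prompt; the template wording and the German-English example pairs are purely illustrative, and the retrieval step (edit-distance or embedding similarity over the TM) is omitted.

```python
def build_tm_prompt(tm_pairs, source_sentence, src="German", tgt="English"):
    """Assemble retrieved translation-memory pairs into a translation prompt for an LLM."""
    lines = [f"Translate from {src} to {tgt}. Here are similar translations from a translation memory:"]
    for s, t in tm_pairs:
        lines.append(f"{src}: {s}\n{tgt}: {t}")
    lines.append(f"Now translate:\n{src}: {source_sentence}\n{tgt}:")
    return "\n\n".join(lines)

# Hypothetical retrieved fuzzy matches for the input sentence.
tm = [
    ("Der Vertrag tritt am 1. Januar in Kraft.", "The contract enters into force on 1 January."),
    ("Der Vertrag kann jederzeit gekündigt werden.", "The contract may be terminated at any time."),
]
prompt = build_tm_prompt(tm, "Der Vertrag tritt am 1. März in Kraft.")
print(prompt)
# The prompt is then sent to any instruction-following LLM endpoint of your choice.
```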
2305.17371
2023-05-27T05:15:28Z
Towards Better Entity Linking with Multi-View Enhanced Distillation
[ "Yi Liu", "Yuan Tian", "Jianxun Lian", "Xinlong Wang", "Yanan Cao", "Fang Fang", "Wen Zhang", "Haizhen Huang", "Denvy Deng", "Qi Zhang" ]
Dense retrieval is widely used for entity linking to retrieve entities from large-scale knowledge bases. Mainstream techniques are based on a dual-encoder framework, which encodes mentions and entities independently and calculates their relevances via rough interaction metrics, resulting in difficulty in explicitly modeling multiple mention-relevant parts within entities to match divergent mentions. Aiming at learning entity representations that can match divergent mentions, this paper proposes a Multi-View Enhanced Distillation (MVD) framework, which can effectively transfer knowledge of multiple fine-grained and mention-relevant parts within entities from cross-encoders to dual-encoders. Each entity is split into multiple views to avoid irrelevant information being over-squashed into the mention-relevant view. We further design cross-alignment and self-alignment mechanisms for this framework to facilitate fine-grained knowledge distillation from the teacher model to the student model. Meanwhile, we reserve a global-view that embeds the entity as a whole to prevent dispersal of uniform information. Experiments show our method achieves state-of-the-art performance on several entity linking benchmarks.
[ "cs.CL" ]
false
2305.17404
2023-05-27T08:03:44Z
Parallel Corpus for Indigenous Language Translation: Spanish-Mazatec and Spanish-Mixtec
[ "Atnafu Lambebo Tonja", "Christian Maldonado-Sifuentes", "David Alejandro Mendoza Castillo", "Olga Kolesnikova", "Noé Castro-Sánchez", "Grigori Sidorov", "Alexander Gelbukh" ]
In this paper, we present a parallel Spanish-Mazatec and Spanish-Mixtec corpus for machine translation (MT) tasks, where Mazatec and Mixtec are two indigenous Mexican languages. We evaluated the usability of the collected corpus using three different approaches: transformer, transfer learning, and fine-tuning pre-trained multilingual MT models. Fine-tuning the Facebook M2M100-48 model outperformed the other approaches, with BLEU scores of 12.09 and 22.25 for Mazatec-Spanish and Spanish-Mazatec translations, respectively, and 16.75 and 22.15 for Mixtec-Spanish and Spanish-Mixtec translations, respectively. The findings show that the dataset size (9,799 sentences in Mazatec and 13,235 sentences in Mixtec) affects translation performance and that indigenous languages work better when used as target languages. The findings emphasize the importance of creating parallel corpora for indigenous languages and fine-tuning models for low-resource translation tasks. Future research will investigate zero-shot and few-shot learning approaches to further improve translation performance in low-resource settings. The dataset and scripts are available at https://github.com/atnafuatx/Machine-Translation-Resources
[ "cs.CL" ]
false
2305.17406
2023-05-27T08:10:40Z
Enhancing Translation for Indigenous Languages: Experiments with Multilingual Models
[ "Atnafu Lambebo Tonja", "Hellina Hailu Nigatu", "Olga Kolesnikova", "Grigori Sidorov", "Alexander Gelbukh", "Jugal Kalita" ]
This paper describes CIC NLP's submission to the AmericasNLP 2023 Shared Task on machine translation systems for indigenous languages of the Americas. We present system descriptions for three methods. We used two multilingual models, M2M-100 and mBART50, and one bilingual (one-to-one) model, the Helsinki NLP Spanish-English translation model, and experimented with different transfer learning setups. We experimented with 11 languages of the Americas and report the setups we used as well as the results we achieved. Overall, the mBART setup was able to improve upon the baseline for three of the eleven languages.
[ "cs.CL" ]
false
2305.17416
2023-05-27T08:42:37Z
A Practical Toolkit for Multilingual Question and Answer Generation
[ "Asahi Ushio", "Fernando Alva-Manchego", "Jose Camacho-Collados" ]
Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward due to the expected structured output (i.e. a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG, along with lmqg, an all-in-one Python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.
[ "cs.CL" ]
false
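The record above points to the lmqg package for question-answer generation. A usage sketch follows; note that the class and method names (`TransformersQG`, `generate_qa`) follow a recollection of the toolkit's documentation and are assumptions to be checked against the installed version.

```python
# pip install lmqg
from lmqg import TransformersQG

# Class/method names as documented in the lmqg README (may differ by version).
model = TransformersQG(language="en")          # loads a default English QAG model
context = (
    "William Turner was an English painter who specialised in watercolour landscapes. "
    "He is often known as William Turner of Oxford."
)
question_answer_pairs = model.generate_qa(context)
for question, answer in question_answer_pairs:
    print(question, "->", answer)
```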
2305.17440
2023-05-27T10:33:53Z
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making
[ "Xuanjie Fang", "Sijie Cheng", "Yang Liu", "Wei Wang" ]
Pre-trained language models (PLMs) have been widely used to underpin various downstream tasks. However, adversarial attack research has found that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached two-stage framework that attacks without considering the subsequent influence of each substitution step. In this paper, we formally model the adversarial attack task on PLMs as a sequential decision-making problem, where the whole attack process is sequential and consists of two decision-making sub-problems, i.e., word finding and word substitution. Considering that the attack process can only receive the final state without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path to generate adversaries, named SDM-Attack. Extensive experimental results show that SDM-Attack achieves the highest attack success rate with a comparable modification rate and semantic similarity when attacking fine-tuned BERT. Furthermore, our analyses demonstrate the generalization and transferability of SDM-Attack. The code is available at https://github.com/fduxuan/SDM-Attack.
[ "cs.CL" ]
false