Dataset columns:
  arxiv_id    string (length 10)
  published   string (length 20)
  titles      string (length 9 to 243)
  authors     list (1 to 389 items)
  abstract    string (length 96 to 3.09k)
  categories  list (1 to 10 items)
  selected    bool (2 classes)
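The records below can be loaded and filtered programmatically. A minimal sketch using the Hugging Face `datasets` library follows; the dataset identifier "user/arxiv-papers" is a placeholder assumption, since the actual repository name is not given in this preview, while the column names come from the schema above.

```python
from datasets import load_dataset

# Placeholder dataset identifier -- substitute the actual repository name.
ds = load_dataset("user/arxiv-papers", split="train")

# Keep only machine-learning papers (category "cs.LG") that were selected.
ml_selected = ds.filter(
    lambda ex: "cs.LG" in ex["categories"] and ex["selected"]
)

# Inspect one record: id, title, and the first 200 characters of the abstract.
row = ml_selected[0] if len(ml_selected) > 0 else ds[0]
print(row["arxiv_id"], row["titles"])
print(row["abstract"][:200])
```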
2306.01489
2023-06-02T12:30:42Z
On Feature Diversity in Energy-based Models
[ "Firas Laakom", "Jenni Raitoharju", "Alexandros Iosifidis", "Moncef Gabbouj" ]
Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative approaches. An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration. In this paper, we focus on the diversity of the produced feature set. We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs. We derive generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions, and we show that reducing the redundancy of the feature set can consistently decrease the gap between the true and empirical expectation of the energy and boost the performance of the model.
[ "cs.LG", "cs.IT", "math.IT" ]
false
2306.01494
2023-06-02T12:42:09Z
Local Message Passing on Frustrated Systems
[ "Luca Schmid", "Joshua Brenk", "Laurent Schmalen" ]
Message passing on factor graphs is a powerful framework for probabilistic inference, which finds important applications in various scientific domains. The most widespread message passing scheme is the sum-product algorithm (SPA), which gives exact results on trees but often fails on graphs with many small cycles. We search for an alternative message passing algorithm that works particularly well on such cyclic graphs. To this end, we challenge the extrinsic principle of the SPA, which loses its objective on graphs with cycles. We further replace the local SPA message update rule at the factor nodes of the underlying graph with a generic mapping, which is optimized in a data-driven fashion. These modifications lead to a considerable improvement in performance while preserving the simplicity of the SPA. We evaluate our method for two classes of cyclic graphs: the 2x2 fully connected Ising grid and factor graphs for symbol detection on linear communication channels with inter-symbol interference. To enable the method for large graphs as they occur in practical applications, we develop a novel loss function that is inspired by the Bethe approximation from statistical physics and allows for training in an unsupervised fashion.
[ "cs.LG", "cs.IT", "eess.SP", "math.IT" ]
false
2306.01570
2023-06-02T14:31:24Z
Spatio-Temporal Deep Learning-Assisted Reduced Security-Constrained Unit Commitment
[ "Arun Venkatesh Ramesh", "Xingpeng Li" ]
Security-constrained unit commitment (SCUC) is a computationally complex process utilized in power system day-ahead scheduling and market clearing. SCUC is run daily and requires state-of-the-art algorithms to speed up the process. The constraints and data associated with SCUC are both geographically and temporally correlated to ensure the reliability of the solution, which further increases the complexity. In this paper, an advanced machine learning (ML) model is used to study the patterns in power system historical data, which inherently considers both spatial and temporal (ST) correlations in constraints. The ST-correlated ML model is trained to capture spatial correlation using graph neural networks (GNN), whereas temporal sequences are studied using long short-term memory (LSTM) networks. The proposed approach is validated on several test systems, namely the IEEE 24-Bus system, IEEE 73-Bus system, IEEE 118-Bus system, and the synthetic South Carolina (SC) 500-Bus system. Moreover, both B-$\theta$ and power transfer distribution factor (PTDF) based SCUC formulations were considered in this research. Simulation results demonstrate that the ST approach can effectively predict generator commitment schedules and classify critical and non-critical lines in the system, which are then utilized for model reduction of SCUC to obtain computational enhancement without loss in solution quality.
[ "cs.LG", "cs.SY", "eess.SY", "math.OC" ]
false
2306.01603
2023-06-02T15:12:58Z
Decentralized Federated Learning: A Survey and Perspective
[ "Liangqi Yuan", "Lichao Sun", "Philip S. Yu", "Ziran Wang" ]
Federated learning (FL) has been gaining attention for its ability to share knowledge while maintaining user data, protecting privacy, increasing learning efficiency, and reducing communication overhead. Decentralized FL (DFL) is a decentralized network architecture that eliminates the need for a central server in contrast to centralized FL (CFL). DFL enables direct communication between clients, resulting in significant savings in communication resources. In this paper, a comprehensive survey and profound perspective is provided for DFL. First, a review of the methodology, challenges, and variants of CFL is conducted, laying the background of DFL. Then, a systematic and detailed perspective on DFL is introduced, including iteration order, communication protocols, network topologies, paradigm proposals, and temporal variability. Next, based on the definition of DFL, several extended variants and categorizations are proposed with state-of-the-art technologies. Lastly, in addition to summarizing the current challenges in the DFL, some possible solutions and future research directions are also discussed.
[ "cs.LG", "cs.CY", "cs.DC", "cs.NI" ]
false
2306.01638
2023-06-02T15:58:22Z
Do we become wiser with time? On causal equivalence with tiered background knowledge
[ "Christine W. Bang", "Vanessa Didelez" ]
Equivalence classes of DAGs (represented by CPDAGs) may be too large to provide useful causal information. Here, we address incorporating tiered background knowledge yielding restricted equivalence classes represented by 'tiered MPDAGs'. Tiered knowledge leads to considerable gains in informativeness and computational efficiency: We show that construction of tiered MPDAGs only requires application of Meek's 1st rule, and that tiered MPDAGs (unlike general MPDAGs) are chain graphs with chordal components. This entails simplifications e.g. of determining valid adjustment sets for causal effect estimation. Further, we characterise when one tiered ordering is more informative than another, providing insights into useful aspects of background knowledge.
[ "stat.ML", "cs.LG", "math.ST", "stat.TH" ]
false
2306.01674
2023-06-02T16:46:47Z
Neural Differential Recurrent Neural Network with Adaptive Time Steps
[ "Yixuan Tan", "Liyan Xie", "Xiuyuan Cheng" ]
The neural Ordinary Differential Equation (ODE) model has shown success in learning complex continuous-time processes from observations on discrete time stamps. In this work, we consider the modeling and forecasting of time series data that are non-stationary and may have sharp changes like spikes. We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states, and we adaptively select time steps based on the steepness of changes of the data over time so as to train the model more efficiently for the "spike-like" time series. Theoretically, RNN-ODE-Adap provably yields a consistent estimation of the intensity function for Hawkes-type time series data. We also provide an approximation analysis of the RNN-ODE model showing the benefit of adaptive steps. The proposed model is demonstrated to achieve higher prediction accuracy with reduced computational cost on simulated dynamic system data and point process data, and on a real electrocardiography dataset.
[ "stat.ML", "cs.LG", "cs.NA", "math.NA" ]
false
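The RNN-ODE-Adap record above hinges on selecting finer time steps where the signal changes steeply. Below is a minimal sketch of that idea; the quantile threshold and the stride in flat regions are illustrative assumptions, not the authors' exact selection rule.

```python
import numpy as np

def adaptive_time_steps(t, x, base_step=4, steep_quantile=0.75):
    """Pick observation indices, sampling densely where |dx/dt| is large.

    t, x           : 1-D arrays of time stamps and observed values
    base_step      : stride used in flat regions (every base_step-th point)
    steep_quantile : slopes above this quantile count as "steep"
    """
    slopes = np.abs(np.gradient(x, t))
    threshold = np.quantile(slopes, steep_quantile)
    keep = [0]
    for i in range(1, len(t)):
        # Keep every point in steep regions, otherwise subsample.
        if slopes[i] >= threshold or i - keep[-1] >= base_step:
            keep.append(i)
    return np.array(keep)

# Toy spike-like series: flat baseline with one sharp bump.
t = np.linspace(0, 10, 500)
x = np.exp(-((t - 5.0) ** 2) / 0.05)
idx = adaptive_time_steps(t, x)
print(f"kept {len(idx)} of {len(t)} time stamps")
```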
2306.01699
2023-06-02T17:18:20Z
Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy
[ "Siamak Ghodsi", "Eirini Ntoutsi" ]
Group imbalance, resulting from inadequate or unrepresentative data collection methods, is a primary cause of representation bias in datasets. Representation bias can exist with respect to different groups of one or more protected attributes and might lead to prejudicial and discriminatory outcomes toward certain groups of individuals in cases where a learning model is trained on such biased data. This paper presents MASC, a data augmentation approach that leverages affinity clustering to balance the representation of non-protected and protected groups of a target dataset by sharing instances of the same protected attributes from similar datasets that fall into the same cluster as the target dataset. The proposed method involves constructing an affinity matrix by quantifying distribution discrepancies between dataset pairs and transforming them into a symmetric pairwise similarity matrix. A non-parametric spectral clustering is then applied to this affinity matrix, automatically categorizing the datasets into an optimal number of clusters. We perform a step-by-step experiment to demonstrate the procedure of the proposed data augmentation method and to evaluate and discuss its performance. A comparison with other data augmentation methods, both pre- and post-augmentation, is conducted, along with a model evaluation analysis of each method. Our method can handle non-binary protected attributes, so, in our experiments, bias is measured in a non-binary setup w.r.t. the distribution of racial groups, comparing two separate minority groups with the majority group before and after debiasing. Empirical results imply that our method of augmenting biased datasets using real (genuine) data from similar contexts can effectively debias the target datasets, comparably to existing data augmentation strategies.
[ "cs.LG", "cs.AI", "stat.ML" ]
false
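As a rough illustration of the MASC pipeline described in the record above, the sketch below builds an affinity matrix from pairwise distribution discrepancies and feeds it to spectral clustering. The 1-D Wasserstein distance and the exponential similarity kernel are assumptions chosen for brevity; the paper's actual discrepancy measure and automatic cluster-number selection are not reproduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Toy stand-ins for feature distributions of several datasets.
datasets = [rng.normal(loc=m, scale=1.0, size=500) for m in (0.0, 0.2, 3.0, 3.1)]

n = len(datasets)
discrepancy = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = wasserstein_distance(datasets[i], datasets[j])
        discrepancy[i, j] = discrepancy[j, i] = d

# Turn discrepancies into a symmetric pairwise similarity (affinity) matrix.
affinity = np.exp(-discrepancy / (discrepancy.mean() + 1e-12))

# Cluster datasets so that similar distributions share a cluster.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("cluster assignment per dataset:", labels)
```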
2306.01813
2023-06-02T09:04:45Z
Learning the effective order of a hypergraph dynamical system
[ "Leonie Neuhäuser", "Michael Scholkemper", "Francesco Tudisco", "Michael T. Schaub" ]
Dynamical systems on hypergraphs can display a rich set of behaviours not observable for systems with pairwise interactions. Given a distributed dynamical system with a putative hypergraph structure, an interesting question is thus how much of this hypergraph structure is actually necessary to faithfully replicate the observed dynamical behaviour. To answer this question, we propose a method to determine the minimum order of a hypergraph necessary to approximate the corresponding dynamics accurately. Specifically, we develop an analytical framework that allows us to determine this order when the type of dynamics is known. We utilize these ideas in conjunction with a hypergraph neural network to directly learn the dynamics itself and the resulting order of the hypergraph from both synthetic and real data sets consisting of observed system trajectories.
[ "cs.LG", "cs.SI", "physics.soc-ph" ]
false
2306.01814
2023-06-02T09:33:19Z
Fast Interactive Search with a Scale-Free Comparison Oracle
[ "Daniyar Chumbalov", "Lars Klein", "Lucas Maystre", "Matthias Grossglauser" ]
A comparison-based search algorithm lets a user find a target item $t$ in a database by answering queries of the form, ``Which of items $i$ and $j$ is closer to $t$?'' Instead of formulating an explicit query (such as one or several keywords), the user navigates towards the target via a sequence of such (typically noisy) queries. We propose a scale-free probabilistic oracle model called $\gamma$-CKL for such similarity triplets $(i,j;t)$, which generalizes the CKL triplet model proposed in the literature. The generalization affords independent control over the discriminating power of the oracle and the dimension of the feature space containing the items. We develop a search algorithm with provably exponential rate of convergence under the $\gamma$-CKL oracle, thanks to a backtracking strategy that deals with the unavoidable errors in updating the belief region around the target. We evaluate the performance of the algorithm both over the posited oracle and over several real-world triplet datasets. We also report on a comprehensive user study, where human subjects navigate a database of face portraits.
[ "cs.IR", "cs.HC", "cs.LG" ]
false
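For intuition about the comparison-oracle record above, here is a small sketch of a CKL-style probabilistic triplet oracle that simulates noisy answers to "which of items i and j is closer to t?". The formula below is the classic CKL form with an assumed placement of the exponent gamma; the paper's exact gamma-CKL parameterization may differ, so treat this purely as a simulator sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def ckl_prob_i_closer(x_i, x_j, x_t, mu=0.1, gamma=1.0):
    """Probability the oracle answers 'i is closer to t than j'.

    Classic CKL form with distances raised to an assumed exponent gamma;
    larger gamma makes the oracle more discriminating (less noisy).
    """
    d_i = np.linalg.norm(x_i - x_t) ** (2 * gamma)
    d_j = np.linalg.norm(x_j - x_t) ** (2 * gamma)
    return (d_j + mu) / (d_i + d_j + 2 * mu)

def noisy_oracle(x_i, x_j, x_t, **kw):
    """Sample a (possibly wrong) comparison answer from the oracle model."""
    return "i" if rng.random() < ckl_prob_i_closer(x_i, x_j, x_t, **kw) else "j"

items = rng.normal(size=(10, 3))   # toy item embeddings
target = items[7]
print(noisy_oracle(items[0], items[1], target, gamma=2.0))
```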
2306.01824
2023-06-02T14:13:50Z
Enhancing the Protein Tertiary Structure Prediction by Multiple Sequence Alignment Generation
[ "Le Zhang", "Jiayang Chen", "Tao Shen", "Yu Li", "Siqi Sun" ]
The field of protein folding research has been greatly advanced by deep learning methods, with AlphaFold2 (AF2) demonstrating exceptional performance and atomic-level precision. As co-evolution is integral to protein structure prediction, AF2's accuracy is significantly influenced by the depth of multiple sequence alignment (MSA), which requires extensive exploration of a large protein database for similar sequences. However, not all protein sequences possess abundant homologous families, and consequently, AF2's performance can degrade on such queries, at times failing to produce meaningful results. To address this, we introduce a novel generative language model, MSA-Augmenter, which leverages protein-specific attention mechanisms and large-scale MSAs to generate useful, novel protein sequences not currently found in databases. These sequences supplement shallow MSAs, enhancing the accuracy of structural property predictions. Our experiments on CASP14 demonstrate that MSA-Augmenter can generate de novo sequences that retain co-evolutionary information from inferior MSAs, thereby improving protein structure prediction quality on top of strong AF2.
[ "q-bio.QM", "cs.CE", "cs.LG", "q-bio.BM" ]
false
2306.01860
2023-06-02T18:29:07Z
No Bidding, No Regret: Pairwise-Feedback Mechanisms for Digital Goods and Data Auctions
[ "Zachary Robertson", "Oluwasanmi Koyejo" ]
The growing demand for data and AI-generated digital goods, such as personalized written content and artwork, necessitates effective pricing and feedback mechanisms that account for uncertain utility and costly production. Motivated by these developments, this study presents a novel mechanism design addressing a general repeated-auction setting where the utility derived from a sold good is revealed post-sale. The mechanism's novelty lies in using pairwise comparisons for eliciting information from the bidder, arguably easier for humans than assigning a numerical value. Our mechanism chooses allocations using an epsilon-greedy strategy and relies on pairwise comparisons between realized utility from allocated goods and an arbitrary value, avoiding the learning-to-bid problem explored in previous work. We prove this mechanism to be asymptotically truthful, individually rational, and welfare and revenue maximizing. The mechanism's relevance is broad, applying to any setting with made-to-order goods of variable quality. Experimental results on multi-label toxicity annotation data, an example of negative utilities, highlight how our proposed mechanism could enhance social welfare in data auctions. Overall, our focus on human factors contributes to the development of more human-aware and efficient mechanism design.
[ "cs.GT", "cs.AI", "cs.LG" ]
false
2306.01864
2023-06-02T18:41:39Z
Discovering COVID-19 Coughing and Breathing Patterns from Unlabeled Data Using Contrastive Learning with Varying Pre-Training Domains
[ "Jinjin Cai", "Sudip Vhaduri", "Xiao Luo" ]
Rapid discovery of new diseases, such as COVID-19, can enable a timely epidemic response, preventing large-scale spread and protecting public health. However, limited research efforts have been devoted to this problem. In this paper, we propose a contrastive learning-based modeling approach for COVID-19 coughing and breathing pattern discovery from non-COVID coughs. To validate our models, extensive experiments have been conducted using four large audio datasets and one image dataset. We further explore the effects of different factors, such as domain relevance and augmentation order, on the pre-trained models. Our results show that the proposed model can effectively distinguish COVID-19 coughing and breathing from unlabeled data and labeled non-COVID coughs with an accuracy of up to 0.81 and 0.86, respectively. Findings from this work will guide future research to detect an outbreak of a new disease early.
[ "cs.LG", "cs.SD", "eess.AS" ]
false
2306.01906
2023-06-02T20:31:33Z
Synaptic motor adaptation: A three-factor learning rule for adaptive robotic control in spiking neural networks
[ "Samuel Schmidgall", "Joe Hays" ]
Legged robots operating in real-world environments must possess the ability to rapidly adapt to unexpected conditions, such as changing terrains and varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to achieving real-time online adaptation in quadruped robots through the utilization of neuroscience-derived rules of synaptic plasticity with three-factor learning. To facilitate rapid adaptation, we meta-optimize a three-factor learning rule via gradient descent to adapt to uncertainty by approximating an embedding produced by privileged information using only locally accessible onboard sensing data. Our algorithm performs similarly to state-of-the-art motor adaptation algorithms and presents a clear path toward achieving adaptive robotics with neuromorphic hardware.
[ "cs.RO", "cs.AI", "cs.LG", "cs.NE" ]
false
2306.01914
2023-06-02T20:43:38Z
Smooth Model Predictive Control with Applications to Statistical Learning
[ "Kwangjun Ahn", "Daniel Pfrommer", "Jack Umenberger", "Tobia Marcucci", "Zak Mhammedi", "Ali Jadbabaie" ]
Statistical learning theory and high dimensional statistics have had a tremendous impact on Machine Learning theory and have impacted a variety of domains including systems and control theory. Over the past few years we have witnessed a variety of applications of such theoretical tools to help answer questions such as: how many state-action pairs are needed to learn a static control policy to a given accuracy? Recent results have shown that continuously differentiable and stabilizing control policies can be well-approximated using neural networks with hard guarantees on performance, yet often even the simplest constrained control problems are not smooth. To address this void, in this paper we study smooth approximations of linear Model Predictive Control (MPC) policies, in which hard constraints are replaced by barrier functions, a.k.a. barrier MPC. In particular, we show that barrier MPC inherits the exponential stability properties of the original non-smooth MPC policy. Using a careful analysis of the proposed barrier MPC, we show that its smoothness constant can be carefully controlled, thereby paving the way for new sample complexity results for approximating MPC policies from sampled state-action pairs.
[ "eess.SY", "cs.LG", "cs.SY" ]
false
2306.01916
2023-06-02T21:02:51Z
In-the-wild Speech Emotion Conversion Using Disentangled Self-Supervised Representations and Neural Vocoder-based Resynthesis
[ "Navin Raj Prabhu", "Nale Lehmann-Willenbrock", "Timo Gerkmann" ]
Speech emotion conversion aims to convert the expressed emotion of a spoken utterance to a target emotion while preserving the lexical information and the speaker's identity. In this work, we specifically focus on in-the-wild emotion conversion where parallel data does not exist, and the problem of disentangling lexical, speaker, and emotion information arises. In this paper, we introduce a methodology that uses self-supervised networks to disentangle the lexical, speaker, and emotional content of the utterance, and subsequently uses a HiFiGAN vocoder to resynthesise the disentangled representations to a speech signal of the targeted emotion. For better representation and to achieve emotion intensity control, we specifically focus on the arousal dimension of continuous representations, as opposed to performing emotion conversion on categorical representations. We test our methodology on the large in-the-wild MSP-Podcast dataset. Results reveal that the proposed approach is aptly conditioned on the emotional content of input speech and is capable of synthesising natural-sounding speech for a target emotion. Results further reveal that the methodology better synthesises speech for mid-scale arousal (2 to 6) than for extreme arousal (1 and 7).
[ "eess.AS", "cs.HC", "cs.LG" ]
false
2306.01920
2023-06-02T21:22:27Z
Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning
[ "Dingyang Chen", "Qi Zhang" ]
Executing actions in a correlated manner is a common strategy for human coordination that often leads to better cooperation, which is also potentially beneficial for cooperative multi-agent reinforcement learning (MARL). However, the recent success of MARL relies heavily on the convenient paradigm of purely decentralized execution, where there is no action correlation among agents for scalability considerations. In this work, we introduce a Bayesian network to inaugurate correlations between agents' action selections in their joint policy. We establish a theoretical justification for why action dependencies are beneficial by deriving the multi-agent policy gradient formula under such a Bayesian network joint policy and proving its global convergence to Nash equilibria under tabular softmax policy parameterization in cooperative Markov games. Further, by equipping existing MARL algorithms with a recent method of differentiable directed acyclic graphs (DAGs), we develop practical algorithms to learn the context-aware Bayesian network policies in scenarios with partial observability and varying difficulty. We also dynamically decrease the sparsity of the learned DAG throughout the training process, which leads to weakly or even purely independent policies for decentralized execution. Empirical results on a range of MARL benchmarks show the benefits of our approach.
[ "cs.MA", "cs.AI", "cs.GT", "cs.LG" ]
false
2306.01926
2023-06-02T21:45:13Z
RITA: Group Attention is All You Need for Timeseries Analytics
[ "Jiaming Liang", "Lei Cao", "Samuel Madden", "Zachary Ives", "Guoliang Li" ]
Timeseries analytics is of great importance in many real-world applications. Recently, the Transformer model, popular in natural language processing, has been leveraged to learn high quality feature embeddings from timeseries, core to the performance of various timeseries analytics tasks. However, the quadratic time and space complexities limit Transformers' scalability, especially for long timeseries. To address these issues, we develop a timeseries analytics tool, RITA, which uses a novel attention mechanism, named group attention, to address this scalability issue. Group attention dynamically clusters the objects based on their similarity into a small number of groups and approximately computes the attention at the coarse group granularity. It thus significantly reduces the time and space complexity, yet provides a theoretical guarantee on the quality of the computed attention. The dynamic scheduler of RITA continuously adapts the number of groups and the batch size in the training process, ensuring group attention always uses the fewest groups needed to meet the approximation quality requirement. Extensive experiments on various timeseries datasets and analytics tasks demonstrate that RITA outperforms the state-of-the-art in accuracy and is significantly faster -- with speedups of up to 63X.
[ "cs.LG", "cs.AI", "cs.DB" ]
false
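To make the group-attention idea in the RITA record above concrete, here is a toy sketch: keys and values are clustered into a small number of groups, attention is computed against the group centroids, and group sizes reweight the scores. This is only a conceptual approximation under assumed design choices (plain K-means, log-size reweighting); RITA's actual grouping, approximation guarantee, and adaptive scheduler are not reproduced.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def group_attention(Q, K, V, n_groups=8, n_iters=10, seed=0):
    """Approximate attention by attending to K-means centroids of the keys.

    Q, K, V: (seq_len, d) arrays. Returns a (seq_len, d) output whose cost is
    O(seq_len * n_groups) instead of O(seq_len^2).
    """
    rng = np.random.default_rng(seed)
    d = Q.shape[1]
    centroids = K[rng.choice(len(K), n_groups, replace=False)]
    for _ in range(n_iters):                 # plain K-means on the keys
        assign = np.argmin(((K[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for g in range(n_groups):
            if np.any(assign == g):
                centroids[g] = K[assign == g].mean(axis=0)
    sizes = np.bincount(assign, minlength=n_groups).astype(float)
    V_groups = np.stack([V[assign == g].mean(axis=0) if sizes[g] else np.zeros(d)
                         for g in range(n_groups)])
    # Scores against centroids, reweighted by group size in the log domain.
    scores = Q @ centroids.T / np.sqrt(d) + np.log(sizes + 1e-9)
    return softmax(scores) @ V_groups

Q = K = V = np.random.default_rng(1).normal(size=(256, 32))
out = group_attention(Q, K, V)
print(out.shape)   # (256, 32)
```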
2306.01212
2023-06-02T00:24:42Z
Linked Deep Gaussian Process Emulation for Model Networks
[ "Deyu Ming", "Daniel Williamson" ]
Modern scientific problems are often multi-disciplinary and require integration of computer models from different disciplines, each with distinct functional complexities, programming environments, and computation times. Linked Gaussian process (LGP) emulation tackles this challenge through a divide-and-conquer strategy that integrates Gaussian process emulators of the individual computer models in a network. However, the required stationarity of the component Gaussian process emulators within the LGP framework limits its applicability in many real-world applications. In this work, we conceptualize a network of computer models as a deep Gaussian process with partial exposure of its hidden layers. We develop a method for inference for these partially exposed deep networks that retains a key strength of the LGP framework, whereby each model can be emulated separately using a DGP and then linked together. We show in both synthetic and empirical examples that our linked deep Gaussian process emulators exhibit significantly better predictive performance than standard LGP emulators in terms of accuracy and uncertainty quantification. They also outperform single DGPs fitted to the network as a whole because they are able to integrate information from the partially exposed hidden layers. Our methods are implemented in an R package $\texttt{dgpsi}$ that is freely available on CRAN.
[ "stat.ML", "cs.LG", "stat.AP", "stat.CO", "stat.ME" ]
false
2306.01485
2023-06-02T12:22:35Z
Robust low-rank training via approximate orthonormal constraints
[ "Dayana Savostianova", "Emanuele Zangrando", "Gianluca Ceruti", "Francesco Tudisco" ]
With the growth of model and data sizes, a broad effort has been made to design pruning techniques that reduce the resource demand of deep learning pipelines, while retaining model performance. In order to reduce both inference and training costs, a prominent line of work uses low-rank matrix factorizations to represent the network weights. Although able to retain accuracy, we observe that low-rank methods tend to compromise model robustness against adversarial perturbations. By modeling robustness in terms of the condition number of the neural network, we argue that this loss of robustness is due to the exploding singular values of the low-rank weight matrices. Thus, we introduce a robust low-rank training algorithm that maintains the network's weights on the low-rank matrix manifold while simultaneously enforcing approximate orthonormal constraints. The resulting model reduces both training and inference costs while ensuring well-conditioning and thus better adversarial robustness, without compromising model accuracy. This is shown by extensive numerical evidence and by our main approximation theorem that shows the computed robust low-rank network well-approximates the ideal full model, provided a highly performing low-rank sub-network exists.
[ "cs.LG", "cs.AI", "cs.NA", "math.NA", "stat.ML" ]
false
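The robust low-rank training record above couples a low-rank factorization with approximately orthonormal factors. A minimal PyTorch sketch of that coupling follows; the soft penalty on the factors is a simple stand-in of my own, whereas the paper keeps the weights on the low-rank manifold with a dedicated training algorithm.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer with weight W = U @ diag(s) @ V^T of rank r."""
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / d_out ** 0.5)
        self.s = nn.Parameter(torch.ones(rank))
        self.V = nn.Parameter(torch.randn(d_in, rank) / d_in ** 0.5)

    def forward(self, x):
        return (x @ self.V) * self.s @ self.U.T

    def orthonormality_penalty(self):
        # Encourage U^T U ~ I and V^T V ~ I, which keeps the factorization
        # well-conditioned (singular values controlled by s alone).
        r = self.s.numel()
        eye = torch.eye(r, device=self.s.device)
        return ((self.U.T @ self.U - eye) ** 2).sum() + \
               ((self.V.T @ self.V - eye) ** 2).sum()

layer = LowRankLinear(64, 32, rank=8)
x = torch.randn(16, 64)
y = torch.randn(16, 32)
loss = nn.functional.mse_loss(layer(x), y) + 0.1 * layer.orthonormality_penalty()
loss.backward()
print(float(loss))
```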
2306.01988
2023-06-03T03:21:18Z
Lightweight Structure-aware Transformer Network for VHR Remote Sensing Image Change Detection
[ "Tao Lei", "Yetong Xu", "Hailong Ning", "Zhiyong Lv", "Chongdan Min", "Yaochu Jin", "Asoke K. Nandi" ]
Popular Transformer networks have been successfully applied to remote sensing (RS) image change detection (CD) and achieve better results than most convolutional neural networks (CNNs), but they still suffer from two main problems. First, the computational complexity of the Transformer grows quadratically with the increase of image spatial resolution, which is unfavorable to very high-resolution (VHR) RS images. Second, these popular Transformer networks tend to ignore the importance of fine-grained features, which results in poor edge integrity and internal tightness for largely changed objects and leads to the loss of small changed objects. To address the above issues, this Letter proposes a Lightweight Structure-aware Transformer (LSAT) network for RS image CD. The proposed LSAT has two advantages. First, a Cross-dimension Interactive Self-attention (CISA) module with linear complexity is designed to replace the vanilla self-attention in visual Transformer, which effectively reduces the computational complexity while improving the feature representation ability of the proposed LSAT. Second, a Structure-aware Enhancement Module (SAEM) is designed to enhance difference features and edge detail information, which can achieve double enhancement by difference refinement and detail aggregation so as to obtain fine-grained features of bi-temporal RS images. Experimental results show that the proposed LSAT achieves significant improvement in detection accuracy and offers a better tradeoff between accuracy and computational costs than most state-of-the-art CD methods for VHR RS images.
[ "cs.CV" ]
false
2306.02021
2023-06-03T06:34:17Z
Towards Black-box Adversarial Example Detection: A Data Reconstruction-based Method
[ "Yifei Gao", "Zhiyu Lin", "Yunfan Yang", "Jitao Sang" ]
Adversarial example detection is known to be an effective adversarial defense method. However, the black-box attack, which is a more realistic threat and has led to various black-box adversarial training-based defense methods, has not attracted considerable attention in adversarial example detection. In this paper, we fill this gap by posing the problem of black-box adversarial example detection (BAD). Data analysis under the introduced BAD settings demonstrates (1) the incapability of existing detectors in addressing the black-box scenario and (2) the potential of exploring BAD solutions from a data perspective. To tackle the BAD problem, we propose a data reconstruction-based adversarial example detection method. Specifically, we use a variational auto-encoder (VAE) to capture both pixel and frequency representations of normal examples. Then we use the reconstruction error to detect adversarial examples. Compared with existing detection methods, the proposed method achieves substantially better detection performance in BAD, which helps promote the deployment of adversarial example detection-based defense solutions in real-world models.
[ "cs.CV" ]
false
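The detection recipe in the record above boils down to scoring inputs by how well a generative model of normal data reconstructs them. A stripped-down sketch follows, using a plain autoencoder and a percentile threshold; the paper's actual detector is a VAE over both pixel and frequency representations, which is not reproduced here.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

@torch.no_grad()
def reconstruction_scores(model, x):
    """Per-example reconstruction error; high error suggests an adversarial
    or otherwise out-of-distribution input."""
    return ((model(x) - x) ** 2).mean(dim=1)

# Assume `model` was trained on normal (clean) examples beforehand.
model = TinyAutoencoder()
clean = torch.rand(512, 784)
suspect = torch.rand(64, 784)

threshold = torch.quantile(reconstruction_scores(model, clean), 0.95)
flags = reconstruction_scores(model, suspect) > threshold
print(f"flagged {int(flags.sum())} of {len(suspect)} inputs as adversarial")
```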
2306.02061
2023-06-03T09:19:24Z
Balancing Logit Variation for Long-tailed Semantic Segmentation
[ "Yuchao Wang", "Jingjing Fei", "Haochen Wang", "Wei Li", "Tianpeng Bao", "Liwei Wu", "Rui Zhao", "Yujun Shen" ]
Semantic segmentation usually suffers from a long-tail data distribution. Due to the imbalanced number of samples across categories, the features of those tail classes may get squeezed into a narrow area in the feature space. Towards a balanced feature distribution, we introduce category-wise variation into the network predictions in the training phase such that an instance is no longer projected to a feature point, but a small region instead. Such a perturbation is highly dependent on the category scale, which appears as assigning smaller variation to head classes and larger variation to tail classes. In this way, we manage to close the gap between the feature areas of different categories, resulting in a more balanced representation. It is noteworthy that the introduced variation is discarded at the inference stage to facilitate a confident prediction. Although with an embarrassingly simple implementation, our method manifests itself in strong generalizability to various datasets and task settings. Extensive experiments suggest that our plug-in design lends itself well to a range of state-of-the-art approaches and boosts the performance on top of them.
[ "cs.CV" ]
false
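A compact sketch of the category-scale-dependent perturbation described in the record above: during training, each logit is jittered with noise whose scale grows for rarer (tail) classes, and the noise is dropped at inference. The exact mapping from class frequency to noise scale below is an illustrative assumption, not the paper's formula.

```python
import torch

def balanced_logit_variation(logits, class_counts, base_scale=1.0, training=True):
    """Add per-class Gaussian noise to logits, larger for tail classes.

    logits       : (batch, num_classes) raw network predictions
    class_counts : (num_classes,) number of training samples per class
    """
    if not training:
        return logits                        # perturbation is discarded at inference
    freq = class_counts.float() / class_counts.sum()
    scale = base_scale * (1.0 - freq)        # head classes -> small, tail -> large
    return logits + torch.randn_like(logits) * scale

counts = torch.tensor([10000, 500, 20])      # long-tailed class distribution
logits = torch.randn(4, 3)
print(balanced_logit_variation(logits, counts, training=True))
print(balanced_logit_variation(logits, counts, training=False))
```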
2306.02064
2023-06-03T09:36:16Z
Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training
[ "Pucheng Dang", "Xing Hu", "Kaidi Xu", "Jinhao Duan", "Di Huang", "Husheng Han", "Rui Zhang", "Zidong Du", "Qi Guo", "Yunji Chen" ]
Unlearning techniques have been proposed to prevent third parties from exploiting unauthorized data; they generate unlearnable samples by adding imperceptible perturbations to data before public release. These unlearnable samples effectively misguide model training into learning perturbation features while ignoring image semantic features. We conduct an in-depth analysis and observe that models can learn both image features and perturbation features of unlearnable samples at an early stage, but then rapidly overfit, since the shallow layers tend to overfit on perturbation features. Based on these observations, we propose Progressive Staged Training to effectively prevent models from overfitting in learning perturbation features. We evaluated our method on multiple model architectures over diverse datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents the unlearnability of all state-of-the-art methods in the literature and provides a reliable baseline for further evaluation of unlearning techniques.
[ "cs.CV" ]
false
2306.02083
2023-06-03T11:08:38Z
Efficient Text-Guided 3D-Aware Portrait Generation with Score Distillation Sampling on Distribution
[ "Yiji Cheng", "Fei Yin", "Xiaoke Huang", "Xintong Yu", "Jiaxiang Liu", "Shikun Feng", "Yujiu Yang", "Yansong Tang" ]
Text-to-3D is an emerging task that allows users to create 3D content with infinite possibilities. Existing works tackle the problem by optimizing a 3D representation with guidance from pre-trained diffusion models. An apparent drawback is that they need to optimize from scratch for each prompt, which is computationally expensive and often yields poor visual fidelity. In this paper, we propose DreamPortrait, which aims to generate text-guided 3D-aware portraits in a single-forward pass for efficiency. To achieve this, we extend Score Distillation Sampling from datapoint to distribution formulation, which injects semantic prior into a 3D distribution. However, the direct extension will lead to the mode collapse problem since the objective only pursues semantic alignment. Hence, we propose to optimize a distribution with hierarchical condition adapters and GAN loss regularization. For better 3D modeling, we further design a 3D-aware gated cross-attention mechanism to explicitly let the model perceive the correspondence between the text and the 3D-aware space. These elaborated designs enable our model to generate portraits with robust multi-view semantic consistency, eliminating the need for optimization-based methods. Extensive experiments demonstrate our model's highly competitive performance and significant speed boost against existing methods.
[ "cs.CV" ]
false
2306.02092
2023-06-03T11:50:44Z
Relieving Triplet Ambiguity: Consensus Network for Language-Guided Image Retrieval
[ "Xu Zhang", "Zhedong Zheng", "Xiaohan Wang", "Yi Yang" ]
Language-guided image retrieval enables users to search for images and interact with the retrieval system more naturally and expressively by using a reference image and a relative caption as a query. Most existing studies mainly focus on designing image-text composition architecture to extract discriminative visual-linguistic relations. Despite great success, we identify an inherent problem that obstructs the extraction of discriminative features and considerably compromises model training: \textbf{triplet ambiguity}. This problem stems from the annotation process wherein annotators view only one triplet at a time. As a result, they often describe simple attributes, such as color, while neglecting fine-grained details like location and style. This leads to multiple false-negative candidates matching the same modification text. We propose a novel Consensus Network (Css-Net) that self-adaptively learns from noisy triplets to minimize the negative effects of triplet ambiguity. Inspired by the psychological finding that groups perform better than individuals, Css-Net comprises 1) a consensus module featuring four distinct compositors that generate diverse fused image-text embeddings and 2) a Kullback-Leibler divergence loss, which fosters learning among the compositors, enabling them to reduce biases learned from noisy triplets and reach a consensus. The decisions from four compositors are weighted during evaluation to further achieve consensus. Comprehensive experiments on three datasets demonstrate that Css-Net can alleviate triplet ambiguity, achieving competitive performance on benchmarks, such as $+2.77\%$ R@10 and $+6.67\%$ R@50 on FashionIQ.
[ "cs.CV" ]
false
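The consensus mechanism in the Css-Net record above can be illustrated with a small PyTorch snippet: several compositors each produce a distribution over candidates, and a KL term pulls every compositor toward the group's mean prediction. Averaging toward the mean distribution is an assumption for illustration; the paper's exact pairing of compositors and loss weighting may differ.

```python
import torch
import torch.nn.functional as F

def consensus_kl_loss(compositor_logits):
    """KL-based consensus regularizer over a list of (batch, n_candidates) logits."""
    probs = [F.softmax(l, dim=-1) for l in compositor_logits]
    mean_prob = torch.stack(probs).mean(dim=0)
    loss = 0.0
    for p in probs:
        # KL(p || mean) computed with log-probabilities for numerical stability.
        loss = loss + F.kl_div(mean_prob.log(), p, reduction="batchmean")
    return loss / len(probs)

# Four compositors scoring 100 candidate images for a batch of 8 queries.
logits = [torch.randn(8, 100) for _ in range(4)]
print(float(consensus_kl_loss(logits)))
```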
2306.02094
2023-06-03T11:54:56Z
Segment Anything Meets Semantic Communication
[ "Shehbaz Tariq", "Brian Estadimas Arfeto", "Chaoning Zhang", "Hyundong Shin" ]
In light of the diminishing returns of traditional methods for enhancing transmission rates, the domain of semantic communication presents promising new frontiers. Focusing on image transmission, this paper explores the application of foundation models, particularly the Segment Anything Model (SAM) developed by Meta AI Research, to improve semantic communication. SAM is a promptable image segmentation model that has gained attention for its ability to perform zero-shot segmentation tasks without explicit training or domain-specific knowledge. By employing SAM's segmentation capability and lightweight neural network architecture for semantic coding, we propose a practical approach to semantic communication. We demonstrate that this approach retains critical semantic features, achieving higher image reconstruction quality and reducing communication overhead. This practical solution eliminates the resource-intensive stage of training a segmentation model and can be applied to any semantic coding architecture, paving the way for real-world applications.
[ "cs.CV" ]
false
2306.02095
2023-06-03T12:05:07Z
Content-aware Token Sharing for Efficient Semantic Segmentation with Vision Transformers
[ "Chenyang Lu", "Daan de Geus", "Gijs Dubbelman" ]
This paper introduces Content-aware Token Sharing (CTS), a token reduction approach that improves the computational efficiency of semantic segmentation networks that use Vision Transformers (ViTs). Existing works have proposed token reduction approaches to improve the efficiency of ViT-based image classification networks, but these methods are not directly applicable to semantic segmentation, which we address in this work. We observe that, for semantic segmentation, multiple image patches can share a token if they contain the same semantic class, as they contain redundant information. Our approach leverages this by employing an efficient, class-agnostic policy network that predicts if image patches contain the same semantic class, and lets them share a token if they do. With experiments, we explore the critical design choices of CTS and show its effectiveness on the ADE20K, Pascal Context and Cityscapes datasets, various ViT backbones, and different segmentation decoders. With Content-aware Token Sharing, we are able to reduce the number of processed tokens by up to 44%, without diminishing the segmentation quality.
[ "cs.CV" ]
false
2306.02098
2023-06-03T12:15:20Z
Towards Complex Real-World Safety Factory Inspection: A High-Quality Dataset for Safety Clothing and Helmet Detection
[ "Fusheng Yu", "Xiaoping Wang", "Jiang Li", "Shaojin Wu", "Junjie Zhang", "Zhigang Zeng" ]
Safety clothing and helmets play a crucial role in ensuring worker safety at construction sites. Recently, deep learning methods have garnered significant attention in the field of computer vision for their potential to enhance safety and efficiency in various industries. However, the limited availability of high-quality datasets has hindered the development of deep learning methods for safety clothing and helmet detection. In this work, we present a large, comprehensive, and realistic high-quality dataset for safety clothing and helmet detection, which was collected from a real-world chemical plant and annotated by professional security inspectors. Our dataset has been compared with several existing open-source datasets, and its effectiveness has been verified by applying several classic object detection methods. The results demonstrate that our dataset is more complete and performs better in real-world settings. Furthermore, we have released our deployment code to the public to encourage the adoption of our dataset and improve worker safety. We hope that our efforts will promote the convergence of academic research and industry, ultimately contributing to the betterment of society.
[ "cs.CV" ]
false
2306.02142
2023-06-03T15:56:30Z
TransDocAnalyser: A Framework for Offline Semi-structured Handwritten Document Analysis in the Legal Domain
[ "Sagar Chakraborty", "Gaurav Harit", "Saptarshi Ghosh" ]
State-of-the-art offline Optical Character Recognition (OCR) frameworks perform poorly on semi-structured handwritten domain-specific documents due to their inability to localize and label form fields with domain-specific semantics. Existing techniques for semi-structured document analysis have primarily used datasets comprising invoices, purchase orders, receipts, and identity-card documents for benchmarking. In this work, we build the first semi-structured document analysis dataset in the legal domain by collecting a large number of First Information Report (FIR) documents from several police stations in India. This dataset, which we call the FIR dataset, is more challenging than most existing document analysis datasets, since it combines a wide variety of handwritten text with printed text. We also propose an end-to-end framework for offline processing of handwritten semi-structured documents, and benchmark it on our novel FIR dataset. Our framework uses an encoder-decoder architecture for localizing and labelling the form fields and for recognizing the handwritten content. The encoder consists of Faster-RCNN and Vision Transformers. Further, the Transformer-based decoder is trained with a domain-specific tokenizer. We also propose a post-correction method to handle recognition errors pertaining to domain-specific terms. Our proposed framework achieves state-of-the-art results on the FIR dataset, outperforming several existing models.
[ "cs.CV", "I.2.1" ]
false
2311.11465
2023-06-03T08:06:38Z
Understanding Segment Anything Model: SAM is Biased Towards Texture Rather than Shape
[ "Chaoning Zhang", "Yu Qiao", "Shehbaz Tariq", "Sheng Zheng", "Chenshuang Zhang", "Chenghao Li", "Hyundong Shin", "Choong Seon Hong" ]
In contrast to the human vision that mainly depends on the shape for recognizing the objects, deep image recognition models are widely known to be biased toward texture. Recently, the Meta research team has released the first foundation model for image segmentation, termed segment anything model (SAM), which has attracted significant attention. In this work, we understand SAM from the perspective of texture \textit{vs.} shape. Different from label-oriented recognition tasks, the SAM is trained to predict a mask for covering the object shape based on a prompt. With this said, it seems self-evident that the SAM is biased towards shape. In this work, however, we reveal an interesting finding: the SAM is strongly biased towards texture-like dense features rather than shape. This intriguing finding is supported by a novel setup where we disentangle texture and shape cues and design texture-shape cue conflict for mask prediction.
[ "cs.CV" ]
false
2306.01983
2023-06-03T02:33:38Z
Mitigating Backdoor Attack Via Prerequisite Transformation
[ "Han Gao" ]
In recent years, with the successful application of DNNs in fields such as NLP and CV, their security has also received widespread attention. The backdoor attack, introduced in BadNets, implants a backdoor into the model by poisoning the training samples. A backdoored model does not exhibit any abnormalities on the normal validation set, but inputs carrying the trigger are misclassified as the attacker's designated category, or randomly classified into a category different from the ground truth. This attack seriously threatens real-life applications of DNNs, such as autonomous driving and object detection. This article proposes a new method to combat backdoor attacks. We refer to the features in the area covered by the trigger as trigger features and the remaining areas as normal features. We introduce prerequisite calculation conditions during the training process; these conditions have little impact on normal features and trigger features, and training of a standard backdoor model can still be completed under them. On a verification set D'val that applies the same prerequisite calculation conditions, the model trained under these conditions performs consistently with an ordinary backdoor model. However, on a verification set Dval without the prerequisite calculation conditions, the verification accuracy decreases very little (7%~12%), while the attack success rate (ASR) decreases from 90% to about 8%. We call this method Prerequisite Transformation (PT).
[ "cs.CR", "cs.CV" ]
false
2306.02133
2023-06-03T15:06:12Z
Graph Mover's Distance: An Efficiently Computable Distance Measure for Geometric Graphs
[ "Sushovan Majhi" ]
Many applications in pattern recognition represent patterns as a geometric graph. The geometric graph distance (GGD) has recently been studied as a meaningful measure of similarity between two geometric graphs. Since computing the GGD is known to be $\mathcal{NP}$-hard, the distance measure proves an impractical choice for applications. As a computationally tractable alternative, we propose in this paper the Graph Mover's Distance (GMD), which has been formulated as an instance of the earth mover's distance. The computation of the GMD between two geometric graphs with at most $n$ vertices takes only $O(n^3)$-time. Alongside studying the metric properties of the GMD, we investigate the stability of the GGD and GMD. The GMD also demonstrates extremely promising empirical performance in recognizing letter drawings from the {\tt LETTER} dataset \cite{da_vitoria_lobo_iam_2008}.
[ "cs.CG", "cs.CV" ]
false
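Since the record above formulates the GMD as an instance of the earth mover's distance, a generic EMD computation gives a feel for the underlying optimization. The sketch below uses the POT library on toy vertex embeddings of two graphs with uniform mass; the paper's actual vertex and edge cost construction for geometric graphs is not reproduced, so treat this only as the transportation-problem core.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
# Toy "vertex embeddings" of two geometric graphs (e.g., 2-D coordinates).
X = rng.normal(size=(6, 2))
Y = rng.normal(loc=0.5, size=(8, 2))

# Uniform mass on every vertex and pairwise Euclidean ground cost.
a = np.full(len(X), 1.0 / len(X))
b = np.full(len(Y), 1.0 / len(Y))
cost = ot.dist(X, Y, metric="euclidean")

emd_value = ot.emd2(a, b, cost)   # optimal transport cost (earth mover's distance)
plan = ot.emd(a, b, cost)         # the optimal transport plan itself
print(f"EMD = {emd_value:.4f}, plan shape = {plan.shape}")
```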
2306.02143
2023-06-03T15:58:38Z
Hierarchical Multiresolution Feature- and Prior-based Graphs for Classification
[ "Faezeh Fallah" ]
To incorporate spatial (neighborhood) and bidirectional hierarchical relationships as well as features and priors of the samples into their classification, we formulated the classification problem on three variants of multiresolution neighborhood graphs and the graph of a hierarchical conditional random field. Each of these graphs was weighted and undirected and could thus incorporate the spatial or hierarchical relationships in all directions. In addition, each variant of the proposed neighborhood graphs was composed of a spatial feature-based subgraph and an aspatial prior-based subgraph. It expanded on a random walker graph by using novel mechanisms to derive the edge weights of its spatial feature-based subgraph. These mechanisms included implicit and explicit edge detection to enhance detection of weak boundaries between different classes in the spatial domain. The implicit edge detection relied on the outlier detection capability of the Tukey's function and the classification reliabilities of the samples estimated by a hierarchical random forest classifier. A similar mechanism was used to derive the edge weights and thus the energy function of the hierarchical conditional random field. In this way, the classification problem boiled down to a system of linear equations and a minimization of the energy function, which could be done via fast and efficient techniques.
[ "cs.CV", "cs.LG" ]
false
2306.02216
2023-06-03T23:53:57Z
Forgettable Federated Linear Learning with Certified Data Removal
[ "Ruinan Jin", "Minghui Chen", "Qiong Zhang", "Xiaoxiao Li" ]
Federated learning (FL) is a trending distributed learning framework that enables collaborative model training without data sharing. Machine learning models trained on datasets can potentially expose the private information of the training data, revealing details about individual data records. In this study, we focus on the FL paradigm that grants clients the ``right to be forgotten''. The forgettable FL framework should bleach its global model weights as if it had never seen that client and hence not reveal any information about the client. To this end, we propose the Forgettable Federated Linear Learning (2F2L) framework featured with novel training and data removal strategies. The training pipeline, named Federated linear training, employs linear approximation on the model parameter space to enable our 2F2L framework to work for deep neural networks while achieving comparable results with canonical neural network training. We also introduce FedRemoval, an efficient and effective removal strategy that tackles the computational challenges in FL by approximating the Hessian matrix using public server data from the pretrained model. Unlike the previous uncertified and heuristic machine unlearning methods in FL, we provide theoretical guarantees by bounding the differences between the model weights obtained by our FedRemoval and those obtained by retraining from scratch. Experimental results on MNIST and Fashion-MNIST datasets demonstrate the effectiveness of our method in achieving a balance between model accuracy and information removal, outperforming baseline strategies and approaching retraining from scratch.
[ "cs.LG", "cs.CV" ]
false
2308.05178
2023-06-03T12:19:21Z
An Improved Model for Diabetic Retinopathy Detection by using Transfer Learning and Ensemble Learning
[ "Md. Simul Hasan Talukder", "Ajay Kirshno Sarkar", "Sharmin Akter", "Md. Nuhi-Alamin" ]
Diabetic Retinopathy (DR) is an ocular condition caused by a sustained high level of sugar in the blood, which causes the retinal capillaries to block and bleed, causing retinal tissue damage. It usually results in blindness. Early detection can help in lowering the risk of DR and its severity. The robust and accurate prediction and detection of diabetic retinopathy is a challenging task. This paper develops a machine learning model for detecting Diabetic Retinopathy that is entirely accurate. Pre-trained models such as ResNet50, InceptionV3, Xception, DenseNet121, VGG19, NASNetMobile, MobileNetV2, DenseNet169, and DenseNet201, with a pooling layer, dense layer, and appropriate dropout layer at the bottom, were employed in a transfer learning (TL) approach. Data augmentation and regularization were performed to reduce overfitting. The transfer learning model of DenseNet121, and the average and weighted ensembles of the DenseNet169 and DenseNet201 TL architectures, individually contribute the highest accuracy of 100%, and the highest precision, recall, and F1-score of 100%, 100%, and 100%, respectively.
[ "eess.IV", "cs.CV" ]
false
2306.01970
2023-06-03T00:38:40Z
Temporal-spatial Correlation Attention Network for Clinical Data Analysis in Intensive Care Unit
[ "Weizhi Nie", "Yuhe Yu", "Chen Zhang", "Dan Song", "Lina Zhao", "Yunpeng Bai" ]
In recent years, medical information technology has made it possible for electronic health record (EHR) to store fairly complete clinical data. This has brought health care into the era of "big data". However, medical data are often sparse and strongly correlated, which means that medical problems cannot be solved effectively. With the rapid development of deep learning in recent years, it has provided opportunities for the use of big data in healthcare. In this paper, we propose a temporal-spatial correlation attention network (TSCAN) to handle some clinical characteristic prediction problems, such as predicting death, predicting length of stay, detecting physiologic decline, and classifying phenotypes. Based on the design of the attention mechanism model, our approach can effectively remove irrelevant items in clinical data and irrelevant nodes in time according to different tasks, so as to obtain more accurate prediction results. Our method can also find key clinical indicators of important outcomes that can be used to improve treatment options. Our experiments use information from the Medical Information Mart for Intensive Care (MIMIC-IV) database, which is open to the public. Finally, we have achieved significant performance benefits of 2.0\% (metric) compared to other SOTA prediction methods. We achieved a staggering 90.7\% on mortality rate, 45.1\% on length of stay. The source code can be found at: \url{https://github.com/yuyuheintju/TSCAN}.
[ "cs.LG", "cs.AI", "cs.CV", "cs.CY" ]
false
2306.05500
2023-06-03T21:39:07Z
Word-Level Explanations for Analyzing Bias in Text-to-Image Models
[ "Alexander Lin", "Lucas Monteiro Paes", "Sree Harsha Tanneru", "Suraj Srinivas", "Himabindu Lakkaraju" ]
Text-to-image models take a sentence (i.e., prompt) and generate images associated with this input prompt. These models have created award-winning art, videos, and even synthetic datasets. However, text-to-image (T2I) models can generate images that underrepresent minorities based on race and sex. This paper investigates which word in the input prompt is responsible for bias in generated images. We introduce a method for computing scores for each word in the prompt; these scores represent its influence on biases in the model's output. Our method follows the principle of \emph{explaining by removing}, leveraging masked language models to calculate the influence scores. We perform experiments on Stable Diffusion to demonstrate that our method identifies the replication of societal stereotypes in generated images.
[ "cs.CL", "cs.AI", "cs.CV", "cs.LG" ]
false
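The "explaining by removing" idea in the record above can be sketched in a few lines: drop one word of the prompt at a time and record how much an external bias score changes. The `bias_score` function below is a hypothetical stand-in for the paper's metric over generated images, and simple word deletion replaces their masked-language-model machinery, so this is only a conceptual sketch.

```python
def bias_score(prompt: str) -> float:
    """Hypothetical stand-in: in the real pipeline this would generate images
    for `prompt` with a text-to-image model and measure demographic skew."""
    return float(len(prompt)) % 7 / 7.0   # deterministic dummy value

def word_influences(prompt: str) -> dict[str, float]:
    """Influence of each word = |bias(full prompt) - bias(prompt without the word)|."""
    words = prompt.split()
    full = bias_score(prompt)
    influences = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        influences[w] = abs(full - bias_score(reduced))
    return influences

prompt = "a photo of a successful engineer at work"
for word, score in sorted(word_influences(prompt).items(), key=lambda kv: -kv[1]):
    print(f"{word:>12s}  {score:.3f}")
```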
2306.02022
2023-06-03T06:42:17Z
ACI-BENCH: a Novel Ambient Clinical Intelligence Dataset for Benchmarking Automatic Visit Note Generation
[ "Wen-wai Yim", "Yujuan Fu", "Asma Ben Abacha", "Neal Snider", "Thomas Lin", "Meliha Yetisgen" ]
Recent immense breakthroughs in generative models such as GPT-4 have precipitated re-imagined ubiquitous usage of these models in all applications. One area that can benefit from improvements in artificial intelligence (AI) is healthcare. The note generation task from doctor-patient encounters, and its associated electronic medical record documentation, is one of the most arduous time-consuming tasks for physicians. It is also a natural prime potential beneficiary of advances in generative models. However, with such advances, benchmarking is more critical than ever. Whether studying model weaknesses or developing new evaluation metrics, shared open datasets are an imperative part of understanding the current state-of-the-art. Unfortunately, as clinic encounter conversations are not routinely recorded and are difficult to ethically share due to patient confidentiality, there are no sufficiently large clinic dialogue-note datasets to benchmark this task. Here we present the Ambient Clinical Intelligence Benchmark (ACI-BENCH) corpus, the largest dataset to date tackling the problem of AI-assisted note generation from visit dialogue. We also present the benchmark performances of several common state-of-the-art approaches.
[ "cs.CL" ]
false
2306.02042
2023-06-03T07:48:00Z
Impact of translation on biomedical information extraction from real-life clinical notes
[ "Christel Gérardin", "Yuhan Xiong", "Perceval Wajsbürt", "Fabrice Carrat", "Xavier Tannier" ]
The objective of our study is to determine whether using English tools to extract and normalize French medical concepts on translations provides comparable performance to French models trained on a set of annotated French clinical notes. We compare two methods: a method involving French language models and a method involving English language models. For the native French method, the Named Entity Recognition (NER) and normalization steps are performed separately. For the translated English method, after the first translation step, we compare a two-step method and a terminology-oriented method that performs extraction and normalization at the same time. We used French, English and bilingual annotated datasets to evaluate all steps (NER, normalization and translation) of our algorithms. Concerning the results, the native French method performs better than the translated English one with a global f1 score of 0.51 [0.47;0.55] against 0.39 [0.34;0.44] and 0.38 [0.36;0.40] for the two English methods tested. In conclusion, despite the recent improvement of the translation models, there is a significant performance difference between the two approaches in favor of the native French method which is more efficient on French medical texts, even with few annotated documents.
[ "cs.CL" ]
false
2306.02175
2023-06-03T18:38:02Z
TART: Improved Few-shot Text Classification Using Task-Adaptive Reference Transformation
[ "Shuo Lei", "Xuchao Zhang", "Jianfeng He", "Fanglan Chen", "Chang-Tien Lu" ]
Meta-learning has emerged as a trending technique to tackle few-shot text classification and achieve state-of-the-art performance. However, the performance of existing approaches heavily depends on the inter-class variance of the support set. As a result, it can perform well on tasks when the semantics of sampled classes are distinct while failing to differentiate classes with similar semantics. In this paper, we propose a novel Task-Adaptive Reference Transformation (TART) network, aiming to enhance the generalization by transforming the class prototypes to per-class fixed reference points in task-adaptive metric spaces. To further maximize divergence between transformed prototypes in task-adaptive metric spaces, TART introduces a discriminative reference regularization among transformed prototypes. Extensive experiments are conducted on four benchmark datasets and our method demonstrates clear superiority over the state-of-the-art models in all the datasets. In particular, our model surpasses the state-of-the-art method by 7.4% and 5.4% in 1-shot and 5-shot classification on the 20 Newsgroups dataset, respectively.
[ "cs.CL" ]
false
2306.02182
2023-06-03T19:38:04Z
FlairNLP at SemEval-2023 Task 6b: Extraction of Legal Named Entities from Legal Texts using Contextual String Embeddings
[ "Vinay N Ramesh", "Rohan Eswara" ]
Indian court legal texts and processes are essential to the integrity of the judicial system and to maintaining the social and political order of the nation. Due to the increase in the number of pending court cases, there is an urgent need to develop tools that automate many of the legal processes using artificial intelligence. In this paper, we employ knowledge extraction techniques, specifically named entity extraction of legal entities within court case judgements. We evaluate several state-of-the-art architectures in the realm of sequence labeling using models trained on a curated dataset of legal texts. We observe that a Bi-LSTM model trained on Flair Embeddings achieves the best results, and we also publish the BIO-formatted dataset as part of this paper.
[ "cs.CL" ]
false
2306.02190
2023-06-03T20:12:27Z
Stubborn Lexical Bias in Data and Models
[ "Sofia Serrano", "Jesse Dodge", "Noah A. Smith" ]
In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect of such correlations are typically examined feature by feature. We investigate the cumulative impact on a model of many such intersecting features. Using a new statistical method, we examine whether such spurious patterns in data appear in models trained on the data. We select two tasks -- natural language inference and duplicate-question detection -- for which any unigram feature on its own should ideally be uninformative, which gives us a large pool of automatically extracted features with which to experiment. The large size of this pool allows us to investigate the intersection of features spuriously associated with (potentially different) labels. We then apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations, and examine how doing so affects models trained on the reweighted data. Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models, including worsened bias for slightly more complex features (bigrams). We close with discussion about the implications of our results on what it means to "debias" training data, and how issues of data quality can affect model bias.
[ "cs.CL" ]
false
2306.02196
2023-06-03T20:59:19Z
Question-Context Alignment and Answer-Context Dependencies for Effective Answer Sentence Selection
[ "Minh Van Nguyen", "Kishan KC", "Toan Nguyen", "Thien Huu Nguyen", "Ankit Chadha", "Thuy Vu" ]
Answer sentence selection (AS2) in open-domain question answering finds an answer to a question by ranking candidate sentences extracted from web documents. Recent work exploits answer context, i.e., sentences around a candidate, by incorporating them as an additional input string to the Transformer models to improve the correctness scoring. In this paper, we propose to improve candidate scoring by explicitly incorporating the dependencies between question-context and answer-context into the final representation of a candidate. Specifically, we use Optimal Transport to compute the question-based dependencies among sentences in the passage from which the answer is extracted. We then represent these dependencies as edges in a graph and use a Graph Convolutional Network to derive the representation of a candidate, a node in the graph. Our proposed model achieves significant improvements on popular AS2 benchmarks, i.e., WikiQA and WDRASS, obtaining new state-of-the-art results on all benchmarks.
[ "cs.CL" ]
false
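A minimal numerical sketch of the entropy-regularized optimal transport (Sinkhorn) computation the abstract above describes for relating a question to passage sentences; the embedding dimensions, cosine-distance cost, and regularization strength are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport between two uniform marginals.
    Returns the transport plan whose entries act as soft alignment weights."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform marginals
    K = np.exp(-cost / reg)                           # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)                # transport plan

# Toy example: cosine-distance cost between question-token embeddings and
# passage-sentence embeddings (random stand-ins here).
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 32))    # 4 question tokens
s = rng.normal(size=(6, 32))    # 6 passage sentences
q = q / np.linalg.norm(q, axis=1, keepdims=True)
s = s / np.linalg.norm(s, axis=1, keepdims=True)
cost = 1.0 - q @ s.T            # cosine distance
plan = sinkhorn(cost)
# Each row softly aligns a question token to the sentences; the plan entries
# can serve as edge weights of the graph later processed by a GCN.
print(np.round(plan, 3))
```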
2306.02077
2023-06-03T10:54:23Z
Utilizing ChatGPT to Enhance Clinical Trial Enrollment
[ "Georgios Peikos", "Symeon Symeonidis", "Pranav Kasela", "Gabriella Pasi" ]
Clinical trials are a critical component of evaluating the effectiveness of new medical interventions and driving advancements in medical research. Therefore, timely enrollment of patients is crucial to prevent delays or premature termination of trials. In this context, Electronic Health Records (EHRs) have emerged as a valuable tool for identifying and enrolling eligible participants. In this study, we propose an automated approach that leverages ChatGPT, a large language model, to extract patient-related information from unstructured clinical notes and generate search queries for retrieving potentially eligible clinical trials. Our empirical evaluation, conducted on two benchmark retrieval collections, shows improved retrieval performance compared to existing approaches when several general-purpose and task-specific prompts are used. Notably, ChatGPT-generated queries also outperform human-generated queries in terms of retrieval performance. These findings highlight the potential use of ChatGPT to enhance clinical trial enrollment while ensuring the quality of medical service and minimizing direct risks to patients.
[ "cs.IR", "cs.CL" ]
false
2306.02078
2023-06-03T10:56:44Z
Incorporating Deep Syntactic and Semantic Knowledge for Chinese Sequence Labeling with GCN
[ "Xuemei Tang", "Jun Wang", "Qi Su" ]
Recently, it has become quite common to integrate Chinese sequence labeling results to enhance syntactic and semantic parsing. However, little attention has been paid to the utility of the hierarchy and structure information encoded in syntactic and semantic features for Chinese sequence labeling tasks. In this paper, we propose a novel framework that encodes syntactic structure features and semantic information for Chinese sequence labeling tasks with graph convolutional networks (GCN). Experiments on five benchmark datasets, covering Chinese word segmentation and part-of-speech tagging, demonstrate that our model can effectively improve the performance of Chinese sequence labeling tasks.
[ "cs.CL", "cs.AI" ]
false
2306.02140
2023-06-03T15:41:59Z
Unsupervised Human Activity Recognition through Two-stage Prompting with ChatGPT
[ "Qingxin Xia", "Takuya Maekawa", "Takahiro Hara" ]
Wearable sensor devices, which offer the advantage of recording the daily objects used by a person while performing an activity, make unsupervised Human Activity Recognition (HAR) feasible. Unfortunately, previous unsupervised approaches using the usage sequence of objects usually require a proper description of activities manually prepared by humans. Instead, we leverage the knowledge embedded in ChatGPT, a Large Language Model (LLM). Because the sequence of objects robustly characterizes the activity identity, it is possible that ChatGPT has already learned the association between activities and objects from existing contexts. However, previous prompt engineering for ChatGPT exhibits limited generalization ability when dealing with a list of words (i.e., a sequence of objects) due to the similar weighting assigned to each word in the list. In this study, we propose a two-stage prompt engineering approach, which first guides ChatGPT to generate activity descriptions associated with objects while emphasizing important objects for distinguishing similar activities, and then outputs activity classes and explanations for enhancing the contexts that are helpful for HAR. To the best of our knowledge, this is the first study that utilizes ChatGPT to recognize activities using objects in an unsupervised manner. We evaluated our approach on three datasets and demonstrated state-of-the-art performance.
[ "cs.HC", "cs.CL" ]
false
2306.02193
2023-06-03T20:37:46Z
LDEB -- Label Digitization with Emotion Binarization and Machine Learning for Emotion Recognition in Conversational Dialogues
[ "Amitabha Dey", "Shan Suthaharan" ]
Emotion recognition in conversations (ERC) is vital to the advancement of conversational AI and its applications. Therefore, the development of an automated ERC model using the concepts of machine learning (ML) would be beneficial. However, conversational dialogues present a unique problem in which each dialogue depicts nested emotions that entangle the association between the emotional feature descriptors and the emotion type (or label). This entanglement, compounded by data paucity, is an obstacle for an ML model. To overcome this problem, we propose a novel approach called Label Digitization with Emotion Binarization (LDEB) that disentangles these twists by utilizing text normalization and 7-bit digital encoding techniques and constructs a meaningful feature space on which an ML model can be trained. We also utilized the publicly available FETA-DailyDialog dataset for feature learning and developed a hierarchical ERC model using random forest (RF) and artificial neural network (ANN) classifiers. Simulations showed that the ANN-based ERC model was able to predict emotion with best accuracy and precision scores of about 74% and 76%, respectively. Simulations also showed that the ANN model could reach a training accuracy score of about 98% within 60 epochs. On the other hand, the RF-based ERC model was able to predict emotions with best accuracy and precision scores of about 78% and 75%, respectively.
[ "cs.CL", "cs.LG" ]
false
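An illustrative toy sketch of the general label-digitization-with-emotion-binarization idea described in the entry above: per-emotion presence flags are binarized and read as a 7-bit code. The emotion inventory, the threshold, and the scoring interface are assumptions for illustration, not the paper's exact scheme:

```python
# Hypothetical 7-emotion inventory; the paper's actual bit assignment may differ.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]          # 7 bits

def digitize_label(emotion_scores, threshold=0.5):
    """emotion_scores: dict mapping emotion -> strength in [0, 1].
    Each emotion is binarized to present/absent, and the resulting 7-bit
    string is also returned as an integer 'digitized' label."""
    bits = ["1" if emotion_scores.get(e, 0.0) >= threshold else "0"
            for e in EMOTIONS]
    bitstring = "".join(bits)
    return bitstring, int(bitstring, 2)

example = {"happiness": 0.8, "surprise": 0.6}
print(digitize_label(example))    # ('0001010', 10)
```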
2306.02038
2023-06-03T07:32:25Z
Span Identification of Epistemic Stance-Taking in Academic Written English
[ "Masaki Eguchi", "Kristopher Kyle" ]
Responding to the increasing need for automated writing evaluation (AWE) systems to assess language use beyond lexis and grammar (Burstein et al., 2016), we introduce a new approach to identify rhetorical features of stance in academic English writing. Drawing on the discourse-analytic framework of engagement in the Appraisal analysis (Martin & White, 2005), we manually annotated 4,688 sentences (126,411 tokens) for eight rhetorical stance categories (e.g., PROCLAIM, ATTRIBUTION) and additional discourse elements. We then report an experiment to train machine learning models to identify and categorize the spans of these stance expressions. The best-performing model (RoBERTa + LSTM) achieved macro-averaged F1 of .7208 in the span identification of stance-taking expressions, slightly outperforming the intercoder reliability estimates before adjudication (F1 = .6629).
[ "cs.CL", "cs.AI", "cs.LG" ]
false
2306.02153
2023-06-03T16:44:21Z
Acoustic Word Embeddings for Untranscribed Target Languages with Continued Pretraining and Learned Pooling
[ "Ramon Sanabria", "Ondrej Klejch", "Hao Tang", "Sharon Goldwater" ]
Acoustic word embeddings are typically created by training a pooling function using pairs of word-like units. For unsupervised systems, these are mined using k-nearest neighbor (KNN) search, which is slow. Recently, mean-pooled representations from a pre-trained self-supervised English model were suggested as a promising alternative, but their performance on target languages was not fully competitive. Here, we explore improvements to both approaches: we use continued pre-training to adapt the self-supervised model to the target language, and we use a multilingual phone recognizer (MPR) to mine phone n-gram pairs for training the pooling function. Evaluating on four languages, we show that both methods outperform a recent approach on word discrimination. Moreover, the MPR method is orders of magnitude faster than KNN, and is highly data efficient. We also show a small improvement from performing learned pooling on top of the continued pre-trained representations.
[ "cs.CL", "cs.LG", "cs.SD", "eess.AS" ]
false
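A small sketch of the mean-pooling step discussed in the entry above: frame-level features from a pre-trained self-supervised encoder are averaged into a fixed-size acoustic word embedding and compared by cosine similarity for word discrimination. The random arrays stand in for real encoder outputs, and the 768-dimensional feature size is an assumption:

```python
import numpy as np

def mean_pool_embedding(frame_features):
    """Average frame-level features (T, D) into a single word embedding (D,)."""
    return frame_features.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy word-discrimination check: two segments of the same word type (shared
# underlying pattern plus noise) should score higher than different words.
rng = np.random.default_rng(1)
base = rng.normal(size=(1, 768))
same_a = mean_pool_embedding(base + 0.1 * rng.normal(size=(40, 768)))
same_b = mean_pool_embedding(base + 0.1 * rng.normal(size=(55, 768)))
diff   = mean_pool_embedding(rng.normal(size=(47, 768)))
print(cosine(same_a, same_b), cosine(same_a, diff))
```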
2306.01963
2023-06-03T00:16:27Z
Over-the-Air Federated Learning In Broadband Communication
[ "Wayne Lemieux", "Raphael Pinard", "Mitra Hassani" ]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that operates at the wireless edge. It enables clients to collaborate on model training while keeping their data private from adversaries and the central server. However, current FL approaches have limitations. Some rely on secure multiparty computation, which can be vulnerable to inference attacks. Others employ differential privacy, but this may lead to decreased test accuracy when dealing with a large number of parties contributing small amounts of data. To address these issues, this paper proposes a novel approach that integrates federated learning seamlessly into the inner workings of MIMO (Multiple-Input Multiple-Output) systems.
[ "cs.LG" ]
false
2306.01977
2023-06-03T01:21:58Z
AlerTiger: Deep Learning for AI Model Health Monitoring at LinkedIn
[ "Zhentao Xu", "Ruoying Wang", "Girish Balaji", "Manas Bundele", "Xiaofei Liu", "Leo Liu", "Tie Wang" ]
Data-driven companies use AI models extensively to develop products and intelligent business solutions, making the health of these models crucial for business success. Model monitoring and alerting in industry pose unique challenges, including the lack of a clear definition of model health metrics, label sparsity, and fast model iterations that result in short-lived models and features. As a product, the system must also meet requirements for scalability, generalizability, and explainability. To tackle these challenges, we propose AlerTiger, a deep-learning-based MLOps model monitoring system that helps AI teams across the company monitor their AI models' health by detecting anomalies in models' input features and output scores over time. The system consists of four major steps: model statistics generation, deep-learning-based anomaly detection, anomaly post-processing, and user alerting. Our solution generates three categories of statistics to indicate AI model health, offers a two-stage deep anomaly detection solution to address label sparsity and attain the generalizability of monitoring new models, and provides holistic reports for actionable alerts. This approach has been deployed to most of LinkedIn's production AI models for over a year and has identified several model issues that, once fixed, led to significant business metric gains.
[ "cs.LG", "I.2" ]
false
2306.02006
2023-06-03T05:32:19Z
MA2CL:Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning
[ "Haolin Song", "Mingxiao Feng", "Wengang Zhou", "Houqiang Li" ]
Recent approaches have utilized self-supervised auxiliary tasks as representation learning to improve the performance and sample efficiency of vision-based reinforcement learning algorithms in single-agent settings. However, in multi-agent reinforcement learning (MARL), these techniques face challenges because each agent only receives a partial observation from an environment influenced by others, resulting in correlated observations in the agent dimension. It is therefore necessary to consider agent-level information in representation learning for MARL. In this paper, we propose an effective framework called \textbf{M}ulti-\textbf{A}gent \textbf{M}asked \textbf{A}ttentive \textbf{C}ontrastive \textbf{L}earning (MA2CL), which encourages the learned representations to be both temporally and agent-level predictive by reconstructing the masked agent observation in latent space. Specifically, we use an attention reconstruction model for recovery, and the model is trained via contrastive learning. MA2CL allows better utilization of contextual information at the agent level, facilitating the training of MARL agents for cooperation tasks. Extensive experiments demonstrate that our method significantly improves the performance and sample efficiency of different MARL algorithms and outperforms other methods in various vision-based and state-based scenarios. Our code can be found at \url{https://github.com/ustchlsong/MA2CL}
[ "cs.LG" ]
false
2306.02025
2023-06-03T06:51:22Z
Exploring Global and Local Information for Anomaly Detection with Normal Samples
[ "Fan Xu", "Nan Wang", "Xibin Zhao" ]
Anomaly detection aims to detect data that do not conform to regular patterns; such data are also called outliers. The anomalies to be detected are often tiny in proportion yet contain crucial information, and are relevant to application scenarios such as intrusion detection, fraud detection, fault diagnosis, and e-commerce platforms. However, in many realistic scenarios, only samples following normal behavior are observed, while we can hardly obtain any anomaly information. To address this problem, we propose an anomaly detection method, GALDetector, which combines global and local information based on observed normal samples. The proposed method proceeds in three stages. First, the global similarity-to-normal scores and the local sparsity scores of unlabeled samples are computed separately. Second, potential anomaly samples are separated from the unlabeled samples according to these two scores, and corresponding weights are assigned to the selected samples. Finally, a weighted anomaly detector is trained on these samples and then utilized to identify the remaining anomalies. To evaluate the effectiveness of the proposed method, we conducted experiments on three categories of real-world datasets from diverse domains, and the experimental results show that our method achieves better performance compared with other state-of-the-art methods.
[ "cs.LG" ]
false
2306.02161
2023-06-03T17:10:33Z
Few-Shot Open-Set Learning for On-Device Customization of KeyWord Spotting Systems
[ "Manuele Rusci", "Tinne Tuytelaars" ]
A personalized KeyWord Spotting (KWS) pipeline typically requires the training of a Deep Learning model on a large set of user-defined speech utterances, preventing fast customization directly applied on-device. To fill this gap, this paper investigates few-shot learning methods for open-set KWS classification by combining a deep feature encoder with a prototype-based classifier. With user-defined keywords from 10 classes of the Google Speech Command dataset, our study reports an accuracy of up to 76% in a 10-shot scenario while the false acceptance rate of unknown data is kept to 5%. In the analyzed settings, the usage of the triplet loss to train an encoder with normalized output features performs better than the prototypical networks jointly trained with a generator of dummy unknown-class prototypes. This design is also more effective than encoders trained on a classification problem and features fewer parameters than other iso-accuracy approaches.
[ "cs.LG" ]
false
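A minimal sketch of the prototype-based open-set classifier described in the entry above: normalized support embeddings are averaged into per-keyword prototypes, and queries whose best cosine similarity falls below a threshold are rejected as unknown. The embedding dimension, shot count, and rejection threshold are illustrative assumptions:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def build_prototypes(support_embeddings, support_labels, n_classes):
    """Average the normalized support embeddings of each user-defined keyword
    class into one prototype per class."""
    protos = np.stack([support_embeddings[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    return l2_normalize(protos)

def classify(query_embedding, prototypes, reject_threshold=0.7):
    """Nearest-prototype classification with open-set rejection: return -1
    (unknown) when the best cosine similarity is below the threshold."""
    q = l2_normalize(query_embedding)
    sims = prototypes @ q
    best = int(np.argmax(sims))
    return best if sims[best] >= reject_threshold else -1

# Toy 10-shot, 3-keyword support set built from class centers plus noise.
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 64))
labels = np.repeat(np.arange(3), 10)
emb = l2_normalize(centers[labels] + 0.2 * rng.normal(size=(30, 64)))
protos = build_prototypes(emb, labels, n_classes=3)
known = l2_normalize(centers[1] + 0.2 * rng.normal(size=64))
unknown = l2_normalize(rng.normal(size=64))
print(classify(known, protos), classify(unknown, protos))
```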
2306.01995
2023-06-03T04:00:47Z
Asymptotically Optimal Pure Exploration for Infinite-Armed Bandits
[ "Xiao-Yue Gong", "Mark Sellke" ]
We study pure exploration with infinitely many bandit arms generated i.i.d. from an unknown distribution. Our goal is to efficiently select a single high quality arm whose average reward is, with probability $1-\delta$, within $\varepsilon$ of being among the top $\eta$-fraction of arms; this is a natural adaptation of the classical PAC guarantee for infinite action sets. We consider both the fixed confidence and fixed budget settings, aiming respectively for minimal expected and fixed sample complexity. For fixed confidence, we give an algorithm with expected sample complexity $O\left(\frac{\log (1/\eta)\log (1/\delta)}{\eta\varepsilon^2}\right)$. This is optimal except for the $\log (1/\eta)$ factor, and the $\delta$-dependence closes a quadratic gap in the literature. For fixed budget, we show the asymptotically optimal sample complexity as $\delta\to 0$ is $c^{-1}\log(1/\delta)\big(\log\log(1/\delta)\big)^2$ to leading order. Equivalently, the optimal failure probability given exactly $N$ samples decays as $\exp\big(-cN/\log^2 N\big)$, up to a factor $1\pm o_N(1)$ inside the exponent. The constant $c$ depends explicitly on the problem parameters (including the unknown arm distribution) through a certain Fisher information distance. Even the strictly super-linear dependence on $\log(1/\delta)$ was not known and resolves a question of Grossman and Moshkovitz (FOCS 2016, SIAM Journal on Computing 2020).
[ "cs.LG", "stat.ML" ]
false
2306.01999
2023-06-03T04:23:49Z
GAT-GAN : A Graph-Attention-based Time-Series Generative Adversarial Network
[ "Srikrishna Iyer", "Teng Teck Hou" ]
Generative Adversarial Networks (GANs) have proven to be a powerful tool for generating realistic synthetic data. However, traditional GANs often struggle to capture complex relationships between features, which results in the generation of unrealistic multivariate time-series data. In this paper, we propose a Graph-Attention-based Generative Adversarial Network (GAT-GAN) that explicitly includes two graph-attention layers, one that learns temporal dependencies while the other captures spatial relationships. Unlike RNN-based GANs that struggle with modeling long sequences of data points, GAT-GAN generates long time-series data of high fidelity using an adversarially trained autoencoder architecture. Our empirical evaluations, using a variety of real time-series datasets, show that our framework consistently outperforms state-of-the-art benchmarks based on the \emph{Frechet Transformer distance} and the \emph{Predictive score}, which characterize (\emph{Fidelity, Diversity}) and \emph{predictive performance}, respectively. Moreover, we introduce a Frechet Inception distance-like (FID) metric for time-series data called the Frechet Transformer distance (FTD) score (lower is better) to evaluate the quality and variety of generated data. We also found that low FTD scores correspond to the best-performing downstream predictive experiments. Hence, FTD scores can be used as a standardized metric to evaluate synthetic time-series data.
[ "cs.LG", "cs.AI" ]
false
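The FTD score in the entry above is described as an FID-like metric computed from Transformer features; the sketch below implements the standard Frechet distance between Gaussians fitted to two feature sets, under the assumption that FTD follows this formula, with random arrays standing in for encoder features:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen, eps=1e-6):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows = samples). With Inception features this is the usual FID; the
    FTD is assumed here to apply the same formula to Transformer features."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():            # numerical fallback
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset),
                                  disp=False)
    covmean = covmean.real                        # drop tiny imaginary parts
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 16))                 # stand-in encoder features
fake_good = rng.normal(size=(500, 16))
fake_bad = rng.normal(loc=1.0, scale=2.0, size=(500, 16))
print(frechet_distance(real, fake_good), frechet_distance(real, fake_bad))
```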
2306.02150
2023-06-03T16:36:43Z
An information field theory approach to Bayesian state and parameter estimation in dynamical systems
[ "Kairui Hao", "Ilias Bilionis" ]
Dynamical system state estimation and parameter calibration problems are ubiquitous across science and engineering. Bayesian approaches to the problem are the gold standard as they allow for the quantification of uncertainties and enable the seamless fusion of different experimental modalities. When the dynamics are discrete and stochastic, one may employ powerful techniques such as Kalman, particle, or variational filters. Practitioners commonly apply these methods to continuous-time, deterministic dynamical systems after discretizing the dynamics and introducing fictitious transition probabilities. However, approaches based on time-discretization suffer from the curse of dimensionality since the number of random variables grows linearly with the number of time-steps. Furthermore, the introduction of fictitious transition probabilities is an unsatisfactory solution because it increases the number of model parameters and may lead to inference bias. To address these drawbacks, the objective of this paper is to develop a scalable Bayesian approach to state and parameter estimation suitable for continuous-time, deterministic dynamical systems. Our methodology builds upon information field theory. Specifically, we construct a physics-informed prior probability measure on the function space of system responses so that functions that satisfy the physics are more likely. This prior allows us to quantify model form errors. We connect the system's response to observations through a probabilistic model of the measurement process. The joint posterior over the system responses and all parameters is given by Bayes' rule. To approximate the intractable posterior, we develop a stochastic variational inference algorithm. In summary, the developed methodology offers a powerful framework for Bayesian estimation in dynamical systems.
[ "physics.data-an", "cs.LG" ]
false
2306.02169
2023-06-03T18:22:01Z
Probabilistic Solar Proxy Forecasting with Neural Network Ensembles
[ "Joshua D. Daniell", "Piyush M. Mehta" ]
Space weather indices are used commonly to drive forecasts of thermosphere density, which directly affects objects in low-Earth orbit (LEO) through atmospheric drag. One of the most commonly used space weather proxies, $F_{10.7 cm}$, correlates well with solar extreme ultra-violet (EUV) energy deposition into the thermosphere. Currently, the USAF contracts Space Environment Technologies (SET), which uses a linear algorithm to forecast $F_{10.7 cm}$. In this work, we introduce methods using neural network ensembles with multi-layer perceptrons (MLPs) and long short-term memory (LSTM) networks to improve on the SET predictions. We make predictions using only historical $F_{10.7 cm}$ values, and additionally investigate data manipulation to improve forecasting. We investigate data manipulation methods (backwards averaging and lookback) as well as multi-step and dynamic forecasting. This work shows an improvement over the baseline when using ensemble methods. The best models found in this work are ensemble approaches using multi-step predictions or a combination of multi-step and dynamic predictions. Nearly all approaches offer an improvement, with the best models improving between 45 and 55\% on relative MSE. Other relative error metrics were shown to improve greatly when ensemble methods were used. We were also able to leverage the ensemble approach to provide a distribution of predicted values, allowing an investigation into forecast uncertainty. Our work found models that produced less biased predictions at elevated and high solar activity levels. Uncertainty was also investigated through the use of a calibration error score (CES) metric; our best ensemble reached a CES similar to that of other work.
[ "physics.space-ph", "cs.LG" ]
false
2306.05286
2023-06-03T02:45:03Z
JGAT: a joint spatio-temporal graph attention model for brain decoding
[ "Han Yi Chiu", "Liang Zhao", "Anqi Wu" ]
The decoding of brain neural networks has been an intriguing topic in neuroscience for a well-rounded understanding of different types of brain disorders and cognitive stimuli. Integrating different types of connectivity, e.g., Functional Connectivity (FC) and Structural Connectivity (SC), from multi-modal imaging techniques can take their complementary information into account and therefore has the potential to yield better decoding capability. However, traditional approaches for integrating FC and SC overlook the dynamical variations, which are likely to over-generalize the brain neural network. In this paper, we propose a Joint kernel Graph Attention Network (JGAT), which is a new multi-modal temporal graph attention network framework. It integrates the data from functional Magnetic Resonance Images (fMRI) and Diffusion Weighted Imaging (DWI) while preserving the dynamic information at the same time. We conduct brain-decoding tasks with our JGAT on four independent datasets: three 7T fMRI datasets from the Human Connectome Project (HCP) and one from animal neural recordings. Furthermore, with Attention Scores (AS) and Frame Scores (FS) computed and learned from the model, we can locate several informative temporal segments and build meaningful dynamical pathways along the temporal domain for the HCP datasets. The code of the JGAT model is available at: https://github.com/BRAINML-GT/JGAT.
[ "q-bio.NC", "cs.LG" ]
false
2306.01993
2023-06-03T03:42:30Z
Provable benefits of score matching
[ "Chirag Pabbaraju", "Dhruv Rohatgi", "Anish Sevekari", "Holden Lee", "Ankur Moitra", "Andrej Risteski" ]
Score matching is an alternative to maximum likelihood (ML) for estimating a probability distribution parametrized up to a constant of proportionality. By fitting the ''score'' of the distribution, it sidesteps the need to compute this constant of proportionality (which is often intractable). While score matching and variants thereof are popular in practice, precise theoretical understanding of the benefits and tradeoffs with maximum likelihood -- both computational and statistical -- are not well understood. In this work, we give the first example of a natural exponential family of distributions such that the score matching loss is computationally efficient to optimize, and has a comparable statistical efficiency to ML, while the ML loss is intractable to optimize using a gradient-based method. The family consists of exponentials of polynomials of fixed degree, and our result can be viewed as a continuous analogue of recent developments in the discrete setting. Precisely, we show: (1) Designing a zeroth-order or first-order oracle for optimizing the maximum likelihood loss is NP-hard. (2) Maximum likelihood has a statistical efficiency polynomial in the ambient dimension and the radius of the parameters of the family. (3) Minimizing the score matching loss is both computationally and statistically efficient, with complexity polynomial in the ambient dimension.
[ "cs.LG", "cs.DS", "stat.ML" ]
false
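A small numerical sketch of the Hyvarinen score matching loss for a one-dimensional exponential family of polynomials, the object the entry above argues is efficient to optimize; the degree-2 family and Gaussian toy data are illustrative assumptions:

```python
import numpy as np

def score_matching_loss(theta, x):
    """Empirical Hyvarinen score matching loss for the 1-D exponential family
    p_theta(x) proportional to exp(theta[0]*x + theta[1]*x^2 + ...).
    J(theta) = mean( 0.5 * s(x)^2 + s'(x) ), with s = d/dx log p_theta.
    The intractable normalizing constant never appears in the loss."""
    score = np.zeros_like(x)
    score_grad = np.zeros_like(x)
    for j, t in enumerate(theta, start=1):        # monomial exponents 1..d
        score += j * t * x ** (j - 1)
        if j >= 2:
            score_grad += j * (j - 1) * t * x ** (j - 2)
    return np.mean(0.5 * score ** 2 + score_grad)

# Toy check: N(0, 1) corresponds to theta = (0, -0.5) in the family
# exp(theta_1 x + theta_2 x^2); the true parameters should attain the
# lowest loss among the candidates below.
rng = np.random.default_rng(0)
x = rng.normal(size=20000)
for theta in ([0.0, -0.5], [0.0, -0.3], [0.2, -0.5]):
    print(theta, round(score_matching_loss(theta, x), 4))
```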
2306.02002
2023-06-03T04:56:04Z
Can Directed Graph Neural Networks be Adversarially Robust?
[ "Zhichao Hou", "Xitong Zhang", "Wei Wang", "Charu C. Aggarwal", "Xiaorui Liu" ]
The existing research on robust Graph Neural Networks (GNNs) fails to acknowledge the significance of directed graphs in providing rich information about networks' inherent structure. This work presents the first investigation into the robustness of GNNs in the context of directed graphs, aiming to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of GNNs. Our study reveals that existing directed GNNs are not adversarially robust. In pursuit of our goal, we introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer to significantly enhance the robustness of GNNs. Combined with existing defense strategies, this framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks. The findings in this study reveal a novel and promising direction for this crucial research area. The code will be made publicly available upon the acceptance of this work.
[ "cs.LG", "cs.AI", "cs.CR" ]
false
2306.02015
2023-06-03T06:19:20Z
Machine learning enabled experimental design and parameter estimation for ultrafast spin dynamics
[ "Zhantao Chen", "Cheng Peng", "Alexander N. Petsch", "Sathya R. Chitturi", "Alana Okullo", "Sugata Chowdhury", "Chun Hong Yoon", "Joshua J. Turner" ]
Advanced experimental measurements are crucial for driving theoretical developments and unveiling novel phenomena in condensed matter and material physics, which often suffer from the scarcity of facility resources and increasing complexities. To address the limitations, we introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED), exemplified with x-ray photon fluctuation spectroscopy (XPFS) measurements for spin fluctuations. Our method employs a neural network model for large-scale spin dynamics simulations for precise distribution and utility calculations in BOED. The capability of automatic differentiation from the neural network model is further leveraged for more robust and accurate parameter estimation. Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time. Although focusing on XPFS and spin fluctuations, our method can be adapted to other experiments, facilitating more efficient data collection and accelerating scientific discoveries.
[ "cond-mat.mtrl-sci", "cs.LG", "physics.comp-ph", "physics.data-an" ]
false
2306.02108
2023-06-03T13:16:17Z
Random matrix theory and the loss surfaces of neural networks
[ "Nicholas P Baskerville" ]
Neural network models are one of the most successful approaches to machine learning, enjoying an enormous amount of development and research over recent years and finding concrete real-world applications in almost any conceivable area of science, engineering and modern life in general. The theoretical understanding of neural networks trails significantly behind their practical success and the engineering heuristics that have grown up around them. Random matrix theory provides a rich framework of tools with which aspects of neural network phenomenology can be explored theoretically. In this thesis, we establish significant extensions of prior work using random matrix theory to understand and describe the loss surfaces of large neural networks, particularly generalising to different architectures. Informed by the historical applications of random matrix theory in physics and elsewhere, we establish the presence of local random matrix universality in real neural networks and then utilise this as a modeling assumption to derive powerful and novel results about the Hessians of neural network loss surfaces and their spectra. In addition to these major contributions, we make use of random matrix models for neural network loss surfaces to shed light on modern neural network training approaches and even to derive a novel and effective variant of a popular optimisation algorithm. Overall, this thesis provides important contributions to cement the place of random matrix theory in the theoretical study of modern neural networks, reveals some of the limits of existing approaches and begins the study of an entirely new role for random matrix theory in the theory of deep learning with important experimental discoveries and novel theoretical results based on local random matrix universality.
[ "math-ph", "cs.LG", "math.MP", "math.PR" ]
false
2306.02149
2023-06-03T16:34:25Z
Infomorphic networks: Locally learning neural networks derived from partial information decomposition
[ "Marcel Graetz", "Abdullah Makkeh", "Andreas C. Schneider", "David A. Ehrlich", "Viola Priesemann", "Michael Wibral" ]
Understanding the intricate cooperation among individual neurons in performing complex tasks remains a challenge to this date. In this paper, we propose a novel type of model neuron that emulates the functional characteristics of biological neurons by optimizing an abstract local information processing goal. We have previously formulated such a goal function based on principles from partial information decomposition (PID). Here, we present a corresponding parametric local learning rule which serves as the foundation of "infomorphic networks" as a novel concrete model of neural networks. We demonstrate the versatility of these networks to perform tasks from supervised, unsupervised and memory learning. By leveraging the explanatory power and interpretable nature of the PID framework, these infomorphic networks represent a valuable tool to advance our understanding of cortical function.
[ "cs.IT", "cs.LG", "cs.NE", "math.IT" ]
false
2306.02165
2023-06-03T17:51:04Z
Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games
[ "Tyler Malloy", "Cleotilde Gonzalez" ]
Designing cyber defense systems to account for cognitive biases in human decision making has demonstrated significant success in improving performance against human attackers. However, much of the attention in this area has focused on relatively simple accounts of biases in human attackers, and little is known about adversarial behavior or how defenses could be improved by disrupting attacker's behavior. In this work, we present a novel model of human decision-making inspired by the cognitive faculties of Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning. This model functions by learning from both roles in a security scenario: defender and attacker, and by making predictions of the opponent's beliefs, intentions, and actions. The proposed model can better defend against attacks from a wide range of opponents compared to alternatives that attempt to perform optimally without accounting for human biases. Additionally, the proposed model performs better against a range of human-like behavior by explicitly modeling human transfer of learning, which has not yet been applied to cyber defense scenarios. Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles and how these insights could potentially be used in real-world cybersecurity.
[ "cs.AI", "cs.CR", "cs.LG" ]
false
2306.02174
2023-06-03T18:36:12Z
Training Data Attribution for Diffusion Models
[ "Zheng Dai", "David K Gifford" ]
Diffusion models have become increasingly popular for synthesizing high-quality samples based on training datasets. However, given the oftentimes enormous sizes of the training datasets, it is difficult to assess how training data impact the samples produced by a trained diffusion model. The difficulty of relating diffusion model inputs and outputs poses significant challenges to model explainability and training data attribution. Here we propose a novel solution that reveals how training data influence the output of diffusion models through the use of ensembles. In our approach individual models in an encoded ensemble are trained on carefully engineered splits of the overall training data to permit the identification of influential training examples. The resulting model ensembles enable efficient ablation of training data influence, allowing us to assess the impact of training data on model outputs. We demonstrate the viability of these ensembles as generative models and the validity of our approach to assessing influence.
[ "stat.ML", "cs.AI", "cs.LG" ]
false
2306.02192
2023-06-03T20:34:14Z
Correcting auto-differentiation in neural-ODE training
[ "Yewei Xu", "Shi Chen", "Qin Li", "Stephen J. Wright" ]
Does the use of auto-differentiation yield reasonable updates to deep neural networks that represent neural ODEs? Through mathematical analysis and numerical evidence, we find that when the neural network employs high-order forms to approximate the underlying ODE flows (such as the Linear Multistep Method (LMM)), brute-force computation using auto-differentiation often produces non-converging artificial oscillations. In the case of Leapfrog, we propose a straightforward post-processing technique that effectively eliminates these oscillations, rectifies the gradient computation and thus respects the updates of the underlying flow.
[ "cs.LG", "cs.NA", "math.NA" ]
false
2306.02206
2023-06-03T22:30:45Z
Mitigating Molecular Aggregation in Drug Discovery with Predictive Insights from Explainable AI
[ "Hunter Sturm", "Jonas Teufel", "Kaitlin A. Isfeld", "Pascal Friederich", "Rebecca L. Davis" ]
As the importance of high-throughput screening (HTS) continues to grow due to its value in early stage drug discovery and data generation for training machine learning models, there is a growing need for robust methods for pre-screening compounds to identify and prevent false-positive hits. Small, colloidally aggregating molecules are one of the primary sources of false-positive hits in high-throughput screens, making them an ideal candidate to target for removal from libraries using predictive pre-screening tools. However, a lack of understanding of the causes of molecular aggregation introduces difficulty in the development of predictive tools for detecting aggregating molecules. Herein, we present an examination of the molecular features differentiating datasets of aggregating and non-aggregating molecules, as well as a machine learning approach to predicting molecular aggregation. Our method uses explainable graph neural networks and counterfactuals to reliably predict and explain aggregation, giving additional insights and design rules for future screening. The integration of this method in HTS approaches will help combat false positives, providing better lead molecules more rapidly and thus accelerating drug discovery cycles.
[ "q-bio.BM", "cond-mat.soft", "cs.LG" ]
false
2306.02208
2023-06-03T22:41:44Z
Tight Regret Bounds for Single-pass Streaming Multi-armed Bandits
[ "Chen Wang" ]
Regret minimization in streaming multi-armed bandits (MABs) has been studied extensively in recent years. In the single-pass setting with $K$ arms and $T$ trials, a regret lower bound of $\Omega(T^{2/3})$ has been proved for any algorithm with $o(K)$ memory (Maiti et al. [NeurIPS'21]; Agarwal et al. [COLT'22]). On the other hand, the previous best regret upper bound is still $O(K^{1/3} T^{2/3}\log^{1/3}(T))$, which is achieved by the streaming implementation of the simple uniform exploration. The $O(K^{1/3}\log^{1/3}(T))$ gap leaves the open question of the tight regret bound in single-pass MABs with sublinear arm memory. In this paper, we answer this open problem and complete the picture of regret minimization in single-pass streaming MABs. We first improve the regret lower bound to $\Omega(K^{1/3}T^{2/3})$ for algorithms with $o(K)$ memory, which matches the uniform exploration regret up to a logarithmic factor in $T$. We then show that the $\log^{1/3}(T)$ factor is not necessary, and we can achieve $O(K^{1/3}T^{2/3})$ regret by finding an $\varepsilon$-best arm and committing to it in the rest of the trials. For regret minimization with high constant probability, we can apply the single-memory $\varepsilon$-best arm algorithms in Jin et al. [ICML'21] to obtain the optimal bound. Furthermore, for expected regret minimization, we design an algorithm with a single-arm memory that achieves $O(K^{1/3} T^{2/3}\log(K))$ regret, and an algorithm with $O(\log^{*}(n))$-memory with the optimal $O(K^{1/3} T^{2/3})$ regret following the $\varepsilon$-best arm algorithm in Assadi and Wang [STOC'20]. We further tested the empirical performance of our algorithms. The simulation results show that the proposed algorithms consistently outperform the benchmark uniform exploration algorithm by a large margin, and on occasion, reduce the regret by up to 70%.
[ "cs.LG", "cs.DS", "stat.ML" ]
false
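A minimal sketch of the single-pass uniform-exploration (explore-then-commit) baseline that the entry above uses as its reference point, keeping only a single arm in memory; the exploration budget of roughly (T/K)^(2/3) pulls per arm and the Bernoulli rewards are illustrative choices, not the paper's optimal algorithms:

```python
import numpy as np

def single_pass_uniform_exploration(arm_means, T, rng):
    """Single-pass explore-then-commit with single-arm memory: each Bernoulli
    arm is pulled m times as it streams by, only the best empirical arm seen
    so far is retained, and it is played for all remaining pulls."""
    K = len(arm_means)
    m = max(1, int((T / K) ** (2 / 3)))            # illustrative exploration budget
    best_arm, best_mean = -1, -np.inf
    pulls, reward = 0, 0.0
    for arm, p in enumerate(arm_means):            # the stream: one arm at a time
        samples = rng.random(m) < p
        reward += samples.sum()
        pulls += m
        if samples.mean() > best_mean:
            best_mean, best_arm = samples.mean(), arm
    remaining = max(0, T - pulls)                  # commit to the retained arm
    reward += (rng.random(remaining) < arm_means[best_arm]).sum()
    return T * max(arm_means) - reward             # (pseudo-)regret

rng = np.random.default_rng(0)
means = rng.uniform(0.2, 0.8, size=50)
print(single_pass_uniform_exploration(means, T=200_000, rng=rng))
```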
2306.02212
2023-06-03T23:31:27Z
Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization
[ "Ruichen Jiang", "Aryan Mokhtari" ]
In this paper, we propose an accelerated quasi-Newton proximal extragradient (A-QPNE) method for solving unconstrained smooth convex optimization problems. With access only to the gradients of the objective, we prove that our method can achieve a convergence rate of ${O}\bigl(\min\{\frac{1}{k^2}, \frac{\sqrt{d\log k}}{k^{2.5}}\}\bigr)$, where $d$ is the problem dimension and $k$ is the number of iterations. In particular, in the regime where $k = {O}(d)$, our method matches the optimal rate of ${O}(\frac{1}{k^2})$ by Nesterov's accelerated gradient (NAG). Moreover, in the regime where $k = \Omega(d \log d)$, it outperforms NAG and converges at a faster rate of ${O}\bigl(\frac{\sqrt{d\log k}}{k^{2.5}}\bigr)$. To the best of our knowledge, this result is the first to demonstrate a provable gain of a quasi-Newton-type method over NAG in the convex setting. To achieve such results, we build our method on a recent variant of the Monteiro-Svaiter acceleration framework and adopt an online learning perspective to update the Hessian approximation matrices, in which we relate the convergence rate of our method to the dynamic regret of a specific online convex optimization problem in the space of matrices.
[ "math.OC", "cs.LG", "stat.ML" ]
false
2306.03105
2023-06-03T06:06:27Z
Data driven localized wave solution of the Fokas-Lenells equation using modified PINN
[ "Gautam Kumar Saharia", "Sagardeep Talukdar", "Riki Dutta", "Sudipta Nandy" ]
We investigate data-driven localized wave solutions of the Fokas-Lenells equation using a physics-informed neural network (PINN). We improve the basic PINN by incorporating control parameters into the residual loss function. We also add a conserved quantity as another loss term to modify the PINN. Using the modified PINN, we obtain data-driven bright soliton and dark soliton solutions of the Fokas-Lenells equation. The conserved-quantity-informed loss function achieves higher accuracy in terms of the relative L2 error between predicted and exact soliton solutions. We hope that the present investigation will be useful for studying the applications of deep learning in nonlinear optics and other branches of nonlinear physics. Source codes are available at https://github.com/gautamksaharia/Fokas-Lenells
[ "nlin.PS", "cs.LG", "nlin.SI" ]
false
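A sketch of how a conserved-quantity term can be added to a PINN residual loss, in the spirit of the modified PINN in the entry above; to stay self-contained it uses a placeholder transport equation u_t + c u_x = 0 and an L2 "mass" integral rather than the Fokas-Lenells equation and its actual conserved quantities:

```python
import torch

class PINN(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def pinn_loss(model, x, t, c=1.0):
    """Residual loss for a placeholder PDE u_t + c*u_x = 0 plus a
    conserved-quantity penalty: the 'mass' integral of u^2 over x should
    not drift between time slices. The PDE and conserved quantity here are
    stand-ins chosen only to illustrate the extra loss term."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(x, t)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    residual = (u_t + c * u_x).pow(2).mean()

    # Monte Carlo estimate of mass(t) = integral of u^2 dx over x in [0, 1].
    xs = torch.rand(256, 1)
    mass_0 = model(xs, torch.zeros_like(xs)).pow(2).mean()
    mass_1 = model(xs, torch.ones_like(xs)).pow(2).mean()
    conservation = (mass_1 - mass_0).pow(2)
    return residual + conservation

model = PINN()
x, t = torch.rand(512, 1), torch.rand(512, 1)
print(pinn_loss(model, x, t).item())
```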
2306.02243
2023-06-04T03:06:37Z
Retrieval-Enhanced Visual Prompt Learning for Few-shot Classification
[ "Jintao Rong", "Hao Chen", "Tianxiao Chen", "Linlin Ou", "Xinyi Yu", "Yifan Liu" ]
Prompt learning has become a popular approach for adapting large vision-language models, such as CLIP, to downstream tasks. Typically, prompt learning relies on a fixed prompt token or an input-conditional token to fit a small amount of data under full supervision. While this paradigm can generalize to a certain range of unseen classes, it may struggle when the domain gap increases, such as in fine-grained classification and satellite image segmentation. To address this limitation, we propose Retrieval-enhanced Prompt learning (RePrompt), which introduces retrieval mechanisms to cache the knowledge representations from downstream tasks. We first construct a retrieval database from training examples, or from external examples when available. We then integrate this retrieval-enhanced mechanism into various stages of a simple prompt learning baseline. By referencing similar samples in the training set, the enhanced model is better able to adapt to new tasks with few samples. Our extensive experiments over 15 vision datasets, including 11 downstream tasks in the few-shot setting and 4 domain generalization benchmarks, demonstrate that RePrompt achieves considerably improved performance. Our proposed approach provides a promising solution to the challenges faced by prompt learning when the domain gap increases. The code and models will be available.
[ "cs.CV" ]
false
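A toy sketch of the retrieval side of the approach in the entry above: training features and labels are cached, and the nearest neighbors of a query feature are retrieved by cosine similarity to condition the prompt learner. The feature dimension, the value of k, and how retrieved items are injected into the prompts are assumptions here:

```python
import numpy as np

class RetrievalCache:
    """Toy retrieval database of (feature, label) pairs from the training set.
    A real system would likely use CLIP image features and a fast ANN index;
    plain cosine similarity keeps the sketch self-contained."""
    def __init__(self, features, labels):
        self.features = features / np.linalg.norm(features, axis=1, keepdims=True)
        self.labels = np.asarray(labels)

    def retrieve(self, query, k=4):
        q = query / np.linalg.norm(query)
        sims = self.features @ q
        idx = np.argsort(-sims)[:k]
        return self.features[idx], self.labels[idx], sims[idx]

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 512))
train_labels = rng.integers(0, 10, size=100)
cache = RetrievalCache(train_feats, train_labels)

query = train_feats[7] + 0.1 * rng.normal(size=512)
neighbors, labels, sims = cache.retrieve(query, k=4)
# The retrieved features/labels would then condition the learnable prompt
# tokens so the few-shot model can lean on similar training samples.
print(labels, np.round(sims, 3))
```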
2306.02275
2023-06-04T06:42:09Z
USD: Unknown Sensitive Detector Empowered by Decoupled Objectness and Segment Anything Model
[ "Yulin He", "Wei Chen", "Yusong Tan", "Siqi Wang" ]
Open World Object Detection (OWOD) is a novel and challenging computer vision task that enables object detection with the ability to detect unknown objects. Existing methods typically estimate the object likelihood with an additional objectness branch, but ignore the conflict in learning objectness and classification boundaries, which oppose each other on the semantic manifold and training objective. To address this issue, we propose a simple yet effective learning strategy, namely Decoupled Objectness Learning (DOL), which divides the learning of these two boundaries into suitable decoder layers. Moreover, detecting unknown objects comprehensively requires a large number of annotations, but labeling all unknown objects is both difficult and expensive. Therefore, we propose to take advantage of a recent Large Vision Model (LVM), specifically the Segment Anything Model (SAM), to enhance the detection of unknown objects. Nevertheless, the output results of SAM contain noise, including backgrounds and fragments, so we introduce an Auxiliary Supervision Framework (ASF) that uses pseudo-labeling and soft-weighting strategies to alleviate the negative impact of noise. Extensive experiments on popular benchmarks, including Pascal VOC and MS COCO, demonstrate the effectiveness of our approach. Our proposed Unknown Sensitive Detector (USD) outperforms the recent state-of-the-art methods in terms of Unknown Recall, achieving significant improvements of 14.3\%, 15.5\%, and 8.9\% on the M-OWODB, and 27.1\%, 29.1\%, and 25.1\% on the S-OWODB.
[ "cs.CV" ]
false
2306.02277
2023-06-04T06:49:44Z
EfficientSRFace: An Efficient Network with Super-Resolution Enhancement for Accurate Face Detection
[ "Guangtao Wang", "Jun Li", "Jie Xie", "Jianhua Xu", "Bo Yang" ]
In face detection, low-resolution faces, such as numerous small faces of a human group in a crowded scene, are common in dense face prediction tasks. They usually contain limited visual cues and make small faces less distinguishable from other small objects, which poses a great challenge to accurate face detection. Although deep convolutional neural networks have significantly promoted research on face detection recently, current deep face detectors rarely take into account low-resolution faces and are still vulnerable to real-world scenarios where massive amounts of low-resolution faces exist. Consequently, they usually achieve degraded performance for low-resolution face detection. In order to alleviate this problem, we develop an efficient detector termed EfficientSRFace by introducing a feature-level super-resolution reconstruction network for enhancing the feature representation capability of the model. This module plays an auxiliary role in the training process, and can be removed during inference without increasing the inference time. Extensive experiments on public benchmarking datasets, such as FDDB and WIDER Face, show that the embedded image super-resolution module can significantly improve the detection accuracy at the cost of a small amount of additional parameters and computational overhead, while helping our model achieve competitive performance compared with state-of-the-art methods.
[ "cs.CV" ]
false
2306.02301
2023-06-04T08:53:28Z
rPPG-MAE: Self-supervised Pre-training with Masked Autoencoders for Remote Physiological Measurement
[ "Xin Liu", "Yuting Zhang", "Zitong Yu", "Hao Lu", "Huanjing Yue", "Jingyu Yang" ]
Remote photoplethysmography (rPPG) is an important technique for perceiving human vital signs, which has received extensive attention. For a long time, researchers have focused on supervised methods that rely on large amounts of labeled data. These methods are limited by the requirement for large amounts of data and the difficulty of acquiring ground truth physiological signals. To address these issues, several self-supervised methods based on contrastive learning have been proposed. However, they focus on contrastive learning between samples, which neglects the inherent self-similar prior in physiological signals and has a limited ability to cope with noise. In this paper, a linear self-supervised reconstruction task was designed for extracting the inherent self-similar prior in physiological signals. In addition, a specific noise-insensitive strategy was explored for reducing the interference of motion and illumination. The proposed framework in this paper, namely rPPG-MAE, demonstrates excellent performance even on the challenging VIPL-HR dataset. We also evaluate the proposed method on two public datasets, namely PURE and UBFC-rPPG. The results show that our method not only outperforms existing self-supervised methods but also exceeds the state-of-the-art (SOTA) supervised methods. One important observation is that the quality of the dataset seems more important than the size in self-supervised pre-training of rPPG. The source code is released at https://github.com/linuxsino/rPPG-MAE.
[ "cs.CV" ]
false
2306.02314
2023-06-04T09:40:25Z
Using Unreliable Pseudo-Labels for Label-Efficient Semantic Segmentation
[ "Haochen Wang", "Yuchao Wang", "Yujun Shen", "Junsong Fan", "Yuxi Wang", "Zhaoxiang Zhang" ]
The crux of label-efficient semantic segmentation is to produce high-quality pseudo-labels to leverage a large amount of unlabeled or weakly labeled data. A common practice is to select the highly confident predictions as the pseudo-ground-truths for each pixel, but it leads to a problem that most pixels may be left unused due to their unreliability. However, we argue that every pixel matters to the model training, even those unreliable and ambiguous pixels. Intuitively, an unreliable prediction may get confused among the top classes, however, it should be confident about the pixel not belonging to the remaining classes. Hence, such a pixel can be convincingly treated as a negative key to those most unlikely categories. Therefore, we develop an effective pipeline to make sufficient use of unlabeled data. Concretely, we separate reliable and unreliable pixels via the entropy of predictions, push each unreliable pixel to a category-wise queue that consists of negative keys, and manage to train the model with all candidate pixels. Considering the training evolution, we adaptively adjust the threshold for the reliable-unreliable partition. Experimental results on various benchmarks and training settings demonstrate the superiority of our approach over the state-of-the-art alternatives.
[ "cs.CV" ]
false
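A compact sketch of the entropy-based reliable/unreliable partition and the negative-class selection described in the entry above; the percentile threshold (fixed here, adaptive over training in the paper) and the number of negative classes per pixel are illustrative:

```python
import numpy as np

def partition_pixels(probs, percentile=80, n_neg=2):
    """probs: (N, C) softmax predictions for N unlabeled pixels.
    Pixels below the entropy threshold receive hard pseudo-labels; the rest
    are kept as negatives for their n_neg least-likely classes."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    threshold = np.percentile(entropy, percentile)
    reliable = entropy <= threshold

    pseudo_labels = probs.argmax(axis=1)                  # used only where reliable
    neg_classes = np.argsort(probs, axis=1)[:, :n_neg]    # most unlikely classes
    return reliable, pseudo_labels, neg_classes

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
reliable, pl, neg = partition_pixels(probs)
# Unreliable pixels would contribute negative keys against their neg_classes
# queues instead of being discarded from training.
print(reliable.mean(), pl[:5], neg[:5])
```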
2306.02329
2023-06-04T11:08:53Z
Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes
[ "Alexandros Delitzas", "Maria Parelli", "Nikolas Hars", "Georgios Vlassis", "Sotirios Anagnostidis", "Gregor Bachmann", "Thomas Hofmann" ]
Training models to apply common-sense linguistic knowledge and visual concepts from 2D images to 3D scene understanding is a promising direction that researchers have only recently started to explore. However, it still remains understudied whether 2D distilled knowledge can provide useful representations for downstream 3D vision-language tasks such as 3D question answering. In this paper, we propose a novel 3D pre-training Vision-Language method, namely Multi-CLIP, that enables a model to learn language-grounded and transferable 3D scene point cloud representations. We leverage the representational power of the CLIP model by maximizing the agreement between the encoded 3D scene features and the corresponding 2D multi-view image and text embeddings in the CLIP space via a contrastive objective. To validate our approach, we consider the challenging downstream tasks of 3D Visual Question Answering (3D-VQA) and 3D Situated Question Answering (3D-SQA). To this end, we develop novel multi-modal transformer-based architectures and we demonstrate how our pre-training method can benefit their performance. Quantitative and qualitative experimental results show that Multi-CLIP outperforms state-of-the-art works across the downstream tasks of 3D-VQA and 3D-SQA and leads to a well-structured 3D scene feature space.
[ "cs.CV" ]
false
2306.02346
2023-06-04T12:42:45Z
CDLT: A Dataset with Concept Drift and Long-Tailed Distribution for Fine-Grained Visual Categorization
[ "Shuo Ye", "Yufeng Shi", "Ruxin Wang", "Yu Wang", "Jiamiao Xu", "Chuanwu Yang", "Xinge You" ]
Data is the foundation for the development of computer vision, and the establishment of datasets plays an important role in advancing the techniques of fine-grained visual categorization~(FGVC). In the existing FGVC datasets used in computer vision, it is generally assumed that each collected instance has fixed characteristics and the distribution of different categories is relatively balanced. In contrast, real-world scenarios reveal that the characteristics of instances tend to vary with time and exhibit a long-tailed distribution. Hence, the collected datasets may mislead the optimization of the fine-grained classifiers, resulting in poor performance in real applications. Starting from real-world conditions and to promote the practical progress of fine-grained visual categorization, we present a Concept Drift and Long-Tailed Distribution dataset. Specifically, the dataset is collected by gathering 11195 images of 250 instances from different species over 47 consecutive months in their natural contexts. The collection process involves dozens of crowd workers for photographing and domain experts for labelling. Extensive baseline experiments using state-of-the-art fine-grained classification models demonstrate the issues of concept drift and long-tailed distribution present in the dataset, which require the attention of future research.
[ "cs.CV" ]
false
2306.02351
2023-06-04T13:01:19Z
RSSOD-Bench: A large-scale benchmark dataset for Salient Object Detection in Optical Remote Sensing Imagery
[ "Zhitong Xiong", "Yanfeng Liu", "Qi Wang", "Xiao Xiang Zhu" ]
We present the RSSOD-Bench dataset for salient object detection (SOD) in optical remote sensing imagery. While SOD has achieved success in natural scene images with deep learning, research in SOD for remote sensing imagery (RSSOD) is still in its early stages. Existing RSSOD datasets have limitations in terms of scale, and scene categories, which make them misaligned with real-world applications. To address these shortcomings, we construct the RSSOD-Bench dataset, which contains images from four different cities in the USA. The dataset provides annotations for various salient object categories, such as buildings, lakes, rivers, highways, bridges, aircraft, ships, athletic fields, and more. The salient objects in RSSOD-Bench exhibit large-scale variations, cluttered backgrounds, and different seasons. Unlike existing datasets, RSSOD-Bench offers uniform distribution across scene categories. We benchmark 23 different state-of-the-art approaches from both the computer vision and remote sensing communities. Experimental results demonstrate that more research efforts are required for the RSSOD task.
[ "cs.CV" ]
false
2306.02374
2023-06-04T15:14:20Z
GAN-based Deidentification of Drivers' Face Videos: An Assessment of Human Factors Implications in NDS Data
[ "Surendrabikram Thapa", "Abhijit Sarkar" ]
This paper addresses the problem of sharing drivers' face videos for transportation research while adhering to proper ethical guidelines. The paper first gives an overview of the multitude of problems associated with sharing such data and then proposes a framework on how artificial intelligence-based techniques, specifically face swapping, can be used for de-identifying drivers' faces. Through extensive experimentation with an Oak Ridge National Laboratory (ORNL) dataset, we demonstrate the effectiveness of face-swapping algorithms in preserving essential attributes related to human factors research, including eye movements, head movements, and mouth movements. The efficacy of the framework was also tested on various naturalistic driving study data collected at the Virginia Tech Transportation Institute. The results achieved through the proposed techniques were evaluated qualitatively and quantitatively using various metrics. Finally, we discuss possible measures for sharing the de-identified videos with the greater research community.
[ "cs.CV" ]
false
2306.02443
2023-06-04T19:14:44Z
ESTISR: Adapting Efficient Scene Text Image Super-resolution for Real-Scenes
[ "Minghao Fu", "Xin Man", "Yihan Xu", "Jie Shao" ]
While scene text image super-resolution (STISR) has yielded remarkable improvements in accurately recognizing scene text, prior methodologies have placed excessive emphasis on optimizing performance rather than paying due attention to efficiency - a crucial factor in ensuring deployment of the STISR-STR pipeline. In this work, we propose a novel Efficient Scene Text Image Super-resolution (ESTISR) Network for resource-limited deployment platforms. ESTISR's functionality primarily depends on two critical components: a CNN-based feature extractor and an efficient self-attention mechanism used for decoding low-resolution images. As the feature extractor, we design a re-parameterized inverted residual block specifically suited for resource-limited circumstances. Meanwhile, we propose a novel kernel-based self-attention mechanism, softmax shrinking. This technique offers linear complexity while naturally incorporating discriminating low-level features into the self-attention structure. Extensive experiments on TextZoom show that ESTISR achieves high image restoration quality and improved STR accuracy on low-resolution images. Furthermore, ESTISR consistently outperforms current methods in terms of actual running time and peak memory consumption, while achieving a better trade-off between performance and efficiency.
[ "cs.CV" ]
false
2306.02507
2023-06-04T23:56:53Z
Deep learning powered real-time identification of insects using citizen science data
[ "Shivani Chiranjeevi", "Mojdeh Sadaati", "Zi K Deng", "Jayanth Koushik", "Talukder Z Jubery", "Daren Mueller", "Matthew E O Neal", "Nirav Merchant", "Aarti Singh", "Asheesh K Singh", "Soumik Sarkar", "Arti Singh", "Baskar Ganapathysubramanian" ]
Insect pests significantly impact global agricultural productivity and quality. Effective management involves identifying the full insect community, including beneficial insects and harmful pests, to develop and implement integrated pest management strategies. Automated identification of insects under real-world conditions presents several challenges, including differentiating similar-looking species, intra-species dissimilarity and inter-species similarity, multiple life cycle stages, camouflage, diverse imaging conditions, and variability in insect orientation. A deep-learning model, InsectNet, is proposed to address these challenges. InsectNet is endowed with five key features: (a) utilization of a large dataset of insect images collected through citizen science; (b) label-free self-supervised learning for large models; (c) improving prediction accuracy for species with a small sample size; (d) enhancing model trustworthiness; and (e) democratizing access through streamlined MLOps. This approach allows accurate identification (>96% accuracy) of over 2500 insect species, including pollinator (e.g., butterflies, bees), parasitoid (e.g., some wasps and flies), predator species (e.g., lady beetles, mantises, dragonflies) and harmful pest species (e.g., armyworms, cutworms, grasshoppers, stink bugs). InsectNet can identify invasive species, provide fine-grained insect species identification, and work effectively in challenging backgrounds. It can also abstain from making predictions when uncertain, facilitating seamless human intervention and making it a practical and trustworthy tool. InsectNet can guide citizen science data collection, especially for invasive species where early detection is crucial. Similar approaches may transform other agricultural challenges like disease detection and underscore the importance of data collection, particularly through citizen science efforts.
[ "cs.CV" ]
false
2306.02263
2023-06-04T05:00:12Z
MAVD: The First Open Large-Scale Mandarin Audio-Visual Dataset with Depth Information
[ "Jianrong Wang", "Yuchen Huo", "Li Liu", "Tianyi Xu", "Qi Li", "Sen Li" ]
Audio-visual speech recognition (AVSR) gains increasing attention from researchers as an important part of human-computer interaction. However, the existing available Mandarin audio-visual datasets are limited and lack depth information. To address this issue, this work establishes MAVD, a new large-scale Mandarin multimodal corpus comprising 12,484 utterances spoken by 64 native Chinese speakers. To ensure the dataset covers diverse real-world scenarios, a pipeline for cleaning and filtering the raw text material has been developed to create well-balanced reading material. In particular, Microsoft's latest data acquisition device, the Azure Kinect, is used to capture depth information in addition to the traditional audio signals and RGB images during data acquisition. We also provide a baseline experiment, which could be used to evaluate the effectiveness of the dataset. The dataset and code will be released at https://github.com/SpringHuo/MAVD.
[ "cs.SD", "cs.CV" ]
false
2306.02335
2023-06-04T11:52:59Z
Towards Robust Feature Learning with t-vFM Similarity for Continual Learning
[ "Bilan Gao", "YoungBin Kim" ]
Continual learning has been developed using standard supervised contrastive loss from the perspective of feature learning. Due to data imbalance during training, there are still challenges in learning better representations. In this work, we suggest using a different similarity metric instead of cosine similarity in the supervised contrastive loss in order to learn more robust representations. We validate our method on the image classification dataset Seq-CIFAR-10, and the results outperform recent continual learning baselines.
[ "cs.LG", "cs.CV" ]
false
2306.02393
2023-06-04T16:05:26Z
Accessible Robot Control in Mixed Reality
[ "Ganlin Zhang", "Deheng Zhang", "Longteng Duan", "Guo Han" ]
A novel method to control the Boston Dynamics Spot robot with the HoloLens 2 is proposed. This method is mainly designed for people with physical disabilities: users can control the robot's movement and its arm without using their hands. The eye gaze tracking and head motion tracking technologies of the HoloLens 2 are utilized for sending control commands. The robot's movement follows the user's eye gaze, and the robot arm mimics the pose of the user's head. Our experiments show that the method is comparable to the traditional joystick control method in both time efficiency and user experience. A demo can be found on our project webpage: https://zhangganlin.github.io/Holo-Spot-Page/index.html
[ "cs.RO", "cs.CV" ]
false
2306.02398
2023-06-04T16:17:19Z
Scale Guided Hypernetwork for Blind Super-Resolution Image Quality Assessment
[ "Jun Fu" ]
With the emergence of image super-resolution (SR) algorithms, how to blindly evaluate the quality of super-resolution images has become an urgent task. However, existing blind SR image quality assessment (IQA) metrics merely focus on the visual characteristics of super-resolution images, ignoring the available scale information. In this paper, we reveal that the scale factor has a statistically significant impact on subjective quality scores of SR images, indicating that the scale information can be used to guide the task of blind SR IQA. Motivated by this, we propose a scale guided hypernetwork framework that evaluates SR image quality in a scale-adaptive manner. Specifically, the blind SR IQA procedure is divided into three stages, i.e., content perception, evaluation rule generation, and quality prediction. After content perception, a hypernetwork generates the evaluation rule used in quality prediction based on the scale factor of the SR image. We apply the proposed scale guided hypernetwork framework to existing representative blind IQA metrics, and experimental results show that the proposed framework not only boosts the performance of these IQA metrics but also enhances their generalization abilities. Source code will be available at https://github.com/JunFu1995/SGH.
[ "cs.CV", "eess.IV" ]
false
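The scale-guided idea described in the abstract above can be sketched as a small hypernetwork that maps the SR scale factor to the parameters of a quality-prediction head applied to content features. This is an assumed, minimal architecture for illustration; the layer sizes, a linear evaluation rule, and all names are hypothetical rather than the paper's implementation.

import torch
import torch.nn as nn

class ScaleGuidedHead(nn.Module):
    def __init__(self, feat_dim=256, hidden=64):
        super().__init__()
        # Hypernetwork: scale factor -> weights and bias of a linear scoring rule.
        self.hyper = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim + 1),
        )

    def forward(self, content_feat, scale):
        # content_feat: (B, feat_dim) features from a content-perception backbone.
        # scale: (B, 1) SR scale factor, e.g. 2.0, 3.0, 4.0.
        params = self.hyper(scale)                # (B, feat_dim + 1)
        w, b = params[:, :-1], params[:, -1:]     # per-sample evaluation rule
        return (content_feat * w).sum(dim=1, keepdim=True) + b  # quality score

head = ScaleGuidedHead()
score = head(torch.randn(4, 256), torch.tensor([[2.0], [2.0], [4.0], [4.0]]))
print(score.shape)  # torch.Size([4, 1])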
2306.02424
2023-06-04T17:57:51Z
Sanity Checks for Saliency Methods Explaining Object Detectors
[ "Deepan Chakravarthi Padmanabhan", "Paul G. Plöger", "Octavio Arriaga", "Matias Valdenegro-Toro" ]
Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, by extending the tests to various state-of-the-art object detectors, we illustrate that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their Smoothgrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise, motivating sanity checks for each class. We find that EfficientDet-D0 is the most interpretable model, independent of the saliency method, and that it passes the sanity checks with few problems.
[ "cs.CV", "cs.LG" ]
false
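The model randomization test mentioned in the abstract above can be illustrated on a toy classifier: compute a saliency map with trained weights, re-initialize part of the model, recompute the map, and check that the explanation actually changes. The tiny model, the vanilla-gradient saliency, and the layer named `fc` below are simplifying assumptions; the paper applies analogous checks to object detectors and their classification and box outputs.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 16 * 16, num_classes)

    def forward(self, x):
        h = torch.relu(self.conv(x))
        return self.fc(h.flatten(1))

def gradient_saliency(model, x, target):
    # Vanilla gradient saliency for a single input x of shape (1, C, H, W).
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().squeeze(0)

def model_randomization_test(model, x, target):
    # A faithful explanation should change when the classifier head is randomized;
    # a high similarity between the two maps indicates a failed sanity check.
    sal_trained = gradient_saliency(model, x, target)
    randomized = copy.deepcopy(model)
    randomized.fc.reset_parameters()          # re-initialize the final layer
    sal_random = gradient_saliency(randomized, x, target)
    sim = F.cosine_similarity(sal_trained.flatten(), sal_random.flatten(), dim=0)
    return sim.item()

model = TinyClassifier()
print(model_randomization_test(model, torch.randn(1, 3, 16, 16), target=0))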
2306.04466
2023-06-04T10:30:28Z
Point Cloud Video Anomaly Detection Based on Point Spatio-Temporal Auto-Encoder
[ "Tengjiao He", "Wenguang Wang" ]
Video anomaly detection has great potential in enhancing safety in the production and monitoring of crucial areas. Currently, most video anomaly detection methods are based on the RGB modality, but its redundant semantic information may breach the privacy of residents or patients. The 3D data obtained by depth cameras and LiDAR can accurately locate anomalous events in 3D space while preserving human posture and motion information. Identifying individuals through the point cloud is difficult due to its sparsity, which protects personal privacy. In this study, we propose the Point Spatio-Temporal Auto-Encoder (PSTAE), an autoencoder framework that takes point cloud videos as input and detects anomalies in them. We introduce PSTOp and PSTTransOp to maintain spatial geometric and temporal motion information in point cloud videos. To measure the reconstruction loss of the proposed autoencoder framework, we propose a reconstruction loss measurement strategy based on a shallow feature extractor. Experimental results on the TIMo dataset show that our method outperforms currently representative depth modality-based methods in terms of AUROC and has superior performance in detecting Medical Issue anomalies. These results suggest the potential of the point cloud modality in video anomaly detection. Our method sets a new state-of-the-art (SOTA) on the TIMo dataset.
[ "cs.CV", "eess.SP" ]
false
2306.02236
2023-06-04T02:33:12Z
Detector Guidance for Multi-Object Text-to-Image Generation
[ "Luping Liu", "Zijian Zhang", "Yi Ren", "Rongjie Huang", "Xiang Yin", "Zhou Zhao" ]
Diffusion models have demonstrated impressive performance in text-to-image generation. They utilize a text encoder and cross-attention blocks to infuse textual information into images at a pixel level. However, their capability to generate images from text containing multiple objects is still restricted. Previous works identify the problem of information mixing in the CLIP text encoder and introduce the T5 text encoder or incorporate strong prior knowledge to assist with the alignment. We find that mixing problems also occur on the image side and in the cross-attention blocks. The noisy images can cause different objects to appear similar, and the cross-attention blocks inject information at a pixel level, leading to leakage of global object understanding and resulting in object mixing. In this paper, we introduce Detector Guidance (DG), which integrates a latent object detection model to separate different objects during the generation process. DG first performs latent object detection on cross-attention maps (CAMs) to obtain object information. Based on this information, DG then masks conflicting prompts and enhances related prompts by manipulating the following CAMs. We evaluate the effectiveness of DG using Stable Diffusion on COCO, CC, and a novel multi-related object benchmark, MRO. Human evaluations demonstrate that DG provides an 8-22\% advantage in preventing the amalgamation of conflicting concepts and ensuring that each object possesses its unique region without any human involvement or additional iterations. Our implementation is available at \url{https://github.com/luping-liu/Detector-Guidance}.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
2306.02268
2023-06-04T06:01:53Z
Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection
[ "Purbayan Kar", "Vishal Chudasama", "Naoyuki Onoe", "Pankaj Wasnik" ]
Semi-supervised object detection (SSOD) has made significant progress with the development of pseudo-label-based end-to-end methods. However, many of these methods face challenges due to class imbalance, which hinders the effectiveness of the pseudo-label generator. Furthermore, it has been observed in the literature that low-quality pseudo-labels severely limit the performance of SSOD. In this paper, we examine the root causes of low-quality pseudo-labels and present novel learning mechanisms to improve the label generation quality. To cope with high false-negative and low precision rates, we introduce an adaptive thresholding mechanism that helps the proposed network select optimal bounding boxes. We further introduce a Jitter-Bagging module to provide accurate localization information to help refine the bounding boxes. Additionally, two new losses are introduced using the background and foreground scores predicted by the teacher and student networks to improve the pseudo-label recall rate. Furthermore, our method applies strict supervision to the teacher network by feeding strongly and weakly augmented data to generate robust pseudo-labels so that it can detect small and complex objects. Finally, extensive experiments show that the proposed network outperforms state-of-the-art methods on the MS-COCO and Pascal VOC datasets and allows the baseline network to achieve 100% supervised performance with much less (i.e., 20%) labeled data.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
2306.02306
2023-06-04T09:03:05Z
Cross-CBAM: A Lightweight network for Scene Segmentation
[ "Zhengbin Zhang", "Zhenhao Xu", "Xingsheng Gu", "Juan Xiong" ]
Scene parsing is a great challenge for real-time semantic segmentation. Although traditional semantic segmentation networks have made remarkable progress in semantic accuracy, their inference speed remains unsatisfactory. Moreover, this progress has been achieved with fairly large networks and powerful computational resources. However, it is difficult to run extremely large models on edge computing devices with limited computing power, which poses a huge challenge for real-time semantic segmentation tasks. In this paper, we present the Cross-CBAM network, a novel lightweight network for real-time semantic segmentation. Specifically, a Squeeze-and-Excitation Atrous Spatial Pyramid Pooling Module (SE-ASPP) is proposed to obtain a variable field-of-view and multiscale information. We also propose a Cross Convolutional Block Attention Module (CCBAM), in which a cross-multiply operation makes high-level semantic information guide low-level detail information. Unlike previous works, which use attention to focus on the desired information in the backbone, CCBAM uses cross-attention for feature fusion in the FPN structure. Extensive experiments on the Cityscapes and CamVid datasets demonstrate the effectiveness of the proposed Cross-CBAM model by achieving a promising trade-off between segmentation accuracy and inference speed. On the Cityscapes test set, we achieve 73.4% mIoU at 240.9 FPS and 77.2% mIoU at 88.6 FPS on an NVIDIA GTX 1080Ti.
[ "cs.CV", "cs.LG", "eess.IV" ]
false
2306.02407
2023-06-04T16:55:38Z
Heteroskedastic Geospatial Tracking with Distributed Camera Networks
[ "Colin Samplawski", "Shiwei Fang", "Ziqi Wang", "Deepak Ganesan", "Mani Srivastava", "Benjamin M. Marlin" ]
Visual object tracking has seen significant progress in recent years. However, the vast majority of this work focuses on tracking objects within the image plane of a single camera and ignores the uncertainty associated with predicted object locations. In this work, we focus on the geospatial object tracking problem using data from a distributed camera network. The goal is to predict an object's track in geospatial coordinates along with uncertainty over the object's location while respecting communication constraints that prohibit centralizing raw image data. We present a novel single-object geospatial tracking data set that includes high-accuracy ground truth object locations and video data from a network of four cameras. We present a modeling framework for addressing this task including a novel backbone model and explore how uncertainty calibration and fine-tuning through a differentiable tracker affect performance.
[ "cs.CV", "cs.AI", "cs.DC", "cs.LG" ]
false
2306.02459
2023-06-04T20:22:14Z
Multi-Predict: Few Shot Predictors For Efficient Neural Architecture Search
[ "Yash Akhauri", "Mohamed S. Abdelfattah" ]
Many hardware-aware neural architecture search (NAS) methods have been developed to optimize the topology of neural networks (NN) with the joint objectives of higher accuracy and lower latency. Recently, both accuracy and latency predictors have been used in NAS with great success, achieving high sample efficiency and accurate modeling of hardware (HW) device latency respectively. However, a new accuracy predictor needs to be trained for every new NAS search space or NN task, and a new latency predictor needs to be additionally trained for every new HW device. In this paper, we explore methods to enable multi-task, multi-search-space, and multi-HW adaptation of accuracy and latency predictors to reduce the cost of NAS. We introduce a novel search-space independent NN encoding based on zero-cost proxies that achieves sample-efficient prediction on multiple tasks and NAS search spaces, improving the end-to-end sample efficiency of latency and accuracy predictors by over an order of magnitude in multiple scenarios. For example, our NN encoding enables multi-search-space transfer of latency predictors from NASBench-201 to FBNet (and vice-versa) in under 85 HW measurements, a 400$\times$ improvement in sample efficiency compared to a recent meta-learning approach. Our method also improves the total sample efficiency of accuracy predictors by over an order of magnitude. Finally, we demonstrate the effectiveness of our method for multi-search-space and multi-task accuracy prediction on 28 NAS search spaces and tasks.
[ "cs.LG", "cs.AR", "cs.CV", "cs.PF" ]
false
2306.02487
2023-06-04T21:40:11Z
Discussion Paper: The Threat of Real Time Deepfakes
[ "Guy Frankovits", "Yisroel Mirsky" ]
Generative deep learning models are able to create realistic audio and video. This technology has been used to impersonate the faces and voices of individuals. These ``deepfakes'' are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent. The technology continues to advance, and today attackers have the ability to generate deepfakes in real time. This new capability poses a significant threat to society as attackers begin to exploit the technology in advanced social engineering attacks. In this paper, we discuss the implications of this emerging threat, identify the challenges of preventing these attacks, and suggest a better direction for researching stronger defences.
[ "cs.AI", "cs.CR", "cs.CV" ]
false
2306.02258
2023-06-04T04:24:43Z
Probing Physical Reasoning with Counter-Commonsense Context
[ "Kazushi Kondo", "Saku Sugawara", "Akiko Aizawa" ]
In this study, we create a CConS (Counter-commonsense Contextual Size comparison) dataset to investigate how physical commonsense affects the contextualized size comparison task; the proposed dataset consists of both contexts that fit physical commonsense and those that do not. This dataset tests the ability of language models to predict the size relationship between objects under various contexts generated from our curated noun list and templates. We measure the ability of several masked language models and generative models. The results show that while large language models can use prepositions such as ``in'' and ``into'' in the provided context to infer size relationships, they fail to use verbs and thus make incorrect judgments guided by their prior physical commonsense.
[ "cs.CL" ]
false
2306.02302
2023-06-04T08:54:32Z
Does Character-level Information Always Improve DRS-based Semantic Parsing?
[ "Tomoya Kurosawa", "Hitomi Yanaka" ]
Even in the era of massive language models, it has been suggested that character-level representations improve the performance of neural models. The state-of-the-art neural semantic parser for Discourse Representation Structures uses character-level representations, improving performance in the four languages (i.e., English, German, Dutch, and Italian) of the Parallel Meaning Bank dataset. However, how and why character-level information improves the parser's performance remains unclear. This study provides an in-depth analysis of how performance changes with the order of character sequences. In the experiments, after testing the performance of character-level information, we compare F1-scores obtained by shuffling the order of characters and by randomizing the character sequences. Our results indicate that incorporating character-level information does not improve the performance in English and German. In addition, we find that the parser is not sensitive to correct character order in Dutch. Nevertheless, performance improvements are observed when using character-level information.
[ "cs.CL" ]
false
2306.02334
2023-06-04T11:52:36Z
Long Text Generation Challenge
[ "Nikolay Mikhaylovskiy" ]
We propose a shared task of human-like long text generation, the LTG Challenge, that asks models to output a consistent human-like long text (a Harry Potter generic audience fanfic in English), given a prompt of about 1000 tokens. We suggest a novel statistical metric of text structuredness, the GloVe Autocorrelations Power/Exponential Law Mean Absolute Percentage Error Ratio (GAPELMAPER), and a human evaluation protocol. We hope that LTG can open new avenues for researchers to investigate sampling approaches, prompting strategies, and autoregressive and non-autoregressive text generation architectures, and break the barrier to generating consistent long (40K+ token) texts.
[ "cs.CL", "I.2.7" ]
false
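Reading only the metric's name, one plausible sketch of GAPELMAPER is given below: fit both a power law and an exponential law to the decay of token-embedding autocorrelations and report the ratio of their mean absolute percentage errors. The exact definition is specified in the challenge materials, so every step here (lag range, lag-1 normalization, fitting in log space) is an assumption made for illustration only.

import numpy as np

def autocorrelation(embeddings, max_lag=100):
    # embeddings: (T, D) GloVe vectors of the tokens in a text.
    x = embeddings - embeddings.mean(axis=0)
    acf = [np.mean(np.sum(x[:-lag] * x[lag:], axis=1)) for lag in range(1, max_lag)]
    acf = np.array(acf)
    return acf / acf[0]

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true))

def gapelmaper(embeddings, max_lag=100):
    acf = autocorrelation(embeddings, max_lag)
    lags = np.arange(1, len(acf) + 1, dtype=float)
    mask = acf > 0                                   # log-space fits need positive values
    acf, lags = acf[mask], lags[mask]
    # Power law: log acf = a * log lag + b; exponential law: log acf = a * lag + b.
    pw = np.polyfit(np.log(lags), np.log(acf), 1)
    ex = np.polyfit(lags, np.log(acf), 1)
    mape_pw = mape(acf, np.exp(np.polyval(pw, np.log(lags))))
    mape_ex = mape(acf, np.exp(np.polyval(ex, lags)))
    return mape_pw / mape_ex

# Smoke test with random vectors; a real evaluation would embed a generated text
# with GloVe, where structured text is expected to behave differently from noise.
print(gapelmaper(np.random.randn(2000, 50)))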
2306.02348
2023-06-04T12:53:12Z
Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations
[ "Aleksey Tikhonov", "Lisa Bylinina", "Denis Paperno" ]
Multimodal embeddings aim to enrich the semantic information in neural representations of language compared to text-only models. While different embeddings exhibit different applicability and performance on downstream tasks, little is known about the systematic representation differences attributed to the visual modality. Our paper compares word embeddings from three vision-and-language models (CLIP, OpenCLIP and Multilingual CLIP) and three text-only models, with static (FastText) as well as contextual representations (multilingual BERT; XLM-RoBERTa). This is the first large-scale study of the effect of visual grounding on language representations, including 46 semantic parameters. We identify meaning properties and relations that characterize words whose embeddings are most affected by the inclusion of visual modality in the training data; that is, points where visual grounding turns out most important. We find that the effect of visual modality correlates most with denotational semantic properties related to concreteness, but is also detected for several specific semantic classes, as well as for valence, a sentiment-related connotational property of linguistic expressions.
[ "cs.CL" ]
false
2306.02405
2023-06-04T16:52:11Z
An Information-Theoretic Analysis of Self-supervised Discrete Representations of Speech
[ "Badr M. Abdullah", "Mohammed Maqsood Shaik", "Bernd Möbius", "Dietrich Klakow" ]
Self-supervised representation learning for speech often involves a quantization step that transforms the acoustic input into discrete units. However, it remains unclear how to characterize the relationship between these discrete units and abstract phonetic categories such as phonemes. In this paper, we develop an information-theoretic framework whereby we represent each phonetic category as a distribution over discrete units. We then apply our framework to two different self-supervised models (namely wav2vec 2.0 and XLSR) and use American English speech as a case study. Our study demonstrates that the entropy of phonetic distributions reflects the variability of the underlying speech sounds, with phonetically similar sounds exhibiting similar distributions. While our study confirms the lack of direct, one-to-one correspondence, we find an intriguing, indirect relationship between phonetic categories and discrete units.
[ "cs.CL" ]
false
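A minimal sketch of the analysis described in the abstract above: estimate, from frame-level co-occurrences, a distribution over discrete units for each phonetic category and compute its entropy. The toy data format and helper names are assumptions; real experiments would use forced alignments and the quantizer outputs of wav2vec 2.0 or XLSR.

import numpy as np
from collections import defaultdict

def phoneme_unit_distributions(frames):
    # frames: iterable of (phoneme_label, discrete_unit_id) pairs, one per frame.
    counts = defaultdict(lambda: defaultdict(int))
    for phone, unit in frames:
        counts[phone][unit] += 1
    dists = {}
    for phone, unit_counts in counts.items():
        total = sum(unit_counts.values())
        dists[phone] = {u: c / total for u, c in unit_counts.items()}
    return dists

def entropy(dist):
    p = np.array(list(dist.values()))
    return float(-(p * np.log2(p)).sum())

frames = [("AE", 12), ("AE", 12), ("AE", 40), ("S", 7), ("S", 7), ("S", 7)]
dists = phoneme_unit_distributions(frames)
for phone, dist in dists.items():
    print(phone, round(entropy(dist), 3))  # higher entropy = more variable sound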
2306.02408
2023-06-04T17:02:59Z
Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning
[ "Beichen Zhang", "Kun Zhou", "Xilin Wei", "Wayne Xin Zhao", "Jing Sha", "Shijin Wang", "Ji-Rong Wen" ]
Chain-of-thought prompting (CoT) and tool augmentation have been validated in recent work as effective practices for improving large language models (LLMs) to perform step-by-step reasoning on complex math-related tasks. However, most existing math reasoning datasets may not be able to fully evaluate and analyze the ability of LLMs in manipulating tools and performing reasoning, as they may only require very few invocations of tools or miss annotations for evaluating intermediate reasoning steps. To address the issue, we construct \textbf{CARP}, a new Chinese dataset consisting of 4,886 computation-intensive algebra problems with formulated annotations on intermediate steps. On CARP, we test four LLMs with CoT prompting and find that they are all prone to making mistakes in the early steps of the solution, leading to wrong answers. Based on this finding, we propose a new approach that can deliberate the reasoning steps with tool interfaces, namely \textbf{DELI}. In DELI, we first initialize a step-by-step solution based on retrieved exemplars, then iterate two deliberation procedures that check and refine the intermediate steps of the generated solution, from the perspectives of tool manipulation and natural language reasoning, until obtaining converged solutions or reaching the maximum number of turns. Experimental results on CARP and six other datasets show that the proposed DELI mostly outperforms competitive baselines and can further boost the performance of existing CoT methods. Our data and code are available at \url{https://github.com/RUCAIBox/CARP}.
[ "cs.CL" ]
false