Dataset schema: arxiv_id (string, 10 chars), published (string, 20 chars), titles (string, 9-243 chars), authors (list, 1-389 entries), abstract (string, 96-3.09k chars), categories (list, 1-10 entries), selected (bool, 2 classes).
2306.03010
2023-06-05T16:25:33Z
Interval Load Forecasting for Individual Households in the Presence of Electric Vehicle Charging
[ "Raiden Skala", "Mohamed Ahmed T. A. Elgalhud", "Katarina Grolinger", "Syed Mir" ]
The transition to Electric Vehicles (EV) in place of traditional internal combustion engines is increasing societal demand for electricity. The ability to integrate the additional demand from EV charging into forecasting electricity demand is critical for maintaining the reliability of electricity generation and distribution. Load forecasting studies typically exclude households with home EV charging, focusing on offices, schools, and public charging stations. Moreover, they provide point forecasts which do not offer information about prediction uncertainty. Consequently, this paper proposes Long Short-Term Memory Bayesian Neural Networks (LSTM-BNNs) for household load forecasting in the presence of EV charging. The approach takes advantage of the LSTM model to capture the time dependencies and uses the dropout layer with Bayesian inference to generate prediction intervals. Results show that the proposed LSTM-BNNs achieve accuracy similar to point forecasts with the advantage of prediction intervals. Moreover, the impact of lockdowns related to the COVID-19 pandemic on the load forecasting model is examined, and the analysis shows that there is no major change in the model performance as, for the considered households, the randomness of the EV charging outweighs the change due to the pandemic.
[ "cs.LG", "cs.AI", "cs.SY", "eess.SY" ]
false
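The interval-generation mechanism above (dropout with Bayesian inference) corresponds to Monte Carlo dropout, which lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch illustration: dropout is kept active at inference and repeated stochastic forward passes yield a predictive distribution from which intervals follow. Layer sizes, window length, and the quantile-based interval are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class DropoutLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, p=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p)            # kept stochastic at test time
        self.head = nn.Linear(hidden, 1)     # next-step load forecast

    def forward(self, x):
        out, _ = self.lstm(x)                # (batch, seq, hidden)
        return self.head(self.drop(out[:, -1]))

@torch.no_grad()
def predict_interval(model, x, n_samples=100, alpha=0.05):
    model.train()                            # keep dropout active (MC dropout)
    draws = torch.stack([model(x) for _ in range(n_samples)])
    lo = torch.quantile(draws, alpha / 2, dim=0)
    hi = torch.quantile(draws, 1 - alpha / 2, dim=0)
    return draws.mean(dim=0), lo, hi         # point forecast + interval

model = DropoutLSTM()
x = torch.randn(4, 24, 8)                    # 4 households, 24-step windows
mean, lo, hi = predict_interval(model, x)    # e.g. a 95% prediction interval
```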
2306.03014
2023-06-05T16:30:17Z
On the Behavior of Intrusive and Non-intrusive Speech Enhancement Metrics in Predictive and Generative Settings
[ "Danilo de Oliveira", "Julius Richter", "Jean-Marie Lemercier", "Tal Peer", "Timo Gerkmann" ]
Since its inception, the field of deep speech enhancement has been dominated by predictive (discriminative) approaches, such as spectral mapping or masking. Recently, however, novel generative approaches have been applied to speech enhancement, attaining good denoising performance with high subjective quality scores. At the same time, advances in deep learning also allowed for the creation of neural network-based metrics, which have desirable traits such as being able to work without a reference (non-intrusively). Since generatively enhanced speech tends to exhibit radically different residual distortions, its evaluation using instrumental speech metrics may behave differently compared to predictively enhanced speech. In this paper, we evaluate the performance of the same speech enhancement backbone trained under predictive and generative paradigms on a variety of metrics and show that intrusive and non-intrusive measures correlate differently for each paradigm. This analysis motivates the search for metrics that can together paint a complete and unbiased picture of speech enhancement performance, irrespective of the model's training process.
[ "eess.AS", "cs.LG", "cs.SD" ]
false
2306.03040
2023-06-05T17:03:10Z
Learning Similarity among Users for Personalized Session-Based Recommendation from hierarchical structure of User-Session-Item
[ "Jisoo Cha", "Haemin Jeong", "Wooju Kim" ]
The task of session-based recommendation is to predict the user's next interaction based on the anonymized user's behavior pattern, and the personalized variant of this system is a promising research field because it can exploit user information. However, the user's preferences and historical sessions are not considered in typical session-based recommendation, which concentrates only on user-item interactions. In addition, existing personalized session-based recommendation models have a limited capability in that they consider only the preference of the current user, not those of similar users, which means information contained in the hierarchical user-session-item data structure can be lost. To tackle this problem, we propose USP-SBR (User Similarity Powered Session-Based Recommender). To model the global historical sessions of users, we propose a UserGraph that has two types of nodes - ItemNode and UserNode - connected by three types of edges: the first connects ItemNodes in chronological order, the second connects ItemNodes to UserNodes, and the last connects UserNodes to ItemNodes. We apply a graph neural network to this UserGraph to update the nodes, and with the resulting user embeddings we propose an additional contrastive loss that pulls users with similar intentions close to each other in the vector space. Experimental results on two real-world datasets demonstrate that our method outperforms several state-of-the-art approaches.
[ "cs.IR", "cs.AI", "cs.LG", "68P20" ]
false
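As an illustration of the UserGraph construction described above, here is a minimal sketch using networkx. The session data, node naming, and edge attributes are hypothetical stand-ins; the paper's actual graph construction, GNN updates, and contrastive loss are not reproduced.

```python
import networkx as nx

sessions = {  # user -> list of sessions, each a chronological list of items
    "u1": [["i1", "i2", "i3"], ["i2", "i4"]],
    "u2": [["i3", "i2"]],
}

G = nx.DiGraph()
for user, user_sessions in sessions.items():
    G.add_node(user, kind="UserNode")
    for session in user_sessions:
        for item in session:
            G.add_node(item, kind="ItemNode")
        # Edge type 1: ItemNode -> ItemNode, in chronological order
        for a, b in zip(session, session[1:]):
            G.add_edge(a, b, etype="item2item")
        # Edge types 2 and 3: ItemNode -> UserNode and UserNode -> ItemNode
        for item in session:
            G.add_edge(item, user, etype="item2user")
            G.add_edge(user, item, etype="user2item")

print(G.number_of_nodes(), G.number_of_edges())
```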
2306.03065
2023-06-05T17:43:46Z
LibAUC: A Deep Learning Library for X-Risk Optimization
[ "Zhuoning Yuan", "Dixian Zhu", "Zi-Hao Qiu", "Gang Li", "Xuanhui Wang", "Tianbao Yang" ]
This paper introduces the award-winning deep learning (DL) library called LibAUC for implementing state-of-the-art algorithms towards optimizing a family of risk functions named X-risks. X-risks refer to a family of compositional functions in which the loss function of each data point is defined in a way that contrasts the data point with a large number of others. They have broad applications in AI for solving classical and emerging problems, including but not limited to classification for imbalanced data (CID), learning to rank (LTR), and contrastive learning of representations (CLR). The motivation of developing LibAUC is to address the convergence issues of existing libraries for solving these problems. In particular, existing libraries may not converge or require very large mini-batch sizes in order to attain good performance for these problems, due to the usage of the standard mini-batch technique in the empirical risk minimization (ERM) framework. Our library is for deep X-risk optimization (DXO) that has achieved great success in solving a variety of tasks for CID, LTR and CLR. The contributions of this paper include: (1) It introduces a new mini-batch based pipeline for implementing DXO algorithms, which differs from existing DL pipelines in the design of controlled data samplers and dynamic mini-batch losses; (2) It provides extensive benchmarking experiments for ablation studies and comparison with existing libraries. The LibAUC library features scalable performance for millions of items to be contrasted, faster and better convergence than existing libraries for optimizing X-risks, seamless PyTorch deployment and versatile APIs for optimizing various losses. Our library is available to the open source community at https://github.com/Optimization-AI/LibAUC, to facilitate further academic research and industrial applications.
[ "cs.LG", "cs.AI", "math.OC", "stat.ML" ]
false
2306.03109
2023-06-05T04:34:54Z
Machine Learning Force Fields with Data Cost Aware Training
[ "Alexander Bukharin", "Tianyi Liu", "Shengjie Wang", "Simiao Zuo", "Weihao Gao", "Wen Yan", "Tuo Zhao" ]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation, which finds widespread applications in chemistry and biomedical research. Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels generated by expensive quantum mechanical algorithms, which may scale as $O(n^3)$ to $O(n^7)$, with $n$ proportional to the number of basis functions. To address this issue, we propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data. The motivation behind ASTEROID is that inaccurate data, though incurring large bias, can help capture the sophisticated structures of the underlying force field. Therefore, we first train an MLFF model on a large amount of inaccurate training data, employing a bias-aware loss function to prevent the model from overfitting the potential bias of this data. We then fine-tune the obtained model using a small amount of accurate training data, which preserves the knowledge learned from the inaccurate training data while significantly improving the model's accuracy. Moreover, we propose a variant of ASTEROID based on score matching for the setting where the inaccurate training data are unlabeled. Extensive experiments on MD datasets and downstream tasks validate the efficacy of ASTEROID. Our code and data are available at https://github.com/abukharin3/asteroid.
[ "q-bio.QM", "cs.LG", "physics.chem-ph" ]
false
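A hedged sketch of the two-stage recipe follows: pre-train on plentiful inaccurate labels with a loss that down-weights samples by their estimated bias, then fine-tune on scarce accurate labels. The weighting form, learning rates, and loader interfaces are illustrative assumptions, not ASTEROID's actual implementation.

```python
import torch
import torch.nn as nn

def bias_aware_loss(pred, target, noise_scale):
    # Down-weight samples in proportion to their estimated label bias
    # (noise_scale is a per-sample estimate; the exact form is assumed).
    w = 1.0 / (1.0 + noise_scale)                       # (batch,)
    return (w * (pred - target).pow(2).mean(dim=-1)).mean()

def train_asteroid(model, cheap_loader, accurate_loader, epochs=(10, 5)):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs[0]):                          # stage 1: noisy data
        for x, y, noise_scale in cheap_loader:
            opt.zero_grad()
            bias_aware_loss(model(x), y, noise_scale).backward()
            opt.step()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4) # smaller fine-tune lr
    for _ in range(epochs[1]):                          # stage 2: accurate data
        for x, y in accurate_loader:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()
    return model
```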
2306.03112
2023-06-05T08:38:30Z
Synthesizing Affective Neurophysiological Signals Using Generative Models: A Review Paper
[ "Alireza F. Nia", "Vanessa Tang", "Gonzalo Maso Talou", "Mark Billinghurst" ]
The integration of emotional intelligence in machines is an important step in advancing human-computer interaction. This demands the development of reliable end-to-end emotion recognition systems. However, the scarcity of public affective datasets presents a challenge. In this literature review, we emphasize the use of generative models to address this issue in neurophysiological signals, particularly Electroencephalogram (EEG) and Functional Near-Infrared Spectroscopy (fNIRS). We provide a comprehensive analysis of different generative models used in the field, examining their input formulation, deployment strategies, and methodologies for evaluating the quality of synthesized data. This review serves as a comprehensive overview, offering insights into the advantages, challenges, and promising future directions in the application of generative models in emotion recognition systems. Through this review, we aim to facilitate the progression of neurophysiological data augmentation, thereby supporting the development of more efficient and reliable emotion recognition systems.
[ "cs.HC", "cs.AI", "cs.LG", "q-bio.NC" ]
false
2306.03115
2023-06-05T13:13:19Z
AutoExp: A multidisciplinary, multi-sensor framework to evaluate human activities in self-driving cars
[ "Carlos Crispim-Junior", "Romain Guesdon", "Christophe Jallais", "Florent Laroche", "Stephanie Souche-Le Corvec", "Laure Tougne Rodet" ]
The adoption of self-driving cars will certainly revolutionize our lives, even though they may take more time to become fully autonomous than initially predicted. The first vehicles are already present in certain cities of the world as part of experimental robot-taxi services. However, most existing studies focus on the navigation part of such vehicles. We currently miss methods, datasets, and studies to assess the in-cabin human component of the adoption of such technology in real-world conditions. This paper proposes an experimental framework to study the activities of occupants of self-driving cars using a multidisciplinary approach (computer vision associated with human and social sciences), particularly non-driving related activities. The framework is composed of an experimentation scenario and a data acquisition module. We seek firstly to capture real-world data about the usage of the vehicle in conditions as close as possible to real-world use, and secondly to create a dataset containing in-cabin human activities to foster the development and evaluation of computer vision algorithms. The acquisition module records multiple views of the front seats of the vehicle (Intel RGB-D and GoPro cameras), in addition to survey data about the internal states and attitudes of participants towards this type of vehicle before, during, and after the experimentation. We evaluated the proposed framework by conducting a real-world experiment with 30 participants (1 hour each) to study the acceptance of self-driving cars (SDCs) of SAE level 4.
[ "cs.HC", "cs.AI", "cs.LG" ]
false
2306.03175
2023-06-05T18:32:53Z
Infusing Lattice Symmetry Priors in Attention Mechanisms for Sample-Efficient Abstract Geometric Reasoning
[ "Mattia Atzeni", "Mrinmaya Sachan", "Andreas Loukas" ]
The Abstraction and Reasoning Corpus (ARC) (Chollet, 2019) and its most recent language-complete instantiation (LARC) have been postulated as an important step towards general AI. Yet, even state-of-the-art machine learning models struggle to achieve meaningful performance on these problems, falling behind non-learning based approaches. We argue that solving these tasks requires extreme generalization that can only be achieved by properly accounting for core knowledge priors. As a step towards this goal, we focus on geometry priors and introduce LatFormer, a model that incorporates lattice symmetry priors in attention masks. We show that, for any transformation of the hypercubic lattice, there exists a binary attention mask that implements that group action. Hence, our study motivates a modification to the standard attention mechanism, where attention weights are scaled using soft masks generated by a convolutional network. Experiments on synthetic geometric reasoning show that LatFormer requires two orders of magnitude less data than standard attention and transformers. Moreover, our results on ARC and LARC tasks that incorporate geometric priors provide preliminary evidence that these complex datasets do not lie out of the reach of deep learning models.
[ "cs.AI", "cs.LG", "stat.ML" ]
false
2306.03179
2023-06-05T18:40:35Z
Fair Patient Model: Mitigating Bias in the Patient Representation Learned from the Electronic Health Records
[ "Sonish Sivarajkumar", "Yufei Huang", "Yanshan Wang" ]
Objective: To pre-train fair and unbiased patient representations from Electronic Health Records (EHRs) using a novel weighted loss function that reduces bias and improves fairness in deep representation learning models. Methods: We defined a new loss function, called weighted loss function, in the deep representation learning model to balance the importance of different groups of patients and features. We applied the proposed model, called Fair Patient Model (FPM), to a sample of 34,739 patients from the MIMIC-III dataset and learned patient representations for four clinical outcome prediction tasks. Results: FPM outperformed the baseline models in terms of three fairness metrics: demographic parity, equality of opportunity difference, and equalized odds ratio. FPM also achieved comparable predictive performance with the baselines, with an average accuracy of 0.7912. Feature analysis revealed that FPM captured more information from clinical features than the baselines. Conclusion: FPM is a novel method to pre-train fair and unbiased patient representations from EHR data using a weighted loss function. The learned representations can be used for various downstream tasks in healthcare and can be extended to other domains where bias and fairness are important.
[ "cs.LG", "cs.AI", "cs.CY" ]
false
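To make the weighted-loss idea above concrete, here is a minimal, hypothetical PyTorch sketch that re-weights each sample's loss inversely to its demographic group's frequency. FPM's actual weighting function is not specified here; this inverse-frequency form is purely an illustrative assumption.

```python
import torch

def group_weighted_loss(per_sample_loss, group_ids):
    # per_sample_loss: (batch,) losses; group_ids: (batch,) integer groups.
    counts = torch.bincount(group_ids).float()
    w = 1.0 / counts[group_ids]        # rare groups receive larger weight
    w = w / w.sum()                    # normalize weights to sum to one
    return (w * per_sample_loss).sum()

loss = group_weighted_loss(torch.rand(8),
                           torch.tensor([0, 0, 0, 1, 1, 2, 2, 2]))
```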
2306.03195
2023-06-05T19:13:44Z
Lumos in the Night Sky: AI-enabled Visual Tool for Exploring Night-Time Light Patterns
[ "Jakob Hederich", "Shreya Ghosh", "Zeyu He", "Prasenjit Mitra" ]
We introduce NightPulse, an interactive tool for night-time light (NTL) data visualization and analytics, which enables researchers and stakeholders to explore and analyze NTL data with a user-friendly platform. Powered by an efficient system architecture, NightPulse supports image segmentation, clustering, and change pattern detection to identify urban development and sprawl patterns. It captures temporal trends of NTL and semantics of cities, answering questions about demographic factors, city boundaries, and unusual differences.
[ "cs.HC", "cs.AI", "cs.IR", "cs.LG" ]
false
2306.03202
2023-06-05T19:22:02Z
Nonlinear Distributionally Robust Optimization
[ "Mohammed Rayyan Sheriff", "Peyman Mohajerin Esfahani" ]
This article focuses on a class of distributionally robust optimization (DRO) problems where, unlike the growing body of the literature, the objective function is potentially non-linear in the distribution. Existing methods to optimize nonlinear functions in probability space use the Fréchet derivatives, which present both theoretical and computational challenges. Motivated by this, we propose an alternative notion for the derivative and corresponding smoothness based on the Gâteaux (G)-derivative for generic risk measures. These concepts are explained via three running risk measure examples of variance, entropic risk, and risk on finite support sets. We then propose a G-derivative based Frank-Wolfe (FW) algorithm for generic non-linear optimization problems in probability spaces and establish its convergence under the proposed notion of smoothness in a completely norm-independent manner. We use the set-up of the FW algorithm to devise a methodology to compute a saddle point of the non-linear DRO problem. Finally, for the minimum variance portfolio selection problem we analyze the regularity conditions and compute the FW-oracle in various settings, and validate the theoretical results numerically.
[ "stat.ML", "cs.LG", "math.OC" ]
false
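Since variance is one of the running risk measure examples, a Frank-Wolfe iteration on a finite-support distribution is easy to sketch. The numpy toy below minimizes the variance of a distribution over fixed support points; the classic step-size rule and iteration count are standard choices assumed for illustration, not the paper's G-derivative machinery.

```python
import numpy as np

def variance(p, x):
    mu = p @ x
    return p @ (x - mu) ** 2

def frank_wolfe(x, steps=200):
    n = len(x)
    p = np.full(n, 1.0 / n)             # start at the uniform distribution
    for t in range(steps):
        mu = p @ x
        g = x ** 2 - 2 * mu * x         # gradient of Var(p) w.r.t. p
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0           # LMO: best vertex of the simplex
        gamma = 2.0 / (t + 2.0)         # classic FW step size
        p = (1 - gamma) * p + gamma * s
    return p

x = np.array([1.0, 2.0, 3.0, 4.0])
p = frank_wolfe(x)                      # mass concentrates to cut variance
print(p, variance(p, x))
```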
2306.03221
2023-06-05T20:11:30Z
Structural Re-weighting Improves Graph Domain Adaptation
[ "Shikun Liu", "Tianchun Li", "Yongbin Feng", "Nhan Tran", "Han Zhao", "Qiu Qiang", "Pan Li" ]
In many real-world applications, graph-structured data used for training and testing have differences in distribution, such as in high energy physics (HEP) where simulation data used for training may not match real experiments. Graph domain adaptation (GDA) is a method used to address these differences. However, current GDA primarily works by aligning the distributions of node representations output by a single graph neural network encoder shared across the training and testing domains, which may often yield sub-optimal solutions. This work examines different impacts of distribution shifts caused by either graph structure or node attributes and identifies a new type of shift, named conditional structure shift (CSS), which current GDA approaches provably handle sub-optimally. A novel approach, called structural reweighting (StruRW), is proposed to address this issue and is tested on synthetic graphs, four benchmark datasets, and a new application in HEP. StruRW has shown significant performance improvement over the baselines in the settings with large graph structure shifts, and reasonable performance improvement when node attribute shift dominates.
[ "cs.LG", "cs.AI", "cs.SI" ]
false
2306.03249
2023-06-05T21:08:34Z
Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models
[ "Alexander Lin", "Bahareh Tolooshams", "Yves Atchadé", "Demba Ba" ]
Latent Gaussian models have a rich history in statistics and machine learning, with applications ranging from factor analysis to compressed sensing to time series analysis. The classical method for maximizing the likelihood of these models is the expectation-maximization (EM) algorithm. For problems with high-dimensional latent variables and large datasets, EM scales poorly because it needs to invert as many large covariance matrices as the number of data points. We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversion. Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation. In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
[ "cs.LG", "eess.SP", "stat.CO" ]
false
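The inverse-free ingredient above can be illustrated with a plain conjugate-gradient (CG) solver that touches the covariance only through matrix-vector products; unrolling a fixed number of such iterations is what makes backpropagation through the solver possible. This numpy sketch assumes a factor-analysis-style covariance Phi Phi^T + sigma^2 I; the sizes and iteration counts are illustrative.

```python
import numpy as np

def cg_solve(A_mv, b, iters=20, tol=1e-10):
    """Solve A x = b with CG, given only the matrix-vector product A_mv."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: a posterior solve (Phi Phi^T + sigma^2 I) x = b with no inversion.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 50))
sigma2, b = 0.1, rng.normal(size=500)
x = cg_solve(lambda v: Phi @ (Phi.T @ v) + sigma2 * v, b)
```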
2306.03257
2023-06-05T21:19:37Z
Generating Private Synthetic Data with Genetic Algorithms
[ "Terrance Liu", "Jingwu Tang", "Giuseppe Vietri", "Zhiwei Steven Wu" ]
We study the problem of efficiently generating differentially private synthetic data that approximate the statistical properties of an underlying sensitive dataset. In recent years, there has been a growing line of work that approaches this problem using first-order optimization techniques. However, such techniques are restricted to optimizing differentiable objectives only, severely limiting the types of analyses that can be conducted. For example, first-order mechanisms have been primarily successful in approximating statistical queries only in the form of marginals for discrete data domains. In some cases, one can circumvent such issues by relaxing the task's objective to maintain differentiability. However, even when possible, these approaches impose a fundamental limitation in which modifications to the minimization problem become additional sources of error. Therefore, we propose Private-GSD, a private genetic algorithm based on zeroth-order optimization heuristics that do not require modifying the original objective. As a result, it avoids the aforementioned limitations of first-order optimization. We empirically evaluate Private-GSD against baseline algorithms on data derived from the American Community Survey across a variety of statistics--otherwise known as statistical queries--both for discrete and real-valued attributes. We show that Private-GSD outperforms the state-of-the-art methods on non-differentiable queries while matching accuracy in approximating differentiable ones.
[ "cs.NE", "cs.CR", "cs.LG" ]
false
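A toy version of the zeroth-order search is sketched below: a population of candidate synthetic datasets is evolved so their query answers match target (noisy) answers, with no gradients involved. The queries, mutation scheme, and population sizes are hypothetical stand-ins; Private-GSD's actual genetic operators and privacy accounting are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
true_answers = np.array([0.4, 0.7])    # stand-in for privately released answers

def queries(data):                      # two simple statistical queries
    return np.array([data.mean(), (data > 0.5).mean()])

def fitness(data):                      # zeroth-order: no gradients needed
    return -np.abs(queries(data) - true_answers).sum()

pop = [rng.random(200) for _ in range(40)]   # population of synthetic datasets
for gen in range(300):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:20]
    children = []
    for parent in survivors:
        child = parent.copy()
        idx = rng.integers(len(child), size=5)
        child[idx] = rng.random(5)      # mutate a few synthetic records
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(queries(best), true_answers)
```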
2306.03288
2023-06-05T22:21:26Z
Deep Learning From Crowdsourced Labels: Coupled Cross-entropy Minimization, Identifiability, and Regularization
[ "Shahana Ibrahim", "Tri Nguyen", "Xiao Fu" ]
Using noisy crowdsourced labels from multiple annotators, a deep learning-based end-to-end (E2E) system aims to learn the label correction mechanism and the neural classifier simultaneously. To this end, many E2E systems concatenate the neural classifier with multiple annotator-specific "label confusion" layers and co-train the two parts in a parameter-coupled manner. The formulated coupled cross-entropy minimization (CCEM)-type criteria are intuitive and work well in practice. Nonetheless, theoretical understanding of the CCEM criterion has been limited. The contribution of this work is twofold: First, performance guarantees of the CCEM criterion are presented. Our analysis reveals for the first time that the CCEM can indeed correctly identify the annotators' confusion characteristics and the desired "ground-truth" neural classifier under realistic conditions, e.g., when only incomplete annotator labeling and finite samples are available. Second, based on the insights learned from our analysis, two regularized variants of the CCEM are proposed. The regularization terms provably enhance the identifiability of the target model parameters in various more challenging cases. A series of synthetic and real data experiments are presented to showcase the effectiveness of our approach.
[ "cs.LG", "eess.SP", "stat.ML" ]
false
2306.03314
2023-06-05T23:55:37Z
Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents
[ "Yashar Talebirad", "Amirhossein Nadiri" ]
In this paper, we present a novel framework for enhancing the capabilities of large language models (LLMs) by leveraging the power of multi-agent systems. Our framework introduces a collaborative environment where multiple intelligent agent components, each with distinctive attributes and roles, work together to handle complex tasks more efficiently and effectively. We demonstrate the practicality and versatility of our framework through case studies in artificial general intelligence (AGI), specifically focusing on the Auto-GPT and BabyAGI models. We also examine the "Gorilla" model, which integrates external APIs into the LLM. Our framework addresses limitations and challenges such as looping issues, security risks, scalability, system evaluation, and ethical considerations. By modeling various domains such as courtroom simulations and software development scenarios, we showcase the potential applications and benefits of our proposed multi-agent system. Our framework provides an avenue for advancing the capabilities and performance of LLMs through collaboration and knowledge exchange among intelligent agents.
[ "cs.AI", "cs.LG", "cs.MA" ]
false
2306.04648
2023-06-05T13:57:23Z
On training locally adaptive CP
[ "Nicolo Colombo" ]
We address the problem of making Conformal Prediction (CP) intervals locally adaptive. Most existing methods focus on approximating the object-conditional validity of the intervals by partitioning or re-weighting the calibration set. Our strategy is new and conceptually different. Instead of re-weighting the calibration data, we redefine the conformity measure through a trainable change of variables, $A \to \phi_X(A)$, that depends explicitly on the object attributes, $X$. Under certain conditions and if $\phi_X$ is monotonic in $A$ for any $X$, the transformations produce prediction intervals that are guaranteed to be marginally valid and have $X$-dependent sizes. We describe how to parameterize and train $\phi_X$ to maximize the interval efficiency. Contrary to other CP-aware training methods, the objective function is smooth and can be minimized through standard gradient methods without approximations.
[ "cs.LG", "cs.AI", "stat.ML" ]
false
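To see how a monotone, X-dependent change of variables yields marginally valid intervals with X-dependent sizes, consider the simple instance $\phi_X(A) = A / \sigma(X)$. The numpy sketch below uses a known noise scale for clarity; in the paper's setting $\phi_X$ would be parameterized and trained, so everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + (0.1 + 0.4 * x) * rng.normal(size=n)

mu = np.sin(2 * np.pi * x)              # stand-in point predictor
sigma = 0.1 + 0.4 * x                   # X-dependent scale (here: known)

scores = np.abs(y - mu) / sigma         # phi_X(A): monotone in A for any X
alpha = 0.1
k = int(np.ceil((n + 1) * (1 - alpha))) # finite-sample conformal quantile
q = np.sort(scores)[k - 1]

x_test = np.array([0.1, 0.9])
width = q * (0.1 + 0.4 * x_test)        # invert phi: X-dependent widths
print(width)                            # wider interval where noise is larger
```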
2306.02826
2023-06-05T12:22:46Z
Near-Optimal Quantum Coreset Construction Algorithms for Clustering
[ "Yecheng Xue", "Xiaoyu Chen", "Tongyang Li", "Shaofeng H. -C. Jiang" ]
$k$-Clustering in $\mathbb{R}^d$ (e.g., $k$-median and $k$-means) is a fundamental machine learning problem. While near-linear time approximation algorithms were known in the classical setting for a dataset with cardinality $n$, it remains open to find sublinear-time quantum algorithms. We give quantum algorithms that find coresets for $k$-clustering in $\mathbb{R}^d$ with $\tilde{O}(\sqrt{nk}d^{3/2})$ query complexity. Our coreset reduces the input size from $n$ to $\mathrm{poly}(k\epsilon^{-1}d)$, so that existing $\alpha$-approximation algorithms for clustering can run on top of it and yield $(1 + \epsilon)\alpha$-approximation. This eventually yields a quadratic speedup for various $k$-clustering approximation algorithms. We complement our algorithm with a nearly matching lower bound, that any quantum algorithm must make $\Omega(\sqrt{nk})$ queries in order to achieve even $O(1)$-approximation for $k$-clustering.
[ "quant-ph", "cs.AI", "cs.DS", "cs.LG", "stat.ML" ]
false
2306.02971
2023-06-05T15:35:00Z
Online Learning with Feedback Graphs: The True Shape of Regret
[ "Tomáš Kocák", "Alexandra Carpentier" ]
Sequential learning with feedback graphs is a natural extension of the multi-armed bandit problem where the problem is equipped with an underlying graph structure that provides additional information - playing an action reveals the losses of all the neighbors of the action. This problem was introduced by Mannor and Shamir (2011) and has received considerable attention in recent years. It is generally stated in the literature that the minimax regret rate for this problem is of order $\sqrt{\alpha T}$, where $\alpha$ is the independence number of the graph, and $T$ is the time horizon. However, this is proven only when the number of rounds $T$ is larger than $\alpha^3$, which poses a significant restriction for the usability of this result in large graphs. In this paper, we define a new quantity $R^*$, called the problem complexity, and prove that the minimax regret is proportional to $R^*$ for any graph and time horizon $T$. Introducing an intricate exploration strategy, we define an algorithm that achieves the minimax optimal regret bound and becomes the first provably optimal algorithm for this setting, even if $T$ is smaller than $\alpha^3$.
[ "cs.LG", "cs.IT", "math.IT", "math.ST", "stat.TH" ]
false
2307.00004
2023-06-05T23:23:04Z
PV Fleet Modeling via Smooth Periodic Gaussian Copula
[ "Mehmet G. Ogut", "Bennet Meyers", "Stephen P. Boyd" ]
We present a method for jointly modeling power generation from a fleet of photovoltaic (PV) systems. We propose a white-box method that finds a function that invertibly maps vector time-series data to independent and identically distributed standard normal variables. The proposed method, based on a novel approach for fitting a smooth, periodic copula transform to data, captures many aspects of the data such as diurnal variation in the distribution of power output, dependencies among different PV systems, and dependencies across time. It consists of interpretable steps and is scalable to many systems. The resulting joint probability model of PV fleet output across systems and time can be used to generate synthetic data, impute missing data, perform anomaly detection, and make forecasts. In this paper, we explain the method and demonstrate these applications.
[ "stat.AP", "cs.AI", "cs.LG", "eess.SP", "physics.data-an" ]
false
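The core transform, mapping each marginal invertibly to a standard normal, can be sketched in a few lines. This version uses a plain empirical CDF rather than the paper's smooth, periodic fit, so treat it as a simplified illustration of the copula idea only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
power = rng.gamma(2.0, 1.5, size=2000)        # skewed stand-in for PV output

def to_gaussian(v):
    ranks = stats.rankdata(v) / (len(v) + 1)  # empirical CDF values in (0, 1)
    return stats.norm.ppf(ranks)              # map to standard normal

z = to_gaussian(power)
print(z.mean(), z.std())                      # approximately 0 and 1

# Inverse map: generate synthetic samples by inverting the empirical CDF.
u = stats.norm.cdf(rng.normal(size=5))
synthetic = np.quantile(power, u)
```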
2306.03318
2023-06-06T00:01:40Z
Student Classroom Behavior Detection based on Improved YOLOv7
[ "Fan Yang" ]
Accurately detecting student behavior in classroom videos can aid in analyzing their classroom performance and improving teaching effectiveness. However, the current accuracy rate in behavior detection is low. To address this challenge, we propose the Student Classroom Behavior Detection method, based on improved YOLOv7. First, we created the Student Classroom Behavior dataset (SCB-Dataset), which includes 18.4k labels and 4.2k images, covering three behaviors: hand raising, reading, and writing. To improve detection accuracy in crowded scenes, we integrated the BiFormer attention module and Wise-IoU into the YOLOv7 network. Finally, experiments were conducted on the SCB-Dataset, and the model achieved an mAP@0.5 of 79%, a 1.8% improvement over previous results. The SCB-Dataset and code are available for download at: https://github.com/Whiffe/SCB-dataset.
[ "cs.CV" ]
false
2306.03380
2023-06-06T03:40:29Z
A Unified Framework to Super-Resolve Face Images of Varied Low Resolutions
[ "Qiuyu Peng", "Zifei Jiang", "Yan Huang", "Jingliang Peng" ]
The existing face image super-resolution (FSR) algorithms usually train a specific model for a specific low input resolution for optimal results. By contrast, we explore in this work a unified framework that is trained once and then used to super-resolve input face images of varied low resolutions. For that purpose, we propose a novel neural network architecture that is composed of three anchor auto-encoders, one feature weight regressor and a final image decoder. The three anchor auto-encoders are each dedicated to optimal FSR at one of three pre-defined low input resolutions, named anchor resolutions. An input face image of an arbitrary low resolution is first up-scaled to the target resolution by bi-cubic interpolation and then fed to the three auto-encoders in parallel. The three encoded anchor features are then fused with weights determined by the feature weight regressor. At last, the fused feature is sent to the final image decoder to derive the super-resolution result. As shown by experiments, the proposed algorithm achieves robust and state-of-the-art performance over a wide range of low input resolutions with a single framework. Code and models will be made available after the publication of this work.
[ "cs.CV" ]
false
2306.03422
2023-06-06T05:48:09Z
Prompting Large Language Models to Reformulate Queries for Moment Localization
[ "Wenfeng Yan", "Shaoxiang Chen", "Zuxuan Wu", "Yu-Gang Jiang" ]
The task of moment localization is to localize a temporal moment in an untrimmed video for a given natural language query. Since untrimmed video contains highly redundant contents, the quality of the query is crucial for accurately localizing moments, i.e., the query should provide precise information about the target moment so that the localization model can understand what to look for in the videos. However, the natural language queries in current datasets may not be easy to understand for existing models. For example, the Ego4D dataset uses question sentences as the query to describe relatively complex moments. While being natural and straightforward for humans, understanding such question sentences is challenging for mainstream moment localization models like 2D-TAN. Inspired by the recent success of large language models, especially their ability of understanding and generating complex natural language contents, in this extended abstract, we make early attempts at reformulating the moment queries into a set of instructions using large language models and making them more friendly to the localization models.
[ "cs.CV" ]
false
2306.03428
2023-06-06T05:59:23Z
GaitGCI: Generative Counterfactual Intervention for Gait Recognition
[ "Huanzhang Dou", "Pengyi Zhang", "Wei Su", "Yunlong Yu", "Yining Lin", "Xi Li" ]
Gait is one of the most promising biometrics that aims to identify pedestrians from their walking patterns. However, prevailing methods are susceptible to confounders, resulting in the networks hardly focusing on the regions that reflect effective walking patterns. To address this fundamental problem in gait recognition, we propose a Generative Counterfactual Intervention framework, dubbed GaitGCI, consisting of Counterfactual Intervention Learning (CIL) and Diversity-Constrained Dynamic Convolution (DCDC). CIL eliminates the impacts of confounders by maximizing the likelihood difference between factual/counterfactual attention while DCDC adaptively generates sample-wise factual/counterfactual attention to efficiently perceive the sample-wise properties. With matrix decomposition and diversity constraint, DCDC guarantees the model to be efficient and effective. Extensive experiments indicate that proposed GaitGCI: 1) could effectively focus on the discriminative and interpretable regions that reflect gait pattern; 2) is model-agnostic and could be plugged into existing models to improve performance with nearly no extra cost; 3) efficiently achieves state-of-the-art performance on arbitrary scenarios (in-the-lab and in-the-wild).
[ "cs.CV" ]
false
2306.03482
2023-06-06T08:08:18Z
Looking and Listening: Audio Guided Text Recognition
[ "Wenwen Yu", "Mingyu Liu", "Biao Yang", "Enming Zhang", "Deqiang Jiang", "Xing Sun", "Yuliang Liu", "Xiang Bai" ]
Text recognition in the wild is a long-standing problem in computer vision. Driven by end-to-end deep learning, recent studies suggest vision and language processing are effective for scene text recognition. Yet, solving edit errors such as add, delete, or replace is still the main challenge for existing approaches. In fact, the content of the text and its audio naturally correspond to each other, i.e., a single character error may result in a clearly different pronunciation. In this paper, we propose AudioOCR, a simple yet effective probabilistic audio decoder for mel spectrogram sequence prediction to guide the scene text recognition, which only participates in the training phase and brings no extra cost during the inference stage. The underlying principle of AudioOCR can be easily applied to the existing approaches. Experiments using 7 previous scene text recognition methods on 12 existing regular, irregular, and occluded benchmarks demonstrate our proposed method can bring consistent improvement. More importantly, through our experimentation, we show that AudioOCR possesses a generalizability that extends to more challenging scenarios, including recognizing non-English text, out-of-vocabulary words, and text with various accents. Code will be available at https://github.com/wenwenyu/AudioOCR.
[ "cs.CV" ]
false
2306.03497
2023-06-06T08:28:55Z
Instructive Feature Enhancement for Dichotomous Medical Image Segmentation
[ "Lian Liu", "Han Zhou", "Jiongquan Chen", "Sijing Liu", "Wenlong Shi", "Dong Ni", "Deng-Ping Fan", "Xin Yang" ]
Deep neural networks have been widely applied in dichotomous medical image segmentation (DMIS) of many anatomical structures in several modalities, achieving promising performance. However, existing networks tend to struggle with task-specific, heavy and complex designs to improve accuracy. They give little indication of which feature channels would be more beneficial for segmentation, which may be why the performance and universality of these segmentation models are hindered. In this study, we propose an instructive feature enhancement approach, namely IFE, to adaptively select feature channels with rich texture cues and strong discriminability to enhance raw features based on local curvature or global information entropy criteria. Being plug-and-play and applicable to diverse DMIS tasks, IFE encourages the model to focus on texture-rich features which are especially important for the ambiguous and challenging boundary identification, simultaneously achieving simplicity, universality, and certain interpretability. To evaluate the proposed IFE, we constructed the first large-scale DMIS dataset Cosmos55k, which contains 55,023 images from 7 modalities and 26 anatomical structures. Extensive experiments show that IFE can improve the performance of classic segmentation networks across different anatomies and modalities with only slight modifications. Code is available at https://github.com/yezi-66/IFE.
[ "cs.CV" ]
false
2306.03508
2023-06-06T08:53:53Z
Semantic Segmentation on VSPW Dataset through Contrastive Loss and Multi-dataset Training Approach
[ "Min Yan", "Qianxiong Ning", "Qian Wang" ]
Video scene parsing incorporates temporal information, which can enhance the consistency and accuracy of predictions compared to image scene parsing. The added temporal dimension enables a more comprehensive understanding of the scene, leading to more reliable results. This paper presents the winning solution of the CVPR2023 workshop for video semantic segmentation, focusing on enhancing spatial-temporal correlations with contrastive loss. We also explore the influence of multi-dataset training by utilizing a label-mapping technique. The final result aggregates the outputs of the above two models. Our approach achieves 65.95% mIoU on the VSPW dataset, ranking 1st in the VSPW challenge at CVPR 2023.
[ "cs.CV" ]
false
2306.03576
2023-06-06T10:51:05Z
Human 3D Avatar Modeling with Implicit Neural Representation: A Brief Survey
[ "Mingyang Sun", "Dingkang Yang", "Dongliang Kou", "Yang Jiang", "Weihua Shan", "Zhe Yan", "Lihua Zhang" ]
A human 3D avatar is one of the important elements in the metaverse, and the modeling effect directly affects people's visual experience. However, the human body has a complex topology and diverse details, so it is often expensive, time-consuming, and laborious to build a satisfactory model. Recent studies have proposed a novel method, implicit neural representation, which is a continuous representation method that can describe objects with arbitrary topology at arbitrary resolution. Researchers have applied implicit neural representation to human 3D avatar modeling and obtained better results than traditional methods. This paper comprehensively reviews the application of implicit neural representation in human body modeling. First, we introduce three implicit representations - the occupancy field, SDF, and NeRF - and classify the literature surveyed in this paper. Then the application of implicit modeling methods to the body, hand, and head is compared and analyzed respectively. Finally, we point out the shortcomings of current work and provide suggestions for researchers.
[ "cs.CV" ]
false
2306.03577
2023-06-06T10:52:06Z
An Open Patch Generator based Fingerprint Presentation Attack Detection using Generative Adversarial Network
[ "Anuj Rai", "Ashutosh Anshul", "Ashwini Jha", "Prayag Jain", "Ramprakash Sharma", "Somnath Dey" ]
The low-cost, user-friendly, and convenient nature of Automatic Fingerprint Recognition Systems (AFRS) makes them suitable for a wide range of applications. This spreading use of AFRS also makes them vulnerable to various security threats. Presentation Attack (PA) or spoofing is one of the threats which is caused by presenting a spoof of a genuine fingerprint to the sensor of AFRS. Fingerprint Presentation Attack Detection (FPAD) is a countermeasure intended to protect AFRS against fake or spoof fingerprints created using various fabrication materials. In this paper, we have proposed a Convolutional Neural Network (CNN) based technique that uses a Generative Adversarial Network (GAN) to augment the dataset with spoof samples generated from the proposed Open Patch Generator (OPG). This OPG is capable of generating realistic fingerprint samples which have no resemblance to the existing spoof fingerprint samples generated with other materials. The augmented dataset is fed to the DenseNet classifier which helps in increasing the performance of the Presentation Attack Detection (PAD) module for the various real-world attacks possible with unknown spoof materials. Experimental evaluations of the proposed approach are carried out on the Liveness Detection (LivDet) 2015, 2017, and 2019 competition databases. An overall accuracy of 96.20%, 94.97%, and 92.90% has been achieved on the LivDet 2015, 2017, and 2019 databases, respectively under the LivDet protocol scenarios. The performance of the proposed PAD model is also validated in the cross-material and cross-sensor attack paradigm which further exhibits its capability to be used under real-world attack scenarios.
[ "cs.CV" ]
false
2306.03594
2023-06-06T11:31:29Z
Emotional Talking Head Generation based on Memory-Sharing and Attention-Augmented Networks
[ "Jianrong Wang", "Yaxin Zhao", "Li Liu", "Tianyi Xu", "Qi Li", "Sen Li" ]
Given an audio clip and a reference face image, the goal of the talking head generation is to generate a high-fidelity talking head video. Although some audio-driven methods of generating talking head videos have made some achievements in the past, most of them only focused on lip and audio synchronization and lack the ability to reproduce the facial expressions of the target person. To this end, we propose a talking head generation model consisting of a Memory-Sharing Emotion Feature extractor (MSEF) and an Attention-Augmented Translator based on U-net (AATU). Firstly, MSEF can extract implicit emotional auxiliary features from audio to estimate more accurate emotional face landmarks. Secondly, AATU acts as a translator between the estimated landmarks and the photo-realistic video frames. Extensive qualitative and quantitative experiments have shown the superiority of the proposed method to the previous works. Codes will be made publicly available.
[ "cs.CV" ]
false
2306.03597
2023-06-06T11:36:14Z
Human-Object Interaction Prediction in Videos through Gaze Following
[ "Zhifan Ni", "Esteve Valls Mascaró", "Hyemin Ahn", "Dongheui Lee" ]
Understanding the human-object interactions (HOIs) from a video is essential to fully comprehend a visual scene. This line of research has been addressed by detecting HOIs from images and lately from videos. However, the video-based HOI anticipation task in the third-person view remains understudied. In this paper, we design a framework to detect current HOIs and anticipate future HOIs in videos. We propose to leverage human gaze information since people often fixate on an object before interacting with it. These gaze features together with the scene contexts and the visual appearances of human-object pairs are fused through a spatio-temporal transformer. To evaluate the model in the HOI anticipation task in a multi-person scenario, we propose a set of person-wise multi-label metrics. Our model is trained and validated on the VidHOI dataset, which contains videos capturing daily life and is currently the largest video HOI dataset. Experimental results in the HOI detection task show that our approach improves over the baseline by a large relative margin of 36.3%. Moreover, we conduct an extensive ablation study to demonstrate the effectiveness of our modifications and extensions to the spatio-temporal transformer. Our code is publicly available at https://github.com/nizhf/hoi-prediction-gaze-transformer.
[ "cs.CV" ]
false
2306.03630
2023-06-06T12:36:57Z
Mutual Information Regularization for Weakly-supervised RGB-D Salient Object Detection
[ "Aixuan Li", "Yuxin Mao", "Jing Zhang", "Yuchao Dai" ]
In this paper, we present a weakly-supervised RGB-D salient object detection model via scribble supervision. Specifically, as a multimodal learning task, we focus on effective multimodal representation learning via inter-modal mutual information regularization. In particular, following the principle of disentangled representation learning, we introduce a mutual information upper bound with a mutual information minimization regularizer to encourage the disentangled representation of each modality for salient object detection. Based on our multimodal representation learning framework, we introduce an asymmetric feature extractor for our multimodal data, which is proven more effective than the conventional symmetric backbone setting. We also introduce a multimodal variational auto-encoder as a stochastic prediction refinement technique, which takes pseudo labels from the first training stage as supervision and generates refined predictions. Experimental results on benchmark RGB-D salient object detection datasets verify both the effectiveness of our explicit multimodal disentangled representation learning method and the stochastic prediction refinement strategy, achieving comparable performance with the state-of-the-art fully supervised models. Our code and data are available at: https://github.com/baneitixiaomai/MIRV.
[ "cs.CV" ]
false
2306.03711
2023-06-06T14:21:22Z
Deep Learning-Enabled Sleep Staging From Vital Signs and Activity Measured Using a Near-Infrared Video Camera
[ "Jonathan Carter", "João Jorge", "Bindia Venugopal", "Oliver Gibson", "Lionel Tarassenko" ]
Conventional sleep monitoring is time-consuming, expensive and uncomfortable, requiring a large number of contact sensors to be attached to the patient. Video data is commonly recorded as part of a sleep laboratory assessment. If accurate sleep staging could be achieved solely from video, this would overcome many of the problems of traditional methods. In this work we use heart rate, breathing rate and activity measures, all derived from a near-infrared video camera, to perform sleep stage classification. We use a deep transfer learning approach to overcome data scarcity, by using an existing contact-sensor dataset to learn effective representations from the heart and breathing rate time series. Using a dataset of 50 healthy volunteers, we achieve an accuracy of 73.4% and a Cohen's kappa of 0.61 in four-class sleep stage classification, establishing a new state-of-the-art for video-based sleep staging.
[ "cs.CV" ]
false
2306.03747
2023-06-06T15:07:39Z
Towards Scalable Multi-View Reconstruction of Geometry and Materials
[ "Carolin Schmitt", "Božidar Antić", "Andrei Neculai", "Joo Ho Lee", "Andreas Geiger" ]
In this paper, we propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes that exceed object-scale and hence cannot be captured with stationary light stages. The input are high-resolution RGB-D images captured by a mobile, hand-held capture system with point lights for active illumination. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. To facilitate scalability to large numbers of observation views and optimization variables, we introduce a distributed optimization algorithm that reconstructs 2.5D keyframe-based representations of the scene. A novel multi-view consistency regularizer effectively synchronizes neighboring keyframes such that the local optimization results allow for seamless integration into a globally consistent 3D model. We provide a study on the importance of each component in our formulation and show that our method compares favorably to baselines. We further demonstrate that our method accurately reconstructs various objects and materials and allows for expansion to spatially larger scenes. We believe that this work represents a significant step towards making geometry and material estimation from hand-held scanners scalable.
[ "cs.CV" ]
false
2306.03847
2023-06-06T16:35:45Z
Learning Human Mesh Recovery in 3D Scenes
[ "Zehong Shen", "Zhi Cen", "Sida Peng", "Qing Shuai", "Hujun Bao", "Xiaowei Zhou" ]
We present a novel method for recovering the absolute pose and shape of a human in a pre-scanned scene given a single image. Unlike previous methods that perform scene-aware mesh optimization, we propose to first estimate absolute position and dense scene contacts with a sparse 3D CNN, and later enhance a pretrained human mesh recovery network by cross-attention with the derived 3D scene cues. Joint learning on images and scene geometry enables our method to reduce the ambiguity caused by depth and occlusion, resulting in more reasonable global postures and contacts. Encoding scene-aware cues in the network also allows the proposed method to be optimization-free, and opens up the opportunity for real-time applications. The experiments show that the proposed network is capable of recovering accurate and physically-plausible meshes by a single forward pass and outperforms state-of-the-art methods in terms of both accuracy and speed.
[ "cs.CV" ]
false
2306.03908
2023-06-06T17:59:51Z
SAM3D: Segment Anything in 3D Scenes
[ "Yunhan Yang", "Xiaoyang Wu", "Tong He", "Hengshuang Zhao", "Xihui Liu" ]
In this work, we propose SAM3D, a novel framework that is able to predict masks in 3D point clouds by leveraging the Segment-Anything Model (SAM) in RGB images without further training or finetuning. For a point cloud of a 3D scene with posed RGB images, we first predict segmentation masks of RGB images with SAM, and then project the 2D masks into the 3D points. Later, we merge the 3D masks iteratively with a bottom-up merging approach. At each step, we merge the point cloud masks of two adjacent frames with the bidirectional merging approach. In this way, the 3D masks predicted from different frames are gradually merged into the 3D masks of the whole 3D scene. Finally, we can optionally ensemble the result from our SAM3D with the over-segmentation results based on the geometric information of the 3D scenes. Our approach is evaluated on the ScanNet dataset, and qualitative results demonstrate that our SAM3D achieves reasonable and fine-grained 3D segmentation results without any training or finetuning of SAM.
[ "cs.CV" ]
false
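A hedged numpy sketch of the projection step described above: lift a 2D mask into 3D using depth, camera intrinsics, and pose, then decide whether masks from two frames should merge based on point-set overlap. The pinhole unprojection is standard, but the voxel-IoU merging criterion and thresholds are illustrative assumptions, not SAM3D's exact bidirectional procedure.

```python
import numpy as np

def unproject(mask, depth, K, cam2world):
    """Lift the masked pixels of one frame into world-coordinate 3D points."""
    v, u = np.nonzero(mask)                 # pixel coordinates inside the mask
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]         # pinhole camera model
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (cam2world @ pts.T).T[:, :3]     # homogeneous -> world coordinates

def should_merge(pts_a, pts_b, voxel=0.05, iou_thresh=0.3):
    """Merge two frame-level masks if their voxelized points overlap enough."""
    va = {tuple(p) for p in np.floor(pts_a / voxel).astype(int)}
    vb = {tuple(p) for p in np.floor(pts_b / voxel).astype(int)}
    iou = len(va & vb) / max(len(va | vb), 1)
    return iou > iou_thresh
```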
2306.03932
2023-06-06T18:00:47Z
Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images!
[ "Zaid Khan", "Vijay Kumar BG", "Samuel Schulter", "Xiang Yu", "Yun Fu", "Manmohan Chandraker" ]
Finetuning a large vision language model (VLM) on a target dataset after large scale pretraining is a dominant paradigm in visual question answering (VQA). Datasets for specialized tasks such as knowledge-based VQA or VQA in non natural-image domains are orders of magnitude smaller than those for general-purpose VQA. While collecting additional labels for specialized tasks or domains can be challenging, unlabeled images are often available. We introduce SelTDA (Self-Taught Data Augmentation), a strategy for finetuning large VLMs on small-scale VQA datasets. SelTDA uses the VLM and target dataset to build a teacher model that can generate question-answer pseudolabels directly conditioned on an image alone, allowing us to pseudolabel unlabeled images. SelTDA then finetunes the initial VLM on the original dataset augmented with freshly pseudolabeled images. We describe a series of experiments showing that our self-taught data augmentation increases robustness to adversarially searched questions, counterfactual examples and rephrasings, improves domain generalization, and results in greater retention of numerical reasoning skills. The proposed strategy requires no additional annotations or architectural modifications, and is compatible with any modern encoder-decoder multimodal transformer. Code available at https://github.com/codezakh/SelTDA.
[ "cs.CV" ]
false
2306.04000
2023-06-06T20:28:04Z
A Quality Aware Sample-to-Sample Comparison for Face Recognition
[ "Mohammad Saeed Ebrahimi Saadabadi", "Sahar Rahimi Malakshan", "Ali Zafari", "Moktari Mostofa", "Nasser M. Nasrabadi" ]
Currently available face datasets mainly consist of a large number of high-quality and a small number of low-quality samples. As a result, a Face Recognition (FR) network fails to learn the distribution of low-quality samples since they are less frequent during training (underrepresented). Moreover, current state-of-the-art FR training paradigms are based on the sample-to-center comparison (i.e., Softmax-based classifier), which results in a lack of uniformity between train and test metrics. This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace). In this regard, Softmax centers are adaptively guided to pay more attention to low-quality samples by using a quality-aware function. Accordingly, QAFace adds a quality-based adjustment to the updating procedure of the Softmax-based classifier to improve the performance on the underrepresented low-quality samples. Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets. In addition, QAFace ignores the unrecognizable low-quality samples using the feature magnitude as a proxy for quality. As a result, QAFace prevents class centers from getting distracted from the optimal direction. The proposed method is superior to the state-of-the-art algorithms in extensive experimental results on the CFP-FP, LFW, CPLFW, CALFW, AgeDB, IJB-B, and IJB-C datasets.
[ "cs.CV" ]
false
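The quality-aware weighting can be sketched on top of a standard margin-softmax loss. In this hypothetical PyTorch version, the feature norm acts as the quality proxy: samples below a quality floor are ignored as unrecognizable, and the remaining low-quality samples are up-weighted. The margin form, floor, and weighting function are illustrative assumptions rather than QAFace's exact formulation.

```python
import torch
import torch.nn.functional as F

def quality_aware_loss(features, centers, labels, s=64.0, m=0.5, floor=0.3):
    # Feature magnitude as a quality proxy: drop "unrecognizable" samples,
    # emphasize the remaining (recognizable) low-quality ones.
    q = features.norm(dim=1).detach()
    q = q / (q.mean() + 1e-8)                         # relative quality
    w = torch.where(q < floor, torch.zeros_like(q), 1.0 / q.clamp(min=0.5))
    cos = F.normalize(features) @ F.normalize(centers).t()
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    onehot = F.one_hot(labels, cos.size(1)).bool()
    logits = s * torch.where(onehot, torch.cos(theta + m), cos)
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (w * per_sample).mean()

feats = torch.randn(8, 512, requires_grad=True)
centers = torch.randn(100, 512, requires_grad=True)   # Softmax class centers
loss = quality_aware_loss(feats, centers, torch.randint(0, 100, (8,)))
loss.backward()
```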
2306.04451
2023-06-06T07:58:59Z
Referring Expression Comprehension Using Language Adaptive Inference
[ "Wei Su", "Peihan Miao", "Huanzhang Dou", "Yongjian Fu", "Xi Li" ]
Different from universal object detection, referring expression comprehension (REC) aims to locate specific objects referred to by natural language expressions. The expression provides high-level concepts of relevant visual and contextual patterns, which vary significantly with different expressions and account for only a few of those encoded in the REC model. This leads us to a question: do we really need the entire network with a fixed structure for various referring expressions? Ideally, given an expression, only expression-relevant components of the REC model are required. These components should be small in number as each expression only contains very few visual and contextual clues. This paper explores the adaptation between expressions and REC models for dynamic inference. Concretely, we propose a neat yet efficient framework named Language Adaptive Dynamic Subnets (LADS), which can extract language-adaptive subnets from the REC model conditioned on the referring expressions. By using the compact subnet, the inference can be more economical and efficient. Extensive experiments on RefCOCO, RefCOCO+, RefCOCOg, and Referit show that the proposed method achieves faster inference speed and higher accuracy against state-of-the-art approaches.
[ "cs.CV" ]
false
2306.04650
2023-06-06T07:24:53Z
GaitMPL: Gait Recognition with Memory-Augmented Progressive Learning
[ "Huanzhang Dou", "Pengyi Zhang", "Yuhan Zhao", "Lin Dong", "Zequn Qin", "Xi Li" ]
Gait recognition aims at identifying the pedestrians at a long distance by their biometric gait patterns. It is inherently challenging due to the various covariates and the properties of silhouettes (textureless and colorless), which result in two kinds of pair-wise hard samples: the same pedestrian could have distinct silhouettes (intra-class diversity) and different pedestrians could have similar silhouettes (inter-class similarity). In this work, we propose to solve the hard sample issue with a Memory-augmented Progressive Learning network (GaitMPL), including Dynamic Reweighting Progressive Learning module (DRPL) and Global Structure-Aligned Memory bank (GSAM). Specifically, DRPL reduces the learning difficulty of hard samples by easy-to-hard progressive learning. GSAM further augments DRPL with a structure-aligned memory mechanism, which maintains and models the feature distribution of each ID. Experiments on two commonly used datasets, CASIA-B and OU-MVLP, demonstrate the effectiveness of GaitMPL. On CASIA-B, we achieve the state-of-the-art performance, i.e., 88.0% on the most challenging condition (Clothing) and 93.3% on the average condition, which outperforms the other methods by at least 3.8% and 1.4%, respectively.
[ "cs.CV" ]
false
2306.04652
2023-06-06T08:26:22Z
Language Adaptive Weight Generation for Multi-task Visual Grounding
[ "Wei Su", "Peihan Miao", "Huanzhang Dou", "Gaoang Wang", "Liang Qiao", "Zheyang Li", "Xi Li" ]
Despite the impressive performance in visual grounding, prevailing approaches usually exploit the visual backbone in a passive way, i.e., the visual backbone extracts features with fixed weights without expression-related hints. The passive perception may lead to mismatches (e.g., redundant and missing features), limiting further performance improvement. Ideally, the visual backbone should actively extract visual features, since the expressions already provide the blueprint of the desired visual features. Active perception can take expressions as priors to extract relevant visual features, which can effectively alleviate the mismatches. Inspired by this, we propose an active perception Visual Grounding framework based on Language Adaptive Weights, called VG-LAW. The visual backbone serves as an expression-specific feature extractor through dynamic weights generated for various expressions. Benefiting from the specific and relevant visual features extracted from the language-aware visual backbone, VG-LAW does not require additional modules for cross-modal interaction. Along with a neat multi-task head, VG-LAW can jointly handle referring expression comprehension and segmentation. Extensive experiments on four representative datasets, i.e., RefCOCO, RefCOCO+, RefCOCOg, and ReferItGame, validate the effectiveness of the proposed framework and demonstrate state-of-the-art performance.
[ "cs.CV" ]
false
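A minimal sketch of the weight-generation idea from the VG-LAW abstract above: a hypernetwork maps the expression embedding to convolution weights, so the backbone extracts expression-specific features. The class name and dimensions are illustrative; the paper's generator and backbone details differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LangAdaptiveConv(nn.Module):
    """Generate conv weights from a language embedding (hypernetwork style).
    Handles one expression at a time; batching different expressions would
    require grouped convolutions."""
    def __init__(self, lang_dim, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.gen = nn.Linear(lang_dim, out_ch * in_ch * k * k)

    def forward(self, x, lang):          # x: (1, in_ch, H, W), lang: (lang_dim,)
        w = self.gen(lang).view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)

# usage sketch with made-up dimensions
layer = LangAdaptiveConv(lang_dim=256, in_ch=64, out_ch=64)
out = layer(torch.randn(1, 64, 32, 32), torch.randn(256))
```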
2306.04654
2023-06-06T15:04:45Z
DenseDINO: Boosting Dense Self-Supervised Learning with Token-Based Point-Level Consistency
[ "Yike Yuan", "Xinghe Fu", "Yunlong Yu", "Xi Li" ]
In this paper, we propose a simple yet effective transformer framework for self-supervised learning called DenseDINO to learn dense visual representations. To exploit the spatial information that dense prediction tasks require but existing self-supervised transformers neglect, we introduce point-level supervision across views in a novel token-based way. Specifically, DenseDINO introduces some extra input tokens called reference tokens to match the point-level features with the position prior. With the reference tokens, the model can maintain spatial consistency and deal with multi-object complex scene images, thus generalizing better on dense prediction tasks. Compared with the vanilla DINO, our approach obtains competitive performance when evaluated on classification in ImageNet and achieves a large margin (+7.2% mIoU) improvement in semantic segmentation on PascalVOC under the linear probing protocol for segmentation.
[ "cs.CV" ]
false
2306.03331
2023-06-06T01:02:31Z
A Robust Likelihood Model for Novelty Detection
[ "Ranya Almohsen", "Shivang Patel", "Donald A. Adjeroh", "Gianfranco Doretto" ]
Current approaches to novelty or anomaly detection are based on deep neural networks. Despite their effectiveness, neural networks are also vulnerable to imperceptible deformations of the input data. This is a serious issue in critical applications, or when data alterations are generated by an adversarial attack. While this is a known problem that has been studied in recent years for the case of supervised learning, the case of novelty detection has received very limited attention. Indeed, in this latter setting the learning is typically unsupervised because outlier data is not available during training, and new approaches for this case need to be investigated. We propose a new prior that aims at learning a robust likelihood for the novelty test, as a defense against attacks. We also integrate the same prior with a state-of-the-art novelty detection approach. Because of the geometric properties of that approach, the resulting robust training is computationally very efficient. An initial evaluation of the method indicates that it is effective at improving performance with respect to the standard models in the absence and presence of attacks.
[ "cs.CV", "cs.LG" ]
false
2306.03375
2023-06-06T03:29:47Z
Identifying Shared Decodable Concepts in the Human Brain Using Image-Language Foundation Models
[ "Cory Efird", "Alex Murphy", "Joel Zylberberg", "Alona Fyshe" ]
We introduce a method that takes advantage of high-quality pretrained multimodal representations to explore fine-grained semantic networks in the human brain. Previous studies have documented evidence of functional localization in the brain, with different anatomical regions preferentially activating for different types of sensory input. Many such localized structures are known, including the fusiform face area and parahippocampal place area. This raises the question of whether additional brain regions (or conjunctions of brain regions) are also specialized for other important semantic concepts. To identify such brain regions, we developed a data-driven approach to uncover visual concepts that are decodable from a massive functional magnetic resonance imaging (fMRI) dataset. Our analysis is broadly split into three sections. First, a fully connected neural network is trained to map brain responses to the outputs of an image-language foundation model, CLIP (Radford et al., 2021). Subsequently, a contrastive-learning dimensionality reduction method reveals the brain-decodable components of CLIP space. In the final section of our analysis, we localize shared decodable concepts in the brain using a voxel-masking optimization method to produce a shared decodable concept (SDC) space. The accuracy of our procedure is validated by comparing it to previous localization experiments that identify regions for faces, bodies, and places. In addition to these concepts, whose corresponding brain regions were already known, we localize novel concept representations which are shared across participants to other areas of the human brain. We also demonstrate how this method can be used to inspect fine-grained semantic networks for individual participants. We envisage that this extensible method can also be adapted to explore other questions at the intersection of AI and neuroscience.
[ "cs.AI", "cs.CV" ]
false
2306.03445
2023-06-06T06:53:05Z
MetaGait: Learning to Learn an Omni Sample Adaptive Representation for Gait Recognition
[ "Huanzhang Dou", "Pengyi Zhang", "Wei Su", "Yunlong Yu", "Xi Li" ]
Gait recognition, which aims at identifying individuals by their walking patterns, has recently drawn increasing research attention. However, gait recognition still suffers from the conflicts between the limited binary visual clues of the silhouette and numerous covariates with diverse scales, which brings challenges to the model's adaptiveness. In this paper, we address this conflict by developing a novel MetaGait that learns to learn an omni sample adaptive representation. Towards this goal, MetaGait injects meta-knowledge, which could guide the model to perceive sample-specific properties, into the calibration network of the attention mechanism to improve the adaptiveness from the omni-scale, omni-dimension, and omni-process perspectives. Specifically, we leverage the meta-knowledge across the entire process, where Meta Triple Attention and Meta Temporal Pooling are presented respectively to adaptively capture omni-scale dependency from spatial/channel/temporal dimensions simultaneously and to adaptively aggregate temporal information through integrating the merits of three complementary temporal aggregation methods. Extensive experiments demonstrate the state-of-the-art performance of the proposed MetaGait. On CASIA-B, we achieve rank-1 accuracy of 98.7%, 96.0%, and 89.3% under three conditions, respectively. On OU-MVLP, we achieve rank-1 accuracy of 92.4%.
[ "cs.CV", "cs.AI" ]
false
2306.03476
2023-06-06T07:50:46Z
Putting Humans in the Image Captioning Loop
[ "Aliki Anagnostopoulou", "Mareike Hartmann", "Daniel Sonntag" ]
Image Captioning (IC) models can highly benefit from human feedback in the training process, especially in cases where data is limited. We present work-in-progress on adapting an IC system to integrate human feedback, with the goal to make it easily adaptable to user-specific data. Our approach builds on a base IC model pre-trained on the MS COCO dataset, which generates captions for unseen images. The user will then be able to offer feedback on the image and the generated/predicted caption, which will be augmented to create additional training instances for the adaptation of the model. The additional instances are integrated into the model using step-wise updates, and a sparse memory replay component is used to avoid catastrophic forgetting. We hope that this approach, while leading to improved results, will also result in customizable IC models.
[ "cs.CL", "cs.CV" ]
false
2306.03491
2023-06-06T08:16:16Z
SciCap+: A Knowledge Augmented Dataset to Study the Challenges of Scientific Figure Captioning
[ "Zhishen Yang", "Raj Dabre", "Hideki Tanaka", "Naoaki Okazaki" ]
In scholarly documents, figures provide a straightforward way of communicating scientific findings to readers. Automating figure caption generation helps move models' understanding of scientific documents beyond text and will help authors write informative captions that facilitate communicating scientific findings. Unlike previous studies, we reframe scientific figure captioning as a knowledge-augmented image captioning task in which models need to utilize knowledge embedded across modalities for caption generation. To this end, we extended the large-scale SciCap dataset~\cite{hsu-etal-2021-scicap-generating} to SciCap+ which includes mention-paragraphs (paragraphs mentioning figures) and OCR tokens. Then, we conduct experiments with the M4C-Captioner (a multimodal transformer-based model with a pointer network) as a baseline for our study. Our results indicate that mention-paragraphs serve as additional context knowledge, which significantly boosts the automatic standard image caption evaluation scores compared to the figure-only baselines. Human evaluations further reveal the challenges of generating figure captions that are informative to readers. The code and SciCap+ dataset will be publicly available at https://github.com/ZhishenYang/scientific_figure_captioning_dataset
[ "cs.CV", "cs.CL" ]
false
2306.03494
2023-06-06T08:22:47Z
LegoNet: Alternating Model Blocks for Medical Image Segmentation
[ "Ikboljon Sobirov", "Cheng Xie", "Muhammad Siddique", "Parijat Patel", "Kenneth Chan", "Thomas Halborg", "Christos Kotanidis", "Zarqiash Fatima", "Henry West", "Keith Channon", "Stefan Neubauer", "Charalambos Antoniades", "Mohammad Yaqub" ]
Since the emergence of convolutional neural networks (CNNs), and later vision transformers (ViTs), the common paradigm for model development has always been using a set of identical block types with varying parameters/hyper-parameters. To leverage the benefits of different architectural designs (e.g. CNNs and ViTs), we propose to alternate structurally different types of blocks to generate a new architecture, mimicking how Lego blocks can be assembled together. Using two CNN-based and one SwinViT-based blocks, we investigate three variations to the so-called LegoNet that applies the new concept of block alternation for the segmentation task in medical imaging. We also study a new clinical problem which has not been investigated before, namely the right internal mammary artery (RIMA) and perivascular space segmentation from computed tomography angiography (CTA) which has demonstrated a prognostic value to major cardiovascular outcomes. We compare the model performance against popular CNN and ViT architectures using two large datasets (e.g. achieving 0.749 dice similarity coefficient (DSC) on the larger dataset). We evaluate the performance of the model on three external testing cohorts as well, where an expert clinician made corrections to the model segmented results (DSC>0.90 for the three cohorts). To assess our proposed model for suitability in clinical use, we perform intra- and inter-observer variability analysis. Finally, we investigate a joint self-supervised learning approach to assess its impact on model performance. The code and the pretrained model weights will be available upon acceptance.
[ "eess.IV", "cs.CV" ]
false
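The block-alternation concept from the LegoNet abstract above can be illustrated with a minimal sketch that interleaves a convolutional block with a transformer-style block; the specific blocks below are stand-ins, not LegoNet's exact CNN and SwinViT blocks.

```python
import torch.nn as nn

def conv_block(c):
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU())

class AttnBlock(nn.Module):
    """A simple self-attention block operating on flattened feature maps."""
    def __init__(self, c, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.norm = nn.LayerNorm(c)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        t = x.flatten(2).transpose(1, 2)        # (B, HW, C) token sequence
        t = self.norm(t + self.attn(t, t, t)[0])
        return t.transpose(1, 2).reshape(B, C, H, W)

def legonet_like(c=64, depth=6):
    # alternate structurally different block types, Lego-style
    blocks = [conv_block(c) if i % 2 == 0 else AttnBlock(c) for i in range(depth)]
    return nn.Sequential(*blocks)
```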
2306.03500
2023-06-06T08:38:10Z
Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory
[ "Aliki Anagnostopoulou", "Mareike Hartmann", "Daniel Sonntag" ]
Interactive machine learning (IML) is a beneficial learning paradigm in cases of limited data availability, as human feedback is incrementally integrated into the training process. In this paper, we present an IML pipeline for image captioning which allows us to incrementally adapt a pre-trained image captioning model to a new data distribution based on user input. In order to incorporate user input into the model, we explore the use of a combination of simple data augmentation methods to obtain larger data batches for each newly annotated data instance and implement continual learning methods to prevent catastrophic forgetting from repeated updates. For our experiments, we split a domain-specific image captioning dataset, namely VizWiz, into non-overlapping parts to simulate an incremental input flow for continually adapting the model to new data. We find that data augmentation worsens results, whereas episodic memory is an effective strategy for retaining knowledge from previously seen clusters, even when relatively small amounts of data are available.
[ "cs.CL", "cs.CV" ]
false
2306.03511
2023-06-06T08:56:58Z
Curriculum-Based Augmented Fourier Domain Adaptation for Robust Medical Image Segmentation
[ "An Wang", "Mobarakol Islam", "Mengya Xu", "Hongliang Ren" ]
Accurate and robust medical image segmentation is fundamental and crucial for enhancing the autonomy of computer-aided diagnosis and intervention systems. Medical data collection normally involves different scanners, protocols, and populations, making domain adaptation (DA) a highly demanding research field to alleviate model degradation in the deployment site. To preserve the model performance across multiple testing domains, this work proposes the Curriculum-based Augmented Fourier Domain Adaptation (Curri-AFDA) for robust medical image segmentation. In particular, our curriculum learning strategy is based on the causal relationship of a model under different levels of data shift in the deployment phase, where the higher the shift, the harder it is to recognize the variance. Considering this, we progressively introduce more amplitude information from the target domain to the source domain in the frequency space during the curriculum-style training to smoothly schedule the semantic knowledge transfer in an easier-to-harder manner. Besides, we incorporate the training-time chained augmentation mixing to help expand the data distributions while preserving the domain-invariant semantics, which is beneficial for the acquired model to be more robust and generalize better to unseen domains. Extensive experiments on two segmentation tasks of Retina and Nuclei collected from multiple sites and scanners suggest that our proposed method yields superior adaptation and generalization performance. Meanwhile, our approach proves to be more robust under various corruption types and increasing severity levels. In addition, we show our method is also beneficial in the domain-adaptive classification task with skin lesion datasets. The code is available at https://github.com/lofrienger/Curri-AFDA.
[ "eess.IV", "cs.CV" ]
false
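A minimal sketch of the frequency-space amplitude transfer underlying the Curri-AFDA abstract above: the low-frequency amplitude of a target image is blended into a source image while the source phase is kept. Scheduling `lam` (or `beta`) from small to large over training would give the easier-to-harder curriculum; the exact schedule and augmentation mixing follow the paper's code, not this sketch.

```python
import numpy as np

def fourier_amplitude_mix(src, tgt, beta=0.1, lam=0.5):
    """Blend the low-frequency amplitude of `tgt` into `src` (FDA-style),
    keeping the phase of `src`. Inputs are (H, W) or (H, W, C) arrays."""
    fs = np.fft.fft2(src, axes=(0, 1))
    ft = np.fft.fft2(tgt, axes=(0, 1))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    # shift so low frequencies sit at the center of the spectrum
    amp_s = np.fft.fftshift(amp_s, axes=(0, 1))
    amp_t = np.fft.fftshift(amp_t, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(min(h, w) * beta)               # size of the low-frequency window
    ch, cw = h // 2, w // 2
    amp_s[ch-b:ch+b, cw-b:cw+b] = (
        lam * amp_t[ch-b:ch+b, cw-b:cw+b]
        + (1 - lam) * amp_s[ch-b:ch+b, cw-b:cw+b]
    )
    amp_s = np.fft.ifftshift(amp_s, axes=(0, 1))
    mixed = np.fft.ifft2(amp_s * np.exp(1j * pha_s), axes=(0, 1))
    return np.real(mixed)
```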
2306.03537
2023-06-06T09:35:45Z
Real-Time Onboard Object Detection for Augmented Reality: Enhancing Head-Mounted Display with YOLOv8
[ "Mikołaj Łysakowski", "Kamil Żywanowski", "Adam Banaszczyk", "Michał R. Nowicki", "Piotr Skrzypczyński", "Sławomir K. Tadeja" ]
This paper introduces a software architecture for real-time object detection using machine learning (ML) in an augmented reality (AR) environment. Our approach uses the recent state-of-the-art YOLOv8 network that runs onboard on the Microsoft HoloLens 2 head-mounted display (HMD). The primary motivation behind this research is to enable the application of advanced ML models for enhanced perception and situational awareness with a wearable, hands-free AR platform. We show the image processing pipeline for the YOLOv8 model and the techniques used to make it real-time on the resource-limited edge computing platform of the headset. The experimental results demonstrate that our solution achieves real-time processing without needing to offload tasks to the cloud or any other external servers while retaining satisfactory accuracy regarding the usual mAP metric and measured qualitative performance.
[ "cs.CV", "cs.HC" ]
false
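For reference, a minimal desktop-side sketch of YOLOv8 inference with the ultralytics package; the HoloLens 2 capture, onboard optimizations, and AR rendering described in the abstract above are not reproduced here, and the file names are placeholders.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                           # smallest variant, suited to edge devices
results = model("frame.jpg", imgsz=640, conf=0.25)   # single-frame inference
for r in results:
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()        # pixel-space bounding box
        print(int(box.cls), float(box.conf), (x1, y1, x2, y2))
```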
2306.03641
2023-06-06T12:52:07Z
Single-Shot Global Localization via Graph-Theoretic Correspondence Matching
[ "Shigemichi Matsuzaki", "Kenji Koide", "Shuji Oishi", "Masashi Yokozuka", "Atsuhiko Banno" ]
This paper describes a method of global localization based on graph-theoretic association of instances between a query and the prior map. The proposed framework employs correspondence matching based on the maximum clique problem (MCP). The framework is potentially applicable to other map and/or query modalities thanks to the graph-based abstraction of the problem, whereas many existing global localization methods rely on the query and the dataset being in the same modality. We implement it with a semantically labeled 3D point cloud map, and a semantic segmentation image as a query. Leveraging the graph-theoretic framework, the proposed method realizes global localization exploiting only the map and the query. The method shows promising results on multiple large-scale simulated maps of urban scenes.
[ "cs.RO", "cs.CV" ]
false
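A minimal sketch of MCP-based correspondence matching as described above, assuming candidate query-map instance pairs and a user-supplied geometric-consistency test; networkx's exact maximum-clique solver stands in for whatever MCP solver the paper uses.

```python
import networkx as nx

def mcp_correspondences(candidates, consistent):
    """candidates: list of (query_instance, map_instance) pairs.
    consistent(a, b) -> bool: whether two pairs are geometrically compatible.
    Returns the largest mutually consistent set of correspondences."""
    G = nx.Graph()
    G.add_nodes_from(range(len(candidates)))
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            if consistent(candidates[i], candidates[j]):
                G.add_edge(i, j)
    clique, _ = nx.max_weight_clique(G, weight=None)  # exact maximum-cardinality clique
    return [candidates[i] for i in clique]
```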
2306.03648
2023-06-06T13:04:05Z
Supervised Knowledge May Hurt Novel Class Discovery Performance
[ "Ziyun Li", "Jona Otholt", "Ben Dai", "Di Hu", "Christoph Meinel", "Haojin Yang" ]
Novel class discovery (NCD) aims to infer novel categories in an unlabeled dataset by leveraging prior knowledge of a labeled set comprising disjoint but related classes. Given that most existing literature focuses primarily on utilizing supervised knowledge from a labeled set at the methodology level, this paper considers the question: Is supervised knowledge always helpful at different levels of semantic relevance? To proceed, we first establish a novel metric, so-called transfer flow, to measure the semantic similarity between labeled/unlabeled datasets. To show the validity of the proposed metric, we build up a large-scale benchmark with various degrees of semantic similarities between labeled/unlabeled datasets on ImageNet by leveraging its hierarchical class structure. The results based on the proposed benchmark show that the proposed transfer flow is in line with the hierarchical class structure; and that NCD performance is consistent with the semantic similarities (measured by the proposed metric). Next, by using the proposed transfer flow, we conduct various empirical experiments with different levels of semantic similarity, showing that supervised knowledge may hurt NCD performance. Specifically, using supervised information from a low-similarity labeled set may lead to a suboptimal result as compared to using pure self-supervised knowledge. These results reveal the inadequacy of the existing NCD literature which usually assumes that supervised knowledge is beneficial. Finally, we develop a pseudo-version of the transfer flow as a practical reference to decide if supervised knowledge should be used in NCD. Its effectiveness is supported by our empirical studies, which show that the pseudo transfer flow (with or without supervised knowledge) is consistent with the corresponding accuracy based on various datasets. Code is released at https://github.com/J-L-O/SK-Hurt-NCD
[ "cs.LG", "cs.CV" ]
false
2306.03660
2023-06-06T13:23:42Z
PQM: A Point Quality Evaluation Metric for Dense Maps
[ "Yash Turkar", "Pranay Meshram", "Charuvahan Adhivarahan", "Karthik Dantu" ]
LiDAR-based mapping and reconstruction are important for various applications, but evaluating the quality of the dense maps they produce is challenging. The current methods have limitations, including the inability to capture completeness, structural information, and local variations in error. In this paper, we propose a novel point quality evaluation metric (PQM) that consists of four sub-metrics to provide a more comprehensive evaluation of point cloud quality. The completeness sub-metric evaluates the proportion of missing data, the artifact score sub-metric recognizes and characterizes artifacts, the accuracy sub-metric measures registration accuracy, and the resolution sub-metric quantifies point cloud density. Through an ablation study using a prototype dataset, we demonstrate the effectiveness of each of the sub-metrics and compare them to popular point cloud distance measures. Using three LiDAR SLAM systems to generate maps, we evaluate their output map quality and demonstrate the metric's robustness to noise and artifacts. Our implementation of PQM, our datasets, and detailed documentation on how to integrate the metric with a custom dense mapping pipeline can be found at github.com/droneslab/pqm
[ "cs.CV", "eess.IV" ]
false
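To make the sub-metric idea above concrete, here is a minimal sketch of accuracy- and completeness-style scores from nearest-neighbor distances; the threshold and the exact definitions (including the artifact and resolution sub-metrics) follow the paper and its repository, not this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(map_pts, gt_pts, tau=0.05):
    """map_pts, gt_pts: (N, 3) arrays. Returns a mean registration error
    (lower is better) and the fraction of reference points covered."""
    d_map_to_gt, _ = cKDTree(gt_pts).query(map_pts)   # error of each map point
    d_gt_to_map, _ = cKDTree(map_pts).query(gt_pts)   # coverage of the reference
    accuracy = float(np.mean(d_map_to_gt))
    completeness = float(np.mean(d_gt_to_map < tau))  # points within tau meters
    return accuracy, completeness
```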
2306.03730
2023-06-06T14:48:50Z
Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation
[ "Qisheng He", "Nicholas Summerfield", "Ming Dong", "Carri Glide-Hurst" ]
Medical image segmentation of tumors and organs at risk is a time-consuming yet critical process in the clinic that utilizes multi-modality imaging (e.g., different acquisitions, data types, and sequences) to increase segmentation precision. In this paper, we propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS), to investigate the impact of input modalities on medical image segmentation. MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities. Thus, it provides a versatile and efficient approach to handle limited modalities during testing. Our extensive experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance compared to current state-of-the-art methods. Furthermore, using MAG-MS, we provide valuable insight and guidance on selecting input modalities for medical image segmentation tasks.
[ "eess.IV", "cs.CV" ]
false
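A minimal sketch of the self-distillation idea in the MAG-MS abstract above, assuming a fused multi-modal teacher representation supervising per-modality students via a KL term; the temperature and loss form are illustrative, not the paper's exact objective.

```python
import torch.nn.functional as F

def self_distillation_loss(teacher_feat, student_feats, T=2.0):
    """teacher_feat: (B, d) from the fused modalities (gradient detached).
    student_feats: list of (B, d), one per individual modality."""
    t = F.log_softmax(teacher_feat.detach() / T, dim=1)
    loss = 0.0
    for s in student_feats:
        # KL(teacher || student) with both sides as log-probabilities
        loss = loss + F.kl_div(F.log_softmax(s / T, dim=1), t,
                               reduction="batchmean", log_target=True)
    return loss / len(student_feats)
```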
2306.03779
2023-06-06T15:34:45Z
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
[ "Drew Linsley", "Ivan F. Rodriguez", "Thomas Fel", "Michael Arcaro", "Saloni Sharma", "Margaret Livingstone", "Thomas Serre" ]
One of the most impactful findings in computational neuroscience over the past decade is that the object recognition accuracy of deep neural networks (DNNs) correlates with their ability to predict neural responses to natural images in the inferotemporal (IT) cortex. This discovery supported the long-held theory that object recognition is a core objective of the visual cortex, and suggested that more accurate DNNs would serve as better models of IT neuron responses to images. Since then, deep learning has undergone a revolution of scale: billion parameter-scale DNNs trained on billions of images are rivaling or outperforming humans at visual tasks including object recognition. Have today's DNNs become more accurate at predicting IT neuron responses to images as they have grown more accurate at object recognition? Surprisingly, across three independent experiments, we find this is not the case. DNNs have become progressively worse models of IT as their accuracy has increased on ImageNet. To understand why DNNs experience this trade-off and evaluate if they are still an appropriate paradigm for modeling the visual system, we turn to recordings of IT that capture spatially resolved maps of neuronal activity elicited by natural images. These neuronal activity maps reveal that DNNs trained on ImageNet learn to rely on different visual features than those encoded by IT and that this problem worsens as their accuracy increases. We successfully resolved this issue with the neural harmonizer, a plug-and-play training routine for DNNs that aligns their learned representations with humans. Our results suggest that harmonized DNNs break the trade-off between ImageNet accuracy and neural prediction accuracy that assails current DNNs and offer a path to more accurate models of biological vision.
[ "cs.CV", "cs.AI" ]
false
2306.03802
2023-06-06T15:45:53Z
Learning to Ground Instructional Articles in Videos through Narrations
[ "Effrosyni Mavroudi", "Triantafyllos Afouras", "Lorenzo Torresani" ]
In this paper we present an approach for localizing steps of procedural activities in narrated how-to videos. To deal with the scarcity of labeled data at scale, we source the step descriptions from a language knowledge base (wikiHow) containing instructional articles for a large variety of procedural tasks. Without any form of manual supervision, our model learns to temporally ground the steps of procedural articles in how-to videos by matching three modalities: frames, narrations, and step descriptions. Specifically, our method aligns steps to video by fusing information from two distinct pathways: i) {\em direct} alignment of step descriptions to frames, ii) {\em indirect} alignment obtained by composing steps-to-narrations with narrations-to-video correspondences. Notably, our approach performs global temporal grounding of all steps in an article at once by exploiting order information, and is trained with step pseudo-labels which are iteratively refined and aggressively filtered. In order to validate our model we introduce a new evaluation benchmark -- HT-Step -- obtained by manually annotating a 124-hour subset of HowTo100M\footnote{A test server is accessible at \url{https://eval.ai/web/challenges/challenge-page/2082}.} with steps sourced from wikiHow articles. Experiments on this benchmark as well as zero-shot evaluations on CrossTask demonstrate that our multi-modality alignment yields dramatic gains over several baselines and prior works. Finally, we show that our inner module for matching narration-to-video outperforms by a large margin the state of the art on the HTM-Align narration-video alignment benchmark.
[ "cs.CV", "cs.AI" ]
true
2306.03810
2023-06-06T15:52:55Z
X-Align++: cross-modal cross-view alignment for Bird's-eye-view segmentation
[ "Shubhankar Borse", "Senthil Yogamani", "Marvin Klingner", "Varun Ravi", "Hong Cai", "Abdulaziz Almuzairee", "Fatih Porikli" ]
Bird's-eye-view (BEV) grid is a typical representation of the perception of road components, e.g., drivable area, in autonomous driving. Most existing approaches rely on cameras only to perform segmentation in BEV space, which is fundamentally constrained by the absence of reliable depth information. The latest works leverage both camera and LiDAR modalities but suboptimally fuse their features using simple, concatenation-based mechanisms. In this paper, we address these problems by enhancing the alignment of the unimodal features in order to aid feature fusion, as well as enhancing the alignment between the cameras' perspective view (PV) and BEV representations. We propose X-Align, a novel end-to-end cross-modal and cross-view learning framework for BEV segmentation consisting of the following components: (i) a novel Cross-Modal Feature Alignment (X-FA) loss, (ii) an attention-based Cross-Modal Feature Fusion (X-FF) module to align multi-modal BEV features implicitly, and (iii) an auxiliary PV segmentation branch with Cross-View Segmentation Alignment (X-SA) losses to improve the PV-to-BEV transformation. We evaluate our proposed method across two commonly used benchmark datasets, i.e., nuScenes and KITTI-360. Notably, X-Align significantly outperforms the state-of-the-art by 3 absolute mIoU points on nuScenes. We also provide extensive ablation studies to demonstrate the effectiveness of the individual components.
[ "cs.CV", "cs.RO" ]
false
2306.03983
2023-06-06T19:36:11Z
Unsupervised Iterative U-Net with an Internal Guidance Layer for Vertebrae Contrast Enhancement in Chest X-Ray Images
[ "Ella Eidlin", "Assaf Hoogi", "Nathan S. Netanyahu" ]
X-ray imaging is a fundamental clinical tool for screening and diagnosing various diseases. However, the spatial resolution of radiographs is often limited, making it challenging to diagnose small image details and leading to difficulties in identifying vertebrae anomalies at an early stage in chest radiographs. To address this limitation, we propose a novel and robust approach to significantly improve the quality of X-ray images by iteratively training a deep neural network. Our framework includes an embedded internal guidance layer that enhances the fine structures of spinal vertebrae in chest X-ray images through fully unsupervised training, utilizing an iterative procedure that employs the same network architecture in each enhancement phase. Additionally, we have designed an optimized loss function that accurately identifies object boundaries and enhances spinal features, thereby further enhancing the quality of the images. Experimental results demonstrate that our proposed method surpasses existing detail enhancement methods in terms of BRISQUE scores, and is comparable in terms of LPC-SI. Furthermore, our approach exhibits superior performance in restoring hidden fine structures, as evidenced by our qualitative results. This innovative approach has the potential to significantly enhance the diagnostic accuracy and early detection of diseases, making it a promising advancement in X-ray imaging technology.
[ "eess.IV", "cs.CV", "I.4; I.4.3" ]
false
2306.03355
2023-06-06T02:13:27Z
BatchSampler: Sampling Mini-Batches for Contrastive Learning in Vision, Language, and Graphs
[ "Zhen Yang", "Tinglin Huang", "Ming Ding", "Yuxiao Dong", "Rex Ying", "Yukuo Cen", "Yangliao Geng", "Jie Tang" ]
In-Batch contrastive learning is a state-of-the-art self-supervised method that brings semantically-similar instances close while pushing dissimilar instances apart within a mini-batch. Its key to success is the negative sharing strategy, in which every instance serves as a negative for the others within the mini-batch. Recent studies aim to improve performance by sampling hard negatives \textit{within the current mini-batch}, whose quality is bounded by the mini-batch itself. In this work, we propose to improve contrastive learning by sampling mini-batches from the input data. We present BatchSampler\footnote{The code is available at \url{https://github.com/THUDM/BatchSampler}} to sample mini-batches of hard-to-distinguish (i.e., hard and true negatives to each other) instances. To make each mini-batch have fewer false negatives, we design the proximity graph of randomly-selected instances. To form the mini-batch, we leverage random walk with restart on the proximity graph to help sample hard-to-distinguish instances. BatchSampler is a simple and general technique that can be directly plugged into existing contrastive learning models in vision, language, and graphs. Extensive experiments on datasets of three modalities show that BatchSampler can consistently improve the performance of powerful contrastive models, as shown by significant improvements of SimCLR on ImageNet-100, SimCSE on STS (language), and GraphCL and MVGRL on graph datasets.
[ "cs.LG", "cs.CL", "cs.CV" ]
false
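A minimal sketch of mini-batch construction by random walk with restart on a proximity graph, as described in the BatchSampler abstract above; the graph construction is omitted and the restart probability is an illustrative placeholder, and the walk assumes the anchor's component has at least `batch_size` nodes.

```python
import random

def rwr_batch(adj, start, batch_size, restart_p=0.15):
    """Collect a mini-batch of mutually hard-to-distinguish instances by a
    random walk with restart on a proximity graph (adjacency dict)."""
    assert batch_size <= len(adj)
    batch, cur = {start}, start
    while len(batch) < batch_size:
        if random.random() < restart_p or not adj[cur]:
            cur = start                    # restart at the anchor instance
        else:
            cur = random.choice(adj[cur])  # hop to a nearby (hard) instance
        batch.add(cur)
    return list(batch)
```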
2306.03400
2023-06-06T04:30:18Z
G-CAME: Gaussian-Class Activation Mapping Explainer for Object Detectors
[ "Quoc Khanh Nguyen", "Truong Thanh Hung Nguyen", "Vo Thanh Khang Nguyen", "Van Binh Truong", "Quoc Hung Cao" ]
Nowadays, deep neural networks for object detection in images are very prevalent. However, due to the complexity of these networks, users find it hard to understand why these objects are detected by models. We propose Gaussian Class Activation Mapping Explainer (G-CAME), which generates a saliency map as the explanation for object detection models. G-CAME can be considered a CAM-based method that uses the activation maps of selected layers combined with the Gaussian kernel to highlight the important regions in the image for the predicted box. Compared with other region-based methods, G-CAME overcomes their time constraints, as it takes only a very short time to explain an object. We evaluated our method qualitatively and quantitatively with YOLOX on the MS-COCO 2017 dataset and provide guidance on applying G-CAME to the two-stage Faster-RCNN model.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
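A minimal sketch of the Gaussian-masking step suggested by the G-CAME abstract above: a CAM-style saliency map is modulated by a Gaussian centered on the predicted box, so the explanation focuses on a single detected object; the kernel scale is an illustrative choice.

```python
import numpy as np

def gaussian_masked_cam(cam, box, sigma_scale=0.5):
    """cam: (H, W) activation-based saliency; box: (x1, y1, x2, y2) in pixels."""
    H, W = cam.shape
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0          # box center
    sx, sy = sigma_scale * (x2 - x1), sigma_scale * (y2 - y1)
    ys, xs = np.mgrid[0:H, 0:W]
    g = np.exp(-(((xs - cx) / (sx + 1e-6)) ** 2
                 + ((ys - cy) / (sy + 1e-6)) ** 2) / 2.0)
    return cam * g                                      # saliency focused on the box
```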
2306.03406
2023-06-06T04:57:39Z
Deep neural networks architectures from the perspective of manifold learning
[ "German Magai" ]
Despite significant advances in the field of deep learning in applications to various areas, an explanation of the learning process of neural network models remains an important open question. The purpose of this paper is a comprehensive comparison and description of neural network architectures in terms of geometry and topology. We focus on the internal representation of neural networks and on the dynamics of changes in the topology and geometry of a data manifold on different layers. In this paper, we use the concepts of topological data analysis (TDA) and persistent homological fractal dimension. We present a wide range of experiments with various datasets and configurations of convolutional neural network (CNN) architectures and Transformers in CV and NLP tasks. Our work is a contribution to the development of the important field of explainable and interpretable AI within the framework of geometrical deep learning.
[ "cs.LG", "cs.AI", "cs.CV", "math.AT" ]
false
2306.03522
2023-06-06T09:14:05Z
A Functional Data Perspective and Baseline On Multi-Layer Out-of-Distribution Detection
[ "Eduardo Dadalto", "Pierre Colombo", "Guillaume Staerman", "Nathan Noiry", "Pablo Piantanida" ]
A key feature of out-of-distribution (OOD) detection is to exploit a trained neural network by extracting statistical patterns and relationships through the multi-layer classifier to detect shifts in the expected input data distribution. Despite achieving solid results, several state-of-the-art methods rely on the penultimate or last layer outputs only, leaving behind valuable information for OOD detection. Methods that explore the multiple layers either require a special architecture or a supervised objective to do so. This work adopts an original approach based on a functional view of the network that exploits the sample's trajectories through the various layers and their statistical dependencies. It goes beyond multivariate features aggregation and introduces a baseline rooted in functional anomaly detection. In this new framework, OOD detection translates into detecting samples whose trajectories differ from the typical behavior characterized by the training set. We validate our method and empirically demonstrate its effectiveness in OOD detection compared to strong state-of-the-art baselines on computer vision benchmarks.
[ "cs.LG", "cs.CV", "stat.ML" ]
false
2306.03551
2023-06-06T09:57:04Z
Scalable Concept Extraction in Industry 4.0
[ "Andrés Felipe Posada-Moreno", "Kai Müller", "Florian Brillowski", "Friedrich Solowjow", "Thomas Gries", "Sebastian Trimpe" ]
Industry 4.0 leverages digital technologies and machine learning techniques to connect and optimize manufacturing processes. Central to this idea is the ability to transform raw data into human-understandable knowledge for reliable data-driven decision-making. Convolutional Neural Networks (CNNs) have been instrumental in processing image data, yet their ``black box'' nature complicates the understanding of their prediction process. In this context, recent advances in the field of eXplainable Artificial Intelligence (XAI) have proposed extracting and localizing concepts, i.e., the visual cues that influence the prediction process of CNNs. This paper tackles the application of concept extraction (CE) methods to Industry 4.0 scenarios. To this end, we modify a recently developed technique, ``Extracting Concepts with Local Aggregated Descriptors'' (ECLAD), improving its scalability. Specifically, we propose a novel procedure for calculating concept importance, utilizing a wrapper function designed for CNNs. This process is aimed at decreasing the number of times each image needs to be evaluated. Subsequently, we demonstrate the potential of CE methods, by applying them in three industrial use cases. We selected three representative use cases in the context of quality control for material design (tailored textiles), manufacturing (carbon fiber reinforcement), and maintenance (photovoltaic module inspection). In these examples, CE was able to successfully extract and locate concepts directly related to each task. That is, the visual cues related to each concept coincided with what human experts would use to perform the task themselves, even when the visual cues were entangled between multiple classes. Through empirical results, we show that CE can be applied for understanding CNNs in an industrial context, giving useful insights that can relate to domain knowledge.
[ "cs.AI", "cs.CV", "cs.LG" ]
false
2306.03727
2023-06-06T14:45:44Z
Towards Visual Foundational Models of Physical Scenes
[ "Chethan Parameshwara", "Alessandro Achille", "Matthew Trager", "Xiaolong Li", "Jiawei Mo", "Matthew Trager", "Ashwin Swaminathan", "CJ Taylor", "Dheera Venkatraman", "Xiaohan Fei", "Stefano Soatto" ]
We describe a first step towards learning general-purpose visual representations of physical scenes using only image prediction as a training criterion. To do so, we first define "physical scene" and show that, even though different agents may maintain different representations of the same scene, the underlying physical scene that can be inferred is unique. Then, we show that NeRFs cannot represent the physical scene, as they lack extrapolation mechanisms. Those, however, could be provided by Diffusion Models, at least in theory. To test this hypothesis empirically, NeRFs can be combined with Diffusion Models, a process we refer to as NeRF Diffusion, used as unsupervised representations of the physical scene. Our analysis is limited to visual data, without external grounding mechanisms that can be provided by independent sensory modalities.
[ "cs.CV", "cs.AI", "cs.LG", "cs.RO" ]
false
2306.03835
2023-06-06T16:25:29Z
Atrial Septal Defect Detection in Children Based on Ultrasound Video Using Multiple Instances Learning
[ "Yiman Liu", "Qiming Huang", "Xiaoxiang Han", "Tongtong Liang", "Zhifang Zhang", "Lijun Chen", "Jinfeng Wang", "Angelos Stefanidis", "Jionglong Su", "Jiangang Chen", "Qingli Li", "Yuqi Zhang" ]
Purpose: Congenital heart defect (CHD) is the most common birth defect. Transthoracic echocardiography (TTE) can provide sufficient cardiac structure information, evaluate hemodynamics and cardiac function, and is an effective method for atrial septal defect (ASD) examination. This paper aims to study a deep learning method based on cardiac ultrasound video to assist in ASD diagnosis. Materials and methods: We select two standard views, the atrial septum view (subAS) and the low parasternal four-chamber view (LPS4C), as the two views to identify ASD. We enlist data from 300 pediatric patients as part of a double-blind experiment with five-fold cross-validation to verify the performance of our model. In addition, data from 30 pediatric patients (15 positives and 15 negatives) are collected for clinician testing and compared to our model's test results (these 30 samples do not participate in model training). We propose an echocardiography video-based atrial septal defect diagnosis system. In our model, we present a block random selection, a maximal agreement decision, and a frame sampling strategy for training and testing, respectively; ResNet18 and R3D networks are used to extract the frame features and aggregate them to build a rich video-level representation. Results: We validate our model on our private dataset by five-fold cross-validation. For ASD detection, we achieve 89.33 AUC, 84.95 accuracy, 85.70 sensitivity, 81.51 specificity and 81.99 F1 score. Conclusion: The proposed model is a multiple-instance learning-based deep learning model for video atrial septal defect detection which effectively improves ASD detection accuracy compared to the performance of previous networks and clinical doctors.
[ "eess.IV", "cs.CV", "cs.LG" ]
false
2306.03934
2023-06-06T18:01:08Z
Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling
[ "Constantin Seibold", "Alexander Jaus", "Matthias A. Fink", "Moon Kim", "Simon Reiß", "Ken Herrmann", "Jens Kleesiek", "Rainer Stiefelhagen" ]
Purpose: Interpreting chest radiographs (CXR) remains challenging due to the ambiguity of overlapping structures such as the lungs, heart, and bones. To address this issue, we propose a novel method for extracting fine-grained anatomical structures in CXR using pseudo-labeling of three-dimensional computed tomography (CT) scans. Methods: We created a large-scale dataset of 10,021 thoracic CTs with 157 labels and applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels. These labels were projected onto a two-dimensional plane, similar to the CXR, allowing the training of detailed semantic segmentation models for CXR without any manual annotation effort. Results: Our resulting segmentation models demonstrated remarkable performance on CXR, with a high average model-annotator agreement between two radiologists with mIoU scores of 0.93 and 0.85 for frontal and lateral anatomy, while inter-annotator agreement remained at 0.95 and 0.83 mIoU. Our anatomical segmentations allowed for the accurate extraction of relevant explainable medical features such as the cardio-thoracic-ratio. Conclusion: Our method of volumetric pseudo-labeling paired with CT projection offers a promising approach for detailed anatomical segmentation of CXR with a high agreement with human annotators. This technique may have important clinical implications, particularly in the analysis of various thoracic pathologies.
[ "eess.IV", "cs.CV", "cs.LG", "I.4.6; I.4.7; I.4.8" ]
false
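The projection step in the abstract above can be illustrated minimally: a 3D pseudo-label volume is collapsed along the viewing axis to produce a 2D label map aligned with a radiograph-like view. A max projection is a simple stand-in for the paper's projection scheme, which must handle overlapping structures more carefully; the toy volume and class IDs below are illustrative.

```python
import numpy as np

def project_labels(volume_labels, axis=0):
    """Collapse a (D, H, W) integer label volume along `axis` to obtain a
    2D pseudo-label map (higher class IDs win where structures overlap)."""
    return volume_labels.max(axis=axis)

# usage sketch: a toy 3-class volume projected onto a frontal-like plane
vol = np.zeros((8, 16, 16), dtype=np.int64)
vol[2:4, 4:8, 4:8] = 1       # e.g., "heart"
vol[5:7, 2:6, 10:14] = 2     # e.g., "rib"
frontal = project_labels(vol, axis=0)   # (16, 16) 2D label map
```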
2306.03954
2023-06-06T18:30:51Z
Recognition of Handwritten Japanese Characters Using Ensemble of Convolutional Neural Networks
[ "Angel I. Solis", "Justin Zarkovacki", "John Ly", "Adham Atyabi" ]
The Japanese writing system is complex, with three character types of Hiragana, Katakana, and Kanji. Kanji consists of thousands of unique characters, further adding to the complexity of character identification and literature understanding. Being able to translate handwritten Japanese characters into digital text is useful for data analysis, translation, learning and cultural preservation. In this study, a machine learning approach to analyzing and recognizing handwritten Japanese characters (Kanji) is proposed. The study used an ensemble of three convolutional neural networks (CNNs) for recognizing handwritten Kanji characters and utilized four datasets of MNIST, K-MNIST, Kuzushiji-49 (K49) and the top 150 represented classes in the Kuzushiji-Kanji (K-Kanji) dataset for its performance evaluation. The results indicate the feasibility of using the proposed CNN-ensemble architecture for recognizing handwritten characters, achieving 99.4%, 96.4%, 95.0% and 96.4% classification accuracy on the MNIST, K-MNIST, K49, and K-Kanji datasets, respectively.
[ "cs.CV", "cs.AI", "cs.CL", "cs.LG" ]
false
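A minimal sketch of combining three CNN classifiers by soft voting, one plausible reading of the ensemble in the abstract above; the paper's exact combination rule may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, images):
    """Average the softmax outputs of several classifiers (soft voting).
    models: iterable of nn.Module in eval mode; images: (B, C, H, W)."""
    probs = torch.stack([F.softmax(m(images), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)   # (B,) predicted class indices
```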
2306.03993
2023-06-06T20:08:54Z
Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification
[ "Christopher Neff", "Armin Danesh Pazho", "Hamed Tabkhi" ]
Following the popularity of Unsupervised Domain Adaptation (UDA) in person re-identification, the recently proposed setting of Online Unsupervised Domain Adaptation (OUDA) attempts to bridge the gap towards practical applications by introducing a consideration of streaming data. However, this still falls short of truly representing real-world applications. This paper defines the setting of Real-world Real-time Online Unsupervised Domain Adaptation (R$^2$OUDA) for Person Re-identification. The R$^2$OUDA setting sets the stage for true real-world real-time OUDA, bringing to light four major limitations found in real-world applications that are often neglected in current research: system generated person images, subset distribution selection, time-based data stream segmentation, and a segment-based time constraint. To address all aspects of this new R$^2$OUDA setting, this paper further proposes Real-World Real-Time Online Streaming Mutual Mean-Teaching (R$^2$MMT), a novel multi-camera system for real-world person re-identification. Taking a popular person re-identification dataset, R$^2$MMT was used to construct over 100 data subsets and train more than 3000 models, exploring the breadth of the R$^2$OUDA setting to understand the training time and accuracy trade-offs and limitations for real-world applications. R$^2$MMT, a real-world system able to respect the strict constraints of the proposed R$^2$OUDA setting, achieves accuracies within 0.1% of comparable OUDA methods that cannot be applied directly to real-world applications.
[ "cs.CV", "cs.AI", "cs.LG" ]
false
2306.04021
2023-06-06T21:27:08Z
Energy-Based Models for Cross-Modal Localization using Convolutional Transformers
[ "Alan Wu", "Michael S. Ryoo" ]
We present a novel framework using Energy-Based Models (EBMs) for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS. Lidar sensors have become ubiquitous on autonomous vehicles for describing its surrounding environment. Map priors are typically built using the same sensor modality for localization purposes. However, these map building endeavors using range sensors are often expensive and time-consuming. Alternatively, we leverage the use of satellite images as map priors, which are widely available, easily accessible, and provide comprehensive coverage. We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner, which is challenging due to the drastic difference in appearance between the sparse range sensor readings and the rich satellite imagery. We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
[ "cs.CV", "cs.AI", "cs.LG", "cs.RO" ]
false
2306.04032
2023-06-06T21:49:56Z
BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens Metadata Embedding
[ "Zhihao Yang", "Wenyi Lian", "Siyuan Lai" ]
The bokeh effect is an optical phenomenon that offers a pleasant visual experience, typically generated by high-end cameras with wide aperture lenses. The task of bokeh effect transformation aims to produce a desired effect in one set of lenses and apertures based on another combination. Current models are limited in their ability to render a specific set of bokeh effects, primarily transformations from sharp to blur. In this paper, we propose a novel universal method for embedding lens metadata into the model and introducing a loss calculation method using alpha masks from the newly released Bokeh Effect Transformation Dataset (BETD) [3]. Based on the above techniques, we propose the BokehOrNot model, which is capable of producing both blur-to-sharp and sharp-to-blur bokeh effects with various combinations of lenses and aperture sizes. Our proposed model outperforms current leading bokeh rendering and image restoration models and renders visually natural bokeh effects. Our code is available at: https://github.com/indicator0/bokehornot.
[ "cs.CV", "cs.LG", "eess.IV" ]
false
2306.04653
2023-06-06T10:22:43Z
From Data to Action: Exploring AI and IoT-driven Solutions for Smarter Cities
[ "Tiago Dias", "Tiago Fonseca", "João Vitorino", "Andreia Martins", "Sofia Malpique", "Isabel Praça" ]
The emergence of smart cities demands harnessing advanced technologies like the Internet of Things (IoT) and Artificial Intelligence (AI) and promises to unlock cities' potential to become more sustainable, efficient, and ultimately livable for their inhabitants. This work introduces an intelligent city management system that provides a data-driven approach to three use cases: (i) analyze traffic information to reduce the risk of traffic collisions and improve driver and pedestrian safety, (ii) identify when and where energy consumption can be reduced to improve cost savings, and (iii) detect maintenance issues like potholes in the city's roads and sidewalks, as well as the beginning of hazards like floods and fires. A case study in Aveiro City demonstrates the system's effectiveness in generating actionable insights that enhance security, energy efficiency, and sustainability, while highlighting the potential of AI and IoT-driven solutions for smart city development.
[ "cs.LG", "cs.CV", "cs.SY", "eess.SY" ]
false
2306.07349
2023-06-06T17:59:10Z
ATT3D: Amortized Text-to-3D Object Synthesis
[ "Jonathan Lorraine", "Kevin Xie", "Xiaohui Zeng", "Chen-Hsuan Lin", "Towaki Takikawa", "Nicholas Sharp", "Tsung-Yi Lin", "Ming-Yu Liu", "Sanja Fidler", "James Lucas" ]
Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields. DreamFusion recently achieved high-quality results but requires a lengthy, per-prompt optimization to create 3D objects. To address this, we amortize optimization over text prompts by training on many prompts simultaneously with a unified model, instead of separately. With this, we share computation across a prompt set, training in less time than per-prompt optimization. Our framework - Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to generalize to unseen setups and smooth interpolations between text for novel assets and simple animations.
[ "cs.LG", "cs.AI", "cs.CV", "68T45", "I.2.6; I.2.7; I.3.6; I.3.7" ]
true
2306.03679
2023-06-06T13:41:37Z
Human-imperceptible, Machine-recognizable Images
[ "Fusheng Hao", "Fengxiang He", "Yikai Wang", "Fuxiang Wu", "Jing Zhang", "Jun Cheng", "Dacheng Tao" ]
Massive human-related data is collected to train neural networks for computer vision tasks. This exposes a major conflict for software engineers between developing better AI systems and keeping their distance from sensitive training data. To reconcile this conflict, this paper proposes an efficient privacy-preserving learning paradigm, where images are first encrypted to become ``human-imperceptible, machine-recognizable'' via one of the two encryption strategies: (1) random shuffling to a set of equally-sized patches and (2) mixing-up sub-patches of the images. Then, minimal adaptations are made to vision transformer to enable it to learn on the encrypted images for vision tasks, including image classification and object detection. Extensive experiments on ImageNet and COCO show that the proposed paradigm achieves comparable accuracy with the competitive methods. Decrypting the encrypted images requires solving an NP-hard jigsaw puzzle or an ill-posed inverse problem, which is empirically shown intractable to be recovered by various attackers, including the powerful vision transformer-based attacker. We thus show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information. The code is available at \url{https://github.com/FushengHao/PrivacyPreservingML.}
[ "cs.CV", "cs.AI", "cs.CR", "cs.LG", "stat.ML" ]
false
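A minimal sketch of the first encryption strategy in the abstract above: shuffling equally-sized patches with a secret permutation that acts as the key; the patch size and key handling are illustrative, not the paper's exact setup.

```python
import numpy as np

def shuffle_patches(img, patch=16, seed=0):
    """Shuffle non-overlapping patch tiles of an (H, W, C) image with a
    seeded permutation; without the seed, recovery is a jigsaw puzzle."""
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    tiles = (img[:gh*patch, :gw*patch]
             .reshape(gh, patch, gw, patch, C)
             .transpose(0, 2, 1, 3, 4)          # (gh, gw, patch, patch, C)
             .reshape(gh * gw, patch, patch, C))
    perm = np.random.default_rng(seed).permutation(gh * gw)   # secret key
    tiles = tiles[perm]
    return (tiles.reshape(gh, gw, patch, patch, C)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(gh * patch, gw * patch, C))
```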
2306.03350
2023-06-06T01:56:44Z
Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning
[ "Chujie Zheng", "Pei Ke", "Zheng Zhang", "Minlie Huang" ]
It has always been an important yet challenging problem to control language models to avoid generating texts with undesirable attributes, such as toxic language and unnatural repetition. We introduce Click for controllable text generation, which needs no modification to the model architecture and facilitates out-of-the-box use of trained models. It employs a contrastive loss on sequence likelihood, which fundamentally decreases the generation probability of negative samples (i.e., generations with undesirable attributes). It also adopts a novel likelihood ranking-based strategy to construct contrastive samples from model generations. On the tasks of language detoxification, sentiment steering, and repetition reduction, we show that Click outperforms strong baselines of controllable text generation and demonstrate the superiority of Click's sample construction strategy.
[ "cs.CL" ]
false
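A minimal sketch of a margin-based contrastive loss on sequence likelihood, in the spirit of the Click abstract above; the margin form and length normalization are assumptions, not the paper's exact objective.

```python
import torch

def sequence_contrastive_loss(logp_pos, logp_neg, margin=1.0):
    """logp_pos, logp_neg: (B,) length-normalized sequence log-likelihoods.
    Negatives (generations with undesirable attributes) are pushed to score
    at least `margin` below positives."""
    return torch.clamp(margin - (logp_pos - logp_neg), min=0.0).mean()
```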
2306.03415
2023-06-06T05:30:49Z
Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
[ "Peggy Tang", "Junbin Gao", "Lei Zhang", "Zhiyong Wang" ]
Recently, compressive text summarisation offers a balance between the conciseness issue of extractive summarisation and the factual hallucination issue of abstractive summarisation. However, most existing compressive summarisation methods are supervised, relying on the expensive effort of creating a new training dataset with corresponding compressive summaries. In this paper, we propose an efficient and interpretable compressive summarisation method that utilises unsupervised dual-agent reinforcement learning to optimise a summary's semantic coverage and fluency by simulating human judgment on summarisation quality. Our model consists of an extractor agent and a compressor agent, and both agents have a multi-head attentional pointer-based structure. The extractor agent first chooses salient sentences from a document, and then the compressor agent compresses these extracted sentences by selecting salient words to form a summary without using reference summaries to compute the summary reward. To the best of our knowledge, this is the first work on unsupervised compressive summarisation. Experimental results on three widely used datasets (e.g., Newsroom, CNN/DM, and XSum) show that our model achieves promising performance and a significant improvement on Newsroom in terms of the ROUGE metric, as well as interpretability of semantic coverage of summarisation results.
[ "cs.CL" ]
false
2306.03469
2023-06-06T07:42:39Z
Joint Event Extraction via Structural Semantic Matching
[ "Haochen Li", "Tianhao Gao", "Jingkun Wang", "Weiping Li" ]
Event Extraction (EE) is one of the essential tasks in information extraction, which aims to detect event mentions from text and find the corresponding argument roles. The EE task can be abstracted as a process of matching the semantic definitions and argument structures of event types with the target text. This paper encodes the semantic features of event types and makes structural matching with target text. Specifically, Semantic Type Embedding (STE) and Dynamic Structure Encoder (DSE) modules are proposed. Also, the Joint Structural Semantic Matching (JSSM) model is built to jointly perform event detection and argument extraction tasks through a bidirectional attention layer. The experimental results on the ACE2005 dataset indicate that our model achieves a significant performance improvement.
[ "cs.CL" ]
false
2306.03507
2023-06-06T08:53:01Z
"A Little is Enough": Few-Shot Quality Estimation based Corpus Filtering improves Machine Translation
[ "Akshay Batheja", "Pushpak Bhattacharyya" ]
Quality Estimation (QE) is the task of evaluating the quality of a translation when reference translation is not available. The goal of QE aligns with the task of corpus filtering, where we assign the quality score to the sentence pairs present in the pseudo-parallel corpus. We propose a Quality Estimation based Filtering approach to extract high-quality parallel data from the pseudo-parallel corpus. To the best of our knowledge, this is a novel adaptation of the QE framework to extract quality parallel corpus from the pseudo-parallel corpus. By training with this filtered corpus, we observe an improvement in the Machine Translation (MT) system's performance by up to 1.8 BLEU points, for English-Marathi, Chinese-English, and Hindi-Bengali language pairs, over the baseline model. The baseline model is the one that is trained on the whole pseudo-parallel corpus. Our Few-shot QE model transfer learned from the English-Marathi QE model and fine-tuned on only 500 Hindi-Bengali training instances, shows an improvement of up to 0.6 BLEU points for Hindi-Bengali language pair, compared to the baseline model. This demonstrates the promise of transfer learning in the setting under discussion. QE systems typically require on the order of 7K-25K training instances. Our Hindi-Bengali QE model is trained on only 500 training instances, i.e., 1/40th of the normal requirement, and achieves comparable performance. All the scripts and datasets utilized in this study will be publicly available.
[ "cs.CL" ]
false
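The filtering step itself is simple once a QE model exists: score every pseudo-parallel pair and keep the top fraction. In the sketch below, `qe_score` is a stand-in heuristic, not the authors' trained QE model, and the 70% retention threshold is an arbitrary choice.

```python
# Minimal corpus-filtering loop: score pseudo-parallel pairs with a QE model
# and keep the highest-quality fraction for MT training. `qe_score` is a
# placeholder for a trained QE regressor, not the authors' actual model.
def qe_score(src: str, tgt: str) -> float:
    # toy proxy: length-ratio heuristic; a real QE model goes here
    r = len(tgt.split()) / max(len(src.split()), 1)
    return 1.0 - abs(1.0 - r)

pairs = [("how are you", "tum kaise ho"),
         ("good morning", "x"),
         ("thank you", "dhanyavaad")]
scored = sorted(pairs, key=lambda p: qe_score(*p), reverse=True)
keep = scored[: int(0.7 * len(scored))]  # retain top 70% by estimated quality
print(keep)
```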
2306.03598
2023-06-06T11:37:46Z
CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models
[ "Jiazheng Li", "Zhaoyue Sun", "Bin Liang", "Lin Gui", "Yulan He" ]
Text classifiers built on Pre-trained Language Models (PLMs) have achieved remarkable progress in various tasks including sentiment analysis, natural language inference, and question answering. However, the occurrence of uncertain predictions by these classifiers poses a challenge to their reliability when deployed in practical applications. Much effort has been devoted to designing various probes to understand what PLMs capture, but few studies have delved into the factors influencing the predictive uncertainty of PLM-based classifiers. In this paper, we propose a novel framework, called CUE, which aims to interpret the uncertainties inherent in the predictions of PLM-based models. In particular, we first map PLM-encoded representations to a latent space via a variational auto-encoder. We then generate text representations by perturbing the latent space, which causes fluctuations in predictive uncertainty. By comparing the difference in predictive uncertainty between the perturbed and the original text representations, we are able to identify the latent dimensions responsible for uncertainty and subsequently trace back to the input features that contribute to such uncertainty. Our extensive experiments on four benchmark datasets encompassing linguistic acceptability classification, emotion classification, and natural language inference show the feasibility of our proposed framework. Our source code is available at: https://github.com/lijiazheng99/CUE.
[ "cs.CL" ]
false
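A minimal sketch of the perturb-and-compare idea follows: nudge each latent dimension of an encoder-classifier pair and record how predictive entropy shifts. The randomly initialized linear layers stand in for the paper's VAE encoder and PLM classifier, and the perturbation size of 1.0 is an arbitrary assumption.

```python
# Perturb each latent dimension and measure the change in predictive entropy
# to locate uncertainty-responsible dimensions. Random weights stand in for
# the trained VAE encoder and classifier heads of the actual framework.
import torch
import torch.nn as nn

torch.manual_seed(0)
encode = nn.Linear(768, 32)    # stands in for the VAE encoder mean head
classify = nn.Linear(32, 3)    # stands in for the classifier head

def entropy(logits):
    p = logits.softmax(-1)
    return -(p * p.clamp_min(1e-9).log()).sum(-1)

x = torch.randn(1, 768)        # a PLM-encoded text representation
z = encode(x)
base = entropy(classify(z))
for dim in range(z.size(1)):
    z_pert = z.clone()
    z_pert[0, dim] += 1.0      # perturbation size is an arbitrary choice
    delta = (entropy(classify(z_pert)) - base).item()
    if abs(delta) > 0.05:
        print(f"latent dim {dim}: entropy shift {delta:+.3f}")
```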
2306.03652
2023-06-06T13:13:27Z
Injecting knowledge into language generation: a case study in auto-charting after-visit care instructions from medical dialogue
[ "Maksim Eremeev", "Ilya Valmianski", "Xavier Amatriain", "Anitha Kannan" ]
Factual correctness is often the limiting factor in practical applications of natural language generation in high-stakes domains such as healthcare. An essential requirement for maintaining factuality is the ability to deal with rare tokens. This paper focuses on rare tokens that appear in both the source and the reference sequences, and which, when missed during generation, decrease the factual correctness of the output text. For high-stakes domains that are also knowledge-rich, we show how to use knowledge to (a) identify which rare tokens that appear in both source and reference are important and (b) uplift their conditional probability. We introduce the ``utilization rate'' that encodes knowledge and serves as a regularizer by maximizing the marginal probability of selected tokens. We present a study in the knowledge-rich domain of healthcare, where we tackle the problem of generating after-visit care instructions based on patient-doctor dialogues. We verify that, in our dataset, specific medical concepts with high utilization rates are underestimated by conventionally trained sequence-to-sequence models. We observe that correcting this with our approach to knowledge injection reduces the uncertainty of the model as well as improves factuality and coherence without negatively impacting fluency.
[ "cs.CL" ]
false
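One plausible reading of the regularizer is sketched below: alongside the usual cross-entropy, maximize the marginal probability that each knowledge-selected rare token appears somewhere in the output. The marginalization and the 0.1 weight are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of a loss with an added regularizer that uplifts the marginal
# probability of selected (knowledge-identified) rare tokens; the exact
# marginalization in the paper may differ -- this is one plausible reading.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, steps = 100, 6
logits = torch.randn(steps, vocab, requires_grad=True)   # decoder outputs
targets = torch.randint(0, vocab, (steps,))
selected = torch.tensor([7, 42])                          # important rare tokens

ce = F.cross_entropy(logits, targets)
probs = logits.softmax(-1)                                # (steps, vocab)
# marginal prob that each selected token appears at least once in the output
marginal = 1.0 - (1.0 - probs[:, selected]).prod(dim=0)
reg = -marginal.clamp_min(1e-9).log().mean()
loss = ce + 0.1 * reg                                     # 0.1 is an assumed weight
loss.backward()
print(float(ce), float(reg))
```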
2306.03678
2023-06-06T13:41:09Z
On the Difference of BERT-style and CLIP-style Text Encoders
[ "Zhihong Chen", "Guiming Hardy Chen", "Shizhe Diao", "Xiang Wan", "Benyou Wang" ]
Masked language modeling (MLM) has been one of the most popular pretraining recipes in natural language processing, e.g., BERT, one of the representative models. Recently, contrastive language-image pretraining (CLIP) has also attracted attention, especially its vision models that achieve excellent performance on a broad range of vision tasks. However, few studies are dedicated to studying the text encoders learned by CLIP. In this paper, we analyze the difference between BERT-style and CLIP-style text encoders from three experiments: (i) general text understanding, (ii) vision-centric text understanding, and (iii) text-to-image generation. Experimental analyses show that although CLIP-style text encoders underperform BERT-style ones for general text understanding tasks, they are equipped with a unique ability, i.e., synesthesia, for the cross-modal association, which is more similar to the senses of humans.
[ "cs.CL" ]
false
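A side-by-side probe of the two encoder families might look like the sketch below, which embeds the same sentences with BERT and with CLIP's text tower and compares similarities. The public checkpoints named here are standard choices, not necessarily the paper's exact setup.

```python
# Embed the same sentences with BERT and with CLIP's text encoder and compare
# their similarity structure; requires downloading the public checkpoints.
import torch
from transformers import BertTokenizer, BertModel, CLIPTokenizer, CLIPTextModel

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

def embed(texts):
    with torch.no_grad():
        b = bert(**bert_tok(texts, return_tensors="pt", padding=True)).pooler_output
        c = clip(**clip_tok(texts, return_tensors="pt", padding=True)).pooler_output
    return (torch.nn.functional.normalize(b, dim=-1),
            torch.nn.functional.normalize(c, dim=-1))

b, c = embed(["a photo of a red apple", "the apple of my eye"])
print("BERT sim:", float(b[0] @ b[1]), "CLIP sim:", float(c[0] @ c[1]))
```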
2306.03736
2023-06-06T14:52:47Z
FinRED: A Dataset for Relation Extraction in Financial Domain
[ "Soumya Sharma", "Tapas Nayak", "Arusarka Bose", "Ajay Kumar Meena", "Koustuv Dasgupta", "Niloy Ganguly", "Pawan Goyal" ]
Relation extraction models trained on a source domain cannot be applied to a different target domain due to the mismatch between relation sets. In the current literature, there is no extensive open-source relation extraction dataset specific to the finance domain. In this paper, we release FinRED, a relation extraction dataset curated from financial news and earnings call transcripts containing relations from the finance domain. FinRED has been created by mapping Wikidata triplets using the distant supervision method. We manually annotate the test data to ensure proper evaluation. We also experiment with various state-of-the-art relation extraction models on this dataset to create a benchmark. We see a significant drop in their performance on FinRED compared to general relation extraction datasets, which indicates that we need better models for financial relation extraction.
[ "cs.CL" ]
false
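Distant supervision of the kind used to build FinRED can be sketched in a few lines: a sentence mentioning both entities of a known triplet inherits that triplet's relation label. The triplets and sentences below are invented for illustration.

```python
# Distant supervision in miniature: a sentence that mentions both entities
# of a known triplet is labeled with that triplet's relation. The triplets
# and sentences below are invented for illustration.
triplets = [("Apple", "founded_by", "Steve Jobs"),
            ("Tesla", "chief_executive_officer", "Elon Musk")]
sentences = ["Steve Jobs started Apple in a garage.",
             "Elon Musk discussed Tesla's quarterly earnings."]

labeled = []
for sent in sentences:
    for head, rel, tail in triplets:
        if head in sent and tail in sent:
            labeled.append((sent, head, tail, rel))
print(labeled)
```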
2306.03853
2023-06-06T16:45:44Z
From Key Points to Key Point Hierarchy: Structured and Expressive Opinion Summarization
[ "Arie Cattan", "Lilach Eden", "Yoav Kantor", "Roy Bar-Haim" ]
Key Point Analysis (KPA) has been recently proposed for deriving fine-grained insights from collections of textual comments. KPA extracts the main points in the data as a list of concise sentences or phrases, termed key points, and quantifies their prevalence. While key points are more expressive than word clouds and key phrases, making sense of a long, flat list of key points, which often express related ideas in varying levels of granularity, may still be challenging. To address this limitation of KPA, we introduce the task of organizing a given set of key points into a hierarchy, according to their specificity. Such hierarchies may be viewed as a novel type of Textual Entailment Graph. We develop ThinkP, a high quality benchmark dataset of key point hierarchies for business and product reviews, obtained by consolidating multiple annotations. We compare different methods for predicting pairwise relations between key points, and for inferring a hierarchy from these pairwise predictions. In particular, for the task of computing pairwise key point relations, we achieve significant gains over existing strong baselines by applying directional distributional similarity methods to a novel distributional representation of key points, and further boost performance via weak supervision.
[ "cs.CL" ]
false
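A directional distributional-similarity score of the WeedsPrec family, one natural candidate for the pairwise relation step, is sketched below: it measures how much of one key point's feature mass is covered by another, so the asymmetry can signal specificity. The toy feature weights are invented, and the paper's actual representation of key points differs.

```python
# WeedsPrec-style directional similarity: the fraction of x's feature weight
# covered by y's features. A high x->y score with a lower y->x score suggests
# x is the more specific key point. Toy weights are purely illustrative.
def weeds_prec(x: dict, y: dict) -> float:
    covered = sum(w for f, w in x.items() if f in y)
    total = sum(x.values())
    return covered / total if total else 0.0

kp_specific = {"battery": 0.8, "drains": 0.15, "overnight": 0.05}
kp_general = {"battery": 0.5, "bad": 0.5}
print(weeds_prec(kp_specific, kp_general))  # 0.8: specific -> general
print(weeds_prec(kp_general, kp_specific))  # 0.5: lower the other way
```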
2306.03978
2023-06-06T19:31:08Z
Training and fine-tuning large language models with Turkish datasets
[ "A. Taha Arslan" ]
Large language models have advanced enormously, attracted vast attention, and are undergoing a phase of intense research. Some of the developed models and training datasets have been made openly accessible, and these may be further fine-tuned with various techniques to obtain specialized models for specific tasks. When it comes to the Turkish language, open-access models do not provide satisfactory coverage; the same is observed in published datasets. In this work, we propose some ideas to mitigate this issue: creating large Turkish datasets, training LLMs with them, and fine-tuning pre-trained models with Turkish inputs. We report our findings on Turkish-based training, along with the problems encountered along the way. We conclude with the outcomes of these experiments and propose ideas for further work. -- Large language models are advancing at an incredible pace and are in a period of great interest and intensive research. Some of the developed models, and some of the datasets used in their training, are offered as open access, so that models suited to specialized tasks can be obtained by applying fine-tuning techniques. When it comes to Turkish, the coverage of these models is not at a sufficient level, a situation that can also be observed in the published datasets. Ways to overcome this may include creating large Turkish-content datasets, training large language models with them, and fine-tuning pre-trained models with Turkish inputs. This work focuses on open-access language models and datasets, and examines some Turkish-based experiments, the problems encountered, and the results.
[ "cs.CL", "I.2.7" ]
false
2306.04043
2023-06-06T22:14:59Z
An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features
[ "Rose Neis", "Karin de Langis", "Zae Myung Kim", "Dongyeop Kang" ]
Capturing readers' engagement in fiction is a challenging but important aspect of narrative understanding. In this study, we collected 23 readers' reactions to 2 short stories through eye tracking, sentence-level annotations, and an overall engagement scale survey. We analyzed the significance of various qualities of the text in predicting how engaging a reader is likely to find it. As enjoyment of fiction is highly contextual, we also investigated individual differences in our data. Furthering our understanding of what captivates readers in fiction will help better inform models used in creative narrative generation and collaborative writing tools.
[ "cs.CL" ]
false
2306.04067
2023-06-06T23:56:18Z
An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models
[ "Zhongbin Xie", "Thomas Lukasiewicz" ]
The increasingly large size of modern pretrained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases. In this paper, we investigate recent parameter-efficient methods in combination with counterfactual data augmentation (CDA) for bias mitigation. We conduct extensive experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance and abilities to preserve the internal knowledge of a pre-trained model. We find that the parameter-efficient methods (i) are effective in mitigating gender bias, where adapter tuning is consistently the most effective one and prompt tuning is more suitable for GPT-2 than BERT, (ii) are less effective when it comes to racial and religious bias, which may be attributed to the limitations of CDA, and (iii) can perform similarly to or sometimes better than full fine-tuning with improved time and memory efficiency, as well as maintain the internal knowledge in BERT and GPT-2, evaluated via fact retrieval and downstream fine-tuning.
[ "cs.CL" ]
false
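CDA itself is mechanically simple, as the sketch below shows: swap gendered terms to produce counterfactual training copies. Real CDA uses a curated bidirectional word-pair list and handles ambiguous forms such as "her" more carefully; this handful of pairs is purely illustrative.

```python
# Counterfactual data augmentation (CDA) in its simplest form: swap gendered
# terms to create counterfactual training copies. Note the simplification:
# "her" always maps to "his" here, ignoring the her/him ambiguity.
import re

PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man", "father": "mother", "mother": "father"}

def cda(sentence: str) -> str:
    def swap(m):
        w = m.group(0)
        out = PAIRS.get(w.lower(), w)
        return out.capitalize() if w[0].isupper() else out
    return re.sub(r"\b\w+\b", swap, sentence)

print(cda("He asked his mother a question."))
# -> "She asked her father a question."
```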
2306.05434
2023-06-06T18:06:24Z
How Good is the Model in Model-in-the-loop Event Coreference Resolution Annotation?
[ "Shafiuddin Rehan Ahmed", "Abhijnan Nath", "Michael Regan", "Adam Pollins", "Nikhil Krishnaswamy", "James H. Martin" ]
Annotating cross-document event coreference links is a time-consuming and cognitively demanding task that can compromise annotation quality and efficiency. To address this, we propose a model-in-the-loop annotation approach for event coreference resolution, where a machine learning model suggests only the likely coreferring event pairs. We evaluate the effectiveness of this approach by first simulating the annotation process and then, using a novel annotator-centric Recall-Annotation effort trade-off metric, comparing the results of various underlying models and datasets. Finally, we present a method for obtaining 97\% recall while substantially reducing the workload required by a fully manual annotation process. Code and data can be found at https://github.com/ahmeshaf/model_in_coref
[ "cs.CL" ]
false
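The recall-versus-effort trade-off at the heart of this evaluation can be simulated directly: rank candidate event pairs by model score and measure how many true coreference links are recovered after annotating only the top fraction. The scores and labels below are synthetic.

```python
# Simulate the recall / annotation-effort trade-off: rank candidate pairs by
# a (toy, but informative) model score and compute recall after annotating
# only the top fraction of the ranking.
import random

random.seed(0)
pairs = []
for i in range(1000):
    pos = random.random() < 0.2                        # true coreference link?
    score = random.random() + (0.8 if pos else 0.0)    # informative toy model
    pairs.append((i, score, pos))

ranked = sorted(pairs, key=lambda p: p[1], reverse=True)
total_pos = sum(p[2] for p in pairs)
for frac in (0.1, 0.25, 0.5):
    k = int(frac * len(ranked))
    recall = sum(p[2] for p in ranked[:k]) / total_pos
    print(f"annotate top {frac:.0%}: recall {recall:.2f}")
```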
2306.03586
2023-06-06T11:08:20Z
Language acquisition: do children and language models follow similar learning stages?
[ "Linnea Evanson", "Yair Lakretz", "Jean-Rémi King" ]
During language acquisition, children follow a typical sequence of learning stages, whereby they first learn to categorize phonemes before they develop their lexicon and eventually master increasingly complex syntactic structures. However, the computational principles that lead to this learning trajectory remain largely unknown. To investigate this, we here compare the learning trajectories of deep language models to those of children. Specifically, we test whether, during its training, GPT-2 exhibits stages of language acquisition comparable to those observed in children aged between 18 months and 6 years. For this, we train 48 GPT-2 models from scratch and evaluate their syntactic and semantic abilities at each training step, using 96 probes curated from the BLiMP, Zorro and BIG-Bench benchmarks. We then compare these evaluations with the behavior of 54 children during language production. Our analyses reveal three main findings. First, similarly to children, the language models tend to learn linguistic skills in a systematic order. Second, this learning scheme is parallel: the language tasks that are learned last improve from the very first training steps. Third, some - but not all - learning stages are shared between children and these language models. Overall, these results shed new light on the principles of language acquisition, and highlight important divergences in how humans and modern algorithms learn to process natural language.
[ "cs.CL", "cs.AI" ]
false
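A single BLiMP-style probe of the kind aggregated in this study can be run in a few lines: a checkpoint passes a minimal pair if it assigns higher log-probability to the grammatical sentence. The sketch below queries the public gpt2 checkpoint for illustration; the study itself evaluates many checkpoints across training.

```python
# A BLiMP-style minimal-pair probe: the model "passes" if the grammatical
# sentence receives the higher total log-probability. Requires downloading
# the public gpt2 checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # loss is the mean NLL over ids.size(1) - 1 predicted tokens
    return -out.loss.item() * (ids.size(1) - 1)

good = "The keys to the cabinet are here."
bad = "The keys to the cabinet is here."
print("pass" if logprob(good) > logprob(bad) else "fail")
```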
2306.03608
2023-06-06T11:54:48Z
A Survey of Quantum-Cognitively Inspired Sentiment Analysis Models
[ "Yaochen Liu", "Qiuchi Li", "Benyou Wang", "Yazhou Zhang", "Dawei Song" ]
Quantum theory, originally proposed as a physical theory to describe the motions of microscopic particles, has been applied to various non-physics domains involving human cognition and decision-making that are inherently uncertain and exhibit certain non-classical, quantum-like characteristics. Sentiment analysis is a typical example of such domains. In the last few years, by leveraging the modeling power of quantum probability (a non-classical probability stemming from quantum mechanics methodology) and deep neural networks, a range of novel quantum-cognitively inspired models for sentiment analysis have emerged and performed well. This survey presents a timely overview of the latest developments in this fascinating cross-disciplinary area. We first provide a background of quantum probability and quantum cognition at a theoretical level, analyzing their advantages over classical theories in modeling the cognitive aspects of sentiment analysis. Then, recent quantum-cognitively inspired models are introduced and discussed in detail, focusing on how they approach the key challenges of the sentiment analysis task. Finally, we discuss the limitations of the current research and highlight future research directions.
[ "cs.CL", "cs.AI" ]
false
2306.03628
2023-06-06T12:30:29Z
Convergence and Diversity in the Control Hierarchy
[ "Alexandra Butoi", "Ryan Cotterell", "David Chiang" ]
Weir has defined a hierarchy of language classes whose second member ($\mathcal{L}_2$) is generated by tree-adjoining grammars (TAG), linear indexed grammars (LIG), combinatory categorial grammars, and head grammars. The hierarchy is obtained using the mechanism of control, and $\mathcal{L}_2$ is obtained using a context-free grammar (CFG) whose derivations are controlled by another CFG. We adapt Weir's definition of a controllable CFG to give a definition of controllable pushdown automata (PDAs). This yields three new characterizations of $\mathcal{L}_2$ as the class of languages generated by PDAs controlling PDAs, PDAs controlling CFGs, and CFGs controlling PDAs. We show that these four formalisms are not only weakly equivalent but equivalent in a stricter sense that we call d-weak equivalence. Furthermore, using an even stricter notion of equivalence called d-strong equivalence, we make precise the intuition that a CFG controlling a CFG is a TAG, a PDA controlling a PDA is an embedded PDA, and a PDA controlling a CFG is a LIG. The fourth member of this family, a CFG controlling a PDA, does not correspond to any formalism we know of, so we invent one and call it a Pushdown Adjoining Automaton.
[ "cs.FL", "cs.CL" ]
false
2306.03650
2023-06-06T13:08:22Z
A Quantum Probability Driven Framework for Joint Multi-Modal Sarcasm, Sentiment and Emotion Analysis
[ "Yaochen Liu", "Yazhou Zhang", "Dawei Song" ]
Sarcasm, sentiment, and emotion are three typical kinds of spontaneous affective responses of humans to external events, and they are tightly intertwined with each other. Such events may be expressed in multiple modalities (e.g., linguistic, visual, and acoustic), e.g., multi-modal conversations. Joint analysis of humans' multi-modal sarcasm, sentiment, and emotion is an important yet challenging topic, as it is a complex cognitive process involving both cross-modality interaction and cross-affection correlation. From the probability theory perspective, cross-affection correlation also means that the judgments on sarcasm, sentiment, and emotion are incompatible. However, this phenomenon cannot be sufficiently modelled by classical probability theory due to its assumption of compatibility, nor do existing approaches take it into consideration. In view of the recent success of quantum probability (QP) in modeling human cognition, particularly contextual incompatible decision making, we take the first step towards introducing QP into joint multi-modal sarcasm, sentiment, and emotion analysis. Specifically, we propose a QUantum probabIlity driven multi-modal sarcasm, sEntiment and emoTion analysis framework, termed QUIET. Extensive experiments on two datasets show the effectiveness and advantages of QUIET in comparison with a wide range of state-of-the-art baselines. We also show the great potential of QP in multi-affect analysis.
[ "cs.CL", "cs.AI" ]
false
2306.03733
2023-06-06T14:49:25Z
A Novel Approach To User Agent String Parsing For Vulnerability Analysis Using Multi-Headed Attention
[ "Dhruv Nandakumar", "Sathvik Murli", "Ankur Khosla", "Kevin Choi", "Abdul Rahman", "Drew Walsh", "Scott Riede", "Eric Dull", "Edward Bowen" ]
The increasing reliance on the internet has led to the proliferation of a diverse set of web browsers and operating systems (OSs) capable of browsing the web. User agent strings (UASs) are a component of web browsing that are transmitted with every Hypertext Transfer Protocol (HTTP) request. They contain information about the client device and software, which is used by web servers for various purposes such as content negotiation and security. However, given the proliferation of various browsers and devices and the lack of standardization of UAS formats, parsing UASs is a non-trivial task. Current rules-based approaches are often brittle and can fail when encountering such non-standard formats. In this work, a novel methodology for parsing UASs using multi-headed-attention-based transformers is proposed. The proposed methodology exhibits strong performance in parsing a variety of UASs with differing formats. Furthermore, a framework to utilize parsed UASs to estimate the vulnerability scores for large sections of publicly visible IT networks or regions is also discussed. The methodology presented here can also be easily extended or deployed for real-time parsing of logs in enterprise settings.
[ "cs.CR", "cs.CL" ]
false
2306.03856
2023-06-06T16:51:03Z
Iterative Translation Refinement with Large Language Models
[ "Pinzhen Chen", "Zhicheng Guo", "Barry Haddow", "Kenneth Heafield" ]
Large language models have shown surprising performance in understanding instructions and performing natural language tasks. In this paper, we propose iterative translation refinement to leverage the power of large language models for more natural translation and post-editing. We show that, by simply involving a large language model in an iterative process, the output quality improves beyond mere translation. Extensive test scenarios with GPT-3.5 reveal that although iterations reduce string-based metric scores, neural metrics indicate comparable if not improved translation quality. Further, human evaluations demonstrate that our method effectively reduces translationese compared to initial GPT translations and even human references, especially for into-English directions. Ablation studies underscore the importance of anchoring the refinement process to the source input and a reasonable initial translation.
[ "cs.CL", "cs.AI" ]
false
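The refinement loop reduces to a small skeleton: each round feeds the source and the previous translation back to the model and asks for an improved version, keeping the process anchored to the source input. Here `llm` is a placeholder for any chat-completion call, and the prompt wording is an assumption rather than the paper's exact template.

```python
# Skeleton of iterative translation refinement: each round re-presents the
# source alongside the current translation and requests an improvement.
# `llm` is a placeholder callable (e.g., wrapping a GPT-3.5 API call).
def refine_translation(llm, source: str, rounds: int = 3) -> str:
    translation = llm(f"Translate to English:\n{source}")
    for _ in range(rounds):
        translation = llm(
            "Improve this translation so it reads like natural English, "
            "staying faithful to the source.\n"
            f"Source: {source}\nCurrent translation: {translation}\n"
            "Improved translation:"
        )
    return translation

# usage: refine_translation(my_llm_call, "Das ist ein Beispiel.")
```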
2306.03866
2023-06-06T17:09:29Z
Correction of Errors in Preference Ratings from Automated Metrics for Text Generation
[ "Jan Deriu", "Pius von Däniken", "Don Tuggener", "Mark Cieliebak" ]
A major challenge in the field of Text Generation is evaluation: Human evaluations are cost-intensive, and automated metrics often display considerable disagreement with human judgments. In this paper, we propose a statistical model of Text Generation evaluation that accounts for the error-proneness of automated metrics when used to generate preference rankings between system outputs. We show that existing automated metrics are generally over-confident in assigning significant differences between systems in this setting. However, our model enables an efficient combination of human and automated ratings to remedy the error-proneness of the automated metrics. We show that using this combination, we only require about 50% of the human annotations typically used in evaluations to arrive at robust and statistically significant results while yielding the same evaluation outcome as the pure human evaluation in 95% of cases. We showcase the benefits of our approach for three text generation tasks: dialogue systems, machine translation, and text summarization.
[ "cs.CL", "cs.AI" ]
false
2306.03907
2023-06-06T17:59:49Z
CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions
[ "Janis Goldzycher" ]
The widespread popularity of social media has led to an increase in hateful, abusive, and sexist language, motivating methods for the automatic detection of such phenomena. The goal of the SemEval shared task \textit{Towards Explainable Detection of Online Sexism} (EDOS 2023) is to detect sexism in English social media posts (subtask A), and to categorize such posts into four coarse-grained sexism categories (subtask B), and eleven fine-grained subcategories (subtask C). In this paper, we present our submitted systems for all three subtasks, based on a multi-task model that has been fine-tuned on a range of related tasks and datasets before being fine-tuned on the specific EDOS subtasks. We implement multi-task learning by formulating each task as binary pairwise text classification, where the dataset and label descriptions are given along with the input text. The results show clear improvements over a fine-tuned DeBERTa-V3 serving as a baseline leading to $F_1$-scores of 85.9\% in subtask A (rank 13/84), 64.8\% in subtask B (rank 19/69), and 44.9\% in subtask C (26/63).
[ "cs.CL", "cs.CY" ]
false
2306.03959
2023-06-06T18:42:08Z
Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction
[ "Julia White", "Arushi Raghuvanshi", "Yada Pruksachatkun" ]
Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests. While large language models have found success automating these dialogues in constrained environments, their widespread deployment is limited by the substantial quantities of task-specific data required for training. This paper presents a data-efficient solution to constructing dialogue systems, leveraging explicit instructions derived from agent guidelines, such as company policies or customer service manuals. Our proposed Knowledge-Augmented Dialogue System (KADS) combines a large language model with a knowledge retrieval module that pulls documents outlining relevant procedures from a predefined set of policies, given a user-agent interaction. To train this system, we introduce a semi-supervised pre-training scheme that employs dialogue-document matching and action-oriented masked language modeling with partial parameter freezing. We evaluate the effectiveness of our approach on prominent task-oriented dialogue datasets, the Action-Based Conversations Dataset and Schema-Guided Dialogue, for two dialogue tasks: action state tracking and workflow discovery. Our results demonstrate that procedural knowledge augmentation improves accuracy in predicting in- and out-of-distribution actions while preserving high performance in settings with low or sparse data.
[ "cs.CL", "cs.IR" ]
false
2306.04009
2023-06-06T20:45:18Z
Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks
[ "Kanishka Misra", "Cicero Nogueira dos Santos", "Siamak Shakeri" ]
Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose together two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that improve upon this limitation by relying on random walks over structured knowledge graphs. Specifically, we use soft prompts to guide LMs to chain together their encoded knowledge by learning to map multi-hop questions to random walk paths that lead to the answer. Applying our methods on two T5 LMs shows substantial improvements over standard tuning approaches in answering questions that require 2-hop reasoning.
[ "cs.CL", "cs.AI" ]
true
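The random-walk component is straightforward to sketch: walk the knowledge graph from a starting entity and record the relation-entity chain that the soft prompt learns to map questions onto. The toy graph below is invented for illustration.

```python
# Generate a random-walk training path over a knowledge graph: from a start
# entity, repeatedly sample an outgoing (relation, neighbor) edge and record
# the chain. The toy graph is invented for illustration.
import random

random.seed(1)
graph = {  # entity -> list of (relation, neighbor)
    "Barack Obama": [("spouse", "Michelle Obama")],
    "Michelle Obama": [("born_in", "Chicago")],
    "Chicago": [("located_in", "Illinois")],
}

def random_walk(start: str, hops: int = 2):
    path, node = [start], start
    for _ in range(hops):
        if node not in graph:
            break
        rel, node = random.choice(graph[node])
        path += [rel, node]
    return path

print(random_walk("Barack Obama"))
# -> ['Barack Obama', 'spouse', 'Michelle Obama', 'born_in', 'Chicago']
```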
2306.04059
2023-06-06T23:15:59Z
Augmenting Reddit Posts to Determine Wellness Dimensions impacting Mental Health
[ "Chandreen Liyanage", "Muskan Garg", "Vijay Mago", "Sunghwan Sohn" ]
Amid the ongoing health crisis, there is a growing need to discern possible signs of Wellness Dimensions (WD) manifested in self-narrated text. As the distribution of WD in social media data is intrinsically imbalanced, we experiment with generative NLP models for data augmentation to enable further improvement in the pre-screening task of classifying WD. To this end, we propose a simple yet effective data augmentation approach through prompt-based generative NLP models, and evaluate the ROUGE scores and syntactic/semantic similarity among existing interpretations and augmented data. Our approach with the ChatGPT model surpasses all the other methods and achieves improvement over baselines such as Easy Data Augmentation and backtranslation. Introducing data augmentation to generate more training samples and a balanced dataset improves the F-score and the Matthews Correlation Coefficient by up to 13.11% and 15.95%, respectively.
[ "cs.CL", "cs.CY" ]
false
2306.05370
2023-06-06T12:59:03Z
Detecting Human Rights Violations on Social Media during Russia-Ukraine War
[ "Poli Nemkova", "Solomon Ubani", "Suleyman Olcay Polat", "Nayeon Kim", "Rodney D. Nielsen" ]
The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing an mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an $F_2$ score of 0.71 for HRV detection, representing an improvement of 0.38 over the multilingual BERT base model. We release two datasets containing Telegram posts: (1) a large corpus with over 2.3 million posts, and (2) a dataset annotated at the sentence level to indicate HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations.
[ "cs.CY", "cs.CL" ]
false
2306.03411
2023-06-06T05:18:21Z
Generate-then-Retrieve: Intent-Aware FAQ Retrieval in Product Search
[ "Zhiyu Chen", "Jason Choi", "Besnik Fetahu", "Oleg Rokhlenko", "Shervin Malmasi" ]
Customers interacting with product search engines are increasingly formulating information-seeking queries. Frequently Asked Question (FAQ) retrieval aims to retrieve common question-answer pairs for a user query with question intent. Integrating FAQ retrieval in product search can not only empower users to make more informed purchase decisions, but also enhance user retention through efficient post-purchase support. Determining when an FAQ entry can satisfy a user's information need within product search, without disrupting their shopping experience, represents an important challenge. We propose an intent-aware FAQ retrieval system consisting of (1) an intent classifier that predicts when a user's information need can be answered by an FAQ; (2) a reformulation model that rewrites a query into a natural question. Offline evaluation demonstrates that our approach improves Hit@1 by 13% on retrieving ground-truth FAQs, while reducing latency by 95% compared to baseline systems. These improvements are further validated by real user feedback, where 71% of displayed FAQs on top of product search results received explicit positive user feedback. Overall, our findings show promising directions for integrating FAQ retrieval into product search at scale.
[ "cs.CL", "cs.AI", "cs.IR" ]
false
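The two-stage design can be sketched as a small pipeline: gate the query with an intent classifier, rewrite it into a natural question, then retrieve the best-matching FAQ. All three components below are toy stand-ins for the paper's trained models.

```python
# Pipeline skeleton for intent-aware FAQ retrieval: intent gate, query
# reformulation, then retrieval. Each component is a toy placeholder for
# the trained classifier, rewriter, and retriever described above.
def has_question_intent(query: str) -> bool:
    return any(w in query.lower() for w in ("how", "can", "does", "why", "what"))

def reformulate(query: str) -> str:
    q = query.strip().rstrip("?")
    return q[0].upper() + q[1:] + "?"

def retrieve_faq(question: str, faqs):
    overlap = lambda a, b: len(set(a.lower().split()) & set(b.lower().split()))
    return max(faqs, key=lambda f: overlap(question, f[0]))

faqs = [("How do I return a product?", "Start a return from Your Orders."),
        ("Does this blender crush ice?", "Yes, it handles ice cubes.")]
query = "can this blender crush ice"
if has_question_intent(query):
    print(retrieve_faq(reformulate(query), faqs))
```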