arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---
2305.11310
|
2023-05-18T21:22:07Z
|
AMII: Adaptive Multimodal Inter-personal and Intra-personal Model for
Adapted Behavior Synthesis
|
[
"Jieyeon Woo",
"Mireille Fares",
"Catherine Pelachaud",
"Catherine Achard"
] |
Socially Interactive Agents (SIAs) are physical or virtual embodied agents
that display multimodal behavior similar to that of humans. Modeling SIAs'
non-verbal behavior, such as speech and facial gestures, has always been a
challenging task, given that a SIA can take the role of a speaker or a
listener. A SIA must emit appropriate behavior adapted to its own speech, its
previous behaviors (intra-personal), and the User's behaviors (inter-personal)
for both roles. We propose AMII, a novel approach to synthesize adaptive facial
gestures for SIAs while interacting with Users and acting interchangeably as a
speaker or as a listener. AMII is characterized by a modality memory encoding
schema, where a modality corresponds to either speech or facial gestures, and
makes use of attention mechanisms to capture the intra-personal and
inter-personal relationships. We validate our approach by conducting objective
evaluations and comparing it with the state-of-the-art approaches.
|
[
"cs.HC",
"cs.LG",
"cs.SD",
"eess.AS",
"68T07",
"I.2.11"
] | false |
2305.14369
|
2023-05-18T18:43:13Z
|
Learning low-dimensional dynamics from whole-brain data improves task
capture
|
[
"Eloy Geenjaar",
"Donghyun Kim",
"Riyasat Ohib",
"Marlena Duda",
"Amrit Kashyap",
"Sergey Plis",
"Vince Calhoun"
] |
The neural dynamics underlying brain activity are critical to understanding
cognitive processes and mental disorders. However, current voxel-based
whole-brain dimensionality reduction techniques fall short of capturing these
dynamics, producing latent timeseries that inadequately relate to behavioral
tasks. To address this issue, we introduce a novel approach to learning
low-dimensional approximations of neural dynamics by using a sequential
variational autoencoder (SVAE) that represents the latent dynamical system via
a neural ordinary differential equation (NODE). Importantly, our method finds
smooth dynamics that can predict cognitive processes with accuracy higher than
classical methods. Our method also shows improved spatial localization to
task-relevant brain regions and identifies well-known structures such as the
motor homunculus from fMRI motor task recordings. We also find that non-linear
projections to the latent space enhance performance for specific tasks,
offering a promising direction for future research. We evaluate our approach on
various task-fMRI datasets, including motor, working memory, and relational
processing tasks, and demonstrate that it outperforms widely used
dimensionality reduction techniques in how well the latent timeseries relates
to behavioral sub-tasks, such as left-hand or right-hand tapping. Additionally,
we replace the NODE with a recurrent neural network (RNN) and compare the two
approaches to understand the importance of explicitly learning a dynamical
system. Lastly, we analyze the robustness of the learned dynamical systems
themselves and find that their fixed points are robust across seeds,
highlighting our method's potential for the analysis of cognitive processes as
dynamical systems.
|
[
"q-bio.NC",
"cs.CE",
"cs.LG"
] | false |
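The SVAE/NODE pipeline above learns a latent dynamical system from fMRI timeseries. Below is a minimal, hypothetical PyTorch sketch of the core idea: a small network defines dz/dt and is integrated forward to produce a latent trajectory. The Euler integrator, module names, and dimensions are illustrative assumptions, not the authors' implementation (which uses a full NODE solver inside a sequential VAE).

```python
# Hypothetical sketch of an SVAE-style latent ODE; names and shapes are
# illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    """Learns dz/dt = f(z); integrated here with a simple Euler scheme."""
    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z0, n_steps, dt=0.1):
        # Roll the initial latent state forward in time.
        zs = [z0]
        z = z0
        for _ in range(n_steps - 1):
            z = z + dt * self.f(z)  # Euler step
            zs.append(z)
        return torch.stack(zs, dim=1)  # (batch, time, latent_dim)

# Usage: decode zs with any decoder and train with an ELBO-style loss.
model = LatentODE()
z0 = torch.randn(4, 8)              # batch of initial latent states
trajectory = model(z0, n_steps=50)
print(trajectory.shape)             # torch.Size([4, 50, 8])
```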
2305.18234
|
2023-05-18T11:04:50Z
|
Temporal Aware Mixed Attention-based Convolution and Transformer Network
(MACTN) for EEG Emotion Recognition
|
[
"Xiaopeng Si",
"Dong Huang",
"Yulin Sun",
"Dong Ming"
] |
Emotion recognition plays a crucial role in human-computer interaction, and
electroencephalography (EEG) is advantageous for reflecting human emotional
states. In this study, we propose MACTN, a hierarchical hybrid model for
jointly modeling local and global temporal information. The model is inspired
by neuroscience research on the temporal dynamics of emotions. MACTN extracts
local emotional features through a convolutional neural network (CNN) and
integrates sparse global emotional features through a transformer. Moreover, we
employ channel attention mechanisms to identify the most task-relevant
channels. Through extensive experimentation on two publicly available datasets,
namely THU-EP and DEAP, our proposed method, MACTN, consistently achieves
superior classification accuracy and F1 scores compared to other existing
methods in most experimental settings. Furthermore, ablation studies have shown
that the integration of both self-attention mechanisms and channel attention
mechanisms leads to improved classification performance. Finally, an earlier
version of this method, which shares the same ideas, won the Emotional BCI
Competition's final championship in the 2022 World Robot Contest.
|
[
"eess.SP",
"cs.AI",
"cs.LG"
] | false |
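MACTN's channel attention selects the most task-relevant EEG channels. As a hedged illustration, here is a squeeze-and-excitation-style channel attention block in PyTorch; the class name, reduction ratio, and shapes are assumptions, and the paper's exact design may differ.

```python
# A minimal squeeze-and-excitation-style channel attention block, one
# plausible reading of the channel attention described above.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))  # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)   # reweight EEG channels

eeg = torch.randn(2, 32, 512)        # 32-channel EEG, 512 samples
out = ChannelAttention(32)(eeg)
print(out.shape)                     # torch.Size([2, 32, 512])
```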
2305.18312
|
2023-05-18T18:32:51Z
|
Balancing Test Accuracy and Security in Computerized Adaptive Testing
|
[
"Wanyong Feng",
"Aritra Ghosh",
"Stephen Sireci",
"Andrew S. Lan"
] |
Computerized adaptive testing (CAT) is a form of personalized testing that
accurately measures students' knowledge levels while reducing test length.
Bilevel optimization-based CAT (BOBCAT) is a recent framework that learns a
data-driven question selection algorithm to effectively reduce test length and
improve test accuracy. However, it suffers from high question exposure and test
overlap rates, which potentially affect test security. This paper introduces
C-BOBCAT, a constrained version of BOBCAT that addresses these problems by
changing its optimization setup, enabling us to trade off test accuracy for question
exposure and test overlap rates. We show that C-BOBCAT is effective through
extensive experiments on two real-world adult testing datasets.
|
[
"cs.CY",
"cs.AI",
"cs.LG"
] | false |
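C-BOBCAT trades test accuracy against question exposure by changing the optimization setup. As a loose sketch (not the paper's bilevel formulation), one can express such a trade-off as a penalized scalar objective; `accuracy_loss`, `exposure_rate`, and `lam` below are hypothetical placeholders.

```python
# Hedged sketch: one way to trade off accuracy against exposure, expressing
# the constraint as a penalty. C-BOBCAT's actual bilevel formulation is more
# involved; all names here are illustrative.
def penalized_objective(accuracy_loss, exposure_rate, max_exposure, lam):
    """Penalize question exposure above a target rate."""
    violation = max(0.0, exposure_rate - max_exposure)
    return accuracy_loss + lam * violation

# Larger lam trades test accuracy for lower question exposure.
print(penalized_objective(0.35, exposure_rate=0.5, max_exposure=0.2, lam=2.0))
```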
2305.10852
|
2023-05-18T10:15:03Z
|
Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors
Quantization
|
[
"Nicolò Dal Fabbro",
"Michele Rossi",
"Luca Schenato",
"Subhrakanti Dey"
] |
Edge networks call for communication efficient (low overhead) and robust
distributed optimization (DO) algorithms. These are, in fact, desirable
qualities for DO frameworks, such as federated edge learning techniques, in the
presence of data and system heterogeneity, and in scenarios where internode
communication is the main bottleneck. Although computationally demanding,
Newton-type (NT) methods have been recently advocated as enablers of robust
convergence rates in challenging DO problems where edge devices have sufficient
computational power. Along these lines, in this work we propose Q-SHED, an
original NT algorithm for DO featuring a novel bit-allocation scheme based on
incremental Hessian eigenvectors quantization. The proposed technique is
integrated with the recent SHED algorithm, from which it inherits appealing
features like the small number of required Hessian computations, while being
bandwidth-versatile at a bit-resolution level. Our empirical evaluation against
competing approaches shows that Q-SHED can reduce by up to 60% the number of
communication rounds required for convergence.
|
[
"eess.SY",
"cs.LG",
"cs.MA",
"cs.SY",
"math.OC"
] | false |
2305.11107
|
2023-05-18T16:52:27Z
|
From Data-Fitting to Discovery: Interpreting the Neural Dynamics of
Motor Control through Reinforcement Learning
|
[
"Eugene R. Rush",
"Kaushik Jayaram",
"J. Sean Humbert"
] |
In motor neuroscience, artificial recurrent neural network models often
complement animal studies. However, most modeling efforts are limited to
data-fitting, and the few that examine virtual embodied agents in a
reinforcement learning context do not draw direct comparisons to their
biological counterparts. Our study addresses this gap by uncovering
structured neural activity of a virtual robot performing legged locomotion that
directly supports experimental findings on primate walking and cycling. We find
that embodied agents trained to walk exhibit smooth dynamics that avoid
tangling -- or opposing neural trajectories in neighboring neural space -- a
core principle in computational neuroscience. Specifically, across a wide suite
of gaits, the agent's neural trajectories in the recurrent layers are
less tangled than those in the input-driven actuation layers. To better
interpret the neural separation of these elliptical-shaped trajectories, we
identify speed axes that maximize the variance of mean activity across different
forward, lateral, and rotational speed conditions.
|
[
"q-bio.NC",
"cs.AI",
"cs.LG",
"cs.NE",
"cs.RO"
] | false |
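The tangling notion referenced in the abstract (opposing neural trajectories in neighboring neural space) is commonly quantified as the maximum ratio of derivative differences to state differences. A small NumPy sketch of that metric, under the assumption that this is the variant the authors use:

```python
# Sketch of the trajectory tangling metric Q(t) from computational
# neuroscience: max over t' of ||dx_t - dx_t'||^2 / (||x_t - x_t'||^2 + eps).
import numpy as np

def tangling(X, dt=1.0, eps=1e-6):
    """X: (T, n_units) neural state trajectory. Returns Q(t) per timestep."""
    dX = np.gradient(X, dt, axis=0)      # finite-difference derivatives
    q = []
    for t in range(len(X)):
        num = ((dX[t] - dX) ** 2).sum(axis=1)
        den = ((X[t] - X) ** 2).sum(axis=1) + eps
        q.append((num / den).max())
    return np.array(q)

X = np.cumsum(np.random.randn(100, 16), axis=0)  # toy trajectory
print(tangling(X).mean())
```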
2305.11260
|
2023-05-18T18:55:06Z
|
Constrained Environment Optimization for Prioritized Multi-Agent
Navigation
|
[
"Zhan Gao",
"Amanda Prorok"
] |
Traditional approaches to the design of multi-agent navigation algorithms
consider the environment as a fixed constraint, despite the influence of
spatial constraints on agents' performance. Yet hand-designing conducive
environment layouts is inefficient and potentially expensive. The goal of this
paper is to consider the environment as a decision variable in a system-level
optimization problem, where both agent performance and environment cost are
incorporated. Towards this end, we propose the novel problems of unprioritized and
prioritized environment optimization, where the former treats all agents
equally and the latter accounts for agent priorities. We show, through
formal proofs, under which conditions the environment can change while
guaranteeing completeness (i.e., all agents reach goals), and analyze the role
of agent priorities in the environment optimization. We proceed to impose
real-world constraints on the environment optimization and formulate it
mathematically as a constrained stochastic optimization problem. Since the
relation between agents, environment and performance is challenging to model,
we leverage reinforcement learning to develop a model-free solution and a
primal-dual mechanism to handle constraints. Distinct information processing
architectures are integrated for various implementation scenarios, including
online/offline optimization and discrete/continuous environment. Numerical
results corroborate the theory and demonstrate the validity and adaptability of
our approach.
|
[
"eess.SY",
"cs.LG",
"cs.MA",
"cs.RO",
"cs.SY"
] | false |
2305.11373
|
2023-05-19T01:26:43Z
|
Deep Image Compression Using Scene Text Quality Assessment
|
[
"Shohei Uchigasaki",
"Tomo Miyazaki",
"Shinichiro Omachi"
] |
Image compression is a fundamental technology for Internet communication
engineering. However, a high compression rate with general methods may degrade
images, resulting in unreadable text. In this paper, we propose an image
compression method for maintaining text quality. We developed a scene text
image quality assessment model to assess text quality in compressed images. The
assessment model iteratively searches for the best compressed image that retains
high-quality text. Objective and subjective results showed that the proposed
method was superior to existing methods. Furthermore, the proposed assessment
model outperformed other deep-learning regression models.
|
[
"cs.CV"
] | false |
2305.11394
|
2023-05-19T02:44:58Z
|
Remembering What Is Important: A Factorised Multi-Head Retrieval and
Auxiliary Memory Stabilisation Scheme for Human Motion Prediction
|
[
"Tharindu Fernando",
"Harshala Gammulle",
"Sridha Sridharan",
"Simon Denman",
"Clinton Fookes"
] |
Humans exhibit complex motions that vary depending on the task that they are
performing, the interactions they engage in, as well as subject-specific
preferences. Therefore, forecasting future poses based on the history of the
previous motions is a challenging task. This paper presents an innovative
auxiliary-memory-powered deep neural network framework for the improved
modelling of historical knowledge. Specifically, we disentangle
subject-specific, task-specific, and other auxiliary information from the
observed pose sequences and utilise these factorised features to query the
memory. A novel Multi-Head knowledge retrieval scheme leverages these
factorised feature embeddings to perform multiple querying operations over the
historical observations captured within the auxiliary memory. Moreover, our
proposed dynamic masking strategy makes this feature disentanglement process
dynamic. Two novel loss functions are introduced to encourage diversity within
the auxiliary memory while ensuring the stability of the memory contents, such
that it can locate and store salient information that can aid the long-term
prediction of future motion, irrespective of data imbalances or the diversity
of the input data distribution. With extensive experiments conducted on two
public benchmarks, Human3.6M and CMU-Mocap, we demonstrate that these design
choices collectively allow the proposed approach to outperform the current
state-of-the-art methods by significant margins: $>$ 17\% on the Human3.6M
dataset and $>$ 9\% on the CMU-Mocap dataset.
|
[
"cs.CV"
] | false |
2305.11439
|
2023-05-19T05:45:17Z
|
Few-Shot Learning with Visual Distribution Calibration and Cross-Modal
Distribution Alignment
|
[
"Runqi Wang",
"Hao Zheng",
"Xiaoyue Duan",
"Jianzhuang Liu",
"Yuning Lu",
"Tian Wang",
"Songcen Xu",
"Baochang Zhang"
] |
Pre-trained vision-language models have inspired much research on few-shot
learning. However, with only a few training images, there exist two crucial
problems: (1) the visual feature distributions are easily distracted by
class-irrelevant information in images, and (2) the alignment between the
visual and language feature distributions is difficult. To deal with the
distraction problem, we propose a Selective Attack module, which consists of
trainable adapters that generate spatial attention maps of images to guide the
attacks on class-irrelevant image areas. By perturbing these areas, the
critical features are captured and the visual distributions of image features
are calibrated. To better align the visual and language feature distributions
that describe the same object class, we propose a cross-modal distribution
alignment module, in which we introduce a vision-language prototype for each
class to align the distributions, and adopt the Earth Mover's Distance (EMD) to
optimize the prototypes. For efficient computation, the upper bound of EMD is
derived. In addition, we propose an augmentation strategy to increase the
diversity of the images and the text prompts, which can reduce overfitting to
the few-shot training images. Extensive experiments on 11 datasets demonstrate
that our method consistently outperforms prior arts in few-shot learning. The
implementation code will be available at https://github.com/bhrqw/SADA.
|
[
"cs.CV"
] | false |
2305.11451
|
2023-05-19T06:12:50Z
|
SurgMAE: Masked Autoencoders for Long Surgical Video Analysis
|
[
"Muhammad Abdullah Jamal",
"Omid Mohareri"
] |
There has been a growing interest in using deep learning models for
processing long surgical videos, in order to automatically detect
clinical/operational activities and extract metrics that can enable workflow
efficiency tools and applications. However, training such models requires vast
amounts of labeled data, which is costly and not scalable. Recently,
self-supervised learning has been explored in the computer vision community to
reduce the burden of annotation cost. Masked autoencoders (MAE) have gained
attention in the self-supervised paradigm for Vision Transformers (ViTs) by
predicting the randomly masked regions given the visible patches of an image or
a video clip, and have shown superior performance on benchmark datasets.
However, the application of MAE to surgical data remains unexplored. In this
paper, we first investigate whether MAE can learn transferable representations
in surgical video domain. We propose SurgMAE, which is a novel architecture
with a masking strategy based on sampling high spatio-temporal tokens for MAE.
We provide an empirical study of SurgMAE on two large scale long surgical video
datasets, and find that our method outperforms several baselines in low data
regime. We conduct extensive ablation studies to show the efficacy of our
approach and also demonstrate its superior performance on UCF-101 to prove
its generalizability to non-surgical datasets as well.
|
[
"cs.CV"
] | false |
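SurgMAE builds on MAE-style token masking. The sketch below shows plain random masking of tokens in PyTorch for orientation only; SurgMAE's contribution, a masking strategy based on sampling high spatio-temporal tokens, is not reproduced here.

```python
# Minimal random token masking in the style of video MAE; not SurgMAE's
# high spatio-temporal sampling strategy.
import torch

def random_mask(tokens, mask_ratio=0.75):
    """tokens: (batch, n_tokens, dim). Keep a random subset of tokens."""
    b, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    scores = torch.rand(b, n)                      # random priority per token
    keep_idx = scores.argsort(dim=1)[:, :n_keep]   # indices of kept tokens
    batch_idx = torch.arange(b).unsqueeze(1)
    return tokens[batch_idx, keep_idx], keep_idx

visible, idx = random_mask(torch.randn(2, 196, 768))
print(visible.shape)  # torch.Size([2, 49, 768])
```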
2305.11452
|
2023-05-19T06:13:26Z
|
ReDirTrans: Latent-to-Latent Translation for Gaze and Head Redirection
|
[
"Shiwei Jin",
"Zhen Wang",
"Lei Wang",
"Ning Bi",
"Truong Nguyen"
] |
Learning-based gaze estimation methods require large amounts of training data
with accurate gaze annotations. Facing such demanding requirements of gaze data
collection and annotation, several image synthesis methods were proposed, which
successfully redirected gaze directions precisely given the assigned
conditions. However, these methods focused on changing gaze directions of the
images that only include eyes or restricted ranges of faces with low resolution
(less than $128\times128$) to largely reduce interference from other attributes
such as hair, which limits application scenarios. To cope with this
limitation, we propose a portable network, called ReDirTrans, achieving
latent-to-latent translation for redirecting gaze directions and head
orientations in an interpretable manner. ReDirTrans projects input latent
vectors into aimed-attribute embeddings only and redirects these embeddings
with assigned pitch and yaw values. Then both the initial and edited embeddings
are projected back (deprojected) to the initial latent space as residuals to
modify the input latent vectors by subtraction and addition, representing old
status removal and new status addition. The projection of aimed attributes only
and subtraction-addition operations for status replacement essentially mitigate
impacts on other attributes and the distribution of latent vectors. Thus, by
combining ReDirTrans with a pretrained fixed e4e-StyleGAN pair, we created
ReDirTrans-GAN, which enables accurately redirecting gaze in full-face images
with $1024\times1024$ resolution while preserving other attributes such as
identity, expression, and hairstyle. Furthermore, we present improvements for
the downstream learning-based gaze estimation task, using redirected samples as
dataset augmentation.
|
[
"cs.CV"
] | false |
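The subtraction-addition edit described above can be written compactly. In this hypothetical sketch, `proj`, `deproj`, and `redirect` stand in for ReDirTrans's learned modules; the toy lambdas exist only so the example runs.

```python
# Schematic of the subtraction-addition latent edit: project to attribute
# embeddings, redirect, deproject, then replace old status with new status.
import torch

def redirect_latent(w, proj, deproj, redirect, pitch, yaw):
    e_old = proj(w)                      # aimed-attribute embedding only
    e_new = redirect(e_old, pitch, yaw)  # redirected gaze/head embedding
    # Remove old status, add new status, leave other attributes untouched.
    return w - deproj(e_old) + deproj(e_new)

# Toy stand-ins so the sketch runs end to end.
proj = lambda w: w[:, :16]
deproj = lambda e: torch.nn.functional.pad(e, (0, 496))
redirect = lambda e, p, y: e + 0.1 * (p + y)
w = torch.randn(1, 512)
print(redirect_latent(w, proj, deproj, redirect, 0.2, -0.1).shape)
```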
2305.11513
|
2023-05-19T08:26:08Z
|
When SAM Meets Shadow Detection
|
[
"Leiping Jie",
"Hui Zhang"
] |
As a promptable generic object segmentation model, the Segment Anything Model
(SAM) has recently attracted significant attention and demonstrated
powerful performance. Nevertheless, it still meets its Waterloo when
encountering several tasks, e.g., medical image segmentation, camouflaged
object detection, etc. In this report, we try SAM on an unexplored popular
task: shadow detection. Specifically, four benchmarks were chosen and evaluated
with widely used metrics. The experimental results show that the performance
for shadow detection using SAM is not satisfactory, especially when compared
with elaborate models. Code is available at
https://github.com/LeipingJie/SAMSh.
|
[
"cs.CV"
] | false |
2305.11522
|
2023-05-19T08:43:37Z
|
DSFNet: Dual Space Fusion Network for Occlusion-Robust 3D Dense Face
Alignment
|
[
"Heyuan Li",
"Bo Wang",
"Yu Cheng",
"Mohan Kankanhalli",
"Robby T. Tan"
] |
Sensitivity to severe occlusion and large view angles limits the usage
scenarios of the existing monocular 3D dense face alignment methods. The
state-of-the-art 3DMM-based method directly regresses the model's
coefficients, underutilizing the low-level 2D spatial and semantic information,
which can actually offer cues for face shape and orientation. In this work, we
demonstrate how modeling 3D facial geometry in image and model space jointly
can solve the occlusion and view angle problems. Instead of predicting the
whole face directly, we regress image space features in the visible facial
region by dense prediction first. Subsequently, we predict our model's
coefficients based on the regressed feature of the visible regions, leveraging
the prior knowledge of whole face geometry from the morphable models to
complete the invisible regions. We further propose a fusion network that
combines the advantages of both the image and model space predictions to
achieve high robustness and accuracy in unconstrained scenarios. Thanks to the
proposed fusion module, our method is robust not only to occlusion and large
pitch and roll view angles, which is the benefit of our image space approach,
but also to noise and large yaw angles, which is the benefit of our model space
method. Comprehensive evaluations demonstrate the superior performance of our
method compared with the state-of-the-art methods. On the 3D dense face
alignment task, we achieve 3.80% NME on the AFLW2000-3D dataset, which
outperforms the state-of-the-art method by 5.5%. Code is available at
https://github.com/lhyfst/DSFNet.
|
[
"cs.CV"
] | false |
2305.11601
|
2023-05-19T11:28:05Z
|
Towards Better Gradient Consistency for Neural Signed Distance Functions
via Level Set Alignment
|
[
"Baorui Ma",
"Junsheng Zhou",
"Yu-Shen Liu",
"Zhizhong Han"
] |
Neural signed distance functions (SDFs) have shown remarkable capability in
representing geometry with details. However, without signed distance
supervision, it is still a challenge to infer SDFs from point clouds or
multi-view images using neural networks. In this paper, we claim that gradient
consistency in the field, indicated by the parallelism of level sets, is the
key factor affecting the inference accuracy. Hence, we propose a level set
alignment loss to evaluate the parallelism of level sets, which can be
minimized to achieve better gradient consistency. Our novelty lies in that we
can align all level sets to the zero level set by constraining gradients at
queries and their projections on the zero level set in an adaptive way. Our
insight is to propagate the zero level set to everywhere in the field through
consistent gradients to eliminate uncertainty in the field that is caused by
the discreteness of 3D point clouds or the lack of observations from multi-view
images. Our proposed loss is a general term which can be used upon different
methods to infer SDFs from 3D point clouds and multi-view images. Our numerical
and visual comparisons demonstrate that our loss can significantly improve the
accuracy of SDFs inferred from point clouds or multi-view images under various
benchmarks. Code and data are available at
https://github.com/mabaorui/TowardsBetterGradient.
|
[
"cs.CV"
] | false |
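A hedged sketch of a level-set-alignment-style loss: compare the SDF gradient at a query with the gradient at the query's projection onto the zero level set, and penalize misalignment. The adaptive constraining described in the abstract is omitted, and the unit-sphere SDF below is a stand-in for a learned network.

```python
# Gradient-consistency sketch: align gradients at queries with gradients at
# their projections onto the zero level set.
import torch
import torch.nn.functional as F

def level_set_alignment(sdf, queries):
    queries = queries.requires_grad_(True)
    d = sdf(queries)
    g = torch.autograd.grad(d.sum(), queries, create_graph=True)[0]
    g_unit = F.normalize(g, dim=-1)
    proj = queries - d * g_unit            # pull queries onto zero level set
    proj = proj.detach().requires_grad_(True)
    d_p = sdf(proj)
    g_p = torch.autograd.grad(d_p.sum(), proj, create_graph=True)[0]
    # 1 - cosine similarity between the two gradients.
    return (1 - F.cosine_similarity(g, g_p, dim=-1)).mean()

sdf = lambda x: (x.norm(dim=-1, keepdim=True) - 1.0)  # unit-sphere SDF
loss = level_set_alignment(sdf, torch.randn(128, 3))
print(loss.item())  # ~0 for a perfect SDF, where all level sets are parallel
```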
2305.11664
|
2023-05-19T13:30:10Z
|
Few-shot 3D Shape Generation
|
[
"Jingyuan Zhu",
"Huimin Ma",
"Jiansheng Chen",
"Jian Yuan"
] |
Realistic and diverse 3D shape generation is helpful for a wide variety of
applications such as virtual reality, gaming, and animation. Modern generative
models, such as GANs and diffusion models, learn from large-scale datasets and
generate new samples following similar data distributions. However, when
training data is limited, deep neural generative networks overfit and tend to
replicate training samples. Prior works focus on few-shot image generation to
produce high-quality and diverse results using a few target images.
Unfortunately, abundant 3D shape data is typically hard to obtain as well. In
this work, we make the first attempt to realize few-shot 3D shape generation by
adapting generative models pre-trained on large source domains to target
domains using limited data. To relieve overfitting and keep considerable
diversity, we propose to maintain the probability distributions of the pairwise
relative distances between adapted samples at feature-level and shape-level
during domain adaptation. Our approach only needs the silhouettes of few-shot
target samples as training data to learn target geometry distributions and
produce generated shapes with diverse topologies and textures. Moreover, we
introduce several metrics to evaluate the quality and diversity of few-shot 3D
shape generation. The effectiveness of our approach is demonstrated
qualitatively and quantitatively under a series of few-shot 3D shape adaptation
setups.
|
[
"cs.CV"
] | false |
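The pairwise relative-distance preservation mentioned above can be sketched as keeping the distribution over pairwise sample distances similar between the source and adapted generators. The KL form below is one common instantiation and an assumption, not necessarily the paper's exact loss.

```python
# Sketch of pairwise relative-distance preservation: match softmax
# distributions over pairwise distances to discourage replica collapse.
import torch
import torch.nn.functional as F

def distance_preservation(feats_src, feats_tgt):
    """feats_*: (n_samples, dim) features of the same latent batch."""
    d_src = torch.cdist(feats_src, feats_src)
    d_tgt = torch.cdist(feats_tgt, feats_tgt)
    p = F.log_softmax(-d_tgt, dim=1)   # adapted model's distance profile
    q = F.softmax(-d_src, dim=1)       # source model's distance profile
    return F.kl_div(p, q, reduction="batchmean")

src, tgt = torch.randn(8, 128), torch.randn(8, 128)
print(distance_preservation(src, tgt).item())
```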
2305.11716
|
2023-05-19T14:52:40Z
|
Efficient and Deterministic Search Strategy Based on Residual
Projections for Point Cloud Registration
|
[
"Xinyi Li",
"Yinlong Liu",
"Hu Cao",
"Xueli Liu",
"Feihu Zhang",
"Alois Knoll"
] |
Estimating the rigid transformation between two LiDAR scans through putative
3D correspondences is a typical point cloud registration paradigm. Current 3D
feature matching approaches commonly lead to numerous outlier correspondences,
making outlier-robust registration techniques indispensable. Many recent
studies have adopted the branch and bound (BnB) optimization framework to solve
the correspondence-based point cloud registration problem globally and
deterministically. Nonetheless, BnB-based methods are time-consuming when searching
the entire 6-dimensional parameter space, since their computational complexity
is exponential in the dimension of the solution domain. In order to enhance
algorithm efficiency, existing works attempt to decouple the 6 degrees of
freedom (DOF) original problem into two 3-DOF sub-problems, thereby reducing
the dimension of the parameter space. In contrast, our proposed approach
introduces a novel pose decoupling strategy based on residual projections,
effectively decomposing the raw problem into three 2-DOF rotation search
sub-problems. Subsequently, we employ a novel BnB-based search method to solve
these sub-problems, achieving efficient and deterministic registration.
Furthermore, our method can be adapted to address the challenging problem of
simultaneous pose and correspondence registration (SPCR). Through extensive
experiments conducted on synthetic and real-world datasets, we demonstrate that
our proposed method outperforms state-of-the-art methods in terms of
efficiency, while simultaneously ensuring robustness.
|
[
"cs.CV"
] | false |
2305.11718
|
2023-05-19T14:56:05Z
|
Towards Accurate Image Coding: Improved Autoregressive Image Generation
with Dynamic Vector Quantization
|
[
"Mengqi Huang",
"Zhendong Mao",
"Zhuowei Chen",
"Yongdong Zhang"
] |
Existing vector quantization (VQ) based autoregressive models follow a
two-stage generation paradigm that first learns a codebook to encode images as
discrete codes, and then completes generation based on the learned codebook.
However, they encode fixed-size image regions into fixed-length codes and
ignore their naturally different information densities, which results in
insufficiency in important regions and redundancy in unimportant ones, and
finally degrades the generation quality and speed. Moreover, the fixed-length
coding leads to an unnatural raster-scan autoregressive generation. To address
the problem, we propose a novel two-stage framework: (1) Dynamic-Quantization
VAE (DQ-VAE) which encodes image regions into variable-length codes based on
their information densities for an accurate and compact code representation.
(2) DQ-Transformer which thereby generates images autoregressively from
coarse-grained (smooth regions with fewer codes) to fine-grained (detail
regions with more codes) by modeling the position and content of codes in each
granularity alternately, through a novel stacked-transformer architecture and
shared-content, non-shared-position input layer designs. Comprehensive
experiments on various generation tasks validate the superiority of our approach in both
effectiveness and efficiency. Code will be released at
https://github.com/CrossmodalGroup/DynamicVectorQuantization.
|
[
"cs.CV"
] | false |
2305.11729
|
2023-05-19T15:04:49Z
|
ViDaS Video Depth-aware Saliency Network
|
[
"Ioanna Diamanti",
"Antigoni Tsiami",
"Petros Koutras",
"Petros Maragos"
] |
We introduce ViDaS, a two-stream, fully convolutional Video, Depth-Aware
Saliency network to address the problem of attention modeling ``in-the-wild",
via saliency prediction in videos. Contrary to existing visual saliency
approaches using only RGB frames as input, our network employs also depth as an
additional modality. The network consists of two visual streams, one for the
RGB frames, and one for the depth frames. Both streams follow an
encoder-decoder approach and are fused to obtain a final saliency map. The
network is trained end-to-end and is evaluated in a variety of different
databases with eye-tracking data, containing a wide range of video content.
Although the publicly available datasets do not contain depth, we estimate it
using three different state-of-the-art methods, to enable comparisons and a
deeper insight. Our method outperforms in most cases state-of-the-art models
and our RGB-only variant, which indicates that depth can be beneficial to
accurately estimating saliency in videos displayed on a 2D screen. Depth has
been widely used to assist salient object detection problems, where it has been
proven to be very beneficial. Our problem though differs significantly from
salient object detection, since it is not restricted to specific salient
objects, but predicts human attention in a more general sense. These two
problems not only have different objectives, but also different ground-truth
data and evaluation metrics. To the best of our knowledge, this is the first
competitive deep learning video saliency estimation approach that combines both
RGB and Depth features to address the general problem of saliency estimation
``in-the-wild". The code will be publicly released.
|
[
"cs.CV"
] | false |
2305.11733
|
2023-05-19T15:11:06Z
|
Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment
|
[
"Mengke Li",
"Yiu-ming Cheung",
"Yang Lu"
] |
Long-tailed data is still a big challenge for deep neural networks, even
though they have achieved great success on balanced data. We observe that
vanilla training on long-tailed data with cross-entropy loss makes the
instance-rich head classes severely squeeze the spatial distribution of the
tail classes, which leads to difficulty in classifying tail class samples.
Furthermore, the original cross-entropy loss can only propagate gradients
briefly, because the gradient in softmax form rapidly approaches zero as
the logit difference increases. This phenomenon is called softmax saturation.
It is unfavorable for training on balanced data, but can be utilized to adjust
the validity of the samples in long-tailed data, thereby solving the distorted
embedding space of long-tailed problems. To this end, this paper proposes the
Gaussian clouded logit adjustment by Gaussian perturbation of different class
logits with varied amplitude. We define the amplitude of perturbation as cloud
size and set relatively large cloud sizes for tail classes. The large cloud size
can reduce the softmax saturation and thereby make tail class samples more
active as well as enlarge the embedding space. To alleviate the bias in a
classifier, we therefore propose the class-based effective number sampling
strategy with classifier re-training. Extensive experiments on benchmark
datasets validate the superior performance of the proposed method. Source code
is available at https://github.com/Keke921/GCLLoss.
|
[
"cs.CV"
] | false |
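One minimal reading of the Gaussian clouded logit idea: perturb logits with Gaussian noise whose amplitude (the cloud size) is larger for tail classes, counteracting softmax saturation. The per-class sizes and the exact placement of the perturbation below are assumptions, not the paper's formulation.

```python
# A minimal reading of Gaussian clouded logits: per-class Gaussian
# perturbation with larger "cloud size" for tail classes.
import torch
import torch.nn.functional as F

def gcl_loss(logits, targets, cloud_sizes):
    """logits: (batch, n_classes); cloud_sizes: (n_classes,), larger for tails."""
    noise = torch.randn_like(logits).abs() * cloud_sizes  # one cloud per class
    clouded = logits - noise   # push logits down more for tail classes
    return F.cross_entropy(clouded, targets)

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
cloud_sizes = torch.linspace(0.1, 1.0, 10)  # head -> tail
print(gcl_loss(logits, targets, cloud_sizes).item())
```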
2305.11739
|
2023-05-19T15:20:00Z
|
Survey of Automatic Plankton Image Recognition: Challenges, Existing
Solutions and Future Perspectives
|
[
"Tuomas Eerola",
"Daniel Batrakhanov",
"Nastaran Vatankhah Barazandeh",
"Kaisa Kraft",
"Lumi Haraguchi",
"Lasse Lensu",
"Sanna Suikkanen",
"Jukka Seppälä",
"Timo Tamminen",
"Heikki Kälviäinen"
] |
Planktonic organisms are key components of aquatic ecosystems and respond
quickly to changes in the environment; therefore, their monitoring is vital to
understanding environmental change. Yet, monitoring plankton at
appropriate scales still remains a challenge, limiting our understanding of the
functioning of aquatic systems and their response to changes. Modern plankton
imaging instruments can be utilized to sample at high frequencies, enabling
novel possibilities to study plankton populations. However, manual analysis of
the data is costly, time-consuming, and expert-based, making such an approach
unsuitable for large-scale application and calling for automatic solutions. The
key problem related to the utilization of plankton datasets through image
analysis is plankton recognition. Despite the large amount of research done,
automatic methods have not been widely adopted for operational use. In this
paper, a comprehensive survey on existing solutions for automatic plankton
recognition is presented. First, we identify the most notable challenges
that make the development of plankton recognition systems difficult. Then, we
provide a detailed description of solutions for these challenges proposed in
plankton recognition literature. Finally, we propose a workflow to identify the
specific challenges in new datasets and the recommended approaches to address
them. For many of the challenges, applicable solutions exist. However,
important challenges remain unsolved: 1) the domain shift between the datasets
hindering the development of a general plankton recognition system that would
work across different imaging instruments, 2) the difficulty to identify and
process the images of previously unseen classes, and 3) the uncertainty in
expert annotations that affects the training of the machine learning models for
recognition. These challenges should be addressed in future research.
|
[
"cs.CV"
] | false |
2305.11918
|
2023-05-19T02:25:56Z
|
PASTS: Progress-Aware Spatio-Temporal Transformer Speaker For
Vision-and-Language Navigation
|
[
"Liuyi Wang",
"Chengju Liu",
"Zongtao He",
"Shu Li",
"Qingqing Yan",
"Huiyi Chen",
"Qijun Chen"
] |
Vision-and-language navigation (VLN) is a crucial but challenging cross-modal
navigation task. One powerful technique to enhance the generalization
performance in VLN is the use of an independent speaker model to provide pseudo
instructions for data augmentation. However, current speaker models based on
Long-Short Term Memory (LSTM) lack the ability to attend to features relevant
at different locations and time steps. To address this, we propose a novel
progress-aware spatio-temporal transformer speaker (PASTS) model that uses the
transformer as the core of the network. PASTS uses a spatio-temporal encoder to
fuse panoramic representations and encode intermediate connections through
steps. In addition, to avoid the misalignment problem that could result in
incorrect supervision, a speaker progress monitor (SPM) is proposed to enable
the model to estimate the progress of instruction generation and facilitate
more fine-grained caption results. Additionally, a multifeature dropout (MFD)
strategy is introduced to alleviate overfitting. The proposed PASTS is flexible
to be combined with existing VLN models. The experimental results demonstrate
that PASTS outperforms all existing speaker models and successfully improves
the performance of previous VLN models, achieving state-of-the-art performance
on the standard Room-to-Room (R2R) dataset.
|
[
"cs.CV"
] | false |
2305.11392
|
2023-05-19T02:42:35Z
|
Fast-StrucTexT: An Efficient Hourglass Transformer with Modality-guided
Dynamic Token Merge for Document Understanding
|
[
"Mingliang Zhai",
"Yulin Li",
"Xiameng Qin",
"Chen Yi",
"Qunyi Xie",
"Chengquan Zhang",
"Kun Yao",
"Yuwei Wu",
"Yunde Jia"
] |
Transformers achieve promising performance in document understanding because
of their high effectiveness, but still suffer from quadratic computational
complexity with respect to sequence length. General efficient transformers
are difficult to adapt directly to document modeling: they are unable to
handle layout representations in documents (e.g., word, line, and paragraph)
at different granularity levels, and struggle to achieve a good trade-off
between efficiency and performance. To tackle these concerns, we propose
Fast-StrucTexT, an efficient multi-modal framework based on the StrucTexT
algorithm with an hourglass transformer architecture, for visual document
understanding. Specifically, we design a modality-guided dynamic token merging
block to make the model learn multi-granularity representations and prune
redundant tokens. Additionally, we present a multi-modal interaction module
called Symmetry Cross Attention (SCA) to consider multi-modal fusion and
efficiently guide the token merging. The SCA allows one modality input to serve as the
query and calculates cross attention with another modality in a dual phase.
Extensive experiments on FUNSD, SROIE, and CORD datasets demonstrate that our
model achieves state-of-the-art performance with almost 1.9X faster
inference than previous state-of-the-art methods.
|
[
"cs.CV",
"cs.CL"
] | false |
2305.11540
|
2023-05-19T09:20:27Z
|
Efficient Cross-Lingual Transfer for Chinese Stable Diffusion with
Images as Pivots
|
[
"Jinyi Hu",
"Xu Han",
"Xiaoyuan Yi",
"Yutong Chen",
"Wenhao Li",
"Zhiyuan Liu",
"Maosong Sun"
] |
Diffusion models have made impressive progress in text-to-image synthesis.
However, training such large-scale models (e.g., Stable Diffusion) from scratch
requires high computational costs and massive high-quality text-image pairs,
which becomes unaffordable for other languages. To handle this challenge, we
propose IAP, a simple but effective method to transfer English Stable Diffusion
into Chinese. IAP optimizes only a separate Chinese text encoder with all other
parameters fixed to align the Chinese semantic space with the English one in CLIP.
To achieve this, we innovatively treat images as pivots and minimize the
distance of attentive features produced from cross-attention between images and
each language, respectively. In this way, IAP efficiently establishes connections
among Chinese, English, and visual semantics in CLIP's embedding space,
improving the quality of images generated directly from Chinese prompts.
Experimental results show that our method outperforms several strong Chinese
diffusion models with only 5%~10% training data.
|
[
"cs.CV",
"cs.CL"
] | false |
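The image-as-pivot objective can be sketched as follows: with the English branch frozen, the Chinese text encoder is trained so that cross-attention features computed against the same images match across languages. `cross_attn_feats` and the bilinear toy stand-in below are hypothetical.

```python
# Sketch of the image-as-pivot idea: match attentive features produced with
# Chinese and English prompts on the same images.
import torch
import torch.nn.functional as F

def iap_loss(cross_attn_feats, img, zh_emb, en_emb):
    """Minimize distance between attentive features of the two languages."""
    f_zh = cross_attn_feats(img, zh_emb)    # features attending to Chinese text
    f_en = cross_attn_feats(img, en_emb)    # features attending to English text
    return F.mse_loss(f_zh, f_en.detach())  # English branch is the frozen target

# Toy stand-in so the sketch runs: bilinear interaction as "cross-attention".
feats = lambda img, txt: img @ txt.T
img = torch.randn(4, 64)
zh = torch.randn(77, 64, requires_grad=True)  # trainable Chinese embeddings
en = torch.randn(77, 64)                      # frozen English embeddings
print(iap_loss(feats, img, zh, en).item())
```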
2305.11560
|
2023-05-19T09:57:19Z
|
Brain Captioning: Decoding human brain activity into images and text
|
[
"Matteo Ferrante",
"Furkan Ozcelik",
"Tommaso Boccato",
"Rufin VanRullen",
"Nicola Toschi"
] |
Every day, the human brain processes an immense volume of visual information,
relying on intricate neural mechanisms to perceive and interpret these stimuli.
Recent breakthroughs in functional magnetic resonance imaging (fMRI) have
enabled scientists to extract visual information from human brain activity
patterns. In this study, we present an innovative method for decoding brain
activity into meaningful images and captions, with a specific focus on brain
captioning due to its enhanced flexibility as compared to brain decoding into
images. Our approach takes advantage of cutting-edge image captioning models
and incorporates a unique image reconstruction pipeline that utilizes latent
diffusion models and depth estimation. We utilized the Natural Scenes Dataset,
a comprehensive fMRI dataset from eight subjects who viewed images from the
COCO dataset. We employed the Generative Image-to-text Transformer (GIT) as our
backbone for captioning and propose a new image reconstruction pipeline based
on latent diffusion models. The method involves training regularized linear
regression models between brain activity and extracted features. Additionally,
we incorporated depth maps from the ControlNet model to further guide the
reconstruction process. We evaluate our methods using quantitative metrics for
both generated captions and images. Our brain captioning approach outperforms
existing methods, while our image reconstruction pipeline generates plausible
images with improved spatial relationships. In conclusion, we demonstrate
significant progress in brain decoding, showcasing the enormous potential of
integrating vision and language to better understand human cognition. Our
approach provides a flexible platform for future research, with potential
applications in various fields, including neural art, style transfer, and
portable devices.
|
[
"cs.CV",
"cs.AI"
] | false |
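The regularized linear mapping step described above (brain activity to extracted features) is straightforward to sketch with ridge regression; the shapes and synthetic data below are placeholders, not the Natural Scenes Dataset.

```python
# Hedged sketch of the regularized linear mapping: ridge regression from
# fMRI voxel activity to an image/caption feature space.
import numpy as np
from sklearn.linear_model import Ridge

n_trials, n_voxels, n_feats = 200, 5000, 768
rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_voxels))   # brain activity patterns
Y = rng.standard_normal((n_trials, n_feats))    # e.g., GIT text features

reg = Ridge(alpha=1000.0)     # strong regularization: voxels >> trials
reg.fit(X, Y)
pred = reg.predict(X[:1])     # predicted feature vector for one trial
print(pred.shape)             # (1, 768)
```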
2305.11675
|
2023-05-19T13:44:25Z
|
Cinematic Mindscapes: High-quality Video Reconstruction from Brain
Activity
|
[
"Zijiao Chen",
"Jiaxin Qing",
"Juan Helen Zhou"
] |
Reconstructing human vision from brain activities has been an appealing task
that helps to understand our cognitive process. Even though recent research has
seen great success in reconstructing static images from non-invasive brain
recordings, work on recovering continuous visual experiences in the form of
videos is limited. In this work, we propose Mind-Video, which learns
spatiotemporal information from continuous fMRI data of the cerebral cortex
progressively through masked brain modeling, multimodal contrastive learning
with spatiotemporal attention, and co-training with an augmented Stable
Diffusion model that incorporates network temporal inflation. We show that
high-quality videos of arbitrary frame rates can be reconstructed with
Mind-Video using adversarial guidance. The recovered videos were evaluated with
various semantic and pixel-level metrics. We achieved an average accuracy of
85% in semantic classification tasks and 0.19 in structural similarity index
(SSIM), outperforming the previous state-of-the-art by 45%. We also show that
our model is biologically plausible and interpretable, reflecting established
physiological processes.
|
[
"cs.CV",
"cs.CE"
] | true |
2305.11701
|
2023-05-19T14:25:27Z
|
S-JEA: Stacked Joint Embedding Architectures for Self-Supervised Visual
Representation Learning
|
[
"Alžběta Manová",
"Aiden Durrant",
"Georgios Leontidis"
] |
The recent emergence of Self-Supervised Learning (SSL) as a fundamental
paradigm for learning image representations has demonstrated, and continues to
demonstrate, high empirical success in a variety of tasks. However, most SSL
approaches fail to learn embeddings that capture hierarchical semantic concepts
that are separable and interpretable. In this work, we aim to learn highly
separable semantic hierarchical representations by stacking Joint Embedding
Architectures (JEA), where higher-level JEAs take as input the representations
of lower-level JEAs. This results in a representation space that exhibits distinct
sub-categories of semantic concepts (e.g., model and colour of vehicles) in
higher-level JEAs. We empirically show that representations from stacked JEAs
perform on a similar level to traditional JEAs with comparable parameter counts
and visualise the representation spaces to validate the semantic hierarchies.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.11728
|
2023-05-19T15:04:16Z
|
Towards More Transparent and Accurate Cancer Diagnosis with an
Unsupervised CAE Approach
|
[
"Zahra Tabatabaei",
"Adrian Colomer",
"Javier Oliver Moll",
"Valery Naranjo"
] |
Digital pathology has revolutionized cancer diagnosis by leveraging
Content-Based Medical Image Retrieval (CBMIR) for analyzing histopathological
Whole Slide Images (WSIs). CBMIR enables searching for similar content,
enhancing diagnostic reliability and accuracy. In 2020, breast and prostate
cancer constituted 11.7% and 14.1% of cases, respectively, as reported by the
Global Cancer Observatory (GCO). The proposed Unsupervised CBMIR (UCBMIR)
replicates the traditional cancer diagnosis workflow, offering a dependable
method to support pathologists in WSI-based diagnostic conclusions. This
approach alleviates pathologists' workload, potentially enhancing diagnostic
efficiency. To address the challenge of the lack of labeled histopathological
images in CBMIR, a customized unsupervised Convolutional Auto Encoder (CAE) was
developed, extracting 200 features per image for the search engine component.
UCBMIR was evaluated using widely-used numerical techniques in CBMIR, alongside
visual evaluation and comparison with a classifier. The validation involved
three distinct datasets, with an external evaluation demonstrating its
effectiveness. UCBMIR outperformed previous studies, achieving a top 5 recall
of 99% and 80% on BreaKHis and SICAPv2, respectively, using the first
evaluation technique. Precision rates of 91% and 70% were achieved for BreaKHis
and SICAPv2, respectively, using the second evaluation technique. Furthermore,
UCBMIR demonstrated the capability to identify various patterns in patches,
achieving an 81% accuracy in the top 5 when tested on an external image from
Arvaniti.
|
[
"eess.IV",
"cs.CV"
] | false |
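The retrieval step of UCBMIR can be illustrated as nearest-neighbor search over CAE features. The 200-dimensional features follow the abstract; the cosine-similarity ranking below is an assumed, minimal search-engine component.

```python
# Sketch of the retrieval step: rank indexed WSI patches by cosine
# similarity to a query patch's CAE features and return the top-5.
import numpy as np

def top_k(query_feat, bank, k=5):
    """query_feat: (200,); bank: (n_images, 200). Returns top-k indices."""
    q = query_feat / np.linalg.norm(query_feat)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q
    return np.argsort(-sims)[:k]

bank = np.random.rand(1000, 200)      # features of indexed patches
query = np.random.rand(200)
print(top_k(query, bank))             # indices of 5 most similar patches
```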
2305.11795
|
2023-05-19T16:30:50Z
|
A One-Class Classifier for the Detection of GAN Manipulated
Multi-Spectral Satellite Images
|
[
"Lydia Abady",
"Giovanna Maria Dimitri",
"Mauro Barni"
] |
The highly realistic image quality achieved by current image generative
models has many academic and industrial applications. To limit the use of such
models to benign applications, though, it is necessary that tools to
conclusively detect whether an image has been generated synthetically or not
are developed. For this reason, several detectors have been developed that provide
excellent performance in computer vision applications; however, they cannot be
applied as-is to multispectral satellite images, and hence new models
must be trained. In general, two-class classifiers can achieve very good
detection accuracies; however, they are not able to generalise to image domains
and generative model architectures different from those used during training.
For this reason, in this paper, we propose a one-class classifier based on
Vector Quantized Variational Autoencoder 2 (VQ-VAE 2) features to overcome the
limitations of two-class classifiers. First, we emphasize the generalization
problem that binary classifiers suffer from by training and testing an
EfficientNet-B4 architecture on multiple multispectral datasets. Then we show
that, since the VQ-VAE 2 based classifier is trained only on pristine images,
it is able to detect images belonging to different domains and generated by
architectures that have not been used during training. Last, we compare the two
classifiers head-to-head on the same generated datasets, highlighting the
superior generalization capabilities of the VQ-VAE 2-based detector.
|
[
"cs.CV",
"eess.IV"
] | false |
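A one-class detector in this spirit can be reduced to thresholding a score calibrated on pristine data only. The reconstruction-error score and percentile threshold below are a simplified stand-in for the paper's VQ-VAE 2 feature pipeline.

```python
# Hedged sketch of one-class detection: calibrate a threshold on scores from
# pristine images only, then flag inputs that exceed it.
import numpy as np

def detect(recon_error, threshold):
    """True = likely GAN-manipulated (poorly modeled by the one-class model)."""
    return recon_error > threshold

pristine_errors = np.random.rand(500) * 0.1          # calibration scores
threshold = np.percentile(pristine_errors, 99)       # ~1% false-positive rate
print(detect(np.array([0.05, 0.4]), threshold))      # [False  True]
```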
2305.11968
|
2023-05-19T19:30:52Z
|
An End-to-end Pipeline for 3D Slide-wise Multi-stain Renal Pathology
Registration
|
[
"Peize Li",
"Ruining Deng",
"Yuankai Huo"
] |
Tissue examination and quantification in a 3D context on serial section whole
slide images (WSIs) are labor-intensive and time-consuming tasks. Our previous
study proposed a novel registration-based method (Map3D) to automatically align
WSIs to the same physical space, reducing the human effort of screening serial
sections from WSIs. However, the registration performance of our Map3D method
was only evaluated on single-stain WSIs with large-scale kidney tissue samples.
In this paper, we provide a Docker for an end-to-end 3D slide-wise registration
pipeline on needle biopsy serial sections in a multi-stain paradigm. The
contribution of this study is three-fold: (1) We release a containerized Docker
for an end-to-end multi-stain WSI registration. (2) We prove that the Map3D
pipeline is capable of sectional registration from multi-stain WSI. (3) We
verify that the Map3D pipeline can also be applied to needle biopsy tissue
samples. The source code and the Docker have been made publicly available at
https://github.com/hrlblab/Map3D.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.11367
|
2023-05-19T01:06:08Z
|
Smart Pressure e-Mat for Human Sleeping Posture and Dynamic Activity
Recognition
|
[
"Liangqi Yuan",
"Yuan Wei",
"Jia Li"
] |
With the emphasis on healthcare, early childhood education, and fitness,
non-invasive measurement and recognition methods have received more attention.
Pressure sensing has been extensively studied due to its advantages of simple
structure, easy access, visualization application, and harmlessness. This paper
introduces a smart pressure e-mat (SPeM) system based on a piezoresistive
material Velostat for human monitoring applications, including sleeping
postures, sports, and yoga recognition. After a subsystem scans e-mat readings
and processes the signal, it generates a pressure image stream. Deep neural
networks (DNNs) are used to fit and train the pressure image stream and
recognize the corresponding human behavior. Four sleeping postures and five
dynamic activities inspired by Nintendo Switch Ring Fit Adventure (RFA) are
used as a preliminary validation of the proposed SPeM system. The SPeM system
achieves high accuracy in both applications, demonstrating the accuracy
and generalization ability of the models. Compared with other pressure
sensor-based systems, SPeM possesses more flexible applications and commercial
application prospects, with reliable, robust, and repeatable properties.
|
[
"cs.CV",
"cs.HC",
"cs.LG",
"eess.SP"
] | false |
2305.11419
|
2023-05-19T04:07:26Z
|
JetSeg: Efficient Real-Time Semantic Segmentation Model for Low-Power
GPU-Embedded Systems
|
[
"Miguel Lopez-Montiel",
"Daniel Alejandro Lopez",
"Oscar Montiel"
] |
Real-time semantic segmentation is a challenging task that requires
high-accuracy models with low inference times. Implementing these models on
embedded systems is limited by hardware capability and memory usage, which
produces bottlenecks. We propose an efficient model for real-time semantic
segmentation called JetSeg, consisting of an encoder called JetNet, and an
improved RegSeg decoder. The JetNet is designed for GPU-Embedded Systems and
includes two main components: a new light-weight efficient block called
JetBlock, which reduces the number of parameters, minimizing memory usage and
inference time without sacrificing accuracy; and a new strategy, called
JetConv, that combines asymmetric and non-asymmetric convolutions with
depthwise-dilated convolutions, a channel shuffle operation, light-weight
activation functions, and a convenient number of group convolutions for
embedded systems. JetSeg also uses an innovative loss function named
JetLoss, which integrates the Precision, Recall, and IoUB losses to improve
semantic segmentation and reduce computational complexity. Experiments
demonstrate that JetSeg is much faster on workstation devices and more suitable
for Low-Power GPU-Embedded Systems than existing state-of-the-art models for
real-time semantic segmentation. Our approach outperforms state-of-the-art
real-time encoder-decoder models by reducing 46.70M parameters and 5.14%
GFLOPs, which makes JetSeg up to 2x faster on the NVIDIA Titan RTX GPU and the
Jetson Xavier than other models. The JetSeg code is available at
https://github.com/mmontielpz/jetseg.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
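JetLoss integrates Precision, Recall, and IoU-style terms. As a hedged sketch, here are soft (probabilistic) versions of the three terms combined into one loss; the equal weighting and the exact IoUB definition are assumptions.

```python
# Hedged sketch of a JetLoss-style combined objective: soft precision,
# recall, and IoU computed from predicted probabilities.
import torch

def jet_style_loss(p, y, eps=1e-6):
    """p: predicted probabilities in [0, 1]; y: binary ground truth."""
    tp = (p * y).sum()
    precision = tp / (p.sum() + eps)
    recall = tp / (y.sum() + eps)
    iou = tp / (p.sum() + y.sum() - tp + eps)
    return 3.0 - precision - recall - iou  # minimize the complements

p = torch.rand(2, 1, 64, 64)
y = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(jet_style_loss(p, y).item())
```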
2305.11497
|
2023-05-19T07:52:22Z
|
TreePrompt: Learning to Compose Tree Prompts for Explainable Visual
Grounding
|
[
"Chenchi Zhang",
"Jun Xiao",
"Lei Chen",
"Jian Shao",
"Long Chen"
] |
Prompt tuning has achieved great success in transferring the knowledge from
large pretrained vision-language models into downstream tasks, and has
dominated the performance on visual grounding (VG). However, almost all
existing prompt tuning paradigms suffer from poor interpretability. In this
paper, we argue that their poor interpretability is attributed to the holistic
prompt generation and inference process. By "holistic", we mean that they
usually directly learn a set of vectors as the prompt (i.e., prompt
generation), and use the learned global prompt to augment the textual input for
the VG model (i.e., prompt inference). To this end, we propose a new prompt
construction paradigm with explicit explainable ability, named TreePrompt.
Specifically, we first deconstruct a complex sentence into a tree that is
consistent with human reasoning. Then, following the syntax tree, we compose a
structured prompt in a bottom-up manner. Thanks to this step-by-step prompt
construction process, each intermediate prompt (i.e., tree node) permits us to
understand the reasoning process. Extensive ablations on various backbones and
benchmarks consistently demonstrate the effectiveness and interpretability of
our TreePrompt.
|
[
"cs.CV",
"cs.AI",
"cs.CL",
"cs.MM"
] | false |
2305.11504
|
2023-05-19T08:10:43Z
|
JOINEDTrans: Prior Guided Multi-task Transformer for Joint Optic
Disc/Cup Segmentation and Fovea Detection
|
[
"Huaqing He",
"Li Lin",
"Zhiyuan Cai",
"Pujin Cheng",
"Xiaoying Tang"
] |
Deep learning-based image segmentation and detection models have largely
improved the efficiency of analyzing retinal landmarks such as optic disc (OD),
optic cup (OC), and fovea. However, factors including ophthalmic
disease-related lesions and low image quality issues may severely complicate
automatic OD/OC segmentation and fovea detection. Most existing works treat the
identification of each landmark as a single task, and take into account no
prior information. To address these issues, we propose a prior guided
multi-task transformer framework for joint OD/OC segmentation and fovea
detection, named JOINEDTrans. JOINEDTrans effectively combines various spatial
features of the fundus images, relieving the structural distortions induced by
lesions and other imaging issues. It contains a segmentation branch and a
detection branch. To be noted, we employ an encoder pretrained in a vessel
segmentation task to effectively exploit the positional relationship among
vessel, OD/OC, and fovea, successfully incorporating spatial prior into the
proposed JOINEDTrans framework. There are a coarse stage and a fine stage in
JOINEDTrans. In the coarse stage, OD/OC coarse segmentation and fovea heatmap
localization are obtained through a joint segmentation and detection module. In
the fine stage, we crop regions of interest for subsequent refinement and use
predictions obtained in the coarse stage to provide additional information for
better performance and faster convergence. Experimental results demonstrate
that JOINEDTrans outperforms existing state-of-the-art methods on the publicly
available GAMMA, REFUGE, and PALM fundus image datasets. We make our code
available at https://github.com/HuaqingHe/JOINEDTrans.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.11715
|
2023-05-19T14:51:05Z
|
A quality assurance framework for real-time monitoring of deep learning
segmentation models in radiotherapy
|
[
"Xiyao Jin",
"Yao Hao",
"Jessica Hilliard",
"Zhehao Zhang",
"Maria A. Thomas",
"Hua Li",
"Abhinav K. Jha",
"Geoffrey D. Hugo"
] |
To safely deploy deep learning models in the clinic, a quality assurance
framework is needed for routine or continuous monitoring of input-domain shift
and the models' performance without ground truth contours. In this work,
cardiac substructure segmentation was used as an example task to establish a QA
framework. A benchmark dataset consisting of Computed Tomography (CT) images
along with manual cardiac delineations of 241 patients was collected,
including one 'common' image domain and five 'uncommon' domains. Segmentation
models were tested on the benchmark dataset for an initial evaluation of model
capacity and limitations. An image domain shift detector was developed by
utilizing a trained Denoising autoencoder (DAE) and two hand-engineered
features. Another Variational Autoencoder (VAE) was also trained to estimate
the shape quality of the auto-segmentation results. Using the extracted
features from the image/segmentation pair as inputs, a regression model was
trained to predict the per-patient segmentation accuracy, measured by Dice
coefficient similarity (DSC). The framework was tested across 19 segmentation
models to evaluate the generalizability of the entire framework.
As a result, the predicted DSC of the regression models achieved a mean absolute
error (MAE) ranging from 0.036 to 0.046, with an average MAE of 0.041. When
tested on the benchmark dataset, the performance of all segmentation models
was not significantly affected by scanning parameters: FOV, slice thickness,
and reconstruction kernels. For input images with Poisson noise, CNN-based
segmentation models demonstrated a decreased DSC ranging from 0.07 to 0.41,
while the transformer-based model was not significantly affected.
|
[
"eess.IV",
"cs.CV",
"physics.med-ph"
] | false |
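The quantity the regression model predicts is the Dice similarity coefficient (DSC) between auto-segmentation and ground truth. For reference, a small NumPy implementation for binary masks:

```python
# Dice similarity coefficient (DSC) for binary segmentation masks.
import numpy as np

def dice(pred, gt, eps=1e-8):
    """pred, gt: boolean arrays of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

pred = np.zeros((64, 64), bool); pred[16:48, 16:48] = True
gt = np.zeros((64, 64), bool);   gt[20:52, 20:52] = True
print(round(dice(pred, gt), 3))  # overlap of two shifted squares, ~0.766
```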
2305.11845
|
2023-05-19T17:37:28Z
|
RxnScribe: A Sequence Generation Model for Reaction Diagram Parsing
|
[
"Yujie Qian",
"Jiang Guo",
"Zhengkai Tu",
"Connor W. Coley",
"Regina Barzilay"
] |
Reaction diagram parsing is the task of extracting reaction schemes from a
diagram in the chemistry literature. The reaction diagrams can be arbitrarily
complex, thus robustly parsing them into structured data is an open challenge.
In this paper, we present RxnScribe, a machine learning model for parsing
reaction diagrams of varying styles. We formulate this structured prediction
task with a sequence generation approach, which condenses the traditional
pipeline into an end-to-end model. We train RxnScribe on a dataset of 1,378
diagrams and evaluate it with cross validation, achieving an 80.0% soft match
F1 score, with significant improvements over previous models. Our code and data
are publicly available at https://github.com/thomas0809/RxnScribe.
|
[
"cs.CL",
"cs.AI",
"cs.CV"
] | false |
2305.12036
|
2023-05-19T23:32:06Z
|
SIDAR: Synthetic Image Dataset for Alignment & Restoration
|
[
"Monika Kwiatkowski",
"Simon Matern",
"Olaf Hellwich"
] |
Image alignment and image restoration are classical computer vision tasks.
However, there is still a lack of datasets that provide enough data to train
and evaluate end-to-end deep learning models. Obtaining ground-truth data for
image alignment requires sophisticated structure-from-motion methods or optical
flow systems that often do not provide enough data variance, i.e., they
typically provide a high number of image correspondences while introducing only
a few changes of scenery within the underlying image sequences. Alternative
approaches utilize random perspective distortions on existing image data.
However, this only provides trivial distortions, lacking the complexity and
variance of real-world scenarios. Instead, our proposed data augmentation helps
to overcome the issue of data scarcity by using 3D rendering: images are added
as textures onto a plane, then varying lighting conditions, shadows, and
occlusions are added to the scene. The scene is rendered from multiple
viewpoints, generating perspective distortions more consistent with real-world
scenarios, with homographies closely resembling those of camera projections
rather than randomized homographies. For each scene, we provide a sequence of
distorted images with corresponding occlusion masks, homographies, and
ground-truth labels. The resulting dataset can serve as a training and
evaluation set for a multitude of tasks involving image alignment and artifact
removal, such as deep homography estimation, dense image matching, 2D bundle
adjustment, inpainting, shadow removal, denoising, content retrieval, and
background subtraction. Our data generation pipeline is customizable and can be
applied to any existing dataset, serving as a data augmentation to further
improve the feature learning of any existing method.
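
A minimal sketch, in OpenCV, of the kind of ground truth SIDAR provides
(image, warped image, homography); the 3D rendering pipeline itself is not
reproduced, and the corner coordinates are arbitrary:

```python
# Sketch: warp a source image by a known homography so that
# (image, warped image, H) triples can supervise deep homography estimation.
import cv2
import numpy as np

img = np.full((240, 320, 3), 255, dtype=np.uint8)
cv2.putText(img, "scene", (60, 120), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 0), 3)

# A mild perspective distortion, similar to viewing a textured plane off-axis.
src = np.float32([[0, 0], [320, 0], [320, 240], [0, 240]])
dst = np.float32([[15, 10], [300, 25], [310, 230], [5, 215]])
H = cv2.getPerspectiveTransform(src, dst)   # 3x3 ground-truth homography
warped = cv2.warpPerspective(img, H, (320, 240))
print("ground-truth homography:\n", H)
```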
|
[
"cs.CV",
"cs.GR",
"cs.LG"
] | false |
2305.13333
|
2023-05-19T19:23:08Z
|
Evaluating LeNet Algorithms in Classification Lung Cancer from
Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases
|
[
"Jafar Abdollahi"
] |
The advancement of computer-aided detection systems had a significant impact
on clinical analysis and decision-making on human disease. Lung cancer requires
more attention among the numerous diseases being examined because it affects
both men and women, increasing the mortality rate. LeNet, a deep learning
model, is used in this study to detect lung tumors. The studies were run on a
publicly available dataset made up of CT image data (IQ-OTH/NCCD).
Convolutional neural networks (CNNs) were employed in the experiment for
feature extraction and classification. The proposed system was evaluated on the
Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases dataset;
the success percentage was calculated as 99.51%, with sensitivity of 93% and
specificity of 95%, and better results were obtained compared to existing
methods. Development and validation of algorithms such as ours are important
initial steps in the development of software suites that could be adopted in
routine pathological practices and potentially help reduce the burden on
pathologists.
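
A minimal LeNet-style CNN sketch in PyTorch, assuming 32x32 single-channel
inputs and three output classes; layer sizes follow the classic LeNet-5, not
necessarily the authors' exact configuration:

```python
import torch
import torch.nn as nn

class LeNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNet()(torch.randn(4, 1, 32, 32))  # batch of 4 -> shape (4, 3)
print(logits.shape)
```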
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.11692
|
2023-05-19T14:13:47Z
|
Surgical-VQLA: Transformer with Gated Vision-Language Embedding for
Visual Question Localized-Answering in Robotic Surgery
|
[
"Long Bai",
"Mobarakol Islam",
"Lalithkumar Seenivasan",
"Hongliang Ren"
] |
Despite the availability of computer-aided simulators and recorded videos of
surgical procedures, junior residents still heavily rely on experts to answer
their queries. However, expert surgeons are often overloaded with clinical and
academic workloads and limit their time in answering. For this purpose, we
develop a surgical question-answering system to facilitate robot-assisted
surgical scene and activity understanding from recorded videos. Most of the
existing VQA methods require an object detector and a region-based feature
extractor to extract visual features and fuse them with the embedded text of
the question for answer generation. However, (1) surgical object detection
models are scarce due to small datasets and a lack of bounding box annotations;
(2) the current fusion strategies for heterogeneous modalities like text and
image are naive; and (3) localized answering is missing, which is crucial in complex
surgical scenarios. In this paper, we propose Visual Question
Localized-Answering in Robotic Surgery (Surgical-VQLA) to localize the specific
surgical area during the answer prediction. To deal with the fusion of the
heterogeneous modalities, we design gated vision-language embedding (GVLE) to
build input patches for the Language Vision Transformer (LViT) to predict the
answer. To get localization, we add the detection head in parallel with the
prediction head of the LViT. We also integrate GIoU loss to boost localization
performance by preserving the accuracy of the question-answering model. We
annotate two VQLA datasets by utilizing publicly available surgical videos
from the MICCAI challenges EndoVis-17 and EndoVis-18. Our validation results suggest that
Surgical-VQLA can better understand the surgical scene and localize the
specific area related to the question-answering. GVLE presents an efficient
language-vision embedding technique by showing superior performance over the
existing benchmarks.
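
A minimal sketch of the GIoU loss mentioned above for axis-aligned boxes in
(x1, y1, x2, y2) format; framework-agnostic and not tied to the paper's
detection head:

```python
# GIoU loss: loss = 1 - GIoU, where GIoU = IoU - (enclose - union) / enclose.
import torch

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Intersection
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)
    # Smallest axis-aligned box enclosing both
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    enclose = ((ex2 - ex1) * (ey2 - ey1)).clamp(min=1e-7)
    giou = iou - (enclose - union) / enclose
    return (1.0 - giou).mean()

print(giou_loss(torch.tensor([[0., 0., 2., 2.]]), torch.tensor([[1., 1., 3., 3.]])))
```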
|
[
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG",
"cs.RO"
] | false |
2305.11846
|
2023-05-19T17:38:32Z
|
Any-to-Any Generation via Composable Diffusion
|
[
"Zineng Tang",
"Ziyi Yang",
"Chenguang Zhu",
"Michael Zeng",
"Mohit Bansal"
] |
We present Composable Diffusion (CoDi), a novel generative model capable of
generating any combination of output modalities, such as language, image,
video, or audio, from any combination of input modalities. Unlike existing
generative AI systems, CoDi can generate multiple modalities in parallel and
its input is not limited to a subset of modalities like text or image. Despite
the absence of training datasets for many combinations of modalities, we
propose to align modalities in both the input and output space. This allows
CoDi to freely condition on any input combination and generate any group of
modalities, even if they are not present in the training data. CoDi employs a
novel composable generation strategy which involves building a shared
multimodal space by bridging alignment in the diffusion process, enabling the
synchronized generation of intertwined modalities, such as temporally aligned
video and audio. Highly customizable and flexible, CoDi achieves strong
joint-modality generation quality, and outperforms or is on par with the
unimodal state-of-the-art for single-modality synthesis. The project page with
demonstrations and code is at https://codi-gen.github.io
|
[
"cs.CV",
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | true |
2305.11355
|
2023-05-19T00:14:10Z
|
MD3: The Multi-Dialect Dataset of Dialogues
|
[
"Jacob Eisenstein",
"Vinodkumar Prabhakaran",
"Clara Rivera",
"Dorottya Demszky",
"Devyani Sharma"
] |
We introduce a new dataset of conversational speech representing English from
India, Nigeria, and the United States. The Multi-Dialect Dataset of Dialogues
(MD3) strikes a new balance between open-ended conversational speech and
task-oriented dialogue by prompting participants to perform a series of short
information-sharing tasks. This facilitates quantitative cross-dialectal
comparison, while avoiding the imposition of a restrictive task structure that
might inhibit the expression of dialect features. Preliminary analysis of the
dataset reveals significant differences in syntax and in the use of discourse
markers. The dataset, which will be made publicly available with the
publication of this paper, includes more than 20 hours of audio and more than
200,000 orthographically-transcribed tokens.
|
[
"cs.CL"
] | false |
2305.11374
|
2023-05-19T01:27:29Z
|
Characterizing tradeoffs between teaching via language and
demonstrations in multi-agent systems
|
[
"Dhara Yu",
"Noah D. Goodman",
"Jesse Mu"
] |
Humans teach others about the world through language and demonstration. When
might one of these modalities be more effective than the other? In this work,
we study the factors that modulate the effectiveness of language vs.
demonstration using multi-agent systems to model human communication.
Specifically, we train neural network agents to teach via language or
demonstration in a grounded communication task, manipulating 1) the inherent
difficulty of the task and 2) the competence of the teacher. We find that
teaching by demonstration is more effective in the simplest settings, but
language is more effective as task difficulty increases, due to its ability to
generalize more effectively to unseen scenarios. Overall, these results provide
converging evidence for a tradeoff between language and demonstration as
teaching modalities in humans, and make the novel predictions that
demonstration may be optimal for easy tasks, while language enables
generalization in more challenging settings.
|
[
"cs.CL"
] | false |
2305.11449
|
2023-05-19T06:04:21Z
|
Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer
with Fine-tuning Slow and Fast
|
[
"Yiduo Guo",
"Yaobo Liang",
"Dongyan Zhao",
"Bing Liu",
"Duan Nan"
] |
Existing research has shown that a multilingual pre-trained language model
fine-tuned with one (source) language also performs well on downstream tasks
for non-source languages, even though no fine-tuning is done on these
languages. However, there is a clear gap between the performance of the source
language and that of the non-source languages. This paper analyzes the
fine-tuning process, discovers when the performance gap changes and identifies
which network weights affect the overall performance most. Additionally, the
paper seeks to answer to what extent the gap can be reduced by reducing
forgetting. Based on the analysis results, a method named Fine-tuning slow and
fast with four training policies is proposed to address these issues.
Experimental results show the proposed method outperforms baselines by a clear
margin.
|
[
"cs.CL"
] | false |
2305.11462
|
2023-05-19T06:30:19Z
|
Extending Memory for Language Modelling
|
[
"Anupiya Nugaliyadde"
] |
Breakthroughs in deep learning and memory networks have made major advances
in natural language understanding. Language is sequential and information
carried through the sequence can be captured through memory networks. Learning
the sequence is one of the key aspects in learning the language. However,
memory networks are not capable of holding infinitely long sequences in their
memories and are limited by various constraints such as the vanishing or
exploding gradient problem. Therefore, natural language understanding models
are affected when presented with long sequential text. We introduce the Long
Term Memory network (LTM) to learn from infinitely long sequences. LTM gives
priority to the current input, allowing it to have a high impact. Language
modeling is an important factor in natural language understanding. LTM was
tested on language modeling, which requires long-term memory. LTM is evaluated
on the Penn Treebank dataset, the Google Billion Word dataset, and the
WikiText-2 dataset. We compare LTM with other language models that require
long-term memory.
|
[
"cs.CL"
] | false |
2305.11482
|
2023-05-19T07:24:27Z
|
Enhancing Personalized Dialogue Generation with Contrastive Latent
Variables: Combining Sparse and Dense Persona
|
[
"Yihong Tang",
"Bo Wang",
"Miao Fang",
"Dongming Zhao",
"Kun Huang",
"Ruifang He",
"Yuexian Hou"
] |
The personalized dialogue explores the consistent relationship between
dialogue generation and personality. Existing personalized dialogue agents
model persona profiles from three resources: sparse or dense persona
descriptions and dialogue histories. However, sparse structured persona
attributes are explicit but uninformative, dense persona texts contain rich
persona descriptions with much noise, and dialogue history query is both noisy
and uninformative for persona modeling. In this work, we combine the advantages
of the three resources to obtain a richer and more accurate persona. We design
a Contrastive Latent Variable-based model (CLV) that clusters the dense persona
descriptions into sparse categories, which are combined with the history query
to generate personalized responses. Experimental results on Chinese and English
datasets demonstrate our model's superiority in personalization.
|
[
"cs.CL"
] | false |
2305.11501
|
2023-05-19T08:06:50Z
|
From Alignment to Entailment: A Unified Textual Entailment Framework for
Entity Alignment
|
[
"Yu Zhao",
"Yike Wu",
"Xiangrui Cai",
"Ying Zhang",
"Haiwei Zhang",
"Xiaojie Yuan"
] |
Entity Alignment (EA) aims to find the equivalent entities between two
Knowledge Graphs (KGs). Existing methods usually encode the triples of entities
as embeddings and learn to align the embeddings, which prevents the direct
interaction between the original information of the cross-KG entities.
Moreover, they encode the relational triples and attribute triples of an entity
in heterogeneous embedding spaces, which prevents them from helping each other.
In this paper, we transform both triples into unified textual sequences, and
model the EA task as a bi-directional textual entailment task between the
sequences of cross-KG entities. Specifically, we feed the sequences of two
entities simultaneously into a pre-trained language model (PLM) and propose two
kinds of PLM-based entity aligners that model the entailment probability
between sequences as the similarity between entities. Our approach captures the
unified correlation pattern of two kinds of information between entities, and
explicitly models the fine-grained interaction between original entity
information. The experiments on five cross-lingual EA datasets show that our
approach outperforms the state-of-the-art EA methods and enables the mutual
enhancement of the heterogeneous information. Codes are available at
https://github.com/OreOZhao/TEA.
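
A minimal sketch of the cross-encoder scoring idea, assuming a placeholder
checkpoint and a toy serialization of entity triples (the paper's actual
aligners and prompts differ):

```python
# Sketch: feed the serialized triples of two cross-KG entities jointly into a
# PLM and read an entailment-style probability as their similarity.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # untrained head; placeholder checkpoint

e1 = "Paris capital_of France ; located_in Europe ; population 2.1M"
e2 = "Paris (ville) capitale_de France ; situe_en Europe"
inputs = tok(e1, e2, return_tensors="pt", truncation=True)
with torch.no_grad():
    prob = torch.softmax(model(**inputs).logits, dim=-1)[0, 1]  # "entails" score
print(f"similarity (entailment probability): {prob:.3f}")
```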
|
[
"cs.CL"
] | false |
2305.11503
|
2023-05-19T08:09:45Z
|
A Topic-aware Summarization Framework with Different Modal Side
Information
|
[
"Xiuying Chen",
"Mingzhe Li",
"Shen Gao",
"Xin Cheng",
"Qiang Yang",
"Qishen Zhang",
"Xin Gao",
"Xiangliang Zhang"
] |
Automatic summarization plays an important role in the exponential document
growth on the Web. On content websites such as CNN.com and WikiHow.com, there
often exist various kinds of side information along with the main document for
attention attraction and easier understanding, such as videos, images, and
queries. Such information can be used for better summarization, as they often
explicitly or implicitly mention the essence of the article. However, most of
the existing side-aware summarization methods are designed to incorporate
either single-modal or multi-modal side information, and cannot effectively
adapt to each other. In this paper, we propose a general summarization
framework, which can flexibly incorporate various modalities of side
information. The main challenges in designing a flexible summarization model
with side information include: (1) the side information can be in textual or
visual format, and the model needs to align and unify it with the document into
the same semantic space, (2) the side inputs can contain information from
various aspects, and the model should recognize the aspects useful for
summarization. To address these two challenges, we first propose a unified
topic encoder, which jointly discovers latent topics from the document and
various kinds of side information. The learned topics flexibly bridge and guide
the information flow between multiple inputs in a graph encoder through a
topic-aware interaction. Second, we propose a triplet contrastive learning
mechanism to align the single-modal or multi-modal information into a unified
semantic space, where the summary quality is enhanced by better understanding
the document and side information. Results show that our model significantly
surpasses strong baselines on three public single-modal or multi-modal
benchmark summarization datasets.
|
[
"cs.CL"
] | false |
2305.11516
|
2023-05-19T08:27:17Z
|
Contextualized Word Vector-based Methods for Discovering Semantic
Differences with No Training nor Word Alignment
|
[
"Ryo Nagata",
"Hiroya Takamura",
"Naoki Otani",
"Yoshifumi Kawasaki"
] |
In this paper, we propose methods for discovering semantic differences in
words appearing in two corpora based on the norms of contextualized word
vectors. The key idea is that the coverage of meanings is reflected in the norm
of its mean word vector. The proposed methods do not require the assumptions
concerning words and corpora for comparison that previous methods do. All
they require is to compute the mean vector of contextualized word vectors and
its norm for each word type. Nevertheless, they are (i) robust to skew in
corpus size; (ii) capable of detecting semantic differences in infrequent
words; and (iii) effective in pinpointing word instances that have a meaning
missing in one of the two corpora for comparison. We show these advantages for
native and non-native English corpora and also for historical corpora.
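
A minimal sketch of the core computation, assuming BERT and a single-subword
target word; the corpora here are toy stand-ins:

```python
# Sketch: for a target word, average its contextualized vectors over all
# occurrences in a corpus and take the norm of the mean vector.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def mean_vector_norm(word: str, sentences: list[str]) -> float:
    wid = tok.convert_tokens_to_ids(word)  # assumes the word is one subword
    vecs = []
    for s in sentences:
        enc = tok(s, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        for pos, tid in enumerate(enc["input_ids"][0].tolist()):
            if tid == wid:
                vecs.append(hidden[pos])
    return torch.stack(vecs).mean(dim=0).norm().item()

corpus_a = ["The bank approved the loan.", "She works at the bank."]
corpus_b = ["We sat on the river bank.", "The bank of the stream was muddy."]
print(mean_vector_norm("bank", corpus_a), mean_vector_norm("bank", corpus_b))
```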
|
[
"cs.CL"
] | false |
2305.11553
|
2023-05-19T09:53:45Z
|
Unsupervised Scientific Abstract Segmentation with Normalized Mutual
Information
|
[
"Yingqiang Gao",
"Jessica Lam",
"Nianlong Gu",
"Richard H. R. Hahnloser"
] |
The abstracts of scientific papers consist of premises and conclusions.
Structured abstracts explicitly highlight the conclusion sentences, whereas
non-structured abstracts may have conclusion sentences at uncertain positions.
This implicit nature of conclusion positions makes the automatic segmentation
of scientific abstracts into premises and conclusions a challenging task. In
this work, we empirically explore using Normalized Mutual Information (NMI) for
abstract segmentation. We consider each abstract as a recurrent cycle of
sentences and place segmentation boundaries by greedily optimizing the NMI
score between premises and conclusions. On non-structured abstracts, our
proposed unsupervised approach GreedyCAS achieves the best performance across
all evaluation metrics; on structured abstracts, GreedyCAS outperforms all
baseline methods measured by $P_k$. The strong correlation of NMI to our
evaluation metrics reveals the effectiveness of NMI for abstract segmentation.
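
A minimal sketch of the boundary search, with the NMI-based scorer stubbed out
by a placeholder cue-word score (the paper's premise/conclusion NMI computation
and the cyclic treatment of sentences are simplified away here):

```python
# Sketch: score every candidate conclusion span and keep the best split.
from itertools import combinations

def score_split(premises: list[str], conclusions: list[str]) -> float:
    # Placeholder scorer; the paper optimizes NMI between premises and
    # conclusions instead of counting cue words.
    cues = ("therefore", "we show", "results", "conclude", "demonstrate")
    return sum(any(c in s.lower() for c in cues) for s in conclusions)

def segment(sentences: list[str]):
    best, best_score = None, float("-inf")
    for i, j in combinations(range(len(sentences)), 2):  # conclusion span [i, j)
        s = score_split(sentences[:i] + sentences[j:], sentences[i:j])
        if s > best_score:
            best, best_score = (i, j), s
    return best

sents = ["We study X.", "Prior work fails on Y.",
         "Results demonstrate Z.", "We conclude W."]
print(segment(sents))  # -> boundaries of the conclusion span
```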
|
[
"cs.CL"
] | false |
2305.11603
|
2023-05-19T11:30:37Z
|
Attributable and Scalable Opinion Summarization
|
[
"Tom Hosking",
"Hao Tang",
"Mirella Lapata"
] |
We propose a method for unsupervised opinion summarization that encodes
sentences from customer reviews into a hierarchical discrete latent space, then
identifies common opinions based on the frequency of their encodings. We are
able to generate both abstractive summaries by decoding these frequent
encodings, and extractive summaries by selecting the sentences assigned to the
same frequent encodings. Our method is attributable, because the model
identifies sentences used to generate the summary as part of the summarization
process. It scales easily to many hundreds of input reviews, because
aggregation is performed in the latent space rather than over long sequences of
tokens. We also demonstrate that our approach enables a degree of control,
generating aspect-specific summaries by restricting the model to parts of the
encoding space that correspond to desired aspects (e.g., location or food).
Automatic and human evaluation on two datasets from different domains
demonstrates that our method generates summaries that are more informative than
prior work and better grounded in the input reviews.
|
[
"cs.CL"
] | false |
2305.11673
|
2023-05-19T13:38:53Z
|
Bias Beyond English: Counterfactual Tests for Bias in Sentiment Analysis
in Four Languages
|
[
"Seraphina Goldfarb-Tarrant",
"Adam Lopez",
"Roi Blanco",
"Diego Marcheggiani"
] |
Sentiment analysis (SA) systems are used in many products and hundreds of
languages. Gender and racial biases are well-studied in English SA systems, but
understudied in other languages, with few resources for such studies. To remedy
this, we build a counterfactual evaluation corpus for gender and racial/migrant
bias in four languages. We demonstrate its usefulness by answering a simple but
important question that an engineer might need to answer when deploying a
system: What biases do systems import from pre-trained models when compared to
a baseline with no pre-training? Our evaluation corpus, by virtue of being
counterfactual, not only reveals which models have less bias, but also
pinpoints changes in model bias behaviour, which enables more targeted
mitigation strategies. We release our code and evaluation corpora to facilitate
future research.
|
[
"cs.CL"
] | false |
2305.11725
|
2023-05-19T15:01:48Z
|
S$^3$HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid
Question Answering
|
[
"Fangyu Lei",
"Xiang Li",
"Yifan Wei",
"Shizhu He",
"Yiming Huang",
"Jun Zhao",
"Kang Liu"
] |
Answering multi-hop questions over hybrid factual knowledge from the given
text and table (TextTableQA) is a challenging task. Existing models mainly
adopt a retriever-reader framework, which has several deficiencies, such as
noisy labeling in training the retriever, insufficient utilization of
heterogeneous information over text and table, and deficient ability for
different reasoning operations. In this paper, we propose a three-stage
TextTableQA framework, S$^3$HQA, which comprises a retriever, a selector, and a
reasoner. We use a retriever
with refinement training to solve the noisy labeling problem. Then, a hybrid
selector considers the linked relationships between heterogeneous data to
select the most relevant factual knowledge. For the final stage, instead of
adopting a reading comprehension module as in previous methods, we employ a
generation-based reasoner to obtain answers. This includes two approaches: a
row-wise generator and an LLM prompting generator (used for the first time in
this task). The experimental results demonstrate that our method achieves
competitive results in the few-shot setting. When trained on the full dataset,
our approach outperforms all baseline methods, ranking first on the HybridQA
leaderboard.
|
[
"cs.CL"
] | false |
2305.11761
|
2023-05-19T15:46:08Z
|
ReSeTOX: Re-learning attention weights for toxicity mitigation in
machine translation
|
[
"Javier García Gilabert",
"Carlos Escolano",
"Marta R. Costa-Jussà"
] |
Our proposed method, ReSeTOX (REdo SEarch if TOXic), addresses the issue of
Neural Machine Translation (NMT) generating translation outputs that contain
toxic words not present in the input. The objective is to mitigate the
introduction of toxic language without the need for re-training. In the case of
identified added toxicity during the inference process, ReSeTOX dynamically
adjusts the key-value self-attention weights and re-evaluates the beam search
hypotheses. Experimental results demonstrate that ReSeTOX achieves a remarkable
57% reduction in added toxicity while maintaining an average translation
quality of 99.5% across 164 languages.
|
[
"cs.CL"
] | false |
2305.11791
|
2023-05-19T16:25:43Z
|
Enhancing Few-shot NER with Prompt Ordering based Data Augmentation
|
[
"Huiming Wang",
"Liying Cheng",
"Wenxuan Zhang",
"De Wen Soh",
"Lidong Bing"
] |
Recently, data augmentation (DA) methods have been proven to be effective for
pre-trained language models (PLMs) in low-resource settings, including few-shot
named entity recognition (NER). However, conventional NER DA methods are mostly
aimed at sequence labeling models, i.e., token-level classification, and few
are compatible with unified autoregressive generation frameworks, which can
handle a wider range of NER tasks, such as nested NER. Furthermore, these
generation frameworks have a strong assumption that the entities will appear in
the target sequence with the same left-to-right order as the source sequence.
In this paper, we claim that there is no need to keep this strict order, and
more diversified but reasonable target entity sequences can be provided during
the training stage as a novel DA method. Nevertheless, a naive mixture of
augmented data can confuse the model since one source sequence will then be
paired with different target sequences. Therefore, we propose a simple but
effective Prompt Ordering based Data Augmentation (PODA) method to improve the
training of unified autoregressive generation frameworks under few-shot NER
scenarios. Experimental results on three public NER datasets and further
analyses demonstrate the effectiveness of our approach.
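
A minimal sketch of the ordering-based augmentation idea: emit several
permutations of the same entity set as alternative target sequences. The
serialization format and helper below are illustrative, not the paper's
prompts:

```python
# Sketch: instead of one left-to-right target entity sequence, produce several
# reasonable orderings of the same entities as extra training targets.
import itertools
import random

def augment_targets(entities: list[tuple[str, str]], k: int = 3) -> list[str]:
    perms = list(itertools.permutations(entities))
    random.shuffle(perms)
    return [" ; ".join(f"{text} is {label}" for text, label in p)
            for p in perms[:k]]

ents = [("Paris", "LOC"), ("Marie Curie", "PER"), ("UNESCO", "ORG")]
for target in augment_targets(ents):
    print(target)
```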
|
[
"cs.CL"
] | false |
2305.11806
|
2023-05-19T16:42:17Z
|
The Inside Story: Towards Better Understanding of Machine Translation
Neural Evaluation Metrics
|
[
"Ricardo Rei",
"Nuno M. Guerreiro",
"Marcos Treviso",
"Luisa Coheur",
"Alon Lavie",
"André F. T. Martins"
] |
Neural metrics for machine translation evaluation, such as COMET, exhibit
significant improvements in their correlation with human judgments, as compared
to traditional metrics based on lexical overlap, such as BLEU. Yet, neural
metrics are, to a great extent, "black boxes" returning a single sentence-level
score without transparency about the decision-making process. In this work, we
develop and compare several neural explainability methods and demonstrate their
effectiveness for interpreting state-of-the-art fine-tuned neural metrics. Our
study reveals that these metrics leverage token-level information that can be
directly attributed to translation errors, as assessed through comparison of
token-level neural saliency maps with Multidimensional Quality Metrics (MQM)
annotations and with synthetically-generated critical translation errors. To
ease future research, we release our code at:
https://github.com/Unbabel/COMET/tree/explainable-metrics.
|
[
"cs.CL"
] | false |
2305.11808
|
2023-05-19T16:45:19Z
|
Pseudo-Label Training and Model Inertia in Neural Machine Translation
|
[
"Benjamin Hsu",
"Anna Currey",
"Xing Niu",
"Maria Nădejde",
"Georgiana Dinu"
] |
Like many other machine learning applications, neural machine translation
(NMT) benefits from over-parameterized deep neural models. However, these
models have been observed to be brittle: NMT model predictions are sensitive to
small input changes and can show significant variation across re-training or
incremental model updates. This work studies a frequently used method in NMT,
pseudo-label training (PLT), which is common to the related techniques of
forward-translation (or self-training) and sequence-level knowledge
distillation. While the effect of PLT on quality is well-documented, we
highlight a lesser-known effect: PLT can enhance a model's stability to model
updates and input perturbations, a set of properties we call model inertia. We
study inertia effects under different training settings and we identify
distribution simplification as a mechanism behind the observed results.
|
[
"cs.CL"
] | false |
2305.11859
|
2023-05-19T17:49:19Z
|
Complex Claim Verification with Evidence Retrieved in the Wild
|
[
"Jifan Chen",
"Grace Kim",
"Aniruddh Sriram",
"Greg Durrett",
"Eunsol Choi"
] |
Evidence retrieval is a core part of automatic fact-checking. Prior work
makes simplifying assumptions in retrieval that depart from real-world use
cases: either no access to evidence, access to evidence curated by a human
fact-checker, or access to evidence available long after the claim has been
made. In this work, we present the first fully automated pipeline to check
real-world claims by retrieving raw evidence from the web. We restrict our
retriever to only search documents available prior to the claim's making,
modeling the realistic scenario where an emerging claim needs to be checked.
Our pipeline includes five components: claim decomposition, raw document
retrieval, fine-grained evidence retrieval, claim-focused summarization, and
veracity judgment. We conduct experiments on complex political claims in the
ClaimDecomp dataset and show that the aggregated evidence produced by our
pipeline improves veracity judgments. Human evaluation finds the evidence
summary produced by our system is reliable (it does not hallucinate
information) and relevant to answering key questions about a claim, suggesting
that it can assist fact-checkers even when it cannot surface a complete
evidence set.
|
[
"cs.CL"
] | false |
2305.11948
|
2023-05-19T18:08:45Z
|
Eye-SpatialNet: Spatial Information Extraction from Ophthalmology Notes
|
[
"Surabhi Datta",
"Tasneem Kaochar",
"Hio Cheng Lam",
"Nelly Nwosu",
"Luca Giancardo",
"Alice Z. Chuang",
"Robert M. Feldman",
"Kirk Roberts"
] |
We introduce an annotated corpus of 600 ophthalmology notes labeled with
detailed spatial and contextual information of ophthalmic entities. We extend
our previously proposed frame semantics-based spatial representation schema,
Rad-SpatialNet, to represent spatial language in ophthalmology text, resulting
in the Eye-SpatialNet schema. The spatially-grounded entities are findings,
procedures, and drugs. To accurately capture all spatial details, we add some
domain-specific elements in Eye-SpatialNet. The annotated corpus contains 1715
spatial triggers, 7308 findings, 2424 anatomies, and 9914 descriptors. To
automatically extract the spatial information, we employ a two-turn question
answering approach based on the transformer language model BERT. The results
are promising, with F1 scores of 89.31, 74.86, and 88.47 for spatial triggers,
Figure, and Ground frame elements, respectively. This is the first work to
represent and extract a wide variety of clinical information in ophthalmology.
Extracting detailed information can benefit ophthalmology applications and
research targeted toward disease progression and screening.
|
[
"cs.CL"
] | false |
2305.11952
|
2023-05-19T18:26:26Z
|
Self-QA: Unsupervised Knowledge Guided Language Model Alignment
|
[
"Xuanyu Zhang",
"Qing Yang"
] |
Large-scale language models like ChatGPT and GPT-4 have gained attention for
their impressive conversational and generative capabilities. However, the
creation of supervised paired question-answering data for instruction tuning
presents formidable challenges. This endeavor necessitates substantial human
effort for data annotation and wrestles with issues concerning data quality,
diversity, accuracy, and other related factors. To overcome these obstacles, we
introduce an innovative framework named Self-QA, which replaces the traditional
practice of human-written instruction seeds with a vast amount of unsupervised
knowledge, enabling the model to generate a larger quantity of correct and
domain-specific instruction data. The effectiveness of our proposed method is
demonstrated through experiments conducted on unsupervised corpora from various
domains.
|
[
"cs.CL"
] | false |
2305.11979
|
2023-05-19T19:53:54Z
|
A Weak Supervision Approach for Few-Shot Aspect Based Sentiment Analysis
|
[
"Robert Vacareanu",
"Siddharth Varia",
"Kishaloy Halder",
"Shuai Wang",
"Giovanni Paolini",
"Neha Anna John",
"Miguel Ballesteros",
"Smaranda Muresan"
] |
We explore how weak supervision on abundant unlabeled data can be leveraged
to improve few-shot performance in aspect-based sentiment analysis (ABSA)
tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and we
use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. We
test the resulting model on three widely used ABSA datasets, before and after
fine-tuning. Our proposed method preserves the full fine-tuning performance
while showing significant improvements (15.84% absolute F1) in the few-shot
learning scenario for the harder tasks. In zero-shot (i.e., without
fine-tuning), our method outperforms the previous state of the art on the
aspect extraction sentiment classification (AESC) task and is, additionally,
capable of performing the harder aspect sentiment triplet extraction (ASTE)
task.
|
[
"cs.CL"
] | false |
2305.12000
|
2023-05-19T20:56:22Z
|
Deep Learning Approaches to Lexical Simplification: A Survey
|
[
"Kai North",
"Tharindu Ranasinghe",
"Matthew Shardlow",
"Marcos Zampieri"
] |
Lexical Simplification (LS) is the task of replacing complex words with simpler
ones in a sentence whilst preserving the sentence's original meaning. LS is
the lexical component of Text Simplification (TS) with the aim of making texts
more accessible to various target populations. A past survey (Paetzold and
Specia, 2017) has provided a detailed overview of LS. Since this survey,
however, the AI/NLP community has been taken by storm by recent advances in
deep learning, particularly with the introduction of large language models
(LLM) and prompt learning. The high performance of these models sparked renewed
interest in LS. To reflect these recent advances, we present a comprehensive
survey of papers published between 2017 and 2023 on LS and its sub-tasks with a
special focus on deep learning. We also present benchmark datasets for the
future development of LS systems.
|
[
"cs.CL"
] | false |
2305.12002
|
2023-05-19T21:01:20Z
|
XuanYuan 2.0: A Large Chinese Financial Chat Model with Hundreds of
Billions Parameters
|
[
"Xuanyu Zhang",
"Qing Yang",
"Dongliang Xu"
] |
In recent years, pre-trained language models have undergone rapid development
with the emergence of large-scale models. However, there is a lack of
open-sourced chat models specifically designed for the Chinese language,
especially in the field of Chinese finance, at the scale of hundreds of
billions. To address this gap, we introduce XuanYuan 2.0, the largest Chinese
chat model to date, built upon the BLOOM-176B architecture. Additionally, we
propose a novel training method called hybrid-tuning to mitigate catastrophic
forgetting. By combining general-domain with domain-specific knowledge and
integrating the stages of pre-training and fine-tuning, XuanYuan 2.0 is capable
of providing accurate and contextually appropriate responses in the Chinese
financial domain.
|
[
"cs.CL"
] | false |
2305.12018
|
2023-05-19T22:02:55Z
|
BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases
|
[
"Xin Liu",
"Muhammad Khalifa",
"Lu Wang"
] |
Energy-based models (EBMs) have gained popularity for controlled text
generation due to their high applicability to a wide range of constraints.
However, sampling from EBMs is non-trivial, as it often requires a large number
of iterations to converge to plausible text, which slows down the decoding
process and makes it less practical for real-world applications. In this work,
we propose BOLT, which relies on tunable biases to directly adjust the language
model's output logits. Unlike prior work, BOLT maintains the generator's
autoregressive nature to assert a strong control on token-wise conditional
dependencies and overall fluency, and thus converges faster. When compared with
state-of-the-arts on controlled generation tasks using both soft constraints
(e.g., sentiment control) and hard constraints (e.g., keyword-guided topic
control), BOLT demonstrates significantly improved efficiency and fluency. On
sentiment control, BOLT is 7x faster than competitive baselines, and more
fluent in 74.4% of the evaluation samples according to human judges.
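
A minimal sketch of decoding with a tunable bias added to the LM's output
logits, in the spirit described above; GPT-2 stands in for the base model, and
the single-keyword energy is a toy constraint, not the paper's soft/hard
constraint functions:

```python
# Sketch: a learnable bias over the vocabulary is added to next-token logits
# and nudged by gradient steps on a constraint energy while decoding stays
# autoregressive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

bias = torch.zeros(lm.config.vocab_size, requires_grad=True)
opt = torch.optim.Adam([bias], lr=0.5)
keyword_id = tok.encode(" ocean")[0]  # toy constraint: favor one keyword

ids = tok("The weather today is", return_tensors="pt").input_ids
for _ in range(5):  # a few short decoding steps
    with torch.no_grad():
        base = lm(ids).logits[:, -1, :]          # frozen LM logits
    energy = -torch.log_softmax(base + bias, dim=-1)[0, keyword_id]
    opt.zero_grad(); energy.backward(); opt.step()  # update only the bias
    next_id = (base + bias).argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)
print(tok.decode(ids[0]))
```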
|
[
"cs.CL"
] | false |
2306.05552
|
2023-05-19T02:09:52Z
|
ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text
Ambiguation to Expand Mental Health Care Delivery
|
[
"Anaelia Ovalle",
"Mehrab Beikzadeh",
"Parshan Teimouri",
"Kai-Wei Chang",
"Majid Sarrafzadeh"
] |
Large language models have been useful in expanding mental health care
delivery. ChatGPT, in particular, has gained popularity for its ability to
generate human-like dialogue. However, data-sensitive domains -- including but
not limited to healthcare -- face challenges in using ChatGPT due to privacy
and data-ownership concerns. To enable its utilization, we propose a text
ambiguation framework that preserves user privacy. We ground this in the task
of addressing stress prompted by user-provided texts to demonstrate the
viability and helpfulness of privacy-preserved generations. Our results suggest
that ChatGPT recommendations are still moderately helpful and
relevant, even when the original user text is not provided.
|
[
"cs.CL",
"I.2; I.7"
] | false |
2305.11398
|
2023-05-19T02:51:25Z
|
Comfort Foods and Community Connectedness: Investigating Diet Change
during COVID-19 Using YouTube Videos on Twitter
|
[
"Yelena Mejova",
"Lydia Manikonda"
] |
Unprecedented lockdowns at the start of the COVID-19 pandemic have
drastically changed the routines of millions of people, potentially impacting
important health-related behaviors. In this study, we use YouTube videos
embedded in tweets about diet, exercise and fitness posted before and during
COVID-19 to investigate the influence of the pandemic lockdowns on diet and
nutrition. In particular, we examine the nutritional profile of the foods
mentioned in the transcript, description and title of each video in terms of
six macronutrients (protein, energy, fat, sodium, sugar, and saturated fat).
These macronutrient values were further linked to demographics to assess if
there are specific effects on those potentially having insufficient access to
healthy sources of food. Interrupted time series analysis revealed a
considerable shift in the aggregated macronutrient scores before and during
COVID-19. In particular, whereas areas with lower incomes showed a decrease in
energy, fat, and saturated fat, those with a higher percentage of African
Americans showed an elevation in sodium. Word2Vec word similarities and odds
ratio analysis suggested a shift from popular diets and lifestyle bloggers
before the lockdowns to the interest in a variety of healthy foods, communal
sharing of quick and easy recipes, as well as a new emphasis on comfort foods.
To the best of our knowledge, this work is novel in terms of linking attention
signals in tweets, content of videos, their nutrients profile, and aggregate
demographics of the users. The insights made possible by this combination of
resources are important for monitoring the secondary health effects of social
distancing, and informing social programs designed to alleviate these effects.
|
[
"cs.SI",
"cs.CL"
] | false |
2305.11438
|
2023-05-19T05:39:41Z
|
Phonetic and Prosody-aware Self-supervised Learning Approach for
Non-native Fluency Scoring
|
[
"Kaiqi Fu",
"Shaojun Gao",
"Shuju Shi",
"Xiaohai Tian",
"Wei Li",
"Zejun Ma"
] |
Speech fluency/disfluency can be evaluated by analyzing a range of phonetic
and prosodic features. Deep neural networks are commonly trained to map
fluency-related features into the human scores. However, the effectiveness of
deep learning-based models is constrained by the limited amount of labeled
training samples. To address this, we introduce a self-supervised learning
(SSL) approach that takes into account phonetic and prosody awareness for
fluency scoring. Specifically, we first pre-train the model using a
reconstruction loss function, by masking phones and their durations jointly on
a large amount of unlabeled speech and text prompts. We then fine-tune the
pre-trained model using human-annotated scoring data. Our experimental results,
conducted on datasets such as Speechocean762 and our non-native datasets, show
that our proposed method outperforms the baseline systems in terms of Pearson
correlation coefficients (PCC). Moreover, we also conduct an ablation study to
better understand the contribution of phonetic and prosody factors during the
pre-training stage.
|
[
"cs.CL",
"eess.AS"
] | false |
2305.11480
|
2023-05-19T07:16:04Z
|
CCGen: Explainable Complementary Concept Generation in E-Commerce
|
[
"Jie Huang",
"Yifan Gao",
"Zheng Li",
"Jingfeng Yang",
"Yangqiu Song",
"Chao Zhang",
"Zining Zhu",
"Haoming Jiang",
"Kevin Chen-Chuan Chang",
"Bing Yin"
] |
We propose and study Complementary Concept Generation (CCGen): given a
concept of interest, e.g., "Digital Cameras", generating a list of
complementary concepts, e.g., 1) Camera Lenses 2) Batteries 3) Camera Cases 4)
Memory Cards 5) Battery Chargers. CCGen is beneficial for various applications
like query suggestion and item recommendation, especially in the e-commerce
domain. To solve CCGen, we propose to train language models to generate ranked
lists of concepts with a two-step training strategy. We also teach the models
to generate explanations by incorporating explanations distilled from large
teacher models. Extensive experiments and analysis demonstrate that our model
can generate high-quality concepts complementary to the input concept while
producing explanations to justify the predictions.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.11517
|
2023-05-19T08:30:11Z
|
DiffuSIA: A Spiral Interaction Architecture for Encoder-Decoder Text
Diffusion
|
[
"Chao-Hong Tan",
"Jia-Chen Gu",
"Zhen-Hua Ling"
] |
Diffusion models have emerged as the new state-of-the-art family of deep
generative models, and their promising potentials for text generation have
recently attracted increasing attention. Existing studies mostly adopt a single
encoder architecture with partially noising processes for conditional text
generation, but its degree of flexibility for conditional modeling is limited.
In fact, the encoder-decoder architecture is naturally more flexible owing to
its detachable encoder and decoder modules, making it extensible to
multilingual and
multimodal generation tasks for conditions and target texts. However, the
encoding process of conditional texts lacks the understanding of target texts.
To this end, a spiral interaction architecture for encoder-decoder text
diffusion (DiffuSIA) is proposed. Concretely, the conditional information from
encoder is designed to be captured by the diffusion decoder, while the target
information from decoder is designed to be captured by the conditional encoder.
These two types of information flow run through multilayer interaction spirally
for deep fusion and understanding. DiffuSIA is evaluated on four text
generation tasks, including paraphrase, text simplification, question
generation, and open-domain dialogue generation. Experimental results show that
DiffuSIA achieves competitive performance among previous methods on all four
tasks, demonstrating the effectiveness and generalization ability of the
proposed method.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.11529
|
2023-05-19T08:53:41Z
|
A Sequence-to-Sequence Approach for Arabic Pronoun Resolution
|
[
"Hanan S. Murayshid",
"Hafida Benhidour",
"Said Kerrache"
] |
This paper proposes a sequence-to-sequence learning approach for Arabic
pronoun resolution, which explores the effectiveness of using advanced natural
language processing (NLP) techniques, specifically Bi-LSTM and the BERT
pre-trained Language Model, in solving the pronoun resolution problem in
Arabic. The proposed approach is evaluated on the AnATAr dataset, and its
performance is compared to several baseline models, including traditional
machine learning models and handcrafted feature-based models. Our results
demonstrate that the proposed model outperforms the baseline models, which
include KNN, logistic regression, and SVM, across all metrics. In addition, we
explore the effectiveness of various modifications to the model, including
concatenating the anaphor text beside the paragraph text as input, adding a
mask to focus on candidate scores, and filtering candidates based on gender and
number agreement with the anaphor. Our results show that these modifications
significantly improve the model's performance, achieving up to 81% MRR and a
71% F1 score, while also demonstrating higher precision, recall, and
accuracy. These findings suggest that the proposed model is an effective
approach to Arabic pronoun resolution and highlight the potential benefits of
leveraging advanced NLP neural models.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.11536
|
2023-05-19T09:07:52Z
|
PORTRAIT: a hybrid aPproach tO cReate extractive ground-TRuth summAry
for dIsaster evenT
|
[
"Piyush Kumar Garg",
"Roshni Chakraborty",
"Sourav Kumar Dandapat"
] |
Disaster summarization approaches provide an overview of the important
information posted during disaster events on social media platforms, such as,
Twitter. However, the type of information posted significantly varies across
disasters depending on several factors like the location, type, severity, etc.
Verification of the effectiveness of disaster summarization approaches still
suffers from the lack of a good spectrum of datasets along with
ground-truth summaries. Existing approaches for ground-truth summary
generation (ground-truth for extractive summarization) rely on the wisdom and
intuition of the annotators. Annotators are provided with a complete set of
input tweets from which a subset of tweets is selected by the annotators for
the summary. This process requires immense human effort and significant time.
Additionally, this intuition-based selection of the tweets might lead to a high
variance in summaries generated across annotators. Therefore, to handle these
challenges, we propose a hybrid (semi-automated) approach (PORTRAIT) where we
partly automate the ground-truth summary generation procedure. This approach
reduces the effort and time of the annotators while ensuring the quality of the
created ground-truth summary. We validate the effectiveness of PORTRAIT on 5
disaster events through quantitative and qualitative comparisons of
ground-truth summaries generated by existing intuitive approaches, a
semi-automated approach, and PORTRAIT. We prepare and release the ground-truth
summaries for 5 disaster events which consist of both natural and man-made
disaster events belonging to 4 different countries. Finally, we provide a study
of the performance of various state-of-the-art summarization approaches on
the ground-truth summaries generated by PORTRAIT using ROUGE-N F1-scores.
|
[
"cs.CL",
"cs.SI"
] | false |
2305.11558
|
2023-05-19T09:56:09Z
|
Blank-regularized CTC for Frame Skipping in Neural Transducer
|
[
"Yifan Yang",
"Xiaoyu Yang",
"Liyong Guo",
"Zengwei Yao",
"Wei Kang",
"Fangjun Kuang",
"Long Lin",
"Xie Chen",
"Daniel Povey"
] |
Neural Transducer and connectionist temporal classification (CTC) are popular
end-to-end automatic speech recognition systems. Due to their frame-synchronous
design, blank symbols are introduced to address the length mismatch between
acoustic frames and output tokens, which might bring redundant computation.
Previous studies managed to accelerate the training and inference of neural
Transducers by discarding frames based on the blank symbols predicted by a
co-trained CTC. However, there is no guarantee that the co-trained CTC can
maximize the ratio of blank symbols. This paper proposes two novel
regularization methods to explicitly encourage more blanks by constraining the
self-loop of non-blank symbols in the CTC. Interestingly, we find that the
frame reduction ratio of the neural Transducer can approach the theoretical
boundary. Experiments on LibriSpeech corpus show that our proposed method
accelerates the inference of neural Transducer by 4 times without sacrificing
performance. Our work is open-sourced and publicly available at
https://github.com/k2-fsa/icefall.
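
A minimal sketch of the blank-based frame skipping that motivates the
regularizers, using random tensors as stand-ins for encoder outputs and CTC
logits; the threshold is illustrative:

```python
# Sketch: discard frames whose CTC blank posterior exceeds a threshold before
# the (more expensive) Transducer decoder runs on them.
import torch

T, D, V = 100, 256, 500          # frames, encoder dim, vocab size (blank = 0)
encoder_out = torch.randn(T, D)  # stand-in for real encoder outputs
ctc_logits = torch.randn(T, V)   # stand-in for co-trained CTC logits

blank_prob = torch.softmax(ctc_logits, dim=-1)[:, 0]
keep = blank_prob < 0.95         # keep frames unlikely to be blank
reduced = encoder_out[keep]
print(f"kept {reduced.shape[0]}/{T} frames "
      f"(reduction ratio {1 - reduced.shape[0] / T:.2f})")
```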
|
[
"eess.AS",
"cs.CL"
] | false |
2305.11592
|
2023-05-19T11:05:55Z
|
IKDSumm: Incorporating Key-phrases into BERT for extractive Disaster
Tweet Summarization
|
[
"Piyush Kumar Garg",
"Roshni Chakraborty",
"Srishti Gupta",
"Sourav Kumar Dandapat"
] |
Online social media platforms, such as Twitter, are one of the most valuable
sources of information during disaster events. Therefore, humanitarian
organizations, government agencies, and volunteers rely on a summary of this
information, i.e., tweets, for effective disaster management. Although there
are several existing supervised and unsupervised approaches for automated tweet
summary approaches, these approaches either require extensive labeled
information or do not incorporate specific domain knowledge of disasters.
Additionally, the most recent approaches to disaster summarization have
proposed BERT-based models to enhance the summary quality. However, for further
improved performance, we introduce the utilization of domain-specific
knowledge, without any human effort, to understand the importance (salience) of
a tweet, which further aids summary creation and improves summary quality. In this
paper, we propose a disaster-specific tweet summarization framework, IKDSumm,
which initially identifies the crucial and important information from each
tweet related to a disaster through key-phrases of that tweet. We identify
these key-phrases by utilizing the domain knowledge (using existing ontology)
of disasters without any human intervention. Further, we utilize these
key-phrases to automatically generate a summary of the tweets. Therefore, given
tweets related to a disaster, IKDSumm ensures fulfillment of the key
summarization objectives, such as information coverage, relevance, and
diversity in the summary, without any human intervention. We evaluate the
performance of IKDSumm
with 8 state-of-the-art techniques on 12 disaster datasets. The evaluation
results show that IKDSumm outperforms existing techniques by approximately
2-79% in terms of ROUGE-N F1-score.
|
[
"cs.CL",
"cs.SI"
] | false |
2305.11598
|
2023-05-19T11:20:37Z
|
Introspective Tips: Large Language Model for In-Context Decision Making
|
[
"Liting Chen",
"Lu Wang",
"Hang Dong",
"Yali Du",
"Jie Yan",
"Fangkai Yang",
"Shuang Li",
"Pu Zhao",
"Si Qin",
"Saravan Rajmohan",
"Qingwei Lin",
"Dongmei Zhang"
] |
The emergence of large language models (LLMs) has substantially influenced
natural language processing, demonstrating exceptional results across various
tasks. In this study, we employ "Introspective Tips" to facilitate LLMs in
self-optimizing their decision-making. By introspectively examining
trajectories, the LLM refines its policy by generating succinct and valuable
tips.
Our method enhances the agent's performance in both few-shot and zero-shot
learning situations by considering three essential scenarios: learning from the
agent's past experiences, integrating expert demonstrations, and generalizing
across diverse games. Importantly, we accomplish these improvements without
fine-tuning the LLM parameters; rather, we adjust the prompt to generalize
insights from the three aforementioned situations. Our framework not only
supports but also emphasizes the advantage of employing LLMs in in-context
decision-making. Experiments involving over 100 games in TextWorld illustrate
the superior performance of our approach.
|
[
"cs.AI",
"cs.CL"
] | true |
2305.11625
|
2023-05-19T12:09:30Z
|
Searching by Code: a New SearchBySnippet Dataset and SnippeR Retrieval
Model for Searching by Code Snippets
|
[
"Ivan Sedykh",
"Dmitry Abulkhanov",
"Nikita Sorokin",
"Sergey Nikolenko",
"Valentin Malykh"
] |
Code search is an important task that has seen many developments in recent
years. However, previous attempts have mostly considered the problem of
searching for code by a text query. We argue that using a code snippet (and
possibly an associated traceback) as a query and looking for answers with
bugfixing instructions and code samples is a natural use case that is not
covered by existing approaches. Moreover, existing datasets use comments
extracted from code rather than full-text descriptions as text, making them
unsuitable for this use case. We present a new SearchBySnippet dataset
implementing the search-by-code use case based on StackOverflow data; it turns
out that in this setting, existing architectures fall short of the simplest
BM25 baseline even after fine-tuning. We present a new single encoder model
SnippeR that outperforms several strong baselines on the SearchBySnippet
dataset with a result of 0.451 Recall@10; we propose the SearchBySnippet
dataset and SnippeR as a new important benchmark for code search evaluation.
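
A minimal sketch of the BM25 baseline for search-by-code, assuming the
rank_bm25 package and a toy corpus of StackOverflow-style posts:

```python
# Sketch: the query is a code snippet plus its traceback; documents are posts
# with bugfixing instructions. Whitespace tokenization keeps the example short.
from rank_bm25 import BM25Okapi

docs = [
    "TypeError: 'NoneType' object is not subscriptable -- check that the "
    "function actually returns a value before indexing",
    "IndexError: list index out of range usually means the list is shorter "
    "than expected; guard with len()",
    "Use json.loads, not json.load, when the input is a str",
]
bm25 = BM25Okapi([d.lower().split() for d in docs])

query = ("data = fetch()\nprint(data[0])  "
         "# TypeError: 'NoneType' object is not subscriptable")
scores = bm25.get_scores(query.lower().split())
print(docs[max(range(len(docs)), key=scores.__getitem__)])
```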
|
[
"cs.CL",
"cs.SE"
] | false |
2305.11626
|
2023-05-19T12:09:49Z
|
CCT-Code: Cross-Consistency Training for Multilingual Clone Detection
and Code Search
|
[
"Nikita Sorokin",
"Dmitry Abulkhanov",
"Sergey Nikolenko",
"Valentin Malykh"
] |
We consider the clone detection and information retrieval problems for source
code, well-known tasks important for any programming language. Although it is
also an important and interesting problem to find code snippets that operate
identically but are written in different programming languages, to the best of
our knowledge multilingual clone detection has not been studied in the literature.
In this work, we formulate the multilingual clone detection problem and present
XCD, a new benchmark dataset produced from the CodeForces submissions dataset.
Moreover, we present a novel training procedure, called cross-consistency
training (CCT), that we apply to train language models on source code in
different programming languages. The resulting CCT-LM model, initialized with
GraphCodeBERT and fine-tuned with CCT, achieves new state of the art,
outperforming existing approaches on the POJ-104 clone detection benchmark with
95.67% MAP and the AdvTest code search benchmark with 47.18% MRR; it also shows
the best results on the newly created multilingual clone detection benchmark
XCD across all programming languages.
|
[
"cs.CL",
"cs.SE"
] | false |
2305.11744
|
2023-05-19T15:30:33Z
|
Inference-time Re-ranker Relevance Feedback for Neural Information
Retrieval
|
[
"Revanth Gangi Reddy",
"Pradeep Dasigi",
"Md Arafat Sultan",
"Arman Cohan",
"Avirup Sil",
"Heng Ji",
"Hannaneh Hajishirzi"
] |
Neural information retrieval often adopts a retrieve-and-rerank framework: a
bi-encoder network first retrieves K (e.g., 100) candidates that are then
re-ranked using a more powerful cross-encoder model to rank the better
candidates higher. The re-ranker generally produces better candidate scores
than the retriever, but is limited to seeing only the top K retrieved
candidates, thus providing no improvements in retrieval performance as measured
by Recall@K. In this work, we leverage the re-ranker to also improve retrieval
by providing inference-time relevance feedback to the retriever. Concretely, we
update the retriever's query representation for a test instance using a
lightweight inference-time distillation of the re-ranker's prediction for that
instance. The distillation loss is designed to bring the retriever's candidate
scores closer to those of the re-ranker. A second retrieval step is then
performed with the updated query vector. We empirically show that our approach,
which can serve arbitrary retrieve-and-rerank pipelines, significantly improves
retrieval recall in multiple domains, languages, and modalities.
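
A minimal sketch of the inference-time distillation step, using random tensors
as stand-ins for the query, candidate embeddings, and re-ranker scores; the
loss form and step counts are illustrative:

```python
# Sketch: nudge the query vector so retriever dot-product scores over the
# top-K candidates match the re-ranker's score distribution, then retrieve again.
import torch
import torch.nn.functional as F

dim, K = 128, 10
cand = torch.randn(K, dim)                   # top-K candidate embeddings
q = torch.randn(dim, requires_grad=True)     # query embedding to update
reranker_scores = torch.randn(K)             # cross-encoder scores (given)

opt = torch.optim.SGD([q], lr=0.1)
target = F.log_softmax(reranker_scores, dim=-1)
for _ in range(20):
    retr = F.log_softmax(cand @ q, dim=-1)   # retriever score distribution
    loss = F.kl_div(retr, target, log_target=True, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()

# A second retrieval step would now use the updated q against the full index.
print("distillation loss after update:", loss.item())
```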
|
[
"cs.IR",
"cs.CL"
] | false |
2305.11759
|
2023-05-19T15:45:29Z
|
Controlling the Extraction of Memorized Data from Large Language Models
via Prompt-Tuning
|
[
"Mustafa Safa Ozdayi",
"Charith Peris",
"Jack FitzGerald",
"Christophe Dupuy",
"Jimit Majmudar",
"Haidar Khan",
"Rahil Parikh",
"Rahul Gupta"
] |
Large Language Models (LLMs) are known to memorize significant portions of
their training data. Parts of this memorized content have been shown to be
extractable by simply querying the model, which poses a privacy risk. We
present a novel approach which uses prompt-tuning to control the extraction
rates of memorized content in LLMs. We present two prompt training strategies
to increase and decrease extraction rates, which correspond to an attack and a
defense, respectively. We demonstrate the effectiveness of our techniques by
using models from the GPT-Neo family on a public benchmark. For the 1.3B
parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in
extraction rate compared to our baseline. Our defense can be tuned to achieve
different privacy-utility trade-offs by a user-specified hyperparameter. We
achieve an extraction rate reduction of up to 97.7% relative to our baseline,
with a perplexity increase of 16.9%.
|
[
"cs.CL",
"cs.AI"
] | true |
2305.11778
|
2023-05-19T16:14:07Z
|
Cross-Lingual Supervision improves Large Language Models Pre-training
|
[
"Andrea Schioppa",
"Xavier Garcia",
"Orhan Firat"
] |
The recent rapid progress in pre-training Large Language Models has relied on
using self-supervised language modeling objectives like next token prediction
or span corruption. On the other hand, Machine Translation Systems are mostly
trained using cross-lingual supervision that requires aligned data between
source and target languages. We demonstrate that pre-training Large Language
Models on a mixture of a self-supervised Language Modeling objective and the
supervised Machine Translation objective, thereby including cross-lingual
parallel data during pre-training, yields models with better in-context
learning abilities. As pre-training is a very resource-intensive process and a
grid search on the best mixing ratio between the two objectives is
prohibitively expensive, we propose a simple yet effective strategy to learn it
during pre-training.
|
[
"cs.CL",
"cs.LG"
] | true |
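To make the mixing-ratio idea concrete, here is a minimal sketch of one way to keep the ratio trainable during pre-training (the sigmoid parameterization and the stand-in losses are assumptions; the paper's actual learning strategy may differ):

```python
import torch

# A trainable logit keeps the LM/MT mixing ratio in (0, 1) via a sigmoid,
# so it can be adjusted by gradients instead of an expensive grid search.
mix_logit = torch.nn.Parameter(torch.zeros(()))
opt = torch.optim.SGD([mix_logit], lr=1e-2)

for step in range(100):
    lm_loss = torch.rand(())   # stand-in self-supervised LM loss
    mt_loss = torch.rand(())   # stand-in supervised MT loss
    alpha = torch.sigmoid(mix_logit)
    total = (1 - alpha) * lm_loss + alpha * mt_loss
    opt.zero_grad(); total.backward(); opt.step()
```

On its own this update merely drifts toward the smaller loss; in practice the ratio would be coupled to a validation signal or additional constraints.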
2305.11779
|
2023-05-19T16:18:00Z
|
DMDD: A Large-Scale Dataset for Dataset Mentions Detection
|
[
"Huitong Pan",
"Qi Zhang",
"Eduard Dragut",
"Cornelia Caragea",
"Longin Jan Latecki"
] |
The recognition of dataset names is a critical task for automatic information
extraction in scientific literature, enabling researchers to understand and
identify research opportunities. However, existing corpora for dataset mention
detection are limited in size and naming diversity. In this paper, we introduce
the Dataset Mentions Detection Dataset (DMDD), the largest publicly available
corpus for this task. DMDD consists of the DMDD main corpus, comprising 31,219
scientific articles with over 449,000 dataset mentions weakly annotated in the
format of in-text spans, and an evaluation set, which comprises 450
scientific articles manually annotated for evaluation purposes. We use DMDD to
establish baseline performance for dataset mention detection and linking. By
analyzing the performance of various models on DMDD, we are able to identify
open problems in dataset mention detection. We invite the community to use our
dataset as a challenge to develop novel dataset mention detection models.
|
[
"cs.CL",
"cs.AI",
"I.2.7"
] | false |
2305.11840
|
2023-05-19T17:30:19Z
|
SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage
Leveraging Generative Models
|
[
"Akshita Jha",
"Aida Davani",
"Chandan K. Reddy",
"Shachi Dave",
"Vinodkumar Prabhakaran",
"Sunipa Dev"
] |
Stereotype benchmark datasets are crucial to detect and mitigate social
stereotypes about groups of people in NLP models. However, existing datasets
are limited in size and coverage, and are largely restricted to stereotypes
prevalent in Western society. This is especially problematic as language
technologies take hold across the globe. To address this gap, we present
SeeGULL, a broad-coverage stereotype dataset, built by utilizing generative
capabilities of large language models such as PaLM, and GPT-3, and leveraging a
globally diverse rater pool to validate the prevalence of those stereotypes in
society. SeeGULL is in English, and contains stereotypes about identity groups
spanning 178 countries across 8 different geo-political regions on 6
continents, as well as state-level identities within the US and India. We also
include fine-grained offensiveness scores for different stereotypes and
demonstrate their global disparities. Furthermore, we include comparative
annotations about the same groups by annotators living in the region vs. those
that are based in North America, and demonstrate that within-region stereotypes
about groups differ from those prevalent in North America. CONTENT WARNING:
This paper contains stereotype examples that may be offensive.
|
[
"cs.CL",
"cs.CY"
] | true |
2305.11841
|
2023-05-19T17:33:38Z
|
How Does Generative Retrieval Scale to Millions of Passages?
|
[
"Ronak Pradeep",
"Kai Hui",
"Jai Gupta",
"Adam D. Lelkes",
"Honglei Zhuang",
"Jimmy Lin",
"Donald Metzler",
"Vinh Q. Tran"
] |
Popularized by the Differentiable Search Index, the emerging paradigm of
generative retrieval re-frames the classic information retrieval problem into a
sequence-to-sequence modeling task, forgoing external indices and encoding an
entire document corpus within a single Transformer. Although many different
approaches have been proposed to improve the effectiveness of generative
retrieval, they have only been evaluated on document corpora on the order of
100k in size. We conduct the first empirical study of generative retrieval
techniques across various corpus scales, ultimately scaling up to the entire MS
MARCO passage ranking task with a corpus of 8.8M passages and evaluating model
sizes up to 11B parameters. We uncover several findings about scaling
generative retrieval to millions of passages; notably, the central importance
of using synthetic queries as document representations during indexing, the
ineffectiveness of existing proposed architecture modifications when accounting
for compute cost, and the limits of naively scaling model parameters with
respect to retrieval performance. While we find that generative retrieval is
competitive with state-of-the-art dual encoders on small corpora, scaling to
millions of passages remains an important and unsolved challenge. We believe
these findings will be valuable for the community to clarify the current state
of generative retrieval, highlight the unique challenges, and inspire new
research directions.
|
[
"cs.IR",
"cs.CL"
] | true |
2305.11864
|
2023-05-19T17:53:12Z
|
North Sámi Dialect Identification with Self-supervised Speech Models
|
[
"Sofoklis Kakouros",
"Katri Hiovain-Asikainen"
] |
The North S\'{a}mi (NS) language encapsulates four primary dialectal variants
that are related but that also have differences in their phonology, morphology,
and vocabulary. The unique geopolitical location of NS speakers means that in
many cases they are bilingual in S\'{a}mi as well as in the dominant state
language: Norwegian, Swedish, or Finnish. This enables us to study the NS
variants both with respect to the spoken state language and their acoustic
characteristics. In this paper, we investigate an extensive set of acoustic
features, including MFCCs and prosodic features, as well as state-of-the-art
self-supervised representations, namely, XLS-R, WavLM, and HuBERT, for the
automatic detection of the four NS variants. In addition, we examine how the
majority state language is reflected in the dialects. Our results show that NS
dialects are influenced by the state language and that the four dialects are
separable, reaching high classification accuracy, especially with the XLS-R
model.
|
[
"eess.AS",
"cs.CL"
] | false |
2305.13331
|
2023-05-19T15:10:36Z
|
A New Benchmark of Aphasia Speech Recognition and Detection Based on
E-Branchformer and Multi-task Learning
|
[
"Jiyang Tang",
"William Chen",
"Xuankai Chang",
"Shinji Watanabe",
"Brian MacWhinney"
] |
Aphasia is a language disorder that affects the speaking ability of millions
of patients. This paper presents a new benchmark for Aphasia speech recognition
and detection tasks using state-of-the-art speech recognition techniques with
the AphasiaBank dataset. Specifically, we introduce two multi-task learning
methods based on the CTC/Attention architecture to perform both tasks
simultaneously. Our system achieves state-of-the-art speaker-level detection
accuracy (97.3%), and a relative WER reduction of 11% for moderate Aphasia
patients. In addition, we demonstrate the generalizability of our approach by
applying it to another disordered speech database, the DementiaBank Pitt
corpus. We will make our all-in-one recipes and pre-trained model publicly
available to facilitate reproducibility. Our standardized data preprocessing
pipeline and open-source recipes enable researchers to compare results
directly, promoting progress in disordered speech processing.
|
[
"eess.AS",
"cs.CL"
] | false |
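One plausible shape for the joint objective behind such CTC/Attention multi-task training (the interpolation weights $\lambda$ and $\beta$ and the decomposition are assumptions, not taken from the paper):

```latex
\mathcal{L} \;=\; \lambda\,\mathcal{L}_{\mathrm{CTC}}
          \;+\; (1-\lambda)\,\mathcal{L}_{\mathrm{attention}}
          \;+\; \beta\,\mathcal{L}_{\mathrm{detection}}
```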
2305.11411
|
2023-05-19T03:48:16Z
|
DUB: Discrete Unit Back-translation for Speech Translation
|
[
"Dong Zhang",
"Rong Ye",
"Tom Ko",
"Mingxuan Wang",
"Yaqian Zhou"
] |
How can speech-to-text translation (ST) perform as well as machine
translation (MT)? The key point is to bridge the modality gap between speech
and text so that useful MT techniques can be applied to ST. Recently, the
approach of representing speech with unsupervised discrete units yields a new
way to ease the modality problem. This motivates us to propose Discrete Unit
Back-translation (DUB) to answer two questions: (1) Is it better to represent
speech with discrete units than with continuous features in direct ST? (2) How
much benefit can useful MT techniques bring to ST? With DUB, the
back-translation technique can successfully be applied on direct ST and obtains
an average boost of 5.5 BLEU on MuST-C En-De/Fr/Es. In the low-resource
language scenario, our method achieves comparable performance to existing
methods that rely on large-scale external data. Code and models are available
at https://github.com/0nutation/DUB.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.11444
|
2023-05-19T05:53:49Z
|
Arukikata Travelogue Dataset
|
[
"Hiroki Ouchi",
"Hiroyuki Shindo",
"Shoko Wakamiya",
"Yuki Matsuda",
"Naoya Inoue",
"Shohei Higashiyama",
"Satoshi Nakamura",
"Taro Watanabe"
] |
We have constructed Arukikata Travelogue Dataset and released it free of
charge for academic research. This dataset is a Japanese text dataset with a
total of over 31 million words, comprising 4,672 Japanese domestic travelogues
and 9,607 overseas travelogues. Before the release of our dataset, there was a
scarcity of widely available travelogue data for research purposes, and each
researcher had to prepare their own data, which hindered the replication of
existing studies and fair comparative analysis of experimental results. Our
dataset enables any researcher to conduct investigations on the same data and
ensures transparency and reproducibility in research. In this paper, we
describe the academic significance, characteristics, and prospects of our
dataset.
|
[
"cs.CL",
"cs.AI",
"cs.DL"
] | false |
2305.11455
|
2023-05-19T06:21:15Z
|
Shattering the Agent-Environment Interface for Fine-Tuning Inclusive
Language Models
|
[
"Wanqiao Xu",
"Shi Dong",
"Dilip Arumugam",
"Benjamin Van Roy"
] |
A centerpiece of the ever-popular reinforcement learning from human feedback
(RLHF) approach to fine-tuning autoregressive language models is the explicit
training of a reward model to emulate human feedback, distinct from the
language model itself. This reward model is then coupled with policy-gradient
methods to dramatically improve the alignment between language model outputs
and desired responses. In this work, we adopt a novel perspective wherein a
pre-trained language model is itself simultaneously a policy, reward function,
and transition function. An immediate consequence of this is that reward
learning and language model fine-tuning can be performed jointly and directly,
without requiring any further downstream policy optimization. While this
perspective does indeed break the traditional agent-environment interface, we
nevertheless maintain that there can be enormous statistical benefits afforded
by bringing to bear traditional algorithmic concepts from reinforcement
learning. Our experiments demonstrate one concrete instance of this through
efficient exploration based on the representation and resolution of epistemic
uncertainty. In order to illustrate these ideas in a transparent manner, we
restrict attention to a simple didactic data generating process and leave for
future work extension to systems of practical scale.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.11460
|
2023-05-19T06:27:16Z
|
Self-Agreement: A Framework for Fine-tuning Language Models to Find
Agreement among Diverse Opinions
|
[
"Shiyao Ding",
"Takayuki Ito"
] |
Finding an agreement among diverse opinions is a challenging topic in
multiagent systems. Recently, large language models (LLMs) have shown great
potential in addressing this challenge due to their remarkable capabilities in
comprehending human opinions and generating human-like text. However, they
typically rely on extensive human-annotated data. In this paper, we propose
Self-Agreement, a novel framework for fine-tuning LLMs to autonomously find
agreement using data generated by the LLM itself. Specifically, our approach
employs the generative pre-trained transformer-3 (GPT-3) to generate multiple
opinions for each question in a question dataset and create several agreement
candidates among these opinions. Then, a bidirectional encoder representations
from transformers (BERT)-based model evaluates the agreement score of each
agreement candidate and selects the one with the highest agreement score. This
process yields a dataset of question-opinion-agreements, which we use to
fine-tune a pre-trained LLM for discovering agreements among diverse opinions.
Remarkably, a pre-trained LLM fine-tuned by our Self-Agreement framework
achieves comparable performance to GPT-3 with only 1/25 of its parameters,
showcasing its ability to identify agreement among various opinions without the
need for human-annotated data.
|
[
"cs.CL",
"cs.AI",
"cs.MA"
] | false |
2305.11498
|
2023-05-19T07:55:37Z
|
Recouple Event Field via Probabilistic Bias for Event Extraction
|
[
"Xingyu Bai",
"Taiqiang Wu",
"Han Guo",
"Zhe Zhao",
"Xuefeng Yang",
"Jiayi Li",
"Weijie Liu",
"Qi Ju",
"Weigang Guo",
"Yujiu Yang"
] |
Event Extraction (EE), aiming to identify and classify event triggers and
arguments from event mentions, has benefited from pre-trained language models
(PLMs). However, existing PLM-based methods ignore the information of
trigger/argument fields, which is crucial for understanding event schemas. To
this end, we propose a Probabilistic reCoupling model enhanced Event extraction
framework (ProCE). Specifically, we first model the syntactic-related event
fields as probabilistic biases, to clarify the event fields from ambiguous
entanglement. Furthermore, considering multiple occurrences of the same
triggers/arguments in EE, we explore probabilistic interaction strategies among
multiple fields of the same triggers/arguments, to recouple the corresponding
clarified distributions and capture more latent information fields. Experiments
on EE datasets demonstrate the effectiveness and generalization of our proposed
approach.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2305.11569
|
2023-05-19T10:15:11Z
|
Language-Universal Phonetic Representation in Multilingual Speech
Pretraining for Low-Resource Speech Recognition
|
[
"Siyuan Feng",
"Ming Tu",
"Rui Xia",
"Chuanzeng Huang",
"Yuxuan Wang"
] |
We improve low-resource ASR by integrating the ideas of multilingual training
and self-supervised learning. Concretely, we leverage an International Phonetic
Alphabet (IPA) multilingual model to create frame-level pseudo labels for
unlabeled speech, and use these pseudo labels to guide hidden-unit BERT
(HuBERT) based speech pretraining in a phonetically-informed manner. The
experiments on the Multilingual Speech (MLS) Corpus show that the proposed
approach consistently outperforms the standard HuBERT on all the target
languages. Moreover, on 3 of the 4 languages, compared to the standard HuBERT,
the approach performs better while saving up to 1.5k hours (75%) of supervised
training data. Our approach outperforms most state-of-the-art methods with much
less pretraining data in terms of hours and language diversity. Compared to
XLSR-53 and a retraining-based multilingual method, our approach performs
better in both full and limited fine-tuning data scenarios.
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
2305.11576
|
2023-05-19T10:24:30Z
|
Language-universal phonetic encoder for low-resource speech recognition
|
[
"Siyuan Feng",
"Ming Tu",
"Rui Xia",
"Chuanzeng Huang",
"Yuxuan Wang"
] |
Multilingual training is effective in improving low-resource ASR, which may
partially be explained by phonetic representation sharing between languages. In
end-to-end (E2E) ASR systems, graphemes are often used as basic modeling
units; however, graphemes may not be ideal for multilingual phonetic sharing.
In this paper, we leverage an International Phonetic Alphabet (IPA) based
language-universal phonetic model to improve low-resource ASR performance, for
the first time within the attention encoder-decoder architecture. We propose an
adaptation method on the phonetic IPA model to further improve the proposed
approach on extreme low-resource languages. Experiments carried out on the
open-source MLS corpus and our internal databases show our approach outperforms
baseline monolingual models and most state-of-the-art works. Our main approach
and adaptation are effective on extremely low-resource languages, even within
domain- and language-mismatched scenarios.
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] | false |
2305.11663
|
2023-05-19T13:24:32Z
|
Algorithmic failure as a humanities methodology: machine learning's
mispredictions identify rich cases for qualitative analysis
|
[
"Jill Walker Rettberg"
] |
This commentary tests a methodology proposed by Munk et al. (2022) for using
failed predictions in machine learning as a method to identify ambiguous and
rich cases for qualitative analysis. Using a dataset describing actions
performed by fictional characters interacting with machine vision technologies
in 500 artworks, movies, novels and videogames, I trained a simple machine
learning algorithm (using the kNN algorithm in R) to predict whether or not an
action was active or passive using only information about the fictional
characters. Predictable actions were generally unemotional and unambiguous
activities where machine vision technologies were treated as simple tools.
Unpredictable actions, that is, actions that the algorithm could not correctly
predict, were more ambivalent and emotionally loaded, with more complex power
relationships between characters and technologies. The results thus support
Munk et al.'s theory that failed predictions can be productively used to
identify rich cases for qualitative analysis. This test goes beyond simply
replicating Munk et al.'s results by demonstrating that the method can be
applied to a broader humanities domain, and that it does not require complex
neural networks but can also work with a simpler machine learning algorithm.
Further research is needed to develop an understanding of what kinds of data
the method is useful for and which kinds of machine learning are most
generative. To support this, the R code required to produce the results is
included so the test can be replicated. The code can also be reused or adapted
to test the method on other datasets.
|
[
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CY",
"J.5"
] | false |
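The commentary trains kNN in R; a minimal Python analog with scikit-learn (synthetic stand-in features, not the actual character annotations) shows how the mispredicted test rows are collected as the "rich cases" for qualitative analysis:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the character features and active/passive labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# The mispredicted rows are flagged for close reading.
rich_cases = np.where(pred != y_te)[0]
print(f"{len(rich_cases)} ambiguous cases out of {len(y_te)}")
```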
2305.11683
|
2023-05-19T14:06:16Z
|
Sensing of inspiration events from speech: comparison of deep learning
and linguistic methods
|
[
"Aki Härmä",
"Ulf Grossekathöfer",
"Okke Ouweltjes",
"Venkata Srikanth Nallanthighal"
] |
A respiratory chest belt sensor can be used to measure the respiratory rate and
other respiratory health parameters. Virtual Respiratory Belt, VRB, algorithms
estimate the belt sensor waveform from speech audio. In this paper we compare
the detection of inspiration events (IE) from respiratory belt sensor data
using a novel neural VRB algorithm and the detections based on time-aligned
linguistic content. The results show the superiority of the VRB method over
word pause detection or grammatical content segmentation. The comparison of
the methods shows that both read and spontaneous speech contain a significant
amount of ungrammatical breathing, that is, breathing events that are not
aligned with grammatically appropriate places in language. This study gives new
insights into the development of VRB methods and adds to the general
understanding of speech breathing behavior. Moreover, a new VRB method, VRBOLA,
for the reconstruction of the continuous breathing waveform is demonstrated.
|
[
"cs.SD",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2305.11926
|
2023-05-19T13:43:36Z
|
MParrotTTS: Multilingual Multi-speaker Text to Speech Synthesis in Low
Resource Setting
|
[
"Neil Shah",
"Vishal Tambrahalli",
"Saiteja Kosgi",
"Niranjan Pedanekar",
"Vineet Gandhi"
] |
We present MParrotTTS, a unified multilingual, multi-speaker text-to-speech
(TTS) synthesis model that can produce high-quality speech. Benefiting from a
modularized training paradigm exploiting self-supervised speech
representations, MParrotTTS adapts to a new language with minimal supervised
data and generalizes to languages not seen while training the self-supervised
backbone. Moreover, without training on any bilingual or parallel examples,
MParrotTTS can transfer voices across languages while preserving the
speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using
a French speaker's voice and accent. We present extensive results on six
languages in terms of speech naturalness and speaker similarity in parallel and
cross-lingual synthesis. The proposed model outperforms the state-of-the-art
multilingual TTS models and baselines, using only a small fraction of
supervised training data. Speech samples from our model can be found at
https://paper2438.github.io/tts/
|
[
"cs.SD",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2305.11458
|
2023-05-19T06:26:18Z
|
A Novel Tensor Factorization-Based Method with Robustness to Inaccurate
Rank Estimation
|
[
"Jingjing Zheng",
"Wenzhe Wang",
"Xiaoqin Zhang",
"Xianta Jiang"
] |
This study aims to solve the over-reliance on the rank estimation strategy in
the standard tensor factorization-based tensor recovery and the problem of a
large computational cost in the standard t-SVD-based tensor recovery. To this
end, we propose a new tensor norm with a dual low-rank constraint, which
utilizes the low-rank prior and rank information at the same time. In the
proposed tensor norm, a series of surrogate functions of the tensor tubal rank
can be used to achieve better performance in harnessing low-rankness within tensor
data. It is proven theoretically that the resulting tensor completion model can
effectively avoid performance degradation caused by inaccurate rank estimation.
Meanwhile, attributed to the proposed dual low-rank constraint, the t-SVD of a
smaller tensor instead of the original big one is computed by using a sample
trick. Based on this, the total cost at each iteration of the optimization
algorithm is reduced to $\mathcal{O}(n^3\log n +kn^3)$ from $\mathcal{O}(n^4)$
achieved with standard methods, where $k$ is an estimate of the true tensor
rank and is far less than $n$. Our method was evaluated on synthetic and
real-world data, and it demonstrated superior performance and efficiency over
several existing state-of-the-art tensor completion methods.
|
[
"cs.LG"
] | false |
2305.11495
|
2023-05-19T07:51:36Z
|
Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank
Approximation
|
[
"Wenjin Qin",
"Hailin Wang",
"Feng Zhang",
"Weijun Ma",
"Jianjun Wang",
"Tingwen Huang"
] |
Within the tensor singular value decomposition (T-SVD) framework, existing
robust low-rank tensor completion approaches have made great achievements in
various areas of science and engineering. Nevertheless, these methods involve
the T-SVD based low-rank approximation, which suffers from high computational
costs when dealing with large-scale tensor data. Moreover, most of them are
only applicable to third-order tensors. Against these issues, in this article,
two efficient low-rank tensor approximation approaches fusing randomized
techniques are first devised under the order-$d$ ($d \geq 3$) T-SVD framework. On
this basis, we then further investigate the robust high-order tensor completion
(RHTC) problem, in which a double nonconvex model along with its corresponding
fast optimization algorithms with convergence guarantees are developed. To the
best of our knowledge, this is the first study to incorporate the randomized
low-rank approximation into the RHTC problem. Empirical studies on large-scale
synthetic and real tensor data illustrate that the proposed method outperforms
other state-of-the-art approaches in terms of both computational efficiency and
estimated precision.
|
[
"cs.LG"
] | false |
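For intuition about fusing randomized techniques into low-rank approximation, here is the standard matrix-case sketch (a Halko-style range finder; the paper's order-$d$ T-SVD variants are not reproduced here):

```python
import numpy as np

def randomized_lowrank(A, k, oversample=5):
    """Randomized low-rank approximation of a matrix: sketch the range of A
    with a Gaussian test matrix, then solve a small SVD problem."""
    rng = np.random.default_rng(0)
    omega = rng.normal(size=(A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ omega)          # approximate range of A
    B = Q.T @ A                             # small (k+p) x n problem
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Toy usage on an exactly rank-40 matrix.
A = np.random.default_rng(1).normal(size=(200, 40)) @ \
    np.random.default_rng(2).normal(size=(40, 300))
U, s, Vt = randomized_lowrank(A, k=40)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)  # near zero
```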
2305.11512
|
2023-05-19T08:22:23Z
|
Enriching Disentanglement: Definitions to Metrics
|
[
"Yivan Zhang",
"Masashi Sugiyama"
] |
Disentangled representation learning is a challenging task that involves
separating multiple factors of variation in complex data. Although various
metrics for learning and evaluating disentangled representations have been
proposed, it remains unclear what these metrics truly quantify and how to
compare them. In this work, we study the definitions of disentanglement given
by first-order equational predicates and introduce a systematic approach for
transforming an equational definition into a compatible quantitative metric
based on enriched category theory. Specifically, we show how to replace (i)
equality with metric or divergence, (ii) logical connectives with order
operations, (iii) universal quantifier with aggregation, and (iv) existential
quantifier with the best approximation. Using this approach, we derive metrics
for measuring the desired properties of a disentangled representation extractor
and demonstrate their effectiveness on synthetic data. Our proposed approach
provides practical guidance for researchers in selecting appropriate evaluation
metrics and designing effective learning algorithms for disentangled
representation learning.
|
[
"cs.LG"
] | false |
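A toy instance of the (i)-(iv) translation (notation assumed): the equational axiom $\forall x \in X.\; f(x) = g(x)$ becomes the quantitative score

```latex
D(f, g) \;=\; \sup_{x \in X} \, d\bigl(f(x),\, g(x)\bigr),
```

where (i) equality is replaced by the metric $d$ and (iii) the universal quantifier by sup-aggregation; dually, (iv) an existential $\exists y.\; \varphi(y)$ becomes the best approximation $\inf_{y} \varphi(y)$, and (ii) a conjunction of clauses becomes an order operation such as $\max$ over the component scores.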
2305.11654
|
2023-05-19T13:09:33Z
|
V2X-Boosted Federated Learning for Cooperative Intelligent
Transportation Systems with Contextual Client Selection
|
[
"Rui Song",
"Lingjuan Lyu",
"Wei Jiang",
"Andreas Festag",
"Alois Knoll"
] |
Machine learning (ML) has revolutionized transportation systems, enabling
autonomous driving and smart traffic services. Federated learning (FL)
overcomes privacy constraints by training ML models in distributed systems,
exchanging model parameters instead of raw data. However, the dynamic states of
connected vehicles affect the network connection quality and influence the FL
performance. To tackle this challenge, we propose a contextual client selection
pipeline that uses Vehicle-to-Everything (V2X) messages to select clients based
on the predicted communication latency. The pipeline includes: (i) fusing V2X
messages, (ii) predicting future traffic topology, (iii) pre-clustering clients
based on local data distribution similarity, and (iv) selecting clients with
minimal latency for future model aggregation. Experiments show that our
pipeline outperforms baselines on various datasets, particularly in non-iid
settings.
|
[
"cs.LG"
] | false |
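A toy sketch of pipeline steps (iii) and (iv), pre-clustering clients by local data-distribution similarity and then picking the lowest predicted latency per cluster (the clustering choice and all values are synthetic assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_clients = 30
data_stats = rng.normal(size=(n_clients, 4))          # local label-distribution summary
pred_latency = rng.uniform(10, 200, size=n_clients)   # predicted comm. latency (ms)

# (iii) cluster clients with similar local data distributions,
# (iv) select the lowest-predicted-latency client in each cluster.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(data_stats)
selected = [int(np.where(labels == c)[0][np.argmin(pred_latency[labels == c])])
            for c in range(5)]
print("clients selected for aggregation:", selected)
```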
2305.11684
|
2023-05-19T14:06:36Z
|
Self-Reinforcement Attention Mechanism For Tabular Learning
|
[
"Kodjo Mawuena Amekoe",
"Mohamed Djallel Dilmi",
"Hanene Azzag",
"Mustapha Lebbah",
"Zaineb Chelly Dagdia",
"Gregoire Jaffre"
] |
Apart from the high accuracy of machine learning models, what interests many
researchers in real-life problems (e.g., fraud detection, credit scoring) is
finding hidden patterns in data, particularly when dealing with their challenging
imbalanced characteristics. Interpretability is also a key requirement that
needs to accompany the used machine learning model. In this concern, often,
intrinsically interpretable models are preferred to complex ones, which are in
most cases black-box models. Also, linear models are used in some high-risk
fields to handle tabular data, even if performance must be sacrificed. In this
paper, we introduce Self-Reinforcement Attention (SRA), a novel attention
mechanism that provides a relevance of features as a weight vector which is
used to learn an intelligible representation. This weight is then used to
reinforce or reduce some components of the raw input through element-wise
vector multiplication. Our results on synthetic and real-world imbalanced data
show that our proposed SRA block is effective in end-to-end combination with
baseline models.
|
[
"cs.LG"
] | false |
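A schematic reading of the SRA block (the way the relevance weights are produced below is an assumption; the paper's exact parameterization may differ): a weight vector is derived from the input itself and applied by element-wise multiplication, so the weights double as a feature-relevance explanation:

```python
import numpy as np

def sra_block(x, W):
    """Toy self-reinforcement attention: compute per-feature relevance
    weights from the input and reweight the raw features with them."""
    scores = x @ W                                           # (batch, d)
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return x * weights, weights      # element-wise reinforcement/reduction

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 5))          # 8 tabular rows, 5 features
W = rng.normal(size=(5, 5)) * 0.1
out, w = sra_block(x, W)             # w is the interpretable relevance part
```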
2305.11726
|
2023-05-19T15:02:10Z
|
Non-stationary Projection-free Online Learning with Dynamic and Adaptive
Regret Guarantees
|
[
"Yibo Wang",
"Wenhao Yang",
"Wei Jiang",
"Shiyin Lu",
"Bing Wang",
"Haihong Tang",
"Yuanyu Wan",
"Lijun Zhang"
] |
Projection-free online learning has drawn increasing interest due to its
efficiency in solving high-dimensional problems with complicated constraints.
However, most existing projection-free online methods focus on minimizing the
static regret, which unfortunately fails to capture the challenge of changing
environments. In this paper, we investigate non-stationary projection-free
online learning, and choose dynamic regret and adaptive regret to measure the
performance. Specifically, we first provide a novel dynamic regret analysis for
an existing projection-free method named $\text{BOGD}_\text{IP}$, and establish
an $\mathcal{O}(T^{3/4}(1+P_T))$ dynamic regret bound, where $P_T$ denotes the
path-length of the comparator sequence. Then, we improve the upper bound to
$\mathcal{O}(T^{3/4}(1+P_T)^{1/4})$ by running multiple $\text{BOGD}_\text{IP}$
algorithms with different step sizes in parallel, and tracking the best one on
the fly. Our results are the first general-case dynamic regret bounds for
projection-free online learning, and can recover the existing
$\mathcal{O}(T^{3/4})$ static regret by setting $P_T = 0$. Furthermore, we
propose a projection-free method to attain an $\tilde{\mathcal{O}}(\tau^{3/4})$
adaptive regret bound for any interval with length $\tau$, which nearly matches
the static regret over that interval. The essential idea is to maintain a set
of $\text{BOGD}_\text{IP}$ algorithms dynamically, and combine them by a meta
algorithm. Moreover, we demonstrate that it is also equipped with an
$\mathcal{O}(T^{3/4}(1+P_T)^{1/4})$ dynamic regret bound. Finally, empirical
studies verify our theoretical findings.
|
[
"cs.LG"
] | false |
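The "track the best one on the fly" step can be illustrated with a generic exponential-weights meta-learner over instances run with different step sizes (an assumption-level sketch; neither $\text{BOGD}_\text{IP}$ nor the paper's actual meta algorithm is reimplemented here):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_experts = 1000, 4
eta = np.sqrt(np.log(n_experts) / T)    # meta learning rate
w = np.ones(n_experts) / n_experts      # uniform prior over step sizes

for t in range(T):
    losses = rng.uniform(size=n_experts)  # stand-in per-expert losses
    # Experts with smaller cumulative loss (better-tuned step sizes)
    # accumulate weight, so the combination tracks the best instance.
    w *= np.exp(-eta * losses)
    w /= w.sum()
```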
2305.11976
|
2023-05-19T19:49:44Z
|
Unsupervised Change Point Detection for heterogeneous sensor signals
|
[
"Mario Krause"
] |
Change point detection is a crucial aspect of analyzing time series data, as
the presence of a change point indicates an abrupt and significant change in
the process generating the data. While many algorithms for the problem of
change point detection have been developed over time, it can be challenging to
select the appropriate algorithm for a specific problem. The choice of the
algorithm heavily depends on the nature of the problem and the underlying data
source. In this paper, we will exclusively examine unsupervised techniques due
to their flexibility in application to various data sources, without the
requirement for abundant annotated training data or model re-calibration. The
examined methods will be introduced and evaluated based on several
criteria to compare the algorithms.
|
[
"cs.LG"
] | false |
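As a concrete example of the kind of unsupervised technique examined here (the library choice and penalty value are illustrative assumptions, not the paper's setup), the ruptures package detects change points with no annotated training data:

```python
import numpy as np
import ruptures as rpt   # assumes the `ruptures` package is installed

# Piecewise-constant toy signal with two change points.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 100),
                         rng.normal(4, 1, 100),
                         rng.normal(-2, 1, 100)])

# PELT with an RBF cost: fully unsupervised change point detection.
algo = rpt.Pelt(model="rbf").fit(signal)
breakpoints = algo.predict(pen=10)   # e.g., [100, 200, 300]
```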