arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---|
2305.14513
|
2023-05-23T20:41:04Z
|
Windscreen Optical Quality for AI Algorithms: Refractive Power and MTF
not Sufficient
|
[
"Dominik Werner Wolf",
"Markus Ulrich",
"Alexander Braun"
] |
Windscreen optical quality is an important aspect of any advanced driver
assistance system, and also for future autonomous driving, as today at least
some cameras of the sensor suite are situated behind the windscreen. Automotive
mass production processes require measurement systems that characterize the
optical quality of the windscreens in a meaningful way, which for modern
perception stacks implies meaningful for artificial intelligence (AI)
algorithms. The measured optical quality needs to be linked to the performance
of these algorithms, such that performance limits - and thus production
tolerance limits - can be defined. In this article we demonstrate that the main
metric established in the industry - refractive power - is fundamentally not
capable of capturing relevant optical properties of windscreens. Further, as
the industry is moving towards the modulation transfer function (MTF) as an
alternative, we mathematically show that this metric cannot be used on
windscreens alone, but that the windscreen forms a novel optical system
together with the optics of the camera system. Hence, the required goal of a
qualification system that is installed at the windscreen supplier and
independently measures the optical quality cannot be achieved using MTF. We
propose a novel concept to determine the optical quality of windscreens and to
use simulation to link this optical quality to the performance of AI
algorithms, which can hopefully lead to novel inspection systems.
|
[
"cs.CV"
] | false |
2305.13665
|
2023-05-23T04:19:16Z
|
Dual Focal Loss for Calibration
|
[
"Linwei Tao",
"Minjing Dong",
"Chang Xu"
] |
The use of deep neural networks in real-world applications requires
well-calibrated networks with confidence scores that accurately reflect the
actual probability. However, it has been found that these networks often
provide over-confident predictions, which leads to poor calibration. Recent
efforts have sought to address this issue by using focal loss to reduce
over-confidence, but this approach can also lead to under-confident
predictions. While different variants of focal loss have been explored, it is
difficult to find a balance between over-confidence and under-confidence. In
our work, we propose a new loss function by focusing on dual logits. Our method
not only considers the ground truth logit, but also takes into account the
highest logit ranked after the ground truth logit. By maximizing the gap
between these two logits, our proposed dual focal loss can achieve a better
balance between over-confidence and under-confidence. We provide theoretical
evidence to support our approach and demonstrate its effectiveness through
evaluations on multiple models and datasets, where it achieves state-of-the-art
performance. Code is available at https://github.com/Linwei94/DualFocalLoss
|
[
"cs.CV",
"cs.AI"
] | false |
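A minimal sketch of the gap-based focusing idea described in the Dual Focal Loss abstract above, assuming a standard softmax classifier; the exact weighting and the value of `gamma` are assumptions, and the authors' reference implementation lives at the linked repository:

```python
import torch
import torch.nn.functional as F

def dual_focal_loss(logits, targets, gamma=2.0):
    """Sketch: focus on the gap between the ground-truth probability and
    the largest non-ground-truth probability, instead of the
    ground-truth probability alone (as in vanilla focal loss)."""
    probs = F.softmax(logits, dim=-1)
    p_gt = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Mask out the ground-truth class and take the runner-up probability.
    masked = probs.clone()
    masked.scatter_(1, targets.unsqueeze(1), -1.0)
    p_runner_up = masked.max(dim=1).values
    # A small gap keeps the focusing weight large, so hard or ambiguous
    # samples continue to contribute to the gradient.
    weight = (1.0 - p_gt + p_runner_up).clamp(min=0.0) ** gamma
    return (-weight * p_gt.clamp(min=1e-12).log()).mean()

# Toy usage:
# loss = dual_focal_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)))
```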
2305.13689
|
2023-05-23T04:54:09Z
|
Know Your Self-supervised Learning: A Survey on Image-based Generative
and Discriminative Training
|
[
"Utku Ozbulak",
"Hyun Jung Lee",
"Beril Boga",
"Esla Timothy Anzaku",
"Homin Park",
"Arnout Van Messem",
"Wesley De Neve",
"Joris Vankerschaver"
] |
Although supervised learning has been highly successful in improving the
state-of-the-art in the domain of image-based computer vision in the past, the
margin of improvement has diminished significantly in recent years, indicating
that a plateau is in sight. Meanwhile, the use of self-supervised learning
(SSL) for the purpose of natural language processing (NLP) has seen tremendous
successes during the past couple of years, with this new learning paradigm
yielding powerful language models. Inspired by the excellent results obtained
in the field of NLP, self-supervised methods that rely on clustering,
contrastive learning, distillation, and information-maximization, which all
fall under the banner of discriminative SSL, have experienced a swift uptake in
the area of computer vision. Shortly afterwards, generative SSL frameworks,
mostly based on masked image modeling, complemented and surpassed the
results obtained with discriminative SSL. Consequently, within a span of three
years, over 100 unique general-purpose frameworks for generative and
discriminative SSL, with a focus on imaging, were proposed. In this survey, we
review a plethora of research efforts conducted on image-oriented SSL,
providing a historic view and paying attention to best practices as well as
useful software packages. While doing so, we discuss pretext tasks for
image-based SSL, as well as techniques that are commonly used in image-based
SSL. Lastly, to aid researchers who aim at contributing to image-focused SSL,
we outline a number of promising research directions.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.13765
|
2023-05-23T07:30:00Z
|
Human Body Pose Estimation for Gait Identification: A Comprehensive
Survey of Datasets and Models
|
[
"Luke K. Topham",
"Wasiq Khan",
"Dhiya Al-Jumeily",
"Abir Hussain"
] |
Person identification is a problem that has received substantial attention,
particularly in security domains. Gait recognition is one of the most
convenient approaches enabling person identification at a distance without the
need for high-quality images. There are several review studies addressing person
identification, such as those utilizing facial images, silhouette images, and
wearable sensors. Although skeleton-based person identification has gained
popularity while overcoming the challenges of traditional approaches, existing
survey studies lack a comprehensive review of skeleton-based approaches to
gait identification. We present a detailed review of the human pose estimation
and gait analysis techniques that make skeleton-based approaches possible. The study
covers various types of related datasets, tools, methodologies, and evaluation
metrics with associated challenges, limitations, and application domains.
Detailed comparisons are presented for each of these aspects with
recommendations for potential research and alternatives. A common trend
throughout this paper is the positive impact that deep learning techniques are
beginning to have on topics such as human pose estimation and gait
identification. The survey outcomes might be useful for the related research
community and other stakeholders in terms of performance analysis of existing
methodologies, potential research gaps, application domains, and possible
contributions in the future.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.13770
|
2023-05-23T07:34:49Z
|
MIPI 2023 Challenge on Nighttime Flare Removal: Methods and Results
|
[
"Yuekun Dai",
"Chongyi Li",
"Shangchen Zhou",
"Ruicheng Feng",
"Qingpeng Zhu",
"Qianhui Sun",
"Wenxiu Sun",
"Chen Change Loy",
"Jinwei Gu"
] |
Developing and integrating advanced image sensors with novel algorithms in
camera systems is prevalent with the increasing demand for computational
photography and imaging on mobile platforms. However, the lack of high-quality
data for research and the rare opportunity for in-depth exchange of views from
industry and academia constrain the development of mobile intelligent
photography and imaging (MIPI). With the success of the 1st MIPI Workshop@ECCV
2022, we introduce the second MIPI challenge including four tracks focusing on
novel image sensors and imaging algorithms. In this paper, we summarize and
review the Nighttime Flare Removal track on MIPI 2023. In total, 120
participants were successfully registered, and 11 teams submitted results in
the final testing phase. The developed solutions in this challenge achieved
state-of-the-art performance on Nighttime Flare Removal. A detailed description
of all models developed in this challenge is provided in this paper. More
details of this challenge and the link to the dataset can be found at
https://mipi-challenge.org/MIPI2023/ .
|
[
"cs.CV",
"eess.IV"
] | false |
2305.13814
|
2023-05-23T08:29:42Z
|
Leveraging BEV Representation for 360-degree Visual Place Recognition
|
[
"Xuecheng Xu",
"Yanmei Jiao",
"Sha Lu",
"Xiaqing Ding",
"Rong Xiong",
"Yue Wang"
] |
This paper investigates the advantages of using Bird's Eye View (BEV)
representation in 360-degree visual place recognition (VPR). We propose a novel
network architecture that utilizes the BEV representation in feature
extraction, feature aggregation, and vision-LiDAR fusion, which bridges visual
cues and spatial awareness. Our method extracts image features using standard
convolutional networks and combines the features according to pre-defined 3D
grid spatial points. To alleviate the mechanical and time misalignments between
cameras, we further introduce deformable attention to learn the compensation.
Upon the BEV feature representation, we then employ the polar transform and the
Discrete Fourier transform for aggregation, which is shown to be
rotation-invariant. In addition, the image and point cloud cues can be easily
expressed in the same coordinate frame, which benefits sensor fusion for place
recognition. The proposed BEV-based method is evaluated in ablation and
comparative studies on two datasets, including on-the-road and off-the-road
scenarios. The experimental results verify the hypothesis that BEV can benefit
VPR through its superior performance compared to baseline methods. To the best of
our knowledge, this is the first attempt to employ the BEV representation in this
task.
|
[
"cs.CV",
"cs.RO"
] | false |
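The rotation-invariant aggregation described in the BEV place-recognition abstract above can be illustrated with a small sketch: a yaw rotation of the BEV map becomes a circular shift along the angular axis after a polar resampling, and the magnitude of the discrete Fourier transform along that axis is invariant to such shifts. The grid sizes and nearest-neighbour sampling below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def polar_resample(bev, n_r=32, n_theta=64):
    """Resample a square BEV feature map (H, W) onto a polar grid."""
    h, w = bev.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(1, min(cy, cx), n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + radii[:, None] * np.sin(thetas)[None, :]
    xs = cx + radii[:, None] * np.cos(thetas)[None, :]
    # Nearest-neighbour lookup keeps the sketch dependency-free.
    return bev[ys.round().astype(int), xs.round().astype(int)]

def rotation_invariant_descriptor(bev):
    polar = polar_resample(bev)
    # |DFT| along the angular axis is invariant to circular shifts,
    # i.e. to yaw rotations of the original BEV map.
    return np.abs(np.fft.fft(polar, axis=1))
```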
2305.13839
|
2023-05-23T09:02:33Z
|
SAR-to-Optical Image Translation via Thermodynamics-inspired Network
|
[
"Mingjin Zhang",
"Jiamin Xu",
"Chengyu He",
"Wenteng Shang",
"Yunsong Li",
"Xinbo Gao"
] |
Synthetic aperture radar (SAR) is prevalent in the remote sensing field but
is difficult for humans to interpret visually. Recently, SAR-to-optical
(S2O) image conversion methods have provided a prospective solution for
interpretation. However, since there is a huge domain difference between
optical and SAR images, they suffer from low image quality and geometric
distortion in the produced optical images. Motivated by the analogy between
pixels during the S2O image translation and molecules in a heat field,
Thermodynamics-inspired Network for SAR-to-Optical Image Translation (S2O-TDN)
is proposed in this paper. Specifically, we design a Third-order Finite
Difference (TFD) residual structure in light of the TFD equation of
thermodynamics, which allows us to efficiently extract inter-domain invariant
features and facilitate the learning of the nonlinear translation mapping. In
addition, we exploit the first law of thermodynamics (FLT) to devise an
FLT-guided branch that promotes the state transition of the feature values from
the unstable diffusion state to the stable one, aiming to regularize the
feature diffusion and preserve image structures during S2O image translation.
S2O-TDN follows an explicit design principle derived from thermodynamic theory
and enjoys the advantage of explainability. Experiments on the public SEN1-2
dataset show the advantages of the proposed S2O-TDN over the current methods
with more delicate textures and higher quantitative results.
|
[
"cs.CV",
"eess.IV"
] | false |
2305.13858
|
2023-05-23T09:27:17Z
|
Producing a Standard Dataset of Speed Climbing Training Videos Using
Deep Learning Techniques
|
[
"Yufei Xie",
"Shaoman Li",
"Penghui Lin"
] |
This dissertation presents a methodology for recording speed climbing
training sessions with multiple cameras and annotating the videos with relevant
data, including body position, hand and foot placement, and timing. The
annotated data is then analyzed using deep learning techniques to create a
standard dataset of speed climbing training videos. The results demonstrate the
potential of the new dataset for improving speed climbing training and
research, including identifying areas for improvement, creating personalized
training plans, and analyzing the effects of different training methods. The
findings will also be applied to the training process of the Jiangxi climbing
team through further empirical research to test the findings and further
explore the feasibility of this study.
|
[
"cs.CV",
"cs.AI",
"68T45",
"I.2.10"
] | false |
2305.13872
|
2023-05-23T09:47:23Z
|
Variational Bayesian Framework for Advanced Image Generation with
Domain-Related Variables
|
[
"Yuxiao Li",
"Santiago Mazuelas",
"Yuan Shen"
] |
Deep generative models (DGMs) and their conditional counterparts provide a
powerful ability for general-purpose generative modeling of data distributions.
However, it remains challenging for existing methods to address advanced
conditional generative problems without annotations, which can enable multiple
applications like image-to-image translation and image editing. We present a
unified Bayesian framework for such problems, which introduces an inference
stage on latent variables within the learning process. In particular, we
propose a variational Bayesian image translation network (VBITN) that enables
multiple image translation and editing tasks. Comprehensive experiments show
the effectiveness of our method on unsupervised image-to-image translation, and
demonstrate the novel advanced capabilities for semantic editing and mixed
domain translation.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.13880
|
2023-05-23T10:01:58Z
|
Generalized Expectation Maximization Framework for Blind Image Super
Resolution
|
[
"Yuxiao Li",
"Zhiming Wang",
"Yuan Shen"
] |
Learning-based methods for blind single image super resolution (SISR) conduct
the restoration by a learned mapping between high-resolution (HR) images and
their low-resolution (LR) counterparts degraded with arbitrary blur kernels.
However, these methods mostly require an independent step to estimate the blur
kernel, leading to error accumulation between steps. We propose an end-to-end
learning framework for the blind SISR problem, which enables image restoration
within a unified Bayesian framework with either full- or semi-supervision. The
proposed method, namely SREMN, integrates learning techniques into the
generalized expectation-maximization (GEM) algorithm and infers HR images from
the maximum likelihood estimation (MLE). Extensive experiments show the
superiority of the proposed method with comparison to existing work and novelty
in semi-supervised learning.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.13886
|
2023-05-23T10:10:49Z
|
Deep Transductive Transfer Learning for Automatic Target Recognition
|
[
"Shoaib M. Sami",
"Nasser M. Nasrabadi",
"Raghuveer Rao"
] |
One of the major obstacles in designing an automatic target recognition (ATR)
algorithm is that there are often labeled images in one domain (i.e., infrared
source domain) but no annotated images in the other target domains (i.e.,
visible, SAR, LIDAR). Therefore, automatically annotating these images is
essential to build a robust classifier in the target domain based on the
labeled images of the source domain. Transductive transfer learning is an
effective way to adapt a network to a new target domain by utilizing a
pretrained ATR network in the source domain. We propose an unpaired
transductive transfer learning framework where a CycleGAN model and a
well-trained ATR classifier in the source domain are used to construct an ATR
classifier in the target domain without having any labeled data in the target
domain. We employ a CycleGAN model to transfer the mid-wave infrared (MWIR)
images to visible (VIS) domain images (or visible to MWIR domain). To train the
transductive CycleGAN, we optimize a cost function consisting of the
adversarial, identity, cycle-consistency, and categorical cross-entropy loss
for both the source and target classifiers. In this paper, we perform a
detailed experimental analysis on the challenging DSIAC ATR dataset. The
dataset consists of ten classes of vehicles at different poses and distances
ranging from 1-5 kilometers on both the MWIR and VIS domains. In our
experiment, we assume that the images in the VIS domain are the unlabeled
target dataset. We first detect and crop the vehicles from the raw images and
then project them into a common distance of 2 kilometers. Our proposed
transductive CycleGAN achieves 71.56% accuracy in classifying the visible
domain vehicles in the DSIAC ATR dataset.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.13948
|
2023-05-23T11:17:45Z
|
Decoupled Kullback-Leibler Divergence Loss
|
[
"Jiequan Cui",
"Zhuotao Tian",
"Zhisheng Zhong",
"Xiaojuan Qi",
"Bei Yu",
"Hanwang Zhang"
] |
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss
and observe that it is equivalent to the Decoupled Kullback-Leibler (DKL)
Divergence loss that consists of 1) a weighted Mean Square Error (wMSE) loss
and 2) a Cross-Entropy loss incorporating soft labels. From our analysis of the
DKL loss, we have identified two areas for improvement. Firstly, we address the
limitation of DKL in scenarios like knowledge distillation by breaking its
asymmetry property in training optimization. This modification ensures that the
wMSE component is always effective during training, providing extra
constructive cues. Secondly, we introduce global information into DKL for
intra-class consistency regularization. With these two enhancements, we derive
the Improved Kullback-Leibler (IKL) Divergence loss and evaluate its
effectiveness by conducting experiments on CIFAR-10/100 and ImageNet datasets,
focusing on adversarial training and knowledge distillation tasks. The proposed
approach achieves new state-of-the-art performance on both tasks, demonstrating
its substantial practical merits. Code and models will be available soon at
https://github.com/jiequancui/DKL.
|
[
"cs.CV",
"cs.LG"
] | false |
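For context, the vanilla KL-divergence loss that the paper decomposes is a one-liner in PyTorch; the decomposition into a weighted MSE term plus a soft-label cross-entropy term, and the IKL improvements, are the paper's contributions and are not reproduced in this sketch. The temperature is an assumed hyperparameter.

```python
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T=4.0):
    """Standard KL-divergence distillation loss, KL(teacher || student),
    computed on temperature-softened distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # "batchmean" matches the mathematical definition of KL divergence;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```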
2305.14008
|
2023-05-23T12:40:28Z
|
Multi-Echo Denoising in Adverse Weather
|
[
"Alvari Seppänen",
"Risto Ojala",
"Kari Tammi"
] |
Adverse weather can introduce noise into light detection and ranging (LiDAR)
data. This is a problem since LiDAR is used in many outdoor applications, e.g.,
object detection and mapping. We propose the task of multi-echo denoising, where the
goal is to pick the echo that represents the objects of interest and discard
other echoes. Thus, the idea is to pick points from alternative echoes that are
not available in standard strongest echo point clouds due to the noise. In an
intuitive sense, we are trying to see through the adverse weather. To achieve
this goal, we propose a novel self-supervised deep learning method and the
characteristics similarity regularization method to boost its performance.
Based on extensive experiments on a semi-synthetic dataset, our method achieves
superior performance compared to the state-of-the-art in self-supervised
adverse weather denoising (23% improvement). Moreover, the experiments with a
real multi-echo adverse weather dataset prove the efficacy of multi-echo
denoising. Our work enables more reliable point cloud acquisition in adverse
weather and thus promises safer autonomous driving and driving assistance
systems in such conditions. The code is available at
https://github.com/alvariseppanen/SMEDNet
|
[
"cs.CV",
"cs.RO"
] | false |
2305.14017
|
2023-05-23T12:53:50Z
|
Faster Video Moment Retrieval with Point-Level Supervision
|
[
"Xun Jiang",
"Zailei Zhou",
"Xing Xu",
"Yang Yang",
"Guoqing Wang",
"Heng Tao Shen"
] |
Video Moment Retrieval (VMR) aims at retrieving the most relevant events from
an untrimmed video with natural language queries. Existing VMR methods suffer
from two defects: (1) massive expensive temporal annotations are required to
obtain satisfying performance; (2) complicated cross-modal interaction modules
are deployed, which lead to high computational cost and low efficiency for the
retrieval process. To address these issues, we propose a novel method termed
Cheaper and Faster Moment Retrieval (CFMR), which well balances the retrieval
accuracy, efficiency, and annotation cost for VMR. Specifically, our proposed
CFMR method learns from point-level supervision where each annotation is a
single frame randomly located within the target moment. It is 6 times cheaper
than the conventional annotations of event boundaries. Furthermore, we also
design a concept-based multimodal alignment mechanism to bypass the usage of
cross-modal interaction modules during the inference process, remarkably
improving retrieval efficiency. The experimental results on three widely used
VMR benchmarks demonstrate that the proposed CFMR method establishes a new
state-of-the-art with point-level supervision. Moreover, it significantly
accelerates retrieval, requiring more than 100 times fewer FLOPs than
existing approaches with point-level supervision.
|
[
"cs.CV",
"cs.MM"
] | false |
2305.14038
|
2023-05-23T13:09:22Z
|
Why semantics matters: A deep study on semantic particle-filtering
localization in a LiDAR semantic pole-map
|
[
"Yuming Huang",
"Yi Gu",
"Chengzhong Xu",
"Hui Kong"
] |
In most urban and suburban areas, pole-like structures such as tree trunks or
utility poles are ubiquitous. These structural landmarks are very useful for
the localization of autonomous vehicles given their geometrical locations in
maps and measurements from sensors. In this work, we aim at creating an
accurate map for autonomous vehicles or robots with pole-like structures as the
dominant localization landmarks, hence called pole-map. In contrast to the
previous pole-based mapping or localization methods, we exploit the semantics
of pole-like structures. Specifically, semantic segmentation is achieved by a
new mask-range transformer network in a mask-classification paradigm. With the
semantics extracted for the pole-like structures in each frame, a multi-layer
semantic pole-map is created by aggregating the detected pole-like structures
from all frames. Given the semantic pole-map, we propose a semantic
particle-filtering localization scheme for vehicle localization. Theoretically,
we analyze why semantic information can benefit particle-filter
localization; empirically, we validate on the public SemanticKITTI
dataset that particle-filtering localization with semantics achieves much
better performance than its counterpart without semantics when each particle's
odometry prediction and/or the online observation is subject to significant
uncertainty.
|
[
"cs.CV",
"cs.RO"
] | false |
2305.14057
|
2023-05-23T13:36:55Z
|
Can Language Models Understand Physical Concepts?
|
[
"Lei Li",
"Jingjing Xu",
"Qingxiu Dong",
"Ce Zheng",
"Qi Liu",
"Lingpeng Kong",
"Xu Sun"
] |
Language models~(LMs) gradually become general-purpose interfaces in the
interactive and embodied world, where the understanding of physical concepts is
an essential prerequisite. However, it is not yet clear whether LMs can
understand physical concepts in the human world. To investigate this, we design
a benchmark VEC that covers the tasks of (i) Visual concepts, such as the shape
and material of objects, and (ii) Embodied Concepts, learned from the
interaction with the world, such as the temperature of objects. Our zero
(few)-shot prompting results show that the understanding of certain visual
concepts emerges as LMs scale up, but there are still basic concepts to which
the scaling law does not apply. For example, OPT-175B performs close to humans
with a zero-shot accuracy of 85% on the material concept, yet behaves like
random guessing on the mass concept. In contrast, vision-augmented LMs such as CLIP
and BLIP achieve a human-level understanding of embodied concepts. Analysis
indicates that the rich semantics in visual representations can serve as a
valuable source of embodied knowledge. Inspired by this, we propose a
distillation method to transfer embodied knowledge from VLMs to LMs, achieving
a performance gain comparable to that of scaling up LM parameters by 134x.
Our dataset is available at https://github.com/TobiasLee/VEC
|
[
"cs.CL",
"cs.CV"
] | false |
2305.14059
|
2023-05-23T13:38:01Z
|
Accelerated Coordinate Encoding: Learning to Relocalize in Minutes using
RGB and Poses
|
[
"Eric Brachmann",
"Tommaso Cavallari",
"Victor Adrian Prisacariu"
] |
Learning-based visual relocalizers exhibit leading pose accuracy, but require
hours or days of training. Since training needs to happen on each new scene
again, long training times make learning-based relocalization impractical for
most applications, despite its promise of high accuracy. In this paper we show
how such a system can actually achieve the same accuracy in less than 5
minutes. We start from the obvious: a relocalization network can be split into a
scene-agnostic feature backbone and a scene-specific prediction head. Less
obvious: using an MLP prediction head allows us to optimize across thousands of
viewpoints simultaneously in each training iteration. This leads to
stable and extremely fast convergence. Furthermore, we substitute effective but
slow end-to-end training using a robust pose solver with a curriculum over a
reprojection loss. Our approach does not require privileged knowledge, such as
depth maps or a 3D model, for speedy training. Overall, our approach is up to
300x faster in mapping than state-of-the-art scene coordinate regression, while
keeping accuracy on par.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.14142
|
2023-05-23T15:08:56Z
|
A multimodal method based on cross-attention and convolution for
postoperative infection diagnosis
|
[
"Xianjie Liu",
"Hongwei Shi"
] |
Postoperative infection diagnosis is a common and serious complication that
generally poses a high diagnostic challenge. This study focuses on PJI, a type
of postoperative infection. X-ray examination is an imaging examination for
suspected PJI patients that can evaluate joint prostheses and adjacent tissues,
and detect the cause of pain. Laboratory examination data has high sensitivity
and specificity and has significant potential in PJI diagnosis. In this study,
we propose a self-supervised masked autoencoder pre-training strategy and a
multimodal fusion diagnostic network, MED-NVC, which effectively implements the
interaction between the two modalities' features through a cross-attention
feature fusion network. We tested our proposed method on our collected PJI dataset and
evaluated its performance and feasibility through comparison and ablation
experiments. The results show that our method achieves an ACC of 94.71% and
an AUC of 98.22%, outperforming the latest methods while also reducing the
number of parameters. Our proposed method has the potential to provide
clinicians with a powerful tool for enhancing accuracy and efficiency.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.14165
|
2023-05-23T15:30:56Z
|
Impact of Light and Shadow on Robustness of Deep Neural Networks
|
[
"Chengyin Hu",
"Weiwen Shi",
"Chao Li",
"Jialiang Sun",
"Donghua Wang",
"Junqi Wu",
"Guijian Tang"
] |
Deep neural networks (DNNs) have made remarkable strides in various computer
vision tasks, including image classification, segmentation, and object
detection. However, recent research has revealed a vulnerability in advanced
DNNs when faced with deliberate manipulations of input data, known as
adversarial attacks. Moreover, the accuracy of DNNs is heavily influenced by
the distribution of the training dataset. Distortions or perturbations in the
color space of input images can introduce out-of-distribution data, resulting
in misclassification. In this work, we propose a brightness-variation dataset,
which incorporates 24 distinct brightness levels for each image within a subset
of ImageNet. This dataset enables us to simulate the effects of light and
shadow on the images, so as is to investigate the impact of light and shadow on
the performance of DNNs. In our study, we conduct experiments using several
state-of-the-art DNN architectures on the aforementioned dataset. Through our
analysis, we discover a noteworthy positive correlation between the brightness
levels and the loss of accuracy in DNNs. Furthermore, we assess the
effectiveness of recently proposed robust training techniques and strategies,
including AugMix, Revisit, and Free Normalizer, using the ResNet50 architecture
on our brightness-variation dataset. Our experimental results demonstrate that
these techniques can enhance the robustness of DNNs against brightness
variation, leading to improved performance when dealing with images exhibiting
varying brightness levels.
|
[
"cs.CV",
"cs.AI"
] | false |
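Generating the brightness variants described above is straightforward; a sketch using torchvision follows. The abstract does not give the 24 brightness factors, so the evenly spaced range here is an assumption.

```python
import torch
from torchvision.transforms import functional as TF

def brightness_variants(image, n_levels=24, low=0.2, high=2.0):
    """Return n_levels copies of `image` (a CHW float tensor in [0, 1]),
    each scaled by a different brightness factor to simulate light and
    shadow conditions."""
    factors = torch.linspace(low, high, n_levels)
    return [TF.adjust_brightness(image, float(f)) for f in factors]

# Toy usage:
# variants = brightness_variants(torch.rand(3, 224, 224))
```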
2305.14173
|
2023-05-23T15:44:56Z
|
TVTSv2: Learning Out-of-the-box Spatiotemporal Visual Representations at
Scale
|
[
"Ziyun Zeng",
"Yixiao Ge",
"Zhan Tong",
"Xihui Liu",
"Shu-Tao Xia",
"Ying Shan"
] |
The ultimate goal for foundation models is to be task-agnostic, i.e., to
support out-of-the-box usage without task-specific fine-tuning. Although
breakthroughs have been made in natural language processing and image
representation learning, it is still challenging for video models to reach this
goal due to the increased uncertainty of spatiotemporal signals. To ease training,
existing works leverage image foundation models' prior knowledge and equip them
with efficient temporal modules. Despite the satisfactory fine-tuning
performance, we empirically find they fall short of out-of-the-box usage, given
the even degraded performance in zero-shot/linear protocols compared to their
baseline counterparts. In this work, we analyze the factor that leads to
degradation from the perspective of language supervision distortion. We argue
that tuning a text encoder end-to-end, as done in previous work, is suboptimal
since it may overfit in terms of styles, thereby losing its original
generalization ability to capture the semantics of various language registers.
The overfitted text encoder, in turn, provides a harmful supervision signal,
degrading the video representation. To tackle this issue, we propose a
degradation-free pre-training strategy to retain the generalization ability of
the text encoder by freezing its shallow layers while enabling task-related
semantics to be captured in the tunable deep layers. As for the training objective, we
adopt the transcript sorting task from TVTS, combined with masking
techniques, to enable scalable training. As a result, we produce a series of
models, dubbed TVTSv2, with up to one billion parameters. We achieve new
state-of-the-art results on various video benchmarks with a frozen backbone,
surpassing the recent ImageBind, InternVideo, etc. Code is available at
https://github.com/TencentARC/TVTS.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.14229
|
2023-05-23T16:44:49Z
|
Provably Learning Object-Centric Representations
|
[
"Jack Brady",
"Roland S. Zimmermann",
"Yash Sharma",
"Bernhard Schölkopf",
"Julius von Kügelgen",
"Wieland Brendel"
] |
Learning structured representations of the visual world in terms of objects
promises to significantly improve the generalization abilities of current
machine learning models. While recent efforts to this end have shown promising
empirical progress, a theoretical account of when unsupervised object-centric
representation learning is possible is still lacking. Consequently,
understanding the reasons for the success of existing object-centric methods as
well as designing new theoretically grounded methods remains challenging. In
the present work, we analyze when object-centric representations can provably
be learned without supervision. To this end, we first introduce two assumptions
on the generative process for scenes comprised of several objects, which we
call compositionality and irreducibility. Under this generative process, we
prove that the ground-truth object representations can be identified by an
invertible and compositional inference model, even in the presence of
dependencies between objects. We empirically validate our results through
experiments on synthetic data. Finally, we provide evidence that our theory
holds predictive power for existing object-centric models by showing a close
correspondence between models' compositionality and invertibility and their
empirical identifiability.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.14268
|
2023-05-23T17:20:20Z
|
Masked Path Modeling for Vision-and-Language Navigation
|
[
"Zi-Yi Dou",
"Feng Gao",
"Nanyun Peng"
] |
Vision-and-language navigation (VLN) agents are trained to navigate in
real-world environments by following natural language instructions. A major
challenge in VLN is the limited availability of training data, which hinders
the models' ability to generalize effectively. Previous approaches have
attempted to address this issue by introducing additional supervision during
training, often requiring costly human-annotated data that restricts
scalability. In this paper, we introduce a masked path modeling (MPM)
objective, which pretrains an agent using self-collected data for downstream
navigation tasks. Our proposed method involves allowing the agent to actively
explore navigation environments without a specific goal and collect the paths
it traverses. Subsequently, we train the agent on this collected data to
reconstruct the original path given a randomly masked subpath. This way, the
agent can actively accumulate a diverse and substantial amount of data while
learning conditional action generation. To evaluate the effectiveness of our
technique, we conduct experiments on various VLN datasets and demonstrate the
versatility of MPM across different levels of instruction complexity. Our
results exhibit significant improvements in success rates, with enhancements of
1.32%, 1.05%, and 1.19% on the val-unseen split of the Room-to-Room,
Room-for-Room, and Room-across-Room datasets, respectively. Furthermore, we
conduct an analysis that highlights the potential for additional improvements
when the agent is allowed to explore unseen environments prior to testing.
|
[
"cs.CV",
"cs.CL"
] | false |
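The masked path modeling objective can be sketched at the data level: take a self-collected path, mask a random contiguous subpath, and train the agent to reconstruct the masked steps. The span length and mask token below are illustrative assumptions, not the paper's setup.

```python
import random

def mask_subpath(path, mask_token="<mask>", max_span=3):
    """Mask a random contiguous subpath; the agent is then trained to
    reconstruct the original steps at the masked positions."""
    n = len(path)
    span = random.randint(1, min(max_span, n))
    start = random.randint(0, n - span)
    masked = list(path)
    targets = masked[start:start + span]
    masked[start:start + span] = [mask_token] * span
    return masked, targets, start

# Toy usage:
# masked, targets, start = mask_subpath(["fwd", "left", "fwd", "fwd", "right"])
```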
2305.14344
|
2023-05-23T17:59:46Z
|
Siamese Masked Autoencoders
|
[
"Agrim Gupta",
"Jiajun Wu",
"Jia Deng",
"Li Fei-Fei"
] |
Establishing correspondence between images or scenes is a significant
challenge in computer vision, especially given occlusions, viewpoint changes,
and varying object appearances. In this paper, we present Siamese Masked
Autoencoders (SiamMAE), a simple extension of Masked Autoencoders (MAE) for
learning visual correspondence from videos. SiamMAE operates on pairs of
randomly sampled video frames and asymmetrically masks them. These frames are
processed independently by an encoder network, and a decoder composed of a
sequence of cross-attention layers is tasked with predicting the missing
patches in the future frame. By masking a large fraction (95%) of patches in
the future frame while leaving the past frame unchanged, SiamMAE encourages the
network to focus on object motion and learn object-centric representations.
Despite its conceptual simplicity, features learned via SiamMAE outperform
state-of-the-art self-supervised methods on video object segmentation, pose
keypoint propagation, and semantic part propagation tasks. SiamMAE achieves
competitive results without relying on data augmentation, handcrafted
tracking-based pretext tasks, or other techniques to prevent representational
collapse.
|
[
"cs.CV",
"cs.LG"
] | false |
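The asymmetric masking at the core of SiamMAE reduces to a few tensor operations once frames are patch-embedded: the past frame keeps all tokens while only about 5% of future-frame tokens remain visible. Patchification and the encoder/decoder are outside this sketch, and the shapes are assumptions.

```python
import torch

def asymmetric_mask(past_tokens, future_tokens, mask_ratio=0.95):
    """past_tokens, future_tokens: (B, N, D) patch embeddings.
    The past frame is left intact; only a small random subset of
    future-frame tokens stays visible."""
    b, n, d = future_tokens.shape
    n_keep = max(1, int(n * (1.0 - mask_ratio)))
    # Per-sample random permutation; keep the first n_keep indices.
    noise = torch.rand(b, n, device=future_tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]
    visible_future = torch.gather(
        future_tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return past_tokens, visible_future, keep_idx
```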
2305.14468
|
2023-05-23T18:52:11Z
|
Run Like a Girl! Sports-Related Gender Bias in Language and Vision
|
[
"Sophia Harrison",
"Eleonora Gualdoni",
"Gemma Boleda"
] |
Gender bias in Language and Vision datasets and models has the potential to
perpetuate harmful stereotypes and discrimination. We analyze gender bias in
two Language and Vision datasets. Consistent with prior work, we find that both
datasets underrepresent women, which promotes their invisibilization. Moreover,
we hypothesize and find that a bias affects human naming choices for people
playing sports: speakers produce names indicating the sport (e.g. 'tennis
player' or 'surfer') more often when it is a man or a boy participating in the
sport than when it is a woman or a girl, with an average of 46% vs. 35% of
sports-related names for each gender. A computational model trained on these
naming data reproduces the bias. We argue that both the data and the model
result in representational harm against women.
|
[
"cs.CV",
"cs.CL"
] | false |
2305.14470
|
2023-05-23T18:53:24Z
|
Integrated Object Deformation and Contact Patch Estimation from
Visuo-Tactile Feedback
|
[
"Mark Van der Merwe",
"Youngsun Wi",
"Dmitry Berenson",
"Nima Fazeli"
] |
Reasoning over the interplay between object deformation and force
transmission through contact is central to the manipulation of compliant
objects. In this paper, we propose Neural Deforming Contact Field (NDCF), a
representation that jointly models object deformations and contact patches from
visuo-tactile feedback using implicit representations. Representing the object
geometry and contact with the environment implicitly allows a single model to
predict contact patches of varying complexity. Additionally, learning geometry
and contact simultaneously allows us to enforce physical priors, such as
ensuring contacts lie on the surface of the object. We propose a neural network
architecture to learn an NDCF, and train it using simulated data. We then
demonstrate that the learned NDCF transfers directly to the real world without
the need for fine-tuning. We benchmark our proposed approach against a baseline
representing geometry and contact patches with point clouds. We find that NDCF
performs better on simulated data and in transfer to the real world.
|
[
"cs.RO",
"cs.CV"
] | false |
2305.14551
|
2023-05-23T22:23:37Z
|
Exploring Semantic Variations in GAN Latent Spaces via Matrix
Factorization
|
[
"Andrey Palaev",
"Rustam A. Lukmanov",
"Adil Khan"
] |
Controlled data generation with GANs is desirable but challenging due to the
nonlinearity and high dimensionality of their latent spaces. In this work, we
explore image manipulations learned by GANSpace, a state-of-the-art method
based on PCA. Through quantitative and qualitative assessments we show: (a)
GANSpace produces a wide range of high-quality image manipulations, but they
can be highly entangled, limiting potential use cases; (b) Replacing PCA with
ICA improves the quality and disentanglement of manipulations; (c) The quality
of the generated images can be sensitive to the size of GANs, but regardless of
their complexity, fundamental controlling directions can be observed in their
latent spaces.
|
[
"cs.CV",
"cs.LG"
] | false |
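The direction-discovery step compared in this abstract can be sketched with scikit-learn: fit PCA (GANSpace's choice) or FastICA (the proposed replacement) on sampled latent codes and use the components as edit directions. In practice the codes should come from an intermediate, non-Gaussian latent space (e.g., StyleGAN's W); the dimensions and sample count below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# Stand-in for codes sampled from an intermediate latent space
# (isotropic Gaussian samples are only a placeholder here).
latents = rng.standard_normal((10_000, 512))

pca_dirs = PCA(n_components=20).fit(latents).components_  # GANSpace-style
ica_dirs = FastICA(n_components=20,
                   whiten="unit-variance").fit(latents).components_

def edit(z, direction, strength=3.0):
    """Move a latent code along a discovered direction before decoding."""
    return z + strength * direction
```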
2305.14566
|
2023-05-23T23:07:53Z
|
An Accelerated Pipeline for Multi-label Renal Pathology Image
Segmentation at the Whole Slide Image Level
|
[
"Haoju Leng",
"Ruining Deng",
"Zuhayr Asad",
"R. Michael Womick",
"Haichun Yang",
"Lipeng Wan",
"Yuankai Huo"
] |
Deep-learning techniques have been used widely to alleviate the
labour-intensive and time-consuming manual annotation required for pixel-level
tissue characterization. Our previous study introduced an efficient single
dynamic network - Omni-Seg - that achieved multi-class multi-scale pathological
segmentation with less computational complexity. However, the patch-wise
segmentation paradigm still applies to Omni-Seg, and the pipeline is
time-consuming when providing segmentation for Whole Slide Images (WSIs). In
this paper, we propose an enhanced version of the Omni-Seg pipeline in order to
reduce the repetitive computing processes and utilize a GPU to accelerate the
model's prediction for both better model performance and faster speed. Our
proposed method's innovative contribution is two-fold: (1) a Docker image is released
for end-to-end slide-wise multi-tissue segmentation of WSIs; and (2) the
pipeline is deployed on a GPU to accelerate the prediction, achieving better
segmentation quality in less time. The proposed accelerated implementation
reduced the average processing time (at the testing stage) on a standard needle
biopsy WSI from 2.3 hours to 22 minutes, using 35 WSIs from the Kidney Tissue
Atlas (KPMP) datasets. The source code and the Docker image have been made publicly
available at https://github.com/ddrrnn123/Omni-Seg.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.18327
|
2023-05-23T11:16:41Z
|
A Study on Deep CNN Structures for Defect Detection From Laser
Ultrasonic Visualization Testing Images
|
[
"Miya Nakajima",
"Takahiro Saitoh",
"Tsuyoshi Kato"
] |
The importance of ultrasonic nondestructive testing has been increasing in
recent years, and there are high expectations for the potential of laser
ultrasonic visualization testing, which combines laser ultrasonic testing with
scattered wave visualization technology. Even if scattered waves are
visualized, inspectors still need to carefully inspect the images. To automate
this, this paper proposes a deep neural network for automatic defect detection
and localization in LUVT images. To explore the structure of a neural network
suited to this task, we compared the LUVT image analysis problem with the
generic object detection problem. Numerical experiments using real-world data
from a SUS304 flat plate showed that the proposed method is more effective than
the general object detection model in terms of prediction performance. We also
show that the computational time required for prediction is shorter than that of
the general object detection model.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.19275
|
2023-05-23T12:17:31Z
|
Automated spacing measurement of formwork system members with 3D point
cloud data
|
[
"Keyi Wu",
"Samuel A. Prieto",
"Eyob Mengiste",
"Borja García de Soto"
] |
The formwork system belonging to the temporary structure plays an important
role in the smooth progress and successful completion of a construction
project. Ensuring that the formwork system is installed as designed is
essential for construction safety and quality. The current way to measure the
spacing between formwork system members is mostly done using manual measuring
tools. This research proposes a framework to measure the spacing of formwork
system members using 3D point cloud data to enhance the automation of this
quality inspection. The novelty is not only in the integration of the different
techniques used but also in the detection and measurement of key members in the
formwork system without human intervention. The proposed framework was tested
on a real construction site. Five cases were investigated to compare the 3D
point cloud data approach to the manual approach with traditional measuring
tools. The results indicate that the 3D point cloud data approach is a
promising solution and can potentially be an effective alternative to the
manual approach.
|
[
"cs.HC",
"cs.CV",
"J.6"
] | false |
2306.06207
|
2023-05-23T11:23:38Z
|
Towards clinical translation of deep-learning based classification of
DSA image sequences for stroke treatment
|
[
"Timo Baumgärtner",
"Benjamin J. Mittmann",
"Till Malzacher",
"Johannes Roßkopf",
"Michael Braun",
"Bernd Schmitz",
"Alfred M. Franz"
] |
In the event of stroke, a catheter-guided procedure (thrombectomy) is used to
remove blood clots. The feasibility of machine learning based automatic
classification for thrombus detection on digital subtraction angiography
(DSA) sequences has been demonstrated. However, it has not yet been used live
in the clinic. We present an open-source tool for automatic thrombus
classification and test it on three selected clinical cases regarding
functionality and classification runtime. With our trained model all large
vessel occlusions in the M1 segment were correctly classified. One small
remaining M3 thrombus was not detected. Runtime was in the range from 1 to 10
seconds depending on the used hardware. We conclude that our open-source
software tool enables clinical staff to classify DSA sequences in (close to)
real time and can be used for further studies in clinics.
|
[
"physics.med-ph",
"cs.CV"
] | false |
2305.13651
|
2023-05-23T03:49:41Z
|
Adversarial Defenses via Vector Quantization
|
[
"Zhiyi Dong",
"Yongyi Mao"
] |
Building upon Randomized Discretization, we develop two novel adversarial
defenses against white-box PGD attacks, utilizing vector quantization in higher
dimensional spaces. These methods, termed pRD and swRD, not only offer a
theoretical guarantee in terms of certified accuracy, but are also shown, via
extensive experiments, to perform comparably to or even better than the current
state of the art in adversarial defenses. These methods can be extended to a version that allows
further training of the target classifier and demonstrates further improved
performance.
|
[
"cs.LG",
"cs.CR",
"cs.CV"
] | false |
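A sketch of vector-quantization preprocessing in the spirit of pRD/swRD: snap each pixel vector to its nearest codeword from a k-means codebook before the image reaches the classifier, discarding the fine-grained perturbations an attacker relies on. The codebook size is an assumption, and this is not the authors' implementation (their patch-wise and sliding-window variants differ).

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(images, k=32):
    """Fit a k-means codebook over pixel vectors of shape (..., C)."""
    pixels = images.reshape(-1, images.shape[-1])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

def quantize(image, codebook):
    """Replace every pixel by its nearest codeword: the preprocessing
    step applied before the image reaches the classifier."""
    h, w, c = image.shape
    labels = codebook.predict(image.reshape(-1, c))
    return codebook.cluster_centers_[labels].reshape(h, w, c)
```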
2305.13738
|
2023-05-23T06:45:55Z
|
i-Code Studio: A Configurable and Composable Framework for Integrative
AI
|
[
"Yuwei Fang",
"Mahmoud Khademi",
"Chenguang Zhu",
"Ziyi Yang",
"Reid Pryzant",
"Yichong Xu",
"Yao Qian",
"Takuya Yoshioka",
"Lu Yuan",
"Michael Zeng",
"Xuedong Huang"
] |
Artificial General Intelligence (AGI) requires comprehensive understanding
and generation capabilities for a variety of tasks spanning different
modalities and functionalities. Integrative AI is one important direction to
approach AGI, through combining multiple models to tackle complex multimodal
tasks. However, there is a lack of a flexible and composable platform to
facilitate efficient and effective model composition and coordination. In this
paper, we propose the i-Code Studio, a configurable and composable framework
for Integrative AI. The i-Code Studio orchestrates multiple pre-trained models
in a finetuning-free fashion to conduct complex multimodal tasks. Instead of
simple model composition, the i-Code Studio provides an integrative, flexible,
and composable setting for developers to quickly and easily compose
cutting-edge services and technologies tailored to their specific requirements.
The i-Code Studio achieves impressive results on a variety of zero-shot
multimodal tasks, such as video-to-text retrieval, speech-to-speech
translation, and visual question answering. We also demonstrate how to quickly
build a multimodal agent based on the i-Code Studio that can communicate and
personalize for users.
|
[
"cs.CL",
"cs.AI",
"cs.CV"
] | false |
2305.13803
|
2023-05-23T08:15:45Z
|
NORM: Knowledge Distillation via N-to-One Representation Matching
|
[
"Xiaolong Liu",
"Lujun Li",
"Chao Li",
"Anbang Yao"
] |
Existing feature distillation methods commonly adopt the One-to-one
Representation Matching between any pre-selected teacher-student layer pair. In
this paper, we present N-to-One Representation (NORM), a new two-stage
knowledge distillation method, which relies on a simple Feature Transform (FT)
module consisting of two linear layers. In view of preserving the intact
information learnt by the teacher network, during training, our FT module is
merely inserted after the last convolutional layer of the student network. The
first linear layer projects the student representation to a feature space
having N times as many feature channels as the teacher representation from the last
convolutional layer, and the second linear layer contracts the expanded output
back to the original feature space. By sequentially splitting the expanded
student representation into N non-overlapping feature segments having the same
number of feature channels as the teacher's, they can be readily forced to
approximate the intact teacher representation simultaneously, formulating a
novel many-to-one representation matching mechanism conditioned on a single
teacher-student layer pair. After training, such an FT module will be naturally
merged into the subsequent fully connected layer thanks to its linear property,
introducing no extra parameters or architectural modifications to the student
network at inference. Extensive experiments on different visual recognition
benchmarks demonstrate the leading performance of our method. For instance, the
ResNet18|MobileNet|ResNet50-1/4 model trained by NORM reaches
72.14%|74.26%|68.03% top-1 accuracy on the ImageNet dataset when using a
pre-trained ResNet34|ResNet50|ResNet50 model as the teacher, achieving an
absolute improvement of 2.01%|4.63%|3.03% against the individually trained
counterpart. Code is available at https://github.com/OSVAI/NORM
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
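The FT module described in the NORM abstract above is simple enough to sketch directly: expand the student feature to N times the teacher's channel count with one linear layer, split it into N segments that each match the teacher, and contract back with a second linear layer. Treating features as pooled vectors rather than spatial maps is a simplification for this sketch; see the linked repository for the authors' code.

```python
import torch
import torch.nn as nn

class FeatureTransform(nn.Module):
    """Sketch of NORM's FT module, inserted after the student's last
    conv stage (features assumed pooled to vectors here)."""
    def __init__(self, c_student, c_teacher, n=4):
        super().__init__()
        self.n = n
        self.expand = nn.Linear(c_student, n * c_teacher)    # 1st linear layer
        self.contract = nn.Linear(n * c_teacher, c_student)  # 2nd linear layer

    def forward(self, f_student, f_teacher):
        expanded = self.expand(f_student)          # (B, n * c_teacher)
        segments = expanded.chunk(self.n, dim=-1)  # n segments of size c_teacher
        # Every segment approximates the intact teacher representation,
        # giving the many-to-one matching described above.
        distill_loss = sum(
            (seg - f_teacher.detach()).pow(2).mean() for seg in segments)
        return self.contract(expanded), distill_loss
```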
2305.13962
|
2023-05-23T11:40:43Z
|
CPNet: Exploiting CLIP-based Attention Condenser and Probability Map
Guidance for High-fidelity Talking Face Generation
|
[
"Jingning Xu",
"Benlai Tang",
"Mingjie Wang",
"Minghao Li",
"Meirong Ma"
] |
Recently, talking face generation has drawn ever-increasing attention from
the research community in computer vision due to its arduous challenges and
widespread application scenarios, e.g. movie animation and virtual anchor.
Although persevering efforts have been undertaken to enhance the fidelity and
lip-sync quality of generated talking face videos, there is still large room
for further improvements of synthesis quality and efficiency. Actually, these
attempts somewhat ignore the exploration of fine-grained feature
extraction/integration and the consistency between probability distributions of
landmarks, leading to recurring issues of blurred local details and degraded
fidelity. To mitigate these dilemmas, in this paper, a novel CLIP-based
Attention and Probability Map Guided Network (CPNet) is delicately designed for
inferring high-fidelity talking face videos. Specifically, considering the
demands of fine-grained feature recalibration, a CLIP-based attention condenser
is exploited to transfer knowledge with rich semantic priors from the
prevailing CLIP model. Moreover, to guarantee the consistency in probability
space and suppress the landmark ambiguity, we creatively propose the density
map of facial landmark as auxiliary supervisory signal to guide the landmark
distribution learning of generated frame. Extensive experiments on the
widely-used benchmark dataset demonstrate the superiority of our CPNet against
the state of the art in terms of image and lip-sync quality. In addition, ablation
studies are also conducted to assess the impact of the individual pivotal
components.
|
[
"cs.MM",
"cs.AI",
"cs.CV"
] | false |
2305.14188
|
2023-05-23T16:07:58Z
|
The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks
|
[
"Iuri Frosio",
"Jan Kautz"
] |
Many defenses against adversarial attacks (e.g., robust classifiers,
randomization, or image purification) use countermeasures put to work only
after the attack has been crafted. We adopt a different perspective to
introduce $A^5$ (Adversarial Augmentation Against Adversarial Attacks), a novel
framework including the first certified preemptive defense against adversarial
attacks. The main idea is to craft a defensive perturbation to guarantee that
any attack (up to a given magnitude) towards the input in hand will fail. To
this aim, we leverage existing automatic perturbation analysis tools for neural
networks. We study the conditions to apply $A^5$ effectively, analyze the
importance of the robustness of the to-be-defended classifier, and inspect the
appearance of the robustified images. We show effective on-the-fly defensive
augmentation with a robustifier network that ignores the ground truth label,
and demonstrate the benefits of robustifier and classifier co-training. In our
tests, $A^5$ consistently beats state of the art certified defenses on MNIST,
CIFAR10, FashionMNIST and Tinyimagenet. We also show how to apply $A^5$ to
create certifiably robust physical objects. Our code at
https://github.com/NVlabs/A5 allows experimenting on a wide range of scenarios
beyond the man-in-the-middle attack tested here, including the case of physical
attacks.
|
[
"cs.LG",
"cs.CR",
"cs.CV"
] | false |
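The preemptive idea behind $A^5$ can be illustrated by its simplest instance: before any attack exists, optimize a bounded defensive perturbation that pushes the input deeper into the correct decision region, so a subsequent bounded attack is less likely to flip the label. The certified guarantees in the paper come from formal perturbation-analysis tools; the plain projected gradient descent below only illustrates the objective, with an assumed budget and step size.

```python
import torch
import torch.nn.functional as F

def defensive_perturbation(model, x, y, eps=8 / 255, steps=40, lr=0.01):
    """Craft a bounded perturbation that *lowers* the loss on the true
    label: the mirror image of a PGD attack, which raises it."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()          # descend, not ascend
            delta.clamp_(-eps, eps)                  # stay within the budget
            delta.copy_((x + delta).clamp(0, 1) - x) # keep the image valid
        delta.grad.zero_()
    return delta.detach()
```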
2305.14325
|
2023-05-23T17:55:11Z
|
Improving Factuality and Reasoning in Language Models through Multiagent
Debate
|
[
"Yilun Du",
"Shuang Li",
"Antonio Torralba",
"Joshua B. Tenenbaum",
"Igor Mordatch"
] |
Large language models (LLMs) have demonstrated remarkable capabilities in
language generation, understanding, and few-shot learning in recent years. An
extensive body of work has explored how their performance may be further
improved through the tools of prompting, ranging from verification and
self-consistency to intermediate scratchpads. In this paper, we present a
complementary approach to improve language responses where multiple language
model instances propose and debate their individual responses and reasoning
processes over multiple rounds to arrive at a common final answer. Our findings
indicate that this approach significantly enhances mathematical and strategic
reasoning across a number of tasks. We also demonstrate that our approach
improves the factual validity of generated content, reducing fallacious answers
and hallucinations that contemporary models are prone to. Our approach may be
directly applied to existing black-box models and uses identical procedure and
prompts for all tasks we investigate. Overall, our findings suggest that such
a "society of minds" approach has the potential to significantly advance the
capabilities of LLMs and pave the way for further breakthroughs in language
generation and understanding.
|
[
"cs.CL",
"cs.AI",
"cs.CV",
"cs.LG"
] | false |
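The debate procedure is model-agnostic and can be sketched with a generic `generate(prompt) -> str` callable standing in for any black-box LLM; the prompt wording, agent count, and round count below are illustrative assumptions, not the paper's exact prompts.

```python
def debate(question, generate, n_agents=3, n_rounds=2):
    """Each agent answers, then revises its answer after reading the
    other agents' latest responses; repeat for n_rounds."""
    answers = [generate(f"Answer concisely: {question}")
               for _ in range(n_agents)]
    for _ in range(n_rounds):
        new_answers = []
        for i in range(n_agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (f"Question: {question}\n"
                      f"Other agents answered:\n{others}\n"
                      f"Your previous answer: {answers[i]}\n"
                      "Considering the other answers, give an updated answer.")
            new_answers.append(generate(prompt))
        answers = new_answers
    return answers  # aggregate (e.g., majority vote) for the final answer
```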
2305.14395
|
2023-05-23T06:23:08Z
|
Towards credible visual model interpretation with path attribution
|
[
"Naveed Akhtar",
"Muhammad A. A. K. Jalwana"
] |
Originally inspired by game theory, the path attribution framework stands out
among the post-hoc model interpretation tools due to its axiomatic nature.
However, recent developments show that this framework can still suffer from
counter-intuitive results. Moreover, specifically for deep visual models, the
existing path-based methods also fall short on conforming to the original
intuitions that are the basis of the claimed axiomatic properties of this
framework. We address these problems with a systematic investigation, and
pinpoint the conditions in which the counter-intuitive results can be avoided
for deep visual model interpretation with the path attribution strategy. We
also devise a scheme to preclude the conditions in which visual model
interpretation can invalidate the axiomatic properties of path attribution.
These insights are combined into a method that enables reliable visual model
interpretation. Our findings are established empirically with multiple datasets,
models and evaluation metrics. Extensive experiments show a consistent
performance gain of our method over the baselines.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.14403
|
2023-05-23T11:18:37Z
|
Layer-adaptive Structured Pruning Guided by Latency
|
[
"Siyuan Pan",
"Linna Zhang",
"Jie Zhang",
"Xiaoshuang Li",
"Liang Hou",
"Xiaobing Tu"
] |
Structured pruning can simplify network architecture and improve inference
speed. Combined with the underlying hardware and inference engine in which the
final model is deployed, better results can be obtained by using latency
collaborative loss function to guide network pruning together. Existing pruning
methods that optimize latency have demonstrated leading performance; however,
they often overlook the hardware features and connections in the network. To
address this problem, we propose a global importance score, SP-LAMP (Structured
Pruning Layer-Adaptive Magnitude-based Pruning), by extending the global importance
score LAMP from unstructured pruning to structured pruning. In SP-LAMP, each
layer includes a filter with an SP-LAMP score of 1, and the remaining filters
are grouped. We utilize a group knapsack solver to maximize the SP-LAMP score
under latency constraints. In addition, we improve the latency-collection
strategy to make it more accurate. In particular, for ResNet50/ResNet18 on
ImageNet and CIFAR10, SP-LAMP is 1.28x/8.45x faster with a +1.7%/-1.57% change in top-1
accuracy, respectively. Experimental results with ResNet56 on CIFAR10
demonstrate that our algorithm achieves lower latency compared to alternative
approaches while ensuring accuracy and FLOPs.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
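The unstructured LAMP score that SP-LAMP extends has a closed form (Lee et al.): each squared weight magnitude is divided by the sum of itself and all larger squared magnitudes in the layer, so exactly one weight per layer scores 1, mirroring the one filter with score 1 mentioned above. How SP-LAMP aggregates this per filter and groups the rest is the paper's contribution and is not reproduced here; the sketch below covers only the per-weight score.

```python
import numpy as np

def lamp_scores(weights):
    """LAMP score per weight: w_i^2 divided by the sum of squared
    magnitudes of all weights at least as large as w_i. Exactly one
    weight per layer (the largest) gets score 1."""
    w2 = weights.flatten() ** 2
    order = np.argsort(w2)            # ascending magnitude
    sorted_w2 = w2[order]
    # Suffix sums: each entry plus everything larger than it.
    suffix = np.cumsum(sorted_w2[::-1])[::-1]
    scores = np.empty_like(w2)
    scores[order] = sorted_w2 / suffix
    return scores.reshape(weights.shape)
```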
2305.14409
|
2023-05-23T15:55:37Z
|
Evolution: A Unified Formula for Feature Operators from a High-level
Perspective
|
[
"Zhicheng Cai"
] |
Traditionally, different types of feature operators (e.g., convolution,
self-attention and involution) utilize different approaches to extract and
aggregate the features. Little resemblance can be discovered in their
mathematical formulas. However, these three operators all serve the same
paramount purpose and bear no difference in essence. Hence we probe into the
essence of various feature operators from a high-level perspective, transform
their components equivalently, and explore their mathematical expressions
within higher dimensions. We propose one clear and concrete unified formula for
different feature operators, termed Evolution. Evolution utilizes the
Evolution Function to generate the Evolution Kernel, which extracts and
aggregates the features at certain positions of the input feature map. We
mathematically deduce the equivalent transformation from the traditional
formulas of these feature operators to Evolution and prove the unification. In
addition, we discuss the forms of Evolution Functions and the properties of the
generated Evolution Kernels, aiming to inspire further research into and
innovation of powerful feature operators.
|
[
"cs.LG",
"cs.CV",
"cs.NA",
"math.NA"
] | false |
2305.14467
|
2023-05-23T18:47:19Z
|
FLAIR #2: textural and temporal information for semantic segmentation
from multi-source optical imagery
|
[
"Anatol Garioud",
"Apolline De Wit",
"Marc Poupée",
"Marion Valette",
"Sébastien Giordano",
"Boris Wattrelos"
] |
The FLAIR #2 dataset hereby presented includes two very distinct types of
data, which are exploited for a semantic segmentation task aimed at mapping
land cover. The data fusion workflow exploits both the fine spatial and
textural information of very high spatial resolution (VHR) mono-temporal aerial
imagery and the temporal and spectral richness of high spatial resolution (HR)
time series of Copernicus Sentinel-2 satellite images.
The French National Institute of Geographical and Forest Information (IGN), in
response to the growing availability of high-quality Earth Observation (EO)
data, is actively exploring innovative strategies to integrate these data with
heterogeneous characteristics. IGN is therefore offering this dataset to
promote innovation and improve our knowledge of our territories.
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
2305.14568
|
2023-05-23T23:11:05Z
|
GO-LDA: Generalised Optimal Linear Discriminant Analysis
|
[
"Jiahui Liu",
"Xiaohao Cai",
"Mahesan Niranjan"
] |
Linear discriminant analysis (LDA) has been a useful tool in pattern
recognition and data analysis research and practice. While linearity of class
boundaries cannot always be expected, nonlinear projections through pre-trained
deep neural networks have served to map complex data onto feature spaces in
which linear discrimination has served well. The solution to binary LDA is
obtained by eigenvalue analysis of within-class and between-class scatter
matrices. It is well known that the multiclass LDA is solved by an extension to
the binary LDA, a generalised eigenvalue problem, from which the largest
subspace that can be extracted is of dimension one lower than the number of
classes in the given problem. In this paper, we show that, apart from the first
of the discriminant directions, the generalised eigenanalysis solution to
multiclass LDA neither yields orthogonal discriminant directions nor
maximises discrimination of projected data along them. Surprisingly, to the best
of our knowledge, this has not been noted in decades of literature on LDA. To
overcome this drawback, we present a derivation, with strict theoretical
support, for sequentially obtaining discriminant directions that are orthogonal
to previously computed ones and maximise in each step the Fisher criterion. We
show distributions of projections along these axes and demonstrate that
discrimination of data projected onto these discriminant directions has optimal
separation, which is much higher than those from the generalised eigenvectors
of the multiclass LDA. Using a wide range of benchmark tasks, we present a
comprehensive empirical demonstration that on a number of pattern recognition
and classification problems, the optimal discriminant subspaces obtained by the
proposed method, referred to as GO-LDA (Generalised Optimal LDA), can offer
superior accuracy.
|
[
"cs.CV",
"cs.NA",
"math.NA"
] | false |
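A minimal numpy sketch of the sequential procedure described above, under one
natural reading: each new direction maximizes the Fisher criterion
w^T S_b w / w^T S_w w within the orthogonal complement of the directions
already found. The scatter matrices here are synthetic, and the paper's exact
derivation may differ from this illustration.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def go_lda_directions(Sb, Sw, k):
    """Sequentially find k unit directions, each maximizing the Fisher
    criterion subject to orthogonality to the earlier directions."""
    d = Sb.shape[0]
    W = np.zeros((d, 0))
    for _ in range(k):
        # Orthonormal basis of the complement of the directions found so far.
        B = null_space(W.T) if W.shape[1] else np.eye(d)
        # Generalized eigenproblem restricted to that subspace.
        vals, vecs = eigh(B.T @ Sb @ B, B.T @ Sw @ B)
        w = B @ vecs[:, -1]            # eigenvector of the largest eigenvalue
        W = np.hstack([W, (w / np.linalg.norm(w))[:, None]])
    return W

# Tiny example with random scatter matrices (assumption: Sw is SPD).
rng = np.random.default_rng(0)
A, C = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
Sb, Sw = A @ A.T, C @ C.T + 5 * np.eye(5)
W = go_lda_directions(Sb, Sw, 3)
print(np.round(W.T @ W, 6))            # ~identity: directions are orthogonal
```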
2305.14589
|
2023-05-23T23:57:44Z
|
Attentive Continuous Generative Self-training for Unsupervised Domain
Adaptive Medical Image Translation
|
[
"Xiaofeng Liu",
"Jerry L. Prince",
"Fangxu Xing",
"Jiachen Zhuo",
"Reese Timothy",
"Maureen Stone",
"Georges El Fakhri",
"Jonghye Woo"
] |
Self-training is an important class of unsupervised domain adaptation (UDA)
approaches that are used to mitigate the problem of domain shift, when applying
knowledge learned from a labeled source domain to unlabeled and heterogeneous
target domains. While self-training-based UDA has shown considerable promise on
discriminative tasks, including classification and segmentation, through
reliable pseudo-label filtering based on the maximum softmax probability, there
is a paucity of prior work on self-training-based UDA for generative tasks,
including image modality translation. To fill this gap, in this work, we seek
to develop a generative self-training (GST) framework for domain adaptive image
translation with continuous value prediction and regression objectives.
Specifically, we quantify both aleatoric and epistemic uncertainties within our
GST using variational Bayes learning to measure the reliability of synthesized
data. We also introduce a self-attention scheme that de-emphasizes the
background region to prevent it from dominating the training process. The
adaptation is then carried out by an alternating optimization scheme with
target domain supervision that focuses attention on the regions with reliable
pseudo-labels. We evaluated our framework on two cross-scanner/center,
inter-subject translation tasks, including tagged-to-cine magnetic resonance
(MR) image translation and T1-weighted MR-to-fractional anisotropy translation.
Extensive validations with unpaired target domain data showed that our GST
yielded superior synthesis performance in comparison to adversarial training
UDA methods.
|
[
"eess.IV",
"cs.CV",
"cs.LG",
"physics.med-ph"
] | false |
2305.18222
|
2023-05-23T15:20:31Z
|
survAIval: Survival Analysis with the Eyes of AI
|
[
"Kamil Kowol",
"Stefan Bracke",
"Hanno Gottschalk"
] |
In this study, we propose a novel approach to enrich the training data for
automated driving by using a self-designed driving simulator and two human
drivers to generate safety-critical corner cases in a short period of time, as
already presented in~\cite{kowol22simulator}. Our results show that
incorporating these corner cases during training improves the recognition of
corner cases during testing, even though they were recorded due to visual
impairment. Using the corner case triggering pipeline developed in the previous
work, we investigate the effectiveness of using expert models to overcome the
domain gap due to different weather conditions and times of day, compared to a
universal model from a development perspective. Our study reveals that expert
models can provide significant benefits in terms of performance and efficiency,
and can reduce the time and effort required for model training. Our results
contribute to the progress of automated driving, providing a pathway for safer
and more reliable autonomous vehicles on the road in the future.
|
[
"cs.CV",
"cs.LG",
"cs.RO"
] | false |
2305.19329
|
2023-05-23T21:31:16Z
|
Mitigating Test-Time Bias for Fair Image Retrieval
|
[
"Fanjie Kong",
"Shuai Yuan",
"Weituo Hao",
"Ricardo Henao"
] |
We address the challenge of generating fair and unbiased image retrieval
results given neutral textual queries (with no explicit gender or race
connotations), while maintaining the utility (performance) of the underlying
vision-language (VL) model. Previous methods aim to disentangle learned
representations of images and text queries from gender and racial
characteristics. However, we show these are inadequate at alleviating bias for
the desired equal representation result, as there usually exists test-time bias
in the target retrieval set. So motivated, we introduce a straightforward
technique, Post-hoc Bias Mitigation (PBM), that post-processes the outputs from
the pre-trained vision-language model. We evaluate our algorithm on real-world
image search datasets, Occupation 1 and 2, as well as two large-scale
image-text datasets, MS-COCO and Flickr30k. Our approach achieves the lowest
bias, compared with various existing bias-mitigation methods, in text-based
image retrieval results while maintaining satisfactory retrieval performance.
The source code is publicly available at
\url{https://anonymous.4open.science/r/Fair_Text_based_Image_Retrieval-D8B2}.
|
[
"cs.CV",
"cs.IR",
"cs.LG"
] | false |
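The paper's PBM post-processing is not reproduced here; as a generic sketch of
post-hoc bias mitigation toward equal representation, the snippet below
re-ranks retrieval candidates so that a hypothetical binary sensitive
attribute (e.g., a predicted gender label) is balanced in the top-k, taking
the best-scoring items within each group.

```python
from typing import List, Tuple

def balanced_topk(items: List[Tuple[float, int]], k: int) -> List[Tuple[float, int]]:
    """Post-hoc re-ranking sketch: `items` are (similarity, group) pairs from
    a pretrained VL model; return k items with (near-)equal group
    representation, highest-scoring candidates first within each group."""
    by_group = {0: [], 1: []}
    for score, g in sorted(items, key=lambda t: -t[0]):
        by_group[g].append((score, g))
    out = []
    for i in range(k):
        g = i % 2                              # alternate groups for balance
        pool = by_group[g] or by_group[1 - g]  # fall back if a group is empty
        out.append(pool.pop(0))
    return out

# Hypothetical retrieval scores for a neutral query, with group labels 0/1:
# the raw ranking is dominated by group 0 (test-time bias in the target set).
candidates = [(0.91, 0), (0.90, 0), (0.89, 0), (0.72, 1), (0.70, 1), (0.65, 1)]
print(balanced_topk(candidates, 4))            # two items from each group
```

The design trade-off this exposes is exactly the one the abstract weighs:
enforcing balance sacrifices some raw similarity in exchange for equal
representation.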
2305.13623
|
2023-05-23T02:44:15Z
|
Validating Multimedia Content Moderation Software via Semantic Fusion
|
[
"Wenxuan Wang",
"Jingyuan Huang",
"Chang Chen",
"Jiazhen Gu",
"Jianping Zhang",
"Weibin Wu",
"Pinjia He",
"Michael Lyu"
] |
The exponential growth of social media platforms, such as Facebook and
TikTok, has revolutionized communication and content publication in human
society. Users on these platforms can publish multimedia content that delivers
information via the combination of text, audio, images, and video. Meanwhile,
the multimedia content release facility has been increasingly exploited to
propagate toxic content, such as hate speech, malicious advertisements, and
pornography. To this end, content moderation software has been widely deployed
on these platforms to detect and block toxic content. However, due to the
complexity of content moderation models and the difficulty of understanding
information across multiple modalities, existing content moderation software
can fail to detect toxic content, which often leads to extremely negative
impacts.
We introduce Semantic Fusion, a general, effective methodology for validating
multimedia content moderation software. Our key idea is to fuse two or more
existing single-modal inputs (e.g., a textual sentence and an image) into a new
input that combines the semantics of its ancestors in a novel manner and has
toxic nature by construction. This fused input is then used for validating
multimedia content moderation software. We realized Semantic Fusion as DUO, a
practical content moderation software testing tool. In our evaluation, we
employ DUO to test five commercial content moderation software and two
state-of-the-art models against three kinds of toxic content. The results show
that DUO achieves up to 100% error finding rate (EFR) when testing moderation
software. In addition, we leverage the test cases generated by DUO to retrain
the two models we explored, which largely improves model robustness while
maintaining the accuracy on the original test set.
|
[
"cs.SE",
"cs.AI",
"cs.CL",
"cs.CV",
"cs.MM"
] | false |
2305.13571
|
2023-05-23T01:03:40Z
|
Latent Positional Information is in the Self-Attention Variance of
Transformer Language Models Without Positional Embeddings
|
[
"Ta-Chung Chi",
"Ting-Han Fan",
"Li-Wei Chen",
"Alexander I. Rudnicky",
"Peter J. Ramadge"
] |
The use of positional embeddings in transformer language models is widely
accepted. However, recent research has called into question the necessity of
such embeddings. We further extend this inquiry by demonstrating that a
randomly initialized and frozen transformer language model, devoid of
positional embeddings, inherently encodes strong positional information through
the shrinkage of self-attention variance. To quantify this variance, we derive
the underlying distribution of each step within a transformer layer. Through
empirical validation using a fully pretrained model, we show that the variance
shrinkage effect still persists after extensive gradient updates. Our findings
serve to justify the decision to discard positional embeddings and thus
facilitate more efficient pretraining of transformer language models.
|
[
"cs.CL"
] | false |
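The variance-shrinkage effect can be illustrated numerically. The following toy
simulation is an assumption-laden stand-in, not the paper's derivation: causal
uniform attention, roughly what a randomly initialized softmax head computes
in expectation, averages position t over t+1 i.i.d. inputs, so the output
variance at position t decays like 1/(t+1) and thereby encodes position.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, trials = 32, 64, 2000

# i.i.d. token embeddings: no explicit positional information in the input.
x = rng.normal(size=(trials, T, D))

# Causal uniform attention: position t averages tokens 0..t. This mimics the
# expected behaviour of a randomly initialized softmax attention head.
mask = np.tril(np.ones((T, T)))
attn = mask / mask.sum(axis=1, keepdims=True)
y = attn @ x                                 # (trials, T, D)

var_per_pos = y.var(axis=(0, 2))             # empirical variance at each t
print(np.round(var_per_pos[:6], 3))          # ~1, 1/2, 1/3, ...: it shrinks
```

Because the per-position variance is monotone in position, later layers can in
principle read the position back out of it, which is the latent positional
signal the abstract describes.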
2305.13585
|
2023-05-23T01:25:29Z
|
Query Structure Modeling for Inductive Logical Reasoning Over Knowledge
Graphs
|
[
"Siyuan Wang",
"Zhongyu Wei",
"Meng Han",
"Zhihao Fan",
"Haijun Shan",
"Qi Zhang",
"Xuanjing Huang"
] |
Logical reasoning over incomplete knowledge graphs (KGs) to answer complex
logical queries is a challenging task. With the emergence of new entities and
relations in constantly evolving KGs, inductive logical reasoning over KGs has
become a crucial problem. However, previous PLM-based methods struggle to model
the
logical structures of complex queries, which limits their ability to generalize
within the same structure. In this paper, we propose a structure-modeled
textual encoding framework for inductive logical reasoning over KGs. It encodes
linearized query structures and entities using pre-trained language models to
find answers. For structure modeling of complex queries, we design stepwise
instructions that implicitly prompt PLMs on the execution order of geometric
operations in each query. We further separately model different geometric
operations (i.e., projection, intersection, and union) on the representation
space using a pre-trained encoder with additional attention and maxout layers
to enhance structured modeling. We conduct experiments on two inductive logical
reasoning datasets and three transductive datasets. The results demonstrate the
effectiveness of our method on logical reasoning over KGs in both inductive and
transductive settings.
|
[
"cs.CL"
] | false |
2305.13589
|
2023-05-23T01:45:18Z
|
BiasX: "Thinking Slow" in Toxic Content Moderation with Explanations of
Implied Social Biases
|
[
"Yiming Zhang",
"Sravani Nanduri",
"Liwei Jiang",
"Tongshuang Wu",
"Maarten Sap"
] |
Toxicity annotators and content moderators often default to mental shortcuts
when making decisions. This can lead to subtle toxicity being missed, and
seemingly toxic but harmless content being over-detected. We introduce BiasX, a
framework that enhances content moderation setups with free-text explanations
of statements' implied social biases, and explore its effectiveness through a
large-scale crowdsourced user study. We show that indeed, participants
substantially benefit from explanations for correctly identifying subtly
(non-)toxic content. The quality of explanations is critical: imperfect
machine-generated explanations (+2.4% on hard toxic examples) help less
compared to expert-written human explanations (+7.2%). Our results showcase the
promise of using free-text explanations to encourage more thoughtful toxicity
moderation.
|
[
"cs.CL"
] | false |
2305.13614
|
2023-05-23T02:25:01Z
|
LLM-empowered Chatbots for Psychiatrist and Patient Simulation:
Application and Evaluation
|
[
"Siyuan Chen",
"Mengyue Wu",
"Kenny Q. Zhu",
"Kunyao Lan",
"Zhiling Zhang",
"Lyuchun Cui"
] |
Empowering chatbots in the field of mental health is receiving an increasing
amount of attention, yet the development and evaluation of chatbots in
psychiatric outpatient scenarios remain underexplored. In this work, we focus
on exploring the potential of ChatGPT in powering chatbots for psychiatrist and
patient simulation. We collaborate with psychiatrists to identify objectives
and iteratively develop the dialogue system to closely align with real-world
scenarios. In the evaluation experiments, we recruit real psychiatrists and
patients to engage in diagnostic conversations with the chatbots, collecting
their ratings for assessment. Our findings demonstrate the feasibility of using
ChatGPT-powered chatbots in psychiatric scenarios and explore the impact of
prompt designs on chatbot behavior and user experience.
|
[
"cs.CL"
] | false |
2305.13641
|
2023-05-23T03:19:21Z
|
AxomiyaBERTa: A Phonologically-aware Transformer Model for Assamese
|
[
"Abhijnan Nath",
"Sheikh Mannan",
"Nikhil Krishnaswamy"
] |
Despite their successes in NLP, Transformer-based language models still
require extensive computing resources and suffer in low-resource or low-compute
settings. In this paper, we present AxomiyaBERTa, a novel BERT model for
Assamese, a morphologically-rich low-resource language (LRL) of Eastern India.
AxomiyaBERTa is trained only on the masked language modeling (MLM) task,
without the typical additional next sentence prediction (NSP) objective, and
our results show that in resource-scarce settings for very low-resource
languages like Assamese, MLM alone can be successfully leveraged for a range of
tasks. AxomiyaBERTa achieves SOTA on token-level tasks like Named Entity
Recognition and also performs well on "longer-context" tasks like Cloze-style
QA and Wiki Title Prediction, with the assistance of a novel embedding
disperser and phonological signals respectively. Moreover, we show that
AxomiyaBERTa can leverage phonological signals for even more challenging tasks,
such as a novel cross-document coreference task on a translated version of the
ECB+ corpus, where we present a new SOTA result for an LRL. Our source code and
evaluation scripts may be found at https://github.com/csu-signal/axomiyaberta.
|
[
"cs.CL"
] | false |
2305.13645
|
2023-05-23T03:40:36Z
|
mPMR: A Multilingual Pre-trained Machine Reader at Scale
|
[
"Weiwen Xu",
"Xin Li",
"Wai Lam",
"Lidong Bing"
] |
We present multilingual Pre-trained Machine Reader (mPMR), a novel method for
multilingual machine reading comprehension (MRC)-style pre-training. mPMR aims
to guide multilingual pre-trained language models (mPLMs) to perform natural
language understanding (NLU) including both sequence classification and span
extraction in multiple languages. To achieve cross-lingual generalization when
only source-language fine-tuning data is available, existing mPLMs solely
transfer NLU capability from a source language to target languages. In
contrast, mPMR allows the direct inheritance of multilingual NLU capability
from the MRC-style pre-training to downstream tasks. Therefore, mPMR acquires
better NLU capability for target languages. mPMR also provides a unified solver
for tackling cross-lingual span extraction and sequence classification, thereby
enabling the extraction of rationales to explain the sentence-pair
classification process.
|
[
"cs.CL"
] | false |
2305.13657
|
2023-05-23T04:00:16Z
|
ChatGPT as your Personal Data Scientist
|
[
"Md Mahadi Hassan",
"Alex Knipper",
"Shubhra Kanti Karmaker Santu"
] |
The rise of big data has amplified the need for efficient, user-friendly
automated machine learning (AutoML) tools. However, the intricacy of
understanding domain-specific data and defining prediction tasks necessitates
human intervention, making the process time-consuming and preventing full
automation. Instead, envision an intelligent agent capable of assisting users
in conducting AutoML tasks through intuitive, natural conversations without
requiring in-depth knowledge of the underlying machine learning (ML) processes.
This agent's key challenge is to accurately comprehend the user's prediction
goals and, consequently, formulate precise ML tasks, adjust data sets and model
parameters accordingly, and articulate results effectively. In this paper, we
take a pioneering step towards this ambitious goal by introducing a
ChatGPT-based conversational data-science framework to act as a "personal data
scientist". Precisely, we utilize Large Language Models (ChatGPT) to build a
natural interface between the users and the ML models (Scikit-Learn), which in
turn, allows us to approach this ambitious problem with a realistic solution.
Our model pivots around four dialogue states: Data Visualization, Task
Formulation, Prediction Engineering, and Result Summary and Recommendation.
Each state marks a unique conversation phase, impacting the overall user-system
interaction. Multiple LLM instances, serving as "micro-agents", ensure a
cohesive conversation flow, granting us granular control over the
conversation's progression. In summary, we developed an end-to-end system that
not only proves the viability of the novel concept of conversational data
science but also underscores the potency of LLMs in solving complex tasks.
Interestingly, its development spotlighted several critical weaknesses in the
current LLMs (ChatGPT) and highlighted substantial opportunities for
improvement.
|
[
"cs.CL"
] | false |
2305.13685
|
2023-05-23T04:48:30Z
|
Causal Intervention for Abstractive Related Work Generation
|
[
"Jiachang Liu",
"Qi Zhang",
"Chongyang Shi",
"Usman Naseem",
"Shoujin Wang",
"Ivor Tsang"
] |
Abstractive related work generation has attracted increasing attention in
generating coherent related work that better helps readers grasp the background
in the current research. However, most existing abstractive models ignore the
inherent causality of related work generation, leading to low quality of
generated related work and spurious correlations that affect the models'
generalizability. In this study, we argue that causal intervention can address
these limitations and improve the quality and coherence of the generated
related works. To this end, we propose a novel Causal Intervention Module for
Related Work Generation (CaM) to effectively capture causalities in the
generation process. Specifically, we first model the relations among sentence
order,
document relation, and transitional content in related work generation using a
causal graph. Then, to implement the causal intervention and mitigate the
negative impact of spurious correlations, we use do-calculus to derive ordinary
conditional probabilities and identify causal effects through CaM. Finally, we
subtly fuse CaM with Transformer to obtain an end-to-end generation model.
Extensive experiments on two real-world datasets show that causal interventions
in CaM can effectively promote the model to learn causal relations and produce
related work of higher quality and coherence.
|
[
"cs.CL"
] | false |
2305.13693
|
2023-05-23T05:00:59Z
|
Automated Metrics for Medical Multi-Document Summarization Disagree with
Human Evaluations
|
[
"Lucy Lu Wang",
"Yulia Otmakhova",
"Jay DeYoung",
"Thinh Hung Truong",
"Bailey E. Kuehl",
"Erin Bransom",
"Byron C. Wallace"
] |
Evaluating multi-document summarization (MDS) quality is difficult. This is
especially true in the case of MDS for biomedical literature reviews, where
models must synthesize contradicting evidence reported across different
documents. Prior work has shown that rather than performing the task, models
may exploit shortcuts that are difficult to detect using standard n-gram
similarity metrics such as ROUGE. Better automated evaluation metrics are
needed, but few resources exist to assess metrics when they are proposed.
Therefore, we introduce a dataset of human-assessed summary quality facets and
pairwise preferences to encourage and support the development of better
automated evaluation methods for literature review MDS. We take advantage of
community submissions to the Multi-document Summarization for Literature Review
(MSLR) shared task to compile a diverse and representative sample of generated
summaries. We analyze how automated summarization evaluation metrics correlate
with lexical features of generated summaries, with other automated metrics
including several we propose in this work, and with aspects of human-assessed
summary quality. We find that not only do automated metrics fail to capture
aspects of quality as assessed by humans, in many cases the system rankings
produced by these metrics are anti-correlated with rankings according to human
annotators.
|
[
"cs.CL"
] | false |
2305.13696
|
2023-05-23T05:09:53Z
|
Abstractive Text Summarization Using the BRIO Training Paradigm
|
[
"Khang Nhut Lam",
"Thieu Gia Doan",
"Khang Thua Pham",
"Jugal Kalita"
] |
Summary sentences produced by abstractive summarization models may be
coherent and comprehensive, but they lack control and rely heavily on reference
summaries. The BRIO training paradigm assumes a non-deterministic distribution
to reduce the model's dependence on reference summaries, and improve model
performance during inference. This paper presents a straightforward but
effective technique to improve abstractive summaries by fine-tuning pre-trained
language models, and training them with the BRIO paradigm. We build a text
summarization dataset for Vietnamese, called VieSum. We perform experiments
with abstractive summarization models trained with the BRIO paradigm on the
CNNDM and the VieSum datasets. The results show that the models, trained on
basic hardware, outperform all existing abstractive summarization models,
especially for Vietnamese.
|
[
"cs.CL"
] | false |
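The core ingredient of the BRIO paradigm is a candidate-ranking loss used
alongside the usual MLE loss: candidate summaries are ordered by a quality
metric (e.g., ROUGE against the reference), and the model's length-normalized
log-probabilities are pushed to respect that order. A minimal sketch of that
ranking loss follows; it is an illustrative reading of BRIO, and the margin
value is a hypothetical hyperparameter.

```python
import numpy as np

def brio_ranking_loss(logprobs, margin=0.001):
    """BRIO-style pairwise ranking loss sketch. `logprobs` holds the model's
    length-normalized log-probabilities of candidate summaries, already
    sorted best-to-worst by a quality metric such as ROUGE."""
    loss, n = 0.0, len(logprobs)
    for i in range(n):
        for j in range(i + 1, n):
            # The better candidate i should outscore the worse candidate j by
            # a margin that grows with their rank gap (j - i).
            loss += max(0.0, logprobs[j] - logprobs[i] + (j - i) * margin)
    return loss

# Hypothetical length-normalized scores for 4 candidates, best-to-worst.
print(brio_ranking_loss(np.array([-0.20, -0.25, -0.19, -0.40])))
# The full BRIO objective combines this ranking term with the standard MLE
# loss on the reference summary, weighted by a coefficient.
```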
2305.13697
|
2023-05-23T05:11:34Z
|
UNIMO-3: Multi-granularity Interaction for Vision-Language
Representation Learning
|
[
"Hao Yang",
"Can Gao",
"Hao Líu",
"Xinyan Xiao",
"Yanyan Zhao",
"Bing Qin"
] |
Vision-and-language (VL) pre-training aims to learn a general representation
of image-text pairs that can be transferred to various vision-and-language
tasks. Compared with modeling uni-modal data, the main challenge of a VL model
is how to learn the cross-modal interaction from multimodal data, especially
the fine-grained interaction. Existing works have shown that fully
transformer-based models that adopt attention mechanisms to learn in-layer
cross-modal interaction can demonstrate impressive performance on various
cross-modal downstream tasks. However, they ignore that the semantic
information of the different modalities at the same layer is not uniform, which
leads to the cross-modal interaction collapsing into a limited multi-modal
semantic information interaction. In this work, we propose the UNIMO-3 model,
which has the capacity to simultaneously learn multimodal in-layer interaction
and cross-layer interaction. UNIMO-3 can establish effective connections
between different layers in a cross-modal encoder and adaptively capture the
interaction between two modalities at different levels. The experimental
results show that our model achieves state-of-the-art performance on various
downstream tasks, and an ablation study proves that effective cross-layer
learning improves the model's ability of multimodal representation.
|
[
"cs.CL"
] | false |
2305.13698
|
2023-05-23T05:21:02Z
|
Exploring Large Language Models for Classical Philology
|
[
"Frederick Riemenschneider",
"Anette Frank"
] |
Recent advances in NLP have led to the creation of powerful language models
for many languages including Ancient Greek and Latin. While prior work on
Classical languages unanimously uses BERT, in this work we create four language
models for Ancient Greek that vary along two dimensions to study their
versatility for tasks of interest for Classical languages: we explore (i)
encoder-only and encoder-decoder architectures using RoBERTa and T5 as strong
model types, and create for each of them (ii) a monolingual Ancient Greek and a
multilingual instance that includes Latin and English. We evaluate all models
on morphological and syntactic tasks, including lemmatization, which
demonstrates the added value of T5's decoding abilities. We further define two
probing tasks to investigate the knowledge acquired by models pre-trained on
Classical texts. Our experiments provide the first benchmarking analysis of
existing models of Ancient Greek. Results show that our models provide
significant improvements over the SoTA. The systematic analysis of model types
can inform future research in designing language models for Classical
languages, including the development of novel generative tasks. We make all our
models available as community resources, along with a large curated
pre-training corpus for Ancient Greek, to support the creation of a larger,
comparable model zoo for Classical Philology. Our models and resources are
available at https://github.com/Heidelberg-NLP/ancient-language-models.
|
[
"cs.CL",
"I.2.7"
] | false |
2305.13703
|
2023-05-23T05:41:18Z
|
MemeCap: A Dataset for Captioning and Interpreting Memes
|
[
"EunJeong Hwang",
"Vered Shwartz"
] |
Memes are a widely popular tool for web users to express their thoughts using
visual metaphors. Understanding memes requires recognizing and interpreting
visual metaphors with respect to the text inside or around the meme, often
while employing background knowledge and reasoning abilities. We present the
task of meme captioning and release a new dataset, MemeCap. Our dataset
contains 6.3K memes along with the title of the post containing the meme, the
meme captions, the literal image caption, and the visual metaphors. Despite the
recent success of vision and language (VL) models on tasks such as image
captioning and visual question answering, our extensive experiments using
state-of-the-art VL models show that they still struggle with visual metaphors,
and perform substantially worse than humans.
|
[
"cs.CL"
] | false |
2305.13707
|
2023-05-23T05:46:45Z
|
Do All Languages Cost the Same? Tokenization in the Era of Commercial
Language Models
|
[
"Orevaoghene Ahia",
"Sachin Kumar",
"Hila Gonen",
"Jungo Kasai",
"David R. Mortensen",
"Noah A. Smith",
"Yulia Tsvetkov"
] |
Language models have graduated from being research prototypes to
commercialized products offered as web APIs, and recent works have highlighted
the multilingual capabilities of these products. The API vendors charge their
users based on usage, more specifically on the number of ``tokens'' processed
or generated by the underlying language models. What constitutes a token,
however, is training data and model dependent with a large variance in the
number of tokens required to convey the same information in different
languages. In this work, we analyze the effect of this non-uniformity on the
fairness of an API's pricing policy across languages. We conduct a systematic
analysis of the cost and utility of OpenAI's language model API on multilingual
benchmarks in 22 typologically diverse languages. We show evidence that
speakers of a large number of the supported languages are overcharged while
obtaining poorer results. These speakers tend to also come from regions where
the APIs are less affordable to begin with. Through these analyses, we aim to
increase transparency around language model APIs' pricing policies and
encourage the vendors to make them more equitable.
|
[
"cs.CL"
] | false |
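To make the "same information, different token counts" point concrete, one can
tokenize parallel sentences and compare lengths, since API cost scales with
token count. A small sketch using the open `tiktoken` tokenizer follows; the
sentences and the per-token price are illustrative assumptions, not figures
from the paper.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Parallel sentences conveying the same information (illustrative examples).
parallel = {
    "English": "The weather is nice today.",
    "German": "Das Wetter ist heute schön.",
    "Japanese": "今日は天気がいいです。",
}

price_per_token = 0.002 / 1000  # assumed per-token price, for illustration
for lang, sent in parallel.items():
    n = len(enc.encode(sent))
    print(f"{lang:8s} {n:3d} tokens  ~${n * price_per_token:.6f}")

# Languages whose scripts are poorly covered by the tokenizer's training data
# need more tokens, and therefore cost more, for the same content.
```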
2305.13710
|
2023-05-23T05:48:21Z
|
Using Textual Interface to Align External Knowledge for End-to-End
Task-Oriented Dialogue Systems
|
[
"Qingyang Wu",
"Deema Alnuhait",
"Derek Chen",
"Zhou Yu"
] |
Traditional end-to-end task-oriented dialogue systems have been built with a
modularized design. However, such design often causes misalignment between the
agent response and external knowledge, due to inadequate representation of
information. Furthermore, its evaluation metrics emphasize assessing the
agent's pre-lexicalization response, neglecting the quality of the completed
response. In this work, we propose a novel paradigm that uses a textual
interface to align external knowledge and eliminate redundant processes. We
demonstrate our paradigm in practice through MultiWOZ-Remake, including an
interactive textual interface built for the MultiWOZ database and a
correspondingly re-processed dataset. We train an end-to-end dialogue system to
evaluate this new dataset. The experimental results show that our approach
generates more natural final responses and achieves a greater task success rate
compared to the previous models.
|
[
"cs.CL"
] | false |
2305.13740
|
2023-05-23T06:51:48Z
|
TeCS: A Dataset and Benchmark for Tense Consistency of Machine
Translation
|
[
"Yiming Ai",
"Zhiwei He",
"Kai Yu",
"Rui Wang"
] |
Tense inconsistency frequently occurs in machine translation. However, there
are few criteria to assess the model's mastery of tense prediction from a
linguistic perspective. In this paper, we present a parallel tense test set,
containing 552 French-English utterances. We also introduce a corresponding
benchmark, tense prediction accuracy. With the tense test set and the
benchmark, researchers are able to measure the tense consistency performance of
machine translation systems for the first time.
|
[
"cs.CL"
] | false |
2305.13782
|
2023-05-23T07:50:36Z
|
Images in Language Space: Exploring the Suitability of Large Language
Models for Vision & Language Tasks
|
[
"Sherzod Hakimov",
"David Schlangen"
] |
Large language models have demonstrated robust performance on various
language tasks using zero-shot or few-shot learning paradigms. While being
actively researched, multimodal models that can additionally handle images as
input have yet to catch up in size and generality with language-only models. In
this work, we ask whether language-only models can be utilised for tasks that
require visual input -- but also, as we argue, often require a strong reasoning
component. Similar to some recent related work, we make visual information
accessible to the language model using separate verbalisation models.
Specifically, we investigate the performance of open-source, open-access
language models against GPT-3 on five vision-language tasks when given
textually-encoded visual information. Our results suggest that language models
are effective for solving vision-language tasks even with limited samples. This
approach also enhances the interpretability of a model's output by providing a
means of tracing the output back through the verbalised image content.
|
[
"cs.CL"
] | false |
2305.13805
|
2023-05-23T08:16:52Z
|
Towards Zero-shot Relation Extraction in Web Mining: A Multimodal
Approach with Relative XML Path
|
[
"Zilong Wang",
"Jingbo Shang"
] |
The rapid growth of web pages and the increasing complexity of their
structure pose a challenge for web mining models. Web mining models are
required to understand the semi-structured web pages, particularly when little
is known about the subject or template of a new page. Current methods migrate
language models to web mining by embedding the XML source code into the
transformer or encoding the rendered layout with graph neural networks.
However, these approaches do not take into account the relationships between
text nodes within and across pages. In this paper, we propose a new approach,
ReXMiner, for zero-shot relation extraction in web mining. ReXMiner encodes the
shortest relative paths in the Document Object Model (DOM) tree which is a more
accurate and efficient signal for key-value pair extraction within a web page.
It also incorporates the popularity of each text node by counting the
occurrence of the same text node across different web pages. We use
contrastive learning to address the issue of sparsity in relation extraction.
Extensive experiments on public benchmarks show that our method, ReXMiner,
outperforms the state-of-the-art baselines in the task of zero-shot relation
extraction in web mining.
|
[
"cs.CL"
] | false |
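The "shortest relative path" between two text nodes in a DOM tree can be read
as the path up from one node to their lowest common ancestor and down to the
other. A small self-contained sketch of that idea follows; the `Node`
structure and tag naming are illustrative, and ReXMiner's exact encoding may
differ.

```python
class Node:
    def __init__(self, tag, parent=None):
        self.tag, self.parent = tag, parent

def path_to_root(node):
    path = []
    while node:
        path.append(node)
        node = node.parent
    return path

def relative_xpath(a, b):
    """Shortest relative path between two DOM nodes: climb from `a` to the
    lowest common ancestor, then descend to `b`."""
    up, down = path_to_root(a), path_to_root(b)
    ancestors = set(id(n) for n in down)
    climb = []
    for n in up:
        if id(n) in ancestors:
            lca = n
            break
        climb.append(n.tag)
    descend = [m.tag for m in down[:down.index(lca)]][::-1]
    return "/".join([".."] * len(climb) + descend) or "."

# Tiny DOM: html > body > (div > span, table > td)
html = Node("html"); body = Node("body", html)
div = Node("div", body); span = Node("span", div)
table = Node("table", body); td = Node("td", table)
print(relative_xpath(span, td))   # ../../table/td
```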
2305.13817
|
2023-05-23T08:38:33Z
|
Detecting automatically the layout of clinical documents to enhance the
performances of downstream natural language processing
|
[
"Christel Gérardin",
"Perceval Wajsbürt",
"Basile Dura",
"Alice Calliger",
"Alexandre Moucher",
"Xavier Tannier",
"Romain Bey"
] |
Objective: Develop and validate an algorithm for analyzing the layout of PDF
clinical documents to improve the performance of downstream natural language
processing tasks. Materials and Methods: We designed an algorithm to process
clinical PDF documents and extract only clinically relevant text. The algorithm
consists of several steps: initial text extraction using a PDF parser, followed
by classification into categories such as body text, left notes, and footers
using a Transformer deep neural network architecture, and finally an
aggregation step to compile the lines of a given label in the text. We
evaluated the technical performance of the body text extraction algorithm by
applying it to a random sample of documents that were annotated. Medical
performance was evaluated by examining the extraction of medical concepts of
interest from the text in their respective sections. Finally, we tested an
end-to-end system on a medical use case of automatic detection of acute
infection described in the hospital report. Results: Our algorithm achieved
per-line precision, recall, and F1 score of 98.4, 97.0, and 97.7, respectively,
for body line extraction. The precision, recall, and F1 score per document for
the acute infection detection algorithm were 82.54 (95CI 72.86-91.60), 85.24
(95CI 76.61-93.70), and 83.87 (95CI 76.92-90.08), respectively, with
exploitation of the results of the advanced body extraction algorithm.
Conclusion: We have
developed and validated a system for extracting body text from clinical
documents in PDF format by identifying their layout. We were able to
demonstrate that this preprocessing allowed us to obtain better performances
for a common downstream task, i.e., the extraction of medical concepts in their
respective sections, thus proving the interest of this method on a clinical use
case.
|
[
"cs.CL"
] | false |
2305.13820
|
2023-05-23T08:43:42Z
|
An Open Dataset and Model for Language Identification
|
[
"Laurie Burchell",
"Alexandra Birch",
"Nikolay Bogoychev",
"Kenneth Heafield"
] |
Language identification (LID) is a fundamental step in many natural language
processing pipelines. However, current LID systems are far from perfect,
particularly on lower-resource languages. We present a LID model which achieves
a macro-average F1 score of 0.93 and a false positive rate of 0.033 across 201
languages, outperforming previous work. We achieve this by training on a
curated dataset of monolingual data, the reliability of which we ensure by
auditing a sample from each source and each language manually. We make both the
model and the dataset available to the research community. Finally, we carry
out detailed analysis into our model's performance, both in comparison to
existing open models and by language class.
|
[
"cs.CL"
] | false |
2305.13826
|
2023-05-23T08:49:50Z
|
"Is the Pope Catholic?" Applying Chain-of-Thought Reasoning to
Understanding Conversational Implicatures
|
[
"Zae Myung Kim",
"David E. Taylor",
"Dongyeop Kang"
] |
Conversational implicatures are pragmatic inferences that require listeners
to deduce the intended meaning conveyed by a speaker from their explicit
utterances. Although such inferential reasoning is fundamental to human
communication, recent research indicates that large language models struggle to
comprehend these implicatures as effectively as the average human. This paper
demonstrates that by incorporating Grice's Four Maxims into the model through
chain-of-thought prompting, we can significantly enhance its performance,
surpassing even the average human performance on this task.
|
[
"cs.CL"
] | false |
2305.13844
|
2023-05-23T09:07:42Z
|
Arukikata Travelogue Dataset with Geographic Entity Mention,
Coreference, and Link Annotation
|
[
"Shohei Higashiyama",
"Hiroki Ouchi",
"Hiroki Teranishi",
"Hiroyuki Otomo",
"Yusuke Ide",
"Aitaro Yamamoto",
"Hiroyuki Shindo",
"Yuki Matsuda",
"Shoko Wakamiya",
"Naoya Inoue",
"Ikuya Yamada",
"Taro Watanabe"
] |
Geoparsing is a fundamental technique for analyzing geo-entity information in
text. We focus on document-level geoparsing, which considers geographic
relatedness among geo-entity mentions, and present a Japanese travelogue
dataset designed for evaluating document-level geoparsing systems. Our dataset
comprises 200 travelogue documents with rich geo-entity information: 12,171
mentions, 6,339 coreference clusters, and 2,551 geo-entities linked to
geo-database entries.
|
[
"cs.CL"
] | false |
2305.13857
|
2023-05-23T09:24:53Z
|
Revealing User Familiarity Bias in Task-Oriented Dialogue via
Interactive Evaluation
|
[
"Takyoung Kim",
"Jamin Shin",
"Young-Ho Kim",
"Sanghwan Bae",
"Sungdong Kim"
] |
Most task-oriented dialogue (TOD) benchmarks assume users that know exactly
how to use the system by constraining the user behaviors within the system's
capabilities via strict user goals, namely "user familiarity" bias. This data
bias deepens when combined with data-driven TOD systems, as its effect cannot
be measured with existing static evaluations. Hence, we conduct
an interactive user study to unveil how vulnerable TOD systems are against
realistic scenarios. In particular, we compare users with 1) detailed goal
instructions that conform to the system boundaries (closed-goal) and 2) vague
goal instructions that are often unsupported but realistic (open-goal). Our
study reveals that conversations in open-goal settings lead to catastrophic
failures of the system, in which 92% of the dialogues had significant issues.
Moreover, we conduct a thorough analysis to identify distinctive features
between the two settings through error annotation. From this, we discover a
novel "pretending" behavior, in which the system pretends to handle the user
requests even though they are beyond the system's capabilities. We discuss its
characteristics and toxicity while emphasizing transparency and a fallback
strategy for robust TOD systems.
|
[
"cs.CL"
] | false |
2305.13863
|
2023-05-23T09:36:21Z
|
Probing Brain Context-Sensitivity with Masked-Attention Generation
|
[
"Alexandre Pasquiou",
"Yair Lakretz",
"Bertrand Thirion",
"Christophe Pallier"
] |
Two fundamental questions in neurolinguistics concern the brain regions that
integrate information beyond the lexical level, and the size of their window of
integration. To address these questions, we introduce a new approach named
masked-attention generation. It uses GPT-2 transformers to generate word
embeddings that capture a fixed amount of contextual information. We then
tested whether these embeddings could predict fMRI brain activity in humans
listening to naturalistic text. The results showed that most of the cortex
within the language network is sensitive to contextual information, and that
the right hemisphere is more sensitive to longer contexts than the left.
Masked-attention generation supports previous analyses of context-sensitivity
in the brain, and complements them by quantifying the window size of context
integration per voxel.
|
[
"cs.CL"
] | false |
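The masked-attention idea, restricting each token's attention to a fixed-size
window of preceding tokens so its embedding carries a controlled amount of
context, can be sketched as an attention mask. A minimal numpy illustration
follows; the window size and shapes are assumptions, and the study applies
this inside GPT-2 rather than on raw scores.

```python
import numpy as np

def windowed_causal_mask(seq_len, window):
    """Boolean mask where position t may attend only to positions
    max(0, t - window + 1) .. t, capping contextual information at `window`
    tokens (window=1 reduces to context-free word embeddings)."""
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    return (j <= i) & (j > i - window)

mask = windowed_causal_mask(seq_len=8, window=3)
print(mask.astype(int))

# In attention, disallowed positions get -inf before the softmax:
scores = np.random.randn(8, 8)
scores = np.where(mask, scores, -np.inf)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
```

Sweeping the window size and regressing fMRI activity on the resulting
embeddings is what lets the method estimate a context-integration window per
voxel, as the abstract describes.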
2305.13944
|
2023-05-23T11:02:28Z
|
Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic
Frame Induction
|
[
"Kosuke Yamada",
"Ryohei Sasano",
"Koichi Takeda"
] |
The semantic frame induction tasks are defined as a clustering of words into
the frames that they evoke, and a clustering of their arguments according to
the frame element roles that they should fill. In this paper, we address the
latter task of argument clustering, which aims to acquire frame element
knowledge, and propose a method that applies deep metric learning. In this
method, a pre-trained language model is fine-tuned to be suitable for
distinguishing frame element roles through the use of frame-annotated data, and
argument clustering is performed with embeddings obtained from the fine-tuned
model. Experimental results on FrameNet demonstrate that our method achieves
substantially better performance than existing methods.
|
[
"cs.CL"
] | false |
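Deep metric learning for frame-element roles can be illustrated with a triplet
objective: pull together argument embeddings that fill the same role and push
apart those that do not. The paper fine-tunes a pre-trained language model; the
sketch below shows only the loss on precomputed embeddings, as a generic
instance of deep metric learning rather than the paper's exact loss.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on embeddings: the anchor should be closer to an
    argument filling the same frame element role (positive) than to one
    filling a different role (negative), by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(1)
anchor = rng.normal(size=128)          # e.g. embedding of a Theme argument
positive = anchor + 0.1 * rng.normal(size=128)   # another Theme argument
negative = rng.normal(size=128)                  # an Agent argument
print(triplet_loss(anchor, positive, negative))

# After fine-tuning with such a loss, arguments are clustered (e.g. k-means)
# in the learned embedding space to induce frame element roles.
```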
2305.13972
|
2023-05-23T11:56:03Z
|
Make a Choice! Knowledge Base Question Answering with In-Context
Learning
|
[
"Chuanyuan Tan",
"Yuehe Chen",
"Wenbiao Shao",
"Wenliang Chen"
] |
Question answering over knowledge bases (KBQA) aims to answer factoid
questions with a given knowledge base (KB). Due to the large scale of KBs,
annotated data cannot cover all fact schemas in a KB, which poses a
challenge to the generalization ability of methods that require a sufficient
amount of annotated data. Recently, LLMs have shown strong few-shot performance
in many NLP tasks. We expect LLM can help existing methods improve their
generalization ability, especially in low-resource situations. In this paper,
we present McL-KBQA, a framework that incorporates the few-shot ability of LLM
into the KBQA method via ICL-based multiple choice and then improves the
effectiveness of the QA tasks. Experimental results on two KBQA datasets
demonstrate the competitive performance of McL-KBQA with strong improvements in
generalization. We hope to explore a new way of approaching QA tasks, starting
from KBQA in conjunction with LLMs: how to generate answers normatively and
correctly, with strong generalization.
|
[
"cs.CL"
] | false |
2305.13973
|
2023-05-23T11:56:17Z
|
Effortless Integration of Memory Management into Open-Domain
Conversation Systems
|
[
"Eunbi Choi",
"Kyoung-Woon On",
"Gunsoo Han",
"Sungwoong Kim",
"Daniel Wontae Nam",
"Daejin Jo",
"Seung Eun Rho",
"Taehwan Kwon",
"Minjoon Seo"
] |
Open-domain conversation systems integrate multiple conversation skills into
a single system through a modular approach. One of the limitations of the
system, however, is the absence of management capability for external memory.
In this paper, we propose a simple method to improve BlenderBot3 by integrating
memory management ability into it. Since no training data exists for this
purpose, we propose an automated dataset creation method for memory management. Our
method 1) requires little cost for data construction, 2) does not affect
performance in other tasks, and 3) reduces external memory. We show that our
proposed model BlenderBot3-M^3, which is multi-task trained with memory
management, outperforms BlenderBot3 with a relative 4% performance gain in
terms of F1 score.
|
[
"cs.CL"
] | false |
2305.13989
|
2023-05-23T12:15:33Z
|
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African
Languages
|
[
"Cheikh M. Bamba Dione",
"David Adelani",
"Peter Nabende",
"Jesujoba Alabi",
"Thapelo Sindane",
"Happy Buzaaba",
"Shamsuddeen Hassan Muhammad",
"Chris Chinenye Emezue",
"Perez Ogayo",
"Anuoluwapo Aremu",
"Catherine Gitau",
"Derguene Mbaye",
"Jonathan Mukiibi",
"Blessing Sibanda",
"Bonaventure F. P. Dossou",
"Andiswa Bukula",
"Rooweither Mabuya",
"Allahsera Auguste Tapo",
"Edwin Munkoh-Buabeng",
"victoire Memdjokam Koagne",
"Fatoumata Ouoba Kabore",
"Amelia Taylor",
"Godson Kalipe",
"Tebogo Macucwa",
"Vukosi Marivate",
"Tajuddeen Gwadabe",
"Mboning Tchiaze Elvis",
"Ikechukwu Onyenwe",
"Gratien Atindogbe",
"Tolulope Adelani",
"Idris Akinade",
"Olanrewaju Samuel",
"Marien Nahimana",
"Théogène Musabeyezu",
"Emile Niyomutabazi",
"Ester Chimhenga",
"Kudzai Gotosa",
"Patrick Mizha",
"Apelete Agbolo",
"Seydou Traore",
"Chinedu Uchechukwu",
"Aliyu Yusuf",
"Muhammad Abdullahi",
"Dietrich Klakow"
] |
In this paper, we present MasakhaPOS, the largest part-of-speech (POS)
dataset for 20 typologically diverse African languages. We discuss the
challenges in annotating POS for these languages using the UD (universal
dependencies) guidelines. We conducted extensive POS baseline experiments using
conditional random field and several multilingual pre-trained language models.
We applied various cross-lingual transfer models trained with data available in
UD. Evaluating on the MasakhaPOS dataset, we show that choosing the best
transfer language(s) in both single-source and multi-source setups greatly
improves the POS tagging performance of the target languages, in particular
when combined with cross-lingual parameter-efficient fine-tuning methods.
Crucially, transferring knowledge from a language that matches the language
family and morphosyntactic properties seems more effective for POS tagging in
unseen languages.
|
[
"cs.CL"
] | false |
2305.14002
|
2023-05-23T12:29:44Z
|
Improving Language Models via Plug-and-Play Retrieval Feedback
|
[
"Wenhao Yu",
"Zhihan Zhang",
"Zhenwen Liang",
"Meng Jiang",
"Ashish Sabharwal"
] |
Large language models (LLMs) exhibit remarkable performance across various
NLP tasks. However, they often generate incorrect or hallucinated information,
which hinders their practical applicability in real-world scenarios. Human
feedback has been shown to effectively enhance the factuality and quality of
generated content, addressing some of these limitations. However, this approach
is resource-intensive, involving manual input and supervision, which can be
time-consuming and expensive. Moreover, it cannot be provided during inference,
further limiting its practical utility in dynamic and interactive applications.
In this paper, we introduce ReFeed, a novel pipeline designed to enhance LLMs
by providing automatic retrieval feedback in a plug-and-play framework without
the need for expensive fine-tuning. ReFeed first generates initial outputs,
then utilizes a retrieval model to acquire relevant information from large
document collections, and finally incorporates the retrieved information into
the in-context demonstration for output refinement, thereby addressing the
limitations of LLMs in a more efficient and cost-effective manner. Experiments
on four knowledge-intensive benchmark datasets demonstrate our proposed ReFeed
could improve over +6.0% under zero-shot setting and +2.5% under few-shot
setting, compared to baselines without using retrieval feedback.
|
[
"cs.CL"
] | false |
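The ReFeed pipeline described above can be sketched as three plug-and-play
steps. In the sketch below, `generate` and `retrieve` are hypothetical
placeholder stand-ins for an LLM call and a retrieval model (e.g., BM25 or a
dense retriever), not real APIs.

```python
def generate(prompt: str) -> str:
    # Hypothetical LLM call; a real system would query an API or local model.
    return "initial answer (placeholder)"

def retrieve(query: str, k: int = 3) -> list:
    # Hypothetical retriever over a large document collection.
    return [f"retrieved passage {i} (placeholder)" for i in range(k)]

def refeed(question: str) -> str:
    # Step 1: generate an initial answer without any retrieved evidence.
    initial = generate(f"Question: {question}\nAnswer:")
    # Step 2: retrieve documents conditioned on the question AND the initial
    # output, so retrieval can target what the model actually produced.
    docs = retrieve(f"{question} {initial}")
    # Step 3: refine the answer in-context with the retrieved evidence;
    # no fine-tuning of the underlying LLM is required.
    context = "\n".join(f"- {d}" for d in docs)
    prompt = (f"Evidence:\n{context}\n\nQuestion: {question}\n"
              f"Initial answer: {initial}\nRefined answer:")
    return generate(prompt)

print(refeed("Who wrote The Master and Margarita?"))
```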
2305.14006
|
2023-05-23T12:35:49Z
|
Multi-Granularity Prompts for Topic Shift Detection in Dialogue
|
[
"Jiangyi Lin",
"Yaxin Fan",
"Xiaomin Chu",
"Peifeng Li",
"Qiaoming Zhu"
] |
The goal of dialogue topic shift detection is to identify whether the current
topic in a conversation has changed or needs to change. Previous work focused
on detecting topic shifts using pre-trained models to encode the utterance,
failing to delve into the various levels of topic granularity in the dialogue
and understand dialogue contents. To address the above issues, we take a
prompt-based approach to fully extract topic information from dialogues at
multiple-granularity, i.e., label, turn, and topic. Experimental results on our
annotated Chinese Natural Topic Dialogue dataset CNTD and the publicly
available English TIAGE dataset show that the proposed model outperforms the
baselines. Further experiments show that the information extracted at different
levels of granularity effectively helps the model comprehend the conversation
topics.
|
[
"cs.CL"
] | false |
2305.14007
|
2023-05-23T12:37:14Z
|
When Does Aggregating Multiple Skills with Multi-Task Learning Work? A
Case Study in Financial NLP
|
[
"Jingwei Ni",
"Zhijing Jin",
"Qian Wang",
"Mrinmaya Sachan",
"Markus Leippold"
] |
Multi-task learning (MTL) aims at achieving a better model by leveraging data
and knowledge from multiple tasks. However, MTL does not always work --
sometimes negative transfer occurs between tasks, especially when aggregating
loosely related skills, leaving it an open question when MTL works. Previous
studies show that MTL performance can be improved by algorithmic tricks.
However, what tasks and skills should be included is less well explored. In
this work, we conduct a case study in Financial NLP where multiple datasets
exist for skills relevant to the domain, such as numeric reasoning and
sentiment analysis. Due to the task difficulty and data scarcity in the
Financial NLP domain, we explore when aggregating such diverse skills from
multiple datasets with MTL can work. Our findings suggest that the key to MTL
success lies in skill diversity, relatedness between tasks, and choice of
aggregation size and shared capacity. Specifically, MTL works well when tasks
are diverse but related, and when the size of the task aggregation and the
shared capacity of the model are balanced to avoid overwhelming certain tasks.
|
[
"cs.CL"
] | false |
2305.14010
|
2023-05-23T12:43:19Z
|
IfQA: A Dataset for Open-domain Question Answering under Counterfactual
Presuppositions
|
[
"Wenhao Yu",
"Meng Jiang",
"Peter Clark",
"Ashish Sabharwal"
] |
Although counterfactual reasoning is a fundamental aspect of intelligence,
the lack of large-scale counterfactual open-domain question-answering (QA)
benchmarks makes it difficult to evaluate and improve models on this ability.
To address this void, we introduce the first such dataset, named IfQA, where
each question is based on a counterfactual presupposition via an "if" clause.
For example, if Los Angeles was on the east coast of the U.S., what would be
the time difference between Los Angeles and Paris? Such questions require
models to go beyond retrieving direct factual knowledge from the Web: they must
identify the right information to retrieve and reason about an imagined
situation that may even go against the facts built into their parameters. The
IfQA dataset contains over 3,800 questions that were annotated by
crowdworkers on relevant Wikipedia passages. Empirical analysis reveals that
the IfQA dataset is highly challenging for existing open-domain QA methods,
including supervised retrieve-then-read pipeline methods (EM score 36.2), as
well as recent few-shot approaches such as chain-of-thought prompting with
GPT-3 (EM score 27.4). The unique challenges posed by the IfQA benchmark will
push open-domain QA research on both retrieval and counterfactual reasoning
fronts.
|
[
"cs.CL"
] | false |
2305.14044
|
2023-05-23T13:14:34Z
|
Process-To-Text: A Framework for the Quantitative Description of
Processes in Natural Language
|
[
"Yago Fontenla-Seco",
"Alberto Bugarín-Diz",
"Manuel Lama"
] |
In this paper we present the Process-To-Text (P2T) framework for the
automatic generation of textual descriptive explanations of processes. P2T
integrates three AI paradigms: process mining for extracting temporal and
structural information from a process, fuzzy linguistic protoforms for
modelling uncertain terms, and natural language generation for building the
explanations. A real use-case in the cardiology domain is presented, showing
the potential of P2T for providing natural language explanations addressed to
specialists.
|
[
"cs.CL"
] | false |
2305.14081
|
2023-05-23T14:04:12Z
|
How to Solve Few-Shot Abusive Content Detection Using the Data We
Actually Have
|
[
"Viktor Hangya",
"Alexander Fraser"
] |
Due to the broad range of social media platforms and their user groups, the
requirements of abusive language detection systems are varied and
ever-changing. A large set of annotated corpora with different properties and
label sets has already been created, for tasks such as hate or misogyny
detection, but the form and targets of abusive speech are constantly changing.
Since the annotation of new corpora is expensive, in this work we leverage
datasets we
already have, covering a wide range of tasks related to abusive language
detection, in order to build models cheaply for a new target label set and/or
language, using only a few training examples of the target domain. We propose a
two-step approach: first we train our model in a multitask fashion. We then
carry out few-shot adaptation to the target requirements. Our experiments show
that by leveraging already existing datasets and only a few-shots of the target
task the performance of models can be improved not only monolingually but
across languages as well. Our analysis also shows that our models acquire a
general understanding of abusive language, since they improve the prediction of
labels which are present only in the target dataset. We also analyze the
trade-off between specializing the already existing datasets to a given target
setup for best performance and its negative effects on model adaptability.
|
[
"cs.CL"
] | false |
2305.14211
|
2023-05-23T16:28:42Z
|
Towards Graph-hop Retrieval and Reasoning in Complex Question Answering
over Textual Database
|
[
"Minjun Zhu",
"Yixuan Weng",
"Shizhu He",
"Kang Liu",
"Jun Zhao"
] |
In Textual question answering (TQA) systems, complex questions often require
retrieving multiple textual fact chains with multiple reasoning steps, while
existing benchmarks are limited to single-chain or single-hop retrieval
scenarios. In this paper, we propose Graph-Hop, a novel multi-chain and
multi-hop retrieval and reasoning paradigm for complex question answering. We
construct a new benchmark called ReasonGraphQA, which provides explicit and
fine-grained evidence graphs for complex questions to support interpretable,
comprehensive, and detailed reasoning. ReasonGraphQA also shows an advantage in
reasoning diversity and scale. Moreover, we propose a strong graph-hop baseline
called Bidirectional Graph Retrieval (BGR) for generating an explanation graph
of textual evidence
in knowledge reasoning and question answering. We have thoroughly evaluated
existing evidence retrieval and reasoning models on the ReasonGraphQA.
Experiments highlight Graph-Hop is a promising direction for answering complex
questions, but it still has certain limitations. We have further studied
mitigation strategies to meet these challenges and discuss future directions.
|
[
"cs.CL"
] | false |
2305.14224
|
2023-05-23T16:38:01Z
|
mmT5: Modular Multilingual Pre-Training Solves Source Language
Hallucinations
|
[
"Jonas Pfeiffer",
"Francesco Piccinno",
"Massimo Nicosia",
"Xinyi Wang",
"Machel Reid",
"Sebastian Ruder"
] |
Multilingual sequence-to-sequence models perform poorly with increased
language coverage and fail to consistently generate text in the correct target
language in few-shot settings. To address these challenges, we propose mmT5, a
modular multilingual sequence-to-sequence model. mmT5 utilizes
language-specific modules during pre-training, which disentangle
language-specific information from language-agnostic information. We identify
representation drift during fine-tuning as a key limitation of modular
generative models and develop strategies that enable effective zero-shot
transfer. Our model outperforms mT5 at the same parameter sizes by a large
margin on representative natural language understanding and generation tasks in
40+ languages. Compared to mT5, mmT5 raises the rate of generating text in the
correct language under zero-shot settings from 7% to 99%, thereby greatly
alleviating the source language hallucination problem.
|
[
"cs.CL"
] | false |
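The modular idea behind mmT5, language-specific modules attached to shared layers, can be sketched as a residual bottleneck adapter per language. This is an illustrative approximation, not the paper's actual architecture.

```python
import torch.nn as nn

class ModularLayer(nn.Module):
    """One shared layer plus a language-specific bottleneck module.

    A sketch of the modular principle only; mmT5's module placement,
    sizes, and pre-training procedure differ in detail.
    """
    def __init__(self, hidden, languages, bottleneck=64):
        super().__init__()
        self.shared = nn.Linear(hidden, hidden)  # language-agnostic part
        self.per_lang = nn.ModuleDict({
            lang: nn.Sequential(nn.Linear(hidden, bottleneck), nn.ReLU(),
                                nn.Linear(bottleneck, hidden))
            for lang in languages
        })

    def forward(self, x, lang):
        h = self.shared(x)
        return h + self.per_lang[lang](h)  # residual, language-specific path
```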
2305.14225
|
2023-05-23T16:40:07Z
|
ManiTweet: A New Benchmark for Identifying Manipulation of News on
Social Media
|
[
"Kung-Hsiang Huang",
"Hou Pong Chan",
"Kathleen McKeown",
"Heng Ji"
] |
Considerable advancements have been made to tackle the misrepresentation of
information derived from reference articles in the domains of fact-checking and
faithful summarization. However, an unaddressed aspect remains - the
identification of social media posts that manipulate information within
associated news articles. This task presents a significant challenge, primarily
due to the prevalence of personal opinions in such posts. We present a novel
task, identifying manipulation of news on social media, which aims to detect
manipulation in social media posts and identify manipulated or inserted
information. To study this task, we have proposed a data collection schema and
curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and
corresponding articles. Our analysis demonstrates that this task is highly
challenging, with large language models (LLMs) yielding unsatisfactory
performance. Additionally, we have developed a simple yet effective basic model
that outperforms LLMs significantly on the ManiTweet dataset. Finally, we have
conducted an exploratory analysis of human-written tweets, unveiling intriguing
connections between manipulation and the domain and factuality of news
articles, as well as revealing that manipulated sentences are more likely to
encapsulate the main story or consequences of a news outlet.
|
[
"cs.CL"
] | false |
2305.14256
|
2023-05-23T17:10:37Z
|
Linear Cross-Lingual Mapping of Sentence Embeddings
|
[
"Oleg Vasilyev",
"Fumika Isono",
"John Bohannon"
] |
The semantics of a sentence is defined with much less ambiguity than the
semantics of a single word, and it should be better preserved by translation
into another language. If multilingual sentence embeddings are intended to represent sentence
semantics, then the similarity between embeddings of any two sentences must be
invariant with respect to translation. Based on this suggestion, we consider a
simple linear cross-lingual mapping as a possible improvement of the
multilingual embeddings. We also consider deviation from orthogonality
conditions as a measure of deficiency of the embeddings.
|
[
"cs.CL"
] | false |
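The proposed linear cross-lingual mapping and the orthogonality-deviation measure can be illustrated in a few lines of NumPy; the random arrays below stand in for real paired sentence embeddings.

```python
import numpy as np

# Row i of X (source language) and Y (target language) would embed
# translations of the same sentence; random data here is illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
Y = rng.normal(size=(1000, 256))

# Fit a linear map W minimizing ||X @ W - Y||_F by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# If similarities were translation-invariant, W would be close to orthogonal;
# the residual below serves as a measure of deficiency of the embeddings.
deviation = np.linalg.norm(W.T @ W - np.eye(W.shape[1]))
print(f"deviation from orthogonality: {deviation:.3f}")
```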
2305.14321
|
2023-05-23T17:53:30Z
|
ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and
Text Embeddings
|
[
"William Brannon",
"Suyash Fulay",
"Hang Jiang",
"Wonjune Kang",
"Brandon Roy",
"Jad Kabbara",
"Deb Roy"
] |
We propose ConGraT (Contrastive Graph-Text pretraining), a general,
self-supervised method for jointly learning separate representations of texts
and nodes in a parent (or "supervening") graph, where each text is associated
with one of the nodes. Datasets fitting this paradigm are common, from social
media (users and posts), to citation networks over articles, to link graphs
over web pages. We expand on prior work by providing a general,
self-supervised, joint pretraining method, one which does not depend on
particular dataset structure or a specific task. Our method uses two separate
encoders for graph nodes and texts, which are trained to align their
representations within a common latent space. Training uses a batch-wise
contrastive learning objective inspired by prior work on joint text and image
encoding. As graphs are more structured objects than images, we also extend the
training objective to incorporate information about node similarity and
plausible next guesses in matching nodes and texts. Experiments on various
datasets reveal that ConGraT outperforms strong baselines on various downstream
tasks, including node and text category classification and link prediction.
Code and certain datasets are available at
https://github.com/wwbrannon/congrat.
|
[
"cs.CL"
] | false |
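The batch-wise contrastive objective that ConGraT borrows from joint text-image encoding can be sketched as a symmetric InfoNCE loss; the paper's extensions for node similarity and plausible next guesses are omitted here.

```python
import torch
import torch.nn.functional as F

def graph_text_contrastive(node_emb, text_emb, temperature=0.07):
    """Symmetric batch-wise InfoNCE over aligned (node, text) pairs.

    node_emb, text_emb: (B, d) tensors; row i of each encodes the same
    node-text pair. A sketch of the base objective only.
    """
    node_emb = F.normalize(node_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = node_emb @ text_emb.t() / temperature         # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each node should match its own text and vice versa (diagonal targets).
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```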
2305.14322
|
2023-05-23T17:53:38Z
|
RET-LLM: Towards a General Read-Write Memory for Large Language Models
|
[
"Ali Modarressi",
"Ayyoob Imani",
"Mohsen Fayyaz",
"Hinrich Schütze"
] |
Large language models (LLMs) have significantly advanced the field of natural
language processing (NLP) through their extensive parameters and comprehensive
data utilization. However, existing LLMs lack a dedicated memory unit, limiting
their ability to explicitly store and retrieve knowledge for various tasks. In
this paper, we propose RET-LLM, a novel framework that equips LLMs with a
general write-read memory unit, allowing them to extract, store, and recall
knowledge from the text as needed for task performance. Inspired by Davidsonian
semantics theory, we extract and save knowledge in the form of triplets. The
memory unit is designed to be scalable, aggregatable, updatable, and
interpretable. Through qualitative evaluations, we demonstrate the superiority
of our proposed framework over baseline approaches in question answering tasks.
Moreover, our framework exhibits robust performance in handling temporal-based
question answering tasks, showcasing its ability to effectively manage
time-dependent information.
|
[
"cs.CL"
] | false |
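A toy version of a write-read triplet memory, much simpler than but in the spirit of the memory unit described above, might look as follows.

```python
from collections import defaultdict

class TripletMemory:
    """Stores (subject, relation, object) triplets with exact-match reads.

    A minimal sketch; the actual RET-LLM memory is scalable, aggregatable,
    updatable, and integrated with the LLM's generation loop.
    """
    def __init__(self):
        self._store = defaultdict(list)  # (subject, relation) -> [objects]

    def write(self, subject, relation, obj):
        self._store[(subject, relation)].append(obj)

    def read(self, subject, relation):
        return self._store.get((subject, relation), [])

mem = TripletMemory()
mem.write("Alice", "works_at", "Acme Corp")
print(mem.read("Alice", "works_at"))  # ['Acme Corp']
```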
2305.14434
|
2023-05-23T18:01:49Z
|
Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment
Triplet Extraction
|
[
"Yew Ken Chia",
"Hui Chen",
"Wei Han",
"Guizhen Chen",
"Sharifah Mahani Aljunied",
"Soujanya Poria",
"Lidong Bing"
] |
Aspect Sentiment Triplet Extraction (ASTE) is a subtask of Aspect-Based
Sentiment Analysis (ABSA) that considers each opinion term, its expressed
sentiment, and the corresponding aspect target. However, existing methods are
limited to the in-domain setting with two domains. Hence, we propose a
domain-expanded benchmark to address the in-domain, out-of-domain and
cross-domain settings. We support the new benchmark by annotating more than
4000 data samples for two new domains based on hotel and cosmetics reviews. Our
analysis of five existing methods shows that while there is a significant gap
between in-domain and out-of-domain performance, generative methods have a
strong potential for domain generalization. Our datasets, code implementation
and models are available at https://github.com/DAMO-NLP-SG/domain-expanded-aste.
|
[
"cs.CL"
] | false |
2305.14441
|
2023-05-23T18:07:04Z
|
Exploring Contrast Consistency of Open-Domain Question Answering Systems
on Minimally Edited Questions
|
[
"Zhihan Zhang",
"Wenhao Yu",
"Zheng Ning",
"Mingxuan Ju",
"Meng Jiang"
] |
Contrast consistency, the ability of a model to make consistently correct
predictions in the presence of perturbations, is an essential aspect of NLP.
While studied in tasks such as sentiment analysis and reading comprehension, it
remains unexplored in open-domain question answering (OpenQA) due to the
difficulty of collecting perturbed questions that satisfy factuality
requirements. In this work, we collect minimally edited questions as
challenging contrast sets to evaluate OpenQA models. Our collection approach
combines both human annotation and large language model generation. We find
that the widely used dense passage retriever (DPR) performs poorly on our
contrast sets, despite fitting the training set well and performing
competitively on standard test sets. To address this issue, we introduce a
simple and effective query-side contrastive loss with the aid of data
augmentation to improve DPR training. Our experiments on the contrast sets
demonstrate that DPR's contrast consistency is improved without sacrificing its
accuracy on the standard test sets.
|
[
"cs.CL"
] | false |
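One plausible form of a query-side contrastive term, sketched here as an assumption rather than the paper's exact loss, contrasts each question against the gold passage of its minimally edited counterpart.

```python
import torch
import torch.nn.functional as F

def query_side_contrastive(q_orig, q_edit, p_orig, p_edit, temperature=0.05):
    """q_*: query embeddings, p_*: gold-passage embeddings, all shape (d,).

    Each question must score its own gold passage higher than the gold
    passage of its minimally edited counterpart (an assumed formulation).
    """
    queries = torch.stack([q_orig, q_edit])        # (2, d)
    passages = torch.stack([p_orig, p_edit])       # (2, d)
    logits = queries @ passages.t() / temperature  # (2, 2) relevance scores
    targets = torch.tensor([0, 1], device=logits.device)  # diagonal matches
    return F.cross_entropy(logits, targets)
```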
2305.14450
|
2023-05-23T18:17:43Z
|
Is Information Extraction Solved by ChatGPT? An Analysis of Performance,
Evaluation Criteria, Robustness and Errors
|
[
"Ridong Han",
"Tao Peng",
"Chaohao Yang",
"Benyou Wang",
"Lu Liu",
"Xiang Wan"
] |
ChatGPT has stimulated a research boom in the field of large language
models. In this paper, we assess the capabilities of ChatGPT from four
perspectives: Performance, Evaluation Criteria, Robustness, and Error
Types. Specifically, we first evaluate ChatGPT's performance on 17 datasets
covering 14 information extraction (IE) sub-tasks under the zero-shot, few-shot, and chain-of-thought
scenarios, and find a huge performance gap between ChatGPT and SOTA results.
Next, we rethink this gap and propose a soft-matching strategy for evaluation
to more accurately reflect ChatGPT's performance. Then, we analyze the
robustness of ChatGPT on 14 IE sub-tasks, and find that: 1) ChatGPT rarely
outputs invalid responses; 2) Irrelevant context and long-tail target types
greatly affect ChatGPT's performance; 3) ChatGPT cannot understand well the
subject-object relationships in the relation extraction (RE) task. Finally, we analyze the errors of
ChatGPT, and find that "unannotated spans" is the most dominant error type.
This raises concerns about the quality of annotated data, and indicates the
possibility of annotating data with ChatGPT. The data and code are released
on GitHub.
|
[
"cs.CL"
] | false |
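A soft-matching evaluation in the general spirit described above might credit a predicted span when it is sufficiently similar to a gold span; the similarity measure and threshold below are illustrative assumptions, not the paper's exact strategy.

```python
from difflib import SequenceMatcher

def soft_match(pred_span, gold_span, threshold=0.5):
    """Credit a prediction whose surface form is close enough to the gold span."""
    ratio = SequenceMatcher(None, pred_span.lower(), gold_span.lower()).ratio()
    return ratio >= threshold

print(soft_match("Barack Obama", "President Barack Obama"))  # True
print(soft_match("Obama", "Angela Merkel"))                  # False
```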
2305.14471
|
2023-05-23T18:54:15Z
|
CGCE: A Chinese Generative Chat Evaluation Benchmark for General and
Financial Domains
|
[
"Xuanyu Zhang",
"Bingbing Li",
"Qing Yang"
] |
Generative chat models, such as ChatGPT and GPT-4, have revolutionized
natural language generation (NLG) by incorporating instructions and human
feedback to achieve significant performance improvements. However, the lack of
standardized evaluation benchmarks for chat models, particularly for Chinese
and domain-specific models, hinders their assessment and progress. To address
this gap, we introduce the Chinese Generative Chat Evaluation (CGCE) benchmark,
focusing on general and financial domains. The CGCE benchmark encompasses
diverse tasks, including 200 questions in the general domain and 150 specific
professional questions in the financial domain. Manual scoring evaluates
factors such as accuracy, coherence, expression clarity, and completeness. The
CGCE benchmark provides researchers with a standardized framework to assess and
compare Chinese generative chat models, fostering advancements in NLG research.
|
[
"cs.CL"
] | false |
2305.14507
|
2023-05-23T20:26:03Z
|
Deduction under Perturbed Evidence: Probing Student Simulation
Capabilities of Large Language Models
|
[
"Shashank Sonkar",
"Richard G. Baraniuk"
] |
We explore whether Large Language Models (LLMs) are capable of logical
reasoning with distorted facts, which we call Deduction under Perturbed
Evidence (DUPE). DUPE presents a unique challenge to LLMs since they typically
rely on their parameters, which encode mostly accurate information, to reason
and make inferences. However, in DUPE, LLMs must reason over manipulated or
falsified evidence present in their prompts, which can result in false
conclusions that are valid only under the manipulated evidence. Our goal with
DUPE is to determine whether LLMs can arrive at these false conclusions and
identify whether the dominant factor influencing the deduction process is the
encoded data in the parameters or the manipulated evidence in the prompts. To
evaluate the DUPE capabilities of LLMs, we create a DUPEd version of the
StrategyQA dataset, where facts are manipulated to reverse the answer to the
question. Our findings show that even the most advanced GPT models struggle to
reason on manipulated facts - showcasing poor DUPE skills - with accuracy
dropping by 45% compared to the original dataset. We also investigate prompt
settings inspired by student simulation models, which mitigate the accuracy
drop to some extent. Our findings have practical implications for understanding
the performance of LLMs in real-world applications such as student simulation
models that involve reasoning over inaccurate information.
|
[
"cs.CL"
] | false |
2305.14533
|
2023-05-23T21:33:43Z
|
How to Choose How to Choose Your Chatbot: A Massively Multi-System
MultiReference Data Set for Dialog Metric Evaluation
|
[
"Huda Khayrallah",
"Zuhaib Akhtar",
"Edward Cohen",
"João Sedoc"
] |
We release MMSMR, a Massively Multi-System MultiReference dataset to enable
future work on metrics and evaluation for dialog. Automatic metrics for
dialogue evaluation should be robust proxies for human judgments; however, the
verification of robustness is currently far from satisfactory. To quantify the
robustness correlation and understand what is necessary in a test set, we
create and release an 8-reference dialog dataset by extending single-reference
evaluation sets, and we introduce a new language learning conversation dataset.
We then train 1750 systems and evaluate them on our novel test set and the
DailyDialog dataset. We release the novel test set, model hyperparameters,
inference outputs, and metric scores for each system on a variety of datasets.
|
[
"cs.CL"
] | false |
2305.14540
|
2023-05-23T21:50:06Z
|
LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond
|
[
"Philippe Laban",
"Wojciech Kryściński",
"Divyansh Agarwal",
"Alexander R. Fabbri",
"Caiming Xiong",
"Shafiq Joty",
"Chien-Sheng Wu"
] |
With the recent appearance of LLMs in practical settings, having methods that
can effectively detect factual inconsistencies is crucial to reduce the
propagation of misinformation and improve trust in model outputs. When testing
on existing factual consistency benchmarks, we find that a few large language
models (LLMs) perform competitively on classification benchmarks for factual
inconsistency detection compared to traditional non-LLM methods. However, a
closer analysis reveals that most LLMs fail on more complex formulations of the
task and exposes issues with existing evaluation benchmarks, affecting
evaluation precision. To address this, we propose a new protocol for
inconsistency detection benchmark creation and implement it in a 10-domain
benchmark called SummEdits. This new benchmark is 20 times more cost-effective
per sample than previous benchmarks and highly reproducible, as we estimate
inter-annotator agreement at about 0.9. Most LLMs struggle on SummEdits, with
performance close to random chance. The best-performing model, GPT-4, is still
8% below estimated human performance, highlighting the gaps in LLMs' ability
to reason about facts and detect inconsistencies when they occur.
|
[
"cs.CL"
] | true |
2305.14548
|
2023-05-23T22:11:47Z
|
Interpretable Automatic Fine-grained Inconsistency Detection in Text
Summarization
|
[
"Hou Pong Chan",
"Qi Zeng",
"Heng Ji"
] |
Existing factual consistency evaluation approaches for text summarization
provide binary predictions and limited insights into the weakness of
summarization systems. Therefore, we propose the task of fine-grained
inconsistency detection, the goal of which is to predict the fine-grained types
of factual errors in a summary. Motivated by how humans inspect factual
inconsistency in summaries, we propose an interpretable fine-grained
inconsistency detection model, FineGrainFact, which explicitly represents the
facts in the documents and summaries with semantic frames extracted by semantic
role labeling, and highlights the related semantic frames to predict
inconsistency. The highlighted semantic frames help verify predicted error
types and correct inconsistent summaries. Experimental results demonstrate that
our model outperforms strong baselines and provides evidence to support or
refute the summary.
|
[
"cs.CL"
] | false |
2305.14564
|
2023-05-23T23:06:04Z
|
PEARL: Prompting Large Language Models to Plan and Execute Actions Over
Long Documents
|
[
"Simeng Sun",
"Yang Liu",
"Shuohang Wang",
"Chenguang Zhu",
"Mohit Iyyer"
] |
Strategies such as chain-of-thought prompting improve the performance of
large language models (LLMs) on complex reasoning tasks by decomposing input
examples into intermediate steps. However, it remains unclear how to apply such
methods to reason over long input documents, in which both the decomposition
and the output of each intermediate step are non-trivial to obtain. In this
work, we propose PEARL, a prompting framework to improve reasoning over long
documents, which consists of three stages: action mining, plan formulation, and
plan execution. More specifically, given a question about a long document,
PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE,
FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain
the answer. Each stage of PEARL is implemented via zero-shot or few-shot
prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate
PEARL on a challenging subset of the QuALITY dataset, which contains questions
that require complex reasoning over long narrative texts. PEARL outperforms
zero-shot and chain-of-thought prompting on this dataset, and ablation
experiments show that each stage of PEARL is critical to its performance.
Overall, PEARL is a first step towards leveraging LLMs to reason over long
documents.
|
[
"cs.CL"
] | true |
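The three PEARL stages can be mirrored in a short pipeline sketch; the `llm` callable and the prompt wordings below are hypothetical stand-ins, not the paper's templates.

```python
def pearl(question, document, llm):
    """Action mining -> plan formulation -> plan execution.

    `llm` is a hypothetical callable mapping a prompt string to a completion.
    """
    # Stage 1: action mining -- elicit reusable actions for this question type.
    actions = llm("List actions (e.g., SUMMARIZE, FIND_EVENT) useful for "
                  f"answering questions like: {question}")
    # Stage 2: plan formulation -- decompose the question into action steps.
    plan = llm(f"Using only these actions:\n{actions}\n"
               f"Write a step-by-step plan to answer: {question}")
    # Stage 3: plan execution -- run each step over the document in order.
    state = document
    for step in plan.splitlines():
        if step.strip():
            state = llm(f"Current state:\n{state}\n\nExecute step: {step}")
    return state
```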
2305.14569
|
2023-05-23T23:14:38Z
|
Few-shot Unified Question Answering: Tuning Models or Prompts?
|
[
"Srijan Bansal",
"Semih Yavuz",
"Bo Pang",
"Meghana Bhat",
"Yingbo Zhou"
] |
Question-answering (QA) tasks often investigate specific question types,
knowledge domains, or reasoning skills, leading to specialized models catering
to specific categories of QA tasks. While recent research has explored the idea
of unified QA models, such models are usually explored for high-resource
scenarios and require re-training to extend their capabilities. To overcome
these drawbacks, the paper explores the potential of two tuning paradigms,
model tuning and prompt tuning, for unified QA under a low-resource setting. The paper
provides an exhaustive analysis of their applicability using 16 QA datasets,
revealing that prompt tuning can perform as well as model tuning in a few-shot
setting with a good initialization. The study also shows that parameter-sharing
results in superior few-shot performance, simple knowledge transfer techniques
for prompt initialization can be effective, and prompt tuning achieves a
significant performance boost from pre-training in a low-resource regime. The
research offers insights into the advantages and limitations of prompt tuning
for unified QA in a few-shot setting, contributing to the development of
effective and efficient systems in low-resource scenarios.
|
[
"cs.CL"
] | false |
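Prompt tuning in its generic form, trainable 'virtual token' embeddings prepended to the input while the language model stays frozen, can be sketched as follows; this illustrates the paradigm, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to frozen-LM input embeddings."""
    def __init__(self, prompt_len, hidden_dim):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)

    def forward(self, input_embeds):  # input_embeds: (B, T, d)
        batch = input_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)  # (B, P+T, d)
```

Only the `prompt` matrix receives gradients; a good initialization (for example, from transferred prompts) is what the study finds critical in the few-shot regime.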
2305.16876
|
2023-05-23T06:32:55Z
|
CombLM: Adapting Black-Box Language Models through Small Fine-Tuned
Models
|
[
"Aitor Ormazabal",
"Mikel Artetxe",
"Eneko Agirre"
] |
Methods for adapting language models (LMs) to new tasks and domains have
traditionally assumed white-box access to the model, and work by modifying its
parameters. However, this is incompatible with a recent trend in the field,
where the highest quality models are only available as black-boxes through
inference APIs. Even when the model weights are available, the computational
cost of fine-tuning large LMs can be prohibitive for most practitioners. In
this work, we present a lightweight method for adapting large LMs to new
domains and tasks, assuming no access to their weights or intermediate
activations. Our approach fine-tunes a small white-box LM and combines it with
the large black-box LM at the probability level through a small network,
learned on a small validation set. We validate our approach by adapting a large
LM (OPT-30B) to several domains and a downstream task (machine translation),
observing improved performance in all cases, of up to 9%, while using a domain
expert 23x smaller.
|
[
"cs.CL"
] | false |
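The probability-level combination can be sketched as a small gating network that mixes the two next-token distributions; this is an assumed form of the idea, and the paper's actual combination network may differ.

```python
import torch
import torch.nn as nn

class ProbCombiner(nn.Module):
    """Mix a black-box LM's and a small fine-tuned LM's output distributions.

    The gate sees both distributions and emits a mixture weight per example,
    so the output remains a valid probability distribution.
    """
    def __init__(self, vocab_size):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * vocab_size, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, p_blackbox, p_small):  # both: (B, vocab_size)
        w = self.gate(torch.cat([p_blackbox, p_small], dim=-1))  # (B, 1)
        return w * p_blackbox + (1 - w) * p_small
```

Because only output probabilities are consumed, the approach needs no access to the large model's weights or activations, matching the black-box constraint.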
2305.13648
|
2023-05-23T03:44:06Z
|
Non-parametric, Nearest-neighbor-assisted Fine-tuning for Neural Machine
Translation
|
[
"Jiayi Wang",
"Ke Wang",
"Yuqi Zhang",
"Yu Zhao",
"Pontus Stenetorp"
] |
Non-parametric, k-nearest-neighbor algorithms have recently made inroads to
assist generative models such as language models and machine translation
decoders. We explore whether such non-parametric models can improve machine
translation models at the fine-tuning stage by incorporating statistics from
the kNN predictions to inform the gradient updates for a baseline translation
model. There are multiple methods that could be used to incorporate kNN
statistics; we investigate gradient scaling by a gating mechanism, by the
kNN's ground-truth probability, and by reinforcement learning. For four
standard in-domain machine translation datasets, compared with classic
fine-tuning, we report consistent improvements for all three methods of as
much as 1.45 BLEU and 1.28 BLEU for German-English and English-German
translation, respectively. Through qualitative analysis, we find particular
improvements when translating grammatical relations or function words, which
results in increased fluency of our model.
|
[
"cs.CL",
"cs.AI"
] | false |
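One of the three variants mentioned, scaling the training signal by the kNN model's probability of the ground-truth token, might look like the sketch below; whether the scaling up- or down-weights confident tokens is a design choice, and this form is an assumption rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def knn_scaled_loss(logits, targets, knn_probs):
    """Cross-entropy weighted per token by the kNN ground-truth probability.

    logits: (N, vocab), targets: (N,), knn_probs: (N,) in [0, 1].
    """
    per_token = F.cross_entropy(logits, targets, reduction="none")  # (N,)
    return (knn_probs * per_token).mean()
```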
2305.13652
|
2023-05-23T03:50:35Z
|
Cross-lingual Knowledge Transfer and Iterative Pseudo-labeling for
Low-Resource Speech Recognition with Transducers
|
[
"Jan Silovsky",
"Liuhui Deng",
"Arturo Argueta",
"Tresi Arvizo",
"Roger Hsiao",
"Sasha Kuznietsov",
"Yiu-Chang Lin",
"Xiaoqiang Xiao",
"Yuanyuan Zhang"
] |
Voice technology has become ubiquitous recently. However, the accuracy, and
hence the user experience, varies significantly across languages, which makes
the technology not equally inclusive. The availability of data for different
languages is one of the key factors affecting accuracy, especially in training
of all-neural end-to-end automatic speech recognition systems.
Cross-lingual knowledge transfer and iterative pseudo-labeling are two
techniques that have been shown to be successful for improving the accuracy of
ASR systems, in particular for low-resource languages, like Ukrainian.
Our goal is to train an all-neural Transducer-based ASR system to replace a
DNN-HMM hybrid system with no manually annotated training data. We show that
the Transducer system trained using transcripts produced by the hybrid system
achieves an 18% reduction in word error rate. However, using a combination of
cross-lingual knowledge transfer from related languages and iterative
pseudo-labeling, we are able to achieve a 35% reduction in the error rate.
|
[
"cs.CL",
"eess.AS"
] | false |
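A generic iterative pseudo-labeling loop of the kind the abstract builds on can be sketched as follows; `train_on`, `transcribe`, and the confidence filter are hypothetical helpers, not the authors' API.

```python
def iterative_pseudo_labeling(model, labeled, unlabeled, rounds=3,
                              confidence=0.9):
    """Alternate between training and relabeling the unlabeled audio."""
    data = list(labeled)
    for _ in range(rounds):
        model.train_on(data)
        pseudo = []
        for audio in unlabeled:
            hypothesis, score = model.transcribe(audio)
            if score >= confidence:          # keep only confident hypotheses
                pseudo.append((audio, hypothesis))
        data = list(labeled) + pseudo        # retrain on the union next round
    return model
```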
2305.13668
|
2023-05-23T04:22:00Z
|
Grounding and Distinguishing Conceptual Vocabulary Through Similarity
Learning in Embodied Simulations
|
[
"Sadaf Ghaffari",
"Nikhil Krishnaswamy"
] |
We present a novel method for using agent experiences gathered through an
embodied simulation to ground contextualized word vectors to object
representations. We use similarity learning to make comparisons between
different object types based on their properties when interacted with, and to
extract common features pertaining to the objects' behavior. We then use an
affine transformation to calculate a projection matrix that transforms
contextualized word vectors from different transformer-based language models
into this learned space, and evaluate whether new test instances of transformed
token vectors identify the correct concept in the object embedding space. Our
results expose properties of the embedding spaces of four different transformer
models and show that grounding object token vectors is usually more helpful to
grounding verb and attribute token vectors than the reverse, which reflects
earlier conclusions in the analogical reasoning and psycholinguistic
literature.
|
[
"cs.CL",
"cs.LG"
] | false |
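The affine projection from contextualized word vectors into the learned object space can be illustrated with least squares in NumPy; the random arrays below stand in for real token vectors and object embeddings.

```python
import numpy as np

# W: contextualized word vectors (n, d_w); O: object embeddings (n, d_o)
# from the embodied simulation. Random data here is illustrative only.
rng = np.random.default_rng(1)
W = rng.normal(size=(500, 768))
O = rng.normal(size=(500, 64))

# Affine map: append a bias column, then solve min ||[W 1] A - O||_F.
W1 = np.hstack([W, np.ones((W.shape[0], 1))])
A, *_ = np.linalg.lstsq(W1, O, rcond=None)   # projection matrix (d_w+1, d_o)

# Ground a new token vector by nearest neighbor in the object space.
w_test = rng.normal(size=(1, 768))
proj = np.hstack([w_test, np.ones((1, 1))]) @ A
nearest = int(np.argmin(np.linalg.norm(O - proj, axis=1)))
print("closest object instance index:", nearest)
```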
2305.13690
|
2023-05-23T04:56:04Z
|
Towards Asking Clarification Questions for Information Seeking on
Task-Oriented Dialogues
|
[
"Yue Feng",
"Hossein A. Rahmani",
"Aldo Lipani",
"Emine Yilmaz"
] |
Task-oriented dialogue systems aim at providing users with task-specific
services. Users of such systems often do not know all the information about the
task they are trying to accomplish, requiring them to seek information about
the task. To provide accurate and personalized task-oriented information
seeking results, task-oriented dialogue systems need to address two potential
issues: 1) users' inability to describe their complex information needs in
their requests; and 2) ambiguous/missing information the system has about the
users. In this paper, we propose a new Multi-Attention Seq2Seq Network, named
MAS2S, which can ask questions to clarify the user's information needs and the
user's profile in task-oriented information seeking. We also extend an existing
dataset for task-oriented information seeking, leading to our extended
dataset, which contains about 100k task-oriented information seeking dialogues
that are made publicly available (dataset and code are available at
https://github.com/sweetalyssum/clarit). Experimental results on our dataset
show that MAS2S outperforms baselines on both clarification question
generation and answer prediction.
|
[
"cs.CL",
"cs.IR"
] | false |