arxiv_id (string, length 10) | published (string, length 20) | titles (string, length 9-243) | authors (list, length 1-389) | abstract (string, length 96-3.09k) | categories (list, length 1-10) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2305.14601
|
2023-05-24T00:51:04Z
|
FaceFusion: Exploiting Full Spectrum of Multiple Datasets
|
[
"Chiyoung Song",
"Dongjae Lee"
] |
The size of the training dataset is known to be among the most dominant factors
in training a high-performance face recognition embedding model. Building a
large dataset from scratch can be cumbersome and time-intensive, while combining
multiple already-built datasets poses the risk of introducing a large amount of
label noise. We present a novel training method, named FaceFusion. It creates a
fused view of different datasets that is untainted by identity conflicts, while
concurrently training an embedding network using the view in an end-to-end
fashion. Using the unified view of combined datasets enables the embedding
network to be trained against the entire spectrum of the datasets, leading to a
noticeable performance boost. Extensive experiments confirm the superiority of
our method, whose performance on public evaluation datasets surpasses not only
that of using a single training dataset, but also that of previously known
methods under various training circumstances.
|
[
"cs.CV"
] | false |
2305.14621
|
2023-05-24T01:39:41Z
|
Realistically distributing object placements in synthetic training data
improves the performance of vision-based object detection models
|
[
"Setareh Dabiri",
"Vasileios Lioutas",
"Berend Zwartsenberg",
"Yunpeng Liu",
"Matthew Niedoba",
"Xiaoxuan Liang",
"Dylan Green",
"Justice Sefas",
"Jonathan Wilder Lavington",
"Frank Wood",
"Adam Scibior"
] |
When training object detection models on synthetic data, it is important to
make the distribution of synthetic data as close as possible to the
distribution of real data. We investigate specifically the impact of object
placement distribution, keeping all other aspects of synthetic data fixed. Our
experiment, training a 3D vehicle detection model in CARLA and testing on
KITTI, demonstrates a substantial improvement resulting from improving the
object placement distribution.
|
[
"cs.CV"
] | false |
2305.14674
|
2023-05-24T03:32:03Z
|
T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified
Visual Modalities
|
[
"Kangfu Mei",
"Mo Zhou",
"Vishal M. Patel"
] |
Diffusion Probabilistic Field (DPF) models the distribution of continuous
functions defined over metric spaces. While DPF shows great potential for
unifying data generation of various modalities including images, videos, and 3D
geometry, it does not scale to a higher data resolution. This can be attributed
to the ``scaling property'', where it is difficult for the model to capture
local structures through uniform sampling. To this end, we propose a new model
comprising a view-wise sampling algorithm to focus on local structure
learning, and additional guidance, e.g., text descriptions, to
complement the global geometry. The model can be scaled to generate
high-resolution data while unifying multiple modalities. Experimental results
on data generation in various modalities demonstrate the effectiveness of our
model, as well as its potential as a foundation framework for scalable
modality-unified visual content generation.
|
[
"cs.CV"
] | false |
2305.14691
|
2023-05-24T03:53:20Z
|
Label-Efficient Learning in Agriculture: A Comprehensive Review
|
[
"Jiajia Li",
"Dong Chen",
"Xinda Qi",
"Zhaojian Li",
"Yanbo Huang",
"Daniel Morris",
"Xiaobo Tan"
] |
The past decade has witnessed many great successes of machine learning (ML)
and deep learning (DL) applications in agricultural systems, including weed
control, plant disease diagnosis, agricultural robotics, and precision
livestock management. Despite tremendous progress, one downside of such ML/DL
models is that they generally rely on large-scale labeled datasets for
training, and the performance of such models is strongly influenced by the size
and quality of available labeled data samples. In addition, collecting,
processing, and labeling such large-scale datasets is extremely costly and
time-consuming, partially due to the rising cost of human labor. Therefore,
developing label-efficient ML/DL methods for agricultural applications has
received significant interest among researchers and practitioners. In fact,
more than 50 papers have been published since 2016 on developing and applying
deep-learning-based label-efficient techniques to address various agricultural problems,
which motivates the authors to provide a timely and comprehensive review of
recent label-efficient ML/DL methods in agricultural applications. To this end,
we first develop a principled taxonomy to organize these methods according to
the degree of supervision, including weak supervision (i.e., active learning
and semi-/weakly- supervised learning), and no supervision (i.e., un-/self-
supervised learning), supplemented by representative state-of-the-art
label-efficient ML/DL methods. In addition, a systematic review of various
agricultural applications exploiting these label-efficient algorithms, such as
precision agriculture, plant phenotyping, and postharvest quality assessment,
is presented. Finally, we discuss the current problems and challenges, as well
as future research directions. A well-classified paper list can be accessed at
https://github.com/DongChen06/Label-efficient-in-Agriculture.
|
[
"cs.CV"
] | false |
2305.14715
|
2023-05-24T04:33:28Z
|
Leveraging Future Relationship Reasoning for Vehicle Trajectory
Prediction
|
[
"Daehee Park",
"Hobin Ryu",
"Yunseo Yang",
"Jegyeong Cho",
"Jiwon Kim",
"Kuk-Jin Yoon"
] |
Understanding the interaction between multiple agents is crucial for
realistic vehicle trajectory prediction. Existing methods have attempted to
infer the interaction from the observed past trajectories of agents using
pooling, attention, or graph-based methods, which rely on a deterministic
approach. However, these methods can fail under complex road structures, as
they cannot predict various interactions that may occur in the future. In this
paper, we propose a novel approach that uses lane information to predict a
stochastic future relationship among agents. To obtain a coarse future motion
of agents, our method first predicts the probability of lane-level waypoint
occupancy of vehicles. We then utilize the temporal probability of passing
adjacent lanes for each agent pair, assuming that agents passing adjacent lanes
will interact strongly. We also model the interaction using a probabilistic
distribution, which allows for multiple possible future interactions. The
distribution is learned from the posterior distribution of interaction obtained
from ground truth future trajectories. We validate our method on popular
trajectory prediction datasets: nuScenes and Argoverse. The results show that
the proposed method brings a remarkable performance gain in prediction accuracy
and achieves state-of-the-art performance on the long-term prediction benchmark
dataset.
|
[
"cs.CV"
] | false |
2305.14768
|
2023-05-24T06:17:53Z
|
Dual Path Transformer with Partition Attention
|
[
"Zhengkai Jiang",
"Liang Liu",
"Jiangning Zhang",
"Yabiao Wang",
"Mingang Chen",
"Chengjie Wang"
] |
This paper introduces a novel attention mechanism, called dual attention,
which is both efficient and effective. The dual attention mechanism consists of
two parallel components: local attention generated by Convolutional Neural
Networks (CNNs) and long-range attention generated by Vision Transformers
(ViTs). To address the high computational complexity and memory footprint of
vanilla Multi-Head Self-Attention (MHSA), we introduce a novel Multi-Head
Partition-wise Attention (MHPA) mechanism. The partition-wise attention
approach models both intra-partition and inter-partition attention
simultaneously. Building on the dual attention block and partition-wise
attention mechanism, we present a hierarchical vision backbone called
DualFormer. We evaluate the effectiveness of our model on several computer
vision tasks, including image classification on ImageNet, object detection on
COCO, and semantic segmentation on Cityscapes. Specifically, the proposed
DualFormer-XS achieves 81.5\% top-1 accuracy on ImageNet, outperforming the
recent state-of-the-art MPViT-XS by 0.6\% top-1 accuracy with much higher
throughput.
|
[
"cs.CV"
] | false |
2305.14787
|
2023-05-24T06:42:27Z
|
Polarimetric Imaging for Perception
|
[
"Michael Baltaxe",
"Tomer Pe'er",
"Dan Levi"
] |
Autonomous driving and advanced driver-assistance systems rely on a set of
sensors and algorithms to perform the appropriate actions and provide alerts as
a function of the driving scene. Typically, the sensors include color cameras,
radar, lidar and ultrasonic sensors. Strikingly however, although light
polarization is a fundamental property of light, it is seldom harnessed for
perception tasks. In this work we analyze the potential for improvement in
perception tasks when using an RGB-polarimetric camera, as compared to an RGB
camera. We examine monocular depth estimation and free space detection during
the middle of the day, when polarization is independent of subject heading, and
show that a quantifiable improvement can be achieved for both of them using
state-of-the-art deep neural networks, with a minimum of architectural changes.
We also present a new dataset composed of RGB-polarimetric images, lidar scans,
GNSS / IMU readings and free space segmentations that further supports
developing perception algorithms that take advantage of light polarization.
|
[
"cs.CV"
] | false |
2305.14813
|
2023-05-24T07:09:25Z
|
Semi-Supervised and Long-Tailed Object Detection with CascadeMatch
|
[
"Yuhang Zang",
"Kaiyang Zhou",
"Chen Huang",
"Chen Change Loy"
] |
This paper focuses on long-tailed object detection in the semi-supervised
learning setting, which poses realistic challenges, but has rarely been studied
in the literature. We propose a novel pseudo-labeling-based detector called
CascadeMatch. Our detector features a cascade network architecture, which has
multi-stage detection heads with progressive confidence thresholds. To avoid
manually tuning the thresholds, we design a new adaptive pseudo-label mining
mechanism to automatically identify suitable values from data. To mitigate
confirmation bias, where a model is negatively reinforced by incorrect
pseudo-labels produced by itself, each detection head is trained by the
ensemble pseudo-labels of all detection heads. Experiments on two long-tailed
datasets, i.e., LVIS and COCO-LT, demonstrate that CascadeMatch surpasses
existing state-of-the-art semi-supervised approaches -- across a wide range of
detection architectures -- in handling long-tailed object detection. For
instance, CascadeMatch outperforms Unbiased Teacher by 1.9 AP Fix on LVIS when
using a ResNet50-based Cascade R-CNN structure, and by 1.7 AP Fix when using
Sparse R-CNN with a Transformer encoder. We also show that CascadeMatch can
even handle the challenging sparsely annotated object detection problem.
|
[
"cs.CV"
] | false |
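The adaptive pseudo-label mining above replaces hand-tuned confidence thresholds with values identified from data. A minimal sketch of one way this could look in PyTorch, assuming per-class confidences are collected from the detector's own pseudo-labels; the exact mining rule used by CascadeMatch may differ:

```python
import torch

def adaptive_thresholds(confidences, labels, num_classes, quantile=0.5):
    """Mine per-class pseudo-label confidence thresholds from data.

    confidences: (N,) predicted confidences of pseudo-labeled boxes
    labels:      (N,) predicted class indices of those boxes
    Rare (tail) classes naturally receive lower thresholds, so their
    pseudo-labels are not filtered out as aggressively as head classes.
    """
    thresholds = torch.ones(num_classes)            # default: accept nothing
    for c in range(num_classes):
        conf_c = confidences[labels == c]
        if len(conf_c) > 0:
            thresholds[c] = conf_c.quantile(quantile)
    return thresholds
```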
2305.14831
|
2023-05-24T07:36:47Z
|
OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields
|
[
"Zhiwen Yan",
"Chen Li",
"Gim Hee Lee"
] |
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive
results in novel view synthesis on 3D dynamic scenes. However, they often
require complete video sequences for training followed by novel view synthesis,
which is similar to playing back the recording of a dynamic 3D scene. In
contrast, we propose OD-NeRF, which efficiently trains and renders dynamic NeRFs
on-the-fly and is thus capable of streaming the dynamic scene. When
training on-the-fly, the training frames become available sequentially and the
model is trained and rendered frame-by-frame. The key challenge of efficient
on-the-fly training is how to utilize the radiance field estimated from the
previous frames effectively. To tackle this challenge, we propose: 1) a NeRF
model conditioned on the multi-view projected colors to implicitly track
correspondence between the current and previous frames, and 2) a transition and
update algorithm that leverages the occupancy grid from the last frame to
sample efficiently at the current frame. Our algorithm can achieve an
interactive speed of 6 FPS for training and rendering on synthetic dynamic scenes
on-the-fly, and a significant speed-up compared to the state-of-the-art on
real-world dynamic scenes.
|
[
"cs.CV"
] | false |
2305.14840
|
2023-05-24T07:44:16Z
|
Predicting Token Impact Towards Efficient Vision Transformer
|
[
"Hong Wang",
"Su Yang",
"Xiaoke Huang",
"Weishan Zhang"
] |
Token filtering to reduce irrelevant tokens prior to self-attention is a
straightforward way to enable efficient vision Transformer. This is the first
work to view token filtering from a feature selection perspective, where we
weigh the importance of a token according to how much it can change the loss
once masked. If the loss changes greatly after masking a token of interest, it
means that such a token has a significant impact on the final decision and is
thus relevant. Otherwise, the token is less important for the final decision,
so it can be filtered out. After applying the token filtering module
generalized from the whole training data, the token number fed to the
self-attention module can be substantially reduced in the inference phase, leading
to much fewer computations in all the subsequent self-attention layers. The
token filter can be realized using a very simple network, where we utilize a
multi-layer perceptron. Except for the uniqueness of performing token filtering
only once from the very beginning prior to self-attention, the other core
feature making our method different from the other token filters lies in the
predictability of token impact from a feature selection point of view. The
experiments show that the proposed method provides an efficient way to obtain a
lightweight model by fine-tuning it together with a backbone, making it easier
to deploy than existing methods based on training from scratch.
|
[
"cs.CV",
"I.5.1"
] | false |
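The importance criterion described above (how much the loss changes when a token is masked) is compact enough to sketch directly. A minimal illustration, assuming a classifier that accepts a (B, N, D) token tensor; note the paper trains a small MLP to predict these scores so that per-token masking is not needed at inference:

```python
import torch

def token_importance(model, tokens, labels, loss_fn):
    """Score each token by the loss change observed when it is masked.

    tokens: (B, N, D) patch embeddings; `model` is assumed to map them to
    logits. A large |loss change| means the token matters for the decision.
    """
    with torch.no_grad():
        base_loss = loss_fn(model(tokens), labels)
        scores = torch.zeros(tokens.shape[1])
        for i in range(tokens.shape[1]):
            masked = tokens.clone()
            masked[:, i, :] = 0.0                   # mask out token i
            scores[i] = (loss_fn(model(masked), labels) - base_loss).abs()
    return scores
```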
2305.14856
|
2023-05-24T08:06:12Z
|
Optimization-Based Improvement of Face Image Quality Assessment
Techniques
|
[
"Žiga Babnik",
"Naser Damer",
"Vitomir Štruc"
] |
Contemporary face recognition (FR) models achieve near-ideal recognition
performance in constrained settings, yet do not fully translate the performance
to unconstrained (real-world) scenarios. To help improve the performance and
stability of FR systems in such unconstrained settings, face image quality
assessment (FIQA) techniques try to infer sample-quality information from the
input face images that can aid with the recognition process. While existing
FIQA techniques are able to efficiently capture the differences between high
and low quality images, they typically cannot fully distinguish between images
of similar quality, leading to lower performance in many scenarios. To address
this issue, we present in this paper a supervised quality-label optimization
approach, aimed at improving the performance of existing FIQA techniques. The
developed optimization procedure infuses additional information (computed with
a selected FR model) into the initial quality scores generated with a given
FIQA technique to produce better estimates of the "actual" image quality. We
evaluate the proposed approach in comprehensive experiments with six
state-of-the-art FIQA approaches (CR-FIQA, FaceQAN, SER-FIQ, PCNet, MagFace,
SDD-FIQA) on five commonly used benchmarks (LFW, CFPFP, CPLFW, CALFW, XQLFW)
using three targeted FR models (ArcFace, ElasticFace, CurricularFace) with
highly encouraging results.
|
[
"cs.CV"
] | false |
2305.14880
|
2023-05-24T08:31:38Z
|
Multiresolution Feature Guidance Based Transformer for Anomaly Detection
|
[
"Shuting Yan",
"Pingping Chen",
"Honghui Chen",
"Huan Mao",
"Feng Chen",
"Zhijian Lin"
] |
Anomaly detection is typically formulated as an unsupervised learning task that
identifies images deviating from normal ones. In general, there are two main challenges
of anomaly detection tasks, i.e., the class imbalance and the unexpectedness of
anomalies. In this paper, we propose a multiresolution feature guidance method
based on Transformer named GTrans for unsupervised anomaly detection and
localization. In GTrans, an Anomaly Guided Network (AGN) pre-trained on
ImageNet is developed to provide surrogate labels for features and tokens.
Under the tacit knowledge guidance of the AGN, the anomaly detection network
named Trans utilizes Transformer to effectively establish a relationship
between multiresolution features, enhancing the ability of the Trans to fit
the normal data manifold. Due to the strong generalization ability of
AGN, GTrans locates anomalies by comparing the differences in spatial distance
and direction of multi-scale features extracted from the AGN and the Trans. Our
experiments demonstrate that the proposed GTrans achieves state-of-the-art
performance in both detection and localization on the MVTec AD dataset. GTrans
achieves image-level and pixel-level anomaly detection AUROC scores of 99.0%
and 97.9% on the MVTec AD dataset, respectively.
|
[
"cs.CV"
] | false |
2305.14914
|
2023-05-24T09:03:18Z
|
GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for
Remote Sensing Data
|
[
"Zhitong Xiong",
"Sining Chen",
"Yi Wang",
"Lichao Mou",
"Xiao Xiang Zhu"
] |
Geometric information in the normalized digital surface models (nDSM) is
highly correlated with the semantic class of the land cover. Exploiting two
modalities (RGB and nDSM (height)) jointly has great potential to improve the
segmentation performance. However, it is still an under-explored field in
remote sensing due to the following challenges. First, the scales of existing
datasets are relatively small and the diversity of existing datasets is
limited, which restricts the ability to validate methods. Second, there is a lack of
unified benchmarks for performance assessment, which leads to difficulties in
comparing the effectiveness of different models. Last, sophisticated
multi-modal semantic segmentation methods have not been deeply explored for
remote sensing data. To cope with these challenges, in this paper, we introduce
a new remote-sensing benchmark dataset for multi-modal semantic segmentation
based on RGB-Height (RGB-H) data. Towards a fair and comprehensive analysis of
existing methods, the proposed benchmark consists of 1) a large-scale dataset
including co-registered RGB and nDSM pairs and pixel-wise semantic labels; 2) a
comprehensive evaluation and analysis of existing multi-modal fusion strategies
for both convolutional and Transformer-based networks on remote sensing data.
Furthermore, we propose a novel and effective Transformer-based intermediary
multi-modal fusion (TIMF) module to improve the semantic segmentation
performance through adaptive token-level multi-modal fusion. The designed
benchmark can foster future research on developing new methods for multi-modal
learning on remote sensing data. Extensive analyses of those methods are
conducted and valuable insights are provided through the experimental results.
Code for the benchmark and baselines can be accessed at
\url{https://github.com/EarthNets/RSI-MMSegmentation}.
|
[
"cs.CV"
] | false |
2305.14918
|
2023-05-24T09:06:01Z
|
Incremental Dense Reconstruction from Monocular Video with Guided Sparse
Feature Volume Fusion
|
[
"Xingxing Zuo",
"Nan Yang",
"Nathaniel Merrill",
"Binbin Xu",
"Stefan Leutenegger"
] |
Incrementally recovering 3D dense structures from monocular videos is of
paramount importance since it enables various robotics and AR applications.
Feature volumes have recently been shown to enable efficient and accurate
incremental dense reconstruction without the need to first estimate depth, but
they are not able to achieve as high a resolution as depth-based methods due
to the large memory consumption of high-resolution feature volumes. This letter
proposes a real-time feature volume-based dense reconstruction method that
predicts TSDF (Truncated Signed Distance Function) values from a novel
sparsified deep feature volume, which is able to achieve higher resolutions
than previous feature volume-based methods, and is favorable in large-scale
outdoor scenarios where the majority of voxels are empty. An uncertainty-aware
multi-view stereo (MVS) network is leveraged to infer initial voxel locations
of the physical surface in a sparse feature volume. Then for refining the
recovered 3D geometry, deep features are attentively aggregated from multiview
images at potential surface locations, and temporally fused. Besides achieving
higher resolutions than before, our method is shown to produce more complete
reconstructions with finer detail in many cases. Extensive evaluations on both
public and self-collected datasets demonstrate a very competitive real-time
reconstruction result for our method compared to state-of-the-art
reconstruction methods in both indoor and outdoor settings.
|
[
"cs.CV"
] | false |
2305.14962
|
2023-05-24T09:56:47Z
|
ICDAR 2023 Competition on Robust Layout Segmentation in Corporate
Documents
|
[
"Christoph Auer",
"Ahmed Nassar",
"Maksym Lysak",
"Michele Dolfi",
"Nikolaos Livathinos",
"Peter Staar"
] |
Transforming documents into machine-processable representations is a
challenging task due to their complex structures and variability in formats.
Recovering the layout structure and content from PDF files or scanned material
has remained a key problem for decades. ICDAR has a long tradition in hosting
competitions to benchmark the state-of-the-art and encourage the development of
novel solutions to document layout understanding. In this report, we present
the results of our \textit{ICDAR 2023 Competition on Robust Layout Segmentation
in Corporate Documents}, which posed the challenge to accurately segment the
page layout in a broad range of document styles and domains, including
corporate reports, technical literature and patents. To raise the bar over
previous competitions, we engineered a hard competition dataset and proposed
the recent DocLayNet dataset for training. We recorded 45 team registrations
and received official submissions from 21 teams. In the presented solutions, we
recognize interesting combinations of recent computer vision models, data
augmentation strategies and ensemble methods to achieve remarkable accuracy in
the task we posed. A clear trend towards adoption of vision-transformer based
methods is evident. The results demonstrate substantial progress towards
achieving robust and highly generalizing methods for document layout
understanding.
|
[
"cs.CV"
] | false |
2305.14969
|
2023-05-24T10:02:27Z
|
MMNet: Multi-Mask Network for Referring Image Segmentation
|
[
"Yichen Yan",
"Xingjian He",
"Wenxuan Wan",
"Jing Liu"
] |
Referring image segmentation aims to segment an object referred to by natural
language expression from an image. However, this task is challenging due to the
distinct data properties between text and image, and the randomness introduced
by diverse objects and unrestricted language expressions. Most previous work
focuses on improving cross-modal feature fusion while not fully addressing the
inherent uncertainty caused by diverse objects and unrestricted language. To
tackle these problems, we propose an end-to-end Multi-Mask Network for
referring image segmentation (MMNet). We first combine the image and language and
then employ an attention mechanism to generate multiple queries that represent
different aspects of the language expression. We then utilize these queries to
produce a series of corresponding segmentation masks, assigning a score to each
mask that reflects its importance. The final result is obtained through the
weighted sum of all masks, which greatly reduces the randomness of the language
expression. Our proposed framework demonstrates superior performance compared
to state-of-the-art approaches on three of the most commonly used datasets, RefCOCO,
RefCOCO+, and G-Ref, without the need for any post-processing. This further
validates the efficacy of our proposed framework.
|
[
"cs.CV"
] | false |
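The final fusion step above, a weighted sum of per-query masks, is easy to sketch. A minimal illustration in PyTorch; the tensor shapes are assumptions, and normalizing the scores with a softmax is our choice, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def fuse_masks(masks, scores):
    """Weighted sum of per-query segmentation masks.

    masks:  (Q, H, W) one mask per language-derived query
    scores: (Q,) raw importance score assigned to each mask
    """
    weights = F.softmax(scores, dim=0)              # normalize to weights
    return (weights[:, None, None] * masks).sum(dim=0)  # final (H, W) mask
```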
2305.14977
|
2023-05-24T10:12:50Z
|
Sampling-based Uncertainty Estimation for an Instance Segmentation
Network
|
[
"Florian Heidecker",
"Ahmad El-Khateeb",
"Bernhard Sick"
] |
The examination of uncertainty in the predictions of machine learning (ML)
models is receiving increasing attention. One uncertainty modeling technique
used for this purpose is Monte-Carlo (MC)-Dropout, where repeated predictions
are generated for a single input. Clustering is therefore required to describe
the resulting uncertainty, and only through efficient clustering is it possible
to attach the model's uncertainty estimate to each detected object. This
article uses Bayesian Gaussian Mixture (BGM) to solve this problem. In
addition, we investigate different values for the dropout rate and other
techniques, such as focal loss and calibration, which we integrate into the
Mask-RCNN model to obtain the most accurate uncertainty approximation of each
instance and showcase it graphically.
|
[
"cs.CV"
] | false |
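The MC-Dropout-plus-BGM pipeline above has two steps: collect repeated stochastic predictions with dropout left active, then cluster them per object. A minimal sketch using scikit-learn's `BayesianGaussianMixture`; the model's output format is an assumption, and the real pipeline works on Mask-RCNN instance outputs rather than bare boxes:

```python
import numpy as np
import torch
from sklearn.mixture import BayesianGaussianMixture

def mc_dropout_samples(model, image, n_samples=30):
    """Repeated stochastic predictions with dropout kept active."""
    model.train()                                   # dropout stays stochastic
    preds = []
    with torch.no_grad():
        for _ in range(n_samples):
            boxes = model(image)                    # assumed: (N, 4) boxes
            preds.append(boxes.cpu().numpy())
    return np.concatenate(preds, axis=0)

def cluster_uncertainty(preds, max_components=10):
    """Group box predictions per object; cluster spread is the uncertainty."""
    bgm = BayesianGaussianMixture(n_components=max_components).fit(preds)
    return bgm.predict(preds), bgm.covariances_
```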
2305.15078
|
2023-05-24T11:57:03Z
|
Learning INR for Event-guided Rolling Shutter Frame Correction, Deblur,
and Interpolation
|
[
"Yunfan Lu",
"Guoqiang Liang",
"Lin Wang"
] |
Images captured by rolling shutter (RS) cameras under fast camera motion
often contain obvious image distortions and blur, which can be modeled as a
row-wise combination of a sequence of global shutter (GS) frames within the
exposure time. Naturally, recovering high-frame-rate sharp GS frames from an RS
blur image needs to simultaneously consider RS correction, deblurring, and frame
interpolation. This task is nontrivial, and to our knowledge, no feasible
solutions exist so far. A naive way is to decompose the complete process into
separate tasks and simply cascade existing methods; however, this results in
cumulative errors and noticeable artifacts. Event cameras enjoy many
advantages, e.g., high temporal resolution, making them well suited to our
problem. To this end, we make the first attempt to recover high-frame-rate
sharp GS frames from an RS blur image and paired event data. Our key idea is to
learn an implicit neural representation (INR) to directly map the position and
time coordinates to RGB values to address the interlocking degradations in the
image restoration process. Specifically, we introduce spatial-temporal implicit
encoding (STE) to convert an RS blur image and events into a spatial-temporal
representation (STR). To query a specific sharp frame (GS or RS), we embed the
exposure time into STR and decode the embedded features to recover a sharp
frame. Moreover, we propose an RS blur image-guided integral loss to better
train the network. Our method is relatively lightweight as it contains only
0.379M parameters and demonstrates high efficiency as the STE is called only
once for any number of interpolation frames. Extensive experiments show that
our method significantly outperforms prior methods addressing only one or two
of the tasks.
|
[
"cs.CV"
] | false |
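The core INR idea above, mapping position and time coordinates directly to RGB, can be sketched as a small MLP. This toy version omits the paper's spatial-temporal implicit encoding, event inputs, and guided integral loss; it only shows the coordinate-to-color mapping:

```python
import torch
import torch.nn as nn

class CoordToRGB(nn.Module):
    """Toy INR: map an (x, y, t) coordinate directly to an RGB value."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),     # RGB in [0, 1]
        )

    def forward(self, coords):                      # coords: (N, 3) = (x, y, t)
        return self.net(coords)
```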
2305.15084
|
2023-05-24T12:02:42Z
|
Audio-Visual Dataset and Method for Anomaly Detection in Traffic Videos
|
[
"Błażej Leporowski",
"Arian Bakhtiarnia",
"Nicole Bonnici",
"Adrian Muscat",
"Luca Zanella",
"Yiming Wang",
"Alexandros Iosifidis"
] |
We introduce the first audio-visual dataset for traffic anomaly detection
taken from real-world scenes, called MAVAD, with a diverse range of weather and
illumination conditions. In addition, we propose a novel method named AVACA
that combines visual and audio features extracted from video sequences by means
of cross-attention to detect anomalies. We demonstrate that the addition of
audio improves the performance of AVACA by up to 5.2%. We also evaluate the
impact of image anonymization, showing only a minor decrease in performance
averaging 1.7%.
|
[
"cs.CV"
] | false |
2305.15091
|
2023-05-24T12:15:19Z
|
Modeling Complex Object Changes in Satellite Image Time-Series: Approach
based on CSP and Spatiotemporal Graph
|
[
"Zouhayra Ayadi",
"Wadii Boulila",
"Imed Riadh Farah"
] |
This paper proposes a method for automatically monitoring and analyzing the
evolution of complex geographic objects. The objects are modeled as a
spatiotemporal graph, which separates filiation relations, spatial relations,
and spatiotemporal relations, and is analyzed by detecting frequent sub-graphs
using constraint satisfaction problems (CSP). The process is divided into four
steps: first, the identification of complex objects in each satellite image;
second, the construction of a spatiotemporal graph to model the spatiotemporal
changes of the complex objects; third, the creation of sub-graphs to be
detected in the base spatiotemporal graph; and fourth, the analysis of the
spatiotemporal graph by detecting the sub-graphs and solving a constraint
network to determine relevant sub-graphs. The final step is further broken down
into two sub-steps: (i) the modeling of the constraint network with defined
variables and constraints, and (ii) the solving of the constraint network to
find relevant sub-graphs in the spatiotemporal graph. Experiments were
conducted using real-world satellite images representing several cities in
Saudi Arabia, and the results demonstrate the effectiveness of the proposed
approach.
|
[
"cs.CV"
] | false |
2305.15114
|
2023-05-24T13:07:46Z
|
Thinking Twice: Clinical-Inspired Thyroid Ultrasound Lesion Detection
Based on Feature Feedback
|
[
"Lingtao Wang",
"Jianrui Ding",
"Fenghe Tang",
"Chunping Ning"
] |
Accurate detection of thyroid lesions is a critical aspect of computer-aided
diagnosis. However, most existing detection methods perform only one feature
extraction process and then fuse multi-scale features, which can be affected by
noise and blurred features in ultrasound images. In this study, we propose a
novel detection network based on a feature feedback mechanism inspired by
clinical diagnosis. The mechanism involves first roughly observing the overall
picture and then focusing on the details of interest. It comprises two parts: a
feedback feature selection module and a feature feedback pyramid. The feedback
feature selection module efficiently selects the features extracted in the
first phase in both space and channel dimensions to generate high semantic
prior knowledge, which is similar to coarse observation. The feature feedback
pyramid then uses this high semantic prior knowledge to enhance feature
extraction in the second phase and adaptively fuses the two features, similar
to fine observation. Additionally, since radiologists often focus on the shape
and size of lesions for diagnosis, we propose an adaptive detection head
strategy to aggregate multi-scale features. Our proposed method achieves an AP
of 70.3% and AP50 of 99.0% on the thyroid ultrasound dataset and meets the
real-time requirement. The code is available at
https://github.com/HIT-wanglingtao/Thinking-Twice.
|
[
"cs.CV"
] | false |
2305.15154
|
2023-05-24T13:51:48Z
|
Clinically Labeled Contrastive Learning for OCT Biomarker Classification
|
[
"Kiran Kokilepersaud",
"Stephanie Trejo Corona",
"Mohit Prabhushankar",
"Ghassan AlRegib",
"Charles Wykoff"
] |
This paper presents a novel positive and negative set selection strategy for
contrastive learning of medical images based on labels that can be extracted
from clinical data. In the medical field, there exists a variety of labels for
data that serve different purposes at different stages of a diagnostic and
treatment process. Clinical labels and biomarker labels are two examples. In
general, clinical labels are easier to obtain in larger quantities because they
are regularly collected during routine clinical care, while biomarker labels
require expert analysis and interpretation to obtain. Within the field of
ophthalmology, previous work has shown that clinical values exhibit
correlations with biomarker structures that manifest within optical coherence
tomography (OCT) scans. We exploit this relationship by using the clinical data
as pseudo-labels for our data without biomarker labels in order to choose
positive and negative instances for training a backbone network with a
supervised contrastive loss. In this way, a backbone network learns a
representation space that aligns with the clinical data distribution available.
Afterwards, we fine-tune the network trained in this manner with the smaller
amount of biomarker labeled data with a cross-entropy loss in order to classify
these key indicators of disease directly from OCT scans. We also expand on this
concept by proposing a method that uses a linear combination of clinical
contrastive losses. We benchmark our methods against state-of-the-art
self-supervised methods in a novel setting with biomarkers of varying
granularity. We show performance improvements by as much as 5\% in total
biomarker detection AUROC.
|
[
"cs.CV"
] | false |
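The training signal above is a supervised contrastive loss whose positives are chosen by shared clinical pseudo-labels rather than biomarker labels. A minimal sketch, assuming L2-normalized embeddings and discretized clinical values; the paper's linear combination of per-label losses is not shown:

```python
import torch

def supcon_loss(features, clinical_labels, temperature=0.07):
    """Supervised contrastive loss with clinical pseudo-labels as positives.

    features:        (B, D) L2-normalized embeddings
    clinical_labels: (B,) discretized clinical values acting as pseudo-labels
    """
    sim = features @ features.T / temperature
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()   # stability
    self_mask = torch.eye(len(features), dtype=torch.bool)
    positives = clinical_labels[:, None].eq(clinical_labels[None, :]).float()
    positives[self_mask] = 0.0                  # a sample is not its own positive
    exp_logits = torch.exp(logits).masked_fill(self_mask, 0.0)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True))
    pos_count = positives.sum(dim=1).clamp(min=1)
    return -((positives * log_prob).sum(dim=1) / pos_count).mean()
```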
2305.15199
|
2023-05-24T14:35:54Z
|
Promoting Generalization in Cross-Dataset Remote Photoplethysmography
|
[
"Nathan Vance",
"Jeremy Speth",
"Benjamin Sporrer",
"Patrick Flynn"
] |
Remote Photoplethysmography (rPPG), or the remote monitoring of a subject's
heart rate using a camera, has seen a shift from handcrafted techniques to deep
learning models. While current solutions offer substantial performance gains,
we show that these models tend to learn a bias to pulse wave features inherent
to the training dataset. We develop augmentations to mitigate this learned bias
by expanding both the range and variability of heart rates that the model sees
while training, resulting in improved model convergence when training and
cross-dataset generalization at test time. Through a 3-way cross dataset
analysis we demonstrate a reduction in mean absolute error from over 13 beats
per minute to below 3 beats per minute. We compare our method with other recent
rPPG systems, finding similar performance under a variety of evaluation
parameters.
|
[
"cs.CV"
] | false |
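The augmentation above widens the range of heart rates the model sees during training. One common way to do this is temporal resampling, sketched below; the resampling rule and parameter range are illustrative assumptions, not the paper's exact augmentation:

```python
import numpy as np

def speed_augment(frames, heart_rate_bpm, factor):
    """Temporally resample a clip so its apparent heart rate changes.

    frames:         (T, H, W, 3) video clip as a NumPy array
    heart_rate_bpm: ground-truth heart rate label of the clip
    factor:         e.g. 0.7-1.4; sampling frames faster raises apparent HR
    """
    idx = np.clip((np.arange(len(frames)) * factor).astype(int), 0, len(frames) - 1)
    return frames[idx], heart_rate_bpm * factor
```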
2305.15219
|
2023-05-24T15:00:01Z
|
DynStaF: An Efficient Feature Fusion Strategy for LiDAR 3D Object
Detection
|
[
"Yao Rong",
"Xiangyu Wei",
"Tianwei Lin",
"Yueyu Wang",
"Enkelejda Kasneci"
] |
Augmenting LiDAR input with multiple previous frames provides richer semantic
information and thus boosts performance in 3D object detection. However,
crowded point clouds in multi-frame inputs can hurt the precise position information
due to motion blur and inaccurate point projection. In this work, we
propose a novel feature fusion strategy, DynStaF (Dynamic-Static Fusion), which
enhances the rich semantic information provided by the multi-frame (dynamic
branch) with the accurate location information from the current single-frame
(static branch). To effectively extract and aggregate complementary features,
DynStaF contains two modules, Neighborhood Cross Attention (NCA) and
Dynamic-Static Interaction (DSI), operating through a dual pathway
architecture. NCA takes the features in the static branch as queries and the
features in the dynamic branch as keys (values). When computing the attention,
we address the sparsity of point clouds and take only neighborhood positions
into consideration. NCA fuses two features at different feature map scales,
followed by DSI providing the comprehensive interaction. To analyze our
proposed strategy DynStaF, we conduct extensive experiments on the nuScenes
dataset. On the test set, DynStaF increases the performance of PointPillars in
NDS by a large margin from 57.7% to 61.6%. When combined with CenterPoint, our
framework achieves 61.0% mAP and 67.7% NDS, leading to state-of-the-art
performance without bells and whistles.
|
[
"cs.CV"
] | false |
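The NCA module above uses static-branch features as queries and dynamic-branch features as keys and values. A dense single-head sketch of that cross-attention; the paper's version restricts attention to neighborhood positions to handle point-cloud sparsity, which this simplification omits:

```python
import torch
import torch.nn.functional as F

def cross_attention(static_feats, dynamic_feats):
    """Single-head cross-attention: static features query dynamic features.

    static_feats:  (N, D) current single-frame features (queries)
    dynamic_feats: (M, D) accumulated multi-frame features (keys/values)
    """
    d = static_feats.shape[-1]
    attn = F.softmax(static_feats @ dynamic_feats.T / d ** 0.5, dim=-1)
    return attn @ dynamic_feats                     # (N, D) fused features
```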
2305.15248
|
2023-05-24T15:33:46Z
|
Delving Deeper into Data Scaling in Masked Image Modeling
|
[
"Cheng-Ze Lu",
"Xiaojie Jin",
"Qibin Hou",
"Jun Hao Liew",
"Ming-Ming Cheng",
"Jiashi Feng"
] |
Understanding whether self-supervised learning methods can scale with
unlimited data is crucial for training large-scale models. In this work, we
conduct an empirical study on the scaling capability of masked image modeling
(MIM) methods (e.g., MAE) for visual recognition. Unlike most previous works
that depend on the widely-used ImageNet dataset, which is manually curated and
object-centric, we take a step further and propose to investigate this problem
in a more practical setting. Specifically, we utilize the web-collected
Coyo-700M dataset. We randomly sample varying numbers of training images from
the Coyo dataset and construct a series of sub-datasets, containing 0.5M, 1M,
5M, 10M, and 100M images, for pre-training. Our goal is to investigate how the
performance changes on downstream tasks when scaling with different sizes of
data and models. The study reveals that: 1) MIM can be viewed as an effective
method to improve the model capacity when the scale of the training data is
relatively small; 2) Strong reconstruction targets can endow the models with
increased capacities on downstream tasks; 3) MIM pre-training is data-agnostic
under most scenarios, which means that the strategy of sampling pre-training
data is non-critical. We hope these observations could provide valuable
insights for future research on MIM.
|
[
"cs.CV"
] | false |
2305.15302
|
2023-05-24T16:26:05Z
|
Multi-Modal Mutual Attention and Iterative Interaction for Referring
Image Segmentation
|
[
"Chang Liu",
"Henghui Ding",
"Yulun Zhang",
"Xudong Jiang"
] |
We address the problem of referring image segmentation that aims to generate
a mask for the object specified by a natural language expression. Many recent
works utilize Transformer to extract features for the target object by
aggregating the attended visual regions. However, the generic attention
mechanism in Transformer only uses the language input for attention weight
calculation, which does not explicitly fuse language features in its output.
Thus, its output feature is dominated by vision information, which limits the
model to comprehensively understand the multi-modal information, and brings
uncertainty for the subsequent mask decoder to extract the output mask. To
address this issue, we propose Multi-Modal Mutual Attention ($\mathrm{M^3Att}$)
and Multi-Modal Mutual Decoder ($\mathrm{M^3Dec}$) that better fuse information
from the two input modalities. Based on {$\mathrm{M^3Dec}$}, we further propose
Iterative Multi-modal Interaction ($\mathrm{IMI}$) to allow continuous and
in-depth interactions between language and vision features. Furthermore, we
introduce Language Feature Reconstruction ($\mathrm{LFR}$) to prevent the
language information from being lost or distorted in the extracted feature.
Extensive experiments show that our proposed approach significantly improves
the baseline and outperforms state-of-the-art referring image segmentation
methods on RefCOCO series datasets consistently.
|
[
"cs.CV"
] | false |
2305.15365
|
2023-05-24T17:15:19Z
|
Boundary Attention Mapping (BAM): Fine-grained saliency maps for
segmentation of Burn Injuries
|
[
"Mahla Abdolahnejad",
"Justin Lee",
"Hannah Chan",
"Alex Morzycki",
"Olivier Ethier",
"Anthea Mo",
"Peter X. Liu",
"Joshua N. Wong",
"Colin Hong",
"Rakesh Joshi"
] |
Burn injuries can result from mechanisms such as thermal, chemical, and
electrical insults. A prompt and accurate assessment of burns is essential for
deciding definitive clinical treatments. Currently, the primary approach for
burn assessments, via visual and tactile observations, is approximately 60%-80%
accurate. The gold standard is biopsy and a close second would be non-invasive
methods like Laser Doppler Imaging (LDI) assessments, which have up to 97%
accuracy in predicting burn severity and the required healing time. In this
paper, we introduce a machine learning pipeline for assessing burn severities
and segmenting the regions of skin that are affected by burn. Segmenting 2D
colour images of burns allows for the injured versus non-injured skin to be
delineated, clearly marking the extent and boundaries of the localized
burn/region-of-interest, even during remote monitoring of a burn patient. We
trained a convolutional neural network (CNN) to classify four severities of
burns. We built a saliency mapping method, Boundary Attention Mapping (BAM),
that utilises this trained CNN for the purpose of accurately localizing and
segmenting the burn regions from skin burn images. We demonstrated the
effectiveness of our proposed pipeline through extensive experiments and
evaluations using two datasets: 1) a larger skin burn image dataset consisting
of 1684 skin burn images of four burn severities, and 2) an LDI dataset that
consists of a total of 184 skin burn images with their associated LDI scans.
The CNN trained using the first dataset achieved an average F1-Score of 78% and
micro/macro-average ROC of 85% in classifying the four burn severities.
Moreover, a comparison between the BAM results and LDI results for measuring
injury boundary showed that the segmentations generated by our method achieved
91.60% accuracy, 78.17% sensitivity, and 93.37% specificity.
|
[
"cs.CV"
] | false |
2305.15391
|
2023-05-24T17:53:07Z
|
A Neural Space-Time Representation for Text-to-Image Personalization
|
[
"Yuval Alaluf",
"Elad Richardson",
"Gal Metzer",
"Daniel Cohen-Or"
] |
A key aspect of text-to-image personalization methods is the manner in which
the target concept is represented within the generative process. This choice
greatly affects the visual fidelity, downstream editability, and disk space
needed to store the learned concept. In this paper, we explore a new
text-conditioning space that is dependent on both the denoising process
timestep (time) and the denoising U-Net layers (space) and showcase its
compelling properties. A single concept in the space-time representation is
composed of hundreds of vectors, one for each combination of time and space,
making this space challenging to optimize directly. Instead, we propose to
implicitly represent a concept in this space by optimizing a small neural
mapper that receives the current time and space parameters and outputs the
matching token embedding. In doing so, the entire personalized concept is
represented by the parameters of the learned mapper, resulting in a compact,
yet expressive, representation. Similarly to other personalization methods, the
output of our neural mapper resides in the input space of the text encoder. We
observe that one can significantly improve the convergence and visual fidelity
of the concept by introducing a textual bypass, where our neural mapper
additionally outputs a residual that is added to the output of the text
encoder. Finally, we show how one can impose an importance-based ordering over
our implicit representation, providing users control over the reconstruction
and editability of the learned concept using a single trained model. We
demonstrate the effectiveness of our approach over a range of concepts and
prompts, showing our method's ability to generate high-quality and controllable
compositions without fine-tuning any parameters of the generative model itself.
|
[
"cs.CV"
] | false |
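The neural mapper described above takes the current denoising timestep and U-Net layer and outputs a token embedding plus a textual-bypass residual. A tiny sketch with illustrative sizes; the actual architecture and any positional encoding of the inputs are assumptions not specified here:

```python
import torch
import torch.nn as nn

class SpaceTimeMapper(nn.Module):
    """Tiny mapper from (timestep, U-Net layer) to a token embedding
    plus a textual-bypass residual added to the text encoder output."""

    def __init__(self, embed_dim=768, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.to_token = nn.Linear(hidden, embed_dim)
        self.to_bypass = nn.Linear(hidden, embed_dim)

    def forward(self, timestep, layer):             # both: (B,) float tensors
        h = self.net(torch.stack([timestep, layer], dim=-1))
        return self.to_token(h), self.to_bypass(h)
```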
2305.15407
|
2023-05-24T17:59:18Z
|
Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic
Contrast Sets
|
[
"Brandon Smith",
"Miguel Farinha",
"Siobhan Mackenzie Hall",
"Hannah Rose Kirk",
"Aleksandar Shtedritski",
"Max Bain"
] |
Vision-language models are growing in popularity and public visibility to
generate, edit, and caption images at scale; but their outputs can perpetuate
and amplify societal biases learned during pre-training on uncurated image-text
pairs from the internet. Although debiasing methods have been proposed, we
argue that these measurements of model bias lack validity due to dataset bias.
We demonstrate there are spurious correlations in COCO Captions, the most
commonly used dataset for evaluating bias, between background context and the
gender of people in-situ. This is problematic because commonly-used bias
metrics (such as Bias@K) rely on per-gender base rates. To address this issue,
we propose a novel dataset debiasing pipeline to augment the COCO dataset with
synthetic, gender-balanced contrast sets, where only the gender of the subject
is edited and the background is fixed. However, existing image editing methods
have limitations and sometimes produce low-quality images; so, we introduce a
method to automatically filter the generated images based on their similarity
to real images. Using our balanced synthetic contrast sets, we benchmark bias
in multiple CLIP-based models, demonstrating how metrics are skewed by
imbalance in the original COCO images. Our results indicate that the proposed
approach improves the validity of the evaluation, ultimately contributing to
a more realistic understanding of bias in vision-language models.
|
[
"cs.CV"
] | false |
2305.15483
|
2023-05-24T18:10:24Z
|
Weakly Supervised Vision-and-Language Pre-training with Relative
Representations
|
[
"Chi Chen",
"Peng Li",
"Maosong Sun",
"Yang Liu"
] |
Weakly supervised vision-and-language pre-training (WVLP), which learns
cross-modal representations with limited cross-modal supervision, has been
shown to effectively reduce the data cost of pre-training while maintaining
decent performance on downstream tasks. However, current WVLP methods use only
local descriptions of images, i.e., object tags, as cross-modal anchors to
construct weakly-aligned image-text pairs for pre-training. This affects the
data quality and thus the effectiveness of pre-training. In this paper, we
propose to directly take a small number of aligned image-text pairs as anchors,
and represent each unaligned image and text by its similarities to these
anchors, i.e., relative representations. We build a WVLP framework based on the
relative representations, namely RELIT, which collects high-quality
weakly-aligned image-text pairs from large-scale image-only and text-only data
for pre-training through relative representation-based retrieval and
generation. Experiments on four downstream tasks show that RELIT achieves new
state-of-the-art results under the weakly supervised setting.
|
[
"cs.CV"
] | false |
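The relative representation at the heart of RELIT maps any unimodal embedding into a shared space of similarities to a small set of aligned anchor pairs. A minimal sketch:

```python
import torch.nn.functional as F

def relative_representation(embedding, anchor_embeddings):
    """Represent a sample by its similarities to aligned anchor pairs.

    embedding:         (D,) unimodal embedding of an image or a text
    anchor_embeddings: (K, D) embeddings of the K aligned anchors
    Returns a (K,) vector living in a shared, modality-agnostic space.
    """
    return F.cosine_similarity(embedding[None, :], anchor_embeddings, dim=-1)
```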
2305.14598
|
2023-05-24T00:42:06Z
|
Vision + Language Applications: A Survey
|
[
"Yutong Zhou",
"Nobutaka Shimada"
] |
Text-to-image generation has attracted significant interest from researchers
and practitioners in recent years due to its widespread and diverse
applications across various industries. Despite the progress made in the domain
of vision and language research, the existing literature remains relatively
limited, particularly with regard to advancements and applications in this
field. This paper explores a relevant research track within multimodal
applications, including text, vision, audio, and others. In addition to the
studies discussed in this paper, we are also committed to continually updating
the latest relevant papers, datasets, application projects and corresponding
information at https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image
|
[
"cs.CV",
"cs.MM"
] | false |
2305.14612
|
2023-05-24T01:22:26Z
|
Assessment of Anterior Cruciate Ligament Injury Risk Based on Human Key
Points Detection Algorithm
|
[
"Ziyu Gong",
"Xiong Zhao",
"Chen Yang"
] |
This paper aims to detect the potential injury risk of the anterior cruciate
ligament (ACL) by proposing an ACL potential injury risk assessment algorithm
based on key points of the human body detected using computer vision
technology. To obtain the key points data of the human body in each frame,
OpenPose, an open source computer vision algorithm, was employed. The obtained
data underwent preprocessing and were then fed into an ACL potential injury
feature extraction model based on the Landing Error Evaluation System (LESS).
This model extracted several important parameters, including the knee flexion
angle, the trunk flexion on the sagittal plane, trunk flexion angle on the
frontal plane, the ankle knee horizontal distance, and the ankle shoulder
horizontal distance. Each of these features was assigned a threshold interval,
and a segmented evaluation function was utilized to score them accordingly. To
calculate the final score of the participant, the score values were input into
a weighted scoring model designed based on the Analytic Hierarchy Process
(AHP). The AHP based model takes into account the relative importance of each
feature in the overall assessment. The results demonstrate that the proposed
algorithm effectively detects the potential risk of ACL injury, offering
valuable insights for injury prevention and intervention strategies in sports
and related fields. Code is available at:
https://github.com/ZiyuGong-proj/Assessment-of-ACL-Injury-Risk-Based-on-Openpose
|
[
"cs.CV",
"stat.AP"
] | false |
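The scoring pipeline above combines a segmented (piecewise) evaluation function per feature with AHP-derived weights. A minimal sketch; the threshold intervals, scores, and weights below are hypothetical placeholders, not values from the paper:

```python
def segmented_score(value, bounds, scores):
    """Piecewise evaluation: score of the first interval containing value.

    bounds: ascending interval upper bounds, e.g. [30, 45, 60]
    scores: one score per interval, plus one for values above the last bound
    """
    for bound, score in zip(bounds, scores):
        if value <= bound:
            return score
    return scores[-1]

# Hypothetical feature values, intervals, and AHP weights (illustrative only).
features = {"knee_flexion_deg": 38.0, "trunk_flexion_sagittal_deg": 12.0}
weights = {"knee_flexion_deg": 0.4, "trunk_flexion_sagittal_deg": 0.6}
risk_score = sum(
    weights[name] * segmented_score(value, [30, 45, 60], [3, 2, 1, 0])
    for name, value in features.items()
)
```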
2305.14657
|
2023-05-24T02:52:30Z
|
Dealing with Cross-Task Class Discrimination in Online Continual
Learning
|
[
"Yiduo Guo",
"Bing Liu",
"Dongyan Zhao"
] |
Existing continual learning (CL) research regards catastrophic forgetting
(CF) as almost the only challenge. This paper argues for another challenge in
class-incremental learning (CIL), which we call cross-task class discrimination
(CTCD), i.e., how to establish decision boundaries between the classes of the
new task and old tasks with no (or limited) access to the old task data. CTCD
is implicitly and partially dealt with by replay-based methods. A replay method
saves a small amount of data (replay data) from previous tasks. When a batch of
current task data arrives, the system jointly trains the new data and some
sampled replay data. The replay data enables the system to partially learn the
decision boundaries between the new classes and the old classes as the amount
of the saved data is small. However, this paper argues that the replay approach
also has a dynamic training bias issue which reduces the effectiveness of the
replay data in solving the CTCD problem. A novel optimization objective with a
gradient-based adaptive method is proposed to dynamically deal with the problem
in the online CL process. Experimental results show that the new method
achieves much better results in online CL.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.14684
|
2023-05-24T03:45:03Z
|
Collaborative Auto-encoding for Blind Image Quality Assessment
|
[
"Zehong Zhou",
"Fei Zhou",
"Guoping Qiu"
] |
Blind image quality assessment (BIQA) is a challenging problem with important
real-world applications. Recent efforts attempting to exploit powerful
representations by deep neural networks (DNN) are hindered by the lack of
subjectively annotated data. This paper presents a novel BIQA method which
overcomes this fundamental obstacle. Specifically, we design a pair of
collaborative autoencoders (COAE) consisting of a content autoencoder (CAE) and
a distortion autoencoder (DAE) that work together to extract content and
distortion representations, which are shown to be highly descriptive of image
quality. While the CAE follows a standard codec procedure, we introduce the
CAE-encoded feature as an extra input to the DAE's decoder for reconstructing
distorted images, thus effectively forcing DAE's encoder to extract distortion
representations. The self-supervised learning framework allows the COAE,
including its two feature extractors, to be trained on an almost unlimited amount of
data, leaving the limited annotated samples to fine-tune a BIQA model.
We will show that the proposed BIQA method achieves state-of-the-art
performance and has superior generalization capability over other learning
based models. The codes are available at:
https://github.com/Macro-Zhou/NRIQA-VISOR/.
|
[
"cs.CV",
"eess.IV"
] | false |
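The key wiring above is that the DAE's decoder also receives the CAE-encoded content feature, which forces the DAE encoder to carry distortion information. A structural sketch with placeholder submodules; the internals of each encoder/decoder are assumptions, not the paper's architecture:

```python
import torch.nn as nn

class COAE(nn.Module):
    """Collaborative autoencoder pair; submodules are placeholders."""

    def __init__(self, cae_enc, cae_dec, dae_enc, dae_dec):
        super().__init__()
        self.cae_enc, self.cae_dec = cae_enc, cae_dec
        self.dae_enc, self.dae_dec = dae_enc, dae_dec

    def forward(self, distorted):
        content = self.cae_enc(distorted)           # content representation
        distortion = self.dae_enc(distorted)        # distortion representation
        content_recon = self.cae_dec(content)       # standard codec path
        # Feeding the CAE code to the DAE decoder forces the DAE encoder
        # to capture what remains: the distortion.
        distorted_recon = self.dae_dec(distortion, content)
        return content, distortion, content_recon, distorted_recon
```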
2305.14731
|
2023-05-24T05:09:43Z
|
AutoDepthNet: High Frame Rate Depth Map Reconstruction using Commodity
Depth and RGB Cameras
|
[
"Peyman Gholami",
"Robert Xiao"
] |
Depth cameras have found applications in diverse fields, such as computer
vision, artificial intelligence, and video gaming. However, the high latency
and low frame rate of existing commodity depth cameras impose limitations on
their applications. We propose a fast and accurate depth map reconstruction
technique to reduce latency and increase the frame rate in depth cameras. Our
approach uses only a commodity depth camera and color camera in a hybrid camera
setup; our prototype is implemented using a Kinect Azure depth camera at 30 fps
and a high-speed RGB iPhone 11 Pro camera captured at 240 fps. The proposed
network, AutoDepthNet, is an encoder-decoder model that captures frames from
the high-speed RGB camera and combines them with previous depth frames to
reconstruct a stream of high frame rate depth maps. On GPU, with a 480 x 270
output resolution, our system achieves an inference time of 8 ms, enabling
real-time use at up to 200 fps with parallel processing. AutoDepthNet can
estimate depth values with an average RMS error of 0.076, a 44.5% improvement
compared to an optical flow-based comparison method. Our method can also
improve depth map quality by estimating depth values for missing and
invalidated pixels. The proposed method can be easily applied to existing depth
cameras and facilitates the use of depth cameras in applications that require
high-speed depth estimation. We also showcase the effectiveness of the
framework in upsampling different sparse datasets, e.g., video object
segmentation. As a demonstration of our method, we integrated our framework
into existing body tracking systems and demonstrated the robustness of the
proposed method in such applications.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.14754
|
2023-05-24T05:57:58Z
|
SUVR: A Search-based Approach to Unsupervised Visual Representation
Learning
|
[
"Yi-Zhan Xu",
"Chih-Yao Chen",
"Cheng-Te Li"
] |
Unsupervised learning has grown in popularity because of the difficulty of
collecting annotated data and the development of modern frameworks that allow
us to learn from unlabeled data. Existing studies, however, either disregard
variations at different levels of similarity or only consider negative samples
from one batch. We argue that image pairs should have varying degrees of
similarity, and the negative samples should be allowed to be drawn from the
entire dataset. In this work, we propose Search-based Unsupervised Visual
Representation Learning (SUVR) to learn better image representations in an
unsupervised manner. We first construct a graph from the image dataset by the
similarity between images, and adopt the concept of graph traversal to explore
positive samples. In the meantime, we make sure that negative samples can be
drawn from the full dataset. Quantitative experiments on five benchmark image
classification datasets demonstrate that SUVR can significantly outperform
strong competing methods on unsupervised embedding learning. Qualitative
experiments also show that SUVR can produce better representations in which
similar images are clustered closer together than unrelated images in the
latent space.
|
[
"cs.CV",
"cs.LG"
] | false |
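The sampling scheme above builds a similarity graph, takes graph-traversal neighbors as positives at varying similarity levels, and draws negatives from the full dataset rather than one batch. A minimal NumPy sketch, assuming row-normalized embeddings:

```python
import numpy as np

def build_knn_graph(embeddings, k=5):
    """Adjacency of the k most similar images (rows assumed L2-normalized)."""
    sims = embeddings @ embeddings.T
    np.fill_diagonal(sims, -np.inf)
    return np.argsort(-sims, axis=1)[:, :k]        # (N, k) neighbor indices

def positives_by_hop(graph, anchor, hops=2):
    """Graph traversal: positives at hop h are less similar as h grows."""
    frontier, per_hop = {anchor}, []
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph[node]}
        per_hop.append(sorted(frontier))
    return per_hop

def sample_negatives(n_images, exclude, size=32):
    """Negatives drawn from the entire dataset, not just one batch."""
    pool = np.setdiff1d(np.arange(n_images), np.fromiter(exclude, dtype=int))
    return np.random.choice(pool, size=size, replace=False)
```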
2305.14846
|
2023-05-24T07:54:44Z
|
Introducing Competition to Boost the Transferability of Targeted
Adversarial Examples through Clean Feature Mixup
|
[
"Junyoung Byun",
"Myung-Joon Kwon",
"Seungju Cho",
"Yoonji Kim",
"Changick Kim"
] |
Deep neural networks are widely known to be susceptible to adversarial
examples, which can cause incorrect predictions through subtle input
modifications. These adversarial examples tend to be transferable between
models, but targeted attacks still have lower attack success rates due to
significant variations in decision boundaries. To enhance the transferability
of targeted adversarial examples, we propose introducing competition into the
optimization process. Our idea is to craft adversarial perturbations in the
presence of two new types of competitor noises: adversarial perturbations
towards different target classes and friendly perturbations towards the correct
class. With these competitors, even if an adversarial example deceives a
network to extract specific features leading to the target class, this
disturbance can be suppressed by other competitors. Therefore, within this
competition, adversarial examples should take different attack strategies by
leveraging more diverse features to overwhelm their interference, leading to
improving their transferability to different models. Considering the
computational complexity, we efficiently simulate various interference from
these two types of competitors in feature space by randomly mixing up stored
clean features during model inference, and we name this method Clean Feature Mixup
(CFM). Our extensive experimental results on the ImageNet-Compatible and
CIFAR-10 datasets show that the proposed method outperforms the existing
baselines with a clear margin. Our code is available at
https://github.com/dreamflake/CFM.
|
[
"cs.CV",
"cs.LG"
] | false |
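A minimal sketch of the clean-feature-mixup idea from the CFM abstract above: during model inference, intermediate features of the batch being attacked are randomly mixed with stored clean features to simulate interference from competitor perturbations. The scalar mixing coefficient `alpha` and uniform sampling of the feature bank are assumptions of this sketch; the paper's exact mixing scheme may differ.

```python
import torch

def clean_feature_mixup(features, clean_bank, alpha=0.1):
    """Mix current intermediate features with randomly chosen stored clean
    features (a sketch of the CFM idea; `alpha` is an assumed setting).
    features:   (B, C, H, W) activations for the adversarial batch
    clean_bank: (N, C, H, W) features stored from clean images
    """
    idx = torch.randint(0, clean_bank.size(0), (features.size(0),))
    return (1.0 - alpha) * features + alpha * clean_bank[idx]

feats = torch.randn(8, 64, 14, 14)
bank = torch.randn(100, 64, 14, 14)
print(clean_feature_mixup(feats, bank).shape)  # torch.Size([8, 64, 14, 14])
```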
2305.14885
|
2023-05-24T08:34:43Z
|
Towards View-invariant and Accurate Loop Detection Based on Scene Graph
|
[
"Chuhao Liu",
"Shaojie Shen"
] |
Loop detection plays a key role in visual Simultaneous Localization and
Mapping (SLAM) by correcting the accumulated pose drift. In indoor scenarios,
the richly distributed semantic landmarks are view-point invariant and hold
strong descriptive power in loop detection. Current semantic-aided loop
detection methods embed the topology between semantic instances to search for a
loop; however, they face challenges in dealing with ambiguous semantic
instances and drastic viewpoint differences, which are not fully addressed in
the literature. This paper introduces a novel
loop detection method based on an incrementally created scene graph, targeting
the visual SLAM at indoor scenes. It jointly considers the macro-view topology,
micro-view topology, and occupancy of semantic instances to find correct
correspondences. Experiments using handheld RGB-D sequences show that our
method is able to accurately detect loops across drastically changed
viewpoints. It maintains high precision when observing objects with similar
topology and appearance, and it is also robust in changed indoor scenes.
|
[
"cs.CV",
"cs.RO"
] | false |
2305.14985
|
2023-05-24T10:19:57Z
|
IdealGPT: Iteratively Decomposing Vision and Language Reasoning via
Large Language Models
|
[
"Haoxuan You",
"Rui Sun",
"Zhecan Wang",
"Long Chen",
"Gengyu Wang",
"Hammad A. Ayyubi",
"Kai-Wei Chang",
"Shih-Fu Chang"
] |
The field of vision-and-language (VL) understanding has made unprecedented
progress with end-to-end large pre-trained VL models (VLMs). However, they
still fall short in zero-shot reasoning tasks that require multi-step
inference. To tackle such tasks, previous works resort to a
divide-and-conquer pipeline. In this paper, we argue that previous efforts have
several inherent shortcomings: 1) They rely on domain-specific sub-question
decomposing models. 2) They force models to predict the final answer even if
the sub-questions or sub-answers provide insufficient information. We address
these limitations via IdealGPT, a framework that iteratively decomposes VL
reasoning using large language models (LLMs). Specifically, IdealGPT utilizes
an LLM to generate sub-questions, a VLM to provide corresponding sub-answers,
and another LLM to reason to achieve the final answer. These three modules
perform the divide-and-conquer procedure iteratively until the model is
confident about the final answer to the main question. We evaluate IdealGPT on
multiple challenging VL reasoning tasks under a zero-shot setting. In
particular, our IdealGPT outperforms the best existing GPT-4-like models by an
absolute 10% on VCR and 15% on SNLI-VE. Code is available at
https://github.com/Hxyou/IdealGPT
|
[
"cs.CV",
"cs.CL"
] | false |
2305.14986
|
2023-05-24T10:21:31Z
|
Non-adversarial Robustness of Deep Learning Methods for Computer Vision
|
[
"Gorana Gojić",
"Vladimir Vincan",
"Ognjen Kundačina",
"Dragiša Mišković",
"Dinu Dragan"
] |
Non-adversarial robustness, also known as natural robustness, is a property
of deep learning models that enables them to maintain performance even when
faced with distribution shifts caused by natural variations in data. However,
achieving this property is challenging because it is difficult to predict in
advance the types of distribution shifts that may occur. To address this
challenge, researchers have proposed various approaches, some of which
anticipate potential distribution shifts, while others utilize knowledge about
the shifts that have already occurred to enhance model generalizability. In
this paper, we present a brief overview of the most recent techniques for
improving the robustness of computer vision methods, as well as a summary of
commonly used robustness benchmark datasets for evaluating the model's
performance under data distribution shifts. Finally, we examine the strengths
and limitations of the approaches reviewed and identify general trends in deep
learning robustness improvement for computer vision.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.15087
|
2023-05-24T12:05:53Z
|
Pento-DIARef: A Diagnostic Dataset for Learning the Incremental
Algorithm for Referring Expression Generation from Examples
|
[
"Philipp Sadler",
"David Schlangen"
] |
NLP tasks are typically defined extensionally through datasets containing
example instantiations (e.g., pairs of image i and text t), but motivated
intensionally through capabilities invoked in verbal descriptions of the task
(e.g., "t is a description of i, for which the content of i needs to be
recognised and understood"). We present Pento-DIARef, a diagnostic dataset in a
visual domain of puzzle pieces where referring expressions are generated by a
well-known symbolic algorithm (the "Incremental Algorithm"), which itself is
motivated by appeal to a hypothesised capability (eliminating distractors
through application of Gricean maxims). Our question then is whether the
extensional description (the dataset) is sufficient for a neural model to pick
up the underlying regularity and exhibit this capability given the simple task
definition of producing expressions from visual inputs. We find that a model
supported by a vision detection step and a targeted data generation scheme
achieves an almost perfect BLEU@1 score and sentence accuracy, whereas simpler
baselines do not.
|
[
"cs.CL",
"cs.CV"
] | false |
2305.15097
|
2023-05-24T12:27:42Z
|
Computer Vision for Construction Progress Monitoring: A Real-Time Object
Detection Approach
|
[
"Jiesheng Yang",
"Andreas Wilde",
"Karsten Menzel",
"Md Zubair Sheikh",
"Boris Kuznetsov"
] |
Construction progress monitoring (CPM) is essential for effective project
management, ensuring on-time and on-budget delivery. Traditional CPM methods
often rely on manual inspection and reporting, which are time-consuming and
prone to errors. This paper proposes a novel approach for automated CPM using
state-of-the-art object detection algorithms. The proposed method leverages
detectors such as YOLOv8, whose real-time capabilities and high accuracy enable
identifying and tracking construction elements within site images and videos. A
dataset was created,
consisting of various building elements and annotated with relevant objects for
training and validation. The performance of the proposed approach was evaluated
using standard metrics, such as precision, recall, and F1-score, demonstrating
significant improvement over existing methods. The integration of Computer
Vision into CPM provides stakeholders with reliable, efficient, and
cost-effective means to monitor project progress, facilitating timely
decision-making and ultimately contributing to the successful completion of
construction projects.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.15227
|
2023-05-24T15:09:41Z
|
Real time dense anomaly detection by learning on synthetic negative data
|
[
"Anja Delić",
"Matej Grcić",
"Siniša Šegvić"
] |
Most approaches to dense anomaly detection rely on generative modeling or on
discriminative methods that train with negative data. We consider a recent
hybrid method that optimizes the same shared representation according to
cross-entropy of the discriminative predictions, and negative log likelihood of
the predicted energy-based density. We extend that work with a jointly trained
generative flow that samples synthetic negatives at the border of the inlier
distribution. The proposed extension provides potential to learn the hybrid
method without real negative data. Our experiments analyze the impact of
training with synthetic negative data and validate contribution of the
energy-based density during training and evaluation.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.15311
|
2023-05-24T16:31:30Z
|
Personalized Dictionary Learning for Heterogeneous Datasets
|
[
"Geyu Liang",
"Naichen Shi",
"Raed Al Kontar",
"Salar Fattahi"
] |
We introduce a relevant yet challenging problem named Personalized Dictionary
Learning (PerDL), where the goal is to learn sparse linear representations from
heterogeneous datasets that share some commonality. In PerDL, we model each
dataset's shared and unique features as global and local dictionaries.
Challenges for PerDL are not only inherited from classical dictionary learning
(DL) but also arise due to the unknown nature of the shared and unique
features. In this paper, we rigorously formulate this problem and provide
conditions under which the global and local dictionaries can be provably
disentangled. Under these conditions, we provide a meta-algorithm called
Personalized Matching and Averaging (PerMA) that can recover both global and
local dictionaries from heterogeneous datasets. PerMA is highly efficient; it
converges to the ground truth at a linear rate under suitable conditions.
Moreover, it automatically borrows strength from strong learners to improve the
prediction of weak learners. As a general framework for extracting global and
local dictionaries, we show the application of PerDL in different learning
tasks, such as training with imbalanced datasets and video surveillance.
|
[
"cs.LG",
"cs.CV"
] | false |
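The PerDL model above can be written down compactly: each heterogeneous dataset is coded against the concatenation of a shared global dictionary and its own local dictionary, with sparse codes. The following sketch evaluates one plausible objective of that form; the l1 penalty and all variable names are our assumptions, and PerMA's matching-and-averaging updates are not shown.

```python
import numpy as np

def perdl_objective(Y_list, D_global, D_locals, X_list, lam=0.1):
    """Sketch of a personalized dictionary learning objective: dataset i is
    reconstructed from [global | local] atoms with sparsity-penalized codes."""
    total = 0.0
    for Y, D_loc, X in zip(Y_list, D_locals, X_list):
        D = np.hstack([D_global, D_loc])          # shared + dataset-specific atoms
        total += 0.5 * np.linalg.norm(Y - D @ X) ** 2 + lam * np.abs(X).sum()
    return total

d, k_g, k_l, n = 20, 5, 3, 50                     # toy dimensions
Y = [np.random.randn(d, n) for _ in range(2)]     # two heterogeneous datasets
Dg = np.random.randn(d, k_g)                      # global dictionary
Dl = [np.random.randn(d, k_l) for _ in range(2)]  # local dictionaries
X = [np.random.randn(k_g + k_l, n) for _ in range(2)]
print(perdl_objective(Y, Dg, Dl, X))
```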
2305.15316
|
2023-05-24T16:33:02Z
|
Training on Thin Air: Improve Image Classification with Generated Data
|
[
"Yongchao Zhou",
"Hshmat Sahak",
"Jimmy Ba"
] |
Acquiring high-quality data for training discriminative models is a crucial
yet challenging aspect of building effective predictive systems. In this paper,
we present Diffusion Inversion, a simple yet effective method that leverages
the pre-trained generative model, Stable Diffusion, to generate diverse,
high-quality training data for image classification. Our approach captures the
original data distribution and ensures data coverage by inverting images to the
latent space of Stable Diffusion, and generates diverse novel training images
by conditioning the generative model on noisy versions of these vectors. We
identify three key components that allow our generated images to successfully
supplant the original dataset, leading to a 2-3x enhancement in sample
complexity and a 6.5x decrease in sampling time. Moreover, our approach
consistently outperforms generic prompt-based steering methods and KNN
retrieval baseline across a wide range of datasets. Additionally, we
demonstrate the compatibility of our approach with widely-used data
augmentation techniques, as well as the reliability of the generated data in
supporting various neural architectures and enhancing few-shot learning.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.15367
|
2023-05-24T17:22:39Z
|
SAMScore: A Semantic Structural Similarity Metric for Image Translation
Evaluation
|
[
"Yunxiang Li",
"Meixu Chen",
"Wenxuan Yang",
"Kai Wang",
"Jun Ma",
"Alan C. Bovik",
"You Zhang"
] |
Image translation has wide applications, such as style transfer and modality
conversion, usually aiming to generate images having both high degrees of
realism and faithfulness. These problems remain difficult, especially when it
is important to preserve semantic structures. Traditional image-level
similarity metrics are of limited use, since the semantics of an image are
high-level, and not strongly governed by pixel-wise faithfulness to an original
image. Towards filling this gap, we introduce SAMScore, a generic semantic
structural similarity metric for evaluating the faithfulness of image
translation models. SAMScore is based on the recent high-performance Segment
Anything Model (SAM), which can perform semantic similarity comparisons with
standout accuracy. We applied SAMScore on 19 image translation tasks, and found
that it is able to outperform all other competitive metrics on all of the
tasks. We envision that SAMScore will prove to be a valuable tool that will
help to drive the vibrant field of image translation, by allowing for more
precise evaluations of new and evolving translation models. The code is
available at https://github.com/Kent0n-Li/SAMScore.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.15551
|
2023-05-24T20:33:38Z
|
Malicious or Benign? Towards Effective Content Moderation for Children's
Videos
|
[
"Syed Hammad Ahmed",
"Muhammad Junaid Khan",
"H. M. Umer Qaisar",
"Gita Sukthankar"
] |
Online video platforms receive hundreds of hours of uploads every minute,
making manual content moderation impossible. Unfortunately, the most vulnerable
consumers of malicious video content are children from ages 1-5 whose attention
is easily captured by bursts of color and sound. Scammers attempting to
monetize their content may craft malicious children's videos that are
superficially similar to educational videos, but include scary and disgusting
characters, violent motions, loud music, and disturbing noises. Prominent video
hosting platforms like YouTube have taken measures to mitigate malicious
content on their platform, but these videos often go undetected by current
content moderation tools that are focused on removing pornographic or
copyrighted content. This paper introduces our toolkit Malicious or Benign for
promoting research on automated content moderation of children's videos. We
present 1) a customizable annotation tool for videos, 2) a new dataset with
difficult to detect test cases of malicious content and 3) a benchmark suite of
state-of-the-art video classification models.
|
[
"cs.CV",
"cs.SI"
] | false |
2305.15562
|
2023-05-24T20:52:34Z
|
Let There Be Order: Rethinking Ordering in Autoregressive Graph
Generation
|
[
"Jie Bu",
"Kazi Sajeed Mehrab",
"Anuj Karpatne"
] |
Conditional graph generation tasks involve training a model to generate a
graph given a set of input conditions. Many previous studies employ
autoregressive models to incrementally generate graph components such as nodes
and edges. However, as graphs typically lack a natural ordering among their
components, converting a graph into a sequence of tokens is not
straightforward. While prior works mostly rely on conventional heuristics or
graph traversal methods like breadth-first search (BFS) or depth-first search
(DFS) to convert graphs to sequences, the impact of ordering on graph
generation has largely been unexplored. This paper contributes to this problem
by: (1) highlighting the crucial role of ordering in autoregressive graph
generation models, (2) proposing a novel theoretical framework that perceives
ordering as a dimensionality reduction problem, thereby facilitating a deeper
understanding of the relationship between orderings and generated graph
accuracy, and (3) introducing "latent sort," a learning-based ordering scheme
to perform dimensionality reduction of graph tokens. Our experimental results
showcase the effectiveness of latent sort across a wide range of graph
generation tasks, encouraging future works to further explore and develop
learning-based ordering schemes for autoregressive graph generation.
|
[
"cs.LG",
"cs.CV"
] | false |
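One way to picture a learning-based ordering scheme in the spirit of the "latent sort" described above: project each graph token's embedding to a scalar latent position and order tokens by it, i.e., treat ordering as a learned one-dimensional reduction. This is a hedged illustration of the idea only; the paper's actual latent sort may be trained and parameterized differently.

```python
import torch
import torch.nn as nn

class LatentSort(nn.Module):
    """Sketch of a learned ordering: a linear projection assigns each token
    a scalar "latent position", and tokens are sorted by that scalar."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, 1)

    def forward(self, tokens):                 # tokens: (N, dim) graph tokens
        keys = self.proj(tokens).squeeze(-1)   # (N,) latent positions
        order = torch.argsort(keys)            # the induced sequence order
        return tokens[order], order

sorter = LatentSort(dim=16)
toks = torch.randn(7, 16)
ordered, order = sorter(toks)
print(order)
```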
2305.15584
|
2023-05-24T21:41:08Z
|
Understanding Label Bias in Single Positive Multi-Label Learning
|
[
"Julio Arroyo",
"Pietro Perona",
"Elijah Cole"
] |
Annotating data for multi-label classification is prohibitively expensive
because every category of interest must be confirmed to be present or absent.
Recent work on single positive multi-label (SPML) learning shows that it is
possible to train effective multi-label classifiers using only one positive
label per image. However, the standard benchmarks for SPML are derived from
traditional multi-label classification datasets by retaining one positive label
for each training example (chosen uniformly at random) and discarding all other
labels. In realistic settings it is not likely that positive labels are chosen
uniformly at random. This work introduces protocols for studying label bias in
SPML and provides new empirical results.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.15608
|
2023-05-24T22:51:52Z
|
Semantic Segmentation by Semantic Proportions
|
[
"Halil Ibrahim Aysel",
"Xiaohao Cai",
"Adam Prügel-Bennett"
] |
Semantic segmentation is a critical task in computer vision that aims to
identify and classify individual pixels in an image, with numerous applications
such as autonomous driving and medical image analysis. However, semantic
segmentation can be highly challenging, particularly due to the need for large
amounts of annotated data. Annotating images is a time-consuming and costly
process, often requiring expert knowledge and significant effort. In this
paper, we propose a novel approach for semantic segmentation that eliminates
the need for ground-truth segmentation maps. Instead, our approach requires only the
rough information of individual semantic class proportions, shortened as
semantic proportions. It greatly simplifies the data annotation process and
thus will significantly reduce the annotation time and cost, making it more
feasible for large-scale applications. Moreover, it opens up new possibilities
for semantic segmentation tasks where obtaining the full ground-truth
segmentation maps may not be feasible or practical. Extensive experimental
results demonstrate that our approach can achieve comparable, and sometimes
even better, performance than the benchmark method that relies on ground-truth
segmentation maps. Utilising the semantic proportions suggested in this work offers
a promising direction for future research in the field of semantic
segmentation.
|
[
"cs.CV",
"cs.AI"
] | false |
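The supervision signal described in the abstract above can be sketched as a per-image proportion-matching loss: average the softmax output over all pixels to get predicted class proportions and compare them with the given rough proportions, with no pixel-level labels involved. The KL divergence below is our choice of comparison; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def proportion_loss(logits, target_props):
    """Train a segmentation net from class proportions only.
    logits:       (B, C, H, W) raw network outputs
    target_props: (B, C) rough class proportions, rows summing to 1
    """
    probs = F.softmax(logits, dim=1)
    pred_props = probs.mean(dim=(2, 3))   # average over all pixels
    return F.kl_div(pred_props.log(), target_props, reduction="batchmean")

logits = torch.randn(2, 3, 64, 64, requires_grad=True)
target = torch.tensor([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])
loss = proportion_loss(logits, target)
loss.backward()                           # gradients flow with no pixel labels
print(loss.item())
```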
2306.01756
|
2023-05-24T04:02:49Z
|
CSI-Based Efficient Self-Quarantine Monitoring System Using Branchy
Convolution Neural Network
|
[
"Jingtao Guo",
"Ivan Wang-Hei Ho"
] |
Nowadays, Coronavirus disease (COVID-19) has become a global pandemic because
of its fast spread in various countries. To build an anti-epidemic barrier,
self-isolation is required for people who have been to any at-risk places or
have been in close contact with infected people. However, existing camera or
wearable device-based monitoring systems may present privacy leakage risks or
cause user inconvenience in some cases. In this paper, we propose a Wi-Fi-based
device-free self-quarantine monitoring system. Specifically, we exploit channel
state information (CSI) derived from Wi-Fi signals as human activity features.
We collect CSI data in a simulated self-quarantine scenario and present
BranchyGhostNet, a lightweight convolutional neural network (CNN) with an early
exit prediction branch, for the efficient joint task of room occupancy
detection (ROD) and human activity recognition (HAR). The early exiting branch
is used for ROD, and the final one is used for HAR. Our experimental results
indicate that the proposed model can achieve an average accuracy of 98.19% for
classifying five different human activities. They also confirm that after
leveraging the early exit prediction mechanism, the inference latency for ROD
can be significantly reduced by 54.04% when compared with the final exiting
branch while guaranteeing the accuracy of ROD.
|
[
"cs.CV",
"cs.LG"
] | false |
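A toy illustration of the early-exit design described above: a shared stem feeds an early branch for room occupancy detection (ROD) and a deeper tail for human activity recognition (HAR), so inference can stop at the early branch when only occupancy is needed. Layer sizes and input shapes are illustrative assumptions, not BranchyGhostNet's actual architecture.

```python
import torch
import torch.nn as nn

class BranchyCNN(nn.Module):
    """Two-task branchy CNN sketch: early exit for ROD, final head for HAR."""
    def __init__(self, n_activities=5):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(8))
        self.rod_head = nn.Linear(16 * 8 * 8, 2)          # early exit: occupied?
        self.tail = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(4))
        self.har_head = nn.Linear(32 * 4 * 4, n_activities)

    def forward(self, csi, early_only=False):
        h = self.stem(csi)
        rod = self.rod_head(h.flatten(1))
        if early_only:        # stop here for low-latency occupancy detection
            return rod, None
        g = self.tail(h)
        return rod, self.har_head(g.flatten(1))

model = BranchyCNN()
rod, har = model(torch.randn(2, 1, 64, 64))   # toy CSI "images"
print(rod.shape, har.shape)                    # [2, 2] and [2, 5]
```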
2305.14672
|
2023-05-24T03:25:33Z
|
Quantifying Character Similarity with Vision Transformers
|
[
"Xinmei Yang",
"Abhishek Arora",
"Shao-Yu Jheng",
"Melissa Dell"
] |
Record linkage is a bedrock of quantitative social science, as analyses often
require linking data from multiple, noisy sources. Off-the-shelf string
matching methods are widely used, as they are straightforward and cheap to
implement and scale. Not all character substitutions are equally probable, and
for some settings there are widely used handcrafted lists denoting which string
substitutions are more likely, that improve the accuracy of string matching.
However, such lists do not exist for many settings, skewing research with
linked datasets towards a few high-resource contexts that are not
representative of the diversity of human societies. This study develops an
extensible way to measure character substitution costs for OCR'ed documents, by
employing large-scale self-supervised training of vision transformers (ViT)
with augmented digital fonts. For each language written with the CJK script, we
contrastively learn a metric space where different augmentations of the same
character are represented nearby. In this space, homoglyphic characters - those
with similar appearance such as ``O'' and ``0'' - have similar vector
representations. Using the cosine distance between characters' representations
as the substitution cost in an edit distance matching algorithm significantly
improves record linkage compared to other widely used string matching methods,
as OCR errors tend to be homoglyphic in nature. Homoglyphs can plausibly
capture character visual similarity across any script, including low-resource
settings. We illustrate this by creating homoglyph sets for 3,000 year old
ancient Chinese characters, which are highly pictorial. Fascinatingly, a ViT is
able to capture relationships, noted in the archaeological literature, in how
different abstract concepts were conceptualized by ancient societies.
|
[
"cs.CL",
"cs.CV",
"econ.GN",
"q-fin.EC"
] | false |
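The matching recipe in the abstract above, cosine distance between learned character representations used as the substitution cost inside edit distance, can be sketched directly. The toy two-dimensional embeddings below stand in for the ViT representations; only the cost structure is the point.

```python
import numpy as np

def homoglyph_edit_distance(a, b, emb):
    """Levenshtein distance where substitution cost is the cosine distance
    between character embeddings, so homoglyphs like 'O'/'0' are cheap."""
    def sub_cost(x, y):
        if x == y:
            return 0.0
        u, v = emb[x], emb[y]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return 1.0 - cos

    d = np.zeros((len(a) + 1, len(b) + 1))
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1,      # deletion
                          d[i, j - 1] + 1,      # insertion
                          d[i - 1, j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[len(a), len(b)]

# Toy embeddings: 'O' and '0' are near-duplicates, 'X' is far away.
emb = {"O": np.array([1.0, 0.0]), "0": np.array([0.99, 0.14]),
       "X": np.array([0.0, 1.0])}
print(homoglyph_edit_distance("O", "0", emb))  # small (cheap homoglyph swap)
print(homoglyph_edit_distance("O", "X", emb))  # ~1 (ordinary substitution)
```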
2305.14841
|
2023-05-24T07:45:54Z
|
Deep Learning-based Bio-Medical Image Segmentation using UNet
Architecture and Transfer Learning
|
[
"Nima Hassanpour",
"Abouzar Ghavami"
] |
Image segmentation is a branch of computer vision that is widely used in
real-world applications, including biomedical image processing. With the recent
advancement of deep learning, image segmentation has achieved very high
performance. Recently, the UNet architecture has been found at the core of
novel deep learning segmentation methods. In this paper, we implement the UNet
architecture from scratch using basic blocks in PyTorch and evaluate its
performance on multiple biomedical image datasets. We also use transfer
learning to apply novel modified UNet segmentation packages to the biomedical
image datasets, fine-tuning the pre-trained transferred model on each specific
dataset. We compare its performance with our fundamental UNet implementation
and show that the transfer-learning model segments images better than the UNet
model implemented from scratch.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.15149
|
2023-05-24T13:48:36Z
|
Reliability Scores from Saliency Map Clusters for Improved Image-based
Harvest-Readiness Prediction in Cauliflower
|
[
"Jana Kierdorf",
"Ribana Roscher"
] |
Cauliflower is a hand-harvested crop that must fulfill high quality standards
for sale, making the timing of harvest important. However, accurately
determining harvest-readiness can be challenging due to the cauliflower head
being covered by its canopy. While deep learning enables automated
harvest-readiness estimation, errors can occur due to field variability and
limited training data. In this paper, we analyze the reliability of a
harvest-readiness classifier with interpretable machine learning. By
identifying clusters of saliency maps, we derive reliability scores for each
classification result using knowledge about the domain and the image
properties. For unseen data, the reliability can be used to (i) inform farmers
to improve their decision-making and (ii) increase the model prediction
accuracy. Using RGB images of single cauliflower plants at different
developmental stages from the GrowliFlower dataset, we investigate various
saliency mapping approaches and find that they result in different quality of
reliability scores. With the most suitable interpretation tool, we adjust the
classification result and achieve a 15.72% improvement of the overall accuracy
to 88.14% and a 15.44% improvement of the average class accuracy to 88.52% for
the GrowliFlower dataset.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.15241
|
2023-05-24T15:25:19Z
|
Robust Classification via a Single Diffusion Model
|
[
"Huanran Chen",
"Yinpeng Dong",
"Zhengyi Wang",
"Xiao Yang",
"Chengqi Duan",
"Hang Su",
"Jun Zhu"
] |
Recently, diffusion models have been successfully applied to improving
adversarial robustness of image classifiers by purifying the adversarial noises
or generating realistic data for adversarial training. However, the
diffusion-based purification can be evaded by stronger adaptive attacks while
adversarial training does not perform well under unseen threats, exhibiting
inevitable limitations of these methods. To better harness the expressive power
of diffusion models, in this paper we propose Robust Diffusion Classifier
(RDC), a generative classifier that is constructed from a pre-trained diffusion
model to be adversarially robust. Our method first maximizes the data
likelihood of a given input and then predicts the class probabilities of the
optimized input using the conditional likelihood of the diffusion model through
Bayes' theorem. Since our method does not require training on particular
adversarial attacks, we demonstrate that it is more generalizable to defend
against multiple unseen threats. In particular, RDC achieves $73.24\%$ robust
accuracy against $\ell_\infty$ norm-bounded perturbations with
$\epsilon_\infty=8/255$ on CIFAR-10, surpassing the previous state-of-the-art
adversarial training models by $+2.34\%$. The findings highlight the potential
of generative classifiers by employing diffusion models for adversarial
robustness compared with the commonly studied discriminative classifiers.
|
[
"cs.CV",
"cs.CR",
"cs.LG"
] | false |
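A sketch of the generative-classifier step described above: under a uniform class prior, Bayes' theorem gives p(y|x) proportional to p(x|y), and the diffusion ELBO approximates log p(x|y) (up to constants) by the negative expected noise-prediction error of the class-conditional diffusion model. The `eps_model(x_t, t, y)` interface, the Monte Carlo sample count, and the omission of RDC's initial likelihood-maximization step are all assumptions of this sketch.

```python
import torch

@torch.no_grad()
def diffusion_class_probs(x, eps_model, num_classes, alphas_bar, n_samples=16):
    """Approximate p(y|x) with a conditional diffusion model via Bayes' rule:
    smaller class-conditional noise-prediction error -> higher p(y|x)."""
    errs = torch.zeros(num_classes)
    for _ in range(n_samples):
        t = torch.randint(0, len(alphas_bar), (1,))
        noise = torch.randn_like(x)
        ab = alphas_bar[t]
        x_t = ab.sqrt() * x + (1 - ab).sqrt() * noise   # forward diffusion
        for y in range(num_classes):
            pred = eps_model(x_t, t, torch.tensor([y]))
            errs[y] += ((pred - noise) ** 2).mean()
    return torch.softmax(-errs / n_samples, dim=0)

toy = lambda x_t, t, y: x_t * 0    # dummy noise predictor for shape-checking
x = torch.randn(1, 3, 32, 32)
print(diffusion_class_probs(x, toy, 10, torch.linspace(0.9999, 0.01, 1000)))
```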
2305.15544
|
2023-05-24T20:18:21Z
|
Fast Adversarial CNN-based Perturbation Attack on No-Reference Image-
and Video-Quality Metrics
|
[
"Ekaterina Shumitskaya",
"Anastasia Antsiferova",
"Dmitriy Vatolin"
] |
Modern neural-network-based no-reference image- and video-quality metrics
exhibit performance as high as full-reference metrics. These metrics are widely
used to improve visual quality in computer vision methods and compare video
processing methods. However, these metrics are not stable to traditional
adversarial attacks, which can cause incorrect results. Our goal is to
investigate the boundaries of no-reference metrics' applicability, and in this
paper, we propose a fast adversarial perturbation attack on no-reference
quality metrics. The proposed attack (FACPA) can be exploited as a
preprocessing step in real-time video processing and compression algorithms.
This research can yield insights that further aid the design of stable
neural-network-based no-reference quality metrics.
|
[
"cs.CV",
"cs.MM",
"eess.IV"
] | false |
2305.16347
|
2023-05-24T14:48:18Z
|
Prompt Evolution for Generative AI: A Classifier-Guided Approach
|
[
"Melvin Wong",
"Yew-Soon Ong",
"Abhishek Gupta",
"Kavitesh K. Bali",
"Caishun Chen"
] |
Synthesis of digital artifacts conditioned on user prompts has become an
important paradigm facilitating an explosion of use cases with generative AI.
However, such models often fail to connect the generated outputs and desired
target concepts/preferences implied by the prompts. Current research addressing
this limitation has largely focused on enhancing the prompts before output
generation or improving the model's performance up front. In contrast, this
paper conceptualizes prompt evolution, imparting evolutionary selection
pressure and variation during the generative process to produce multiple
outputs that satisfy the target concepts/preferences better. We propose a
multi-objective instantiation of this broader idea that uses a multi-label
image classifier-guided approach. The predicted labels from the classifiers
serve as multiple objectives to optimize, with the aim of producing diversified
images that meet user preferences. A novelty of our evolutionary algorithm is
that the pre-trained generative model gives us implicit mutation operations,
leveraging the model's stochastic generative capability to automate the
creation of Pareto-optimized images more faithful to user preferences.
|
[
"cs.LG",
"cs.AI",
"cs.CV",
"cs.NE",
"I.2"
] | false |
2306.04629
|
2023-05-24T15:42:38Z
|
Generative Adversarial Shaders for Real-Time Realism Enhancement
|
[
"Arturo Salmi",
"Szabolcs Cséfalvay",
"James Imber"
] |
Application of realism enhancement methods, particularly in real-time and
resource-constrained settings, has been frustrated by the expense of existing
methods. These achieve high quality results only at the cost of long runtimes
and high bandwidth, memory, and power requirements. We present an efficient
alternative: a high-performance, generative shader-based approach that adapts
machine learning techniques to real-time applications, even in
resource-constrained settings such as embedded and mobile GPUs. The proposed
learnable shader pipeline comprises differentiable functions that can be
trained in an end-to-end manner using an adversarial objective, allowing for
faithful reproduction of the appearance of a target image set without manual
tuning. The shader pipeline is optimized for highly efficient execution on the
target device, providing temporally stable, faster-than-real time results with
quality competitive with many neural network-based methods.
|
[
"cs.GR",
"cs.CV",
"cs.LG",
"I.2; I.3; I.4"
] | false |
2307.06392
|
2023-05-24T21:00:50Z
|
Deep learning-based Segmentation of Rabbit fetal skull with limited and
sub-optimal annotations
|
[
"Rajath Soans",
"Alexa Gleason",
"Tosha Shah",
"Corey Miller",
"Barbara Robinson",
"Kimberly Brannen",
"Antong Chen"
] |
In this paper, we propose a deep learning-based method to segment the
skeletal structures in the micro-CT images of Dutch-Belted rabbit fetuses which
can assist in the assessment of drug-induced skeletal abnormalities as a
required study in developmental and reproductive toxicology (DART). Our
strategy leverages sub-optimal segmentation labels of 22 skull bones from 26
micro-CT volumes and maps them to 250 unlabeled volumes on which a deep
CNN-based segmentation model is trained. In the experiments, our model was able
to achieve an average Dice Similarity Coefficient (DSC) of 0.89 across all
bones on the testing set, and 14 out of the 26 skull bones reached average DSC
>0.93. Our next steps are segmenting the whole body followed by developing a
model to classify abnormalities.
|
[
"q-bio.QM",
"cs.CV",
"eess.IV",
"q-bio.TO"
] | false |
2305.14625
|
2023-05-24T01:48:33Z
|
KNN-LM Does Not Improve Open-ended Text Generation
|
[
"Shufan Wang",
"Yixiao Song",
"Andrew Drozdov",
"Aparna Garimella",
"Varun Manjunatha",
"Mohit Iyyer"
] |
In this paper, we study the generation quality of interpolation-based
retrieval-augmented language models (LMs). These methods, best exemplified by
the KNN-LM, interpolate the LM's predicted distribution of the next word with a
distribution formed from the most relevant retrievals for a given prefix. While
the KNN-LM and related methods yield impressive decreases in perplexity, we
discover that they do not exhibit corresponding improvements in open-ended
generation quality, as measured by both automatic evaluation metrics (e.g.,
MAUVE) and human evaluations. Digging deeper, we find that interpolating with a
retrieval distribution actually increases perplexity compared to a baseline
Transformer LM for the majority of tokens in the WikiText-103 test set, even
though the overall perplexity is lower due to a smaller number of tokens for
which perplexity dramatically decreases after interpolation. However, when
decoding a long sequence at inference time, significant improvements on this
smaller subset of tokens are washed out by slightly worse predictions on most
tokens. Furthermore, we discover that the entropy of the retrieval distribution
increases faster than that of the base LM as the generated sequence becomes
longer, which indicates that retrieval is less reliable when using
model-generated text as queries (i.e., is subject to exposure bias). We hope
that our analysis spurs future work on improved decoding algorithms and
interpolation strategies for retrieval-augmented language models.
|
[
"cs.CL"
] | false |
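For reference, the interpolation analyzed in the abstract above takes the standard kNN-LM form: the next-token distribution is a convex combination of the parametric LM's distribution and a distribution assembled from retrieved nearest neighbours. The weight `lam` below is illustrative.

```python
import torch

def knnlm_next_token(lm_probs, knn_probs, lam=0.25):
    """kNN-LM interpolation: p(w) = lam * p_knn(w) + (1 - lam) * p_lm(w).
    lm_probs, knn_probs: (vocab,) distributions; lam: interpolation weight."""
    return lam * knn_probs + (1.0 - lam) * lm_probs

vocab = 8
lm = torch.softmax(torch.randn(vocab), dim=0)
# kNN distribution: mass on tokens that followed the retrieved contexts
knn = torch.zeros(vocab); knn[3] = 0.8; knn[5] = 0.2
mixed = knnlm_next_token(lm, knn, lam=0.25)
print(mixed.sum())   # still a valid distribution (sums to 1)
```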
2305.14630
|
2023-05-24T02:03:23Z
|
Testing Causal Models of Word Meaning in GPT-3 and -4
|
[
"Sam Musker",
"Ellie Pavlick"
] |
Large Language Models (LLMs) have driven extraordinary improvements in NLP.
However, it is unclear how such models represent lexical concepts-i.e., the
meanings of the words they use. This paper evaluates the lexical
representations of GPT-3 and GPT-4 through the lens of HIPE theory, a theory of
concept representations which focuses on representations of words describing
artifacts (such as "mop", "pencil", and "whistle"). The theory posits a causal
graph that relates the meanings of such words to the form, use, and history of
the objects to which they refer. We test LLMs using the same stimuli originally
used by Chaigneau et al. (2004) to evaluate the theory in humans, and consider
a variety of prompt designs. Our experiments concern judgements about causal
outcomes, object function, and object naming. We find no evidence that GPT-3
encodes the causal structure hypothesized by HIPE, but do find evidence that
GPT-4 encodes such structure. The results contribute to a growing body of
research characterizing the representational capacity of large language models.
|
[
"cs.CL",
"I.2.7"
] | false |
2305.14645
|
2023-05-24T02:30:31Z
|
Iteratively Improving Biomedical Entity Linking and Event Extraction via
Hard Expectation-Maximization
|
[
"Xiaochu Li",
"Minqian Liu",
"Zhiyang Xu",
"Lifu Huang"
] |
Biomedical entity linking and event extraction are two crucial tasks to
support text understanding and retrieval in the biomedical domain. These two
tasks intrinsically benefit each other: entity linking disambiguates the
biomedical concepts by referring to external knowledge bases and the domain
knowledge further provides additional clues to understand and extract the
biological processes, while event extraction identifies a key trigger and
entities involved to describe each biological process which also captures the
structural context to better disambiguate the biomedical entities. However,
previous research typically solves these two tasks separately or in a pipeline,
leading to error propagation. Moreover, it is even more challenging to solve
these two tasks together as there is no existing dataset that contains
annotations for both tasks. To solve these challenges, we propose joint
biomedical entity linking and event extraction by regarding the event
structures and entity references in knowledge bases as latent variables and
updating the two task-specific models in a hard Expectation-Maximization (EM)
fashion: (1) predicting the missing variables for each partially annotated
dataset based on the current two task-specific models, and (2) updating the
parameters of each model on the corresponding pseudo completed dataset.
Experimental results on two benchmark datasets: Genia 2011 for event extraction
and BC4GO for entity linking, show that our joint framework significantly
improves the model for each individual task and outperforms the strong
baselines for both tasks. We will make the code and model checkpoints publicly
available once the paper is accepted.
|
[
"cs.CL"
] | false |
2305.14660
|
2023-05-24T02:53:48Z
|
Complex Mathematical Symbol Definition Structures: A Dataset and Model
for Coordination Resolution in Definition Extraction
|
[
"Anna Martin-Boyle",
"Andrew Head",
"Kyle Lo",
"Risham Sidhu",
"Marti A. Hearst",
"Dongyeop Kang"
] |
Mathematical symbol definition extraction is important for improving
scholarly reading interfaces and scholarly information extraction (IE).
However, the task poses several challenges: math symbols are difficult to
process as they are not composed of natural language morphemes; and scholarly
papers often contain sentences that require resolving complex coordinate
structures. We present SymDef, an English language dataset of 5,927 sentences
from full-text scientific papers where each sentence is annotated with all
mathematical symbols linked with their corresponding definitions. This dataset
focuses specifically on complex coordination structures such as "respectively"
constructions, which often contain overlapping definition spans. We also
introduce a new definition extraction method that masks mathematical symbols,
creates a copy of each sentence for each symbol, specifies a target symbol, and
predicts its corresponding definition spans using slot filling. Our experiments
show that our definition extraction model significantly outperforms RoBERTa and
other strong IE baseline systems by 10.9 points with a macro F1 score of 84.82.
With our dataset and model, we can detect complex definitions in scholarly
documents to make scientific writing more readable.
|
[
"cs.CL",
"I.2.7"
] | false |
2305.14676
|
2023-05-24T03:33:21Z
|
GRILL: Grounded Vision-language Pre-training via Aligning Text and Image
Regions
|
[
"Woojeong Jin",
"Subhabrata Mukherjee",
"Yu Cheng",
"Yelong Shen",
"Weizhu Chen",
"Ahmed Hassan Awadallah",
"Damien Jose",
"Xiang Ren"
] |
Generalization to unseen tasks is an important ability for few-shot learners
to achieve better zero-/few-shot performance on diverse tasks. However, such
generalization to vision-language tasks including grounding and generation
tasks has been under-explored; existing few-shot VL models struggle to handle
tasks that involve object grounding and multiple images such as visual
commonsense reasoning or NLVR2. In this paper, we introduce GRILL, GRounded
vIsion Language aLigning, a novel VL model that can be generalized to diverse
tasks including visual question answering, captioning, and grounding tasks with
no or very few training instances. Specifically, GRILL learns object grounding
and localization by exploiting object-text alignments, which enables it to
transfer to grounding tasks in a zero-/few-shot fashion. We evaluate our model
on various zero-/few-shot VL tasks and show that it consistently surpasses the
state-of-the-art few-shot methods.
|
[
"cs.CL"
] | false |
2305.14682
|
2023-05-24T03:42:44Z
|
TACR: A Table-alignment-based Cell-selection and Reasoning Model for
Hybrid Question-Answering
|
[
"Jian Wu",
"Yicheng Xu",
"Yan Gao",
"Jian-Guang Lou",
"Börje F. Karlsson",
"Manabu Okumura"
] |
Hybrid Question-Answering (HQA), which targets reasoning over tables and
passages linked from table cells, has witnessed significant research in recent
years. A common challenge in HQA and other passage-table QA datasets is that it
is generally unrealistic to iterate over all table rows, columns, and linked
passages to retrieve evidence. Such a challenge made it difficult for previous
studies to show their reasoning ability in retrieving answers. To bridge this
gap, we propose a novel Table-alignment-based Cell-selection and Reasoning
model (TACR) for hybrid text and table QA, evaluated on the HybridQA and
WikiTableQuestions datasets. In evidence retrieval, we design a
table-question-alignment enhanced cell-selection method to retrieve
fine-grained evidence. In answer reasoning, we incorporate a QA module that
treats the row containing selected cells as context. Experimental results over
the HybridQA and WikiTableQuestions (WTQ) datasets show that TACR achieves
state-of-the-art results on cell selection and outperforms fine-grained
evidence retrieval baselines on HybridQA, while achieving competitive
performance on WTQ. We also conducted a detailed analysis to demonstrate that
being able to align questions to tables in the cell-selection stage can result
in important gains, with experiments showing over 90\% table row and column
selection accuracy, while also improving output explainability.
|
[
"cs.CL"
] | false |
2305.14696
|
2023-05-24T04:01:27Z
|
SELFOOD: Self-Supervised Out-Of-Distribution Detection via Learning to
Rank
|
[
"Dheeraj Mekala",
"Adithya Samavedhi",
"Chengyu Dong",
"Jingbo Shang"
] |
Deep neural classifiers trained with cross-entropy loss (CE loss) often
suffer from poor calibration, necessitating the task of out-of-distribution
(OOD) detection. Traditional supervised OOD detection methods require expensive
manual annotation of in-distribution and OOD samples. To address the annotation
bottleneck, we introduce SELFOOD, a self-supervised OOD detection method that
requires only in-distribution samples as supervision. We cast OOD detection as
an inter-document intra-label (IDIL) ranking problem and train the classifier
with our pairwise ranking loss, referred to as IDIL loss. Specifically, given a
set of in-distribution documents and their labels, for each label, we train the
classifier to rank the softmax scores of documents belonging to that label to
be higher than the scores of documents that belong to other labels. Unlike CE
loss, our IDIL loss function reaches zero when the desired confidence ranking
is achieved and gradients are backpropagated to decrease probabilities
associated with incorrect labels rather than continuously increasing the
probability of the correct label. Extensive experiments with several
classifiers on multiple classification datasets demonstrate the effectiveness
of our method in both coarse- and fine-grained settings.
|
[
"cs.CL"
] | false |
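A sketch of the inter-document intra-label (IDIL) ranking objective described above: for each label c, the softmax score in column c for documents labeled c should rank above the column-c scores of documents with other labels, and the loss vanishes once that ranking holds. The hinge form and zero margin are our assumptions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def idil_loss(logits, labels, margin=0.0):
    """Pairwise ranking loss over softmax scores: zero once documents of each
    label outrank all other documents on that label's score column."""
    scores = F.softmax(logits, dim=1)            # (B, C)
    loss, pairs = 0.0, 0
    for c in range(logits.size(1)):
        pos = scores[labels == c, c]             # docs of label c
        neg = scores[labels != c, c]             # other docs, same column
        if len(pos) == 0 or len(neg) == 0:
            continue
        # hinge on every (pos, neg) pair; gradients push down wrong labels
        diff = margin - (pos[:, None] - neg[None, :])
        loss = loss + torch.clamp(diff, min=0).sum()
        pairs += len(pos) * len(neg)
    return loss / max(pairs, 1)

logits = torch.randn(16, 4, requires_grad=True)
labels = torch.randint(0, 4, (16,))
idil_loss(logits, labels).backward()
```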
2305.14716
|
2023-05-24T04:36:32Z
|
GlobalBench: A Benchmark for Global Progress in Natural Language
Processing
|
[
"Yueqi Song",
"Catherine Cui",
"Simran Khanuja",
"Pengfei Liu",
"Fahim Faisal",
"Alissa Ostapenko",
"Genta Indra Winata",
"Alham Fikri Aji",
"Samuel Cahyawijaya",
"Yulia Tsvetkov",
"Antonios Anastasopoulos",
"Graham Neubig"
] |
Despite the major advances in NLP, significant disparities in NLP system
performance across languages still exist. Arguably, these are due to uneven
resource allocation and sub-optimal incentives to work on less resourced
languages. To track and further incentivize the global development of equitable
language technology, we introduce GlobalBench. Prior multilingual benchmarks
are static and have focused on a limited number of tasks and languages. In
contrast, GlobalBench is an ever-expanding collection that aims to dynamically
track progress on all NLP datasets in all languages. Rather than solely
measuring accuracy, GlobalBench also tracks the estimated per-speaker utility
and equity of technology across all languages, providing a multi-faceted view
of how language technology is serving people of the world. Furthermore,
GlobalBench is designed to identify the most under-served languages, and
rewards research efforts directed towards those languages. At present, the most
under-served languages are the ones with a relatively high population, but
nonetheless overlooked by composite multilingual benchmarks (like Punjabi,
Portuguese, and Wu Chinese). Currently, GlobalBench covers 966 datasets in 190
languages, and has 1,128 system submissions spanning 62 languages.
|
[
"cs.CL"
] | false |
2305.14719
|
2023-05-24T04:47:55Z
|
CuRIAM: Corpus re Interpretation and Metalanguage in U.S. Supreme Court
Opinions
|
[
"Michael Kranzlein",
"Nathan Schneider",
"Kevin Tobia"
] |
Most judicial decisions involve the interpretation of legal texts; as such,
judicial opinion requires the use of language as a medium to comment on or draw
attention to other language. Language used this way is called metalanguage. We
develop an annotation schema for categorizing types of legal metalanguage and
apply our schema to a set of U.S. Supreme Court opinions, yielding a corpus
totaling 59k tokens. We remark on several patterns observed in the kinds of
metalanguage used by the justices.
|
[
"cs.CL"
] | false |
2305.14725
|
2023-05-24T05:01:48Z
|
AMELI: Enhancing Multimodal Entity Linking with Fine-Grained Attributes
|
[
"Barry Menglong Yao",
"Yu Chen",
"Qifan Wang",
"Sijia Wang",
"Minqian Liu",
"Zhiyang Xu",
"Licheng Yu",
"Lifu Huang"
] |
We propose attribute-aware multimodal entity linking, where the input is a
mention described with a text and image, and the goal is to predict the
corresponding target entity from a multimodal knowledge base (KB) where each
entity is also described with a text description, a visual image and a set of
attributes and values. To support this research, we construct AMELI, a
large-scale dataset consisting of 18,472 reviews and 35,598 products. To
establish baseline performance on AMELI, we experiment with the current
state-of-the-art multimodal entity linking approaches and our enhanced
attribute-aware model and demonstrate the importance of incorporating the
attribute information into the entity linking process. To the best of our
knowledge, we are the first to build a benchmark dataset and solutions for the
attribute-aware multimodal entity linking task. Datasets and code will be made
publicly available.
|
[
"cs.CL",
"I.2.7"
] | false |
2305.14739
|
2023-05-24T05:19:15Z
|
Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
|
[
"Weijia Shi",
"Xiaochuang Han",
"Mike Lewis",
"Yulia Tsvetkov",
"Luke Zettlemoyer",
"Scott Wen-tau Yih"
] |
Language models (LMs) often struggle to pay enough attention to the input
context, and generate texts that are unfaithful or contain hallucinations. To
mitigate this issue, we present context-aware decoding (CAD), which follows a
contrastive output distribution that amplifies the difference between the
output probabilities when a model is used with and without context. Our
experiments show that CAD, without additional training, significantly improves
the faithfulness of different LM families, including OPT, GPT, LLaMA and
FLAN-T5 for summarization tasks (e.g., 14.3% gain for LLaMA in factuality
metrics). Furthermore, CAD is particularly effective in overriding a model's
prior knowledge when it contradicts the provided context, leading to
substantial improvements in tasks where resolving the knowledge conflict is
essential.
|
[
"cs.CL"
] | false |
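The contrastive distribution described in the abstract above is commonly written in logit space: amplify the logits computed with the context and subtract those computed without it. The sketch below assumes alpha = 0.5 and a 32k vocabulary for illustration.

```python
import torch

def context_aware_decode(logits_with_ctx, logits_no_ctx, alpha=0.5):
    """Context-aware decoding: softmax((1 + a) * z_ctx - a * z_no_ctx),
    amplifying what the context adds to the next-token distribution."""
    adjusted = (1 + alpha) * logits_with_ctx - alpha * logits_no_ctx
    return torch.softmax(adjusted, dim=-1)

z_ctx = torch.randn(32000)     # next-token logits given (context, query)
z_plain = torch.randn(32000)   # next-token logits given the query alone
next_token = context_aware_decode(z_ctx, z_plain).argmax()
print(next_token)
```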
2305.14750
|
2023-05-24T05:53:11Z
|
Mastering the ABCDs of Complex Questions: Answer-Based Claim
Decomposition for Fine-grained Self-Evaluation
|
[
"Nishant Balepur",
"Jie Huang",
"Samraj Moorjani",
"Hari Sundaram",
"Kevin Chen-Chuan Chang"
] |
When answering complex questions, large language models (LLMs) may produce
answers that do not satisfy all criteria of the question. While existing
self-evaluation techniques aim to detect if such answers are correct, these
techniques are unable to determine which criteria of the question are satisfied
by the generated answers. To address this issue, we propose answer-based claim
decomposition (ABCD), a prompting strategy that decomposes questions into a
series of true/false claims that can be used to verify which criteria of the
input question an answer satisfies. Using the decomposed ABCD claims, we
perform fine-grained self-evaluation. Through preliminary experiments on three
datasets, including a newly-collected challenge dataset ObscureQA, we find that
GPT-3.5 has some ability to determine to what extent its answer satisfies the
criteria of the input question, and can give insights into the errors and
knowledge gaps of the model.
|
[
"cs.CL"
] | false |
2305.14763
|
2023-05-24T06:14:31Z
|
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in
Large Language Models
|
[
"Natalie Shapira",
"Mosh Levy",
"Seyed Hossein Alavi",
"Xuhui Zhou",
"Yejin Choi",
"Yoav Goldberg",
"Maarten Sap",
"Vered Shwartz"
] |
The escalating debate on AI's capabilities warrants developing reliable
metrics to assess machine "intelligence". Recently, many anecdotal examples
were used to suggest that newer large language models (LLMs) like ChatGPT and
GPT-4 exhibit Neural Theory-of-Mind (N-ToM); however, prior work reached
conflicting conclusions regarding those abilities. We investigate the extent of
LLMs' N-ToM through an extensive evaluation on 6 tasks and find that while LLMs
exhibit certain N-ToM abilities, this behavior is far from being robust. We
further examine the factors impacting performance on N-ToM tasks and discover
that LLMs struggle with adversarial examples, indicating reliance on shallow
heuristics rather than robust ToM abilities. We caution against drawing
conclusions from anecdotal examples, limited benchmark testing, and using
human-designed psychological tests to evaluate models.
|
[
"cs.CL"
] | false |
2305.14783
|
2023-05-24T06:39:12Z
|
Disentangled Phonetic Representation for Chinese Spelling Correction
|
[
"Zihong Liang",
"Xiaojun Quan",
"Qifan Wang"
] |
Chinese Spelling Correction (CSC) aims to detect and correct erroneous
characters in Chinese texts. Although efforts have been made to introduce
phonetic information (Hanyu Pinyin) in this task, they typically merge phonetic
representations with character representations, which tends to weaken the
representation effect of normal texts. In this work, we propose to disentangle
the two types of features to allow for direct interaction between textual and
phonetic information. To learn useful phonetic representations, we introduce a
pinyin-to-character objective to ask the model to predict the correct
characters based solely on phonetic information, where a separation mask is
imposed to disable attention from phonetic input to text. To avoid overfitting
the phonetics, we further design a self-distillation module to ensure that
semantic information plays a major role in the prediction. Extensive
experiments on three CSC benchmarks demonstrate the superiority of our method
in using phonetic information.
|
[
"cs.CL"
] | false |
2305.14847
|
2023-05-24T07:57:04Z
|
Drafting Event Schemas using Language Models
|
[
"Anisha Gunjal",
"Greg Durrett"
] |
Past work has studied event prediction and event language modeling, sometimes
mediated through structured representations of knowledge in the form of event
schemas. Such schemas can lead to explainable predictions and forecasting of
unseen events given incomplete information. In this work, we look at the
process of creating such schemas to describe complex events. We use large
language models (LLMs) to draft schemas directly in natural language, which can
be further refined by human curators as necessary. Our focus is on whether we
can achieve sufficient diversity and recall of key events and whether we can
produce the schemas in a sufficiently descriptive style. We show that large
language models are able to achieve moderate recall against schemas taken from
two different datasets, with even better results when multiple prompts and
multiple samples are combined. Moreover, we show that textual entailment
methods can be used for both matching schemas to instances of events as well as
evaluating overlap between gold and predicted schemas. Our method paves the way
for easier distillation of event knowledge from large language models into
schemas.
|
[
"cs.CL"
] | false |
2305.14857
|
2023-05-24T08:06:33Z
|
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual
Transfer
|
[
"Akari Asai",
"Sneha Kudugunta",
"Xinyan Velocity Yu",
"Terra Blevins",
"Hila Gonen",
"Machel Reid",
"Yulia Tsvetkov",
"Sebastian Ruder",
"Hannaneh Hajishirzi"
] |
Despite remarkable advancements in few-shot generalization in natural
language processing, most models are developed and evaluated primarily in
English. To facilitate research on few-shot cross-lingual transfer, we
introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across
54 languages in a sequence-to-sequence format and provides a fixed set of
few-shot examples and instructions. BUFFET is designed to establish a rigorous
and equitable evaluation framework for few-shot cross-lingual transfer across a
broad range of tasks and languages. Using BUFFET, we perform thorough
evaluations of state-of-the-art multilingual large language models with
different transfer methods, namely in-context learning and fine-tuning. Our
findings reveal significant room for improvement in few-shot in-context
cross-lingual transfer. In particular, ChatGPT with in-context learning often
performs worse than much smaller mT5-base models fine-tuned on English task
data and few-shot in-language examples. Our analysis suggests various avenues
for future research in few-shot cross-lingual transfer, such as improved
pretraining, understanding, and future evaluations.
|
[
"cs.CL"
] | false |
2305.14898
|
2023-05-24T08:52:08Z
|
PIVOINE: Instruction Tuning for Open-world Information Extraction
|
[
"Keming Lu",
"Xiaoman Pan",
"Kaiqiang Song",
"Hongming Zhang",
"Dong Yu",
"Jianshu Chen"
] |
We consider the problem of Open-world Information Extraction (Open-world IE),
which extracts comprehensive entity profiles from unstructured texts. Different
from the conventional closed-world setting of Information Extraction (IE),
Open-world IE considers a more general situation where entities and relations
could be beyond a predefined ontology. More importantly, we seek to develop a
large language model (LLM) that is able to perform Open-world IE to extract
desirable entity profiles characterized by (possibly fine-grained) natural
language instructions. We achieve this by finetuning LLMs using instruction
tuning. In particular, we construct INSTRUCTOPENWIKI, a substantial instruction
tuning dataset for Open-world IE enriched with a comprehensive corpus,
extensive annotations, and diverse instructions. We finetune the pretrained
BLOOM models on INSTRUCTOPENWIKI and obtain PIVOINE, an LLM for Open-world IE
with strong instruction-following capabilities. Our experiments demonstrate
that PIVOINE significantly outperforms traditional closed-world methods and
other LLM baselines, displaying impressive generalization capabilities on both
unseen instructions and out-of-ontology cases. Consequently, PIVOINE emerges as
a promising solution to tackle the open-world challenge in IE effectively.
|
[
"cs.CL"
] | false |
2305.14908
|
2023-05-24T08:59:00Z
|
PURR: Efficiently Editing Language Model Hallucinations by Denoising
Language Model Corruptions
|
[
"Anthony Chen",
"Panupong Pasupat",
"Sameer Singh",
"Hongrae Lee",
"Kelvin Guu"
] |
The remarkable capabilities of large language models have been accompanied by
a persistent drawback: the generation of false and unsubstantiated claims
commonly known as "hallucinations". To combat this issue, recent research has
introduced approaches that involve editing and attributing the outputs of
language models, particularly through prompt-based editing. However, the
inference cost and speed of using large language models for editing currently
bottleneck prompt-based methods. These bottlenecks motivate the training of
compact editors, which is challenging due to the scarcity of training data for
this purpose. To overcome these challenges, we exploit the power of large
language models to introduce corruptions (i.e., noise) into text and
subsequently fine-tune compact editors to denoise the corruptions by
incorporating relevant evidence. Our methodology is entirely unsupervised and
provides us with faux hallucinations for training in any domain. Our Petite
Unsupervised Research and Revision model, PURR, not only improves attribution
over existing editing methods based on fine-tuning and prompting, but also
achieves faster execution times by orders of magnitude.
|
[
"cs.CL"
] | false |
2305.14913
|
2023-05-24T09:03:01Z
|
CoLaDa: A Collaborative Label Denoising Framework for Cross-lingual
Named Entity Recognition
|
[
"Tingting Ma",
"Qianhui Wu",
"Huiqiang Jiang",
"Börje F. Karlsson",
"Tiejun Zhao",
"Chin-Yew Lin"
] |
Cross-lingual named entity recognition (NER) aims to train an NER system that
generalizes well to a target language by leveraging labeled data in a given
source language. Previous work alleviates the data scarcity problem by
translating source-language labeled data or performing knowledge distillation
on target-language unlabeled data. However, these methods may suffer from label
noise due to the automatic labeling process. In this paper, we propose CoLaDa,
a Collaborative Label Denoising Framework, to address this problem.
Specifically, we first explore a model-collaboration-based denoising scheme
that enables models trained on different data sources to collaboratively
denoise pseudo labels used by each other. We then present an
instance-collaboration-based strategy that considers the label consistency of
each token's neighborhood in the representation space for denoising.
Experiments on different benchmark datasets show that the proposed CoLaDa
achieves superior results compared to previous methods, especially when
generalizing to distant languages.
|
[
"cs.CL"
] | false |
2305.14929
|
2023-05-24T09:11:11Z
|
Aligning Language Models to User Opinions
|
[
"EunJeong Hwang",
"Bodhisattwa Prasad Majumder",
"Niket Tandon"
] |
An important aspect of developing LLMs that interact with humans is to align
models' behavior to their users. It is possible to prompt an LLM into behaving
as a certain persona, especially a user group or ideological persona the model
captured during its pretraining stage. But how best to align an LLM with a
specific user and not a demographic or ideological group remains an open
question. Mining public opinion surveys (by Pew Research), we find that the
opinions of a user and their demographics and ideologies are not mutual
predictors. We use this insight to align LLMs by modeling both user opinions as
well as user demographics and ideology, achieving up to 7 points accuracy gains
in predicting public opinions from survey questions across a broad set of
topics. In addition to the typical approach of prompting LLMs with demographics
and ideology, we discover that utilizing the most relevant past opinions from
individual users enables the model to predict user opinions more accurately.
|
[
"cs.CL"
] | false |
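The last finding above, retrieving a user's most relevant past opinions, can be sketched as a prompting recipe. The word-overlap ranking below is a hypothetical stand-in for whatever retriever the authors actually use, and the prompt wording is illustrative.

```python
# Hypothetical sketch of the prompting recipe suggested above: condition the
# LLM on demographics, ideology, and the user's most relevant past opinions.
# Word-overlap ranking stands in for a real retriever.
import re

def build_persona_prompt(question, demographics, ideology, past_opinions, top_k=2):
    tok = lambda s: set(re.findall(r"\w+", s.lower()))
    q = tok(question)
    ranked = sorted(past_opinions, key=lambda op: len(q & tok(op)), reverse=True)
    lines = [f"Demographics: {demographics}",
             f"Ideology: {ideology}",
             "Most relevant past opinions:"]
    lines += [f"- {op}" for op in ranked[:top_k]]
    lines += [f"Question: {question}", "Answer as this user would:"]
    return "\n".join(lines)

print(build_persona_prompt(
    "Should the government regulate social media?",
    "age 34, urban, college-educated", "moderate",
    ["I worry about misinformation on social media.",
     "Taxes on small businesses are too high.",
     "Online platforms should verify news sources."]))
```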
2305.14935
|
2023-05-24T09:17:05Z
|
Modeling Appropriate Language in Argumentation
|
[
"Timon Ziegenbein",
"Shahbaz Syed",
"Felix Lange",
"Martin Potthast",
"Henning Wachsmuth"
] |
Online discussion moderators must make ad-hoc decisions about whether the
contributions of discussion participants are appropriate or should be removed
to maintain civility. Existing research on offensive language and the resulting
tools cover only one aspect among many involved in such decisions. The question
of what is considered appropriate in a controversial discussion has not yet
been systematically addressed. In this paper, we operationalize appropriate
language in argumentation for the first time. In particular, we model
appropriateness through the absence of flaws, grounded in research on argument
quality assessment, especially in aspects from rhetoric. From these, we derive
a new taxonomy of 14 dimensions that determine inappropriate language in online
discussions. Building on three argument quality corpora, we then create a
corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses
support that the taxonomy covers the concept of appropriateness
comprehensively, showing several plausible correlations with argument quality
dimensions. Moreover, results of baseline approaches to assessing
appropriateness suggest that all dimensions can be modeled computationally on
the corpus.
|
[
"cs.CL"
] | false |
2305.14936
|
2023-05-24T09:18:28Z
|
Trade-Offs Between Fairness and Privacy in Language Modeling
|
[
"Cleo Matzken",
"Steffen Eger",
"Ivan Habernal"
] |
Protecting privacy in contemporary NLP models is gaining in importance. So
does the need to mitigate social biases of such models. But can we have both at
the same time? Existing research suggests that privacy preservation comes at
the price of worsening biases in classification tasks. In this paper, we
explore the extent to which this tradeoff really holds when we incorporate both
privacy preservation and de-biasing techniques into training text generation
models. How does improving the model along one dimension affect the other
dimension as well as the utility of the model? We conduct an extensive set of
experiments that include bias detection, privacy attacks, language modeling,
and performance on downstream tasks.
|
[
"cs.CL"
] | false |
2305.14963
|
2023-05-24T09:57:06Z
|
PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text
Classification
|
[
"Yau-Shian Wang",
"Ta-Chung Chi",
"Ruohong Zhang",
"Yiming Yang"
] |
We present PESCO, a novel contrastive learning framework that substantially
improves the performance of zero-shot text classification. We formulate text
classification as a neural text matching problem where each document is treated
as a query, and the system learns the mapping from each query to the relevant
class labels by (1) adding prompts to enhance label matching, and (2) using
retrieved labels to enrich the training set in a self-training loop of
contrastive learning. PESCO achieves state-of-the-art performance on four
benchmark text classification datasets. On DBpedia, we achieve 98.5\% accuracy
without any labeled data, which is close to the fully-supervised result.
Extensive experiments and analyses show all the components of PESCO are
necessary for improving the performance of zero-shot text classification.
|
[
"cs.CL"
] | false |
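A heavily simplified sketch of the "classification as prompted text matching" framing follows. A toy hashed bag-of-words encoder stands in for PESCO's learned sentence encoder, and the contrastive self-training loop is omitted entirely; the prompt template is made up for the example.

```python
# Heavily simplified sketch of classification as prompted text matching:
# a toy hashed bag-of-words encoder stands in for PESCO's learned sentence
# encoder, and the contrastive self-training loop is omitted entirely.
import numpy as np

def embed(text, dim=64):
    """Toy encoder: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def zero_shot_classify(doc, labels, prompt="this text is about {}"):
    """Score the document (query) against each prompted label; the prompt
    turns bare labels into natural sentences that match better."""
    q = embed(doc)
    return max(labels, key=lambda lab: float(q @ embed(prompt.format(lab))))

print(zero_shot_classify("the striker scored twice and football fans cheered",
                         ["football", "politics"]))
```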
2305.15005
|
2023-05-24T10:45:25Z
|
Sentiment Analysis in the Era of Large Language Models: A Reality Check
|
[
"Wenxuan Zhang",
"Yue Deng",
"Bing Liu",
"Sinno Jialin Pan",
"Lidong Bing"
] |
Sentiment analysis (SA) has been a long-standing research area in natural
language processing. It can offer rich insights into human sentiments and
opinions and has thus seen considerable interest from both academia and
industry. With the advent of large language models (LLMs) such as ChatGPT,
there is great potential for applying them to SA problems. However, the
extent to which existing LLMs can be leveraged for different sentiment analysis
tasks remains unclear. This paper aims to provide a comprehensive investigation
into the capabilities of LLMs in performing various sentiment analysis tasks,
from conventional sentiment classification to aspect-based sentiment analysis
and multifaceted analysis of subjective texts. We evaluate performance across
13 tasks on 26 datasets and compare the results against small language models
(SLMs) trained on domain-specific datasets. Our study reveals that while LLMs
demonstrate satisfactory performance in simpler tasks, they lag behind in more
complex tasks requiring deeper understanding or structured sentiment
information. However, LLMs significantly outperform SLMs in few-shot learning
settings, suggesting their potential when annotation resources are limited. We
also highlight the limitations of current evaluation practices in assessing
LLMs' SA abilities and propose a novel benchmark, \textsc{SentiEval}, for a
more comprehensive and realistic evaluation. Data and code during our
investigations are available at
\url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}.
|
[
"cs.CL"
] | false |
2305.15010
|
2023-05-24T10:48:53Z
|
Injecting Knowledge into Biomedical Pre-trained Models via Polymorphism
and Synonymous Substitution
|
[
"Hongbo Zhang",
"Xiang Wan",
"Benyou Wang"
] |
Pre-trained language models (PLMs) were considered to be able to store
relational knowledge present in the training data. However, some relational
knowledge seems to be discarded unsafely in PLMs due to \textbf{report bias}:
low-frequency relational knowledge might be underexpressed compared to
high-frequency knowledge in PLMs. This hints that relational knowledge might
not be redundant to the knowledge already stored in PLMs, but rather
complementary to it. To additionally inject relational knowledge into PLMs, we
propose a simple-yet-effective approach inspired by three observations
(namely, polymorphism, synonymous substitution, and association). In
particular, we switch entities in the
training corpus to related entities (either hypernyms/hyponyms/synonyms, or
arbitrarily-related concepts). Experimental results show that the proposed
approach could not only better capture relational knowledge, but also improve
the performance in various biomedical downstream tasks. Our model is available
in \url{https://github.com/StevenZHB/BioPLM_InjectingKnowledge}.
|
[
"cs.CL"
] | false |
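The entity-switching step is concrete enough for a toy sketch: replace entities in the training corpus with related terms so that low-frequency relational knowledge gets expressed in context. The lexicon and probability below are illustrative, not the paper's.

```python
# Toy sketch of the entity-switching idea: replace entities in the training
# corpus with related terms so low-frequency relational knowledge gets
# expressed in context. The lexicon below is illustrative, not the paper's.
import random

RELATED = {
    "aspirin": ["acetylsalicylic acid", "NSAID"],   # synonyms / hypernym
    "fever": ["pyrexia", "symptom"],
}

def substitute_entities(sentence, p=0.5, seed=0):
    rng = random.Random(seed)
    out = []
    for token in sentence.split():
        key = token.lower().strip(".,")
        if key in RELATED and rng.random() < p:
            out.append(rng.choice(RELATED[key]))    # switch to a related term
        else:
            out.append(token)
    return " ".join(out)

print(substitute_entities("Aspirin is commonly used to reduce fever ."))
```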
2305.15014
|
2023-05-24T10:57:53Z
|
Unlocking Temporal Question Answering for Large Language Models Using
Code Execution
|
[
"Xingxuan Li",
"Liying Cheng",
"Qingyu Tan",
"Hwee Tou Ng",
"Shafiq Joty",
"Lidong Bing"
] |
Large language models (LLMs) have made significant progress in natural
language processing (NLP), and are utilized extensively in various
applications. Recent works, such as chain-of-thought (CoT), have shown that
intermediate reasoning steps can improve the performance of LLMs for complex
reasoning tasks, such as math problems and symbolic question-answering tasks.
However, we notice that LLMs struggle with temporal reasoning. Our preliminary
experiments show that generating intermediate reasoning steps does not always
boost performance on complex temporal question-answering tasks. Therefore, we
propose a novel framework that combines
the extraction capability of LLMs and the logical reasoning capability of a
Python solver to tackle this issue. Extensive experiments and analysis
demonstrate the effectiveness of our framework in handling intricate time-bound
reasoning tasks.
|
[
"cs.CL"
] | false |
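The division of labor described above can be sketched directly: the LLM extracts structured, time-bound facts, and plain Python performs the deterministic temporal reasoning. In the sketch the extraction step is replaced by hard-coded facts, and the schema is an assumption for illustration.

```python
# Sketch of the division of labor: the LLM extracts structured, time-bound
# facts (hard-coded here in place of the extraction step), and plain Python
# performs the deterministic temporal reasoning.
from datetime import date

facts = [
    {"person": "Alice", "employer": "AcmeCorp",
     "start": date(2015, 3, 1), "end": date(2018, 6, 30)},
    {"person": "Alice", "employer": "Initech",
     "start": date(2018, 7, 1), "end": date(2021, 1, 15)},
]

def employer_on(person, when):
    """Interval-containment check that the LLM delegates to the solver."""
    for f in facts:
        if f["person"] == person and f["start"] <= when <= f["end"]:
            return f["employer"]
    return None

print(employer_on("Alice", date(2019, 5, 1)))  # -> Initech
```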
2305.15041
|
2023-05-24T11:27:59Z
|
Generating Faithful Synthetic Data with Large Language Models: A Case
Study in Computational Social Science
|
[
"Veniamin Veselovsky",
"Manoel Horta Ribeiro",
"Akhil Arora",
"Martin Josifoski",
"Ashton Anderson",
"Robert West"
] |
Large Language Models (LLMs) have democratized synthetic data generation,
which in turn has the potential to simplify and broaden a wide gamut of NLP
tasks. Here, we tackle a pervasive problem in synthetic data generation: its
generative distribution often differs from the distribution of real-world data
researchers care about (in other words, it is unfaithful). In a case study on
sarcasm detection, we study three strategies to increase the faithfulness of
synthetic data: grounding, filtering, and taxonomy-based generation. We
evaluate these strategies using the performance of classifiers trained with
generated synthetic data on real-world data. While all three strategies improve
the performance of classifiers, we find that grounding works best for the task
at hand. As synthetic data generation plays an ever-increasing role in NLP
research, we expect this work to be a stepping stone in improving its utility.
We conclude this paper with some recommendations on how to generate
high(er)-fidelity synthetic data for specific tasks.
|
[
"cs.CL"
] | false |
2305.15044
|
2023-05-24T11:34:39Z
|
Is Summary Useful or Not? An Extrinsic Human Evaluation of Text
Summaries on Downstream Tasks
|
[
"Xiao Pu",
"Mingqi Gao",
"Xiaojun Wan"
] |
Research on automated text summarization relies heavily on human and
automatic evaluation. While recent work on human evaluation mainly adopted
intrinsic evaluation methods, judging the generic quality of text summaries,
e.g. informativeness and coherence, our work focuses on evaluating the
usefulness of text summaries with extrinsic methods. We carefully design three
different downstream tasks for extrinsic human evaluation of summaries, i.e.,
question answering, text classification and text similarity assessment. We
carry out experiments using system rankings and user behavior data to evaluate
the performance of different summarization models. We find summaries are
particularly useful in tasks that rely on an overall judgment of the text,
while being less effective for question answering tasks. The results show that
summaries generated by fine-tuned models lead to higher consistency in
usefulness across all three tasks, as rankings of fine-tuned summarization
systems are close across downstream tasks according to the proposed extrinsic
metrics. Summaries generated by models in the zero-shot setting, however, are
found to be biased towards the text classification and similarity assessment
tasks, due to their general and less detailed summary style. We further evaluate
the correlation of 14 intrinsic automatic metrics with human criteria and show
that intrinsic automatic metrics perform well in evaluating the usefulness of
summaries in the question-answering task, but are less effective in the other
two tasks. This highlights the limitations of relying solely on intrinsic
automatic metrics in evaluating the performance and usefulness of summaries.
|
[
"cs.CL"
] | false |
2305.15045
|
2023-05-24T11:35:31Z
|
SETI: Systematicity Evaluation of Textual Inference
|
[
"Xiyan Fu",
"Anette Frank"
] |
We propose SETI (Systematicity Evaluation of Textual Inference), a novel and
comprehensive benchmark designed for evaluating pre-trained language models
(PLMs) for their systematicity capabilities in the domain of textual inference.
Specifically, SETI offers three different NLI tasks and corresponding datasets
to evaluate various types of systematicity in reasoning processes. In order to
solve these tasks, models are required to perform compositional inference based
on known primitive constituents. We conduct experiments of SETI on six widely
used PLMs. Results show that various PLMs are able to solve unseen
compositional inferences with good performance when they have encountered the
knowledge of how to combine primitives. However, they are considerably limited
when this knowledge is unknown to the model (a 40-100 percentage point
decrease). Furthermore, we find that PLMs can improve drastically once exposed
to crucial compositional knowledge in minimal shots. These findings position
SETI as the first benchmark for measuring the future progress of PLMs in
achieving systematicity generalization in textual inference.
|
[
"cs.CL"
] | false |
2305.15051
|
2023-05-24T11:41:33Z
|
A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event
Extraction
|
[
"Erica Cai",
"Brendan O'Connor"
] |
We consider dyadic zero-shot event extraction (EE) to identify actions
between pairs of actors. The \emph{zero-shot} setting allows social scientists
or other non-computational researchers to extract any customized,
user-specified set of events without training, resulting in a \emph{dyadic}
event database, allowing insight into sociopolitical relational dynamics among
actors and the higher level organizations or countries they represent.
Unfortunately, we find that current zero-shot EE methods perform poorly for the
task, with issues including word sense ambiguity, modality mismatch, and
efficiency. Straightforward application of large language model prompting
typically performs even worse. We address these challenges with a new
fine-grained, multi-stage generative question-answer method, using a Monte
Carlo approach to exploit and overcome the randomness of generative outputs. It
performs 90\% fewer queries than a previous approach, with strong performance
on the widely-used Automatic Content Extraction dataset. Finally, we extend our
method to extract affiliations of actor arguments and demonstrate our method
and findings on a dyadic international relations case study.
|
[
"cs.CL"
] | false |
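The Monte Carlo aggregation idea admits a compact sketch: sample a generative extractor several times and keep only answers that recur often enough, so one-off spurious generations are filtered out. The sampler and threshold below are illustrative, not the paper's settings.

```python
# Sketch of Monte Carlo aggregation over generative outputs: sample the
# extractor repeatedly and keep only answers recurring above a threshold.
from collections import Counter
import random

def monte_carlo_extract(sample_fn, n=20, min_frac=0.4):
    counts = Counter(sample_fn() for _ in range(n))
    return [ans for ans, c in counts.items() if ans and c / n >= min_frac]

# Stand-in sampler simulating a noisy generative extractor.
rng = random.Random(1)
outputs = ["Country A sanctioned Country B"] * 7 + ["", "B praised A", "A met B"]
print(monte_carlo_extract(lambda: rng.choice(outputs)))
```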
2305.15056
|
2023-05-24T11:45:59Z
|
Reasoning over Hierarchical Question Decomposition Tree for Explainable
Question Answering
|
[
"Jiajie Zhang",
"Shulin Cao",
"Tingjia Zhang",
"Xin Lv",
"Jiaxin Shi",
"Qi Tian",
"Juanzi Li",
"Lei Hou"
] |
Explainable question answering (XQA) aims to answer a given question and
provide an explanation why the answer is selected. Existing XQA methods focus
on reasoning on a single knowledge source, e.g., structured knowledge bases,
unstructured corpora, etc. However, integrating information from heterogeneous
knowledge sources is essential to answer complex questions. In this paper, we
propose to leverage question decomposing for heterogeneous knowledge
integration, by breaking down a complex question into simpler ones, and
selecting the appropriate knowledge source for each sub-question. To facilitate
reasoning, we propose a novel two-stage XQA framework, Reasoning over
Hierarchical Question Decomposition Tree (RoHT). First, we build the
Hierarchical Question Decomposition Tree (HQDT) to understand the semantics of
a complex question; then, we conduct probabilistic reasoning over HQDT from
root to leaves recursively, to aggregate heterogeneous knowledge at different
tree levels and search for a best solution considering the decomposing and
answering probabilities. The experiments on complex QA datasets KQA Pro and
Musique show that our framework outperforms SOTA methods significantly,
demonstrating the effectiveness of leveraging question decomposing for
knowledge integration and our RoHT framework.
|
[
"cs.CL"
] | false |
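A very rough reading of the bottom-up reasoning over the decomposition tree: leaves carry (answer, probability) candidates from some knowledge source, and an internal node combines its children's candidates while multiplying their probabilities. Aligning candidates by rank, as below, is a simplification of the real search.

```python
# Rough sketch of bottom-up probabilistic reasoning over a decomposition
# tree; rank-based candidate alignment simplifies the real search.
def solve(node):
    if "answers" in node:                       # leaf: answered directly
        return node["answers"]
    child_results = [solve(c) for c in node["children"]]
    candidates = []
    for combo in zip(*child_results):           # align top candidates by rank
        ans = node["combine"]([a for a, _ in combo])
        prob = 1.0
        for _, p in combo:
            prob *= p                           # aggregate answer probabilities
        candidates.append((ans, prob))
    return sorted(candidates, key=lambda x: -x[1])

tree = {
    "children": [
        {"answers": [("Paris", 0.9), ("Lyon", 0.1)]},
        {"answers": [("2.1M", 0.8)]},
    ],
    "combine": lambda parts: f"{parts[0]} has population {parts[1]}",
}
print(solve(tree)[0])
```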
2305.15098
|
2023-05-24T12:28:35Z
|
Referral Augmentation for Zero-Shot Information Retrieval
|
[
"Michael Tang",
"Shunyu Yao",
"John Yang",
"Karthik Narasimhan"
] |
We propose Referral-Augmented Retrieval (RAR), a simple technique that
concatenates document indices with referrals, i.e. text from other documents
that cite or link to the given document, to provide significant performance
gains for zero-shot information retrieval. The key insight behind our method is
that referrals provide a more complete, multi-view representation of a
document, much like incoming page links in algorithms like PageRank provide a
comprehensive idea of a webpage's importance. RAR works with both sparse and
dense retrievers, and outperforms generative text expansion techniques such as
DocT5Query and Query2Doc, with 37% and 21% absolute improvements in ACL paper
retrieval Recall@10 -- while also eliminating expensive model training and
inference. We also analyze different methods for multi-referral aggregation and
show that RAR enables up-to-date information retrieval without re-training.
|
[
"cs.CL"
] | false |
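The augmentation step itself is simple enough to sketch: concatenate each document with sentences from documents that cite it before indexing. The toy citation triples (citing_id, cited_id, sentence) below are illustrative.

```python
# Minimal sketch of referral augmentation: concatenate each document with
# sentences from documents that cite it before indexing.
def build_referral_index(docs, citations):
    index = {}
    for doc_id, text in docs.items():
        referrals = [sent for citing, cited, sent in citations if cited == doc_id]
        index[doc_id] = text + " " + " ".join(referrals)
    return index

docs = {"p1": "We introduce a span-based parser for nested entities."}
citations = [
    ("p2", "p1", "Prior work (p1) set the state of the art on nested NER."),
    ("p3", "p1", "We build on the span-based approach of p1."),
]
print(build_referral_index(docs, citations)["p1"])
```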
2305.15099
|
2023-05-24T12:33:06Z
|
Fourier Transformer: Fast Long Range Modeling by Removing Sequence
Redundancy with FFT Operator
|
[
"Ziwei He",
"Meng Yang",
"Minwei Feng",
"Jingcheng Yin",
"Xinbing Wang",
"Jingwen Leng",
"Zhouhan Lin"
] |
The transformer model is known to be computationally demanding, and
prohibitively costly for long sequences, as the self-attention module has
quadratic time and space complexity with respect to sequence length. Many
researchers have focused on designing new forms of self-attention or
introducing new parameters to overcome this limitation; however, a large
portion of these approaches prevent the model from inheriting weights from
large pretrained models. In this work, we address the transformer's
inefficiency from another perspective. We propose Fourier Transformer, a
simple yet effective approach that progressively removes redundancies in the
hidden sequence using the ready-made Fast Fourier Transform (FFT) operator to
perform the Discrete Cosine Transform (DCT). Fourier Transformer significantly
reduces computational costs while retaining the ability to inherit from
various large pretrained models.
Experiments show that our model achieves state-of-the-art performances among
all transformer-based models on the long-range modeling benchmark LRA with
significant improvement in both speed and space. For generative seq-to-seq
tasks including CNN/DailyMail and ELI5, by inheriting the BART weights our
model outperforms the standard BART and other efficient models. \footnote{Our
code is publicly available at
\url{https://github.com/LUMIA-Group/FourierTransformer}}
|
[
"cs.CL"
] | false |
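A single-step toy of the compression idea follows: transform the hidden sequence along its length with an orthonormal DCT-II and keep only the lowest `keep` frequency components, yielding a shorter sequence for later layers. The paper computes the DCT with the FFT operator and applies the reduction progressively across layers; the explicit cosine matrix here is only for clarity.

```python
# Single-step toy of the compression idea: DCT-II along the sequence axis,
# then keep only the lowest `keep` frequency components.
import numpy as np

def dct_truncate(hidden, keep):
    n, _ = hidden.shape
    k = np.arange(n)[:, None]                   # output frequency index
    m = np.arange(n)[None, :]                   # input position index
    basis = np.cos(np.pi * (2 * m + 1) * k / (2 * n))   # DCT-II basis
    basis[0] *= 1 / np.sqrt(2)                  # orthonormal scaling
    basis *= np.sqrt(2 / n)
    return (basis @ hidden)[:keep]              # shorter "sequence" of coefficients

x = np.random.default_rng(0).normal(size=(16, 8))       # (seq_len, d_model)
print(dct_truncate(x, keep=4).shape)                    # -> (4, 8)
```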
2305.15108
|
2023-05-24T12:55:04Z
|
The Role of Output Vocabulary in T2T LMs for SPARQL Semantic Parsing
|
[
"Debayan Banerjee",
"Pranav Ajit Nair",
"Ricardo Usbeck",
"Chris Biemann"
] |
In this work, we analyse the role of output vocabulary for text-to-text (T2T)
models on the task of SPARQL semantic parsing. We perform experiments within
the context of knowledge graph question answering (KGQA), where the task is
to convert questions in natural language to the SPARQL query language. We
observe that the query vocabulary is distinct from human vocabulary. Language
Models (LMs) are predominantly trained for human language tasks, and hence, if
the query vocabulary is replaced with a vocabulary more attuned to the LM
tokenizer, the performance of models may improve. We carry out carefully
selected vocabulary substitutions on the queries and find absolute gains in the
range of 17% on the GrailQA dataset.
|
[
"cs.CL"
] | false |
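The substitution idea can be illustrated with a toy mapping from SPARQL-specific symbols to words a text-pretrained tokenizer handles gracefully. The mapping below is made up for the example; the paper's substitutions are carefully selected, and in practice the mapping must be invertible to recover executable queries.

```python
# Illustrative vocabulary substitution: map SPARQL-specific symbols to
# tokenizer-friendly words (made-up mapping, not the paper's exact set).
SUBS = {"SELECT": "select", "WHERE": "where", "{": "open brace",
        "}": "close brace", "?": "var ", ".": "dot"}

def to_lm_friendly(query):
    for sym, word in SUBS.items():
        query = query.replace(sym, f" {word} ")
    return " ".join(query.split())

q = "SELECT ?x WHERE { ?x <dbo:capital> <dbr:France> . }"
print(to_lm_friendly(q))
```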
2305.15119
|
2023-05-24T13:11:04Z
|
Another Dead End for Morphological Tags? Perturbed Inputs and Parsing
|
[
"Alberto Muñoz-Ortiz",
"David Vilares"
] |
The usefulness of part-of-speech tags for parsing has been heavily questioned
due to the success of word-contextualized parsers. Yet, most studies are
limited to coarse-grained tags and high quality written content; while we know
little about their influence when it comes to models in production that face
lexical errors. We expand these setups and design an adversarial attack to
verify whether the use of morphological information by parsers (i) contributes
to error propagation or (ii) can instead help correct mistakes that word-only
neural parsers make. The results on 14 diverse UD
treebanks show that under such attacks, for transition- and graph-based models
their use contributes to degrade the performance even faster, while for the
(lower-performing) sequence labeling parsers they are helpful. We also show
that if morphological tags were utopically robust against lexical
perturbations, they would be able to correct parsing mistakes.
|
[
"cs.CL"
] | false |
2305.15175
|
2023-05-24T14:06:27Z
|
Pre-training Multi-party Dialogue Models with Latent Discourse Inference
|
[
"Yiyang Li",
"Xinting Huang",
"Wei Bi",
"Hai Zhao"
] |
Multi-party dialogues are more difficult for models to understand than
one-to-one two-party dialogues, since they involve multiple interlocutors,
resulting in interweaving reply-to relations and information flows. An
effective way to overcome these obstacles is to pre-train a model that
understands the discourse structure of multi-party dialogues, namely, to whom
each utterance is replying. However, due to the lack of explicitly annotated
discourse labels in multi-party dialogue corpora, previous works fail to scale
up the pre-training process, leaving the unlabeled multi-party conversational
data unused. To fully utilize the unlabeled data, we
propose to treat the discourse structures as latent variables, then jointly
infer them and pre-train the discourse-aware model by unsupervised latent
variable inference methods. Experiments on multiple downstream tasks show that
our pre-trained model outperforms strong baselines by large margins and
achieves state-of-the-art (SOTA) results, justifying the effectiveness of our
method. The official implementation of this paper is available at
https://github.com/EricLee8/MPD_EMVI.
|
[
"cs.CL"
] | false |
2305.15183
|
2023-05-24T14:18:52Z
|
Are Pre-trained Language Models Useful for Model Ensemble in Chinese
Grammatical Error Correction?
|
[
"Chenming Tang",
"Xiuyu Wu",
"Yunfang Wu"
] |
Model ensemble has been in widespread use for Grammatical Error Correction
(GEC), boosting model performance. We hypothesize that model ensemble based on
the perplexity (PPL) computed by pre-trained language models (PLMs) should
benefit the GEC system. To this end, we explore several ensemble strategies
based on strong PLMs with four sophisticated single models. However, the
performance does not improve but even gets worse after the PLM-based ensemble.
This surprising result prompts us to conduct a detailed analysis of the data,
yielding some insights on GEC: the human references of correct sentences are
far from sufficient in the test data, and the gap between a correct sentence
and an idiomatic one deserves attention. Moreover, the PLM-based ensemble
strategies provide an effective way to extend and improve GEC benchmark data.
Our source code is available at
https://github.com/JamyDon/PLM-based-CGEC-Model-Ensemble.
|
[
"cs.CL"
] | false |
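The hypothesized PPL-based ensemble is easy to sketch: each single GEC model proposes a corrected sentence, and the candidate with the lowest perplexity under a scorer wins. A toy unigram table stands in for the pre-trained language model below.

```python
# Sketch of perplexity-based ensembling: pick the candidate correction the
# scorer finds most fluent. A toy unigram table stands in for the PLM.
import math

LOGP = {"the": -2.0, "cat": -6.0, "sit": -7.5, "sits": -7.0,
        "on": -3.0, "mat": -8.0}

def perplexity(sentence):
    logps = [LOGP.get(w, -12.0) for w in sentence.lower().split()]
    return math.exp(-sum(logps) / len(logps))

def ensemble_by_ppl(candidates):
    return min(candidates, key=perplexity)

print(ensemble_by_ppl(["the cat sit on the mat",
                       "the cat sits on the mat"]))  # picks the fluent one
```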
2305.15212
|
2023-05-24T14:51:01Z
|
Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model
Fine-tuning
|
[
"Zhen-Ru Zhang",
"Chuanqi Tan",
"Haiyang Xu",
"Chengyu Wang",
"Jun Huang",
"Songfang Huang"
] |
Fine-tuning all parameters of large pre-trained language models on various
downstream tasks is prohibitively expensive. Hence, parameter-efficient
fine-tuning, which optimizes only a few task-specific parameters while keeping
the pre-trained model frozen, has attracted attention. In this work, we focus
on prefix tuning, which only optimizes continuous prefix vectors (i.e. pseudo
tokens) inserted into Transformer layers. Based on the observation that the
learned syntactic and semantic representations vary considerably across
layers, we argue that an adaptive prefix, tailored to each layer, should work
better than a fixed one, making fine-tuning more effective and efficient.
Thus, we propose
Adaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained
token level and coarse-grained layer level with a gate mechanism. Experiments
on the SuperGLUE and NER datasets show the effectiveness of APT. In addition,
using the gate as a probe, we validate the efficiency and effectiveness of the
variable prefix.
|
[
"cs.CL"
] | false |
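A rough sketch of the gating idea, not the authors' exact parameterization: scale a layer's prefix vectors by a coarse layer-level gate and fine-grained token-level gates, both squashed through a sigmoid.

```python
# Rough sketch of gated adaptive prefixes (illustrative parameterization):
# coarse layer-level gate times fine-grained per-token gates.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_prefix(prefix, layer_gate_logit, token_gate_logits):
    layer_g = sigmoid(layer_gate_logit)             # scalar gate for this layer
    token_g = sigmoid(token_gate_logits)[:, None]   # one gate per pseudo token
    return layer_g * token_g * prefix

rng = np.random.default_rng(0)
prefix = rng.normal(size=(4, 8))                    # 4 pseudo tokens, dim 8
out = adaptive_prefix(prefix, 0.5, rng.normal(size=4))
print(out.shape)                                    # -> (4, 8)
```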
2305.15262
|
2023-05-24T15:48:29Z
|
Revisiting Parallel Context Windows: A Frustratingly Simple Alternative
and Chain-of-Thought Deterioration
|
[
"Kejuan Yang",
"Xiao Liu",
"Kaiwen Men",
"Aohan Zeng",
"Yuxiao Dong",
"Jie Tang"
] |
We identify two crucial limitations in the evaluation of recent
parallel-integrated method Parallel Context Windows (PCW), which extends the
maximum context lengths of language models, e.g., 2048 for LLaMA, by harnessing
window-wise attention and positional embedding techniques. We first show that a
simple yet strong baseline, weighted sum ensemble, is missing from the
evaluation of in-context few-shot classification. Moreover, on more challenging
Chain-of-Thought (CoT) reasoning (e.g., HotpotQA), PCW would present unexpected
deterioration regarding question miscomprehension and false inference. Based on
our findings, we suggest that the existing PCW design may not guarantee
sufficient improvement and practicality in handling lengthy documents in
real-world applications. More community effort should be devoted to enabling
language models' long-context understanding ability.
|
[
"cs.CL"
] | false |
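The baseline in question can be sketched in a few lines: run the model on each context window independently, then take a (weighted) average of the per-window label distributions instead of attending across windows. The uniform weights and toy distributions below are illustrative.

```python
# Sketch of the weighted-sum ensemble baseline: average per-window label
# distributions instead of attending across windows.
import numpy as np

def weighted_sum_ensemble(window_probs, weights=None):
    probs = np.asarray(window_probs)     # (n_windows, n_labels)
    if weights is None:
        weights = np.ones(len(probs))
    return np.average(probs, axis=0, weights=weights)

per_window = [[0.7, 0.3], [0.4, 0.6], [0.8, 0.2]]
print(weighted_sum_ensemble(per_window))  # ensembled label distribution
```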
2305.15273
|
2023-05-24T15:59:44Z
|
Revisiting Token Dropping Strategy in Efficient BERT Pretraining
|
[
"Qihuang Zhong",
"Liang Ding",
"Juhua Liu",
"Xuebo Liu",
"Min Zhang",
"Bo Du",
"Dacheng Tao"
] |
Token dropping is a recently-proposed strategy to speed up the pretraining of
masked language models, such as BERT, by skipping the computation of a subset
of the input tokens at several middle layers. It can effectively reduce the
training time without degrading much performance on downstream tasks. However,
we empirically find that token dropping is prone to a semantic loss problem and
falls short in handling semantics-intensive tasks. Motivated by this, we propose a
simple yet effective semantic-consistent learning method (ScTD) to improve the
token dropping. ScTD aims to encourage the model to learn how to preserve the
semantic information in the representation space. Extensive experiments on 12
tasks show that, with the help of our ScTD, token dropping can achieve
consistent and significant performance gains across all task types and model
sizes. More encouragingly, ScTD saves up to 57% of pretraining time and brings
up to +1.56% average improvement over the vanilla token dropping.
|
[
"cs.CL"
] | false |
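The token dropping mechanism itself admits a small sketch: at a middle layer, keep only the most important tokens so later layers see a shorter sequence (dropped tokens are reinstated near the output in the original method). Importance scores are taken as given here; in practice they come from training signals.

```python
# Sketch of token dropping at a middle layer: keep the top-k important
# tokens (importance taken as given) so later layers process fewer tokens.
import numpy as np

def drop_tokens(hidden, importance, keep_ratio=0.5):
    n = hidden.shape[0]
    k = max(1, int(n * keep_ratio))
    keep_idx = np.sort(np.argsort(-importance)[:k])  # top-k, order preserved
    return hidden[keep_idx], keep_idx

h = np.random.default_rng(0).normal(size=(8, 4))     # (seq_len, d_model)
imp = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])
kept, idx = drop_tokens(h, imp)
print(idx, kept.shape)                               # kept indices, (4, 4)
```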
2305.15275
|
2023-05-24T16:00:54Z
|
Self-Evolution Learning for Discriminative Language Model Pretraining
|
[
"Qihuang Zhong",
"Liang Ding",
"Juhua Liu",
"Bo Du",
"Dacheng Tao"
] |
Masked language modeling, widely used in discriminative language model (e.g.,
BERT) pretraining, commonly adopts a random masking strategy. However, random
masking does not consider the importance of different words to the sentence
meaning, even though some words are more worth predicting than others. Therefore, various
masking strategies (e.g., entity-level masking) are proposed, but most of them
require expensive prior knowledge and generally train from scratch without
reusing existing model weights. In this paper, we present Self-Evolution
learning (SE), a simple and effective token masking and learning method to
fully and wisely exploit the knowledge from data. SE focuses on learning the
informative yet under-explored tokens and adaptively regularizes the training
by introducing a novel Token-specific Label Smoothing approach. Experiments on
10 tasks show that our SE brings consistent and significant improvements
(+1.43~2.12 average scores) upon different PLMs. In-depth analyses demonstrate
that SE improves linguistic knowledge learning and generalization.
|
[
"cs.CL"
] | false |
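A rough sketch of token-specific label smoothing: blend the hard MLM target with the model's own current prediction for that token, so the smoothing adapts per token. The exact interpolation in the paper may differ; `alpha` and the toy distributions are illustrative.

```python
# Rough sketch of token-specific label smoothing: interpolate the hard MLM
# target with the model's own prediction for that token.
import numpy as np

def token_specific_label_smoothing(one_hot, model_probs, alpha=0.1):
    return (1 - alpha) * one_hot + alpha * model_probs

V = 5
one_hot = np.eye(V)[2]                                # gold token id = 2
model_probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])     # model's prediction
print(token_specific_label_smoothing(one_hot, model_probs))
```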