arxiv_id (string, length 10) | published (string, length 20) | titles (string, 9-243 chars) | authors (list, 1-389 names) | abstract (string, 96-3.09k chars) | categories (list, 1-10 tags) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2305.15165
|
2023-05-24T13:56:57Z
|
Personalized DP-SGD using Sampling Mechanisms
|
[
"Geon Heo",
"Junseok Seo",
"Steven Euijong Whang"
] |
Personalized privacy is becoming critical in deep learning for Trustworthy AI.
While Differentially Private Stochastic Gradient Descent (DP-SGD) is widely
used in deep learning methods supporting privacy, it provides the same level of
privacy to all individuals, which may lead to overprotection and low utility.
In practice, different users may require different privacy levels, and the
model can be improved by using more information about the users with lower
privacy requirements. There are also recent works on differential privacy of
individuals when using DP-SGD, but they are mostly about individual privacy
accounting and do not focus on satisfying different privacy levels. We thus
extend DP-SGD to support a recent privacy notion called
($\Phi$,$\Delta$)-Personalized Differential Privacy (($\Phi$,$\Delta$)-PDP),
which extends an existing PDP concept called $\Phi$-PDP. Our algorithm uses a
multi-round personalized sampling mechanism and embeds it within the DP-SGD
iterations. Experiments on real datasets show that our algorithm outperforms
DP-SGD and simple combinations of DP-SGD with existing PDP mechanisms in terms
of model performance and efficiency due to its embedded sampling mechanism.
|
[
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
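As a rough sketch of the mechanism this abstract describes, the snippet below combines standard DP-SGD (per-example clipping plus Gaussian noise) with a per-user sampling probability scaled by each user's privacy budget. The scaling rule, names, and hyperparameters are our assumptions, not the authors' algorithm:

```python
import numpy as np

def personalized_sample(epsilons, base_rate, eps_min):
    """Hypothetical personalized sampler: users with a larger epsilon (weaker
    privacy requirement) are included in the batch with higher probability."""
    probs = np.clip(base_rate * epsilons / eps_min, 0.0, 1.0)
    mask = np.random.rand(len(epsilons)) < probs
    return np.nonzero(mask)[0]

def dp_sgd_step(params, grad_fn, data, epsilons, lr=0.1,
                clip=1.0, noise_mult=1.0, base_rate=0.01, eps_min=1.0):
    """One DP-SGD step with the personalized sampling sketched above."""
    idx = personalized_sample(epsilons, base_rate, eps_min)
    if len(idx) == 0:
        return params
    clipped = []
    for i in idx:
        g = grad_fn(params, data[i])                 # per-example gradient
        g = g * min(1.0, clip / np.linalg.norm(g))   # clip to norm <= clip
        clipped.append(g)
    g_sum = np.sum(clipped, axis=0)
    g_sum += np.random.normal(0.0, noise_mult * clip, size=g_sum.shape)  # DP noise
    return params - lr * g_sum / len(idx)
```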
2305.15188
|
2023-05-24T14:27:22Z
|
Policy Learning based on Deep Koopman Representation
|
[
"Wenjian Hao",
"Paulo C. Heredia",
"Bowen Huang",
"Zehui Lu",
"Zihao Liang",
"Shaoshuai Mou"
] |
This paper proposes a policy learning algorithm based on Koopman operator
theory and the policy gradient approach, which seeks to approximate an unknown
dynamical system and search for an optimal policy simultaneously, using the
observations gathered through interaction with the environment. The proposed
algorithm has two innovations: first, it introduces the so-called deep Koopman
representation into the policy gradient to achieve a linear approximation of
the unknown dynamical system, all with the purpose of improving data
efficiency; second, the accumulated errors for long-term tasks induced by
approximating system dynamics are avoided by applying Bellman's principle of
optimality. Furthermore, a theoretical analysis is provided to prove the
asymptotic convergence of the proposed algorithm and characterize the
corresponding sample complexity. These conclusions are also supported by
simulations on several challenging benchmark environments.
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
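A minimal sketch of the deep Koopman representation mentioned above: a learned lifting function maps states into a latent space where the dynamics are modeled by linear operators, trained on one-step prediction error. The architecture and dimensions are illustrative assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class DeepKoopman(nn.Module):
    """Learned lifting phi plus linear latent dynamics:
    phi(x_next) ~ K phi(x) + B u."""

    def __init__(self, state_dim, action_dim, latent_dim=32):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, latent_dim),
        )
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # state operator
        self.B = nn.Linear(action_dim, latent_dim, bias=False)  # control operator

    def forward(self, x, u):
        return self.K(self.phi(x)) + self.B(u)  # predicted phi(x_next)

def koopman_loss(model, x, u, x_next):
    # One-step prediction error measured in the lifted (latent) space.
    return ((model(x, u) - model.phi(x_next)) ** 2).mean()
```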
2305.15193
|
2023-05-24T14:31:11Z
|
Adaptive Policy Learning to Additional Tasks
|
[
"Wenjian Hao",
"Zehui Lu",
"Zihao Liang",
"Tianyu Zhou",
"Shaoshuai Mou"
] |
This paper develops a policy learning method for tuning a pre-trained policy
to adapt to additional tasks without altering the original task. A method named
Adaptive Policy Gradient (APG) is proposed in this paper, which combines
Bellman's principle of optimality with the policy gradient approach to improve
the convergence rate. This paper provides a theoretical analysis that guarantees
the convergence rate and sample complexity of $\mathcal{O}(1/T)$ and
$\mathcal{O}(1/\epsilon)$, respectively, where $T$ denotes the number of
iterations and $\epsilon$ denotes the accuracy of the resulting stationary
policy. Furthermore, several challenging numerical simulations, including
cartpole, lunar lander, and robot arm, are provided to show that APG obtains
similar performance compared to existing deterministic policy gradient methods
while utilizing much less data and converging at a faster rate.
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2305.15203
|
2023-05-24T14:40:23Z
|
Relating Implicit Bias and Adversarial Attacks through Intrinsic
Dimension
|
[
"Lorenzo Basile",
"Nikos Karantzas",
"Alberto D'Onofrio",
"Luca Bortolussi",
"Alex Rodriguez",
"Fabio Anselmi"
] |
Despite their impressive performance in classification, neural networks are
known to be vulnerable to adversarial attacks. These attacks are small
perturbations of the input data designed to fool the model. Naturally, a
question arises regarding the potential connection between the architecture,
settings, or properties of the model and the nature of the attack. In this
work, we aim to shed light on this problem by focusing on the implicit bias of
the neural network, which refers to its inherent inclination to favor specific
patterns or outcomes. Specifically, we investigate one aspect of the implicit
bias, which involves the essential Fourier frequencies required for accurate
image classification. We conduct tests to assess the statistical relationship
between these frequencies and those necessary for a successful attack. To delve
into this relationship, we propose a new method that can uncover non-linear
correlations between sets of coordinates, which, in our case, are the
aforementioned frequencies. By exploiting the entanglement between intrinsic
dimension and correlation, we provide empirical evidence that the network bias
in Fourier space and the target frequencies of adversarial attacks are closely
tied.
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"stat.ML"
] | false |
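To make the intrinsic-dimension argument concrete, here is a small sketch using the TwoNN estimator (Facco et al., 2017); the `id_correlation` score is a hypothetical reading of the abstract's idea that strong dependence between two sets of coordinates keeps the intrinsic dimension of their joint space low, and may differ from the paper's actual measure:

```python
import numpy as np
from scipy.spatial import cKDTree

def twonn_id(X):
    """TwoNN intrinsic-dimension estimator, MLE form:
    d = N / sum_i log(r2_i / r1_i), where r1, r2 are distances to each
    point's first and second nearest neighbors."""
    dist, _ = cKDTree(X).query(X, k=3)   # k=3: self + two nearest neighbors
    mu = dist[:, 2] / dist[:, 1]
    return len(X) / np.sum(np.log(mu))

def id_correlation(X, Y):
    """Hypothetical dependence score: if X and Y are strongly correlated, the
    ID of the joint space [X, Y] stays near max(ID(X), ID(Y)) instead of
    growing toward ID(X) + ID(Y)."""
    dx, dy = twonn_id(X), twonn_id(Y)
    dxy = twonn_id(np.hstack([X, Y]))
    return (dx + dy - dxy) / min(dx, dy)
```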
2305.15264
|
2023-05-24T15:52:07Z
|
Error Feedback Shines when Features are Rare
|
[
"Peter Richtárik",
"Elnur Gasanov",
"Konstantin Burlachenko"
] |
We provide the first proof that gradient descent $\left({\color{green}\sf
GD}\right)$ with greedy sparsification $\left({\color{green}\sf TopK}\right)$
and error feedback $\left({\color{green}\sf EF}\right)$ can obtain better
communication complexity than vanilla ${\color{green}\sf GD}$ when solving the
distributed optimization problem $\min_{x\in \mathbb{R}^d}
{f(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)}$, where $n$ = # of clients, $d$ = # of
features, and $f_1,\dots,f_n$ are smooth nonconvex functions. Despite intensive
research since 2014 when ${\color{green}\sf EF}$ was first proposed by Seide et
al., this problem remained open until now. We show that ${\color{green}\sf EF}$
shines in the regime when features are rare, i.e., when each feature is present
in the data owned by a small number of clients only. To illustrate our main
result, we show that in order to find a random vector $\hat{x}$ such that
$\lVert {\nabla f(\hat{x})} \rVert^2 \leq \varepsilon$ in expectation,
${\color{green}\sf GD}$ with the ${\color{green}\sf Top1}$ sparsifier and
${\color{green}\sf EF}$ requires ${\cal O} \left(\left( L+{\color{blue}r}
\sqrt{ \frac{{\color{red}c}}{n} \min \left( \frac{{\color{red}c}}{n} \max_i
L_i^2, \frac{1}{n}\sum_{i=1}^n L_i^2 \right) }\right) \frac{1}{\varepsilon}
\right)$ bits to be communicated by each worker to the server only, where $L$
is the smoothness constant of $f$, $L_i$ is the smoothness constant of $f_i$,
${\color{red}c}$ is the maximal number of clients owning any feature ($1\leq
{\color{red}c} \leq n$), and ${\color{blue}r}$ is the maximal number of
features owned by any client ($1\leq {\color{blue}r} \leq d$). Clearly, the
communication complexity improves as ${\color{red}c}$ decreases (i.e., as
features become more rare), and can be much better than the ${\cal
O}({\color{blue}r} L \frac{1}{\varepsilon})$ communication complexity of
${\color{green}\sf GD}$ in the same regime.
|
[
"math.OC",
"cs.DC",
"cs.LG",
"stat.ML"
] | false |
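For reference, a compact sketch of the GD + TopK + EF scheme the result concerns: each client compresses its error-corrected gradient with a greedy TopK sparsifier and keeps whatever it failed to transmit in a local buffer, which is added back before the next compression. The loop structure follows the standard EF14-style template; hyperparameters are illustrative:

```python
import numpy as np

def topk(v, k):
    """Greedy sparsifier: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_gd(grad_fns, x0, lr, k=1, steps=1000):
    """Distributed GD with TopK compression and error feedback."""
    x = x0.copy()
    err = [np.zeros_like(x0) for _ in grad_fns]   # one error buffer per client
    for _ in range(steps):
        msgs = []
        for i, grad in enumerate(grad_fns):
            g = err[i] + lr * grad(x)    # correct with accumulated error
            m = topk(g, k)               # the only vector actually communicated
            err[i] = g - m               # remember what was dropped
            msgs.append(m)
        x -= np.mean(msgs, axis=0)       # server averages and applies
    return x
```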
2305.15557
|
2023-05-24T20:43:47Z
|
Non-Parametric Learning of Stochastic Differential Equations with Fast
Rates of Convergence
|
[
"Riccardo Bonalli",
"Alessandro Rudi"
] |
We propose a novel non-parametric learning paradigm for the identification of
drift and diffusion coefficients of non-linear stochastic differential
equations, which relies upon discrete-time observations of the state. The key
idea essentially consists of fitting a RKHS-based approximation of the
corresponding Fokker-Planck equation to such observations, yielding theoretical
estimates of learning rates which, unlike previous works, become increasingly
tighter when the regularity of the unknown drift and diffusion coefficients
becomes higher. Because our method is kernel-based, offline pre-processing may
in principle be profitably leveraged to enable an efficient numerical
implementation.
|
[
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC"
] | false |
2306.09247
|
2023-05-24T05:27:22Z
|
ATLAS: Automatically Detecting Discrepancies Between Privacy Policies
and Privacy Labels
|
[
"Akshath Jain",
"David Rodriguez",
"Jose M. del Alamo",
"Norman Sadeh"
] |
Privacy policies are long, complex documents that end-users seldom read.
Privacy labels aim to ameliorate these issues by providing succinct summaries
of salient data practices. In December 2020, Apple began requiring that app
developers submit privacy labels describing their apps' data practices. Yet,
research suggests that app developers often struggle to do so. In this paper,
we automatically identify possible discrepancies between mobile app privacy
policies and their privacy labels. Such discrepancies could be indicators of
potential privacy compliance issues.
We introduce the Automated Privacy Label Analysis System (ATLAS). ATLAS
includes three components: a pipeline to systematically retrieve iOS App Store
listings and privacy policies; an ensemble-based classifier capable of
predicting privacy labels from the text of privacy policies with 91.3% accuracy
using state-of-the-art NLP techniques; and a discrepancy analysis mechanism
that enables a large-scale privacy analysis of the iOS App Store.
Our system has enabled us to analyze 354,725 iOS apps. We find several
interesting trends. For example, only 40.3% of apps in the App Store provide
easily accessible privacy policies, and only 29.6% of apps provide both
accessible privacy policies and privacy labels. Among apps that provide both,
88.0% have at least one possible discrepancy between the text of their privacy
policy and their privacy label, which could be indicative of a potential
compliance issue. We find that, on average, apps have 5.32 such potential
compliance issues.
We hope that ATLAS will help app developers, researchers, regulators, and
mobile app stores alike. For example, app developers could use our classifier
to check for discrepancies between their privacy policies and privacy labels,
and regulators could use our system to help review apps at scale for potential
compliance issues.
|
[
"cs.CR",
"cs.AI",
"cs.LG"
] | false |
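The abstract does not specify the classifier's internals, so the following is only a hypothetical baseline in the same spirit: a multi-label text classifier mapping privacy-policy text to privacy-label categories (here TF-IDF plus one-vs-rest logistic regression on toy data, nowhere near the paper's 91.3% ensemble):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-ins for (policy text, privacy-label categories) pairs.
policies = [
    "We collect your email address and location to provide the service.",
    "No personal data is collected or shared with third parties.",
]
labels = [{"contact_info", "location"}, set()]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(policies, Y)

# Predicted label categories can then be compared against the developer's
# declared privacy label to flag possible discrepancies.
pred = clf.predict(["We share your location with advertising partners."])
print(mlb.inverse_transform(pred))
```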
2305.14632
|
2023-05-24T02:09:28Z
|
Supermodular Rank: Set Function Decomposition and Optimization
|
[
"Rishi Sonthalia",
"Anna Seigal",
"Guido Montufar"
] |
We define the supermodular rank of a function on a lattice. This is the
smallest number of terms needed to decompose it into a sum of supermodular
functions. The supermodular summands are defined with respect to different
partial orders. We characterize the maximum possible value of the supermodular
rank and describe the functions with fixed supermodular rank. We analogously
define the submodular rank. We use submodular decompositions to optimize set
functions. Given a bound on the submodular rank of a set function, we formulate
an algorithm that splits an optimization problem into submodular subproblems.
We show that this method improves the approximation ratio guarantees of several
algorithms for monotone set function maximization and ratio of set functions
minimization, at a computation overhead that depends on the submodular rank.
|
[
"math.CO",
"cs.CC",
"cs.DM",
"cs.LG",
"math.OC"
] | false |
2305.15602
|
2023-05-24T22:22:19Z
|
Control invariant set enhanced safe reinforcement learning: improved
sampling efficiency, guaranteed stability and robustness
|
[
"Song Bo",
"Bernard T. Agyeman",
"Xunyuan Yin",
"Jinfeng Liu"
] |
Reinforcement learning (RL) is an area of significant research interest, and
safe RL in particular is attracting attention due to its ability to handle
safety-driven constraints that are crucial for real-world applications. This
work proposes a novel approach to RL training, called control invariant set
(CIS) enhanced RL, which leverages the advantages of utilizing the explicit
form of CIS to improve stability guarantees and sampling efficiency.
Furthermore, the robustness of the proposed approach is investigated in the
presence of uncertainty. The approach consists of two learning stages: offline
and online. In the offline stage, CIS is incorporated into the reward design,
initial state sampling, and state reset procedures. This incorporation of CIS
facilitates improved sampling efficiency during the offline training process.
In the online stage, a Safety Supervisor is introduced to examine the safety of
each action and make necessary corrections, and RL is retrained whenever the
predicted next-step state falls outside the CIS, which serves as a stability
criterion. The stability analysis is conducted for both cases, with and
without uncertainty. To evaluate the proposed approach, we apply it to a
simulated chemical reactor. The results show a significant improvement in
sampling efficiency during offline training and closed-loop stability guarantee
in the online implementation, with and without uncertainty.
|
[
"eess.SY",
"cs.AI",
"cs.LG",
"cs.SY",
"math.DS"
] | false |
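A toy illustration of the online Safety Supervisor logic described above, using a one-dimensional system with the interval [-1, 1] as the CIS; the dynamics, the correction rule, and the retraining flag are our simplifications, not the paper's controller:

```python
import numpy as np

def step(x, u):                      # toy 1-D dynamics: x_{t+1} = x + u
    return x + u

def in_cis(x, lo=-1.0, hi=1.0):      # toy control invariant set: an interval
    return lo <= x <= hi

def supervised_action(x, u_rl):
    """If the predicted next state leaves the CIS, correct the action (here
    by saturating so the successor lands on the CIS boundary) and raise a
    flag that would trigger RL retraining in the paper's scheme."""
    if in_cis(step(x, u_rl)):
        return u_rl, False
    u_safe = np.clip(step(x, u_rl), -1.0, 1.0) - x   # pull back to the boundary
    return u_safe, True

x, u = 0.8, 0.5                       # the RL agent proposes an unsafe move to 1.3
u_exec, retrain = supervised_action(x, u)
print(u_exec, retrain)                # 0.2 True -> next state 1.0, stays safe
```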
2305.18332
|
2023-05-24T16:08:55Z
|
Reconfigurable Distributed FPGA Cluster Design for Deep Learning
Accelerators
|
[
"Hans Johnson",
"Tianyang Fang",
"Alejandro Perez-Vicente",
"Jafar Saniie"
] |
We propose a distributed system based on low-power embedded FPGAs designed for
edge computing applications, focused on exploring distributed scheduling
optimizations for Deep Learning (DL) workloads to obtain the best performance
in terms of latency and power efficiency. Our cluster was modular throughout
the experiment: our implementations consist of up to 12 Zynq-7020 chip-based
boards as well as 5 UltraScale+ MPSoC FPGA boards connected through an Ethernet
switch, and the cluster evaluates the Versatile Tensor Accelerator (VTA), a
configurable Deep Learning Accelerator (DLA). This adaptable
distributed architecture is distinguished by its capacity to evaluate and
manage neural network workloads in numerous configurations which enables users
to conduct multiple experiments tailored to their specific application needs.
The proposed system can simultaneously execute diverse Neural Network (NN)
models, arrange the computation graph in a pipeline structure, and manually
allocate greater resources to the most computationally intensive layers of the
NN graph.
|
[
"cs.DC",
"cs.AR",
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2305.15445
|
2023-05-24T08:47:01Z
|
Deep Learning-enabled MCMC for Probabilistic State Estimation in
District Heating Grids
|
[
"Andreas Bott",
"Tim Janke",
"Florian Steinke"
] |
Flexible district heating grids form an important part of future, low-carbon
energy systems. We examine probabilistic state estimation in such grids, i.e.,
we aim to estimate the posterior probability distribution over all grid state
variables such as pressures, temperatures, and mass flows conditional on
measurements of a subset of these states. Since the posterior state
distribution does not belong to a standard class of probability distributions,
we use Markov Chain Monte Carlo (MCMC) sampling in the space of network heat
exchanges and evaluate the samples in the grid state space to estimate the
posterior. Converting the heat exchange samples into grid states by solving the
non-linear grid equations makes this approach computationally burdensome.
However, we propose to speed it up by employing a deep neural network that is
trained to approximate the solution of the exact but slow non-linear solver.
This novel approach is shown to deliver highly accurate posterior distributions
both for classic tree-shaped as well as meshed heating grids, at significantly
reduced computational costs that are acceptable for online control. Our state
estimation approach thus enables tightening the safety margins for temperature
and pressure control and thereby a more efficient grid operation.
|
[
"cs.LG",
"cs.NA",
"cs.SY",
"eess.SY",
"math.NA",
"stat.ME",
"62G05",
"I.6.4"
] | false |
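A minimal sketch of the sampling loop described above: random-walk Metropolis in the space of heat exchanges, with a fast surrogate (standing in for the trained neural network) replacing the exact non-linear grid solver inside the likelihood. The Gaussian measurement model and all names are assumptions:

```python
import numpy as np

def metropolis_with_surrogate(surrogate, log_prior, y_obs, H, sigma,
                              q0, n_steps=5000, prop_std=0.1):
    """Random-walk Metropolis over heat exchanges q; `surrogate` maps q to the
    full grid state in place of the slow non-linear solver."""
    def log_lik(s):                       # y = H s + Gaussian noise
        r = y_obs - H @ s
        return -0.5 * np.dot(r, r) / sigma**2

    q = q0.copy()
    state = surrogate(q)
    lp = log_prior(q) + log_lik(state)
    samples = []
    for _ in range(n_steps):
        q_new = q + prop_std * np.random.randn(*q.shape)
        s_new = surrogate(q_new)
        lp_new = log_prior(q_new) + log_lik(s_new)
        if np.log(np.random.rand()) < lp_new - lp:   # Metropolis accept
            q, state, lp = q_new, s_new, lp_new
        samples.append(state)              # posterior samples in grid-state space
    return np.array(samples)
```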
2305.15660
|
2023-05-25T02:13:37Z
|
Zero-shot Generation of Training Data with Denoising Diffusion
Probabilistic Model for Handwritten Chinese Character Recognition
|
[
"Dongnan Gui",
"Kai Chen",
"Haisong Ding",
"Qiang Huo"
] |
There are more than 80,000 character categories in Chinese while most of them
are rarely used. To build a high performance handwritten Chinese character
recognition (HCCR) system supporting the full character set with a traditional
approach, many training samples need to be collected for each character category,
which is both time-consuming and expensive. In this paper, we propose a novel
approach to transforming Chinese character glyph images generated from font
libraries to handwritten ones with a denoising diffusion probabilistic model
(DDPM). Trained on handwritten samples of a small character set, the DDPM is
capable of mapping printed strokes to handwritten ones, which makes it possible
to generate photo-realistic and diverse style handwritten samples of unseen
character categories. Combining DDPM-synthesized samples of unseen categories
with real samples of other categories, we can build an HCCR system to support
the full character set. Experimental results on CASIA-HWDB dataset with 3,755
character categories show that the HCCR systems trained with synthetic samples
perform similarly to the one trained with real samples in terms of
recognition accuracy. The proposed method has the potential to address HCCR
with a larger vocabulary.
|
[
"cs.CV"
] | false |
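For concreteness, one training step of an epsilon-prediction DDPM adapted to the glyph-to-handwriting setting is sketched below; conditioning the denoiser by concatenating the printed glyph as extra input channels is our assumption about the conditioning mechanism, not the paper's stated design:

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(eps_model, x_handwritten, x_glyph, alphas_cumprod):
    """Standard epsilon-prediction objective: noise the handwritten target at
    a random timestep and regress the injected noise, conditioning on the
    printed glyph image (channel concatenation, an illustrative choice)."""
    b = x_handwritten.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,))
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x_handwritten)
    x_t = a_bar.sqrt() * x_handwritten + (1 - a_bar).sqrt() * noise
    eps_pred = eps_model(torch.cat([x_t, x_glyph], dim=1), t)
    return F.mse_loss(eps_pred, noise)
```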
2305.15679
|
2023-05-25T03:08:51Z
|
A Similarity Alignment Model for Video Copy Segment Matching
|
[
"Zhenhua Liu",
"Feipeng Ma",
"Tianyi Wang",
"Fengyun Rao"
] |
With the development of multimedia technology, Video Copy Detection has been
a crucial problem for social media platforms. Meta AI held the Video Similarity
Challenge at CVPR 2023 to push the technology forward. In this report, we share
our winning solution for the Matching Track. We propose a Similarity Alignment
Model (SAM) for video copy segment matching. Our SAM exhibits superior
performance compared to other competitors, with a 0.108 / 0.144 absolute
improvement over the second-place competitor in Phase 1 / Phase 2. Code is
available at
https://github.com/FeipengMa6/VSC22-Submission/tree/main/VSC22-Matching-Track-1st.
|
[
"cs.CV"
] | false |
2305.15688
|
2023-05-25T03:34:24Z
|
Frame-Event Alignment and Fusion Network for High Frame Rate Tracking
|
[
"Jiqing Zhang",
"Yuanchen Wang",
"Wenxi Liu",
"Meng Li",
"Jinpeng Bai",
"Baocai Yin",
"Xin Yang"
] |
Most existing RGB-based trackers target low frame rate benchmarks of around
30 frames per second. This setting restricts the tracker's functionality in the
real world, especially for fast motion. Event-based cameras as bioinspired
sensors provide considerable potential for high frame rate tracking due to
their high temporal resolution. However, event-based cameras cannot offer
fine-grained texture information like conventional cameras. This unique
complementarity motivates us to combine conventional frames and events for high
frame rate object tracking under various challenging conditions. In this paper,
we propose an end-to-end network consisting of multi-modality alignment and
fusion modules to effectively combine meaningful information from both
modalities at different measurement rates. The alignment module is responsible
for cross-style and cross-frame-rate alignment between frame and event
modalities under the guidance of the moving cues furnished by events, while the
fusion module is responsible for emphasizing valuable features and suppressing
noise through the mutual complement between the two modalities.
Extensive experiments show that the proposed approach outperforms
state-of-the-art trackers by a significant margin in high frame rate tracking.
With the FE240hz dataset, our approach achieves high frame rate tracking up to
240Hz.
|
[
"cs.CV"
] | false |
2305.15694
|
2023-05-25T04:03:46Z
|
Learning Occupancy for Monocular 3D Object Detection
|
[
"Liang Peng",
"Junkai Xu",
"Haoran Cheng",
"Zheng Yang",
"Xiaopei Wu",
"Wei Qian",
"Wenxiao Wang",
"Boxi Wu",
"Deng Cai"
] |
Monocular 3D detection is a challenging task due to the lack of accurate 3D
information. Existing approaches typically rely on geometry constraints and
dense depth estimates to facilitate the learning, but often fail to fully
exploit the benefits of three-dimensional feature extraction in frustum and 3D
space. In this paper, we propose \textbf{OccupancyM3D}, a method of learning
occupancy for monocular 3D detection. It directly learns occupancy in frustum
and 3D space, leading to more discriminative and informative 3D features and
representations. Specifically, by using synchronized raw sparse LiDAR point
clouds, we define the space status and generate voxel-based occupancy labels.
We formulate occupancy prediction as a simple classification problem and design
associated occupancy losses. Resulting occupancy estimates are employed to
enhance original frustum/3D features. As a result, experiments on KITTI and
Waymo open datasets demonstrate that the proposed method achieves a new state
of the art and surpasses other methods by a significant margin. Codes and
pre-trained models will be available at
\url{https://github.com/SPengLiang/OccupancyM3D}.
|
[
"cs.CV"
] | false |
2305.15699
|
2023-05-25T04:14:49Z
|
Cross-view Action Recognition Understanding From Exocentric to
Egocentric Perspective
|
[
"Thanh-Dat Truong",
"Khoa Luu"
] |
Understanding action recognition in egocentric videos has emerged as a vital
research topic with numerous practical applications. With the limitation in the
scale of egocentric data collection, learning robust deep learning-based action
recognition models remains difficult. Transferring knowledge learned from the
large-scale exocentric data to the egocentric data is challenging due to the
difference in videos across views. Our work introduces a novel cross-view
learning approach to action recognition (CVAR) that effectively transfers
knowledge from the exocentric to the egocentric view. First, we introduce a
novel geometric-based constraint into the self-attention mechanism in
Transformer based on analyzing the camera positions between two views. Then, we
propose a new cross-view self-attention loss learned on unpaired cross-view
data to enforce the self-attention mechanism learning to transfer knowledge
across views. Finally, to further improve the performance of our cross-view
learning approach, we present the metrics to measure the correlations in videos
and attention maps effectively. Experimental results on standard egocentric
action recognition benchmarks, i.e., Charades-Ego, EPIC-Kitchens-55, and
EPIC-Kitchens-100, have shown our approach's effectiveness and state-of-the-art
performance.
|
[
"cs.CV"
] | false |
2305.15709
|
2023-05-25T04:44:17Z
|
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image
Deraining for Semantic Segmentation
|
[
"Xianghao Jiao",
"Yaohua Liu",
"Jiaxin Gao",
"Xinyuan Chu",
"Risheng Liu",
"Xin Fan"
] |
In light of the significant progress made in the development and application
of semantic segmentation tasks, there has been increasing attention towards
improving the robustness of segmentation models against natural degradation
factors (e.g., rain streaks) or artificial attack factors (e.g., adversarial
attacks). However, most existing methods are designed to address a single
degradation factor and are tailored to specific application scenarios. In this
work, we present the first attempt to improve the robustness of semantic
segmentation tasks by simultaneously handling different types of degradation
factors. Specifically, we introduce the Preprocessing Enhanced Adversarial
Robust Learning (PEARL) framework based on the analysis of our proposed Naive
Adversarial Training (NAT) framework. Our approach effectively handles both
rain streaks and adversarial perturbation by transferring the robustness of the
segmentation model to the image deraining model. Furthermore, as opposed to the
commonly used Negative Adversarial Attack (NAA), we design the Auxiliary Mirror
Attack (AMA) to introduce positive information prior to the training of the
PEARL framework, which improves defense capability and segmentation
performance. Our extensive experiments and ablation studies based on different
deraining methods and segmentation models have demonstrated the significant
performance improvement of PEARL with AMA in defense against various
adversarial attacks and rain streaks while maintaining high generalization
performance across different datasets.
|
[
"cs.CV"
] | false |
2305.15727
|
2023-05-25T05:19:17Z
|
POPE: 6-DoF Promptable Pose Estimation of Any Object, in Any Scene, with
One Reference
|
[
"Zhiwen Fan",
"Panwang Pan",
"Peihao Wang",
"Yifan Jiang",
"Dejia Xu",
"Hanwen Jiang",
"Zhangyang Wang"
] |
Despite the significant progress in six degrees-of-freedom (6DoF) object pose
estimation, existing methods have limited applicability in real-world scenarios
involving embodied agents and downstream 3D vision tasks. These limitations
mainly come from the necessity of 3D models, closed-category detection, and a
large number of densely annotated support views. To mitigate these issues, we
propose a general paradigm for object pose estimation, called Promptable Object
Pose Estimation (POPE). The proposed approach POPE enables zero-shot 6DoF
object pose estimation for any target object in any scene, while only a single
reference is adopted as the support view. To achieve this, POPE leverages the
power of a pre-trained large-scale 2D foundation model and employs a framework
with hierarchical feature representation and 3D geometry principles. Moreover,
it estimates the relative camera pose between object prompts and the target
object in new views, enabling both two-view and multi-view 6DoF pose estimation
tasks. Comprehensive experimental results demonstrate that POPE exhibits
unrivaled robust performance in zero-shot settings, by achieving a significant
reduction in the averaged Median Pose Error by 52.38% and 50.47% on the LINEMOD
and OnePose datasets, respectively. We also conduct more challenging testings
in casually captured images (see Figure 1), which further demonstrates the
robustness of POPE. Project page can be found with
https://paulpanwang.github.io/POPE/.
|
[
"cs.CV"
] | false |
2305.15753
|
2023-05-25T06:05:52Z
|
T2TD: Text-3D Generation Model based on Prior Knowledge Guidance
|
[
"Weizhi Nie",
"Ruidong Chen",
"Weijie Wang",
"Bruno Lepri",
"Nicu Sebe"
] |
In recent years, 3D models have been utilized in many applications, such as
autonomous driving, 3D reconstruction, VR, and AR. However, the scarcity of 3D
model data fails to meet practical demands. Thus, generating high-quality 3D
models efficiently from textual descriptions is a promising but challenging way
to solve this problem. In this paper, inspired by the ability of human beings
to complement visual information details from ambiguous descriptions based on
their own experience, we propose a novel text-3D generation model (T2TD), which
introduces the related shapes or textual information as the prior knowledge to
improve the performance of the 3D generation model. In this process, we first
introduce the text-3D knowledge graph to save the relationship between 3D
models and textual semantic information, which can provide the related shapes
to guide the target 3D model generation. Second, we integrate an effective
causal inference model to select useful feature information from these related
shapes, which removes the unrelated shape information and only maintains
feature information that is strongly relevant to the textual description.
Meanwhile, to effectively integrate multi-modal prior knowledge into textual
information, we adopt a novel multi-layer transformer structure to
progressively fuse related shape and textual information, which can effectively
compensate for the lack of structural information in the text and enhance the
final performance of the 3D generation model. The final experimental results
demonstrate that our approach significantly improves 3D model generation
quality and outperforms the SOTA methods on the text2shape datasets.
|
[
"cs.CV"
] | false |
2305.15762
|
2023-05-25T06:22:01Z
|
Dynamic Enhancement Network for Partial Multi-modality Person
Re-identification
|
[
"Aihua Zheng",
"Ziling He",
"Zi Wang",
"Chenglong Li",
"Jin Tang"
] |
Many existing multi-modality studies are based on the assumption of modality
integrity. However, the problem of missing arbitrary modalities is very common
in real life, and this problem is less studied, but actually important in the
task of multi-modality person re-identification (Re-ID). To this end, we design
a novel dynamic enhancement network (DENet), which allows missing arbitrary
modalities while maintaining the representation ability of multiple modalities,
for partial multi-modality person Re-ID. To be specific, the multi-modal
representation of the RGB, near-infrared (NIR) and thermal-infrared (TIR)
images is learned by three branches, in which the information of missing
modalities is recovered by the feature transformation module. Since the missing
state might be changeable, we design a dynamic enhancement module, which
dynamically enhances modality features according to the missing state in an
adaptive manner, to improve the multi-modality representation. Extensive
experiments on the multi-modality person Re-ID dataset RGBNT201 and the vehicle
Re-ID dataset RGBNT100, in comparison with state-of-the-art methods, verify the
effectiveness of our method in complex and changeable environments.
|
[
"cs.CV"
] | false |
2305.15764
|
2023-05-25T06:22:03Z
|
Multi-query Vehicle Re-identification: Viewpoint-conditioned Network,
Unified Dataset and New Metric
|
[
"Aihua Zheng",
"Chaobin Zhang",
"Weijun Zhang",
"Chenglong Li",
"Jin Tang",
"Chang Tan",
"Ruoran Jia"
] |
Existing vehicle re-identification methods mainly rely on the single query,
which has limited information for vehicle representation and thus significantly
hinders the performance of vehicle Re-ID in complicated surveillance networks.
In this paper, we propose a more realistic and easily accessible task, called
multi-query vehicle Re-ID, which leverages multiple queries to overcome the
viewpoint limitation of a single one. Based on this task, we make three major
contributions. First, we design a novel viewpoint-conditioned network (VCNet),
which adaptively combines the complementary information from different vehicle
viewpoints, for multi-query vehicle Re-ID. Moreover, to deal with the problem
of missing vehicle viewpoints, we propose a cross-view feature recovery module
which recovers the features of the missing viewpoints by learning the correlation
between the features of available and missing viewpoints. Second, we create a
unified benchmark dataset, taken by 6142 cameras from a real-life
transportation surveillance system, with comprehensive viewpoints and large
number of crossed scenes of each vehicle for multi-query vehicle Re-ID
evaluation. Finally, we design a new evaluation metric, called mean cross-scene
precision (mCSP), which measures the ability of cross-scene recognition by
suppressing the positive samples with similar viewpoints from the same camera.
Comprehensive experiments validate the superiority of the proposed method
against other methods, as well as the effectiveness of the designed metric in
the evaluation of multi-query vehicle Re-ID.
|
[
"cs.CV"
] | false |
2305.15768
|
2023-05-25T06:24:14Z
|
High-Similarity-Pass Attention for Single Image Super-Resolution
|
[
"Jian-Nan Su",
"Min Gan",
"Guang-Yong Chen",
"Wenzhong Guo",
"C. L. Philip Chen"
] |
Recent developments in the field of non-local attention (NLA) have led to a
renewed interest in self-similarity-based single image super-resolution (SISR).
Researchers have usually used NLA to explore non-local self-similarity (NSS) in
SISR and achieved satisfactory reconstruction results. However, the surprising
phenomenon that the standard NLA performs similarly to an NLA with randomly
selected regions stimulated our interest to revisit NLA. In this paper, we
first analyzed the attention map of the standard NLA
from different perspectives and discovered that the resulting probability
distribution always has full support for every local feature, which implies a
statistical waste of assigning values to irrelevant non-local features,
especially for SISR which needs to model long-range dependence with a large
number of redundant non-local features. Based on these findings, we introduced
a concise yet effective soft thresholding operation to obtain
high-similarity-pass attention (HSPA), which is beneficial for generating a
more compact and interpretable distribution. Furthermore, we derived some key
properties of the soft thresholding operation that enable training our HSPA in
an end-to-end manner. The HSPA can be integrated into existing deep SISR models
as an efficient general building block. In addition, to demonstrate the
effectiveness of the HSPA, we constructed a deep high-similarity-pass attention
network (HSPAN) by integrating a few HSPAs in a simple backbone. Extensive
experimental results demonstrate that HSPAN outperforms state-of-the-art
approaches on both quantitative and qualitative evaluations.
|
[
"cs.CV"
] | false |
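A minimal sketch of a high-similarity-pass attention as the abstract describes it: a soft-thresholding operation zeroes out low-similarity scores so that irrelevant non-local features receive exactly zero weight, yielding a compact, sparse attention distribution. The score scaling and the fixed threshold below are our simplifications (the paper derives properties that let the operation be trained end-to-end):

```python
import torch

def hsp_attention(q, k, v, lam=0.1):
    """Attention where similarity scores are soft-thresholded before
    normalization, so only high-similarity non-local features pass."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = torch.relu(scores - lam)              # soft threshold: zero out low sims
    weights = scores / scores.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return weights @ v
```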
2305.15773
|
2023-05-25T06:34:14Z
|
Multi-scale Efficient Graph-Transformer for Whole Slide Image
Classification
|
[
"Saisai Ding",
"Juncheng Li",
"Jun Wang",
"Shihui Ying",
"Jun Shi"
] |
The multi-scale information in whole slide images (WSIs) is essential
for cancer diagnosis. Although the existing multi-scale vision Transformer has
shown its effectiveness for learning multi-scale image representation, it still
cannot work well on the gigapixel WSIs due to their extremely large image
sizes. To this end, we propose a novel Multi-scale Efficient Graph-Transformer
(MEGT) framework for WSI classification. The key idea of MEGT is to adopt two
independent Efficient Graph-based Transformer (EGT) branches to process the
low-resolution and high-resolution patch embeddings (i.e., tokens in a
Transformer) of WSIs, respectively, and then fuse these tokens via a
multi-scale feature fusion module (MFFM). Specifically, we design an EGT to
efficiently learn the local-global information of patch tokens, which
integrates the graph representation into Transformer to capture spatial-related
information of WSIs. Meanwhile, we propose a novel MFFM to alleviate the
semantic gap among different resolution patches during feature fusion, which
creates a non-patch token for each branch as an agent to exchange information
with another branch by cross-attention. In addition, to expedite network
training, a novel token pruning module is developed in EGT to reduce the
redundant tokens. Extensive experiments on TCGA-RCC and CAMELYON16 datasets
demonstrate the effectiveness of the proposed MEGT.
|
[
"cs.CV"
] | false |
2305.15779
|
2023-05-25T06:46:28Z
|
Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models
|
[
"Jooyoung Choi",
"Yunjey Choi",
"Yunji Kim",
"Junho Kim",
"Sungroh Yoon"
] |
Text-to-image diffusion models can generate diverse, high-fidelity images
based on user-provided text prompts. Recent research has extended these models
to support text-guided image editing. While text guidance is an intuitive
editing interface for users, it often fails to ensure the precise concept
conveyed by users. To address this issue, we propose Custom-Edit, in which we
(i) customize a diffusion model with a few reference images and then (ii)
perform text-guided editing. Our key discovery is that customizing only
language-relevant parameters with augmented prompts improves reference
similarity significantly while maintaining source similarity. Moreover, we
provide our recipe for each customization and editing process. We compare
popular customization methods and validate our findings on two editing methods
using various datasets.
|
[
"cs.CV"
] | true |
2305.15781
|
2023-05-25T06:50:08Z
|
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from
Small Scale to Large Scale
|
[
"Zhiwei Hao",
"Jianyuan Guo",
"Kai Han",
"Han Hu",
"Chang Xu",
"Yunhe Wang"
] |
The tremendous success of large models trained on extensive datasets
demonstrates that scale is a key ingredient in achieving superior results.
Therefore, the reflection on the rationality of designing knowledge
distillation (KD) approaches for limited-capacity architectures solely based on
small-scale datasets is now deemed imperative. In this paper, we identify the
\emph{small data pitfall} present in previous KD methods, which results
in the underestimation of the power of vanilla KD framework on large-scale
datasets such as ImageNet-1K. Specifically, we show that employing stronger
data augmentation techniques and using larger datasets can directly decrease
the gap between vanilla KD and other meticulously designed KD variants. This
highlights the necessity of designing and evaluating KD approaches in the
context of practical scenarios, casting off the limitations of small-scale
datasets. Our investigation of the vanilla KD and its variants in more complex
schemes, including stronger training strategies and different model capacities,
demonstrates that vanilla KD is elegantly simple but astonishingly effective in
large-scale scenarios. Without bells and whistles, we obtain state-of-the-art
ResNet-50, ViT-S, and ConvNeXtV2-T models for ImageNet, which achieve 83.1\%,
84.3\%, and 85.0\% top-1 accuracy, respectively. PyTorch code and checkpoints
can be found at https://github.com/Hao840/vanillaKD.
|
[
"cs.CV"
] | false |
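The "vanilla KD" the paper revisits is the classic Hinton-style objective, sketched below: a weighted sum of label cross-entropy and a temperature-softened KL term between teacher and student. The weighting and temperature here are illustrative defaults, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, targets, T=1.0, alpha=0.5):
    """Plain knowledge distillation: CE on labels plus temperature-scaled KL
    between teacher and student distributions (scaled by T^2, as is standard)."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd
```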
2305.15808
|
2023-05-25T07:43:39Z
|
Towards Language-guided Interactive 3D Generation: LLMs as Layout
Interpreter with Generative Feedback
|
[
"Yiqi Lin",
"Hao Wu",
"Ruichen Wang",
"Haonan Lu",
"Xiaodong Lin",
"Hui Xiong",
"Lin Wang"
] |
Generating and editing a 3D scene guided by natural language poses a
challenge, primarily due to the complexity of specifying the positional
relations and volumetric changes within the 3D space. Recent advancements in
Large Language Models (LLMs) have demonstrated impressive reasoning,
conversational, and zero-shot generation abilities across various domains.
Surprisingly, these models also show great potential in realizing and
interpreting the 3D space. In light of this, we propose a novel language-guided
interactive 3D generation system, dubbed LI3D, that integrates LLMs as a 3D
layout interpreter into the off-the-shelf layout-to-3D generative models,
allowing users to flexibly and interactively generate visual content.
Specifically, we design a versatile layout structure based on bounding boxes
and semantics to prompt the LLMs to model the spatial generation and reasoning
from language. Our system also incorporates LLaVA, a large language and vision
assistant, to provide generative feedback from the visual aspect for improving
the visual quality of generated content. We validate the effectiveness of LI3D,
primarily in 3D generation and editing through multi-round interactions, which
can be flexibly extended to 2D generation and editing. Various experiments
demonstrate the potential benefits of incorporating LLMs in generative AI for
applications, e.g., metaverse. Moreover, we benchmark the layout reasoning
performance of LLMs with neural visual artist tasks, revealing their emergent
ability in the spatial layout domain.
|
[
"cs.CV"
] | false |
2305.15862
|
2023-05-25T08:54:08Z
|
A Task-guided, Implicitly-searched and Meta-initialized Deep Model for
Image Fusion
|
[
"Risheng Liu",
"Zhu Liu",
"Jinyuan Liu",
"Xin Fan",
"Zhongxuan Luo"
] |
Image fusion plays a key role in a variety of multi-sensor-based vision
systems, especially for enhancing visual quality and/or extracting aggregated
features for perception. However, most existing methods just consider image
fusion as an individual task, thus ignoring its underlying relationship with
these downstream vision problems. Furthermore, designing proper fusion
architectures often requires huge engineering labor. It also lacks mechanisms
to improve the flexibility and generalization ability of current fusion
approaches. To mitigate these issues, we establish a Task-guided,
Implicit-searched and Meta-initialized (TIM) deep model to address the image
fusion problem in a challenging real-world scenario. Specifically, we first
propose a constrained strategy to incorporate information from downstream tasks
to guide the unsupervised learning process of image fusion. Within this
framework, we then design an implicit search scheme to automatically discover
compact architectures for our fusion model with high efficiency. In addition, a
pretext meta-initialization technique is introduced to leverage diverse
fusion data to support fast adaptation for different kinds of image fusion
tasks. Qualitative and quantitative experimental results on different
categories of image fusion problems and related downstream tasks (e.g., visual
enhancement and semantic understanding) substantiate the flexibility and
effectiveness of our TIM. The source code will be available at
https://github.com/LiuZhu-CV/TIMFusion.
|
[
"cs.CV"
] | false |
2305.15909
|
2023-05-25T10:15:29Z
|
Camera-Incremental Object Re-Identification with Identity Knowledge
Evolution
|
[
"Hantao Yao",
"Lu Yu",
"Jifei Luo",
"Changsheng Xu"
] |
Object Re-identification (ReID) aims to retrieve the probe object from many
gallery images with the ReID model inferred based on a stationary camera-free
dataset by associating and collecting the identities across all camera views.
When deploying the ReID algorithm in real-world scenarios, the aspect of
storage, privacy constraints, and dynamic changes of cameras would degrade its
generalizability and applicability. Treating each camera's data independently,
we introduce a novel ReID task named Camera-Incremental Object
Re-identification (CIOR), which continually optimizes the ReID model from the
incoming stream of the camera dataset. Since the identities under different
camera views might describe the same object, associating and distilling the
knowledge of common identities would boost the discrimination and benefit from
alleviating the catastrophic forgetting. In this paper, we propose a novel
Identity Knowledge Evolution (IKE) framework for CIOR, consisting of the
Identity Knowledge Association (IKA), Identity Knowledge Distillation (IKD),
and Identity Knowledge Update (IKU). IKA is proposed to discover the common
identities between the current identity and historical identities. IKD is
applied to distill historical identity knowledge from common identities and
quickly adapt the historical model to the current camera view. After each
camera has been trained, IKU is applied to continually expand the identity
knowledge by combining the historical and current identity memories. The
evaluation on Market-CL and Veri-CL shows the effectiveness of Identity
Knowledge Evolution (IKE) for CIOR.
Code: https://github.com/htyao89/Camera-Incremental-Object-ReID
|
[
"cs.CV"
] | false |
2305.15940
|
2023-05-25T11:22:17Z
|
Mask Attack Detection Using Vascular-weighted Motion-robust rPPG Signals
|
[
"Chenglin Yao",
"Jianfeng Ren",
"Ruibin Bai",
"Heshan Du",
"Jiang Liu",
"Xudong Jiang"
] |
Detecting 3D mask attacks on a face recognition system is challenging.
Although genuine faces and 3D face masks show significantly different remote
photoplethysmography (rPPG) signals, rPPG-based face anti-spoofing methods
often suffer from performance degradation due to unstable face alignment in the
video sequence and weak rPPG signals. To enhance the rPPG signal in a
motion-robust way, a landmark-anchored face stitching method is proposed to
align the faces robustly and precisely at the pixel-wise level by using both
SIFT keypoints and facial landmarks. To better encode the rPPG signal, a
weighted spatial-temporal representation is proposed, which emphasizes the face
regions with rich blood vessels. In addition, characteristics of rPPG signals
in different color spaces are jointly utilized. To improve the generalization
capability, a lightweight EfficientNet with a Gated Recurrent Unit (GRU) is
designed to extract both spatial and temporal features from the rPPG
spatial-temporal representation for classification. The proposed method is
compared with the state-of-the-art methods on five benchmark datasets under
both intra-dataset and cross-dataset evaluations. The proposed method shows a
significant and consistent improvement in performance over other
state-of-the-art rPPG-based methods for face spoofing detection.
|
[
"cs.CV"
] | false |
2305.15975
|
2023-05-25T12:12:31Z
|
Triplet Knowledge Distillation
|
[
"Xijun Wang",
"Dongyang Liu",
"Meina Kan",
"Chunrui Han",
"Zhongqin Wu",
"Shiguang Shan"
] |
In Knowledge Distillation, the teacher is generally much larger than the
student, making the solution of the teacher likely to be difficult for the
student to learn. To ease the mimicking difficulty, we introduce a triplet
knowledge distillation mechanism named TriKD. Besides teacher and student,
TriKD employs a third role called anchor model. Before distillation begins, the
pre-trained anchor model delimits a subspace within the full solution space of
the target problem. Solutions within the subspace are expected to be easy
targets that the student could mimic well. Distillation then begins in an
online manner, and the teacher is only allowed to express solutions within the
aforementioned subspace. Surprisingly, benefiting from accurate but
easy-to-mimic hints, the student can finally perform well. After the student is
well trained, it can be used as the new anchor for new students, forming a
curriculum learning strategy. Our experiments on image classification and face
recognition with various models clearly demonstrate the effectiveness of our
method. Furthermore, the proposed TriKD is also effective in dealing with the
overfitting issue. Moreover, our theoretical analysis supports the rationality
of our triplet distillation.
|
[
"cs.CV"
] | false |
2305.16124
|
2023-05-25T14:56:03Z
|
Robust Category-Level 3D Pose Estimation from Synthetic Data
|
[
"Jiahao Yang",
"Wufei Ma",
"Angtian Wang",
"Xiaoding Yuan",
"Alan Yuille",
"Adam Kortylewski"
] |
Obtaining accurate 3D object poses is vital for numerous computer vision
applications, such as 3D reconstruction and scene understanding. However,
annotating real-world objects is time-consuming and challenging. While
synthetically generated training data is a viable alternative, the domain shift
between real and synthetic data is a significant challenge. In this work, we
aim to narrow the performance gap between models trained on synthetic data and
few real images and fully supervised models trained on large-scale data. We
achieve this by approaching the problem from two perspectives: 1) We introduce
SyntheticP3D, a new synthetic dataset for object pose estimation generated from
CAD models and enhanced with a novel algorithm. 2) We propose a novel approach
(CC3D) for training neural mesh models that perform pose estimation via inverse
rendering. In particular, we exploit the spatial relationships between features
on the mesh surface and a contrastive learning scheme to guide the domain
adaptation process. Combined, these two approaches enable our models to perform
competitively with state-of-the-art models using only 10% of the respective
real training images, while outperforming the SOTA model by 10.4% with a
threshold of $\pi/18$ using only 50% of the real training data. Our trained model
further demonstrates robust generalization to out-of-distribution scenarios
despite being trained with minimal real data.
|
[
"cs.CV"
] | false |
2305.16140
|
2023-05-25T15:15:03Z
|
Domain-Adaptive Full-Face Gaze Estimation via Novel-View-Synthesis and
Feature Disentanglement
|
[
"Jiawei Qin",
"Takuru Shimoyama",
"Xucong Zhang",
"Yusuke Sugano"
] |
Along with the recent development of deep neural networks, appearance-based
gaze estimation has succeeded considerably when training and testing within the
same domain. Compared to the within-domain task, the variance of different
domains makes the cross-domain performance drop severely, preventing gaze
estimation deployment in real-world applications. Among all the factors, ranges
of head pose and gaze are believed to play a significant role in the final
performance of gaze estimation, while collecting large ranges of data is
expensive. This work proposes an effective model training pipeline consisting
of a training data synthesis and a gaze estimation model for unsupervised
domain adaptation. The proposed data synthesis leverages the single-image 3D
reconstruction to expand the range of the head poses from the source domain
without requiring a 3D facial shape dataset. To bridge the inevitable gap
between synthetic and real images, we further propose an unsupervised domain
adaptation method suitable for synthetic full-face data. We propose a
disentangling autoencoder network to separate gaze-related features and
introduce background augmentation consistency loss to utilize the
characteristics of the synthetic source domain. Through comprehensive
experiments, we show that the model only using monocular-reconstructed
synthetic training data can perform comparably to real data with a large label
range. Our proposed domain adaptation approach further improves the performance
on multiple target domains. The code and data will be available at
\url{https://github.com/ut-vision/AdaptiveGaze}.
|
[
"cs.CV"
] | false |
2305.16214
|
2023-05-25T16:22:04Z
|
Self-aware and Cross-sample Prototypical Learning for Semi-supervised
Medical Image Segmentation
|
[
"Zhenxi Zhang",
"Ran Ran",
"Chunna Tian",
"Heng Zhou",
"Xin Li",
"Fan Yang",
"Zhicheng Jiao"
] |
Consistency learning plays a crucial role in semi-supervised medical image
segmentation as it enables the effective utilization of limited annotated data
while leveraging the abundance of unannotated data. The effectiveness and
efficiency of consistency learning are challenged by prediction diversity and
training stability, which are often overlooked by existing studies. Meanwhile,
the limited quantity of labeled data for training often proves inadequate for
formulating intra-class compactness and inter-class discrepancy of pseudo
labels. To address these issues, we propose a self-aware and cross-sample
prototypical learning method (SCP-Net) to enhance the diversity of prediction
in consistency learning by utilizing a broader range of semantic information
derived from multiple inputs. Furthermore, we introduce a self-aware
consistency learning method that exploits unlabeled data to improve the
compactness of pseudo labels within each class. Moreover, a dual loss
re-weighting method is integrated into the cross-sample prototypical
consistency learning method to improve the reliability and stability of our
model. Extensive experiments on ACDC dataset and PROMISE12 dataset validate
that SCP-Net outperforms other state-of-the-art semi-supervised segmentation
methods and achieves significant performance gains compared to the limited
supervised training. Our code will be released soon.
|
[
"cs.CV"
] | false |
2305.16216
|
2023-05-25T16:23:39Z
|
Cross-supervised Dual Classifiers for Semi-supervised Medical Image
Segmentation
|
[
"Zhenxi Zhang",
"Ran Ran",
"Chunna Tian",
"Heng Zhou",
"Fan Yang",
"Xin Li",
"Zhicheng Jiao"
] |
Semi-supervised medical image segmentation offers a promising solution for
large-scale medical image analysis by significantly reducing the annotation
burden while achieving comparable performance. Employing this method exhibits a
high degree of potential for optimizing the segmentation process and increasing
its feasibility in clinical settings during translational investigations.
Recently, cross-supervised training based on different co-training sub-networks
has become a standard paradigm for this task. Still, the critical issues of
sub-network disagreement and label-noise suppression require further attention
and progress in cross-supervised training. This paper proposes a
cross-supervised learning framework based on dual classifiers (DC-Net),
including an evidential classifier and a vanilla classifier. The two
classifiers exhibit complementary characteristics, enabling them to handle
disagreement effectively and generate more robust and accurate pseudo-labels
for unlabeled data. We also incorporate the uncertainty estimation from the
evidential classifier into cross-supervised training to alleviate the negative
effect of the error supervision signal. The extensive experiments on LA and
Pancreas-CT dataset illustrate that DC-Net outperforms other state-of-the-art
methods for semi-supervised segmentation. The code will be released soon.
|
[
"cs.CV"
] | false |
2305.16220
|
2023-05-25T16:28:30Z
|
On the Robustness of Segment Anything
|
[
"Yihao Huang",
"Yue Cao",
"Tianlin Li",
"Felix Juefei-Xu",
"Di Lin",
"Ivor W. Tsang",
"Yang Liu",
"Qing Guo"
] |
Segment anything model (SAM) has presented impressive objectness
identification capability with the idea of prompt learning and a new collected
large-scale dataset. Given a prompt (e.g., points, bounding boxes, or masks)
and an input image, SAM is able to generate valid segment masks for all objects
indicated by the prompts, presenting high generalization across diverse
scenarios and being a general method for zero-shot transfer to downstream
vision tasks. Nevertheless, it remains unclear whether SAM may introduce errors
in certain threatening scenarios. Clarifying this is of significant importance
for applications that require robustness, such as autonomous vehicles. In this
paper, we aim to study the testing-time robustness of SAM under adversarial
scenarios and common corruptions. To this end, we first build a testing-time
robustness evaluation benchmark for SAM by integrating existing public
datasets. Second, we extend representative adversarial attacks against SAM and
study the influence of different prompts on robustness. Third, we study the
robustness of SAM under diverse corruption types by evaluating SAM on corrupted
datasets with different prompts. With experiments conducted on SA-1B and KITTI
datasets, we find that SAM exhibits remarkable robustness against various
corruptions, except for blur-related corruption. Furthermore, SAM remains
susceptible to adversarial attacks, particularly when subjected to PGD and BIM
attacks. We think such a comprehensive study could highlight the importance of
the robustness issues of SAM and trigger a series of new tasks for SAM as well
as downstream vision tasks.
|
[
"cs.CV"
] | false |
2305.16233
|
2023-05-25T16:44:51Z
|
Interactive Segment Anything NeRF with Feature Imitation
|
[
"Xiaokang Chen",
"Jiaxiang Tang",
"Diwen Wan",
"Jingbo Wang",
"Gang Zeng"
] |
This paper investigates the potential of enhancing Neural Radiance Fields
(NeRF) with semantics to expand their applications. Although NeRF has been
proven useful in real-world applications like VR and digital creation, the lack
of semantics hinders interaction with objects in complex scenes. We propose to
imitate the backbone feature of off-the-shelf perception models to achieve
zero-shot semantic segmentation with NeRF. Our framework reformulates the
segmentation process by directly rendering semantic features and only applying
the decoder from perception models. This eliminates the need for expensive
backbones and benefits 3D consistency. Furthermore, we can project the learned
semantics onto extracted mesh surfaces for real-time interaction. With the
state-of-the-art Segment Anything Model (SAM), our framework accelerates
segmentation by 16 times with comparable mask quality. The experimental results
demonstrate the efficacy and computational advantages of our approach. Project
page: \url{https://me.kiui.moe/san/}.
|
[
"cs.CV"
] | false |
2305.16310
|
2023-05-25T17:59:01Z
|
Securing Deep Generative Models with Universal Adversarial Signature
|
[
"Yu Zeng",
"Mo Zhou",
"Yuan Xue",
"Vishal M. Patel"
] |
Recent advances in deep generative models have led to the development of
methods capable of synthesizing high-quality, realistic images. These models
pose threats to society due to their potential misuse. Prior research attempted
to mitigate these threats by detecting generated images, but the varying traces
left by different generative models make it challenging to create a universal
detector capable of generalizing to new, unseen generative models. In this
paper, we propose to inject a universal adversarial signature into an arbitrary
pre-trained generative model, in order to make its generated contents more
detectable and traceable. First, the imperceptible optimal signature for each
image can be found by a signature injector through adversarial training.
Subsequently, the signature can be incorporated into an arbitrary generator by
fine-tuning it with the images processed by the signature injector. In this
way, the detector corresponding to the signature can be reused for any
fine-tuned generator for tracking the generator identity. The proposed method
is validated on the FFHQ and ImageNet datasets with various state-of-the-art
generative models, consistently showing a promising detection rate. Code will
be made publicly available at \url{https://github.com/zengxianyu/genwm}.
|
[
"cs.CV"
] | false |
2305.16315
|
2023-05-25T17:59:35Z
|
NAP: Neural 3D Articulation Prior
|
[
"Jiahui Lei",
"Congyue Deng",
"Bokui Shen",
"Leonidas Guibas",
"Kostas Daniilidis"
] |
We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative
model to synthesize 3D articulated object models. Despite the extensive
research on generating 3D objects, compositions, or scenes, there remains a
lack of focus on capturing the distribution of articulated objects, a common
object category for human and robot interaction. To generate articulated
objects, we first design a novel articulation tree/graph parameterization and
then apply a diffusion-denoising probabilistic model over this representation
where articulated objects can be generated via denoising from random complete
graphs. In order to capture both the geometry and the motion structure whose
distribution will affect each other, we design a graph-attention denoising
network for learning the reverse diffusion process. We propose a novel distance
that adapts widely used 3D generation metrics to our novel task to evaluate
generation quality, and experiments demonstrate our high performance in
articulated object generation. We also demonstrate several conditioned
generation applications, including Part2Motion, PartNet-Imagination,
Motion2Part, and GAPart2Object.
|
[
"cs.CV"
] | false |
2305.16411
|
2023-05-25T18:23:20Z
|
ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image
|
[
"Zhenzhen Weng",
"Zeyu Wang",
"Serena Yeung"
] |
Recent advancements in text-to-image generation have enabled significant
progress in zero-shot 3D shape generation. This is achieved by score
distillation, a methodology that uses pre-trained text-to-image diffusion
models to optimize the parameters of a 3D neural representation, e.g. Neural
Radiance Field (NeRF). While showing promising results, existing methods are
often not able to preserve the geometry of complex shapes, such as human
bodies. To address this challenge, we present ZeroAvatar, a method that
introduces the explicit 3D human body prior to the optimization process.
Specifically, we first estimate and refine the parameters of a parametric human
body from a single image. Then during optimization, we use the posed parametric
body as additional geometry constraint to regularize the diffusion model as
well as the underlying density field. Lastly, we propose a UV-guided texture
regularization term to further guide the completion of texture on invisible
body parts. We show that ZeroAvatar significantly enhances the robustness and
3D consistency of optimization-based image-to-3D avatar generation,
outperforming existing zero-shot image-to-3D methods.
|
[
"cs.CV"
] | true |
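Score distillation, which ZeroAvatar (2305.16411) builds on, optimizes a 3D representation by noising its rendering and scoring it with a frozen text-to-image diffusion model. A minimal sketch of the generic DreamFusion-style SDS gradient, assuming a hypothetical `diffusion` eps-prediction network; this is not ZeroAvatar's body-prior-regularized variant.

```python
import torch

def sds_step(render, diffusion, text_emb, t, alphas_cumprod):
    """One Score Distillation Sampling (SDS) step: noise the rendered image,
    let the frozen text-conditioned diffusion model predict that noise, and
    turn the prediction error into a gradient on the renderer's pixels."""
    a_t = alphas_cumprod[t]
    noise = torch.randn_like(render)
    noisy = a_t.sqrt() * render + (1 - a_t).sqrt() * noise
    with torch.no_grad():
        eps_pred = diffusion(noisy, t, text_emb)  # hypothetical eps-network
    w = 1.0 - a_t                                 # one common weighting choice
    grad = w * (eps_pred - noise)
    # Backpropagating this scalar sends exactly `grad` into `render`.
    return (grad.detach() * render).sum()
```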
2305.16481
|
2023-05-25T21:26:43Z
|
SimHaze: game engine simulated data for real-world dehazing
|
[
"Zhengyang Lou",
"Huan Xu",
"Fangzhou Mu",
"Yanli Liu",
"Xiaoyu Zhang",
"Liang Shang",
"Jiang Li",
"Bochen Guan",
"Yin Li",
"Yu Hen Hu"
] |
Deep models have demonstrated recent success in single-image dehazing. Most
prior methods consider fully supervised training and learn from paired clean
and hazy images, where a hazy image is synthesized based on a clean image and
its estimated depth map. This paradigm, however, can produce low-quality hazy
images due to inaccurate depth estimation, resulting in poor generalization of
the trained models. In this paper, we explore an alternative approach for
generating paired clean-hazy images by leveraging computer graphics. Using a
modern game engine, our approach renders crisp clean images and their precise
depth maps, based on which high-quality hazy images can be synthesized for
training dehazing models. To this end, we present SimHaze: a new synthetic haze
dataset. More importantly, we show that training with SimHaze alone allows the
latest dehazing models to achieve significantly better performance in
comparison to previous dehazing datasets. Our dataset and code will be made
publicly available.
|
[
"cs.CV"
] | false |
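Synthesizing a hazy image from a clean image and its depth map, the pairing SimHaze improves on, is conventionally done with the atmospheric scattering model $I = J\,t + A\,(1-t)$ with transmission $t = e^{-\beta d}$. The sketch below shows only that textbook formulation; the paper's game-engine rendering pipeline is more sophisticated.

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, airlight=0.9):
    """Atmospheric scattering model: I = J * t + A * (1 - t), t = exp(-beta*d).
    clean: HxWx3 float image in [0, 1]; depth: HxW float depth map."""
    t = np.exp(-beta * depth)[..., None]     # transmission map, HxWx1
    hazy = clean * t + airlight * (1.0 - t)  # blend scene radiance and airlight
    return hazy.clip(0.0, 1.0)
```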
2305.16492
|
2023-05-25T21:46:12Z
|
Image Classification of Stroke Blood Clot Origin using Deep
Convolutional Neural Networks and Visual Transformers
|
[
"David Azatyan"
] |
Stroke is one of the two main causes of death worldwide, and many individuals
suffer from ischemic stroke every year. In the US alone, more than 700,000
individuals suffer an ischemic stroke each year due to a blood clot blocking an
artery to the brain. The paper describes a particular approach to applying
Artificial Intelligence to separate two major acute ischemic stroke (AIS)
etiology subtypes: cardiac and large artery atherosclerosis. Four deep neural
network architectures and a simple ensemble method are used in the approach.
|
[
"cs.CV"
] | false |
2305.15652
|
2023-05-25T01:58:42Z
|
Towards Total Online Unsupervised Anomaly Detection and Localization in
Industrial Vision
|
[
"Han Gao",
"Huiyuan Luo",
"Fei Shen",
"Zhengtao Zhang"
] |
Although existing image anomaly detection methods yield impressive results,
they are mostly an offline learning paradigm that requires excessive data
pre-collection, limiting their adaptability in industrial scenarios with online
streaming data. Online learning-based image anomaly detection methods are more
compatible with industrial online streaming data but are rarely noticed. For
the first time, this paper presents a fully online learning image anomaly
detection method, namely LeMO, learning memory for online image anomaly
detection. LeMO leverages learnable memory initialized with orthogonal random
noise, eliminating the need for excessive data in memory initialization and
circumventing the inefficiencies of offline data collection. Moreover, a
contrastive learning-based loss function for anomaly detection is designed to
enable online joint optimization of memory and image target-oriented features.
The presented method is simple and highly effective. Extensive experiments
demonstrate the superior performance of LeMO in the online setting.
Additionally, in the offline setting, LeMO is also competitive with the current
state-of-the-art methods and achieves excellent performance in few-shot
scenarios.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.15708
|
2023-05-25T04:43:47Z
|
Score-Based Multimodal Autoencoders
|
[
"Daniel Wesego",
"Amirmohammad Rooshenas"
] |
Multimodal Variational Autoencoders (VAEs) represent a promising group of
generative models that facilitate the construction of a tractable posterior
within the latent space, given multiple modalities. Daunhawer et al. (2022)
demonstrate that as the number of modalities increases, the generative quality
of each modality declines. In this study, we explore an alternative approach to
enhance the generative performance of multimodal VAEs by jointly modeling the
latent space of unimodal VAEs using score-based models (SBMs). The role of the
SBM is to enforce multimodal coherence by learning the correlation among the
latent variables. Consequently, our model combines the superior generative
quality of unimodal VAEs with coherent integration across different modalities.
|
[
"cs.LG",
"cs.CV"
] | false |
2305.15740
|
2023-05-25T05:42:58Z
|
MPE4G: Multimodal Pretrained Encoder for Co-Speech Gesture Generation
|
[
"Gwantae Kim",
"Seonghyeok Noh",
"Insung Ham",
"Hanseok Ko"
] |
When virtual agents interact with humans, gestures are crucial to delivering
their intentions with speech. Previous multimodal co-speech gesture generation
models required encoded features of all modalities to generate gestures. If
some input modalities are removed or contain noise, the model may not generate
the gestures properly. To acquire robust and generalized encodings, we propose
a novel framework with a multimodal pre-trained encoder for co-speech gesture
generation. In the proposed method, the multi-head-attention-based encoder is
trained with self-supervised learning to contain the information on each
modality. Moreover, we collect full-body gestures that consist of 3D joint
rotations to improve visualization and apply gestures to the extensible body
model. Through the series of experiments and human evaluation, the proposed
method renders realistic co-speech gestures not only when all input modalities
are given but also when the input modalities are missing or noisy.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.15765
|
2023-05-25T06:22:10Z
|
Language-Guided 3D Object Detection in Point Cloud for Autonomous
Driving
|
[
"Wenhao Cheng",
"Junbo Yin",
"Wei Li",
"Ruigang Yang",
"Jianbing Shen"
] |
This paper addresses the problem of 3D referring expression comprehension
(REC) in autonomous driving scenario, which aims to ground a natural language
to the targeted region in LiDAR point clouds. Previous approaches for REC
usually focus on the 2D or 3D-indoor domain, which is not suitable for
accurately predicting the location of the queried 3D region in an autonomous
driving scene. In addition, the upper-bound limitation and the heavy
computation cost motivate us to explore a better solution. In this work, we
propose a new multi-modal visual grounding task, termed LiDAR Grounding. Then
we devise a Multi-modal Single Shot Grounding (MSSG) approach with an effective
token fusion strategy. It jointly learns the LiDAR-based object detector with
the language features and predicts the targeted region directly from the
detector without any post-processing. Moreover, the image feature can be
flexibly integrated into our approach to provide rich texture and color
information. The cross-modal learning enforces the detector to concentrate on
important regions in the point cloud by considering the informative language
expressions, thus leading to much better accuracy and efficiency. Extensive
experiments on the Talk2Car dataset demonstrate the effectiveness of the
proposed methods. Our work offers a deeper insight into the LiDAR-based
grounding task and we expect it presents a promising direction for the
autonomous driving community.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.15911
|
2023-05-25T10:18:57Z
|
NexToU: Efficient Topology-Aware U-Net for Medical Image Segmentation
|
[
"Pengcheng Shi",
"Xutao Guo",
"Yanwu Yang",
"Chenfei Ye",
"Ting Ma"
] |
Convolutional neural networks (CNN) and Transformer variants have emerged as
the leading medical image segmentation backbones. Nonetheless, due to their
limitations in either preserving global image context or efficiently processing
irregular shapes in visual objects, these backbones struggle to effectively
integrate information from diverse anatomical regions and reduce
inter-individual variability, particularly for the vasculature. Motivated by
the successful breakthroughs of graph neural networks (GNN) in capturing
topological properties and non-Euclidean relationships across various fields,
we propose NexToU, a novel hybrid architecture for medical image segmentation.
NexToU comprises improved Pool GNN and Swin GNN modules from Vision GNN (ViG)
for learning both global and local topological representations while minimizing
computational costs. To address the containment and exclusion relationships
among various anatomical structures, we reformulate the topological interaction
(TI) module based on the nature of binary trees, rapidly encoding the
topological constraints into NexToU. Extensive experiments conducted on three
datasets (including distinct imaging dimensions, disease types, and imaging
modalities) demonstrate that our method consistently outperforms other
state-of-the-art (SOTA) architectures. All the code is publicly available at
https://github.com/PengchengShi1220/NexToU.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.15942
|
2023-05-25T11:24:38Z
|
Comparison of Pedestrian Prediction Models from Trajectory and
Appearance Data for Autonomous Driving
|
[
"Anthony Knittel",
"Morris Antonello",
"John Redford",
"Subramanian Ramamoorthy"
] |
The ability to anticipate pedestrian motion changes is a critical capability
for autonomous vehicles. In urban environments, pedestrians may enter the road
area and create a high risk for driving, and it is important to identify these
cases. Typical predictors use the trajectory history to predict future motion;
however, in cases of motion initiation, motion in the trajectory may only
become clearly visible after a delay, which can mean the pedestrian has already
entered the road area before an accurate prediction can be made. Appearance data
includes useful information such as changes of gait, which are early indicators
of motion changes, and can inform trajectory prediction. This work presents a
comparative evaluation of trajectory-only and appearance-based methods for
pedestrian prediction, and introduces a new dataset experiment for prediction
using appearance. We create two trajectory and image datasets based on the
combination of image and trajectory sequences from the popular NuScenes
dataset, and examine prediction of trajectories using observed appearance to
inform future motion. This shows some advantages over trajectory prediction
alone, although problems with the dataset prevent the full advantages of
appearance-based models from being shown. We describe methods for improving the
dataset and the experiment to allow the benefits of appearance-based models to
be captured.
|
[
"cs.CV",
"cs.RO"
] | false |
2305.16025
|
2023-05-25T13:06:38Z
|
NVTC: Nonlinear Vector Transform Coding
|
[
"Runsen Feng",
"Zongyu Guo",
"Weiping Li",
"Zhibo Chen"
] |
In theory, vector quantization (VQ) is always better than scalar quantization
(SQ) in terms of rate-distortion (R-D) performance. Recent state-of-the-art
methods for neural image compression are mainly based on nonlinear transform
coding (NTC) with uniform scalar quantization, overlooking the benefits of VQ
due to its exponentially increased complexity. In this paper, we first
investigate some toy sources, demonstrating that even if modern neural
networks considerably enhance the compression performance of SQ with nonlinear
transform, there is still an insurmountable chasm between SQ and VQ. Therefore,
revolving around VQ, we propose a novel framework for neural image compression
named Nonlinear Vector Transform Coding (NVTC). NVTC solves the critical
complexity issue of VQ through (1) a multi-stage quantization strategy and (2)
nonlinear vector transforms. In addition, we apply entropy-constrained VQ in
latent space to adaptively determine the quantization boundaries for joint
rate-distortion optimization, which improves the performance both theoretically
and experimentally. Compared to previous NTC approaches, NVTC demonstrates
superior rate-distortion performance, faster decoding speed, and smaller model
size. Our code is available at https://github.com/USTC-IMCL/NVTC.
|
[
"cs.CV",
"eess.IV"
] | false |
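To make the SQ/VQ contrast in the NVTC abstract concrete, here is a minimal NumPy sketch of uniform scalar quantization versus nearest-codeword vector quantization. It illustrates only the baseline operations, not NVTC's multi-stage, entropy-constrained design.

```python
import numpy as np

def scalar_quantize(x, step=1.0):
    """Uniform scalar quantization: each dimension is rounded independently."""
    return step * np.round(x / step)

def vector_quantize(x, codebook):
    """Nearest-codeword vector quantization over d-dimensional vectors.
    x: (n, d) array of vectors; codebook: (k, d) array of codewords."""
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k) costs
    return codebook[d2.argmin(axis=1)]                          # pick nearest
```

The exponential complexity noted in the abstract is visible here: the codebook needed to match scalar quantization grows exponentially with dimension d.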
2305.16034
|
2023-05-25T13:14:29Z
|
Collaborative Blind Image Deblurring
|
[
"Thomas Eboli",
"Jean-Michel Morel",
"Gabriele Facciolo"
] |
Blurry images usually exhibit similar blur at various locations across the
image domain, a property barely captured in nowadays blind deblurring neural
networks. We show that when extracting patches of similar underlying blur is
possible, jointly processing the stack of patches yields superior accuracy than
handling them separately. Our collaborative scheme is implemented in a neural
architecture with a pooling layer on the stack dimension. We present three
practical patch extraction strategies for image sharpening, camera shake
removal and optical aberration correction, and validate the proposed approach
on both synthetic and real-world benchmarks. For each blur instance, the
proposed collaborative strategy yields significant quantitative and qualitative
improvements.
|
[
"cs.CV",
"eess.IV"
] | false |
2305.16138
|
2023-05-25T15:12:08Z
|
Introducing Explicit Gaze Constraints to Face Swapping
|
[
"Ethan Wilson",
"Frederick Shic",
"Eakta Jain"
] |
Face swapping combines one face's identity with another face's non-appearance
attributes (expression, head pose, lighting) to generate a synthetic face. This
technology is rapidly improving, but falls flat when reconstructing some
attributes, particularly gaze. Image-based loss metrics that consider the full
face do not effectively capture the perceptually important, yet spatially
small, eye regions. Improving gaze in face swaps can improve naturalness and
realism, benefiting applications in entertainment, human computer interaction,
and more. Improved gaze will also directly improve Deepfake detection efforts,
serving as ideal training data for classifiers that rely on gaze for
classification. We propose a novel loss function that leverages gaze prediction
to inform the face swap model during training and compare against existing
methods. We find all methods to significantly benefit gaze in resulting face
swaps.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.16275
|
2023-05-25T17:31:39Z
|
CENSUS-HWR: a large training dataset for offline handwriting recognition
|
[
"Chetan Joshi",
"Lawry Sorenson",
"Ammon Wolfert",
"Dr. Mark Clement",
"Dr. Joseph Price",
"Dr. Kasey Buckles"
] |
Progress in Automated Handwriting Recognition has been hampered by the lack
of large training datasets. Nearly all research uses a set of small datasets
that often cause models to overfit. We present CENSUS-HWR, a new dataset
consisting of full English handwritten words in 1,812,014 grayscale images. A
total of 1,865,134 handwritten texts from a vocabulary of 10,711 words in the
English language are present in this collection. This dataset is intended to
serve handwriting models as a benchmark for deep learning algorithms. This huge
English handwriting recognition dataset has been extracted from the US 1930 and
1940 censuses taken by approximately 70,000 enumerators each year. The dataset
and the trained model with their weights are freely available to download at
https://censustree.org/data.html.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.16295
|
2023-05-25T17:50:17Z
|
HAAV: Hierarchical Aggregation of Augmented Views for Image Captioning
|
[
"Chia-Wen Kuo",
"Zsolt Kira"
] |
A great deal of progress has been made in image captioning, driven by
research into how to encode the image using pre-trained models. This includes
visual encodings (e.g. image grid features or detected objects) and more
recently textual encodings (e.g. image tags or text descriptions of image
regions). As more advanced encodings are available and incorporated, it is
natural to ask: how to efficiently and effectively leverage the heterogeneous
set of encodings? In this paper, we propose to regard the encodings as
augmented views of the input image. The image captioning model encodes each
view independently with a shared encoder efficiently, and a contrastive loss is
incorporated across the encoded views in a novel way to improve their
representation quality and the model's data efficiency. Our proposed
hierarchical decoder then adaptively weighs the encoded views according to
their effectiveness for caption generation by first aggregating within each
view at the token level, and then across views at the view level. We
demonstrate significant performance improvements of +5.6% CIDEr on MS-COCO and
+12.9% CIDEr on Flickr30k compared to state of the arts, and conduct rigorous
analyses to demonstrate the importance of each part of our design.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.16355
|
2023-05-25T04:16:07Z
|
PandaGPT: One Model To Instruction-Follow Them All
|
[
"Yixuan Su",
"Tian Lan",
"Huayang Li",
"Jialu Xu",
"Yan Wang",
"Deng Cai"
] |
We present PandaGPT, an approach to emPower large lANguage moDels with visual
and Auditory instruction-following capabilities. Our pilot experiments show
that PandaGPT can perform complex tasks such as detailed image description
generation, writing stories inspired by videos, and answering questions about
audios. More interestingly, PandaGPT can take multimodal inputs simultaneously
and compose their semantics naturally. For example, PandaGPT can connect how
objects look in an image/video and how they sound in audio. To do so,
PandaGPT combines the multimodal encoders from ImageBind and the large language
models from Vicuna. Notably, only aligned image-text pairs are required for the
training of PandaGPT. Thanks to the strong capability of ImageBind in embedding
data from different modalities into the same space, PandaGPT displays emergent,
i.e. zero-shot, cross-modal behaviors for data other than image and text (e.g.,
video, audio, depth, thermal, and IMU). We hope that PandaGPT serves as an
initial step toward building AGI that can perceive and understand inputs in
different modalities holistically, as we humans do. Our project page is at
https://panda-gpt.github.io/.
|
[
"cs.CL",
"cs.CV"
] | true |
2305.16369
|
2023-05-25T12:06:43Z
|
A Semi-Automated Corner Case Detection and Evaluation Pipeline
|
[
"Isabelle Tulleners",
"Tobias Moers",
"Thomas Schulik",
"Martin Sedlacek"
] |
In order to deploy automated vehicles to the public, it has to be proven that
the vehicle can safely and robustly handle traffic in many different scenarios.
One important component of automated vehicles is the perception system that
captures and processes the environment around the vehicle. Perception systems
require large datasets for training their deep neural network. Knowing which
parts of the data in these datasets describe a corner case is an advantage
during training or testing of the network. These corner cases describe
situations that are rare and potentially challenging for the network. We
propose a pipeline that converts collective expert knowledge descriptions into
the extended KI Absicherung ontology. The ontology is used to describe scenes
and scenarios that can be mapped to perception datasets. The corner cases can
then be extracted from the datasets. In addition, the pipeline enables the
evaluation of the detection networks against the extracted corner cases to
measure their performance.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.18337
|
2023-05-25T13:47:04Z
|
You Don't Have to Be Perfect to Be Amazing: Unveil the Utility of
Synthetic Images
|
[
"Xiaodan Xing",
"Federico Felder",
"Yang Nan",
"Giorgos Papanastasiou",
"Walsh Simon",
"Guang Yang"
] |
Synthetic images generated from deep generative models have the potential to
address data scarcity and data privacy issues. The selection of synthesis
models is mostly based on image quality measurements, and most researchers
favor synthesis models that produce realistic images, i.e., images with good
fidelity scores, such as low Fr\'echet Inception Distance (FID) and high Peak
Signal-To-Noise Ratio (PSNR). However, the quality of synthetic images is not
limited to fidelity, and a wide spectrum of metrics should be evaluated to
comprehensively measure the quality of synthetic images. In addition, quality
metrics are not truthful predictors of the utility of synthetic images, and the
relations between these evaluation metrics are not yet clear. In this work, we
have established a comprehensive set of evaluators for synthetic images,
including fidelity, variety, privacy, and utility. By analyzing more than 100k
chest X-ray images and their synthetic copies, we have demonstrated that there
is an inevitable trade-off between synthetic image fidelity, variety, and
privacy. In addition, we have empirically demonstrated that the utility score
does not require images with both high fidelity and high variety. For intra-
and cross-task data augmentation, mode-collapsed images and low-fidelity images
can still demonstrate high utility. Finally, our experiments have also shown
that it is possible to produce images with both high utility and privacy, which
can provide a strong rationale for the use of deep generative models in
privacy-preserving applications. Our study can offer comprehensive guidance
for the evaluation of synthetic images and elicit further developments for
utility-aware deep generative models in medical image synthesis.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.05376
|
2023-05-25T19:17:39Z
|
Anomaly Detection in Satellite Videos using Diffusion Models
|
[
"Akash Awasthi",
"Son Ly",
"Jaer Nizam",
"Samira Zare",
"Videet Mehta",
"Safwan Ahmed",
"Keshav Shah",
"Ramakrishna Nemani",
"Saurabh Prasad",
"Hien Van Nguyen"
] |
The definition of anomaly detection is the identification of an unexpected
event. Real-time detection of extreme events such as wildfires, cyclones, or
floods using satellite data has become crucial for disaster management.
Although several earth-observing satellites provide information about
disasters, satellites in the geostationary orbit provide data at intervals as
frequent as every minute, effectively creating a video from space. There are
many techniques that have been proposed to identify anomalies in surveillance
videos; however, the available datasets do not have dynamic behavior, so we
discuss an anomaly framework that can work on very high-frequency datasets to
find very fast-moving anomalies. In this work, we present a diffusion model
which does not need any motion component to capture the fast-moving anomalies
and outperforms the other baseline methods.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.05381
|
2023-05-25T08:59:26Z
|
FollowNet: A Comprehensive Benchmark for Car-Following Behavior Modeling
|
[
"Xianda Chen",
"Meixin Zhu",
"Kehua Chen",
"Pengqin Wang",
"Hongliang Lu",
"Hui Zhong",
"Xu Han",
"Yinhai Wang"
] |
Car-following is a control process in which a following vehicle (FV) adjusts
its acceleration to keep a safe distance from the lead vehicle (LV). Recently,
there has been a boom in data-driven models that enable more accurate
modeling of car-following through real-world driving datasets. Although there
are several public datasets available, their formats are not always consistent,
making it challenging to determine the state-of-the-art models and how well a
new model performs compared to existing ones. In contrast, research fields such
as image recognition and object detection have benchmark datasets like
ImageNet, Microsoft COCO, and KITTI. To address this gap and promote the
development of microscopic traffic flow modeling, we establish a public
benchmark dataset for car-following behavior modeling. The benchmark consists
of more than 80K car-following events extracted from five public driving
datasets using the same criteria. These events cover diverse situations
including different road types, various weather conditions, and mixed traffic
flows with autonomous vehicles. Moreover, to give an overview of current
progress in car-following modeling, we implemented and tested representative
baseline models with the benchmark. Results show that the deep deterministic
policy gradient (DDPG) based model performs competitively with a lower MSE for
spacing compared to traditional intelligent driver model (IDM) and
Gazis-Herman-Rothery (GHR) models, and a smaller collision rate compared to
fully connected neural network (NN) and long short-term memory (LSTM) models in
most datasets. The established benchmark will provide researchers with
consistent data formats and metrics for cross-comparing different car-following
models, promoting the development of more accurate models. We open-source our
dataset and implementation code in
https://github.com/HKUST-DRIVE-AI-LAB/FollowNet.
|
[
"cs.CV",
"cs.AI"
] | false |
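The intelligent driver model (IDM) baseline mentioned in the FollowNet abstract computes the following vehicle's acceleration in closed form, $a = a_{\max}\,[1 - (v/v_0)^4 - (s^*/s)^2]$ with desired gap $s^* = s_0 + \max(0,\, vT + v\,\Delta v / (2\sqrt{a_{\max} b}))$. A sketch with common textbook parameter values, not the benchmark's calibrated settings:

```python
import numpy as np

def idm_acceleration(v, dv, s, v0=30.0, T=1.5, a_max=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model: acceleration of the following vehicle.
    v: FV speed (m/s); dv: approach rate v_FV - v_LV (m/s);
    s: bumper-to-bumper gap to the lead vehicle (m).
    Parameter values are textbook defaults, not the benchmark's calibration."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)
```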
2306.06061
|
2023-05-25T14:13:29Z
|
Clustering an African Hairstyle Dataset using PCA and K-means
|
[
"Teffo Phomolo Nicrocia",
"Owolawi Pius Adewale",
"Pholo Moanda Diana"
] |
Digital transformation has not yet been applied to building an African face
shape classifier. African women rely on beauty standards recommendations,
personal preference, or the newest trends in hairstyles to decide on the
appropriate hairstyle for them. In this paper, an approach is presented that
uses K-means clustering to classify African women's images. In order to
identify potential facial clusters, a Haar cascade is used for feature-based
face detection, and K-means clustering is applied for image classification.
|
[
"cs.CV",
"cs.LG"
] | false |
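The pipeline described in the abstract above (Haar-cascade face localization, dimensionality reduction, K-means clustering) can be sketched with OpenCV and scikit-learn. The dataset path, component count, and cluster count below are illustrative assumptions, with PCA supplying the dimensionality-reduction step named in the title.

```python
import glob
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# The cascade file ships with OpenCV; the dataset path is a placeholder.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_vectors(paths, size=(64, 64)):
    """Detect the largest face in each image and flatten it into a vector."""
    vecs = []
    for p in paths:
        gray = cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            vecs.append(cv2.resize(gray[y:y + h, x:x + w], size).ravel())
    return np.asarray(vecs, dtype=np.float32)

X = face_vectors(glob.glob("hairstyles/*.jpg"))           # hypothetical path
X = PCA(n_components=20).fit_transform(X)                 # reduce dimensions
labels = KMeans(n_clusters=5, n_init=10).fit_predict(X)   # face-shape clusters
```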
2305.15644
|
2023-05-25T01:44:09Z
|
Meta Adaptive Task Sampling for Few-Domain Generalization
|
[
"Zheyan Shen",
"Han Yu",
"Peng Cui",
"Jiashuo Liu",
"Xingxuan Zhang",
"Linjun Zhou",
"Furui Liu"
] |
To ensure the out-of-distribution (OOD) generalization performance,
traditional domain generalization (DG) methods resort to training on data from
multiple sources with different underlying distributions. And the success of
those DG methods largely depends on the fact that there are diverse training
distributions. However, it usually needs great efforts to obtain enough
heterogeneous data due to the high expenses, privacy issues or the scarcity of
data. Thus an interesting yet seldom investigated problem arises: how to
improve the OOD generalization performance when the perceived heterogeneity is
limited. In this paper, we instantiate a new framework called few-domain
generalization (FDG), which aims to learn a generalizable model from very few
domains of novel tasks with the knowledge acquired from previous learning
experiences on base tasks. Moreover, we propose a Meta Adaptive Task Sampling
(MATS) procedure to differentiate base tasks according to their semantic and
domain-shift similarity to the novel task. Empirically, we show that the newly
introduced FDG framework can substantially improve the OOD generalization
performance on the novel task and further combining MATS with episodic training
could outperform several state-of-the-art DG baselines on widely used
benchmarks like PACS and DomainNet.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2305.15692
|
2023-05-25T03:54:41Z
|
Deep Neural Networks in Video Human Action Recognition: A Review
|
[
"Zihan Wang",
"Yang Yang",
"Zhi Liu",
"Yifan Zheng"
] |
Currently, video behavior recognition is one of the most foundational tasks of
computer vision. The 2D neural networks of deep learning are built for
recognizing pixel-level information such as images in RGB, RGB-D, or optical
flow formats. With the increasingly wide usage of surveillance video and of
tasks related to human action recognition, more and more tasks require temporal
information for frame-dependency analysis. Researchers have therefore widely
studied video-based rather than purely image-based (pixel-based) recognition in
order to extract more informative elements. Our review addresses multiple
recently proposed research works and compares their advantages and
disadvantages, focusing on the derived deep learning frameworks rather than
machine learning frameworks. The comparison covers existing frameworks and
datasets consisting of video-format data only. Due to the specific properties
of human actions and the increasingly wide usage of deep neural networks, we
collected all research works published within the last three years, from 2020
to 2022. In the works we review, the performance of deep neural networks
surpasses most other techniques in feature learning and extraction tasks,
especially video action recognition.
|
[
"cs.CV",
"cs.AI",
"cs.HC"
] | false |
2305.15734
|
2023-05-25T05:35:11Z
|
On the Impact of Knowledge Distillation for Model Interpretability
|
[
"Hyeongrok Han",
"Siwon Kim",
"Hyun-Soo Choi",
"Sungroh Yoon"
] |
Several recent studies have elucidated why knowledge distillation (KD)
improves model performance. However, few have researched the other advantages
of KD in addition to its improving model performance. In this study, we have
attempted to show that KD enhances the interpretability as well as the accuracy
of models. We measured the number of concept detectors identified in network
dissection for a quantitative comparison of model interpretability. We
attributed the improvement in interpretability to the class-similarity
information transferred from the teacher to student models. First, we confirmed
the transfer of class-similarity information from the teacher to student model
via logit distillation. Then, we analyzed how class-similarity information
affects model interpretability in terms of its presence or absence and degree
of similarity information. We conducted various quantitative and qualitative
experiments and examined the results on different datasets, different KD
methods, and according to different measures of interpretability. Our research
showed that KD models by large models could be used more reliably in various
fields.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
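The logit distillation referred to in the abstract above transfers the teacher's class-similarity structure through temperature-softened logits. A minimal sketch of the standard Hinton-style distillation loss; the temperature and mixing weight are common defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style logit distillation. The KL term on temperature-softened
    logits is what carries the teacher's class-similarity information."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # rescale gradient size
    hard = F.cross_entropy(student_logits, labels)     # usual supervised term
    return alpha * soft + (1.0 - alpha) * hard
```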
2305.15748
|
2023-05-25T05:55:53Z
|
ReactFace: Multiple Appropriate Facial Reaction Generation in Dyadic
Interactions
|
[
"Cheng Luo",
"Siyang Song",
"Weicheng Xie",
"Micol Spitale",
"Linlin Shen",
"Hatice Gunes"
] |
In dyadic interaction, predicting the listener's facial reactions is
challenging as different reactions may be appropriate in response to the same
speaker's behaviour. This paper presents a novel framework called ReactFace
that learns an appropriate facial reaction distribution from a speaker's
behaviour rather than replicating the real facial reaction of the listener.
ReactFace generates multiple different but appropriate photo-realistic human
facial reactions by (i) learning an appropriate facial reaction distribution
representing multiple appropriate facial reactions; and (ii) synchronizing the
generated facial reactions with the speaker's verbal and non-verbal behaviours
at each time stamp, resulting in realistic 2D facial reaction sequences.
Experimental results demonstrate the effectiveness of our approach in
generating multiple diverse, synchronized, and appropriate facial reactions
from each speaker's behaviour, with the quality of the generated reactions
being influenced by the speaker's speech and facial behaviours. Our code is
made publicly available at \url{https://github.com/lingjivoo/ReactFace}.
|
[
"cs.CV",
"cs.HC",
"cs.MM"
] | false |
2305.15813
|
2023-05-25T07:53:18Z
|
Leveraging object detection for the identification of lung cancer
|
[
"Karthick Prasad Gunasekaran"
] |
Lung cancer poses a significant global public health challenge, emphasizing
the importance of early detection for improved patient outcomes. Recent
advancements in deep learning algorithms have shown promising results in
medical image analysis. This study aims to explore the application of object
detection particularly YOLOv5, an advanced object identification system, in
medical imaging for lung cancer identification. To train and evaluate the
algorithm, a dataset comprising chest X-rays and corresponding annotations was
obtained from Kaggle. The YOLOv5 model was employed to train an algorithm
capable of detecting cancerous lung lesions. The training process involved
optimizing hyperparameters and utilizing augmentation techniques to enhance the
model's performance. The trained YOLOv5 model exhibited exceptional proficiency
in identifying lung cancer lesions, displaying high accuracy and recall rates.
It successfully pinpointed malignant areas in chest radiographs, as validated
by a separate test set where it outperformed previous techniques. Additionally,
the YOLOv5 model demonstrated computational efficiency, enabling real-time
detection and making it suitable for integration into clinical procedures. This
proposed approach holds promise in assisting radiologists in the early
discovery and diagnosis of lung cancer, ultimately leading to prompt treatment
and improved patient outcomes.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.16103
|
2023-05-25T14:34:08Z
|
ChatBridge: Bridging Modalities with Large Language Model as a Language
Catalyst
|
[
"Zijia Zhao",
"Longteng Guo",
"Tongtian Yue",
"Sihan Chen",
"Shuai Shao",
"Xinxin Zhu",
"Zehuan Yuan",
"Jing Liu"
] |
Building general-purpose models that can perceive diverse real-world
modalities and solve various tasks is an appealing target in artificial
intelligence. In this paper, we present ChatBridge, a novel multimodal language
model that leverages the expressive capabilities of language as the catalyst to
bridge the gap between various modalities. We show that only language-paired
two-modality data is sufficient to connect all modalities. ChatBridge leverages
recent large language models (LLM) and extends their zero-shot capabilities to
incorporate diverse multimodal inputs. ChatBridge undergoes a two-stage
training. The first stage aligns each modality with language, which brings
emergent multimodal correlation and collaboration abilities. The second stage
instruction-finetunes ChatBridge to align it with user intent with our newly
proposed multimodal instruction tuning dataset, named MULTIS, which covers a
wide range of 16 multimodal tasks of text, image, video, and audio modalities.
We show strong quantitative and qualitative results on zero-shot multimodal
tasks covering text, image, video, and audio modalities. All codes, data, and
models of ChatBridge will be open-sourced.
|
[
"cs.CV",
"cs.AI",
"cs.CL",
"cs.MM"
] | false |
2305.16222
|
2023-05-25T16:29:16Z
|
Incomplete Multimodal Learning for Complex Brain Disorders Prediction
|
[
"Reza Shirkavand",
"Liang Zhan",
"Heng Huang",
"Li Shen",
"Paul M. Thompson"
] |
Recent advancements in the acquisition of various brain data sources have
created new opportunities for integrating multimodal brain data to assist in
early detection of complex brain disorders. However, current data integration
approaches typically need a complete set of biomedical data modalities, which
may not always be feasible, as some modalities are only available in
large-scale research cohorts and are prohibitive to collect in routine clinical
practice. Especially in studies of brain diseases, research cohorts may include
both neuroimaging data and genetic data, but for practical clinical diagnosis,
we often need to make disease predictions only based on neuroimages. As a
result, it is desired to design machine learning models which can use all
available data (different data could provide complementary information) during
training but conduct inference using only the most common data modality. We
propose a new incomplete multimodal data integration approach that employs
transformers and generative adversarial networks to effectively exploit
auxiliary modalities available during training in order to improve the
performance of a unimodal model at inference. We apply our new method to
predict cognitive degeneration and disease outcomes using the multimodal
imaging genetic data from Alzheimer's Disease Neuroimaging Initiative (ADNI)
cohort. Experimental results demonstrate that our approach outperforms the
related machine learning and deep learning methods by a significant margin.
|
[
"eess.IV",
"cs.CV",
"cs.LG",
"q-bio.NC"
] | false |
2305.16269
|
2023-05-25T17:25:14Z
|
UDPM: Upsampling Diffusion Probabilistic Models
|
[
"Shady Abu-Hussein",
"Raja Giryes"
] |
In recent years, Denoising Diffusion Probabilistic Models (DDPM) have caught
significant attention. By composing a Markovian process that starts in the data
domain and then gradually adds noise until reaching pure white noise, they
achieve superior performance in learning data distributions. Yet, these models
require a large number of diffusion steps to produce aesthetically pleasing
samples, which is inefficient. In addition, unlike common generative
adversarial networks, the latent space of diffusion models is not
interpretable. In this work, we propose to generalize the denoising diffusion
process into an Upsampling Diffusion Probabilistic Model (UDPM), in which we
reduce the latent variable dimension in addition to the traditional noise level
addition. As a result, we are able to sample images of size $256\times 256$
with only 7 diffusion steps, roughly two orders of magnitude fewer than
standard DDPMs require. We formally develop the Markovian diffusion
processes of the UDPM, and demonstrate its generation capabilities on the
popular FFHQ, LSUN horses, ImageNet, and AFHQv2 datasets. Another favorable
property of UDPM is that it is very easy to interpolate its latent space, which
is not the case with standard diffusion models. Our code is available online
\url{https://github.com/shadyabh/UDPM}.
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
2305.16301
|
2023-05-25T17:55:59Z
|
Look Ma, No Hands! Agent-Environment Factorization of Egocentric Videos
|
[
"Matthew Chang",
"Aditya Prakash",
"Saurabh Gupta"
] |
The analysis and use of egocentric videos for robotic tasks is made
challenging by occlusion due to the hand and the visual mismatch between the
human hand and a robot end-effector. In this sense, the human hand presents a
nuisance. However, often hands also provide a valuable signal, e.g. the hand
pose may suggest what kind of object is being held. In this work, we propose to
extract a factored representation of the scene that separates the agent (human
hand) and the environment. This alleviates both occlusion and mismatch while
preserving the signal, thereby easing the design of models for downstream
robotics tasks. At the heart of this factorization is our proposed Video
Inpainting via Diffusion Model (VIDM) that leverages both a prior on real-world
images (through a large-scale pre-trained diffusion model) and the appearance
of the object in earlier frames of the video (through attention). Our
experiments demonstrate the effectiveness of VIDM at improving inpainting
quality on egocentric videos and the power of our factored representation for
numerous tasks: object detection, 3D reconstruction of manipulated objects, and
learning of reward functions, policies, and affordances from videos.
|
[
"cs.CV",
"cs.LG",
"cs.RO"
] | false |
2305.16312
|
2023-05-25T17:59:04Z
|
UMat: Uncertainty-Aware Single Image High Resolution Material Capture
|
[
"Carlos Rodriguez-Pardo",
"Henar Dominguez-Elvira",
"David Pascual-Hernandez",
"Elena Garces"
] |
We propose a learning-based method to recover normals, specularity, and
roughness from a single diffuse image of a material, using microgeometry
appearance as our primary cue. Previous methods that work on single images tend
to produce over-smooth outputs with artifacts, operate at limited resolution,
or train one model per class with little room for generalization. Previous
methods that work on single images tend to produce over-smooth outputs with
artifacts, operate at limited resolution, or train one model per class with
little room for generalization. In contrast, in this work, we propose a novel
capture approach that leverages a generative network with attention and a U-Net
discriminator, which shows outstanding performance integrating global
information at reduced computational complexity. We showcase the performance of
our method with a real dataset of digitized textile materials and show that a
commodity flatbed scanner can produce the type of diffuse illumination required
as input to our method. Additionally, because the problem might be ill-posed --
more than a single diffuse image might be needed to disambiguate the specular
reflection -- or because the training dataset is not representative enough of
the real distribution, we propose a novel framework to quantify the model's
confidence about its prediction at test time. Our method is the first one to
deal with the problem of modeling uncertainty in material digitization,
increasing the trustworthiness of the process and enabling more intelligent
strategies for dataset creation, as we demonstrate with an active learning
experiment.
|
[
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG",
"68T07 (Primary) 68T45, 68U10, 68U05 (Secondary)",
"I.4.0; I.2.6; I.3.0"
] | false |
2305.16361
|
2023-05-25T08:07:07Z
|
An Experimental Investigation into the Evaluation of Explainability
Methods
|
[
"Sédrick Stassin",
"Alexandre Englebert",
"Géraldin Nanfack",
"Julien Albert",
"Nassim Versbraegen",
"Gilles Peiffer",
"Miriam Doh",
"Nicolas Riche",
"Benoît Frenay",
"Christophe De Vleeschouwer"
] |
EXplainable Artificial Intelligence (XAI) aims to help users to grasp the
reasoning behind the predictions of an Artificial Intelligence (AI) system.
Many XAI approaches have emerged in recent years. Consequently, a subfield
related to the evaluation of XAI methods has gained considerable attention,
with the aim to determine which methods provide the best explanation using
various approaches and criteria. However, the literature lacks a comparison of
the evaluation metrics themselves, that one can use to evaluate XAI methods.
This work aims to fill this gap by comparing 14 different metrics when applied
to nine state-of-the-art XAI methods and three dummy methods (e.g., random
saliency maps) used as references. Experimental results show which of these
metrics produces highly correlated results, indicating potential redundancy. We
also demonstrate the significant impact of varying the baseline hyperparameter
on the evaluation metric values. Finally, we use dummy methods to assess the
reliability of metrics in terms of ranking, pointing out their limitations.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2305.16364
|
2023-05-25T10:27:07Z
|
E2EAI: End-to-End Deep Learning Framework for Active Investing
|
[
"Zikai Wei",
"Bo Dai",
"Dahua Lin"
] |
Active investing aims to construct a portfolio of assets that are believed to
be relatively profitable in the markets, with one popular method being to
construct a portfolio via factor-based strategies. In recent years, there have
been increasing efforts to apply deep learning to pursue "deep factors" with
more active returns or promising pipelines for asset trends prediction.
However, the question of how to construct an active investment portfolio via an
end-to-end deep learning framework (E2E) is still open and rarely addressed in
existing works. In this paper, we are the first to propose an E2E that covers
almost the entire process of factor investing through factor selection, factor
combination, stock selection, and portfolio construction. Extensive experiments
on real stock market data demonstrate the effectiveness of our end-to-end deep
learning framework in active investing.
|
[
"q-fin.PM",
"cs.CV",
"cs.LG"
] | false |
2305.16404
|
2023-05-25T18:11:21Z
|
GrowSP: Unsupervised Semantic Segmentation of 3D Point Clouds
|
[
"Zihui Zhang",
"Bo Yang",
"Bing Wang",
"Bo Li"
] |
We study the problem of 3D semantic segmentation from raw point clouds.
Unlike existing methods which primarily rely on a large amount of human
annotations for training neural networks, we propose the first purely
unsupervised method, called GrowSP, to successfully identify complex semantic
classes for every point in 3D scenes, without needing any type of human labels
or pretrained models. The key to our approach is to discover 3D semantic
elements via progressive growing of superpoints. Our method consists of three
major components, 1) the feature extractor to learn per-point features from
input point clouds, 2) the superpoint constructor to progressively grow the
sizes of superpoints, and 3) the semantic primitive clustering module to group
superpoints into semantic elements for the final semantic segmentation. We
extensively evaluate our method on multiple datasets, demonstrating superior
performance over all unsupervised baselines and approaching the classic
fully-supervised PointNet. We hope our work could inspire more advanced methods
for unsupervised 3D semantic learning.
|
[
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] | false |
2305.16465
|
2023-05-25T20:42:23Z
|
An AI-Ready Multiplex Staining Dataset for Reproducible and Accurate
Characterization of Tumor Immune Microenvironment
|
[
"Parmida Ghahremani",
"Joseph Marino",
"Juan Hernandez-Prera",
"Janis V. de la Iglesia",
"Robbert JC Slebos",
"Christine H. Chung",
"Saad Nadeem"
] |
We introduce a new AI-ready computational pathology dataset containing
restained and co-registered digitized images from eight head-and-neck squamous
cell carcinoma patients. Specifically, the same tumor sections were stained
with the expensive multiplex immunofluorescence (mIF) assay first and then
restained with cheaper multiplex immunohistochemistry (mIHC). This is a first
public dataset that demonstrates the equivalence of these two staining methods
which in turn allows several use cases; due to the equivalence, our cheaper
mIHC staining protocol can offset the need for expensive mIF staining/scanning
which requires highly-skilled lab technicians. As opposed to subjective and
error-prone immune cell annotations from individual pathologists (disagreement
> 50%) to drive SOTA deep learning approaches, this dataset provides objective
immune and tumor cell annotations via mIF/mIHC restaining for more reproducible
and accurate characterization of tumor immune microenvironment (e.g. for
immunotherapy). We demonstrate the effectiveness of this dataset in three use
cases: (1) IHC quantification of CD3/CD8 tumor-infiltrating lymphocytes via
style transfer, (2) virtual translation of cheap mIHC stains to more expensive
mIF stains, and (3) virtual tumor/immune cellular phenotyping on standard
hematoxylin images. The dataset is available at
\url{https://github.com/nadeemlab/DeepLIIF}.
|
[
"eess.IV",
"cs.CV",
"q-bio.QM"
] | false |
2305.16467
|
2023-05-25T20:45:36Z
|
Pair-Variational Autoencoders (PairVAE) for Linking and
Cross-Reconstruction of Characterization Data from Complementary Structural
Characterization Techniques
|
[
"Shizhao Lu",
"Arthi Jayaraman"
] |
In material research, structural characterization often requires multiple
complementary techniques to obtain a holistic morphological view of the
synthesized material. Depending on the availability and accessibility of the
different characterization techniques (e.g., scattering, microscopy,
spectroscopy), each research facility or academic research lab may have access
to high-throughput capability in one technique but face limitations (sample
preparation, resolution, access time) with the other technique(s). Furthermore,
one type of structural characterization data may be easier to interpret than
another (e.g., microscopy images are easier to interpret than small angle
scattering profiles). Thus, it is useful to have machine learning models that
can be trained on paired structural characterization data from multiple
techniques so that the model can generate one set of characterization data from
the other. In this paper we demonstrate one such machine learning workflow,
PairVAE, that works with data from Small Angle X-Ray Scattering (SAXS) that
presents information about bulk morphology and images from Scanning Electron
Microscopy (SEM) that presents two-dimensional local structural information of
the sample. Using paired SAXS and SEM data of novel block copolymer assembled
morphologies [open access data from Doerk G.S., et al. Science Advances. 2023
Jan 13;9(2): eadd3687], we train our PairVAE. After successful training, we
demonstrate that the PairVAE can generate SEM images of the block copolymer
morphology when it takes as input that sample's corresponding SAXS 2D pattern,
and vice versa. This method can be extended to other soft materials
morphologies as well and serves as a valuable tool for easy interpretation of
2D SAXS patterns as well as creating a database for other downstream
calculations of structure-property relationships.
|
[
"cond-mat.soft",
"cond-mat.mtrl-sci",
"cs.CV",
"cs.LG"
] | false |
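A minimal sketch of the cross-reconstruction idea in PairVAE: two modality-specific VAEs share a latent space, so a latent encoded from a SAXS pattern can be decoded by the SEM decoder and vice versa. All layer sizes and dimensions below are placeholders; the published model presumably uses convolutional encoders/decoders and additional alignment losses.

```python
import torch
import torch.nn as nn

class PairVAESketch(nn.Module):
    """Two modality-specific VAEs with a shared latent space, enabling both
    self-reconstruction and SAXS<->SEM cross-reconstruction."""
    def __init__(self, d_saxs=256, d_sem=1024, d_z=32):
        super().__init__()
        def mk(d_in, d_out):  # small MLP; placeholder for real architectures
            return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                 nn.Linear(128, d_out))
        self.enc_saxs, self.enc_sem = mk(d_saxs, 2 * d_z), mk(d_sem, 2 * d_z)
        self.dec_saxs, self.dec_sem = mk(d_z, d_saxs), mk(d_z, d_sem)

    @staticmethod
    def reparam(h):
        mu, logvar = h.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, saxs, sem):
        z_a = self.reparam(self.enc_saxs(saxs))
        z_b = self.reparam(self.enc_sem(sem))
        # Self- and cross-reconstructions: either latent feeds either decoder.
        return (self.dec_saxs(z_a), self.dec_sem(z_b),
                self.dec_sem(z_a), self.dec_saxs(z_b))
```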
2305.15677
|
2023-05-25T03:03:21Z
|
Nonlinear Bipartite Output Regulation with Application to Turing Pattern
|
[
"Dong Liang",
"Martin Guay",
"Shimin Wang"
] |
In this paper, a bipartite output regulation problem is solved for a class of
nonlinear multi-agent systems subject to static signed communication networks.
A nonlinear distributed observer is proposed for a nonlinear exosystem with
cooperation-competition interactions to address the problem. Sufficient
conditions are provided to guarantee its existence and stability. The
exponential stability of the observer is established. As a practical
application, a leader-following bipartite consensus problem is solved for a
class of nonlinear multi-agent systems based on the observer. Finally, a
network of multiple pendulum systems is treated to support the feasibility of
the proposed design. The possible application of the approach to generate
specific Turing patterns is also presented.
|
[
"math.OC",
"cs.CV",
"cs.SY",
"eess.SY",
"nlin.PS"
] | false |
2305.15637
|
2023-05-25T01:27:29Z
|
Morphological Inflection: A Reality Check
|
[
"Jordan Kodner",
"Sarah Payne",
"Salam Khalifa",
"Zoey Liu"
] |
Morphological inflection is a popular task in sub-word NLP with both
practical and cognitive applications. For years now, state-of-the-art systems
have reported high, but also highly variable, performance across data sets and
languages. We investigate the causes of this high performance and high
variability; we find several aspects of data set creation and evaluation which
systematically inflate performance and obfuscate differences between languages.
To improve generalizability and reliability of results, we propose new data
sampling and evaluation strategies that better reflect likely use-cases. Using
these new strategies, we make new observations on the generalization abilities
of current inflection systems.
|
[
"cs.CL"
] | false |
2305.15684
|
2023-05-25T03:18:18Z
|
Perturbation-based Self-supervised Attention for Attention Bias in Text
Classification
|
[
"Huawen Feng",
"Zhenxi Lin",
"Qianli Ma"
] |
In text classification, the traditional attention mechanisms usually focus
too much on frequent words, and need extensive labeled data in order to learn.
This paper proposes a perturbation-based self-supervised attention approach to
guide attention learning without any annotation overhead. Specifically, we add
as much noise as possible to all the words in the sentence without changing
their semantics and predictions. We hypothesize that words that tolerate more
noise are less significant, and we can use this information to refine the
attention distribution. Experimental results on three text classification tasks
show that our approach can significantly improve the performance of current
attention-based models, and is more effective than existing self-supervised
methods. We also provide a visualization analysis to verify the effectiveness
of our approach.
|
[
"cs.CL"
] | false |
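The perturbation idea above, that words tolerating more noise are less significant, can be illustrated with a simplified Monte-Carlo proxy: score each word by how much embedding noise at that position shifts the prediction. The paper instead learns per-word noise magnitudes during training, so this is only an intuition-level sketch with a hypothetical `model` mapping embeddings to class logits.

```python
import torch

@torch.no_grad()
def noise_sensitivity(model, embeddings, sigma=0.1, trials=20):
    """Monte-Carlo proxy for per-word importance: perturb one word embedding
    at a time and measure how much the prediction shifts. Words that tolerate
    the noise (small shift) are treated as less significant.
    embeddings: (seq_len, dim) tensor for a single sentence."""
    base = model(embeddings.unsqueeze(0)).softmax(-1)
    scores = torch.zeros(embeddings.size(0))
    for i in range(embeddings.size(0)):
        for _ in range(trials):
            noisy = embeddings.clone()
            noisy[i] += sigma * torch.randn_like(noisy[i])
            out = model(noisy.unsqueeze(0)).softmax(-1)
            scores[i] += (out - base).abs().sum()
    return scores / trials  # larger = more sensitive = more important
```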
2305.15717
|
2023-05-25T05:00:12Z
|
The False Promise of Imitating Proprietary LLMs
|
[
"Arnav Gudibande",
"Eric Wallace",
"Charlie Snell",
"Xinyang Geng",
"Hao Liu",
"Pieter Abbeel",
"Sergey Levine",
"Dawn Song"
] |
An emerging method to cheaply improve a weaker language model is to finetune
it on outputs from a stronger model, such as a proprietary system like ChatGPT
(e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply
imitate the proprietary model's capabilities using a weaker open-source model.
In this work, we critically analyze this approach. We first finetune a series
of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data
sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the
models using crowd raters and canonical NLP benchmarks. Initially, we were
surprised by the output quality of our imitation models -- they appear far
better at following instructions, and crowd workers rate their outputs as
competitive with ChatGPT. However, when conducting more targeted automatic
evaluations, we find that imitation models close little to none of the gap from
the base LM to ChatGPT on tasks that are not heavily supported in the imitation
data. We show that these performance discrepancies may slip past human raters
because imitation models are adept at mimicking ChatGPT's style but not its
factuality. Overall, we conclude that model imitation is a false promise: there
exists a substantial capabilities gap between open and closed LMs that, with
current methods, can only be bridged using an unwieldy amount of imitation data
or by using more capable base LMs. In turn, we argue that the highest leverage
action for improving open-source models is to tackle the difficult challenge of
developing better base LMs, rather than taking the shortcut of imitating
proprietary systems.
|
[
"cs.CL"
] | true |
2305.15718
|
2023-05-25T05:01:33Z
|
Towards Higher Pareto Frontier in Multilingual Machine Translation
|
[
"Yichong Huang",
"Xiaocheng Feng",
"Xinwei Geng",
"Baohang Li",
"Bing Qin"
] |
Multilingual neural machine translation has witnessed remarkable progress in
recent years. However, the long-tailed distribution of multilingual corpora
poses a challenge of Pareto optimization, i.e., optimizing for some languages
may come at the cost of degrading the performance of others. Existing balancing
training strategies are equivalent to a series of Pareto optimal solutions,
which trade off on a Pareto frontier. In this work, we propose a new training
framework, Pareto Mutual Distillation (Pareto-MD), towards pushing the Pareto
frontier outwards rather than making trade-offs. Specifically, Pareto-MD
collaboratively trains two Pareto optimal solutions that favor different
languages and allows them to learn from the strengths of each other via
knowledge distillation. Furthermore, we introduce a novel strategy to enable
stronger communication between Pareto optimal solutions and broaden the
applicability of our approach. Experimental results on the widely-used WMT and
TED datasets show that our method significantly pushes the Pareto frontier and
outperforms baselines by up to +2.46 BLEU.
|
[
"cs.CL"
] | false |
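A hedged sketch of the mutual-distillation step described above, assuming two
PyTorch translation models that return target-token logits of shape (batch,
length, vocab); the loss weight alpha and temperature tau are illustrative
hyperparameters, not the paper's values.

import torch.nn.functional as F

def pareto_md_step(model_a, model_b, batch, opt_a, opt_b, alpha=0.5, tau=2.0):
    logits_a = model_a(batch["src"])  # assumed call signature
    logits_b = model_b(batch["src"])
    # Standard translation loss for each Pareto-optimal solution.
    ce_a = F.cross_entropy(logits_a.transpose(1, 2), batch["tgt"])
    ce_b = F.cross_entropy(logits_b.transpose(1, 2), batch["tgt"])
    # Each model also matches the other's (detached) output distribution.
    kd_a = F.kl_div(F.log_softmax(logits_a / tau, dim=-1),
                    F.softmax(logits_b.detach() / tau, dim=-1),
                    reduction="batchmean") * tau * tau
    kd_b = F.kl_div(F.log_softmax(logits_b / tau, dim=-1),
                    F.softmax(logits_a.detach() / tau, dim=-1),
                    reduction="batchmean") * tau * tau
    opt_a.zero_grad()
    (ce_a + alpha * kd_a).backward()
    opt_a.step()
    opt_b.zero_grad()
    (ce_b + alpha * kd_b).backward()
    opt_b.step()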
2305.15725
|
2023-05-25T05:12:33Z
|
Learn to Not Link: Exploring NIL Prediction in Entity Linking
|
[
"Fangwei Zhu",
"Jifan Yu",
"Hailong Jin",
"Juanzi Li",
"Lei Hou",
"Zhifang Sui"
] |
Entity linking models have achieved significant success via utilizing
pretrained language models to capture semantic features. However, the NIL
prediction problem, which aims to identify mentions without a corresponding
entity in the knowledge base, has received insufficient attention. We
categorize mentions linking to NIL into Missing Entity and Non-Entity Phrase,
and propose an entity linking dataset NEL that focuses on the NIL prediction
problem. NEL takes ambiguous entities as seeds, collects relevant mention
context in the Wikipedia corpus, and ensures the presence of mentions linking
to NIL by human annotation and entity masking. We conduct a series of
experiments with the widely used bi-encoder and cross-encoder entity linking
models; the results show that both types of NIL mentions in training data have a
significant influence on the accuracy of NIL prediction. Our code and dataset
can be accessed at https://github.com/solitaryzero/NIL_EL
|
[
"cs.CL"
] | false |
2305.15756
|
2023-05-25T06:11:31Z
|
UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive
Learning Framework for Text-based Recommendation
|
[
"Zhiming Mao",
"Huimin Wang",
"Yiming Du",
"Kam-fai Wong"
] |
Prior studies have shown that pretrained language models (PLM) can boost the
performance of text-based recommendation. In contrast to previous works that
either use PLM to encode user history as a whole input text, or impose an
additional aggregation network to fuse multi-turn history representations, we
propose a unified local- and global-attention Transformer encoder to better
model two-level contexts of user history. Moreover, conditioned on user history
encoded by Transformer encoders, our framework leverages Transformer decoders
to estimate the language perplexity of candidate text items, which can serve as
a straightforward yet significant contrastive signal for user-item text
matching. Based on this, our framework, UniTRec, unifies the contrastive
objectives of discriminative matching scores and candidate text perplexity to
jointly enhance text-based recommendation. Extensive evaluation shows that
UniTRec delivers SOTA performance on three text-based recommendation tasks.
Code is available at https://github.com/Veason-silverbullet/UniTRec.
|
[
"cs.CL"
] | false |
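One way to picture the perplexity-based matching signal in the UniTRec
abstract above, as a sketch: rank candidate items by the perplexity a
Transformer decoder assigns to their text conditioned on the encoded user
history. The decoder call signature here is hypothetical.

import torch
import torch.nn.functional as F

def candidate_perplexity(decoder, history_state, candidate_ids):
    """Perplexity of one candidate's text given the encoded user history."""
    # Assumed: decoder(ids, encoder_state=...) -> next-token logits (1, T-1, V).
    logits = decoder(candidate_ids[:, :-1], encoder_state=history_state)
    logp = F.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, candidate_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return torch.exp(-tok_logp.mean(dim=-1))  # lower = better user-item match

def rank_candidates(decoder, history_state, candidates):
    ppl = torch.cat([candidate_perplexity(decoder, history_state, ids)
                     for ids in candidates])
    return torch.argsort(ppl)  # candidate indices, best match first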
2305.15891
|
2023-05-25T09:44:44Z
|
CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset
|
[
"Hanchong Zhang",
"Jieyu Li",
"Lu Chen",
"Ruisheng Cao",
"Yunyan Zhang",
"Yu Huang",
"Yefeng Zheng",
"Kai Yu"
] |
The cross-domain text-to-SQL task aims to build a system that can parse user
questions into SQL on completely unseen databases, and the single-domain
text-to-SQL task evaluates the performance on identical databases. Both of
these setups confront unavoidable difficulties in real-world applications. To
this end, we introduce the cross-schema text-to-SQL task, where the databases
of evaluation data are different from that in the training data but come from
the same domain. Furthermore, we present CSS, a large-scale CrosS-Schema
Chinese text-to-SQL dataset, to support corresponding studies. CSS originally
consisted of 4,340 question/SQL pairs across 2 databases. In order to
generalize models to different medical systems, we extend CSS and create 19 new
databases along with 29,280 corresponding dataset examples. Moreover, CSS is
also a large corpus for single-domain Chinese text-to-SQL studies. We present
the data collection approach and a series of analyses of the data statistics.
To show the potential and usefulness of CSS, benchmarking baselines have been
conducted and reported. Our dataset is publicly available at
\url{https://huggingface.co/datasets/zhanghanchong/css}.
|
[
"cs.CL"
] | false |
2305.15895
|
2023-05-25T09:49:40Z
|
Collective Knowledge Graph Completion with Mutual Knowledge Distillation
|
[
"Weihang Zhang",
"Ovidiu Serban",
"Jiahao Sun",
"Yi-ke Guo"
] |
Knowledge graph completion (KGC), the task of predicting missing information
based on the existing relational data inside a knowledge graph (KG), has drawn
significant attention in recent years. However, the predictive power of KGC
methods is often limited by the completeness of the existing knowledge graphs
from different sources and languages. In monolingual and multilingual settings,
KGs are potentially complementary to each other. In this paper, we study the
problem of multi-KG completion, where we focus on maximizing the collective
knowledge from different KGs to alleviate the incompleteness of individual KGs.
Specifically, we propose a novel method called CKGC-CKD that uses
relation-aware graph convolutional network encoder models on both individual
KGs and a large fused KG in which seed alignments between KGs are regarded as
edges for message propagation. An additional mutual knowledge distillation
mechanism is also employed to maximize the knowledge transfer between the
models of "global" fused KG and the "local" individual KGs. Experimental
results on multilingual datasets have shown that our method outperforms all
state-of-the-art models in the KGC task.
|
[
"cs.CL"
] | false |
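A small sketch of the fusion step mentioned above: seed alignments between
individual KGs become edges of the fused "global" graph so that messages can
propagate across KGs. The triple layout and the relation name same_as are
illustrative, not the paper's data format.

def fuse_kgs(triples_a, triples_b, seed_alignments, align_rel="same_as"):
    """Fused KG: both triple sets plus bidirectional alignment edges."""
    fused = list(triples_a) + list(triples_b)
    for ent_a, ent_b in seed_alignments:
        fused.append((ent_a, align_rel, ent_b))
        fused.append((ent_b, align_rel, ent_a))
    return fused

# Two tiny monolingual KGs joined by seed alignments:
fused = fuse_kgs([("Paris", "capital_of", "France")],
                 [("Parigi", "capitale_di", "Francia")],
                 [("Paris", "Parigi"), ("France", "Francia")])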
2305.15908
|
2023-05-25T10:13:53Z
|
Response Generation in Longitudinal Dialogues: Which Knowledge
Representation Helps?
|
[
"Seyed Mahed Mousavi",
"Simone Caldarella",
"Giuseppe Riccardi"
] |
Longitudinal Dialogues (LD) are the most challenging type of conversation for
human-machine dialogue systems. LDs include the recollections of events,
personal thoughts, and emotions specific to each individual in a sparse
sequence of dialogue sessions. Dialogue systems designed for LDs should
uniquely interact with the users over multiple sessions and long periods of
time (e.g. weeks), and engage them in personal dialogues to elaborate on their
feelings, thoughts, and real-life events. In this paper, we study the task of
response generation in LDs. We evaluate whether general-purpose Pre-trained
Language Models (PLM) are appropriate for this purpose. We fine-tune two PLMs,
GePpeTto (GPT-2) and iT5, using a dataset of LDs. We experiment with different
representations of the personal knowledge extracted from LDs for grounded
response generation, including the graph representation of the mentioned events
and participants. We evaluate the performance of the models via automatic
metrics and the contribution of the knowledge via the Integrated Gradients
technique. We categorize the natural language generation errors via human
evaluations of contextualization, appropriateness and engagement of the user.
|
[
"cs.CL"
] | false |
2305.16023
|
2023-05-25T13:05:52Z
|
NaSGEC: a Multi-Domain Chinese Grammatical Error Correction Dataset from
Native Speaker Texts
|
[
"Yue Zhang",
"Bo Zhang",
"Haochen Jiang",
"Zhenghua Li",
"Chen Li",
"Fei Huang",
"Min Zhang"
] |
We introduce NaSGEC, a new dataset to facilitate research on Chinese
grammatical error correction (CGEC) for native speaker texts from multiple
domains. Previous CGEC research primarily focuses on correcting texts from a
single domain, especially learner essays. To broaden the target domain, we
annotate multiple references for 12,500 sentences from three native domains,
i.e., social media, scientific writing, and examination. We provide solid
benchmark results for NaSGEC by employing cutting-edge CGEC models and
different training data. We further perform detailed analyses of the
connections and gaps between our domains from both empirical and statistical
views. We hope this work can inspire future studies on an important but
under-explored direction--cross-domain GEC.
|
[
"cs.CL"
] | false |
2305.16106
|
2023-05-25T14:38:05Z
|
Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System
|
[
"Shimin Li",
"Xiaotian Zhang",
"Yanjun Zheng",
"Linyang Li",
"Xipeng Qiu"
] |
Dialogue data in real scenarios tend to be sparsely available, leaving
data-starved end-to-end dialogue systems inadequately trained. We discover that
data utilization efficiency in low-resource scenarios can be enhanced by mining
alignment information between uncertain utterances and deterministic dialogue states.
Therefore, we innovatively implement dual learning in task-oriented dialogues
to exploit the correlation of heterogeneous data. In addition, the one-to-one
duality is converted into a multijugate duality to reduce the influence of
spurious correlations in dual training for generalization. Without introducing
additional parameters, our method could be implemented in arbitrary networks.
Extensive empirical analyses demonstrate that our proposed method improves the
effectiveness of end-to-end task-oriented dialogue systems under multiple
benchmarks and obtains state-of-the-art results in low-resource scenarios.
|
[
"cs.CL"
] | false |
2305.16157
|
2023-05-25T15:23:29Z
|
Training Data Extraction From Pre-trained Language Models: A Survey
|
[
"Shotaro Ishihara"
] |
As the deployment of pre-trained language models (PLMs) expands, pressing
security concerns have arisen regarding the potential for malicious extraction
of training data, posing a threat to data privacy. This study is the first to
provide a comprehensive survey of training data extraction from PLMs. Our
review covers more than 100 key papers in fields such as natural language
processing and security. First, preliminary knowledge is recapped and a
taxonomy of various definitions of memorization is presented. The approaches
for attack and defense are then systemized. Furthermore, the empirical findings
of several quantitative studies are highlighted. Finally, future research
directions based on this review are suggested.
|
[
"cs.CL"
] | false |
2305.16166
|
2023-05-25T15:26:13Z
|
Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis
|
[
"Xuming Hu",
"Zhijiang Guo",
"Zhiyang Teng",
"Irwin King",
"Philip S. Yu"
] |
Multimodal relation extraction (MRE) is the task of identifying the semantic
relationships between two entities based on the context of the sentence-image
pair. Existing retrieval-augmented approaches have mainly focused on modeling the
retrieved textual knowledge, but this may not be able to accurately identify
complex relations. To improve the prediction, this research proposes to
retrieve textual and visual evidence based on the object, sentence, and whole
image. We further develop a novel approach to synthesize the object-level,
image-level, and sentence-level information for better reasoning between the
same and different modalities. Extensive experiments and analyses show that the
proposed method is able to effectively select and compare evidence across
modalities and significantly outperforms state-of-the-art models.
|
[
"cs.CL"
] | false |
2305.16171
|
2023-05-25T15:30:31Z
|
Multi-lingual and Multi-cultural Figurative Language Understanding
|
[
"Anubha Kabra",
"Emmy Liu",
"Simran Khanuja",
"Alham Fikri Aji",
"Genta Indra Winata",
"Samuel Cahyawijaya",
"Anuoluwapo Aremu",
"Perez Ogayo",
"Graham Neubig"
] |
Figurative language permeates human communication, but at the same time is
relatively understudied in NLP. Datasets have been created in English to
accelerate progress towards measuring and improving figurative language
processing in language models (LMs). However, the use of figurative language is
an expression of our cultural and societal experiences, making it difficult for
these phrases to be universally applicable. In this work, we create a
figurative language inference dataset, MABL, for seven diverse
languages associated with a variety of cultures: Hindi, Indonesian, Javanese,
Kannada, Sundanese, Swahili and Yoruba. Our dataset reveals that each language
relies on cultural and regional concepts for figurative expressions, with the
highest overlap between languages originating from the same region. We assess
multilingual LMs' abilities to interpret figurative language in zero-shot and
few-shot settings. All languages exhibit a significant deficiency compared to
English, with variations in performance reflecting the availability of
pre-training and fine-tuning data, emphasizing the need for LMs to be exposed
to a broader range of linguistic and cultural variation during training.
|
[
"cs.CL"
] | false |
2305.16252
|
2023-05-25T17:06:34Z
|
Overcoming Catastrophic Forgetting in Massively Multilingual Continual
Learning
|
[
"Genta Indra Winata",
"Lingjue Xie",
"Karthik Radhakrishnan",
"Shijie Wu",
"Xisen Jin",
"Pengxiang Cheng",
"Mayank Kulkarni",
"Daniel Preotiuc-Pietro"
] |
Real-life multilingual systems should be able to efficiently incorporate new
languages as data distributions fed to the system evolve and shift over time.
To do this, systems need to handle the issue of catastrophic forgetting, where
the model performance drops for languages or tasks seen further in its past. In
this paper, we study catastrophic forgetting, as well as methods to minimize
this, in a massively multilingual continual learning framework involving up to
51 languages and covering both classification and sequence labeling tasks. We
present LR ADJUST, a learning rate scheduling method that is simple, yet
effective in preserving new information without strongly overwriting past
knowledge. Furthermore, we show that this method is effective across multiple
continual learning approaches. Finally, we provide further insights into the
dynamics of catastrophic forgetting in this massively multilingual setup.
|
[
"cs.CL"
] | false |
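The abstract above does not spell out the LR ADJUST schedule, so the decay
rule below is only an assumed illustration of the stated idea: shrink the
learning rate as each new language arrives so that later updates overwrite
less past knowledge.

def adjusted_lr(base_lr, new_languages_seen, decay=0.5, floor=1e-6):
    """Learning rate after `new_languages_seen` languages have been added."""
    return max(base_lr * decay ** new_languages_seen, floor)

# Before training on the k-th newly added language (PyTorch-style optimizer):
# for group in optimizer.param_groups:
#     group["lr"] = adjusted_lr(3e-5, k)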
2305.16302
|
2023-05-25T17:56:04Z
|
Cross-Lingual Knowledge Distillation for Answer Sentence Selection in
Low-Resource Languages
|
[
"Shivanshu Gupta",
"Yoshitomo Matsubara",
"Ankit Chadha",
"Alessandro Moschitti"
] |
While impressive performance has been achieved on the task of Answer Sentence
Selection (AS2) for English, the same does not hold for languages that lack
large labeled datasets. In this work, we propose Cross-Lingual Knowledge
Distillation (CLKD) from a strong English AS2 teacher as a method to train AS2
models for low-resource languages without the need for labeled data
for the target language. To evaluate our method, we introduce 1) Xtr-WikiQA, a
translation-based WikiQA dataset for 9 additional languages, and 2) TyDi-AS2, a
multilingual AS2 dataset with over 70K questions spanning 8 typologically
diverse languages. We conduct extensive experiments on Xtr-WikiQA and TyDi-AS2
with multiple teachers, diverse monolingual and multilingual pretrained
language models (PLMs) as students, and both monolingual and multilingual
training. The results demonstrate that CLKD either outperforms or rivals even
supervised fine-tuning with the same amount of labeled data and a combination
of machine translation and the teacher model. Our method can potentially enable
stronger AS2 models for low-resource languages, while TyDi-AS2 can serve as the
largest multilingual AS2 dataset for further studies in the research community.
|
[
"cs.CL"
] | false |
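A sketch of the cross-lingual distillation objective implied above: an English
AS2 teacher scores question/candidate pairs (e.g. on machine-translated text)
and a multilingual student is regressed onto those scores for the
target-language inputs. Both model call signatures are assumptions.

import torch
import torch.nn.functional as F

def clkd_step(student, teacher, batch, optimizer):
    with torch.no_grad():
        teacher_scores = teacher(batch["question_en"], batch["candidates_en"])
    student_scores = student(batch["question_tgt"], batch["candidates_tgt"])
    loss = F.mse_loss(student_scores, teacher_scores)  # match teacher's ranking
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()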
2305.16357
|
2023-05-25T06:25:16Z
|
EDM3: Event Detection as Multi-task Text Generation
|
[
"Ujjwala Anantheswaran",
"Himanshu Gupta",
"Mihir Parmar",
"Kuntal Kumar Pal",
"Chitta Baral"
] |
Event detection refers to identifying event occurrences in a text and
comprises two subtasks: event identification and classification. We present
EDM3, a novel approach for Event Detection that formulates three generative
tasks: identification, classification, and combined detection. We show that
EDM3 helps to learn transferable knowledge that can be leveraged to perform
Event Detection and its subtasks concurrently, mitigating the error propagation
inherent in pipelined approaches. Unlike previous dataset- or domain-specific
approaches, EDM3 utilizes the existing knowledge of language models, allowing
it to be trained over any classification schema. We evaluate EDM3 on multiple
event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3
outperforms 1) single-task performance by 8.4% on average and 2) multi-task
performance without instructional prompts by 2.4% on average. We obtain SOTA
results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other
datasets. We analyze our approach to demonstrate its efficacy in low-resource
and multi-sentence settings. We also show the effectiveness of this approach on
non-standard event configurations such as multi-word and multi-class event
triggers. Overall, our results show that EDM3 is a promising approach for Event
Detection that has the potential for real-world applications.
|
[
"cs.CL"
] | false |
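The three generative formulations can be pictured as instruction-style prompts
for a single seq2seq model; the wording and target formats below are invented
for illustration and are not the paper's exact prompts.

def identification_prompt(text):
    return f"Instruction: list the event trigger words.\nText: {text}"

def classification_prompt(text, trigger):
    return f"Instruction: classify the event type of '{trigger}'.\nText: {text}"

def detection_prompt(text):
    return f"Instruction: list each event trigger with its event type.\nText: {text}"

# One model is trained on all three tasks, with targets such as
# "bombing", "Attack", and "bombing: Attack" respectively.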
2305.16407
|
2023-05-25T18:18:42Z
|
Script Normalization for Unconventional Writing of Under-Resourced
Languages in Bilingual Communities
|
[
"Sina Ahmadi",
"Antonios Anastasopoulos"
] |
The wide accessibility of social media has provided linguistically
under-represented communities with an extraordinary opportunity to create
content in their native languages. This, however, comes with certain challenges
in script normalization, particularly where the speakers of a language in a
bilingual community rely on another script or orthography to write their native
language. This paper addresses the problem of script normalization for several
such languages that are mainly written in a Perso-Arabic script. Using
synthetic data with various levels of noise and a transformer-based model, we
demonstrate that the problem can be effectively remediated. We conduct a
small-scale evaluation on real data as well. Our experiments indicate that
script normalization is also beneficial to improve the performance of
downstream tasks such as machine translation and language identification.
|
[
"cs.CL"
] | false |
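Training pairs for such a normaliser can be synthesised by corrupting clean
text at controlled noise levels, roughly as below; the confusion table is a
two-entry stand-in (Arabic kaf/yeh vs. their Persian counterparts), not the
full substitution set used in the paper.

import random

def corrupt(text, confusions, noise_level=0.2, seed=0):
    """Return a (noisy, clean) training pair at a given character-noise level."""
    rng = random.Random(seed)
    noisy = "".join(
        rng.choice(confusions[ch])
        if ch in confusions and rng.random() < noise_level else ch
        for ch in text
    )
    return noisy, text

confusions = {"\u0643": ["\u06a9"],   # Arabic kaf -> Persian keheh
              "\u064a": ["\u06cc"]}   # Arabic yeh -> Persian yeh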
2305.16444
|
2023-05-25T19:42:51Z
|
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by
Rewriting Text
|
[
"Ashim Gupta",
"Carter Wood Blum",
"Temma Choji",
"Yingjie Fei",
"Shalin Shah",
"Alakananda Vempala",
"Vivek Srikumar"
] |
Can language models transform inputs to protect text classifiers against
adversarial attacks? In this work, we present ATINTER, a model that intercepts
and learns to rewrite adversarial inputs to make them non-adversarial for a
downstream text classifier. Our experiments on four datasets and five attack
mechanisms reveal that ATINTER is effective at providing better adversarial
robustness than existing defense approaches, without compromising task
accuracy. For example, on sentiment classification using the SST-2 dataset, our
method improves the adversarial accuracy over the best existing defense
approach by more than 4% with a smaller decrease in task accuracy (0.5% vs
2.5%). Moreover, we show that ATINTER generalizes across multiple downstream
tasks and classifiers without having to explicitly retrain it for those
settings. Specifically, we find that when ATINTER is trained to remove
adversarial perturbations for the sentiment classification task on the SST-2
dataset, it even transfers to a semantically different task of news
classification (on AGNews) and improves the adversarial robustness by more than
10%.
|
[
"cs.CL"
] | false |
2305.16490
|
2023-05-25T21:40:58Z
|
Prototype-Based Interpretability for Legal Citation Prediction
|
[
"Chu Fei Luo",
"Rohan Bhambhoria",
"Samuel Dahan",
"Xiaodan Zhu"
] |
Deep learning has made significant progress in the past decade, and
demonstrates potential to solve problems with extensive social impact. In
high-stakes decision making areas such as law, experts often require
interpretability for automatic systems to be utilized in practical settings. In
this work, we attempt to address these requirements applied to the important
problem of legal citation prediction (LCP). We design the task with parallels
to the thought-process of lawyers, i.e., with reference to both precedents and
legislative provisions. After initial experimental results, we refine the
target citation predictions with the feedback of legal experts. Additionally,
we introduce a prototype architecture to add interpretability, achieving strong
performance while adhering to decision parameters used by lawyers. Our study
builds on and leverages the state-of-the-art language processing models for
law, while addressing vital considerations for high-stakes tasks with practical
societal impact.
|
[
"cs.CL"
] | false |
2305.16503
|
2023-05-25T22:08:57Z
|
IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks
|
[
"Xuanli He",
"Jun Wang",
"Benjamin Rubinstein",
"Trevor Cohn"
] |
Backdoor attacks are an insidious security threat against machine learning
models. Adversaries can manipulate the predictions of compromised models by
inserting triggers into the training phase. Various backdoor attacks have been
devised which can achieve nearly perfect attack success without affecting model
predictions for clean inputs. Means of mitigating such vulnerabilities are
underdeveloped, especially in natural language processing. To fill this gap, we
introduce IMBERT, which uses either gradients or self-attention scores derived
from victim models to self-defend against backdoor attacks at inference time.
Our empirical studies demonstrate that IMBERT can effectively identify up to
98.5% of inserted triggers. Thus, it significantly reduces the attack success
rate while attaining competitive accuracy on the clean dataset across
widespread insertion-based attacks compared to two baselines. Finally, we show
that our approach is model-agnostic, and can be easily ported to several
pre-trained transformer models.
|
[
"cs.CL"
] | false |
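One simple reading of the self-attention variant above, as a sketch: tokens
whose attention mass from [CLS] is a strong statistical outlier are treated as
likely triggers and masked before classification. The z-score rule and
threshold are assumptions; the paper also considers gradient-based scores.

def suspicious_tokens(cls_attention, threshold=3.0):
    """Indices whose [CLS]-attention is a z-score outlier (one float per token)."""
    mean = sum(cls_attention) / len(cls_attention)
    var = sum((a - mean) ** 2 for a in cls_attention) / len(cls_attention)
    std = var ** 0.5 or 1e-9  # avoid division by zero on constant attention
    return [i for i, a in enumerate(cls_attention) if (a - mean) / std > threshold]

def defend(tokens, cls_attention, mask_token="[MASK]"):
    flagged = set(suspicious_tokens(cls_attention))
    return [mask_token if i in flagged else tok for i, tok in enumerate(tokens)]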
2305.16519
|
2023-05-25T22:54:13Z
|
The Dangers of trusting Stochastic Parrots: Faithfulness and Trust in
Open-domain Conversational Question Answering
|
[
"Sabrina Chiesurin",
"Dimitris Dimakopoulos",
"Marco Antonio Sobrevilla Cabezudo",
"Arash Eshghi",
"Ioannis Papaioannou",
"Verena Rieser",
"Ioannis Konstas"
] |
Large language models are known to produce output which sounds fluent and
convincing, but is also often wrong, e.g. "unfaithful" with respect to a
rationale as retrieved from a knowledge base. In this paper, we show that
task-based systems which exhibit certain advanced linguistic dialog behaviors,
such as lexical alignment (repeating what the user said), are in fact preferred
and trusted more, whereas other phenomena, such as pronouns and ellipsis, are
dispreferred. We use open-domain question answering systems as our test-bed
for task-based dialog generation and compare several open- and closed-book
models. Our results highlight the danger of systems that appear to be
trustworthy by parroting user input while providing an unfaithful response.
|
[
"cs.CL"
] | false |
2305.15673
|
2023-05-25T02:45:22Z
|
BookGPT: A General Framework for Book Recommendation Empowered by Large
Language Model
|
[
"Aakas Zhiyuli",
"Yanfang Chen",
"Xuan Zhang",
"Xun Liang"
] |
With the continuous development of large language model (LLM) technology,
represented by generative pretrained transformers (GPTs), many classic
scenarios in various fields have re-emerged with new
opportunities. This paper takes ChatGPT as the modeling object, incorporates
LLM technology into the typical book resource understanding and recommendation
scenario for the first time, and puts it into practice. By building a
ChatGPT-like book recommendation system (BookGPT) framework based on ChatGPT,
this paper attempts to apply ChatGPT to recommendation modeling for three
typical tasks: book rating recommendation, user rating recommendation, and book
summary recommendation, and explores the feasibility of LLM technology in book
recommendation scenarios. At the same time, based on different evaluation
schemes for book recommendation tasks and the existing classic recommendation
models, this paper discusses the advantages and disadvantages of the BookGPT in
book recommendation scenarios and analyzes the opportunities and improvement
directions for subsequent LLMs in these scenarios.
|
[
"cs.IR",
"cs.CL"
] | false |
2305.15678
|
2023-05-25T03:03:29Z
|
Revisiting non-English Text Simplification: A Unified Multilingual
Benchmark
|
[
"Michael J. Ryan",
"Tarek Naous",
"Wei Xu"
] |
Recent advancements in high-quality, large-scale English resources have
pushed the frontier of English Automatic Text Simplification (ATS) research.
However, less work has been done on multilingual text simplification due to the
lack of a diverse evaluation benchmark that covers complex-simple sentence
pairs in many languages. This paper introduces the MultiSim benchmark, a
collection of 27 resources in 12 distinct languages containing over 1.7 million
complex-simple sentence pairs. This benchmark will encourage research in
developing more effective multilingual text simplification models and
evaluation metrics. Our experiments using MultiSim with pre-trained
multilingual language models reveal exciting performance improvements from
multilingual training in non-English settings. We observe strong performance
from Russian in zero-shot cross-lingual transfer to low-resource languages. We
further show that few-shot prompting with BLOOM-176b achieves comparable
quality to reference simplifications, outperforming fine-tuned models in most
languages. We validate these findings through human evaluation.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.15749
|
2023-05-25T05:57:54Z
|
Multilingual Text-to-Speech Synthesis for Turkic Languages Using
Transliteration
|
[
"Rustem Yeshpanov",
"Saida Mussakhojayeva",
"Yerbolat Khassanov"
] |
This work aims to build a multilingual text-to-speech (TTS) synthesis system
for ten lower-resourced Turkic languages: Azerbaijani, Bashkir, Kazakh, Kyrgyz,
Sakha, Tatar, Turkish, Turkmen, Uyghur, and Uzbek. We specifically target the
zero-shot learning scenario, where a TTS model trained using the data of one
language is applied to synthesise speech for other, unseen languages. An
end-to-end TTS system based on the Tacotron 2 architecture was trained using
only the available data of the Kazakh language. To generate speech for the
other Turkic languages, we first mapped the letters of the Turkic alphabets
onto the symbols of the International Phonetic Alphabet (IPA), which were then
converted to the Kazakh alphabet letters. To demonstrate the feasibility of the
proposed approach, we evaluated the multilingual Turkic TTS model subjectively
and obtained promising results. To enable replication of the experiments, we
make our code and dataset publicly available in our GitHub repository.
|
[
"eess.AS",
"cs.CL"
] | false |
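The two-step letter mapping described above can be pictured with a toy table;
the entries below are tiny illustrative fragments, not the full alphabet
mappings used in the paper.

TURKIC_TO_IPA = {"ş": "ʃ", "ç": "tʃ", "ı": "ɯ", "ö": "ø", "ü": "y"}
IPA_TO_KAZAKH = {"ʃ": "ш", "tʃ": "ч", "ɯ": "ы", "ø": "ө", "y": "ү"}

def to_kazakh_script(text):
    """Map source letters to IPA symbols, then IPA to Kazakh Cyrillic letters."""
    ipa = [TURKIC_TO_IPA.get(ch, ch) for ch in text.lower()]
    return "".join(IPA_TO_KAZAKH.get(sym, sym) for sym in ipa)

print(to_kazakh_script("ışık"))  # unmapped letters pass through unchanged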
2305.15757
|
2023-05-25T06:15:53Z
|
Healing Unsafe Dialogue Responses with Weak Supervision Signals
|
[
"Zi Liang",
"Pinghui Wang",
"Ruofei Zhang",
"Shuo Zhang",
"Xiaofan Ye Yi Huang",
"Junlan Feng"
] |
Recent years have seen increasing concerns about the unsafe response
generation of large-scale dialogue systems, where agents will learn offensive
or biased behaviors from real-world corpora. Some methods have been proposed to
address the above issue by detecting and replacing unsafe training examples in
a pipeline style. Though effective, they suffer from a high annotation cost and
adapt poorly to unseen scenarios as well as adversarial attacks. Besides,
neglecting to provide safe responses (e.g. simply replacing them with templates)
causes an information-missing problem in dialogues. To address these issues, we
propose an unsupervised pseudo-label sampling method, TEMP, that can
automatically assign potential safe responses. Specifically, our TEMP method
groups responses into several clusters and samples multiple labels with an
adaptively sharpened sampling strategy, inspired by the observation that unsafe
samples in the clusters are usually few and distributed in the tail. Extensive
experiments in chitchat and task-oriented dialogues show that our TEMP
outperforms state-of-the-art models with weak supervision signals and obtains
comparable results under unsupervised learning settings.
|
[
"cs.CL",
"cs.AI"
] | false |
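The adaptively sharpened sampling can be sketched as follows: within a
response cluster, frequencies are raised to the power 1/temperature so that
pseudo-labels concentrate on the (presumably safe) head and away from the
sparse unsafe tail. The hyperparameters are illustrative, not the paper's.

import random
from collections import Counter

def sharpened_pseudo_labels(cluster_responses, temperature=0.5, k=3):
    """Sample k pseudo-label responses, sharpened toward frequent ones."""
    counts = Counter(cluster_responses)
    items = list(counts)
    weights = [counts[r] ** (1.0 / temperature) for r in items]  # sharpen head
    return random.choices(items, weights=weights, k=k)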