arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---
2305.12284
|
2023-05-20T21:35:29Z
|
Safely Learning Dynamical Systems
|
[
"Amir Ali Ahmadi",
"Abraar Chaudhry",
"Vikas Sindhwani",
"Stephen Tu"
] |
A fundamental challenge in learning an unknown dynamical system is to reduce
model uncertainty by making measurements while maintaining safety. In this
work, we formulate a mathematical definition of what it means to safely learn a
dynamical system by sequentially deciding where to initialize the next
trajectory. In our framework, the state of the system is required to stay
within a safety region for a horizon of $T$ time steps under the action of all
dynamical systems that (i) belong to a given initial uncertainty set, and (ii)
are consistent with the information gathered so far.
For our first set of results, we consider the setting of safely learning a
linear dynamical system involving $n$ states. For the case $T=1$, we present a
linear programming-based algorithm that either safely recovers the true
dynamics from at most $n$ trajectories, or certifies that safe learning is
impossible. For $T=2$, we give a semidefinite representation of the set of safe
initial conditions and show that $\lceil n/2 \rceil$ trajectories generically
suffice for safe learning. Finally, for $T = \infty$, we provide semidefinite
representable inner approximations of the set of safe initial conditions and
show that one trajectory generically suffices for safe learning.
Our second set of results concerns the problem of safely learning a general
class of nonlinear dynamical systems. For the case $T=1$, we give a
second-order cone programming based representation of the set of safe initial
conditions. For $T=\infty$, we provide semidefinite representable inner
approximations to the set of safe initial conditions. We show how one can
safely collect trajectories and fit a polynomial model of the nonlinear
dynamics that is consistent with the initial uncertainty set and best agrees
with the observations.
|
[
"math.OC",
"cs.LG",
"cs.SY",
"eess.SY",
"math.DS"
] | false |
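An illustrative aside on the $T=1$ safety check described in the abstract above: a safe initial condition $x_0$ must keep $Ax_0$ inside the safety region for every $A$ consistent with the uncertainty set. The paper handles this with linear programming; the minimal numpy sketch below instead assumes an entrywise-box uncertainty set and a box safety region, where the worst case has a closed form. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def is_safe_T1(x0, A_lo, A_hi, s=1.0):
    """Check ||A @ x0||_inf <= s for every A with A_lo <= A <= A_hi (entrywise).

    For each row, the worst-case value of sum_j a_j * x0_j over the box
    [lo_j, hi_j] is attained at a vertex, chosen coordinate-wise by the
    sign of x0_j.
    """
    hi = np.where(x0 >= 0, A_hi, A_lo) @ x0   # maximizes each row's dot product
    lo = np.where(x0 >= 0, A_lo, A_hi) @ x0   # minimizes each row's dot product
    return bool(np.all(hi <= s) and np.all(lo >= -s))

# Example: zero nominal dynamics with +/-0.4 entrywise uncertainty in R^2.
A_lo, A_hi = -0.4 * np.ones((2, 2)), 0.4 * np.ones((2, 2))
print(is_safe_T1(np.array([1.0, 0.5]), A_lo, A_hi))  # True: worst case 0.6 <= 1
```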
2305.12216
|
2023-05-20T15:46:55Z
|
On First-Order Meta-Reinforcement Learning with Moreau Envelopes
|
[
"Mohammad Taha Toghani",
"Sebastian Perez-Salazar",
"César A. Uribe"
] |
Meta-Reinforcement Learning (MRL) is a promising framework for training
agents that can quickly adapt to new environments and tasks. In this work, we
study the MRL problem under the policy gradient formulation, where we propose a
novel algorithm that uses Moreau envelope surrogate regularizers to jointly
learn a meta-policy that is adjustable to the environment of each individual
task. Our algorithm, called Moreau Envelope Meta-Reinforcement Learning
(MEMRL), learns a meta-policy that can adapt to a distribution of tasks by
efficiently updating the policy parameters using a combination of
gradient-based optimization and Moreau Envelope regularization. Moreau
Envelopes provide a smooth approximation of the policy optimization problem,
which enables us to apply standard optimization techniques and converge to an
appropriate stationary point. We provide a detailed analysis of the MEMRL
algorithm, where we show a sublinear convergence rate to a first-order
stationary point for non-convex policy gradient optimization. We finally show
the effectiveness of MEMRL on a multi-task 2D-navigation problem.
|
[
"cs.LG",
"cs.AI",
"cs.RO",
"cs.SY",
"eess.SY",
"math.OC"
] | false |
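As a hedged sketch of the Moreau-envelope idea in the abstract above: each task objective $f_i$ is replaced by the surrogate $\min_w f_i(w) + (\lambda/2)\|w-\theta\|^2$, which anchors task parameters to the meta-policy $\theta$. The numpy snippet below approximates that inner proximal step for a generic differentiable loss; `grad_f`, `lam`, and the step sizes are placeholders, not the paper's choices.

```python
import numpy as np

def prox_adapt(theta, grad_f, lam=1.0, lr=0.1, steps=50):
    """Approximately solve min_w f(w) + (lam/2) * ||w - theta||^2 by gradient
    descent, i.e. adapt task parameters w while staying near the meta-policy."""
    w = theta.copy()
    for _ in range(steps):
        w -= lr * (grad_f(w) + lam * (w - theta))  # gradient of the surrogate
    return w

# Toy quadratic task: f(w) = 0.5 * ||w - target||^2.
target = np.array([2.0, -1.0])
w_star = prox_adapt(np.zeros(2), lambda w: w - target)
print(w_star)  # ~[1.0, -0.5]: the surrogate minimizer target/2 for lam=1
```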
2305.12327
|
2023-05-21T03:14:42Z
|
Coronary Artery Semantic Labeling using Edge Attention Graph Matching
Network
|
[
"Chen Zhao",
"Zhihui Xu",
"Guang-Uei Hung",
"Weihua Zhou"
] |
Coronary artery disease (CAD) is one of the leading causes of death
worldwide. The presence of atherosclerotic lesions in coronary arteries is the
underlying pathophysiological basis of CAD, and accurate extraction of
individual arterial branches using invasive coronary angiography (ICA) is
crucial for stenosis detection and CAD diagnosis. We propose an innovative
approach called the Edge Attention Graph Matching Network (EAGMN) for coronary
artery semantic labeling. By converting the coronary artery semantic
segmentation task into a graph node similarity comparison task, identifying the
node-to-node correspondence assigns semantic labels to each arterial
branch. More specifically, the EAGMN utilizes the association graph constructed
from the two individual graphs as input. Experimental results indicate the
EAGMN achieved a weighted accuracy of 0.8653, a weighted precision of 0.8656, a
weighted recall of 0.8653 and a weighted F1-score of 0.8643. Furthermore, we
employ ZORRO to provide interpretability and explainability of the graph
matching for artery semantic labeling. These findings highlight the potential
of the EAGMN for accurate and efficient coronary artery semantic labeling using
ICAs. By leveraging the inherent characteristics of ICAs and incorporating
graph matching techniques, our proposed model provides a promising solution for
improving CAD diagnosis and treatment.
|
[
"cs.CV"
] | false |
2305.12344
|
2023-05-21T04:41:52Z
|
YOLOv3 with Spatial Pyramid Pooling for Object Detection with Unmanned
Aerial Vehicles
|
[
"Wahyu Pebrianto",
"Panca Mudjirahardjo",
"Sholeh Hadi Pramono",
"Rahmadwati",
"Raden Arief Setyawan"
] |
Object detection with Unmanned Aerial Vehicles (UAVs) has attracted much
attention in the research field of computer vision. However, it is not easy to
accurately detect objects in data obtained from UAVs: because UAVs capture
images from very high altitudes, the images are dominated by small objects
that are difficult to detect. Motivated by this challenge, we aim to improve
the performance of the one-stage detector YOLOv3 by adding a Spatial Pyramid
Pooling (SPP) layer at the end of the Darknet-53 backbone to obtain a more
efficient feature extraction process in object detection tasks with UAVs. We
also conducted an evaluation study of different versions of YOLOv3, including
YOLOv3 with SPP, YOLOv3, and YOLOv3-tiny, analyzed on the VisDrone2019-Det
dataset. We show that YOLOv3 with SPP achieves an mAP 0.6% higher than YOLOv3
and 26.6% higher than YOLOv3-tiny at a 640x640 input scale, and it maintains
accuracy across different input image scales better than the other versions.
These results show that adding SPP layers to YOLOv3 can be an efficient
solution for improving the performance of object detection with data
obtained from UAVs.
|
[
"cs.CV"
] | false |
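The SPP layer referenced above concatenates a feature map with max-pooled copies of itself at several kernel sizes, enlarging the receptive field at little cost. A minimal PyTorch sketch, assuming the (5, 9, 13) kernel sizes commonly used in YOLOv3-SPP; this is an illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial Pyramid Pooling block: concatenate the input with max-pools of
    increasing kernel size (stride 1, 'same' padding keeps H x W fixed)."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

x = torch.randn(1, 512, 20, 20)
print(SPP()(x).shape)  # torch.Size([1, 2048, 20, 20]) -- 4x the channels
```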
2305.12354
|
2023-05-21T05:24:43Z
|
Bi-ViT: Pushing the Limit of Vision Transformer Quantization
|
[
"Yanjing Li",
"Sheng Xu",
"Mingbao Lin",
"Xianbin Cao",
"Chuanjian Liu",
"Xiao Sun",
"Baochang Zhang"
] |
Quantization of vision transformers (ViTs) offers a promising prospect for
facilitating the deployment of large pre-trained networks on resource-limited
devices. Fully binarized ViTs (Bi-ViT), which push the quantization of ViTs to
its limit, remain largely unexplored and a very challenging task, due to their
unacceptable performance. Through extensive empirical analyses, we identify that the
severe drop in ViT binarization is caused by attention distortion in
self-attention, which technically stems from the gradient vanishing and ranking
disorder. To address these issues, we first introduce a learnable scaling
factor to reactivate the vanished gradients and illustrate its effectiveness
through theoretical and experimental analyses. We then propose a ranking-aware
distillation method to rectify the disordered ranking in a teacher-student
framework. Bi-ViT achieves significant improvements over popular DeiT and Swin
backbones in terms of Top-1 accuracy and FLOPs. For example, with DeiT-Tiny and
Swin-Tiny, our method significantly outperforms baselines by 22.1% and 21.4%
respectively, while achieving 61.5x and 56.1x theoretical acceleration in terms
of FLOPs compared with real-valued counterparts on ImageNet.
|
[
"cs.CV"
] | false |
2305.12361
|
2023-05-21T06:19:08Z
|
A Dual-level Detection Method for Video Copy Detection
|
[
"Tianyi Wang",
"Feipeng Ma",
"Zhenhua Liu",
"Fengyun Rao"
] |
With the development of multimedia technology, Video Copy Detection has been
a crucial problem for social media platforms. Meta AI held the Video Similarity
Challenge at CVPR 2023 to push the technology forward. In this paper, we share
our winning solutions on both tracks to help progress in this area. For
Descriptor Track, we propose a dual-level detection method with Video Editing
Detection (VED) and Frame Scenes Detection (FSD) to tackle the core challenges
of Video Copy Detection. Experimental results demonstrate the effectiveness and
efficiency of our proposed method. Code is available at
https://github.com/FeipengMa6/VSC22-Submission.
|
[
"cs.CV"
] | false |
2305.12398
|
2023-05-21T08:29:16Z
|
Language Knowledge-Assisted Representation Learning for Skeleton-Based
Action Recognition
|
[
"Haojun Xu",
"Yan Gao",
"Zheng Hui",
"Jie Li",
"Xinbo Gao"
] |
How humans understand and recognize the actions of others is a complex
neuroscientific problem that involves a combination of cognitive mechanisms and
neural networks. Research has shown that humans have action-recognition brain
areas that process top-down attentional information, such as the
temporoparietal association area. Also, humans have brain regions dedicated to
understanding the minds of others and analyzing their intentions, such as the
medial prefrontal cortex of the temporal lobe. Skeleton-based action
recognition creates mappings for the complex connections between the human
skeleton movement patterns and behaviors. Although existing studies encoded
meaningful node relationships and synthesized action representations for
classification with good results, few of them considered incorporating a priori
knowledge to aid potential representation learning for better performance.
We propose LA-GCN, a graph convolution network assisted by large-scale language
model (LLM) knowledge. First, the LLM knowledge is mapped into a priori
global relationship (GPR) topology and a priori category relationship (CPR)
topology between nodes. The GPR guides the generation of new "bone"
representations, aiming to emphasize essential node information from the data
level. The CPR mapping simulates category prior knowledge in human brain
regions, encoded by the PC-AC module and used to add additional
supervision, forcing the model to learn class-distinguishable features. In
addition, to improve information transfer efficiency in topology modeling, we
propose multi-hop attention graph convolution. It aggregates each node's
k-order neighbors simultaneously to speed up model convergence. LA-GCN reaches
state-of-the-art on NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
|
[
"cs.CV"
] | false |
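The multi-hop graph convolution mentioned above aggregates a node's 1..k-hop neighborhoods in a single layer. The numpy sketch below illustrates the plain multi-hop aggregation $\sum_h \hat{A}^h H W_h$ without the attention weighting; the shapes and row normalization are assumptions, not LA-GCN's exact design.

```python
import numpy as np

def multi_hop_gc(H, A, weights):
    """Aggregate 1..k-hop neighborhoods in one layer: sum_h norm(A^h) @ H @ W_h.
    A: adjacency with self-loops; weights: one weight matrix per hop."""
    out = np.zeros((H.shape[0], weights[0].shape[1]))
    A_h = np.eye(A.shape[0])
    for W in weights:
        A_h = A_h @ A                                   # A^h for hop h
        D = np.maximum(A_h.sum(1, keepdims=True), 1e-8)
        out += (A_h / D) @ H @ W                        # row-normalized propagation
    return np.maximum(out, 0.0)                         # ReLU

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float) + np.eye(3)
H = np.random.randn(3, 4)
Ws = [np.random.randn(4, 8) for _ in range(2)]          # hops 1 and 2
print(multi_hop_gc(H, A, Ws).shape)                     # (3, 8)
```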
2305.12410
|
2023-05-21T09:21:41Z
|
DiffUCD: Unsupervised Hyperspectral Image Change Detection with Semantic
Correlation Diffusion Model
|
[
"Xiangrong Zhang",
"Shunli Tian",
"Guanchun Wang",
"Huiyu Zhou",
"Licheng Jiao"
] |
Hyperspectral image change detection (HSI-CD) has emerged as a crucial
research area in remote sensing due to its ability to detect subtle changes on
the earth's surface. Recently, denoising diffusion probabilistic models
(DDPM) have demonstrated remarkable performance in the generative domain. Apart
from their image generation capability, the denoising process in diffusion
models can comprehensively account for the semantic correlation of
spectral-spatial features in HSI, resulting in the retrieval of semantically
relevant features in the original image. In this work, we extend the diffusion
model's application to the HSI-CD field and propose a novel unsupervised HSI-CD
method with a semantic correlation diffusion model (DiffUCD). Specifically, the semantic
correlation diffusion model (SCDM) leverages abundant unlabeled samples and
fully accounts for the semantic correlation of spectral-spatial features, which
mitigates pseudo change between multi-temporal images arising from inconsistent
imaging conditions. Besides, objects with the same semantic concept at the same
spatial location may exhibit inconsistent spectral signatures at different
times, resulting in pseudo change. To address this problem, we propose a
cross-temporal contrastive learning (CTCL) mechanism that aligns the spectral
feature representations of unchanged samples. By doing so, the spectral
difference invariant features caused by environmental changes can be obtained.
Experiments conducted on three publicly available datasets demonstrate that the
proposed method outperforms the other state-of-the-art unsupervised methods in
terms of Overall Accuracy (OA), Kappa Coefficient (KC), and F1 scores,
achieving improvements of approximately 3.95%, 8.13%, and 4.45%, respectively.
Notably, our method can achieve comparable results to those fully supervised
methods requiring numerous annotated samples.
|
[
"cs.CV"
] | false |
2305.12452
|
2023-05-21T13:14:28Z
|
Advancing Referring Expression Segmentation Beyond Single Image
|
[
"Yixuan Wu",
"Zhao Zhang",
"Xie Chi",
"Feng Zhu",
"Rui Zhao"
] |
Referring Expression Segmentation (RES) is a widely explored multi-modal
task, which endeavors to segment the pre-existing object within a single image
with a given linguistic expression. However, in broader real-world scenarios,
it is not always possible to determine if the described object exists in a
specific image. Typically, we have a collection of images, some of which may
contain the described objects. The current RES setting curbs its practicality
in such situations. To overcome this limitation, we propose a more realistic
and general setting, named Group-wise Referring Expression Segmentation (GRES),
which expands RES to a collection of related images, allowing the described
objects to be present in a subset of input images. To support this new setting,
we introduce an elaborately compiled dataset named Grouped Referring Dataset
(GRD), containing complete group-wise annotations of target objects described
by given expressions. We also present a baseline method named Grouped Referring
Segmenter (GRSer), which explicitly captures the language-vision and
intra-group vision-vision interactions to achieve state-of-the-art results on
the proposed GRES and related tasks, such as Co-Salient Object Detection and
RES. Our dataset and code will be publicly released at
https://github.com/yixuan730/group-res.
|
[
"cs.CV"
] | false |
2305.12506
|
2023-05-21T16:51:15Z
|
CNN-based Dendrite Core Detection from Microscopic Images of
Directionally Solidified Ni-base Alloys
|
[
"Xiaoguang Li"
] |
The dendrite core is the center point of a dendrite. Information about the
dendrite core is very helpful for material scientists to analyze the properties
of materials. Therefore, detecting the dendrite core is a very important task
in the material science field. Meanwhile, because of some special properties of
the dendrites, this task is also very challenging. Different from the typical
detection problems in the computer vision field, detecting the dendrite core
aims to detect a single point location instead of the bounding-box. As a
result, the existing bounding-box-regression-based detection methods cannot
work well on this task because the calculated center point location based on
the upper-left and lower-right corners of the bounding-box is usually not
precise. In this work, we formulate the dendrite core detection problem as a
segmentation task and propose a novel detection method to detect the dendrite
core directly. Our whole pipeline contains three steps: Easy Sample Detection
(ESD), Hard Sample Detection (HSD), and Hard Sample Refinement (HSR).
Specifically, ESD and HSD focus on the easy samples and hard samples of
dendrite cores respectively. Both of them employ the same Central Point
Detection Network (CPDN) but do not share parameters. To make HSD only focus on
the feature of hard samples of dendrite cores, we destroy the structure of the
easy samples of dendrites which are detected by ESD and force HSD to learn the
feature of hard samples. HSR is a binary classifier which is used to filter out
the false positive prediction of HSD. We evaluate our method on the dendrite
dataset. Our method outperforms the state-of-the-art baselines on three
metrics, i.e., Recall, Precision, and F-score.
|
[
"cs.CV"
] | false |
2305.12384
|
2023-05-21T07:46:46Z
|
From Patches to Objects: Exploiting Spatial Reasoning for Better Visual
Representations
|
[
"Toni Albert",
"Bjoern Eskofier",
"Dario Zanca"
] |
As the field of deep learning steadily transitions from the realm of academic
research to practical application, the significance of self-supervised
pretraining methods has become increasingly prominent. These methods,
particularly in the image domain, offer a compelling strategy to effectively
utilize the abundance of unlabeled image data, thereby enhancing downstream
tasks' performance. In this paper, we propose a novel auxiliary pretraining
method that is based on spatial reasoning. Our proposed method takes advantage
of a more flexible formulation of contrastive learning by introducing spatial
reasoning as an auxiliary task for discriminative self-supervised methods.
Spatial Reasoning works by having the network predict the relative distances
between sampled non-overlapping patches. We argue that this forces the network
to learn more detailed and intricate internal representations of the objects
and the relationships between their constituent parts. Our experiments
demonstrate substantial improvement in downstream performance in linear
evaluation compared to similar work and provide directions for further research
into spatial reasoning.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.12414
|
2023-05-21T09:43:17Z
|
Real-time Aerial Detection and Reasoning on Embedded-UAVs
|
[
"Tin Lai"
] |
We present a unified pipeline architecture for a real-time detection system
on an embedded system for UAVs. Neural architectures have been the industry
standard for computer vision. However, most existing works focus solely on
concatenating deeper layers to achieve higher accuracy with run-time
performance as the trade-off. This pipeline of networks can exploit the
domain-specific knowledge on aerial pedestrian detection and activity
recognition for the emerging UAV applications of autonomous surveying and
activity reporting. In particular, our pipeline architectures operate in a
time-sensitive manner, have high accuracy in detecting pedestrians from various
aerial orientations, use a novel attention map for multi-activity
recognition, and jointly refine detections with temporal information.
Numerically, we demonstrate our model's accuracy and fast inference speed on
embedded systems. We empirically deployed our prototype hardware with full live
feeds in a real-world open-field environment.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.12447
|
2023-05-21T12:40:25Z
|
BreastSAM: A Study of Segment Anything Model for Breast Tumor Detection
in Ultrasound Images
|
[
"Mingzhe Hu",
"Yuheng Li",
"Xiaofeng Yang"
] |
Breast cancer is one of the most common cancers among women worldwide, with
early detection significantly increasing survival rates. Ultrasound imaging is
a critical diagnostic tool that aids in early detection by providing real-time
imaging of the breast tissue. We conducted a thorough investigation of the
Segment Anything Model (SAM) for the task of interactive segmentation of breast
tumors in ultrasound images. We explored three pre-trained model variants:
ViT_h, ViT_l, and ViT_b, among which ViT_l demonstrated superior performance in
terms of mean pixel accuracy, Dice score, and IoU score. The significance of
prompt interaction in improving the model's segmentation performance was also
highlighted, with substantial improvements in performance metrics when prompts
were incorporated. The study further evaluated the model's differential
performance in segmenting malignant and benign breast tumors, with the model
showing exceptional proficiency in both categories, albeit with slightly better
performance for benign tumors. Furthermore, we analyzed the impacts of various
breast tumor characteristics - size, contrast, aspect ratio, and complexity -
on segmentation performance. Our findings reveal that tumor contrast and size
positively impact the segmentation result, while complex boundaries pose
challenges. The study provides valuable insights for using SAM as a robust and
effective algorithm for breast tumor segmentation in ultrasound images.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.12561
|
2023-05-21T20:22:38Z
|
M2LADS: A System for Generating MultiModal Learning Analytics Dashboards
in Open Education
|
[
"Álvaro Becerra",
"Roberto Daza",
"Ruth Cobos",
"Aythami Morales",
"Mutlu Cukurova",
"Julian Fierrez"
] |
In this article, we present a Web-based System called M2LADS, which supports
the integration and visualization of multimodal data recorded in learning
sessions in a MOOC in the form of Web-based Dashboards. Based on the edBB
platform, the multimodal data gathered contains biometric and behavioral
signals including electroencephalogram data to measure learners' cognitive
attention, heart rate for affective measures, and visual attention from the video
recordings. Additionally, learners' static background data and their learning
performance measures are tracked using LOGCE and MOOC tracking logs
respectively, and both are included in the Web-based System. M2LADS provides
opportunities to capture learners' holistic experience during their
interactions with the MOOC, which can in turn be used to improve their learning
outcomes through feedback visualizations and interventions, as well as to
enhance learning analytics models and improve the open content of the MOOC.
|
[
"cs.HC",
"cs.CV"
] | false |
2305.12570
|
2023-05-21T21:16:20Z
|
Generalizable synthetic MRI with physics-informed convolutional networks
|
[
"Luuk Jacobs",
"Stefano Mandija",
"Hongyan Liu",
"Cornelis A. T. van den Berg",
"Alessandro Sbrizzi",
"Matteo Maspero"
] |
In this study, we develop a physics-informed deep learning-based method to
synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a
single five-minute acquisition and investigate its ability to generalize to
arbitrary contrasts to accelerate neuroimaging protocols. A dataset of
fifty-five subjects acquired with a standard MRI protocol and a five-minute
transient-state sequence was used to develop a physics-informed deep
learning-based method. The model, based on a generative adversarial network,
maps data acquired from the five-minute scan to "effective" quantitative
parameter maps, here named q*-maps, by using its generated PD, T1, and T2
values in a signal model to synthesize four standard contrasts (proton
density-weighted, T1-weighted, T2-weighted, and T2-weighted fluid-attenuated
inversion recovery), from which losses are computed. The q*-maps are compared
to literature values and the synthetic contrasts are compared to an end-to-end
deep learning-based method proposed in the literature. The generalizability of the
proposed method is investigated for five volunteers by synthesizing three
non-standard contrasts unseen during training and comparing these to respective
ground truth acquisitions via contrast-to-noise ratio and quantitative
assessment. The physics-informed method was able to match the high-quality
synthMRI of the end-to-end method for the four standard contrasts, with mean
\pm standard deviation structural similarity metrics above 0.75 \pm 0.08 and
peak signal-to-noise ratios above 22.4 \pm 1.9 and 22.6 \pm 2.1. Additionally,
the physics-informed method provided retrospective contrast adjustment, with
visually similar signal contrast and comparable contrast-to-noise ratios to the
ground truth acquisitions for three sequences unused for model training,
demonstrating its generalizability and potential application to accelerate
neuroimaging protocols.
|
[
"physics.med-ph",
"cs.CV"
] | false |
2305.12328
|
2023-05-21T03:28:13Z
|
InstructVid2Vid: Controllable Video Editing with Natural Language
Instructions
|
[
"Bosheng Qin",
"Juncheng Li",
"Siliang Tang",
"Tat-Seng Chua",
"Yueting Zhuang"
] |
We present an end-to-end diffusion-based method for editing videos with human
language instructions, namely $\textbf{InstructVid2Vid}$. Our approach enables
the editing of input videos based on natural language instructions without any
per-example fine-tuning or inversion. The proposed InstructVid2Vid model
combines a pretrained image generation model, Stable Diffusion, with a
conditional 3D U-Net architecture to generate a time-dependent sequence of video
frames. To obtain the training data, we incorporate the knowledge and expertise
of different models, including ChatGPT, BLIP, and Tune-a-Video, to synthesize
video-instruction triplets, which is a more cost-efficient alternative to
collecting data in real-world scenarios. To improve the consistency between
adjacent frames of generated videos, we propose the Frame Difference Loss,
which is incorporated during the training process. During inference, we extend
the classifier-free guidance to text-video input to guide the generated
results, making them more related to both the input video and instruction.
Experiments demonstrate that InstructVid2Vid is able to generate high-quality,
temporally coherent videos and perform diverse edits, including attribute
editing, change of background, and style transfer. These results highlight the
versatility and effectiveness of our proposed method. Code is released at
$\href{https://github.com/BrightQin/InstructVid2Vid}{InstructVid2Vid}$.
|
[
"cs.CV",
"cs.AI",
"cs.MM"
] | false |
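The Frame Difference Loss mentioned above penalizes mismatch between the temporal differences of adjacent generated and target frames, encouraging frame-to-frame consistency. A hedged PyTorch sketch of one plausible L2 form; the paper's exact formulation may differ.

```python
import torch

def frame_difference_loss(gen, tgt):
    """gen, tgt: video tensors of shape (B, T, C, H, W).
    Match the temporal differences of adjacent frames (an L2 penalty here)."""
    gen_diff = gen[:, 1:] - gen[:, :-1]   # frame-to-frame change, generated
    tgt_diff = tgt[:, 1:] - tgt[:, :-1]   # frame-to-frame change, target
    return torch.mean((gen_diff - tgt_diff) ** 2)

loss = frame_difference_loss(torch.randn(2, 8, 3, 64, 64),
                             torch.randn(2, 8, 3, 64, 64))
print(loss.item())
```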
2305.12358
|
2023-05-21T05:45:38Z
|
AutoPaint: A Self-Inpainting Method for Unsupervised Anomaly Detection
|
[
"Mehdi Astaraki",
"Francesca De Benetti",
"Yousef Yeganeh",
"Iuliana Toma-Dasu",
"Örjan Smedby",
"Chunliang Wang",
"Nassir Navab",
"Thomas Wendler"
] |
Robust and accurate detection and segmentation of heterogeneous tumors
appearing in different anatomical organs with supervised methods require
large-scale labeled datasets covering all possible types of diseases. Due to
the unavailability of such rich datasets and the high cost of annotations,
unsupervised anomaly detection (UAD) methods have been developed aiming to
detect pathologies as deviations from normality by utilizing the
unlabeled healthy image data. However, developed UAD models are often trained
with an incomplete distribution of healthy anatomies and have difficulties in
preserving anatomical constraints. This work intends to, first, propose a
robust inpainting model to learn the details of healthy anatomies and
reconstruct high-resolution images by preserving anatomical constraints.
Second, we propose an autoinpainting pipeline to automatically detect tumors,
replace their appearance with the learned healthy anatomies, and based on that
segment the tumoral volumes in a purely unsupervised fashion. Three imaging
datasets, including PET, CT, and PET-CT scans of lung tumors and head and neck
tumors, are studied as benchmarks for evaluation. Experimental results
demonstrate the significant superiority of the proposed method over a wide
range of state-of-the-art UAD methods. Moreover, the unsupervised method we
propose produces comparable results to a robust supervised segmentation method
when applied to multimodal images.
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
2305.12369
|
2023-05-21T06:43:35Z
|
HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with
Cross-person Memory Transformer
|
[
"Yubin Kim",
"Dong Won Lee",
"Paul Pu Liang",
"Sharifa Algohwinem",
"Cynthia Breazeal",
"Hae Won Park"
] |
Accurately modeling affect dynamics, which refers to the changes and
fluctuations in emotions and affective displays during human conversations, is
crucial for understanding human interactions. By analyzing affect dynamics, we
can gain insights into how people communicate, respond to different situations,
and form relationships. However, modeling affect dynamics is challenging due to
contextual factors, such as the complex and nuanced nature of interpersonal
relationships, the situation, and other factors that influence affective
displays. To address this challenge, we propose a Cross-person Memory
Transformer (CPM-T) framework which is able to explicitly model affective
dynamics (intrapersonal and interpersonal influences) by identifying verbal and
non-verbal cues, and uses a large language model to leverage pre-trained
knowledge and perform verbal reasoning. The CPM-T framework maintains memory
modules to store and update the contexts within the conversation window,
enabling the model to capture dependencies between earlier and later parts of a
conversation. Additionally, our framework employs cross-modal attention to
effectively align information from multi-modalities and leverage cross-person
attention to align behaviors in multi-party interactions. We evaluate the
effectiveness and generalizability of our approach on three publicly available
datasets for joint engagement, rapport, and human beliefs prediction tasks.
Remarkably, the CPM-T framework outperforms baseline models in average
F1-scores by up to 7.3%, 9.3%, and 2.0% respectively. Finally, we demonstrate
the importance of each component in the framework via ablation studies with
respect to multimodal temporal behavior.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.12417
|
2023-05-21T09:54:12Z
|
CNN-based Methods for Object Recognition with High-Resolution Tactile
Sensors
|
[
"Juan M. Gandarias",
"Alfonso J. García-Cerezo",
"Jesús M. Gómez-de-Gabriel"
] |
Novel high-resolution pressure-sensor arrays allow treating pressure readings
as standard images. Computer vision algorithms and methods such as
Convolutional Neural Networks (CNN) can be used to identify contact objects. In
this paper, a high-resolution tactile sensor has been attached to a robotic
end-effector to identify contacted objects. Two CNN-based approaches have been
employed to classify pressure images. These methods include a transfer learning
approach using a pre-trained CNN on an RGB-images dataset and a custom-made CNN
(TactNet) trained from scratch with tactile information. The transfer learning
approach can be carried out by retraining the classification layers of the
network or replacing these layers with an SVM. Overall, 11 configurations based
on these methods have been tested: 8 transfer learning-based, and 3
TactNet-based. Moreover, a study of the performance of the methods and a
comparative discussion with the current state-of-the-art on tactile object
recognition is presented.
|
[
"cs.CV",
"cs.AI",
"cs.RO"
] | false |
2305.12311
|
2023-05-21T01:25:44Z
|
i-Code V2: An Autoregressive Generation Framework over Vision, Language,
and Speech Data
|
[
"Ziyi Yang",
"Mahmoud Khademi",
"Yichong Xu",
"Reid Pryzant",
"Yuwei Fang",
"Chenguang Zhu",
"Dongdong Chen",
"Yao Qian",
"Mei Gao",
"Yi-Ling Chen",
"Robert Gmyr",
"Naoyuki Kanda",
"Noel Codella",
"Bin Xiao",
"Yu Shi",
"Lu Yuan",
"Takuya Yoshioka",
"Michael Zeng",
"Xuedong Huang"
] |
The convergence of text, visual, and audio data is a key step towards
human-like artificial intelligence; however, the current Vision-Language-Speech
landscape is dominated by encoder-only models which lack generative abilities.
We propose closing this gap with i-Code V2, the first model capable of
generating natural language from any combination of Vision, Language, and
Speech data. i-Code V2 is an integrative system that leverages state-of-the-art
single-modality encoders, combining their outputs with a new modality-fusing
encoder in order to flexibly project combinations of modalities into a shared
representational space. Next, language tokens are generated from these
representations via an autoregressive decoder. The whole framework is
pretrained end-to-end on a large collection of dual- and single-modality
datasets using a novel text completion objective that can be generalized across
arbitrary combinations of modalities. i-Code V2 matches or outperforms
state-of-the-art single- and dual-modality baselines on 7 multimodal tasks,
demonstrating the power of generative multimodal pretraining across a diversity
of tasks and signals.
|
[
"cs.CL",
"cs.AI",
"cs.CV",
"cs.LG",
"eess.AS"
] | false |
2305.11916
|
2023-05-21T12:17:27Z
|
F-PABEE: Flexible-patience-based Early Exiting for Single-label and
Multi-label text Classification Tasks
|
[
"Xiangxiang Gao",
"Wei Zhu",
"Jiasheng Gao",
"Congrui Yin"
] |
Computational complexity and overthinking problems have become the
bottlenecks for pre-trained language models (PLMs) with millions or even
trillions of parameters. A Flexible-Patience-Based Early Exiting method
(F-PABEE) has been proposed to alleviate the problems mentioned above for
single-label classification (SLC) and multi-label classification (MLC) tasks.
F-PABEE makes predictions at each internal classifier and exits early if the
cross-layer predicted distributions are consecutively similar. It is more flexible
than the previous state-of-the-art (SOTA) early exiting method PABEE because it
can simultaneously adjust the similarity score thresholds and the patience
parameters. Extensive experiments show that: (1) F-PABEE makes a better
speedup-accuracy balance than existing early exiting strategies on both SLC and
MLC tasks. (2) F-PABEE achieves faster inference and better performance on
different PLMs such as BERT and ALBERT. (3) F-PABEE-JSKD performs best for
F-PABEE with different similarity measures.
|
[
"cs.CL"
] | false |
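F-PABEE's exit rule can be pictured as follows: run the internal classifiers layer by layer and stop once consecutive predicted distributions stay within a similarity threshold for a patience number of layers. A minimal sketch using Jensen-Shannon distance as one of the similarity measures the paper varies; the threshold and patience values are placeholders.

```python
from scipy.spatial.distance import jensenshannon

def early_exit_layer(layer_probs, threshold=0.05, patience=2):
    """layer_probs: list of per-layer class distributions (each sums to 1).
    Exit once `patience` consecutive layer pairs are within `threshold`
    JS distance of each other; otherwise run to the final layer."""
    streak = 0
    for i in range(1, len(layer_probs)):
        if jensenshannon(layer_probs[i - 1], layer_probs[i]) < threshold:
            streak += 1
            if streak >= patience:
                return i          # exit here, using layer_probs[i]
        else:
            streak = 0
    return len(layer_probs) - 1   # no early exit

probs = [[0.6, 0.4], [0.58, 0.42], [0.57, 0.43], [0.57, 0.43]]
print(early_exit_layer(probs))    # 2: exits once adjacent layers agree
```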
2305.12307
|
2023-05-21T00:32:37Z
|
OntoType: Ontology-Guided Zero-Shot Fine-Grained Entity Typing with Weak
Supervision from Pre-Trained Language Models
|
[
"Tanay Komarlu",
"Minhao Jiang",
"Xuan Wang",
"Jiawei Han"
] |
Fine-grained entity typing (FET), which assigns entities in text with
context-sensitive, fine-grained semantic types, will play an important role in
natural language understanding. A supervised FET method, which typically relies
on human-annotated corpora for training, is costly and difficult to scale.
Recent studies leverage pre-trained language models (PLMs) to generate rich and
context-aware weak supervision for FET. However, a PLM may still generate a
mixture of rough and fine-grained types, or tokens unsuitable for typing. In
this study, we vision that an ontology provides a semantics-rich, hierarchical
structure, which will help select the best results generated by multiple PLM
models and head words. Specifically, we propose a novel zero-shot,
ontology-guided FET method, OntoType, which follows a type ontological
structure, from coarse to fine, ensembles multiple PLM prompting results to
generate a set of type candidates, and refines its type resolution, under the
local context with a natural language inference model. Our experiments on the
OntoNotes, FIGER, and NYT datasets using their associated ontological
structures demonstrate that our method outperforms the state-of-the-art
zero-shot fine-grained entity typing methods. Our error analysis shows that
refinement of the existing ontology structures will further improve
fine-grained entity typing.
|
[
"cs.CL"
] | false |
2305.12330
|
2023-05-21T03:35:45Z
|
Task-agnostic Distillation of Encoder-Decoder Language Models
|
[
"Chen Zhang",
"Yang Yang",
"Jingang Wang",
"Dawei Song"
] |
Finetuning pretrained language models (LMs) has enabled appealing
performance on a diverse array of tasks. The intriguing task-agnostic property
has driven a shifted focus from task-specific to task-agnostic distillation of
LMs. While task-agnostic, compute-efficient, performance-preserved LMs can be
yielded by task-agnostic distillation, previous studies mainly focus on
distillation of either encoder-only LMs (e.g., BERT) or decoder-only ones
(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,
T5) can exhibit very distinct behaviors. Frustratingly, we discover that
existing task-agnostic distillation methods can fail to handle the distillation
of encoder-decoder LMs. To meet this demand, we explore a few paths and uncover
a path, named MiniEnD, that successfully tackles the distillation of
encoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language
understanding and abstractive summarization. The results showcase that MiniEnD
is generally effective and is competitive compared to other alternatives. We
further scale MiniEnD up to distillation of 3B encoder-decoder language models
with interpolated distillation. The results imply the opportunities and
challenges in distilling large language models (e.g., LLaMA).
|
[
"cs.CL"
] | false |
2305.12371
|
2023-05-21T06:46:33Z
|
Machine Translation by Projecting Text into the Same
Phonetic-Orthographic Space Using a Common Encoding
|
[
"Amit Kumar",
"Shantipriya Parida",
"Ajay Pratap",
"Anil Kumar Singh"
] |
The use of subword embedding has proved to be a major innovation in Neural
Machine Translation (NMT). It helps NMT to learn better context vectors for Low
Resource Languages (LRLs) so as to predict the target words by better modelling
the morphologies of the two languages and also the morphosyntax transfer. Even
so, their performance for translation in the Indian-language-to-Indian-language
scenario is still not as good as for resource-rich languages. One reason for
this is the relative morphological richness of Indian languages, while another
is that most of them fall into the extremely low resource or zero-shot
categories. Since most major Indian languages use Indic or Brahmi origin
scripts, the text written in them is highly phonetic in nature and phonetically
similar in terms of abstract letters and their arrangements. We use these
characteristics of Indian languages and their scripts to propose an approach
based on common multilingual Latin-based encodings (WX notation) that take
advantage of language similarity while addressing the morphological complexity
issue in NMT. These multilingual Latin-based encodings in NMT, together with
Byte Pair Embedding (BPE) allow us to better exploit their phonetic and
orthographic as well as lexical similarities to improve the translation quality
by projecting different but similar languages on the same orthographic-phonetic
character space. We verify the proposed approach by demonstrating experiments
on similar language pairs (Gujarati-Hindi, Marathi-Hindi, Nepali-Hindi,
Maithili-Hindi, Punjabi-Hindi, and Urdu-Hindi) under low resource conditions.
The proposed approach shows an improvement in a majority of cases, in one case
as much as ~10 BLEU points compared to baseline techniques for similar language
pairs. We also get up to ~1 BLEU point improvement on distant and zero-shot
language pairs.
|
[
"cs.CL"
] | false |
2305.12389
|
2023-05-21T08:02:06Z
|
SHINE: Syntax-augmented Hierarchical Interactive Encoder for Zero-shot
Cross-lingual Information Extraction
|
[
"Jun-Yu Ma",
"Jia-Chen Gu",
"Zhen-Hua Ling",
"Quan Liu",
"Cong Liu",
"Guoping Hu"
] |
Zero-shot cross-lingual information extraction (IE) aims at constructing an IE
model for some low-resource target languages, given annotations exclusively in
some rich-resource languages. Recent studies based on language-universal
features have shown their effectiveness and are attracting increasing
attention. However, prior work has neither explored the potential of
establishing interactions between language-universal features and contextual
representations nor incorporated features that can effectively model
constituent span attributes and relationships between multiple spans. In this
study, a syntax-augmented hierarchical interactive encoder (SHINE) is proposed
to transfer cross-lingual IE knowledge. The proposed encoder is capable of
interactively capturing complementary information between features and
contextual information, to derive language-agnostic representations for various
IE tasks. Concretely, a multi-level interaction network is designed to
hierarchically fuse the complementary information to strengthen domain
adaptability. Besides, in addition to the well-studied syntax features of
part-of-speech and dependency relation, a new syntax feature of constituency
structure is introduced to model the constituent span information which is
crucial for IE. Experiments across seven languages on three IE tasks and four
benchmarks verify the effectiveness and generalization ability of the proposed
method.
|
[
"cs.CL"
] | false |
2305.12394
|
2023-05-21T08:15:12Z
|
Pruning Pre-trained Language Models with Principled Importance and
Self-regularization
|
[
"Siyu Ren",
"Kenny Q. Zhu"
] |
Iterative pruning is one of the most effective compression methods for
pre-trained language models. We discovered that finding the optimal pruning
decision is an equality-constrained 0-1 Integer Linear Programming problem. The
solution to this optimization problem leads to a principled importance
criterion which we use to rank parameters during iterative model pruning. To
mitigate the poor generalization at high sparsity levels, we propose a
self-regularization scheme where model prediction is regularized by the latest
checkpoint with increasing sparsity throughout pruning. Our experiments on
natural language understanding, question-answering, named entity recognition,
and data-to-text generation with various Transformer-based PLMs show the
effectiveness of the approach at various sparsity levels.
|
[
"cs.CL"
] | false |
2305.12412
|
2023-05-21T09:22:41Z
|
EM Pre-training for Multi-party Dialogue Response Generation
|
[
"Yiyang Li",
"Hai Zhao"
] |
Dialogue response generation requires an agent to generate a response
according to the current dialogue history. Two-party dialogues have been well
studied in this respect, but a great gap remains for multi-party dialogues.
Different from two-party dialogues, where each
response is a direct reply to its previous utterance, the addressee of a
response utterance should be specified before it is generated in the
multi-party scenario. Thanks to the huge amount of two-party conversational
data, various pre-trained language models for two-party dialogue response
generation have been proposed. However, due to the lack of annotated addressee
labels in multi-party dialogue datasets, it is hard to use them to pre-train a
response generation model for multi-party dialogues. To tackle this obstacle,
we propose an Expectation-Maximization (EM) approach that iteratively performs
the expectation steps to generate addressee labels, and the maximization steps
to optimize a response generation model. Theoretical analyses and extensive
experiments have justified the feasibility and effectiveness of our proposed
method.
|
[
"cs.CL"
] | false |
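The EM procedure above can be pictured as a hard-EM loop: the E-step fills in the missing addressee label for each response using the current model, and the M-step retrains the generator on the completed data. A generic illustrative skeleton, not the paper's implementation; `score_addressee` and `train_step` stand in for the model's components.

```python
def hard_em(utterances, candidates, score_addressee, train_step, rounds=3):
    """Generic hard-EM loop for latent addressee labels (illustrative only)."""
    for _ in range(rounds):
        # E-step: pick the most likely addressee for each utterance.
        labels = [max(candidates[i], key=lambda a: score_addressee(u, a))
                  for i, u in enumerate(utterances)]
        # M-step: optimize the response generator on the completed data.
        train_step(utterances, labels)
    return labels

# Toy demo: the "model" scores an addressee by whether it appears in the text.
utts = ["thanks Bob", "sure Alice"]
cands = [["Alice", "Bob"], ["Alice", "Bob"]]
score = lambda u, a: float(a in u)                      # stand-in posterior
print(hard_em(utts, cands, score, lambda u, l: None))   # ['Bob', 'Alice']
```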
2305.12458
|
2023-05-21T13:30:56Z
|
Infor-Coef: Information Bottleneck-based Dynamic Token Downsampling for
Compact and Efficient language model
|
[
"Wenxi Tan"
] |
The prevalence of Transformer-based pre-trained language models (PLMs) has
led to their wide adoption for various natural language processing tasks.
However, their excessive overhead leads to large latency and computational
costs. Static compression methods allocate fixed computation to
different samples, resulting in redundant computation. Dynamic token
pruning methods selectively shorten the sequences but are unable to change the
model size and hardly achieve the speedups of static pruning. In this paper, we
propose a model acceleration approach for large language models that
incorporates dynamic token downsampling and static pruning, optimized by the
information bottleneck loss. Our model, Infor-Coef, achieves an 18x FLOPs
speedup with an accuracy degradation of less than 8% compared to BERT. This
work provides a promising approach to compress and accelerate transformer-based
models for NLP tasks.
|
[
"cs.CL"
] | false |
2305.12480
|
2023-05-21T15:07:04Z
|
Is Translation Helpful? An Empirical Analysis of Cross-Lingual Transfer
in Low-Resource Dialog Generation
|
[
"Lei Shen",
"Shuai Yu",
"Xiaoyu Shen"
] |
Cross-lingual transfer is important for developing high-quality chatbots in
multiple languages due to the strongly imbalanced distribution of language
resources. A typical approach is to leverage off-the-shelf machine translation
(MT) systems to utilize either the training corpus or developed models from
high-resource languages. In this work, we investigate whether it is helpful to
utilize MT at all in this task. To do so, we simulate a low-resource scenario
assuming access to limited Chinese dialog data in the movie domain and large
amounts of English dialog data from multiple domains. Experiments show that
leveraging English dialog corpora can indeed improve the naturalness, relevance
and cross-domain transferability in Chinese. However, directly using English
dialog corpora in their original form, surprisingly, works better than using
their translated version. As the topics and wording habits in daily conversations are
strongly culture-dependent, MT can reinforce the bias from high-resource
languages, yielding unnatural generations in the target language. Considering
the cost of translating large amounts of text and the strong effects of the
translation quality, we suggest future research should rather focus on
utilizing the original English data for cross-lingual transfer in dialog
generation. We perform extensive human evaluations and ablation studies. The
analysis results, together with the collected dataset, are presented to draw
attention towards this area and benefit future research.
|
[
"cs.CL"
] | false |
2305.12518
|
2023-05-21T17:23:54Z
|
VAKTA-SETU: A Speech-to-Speech Machine Translation Service in Select
Indic Languages
|
[
"Shivam Mhaskar",
"Vineet Bhat",
"Akshay Batheja",
"Sourabh Deoghare",
"Paramveer Choudhary",
"Pushpak Bhattacharyya"
] |
In this work, we present our deployment-ready Speech-to-Speech Machine
Translation (SSMT) system for English-Hindi, English-Marathi, and Hindi-Marathi
language pairs. We develop the SSMT system by cascading Automatic Speech
Recognition (ASR), Disfluency Correction (DC), Machine Translation (MT), and
Text-to-Speech Synthesis (TTS) models. We discuss the challenges faced during
the research and development stage and the scalable deployment of the SSMT
system as a publicly accessible web service. On the MT part of the pipeline
too, we create a Text-to-Text Machine Translation (TTMT) service in all six
translation directions involving English, Hindi, and Marathi. To mitigate data
scarcity, we develop a LaBSE-based corpus filtering tool to select high-quality
parallel sentences from a noisy pseudo-parallel corpus for training the TTMT
system. All the data used for training the SSMT and TTMT systems and the best
models are being made publicly available. Users of our system are (a) Govt. of
India in the context of its new education policy (NEP), (b) tourists who
criss-cross the multilingual landscape of India, (c) Indian Judiciary where a
leading cause of the pendency of cases (on the order of 10 million to date)
is the translation of case papers, (d) farmers who need weather and price
information and so on. We also share the feedback received from various
stakeholders when our SSMT and TTMT systems were demonstrated in large public
events.
|
[
"cs.CL"
] | false |
2305.12565
|
2023-05-21T21:02:55Z
|
Understanding the Effect of Data Augmentation on Knowledge Distillation
|
[
"Ziqi Wang",
"Chi Han",
"Wenxuan Bao",
"Heng Ji"
] |
Knowledge distillation (KD) requires sufficient data to transfer knowledge
from large-scale teacher models to small-scale student models. Therefore, data
augmentation has been widely used to mitigate the shortage of data under
specific scenarios. Classic data augmentation techniques, such as synonym
replacement and k-nearest-neighbors, are initially designed for fine-tuning. To
avoid severe semantic shifts and preserve task-specific labels, those methods
prefer to change only a small proportion of tokens (e.g., changing 10% tokens
is generally the best option for fine-tuning). However, such data augmentation
methods are sub-optimal for knowledge distillation since the teacher model
could provide label distributions and is more tolerant to semantic shifts. We
first observe that KD prefers as much data as possible, unlike fine-tuning,
where too much data will not yield more performance. Since changing
more tokens leads to more semantic shifts, we use the proportion of changed
tokens to reflect semantic shift degrees. Then we find that KD prefers
augmented data with a larger semantic shift degree (e.g., changing 30% tokens
is generally the best option for KD) than fine-tuning (changing 10% tokens).
Besides, our findings show that smaller datasets prefer larger degrees until
the out-of-distribution problem occurs (e.g., datasets with less than 10k
inputs may prefer the 50% degree, and datasets with more than 100k inputs may
prefer the 10% degree). Our work sheds light on the preference difference in
data augmentation between fine-tuning and knowledge distillation and encourages
the community to explore KD-specific data augmentation methods.
|
[
"cs.CL"
] | false |
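As a toy illustration of the "proportion of changed tokens" knob discussed above: the sketch below randomly replaces a fraction of tokens, a crude stand-in for synonym replacement; the vocabulary and rate are placeholders, not the paper's setup.

```python
import random

def perturb_tokens(tokens, vocab, rate=0.3, seed=0):
    """Replace a fraction `rate` of tokens with random vocabulary words.
    Larger rates mean larger semantic shifts; the paper finds KD tolerates
    larger rates (~30%) than fine-tuning (~10%)."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(tokens)), k=int(rate * len(tokens)))
    out = list(tokens)
    for i in idx:
        out[i] = rng.choice(vocab)
    return out

print(perturb_tokens("the cat sat on the mat".split(), ["dog", "ran"], 0.3))
```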
2305.12567
|
2023-05-21T21:06:23Z
|
Model-Generated Pretraining Signals Improves Zero-Shot Generalization of
Text-to-Text Transformers
|
[
"Linyuan Gong",
"Chenyan Xiong",
"Xiaodong Liu",
"Payal Bajaj",
"Yiqing Xie",
"Alvin Cheung",
"Jianfeng Gao",
"Xia Song"
] |
This paper explores the effectiveness of model-generated signals in improving
zero-shot generalization of text-to-text Transformers such as T5. We study
various designs to pretrain T5 using an auxiliary model to construct more
challenging token replacements for the main model to denoise. Key aspects under
study include the decoding target, the location of the RTD head, and the
masking pattern. Based on these studies, we develop a new model, METRO-T0,
which is pretrained using the redesigned ELECTRA-Style pretraining strategies
and then prompt-finetuned on a mixture of NLP tasks. METRO-T0 outperforms all
similar-sized baselines on prompted NLP benchmarks, such as T0 Eval and MMLU,
and rivals the state-of-the-art T0-11B model with only 8% of its parameters.
Our analysis of the model's neural activation and parameter sensitivity reveals
that the effectiveness of METRO-T0 stems from a more balanced contribution of
parameters and better utilization of their capacity. The code and model
checkpoints are available at https://github.com/gonglinyuan/metro_t0.
|
[
"cs.CL"
] | false |
2305.12586
|
2023-05-21T22:44:25Z
|
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A
Study on Prompt Design Strategies
|
[
"Linyong Nan",
"Yilun Zhao",
"Weijin Zou",
"Narutatsu Ri",
"Jaesung Tae",
"Ellen Zhang",
"Arman Cohan",
"Dragomir Radev"
] |
In-context learning (ICL) has emerged as a new approach to various natural
language processing tasks, utilizing large language models (LLMs) to make
predictions based on context that has been supplemented with a few examples or
task-specific instructions. In this paper, we aim to extend this method to
question answering tasks that utilize structured knowledge sources, and improve
Text-to-SQL systems by exploring various prompt design strategies for employing
LLMs. We conduct a systematic investigation into different demonstration
selection methods and optimal instruction formats for prompting LLMs in the
Text-to-SQL task. Our approach involves leveraging the syntactic structure of
an example's SQL query to retrieve demonstrations, and we demonstrate that
pursuing both diversity and similarity in demonstration selection leads to
enhanced performance. Furthermore, we show that LLMs benefit from
database-related knowledge augmentations. Our most effective strategy
outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and
the best fine-tuned system by 5.1 points on the Spider dataset. These results
highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL
task, and we present an analysis of the factors contributing to the success of
our strategy.
|
[
"cs.CL"
] | false |
2305.12594
|
2023-05-21T23:04:14Z
|
Modeling User Satisfaction Dynamics in Dialogue via Hawkes Process
|
[
"Fanghua Ye",
"Zhiyuan Hu",
"Emine Yilmaz"
] |
Dialogue systems have received increasing attention while automatically
evaluating their performance remains challenging. User satisfaction estimation
(USE) has been proposed as an alternative. It assumes that the performance of a
dialogue system can be measured by user satisfaction and uses an estimator to
simulate users. The effectiveness of USE depends heavily on the estimator.
Existing estimators independently predict user satisfaction at each turn and
ignore satisfaction dynamics across turns within a dialogue. In order to fully
simulate users, it is crucial to take satisfaction dynamics into account. To
fill this gap, we propose a new estimator ASAP (sAtisfaction eStimation via
HAwkes Process) that treats user satisfaction across turns as an event sequence
and employs a Hawkes process to effectively model the dynamics in this
sequence. Experimental results on four benchmark dialogue datasets demonstrate
that ASAP can substantially outperform state-of-the-art baseline estimators.
|
[
"cs.CL"
] | false |
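A Hawkes process models self-exciting event sequences: the intensity at time $t$ is a base rate plus exponentially decaying contributions from past events, $\lambda(t) = \mu + \sum_{t_i < t} \alpha e^{-\beta (t - t_i)}$. A minimal numpy sketch of that intensity, used above to capture satisfaction dynamics across turns; $\mu$, $\alpha$, $\beta$ here are placeholders, not ASAP's learned values.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i)).
    Each past event temporarily raises the rate of future events."""
    past = np.asarray([ti for ti in event_times if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

print(hawkes_intensity(3.0, [1.0, 2.5]))  # excitation from turns at 1.0 and 2.5
```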
2305.12434
|
2023-05-21T11:25:59Z
|
BiasAsker: Measuring the Bias in Conversational AI System
|
[
"Yuxuan Wan",
"Wenxuan Wang",
"Pinjia He",
"Jiazhen Gu",
"Haonan Bai",
"Michael Lyu"
] |
Powered by advanced Artificial Intelligence (AI) techniques, conversational
AI systems, such as ChatGPT and digital assistants like Siri, have been widely
deployed in daily life. However, such systems may still produce content
containing biases and stereotypes, causing potential social problems. Due to
the data-driven, black-box nature of modern AI techniques, comprehensively
identifying and measuring biases in conversational systems remains a
challenging task. Particularly, it is hard to generate inputs that can
comprehensively trigger potential bias due to the lack of data containing both
social groups and biased properties. In addition, modern conversational
systems can produce diverse responses (e.g., chatting and explanation), which
makes existing bias detection methods, based simply on sentiment and toxicity,
hard to adopt. In this paper, we propose BiasAsker, an
automated framework to identify and measure social bias in conversational AI
systems. To obtain social groups and biased properties, we construct a
comprehensive social bias dataset, containing a total of 841 groups and 8,110
biased properties. Given the dataset, BiasAsker automatically generates
questions and adopts a novel method based on existence measurement to identify
two types of biases (i.e., absolute bias and related bias) in conversational
systems. Extensive experiments on 8 commercial systems and 2 famous research
models, such as ChatGPT and GPT-3, show that 32.83% of the questions generated
by BiasAsker can trigger biased behaviors in these widely deployed
conversational systems. All the code, data, and experimental results have been
released to facilitate future research.
|
[
"cs.CL",
"cs.AI"
] | false |
2305.12449
|
2023-05-21T12:48:38Z
|
Communication Efficient Federated Learning for Multilingual Neural
Machine Translation with Adapter
|
[
"Yi Liu",
"Xiaohan Bi",
"Lei Li",
"Sishuo Chen",
"Wenkai Yang",
"Xu Sun"
] |
Federated Multilingual Neural Machine Translation (Fed-MNMT) has emerged as a
promising paradigm for institutions with limited language resources. This
approach allows multiple institutions to act as clients and train a unified
model through model synchronization, rather than collecting sensitive data for
centralized training. This significantly reduces the cost of corpus collection
and preserves data privacy. However, as pre-trained language models (PLMs)
continue to increase in size, the communication cost for transmitting
parameters during synchronization has become a training speed bottleneck. In
this paper, we propose a communication-efficient Fed-MNMT framework that
addresses this issue by keeping PLMs frozen and only transferring lightweight
adapter modules between clients. Since different language pairs exhibit
substantial discrepancies in data distributions, adapter parameters of clients
may conflict with each other. To tackle this, we explore various clustering
strategies to group parameters for integration and mitigate the negative
effects of conflicting parameters. Experimental results demonstrate that our
framework reduces communication cost by over 98% while achieving similar or
even better performance compared to competitive baselines. Further analysis
reveals that clustering strategies effectively solve the problem of linguistic
discrepancy and pruning adapter modules further improves communication
efficiency.
|
[
"cs.CL",
"cs.AI"
] | false |
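A minimal sketch of the synchronization scheme described in the Fed-MNMT abstract: the PLM stays frozen on each client and only adapter parameters are averaged. The ".adapter." name filter and the helper functions are assumptions for illustration, not the paper's implementation; the clustering of language pairs is omitted.

```python
# Sketch: one FedAvg round that transmits only adapter parameters.
import torch

def adapter_state(model):
    # Extract only the lightweight adapter parameters for transmission.
    return {k: v.detach().clone() for k, v in model.state_dict().items()
            if ".adapter." in k}

def federated_average(adapter_states, weights):
    # Weighted average of client adapter parameters; weights should sum to 1.
    avg = {}
    for k in adapter_states[0]:
        avg[k] = sum(w * s[k] for s, w in zip(adapter_states, weights))
    return avg

def load_adapters(model, avg_state):
    # Write the averaged adapters back; frozen PLM weights are untouched.
    model.load_state_dict(avg_state, strict=False)
```

Because only the adapter tensors cross the network, the per-round payload shrinks roughly in proportion to the adapter-to-PLM parameter ratio, which is where the reported communication savings come from.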
2305.12463
|
2023-05-21T14:03:49Z
|
Teaching the Pre-trained Model to Generate Simple Texts for Text
Simplification
|
[
"Renliang Sun",
"Wei Xu",
"Xiaojun Wan"
] |
Randomly masking text spans in ordinary texts in the pre-training stage
hardly allows models to acquire the ability to generate simple texts. It can
hurt the performance of pre-trained models on text simplification tasks. In
this paper, we propose a new continued pre-training strategy to teach the
pre-trained model to generate simple texts. We continue pre-training BART, a
representative model, to obtain SimpleBART. It consistently and significantly
improves the results on lexical simplification, sentence simplification, and
document-level simplification tasks over BART. Finally, we compare
SimpleBART with several representative large language models (LLMs).
|
[
"cs.CL",
"cs.AI"
] | false |
2305.12510
|
2023-05-21T17:04:21Z
|
A Deeper (Autoregressive) Approach to Non-Convergent Discourse Parsing
|
[
"Yoav Tulpan",
"Oren Tsur"
] |
Online social platforms provide a bustling arena for information-sharing and
for multi-party discussions. Various frameworks for dialogic discourse parsing
were developed and used for the processing of discussions and for predicting
the productivity of a dialogue. However, most of these frameworks are not
suitable for the analysis of contentious discussions that are commonplace in
many online platforms. A novel multi-label scheme for contentious dialog
parsing was recently introduced by Zakharov et al. (2021). While the schema is
well developed, the computational approach they provide is both naive and
inefficient, as a different model (architecture), using a different
representation of the input, is trained for each of the 31 tags in the
annotation scheme. Moreover, all their models assume full knowledge of label
collocations and context, which is unlikely in any realistic setting. In this
work, we present a unified model for Non-Convergent Discourse Parsing that does
not require any additional input other than the previous dialog utterances. We
fine-tuned a RoBERTa backbone, combining embeddings of the utterance, the
context and the labels through GRN layers and an asymmetric loss function.
Overall, our model achieves results comparable with SOTA, without using label
collocation and without training a unique architecture/model for each label.
|
[
"cs.CL",
"cs.SI"
] | false |
2305.12542
|
2023-05-21T18:53:26Z
|
ToxBuster: In-game Chat Toxicity Buster with BERT
|
[
"Zachary Yang",
"Yasmine Maricar",
"MohammadReza Davari",
"Nicolas Grenon-Godbout",
"Reihaneh Rabbany"
] |
Detecting toxicity in online spaces is challenging and an ever more pressing
problem given the increase in social media and gaming consumption. We introduce
ToxBuster, a simple and scalable model trained on a relatively large dataset of
194k lines of game chat from Rainbow Six Siege and For Honor, carefully
annotated for different kinds of toxicity. Compared to the existing
state-of-the-art, ToxBuster achieves 82.95% (+7) in precision and 83.56% (+57)
in recall. This improvement is obtained by leveraging past chat history and
metadata. We also study the implications for real-time and post-game
moderation, as well as the model's transferability from one game to another.
|
[
"cs.CL",
"cs.CY"
] | false |
2305.13342
|
2023-05-21T22:52:13Z
|
On the Limitations of Simulating Active Learning
|
[
"Katerina Margatina",
"Nikolaos Aletras"
] |
Active learning (AL) is a human-and-model-in-the-loop paradigm that
iteratively selects informative unlabeled data for human annotation, aiming to
improve over random sampling. However, performing AL experiments with human
annotations on-the-fly is a laborious and expensive process, thus unrealistic
for academic research. An easy fix to this impediment is to simulate AL, by
treating an already labeled and publicly available dataset as the pool of
unlabeled data. In this position paper, we first survey recent literature and
highlight the challenges across all different steps within the AL loop. We
further unveil neglected caveats in the experimental setup that can
significantly affect the quality of AL research. We continue with an
exploration of how the simulation setting can govern empirical findings,
arguing that it might be one of the answers behind the ever posed question
``why do active learning algorithms sometimes fail to outperform random
sampling?''. We argue that evaluating AL algorithms on available labeled
datasets might provide a lower bound on their effectiveness on real data. We
believe it is essential to collectively shape the best practices for AL
research, particularly as engineering advancements in LLMs push the research
focus towards data-driven approaches (e.g., data efficiency, alignment,
fairness). In light of this, we have developed guidelines for future work. Our
aim is to draw attention to these limitations within the community, in the hope
of finding ways to address them.
|
[
"cs.LG",
"cs.CL"
] | false |
2305.12376
|
2023-05-21T07:10:31Z
|
Measuring Intersectional Biases in Historical Documents
|
[
"Nadav Borenstein",
"Karolina Stańczak",
"Thea Rolskov",
"Natália da Silva Perez",
"Natacha Klein Käfer",
"Isabelle Augenstein"
] |
Data-driven analyses of biases in historical texts can help illuminate the
origin and development of biases prevailing in modern society.
However, digitised historical documents pose a challenge for NLP
practitioners as these corpora suffer from errors introduced by optical
character recognition (OCR) and are written in an archaic language. In this
paper, we investigate the continuities and transformations of bias in
historical newspapers published in the Caribbean during the colonial era (18th
to 19th centuries). Our analyses are performed along the axes of gender, race,
and their intersection. We examine these biases by conducting a temporal study
in which we measure the development of lexical associations using
distributional semantics models and word embeddings. Further, we evaluate the
effectiveness of techniques designed to process OCR-generated data and assess
their stability when trained on and applied to the noisy historical newspapers.
We find that there is a trade-off between the stability of the word embeddings
and their compatibility with the historical dataset. We provide evidence that
gender and racial biases are interdependent, and their intersection triggers
distinct effects. These findings align with the theory of intersectionality,
which stresses that biases affecting people with multiple marginalised
identities compound to more than the sum of their constituents.
|
[
"cs.CL",
"cs.CY",
"cs.LG"
] | false |
2305.12483
|
2023-05-21T15:20:20Z
|
Model Analysis & Evaluation for Ambiguous Question Answering
|
[
"Konstantinos Papakostas",
"Irene Papadopoulou"
] |
Ambiguous questions are a challenge for Question Answering models, as they
require answers that cover multiple interpretations of the original query. To
this end, these models are required to generate long-form answers that often
combine conflicting pieces of information. Although recent advances in the
field have shown strong capabilities in generating fluent responses, certain
research questions remain unanswered. Does model/data scaling improve the
answers' quality? Do automated metrics align with human judgment? To what
extent do these models ground their answers in evidence? In this study, we aim
to thoroughly investigate these aspects, and provide valuable insights into the
limitations of the current approaches. To aid in reproducibility and further
extension of our work, we open-source our code at
https://github.com/din0s/ambig_lfqa.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.12487
|
2023-05-21T15:42:41Z
|
Augmenting Autotelic Agents with Large Language Models
|
[
"Cédric Colas",
"Laetitia Teodorescu",
"Pierre-Yves Oudeyer",
"Xingdi Yuan",
"Marc-Alexandre Côté"
] |
Humans learn to master open-ended repertoires of skills by imagining and
practicing their own goals. This autotelic learning process, literally the
pursuit of self-generated (auto) goals (telos), becomes more and more
open-ended as the goals become more diverse, abstract and creative. The
resulting exploration of the space of possible skills is supported by an
inter-individual exploration: goal representations are culturally evolved and
transmitted across individuals, in particular using language. Current
artificial agents mostly rely on predefined goal representations corresponding
to goal spaces that are either bounded (e.g. list of instructions), or
unbounded (e.g. the space of possible visual inputs) but are rarely endowed
with the ability to reshape their goal representations, to form new
abstractions or to imagine creative goals. In this paper, we introduce a
language model augmented autotelic agent (LMA3) that leverages a pretrained
language model (LM) to support the representation, generation and learning of
diverse, abstract, human-relevant goals. The LM is used as an imperfect model
of human cultural transmission; an attempt to capture aspects of humans'
common-sense, intuitive physics and overall interests. Specifically, it
supports three key components of the autotelic architecture: 1)~a relabeler
that describes the goals achieved in the agent's trajectories, 2)~a goal
generator that suggests new high-level goals along with their decomposition
into subgoals the agent already masters, and 3)~reward functions for each of
these goals. Without relying on any hand-coded goal representations, reward
functions or curriculum, we show that LMA3 agents learn to master a large
diversity of skills in a task-agnostic text-based environment.
|
[
"cs.AI",
"cs.CL",
"cs.LG"
] | true |
2305.12501
|
2023-05-21T16:37:21Z
|
Exploring How Generative Adversarial Networks Learn Phonological
Representations
|
[
"Jingyi Chen",
"Micha Elsner"
] |
This paper explores how Generative Adversarial Networks (GANs) learn
representations of phonological phenomena. We analyze how GANs encode
contrastive and non-contrastive nasality in French and English vowels by
applying the ciwGAN architecture (Begus 2021a). Begus claims that ciwGAN
encodes linguistically meaningful representations with categorical variables in
its latent space, and that manipulating the latent variables yields an almost
one-to-one control of the phonological features in ciwGAN's generated
outputs. However, our results show an interactive effect of latent variables on
the features in the generated outputs, which suggests the learned
representations in neural networks are different from the phonological
representations proposed by linguists. On the other hand, ciwGAN is able to
distinguish contrastive and noncontrastive features in English and French by
encoding them differently. Comparing the performance of GANs learning from
different languages results in a better understanding of which language-specific
features contribute to developing language-specific phonological
representations. We also discuss the role of training data frequencies in
phonological feature learning.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.12535
|
2023-05-21T18:29:10Z
|
Explaining How Transformers Use Context to Build Predictions
|
[
"Javier Ferrando",
"Gerard I. Gállego",
"Ioannis Tsiamas",
"Marta R. Costa-jussà"
] |
Language Generation Models produce words based on the previous context.
Although existing methods offer input attributions as explanations for a
model's prediction, it is still unclear how prior words affect the model's
decision throughout the layers. In this work, we leverage recent advances in
explainability of the Transformer and present a procedure to analyze models for
language generation. Using contrastive examples, we compare the alignment of
our explanations with evidence of the linguistic phenomena, and show that our
method consistently aligns better than gradient-based and perturbation-based
baselines. Then, we investigate the role of MLPs inside the Transformer and
show that they learn features that help the model predict words that are
grammatically acceptable. Lastly, we apply our method to Neural Machine
Translation models, and demonstrate that they generate human-like source-target
alignments for building predictions.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.12552
|
2023-05-21T19:26:46Z
|
Wav2SQL: Direct Generalizable Speech-To-SQL Parsing
|
[
"Huadai Liu",
"Rongjie Huang",
"Jinzheng He",
"Gang Sun",
"Ran Shen",
"Xize Cheng",
"Zhou Zhao"
] |
Speech-to-SQL (S2SQL) aims to convert spoken questions into SQL queries given
relational databases, which has been traditionally implemented in a cascaded
manner while facing the following challenges: 1) model training is faced with
the major issue of data scarcity, where limited parallel data is available; and
2) the systems should be robust enough to handle diverse out-of-domain speech
samples that differ from the source data. In this work, we propose the first
direct speech-to-SQL parsing model Wav2SQL which avoids error compounding
across cascaded systems. Specifically, 1) to accelerate speech-driven SQL
parsing research in the community, we release a large-scale and multi-speaker
dataset MASpider; 2) leveraging the recent progress in the large-scale
pre-training, we show that it alleviates the data scarcity issue and allows
for direct speech-to-SQL parsing; and 3) we include the speech re-programming
and gradient reversal classifier techniques to reduce acoustic variance and
learn style-agnostic representations, improving generalization to unseen out-of-domain
custom data. Experimental results demonstrate that Wav2SQL avoids error
compounding and achieves state-of-the-art results by up to 2.5\% accuracy
improvement over the baseline.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.12564
|
2023-05-21T20:57:12Z
|
ChatGPT Is More Likely to Be Perceived as Male Than Female
|
[
"Jared Wong",
"Jin Kim"
] |
We investigate how people perceive ChatGPT, and, in particular, how they
assign human-like attributes such as gender to the chatbot. Across five
pre-registered studies (N = 1,552), we find that people are more likely to
perceive ChatGPT to be male than female. Specifically, people perceive male
gender identity (1) following demonstrations of ChatGPT's core abilities (e.g.,
providing information or summarizing text), (2) in the absence of such
demonstrations, and (3) across different methods of eliciting perceived gender
(using various scales and asking to name ChatGPT). Moreover, we find that this
seemingly default perception of ChatGPT as male can reverse when ChatGPT's
feminine-coded abilities are highlighted (e.g., providing emotional support for
a user).
|
[
"cs.HC",
"cs.AI",
"cs.CL",
"cs.LG"
] | false |
2305.12579
|
2023-05-21T22:06:14Z
|
Hystoc: Obtaining word confidences for fusion of end-to-end ASR systems
|
[
"Karel Beneš",
"Martin Kocour",
"Lukáš Burget"
] |
End-to-end (e2e) systems have recently gained wide popularity in automatic
speech recognition. However, these systems generally do not provide
well-calibrated word-level confidences. In this paper, we propose Hystoc, a
simple method for obtaining word-level confidences from hypothesis-level
scores. Hystoc is an iterative alignment procedure which turns hypotheses from
an n-best output of the ASR system into a confusion network. Eventually,
word-level confidences are obtained as posterior probabilities in the
individual bins of the confusion network. We show that Hystoc provides
confidences that correlate well with the accuracy of the ASR hypothesis.
Furthermore, we show that utilizing Hystoc in fusion of multiple e2e ASR
systems increases the gains from the fusion by up to 1\,\% absolute WER on
the Spanish RTVE2020 dataset. Finally, we experiment with using Hystoc for direct
fusion of n-best outputs from multiple systems, but we only achieve minor gains
when fusing very similar systems.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
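To illustrate the idea of deriving word-level confidences from hypothesis-level scores, here is a heavily simplified toy version. Real Hystoc builds a confusion network by iterative alignment; this sketch instead bins words by position only (an assumption made for brevity), turning hypothesis scores into a posterior and accumulating mass per bin.

```python
# Toy word confidences from an n-best list via positional binning.
import math
from collections import defaultdict

def word_confidences(nbest):
    """nbest: list of (hypothesis_words, log_score) pairs."""
    # Posterior over hypotheses via a softmax of their scores.
    m = max(s for _, s in nbest)
    post = [math.exp(s - m) for _, s in nbest]
    z = sum(post)
    post = [p / z for p in post]
    # Accumulate posterior mass per (position, word) "bin".
    bins = defaultdict(float)
    for (words, _), p in zip(nbest, post):
        for i, w in enumerate(words):
            bins[(i, w)] += p
    return bins

conf = word_confidences([("the cat sat".split(), -1.0),
                         ("the cat sad".split(), -2.5)])
print(conf[(2, "sat")])  # posterior confidence of "sat" at position 2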
2305.12329
|
2023-05-21T03:32:48Z
|
Anomaly Detection Using One-Class SVM for Logs of Juniper Router Devices
|
[
"Tat-Bao-Thien Nguyen",
"Teh-Lu Liao",
"Tuan-Anh Vu"
] |
The article deals with anomaly detection in logs of Juniper router devices.
Abnormal Juniper router logs are logs that differ from those of normal
operation, and they often reflect abnormal operation of the router devices. To
prevent router devices from being damaged and to help administrators grasp
error situations quickly, it is very important to detect abnormal operation
early. In this work, we present a new way to extract important features from
the log data of Juniper router devices and use a machine learning method
(based on the One-Class SVM model) for anomaly detection. The One-Class SVM
model requires some knowledge and comprehension of Juniper router logs so that
it can analyze, interpret, and test the knowledge acquired. We collected log
data from many real Juniper router devices and classified them based on our
knowledge. Before these logs were used for training and testing the One-Class
SVM model, a feature extraction phase was carried out. With the proposed
method, system errors of the routers were detected quickly and accurately.
This may help our company reduce the operating cost of the router systems.
|
[
"cs.LG"
] | false |
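A minimal sketch of this kind of pipeline with scikit-learn: extract numeric features from log lines, fit a One-Class SVM on normal logs, and flag outliers. The featurize function is an illustrative stand-in, not the paper's feature extraction.

```python
# Sketch: One-Class SVM anomaly detection on router-log features.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

def featurize(log_line):
    # Illustrative features only: length, digit count, token count.
    return [len(log_line),
            sum(c.isdigit() for c in log_line),
            len(log_line.split())]

normal_logs = ["interface ge-0/0/0 up", "chassis ok temp 34C"] * 50
scaler = StandardScaler()           # keep the scaler to transform test logs
X = scaler.fit_transform([featurize(l) for l in normal_logs])

clf = OneClassSVM(kernel="rbf", nu=0.05).fit(X)
# predict() returns +1 for inliers and -1 for anomalies.
print(clf.predict(X[:5]))
```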
2305.12365
|
2023-05-21T06:29:17Z
|
Towards Optimal Energy Management Strategy for Hybrid Electric Vehicle
with Reinforcement Learning
|
[
"Xinyang Wu",
"Elisabeth Wedernikow",
"Christof Nitsche",
"Marco F. Huber"
] |
In recent years, the development of Artificial Intelligence (AI) has shown
tremendous potential in diverse areas. Among them, reinforcement learning (RL)
has proven to be an effective solution for learning intelligent control
strategies. As an inevitable trend for mitigating climate change, hybrid
electric vehicles (HEVs) rely on efficient energy management strategies (EMS)
to minimize energy consumption. Many researchers have employed RL to learn
optimal EMS for specific vehicle models. However, most of these models tend to
be complex and proprietary, making them unsuitable for broad applicability.
This paper presents a novel framework, in which we implement and integrate
RL-based EMS with the open-source vehicle simulation tool called FASTSim. The
learned RL-based EMSs are evaluated on various vehicle models using different
test drive cycles and prove to be effective in improving energy efficiency.
|
[
"cs.LG"
] | false |
2305.12578
|
2023-05-21T21:57:32Z
|
Self-Explainable Graph Neural Networks for Link Prediction
|
[
"Huaisheng Zhu",
"Dongsheng Luo",
"Xianfeng Tang",
"Junjie Xu",
"Hui Liu",
"Suhang Wang"
] |
Graph Neural Networks (GNNs) have achieved state-of-the-art performance for
link prediction. However, GNNs suffer from poor interpretability, which limits
their adoption in critical scenarios that require knowing why certain links
are predicted. Despite various methods proposed for the explainability of GNNs,
most of them are post-hoc explainers developed for explaining node
classification. Directly adopting existing post-hoc explainers for explaining
link prediction is sub-optimal because: (i) post-hoc explainers usually adopt
another strategy or model to explain a target model, which could misinterpret
the target model; and (ii) GNN explainers for node classification identify
crucial subgraphs around each node for the explanation; while for link
prediction, one needs to explain the prediction for each pair of nodes based on
graph structure and node attributes. Therefore, in this paper, we study a novel
problem of self-explainable GNNs for link prediction, which can simultaneously
give accurate predictions and explanations. Concretely, we propose a new
framework that finds $K$ important neighbors of a node to learn
pair-specific representations for links from this node to other nodes. These
$K$ different neighbors represent important characteristics of the node and
model various factors for links from it. Thus, $K$ neighbors can provide
explanations for the existence of links. Experiments on both synthetic and
real-world datasets verify the effectiveness of the proposed framework for link
prediction and explanation.
|
[
"cs.LG"
] | false |
2305.12585
|
2023-05-21T22:44:18Z
|
GeometricImageNet: Extending convolutional neural networks to vector and
tensor images
|
[
"Wilson Gregory",
"David W. Hogg",
"Ben Blum-Smith",
"Maria Teresa Arias",
"Kaze W. K. Wong",
"Soledad Villar"
] |
Convolutional neural networks and their ilk have been very successful for
many learning tasks involving images. These methods assume that the input is a
scalar image representing the intensity in each pixel, possibly in multiple
channels for color images. In natural-science domains however, image-like data
sets might have vectors (velocity, say), tensors (polarization, say),
pseudovectors (magnetic field, say), or other geometric objects in each pixel.
Treating the components of these objects as independent channels in a CNN
neglects their structure entirely. Our formulation -- the GeometricImageNet --
combines a geometric generalization of convolution with outer products, tensor
index contractions, and tensor index permutations to construct geometric-image
functions of geometric images that use and benefit from the tensor structure.
The framework permits, with a very simple adjustment, restriction to function
spaces that are exactly equivariant to translations, discrete rotations, and
reflections. We use representation theory to quantify the dimension of the
space of equivariant polynomial functions on 2-dimensional vector images. We
give partial results on the expressivity of GeometricImageNet on small images.
In numerical experiments, we find that GeometricImageNet has good
generalization for a small simulated physics system, even when trained with a
small training set. We expect this tool will be valuable for scientific and
engineering machine learning, for example in cosmology or ocean dynamics.
|
[
"cs.LG"
] | false |
2305.12313
|
2023-05-21T01:36:25Z
|
When are ensembles really effective?
|
[
"Ryan Theisen",
"Hyunsuk Kim",
"Yaoqing Yang",
"Liam Hodgkinson",
"Michael W. Mahoney"
] |
Ensembling has a long history in statistical data analysis, with many
impactful applications. However, in many modern machine learning settings, the
benefits of ensembling are less ubiquitous and less obvious. We study, both
theoretically and empirically, the fundamental question of when ensembling
yields significant performance improvements in classification tasks.
Theoretically, we prove new results relating the \emph{ensemble improvement
rate} (a measure of how much ensembling decreases the error rate versus a
single model, on a relative scale) to the \emph{disagreement-error ratio}. We
show that ensembling improves performance significantly whenever the
disagreement rate is large relative to the average error rate; and that,
conversely, one classifier is often enough whenever the disagreement rate is
low relative to the average error rate. On the way to proving these results, we
derive, under a mild condition called \emph{competence}, improved upper and
lower bounds on the average test error rate of the majority vote classifier. To
complement this theory, we study ensembling empirically in a variety of
settings, verifying the predictions made by our theory, and identifying
practical scenarios where ensembling does and does not result in large
performance improvements. Perhaps most notably, we demonstrate a distinct
difference in behavior between interpolating models (popular in current
practice) and non-interpolating models (such as tree-based methods, where
ensembling is popular), demonstrating that ensembling helps considerably more
in the latter case than in the former.
|
[
"stat.ML",
"cs.LG"
] | false |
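A quick way to compute the central diagnostic from the ensembling paper on one's own models is the ratio of the disagreement rate to the average error rate. This sketch only computes the two empirical quantities; the paper's precise guarantees additionally require its competence condition, which is not checked here.

```python
# Sketch: disagreement-error ratio as an ensembling diagnostic.
import numpy as np
from itertools import combinations

def disagreement_error_ratio(preds, y):
    """preds: (n_models, n_samples) predicted labels; y: true labels."""
    avg_err = np.mean([np.mean(p != y) for p in preds])
    pairs = list(combinations(range(len(preds)), 2))
    disagreement = np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])
    return disagreement / avg_err  # large values suggest ensembling helps

preds = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]])
y = np.array([0, 1, 1, 0])
print(disagreement_error_ratio(preds, y))
```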
2305.12316
|
2023-05-21T01:57:56Z
|
One-Shot Federated Learning for LEO Constellations that Reduces
Convergence Time from Days to 90 Minutes
|
[
"Mohamed Elmahallawy",
"Tie Luo"
] |
A Low Earth orbit (LEO) satellite constellation consists of a large number of
small satellites traveling in space with high mobility and collecting vast
amounts of mobility data such as cloud movement for weather forecast, large
herds of animals migrating across geo-regions, spreading of forest fires, and
aircraft tracking. Machine learning can be utilized to analyze these mobility
data to address global challenges, and Federated Learning (FL) is a promising
approach because it eliminates the need for transmitting raw data and hence is
both bandwidth and privacy-friendly. However, FL requires many communication
rounds between clients (satellites) and the parameter server (PS), leading to
substantial delays of up to several days in LEO constellations. In this paper,
we propose a novel one-shot FL approach for LEO satellites, called LEOShot,
that needs only a single communication round to complete the entire learning
process. LEOShot comprises three processes: (i) synthetic data generation, (ii)
knowledge distillation, and (iii) virtual model retraining. We evaluate and
benchmark LEOShot against the state of the art and the results show that it
drastically expedites FL convergence by more than an order of magnitude. Also
surprisingly, despite the one-shot nature, its model accuracy is on par with or
even outperforms regular iterative FL schemes by a large margin.
|
[
"cs.LG",
"cs.NI"
] | false |
2305.12320
|
2023-05-21T02:37:26Z
|
Random Relabeling for Efficient Machine Unlearning
|
[
"Junde Li",
"Swaroop Ghosh"
] |
Learning algorithms and data are the driving forces for machine learning to
bring about tremendous transformation of industrial intelligence. However,
individuals' right to retract their personal data and relevant data privacy
regulations pose great challenges to machine learning: how to design an
efficient mechanism to support certified data removals. Removal of previously
seen data, known as machine unlearning, is challenging, as these data points
were implicitly memorized in the training process of learning algorithms.
Retraining on the remaining data from scratch straightforwardly serves such
deletion requests; however, this naive method is often not computationally
feasible. We propose
the unlearning scheme random relabeling, which is applicable to generic
supervised learning algorithms, to efficiently deal with sequential data
removal requests in the online setting. A less constraining removal
certification method based on probability distribution similarity with naive
unlearning is further developed for logit-based classifiers.
|
[
"cs.LG",
"cs.CR"
] | false |
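A minimal sketch of the random-relabeling idea for an online deletion request: assign random labels to the points to be forgotten and continue training, instead of retraining from scratch. The model, data, and fine-tuning schedule below are illustrative assumptions, not the paper's setup, and the probability-based removal certification is omitted.

```python
# Sketch: approximate unlearning by randomly relabeling removed points.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

clf = SGDClassifier(loss="log_loss", random_state=0).fit(X, y)

def unlearn(clf, X, y, remove_idx, classes=(0, 1), epochs=5):
    # Replace the labels of the points to be forgotten with random classes,
    # then continue training on the modified dataset.
    y = y.copy()
    y[remove_idx] = rng.choice(classes, size=len(remove_idx))
    for _ in range(epochs):
        clf.partial_fit(X, y, classes=np.array(classes))
    return clf

clf = unlearn(clf, X, y, remove_idx=np.array([3, 17, 42]))
```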
2305.12335
|
2023-05-21T03:58:16Z
|
Temporal Fusion Transformers for Streamflow Prediction: Value of
Combining Attention with Recurrence
|
[
"Sinan Rasiya Koya",
"Tirthankar Roy"
] |
Over the past few decades, the hydrology community has witnessed notable
advancements in streamflow prediction, particularly with the introduction of
cutting-edge machine-learning algorithms. Recurrent neural networks, especially
Long Short-Term Memory (LSTM) networks, have become popular due to their
capacity to create precise forecasts and realistically mimic the system
dynamics. Attention-based models, such as Transformers, can learn from the
entire data sequence concurrently, a feature that LSTM does not have. This work
tests the hypothesis that combining recurrence with attention can improve
streamflow prediction. We set up the Temporal Fusion Transformer (TFT)
architecture, a model that combines both of these aspects and has never been
applied in hydrology before. We compare the performance of LSTM, Transformers,
and TFT over 2,610 globally distributed catchments from the recently available
Caravan dataset. Our results demonstrate that TFT indeed exceeds the
performance benchmark set by the LSTM and Transformers for streamflow
prediction. Additionally, being an explainable AI method, TFT helps in gaining
insights into the streamflow generation processes.
|
[
"cs.LG",
"physics.geo-ph"
] | false |
2305.12349
|
2023-05-21T05:00:40Z
|
PINA: Leveraging Side Information in eXtreme Multi-label Classification
via Predicted Instance Neighborhood Aggregation
|
[
"Eli Chien",
"Jiong Zhang",
"Cho-Jui Hsieh",
"Jyun-Yu Jiang",
"Wei-Cheng Chang",
"Olgica Milenkovic",
"Hsiang-Fu Yu"
] |
The eXtreme Multi-label Classification~(XMC) problem seeks to find relevant
labels from an exceptionally large label space. Most of the existing XMC
learners focus on the extraction of semantic features from input query text.
However, conventional XMC studies usually neglect the side information of
instances and labels, which can be of use in many real-world applications such
as recommendation systems and e-commerce product search. We propose Predicted
Instance Neighborhood Aggregation (PINA), a data enhancement method for the
general XMC problem that leverages beneficial side information. Unlike most
existing XMC frameworks that treat labels and input instances as featureless
indicators and independent entries, PINA extracts information from the label
metadata and the correlations among training instances. Extensive experimental
results demonstrate the consistent gain of PINA on various XMC tasks compared
to the state-of-the-art methods: PINA offers a gain in accuracy compared to
standard XR-Transformers on five public benchmark datasets. Moreover, PINA
achieves a $\sim 5\%$ gain in accuracy on the largest dataset
LF-AmazonTitles-1.3M. Our implementation is publicly available.
|
[
"cs.LG",
"cs.IR"
] | false |
2305.12352
|
2023-05-21T05:11:30Z
|
Pre-trained Mixed Integer Optimization through Multi-variable
Cardinality Branching
|
[
"Yanguang Chen",
"Wenzhi Gao",
"Dongdong Ge",
"Yinyu Ye"
] |
We propose a new method to accelerate online Mixed Integer Optimization with
Pre-trained machine learning models (PreMIO). The key component of PreMIO is a
multi-variable cardinality branching procedure that splits the feasible region
with data-driven hyperplanes, which can be easily integrated into any MIP
solver with two lines of code. Moreover, we incorporate learning theory and
concentration inequalities to develop a straightforward and interpretable
hyper-parameter selection strategy for our method. We test the performance of
PreMIO by applying it to state-of-the-art MIP solvers and running numerical
experiments on both classical OR benchmark datasets and real-life instances.
The results validate the effectiveness of our proposed method.
|
[
"math.OC",
"cs.LG"
] | false |
2305.12356
|
2023-05-21T05:28:37Z
|
Integer or Floating Point? New Outlooks for Low-Bit Quantization on
Large Language Models
|
[
"Yijia Zhang",
"Lingran Zhao",
"Shijie Cao",
"Wenqiang Wang",
"Ting Cao",
"Fan Yang",
"Mao Yang",
"Shanghang Zhang",
"Ningyi Xu"
] |
Efficient deployment of large language models (LLMs) necessitates low-bit
quantization to minimize model size and inference cost. While low-bit integer
formats (e.g., INT8/INT4) have been the conventional choice, emerging low-bit
floating-point formats (e.g., FP8/FP4) offer a compelling alternative and are
gaining support from cutting-edge hardware, such as NVIDIA's H100 GPU. However,
the superiority of low-bit INT versus FP formats for quantization on LLMs
remains unclear. In this study, we conduct a comparative analysis of INT and FP
quantization with the same bit-width, revealing that the optimal quantization
format varies across different layers due to the complexity and diversity of
tensor distribution. Consequently, we advocate the Mixture of Formats
Quantization (MoFQ), which selects the optimal format on a layer-wise basis.
This simple yet effective approach achieves state-of-the-art results in both
weight-only (W-only) and weight-activation (WA) post-training quantization
scenarios when tested on LLaMA across various tasks. In 4-bit W-only
quantization, MoFQ surpasses GPTQ without complex hyperparameter tuning and
with an order of magnitude faster quantization speed. In 8-bit WA
quantization, MoFQ significantly outperforms INT/FP-only methods, achieving
performance close to the full precision model. Notably, MoFQ incurs no hardware
overhead compared to INT/FP-only quantization, as the bit-width remains
unchanged.
|
[
"cs.LG",
"cs.AI"
] | false |
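A toy sketch of the layer-wise format selection underlying MoFQ: quantize a weight tensor to both symmetric INT4 and an FP4-style grid, and keep whichever format reconstructs it better. The E2M1-style FP4 value grid and the MSE criterion are assumptions for illustration, not necessarily the paper's exact choices.

```python
# Sketch: choose INT4 vs FP4 per layer by reconstruction error.
import torch

FP4_GRID = torch.tensor([0., .5, 1., 1.5, 2., 3., 4., 6.])  # E2M1-style magnitudes
FP4_GRID = torch.cat([-FP4_GRID.flip(0), FP4_GRID])

def quant_int4(w):
    scale = w.abs().max() / 7          # symmetric INT4: levels -7..7
    return torch.round(w / scale).clamp(-7, 7) * scale

def quant_fp4(w):
    scale = w.abs().max() / FP4_GRID.max()
    idx = torch.argmin((w.unsqueeze(-1) / scale - FP4_GRID).abs(), dim=-1)
    return FP4_GRID[idx] * scale

def pick_format(w):
    e_int = torch.mean((w - quant_int4(w)) ** 2)
    e_fp = torch.mean((w - quant_fp4(w)) ** 2)
    return "INT4" if e_int <= e_fp else "FP4"

w = torch.randn(128, 128)
print(pick_format(w))  # the format is chosen layer by layer, as in MoFQ
```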
2305.12364
|
2023-05-21T06:28:53Z
|
Machine Learning for Socially Responsible Portfolio Optimisation
|
[
"Taeisha Nundlall",
"Terence L Van Zyl"
] |
Socially responsible investors build investment portfolios intending to
incite social and environmental advancement alongside a financial return.
Although Mean-Variance (MV) models successfully generate the highest possible
return based on an investor's risk tolerance, MV models do not make provisions
for additional constraints relevant to socially responsible (SR) investors. In
response to this problem, the MV model must consider Environmental, Social, and
Governance (ESG) scores in optimisation. Based on the prominent MV model, this
study implements portfolio optimisation for socially responsible investors. The
amended MV model allows SR investors to enter markets with competitive SR
portfolios despite facing a trade-off between their investment Sharpe Ratio and
the average ESG score of the portfolio.
|
[
"q-fin.PM",
"cs.LG"
] | false |
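A minimal sketch of an ESG-amended mean-variance problem of the kind described: maximize expected return minus a risk penalty, subject to a floor on the portfolio's average ESG score. All numbers (returns, covariances, ESG scores, the floor) are made up for illustration.

```python
# Sketch: mean-variance portfolio optimisation with an ESG floor.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])            # expected returns
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.12, 0.03],
                  [0.04, 0.03, 0.09]])       # return covariance matrix
esg = np.array([70.0, 40.0, 85.0])           # ESG scores per asset
esg_floor, risk_aversion = 65.0, 3.0

def neg_utility(w):
    return -(mu @ w - risk_aversion * w @ Sigma @ w)

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},          # fully invested
        {"type": "ineq", "fun": lambda w: esg @ w - esg_floor}]  # ESG floor
res = minimize(neg_utility, np.ones(3) / 3, bounds=[(0, 1)] * 3,
               constraints=cons)
print(res.x.round(3), "portfolio ESG:", (esg @ res.x).round(1))
```

Tightening the ESG floor traces out the trade-off the abstract mentions: the Sharpe ratio falls as the average ESG score of the feasible portfolios rises.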
2305.12393
|
2023-05-21T08:12:54Z
|
Layer Collaboration in the Forward-Forward Algorithm
|
[
"Guy Lorberbom",
"Itai Gat",
"Yossi Adi",
"Alex Schwing",
"Tamir Hazan"
] |
Backpropagation, which uses the chain rule, is the de-facto standard
algorithm for optimizing neural networks nowadays. Recently, Hinton (2022)
proposed the forward-forward algorithm, a promising alternative that optimizes
neural nets layer-by-layer, without propagating gradients throughout the
network. Although such an approach has several advantages over back-propagation
and shows promising results, the fact that each layer is being trained
independently limits the optimization process. Specifically, it prevents the
network's layers from collaborating to learn complex and rich features. In this
work, we study layer collaboration in the forward-forward algorithm. We show
that the current version of the forward-forward algorithm is suboptimal when
considering information flow in the network, resulting in a lack of
collaboration between layers of the network. We propose an improved version
that supports layer collaboration to better utilize the network structure,
while not requiring any additional assumptions or computations. We empirically
demonstrate the efficacy of the proposed version when considering both
information flow and objective metrics. Additionally, we provide a theoretical
motivation for the proposed method, inspired by functional entropy theory.
|
[
"cs.LG",
"cs.NE"
] | false |
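For readers unfamiliar with the forward-forward algorithm being improved here, a minimal sketch of Hinton's layer-local training: each layer maximizes "goodness" (sum of squared activations) on positive data and minimizes it on negative data, with no gradients crossing layer boundaries. The hyperparameters, threshold, and single training step per layer are illustrative; the paper's layer-collaboration variant is not implemented here.

```python
# Sketch: layer-local forward-forward training with a goodness objective.
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, theta=2.0, lr=0.03):
        super().__init__()
        self.lin, self.theta = nn.Linear(d_in, d_out), theta
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only direction, not goodness, is passed upward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.lin(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(1)
        g_neg = self.forward(x_neg).pow(2).sum(1)
        # Logistic loss: push positive goodness above theta, negative below.
        loss = torch.log1p(torch.exp(torch.cat([self.theta - g_pos,
                                                g_neg - self.theta]))).mean()
        self.opt.zero_grad(); loss.backward(); self.opt.step()
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

layers = [FFLayer(20, 64), FFLayer(64, 64)]
x_pos, x_neg = torch.randn(32, 20), torch.randn(32, 20)
for layer in layers:        # layers trained one by one; loop epochs in practice
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

The detach between layers is the point the paper critiques: each layer only ever sees the frozen output of its predecessor, which limits cross-layer collaboration.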
2305.12402
|
2023-05-21T08:51:55Z
|
Bandit Multi-linear DR-Submodular Maximization and Its Applications on
Adversarial Submodular Bandits
|
[
"Zongqi Wan",
"Jialin Zhang",
"Wei Chen",
"Xiaoming Sun",
"Zhijie Zhang"
] |
We investigate the online bandit learning of the monotone multi-linear
DR-submodular functions, designing the algorithm $\mathtt{BanditMLSM}$ that
attains $O(T^{2/3}\log T)$ of $(1-1/e)$-regret. Then we reduce submodular
bandit with partition matroid constraint and bandit sequential monotone
maximization to the online bandit learning of the monotone multi-linear
DR-submodular functions, attaining $O(T^{2/3}\log T)$ of $(1-1/e)$-regret in
both problems, which improves on the existing results. To the best of our
knowledge, we are the first to give a sublinear regret algorithm for the
submodular bandit with partition matroid constraint. A special case of this
problem is studied by Streeter et al. (2009). They prove an $O(T^{4/5})$
$(1-1/e)$-regret upper bound. For the bandit sequential submodular
maximization, the existing work proves an $O(T^{2/3})$ regret with a suboptimal
$1/2$ approximation ratio (Niazadeh et al. 2021).
|
[
"cs.LG",
"cs.AI"
] | false |
2305.12470
|
2023-05-21T14:12:02Z
|
Quasi-Monte Carlo Graph Random Features
|
[
"Isaac Reid",
"Krzysztof Choromanski",
"Adrian Weller"
] |
We present a novel mechanism to improve the accuracy of the
recently-introduced class of graph random features (GRFs). Our method induces
negative correlations between the lengths of the algorithm's random walks by
imposing antithetic termination: a procedure to sample more diverse random
walks which may be of independent interest. It has a trivial drop-in
implementation. We derive strong theoretical guarantees on the properties of
these quasi-Monte Carlo GRFs (q-GRFs), proving that they yield lower-variance
estimators of the 2-regularised Laplacian kernel under mild conditions.
Remarkably, our results hold for any graph topology. We demonstrate empirical
accuracy improvements on a variety of tasks including a new practical
application: time-efficient approximation of the graph diffusion process. To
our knowledge, q-GRFs constitute the first rigorously studied quasi-Monte Carlo
scheme for kernels defined on combinatorial objects, inviting new research on
correlations between graph random walks.
|
[
"stat.ML",
"cs.LG"
] | false |
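A toy sketch of the antithetic termination idea: the two walks in a pair share uniform draws u and 1 - u for the stopping decision, so one walk halting early makes its partner likely to run long, negatively correlating their lengths. The graph, halting probability, and walk cap are illustrative; the GRF kernel estimator built on top of the walks is omitted.

```python
# Sketch: a pair of graph random walks with antithetic termination.
import random

def antithetic_walk_pair(neighbors, start, p_halt=0.3, max_len=50):
    """Sample two coupled walks whose termination draws are u and 1 - u."""
    walks, alive = [[start], [start]], [True, True]
    for _ in range(max_len):
        if not any(alive):
            break
        u = random.random()
        for i, draw in enumerate((u, 1.0 - u)):   # antithetic coupling
            if alive[i]:
                if draw < p_halt:                 # shared stopping decision
                    alive[i] = False
                else:
                    walks[i].append(random.choice(neighbors[walks[i][-1]]))
    return walks

# Toy graph: a 4-cycle given as an adjacency list.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
w1, w2 = antithetic_walk_pair(neighbors, start=0)
print(len(w1), len(w2))   # lengths are negatively correlated across pairs
```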
2305.12557
|
2023-05-21T20:12:27Z
|
Confidence-aware Personalized Federated Learning via Variational
Expectation Maximization
|
[
"Junyi Zhu",
"Xingchen Ma",
"Matthew B. Blaschko"
] |
Federated Learning (FL) is a distributed learning scheme to train a shared
model across clients. One common and fundamental challenge in FL is that the
sets of data across clients could be non-identically distributed and have
different sizes. Personalized Federated Learning (PFL) attempts to solve this
challenge via locally adapted models. In this work, we present a novel
framework for PFL based on hierarchical Bayesian modeling and variational
inference. A global model is introduced as a latent variable to augment the
joint distribution of clients' parameters and capture the common trends of
different clients; optimization is derived based on the principle of maximizing
the marginal likelihood and conducted using variational expectation
maximization. Our algorithm gives rise to a closed-form estimation of a
confidence value which comprises the uncertainty of clients' parameters and
local model deviations from the global model. The confidence value is used to
weigh clients' parameters in the aggregation stage and adjust the
regularization effect of the global model. We evaluate our method through
extensive empirical studies on multiple datasets. Experimental results show
that our approach obtains competitive results under mild heterogeneous
circumstances while significantly outperforming state-of-the-art PFL frameworks
in highly heterogeneous settings. Our code is available at
https://github.com/JunyiZhu-AI/confidence_aware_PFL.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.12590
|
2023-05-21T23:01:13Z
|
FAQ: Mitigating the Impact of Faults in the Weight Memory of DNN
Accelerators through Fault-Aware Quantization
|
[
"Muhammad Abdullah Hanif",
"Muhammad Shafique"
] |
Permanent faults induced due to imperfections in the manufacturing process of
Deep Neural Network (DNN) accelerators are a major concern, as they negatively
impact the manufacturing yield of the chip fabrication process. Fault-aware
training is the state-of-the-art approach for mitigating such faults. However,
it incurs huge retraining overheads, specifically when used for large DNNs
trained on complex datasets. To address this issue, we propose a novel
Fault-Aware Quantization (FAQ) technique for mitigating the effects of stuck-at
permanent faults in the on-chip weight memory of DNN accelerators at a
negligible overhead cost compared to fault-aware retraining while offering
comparable accuracy results. We propose a lookup table-based algorithm to
achieve ultra-low model conversion time. We present an extensive evaluation of the
proposed approach using five different DNNs, i.e., ResNet-18, VGG11, VGG16,
AlexNet and MobileNetV2, and three different datasets, i.e., CIFAR-10,
CIFAR-100 and ImageNet. The results demonstrate that FAQ helps in maintaining
the baseline accuracy of the DNNs at low and moderate fault rates without
involving costly fault-aware training. For example, for ResNet-18 trained on
the CIFAR-10 dataset, at 0.04 fault rate FAQ offers (on average) an increase of
76.38% in accuracy. Similarly, for VGG11 trained on the CIFAR-10 dataset, at
0.04 fault rate FAQ offers (on average) an increase of 70.47% in accuracy. The
results also show that FAQ incurs negligible overheads, i.e., less than 5% of
the time required to run 1 epoch of retraining. We additionally demonstrate the
efficacy of our technique when used in conjunction with fault-aware retraining
and show that the use of FAQ inside fault-aware retraining enables fast
accuracy recovery.
|
[
"cs.AR",
"cs.LG"
] | false |
2305.12600
|
2023-05-21T23:16:30Z
|
PRODIGY: Enabling In-context Learning Over Graphs
|
[
"Qian Huang",
"Hongyu Ren",
"Peng Chen",
"Gregor Kržmanc",
"Daniel Zeng",
"Percy Liang",
"Jure Leskovec"
] |
In-context learning is the ability of a pretrained model to adapt to novel
and diverse downstream tasks by conditioning on prompt examples, without
optimizing any parameters. While large language models have demonstrated this
ability, how in-context learning could be performed over graphs is unexplored.
In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse
\textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first
pretraining framework that enables in-context learning over graphs. The key
idea of our framework is to formulate in-context learning over graphs with a
novel \emph{prompt graph} representation, which connects prompt examples and
queries. We then propose a graph neural network architecture over the prompt
graph and a corresponding family of in-context pretraining objectives. With
PRODIGY, the pretrained model can directly perform novel downstream
classification tasks on unseen graphs via in-context learning. We provide
empirical evidence of the effectiveness of our framework by showcasing its
strong in-context learning performance on tasks involving citation networks and
knowledge graphs. Our approach outperforms the in-context learning accuracy of
contrastive pretraining baselines with hard-coded adaptation by 18\% on average
across all setups. Moreover, it also outperforms standard finetuning with
limited data by 33\% on average with in-context learning.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.14378
|
2023-05-21T08:00:23Z
|
Predicting Stock Market Time-Series Data using CNN-LSTM Neural Network
Model
|
[
"Aadhitya A",
"Rajapriya R",
"Vineetha R S",
"Anurag M Bagde"
] |
The stock market is important because it represents ownership claims on
businesses; without sufficient stock, a company cannot perform well
financially. Predicting the stock market performance of a company is hard
because stock prices keep changing and are never constant, which makes the
data complex to model. However, if a company's previous performance in the
stock market is known, we can track the data and provide predictions to
stockholders so that they can make informed decisions on handling the
company's stocks. Many machine learning models have been built for this task,
but they have fallen short for several reasons, such as the absence of
advanced libraries and poor accuracy when trained on real-time data. To track
the patterns and features of the data, a CNN-LSTM neural network can be built.
CNNs are now used in Natural Language Processing (NLP) based applications, so
by identifying the features in stock data and converting them into tensors, we
can extract the features and then pass them to an LSTM neural network to find
the patterns, thereby predicting the stock market for a given period of time.
The accuracy of the CNN-LSTM model is found to be high even when trained on
real-time stock market data. This paper describes the features of the custom
CNN-LSTM model, the experiments we conducted with it (such as training on
stock market datasets and performance comparisons with other models), and the
end product we obtained at the final stage.
|
[
"q-fin.ST",
"cs.LG"
] | false |
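A minimal Keras sketch of a CNN-LSTM forecaster of the kind described: a 1-D convolution extracts local patterns from price windows and an LSTM models their temporal order. The window size, layer widths, and dummy data are illustrative assumptions, not the paper's custom model.

```python
# Sketch: a small CNN-LSTM for next-step stock price prediction.
import numpy as np
from tensorflow.keras import layers, models

window, n_features = 30, 1   # 30 past closing prices per sample

model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                      # temporal order
    layers.Dense(1),                                      # next-step price
])
model.compile(optimizer="adam", loss="mse")

# Dummy data in place of a real (scaled) stock price series.
X = np.random.rand(256, window, n_features)
y = np.random.rand(256, 1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```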
2305.15430
|
2023-05-21T06:55:10Z
|
Bounded Projection Matrix Approximation with Applications to Community
Detection
|
[
"Zheng Zhai",
"Hengchao Chen",
"Qiang Sun"
] |
Community detection is an important problem in unsupervised learning. This
paper proposes to solve a projection matrix approximation problem with an
additional entrywise bounded constraint. Algorithmically, we introduce a new
differentiable convex penalty and derive an alternating direction method of
multipliers (ADMM) algorithm. Theoretically, we establish the convergence
properties of the proposed algorithm. Numerical experiments demonstrate the
superiority of our algorithm over its competitors, such as the semi-definite
relaxation method and spectral clustering.
|
[
"cs.SI",
"cs.LG"
] | false |
2305.15431
|
2023-05-21T11:01:14Z
|
Exploring and Exploiting Data Heterogeneity in Recommendation
|
[
"Zimu Wang",
"Jiashuo Liu",
"Hao Zou",
"Xingxuan Zhang",
"Yue He",
"Dongxu Liang",
"Peng Cui"
] |
Massive amounts of data are the foundation of data-driven recommendation
models. As an inherent nature of big data, data heterogeneity widely exists in
real-world recommendation systems. It reflects the differences in the
properties among sub-populations. Ignoring the heterogeneity in recommendation
data could limit the performance of recommendation models, hurt the
sub-populational robustness, and make the models misled by biases. However,
data heterogeneity has not attracted substantial attention in the
recommendation community. Therefore, it inspires us to adequately explore and
exploit heterogeneity for solving the above problems and assisting data
analysis. In this work, we focus on exploring two representative categories of
heterogeneity in recommendation data, namely the heterogeneity of prediction
mechanisms and of covariate distributions, and propose an algorithm that explores the
heterogeneity through a bilevel clustering method. Furthermore, the uncovered
heterogeneity is exploited for two purposes in recommendation scenarios:
prediction with multiple sub-models and supporting debiasing. Extensive
experiments on real-world data validate the existence of heterogeneity in
recommendation data and the effectiveness of exploring and exploiting data
heterogeneity in recommendation.
|
[
"cs.IR",
"cs.LG"
] | false |
2305.12407
|
2023-05-21T09:08:09Z
|
Federated Offline Policy Learning with Heterogeneous Observational Data
|
[
"Aldo Gael Carranza",
"Susan Athey"
] |
We consider the problem of learning personalized decision policies on
observational data from heterogeneous data sources. Moreover, we examine this
problem in the federated setting where a central server aims to learn a policy
on the data distributed across the heterogeneous sources without exchanging
their raw data. We present a federated policy learning algorithm based on
aggregation of local policies trained with doubly robust offline policy
evaluation and learning strategies. We provide a novel regret analysis for our
approach that establishes a finite-sample upper bound on a notion of global
regret across a distribution of clients. In addition, for any individual
client, we establish a corresponding local regret upper bound characterized by
the presence of distribution shift relative to all other clients. We support
our theoretical findings with experimental results. Our analysis and
experiments provide insights into the value of heterogeneous client
participation in federation for policy learning in heterogeneous settings.
|
[
"cs.LG",
"cs.DC",
"econ.EM",
"stat.ML"
] | false |
2305.12424
|
2023-05-21T10:44:02Z
|
Mol-PECO: a deep learning model to predict human olfactory perception
from molecular structures
|
[
"Mengji Zhang",
"Yusuke Hiki",
"Akira Funahashi",
"Tetsuya J. Kobayashi"
] |
While visual and auditory information conveyed by wavelength of light and
frequency of sound have been decoded, predicting olfactory information encoded
by the combination of odorants remains challenging due to the unknown and
potentially discontinuous perceptual space of smells and odorants. Herein, we
develop a deep learning model called Mol-PECO (Molecular Representation by
Positional Encoding of Coulomb Matrix) to predict olfactory perception from
molecular structures. Mol-PECO updates the learned atom embedding by
directional graph convolutional networks (GCN), which model the Laplacian
eigenfunctions as positional encoding, and Coulomb matrix, which encodes atomic
coordinates and charges. With a comprehensive dataset of 8,503 molecules,
Mol-PECO directly achieves an area-under-the-receiver-operating-characteristic
(AUROC) of 0.813 in 118 odor descriptors, superior to the machine learning of
molecular fingerprints (AUROC of 0.761) and GCN of adjacency matrix (AUROC of
0.678). The learned embeddings by Mol-PECO also capture a meaningful odor space
with global clustering of descriptors and local retrieval of similar odorants.
Our work may promote the understanding and decoding of the olfactory sense and
mechanisms.
|
[
"cs.LG",
"cs.AI",
"q-bio.BM",
"q-bio.NC"
] | false |
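For reference, a sketch of the Coulomb matrix that Mol-PECO takes as an input, following the standard Rupp et al. convention (off-diagonal entries Z_i Z_j / |R_i - R_j|, diagonal entries 0.5 Z_i^2.4). The toy geometry is illustrative; the directional GCN with Laplacian positional encoding is not shown.

```python
# Sketch: building a Coulomb matrix from atomic numbers and coordinates.
import numpy as np

def coulomb_matrix(Z, R):
    """Z: (n,) atomic numbers; R: (n, 3) coordinates in Angstroms."""
    n = len(Z)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, j] = 0.5 * Z[i] ** 2.4          # standard diagonal term
            else:
                C[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return C

# Water-like toy geometry: O at the origin, two H atoms.
Z = np.array([8, 1, 1])
R = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(coulomb_matrix(Z, R).round(2))
```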
2305.12475
|
2023-05-21T14:40:43Z
|
Two Sides of One Coin: the Limits of Untuned SGD and the Power of
Adaptive Methods
|
[
"Junchi Yang",
"Xiang Li",
"Ilyas Fatkhullin",
"Niao He"
] |
The classical analysis of Stochastic Gradient Descent (SGD) with polynomially
decaying stepsize $\eta_t = \eta/\sqrt{t}$ relies on well-tuned $\eta$
depending on problem parameters such as Lipschitz smoothness constant, which is
often unknown in practice. In this work, we prove that SGD with arbitrary $\eta
> 0$, referred to as untuned SGD, still attains an order-optimal convergence
rate $\widetilde{O}(T^{-1/4})$ in terms of gradient norm for minimizing smooth
objectives. Unfortunately, it comes at the expense of a catastrophic
exponential dependence on the smoothness constant, which we show is unavoidable
for this scheme even in the noiseless setting. We then examine three families
of adaptive methods $\unicode{x2013}$ Normalized SGD (NSGD), AMSGrad, and
AdaGrad $\unicode{x2013}$ unveiling their power in preventing such exponential
dependency in the absence of information about the smoothness parameter and
boundedness of stochastic gradients. Our results provide theoretical
justification for the advantage of adaptive methods over untuned SGD in
alleviating the issue with large gradients.
|
[
"math.OC",
"cs.LG",
"stat.ML"
] | false |
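A minimal sketch of Normalized SGD, one of the three adaptive families the paper examines: dividing the stochastic gradient by its norm caps the step length, which is what prevents the exponential dependence on the unknown smoothness constant. The stepsize schedule and toy objective are illustrative.

```python
# Sketch: Normalized SGD (NSGD) on a toy smooth objective.
import numpy as np

def nsgd(grad_fn, x0, eta=0.5, T=1000, eps=1e-12):
    x = x0.astype(float)
    for t in range(1, T + 1):
        g = grad_fn(x)
        # Step length is eta / sqrt(t) regardless of the gradient's size.
        x -= (eta / np.sqrt(t)) * g / (np.linalg.norm(g) + eps)
    return x

# Toy objective f(x) = ||x||^2 with gradient 2x.
x_star = nsgd(lambda x: 2 * x, x0=np.array([5.0, -3.0]))
print(x_star)
```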
2305.12543
|
2023-05-21T19:00:06Z
|
A Reinforcement Learning Approach for Robust Supervisory Control of UAVs
Under Disturbances
|
[
"Ibrahim Ahmed",
"Marcos Quinones-Grueiro",
"Gautam Biswas"
] |
In this work, we present an approach to supervisory reinforcement learning
control for unmanned aerial vehicles (UAVs). UAVs are dynamic systems where
control decisions in response to disturbances in the environment have to be
made in the order of milliseconds. We formulate a supervisory control
architecture that interleaves with extant embedded control and demonstrates
robustness to environmental disturbances in the form of adverse wind
conditions. We run case studies with a Tarot T-18 Octorotor to demonstrate the
effectiveness of our approach and compare it against a classic cascade control
architecture used in most vehicles. While the results show the performance
difference is marginal for nominal operations, substantial performance
improvement is obtained with the supervisory RL approach under unseen wind
conditions.
|
[
"eess.SY",
"cs.LG",
"cs.SY"
] | false |
2305.12571
|
2023-05-21T21:21:46Z
|
Reproducibility Requires Consolidated Artifacts
|
[
"Iordanis Fostiropoulos",
"Bowman Brown",
"Laurent Itti"
] |
Machine learning is facing a 'reproducibility crisis' where a significant
number of works report failures when attempting to reproduce previously
published results. We evaluate the sources of reproducibility failures using a
meta-analysis of 142 replication studies from ReScience C and 204 code
repositories. We find that missing experiment details such as hyperparameters
are potential causes of unreproducibility. We experimentally show the bias of
different hyperparameter selection strategies and conclude that consolidated
artifacts with a unified framework can help support reproducibility.
|
[
"cs.LG",
"cs.AI",
"cs.SE"
] | false |
2305.12581
|
2023-05-21T22:29:55Z
|
A parametric distribution for exact post-selection inference with data
carving
|
[
"Erik Drysdale"
] |
Post-selection inference (PoSI) is a statistical technique for obtaining
valid confidence intervals and p-values when hypothesis generation and testing
use the same source of data. PoSI can be used on a range of popular algorithms
including the Lasso. Data carving is a variant of PoSI in which a portion of
held out data is combined with the hypothesis generating data at inference
time. While data carving has attractive theoretical and empirical properties,
existing approaches rely on computationally expensive MCMC methods to carry out
inference. This paper's key contribution is to show that pivotal quantities can
be constructed for the data carving procedure based on a known parametric
distribution. Specifically, when the selection event is characterized by a set
of polyhedral constraints on a Gaussian response, data carving will follow the
sum of a normal and a truncated normal (SNTN), which is a variant of the
truncated bivariate normal distribution. The main impact of this insight is
that obtaining exact inference for data carving can be made computationally
trivial, since the CDF of the SNTN distribution can be found using the CDF of a
standard bivariate normal. A python package sntn has been released to further
facilitate the adoption of data carving with PoSI.
|
[
"stat.ME",
"cs.LG",
"stat.ML"
] | false |
2305.13341
|
2023-05-21T19:22:50Z
|
Discovering Causal Relations and Equations from Data
|
[
"Gustau Camps-Valls",
"Andreas Gerhardus",
"Urmi Ninad",
"Gherardo Varando",
"Georg Martius",
"Emili Balaguer-Ballester",
"Ricardo Vinuesa",
"Emiliano Diaz",
"Laure Zanna",
"Jakob Runge"
] |
Physics is a field of science that has traditionally used the scientific
method to answer questions about why natural phenomena occur and to make
testable models that explain the phenomena. Discovering equations, laws and
principles that are invariant, robust and causal explanations of the world has
been fundamental in physical sciences throughout the centuries. Discoveries
emerge from observing the world and, when possible, performing interventional
studies in the system under study. With the advent of big data and the use of
data-driven methods, causal and equation discovery fields have grown and made
progress in computer science, physics, statistics, philosophy, and many applied
fields. All these domains are intertwined and can be used to discover causal
relations, physical laws, and equations from observational data. This paper
reviews the concepts, methods, and relevant works on causal and equation
discovery in the broad field of Physics and outlines the most important
challenges and promising future lines of research. We also provide a taxonomy
for observational causal and equation discovery, point out connections, and
showcase a complete set of case studies in Earth and climate sciences, fluid
dynamics and mechanics, and the neurosciences. This review demonstrates that
discovering fundamental laws and causal relations by observing natural
phenomena is being revolutionised with the efficient exploitation of
observational data, modern machine learning algorithms and the interaction with
domain knowledge. Exciting times are ahead with many challenges and
opportunities to improve our understanding of complex systems.
|
[
"physics.data-an",
"cs.AI",
"cs.LG",
"stat.ME"
] | false |
2305.12649
|
2023-05-22T02:46:34Z
|
Imbalance-Agnostic Source-Free Domain Adaptation via Avatar Prototype
Alignment
|
[
"Hongbin Lin",
"Mingkui Tan",
"Yifan Zhang",
"Zhen Qiu",
"Shuaicheng Niu",
"Dong Liu",
"Qing Du",
"Yanxia Liu"
] |
Source-free Unsupervised Domain Adaptation (SF-UDA) aims to adapt a
well-trained source model to an unlabeled target domain without access to the
source data. One key challenge is the lack of source data during domain
adaptation. To handle this, we propose to mine the hidden knowledge of the
source model and exploit it to generate source avatar prototypes. To this end,
we propose a Contrastive Prototype Generation and Adaptation (CPGA) method.
CPGA consists of two stages: prototype generation and prototype adaptation.
Extensive experiments on three UDA benchmark datasets demonstrate the
superiority of CPGA. However, existing SF-UDA studies implicitly assume
balanced class distributions for both the source and target domains, which
hinders their real applications. To address this issue, we study a more
practical SF-UDA task, termed imbalance-agnostic SF-UDA, where the class
distributions of both the unseen source domain and unlabeled target domain are
unknown and could be arbitrarily skewed. This task is much more challenging
than vanilla SF-UDA due to the co-occurrence of covariate shifts and
unidentified class distribution shifts between the source and target domains.
To address this task, we extend CPGA and propose a new Target-aware Contrastive
Prototype Generation and Adaptation (T-CPGA) method. Specifically, for better
prototype adaptation in the imbalance-agnostic scenario, T-CPGA applies a new
pseudo label generation strategy to identify unknown target class distribution
and generate accurate pseudo labels, by utilizing the collective intelligence
of the source model and an additional contrastive language-image pre-trained
model. Meanwhile, we further devise a target label-distribution-aware
classifier to adapt the model to the unknown target class distribution. We
empirically show that T-CPGA significantly outperforms CPGA and other SF-UDA
methods in imbalance-agnostic SF-UDA.
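The pseudo label generation strategy is described above only at a high level;
the following is a hypothetical sketch of how a source model's predictions and
a contrastive language-image model's zero-shot predictions could be fused into
confident pseudo labels. The averaging rule and threshold are illustrative
assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def collective_pseudo_labels(source_logits, clip_logits, tau=0.9):
    """Hypothetical fusion rule: average the source model's and CLIP's
    class probabilities for the same images ([N, C] logits each) and keep
    only samples whose fused confidence clears a threshold.
    """
    p = 0.5 * (F.softmax(source_logits, dim=1) + F.softmax(clip_logits, dim=1))
    conf, pseudo = p.max(dim=1)
    keep = conf >= tau                          # trust only confident fusions
    return pseudo, keep
```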
|
[
"cs.CV"
] | false |
2305.12659
|
2023-05-22T03:03:29Z
|
UVOSAM: A Mask-free Paradigm for Unsupervised Video Object Segmentation
via Segment Anything Model
|
[
"Zhenghao Zhang",
"Zhichao Wei",
"Shengfan Zhang",
"Zuozhuo Dai",
"Siyu Zhu"
] |
Unsupervised video object segmentation has made significant progress in
recent years, but the manual annotation of video mask datasets is expensive and
limits the diversity of available datasets. The Segment Anything Model (SAM)
has introduced a new prompt-driven paradigm for image segmentation, unlocking a
range of previously unexplored capabilities. In this paper, we propose a novel
paradigm called UVOSAM, which leverages SAM for unsupervised video object
segmentation without requiring video mask labels. To address SAM's limitations
in instance discovery and identity association, we introduce a video salient
object tracking network that automatically generates trajectories for prominent
foreground objects. These trajectories then serve as prompts for SAM to produce
video masks on a frame-by-frame basis. Our experimental results demonstrate
that UVOSAM significantly outperforms current mask-supervised methods. These
findings suggest that UVOSAM has the potential to improve unsupervised video
object segmentation and reduce the cost of manual annotation.
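A rough sketch of the frame-by-frame prompting loop using the public
segment_anything API follows; the tracker that produces the trajectories is
assumed and not shown, and the checkpoint path and stand-in inputs are
placeholders.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Stand-in inputs: a real pipeline would read video frames and run the salient
# object tracking network described above to get per-frame xyxy boxes.
frames = [np.zeros((480, 640, 3), dtype=np.uint8)]
trajectories = [np.array([[100, 100, 300, 300]])]

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

video_masks = []
for frame, boxes in zip(frames, trajectories):
    predictor.set_image(frame)                  # embed the frame once
    masks_t = []
    for box in boxes:                           # one box prompt per tracked object
        masks, scores, _ = predictor.predict(box=box, multimask_output=False)
        masks_t.append(masks[0])
    video_masks.append(np.stack(masks_t))       # [num_objects, H, W] per frame
```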
|
[
"cs.CV"
] | false |
2305.12724
|
2023-05-22T05:18:34Z
|
Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object
Tracking
|
[
"Feng Yan",
"Weixin Luo",
"Yujie Zhong",
"Yiyang Gan",
"Lin Ma"
] |
Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not
surpassed non-end-to-end tracking-by-detection methods. One potential reason is
its label assignment strategy during training that consistently binds the
tracked objects with tracking queries and then assigns the few newborns to
detection queries. With one-to-one bipartite matching, such an assignment will
yield unbalanced training, i.e., scarce positive samples for detection queries,
especially for an enclosed scene, as the majority of the newborns come on stage
at the beginning of videos. Thus, compared to non-end-to-end
tracking-by-detection methods, e2e-MOT is more prone to terminating tracks
without renewal or re-initialization. To alleviate this problem, we present Co-MOT, a
simple and effective method to facilitate e2e-MOT by a novel coopetition label
assignment with a shadow concept. Specifically, we add tracked objects to the
matching targets for detection queries when performing the label assignment for
training the intermediate decoders. For query initialization, we expand each
query by a set of shadow counterparts with limited disturbance to itself. With
extensive ablations, Co-MOT achieves superior performance without extra costs,
e.g., 69.4% HOTA on DanceTrack and 52.8% TETA on BDD100K. Impressively, Co-MOT
requires only 38% of the FLOPs of MOTRv2 to attain similar performance,
resulting in a 1.4$\times$ faster inference speed.
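A schematic sketch of the coopetition label assignment idea as described above
(the shadow-query expansion is omitted); the function and its arguments are
illustrative, not Co-MOT's actual interface.

```python
def coopetition_targets(gt_objects, tracked_ids, is_last_decoder):
    """Schematic coopetition assignment: in intermediate decoder layers,
    detection queries may also be matched to already-tracked objects,
    enriching their positive samples; the last layer keeps the standard
    split (tracked objects -> tracking queries, newborns -> detection
    queries). `gt_objects` maps object id -> ground-truth box.
    """
    newborns = {i: b for i, b in gt_objects.items() if i not in tracked_ids}
    if is_last_decoder:
        return newborns                         # standard one-to-one targets
    return dict(gt_objects)                     # newborns + tracked objects
```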
|
[
"cs.CV"
] | false |
2305.12799
|
2023-05-22T07:53:36Z
|
Interactive Data Synthesis for Systematic Vision Adaptation via
LLMs-AIGCs Collaboration
|
[
"Qifan Yu",
"Juncheng Li",
"Wentao Ye",
"Siliang Tang",
"Yueting Zhuang"
] |
Recent text-to-image generation models have shown promising results in
generating high-fidelity photo-realistic images. In parallel, the problem of
data scarcity has brought a growing interest in employing AIGC technology for
high-quality data expansion. However, this paradigm requires well-designed
prompt engineering, and low-cost data expansion and labeling remain
under-explored. Inspired by LLMs' powerful capability in task guidance, we
propose a new paradigm of annotated data expansion named ChatGenImage. The
core idea behind it is to leverage the complementary strengths of diverse
models to establish a highly effective and user-friendly pipeline for
interactive data augmentation. In this work, we extensively study how LLMs
communicate with AIGC models to achieve more controllable image generation and
make the first attempt at combining them for automatic data augmentation for
a variety of downstream tasks. Finally, we present fascinating results obtained
from our ChatGenImage framework and demonstrate the powerful potential of our
synthetic data for systematic vision adaptation. Our codes are available at
https://github.com/Yuqifan1117/Labal-Anything-Pipeline.
|
[
"cs.CV"
] | false |
2305.12800
|
2023-05-22T07:54:13Z
|
Single Domain Dynamic Generalization for Iris Presentation Attack
Detection
|
[
"Yachun Li",
"Jingjing Wang",
"Yuhui Chen",
"Di Xie",
"Shiliang Pu"
] |
Iris presentation attack detection (PAD) has achieved great success under
intra-domain settings but easily degrades on unseen domains. Conventional
domain generalization methods mitigate the gap by learning domain-invariant
features. However, they ignore the discriminative information in the
domain-specific features. Moreover, we usually face a more realistic scenario
with only one single domain available for training. To tackle the above issues,
we propose a Single Domain Dynamic Generalization (SDDG) framework, which
simultaneously exploits domain-invariant and domain-specific features on a
per-sample basis and learns to generalize to various unseen domains with
numerous natural images. Specifically, a dynamic block is designed to
adaptively adjust the network with a dynamic adaptor, and an information
maximization loss is further incorporated to increase diversity. The whole
network is integrated into the meta-learning paradigm. We generate
amplitude-perturbed images and cover diverse domains with natural images, so
that the network can learn to generalize to the perturbed domains in the
meta-test phase. Extensive experiments show the proposed method is effective
and outperforms the state-of-the-art on the LivDet-Iris 2017 dataset.
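The amplitude-perturbed images can be pictured with the common Fourier-based
recipe below; this is a plausible sketch under that assumption, not
necessarily SDDG's exact perturbation.

```python
import numpy as np

def amplitude_perturb(img, ref, lam=0.5):
    """Mix the FFT amplitude of an iris image with that of a natural image
    while keeping the iris image's phase: content is preserved while
    low-level style shifts toward a new pseudo-domain.
    """
    f_img = np.fft.fft2(img, axes=(0, 1))
    f_ref = np.fft.fft2(ref, axes=(0, 1))
    amp = (1 - lam) * np.abs(f_img) + lam * np.abs(f_ref)   # mixed amplitude
    mixed = amp * np.exp(1j * np.angle(f_img))              # original phase
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```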
|
[
"cs.CV"
] | false |
2305.12811
|
2023-05-22T08:12:25Z
|
Label Smarter, Not Harder: CleverLabel for Faster Annotation of
Ambiguous Image Classification with Higher Quality
|
[
"Lars Schmarje",
"Vasco Grossmann",
"Tim Michels",
"Jakob Nazarenus",
"Monty Santarossa",
"Claudius Zelenka",
"Reinhard Koch"
] |
High-quality data is crucial for the success of machine learning, but
labeling large datasets is often a time-consuming and costly process. While
semi-supervised learning can help mitigate the need for labeled data, label
quality remains an open issue due to ambiguity and disagreement among
annotators. Thus, we use proposal-guided annotations as one option, which leads
to more consistency between annotators. However, proposing a label increases
the probability of the annotators deciding in favor of this specific label.
This introduces a bias which we can simulate and remove. We propose a new
method CleverLabel for Cost-effective LabEling using Validated proposal-guidEd
annotations and Repaired LABELs. CleverLabel can reduce labeling costs by up to
30.0%, while achieving a relative improvement in Kullback-Leibler divergence of
up to 29.8% compared to the previous state-of-the-art on a multi-domain
real-world image classification benchmark. CleverLabel offers a novel solution
to the challenge of efficiently labeling large datasets while also improving
the label quality.
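One way to picture the simulate-and-remove step: if an annotator accepts the
proposed label with some probability, the observed soft label is a mixture
that can be inverted. This is a hypothetical sketch of that idea; the mixture
model and `delta` are assumptions, not CleverLabel's published estimator.

```python
import numpy as np

def remove_proposal_bias(p_obs, proposal, delta):
    """If an annotator accepts the proposed class with probability `delta`,
    the observed soft label is a mixture
        p_obs = delta * e_proposal + (1 - delta) * p_true,
    which is inverted here to estimate the unbiased distribution.
    """
    e = np.zeros_like(p_obs)
    e[proposal] = 1.0                           # one-hot at the proposed class
    p_true = (p_obs - delta * e) / (1.0 - delta)
    p_true = np.clip(p_true, 0.0, None)         # guard against negative mass
    return p_true / p_true.sum()
```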
|
[
"cs.CV"
] | false |
2305.12833
|
2023-05-22T08:53:50Z
|
Boosting Long-tailed Object Detection via Step-wise Learning on
Smooth-tail Data
|
[
"Na Dong",
"Yongqiang Zhang",
"Mingli Ding",
"Gim Hee Lee"
] |
Real-world data tends to follow a long-tailed distribution, where the class
imbalance results in dominance of the head classes during training. In this
paper, we propose a frustratingly simple but effective step-wise learning
framework to gradually enhance the capability of the model in detecting all
categories of long-tailed datasets. Specifically, we build smooth-tail data
where the long-tailed distribution of categories decays smoothly to correct the
bias towards head classes. We pre-train a model on the whole long-tailed data
to preserve discriminability between all categories. We then fine-tune the
class-agnostic modules of the pre-trained model on the head class dominant
replay data to get a head class expert model with improved decision boundaries
from all categories. Finally, we train a unified model on the tail class
dominant replay data while transferring knowledge from the head class expert
model to ensure accurate detection of all categories. Extensive experiments on
long-tailed datasets LVIS v0.5 and LVIS v1.0 demonstrate the superior
performance of our method, which improves the AP with a ResNet-50 backbone
from 27.0% to 30.3%, and especially for the rare categories from 15.5% to
24.9%. Our best model using a ResNet-101 backbone achieves 30.7% AP, which
surpasses all existing detectors using the same backbone.
|
[
"cs.CV"
] | false |
2305.12843
|
2023-05-22T09:08:46Z
|
Registering Neural Radiance Fields as 3D Density Images
|
[
"Han Jiang",
"Ruoxuan Li",
"Haosen Sun",
"Yu-Wing Tai",
"Chi-Keung Tang"
] |
No significant work has been done to directly merge two partially overlapping
scenes using NeRF representations. Given pre-trained NeRF models of a 3D scene
with partial overlap, this paper aligns them with a rigid transform by
generalizing the traditional registration pipeline, namely key point detection
followed by point set registration, to operate on 3D density fields. To
describe corner points as key points in 3D, we propose to use universal
pre-trained descriptor-generating neural networks that can be trained and
tested on different scenes. We perform experiments to demonstrate that the
descriptor networks can be conveniently trained using a contrastive learning
strategy. We demonstrate that our method, as a global approach, can effectively
register NeRF models, thus making possible future large-scale NeRF construction
by registering its smaller and overlapping NeRFs captured individually.
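Once key points have been detected and matched across the two density fields,
the rigid transform can be recovered with the classic Kabsch solution,
sketched below; the descriptor matching itself is not shown.

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Classic Kabsch solution for the rigid transform Q ~ R @ P + t, given
    matched 3D key points P, Q (each [N, 3]) from the two density fields.
    """
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```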
|
[
"cs.CV"
] | false |
2305.12853
|
2023-05-22T09:24:55Z
|
Real-Aug: Realistic Scene Synthesis for LiDAR Augmentation in 3D Object
Detection
|
[
"Jinglin Zhan",
"Tiejun Liu",
"Rengang Li",
"Jingwei Zhang",
"Zhaoxiang Zhang",
"Yuntao Chen"
] |
Data and model are the undoubtable two supporting pillars for LiDAR object
detection. However, data-centric works have fallen far behind compared with the
ever-growing list of fancy new models. In this work, we systematically study
the synthesis-based LiDAR data augmentation approach (so-called GT-Aug) which
offers maximum controllability over generated data samples. We pinpoint that
the main shortcoming of existing works is the introduction of unrealistic
LiDAR scan patterns during GT-Aug. In light of this finding, we propose
Real-Aug, a synthesis-based augmentation method which prioritizes generating
realistic LiDAR scans. Our method consists of a reality-conforming scene
composition module which handles the details of the composition and a
real-synthesis mixing-up training strategy which gradually adapts the data
distribution from synthetic data to the real
one. To verify the effectiveness of our methods, we conduct extensive ablation
studies and validate the proposed Real-Aug on a wide combination of detectors
and datasets. We achieve a state-of-the-art 0.744 NDS and 0.702 mAP on nuScenes
test set. The code shall be released soon.
|
[
"cs.CV"
] | false |
2305.12863
|
2023-05-22T09:40:32Z
|
Towards Benchmarking and Assessing Visual Naturalness of Physical World
Adversarial Attacks
|
[
"Simin Li",
"Shuing Zhang",
"Gujun Chen",
"Dong Wang",
"Pu Feng",
"Jiakai Wang",
"Aishan Liu",
"Xin Yi",
"Xianglong Liu"
] |
Physical world adversarial attack is a highly practical and threatening
attack, which fools real world deep learning systems by generating conspicuous
and maliciously crafted real world artifacts. In physical world attacks,
evaluating naturalness is highly emphasized since humans can easily detect and
remove unnatural attacks. However, current studies evaluate naturalness in a
case-by-case fashion, which suffers from errors, bias and inconsistencies. In
this paper, we take the first step to benchmark and assess visual naturalness
of physical world attacks, taking autonomous driving scenario as the first
attempt. First, to benchmark attack naturalness, we contribute the first
Physical Attack Naturalness (PAN) dataset with human rating and gaze. PAN
verifies several insights for the first time: naturalness is (disparately)
affected by contextual features (i.e., environmental and semantic variations)
and correlates with behavioral feature (i.e., gaze signal). Second, to
automatically assess attack naturalness that aligns with human ratings, we
further introduce Dual Prior Alignment (DPA) network, which aims to embed human
knowledge into model reasoning process. Specifically, DPA imitates human
reasoning in naturalness assessment by rating prior alignment and mimics human
gaze behavior by attentive prior alignment. We hope our work fosters research
on improving and automatically assessing the naturalness of physical world
attacks. Our
code and dataset can be found at https://github.com/zhangsn-19/PAN.
|
[
"cs.CV"
] | false |
2305.12912
|
2023-05-22T10:52:11Z
|
BMB: Balanced Memory Bank for Imbalanced Semi-supervised Learning
|
[
"Wujian Peng",
"Zejia Weng",
"Hengduo Li",
"Zuxuan Wu"
] |
Exploring a substantial amount of unlabeled data, semi-supervised learning
(SSL) boosts the recognition performance when only a limited number of labels
are provided. However, traditional methods assume that the data distribution is
class-balanced, which is difficult to achieve in reality due to the long-tailed
nature of real-world data. While the data imbalance problem has been
extensively studied in supervised learning (SL) paradigms, directly
transferring existing approaches to SSL is nontrivial, as prior knowledge about
data distribution remains unknown in SSL. In light of this, we propose Balanced
Memory Bank (BMB), a semi-supervised framework for long-tailed recognition. The
core of BMB is an online-updated memory bank that caches historical features
with their corresponding pseudo labels, and the memory is also carefully
maintained to ensure the data therein are class-rebalanced. Additionally, an
adaptive weighting module is introduced to work jointly with the memory bank so
as to further re-calibrate the biased training process. We conduct experiments
on multiple datasets and demonstrate, among other things, that BMB surpasses
state-of-the-art approaches by clear margins, for example 8.2$\%$ on the 1$\%$
labeled subset of ImageNet127 (with a resolution of 64$\times$64) and 4.3$\%$
on the 50$\%$ labeled subset of ImageNet-LT.
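A minimal sketch of the core data structure, assuming a simple per-class FIFO
design; capacity and sampling details are illustrative, not BMB's exact
configuration.

```python
from collections import deque

class BalancedMemoryBank:
    """Per-class FIFO memory: each class caches at most `per_class` features
    with their pseudo labels, so head classes cannot crowd out tail classes.
    """
    def __init__(self, num_classes, per_class=64):
        self.banks = [deque(maxlen=per_class) for _ in range(num_classes)]

    def update(self, features, pseudo_labels):
        for f, y in zip(features, pseudo_labels):
            self.banks[int(y)].append(f)        # oldest entries drop out first

    def rebalanced_pairs(self):
        # Capped per-class storage means iterating yields a class-rebalanced set.
        return [(f, c) for c, bank in enumerate(self.banks) for f in bank]
```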
|
[
"cs.CV"
] | false |
2305.12954
|
2023-05-22T12:02:31Z
|
Is Synthetic Data From Diffusion Models Ready for Knowledge
Distillation?
|
[
"Zheng Li",
"Yuxuan Li",
"Penghai Zhao",
"Renjie Song",
"Xiang Li",
"Jian Yang"
] |
Diffusion models have recently achieved astonishing performance in generating
high-fidelity photo-realistic images. Given their huge success, it is still
unclear whether synthetic images are applicable for knowledge distillation when
real images are unavailable. In this paper, we extensively study whether and
how synthetic images produced from state-of-the-art diffusion models can be
used for knowledge distillation without access to real images, and obtain three
key conclusions: (1) synthetic data from diffusion models can easily lead to
state-of-the-art performance among existing synthesis-based distillation
methods, (2) low-fidelity synthetic images are better teaching materials, and
(3) relatively weak classifiers are better teachers. Code is available at
https://github.com/zhengli97/DM-KD.
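The distillation objective itself is standard; a minimal sketch of the
softened-logit KD loss applied to synthetic images follows (the temperature
value is illustrative).

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Student matches the teacher's temperature-softened predictions; with
    synthetic images, no real data or ground-truth labels are required.
    """
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Per batch of synthetic images x:
#   loss = kd_loss(student(x), teacher(x).detach())
```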
|
[
"cs.CV"
] | false |
2305.12955
|
2023-05-22T12:03:20Z
|
Gated Stereo: Joint Depth Estimation from Gated and Wide-Baseline Active
Stereo Cues
|
[
"Stefanie Walz",
"Mario Bijelic",
"Andrea Ramazzina",
"Amanpreet Walia",
"Fahim Mannan",
"Felix Heide"
] |
We propose Gated Stereo, a high-resolution and long-range depth estimation
technique that operates on active gated stereo images. Using active and high
dynamic range passive captures, Gated Stereo exploits multi-view cues alongside
time-of-flight intensity cues from active gating. To this end, we propose a
depth estimation method with a monocular and stereo depth prediction branch
which are combined in a final fusion stage. Each block is supervised through a
combination of supervised and gated self-supervision losses. To facilitate
training and validation, we acquire a long-range synchronized gated stereo
dataset for automotive scenarios. We find that the method achieves an
improvement in MAE of more than 50% compared to the next best RGB stereo
method, and of 74% compared to existing monocular gated methods, for distances
up to 160 m. Our code, models, and datasets are available here.
|
[
"cs.CV"
] | false |
2305.12959
|
2023-05-22T12:09:51Z
|
Contrastive Predictive Autoencoders for Dynamic Point Cloud
Self-Supervised Learning
|
[
"Xiaoxiao Sheng",
"Zhiqiang Shen",
"Gang Xiao"
] |
We present a new self-supervised paradigm on point cloud sequence
understanding. Inspired by the discriminative and generative self-supervised
methods, we design two tasks, namely point cloud sequence based Contrastive
Prediction and Reconstruction (CPR), to collaboratively learn more
comprehensive spatiotemporal representations. Specifically, dense point cloud
segments are first input into an encoder to extract embeddings. All but the
last ones are then aggregated by a context-aware autoregressor to make
predictions for the last target segment. Towards the goal of modeling
multi-granularity structures, local and global contrastive learning are
performed between predictions and targets. To further improve the
generalization of representations, the predictions are also utilized to
reconstruct raw point cloud sequences by a decoder, where point cloud
colorization is employed to discriminate between different frames. By
combining the classic contrastive and reconstruction paradigms, the method
equips the learned representations with both global discrimination and local
perception. We conduct experiments on four point cloud sequence benchmarks,
and report the results on action recognition and gesture recognition under
multiple experimental settings. The performance is comparable with supervised
methods and shows powerful transferability.
|
[
"cs.CV"
] | false |
2305.13031
|
2023-05-22T13:33:41Z
|
HGFormer: Hierarchical Grouping Transformer for Domain Generalized
Semantic Segmentation
|
[
"Jian Ding",
"Nan Xue",
"Gui-Song Xia",
"Bernt Schiele",
"Dengxin Dai"
] |
Current semantic segmentation models have achieved great success under the
independent and identically distributed (i.i.d.) condition. However, in
real-world applications, test data might come from a different domain than
training data. Therefore, it is important to improve model robustness against
domain differences. This work studies semantic segmentation under the domain
generalization setting, where a model is trained only on the source domain and
tested on the unseen target domain. Existing works show that Vision
Transformers are more robust than CNNs, and attribute this to the visual
grouping property of self-attention. In this work, we propose a novel
hierarchical grouping transformer (HGFormer) to explicitly group pixels to form
part-level masks and then whole-level masks. The masks at different scales aim
to segment out both the parts and the wholes of classes. HGFormer combines mask
classification results at both scales for class label prediction. We assemble
multiple interesting cross-domain settings by using seven public semantic
segmentation datasets. Experiments show that HGFormer yields more robust
semantic segmentation results than per-pixel classification methods and flat
grouping transformers, and outperforms previous methods significantly. Code
will be available at https://github.com/dingjiansw101/HGFormer.
|
[
"cs.CV"
] | false |
2305.13077
|
2023-05-22T14:48:53Z
|
ControlVideo: Training-free Controllable Text-to-Video Generation
|
[
"Yabo Zhang",
"Yuxiang Wei",
"Dongsheng Jiang",
"Xiaopeng Zhang",
"Wangmeng Zuo",
"Qi Tian"
] |
Text-driven diffusion models have unlocked unprecedented abilities in image
generation, whereas their video counterpart still lags behind due to the
excessive training cost of temporal modeling. Besides the training burden, the
generated videos also suffer from appearance inconsistency and structural
flickers, especially in long video synthesis. To address these challenges, we
design a \emph{training-free} framework called \textbf{ControlVideo} to enable
natural and efficient text-to-video generation. ControlVideo, adapted from
ControlNet, leverages the coarse structural consistency of input motion
sequences, and introduces three modules to improve video generation. Firstly,
to ensure appearance coherence between frames, ControlVideo adds fully
cross-frame interaction in self-attention modules. Secondly, to mitigate the
flicker effect, it introduces an interleaved-frame smoother that employs frame
interpolation on alternated frames. Finally, to produce long videos
efficiently, it utilizes a hierarchical sampler that separately synthesizes
each short clip with holistic coherency. Empowered with these modules,
ControlVideo outperforms the state of the art on extensive motion-prompt pairs
quantitatively and qualitatively. Notably, thanks to the efficient designs, it
generates both short and long videos within several minutes using one NVIDIA
2080Ti. Code is available at https://github.com/YBYBZhang/ControlVideo.
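The fully cross-frame interaction can be sketched as follows, assuming
per-frame token tensors: every frame attends to keys and values gathered from
all frames. The shapes and single-head form are simplifications of the actual
module.

```python
import torch

def fully_cross_frame_attention(q, k, v):
    """Single-head sketch with [frames, tokens, dim] tensors: each frame's
    queries attend to keys/values gathered from all frames, coupling
    appearance across the whole video.
    """
    F_, N, D = q.shape
    k_all = k.reshape(1, F_ * N, D).expand(F_, -1, -1)    # keys of all frames
    v_all = v.reshape(1, F_ * N, D).expand(F_, -1, -1)    # values of all frames
    attn = torch.softmax(q @ k_all.transpose(1, 2) / D ** 0.5, dim=-1)
    return attn @ v_all                                    # [frames, tokens, dim]
```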
|
[
"cs.CV"
] | true |
2305.13167
|
2023-05-22T15:54:22Z
|
VLAB: Enhancing Video Language Pre-training by Feature Adapting and
Blending
|
[
"Xingjian He",
"Sihan Chen",
"Fan Ma",
"Zhicheng Huang",
"Xiaojie Jin",
"Zikang Liu",
"Dongmei Fu",
"Yi Yang",
"Jing Liu",
"Jiashi Feng"
] |
Large-scale image-text contrastive pre-training models, such as CLIP, have
been demonstrated to effectively learn high-quality multimodal representations.
However, there is limited research on learning video-text representations for
general video multimodal tasks based on these powerful features. Towards this
goal, we propose a novel video-text pre-training method dubbed VLAB: Video
Language pre-training by feature Adapting and Blending, which transfers CLIP
representations to video pre-training tasks and develops unified video
multimodal models for a wide range of video-text tasks. Specifically, VLAB is
founded on two key strategies: feature adapting and feature blending. In the
former, we introduce a new video adapter module to address CLIP's deficiency in
modeling temporal information and extend the model's capability to encompass
both contrastive and generative tasks. In the latter, we propose an end-to-end
training method that further enhances the model's performance by exploiting the
complementarity of image and video features. We validate the effectiveness and
versatility of VLAB through extensive experiments on highly competitive video
multimodal tasks, including video text retrieval, video captioning, and video
question answering. Remarkably, VLAB outperforms competing methods
significantly and sets new records in video question answering on MSRVTT, MSVD,
and TGIF datasets. It achieves an accuracy of 49.6, 61.0, and 79.0,
respectively. Codes and models will be released.
|
[
"cs.CV"
] | false |
2305.13173
|
2023-05-22T16:00:01Z
|
Semantic-Promoted Debiasing and Background Disambiguation for Zero-Shot
Instance Segmentation
|
[
"Shuting He",
"Henghui Ding",
"Wei Jiang"
] |
Zero-shot instance segmentation aims to detect and precisely segment objects
of unseen categories without any training samples. Since the model is trained
on seen categories, there is a strong bias that the model tends to classify all
the objects into seen categories. Besides, there is a natural confusion between
background and novel objects that have never shown up in training. These two
challenges make novel objects hard to surface in the final instance
segmentation results. It is desirable to rescue novel objects from the
background and the dominant seen categories. To this end, we propose D$^2$Zero with
Semantic-Promoted Debiasing and Background Disambiguation to enhance the
performance of Zero-shot instance segmentation. Semantic-promoted debiasing
utilizes inter-class semantic relationships to involve unseen categories in
visual feature training and learns an input-conditional classifier to conduct
dynamical classification based on the input image. Background disambiguation
produces image-adaptive background representation to avoid mistaking novel
objects for background. Extensive experiments show that we outperform previous
state-of-the-art methods by a large margin, e.g., a 16.86%
improvement on COCO. Project page: https://henghuiding.github.io/D2Zero/
|
[
"cs.CV"
] | false |
2305.13220
|
2023-05-22T16:50:19Z
|
Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids
|
[
"Wei Dong",
"Chris Choy",
"Charles Loop",
"Or Litany",
"Yuke Zhu",
"Anima Anandkumar"
] |
Indoor scene reconstruction from monocular images has long been sought after
by augmented reality and robotics developers. Recent advances in neural field
representations and monocular priors have led to remarkable results in
scene-level surface reconstructions. The reliance on Multilayer Perceptrons
(MLP), however, significantly limits speed in training and rendering. In this
work, we propose to directly use signed distance function (SDF) in sparse voxel
block grids for fast and accurate scene reconstruction without MLPs. Our
globally sparse and locally dense data structure exploits surfaces' spatial
sparsity, enables cache-friendly queries, and allows direct extensions to
multi-modal data such as color and semantic labels. To apply this
representation to monocular scene reconstruction, we develop a scale
calibration algorithm for fast geometric initialization from monocular depth
priors. We apply differentiable volume rendering from this initialization to
refine details with fast convergence. We also introduce efficient
high-dimensional Continuous Random Fields (CRFs) to further exploit the
semantic-geometry consistency between scene objects. Experiments show that our
approach is 10x faster in training and 100x faster in rendering while achieving
comparable accuracy to state-of-the-art neural implicit methods.
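A toy sketch of the globally sparse, locally dense idea: a hash map of coarse
blocks, each holding a small dense SDF array. The block size, initialization,
and nearest-voxel query are illustrative simplifications (the real system
interpolates and stores additional channels such as color and semantics).

```python
import numpy as np

class SparseDenseSDFGrid:
    """Hash map from coarse block coordinates to small dense voxel arrays:
    storage follows the surface (global sparsity) while lookups within a
    block stay cache-friendly (local density).
    """
    def __init__(self, block_res=8, voxel_size=0.05):
        self.block_res, self.voxel_size = block_res, voxel_size
        self.blocks = {}                        # (bx, by, bz) -> dense SDF array

    def _locate(self, p):
        v = np.floor(np.asarray(p) / self.voxel_size).astype(int)
        return tuple(v // self.block_res), tuple(v % self.block_res)

    def set_sdf(self, p, value):
        key, local = self._locate(p)
        block = self.blocks.setdefault(key, np.ones((self.block_res,) * 3))
        block[local] = value                    # allocate block on first touch

    def query(self, p):
        key, local = self._locate(p)            # nearest-voxel lookup; a real
        block = self.blocks.get(key)            # system would interpolate
        return None if block is None else block[local]
```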
|
[
"cs.CV"
] | false |
2305.13232
|
2023-05-22T17:05:06Z
|
Revisiting Data Augmentation in Model Compression: An Empirical and
Comprehensive Study
|
[
"Muzhou Yu",
"Linfeng Zhang",
"Kaisheng Ma"
] |
The excellent performance of deep neural networks is usually accompanied by a
large number of parameters and computations, which limits their usage on
resource-limited edge devices. To address this issue, abundant methods such
as pruning, quantization and knowledge distillation have been proposed to
compress neural networks and achieved significant breakthroughs. However, most
of these compression methods focus on the architecture or the training method
of neural networks but ignore the influence from data augmentation. In this
paper, we revisit the usage of data augmentation in model compression and give
a comprehensive study on the relation between model sizes and their optimal
data augmentation policy. To sum up, we mainly have the following three
observations: (A) Models in different sizes prefer data augmentation with
different magnitudes. Hence, in iterative pruning, data augmentation with
varying magnitudes leads to better performance than data augmentation with a
consistent magnitude. (B) Data augmentation with a high magnitude may
significantly improve the performance of large models but harm the performance
of small models. Fortunately, small models can still benefit from strong data
augmentations by first learning them with "additional parameters" and then
discarding these "additional parameters" during inference. (C) The prediction
of a pre-trained large model can be utilized to measure the difficulty of data
augmentation, and thus can serve as a criterion to design better data
augmentation policies. We hope this paper may promote more research on the
usage of data augmentation in model compression.
|
[
"cs.CV"
] | false |
2305.13307
|
2023-05-22T17:59:05Z
|
NeRFuser: Large-Scale Scene Representation by NeRF Fusion
|
[
"Jiading Fang",
"Shengjie Lin",
"Igor Vasiljevic",
"Vitor Guizilini",
"Rares Ambrus",
"Adrien Gaidon",
"Gregory Shakhnarovich",
"Matthew R. Walter"
] |
A practical benefit of implicit visual representations like Neural Radiance
Fields (NeRFs) is their memory efficiency: large scenes can be efficiently
stored and shared as small neural nets instead of collections of images.
However, operating on these implicit visual data structures requires extending
classical image-based vision techniques (e.g., registration, blending) from
image sets to neural fields. Towards this goal, we propose NeRFuser, a novel
architecture for NeRF registration and blending that assumes only access to
pre-generated NeRFs, and not the potentially large sets of images used to
generate them. We propose registration from re-rendering, a technique to infer
the transformation between NeRFs based on images synthesized from individual
NeRFs. For blending, we propose sample-based inverse distance weighting to
blend visual information at the ray-sample level. We evaluate NeRFuser on
public benchmarks and a self-collected object-centric indoor dataset, showing
the robustness of our method, including to views that are challenging to render
from the individual source NeRFs.
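Sample-based inverse distance weighting can be sketched as follows, assuming
each NeRF's prediction at a ray sample is weighted by inverse distance to a
per-NeRF reference center; the exponent and the choice of reference points are
assumptions, not NeRFuser's exact formulation.

```python
import torch

def idw_blend_weights(samples, centers, p=2.0, eps=1e-8):
    """Per-sample blend weights: each NeRF's prediction at a ray sample is
    weighted by inverse distance from the sample point to that NeRF's
    reference center, then normalized across NeRFs.
    `samples`: [M, 3] points; `centers`: [K, 3]; returns [M, K] weights.
    """
    d = torch.cdist(samples, centers)           # [M, K] pairwise distances
    w = 1.0 / d.clamp_min(eps) ** p
    return w / w.sum(dim=1, keepdim=True)
```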
|
[
"cs.CV"
] | false |
2305.13308
|
2023-05-22T17:59:41Z
|
If at First You Don't Succeed, Try, Try Again: Faithful Diffusion-based
Text-to-Image Generation by Selection
|
[
"Shyamgopal Karthik",
"Karsten Roth",
"Massimiliano Mancini",
"Zeynep Akata"
] |
Despite their impressive capabilities, diffusion-based text-to-image (T2I)
models can lack faithfulness to the text prompt, where generated images may not
contain all the mentioned objects, attributes or relations. To alleviate these
issues, recent works proposed post-hoc methods to improve model faithfulness
without costly retraining, by modifying how the model utilizes the input
prompt. In this work, we take a step back and show that large T2I diffusion
models are more faithful than usually assumed, and can generate images faithful
to even complex prompts without the need to manipulate the generative process.
Based on that, we show how faithfulness can be simply treated as a candidate
selection problem instead, and introduce a straightforward pipeline that
generates candidate images for a text prompt and picks the best one according
to an automatic scoring system that can leverage already existing T2I
evaluation metrics. Quantitative comparisons alongside user studies on diverse
benchmarks show consistently improved faithfulness over post-hoc enhancement
methods, with comparable or lower computational cost. Code is available at
\url{https://github.com/ExplainableML/ImageSelect}.
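A minimal version of the generate-then-select pipeline, with CLIP image-text
similarity standing in for the paper's automatic scoring system; the model
names and candidate count are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

prompt = "a red cube on top of a blue sphere"

# Sample several candidates for the same prompt (model name is illustrative).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
images = pipe(prompt, num_images_per_prompt=8).images

# Score candidates; CLIP similarity stands in for the automatic scoring system.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
inputs = proc(text=[prompt], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = clip(**inputs).logits_per_image.squeeze(1)   # [8] image-text scores
best = images[int(scores.argmax())]                       # keep the best candidate
```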
|
[
"cs.CV"
] | false |
2305.13312
|
2023-05-22T17:59:58Z
|
Contextualising Implicit Representations for Semantic Tasks
|
[
"Theo W. Costain",
"Kejie Li",
"Victor A. Prisacariu"
] |
Prior works have demonstrated that implicit representations trained only for
reconstruction tasks typically generate encodings that are not useful for
semantic tasks. In this work, we propose a method that contextualises the
encodings of implicit representations, enabling their use in downstream tasks
(e.g. semantic segmentation), without requiring access to the original training
data or encoding network. Given an implicit representation trained for a
reconstruction task alone, our contextualising module takes its encoding and
reveals the meaningful semantic information hidden within it, without
compromising the reconstruction performance.
With our proposed module, it becomes possible to pre-train implicit
representations on larger datasets, improving their reconstruction performance
compared to training on only a smaller labelled dataset, whilst maintaining
their segmentation performance on the labelled dataset. Importantly, our method
allows for future foundation implicit representation models to be fine-tuned on
unseen tasks, regardless of encoder or dataset availability.
|
[
"cs.CV"
] | false |
2305.13353
|
2023-05-22T17:54:01Z
|
RenderMe-360: A Large Digital Asset Library and Benchmarks Towards
High-fidelity Head Avatars
|
[
"Dongwei Pan",
"Long Zhuo",
"Jingtan Piao",
"Huiwen Luo",
"Wei Cheng",
"Yuxin Wang",
"Siming Fan",
"Shengqi Liu",
"Lei Yang",
"Bo Dai",
"Ziwei Liu",
"Chen Change Loy",
"Chen Qian",
"Wayne Wu",
"Dahua Lin",
"Kwan-Yee Lin"
] |
Synthesizing high-fidelity head avatars is a central problem for computer
vision and graphics. While head avatar synthesis algorithms have advanced
rapidly, the best ones still face great obstacles in real-world scenarios. One
of the vital causes is inadequate datasets: 1) current public datasets only
support researchers in exploring high-fidelity head avatars in one or two
task directions; 2) these datasets usually contain digital head assets with
limited data volume and a narrow distribution over different attributes. In this
paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive
advance in head avatar research. It contains massive data assets, with 243+
million complete head frames, and over 800k video sequences from 500 different
identities captured by synchronized multi-view cameras at 30 FPS. It is a
large-scale digital library for head avatars with three key attributes: 1) High
Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K
cameras in 360 degrees. 2) High Diversity: The collected subjects vary from
different ages, eras, ethnicities, and cultures, providing abundant materials
with distinctive styles in appearance and geometry. Moreover, each subject is
asked to perform various motions, such as expressions and head rotations, which
further extend the richness of assets. 3) Rich Annotations: we provide
annotations at different granularities: camera parameters, matting, scans,
2D/3D facial landmarks, FLAME fitting, and text descriptions.
Based on the dataset, we build a comprehensive benchmark for head avatar
research, with 16 state-of-the-art methods performed on five main tasks: novel
view synthesis, novel expression synthesis, hair rendering, hair editing, and
talking head generation. Our experiments uncover the strengths and weaknesses
of current methods. RenderMe-360 opens the door for future exploration in head
avatars.
|
[
"cs.CV"
] | false |