arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---
2306.00427
|
2023-06-01T08:07:58Z
|
Out-of-distribution forgetting: vulnerability of continual learning to
intra-class distribution shift
|
[
"Liangxuan Guo",
"Yang Chen",
"Shan Yu"
] |
Continual learning (CL) is an important technique for allowing artificial
neural networks to work in open environments. CL enables a system to learn new
tasks without severe interference with its performance on old tasks, i.e., to
overcome the problem of catastrophic forgetting. In joint learning, it is well
known that the out-of-distribution (OOD) problem caused by intentional attacks
or environmental perturbations severely impairs the ability of networks to
generalize. In this work, we report a special form of catastrophic forgetting
induced by the OOD problem in continual learning settings, which we name
out-of-distribution forgetting (OODF). In continual image classification tasks,
we found that for a given category, introducing an intra-class distribution
shift significantly impaired the recognition accuracy of CL methods for that
category during subsequent learning. Interestingly, this phenomenon is special
for CL as the same level of distribution shift had only negligible effects in
the joint learning scenario. We verified that CL methods without dedicated
subnetworks for individual tasks are all vulnerable to OODF. Moreover, OODF
does not depend on any specific way of shifting the distribution, suggesting it
is a risk for CL in a wide range of circumstances. Taken together, our work
identifies an underappreciated risk during CL, highlighting the importance of
developing approaches that can overcome OODF.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2306.00455
|
2023-06-01T08:58:35Z
|
MindBigData 2023 MNIST-8B The 8 billion datapoints Multimodal Dataset of
Brain Signals
|
[
"David Vivancos"
] |
MindBigData 2023 MNIST-8B is, to date (June 1st, 2023), the largest open
dataset of brain signals created for machine learning. It is based on EEG
signals from a single subject, captured using a custom 128-channel device,
replicating the full 70,000 digits of Yann LeCun et al.'s MNIST dataset. The
brain signals were captured while the subject was watching the pixels of the
original digits one by one on a screen and simultaneously listening to the
spoken digit, 0 to 9, of the true label. The data, collection procedures, and
the hardware and software created are described in detail; background
information and other related datasets can be found in our previous paper,
MindBigData 2022: A Large Dataset of Brain Signals.
|
[
"cs.LG",
"cs.CV",
"q-bio.NC",
"68T01, 68T45",
"H.2.8; I.2.0; I.2.1; J.3; J.7"
] | false |
2306.00501
|
2023-06-01T09:53:35Z
|
Image generation with shortest path diffusion
|
[
"Ayan Das",
"Stathi Fotiadis",
"Anil Batra",
"Farhang Nabiei",
"FengTing Liao",
"Sattar Vakili",
"Da-shan Shiu",
"Alberto Bernacchia"
] |
The field of image generation has made significant progress thanks to the
introduction of Diffusion Models, which learn to progressively reverse a given
image corruption. Recently, a few studies introduced alternative ways of
corrupting images in Diffusion Models, with an emphasis on blurring. However,
these studies are purely empirical, and it remains unclear what the optimal
procedure for corrupting an image is. In this work, we hypothesize that the
optimal procedure minimizes the length of the path taken when corrupting an
image towards a given final state. We propose the Fisher metric for the path
length, measured in the space of probability distributions. We compute the
shortest path according to this metric, and we show that it corresponds to a
combination of image sharpening, rather than blurring, and noise deblurring.
While the corruption was chosen arbitrarily in previous work, our Shortest Path
Diffusion (SPD) uniquely determines the entire spatiotemporal structure of the
corruption. We show that SPD improves on strong baselines without any
hyperparameter tuning, and outperforms all previous Diffusion Models based on
image blurring. Furthermore, any small deviation from the shortest path leads
to worse performance, suggesting that SPD provides the optimal procedure to
corrupt images. Our work sheds new light on observations made in recent works
and provides a new approach to improve diffusion models on images and other
types of data.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2306.00503
|
2023-06-01T09:54:31Z
|
MEWL: Few-shot multimodal word learning with referential uncertainty
|
[
"Guangyuan Jiang",
"Manjie Xu",
"Shiji Xin",
"Wei Liang",
"Yujia Peng",
"Chi Zhang",
"Yixin Zhu"
] |
Without explicit feedback, humans can rapidly learn the meaning of words.
Children can acquire a new word after just a few passive exposures, a process
known as fast mapping. This word learning capability is believed to be the most
fundamental building block of multimodal understanding and reasoning. Despite
recent advancements in multimodal learning, a systematic and rigorous
evaluation is still missing for human-like word learning in machines. To fill
this gap, we introduce the MachinE Word Learning (MEWL) benchmark to assess
how machines learn word meaning in grounded visual scenes. MEWL covers humans'
core cognitive toolkit in word learning: cross-situational reasoning,
bootstrapping, and pragmatic learning. Specifically, MEWL is a few-shot
benchmark suite consisting of nine tasks for probing various word learning
capabilities. These tasks are carefully designed to align with children's core
abilities in word learning and to echo theories in the developmental
literature. By evaluating multimodal and unimodal agents'
performance with a comparative analysis of human performance, we notice a sharp
divergence in human and machine word learning. We further discuss these
differences between humans and machines and call for human-like few-shot word
learning in machines.
|
[
"cs.CL",
"cs.AI",
"cs.CV",
"cs.LG"
] | false |
2306.00530
|
2023-06-01T10:29:58Z
|
Contrastive Learning MRI Reconstruction
|
[
"Mevan Ekanayake",
"Zhifeng Chen",
"Gary Egan",
"Mehrtash Harandi",
"Zhaolin Chen"
] |
Purpose: We propose a novel contrastive learning latent space representation
for MRI datasets with partially acquired scans. We show that this latent space
can be utilized for accelerated MR image reconstruction.
Theory and Methods: Our novel framework, COLADA (short for Contrastive
Learning for highly accelerated MR image reconstruction), maximizes
the mutual information between differently accelerated images of an MRI scan by
using self-supervised contrastive learning. In other words, it attempts to
"pull" the latent representations of the same scan together and "push" the
latent representations of other scans away. The generated MRI latent space is
subsequently utilized for MR image reconstruction, and the performance was
assessed in comparison to several baseline deep learning reconstruction
methods. Furthermore, the quality of the proposed latent space representation
was analyzed using Alignment and Uniformity.
Results: COLADA comprehensively outperformed other reconstruction methods
with robustness to variations in undersampling patterns, pathological
abnormalities, and noise in k-space during inference. COLADA delivered
high-quality reconstructions on unseen data with minimal fine-tuning. The analysis
of representation quality suggests that the contrastive features produced by
COLADA are optimally distributed in latent space.
Conclusion: To the best of our knowledge, this is the first attempt to
utilize contrastive learning on differently accelerated images for MR image
reconstruction. The proposed latent space representation has practical usage
due to a large number of existing partially sampled datasets. This implies the
possibility of exploring self-supervised contrastive learning further to
enhance the latent space of MRI for image reconstruction.
|
[
"eess.IV",
"cs.AI",
"cs.CV"
] | false |
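
The pull/push objective described in the COLADA abstract above can be illustrated with a standard InfoNCE loss. This is a minimal sketch, assuming a batch of encoder outputs for two acceleration factors of the same scans; it is a generic stand-in, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """Pull together latents of two differently accelerated versions of the
    same scan (diagonal pairs) and push apart latents of different scans.

    z_a, z_b: (batch, dim) encoder outputs; row i of each corresponds to
    the same underlying MRI scan at two acceleration factors."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # (batch, batch) cosine similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, labels)
```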
2306.00559
|
2023-06-01T11:18:57Z
|
We never go out of Style: Motion Disentanglement by Subspace
Decomposition of Latent Space
|
[
"Rishubh Parihar",
"Raghav Magazine",
"Piyush Tiwari",
"R. Venkatesh Babu"
] |
Real-world objects perform complex motions that involve multiple independent
motion components. For example, while talking, a person continuously changes
their expressions, head, and body pose. In this work, we propose a novel method
to decompose motion in videos by using a pretrained image GAN model. We
discover disentangled motion subspaces in the latent space of widely used
style-based GAN models that are semantically meaningful and control a single
explainable motion component. The proposed method uses only a few $(\approx10)$
ground truth video sequences to obtain such subspaces. We extensively evaluate
the disentanglement properties of motion subspaces on face and car datasets,
quantitatively and qualitatively. Further, we present results for multiple
downstream tasks, such as motion editing and selective motion transfer, e.g.,
transferring only facial expressions without training for it.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
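
As a rough illustration of discovering motion directions from a handful of latent-space video trajectories as described in the abstract above: PCA on frame-to-frame latent displacements is one plausible stand-in, purely an assumption, since the abstract gives no algorithmic detail.

```python
import numpy as np

def motion_subspace(latent_sequences, n_components=4):
    """Estimate a low-dimensional motion subspace from the per-frame GAN
    latents of a few ground-truth video sequences.

    latent_sequences: list of (frames, latent_dim) arrays."""
    deltas = np.concatenate(
        [seq[1:] - seq[:-1] for seq in latent_sequences], axis=0)
    deltas -= deltas.mean(axis=0)
    # Principal directions of latent displacement span candidate motions.
    _, _, vt = np.linalg.svd(deltas, full_matrices=False)
    return vt[:n_components]   # (n_components, latent_dim) basis rows
```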
2306.00714
|
2023-06-01T14:20:06Z
|
Dissecting Arbitrary-scale Super-resolution Capability from Pre-trained
Diffusion Generative Models
|
[
"Ruibin Li",
"Qihua Zhou",
"Song Guo",
"Jie Zhang",
"Jingcai Guo",
"Xinyang Jiang",
"Yifei Shen",
"Zhenhua Han"
] |
Diffusion-based Generative Models (DGMs) have achieved unparalleled
performance in synthesizing high-quality visual content, opening up the
opportunity to improve image super-resolution (SR) tasks. Recent solutions for
these tasks often train architecture-specific DGMs from scratch, or require
iterative fine-tuning and distillation on pre-trained DGMs, both of which take
considerable time and hardware investments. More seriously, since the DGMs are
established with a discrete pre-defined upsampling scale, they cannot well
match the emerging requirements of arbitrary-scale super-resolution (ASSR),
where a unified model adapts to arbitrary upsampling scales, instead of
preparing a series of distinct models for each case. These limitations beg an
intriguing question: can we identify the ASSR capability of existing
pre-trained DGMs without the need for distillation or fine-tuning? In this
paper, we take a step towards resolving this matter by proposing Diff-SR, a
first ASSR attempt based solely on pre-trained DGMs, without additional
training efforts. It is motivated by an exciting finding that a simple
methodology, which first injects a specific amount of noise into the
low-resolution images before invoking a DGM's backward diffusion process,
outperforms current leading solutions. The key insight is determining a
suitable amount of noise to inject, i.e., small amounts lead to poor low-level
fidelity, while overly large amounts degrade the high-level signature. Through a
fine-grained theoretical analysis, we propose the Perceptual Recoverable
Field (PRF), a metric that achieves the optimal trade-off between these two
factors. Extensive experiments verify the effectiveness, flexibility, and
adaptability of Diff-SR, demonstrating superior performance to state-of-the-art
solutions under diverse ASSR environments.
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
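
The noise-injection recipe sketched in the Diff-SR abstract above can be written as a hedged DDIM-style loop. The model interface, the noise schedule, and the choice of `t_star` (which the paper's PRF metric is meant to determine) are all assumptions here, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def diff_sr(lr_image, scale, eps_model, alphas_cumprod, t_star):
    """Upsample a (b, c, h, w) low-resolution image, inject noise at level
    t_star, then run a pretrained diffusion model's reverse process.

    eps_model(x, t) -> predicted noise; alphas_cumprod: (T,) schedule tensor."""
    x = F.interpolate(lr_image, scale_factor=scale, mode="bicubic")
    a = alphas_cumprod[t_star]
    x = a.sqrt() * x + (1 - a).sqrt() * torch.randn_like(x)  # jump to t_star
    for t in range(t_star, 0, -1):          # deterministic DDIM reverse steps
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = eps_model(x, torch.full((x.size(0),), t, device=x.device))
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return x
```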
2306.00834
|
2023-06-01T15:57:12Z
|
Deformable Convolutions and LSTM-based Flexible Event Frame Fusion
Network for Motion Deblurring
|
[
"Dan Yang",
"Mehmet Yamac"
] |
Event cameras differ from conventional RGB cameras in that they produce
asynchronous data sequences. While RGB cameras capture every frame at a fixed
rate, event cameras only capture changes in the scene, resulting in sparse and
asynchronous data output. Despite the fact that event data carries useful
information that can be utilized in motion deblurring of RGB cameras,
integrating event and image information remains a challenge. Recent
state-of-the-art CNN-based deblurring solutions produce multiple 2-D event
frames based on the accumulation of event data over a time period. In most of
these techniques, however, the number of event frames is fixed and predefined,
which reduces temporal resolution drastically, particularly for scenarios when
fast-moving objects are present or when longer exposure times are required. It
is also important to note that recent modern cameras (e.g., cameras in mobile
phones) dynamically set the exposure time of the image, which presents an
additional problem for networks developed for a fixed number of event frames.
To address these challenges, we developed a Long Short-Term Memory (LSTM)-based
event feature extraction module that enables the use of a dynamically varying
number of event frames. Using these modules, we constructed
a state-of-the-art deblurring network, Deformable Convolutions and LSTM-based
Flexible Event Frame Fusion Network (DLEFNet). It is particularly useful for
scenarios in which exposure times vary depending on factors such as lighting
conditions or the presence of fast-moving objects in the scene. Evaluation
results demonstrate that the proposed method outperforms existing
state-of-the-art networks on the deblurring task in both synthetic and
real-world datasets.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2306.00864
|
2023-06-01T16:23:47Z
|
A Transformer-based representation-learning model with unified
processing of multimodal input for clinical diagnostics
|
[
"Hong-Yu Zhou",
"Yizhou Yu",
"Chengdi Wang",
"Shu Zhang",
"Yuanxu Gao",
"Jia Pan",
"Jun Shao",
"Guangming Lu",
"Kang Zhang",
"Weimin Li"
] |
During the diagnostic process, clinicians leverage multimodal information,
such as chief complaints, medical images, and laboratory-test results.
Deep-learning models for aiding diagnosis have yet to meet this requirement.
Here we report a Transformer-based representation-learning model as a clinical
diagnostic aid that processes multimodal input in a unified manner. Rather than
learning modality-specific features, the model uses embedding layers to convert
images and unstructured and structured text into visual tokens and text tokens,
and bidirectional blocks with intramodal and intermodal attention to learn a
holistic representation of radiographs, the unstructured chief complaint and
clinical history, structured clinical information such as laboratory-test
results and patient demographic information. The unified model outperformed an
image-only model and non-unified multimodal diagnosis models in the
identification of pulmonary diseases (by 12% and 9%, respectively) and in the
prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and
7%, respectively). Leveraging unified multimodal Transformer-based models may
help streamline triage of patients and facilitate the clinical decision
process.
|
[
"cs.CV",
"cs.CL",
"cs.LG"
] | false |
2306.00905
|
2023-06-01T17:02:51Z
|
T2IAT: Measuring Valence and Stereotypical Biases in Text-to-Image
Generation
|
[
"Jialu Wang",
"Xinyue Gabby Liu",
"Zonglin Di",
"Yang Liu",
"Xin Eric Wang"
] |
Warning: This paper contains content that may be toxic, harmful, or
offensive.
In the last few years, text-to-image generative models have gained remarkable
success in generating images with unprecedented quality accompanied by a
breakthrough of inference speed. Despite their rapid progress, human biases
that manifest in the training examples, particularly with regard to common
stereotypical biases, like gender and skin tone, still have been found in these
generative models. In this work, we seek to measure more complex human biases
that exist in the task of text-to-image generation. Inspired by the well-known
Implicit Association Test (IAT) from social psychology, we propose a novel
Text-to-Image Association Test (T2IAT) framework that quantifies the implicit
stereotypes between concepts and valence, and those in the images. We replicate
the previously documented bias tests on generative models, including morally
neutral tests on flowers and insects as well as demographic stereotypical tests
on diverse social attributes. The results of these experiments demonstrate the
presence of complex stereotypical behaviors in image generations.
|
[
"cs.CL",
"cs.AI",
"cs.CV",
"I.2.6"
] | false |
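
The association score in the T2IAT abstract above follows the IAT template. Below is a WEAT-style differential-association sketch over image embeddings, assumed here as the general shape of the computation rather than the paper's exact formula.

```python
import numpy as np

def cosine(u, V):
    """Cosine similarity of one vector u against each row of matrix V."""
    return (V @ u) / (np.linalg.norm(u) * np.linalg.norm(V, axis=1))

def t2iat_effect_size(X, Y, A, B):
    """Differential association of two generated-image sets (X, Y) with two
    valence attribute sets (A, B); all arguments are (n, dim) embeddings."""
    s = lambda w: cosine(w, A).mean() - cosine(w, B).mean()
    sx = np.array([s(x) for x in X])
    sy = np.array([s(y) for y in Y])
    pooled = np.concatenate([sx, sy]).std(ddof=1)
    return (sx.mean() - sy.mean()) / pooled   # Cohen's-d-style effect size
```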
2306.00927
|
2023-06-01T17:31:07Z
|
Second Sight: Using brain-optimized encoding models to align image
distributions with human brain activity
|
[
"Reese Kneeland",
"Jordyn Ojeda",
"Ghislain St-Yves",
"Thomas Naselaris"
] |
Two recent developments have accelerated progress in image reconstruction
from human brain activity: large datasets that offer samples of brain activity
in response to many thousands of natural scenes, and the open-sourcing of
powerful stochastic image-generators that accept both low- and high-level
guidance. Most work in this space has focused on obtaining point estimates of
the target image, with the ultimate goal of approximating literal pixel-wise
reconstructions of target images from the brain activity patterns they evoke.
This emphasis belies the fact that there is always a family of images that are
equally compatible with any evoked brain activity pattern, and the fact that
many image-generators are inherently stochastic and do not by themselves offer
a method for selecting the single best reconstruction from among the samples
they generate. We introduce a novel reconstruction procedure (Second Sight)
that iteratively refines an image distribution to explicitly maximize the
alignment between the predictions of a voxel-wise encoding model and the brain
activity patterns evoked by any target image. We show that our process
converges on a distribution of high-quality reconstructions by refining both
semantic content and low-level image details across iterations. Images sampled
from these converged image distributions are competitive with state-of-the-art
reconstruction algorithms. Interestingly, the time-to-convergence varies
systematically across visual cortex, with earlier visual areas generally taking
longer and converging on narrower image distributions, relative to higher-level
brain areas. Second Sight thus offers a succinct and novel method for exploring
the diversity of representations across visual brain areas.
|
[
"q-bio.NC",
"cs.CV",
"cs.LG"
] | false |
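
One way to read the iterative refinement described in the Second Sight abstract above is as a search loop that keeps generator samples whose predicted voxel responses best match the measured activity. Everything below (the interfaces, scoring by correlation, top-k selection) is an illustrative assumption, not the authors' procedure.

```python
import numpy as np

def refine_distribution(sample_images, encode, target_activity,
                        n_iters=10, n_samples=64, keep=8):
    """Iteratively narrow an image distribution toward reconstructions whose
    encoding-model predictions align with the target brain activity.

    sample_images(guides, n) -> list of candidate images (guides may be None);
    encode(image) -> predicted voxel activity vector."""
    guides = None
    for _ in range(n_iters):
        images = sample_images(guides, n_samples)
        scores = [np.corrcoef(encode(im), target_activity)[0, 1]
                  for im in images]
        top = np.argsort(scores)[-keep:]
        guides = [images[i] for i in top]   # best samples seed the next round
    return guides
```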
2306.00985
|
2023-06-01T17:59:55Z
|
Using generative AI to investigate medical imagery models and datasets
|
[
"Oran Lang",
"Doron Yaya-Stupp",
"Ilana Traynis",
"Heather Cole-Lewis",
"Chloe R. Bennett",
"Courtney Lyles",
"Charles Lau",
"Christopher Semturs",
"Dale R. Webster",
"Greg S. Corrado",
"Avinatan Hassidim",
"Yossi Matias",
"Yun Liu",
"Naama Hammel",
"Boris Babenko"
] |
AI models have shown promise in many medical imaging tasks. However, our
ability to explain what signals these models have learned is severely lacking.
Explanations are needed in order to increase the trust in AI-based models, and
could enable novel scientific discovery by uncovering signals in the data that
are not yet known to experts. In this paper, we present a method for automatic
visual explanations leveraging team-based expertise by generating hypotheses of
what visual signals in the images are correlated with the task. We propose the
following 4 steps: (i) train a classifier to perform a given task; (ii) train a
classifier-guided StyleGAN-based image generator (StylEx); (iii) automatically
detect and visualize the top visual attributes that the classifier is sensitive
to; (iv) formulate hypotheses for the underlying mechanisms to stimulate
future research. Specifically, we present the discovered attributes to an
interdisciplinary panel of experts so that hypotheses can account for social
and structural determinants of health. We demonstrate results on eight
prediction tasks across three medical imaging modalities: retinal fundus
photographs, external eye photographs, and chest radiographs. We showcase
examples of attributes that capture clinically known features, confounders that
arise from factors beyond physiological mechanisms, and reveal a number of
physiologically plausible novel attributes. Our approach has the potential to
enable researchers to better understand, improve their assessment, and extract
new knowledge from AI-based models. Importantly, we highlight that attributes
generated by our framework can capture phenomena beyond physiology or
pathophysiology, reflecting the real-world nature of healthcare delivery and
socio-cultural factors. Finally, we intend to release code to enable
researchers to train their own StylEx models and analyze their predictive
tasks.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.00987
|
2023-06-01T17:59:57Z
|
StyleGAN knows Normal, Depth, Albedo, and More
|
[
"Anand Bhattad",
"Daniel McKee",
"Derek Hoiem",
"D. A. Forsyth"
] |
Intrinsic images, in the original sense, are image-like maps of scene
properties like depth, normal, albedo or shading. This paper demonstrates that
StyleGAN can easily be induced to produce intrinsic images. The procedure is
straightforward. We show that, if StyleGAN produces $G({w})$ from latents
${w}$, then for each type of intrinsic image, there is a fixed offset ${d}_c$
so that $G({w}+{d}_c)$ is that type of intrinsic image for $G({w})$. Here
${d}_c$ is *independent of* ${w}$. The StyleGAN we used was pretrained by
others, so this property is not some accident of our training regime. We show
that there are image transformations StyleGAN will *not* produce in this
fashion, so StyleGAN is not a generic image regression engine.
It is conceptually exciting that an image generator should "know" and
represent intrinsic images. There may also be practical advantages to using a
generative model to produce intrinsic images. The intrinsic images obtained
from StyleGAN compare well both qualitatively and quantitatively with those
obtained by using SOTA image regression techniques; but StyleGAN's intrinsic
images are robust to relighting effects, unlike SOTA methods.
|
[
"cs.CV",
"cs.GR",
"cs.LG"
] | false |
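
The $G({w}+{d}_c)$ property in the abstract above suggests a very small fitting procedure: learn one offset per intrinsic type from a few paired examples. A minimal PyTorch sketch, assuming a differentiable generator `G` and a list of paired (latent, intrinsic-image) tensors; this illustrates the idea, not the authors' method.

```python
import torch

def fit_offset(G, pairs, latent_dim, steps=500, lr=0.05):
    """Fit a single latent offset d_c such that G(w + d_c) approximates the
    intrinsic image (depth, normals, albedo, ...) of the scene G(w).

    pairs: list of (w, intrinsic_target) tensors; d_c is shared across all w."""
    d_c = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([d_c], lr=lr)
    for _ in range(steps):
        for w, target in pairs:
            loss = torch.nn.functional.mse_loss(G(w + d_c), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return d_c.detach()  # reusable: intrinsic estimate for any w is G(w + d_c)
```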
2306.01075
|
2023-06-01T18:27:48Z
|
Pedestrian Crossing Action Recognition and Trajectory Prediction with 3D
Human Keypoints
|
[
"Jiachen Li",
"Xinwei Shi",
"Feiyu Chen",
"Jonathan Stroud",
"Zhishuai Zhang",
"Tian Lan",
"Junhua Mao",
"Jeonhyung Kang",
"Khaled S. Refaat",
"Weilong Yang",
"Eugene Ie",
"Congcong Li"
] |
Accurate understanding and prediction of human behaviors are critical
prerequisites for autonomous vehicles, especially in highly dynamic and
interactive scenarios such as intersections in dense urban areas. In this work,
we aim at identifying crossing pedestrians and predicting their future
trajectories. To achieve these goals, we not only need the context information
of road geometry and other traffic participants but also need fine-grained
information of the human pose, motion and activity, which can be inferred from
human keypoints. In this paper, we propose a novel multi-task learning
framework for pedestrian crossing action recognition and trajectory prediction,
which utilizes 3D human keypoints extracted from raw sensor data to capture
rich information on human pose and activity. Moreover, we propose to apply two
auxiliary tasks and contrastive learning to enable auxiliary supervisions to
improve the learned keypoints representation, which further enhances the
performance of major tasks. We validate our approach on a large-scale in-house
dataset, as well as a public benchmark dataset, and show that our approach
achieves state-of-the-art performance on a wide range of evaluation metrics.
The effectiveness of each model component is validated in a detailed ablation
study.
|
[
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] | false |
2306.01081
|
2023-06-01T18:43:16Z
|
4DSR-GCN: 4D Video Point Cloud Upsampling using Graph Convolutional
Networks
|
[
"Lorenzo Berlincioni",
"Stefano Berretti",
"Marco Bertini",
"Alberto Del Bimbo"
] |
Time varying sequences of 3D point clouds, or 4D point clouds, are now being
acquired at an increasing pace in several applications (e.g., LiDAR in
autonomous or assisted driving). In many cases, such volumes of data are
transmitted, thus requiring that proper compression tools be applied to reduce
either the resolution or the bandwidth. In this paper, we propose a new
solution for upscaling and restoration of time-varying 3D video point clouds
after they have been heavily compressed. In consideration of the recent
growing relevance of 3D applications, we focus on a model allowing user-side
upscaling and artifact removal for 3D video point clouds. Our model consists of
a specifically designed Graph Convolutional Network (GCN) that combines Dynamic
Edge Convolution and Graph Attention Networks for feature aggregation in a
Generative Adversarial setting. Taking inspiration from PointNet++, we present
a different way to sample dense point clouds, with the intent of making these
modules work in synergy to provide each node with enough features about its
neighbourhood to later generate
new vertices. Compared to other solutions in the literature that address the
same task, our proposed model is capable of obtaining comparable results in
terms of quality of the reconstruction, while using a substantially lower
number of parameters (about 300KB), making our solution deployable in edge
computing devices such as LiDAR.
|
[
"cs.CV",
"cs.AI",
"cs.MM"
] | false |
2306.06116
|
2023-06-01T17:05:18Z
|
Overview of Deep Learning Methods for Retinal Vessel Segmentation
|
[
"Gorana Gojić",
"Ognjen Kundačina",
"Dragiša Mišković",
"Dinu Dragan"
] |
Methods for automated retinal vessel segmentation play an important role in
the treatment and diagnosis of many eye and systemic diseases. With the fast
development of deep learning methods, more and more retinal vessel segmentation
methods are implemented as deep neural networks. In this paper, we provide a
brief review of recent deep learning methods from highly influential journals
and conferences. The review objectives are: (1) to assess the design
characteristics of the latest methods, (2) to report and analyze quantitative
values of performance evaluation metrics, and (3) to analyze the advantages and
disadvantages of the recent solutions.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.00548
|
2023-06-01T11:09:31Z
|
Label- and slide-free tissue histology using 3D epi-mode quantitative
phase imaging and virtual H&E staining
|
[
"Tanishq Mathew Abraham",
"Paloma Casteleiro Costa",
"Caroline Filan",
"Zhe Guang",
"Zhaobin Zhang",
"Stewart Neill",
"Jeffrey J. Olson",
"Richard Levenson",
"Francisco E. Robles"
] |
Histological staining of tissue biopsies, especially hematoxylin and eosin
(H&E) staining, serves as the benchmark for disease diagnosis and comprehensive
clinical assessment of tissue. However, the process is laborious and
time-consuming, often limiting its usage in crucial applications such as
surgical margin assessment. To address these challenges, we combine an emerging
3D quantitative phase imaging technology, termed quantitative oblique back
illumination microscopy (qOBM), with an unsupervised generative adversarial
network pipeline to map qOBM phase images of unaltered thick tissues (i.e.,
label- and slide-free) to virtually stained H&E-like (vH&E) images. We
demonstrate that the approach achieves high-fidelity conversions to H&E with
subcellular detail using fresh tissue specimens from mouse liver, rat
gliosarcoma, and human gliomas. We also show that the framework directly
enables additional capabilities such as H&E-like contrast for volumetric
imaging. The quality and fidelity of the vH&E images are validated using both a
neural network classifier trained on real H&E images and tested on virtual H&E
images, and a user study with neuropathologists. Given its simple and low-cost
embodiment and ability to provide real-time feedback in vivo, this deep
learning-enabled qOBM approach could enable new workflows for histopathology
with the potential to significantly save time, labor, and costs in cancer
screening, detection, treatment guidance, and more.
|
[
"eess.IV",
"cs.CV",
"cs.LG",
"physics.med-ph",
"q-bio.QM"
] | false |
2306.00956
|
2023-06-01T17:51:22Z
|
The ObjectFolder Benchmark: Multisensory Learning with Neural and Real
Objects
|
[
"Ruohan Gao",
"Yiming Dou",
"Hao Li",
"Tanmay Agarwal",
"Jeannette Bohg",
"Yunzhu Li",
"Li Fei-Fei",
"Jiajun Wu"
] |
We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for
multisensory object-centric learning, centered around object recognition,
reconstruction, and manipulation with sight, sound, and touch. We also
introduce the ObjectFolder Real dataset, including the multisensory
measurements for 100 real-world household objects, building upon a newly
designed pipeline for collecting the 3D meshes, videos, impact sounds, and
tactile readings of real-world objects. We conduct systematic benchmarking on
both the 1,000 multisensory neural objects from ObjectFolder, and the real
multisensory data from ObjectFolder Real. Our results demonstrate the
importance of multisensory perception and reveal the respective roles of
vision, audio, and touch for different object-centric learning tasks. By
publicly releasing our dataset and benchmark suite, we hope to catalyze and
enable new research in multisensory object-centric learning in computer vision,
robotics, and beyond. Project page: https://objectfolder.stanford.edu
|
[
"cs.CV",
"cs.AI",
"cs.GR",
"cs.HC",
"cs.RO"
] | true |
2306.01016
|
2023-06-01T05:39:45Z
|
PV2TEA: Patching Visual Modality to Textual-Established Information
Extraction
|
[
"Hejie Cui",
"Rongmei Lin",
"Nasser Zalmout",
"Chenwei Zhang",
"Jingbo Shang",
"Carl Yang",
"Xian Li"
] |
Information extraction, e.g., attribute value extraction, has been
extensively studied and formulated based only on text. However, many
attributes, such as color, shape, and pattern, can benefit from image-based
extraction. The visual modality has long been underutilized, mainly due to
multimodal annotation difficulty. In this paper, we aim to patch the visual
modality to the textual-established attribute information extractor. The
cross-modality integration faces several unique challenges: (C1) images and
textual descriptions are loosely paired intra-sample and inter-samples; (C2)
images usually contain rich backgrounds that can mislead the prediction; (C3)
weakly supervised labels from textual-established extractors are biased for
multimodal training. We present PV2TEA, an encoder-decoder architecture
equipped with three bias reduction schemes: (S1) Augmented label-smoothed
contrast to improve the cross-modality alignment for loosely-paired image and
text; (S2) Attention-pruning that adaptively distinguishes the visual
foreground; (S3) Two-level neighborhood regularization that mitigates the label
textual bias via reliability estimation. Empirical results on real-world
e-commerce datasets demonstrate up to an 11.74% absolute (20.97% relative) F1
increase over unimodal baselines.
|
[
"cs.CL",
"cs.AI",
"cs.CV",
"cs.LG",
"cs.MM"
] | false |
2306.06117
|
2023-06-01T20:35:06Z
|
Strengths and Weaknesses of 3D Pose Estimation and Inertial Motion
Capture System for Movement Therapy
|
[
"Shawan Mohammed",
"Hannah Siebers",
"Ted Preuß"
] |
3D pose estimation offers the opportunity for fast, non-invasive, and
accurate motion analysis. This is of special interest also for clinical use.
Currently, motion capture systems are used, as they offer robust and precise
data acquisition, which is essential in the case of clinical applications. In
this study, we investigate the accuracy of the state-of-the-art 3D position
estimation approach MeTrabs, compared to the established inertial sensor system
MTw Awinda for specific motion exercises. The study uses and provides an
evaluation dataset of parallel recordings from 10 subjects during various
movement therapy exercises. The information from the Awinda system and the
frames for monocular pose estimation are synchronized. For the comparison,
clinically relevant parameters for joint angles of ankle, knee, back, and elbow
flexion-extension were estimated and evaluated using mean, median, and maximum
deviation between the calculated joint angles for the different exercises,
camera positions, and clothing items. The results of the analysis indicate
that the mean and median deviations can be kept below 5° for some of the
studied angles. These joints could be considered for medical applications even
considering the maximum deviations of 15°. However, caution should be applied
to certain particularly problematic joints; in particular, elbow flexion
showed high maximum deviations of up to 50° in our
analysis. Furthermore, the type of exercise plays a crucial role in the
reliable and safe application of the 3D position estimation method. For
example, all joint angles showed a significant deterioration in performance
during exercises near the ground.
|
[
"eess.IV",
"cs.AI",
"cs.CV",
"cs.LG",
"eess.SP"
] | false |
2306.00400
|
2023-06-01T07:03:47Z
|
BiSync: A Bilingual Editor for Synchronized Monolingual Texts
|
[
"Josep Crego",
"Jitao Xu",
"François Yvon"
] |
In our globalized world, a growing number of situations arise where people
are required to communicate in one or several foreign languages. In the case of
written communication, users with a good command of a foreign language may find
assistance from computer-aided translation (CAT) technologies. These
technologies often allow users to access external resources, such as
dictionaries, terminologies or bilingual concordancers, thereby interrupting
and considerably hindering the writing process. In addition, CAT systems assume
that the source sentence is fixed and also restrict the possible changes on the
target side. In order to make the writing process smoother, we present BiSync,
a bilingual writing assistant that allows users to freely compose text in two
languages, while maintaining the two monolingual texts synchronized. We also
include additional functionalities, such as the display of alternative prefix
translations and paraphrases, which are intended to facilitate the authoring of
texts. We detail the model architecture used for synchronization and evaluate
the resulting tool, showing that high accuracy can be attained with limited
computational resources. The interface and models are publicly available at
https://github.com/jmcrego/BiSync and a demonstration video can be watched on
YouTube at https://youtu.be/_l-ugDHfNgU .
|
[
"cs.CL"
] | false |
2306.00434
|
2023-06-01T08:21:20Z
|
Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts
for Zero-Shot Dialogue State Tracking
|
[
"Qingyue Wang",
"Liang Ding",
"Yanan Cao",
"Yibing Zhan",
"Zheng Lin",
"Shi Wang",
"Dacheng Tao",
"Li Guo"
] |
Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle
a variety of task-oriented dialogue domains without the cost of collecting
in-domain data. Existing works mainly study common data- or model-level
augmentation methods to enhance the generalization but fail to effectively
decouple the semantics of samples, limiting the zero-shot performance of DST.
In this paper, we present a simple and effective "divide, conquer and combine"
solution, which explicitly disentangles the semantics of seen data and
improves performance and robustness with the mixture-of-experts mechanism.
Specifically, we divide the seen data into semantically independent subsets and
train corresponding experts; newly unseen samples are then mapped and inferred
with the mixture of experts via our designed ensemble inference. Extensive
experiments on MultiWOZ2.1 with the T5-Adapter show that our scheme
significantly and consistently improves zero-shot performance, achieving the
SOTA in settings without external knowledge, with only 10M trainable
parameters.
|
[
"cs.CL"
] | false |
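
A toy version of the "combine" step described in the abstract above: weight each semantic-subset expert by the unseen sample's similarity to that subset's centre. The softmax weighting scheme is an assumption; the paper only states that unseen samples are mapped and inferred with the mixture of experts.

```python
import numpy as np

def moe_predict(x_emb, centroids, experts, temperature=1.0):
    """Ensemble inference over semantically independent experts.

    x_emb: (dim,) embedding of an unseen sample; centroids: (K, dim) subset
    centres; experts: list of K callables mapping x_emb to output scores."""
    sims = centroids @ x_emb / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(x_emb))
    weights = np.exp(sims / temperature)
    weights /= weights.sum()               # softmax over subset similarities
    return sum(w * f(x_emb) for w, f in zip(weights, experts))
```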
2306.00435
|
2023-06-01T08:22:21Z
|
How Many Answers Should I Give? An Empirical Study of Multi-Answer
Reading Comprehension
|
[
"Chen Zhang",
"Jiuheng Lin",
"Xiao Liu",
"Yuxuan Lai",
"Yansong Feng",
"Dongyan Zhao"
] |
The multi-answer phenomenon, where a question may have multiple answers
scattered in the document, can be well handled by humans but is challenging
enough for machine reading comprehension (MRC) systems. Despite recent progress
in multi-answer MRC, a systematic analysis of how this phenomenon arises and
how to better address it is still lacking. In this work, we design a taxonomy to
categorize commonly-seen multi-answer MRC instances, with which we inspect
three multi-answer datasets and analyze where the multi-answer challenge comes
from. We further analyze how well different paradigms of current multi-answer
MRC models deal with different types of multi-answer instances. We find that
some paradigms capture well the key information in the questions while others
better model the relationship between questions and contexts. We thus explore
strategies to make the best of the strengths of different paradigms.
Experiments show that generation models can be a promising platform to
incorporate different paradigms. Our annotations and code are released for
further research.
|
[
"cs.CL"
] | false |
2306.00437
|
2023-06-01T08:27:00Z
|
Responsibility Perspective Transfer for Italian Femicide News
|
[
"Gosse Minnema",
"Huiyuan Lai",
"Benedetta Muscato",
"Malvina Nissim"
] |
Different ways of linguistically expressing the same real-world event can
lead to different perceptions of what happened. Previous work has shown that
different descriptions of gender-based violence (GBV) influence the reader's
perception of who is to blame for the violence, possibly reinforcing
stereotypes which see the victim as partly responsible, too. As a contribution
to raise awareness on perspective-based writing, and to facilitate access to
alternative perspectives, we introduce the novel task of automatically
rewriting GBV descriptions as a means to alter the perceived level of
responsibility on the perpetrator. We present a quasi-parallel dataset of
sentences with low and high perceived responsibility levels for the
perpetrator, and experiment with unsupervised (mBART-based), zero-shot and
few-shot (GPT3-based) methods for rewriting sentences. We evaluate our models
using a questionnaire study and a suite of automatic metrics.
|
[
"cs.CL"
] | false |
2306.00645
|
2023-06-01T13:10:48Z
|
Contextual Distortion Reveals Constituency: Masked Language Models are
Implicit Parsers
|
[
"Jiaxi Li",
"Wei Lu"
] |
Recent advancements in pre-trained language models (PLMs) have demonstrated
that these models possess some degree of syntactic awareness. To leverage this
knowledge, we propose a novel chart-based method for extracting parse trees
from masked language models (LMs) without the need to train separate parsers.
Our method computes a score for each span based on the distortion of contextual
representations resulting from linguistic perturbations. We design a set of
perturbations motivated by the linguistic concept of constituency tests, and
use these to score each span by aggregating the distortion scores. To produce a
parse tree, we use chart parsing to find the tree with the minimum score. Our
method consistently outperforms previous state-of-the-art methods on English
with masked LMs, and also demonstrates superior performance in a multilingual
setting, outperforming the state of the art in 6 out of 8 languages. Notably,
although our method does not involve parameter updates or extensive
hyperparameter search, its performance can even surpass some unsupervised
parsing methods that require fine-tuning. Our analysis highlights that the
distortion of contextual representation resulting from syntactic perturbation
can serve as an effective indicator of constituency across languages.
|
[
"cs.CL"
] | false |
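
The chart-parsing step in the abstract above reduces to finding the binary tree whose spans minimise the summed distortion scores. A memoised CKY-style sketch follows; span scoring from the masked LM is abstracted into a `scores` dict, an assumption made for brevity.

```python
def min_score_tree(scores, i, j, memo=None):
    """Return (cost, tree) for the binary tree over span (i, j) minimising
    the total distortion score; scores[(i, j)] is the distortion of span
    (i, j) measured from perturbed masked-LM representations."""
    if memo is None:
        memo = {}
    if j - i == 1:                        # single token: a leaf
        return scores.get((i, j), 0.0), (i, j)
    if (i, j) in memo:
        return memo[(i, j)]
    best = None
    for k in range(i + 1, j):             # try every split point
        lc, lt = min_score_tree(scores, i, k, memo)
        rc, rt = min_score_tree(scores, k, j, memo)
        cost = scores.get((i, j), 0.0) + lc + rc
        if best is None or cost < best[0]:
            best = (cost, ((i, j), lt, rt))
    memo[(i, j)] = best
    return best
```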
2306.00660
|
2023-06-01T13:34:21Z
|
Improving Polish to English Neural Machine Translation with Transfer
Learning: Effects of Data Volume and Language Similarity
|
[
"Juuso Eronen",
"Michal Ptaszynski",
"Karol Nowakowski",
"Zheng Lin Chia",
"Fumito Masui"
] |
This paper investigates the impact of data volume and the use of similar
languages on transfer learning in a machine translation task. We find that
having more data generally leads to better performance, as it allows the model
to learn more patterns and generalizations from the data. However, related
languages can also be particularly effective when there is limited data
available for a specific language pair, as the model can leverage the
similarities between the languages to improve performance. To demonstrate this,
we fine-tune an mBART model for a Polish-English translation task using the OPUS-100
dataset. We evaluate the performance of the model under various transfer
learning configurations, including different transfer source languages and
different shot levels for Polish, and report the results. Our experiments show
that a combination of related languages and larger amounts of data outperforms
the model trained on related languages or larger amounts of data alone.
Additionally, we show the importance of related languages in zero-shot and
few-shot configurations.
|
[
"cs.CL"
] | false |
2306.00665
|
2023-06-01T13:37:55Z
|
Automatic Glossary of Clinical Terminology: a Large-Scale Dictionary of
Biomedical Definitions Generated from Ontological Knowledge
|
[
"François Remy",
"Thomas Demeester"
] |
Background: More than 400,000 biomedical concepts and some of their
relationships are contained in SnomedCT, a comprehensive biomedical ontology.
However, their concept names are not always readily interpretable by
non-experts, or patients looking at their own electronic health records (EHR).
Clear definitions or descriptions in understandable language are often not
available. Therefore, generating human-readable definitions for biomedical
concepts might help make the information they encode more accessible and
understandable to a wider public.
Objective: In this article, we introduce the Automatic Glossary of Clinical
Terminology (AGCT), a large-scale biomedical dictionary of clinical concepts
generated using high-quality information extracted from the biomedical
knowledge contained in SnomedCT.
Methods: We generate a novel definition for every SnomedCT concept, after
prompting the OpenAI Turbo model, a variant of GPT-3.5, using a high-quality
verbalization of the SnomedCT relationships of the to-be-defined concept. A
significant subset of the generated definitions was subsequently judged by NLP
researchers with biomedical expertise on 5-point scales along the following
three axes: factuality, insight, and fluency.
Results: AGCT contains 422,070 computer-generated definitions for SnomedCT
concepts, covering various domains such as diseases, procedures, drugs, and
anatomy. The average length of the definitions is 49 words. The definitions
were assigned average scores of over 4.5 out of 5 on all three axes, indicating
a majority of factual, insightful, and fluent definitions.
Conclusion: AGCT is a novel and valuable resource for biomedical tasks that
require human-readable definitions for SnomedCT concepts. It can also serve as
a base for developing robust biomedical retrieval models or other applications
that leverage natural language understanding of biomedical knowledge.
|
[
"cs.CL"
] | false |
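
The generation step described in the AGCT abstract above amounts to verbalising a concept's SnomedCT relations into a prompt. A minimal sketch with an invented prompt format follows; the paper's actual prompt wording is not given in the abstract, so everything here is illustrative.

```python
def build_definition_prompt(concept, relations):
    """Turn a concept's ontology relations into a definition-writing prompt.

    relations: list of (relation_name, target_concept) pairs from SnomedCT."""
    facts = "\n".join(f"- {rel}: {target}" for rel, target in relations)
    return (
        f"Known facts about the clinical concept '{concept}':\n{facts}\n\n"
        f"Write one clear, factual definition of '{concept}' that a patient "
        "reading their own health record could understand."
    )

# Example with hypothetical relations:
print(build_definition_prompt(
    "Myocardial infarction",
    [("is a", "Ischemic heart disease"), ("finding site", "Myocardium")]))
```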
2306.00672
|
2023-06-01T13:44:45Z
|
Towards Argument-Aware Abstractive Summarization of Long Legal Opinions
with Summary Reranking
|
[
"Mohamed Elaraby",
"Yang Zhong",
"Diane Litman"
] |
We propose a simple approach for the abstractive summarization of long legal
opinions that considers the argument structure of the document. Legal opinions
often contain complex and nuanced argumentation, making it challenging to
generate a concise summary that accurately captures the main points of the
legal opinion. Our approach involves using argument role information to
generate multiple candidate summaries, then reranking these candidates based on
alignment with the document's argument structure. We demonstrate the
effectiveness of our approach on a dataset of long legal opinions and show that
it outperforms several strong baselines.
|
[
"cs.CL"
] | false |
2306.00708
|
2023-06-01T14:16:53Z
|
Boosting the Performance of Transformer Architectures for Semantic
Textual Similarity
|
[
"Ivan Rep",
"Vladimir Čeperić"
] |
Semantic textual similarity is the task of estimating the similarity between
the meaning of two texts. In this paper, we fine-tune transformer architectures
for semantic textual similarity on the Semantic Textual Similarity Benchmark by
tuning the model partially and then end-to-end. We experiment with BERT,
RoBERTa, and DeBERTaV3 cross-encoders by approaching the problem as a binary
classification task or a regression task. We combine the outputs of the
transformer models and use handmade features as inputs for boosting algorithms.
Due to worse test set results coupled with improvements on the validation set,
we experiment with different dataset splits to further investigate this
occurrence. We also provide an error analysis, focused on the edges of the
prediction range.
|
[
"cs.CL",
"I.2.7"
] | false |
2306.00858
|
2023-06-01T16:17:16Z
|
Adversarial learning of neural user simulators for dialogue policy
optimisation
|
[
"Simon Keizer",
"Caroline Dockes",
"Norbert Braunschweiler",
"Svetlana Stoyanchev",
"Rama Doddipatla"
] |
Reinforcement learning based dialogue policies are typically trained in
interaction with a user simulator. To obtain an effective and robust policy,
this simulator should generate user behaviour that is both realistic and
varied. Current data-driven simulators are trained to accurately model the user
behaviour in a dialogue corpus. We propose an alternative method using
adversarial learning, with the aim to simulate realistic user behaviour with
more variation. We train and evaluate several simulators on a corpus of
restaurant search dialogues, and then use them to train dialogue system
policies. In policy cross-evaluation experiments we demonstrate that an
adversarially trained simulator produces policies with an 8.3% higher success rate
than those trained with a maximum likelihood simulator. Subjective results from
a crowd-sourced dialogue system user evaluation confirm the effectiveness of
adversarially training user simulators.
|
[
"cs.CL"
] | false |
2306.01058
|
2023-06-01T18:01:33Z
|
Are Layout-Infused Language Models Robust to Layout Distribution Shifts?
A Case Study with Scientific Documents
|
[
"Catherine Chen",
"Zejiang Shen",
"Dan Klein",
"Gabriel Stanovsky",
"Doug Downey",
"Kyle Lo"
] |
Recent work has shown that infusing layout features into language models
(LMs) improves processing of visually-rich documents such as scientific papers.
Layout-infused LMs are often evaluated on documents with familiar layout
features (e.g., papers from the same publisher), but in practice models
encounter documents with unfamiliar distributions of layout features, such as
new combinations of text sizes and styles, or new spatial configurations of
textual elements. In this work we test whether layout-infused LMs are robust to
layout distribution shifts. As a case study we use the task of scientific
document structure recovery, segmenting a scientific paper into its structural
categories (e.g., "title", "caption", "reference"). To emulate distribution
shifts that occur in practice we re-partition the GROTOAP2 dataset. We find
that under layout distribution shifts, model performance degrades by up to 20
F1 points. Simple training strategies, such as increasing training diversity, can
reduce this degradation by over 35% relative F1; however, models fail to reach
in-distribution performance in any tested out-of-distribution conditions. This
work highlights the need to consider layout distribution shifts during model
evaluation, and presents a methodology for conducting such evaluations.
|
[
"cs.CL"
] | false |
2306.01090
|
2023-06-01T19:04:17Z
|
Improving the Robustness of Summarization Systems with Dual Augmentation
|
[
"Xiuying Chen",
"Guodong Long",
"Chongyang Tao",
"Mingzhe Li",
"Xin Gao",
"Chengqi Zhang",
"Xiangliang Zhang"
] |
A robust summarization system should be able to capture the gist of the
document, regardless of the specific word choices or noise in the input. In
this work, we first explore the summarization models' robustness against
perturbations including word-level synonym substitution and noise. To create
semantically consistent substitutes, we propose SummAttacker, an
efficient approach to generating adversarial samples based on language models.
Experimental results show that state-of-the-art summarization models have a
significant decrease in performance on adversarial and noisy test sets. Next,
we analyze the vulnerability of the summarization systems and explore improving
the robustness by data augmentation. Specifically, the first brittleness factor
we found is the poor understanding of infrequent words in the input.
Correspondingly, we feed the encoder with more diverse cases created by
SummAttacker in the input space. The other factor is in the latent space, where
the attacked inputs bring more variation to the hidden states. Hence, we
construct adversarial decoder inputs and devise a manifold soft-mixing operation
in the hidden space to introduce more diversity. Experimental results on Gigaword and
CNN/DM datasets demonstrate that our approach achieves significant improvements
over strong baselines and exhibits higher robustness on noisy, attacked, and
clean datasets.
|
[
"cs.CL"
] | false |
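
A greedy word-substitution loop in the spirit of SummAttacker as described above. The candidate source (e.g., a masked LM) and the choice of attack positions are abstracted into callables, and the selection criterion (maximising the summarizer's loss) is an assumption.

```python
def attack(sentence, summarizer_loss, candidates_fn, positions):
    """Greedily replace words with LM-proposed substitutes that most
    increase the summarization model's loss.

    summarizer_loss(text) -> float; candidates_fn(text, i) -> list of
    replacement words for position i (e.g. masked-LM top-k fills)."""
    words = sentence.split()
    for i in positions:
        best_word = words[i]
        best_loss = summarizer_loss(" ".join(words))
        for cand in candidates_fn(" ".join(words), i):
            trial = words[:i] + [cand] + words[i + 1:]
            loss = summarizer_loss(" ".join(trial))
            if loss > best_loss:              # keep the most damaging swap
                best_word, best_loss = cand, loss
        words[i] = best_word
    return " ".join(words)
```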
2306.01117
|
2023-06-01T20:05:05Z
|
Examining the Causal Effect of First Names on Language Models: The Case
of Social Commonsense Reasoning
|
[
"Sullam Jeoung",
"Jana Diesner",
"Halil Kilicoglu"
] |
As language models continue to be integrated into applications of personal
and societal relevance, ensuring these models' trustworthiness is crucial,
particularly with respect to producing consistent outputs regardless of
sensitive attributes. Given that first names may serve as proxies for
(intersectional) socio-demographic representations, it is imperative to examine
the impact of first names on commonsense reasoning capabilities. In this paper,
we study whether a model's reasoning given a specific input differs based on
the first names provided. Our underlying assumption is that the reasoning about
Alice should not differ from the reasoning about James. We propose and
implement a controlled experimental framework to measure the causal effect of
first names on commonsense reasoning, enabling us to distinguish between model
predictions due to chance and caused by actual factors of interest. Our results
indicate that the frequency of first names has a direct effect on model
prediction, with less frequent names yielding divergent predictions compared to
more frequent names. To gain insights into the internal mechanisms of models
that are contributing to these behaviors, we also conduct an in-depth
explainable analysis. Overall, our findings suggest that to ensure model
robustness, it is essential to augment datasets with more diverse first names
during the configuration stage.
|
[
"cs.CL"
] | false |
2306.01169
|
2023-06-01T21:58:33Z
|
Hybrid Long Document Summarization using C2F-FAR and ChatGPT: A
Practical Study
|
[
"Guang Lu",
"Sylvia B. Larcher",
"Tu Tran"
] |
Text summarization is a downstream natural language processing (NLP) task
that challenges the understanding and generation capabilities of language
models. Considerable progress has been made in automatically summarizing short
texts, such as news articles, often leading to satisfactory results. However,
summarizing long documents remains a major challenge. This is due to the
complex contextual information in the text and the lack of open-source
benchmarking datasets and evaluation frameworks that can be used to develop and
test model performance. In this work, we use ChatGPT, the latest breakthrough
in the field of large language models (LLMs), together with the extractive
summarization model C2F-FAR (Coarse-to-Fine Facet-Aware Ranking) to propose a
hybrid extraction and summarization pipeline for long documents such as
business articles and books. We work with the world-renowned company
getAbstract AG and leverage their expertise and experience in professional book
summarization. A practical study has shown that machine-generated summaries can
perform at least as well as human-written summaries when evaluated using
current automated evaluation metrics. However, a closer examination of the
texts generated by ChatGPT through human evaluations has shown that there are
still critical issues in terms of text coherence, faithfulness, and style.
Overall, our results show that the use of ChatGPT is a very promising but not
yet mature approach for summarizing long documents and can at best serve as an
inspiration for human editors. We anticipate that our work will inform NLP
researchers about the extent to which ChatGPT's capabilities for summarizing
long documents overlap with practitioners' needs. Further work is needed to
test the proposed hybrid summarization pipeline, in particular involving GPT-4,
and to propose a new evaluation framework tailored to the task of summarizing
long documents.
|
[
"cs.CL"
] | false |
2306.01183
|
2023-06-01T22:43:37Z
|
Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation
|
[
"Adithya V Ganesan",
"Yash Kumar Lal",
"August Håkan Nilsson",
"H. Andrew Schwartz"
] |
Very large language models (LLMs) perform extremely well on a spectrum of NLP
tasks in a zero-shot setting. However, little is known about their performance
on human-level NLP problems which rely on understanding psychological concepts,
such as assessing personality traits. In this work, we investigate the
zero-shot ability of GPT-3 to estimate the Big 5 personality traits from users'
social media posts. Through a set of systematic experiments, we find that
zero-shot GPT-3 performance is somewhat close to that of an existing pre-trained
SotA for broad classification when knowledge about the trait is injected into the
prompts. However, when prompted to provide fine-grained classification, its
performance drops to close to a simple most frequent class (MFC) baseline. We
further analyze where GPT-3 performs better, as well as worse, than a
pretrained lexical model, illustrating systematic errors that suggest ways to
improve LLMs on human-level NLP tasks.
|
[
"cs.CL",
"68T50",
"J.4; I.2; I.7"
] | false |
2306.01200
|
2023-06-01T23:27:49Z
|
Multi-Dimensional Evaluation of Text Summarization with In-Context
Learning
|
[
"Sameer Jain",
"Vaishakh Keshava",
"Swarnashree Mysore Sathyendra",
"Patrick Fernandes",
"Pengfei Liu",
"Graham Neubig",
"Chunting Zhou"
] |
Evaluation of natural language generation (NLG) is complex and
multi-dimensional. Generated text can be evaluated for fluency, coherence,
factuality, or any other dimensions of interest. Most frameworks that perform
such multi-dimensional evaluation require training on large manually or
synthetically generated datasets. In this paper, we study the efficacy of large
language models as multi-dimensional evaluators using in-context learning,
obviating the need for large training datasets. Our experiments show that
in-context learning-based evaluators are competitive with learned evaluation
frameworks for the task of text summarization, establishing state-of-the-art on
dimensions such as relevance and factual consistency. We then analyze the
effects of factors such as the selection and number of in-context examples on
performance. Finally, we study the efficacy of in-context learning based
evaluators in evaluating zero-shot summaries written by large language models
such as GPT-3.
|
[
"cs.CL"
] | false |
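
The in-context evaluation setup above can be made concrete as a few-shot scoring prompt. The format below is illustrative, not the paper's template.

```python
def evaluation_prompt(dimension, examples, source, summary):
    """Build a few-shot prompt asking an LLM to rate one summary dimension.

    examples: list of (source, summary, score) in-context demonstrations;
    dimension: e.g. "relevance" or "factual consistency"."""
    shots = "\n\n".join(
        f"Article: {s}\nSummary: {y}\n{dimension} score (1-5): {r}"
        for s, y, r in examples)
    return (f"Rate the {dimension} of each summary on a 1-5 scale.\n\n"
            f"{shots}\n\nArticle: {source}\nSummary: {summary}\n"
            f"{dimension} score (1-5):")
```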
2306.00346
|
2023-06-01T04:55:43Z
|
CAISA at SemEval-2023 Task 8: Counterfactual Data Augmentation for
Mitigating Class Imbalance in Causal Claim Identification
|
[
"Akbar Karimi",
"Lucie Flek"
] |
The class imbalance problem can cause machine learning models to produce an
undesirable performance on the minority class as well as the whole dataset.
Using data augmentation techniques to increase the number of samples is one way
to tackle this problem. We introduce a novel counterfactual data augmentation
method based on verb replacement for the identification of medical claims. In
addition, we
investigate the impact of this method and compare it with 3 other data
augmentation techniques, showing that the proposed method can result in a
significant (relative) improvement in the minority class.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.00374
|
2023-06-01T06:13:51Z
|
CFL: Causally Fair Language Models Through Token-level Attribute
Controlled Generation
|
[
"Rahul Madhavan",
"Rishabh Garg",
"Kahini Wadhawan",
"Sameep Mehta"
] |
We propose a method to control the attributes of Language Models (LMs) for
the text generation task using Causal Average Treatment Effect (ATE) scores and
counterfactual augmentation. We explore this method, in the context of LM
detoxification, and propose the Causally Fair Language (CFL) architecture for
detoxifying pre-trained LMs in a plug-and-play manner. Our architecture is
based on a Structural Causal Model (SCM) that is mathematically transparent and
computationally efficient as compared with many existing detoxification
techniques. We also propose several new metrics that aim to better understand
the behaviour of LMs in the context of toxic text generation. Further, we
achieve state-of-the-art performance on toxic degeneration, measured on the
RealToxicityPrompts (RTP) benchmark. Our experiments show that CFL achieves such a
detoxification without much impact on the model perplexity. We also show that
CFL mitigates the unintended bias problem through experiments on the BOLD
dataset.
|
[
"cs.CL",
"cs.AI"
] | false |
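A minimal sketch of how a token-level ATE score could be computed, assuming a black-box toxicity scorer; the paper's SCM-based formulation is more involved than this difference-in-means estimate:

```python
# Sketch: the treatment effect of a token is the mean change in toxicity
# when the token is removed from contexts that contain it. `toxicity` is
# a stand-in callable (e.g., a classifier), not the paper's exact scorer.
def token_ate(token, contexts, toxicity):
    """contexts: token lists containing `token`; toxicity: list -> [0, 1]."""
    effects = [toxicity(sent) - toxicity([w for w in sent if w != token])
               for sent in contexts]
    return sum(effects) / len(effects)
```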
2306.00445
|
2023-06-01T08:34:26Z
|
A big data approach towards sarcasm detection in Russian
|
[
"A. A. Gurin",
"T. M. Sadykov",
"T. A. Zhukov"
] |
We present a set of deterministic algorithms for Russian inflection and
automated text synthesis. These algorithms are implemented in a publicly
available web-service www.passare.ru. This service provides functions for
inflection of single words, word matching and synthesis of grammatically
correct Russian text. Selected code and datasets are available at
https://github.com/passare-ru/PassareFunctions/. Performance of the inflectional
functions has been tested against the annotated corpus of Russian language
OpenCorpora, compared with that of other solutions, and used for estimating the
morphological variability and complexity of different parts of speech in
Russian.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.00502
|
2023-06-01T09:53:53Z
|
Revisiting Event Argument Extraction: Can EAE Models Learn Better When
Being Aware of Event Co-occurrences?
|
[
"Yuxin He",
"Jingyue Hu",
"Buzhou Tang"
] |
Event co-occurrences have been proved effective for event extraction (EE) in
previous studies, but have not been considered for event argument extraction
(EAE) recently. In this paper, we try to fill this gap between EE research and
EAE research, by highlighting the question: "Can EAE models learn better
when being aware of event co-occurrences?". To answer this question, we
reformulate EAE as a problem of table generation and extend a SOTA prompt-based
EAE model into a non-autoregressive generation framework, called TabEAE, which
is able to extract the arguments of multiple events in parallel. Under this
framework, we experiment with 3 different training-inference schemes on 4
datasets (ACE05, RAMS, WikiEvents and MLEE) and discover that via training the
model to extract all events in parallel, it can better distinguish the semantic
boundary of each event and its ability to extract single event gets
substantially improved. Experimental results show that our method achieves new
state-of-the-art performance on the 4 datasets. Our code is available at
https://github.com/Stardust-hyx/TabEAE.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.00535
|
2023-06-01T10:42:56Z
|
The Effects of Input Type and Pronunciation Dictionary Usage in Transfer
Learning for Low-Resource Text-to-Speech
|
[
"Phat Do",
"Matt Coler",
"Jelske Dijkstra",
"Esther Klabbers"
] |
We compare phone labels and articulatory features as input for cross-lingual
transfer learning in text-to-speech (TTS) for low-resource languages (LRLs).
Experiments with FastSpeech 2 and the LRL West Frisian show that using
articulatory features outperformed using phone labels in both intelligibility
and naturalness. For LRLs without pronunciation dictionaries, we propose two
novel approaches: a) using a massively multilingual model to convert
grapheme-to-phone (G2P) in both training and synthesizing, and b) using a
universal phone recognizer to create a makeshift dictionary. Results show that
the G2P approach performs largely on par with using a ground-truth dictionary,
and that the phone recognition approach, while performing generally worse, remains a
viable option for LRLs less suitable for the G2P approach. Within each
approach, using articulatory features as input outperforms using phone labels.
|
[
"cs.CL",
"eess.AS"
] | false |
2306.00539
|
2023-06-01T10:46:08Z
|
A Call for Standardization and Validation of Text Style Transfer
Evaluation
|
[
"Phil Ostheimer",
"Mayank Nagda",
"Marius Kloft",
"Sophie Fellenz"
] |
Text Style Transfer (TST) evaluation is, in practice, inconsistent.
Therefore, we conduct a meta-analysis on human and automated TST evaluation and
experimentation that thoroughly examines existing literature in the field. The
meta-analysis reveals a substantial standardization gap in human and automated
evaluation. In addition, we find a validation gap: only a few automated
metrics have been validated using human experiments. To this end, we thoroughly
scrutinize both the standardization and validation gap and reveal the resulting
pitfalls. This work also paves the way to close the standardization and
validation gap in TST evaluation by calling out requirements to be met by
future research.
|
[
"cs.LG",
"cs.CL"
] | false |
2306.00652
|
2023-06-01T13:20:22Z
|
Explanation Graph Generation via Generative Pre-training over Synthetic
Graphs
|
[
"Han Cui",
"Shangzhan Li",
"Yu Zhang",
"Qi Shi"
] |
The generation of explanation graphs is a significant task that aims to
produce explanation graphs in response to user input, revealing the internal
reasoning process. This task is challenging due to the significant discrepancy
between unstructured user queries and structured explanation graphs. Current
research commonly fine-tunes a text-based pre-trained language model on a small
downstream dataset that is annotated with labeled graphs. However, due to the
limited scale of available datasets, this approach may prove to be insufficient
in bridging the gap between natural language text and structured graphs. In
this paper, to alleviate the above limitations, we propose a novel pre-trained
framework EG3P (for Explanation Graph Generation via Generative Pre-training
over synthetic graphs) for the explanation graph generation task. Specifically,
we first propose a text-to-graph generative task to pre-train the model with
the goal of bridging the text-graph gap. Additionally, we propose an automatic
corpus synthesis strategy for synthesizing a large scale of high-quality
corpus, reducing the reliance on costly manual annotation methods. Experimental
results on ExplaGraphs show the effectiveness of EG3P: our model surpasses
all baseline systems by remarkable margins. Moreover, further analysis
demonstrates that EG3P is able to generate better explanation graphs on actual
reasoning tasks such as CommonsenseQA and OpenbookQA.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.00667
|
2023-06-01T13:39:33Z
|
Predicting the Quality of Revisions in Argumentative Writing
|
[
"Zhexiong Liu",
"Diane Litman",
"Elaine Wang",
"Lindsay Matsumura",
"Richard Correnti"
] |
The ability to revise in response to feedback is critical to students'
writing success. In the case of argument writing specifically, identifying
whether an argument revision (AR) is successful or not is a complex problem
because AR quality is dependent on the overall content of an argument. For
example, adding the same evidence sentence could strengthen or weaken existing
claims in different argument contexts (ACs). To address this issue we developed
Chain-of-Thought prompts to facilitate ChatGPT-generated ACs for AR quality
predictions. The experiments on two corpora, our annotated elementary essays
and existing college essays benchmark, demonstrate the superiority of the
proposed ACs over baselines.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.00751
|
2023-06-01T14:46:34Z
|
Differentiable Tree Operations Promote Compositional Generalization
|
[
"Paul Soulos",
"Edward Hu",
"Kate McCurdy",
"Yunmo Chen",
"Roland Fernandez",
"Paul Smolensky",
"Jianfeng Gao"
] |
In the context of structure-to-structure transformation tasks, learning
sequences of discrete symbolic operations poses significant challenges due to
their non-differentiability. To facilitate the learning of these symbolic
sequences, we introduce a differentiable tree interpreter that compiles
high-level symbolic tree operations into subsymbolic matrix operations on
tensors. We present a novel Differentiable Tree Machine (DTM) architecture that
integrates our interpreter with an external memory and an agent that learns to
sequentially select tree operations to execute the target transformation in an
end-to-end manner. With respect to out-of-distribution compositional
generalization on synthetic semantic parsing and language generation tasks, DTM
achieves 100% while existing baselines such as Transformer, Tree Transformer,
LSTM, and Tree2Tree LSTM achieve less than 30%. DTM remains highly
interpretable in addition to its perfect performance.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.00774
|
2023-06-01T15:06:11Z
|
In-Context Learning User Simulators for Task-Oriented Dialog Systems
|
[
"Silvia Terragni",
"Modestas Filipavicius",
"Nghia Khau",
"Bruna Guedes",
"André Manso",
"Roland Mathis"
] |
This paper presents a novel application of large language models in user
simulation for task-oriented dialog systems, specifically focusing on an
in-context learning approach. By harnessing the power of these models, the
proposed approach generates diverse utterances based on user goals and limited
dialog examples. Unlike traditional simulators, this method eliminates the need
for labor-intensive rule definition or extensive annotated data, making it more
efficient and accessible. Additionally, an error analysis of the interaction
between the user simulator and dialog system uncovers common mistakes,
providing valuable insights into areas that require improvement. Our
implementation is available at
https://github.com/telepathylabsai/prompt-based-user-simulator.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.00784
|
2023-06-01T15:16:18Z
|
Interpretable Math Word Problem Solution Generation Via Step-by-step
Planning
|
[
"Mengxue Zhang",
"Zichao Wang",
"Zhichao Yang",
"Weiqi Feng",
"Andrew Lan"
] |
Solutions to math word problems (MWPs) with step-by-step explanations are
valuable, especially in education, to help students better comprehend
problem-solving strategies. Most existing approaches only focus on obtaining
the final correct answer. A few recent approaches leverage intermediate
solution steps to improve final answer correctness but often cannot generate
coherent steps with a clear solution strategy. Contrary to existing work, we
focus on improving the correctness and coherence of the intermediate solution
steps. We propose a step-by-step planning approach for intermediate solution
generation, which strategically plans the generation of the next solution step
based on the MWP and the previous solution steps. Our approach first plans the
next step by predicting the necessary math operation needed to proceed, given
history steps, then generates the next step, token-by-token, by prompting a
language model with the predicted math operation. Experiments on the GSM8K
dataset demonstrate that our approach improves the accuracy and
interpretability of the solution on both automatic metrics and human
evaluation.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.00791
|
2023-06-01T15:22:05Z
|
Modeling and Analyzing Scorer Preferences in Short-Answer Math Questions
|
[
"Mengxue Zhang",
"Neil Heffernan",
"Andrew Lan"
] |
Automated scoring of student responses to open-ended questions, including
short-answer questions, has great potential to scale to a large number of
responses. Recent approaches for automated scoring rely on supervised learning,
i.e., training classifiers or fine-tuning language models on a small number of
responses with human-provided score labels. However, since scoring is a
subjective process, these human scores are noisy and can be highly variable,
depending on the scorer. In this paper, we investigate a collection of models
that account for the individual preferences and tendencies of each human scorer
in the automated scoring task. We apply these models to a short-answer math
response dataset where each response is scored (often differently) by multiple
different human scorers. We conduct quantitative experiments to show that our
scorer models lead to improved automated scoring accuracy. We also conduct
quantitative experiments and case studies to analyze the individual preferences
and tendencies of scorers. We found that scorers can be grouped into several
obvious clusters, with each cluster having distinct features, and analyzed them
in detail.
|
[
"cs.CL",
"cs.AI"
] | false |
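One simple way to realize "a model per scorer tendency" as described above is an additive response-score-plus-scorer-bias model fit by least squares. This is only a sketch of the idea, not the collection of models studied in the paper:

```python
import numpy as np

def fit_scorer_bias(scores):
    """scores: {(response_id, scorer_id): observed_score}. Returns per-
    response base scores and per-scorer additive biases via least squares."""
    resp = sorted({r for r, _ in scores})
    scor = sorted({s for _, s in scores})
    A = np.zeros((len(scores), len(resp) + len(scor)))
    y = np.empty(len(scores))
    for row, ((r, s), v) in enumerate(scores.items()):
        A[row, resp.index(r)] = 1.0              # indicator for the response
        A[row, len(resp) + scor.index(s)] = 1.0  # indicator for the scorer
        y[row] = v
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return (dict(zip(resp, theta[:len(resp)])),
            dict(zip(scor, theta[len(resp):])))
```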
2306.01061
|
2023-06-01T18:08:51Z
|
Reimagining Retrieval Augmented Language Models for Answering Queries
|
[
"Wang-Chiew Tan",
"Yuliang Li",
"Pedro Rodriguez",
"Richard James",
"Xi Victoria Lin",
"Alon Halevy",
"Scott Yih"
] |
We present a reality check on large language models and inspect the promise
of retrieval augmented language models in comparison. Such language models are
semi-parametric, where models integrate model parameters and knowledge from
external data sources to make their predictions, as opposed to the parametric
nature of vanilla large language models. We give initial experimental findings
that semi-parametric architectures can be enhanced with views, a query
analyzer/planner, and provenance to make a significantly more powerful system
for question answering in terms of accuracy and efficiency, and potentially for
other NLP tasks.
|
[
"cs.CL",
"cs.DB"
] | true |
2306.01093
|
2023-06-01T19:10:09Z
|
UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of
Multilingual BERT for Low-resource Sentiment Analysis
|
[
"Dou Hu",
"Lingwei Wei",
"Yaxin Liu",
"Wei Zhou",
"Songlin Hu"
] |
This paper describes our system designed for SemEval-2023 Task 12: Sentiment
analysis for African languages. The challenge faced by this task is the
scarcity of labeled data and linguistic resources in low-resource settings. To
alleviate these issues, we propose a generalized multilingual system SACL-XLMR for
sentiment analysis on low-resource languages. Specifically, we design a
lexicon-based multilingual BERT to facilitate language adaptation and
sentiment-aware representation learning. Besides, we apply a supervised
adversarial contrastive learning technique to learn sentiment-spread structured
representations and enhance model generalization. Our system achieved
competitive results, largely outperforming baselines on both multilingual and
zero-shot sentiment classification subtasks. Notably, the system obtained the
1st rank on the zero-shot classification subtask in the official ranking.
Extensive experiments demonstrate the effectiveness of our system.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.01116
|
2023-06-01T20:03:56Z
|
The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora
with Web Data, and Web Data Only
|
[
"Guilherme Penedo",
"Quentin Malartic",
"Daniel Hesslow",
"Ruxandra Cojocaru",
"Alessandro Cappelli",
"Hamza Alobeidli",
"Baptiste Pannier",
"Ebtesam Almazrouei",
"Julien Launay"
] |
Large language models are commonly trained on a mixture of filtered web data
and curated high-quality corpora, such as social media conversations, books, or
technical papers. This curation process is believed to be necessary to produce
performant models with broad zero-shot generalization abilities. However, as
larger models requiring pretraining on trillions of tokens are considered, it
is unclear how scalable curation is and whether we will run out of unique
high-quality data soon. At variance with previous beliefs, we show that
properly filtered and deduplicated web data alone can lead to powerful models;
even significantly outperforming models from the state-of-the-art trained on
The Pile. Despite extensive filtering, the high-quality data we extract from
the web is still plentiful, and we are able to obtain five trillion tokens from
CommonCrawl. We publicly release an extract of 600 billion tokens from our
RefinedWeb dataset, and 1.3/7.5B parameters language models trained on it.
|
[
"cs.CL",
"cs.AI"
] | true |
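Web-scale filtering pipelines like the one described above lean heavily on fuzzy deduplication. The sketch below illustrates the standard MinHash mechanism only; the details of RefinedWeb's own pipeline differ:

```python
import hashlib

def minhash_signature(text, num_hashes=64, shingle=5):
    """Character-shingle MinHash signature; near-duplicate documents agree
    on many signature slots."""
    shingles = {text[i:i + shingle]
                for i in range(max(1, len(text) - shingle + 1))}
    return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                for s in shingles)
            for seed in range(num_hashes)]

def est_jaccard(sig_a, sig_b):
    """Fraction of agreeing slots estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```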
2306.01144
|
2023-06-01T20:56:34Z
|
Evaluating the Capabilities of Multi-modal Reasoning Models with
Synthetic Task Data
|
[
"Nathan Vaska",
"Victoria Helus"
] |
The impressive advances and applications of large language and joint
language-and-visual understanding models has led to an increased need for
methods of probing their potential reasoning capabilities. However, the
difficulty of gathering naturally-occurring data for complex multi-modal reasoning
tasks bottlenecks the evaluation of AI methods on tasks which are not already
covered by an academic dataset. In this work, we leverage recent advances in
high resolution text-to-image generation to develop a framework for generating
evaluation data for multi-modal reasoning tasks. We apply this framework to
generate context-dependent anomaly data, creating a synthetic dataset on a
challenging task which is not well covered by existing datasets. We benchmark
the performance of a state-of-the-art visual question answering (VQA) model
against data generated with this method, and demonstrate that while the task is
tractable, the model performs significantly worse on the context-dependent
anomaly detection task than on standard VQA tasks.
|
[
"cs.LG",
"cs.CL"
] | false |
2306.01150
|
2023-06-01T21:11:24Z
|
Did You Read the Instructions? Rethinking the Effectiveness of Task
Definitions in Instruction Learning
|
[
"Fan Yin",
"Jesse Vig",
"Philippe Laban",
"Shafiq Joty",
"Caiming Xiong",
"Chien-Sheng Jason Wu"
] |
Large language models (LLMs) have shown impressive performance in following
natural language instructions to solve unseen tasks. However, it remains
unclear whether models truly understand task definitions and whether the
human-written definitions are optimal. In this paper, we systematically study
the role of task definitions in instruction learning. We first conduct an
ablation analysis informed by human annotations to understand which parts of a
task definition are most important, and find that model performance only drops
substantially when removing contents describing the task output, in particular
label information. Next, we propose an automatic algorithm to compress task
definitions to a minimal supporting set of tokens, and find that 60% of tokens
can be removed while maintaining or even improving model performance. Based on
these results, we propose two strategies to help models better leverage task
instructions: (1) providing only key information for tasks in a common
structured format, and (2) adding a meta-tuning stage to help the model better
understand the definitions. With these two strategies, we achieve a 4.2 Rouge-L
improvement over 119 unseen test tasks.
|
[
"cs.CL",
"cs.AI"
] | false |
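A minimal greedy sketch of compressing a task definition to a small supporting token set; `evaluate` is a stand-in for running the model with the candidate definition, and the paper's actual algorithm may differ:

```python
def compress_definition(tokens, evaluate, tol=0.0):
    """Greedily drop the token whose removal hurts least, while the
    validation metric stays within `tol` of the running best."""
    best = evaluate(tokens)
    while len(tokens) > 1:
        candidates = [(evaluate(tokens[:i] + tokens[i + 1:]), i)
                      for i in range(len(tokens))]
        score, i = max(candidates)
        if score < best - tol:
            break                  # every removal now costs performance
        best, tokens = score, tokens[:i] + tokens[i + 1:]
    return tokens
```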
2306.01164
|
2023-06-01T21:40:48Z
|
Leveraging Natural Language Processing For Public Health Screening On
YouTube: A COVID-19 Case Study
|
[
"Ahrar Bin Aslam",
"Zafi Sherhan Syed",
"Muhammad Faiz Khan",
"Asghar Baloch",
"Muhammad Shehram Shah Syed"
] |
Background: Social media platforms have become a viable source of medical
information, with patients and healthcare professionals using them to share
health-related information and track diseases. Similarly, YouTube, the largest
video-sharing platform in the world, contains vlogs where individuals talk about
their illnesses. The aim of our study was to investigate the use of Natural
Language Processing (NLP) to identify the spoken content of YouTube vlogs
related to the diagnosis of Coronavirus disease of 2019 (COVID-19) for public
health screening. Methods: COVID-19 videos on YouTube were searched using
relevant keywords. A total of 1000 videos being spoken in English were
downloaded out of which 791 were classified as vlogs, 192 were non-vlogs, and
17 were deleted by the channel. The videos were converted into a textual format
using Microsoft Streams. The textual data was preprocessed using basic and
advanced preprocessing methods. A lexicon of 200 words was created which
contained words related to COVID-19. The data was analyzed using topic
modeling, word clouds, and lexicon matching. Results: The word cloud results
revealed discussions about COVID-19 symptoms like "fever", along with generic
terms such as "mask" and "isolation". Lexical analysis demonstrated that in
96.46% of videos, patients discussed generic terms, and in 95.45% of videos,
people talked about COVID-19 symptoms. LDA Topic Modeling results also
generated topics that successfully captured key themes and content related to
our investigation of COVID-19 diagnoses in YouTube vlogs. Conclusion: By
leveraging NLP techniques on YouTube vlogs public health practitioners can
enhance their ability to mitigate the effects of pandemics and effectively
respond to public health challenges.
|
[
"cs.CL",
"cs.SI"
] | false |
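The lexicon-matching step described above reduces to flagging transcripts that contain any lexicon term; a minimal sketch (the three-entry lexicon is illustrative, the study used a 200-word list):

```python
# Sketch of the lexicon-matching step with an illustrative mini-lexicon.
COVID_LEXICON = {"fever", "mask", "isolation"}

def mentions_lexicon(transcript):
    tokens = {w.strip(".,!?\"'").lower() for w in transcript.split()}
    return bool(tokens & COVID_LEXICON)

print(mentions_lexicon("I had a fever for three days."))  # True
```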
2306.00288
|
2023-06-01T02:06:13Z
|
Training-free Neural Architecture Search for RNNs and Transformers
|
[
"Aaron Serianni",
"Jugal Kalita"
] |
Neural architecture search (NAS) has allowed for the automatic creation of
new and effective neural network architectures, offering an alternative to the
laborious process of manually designing complex architectures. However,
traditional NAS algorithms are slow and require immense amounts of computing
power. Recent research has investigated training-free NAS metrics for image
classification architectures, drastically speeding up search algorithms. In
this paper, we investigate training-free NAS metrics for recurrent neural
network (RNN) and BERT-based transformer architectures, targeted towards
language modeling tasks. First, we develop a new training-free metric, named
hidden covariance, that predicts the trained performance of an RNN architecture
and significantly outperforms existing training-free metrics. We experimentally
evaluate the effectiveness of the hidden covariance metric on the NAS-Bench-NLP
benchmark. Second, we find that the current search space paradigm for
transformer architectures is not optimized for training-free neural
architecture search. Instead, a simple qualitative analysis can effectively
shrink the search space to the best performing architectures. This conclusion
is based on our investigation of existing training-free metrics and new metrics
developed from recent transformer pruning literature, evaluated on our own
benchmark of trained BERT architectures. Ultimately, our analysis shows that
the architecture search space and the training-free metric must be developed
together in order to achieve effective results.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
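A minimal sketch of a hidden-covariance style score: run one batch through an untrained RNN and summarize the covariance of its hidden states. The log-det aggregation here is an assumption; the paper's exact scoring details may differ:

```python
import torch

def hidden_covariance_score(rnn, batch):
    """One untrained forward pass, then a log-det summary of the hidden
    state covariance (proxy for trained performance)."""
    with torch.no_grad():
        hidden, _ = rnn(batch)                     # (B, T, H)
    h = hidden.reshape(-1, hidden.shape[-1])
    h = h - h.mean(dim=0, keepdim=True)
    cov = h.T @ h / (h.shape[0] - 1)
    return torch.linalg.eigvalsh(cov).clamp_min(1e-8).log().sum().item()

rnn = torch.nn.RNN(16, 64, batch_first=True)
print(hidden_covariance_score(rnn, torch.randn(8, 32, 16)))
```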
2306.00377
|
2023-06-01T06:19:01Z
|
Developing and Building Ontologies in Cyber Security
|
[
"Muhammad Shoaib Farooq",
"Muhammad Talha Waseem"
] |
Cyber security is one of the fastest-growing disciplines in modern society.
We work in the cyber security domain, and within it the topic we chose is cyber
security ontologies. We gather the latest and earlier ontologies and compare
them on the basis of different analysis factors to identify the best among
them. The reason for selecting this topic is to assemble ontologies from
different eras, since the studies included in this SLR mostly examined a single
ontology each. A researcher who wants to study ontologies would otherwise have
to study every single ontology and select the one best suited to their
research. We therefore assemble different types of ontologies and compare them
against each other to identify the best among them. A total of 24 papers
published between 2010 and 2020 were carefully selected through a systematic
process and classified accordingly. Lastly, this SLR is presented to provide
researchers with promising future directions in the domain of cyber security
ontologies.
|
[
"cs.CR",
"cs.AI",
"cs.CL"
] | false |
2306.00410
|
2023-06-01T07:25:10Z
|
Towards hate speech detection in low-resource languages: Comparing ASR
to acoustic word embeddings on Wolof and Swahili
|
[
"Christiaan Jacobs",
"Nathanaël Carraz Rakotonirina",
"Everlyn Asiko Chimoto",
"Bruce A. Bassett",
"Herman Kamper"
] |
We consider hate speech detection through keyword spotting on radio
broadcasts. One approach is to build an automatic speech recognition (ASR)
system for the target low-resource language. We compare this to using acoustic
word embedding (AWE) models that map speech segments to a space where matching
words have similar vectors. We specifically use a multilingual AWE model
trained on labelled data from well-resourced languages to spot keywords in data
in the unseen target language. In contrast to ASR, the AWE approach only
requires a few keyword exemplars. In controlled experiments on Wolof and
Swahili where training and test data are from the same domain, an ASR model
trained on just five minutes of data outperforms the AWE approach. But in an
in-the-wild test on Swahili radio broadcasts with actual hate speech keywords,
the AWE model (using one minute of template data) is more robust, giving
similar performance to an ASR system trained on 30 hours of labelled data.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
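A minimal sketch of the AWE keyword-spotting idea above: embed a handful of spoken keyword exemplars, embed candidate segments, and flag segments whose best cosine similarity to any exemplar clears a threshold. `embed` stands in for the multilingual AWE model:

```python
import numpy as np

def spot_keywords(exemplars, segments, embed, threshold=0.8):
    """Return indices of segments matching any keyword exemplar by
    cosine similarity in the AWE space."""
    E = np.stack([embed(x) for x in exemplars])
    E /= np.linalg.norm(E, axis=1, keepdims=True)
    hits = []
    for i, seg in enumerate(segments):
        v = embed(seg)
        v = v / np.linalg.norm(v)
        if float((E @ v).max()) >= threshold:
            hits.append(i)
    return hits
```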
2306.00550
|
2023-06-01T11:11:39Z
|
Chain-Of-Thought Prompting Under Streaming Batch: A Case Study
|
[
"Yuxin Tang"
] |
Recently, Large Language Models (LLMs) have demonstrated remarkable
capabilities. Chain-of-Thought (CoT) has been proposed as a way of assisting
LLMs in performing complex reasoning. However, developing effective prompts can
be a challenging and labor-intensive task. Many studies have proposed ways to
automatically construct CoT from test data. Most of them assume that all test
data is visible before testing and only select a small subset to generate
rationales, which is an unrealistic assumption. In this paper, we present a
case study on how to construct and optimize chain-of-thought prompting using
batch data in streaming settings.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2306.00622
|
2023-06-01T12:45:53Z
|
ReviewerGPT? An Exploratory Study on Using Large Language Models for
Paper Reviewing
|
[
"Ryan Liu",
"Nihar B. Shah"
] |
Given the rapid ascent of large language models (LLMs), we study the
question: (How) can large language models help in reviewing of scientific
papers or proposals? We first conduct some pilot studies where we find that (i)
GPT-4 outperforms other LLMs (Bard, Vicuna, Koala, Alpaca, LLaMa, Dolly,
OpenAssistant, StableLM), and (ii) prompting with a specific question (e.g., to
identify errors) outperforms prompting to simply write a review. With these
insights, we study the use of LLMs (specifically, GPT-4) for three tasks:
1. Identifying errors: We construct 13 short computer science papers each
with a deliberately inserted error, and ask the LLM to check for the
correctness of these papers. We observe that the LLM finds errors in 7 of them,
spanning both mathematical and conceptual errors.
2. Verifying checklists: We task the LLM to verify 16 closed-ended checklist
questions in the respective sections of 15 NeurIPS 2022 papers. We find that
across 119 {checklist question, paper} pairs, the LLM had an 86.6% accuracy.
3. Choosing the "better" paper: We generate 10 pairs of abstracts,
deliberately designing each pair in such a way that one abstract was clearly
superior to the other. The LLM, however, struggled to discern these
relatively straightforward distinctions accurately, committing errors in its
evaluations for 6 out of the 10 pairs.
Based on these experiments, we think that LLMs have a promising use as
reviewing assistants for specific reviewing tasks, but not (yet) for complete
evaluations of papers or proposals.
|
[
"cs.CL",
"cs.AI",
"cs.DL"
] | true |
2306.00697
|
2023-06-01T14:07:19Z
|
How Generative Spoken Language Modeling Encodes Noisy Speech:
Investigation from Phonetics to Syntactics
|
[
"Joonyong Park",
"Shinnosuke Takamichi",
"Tomohiko Nakamura",
"Kentaro Seki",
"Detai Xin",
"Hiroshi Saruwatari"
] |
We examine the speech modeling potential of generative spoken language
modeling (GSLM), which involves using learned symbols derived from data rather
than phonemes for speech analysis and synthesis. Since GSLM facilitates
textless spoken language processing, exploring its effectiveness is critical
for paving the way for novel paradigms in spoken-language processing. This
paper presents the findings of GSLM's encoding and decoding effectiveness at
the spoken-language and speech levels. Through speech resynthesis experiments,
we revealed that resynthesis errors occur at levels ranging from phonology
to syntactics and GSLM frequently resynthesizes natural but content-altered
speech.
|
[
"cs.CL",
"cs.AI",
"eess.AS"
] | false |
2306.00755
|
2023-06-01T14:50:39Z
|
Enhancing the Unified Streaming and Non-streaming Model with Contrastive
Learning
|
[
"Yuting Yang",
"Yuke Li",
"Binbin Du"
] |
The unified streaming and non-streaming speech recognition model has achieved
great success due to its comprehensive capabilities. In this paper, we propose
to improve the accuracy of the unified model by bridging the inherent
representation gap between the streaming and non-streaming modes with a
contrastive objective. Specifically, the top-layer hidden representation at the
same frame of the streaming and non-streaming modes are regarded as a positive
pair, encouraging the representation of the streaming mode close to its
non-streaming counterpart. The multiple negative samples are randomly selected
from the rest frames of the same sample under the non-streaming mode.
Experimental results demonstrate that the proposed method achieves consistent
improvements for the unified model in both streaming and non-streaming
modes. Our method achieves a CER of 4.66% in the streaming mode and a CER of 4.31%
in the non-streaming mode, which sets a new state-of-the-art on the AISHELL-1
benchmark.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
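The frame-level contrastive objective described above admits a compact InfoNCE-style sketch; this is a minimal illustration, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def streaming_contrastive_loss(h_stream, h_full, num_negatives=8, tau=0.1):
    """h_stream, h_full: (T, D) top-layer states of the two modes. The
    positive pair is the same frame across modes; negatives are random
    other frames of the non-streaming mode (a real implementation would
    exclude the positive frame from the negatives)."""
    T = h_stream.shape[0]
    pos = F.cosine_similarity(h_stream, h_full, dim=-1) / tau          # (T,)
    neg_idx = torch.randint(0, T, (T, num_negatives))
    neg = F.cosine_similarity(h_stream.unsqueeze(1),
                              h_full[neg_idx], dim=-1) / tau           # (T, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                 # (T, 1+K)
    return F.cross_entropy(logits, torch.zeros(T, dtype=torch.long))

loss = streaming_contrastive_loss(torch.randn(50, 256), torch.randn(50, 256))
```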
2306.00924
|
2023-06-01T17:24:35Z
|
Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play
Multi-Character Belief Tracker
|
[
"Melanie Sclar",
"Sachin Kumar",
"Peter West",
"Alane Suhr",
"Yejin Choi",
"Yulia Tsvetkov"
] |
Theory of Mind (ToM), the ability to reason about the mental
states of other people, is a key element of our social
intelligence. Yet, despite their ever more impressive performance, large-scale
neural language models still lack basic theory of mind capabilities
out-of-the-box. We posit that simply scaling up models will not imbue them with
theory of mind due to the inherently symbolic and implicit nature of the
phenomenon, and instead investigate an alternative: can we design a
decoding-time algorithm that enhances theory of mind of off-the-shelf neural
language models without explicit supervision? We present SymbolicToM, a
plug-and-play approach to reason about the belief states of multiple characters
in reading comprehension tasks via explicit symbolic representation. More
concretely, our approach tracks each entity's beliefs, their estimation of
other entities' beliefs, and higher-order levels of reasoning, all through
graphical representations, allowing for more precise and interpretable
reasoning than previous approaches. Empirical results on the well-known ToMi
benchmark (Le et al., 2019) demonstrate that SymbolicToM dramatically enhances
off-the-shelf neural networks' theory of mind in a zero-shot setting while
showing robust out-of-distribution performance compared to supervised
baselines. Our work also reveals spurious patterns in existing theory of mind
benchmarks, emphasizing the importance of out-of-distribution evaluation and
methods that do not overfit a particular dataset.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
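The core bookkeeping behind explicit belief tracking can be sketched in a few lines; this first-order toy (on the classic Sally-Anne setup) is only an illustration, while SymbolicToM also tracks higher-order beliefs through graphical representations:

```python
def track_beliefs(events, characters):
    """First-order belief tracking: only characters present when an object
    moves update their belief about its location."""
    beliefs = {c: {} for c in characters}
    for obj, location, witnesses in events:
        for c in witnesses:
            beliefs[c][obj] = location
    return beliefs

beliefs = track_beliefs(
    [("ball", "basket", {"Sally", "Anne"}),
     ("ball", "box", {"Anne"})],      # Sally does not see the second move
    {"Sally", "Anne"})
assert beliefs["Sally"]["ball"] == "basket" and beliefs["Anne"]["ball"] == "box"
```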
2306.00928
|
2023-06-01T17:33:04Z
|
ACLM: A Selective-Denoising based Generative Data Augmentation Approach
for Low-Resource Complex NER
|
[
"Sreyan Ghosh",
"Utkarsh Tyagi",
"Manan Suri",
"Sonal Kumar",
"S Ramaneswaran",
"Dinesh Manocha"
] |
Complex Named Entity Recognition (NER) is the task of detecting
linguistically complex named entities in low-context text. In this paper, we
present ACLM (Attention-map aware keyword selection for Conditional Language
Model fine-tuning), a novel data augmentation approach based on conditional
generation to address the data scarcity problem in low-resource complex NER.
ACLM alleviates the context-entity mismatch issue, a problem that existing NER
data augmentation techniques suffer from, often generating incoherent
augmentations by placing complex named entities in the wrong context. ACLM
builds on BART and is optimized on a novel text reconstruction or denoising
task - we use selective masking (aided by attention maps) to retain the named
entities and certain keywords in the input sentence that provide contextually
relevant additional knowledge or hints about the named entities. Compared with
other data augmentation strategies, ACLM can generate more diverse and coherent
augmentations preserving the true word sense of complex entities in the
sentence. We demonstrate the effectiveness of ACLM both qualitatively and
quantitatively on monolingual, cross-lingual, and multilingual complex NER
across various low-resource settings. ACLM outperforms all our neural baselines
by a significant margin (1%-36%). In addition, we demonstrate the application
of ACLM to other domains that suffer from data scarcity (e.g., biomedical). In
practice, ACLM generates more effective and factual augmentations for these
domains than prior methods. Code: https://github.com/Sreyan88/ACLM
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2306.00947
|
2023-06-01T17:45:32Z
|
EEL: Efficiently Encoding Lattices for Reranking
|
[
"Prasann Singhal",
"Jiacheng Xu",
"Xi Ye",
"Greg Durrett"
] |
Standard decoding approaches for conditional text generation tasks typically
search for an output hypothesis with high model probability, but this may not
yield the best hypothesis according to human judgments of quality. Reranking to
optimize for "downstream" metrics can better optimize for quality, but many
metrics of interest are computed with pre-trained language models, which are
slow to apply to large numbers of hypotheses. We explore an approach for
reranking hypotheses by using Transformers to efficiently encode lattices of
generated outputs, a method we call EEL. With a single Transformer pass over
the entire lattice, we can approximately compute a contextualized
representation of each token as if it were only part of a single hypothesis in
isolation. We combine this approach with a new class of token-factored
rerankers (TFRs) that allow for efficient extraction of high reranker-scoring
hypotheses from the lattice. Empirically, our approach incurs minimal
degradation error compared to the exponentially slower approach of encoding
each hypothesis individually. When applying EEL with TFRs across three text
generation tasks, our results show both substantial speedup compared to naive
reranking and often better performance on downstream metrics than comparable
approaches.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.01031
|
2023-06-01T14:56:19Z
|
Bypass Temporal Classification: Weakly Supervised Automatic Speech
Recognition with Imperfect Transcripts
|
[
"Dongji Gao",
"Matthew Wiesner",
"Hainan Xu",
"Leibny Paola Garcia",
"Daniel Povey",
"Sanjeev Khudanpur"
] |
This paper presents a novel algorithm for building an automatic speech
recognition (ASR) model with imperfect training data. Imperfectly transcribed
speech is a prevalent issue in human-annotated speech corpora, which degrades
the performance of ASR models. To address this problem, we propose Bypass
Temporal Classification (BTC) as an expansion of the Connectionist Temporal
Classification (CTC) criterion. BTC explicitly encodes the uncertainties
associated with transcripts during training. This is accomplished by enhancing
the flexibility of the training graph, which is implemented as a weighted
finite-state transducer (WFST) composition. The proposed algorithm improves the
robustness and accuracy of ASR systems, particularly when working with
imprecisely transcribed speech corpora. Our implementation will be
open-sourced.
|
[
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
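For orientation, the standard CTC criterion that BTC expands is available directly in PyTorch. The sketch below shows only this baseline; BTC's relaxation lives in the WFST training graph and is not reproduced here:

```python
import torch

# Baseline CTC loss over (time, batch, vocab) log-probabilities.
ctc = torch.nn.CTCLoss(blank=0)
log_probs = torch.randn(100, 4, 30).log_softmax(dim=-1)
targets = torch.randint(1, 30, (4, 12))  # possibly imperfect transcripts
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 100, dtype=torch.long),
           target_lengths=torch.full((4,), 12, dtype=torch.long))
```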
2306.01069
|
2023-06-01T18:17:13Z
|
TimelineQA: A Benchmark for Question Answering over Timelines
|
[
"Wang-Chiew Tan",
"Jane Dwivedi-Yu",
"Yuliang Li",
"Lambert Mathias",
"Marzieh Saeidi",
"Jing Nathan Yan",
"Alon Y. Halevy"
] |
Lifelogs are descriptions of experiences that a person had during their life.
Lifelogs are created by fusing data from a multitude of digital services,
such as online photos, maps, shopping and content streaming services. Question
answering over lifelogs can offer personal assistants a critical resource when
they try to provide advice in context. However, obtaining answers to questions
over lifelogs is beyond the current state of the art of question answering
techniques for a variety of reasons, the most pronounced of which is that
lifelogs combine free text with some degree of structure such as temporal and
geographical information.
We create and publicly release TimelineQA, a benchmark for accelerating
progress on querying lifelogs. TimelineQA generates lifelogs of imaginary
people. The episodes in the lifelog range from major life episodes such as high
school graduation to those that occur on a daily basis such as going for a run.
We describe a set of experiments on TimelineQA with several state-of-the-art QA
models. Our experiments reveal that for atomic queries, an extractive QA system
significantly outperforms a state-of-the-art retrieval-augmented QA system.
For multi-hop queries involving aggregates, we show that the best result is
obtained with a state-of-the-art table QA technique, assuming the ground truth
set of episodes for deriving the answer is available.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2306.01160
|
2023-06-01T21:33:59Z
|
Faster Causal Attention Over Large Sequences Through Sparse Flash
Attention
|
[
"Matteo Pagliardini",
"Daniele Paliotta",
"Martin Jaggi",
"François Fleuret"
] |
Transformer-based language models have found many diverse applications
requiring them to process sequences of increasing length. For these
applications, the causal self-attention -- which is the only component scaling
quadratically w.r.t. the sequence length -- becomes a central concern. While
many works have proposed schemes to sparsify the attention patterns and reduce
the computational overhead of self-attention, those are often limited by
implementation concerns and end up imposing a simple and static structure over
the attention matrix. Conversely, implementing more dynamic sparse attentions
often results in runtimes significantly slower than computing the full
attention using the Flash implementation from Dao et al. (2022). We extend
FlashAttention to accommodate a large class of attention sparsity patterns
that, in particular, encompass key/query dropping and hashing-based attention.
This leads to implementations with no computational complexity overhead and a
multi-fold runtime speedup on top of FlashAttention. Even with relatively low
degrees of sparsity, our method improves visibly upon FlashAttention as the
sequence length increases. Without sacrificing perplexity, we increase the
training speed of a transformer language model by $2.0\times$ and $3.3\times$
for sequences of respectively $8k$ and $16k$ tokens.
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] | true |
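A dense reference (not the paper's fused kernel) for causal attention with key dropping, one of the sparsity patterns mentioned above; the fused implementation gets its speedup by skipping the masked blocks entirely:

```python
import torch

def sparse_causal_attention(q, k, v, keep_key):
    """Causal attention where dropped keys are masked out before the
    softmax. Assumes every query still has at least one surviving causal
    key (e.g., keep_key[0] is True)."""
    T, d = q.shape
    scores = (q @ k.T) / d ** 0.5
    causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
    scores = scores.masked_fill(~(causal & keep_key.unsqueeze(0)),
                                float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(6, 8)
out = sparse_causal_attention(
    q, k, v, torch.tensor([True, False, True, True, False, True]))
```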
2306.01201
|
2023-06-01T23:29:23Z
|
Learning When to Speak: Latency and Quality Trade-offs for Simultaneous
Speech-to-Speech Translation with Offline Models
|
[
"Liam Dugan",
"Anshul Wadhawan",
"Kyle Spence",
"Chris Callison-Burch",
"Morgan McGuire",
"Victor Zordan"
] |
Recent work in speech-to-speech translation (S2ST) has focused primarily on
offline settings, where the full input utterance is available before any output
is given. This, however, is not reasonable in many real-world scenarios. In
latency-sensitive applications, rather than waiting for the full utterance,
translations should be spoken as soon as the information in the input is
present. In this work, we introduce a system for simultaneous S2ST targeting
real-world use cases. Our system supports translation from 57 languages to
English with tunable parameters for dynamically adjusting the latency of the
output -- including four policies for determining when to speak an output
sequence. We show that these policies achieve offline-level accuracy with
minimal increases in latency over a Greedy (wait-$k$) baseline. We open-source
our evaluation code and interactive test script to aid future SimulS2ST
research and application development.
|
[
"cs.CL",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
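The wait-$k$ baseline referenced above has a simple action schedule; a minimal sketch, assuming a roughly one-to-one source/target rate for illustration:

```python
def wait_k_schedule(num_source_chunks, k=3):
    """Yield ('read', i) / ('write', j) actions: start writing after k
    source chunks have been read, then alternate, then flush."""
    written = 0
    for i in range(num_source_chunks):
        yield ("read", i)
        if i + 1 >= k:
            yield ("write", written)
            written += 1
    while written < num_source_chunks:
        yield ("write", written)
        written += 1

print(list(wait_k_schedule(5, k=3)))
```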
2306.01206
|
2023-06-01T23:39:07Z
|
Estimating Semantic Similarity between In-Domain and Out-of-Domain
Samples
|
[
"Rhitabrat Pokharel",
"Ameeta Agrawal"
] |
Prior work typically describes out-of-domain (OOD) or out-of-distribution
(OODist) samples as those that originate from dataset(s) or source(s) different
from the training set but for the same task. When compared to in-domain (ID)
samples, the models have been known to usually perform worse on OOD samples,
although this observation is not consistent. Another thread of research has
focused on OOD detection, albeit mostly using supervised approaches. In this
work, we first consolidate and present a systematic analysis of multiple
definitions of OOD and OODist as discussed in prior literature. Then, we
analyze the performance of a model under ID and OOD/OODist settings in a
principled way. Finally, we seek to identify an unsupervised method for
reliably identifying OOD/OODist samples without using a trained model. The
results of our extensive evaluation using 12 datasets from 4 different tasks
suggest the promising potential of unsupervised metrics in this task.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.01805
|
2023-06-01T18:49:47Z
|
Cook-Gen: Robust Generative Modeling of Cooking Actions from Recipes
|
[
"Revathy Venkataramanan",
"Kaushik Roy",
"Kanak Raj",
"Renjith Prasad",
"Yuxin Zi",
"Vignesh Narayanan",
"Amit Sheth"
] |
As people become more aware of their food choices, food computation models
have become increasingly popular in assisting people in maintaining healthy
eating habits. For example, food recommendation systems analyze recipe
instructions to assess nutritional contents and provide recipe recommendations.
The recent and remarkable successes of generative AI methods, such as
auto-regressive large language models, can lead to robust methods for a more
comprehensive understanding of recipes for healthy food recommendations beyond
surface-level nutrition content assessments. In this study, we explore the use
of generative AI methods to extend current food computation models, primarily
involving the analysis of nutrition and ingredients, to also incorporate
cooking actions (e.g., add salt, fry the meat, boil the vegetables, etc.).
Cooking actions are notoriously hard to model using statistical learning
methods due to irregular data patterns - significantly varying natural language
descriptions for the same action (e.g., marinate the meat vs. marinate the meat
and leave overnight) and infrequently occurring patterns (e.g., add salt occurs
far more frequently than marinating the meat). The prototypical approach to
handling irregular data patterns is to increase the volume of data that the
model ingests by orders of magnitude. Unfortunately, in the cooking domain,
these problems are further compounded with larger data volumes presenting a
unique challenge that is not easily handled by simply scaling up. In this work,
we propose novel aggregation-based generative AI methods, Cook-Gen, that
reliably generate cooking actions from recipes, despite difficulties with
irregular data patterns, while also outperforming Large Language Models and
other strong baselines.
|
[
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2306.03773
|
2023-06-01T11:42:34Z
|
Some voices are too common: Building fair speech recognition systems
using the Common Voice dataset
|
[
"Lucas Maison",
"Yannick Estève"
] |
Automatic speech recognition (ASR) systems become increasingly efficient
thanks to new advances in neural network training like self-supervised
learning. However, they are known to be unfair toward certain groups, for
instance, people speaking with an accent. In this work, we use the French
Common Voice dataset to quantify the biases of a pre-trained wav2vec~2.0 model
toward several demographic groups. By fine-tuning the pre-trained model on a
variety of fixed-size, carefully crafted training sets, we demonstrate the
importance of speaker diversity. We also run an in-depth analysis of the Common
Voice corpus and identify important shortcomings that should be taken into
account by users of this dataset.
|
[
"eess.AS",
"cs.CL",
"cs.LG",
"cs.SD"
] | false |
2306.03789
|
2023-06-01T21:31:00Z
|
On the Robustness of Arabic Speech Dialect Identification
|
[
"Peter Sullivan",
"AbdelRahim Elmadany",
"Muhammad Abdul-Mageed"
] |
Arabic dialect identification (ADI) tools are an important part of the
large-scale data collection pipelines necessary for training speech recognition
models. As these pipelines require application of ADI tools to potentially
out-of-domain data, we aim to investigate how vulnerable the tools may be to
this domain shift. With self-supervised learning (SSL) models as a starting
point, we evaluate transfer learning and direct classification from SSL
features. We undertake our evaluation under rich conditions, with a goal to
develop ADI systems from pretrained models and ultimately evaluate performance
on newly collected data. In order to understand what factors contribute to
model decisions, we carry out a careful human study of a subset of our data.
Our analysis confirms that domain shift is a major challenge for ADI models. We
also find that while self-training does alleviate this challenge, it may be
insufficient for realistic conditions.
|
[
"eess.AS",
"cs.CL",
"cs.LG"
] | false |
2306.00482
|
2023-06-01T09:31:57Z
|
Inspecting Spoken Language Understanding from Kids for Basic Math
Learning at Home
|
[
"Eda Okur",
"Roddy Fuentes Alba",
"Saurav Sahay",
"Lama Nachman"
] |
Enriching the quality of early childhood education with interactive math
learning at home systems, empowered by recent advances in conversational AI
technologies, is slowly becoming a reality. With this motivation, we implement
a multimodal dialogue system to support play-based learning experiences at
home, guiding kids to master basic math concepts. This work explores Spoken
Language Understanding (SLU) pipeline within a task-oriented dialogue system
developed for Kid Space, with cascading Automatic Speech Recognition (ASR) and
Natural Language Understanding (NLU) components evaluated on our home
deployment data with kids going through gamified math learning activities. We
validate the advantages of a multi-task architecture for NLU and experiment
with a diverse set of pretrained language representations for Intent
Recognition and Entity Extraction tasks in the math learning domain. To
recognize kids' speech in realistic home environments, we investigate several
ASR systems, including the commercial Google Cloud and the latest open-source
Whisper solutions with varying model sizes. We evaluate the SLU pipeline by
testing our best-performing NLU models on noisy ASR output to inspect the
challenges of understanding children for math learning in authentic homes.
|
[
"cs.CY",
"cs.CL",
"cs.SD",
"eess.AS",
"math.HO"
] | false |
2306.00765
|
2023-06-01T15:00:39Z
|
Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection
|
[
"Erik Arakelyan",
"Arnav Arora",
"Isabelle Augenstein"
] |
Stance Detection is concerned with identifying the attitudes expressed by an
author towards a target of interest. This task spans a variety of domains
ranging from social media opinion identification to detecting the stance for a
legal claim. However, the framing of the task varies within these domains, in
terms of the data collection protocol, the label dictionary and the number of
available annotations. Furthermore, these stance annotations are significantly
imbalanced on a per-topic and inter-topic basis. These factors make multi-domain stance
detection a challenging task, requiring standardization and domain adaptation.
To overcome this challenge, we propose $\textbf{T}$opic $\textbf{E}$fficient
$\textbf{St}$anc$\textbf{E}$ $\textbf{D}$etection (TESTED), consisting of a
topic-guided diversity sampling technique and a contrastive objective that is
used for fine-tuning a stance classifier. We evaluate the method on an existing
benchmark of $16$ datasets with in-domain, i.e. all topics seen and
out-of-domain, i.e. unseen topics, experiments. The results show that our
method outperforms the state-of-the-art with an average of $3.5$ F1 points
increase in-domain, and is more generalizable with an averaged increase of
$10.2$ F1 on out-of-domain evaluation while using $\leq10\%$ of the training
data. We show that our sampling technique mitigates both inter- and per-topic
class imbalances. Finally, our analysis demonstrates that the contrastive
learning objective allows the model a more pronounced segmentation of samples
with varying labels.
|
[
"cs.CL",
"cs.AI",
"cs.IR",
"stat.CO",
"stat.ML"
] | false |
2306.00256
|
2023-06-01T00:29:52Z
|
DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus
Algorithm
|
[
"Lisang Ding",
"Kexin Jin",
"Bicheng Ying",
"Kun Yuan",
"Wotao Yin"
] |
Decentralized Stochastic Gradient Descent (SGD) is an emerging neural network
training approach that enables multiple agents to train a model collaboratively
and simultaneously. Rather than using a central parameter server to collect
gradients from all the agents, each agent keeps a copy of the model parameters
and communicates with a small number of other agents to exchange model updates.
Their communication, governed by the communication topology and gossip weight
matrices, facilitates the exchange of model updates. The state-of-the-art
approach uses the dynamic one-peer exponential-2 topology, achieving faster
training times and improved scalability than the ring, grid, torus, and
hypercube topologies. However, this approach requires a power-of-2 number of
agents, which is impractical at scale. In this paper, we remove this
restriction and propose \underline{D}ecentralized \underline{SGD} with
\underline{C}ommunication-optimal \underline{E}xact \underline{C}onsensus
\underline{A}lgorithm (DSGD-CECA), which works for any number of agents while
still achieving state-of-the-art properties. In particular, DSGD-CECA incurs a
unit per-iteration communication overhead and an $\tilde{O}(n^3)$ transient
iteration complexity. Our proof is based on newly discovered properties of
gossip weight matrices and a novel approach to combine them with DSGD's
convergence analysis. Numerical experiments show the efficiency of DSGD-CECA.
|
[
"cs.LG"
] | false |
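The generic decentralized SGD iteration that DSGD-CECA builds on is easy to sketch; the paper's contribution, the communication-optimal one-peer schedule that yields exact consensus for any number of agents, is not reproduced here:

```python
def dsgd_step(params, grads, peer_of, lr=0.1):
    """One decentralized SGD iteration: each agent takes a local gradient
    step, then averages with its single assigned peer for this round.
    `peer_of[i]` is the one agent that agent i communicates with; choosing
    that schedule well is the hard part."""
    local = [p - lr * g for p, g in zip(params, grads)]
    return [(local[i] + local[peer_of[i]]) / 2 for i in range(len(params))]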
2306.00620
|
2023-06-01T12:45:00Z
|
OTW: Optimal Transport Warping for Time Series
|
[
"Fabian Latorre",
"Chenghao Liu",
"Doyen Sahoo",
"Steven C. H. Hoi"
] |
Dynamic Time Warping (DTW) has become the pragmatic choice for measuring
distance between time series. However, it suffers from unavoidable quadratic
time complexity when the optimal alignment matrix needs to be computed exactly.
This hinders its use in deep learning architectures, where layers involving DTW
computations cause severe bottlenecks. To alleviate these issues, we introduce
a new metric for time series data based on the Optimal Transport (OT)
framework, called Optimal Transport Warping (OTW). OTW enjoys linear time/space
complexity, is differentiable and can be parallelized. OTW enjoys a moderate
sensitivity to time and shape distortions, making it ideal for time series. We
show the efficacy and efficiency of OTW on 1-Nearest Neighbor Classification
and Hierarchical Clustering, as well as in the case of using OTW instead of DTW
in Deep Learning architectures.
|
[
"cs.LG"
] | false |
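The linear-time claim above rests on a classical fact: in one dimension, optimal transport between equal-mass nonnegative series reduces to the L1 gap between cumulative sums. A minimal sketch of that core identity (the paper's OTW adds further handling of time and shape distortions):

```python
import numpy as np

def ot_warp_distance(x, y):
    """Linear-time 1-D optimal-transport-style distance: L1 gap between
    cumulative sums (exact 1-Wasserstein for equal-mass series)."""
    return np.abs(np.cumsum(x) - np.cumsum(y)).sum()

# Unit masses at positions 1 and 3: transport cost is 2.
print(ot_warp_distance(np.array([0, 1, 0, 0]), np.array([0, 0, 0, 1])))
```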
2306.00972
|
2023-06-01T17:58:46Z
|
Improving and Benchmarking Offline Reinforcement Learning Algorithms
|
[
"Bingyi Kang",
"Xiao Ma",
"Yirui Wang",
"Yang Yue",
"Shuicheng Yan"
] |
Recently, Offline Reinforcement Learning (RL) has achieved remarkable
progress with the emergence of various algorithms and datasets. However, these
methods usually focus on algorithmic advancements, ignoring that many low-level
implementation choices considerably influence or even drive the final
performance. As a result, it becomes hard to attribute the progress in Offline
RL as these choices are not sufficiently discussed and aligned in the
literature. In addition, papers focusing on a dataset (e.g., D4RL) often ignore
algorithms proposed on another dataset (e.g., RL Unplugged), causing isolation
among the algorithms, which might slow down the overall progress. Therefore,
this work aims to bridge the gaps caused by low-level choices and datasets. To
this end, we empirically investigate 20 implementation choices using three
representative algorithms (i.e., CQL, CRR, and IQL) and present a guidebook for
choosing implementations. Following the guidebook, we find two variants CRR+
and CQL+ , achieving new state-of-the-art on D4RL. Moreover, we benchmark eight
popular offline RL algorithms across datasets under a unified training and
evaluation framework. The findings are inspiring: the success of a learning
paradigm severely depends on the data distribution, and some previous
conclusions are biased by the dataset used. Our code is available at
https://github.com/sail-sg/offbench.
|
[
"cs.LG"
] | false |
2306.01070
|
2023-06-01T18:17:23Z
|
Hierarchical Attention Encoder Decoder
|
[
"Asier Mujika"
] |
Recent advances in large language models have shown that autoregressive
modeling can generate complex and novel sequences that have many real-world
applications. However, these models must generate outputs autoregressively,
which becomes time-consuming when dealing with long sequences. Hierarchical
autoregressive approaches that compress data have been proposed as a solution,
but these methods still generate outputs at the original data frequency,
resulting in slow and memory-intensive models. In this paper, we propose a
model based on the Hierarchical Recurrent Encoder Decoder (HRED) architecture.
This model independently encodes input sub-sequences without global context,
processes these sequences using a lower-frequency model, and decodes outputs at
the original data frequency. By interpreting the encoder as an implicitly
defined embedding matrix and using sampled softmax estimation, we develop a
training algorithm that can train the entire model without a high-frequency
decoder, which is the most memory and compute-intensive part of hierarchical
approaches. In a final, brief phase, we train the decoder to generate data at
the original granularity. Our algorithm significantly reduces memory
requirements for training autoregressive models and it also improves the total
training wall-clock time.
|
[
"cs.LG"
] | false |
2306.01129
|
2023-06-01T20:28:44Z
|
White-Box Transformers via Sparse Rate Reduction
|
[
"Yaodong Yu",
"Sam Buchanan",
"Druv Pai",
"Tianzhe Chu",
"Ziyang Wu",
"Shengbang Tong",
"Benjamin D. Haeffele",
"Yi Ma"
] |
In this paper, we contend that the objective of representation learning is to
compress and transform the distribution of the data, say sets of tokens,
towards a mixture of low-dimensional Gaussian distributions supported on
incoherent subspaces. The quality of the final representation can be measured
by a unified objective function called sparse rate reduction. From this
perspective, popular deep networks such as transformers can be naturally viewed
as realizing iterative schemes to optimize this objective incrementally.
Particularly, we show that the standard transformer block can be derived from
alternating optimization on complementary parts of this objective: the
multi-head self-attention operator can be viewed as a gradient descent step to
compress the token sets by minimizing their lossy coding rate, and the
subsequent multi-layer perceptron can be viewed as attempting to sparsify the
representation of the tokens. This leads to a family of white-box
transformer-like deep network architectures which are mathematically fully
interpretable. Despite their simplicity, experiments show that these networks
indeed learn to optimize the designed objective: they compress and sparsify
representations of large-scale real-world vision datasets such as ImageNet, and
achieve performance very close to thoroughly engineered transformers such as
ViT. Code is at \url{https://github.com/Ma-Lab-Berkeley/CRATE}.
|
[
"cs.LG"
] | false |
2306.01154
|
2023-06-01T21:24:53Z
|
The Law of Parsimony in Gradient Descent for Learning Deep Linear
Networks
|
[
"Can Yaras",
"Peng Wang",
"Wei Hu",
"Zhihui Zhu",
"Laura Balzano",
"Qing Qu"
] |
Over the past few years, an extensively studied phenomenon in training deep
networks is the implicit bias of gradient descent towards parsimonious
solutions. In this work, we investigate this phenomenon by narrowing our focus
to deep linear networks. Through our analysis, we reveal a surprising "law of
parsimony" in the learning dynamics when the data possesses low-dimensional
structures. Specifically, we show that the evolution of gradient descent
starting from orthogonal initialization only affects a minimal portion of
singular vector spaces across all weight matrices. In other words, the learning
process happens only within a small invariant subspace of each weight matrix,
despite the fact that all weight parameters are updated throughout training.
This simplicity in learning dynamics could have significant implications for
both efficient training and a better understanding of deep networks. First, the
analysis enables us to considerably improve training efficiency by taking
advantage of the low-dimensional structure in learning dynamics. We can
construct smaller, equivalent deep linear networks without sacrificing the
benefits associated with the wider counterparts. Second, it allows us to better
understand deep representation learning by elucidating the linear progressive
separation and concentration of representations from shallow to deep layers. We
also conduct numerical experiments to support our theoretical results. The code
for our experiments can be found at https://github.com/cjyaras/lawofparsimony.
|
[
"cs.LG"
] | false |
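The "law of parsimony" in the abstract above lends itself to a quick numerical check. The sketch below is illustrative only: the network sizes, learning rate, and training length are our assumptions rather than the paper's setting, which makes the claim precise for orthogonal initialization.

```python
import torch

# Train a deep linear network on inputs with low-dimensional structure from
# (scaled) orthogonal initialization, then inspect the singular values of each
# total weight update W(T) - W(0): learning should stay inside a small subspace.
torch.manual_seed(0)
d, r, n, depth = 32, 2, 256, 3
U = torch.linalg.qr(torch.randn(d, r))[0]          # r-dimensional input subspace
X = torch.randn(n, r) @ U.T                        # low-rank inputs
Y = X @ torch.randn(d, d) * 0.1                    # linear targets

Ws = [torch.nn.Parameter(0.5 * torch.linalg.qr(torch.randn(d, d))[0])
      for _ in range(depth)]                       # scaled orthogonal init
W0 = [W.detach().clone() for W in Ws]
opt = torch.optim.SGD(Ws, lr=0.05)

for _ in range(500):
    out = X
    for W in Ws:
        out = out @ W.T
    loss = ((out - Y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Only a handful of update singular values should be non-negligible per layer.
for i, (W, W_init) in enumerate(zip(Ws, W0)):
    s = torch.linalg.svdvals(W.detach() - W_init)
    print(f"layer {i}: top update singular values {s[:5]}")
```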
2306.01189
|
2023-06-01T22:59:45Z
|
A General Framework for Uncertainty Quantification via Neural SDE-RNN
|
[
"Shweta Dahale",
"Sai Munikoti",
"Balasubramaniam Natarajan"
] |
Uncertainty quantification is a critical yet unsolved challenge for deep
learning, especially for the time series imputation with irregularly sampled
measurements. To tackle this problem, we propose a novel framework based on the
principles of recurrent neural networks and neural stochastic differential
equations for reconciling irregularly sampled measurements. We impute
measurements at any arbitrary timescale and quantify the uncertainty in the
imputations in a principled manner. Specifically, we derive analytical
expressions for quantifying and propagating the epistemic and aleatoric
uncertainty across time instants. Our experiments on the IEEE 37 bus test
distribution system reveal that our framework can outperform state-of-the-art
uncertainty quantification approaches for time-series data imputations.
|
[
"cs.LG"
] | false |
2306.00295
|
2023-06-01T02:27:08Z
|
EMOTE: An Explainable architecture for Modelling the Other Through
Empathy
|
[
"Manisha Senadeera",
"Thommen Karimpanal George",
"Sunil Gupta",
"Stephan Jacobs",
"Santu Rana"
] |
We can usually assume others have goals analogous to our own. This assumption
can also, at times, be applied to multi-agent games - e.g. Agent 1's attraction
to green pellets is analogous to Agent 2's attraction to red pellets. This
"analogy" assumption is tied closely to the cognitive process known as empathy.
Inspired by empathy, we design a simple and explainable architecture to model
another agent's action-value function. This involves learning an "Imagination
Network" to transform the other agent's observed state in order to produce a
human-interpretable "empathetic state" which, when presented to the learning
agent, produces behaviours that mimic the other agent. Our approach is
applicable to multi-agent scenarios consisting of a single learning agent and
other (independent) agents acting according to fixed policies. This
architecture is particularly beneficial for (but not limited to) algorithms
using a composite value or reward function. We show our method produces better
performance in multi-agent games, where it robustly estimates the other's model
in different environment configurations. Additionally, we show that the
empathetic states are human interpretable, and thus verifiable.
|
[
"cs.AI",
"cs.LG"
] | false |
2306.00312
|
2023-06-01T03:22:15Z
|
(Almost) Provable Error Bounds Under Distribution Shift via Disagreement
Discrepancy
|
[
"Elan Rosenfeld",
"Saurabh Garg"
] |
We derive an (almost) guaranteed upper bound on the error of deep neural
networks under distribution shift using unlabeled test data. Prior methods
either give bounds that are vacuous in practice or give estimates that are
accurate on average but heavily underestimate error for a sizeable fraction of
shifts. In particular, the latter only give guarantees based on complex
continuous measures such as test calibration -- which cannot be identified
without labels -- and are therefore unreliable. Instead, our bound requires a
simple, intuitive condition which is well justified by prior empirical works
and holds in practice effectively 100% of the time. The bound is inspired by
$\mathcal{H}\Delta\mathcal{H}$-divergence but is easier to evaluate and
substantially tighter, consistently providing non-vacuous guarantees.
Estimating the bound requires optimizing one multiclass classifier to disagree
with another, for which some prior works have used sub-optimal proxy losses; we
devise a "disagreement loss" which is theoretically justified and performs
better in practice. We expect this loss can serve as a drop-in replacement for
future methods which require maximizing multiclass disagreement. Across a wide
range of benchmarks, our method gives valid error bounds while achieving
average accuracy comparable to competitive estimation baselines. Code is
publicly available at https://github.com/erosenfeld/disagree_discrep .
|
[
"stat.ML",
"cs.LG"
] | false |
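Estimating the bound described above requires training a critic to disagree with a fixed base classifier on unlabeled target data. The sketch below shows only the naive log-loss proxy that the paper argues is sub-optimal and replaces with its "disagreement loss"; the tensors are stand-ins for a trainable critic's logits and a frozen base model's predictions.

```python
import torch
import torch.nn.functional as F

def naive_disagreement_loss(critic_logits, base_preds):
    # Minimizing the critic's log-probability of the base model's predicted
    # class pushes the critic toward a *different* prediction on each input.
    log_probs = F.log_softmax(critic_logits, dim=-1)
    return log_probs.gather(1, base_preds.unsqueeze(1)).mean()

# Stand-ins: critic outputs and hard predictions of a frozen base classifier
# on a batch of unlabeled target inputs.
critic_logits = torch.randn(32, 10, requires_grad=True)
base_preds = torch.randint(0, 10, (32,))

loss = naive_disagreement_loss(critic_logits, base_preds)
loss.backward()
```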
2306.00315
|
2023-06-01T03:26:11Z
|
Explicit Feature Interaction-aware Uplift Network for Online Marketing
|
[
"Dugang Liu",
"Xing Tang",
"Han Gao",
"Fuyuan Lyu",
"Xiuqiang He"
] |
As a key component in online marketing, uplift modeling aims to accurately
capture the degree to which different treatments, such as coupons or discounts,
motivate different users, also known as estimation of the individual treatment
effect (ITE). In an actual business scenario, the options for treatment may be
numerous and complex, and there may be correlations between different
treatments. In addition, each marketing instance may also have rich user and
contextual features. However, existing methods still fall short in both fully
exploiting treatment information and mining features that are sensitive to a
particular treatment. In this paper, we propose an explicit feature
interaction-aware uplift network (EFIN) to address these two problems. Our EFIN
includes four customized modules: 1) a feature encoding module encodes not only
the user and contextual features, but also the treatment features; 2) a
self-interaction module aims to accurately model the user's natural response
with all but the treatment features; 3) a treatment-aware interaction module
accurately models the degree to which a particular treatment motivates a user
through interactions between the treatment features and other features, i.e.,
ITE; and 4) an intervention constraint module is used to balance the ITE
distribution of users between the control and treatment groups so that the
model would still achieve an accurate uplift ranking on data collected from a
non-random intervention marketing scenario. We conduct extensive experiments on
two public datasets and one product dataset to verify the effectiveness of our
EFIN. In addition, our EFIN has been deployed in a credit card bill payment
scenario of a large online financial platform with a significant improvement.
|
[
"cs.LG",
"cs.IR"
] | false |
2306.00317
|
2023-06-01T03:31:12Z
|
FlexRound: Learnable Rounding based on Element-wise Division for
Post-Training Quantization
|
[
"Jung Hyun Lee",
"Jeonghoon Kim",
"Se Jung Kwon",
"Dongsoo Lee"
] |
Post-training quantization (PTQ) has been gaining popularity for the
deployment of deep neural networks on resource-limited devices since unlike
quantization-aware training, neither a full training dataset nor end-to-end
training is required at all. As PTQ schemes based on reconstructing each layer
or block output turn out to be effective at enhancing quantized model
performance, recent works have developed algorithms to devise and learn a new
weight-rounding scheme so as to better reconstruct each layer or block output.
In this work, we propose a simple yet effective new weight-rounding mechanism
for PTQ, coined FlexRound, based on element-wise division instead of typical
element-wise addition such that FlexRound enables jointly learning a common
quantization grid size as well as a different scale for each pre-trained
weight. Thanks to the reciprocal rule of derivatives induced by element-wise
division, FlexRound is inherently able to exploit pre-trained weights when
updating their corresponding scales, and thus, flexibly quantize pre-trained
weights depending on their magnitudes. We empirically validate the efficacy of
FlexRound on a wide range of models and tasks. To the best of our knowledge,
our work is the first to carry out comprehensive experiments on not only image
classification and natural language understanding but also natural language
generation, assuming a per-tensor uniform PTQ setting. Moreover, we
demonstrate, for the first time, that large language models can be efficiently
quantized, with only a negligible impact on performance compared to
half-precision baselines, achieved by reconstructing the output in a
block-by-block manner.
|
[
"cs.LG",
"cs.AI"
] | false |
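The element-wise-division rounding at the heart of FlexRound (above) can be sketched as follows. The straight-through rounding trick and the single per-weight scale here are illustrative assumptions; the actual method learns its scales during block-wise output reconstruction.

```python
import torch

def flexround(w, grid, scale):
    """Round after element-wise division, as in FlexRound's core idea.

    w:     pre-trained weight tensor
    grid:  learnable scalar quantization grid size
    scale: learnable per-weight scale (same shape as w), initialized at 1
    """
    z = w / (grid * scale)                # element-wise division, not addition
    # Straight-through estimator: hard rounding forward, identity backward,
    # so both `grid` and `scale` receive gradients and can be learned.
    z = z + (torch.round(z) - z).detach()
    return grid * z

w = torch.randn(64, 64)
grid = torch.tensor(0.05, requires_grad=True)
scale = torch.ones_like(w, requires_grad=True)
w_q = flexround(w, grid, scale)
# Stand-in objective; the paper reconstructs layer/block *outputs* instead.
(w_q - w).pow(2).mean().backward()
```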
2306.00324
|
2023-06-01T03:43:53Z
|
Achieving Fairness in Multi-Agent Markov Decision Processes Using
Reinforcement Learning
|
[
"Peizhong Ju",
"Arnob Ghosh",
"Ness B. Shroff"
] |
Fairness plays a crucial role in various multi-agent systems (e.g.,
communication networks, financial markets, etc.). Many multi-agent dynamical
interactions can be cast as Markov Decision Processes (MDPs). While existing
research has focused on studying fairness in known environments, the
exploration of fairness in such systems for unknown environments remains open.
In this paper, we propose a Reinforcement Learning (RL) approach to achieve
fairness in multi-agent finite-horizon episodic MDPs. Instead of maximizing the
sum of individual agents' value functions, we introduce a fairness function
that ensures equitable rewards across agents. Since the classical Bellman
equation does not hold when the sum of individual value functions is not
maximized, we cannot use traditional approaches. Instead, in order to explore,
we maintain a confidence bound of the unknown environment and then propose an
online convex optimization based approach to obtain a policy constrained to
this confidence region. We show that such an approach achieves sub-linear
regret in terms of the number of episodes. Additionally, we provide a probably
approximately correct (PAC) guarantee based on the obtained regret bound. We
also propose an offline RL algorithm and bound the optimality gap with respect
to the optimal fair solution. To mitigate computational complexity, we
introduce a policy-gradient type method for the fair objective. Simulation
experiments also demonstrate the efficacy of our approach.
|
[
"cs.LG",
"cs.MA"
] | false |
2306.00338
|
2023-06-01T04:38:32Z
|
Last Switch Dependent Bandits with Monotone Payoff Functions
|
[
"Ayoub Foussoul",
"Vineet Goyal",
"Orestis Papadigenopoulos",
"Assaf Zeevi"
] |
In a recent work, Laforgue et al. introduce the model of last switch
dependent (LSD) bandits, in an attempt to capture nonstationary phenomena
induced by the interaction between the player and the environment. Examples
include satiation, where consecutive plays of the same action lead to decreased
performance, or deprivation, where the payoff of an action increases after an
interval of inactivity. In this work, we take a step towards understanding the
approximability of planning LSD bandits, namely, the (NP-hard) problem of
computing an optimal arm-pulling strategy under complete knowledge of the
model. In particular, we design the first efficient constant approximation
algorithm for the problem and show that, under a natural monotonicity
assumption on the payoffs, its approximation guarantee (almost) matches the
state-of-the-art for the special and well-studied class of recharging bandits
(also known as delay-dependent). In this attempt, we develop new tools and
insights for this class of problems, including a novel higher-dimensional
relaxation and the technique of mirroring the evolution of virtual states. We
believe that these novel elements could potentially be used for approaching
richer classes of action-induced nonstationary bandits (e.g., special instances
of restless bandits). In the case where the model parameters are initially
unknown, we develop an online learning adaptation of our algorithm for which we
provide sublinear regret guarantees against its full-information counterpart.
|
[
"cs.LG",
"cs.DS"
] | false |
2306.00344
|
2023-06-01T04:50:06Z
|
BOtied: Multi-objective Bayesian optimization with tied multivariate
ranks
|
[
"Ji Won Park",
"Nataša Tagasovska",
"Michael Maser",
"Stephen Ra",
"Kyunghyun Cho"
] |
Many scientific and industrial applications require joint optimization of
multiple, potentially competing objectives. Multi-objective Bayesian
optimization (MOBO) is a sample-efficient framework for identifying
Pareto-optimal solutions. We show a natural connection between non-dominated
solutions and the highest multivariate rank, which coincides with the outermost
level line of the joint cumulative distribution function (CDF). We propose the
CDF indicator, a Pareto-compliant metric for evaluating the quality of
approximate Pareto sets that complements the popular hypervolume indicator. At
the heart of MOBO is the acquisition function, which determines the next
candidate to evaluate by navigating the best compromises among the objectives.
Multi-objective acquisition functions that rely on box decomposition of the
objective space, such as the expected hypervolume improvement (EHVI) and
entropy search, scale poorly to a large number of objectives. We propose an
acquisition function, called BOtied, based on the CDF indicator. BOtied can be
implemented efficiently with copulas, a statistical tool for modeling complex,
high-dimensional distributions. We benchmark BOtied against common acquisition
functions, including EHVI and random scalarization (ParEGO), in a series of
synthetic and real-data experiments. BOtied performs on par with the baselines
across datasets and metrics while being computationally efficient.
|
[
"cs.LG",
"stat.ML"
] | false |
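The connection above between non-dominated solutions and the highest multivariate rank can be illustrated with a plain empirical CDF. Maximization of all objectives is assumed here; the paper estimates this quantity with copulas rather than the raw empirical CDF.

```python
import numpy as np

def empirical_cdf(Y):
    """Joint empirical CDF F_n(y_i) for each row of Y (n candidates, m objectives)."""
    # F_n(y_i) = fraction of candidates y_j with y_j <= y_i coordinate-wise.
    leq = (Y[None, :, :] <= Y[:, None, :]).all(axis=-1)
    return leq.sum(axis=1) / Y.shape[0]

rng = np.random.default_rng(0)
Y = rng.random((100, 3))                  # 100 candidates, 3 objectives to maximize
scores = empirical_cdf(Y)
# Candidates with the highest multivariate rank sit on the outermost level
# line of the joint CDF, which contains the non-dominated points.
best = Y[np.argmax(scores)]
```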
2306.00381
|
2023-06-01T06:25:58Z
|
Better Context Makes Better Code Language Models: A Case Study on
Function Call Argument Completion
|
[
"Hengzhi Pei",
"Jinman Zhao",
"Leonard Lausen",
"Sheng Zha",
"George Karypis"
] |
Pretrained code language models have enabled great progress towards program
synthesis. However, common approaches only consider in-file local context and
thus miss information and constraints imposed by other parts of the codebase
and its external dependencies. Existing code completion benchmarks also lack
such context. To resolve these restrictions, we curate a new dataset of
permissively licensed Python packages that includes full projects and their
dependencies and provide tools to extract non-local information with the help
of program analyzers. We then focus on the task of function call argument
completion, which requires predicting the arguments to function calls. We show
that existing code completion models do not yield good results on our
completion task. To better solve this task, we query a program analyzer for
information relevant to a given function call, and consider ways to provide the
analyzer results to different code completion models during inference and
training. Our experiments show that providing access to the function
implementation and function usages greatly improves the argument completion
performance. Our ablation study provides further insights on how different
types of information available from the program analyzer and different ways of
incorporating the information affect the model performance.
|
[
"cs.SE",
"cs.LG",
"I.2.2; I.2.7"
] | false |
2306.00382
|
2023-06-01T06:26:26Z
|
Calibrated Propensity Scores for Causal Effect Estimation
|
[
"Shachi Deshpande",
"Volodymyr Kuleshov"
] |
Propensity scores are commonly used to balance observed covariates while
estimating treatment effects. Estimates obtained through propensity score
weighting can be biased when the propensity score model cannot learn the true
treatment assignment mechanism. We argue that the probabilistic output of a
learned propensity score model should be calibrated, i.e. a predictive
treatment probability of 90% should correspond to 90% of individuals being
assigned to the treatment group. We propose simple recalibration techniques to
ensure this property. We investigate the theoretical properties of a calibrated
propensity score model and its role in unbiased treatment effect estimation. We
demonstrate improved causal effect estimation with calibrated propensity scores
in several tasks including high-dimensional genome-wide association studies,
where we also show reduced computational requirements when calibration is
applied to simpler propensity score models.
|
[
"stat.ME",
"cs.LG",
"I.2.m"
] | false |
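One simple recalibration recipe consistent with the abstract above is to fit an isotonic regression on held-out data and plug the calibrated propensities into an inverse-propensity-weighted (IPW) estimator. The synthetic data below is illustrative, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
true_ps = 1 / (1 + np.exp(-X[:, 0]))           # true assignment mechanism
t = rng.binomial(1, true_ps)                    # observed binary treatment
y = 2.0 * t + X[:, 0] + rng.normal(size=n)      # confounded outcome, true ATE = 2

X_tr, X_cal, t_tr, t_cal = train_test_split(X, t, test_size=0.3, random_state=0)

# Fit a (possibly misspecified) propensity model, then recalibrate its
# probabilities on held-out data with isotonic regression.
ps_model = LogisticRegression().fit(X_tr, t_tr)
raw_cal = ps_model.predict_proba(X_cal)[:, 1]
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_cal, t_cal)

raw = ps_model.predict_proba(X)[:, 1]
e_hat = np.clip(iso.predict(raw), 1e-3, 1 - 1e-3)   # calibrated propensities

# Inverse-propensity-weighted average treatment effect estimate.
ate = np.mean(t * y / e_hat) - np.mean((1 - t) * y / (1 - e_hat))
print(f"IPW ATE with calibrated propensities: {ate:.2f}")
```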
2306.00452
|
2023-06-01T08:51:18Z
|
Speech Self-Supervised Representation Benchmarking: Are We Doing it
Right?
|
[
"Salah Zaiem",
"Youcef Kemiche",
"Titouan Parcollet",
"Slim Essid",
"Mirco Ravanelli"
] |
Self-supervised learning (SSL) has recently allowed leveraging large datasets
of unlabeled speech signals to reach impressive performance on speech tasks
using only small amounts of annotated data. The high number of proposed
approaches has fostered the need for, and rise of, extended benchmarks that evaluate
their performance on a set of downstream tasks exploring various aspects of the
speech signal. However, and while the number of considered tasks has been
growing, most rely upon a single decoding architecture that maps the frozen SSL
representations to the downstream labels. This work investigates the robustness
of such benchmarking results to changes in the decoder architecture.
Interestingly, it appears that varying the architecture of the downstream
decoder leads to significant variations in the leaderboards of most tasks.
Concerningly, our study reveals that benchmarking using limited decoders may
cause a counterproductive increase in the sizes of the developed SSL models.
|
[
"eess.AS",
"cs.LG"
] | false |
2306.00481
|
2023-06-01T09:30:49Z
|
Automatic Data Augmentation for Domain Adapted Fine-Tuning of
Self-Supervised Speech Representations
|
[
"Salah Zaiem",
"Titouan Parcollet",
"Slim Essid"
] |
Self-Supervised Learning (SSL) has allowed leveraging large amounts of
unlabeled speech data to improve the performance of speech recognition models
even with small annotated datasets. Despite this, speech SSL representations
may fail while facing an acoustic mismatch between the pretraining and target
datasets. To address this issue, we propose a novel supervised domain
adaptation method, designed for cases exhibiting such a mismatch in acoustic
domains. It consists of applying properly calibrated data augmentations to a
large clean dataset, bringing it closer to the target domain, and using it as
part of an initial fine-tuning stage. Augmentations are automatically selected
through the minimization of a conditional-dependence estimator, based on the
target dataset. The approach is validated during an oracle experiment with
controlled distortions and on two amateur-collected low-resource domains,
reaching better performances compared to the baselines in both cases.
|
[
"eess.AS",
"cs.LG"
] | false |
2306.00520
|
2023-06-01T10:20:44Z
|
On Masked Pre-training and the Marginal Likelihood
|
[
"Pablo Moreno-Muñoz",
"Pol G. Recasens",
"Søren Hauberg"
] |
Masked pre-training removes random input dimensions and learns a model that
can predict the missing values. Empirical results indicate that this intuitive
form of self-supervised learning yields models that generalize very well to new
domains. A theoretical understanding is, however, lacking. This paper shows
that masked pre-training with a suitable cumulative scoring function
corresponds to maximizing the model's marginal likelihood, which is de facto
the Bayesian model selection measure of generalization. Beyond shedding light
on the success of masked pre-training, this insight also suggests that Bayesian
models can be trained with appropriately designed self-supervision.
Empirically, we confirm the developed theory and explore the main learning
principles of masked pre-training in large language models.
|
[
"stat.ML",
"cs.LG"
] | false |
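A single training step of masked pre-training, in its simplest form, looks as follows. The architecture and the squared-error score are illustrative assumptions; the paper's result concerns the cumulative scoring function that ties this objective to the log marginal likelihood.

```python
import torch
import torch.nn as nn

class MaskedPredictor(nn.Module):
    """Predicts masked input dimensions from the observed ones."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        # Input: masked values concatenated with the binary mask itself.
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x, mask):
        return self.net(torch.cat([x * mask, mask], dim=-1))

dim = 8
model = MaskedPredictor(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, dim)                       # a batch of observations
mask = (torch.rand(256, dim) > 0.5).float()     # 1 = observed, 0 = masked
pred = model(x, mask)
# Score only the masked-out dimensions: predicting held-out values from
# observed ones, averaged over random masks, is the quantity the paper
# connects to the model's marginal likelihood.
loss = (((pred - x) ** 2) * (1 - mask)).sum() / (1 - mask).sum()
opt.zero_grad()
loss.backward()
opt.step()
```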
2306.00522
|
2023-06-01T10:23:28Z
|
A New PHO-rmula for Improved Performance of Semi-Structured Networks
|
[
"David Rügamer"
] |
Recent advances to combine structured regression models and deep neural
networks for better interpretability, more expressiveness, and statistically
valid uncertainty quantification demonstrate the versatility of semi-structured
neural networks (SSNs). We show that techniques to properly identify the
contributions of the different model components in SSNs, however, lead to
suboptimal network estimation, slower convergence, and degenerate or erroneous
predictions. In order to solve these problems while preserving favorable model
properties, we propose a non-invasive post-hoc orthogonalization (PHO) that
guarantees identifiability of model components and provides better estimation
and prediction quality. Our theoretical findings are supported by numerical
experiments, a benchmark comparison as well as a real-world application to
COVID-19 infections.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.00541
|
2023-06-01T10:51:12Z
|
Decomposing Global Feature Effects Based on Feature Interactions
|
[
"Julia Herbinger",
"Bernd Bischl",
"Giuseppe Casalicchio"
] |
Global feature effect methods, such as partial dependence plots, provide an
intelligible visualization of the expected marginal feature effect. However,
such global feature effect methods can be misleading, as they do not represent
local feature effects of single observations well when feature interactions are
present. We formally introduce generalized additive decomposition of global
effects (GADGET), which is a new framework based on recursive partitioning to
find interpretable regions in the feature space such that the
interaction-related heterogeneity of local feature effects is minimized. We
provide a mathematical foundation of the framework and show that it is
applicable to the most popular methods to visualize marginal feature effects,
namely partial dependence, accumulated local effects, and Shapley additive
explanations (SHAP) dependence. Furthermore, we introduce a new
permutation-based interaction test to detect significant feature interactions
that is applicable to any feature effect method that fits into our proposed
framework. We empirically evaluate the theoretical characteristics of the
proposed methods based on various feature effect methods in different
experimental settings. Moreover, we apply our introduced methodology to two
real-world examples to showcase their usefulness.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.00554
|
2023-06-01T11:16:58Z
|
ShaRP: Shape-Regularized Multidimensional Projections
|
[
"Alister Machado",
"Alexandru Telea",
"Michael Behrisch"
] |
Projections, or dimensionality reduction methods, are techniques of choice
for the visual exploration of high-dimensional data. Many such techniques
exist, each one of them having a distinct visual signature - i.e., a
recognizable way to arrange points in the resulting scatterplot. Such
signatures are implicit consequences of algorithm design, such as whether the
method focuses on local vs global data pattern preservation; optimization
techniques; and hyperparameter settings. We present a novel projection
technique - ShaRP - that provides users explicit control over the visual
signature of the created scatterplot, which can cater better to interactive
visualization scenarios. ShaRP scales well with dimensionality and dataset
size, generically handles any quantitative dataset, and provides this extended
functionality of controlling projection shapes at a small, user-controllable
cost in terms of quality metrics.
|
[
"cs.HC",
"cs.LG"
] | false |
2306.00582
|
2023-06-01T11:52:58Z
|
Anomaly Detection with Variance Stabilized Density Estimation
|
[
"Amit Rozner",
"Barak Battash",
"Henry Li",
"Lior Wolf",
"Ofir Lindenbaum"
] |
Density estimation based anomaly detection schemes typically model anomalies
as examples that reside in low-density regions. We propose a modified density
estimation problem and demonstrate its effectiveness for anomaly detection.
Specifically, we assume the density function of normal samples is uniform in
some compact domain. This assumption implies the density function is more
stable (with lower variance) around normal samples than around anomalies. We first
corroborate this assumption empirically using a wide range of real-world data.
Then, we design a variance stabilized density estimation problem for maximizing
the likelihood of the observed samples while minimizing the variance of the
density around normal samples. We introduce an ensemble of autoregressive
models to learn the variance stabilized distribution. Finally, we perform an
extensive benchmark with 52 datasets demonstrating that our method leads to
state-of-the-art results while alleviating the need for data-specific
hyperparameter tuning.
|
[
"cs.LG",
"cs.AI"
] | false |
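The variance-stabilized objective above can be written down directly. The sketch below uses a toy diagonal Gaussian in place of the paper's autoregressive ensemble; the perturbation scale and penalty weight are assumed hyperparameters.

```python
import math
import torch

# Toy density model: diagonal Gaussian with learnable mean and log-std.
mu = torch.zeros(2, requires_grad=True)
log_std = torch.zeros(2, requires_grad=True)

def log_prob(x):
    # Diagonal-Gaussian log density, summed over dimensions.
    return (-0.5 * ((x - mu) / log_std.exp()) ** 2
            - log_std - 0.5 * math.log(2 * math.pi)).sum(-1)

opt = torch.optim.Adam([mu, log_std], lr=1e-2)
x = torch.randn(512, 2)                       # "normal" training samples
for _ in range(200):
    nll = -log_prob(x).mean()                 # maximize likelihood ...
    noisy = x.unsqueeze(0) + 0.05 * torch.randn(8, *x.shape)
    penalty = log_prob(noisy).var(dim=0).mean()   # ... while stabilizing the
    loss = nll + 1.0 * penalty                    # log density around samples
    opt.zero_grad()
    loss.backward()
    opt.step()
```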
2306.00586
|
2023-06-01T11:57:47Z
|
Evaluating the "Learning on Graphs" Conference Experience
|
[
"Bastian Rieck",
"Corinna Coupette"
] |
With machine learning conferences growing ever larger, and reviewing
processes becoming increasingly elaborate, more data-driven insights into their
workings are required. In this report, we present the results of a survey
accompanying the first "Learning on Graphs" (LoG) Conference. The survey was
directed to evaluate the submission and review process from different
perspectives, including authors, reviewers, and area chairs alike.
|
[
"cs.LG",
"cs.CY"
] | false |
2306.00616
|
2023-06-01T12:41:05Z
|
Progressive Learning for Physics-informed Neural Motion Planning
|
[
"Ruiqi Ni",
"Ahmed H. Qureshi"
] |
Motion planning (MP) is one of the core robotics problems requiring fast
methods for finding a collision-free robot motion path connecting the given
start and goal states. Neural motion planners (NMPs) demonstrate fast
computational speed in finding path solutions but require a huge amount of
expert trajectories for learning, thus adding a significant training
computational load. In contrast, recent advancements have also led to a
physics-informed NMP approach that directly solves the Eikonal equation for
motion planning and does not require expert demonstrations for learning.
However, experiments show that the physics-informed NMP approach performs
poorly in complex environments and lacks scalability in multiple scenarios and
high-dimensional real robot settings. To overcome these limitations, this paper
presents a novel and tractable Eikonal equation formulation and introduces a
new progressive learning strategy to train neural networks without expert data
in complex, cluttered, multiple high-dimensional robot motion planning
scenarios. The results demonstrate that our method outperforms state-of-the-art
traditional MP, data-driven NMP, and physics-informed NMP methods by a
significant margin in terms of computational planning speed, path quality, and
success rates. We also show that our approach scales to multiple complex,
cluttered scenarios and the real robot set up in a narrow passage environment.
The proposed method's videos and code implementations are available at
https://github.com/ruiqini/P-NTFields.
|
[
"cs.RO",
"cs.LG"
] | false |
2306.00656
|
2023-06-01T13:24:56Z
|
Normalization Enhances Generalization in Visual Reinforcement Learning
|
[
"Lu Li",
"Jiafei Lyu",
"Guozheng Ma",
"Zilin Wang",
"Zhenjie Yang",
"Xiu Li",
"Zhiheng Li"
] |
Recent advances in visual reinforcement learning (RL) have led to impressive
success in handling complex tasks. However, these methods have demonstrated
limited generalization capability to visual disturbances, which poses a
significant challenge for their real-world application and adaptability. Though
normalization techniques have demonstrated huge success in supervised and
unsupervised learning, their applications in visual RL are still scarce. In
this paper, we explore the potential benefits of integrating normalization into
visual RL methods with respect to generalization performance. We find that,
perhaps surprisingly, incorporating suitable normalization techniques is
sufficient to enhance the generalization capabilities, without any additional
special design. We utilize the combination of two normalization techniques,
CrossNorm and SelfNorm, for generalizable visual RL. Extensive experiments are
conducted on DMControl Generalization Benchmark and CARLA to validate the
effectiveness of our method. We show that our method significantly improves
generalization capability while only marginally affecting sample efficiency. In
particular, when integrated with DrQ-v2, our method enhances the test
performance of DrQ-v2 on CARLA across various scenarios, from 14% of the
training performance to 97%.
|
[
"cs.LG",
"cs.AI"
] | false |