arxiv_id (string, 10 chars) | published (string, 20 chars) | titles (string, 9-243 chars) | authors (list, 1-389 items) | abstract (string, 96-3.09k chars) | categories (list, 1-10 items) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2306.02475
|
2023-06-04T20:47:07Z
|
Modeling Cross-Cultural Pragmatic Inference with Codenames Duet
|
[
"Omar Shaikh",
"Caleb Ziems",
"William Held",
"Aryan J. Pariani",
"Fred Morstatter",
"Diyi Yang"
] |
Pragmatic reference enables efficient interpersonal communication. Prior work
uses simple reference games to test models of pragmatic reasoning, often with
unidentified speakers and listeners. In practice, however, speakers'
sociocultural background shapes their pragmatic assumptions. For example,
readers of this paper assume NLP refers to "Natural Language Processing," and
not "Neuro-linguistic Programming." This work introduces the Cultural Codes
dataset, which operationalizes sociocultural pragmatic inference in a simple
word reference game.
Cultural Codes is based on the multi-turn collaborative two-player game,
Codenames Duet. Our dataset consists of 794 games with 7,703 turns, distributed
across 153 unique players. Alongside gameplay, we collect information about
players' personalities, values, and demographics. Utilizing theories of
communication and pragmatics, we predict each player's actions via joint
modeling of their sociocultural priors and the game context. Our experiments
show that accounting for background characteristics significantly improves
model performance for tasks related to both clue giving and guessing,
indicating that sociocultural priors play a vital role in gameplay decisions.
|
[
"cs.CL"
] | false |
2306.02492
|
2023-06-04T21:53:04Z
|
RadLing: Towards Efficient Radiology Report Understanding
|
[
"Rikhiya Ghosh",
"Sanjeev Kumar Karn",
"Manuela Daniela Danu",
"Larisa Micu",
"Ramya Vunikili",
"Oladimeji Farri"
] |
Most natural language tasks in the radiology domain use language models
pre-trained on biomedical corpora. There are few pretrained language models
trained specifically for radiology, and fewer still that have been trained in a
low-data setting and gone on to produce comparable results in fine-tuning
tasks. We present RadLing, a continuously pretrained language model using the
Electra-small (Clark et al., 2020) architecture, trained on over 500K
radiology reports, that can compete with state-of-the-art results on
fine-tuning tasks in the radiology domain. Our main contribution in this paper
is knowledge-aware masking, a taxonomic knowledge-assisted pretraining
task that dynamically masks tokens to inject knowledge during pretraining. In
addition, we introduce a knowledge base-aided vocabulary extension to
adapt the general tokenization vocabulary to the radiology domain.
|
[
"cs.CL"
] | false |
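The knowledge-aware masking described in the abstract above can be pictured with a small sketch: instead of masking tokens uniformly at random, tokens that match a domain taxonomy are masked with higher probability. The toy taxonomy, probabilities, and whitespace tokenizer below are illustrative assumptions, not RadLing's actual implementation.

```python
import random

# Toy radiology taxonomy (illustrative stand-in for a real ontology).
TAXONOMY = {"pneumothorax", "effusion", "consolidation", "cardiomegaly"}

def knowledge_aware_mask(tokens, mask_token="[MASK]", p_domain=0.5, p_other=0.1):
    """Mask taxonomy terms with higher probability than ordinary tokens."""
    masked, labels = [], []
    for tok in tokens:
        p = p_domain if tok.lower() in TAXONOMY else p_other
        if random.random() < p:
            masked.append(mask_token)
            labels.append(tok)      # the model must recover this token
        else:
            masked.append(tok)
            labels.append(None)     # excluded from the MLM loss
    return masked, labels

report = "mild cardiomegaly with small left pleural effusion".split()
print(knowledge_aware_mask(report))
```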
2306.02242
|
2023-06-04T03:05:25Z
|
Extract and Attend: Improving Entity Translation in Neural Machine
Translation
|
[
"Zixin Zeng",
"Rui Wang",
"Yichong Leng",
"Junliang Guo",
"Xu Tan",
"Tao Qin",
"Tie-yan Liu"
] |
While Neural Machine Translation (NMT) has achieved great progress in recent
years, it still suffers from inaccurate translation of entities (e.g.,
person/organization name, location), due to the lack of entity training
instances. When we humans encounter an unknown entity during translation, we
usually first look up in a dictionary and then organize the entity translation
together with the translations of other parts to form a smooth target sentence.
Inspired by this translation process, we propose an Extract-and-Attend approach
to enhance entity translation in NMT, where the translation candidates of
source entities are first extracted from a dictionary and then attended to by
the NMT model to generate the target sentence. Specifically, the translation
candidates are extracted by first detecting the entities in a source sentence
and then translating them by looking them up in a dictionary. Then, the
extracted candidates are added as a prefix of the decoder input to be attended
to by the decoder when generating the target sentence through self-attention.
Experiments conducted on En-Zh and En-Ru demonstrate that the proposed method
is effective in improving both the translation accuracy of entities and the
overall translation quality, with up to a 35% reduction in entity error rate, a
0.85 gain in BLEU, and a 13.8 gain in COMET.
|
[
"cs.CL",
"cs.AI"
] | false |
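A minimal sketch of the extract-then-attend pipeline described in the abstract above: detected source entities are looked up in a bilingual dictionary, and their translations are prepended to the decoder input as a prefix that the decoder can attend to. The hard-coded dictionary and substring "detector" below are hypothetical stand-ins for a real NER model and lexicon.

```python
# Hypothetical entity dictionary: a real system would use an NER model and a
# bilingual lexicon instead of these hard-coded stand-ins.
ENTITY_DICT = {"New York": "纽约", "Alice": "爱丽丝"}

def extract_candidates(source_sentence):
    """'Detect' entities by dictionary lookup and return their translations."""
    return [tgt for src, tgt in ENTITY_DICT.items() if src in source_sentence]

def build_decoder_input(source_sentence, sep="<sep>"):
    # The extracted translations are added as a prefix of the decoder input,
    # so the decoder can attend to them via self-attention during generation.
    prefix = " ".join(extract_candidates(source_sentence))
    return f"{prefix} {sep}" if prefix else sep

print(build_decoder_input("Alice moved to New York last year ."))
```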
2306.02247
|
2023-06-04T03:26:43Z
|
Sen2Pro: A Probabilistic Perspective to Sentence Embedding from
Pre-trained Language Model
|
[
"Lingfeng Shen",
"Haiyun Jiang",
"Lemao Liu",
"Shuming Shi"
] |
Sentence embedding is one of the most fundamental tasks in Natural Language
Processing and plays an important role in various tasks. The recent
breakthrough in sentence embedding is achieved by pre-trained language models
(PLMs). Despite this success, an embedded vector (Sen2Vec) representing a point
estimate does not naturally express uncertainty in a task-agnostic way. This
paper thereby proposes an efficient framework on probabilistic sentence
embedding (Sen2Pro) from PLMs, and it represents a sentence as a probability
density distribution in an embedding space to reflect both model uncertainty
and data uncertainty (i.e., many-to-one nature) in the sentence representation.
The proposed framework performs in a plug-and-play way without any retraining
of PLMs, and it is easy to implement and generally applicable on top of any PLM.
The superiority of Sen2Pro over Sen2Vec has been theoretically verified and
practically illustrated on different NLP tasks.
|
[
"cs.CL",
"cs.AI"
] | false |
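One way to picture the probabilistic-embedding idea from the abstract above: sample several noisy embeddings of the same sentence (e.g., via dropout noise for model uncertainty, or small input perturbations for data uncertainty) and summarize them as a Gaussian. This numpy sketch only illustrates the mean/variance bookkeeping; Sen2Pro's actual estimators differ, and the Gaussian-noise "forward pass" below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_with_noise(sentence_vec, n_samples=32, noise=0.05):
    """Stand-in for stochastic forward passes (e.g., MC dropout) of a PLM."""
    return sentence_vec + noise * rng.standard_normal((n_samples, sentence_vec.size))

base = rng.standard_normal(8)          # pretend PLM sentence embedding
samples = embed_with_noise(base)
mu, var = samples.mean(axis=0), samples.var(axis=0)
# The sentence is now a diagonal Gaussian N(mu, diag(var)) rather than a point.
print(mu.shape, float(var.mean()))
```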
2306.02282
|
2023-06-04T07:01:30Z
|
Exploring and Verbalizing Academic Ideas by Concept Co-occurrence
|
[
"Yi Xu",
"Shuqian Sheng",
"Bo Xue",
"Luoyi Fu",
"Xinbing Wang",
"Chenghu Zhou"
] |
Researchers usually come up with new ideas only after thoroughly
comprehending vast quantities of literature. The difficulty of this procedure
is exacerbated by the fact that the number of academic publications is growing
exponentially. In this study, we devise a framework based on concept
co-occurrence for academic idea inspiration, which has been integrated into a
research assistant system. From our perspective, the fusion of two concepts
that co-occur in an academic paper can be regarded as an important way in which
new ideas emerge. We construct evolving concept graphs according to the
co-occurrence relationships of concepts from 20 disciplines or topics. Then we
design a temporal link prediction method based on a masked language model to
explore potential connections between different concepts. To verbalize the
newly discovered connections, we also utilize the pretrained language model to
generate a description of an idea based on a new data structure called
co-occurrence citation quintuple. We evaluate our proposed system using both
automatic metrics and human assessment. The results demonstrate that our system
has broad prospects and can assist researchers in expediting the process of
discovering new ideas.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.02295
|
2023-06-04T08:12:34Z
|
A Mathematical Abstraction for Balancing the Trade-off Between
Creativity and Reality in Large Language Models
|
[
"Ritwik Sinha",
"Zhao Song",
"Tianyi Zhou"
] |
Large Language Models have become popular for their remarkable capabilities
in human-oriented tasks and traditional natural language processing tasks. Their
efficient functioning is attributed to the attention mechanism in the
Transformer architecture, enabling them to concentrate on particular aspects of
the input.
LLMs are increasingly being used in domains such as generating prose, poetry
or art, which require the model to be creative (e.g., Adobe Firefly). LLMs
possess advanced language generation abilities that enable them to generate
distinctive and captivating content. This utilization of LLMs in generating
narratives shows their flexibility and potential for use in domains that extend
beyond conventional natural language processing tasks.
In different contexts, we may expect the LLM to generate factually correct
answers, that match reality; e.g., question-answering systems or online
assistants. In such situations, being correct is critical to LLMs being trusted
in practice. The Bing Chatbot provides its users with the flexibility to select
one of three output modes: creative, balanced, and precise. Each mode
emphasizes creativity and factual accuracy differently.
In this work, we provide a mathematical abstraction to describe creativity
and reality based on certain losses. A model trained on these losses balances
the trade-off between the creativity and reality of the model.
|
[
"cs.CL",
"cs.LG"
] | false |
2306.02379
|
2023-06-04T15:26:28Z
|
Modular Transformers: Compressing Transformers into Modularized Layers
for Flexible Efficient Inference
|
[
"Wangchunshu Zhou",
"Ronan Le Bras",
"Yejin Choi"
] |
Pre-trained Transformer models like T5 and BART have advanced the state of
the art on a wide range of text generation tasks. Compressing these models into
smaller ones has become critically important for practical use. Common neural
network compression techniques such as knowledge distillation or quantization
are limited to static compression where the compression ratio is fixed. In this
paper, we introduce Modular Transformers, a modularized encoder-decoder
framework for flexible sequence-to-sequence model compression. Modular
Transformers train modularized layers that have the same function as two or
more consecutive layers in the original model via module replacing and
knowledge distillation. After training, the modularized layers can be flexibly
assembled into sequence-to-sequence models that meet different
performance-efficiency trade-offs. Experimental results show that after a
single training phase, by simply varying the assembling strategy, Modular
Transformers can achieve flexible compression ratios from 1.1x to 6x with
little to moderate relative performance drop.
|
[
"cs.CL",
"cs.LG"
] | false |
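The module-replacing idea in the abstract above can be loosely sketched in a few lines: a single "modularized" layer is trained to match the output of two consecutive teacher layers, after which the shorter stack can be assembled. The toy MLP setup, loss, and training loop below are illustrative assumptions, not the paper's encoder-decoder procedure.

```python
import torch
import torch.nn as nn

d = 16
# Two consecutive "teacher" layers and a single module meant to replace them.
teacher = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d), nn.ReLU())
module = nn.Sequential(nn.Linear(d, d), nn.ReLU())

opt = torch.optim.Adam(module.parameters(), lr=1e-3)
for _ in range(200):
    x = torch.randn(64, d)
    with torch.no_grad():
        target = teacher(x)               # output of the two consecutive layers
    loss = nn.functional.mse_loss(module(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"distillation loss: {loss.item():.4f}")
```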
2306.02428
|
2023-06-04T18:21:44Z
|
Taught by the Internet, Exploring Bias in OpenAIs GPT3
|
[
"Ali Ayaz",
"Aditya Nawalgaria",
"Ruilian Yin"
] |
This research delves into the current literature on bias in Natural Language
Processing Models and the techniques proposed to mitigate the problem of bias,
including why it is important to tackle bias in the first place. Additionally,
these techniques are further analysed in the light of newly developed models
that tower in size over past editions. To achieve those aims, the authors of
this paper conducted their research on GPT3 by OpenAI, the largest NLP model
available to consumers today. With 175 billion parameters, in contrast to BERT's
340 million, GPT3 is the perfect model to test the common pitfalls of NLP
models. Tests were conducted through the development of an Applicant Tracking
System using GPT3. For the sake of feasibility and time constraints, the tests
primarily focused on gender bias, rather than all or multiple types of bias.
Finally, current mitigation techniques are considered and tested to measure
their degree of functionality.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.02457
|
2023-06-04T20:18:40Z
|
Adaptive and Personalized Exercise Generation for Online Language
Learning
|
[
"Peng Cui",
"Mrinmaya Sachan"
] |
Adaptive learning aims to provide customized educational activities (e.g.,
exercises) to address individual learning needs. However, manual construction
and delivery of such activities is a laborious process. Thus, in this paper, we
study a novel task of adaptive and personalized exercise generation for online
language learning. To this end, we combine a knowledge tracing model that
estimates each student's evolving knowledge states from their learning history
and a controlled text generation model that generates exercise sentences based
on the student's current estimated knowledge state and instructor requirements
of desired properties (e.g., domain knowledge and difficulty). We train and
evaluate our model on real-world learner interaction data from Duolingo and
demonstrate that LMs guided by student states can generate superior exercises.
Then, we discuss the potential use of our model in educational applications
using various simulations. These simulations show that our model can adapt to
students' individual abilities and can facilitate their learning efficiency by
personalizing learning sequences.
|
[
"cs.CL",
"cs.AI"
] | false |
2306.02273
|
2023-06-04T06:38:15Z
|
End-to-End Joint Target and Non-Target Speakers ASR
|
[
"Ryo Masumura",
"Naoki Makishima",
"Taiga Yamane",
"Yoshihiko Yamazaki",
"Saki Mizuno",
"Mana Ihori",
"Mihiro Uchida",
"Keita Suzuki",
"Hiroshi Sato",
"Tomohiro Tanaka",
"Akihiko Takashima",
"Satoshi Suzuki",
"Takafumi Moriya",
"Nobukatsu Hojo",
"Atsushi Ando"
] |
This paper proposes a novel automatic speech recognition (ASR) system that
can transcribe individual speakers' speech while identifying whether they are
target or non-target speakers from multi-talker overlapped speech.
Target-speaker ASR systems are a promising way to only transcribe a target
speaker's speech by enrolling the target speaker's information. However, in
conversational ASR applications, transcribing both the target speaker's speech
and that of non-target speakers is often required to understand interactive
information. To naturally consider both target and non-target speakers in a
single ASR model, our idea is to extend autoregressive modeling-based
multi-talker ASR systems to utilize the enrollment speech of the target
speaker. Our proposed ASR is performed by recursively generating both textual
tokens and tokens that represent target or non-target speakers. Our experiments
demonstrate the effectiveness of our proposed method.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.02294
|
2023-06-04T08:09:26Z
|
Exposing Bias in Online Communities through Large-Scale Language Models
|
[
"Celine Wald",
"Lukas Pfahler"
] |
Progress in natural language generation research has been shaped by the
ever-growing size of language models. While large language models pre-trained
on web data can generate human-sounding text, they also reproduce social biases
and contribute to the propagation of harmful stereotypes. This work utilises
this very flaw, the absorption of bias by language models, to explore the biases
of six different online communities. To gain insight into the communities'
viewpoints, we fine-tune GPT-Neo 1.3B with six social media datasets. The bias
of the resulting models is evaluated by prompting the models with different
demographics and comparing the sentiment and toxicity values of these
generations. Together, these methods reveal that bias differs in type and
intensity for the various models. This work not only affirms how easily bias is
absorbed from training data but also presents a scalable method to identify and
compare the bias of different datasets or communities. Additionally, the
examples generated for this work demonstrate the limitations of using automated
sentiment and toxicity classifiers in bias research.
|
[
"cs.CL",
"cs.CY",
"cs.LG"
] | false |
2306.02307
|
2023-06-04T09:16:39Z
|
Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference
in Low Resource Settings
|
[
"Daniel Rotem",
"Michael Hassid",
"Jonathan Mamou",
"Roy Schwartz"
] |
Adaptive inference is a simple method for reducing inference costs. The
method works by maintaining multiple classifiers of different capacities, and
allocating resources to each test instance according to its difficulty. In this
work, we compare the two main approaches for adaptive inference, Early-Exit and
Multi-Model, when training data is limited. First, we observe that for models
with the same architecture and size, individual Multi-Model classifiers
outperform their Early-Exit counterparts by an average of 2.3%. We show that
this gap is caused by Early-Exit classifiers sharing model parameters during
training, resulting in conflicting gradient updates of model weights. We find
that despite this gap, Early-Exit still provides a better speed-accuracy
trade-off due to the overhead of the Multi-Model approach. To address these
issues, we propose SWEET (Separating Weights in Early Exit Transformers), an
Early-Exit fine-tuning method that assigns each classifier its own set of
unique model weights, not updated by other classifiers. We compare SWEET's
speed-accuracy curve to standard Early-Exit and Multi-Model baselines and find
that it outperforms both methods at fast speeds while maintaining comparable
scores to Early-Exit at slow speeds. Moreover, SWEET's individual classifiers
outperform Early-Exit ones by 1.1% on average. SWEET enjoys the benefits of
both methods, paving the way for further reduction of inference costs in NLP.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
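The adaptive-inference setting compared in the abstract above can be made concrete with a small early-exit loop: a classifier head sits after each layer, and inference stops as soon as one head is confident enough. The toy model and threshold are assumptions for illustration; SWEET's contribution, separate weights per classifier, concerns how such heads are fine-tuned, not this inference loop.

```python
import torch
import torch.nn as nn

d, n_classes, n_layers = 16, 3, 4
layers = nn.ModuleList([nn.Linear(d, d) for _ in range(n_layers)])
heads = nn.ModuleList([nn.Linear(d, n_classes) for _ in range(n_layers)])

def early_exit(x, threshold=0.9):
    """Run layer by layer; return (prediction, exit layer index)."""
    h = x
    for i, (layer, head) in enumerate(zip(layers, heads)):
        h = torch.relu(layer(h))
        probs = head(h).softmax(-1)
        conf, pred = probs.max(-1)
        if conf.item() >= threshold:       # easy input: stop early
            return pred.item(), i
    return pred.item(), n_layers - 1       # hard input: use all layers

print(early_exit(torch.randn(1, d)))
```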
2306.02317
|
2023-06-04T10:00:12Z
|
SpellMapper: A non-autoregressive neural spellchecker for ASR
customization with candidate retrieval based on n-gram mappings
|
[
"Alexandra Antonova",
"Evelina Bakhturina",
"Boris Ginsburg"
] |
Contextual spelling correction models are an alternative to shallow fusion to
improve automatic speech recognition (ASR) quality given user vocabulary. To
deal with large user vocabularies, most of these models include candidate
retrieval mechanisms, usually based on the minimum edit distance between
fragments of the ASR hypothesis and user phrases. However, the edit-distance approach is
slow, non-trainable, and may have low recall as it relies only on common
letters. We propose: 1) a novel algorithm for candidate retrieval, based on
misspelled n-gram mappings, which gives up to 90% recall with just the top 10
candidates on Spoken Wikipedia; 2) a non-autoregressive neural model based on
BERT architecture, where the initial transcript and ten candidates are combined
into one input. The experiments on Spoken Wikipedia show 21.4% word error rate
improvement compared to a baseline ASR system.
|
[
"cs.CL",
"cs.AI",
"cs.SD",
"eess.AS"
] | false |
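The n-gram-based retrieval described in the abstract above can be approximated with a toy index: each user phrase is indexed by its character n-grams, and an ASR hypothesis retrieves the phrases sharing the most n-grams. SpellMapper actually learns mappings between correct and misspelled n-grams; the exact-match index here is a simplification for illustration.

```python
from collections import Counter, defaultdict

def char_ngrams(s, n=3):
    s = f" {s} "
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def build_index(phrases, n=3):
    index = defaultdict(set)
    for p in phrases:
        for g in char_ngrams(p, n):
            index[g].add(p)
    return index

def retrieve(hypothesis, index, n=3, top_k=3):
    votes = Counter()                 # count shared n-grams per phrase
    for g in char_ngrams(hypothesis, n):
        for p in index[g]:
            votes[p] += 1
    return votes.most_common(top_k)

vocab = ["tchaikovsky", "shostakovich", "rachmaninoff"]
print(retrieve("chaikofsky", build_index(vocab)))
```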
2306.02388
|
2023-06-04T15:44:51Z
|
Commonsense Knowledge Transfer for Pre-trained Language Models
|
[
"Wangchunshu Zhou",
"Ronan Le Bras",
"Yejin Choi"
] |
Despite serving as the foundation models for a wide range of NLP benchmarks,
pre-trained language models have shown limited capabilities of acquiring
implicit commonsense knowledge from self-supervision alone, compared to
learning linguistic and factual knowledge that appears more explicitly in the
surface patterns of text. In this work, we introduce commonsense knowledge
transfer, a framework to transfer the commonsense knowledge stored in a neural
commonsense knowledge model to a general-purpose pre-trained language model. It
first exploits general texts to form queries for extracting commonsense
knowledge from the neural commonsense knowledge model and then refines the
language model with two self-supervised objectives: commonsense mask infilling
and commonsense relation prediction, which align human language with the
underlying commonsense knowledge. Empirical results show that our approach
consistently improves the model's performance on downstream tasks that require
commonsense reasoning. Moreover, we find that the improvement is more
significant in the few-shot setting. This suggests that our approach helps
language models better transfer to downstream tasks without extensive
supervision by injecting commonsense knowledge into their parameters.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.02326
|
2023-06-04T10:50:52Z
|
Cross-LKTCN: Modern Convolution Utilizing Cross-Variable Dependency for
Multivariate Time Series Forecasting
|
[
"Donghao Luo",
"Xue Wang"
] |
The past few years have witnessed the rapid development in multivariate time
series forecasting. The key to accurate forecasting results is capturing the
long-term dependency between each time step (cross-time dependency) and
modeling the complex dependency between each variable (cross-variable
dependency) in multivariate time series. However, recent methods mainly focus
on the cross-time dependency but seldom consider the cross-variable dependency.
To fill this gap, we find that convolution, a traditional technique that has
recently been losing steam in time series forecasting, is well suited to
capturing both the cross-time and the cross-variable dependency. Based on
this finding, we propose a modern pure convolution structure, namely
Cross-LKTCN, to better utilize both cross-time and cross-variable dependency
for time series forecasting. Specifically in each Cross-LKTCN block, a
depth-wise large kernel convolution with large receptive field is proposed to
capture cross-time dependency, and then two successive point-wise group
convolution feed forward networks are proposed to capture cross-variable
dependency. Experimental results on real-world benchmarks show that Cross-LKTCN
achieves state-of-the-art forecasting performance and improves the forecasting
accuracy significantly compared with existing convolutional-based models and
cross-variable methods.
|
[
"cs.LG"
] | false |
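The block structure described in the abstract above (a depth-wise large-kernel convolution for cross-time dependency, followed by point-wise group convolutions for cross-variable dependency) can be sketched in PyTorch. Channel counts, kernel size, groups, and the residual connection below are placeholder choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CrossLKTCNBlockSketch(nn.Module):
    def __init__(self, n_vars=7, kernel_size=51, groups=1, hidden=4):
        super().__init__()
        # Depth-wise large-kernel conv: each variable is convolved over time
        # independently, capturing cross-time dependency.
        self.dw = nn.Conv1d(n_vars, n_vars, kernel_size,
                            padding=kernel_size // 2, groups=n_vars)
        # Two successive point-wise (kernel 1) group convs mix channels,
        # capturing cross-variable dependency.
        self.pw = nn.Sequential(
            nn.Conv1d(n_vars, hidden * n_vars, 1, groups=groups), nn.GELU(),
            nn.Conv1d(hidden * n_vars, n_vars, 1, groups=groups))

    def forward(self, x):                  # x: (batch, n_vars, time)
        return x + self.pw(self.dw(x))     # residual connection (assumed)

x = torch.randn(2, 7, 96)
print(CrossLKTCNBlockSketch()(x).shape)
```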
2306.02419
|
2023-06-04T17:51:37Z
|
Bad Habits: Policy Confounding and Out-of-Trajectory Generalization in
RL
|
[
"Miguel Suau",
"Matthijs T. J. Spaan",
"Frans A. Oliehoek"
] |
Reinforcement learning agents may sometimes develop habits that are effective
only when specific policies are followed. After an initial exploration phase in
which agents try out different actions, they eventually converge toward a
particular policy. When this occurs, the distribution of state-action
trajectories becomes narrower, and agents start experiencing the same
transitions again and again. At this point, spurious correlations may arise.
Agents may then pick up on these correlations and learn state representations
that do not generalize beyond the agent's trajectory distribution. In this
paper, we provide a mathematical characterization of this phenomenon, which we
refer to as policy confounding, and show, through a series of examples, when
and how it occurs in practice.
|
[
"cs.LG"
] | false |
2306.02449
|
2023-06-04T19:43:54Z
|
The Power Of Simplicity: Why Simple Linear Models Outperform Complex
Machine Learning Techniques -- Case Of Breast Cancer Diagnosis
|
[
"Muhammad Arbab Arshad",
"Sakib Shahriar",
"Khizar Anjum"
] |
This research paper investigates the effectiveness of simple linear models
versus complex machine learning techniques in breast cancer diagnosis,
emphasizing the importance of interpretability and computational efficiency in
the medical domain. We focus on Logistic Regression (LR), Decision Trees (DT),
and Support Vector Machines (SVM) and optimize their performance using the UCI
Machine Learning Repository dataset. Our findings demonstrate that the simpler
linear model, LR, outperforms the more complex DT and SVM techniques, with a
test score mean of 97.28%, a standard deviation of 1.62%, and a computation
time of 35.56 ms. In comparison, DT achieved a test score mean of 93.73%, and
SVM had a test score mean of 96.44%. The superior performance of LR can be
attributed to its simplicity and interpretability, which provide a clear
understanding of the relationship between input features and the outcome. This
is particularly valuable in the medical domain, where interpretability is
crucial for decision-making. Moreover, the computational efficiency of LR
offers advantages in terms of scalability and real-world applicability. The
results of this study highlight the power of simplicity in the context of
breast cancer diagnosis and suggest that simpler linear models like LR can be
more effective, interpretable, and computationally efficient than their complex
counterparts, making them a more suitable choice for medical applications.
|
[
"cs.LG"
] | false |
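The comparison in the abstract above is easy to reproduce in spirit with scikit-learn, which ships the same UCI breast cancer data. The paper's exact preprocessing and hyperparameters are unknown, so the numbers will differ; the sketch only shows the evaluation setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy with feature standardization.
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```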
2306.02223
|
2023-06-04T00:50:35Z
|
Prescriptive PCA: Dimensionality Reduction for Two-stage Stochastic
Optimization
|
[
"Long He",
"Ho-Yin Mak"
] |
In this paper, we consider the alignment between an upstream dimensionality
reduction task of learning a low-dimensional representation of a set of
high-dimensional data and a downstream optimization task of solving a
stochastic program parameterized by said representation. In this case, standard
dimensionality reduction methods (e.g., principal component analysis) may not
perform well, as they aim to maximize the amount of information retained in the
representation and do not generally reflect the importance of such information
in the downstream optimization problem. To address this problem, we develop a
prescriptive dimensionality reduction framework that aims to minimize the
degree of suboptimality in the optimization phase. For the case where the
downstream stochastic optimization problem has an expected value objective, we
show that prescriptive dimensionality reduction can be performed via solving a
distributionally-robust optimization problem, which admits a semidefinite
programming relaxation. Computational experiments based on a warehouse
transshipment problem and a vehicle repositioning problem show that our
approach significantly outperforms principal component analysis with real and
synthetic data sets.
|
[
"cs.LG",
"math.OC"
] | false |
2306.02224
|
2023-06-04T01:07:20Z
|
Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions
|
[
"Hui Yang",
"Sifu Yue",
"Yunzhong He"
] |
Auto-GPT is an autonomous agent that leverages recent advancements in
adapting Large Language Models (LLMs) for decision-making tasks. While there
has been a growing interest in Auto-GPT styled agents, questions remain
regarding the effectiveness and flexibility of Auto-GPT in solving real-world
decision-making tasks. Its limited capability for real-world engagement and the
absence of benchmarks contribute to these uncertainties. In this paper, we
present a comprehensive benchmark study of Auto-GPT styled agents in
decision-making tasks that simulate real-world scenarios. Our aim is to gain
deeper insights into this problem and understand the adaptability of GPT-based
agents. We compare the performance of popular LLMs such as GPT-4, GPT-3.5,
Claude, and Vicuna in Auto-GPT styled decision-making tasks. Furthermore, we
introduce the Additional Opinions algorithm, an easy and effective method that
incorporates supervised/imitation-based learners into the Auto-GPT scheme. This
approach enables lightweight supervised learning without requiring fine-tuning
of the foundational LLMs. We demonstrate through careful baseline comparisons
and ablation studies that the Additional Opinions algorithm significantly
enhances performance in online decision-making benchmarks, including WebShop
and ALFWorld.
|
[
"cs.AI",
"cs.LG"
] | false |
2306.02261
|
2023-06-04T04:55:02Z
|
Online estimation of the hand-eye transformation from surgical scenes
|
[
"Krittin Pachtrachai",
"Francisco Vasconcelos",
"Danail Stoyanov"
] |
Hand-eye calibration algorithms are mature and provide accurate
transformation estimations for an effective camera-robot link but rely on a
sufficiently wide range of calibration data to avoid errors and degenerate
configurations. To solve the hand-eye problem in robotic-assisted minimally
invasive surgery and to simplify the calibration procedure, we present a neural
network-based solution, paired with a new objective function, that estimates
the transformation from a sequence of images and kinematic data and thereby
significantly simplifies the calibration procedure. The network utilises the
long short-term memory architecture to
extract temporal information from the data and solve the hand-eye problem. The
objective function is derived from the linear combination of remote centre of
motion constraint, the re-projection error and its derivative to induce a small
change in the hand-eye transformation. The method is validated with the data
from da Vinci Si and the result shows that the estimated hand-eye matrix is
able to re-project the end-effector from the robot coordinate frame to the camera
coordinate frame within 10 to 20 pixels of accuracy on both testing datasets. The
calibration performance is also superior to the previous neural network-based
hand-eye method. The proposed algorithm shows that the calibration procedure
can be simplified by using deep learning techniques and the performance is
improved by the assumption of non-static hand-eye transformations.
|
[
"cs.RO",
"cs.LG"
] | false |
2306.02271
|
2023-06-04T06:30:13Z
|
SubspaceNet: Deep Learning-Aided Subspace Methods for DoA Estimation
|
[
"Dor H. Shmuel",
"Julian P. Merkofer",
"Guy Revach",
"Ruud J. G. van Sloun",
"Nir Shlezinger"
] |
Direction of arrival (DoA) estimation is a fundamental task in array
processing. A popular family of DoA estimation algorithms are subspace methods,
which operate by dividing the measurements into distinct signal and noise
subspaces. Subspace methods, such as Multiple Signal Classification (MUSIC) and
Root-MUSIC, rely on several restrictive assumptions, including narrowband
non-coherent sources and fully calibrated arrays, and their performance is
considerably degraded when these do not hold. In this work, we propose
SubspaceNet: a data-driven DoA estimator which learns how to divide the
observations into distinguishable subspaces. This is achieved by utilizing a
dedicated deep neural network to learn the empirical autocorrelation of the
input, by training it as part of the Root-MUSIC method, leveraging the inherent
differentiability of this specific DoA estimator, while removing the need to
provide a ground-truth decomposable autocorrelation matrix. Once trained, the
resulting SubspaceNet serves as a universal surrogate covariance estimator that
can be applied in combination with any subspace-based DoA estimation method,
allowing its successful application in challenging setups. SubspaceNet is shown
to enable various DoA estimation algorithms to cope with coherent sources,
wideband signals, low SNR, array mismatches, and limited snapshots, while
preserving the interpretability and the suitability of classic subspace
methods.
|
[
"eess.SP",
"cs.LG"
] | false |
2306.02283
|
2023-06-04T07:01:31Z
|
Matrix Completion from General Deterministic Sampling Patterns
|
[
"Hanbyul Lee",
"Rahul Mazumder",
"Qifan Song",
"Jean Honorio"
] |
Most of the existing works on provable guarantees for low-rank matrix
completion algorithms rely on unrealistic assumptions, such as that matrix
entries are sampled randomly or that the sampling pattern has a specific
structure. In this work, we establish theoretical guarantees for the exact and
approximate low-rank matrix completion problems which can be applied to any
deterministic sampling scheme. For this, we introduce a graph having the
observed entries as its edge set, and investigate graph properties that govern
the performance of the standard constrained nuclear norm minimization
algorithm. We theoretically and experimentally show that the algorithm succeeds
when the observation graph is well-connected and has similar node degrees. Our
result can be viewed
as an extension of the works by Bhojanapalli and Jain [2014] and Burnwal and
Vidyasagar [2020], in which the node degrees of the observation graph were
assumed to be the same. In particular, our theory significantly improves their
results when the underlying matrix is symmetric.
|
[
"stat.ML",
"cs.LG"
] | false |
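The standard constrained nuclear norm minimization the analysis above applies to can be written down directly with cvxpy (assuming that package is available): minimize the nuclear norm of X subject to agreement on the observed entries. The random mask below is only for brevity; the paper's point is that its guarantees cover deterministic sampling patterns.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 truth
mask = rng.random((n, n)) < 0.5        # observed entries (random for brevity)
rows, cols = np.nonzero(mask)

X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),        # nuclear norm objective
                     [X[rows, cols] == M[rows, cols]])  # match observations
problem.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```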
2306.02300
|
2023-06-04T08:53:27Z
|
How neural networks learn to classify chaotic time series
|
[
"Alessandro Corbetta",
"Thomas Geert de Jong"
] |
Neural networks are increasingly employed to model, analyze and control
non-linear dynamical systems ranging from physics to biology. Owing to their
universal approximation capabilities, they regularly outperform
state-of-the-art model-driven methods in terms of accuracy, computational
speed, and/or control capabilities. On the other hand, neural networks are very
often taken as black boxes whose explainability is challenged, among other
factors, by huge numbers of trainable parameters. In this paper, we tackle the
outstanding issue of analyzing the inner workings of neural networks trained to
classify regular-versus-chaotic time series. This setting, well-studied in
dynamical systems, enables thorough formal analyses. We focus specifically on a
family of networks dubbed Large Kernel Convolutional Neural Networks (LKCNN),
recently introduced by Boull\'{e} et al. (2021). These non-recursive networks
have been shown to outperform other established architectures (e.g. residual
networks, shallow neural networks and fully convolutional networks) at this
classification task. Furthermore, they outperform ``manual'' classification
approaches based on direct reconstruction of the Lyapunov exponent. We find
that LKCNNs use qualitative properties of the input sequence. In particular, we
show that the relation between input periodicity and activation periodicity is
key for the performance of LKCNN models. Low-performing models show, in fact,
periodic activations analogous to those of random untrained models. This could
provide very general criteria for identifying, a priori, trained models that
have poor accuracy.
|
[
"math.DS",
"cs.LG",
"37M10, 68T07"
] | false |
2306.02325
|
2023-06-04T10:50:13Z
|
Random Feedback Alignment Algorithms to train Neural Networks: Why do
they Align?
|
[
"Dominique Chu",
"Florian Bacho"
] |
Feedback alignment algorithms are an alternative to backpropagation to train
neural networks, whereby some of the partial derivatives that are required to
compute the gradient are replaced by random terms. This essentially transforms
the update rule into a random walk in weight space. Surprisingly, learning
still works with those algorithms, including training of deep neural networks.
This is generally attributed to an alignment of the random walker's update
with the true gradient (the eponymous gradient alignment), which drives an
approximate gradient descent. The mechanism that leads to this alignment
remains unclear, however. In this paper, we use mathematical reasoning and
simulations to investigate gradient alignment. We observe that the feedback
alignment update rule has fixed points, which correspond to extrema of the loss
function. We show that gradient alignment is a stability criterion for those
fixed points. It is only a necessary criterion for algorithm performance.
Experimentally, we demonstrate that high levels of gradient alignment can lead
to poor algorithm performance and that the alignment is not always driving the
gradient descent.
|
[
"cs.LG",
"cs.AI"
] | false |
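Feedback alignment, as described in the abstract above, is easy to state in code: in the backward pass, the transpose of the forward weight matrix is replaced by a fixed random matrix B. A minimal numpy sketch, under the simplifying assumptions of a two-layer tanh network and a linear regression target:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 16, 1, 0.05
W1 = rng.standard_normal((n_hid, n_in)) * 0.3
W2 = rng.standard_normal((n_out, n_hid)) * 0.3
B = rng.standard_normal((n_hid, n_out))   # fixed random feedback, replaces W2.T

X = rng.standard_normal((256, n_in))
y = X @ rng.standard_normal((n_in, n_out))     # toy linear target

for step in range(500):
    h = np.tanh(X @ W1.T)                 # forward pass
    pred = h @ W2.T
    err = pred - y                        # dLoss/dpred (MSE, up to a constant)
    # Backward pass: feedback alignment propagates the error through the
    # fixed random matrix B instead of W2.T.
    dh = (err @ B.T) * (1 - h ** 2)
    W2 -= lr * err.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)
print("final MSE:", float((err ** 2).mean()))
```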
2306.02368
|
2023-06-04T14:27:50Z
|
Revisiting Data-Free Knowledge Distillation with Poisoned Teachers
|
[
"Junyuan Hong",
"Yi Zeng",
"Shuyang Yu",
"Lingjuan Lyu",
"Ruoxi Jia",
"Jiayu Zhou"
] |
Data-free knowledge distillation (KD) helps transfer knowledge from a
pre-trained model (known as the teacher model) to a smaller model (known as the
student model) without access to the original training data used for training
the teacher model. However, the security of the synthetic or
out-of-distribution (OOD) data required in data-free KD is largely unknown and
under-explored. In this work, we make the first effort to uncover the security
risk of data-free KD w.r.t. untrusted pre-trained models. We then propose
Anti-Backdoor Data-Free KD (ABD), the first plug-in defensive method for
data-free KD methods to mitigate the chance of potential backdoors being
transferred. We empirically evaluate the effectiveness of our proposed ABD in
diminishing transferred backdoor knowledge while maintaining compatible
downstream performances as the vanilla KD. We envision this work as a milestone
for alarming and mitigating the potential backdoors in data-free KD. Codes are
released at https://github.com/illidanlab/ABD.
|
[
"cs.LG",
"cs.CR"
] | false |
2306.02376
|
2023-06-04T15:19:44Z
|
Towards Deep Attention in Graph Neural Networks: Problems and Remedies
|
[
"Soo Yong Lee",
"Fanchen Bu",
"Jaemin Yoo",
"Kijung Shin"
] |
Graph neural networks (GNNs) learn the representation of graph-structured
data, and their expressiveness can be further enhanced by inferring node
relations for propagation. Attention-based GNNs infer neighbor importance to
weight their propagation. Despite their popularity, the
discussion on deep graph attention and its unique challenges has been limited.
In this work, we investigate some problematic phenomena related to deep graph
attention, including vulnerability to over-smoothed features and smooth
cumulative attention. Through theoretical and empirical analyses, we show that
various attention-based GNNs suffer from these problems. Motivated by our
findings, we propose AERO-GNN, a novel GNN architecture designed for deep graph
attention. AERO-GNN provably mitigates the proposed problems of deep graph
attention, which is further empirically demonstrated with (a) its adaptive and
less smooth attention functions and (b) higher performance at deep layers (up
to 64). On 9 out of 12 node classification benchmarks, AERO-GNN outperforms the
baseline GNNs, highlighting the advantages of deep graph attention. Our code is
available at https://github.com/syleeheal/AERO-GNN.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02389
|
2023-06-04T15:48:09Z
|
Fast Continual Multi-View Clustering with Incomplete Views
|
[
"Xinhang Wan",
"Bin Xiao",
"Xinwang Liu",
"Jiyuan Liu",
"Weixuan Liang",
"En Zhu"
] |
Multi-view clustering (MVC) has gained broad attention owing to its capacity
to exploit consistent and complementary information across views. This paper
focuses on a challenging issue in MVC called the incomplete continual data
problem (ICDP). Specifically, most existing algorithms assume that views are
available in advance and overlook scenarios where data observations of
views are accumulated over time. Due to privacy considerations or memory
limitations, previous views cannot be stored in these situations. Some works
have been proposed to handle this setting, but all fail to address incomplete
views. Such an
incomplete continual data problem (ICDP) in MVC is tough to solve since
incomplete information with continual data increases the difficulty of
extracting consistent and complementary knowledge among views. We propose Fast
Continual Multi-View Clustering with Incomplete Views (FCMVC-IV) to address it.
Specifically, it maintains a consensus coefficient matrix and updates knowledge
with the incoming incomplete view rather than storing and recomputing all the
data matrices. Considering that the views are incomplete, the newly collected
view might contain samples that have yet to appear; two indicator matrices and
a rotation matrix are developed to match matrices with different dimensions.
Besides, we design a three-step iterative algorithm to solve the resultant
problem in linear complexity with proven convergence. Comprehensive experiments
on various datasets show the superiority of FCMVC-IV.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02421
|
2023-06-04T17:53:30Z
|
Auto-Validate by-History: Auto-Program Data Quality Constraints to
Validate Recurring Data Pipelines
|
[
"Dezhan Tu",
"Yeye He",
"Weiwei Cui",
"Song Ge",
"Haidong Zhang",
"Han Shi",
"Dongmei Zhang",
"Surajit Chaudhuri"
] |
Data pipelines are widely employed in modern enterprises to power a variety
of Machine-Learning (ML) and Business-Intelligence (BI) applications.
Crucially, these pipelines are \emph{recurring} (e.g., daily or hourly) in
production settings to keep data updated so that ML models can be re-trained
regularly, and BI dashboards refreshed frequently. However, data quality (DQ)
issues can often creep into recurring pipelines because of upstream schema and
data drift over time. As modern enterprises operate thousands of recurring
pipelines, today data engineers have to spend substantial efforts to
\emph{manually} monitor and resolve DQ issues, as part of their DataOps and
MLOps practices.
Given the high human cost of managing large-scale pipeline operations, it is
imperative that we can \emph{automate} as much as possible. In this work, we
propose Auto-Validate-by-History (AVH) that can automatically detect DQ issues
in recurring pipelines, leveraging rich statistics from historical executions.
We formalize this as an optimization problem, and develop constant-factor
approximation algorithms with provable precision guarantees. Extensive
evaluations using 2000 production data pipelines at Microsoft demonstrate the
effectiveness and efficiency of AVH.
|
[
"cs.DB",
"cs.LG"
] | false |
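The flavor of history-based validation from the abstract above can be conveyed with a simple check: compare today's summary statistic of a pipeline output against the distribution of that statistic over past executions, and flag large deviations. AVH itself automatically selects constraints with provable precision guarantees; the z-score rule and row-count statistic below are toy stand-ins.

```python
import statistics

def validate_by_history(history, today, z_max=3.0):
    """Flag today's statistic if it deviates too far from historical runs."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1e-9   # guard against zero variance
    z = abs(today - mu) / sigma
    return z <= z_max, z

# e.g., daily row counts of a recurring pipeline
row_counts = [10_210, 10_180, 10_305, 10_190, 10_250]
print(validate_by_history(row_counts, today=10_230))  # passes
print(validate_by_history(row_counts, today=4_000))   # flagged: likely drift
```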
2306.02430
|
2023-06-04T18:26:25Z
|
A Unified Framework for Factorizing Distributional Value Functions for
Multi-Agent Reinforcement Learning
|
[
"Wei-Fang Sun",
"Cheng-Kuang Lee",
"Simon See",
"Chun-Yi Lee"
] |
In fully cooperative multi-agent reinforcement learning (MARL) settings,
environments are highly stochastic due to the partial observability of each
agent and the continuously changing policies of other agents. To address the
above issues, we propose a unified framework, called DFAC, for integrating
distributional RL with value function factorization methods. This framework
generalizes expected value function factorization methods to enable the
factorization of return distributions. To validate DFAC, we first demonstrate
its ability to factorize the value functions of a simple matrix game with
stochastic rewards. Then, we perform experiments on all Super Hard maps of the
StarCraft Multi-Agent Challenge and six self-designed Ultra Hard maps, showing
that DFAC is able to outperform a number of baselines.
|
[
"cs.MA",
"cs.LG"
] | false |
2306.02433
|
2023-06-04T18:32:50Z
|
Riemannian Low-Rank Model Compression for Federated Learning with
Over-the-Air Aggregation
|
[
"Ye Xue",
"Vincent Lau"
] |
Low-rank model compression is a widely used technique for reducing the
computational load when training machine learning models. However, existing
methods often rely on relaxing the low-rank constraint of the model weights
using a regularized nuclear norm penalty, which requires an appropriate
hyperparameter that can be difficult to determine in practice. Furthermore,
existing compression techniques are not directly applicable to efficient
over-the-air (OTA) aggregation in federated learning (FL) systems for
distributed Internet-of-Things (IoT) scenarios. In this paper, we propose a
novel manifold optimization formulation for low-rank model compression in FL
that does not relax the low-rank constraint. Our optimization is conducted
directly over the low-rank manifold, guaranteeing that the model is exactly
low-rank. We also introduce a consensus penalty in the optimization formulation
to support OTA aggregation. Based on our optimization formulation, we propose
an alternating Riemannian optimization algorithm with a precoder that enables
efficient OTA aggregation of low-rank local models without sacrificing training
performance. Additionally, we provide convergence analysis in terms of key
system parameters and conduct extensive experiments with real-world datasets to
demonstrate the effectiveness of our proposed Riemannian low-rank model
compression scheme compared to various state-of-the-art baselines.
|
[
"eess.SP",
"cs.LG"
] | false |
2306.02437
|
2023-06-04T18:48:32Z
|
Data Quality in Imitation Learning
|
[
"Suneel Belkhale",
"Yuchen Cui",
"Dorsa Sadigh"
] |
In supervised learning, the question of data quality and curation has been
overshadowed in recent years by increasingly more powerful and expressive
models that can ingest internet-scale data. However, in offline learning for
robotics, we simply lack internet scale data, and so high quality datasets are
a necessity. This is especially true in imitation learning (IL), a
sample-efficient paradigm for robot learning using expert demonstrations. Policies
learned through IL suffer from state distribution shift at test time due to
compounding errors in action prediction, which leads to unseen states that the
policy cannot recover from. Instead of designing new algorithms to address
distribution shift, an alternative perspective is to develop new ways of
assessing and curating datasets. There is growing evidence that the same IL
algorithms can have substantially different performance across different
datasets. This calls for a formalism for defining metrics of "data quality"
that can further be leveraged for data curation. In this work, we take the
first step toward formalizing data quality for imitation learning through the
lens of distribution shift: a high quality dataset encourages the policy to
stay in distribution at test time. We propose two fundamental properties that
shape the quality of a dataset: i) action divergence: the mismatch between the
expert and learned policy at certain states; and ii) transition diversity: the
noise present in the system for a given state and action. We investigate the
combined effect of these two key properties in imitation learning
theoretically, and we empirically analyze models trained on a variety of
different data sources. We show that state diversity is not always beneficial,
and we demonstrate how action divergence and transition diversity interact in
practice.
|
[
"cs.RO",
"cs.LG"
] | false |
2306.02447
|
2023-06-04T19:30:28Z
|
Active Inference-Based Optimization of Discriminative Neural Network
Classifiers
|
[
"Faezeh Fallah"
] |
Commonly used objective functions (losses) for a supervised optimization of
discriminative neural network classifiers were either distribution-based or
metric-based. The distribution-based losses could compromise the generalization
or cause classification biases towards the dominant classes of an imbalanced
class-sample distribution. The metric-based losses could make the network model
independent of any distribution and thus improve its generalization. However,
they could still be biased towards the dominant classes and could suffer from
discrepancies when a class was absent in both the reference (ground truth) and
the predicted labels. In this paper, we proposed a novel optimization process
which not only tackled the imbalance of the class-sample distribution of
the training samples but also provided a mechanism to tackle errors in the
reference labels of the training samples. This was achieved by proposing a
novel algorithm to find candidate classification labels of the training samples
from their prior probabilities and the currently estimated posteriors on the
network and a novel objective function for the optimizations. The algorithm was
the result of casting the generalized Kelly criterion for optimal betting into
a multiclass classification problem. The proposed objective function was the
expected free energy of a prospective active inference and could incorporate
the candidate labels, the original reference labels, and the priors of the
training samples while still being distribution-based. The incorporation of the
priors into the optimization not only helped to tackle errors in the reference
labels but also made it possible to reduce classification biases towards the
dominant
classes by focusing the attention of the neural network on important but
minority foreground classes.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.02473
|
2023-06-04T20:45:14Z
|
Anomaly Detection Techniques in Smart Grid Systems: A Review
|
[
"Shampa Banik",
"Sohag Kumar Saha",
"Trapa Banik",
"S M Mostaq Hossain"
] |
Smart grid data can be evaluated for anomaly detection in numerous fields,
including cyber-security, fault detection, electricity theft, etc. These
anomalous behaviors may be caused by various factors, including peculiar
consumption patterns of the consumers, malfunctioning grid infrastructures,
outages, external cyber-attacks, or energy fraud. Recently, anomaly detection
of the smart grid has attracted a large amount of interest from researchers,
and it is widely applied in a number of high-impact fields. One of the most
significant challenges within the smart grid is the implementation of efficient
anomaly detection for multiple forms of aberrant behaviors. In this paper, we
provide a scoping review of research from the recent advancements in anomaly
detection in the context of smart grids. We categorize our study from numerous
aspects for deep understanding and inspection of the research challenges so
far. Finally, after analyzing the gaps in the reviewed papers, we briefly
outline directions for future research on anomaly detection in smart-grid
systems.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.02399
|
2023-06-04T16:24:19Z
|
Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz
Dynamic Risk Measures
|
[
"Hao Liang",
"Zhi-quan Luo"
] |
We study finite episodic Markov decision processes incorporating dynamic risk
measures to capture risk sensitivity. To this end, we present two model-based
algorithms applied to \emph{Lipschitz} dynamic risk measures, a wide class of
risk measures that subsumes spectral risk measures, optimized certainty
equivalents, and distortion risk measures, among others. We establish both regret
upper bounds and lower bounds. Notably, our upper bounds demonstrate optimal
dependencies on the number of actions and episodes, while reflecting the
inherent trade-off between risk sensitivity and sample complexity.
Additionally, we substantiate our theoretical results through numerical
experiments.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2306.02400
|
2023-06-04T16:29:30Z
|
Perceptual Kalman Filters: Online State Estimation under a Perfect
Perceptual-Quality Constraint
|
[
"Dror Freirich",
"Tomer Michaeli",
"Ron Meir"
] |
Many practical settings call for the reconstruction of temporal signals from
corrupted or missing data. Classic examples include decoding, tracking, signal
enhancement and denoising. Since the reconstructed signals are ultimately
viewed by humans, it is desirable to achieve reconstructions that are pleasing
to human perception. Mathematically, perfect perceptual-quality is achieved
when the distribution of restored signals is the same as that of natural
signals, a requirement which has been heavily researched in static estimation
settings (i.e. when a whole signal is processed at once). Here, we study the
problem of optimal causal filtering under a perfect perceptual-quality
constraint, which is a task of fundamentally different nature. Specifically, we
analyze a Gaussian Markov signal observed through a linear noisy
transformation. In the absence of perceptual constraints, the Kalman filter is
known to be optimal in the MSE sense for this setting. Here, we show that
adding the perfect perceptual quality constraint (i.e. the requirement of
temporal consistency), introduces a fundamental dilemma whereby the filter may
have to "knowingly" ignore new information revealed by the observations in
order to conform to its past decisions. This often comes at the cost of a
significant increase in the MSE (beyond that encountered in static settings).
Our analysis goes beyond the classic innovation process of the Kalman filter,
and introduces the novel concept of an unutilized information process. Using
this tool, we present a recursive formula for perceptual filters, and
demonstrate the qualitative effects of perfect perceptual-quality estimation on
a video reconstruction problem.
|
[
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] | false |
2306.02411
|
2023-06-04T17:08:41Z
|
A Topological Approach to Measuring Training Data Quality
|
[
"Álvaro Torras-Casas",
"Eduardo Paluzo-Hidalgo",
"Rocio Gonzalez-Diaz"
] |
Data quality is crucial for the successful training, generalization and
performance of artificial intelligence models. Furthermore, it is known that
the leading approaches in artificial intelligence are notoriously data-hungry.
In this paper, we propose the use of small training datasets towards faster
training. Specifically, we provide a novel topological method based on
morphisms between persistence modules to measure the training data quality with
respect to the complete dataset. This way, we can provide an explanation of why
the chosen training dataset will lead to poor performance.
|
[
"math.AT",
"cs.AI",
"cs.LG"
] | false |
2306.02418
|
2023-06-04T17:50:20Z
|
ContraBAR: Contrastive Bayes-Adaptive Deep RL
|
[
"Era Choshen",
"Aviv Tamar"
] |
In meta reinforcement learning (meta RL), an agent seeks a Bayes-optimal
policy -- the optimal policy when facing an unknown task that is sampled from
some known task distribution. Previous approaches tackled this problem by
inferring a belief over task parameters, using variational inference methods.
Motivated by recent successes of contrastive learning approaches in RL, such as
contrastive predictive coding (CPC), we investigate whether contrastive methods
can be used for learning Bayes-optimal behavior. We begin by proving that
representations learned by CPC are indeed sufficient for Bayes optimality.
Based on this observation, we propose a simple meta RL algorithm that uses CPC
in lieu of variational belief inference. Our method, ContraBAR, achieves
comparable performance to state-of-the-art in domains with state-based
observation and circumvents the computational toll of future observation
reconstruction, enabling learning in domains with image-based observations. It
can also be combined with image augmentations for domain randomization and used
seamlessly in both online and offline meta RL settings.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2306.02420
|
2023-06-04T17:52:49Z
|
Complexity of Block Coordinate Descent with Proximal Regularization and
Applications to Wasserstein CP-dictionary Learning
|
[
"Dohyun Kwon",
"Hanbaek Lyu"
] |
We consider the block coordinate descent methods of Gauss-Seidel type with
proximal regularization (BCD-PR), which is a classical method of minimizing
general nonconvex objectives under constraints that has a wide range of
practical applications. We theoretically establish the worst-case complexity
bound for this algorithm. Namely, we show that for general nonconvex smooth
objectives with block-wise constraints, the classical BCD-PR algorithm
converges to an epsilon-stationary point within O(1/epsilon) iterations. Under
a mild condition, this result still holds even if the algorithm is executed
inexactly in each step. As an application, we propose a provable and efficient
algorithm for `Wasserstein CP-dictionary learning', which seeks a set of
elementary probability distributions that can well-approximate a given set of
d-dimensional joint probability distributions. Our algorithm is a version of
BCD-PR that operates in the dual space, where the primal problem is regularized
both entropically and proximally.
|
[
"cs.LG",
"cs.AI",
"cs.NA",
"math.NA",
"math.OC"
] | false |
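A minimal numpy sketch of Gauss-Seidel block coordinate descent with proximal regularization, matching the BCD-PR scheme analyzed in the abstract above, on a two-block quadratic: each block is updated in turn by minimizing the objective plus a proximal term (rho/2)||z - z_prev||^2 that anchors the update to the previous iterate. The toy quadratic and its closed-form block solves are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((30, 5)), rng.standard_normal((30, 5))
c = rng.standard_normal(30)
rho = 1.0                                   # proximal regularization weight
x, y = np.zeros(5), np.zeros(5)

def prox_block_solve(M, rhs, prev):
    # argmin_z 0.5*||M z - rhs||^2 + (rho/2)*||z - prev||^2 (closed form):
    # (M.T M + rho I) z = M.T rhs + rho * prev
    return np.linalg.solve(M.T @ M + rho * np.eye(M.shape[1]),
                           M.T @ rhs + rho * prev)

for it in range(50):                        # Gauss-Seidel sweeps over blocks
    x = prox_block_solve(A, c - B @ y, x)   # update block x with y fixed
    y = prox_block_solve(B, c - A @ x, y)   # update block y with the new x
print("objective:", 0.5 * np.linalg.norm(A @ x + B @ y - c) ** 2)
```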
2306.02544
|
2023-06-05T02:29:38Z
|
Fourier Test-time Adaptation with Multi-level Consistency for Robust
Classification
|
[
"Yuhao Huang",
"Xin Yang",
"Xiaoqiong Huang",
"Xinrui Zhou",
"Haozhe Chi",
"Haoran Dou",
"Xindi Hu",
"Jian Wang",
"Xuedong Deng",
"Dong Ni"
] |
Deep classifiers may encounter significant performance degradation when
processing unseen testing data from varying centers, vendors, and protocols.
Ensuring the robustness of deep models against these domain shifts is crucial
for their widespread clinical application. In this study, we propose a novel
approach called Fourier Test-time Adaptation (FTTA), which employs a
dual-adaptation design to integrate input and model tuning, thereby jointly
improving the model robustness. The main idea of FTTA is to build a reliable
multi-level consistency measurement of paired inputs for achieving
self-correction of prediction. Our contribution is two-fold. First, we
encourage consistency in global features and local attention maps between the
two transformed images of the same input. Here, the transformation refers to
Fourier-based input adaptation, which can transfer one unseen image into source
style to reduce the domain gap. Furthermore, we leverage style-interpolated
images to enhance the global and local features with learnable parameters,
which can smooth the consistency measurement and accelerate convergence.
Second, we introduce a regularization technique that utilizes style
interpolation consistency in the frequency space to encourage self-consistency
in the logit space of the model output. This regularization provides strong
self-supervised signals for robustness enhancement. FTTA was extensively
validated on three large classification datasets with different modalities and
organs. Experimental results show that FTTA is general and outperforms other
strong state-of-the-art methods.
|
[
"cs.CV"
] | false |
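The Fourier-based input adaptation mentioned in the abstract above follows a well-known recipe: replace the low-frequency amplitude spectrum of an unseen image with that of a source-style image while keeping the phase, which transfers style while preserving content. This numpy sketch is a generic stand-in, not FTTA's exact procedure, and the beta band-width below is a placeholder hyperparameter.

```python
import numpy as np

def fourier_style_transfer(target, source, beta=0.1):
    """Swap target's low-frequency amplitude with source's (2D grayscale)."""
    Ft = np.fft.fftshift(np.fft.fft2(target))
    Fs = np.fft.fftshift(np.fft.fft2(source))
    amp_t, phase_t = np.abs(Ft), np.angle(Ft)
    h, w = target.shape
    bh, bw = int(beta * h / 2), int(beta * w / 2)
    cy, cx = h // 2, w // 2
    # Replace the centered low-frequency band of the amplitude spectrum.
    band = (slice(cy - bh, cy + bh + 1), slice(cx - bw, cx + bw + 1))
    amp_t[band] = np.abs(Fs)[band]
    out = np.fft.ifft2(np.fft.ifftshift(amp_t * np.exp(1j * phase_t)))
    return np.real(out)

tgt, src = np.random.rand(64, 64), np.random.rand(64, 64)
print(fourier_style_transfer(tgt, src).shape)
```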
2306.02562
|
2023-06-05T03:32:27Z
|
Video Diffusion Models with Local-Global Context Guidance
|
[
"Siyuan Yang",
"Lu Zhang",
"Yu Liu",
"Zhizhuo Jiang",
"You He"
] |
Diffusion models have emerged as a powerful paradigm in video synthesis tasks
including prediction, generation, and interpolation. Due to the limitation of
the computational budget, existing methods usually implement conditional
diffusion models with an autoregressive inference pipeline, in which the future
fragment is predicted based on the distribution of adjacent past frames.
However, conditions drawn from only a few previous frames cannot capture global temporal coherence, leading to inconsistent or even implausible results in long-term video prediction. In this paper, we propose a Local-Global Context
guided Video Diffusion model (LGC-VD) to capture multi-perception conditions
for producing high-quality videos in both conditional/unconditional settings.
In LGC-VD, the UNet is implemented with stacked residual blocks with
self-attention units, avoiding the undesirable computational cost in 3D Conv.
We construct a local-global context guidance strategy to capture the
multi-perceptual embedding of the past fragment to boost the consistency of
future prediction. Furthermore, we propose a two-stage training strategy to
alleviate the effect of noisy frames for more stable predictions. Our
experiments demonstrate that the proposed method achieves favorable performance
on video prediction, interpolation, and unconditional video generation. We
release code at https://github.com/exisas/LGC-VD.
|
[
"cs.CV"
] | false |
2306.02717
|
2023-06-05T09:09:10Z
|
User-friendly Image Editing with Minimal Text Input: Leveraging
Captioning and Injection Techniques
|
[
"Sunwoo Kim",
"Wooseok Jang",
"Hyunsu Kim",
"Junho Kim",
"Yunjey Choi",
"Seungryong Kim",
"Gayeong Lee"
] |
Recent text-driven image editing in diffusion models has shown remarkable
success. However, the existing methods assume that the user's description
sufficiently grounds the contexts in the source image, such as objects,
background, style, and their relations. This assumption is unsuitable for
real-world applications because users have to manually engineer text prompts to
find optimal descriptions for different images. From the users' standpoint,
prompt engineering is a labor-intensive process, and users prefer to provide a
target word for editing instead of a full sentence. To address this problem, we
first demonstrate the importance of a detailed text description of the source
image, by dividing prompts into three categories based on the level of semantic
details. Then, we propose simple yet effective methods by combining prompt
generation frameworks, thereby making the prompt engineering process more
user-friendly. Extensive qualitative and quantitative experiments demonstrate the importance of prompts in text-driven image editing and show that our method is comparable to using ground-truth prompts.
|
[
"cs.CV"
] | false |
2306.02741
|
2023-06-05T09:41:51Z
|
ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative
Neural Radiance Fields
|
[
"Kanghyeok Ko",
"Minhyeok Lee"
] |
Generative Neural Radiance Fields (NeRFs) have demonstrated remarkable
proficiency in synthesizing multi-view images by learning the distribution of a
set of unposed images. Despite the aptitude of existing generative NeRFs in
generating 3D-consistent high-quality random samples within data distribution,
the creation of a 3D representation of a singular input image remains a
formidable challenge. In this manuscript, we introduce ZIGNeRF, an innovative
model that executes zero-shot Generative Adversarial Network (GAN) inversion
for the generation of multi-view images from a single out-of-domain image. The
model is underpinned by a novel inverter that maps out-of-domain images into
the latent code of the generator manifold. Notably, ZIGNeRF is capable of
disentangling the object from the background and executing 3D operations such
as 360-degree rotation or depth and horizontal translation. The efficacy of our
model is validated using multiple real-image datasets: Cats, AFHQ, CelebA,
CelebA-HQ, and CompCars.
|
[
"cs.CV"
] | false |
2306.02763
|
2023-06-05T10:33:25Z
|
STAR Loss: Reducing Semantic Ambiguity in Facial Landmark Detection
|
[
"Zhenglin Zhou",
"Huaxia Li",
"Hong Liu",
"Nanyang Wang",
"Gang Yu",
"Rongrong Ji"
] |
Recently, deep learning-based facial landmark detection has achieved
significant improvement. However, the semantic ambiguity problem degrades
detection performance. Specifically, the semantic ambiguity causes inconsistent
annotation and negatively affects the model's convergence, leading to worse accuracy and unstable predictions. To solve this problem, we propose a
Self-adapTive Ambiguity Reduction (STAR) loss by exploiting the properties of
semantic ambiguity. We find that semantic ambiguity results in the anisotropic
predicted distribution, which inspires us to use predicted distribution to
represent semantic ambiguity. Based on this, we design the STAR loss that
measures the anisotropism of the predicted distribution. Compared with the
standard regression loss, STAR loss is encouraged to be small when the
predicted distribution is anisotropic and thus adaptively mitigates the impact
of semantic ambiguity. Moreover, we propose two kinds of eigenvalue restriction
methods that could avoid both distribution's abnormal change and the model's
premature convergence. Finally, the comprehensive experiments demonstrate that
STAR loss outperforms the state-of-the-art methods on three benchmarks, i.e.,
COFW, 300W, and WFLW, with negligible computation overhead. Code is at
https://github.com/ZhenglinZhou/STAR.
|
[
"cs.CV"
] | false |
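As a rough illustration of measuring the anisotropy of a predicted landmark distribution (2306.02763), the sketch below computes the covariance of a heatmap and the ratio of its eigenvalues. The actual STAR loss decomposes the regression error along the principal components; this toy measure is an assumed simplification:

    import numpy as np

    def heatmap_anisotropy(heatmap):
        """Treat a (H, W) non-negative heatmap as a 2D distribution and return
        the ratio of the largest to smallest eigenvalue of its covariance;
        1.0 means isotropic, larger values mean more ambiguity along one
        direction (sketch)."""
        p = heatmap / heatmap.sum()
        ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
        mx, my = (p * xs).sum(), (p * ys).sum()
        dx, dy = xs - mx, ys - my
        cov = np.array([[(p * dx * dx).sum(), (p * dx * dy).sum()],
                        [(p * dx * dy).sum(), (p * dy * dy).sum()]])
        lo, hi = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
        return hi / max(lo, 1e-8)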
2306.02765
|
2023-06-05T10:41:25Z
|
Differentially Private Cross-camera Person Re-identification
|
[
"Lucas Maris",
"Yuki Matsuda",
"Keiichi Yasumoto"
] |
Camera-based person re-identification is a heavily privacy-invading task by
design, benefiting from rich visual data to match together person
representations across different cameras. This high-dimensional data can then
easily be used for other, perhaps less desirable, applications. We here
investigate the possibility of protecting such image data against uses outside
of the intended re-identification task, and introduce a differential privacy
mechanism leveraging both pixelisation and colour quantisation for this
purpose. We show its ability to distort images in such a way that performance on adverse tasks is significantly reduced, while high re-identification performance is retained.
|
[
"cs.CV"
] | false |
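A minimal sketch of the two image transforms named in the abstract above (2306.02765), pixelisation plus colour quantisation. The Laplace noise calibration follows the classic DP-Pix recipe as an assumption; the paper's actual mechanism and parameters may differ:

    import numpy as np

    def pixelise_quantise(img, block=8, levels=8, eps=None, m=1, rng=None):
        """img: (H, W, C) uint8. Average over block x block cells, optionally
        add Laplace noise calibrated for m-neighbourhood differential privacy
        (scale 255*m / (block^2 * eps), as in DP-Pix), then quantise each
        channel to `levels` uniform bins (sketch)."""
        out = img.astype(np.float64).copy()
        h, w = img.shape[:2]
        for y in range(0, h, block):
            for x in range(0, w, block):
                cell = out[y:y + block, x:x + block]
                cell[...] = cell.mean(axis=(0, 1))  # pixelisation
        if eps is not None:
            rng = rng or np.random.default_rng()
            out += rng.laplace(0.0, 255.0 * m / (block ** 2 * eps), size=out.shape)
        step = 256.0 / levels  # uniform colour quantisation
        out = (np.floor(np.clip(out, 0, 255) / step) + 0.5) * step
        return np.clip(out, 0, 255).astype(np.uint8)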
2306.02776
|
2023-06-05T11:01:00Z
|
Cheap-fake Detection with LLM using Prompt Engineering
|
[
"Guangyang Wu",
"Weijie Wu",
"Xiaohong Liu",
"Kele Xu",
"Tianjiao Wan",
"Wenyi Wang"
] |
The misuse of real photographs with conflicting image captions in news items
is an example of the out-of-context (OOC) misuse of media. In order to detect
OOC media, individuals must determine the accuracy of the statement and
evaluate whether the triplet (i.e., the image and two captions)
relates to the same event. This paper presents a novel learnable approach for
detecting OOC media in ICME'23 Grand Challenge on Detecting Cheapfakes. The
proposed method is based on the COSMOS structure, which assesses the coherence
between an image and captions, as well as between two captions. We enhance the
baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a
feature extractor. Specifically, we propose an innovative approach to feature
extraction utilizing prompt engineering to develop a robust and reliable
feature extractor with the GPT3.5 model. The proposed method captures the
correlation between two captions and effectively integrates this module into
the COSMOS baseline model, which allows for a deeper understanding of the
relationship between captions. By incorporating this module, we demonstrate the
potential for significant improvements in cheap-fakes detection performance.
The proposed methodology holds promising implications for various applications
such as natural language processing, image captioning, and text-to-image
synthesis. Docker for submission is available at
https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes.
|
[
"cs.CV"
] | false |
2306.02782
|
2023-06-05T11:16:50Z
|
Reassembling Broken Objects using Breaking Curves
|
[
"Ali Alagrami",
"Luca Palmieri",
"Sinem Aslan",
"Marcello Pelillo",
"Sebastiano Vascon"
] |
Reassembling 3D broken objects is a challenging task. A robust solution that
generalizes well must deal with diverse patterns associated with different
types of broken objects. We propose a method that tackles the pairwise assembly
of 3D point clouds, that is agnostic on the type of object, and that relies
solely on their geometrical information, without any prior information on the
shape of the reconstructed object. The method receives two point clouds as
input and segments them into regions using detected closed boundary contours,
known as breaking curves. Possible alignment combinations of the regions of
each broken object are evaluated and the best one is selected as the final
alignment. Experiments were carried out both on available 3D scanned objects
and on a recent benchmark for synthetic broken objects. Results show that our
solution performs well in reassembling different kinds of broken objects.
|
[
"cs.CV"
] | false |
2306.02854
|
2023-06-05T13:10:48Z
|
Asymmetric Patch Sampling for Contrastive Learning
|
[
"Chengchao Shen",
"Jianzhong Chen",
"Shu Wang",
"Hulin Kuang",
"Jin Liu",
"Jianxin Wang"
] |
Asymmetric appearance between the two views of a positive pair effectively reduces the risk of representation degradation in contrastive learning. However, the positive pairs constructed by existing methods still share substantial appearance similarity, which inhibits further representation improvement. In
this paper, we propose a novel asymmetric patch sampling strategy for
contrastive learning, to further boost the appearance asymmetry for better
representations. Specifically, dual patch sampling strategies are applied to
the given image, to obtain asymmetric positive pairs. First, sparse patch
sampling is conducted to obtain the first view, which reduces spatial
redundancy of image and allows a more asymmetric view. Second, a selective
patch sampling is proposed to construct another view with large appearance
discrepancy relative to the first one. Because the positive pair shares little appearance similarity, the trained model is encouraged to capture similarity at the semantic level instead of at the low level. Experimental results
demonstrate that our proposed method significantly outperforms the existing
self-supervised methods on both the ImageNet-1K and CIFAR datasets, e.g., a 2.5% fine-tuning accuracy improvement on CIFAR100. Furthermore, our method achieves state-of-the-art performance on the downstream tasks of object detection and instance segmentation on COCO. Additionally, compared to other self-supervised methods, our method is more efficient in both memory and computation during training.
The source code is available at https://github.com/visresearch/aps.
|
[
"cs.CV"
] | false |
2306.02878
|
2023-06-05T13:49:24Z
|
Single-Stage 3D Geometry-Preserving Depth Estimation Model Training on
Dataset Mixtures with Uncalibrated Stereo Data
|
[
"Nikolay Patakin",
"Mikhail Romanov",
"Anna Vorontsova",
"Mikhail Artemyev",
"Anton Konushin"
] |
Nowadays, robotics, AR, and 3D modeling applications attract considerable
attention to single-view depth estimation (SVDE) as it allows estimating scene
geometry from a single RGB image. Recent works have demonstrated that the
accuracy of an SVDE method hugely depends on the diversity and volume of the
training data. However, RGB-D datasets obtained via depth capturing or 3D
reconstruction are typically small, synthetic datasets are not photorealistic
enough, and all these datasets lack diversity. The large-scale and diverse data
can be sourced from stereo images or stereo videos from the web. Typically
being uncalibrated, stereo data provides disparities up to unknown shift
(geometrically incomplete data), so stereo-trained SVDE methods cannot recover
3D geometry. It was recently shown that the distorted point clouds obtained
with a stereo-trained SVDE method can be corrected with additional point cloud
modules (PCM) separately trained on the geometrically complete data. On the
contrary, we propose GP$^{2}$, General-Purpose and Geometry-Preserving training
scheme, and show that conventional SVDE models can learn correct shifts
themselves without any post-processing, benefiting from using stereo data even
in the geometry-preserving setting. Through experiments on different dataset
mixtures, we prove that GP$^{2}$-trained models outperform methods relying on
PCM in both accuracy and speed, and report the state-of-the-art results in the
general-purpose geometry-preserving SVDE. Moreover, we show that SVDE models
can learn to predict geometrically correct depth even when geometrically
complete data comprises the minor part of the training set.
|
[
"cs.CV"
] | false |
2306.02900
|
2023-06-05T14:06:40Z
|
Robust Fiber ODF Estimation Using Deep Constrained Spherical
Deconvolution for Diffusion MRI
|
[
"Tianyuan Yao",
"Francois Rheault",
"Leon Y Cai",
"Vishwesh nath",
"Zuhayr Asad",
"Nancy Newlin",
"Can Cui",
"Ruining Deng",
"Karthik Ramadass",
"Andrea Shafer",
"Susan Resnick",
"Kurt Schilling",
"Bennett A. Landman",
"Yuankai Huo"
] |
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a critical imaging
method for capturing and modeling tissue microarchitecture at a millimeter
scale. A common practice to model the measured DW-MRI signal is via fiber
orientation distribution function (fODF). This function is the essential first
step for the downstream tractography and connectivity analyses. With recent advances in data sharing, large-scale multi-site DW-MRI datasets are being
made available for multi-site studies. However, measurement variabilities
(e.g., inter- and intra-site variability, hardware performance, and sequence
design) are inevitable during the acquisition of DW-MRI. Most existing
model-based methods (e.g., constrained spherical deconvolution (CSD)) and
learning based methods (e.g., deep learning (DL)) do not explicitly consider
such variabilities in fODF modeling, which consequently leads to inferior
performance on multi-site and/or longitudinal diffusion studies. In this paper,
we propose a novel data-driven deep constrained spherical deconvolution method
to explicitly constrain the scan-rescan variabilities for a more reproducible
and robust estimation of brain microstructure from repeated DW-MRI scans.
Specifically, the proposed method introduces a new 3D volumetric
scanner-invariant regularization scheme during the fODF estimation. We study
the Human Connectome Project (HCP) young adults test-retest group as well as
the MASiVar dataset (with inter- and intra-site scan/rescan data). The
Baltimore Longitudinal Study of Aging (BLSA) dataset is employed for external
validation. From the experimental results, the proposed data-driven framework
outperforms the existing benchmarks in repeated fODF estimation. The proposed method is further assessed on downstream connectivity analysis and shows improved performance in distinguishing subjects with different biomarkers.
|
[
"cs.CV"
] | false |
2306.02903
|
2023-06-05T14:10:28Z
|
Instruct-Video2Avatar: Video-to-Avatar Generation with Instructions
|
[
"Shaoxu Li"
] |
We propose a method for synthesizing edited photo-realistic digital avatars
with text instructions. Given a short monocular RGB video and text
instructions, our method uses an image-conditioned diffusion model to edit one
head image and uses the video stylization method to accomplish the editing of
other head images. Through iterative training and update (three times or more),
our method synthesizes edited photo-realistic animatable 3D neural head avatars
with a deformable neural radiance field head synthesis method. In quantitative
and qualitative studies on various subjects, our method outperforms
state-of-the-art methods.
|
[
"cs.CV"
] | false |
2306.02930
|
2023-06-05T14:48:30Z
|
Human Spine Motion Capture using Perforated Kinesiology Tape
|
[
"Hendrik Hachmann",
"Bodo Rosenhahn"
] |
In this work, we present a marker-based multi-view spine tracking method that
is specifically tailored to the requirements of movements in sports. The primary focus is on accurate marker detection and fast deployment of the system. For
this task, we take advantage of the prior knowledge of the arrangement of dots
in perforated kinesiology tape. We detect the tape and its dots using a Mask
R-CNN and a blob detector. Here, we can focus on detection only while skipping
any image-based feature encoding or matching. We conduct a reasoning in 3D by a
linear program and Markov random fields, in which the structure of the
kinesiology tape is modeled and the shape of the spine is optimized. In
comparison to state-of-the-art systems, we demonstrate that our system achieves
high precision and marker density, is robust against occlusions, and capable of
capturing fast movements.
|
[
"cs.CV"
] | false |
2306.02954
|
2023-06-05T15:20:44Z
|
Color-aware Deep Temporal Backdrop Duplex Matting System
|
[
"Hendrik Hachmann",
"Bodo Rosenhahn"
] |
Deep learning-based alpha matting showed tremendous improvements in recent
years, yet, feature film production studios still rely on classical chroma
keying including costly post-production steps. This perceived discrepancy can
be explained by some missing links necessary for production which are currently
not adequately addressed in the alpha matting community, in particular
foreground color estimation or color spill compensation. We propose a neural
network-based temporal multi-backdrop production system that combines
beneficial features from chroma keying and alpha matting. Given two consecutive
frames with different background colors, our one-encoder-dual-decoder network
predicts foreground colors and alpha values using a patch-based overlap-blend
approach. The system is able to handle imprecise backdrops, dynamic cameras,
and dynamic foregrounds and has no restrictions on foreground colors. We
compare our method to state-of-the-art algorithms using benchmark datasets and
a video sequence captured by a demonstrator setup. We verify that a dual
backdrop input is superior to the usually applied trimap-based approach. In
addition, the proposed studio set is actor friendly, and produces high-quality,
temporal consistent alpha and color estimations that include a superior color
spill compensation.
|
[
"cs.CV"
] | false |
2306.03050
|
2023-06-05T17:22:27Z
|
ELEV-VISION: Automated Lowest Floor Elevation Estimation from Segmenting
Street View Images
|
[
"Yu-Hsuan Ho",
"Cheng-Chun Lee",
"Nicholas D. Diaz",
"Samuel D. Brody",
"Ali Mostafavi"
] |
We propose an automated lowest floor elevation (LFE) estimation algorithm
based on computer vision techniques to leverage the latent information in
street view images. Flood depth-damage models use a combination of LFE and
flood depth for determining flood risk and extent of damage to properties. We
used image segmentation for detecting door bottoms and roadside edges from
Google Street View images. The characteristic of equirectangular projection
with constant spacing representation of horizontal and vertical angles allows
extraction of the pitch angle from the camera to the door bottom. The depth
from the camera to the door bottom was obtained from the depthmap paired with
the Google Street View image. LFEs were calculated from the pitch angle and the
depth. The testbed for application of the proposed method is Meyerland (Harris
County, Texas). The results show that the proposed method achieved mean
absolute error of 0.190 m (1.18 %) in estimating LFE. The height difference
between the street and the lowest floor (HDSL) was estimated to provide
information for flood damage estimation. The proposed automatic LFE estimation
algorithm using Street View images and image segmentation provides a rapid and
cost-effective method for LFE estimation compared with the surveys using total
station theodolite and unmanned aerial systems. By obtaining more accurate and
up-to-date LFE data using the proposed method, city planners, emergency
planners and insurance companies could make a more precise estimation of flood
damage.
|
[
"cs.CV"
] | false |
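The geometry described above (2306.03050) reduces to simple trigonometry once the pitch angle and the depth to the door bottom are known. The sketch below assumes an equirectangular panorama whose rows map linearly to pitch and a known camera elevation; both are simplifications of the paper's full pipeline:

    import math

    def lowest_floor_elevation(door_row, img_height, depth_m, camera_elev_m):
        """Estimate the door-bottom elevation from a street-view panorama (sketch).
        door_row: pixel row of the detected door bottom; img_height: panorama
        height in pixels; depth_m: camera-to-door-bottom distance from the
        depth map; camera_elev_m: camera elevation above a vertical datum."""
        # Equirectangular projection: constant angular spacing per row, so the
        # pitch ranges linearly from +pi/2 (top row) to -pi/2 (bottom row).
        pitch = (0.5 - door_row / img_height) * math.pi
        # Vertical offset from the camera; negative below the horizon.
        return camera_elev_m + depth_m * math.sin(pitch)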
2306.03206
|
2023-06-05T19:28:19Z
|
MoDAR: Using Motion Forecasting for 3D Object Detection in Point Cloud
Sequences
|
[
"Yingwei Li",
"Charles R. Qi",
"Yin Zhou",
"Chenxi Liu",
"Dragomir Anguelov"
] |
Occluded and long-range objects are ubiquitous and challenging for 3D object
detection. Point cloud sequence data provide unique opportunities to improve
such cases, as an occluded or distant object can be observed from different
viewpoints or gets better visibility over time. However, the efficiency and
effectiveness in encoding long-term sequence data can still be improved. In
this work, we propose MoDAR, using motion forecasting outputs as a type of
virtual modality, to augment LiDAR point clouds. The MoDAR modality propagates
object information from temporal contexts to a target frame, represented as a
set of virtual points, one for each object from a waypoint on a forecasted
trajectory. A fused point cloud of both raw sensor points and the virtual
points can then be fed to any off-the-shelf point-cloud based 3D object
detector. Evaluated on the Waymo Open Dataset, our method significantly
improves prior art detectors by using motion forecasting from extra-long
sequences (e.g., 18 seconds), achieving a new state of the art while adding little computational overhead.
|
[
"cs.CV"
] | false |
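To illustrate the virtual-modality idea in the abstract above (2306.03206), the sketch below turns forecasted waypoints into one virtual point per object and fuses them with the raw LiDAR cloud. The feature layout and waypoint-selection rule are assumptions, not the MoDAR specification:

    import numpy as np

    def fuse_modar_points(lidar_xyz, forecasts, target_time, feat_dim):
        """lidar_xyz: (N, 3) raw points. forecasts: list of dicts with
        'times' (T,), 'waypoints' (T, 3), and an object feature 'feat' (feat_dim,).
        Returns a fused (M, 3 + feat_dim) cloud; raw points get zero features."""
        virtual = []
        for obj in forecasts:
            # waypoint on the forecasted trajectory closest to the target frame
            i = int(np.argmin(np.abs(obj["times"] - target_time)))
            virtual.append(np.concatenate([obj["waypoints"][i], obj["feat"]]))
        virtual = np.stack(virtual) if virtual else np.zeros((0, 3 + feat_dim))
        raw = np.concatenate([lidar_xyz, np.zeros((len(lidar_xyz), feat_dim))], axis=1)
        return np.concatenate([raw, virtual], axis=0)

The fused cloud can then be fed to any off-the-shelf point-cloud detector, as the abstract notes.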
2306.03222
|
2023-06-05T20:16:19Z
|
Confidence-based federated distillation for vision-based lane-centering
|
[
"Yitao Chen",
"Dawei Chen",
"Haoxin Wang",
"Kyungtae Han",
"Ming Zhao"
] |
A fundamental challenge of autonomous driving is maintaining the vehicle in
the center of the lane by adjusting the steering angle. Recent advances
leverage deep neural networks to predict steering decisions directly from
images captured by the car cameras. Machine learning-based steering angle
prediction needs to consider the vehicle's limitation in uploading large
amounts of potentially private data for model training. Federated learning can
address these constraints by enabling multiple vehicles to collaboratively
train a global model without sharing their private data, but it is difficult to
achieve good accuracy as the data distribution is often non-i.i.d. across the
vehicles. This paper presents a new confidence-based federated distillation
method to improve the performance of federated learning for steering angle
prediction. Specifically, it proposes the novel use of entropy to determine the
predictive confidence of each local model, and then selects the most confident
local model as the teacher to guide the learning of the global model. A
comprehensive evaluation of vision-based lane centering shows that the proposed
approach can outperform FedAvg and FedDF by 11.3% and 9%, respectively.
|
[
"cs.CV"
] | false |
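A small sketch of the entropy-based teacher selection described above (2306.03222). Since the paper targets steering-angle prediction, we assume here a discretized (classification-style) output head so that softmax entropy is well defined:

    import numpy as np

    def mean_predictive_entropy(logits):
        """logits: (N, C) outputs of one local model on a shared batch."""
        z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

    def select_teacher(local_logits):
        """Pick the most confident (lowest mean entropy) local model to act as
        the teacher for distilling into the global model (sketch)."""
        entropies = [mean_predictive_entropy(l) for l in local_logits]
        return int(np.argmin(entropies))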
2306.03287
|
2023-06-05T22:20:52Z
|
ICDAR 2023 Competition on Structured Text Extraction from Visually-Rich
Document Images
|
[
"Wenwen Yu",
"Chengquan Zhang",
"Haoyu Cao",
"Wei Hua",
"Bohan Li",
"Huang Chen",
"Mingyu Liu",
"Mingrui Chen",
"Jianfeng Kuang",
"Mengjun Cheng",
"Yuning Du",
"Shikun Feng",
"Xiaoguang Hu",
"Pengyuan Lyu",
"Kun Yao",
"Yuechen Yu",
"Yuliang Liu",
"Wanxiang Che",
"Errui Ding",
"Cheng-Lin Liu",
"Jiebo Luo",
"Shuicheng Yan",
"Min Zhang",
"Dimosthenis Karatzas",
"Xing Sun",
"Jingdong Wang",
"Xiang Bai"
] |
Structured text extraction is one of the most valuable and challenging
application directions in the field of Document AI. However, the scenarios of
past benchmarks are limited, and the corresponding evaluation protocols usually
focus on the submodules of the structured text extraction scheme. In order to
eliminate these problems, we organized the ICDAR 2023 competition on Structured
text extraction from Visually-Rich Document images (SVRD). We set up two tracks
for SVRD including Track 1: HUST-CELL and Track 2: Baidu-FEST, where HUST-CELL
aims to evaluate the end-to-end performance of Complex Entity Linking and
Labeling, and Baidu-FEST focuses on evaluating the performance and
generalization of Zero-shot / Few-shot Structured Text extraction from an
end-to-end perspective. Compared to the current document benchmarks, our two
tracks of competition benchmark enriches the scenarios greatly and contains
more than 50 types of visually-rich document images (mainly from the actual
enterprise applications). The competition opened on 30th December, 2022 and
closed on 24th March, 2023. There are 35 participants and 91 valid submissions
received for Track 1, and 15 participants and 26 valid submissions received for
Track 2. In this report, we present the motivation, competition datasets, task definition, evaluation protocol, and a summary of submissions. Judging by the performance of the submissions, we believe there is still a large gap from the expected information extraction performance in complex and zero-shot scenarios. It is hoped that this competition will attract many researchers in
the field of CV and NLP, and bring some new thoughts to the field of Document
AI.
|
[
"cs.CV"
] | false |
2306.02564
|
2023-06-05T03:36:01Z
|
Spatial Implicit Neural Representations for Global-Scale Species Mapping
|
[
"Elijah Cole",
"Grant Van Horn",
"Christian Lange",
"Alexander Shepard",
"Patrick Leary",
"Pietro Perona",
"Scott Loarie",
"Oisin Mac Aodha"
] |
Estimating the geographical range of a species from sparse observations is a
challenging and important geospatial prediction problem. Given a set of
locations where a species has been observed, the goal is to build a model to
predict whether the species is present or absent at any location. This problem
has a long history in ecology, but traditional methods struggle to take
advantage of emerging large-scale crowdsourced datasets which can include tens
of millions of records for hundreds of thousands of species. In this work, we
use Spatial Implicit Neural Representations (SINRs) to jointly estimate the
geographical range of 47k species simultaneously. We find that our approach
scales gracefully, making increasingly better predictions as we increase the
number of species and the amount of data per species when training. To make
this problem accessible to machine learning researchers, we provide four new
benchmarks that measure different aspects of species range estimation and
spatial representation learning. Using these benchmarks, we demonstrate that
noisy and biased crowdsourced data can be combined with implicit neural
representations to approximate expert-developed range maps for many species.
|
[
"cs.LG",
"cs.CV"
] | false |
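A toy version of a spatial implicit neural representation as described above (2306.02564): a coordinate MLP mapping (lon, lat) to per-species presence probabilities. The sin/cos encoding, depth, and output head are assumptions; the paper's models and presence-only training objectives are more involved:

    import torch
    import torch.nn as nn

    class TinySINR(nn.Module):
        """Coordinate MLP: (lon, lat) in degrees -> per-species presence probs."""
        def __init__(self, n_species, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_species),
            )

        def forward(self, lonlat):
            # sin/cos wrapping keeps the input continuous across the antimeridian
            lon = torch.deg2rad(lonlat[:, 0])
            lat = torch.deg2rad(lonlat[:, 1])
            x = torch.stack([lon.sin(), lon.cos(), lat.sin(), lat.cos()], dim=1)
            return torch.sigmoid(self.net(x))

Because all species share one network, adding species only widens the final layer, which is consistent with the graceful scaling the abstract reports.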
2306.02648
|
2023-06-05T07:32:47Z
|
Continuous Cartesian Genetic Programming based representation for
Multi-Objective Neural Architecture Search
|
[
"Cosijopii Garcia-Garcia",
"Alicia Morales-Reyes",
"Hugo Jair Escalante"
] |
We propose a novel approach for the challenge of designing less complex yet
highly effective convolutional neural networks (CNNs) through the use of
cartesian genetic programming (CGP) for neural architecture search (NAS). Our
approach combines real-based and block-chained CNNs representations based on
CGP for optimization in the continuous domain using multi-objective
evolutionary algorithms (MOEAs). Two variants are introduced that differ in the
granularity of the search space they consider. The proposed CGP-NASV1 and
CGP-NASV2 algorithms were evaluated using the non-dominated sorting genetic
algorithm II (NSGA-II) on the CIFAR-10 and CIFAR-100 datasets. The empirical
analysis was extended to assess the crossover operator from differential
evolution (DE), the multi-objective evolutionary algorithm based on
decomposition (MOEA/D) and S metric selection evolutionary multi-objective
algorithm (SMS-EMOA) using the same representation. Experimental results
demonstrate that our approach is competitive with state-of-the-art proposals in
terms of classification performance and model complexity.
|
[
"cs.NE",
"cs.CV"
] | false |
2306.02651
|
2023-06-05T07:34:41Z
|
Dynamic Interactive Relation Capturing via Scene Graph Learning for
Robotic Surgical Report Generation
|
[
"Hongqiu Wang",
"Yueming Jin",
"Lei Zhu"
] |
For robot-assisted surgery, an accurate surgical report reflects clinical
operations during surgery and helps document entry tasks, post-operative
analysis and follow-up treatment. It is a challenging task due to many complex
and diverse interactions between instruments and tissues in the surgical scene.
Although existing surgical report generation methods based on deep learning
have achieved great success, they often ignore the interactive relations between tissues and instrumental tools, thereby degrading the report generation
performance. This paper presents a neural network to boost surgical report
generation by explicitly exploring the interactive relation between tissues and
surgical instruments. We validate the effectiveness of our method on a
widely-used robotic surgery benchmark dataset, and experimental results show
that our network can significantly outperform existing state-of-the-art
surgical report generation methods (e.g., 7.48% and 5.43% higher for BLEU-1 and
ROUGE).
|
[
"cs.CV",
"cs.LG"
] | false |
2306.02656
|
2023-06-05T07:42:53Z
|
Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method
Using Segment Anything
|
[
"Zhaotong Luo",
"Guohang Yan",
"Yikang Li"
] |
Research on extrinsic calibration between Light Detection and Ranging (LiDAR) sensors and cameras is moving toward more accurate, automatic, and generic methods. Since deep learning has been employed in calibration, the restrictions on the scene are greatly reduced. However, data-driven methods have the drawback of low transferability: they cannot adapt to dataset variations unless additional training is performed. With the advent of foundation models, this problem can be significantly mitigated. Using the Segment Anything Model (SAM), we propose a novel LiDAR-camera calibration method that requires zero extra training and adapts to common scenes. Starting from an initial guess, we optimize the extrinsic parameters by maximizing the consistency of points projected inside each image mask. The consistency covers three properties of the point cloud: intensity, normal vectors, and categories derived from segmentation methods. Experiments on different datasets demonstrate the generality and comparable accuracy of our method. The code is
available at https://github.com/OpenCalib/CalibAnything.
|
[
"cs.CV",
"cs.RO"
] | false |
2306.02691
|
2023-06-05T08:32:12Z
|
Cyclic Learning: Bridging Image-level Labels and Nuclei Instance
Segmentation
|
[
"Yang Zhou",
"Yongjian Wu",
"Zihua Wang",
"Bingzheng Wei",
"Maode Lai",
"Jianzhong Shou",
"Yubo Fan",
"Yan Xu"
] |
Nuclei instance segmentation on histopathology images is of great clinical
value for disease analysis. Generally, fully-supervised algorithms for this
task require pixel-wise manual annotations, which is especially time-consuming
and laborious for the high nuclei density. To alleviate the annotation burden,
we seek to solve the problem through image-level weakly supervised learning,
which is underexplored for nuclei instance segmentation. Compared with most
existing methods using other weak annotations (scribble, point, etc.) for
nuclei instance segmentation, our method is more labor-saving. The obstacle to
using image-level annotations in nuclei instance segmentation is the lack of
adequate location information, leading to severe nuclei omission or overlaps.
In this paper, we propose a novel image-level weakly supervised method, called
cyclic learning, to solve this problem. Cyclic learning comprises a front-end
classification task and a back-end semi-supervised instance segmentation task
to benefit from multi-task learning (MTL). We utilize a deep learning
classifier with interpretability as the front-end to convert image-level labels
to sets of high-confidence pseudo masks and establish a semi-supervised
architecture as the back-end to conduct nuclei instance segmentation under the
supervision of these pseudo masks. Most importantly, cyclic learning is
designed to circularly share knowledge between the front-end classifier and the
back-end semi-supervised part, which allows the whole system to fully extract
the underlying information from image-level labels and converge to a better
optimum. Experiments on three datasets demonstrate the good generality of our
method, which outperforms other image-level weakly supervised methods for
nuclei instance segmentation, and achieves comparable performance to
fully-supervised methods.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.02712
|
2023-06-05T09:02:48Z
|
NFTVis: Visual Analysis of NFT Performance
|
[
"Fan Yan",
"Xumeng Wang",
"Ketian Mao",
"Wei Zhang",
"Wei Chen"
] |
A non-fungible token (NFT) is a data unit stored on the blockchain. Nowadays,
more and more investors and collectors (NFT traders), who participate in
transactions of NFTs, have an urgent need to assess the performance of NFTs.
However, there are two challenges for NFT traders when analyzing the
performance of NFT. First, the current rarity models have flaws and are
sometimes not convincing. In addition, NFT performance is dependent on multiple
factors, such as images (high-dimensional data), history transactions
(network), and market evolution (time series). It is difficult to take
comprehensive consideration and analyze NFT performance efficiently. To address
these challenges, we propose NFTVis, a visual analysis system that facilitates
assessing individual NFT performance. A new NFT rarity model is proposed to
quantify NFTs with images. Four well-coordinated views are designed to
represent the various factors affecting the performance of the NFT. Finally, we
evaluate the usefulness and effectiveness of our system using two case studies
and user studies.
|
[
"cs.CV",
"cs.CE"
] | false |
2306.02815
|
2023-06-05T12:12:23Z
|
Transformer-Based UNet with Multi-Headed Cross-Attention Skip
Connections to Eliminate Artifacts in Scanned Documents
|
[
"David Kreuzer",
"Michael Munz"
] |
The extraction of text in high quality is essential for text-based document
analysis tasks like Document Classification or Named Entity Recognition.
Unfortunately, this is not always ensured, as poor scan quality and the
resulting artifacts lead to errors in the Optical Character Recognition (OCR)
process. Current approaches using Convolutional Neural Networks show promising
results for background removal tasks but fail to correct artifacts such as pixelation or compression errors. For general images, Transformer backbones are increasingly integrated into well-known neural network architectures for denoising tasks. In this work, a modified UNet structure using a Swin
Transformer backbone is presented to remove typical artifacts in scanned
documents. Multi-headed cross-attention skip connections are used to more
selectively learn features in respective levels of abstraction. The performance
of this approach is examined regarding compression errors, pixelation and
random noise. An improvement in text extraction quality with a reduced error
rate of up to 53.9% on the synthetic data is achieved. The pretrained
base-model can be easily adapted to new artifacts. The cross-attention skip
connections allow to integrate textual information extracted from the encoder
or in the form of commands to more selectively control the model's outcome. The
latter is shown by means of an example application.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.02883
|
2023-06-05T13:52:08Z
|
Unsupervised network for low-light enhancement
|
[
"Praveen Kandula",
"Maitreya Suin",
"A. N. Rajagopalan"
] |
Supervised networks address the task of low-light enhancement using paired
images. However, collecting a wide variety of low-light/clean paired images is
tedious as the scene needs to remain static during imaging. In this paper, we
propose an unsupervised low-light enhancement network using a context-guided illumination-adaptive norm (CIN). Inspired by coarse-to-fine methods, we
propose to address this task in two stages. In stage-I, a pixel amplifier
module (PAM) is used to generate a coarse estimate with an overall improvement
in visibility and aesthetic quality. Stage-II further enhances the saturated
dark pixels and scene properties of the image using CIN. Different ablation
studies show the importance of PAM and CIN in improving the visible quality of
the image. Next, we propose a region-adaptive single input multiple output
(SIMO) model that can generate multiple enhanced images from a single low-light image. The objective of SIMO is to let users choose the image of their liking
from a pool of enhanced images. Human subjective analysis of SIMO results shows
that the distribution of preferred images varies, endorsing the importance of
SIMO-type models. Lastly, we propose a low-light road scene (LLRS) dataset
having an unpaired collection of low-light and clean scenes. Unlike existing
datasets, the clean and low-light scenes in LLRS are real and captured using
fixed camera settings. Exhaustive comparisons on publicly available datasets,
and the proposed dataset reveal that the results of our model outperform prior
art quantitatively and qualitatively.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.02901
|
2023-06-05T14:06:43Z
|
A Vessel-Segmentation-Based CycleGAN for Unpaired Multi-modal Retinal
Image Synthesis
|
[
"Aline Sindel",
"Andreas Maier",
"Vincent Christlein"
] |
Unpaired image-to-image translation of retinal images can efficiently
increase the training dataset for deep-learning-based multi-modal retinal
registration methods. Our method integrates a vessel segmentation network into
the image-to-image translation task by extending the CycleGAN framework. The
segmentation network is inserted prior to a UNet vision transformer generator
network and serves as a shared representation between both domains. We
reformulate the original identity loss to learn the direct mapping between the
vessel segmentation and the real image. Additionally, we add a segmentation
loss term to ensure shared vessel locations between fake and real images. In
the experiments, our method shows a visually realistic look and preserves the
vessel structures, which is a prerequisite for generating multi-modal training
data for image registration.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.02912
|
2023-06-05T14:15:46Z
|
Unsupervised haze removal from underwater images
|
[
"Praveen Kandula",
"A. N. Rajagopalan"
] |
Several supervised networks exist that remove haze information from
underwater images using paired datasets and pixel-wise loss functions. However,
training these networks requires large amounts of paired data which is
cumbersome, complex and time-consuming. Also, directly using adversarial and
cycle consistency loss functions for unsupervised learning is inaccurate as the
underlying mapping from clean to underwater images is one-to-many, resulting in
an inaccurate constraint on the cycle consistency loss. To address these
issues, we propose a new method to remove haze from underwater images using
unpaired data. Our model disentangles haze and content information from
underwater images using a Haze Disentanglement Network (HDN). The disentangled
content is used by a restoration network to generate a clean image using
adversarial losses. The disentangled haze is then used as a guide for
underwater image regeneration resulting in a strong constraint on cycle
consistency loss and improved performance gains. Different ablation studies
show that the haze and content from underwater images are effectively
separated. Exhaustive experiments reveal that accurate cycle consistency
constraint and the proposed network architecture play an important role in
yielding enhanced results. Experiments on UFO-120, UWNet, UWScenes, and UIEB
underwater datasets indicate that the results of our method outperform prior
art both visually and quantitatively.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.02921
|
2023-06-05T14:34:58Z
|
Zero shot framework for satellite image restoration
|
[
"Praveen Kandula",
"A. N. Rajagopalan"
] |
Satellite images are typically subject to multiple distortions. Different
factors affect the quality of satellite images, including changes in
atmosphere, surface reflectance, sun illumination, viewing geometries etc.,
limiting its application to downstream tasks. In supervised networks, the
availability of paired datasets is a strong assumption. Consequently, many
unsupervised algorithms have been proposed to address this problem. These
methods synthetically generate a large dataset of degraded images using image
formation models. A neural network is then trained with an adversarial loss to
discriminate between images from distorted and clean domains. However, these
methods yield suboptimal performance when tested on real images that do not
necessarily conform to the generation mechanism. Also, they require a large
amount of training data and are rendered unsuitable when only a few images are
available. We propose a distortion disentanglement and knowledge distillation
framework for satellite image restoration to address these important issues.
Our algorithm requires only two images: the distorted satellite image to be
restored and a reference image with similar semantics. Specifically, we first
propose a mechanism to disentangle distortion. This enables us to generate
images with varying degrees of distortion using the disentangled distortion and
the reference image. We then propose the use of knowledge distillation to train
a restoration network using the generated image pairs. As a final step, the
distorted image is passed through the restoration network to get the final
output. Ablation studies show that our proposed mechanism successfully
disentangles distortion.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.02949
|
2023-06-05T15:14:47Z
|
INDigo: An INN-Guided Probabilistic Diffusion Algorithm for Inverse
Problems
|
[
"Di You",
"Andreas Floros",
"Pier Luigi Dragotti"
] |
Recently it has been shown that using diffusion models for inverse problems
can lead to remarkable results. However, these approaches require a closed-form expression of the degradation model and cannot support complex degradations.
To overcome this limitation, we propose a method (INDigo) that combines
invertible neural networks (INN) and diffusion models for general inverse
problems. Specifically, we train the forward process of INN to simulate an
arbitrary degradation process and use the inverse as a reconstruction process.
During the diffusion sampling process, we impose an additional data-consistency
step that minimizes the distance between the intermediate result and the
INN-optimized result at every iteration, where the INN-optimized image is
composed of the coarse information given by the observed degraded image and the
details generated by the diffusion process. With the help of INN, our algorithm
effectively estimates the details lost in the degradation process and is no
longer limited by the requirement of knowing the closed-form expression of the
degradation model. Experiments demonstrate that our algorithm obtains
competitive results compared with recently leading methods both quantitatively
and visually. Moreover, our algorithm performs well on more complex degradation
models and real-world low-quality images.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.03002
|
2023-06-05T16:11:19Z
|
Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face
Morphing Detection
|
[
"Eduarda Caldeira",
"Pedro C. Neto",
"Tiago Gonçalves",
"Naser Damer",
"Ana F. Sequeira",
"Jaime S. Cardoso"
] |
Morphing attacks keep threatening biometric systems, especially face
recognition systems. Over time, they have become simpler to perform and more realistic; as such, the use of deep learning systems to detect these attacks has grown. At the same time, there is a constant concern regarding the lack of
interpretability of deep learning models. Balancing performance and
interpretability has been a difficult task for scientists. However, by leveraging domain information and imposing some constraints, we have been able
to develop IDistill, an interpretable method with state-of-the-art performance
that provides information on both the identity separation on morph samples and
their contribution to the final prediction. The domain information is learnt by
an autoencoder and distilled to a classifier system in order to teach it to
separate identity information. Compared to other methods in the literature, it outperforms them on three out of five databases and is competitive on the remaining two.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.03021
|
2023-06-05T16:38:11Z
|
Automating Style Analysis and Visualization With Explainable AI -- Case
Studies on Brand Recognition
|
[
"Yu-hsuan Chen",
"Levent Burak Kara",
"Jonathan Cagan"
] |
Incorporating style-related objectives into shape design has been centrally
important to maximize product appeal. However, stylistic features such as
aesthetics and semantic attributes are hard to codify even for experts. As
such, algorithmic style capture and reuse have not fully benefited from
automated data-driven methodologies due to the challenging nature of design
describability. This paper proposes an AI-driven method to fully automate the
discovery of brand-related features. Our approach introduces BIGNet, a two-tier
Brand Identification Graph Neural Network (GNN) to classify and analyze scalable vector graphics (SVG). First, to tackle the scarcity of vectorized product
images, this research proposes two data acquisition workflows: parametric
modeling from small curve-based datasets, and vectorization from large
pixel-based datasets. Secondly, this study constructs a novel hierarchical GNN
architecture to learn from both SVG's curve-level and chunk-level parameters.
In the first case study, BIGNet not only classifies phone brands but also
captures brand-related features across multiple scales, such as the location of
the lens, the height-width ratio, and the screen-frame gap, as confirmed by AI
evaluation. In the second study, this paper showcases the generalizability of
BIGNet learning from a vectorized car image dataset and validates the
consistency and robustness of its predictions given four scenarios. The results
match the difference commonly observed in luxury vs. economy brands in the
automobile market. Finally, this paper also visualizes the activation maps
generated from a convolutional neural network and shows BIGNet's advantage of
being a more human-friendly, explainable, and explicit style-capturing agent.
Code and dataset can be found on Github:
1. Phone case study: github.com/parksandrecfan/bignet-phone 2. Car case
study: github.com/parksandrecfan/bignet-car
|
[
"cs.CV",
"cs.LG"
] | false |
2306.03151
|
2023-06-05T18:04:57Z
|
DISCount: Counting in Large Image Collections with Detector-Based
Importance Sampling
|
[
"Gustavo Perez",
"Subhransu Maji",
"Daniel Sheldon"
] |
Many modern applications use computer vision to detect and count objects in
massive image collections. However, when the detection task is very difficult
or in the presence of domain shifts, the counts may be inaccurate even with
significant investments in training data and model development. We propose
DISCount -- a detector-based importance sampling framework for counting in
large image collections that integrates an imperfect detector with
human-in-the-loop screening to produce unbiased estimates of counts. We propose
techniques for solving counting problems over multiple spatial or temporal
regions using a small number of screened samples and estimate confidence
intervals. This enables end-users to stop screening when estimates are
sufficiently accurate, which is often the goal in a scientific study. On the
technical side we develop variance reduction techniques based on control
variates and prove the (conditional) unbiasedness of the estimators. DISCount
leads to a 9-12x reduction in the labeling costs over naive screening for tasks
we consider, such as counting birds in radar imagery or estimating damaged
buildings in satellite imagery, and also surpasses alternative covariate-based
screening approaches in efficiency.
|
[
"cs.CV",
"cs.LG"
] | false |
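The estimator behind DISCount (2306.03151) can be sketched in a few lines: sample regions with probability proportional to (smoothed) detector counts, screen them with a human oracle, and correct the detector total with an importance-weighted term. The smoothing constant and interface are assumptions, but the control-variate form shown here is unbiased whenever every region has nonzero sampling probability:

    import numpy as np

    def discount_estimate(det_counts, screen_fn, n_samples, smooth=0.5, seed=0):
        """det_counts: per-region counts from an imperfect detector.
        screen_fn(i): returns the human-verified true count for region i.
        Returns an unbiased estimate of the total true count (sketch),
        using the detector total as a control variate."""
        c = np.asarray(det_counts, dtype=float)
        q = (c + smooth) / (c + smooth).sum()  # proposal; smoothing keeps q_i > 0
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(q), size=n_samples, p=q)
        true = np.array([screen_fn(i) for i in idx], dtype=float)
        correction = ((true - c[idx]) / q[idx]).mean()
        return c.sum() + correction

Since E[(g_i - c_i)/q_i] under the proposal equals sum(g) - sum(c), adding back sum(c) recovers an unbiased estimate of the true total, and the detector acting as a control variate shrinks the variance when it is roughly correct.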
2306.03168
|
2023-06-05T18:22:23Z
|
Composition and Deformance: Measuring Imageability with a Text-to-Image
Model
|
[
"Si Wu",
"David A. Smith"
] |
Although psycholinguists and psychologists have long studied the tendency of
linguistic strings to evoke mental images in hearers or readers, most
computational studies have applied this concept of imageability only to
isolated words. Using recent developments in text-to-image generation models,
such as DALLE mini, we propose computational methods that use generated images
to measure the imageability of both single English words and connected text. We
sample text prompts for image generation from three corpora: human-generated
image captions, news article sentences, and poem lines. We subject these
prompts to different deformances to examine the model's ability to detect
changes in imageability caused by compositional change. We find high
correlation between the proposed computational measures of imageability and
human judgments of individual words. We also find the proposed measures more
consistently respond to changes in compositionality than baseline approaches.
We discuss possible effects of model training and implications for the study of
compositionality in text-to-image models.
|
[
"cs.CL",
"cs.CV"
] | false |
2306.03229
|
2023-06-05T20:26:17Z
|
Adversarial alignment: Breaking the trade-off between the strength of an
attack and its relevance to human perception
|
[
"Drew Linsley",
"Pinyuan Feng",
"Thibaut Boissin",
"Alekh Karkada Ashok",
"Thomas Fel",
"Stephanie Olaiya",
"Thomas Serre"
] |
Deep neural networks (DNNs) are known to have a fundamental sensitivity to
adversarial attacks, perturbations of the input that are imperceptible to
humans yet powerful enough to change the visual decision of a model.
Adversarial attacks have long been considered the "Achilles' heel" of deep
learning, which may eventually force a shift in modeling paradigms.
Nevertheless, the formidable capabilities of modern large-scale DNNs have
somewhat eclipsed these early concerns. Do adversarial attacks continue to pose
a threat to DNNs?
Here, we investigate how the robustness of DNNs to adversarial attacks has
evolved as their accuracy on ImageNet has continued to improve. We measure
adversarial robustness in two different ways: First, we measure the smallest
adversarial attack needed to cause a model to change its object categorization
decision. Second, we measure how aligned successful attacks are with the
features that humans find diagnostic for object recognition. We find that
adversarial attacks are inducing bigger and more easily detectable changes to
image pixels as DNNs grow better on ImageNet, but these attacks are also
becoming less aligned with features that humans find diagnostic for
recognition. To better understand the source of this trade-off, we turn to the
neural harmonizer, a DNN training routine that encourages models to leverage
the same features as humans to solve tasks. Harmonized DNNs achieve the best of
both worlds and experience attacks that are detectable and affect features that
humans find diagnostic for recognition, meaning that attacks on these models
are more likely to be rendered ineffective by inducing similar effects on human
perception. Our findings suggest that the sensitivity of DNNs to adversarial
attacks can be mitigated by DNN scale, data scale, and training routines that
align models with biological intelligence.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.03271
|
2023-06-05T21:41:00Z
|
Dual self-distillation of U-shaped networks for 3D medical image
segmentation
|
[
"Soumyanil Banerjee",
"Ming Dong",
"Carri Glide-Hurst"
] |
U-shaped networks and its variants have demonstrated exceptional results for
medical image segmentation. In this paper, we propose a novel dual
self-distillation (DSD) framework for U-shaped networks for 3D medical image
segmentation. DSD distills knowledge from the ground-truth segmentation labels
to the decoder layers and also between the encoder and decoder layers of a
single U-shaped network. DSD is a generalized training strategy that could be
attached to the backbone architecture of any U-shaped network to further
improve its segmentation performance. We attached DSD on two state-of-the-art
U-shaped backbones, and extensive experiments on two public 3D medical image
segmentation datasets (cardiac substructure and brain tumor) demonstrated
significant improvement over those backbones. On average, after attaching DSD
to the U-shaped backbones, we observed an improvement of 4.25% and 3.15% in
Dice similarity score for cardiac substructure and brain tumor segmentation
respectively.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.06068
|
2023-06-05T11:16:47Z
|
DeepStay: Stay Region Extraction from Location Trajectories using Weak
Supervision
|
[
"Christian Löwens",
"Daniela Thyssens",
"Emma Andersson",
"Christina Jenkins",
"Lars Schmidt-Thieme"
] |
Nowadays, mobile devices enable constant tracking of the user's position and
location trajectories can be used to infer personal points of interest (POIs)
like homes, workplaces, or stores. A common way to extract POIs is to first
identify spatio-temporal regions where a user spends a significant amount of
time, known as stay regions (SRs).
Common approaches to SR extraction are evaluated either solely unsupervised
or on a small-scale private dataset, as popular public datasets are unlabeled.
Most of these methods rely on hand-crafted features or thresholds and do not
learn beyond hyperparameter optimization. Therefore, we propose a weakly and
self-supervised transformer-based model called DeepStay, which is trained on
location trajectories to predict stay regions. To the best of our knowledge,
this is the first approach based on deep learning and the first approach that
is evaluated on a public, labeled dataset. Our SR extraction method outperforms
state-of-the-art methods. In addition, we conducted a limited experiment on the
task of transportation mode detection from GPS trajectories using the same
architecture and achieved significantly higher scores than the
state-of-the-art. Our code is available at
https://github.com/christianll9/deepstay.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.02577
|
2023-06-05T04:00:39Z
|
Exploring the Role of the Bottleneck in Slot-Based Models Through
Covariance Regularization
|
[
"Andrew Stange",
"Robert Lo",
"Abishek Sridhar",
"Kousik Rajesh"
] |
In this project we attempt to make slot-based models with an image
reconstruction objective competitive with those that use a feature
reconstruction objective on real world datasets. We propose a loss-based
approach to constricting the bottleneck of slot-based models, allowing
larger-capacity encoder networks to be used with Slot Attention without
producing degenerate stripe-shaped masks. We find that our proposed method
offers an improvement over the baseline Slot Attention model but does not reach the performance of DINOSAUR on the COCO2017 dataset. Throughout this project,
we confirm the superiority of a feature reconstruction objective over an image
reconstruction objective and explore the role of the architectural bottleneck
in slot-based models.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
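The project title suggests a covariance-based penalty on the slot bottleneck (2306.02577); since the abstract does not give the exact loss, the sketch below shows a VICReg-style off-diagonal covariance regularizer applied to slot vectors as an assumed stand-in:

    import torch

    def slot_covariance_penalty(slots):
        """slots: (B, K, D) slot representations. Penalize off-diagonal entries
        of the feature covariance so slot dimensions stay decorrelated (sketch)."""
        b, k, d = slots.shape
        z = slots.reshape(b * k, d)
        z = z - z.mean(dim=0, keepdim=True)
        cov = (z.T @ z) / (z.shape[0] - 1)
        off_diag = cov - torch.diag(torch.diagonal(cov))
        return off_diag.pow(2).sum() / d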
2306.02589
|
2023-06-05T04:33:32Z
|
DAGrid: Directed Accumulator Grid
|
[
"Hang Zhang",
"Renjiu Hu",
"Xiang Chen",
"Rongguang Wang",
"Jinwei Zhang",
"Jiahao Li"
] |
Recent research highlights that the Directed Accumulator (DA), through its
parametrization of geometric priors into neural networks, has notably improved
the performance of medical image recognition, particularly with small and
imbalanced datasets. However, DA's potential in pixel-wise dense predictions is
unexplored. To bridge this gap, we present the Directed Accumulator Grid
(DAGrid), which allows geometric-preserving filtering in neural networks, thus
broadening the scope of DA's applications to include pixel-level dense
prediction tasks. DAGrid utilizes homogeneous data types in conjunction with
designed sampling grids to construct geometrically transformed representations,
retaining intricate geometric information and promoting long-range information
propagation within the neural networks. Contrary to its symmetric counterpart,
grid sampling, which might lose information in the sampling process, DAGrid
aggregates all pixels, ensuring a comprehensive representation in the
transformed space. DAGrid is parallelized on modern GPUs using CUDA programming, and backpropagation is enabled for deep neural
network training. Empirical results show DAGrid-enhanced neural networks excel
in supervised skin lesion segmentation and unsupervised cardiac image
registration. Specifically, the network incorporating DAGrid has realized a
70.8% reduction in network parameter size and a 96.8% decrease in FLOPs, while
concurrently improving the Dice score for skin lesion segmentation by 1.0%
compared to state-of-the-art transformers. Furthermore, it has achieved
improvements of 4.4% and 8.2% in the average Dice score and Dice score of the
left ventricular mass, respectively, indicating an increase in registration
accuracy for cardiac images. The source code is available at
https://github.com/tinymilky/DeDA.
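A minimal sketch of the accumulation idea (scatter-add of every pixel into a
target grid cell, as opposed to the gather of grid sampling); the geometric
grid construction and CUDA parallelization of the actual DAGrid are omitted,
and the shapes below are illustrative:

import torch

def directed_accumulate(feat, tgt_idx, grid_hw):
    """feat: (N, C) pixel features; tgt_idx: (N,) long tensor of flat
    target-cell indices into an H*W grid; returns (C, H, W)."""
    h, w = grid_hw
    out = torch.zeros(feat.shape[1], h * w, dtype=feat.dtype)
    out.index_add_(1, tgt_idx, feat.t())  # every pixel contributes, none dropped
    return out.view(-1, h, w)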
|
[
"cs.CV",
"cs.AI",
"eess.IV",
"eess.SP"
] | false |
2306.02596
|
2023-06-05T05:03:11Z
|
A Novel Interpretable and Generalizable Re-synchronization Model for
Cued Speech based on a Multi-Cuer Corpus
|
[
"Lufei Gao",
"Shan Huang",
"Li Liu"
] |
Cued Speech (CS) is a multi-modal visual coding system combining lip reading
with several hand cues at the phonetic level to make the spoken language
visible to the hearing impaired. Previous studies addressed the asynchrony
between lip and hand movements with a cuer\footnote{The people who perform Cued
Speech are called cuers.}-dependent piecewise linear model for English and
French CS. In this work, we propose three statistical measures on the lip
stream to build an interpretable and generalizable model for predicting hand
preceding time (HPT), which achieves cuer independence through proper
normalization. In particular, we build the first Mandarin CS corpus, comprising
annotated videos from five speakers: three normal-hearing and two
hearing-impaired individuals. We then show that the hand preceding phenomenon
exists in Mandarin CS production, with significant differences between
normal-hearing and hearing-impaired people. Extensive experiments demonstrate
that our model outperforms the baseline and the previous state-of-the-art methods.
|
[
"eess.AS",
"cs.CL",
"cs.CV"
] | false |
2306.02623
|
2023-06-05T06:50:42Z
|
Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual
Document Understanding Models
|
[
"Jiabang He",
"Yi Hu",
"Lei Wang",
"Xing Xu",
"Ning Liu",
"Hui Liu",
"Heng Tao Shen"
] |
Numerous pre-training techniques for visual document understanding (VDU) have
recently shown substantial improvements in performance across a wide range of
document tasks. However, these pre-trained VDU models cannot guarantee
continued success when the distribution of test data differs from the
distribution of training data. In this paper, to investigate how robust
existing pre-trained VDU models are to various distribution shifts, we first
develop an out-of-distribution (OOD) benchmark, termed Do-GOOD, specifically for
fine-Grained analysis of Document image-related tasks. The Do-GOOD
benchmark defines the underlying mechanisms that result in different
distribution shifts and contains 9 OOD datasets covering 3 VDU related tasks,
e.g., document information extraction, classification and question answering.
We then evaluate the robustness and perform a fine-grained analysis of 5 latest
VDU pre-trained models and 2 typical OOD generalization algorithms on these OOD
datasets. Results from the experiments demonstrate that there is a significant
performance gap between the in-distribution (ID) and OOD settings for document
images, and that fine-grained analysis of distribution shifts can reveal the
brittle nature of existing pre-trained VDU models and OOD generalization
algorithms. The code and datasets for our Do-GOOD benchmark can be found at
https://github.com/MAEHCM/Do-GOOD.
|
[
"cs.CV",
"cs.CL",
"cs.MM"
] | false |
2306.02634
|
2023-06-05T07:09:21Z
|
Computational 3D topographic microscopy from terabytes of data per
sample
|
[
"Kevin C. Zhou",
"Mark Harfouche",
"Maxwell Zheng",
"Joakim Jönsson",
"Kyung Chul Lee",
"Ron Appel",
"Paul Reamey",
"Thomas Doman",
"Veton Saliu",
"Gregor Horstmeyer",
"Roarke Horstmeyer"
] |
We present a large-scale computational 3D topographic microscope that enables
6-gigapixel profilometric 3D imaging at micron-scale resolution across $>$110
cm$^2$ areas over multi-millimeter axial ranges. Our computational microscope,
termed STARCAM (Scanning Topographic All-in-focus Reconstruction with a
Computational Array Microscope), features a parallelized, 54-camera
architecture with 3-axis translation to capture, for each sample of interest, a
multi-dimensional, 2.1-terabyte (TB) dataset, consisting of a total of 224,640
9.4-megapixel images. We developed a self-supervised neural network-based
algorithm for 3D reconstruction and stitching that jointly estimates an
all-in-focus photometric composite and 3D height map across the entire field of
view, using multi-view stereo information and image sharpness as a focal
metric. The memory-efficient, compressed differentiable representation offered
by the neural network effectively enables joint participation of the entire
multi-TB dataset during the reconstruction process. To demonstrate the broad
utility of our new computational microscope, we applied STARCAM to a variety of
decimeter-scale objects, with applications ranging from cultural heritage to
industrial inspection.
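A minimal sketch of the sharpness-as-focal-metric idea using a standard
Laplacian-energy criterion; STARCAM's joint neural reconstruction and stitching
are far more involved, so treat this as an assumption-laden stand-in:

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack, win=9):
    """stack: (Z, H, W) focal stack; returns an all-in-focus composite and
    a per-pixel height index (argmax of local Laplacian energy)."""
    sharp = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, win)
                      for s in stack])
    z = sharp.argmax(axis=0)                       # crude height map
    comp = np.take_along_axis(stack, z[None], axis=0)[0]
    return comp, z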
|
[
"physics.optics",
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
2306.02673
|
2023-06-05T08:07:01Z
|
Cross-Modal Vertical Federated Learning for MRI Reconstruction
|
[
"Yunlu Yan",
"Hong Wang",
"Yawen Huang",
"Nanjun He",
"Lei Zhu",
"Yuexiang Li",
"Yong Xu",
"Yefeng Zheng"
] |
Federated learning enables multiple hospitals to cooperatively learn a shared
model without privacy disclosure. Existing methods often take a common
assumption that the data from different hospitals have the same modalities.
However, such a setting is difficult to fully satisfy in practical
applications, since the imaging guidelines may be different between hospitals,
which makes the number of individuals with the same set of modalities limited.
To this end, we formulate this practical-yet-challenging cross-modal vertical
federated learning task, in which the data from multiple hospitals have
different modalities with a small amount of multi-modality data collected from
the same individuals. To tackle such a situation, we develop a novel framework,
namely Federated Consistent Regularization constrained Feature Disentanglement
(Fed-CRFD), for boosting MRI reconstruction by effectively exploring the
overlapping samples (individuals with multi-modalities) and solving the domain
shift problem caused by different modalities. Particularly, our Fed-CRFD
involves an intra-client feature disentangle scheme to decouple data into
modality-invariant and modality-specific features, where the modality-invariant
features are leveraged to mitigate the domain shift problem. In addition, a
cross-client latent representation consistency constraint is proposed
specifically for the overlapping samples to further align the
modality-invariant features extracted from different modalities. Hence, our
method can fully exploit the multi-source data from hospitals while alleviating
the domain shift problem. Extensive experiments on two typical MRI datasets
demonstrate that our network clearly outperforms state-of-the-art MRI
reconstruction methods. The source code will be publicly released upon the
publication of this work.
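A minimal sketch of the two ingredients named above: a feature-disentangling
head and a cross-client consistency loss on modality-invariant features of
overlapping patients (layer sizes and the MSE choice are illustrative
assumptions, not the authors' exact design):

import torch.nn as nn
import torch.nn.functional as F

class DisentangleHead(nn.Module):
    """Project a backbone feature into modality-invariant and
    modality-specific parts (intra-client disentanglement)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.inv = nn.Linear(d_in, d_out)
        self.spec = nn.Linear(d_in, d_out)

    def forward(self, h):
        return self.inv(h), self.spec(h)

def consistency_loss(inv_a, inv_b):
    """inv_a, inv_b: (n_overlap, d) modality-invariant features of the SAME
    patients extracted from two different modalities/clients."""
    return F.mse_loss(inv_a, inv_b)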
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.02800
|
2023-06-05T11:55:57Z
|
Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma
Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study
|
[
"Achim Hekler",
"Roman C. Maron",
"Sarah Haggenmüller",
"Max Schmitt",
"Christoph Wies",
"Jochen S. Utikal",
"Friedegund Meier",
"Sarah Hobelsberger",
"Frank F. Gellrich",
"Mildred Sergon",
"Axel Hauschild",
"Lars E. French",
"Lucie Heinzerling",
"Justin G. Schlager",
"Kamran Ghoreschi",
"Max Schlaak",
"Franz J. Hilke",
"Gabriela Poch",
"Sören Korsing",
"Carola Berking",
"Markus V. Heppt",
"Michael Erdmann",
"Sebastian Haferkamp",
"Konstantin Drexler",
"Dirk Schadendorf",
"Wiebke Sondermann",
"Matthias Goebeler",
"Bastian Schilling",
"Jakob N. Kather",
"Eva Krieghoff-Henning",
"Titus J. Brinker"
] |
Background: Convolutional neural network (CNN)-based melanoma classifiers
face several challenges that limit their usefulness in clinical practice.
Objective: To investigate the impact of multiple real-world dermoscopic views
of a single lesion of interest on a CNN-based melanoma classifier.
Methods: This study evaluated 656 suspected melanoma lesions. Classifier
performance was measured using area under the receiver operating characteristic
curve (AUROC), expected calibration error (ECE) and maximum confidence change
(MCC) for (I) a single-view scenario, (II) a multiview scenario using multiple
artificially modified images per lesion and (III) a multiview scenario with
multiple real-world images per lesion.
Results: The multiview approach with real-world images significantly
increased the AUROC from 0.905 (95% CI, 0.879-0.929) in the single-view
approach to 0.930 (95% CI, 0.909-0.951). ECE and MCC also improved
significantly from 0.131 (95% CI, 0.105-0.159) to 0.072 (95% CI, 0.052-0.093)
and from 0.149 (95% CI, 0.125-0.171) to 0.115 (95% CI, 0.099-0.131),
respectively. Comparing multiview real-world to artificially modified images
showed comparable diagnostic accuracy and uncertainty estimation, but
significantly worse robustness for the latter.
Conclusion: Using multiple real-world images is an inexpensive method to
positively impact the performance of a CNN-based melanoma classifier.
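For illustration, one standard way to fuse multiple views at inference time is
to average per-view class probabilities; this abstract does not spell out the
fusion rule used, so the following Python sketch is an assumption:

import torch

def multiview_predict(model, views):
    """views: (V, 3, H, W) tensor of dermoscopic photos of ONE lesion."""
    with torch.no_grad():
        probs = model(views).softmax(dim=-1)  # (V, num_classes)
    return probs.mean(dim=0)                  # fused lesion-level estimate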
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.02848
|
2023-06-05T12:58:13Z
|
HireVAE: An Online and Adaptive Factor Model Based on Hierarchical and
Regime-Switch VAE
|
[
"Zikai Wei",
"Anyi Rao",
"Bo Dai",
"Dahua Lin"
] |
A factor model is a fundamental tool in quantitative investment, and it can be
empowered by deep learning to become more flexible and efficient in complicated
practical investing situations. However, it is still an open
question to build a factor model that can conduct stock prediction in an online
and adaptive setting, where the model can adapt itself to match the current
market regime identified based on only point-in-time market information. To
tackle this problem, we propose the first deep learning based online and
adaptive factor model, HireVAE, at the core of which is a hierarchical latent
space that embeds the underlying relationship between the market situation and
stock-wise latent factors, so that HireVAE can effectively estimate useful
latent factors given only historical market information and subsequently
predict accurate stock returns. Across four commonly used real stock market
benchmarks, the proposed HireVAE demonstrates superior performance in terms of
active returns over previous methods, verifying the potential of such an online
and adaptive factor model.
|
[
"cs.LG",
"cs.CV",
"q-fin.PM"
] | false |
2306.02886
|
2023-06-05T13:53:57Z
|
Image Reconstruction for Accelerated MR Scan with Faster Fourier
Convolutional Neural Networks
|
[
"Xiaohan Liu",
"Yanwei Pang",
"Xuebin Sun",
"Yiming Liu",
"Yonghong Hou",
"Zhenchang Wang",
"Xuelong Li"
] |
Partial scan is a common approach to accelerate Magnetic Resonance Imaging
(MRI) data acquisition in both 2D and 3D settings. However, accurately
reconstructing images from partial scan data (i.e., incomplete k-space
matrices) remains challenging due to lack of an effectively global receptive
field in both spatial and k-space domains. To address this problem, we propose
the following: (1) a novel convolutional operator called Faster Fourier
Convolution (FasterFC) to replace the two consecutive convolution operations
typically used in convolutional neural networks (e.g., U-Net, ResNet). Based on
the spectral convolution theorem in Fourier theory, FasterFC employs
alternating kernels of size 1 (1x1x1 in the 3D case) in different domains to
extend the dual-domain receptive field to a global one and achieves faster
calculation speed
than traditional Fast Fourier Convolution (FFC). (2) A 2D accelerated MRI
method, FasterFC-End-to-End-VarNet, which uses FasterFC to improve the
sensitivity maps and reconstruction quality. (3) A multi-stage 3D accelerated
MRI method called FasterFC-based Single-to-group Network (FAS-Net) that
utilizes a single-to-group algorithm to guide k-space domain reconstruction,
followed by FasterFC-based cascaded convolutional neural networks to expand the
effective receptive field in the dual-domain. Experimental results on the
fastMRI and Stanford MRI Data datasets demonstrate that FasterFC improves the
quality of both 2D and 3D reconstruction. Moreover, FAS-Net, as a 3D
high-resolution multi-coil (eight) accelerated MRI method, achieves superior
reconstruction performance in both qualitative and quantitative results
compared with state-of-the-art 2D and 3D methods.
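A minimal sketch of the spectral-convolution idea that FFC-style operators
build on: a 1x1 convolution applied to the Fourier coefficients acts globally
in the spatial domain. The channel handling below is illustrative and this is
not the authors' FasterFC operator:

import torch
import torch.nn as nn

class FourierUnit(nn.Module):
    def __init__(self, c):
        super().__init__()
        # 1x1 conv on concatenated (real, imag) frequency channels.
        self.conv = nn.Conv2d(2 * c, 2 * c, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        f = torch.fft.rfft2(x, norm='ortho')   # complex (B, C, H, W//2+1)
        z = torch.cat([f.real, f.imag], dim=1)
        z = torch.relu(self.conv(z))
        re, im = z.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(re, im),
                                s=x.shape[-2:], norm='ortho')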
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.03066
|
2023-06-05T17:43:50Z
|
Of Mice and Mates: Automated Classification and Modelling of Mouse
Behaviour in Groups using a Single Model across Cages
|
[
"Michael P. J. Camilleri",
"Rasneer S. Bains",
"Christopher K. I. Williams"
] |
Behavioural experiments often happen in specialised arenas, but this may
confound the analysis. To address this issue, we provide tools to study mice in
the homecage environment, equipping biologists with the possibility to capture
the temporal aspect of the individual's behaviour and model the interaction and
interdependence between cage-mates with minimal human intervention. We develop
the Activity Labelling Module (ALM) to automatically classify mouse behaviour
from video, and a novel Group Behaviour Model (GBM) for summarising their joint
behaviour across cages, using a permutation matrix to match the mouse
identities in each cage to the model. We also release two datasets, ABODe for
training behaviour classifiers and IMADGE for modelling behaviour.
|
[
"cs.CV",
"cs.LG",
"stat.ML"
] | false |
2306.03110
|
2023-06-05T05:11:03Z
|
SwinRDM: Integrate SwinRNN with Diffusion Model towards High-Resolution
and High-Quality Weather Forecasting
|
[
"Lei Chen",
"Fei Du",
"Yuan Hu",
"Fan Wang",
"Zhibin Wang"
] |
Data-driven medium-range weather forecasting has attracted much attention in
recent years. However, the forecasting accuracy at high resolution remains
unsatisfactory. Pursuing high-resolution and high-quality weather
forecasting, we develop a data-driven model SwinRDM which integrates an
improved version of SwinRNN with a diffusion model. SwinRDM performs
predictions at 0.25-degree resolution and achieves superior forecasting
accuracy to IFS (Integrated Forecast System), the state-of-the-art operational
NWP model, on representative atmospheric variables including 500 hPa
geopotential (Z500), 850 hPa temperature (T850), 2-m temperature (T2M), and
total precipitation (TP), at lead times of up to 5 days. We propose to leverage
a two-step strategy to achieve high-resolution predictions at 0.25 degrees,
considering the trade-off between computation memory and forecasting accuracy.
Recurrent predictions for future atmospheric fields are firstly performed at
1.40625-degree resolution, and then a diffusion-based super-resolution model is
leveraged to recover the high spatial resolution and finer-scale atmospheric
details. SwinRDM pushes forward the performance and potential of data-driven
models by a large margin towards operational applications.
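The two-step strategy can be summarized in a short sketch; coarse_model and
sr_diffusion stand in for the paper's SwinRNN variant and diffusion
super-resolution model and are assumptions, not real APIs:

def forecast(coarse_model, sr_diffusion, state_1p4deg, lead_steps):
    """Recurrent forecasting at 1.40625 degrees, then per-step
    diffusion-based super-resolution back to 0.25 degrees."""
    fields, x = [], state_1p4deg
    for _ in range(lead_steps):
        x = coarse_model(x)              # step forward at coarse resolution
        fields.append(sr_diffusion(x))   # recover finer-scale details
    return fields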
|
[
"cs.AI",
"cs.CV",
"physics.ao-ph"
] | false |
2306.03177
|
2023-06-05T18:37:05Z
|
DeepVQE: Real Time Deep Voice Quality Enhancement for Joint Acoustic
Echo Cancellation, Noise Suppression and Dereverberation
|
[
"Evgenii Indenbom",
"Nicolae-Catalin Ristea",
"Ando Saabas",
"Tanel Parnamaa",
"Jegor Guzvin",
"Ross Cutler"
] |
Acoustic echo cancellation (AEC), noise suppression (NS) and dereverberation
(DR) are an integral part of modern full-duplex communication systems. As the
demand for teleconferencing systems increases, addressing these tasks is
required for an effective and efficient online meeting experience. Most prior
research proposes solutions for these tasks separately, combining them with
digital signal processing (DSP) based components, resulting in complex
pipelines that are often impractical to deploy in real-world applications. This
paper proposes a real-time cross-attention deep model, named DeepVQE, based on
residual convolutional neural networks (CNNs) and recurrent neural networks
(RNNs) to simultaneously address AEC, NS, and DR. We conduct several ablation
studies to analyze the contributions of different components of our model to
the overall performance. DeepVQE achieves state-of-the-art performance on
non-personalized tracks from the ICASSP 2023 Acoustic Echo Cancellation
Challenge and ICASSP 2023 Deep Noise Suppression Challenge test sets, showing
that a single model can handle multiple tasks with excellent performance.
Moreover, the model runs in real-time and has been successfully tested for the
Microsoft Teams platform.
|
[
"cs.SD",
"cs.CV",
"eess.AS"
] | false |
2306.03228
|
2023-06-05T20:22:05Z
|
Discovering Novel Biological Traits From Images Using Phylogeny-Guided
Neural Networks
|
[
"Mohannad Elhamod",
"Mridul Khurana",
"Harish Babu Manogaran",
"Josef C. Uyeda",
"Meghan A. Balk",
"Wasila Dahdul",
"Yasin Bakış",
"Henry L. Bart Jr.",
"Paula M. Mabee",
"Hilmar Lapp",
"James P. Balhoff",
"Caleb Charpentier",
"David Carlyn",
"Wei-Lun Chao",
"Charles V. Stewart",
"Daniel I. Rubenstein",
"Tanya Berger-Wolf",
"Anuj Karpatne"
] |
Discovering evolutionary traits that are heritable across species on the tree
of life (also referred to as a phylogenetic tree) is of great interest to
biologists to understand how organisms diversify and evolve. However, the
measurement of traits is often a subjective and labor-intensive process, making
trait discovery a highly label-scarce problem. We present a novel approach for
discovering evolutionary traits directly from images without relying on trait
labels. Our proposed approach, Phylo-NN, encodes the image of an organism into
a sequence of quantized feature vectors -- or codes -- where different segments
of the sequence capture evolutionary signals at varying ancestry levels in the
phylogeny. We demonstrate the effectiveness of our approach in producing
biologically meaningful results in a number of downstream tasks including
species image generation and species-to-species image translation, using fish
species as a target example.
|
[
"cs.LG",
"cs.CV",
"eess.IV"
] | false |
2306.03270
|
2023-06-05T21:39:11Z
|
Brain Tumor Recurrence vs. Radiation Necrosis Classification and Patient
Survivability Prediction
|
[
"M. S. Sadique",
"W. Farzana",
"A. Temtam",
"E. Lappinen",
"A. Vossough",
"K. M. Iftekharuddin"
] |
GBM (Glioblastoma multiforme) is the most aggressive type of brain tumor in
adults that has a short survival rate even after aggressive treatment with
surgery and radiation therapy. The changes on magnetic resonance imaging (MRI)
for patients with GBM after radiotherapy are indicative of either
radiation-induced necrosis (RN) or recurrent brain tumor (rBT). Screening for
rBT and RN at an early stage is crucial for facilitating faster treatment and
better outcomes for the patients. Differentiating rBT from RN is challenging as
both may present with similar radiological and clinical characteristics on MRI.
Moreover, learning-based rBT versus RN classification using MRI may suffer from
class imbalance due to lack of patient data. While synthetic data generation
using generative models has shown promise to address class imbalance, the
underlying data representation may be different in synthetic or augmented data.
This study proposes computational modeling with statistically rigorous repeated
random sub-sampling to balance the subset sample size for rBT and RN
classification. The proposed pipeline includes multiresolution radiomic feature
(MRF) extraction followed by feature selection with statistical significance
testing (p<0.05). The five-fold cross-validation results show that the proposed
model with MRF features classifies rBT from RN with an area under the curve
(AUC) of 0.8920 ± 0.055. Moreover, considering the dependence between survival
time and censor time (where patients are not followed up until death), we
demonstrate the feasibility of using MRF radiomic features as a non-invasive
biomarker to identify patients who are at higher risk of recurrence or
radiation necrosis. The cross-validated results show that the MRF model
provides the best overall performance with an AUC of 0.770 ± 0.032.
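A minimal sketch of the repeated random sub-sampling used to balance the
classes (the classifier, repeat count, and CV depth here are illustrative; the
paper's pipeline uses MRF features with significance-tested selection):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def balanced_repeated_auc(X, y, repeats=100, seed=0):
    """Repeatedly draw a class-balanced subset, run 5-fold CV, aggregate."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    m = min(len(pos), len(neg))
    aucs = []
    for _ in range(repeats):
        idx = np.concatenate([rng.choice(pos, m, replace=False),
                              rng.choice(neg, m, replace=False)])
        aucs.append(cross_val_score(LogisticRegression(max_iter=1000),
                                    X[idx], y[idx], cv=5,
                                    scoring='roc_auc').mean())
    return np.mean(aucs), np.std(aucs)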
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2306.06118
|
2023-06-05T08:20:46Z
|
Estimation of River Water Surface Elevation Using UAV Photogrammetry and
Machine Learning
|
[
"Radosław Szostak",
"Marcin Pietroń",
"Przemysław Wachniew",
"Mirosław Zimnoch",
"Paweł Ćwiąkała"
] |
Unmanned aerial vehicle (UAV) photogrammetry allows for the creation of
orthophotos and digital surface models (DSMs) of a terrain. However, DSMs of
water bodies mapped with this technique reveal water surface distortions,
preventing the use of photogrammetric data for accurate determination of water
surface elevation (WSE). Firstly, we propose a new solution in which a
convolutional neural network (CNN) is used as a WSE estimator from
photogrammetric DSMs and orthophotos. Secondly, we improve the previously known
"water-edge" method by filtering outliers using a forward-backward
exponentially weighted moving average. Further improvement in both methods
was achieved by performing a linear regression of the WSE values against
chainage. Both solutions estimate the uncertainty of their predictions. This is
the first approach in which deep learning (DL) has been used for this task. A
brand-new machine
learning data set has been created. It was collected on a small lowland river
in winter and summer conditions. It consists of 322 samples, each corresponding
to a 10 by 10 meter area of the river channel and adjacent land. Each data set
sample contains orthophoto and DSM arrays as input, along with a single
ground-truth WSE value as output. The data set was supplemented with data
collected by other researchers who compared state-of-the-art methods for
determining WSE using a UAV. The results of the DL solution were verified
using the k-fold cross-validation method. This provided an in-depth examination of
the model's ability to perform on previously unseen data. The WSE RMSEs differ
for each k-fold cross-validation subset and range from 1.7 cm up to 17.2 cm.
The RMSE results of the improved "water-edge" method are at least six times
lower than the RMSE results achieved by the conventional "water-edge" method.
The results obtained by new methods are predominantly outperforming existing
ones.
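A minimal sketch of the forward-backward exponentially weighted moving average
filter for the "water-edge" outlier step (the alpha and k-sigma values are
illustrative assumptions):

import numpy as np

def fb_ewma(x, alpha=0.1):
    """Symmetric smoothing: average of forward and backward EWMA passes."""
    x = np.asarray(x, dtype=float)
    fwd, bwd = np.empty_like(x), np.empty_like(x)
    fwd[0], bwd[-1] = x[0], x[-1]
    for i in range(1, len(x)):
        fwd[i] = alpha * x[i] + (1 - alpha) * fwd[i - 1]
        bwd[-1 - i] = alpha * x[-1 - i] + (1 - alpha) * bwd[-i]
    return (fwd + bwd) / 2

def filter_outliers(wse, alpha=0.1, k=3.0):
    """Drop edge elevations far from the smoothed curve."""
    wse = np.asarray(wse, dtype=float)
    resid = wse - fb_ewma(wse, alpha)
    keep = np.abs(resid) < k * resid.std()
    return wse[keep], keep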
|
[
"cs.LG",
"cs.CV",
"eess.IV"
] | false |
2306.02514
|
2023-06-05T00:32:57Z
|
Jambu: A historical linguistic database for South Asian languages
|
[
"Aryaman Arora",
"Adam Farris",
"Samopriya Basu",
"Suresh Kolichala"
] |
We introduce Jambu, a cognate database of South Asian languages which unifies
dozens of previous sources in a structured and accessible format. The database
includes 287k lemmata from 602 lects, grouped together in 23k sets of cognates.
We outline the data wrangling necessary to compile the dataset and train neural
models for reflex prediction on the Indo-Aryan subset of the data. We hope that
Jambu is an invaluable resource for all historical linguists and Indologists,
and look towards further improvement and expansion of the database.
|
[
"cs.CL"
] | false |
2306.02569
|
2023-06-05T03:49:13Z
|
Prompt to be Consistent is Better than Self-Consistent? Few-Shot and
Zero-Shot Fact Verification with Pre-trained Language Models
|
[
"Fengzhu Zeng",
"Wei Gao"
] |
Few-shot or zero-shot fact verification relies on only a few or no labeled
training examples. In this paper, we propose a novel method called ProToCo, to
\underline{Pro}mpt pre-trained language models (PLMs) \underline{To} be
\underline{Co}nsistent, for improving the factuality assessment capability of
PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair,
ProToCo generates multiple variants of the claim with different relations and
frames a simple consistency mechanism as constraints for making compatible
predictions across these variants. We update PLMs by using parameter-efficient
fine-tuning (PEFT), leading to more accurate predictions in few-shot and
zero-shot fact verification tasks. Our experiments on three public verification
datasets show that ProToCo significantly outperforms state-of-the-art few-shot
fact verification baselines. With a small number of unlabeled instances,
ProToCo also outperforms the strong zero-shot learner T0 on zero-shot
verification. Compared to large PLMs using in-context learning (ICL) method,
ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in
both few- and zero-shot settings.
|
[
"cs.CL"
] | false |
2306.02597
|
2023-06-05T05:04:28Z
|
Early Rumor Detection Using Neural Hawkes Process with a New Benchmark
Dataset
|
[
"Fengzhu Zeng",
"Wei Gao"
] |
Little attention has been paid to \underline{EA}rly \underline{R}umor
\underline{D}etection (EARD), and EARD performance has been evaluated
inappropriately on a few datasets where the actual early-stage information is
largely missing. To remedy this situation, we construct BEARD, a new
\underline{B}enchmark dataset for \underline{EARD}, based on claims from
fact-checking websites by trying to gather as many early relevant posts as
possible. We also propose HEARD, a novel model based on neural
\underline{H}awkes process for \underline{EARD}, which can guide a generic
rumor detection model to make timely, accurate and stable predictions.
Experiments show that HEARD achieves effective EARD performance on two commonly
used general rumor detection datasets and our BEARD dataset.
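For reference, the classical Hawkes intensity that the neural Hawkes process
generalizes can be written in a few lines; the parameters below are
illustrative, and HEARD replaces the exponential kernel with learned
continuous-time dynamics:

import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """lambda(t) = mu + sum over past events t_i < t of
    alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)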
|
[
"cs.CL"
] | false |
2306.02646
|
2023-06-05T07:32:21Z
|
Colexifications for Bootstrapping Cross-lingual Datasets: The Case of
Phonology, Concreteness, and Affectiveness
|
[
"Yiyi Chen",
"Johannes Bjerva"
] |
Colexification refers to the linguistic phenomenon where a single lexical
form is used to convey multiple meanings. By studying cross-lingual
colexifications, researchers have gained valuable insights into fields such as
psycholinguistics and cognitive sciences [Jackson et al.,2019]. While several
multilingual colexification datasets exist, there is untapped potential in
using this information to bootstrap datasets across such semantic features. In
this paper, we aim to demonstrate how colexifications can be leveraged to
create such cross-lingual datasets. We showcase curation procedures which
result in a dataset covering 142 languages across 21 language families across
the world. The dataset includes ratings of concreteness and affectiveness,
mapped with phonemes and phonological features. We further analyze the dataset
along different dimensions to demonstrate potential of the proposed procedures
in facilitating further interdisciplinary research in psychology, cognitive
science, and multilingual natural language processing (NLP). Based on initial
investigations, we observe that i) concepts that are closer in
concreteness/affectiveness are more likely to colexify; ii) certain
initial/last phonemes are significantly correlated with
concreteness/affectiveness within language families, such as /k/ as the initial
phoneme in both Turkic and Tai-Kadai correlating with concreteness, and /p/ in
Dravidian and Sino-Tibetan correlating with valence; iii) the type-to-token
ratio (TTR) of phonemes is positively correlated with concreteness across
several language families, while the length of phoneme segments is negatively
correlated with concreteness; iv) certain phonological features are negatively
correlated with concreteness across languages. The dataset is made public
online for further research.
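As a small illustration of the statistic in finding iii), the type-to-token
ratio over a form's phoneme segments is simply:

def phoneme_ttr(segments):
    """segments: list of phoneme strings, e.g. ['k', 'a', 't', 'a'].
    phoneme_ttr(['k', 'a', 't', 'a']) == 0.75 (3 types / 4 tokens)."""
    return len(set(segments)) / len(segments) if segments else 0.0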
|
[
"cs.CL"
] | false |
2306.02671
|
2023-06-05T08:05:05Z
|
Improving Grammar-based Sequence-to-Sequence Modeling with Decomposition
and Constraints
|
[
"Chao Lou",
"Kewei Tu"
] |
Neural QCFG is a grammar-based sequence-to-sequence (seq2seq) model with
strong inductive biases on hierarchical structures. It excels in
interpretability and generalization but suffers from expensive inference. In
this paper, we study two low-rank variants of Neural QCFG for faster inference
with different trade-offs between efficiency and expressiveness. Furthermore,
utilizing the symbolic interface provided by the grammar, we introduce two soft
constraints over tree hierarchy and source coverage. We experiment with various
datasets and find that our models outperform vanilla Neural QCFG in most
settings.
|
[
"cs.CL"
] | false |
2306.02754
|
2023-06-05T10:17:50Z
|
PULSAR: Pre-training with Extracted Healthcare Terms for Summarising
Patients' Problems and Data Augmentation with Black-box Large Language Models
|
[
"Hao Li",
"Yuping Wu",
"Viktor Schlegel",
"Riza Batista-Navarro",
"Thanh-Tung Nguyen",
"Abhinav Ramesh Kashyap",
"Xiaojun Zeng",
"Daniel Beck",
"Stefan Winkler",
"Goran Nenadic"
] |
Medical progress notes play a crucial role in documenting a patient's
hospital journey, including his or her condition, treatment plan, and any
updates for healthcare providers. Automatic summarisation of a patient's
problems in the form of a problem list can aid stakeholders in understanding a
patient's condition, reducing workload and cognitive bias. BioNLP 2023 Shared
Task 1A focuses on generating a list of diagnoses and problems from the
provider's progress notes during hospitalisation. In this paper, we introduce
our proposed approach to this task, which integrates two complementary
components. One component employs large language models (LLMs) for data
augmentation; the other is an abstractive summarisation LLM with a novel
pre-training objective for generating the patients' problems summarised as a
list. Our approach was ranked second among all submissions to the shared task.
The performance of our model on the development and test datasets shows that
our approach is more robust on unknown data, with an improvement of up to 3.1
points over the same size of the larger model.
|
[
"cs.CL"
] | false |
2306.02767
|
2023-06-05T10:46:33Z
|
Cross-Lingual Transfer with Target Language-Ready Task Adapters
|
[
"Marinela Parović",
"Alan Ansell",
"Ivan Vulić",
"Anna Korhonen"
] |
Adapters have emerged as a modular and parameter-efficient approach to
(zero-shot) cross-lingual transfer. The established MAD-X framework employs
separate language and task adapters which can be arbitrarily combined to
perform the transfer of any task to any target language. Subsequently, BAD-X,
an extension of the MAD-X framework, achieves improved transfer at the cost of
MAD-X's modularity by creating "bilingual" adapters specific to the
source-target language pair. In this work, we aim to take the best of both
worlds by (i) fine-tuning task adapters adapted to the target language(s)
(so-called "target language-ready" (TLR) adapters) to maintain high transfer
performance, but (ii) without sacrificing the highly modular design of MAD-X.
The main idea of "target language-ready" adapters is to resolve the
training-vs-inference discrepancy of MAD-X: the task adapter "sees" the target
language adapter for the very first time during inference, and thus might not
be fully compatible with it. We address this mismatch by exposing the task
adapter to the target language adapter during training, and empirically
validate several variants of the idea: in the simplest form, we alternate
between using the source and target language adapters during task adapter
training, which can be generalized to cycling over any set of language
adapters. We evaluate different TLR-based transfer configurations with varying
degrees of generality across a suite of standard cross-lingual benchmarks, and
find that the most general (and thus most modular) configuration consistently
outperforms MAD-X and BAD-X on most tasks and languages.
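A minimal sketch of the cycling variant described above; set_active_language_adapter
is an assumed helper, not the actual adapter-transformers API:

import itertools

def train_tlr_task_adapter(model, optimizer, task_batches, language_adapters):
    """Cycle the active language adapter while training only the task
    adapter, so it is exposed to target languages before inference."""
    lang_cycle = itertools.cycle(language_adapters)  # e.g. ['en', 'sw', 'qu']
    for batch in task_batches:
        model.set_active_language_adapter(next(lang_cycle))  # assumed helper
        loss = model(**batch).loss   # gradients flow to task-adapter params only
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()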
|
[
"cs.CL"
] | false |
2306.02777
|
2023-06-05T11:01:58Z
|
German CheXpert Chest X-ray Radiology Report Labeler
|
[
"Alessandro Wollek",
"Sardi Hyska",
"Thomas Sedlmeyr",
"Philip Haitzer",
"Johannes Rueckel",
"Bastian O. Sabel",
"Michael Ingrisch",
"Tobias Lasser"
] |
This study aimed to develop an algorithm to automatically extract annotations
for chest X-ray classification models from German thoracic radiology reports.
An automatic label extraction model was designed based on the CheXpert
architecture, and a web-based annotation interface was created for iterative
improvements. Results showed that automated label extraction can reduce time
spent on manual labeling and improve overall modeling performance. The model
trained on automatically extracted labels performed competitively with models
trained on manually labeled data and strongly outperformed the model trained on
publicly available data.
|
[
"cs.CL"
] | false |
2306.02819
|
2023-06-05T12:15:12Z
|
Enhancing Language Representation with Constructional Information for
Natural Language Understanding
|
[
"Lvxiaowei Xu",
"Jianwang Wu",
"Jiawei Peng",
"Zhilin Gong",
"Ming Cai",
"Tianxiang Wang"
] |
Natural language understanding (NLU) is an essential branch of natural
language processing, which relies on representations generated by pre-trained
language models (PLMs). However, PLMs primarily focus on acquiring
lexico-semantic information, while they may be unable to adequately handle the
meaning of constructions. To address this issue, we introduce construction
grammar (CxG), which highlights the pairings of form and meaning, to enrich
language representation. We adopt usage-based construction grammar as the basis
of our work, which is highly compatible with statistical models such as PLMs.
Then a HyCxG framework is proposed to enhance language representation through a
three-stage solution. First, all constructions are extracted from sentences via
a slot-constraints approach. As constructions can overlap with each other,
bringing redundancy and imbalance, we formulate the conditional max coverage
problem for selecting the discriminative constructions. Finally, we propose a
relational hypergraph attention network to acquire representation from
constructional information by capturing high-order word interactions among
constructions. Extensive experiments demonstrate the superiority of the
proposed model on a variety of NLU tasks.
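A minimal sketch of greedy selection for the max coverage step (the conditional
constraints and the actual scoring used by HyCxG are omitted; the spans and
budget below are illustrative):

def select_constructions(spans, budget):
    """spans: list of (construction_id, set_of_token_indices).
    Greedily pick constructions covering the most uncovered tokens."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(spans, key=lambda s: len(s[1] - covered), default=None)
        if best is None or not (best[1] - covered):
            break   # nothing new can be covered
        chosen.append(best[0])
        covered |= best[1]
    return chosen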
|
[
"cs.CL"
] | false |
2306.02870
|
2023-06-05T13:43:50Z
|
On "Scientific Debt" in NLP: A Case for More Rigour in Language Model
Pre-Training Research
|
[
"Made Nindyatama Nityasya",
"Haryo Akbarianto Wibowo",
"Alham Fikri Aji",
"Genta Indra Winata",
"Radityo Eko Prasojo",
"Phil Blunsom",
"Adhiguna Kuncoro"
] |
This evidence-based position paper critiques current research practices
within the language model pre-training literature. Despite rapid recent
progress afforded by increasingly better pre-trained language models (PLMs),
current PLM research practices often conflate different possible sources of
model improvement, without conducting proper ablation studies and principled
comparisons between different models under comparable conditions. These
practices (i) leave us ill-equipped to understand which pre-training approaches
should be used under what circumstances; (ii) impede reproducibility and credit
assignment; and (iii) render it difficult to understand: "How exactly does each
factor contribute to the progress that we have today?" We provide a case in
point by revisiting the success of BERT over its baselines, ELMo and GPT-1, and
demonstrate how -- under comparable conditions where the baselines are tuned to
a similar extent -- these baselines (and even-simpler variants thereof) can, in
fact, achieve competitive or better performance than BERT. These findings
demonstrate how disentangling different factors of model improvements can lead
to valuable new insights. We conclude with recommendations for how to encourage
and incentivize this line of work, and accelerate progress towards a better and
more systematic understanding of what factors drive the progress of our
foundation models today.
|
[
"cs.CL"
] | false |