arxiv_id (string, length 10) | published (string, length 20) | titles (string, 9-243 chars) | authors (list, 1-389 items) | abstract (string, 96-3.09k chars) | categories (list, 1-10 items) | selected (bool, 2 classes) |
---|---|---|---|---|---|---|
2306.03435
|
2023-06-06T06:23:38Z
|
On the Role of Attention in Prompt-tuning
|
[
"Samet Oymak",
"Ankit Singh Rawat",
"Mahdi Soltanolkotabi",
"Christos Thrampoulidis"
] |
Prompt-tuning is an emerging strategy to adapt large language models (LLM) to
downstream tasks by learning a (soft-)prompt parameter from data. Despite its
success in LLMs, there is limited theoretical understanding of the power of
prompt-tuning and the role of the attention mechanism in prompting. In this
work, we explore prompt-tuning for one-layer attention architectures and study
contextual mixture-models where each input token belongs to a context-relevant
or -irrelevant set. We isolate the role of prompt-tuning through a
self-contained prompt-attention model. Our contributions are as follows: (1) We
show that softmax-prompt-attention is provably more expressive than
softmax-self-attention and linear-prompt-attention under our contextual data
model. (2) We analyze the initial trajectory of gradient descent and show that
it learns the prompt and prediction head with near-optimal sample complexity
and demonstrate how the prompt can provably attend to sparse context-relevant
tokens. (3) Assuming a known prompt but an unknown prediction head, we
characterize the exact finite sample performance of prompt-attention which
reveals the fundamental performance limits and the precise benefit of the
context information. We also provide experiments that verify our theoretical
insights on real datasets and demonstrate how prompt-tuning enables the model
to attend to context-relevant information.
|
[
"cs.LG",
"cs.CL",
"stat.ML"
] | false |
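
The prompt-tuning setup in the abstract above can be illustrated with a minimal PyTorch sketch: a learnable soft prompt prepended to a frozen one-layer attention module, with only the prompt and a small prediction head trained. This is generic soft prompt-tuning under assumed shapes and names (`SoftPromptAttention`, `n_prompt`, readout at the prompt position), not the paper's self-contained prompt-attention model or its theory.

```python
import torch
import torch.nn as nn

class SoftPromptAttention(nn.Module):
    """Generic soft prompt-tuning sketch: a learnable prompt is prepended to
    the token sequence before a single frozen attention layer."""
    def __init__(self, d_model, n_prompt=1, n_heads=1):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)  # trainable
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)                                  # prediction head
        for p in self.attn.parameters():        # freeze the backbone attention
            p.requires_grad_(False)

    def forward(self, tokens):                  # tokens: (batch, seq, d_model)
        b = tokens.size(0)
        prompt = self.prompt.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompt, tokens], dim=1)  # prepend the soft prompt
        out, _ = self.attn(x, x, x)
        return self.head(out[:, 0])             # read out at the prompt position

model = SoftPromptAttention(d_model=16)
print(model(torch.randn(4, 10, 16)).shape)      # torch.Size([4, 1])
```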
2306.03443
|
2023-06-06T06:49:41Z
|
Alzheimer Disease Classification through ASR-based Transcriptions:
Exploring the Impact of Punctuation and Pauses
|
[
"Lucía Gómez-Zaragozá",
"Simone Wills",
"Cristian Tejedor-Garcia",
"Javier Marín-Morales",
"Mariano Alcañiz",
"Helmer Strik"
] |
Alzheimer's Disease (AD) is the world's leading neurodegenerative disease,
which often results in communication difficulties. Analysing speech can serve
as a diagnostic tool for identifying the condition. The recent ADReSS challenge
provided a dataset for AD classification and highlighted the utility of manual
transcriptions. In this study, we used the new state-of-the-art Automatic
Speech Recognition (ASR) model Whisper to obtain the transcriptions, which also
include automatic punctuation. The classification models achieved test accuracy
scores of 0.854 and 0.833 by combining pretrained FastText word embeddings and
recurrent neural networks on manual and ASR transcripts, respectively.
Additionally, we explored the influence of including pause information and
punctuation in the transcriptions. We found that punctuation only yielded minor
improvements in some cases, whereas pause encoding aided AD classification for
both manual and ASR transcriptions across all approaches investigated.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.03444
|
2023-06-06T06:49:58Z
|
Automatic Assessment of Oral Reading Accuracy for Reading Diagnostics
|
[
"Bo Molenaar",
"Cristian Tejedor-Garcia",
"Helmer Strik",
"Catia Cucchiarini"
] |
Automatic assessment of reading fluency using automatic speech recognition
(ASR) holds great potential for early detection of reading difficulties and
subsequent timely intervention. Precise assessment tools are required,
especially for languages other than English. In this study, we evaluate six
state-of-the-art ASR-based systems for automatically assessing Dutch oral
reading accuracy using Kaldi and Whisper. Results show our most successful
system reached substantial agreement with human evaluations (MCC = .63). The
same system reached the highest correlation between forced decoding confidence
scores and word correctness (r = .45). This system's language model (LM)
consisted of manual orthographic transcriptions and reading prompts of the test
data, which shows that including reading errors in the LM improves assessment
performance. We discuss the implications for developing automatic assessment
systems and identify possible avenues of future research.
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2306.03460
|
2023-06-06T07:28:49Z
|
Natural Language Commanding via Program Synthesis
|
[
"Apurva Gandhi",
"Thong Q. Nguyen",
"Huitian Jiao",
"Robert Steen",
"Ameya Bhatawdekar"
] |
We present Semantic Interpreter, a natural language-friendly AI system for
productivity software such as Microsoft Office that leverages large language
models (LLMs) to execute user intent across application features. While LLMs
are excellent at understanding user intent expressed as natural language, they
are not sufficient for fulfilling application-specific user intent that
requires more than text-to-text transformations. We therefore introduce the
Office Domain Specific Language (ODSL), a concise, high-level language
specialized for performing actions in and interacting with entities in Office
applications. Semantic Interpreter leverages an Analysis-Retrieval prompt
construction method with LLMs for program synthesis, translating natural
language user utterances to ODSL programs that can be transpiled to application
APIs and then executed. We focus our discussion primarily on a research
exploration for Microsoft PowerPoint.
|
[
"cs.LG",
"cs.CL",
"cs.HC"
] | true |
2306.03723
|
2023-06-06T14:41:30Z
|
Financial Numeric Extreme Labelling: A Dataset and Benchmarking for XBRL
Tagging
|
[
"Soumya Sharma",
"Subhendu Khatuya",
"Manjunath Hegde",
"Afreen Shaikh. Koustuv Dasgupta",
"Pawan Goyal",
"Niloy Ganguly"
] |
The U.S. Securities and Exchange Commission (SEC) mandates all public
companies to file periodic financial statements that should contain numerals
annotated with a particular label from a taxonomy. In this paper, we formulate
the task of automating the assignment of a label to a particular numeral span
in a sentence from an extremely large label set. Towards this task, we release
a dataset, Financial Numeric Extreme Labelling (FNXL), annotated with 2,794
labels. We benchmark the performance of the FNXL dataset by formulating the
task as (a) a sequence labelling problem and (b) a pipeline with span
extraction followed by Extreme Classification. Although the two approaches
perform comparably, the pipeline solution provides a slight edge for the least
frequent labels.
|
[
"cs.CL",
"cs.AI",
"cs.CE"
] | false |
2306.03902
|
2023-06-06T17:58:44Z
|
Utterance Classification with Logical Neural Network: Explainable AI for
Mental Disorder Diagnosis
|
[
"Yeldar Toleubay",
"Don Joven Agravante",
"Daiki Kimura",
"Baihan Lin",
"Djallel Bouneffouf",
"Michiaki Tatsubori"
] |
In response to the global challenge of mental health problems, we propose a
Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis
of mental disorders. Due to the lack of effective therapy coverage for mental
disorders, there is a need for an AI solution that can assist therapists with
the diagnosis. However, current Neural Network models lack explainability and
may not be trusted by therapists. The LNN is a Recurrent Neural Network
architecture that combines the learning capabilities of neural networks with
the reasoning capabilities of classical logic-based AI. The proposed system
uses input predicates from clinical interviews to output a mental disorder
class, and different predicate pruning techniques are used to achieve
scalability and higher scores. In addition, we provide an insight extraction
method to aid therapists with their diagnosis. The proposed system addresses
the lack of explainability of current Neural Network models and provides a more
trustworthy solution for mental disorder diagnosis.
|
[
"cs.CL",
"cs.AI",
"cs.LO",
"q-bio.NC"
] | false |
2306.03917
|
2023-06-06T18:00:01Z
|
Turning large language models into cognitive models
|
[
"Marcel Binz",
"Eric Schulz"
] |
Large language models are powerful systems that excel at many tasks, ranging
from translation to mathematical reasoning. Yet, at the same time, these models
often show unhuman-like characteristics. In the present paper, we address this
gap and ask whether large language models can be turned into cognitive models.
We find that -- after finetuning them on data from psychological experiments --
these models offer accurate representations of human behavior, even
outperforming traditional cognitive models in two decision-making domains. In
addition, we show that their representations contain the information necessary
to model behavior on the level of individual subjects. Finally, we demonstrate
that finetuning on multiple tasks enables large language models to predict
human behavior in a previously unseen task. Taken together, these results
suggest that large, pre-trained models can be adapted to become generalist
cognitive models, thereby opening up new research directions that could
transform cognitive psychology and the behavioral sciences as a whole.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2306.05432
|
2023-06-06T15:22:16Z
|
Towards End-to-end Speech-to-text Summarization
|
[
"Raul Monteiro",
"Diogo Pernes"
] |
Speech-to-text (S2T) summarization is a time-saving technique for filtering
and keeping up with the broadcast news uploaded online on a daily basis. The
rise of large language models from deep learning with impressive text
generation capabilities has placed the research focus on summarization systems
that produce paraphrased compact versions of the document content, also known
as abstractive summaries. End-to-end (E2E) modelling of S2T abstractive
summarization is a promising approach that offers the possibility of generating
rich latent representations that leverage non-verbal and acoustic information,
as opposed to the use of only linguistic information from automatically
generated transcripts in cascade systems. However, the scarce literature on E2E
modelling of this task fails to explore different domains, namely broadcast
news, which is a challenging domain where large and diversified volumes of data
are presented to the user every day. We model S2T summarization both with a
cascade and an E2E system for a corpus of broadcast news in French. Our novel
E2E model leverages external data by resorting to transfer learning from a
pre-trained T2T summarizer. Experiments show that both our cascade and E2E
abstractive summarizers are stronger than an extractive baseline. However, the
performance of the E2E model still lies behind the cascade one, which is the object
of an extensive analysis that includes future directions to close that gap.
|
[
"cs.CL",
"cs.AI",
"cs.LG",
"eess.AS"
] | false |
2306.06083
|
2023-06-06T21:13:08Z
|
Improving Fairness and Robustness in End-to-End Speech Recognition
through unsupervised clustering
|
[
"Irina-Elena Veliche",
"Pascale Fung"
] |
The challenge of fairness arises when Automatic Speech Recognition (ASR)
systems do not perform equally well for all sub-groups of the population. In
the past few years there have been many improvements in overall speech
recognition quality, but without any particular focus on advancing Equality and
Equity for all user groups for whom systems do not perform well. ASR fairness
is therefore also a robustness issue. Meanwhile, data privacy also takes
priority in production systems. In this paper, we present a privacy-preserving
approach to improve the fairness and robustness of end-to-end ASR without using
metadata, zip codes, or even speaker or utterance embeddings directly in
training. We extract utterance level embeddings using a speaker ID model
trained on a public dataset, which we then use in an unsupervised fashion to
create acoustic clusters. We use cluster IDs instead of speaker utterance
embeddings as extra features during model training, which shows improvements
for all demographic groups and in particular for different accents.
|
[
"cs.SD",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2306.03322
|
2023-06-06T00:23:28Z
|
Stochastic Multi-Level Compositional Optimization Algorithms over
Networks with Level-Independent Convergence Rate
|
[
"Hongchang Gao"
] |
Stochastic multi-level compositional optimization problems cover many new
machine learning paradigms, e.g., multi-step model-agnostic meta-learning,
which require efficient optimization algorithms for large-scale applications.
This paper studies the decentralized stochastic multi-level optimization
algorithm, which is challenging because the multi-level structure and
decentralized communication scheme may make the number of levels affect the
order of the convergence rate. To this end, we develop two novel decentralized
optimization algorithms to deal with the multi-level function and its gradient.
Our theoretical results show that both algorithms can achieve the
level-independent convergence rate for nonconvex problems under much milder
conditions compared with existing single-machine algorithms. To the best of our
knowledge, this is the first work that achieves the level-independent
convergence rate under the decentralized setting. Moreover, extensive
experiments confirm the efficacy of our proposed algorithms.
|
[
"cs.LG"
] | false |
2306.03356
|
2023-06-06T02:14:20Z
|
Query Complexity of Active Learning for Function Family With Nearly
Orthogonal Basis
|
[
"Xiang Chen",
"Zhao Song",
"Baocheng Sun",
"Junze Yin",
"Danyang Zhuo"
] |
Many machine learning algorithms require large numbers of labeled data to
deliver state-of-the-art results. In applications such as medical diagnosis and
fraud detection, though there is an abundance of unlabeled data, it is costly
to label the data by experts, experiments, or simulations. Active learning
algorithms aim to reduce the number of required labeled data points while
preserving performance. For many convex optimization problems such as linear
regression and $p$-norm regression, there are theoretical bounds on the number
of required labels to achieve a certain accuracy. We call this the query
complexity of active learning. However, today's active learning algorithms
require the underlying learned function to have an orthogonal basis. For
example, when applying active learning to linear regression, the requirement is
that the target function is a linear combination of a set of orthogonal linear
functions, and active learning can find the coefficients of these linear
functions. We present a theoretical result to show that active learning does
not need an orthogonal basis but rather only requires a nearly orthogonal
basis. We provide the corresponding theoretical proofs for the function family
with a nearly orthogonal basis, along with its applications in an
algorithmically efficient active learning framework.
|
[
"cs.LG"
] | false |
2306.03390
|
2023-06-06T04:07:21Z
|
Origin-Destination Network Generation via Gravity-Guided GAN
|
[
"Can Rong",
"Huandong Wang",
"Yong Li"
] |
Origin-destination (OD) flow, which contains valuable population mobility
information including direction and volume, is critical in many urban
applications, such as urban planning, transportation management, etc. However,
OD data is not always easy to access due to high costs or privacy concerns.
Therefore, we must consider generating OD through mathematical models. Existing
works utilize physics laws or machine learning (ML) models to build the
association between urban structures and OD flows, but these two kinds of
methods suffer from over-simplicity and poor generalization ability,
respectively. In this paper, we propose to adopt a physics-informed ML
paradigm, which couples scientific physical knowledge with data-driven ML
methods, to construct a model named Origin-Destination Generation Networks
(ODGN) for better population mobility modeling by leveraging the complementary
strengths of physics and ML methods. Specifically, we first build a
Multi-view Graph Attention Networks (MGAT) to capture the urban features of
every region and then use a gravity-guided predictor to obtain OD flow between
every two regions. Furthermore, we use a conditional GAN training strategy and
design a sequence-based discriminator to consider the overall topological
features of OD as a network. Extensive experiments on real-world datasets have
been done to demonstrate the superiority of our proposed method compared with
baselines.
|
[
"cs.LG"
] | false |
2306.03412
|
2023-06-06T05:20:53Z
|
DEK-Forecaster: A Novel Deep Learning Model Integrated with EMD-KNN for
Traffic Prediction
|
[
"Sajal Saha",
"Sudipto Baral",
"Anwar Haque"
] |
Internet traffic volume estimation has a significant impact on the business
policies of the ISP (Internet Service Provider) industry and business
successions. Forecasting the internet traffic demand helps to shed light on the
future traffic trend, which is often helpful for ISPs' decision-making in
network planning activities and investments. Besides, the capability to
understand future trends contributes to managing regular and long-term
operations. This study aims to predict the network traffic volume demand using
deep sequence methods that incorporate Empirical Mode Decomposition (EMD) based
noise reduction, Empirical rule based outlier detection, and $K$-Nearest
Neighbour (KNN) based outlier mitigation. In contrast to the former studies,
the proposed model does not rely on a particular EMD decomposed component
called Intrinsic Mode Function (IMF) for signal denoising. In our proposed
traffic prediction model, we used an average of all IMF components for signal
denoising. Moreover, the abnormal data points are replaced by the average of the
$K$ nearest data points, and the value of $K$ has been optimized based on the KNN
regressor prediction error measured in Root Mean Squared Error (RMSE). Finally,
we selected the best time-lagged feature subset for our prediction model based
on AutoRegressive Integrated Moving Average (ARIMA) and Akaike Information
Criterion (AIC) value. Our experiments are conducted on real-world internet
traffic datasets from industry, and the proposed method is compared with
various traditional deep sequence baseline models. Our results show that the
proposed EMD-KNN integrated prediction models outperform comparative models.
|
[
"cs.LG"
] | false |
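
The outlier-handling step in the abstract above (empirical-rule detection followed by replacement with the average of the $K$ nearest points) can be illustrated with a small NumPy sketch. The three-sigma threshold, the neighbourhood measured over time indices, and the name `knn_smooth` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_smooth(series, k=5):
    """Replace empirical-rule (3-sigma) outliers with the average of the k
    nearest (in time) non-outlier points. Illustrative sketch only."""
    x = np.asarray(series, dtype=float).copy()
    mu, sigma = x.mean(), x.std()
    outliers = np.abs(x - mu) > 3 * sigma            # empirical-rule detection
    normal_idx = np.flatnonzero(~outliers)
    for i in np.flatnonzero(outliers):
        # k nearest non-outlier points by distance in the time index
        nearest = normal_idx[np.argsort(np.abs(normal_idx - i))[:k]]
        x[i] = x[nearest].mean()                     # KNN-average replacement
    return x

# Example: a smooth traffic series with one injected spike
rng = np.random.default_rng(0)
traffic = rng.normal(loc=100.0, scale=5.0, size=50)
traffic[17] = 400.0                                  # outlier
print(knn_smooth(traffic, k=3)[17])                  # roughly back near 100
```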
2306.03440
|
2023-06-06T06:37:07Z
|
Quantifying the Variability Collapse of Neural Networks
|
[
"Jing Xu",
"Haoxiong Liu"
] |
Recent studies empirically demonstrate the positive relationship between the
transferability of neural networks and the within-class variation of the last
layer features. The recently discovered Neural Collapse (NC) phenomenon
provides a new perspective for understanding the last-layer geometry of neural
networks. In this paper, we propose a novel metric, named Variability Collapse
Index (VCI), to quantify the variability collapse phenomenon in the NC
paradigm. The VCI metric is well-motivated and intrinsically related to the
linear probing loss on the last layer features. Moreover, it enjoys desired
theoretical and empirical properties, including invariance under invertible
linear transformations and numerical stability, which distinguish it from
previous metrics. Our experiments verify that VCI is indicative of the
variability collapse and the transferability of pretrained neural networks.
|
[
"cs.LG"
] | false |
2306.03542
|
2023-06-06T09:38:57Z
|
Masked Autoencoders are Efficient Continual Federated Learners
|
[
"Subarnaduti Paul",
"Lars-Joel Frey",
"Roshni Kamath",
"Kristian Kersting",
"Martin Mundt"
] |
Machine learning is typically framed from a perspective of i.i.d., and more
importantly, isolated data. In parts, federated learning lifts this assumption,
as it sets out to solve the real-world challenge of collaboratively learning a
shared model from data distributed across clients. However, motivated primarily
by privacy and computational constraints, the fact that data may change,
distributions drift, or even tasks advance individually on clients, is seldom
taken into account. The field of continual learning addresses this separate
challenge and first steps have recently been taken to leverage synergies in
distributed supervised settings, in which several clients learn to solve
changing classification tasks over time without forgetting previously seen
ones. Motivated by these prior works, we posit that such federated continual
learning should be grounded in unsupervised learning of representations that
are shared across clients; in the loose spirit of how humans can indirectly
leverage others' experience without exposure to a specific task. For this
purpose, we demonstrate that masked autoencoders for distribution estimation
are particularly amenable to this setup. Specifically, their masking strategy
can be seamlessly integrated with task attention mechanisms to enable selective
knowledge transfer between clients. We empirically corroborate the latter
statement through several continual federated scenarios on both image and
binary datasets.
|
[
"cs.LG"
] | false |
2306.03615
|
2023-06-06T12:07:50Z
|
Zero-shot Preference Learning for Offline RL via Optimal Transport
|
[
"Runze Liu",
"Yali Du",
"Fengshuo Bai",
"Jiafei Lyu",
"Xiu Li"
] |
Preference-based Reinforcement Learning (PbRL) has demonstrated remarkable
efficacy in aligning rewards with human intentions. However, a significant
challenge lies in the need for substantial human labels, which is costly and
time-consuming. Additionally, the expensive preference data obtained from prior
tasks is not typically reusable for subsequent task learning, leading to
extensive labeling for each new task. In this paper, we propose a novel
zero-shot preference-based RL algorithm that leverages labeled preference data
from source tasks to infer labels for target tasks, eliminating the requirement
for human queries. Our approach utilizes Gromov-Wasserstein distance to align
trajectory distributions between source and target tasks. The solved optimal
transport matrix serves as a correspondence between trajectories of two tasks,
making it possible to identify corresponding trajectory pairs between tasks and
transfer the preference labels. However, learning directly from inferred labels
that contain a fraction of noisy labels will result in an inaccurate reward
function, subsequently affecting policy performance. To this end, we introduce
Robust Preference Transformer, which models the rewards as Gaussian
distributions and incorporates reward uncertainty in addition to reward mean.
The empirical results on robotic manipulation tasks of Meta-World and Robomimic
show that our method has strong capabilities of transferring preferences
between tasks and learns reward functions from noisy labels robustly.
Furthermore, we reveal that our method attains near-oracle performance with a
small proportion of scripted labels.
|
[
"cs.LG"
] | false |
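
The preference-transfer idea above can be sketched as follows: compute a Gromov-Wasserstein coupling between source and target trajectory sets (here with the POT library's `ot.gromov.gromov_wasserstein`) and copy preference labels from the most strongly coupled source trajectories. The toy embeddings, the hard `argmax` matching, and the `transfer_label` helper are assumptions for illustration and do not reproduce the paper's Robust Preference Transformer.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)

# Toy trajectory embeddings for a source and a target task (n_traj x dim).
src = rng.normal(size=(20, 8))
tgt = rng.normal(size=(25, 8))

# Intra-task pairwise distance matrices define the two metric spaces.
C_src = ot.dist(src, src)
C_tgt = ot.dist(tgt, tgt)
p, q = ot.unif(len(src)), ot.unif(len(tgt))

# Gromov-Wasserstein coupling between source and target trajectories.
T = ot.gromov.gromov_wasserstein(C_src, C_tgt, p, q, loss_fun="square_loss")

def transfer_label(i, j, source_pref):
    """Label a target pair (i, j) via its most strongly coupled source pair."""
    si, sj = T[:, i].argmax(), T[:, j].argmax()   # hard matching (illustrative)
    return source_pref(si, sj)

# Dummy source preference oracle: prefer the trajectory with the larger first feature.
pref = transfer_label(0, 1, lambda a, b: int(src[a, 0] > src[b, 0]))
print("transferred preference:", pref)
```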
2306.03647
|
2023-06-06T13:03:24Z
|
Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks
|
[
"Yurong Zhong",
"Zhe Xie",
"Weiling Li",
"Xin Luo"
] |
An Undirected Weighted Network (UWN) is commonly found in big data-related
applications. Note that the information associated with such a network's nodes
and edges can be expressed as a Symmetric, High-Dimensional and Incomplete
(SHDI) matrix. However, existing models fail to handle either its intrinsic
symmetry or its low data density, resulting in low model scalability or
representation learning ability. To address this issue, a Proximal
Symmetric Nonnegative Latent-factor-analysis (PSNL) model is proposed. It
incorporates a proximal term into a symmetry-aware and data-density-oriented
objective function for high representation accuracy. Then an adaptive
Alternating Direction Method of Multipliers (ADMM)-based learning scheme is
implemented through a Tree-structured Parzen Estimator (TPE) method for
high computational efficiency. Empirical studies on four UWNs demonstrate that
PSNL achieves higher accuracy gain than state-of-the-art models, as well as
highly competitive computational efficiency.
|
[
"cs.LG"
] | false |
2306.03680
|
2023-06-06T13:43:09Z
|
Mildly Constrained Evaluation Policy for Offline Reinforcement Learning
|
[
"Linjie Xu",
"Zhengyao Jiang",
"Jinyu Wang",
"Lei Song",
"Jiang Bian"
] |
Offline reinforcement learning (RL) methodologies enforce constraints on the
policy to adhere closely to the behavior policy, thereby stabilizing value
learning and mitigating the selection of out-of-distribution (OOD) actions
during test time. Conventional approaches apply identical constraints for both
value learning and test time inference. However, our findings indicate that the
constraints suitable for value estimation may in fact be excessively
restrictive for action selection during test time. To address this issue, we
propose a Mildly Constrained Evaluation Policy (MCEP) for test time inference
with a more constrained target policy for value estimation. Since the target
policy has been adopted in various prior approaches, MCEP can be seamlessly
integrated with them as a plug-in. We instantiate MCEP based on TD3-BC
[Fujimoto and Gu, 2021] and AWAC [Nair et al., 2020] algorithms. The empirical
results on MuJoCo locomotion tasks show that the MCEP significantly outperforms
the target policy and achieves competitive results to state-of-the-art offline
RL methods. The codes are open-sourced at https://github.com/egg-west/MCEP.git.
|
[
"cs.LG"
] | false |
2306.03715
|
2023-06-06T14:23:34Z
|
Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability
|
[
"Jianing Zhu",
"Hengzhuang Li",
"Jiangchao Yao",
"Tongliang Liu",
"Jianliang Xu",
"Bo Han"
] |
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI
when deploying machine learning models in real-world applications. Previous
paradigms either explore better scoring functions or utilize the knowledge of
outliers to equip the models with the ability of OOD detection. However, few of
them pay attention to the intrinsic OOD detection capability of the given
model. In this work, we generally discover the existence of an intermediate
stage of a model trained on in-distribution (ID) data having higher OOD
detection performance than that of its final stage across different settings,
and further identify one critical data-level attribution to be learning with
the atypical samples. Based on such insights, we propose a novel method,
Unleashing Mask, which aims to restore the OOD discriminative capabilities of
the well-trained model with ID data. Our method utilizes a mask to figure out
the memorized atypical samples, and then finetune the model or prune it with
the introduced mask to forget them. Extensive experiments and analysis
demonstrate the effectiveness of our method. The code is available at:
https://github.com/tmlr-group/Unleashing-Mask.
|
[
"cs.LG"
] | false |
2306.03745
|
2023-06-06T15:04:31Z
|
Soft Merging of Experts with Adaptive Routing
|
[
"Mohammed Muqeeth",
"Haokun Liu",
"Colin Raffel"
] |
Sparsely activated neural networks with conditional computation learn to
route their inputs through different "expert" subnetworks, providing a form of
modularity that densely activated models lack. Despite their possible benefits,
models with learned routing often underperform their parameter-matched densely
activated counterparts as well as models that use non-learned heuristic routing
strategies. In this paper, we hypothesize that these shortcomings stem from the
gradient estimation techniques used to train sparsely activated models that use
non-differentiable discrete routing decisions. To address this issue, we
introduce Soft Merging of Experts with Adaptive Routing (SMEAR), which avoids
discrete routing by using a single "merged" expert constructed via a weighted
average of all of the experts' parameters. By routing activations through a
single merged expert, SMEAR does not incur a significant increase in
computational costs and enables standard gradient-based training. We
empirically validate that models using SMEAR outperform models that route based
on metadata or learn sparse routing through gradient estimation. Furthermore,
we provide qualitative analysis demonstrating that the experts learned via
SMEAR exhibit a significant amount of specialization. All of the code used in
our experiments is publicly available.
|
[
"cs.LG"
] | false |
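
The routing-free merging described above can be sketched in a few lines of PyTorch: the router's softmax weights average the experts' parameters into one merged linear layer, which is then applied to all tokens. The module shapes, the mean-pooled routing input, and the class name are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMergedExperts(nn.Module):
    """Sketch of SMEAR-style soft merging: expert weights are averaged with
    routing probabilities, then a single merged linear layer is applied."""
    def __init__(self, d_in, d_out, n_experts):
        super().__init__()
        self.router = nn.Linear(d_in, n_experts)
        self.expert_w = nn.Parameter(torch.randn(n_experts, d_out, d_in) * 0.02)
        self.expert_b = nn.Parameter(torch.zeros(n_experts, d_out))

    def forward(self, x):                                       # x: (batch, seq, d_in)
        probs = F.softmax(self.router(x.mean(dim=1)), dim=-1)   # (batch, n_experts)
        # Merge expert parameters per example via a weighted average.
        W = torch.einsum("be,eoi->boi", probs, self.expert_w)   # (batch, d_out, d_in)
        b = torch.einsum("be,eo->bo", probs, self.expert_b)     # (batch, d_out)
        return torch.einsum("boi,bsi->bso", W, x) + b.unsqueeze(1)

layer = SoftMergedExperts(d_in=16, d_out=32, n_experts=4)
print(layer(torch.randn(2, 10, 16)).shape)   # torch.Size([2, 10, 32])
```

Because routing only selects a weighted average of parameters, the whole forward pass stays differentiable and no discrete gradient estimator is needed.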
2306.03824
|
2023-06-06T16:12:35Z
|
Understanding Generalization of Federated Learning via Stability:
Heterogeneity Matters
|
[
"Zhenyu Sun",
"Xiaochun Niu",
"Ermin Wei"
] |
Generalization performance is a key metric in evaluating machine learning
models when applied to real-world applications. Good generalization indicates
the model can predict unseen data correctly when trained on a limited amount
of data. Federated learning (FL), which has emerged as a popular distributed
learning framework, allows multiple devices or clients to train a shared model
without violating privacy requirements. While the existing literature has
studied extensively the generalization performances of centralized machine
learning algorithms, similar analysis in the federated settings is either
absent or with very restrictive assumptions on the loss functions. In this
paper, we aim to analyze the generalization performances of federated learning
by means of algorithmic stability, which measures the change of the output
model of an algorithm when perturbing one data point. Three widely-used
algorithms are studied, including FedAvg, SCAFFOLD, and FedProx, under convex
and non-convex loss functions. Our analysis shows that the generalization
performances of models trained by these three algorithms are closely related to
the heterogeneity of clients' datasets as well as the convergence behaviors of
the algorithms. Particularly, in the i.i.d. setting, our results recover the
classical results of stochastic gradient descent (SGD).
|
[
"cs.LG"
] | false |
2306.03900
|
2023-06-06T17:58:12Z
|
Model Spider: Learning to Rank Pre-Trained Models Efficiently
|
[
"Yi-Kai Zhang",
"Ting-Ji Huang",
"Yao-Xiang Ding",
"De-Chuan Zhan",
"Han-Jia Ye"
] |
Figuring out which Pre-Trained Model (PTM) from a model zoo fits the target
task is essential to take advantage of plentiful model resources. With the
availability of numerous heterogeneous PTMs from diverse fields, efficiently
selecting the most suitable PTM is challenging due to the time-consuming costs
of carrying out forward or backward passes over all PTMs. In this paper, we
propose Model Spider, which tokenizes both PTMs and tasks by summarizing their
characteristics into vectors to enable efficient PTM selection. By leveraging
the approximated performance of PTMs on a separate set of training tasks, Model
Spider learns to construct tokens and measure the fitness score between a
model-task pair via their tokens. The ability to rank relevant PTMs higher than
others generalizes to new tasks. With the top-ranked PTM candidates, we further
learn to enrich task tokens with their PTM-specific semantics to re-rank the
PTMs for better selection. Model Spider balances efficiency and selection
ability, making PTM selection like a spider preying on a web. Model Spider
demonstrates promising performance in various configurations of model zoos.
|
[
"cs.LG"
] | false |
2306.03911
|
2023-06-06T14:13:16Z
|
Multi-constrained Symmetric Nonnegative Latent Factor Analysis for
Accurately Representing Large-scale Undirected Weighted Networks
|
[
"Yurong Zhong",
"Zhe Xie",
"Weiling Li",
"Xin Luo"
] |
An Undirected Weighted Network (UWN) is frequently encountered in
big-data-related applications concerning the complex interactions among numerous
nodes, e.g., a protein interaction network from a bioinformatics application. A
Symmetric High-Dimensional and Incomplete (SHDI) matrix can smoothly illustrate
such a UWN, which contains rich knowledge like node interaction behaviors and
local complexes. To extract desired knowledge from an SHDI matrix, an analysis
model should carefully consider its symmetric topology for describing a UWN's
intrinsic symmetry. Representation learning for a UWN borrows the success of a
pyramid of symmetry-aware models like a Symmetric Nonnegative Matrix
Factorization (SNMF) model, whose objective function utilizes a sole Latent
Factor (LF) matrix for representing an SHDI matrix's symmetry rigorously. However, they
suffer from the following drawbacks: 1) their computational complexity is high;
and 2) their modeling strategy narrows their representation features, making
them suffer from low learning ability. Aiming to address the above critical
issues, this paper proposes a Multi-constrained Symmetric Nonnegative
Latent-factor-analysis (MSNL) model with two-fold ideas: 1) introducing
multi-constraints composed of multiple LF matrices, i.e., inequality and
equality ones into a data-density-oriented objective function for precisely
representing the intrinsic symmetry of an SHDI matrix with broadened feature
space; and 2) implementing an Alternating Direction Method of Multipliers
(ADMM)-incorporated learning scheme for precisely solving such a
multi-constrained model. Empirical studies on three SHDI matrices from real
bioinformatics or industrial applications demonstrate that the proposed MSNL
model achieves stronger representation learning ability for an SHDI matrix than
state-of-the-art models do.
|
[
"cs.LG"
] | false |
2306.03985
|
2023-06-06T19:44:37Z
|
Agent Performing Autonomous Stock Trading under Good and Bad Situations
|
[
"Yunfei Luo",
"Zhangqi Duan"
] |
Stock trading is one of the popular ways of financial management. However,
the market and the economic environment are unstable and usually not
predictable. Furthermore, engaging in stock trading requires time and effort to
analyze, create strategies, and make decisions. It would be convenient and
effective if an agent could assist or even do the task of analyzing and
modeling the past data and then generate a strategy for autonomous trading.
Recently, reinforcement learning has been shown to be robust in various tasks
that involve achieving a goal with a decision making strategy based on
time-series data. In this project, we have developed a pipeline that simulates
the stock trading environment and have trained an agent to automate the stock
trading process with deep reinforcement learning methods, including deep
Q-learning, deep SARSA, and the policy gradient method. We evaluate our
platform during relatively good (before 2021) and bad (2021 - 2022) situations.
The stocks we evaluated include Google, Apple, Tesla, Meta, Microsoft,
and IBM. These stocks are among the popular ones, and their changes in trends
are representative of both good and bad situations. We showed that
before 2021, the three reinforcement learning methods we tried consistently
provide promising profit returns with total annual rates around $70\%$ to
$90\%$, while maintaining a positive profit return after 2021 with total annual
rates around $2\%$ to $7\%$.
|
[
"cs.LG"
] | false |
2306.03362
|
2023-06-06T02:29:40Z
|
Boosting Offline Reinforcement Learning with Action Preference Query
|
[
"Qisen Yang",
"Shenzhi Wang",
"Matthieu Gaetan Lin",
"Shiji Song",
"Gao Huang"
] |
Training practical agents usually involves offline and online reinforcement
learning (RL) to balance the policy's performance and interaction costs. In
particular, online fine-tuning has become a commonly used method to correct the
erroneous estimates of out-of-distribution data learned in the offline training
phase. However, even limited online interactions can be inaccessible or
catastrophic for high-stake scenarios like healthcare and autonomous driving.
In this work, we introduce an interaction-free training scheme dubbed
Offline-with-Action-Preferences (OAP). The main insight is that, compared to
online fine-tuning, querying the preferences between pre-collected and learned
actions can be equally or even more helpful to the erroneous estimate problem.
By adaptively encouraging or suppressing the policy constraint according to action
preferences, OAP could distinguish overestimation from beneficial policy
improvement and thus attains a more accurate evaluation of unseen data.
Theoretically, we prove a lower bound of the behavior policy's performance
improvement brought by OAP. Moreover, comprehensive experiments on the D4RL
benchmark and state-of-the-art algorithms demonstrate that OAP yields higher
(29% on average) scores, especially on challenging AntMaze tasks (98% higher).
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03402
|
2023-06-06T04:47:44Z
|
Binary Classification with Instance and Label Dependent Label Noise
|
[
"Hyungki Im",
"Paul Grigas"
] |
Learning with label dependent label noise has been extensively explored in
both theory and practice; however, dealing with instance (i.e., feature) and
label dependent label noise continues to be a challenging task. The difficulty
arises from the fact that the noise rate varies for each instance, making it
challenging to estimate accurately. The question of whether it is possible to
learn a reliable model using only noisy samples remains unresolved. We answer
this question with a theoretical analysis that provides matching upper and
lower bounds. Surprisingly, our results show that, without any additional
assumptions, empirical risk minimization achieves the optimal excess risk
bound. Specifically, we derive a novel excess risk bound proportional to the
noise level, which holds in very general settings, by comparing the empirical
risk minimizers obtained from clean samples and noisy samples. Second, we show
that the minimax lower bound for the 0-1 loss is a constant proportional to the
average noise rate. Our findings suggest that learning solely with noisy
samples is impossible without access to clean samples or strong assumptions on
the distribution of the data.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.03405
|
2023-06-06T04:53:06Z
|
Vehicle Dynamics Modeling for Autonomous Racing Using Gaussian Processes
|
[
"Jingyun Ning",
"Madhur Behl"
] |
Autonomous racing is increasingly becoming a proving ground for autonomous
vehicle technology at the limits of its current capabilities. The most
prominent examples include the F1Tenth racing series, Formula Student
Driverless (FSD), Roborace, and the Indy Autonomous Challenge (IAC). Especially
necessary, in high speed autonomous racing, is the knowledge of accurate
racecar vehicle dynamics. The choice of the vehicle dynamics model has to be
made by balancing the increasing computational demands in contrast to improved
accuracy of more complex models. Recent studies have explored learning-based
methods, such as Gaussian Process (GP) regression for approximating the vehicle
dynamics model. However, these efforts focus on higher-level constructs such as
motion planning or predictive control, and lack both realism and rigor in
the GP modeling process, which is often over-simplified. This paper presents
the most detailed analysis of the applicability of GP models for approximating
vehicle dynamics for autonomous racing. In particular, we construct dynamic and
extended kinematic models for the popular F1TENTH racing platform. We
investigate the effect of kernel choices, sample sizes, racetrack layout,
racing lines, and velocity profiles on the efficacy and generalizability of the
learned dynamics. We conduct 400+ simulations on real F1 track layouts to
provide comprehensive recommendations to the research community for training
accurate GP regression for single-track vehicle dynamics of a racecar.
|
[
"cs.RO",
"cs.LG"
] | false |
2306.03408
|
2023-06-06T05:11:58Z
|
Agents Explore the Environment Beyond Good Actions to Improve Their
Model for Better Decisions
|
[
"Matthias Unverzagt"
] |
Improving the decision-making capabilities of agents is a key challenge on
the road to artificial intelligence. To improve the planning skills needed to
make good decisions, MuZero's agent combines prediction by a network model and
planning by a tree search using the predictions. MuZero's learning process can
fail when predictions are poor but planning requires them. We use this as an
impetus to get the agent to explore parts of the decision tree in the
environment that it otherwise would not explore. The agent achieves this, first
by normal planning to come up with an improved policy. Second, it randomly
deviates from this policy at the beginning of each training episode. And third,
it switches back to the improved policy at a random time step to experience the
rewards from the environment associated with the improved policy, which is the
basis for learning the correct value expectation. The simple board game
Tic-Tac-Toe is used to illustrate how this approach can improve the agent's
decision-making ability. The source code, written entirely in Java, is
available at https://github.com/enpasos/muzero.
|
[
"cs.AI",
"cs.LG"
] | false |
2306.03434
|
2023-06-06T06:22:42Z
|
Learning-Based Heuristic for Combinatorial Optimization of the Minimum
Dominating Set Problem using Graph Convolutional Networks
|
[
"Abihith Kothapalli",
"Mudassir Shabbir",
"Xenofon Koutsoukos"
] |
A dominating set of a graph $\mathcal{G=(V, E)}$ is a subset of vertices
$S\subseteq\mathcal{V}$ such that every vertex $v\in \mathcal{V} \setminus S$
outside the dominating set is adjacent to a vertex $u\in S$ within the set. The
minimum dominating set problem seeks to find a dominating set of minimum
cardinality and is a well-established NP-hard combinatorial optimization
problem. We propose a novel learning-based heuristic approach to compute
solutions for the minimum dominating set problem using graph convolutional
networks. We conduct an extensive experimental evaluation of the proposed
method on a combination of randomly generated graphs and real-world graph
datasets. Our results indicate that the proposed learning-based approach can
outperform a classical greedy approximation algorithm. Furthermore, we
demonstrate the generalization capability of the graph convolutional network
across datasets and its ability to scale to graphs of higher order than those
on which it was trained. Finally, we utilize the proposed learning-based
heuristic in an iterative greedy algorithm, achieving state-of-the-art
performance in the computation of dominating sets.
|
[
"cs.LG",
"cs.DM"
] | false |
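
For context, the classical greedy approximation baseline mentioned above can be written in a few lines: repeatedly add the vertex that newly dominates the most undominated vertices. The adjacency-list input format and function name below are assumptions for illustration.

```python
def greedy_dominating_set(adj):
    """Classical greedy approximation for the minimum dominating set.
    adj: dict mapping each vertex to the set of its neighbours."""
    undominated = set(adj)
    dom_set = set()
    while undominated:
        # Pick the vertex that newly dominates the most vertices
        # (itself plus its neighbours that are still undominated).
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom_set.add(best)
        undominated -= {best} | adj[best]
    return dom_set

# Small example: the path graph 0-1-2-3-4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(adj))   # {1, 3}
```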
2306.03447
|
2023-06-06T07:00:24Z
|
GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets
|
[
"Shubham Gupta",
"Sahil Manchanda",
"Sayan Ranu",
"Srikanta Bedathur"
] |
Graph neural networks (GNNs), in general, are built on the assumption of a
static set of features characterizing each node in a graph. This assumption is
often violated in practice. Existing methods partly address this issue through
feature imputation. However, these techniques (i) assume uniformity of feature
set across nodes, (ii) are transductive by nature, and (iii) fail to work when
features are added or removed over time. In this work, we address these
limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a
novel allotropic transformation on the original graph, wherein the nodes and
features are decoupled through a bipartite encoding. Through a carefully chosen
message passing framework on the allotropic transformation, we make the model
parameter size independent of the number of features and thereby inductive to
both unseen nodes and features. We prove that GRAFENNE is at least as
expressive as any of the existing message-passing GNNs in terms of
Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen
features does not come at the cost of expressivity. In addition, as
demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN
with high empirical efficacy and the ability to learn in continual fashion over
streaming feature sets.
|
[
"cs.LG",
"cs.AI"
] | false |
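
The decoupling of nodes and features via a bipartite encoding, as described above, can be illustrated with a tiny data-structure sketch: each (node, feature) observation becomes an edge between a node entity and a feature entity, so adding or removing a feature only adds or removes feature nodes and edges. This is only an illustration of the encoding idea, not GRAFENNE's actual transformation or message passing.

```python
def to_bipartite(node_features):
    """Decouple nodes from features: emit bipartite edges (node, feature, value).
    node_features: dict node -> dict of available features (may differ per node)."""
    feature_nodes = sorted({f for feats in node_features.values() for f in feats})
    edges = [(node, f, val)
             for node, feats in node_features.items()
             for f, val in feats.items()]
    return feature_nodes, edges

# Nodes with heterogeneous feature sets
graph_feats = {"u": {"age": 31, "deg": 4}, "v": {"deg": 2, "country": "NL"}}
print(to_bipartite(graph_feats))
```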
2306.03521
|
2023-06-06T09:12:49Z
|
Machine learning in and out of equilibrium
|
[
"Shishir Adhikari",
"Alkan Kabakçıoğlu",
"Alexander Strang",
"Deniz Yuret",
"Michael Hinczewski"
] |
The algorithms used to train neural networks, like stochastic gradient
descent (SGD), have close parallels to natural processes that navigate a
high-dimensional parameter space -- for example protein folding or evolution.
Our study uses a Fokker-Planck approach, adapted from statistical physics, to
explore these parallels in a single, unified framework. We focus in particular
on the stationary state of the system in the long-time limit, which in
conventional SGD is out of equilibrium, exhibiting persistent currents in the
space of network parameters. As in its physical analogues, the current is
associated with an entropy production rate for any given training trajectory.
The stationary distribution of these rates obeys the integral and detailed
fluctuation theorems -- nonequilibrium generalizations of the second law of
thermodynamics. We validate these relations in two numerical examples, a
nonlinear regression network and MNIST digit classification. While the
fluctuation theorems are universal, there are other aspects of the stationary
state that are highly sensitive to the training details. Surprisingly, the
effective loss landscape and diffusion matrix that determine the shape of the
stationary distribution vary depending on the simple choice of minibatching
done with or without replacement. We can take advantage of this nonequilibrium
sensitivity to engineer an equilibrium stationary state for a particular
application: sampling from a posterior distribution of network weights in
Bayesian machine learning. We propose a new variation of stochastic gradient
Langevin dynamics (SGLD) that harnesses without-replacement minibatching. In an
example system where the posterior is exactly known, this SGWORLD algorithm
outperforms SGLD, converging to the posterior orders of magnitude faster as a
function of the learning rate.
|
[
"cs.LG",
"cond-mat.stat-mech"
] | false |
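
A generic stochastic gradient Langevin dynamics (SGLD) update with epoch-wise without-replacement minibatching, the two ingredients the abstract above combines, looks like the sketch below. The toy Gaussian-mean posterior and all names are assumptions for illustration; this is not the paper's SGWORLD algorithm or its Fokker-Planck analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=512)   # toy dataset
N = len(data)

def grad_U(theta, xb):
    """Minibatch gradient of the negative log posterior for a 1-D Gaussian
    mean with a standard normal prior (toy target)."""
    return theta + N / len(xb) * (theta - xb).sum()

def sgld_without_replacement(theta, lr=1e-3, batch=32, epochs=20):
    """Generic SGLD step: gradient descent plus N(0, 2*lr) noise, with
    minibatches drawn without replacement (one shuffle per epoch)."""
    for _ in range(epochs):
        order = rng.permutation(N)                # without-replacement pass
        for start in range(0, N, batch):
            xb = data[order[start:start + batch]]
            theta = theta - lr * grad_U(theta, xb) \
                    + np.sqrt(2 * lr) * rng.normal(size=np.shape(theta))
    return theta

print(sgld_without_replacement(np.zeros(1)))      # hovers near the posterior mean ~2
```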
2306.03527
|
2023-06-06T09:22:52Z
|
Rec4Ad: A Free Lunch to Mitigate Sample Selection Bias for Ads CTR
Prediction in Taobao
|
[
"Jingyue Gao",
"Shuguang Han",
"Han Zhu",
"Siran Yang",
"Yuning Jiang",
"Jian Xu",
"Bo Zheng"
] |
Click-Through Rate (CTR) prediction serves as a fundamental component in
online advertising. A common practice is to train a CTR model on advertisement
(ad) impressions with user feedback. Since ad impressions are purposely
selected by the model itself, their distribution differs from the inference
distribution and thus exhibits sample selection bias (SSB) that affects model
performance. Existing studies on SSB mainly employ sample re-weighting
techniques which suffer from high variance and poor model calibration. Another
line of work relies on costly uniform data that is inadequate to train
industrial models. Thus mitigating SSB in industrial models with a
uniform-data-free framework is worth exploring. Fortunately, many platforms
display mixed results of organic items (i.e., recommendations) and sponsored
items (i.e., ads) to users, where impressions of ads and recommendations are
selected by different systems but share the same user decision rationales.
Based on the above characteristics, we propose to leverage recommendation
samples as a free lunch to mitigate SSB for the ads CTR model (Rec4Ad). After
elaborating data augmentation, Rec4Ad learns disentangled representations with
alignment and decorrelation modules for enhancement. When deployed in Taobao
display advertising system, Rec4Ad achieves substantial gains in key business
metrics, with a lift of up to +6.6\% CTR and +2.9\% RPM.
|
[
"cs.IR",
"cs.LG"
] | false |
2306.03536
|
2023-06-06T09:35:29Z
|
On Pitfalls of Test-Time Adaptation
|
[
"Hao Zhao",
"Yuejiang Liu",
"Alexandre Alahi",
"Tao Lin"
] |
Test-Time Adaptation (TTA) has recently emerged as a promising approach for
tackling the robustness challenge under distribution shifts. However, the lack
of consistent settings and systematic studies in prior literature hinders
thorough assessments of existing methods. To address this issue, we present
TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art
algorithms, a diverse array of distribution shifts, and two evaluation
protocols. Through extensive experiments, our benchmark reveals three common
pitfalls in prior efforts. First, selecting appropriate hyper-parameters,
especially for model selection, is exceedingly difficult due to online batch
dependency. Second, the effectiveness of TTA varies greatly depending on the
quality and properties of the model being adapted. Third, even under optimal
algorithmic conditions, none of the existing methods are capable of addressing
all common types of distribution shifts. Our findings underscore the need for
future research in the field to conduct rigorous evaluations on a broader set
of models and shifts, and to re-examine the assumptions behind the empirical
success of TTA. Our code is available at
\url{https://github.com/lins-lab/ttab}.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03558
|
2023-06-06T10:18:36Z
|
Machine Unlearning: A Survey
|
[
"Heng Xu",
"Tianqing Zhu",
"Lefeng Zhang",
"Wanlei Zhou",
"Philip S. Yu"
] |
Machine learning has attracted widespread attention and evolved into an
enabling technology for a wide range of highly successful applications, such as
intelligent computer vision, speech recognition, medical diagnosis, and more.
Yet a special need has arisen where, due to privacy, usability, and/or the
right to be forgotten, information about some specific samples needs to be
removed from a model, called machine unlearning. This emerging technology has
drawn significant interest from both academics and industry due to its
innovation and practicality. At the same time, this ambitious problem has led
to numerous research efforts aimed at confronting its challenges. To the best
of our knowledge, no study has analyzed this complex topic or compared the
feasibility of existing unlearning solutions in different kinds of scenarios.
Accordingly, with this survey, we aim to capture the key concepts of unlearning
techniques. The existing solutions are classified and summarized based on their
characteristics within an up-to-date and comprehensive review of each
category's advantages and limitations. The survey concludes by highlighting
some of the outstanding issues with unlearning techniques, along with some
feasible directions for new research opportunities.
|
[
"cs.CR",
"cs.LG"
] | false |
2306.03561
|
2023-06-06T10:25:10Z
|
CIN++: Enhancing Topological Message Passing
|
[
"Lorenzo Giusti",
"Teodora Reu",
"Francesco Ceccarelli",
"Cristian Bodnar",
"Pietro Liò"
] |
Graph Neural Networks (GNNs) have demonstrated remarkable success in learning
from graph-structured data. However, they face significant limitations in
expressive power, struggling with long-range interactions and lacking a
principled approach to modeling higher-order structures and group interactions.
Cellular Isomorphism Networks (CINs) recently addressed most of these
challenges with a message passing scheme based on cell complexes. Despite their
advantages, CINs make use only of boundary and upper messages which do not
consider a direct interaction between the rings present in the underlying
complex. Accounting for these interactions might be crucial for learning
representations of many real-world complex phenomena such as the dynamics of
supramolecular assemblies, neural activity within the brain, and gene
regulation processes. In this work, we propose CIN++, an enhancement of the
topological message passing scheme introduced in CINs. Our message passing
scheme accounts for the aforementioned limitations by letting the cells also
receive lower messages within each layer. By providing a more
comprehensive representation of higher-order and long-range interactions, our
enhanced topological message passing scheme achieves state-of-the-art results
on large-scale and long-range chemistry benchmarks.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03566
|
2023-06-06T10:34:03Z
|
Memory-Based Dual Gaussian Processes for Sequential Learning
|
[
"Paul E. Chang",
"Prakhar Verma",
"S. T. John",
"Arno Solin",
"Mohammad Emtiyaz Khan"
] |
Sequential learning with Gaussian processes (GPs) is challenging when access
to past data is limited, for example, in continual and active learning. In such
cases, errors can accumulate over time due to inaccuracies in the posterior,
hyperparameters, and inducing points, making accurate learning challenging.
Here, we present a method to keep all such errors in check using the recently
proposed dual sparse variational GP. Our method enables accurate inference for
generic likelihoods and improves learning by actively building and updating a
memory of past data. We demonstrate its effectiveness in several applications
involving Bayesian optimization, active learning, and continual learning.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.03607
|
2023-06-06T11:50:09Z
|
Buying Information for Stochastic Optimization
|
[
"Mingchen Ma",
"Christos Tzamos"
] |
Stochastic optimization is one of the central problems in Machine Learning
and Theoretical Computer Science. In the standard model, the algorithm is given
a fixed distribution known in advance. In practice though, one may acquire at a
cost extra information to make better decisions. In this paper, we study how to
buy information for stochastic optimization and formulate this question as an
online learning problem. Assuming the learner has an oracle for the original
optimization problem, we design a $2$-competitive deterministic algorithm and a
$e/(e-1)$-competitive randomized algorithm for buying information. We show that
this ratio is tight as the problem is equivalent to a robust generalization of
the ski-rental problem, which we call super-martingale stopping.
We also consider an adaptive setting where the learner can choose to buy
information after taking some actions for the underlying optimization problem.
We focus on the classic optimization problem, Min-Sum Set Cover, where the goal
is to quickly find an action that covers a given request drawn from a known
distribution. We provide an $8$-competitive algorithm running in polynomial
time that chooses actions and decides when to buy information about the
underlying request.
|
[
"cs.DS",
"cs.LG"
] | false |
2306.03626
|
2023-06-06T12:27:54Z
|
Understanding Progressive Training Through the Framework of Randomized
Coordinate Descent
|
[
"Rafał Szlendak",
"Elnur Gasanov",
"Peter Richtárik"
] |
We propose a Randomized Progressive Training algorithm (RPT) -- a stochastic
proxy for the well-known Progressive Training method (PT) (Karras et al.,
2017). Originally designed to train GANs (Goodfellow et al., 2014), PT was
proposed as a heuristic, with no convergence analysis even for the simplest
objective functions. On the contrary, to the best of our knowledge, RPT is the
first PT-type algorithm with rigorous and sound theoretical guarantees for
general smooth objective functions. We cast our method into the established
framework of Randomized Coordinate Descent (RCD) (Nesterov, 2012; Richt\'arik &
Tak\'a\v{c}, 2014), for which (as a by-product of our investigations) we also
propose a novel, simple and general convergence analysis encapsulating
strongly-convex, convex and nonconvex objectives. We then use this framework to
establish a convergence theory for RPT. Finally, we validate the effectiveness
of our method through extensive computational experiments.
|
[
"cs.LG",
"math.OC"
] | false |
2306.03702
|
2023-06-06T14:15:29Z
|
Bayesian post-hoc regularization of random forests
|
[
"Bastian Pfeifer"
] |
Random Forests are powerful ensemble learning algorithms widely used in
various machine learning tasks. However, they have a tendency to overfit noisy
or irrelevant features, which can result in decreased generalization
performance. Post-hoc regularization techniques aim to mitigate this issue by
modifying the structure of the learned ensemble after its training. Here, we
propose Bayesian post-hoc regularization to leverage the reliable patterns
captured by leaf nodes closer to the root, while potentially reducing the
impact of more specific and potentially noisy leaf nodes deeper in the tree.
This approach allows for a form of pruning that does not alter the general
structure of the trees but rather adjusts the influence of leaf nodes based on
their proximity to the root node. We have evaluated the performance of our
method on various machine learning data sets. Our approach demonstrates
competitive performance with the state-of-the-art methods and, in certain
cases, surpasses them in terms of predictive accuracy and generalization.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03726
|
2023-06-06T14:45:24Z
|
Exploring Model Dynamics for Accumulative Poisoning Discovery
|
[
"Jianing Zhu",
"Xiawei Guo",
"Jiangchao Yao",
"Chao Du",
"Li He",
"Shuo Yuan",
"Tongliang Liu",
"Liang Wang",
"Bo Han"
] |
Adversarial poisoning attacks pose huge threats to various machine learning
applications. In particular, the recent accumulative poisoning attacks show that
it is possible to inflict irreparable harm on models via a sequence of
imperceptible attacks followed by a trigger batch. Due to the limited
data-level discrepancy in real-time data streaming, current defensive methods
are indiscriminate in handling the poison and clean samples. In this paper, we
dive into the perspective of model dynamics and propose a novel information
measure, namely, Memorization Discrepancy, to explore the defense via the
model-level information. By implicitly transferring the changes in the data
manipulation to that in the model outputs, Memorization Discrepancy can
discover the imperceptible poison samples based on their distinct dynamics from
the clean samples. We thoroughly explore its properties and propose
Discrepancy-aware Sample Correction (DSC) to defend against accumulative
poisoning attacks. Extensive experiments comprehensively characterize
Memorization Discrepancy and verify its effectiveness. The code is publicly
available at: https://github.com/tmlr-group/Memorization-Discrepancy.
|
[
"cs.LG",
"cs.CR"
] | false |
2306.03770
|
2023-06-06T15:31:05Z
|
Graph Classification Gaussian Processes via Spectral Features
|
[
"Felix L. Opolka",
"Yin-Cong Zhi",
"Pietro Liò",
"Xiaowen Dong"
] |
Graph classification aims to categorise graphs based on their structure and
node attributes. In this work, we propose to tackle this task using tools from
graph signal processing by deriving spectral features, which we then use to
design two variants of Gaussian process models for graph classification. The
first variant uses spectral features based on the distribution of energy of a
node feature signal over the spectrum of the graph. We show that even such a
simple approach, having no learned parameters, can yield competitive
performance compared to strong neural network and graph kernel baselines. A
second, more sophisticated variant is designed to capture multi-scale and
localised patterns in the graph by learning spectral graph wavelet filters,
obtaining improved performance on synthetic and real-world data sets. Finally,
we show that both models produce well calibrated uncertainty estimates,
enabling reliable decision making based on the model predictions.
|
[
"cs.LG",
"stat.ML"
] | false |
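As an illustration of the first, parameter-free variant described above (the distribution of a node signal's energy over the graph Laplacian spectrum), here is a small Python sketch. The binning and normalization choices are assumptions for illustration, and the Gaussian process classifier built on top of these features is not shown.

import numpy as np

# Spectral-energy feature: how the energy of a node signal is distributed over
# the graph Laplacian spectrum. Binning and normalization are illustrative.

def spectral_energy_feature(adj: np.ndarray, signal: np.ndarray, n_bins: int = 10) -> np.ndarray:
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                              # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)
    coeffs = eigvecs.T @ signal                  # graph Fourier transform of the signal
    energy = coeffs ** 2
    # Histogram the energy over equally spaced frequency bins.
    bins = np.linspace(eigvals.min(), eigvals.max() + 1e-9, n_bins + 1)
    idx = np.digitize(eigvals, bins) - 1
    feat = np.array([energy[idx == k].sum() for k in range(n_bins)])
    return feat / feat.sum()

if __name__ == "__main__":
    # Tiny example: a 5-node path graph with a random node feature signal.
    A = np.zeros((5, 5))
    for u in range(4):
        A[u, u + 1] = A[u + 1, u] = 1.0
    x = np.random.default_rng(0).standard_normal(5)
    print(spectral_energy_feature(A, x))
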
2306.03834
|
2023-06-06T16:24:27Z
|
MTS2Graph: Interpretable Multivariate Time Series Classification with
Temporal Evolving Graphs
|
[
"Raneen Younis",
"Abdul Hakmeh",
"Zahra Ahmadi"
] |
Conventional time series classification approaches based on bags of patterns
or shapelets face significant challenges in dealing with a vast amount of
feature candidates from high-dimensional multivariate data. In contrast, deep
neural networks can learn low-dimensional features efficiently, and in
particular, Convolutional Neural Networks (CNN) have shown promising results in
classifying Multivariate Time Series (MTS) data. A key factor in the success of
deep neural networks is their astonishing expressive power. However, this power
comes at the cost of complex, black-box models, conflicting with the goals of
building reliable and human-understandable models. An essential criterion in
understanding such predictive deep models involves quantifying the contribution
of time-varying input variables to the classification. Hence, in this work, we
introduce a new framework for interpreting multivariate time series data by
extracting and clustering the input representative patterns that highly
activate CNN neurons. This way, we identify each signal's role and
dependencies, considering all possible combinations of signals in the MTS
input. Then, we construct a graph that captures the temporal relationship
between the extracted patterns for each layer. An effective graph merging
strategy finds the connection of each node to the previous layer's nodes.
Finally, a graph embedding algorithm generates new representations of the
created interpretable time-series features. To evaluate the performance of our
proposed framework, we run extensive experiments on eight datasets of the
UCR/UEA archive, along with HAR and PAM datasets. The experiments indicate the
benefit of our time-aware graph-based representation in MTS classification
while enriching the classification with interpretability.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.03887
|
2023-06-06T17:46:48Z
|
Fast Context Adaptation in Cost-Aware Continual Learning
|
[
"Seyyidahmed Lahmer",
"Federico Mason",
"Federico Chiariotti",
"Andrea Zanella"
] |
In the past few years, DRL has become a valuable solution to automatically
learn efficient resource management strategies in complex networks with
time-varying statistics. However, the increased complexity of 5G and Beyond
networks requires correspondingly more complex learning agents and the learning
process itself might end up competing with users for communication and
computational resources. This creates friction: on the one hand, the learning
process needs resources to quickly converge to an effective strategy; on the
other hand, the learning process needs to be efficient, i.e., take as few
resources as possible from the user's data plane, so as not to throttle users'
QoS. In this paper, we investigate this trade-off and propose a dynamic
strategy to balance the resources assigned to the data plane and those reserved
for learning. With the proposed approach, a learning agent can quickly converge
to an efficient resource allocation strategy and adapt to changes in the
environment, as in the CL paradigm, while minimizing the impact on the users'
QoS. Simulation results show that the proposed method outperforms static
allocation methods with minimal learning overhead, almost reaching the
performance of an ideal out-of-band CL solution.
|
[
"cs.NI",
"cs.LG"
] | false |
2306.03949
|
2023-06-06T18:27:20Z
|
Partial Inference in Structured Prediction
|
[
"Chuyang Ke",
"Jean Honorio"
] |
In this paper, we examine the problem of partial inference in the context of
structured prediction. Using a generative model approach, we consider the task
of maximizing a score function with unary and pairwise potentials in the space
of labels on graphs. Employing a two-stage convex optimization algorithm for
label recovery, we analyze the conditions under which a majority of the labels
can be recovered. We introduce a novel perspective on the Karush-Kuhn-Tucker
(KKT) conditions and primal and dual construction, and provide statistical and
topological requirements for partial recovery with provable guarantees.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.03968
|
2023-06-06T19:02:57Z
|
Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels
|
[
"Alexander Immer",
"Tycho F. A. van der Ouderaa",
"Mark van der Wilk",
"Gunnar Rätsch",
"Bernhard Schölkopf"
] |
Selecting hyperparameters in deep learning greatly impacts its effectiveness
but requires manual effort and expertise. Recent works show that Bayesian model
selection with Laplace approximations makes it possible to optimize such
hyperparameters, just like standard neural network parameters, using gradients
on the training data. However, estimating a single hyperparameter gradient
requires a pass through the entire dataset, limiting the scalability of such
algorithms. In this work, we overcome this issue by introducing lower bounds to
the linearized Laplace approximation of the marginal likelihood. In contrast to
previous estimators, these bounds are amenable to stochastic-gradient-based
optimization and allow trading off estimation accuracy against computational
complexity. We derive them using the function-space form of the linearized
Laplace, which can be estimated using the neural tangent kernel.
Experimentally, we show that the estimators can significantly accelerate
gradient-based hyperparameter optimization.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.03982
|
2023-06-06T19:35:09Z
|
Globally injective and bijective neural operators
|
[
"Takashi Furuya",
"Michael Puthawala",
"Matti Lassas",
"Maarten V. de Hoop"
] |
Recently there has been great interest in operator learning, where networks
learn operators between function spaces from an essentially
infinite-dimensional perspective. In this work we present results for when the
operators learned by these networks are injective and surjective. As a warmup,
we combine prior work in both the finite-dimensional ReLU and operator learning
setting by giving sharp conditions under which ReLU layers with linear neural
operators are injective. We then consider the case when the activation
function is pointwise bijective and obtain sufficient conditions for the layer
to be injective. We remark that this question, while trivial in the finite-rank
case, is subtler in the infinite-rank case and is proved using tools from
Fredholm theory. Next, we prove that our supplied injective neural operators
are universal approximators and that their implementations with finite-rank
neural networks are still injective. This ensures that injectivity is not
`lost' in the transcription from analytical operators to their finite-rank
implementation with networks. Finally, we conclude with an increase in
abstraction and consider general conditions when subnetworks, which may be many
layers deep, are injective and surjective and provide an exact inversion from a
`linearization.' This section uses general arguments from Fredholm theory and
Leray-Schauder degree theory for non-linear integral equations to analyze the
mapping properties of neural operators in function spaces. These results apply
to subnetworks formed from the layers considered in this work, under natural
conditions. We believe that our work has applications in Bayesian UQ where
injectivity enables likelihood estimation and in inverse problems where
surjectivity and injectivity correspond to existence and uniqueness,
respectively.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.04004
|
2023-06-06T20:35:20Z
|
Randomized Schur Complement Views for Graph Contrastive Learning
|
[
"Vignesh Kothapalli"
] |
We introduce a randomized topological augmentor based on Schur complements
for Graph Contrastive Learning (GCL). Given a graph laplacian matrix, the
technique generates unbiased approximations of its Schur complements and treats
the corresponding graphs as augmented views. We discuss the benefits of our
approach, provide theoretical justifications and present connections with graph
diffusion. Unlike previous efforts, we study the empirical effectiveness of the
augmentor in a controlled fashion by varying the design choices for subsequent
GCL phases, such as encoding and contrasting. Extensive experiments on node and
graph classification benchmarks demonstrate that our technique consistently
outperforms pre-defined and adaptive augmentation approaches to achieve
state-of-the-art results.
|
[
"cs.LG",
"cs.SI"
] | false |
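For intuition about the object the augmentor approximates, the sketch below forms the exact Schur complement of a graph Laplacian onto a randomly retained node subset. The dense computation is purely illustrative; the paper relies on randomized unbiased approximations rather than this exact formula at scale.

import numpy as np

# Exact Schur complement of a graph Laplacian onto a retained node subset.
# Dense and exact for illustration only; the paper uses randomized
# unbiased approximations of this object.

def laplacian(adj: np.ndarray) -> np.ndarray:
    return np.diag(adj.sum(axis=1)) - adj

def schur_complement(adj: np.ndarray, keep: np.ndarray) -> np.ndarray:
    """Eliminate the nodes not in `keep` and return the reduced Laplacian."""
    L = laplacian(adj)
    drop = np.setdiff1d(np.arange(adj.shape[0]), keep)
    L_kk = L[np.ix_(keep, keep)]
    L_kd = L[np.ix_(keep, drop)]
    L_dd = L[np.ix_(drop, drop)]
    L_dk = L[np.ix_(drop, keep)]
    return L_kk - L_kd @ np.linalg.pinv(L_dd) @ L_dk

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((8, 8)) < 0.4).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # random undirected graph
    keep = rng.choice(8, size=5, replace=False)
    S = schur_complement(A, keep)
    print(S.shape, np.allclose(S, S.T))            # reduced 5x5 Laplacian, symmetric
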
2306.04039
|
2023-06-06T22:08:42Z
|
Revisiting Neural Retrieval on Accelerators
|
[
"Jiaqi Zhai",
"Zhaojie Gong",
"Yueming Wang",
"Xiao Sun",
"Zheng Yan",
"Fu Li",
"Xing Liu"
] |
Retrieval finds a small number of relevant candidates from a large corpus for
information retrieval and recommendation applications. A key component of
retrieval is to model (user, item) similarity, which is commonly represented as
the dot product of two learned embeddings. This formulation permits efficient
inference, commonly known as Maximum Inner Product Search (MIPS). Despite its
popularity, dot products cannot capture complex user-item interactions, which
are multifaceted and likely high rank. We hence examine non-dot-product
retrieval settings on accelerators, and propose mixture of logits
(MoL), which models (user, item) similarity as an adaptive composition of
elementary similarity functions. This new formulation is expressive, capable of
modeling high rank (user, item) interactions, and further generalizes to the
long tail. When combined with a hierarchical retrieval strategy,
h-indexer, we are able to scale up MoL to a 100M corpus on a single GPU
with latency comparable to MIPS baselines. On public datasets, our approach
leads to uplifts of up to 77.3% in hit rate (HR). Experiments on a large
recommendation surface at Meta showed strong metric gains and reduced
popularity bias, validating the proposed approach's performance and improved
generalization.
|
[
"cs.LG",
"cs.IR"
] | false |
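The following hedged Python sketch illustrates the mixture-of-logits idea as described in the abstract: (user, item) similarity modeled as an adaptive, gated combination of several elementary dot-product similarities, in contrast to a single dot product. The embedding split, softmax gate, and shapes are assumptions for illustration, not the production implementation.

import numpy as np

# Mixture of logits (MoL) similarity sketch: a (user, item)-adaptive gate over
# several elementary dot-product similarities. Shapes and the softmax gate are
# illustrative assumptions.

rng = np.random.default_rng(0)
n_components, dim = 4, 16

def split(emb: np.ndarray) -> np.ndarray:
    """Split one embedding into n_components low-dimensional pieces."""
    return emb.reshape(n_components, dim // n_components)

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mol_similarity(user_emb: np.ndarray, item_emb: np.ndarray, gate_w: np.ndarray) -> float:
    u, v = split(user_emb), split(item_emb)
    logits = (u * v).sum(axis=1)                       # elementary dot-product similarities
    gate = softmax(gate_w @ np.concatenate([user_emb, item_emb]))  # (user, item)-adaptive weights
    return float(gate @ logits)

user = rng.standard_normal(dim)
item = rng.standard_normal(dim)
gate_w = rng.standard_normal((n_components, 2 * dim))
print("dot product:", float(user @ item))
print("MoL score  :", mol_similarity(user, item, gate_w))
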
2306.04066
|
2023-06-06T23:54:01Z
|
Intelligent sampling for surrogate modeling, hyperparameter
optimization, and data analysis
|
[
"Chandrika Kamath"
] |
Sampling techniques are used in many fields, including design of experiments,
image processing, and graphics. The techniques in each field are designed to
meet the constraints specific to that field such as uniform coverage of the
range of each dimension or random samples that are at least a certain distance
apart from each other. When an application imposes new constraints, for
example, by requiring samples in a non-rectangular domain or the addition of
new samples to an existing set, a common solution is to modify the algorithm
currently in use, often with less than satisfactory results. As an alternative,
we propose the concept of intelligent sampling, where we devise algorithms
specifically tailored to meet our sampling needs, either by creating new
algorithms or by modifying suitable algorithms from other fields. Surprisingly,
both qualitative and quantitative comparisons indicate that some relatively
simple algorithms can be easily modified to meet the many sampling requirements
of surrogate modeling, hyperparameter optimization, and data analysis; these
algorithms outperform their more sophisticated counterparts currently in use,
resulting in better use of time and computer resources.
|
[
"cs.LG",
"stat.CO"
] | false |
2306.04658
|
2023-06-06T19:27:11Z
|
Mathematics-assisted directed evolution and protein engineering
|
[
"Yuchi Qiu",
"Guo-Wei Wei"
] |
Directed evolution is a molecular biology technique that is transforming
protein engineering by creating proteins with desirable properties and
functions. However, it is experimentally impossible to perform the deep
mutational scanning of the entire protein library due to the enormous
mutational space, which scales as $20^N$, where $N$ is the number of amino
acids. This has led to the rapid growth of AI-assisted directed evolution
(AIDE) or AI-assisted protein engineering (AIPE) as an emerging research field.
Aided by advanced natural language processing (NLP) techniques, including
long short-term memory, autoencoder, and transformer, sequence-based embeddings
have been dominant approaches in AIDE and AIPE. Persistent Laplacians, an
emerging technique in topological data analysis (TDA), have made
structure-based embeddings a superb option in AIDE and AIPE. We argue that a
class of persistent topological Laplacians (PTLs), including persistent
Laplacians, persistent path Laplacians, persistent sheaf Laplacians, persistent
hypergraph Laplacians, persistent hyperdigraph Laplacians, and evolutionary de
Rham-Hodge theory, can effectively overcome the limitations of the current TDA
and offer a new generation of more powerful TDA approaches. In the general
framework of topological deep learning, mathematics-assisted directed evolution
(MADE) has a great potential for future protein engineering.
|
[
"q-bio.BM",
"cs.LG"
] | false |
2306.03335
|
2023-06-06T01:13:18Z
|
Unraveling Projection Heads in Contrastive Learning: Insights from
Expansion and Shrinkage
|
[
"Yu Gui",
"Cong Ma",
"Yiqiao Zhong"
] |
We investigate the role of projection heads, also known as projectors, within
the encoder-projector framework (e.g., SimCLR) used in contrastive learning. We
aim to demystify the observed phenomenon where representations learned before
projectors outperform those learned after -- measured using the downstream
linear classification accuracy, even when the projectors themselves are linear.
In this paper, we make two significant contributions towards this aim.
Firstly, through empirical and theoretical analysis, we identify two crucial
effects -- expansion and shrinkage -- induced by the contrastive loss on the
projectors. In essence, contrastive loss either expands or shrinks the signal
direction in the representations learned by an encoder, depending on factors
such as the augmentation strength, the temperature used in contrastive loss,
etc. Secondly, drawing inspiration from the expansion and shrinkage phenomenon,
we propose a family of linear transformations to accurately model the
projector's behavior. This enables us to precisely characterize the downstream
linear classification accuracy in the high-dimensional asymptotic limit. Our
findings reveal that linear projectors operating in the shrinkage (or
expansion) regime hinder (or improve) the downstream classification accuracy.
This provides the first theoretical explanation as to why (linear) projectors
impact the downstream performance of learned representations. Our theoretical
findings are further corroborated by extensive experiments on both synthetic
data and real image data.
|
[
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] | false |
2306.03466
|
2023-06-06T07:36:47Z
|
Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse
Problems
|
[
"Samuel Hurault",
"Ulugbek Kamilov",
"Arthur Leclaire",
"Nicolas Papadakis"
] |
Plug-and-Play (PnP) methods are efficient iterative algorithms for solving
ill-posed image inverse problems. PnP methods are obtained by using deep
Gaussian denoisers instead of the proximal operator or the gradient-descent
step within proximal algorithms. Current PnP schemes rely on data-fidelity
terms that have either Lipschitz gradients or closed-form proximal operators,
which is not applicable to Poisson inverse problems. Based on the observation
that Gaussian noise is not the adequate noise model in this setting, we
propose to generalize PnP using the Bregman Proximal Gradient (BPG) method. BPG
replaces the Euclidean distance with a Bregman divergence that can better
capture the smoothness properties of the problem. We introduce the Bregman
Score Denoiser specifically parametrized and trained for the new Bregman
geometry and prove that it corresponds to the proximal operator of a nonconvex
potential. We propose two PnP algorithms based on the Bregman Score Denoiser
for solving Poisson inverse problems. Extending the convergence results of BPG
in the nonconvex settings, we show that the proposed methods converge,
targeting stationary points of an explicit global functional. Experimental
evaluations conducted on various Poisson inverse problems validate the
convergence results and showcase effective restoration performance.
|
[
"eess.IV",
"cs.LG",
"math.OC"
] | false |
2306.03515
|
2023-06-06T09:01:17Z
|
Logic Diffusion for Knowledge Graph Reasoning
|
[
"Xiaoying Xie",
"Biao Gong",
"Yiliang Lv",
"Zhen Han",
"Guoshuai Zhao",
"Xueming Qian"
] |
Most recent works focus on answering first order logical queries to explore
the knowledge graph reasoning via multi-hop logic predictions. However,
existing reasoning models are limited by the circumscribed logical paradigms of
training samples, which leads to a weak generalization of unseen logic. To
address these issues, we propose a plug-in module called Logic Diffusion (LoD)
to discover unseen queries from surroundings and achieve dynamical equilibrium
between different kinds of patterns. The basic idea of LoD is relation
diffusion and sampling sub-logic by random walking as well as a special
training mechanism called gradient adaption. Besides, LoD is accompanied by a
novel loss function to further achieve robust logical diffusion when facing
noisy data in training or testing sets. Extensive experiments on four public
datasets demonstrate the superiority of mainstream knowledge graph reasoning
models with LoD over the state-of-the-art. Moreover, our ablation study proves the
general effectiveness of LoD on the noise-rich knowledge graph.
|
[
"cs.LG",
"cs.AI",
"cs.LO"
] | false |
2306.03534
|
2023-06-06T09:34:11Z
|
Continual Learning in Linear Classification on Separable Data
|
[
"Itay Evron",
"Edward Moroshko",
"Gon Buzaglo",
"Maroun Khriesh",
"Badea Marjieh",
"Nathan Srebro",
"Daniel Soudry"
] |
We analyze continual learning on a sequence of separable linear
classification tasks with binary labels. We show theoretically that learning
with weak regularization reduces to solving a sequential max-margin problem,
corresponding to a special case of the Projection Onto Convex Sets (POCS)
framework. We then develop upper bounds on the forgetting and other quantities
of interest under various settings with recurring tasks, including cyclic and
random orderings of tasks. We discuss several practical implications to popular
training practices like regularization scheduling and weighting. We point out
several theoretical differences between our continual classification setting
and a recently studied continual regression setting.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2306.03548
|
2023-06-06T09:50:38Z
|
Learning Dynamical Systems from Noisy Data with Inverse-Explicit
Integrators
|
[
"Håkon Noren",
"Sølve Eidnes",
"Elena Celledoni"
] |
We introduce the mean inverse integrator (MII), a novel approach to increase
the accuracy when training neural networks to approximate vector fields of
dynamical systems from noisy data. This method can be used to average multiple
trajectories obtained by numerical integrators such as Runge-Kutta methods. We
show that the class of mono-implicit Runge-Kutta methods (MIRK) has particular
advantages when used in connection with MII. When training vector field
approximations, explicit expressions for the loss functions are obtained when
inserting the training data in the MIRK formulae, unlocking symmetric and
high-order integrators that would otherwise be implicit for initial value
problems. The combined approach of applying MIRK within MII yields a
significantly lower error compared to the plain use of the numerical integrator
without averaging the trajectories. This is demonstrated with experiments using
data from several (chaotic) Hamiltonian systems. Additionally, we perform a
sensitivity analysis of the loss functions under normally distributed
perturbations, supporting the favorable performance of MII.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
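The key mechanism mentioned above, that an otherwise implicit integrator yields an explicit training loss once both endpoints of a step come from data, can be illustrated with the implicit midpoint rule (a mono-implicit Runge-Kutta method). In the Python sketch below a linear vector field stands in for the neural network, and the toy data and names are assumptions; this is not the full MII scheme with trajectory averaging.

import numpy as np

# Implicit midpoint rule as an explicit least-squares loss when both endpoints
# of each step are training data. A linear vector field stands in for a neural
# network; illustrative only, not the full MII scheme.

def midpoint_residual_loss(theta: np.ndarray, traj: np.ndarray, h: float) -> float:
    """Sum of squared residuals of (y_{n+1}-y_n)/h - f_theta((y_n+y_{n+1})/2)."""
    f = lambda y: theta @ y                     # stand-in for a learned vector field
    y0, y1 = traj[:-1], traj[1:]
    resid = (y1 - y0) / h - f(((y0 + y1) / 2).T).T
    return float((resid ** 2).sum())

if __name__ == "__main__":
    # Noisy data from the harmonic oscillator y' = A y.
    A_true = np.array([[0.0, 1.0], [-1.0, 0.0]])
    h, T = 0.1, 200
    rng = np.random.default_rng(0)
    ys = [np.array([1.0, 0.0])]
    for _ in range(T):                          # generate data with the same midpoint rule
        y = ys[-1]
        ys.append(np.linalg.solve(np.eye(2) - h / 2 * A_true,
                                  (np.eye(2) + h / 2 * A_true) @ y))
    traj = np.array(ys) + 0.01 * rng.standard_normal((T + 1, 2))
    print("loss at truth :", midpoint_residual_loss(A_true, traj, h))
    print("loss at zero  :", midpoint_residual_loss(np.zeros((2, 2)), traj, h))
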
2306.03646
|
2023-06-06T13:00:47Z
|
Dance Generation by Sound Symbolic Words
|
[
"Miki Okamura",
"Naruya Kondo",
"Tatsuki Fushimi",
"Maki Sakamoto",
"Yoichi Ochiai"
] |
This study introduces a novel approach to generate dance motions using
onomatopoeia as input, with the aim of enhancing creativity and diversity in
dance generation. Unlike text and music, onomatopoeia conveys rhythm and
meaning through abstract word expressions without constraints on expression and
without need for specialized knowledge. We adapt the AI Choreographer framework
and employ the Sakamoto system, a feature extraction method for onomatopoeia
focusing on phonemes and syllables. Additionally, we present a new dataset of
40 onomatopoeia-dance motion pairs collected through a user survey. Our results
demonstrate that the proposed method enables more intuitive dance generation
and can create dance motions using sound-symbolic words from a variety of
languages, including those without onomatopoeia. This highlights the potential
for diverse dance creation across different languages and cultures, accessible
to a wider audience. Qualitative samples from our model can be found at:
https://sites.google.com/view/onomatopoeia-dance/home/.
|
[
"cs.LG",
"cs.HC",
"cs.SD",
"eess.AS"
] | false |
2306.03725
|
2023-06-06T14:44:52Z
|
Towards Memory-Efficient Training for Extremely Large Output Spaces --
Learning with 500k Labels on a Single Commodity GPU
|
[
"Erik Schultheis",
"Rohit Babbar"
] |
In classification problems with large output spaces (up to millions of
labels), the last layer can require an enormous amount of memory. Using sparse
connectivity would drastically reduce the memory requirements, but as we show
below, it can result in much diminished predictive performance of the model.
Fortunately, we found that this can be mitigated by introducing a penultimate
layer of intermediate size. We further demonstrate that one can constrain the
connectivity of the sparse layer to be uniform, in the sense that each output
neuron will have the exact same number of incoming connections. This allows for
efficient implementations of sparse matrix multiplication and connection
redistribution on GPU hardware. Via a custom CUDA implementation, we show that
the proposed approach can scale to datasets with 670,000 labels on a single
commodity GPU with only 4GB memory.
|
[
"cs.LG",
"cs.AI",
"cs.DC"
] | false |
2306.03739
|
2023-06-06T14:56:47Z
|
Learning to Do or Learning While Doing: Reinforcement Learning and
Bayesian Optimisation for Online Continuous Tuning
|
[
"Jan Kaiser",
"Chenran Xu",
"Annika Eichler",
"Andrea Santamaria Garcia",
"Oliver Stein",
"Erik Bründermann",
"Willi Kuropka",
"Hannes Dinter",
"Frank Mayet",
"Thomas Vinatier",
"Florian Burkart",
"Holger Schlarb"
] |
Online tuning of real-world plants is a complex optimisation problem that
continues to require manual intervention by experienced human operators.
Autonomous tuning is a rapidly expanding field of research, where
learning-based methods, such as Reinforcement Learning-trained Optimisation
(RLO) and Bayesian optimisation (BO), hold great promise for achieving
outstanding plant performance and reducing tuning times. Which algorithm to
choose in different scenarios, however, remains an open question. Here we
present a comparative study using a routine task in a real particle accelerator
as an example, showing that RLO generally outperforms BO, but is not always the
best choice. Based on the study's results, we provide a clear set of criteria
to guide the choice of algorithm for a given tuning task. These can ease the
adoption of learning-based autonomous tuning solutions to the operation of
complex real-world plants, ultimately improving the availability and pushing
the limits of operability of these facilities, thereby enabling scientific and
engineering advancements.
|
[
"cs.LG",
"cs.AI",
"physics.acc-ph"
] | false |
2306.03757
|
2023-06-06T15:17:34Z
|
Exploring the effects of robotic design on learning and neural control
|
[
"Joshua Paul Powers"
] |
The ongoing deep learning revolution has allowed computers to outclass humans
in various games and perceive features imperceptible to humans during
classification tasks. Current machine learning techniques have clearly
distinguished themselves in specialized tasks. However, we have yet to see
robots capable of performing multiple tasks at an expert level. Most work in
this field is focused on the development of more sophisticated learning
algorithms for a robot's controller given a largely static and presupposed
robotic design. By focusing on the development of robotic bodies, rather than
neural controllers, I have discovered that robots can be designed such that
they overcome many of the current pitfalls encountered by neural controllers in
multitask settings. Through this discovery, I also present novel metrics to
explicitly measure the learning ability of a robotic design and its resistance
to common problems such as catastrophic interference.
Traditionally, the physical robot design requires human engineers to plan
every aspect of the system, which is expensive and often relies on human
intuition. In contrast, within the field of evolutionary robotics, evolutionary
algorithms are used to automatically create optimized designs, however, such
designs are often still limited in their ability to perform in a multitask
setting. The metrics created and presented here give a novel path to automated
design that allow evolved robots to synergize with their controller to improve
the computational efficiency of their learning while overcoming catastrophic
interference.
Overall, this dissertation intimates the ability to automatically design
robots that are more general purpose than current robots and that can perform
various tasks while requiring less computation.
|
[
"cs.RO",
"cs.AI",
"cs.LG",
"cs.NE"
] | false |
2306.03795
|
2023-06-06T15:40:27Z
|
AI-Supported Assessment of Load Safety
|
[
"Julius Schöning",
"Niklas Kruse"
] |
Load safety assessment and compliance is an essential step in the corporate
process of every logistics service provider. In 2020, a total of 11,371 police
checks of trucks were carried out, during which 9.6% (1091) violations against
the load safety regulations were detected. For a logistic service provider,
every load safety violation results in high fines and damage to reputation.
An assessment of load safety supported by artificial intelligence (AI) will
reduce the risk of accidents caused by unsecured loads and of fines during safety
assessments. This work shows how photos of the load, taken by the truck driver
or the loadmaster after the loading process, can be used to assess load safety.
Using a trained two-stage artificial neural network (ANN), these photos are
classified into three different classes: I) cargo loaded safely, II) cargo
loaded unsafely, and III) unusable image. By applying several architectures of
convolutional neural networks (CNN), it can be shown that it is possible to
distinguish between unusable and usable images for cargo safety assessment.
This distinction is quite crucial since the truck driver and the loadmaster
sometimes provide photos without the essential image features like the case
structure of the truck and the whole cargo. A human operator or another ANN
will then assess the load safety within the second stage.
|
[
"cs.AI",
"cs.HC",
"cs.LG"
] | false |
2306.03830
|
2023-06-06T16:15:56Z
|
Inductive Bias for Emergent Communication in a Continuous Setting
|
[
"John Isak Fjellvang Villanger",
"Troels Arnfred Bojesen"
] |
We study emergent communication in a multi-agent reinforcement learning
setting, where the agents solve cooperative tasks and have access to a
communication channel. The communication channel may consist of either discrete
symbols or continuous variables. We introduce an inductive bias to aid with the
emergence of good communication protocols for continuous messages, and we look
at the effect this type of inductive bias has for continuous and discrete
messages in itself or when used in combination with reinforcement learning. We
demonstrate that this type of inductive bias has a beneficial effect on the
communication protocols learnt in two toy environments, Negotiation and
Sequence Guess.
|
[
"cs.LG",
"cs.AI",
"cs.MA",
"I.2.11"
] | false |
2306.03962
|
2023-06-06T18:45:05Z
|
PILLAR: How to make semi-private learning more effective
|
[
"Francesco Pinto",
"Yaxi Hu",
"Fanny Yang",
"Amartya Sanyal"
] |
In Semi-Supervised Semi-Private (SP) learning, the learner has access to both
public unlabelled and private labelled data. We propose a computationally
efficient algorithm that, under mild assumptions on the data, provably achieves
significantly lower private labelled sample complexity and can be efficiently
run on real-world datasets. For this purpose, we leverage the features
extracted by networks pre-trained on public (labelled or unlabelled) data,
whose distribution can significantly differ from the one on which SP learning
is performed. To validate its empirical effectiveness, we propose a wide
variety of experiments under tight privacy constraints ($\epsilon = 0.1$) and
with a focus on low-data regimes. In all of these settings, our algorithm
exhibits significantly improved performance over available baselines that use
similar amounts of public data.
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"stat.ML"
] | false |
2306.03976
|
2023-06-06T19:18:46Z
|
Explainable AI using expressive Boolean formulas
|
[
"Gili Rosenberg",
"J. Kyle Brubaker",
"Martin J. A. Schuetz",
"Grant Salton",
"Zhihuai Zhu",
"Elton Yechao Zhu",
"Serdar Kadıoğlu",
"Sima E. Borujeni",
"Helmut G. Katzgraber"
] |
We propose and implement an interpretable machine learning classification
model for Explainable AI (XAI) based on expressive Boolean formulas. Potential
applications include credit scoring and diagnosis of medical conditions. The
Boolean formula defines a rule with tunable complexity (or interpretability),
according to which input data are classified. Such a formula can include any
operator that can be applied to one or more Boolean variables, thus providing
higher expressivity compared to more rigid rule-based and tree-based
approaches. The classifier is trained using native local optimization
techniques, efficiently searching the space of feasible formulas. Shallow rules
can be determined by fast Integer Linear Programming (ILP) or Quadratic
Unconstrained Binary Optimization (QUBO) solvers, potentially powered by
special purpose hardware or quantum devices. We combine the expressivity and
efficiency of the native local optimizer with the fast operation of these
devices by executing non-local moves that optimize over subtrees of the full
Boolean formula. We provide extensive numerical benchmarking results featuring
several baselines on well-known public datasets. Based on the results, we find
that the native local rule classifier is generally competitive with the other
classifiers. The addition of non-local moves achieves similar results with
fewer iterations, and therefore using specialized or quantum hardware could
lead to a speedup by fast proposal of non-local moves.
|
[
"cs.AI",
"cs.LG",
"math.OC",
"quant-ph"
] | false |
2306.04001
|
2023-06-06T20:28:37Z
|
One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from
Electromagnetic Solvers
|
[
"Sriram Ravula",
"Varun Gorti",
"Bo Deng",
"Swagato Chakraborty",
"James Pingenot",
"Bhyrav Mutnury",
"Doug Wallace",
"Doug Winterberg",
"Adam Klivans",
"Alexandros G. Dimakis"
] |
A key problem when modeling signal integrity for passive filters and
interconnects in IC packages is the need for multiple S-parameter measurements
within a desired frequency band to obtain adequate resolution. These samples
are often computationally expensive to obtain using electromagnetic (EM) field
solvers. Therefore, a common approach is to select a small subset of the
necessary samples and use an appropriate fitting mechanism to recreate a
densely-sampled broadband representation. We present the first deep generative
model-based approach to fit S-parameters from EM solvers using one-dimensional
Deep Image Prior (DIP). DIP is a technique that optimizes the weights of a
randomly-initialized convolutional neural network to fit a signal from noisy or
under-determined measurements. We design a custom architecture and propose a
novel regularization inspired by smoothing splines that penalizes discontinuous
jumps. We experimentally compare DIP to publicly available and proprietary
industrial implementations of Vector Fitting (VF), the industry-standard tool
for fitting S-parameters. Relative to publicly available implementations of VF,
our method shows superior performance on nearly all test examples using only
5-15% of the frequency samples. Our method is also competitive to proprietary
VF tools and often outperforms them for challenging input instances.
|
[
"cs.LG",
"cs.AI",
"eess.SP"
] | false |
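A minimal sketch of a one-dimensional deep-image-prior fit, assuming PyTorch: a small randomly initialized 1-D CNN maps a fixed noise input to a dense curve and its weights are optimized against only a sparse set of noisy samples. The architecture, the stand-in curve, and the plain MSE loss are illustrative assumptions and much simpler than the paper's custom design and spline-inspired regularizer.

import numpy as np
import torch
import torch.nn as nn

# 1-D deep-image-prior sketch: fit a dense curve from sparse noisy samples by
# optimizing the weights of a randomly initialized CNN. Illustrative only.

torch.manual_seed(0)
n = 256
freq = np.linspace(0.0, 1.0, n)
target = np.sin(8 * np.pi * freq) * np.exp(-2 * freq)          # stand-in smooth response curve
obs_idx = np.sort(np.random.default_rng(0).choice(n, size=26, replace=False))
obs = torch.tensor(target[obs_idx] + 0.01 * np.random.default_rng(1).standard_normal(26),
                   dtype=torch.float32)

net = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=5, padding=2),
)
z = torch.randn(1, 1, n)                                        # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
idx = torch.tensor(obs_idx, dtype=torch.long)

for step in range(2000):
    opt.zero_grad()
    out = net(z).squeeze()                                      # dense reconstruction of length n
    loss = ((out[idx] - obs) ** 2).mean()
    loss.backward()
    opt.step()

print("fit error on observed samples:", float(loss))
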
2306.04008
|
2023-06-06T20:43:07Z
|
Green Steganalyzer: A Green Learning Approach to Image Steganalysis
|
[
"Yao Zhu",
"Xinyu Wang",
"Hong-Shuo Chen",
"Ronald Salloum",
"C. -C. Jay Kuo"
] |
A novel learning solution to image steganalysis based on the green learning
paradigm, called Green Steganalyzer (GS), is proposed in this work. GS consists
of three modules: 1) pixel-based anomaly prediction, 2) embedding location
detection, and 3) decision fusion for image-level detection. In the first
module, GS decomposes an image into patches, adopts Saab transforms for feature
extraction, and conducts self-supervised learning to predict an anomaly score
of their center pixel. In the second module, GS analyzes the anomaly scores of
a pixel and its neighborhood to find pixels of higher embedding probabilities.
In the third module, GS focuses on pixels of higher embedding probabilities and
fuses their anomaly scores to make final image-level classification. Compared
with state-of-the-art deep-learning models, GS achieves comparable detection
performance against S-UNIWARD, WOW and HILL steganography schemes with
significantly lower computational complexity and a smaller model size, making
it attractive for mobile/edge applications. Furthermore, GS is mathematically
transparent because of its modular design.
|
[
"eess.IV",
"cs.CR",
"cs.LG"
] | false |
2306.04040
|
2023-06-06T22:11:13Z
|
FedVal: Different good or different bad in federated learning
|
[
"Viktor Valadi",
"Xinchi Qiu",
"Pedro Porto Buarque de Gusmão",
"Nicholas D. Lane",
"Mina Alibeigi"
] |
Federated learning (FL) systems are susceptible to attacks from malicious
actors who might attempt to corrupt the training model through various
poisoning attacks. FL also poses new challenges in addressing group bias, such
as ensuring fair performance for different demographic groups. Traditional
methods used to address such biases require centralized access to the data,
which FL systems do not have. In this paper, we present a novel approach FedVal
for both robustness and fairness that does not require any additional
information from clients that could raise privacy concerns and consequently
compromise the integrity of the FL system. To this end, we propose an
innovative score function based on a server-side validation method that
assesses client updates and determines the optimal aggregation balance between
locally-trained models. Our research shows that this approach not only provides
solid protection against poisoning attacks but can also be used to reduce group
bias and subsequently promote fairness while maintaining the system's
capability for differential privacy. Extensive experiments on the CIFAR-10,
FEMNIST, and PUMS ACSIncome datasets in different configurations demonstrate
the effectiveness of our method, resulting in state-of-the-art performances. We
have proven robustness in situations where 80% of participating clients are
malicious. Additionally, we have shown a significant increase in accuracy for
underrepresented labels from 32% to 53%, and an increase in recall rate for
underrepresented features from 19% to 50%.
|
[
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
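In the spirit of the server-side validation scoring described above, the following Python sketch weights client updates by their score on a small server-held validation set before aggregation. The linear model, the negative-loss score, and the softmax weighting are illustrative assumptions, not the paper's exact score function or aggregation rule.

import numpy as np

# Server-side, validation-scored aggregation sketch: score each client update
# on a held-out validation set and weight aggregation by those scores.
# Illustrative assumptions throughout; not the paper's exact method.

rng = np.random.default_rng(0)
d, n_val = 10, 200
w_true = rng.standard_normal(d)
X_val = rng.standard_normal((n_val, d))
y_val = X_val @ w_true

def score(update: np.ndarray) -> float:
    """Negative validation loss of a candidate model."""
    return -float(((X_val @ update - y_val) ** 2).mean())

# Three honest updates near the truth, two poisoned ones.
updates = [w_true + 0.1 * rng.standard_normal(d) for _ in range(3)]
updates += [10 * rng.standard_normal(d) for _ in range(2)]

scores = np.array([score(u) for u in updates])
weights = np.exp(scores - scores.max())
weights /= weights.sum()
global_model = sum(w * u for w, u in zip(weights, updates))
print("weights:", np.round(weights, 3))
print("error of weighted model:", np.linalg.norm(global_model - w_true))
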
2306.04049
|
2023-06-06T22:35:16Z
|
One-sided Matrix Completion from Two Observations Per Row
|
[
"Steven Cao",
"Percy Liang",
"Gregory Valiant"
] |
Given only a few observed entries from a low-rank matrix $X$, matrix
completion is the problem of imputing the missing entries, and it formalizes a
wide range of real-world settings that involve estimating missing data.
However, when there are too few observed entries to complete the matrix, what
other aspects of the underlying matrix can be reliably recovered? We study one
such problem setting, that of "one-sided" matrix completion, where our goal is
to recover the right singular vectors of $X$, even in the regime where
recovering the left singular vectors is impossible, which arises when there are
more rows than columns and very few observations. We propose a natural
algorithm that involves imputing the missing values of the matrix $X^TX$ and
show that even with only two observations per row in $X$, we can provably
recover $X^TX$ as long as we have at least $\Omega(r^2 d \log d)$ rows, where
$r$ is the rank and $d$ is the number of columns. We evaluate our algorithm on
one-sided recovery of synthetic data and low-coverage genome sequencing. In
these settings, our algorithm substantially outperforms standard matrix
completion and a variety of direct factorization methods.
|
[
"cs.LG",
"cs.DS",
"stat.ML"
] | false |
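The sketch below illustrates the core idea stated in the abstract: estimate $X^TX$ from only two observed entries per row and read the right singular subspace off its top eigenvectors. The simple averaging estimator and the diagonal handling are assumptions for illustration, not the paper's exact algorithm or its sample-complexity analysis.

import numpy as np

# One-sided completion sketch: estimate X^T X from two observed entries per
# row, then recover the right singular subspace from its top eigenvectors.
# Estimator details are illustrative assumptions.

rng = np.random.default_rng(0)
n, d, r = 20000, 20, 2
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))   # rank-r matrix

# Each row reveals exactly two distinct column entries.
pairs = np.array([rng.choice(d, size=2, replace=False) for _ in range(n)])

M_hat = np.zeros((d, d))
counts = np.zeros((d, d))
for i, (j, k) in enumerate(pairs):
    M_hat[j, k] += X[i, j] * X[i, k]; counts[j, k] += 1
    M_hat[k, j] += X[i, j] * X[i, k]; counts[k, j] += 1
    M_hat[j, j] += X[i, j] ** 2; counts[j, j] += 1
    M_hat[k, k] += X[i, k] ** 2; counts[k, k] += 1
M_hat = n * M_hat / np.maximum(counts, 1)                        # rescale averages to sums over rows

# Compare recovered and true right singular subspaces.
eigvals, eigvecs = np.linalg.eigh(M_hat)
V_hat = eigvecs[:, -r:]
V_true = np.linalg.svd(X, full_matrices=False)[2][:r].T
align = np.linalg.svd(V_true.T @ V_hat, compute_uv=False).min()
print("subspace alignment (close to 1 means recovered):", align)
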
2306.04655
|
2023-06-06T16:14:15Z
|
Modulation Classification Through Deep Learning Using Resolution
Transformed Spectrograms
|
[
"Muhammad Waqas",
"Muhammad Ashraf",
"Muhammad Zakwan"
] |
Modulation classification is an essential step of signal processing and has
been regularly applied in the field of telecommunication. Since variations of
frequency with respect to time remain a vital distinction among radio signals
with different modulation formats, these variations can be used for feature
extraction by converting 1-D radio signals into the frequency domain. In this
paper, we propose a scheme for Automatic Modulation Classification (AMC) using
modern architectures of Convolutional Neural Networks (CNN), by generating
spectrum images of eleven different modulation types. Additionally, we perform
a resolution transformation of the spectrograms that yields up to a 99.61%
reduction in computational load and 8x faster conversion of the received I/Q
data. The proposed AMC is implemented on CPU and GPU to recognize digital as
well as analogue modulation schemes. The performance is
evaluated on existing CNN models including SqueezeNet, Resnet-50,
InceptionResnet-V2, Inception-V3, VGG-16 and Densenet-201. Best results of
91.2% are achieved in the presence of AWGN and other noise impairments in the
signals, indicating that the transformed spectrogram-based AMC has good
classification accuracy, as the spectral features are highly discriminant and
CNN-based models have the capability to extract these high-dimensional features.
The spectrograms were created under different SNRs ranging from 5 to 30 dB with
a step size of 5 dB to observe the experimental results at various SNR levels.
The proposed methodology is efficient enough to be applied in wireless
communication networks for real-time applications.
|
[
"eess.SP",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
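To make the preprocessing pipeline concrete, here is a hedged Python sketch that converts complex I/Q samples into a spectrogram image and then applies a resolution transformation before it would be fed to a CNN. The toy QPSK signal, FFT sizes, and target resolution are assumptions for illustration, not the paper's settings.

import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

# I/Q samples -> spectrogram image -> resolution transformation. The toy QPSK
# signal and all sizes are illustrative assumptions.

rng = np.random.default_rng(0)
fs = 1_000_000
symbols = rng.integers(4, size=4096)
iq = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))             # toy QPSK baseband signal
iq = np.repeat(iq, 8) + 0.05 * (rng.standard_normal(8 * 4096)
                                + 1j * rng.standard_normal(8 * 4096))

f, t, sxx = spectrogram(iq, fs=fs, nperseg=256, noverlap=128,
                        return_onesided=False)                  # complex input -> two-sided spectrum
img = 10 * np.log10(np.fft.fftshift(np.abs(sxx), axes=0) + 1e-12)

# Resolution transformation: shrink the spectrogram to a fixed CNN input size.
target = (64, 64)
small = zoom(img, (target[0] / img.shape[0], target[1] / img.shape[1]), order=1)
print(img.shape, "->", small.shape)
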
2306.03481
|
2023-06-06T08:06:43Z
|
Transition role of entangled data in quantum machine learning
|
[
"Xinbiao Wang",
"Yuxuan Du",
"Zhuozhuo Tu",
"Yong Luo",
"Xiao Yuan",
"Dacheng Tao"
] |
Entanglement serves as the resource to empower quantum computing. Recent
progress has highlighted its positive impact on learning quantum dynamics,
wherein the integration of entanglement into quantum operations or measurements
of quantum machine learning (QML) models leads to substantial reductions in
training data size, surpassing a specified prediction error threshold. However,
an analytical understanding of how the entanglement degree in data affects
model performance remains elusive. In this study, we address this knowledge gap
by establishing a quantum no-free-lunch (NFL) theorem for learning quantum
dynamics using entangled data. Contrary to previous findings, we prove that the
impact of entangled data on prediction error exhibits a dual effect, depending
on the number of permitted measurements. With a sufficient number of
measurements, increasing the entanglement of training data consistently reduces
the prediction error or decreases the required size of the training data to
achieve the same prediction error. Conversely, when few measurements are
allowed, employing highly entangled data could lead to an increased prediction
error. The achieved results provide critical guidance for designing advanced
QML protocols, especially for those tailored for execution on early-stage
quantum computers with limited access to quantum resources.
|
[
"quant-ph",
"cs.AI",
"cs.IT",
"cs.LG",
"math.IT"
] | false |
2306.03838
|
2023-06-06T16:27:17Z
|
Spherical Fourier Neural Operators: Learning Stable Dynamics on the
Sphere
|
[
"Boris Bonev",
"Thorsten Kurth",
"Christian Hundt",
"Jaideep Pathak",
"Maximilian Baust",
"Karthik Kashinath",
"Anima Anandkumar"
] |
Fourier Neural Operators (FNOs) have proven to be an efficient and effective
method for resolution-independent operator learning in a broad variety of
application areas across scientific machine learning. A key reason for their
success is their ability to accurately model long-range dependencies in
spatio-temporal data by learning global convolutions in a computationally
efficient manner. To this end, FNOs rely on the discrete Fourier transform
(DFT), however, DFTs cause visual and spectral artifacts as well as pronounced
dissipation when learning operators in spherical coordinates since they
incorrectly assume a flat geometry. To overcome this limitation, we generalize
FNOs on the sphere, introducing Spherical FNOs (SFNOs) for learning operators
on spherical geometries. We apply SFNOs to forecasting atmospheric dynamics,
and demonstrate stable autoregressive rollouts for a year of simulated time
(1,460 steps), while retaining physically plausible dynamics. The SFNO has
important implications for machine learning-based simulation of climate
dynamics that could eventually help accelerate our response to climate change.
|
[
"cs.LG",
"cs.NA",
"math.NA",
"physics.ao-ph",
"physics.comp-ph"
] | false |
2306.04180
|
2023-06-07T06:29:03Z
|
FusedRF: Fusing Multiple Radiance Fields
|
[
"Rahul Goel",
"Dhawal Sirikonda",
"Rajvi Shah",
"PJ Narayanan"
] |
Radiance Fields (RFs) have shown great potential to represent scenes from
casually captured discrete views. Compositing parts or whole of multiple
captured scenes could greatly interest several XR applications. Prior works can
generate new views of such scenes by tracing each scene in parallel. This
increases the render times and memory requirements with the number of
components. In this work, we provide a method to create a single, compact,
fused RF representation for a scene composited using multiple RFs. The fused RF
has the same render time and memory utilization as a single RF. Our method
distills information from multiple teacher RFs into a single student RF while
also facilitating further manipulations like addition and deletion into the
fused representation.
|
[
"cs.CV"
] | false |
2306.04184
|
2023-06-07T06:40:54Z
|
StructuredMesh: 3D Structured Optimization of Façade Components on
Photogrammetric Mesh Models using Binary Integer Programming
|
[
"Libin Wang",
"Han Hu",
"Qisen Shang",
"Bo Xu",
"Qing Zhu"
] |
The lack of façade structures in photogrammetric mesh models renders them
inadequate for meeting the demands of intricate applications. Moreover, these
mesh models exhibit irregular surfaces with considerable geometric noise and
texture quality imperfections, making the restoration of structures
challenging. To address these shortcomings, we present StructuredMesh, a novel
approach for reconstructing façade structures conforming to the regularity
of buildings within photogrammetric mesh models. Our method involves capturing
multi-view color and depth images of the building model using a virtual camera
and employing a deep learning object detection pipeline to semi-automatically
extract the bounding boxes of façade components such as windows, doors, and
balconies from the color image. We then utilize the depth image to remap these
boxes into 3D space, generating an initial façade layout. Leveraging
architectural knowledge, we apply binary integer programming (BIP) to optimize
the 3D layout's structure, encompassing the positions, orientations, and sizes
of all components. The refined layout subsequently informs façade modeling
through instance replacement. We conducted experiments utilizing building mesh
models from three distinct datasets, demonstrating the adaptability,
robustness, and noise resistance of our proposed methodology. Furthermore, our
3D layout evaluation metrics reveal that the optimized layout enhances
precision, recall, and F-score by 6.5%, 4.5%, and 5.5%, respectively, in
comparison to the initial layout.
|
[
"cs.CV",
"68U05",
"I.5.3"
] | false |
2306.04231
|
2023-06-07T08:14:17Z
|
Learning Probabilistic Coordinate Fields for Robust Correspondences
|
[
"Weiyue Zhao",
"Hao Lu",
"Xinyi Ye",
"Zhiguo Cao",
"Xin Li"
] |
We introduce Probabilistic Coordinate Fields (PCFs), a novel
geometric-invariant coordinate representation for image correspondence
problems. In contrast to standard Cartesian coordinates, PCFs encode
coordinates in correspondence-specific barycentric coordinate systems (BCS)
with affine invariance. To know \textit{when and where to trust} the encoded
coordinates, we implement PCFs in a probabilistic network termed PCF-Net, which
parameterizes the distribution of coordinate fields as Gaussian mixture models.
By jointly optimizing coordinate fields and their confidence conditioned on
dense flows, PCF-Net can work with various feature descriptors when quantifying
the reliability of PCFs by confidence maps. An interesting observation of this
work is that the learned confidence map converges to geometrically coherent and
semantically consistent regions, which facilitates robust coordinate
representation. By delivering the confident coordinates to keypoint/feature
descriptors, we show that PCF-Net can be used as a plug-in to existing
correspondence-dependent approaches. Extensive experiments on both indoor and
outdoor datasets suggest that accurate geometric invariant coordinates help to
achieve the state of the art in several correspondence problems, such as sparse
feature matching, dense image registration, camera pose estimation, and
consistency filtering. Further, the interpretable confidence map predicted by
PCF-Net can also be leveraged to other novel applications from texture transfer
to multi-homography classification.
|
[
"cs.CV"
] | false |
2306.04272
|
2023-06-07T09:13:56Z
|
On the Generalization of Multi-modal Contrastive Learning
|
[
"Qi Zhang",
"Yifei Wang",
"Yisen Wang"
] |
Multi-modal contrastive learning (MMCL) has recently garnered considerable
interest due to its superior performance in visual tasks, achieved by embedding
multi-modal data, such as visual-language pairs. However, there is still a lack
of theoretical understanding of how MMCL extracts useful visual representations
from multi-modal pairs and, in particular, of how MMCL outperforms previous
approaches such as self-supervised contrastive learning (SSCL). In this paper, by
drawing an intrinsic connection between MMCL and asymmetric matrix
factorization, we establish the first generalization guarantees of MMCL for
visual downstream tasks. Based on this framework, we further unify MMCL and
SSCL by showing that MMCL implicitly performs SSCL with (pseudo) positive pairs
induced by text pairs. Through this unified perspective, we characterize the
advantage of MMCL by showing that text pairs induce more semantically
consistent and diverse positive pairs, which, according to our analysis,
provably benefit downstream generalization. Inspired by this finding, we
propose CLIP-guided resampling methods to significantly improve the downstream
performance of SSCL on ImageNet by leveraging multi-modal information. Code is
available at https://github.com/PKU-ML/CLIP-Help-SimCLR.
|
[
"cs.CV"
] | false |
2306.04385
|
2023-06-07T12:34:55Z
|
SF-FSDA: Source-Free Few-Shot Domain Adaptive Object Detection with
Efficient Labeled Data Factory
|
[
"Han Sun",
"Rui Gong",
"Konrad Schindler",
"Luc Van Gool"
] |
Domain adaptive object detection aims to leverage the knowledge learned from
a labeled source domain to improve the performance on an unlabeled target
domain. Prior works typically require access to the source domain data for
adaptation, and the availability of sufficient data on the target domain.
However, these assumptions may not hold due to data privacy and rare data
collection. In this paper, we propose and investigate a more practical and
challenging domain adaptive object detection problem under both source-free and
few-shot conditions, named as SF-FSDA. To overcome this problem, we develop an
efficient labeled-data-factory-based approach. Without accessing the source
domain, the data factory renders i) an infinite amount of synthesized
target-domain-like images, guided by the few-shot image samples and a
text description from the target domain; and ii) the corresponding bounding box and
category annotations, demanding only minimal human effort, i.e., a few manually
labeled examples. On the one hand, the synthesized images mitigate the
knowledge insufficiency brought by the few-shot condition. On the other hand,
compared to the popular pseudo-label technique, the generated annotations from
the data factory not only remove the reliance on the source-pretrained object
detection model, but also alleviate the unavoidable pseudo-label noise due to
domain shift and the source-free condition. The generated dataset is further
utilized to adapt the source pretrained object detection model, realizing the
robust object detection under SF-FSDA. The experiments on different settings
showcase that our proposed approach outperforms other state-of-the-art methods
on SF-FSDA problem. Our codes and models will be made publicly available.
|
[
"cs.CV"
] | false |
2306.04474
|
2023-06-07T14:45:24Z
|
FoSp: Focus and Separation Network for Early Smoke Segmentation
|
[
"Lujian Yao",
"Haitao Zhao",
"Jingchao Peng",
"Zhongze Wang",
"Kaijie Zhao"
] |
Early smoke segmentation (ESS) enables the accurate identification of smoke
sources, facilitating the prompt extinguishing of fires and preventing
large-scale gas leaks. But ESS poses greater challenges than conventional
object and regular smoke segmentation due to its small scale and transparent
appearance, which can result in a high miss-detection rate and low precision. To
address these issues, a Focus and Separation Network (FoSp) is proposed. We
first introduce a Focus module employing bidirectional cascade which guides
low-resolution and high-resolution features towards mid-resolution to locate
and determine the scope of smoke, reducing the miss detection rate. Next, we
propose a Separation module that separates smoke images into a pure smoke
foreground and a smoke-free background, enhancing the contrast between smoke
and background fundamentally, improving segmentation precision. Finally, a
Domain Fusion module is developed to integrate the distinctive features of the
two modules which can balance recall and precision to achieve high F_beta.
Furthermore, to promote the development of ESS, we introduce a high-quality
real-world dataset called SmokeSeg, which contains more small and transparent
smoke than the existing datasets. Experimental results show that our model
achieves the best performance on three available datasets: SYN70K (mIoU:
83.00%), SMOKE5K (F_beta: 81.6%) and SmokeSeg (F_beta: 72.05%). Especially, our
FoSp outperforms SegFormer by 7.71% (F_beta) for early smoke segmentation on
SmokeSeg.
|
[
"cs.CV"
] | false |
2306.04482
|
2023-06-07T17:42:42Z
|
ICON$^2$: Reliably Benchmarking Predictive Inequity in Object Detection
|
[
"Sruthi Sudhakar",
"Viraj Prabhu",
"Olga Russakovsky",
"Judy Hoffman"
] |
As computer vision systems are being increasingly deployed at scale in
high-stakes applications like autonomous driving, concerns about social bias in
these systems are rising. Analysis of fairness in real-world vision systems,
such as object detection in driving scenes, has been limited to observing
predictive inequity across attributes such as pedestrian skin tone, and lacks a
consistent methodology to disentangle the role of confounding variables e.g.
does my model perform worse for a certain skin tone, or are such scenes in my
dataset more challenging due to occlusion and crowds? In this work, we
introduce ICON$^2$, a framework for robustly answering this question. ICON$^2$
leverages prior knowledge on the deficiencies of object detection systems to
identify performance discrepancies across sub-populations, compute correlations
between these potential confounders and a given sensitive attribute, and
control for the most likely confounders to obtain a more reliable estimate of
model bias. Using our approach, we conduct an in-depth study on the performance
of object detection with respect to income from the BDD100K driving dataset,
revealing useful insights.
|
[
"cs.CV"
] | false |
2306.04506
|
2023-06-07T15:15:13Z
|
Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and
radiance priors
|
[
"Xianrui Luo",
"Juewen Peng",
"Ke Xian",
"Zijin Wu",
"Zhiguo Cao"
] |
We consider the problem of realistic bokeh rendering from a single
all-in-focus image. Bokeh rendering mimics aesthetic shallow depth-of-field
(DoF) in professional photography, but these visual effects generated by
existing methods suffer from simple flat background blur and blurred in-focus
regions, giving rise to unrealistic rendered results. In this work, we argue
that realistic bokeh rendering should (i) model depth relations and distinguish
in-focus regions, (ii) sustain sharp in-focus regions, and (iii) render
physically accurate Circle of Confusion (CoC). To this end, we present a
Defocus to Focus (D2F) framework to learn realistic bokeh rendering by fusing
defocus priors with the all-in-focus image and by implementing radiance priors
in layered fusion. Since no depth map is provided, we introduce defocus
hallucination to integrate depth by learning to focus. The predicted defocus
map implies the blur amount of bokeh and is used to guide weighted layered
rendering. In layered rendering, we fuse images blurred by different kernels
based on the defocus map. To increase the realism of the bokeh, we adopt
radiance virtualization to simulate scene radiance. The scene radiance used in
weighted layered rendering reassigns weights in the soft disk kernel to produce
the CoC. To ensure the sharpness of in-focus regions, we propose to fuse
upsampled bokeh images and original images. We predict the initial fusion mask
from our defocus map and refine the mask with a deep network. We evaluate our
model on a large-scale bokeh dataset. Extensive experiments show that our
approach is capable of rendering visually pleasing bokeh effects in complex
scenes. In particular, our solution receives the runner-up award in the AIM
2020 Rendering Realistic Bokeh Challenge.
|
[
"cs.CV"
] | false |
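The D2F record above describes weighted layered rendering: the all-in-focus image is blurred with several kernels and the results are blended per pixel according to a predicted defocus map. The snippet below is a minimal illustrative sketch of that blending step only, assuming a defocus map normalized to [0, 1] and isotropic Gaussian kernels; the function name `layered_bokeh_blend`, the sigma schedule, and the soft hat weights are our own assumptions and do not reproduce D2F's soft disk kernel or radiance reweighting.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def layered_bokeh_blend(image, defocus_map, sigmas=(0.0, 2.0, 4.0, 6.0, 8.0)):
    """Blend pre-blurred layers per pixel according to a defocus map.

    image: (H, W, 3) array; defocus_map: (H, W) array in [0, 1].
    Illustration only: soft hat weights over Gaussian blur levels, not the
    paper's disk-kernel, radiance-weighted formulation.
    """
    image = np.asarray(image, dtype=np.float32)
    sigmas = np.asarray(sigmas, dtype=np.float32)
    # Pre-blur the image at each defocus level (channel axis is not blurred).
    layers = [gaussian_filter(image, sigma=(s, s, 0)) if s > 0 else image
              for s in sigmas]
    # Map defocus in [0, 1] onto the sigma range and build per-pixel weights.
    target_sigma = defocus_map * sigmas[-1]                   # (H, W)
    dist = np.abs(target_sigma[..., None] - sigmas)           # (H, W, K)
    step = sigmas[1] - sigmas[0]
    weights = np.maximum(0.0, 1.0 - dist / step)              # soft hat weights
    weights /= weights.sum(axis=-1, keepdims=True) + 1e-8
    # Weighted sum over the blur layers.
    out = sum(w[..., None] * layer
              for w, layer in zip(np.moveaxis(weights, -1, 0), layers))
    return out
```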
2306.04540
|
2023-06-07T15:46:15Z
|
NeMO: Neural Map Growing System for Spatiotemporal Fusion in
Bird's-Eye-View and BDD-Map Benchmark
|
[
"Xi Zhu",
"Xiya Cao",
"Zhiwei Dong",
"Caifa Zhou",
"Qiangbo Liu",
"Wei Li",
"Yongliang Wang"
] |
Vision-centric Bird's-Eye View (BEV) representation is essential for
autonomous driving systems (ADS). Multi-frame temporal fusion which leverages
historical information has been demonstrated to provide more comprehensive
perception results. While most research focuses on ego-centric maps of fixed
settings, long-range local map generation remains less explored. This work
outlines a new paradigm, named NeMO, for generating local maps through the
utilization of a readable and writable big map, a learning-based fusion module,
and an interaction mechanism between the two. Under the assumption that the
feature distribution of all BEV grids follows an identical pattern, we adopt a
shared-weight neural network for all grids to update the big map. This paradigm
supports the fusion of longer time series and the generation of long-range BEV
local maps. Furthermore, we release BDD-Map, a BDD100K-based dataset
incorporating map element annotations, including lane lines, boundaries, and
pedestrian crossings. Experiments on the NuScenes and BDD-Map datasets
demonstrate that NeMO outperforms state-of-the-art map segmentation methods. We
also provide a new scene-level BEV map evaluation setting along with the
corresponding baseline for a more comprehensive comparison.
|
[
"cs.CV"
] | false |
2306.04619
|
2023-06-07T17:47:50Z
|
ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image
Collections
|
[
"Chun-Han Yao",
"Amit Raj",
"Wei-Chih Hung",
"Yuanzhen Li",
"Michael Rubinstein",
"Ming-Hsuan Yang",
"Varun Jampani"
] |
Estimating 3D articulated shapes like animal bodies from monocular images is
inherently challenging due to the ambiguities of camera viewpoint, pose,
texture, lighting, etc. We propose ARTIC3D, a self-supervised framework to
reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
Specifically, ARTIC3D is built upon a skeleton-based surface representation and
is further guided by 2D diffusion priors from Stable Diffusion. First, we
enhance the input images with occlusions/truncation via 2D diffusion to obtain
cleaner mask estimates and semantic features. Second, we perform
diffusion-guided 3D optimization to estimate shape and texture that are of
high fidelity and faithful to the input images. We also propose a novel technique
to calculate more stable image-level gradients via diffusion models compared to
existing alternatives. Finally, we produce realistic animations by fine-tuning
the rendered shape and texture under rigid part transformations. Extensive
evaluations on multiple existing datasets as well as newly introduced noisy web
image collections with occlusions and truncation demonstrate that ARTIC3D
outputs are more robust to noisy images, higher quality in terms of shape and
texture details, and more realistic when animated. Project page:
https://chhankyao.github.io/artic3d/
|
[
"cs.CV"
] | true |
2306.04715
|
2023-06-07T18:26:22Z
|
UniBoost: Unsupervised Unimodal Pre-training for Boosting Zero-shot
Vision-Language Tasks
|
[
"Yanan Sun",
"Zihan Zhong",
"Qi Fan",
"Chi-Keung Tang",
"Yu-Wing Tai"
] |
Large-scale joint training of multimodal models, e.g., CLIP, has
demonstrated great performance in many vision-language tasks. However,
image-text pairs for pre-training are restricted to the intersection of images
and texts, limiting their ability to cover a large distribution of real-world
data, where noise can also be introduced as misaligned pairs during
pre-processing. Conversely, unimodal models trained on text or image data alone
through unsupervised techniques can achieve broader coverage of diverse
real-world data and are not constrained by the requirement of simultaneous
presence of image and text. In this paper, we demonstrate that using
large-scale unsupervised unimodal models as pre-training can enhance the
zero-shot performance of image-text pair models. Our thorough studies validate
that models pre-trained as such can learn rich representations of both
modalities, improving their ability to understand how images and text relate to
each other. Our experiments show that unimodal pre-training outperforms
state-of-the-art CLIP-based models by 6.5% (52.3% $\rightarrow$ 58.8%) on
PASCAL-5$^i$ and 6.2% (27.2% $\rightarrow$ 33.4%) on COCO-20$^i$ semantic
segmentation under the zero-shot setting, respectively. By learning
representations of both modalities, unimodal pre-training offers broader
coverage, reduced misalignment errors, and the ability to capture more complex
features and patterns in real-world data, resulting in better performance,
especially for zero-shot vision-language tasks.
|
[
"cs.CV"
] | false |
2306.04736
|
2023-06-07T19:12:03Z
|
BU-CVKit: Extendable Computer Vision Framework for Species Independent
Tracking and Analysis
|
[
"Mahir Patel",
"Lucas Carstensen",
"Yiwen Gu",
"Michael E. Hasselmo",
"Margrit Betke"
] |
A major bottleneck of interdisciplinary computer vision (CV) research is the
lack of a framework that eases the reuse and abstraction of state-of-the-art CV
models by CV and non-CV researchers alike. We present here BU-CVKit, a computer
vision framework that allows the creation of research pipelines with chainable
Processors. The community can create plugins of their work for the framework,
hence improving the re-usability, accessibility, and exposure of their work
with minimal overhead. Furthermore, we provide MuSeqPose Kit, a user interface
for the pose estimation package of BU-CVKit, which automatically scans for
installed plugins and programmatically generates an interface for them based on
the metadata provided by the user. It also provides software support for
standard pose estimation features such as annotations, 3D reconstruction,
reprojection, and camera calibration. Finally, we show examples of behavioral
neuroscience pipelines created through the sample plugins created for our
framework.
|
[
"cs.CV"
] | false |
2306.04774
|
2023-06-07T20:45:15Z
|
RefineVIS: Video Instance Segmentation with Temporal Attention
Refinement
|
[
"Andre Abrantes",
"Jiang Wang",
"Peng Chu",
"Quanzeng You",
"Zicheng Liu"
] |
We introduce a novel framework called RefineVIS for Video Instance
Segmentation (VIS) that achieves good object association between frames and
accurate segmentation masks by iteratively refining the representations using
sequence context. RefineVIS learns two separate representations on top of an
off-the-shelf frame-level image instance segmentation model: an association
representation responsible for associating objects across frames and a
segmentation representation that produces accurate segmentation masks.
Contrastive learning is utilized to learn temporally stable association
representations. A Temporal Attention Refinement (TAR) module learns
discriminative segmentation representations by exploiting temporal
relationships and a novel temporal contrastive denoising technique. Our method
supports both online and offline inference. It achieves state-of-the-art video
instance segmentation accuracy on YouTube-VIS 2019 (64.4 AP), YouTube-VIS 2021
(61.4 AP), and OVIS (46.1 AP) datasets. The visualization shows that the TAR
module can generate more accurate instance segmentation masks, particularly for
challenging cases such as highly occluded objects.
|
[
"cs.CV"
] | false |
2306.04822
|
2023-06-07T23:06:53Z
|
Optimizing ViViT Training: Time and Memory Reduction for Action
Recognition
|
[
"Shreyank N Gowda",
"Anurag Arnab",
"Jonathan Huang"
] |
In this paper, we address the challenges posed by the substantial training
time and memory consumption associated with video transformers, focusing on the
ViViT (Video Vision Transformer) model, in particular the Factorised Encoder
version, as our baseline for action recognition tasks. The factorised encoder
variant follows the late-fusion approach that is adopted by many
state-of-the-art approaches. Despite standing out for its favorable speed/accuracy tradeoffs
among the different variants of ViViT, its considerable training time and
memory requirements still pose a significant barrier to entry. Our method is
designed to lower this barrier and is based on the idea of freezing the spatial
transformer during training. This leads to a low accuracy model if naively
done. But we show that by (1) appropriately initializing the temporal
transformer (a module responsible for processing temporal information) and (2)
introducing a compact adapter model connecting the frozen spatial representations
(from the module that selectively focuses on regions of the input image) to the
temporal transformer, we can enjoy the benefits of freezing the spatial
transformer without sacrificing accuracy. Through extensive experimentation
over 6 benchmarks, we demonstrate that our proposed training strategy
significantly reduces training costs (by $\sim 50\%$) and memory consumption
while maintaining or slightly improving performance by up to 1.79\% compared to
the baseline model. Our approach additionally unlocks the capability to utilize
larger image transformer models as our spatial transformer and access more
frames with the same memory consumption.
|
[
"cs.CV"
] | true |
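The core trick in the record above is to freeze the spatial transformer and train only the temporal transformer plus a small adapter. Below is a minimal PyTorch sketch of that general idea; `CompactAdapter` and its bottleneck width are hypothetical placeholders, and the paper's specific temporal-transformer initialization and adapter design are not reproduced.

```python
import torch.nn as nn

def freeze_spatial_transformer(spatial_encoder: nn.Module) -> None:
    """Freeze the spatial encoder so only temporal modules receive gradients."""
    spatial_encoder.eval()  # also fixes dropout / normalization statistics
    for param in spatial_encoder.parameters():
        param.requires_grad = False

class CompactAdapter(nn.Module):
    """Hypothetical bottleneck adapter bridging frozen spatial features to the
    temporal transformer (residual down-project / up-project)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        # Residual adapter: leaves frozen features intact, adds a learned delta.
        return x + self.up(self.act(self.down(x)))
```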
2306.04098
|
2023-06-07T01:43:09Z
|
Phoenix: A Federated Generative Diffusion Model
|
[
"Fiona Victoria Stanley Jothiraj",
"Afra Mashhadi"
] |
Generative AI has made impressive strides in enabling users to create diverse
and realistic visual content such as images, videos, and audio. However,
training generative models on large centralized datasets can pose challenges in
terms of data privacy, security, and accessibility. Federated learning (FL) is
an approach that uses decentralized techniques to collaboratively train a
shared deep learning model while retaining the training data on individual edge
devices to preserve data privacy. This paper proposes a novel method for
training a Denoising Diffusion Probabilistic Model (DDPM) across multiple data
sources using FL techniques. Diffusion models, a newly emerging class of
generative models, show promising results in achieving higher image quality
than Generative Adversarial Networks (GANs). Our proposed method, Phoenix, is an
unconditional diffusion model that leverages strategies to improve the data
diversity of generated samples even when trained on data with statistical
heterogeneity or Non-IID (Non-Independent and Identically Distributed) data. We
demonstrate how our approach outperforms the default diffusion model in an FL
setting. These results indicate that high-quality samples can be generated by
maintaining data diversity, preserving privacy, and reducing communication
between data sources, offering exciting new possibilities in the field of
generative AI.
|
[
"cs.LG",
"cs.CV"
] | false |
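The Phoenix record above describes training a DDPM across decentralized data sources with federated learning. As a point of reference, the sketch below shows a generic FedAvg-style parameter aggregation step that such a setup would typically rely on; it is not Phoenix's exact aggregation or heterogeneity-handling strategy, and the function name `fedavg` is our own.

```python
import copy

def fedavg(client_state_dicts, client_weights):
    """Weighted average of client model parameters (generic FedAvg step).

    client_state_dicts: list of PyTorch state_dicts from local DDPM copies.
    client_weights: list of floats, e.g. local dataset sizes.
    The server would broadcast the returned state_dict before the next round.
    """
    total = float(sum(client_weights))
    avg_state = copy.deepcopy(client_state_dicts[0])
    for key in avg_state:
        # Weighted sum of each parameter tensor across clients.
        avg_state[key] = sum(
            sd[key].float() * (w / total)
            for sd, w in zip(client_state_dicts, client_weights)
        )
    return avg_state
```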
2306.04114
|
2023-06-07T02:55:09Z
|
Manga Rescreening with Interpretable Screentone Representation
|
[
"Minshan Xie",
"Chengze Li",
"Tien-Tsin Wong"
] |
The process of adapting or repurposing manga pages is a time-consuming task
that requires manga artists to manually work on every single screentone region
and apply new patterns to create novel screentones across multiple panels. To
address this issue, we propose an automatic manga rescreening pipeline that
aims to minimize the human effort involved in manga adaptation. Our pipeline
automatically recognizes screentone regions and generates novel screentones
with newly specified characteristics (e.g., intensity or type). Existing manga
generation methods have limitations in understanding and synthesizing complex
tone- or intensity-varying regions. To overcome these limitations, we propose a
novel interpretable representation of screentones that disentangles their
intensity and type features, enabling better recognition and synthesis of
screentones. This interpretable screentone representation reduces ambiguity in
recognizing intensity-varying regions and provides fine-grained controls during
screentone synthesis by decoupling and anchoring the type or the intensity
feature. Our proposed method is demonstrated to be effective and convenient
through various experiments, showcasing the superiority of the newly proposed
pipeline with the interpretable screentone representations.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.04144
|
2023-06-07T04:36:21Z
|
UCTB: An Urban Computing Tool Box for Spatiotemporal Crowd Flow
Prediction
|
[
"Liyue Chen",
"Di Chai",
"Leye Wang"
] |
Spatiotemporal crowd flow prediction is one of the key technologies in smart
cities. Currently, there are two major pain points that plague related research
and practitioners. Firstly, crowd flow is related to multiple domain knowledge
factors; however, due to the diversity of application scenarios, it is
difficult for subsequent work to make reasonable and comprehensive use of
domain knowledge. Secondly, with the development of deep learning technology,
the implementation of relevant techniques has become increasingly complex;
reproducing advanced models has become a time-consuming and increasingly
cumbersome task. To address these issues, we design and implement a
spatiotemporal crowd flow prediction toolbox called UCTB (Urban Computing Tool
Box), which integrates multiple spatiotemporal domain knowledge and
state-of-the-art models simultaneously. The relevant code and supporting
documents have been open-sourced at https://github.com/uctb/UCTB.
|
[
"cs.LG",
"cs.CV"
] | false |
2306.04214
|
2023-06-07T07:40:04Z
|
DualHGNN: A Dual Hypergraph Neural Network for Semi-Supervised Node
Classification based on Multi-View Learning and Density Awareness
|
[
"Jianpeng Liao",
"Jun Yan",
"Qian Tao"
] |
Graph-based semi-supervised node classification has become a
state-of-the-art approach in many applications with high research value and
significance. Most existing methods are only based on the original intrinsic or
artificially established graph structure which may not accurately reflect the
"true" correlation among data and are not optimal for semi-supervised node
classification in the downstream graph neural networks. Besides, while existing
graph-based methods mostly utilize the explicit graph structure, some implicit
information, for example, the density information, can also provide latent
information that can be further exploited. To address these limitations, this
paper proposes the Dual Hypergraph Neural Network (DualHGNN), a new dual
connection model integrating both hypergraph structure learning and hypergraph
representation learning simultaneously in a unified architecture. The DualHGNN
first leverages a multi-view hypergraph learning network to explore the optimal
hypergraph structure from multiple views, constrained by a consistency loss
proposed to improve its generalization. Then, DualHGNN employs a density-aware
hypergraph attention network to explore the high-order semantic correlation
among data points based on the density-aware attention mechanism. Extensive
experiments are conducted in various benchmark datasets, and the results
demonstrate the effectiveness of the proposed approach.
|
[
"cs.LG",
"cs.CV"
] | false |
2306.04362
|
2023-06-07T11:52:36Z
|
Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for
Pre-training and Benchmarks
|
[
"Haiyang Xu",
"Qinghao Ye",
"Xuan Wu",
"Ming Yan",
"Yuan Miao",
"Jiabo Ye",
"Guohai Xu",
"Anwen Hu",
"Yaya Shi",
"Guangwei Xu",
"Chenliang Li",
"Qi Qian",
"Maofei Que",
"Ji Zhang",
"Xiao Zeng",
"Fei Huang"
] |
To promote the development of Vision-Language Pre-training (VLP) and
multimodal Large Language Models (LLMs) in the Chinese community, we first
release the largest public Chinese high-quality video-language dataset named
Youku-mPLUG, which is collected from Youku, a well-known Chinese video-sharing
website, with strict criteria of safety, diversity, and quality. Youku-mPLUG
contains 10 million Chinese video-text pairs filtered from 400 million raw
videos across a wide range of 45 diverse categories for large-scale
pre-training. In addition, to facilitate a comprehensive evaluation of
video-language models, we carefully build the largest human-annotated Chinese
benchmarks covering three popular video-language tasks of cross-modal
retrieval, video captioning, and video category classification. Youku-mPLUG can
enable researchers to conduct more in-depth multimodal research and develop
better applications in the future. Furthermore, we release popular
video-language pre-training models, ALPRO and mPLUG-2, and our proposed
modularized decoder-only model mPLUG-video pre-trained on Youku-mPLUG.
Experiments show that models pre-trained on Youku-mPLUG gain up to 23.1%
improvement in video category classification. Besides, mPLUG-video achieves a
new state-of-the-art result on these benchmarks with 80.5% top-1 accuracy in
video category classification and 68.9 CIDEr score in video captioning,
respectively. Finally, we scale up mPLUG-video based on the frozen Bloomz with
only 1.7% trainable parameters as Chinese multimodal LLM, and demonstrate
impressive instruction and video understanding ability. The zero-shot
instruction understanding experiment indicates that pretraining with
Youku-mPLUG can enhance the ability to comprehend overall and detailed visual
semantics, recognize scene text, and leverage open-domain knowledge.
|
[
"cs.CV",
"cs.CL"
] | true |
2306.04557
|
2023-06-07T16:04:08Z
|
PhenoBench -- A Large Dataset and Benchmarks for Semantic Image
Interpretation in the Agricultural Domain
|
[
"Jan Weyler",
"Federico Magistri",
"Elias Marks",
"Yue Linn Chong",
"Matteo Sodano",
"Gianmarco Roggiolani",
"Nived Chebrolu",
"Cyrill Stachniss",
"Jens Behley"
] |
The production of food, feed, fiber, and fuel is a key task of agriculture.
Crop production, in particular, has to cope with a multitude of challenges in the
upcoming decades caused by a growing world population, climate change, the need
for sustainable production, lack of skilled workers, and generally the limited
availability of arable land. Vision systems could help cope with these
challenges by offering tools to make better and more sustainable field
management decisions and support the breeding of new varieties of crops by
allowing temporally dense and reproducible measurements. Recently, tackling
perception tasks in the agricultural domain has received increasing interest in
the computer vision and robotics communities, since agricultural robots are a
promising solution for coping with the lack of workers while enabling more
sustainable agricultural production. While large datasets and
benchmarks in other domains are readily available and have enabled significant
progress toward more reliable vision systems, agricultural datasets and
benchmarks are comparably rare. In this paper, we present a large dataset and
benchmarks for the semantic interpretation of images of real agricultural
fields. Our dataset recorded with a UAV provides high-quality, dense
annotations of crops and weeds, but also fine-grained labels of crop leaves at
the same time, which enable the development of novel algorithms for visual
perception in the agricultural domain. Together with the labeled data, we
provide novel benchmarks for evaluating different visual perception tasks on a
hidden test set comprised of different fields: known fields covered by the
training data and a completely unseen field. The tasks cover semantic
segmentation, panoptic segmentation of plants, leaf instance segmentation,
detection of plants and leaves, and hierarchical panoptic segmentation for
jointly identifying plants and leaves.
|
[
"cs.CV",
"cs.RO"
] | false |
2306.04579
|
2023-06-07T16:28:53Z
|
A Dataset for Deep Learning-based Bone Structure Analyses in Total Hip
Arthroplasty
|
[
"Kaidong Zhang",
"Ziyang Gan",
"Dong Liu",
"Xifu Shang"
] |
Total hip arthroplasty (THA) is a widely used surgical procedure in
orthopedics. For THA, it is of clinical significance to analyze the bone
structure from the CT images, especially to observe the structure of the
acetabulum and femoral head, before the surgical procedure. For such bone
structure analyses, deep learning technologies are promising but require
high-quality labeled data for the learning, while the data labeling is costly.
We address this issue and propose an efficient data annotation pipeline for
producing a deep learning-oriented dataset. Our pipeline consists of
non-learning-based bone extraction (BE) and acetabulum and femoral head
segmentation (AFS) and active-learning-based annotation refinement (AAR). For
BE we use the classic graph-cut algorithm. For AFS we propose an improved
algorithm, including femoral head boundary localization using first-order and
second-order gradient regularization, line-based non-maximum suppression, and
anatomy prior-based femoral head extraction. For AAR, we refine the
algorithm-produced pseudo labels with the help of trained deep models: we
measure the uncertainty based on the disagreement between the original pseudo
labels and the deep model predictions, and then find out the samples with the
largest uncertainty to ask for manual labeling. Using the proposed pipeline, we
construct a large-scale dataset for bone structure analyses from more than 300
clinical and diverse CT scans. We perform careful manual labeling for the test
set of our data. We then benchmark multiple state-of-the-art deep
learning-based methods for medical image segmentation using the training and
test sets of our data. The extensive experimental results validate the efficacy
of the proposed data annotation pipeline. The dataset, related codes and models
will be publicly available at https://github.com/hitachinsk/THA.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.04593
|
2023-06-07T16:46:44Z
|
MarineVRS: Marine Video Retrieval System with Explainability via
Semantic Understanding
|
[
"Tan-Sang Ha",
"Hai Nguyen-Truong",
"Tuan-Anh Vu",
"Sai-Kit Yeung"
] |
Building a video retrieval system that is robust and reliable, especially for
the marine environment, is a challenging task due to several factors such as
dealing with massive amounts of dense and repetitive data, occlusion,
blurriness, low lighting conditions, and abstract queries. To address these
challenges, we present MarineVRS, a novel and flexible video retrieval system
designed explicitly for the marine domain. MarineVRS integrates
state-of-the-art methods for visual and linguistic object representation to
enable efficient and accurate search and analysis of vast volumes of underwater
video data. In addition, unlike the conventional video retrieval system, which
only permits users to index a collection of images or videos and search using a
free-form natural language sentence, our retrieval system includes an
additional Explainability module that outputs the segmentation masks of the
objects that the input query referred to. This feature allows users to identify
and isolate specific objects in the video footage, leading to more detailed
analysis and understanding of their behavior and movements. Finally, with its
adaptability, explainability, accuracy, and scalability, MarineVRS is a
powerful tool for marine researchers and scientists to efficiently and
accurately process vast amounts of data and gain deeper insights into the
behavior and movements of marine species.
|
[
"cs.CV",
"cs.IR"
] | false |
2306.04622
|
2023-06-07T17:52:29Z
|
Yet Another Algorithm for Supervised Principal Component Analysis:
Supervised Linear Centroid-Encoder
|
[
"Tomojit Ghosh",
"Michael Kirby"
] |
We propose a new supervised dimensionality reduction technique called
Supervised Linear Centroid-Encoder (SLCE), a linear counterpart of the
nonlinear Centroid-Encoder (CE) \citep{ghosh2022supervised}. SLCE works by
mapping the samples of a class to its class centroid using a linear
transformation. The transformation is a projection that reconstructs a point
such that its distance from the corresponding class centroid, i.e.,
centroid-reconstruction loss, is minimized in the ambient space. We derive a
closed-form solution using an eigendecomposition of a symmetric matrix. We
provide a detailed analysis and present some crucial mathematical properties of
the proposed approach. We establish a
connection between the eigenvalues and the centroid-reconstruction loss. In
contrast to Principal Component Analysis (PCA) which reconstructs a sample in
the ambient space, the transformation of SLCE uses the instances of a class to
rebuild the corresponding class centroid. Therefore the proposed method can be
considered a form of supervised PCA. Experimental results show the performance
advantage of SLCE over other supervised methods.
|
[
"cs.LG",
"cs.CV"
] | false |
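To make the centroid-mapping idea in the record above concrete, here is a small numerical sketch: it fits a linear map A that sends every sample toward its class centroid by plain (ridge-regularized) least squares. This only illustrates the objective min_A sum_i ||A x_i - c_{y_i}||^2; the actual SLCE solution involves a constrained projection obtained from an eigendecomposition, which is not reproduced here, and the function name is our own.

```python
import numpy as np

def centroid_map_least_squares(X, y, reg=1e-6):
    """Fit A minimizing sum_i ||A x_i - c_{y_i}||^2 (ridge-regularized).

    X: (n_samples, n_features); y: (n_samples,) integer class labels.
    Returns A of shape (n_features, n_features). Illustration only; not the
    constrained eigendecomposition-based SLCE solution.
    """
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    C = np.stack([centroids[label] for label in y])       # (n, d) targets
    d = X.shape[1]
    # Normal equations: A = C^T X (X^T X + reg I)^{-1}
    A = C.T @ X @ np.linalg.inv(X.T @ X + reg * np.eye(d))
    return A

# Example: two well-separated Gaussian classes in 5 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3.0, 1.0, (50, 5)), rng.normal(-3.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
A = centroid_map_least_squares(X, y)
print(np.linalg.norm(A @ X[0] - X[y == 0].mean(axis=0)))  # small residual
```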
2306.04632
|
2023-06-07T17:56:02Z
|
Designing a Better Asymmetric VQGAN for StableDiffusion
|
[
"Zixin Zhu",
"Xuelu Feng",
"Dongdong Chen",
"Jianmin Bao",
"Le Wang",
"Yinpeng Chen",
"Lu Yuan",
"Gang Hua"
] |
StableDiffusion is a revolutionary text-to-image generator that is causing a
stir in the world of image generation and editing. Unlike traditional methods
that learn a diffusion model in pixel space, StableDiffusion learns a diffusion
model in the latent space via a VQGAN, ensuring both efficiency and quality. It
not only supports image generation tasks, but also enables image editing for
real images, such as image inpainting and local editing. However, we have
observed that the vanilla VQGAN used in StableDiffusion leads to significant
information loss, causing distortion artifacts even in non-edited image
regions. To this end, we propose a new asymmetric VQGAN with two simple
designs. Firstly, in addition to the input from the encoder, the decoder
contains a conditional branch that incorporates information from task-specific
priors, such as the unmasked image region in inpainting. Secondly, the decoder
is much heavier than the encoder, allowing for more detailed recovery while
only slightly increasing the total inference cost. The training cost of our
asymmetric VQGAN is low, as we only need to retrain a new asymmetric decoder
while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our
asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and
local editing methods. Extensive experiments demonstrate that it can
significantly improve the inpainting and editing performance, while maintaining
the original text-to-image capability. The code is available at
\url{https://github.com/buxiangzhiren/Asymmetric_VQGAN}.
|
[
"cs.CV",
"cs.GR"
] | true |
2306.04636
|
2023-06-07T17:59:22Z
|
GP-UNIT: Generative Prior for Versatile Unsupervised Image-to-Image
Translation
|
[
"Shuai Yang",
"Liming Jiang",
"Ziwei Liu",
"Chen Change Loy"
] |
Recent advances in deep learning have witnessed many successful unsupervised
image-to-image translation models that learn correspondences between two visual
domains without paired data. However, it is still a great challenge to build
robust mappings between various domains especially for those with drastic
visual discrepancies. In this paper, we introduce a novel versatile framework,
Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), that
improves the quality, applicability and controllability of the existing
translation models. The key idea of GP-UNIT is to distill the generative prior
from pre-trained class-conditional GANs to build coarse-level cross-domain
correspondences, and to apply the learned prior to adversarial translations to
excavate fine-level correspondences. With the learned multi-level content
correspondences, GP-UNIT is able to perform valid translations between both
close domains and distant domains. For close domains, GP-UNIT can be
conditioned on a parameter to determine the intensity of the content
correspondences during translation, allowing users to balance between content
and style consistency. For distant domains, semi-supervised learning is
explored to guide GP-UNIT to discover accurate semantic correspondences that
are hard to learn solely from the appearance. We validate the superiority of
GP-UNIT over state-of-the-art translation models in robust, high-quality and
diversified translations between various domains through extensive experiments.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.04701
|
2023-06-07T18:08:11Z
|
Robust-DefReg: A Robust Deformable Point Cloud Registration Method based
on Graph Convolutional Neural Networks
|
[
"Sara Monji-Azad",
"Marvin Kinz",
"Jürgen Hesser"
] |
Point cloud registration is a fundamental problem in computer vision that
aims to estimate the transformation between corresponding sets of points.
Non-rigid registration, in particular, involves addressing challenges including
various levels of deformation, noise, outliers, and data incompleteness. This
paper introduces Robust-DefReg, a robust non-rigid point cloud registration
method based on graph convolutional neural networks (GCNNs). Robust-DefReg is a
coarse-to-fine registration approach within an end-to-end pipeline, leveraging
the advantages of both coarse and fine methods. The method learns global
features to find correspondences between source and target point clouds, to
enable appropriate initial alignment and, subsequently, fine registration.
Achieving high accuracy and robustness across all of these challenges
simultaneously is rarely reported in existing studies, and it is a key objective
of the Robust-DefReg method, which combines three primary attributes: high
accuracy even under large deformations, robustness to the different challenges,
and computational efficiency. The experimental results show that the proposed
Robust-DefReg holds significant potential as a foundational architecture for
future investigations in non-rigid point cloud registration. The source code of
Robust-DefReg is available.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.04709
|
2023-06-07T18:21:22Z
|
Improved statistical benchmarking of digital pathology models using
pairwise frames evaluation
|
[
"Ylaine Gerardin",
"John Shamshoian",
"Judy Shen",
"Nhat Le",
"Jamie Prezioso",
"John Abel",
"Isaac Finberg",
"Daniel Borders",
"Raymond Biju",
"Michael Nercessian",
"Vaed Prasad",
"Joseph Lee",
"Spencer Wyman",
"Sid Gupta",
"Abigail Emerson",
"Bahar Rahsepar",
"Darpan Sanghavi",
"Ryan Leung",
"Limin Yu",
"Archit Khosla",
"Amaro Taylor-Weiner"
] |
Nested pairwise frames is a method for relative benchmarking of cell or
tissue digital pathology models against manual pathologist annotations on a set
of sampled patches. At a high level, the method compares agreement between a
candidate model and pathologist annotations with agreement among pathologists'
annotations. This evaluation framework addresses fundamental issues of data
size and annotator variability in using manual pathologist annotations as a
source of ground truth for model validation. We implemented nested pairwise
frames evaluation for tissue classification, cell classification, and cell
count prediction tasks and show results for cell and tissue models deployed on
an H&E-stained melanoma dataset.
|
[
"cs.CV",
"cs.LG"
] | false |
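The evaluation idea in the record above, comparing model-to-pathologist agreement against pathologist-to-pathologist agreement on sampled patches, can be illustrated in a few lines. The sketch below computes both quantities as plain mean agreement; the paper's nested statistical framework is more involved, and the function name and the acceptance heuristic in the comments are our own simplifications.

```python
import itertools
import numpy as np

def pairwise_agreement_gap(model_labels, annotator_labels):
    """Compare model-vs-annotator agreement with inter-annotator agreement.

    model_labels: (n_patches,) model class predictions.
    annotator_labels: (n_annotators, n_patches) pathologist labels.
    Returns (model_agreement, inter_annotator_agreement).
    """
    model_labels = np.asarray(model_labels)
    annotator_labels = np.asarray(annotator_labels)
    # Mean agreement between the model and each pathologist.
    model_agreement = np.mean(
        [np.mean(model_labels == ann) for ann in annotator_labels]
    )
    # Mean agreement over all pairs of pathologists.
    inter_annotator_agreement = np.mean(
        [np.mean(a == b) for a, b in itertools.combinations(annotator_labels, 2)]
    )
    # Heuristically, a model is "within annotator variability" when the first
    # value is close to, or above, the second.
    return model_agreement, inter_annotator_agreement
```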
2306.04738
|
2023-06-07T19:20:01Z
|
MultiEarth 2023 -- Multimodal Learning for Earth and Environment
Workshop and Challenge
|
[
"Miriam Cha",
"Gregory Angelides",
"Mark Hamilton",
"Andy Soszynski",
"Brandon Swenson",
"Nathaniel Maidel",
"Phillip Isola",
"Taylor Perron",
"Bill Freeman"
] |
The Multimodal Learning for Earth and Environment Workshop (MultiEarth 2023)
is the second annual CVPR workshop aimed at the monitoring and analysis of the
health of Earth ecosystems by leveraging the vast amount of remote sensing data
that is continuously being collected. The primary objective of this workshop is
to bring together the Earth and environmental science communities as well as
the multimodal representation learning communities to explore new ways of
harnessing technological advancements in support of environmental monitoring.
The MultiEarth Workshop also seeks to provide a common benchmark for processing
multimodal remote sensing information by organizing public challenges focused
on monitoring the Amazon rainforest. These challenges include estimating
deforestation, detecting forest fires, translating synthetic aperture radar
(SAR) images to the visible domain, and projecting environmental trends. This
paper presents the challenge guidelines, datasets, and evaluation metrics. Our
challenge website is available at
https://sites.google.com/view/rainforest-challenge/multiearth-2023.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.04745
|
2023-06-07T19:46:30Z
|
3D Human Keypoints Estimation From Point Clouds in the Wild Without
Human Labels
|
[
"Zhenzhen Weng",
"Alexander S. Gorban",
"Jingwei Ji",
"Mahyar Najibi",
"Yin Zhou",
"Dragomir Anguelov"
] |
Training a 3D human keypoint detector from point clouds in a supervised
manner requires large volumes of high quality labels. While it is relatively
easy to capture large amounts of human point clouds, annotating 3D keypoints is
expensive, subjective, error prone and especially difficult for long-tail cases
(pedestrians with rare poses, scooterists, etc.). In this work, we propose
GC-KPL (Geometry Consistency inspired Key Point Learning), an approach for
learning 3D human joint locations from point clouds without human labels. We
achieve this by our novel unsupervised loss formulations that account for the
structure and movement of the human body. We show that by training on a large
training set from Waymo Open Dataset without any human annotated keypoints, we
are able to achieve reasonable performance as compared to the fully supervised
approach. Further, the backbone benefits from the unsupervised training and is
useful in downstream few-shot learning of keypoints, where fine-tuning on only
10 percent of the labeled training data gives comparable performance to
fine-tuning on the entire set. We demonstrate that GC-KPL outperforms SoTA by a
large margin when trained on the entire dataset and efficiently leverages
large volumes of unlabeled data.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.04811
|
2023-06-07T22:20:51Z
|
Generative Text-Guided 3D Vision-Language Pretraining for Unified
Medical Image Segmentation
|
[
"Yinda Chen",
"Che Liu",
"Wei Huang",
"Sibo Cheng",
"Rossella Arcucci",
"Zhiwei Xiong"
] |
Vision-Language Pretraining (VLP) has demonstrated remarkable capabilities in
learning visual representations from textual descriptions of images without
annotations. Yet, effective VLP demands large-scale image-text pairs, a
resource that suffers scarcity in the medical domain. Moreover, conventional
VLP is limited to 2D images while medical images encompass diverse modalities,
often in 3D, making the learning process more challenging. To address these
challenges, we present Generative Text-Guided 3D Vision-Language Pretraining
for Unified Medical Image Segmentation (GTGM), a framework that extends VLP
to 3D medical images without relying on paired textual descriptions.
Specifically, GTGM utilizes large language models (LLM) to generate
medical-style text from 3D medical images. This synthetic text is then used to
supervise 3D visual representation learning. Furthermore, a negative-free
contrastive learning objective is introduced to cultivate consistent
visual representations between augmented 3D medical image patches, which
effectively mitigates the biases associated with strict positive-negative
sample pairings. We evaluate GTGM on three imaging modalities - Computed
Tomography (CT), Magnetic Resonance Imaging (MRI), and electron microscopy (EM)
over 13 datasets. GTGM's superior performance across various medical image
segmentation tasks underscores its effectiveness and versatility, by enabling
VLP extension into 3D medical imagery while bypassing the need for paired text.
|
[
"cs.CV",
"cs.AI"
] | false |
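The GTGM record above mentions a negative-free contrastive objective between augmented 3D patches. As a generic stand-in (not GTGM's exact loss), the sketch below shows a SimSiam-style negative-cosine objective with a stop-gradient on the target branch; the projector/predictor outputs z1, z2, p1, p2 are assumed to come from the user's own encoder heads.

```python
import torch.nn.functional as F

def negative_free_loss(z1, z2, p1, p2):
    """SimSiam-style negative-free objective for two augmented views.

    z1, z2: projector outputs; p1, p2: predictor outputs for the same views.
    Stop-gradient on the target branch helps avoid collapse without negatives.
    """
    def neg_cos(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * (neg_cos(p1, z2) + neg_cos(p2, z1))
```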