| categories (list) | doi (string) | id (string) | year (string) | venue (float64) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (string) |
|---|---|---|---|---|---|---|---|---|---|---|
| null | null | 2403.00578 | null | null | http://arxiv.org/pdf/2403.00578v1 | 2024-03-01T14:58:36Z | 2024-03-01T14:58:36Z | SINDy vs Hard Nonlinearities and Hidden Dynamics: a Benchmarking Study | In this work we analyze the effectiveness of the Sparse Identification of Nonlinear Dynamics (SINDy) technique on three benchmark datasets for nonlinear identification, to provide a better understanding of its suitability when tackling real dynamical systems. While SINDy can be an appealing strategy for pursuing physics-based learning, our analysis highlights difficulties in dealing with unobserved states and non-smooth dynamics. Due to the ubiquity of these features in real systems in general, and control applications in particular, we complement our analysis with hands-on approaches to tackle these issues in order to exploit SINDy also in these challenging contexts. | ['Aurelio Raffa Ugolini', 'Valentina Breschi', 'Andrea Manzoni', 'Mara Tanelli'] |
| null | null | 2403.00584 | null | null | http://arxiv.org/pdf/2403.00584v1 | 2024-03-01T15:05:21Z | 2024-03-01T15:05:21Z | Generalized User Representations for Transfer Learning | We present a novel framework for user representation in large-scale recommender systems, aiming at effectively representing diverse user taste in a generalized manner. Our approach employs a two-stage methodology combining representation learning and transfer learning. The representation learning model uses an autoencoder that compresses various user features into a representation space. In the second stage, downstream task-specific models leverage user representations via transfer learning instead of curating user features individually. We further augment this methodology on the representation's input features to increase flexibility and enable reaction to user events, including new user experiences, in Near-Real Time. Additionally, we propose a novel solution to manage deployment of this framework in production models, allowing downstream models to work independently. We validate the performance of our framework through rigorous offline and online experiments within a large-scale system, showcasing its remarkable efficacy across multiple evaluation tasks. Finally, we show how the proposed framework can significantly reduce infrastructure costs compared to alternative approaches. | ['Ghazal Fazelnia', 'Sanket Gupta', 'Claire Keum', 'Mark Koh', 'Ian Anderson', 'Mounia Lalmas'] |
| null | null | 2403.00625 | null | null | http://arxiv.org/pdf/2403.00625v1 | 2024-03-01T16:01:28Z | 2024-03-01T16:01:28Z | Bias Mitigation in Fine-tuning Pre-trained Models for Enhanced Fairness and Efficiency | Fine-tuning pre-trained models is a widely employed technique in numerous real-world applications. However, fine-tuning these models on new tasks can lead to unfair outcomes. This is due to the absence of generalization guarantees for fairness properties, regardless of whether the original pre-trained model was developed with fairness considerations. To tackle this issue, we introduce an efficient and robust fine-tuning framework specifically designed to mitigate biases in new tasks. Our empirical analysis shows that the parameters of the pre-trained model that affect predictions differ across demographic groups; based on this observation, we employ a transfer learning strategy that neutralizes the importance of these influential weights, determined using Fisher information across demographic groups. Additionally, we integrate this weight importance neutralization strategy with a matrix factorization technique, which provides a low-rank approximation of the weight matrix using fewer parameters, reducing the computational demands. Experiments on multiple pre-trained models and new tasks demonstrate the effectiveness of our method. | ['Yixuan Zhang', 'Feng Zhou'] |
| null | null | 2403.00642 | null | null | http://arxiv.org/pdf/2403.00642v2 | 2024-04-26T08:24:11Z | 2024-03-01T16:22:05Z | Rethinking The Uniformity Metric in Self-Supervised Learning | Uniformity plays an important role in evaluating learned representations, providing insights into self-supervised learning. In our quest for effective uniformity metrics, we pinpoint four principled properties that such metrics should possess. Namely, an effective uniformity metric should remain invariant to instance permutations and sample replications while accurately capturing feature redundancy and dimensional collapse. Surprisingly, we find that the uniformity metric proposed by \citet{Wang2020UnderstandingCR} fails to satisfy the majority of these properties. Specifically, their metric is sensitive to sample replications and cannot account for feature redundancy and dimensional collapse correctly. To overcome these limitations, we introduce a new uniformity metric based on the Wasserstein distance, which satisfies all the aforementioned properties. Integrating this new metric in existing self-supervised learning methods effectively mitigates dimensional collapse and consistently improves their performance on downstream tasks involving CIFAR-10 and CIFAR-100 datasets. Code is available at \url{https://github.com/statsle/WassersteinSSL}. | ['Xianghong Fang', 'Jian Li', 'Qiang Sun', 'Benyou Wang'] |
| null | null | 2403.00646 | null | null | http://arxiv.org/pdf/2403.00646v1 | 2024-03-01T16:26:47Z | 2024-03-01T16:26:47Z | Stability-Certified Learning of Control Systems with Quadratic Nonlinearities | This work primarily focuses on an operator inference methodology aimed at constructing low-dimensional dynamical models based on a priori hypotheses about their structure, often informed by established physics or expert insights. Stability is a fundamental attribute of dynamical systems, yet it is not always assured in models derived through inference. Our main objective is to develop a method that facilitates the inference of quadratic control dynamical systems with inherent stability guarantees. To this aim, we investigate the stability characteristics of control systems with energy-preserving nonlinearities, thereby identifying conditions under which such systems are bounded-input bounded-state stable. These insights are subsequently applied to the learning process, yielding inferred models that are inherently stable by design. The efficacy of our proposed framework is demonstrated through a couple of numerical examples. | ['Igor Pontes Duff', 'Pawan Goyal', 'Peter Benner'] |
| null | null | 2403.00669 | null | null | http://arxiv.org/pdf/2403.00669v1 | 2024-03-01T17:01:47Z | 2024-03-01T17:01:47Z | Advancing Additive Manufacturing through Deep Learning: A Comprehensive Review of Current Progress and Future Challenges | Additive manufacturing (AM) has already proved itself to be a potential alternative to widely used subtractive manufacturing due to its extraordinary capacity for manufacturing highly customized products with minimum material wastage. Nevertheless, it is still not considered the primary choice for industry due to some major inherent challenges, including complex and dynamic process interactions, which are sometimes difficult to fully understand even with traditional machine learning because of the involvement of high-dimensional data such as images, point clouds, and voxels. However, the recent emergence of deep learning (DL) shows great promise in overcoming many of these challenges, as DL can automatically capture complex relationships from high-dimensional data without hand-crafted feature extraction. The volume of research at the intersection of AM and DL is growing exponentially each year, which makes it difficult for researchers to keep track of the trends and future potential directions. Furthermore, to the best of our knowledge, there is no comprehensive review paper in this research track summarizing the recent studies. Therefore, this paper reviews recent studies that apply DL to improve the AM process, with a high-level summary of their contributions and limitations. Finally, it summarizes the current challenges and recommends some promising opportunities in this domain for further investigation, with a special focus on generalizing DL models to a wide range of geometry types, managing uncertainties both in AM data and DL models, overcoming limited and noisy AM data issues by incorporating generative models, and unveiling the potential of interpretable DL for AM. | ['Amirul Islam Saimon', 'Emmanuel Yangue', 'Xiaowei Yue', 'Zhenyu James Kong', 'Chenang Liu'] |
| null | null | 2403.00673 | null | null | http://arxiv.org/pdf/2403.00673v2 | 2024-03-12T12:20:59Z | 2024-03-01T17:05:22Z | Snapshot Reinforcement Learning: Leveraging Prior Trajectories for Efficiency | Deep reinforcement learning (DRL) algorithms require substantial samples and computational resources to achieve higher performance, which restricts their practical application and poses challenges for further development. Given the constraint of limited resources, it is essential to leverage existing computational work (e.g., learned policies, samples) to enhance sample efficiency and reduce the computational resource consumption of DRL algorithms. Previous approaches to leveraging existing computational work require intrusive modifications designed for specific algorithms and models, and thus lack flexibility and universality. In this paper, we present the Snapshot Reinforcement Learning (SnapshotRL) framework, which enhances sample efficiency simply by altering environments, without making any modifications to algorithms and models. By allowing student agents to choose states in teacher trajectories as the initial state to sample, SnapshotRL can effectively utilize teacher trajectories to assist student agents in training, allowing student agents to explore a larger state space in the early training phase. We propose a simple and effective SnapshotRL baseline algorithm, S3RL, which integrates well with existing DRL algorithms. Our experiments demonstrate that integrating S3RL with TD3, SAC, and PPO algorithms on the MuJoCo benchmark significantly improves sample efficiency and average return, without extra samples or additional computational resources. | ['Yanxiao Zhao', 'Yangge Qian', 'Tianyi Wang', 'Jingyang Shan', 'Xiaolin Qin'] |
| null | null | 2403.00675 | null | null | http://arxiv.org/pdf/2403.00675v1 | 2024-03-01T17:08:30Z | 2024-03-01T17:08:30Z | Reusing Historical Trajectories in Natural Policy Gradient via Importance Sampling: Convergence and Convergence Rate | Reinforcement learning provides a mathematical framework for learning-based control, whose success largely depends on the amount of data it can utilize. The efficient utilization of historical trajectories obtained from previous policies is essential for expediting policy optimization. Empirical evidence has shown that policy gradient methods based on importance sampling work well. However, the existing literature often neglects the interdependence between trajectories from different iterations, and the good empirical performance lacks a rigorous theoretical justification. In this paper, we study a variant of the natural policy gradient method that reuses historical trajectories via importance sampling. We show that the bias of the proposed estimator of the gradient is asymptotically negligible, the resultant algorithm is convergent, and reusing past trajectories helps improve the convergence rate. We further apply the proposed estimator to popular policy optimization algorithms such as trust region policy optimization. Our theoretical results are verified on classical benchmarks. | ['Yifan Lin', 'Yuhao Wang', 'Enlu Zhou'] |
| null | null | 2403.00680 | null | null | http://arxiv.org/pdf/2403.00680v1 | 2024-03-01T17:12:53Z | 2024-03-01T17:12:53Z | Scalable Learning of Item Response Theory Models | Item Response Theory (IRT) models aim to assess latent abilities of $n$ examinees along with latent difficulty characteristics of $m$ test items from categorical data that indicates the quality of their corresponding answers. Classical psychometric assessments are based on a relatively small number of examinees and items, say a class of $200$ students solving an exam comprising $10$ problems. More recent global large scale assessments such as PISA, or internet studies, may lead to significantly increased numbers of participants. Additionally, in the context of Machine Learning where algorithms take the role of examinees and data analysis problems take the role of items, both $n$ and $m$ may become very large, challenging the efficiency and scalability of computations. To learn the latent variables in IRT models from large data, we leverage the similarity of these models to logistic regression, which can be approximated accurately using small weighted subsets called coresets. We develop coresets for their use in alternating IRT training algorithms, facilitating scalable learning from large data. | ['Susanne Frick', 'Amer Krivošija', 'Alexander Munteanu'] |
| null | null | 2403.00694 | null | null | http://arxiv.org/pdf/2403.00694v1 | 2024-03-01T17:30:49Z | 2024-03-01T17:30:49Z | Defining Expertise: Applications to Treatment Effect Estimation | Decision-makers are often experts of their domain and take actions based on their domain knowledge. Doctors, for instance, may prescribe treatments by predicting the likely outcome of each available treatment. Actions of an expert thus naturally encode part of their domain knowledge, and can help make inferences within the same domain: Knowing doctors try to prescribe the best treatment for their patients, we can tell treatments prescribed more frequently are likely to be more effective. Yet in machine learning, the fact that most decision-makers are experts is often overlooked, and "expertise" is seldom leveraged as an inductive bias. This is especially true for the literature on treatment effect estimation, where often the only assumption made about actions is that of overlap. In this paper, we argue that expertise - particularly the type of expertise the decision-makers of a domain are likely to have - can be informative in designing and selecting methods for treatment effect estimation. We formally define two types of expertise, predictive and prognostic, and demonstrate empirically that: (i) the prominent type of expertise in a domain significantly influences the performance of different methods in treatment effect estimation, and (ii) it is possible to predict the type of expertise present in a dataset, which can provide a quantitative basis for model selection. | ['Alihan Hüyük', 'Qiyao Wei', 'Alicia Curth', 'Mihaela van der Schaar'] |
| null | null | 2403.00715 | null | null | http://arxiv.org/pdf/2403.00715v2 | 2024-03-10T15:12:16Z | 2024-03-01T18:03:49Z | Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds | Follow-The-Regularized-Leader (FTRL) is known as an effective and versatile approach in online learning, where appropriate choice of the learning rate is crucial for smaller regret. To this end, we formulate the problem of adjusting FTRL's learning rate as a sequential decision-making problem and introduce the framework of competitive analysis. We establish a lower bound for the competitive ratio and propose update rules for the learning rate that achieve an upper bound within a constant factor of this lower bound. Specifically, we illustrate that the optimal competitive ratio is characterized by the (approximate) monotonicity of components of the penalty term, showing that a constant competitive ratio is achievable if the components of the penalty term form a monotonically non-increasing sequence, and derive a tight competitive ratio when penalty terms are $\xi$-approximately monotone non-increasing. Our proposed update rule, referred to as \textit{stability-penalty matching}, also facilitates constructing Best-Of-Both-Worlds (BOBW) algorithms for stochastic and adversarial environments. In these environments, our result contributes to achieving tighter regret bounds and broadens the applicability of algorithms to various settings such as multi-armed bandits, graph bandits, linear bandits, and contextual bandits. | ['Shinji Ito', 'Taira Tsuchiya', 'Junya Honda'] |
| null | null | 2403.00720 | null | null | http://arxiv.org/pdf/2403.00720v2 | 2024-06-06T17:59:38Z | 2024-03-01T18:12:46Z | Subhomogeneous Deep Equilibrium Models | Implicit-depth neural networks have grown as powerful alternatives to traditional networks in various applications in recent years. However, these models often lack guarantees of existence and uniqueness, raising stability, performance, and reproducibility issues. In this paper, we present a new analysis of the existence and uniqueness of fixed points for implicit-depth neural networks based on the concept of subhomogeneous operators and the nonlinear Perron-Frobenius theory. Compared to previous similar analyses, our theory allows for weaker assumptions on the parameter matrices, thus yielding a more flexible framework for well-defined implicit networks. We illustrate the performance of the resulting subhomogeneous networks on feedforward, convolutional, and graph neural network examples. | ['Pietro Sittoni', 'Francesco Tudisco'] |
| null | null | 2403.00745 | null | null | http://arxiv.org/pdf/2403.00745v1 | 2024-03-01T18:43:51Z | 2024-03-01T18:43:51Z | AtP*: An efficient and scalable method for localizing LLM behaviour to components | Activation Patching is a method of directly computing causal attributions of behavior to model components. However, applying it exhaustively requires a sweep with cost scaling linearly in the number of model components, which can be prohibitively expensive for SoTA Large Language Models (LLMs). We investigate Attribution Patching (AtP), a fast gradient-based approximation to Activation Patching, and find two classes of failure modes of AtP which lead to significant false negatives. We propose a variant of AtP called AtP*, with two changes to address these failure modes while retaining scalability. We present the first systematic study of AtP and alternative methods for faster activation patching and show that AtP significantly outperforms all other investigated methods, with AtP* providing further significant improvement. Finally, we provide a method to bound the probability of remaining false negatives of AtP* estimates. | ['János Kramár', 'Tom Lieberum', 'Rohin Shah', 'Neel Nanda'] |
| null | null | 2403.00746 | null | null | http://arxiv.org/pdf/2403.00746v1 | 2024-03-01T18:46:26Z | 2024-03-01T18:46:26Z | A time-stepping deep gradient flow method for option pricing in (rough) diffusion models | We develop a novel deep learning approach for pricing European options in diffusion models that can efficiently handle high-dimensional problems resulting from Markovian approximations of rough volatility models. The option pricing partial differential equation is reformulated as an energy minimization problem, which is approximated in a time-stepping fashion by deep artificial neural networks. The proposed scheme respects the asymptotic behavior of option prices for large levels of moneyness, and adheres to a priori known bounds for option prices. The accuracy and efficiency of the proposed method are assessed in a series of numerical examples, with particular focus on the lifted Heston model. | ['Antonis Papapantoleon', 'Jasper Rou'] |
| null | null | 2403.00758 | null | null | http://arxiv.org/pdf/2403.00758v3 | 2024-03-20T07:37:24Z | 2024-03-01T18:55:20Z | Mitigating Reversal Curse in Large Language Models via Semantic-aware Permutation Training | While large language models (LLMs) have achieved impressive performance across diverse tasks, recent studies show that causal LLMs suffer from the "reversal curse": a typical example is that the model knows "A's father is B" but is unable to reason that "B's child is A". This limitation poses a challenge to the advancement of artificial general intelligence (AGI), as it suggests a gap in the models' ability to comprehend and apply bidirectional reasoning. In this paper, we first conduct substantial evaluation and identify that the root cause of the reversal curse lies in the different word order between the training and inference stages, namely, the poor ability of causal language models to predict antecedent words within the training data. Accordingly, permutation on the training data is considered as a potential solution, since this can make the model predict antecedent words or tokens. However, previous permutation methods may disrupt complete phrases or entities, thereby posing challenges for the model to comprehend and learn from training data. To address this issue, we propose Semantic-aware Permutation Training (SPT), which segments the training sentences into semantic units (i.e., entities or phrases) with an assistant language model and permutes these units before feeding them into the model. Extensive experiments demonstrate that SPT effectively mitigates the reversal curse, since the performance on reversed questions approximates that on the forward ones, and significantly advances the performance of existing works. | ['Qingyan Guo', 'Rui Wang', 'Junliang Guo', 'Xu Tan', 'Jiang Bian', 'Yujiu Yang'] |
| null | null | 2403.00765 | null | null | http://arxiv.org/pdf/2403.00765v1 | 2024-02-06T12:08:01Z | 2024-02-06T12:08:01Z | An Architecture for Unattended Containerized (Deep) Reinforcement Learning with Webots | As data science applications gain adoption across industries, the tooling landscape matures to facilitate the life cycle of such applications, providing solutions to the challenges involved and boosting the productivity of the people involved. Reinforcement learning with agents in a 3D world still faces challenges: the knowledge required to use simulation software, as well as the use of standalone simulation software in unattended training pipelines. In this paper we review tools and approaches for training reinforcement learning agents for robots in 3D worlds, with respect to the Robotino robot, and argue that the separation between the simulation environment for creators of virtual worlds and the model development environment for data scientists is not a well-covered topic. Often both are the same, and data scientists require knowledge of the simulation software to work directly with its APIs. Moreover, sometimes creators of virtual worlds and data scientists even work on the same files. We contribute to this topic by describing an approach in which data scientists do not require knowledge about the simulation software. Our approach uses the standalone simulation software Webots, the Robot Operating System to communicate with the simulated robots as well as with the simulation software itself, and container technology to separate the simulation from the model development environment. We put emphasis on the APIs the data scientists work with and on the use of standalone simulation software in unattended training pipelines. We show the parts that are specific to the Robotino and to the robot task to learn. | ['Tobias Haubold', 'Petra Linke'] |
| null | null | 2403.00766 | null | null | http://arxiv.org/pdf/2403.00766v1 | 2024-02-09T07:25:07Z | 2024-02-09T07:25:07Z | Towards Fair and Firm Real-Time Scheduling in DNN Multi-Tenant Multi-Accelerator Systems via Reinforcement Learning | This paper addresses the critical challenge of managing Quality of Service (QoS) in cloud services, focusing on the nuances of individual tenant expectations and varying Service Level Indicators (SLIs). It introduces a novel approach utilizing Deep Reinforcement Learning for tenant-specific QoS management in multi-tenant, multi-accelerator cloud environments. The chosen SLI, deadline hit rate, allows clients to tailor QoS for each service request. A novel online scheduling algorithm for Deep Neural Networks in multi-accelerator systems is proposed, with a focus on guaranteeing tenant-wise, model-specific QoS levels while considering real-time constraints. | ['Enrico Russo', 'Francesco Giulio Blanco', 'Maurizio Palesi', 'Giuseppe Ascia', 'Davide Patti', 'Vincenzo Catania'] |
| null | null | 2403.00769 | null | null | http://arxiv.org/abs/2403.00769v1 | 2024-02-11T11:41:25Z | 2024-02-11T11:41:25Z | Text mining in education | The explosive growth of online education environments is generating a massive volume of data, especially in text format from forums, chats, social networks, assessments, essays, and other sources. This produces exciting challenges regarding how to mine text data in order to find useful knowledge for educational stakeholders. Despite the increasing number of educational applications of text mining published recently, we have not found any paper surveying them. Along this line, this work presents a systematic overview of the current status of the Educational Text Mining field. Our final goal is to answer three main research questions: Which text mining techniques are most used in educational environments? Which educational resources are most used? And what are the main applications or educational goals? Finally, we outline the conclusions and the most interesting future trends. | ['R. Ferreira-Mello', 'M. Andre', 'A. Pinheiro', 'E. Costa', 'C. Romero'] |
| null | null | 2403.00771 | null | null | http://arxiv.org/pdf/2403.00771v1 | 2024-02-11T21:57:49Z | 2024-02-11T21:57:49Z | XProspeCT: CT Volume Generation from Paired X-Rays | Computed tomography (CT) is a beneficial imaging tool for diagnostic purposes. CT scans provide detailed information concerning the internal anatomic structures of a patient, but present higher radiation dose and costs compared to X-ray imaging. In this paper, we build on previous research to convert orthogonal X-ray images into simulated CT volumes by exploring larger datasets and various model structures. Significant model variations include UNet architectures, custom connections, activation functions, loss functions, optimizers, and a novel back projection approach. | ['Benjamin Paulson', 'Joshua Goldshteyn', 'Sydney Balboni', 'John Cisler', 'Andrew Crisler', 'Natalia Bukowski', 'Julia Kalish', 'Theodore Colwell'] |
| null | null | 2403.00772 | null | null | http://arxiv.org/abs/2403.00772v1 | 2024-02-12T10:04:54Z | 2024-02-12T10:04:54Z | Do Weibo platform experts perform better at predicting stock market? | Sentiment analysis can be used for stock market prediction. However, existing research has not studied the impact of a user's financial background on sentiment-based forecasting of the stock market using artificial neural networks. In this work, a novel combination of neural networks is used for the assessment of sentiment-based stock market prediction, based on the financial background of the population that generated the sentiment. The state-of-the-art language processing model Bidirectional Encoder Representations from Transformers (BERT) is used to classify the sentiment and a Long-Short Term Memory (LSTM) model is used for time-series based stock market prediction. For evaluation, the Weibo social networking platform is used as a sentiment data collection source. Weibo users (and their comments respectively) are divided into Authorized Financial Advisor (AFA) and Unauthorized Financial Advisor (UFA) groups according to their background information, as collected by Weibo. The Hong Kong Hang Seng index is used to extract historical stock market change data. The results indicate that stock market prediction learned from the AFA group users is 39.67% more precise than that learned from the UFA group users and shows the highest accuracy (87%) when compared to existing approaches. | ['Ziyuan Ma', 'Conor Ryan', 'Jim Buckley', 'Muslim Chochlov'] |
| null | null | 2403.00773 | null | null | http://arxiv.org/pdf/2403.00773v1 | 2024-02-13T17:11:08Z | 2024-02-13T17:11:08Z | Misconduct in Post-Selections and Deep Learning | This is a theoretical paper on "Deep Learning" misconduct in particular and Post-Selection in general. As far as the author knows, the first peer-reviewed papers on Deep Learning misconduct are [32], [37], [36]. Regardless of learning modes, e.g., supervised, reinforcement, adversarial, and evolutional, almost all machine learning methods (except for a few methods that train a sole system) are rooted in the same misconduct -- cheating and hiding -- (1) cheating in the absence of a test and (2) hiding bad-looking data. It was reasoned in [32], [37], [36] that authors must report at least the average error of all trained networks, good and bad, on the validation set (called general cross-validation in this paper). Better, report also five percentage positions of ranked errors. From the new analysis here, we can see that the hidden culprit is Post-Selection. This is also true for Post-Selection on hand-tuned or searched hyperparameters, because they are random, depending on random observation data. Does cross-validation on data splits rescue Post-Selections from the Misconducts (1) and (2)? The new result here says: No. Specifically, this paper reveals that using cross-validation for data splits is insufficient to exonerate Post-Selections in machine learning. In general, Post-Selections of statistical learners based on their errors on the validation set are statistically invalid. | ['Juyang Weng'] |
| null | null | 2403.00775 | null | null | http://arxiv.org/pdf/2403.00775v1 | 2024-02-14T14:17:56Z | 2024-02-14T14:17:56Z | Detecting Anomalous Events in Object-centric Business Processes via Graph Neural Networks | Detecting anomalies is important for identifying inefficiencies, errors, or fraud in business processes. Traditional process mining approaches focus on analyzing 'flattened', sequential, event logs based on a single case notion. However, many real-world process executions exhibit a graph-like structure, where events can be associated with multiple cases. Flattening event logs requires selecting a single case identifier which creates a gap with the real event data and artificially introduces anomalies in the event logs. Object-centric process mining avoids these limitations by allowing events to be related to different cases. This study proposes a novel framework for anomaly detection in business processes that exploits graph neural networks and the enhanced information offered by object-centric process mining. We first reconstruct and represent the process dependencies of the object-centric event logs as attributed graphs and then employ a graph convolutional autoencoder architecture to detect anomalous events. Our results show that our approach provides promising performance in detecting anomalies at the activity type and attributes level, although it struggles to detect anomalies in the temporal order of events. | ['Alessandro Niro', 'Michael Werner'] |
null | null |
2403.00777
| null | null |
http://arxiv.org/abs/2403.00777v1
|
2024-02-14T17:31:29Z
|
2024-02-14T17:31:29Z
|
Combating Financial Crimes with Unsupervised Learning Techniques:
Clustering and Dimensionality Reduction for Anti-Money Laundering
|
Anti-Money Laundering (AML) is a crucial task in ensuring the integrity of financial systems. One keychallenge in AML is identifying high-risk groups based on their behavior. Unsupervised learning, particularly clustering, is a promising solution for this task. However, the use of hundreds of features todescribe behavior results in a highdimensional dataset that negatively impacts clustering performance.In this paper, we investigate the effectiveness of combining clustering method agglomerative hierarchicalclustering with four dimensionality reduction techniques -Independent Component Analysis (ICA), andKernel Principal Component Analysis (KPCA), Singular Value Decomposition (SVD), Locality Preserving Projections (LPP)- to overcome the issue of high-dimensionality in AML data and improve clusteringresults. This study aims to provide insights into the most effective way of reducing the dimensionality ofAML data and enhance the accuracy of clustering-based AML systems. The experimental results demonstrate that KPCA outperforms other dimension reduction techniques when combined with agglomerativehierarchical clustering. This superiority is observed in the majority of situations, as confirmed by threedistinct validation indices.
|
[
"['Ahmed N. Bakry' 'Almohammady S. Alsharkawy' 'Mohamed S. Farag'\n 'Kamal R. Raslan']"
] |
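The pipeline this abstract describes - reduce hundreds of behavioral features with KPCA before clustering - can be sketched in a few lines. The kernel-PCA implementation below is an illustrative NumPy sketch on synthetic data; the `gamma`, dimensions, and data are assumptions for the example, not the paper's setup:

```python
import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    """Project X onto its top kernel principal components (RBF kernel)."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    # Center the kernel matrix in feature space.
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # take the largest ones
    # Embedding: eigenvectors scaled by the square roots of eigenvalues.
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 1e-12, None))

rng = np.random.default_rng(0)
# Two behavioral groups described by many redundant features.
base = rng.normal(size=(60, 2))
base[:30] += 4.0
X = np.hstack([base, base @ rng.normal(size=(2, 48))])  # 50-D data
Z = rbf_kernel_pca(X, gamma=0.01, n_components=2)
print(Z.shape)  # (60, 2)
```

A hierarchical clustering step would then run on `Z` instead of the raw 50-dimensional features.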
null | null |
2403.00780
| null | null |
http://arxiv.org/pdf/2403.00780v1
|
2024-02-17T15:00:45Z
|
2024-02-17T15:00:45Z
|
Empirical and Experimental Insights into Data Mining Techniques for
Crime Prediction: A Comprehensive Survey
|
This survey paper presents a comprehensive analysis of crime prediction methodologies, exploring the various techniques and technologies utilized in this area. The paper covers the statistical methods, machine learning algorithms, and deep learning techniques employed to analyze crime data, while also examining their effectiveness and limitations. We propose a methodological taxonomy that classifies crime prediction algorithms into specific techniques. This taxonomy is structured into four tiers, including methodology category, methodology sub-category, methodology techniques, and methodology sub-techniques. Empirical and experimental evaluations are provided to rank the different techniques. The empirical evaluation assesses the crime prediction techniques based on four criteria, while the experimental evaluation ranks the algorithms that employ the same sub-technique, the different sub-techniques that employ the same technique, the different techniques that employ the same methodology sub-category, the different methodology sub-categories within the same category, and the different methodology categories. The combination of methodological taxonomy, empirical evaluations, and experimental comparisons allows for a nuanced and comprehensive understanding of crime prediction algorithms, aiding researchers in making informed decisions. Finally, the paper provides a glimpse into the future of crime prediction techniques, highlighting potential advancements and opportunities for further research in this field.
|
[
"['Kamal Taha']"
] |
null | null |
2403.00781
| null | null |
http://arxiv.org/pdf/2403.00781v2
|
2024-03-16T17:31:11Z
|
2024-02-18T06:07:17Z
|
ChatDiet: Empowering Personalized Nutrition-Oriented Food Recommender
Chatbots through an LLM-Augmented Framework
|
The profound impact of food on health necessitates advanced nutrition-oriented food recommendation services. Conventional methods often lack the crucial elements of personalization, explainability, and interactivity. While Large Language Models (LLMs) bring interpretability and explainability, their standalone use falls short of achieving true personalization. In this paper, we introduce ChatDiet, a novel LLM-powered framework designed specifically for personalized nutrition-oriented food recommendation chatbots. ChatDiet integrates personal and population models, complemented by an orchestrator, to seamlessly retrieve and process pertinent information. The personal model leverages causal discovery and inference techniques to assess personalized nutritional effects for a specific user, whereas the population model provides generalized information on food nutritional content. The orchestrator retrieves, synergizes and delivers the output of both models to the LLM, providing tailored food recommendations designed to support targeted health outcomes. The result is a dynamic delivery of personalized and explainable food recommendations, tailored to individual user preferences. Our evaluation of ChatDiet includes a compelling case study, where we establish a causal personal model to estimate individual nutrition effects. Our assessments, including a food recommendation test showcasing a 92% effectiveness rate, coupled with illustrative dialogue examples, underscore ChatDiet's strengths in explainability, personalization, and interactivity.
|
[
"['Zhongqi Yang' 'Elahe Khatibi' 'Nitish Nagesh' 'Mahyar Abbasian'\n 'Iman Azimi' 'Ramesh Jain' 'Amir M. Rahmani']"
] |
null | null |
2403.00785
| null | null |
http://arxiv.org/abs/2403.00785v1
|
2024-02-19T02:43:55Z
|
2024-02-19T02:43:55Z
|
Applying News and Media Sentiment Analysis for Generating Forex Trading
Signals
|
The objective of this research is to examine how sentiment analysis can be employed to generate trading signals for the Foreign Exchange (Forex) market. The author assessed sentiment in social media posts and news articles pertaining to the United States Dollar (USD) using a combination of methods: lexicon-based analysis and the Naive Bayes machine learning algorithm. The findings indicate that sentiment analysis proves valuable in forecasting market movements and devising trading signals. Notably, its effectiveness is consistent across different market conditions. The author concludes that by analyzing sentiment expressed in news and social media, traders can glean insights into prevailing market sentiments towards the USD and other pertinent countries, thereby aiding trading decision-making. This study underscores the importance of weaving sentiment analysis into trading strategies as a pivotal tool for predicting market dynamics.
|
[
"['Oluwafemi F Olaiyapo']"
] |
null | null |
2403.00788
| null | null |
http://arxiv.org/pdf/2403.00788v1
|
2024-02-20T04:26:31Z
|
2024-02-20T04:26:31Z
|
PRECISE Framework: GPT-based Text For Improved Readability, Reliability,
and Understandability of Radiology Reports For Patient-Centered Care
|
This study introduces and evaluates the PRECISE framework, utilizing OpenAI's GPT-4 to enhance patient engagement by providing clearer and more accessible chest X-ray reports at a sixth-grade reading level. The framework was tested on 500 reports, demonstrating significant improvements in readability, reliability, and understandability. Statistical analyses confirmed the effectiveness of the PRECISE approach, highlighting its potential to foster patient-centric care delivery in healthcare decision-making.
|
[
"['Satvik Tripathi' 'Liam Mutter' 'Meghana Muppuri' 'Suhani Dheer'\n 'Emiliano Garza-Frias' 'Komal Awan' 'Aakash Jha' 'Michael Dezube'\n 'Azadeh Tabari' 'Christopher P. Bridge' 'Dania Daye']"
] |
null | null |
2403.00793
| null | null |
http://arxiv.org/abs/2403.00793v2
|
2024-07-05T18:20:15Z
|
2024-02-22T22:47:08Z
|
Ads Recommendation in a Collapsed and Entangled World
|
We present Tencent's ads recommendation system and examine the challenges and practices of learning appropriate recommendation representations. Our study begins by showcasing our approaches to preserving prior knowledge when encoding features of diverse types into embedding representations. We specifically address sequence features, numeric features, and pre-trained embedding features. Subsequently, we delve into two crucial challenges related to feature representation: the dimensional collapse of embeddings and the interest entanglement across different tasks or scenarios. We propose several practical approaches to address these challenges that result in robust and disentangled recommendation representations. We then explore several training techniques to facilitate model optimization, reduce bias, and enhance exploration. Additionally, we introduce three analysis tools that enable us to study feature correlation, dimensional collapse, and interest entanglement. This work builds upon the continuous efforts of Tencent's ads recommendation team over the past decade. It summarizes general design principles and presents a series of readily applicable solutions and analysis tools. The reported performance is based on our online advertising platform, which handles hundreds of billions of requests daily and serves millions of ads to billions of users.
|
[
"['Junwei Pan' 'Wei Xue' 'Ximei Wang' 'Haibin Yu' 'Xun Liu' 'Shijie Quan'\n 'Xueming Qiu' 'Dapeng Liu' 'Lei Xiao' 'Jie Jiang']"
] |
null | null |
2403.00794
| null | null |
http://arxiv.org/pdf/2403.00794v2
|
2024-06-21T17:12:35Z
|
2024-02-23T02:58:12Z
|
Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large
Language Models
|
Humor is a fundamental facet of human cognition and interaction. Yet, despite recent advances in natural language processing, humor detection remains a challenging task that is complicated by the scarcity of datasets that pair humorous texts with similar non-humorous counterparts. In our work, we investigate whether large language models (LLMs) can generate synthetic data for humor detection via editing texts. We benchmark LLMs on an existing human dataset and show that current LLMs display an impressive ability to 'unfun' jokes, as judged by humans and as measured on the downstream task of humor detection. We extend our approach to a code-mixed English-Hindi humor dataset, where we find that GPT-4's synthetic data is highly rated by bilingual annotators and provides challenging adversarial examples for humor classifiers.
|
[
"['Zachary Horvitz' 'Jingru Chen' 'Rahul Aditya' 'Harshvardhan Srivastava'\n 'Robert West' 'Zhou Yu' 'Kathleen McKeown']"
] |
null | null |
2403.00796
| null | null |
http://arxiv.org/pdf/2403.00796v1
|
2024-02-23T06:09:45Z
|
2024-02-23T06:09:45Z
|
Enhancing Mean-Reverting Time Series Prediction with Gaussian Processes:
Functional and Augmented Data Structures in Financial Forecasting
|
In this paper, we explore the application of Gaussian Processes (GPs) for predicting mean-reverting time series with an underlying structure, using relatively unexplored functional and augmented data structures. While many conventional forecasting methods concentrate on the short-term dynamics of time series data, GPs offer the potential to forecast not just the average prediction but the entire probability distribution over a future trajectory. This is particularly beneficial in financial contexts, where accurate predictions alone may not suffice if incorrect volatility assessments lead to capital losses. Moreover, in trade selection, GPs allow for the forecasting of multiple Sharpe ratios adjusted for transaction costs, aiding in decision-making. The functional data representation utilized in this study enables longer-term predictions by leveraging information from previous years, even as the forecast moves away from the current year's training data. Additionally, the augmented representation enriches the training set by incorporating multiple targets for future points in time, facilitating long-term predictions. Our implementation closely aligns with the methodology outlined in, which assessed effectiveness on commodity futures. However, our testing methodology differs. Instead of real data, we employ simulated data with similar characteristics. We construct a testing environment to evaluate both data representations and models under conditions of increasing noise, fat tails, and inappropriate kernels - conditions commonly encountered in practice. By simulating data, we can compare our forecast distribution over time against a full simulation of the actual distribution of our test set, thereby reducing the inherent uncertainty in testing time series models on real data. We enable feature prediction through augmentation and employ sub-sampling to ensure the feasibility of GPs.
|
[
"['Narayan Tondapu']"
] |
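The key advantage the abstract cites - forecasting a full predictive distribution rather than a point estimate - comes directly from the GP posterior equations. Here is a minimal NumPy GP regression on a simulated mean-reverting (AR(1)/OU-like) series; the kernel choice, noise level, and simulation are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1, length=1.0):
    """Posterior mean and pointwise variance of GP regression."""
    K = rbf(x_train, x_train, length) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length)
    Kss = rbf(x_test, x_test, length)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Simulated mean-reverting series: y_t = 0.9 * y_{t-1} + 0.1 * eps_t.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 80)
y = np.zeros(80)
for t in range(1, 80):
    y[t] = 0.9 * y[t - 1] + 0.1 * rng.normal()
mean, var = gp_posterior(x, y, x, noise=0.05, length=0.5)
```

The returned `var` is what enables volatility-aware decisions such as transaction-cost-adjusted Sharpe forecasts, rather than the mean alone.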
null | null |
2403.00798
| null | null |
http://arxiv.org/abs/2403.00798v1
|
2024-02-23T15:00:46Z
|
2024-02-23T15:00:46Z
|
Helen: Optimizing CTR Prediction Models with Frequency-wise Hessian
Eigenvalue Regularization
|
Click-Through Rate (CTR) prediction holds paramount significance in online advertising and recommendation scenarios. Despite the proliferation of recent CTR prediction models, the improvements in performance have remained limited, as evidenced by open-source benchmark assessments. Current researchers tend to focus on developing new models for various datasets and settings, often neglecting a crucial question: What is the key challenge that truly makes CTR prediction so demanding? In this paper, we approach the problem of CTR prediction from an optimization perspective. We explore the typical data characteristics and optimization statistics of CTR prediction, revealing a strong positive correlation between the top Hessian eigenvalue and feature frequency. This correlation implies that frequently occurring features tend to converge towards sharp local minima, ultimately leading to suboptimal performance. Motivated by the recent advancements in sharpness-aware minimization (SAM), which considers the geometric aspects of the loss landscape during optimization, we present a dedicated optimizer crafted for CTR prediction, named Helen. Helen incorporates frequency-wise Hessian eigenvalue regularization, achieved through adaptive perturbations based on normalized feature frequencies. Empirical results under the open-source benchmark framework underscore Helen's effectiveness. It successfully constrains the top eigenvalue of the Hessian matrix and demonstrates a clear advantage over widely used optimization algorithms when applied to seven popular models across three public benchmark datasets on BARS. Our code is available at github.com/NUS-HPC-AI-Lab/Helen.
|
[
"['Zirui Zhu' 'Yong Liu' 'Zangwei Zheng' 'Huifeng Guo' 'Yang You']"
] |
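The SAM-style update that Helen builds on - perturb the weights toward higher loss, then descend using the gradient at the perturbed point, with the perturbation scaled per feature by normalized frequency - can be sketched on a toy quadratic. This is a simplified illustration of the idea, not Helen's actual implementation; the loss, learning rates, and frequency vector are made up:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05, freq_scale=None):
    """One sharpness-aware update: ascend to a nearby adversarial point,
    then descend using the gradient evaluated there. `freq_scale` mimics
    the idea of larger perturbations for more frequent features."""
    g = grad_fn(w)
    scale = np.ones_like(w) if freq_scale is None else freq_scale
    eps = rho * scale * g / (np.linalg.norm(g) + 1e-12)
    g_adv = grad_fn(w + eps)          # gradient at the perturbed weights
    return w - lr * g_adv

# Toy loss: L(w) = 0.5 * w^T A w with an ill-conditioned (sharp) A.
A = np.diag([10.0, 0.1])
grad_fn = lambda w: A @ w
w = np.array([1.0, 1.0])
freq = np.array([0.9, 0.1])           # hypothetical feature frequencies
for _ in range(50):
    w = sam_step(w, grad_fn, lr=0.05, rho=0.05, freq_scale=freq / freq.max())
loss = 0.5 * w @ A @ w                # starts at 5.05, shrinks steadily
```

The frequent feature (large `freq`) receives the full perturbation radius, so its sharp direction is flattened most aggressively.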
null | null |
2403.00799
| null | null |
http://arxiv.org/pdf/2403.00799v1
|
2024-02-23T17:38:43Z
|
2024-02-23T17:38:43Z
|
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning
|
Large language models (LLMs) are displaying emergent abilities for math reasoning tasks, and there is growing attention on enhancing the ability of open-source LLMs through supervised fine-tuning (SFT). In this paper, we aim to explore a general data strategy for supervised data to help optimize and expand math reasoning ability. First, we determine the ability boundary of reasoning path augmentation by identifying these paths' minimal optimal set. Second, we validate that different abilities of the model can be cumulatively enhanced by a Mix of Minimal Optimal Sets of the corresponding types of data, and our models MMOS achieve SOTA performance on a series of base models at much lower construction cost. Besides, we point out that GSM-HARD is not really hard and that today's LLMs no longer lack numerical robustness. We also provide an Auto Problem Generator for robustness testing and educational applications. Our code and data are publicly available at https://github.com/cyzhh/MMOS.
|
[
"['Zui Chen' 'Yezeng Chen' 'Jiaqi Han' 'Zhijie Huang' 'Ji Qi' 'Yi Zhou']"
] |
null | null |
2403.00800
| null | null |
http://arxiv.org/pdf/2403.00800v1
|
2024-02-23T17:40:31Z
|
2024-02-23T17:40:31Z
|
Brain-Inspired Two-Stage Approach: Enhancing Mathematical Reasoning by
Imitating Human Thought Processes
|
Although large language models demonstrate emergent abilities in solving math word problems, complex multi-step mathematical reasoning remains a challenging task. To improve model performance on mathematical reasoning tasks, previous work has conducted supervised fine-tuning on open-source models by improving the quality and quantity of data. In this paper, we propose a novel approach, named Brain, that imitates human thought processes to enhance mathematical reasoning abilities, using a Frontal Lobe Model to generate plans and then a Parietal Lobe Model to generate code and execute it to obtain answers. First, through this method we achieve SOTA performance in comparison with Code LLaMA 7B based models. Second, we find that plans can be explicitly extracted from natural language, code, or formal language. Our code and data are publicly available at https://github.com/cyzhh/Brain.
|
[
"['Yezeng Chen' 'Zui Chen' 'Yi Zhou']"
] |
null | null |
2403.00803
| null | null |
http://arxiv.org/pdf/2403.00803v1
|
2024-02-23T22:06:36Z
|
2024-02-23T22:06:36Z
|
LiMAML: Personalization of Deep Recommender Models via Meta Learning
|
In the realm of recommender systems, the ubiquitous adoption of deep neural networks has emerged as a dominant paradigm for modeling diverse business objectives. As user bases continue to expand, the necessity of personalization and frequent model updates have assumed paramount significance to ensure the delivery of relevant and refreshed experiences to a diverse array of members. In this work, we introduce an innovative meta-learning solution tailored to the personalization of models for individual members and other entities, coupled with the frequent updates based on the latest user interaction signals. Specifically, we leverage the Model-Agnostic Meta Learning (MAML) algorithm to adapt per-task sub-networks using recent user interaction data. Given the near infeasibility of productionizing original MAML-based models in online recommendation systems, we propose an efficient strategy to operationalize meta-learned sub-networks in production, which involves transforming them into fixed-sized vectors, termed meta embeddings, thereby enabling the seamless deployment of models with hundreds of billions of parameters for online serving. Through extensive experimentation on production data drawn from various applications at LinkedIn, we demonstrate that the proposed solution consistently outperforms the baseline models of those applications, including strong baselines such as using wide-and-deep ID based personalization approach. Our approach has enabled the deployment of a range of highly personalized AI models across diverse LinkedIn applications, leading to substantial improvements in business metrics as well as refreshed experience for our members.
|
[
"['Ruofan Wang' 'Prakruthi Prabhakar' 'Gaurav Srivastava' 'Tianqi Wang'\n 'Zeinab S. Jalali' 'Varun Bharill' 'Yunbo Ouyang' 'Aastha Nigam'\n 'Divya Venugopalan' 'Aman Gupta' 'Fedor Borisyuk' 'Sathiya Keerthi'\n 'Ajith Muralidharan']"
] |
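The adapt-then-update pattern behind MAML that this abstract leverages can be illustrated with its common first-order approximation on toy linear-regression tasks. This is a generic FOMAML sketch under synthetic assumptions; the per-member sub-networks, meta embeddings, and production data are of course not reproduced here:

```python
import numpy as np

def task_loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def fomaml(tasks, dim, inner_lr=0.05, outer_lr=0.1, steps=200):
    """First-order MAML: take one inner gradient step per task, then
    update the shared weights with the gradient at the adapted point."""
    w = np.zeros(dim)
    for _ in range(steps):
        outer_grad = np.zeros(dim)
        for X, y in tasks:
            g = 2 * X.T @ (X @ w - y) / len(y)
            w_task = w - inner_lr * g              # per-task adaptation
            outer_grad += 2 * X.T @ (X @ w_task - y) / len(y)
        w -= outer_lr * outer_grad / len(tasks)
    return w

rng = np.random.default_rng(3)
center = np.array([1.0, -1.0])
tasks = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    w_true = center + 0.1 * rng.normal(size=2)     # tasks cluster together
    tasks.append((X, X @ w_true + 0.01 * rng.normal(size=40)))
w_meta = fomaml(tasks, dim=2)
```

The meta-learned `w_meta` sits near all task optima, so a single inner step suffices per task - the property that makes freezing adapted parameters into fixed-size "meta embeddings" practical for serving.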
null | null |
2403.00804
| null | null |
http://arxiv.org/pdf/2403.00804v1
|
2024-02-24T00:15:09Z
|
2024-02-24T00:15:09Z
|
Uncovering Customer Issues through Topological Natural Language Analysis
|
E-commerce companies deal with a high volume of customer service requests daily. While a simple annotation system is often used to summarize the topics of customer contacts, thoroughly exploring each specific issue can be challenging. This presents a critical concern, especially during an emerging outbreak where companies must quickly identify and address specific issues. To tackle this challenge, we propose a novel machine learning algorithm that leverages natural language techniques and topological data analysis to monitor emerging and trending customer issues. Our approach involves an end-to-end deep learning framework that simultaneously tags the primary question sentence of each customer's transcript and generates sentence embedding vectors. We then whiten the embedding vectors and use them to construct an undirected graph. From there, we define trending and emerging issues based on the topological properties of each transcript. We have validated our results through various methods and found that they are highly consistent with news sources.
|
[
"['Shu-Ting Pi' 'Sidarth Srinivasan' 'Yuying Zhu' 'Michael Yang' 'Qun Liu']"
] |
null | null |
2403.00817
| null | null |
http://arxiv.org/abs/2403.00817v1
|
2024-02-26T05:08:52Z
|
2024-02-26T05:08:52Z
|
Doubly Calibrated Estimator for Recommendation on Data Missing Not At
Random
|
Recommender systems often suffer from selection bias as users tend to rate their preferred items. The datasets collected under such conditions exhibit entries missing not at random and thus are not randomized-controlled trials representing the target population. To address this challenge, a doubly robust estimator and its enhanced variants have been proposed as they ensure unbiasedness when accurate imputed errors or predicted propensities are provided. However, we argue that existing estimators rely on miscalibrated imputed errors and propensity scores as they depend on rudimentary models for estimation. We provide theoretical insights into how miscalibrated imputation and propensity models may limit the effectiveness of doubly robust estimators and validate our theorems using real-world datasets. On this basis, we propose a Doubly Calibrated Estimator that involves the calibration of both the imputation and propensity models. To achieve this, we introduce calibration experts that consider different logit distributions across users. Moreover, we devise a tri-level joint learning framework, allowing the simultaneous optimization of calibration experts alongside prediction and imputation models. Through extensive experiments on real-world datasets, we demonstrate the superiority of the Doubly Calibrated Estimator in the context of debiased recommendation tasks.
|
[
"['Wonbin Kweon' 'Hwanjo Yu']"
] |
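The doubly robust estimator that the paper calibrates combines an imputed error for every user-item pair with a propensity-weighted correction on the observed entries. A minimal NumPy sketch on synthetic missing-not-at-random data (the error distribution, propensity model, and crude imputation are illustrative assumptions):

```python
import numpy as np

def doubly_robust_error(e_obs, e_hat, obs_mask, p_hat):
    """Doubly robust estimate of the average prediction error over ALL
    pairs, given errors observed only where obs_mask == 1, imputed
    errors e_hat, and estimated propensities p_hat."""
    correction = obs_mask * (e_obs - e_hat) / p_hat
    return np.mean(e_hat + correction)

rng = np.random.default_rng(2)
n = 100_000
true_err = rng.uniform(0.0, 1.0, n)            # true per-pair errors
p = np.clip(0.1 + 0.8 * true_err, 0.05, 0.95)  # MNAR: big errors observed more
obs = (rng.uniform(size=n) < p).astype(float)

naive = true_err[obs == 1].mean()              # biased under MNAR
e_hat = np.full(n, 0.4)                        # crude imputation model
dr = doubly_robust_error(true_err * obs, e_hat, obs, p)
```

With well-calibrated propensities the correction term cancels the selection bias that the naive observed-only average suffers from - which is exactly why miscalibrated `p_hat` or `e_hat`, the paper's focus, degrades the estimator.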
null | null |
2403.00818
| null | null |
http://arxiv.org/pdf/2403.00818v2
|
2024-03-05T14:31:03Z
|
2024-02-26T09:21:59Z
|
DenseMamba: State Space Models with Dense Hidden Connection for
Efficient Large Language Models
|
Large language models (LLMs) face a daunting challenge due to the excessive computational and memory requirements of the commonly used Transformer architecture. While state space models (SSMs) are a newer type of foundational network architecture offering lower computational complexity, their performance has yet to fully rival that of Transformers. This paper introduces DenseSSM, a novel approach to enhance the flow of hidden information between layers in SSMs. By selectively integrating shallow-layer hidden states into deeper layers, DenseSSM retains fine-grained information crucial for the final output. Even with the added dense connections, DenseSSM maintains training parallelizability and inference efficiency. The proposed method is widely applicable to various SSM types such as RetNet and Mamba. With similar model size, DenseSSM achieves significant improvements, exemplified by DenseRetNet outperforming the original RetNet with up to 5% accuracy improvement on public benchmarks. Code is available at https://github.com/WailordHe/DenseSSM
|
[
"['Wei He' 'Kai Han' 'Yehui Tang' 'Chengcheng Wang' 'Yujie Yang'\n 'Tianyu Guo' 'Yunhe Wang']"
] |
null | null |
2403.00821
| null | null |
http://arxiv.org/pdf/2403.00821v1
|
2024-02-26T16:17:19Z
|
2024-02-26T16:17:19Z
|
Social Media as a Sensor: Analyzing Twitter Data for Breast Cancer
Medication Effects Using Natural Language Processing
|
Breast cancer is a significant public health concern and is the leading cause of cancer-related deaths among women. Despite advances in breast cancer treatments, medication non-adherence remains a major problem. As electronic health records do not typically capture patient-reported outcomes that may reveal information about medication-related experiences, social media presents an attractive resource for enhancing our understanding of the patients' treatment experiences. In this paper, we developed natural language processing (NLP) based methodologies to study information posted by an automatically curated breast cancer cohort from social media. We employed a transformer-based classifier to identify breast cancer patients/survivors on X (Twitter) based on their self-reported information, and we collected longitudinal data from their profiles. We then designed a multi-layer rule-based model to develop a breast cancer therapy-associated side effect lexicon and detect patterns of medication usage and associated side effects among breast cancer patients. 1,454,637 posts were available from 583,962 unique users, of which 62,042 were detected as breast cancer members using our transformer-based model. 198 cohort members mentioned breast cancer medications, with tamoxifen as the most common. Our side effect lexicon identified well-known side effects of hormone and chemotherapy. Furthermore, it discovered subjective feelings towards cancer and medications, which may suggest a pre-clinical phase of side effects or emotional distress. This analysis highlighted not only the utility of NLP techniques in unstructured social media data to identify self-reported breast cancer posts, medication usage patterns, and treatment side effects, but also the richness of social data on such clinical questions.
|
[
"['Seibi Kobara' 'Alireza Rafiei' 'Masoud Nateghi' 'Selen Bozkurt'\n 'Rishikesan Kamaleswaran' 'Abeed Sarker']"
] |
null | null |
2403.00826
| null | null |
http://arxiv.org/pdf/2403.00826v1
|
2024-02-27T10:22:45Z
|
2024-02-27T10:22:45Z
|
LLMGuard: Guarding Against Unsafe LLM Behavior
|
Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and can have legal concerns. To alleviate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content against specific behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
|
[
"['Shubh Goyal' 'Medha Hira' 'Shubham Mishra' 'Sukriti Goyal' 'Arnav Goel'\n 'Niharika Dadu' 'Kirushikesh DB' 'Sameep Mehta' 'Nishtha Madaan']"
] |
null | null |
2403.00827
| null | null |
http://arxiv.org/pdf/2403.00827v1
|
2024-02-27T19:13:01Z
|
2024-02-27T19:13:01Z
|
Self-Refinement of Language Models from External Proxy Metrics Feedback
|
It is often desirable for Large Language Models (LLMs) to capture multiple objectives when providing a response. In document-grounded response generation, for example, agent responses are expected to be relevant to a user's query while also being grounded in a given document. In this paper, we introduce Proxy Metric-based Self-Refinement (ProMiSe), which enables an LLM to refine its own initial response along key dimensions of quality guided by external metrics feedback, yielding an overall better final response. ProMiSe leverages feedback on response quality through principle-specific proxy metrics, and iteratively refines its response one principle at a time. We apply ProMiSe to open source language models Flan-T5-XXL and Llama-2-13B-Chat, to evaluate its performance on document-grounded question answering datasets, MultiDoc2Dial and QuAC, demonstrating that self-refinement improves response quality. We further show that fine-tuning Llama-2-13B-Chat on the synthetic dialogue data generated by ProMiSe yields significant performance improvements over the zero-shot baseline as well as a supervised fine-tuned model on human annotated data.
|
[
"['Keshav Ramji' 'Young-Suk Lee' 'Ramón Fernandez Astudillo'\n 'Md Arafat Sultan' 'Tahira Naseem' 'Asim Munawar' 'Radu Florian'\n 'Salim Roukos']"
] |
null | null |
2403.00828
| null | null |
http://arxiv.org/pdf/2403.00828v1
|
2024-02-27T19:16:39Z
|
2024-02-27T19:16:39Z
|
Deep Learning Detection Method for Large Language Models-Generated
Scientific Content
|
Large Language Models (LLMs), such as GPT-3 and BERT, reshape how textual content is written and communicated. These models have the potential to generate scientific content that is indistinguishable from that written by humans. Hence, LLMs carry severe consequences for the scientific community, which relies on the integrity and reliability of publications. This research paper presents a novel ChatGPT-generated scientific text detection method, AI-Catcher. AI-Catcher integrates two deep learning models, multilayer perceptron (MLP) and convolutional neural networks (CNN). The MLP learns the feature representations of the linguistic and statistical features. The CNN extracts high-level representations of the sequential patterns from the textual content. AI-Catcher is a multimodal model that fuses hidden patterns derived from MLP and CNN. In addition, a new ChatGPT-Generated scientific text dataset is collected to enhance AI-generated text detection tools, AIGTxt. AIGTxt contains 3000 records collected from published academic articles across ten domains and divided into three classes: Human-written, ChatGPT-generated, and Mixed text. Several experiments are conducted to evaluate the performance of AI-Catcher. The comparative results demonstrate the capability of AI-Catcher to distinguish between human-written and ChatGPT-generated scientific text more accurately than alternative methods. On average, AI-Catcher improved accuracy by 37.4%.
|
[
"['Bushra Alhijawi' 'Rawan Jarrar' 'Aseel AbuAlRub' 'Arwa Bader']"
] |
null | null |
2403.00841
| null | null |
http://arxiv.org/pdf/2403.00841v1
|
2024-02-29T11:36:48Z
|
2024-02-29T11:36:48Z
|
Offline Fictitious Self-Play for Competitive Games
|
Offline Reinforcement Learning (RL) has received significant interest due to its ability to improve policies from previously collected datasets without online interactions. Despite its success in the single-agent setting, offline multi-agent RL remains a challenge, especially in competitive games. First, since the game environment is unavailable offline, it is impossible to interact with the opponents and thus to conduct self-play, a major learning paradigm for competitive games. Second, real-world datasets cannot cover the entire state and action space of the game, creating barriers to identifying a Nash equilibrium (NE). To address these issues, this paper introduces Off-FSP, the first practical model-free offline RL algorithm for competitive games. We start by simulating interactions with various opponents by adjusting the weights of the fixed dataset with importance sampling. This technique allows us to learn best responses to different opponents and to employ the Offline Self-Play learning framework. In this framework, we further implement Fictitious Self-Play (FSP) to approximate an NE. On partially covered real-world datasets, our method shows the potential to approach an NE by incorporating any single-agent offline RL method. Experimental results on Leduc Hold'em Poker show that our method significantly improves performance compared with state-of-the-art baselines.
|
[
"['Jingxiao Chen' 'Weiji Xie' 'Weinan Zhang' 'Yong yu' 'Ying Wen']"
] |
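The importance-sampling reweighting that Off-FSP applies to a fixed dataset follows the standard off-policy pattern: scale each logged outcome by the ratio of target-policy to behavior-policy probability. A hypothetical single-state, bandit-style sketch (the policies and payoffs are made up for illustration):

```python
import numpy as np

def importance_sampled_value(rewards, mu_probs, pi_probs):
    """Estimate the target policy's expected reward from data logged
    under the behavior policy, via importance weights pi/mu."""
    return np.mean((pi_probs / mu_probs) * rewards)

rng = np.random.default_rng(4)
n = 50_000
actions = rng.integers(0, 2, size=n)             # behavior policy: uniform
rewards = actions.astype(float)                  # action 1 pays 1, action 0 pays 0
mu = np.full(n, 0.5)                             # P(chosen action) under behavior
pi = np.where(actions == 1, 0.8, 0.2)            # target policy favors action 1
est = importance_sampled_value(rewards, mu, pi)  # true value of pi is 0.8
```

Reweighting the same fixed log with different target policies `pi` is what lets a best response be learned against each simulated opponent without any environment interaction.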
null | null |
2403.00843
| null | null |
http://arxiv.org/abs/2403.00843v2
|
2024-04-26T07:41:07Z
|
2024-02-29T13:49:56Z
|
Large Language Models are Learnable Planners for Long-Term
Recommendation
|
Planning for both immediate and long-term benefits becomes increasingly important in recommendation. Existing methods apply Reinforcement Learning (RL) to learn planning capacity by maximizing cumulative reward for long-term recommendation. However, the scarcity of recommendation data presents challenges such as instability and susceptibility to overfitting when training RL models from scratch, resulting in sub-optimal performance. In this light, we propose to leverage the remarkable planning capabilities over sparse data of Large Language Models (LLMs) for long-term recommendation. The key to achieving the target lies in formulating a guidance plan following principles of enhancing long-term engagement and grounding the plan to effective and executable actions in a personalized manner. To this end, we propose a Bi-level Learnable LLM Planner framework, which consists of a set of LLM instances and breaks down the learning process into macro-learning and micro-learning to learn macro-level guidance and micro-level personalized recommendation policies, respectively. Extensive experiments validate that the framework facilitates the planning ability of LLMs for long-term recommendation. Our code and data can be found at https://github.com/jizhi-zhang/BiLLP.
|
[
"['Wentao Shi' 'Xiangnan He' 'Yang Zhang' 'Chongming Gao' 'Xinyue Li'\n 'Jizhi Zhang' 'Qifan Wang' 'Fuli Feng']"
] |
null | null |
2403.00844
| null | null |
http://arxiv.org/abs/2403.00844v1
|
2024-02-29T13:58:33Z
|
2024-02-29T13:58:33Z
|
Lower-Left Partial AUC: An Effective and Efficient Optimization Metric
for Recommendation
|
Optimization metrics are crucial for building recommendation systems at scale. However, an effective and efficient metric for practical use remains elusive. While Top-K ranking metrics are the gold standard for optimization, they suffer from significant computational overhead. Alternatively, the more efficient accuracy and AUC metrics often fall short of capturing the true targets of recommendation tasks, leading to suboptimal performance. To overcome this dilemma, we propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), which is computationally efficient like AUC but strongly correlates with Top-K ranking metrics. Compared to AUC, LLPAUC considers only the partial area under the ROC curve in the Lower-Left corner to push the optimization focus on Top-K. We provide theoretical validation of the correlation between LLPAUC and Top-K ranking metrics and demonstrate its robustness to noisy user feedback. We further design an efficient point-wise recommendation loss to maximize LLPAUC and evaluate it on three datasets, validating its effectiveness and robustness.
|
[
"['Wentao Shi' 'Chenxu Wang' 'Fuli Feng' 'Yang Zhang' 'Wenjie Wang'\n 'Junkang Wu' 'Xiangnan He']"
] |
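The lower-left restriction described in the abstract above can be sketched empirically: clip the ROC curve to the box $[0, \alpha] \times [0, \beta]$ and integrate. A minimal NumPy sketch with an illustrative (unnormalized) definition — the paper's exact normalization and its point-wise surrogate loss are not reproduced here, and the function name is hypothetical:

```python
import numpy as np

def llpauc(y_true, y_score, alpha=0.3, beta=0.3):
    """Empirical lower-left partial AUC: area under the ROC curve
    clipped to FPR <= alpha with TPR capped at beta.
    Illustrative, unnormalized definition."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    # Sweep thresholds by sorting scores in descending order,
    # accumulating true/false positives along the ROC curve.
    order = np.argsort(-y_score)
    tps = np.cumsum(y_true[order] == 1)
    fps = np.cumsum(y_true[order] == 0)
    tpr = np.concatenate(([0.0], tps / max(tps[-1], 1)))
    fpr = np.concatenate(([0.0], fps / max(fps[-1], 1)))
    # Clip the curve to the lower-left box [0, alpha] x [0, beta].
    tpr = np.minimum(tpr, beta)
    fpr = np.minimum(fpr, alpha)
    # Trapezoidal integration over the clipped curve.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
```

With `alpha = beta = 1` this reduces to the ordinary empirical AUC; shrinking the box concentrates the metric on the top-ranked items, which is the intuition behind the Top-K correlation the abstract claims.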
null | null |
2403.00845
| null | null |
http://arxiv.org/pdf/2403.00845v1
|
2024-02-29T14:10:26Z
|
2024-02-29T14:10:26Z
|
Improved Online Learning Algorithms for CTR Prediction in Ad Auctions
|
In this work, we investigate the online learning problem of revenue maximization in ad auctions, where the seller needs to learn the click-through rates (CTRs) of each ad candidate and charge the winner in a pay-per-click manner. We focus on two models of the advertisers' strategic behaviors. First, we assume that the advertiser is completely myopic; i.e., in each round, they aim to maximize their utility only for the current round. In this setting, we develop an online mechanism based on upper-confidence bounds that achieves a tight $O(\sqrt{T})$ regret in the worst case, and negative regret when the values are static across all the auctions and there is a gap between the highest expected value (i.e., value multiplied by CTR) and the ad with the second-highest expected value. Next, we assume that the advertiser is non-myopic and cares about their long-term utility. This setting is much more complex, since an advertiser is incentivized to influence the mechanism by bidding strategically in earlier rounds. In this setting, we provide an algorithm that achieves negative regret for the static valuation setting (with a positive gap), which is in sharp contrast with prior work showing $O(T^{2/3})$ regret when the valuation is generated by an adversary.
|
[
"['Zhe Feng' 'Christopher Liaw' 'Zixin Zhou']"
] |
null | null |
2403.00849
| null | null |
http://arxiv.org/pdf/2403.00849v2
|
2024-07-03T13:43:56Z
|
2024-02-29T16:10:21Z
|
NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable
Functions
|
Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks. Among the most computationally intensive operations in a neural network (NN) is the dot product between the feature and weight vectors. Thus, some previous FPGA acceleration works have proposed mapping neurons with quantized inputs and outputs directly to lookup tables (LUTs) for hardware implementation. In these works, the boundaries of the neurons coincide with the boundaries of the LUTs. We propose relaxing these boundaries and mapping entire sub-networks to a single LUT. As the sub-networks are absorbed within the LUT, the NN topology and precision within a partition do not affect the size of the lookup tables generated. Therefore, we utilize fully connected layers with floating-point precision inside each partition, which benefit from being universal function approximators, but with rigid sparsity and quantization enforced between partitions, where the NN topology becomes exposed to the circuit topology. Although cheap to implement, this approach can lead to very deep NNs, and so to tackle challenges like vanishing gradients, we also introduce skip connections inside the partitions. The resulting methodology can be seen as training DNNs with a specific FPGA hardware-inspired sparsity pattern that allows them to be mapped to much shallower circuit-level networks, thereby significantly improving latency. We validate our proposed method on a known latency-critical task, jet substructure tagging, and on the classical computer vision task, digit classification using MNIST. Our approach allows for greater function expressivity within the LUTs compared to existing work, leading to up to $4.3\times$ lower latency NNs for the same accuracy.
|
[
"['Marta Andronic' 'George A. Constantinides']"
] |
null | null |
2403.00853
| null | null |
http://arxiv.org/pdf/2403.00853v1
|
2024-02-29T18:03:03Z
|
2024-02-29T18:03:03Z
|
Distributed Momentum Methods Under Biased Gradient Estimations
|
Distributed stochastic gradient methods are gaining prominence in solving large-scale machine learning problems that involve data distributed across multiple nodes. However, obtaining unbiased stochastic gradients, which have been the focus of most theoretical research, is challenging in many distributed machine learning applications. The gradient estimations easily become biased, for example, when gradients are compressed or clipped, when data is shuffled, and in meta-learning and reinforcement learning. In this work, we establish non-asymptotic convergence bounds on distributed momentum methods under biased gradient estimation on both general non-convex and $\mu$-PL non-convex problems. Our analysis covers general distributed optimization problems, and we work out the implications for special cases where gradient estimates are biased, i.e., in meta-learning and when the gradients are compressed or clipped. Our numerical experiments on training deep neural networks with Top-$K$ sparsification and clipping verify that momentum methods converge faster than traditional biased gradient descent.
|
[
"['Ali Beikmohammadi' 'Sarit Khirirat' 'Sindri Magnússon']"
] |
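The Top-$K$ sparsification setting benchmarked above can be illustrated with a toy server-side momentum update: each worker transmits only its $k$ largest-magnitude gradient entries (a biased compressor), and the server averages them and applies heavy-ball momentum. A minimal sketch — the function names and interface are hypothetical, not the paper's implementation:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zeroing the rest.
    This compressor is biased: E[top_k(g)] != g in general."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def distributed_momentum_step(params, worker_grads, momentum,
                              lr=0.1, beta=0.9, k=2):
    """One server step: average the workers' Top-K sparsified gradients,
    then apply a heavy-ball momentum update."""
    avg = np.mean([top_k(g, k) for g in worker_grads], axis=0)
    momentum = beta * momentum + avg
    params = params - lr * momentum
    return params, momentum
```

On a simple quadratic objective this biased update still converges, matching the qualitative claim of the abstract; the paper's contribution is making that convergence rate precise for general non-convex and $\mu$-PL problems.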
null | null |
2403.00854
| null | null |
http://arxiv.org/pdf/2403.00854v1
|
2024-02-29T18:30:52Z
|
2024-02-29T18:30:52Z
|
Speaker-Independent Dysarthria Severity Classification using
Self-Supervised Transformers and Multi-Task Learning
|
Dysarthria, a condition resulting from impaired control of the speech muscles due to neurological disorders, significantly impacts the communication and quality of life of patients. The condition's complexity, reliance on human scoring and varied presentations make its assessment and management challenging. This study presents a transformer-based framework for automatically assessing dysarthria severity from raw speech data. It offers an objective, repeatable, accessible, standardised and cost-effective alternative to traditional methods that require human expert assessors. We develop a transformer framework, called Speaker-Agnostic Latent Regularisation (SALR), incorporating a multi-task learning objective and contrastive learning for speaker-independent multi-class dysarthria severity classification. The multi-task framework is designed to reduce reliance on speaker-specific characteristics and address the intrinsic intra-class variability of dysarthric speech. Evaluated on the Universal Access Speech dataset using leave-one-speaker-out cross-validation, our model demonstrated superior performance over traditional machine learning approaches, with an accuracy of $70.48\%$ and an F1 score of $59.23\%$. Our SALR model also exceeded the previous benchmark for AI-based classification, which used support vector machines, by $16.58\%$. We open the black box of our model by visualising the latent space, where we can observe how the model substantially reduces speaker-specific cues and amplifies task-specific ones, thereby showing its robustness. In conclusion, SALR establishes a new benchmark in speaker-independent multi-class dysarthria severity classification using generative AI. We discuss the potential implications of our findings for broader clinical applications in automated dysarthria severity assessment.
|
[
"['Lauren Stumpf' 'Balasundaram Kadirvelu' 'Sigourney Waibel'\n 'A. Aldo Faisal']"
] |
null | null |
2403.00858
| null | null |
http://arxiv.org/pdf/2403.00858v4
|
2024-05-13T18:34:30Z
|
2024-02-29T19:55:06Z
|
Direct Alignment of Draft Model for Speculative Decoding with
Chat-Fine-Tuned LLMs
|
Text generation with Large Language Models (LLMs) is known to be memory bound due to the combination of their auto-regressive nature, huge parameter counts, and limited memory bandwidths, often resulting in low token rates. Speculative decoding has been proposed as a solution for LLM inference acceleration. However, since draft models are often unavailable in the modern open-source LLM families, e.g., for Llama 2 7B, training a high-quality draft model is required to enable inference acceleration via speculative decoding. In this paper, we propose a simple draft model training framework for direct alignment to chat-capable target models. With the proposed framework, we train Llama 2 Chat Drafter 115M, a draft model for Llama 2 Chat 7B or larger, with only 1.64% of the original size. Our training framework consists only of pretraining, distillation dataset generation, and finetuning with knowledge distillation, with no additional alignment procedure. For the finetuning step, we use instruction-response pairs generated by the target model for distillation in a plausible data distribution, and propose a new Total Variation Distance++ (TVD++) loss that incorporates variance-reduction techniques inspired by the policy gradient method in reinforcement learning. Our empirical results show that Llama 2 Chat Drafter 115M with speculative decoding achieves up to 2.3 block efficiency and 2.4$\times$ speed-up relative to autoregressive decoding on various tasks with no further task-specific fine-tuning.
|
[
"['Raghavv Goel' 'Mukul Gagrani' 'Wonseok Jeon' 'Junyoung Park' 'Mingu Lee'\n 'Christopher Lott']"
] |
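The distance underlying the TVD++ loss above is easy to state. A plain (non-++) sketch for next-token distributions, without the paper's variance-reduction terms:

```python
import numpy as np

def tvd_loss(p_draft, p_target):
    """Total variation distance between draft and target next-token
    distributions: TVD(p, q) = 0.5 * sum_i |p_i - q_i|, in [0, 1].
    The paper's TVD++ adds variance-reduction terms not shown here."""
    return 0.5 * np.sum(np.abs(p_draft - p_target), axis=-1)
```

Driving this distance toward zero makes the draft model's proposals more likely to be accepted by the target during speculative decoding, which is what raises block efficiency.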
null | null |
2403.00860
| null | null |
http://arxiv.org/pdf/2403.00860v1
|
2024-02-29T20:48:39Z
|
2024-02-29T20:48:39Z
|
Parallel Algorithms for Exact Enumeration of Deep Neural Network
Activation Regions
|
A feedforward neural network using rectified linear units constructs a mapping from inputs to outputs by partitioning its input space into a set of convex regions where points within a region share a single affine transformation. In order to understand how neural networks work, when and why they fail, and how they compare to biological intelligence, we need to understand the organization and formation of these regions. Step one is to design and implement algorithms for exact region enumeration in networks beyond toy examples. In this work, we present parallel algorithms for exact enumeration in deep (and shallow) neural networks. Our work has three main contributions: (1) we present a novel algorithm framework and parallel algorithms for region enumeration; (2) we implement one of our algorithms on a variety of network architectures and experimentally show how the number of regions dictates runtime; and (3) we show, using our algorithm's output, how the dimension of a region's affine transformation impacts further partitioning of the region by deeper layers. To our knowledge, we run our implemented algorithm on networks larger than all of the networks used in the existing region enumeration literature. Further, we experimentally demonstrate the importance of parallelism for region enumeration of any reasonably sized network.
|
[
"['Sabrina Drammis' 'Bowen Zheng' 'Karthik Srinivasan' 'Robert C. Berwick'\n 'Nancy A. Lynch' 'Robert Ajemian']"
] |
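A brute-force flavor of the region-enumeration idea above: each distinct ReLU activation pattern indexes one linear region, so counting patterns over inputs counts (a lower bound on) regions. This sketch samples points rather than enumerating regions exactly, unlike the parallel algorithms the paper presents:

```python
import numpy as np

def activation_patterns(weights, biases, points):
    """Count distinct ReLU activation patterns of a feedforward net
    over the given sample points. Each pattern (one boolean per unit,
    per layer) corresponds to a single affine region of the network.
    Sampling-based illustration only, not exact enumeration."""
    patterns = set()
    for x in points:
        h, sig = x, []
        for W, b in zip(weights, biases):
            pre = W @ h + b
            sig.append(tuple(pre > 0))  # which units fire at this input
            h = np.maximum(pre, 0)      # ReLU
        patterns.add(tuple(sig))
    return len(patterns)
```

For a one-hidden-layer net whose two units test the sign of each coordinate, points drawn from the four quadrants yield four patterns, i.e., four linear regions.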
null | null |
2403.00861
| null | null |
http://arxiv.org/pdf/2403.00861v1
|
2024-02-29T21:03:46Z
|
2024-02-29T21:03:46Z
|
Pivoting Retail Supply Chain with Deep Generative Techniques: Taxonomy,
Survey and Insights
|
Generative AI applications, such as ChatGPT or DALL-E, have shown the world their impressive capabilities in generating human-like text or images. Diving deeper, the scientific backbone of those AI applications is Deep Generative Models (DGMs), which are designed to learn the underlying distribution of the data and generate new data points that are statistically similar to the original dataset. One critical question arises: how can we leverage DGMs in the modern retail supply chain realm? To address this question, this paper provides a comprehensive review of DGMs and discusses their existing and potential use cases in the retail supply chain, by (1) providing a taxonomy and overview of state-of-the-art DGMs and their variants, (2) reviewing existing DGM applications in the retail supply chain from an end-to-end point of view, and (3) discussing insights and potential directions on how DGMs can be further utilized in solving retail supply chain problems.
|
[
"['Yuan Wang' 'Lokesh Kumar Sambasivan' 'Mingang Fu' 'Prakhar Mehrotra']"
] |
null | null |
2403.00865
| null | null |
http://arxiv.org/pdf/2403.00865v1
|
2024-03-01T02:20:04Z
|
2024-03-01T02:20:04Z
|
Fast and Efficient Local Search for Genetic Programming Based Loss
Function Learning
|
In this paper, we develop upon the topic of loss function learning, an emergent meta-learning paradigm that aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for task and model-agnostic loss function learning via a hybrid search approach. The framework first uses genetic programming to find a set of symbolic loss functions. Second, the set of learned loss functions is subsequently parameterized and optimized via unrolled differentiation. The versatility and performance of the proposed framework are empirically validated on a diverse set of supervised learning tasks. Results show that the learned loss functions bring improved convergence, sample efficiency, and inference performance on tabulated, computer vision, and natural language processing problems, using a variety of task-specific neural network architectures.
|
[
"['Christian Raymond' 'Qi Chen' 'Bing Xue' 'Mengjie Zhang']"
] |
null | null |
2403.00867
| null | null |
http://arxiv.org/pdf/2403.00867v2
|
2024-03-05T13:46:50Z
|
2024-03-01T03:29:54Z
|
Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by
Exploring Refusal Loss Landscapes
|
Large Language Models (LLMs) are becoming a prominent generative AI tool, where the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs to human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts aiming at subverting the embedded safety guardrails. To address this challenge, this paper defines and investigates the Refusal Loss of LLMs and then proposes a method called Gradient Cuff to detect jailbreak attempts. Gradient Cuff exploits the unique properties observed in the refusal loss landscape, including functional values and its smoothness, to design an effective two-step detection strategy. Experimental results on two aligned LLMs (LLaMA-2-7B-Chat and Vicuna-7B-V1.5) and six types of jailbreak attacks (GCG, AutoDAN, PAIR, TAP, Base64, and LRL) show that Gradient Cuff can significantly improve the LLM's rejection capability for malicious jailbreak queries, while maintaining the model's performance for benign user queries by adjusting the detection threshold.
|
[
"['Xiaomeng Hu' 'Pin-Yu Chen' 'Tsung-Yi Ho']"
] |
null | null |
2403.00869
| null | null |
http://arxiv.org/pdf/2403.00869v1
|
2024-03-01T04:42:47Z
|
2024-03-01T04:42:47Z
|
Enhancing Multivariate Time Series Forecasting with Mutual
Information-driven Cross-Variable and Temporal Modeling
|
Recent advancements have underscored the impact of deep learning techniques on multivariate time series forecasting (MTSF). Generally, these techniques are bifurcated into two categories: Channel-independence and Channel-mixing approaches. Although Channel-independence methods typically yield better results, Channel-mixing could theoretically offer improvements by leveraging inter-variable correlations. Nonetheless, we argue that the integration of uncorrelated information in channel-mixing methods could curtail the potential enhancement in MTSF model performance. To substantiate this claim, we introduce the Cross-variable Decorrelation Aware feature Modeling (CDAM) for Channel-mixing approaches, aiming to refine Channel-mixing by minimizing redundant information between channels while enhancing relevant mutual information. Furthermore, we introduce the Temporal correlation Aware Modeling (TAM) to exploit temporal correlations, a step beyond conventional single-step forecasting methods. This strategy maximizes the mutual information between adjacent sub-sequences of both the forecasted and target series. Combining CDAM and TAM, our novel framework significantly surpasses existing models, including those previously considered state-of-the-art, in comprehensive tests.
|
[
"['Shiyi Qi' 'Liangjian Wen' 'Yiduo Li' 'Yuanhang Yang' 'Zhe Li'\n 'Zhongwen Rao' 'Lujia Pan' 'Zenglin Xu']"
] |
null | null |
2403.00871
| null | null |
http://arxiv.org/pdf/2403.00871v1
|
2024-03-01T06:15:07Z
|
2024-03-01T06:15:07Z
|
Teach LLMs to Phish: Stealing Private Information from Language Models
|
When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new practical data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data, with attack success rates upwards of 10% and, at times, as high as 50%. Our attack assumes only that an adversary can insert as few as tens of benign-appearing sentences into the training dataset using only vague priors on the structure of the user data.
|
[
"['Ashwinee Panda' 'Christopher A. Choquette-Choo' 'Zhengming Zhang'\n 'Yaoqing Yang' 'Prateek Mittal']"
] |
null | null |
2403.00873
| null | null |
http://arxiv.org/pdf/2403.00873v2
|
2024-07-05T09:36:26Z
|
2024-03-01T07:41:05Z
|
Blockchain-empowered Federated Learning: Benefits, Challenges, and
Solutions
|
Federated learning (FL) is a distributed machine learning approach that protects user data privacy by training models locally on clients and aggregating them on a parameter server. While effective at preserving privacy, FL systems face limitations such as single points of failure, lack of incentives, and inadequate security. To address these challenges, blockchain technology is integrated into FL systems to provide stronger security, fairness, and scalability. However, blockchain-empowered FL (BC-FL) systems introduce additional demands on network, computing, and storage resources. This survey provides a comprehensive review of recent research on BC-FL systems, analyzing the benefits and challenges associated with blockchain integration. We explore why blockchain is applicable to FL, how it can be implemented, and the challenges and existing solutions for its integration. Additionally, we offer insights on future research directions for the BC-FL system.
|
[
"['Zeju Cai' 'Jianguo Chen' 'Yuting Fan' 'Zibin Zheng' 'Keqin Li']"
] |
null | null |
2403.00875
| null | null |
http://arxiv.org/pdf/2403.00875v1
|
2024-03-01T07:58:29Z
|
2024-03-01T07:58:29Z
|
Enhancing Protein Predictive Models via Proteins Data Augmentation: A
Benchmark and New Directions
|
Augmentation is an effective alternative to utilize the small amount of labeled protein data. However, most of the existing work focuses on designing new architectures or pre-training tasks, and relatively little work has studied data augmentation for proteins. This paper extends data augmentation techniques previously used for images and texts to proteins and then benchmarks these techniques on a variety of protein-related tasks, providing the first comprehensive evaluation of protein augmentation. Furthermore, we propose two novel semantic-level protein augmentation methods, namely Integrated Gradients Substitution and Back Translation Substitution, which enable protein semantic-aware augmentation through saliency detection and biological knowledge. Finally, we integrate extended and proposed augmentations into an augmentation pool and propose a simple but effective framework, namely Automated Protein Augmentation (APA), which can adaptively select the most suitable augmentation combinations for different tasks. Extensive experiments have shown that APA enhances the performance of five protein-related tasks by an average of 10.55% across three architectures compared to vanilla implementations without augmentation, highlighting its potential to make a great impact on the field.
|
[
"['Rui Sun' 'Lirong Wu' 'Haitao Lin' 'Yufei Huang' 'Stan Z. Li']"
] |
null | null |
2403.00877
| null | null |
http://arxiv.org/pdf/2403.00877v3
|
2024-05-02T05:01:33Z
|
2024-03-01T08:26:44Z
|
Disaggregated Multi-Tower: Topology-aware Modeling Technique for
Efficient Large-Scale Recommendation
|
We study a mismatch between the deep learning recommendation models' flat architecture, common distributed training paradigm and hierarchical data center topology. To address the associated inefficiencies, we propose Disaggregated Multi-Tower (DMT), a modeling technique that consists of (1) Semantic-preserving Tower Transform (SPTT), a novel training paradigm that decomposes the monolithic global embedding lookup process into disjoint towers to exploit data center locality; (2) Tower Module (TM), a synergistic dense component attached to each tower to reduce model complexity and communication volume through hierarchical feature interaction; and (3) Tower Partitioner (TP), a feature partitioner to systematically create towers with meaningful feature interactions and load balanced assignments to preserve model quality and training throughput via learned embeddings. We show that DMT can achieve up to 1.9x speedup compared to the state-of-the-art baselines without losing accuracy across multiple generations of hardware at large data center scales.
|
[
"['Liang Luo' 'Buyun Zhang' 'Michael Tsang' 'Yinbin Ma' 'Ching-Hsiang Chu'\n 'Yuxin Chen' 'Shen Li' 'Yuchen Hao' 'Yanli Zhao' 'Guna Lakshminarayanan'\n 'Ellie Dingqiao Wen' 'Jongsoo Park' 'Dheevatsa Mudigere' 'Maxim Naumov']"
] |
null | null |
2403.00881
| null | null |
http://arxiv.org/pdf/2403.00881v1
|
2024-03-01T09:14:10Z
|
2024-03-01T09:14:10Z
|
FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked
RDMA Transmission
|
Communication overhead is a significant bottleneck in federated learning (FL), which has been exacerbated by the increasing size of AI models. In this paper, we propose FedRDMA, a communication-efficient cross-silo FL system that integrates RDMA into the FL communication protocol. To overcome the limitations of RDMA in wide-area networks (WANs), FedRDMA divides the updated model into chunks and designs a series of optimization techniques to improve the efficiency and robustness of RDMA-based communication. We implement FedRDMA atop an industrial federated learning framework and evaluate it in a real-world cross-silo FL scenario. The experimental results show that FedRDMA can achieve up to 3.8$\times$ speedup in communication efficiency compared to traditional TCP/IP-based FL systems.
|
[
"['Zeling Zhang' 'Dongqi Cai' 'Yiran Zhang' 'Mengwei Xu' 'Shangguang Wang'\n 'Ao Zhou']"
] |
null | null |
2403.00886
| null | null |
http://arxiv.org/pdf/2403.00886v1
|
2024-03-01T10:19:17Z
|
2024-03-01T10:19:17Z
|
Evaluating and Correcting Performative Effects of Decision Support
Systems via Causal Domain Shift
|
When predicting a target variable $Y$ from features $X$, the prediction $\hat{Y}$ can be performative: an agent might act on this prediction, affecting the value of $Y$ that we eventually observe. Performative predictions are deliberately prevalent in algorithmic decision support, where a Decision Support System (DSS) provides a prediction for an agent to affect the value of the target variable. When deploying a DSS in high-stakes settings (e.g. healthcare, law, predictive policing, or child welfare screening) it is imperative to carefully assess the performative effects of the DSS. In the case that the DSS serves as an alarm for a predicted negative outcome, naive retraining of the prediction model is bound to result in a model that underestimates the risk, due to the effective workings of the previous model. In this work, we propose to model the deployment of a DSS as causal domain shift and provide novel cross-domain identification results for the conditional expectation $E[Y | X]$, allowing for pre- and post-hoc assessment of the deployment of the DSS, and for retraining of a model that assesses the risk under a baseline policy where the DSS is not deployed. Using a running example, we empirically show that a repeated regression procedure provides a practical framework for estimating these quantities, even when the data is affected by sample selection bias and selective labelling, offering a practical, unified solution for multiple forms of target variable bias.
|
[
"['Philip Boeken' 'Onno Zoeter' 'Joris M. Mooij']"
] |
null | null |
2403.00887
| null | null |
http://arxiv.org/pdf/2403.00887v1
|
2024-03-01T11:28:37Z
|
2024-03-01T11:28:37Z
|
SEGAA: A Unified Approach to Predicting Age, Gender, and Emotion in
Speech
|
The interpretation of human voices holds importance across various applications. This study ventures into predicting age, gender, and emotion from vocal cues, a field with vast applications. Advancements in voice-analysis technology span domains, from improving customer interactions to enhancing healthcare and retail experiences. Discerning emotions aids mental health, while age and gender detection are vital in various contexts. This paper compares single-output, multi-output, and sequential deep learning models for these predictions. Sourcing suitable data posed challenges, resulting in the amalgamation of the CREMA-D and EMO-DB datasets. Prior work showed promise in individual predictions, but limited research considered all three variables simultaneously. This paper identifies flaws in the individual-model approach and advocates for our novel multi-output learning architecture, the Speech-based Emotion, Gender and Age Analysis (SEGAA) model. The experiments suggest that multi-output models perform comparably to individual models, efficiently capturing the intricate relationships between variables and speech inputs, all while achieving improved runtime.
|
[
"['Aron R' 'Indra Sigicharla' 'Chirag Periwal' 'Mohanaprasad K'\n 'Nithya Darisini P S' 'Sourabh Tiwari' 'Shivani Arora']"
] |
null | null |
2403.00888
| null | null |
http://arxiv.org/pdf/2403.00888v1
|
2024-03-01T11:54:14Z
|
2024-03-01T11:54:14Z
|
Margin Discrepancy-based Adversarial Training for Multi-Domain Text
Classification
|
Multi-domain text classification (MDTC) endeavors to harness available resources from correlated domains to enhance the classification accuracy of the target domain. Presently, most MDTC approaches that embrace adversarial training and the shared-private paradigm exhibit cutting-edge performance. Unfortunately, these methods face a non-negligible challenge: the absence of theoretical guarantees in the design of MDTC algorithms. The dearth of theoretical underpinning poses a substantial impediment to the advancement of MDTC algorithms. To tackle this problem, we first provide a theoretical analysis of MDTC by decomposing the MDTC task into multiple domain adaptation tasks. We incorporate the margin discrepancy as the measure of domain divergence and establish a new generalization bound based on Rademacher complexity. Subsequently, we propose a margin discrepancy-based adversarial training (MDAT) approach for MDTC, in accordance with our theoretical analysis. To validate the efficacy of the proposed MDAT method, we conduct empirical studies on two MDTC benchmarks. The experimental results demonstrate that our MDAT approach surpasses state-of-the-art baselines on both datasets.
|
[
"['Yuan Wu']"
] |
null | null |
2403.00889
| null | null |
http://arxiv.org/pdf/2403.00889v1
|
2024-03-01T11:55:37Z
|
2024-03-01T11:55:37Z
|
Time-bound Contextual Bio-ID Generation for Minimalist Wearables
|
As wearable devices become increasingly miniaturized and powerful, a new opportunity arises for instant and dynamic device-to-device collaboration and human-to-device interaction. However, this progress presents a unique challenge: these minimalist wearables lack inherent mechanisms for real-time authentication, posing significant risks to data privacy and overall security. To address this, we introduce Proteus that realizes an innovative concept of time-bound contextual bio-IDs, which are generated from on-device sensor data and embedded into a common latent space. These bio-IDs act as a time-bound unique user identifier that can be used to identify the wearer in a certain context. Proteus enables dynamic and contextual device collaboration as well as robust human-to-device interaction. Our evaluations demonstrate the effectiveness of our method, particularly in the context of minimalist wearables.
|
[
"['Adiba Orzikulova' 'Diana A. Vasile' 'Fahim Kawsar' 'Chulhong Min']"
] |
null | null |
2403.00891
| null | null |
http://arxiv.org/pdf/2403.00891v1
|
2024-03-01T13:04:12Z
|
2024-03-01T13:04:12Z
|
A Regularization-based Transfer Learning Method for Information
Extraction via Instructed Graph Decoder
|
Information extraction (IE) aims to extract complex structured information from text. Numerous datasets have been constructed for various IE tasks, leading to time-consuming and labor-intensive data annotations. Nevertheless, most prevailing methods focus on training task-specific models, while the common knowledge among different IE tasks is not explicitly modeled. Moreover, the same phrase may have inconsistent labels in different tasks, which poses a big challenge for knowledge transfer using a unified model. In this study, we propose a regularization-based transfer learning method for IE (TIE) via an instructed graph decoder. Specifically, we first construct an instruction pool for datasets from all well-known IE tasks, and then present an instructed graph decoder, which decodes various complex structures into a graph uniformly based on corresponding instructions. In this way, the common knowledge shared with existing datasets can be learned and transferred to a new dataset with new labels. Furthermore, to alleviate the label inconsistency problem among various IE tasks, we introduce a task-specific regularization strategy, which does not update the gradients of two tasks that point in 'opposite directions'. We conduct extensive experiments on 12 datasets spanning four IE tasks, and the results demonstrate the great advantages of our proposed method.
|
[
"['Kedi Chen' 'Jie Zhou' 'Qin Chen' 'Shunyu Liu' 'Liang He']"
] |
null | null |
2403.00892
| null | null |
http://arxiv.org/pdf/2403.00892v2
|
2024-03-12T09:36:27Z
|
2024-03-01T13:47:39Z
|
PowerFlowMultiNet: Multigraph Neural Networks for Unbalanced Three-Phase
Distribution Systems
|
Efficiently solving unbalanced three-phase power flow in distribution grids is pivotal for grid analysis and simulation. There is a pressing need for scalable algorithms capable of handling large-scale unbalanced power grids that can provide accurate and fast solutions. To address this, deep learning techniques, especially Graph Neural Networks (GNNs), have emerged. However, existing literature primarily focuses on balanced networks, leaving a critical gap in supporting unbalanced three-phase power grids. This letter introduces PowerFlowMultiNet, a novel multigraph GNN framework explicitly designed for unbalanced three-phase power grids. The proposed approach models each phase separately in a multigraph representation, effectively capturing the inherent asymmetry in unbalanced grids. A graph embedding mechanism utilizing message passing is introduced to capture spatial dependencies within the power system network. PowerFlowMultiNet outperforms traditional methods and other deep learning approaches in terms of accuracy and computational speed. Rigorous testing reveals significantly lower error rates and a notable hundredfold increase in computational speed for large power networks compared to model-based methods.
|
[
"['Salah Ghamizi' 'Jun Cao' 'Aoxiang Ma' 'Pedro Rodriguez']"
] |
null | null |
2403.00895
| null | null |
http://arxiv.org/pdf/2403.00895v3
|
2024-03-15T00:40:50Z
|
2024-03-01T15:32:44Z
|
End-to-End Graph-Sequential Representation Learning for Accurate
Recommendations
|
Recent recommender system advancements have focused on developing sequence-based and graph-based approaches. Both approaches proved useful in modeling intricate relationships within behavioral data, leading to promising outcomes in personalized ranking and next-item recommendation tasks while maintaining good scalability. However, they capture very different signals from data. While the former approach represents users directly through ordered interactions with recent items, the latter aims to capture indirect dependencies across the interactions graph. This paper presents a novel multi-representational learning framework exploiting these two paradigms' synergies. Our empirical evaluation on several datasets demonstrates that mutual training of sequential and graph components with the proposed framework significantly improves recommendation performance.
|
[
"['Vladimir Baikalov' 'Evgeny Frolov']"
] |
null | null |
2403.00897
| null | null |
http://arxiv.org/pdf/2403.00897v1
|
2024-03-01T16:27:33Z
|
2024-03-01T16:27:33Z
|
VisRec: A Semi-Supervised Approach to Radio Interferometric Data
Reconstruction
|
Radio telescopes produce visibility data about celestial objects, but these data are sparse and noisy. As a result, images created on raw visibility data are of low quality. Recent studies have used deep learning models to reconstruct visibility data to get cleaner images. However, these methods rely on a substantial amount of labeled training data, which requires significant labeling effort from radio astronomers. Addressing this challenge, we propose VisRec, a model-agnostic semi-supervised learning approach to the reconstruction of visibility data. Specifically, VisRec consists of both a supervised learning module and an unsupervised learning module. In the supervised learning module, we introduce a set of data augmentation functions to produce diverse training examples. In comparison, the unsupervised learning module in VisRec augments unlabeled data and uses reconstructions from non-augmented visibility data as pseudo-labels for training. This hybrid approach allows VisRec to effectively leverage both labeled and unlabeled data. This way, VisRec performs well even when labeled data is scarce. Our evaluation results show that VisRec outperforms all baseline methods in reconstruction quality, robustness against common observation perturbation, and generalizability to different telescope configurations.
|
[
"['Ruoqi Wang' 'Haitao Wang' 'Qiong Luo' 'Feng Wang' 'Hejun Wu']"
] |
null | null |
2403.00898
| null | null |
http://arxiv.org/abs/2403.00898v1
|
2024-03-01T17:29:34Z
|
2024-03-01T17:29:34Z
|
The Algorithm Configuration Problem
|
The field of algorithmic optimization has significantly advanced with the development of methods for the automatic configuration of algorithmic parameters. This article delves into the Algorithm Configuration Problem, focused on optimizing parametrized algorithms for solving specific instances of decision/optimization problems. We present a comprehensive framework that not only formalizes the Algorithm Configuration Problem, but also outlines different approaches for its resolution, leveraging machine learning models and heuristic strategies. The article categorizes existing methodologies into per-instance and per-problem approaches, distinguishing between offline and online strategies for model construction and deployment. By synthesizing these approaches, we aim to provide a clear pathway for both understanding and addressing the complexities inherent in algorithm configuration.
|
[
"['Gabriele Iommazzo' \"Claudia D'Ambrosio\" 'Antonio Frangioni'\n 'Leo Liberti']"
] |
null | null |
2403.00929
| null | null |
http://arxiv.org/pdf/2403.00929v2
|
2024-03-10T08:55:18Z
|
2024-03-01T19:19:56Z
|
PRIME: Scaffolding Manipulation Tasks with Behavior Primitives for
Data-Efficient Imitation Learning
|
Imitation learning has shown great potential for enabling robots to acquire complex manipulation behaviors. However, these algorithms suffer from high sample complexity in long-horizon tasks, where compounding errors accumulate over the task horizons. We present PRIME (PRimitive-based IMitation with data Efficiency), a behavior primitive-based framework designed for improving the data efficiency of imitation learning. PRIME scaffolds robot tasks by decomposing task demonstrations into primitive sequences, followed by learning a high-level control policy to sequence primitives through imitation learning. Our experiments demonstrate that PRIME achieves a significant performance improvement in multi-stage manipulation tasks, with 10-34% higher success rates in simulation over state-of-the-art baselines and 20-48% on physical hardware.
|
[
"['Tian Gao' 'Soroush Nasiriany' 'Huihan Liu' 'Quantao Yang' 'Yuke Zhu']"
] |
null | null |
2403.00930
| null | null |
http://arxiv.org/pdf/2403.00930v1
|
2024-03-01T19:21:10Z
|
2024-03-01T19:21:10Z
|
Scale-free Adversarial Reinforcement Learning
|
This paper initiates the study of scale-free learning in Markov Decision Processes (MDPs), where the scale of rewards/losses is unknown to the learner. We design a generic algorithmic framework, \underline{S}cale \underline{C}lipping \underline{B}ound (\texttt{SCB}), and instantiate this framework in both the adversarial Multi-armed Bandit (MAB) setting and the adversarial MDP setting. Through this framework, we achieve the first minimax optimal expected regret bound and the first high-probability regret bound in scale-free adversarial MABs, resolving an open problem raised in \cite{hadiji2023adaptation}. On adversarial MDPs, our framework also gives birth to the first scale-free RL algorithm with a $\tilde{\mathcal{O}}(\sqrt{T})$ high-probability regret guarantee.
|
[
"['Mingyu Chen' 'Xuezhou Zhang']"
] |
null | null |
2403.00932
| null | null |
http://arxiv.org/pdf/2403.00932v2
|
2024-06-05T03:07:44Z
|
2024-03-01T19:22:24Z
|
Differentially Private Knowledge Distillation via Synthetic Text
Generation
|
Large Language Models (LLMs) are achieving state-of-the-art performance in many different downstream tasks. However, the increasing urgency of data privacy puts pressure on practitioners to train LLMs with Differential Privacy (DP) on private data. Concurrently, the exponential growth in parameter size of LLMs necessitates model compression before deployment of LLMs on resource-constrained devices or latency-sensitive applications. Differential privacy and model compression generally must trade off utility loss to achieve their objectives. Moreover, simultaneously applying both schemes can compound the utility degradation. To this end, we propose DistilDP: a novel differentially private knowledge distillation algorithm that exploits synthetic data generated by a differentially private teacher LLM. The knowledge of a teacher LLM is transferred onto the student in two ways: one way from the synthetic data itself -- the hard labels, and the other way by the output distribution of the teacher evaluated on the synthetic data -- the soft labels. Furthermore, if the teacher and student share a similar architectural structure, we can further distill knowledge by aligning the hidden representations between both. Our experimental results demonstrate that DistilDP can substantially improve the utility over existing baselines, at least $9.0$ PPL on the Big Patent dataset, with strong privacy parameters, $\epsilon=2$. These promising results progress privacy-preserving compression of autoregressive LLMs. Our code can be accessed here: https://github.com/james-flemings/dp_compress.
|
[
"['James Flemings' 'Murali Annavaram']"
] |
null | null |
2403.00935
| null | null |
http://arxiv.org/pdf/2403.00935v1
|
2024-03-01T19:27:53Z
|
2024-03-01T19:27:53Z
|
Transfer Learning for Security: Challenges and Future Directions
|
Many machine learning and data mining algorithms rely on the assumption that the training and testing data share the same feature space and distribution. However, this assumption may not always hold. For instance, there are situations where we need to classify data in one domain, but we only have sufficient training data available from a different domain. The latter data may follow a distinct distribution. In such cases, successfully transferring knowledge across domains can significantly improve learning performance and reduce the need for extensive data labeling efforts. Transfer learning (TL) has thus emerged as a promising framework to tackle this challenge, particularly in security-related tasks. This paper aims to review the current advancements in utilizing TL techniques for security. The paper includes a discussion of the existing research gaps in applying TL in the security domain, as well as exploring potential future research directions and issues that arise in the context of TL-assisted security solutions.
|
[
"['Adrian Shuai Li' 'Arun Iyengar' 'Ashish Kundu' 'Elisa Bertino']"
] |
null | null |
2403.00942
| null | null |
http://arxiv.org/pdf/2403.00942v2
|
2024-07-11T13:51:56Z
|
2024-03-01T19:39:19Z
|
Resilience of Entropy Model in Distributed Neural Networks
|
Distributed deep neural networks (DNNs) have emerged as a key technique to reduce communication overhead without sacrificing performance in edge computing systems. Recently, entropy coding has been introduced to further reduce the communication overhead. The key idea is to train the distributed DNN jointly with an entropy model, which is used as side information during inference time to adaptively encode latent representations into bit streams with variable length. To the best of our knowledge, the resilience of entropy models is yet to be investigated. As such, in this paper we formulate and investigate the resilience of entropy models to intentional interference (e.g., adversarial attacks) and unintentional interference (e.g., weather changes and motion blur). Through an extensive experimental campaign with 3 different DNN architectures, 2 entropy models and 4 rate-distortion trade-off factors, we demonstrate that the entropy attacks can increase the communication overhead by up to 95%. By separating compression features in frequency and spatial domain, we propose a new defense mechanism that can reduce the transmission overhead of the attacked input by about 9% compared to unperturbed data, with only about 2% accuracy loss. Importantly, the proposed defense mechanism is a standalone approach which can be applied in conjunction with approaches such as adversarial training to further improve robustness. Code will be shared for reproducibility.
|
[
"['Milin Zhang' 'Mohammad Abdi' 'Shahriar Rifat' 'Francesco Restuccia']"
] |
null | null |
2403.00946
| null | null |
http://arxiv.org/pdf/2403.00946v1
|
2024-03-01T19:50:22Z
|
2024-03-01T19:50:22Z
|
Fine-tuning with Very Large Dropout
|
It is impossible today to pretend that the practice of machine learning is compatible with the idea that training and testing data follow the same distribution. Several authors have recently used ensemble techniques to show how scenarios involving multiple data distributions are best served by representations that are both richer than those obtained by regularizing for the best in-distribution performance, and richer than those obtained under the influence of the implicit sparsity bias of common stochastic gradient procedures. This contribution investigates the use of very high dropout rates instead of ensembles to obtain such rich representations. Although training a deep network from scratch using such dropout rates is virtually impossible, fine-tuning a large pre-trained model under such conditions is not only possible but also achieves out-of-distribution performances that exceed those of both ensembles and weight averaging methods such as model soups. This result has practical significance because the importance of the fine-tuning scenario has considerably grown in recent years. This result also provides interesting insights on the nature of rich representations and on the intrinsically linear nature of fine-tuning a large network using a comparatively small dataset.
|
[
"['Jianyu Zhang' 'Léon Bottou']"
] |
null | null |
2403.00952
| null | null |
http://arxiv.org/pdf/2403.00952v1
|
2024-03-01T20:03:44Z
|
2024-03-01T20:03:44Z
|
MediSwift: Efficient Sparse Pre-trained Biomedical Language Models
|
Large language models (LLMs) are typically trained on general source data for various domains, but a recent surge in domain-specific LLMs has shown their potential to outperform general-purpose models in domain-specific tasks (e.g., biomedicine). Although domain-specific pre-training enhances efficiency and leads to smaller models, the computational costs of training these LLMs remain high, posing budgeting challenges. We introduce MediSwift, a suite of biomedical LMs that leverage sparse pre-training on domain-specific biomedical text data. By inducing up to 75% weight sparsity during the pre-training phase, MediSwift achieves a 2-2.5x reduction in training FLOPs. Notably, all sparse pre-training was performed on the Cerebras CS-2 system, which is specifically designed to realize the acceleration benefits from unstructured weight sparsity, thereby significantly enhancing the efficiency of the MediSwift models. Through subsequent dense fine-tuning and strategic soft prompting, MediSwift models outperform existing LLMs up to 7B parameters on biomedical tasks, setting new efficiency-accuracy benchmarks on tasks such as PubMedQA. Our results show that sparse pre-training, along with dense fine-tuning and soft prompting, offers an effective method for creating high-performing, computationally efficient models in specialized domains.
|
[
"['Vithursan Thangarasa' 'Mahmoud Salem' 'Shreyas Saxena' 'Kevin Leong'\n 'Joel Hestness' 'Sean Lie']"
] |
null | null |
2403.00961
| null | null |
http://arxiv.org/pdf/2403.00961v2
|
2024-06-16T16:47:56Z
|
2024-03-01T20:21:42Z
|
Data Science Education in Undergraduate Physics: Lessons Learned from a
Community of Practice
|
It is becoming increasingly important that physics educators equip their students with the skills to work with data effectively. However, many educators may lack the necessary training and expertise in data science to teach these skills. To address this gap, we created the Data Science Education Community of Practice (DSECOP), bringing together graduate students and physics educators from different institutions and backgrounds to share best practices and lessons learned from integrating data science into undergraduate physics education. In this article we present insights and experiences from this community of practice, highlighting key strategies and challenges in incorporating data science into the introductory physics curriculum. Our goal is to provide guidance and inspiration to educators who seek to integrate data science into their teaching, helping to prepare the next generation of physicists for a data-driven world.
|
[
"['Karan Shah' 'Julie Butler' 'Alexis Knaub' 'Anıl Zenginoğlu'\n 'William Ratcliff' 'Mohammad Soltanieh-ha']"
] |
null | null |
2403.00963
| null | null |
http://arxiv.org/pdf/2403.00963v1
|
2024-03-01T20:26:33Z
|
2024-03-01T20:26:33Z
|
Tree-Regularized Tabular Embeddings
|
Tabular neural network (NN) has attracted remarkable attention and its recent advances have gradually narrowed the performance gap with respect to tree-based models on many public datasets. While the mainstream focuses on calibrating NN to fit tabular data, we emphasize the importance of homogeneous embeddings and alternately concentrate on regularizing tabular inputs through supervised pretraining. Specifically, we extend a recent work (DeepTLF) and utilize the structure of pretrained tree ensembles to transform raw variables into a single vector (T2V), or an array of tokens (T2T). Without loss of space efficiency, these binarized embeddings can be consumed by canonical tabular NN with fully-connected or attention-based building blocks. Through quantitative experiments on 88 OpenML datasets with binary classification task, we validated that the proposed tree-regularized representation not only tapers the difference with respect to tree-based models, but also achieves on-par and better performance when compared with advanced NN models. Most importantly, it possesses better robustness and can be easily scaled and generalized as standalone encoder for tabular modality. Codes: https://github.com/milanlx/tree-regularized-embedding.
|
[
"['Xuan Li' 'Yun Wang' 'Bo Li']"
] |
null | null |
2403.00964
| null | null |
http://arxiv.org/pdf/2403.00964v1
|
2024-03-01T20:31:10Z
|
2024-03-01T20:31:10Z
|
MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM
Hallucination Detection
|
In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges, such as generating fluent yet inaccurate outputs and reliance on fluency-centric metrics. This often leads to neural networks exhibiting "hallucinations". The SHROOM challenge focuses on automatically identifying these hallucinations in the generated text. To tackle these issues, we introduce two key components: a data augmentation pipeline incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.
|
[
"['Federico Borra' 'Claudio Savelli' 'Giacomo Rosso' 'Alkis Koudounas'\n 'Flavio Giobergia']"
] |
null | null |
2403.00965
| null | null |
http://arxiv.org/pdf/2403.00965v1
|
2024-03-01T20:32:17Z
|
2024-03-01T20:32:17Z
|
Binary Gaussian Copula Synthesis: A Novel Data Augmentation Technique to
Advance ML-based Clinical Decision Support Systems for Early Prediction of
Dialysis Among CKD Patients
|
The Center for Disease Control estimates that over 37 million US adults suffer from chronic kidney disease (CKD), yet 9 out of 10 of these individuals are unaware of their condition due to the absence of symptoms in the early stages. It has a significant impact on patients' quality of life, particularly when it progresses to the need for dialysis. Early prediction of dialysis is crucial as it can significantly improve patient outcomes and assist healthcare providers in making timely and informed decisions. However, developing an effective machine learning (ML)-based Clinical Decision Support System (CDSS) for early dialysis prediction poses a key challenge due to the imbalanced nature of data. To address this challenge, this study evaluates various data augmentation techniques to understand their effectiveness on real-world datasets. We propose a new approach named Binary Gaussian Copula Synthesis (BGCS). BGCS is tailored for binary medical datasets and excels in generating synthetic minority data that mirrors the distribution of the original data. BGCS enhances early dialysis prediction by outperforming traditional methods in detecting dialysis patients. For the best ML model, Random Forest, BGCS achieved a 72% improvement, surpassing the state-of-the-art augmentation approaches. Also, we present an ML-based CDSS, designed to aid clinicians in making informed decisions. The CDSS, which utilizes decision tree models, is developed to improve patient outcomes, identify critical variables, and thereby enable clinicians to make proactive decisions, and strategize treatment plans effectively for CKD patients who are more likely to require dialysis in the near future. Through comprehensive feature analysis and meticulous data preparation, we ensure that the CDSS's dialysis predictions are not only accurate but also actionable, providing a valuable tool in the management and treatment of CKD.
|
[
"['Hamed Khosravi' 'Srinjoy Das' 'Abdullah Al-Mamun' 'Imtiaz Ahmed']"
] |
null | null |
2403.00974
| null | null |
http://arxiv.org/pdf/2403.00974v1
|
2024-03-01T20:51:10Z
|
2024-03-01T20:51:10Z
|
Motif distribution and function of sparse deep neural networks
|
We characterize the connectivity structure of feed-forward, deep neural networks (DNNs) using network motif theory. To address whether a particular motif distribution is characteristic of the training task, or function of the DNN, we compare the connectivity structure of 350 DNNs trained to simulate a bio-mechanical flight control system with different randomly initialized parameters. We develop and implement algorithms for counting second- and third-order motifs and calculate their significance using their Z-score. The DNNs are trained to solve the inverse problem of the flight dynamics model in Bustamante, et al. (2022) (i.e., predict the controls necessary for controlled flight from the initial and final state-space inputs) and are sparsified through an iterative pruning and retraining algorithm Zahn, et al. (2022). We show that, despite random initialization of network parameters, enforced sparsity causes DNNs to converge to similar connectivity patterns as characterized by their motif distributions. The results suggest how neural network function can be encoded in motif distributions, suggesting a variety of experiments for informing function and control.
|
[
"['Olivia T. Zahn' 'Thomas L. Daniel' 'J. Nathan Kutz']"
] |
null | null |
2403.00975
| null | null |
http://arxiv.org/pdf/2403.00975v1
|
2024-03-01T20:54:31Z
|
2024-03-01T20:54:31Z
|
Equipment Health Assessment: Time Series Analysis for Wind Turbine
Performance
|
In this study, we leverage SCADA data from diverse wind turbines to predict power output, employing advanced time series methods, specifically Functional Neural Networks (FNN) and Long Short-Term Memory (LSTM) networks. A key innovation lies in the ensemble of FNN and LSTM models, capitalizing on their collective learning. This ensemble approach outperforms individual models, ensuring stable and accurate power output predictions. Additionally, machine learning techniques are applied to detect wind turbine performance deterioration, enabling proactive maintenance strategies and health assessment. Crucially, our analysis reveals the uniqueness of each wind turbine, necessitating tailored models for optimal predictions. These insights underscore the importance of providing automated customization for different turbines to keep human modeling effort low. Importantly, the methodologies developed in this analysis are not limited to wind turbines; they can be extended to predict and optimize performance in various machinery, highlighting the versatility and applicability of our research across diverse industrial contexts.
|
[
"['Jana Backhus' 'Aniruddha Rajendra Rao' 'Chandrasekar Venkatraman'\n 'Abhishek Padmanabhan' 'A. Vinoth Kumar' 'Chetan Gupta']"
] |
null | null |
2403.00986
| null | null |
http://arxiv.org/pdf/2403.00986v2
|
2024-03-07T18:45:09Z
|
2024-03-01T21:16:29Z
|
Merging Text Transformer Models from Different Initializations
|
Recent work on one-shot permutation-based model merging has shown impressive low- or zero-barrier mode connectivity between models from completely different initializations. However, this line of work has not yet extended to the Transformer architecture, despite its dominant popularity in the language domain. Therefore, in this work, we investigate the extent to which separate Transformer minima learn similar features, and propose a model merging technique to investigate the relationship between these minima in the loss landscape. The specifics of the architecture, like its residual connections, multi-headed attention, and discrete, sequential input, require specific interventions in order to compute model permutations that remain within the same functional equivalence class. In merging these models with our method, we consistently find lower loss barriers between minima compared to model averaging for several models trained on a masked-language modeling task or fine-tuned on a language understanding benchmark. Our results show that the minima of these models are less sharp and isolated than previously understood, and provide a basis for future work on merging separately trained Transformer models.
|
[
"['Neha Verma' 'Maha Elbayad']"
] |
null | null |
2403.00991
| null | null |
http://arxiv.org/pdf/2403.00991v1
|
2024-03-01T21:27:03Z
|
2024-03-01T21:27:03Z
|
SELFI: Autonomous Self-Improvement with Reinforcement Learning for
Social Navigation
|
Autonomous self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems. In this paper, we propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies efficiently. SELFI applies online model-free reinforcement learning on top of offline model-based learning to bring out the best parts of both learning paradigms. Specifically, SELFI stabilizes the online learning process by incorporating the same model-based learning objective from offline pre-training into the Q-values learned with online model-free reinforcement learning. We evaluate SELFI in multiple real-world environments and report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study. SELFI enables us to quickly learn useful robotic behaviors with fewer human interventions, such as pre-emptive behavior for pedestrians, collision avoidance for small and transparent objects, and avoiding travel on uneven floor surfaces. We provide supplementary videos to demonstrate the performance of our fine-tuned policy on our project page.
|
[
"['Noriaki Hirose' 'Dhruv Shah' 'Kyle Stachowicz' 'Ajay Sridhar'\n 'Sergey Levine']"
] |
null | null |
2403.00993
| null | null |
http://arxiv.org/pdf/2403.00993v2
|
2024-05-27T22:19:40Z
|
2024-03-01T21:28:19Z
|
On the Role of Information Structure in Reinforcement Learning for
Partially-Observable Sequential Teams and Games
|
In a sequential decision-making problem, the information structure is the description of how events in the system occurring at different points in time affect each other. Classical models of reinforcement learning (e.g., MDPs, POMDPs) assume a simple and highly regular information structure, while more general models like predictive state representations do not explicitly model the information structure. By contrast, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables, requiring a rich and flexible representation of information structure. In this paper, we formalize a novel reinforcement learning model which explicitly represents the information structure. We then use this model to carry out an information-structural analysis of the statistical hardness of general sequential decision-making problems, obtaining a characterization via a graph-theoretic quantity of the DAG representation of the information structure. We prove an upper bound on the sample complexity of learning a general sequential decision-making problem in terms of its information structure by exhibiting an algorithm achieving the upper bound. This recovers known tractability results and gives a novel perspective on reinforcement learning in general sequential decision-making problems, providing a systematic way of identifying new tractable classes of problems.
|
[
"['Awni Altabaa' 'Zhuoran Yang']"
] |
null | null |
2403.00999
| null | null |
http://arxiv.org/pdf/2403.00999v1
|
2024-03-01T21:49:34Z
|
2024-03-01T21:49:34Z
|
Distributional Dataset Distillation with Subtask Decomposition
|
What does a neural network learn when training from a task-specific dataset? Synthesizing this knowledge is the central idea behind Dataset Distillation, which recent work has shown can be used to compress large datasets into a small set of input-label pairs ($\textit{prototypes}$) that capture essential aspects of the original dataset. In this paper, we make the key observation that existing methods distilling into explicit prototypes are very often suboptimal, incurring unexpected storage costs from distilled labels. In response, we propose $\textit{Distributional Dataset Distillation}$ (D3), which encodes the data using minimal sufficient per-class statistics and, paired with a decoder, distills the dataset into a compact distributional representation that is more memory-efficient compared to prototype-based methods. To scale up the process of learning these representations, we propose $\textit{Federated distillation}$, which decomposes the dataset into subsets, distills them in parallel using sub-task experts and then re-aggregates them. We thoroughly evaluate our algorithm on a three-dimensional metric and show that our method achieves state-of-the-art results on TinyImageNet and ImageNet-1K. Specifically, we outperform the prior art by $6.9\%$ on ImageNet-1K under the storage budget of 2 images per class.
|
[
"['Tian Qin' 'Zhiwei Deng' 'David Alvarez-Melis']"
] |
null | null |
2403.01014
| null | null |
http://arxiv.org/pdf/2403.01014v1
|
2024-03-01T22:24:11Z
|
2024-03-01T22:24:11Z
|
A Case for Validation Buffer in Pessimistic Actor-Critic
|
In this paper, we investigate the issue of error accumulation in critic networks updated via pessimistic temporal difference objectives. We show that the critic approximation error can be approximated via a recursive fixed-point model similar to that of the Bellman value. We use such recursive definition to retrieve the conditions under which the pessimistic critic is unbiased. Building on these insights, we propose Validation Pessimism Learning (VPL) algorithm. VPL uses a small validation buffer to adjust the levels of pessimism throughout the agent training, with the pessimism set such that the approximation error of the critic targets is minimized. We investigate the proposed approach on a variety of locomotion and manipulation tasks and report improvements in sample efficiency and performance.
|
[
"['Michal Nauman' 'Mateusz Ostaszewski' 'Marek Cygan']"
] |
null | null |
2403.01022
| null | null |
http://arxiv.org/pdf/2403.01022v1
|
2024-03-01T22:52:30Z
|
2024-03-01T22:52:30Z
|
Autonomous Strike UAVs for Counterterrorism Missions: Challenges and
Preliminary Solutions
|
Unmanned Aircraft Vehicles (UAVs) are becoming a crucial tool in modern warfare, primarily due to their cost-effectiveness, risk reduction, and ability to perform a wider range of activities. The use of autonomous UAVs to conduct strike missions against highly valuable targets is the focus of this research. Due to developments in ledger technology, smart contracts, and machine learning, such activities formerly carried out by professionals or remotely flown UAVs are now feasible. Our study provides the first in-depth analysis of challenges and preliminary solutions for successful implementation of an autonomous UAV mission. Specifically, we identify challenges that have to be overcome and propose possible technical solutions for the challenges identified. We also derive analytical expressions for the success probability of an autonomous UAV mission, and describe a machine learning model to train the UAV.
|
[
"['Meshari Aljohani' 'Ravi Mukkamalai' 'Stephen Olariu']"
] |
null | null |
2403.01023
| null | null |
http://arxiv.org/pdf/2403.01023v1
|
2024-03-01T22:53:57Z
|
2024-03-01T22:53:57Z
|
Federated Learning via Lattice Joint Source-Channel Coding
|
This paper introduces a universal federated learning framework that enables over-the-air computation via digital communications, using a new joint source-channel coding scheme. Without relying on channel state information at devices, this scheme employs lattice codes to both quantize model parameters and exploit interference from the devices. A novel two-layer receiver structure at the server is designed to reliably decode an integer combination of the quantized model parameters as a lattice point for the purpose of aggregation. Numerical experiments validate the effectiveness of the proposed scheme. Even with the challenges posed by channel conditions and device heterogeneity, the proposed scheme markedly surpasses other over-the-air FL strategies.
|
[
"['Seyed Mohammad Azimi-Abarghouyi' 'Lav R. Varshney']"
] |
null | null |
2403.01046
| null | null |
http://arxiv.org/pdf/2403.01046v3
|
2024-06-29T19:21:59Z
|
2024-03-02T00:33:45Z
|
A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex
Lasso Models with Reflection Features
|
We prove that training neural networks on 1-D data is equivalent to solving a convex Lasso problem with a fixed, explicitly defined dictionary matrix of features. The specific dictionary depends on the activation and depth. We consider 2 and 3-layer networks with piecewise linear activations, and rectangular and tree networks with sign activation and arbitrary depth. Interestingly in absolute value and symmetrized ReLU networks, a third layer creates features that represent reflections of training data about themselves. The Lasso representation sheds insight to globally optimal networks and the solution landscape.
|
[
"['Emi Zeger' 'Yifei Wang' 'Aaron Mishkin' 'Tolga Ergen' 'Emmanuel Candès'\n 'Mert Pilanci']"
] |
null | null |
2403.01053
| null | null |
http://arxiv.org/pdf/2403.01053v2
|
2024-03-05T07:36:04Z
|
2024-03-02T00:56:05Z
|
Seeing Unseen: Discover Novel Biomedical Concepts via
Geometry-Constrained Probabilistic Modeling
|
Machine learning holds tremendous promise for transforming the fundamental practice of scientific discovery by virtue of its data-driven nature. With the ever-increasing stream of research data collection, it would be appealing to autonomously explore patterns and insights from observational data for discovering novel classes of phenotypes and concepts. However, in the biomedical domain, there are several challenges inherently presented in the cumulated data which hamper the progress of novel class discovery. The non-i.i.d. data distribution accompanied by the severe imbalance among different groups of classes essentially leads to ambiguous and biased semantic representations. In this work, we present a geometry-constrained probabilistic modeling treatment to resolve the identified issues. First, we propose to parameterize the approximated posterior of instance embedding as a marginal von Mises-Fisher distribution to account for the interference of distributional latent bias. Then, we incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space, which in turn minimizes the uncontrollable risk for unknown class learning and structuring. Furthermore, a spectral graph-theoretic method is devised to estimate the number of potential novel classes. It inherits two intriguing merits compared to existent approaches, namely high computational efficiency and flexibility for taxonomy-adaptive estimation. Extensive experiments across various biomedical scenarios substantiate the effectiveness and general applicability of our method.
|
[
"['Jianan Fan' 'Dongnan Liu' 'Hang Chang' 'Heng Huang' 'Mei Chen'\n 'Weidong Cai']"
] |
null | null |
2403.01058
| null | null |
http://arxiv.org/pdf/2403.01058v1
|
2024-03-02T01:20:59Z
|
2024-03-02T01:20:59Z
|
Neural Field Classifiers via Target Encoding and Classification Loss
|
Neural field methods have seen great progress in various long-standing tasks in computer vision and computer graphics, including novel view synthesis and geometry reconstruction. As existing neural field methods try to predict some coordinate-based continuous target values, such as RGB for Neural Radiance Field (NeRF), all of these methods are regression models and are optimized by some regression loss. However, are regression models really better than classification models for neural field methods? In this work, we try to visit this very fundamental but overlooked question for neural fields from a machine learning perspective. We successfully propose a novel Neural Field Classifier (NFC) framework which formulates existing neural field methods as classification tasks rather than regression tasks. The proposed NFC can easily transform arbitrary Neural Field Regressor (NFR) into its classification variant via employing a novel Target Encoding module and optimizing a classification loss. By encoding a continuous regression target into a high-dimensional discrete encoding, we naturally formulate a multi-label classification task. Extensive experiments demonstrate the impressive effectiveness of NFC at the nearly free extra computational costs. Moreover, NFC also shows robustness to sparse inputs, corrupted images, and dynamic scenes.
|
[
"['Xindi Yang' 'Zeke Xie' 'Xiong Zhou' 'Boyu Liu' 'Buhua Liu' 'Yi Liu'\n 'Haoran Wang' 'Yunfeng Cai' 'Mingming Sun']"
] |
null | null |
2403.01059
| null | null |
http://arxiv.org/pdf/2403.01059v1
|
2024-03-02T01:40:37Z
|
2024-03-02T01:40:37Z
|
Continuous Mean-Zero Disagreement-Regularized Imitation Learning
(CMZ-DRIL)
|
Machine-learning paradigms such as imitation learning and reinforcement learning can generate highly performant agents in a variety of complex environments. However, commonly used methods require large quantities of data and/or a known reward function. This paper presents a method called Continuous Mean-Zero Disagreement-Regularized Imitation Learning (CMZ-DRIL) that employs a novel reward structure to improve the performance of imitation-learning agents that have access to only a handful of expert demonstrations. CMZ-DRIL uses reinforcement learning to minimize uncertainty among an ensemble of agents trained to model the expert demonstrations. This method does not use any environment-specific rewards, but creates a continuous and mean-zero reward function from the action disagreement of the agent ensemble. As demonstrated in a waypoint-navigation environment and in two MuJoCo environments, CMZ-DRIL can generate performant agents that behave more similarly to the expert than primary previous approaches in several key metrics.
|
[
"['Noah Ford' 'Ryan W. Gardner' 'Austin Juhl' 'Nathan Larson']"
] |
null | null |
2403.01071
| null | null |
http://arxiv.org/pdf/2403.01071v1
|
2024-03-02T02:28:20Z
|
2024-03-02T02:28:20Z
|
GraphRCG: Self-conditioned Graph Generation via Bootstrapped
Representations
|
Graph generation generally aims to create new graphs that closely align with a specific graph distribution. Existing works often implicitly capture this distribution through the optimization of generators, potentially overlooking the intricacies of the distribution itself. Furthermore, these approaches generally neglect the insights offered by the learned distribution for graph generation. In contrast, in this work, we propose a novel self-conditioned graph generation framework designed to explicitly model graph distributions and employ these distributions to guide the generation process. We first perform self-conditioned modeling to capture the graph distributions by transforming each graph sample into a low-dimensional representation and optimizing a representation generator to create new representations reflective of the learned distribution. Subsequently, we leverage these bootstrapped representations as self-conditioned guidance for the generation process, thereby facilitating the generation of graphs that more accurately reflect the learned distributions. We conduct extensive experiments on generic and molecular graph datasets across various fields. Our framework demonstrates superior performance over existing state-of-the-art graph generation methods in terms of graph quality and fidelity to training data.
|
[
"['Song Wang' 'Zhen Tan' 'Xinyu Zhao' 'Tianlong Chen' 'Huan Liu'\n 'Jundong Li']"
] |
null | null |
2403.01076
| null | null |
http://arxiv.org/pdf/2403.01076v1
|
2024-03-02T03:03:29Z
|
2024-03-02T03:03:29Z
|
Extracting Usable Predictions from Quantized Networks through
Uncertainty Quantification for OOD Detection
|
OOD detection has become more pertinent with advances in network design and increased task complexity. Identifying which parts of the data a given network is misclassifying has become as valuable as the network's overall performance. We can compress the model with quantization, but it suffers minor performance loss. The loss of performance further necessitates the need to derive the confidence estimate of the network's predictions. In line with this thinking, we introduce an Uncertainty Quantification(UQ) technique to quantify the uncertainty in the predictions from a pre-trained vision model. We subsequently leverage this information to extract valuable predictions while ignoring the non-confident predictions. We observe that our technique saves up to 80% of ignored samples from being misclassified. The code for the same is available here.
|
[
"['Rishi Singhal' 'Srinath Srinivasan']"
] |
null | null |
2403.01078
| null | null |
http://arxiv.org/pdf/2403.01078v1
|
2024-03-02T03:26:09Z
|
2024-03-02T03:26:09Z
|
$\Gamma$-VAE: Curvature regularized variational autoencoders for
uncovering emergent low dimensional geometric structure in high dimensional
data
|
Natural systems with emergent behaviors often organize along low-dimensional subsets of high-dimensional spaces. For example, despite the tens of thousands of genes in the human genome, the principled study of genomics is fruitful because biological processes rely on coordinated organization that results in lower dimensional phenotypes. To uncover this organization, many nonlinear dimensionality reduction techniques have successfully embedded high-dimensional data into low-dimensional spaces by preserving local similarities between data points. However, the nonlinearities in these methods allow for too much curvature to preserve general trends across multiple non-neighboring data clusters, thereby limiting their interpretability and generalizability to out-of-distribution data. Here, we address both of these limitations by regularizing the curvature of manifolds generated by variational autoencoders, a process we coin ``$\Gamma$-VAE''. We demonstrate its utility using two example data sets: bulk RNA-seq from The Cancer Genome Atlas (TCGA) and the Genotype Tissue Expression (GTEx); and single cell RNA-seq from a lineage tracing experiment in hematopoietic stem cell differentiation. We find that the resulting regularized manifolds identify mesoscale structure associated with different cancer cell types, and accurately re-embed tissues from completely unseen, out-of-distribution cancers as if they were originally trained on them. Finally, we show that preserving long-range relationships to differentiated cells separates undifferentiated cells -- which have not yet specialized -- according to their eventual fate. Broadly, we anticipate that regularizing the curvature of generative models will enable more consistent, predictive, and generalizable models in any high-dimensional system with emergent low-dimensional behavior.
|
[
"['Jason Z. Kim' 'Nicolas Perrin-Gilbert' 'Erkan Narmanli' 'Paul Klein'\n 'Christopher R. Myers' 'Itai Cohen' 'Joshua J. Waterfall'\n 'James P. Sethna']"
] |
null | null |
2403.01079
| null | null |
http://arxiv.org/pdf/2403.01079v1
|
2024-03-02T03:29:11Z
|
2024-03-02T03:29:11Z
|
Teaching MLP More Graph Information: A Three-stage Multitask Knowledge
Distillation Framework
|
We study the challenging problem for inference tasks on large-scale graph datasets of Graph Neural Networks: huge time and memory consumption, and try to overcome it by reducing reliance on graph structure. Even though distilling graph knowledge to student MLP is an excellent idea, it faces two major problems of positional information loss and low generalization. To solve the problems, we propose a new three-stage multitask distillation framework. In detail, we use Positional Encoding to capture positional information. Also, we introduce Neural Heat Kernels responsible for graph data processing in GNN and utilize hidden layer outputs matching for better performance of student MLP's hidden layers. To the best of our knowledge, it is the first work to include hidden layer distillation for student MLP on graphs and to combine graph Positional Encoding with MLP. We test its performance and robustness with several settings and draw the conclusion that our work can outperform well with good stability.
|
[
"['Junxian Li' 'Bin Shi' 'Erfei Cui' 'Hua Wei' 'Qinghua Zheng']"
] |
null | null |
2403.01081
| null | null |
http://arxiv.org/pdf/2403.01081v3
|
2024-04-29T18:55:34Z
|
2024-03-02T03:48:37Z
|
LAB: Large-Scale Alignment for ChatBots
|
This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training. Leveraging a taxonomy-guided synthetic data generation process and a multi-phase tuning framework, LAB significantly reduces reliance on expensive human annotations and proprietary models like GPT-4. We demonstrate that LAB-trained models can achieve competitive performance across several benchmarks compared to models trained with traditional human-annotated or GPT-4 generated synthetic data. Thus offering a scalable, cost-effective solution for enhancing LLM capabilities and instruction-following behaviors without the drawbacks of catastrophic forgetting, marking a step forward in the efficient training of LLMs for a wide range of applications.
|
[
"['Shivchander Sudalairaj' 'Abhishek Bhandwaldar' 'Aldo Pareja' 'Kai Xu'\n 'David D. Cox' 'Akash Srivastava']"
] |
null | null |
2403.01091
| null | null |
http://arxiv.org/pdf/2403.01091v1
|
2024-03-02T04:30:09Z
|
2024-03-02T04:30:09Z
|
COOL: A Conjoint Perspective on Spatio-Temporal Graph Neural Network for
Traffic Forecasting
|
This paper investigates traffic forecasting, which attempts to forecast the future state of traffic based on historical situations. This problem has received ever-increasing attention in various scenarios and facilitated the development of numerous downstream applications such as urban planning and transportation management. However, the efficacy of existing methods remains sub-optimal due to their tendency to model temporal and spatial relationships independently, thereby inadequately accounting for complex high-order interactions of both worlds. Moreover, the diversity of transitional patterns in traffic forecasting makes them challenging to capture for existing approaches, warranting a deeper exploration of their diversity. Toward this end, this paper proposes Conjoint Spatio-Temporal graph neural network (abbreviated as COOL), which models heterogeneous graphs from prior and posterior information to conjointly capture high-order spatio-temporal relationships. On the one hand, heterogeneous graphs connecting sequential observation are constructed to extract composite spatio-temporal relationships via prior message passing. On the other hand, we model dynamic relationships using constructed affinity and penalty graphs, which guide posterior message passing to incorporate complementary semantic information into node representations. Moreover, to capture diverse transitional properties to enhance traffic forecasting, we propose a conjoint self-attention decoder that models diverse temporal patterns from both multi-rank and multi-scale views. Experimental results on four popular benchmark datasets demonstrate that our proposed COOL provides state-of-the-art performance compared with the competitive baselines.
|
[
"['Wei Ju' 'Yusheng Zhao' 'Yifang Qin' 'Siyu Yi' 'Jingyang Yuan'\n 'Zhiping Xiao' 'Xiao Luo' 'Xiting Yan' 'Ming Zhang']"
] |
null | null |
2403.01092
| null | null |
http://arxiv.org/pdf/2403.01092v2
|
2024-06-05T00:20:38Z
|
2024-03-02T04:31:28Z
|
Pairwise Alignment Improves Graph Domain Adaptation
|
Graph-based methods, pivotal for label inference over interconnected objects in many real-world applications, often encounter generalization challenges, if the graph used for model training differs significantly from the graph used for testing. This work delves into Graph Domain Adaptation (GDA) to address the unique complexities of distribution shifts over graph data, where interconnected data points experience shifts in features, labels, and in particular, connecting patterns. We propose a novel, theoretically principled method, Pairwise Alignment (Pair-Align) to counter graph structure shift by mitigating conditional structure shift (CSS) and label shift (LS). Pair-Align uses edge weights to recalibrate the influence among neighboring nodes to handle CSS and adjusts the classification loss with label weights to handle LS. Our method demonstrates superior performance in real-world applications, including node classification with region shift in social networks, and the pileup mitigation task in particle colliding experiments. For the first application, we also curate the largest dataset by far for GDA studies. Our method shows strong performance in synthetic and other existing benchmark datasets.
|
[
"['Shikun Liu' 'Deyu Zou' 'Han Zhao' 'Pan Li']"
] |
null | null |
2403.01101
| null | null |
http://arxiv.org/pdf/2403.01101v1
|
2024-03-02T06:01:34Z
|
2024-03-02T06:01:34Z
|
Feature Alignment: Rethinking Efficient Active Learning via Proxy in the
Context of Pre-trained Models
|
Fine-tuning the pre-trained model with active learning holds promise for reducing annotation costs. However, this combination introduces significant computational costs, particularly with the growing scale of pre-trained models. Recent research has proposed proxy-based active learning, which pre-computes features to reduce computational costs. Yet, this approach often incurs a significant loss in active learning performance, which may even outweigh the computational cost savings. In this paper, we argue the performance drop stems not only from pre-computed features' inability to distinguish between categories of labeled samples, resulting in the selection of redundant samples but also from the tendency to compromise valuable pre-trained information when fine-tuning with samples selected through the proxy model. To address this issue, we propose a novel method called aligned selection via proxy to update pre-computed features while selecting a proper training method to inherit valuable pre-training information. Extensive experiments validate that our method significantly improves the total cost of efficient active learning while maintaining computational efficiency.
|
[
"['Ziting Wen' 'Oscar Pizarro' 'Stefan Williams']"
] |