categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.12237 | null | null | http://arxiv.org/pdf/2405.12237v1 | 2024-05-16T15:11:37Z | 2024-05-16T15:11:37Z | EKM: An exact, polynomial-time algorithm for the $K$-medoids problem | The $K$-medoids problem is a challenging combinatorial clustering task, widely used in data analysis applications. While numerous algorithms have been proposed to solve this problem, none of these are able to obtain an exact (globally optimal) solution for the problem in polynomial time. In this paper, we present EKM: a novel algorithm for solving this problem exactly with worst-case $O\left(N^{K+1}\right)$ time complexity. EKM is developed according to recent advances in transformational programming and combinatorial generation, using formal program derivation steps. The derived algorithm is provably correct by construction. We demonstrate the effectiveness of our algorithm by comparing it against various approximate methods on numerous real-world datasets. We show that the wall-clock run time of our algorithm matches the worst-case time complexity analysis on synthetic datasets, clearly outperforming the exponential time complexity of benchmark branch-and-bound based MIP solvers. To our knowledge, this is the first, rigorously-proven polynomial time, practical algorithm for this ubiquitous problem. | [
"['Xi He' 'Max A. Little']"
] |
null | null | 2405.12241 | null | null | http://arxiv.org/pdf/2405.12241v2 | 2024-05-24T13:16:32Z | 2024-05-17T17:03:46Z | Identifying Functionally Important Features with End-to-End Sparse
Dictionary Learning | Identifying the features learned by neural networks is a core challenge in mechanistic interpretability. Sparse autoencoders (SAEs), which learn a sparse, overcomplete dictionary that reconstructs a network's internal activations, have been used to identify these features. However, SAEs may learn more about the structure of the dataset than the computational structure of the network. There is therefore only indirect reason to believe that the directions found in these dictionaries are functionally important to the network. We propose end-to-end (e2e) sparse dictionary learning, a method for training SAEs that ensures the features learned are functionally important by minimizing the KL divergence between the output distributions of the original model and the model with SAE activations inserted. Compared to standard SAEs, e2e SAEs offer a Pareto improvement: They explain more network performance, require fewer total features, and require fewer simultaneously active features per datapoint, all with no cost to interpretability. We explore geometric and qualitative differences between e2e SAE features and standard SAE features. E2e dictionary learning brings us closer to methods that can explain network behavior concisely and accurately. We release our library for training e2e SAEs and reproducing our analysis at https://github.com/ApolloResearch/e2e_sae | [
"['Dan Braun' 'Jordan Taylor' 'Nicholas Goldowsky-Dill' 'Lee Sharkey']"
] |
null | null | 2405.12244 | null | null | http://arxiv.org/pdf/2405.12244v1 | 2024-05-18T07:39:45Z | 2024-05-18T07:39:45Z | Real-Time Go-Around Prediction: A case study of JFK airport | In this paper, we employ a long short-term memory (LSTM) model to predict the real-time go-around probability as an arrival flight is approaching JFK airport and within 10 nm of the landing runway threshold. We further develop methods to examine the causes of go-around occurrences both from a global view and an individual flight perspective. According to our results, in-trail spacing and simultaneous runway operation appear to be the top factors that contribute to overall go-around occurrences. We then integrate these pre-trained models and analyses with real-time data streaming, and finally develop a demo web-based user interface that integrates the different components designed previously into a real-time tool that can eventually be used by flight crews and other line personnel to identify situations in which there is a high risk of a go-around. | [
"['Ke Liu' 'Kaijing Ding' 'Lu Dai' 'Mark Hansen' 'Kennis Chan'\n 'John Schade']"
] |
null | null | 2405.12250 | null | null | http://arxiv.org/pdf/2405.12250v1 | 2024-05-19T22:44:00Z | 2024-05-19T22:44:00Z | Your Transformer is Secretly Linear | This paper reveals a novel linear characteristic exclusive to transformer decoders, including models such as GPT, LLaMA, OPT, BLOOM and others. We analyze embedding transformations between sequential layers, uncovering a near-perfect linear relationship (Procrustes similarity score of 0.99). However, linearity decreases when the residual component is removed, due to a consistently low output norm of the transformer layer. Our experiments show that removing or linearly approximating some of the most linear blocks of transformers does not significantly affect the loss or model performance. Moreover, in our pretraining experiments on smaller models we introduce a cosine-similarity-based regularization, aimed at reducing layer linearity. This regularization improves performance metrics on benchmarks like Tiny Stories and SuperGLUE and also successfully decreases the linearity of the models. This study challenges the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed. | [
"['Anton Razzhigaev' 'Matvey Mikhalchuk' 'Elizaveta Goncharova'\n 'Nikolai Gerasimenko' 'Ivan Oseledets' 'Denis Dimitrov'\n 'Andrey Kuznetsov']"
] |
null | null | 2405.12258 | null | null | http://arxiv.org/pdf/2405.12258v2 | 2024-06-05T08:50:51Z | 2024-05-20T11:40:23Z | Scientific Hypothesis Generation by a Large Language Model: Laboratory
Validation in Breast Cancer Treatment | Large language models (LLMs) have transformed AI and achieved breakthrough performance on a wide range of tasks that require human intelligence. In science, perhaps the most interesting application of LLMs is for hypothesis formation. A feature of LLMs, which results from their probabilistic structure, is that the output text is not necessarily a valid inference from the training text. These are 'hallucinations', and are a serious problem in many applications. However, in science, hallucinations may be useful: they are novel hypotheses whose validity may be tested by laboratory experiments. Here we experimentally test the use of LLMs as a source of scientific hypotheses using the domain of breast cancer treatment. We applied the LLM GPT4 to hypothesize novel pairs of FDA-approved non-cancer drugs that target the MCF7 breast cancer cell line relative to the non-tumorigenic breast cell line MCF10A. In the first round of laboratory experiments GPT4 succeeded in discovering three drug combinations (out of 12 tested) with synergy scores above the positive controls. These combinations were itraconazole + atenolol, disulfiram + simvastatin and dipyridamole + mebendazole. GPT4 was then asked to generate new combinations after considering its initial results. It then discovered three more combinations with positive synergy scores (out of four tested), these were disulfiram + fulvestrant, mebendazole + quinacrine and disulfiram + quinacrine. A limitation of GPT4 as a generator of hypotheses was that its explanations for them were formulaic and unconvincing. We conclude that LLMs are an exciting novel source of scientific hypotheses. | [
"['Abbi Abdel-Rehim' 'Hector Zenil' 'Oghenejokpeme Orhobor' 'Marie Fisher'\n 'Ross J. Collins' 'Elizabeth Bourne' 'Gareth W. Fearnley' 'Emma Tate'\n 'Holly X. Smith' 'Larisa N. Soldatova' 'Ross D. King']"
] |
null | null | 2405.12259 | null | null | http://arxiv.org/pdf/2405.12259v1 | 2024-05-20T12:39:24Z | 2024-05-20T12:39:24Z | Generalization Ability of Feature-based Performance Prediction Models: A
Statistical Analysis across Benchmarks | This study examines the generalization ability of algorithm performance prediction models across various benchmark suites. Comparing the statistical similarity between the problem collections with the accuracy of performance prediction models that are based on exploratory landscape analysis features, we observe that there is a positive correlation between these two measures. Specifically, when the high-dimensional feature value distributions between training and testing suites lack statistical significance, the model tends to generalize well, in the sense that the testing errors are in the same range as the training errors. Two experiments validate these findings: one involving the standard benchmark suites, the BBOB and CEC collections, and another using five collections of affine combinations of BBOB problem instances. | [
"['Ana Nikolikj' 'Ana Kostovska' 'Gjorgjina Cenikj' 'Carola Doerr'\n 'Tome Eftimov']"
] |
null | null | 2405.12261 | null | null | http://arxiv.org/pdf/2405.12261v1 | 2024-05-20T14:16:06Z | 2024-05-20T14:16:06Z | EXACT: Towards a platform for empirically benchmarking Machine Learning
model explanation methods | The evolving landscape of explainable artificial intelligence (XAI) aims to improve the interpretability of intricate machine learning (ML) models, yet faces challenges in formalisation and empirical validation, being an inherently unsupervised process. In this paper, we bring together various benchmark datasets and novel performance metrics in an initial benchmarking platform, the Explainable AI Comparison Toolkit (EXACT), providing a standardised foundation for evaluating XAI methods. Our datasets incorporate ground truth explanations for class-conditional features, and leveraging novel quantitative metrics, this platform assesses the performance of post-hoc XAI methods in the quality of the explanations they produce. Our recent findings have highlighted the limitations of popular XAI methods, as they often struggle to surpass random baselines, attributing significance to irrelevant features. Moreover, we show the variability in explanations derived from different equally performing model architectures. This initial benchmarking platform therefore aims to allow XAI researchers to test and assure the high quality of their newly developed methods. | [
"['Benedict Clark' 'Rick Wilming' 'Artur Dox' 'Paul Eschenbach'\n 'Sami Hached' 'Daniel Jin Wodke' 'Michias Taye Zewdie'\n 'Uladzislau Bruila' 'Marta Oliveira' 'Hjalmar Schulz'\n 'Luca Matteo Cornils' 'Danny Panknin' 'Ahcène Boubekki' 'Stefan Haufe']"
] |
null | null | 2405.12262 | null | null | http://arxiv.org/pdf/2405.12262v1 | 2024-05-20T15:42:23Z | 2024-05-20T15:42:23Z | Prompt Learning for Generalized Vehicle Routing | Neural combinatorial optimization (NCO) is a promising learning-based approach to solving various vehicle routing problems without much manual algorithm design. However, the current NCO methods mainly focus on the in-distribution performance, while the real-world problem instances usually come from different distributions. A costly fine-tuning approach or generalized model retraining from scratch could be needed to tackle the out-of-distribution instances. Unlike the existing methods, this work investigates an efficient prompt learning approach in NCO for cross-distribution adaptation. To be concrete, we propose a novel prompt learning method to facilitate fast zero-shot adaptation of a pre-trained model to solve routing problem instances from different distributions. The proposed model learns a set of prompts among various distributions and then selects the best-matched one to prompt a pre-trained attention model for each problem instance. Extensive experiments show that the proposed prompt learning approach facilitates the fast adaptation of pre-trained routing models. It also outperforms existing generalized models on both in-distribution prediction and zero-shot generalization to a diverse set of new tasks. Our code implementation is available online https://github.com/FeiLiu36/PromptVRP. | [
"['Fei Liu' 'Xi Lin' 'Weiduo Liao' 'Zhenkun Wang' 'Qingfu Zhang'\n 'Xialiang Tong' 'Mingxuan Yuan']"
] |
null | null | 2405.12264 | null | null | http://arxiv.org/pdf/2405.12264v1 | 2024-05-20T17:16:27Z | 2024-05-20T17:16:27Z | Directed Metric Structures arising in Large Language Models | Large Language Models are transformer neural networks which are trained to produce a probability distribution on the possible next words to given texts in a corpus, in such a way that the most likely word predicted is the actual word in the training text. In this paper we identify the mathematical structure defined by such conditional probability distributions of text extensions. Changing the viewpoint from probabilities to -log probabilities we observe that the subtext order is completely encoded in a metric structure defined on the space of texts $\mathcal{L}$, by -log probabilities. We then construct a metric polyhedron $P(\mathcal{L})$ and an isometric embedding (called Yoneda embedding) of $\mathcal{L}$ into $P(\mathcal{L})$ such that texts map to generators of certain special extremal rays. We explain that $P(\mathcal{L})$ is a $(\min,+)$ (tropical) linear span of these extremal ray generators. The generators also satisfy a system of $(\min,+)$ linear equations. We then show that $P(\mathcal{L})$ is compatible with adding more text and from this we derive an approximation of a text vector as a Boltzmann-weighted linear combination of the vectors for words in that text. We then prove a duality theorem showing that text extensions and text restrictions give isometric polyhedra (even though they look a priori very different). Moreover we prove that $P(\mathcal{L})$ is the lattice closure of (a version of) the so-called Isbell completion of $\mathcal{L}$, which turns out to be the $(\max,+)$ span of the text extremal ray generators. All constructions have interpretations in category theory but we don't use category theory explicitly. The categorical interpretations are briefly explained in an appendix. In the final appendix we describe how the syntax-to-semantics problem could fit into a general, well-known mathematical duality. | [
"['Stéphane Gaubert' 'Yiannis Vlassopoulos']"
] |
null | null | 2405.12266 | null | null | http://arxiv.org/abs/2405.12266v1 | 2024-05-20T17:52:40Z | 2024-05-20T17:52:40Z | EGAN: Evolutional GAN for Ransomware Evasion | Adversarial Training is a proven defense strategy against adversarial malware. However, generating adversarial malware samples for this type of training presents a challenge because the resulting adversarial malware needs to remain evasive and functional. This work proposes an attack framework, EGAN, to address this limitation. EGAN leverages an Evolution Strategy and Generative Adversarial Network to select a sequence of attack actions that can mutate a Ransomware file while preserving its original functionality. We tested this framework on popular AI-powered commercial antivirus systems listed on VirusTotal and demonstrated that our framework is capable of bypassing the majority of these systems. Moreover, we evaluated whether the EGAN attack framework can evade other commercial non-AI antivirus solutions. Our results indicate that the adversarial ransomware generated can increase the probability of evading some of them. | [
"['Daniel Commey' 'Benjamin Appiah' 'Bill K. Frimpong' 'Isaac Osei'\n 'Ebenezer N. A. Hammond' 'Garth V. Crosby']"
] |
null | null | 2405.12295 | null | null | http://arxiv.org/pdf/2405.12295v2 | 2024-06-04T22:08:09Z | 2024-05-20T18:01:15Z | Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks | Graph Neural Networks (GNNs) are recognized as potent tools for processing real-world data organized in graph structures. Especially inductive GNNs, which enable the processing of graph-structured data without relying on predefined graph structures, are gaining importance in an increasingly wide variety of applications. As these networks demonstrate proficiency across a range of tasks, they become lucrative targets for model-stealing attacks where an adversary seeks to replicate the functionality of the targeted network. A large effort has been made to develop model-stealing attacks that focus on models trained with images and texts. However, little attention has been paid to GNNs trained on graph data. This paper introduces a novel method for unsupervised model-stealing attacks against inductive GNNs, based on graph contrastive learning and spectral graph augmentations to efficiently extract information from the target model. The proposed attack is thoroughly evaluated on six datasets. The results show that this approach demonstrates a higher level of efficiency compared to existing stealing attacks. More concretely, our attack outperforms the baseline on all benchmarks, achieving higher fidelity and downstream accuracy of the stolen model while requiring fewer queries sent to the target model. | [
"['Marcin Podhajski' 'Jan Dubiński' 'Franziska Boenisch' 'Adam Dziedzic'\n 'Agnieszka Pregowska' 'Tomasz Michalak']"
] |
null | null | 2405.12299 | null | null | http://arxiv.org/pdf/2405.12299v1 | 2024-05-20T18:04:59Z | 2024-05-20T18:04:59Z | Perturbing the Gradient for Alleviating Meta Overfitting | The reason for Meta Overfitting can be attributed to two factors: Mutual Non-exclusivity and the Lack of diversity, consequent to which a single global function can fit the support set data of all the meta-training tasks and fail to generalize to new unseen tasks. This issue is evidenced by low error rates on the meta-training tasks, but high error rates on new tasks. However, there can be a number of novel solutions to this problem keeping in mind any of the two objectives to be attained, i.e. to increase diversity in the tasks and to reduce the confidence of the model for some of the tasks. In light of the above, this paper proposes a number of solutions to tackle meta-overfitting on few-shot learning settings, such as few-shot sinusoid regression and few shot classification. Our proposed approaches demonstrate improved generalization performance compared to state-of-the-art baselines for learning in a non-mutually exclusive task setting. Overall, this paper aims to provide insights into tackling overfitting in meta-learning and to advance the field towards more robust and generalizable models. | [
"['Manas Gogoi' 'Sambhavi Tiwari' 'Shekhar Verma']"
] |
null | null | 2405.12300 | null | null | http://arxiv.org/pdf/2405.12300v1 | 2024-05-20T18:08:34Z | 2024-05-20T18:08:34Z | Integration of Scanning Probe Microscope with High-Performance
Computing: fixed-policy and reward-driven workflows implementation | The rapid development of computation power and machine learning algorithms has paved the way for automating scientific discovery with a scanning probe microscope (SPM). The key elements towards operationalization of automated SPM are the interface to enable SPM control from Python codes, availability of high computing power, and development of workflows for scientific discovery. Here we build a Python interface library that enables controlling an SPM from either a local computer or a remote high-performance computer (HPC), which satisfies the high computation power need of machine learning algorithms in autonomous workflows. We further introduce a general platform to abstract the operations of SPM in scientific discovery into fixed-policy or reward-driven workflows. Our work provides a full infrastructure to build automated SPM workflows for both routine operations and autonomous scientific discovery with machine learning. | [
"['Yu Liu' 'Utkarsh Pratiush' 'Jason Bemis' 'Roger Proksch' 'Reece Emery'\n 'Philip D. Rack' 'Yu-Chen Liu' 'Jan-Chi Yang' 'Stanislav Udovenko'\n 'Susan Trolier-McKinstry' 'Sergei V. Kalinin']"
] |
null | null | 2405.12308 | null | null | http://arxiv.org/pdf/2405.12308v1 | 2024-05-20T18:12:36Z | 2024-05-20T18:12:36Z | Continual Deep Reinforcement Learning for Decentralized Satellite
Routing | This paper introduces a full solution for decentralized routing in Low Earth Orbit satellite constellations based on continual Deep Reinforcement Learning (DRL). This requires addressing multiple challenges, including the partial knowledge at the satellites and their continuous movement, and the time-varying sources of uncertainty in the system, such as traffic, communication links, or communication buffers. We follow a multi-agent approach, where each satellite acts as an independent decision-making agent, while acquiring a limited knowledge of the environment based on the feedback received from the nearby agents. The solution is divided into two phases. First, an offline learning phase relies on decentralized decisions and a global Deep Neural Network (DNN) trained with global experiences. Then, the online phase with local, on-board, and pre-trained DNNs requires continual learning to evolve with the environment, which can be done in two different ways: (1) Model anticipation, where the predictable conditions of the constellation are exploited by each satellite sharing local model with the next satellite; and (2) Federated Learning (FL), where each agent's model is merged first at the cluster level and then aggregated in a global Parameter Server. The results show that, without high congestion, the proposed Multi-Agent DRL framework achieves the same E2E performance as a shortest-path solution, but the latter assumes intensive communication overhead for real-time network-wise knowledge of the system at a centralized node, whereas ours only requires limited feedback exchange among first neighbour satellites. Importantly, our solution adapts well to congestion conditions and exploits less loaded paths. Moreover, the divergence of models over time is easily tackled by the synergy between anticipation, applied in short-term alignment, and FL, utilized for long-term alignment. | [
"['Federico Lozano-Cuadra' 'Beatriz Soret' 'Israel Leyva-Mayorga'\n 'Petar Popovski']"
] |
null | null | 2405.12309 | null | null | http://arxiv.org/pdf/2405.12309v1 | 2024-05-20T18:13:15Z | 2024-05-20T18:13:15Z | Accurate Learning of Equivariant Quantum Systems from a Single Ground
State | Predicting properties across system parameters is an important task in quantum physics, with applications ranging from molecular dynamics to variational quantum algorithms. Recently, provably efficient algorithms to solve this task for ground states within a gapped phase were developed. Here we dramatically improve the efficiency of these algorithms by showing how to learn properties of all ground states for systems with periodic boundary conditions from a single ground state sample. We prove that the prediction error tends to zero in the thermodynamic limit and numerically verify the results. | [
"['Štěpán Šmíd' 'Roberto Bondesan']"
] |
null | null | 2405.12312 | null | null | http://arxiv.org/pdf/2405.12312v1 | 2024-05-20T18:14:33Z | 2024-05-20T18:14:33Z | A Principled Approach for a New Bias Measure | The widespread use of machine learning and data-driven algorithms for decision making has been steadily increasing over many years. The areas in which this is happening are diverse: healthcare, employment, finance, education, the legal system, to name a few; and the associated negative side effects are being increasingly harmful for society. Negative data \emph{bias} is one of those, which tends to result in harmful consequences for specific groups of people. Any mitigation strategy or effective policy that addresses the negative consequences of bias must start with awareness that bias exists, together with a way to understand and quantify it. However, there is a lack of consensus on how to measure data bias and oftentimes the intended meaning is context dependent and not uniform within the research community. The main contributions of our work are: (1) a general algorithmic framework for defining and efficiently quantifying the bias level of a dataset with respect to a protected group; and (2) the definition of a new bias measure. Our results are experimentally validated using nine publicly available datasets and theoretically analyzed, which provide novel insights about the problem. Based on our approach, we also derive a bias mitigation algorithm that might be useful to policymakers. | [
"['Bruno Scarone' 'Alfredo Viola' 'Ricardo Baeza-Yates']"
] |
null | null | 2405.12317 | null | null | http://arxiv.org/pdf/2405.12317v1 | 2024-05-20T18:29:36Z | 2024-05-20T18:29:36Z | Kernel spectral joint embeddings for high-dimensional noisy datasets
using duo-landmark integral operators | Integrative analysis of multiple heterogeneous datasets has become standard practice in many research fields, especially in single-cell genomics and medical informatics. Existing approaches oftentimes suffer from limited power in capturing nonlinear structures, insufficient account of noisiness and effects of high-dimensionality, lack of adaptivity to signals and sample sizes imbalance, and their results are sometimes difficult to interpret. To address these limitations, we propose a novel kernel spectral method that achieves joint embeddings of two independently observed high-dimensional noisy datasets. The proposed method automatically captures and leverages possibly shared low-dimensional structures across datasets to enhance embedding quality. The obtained low-dimensional embeddings can be utilized for many downstream tasks such as simultaneous clustering, data visualization, and denoising. The proposed method is justified by rigorous theoretical analysis. Specifically, we show the consistency of our method in recovering the low-dimensional noiseless signals, and characterize the effects of the signal-to-noise ratios on the rates of convergence. Under a joint manifolds model framework, we establish the convergence of ultimate embeddings to the eigenfunctions of some newly introduced integral operators. These operators, referred to as duo-landmark integral operators, are defined by the convolutional kernel maps of some reproducing kernel Hilbert spaces (RKHSs). These RKHSs capture the either partially or entirely shared underlying low-dimensional nonlinear signal structures of the two datasets. Our numerical experiments and analyses of two single-cell omics datasets demonstrate the empirical advantages of the proposed method over existing methods in both embeddings and several downstream tasks. | [
"['Xiucai Ding' 'Rong Ma']"
] |
null | null | 2405.12319 | null | null | http://arxiv.org/pdf/2405.12319v1 | 2024-05-20T18:29:53Z | 2024-05-20T18:29:53Z | Dynamic Line Rating using Hyper-local Weather Predictions: A Machine
Learning Approach | Dynamic Line Rating (DLR) systems are crucial for renewable energy integration in transmission networks. However, traditional methods relying on sensor data face challenges due to the impracticality of installing sensors on every pole or span. Additionally, sensor-based approaches may struggle predicting DLR in rapidly changing weather conditions. This paper proposes a novel approach, leveraging machine learning (ML) techniques alongside hyper-local weather forecast data. Unlike conventional methods, which solely rely on sensor data, this approach utilizes ML models trained to predict hyper-local weather parameters on a full network scale. Integrating topographical data enhances prediction accuracy by accounting for landscape features and obstacles around overhead lines. The paper introduces confidence intervals for DLR assessments to mitigate risks associated with uncertainties. A case study from Estonia demonstrates the practical implementation of the proposed methodology, highlighting its effectiveness in real-world scenarios. By addressing limitations of sensor-based approaches, this research contributes to the discourse of renewable energy integration in transmission systems, advancing efficiency and reliability in the power grid. | [
"['Henri Manninen' 'Markus Lippus' 'Georg Rute']"
] |
null | null | 2405.12326 | null | null | http://arxiv.org/pdf/2405.12326v1 | 2024-05-20T18:51:42Z | 2024-05-20T18:51:42Z | Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF): A
Data-Morphology-based Counterfactual Generation Method for Trustworthy
Artificial Intelligence | Explainable Artificial Intelligence (XAI) is a pivotal research domain aimed at understanding the operational mechanisms of AI systems, particularly those considered ``black boxes'' due to their complex, opaque nature. XAI seeks to make these AI systems more understandable and trustworthy, providing insight into their decision-making processes. By producing clear and comprehensible explanations, XAI enables users, practitioners, and stakeholders to trust a model's decisions. This work analyses the value of data morphology strategies in generating counterfactual explanations. It introduces the Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF) method, a model-agnostic counterfactual generator that leverages data morphology to estimate a model's decision boundaries. The ONB-MACF method constructs hyperspheres in the data space whose covered points share a class, mapping the decision boundary. Counterfactuals are then generated by incrementally adjusting an instance's attributes towards the nearest alternate-class hypersphere, crossing the decision boundary with minimal modifications. By design, the ONB-MACF method generates feasible and sparse counterfactuals that follow the data distribution. Our comprehensive benchmark from a double perspective (quantitative and qualitative) shows that the ONB-MACF method outperforms existing state-of-the-art counterfactual generation methods across multiple quality metrics on diverse tabular datasets. This supports our hypothesis, showcasing the potential of data-morphology-based explainability strategies for trustworthy AI. | [
"['José Daniel Pascual-Triana' 'Alberto Fernández' 'Javier Del Ser'\n 'Francisco Herrera']"
] |
null | null | 2405.12327 | null | null | http://arxiv.org/pdf/2405.12327v1 | 2024-05-20T18:52:33Z | 2024-05-20T18:52:33Z | Diversifying by Intent in Recommender Systems | It has become increasingly clear that recommender systems overly focusing on short-term engagement can inadvertently hurt long-term user experience. However, it is challenging to optimize long-term user experience directly as the desired signal is sparse, noisy and manifests over a long horizon. In this work, we show the benefits of incorporating higher-level user understanding, specifically user intents that can persist across multiple interactions or recommendation sessions, for whole-page recommendation toward optimizing long-term user experience. User intent has primarily been investigated within the context of search, but remains largely under-explored for recommender systems. To bridge this gap, we develop a probabilistic intent-based whole-page diversification framework in the final stage of a recommender system. Starting with a prior belief of user intents, the proposed diversification framework sequentially selects items at each position based on these beliefs, and subsequently updates posterior beliefs about the intents. It ensures that different user intents are represented in a page towards optimizing long-term user experience. We experiment with the intent diversification framework on one of the world's largest content recommendation platforms, serving billions of users daily. Our framework incorporates the user's exploration intent, capturing their propensity to explore new interests and content. Live experiments show that the proposed framework leads to an increase in user retention and overall user enjoyment, validating its effectiveness in facilitating long-term planning. In particular, it enables users to consistently discover and engage with diverse contents that align with their underlying intents over time, thereby leading to an improved long-term user experience. | [
"['Yuyan Wang' 'Cheenar Banerjee' 'Samer Chucri' 'Fabio Soldo'\n 'Sriraj Badam' 'Ed H. Chi' 'Minmin Chen']"
] |
null | null | 2405.12340 | null | null | http://arxiv.org/pdf/2405.12340v1 | 2024-05-20T19:24:10Z | 2024-05-20T19:24:10Z | Cascade-based Randomization for Inferring Causal Effects under Diffusion
Interference | The presence of interference, where the outcome of an individual may depend on the treatment assignment and behavior of neighboring nodes, can lead to biased causal effect estimation. Current approaches to network experiment design focus on limiting interference through cluster-based randomization, in which clusters are identified using graph clustering, and cluster randomization dictates the node assignment to treatment and control. However, cluster-based randomization approaches perform poorly when interference propagates in cascades, whereby the response of individuals to treatment propagates to their multi-hop neighbors. When we have knowledge of the cascade seed nodes, we can leverage this interference structure to mitigate the resulting causal effect estimation bias. With this goal, we propose a cascade-based network experiment design that initiates treatment assignment from the cascade seed node and propagates the assignment to their multi-hop neighbors to limit interference during cascade growth and thereby reduce the overall causal effect estimation error. Our extensive experiments on real-world and synthetic datasets demonstrate that our proposed framework outperforms the existing state-of-the-art approaches in estimating causal effects in network data. | [
"['Zahra Fatemi' 'Jean Pouget-Abadie' 'Elena Zheleva']"
] |
null | null | 2405.12353 | null | null | http://arxiv.org/pdf/2405.12353v1 | 2024-05-20T20:03:51Z | 2024-05-20T20:03:51Z | TinyM$^2$Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks
for Sustainable Edge Deployment | The advancement of sophisticated artificial intelligence (AI) algorithms has led to a notable increase in energy usage and carbon dioxide emissions, intensifying concerns about climate change. This growing problem has brought the environmental sustainability of AI technologies to the forefront, especially as they expand across various sectors. In response to these challenges, there is an urgent need for the development of sustainable AI solutions. These solutions must focus on energy-efficient embedded systems that are capable of handling diverse data types even in environments with limited resources, thereby ensuring both technological progress and environmental responsibility. Integrating complementary multimodal data into tiny machine learning models for edge devices is challenging due to increased complexity, latency, and power consumption. This work introduces TinyM$^2$Net-V3, a system that processes different modalities of complementary data, designs deep neural network (DNN) models, and employs model compression techniques including knowledge distillation and low bit-width quantization with memory-aware considerations to fit models within lower memory hierarchy levels, reducing latency and enhancing energy efficiency on resource-constrained devices. We evaluated TinyM$^2$Net-V3 in two multimodal case studies: COVID-19 detection using cough, speech, and breathing audios, and pose classification from depth and thermal images. With tiny inference models (6 KB and 58 KB), we achieved 92.95% and 90.7% accuracies, respectively. Our tiny machine learning models, deployed on resource limited hardware, demonstrated low latencies within milliseconds and very high power efficiency. | [
"['Hasib-Al Rashid' 'Tinoosh Mohsenin']"
] |
null | null | 2405.12354 | null | null | http://arxiv.org/pdf/2405.12354v1 | 2024-05-20T20:06:42Z | 2024-05-20T20:06:42Z | A Study on Optimization Techniques for Variational Quantum Circuits in
Reinforcement Learning | Quantum Computing aims to streamline machine learning, making it more effective with fewer trainable parameters. This reduction of parameters can speed up the learning process and reduce the use of computational resources. However, in the current phase of quantum computing development, known as the noisy intermediate-scale quantum (NISQ) era, learning is difficult due to a limited number of qubits and widespread quantum noise. To overcome these challenges, researchers are focusing on variational quantum circuits (VQCs). VQCs are hybrid algorithms that merge a quantum circuit, which can be adjusted through parameters, with traditional classical optimization techniques. These circuits require only a few qubits for effective learning. Recent studies have presented new ways of applying VQCs to reinforcement learning, showing promising results that warrant further exploration. This study investigates the effects of various techniques -- data re-uploading, input scaling, output scaling -- and introduces exponential learning rate decay in the quantum proximal policy optimization algorithm's actor-VQC. We assess these methods in the popular Frozen Lake and Cart Pole environments. Our focus is on their ability to reduce the number of parameters in the VQC without losing effectiveness. Our findings indicate that data re-uploading and an exponential learning rate decay significantly enhance hyperparameter stability and overall performance. While input scaling does not improve parameter efficiency, output scaling effectively manages greediness, leading to increased learning speed and robustness. | [
"['Michael Kölle' 'Timo Witter' 'Tobias Rohe' 'Gerhard Stenzel'\n 'Philipp Altmann' 'Thomas Gabor']"
] |
null | null | 2405.12355 | null | null | http://arxiv.org/pdf/2405.12355v1 | 2024-05-20T20:06:54Z | 2024-05-20T20:06:54Z | Investigating the Impact of Choice on Deep Reinforcement Learning for
Space Controls | For many space applications, traditional control methods are often used during operation. However, as the number of space assets continues to grow, autonomous operation can enable rapid development of control methods for different space related tasks. One method of developing autonomous control is Reinforcement Learning (RL), which has become increasingly popular after demonstrating promising performance and success across many complex tasks. While it is common for RL agents to learn bounded continuous control values, this may not be realistic or practical for many space tasks that traditionally prefer an on/off approach for control. This paper analyzes using discrete action spaces, where the agent must choose from a predefined list of actions. The experiments explore how the number of choices provided to the agents affects their measured performance during and after training. This analysis is conducted for an inspection task, where the agent must circumnavigate an object to inspect points on its surface, and a docking task, where the agent must move into proximity of another spacecraft and "dock" with a low relative speed. A common objective of both tasks, and most space tasks in general, is to minimize fuel usage, which motivates the agent to regularly choose an action that uses no fuel. Our results show that a limited number of discrete choices leads to optimal performance for the inspection task, while continuous control leads to optimal performance for the docking task. | [
"['Nathaniel Hamilton' 'Kyle Dunlap' 'Kerianne L. Hobbs']"
] |
null | null | 2405.12356 | null | null | http://arxiv.org/pdf/2405.12356v1 | 2024-05-20T20:14:09Z | 2024-05-20T20:14:09Z | Coarse-graining conformational dynamics with multi-dimensional
generalized Langevin equation: how, when, and why | A data-driven ab initio generalized Langevin equation (AIGLE) approach is developed to learn and simulate high-dimensional, heterogeneous, coarse-grained conformational dynamics. Constrained by the fluctuation-dissipation theorem, the approach can build coarse-grained models in dynamical consistency with all-atom molecular dynamics. We also propose practical criteria for AIGLE to enforce long-term dynamical consistency. Case studies of a toy polymer, with 20 coarse-grained sites, and the alanine dipeptide, with two dihedral angles, elucidate why one should adopt AIGLE or its Markovian limit for modeling coarse-grained conformational dynamics in practice. | [
"['Pinchen Xie' 'Yunrui Qiu' 'Weinan E']"
] |
null | null | 2405.12372 | null | null | http://arxiv.org/pdf/2405.12372v1 | 2024-05-20T20:56:01Z | 2024-05-20T20:56:01Z | DispaRisk: Assessing and Interpreting Disparity Risks in Datasets | Machine Learning algorithms (ML) impact virtually every aspect of human lives and have found use across diverse sectors, including healthcare, finance, and education. Often, ML algorithms have been found to exacerbate societal biases presented in datasets, leading to adversarial impacts on subsets/groups of individuals, in many cases minority groups. To effectively mitigate these untoward effects, it is crucial that disparities/biases are identified and assessed early in a ML pipeline. This proactive approach facilitates timely interventions to prevent bias amplification and reduce complexity at later stages of model development. In this paper, we introduce DispaRisk, a novel framework designed to proactively assess the potential risks of disparities in datasets during the initial stages of the ML pipeline. We evaluate DispaRisk's effectiveness by benchmarking it with commonly used datasets in fairness research. Our findings demonstrate the capabilities of DispaRisk to identify datasets with a high-risk of discrimination, model families prone to biases, and characteristics that heighten discrimination susceptibility in a ML pipeline. The code for our experiments is available in the following repository: https://github.com/jovasque156/disparisk | [
"['Jonathan Vasquez' 'Carlotta Domeniconi' 'Huzefa Rangwala']"
] |
null | null | 2405.12377 | null | null | http://arxiv.org/pdf/2405.12377v1 | 2024-05-20T21:10:18Z | 2024-05-20T21:10:18Z | Spatio-temporal Attention-based Hidden Physics-informed Neural Network
for Remaining Useful Life Prediction | Predicting the Remaining Useful Life (RUL) is essential in Prognostic Health Management (PHM) for industrial systems. Although deep learning approaches have achieved considerable success in predicting RUL, issues such as low prediction accuracy and limited interpretability pose significant challenges, hindering their practical implementation. In this work, we introduce a Spatio-temporal Attention-based Hidden Physics-informed Neural Network (STA-HPINN) for RUL prediction, which can utilize the associated physics of the system degradation. The spatio-temporal attention mechanism can extract important features from the input data. With the self-attention mechanism on both the sensor dimension and time step dimension, the proposed model can effectively extract degradation information. The hidden physics-informed neural network is utilized to capture the physics mechanisms that govern the evolution of RUL. With the constraint of physics, the model can achieve higher accuracy and reasonable predictions. The approach is validated on a benchmark dataset, demonstrating exceptional performance when compared to cutting-edge methods, especially in the case of complex conditions. | [
"['Feilong Jiang' 'Xiaonan Hou' 'Min Xia']"
] |
null | null | 2405.12380 | null | null | http://arxiv.org/pdf/2405.12380v1 | 2024-05-20T21:20:28Z | 2024-05-20T21:20:28Z | Large scale scattering using fast solvers based on neural operators | We extend a recently proposed machine-learning-based iterative solver, i.e. the hybrid iterative transferable solver (HINTS), to solve the scattering problem described by the Helmholtz equation in an exterior domain with a complex absorbing boundary condition. The HINTS method combines neural operators (NOs) with standard iterative solvers, e.g. Jacobi and Gauss-Seidel (GS), to achieve better performance by leveraging the spectral bias of neural networks. In HINTS, some iterations of the conventional iterative method are replaced by inferences of the pre-trained NO. In this work, we employ HINTS to solve the scattering problem for both 2D and 3D problems, where the standard iterative solver fails. We consider square and triangular scatterers of various sizes in 2D, and a cube and a model submarine in 3D. We explore and illustrate the extrapolation capability of HINTS in handling diverse geometries of the scatterer, which is achieved by training the NO on non-scattering scenarios and then deploying it in HINTS to solve scattering problems. The accurate results demonstrate that the NO in HINTS method remains effective without retraining or fine-tuning it whenever a new scatterer is given. Taken together, our results highlight the adaptability and versatility of the extended HINTS methodology in addressing diverse scattering problems. | [
"['Zongren Zou' 'Adar Kahana' 'Enrui Zhang' 'Eli Turkel' 'Rishikesh Ranade'\n 'Jay Pathak' 'George Em Karniadakis']"
] |
null | null | 2405.12382 | null | null | http://arxiv.org/pdf/2405.12382v1 | 2024-05-20T21:26:00Z | 2024-05-20T21:26:00Z | Stochastic Reservoir Computers | Reservoir computing is a form of machine learning that utilizes nonlinear dynamical systems to perform complex tasks in a cost-effective manner when compared to typical neural networks. Many recent advancements in reservoir computing, in particular quantum reservoir computing, make use of reservoirs that are inherently stochastic. However, the theoretical justification for using these systems has not yet been well established. In this paper, we investigate the universality of stochastic reservoir computers, in which we use a stochastic system for reservoir computing using the probabilities of each reservoir state as the readout instead of the states themselves. In stochastic reservoir computing, the number of distinct states of the entire reservoir computer can potentially scale exponentially with the size of the reservoir hardware, offering the advantage of compact device size. We prove that classes of stochastic echo state networks, and therefore the class of all stochastic reservoir computers, are universal approximating classes. We also investigate the performance of two practical examples of stochastic reservoir computers in classification and chaotic time series prediction. While shot noise is a limiting factor in the performance of stochastic reservoir computing, we show significantly improved performance compared to a deterministic reservoir computer with similar hardware in cases where the effects of noise are small. | [
"['Peter J. Ehlers' 'Hendra I. Nurdin' 'Daniel Soh']"
] |
null | null | 2405.12384 | null | null | http://arxiv.org/pdf/2405.12384v3 | 2024-05-28T07:37:28Z | 2024-05-20T21:39:19Z | Vulnerability Detection in C/C++ Code with Deep Learning | Deep learning has been shown to be a promising tool in detecting software vulnerabilities. In this work, we train neural networks with program slices extracted from the source code of C/C++ programs to detect software vulnerabilities. The program slices capture the syntax and semantic characteristics of vulnerability-related program constructs, including API function call, array usage, pointer usage, and arithmetic expression. To achieve a strong prediction model for both vulnerable code and non-vulnerable code, we compare different types of training data, different optimizers, and different types of neural networks. Our result shows that combining different types of characteristics of source code and using a balanced number of vulnerable program slices and non-vulnerable program slices produce a balanced accuracy in predicting both vulnerable code and non-vulnerable code. Among different neural networks, BGRU with the ADAM optimizer performs the best in detecting software vulnerabilities with an accuracy of 92.49%. | [
"['Zhen Huang' 'Amy Aumpansub']"
] |
null | null | 2405.12386 | null | null | http://arxiv.org/pdf/2405.12386v1 | 2024-05-20T21:42:42Z | 2024-05-20T21:42:42Z | Particle swarm optimization with Applications to Maximum Likelihood
Estimation and Penalized Negative Binomial Regression | General purpose optimization routines such as nlminb, optim (R) or nlmixed (SAS) are frequently used to estimate model parameters in nonstandard distributions. This paper presents Particle Swarm Optimization (PSO) as an alternative to many of the current algorithms used in statistics. We find that PSO can not only reproduce the same results as the above routines, it can also produce results that are more optimal, or produce results when the other routines cannot converge. In the latter case, it can also identify the source of the problem or problems. We highlight the advantages of PSO with four examples, where: (1) some parameters in a generalized distribution are unidentified using PSO when it is not apparent or computationally manifested using routines in R or SAS; (2) PSO can produce estimation results for the log-binomial regressions when current routines may not; (3) PSO provides flexibility in the link function for binomial regression with a LASSO penalty, which is unsupported by standard packages like GLM and GENMOD in Stata and SAS, respectively, and (4) PSO provides superior MLE estimates for an EE-IW distribution compared with those from the traditional statistical methods that rely on moments. | [
"['Sisi Shao' 'Junhyung Park' 'Weng Kee Wong']"
] |
null | null | 2405.12387 | null | null | http://arxiv.org/pdf/2405.12387v1 | 2024-05-20T21:43:43Z | 2024-05-20T21:43:43Z | Conformal Counterfactual Inference under Hidden Confounding | Personalized decision making requires the knowledge of potential outcomes under different treatments, and confidence intervals about the potential outcomes further enrich this decision-making process and improve its reliability in high-stakes scenarios. Predicting potential outcomes along with their uncertainty in a counterfactual world poses the fundamental challenge in causal inference. Existing methods that construct confidence intervals for counterfactuals either rely on the assumption of strong ignorability, or need access to un-identifiable lower and upper bounds that characterize the difference between observational and interventional distributions. To overcome these limitations, we first propose a novel approach wTCP-DR based on transductive weighted conformal prediction, which provides confidence intervals for counterfactual outcomes with marginal coverage guarantees, even under hidden confounding. With less restrictive assumptions, our approach requires access to a fraction of interventional data (from randomized controlled trials) to account for the covariate shift from observational distribution to interventional distribution. Theoretical results explicitly demonstrate the conditions under which our algorithm is strictly advantageous to the naive method that only uses interventional data. After ensuring valid intervals on counterfactuals, it is straightforward to construct intervals for individual treatment effects (ITEs). We demonstrate our method across synthetic and real-world data, including recommendation systems, to verify the superiority of our methods compared against state-of-the-art baselines in terms of both coverage and efficiency. | [
"['Zonghao Chen' 'Ruocheng Guo' 'Jean-François Ton' 'Yang Liu']"
] |
null | null | 2405.12390 | null | null | http://arxiv.org/pdf/2405.12390v1 | 2024-05-20T21:50:19Z | 2024-05-20T21:50:19Z | A Metric-based Principal Curve Approach for Learning One-dimensional
Manifold | The principal curve is a well-known statistical method in manifold learning, using concepts from differential geometry. In this paper, we propose a novel metric-based principal curve (MPC) method that learns a one-dimensional manifold of spatial data. Experiments on synthetic datasets and real applications using the MNIST dataset show that our method can learn the one-dimensional manifold well in terms of shape. | [
"['Elvis Han Cui' 'Sisi Shao']"
] |
null | null | 2405.12398 | null | null | http://arxiv.org/pdf/2405.12398v1 | 2024-05-20T22:35:34Z | 2024-05-20T22:35:34Z | ASMR: Activation-sharing Multi-resolution Coordinate Networks For
Efficient Inference | Coordinate network or implicit neural representation (INR) is a fast-emerging method for encoding natural signals (such as images and videos) with the benefits of a compact neural representation. While numerous methods have been proposed to increase the encoding capabilities of an INR, an often overlooked aspect is the inference efficiency, usually measured in multiply-accumulate (MAC) count. This is particularly critical in use cases where inference throughput is greatly limited by hardware constraints. To this end, we propose the Activation-Sharing Multi-Resolution (ASMR) coordinate network that combines multi-resolution coordinate decomposition with hierarchical modulations. Specifically, an ASMR model enables the sharing of activations across grids of the data. This largely decouples its inference cost from its depth which is directly correlated to its reconstruction capability, and renders a near O(1) inference complexity irrespective of the number of layers. Experiments show that ASMR can reduce the MAC of a vanilla SIREN model by up to 500x while achieving an even higher reconstruction quality than its SIREN baseline. | [
"['Jason Chun Lok Li' 'Steven Tin Sui Luo' 'Le Xu' 'Ngai Wong']"
] |
null | null | 2405.12399 | null | null | http://arxiv.org/pdf/2405.12399v1 | 2024-05-20T22:51:05Z | 2024-05-20T22:51:05Z | Diffusion for World Modeling: Visual Details Matter in Atari | World models constitute a promising approach for training reinforcement learning agents in a safe and sample-efficient manner. Recent world models predominantly operate on sequences of discrete latent variables to model environment dynamics. However, this compression into a compact discrete representation may ignore visual details that are important for reinforcement learning. Concurrently, diffusion models have become a dominant approach for image generation, challenging well-established methods modeling discrete latents. Motivated by this paradigm shift, we introduce DIAMOND (DIffusion As a Model Of eNvironment Dreams), a reinforcement learning agent trained in a diffusion world model. We analyze the key design choices that are required to make diffusion suitable for world modeling, and demonstrate how improved visual details can lead to improved agent performance. DIAMOND achieves a mean human normalized score of 1.46 on the competitive Atari 100k benchmark; a new best for agents trained entirely within a world model. To foster future research on diffusion for world modeling, we release our code, agents and playable world models at https://github.com/eloialonso/diamond. | [
"['Eloi Alonso' 'Adam Jelley' 'Vincent Micheli' 'Anssi Kanervisto'\n 'Amos Storkey' 'Tim Pearce' 'François Fleuret']"
] |
null | null | 2405.12412 | null | null | http://arxiv.org/pdf/2405.12412v1 | 2024-05-20T23:30:07Z | 2024-05-20T23:30:07Z | On Measuring Calibration of Discrete Probabilistic Neural Networks | As machine learning systems become increasingly integrated into real-world applications, accurately representing uncertainty is crucial for enhancing their safety, robustness, and reliability. Training neural networks to fit high-dimensional probability distributions via maximum likelihood has become an effective method for uncertainty quantification. However, such models often exhibit poor calibration, leading to overconfident predictions. Traditional metrics like Expected Calibration Error (ECE) and Negative Log Likelihood (NLL) have limitations, including biases and parametric assumptions. This paper proposes a new approach using conditional kernel mean embeddings to measure calibration discrepancies without these biases and assumptions. Preliminary experiments on synthetic data demonstrate the method's potential, with future work planned for more complex applications. | [
"['Spencer Young' 'Porter Jenkins']"
] |
null | null | 2405.12419 | null | null | http://arxiv.org/pdf/2405.12419v1 | 2024-05-20T23:53:42Z | 2024-05-20T23:53:42Z | GeoMask3D: Geometrically Informed Mask Selection for Self-Supervised
Point Cloud Learning in 3D | We introduce a pioneering approach to self-supervised learning for point clouds, employing a geometrically informed mask selection strategy called GeoMask3D (GM3D) to boost the efficiency of Masked Auto Encoders (MAE). Unlike the conventional method of random masking, our technique utilizes a teacher-student model to focus on intricate areas within the data, guiding the model's focus toward regions with higher geometric complexity. This strategy is grounded in the hypothesis that concentrating on harder patches yields a more robust feature representation, as evidenced by the improved performance on downstream tasks. Our method also presents a complete-to-partial feature-level knowledge distillation technique designed to guide the prediction of geometric complexity utilizing a comprehensive context from feature-level information. Extensive experiments confirm our method's superiority over State-Of-The-Art (SOTA) baselines, demonstrating marked improvements in classification, and few-shot tasks. | [
"['Ali Bahri' 'Moslem Yazdanpanah' 'Mehrdad Noori' 'Milad Cheraghalikhani'\n 'Gustavo Adolfo Vargas Hakim' 'David Osowiechi' 'Farzad Beizaee'\n 'Ismail Ben Ayed' 'Christian Desrosiers']"
] |
null | null | 2405.12421 | null | null | http://arxiv.org/pdf/2405.12421v2 | 2024-06-03T23:23:54Z | 2024-05-20T23:59:26Z | A Unified Linear Programming Framework for Offline Reward Learning from
Human Demonstrations and Feedback | Inverse Reinforcement Learning (IRL) and Reinforcement Learning from Human Feedback (RLHF) are pivotal methodologies in reward learning, which involve inferring and shaping the underlying reward function of sequential decision-making problems based on observed human demonstrations and feedback. Most prior work in reward learning has relied on prior knowledge or assumptions about decision or preference models, potentially leading to robustness issues. In response, this paper introduces a novel linear programming (LP) framework tailored for offline reward learning. Utilizing pre-collected trajectories without online exploration, this framework estimates a feasible reward set from the primal-dual optimality conditions of a suitably designed LP, and offers an optimality guarantee with provable sample efficiency. Our LP framework also enables aligning the reward functions with human feedback, such as pairwise trajectory comparison data, while maintaining computational tractability and sample efficiency. We demonstrate that our framework potentially achieves better performance compared to the conventional maximum likelihood estimation (MLE) approach through analytical examples and numerical experiments. | [
"['Kihyun Kim' 'Jiawei Zhang' 'Asuman Ozdaglar' 'Pablo A. Parrilo']"
] |
null | null | 2405.12424 | null | null | http://arxiv.org/pdf/2405.12424v2 | 2024-05-30T23:54:53Z | 2024-05-21T00:26:11Z | Rethinking Robustness Assessment: Adversarial Attacks on Learning-based
Quadrupedal Locomotion Controllers | Legged locomotion has recently achieved remarkable success with the progress of machine learning techniques, especially deep reinforcement learning (RL). Controllers employing neural networks have demonstrated empirical and qualitative robustness against real-world uncertainties, including sensor noise and external perturbations. However, formally investigating the vulnerabilities of these locomotion controllers remains a challenge. This difficulty arises from the requirement to pinpoint vulnerabilities across a long-tailed distribution within a high-dimensional, temporally sequential space. As a first step towards quantitative verification, we propose a computational method that leverages sequential adversarial attacks to identify weaknesses in learned locomotion controllers. Our research demonstrates that even state-of-the-art robust controllers can fail significantly under well-designed, low-magnitude adversarial sequences. Through experiments in simulation and on the real robot, we validate our approach's effectiveness, and we illustrate how the results it generates can be used to robustify the original policy and offer valuable insights into the safety of these black-box policies. Project page: https://fanshi14.github.io/me/rss24.html | [
"['Fan Shi' 'Chong Zhang' 'Takahiro Miki' 'Joonho Lee' 'Marco Hutter'\n 'Stelian Coros']"
] |
null | null | 2405.12427 | null | null | http://arxiv.org/pdf/2405.12427v1 | 2024-05-21T00:36:34Z | 2024-05-21T00:36:34Z | Deep learning approaches to indoor wireless channel estimation for
low-power communication | In the rapidly growing development of the Internet of Things (IoT) infrastructure, achieving reliable wireless communication is a challenge. IoT devices operate in diverse environments with common signal interference and fluctuating channel conditions. Accurate channel estimation helps adapt the transmission strategies to current conditions, ensuring reliable communication. Traditional methods, such as Least Squares (LS) and Minimum Mean Squared Error (MMSE) estimation techniques, often struggle to adapt to the diverse and complex environments typical of IoT networks. This research article delves into the potential of Deep Learning (DL) to enhance channel estimation, focusing on the Received Signal Strength Indicator (RSSI) metric - a critical yet challenging aspect due to its susceptibility to noise and environmental factors. This paper presents two Fully Connected Neural Networks (FCNNs)-based Low Power (LP-IoT) channel estimation models, leveraging RSSI for accurate channel estimation in LP-IoT communication. Our Model A exhibits a remarkable 99.02% reduction in Mean Squared Error (MSE), and Model B demonstrates a notable 90.03% MSE reduction compared to the benchmarks set by current studies. Additionally, the comparative studies of our Model A with other DL-based techniques show the significant efficiency of our estimation models. | [
"['Samrah Arif' 'Muhammad Arif Khan' 'Sabih Ur Rehman']"
] |
null | null | 2405.12439 | null | null | http://arxiv.org/pdf/2405.12439v1 | 2024-05-21T01:31:44Z | 2024-05-21T01:31:44Z | No-Regret M${}^{\natural}$-Concave Function Maximization: Stochastic
Bandit Algorithms and NP-Hardness of Adversarial Full-Information Setting | M${}^{\natural}$-concave functions, a.k.a. gross substitute valuation functions, play a fundamental role in many fields, including discrete mathematics and economics. In practice, perfect knowledge of M${}^{\natural}$-concave functions is often unavailable a priori, and we can optimize them only interactively based on some feedback. Motivated by such situations, we study online M${}^{\natural}$-concave function maximization problems, which are interactive versions of the problem studied by Murota and Shioura (1999). For the stochastic bandit setting, we present $O(T^{-1/2})$-simple regret and $O(T^{2/3})$-regret algorithms under $T$ times access to unbiased noisy value oracles of M${}^{\natural}$-concave functions. A key to proving these results is the robustness of the greedy algorithm to local errors in M${}^{\natural}$-concave function maximization, which is one of our main technical results. While we obtain those positive results for the stochastic setting, another main result of our work is an impossibility in the adversarial setting. We prove that, even with full-information feedback, no algorithms that run in polynomial time per round can achieve $O(T^{1-c})$ regret for any constant $c > 0$ unless $\mathsf{P} = \mathsf{NP}$. Our proof is based on a reduction from the matroid intersection problem for three matroids, which would be a novel idea in the context of online learning. | [
"['Taihei Oki' 'Shinsaku Sakaue']"
] |
null | null | 2405.12443 | null | null | http://arxiv.org/pdf/2405.12443v1 | 2024-05-21T01:39:11Z | 2024-05-21T01:39:11Z | FFCL: Forward-Forward Net with Cortical Loops, Training and Inference on
Edge Without Backpropagation | The Forward-Forward Learning (FFL) algorithm is a recently proposed solution for training neural networks without needing memory-intensive backpropagation. During training, labels accompany input data, classifying them as positive or negative inputs. Each layer learns its response to these inputs independently. In this study, we enhance the FFL with the following contributions: 1) We optimize label processing by segregating label and feature forwarding between layers, enhancing learning performance. 2) By revising label integration, we enhance the inference process, reduce computational complexity, and improve performance. 3) We introduce feedback loops akin to cortical loops in the brain, where information cycles through and returns to earlier neurons, enabling layers to combine complex features from previous layers with lower-level features, enhancing learning efficiency. | [
"['Ali Karkehabadi' 'Houman Homayoun' 'Avesta Sasan']"
] |
null | null | 2405.12452 | null | null | http://arxiv.org/pdf/2405.12452v1 | 2024-05-21T02:06:40Z | 2024-05-21T02:06:40Z | Prompt-Enhanced Spatio-Temporal Graph Transfer Learning | Spatio-temporal graph neural networks have demonstrated efficacy in capturing complex dependencies for urban computing tasks such as forecasting and kriging. However, their performance is constrained by the reliance on extensive data for training on specific tasks, which limits their adaptability to new urban domains with varied demands. Although transfer learning has been proposed to address this problem by leveraging knowledge across domains, cross-task generalization remains underexplored in spatio-temporal graph transfer learning methods due to the absence of a unified framework. To bridge this gap, we propose Spatio-Temporal Graph Prompting (STGP), a prompt-enhanced transfer learning framework capable of adapting to diverse tasks in data-scarce domains. Specifically, we first unify different tasks into a single template and introduce a task-agnostic network architecture that aligns with this template. This approach enables the capture of spatio-temporal dependencies shared across tasks. Furthermore, we employ learnable prompts to achieve domain and task transfer in a two-stage prompting pipeline, enabling the prompts to effectively capture domain knowledge and task-specific properties at each stage. Extensive experiments demonstrate that STGP outperforms state-of-the-art baselines in three downstream tasks (forecasting, kriging, and extrapolation) by a notable margin. | [
"['Junfeng Hu' 'Xu Liu' 'Zhencheng Fan' 'Yifang Yin' 'Shili Xiang'\n 'Savitha Ramasamy' 'Roger Zimmermann']"
] |
null | null | 2405.12456 | null | null | http://arxiv.org/pdf/2405.12456v1 | 2024-05-21T02:16:16Z | 2024-05-21T02:16:16Z | Mutual Information Analysis in Multimodal Learning Systems | In recent years, there has been a significant increase in applications of multimodal signal processing and analysis, largely driven by the increased availability of multimodal datasets and the rapid progress in multimodal learning systems. Well-known examples include autonomous vehicles, audiovisual generative systems, vision-language systems, and so on. Such systems integrate multiple signal modalities: text, speech, images, video, LiDAR, etc., to perform various tasks. A key issue for understanding such systems is the relationship between various modalities and how it impacts task performance. In this paper, we employ the concept of mutual information (MI) to gain insight into this issue. Taking advantage of the recent progress in entropy modeling and estimation, we develop a system called InfoMeter to estimate MI between modalities in a multimodal learning system. We then apply InfoMeter to analyze a multimodal 3D object detection system over a large-scale dataset for autonomous driving. Our experiments on this system suggest that a lower MI between modalities is beneficial for detection accuracy. This new insight may facilitate improvements in the development of future multimodal learning systems. | [
"['Hadi Hadizadeh' 'S. Faegheh Yeganli' 'Bahador Rashidi' 'Ivan V. Bajić']"
] |
null | null | 2405.12459 | null | null | http://arxiv.org/pdf/2405.12459v1 | 2024-05-21T02:33:17Z | 2024-05-21T02:33:17Z | PLM4Traj: Cognizing Movement Patterns and Travel Purposes from
Trajectories with Pre-trained Language Models | Spatio-temporal trajectories play a vital role in various spatio-temporal data mining tasks. Developing a versatile trajectory learning approach that can adapt to different tasks while ensuring high accuracy is crucial. This requires effectively extracting movement patterns and travel purposes embedded in trajectories. However, this task is challenging due to limitations in the size and quality of available trajectory datasets. On the other hand, pre-trained language models (PLMs) have shown great success in adapting to different tasks by training on large-scale, high-quality corpus datasets. Given the similarities between trajectories and sentences, there is potential in leveraging PLMs to enhance the development of a versatile and effective trajectory learning method. Nevertheless, vanilla PLMs are not tailored to handle the unique spatio-temporal features present in trajectories and lack the capability to extract movement patterns and travel purposes from them. To overcome these obstacles, we propose a model called PLM4Traj that effectively utilizes PLMs to model trajectories. PLM4Traj leverages the strengths of PLMs to create a versatile trajectory learning approach while addressing the limitations of vanilla PLMs in modeling trajectories. Firstly, PLM4Traj incorporates a novel trajectory semantic embedder that enables PLMs to process spatio-temporal features in trajectories and extract movement patterns and travel purposes from them. Secondly, PLM4Traj introduces a novel trajectory prompt that integrates movement patterns and travel purposes into PLMs, while also allowing the model to adapt to various tasks. Extensive experiments conducted on two real-world datasets and two representative tasks demonstrate that PLM4Traj successfully achieves its design goals. Codes are available at https://github.com/Zeru19/PLM4Traj. | [
"['Zeyu Zhou' 'Yan Lin' 'Haomin Wen' 'Shengnan Guo' 'Jilin Hu'\n 'Youfang Lin' 'Huaiyu Wan']"
] |
null | null | 2405.12462 | null | null | http://arxiv.org/pdf/2405.12462v2 | 2024-05-22T12:12:15Z | 2024-05-21T02:37:47Z | Boosting X-formers with Structured Matrix for Long Sequence Time Series
Forecasting | Transformer-based models for long sequence time series forecasting (LSTF) problems have gained significant attention due to their exceptional forecasting precision. As the cornerstone of these models, the self-attention mechanism poses a challenge to efficient training and inference due to its quadratic time complexity. In this article, we propose a novel architectural design for Transformer-based models in LSTF, leveraging a substitution framework that incorporates Surrogate Attention Blocks and Surrogate FFN Blocks. The framework aims to boost any well-designed model's efficiency without sacrificing its accuracy. We further establish the equivalence of the Surrogate Attention Block to the self-attention mechanism in terms of both expressiveness and trainability. Through extensive experiments encompassing nine Transformer-based models across five time series tasks, we observe an average performance improvement of 9.45% while achieving a significant reduction in model size by 46% | [
"['Zhicheng Zhang' 'Yong Wang' 'Shaoqi Tan' 'Bowei Xia' 'Yujie Luo']"
] |
null | null | 2405.12463 | null | null | http://arxiv.org/pdf/2405.12463v1 | 2024-05-21T02:39:45Z | 2024-05-21T02:39:45Z | Stochastic Learning of Computational Resource Usage as Graph Structured
Multimarginal Schrödinger Bridge | We propose to learn the time-varying stochastic computational resource usage of software as a graph structured Schrödinger bridge problem. In general, learning the computational resource usage from data is challenging because resources such as the number of CPU instructions and the number of last level cache requests are both time-varying and statistically correlated. Our proposed method enables learning the joint time-varying stochasticity in computational resource usage from the measured profile snapshots in a nonparametric manner. The method can be used to predict the most-likely time-varying distribution of computational resource availability at a desired time. We provide detailed algorithms for stochastic learning in both single and multi-core cases, discuss the convergence guarantees, computational complexities, and demonstrate their practical use in two case studies: a single-core nonlinear model predictive controller, and a synthetic multi-core software. | [
"['Georgiy A. Bondar' 'Robert Gifford' 'Linh Thi Xuan Phan'\n 'Abhishek Halder']"
] |
null | null | 2405.12465 | null | null | http://arxiv.org/pdf/2405.12465v2 | 2024-05-22T05:53:13Z | 2024-05-21T02:41:40Z | A finite element-based physics-informed operator learning framework for
spatiotemporal partial differential equations on arbitrary domains | We propose a novel finite element-based physics-informed operator learning framework that allows for predicting spatiotemporal dynamics governed by partial differential equations (PDEs). The proposed framework employs a loss function inspired by the finite element method (FEM) with the implicit Euler time integration scheme. A transient thermal conduction problem is considered to benchmark the performance. The proposed operator learning framework takes a temperature field at the current time step as input and predicts a temperature field at the next time step. The Galerkin discretized weak formulation of the heat equation is employed to incorporate physics into the loss function, which is coined finite operator learning (FOL). Upon training, the networks successfully predict the temperature evolution over time for any initial temperature field at high accuracy compared to the FEM solution. The framework is also confirmed to be applicable to a heterogeneous thermal conductivity and arbitrary geometry. The advantages of FOL can be summarized as follows: First, the training is performed in an unsupervised manner, avoiding the need for a large data set prepared from costly simulations or experiments. Instead, random temperature patterns generated by the Gaussian random process and the Fourier series, combined with constant temperature fields, are used as training data to cover possible temperature cases. Second, shape functions and backward difference approximation are exploited for the domain discretization, resulting in a purely algebraic equation. This enhances training efficiency, as one avoids time-consuming automatic differentiation when optimizing weights and biases while accepting possible discretization errors. Finally, thanks to the interpolation power of FEM, any arbitrary geometry can be handled with FOL, which is crucial to addressing various engineering application scenarios. | [
"['Yusuke Yamazaki' 'Ali Harandi' 'Mayu Muramatsu' 'Alexandre Viardin'\n 'Markus Apel' 'Tim Brepols' 'Stefanie Reese' 'Shahed Rezaei']"
] |
null | null | 2405.12474 | null | null | http://arxiv.org/pdf/2405.12474v1 | 2024-05-21T03:28:45Z | 2024-05-21T03:28:45Z | How Universal Polynomial Bases Enhance Spectral Graph Neural Networks:
Heterophily, Over-smoothing, and Over-squashing | Spectral Graph Neural Networks (GNNs), alternatively known as graph filters, have gained increasing prevalence for heterophily graphs. Optimal graph filters rely on Laplacian eigendecomposition for Fourier transform. In an attempt to avert prohibitive computations, numerous polynomial filters have been proposed. However, polynomials in the majority of these filters are predefined and remain fixed across different graphs, failing to accommodate the varying degrees of heterophily. Addressing this gap, we demystify the intrinsic correlation between the spectral property of desired polynomial bases and the heterophily degrees via thorough theoretical analyses. Subsequently, we develop a novel adaptive heterophily basis wherein the basis vectors mutually form angles reflecting the heterophily degree of the graph. We integrate this heterophily basis with the homophily basis to construct a universal polynomial basis UniBasis, which devises a polynomial filter based graph neural network - UniFilter. It optimizes the convolution and propagation in GNN, thus effectively limiting over-smoothing and alleviating over-squashing. Our extensive experiments, conducted on a diverse range of real-world and synthetic datasets with varying degrees of heterophily, support the superiority of UniFilter. These results not only demonstrate the universality of UniBasis but also highlight its proficiency in graph explanation. | [
"['Keke Huang' 'Yu Guang Wang' 'Ming Li' 'and Pietro Liò']"
] |
null | null | 2405.12475 | null | null | http://arxiv.org/pdf/2405.12475v1 | 2024-05-21T03:33:07Z | 2024-05-21T03:33:07Z | GASE: Graph Attention Sampling with Edges Fusion for Solving Vehicle
Routing Problems | Learning-based methods have become increasingly popular for solving vehicle routing problems due to their near-optimal performance and fast inference speed. Among them, the combination of deep reinforcement learning and graph representation allows for the abstraction of node topology structures and features in an encoder-decoder style. Such an approach makes it possible to solve routing problems end-to-end without needing complicated heuristic operators designed by domain experts. Existing research studies have been focusing on novel encoding and decoding structures via various neural network models to enhance the node embedding representation. Despite the sophisticated approaches applied, there is a noticeable lack of consideration for the graph-theoretic properties inherent to routing problems. Moreover, the potential ramifications of inter-nodal interactions on the decision-making efficacy of the models have not been adequately explored. To bridge this gap, we propose an adaptive Graph Attention Sampling with the Edges Fusion framework (GASE), where node embeddings are determined through attention calculation from certain highly correlated neighbourhoods and edges, utilizing a filtered adjacency matrix. In detail, the selections of particular neighbours and adjacency edges are led by a multi-head attention mechanism, contributing directly to the message passing and node embedding in graph attention sampling networks. Furthermore, we incorporate an adaptive actor-critic algorithm with policy improvements to expedite the training convergence. We then conduct comprehensive experiments against baseline methods on learning-based VRP tasks from different perspectives. Our proposed model outperforms the existing methods by 2.08%-6.23% and shows stronger generalization ability, achieving state-of-the-art performance on randomly generated instances and real-world datasets. | [
"['Zhenwei Wang' 'Ruibin Bai' 'Fazlullah Khan' 'Ender Ozcan' 'Tiehua Zhang']"
] |
null | null | 2405.12489 | null | null | http://arxiv.org/pdf/2405.12489v3 | 2024-06-29T00:46:04Z | 2024-05-21T04:18:57Z | Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks | Exploring the loss landscape offers insights into the inherent principles of deep neural networks (DNNs). Recent work suggests an additional asymmetry of the valley beyond the flat and sharp ones, yet without thoroughly examining its causes or implications. Our study methodically explores the factors affecting the symmetry of DNN valleys, encompassing (1) the dataset, network architecture, initialization, and hyperparameters that influence the convergence point; and (2) the magnitude and direction of the noise for 1D visualization. Our major observation shows that the {\it degree of sign consistency} between the noise and the convergence point is a critical indicator of valley symmetry. Theoretical insights from the aspects of ReLU activation and softmax function could explain the interesting phenomenon. Our discovery propels novel understanding and applications in the scenario of Model Fusion: (1) the efficacy of interpolating separate models significantly correlates with their sign consistency ratio, and (2) imposing sign alignment during federated learning emerges as an innovative approach for model parameter alignment. | [
"['Xin-Chun Li' 'Jin-Lin Tang' 'Bo Zhang' 'Lan Li' 'De-Chuan Zhan']"
] |
null | null | 2405.12493 | null | null | http://arxiv.org/pdf/2405.12493v1 | 2024-05-21T04:30:09Z | 2024-05-21T04:30:09Z | Visualizing, Rethinking, and Mining the Loss Landscape of Deep Neural
Networks | The loss landscape of deep neural networks (DNNs) is commonly considered complex and wildly fluctuated. However, an interesting observation is that the loss surfaces plotted along Gaussian noise directions are almost v-basin ones with the perturbed model lying on the basin. This motivates us to rethink whether the 1D or 2D subspace could cover more complex local geometry structures, and how to mine the corresponding perturbation directions. This paper systematically and gradually categorizes the 1D curves from simple to complex, including v-basin, v-side, w-basin, w-peak, and vvv-basin curves. Notably, the latter two types are already hard to obtain via the intuitive construction of specific perturbation directions, and we need to propose proper mining algorithms to plot the corresponding 1D curves. Combining these 1D directions, various types of 2D surfaces are visualized such as the saddle surfaces and the bottom of a bottle of wine that are only shown by demo functions in previous works. Finally, we propose theoretical insights from the lens of the Hessian matrix to explain the observed several interesting phenomena. | [
"['Xin-Chun Li' 'Lan Li' 'De-Chuan Zhan']"
] |
null | null | 2405.12500 | null | null | http://arxiv.org/pdf/2405.12500v1 | 2024-05-21T05:00:30Z | 2024-05-21T05:00:30Z | Entropic associative memory for real world images | The entropic associative memory (EAM) is a computational model of natural memory incorporating some of its putative properties of being associative, distributed, declarative, abstractive and constructive. Previous experiments satisfactorily tested the model on structured, homogeneous and conventional data: images of manuscript digits and letters, images of clothing, and phone representations. In this work we show that EAM appropriately stores, recognizes and retrieves complex and unconventional images of animals and vehicles. Additionally, the memory system generates meaningful retrieval association chains for such complex images. The retrieved objects can be seen as proper memories, associated recollections or products of imagination. | [
"['Noé Hernández' 'Rafael Morales' 'Luis A. Pineda']"
] |
null | null | 2405.12502 | null | null | http://arxiv.org/abs/2405.12502v3 | 2024-06-29T01:40:46Z | 2024-05-21T05:17:43Z | EntropyStop: Unsupervised Deep Outlier Detection with Loss Entropy | Unsupervised Outlier Detection (UOD) is an important data mining task. With the advance of deep learning, deep Outlier Detection (OD) has received broad interest. Most deep UOD models are trained exclusively on clean datasets to learn the distribution of the normal data, which requires huge manual efforts to clean the real-world data if possible. Instead of relying on clean datasets, some approaches directly train and detect on unlabeled contaminated datasets, leading to the need for methods that are robust to such conditions. Ensemble methods emerged as a superior solution to enhance model robustness against contaminated training sets. However, the training time is greatly increased by the ensemble. In this study, we investigate the impact of outliers on the training phase, aiming to halt training on unlabeled contaminated datasets before performance degradation. Initially, we noted that blending normal and anomalous data causes AUC fluctuations, a label-dependent measure of detection accuracy. To circumvent the need for labels, we propose a zero-label entropy metric named Loss Entropy for loss distribution, enabling us to infer optimal stopping points for training without labels. Meanwhile, we theoretically demonstrate a negative correlation between the entropy metric and the label-based AUC. Based on this, we develop an automated early-stopping algorithm, EntropyStop, which halts training when loss entropy suggests the maximum model detection capability. We conduct extensive experiments on ADBench (including 47 real datasets), and the overall results indicate that AutoEncoder (AE) enhanced by our approach not only achieves better performance than ensemble AEs but also requires under 2% of the training time. Lastly, our proposed metric and early-stopping approach are evaluated on other deep OD models, exhibiting their broad potential applicability. | [
"['Yihong Huang' 'Yuang Zhang' 'Liping Wang' 'Fan Zhang' 'Xuemin Lin']"
] |
null | null | 2405.12519 | null | null | http://arxiv.org/pdf/2405.12519v1 | 2024-05-21T06:12:24Z | 2024-05-21T06:12:24Z | MAGE: Model-Level Graph Neural Networks Explanations via Motif-based
Graph Generation | Graph Neural Networks (GNNs) have shown remarkable success in molecular tasks, yet their interpretability remains challenging. Traditional model-level explanation methods like XGNN and GNNInterpreter often fail to identify valid substructures like rings, leading to questionable interpretability. This limitation stems from XGNN's atom-by-atom approach and GNNInterpreter's reliance on average graph embeddings, which overlook the essential structural elements crucial for molecules. To address these gaps, we introduce an innovative \textbf{M}otif-b\textbf{A}sed \textbf{G}NN \textbf{E}xplainer (MAGE) that uses motifs as fundamental units for generating explanations. Our approach begins with extracting potential motifs through a motif decomposition technique. Then, we utilize an attention-based learning method to identify class-specific motifs. Finally, we employ a motif-based graph generator for each class to create molecular graph explanations based on these class-specific motifs. This novel method not only incorporates critical substructures into the explanations but also guarantees their validity, yielding results that are human-understandable. Our proposed method's effectiveness is demonstrated through quantitative and qualitative assessments conducted on six real-world molecular datasets. | [
"['Zhaoning Yu' 'Hongyang Gao']"
] |
null | null | 2405.12521 | null | null | http://arxiv.org/pdf/2405.12521v1 | 2024-05-21T06:23:47Z | 2024-05-21T06:23:47Z | Unleash Graph Neural Networks from Heavy Tuning | Graph Neural Networks (GNNs) are deep-learning architectures designed for graph-type data, where understanding relationships among individual observations is crucial. However, achieving promising GNN performance, especially on unseen data, requires comprehensive hyperparameter tuning and meticulous training. Unfortunately, these processes come with high computational costs and significant human effort. Additionally, conventional searching algorithms such as grid search may result in overfitting on validation data, diminishing generalization accuracy. To tackle these challenges, we propose a graph conditional latent diffusion framework (GNN-Diff) to generate high-performing GNNs directly by learning from checkpoints saved during a light-tuning coarse search. Our method: (1) unleashes GNN training from heavy tuning and complex search space design; (2) produces GNN parameters that outperform those obtained through comprehensive grid search; and (3) establishes higher-quality generation for GNNs compared to diffusion frameworks designed for general neural networks. | [
"['Lequan Lin' 'Dai Shi' 'Andi Han' 'Zhiyong Wang' 'Junbin Gao']"
] |
null | null | 2405.12522 | null | null | http://arxiv.org/pdf/2405.12522v1 | 2024-05-21T06:26:10Z | 2024-05-21T06:26:10Z | Sparse Autoencoders Enable Scalable and Reliable Circuit Identification
in Language Models | This paper introduces an efficient and robust method for discovering interpretable circuits in large language models using discrete sparse autoencoders. Our approach addresses key limitations of existing techniques, namely computational complexity and sensitivity to hyperparameters. We propose training sparse autoencoders on carefully designed positive and negative examples, where the model can only correctly predict the next token for the positive examples. We hypothesise that learned representations of attention head outputs will signal when a head is engaged in specific computations. By discretising the learned representations into integer codes and measuring the overlap between codes unique to positive examples for each head, we enable direct identification of attention heads involved in circuits without the need for expensive ablations or architectural modifications. On three well-studied tasks - indirect object identification, greater-than comparisons, and docstring completion - the proposed method achieves higher precision and recall in recovering ground-truth circuits compared to state-of-the-art baselines, while reducing runtime from hours to seconds. Notably, we require only 5-10 text examples for each task to learn robust representations. Our findings highlight the promise of discrete sparse autoencoders for scalable and efficient mechanistic interpretability, offering a new direction for analysing the inner workings of large language models. | [
"[\"Charles O'Neill\" 'Thang Bui']"
] |
null | null | 2405.12531 | null | null | http://arxiv.org/pdf/2405.12531v1 | 2024-05-21T06:43:03Z | 2024-05-21T06:43:03Z | CustomText: Customized Textual Image Generation using Diffusion Models | Textual image generation spans diverse fields like advertising, education, product packaging, social media, information visualization, and branding. Despite recent strides in language-guided image synthesis using diffusion models, current models excel in image generation but struggle with accurate text rendering and offer limited control over font attributes. In this paper, we aim to enhance the synthesis of high-quality images with precise text customization, thereby contributing to the advancement of image generation models. We call our proposed method CustomText. Our implementation leverages a pre-trained TextDiffuser model to enable control over font color, background, and types. Additionally, to address the challenge of accurately rendering small-sized fonts, we train the ControlNet model for a consistency decoder, significantly enhancing text-generation performance. We assess the performance of CustomText in comparison to previous methods of textual image generation on the publicly available CTW-1500 dataset and a self-curated dataset for small-text generation, showcasing superior results. | [
"['Shubham Paliwal' 'Arushi Jain' 'Monika Sharma' 'Vikram Jamwal'\n 'Lovekesh Vig']"
] |
null | null | 2405.12553 | null | null | http://arxiv.org/pdf/2405.12553v1 | 2024-05-21T07:47:21Z | 2024-05-21T07:47:21Z | Uncertainty quantification by block bootstrap for differentially private
stochastic gradient descent | Stochastic Gradient Descent (SGD) is a widely used tool in machine learning. In the context of Differential Privacy (DP), SGD has been well studied in the last years in which the focus is mainly on convergence rates and privacy guarantees. While in the non-private case, uncertainty quantification (UQ) for SGD by bootstrap has been addressed by several authors, these procedures cannot be transferred to differential privacy due to multiple queries to the private data. In this paper, we propose a novel block bootstrap for SGD under local differential privacy that is computationally tractable and does not require an adjustment of the privacy budget. The method can be easily implemented and is applicable to a broad class of estimation problems. We prove the validity of our approach and illustrate its finite sample properties by means of a simulation study. As a by-product, the new method also provides a simple alternative numerical tool for UQ for non-private SGD. | [
"['Holger Dette' 'Carina Graw']"
] |
null | null | 2405.12573 | null | null | http://arxiv.org/pdf/2405.12573v1 | 2024-05-21T08:18:28Z | 2024-05-21T08:18:28Z | EchoPT: A Pretrained Transformer Architecture that Predicts 2D In-Air
Sonar Images for Mobile Robotics | The predictive brain hypothesis suggests that perception can be interpreted as the process of minimizing the error between predicted perception tokens generated by an internal world model and actual sensory input tokens. When implementing working examples of this hypothesis in the context of in-air sonar, significant difficulties arise due to the sparse nature of the reflection model that governs ultrasonic sensing. Despite these challenges, creating consistent world models using sonar data is crucial for implementing predictive processing of ultrasound data in robotics. In an effort to enable robust robot behavior using ultrasound as the sole exteroceptive sensor modality, this paper introduces EchoPT, a pretrained transformer architecture designed to predict 2D sonar images from previous sensory data and robot ego-motion information. We detail the transformer architecture that drives EchoPT and compare the performance of our model to several state-of-the-art techniques. In addition to presenting and evaluating our EchoPT model, we demonstrate the effectiveness of this predictive perception approach in two robotic tasks. | [
"['Jan Steckel' 'Wouter Jansen' 'Nico Huebel']"
] |
null | null | 2405.12584 | null | null | http://arxiv.org/pdf/2405.12584v1 | 2024-05-21T08:27:35Z | 2024-05-21T08:27:35Z | Is Dataset Quality Still a Concern in Diagnosis Using Large Foundation
Model? | Recent advancements in pre-trained large foundation models (LFM) have yielded significant breakthroughs across various domains, including natural language processing and computer vision. These models have been particularly impactful in the domain of medical diagnostic tasks. With abundant unlabeled data, an LFM has been developed for fundus images using the Vision Transformer (VIT) and a self-supervised learning framework. This LFM has shown promising performance in fundus disease diagnosis across multiple datasets. On the other hand, deep learning models have long been challenged by dataset quality issues, such as image quality and dataset bias. To investigate the influence of data quality on LFM, we conducted explorations in two fundus diagnosis tasks using datasets of varying quality. Specifically, we explored the following questions: Is LFM more robust to image quality? Is LFM affected by dataset bias? Can fine-tuning techniques alleviate these effects? Our investigation found that LFM exhibits greater resilience to dataset quality issues, including image quality and dataset bias, compared to typical convolutional networks. Furthermore, we discovered that overall fine-tuning is an effective adapter for LFM to mitigate the impact of dataset quality issues. | [
"['Ziqin Lin' 'Heng Li' 'Zinan Li' 'Huazhu Fu' 'Jiang Liu']"
] |
null | null | 2405.12590 | null | null | http://arxiv.org/pdf/2405.12590v1 | 2024-05-21T08:34:39Z | 2024-05-21T08:34:39Z | Maverick-Aware Shapley Valuation for Client Selection in Federated
Learning | Federated Learning (FL) allows clients to train a model collaboratively without sharing their private data. One key challenge in practical FL systems is data heterogeneity, particularly in handling clients with rare data, also referred to as Mavericks. These clients own one or more data classes exclusively, and the model performance becomes poor without their participation. Thus, utilizing Mavericks throughout training is crucial. In this paper, we first design a Maverick-aware Shapley valuation that fairly evaluates the contribution of Mavericks. The main idea is to compute the clients' Shapley values (SV) class-wise, i.e., per label. Next, we propose FedMS, a Maverick-Shapley client selection mechanism for FL that intelligently selects the clients that contribute the most in each round, by employing our Maverick-aware SV-based contribution score. We show that, compared to an extensive list of baselines, FedMS achieves better model performance and fairer Shapley Rewards distribution. | [
"['Mengwei Yang' 'Ismat Jarin' 'Baturalp Buyukates' 'Salman Avestimehr'\n 'Athina Markopoulou']"
] |
null | null | 2405.12612 | null | null | http://arxiv.org/pdf/2405.12612v1 | 2024-05-21T09:06:36Z | 2024-05-21T09:06:36Z | Tagengo: A Multilingual Chat Dataset | Open source large language models (LLMs) have shown great improvements in recent times. However, many of these models are focused solely on popular spoken languages. We present a high quality dataset of more than 70k prompt-response pairs in 74 languages which consist of human generated prompts and synthetic responses. We use this dataset to train a state-of-the-art open source English LLM to chat multilingually. We evaluate our model on MT-Bench chat benchmarks in 6 languages, finding that our multilingual model outperforms previous state-of-the-art open source LLMs across each language. We further find that training on more multilingual data is beneficial to the performance in a chosen target language (Japanese) compared to simply training on only data in that language. These results indicate the necessity of training on large amounts of high quality multilingual data to make a more accessible LLM. | [
"['Peter Devine']"
] |
null | null | 2405.12615 | null | null | http://arxiv.org/pdf/2405.12615v1 | 2024-05-21T09:10:51Z | 2024-05-21T09:10:51Z | Learning Causal Dynamics Models in Object-Oriented Environments | Causal dynamics models (CDMs) have demonstrated significant potential in addressing various challenges in reinforcement learning. To learn CDMs, recent studies have performed causal discovery to capture the causal dependencies among environmental variables. However, the learning of CDMs is still confined to small-scale environments due to computational complexity and sample efficiency constraints. This paper aims to extend CDMs to large-scale object-oriented environments, which consist of a multitude of objects classified into different categories. We introduce the Object-Oriented CDM (OOCDM) that shares causalities and parameters among objects belonging to the same class. Furthermore, we propose a learning method for OOCDM that enables it to adapt to a varying number of objects. Experiments on large-scale tasks indicate that OOCDM outperforms existing CDMs in terms of causal discovery, prediction accuracy, generalization, and computational efficiency. | [
"['Zhongwei Yu' 'Jingqing Ruan' 'Dengpeng Xing']"
] |
null | null | 2405.12638 | null | null | http://arxiv.org/pdf/2405.12638v1 | 2024-05-21T09:41:56Z | 2024-05-21T09:41:56Z | Multiscale lubrication simulation based on fourier feature networks with
trainable frequency | Rough surface lubrication simulation is crucial for designing and optimizing tribological performance. Despite the growing application of Physical Information Neural Networks (PINNs) in hydrodynamic lubrication analysis, their use has been primarily limited to smooth surfaces. This is because traditional PINN methods suffer from spectral bias, favoring low-frequency features and thus failing to analyze rough surfaces with high-frequency signals. To date, no PINN methods have been reported for rough surface lubrication. To overcome these limitations, this work introduces a novel multi-scale lubrication neural network architecture that utilizes a trainable Fourier feature network. By incorporating learnable feature embedding frequencies, this architecture automatically adapts to various frequency components, thereby enhancing the analysis of rough surface characteristics. This method has been tested across multiple surface morphologies, and the results have been compared with those obtained using the finite element method (FEM). The comparative analysis demonstrates that this approach achieves a high consistency with FEM results. Furthermore, this novel architecture surpasses traditional Fourier feature networks with fixed feature embedding frequencies in both accuracy and computational efficiency. Consequently, the multi-scale lubrication neural network model offers a more efficient tool for rough surface lubrication analysis. | [
"['Yihu Tang' 'Li Huang' 'Limin Wu' 'Xianghui Meng']"
] |
null | null | 2405.12658 | null | null | http://arxiv.org/pdf/2405.12658v1 | 2024-05-21T10:14:50Z | 2024-05-21T10:14:50Z | Mitigating Overconfidence in Out-of-Distribution Detection by Capturing
Extreme Activations | Detecting out-of-distribution (OOD) instances is crucial for the reliable deployment of machine learning models in real-world scenarios. OOD inputs are commonly expected to cause a more uncertain prediction in the primary task; however, there are OOD cases for which the model returns a highly confident prediction. This phenomenon, denoted as "overconfidence", presents a challenge to OOD detection. Specifically, theoretical evidence indicates that overconfidence is an intrinsic property of certain neural network architectures, leading to poor OOD detection. In this work, we address this issue by measuring extreme activation values in the penultimate layer of neural networks and then leverage this proxy of overconfidence to improve on several OOD detection baselines. We test our method on a wide array of experiments spanning synthetic data and real-world data, tabular and image datasets, multiple architectures such as ResNet and Transformer, different training loss functions, and include the scenarios examined in previous theoretical work. Compared to the baselines, our method often grants substantial improvements, with double-digit increases in OOD detection AUC, and it does not damage performance in any scenario. | [
"['Mohammad Azizmalayeri' 'Ameen Abu-Hanna' 'Giovanni Cinà']"
] |
null | null | 2405.12666 | null | null | http://arxiv.org/pdf/2405.12666v1 | 2024-05-21T10:27:34Z | 2024-05-21T10:27:34Z | SYMPLEX: Controllable Symbolic Music Generation using Simplex Diffusion
with Vocabulary Priors | We present a new approach for fast and controllable generation of symbolic music based on the simplex diffusion, which is essentially a diffusion process operating on probabilities rather than the signal space. This objective has been applied in domains such as natural language processing, but here we apply it to generating 4-bar multi-instrument music loops using an orderless representation. We show that our model can be steered with vocabulary priors, which affords a considerable level of control over the music generation process, for instance, infilling in time and pitch and choice of instrumentation -- all without task-specific model adaptation or applying extrinsic control. | [
"['Nicolas Jonason' 'Luca Casini' 'Bob L. T. Sturm']"
] |
null | null | 2405.12684 | null | null | http://arxiv.org/pdf/2405.12684v3 | 2024-06-16T05:30:25Z | 2024-05-21T11:19:50Z | Model Free Prediction with Uncertainty Assessment | Deep nonparametric regression, characterized by the utilization of deep neural networks to learn target functions, has emerged as a focus of research attention in recent years. Despite considerable progress in understanding convergence rates, the absence of asymptotic properties hinders rigorous statistical inference. To address this gap, we propose a novel framework that transforms the deep estimation paradigm into a platform conducive to conditional mean estimation, leveraging the conditional diffusion model. Theoretically, we develop an end-to-end convergence rate for the conditional diffusion model and establish the asymptotic normality of the generated samples. Consequently, we are equipped to construct confidence regions, facilitating robust statistical inference. Furthermore, through numerical experiments, we empirically validate the efficacy of our proposed methodology. | [
"['Yuling Jiao' 'Lican Kang' 'Jin Liu' 'Heng Peng' 'Heng Zuo']"
] |
null | null | 2405.12705 | null | null | http://arxiv.org/pdf/2405.12705v1 | 2024-05-21T11:52:14Z | 2024-05-21T11:52:14Z | Multimodal Adaptive Inference for Document Image Classification with
Anytime Early Exiting | This work addresses the need for a balanced approach between performance and efficiency in scalable production environments for visually-rich document understanding (VDU) tasks. Currently, there is a reliance on large document foundation models that offer advanced capabilities but come with a heavy computational burden. In this paper, we propose a multimodal early exit (EE) model design that incorporates various training strategies, exit layer types and placements. Our goal is to achieve a Pareto-optimal balance between predictive performance and efficiency for multimodal document image classification. Through a comprehensive set of experiments, we compare our approach with traditional exit policies and showcase an improved performance-efficiency trade-off. Our multimodal EE design preserves the model's predictive capabilities, enhancing both speed and latency. This is achieved through a reduction of over 20% in latency, while fully retaining the baseline accuracy. This research represents the first exploration of multimodal EE design within the VDU community, highlighting as well the effectiveness of calibration in improving confidence scores for exiting at different layers. Overall, our findings contribute to practical VDU applications by enhancing both performance and efficiency. | [
"['Omar Hamed' 'Souhail Bakkali' 'Marie-Francine Moens' 'Matthew Blaschko'\n 'Jordy Van Landeghem']"
] |
null | null | 2405.12711 | null | null | http://arxiv.org/pdf/2405.12711v2 | 2024-05-22T19:35:34Z | 2024-05-21T12:00:01Z | A Masked Semi-Supervised Learning Approach for Otago Micro Labels
Recognition | The Otago Exercise Program (OEP) serves as a vital rehabilitation initiative for older adults, aiming to enhance their strength and balance, and consequently prevent falls. While Human Activity Recognition (HAR) systems have been widely employed in recognizing the activities of individuals, existing systems focus on the duration of macro activities (i.e. a sequence of repetitions of the same exercise), neglecting the ability to discern micro activities (i.e. the individual repetitions of the exercises), in the case of OEP. This study presents a novel semi-supervised machine learning approach aimed at bridging this gap in recognizing the micro activities of OEP. To manage the limited dataset size, our model utilizes a Transformer encoder for feature extraction, subsequently classified by a Temporal Convolutional Network (TCN). Simultaneously, the Transformer encoder is employed for masked unsupervised learning to reconstruct input signals. Results indicate that the masked unsupervised learning task enhances the performance of the supervised learning (classification task), as evidenced by f1-scores surpassing the clinically applicable threshold of 0.8. From the micro activities, two clinically relevant outcomes emerge: counting the number of repetitions of each exercise and calculating the velocity during chair rising. These outcomes enable the automatic monitoring of exercise intensity and difficulty in the daily lives of older adults. | [
"['Meng Shang' 'Lenore Dedeyne' 'Jolan Dupont' 'Laura Vercauteren'\n 'Nadjia Amini' 'Laurence Lapauw' 'Evelien Gielen' 'Sabine Verschueren'\n 'Carolina Varon' 'Walter De Raedt' 'Bart Vanrumste']"
] |
null | null | 2405.12716 | null | null | http://arxiv.org/pdf/2405.12716v1 | 2024-05-21T12:19:17Z | 2024-05-21T12:19:17Z | Reinforcement Learning Enabled Peer-to-Peer Energy Trading for Dairy
Farms | Farm businesses are increasingly adopting renewables to enhance energy efficiency and reduce reliance on fossil fuels and the grid. This shift aims to decrease dairy farms' dependence on traditional electricity grids by enabling the sale of surplus renewable energy in Peer-to-Peer markets. However, the dynamic nature of farm communities poses challenges, requiring specialized algorithms for P2P energy trading. To address this, the Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES) has been developed, providing a platform to experiment with Reinforcement Learning techniques. The simulations demonstrate significant cost savings, including a 43% reduction in electricity expenses, a 42% decrease in peak demand, and a 1.91% increase in energy sales compared to baseline scenarios lacking peer-to-peer energy trading or renewable energy sources. | [
"['Mian Ibad Ali Shah' 'Enda Barrett' 'Karl Mason']"
] |
null | null | 2405.12739 | null | null | http://arxiv.org/pdf/2405.12739v1 | 2024-05-21T12:47:17Z | 2024-05-21T12:47:17Z | SPO: Multi-Dimensional Preference Sequential Alignment With Implicit
Reward Modeling | Human preference alignment is critical in building powerful and reliable large language models (LLMs). However, current methods either ignore the multi-dimensionality of human preferences (e.g. helpfulness and harmlessness) or struggle with the complexity of managing multiple reward models. To address these issues, we propose Sequential Preference Optimization (SPO), a method that sequentially fine-tunes LLMs to align with multiple dimensions of human preferences. SPO avoids explicit reward modeling, directly optimizing the models to align with nuanced human preferences. We theoretically derive closed-form optimal SPO policy and loss function. Gradient analysis is conducted to show how SPO manages to fine-tune the LLMs while maintaining alignment on previously optimized dimensions. Empirical results on LLMs of different size and multiple evaluation datasets demonstrate that SPO successfully aligns LLMs across multiple dimensions of human preferences and significantly outperforms the baselines. | [
"['Xingzhou Lou' 'Junge Zhang' 'Jian Xie' 'Lifeng Liu' 'Dong Yan'\n 'Kaiqi Huang']"
] |
null | null | 2405.12754 | null | null | http://arxiv.org/pdf/2405.12754v2 | 2024-06-27T02:03:21Z | 2024-05-21T13:04:53Z | Neural Operator for Accelerating Coronal Magnetic Field Model | Studying the sun's outer atmosphere is challenging due to its complex magnetic fields impacting solar activities. Magnetohydrodynamics (MHD) simulations help model these interactions but are extremely time-consuming (usually on a scale of days). Our research applies the Fourier Neural Operator (FNO) to accelerate the coronal magnetic field modeling, specifically, the Bifrost MHD model. We apply Tensorized FNO (TFNO) to generate solutions from partial differential equations (PDEs) over a 3D domain efficiently. TFNO's performance is compared with other deep learning methods, highlighting its accuracy and scalability. Physics analysis confirms that TFNO is reliable and capable of accelerating MHD simulations with high precision. This advancement improves efficiency in data handling, enhances predictive capabilities, and provides a better understanding of magnetic topologies. | [
"['Yutao Du' 'Qin Li' 'Raghav Gnanasambandam' 'Mengnan Du' 'Haimin Wang'\n 'Bo Shen']"
] |
null | null | 2405.12755 | null | null | http://arxiv.org/pdf/2405.12755v2 | 2024-06-20T07:39:05Z | 2024-05-21T13:06:41Z | Progress Measures for Grokking on Real-world Tasks | Grokking, a phenomenon where machine learning models generalize long after overfitting, has been primarily observed and studied in algorithmic tasks. This paper explores grokking in real-world datasets using deep neural networks for classification under the cross-entropy loss. We challenge the prevalent hypothesis that the $L_2$ norm of weights is the primary cause of grokking by demonstrating that grokking can occur outside the expected range of weight norms. To better understand grokking, we introduce three new progress measures: activation sparsity, absolute weight entropy, and approximate local circuit complexity. These measures are conceptually related to generalization and demonstrate a stronger correlation with grokking in real-world datasets compared to weight norms. Our findings suggest that while weight norms might usually correlate with grokking and our progress measures, they are not causative, and our proposed measures provide a better understanding of the dynamics of grokking. | [
"['Satvik Golechha']"
] |
null | null | 2405.12756 | null | null | http://arxiv.org/pdf/2405.12756v1 | 2024-05-21T13:06:55Z | 2024-05-21T13:06:55Z | Parallel Algorithm for Optimal Threshold Labeling of Ordinal Regression
Methods | Ordinal regression (OR) is classification of ordinal data in which the underlying categorical target variable has a natural ordinal relation for the underlying explanatory variable. For $K$-class OR tasks, threshold methods learn a one-dimensional transformation (1DT) of the explanatory variable so that 1DT values for observations of the explanatory variable preserve the order of label values $1,\ldots,K$ for corresponding observations of the target variable well, and then assign a label prediction to the learned 1DT through threshold labeling, namely, according to the rank of an interval to which the 1DT belongs among intervals on the real line separated by $(K-1)$ threshold parameters. In this study, we propose a parallelizable algorithm to find the optimal threshold labeling, which was developed in previous research, and derive sufficient conditions for that algorithm to successfully output the optimal threshold labeling. In a numerical experiment we performed, the computation time taken for the whole learning process of a threshold method with the optimal threshold labeling could be reduced to approximately 60% by using the proposed algorithm with parallel processing compared to using an existing algorithm based on dynamic programming. | [
"['Ryoya Yamasaki' 'Toshiyuki Tanaka']"
] |
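
For the threshold-method record above (2405.12756), a minimal sketch of the threshold-labeling rule the abstract describes: given a learned 1DT value and $K-1$ sorted thresholds, the label is the rank of the interval the value falls into. This shows only the labeling step, not the paper's parallel algorithm for finding the optimal thresholds; function and variable names are illustrative.

```python
import numpy as np

def threshold_labels(u, thresholds):
    """Assign ordinal labels 1..K to 1DT values `u` given sorted thresholds.

    A value below thresholds[0] gets label 1, a value between thresholds[0]
    and thresholds[1] gets label 2, and so on. np.searchsorted vectorizes the
    interval lookup, so all observations are labeled in one pass.
    """
    thresholds = np.sort(np.asarray(thresholds))
    return np.searchsorted(thresholds, np.asarray(u), side="right") + 1

# Toy example with K = 4 classes (3 thresholds).
u = np.array([-1.2, 0.1, 0.7, 2.5])
print(threshold_labels(u, thresholds=[0.0, 0.5, 1.5]))  # -> [1 2 3 4]
```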
null | null | 2405.12774 | null | null | http://arxiv.org/pdf/2405.12774v1 | 2024-05-21T13:24:05Z | 2024-05-21T13:24:05Z | Blind Separation of Vibration Sources using Deep Learning and
Deconvolution | Vibrations of rotating machinery primarily originate from two sources, both of which are distorted by the machine's transfer function on their way to the sensor: the dominant gear-related vibrations and a low-energy signal linked to bearing faults. The proposed method facilitates the blind separation of vibration sources, eliminating the need for any information about the monitored equipment or external measurements. This method estimates both sources in two stages: initially, the gear signal is isolated using a dilated CNN, followed by the estimation of the bearing fault signal using the squared log envelope of the residual. The effect of the transfer function is removed from both sources using a novel whitening-based deconvolution method (WBD). Both simulation and experimental results demonstrate the method's ability to detect bearing failures early when no additional information is available. This study considers both local and distributed bearing faults, assuming that the vibrations are recorded under stable operating conditions. | [
"['Igor Makienko' 'Michael Grebshtein' 'Eli Gildish']"
] |
null | null | 2405.12779 | null | null | http://arxiv.org/pdf/2405.12779v1 | 2024-05-21T13:26:27Z | 2024-05-21T13:26:27Z | Transformer in Touch: A Survey | The Transformer model, initially achieving significant success in the field of natural language processing, has recently shown great potential in the application of tactile perception. This review aims to comprehensively outline the application and development of Transformers in tactile technology. We first introduce the two fundamental concepts behind the success of the Transformer: the self-attention mechanism and large-scale pre-training. Then, we delve into the application of Transformers in various tactile tasks, including but not limited to object recognition, cross-modal generation, and object manipulation, offering a concise summary of the core methodologies, performance benchmarks, and design highlights. Finally, we suggest potential areas for further research and future work, aiming to generate more interest within the community, tackle existing challenges, and encourage the use of Transformer models in the tactile field. | [
"['Jing Gao' 'Ning Cheng' 'Bin Fang' 'Wenjuan Han']"
] |
null | null | 2405.12781 | null | null | http://arxiv.org/pdf/2405.12781v1 | 2024-05-21T13:28:32Z | 2024-05-21T13:28:32Z | Self-Supervised Modality-Agnostic Pre-Training of Swin Transformers | Unsupervised pre-training has emerged as a transformative paradigm, displaying remarkable advancements in various domains. However, the susceptibility to domain shift, where pre-training data distribution differs from fine-tuning, poses a significant obstacle. To address this, we augment the Swin Transformer to learn from different medical imaging modalities, enhancing downstream performance. Our model, dubbed SwinFUSE (Swin Multi-Modal Fusion for UnSupervised Enhancement), offers three key advantages: (i) it learns from both Computed Tomography (CT) and Magnetic Resonance Images (MRI) during pre-training, resulting in complementary feature representations; (ii) a domain-invariance module (DIM) that effectively highlights salient input regions, enhancing adaptability; (iii) exhibits remarkable generalizability, surpassing the confines of tasks it was initially pre-trained on. Our experiments on two publicly available 3D segmentation datasets show a modest 1-2% performance trade-off compared to single-modality models, yet significant out-performance of up to 27% on out-of-distribution modality. This substantial improvement underscores our proposed approach's practical relevance and real-world applicability. Code is available at: https://github.com/devalab/SwinFUSE | [
"['Abhiroop Talasila' 'Maitreya Maity' 'U. Deva Priyakumar']"
] |
null | null | 2405.12783 | null | null | http://arxiv.org/pdf/2405.12783v1 | 2024-05-21T13:29:24Z | 2024-05-21T13:29:24Z | Epanechnikov Variational Autoencoder | In this paper, we bridge Variational Autoencoders (VAEs) [17] and kernel density estimations (KDEs) [25],[23] by approximating the posterior by KDEs and deriving an upper bound of the Kullback-Leibler (KL) divergence in the evidence lower bound (ELBO). The flexibility of KDEs makes the optimization of posteriors in VAEs possible, which not only addresses the limitations of the Gaussian latent space in the vanilla VAE but also provides a new perspective on estimating the KL-divergence in the ELBO. Under appropriate conditions [9],[3], we show that the Epanechnikov kernel is the optimal choice in minimizing the derived upper bound of the KL-divergence asymptotically. Compared with the Gaussian kernel, the Epanechnikov kernel has compact support, which should make the generated samples less noisy and blurry. The implementation of the Epanechnikov kernel in the ELBO is straightforward as it lies in the "location-scale" family of distributions, where the reparametrization trick can be directly employed. A series of experiments on benchmark datasets such as MNIST, Fashion-MNIST, CIFAR-10 and CelebA further demonstrate the superiority of the Epanechnikov Variational Autoencoder (EVAE) over the vanilla VAE in the quality of reconstructed images, as measured by the FID score and Sharpness [27]. | [
"['Tian Qin' 'Wei-Min Huang']"
] |
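
For the Epanechnikov VAE record above (2405.12783), a minimal sketch of the "location-scale" reparametrization the abstract mentions, using the fact that the median of three iid Uniform(-1, 1) draws follows the Epanechnikov density $\tfrac{3}{4}(1 - x^2)$. This is an illustrative stand-in, not the authors' EVAE implementation; all names are assumptions.

```python
import torch

def epanechnikov_noise(shape, device=None):
    """Draw eps ~ Epanechnikov on [-1, 1]: the median of three iid U(-1, 1)
    samples has density 3/4 (1 - x^2), i.e. the Epanechnikov kernel."""
    u = torch.rand(3, *shape, device=device) * 2.0 - 1.0
    return u.median(dim=0).values

def reparameterize(mu, log_scale):
    """Location-scale reparametrization z = mu + s * eps with compactly
    supported Epanechnikov noise in place of the usual Gaussian noise."""
    eps = epanechnikov_noise(mu.shape, device=mu.device)
    return mu + torch.exp(log_scale) * eps

mu = torch.zeros(4, 8, requires_grad=True)
log_scale = torch.zeros(4, 8, requires_grad=True)
z = reparameterize(mu, log_scale)   # differentiable w.r.t. mu and log_scale
z.sum().backward()
```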
null | null | 2405.12800 | null | null | http://arxiv.org/pdf/2405.12800v2 | 2024-05-22T14:52:07Z | 2024-05-21T13:51:47Z | Deep Reinforcement Learning for Time-Critical Wilderness Search And
Rescue Using Drones | Traditional search and rescue methods in wilderness areas can be time-consuming and have limited coverage. Drones offer a faster and more flexible solution, but optimizing their search paths is crucial. This paper explores the use of deep reinforcement learning to create efficient search missions for drones in wilderness environments. Our approach leverages a priori data about the search area and the missing person in the form of a probability distribution map. This allows the deep reinforcement learning agent to learn optimal flight paths that maximize the probability of finding the missing person quickly. Experimental results show that our method achieves a significant improvement in search times compared to traditional coverage planning and search planning algorithms. In one comparison, deep reinforcement learning is found to outperform other algorithms by over $160\%$, a difference that can mean life or death in real-world search operations. Additionally, unlike previous work, our approach incorporates a continuous action space enabled by cubature, allowing for more nuanced flight patterns. | [
"['Jan-Hendrik Ewers' 'David Anderson' 'Douglas Thomson']"
] |
null | null | 2405.12801 | null | null | http://arxiv.org/pdf/2405.12801v1 | 2024-05-21T13:51:48Z | 2024-05-21T13:51:48Z | Comparing Neighbors Together Makes it Easy: Jointly Comparing Multiple
Candidates for Efficient and Effective Retrieval | A common retrieve-and-rerank paradigm involves retrieving a broad set of relevant candidates using a scalable bi-encoder, followed by applying expensive but more accurate cross-encoders to a limited candidate set. However, this small subset often leads to error propagation from the bi-encoders, thereby restricting the performance of the overall pipeline. To address these issues, we propose the Comparing Multiple Candidates (CMC) framework, which compares a query and multiple candidate embeddings jointly through shallow self-attention layers. While providing contextualized representations, CMC is scalable enough to handle multiple comparisons simultaneously, where comparing 2K candidates takes only twice as long as comparing 100. Practitioners can use CMC as a lightweight and effective reranker to improve top-1 accuracy. Moreover, when integrated with another retriever, CMC reranking can function as a virtually enhanced retriever. This configuration adds only negligible latency compared to using a single retriever (virtual), while significantly improving recall at K (enhanced). Through experiments, we demonstrate that CMC, as a virtually enhanced retriever, significantly improves Recall@k (+6.7, +3.5%-p for R@16, R@64) compared to the initial retrieval stage on the ZeSHEL dataset. Meanwhile, we conduct experiments for direct reranking on entity, passage, and dialogue ranking. The results indicate that CMC is not only faster (11x) than cross-encoders but also often more effective, with improved prediction performance in Wikipedia entity linking (+0.7%-p) and DSTC7 dialogue ranking (+3.3%-p). The code and link to datasets are available at https://github.com/yc-song/cmc | [
"['Jonghyun Song' 'Cheyon Jin' 'Wenlong Zhao' 'Jay-Yoon Lee']"
] |
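
For the CMC record above (2405.12801), a rough sketch of the joint-comparison idea: a query embedding and many candidate embeddings are processed together by a couple of self-attention layers, and each candidate's contextualized vector is mapped to a relevance score. Dimensions, layer counts, and class names are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class JointCandidateScorer(nn.Module):
    """Score many candidates jointly: the query and all candidate embeddings
    attend to each other through a few shallow self-attention layers, and each
    candidate's contextualized vector is mapped to a relevance score."""
    def __init__(self, dim=256, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.score = nn.Linear(dim, 1)

    def forward(self, query_emb, cand_embs):
        # query_emb: (B, dim), cand_embs: (B, N, dim)
        seq = torch.cat([query_emb.unsqueeze(1), cand_embs], dim=1)  # (B, 1+N, dim)
        ctx = self.encoder(seq)
        return self.score(ctx[:, 1:, :]).squeeze(-1)  # (B, N) candidate scores

scorer = JointCandidateScorer()
scores = scorer(torch.randn(2, 256), torch.randn(2, 100, 256))
print(scores.shape)  # torch.Size([2, 100])
```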
null | null | 2405.12802 | null | null | http://arxiv.org/pdf/2405.12802v1 | 2024-05-21T13:53:58Z | 2024-05-21T13:53:58Z | Stochastic Inference of Plate Bending from Heterogeneous Data:
Physics-informed Gaussian Processes via Kirchhoff-Love Theory | Advancements in machine learning and an abundance of structural monitoring data have inspired the integration of mechanical models with probabilistic models to identify a structure's state and quantify the uncertainty of its physical parameters and response. In this paper, we propose an inference methodology for classical Kirchhoff-Love plates via physics-informed Gaussian Processes (GP). A probabilistic model is formulated as a multi-output GP by placing a GP prior on the deflection and deriving the covariance function using the linear differential operators of the plate governing equations. The posteriors of the flexural rigidity, hyperparameters, and plate response are inferred in a Bayesian manner using Markov chain Monte Carlo (MCMC) sampling from noisy measurements. We demonstrate the applicability with two examples: a simply supported plate subjected to a sinusoidal load and a fixed plate subjected to a uniform load. The results illustrate how the proposed methodology can be employed to perform stochastic inference for plate rigidity and physical quantities by integrating measurements from various sensor types and qualities. Potential applications of the presented methodology are in structural health monitoring and uncertainty quantification of plate-like structures. | [
"['Igor Kavrakov' 'Gledson Rodrigo Tondo' 'Guido Morgenthal']"
] |
null | null | 2405.12807 | null | null | http://arxiv.org/pdf/2405.12807v8 | 2024-07-09T05:15:47Z | 2024-05-21T13:58:17Z | FAdam: Adam is a natural gradient optimizer using diagonal empirical
Fisher information | This paper establishes a mathematical foundation for the Adam optimizer, elucidating its connection to natural gradient descent through Riemannian and information geometry. We rigorously analyze the diagonal empirical Fisher information matrix (FIM) in Adam, clarifying all detailed approximations and advocating for the use of log probability functions as loss, which should be based on discrete distributions, due to the limitations of empirical FIM. Our analysis uncovers flaws in the original Adam algorithm, leading to proposed corrections such as enhanced momentum calculations, adjusted bias corrections, adaptive epsilon, and gradient clipping. We refine the weight decay term based on our theoretical framework. Our modified algorithm, Fisher Adam (FAdam), demonstrates superior performance across diverse domains including LLM, ASR, and VQ-VAE, achieving state-of-the-art results in ASR. | [
"['Dongseong Hwang']"
] |
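
For the FAdam record above (2405.12807), a hedged sketch of the core connection the abstract draws: preconditioning gradients by a running estimate of the diagonal empirical Fisher (an EMA of squared gradients), which is the role Adam's second moment plays. The paper's specific corrections (momentum handling, bias corrections, adaptive epsilon, clipping, refined weight decay) are deliberately not reproduced here; function and parameter names are illustrative.

```python
import torch

def diag_fisher_step(params, grads, fisher_ema, lr=1e-3, beta=0.999, eps=1e-8):
    """One natural-gradient-style step preconditioned by a running estimate of
    the diagonal empirical Fisher (an EMA of squared gradients)."""
    for p, g, f in zip(params, grads, fisher_ema):
        f.mul_(beta).addcmul_(g, g, value=1.0 - beta)   # diagonal Fisher EMA
        p.add_(g / (f.sqrt() + eps), alpha=-lr)          # preconditioned update

# Toy usage on a single parameter vector.
w = torch.randn(10)
g = torch.randn(10)          # gradient from some log-likelihood loss
fisher = torch.zeros(10)
diag_fisher_step([w], [g], [fisher])
```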
null | null | 2405.12832 | null | null | http://arxiv.org/pdf/2405.12832v2 | 2024-05-27T15:12:55Z | 2024-05-21T14:36:16Z | Wav-KAN: Wavelet Kolmogorov-Arnold Networks | In this paper, we introduce Wav-KAN, an innovative neural network architecture that leverages the Wavelet Kolmogorov-Arnold Networks (Wav-KAN) framework to enhance interpretability and performance. Traditional multilayer perceptrons (MLPs) and even recent advancements like Spl-KAN face challenges related to interpretability, training speed, robustness, computational efficiency, and performance. Wav-KAN addresses these limitations by incorporating wavelet functions into the Kolmogorov-Arnold network structure, enabling the network to capture both high-frequency and low-frequency components of the input data efficiently. Wavelet-based approximations employ an orthogonal or semi-orthogonal basis and maintain a balance between accurately representing the underlying data structure and avoiding overfitting to the noise. While the continuous wavelet transform (CWT) has a lot of potential, we also employed the discrete wavelet transform (DWT) for multiresolution analysis, which obviated the need for recalculation of the previous steps in finding the details. Analogous to how water conforms to the shape of its container, Wav-KAN adapts to the data structure, resulting in enhanced accuracy, faster training speeds, and increased robustness compared to Spl-KAN and MLPs. Our results highlight the potential of Wav-KAN as a powerful tool for developing interpretable and high-performance neural networks, with applications spanning various fields. This work sets the stage for further exploration and implementation of Wav-KAN in frameworks such as PyTorch and TensorFlow, aiming to make wavelets in KAN as widespread as activation functions like ReLU and sigmoid in universal approximation theory (UAT). The code to replicate the simulations is available at https://github.com/zavareh1/Wav-KAN. | [
"['Zavareh Bozorgasl' 'Hao Chen']"
] |
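
For the Wav-KAN record above (2405.12832), a toy sketch of the wavelet-as-edge-function idea: each input-output edge applies a learnable, shifted and scaled Mexican-hat (Ricker) wavelet, and each output unit sums its incoming edges. This only illustrates the concept under simplified assumptions (unnormalized wavelet, one wavelet per edge); the authors' released code should be consulted for the actual architecture.

```python
import torch
import torch.nn as nn

class WaveletKANLayer(nn.Module):
    """KAN-style layer where every input-output edge applies a learnable
    Mexican-hat (Ricker) wavelet with its own weight, translation and scale,
    and each output unit sums its incoming edge functions."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.shift = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.log_scale = nn.Parameter(torch.zeros(out_dim, in_dim))

    def forward(self, x):                                        # x: (B, in_dim)
        u = (x[:, None, :] - self.shift) / torch.exp(self.log_scale)  # (B, out, in)
        psi = (1.0 - u ** 2) * torch.exp(-0.5 * u ** 2)          # unnormalized Ricker wavelet
        return (self.weight * psi).sum(dim=-1)                   # (B, out_dim)

layer = WaveletKANLayer(8, 4)
print(layer(torch.randn(16, 8)).shape)  # torch.Size([16, 4])
```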
null | null | 2405.12840 | null | null | http://arxiv.org/abs/2405.12840v1 | 2024-05-21T14:45:34Z | 2024-05-21T14:45:34Z | GotFunding: A grant recommendation system based on scientific articles | Obtaining funding is an important part of becoming a successful scientist. Junior faculty spend a great deal of time finding the right agencies and programs that best match their research profile. But what are the factors that influence the best publication--grant matching? Some universities might employ pre-award personnel to understand these factors, but not all institutions can afford to hire them. Historical records of publications funded by grants can help us understand the matching process and also help us develop recommendation systems to automate it. In this work, we present \textsc{GotFunding} (Grant recOmmendaTion based on past FUNDING), a recommendation system trained on the National Institutes of Health's (NIH) grant--publication records. Our system achieves a high performance (NDCG@1 = 0.945) by casting the problem as learning to rank. By analyzing the features that make predictions effective, our results show that the ranking considers as most important: 1) the year difference between publication and grant, 2) the amount of information provided in the publication, and 3) the relevance of the publication to the grant. We discuss future improvements of the system and an online tool for scientists to try. | [
"['Tong Zeng' 'Daniel E. Acuna']"
] |
null | null | 2405.12843 | null | null | http://arxiv.org/pdf/2405.12843v1 | 2024-05-21T14:50:20Z | 2024-05-21T14:50:20Z | OpenCarbonEval: A Unified Carbon Emission Estimation Framework in
Large-Scale AI Models | In recent years, large-scale auto-regressive models have made significant progress in various tasks, such as text or video generation. However, the environmental impact of these models has been largely overlooked, with a lack of assessment and analysis of their carbon footprint. To address this gap, we introduce OpenCarbonEval, a unified framework for integrating large-scale models across diverse modalities to predict carbon emissions, which could provide AI service providers and users with a means to estimate emissions beforehand and help mitigate the environmental pressure associated with these models. In OpenCarbonEval, we propose a dynamic throughput modeling approach that could capture workload and hardware fluctuations in the training process for more precise emissions estimates. Our evaluation results demonstrate that OpenCarbonEval can more accurately predict training emissions than previous methods, and can be seamlessly applied to different modal tasks. Specifically, we show that OpenCarbonEval achieves superior performance in predicting carbon emissions for both visual models and language models. By promoting sustainable AI development and deployment, OpenCarbonEval can help reduce the environmental impact of large-scale models and contribute to a more environmentally responsible future for the AI community. | [
"['Zhaojian Yu' 'Yinghao Wu' 'Zhuotao Deng' 'Yansong Tang'\n 'Xiao-Ping Zhang']"
] |
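
For the OpenCarbonEval record above (2405.12843), a back-of-envelope static estimate of training emissions (energy = device power x device count x hours x PUE; emissions = energy x grid carbon intensity). The paper's contribution is a dynamic throughput model, which this simple formula does not capture; the default PUE and carbon-intensity values are placeholders, not figures from the paper.

```python
def training_emissions_kg(device_power_w, num_devices, hours, pue=1.2,
                          carbon_intensity_kg_per_kwh=0.4):
    """Back-of-envelope training carbon estimate:
    energy (kWh)      = device power * device count * hours * PUE / 1000
    emissions (kgCO2e) = energy * grid carbon intensity
    The default PUE and carbon-intensity values are placeholders."""
    energy_kwh = device_power_w * num_devices * hours * pue / 1000.0
    return energy_kwh * carbon_intensity_kg_per_kwh

# e.g. 64 accelerators at 400 W for 72 hours
print(round(training_emissions_kg(400, 64, 72), 1), "kg CO2e")
```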
null | null | 2405.12847 | null | null | http://arxiv.org/abs/2405.12847v1 | 2024-05-21T14:57:04Z | 2024-05-21T14:57:04Z | A Dataset and Baselines for Measuring and Predicting the Music Piece
Memorability | Nowadays, humans are constantly exposed to music, whether through voluntary streaming services or incidental encounters during commercial breaks. Despite the abundance of music, certain pieces remain more memorable and often gain greater popularity. Inspired by this phenomenon, we focus on measuring and predicting music memorability. To achieve this, we collect a new music piece dataset with reliable memorability labels using a novel interactive experimental procedure. We then train baselines to predict and analyze music memorability, leveraging both interpretable features and audio mel-spectrograms as inputs. To the best of our knowledge, we are the first to explore music memorability using data-driven deep learning-based methods. Through a series of experiments and ablation studies, we demonstrate that while there is room for improvement, predicting music memorability with limited data is possible. Certain intrinsic elements, such as higher valence, arousal, and faster tempo, contribute to memorable music. As prediction techniques continue to evolve, real-life applications like music recommendation systems and music style transfer will undoubtedly benefit from this new area of research. | [
"['Li-Yang Tseng' 'Tzu-Ling Lin' 'Hong-Han Shuai' 'Jen-Wei Huang'\n 'Wen-Whei Chang']"
] |
null | null | 2405.12856 | null | null | http://arxiv.org/pdf/2405.12856v2 | 2024-05-25T22:07:48Z | 2024-05-21T15:13:12Z | LLM Processes: Numerical Predictive Distributions Conditioned on Natural
Language | Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses. Moreover, the expertise needed to integrate this prior knowledge into probabilistic modeling typically limits the application of these models to specialists. Our goal is to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations, guided by natural language text which describes a user's prior knowledge. Large Language Models (LLMs) provide a useful starting point for designing such a tool since they 1) provide an interface where users can incorporate expert insights in natural language and 2) provide an opportunity for leveraging latent problem-relevant knowledge encoded in LLMs that users may not have themselves. We start by exploring strategies for eliciting explicit, coherent numerical predictive distributions from LLMs. We examine these joint predictive distributions, which we call LLM Processes, over arbitrarily-many quantities in settings such as forecasting, multi-dimensional regression, black-box optimization, and image modeling. We investigate the practical details of prompting to elicit coherent predictive distributions, and demonstrate their effectiveness at regression. Finally, we demonstrate the ability to usefully incorporate text into numerical predictions, improving predictive performance and giving quantitative structure that reflects qualitative descriptions. This lets us begin to explore the rich, grounded hypothesis space that LLMs implicitly encode. | [
"['James Requeima' 'John Bronskill' 'Dami Choi' 'Richard E. Turner'\n 'David Duvenaud']"
] |
null | null | 2405.12868 | null | null | http://arxiv.org/pdf/2405.12868v1 | 2024-05-21T15:33:21Z | 2024-05-21T15:33:21Z | Equivariant Spatio-Temporal Attentive Graph Networks to Simulate
Physical Dynamics | Learning to represent and simulate the dynamics of physical systems is a crucial yet challenging task. Existing equivariant Graph Neural Network (GNN) based methods have encapsulated the symmetry of physics, \emph{e.g.}, translations, rotations, etc., leading to better generalization ability. Nevertheless, their frame-to-frame formulation of the task overlooks the non-Markov property mainly incurred by unobserved dynamics in the environment. In this paper, we reformulate dynamics simulation as a spatio-temporal prediction task, by employing the trajectory in the past period to recover the Non-Markovian interactions. We propose Equivariant Spatio-Temporal Attentive Graph Networks (ESTAG), an equivariant version of spatio-temporal GNNs, to fulfill our purpose. At its core, we design a novel Equivariant Discrete Fourier Transform (EDFT) to extract periodic patterns from the history frames, and then construct an Equivariant Spatial Module (ESM) to accomplish spatial message passing, and an Equivariant Temporal Module (ETM) with the forward attention and equivariant pooling mechanisms to aggregate temporal message. We evaluate our model on three real datasets corresponding to the molecular-, protein- and macro-level. Experimental results verify the effectiveness of ESTAG compared to typical spatio-temporal GNNs and equivariant GNNs. | [
"['Liming Wu' 'Zhichao Hou' 'Jirui Yuan' 'Yu Rong' 'Wenbing Huang']"
] |
null | null | 2405.12888 | null | null | http://arxiv.org/pdf/2405.12888v1 | 2024-05-21T15:59:55Z | 2024-05-21T15:59:55Z | Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows | Conservation laws are well-established in the context of Euclidean gradient flow dynamics, notably for linear or ReLU neural network training. Yet, their existence and principles for non-Euclidean geometries and momentum-based dynamics remain largely unknown. In this paper, we characterize "all" conservation laws in this general setting. In stark contrast to the case of gradient flows, we prove that the conservation laws for momentum-based dynamics exhibit temporal dependence. Additionally, we often observe a "conservation loss" when transitioning from gradient flow to momentum dynamics. Specifically, for linear networks, our framework allows us to identify all momentum conservation laws, which are less numerous than in the gradient flow case except in sufficiently over-parameterized regimes. With ReLU networks, no conservation law remains. This phenomenon also manifests in non-Euclidean metrics, used e.g. for Nonnegative Matrix Factorization (NMF): all conservation laws can be determined in the gradient flow context, yet none persists in the momentum case. | [
"['Sibylle Marcotte' 'Rémi Gribonval' 'Gabriel Peyré']"
] |
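
For the conservation-laws record above (2405.12888), a standard worked example of the Euclidean gradient-flow case the abstract contrasts against: for a two-layer linear network, the "balancedness" matrix is conserved under gradient flow. This classical identity is included for context only and is not taken from the paper.

```latex
% Two-layer linear network f(x) = W_2 W_1 x trained by Euclidean gradient flow
% on any loss L(W_1, W_2): the balancedness matrix is conserved over time.
\dot W_1 = -\nabla_{W_1} L, \qquad \dot W_2 = -\nabla_{W_2} L
\quad\Longrightarrow\quad
\frac{d}{dt}\Big( W_1 W_1^{\top} - W_2^{\top} W_2 \Big) = 0 .
```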
null | null | 2405.12892 | null | null | http://arxiv.org/pdf/2405.12892v1 | 2024-05-21T16:02:06Z | 2024-05-21T16:02:06Z | Retrievable Domain-Sensitive Feature Memory for Multi-Domain
Recommendation | With the increase in the business scale and number of domains in online advertising, multi-domain ad recommendation has become a mainstream solution in the industry. The core of multi-domain recommendation is effectively modeling the commonalities and distinctions among domains. Existing works are dedicated to designing model architectures for implicit multi-domain modeling while overlooking an in-depth investigation from a more fundamental perspective of feature distributions. This paper focuses on features with significant differences across various domains in both distributions and effects on model predictions. We refer to these features as domain-sensitive features, which serve as carriers of domain distinctions and are crucial for multi-domain modeling. Experiments demonstrate that existing multi-domain modeling methods may neglect domain-sensitive features, indicating insufficient learning of domain distinctions. To avoid this neglect, we propose a domain-sensitive feature attribution method to identify features that best reflect domain distinctions from the feature set. Further, we design a memory architecture that extracts domain-specific information from domain-sensitive features for the model to retrieve and integrate, thereby enhancing the awareness of domain distinctions. Extensive offline and online experiments demonstrate the superiority of our method in capturing domain distinctions and improving multi-domain recommendation performance. | [
"['Yuang Zhao' 'Zhaocheng Du' 'Qinglin Jia' 'Linxuan Zhang' 'Zhenhua Dong'\n 'Ruiming Tang']"
] |
null | null | 2405.12894 | null | null | http://arxiv.org/pdf/2405.12894v1 | 2024-05-21T16:04:32Z | 2024-05-21T16:04:32Z | Decentralized Federated Learning Over Imperfect Communication Channels | This paper analyzes the impact of imperfect communication channels on decentralized federated learning (D-FL) and subsequently determines the optimal number of local aggregations per training round, adapting to the network topology and imperfect channels. We start by deriving the bias of locally aggregated D-FL models under imperfect channels from the ideal global models requiring perfect channels and aggregations. The bias reveals that excessive local aggregations can accumulate communication errors and degrade convergence. Another important aspect is that we analyze a convergence upper bound of D-FL based on the bias. By minimizing the bound, the optimal number of local aggregations is identified to balance a trade-off with accumulation of communication errors in the absence of knowledge of the channels. With this knowledge, the impact of communication errors can be alleviated, allowing the convergence upper bound to decrease throughout aggregations. Experiments validate our convergence analysis and also identify the optimal number of local aggregations on two widely considered image classification tasks. It is seen that D-FL, with an optimal number of local aggregations, can outperform its potential alternatives by over 10% in training accuracy. | [
"['Weicai Li' 'Tiejun Lv' 'Wei Ni' 'Jingbo Zhao' 'Ekram Hossain'\n 'H. Vincent Poor']"
] |
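
For the decentralized federated learning record above (2405.12894), a toy simulation of the trade-off the abstract analyzes: each extra local (gossip) aggregation averages the models further but also injects more channel noise, which is why the number of local aggregations must be chosen carefully. The mixing matrix, additive-noise channel model, and parameter values are illustrative assumptions, not the paper's setup or bounds.

```python
import numpy as np

def local_aggregations(models, mixing_W, num_aggs, channel_noise_std=0.01,
                       rng=np.random.default_rng(0)):
    """Run `num_aggs` rounds of decentralized (gossip) averaging.

    models:   (n_nodes, dim) array of local model parameters
    mixing_W: (n_nodes, n_nodes) doubly-stochastic mixing matrix
    Each exchanged copy is corrupted by additive Gaussian noise, so more
    aggregation rounds mix the models better but also accumulate channel error.
    """
    x = models.copy()
    for _ in range(num_aggs):
        received = x[None, :, :] + rng.normal(0.0, channel_noise_std,
                                              size=(x.shape[0],) + x.shape)
        x = np.einsum('ij,ijd->id', mixing_W, received)   # node i mixes copies from all j
    return x

n, d = 4, 3
W = np.full((n, n), 1.0 / n)                  # fully connected uniform mixing
models = np.arange(n * d, dtype=float).reshape(n, d)
print(local_aggregations(models, W, num_aggs=2))
```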
null | null | 2405.12910 | null | null | http://arxiv.org/pdf/2405.12910v1 | 2024-05-21T16:30:25Z | 2024-05-21T16:30:25Z | Topic Modelling Case Law Using a Large Language Model and a New Taxonomy
for UK Law: AI Insights into Summary Judgment | This paper addresses a critical gap in legal analytics by developing and applying a novel taxonomy for topic modelling summary judgment cases in the United Kingdom. Using a curated dataset of summary judgment cases, we use the Large Language Model Claude 3 Opus to explore functional topics and trends. We find that Claude 3 Opus correctly classified the topic with an accuracy of 87.10%. The analysis reveals distinct patterns in the application of summary judgments across various legal domains. As case law in the United Kingdom is not originally labelled with keywords or a topic filtering option, the findings not only refine our understanding of the thematic underpinnings of summary judgments but also illustrate the potential of combining traditional and AI-driven approaches in legal classification. Therefore, this paper provides a new and general taxonomy for UK law. The implications of this work serve as a foundation for further research and policy discussions in the field of judicial administration and computational legal research methodologies. | [
"['Holli Sargeant' 'Ahmed Izzidien' 'Felix Steffek']"
] |
null | null | 2405.12926 | null | null | http://arxiv.org/pdf/2405.12926v2 | 2024-06-11T14:22:14Z | 2024-05-21T16:51:28Z | Trusting Fair Data: Leveraging Quality in Fairness-Driven Data Removal
Techniques | In this paper, we deal with bias mitigation techniques that remove specific data points from the training set to aim for a fair representation of the population in that set. Machine learning models are trained on these pre-processed datasets, and their predictions are expected to be fair. However, such approaches may exclude relevant data, making the attained subsets less trustworthy for further usage. To enhance the trustworthiness of prior methods, we propose additional requirements and objectives that the subsets must fulfill in addition to fairness: (1) group coverage, and (2) minimal data loss. While removing entire groups may improve the measured fairness, this practice is very problematic as failing to represent every group cannot be considered fair. In our second concern, we advocate for the retention of data while minimizing discrimination. By introducing a multi-objective optimization problem that considers fairness and data loss, we propose a methodology to find Pareto-optimal solutions that balance these objectives. By identifying such solutions, users can make informed decisions about the trade-off between fairness and data quality and select the most suitable subset for their application. | [
"['Manh Khoi Duong' 'Stefan Conrad']"
] |
null | null | 2405.12930 | null | null | http://arxiv.org/pdf/2405.12930v3 | 2024-07-01T18:22:38Z | 2024-05-21T16:58:35Z | Pytorch-Wildlife: A Collaborative Deep Learning Framework for
Conservation | The alarming decline in global biodiversity, driven by various factors, underscores the urgent need for large-scale wildlife monitoring. In response, scientists have turned to automated deep learning methods for data processing in wildlife monitoring. However, applying these advanced methods in real-world scenarios is challenging due to their complexity and the need for specialized knowledge, primarily because of technical challenges and interdisciplinary barriers. To address these challenges, we introduce Pytorch-Wildlife, an open-source deep learning platform built on PyTorch. It is designed for creating, modifying, and sharing powerful AI models. This platform emphasizes usability and accessibility, making it accessible to individuals with limited or no technical background. It also offers a modular codebase to simplify feature expansion and further development. Pytorch-Wildlife offers an intuitive, user-friendly interface, accessible through local installation or Hugging Face, for animal detection and classification in images and videos. As two real-world applications, Pytorch-Wildlife has been utilized to train animal classification models for species recognition in the Amazon Rainforest and for invasive opossum recognition in the Galapagos Islands. The Opossum model achieves 98% accuracy, and the Amazon model has 92% recognition accuracy for 36 animals in 90% of the data. As Pytorch-Wildlife evolves, we aim to integrate more conservation tasks, addressing various environmental challenges. Pytorch-Wildlife is available at https://github.com/microsoft/CameraTraps. | [
"['Andres Hernandez' 'Zhongqi Miao' 'Luisa Vargas' 'Rahul Dodhia'\n 'Pablo Arbelaez' 'Juan M. Lavista Ferres']"
] |
null | null | 2405.12933 | null | null | http://arxiv.org/pdf/2405.12933v2 | 2024-06-02T18:48:56Z | 2024-05-21T17:04:44Z | Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in
LLMs | Large Language Models (LLMs) have shown remarkable capabilities in tasks such as summarization, arithmetic reasoning, and question answering. However, they encounter significant challenges in the domain of moral reasoning and ethical decision-making, especially in complex scenarios with multiple stakeholders. This paper introduces the Skin-in-the-Game (SKIG) framework, aimed at enhancing moral reasoning in LLMs by exploring decisions' consequences from multiple stakeholder perspectives. Central to SKIG's mechanism is simulating accountability for actions, which, alongside empathy exercises and risk assessment, is pivotal to its effectiveness. We validate SKIG's performance across various moral reasoning benchmarks with proprietary and opensource LLMs, and investigate its crucial components through extensive ablation analyses. | [
"['Bilgehan Sel' 'Priya Shanmugasundaram' 'Mohammad Kachuee' 'Kun Zhou'\n 'Ruoxi Jia' 'Ming Jin']"
] |
null | null | 2405.12940 | null | null | http://arxiv.org/pdf/2405.12940v1 | 2024-05-21T17:13:13Z | 2024-05-21T17:13:13Z | Learning the Infinitesimal Generator of Stochastic Diffusion Processes | We address data-driven learning of the infinitesimal generator of stochastic diffusion processes, essential for understanding numerical simulations of natural and physical systems. The unbounded nature of the generator poses significant challenges, rendering conventional analysis techniques for Hilbert-Schmidt operators ineffective. To overcome this, we introduce a novel framework based on the energy functional for these stochastic processes. Our approach integrates physical priors through an energy-based risk metric in both full and partial knowledge settings. We evaluate the statistical performance of a reduced-rank estimator in reproducing kernel Hilbert spaces (RKHS) in the partial knowledge setting. Notably, our approach provides learning bounds independent of the state space dimension and ensures non-spurious spectral estimation. Additionally, we elucidate how the distortion between the intrinsic energy-induced metric of the stochastic diffusion and the RKHS metric used for generator estimation impacts the spectral learning bounds. | [
"['Vladimir R. Kostic' 'Karim Lounici' 'Helene Halconruy'\n 'Timothee Devergne' 'Massimiliano Pontil']"
] |
null | null | 2405.12946 | null | null | http://arxiv.org/pdf/2405.12946v1 | 2024-05-21T17:17:34Z | 2024-05-21T17:17:34Z | Tutorly: Turning Programming Videos Into Apprenticeship Learning
Environments with LLMs | Online programming videos, including tutorials and streamcasts, are widely popular and contain a wealth of expert knowledge. However, effectively utilizing these resources to achieve targeted learning goals can be challenging. Unlike direct tutoring, video content lacks tailored guidance based on individual learning paces, personalized feedback, and interactive engagement necessary for support and monitoring. Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework. Tutorly, developed as a JupyterLab Plugin, allows learners to (1) set personalized learning goals, (2) engage in learning-by-doing through a conversational LLM-based mentor agent, (3) receive guidance and feedback based on a student model that steers the mentor moves. In a within-subject study with 16 participants learning exploratory data analysis from a streamcast, Tutorly significantly improved their performance from 61.9% to 76.6% based on a post-test questionnaire. Tutorly demonstrates the potential for enhancing programming video learning experiences with LLM and learner modeling. | [
"['Wengxi Li' 'Roy Pea' 'Nick Haber' 'Hari Subramonyam']"
] |
null | null | 2405.12952 | null | null | http://arxiv.org/pdf/2405.12952v1 | 2024-05-21T17:28:06Z | 2024-05-21T17:28:06Z | Truncated Variance Reduced Value Iteration | We provide faster randomized algorithms for computing an $\epsilon$-optimal policy in a discounted Markov decision process with $A_{\text{tot}}$ state-action pairs, bounded rewards, and discount factor $\gamma$. We provide an $\tilde{O}(A_{\text{tot}}[(1 - \gamma)^{-3}\epsilon^{-2} + (1 - \gamma)^{-2}])$-time algorithm in the sampling setting, where the probability transition matrix is unknown but accessible through a generative model which can be queried in $\tilde{O}(1)$-time, and an $\tilde{O}(s + (1-\gamma)^{-2})$-time algorithm in the offline setting where the probability transition matrix is known and $s$-sparse. These results improve upon the prior state-of-the-art which either ran in $\tilde{O}(A_{\text{tot}}[(1 - \gamma)^{-3}\epsilon^{-2} + (1 - \gamma)^{-3}])$ time [Sidford, Wang, Wu, Ye 2018] in the sampling setting, $\tilde{O}(s + A_{\text{tot}} (1-\gamma)^{-3})$ time [Sidford, Wang, Wu, Yang, Ye 2018] in the offline setting, or time at least quadratic in the number of states using interior point methods for linear programming. We achieve our results by building upon prior stochastic variance-reduced value iteration methods [Sidford, Wang, Wu, Yang, Ye 2018]. We provide a variant that carefully truncates the progress of its iterates to improve the variance of new variance-reduced sampling procedures that we introduce to implement the steps. Our method is essentially model-free and can be implemented in $\tilde{O}(A_{\text{tot}})$-space when given generative model access. Consequently, our results take a step in closing the sample-complexity gap between model-free and model-based methods. | [
"['Yujia Jin' 'Ishani Karmarkar' 'Aaron Sidford' 'Jiayi Wang']"
] |
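
For the value-iteration record above (2405.12952), a plain tabular value-iteration sketch showing the Bellman backup that the paper's truncated, variance-reduced method accelerates; it implements none of the sampling or truncation machinery from the paper, and the toy MDP is randomly generated for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Plain value iteration on a known tabular MDP -- the classical baseline
    that sampled, variance-reduced methods aim to speed up.
    P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S, A, _ = P.shape
    v = np.zeros(S)
    while True:
        q = R + gamma * (P @ v)          # (S, A): Bellman backup
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)
        v = v_new

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(5), size=(5, 3))   # random 5-state, 3-action MDP
R = rng.random((5, 3))
v, policy = value_iteration(P, R)
print(v.round(3), policy)
```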
null | null | 2405.12954 | null | null | http://arxiv.org/pdf/2405.12954v2 | 2024-05-22T15:43:42Z | 2024-05-19T03:48:05Z | A Method on Searching Better Activation Functions | The success of artificial neural networks (ANNs) hinges greatly on the judicious selection of an activation function, introducing non-linearity into the network and enabling it to model sophisticated relationships in data. However, the search for activation functions has largely relied on empirical knowledge in the past, lacking theoretical guidance, which has hindered the identification of more effective activation functions. In this work, we offer a proper solution to this issue. Firstly, we theoretically demonstrate the existence of the worst activation function with boundary conditions (WAFBC) from the perspective of information entropy. Furthermore, inspired by the Taylor expansion form of the information entropy functional, we propose the Entropy-based Activation Function Optimization (EAFO) methodology. The EAFO methodology presents a novel perspective for designing static activation functions in deep neural networks and the potential of dynamically optimizing activation during iterative training. Utilizing the EAFO methodology, we derive a novel activation function from ReLU, known as Correction Regularized ReLU (CRReLU). Experiments conducted with the vision transformer and its variants on CIFAR-10, CIFAR-100 and ImageNet-1K datasets demonstrate the superiority of CRReLU over existing corrections of ReLU. In extensive empirical studies on the task of large language model (LLM) fine-tuning, CRReLU exhibits superior performance compared to GELU, suggesting its broader potential for practical applications. | [
"['Haoyuan Sun' 'Zihao Wu' 'Bo Xia' 'Pu Chang' 'Zibin Dong' 'Yifu Yuan'\n 'Yongzhe Chang' 'Xueqian Wang']"
] |