date | arxiv_id | title | authors | github | abstract |
---|---|---|---|---|---|
2024-06-13T00:00:00 | 2406.05955 | Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters | [
"Yixin Song",
"Haotong Xie",
"Zhengyan Zhang",
"Bo Wen",
"Li Ma",
"Zeyu Mi",
"Haibo Chen"
]
| Exploiting activation sparsity is a promising approach to significantly accelerating the inference process of large language models (LLMs) without compromising performance. However, activation sparsity is determined by activation functions, and commonly used ones like SwiGLU and GeGLU exhibit limited sparsity. Simply replacing these functions with ReLU fails to achieve sufficient sparsity. Moreover, inadequate training data can further increase the risk of performance degradation. To address these challenges, we propose a novel dReLU function, which is designed to improve LLM activation sparsity, along with a high-quality training data mixture ratio to facilitate effective sparsification. Additionally, we leverage sparse activation patterns within the Feed-Forward Network (FFN) experts of Mixture-of-Experts (MoE) models to further boost efficiency. By applying our neuron sparsification method to the Mistral and Mixtral models, only 2.5 billion and 4.3 billion parameters are activated per inference iteration, respectively, while achieving even more powerful model performance. Evaluation results demonstrate that this sparsity achieves a 2-5x decoding speedup. Remarkably, on mobile phones, our TurboSparse-Mixtral-47B achieves an inference speed of 11 tokens per second. Our models are available at https://huggingface.co/PowerInfer |
|
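The dReLU idea above lends itself to a compact sketch. Below is a minimal PyTorch illustration (not the paper's code) of a gated FFN in which ReLU is applied to both the gate and up projections, so the element-wise product is exactly zero wherever either branch is inactive; module names, dimensions, and this reading of dReLU are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DReLUFFN(nn.Module):
    """Gated FFN with ReLU on both branches (a dReLU-style variant).

    The element-wise product of two ReLU outputs is zero whenever either
    branch is inactive, so the hidden activations become highly sparse and
    the corresponding rows of `down` could be skipped at inference time.
    Dimensions and naming are illustrative, not taken from the paper.
    """

    def __init__(self, d_model: int = 4096, d_hidden: int = 14336):
        super().__init__()
        self.gate = nn.Linear(d_model, d_hidden, bias=False)
        self.up = nn.Linear(d_model, d_hidden, bias=False)
        self.down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.gate(x)) * torch.relu(self.up(x))
        return self.down(h)

ffn = DReLUFFN(d_model=64, d_hidden=256)
x = torch.randn(2, 8, 64)
h = torch.relu(ffn.gate(x)) * torch.relu(ffn.up(x))
# Each ReLU zeroes roughly half its inputs at random init, so the product
# is zero on roughly three quarters of the hidden units.
print("activation sparsity:", (h == 0).float().mean().item())
```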
2024-06-13T00:00:00 | 2406.06282 | PowerInfer-2: Fast Large Language Model Inference on a Smartphone | [
"Zhenliang Xue",
"Yixin Song",
"Zeyu Mi",
"Le Chen",
"Yubin Xia",
"Haibo Chen"
]
| This paper introduces PowerInfer-2, a framework designed for high-speed inference of Large Language Models (LLMs) on smartphones, particularly effective for models whose sizes exceed the device's memory capacity. The key insight of PowerInfer-2 is to utilize the heterogeneous computation, memory, and I/O resources in smartphones by decomposing traditional matrix computations into fine-grained neuron cluster computations. Specifically, PowerInfer-2 features a polymorphic neuron engine that adapts computational strategies for various stages of LLM inference. Additionally, it introduces segmented neuron caching and fine-grained neuron-cluster-level pipelining, which effectively minimize and conceal the overhead caused by I/O operations. The implementation and evaluation of PowerInfer-2 demonstrate its capability to support a wide array of LLM models on two smartphones, achieving up to a 29.2x speed increase compared with state-of-the-art frameworks. Notably, PowerInfer-2 is the first system to serve the TurboSparse-Mixtral-47B model with a generation rate of 11.68 tokens per second on a smartphone. For models that fit entirely within the memory, PowerInfer-2 can achieve approximately a 40% reduction in memory usage while maintaining inference speeds comparable to llama.cpp and MLC-LLM. For more details, including a demonstration video, please visit the project site at www.powerinfer.ai/v2. |
|
2024-06-13T00:00:00 | 2406.08414 | Discovering Preference Optimization Algorithms with and for Large Language Models | [
"Chris Lu",
"Samuel Holt",
"Claudio Fanconi",
"Alex J. Chan",
"Jakob Foerster",
"Mihaela van der Schaar",
"Robert Tjarko Lange"
]
| https://github.com/luchris429/DiscoPOP | Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs. Typically, preference optimization is approached as an offline supervised learning task using manually-crafted convex loss functions. While these methods are based on theoretical insights, they are inherently constrained by human creativity, so the large search space of possible loss functions remains underexplored. We address this by performing LLM-driven objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously-evaluated performance metrics. This process leads to the discovery of previously-unknown and performant preference optimization algorithms. The best performing of these we call Discovered Preference Optimization (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks. |
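As a rough illustration of the kind of objective DiscoPOP describes, the sketch below blends a logistic (DPO-style) term with an exponential term using a margin-dependent sigmoid weight; the blending rule and the `beta`/`tau` values are assumptions for illustration, not the exact formula from the paper.

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(policy_chosen_logps, policy_rejected_logps,
                            ref_chosen_logps, ref_rejected_logps,
                            beta: float = 0.05, tau: float = 0.05):
    """Preference loss blending a logistic (DPO-style) term with an
    exponential term. The sigmoid mixing weight below is an illustrative
    choice; the exact DiscoPOP blending rule is defined in the paper.
    """
    # Margin between chosen and rejected log-ratios, as in DPO.
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    logistic = -F.logsigmoid(logits)     # DPO / logistic loss
    exponential = torch.exp(-logits)     # exponential loss
    w = torch.sigmoid(logits / tau)      # adaptive, margin-dependent blend (assumed form)
    return (w * logistic + (1.0 - w) * exponential).mean()

# Toy usage with random per-sequence log-probabilities.
lp = torch.randn(4)
loss = blended_preference_loss(lp, lp - 0.5, lp - 0.1, lp - 0.4)
print(loss.item())
```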
2024-06-13T00:00:00 | 2406.04320 | Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models | [
"Ali Behrouz",
"Michele Santacatterina",
"Ramin Zabih"
]
| Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. Traditional State Space Models (SSMs) are classical approaches for univariate time series modeling due to their simplicity and expressive power to represent linear dependencies. They, however, have fundamentally limited expressive power to capture non-linear dependencies, are slow in practice, and fail to model the inter-variate information flow. Despite recent attempts to improve the expressive power of SSMs by using deep structured SSMs, the existing methods are either limited to univariate time series, fail to model complex patterns (e.g., seasonal patterns), fail to dynamically model the dependencies of variate and time dimensions, and/or are input-independent. We present Chimera that uses two input-dependent 2-D SSM heads with different discretization processes to learn long-term progression and seasonal patterns. To improve the efficiency of complex 2D recurrence, we present a fast training using a new 2-dimensional parallel selective scan. We further present and discuss 2-dimensional Mamba and Mamba-2 as the special cases of our 2D SSM. Our experimental evaluation shows the superior performance of Chimera on extensive and diverse benchmarks, including ECG and speech time series classification, long-term and short-term time series forecasting, and time series anomaly detection. |
|
2024-06-13T00:00:00 | 2406.04338 | Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion | [
"Fangfu Liu",
"Hanyang Wang",
"Shunyu Yao",
"Shengjun Zhang",
"Jie Zhou",
"Yueqi Duan"
]
| https://github.com/liuff19/Physics3D | In recent years, there has been rapid development in 3D generation models, opening up new possibilities for applications such as simulating the dynamic movements of 3D objects and customizing their behaviors. However, current 3D generative models tend to focus only on surface features such as color and shape, neglecting the inherent physical properties that govern the behavior of objects in the real world. To accurately simulate physics-aligned dynamics, it is essential to predict the physical properties of materials and incorporate them into the behavior prediction process. Nonetheless, predicting the diverse materials of real-world objects is still challenging due to the complex nature of their physical attributes. In this paper, we propose Physics3D, a novel method for learning various physical properties of 3D objects through a video diffusion model. Our approach involves designing a highly generalizable physical simulation system based on a viscoelastic material model, which enables us to simulate a wide range of materials with high-fidelity capabilities. Moreover, we distill the physical priors from a video diffusion model that contains more understanding of realistic object materials. Extensive experiments demonstrate the effectiveness of our method with both elastic and plastic materials. Physics3D shows great potential for bridging the gap between the physical world and virtual neural space, providing a better integration and application of realistic physical principles in virtual environments. Project page: https://liuff19.github.io/Physics3D. |
2024-06-13T00:00:00 | 2406.08487 | Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models | [
"Yi-Fan Zhang",
"Qingsong Wen",
"Chaoyou Fu",
"Xue Wang",
"Zhang Zhang",
"Liang Wang",
"Rong Jin"
]
| https://github.com/yfzhang114/SliME | Seeing clearly with high resolution is a foundation of Large Multimodal Models (LMMs), which has been proven to be vital for visual perception and reasoning. Existing works usually employ a straightforward resolution upscaling method, where the image consists of global and local branches, with the latter being the sliced image patches but resized to the same resolution as the former. This means that higher resolution requires more local patches, resulting in exorbitant computational expenses, and meanwhile, the dominance of local image tokens may diminish the global context. In this paper, we dive into the problems and propose a new framework as well as an elaborate optimization strategy. Specifically, we extract contextual information from the global view using a mixture of adapters, based on the observation that different adapters excel at different tasks. With regard to local patches, learnable query embeddings are introduced to reduce image tokens, the most important tokens accounting for the user question will be further selected by a similarity-based selector. Our empirical results demonstrate a `less is more' pattern, where utilizing fewer but more informative local image tokens leads to improved performance. Besides, a significant challenge lies in the training strategy, as simultaneous end-to-end training of the global mining block and local compression block does not yield optimal results. We thus advocate for an alternating training way, ensuring balanced learning between global and local aspects. Finally, we also introduce a challenging dataset with high requirements for image detail, enhancing the training of the local compression layer. The proposed method, termed LMM with Sophisticated Tasks, Local image compression, and Mixture of global Experts (SliME), achieves leading performance across various benchmarks with only 2 million training data. |
2024-06-13T00:00:00 | 2406.05074 | Hibou: A Family of Foundational Vision Transformers for Pathology | [
"Dmitry Nechaev",
"Alexey Pchelnikov",
"Ekaterina Ivanova"
]
| https://github.com/HistAI/hibou | Pathology, the microscopic examination of diseased tissue, is critical for diagnosing various medical conditions, particularly cancers. Traditional methods are labor-intensive and prone to human error. Digital pathology, which converts glass slides into high-resolution digital images for analysis by computer algorithms, revolutionizes the field by enhancing diagnostic accuracy, consistency, and efficiency through automated image analysis and large-scale data processing. Foundational transformer pretraining is crucial for developing robust, generalizable models as it enables learning from vast amounts of unannotated data. This paper introduces the Hibou family of foundational vision transformers for pathology, leveraging the DINOv2 framework to pretrain two model variants, Hibou-B and Hibou-L, on a proprietary dataset of over 1 million whole slide images (WSIs) representing diverse tissue types and staining techniques. Our pretrained models demonstrate superior performance on both patch-level and slide-level benchmarks, surpassing existing state-of-the-art methods. Notably, Hibou-L achieves the highest average accuracy across multiple benchmark datasets. To support further research and application in the field, we have open-sourced the Hibou-B model, which can be accessed at https://github.com/HistAI/hibou |
2024-06-13T00:00:00 | 2406.04127 | Are We Done with MMLU? | [
"Aryo Pradipta Gema",
"Joshua Ong Jun Leang",
"Giwon Hong",
"Alessio Devoto",
"Alberto Carlo Maria Mancino",
"Rohit Saxena",
"Xuanli He",
"Yu Zhao",
"Xiaotang Du",
"Mohammad Reza Ghasemi Madani",
"Claire Barale",
"Robert McHardy",
"Joshua Harris",
"Jean Kaddour",
"Emile van Krieken",
"Pasquale Minervini"
]
| Maybe not. We identify and analyse errors in the popular Massive Multitask Language Understanding (MMLU) benchmark. Even though MMLU is widely adopted, our analysis demonstrates numerous ground truth errors that obscure the true capabilities of LLMs. For example, we find that 57% of the analysed questions in the Virology subset contain errors. To address this issue, we introduce a comprehensive framework for identifying dataset errors using a novel error taxonomy. Then, we create MMLU-Redux, which is a subset of 3,000 manually re-annotated questions across 30 MMLU subjects. Using MMLU-Redux, we demonstrate significant discrepancies with the model performance metrics that were originally reported. Our results strongly advocate for revising MMLU's error-ridden questions to enhance its future utility and reliability as a benchmark. Therefore, we open up MMLU-Redux for additional annotation https://huggingface.co/datasets/edinburgh-dawg/mmlu-redux. |
|
2024-06-13T00:00:00 | 2406.07933 | Large Language Model Unlearning via Embedding-Corrupted Prompts | [
"Chris Yuhao Liu",
"Yaxuan Wang",
"Jeffrey Flanigan",
"Yang Liu"
]
| Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present Embedding-COrrupted (ECO) Prompts, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at nearly zero side effects in general domains and domains closely related to the unlearned ones. Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases. |
|
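The inference-time side of the ECO idea above can be sketched as follows: a small classifier flags prompts belonging to the forget set, and flagged prompt embeddings are shifted by a corruption vector that, in the actual method, would be learned offline with zeroth-order optimization against the unlearning objective. All module sizes and the pooling/thresholding choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ECOStylePromptGuard(nn.Module):
    """Sketch of embedding-corrupted prompts at inference time (illustrative,
    not the paper's implementation). A small classifier flags prompts in the
    forget set; flagged prompt embeddings are shifted by a corruption vector
    that would be learned offline in the real method.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.classifier = nn.Sequential(nn.Linear(d_model, 128), nn.ReLU(),
                                        nn.Linear(128, 1))
        # Learned offline (e.g. via zeroth-order optimization) in the real
        # method; a free parameter in this sketch.
        self.corruption = nn.Parameter(torch.zeros(d_model))

    def forward(self, prompt_embeddings: torch.Tensor) -> torch.Tensor:
        # prompt_embeddings: (batch, seq_len, d_model)
        pooled = prompt_embeddings.mean(dim=1)
        forget = torch.sigmoid(self.classifier(pooled)) > 0.5   # (batch, 1)
        corrupted = prompt_embeddings + self.corruption          # broadcast over tokens
        return torch.where(forget.unsqueeze(-1), corrupted, prompt_embeddings)

guard = ECOStylePromptGuard(d_model=32)
emb = torch.randn(4, 10, 32)
out = guard(emb)   # pass `out` to the frozen LLM in place of `emb`
print(out.shape)
```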
2024-06-13T00:00:00 | 2406.04329 | Simplified and Generalized Masked Diffusion for Discrete Data | [
"Jiaxin Shi",
"Kehang Han",
"Zhe Wang",
"Arnaud Doucet",
"Michalis K. Titsias"
]
| Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.78 (CIFAR-10) and 3.42 (ImageNet 64x64) bits per dimension that are comparable or better than autoregressive models of similar sizes. |
|
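A hedged sketch of the weighted cross-entropy view described above: tokens are masked with a probability given by a linear schedule, and the cross-entropy on masked positions is reweighted accordingly. The 1/t weight below corresponds to the linear schedule only; the paper's general, state-dependent form differs.

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model, tokens, mask_id, vocab_size):
    """Monte-Carlo estimate of a masked-diffusion objective as a weighted
    cross-entropy over masked positions. The linear schedule and its 1/t
    weight follow a common presentation of this objective; consult the paper
    for the general state-dependent form.
    """
    b, n = tokens.shape
    t = torch.rand(b, device=tokens.device).clamp_min(1e-3)       # diffusion time per sequence
    mask = torch.rand(b, n, device=tokens.device) < t[:, None]    # mask each token w.p. t
    noisy = torch.where(mask, torch.full_like(tokens, mask_id), tokens)
    logits = model(noisy)                                          # (b, n, vocab_size)
    ce = F.cross_entropy(logits.view(-1, vocab_size), tokens.view(-1),
                         reduction="none").view(b, n)
    # Only masked positions contribute, weighted by 1/t for the linear schedule.
    per_seq = (ce * mask).sum(dim=1) / t
    return per_seq.mean()

# Toy usage with a stand-in "model" that returns random logits.
vocab, mask_id = 100, 99
model = lambda x: torch.randn(x.shape[0], x.shape[1], vocab)
tokens = torch.randint(0, vocab - 1, (4, 16))
print(masked_diffusion_loss(model, tokens, mask_id, vocab).item())
```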
2024-06-13T00:00:00 | 2406.06462 | VCR: Visual Caption Restoration | [
"Tianyu Zhang",
"Suyuchen Wang",
"Lu Li",
"Ge Zhang",
"Perouz Taslakian",
"Sai Rajeswar",
"Jie Fu",
"Bang Liu",
"Yoshua Bengio"
]
| We introduce Visual Caption Restoration (VCR), a novel vision-language task that challenges models to accurately restore partially obscured texts using pixel-level hints within images. This task stems from the observation that text embedded in images is intrinsically different from common visual elements and natural language due to the need to align the modalities of vision, text, and text embedded in images. While numerous works have integrated text embedded in images into visual question-answering tasks, approaches to these tasks generally rely on optical character recognition or masked language modeling, thus reducing the task to mainly text-based processing. However, text-based processing becomes ineffective in VCR as accurate text restoration depends on the combined information from provided images, context, and subtle cues from the tiny exposed areas of masked texts. We develop a pipeline to generate synthetic images for the VCR task using image-caption pairs, with adjustable caption visibility to control the task difficulty. With this pipeline, we construct a dataset for VCR called VCR-Wiki using images with captions from Wikipedia, comprising 2.11M English and 346K Chinese entities in both easy and hard split variants. Our results reveal that current vision language models significantly lag behind human performance in the VCR task, and merely fine-tuning the models on our dataset does not lead to notable improvements. We release VCR-Wiki and the data construction code to facilitate future research. |
|
2024-06-14T00:00:00 | 2406.08552 | DiTFastAttn: Attention Compression for Diffusion Transformer Models | [
"Zhihang Yuan",
"Pu Lu",
"Hanling Zhang",
"Xuefei Ning",
"Linfeng Zhang",
"Tianchen Zhao",
"Shengen Yan",
"Guohao Dai",
"Yu Wang"
]
| https://github.com/thu-nics/DiTFastAttn | Diffusion Transformers (DiT) excel at image and video generation but face computational challenges due to self-attention's quadratic complexity. We propose DiTFastAttn, a novel post-training compression method to alleviate DiT's computational bottleneck. We identify three key redundancies in the attention computation during DiT inference: 1. spatial redundancy, where many attention heads focus on local information; 2. temporal redundancy, with high similarity between neighboring steps' attention outputs; 3. conditional redundancy, where conditional and unconditional inferences exhibit significant similarity. To tackle these redundancies, we propose three techniques: 1. Window Attention with Residual Caching to reduce spatial redundancy; 2. Temporal Similarity Reduction to exploit the similarity between steps; 3. Conditional Redundancy Elimination to skip redundant computations during conditional generation. To demonstrate the effectiveness of DiTFastAttn, we apply it to DiT, PixArt-Sigma for image generation tasks, and OpenSora for video generation tasks. Evaluation results show that for image generation, our method reduces up to 88% of the FLOPs and achieves up to 1.6x speedup at high resolution generation. |
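Of the three techniques listed above, Window Attention with Residual Caching is the easiest to sketch: on occasional steps the full attention is computed and its difference from window attention is cached, and the remaining steps compute only window attention plus the cached residual. The code below is an illustrative toy, not the released implementation; the window size and caching interval are arbitrary choices.

```python
import torch

def window_attention(q, k, v, window: int):
    """Naive local attention: each query attends to keys within +/- window."""
    b, n, d = q.shape
    scores = q @ k.transpose(-1, -2) / d ** 0.5
    idx = torch.arange(n)
    local = (idx[:, None] - idx[None, :]).abs() <= window
    scores = scores.masked_fill(~local, float("-inf"))
    return scores.softmax(dim=-1) @ v

def full_attention(q, k, v):
    b, n, d = q.shape
    return ((q @ k.transpose(-1, -2)) / d ** 0.5).softmax(dim=-1) @ v

class WindowAttnWithResidualCache:
    """Toy version of window attention with residual caching: on designated
    steps full attention is computed and its difference to window attention is
    cached; on the remaining steps only window attention is computed and the
    cached residual is added back.
    """

    def __init__(self, window: int = 4, full_every: int = 4):
        self.window, self.full_every, self.residual = window, full_every, None

    def __call__(self, step: int, q, k, v):
        local = window_attention(q, k, v, self.window)
        if step % self.full_every == 0 or self.residual is None:
            full = full_attention(q, k, v)
            self.residual = full - local          # cache what the window misses
            return full
        return local + self.residual              # cheap steps reuse the cached residual

attn = WindowAttnWithResidualCache()
for step in range(8):
    q = k = v = torch.randn(1, 16, 8)
    out = attn(step, q, k, v)
print(out.shape)
```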
2024-06-14T00:00:00 | 2406.09415 | An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels | [
"Duy-Kien Nguyen",
"Mahmoud Assran",
"Unnat Jain",
"Martin R. Oswald",
"Cees G. M. Snoek",
"Xinlei Chen"
]
| This work does not introduce a new method. Instead, we present an interesting finding that questions the necessity of the inductive bias -- locality in modern computer vision architectures. Concretely, we find that vanilla Transformers can operate by directly treating each individual pixel as a token and achieve highly performant results. This is substantially different from the popular design in Vision Transformer, which maintains the inductive bias from ConvNets towards local neighborhoods (e.g. by treating each 16x16 patch as a token). We mainly showcase the effectiveness of pixels-as-tokens across three well-studied tasks in computer vision: supervised learning for object classification, self-supervised learning via masked autoencoding, and image generation with diffusion models. Although directly operating on individual pixels is less computationally practical, we believe the community must be aware of this surprising piece of knowledge when devising the next generation of neural architectures for computer vision. |
|
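The pixels-as-tokens setup above amounts to flattening an image into one token per pixel before feeding a vanilla Transformer. A minimal sketch, where the projection width and the learned position embedding are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PixelTokenizer(nn.Module):
    """Turn an image into a sequence of per-pixel tokens (a sketch of the
    pixels-as-tokens setup; projection size and the use of learned position
    embeddings are illustrative choices, not the paper's configuration).
    """

    def __init__(self, in_channels: int = 3, d_model: int = 192, max_pixels: int = 28 * 28):
        super().__init__()
        self.proj = nn.Linear(in_channels, d_model)            # each pixel's RGB -> one token
        self.pos = nn.Parameter(torch.zeros(max_pixels, d_model))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, channels, height, width)
        b, c, h, w = images.shape
        pixels = images.flatten(2).transpose(1, 2)             # (batch, h*w, channels)
        return self.proj(pixels) + self.pos[: h * w]

tokens = PixelTokenizer()(torch.randn(2, 3, 28, 28))
print(tokens.shape)   # (2, 784, 192), ready for a vanilla Transformer encoder
```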
2024-06-14T00:00:00 | 2406.09414 | Depth Anything V2 | [
"Lihe Yang",
"Bingyi Kang",
"Zilong Huang",
"Zhen Zhao",
"Xiaogang Xu",
"Jiashi Feng",
"Hengshuang Zhao"
]
| https://github.com/DepthAnything/Depth-Anything-V2 | This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with precise annotations and diverse scenes to facilitate future research. |
2024-06-14T00:00:00 | 2406.08862 | Cognitively Inspired Energy-Based World Models | [
"Alexi Gladstone",
"Ganesh Nanduru",
"Md Mofijul Islam",
"Aman Chadha",
"Jundong Li",
"Tariq Iqbal"
]
| One of the predominant methods for training world models is autoregressive prediction in the output space of the next element of a sequence. In Natural Language Processing (NLP), this takes the form of Large Language Models (LLMs) predicting the next token; in Computer Vision (CV), this takes the form of autoregressive models predicting the next frame/token/pixel. However, this approach differs from human cognition in several respects. First, human predictions about the future actively influence internal cognitive processes. Second, humans naturally evaluate the plausibility of predictions regarding future states. Based on this capability, and third, by assessing when predictions are sufficient, humans allocate a dynamic amount of time to make a prediction. This adaptive process is analogous to System 2 thinking in psychology. All these capabilities are fundamental to the success of humans at high-level reasoning and planning. Therefore, to address the limitations of traditional autoregressive models lacking these human-like capabilities, we introduce Energy-Based World Models (EBWM). EBWM involves training an Energy-Based Model (EBM) to predict the compatibility of a given context and a predicted future state. In doing so, EBWM enables models to achieve all three facets of human cognition described. Moreover, we developed a variant of the traditional autoregressive transformer tailored for Energy-Based models, termed the Energy-Based Transformer (EBT). Our results demonstrate that EBWM scales better with data and GPU Hours than traditional autoregressive transformers in CV, and that EBWM offers promising early scaling in NLP. Consequently, this approach offers an exciting path toward training future models capable of System 2 thinking and intelligently searching across state spaces. |
|
2024-06-14T00:00:00 | 2406.08673 | HelpSteer2: Open-source dataset for training top-performing reward models | [
"Zhilin Wang",
"Yi Dong",
"Olivier Delalleau",
"Jiaqi Zeng",
"Gerald Shen",
"Daniel Egert",
"Jimmy J. Zhang",
"Makesh Narsimhan Sreedhar",
"Oleksii Kuchaiev"
]
| https://github.com/NVIDIA/NeMo-Aligner | High-quality preference datasets are essential for training reward models that can effectively guide large language models (LLMs) in generating high-quality responses aligned with human preferences. As LLMs become stronger and better aligned, permissively licensed preference datasets, such as Open Assistant, HH-RLHF, and HelpSteer need to be updated to remain effective for reward modeling. Methods that distil preference data from proprietary LLMs such as GPT-4 have restrictions on commercial usage imposed by model providers. To improve upon both generated responses and attribute labeling quality, we release HelpSteer2, a permissively licensed preference dataset (CC-BY-4.0). Using a powerful internal base model trained on HelpSteer2, we are able to achieve the SOTA score (92.0%) on Reward-Bench's primary dataset, outperforming currently listed open and proprietary models, as of June 12th, 2024. Notably, HelpSteer2 consists of only ten thousand response pairs, an order of magnitude fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly efficient for training reward models. Our extensive experiments demonstrate that reward models trained with HelpSteer2 are effective in aligning LLMs. In particular, we propose SteerLM 2.0, a model alignment approach that can effectively make use of the rich multi-attribute score predicted by our reward models. HelpSteer2 is available at https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at https://github.com/NVIDIA/NeMo-Aligner |
2024-06-14T00:00:00 | 2406.09412 | Explore the Limits of Omni-modal Pretraining at Scale | [
"Yiyuan Zhang",
"Handong Li",
"Jing Liu",
"Xiangyu Yue"
]
| https://github.com/invictus717/MiCo | We propose to build omni-modal intelligence, which is capable of understanding any modality and learning universal representations. Specifically, we propose a scalable pretraining paradigm, named Multimodal Context (MiCo), which can scale up the number of modalities and the amount of data, together with the model parameters, in the pretraining process. With MiCo, the pretrained models show significant emergent abilities in multimodal learning, which are evaluated on the following tasks: i) single-modality perception benchmarks of 10 different modalities, ii) 25 cross-modality understanding tasks of retrieval, question-answering, captioning, and iii) 18 multimodal large language model benchmarks. Our models establish 37 new records for state-of-the-art performance. We hope that our research could contribute to the development of omni-modal intelligence. Code and Models are at https://github.com/invictus717/MiCo |
2024-06-14T00:00:00 | 2406.09308 | Transformers meet Neural Algorithmic Reasoners | [
"Wilfried Bounsi",
"Borja Ibarz",
"Andrew Dudzik",
"Jessica B. Hamrick",
"Larisa Markeeva",
"Alex Vitvitskyi",
"Razvan Pascanu",
"Petar Veličković"
]
| Transformers have revolutionized machine learning with their simple yet effective architecture. Pre-training Transformers on massive text datasets from the Internet has led to unmatched generalization for natural language understanding (NLU) tasks. However, such language models remain fragile when tasked with algorithmic forms of reasoning, where computations must be precise and robust. To address this limitation, we propose a novel approach that combines the Transformer's language understanding with the robustness of graph neural network (GNN)-based neural algorithmic reasoners (NARs). Such NARs proved effective as generic solvers for algorithmic tasks, when specified in graph form. To make their embeddings accessible to a Transformer, we propose a hybrid architecture with a two-phase training procedure, allowing the tokens in the language model to cross-attend to the node embeddings from the NAR. We evaluate our resulting TransNAR model on CLRS-Text, the text-based version of the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only models for algorithmic reasoning, both in and out of distribution. |
|
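The cross-attention hook described above (language-model tokens attending to node embeddings from a neural algorithmic reasoner) can be sketched in a few lines; the single-layer design, dimensions, and residual update below are illustrative assumptions rather than the TransNAR architecture.

```python
import torch
import torch.nn as nn

class TokenToNodeCrossAttention(nn.Module):
    """Sketch of the hybrid idea: language-model token states cross-attend to
    node embeddings produced by a (pretrained) graph-based algorithmic
    reasoner. Shapes and the single-layer design are illustrative.
    """

    def __init__(self, d_text: int = 256, d_node: int = 128, n_heads: int = 4):
        super().__init__()
        self.node_proj = nn.Linear(d_node, d_text)
        self.cross = nn.MultiheadAttention(d_text, n_heads, batch_first=True)

    def forward(self, token_states: torch.Tensor, node_embeddings: torch.Tensor):
        # token_states: (batch, n_tokens, d_text); node_embeddings: (batch, n_nodes, d_node)
        nodes = self.node_proj(node_embeddings)
        attended, _ = self.cross(query=token_states, key=nodes, value=nodes)
        return token_states + attended          # residual update of the token stream

layer = TokenToNodeCrossAttention()
out = layer(torch.randn(2, 32, 256), torch.randn(2, 10, 128))
print(out.shape)
```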
2024-06-14T00:00:00 | 2406.09246 | OpenVLA: An Open-Source Vision-Language-Action Model | [
"Moo Jin Kim",
"Karl Pertsch",
"Siddharth Karamcheti",
"Ted Xiao",
"Ashwin Balakrishna",
"Suraj Nair",
"Rafael Rafailov",
"Ethan Foster",
"Grace Lam",
"Pannag Sanketi",
"Quan Vuong",
"Thomas Kollar",
"Benjamin Burchfiel",
"Russ Tedrake",
"Dorsa Sadigh",
"Sergey Levine",
"Percy Liang",
"Chelsea Finn"
]
| https://github.com/openvla/openvla | Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable policies for visuomotor control. Yet, widespread adoption of VLAs for robotics has been challenging as 1) existing VLAs are largely closed and inaccessible to the public, and 2) prior work fails to explore methods for efficiently fine-tuning VLAs for new tasks, a key component for adoption. Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations. OpenVLA builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP. As a product of the added data diversity and new model components, OpenVLA demonstrates strong results for generalist manipulation, outperforming closed models such as RT-2-X (55B) by 16.5% in absolute task success rate across 29 tasks and multiple robot embodiments, with 7x fewer parameters. We further show that we can effectively fine-tune OpenVLA for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities, and outperform expressive from-scratch imitation learning methods such as Diffusion Policy by 20.4%. We also explore compute efficiency; as a separate contribution, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served efficiently via quantization without a hit to downstream success rate. Finally, we release model checkpoints, fine-tuning notebooks, and our PyTorch codebase with built-in support for training VLAs at scale on Open X-Embodiment datasets. |
2024-06-14T00:00:00 | 2406.08657 | Mistral-C2F: Coarse to Fine Actor for Analytical and Reasoning Enhancement in RLHF and Effective-Merged LLMs | [
"Chen Zheng",
"Ke Sun",
"Xun Zhou"
]
| Despite the advances in Large Language Models (LLMs), exemplified by models like GPT-4 and Claude, smaller-scale LLMs such as Llama and Mistral often struggle with generating in-depth and coherent dialogues. This paper presents a novel two-step Coarse-to-Fine Actor model to address the inherent limitations in conversational and analytical capabilities of small-sized LLMs. Our approach begins with the Policy-based Coarse Actor, employing a technique we term "Continuous Maximization". The Coarse Actor establishes an enhanced, knowledge-rich pool adept at aligning with human preference styles in analysis and reasoning. Through the RLHF process, it employs Continuous Maximization, a strategy that dynamically and adaptively extends the output length limit, enabling the generation of more detailed and analytical content. Subsequently, the Fine Actor refines this analytical content, addressing the generation of excessively redundant information from the Coarse Actor. We introduce a "Knowledge Residue Merger" approach, refining the content from the Coarse Actor and merging it with an existing Instruction model to improve quality, correctness, and reduce redundancies. We applied our methodology to the popular Mistral model, creating Mistral-C2F, which has demonstrated exceptional performance across 11 general language tasks and the MT-Bench Dialogue task, outperforming similar-scale models and even larger models with 13B and 30B parameters. Our model has significantly improved conversational and analytical reasoning abilities. |
|
2024-06-14T00:00:00 | 2406.09413 | Interpreting the Weight Space of Customized Diffusion Models | [
"Amil Dravid",
"Yossi Gandelsman",
"Kuan-Chieh Wang",
"Rameen Abdal",
"Gordon Wetzstein",
"Alexei A. Efros",
"Kfir Aberman"
]
| We investigate the space of weights spanned by a large collection of customized diffusion models. We populate this space by creating a dataset of over 60,000 models, each of which is a base model fine-tuned to insert a different person's visual identity. We model the underlying manifold of these weights as a subspace, which we term weights2weights. We demonstrate three immediate applications of this space -- sampling, editing, and inversion. First, as each point in the space corresponds to an identity, sampling a set of weights from it results in a model encoding a novel identity. Next, we find linear directions in this space corresponding to semantic edits of the identity (e.g., adding a beard). These edits persist in appearance across generated samples. Finally, we show that inverting a single image into this space reconstructs a realistic identity, even if the input image is out of distribution (e.g., a painting). Our results indicate that the weight space of fine-tuned diffusion models behaves as an interpretable latent space of identities. |
|
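A toy sketch of the weights2weights idea above: treat each fine-tuned model as a flattened weight vector, fit a linear subspace to the collection, and then sample or edit within that subspace. Random matrices stand in for the 60,000+ real models, and the use of SVD/PCA, the component count, and the step sizes are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for a collection of fine-tuned models: each row is one
# model's flattened weights (real models would have far more parameters).
n_models, n_params, n_components = 500, 2048, 32
W = np.random.randn(n_models, n_params)

mean = W.mean(axis=0)
# Principal directions of the weight collection via SVD on the centered matrix.
U, S, Vt = np.linalg.svd(W - mean, full_matrices=False)
basis = Vt[:n_components]                         # (n_components, n_params)

# Sampling: draw coefficients matching the empirical spread along each direction,
# giving a new point in the subspace (a "new identity" in the paper's setting).
coeff_std = S[:n_components] / np.sqrt(n_models - 1)
new_identity = mean + (np.random.randn(n_components) * coeff_std) @ basis

# Editing: move an existing model along one direction (a semantic edit such as
# "add a beard" in the paper; direction index and step size are arbitrary here).
edited = W[0] + 3.0 * basis[5]
print(new_identity.shape, edited.shape)
```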
2024-06-14T00:00:00 | 2406.09416 | Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models | [
"Qihao Liu",
"Zhanpeng Zeng",
"Ju He",
"Qihang Yu",
"Xiaohui Shen",
"Liang-Chieh Chen"
]
| https://github.com/qihao067/DiMR | This paper presents innovative enhancements to diffusion models by integrating a novel multi-resolution network and time-dependent layer normalization. Diffusion models have gained prominence for their effectiveness in high-fidelity image generation. While conventional approaches rely on convolutional U-Net architectures, recent Transformer-based designs have demonstrated superior performance and scalability. However, Transformer architectures, which tokenize input data (via "patchification"), face a trade-off between visual fidelity and computational complexity due to the quadratic nature of self-attention operations concerning token length. While larger patch sizes enable attention computation efficiency, they struggle to capture fine-grained visual details, leading to image distortions. To address this challenge, we propose augmenting the Diffusion model with the Multi-Resolution network (DiMR), a framework that refines features across multiple resolutions, progressively enhancing detail from low to high resolution. Additionally, we introduce Time-Dependent Layer Normalization (TD-LN), a parameter-efficient approach that incorporates time-dependent parameters into layer normalization to inject time information and achieve superior performance. Our method's efficacy is demonstrated on the class-conditional ImageNet generation benchmark, where DiMR-XL variants outperform prior diffusion models, setting new state-of-the-art FID scores of 1.70 on ImageNet 256 x 256 and 2.89 on ImageNet 512 x 512. Project page: https://qihao067.github.io/projects/DiMR |
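One plausible, parameter-efficient reading of Time-Dependent Layer Normalization is sketched below: two banks of scale/shift parameters are interpolated by a learned scalar function of the diffusion time. This is an assumption about the construction, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class TimeDependentLayerNorm(nn.Module):
    """Parameter-efficient, time-dependent layer norm sketch: two banks of
    scale/shift parameters interpolated by a learned scalar function of the
    diffusion time. One plausible reading of TD-LN, not the paper's exact
    construction.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.gamma = nn.Parameter(torch.ones(2, dim))
        self.beta = nn.Parameter(torch.zeros(2, dim))
        self.time_mix = nn.Linear(1, 1)           # scalar gate from the timestep

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); t: (batch,) in [0, 1]
        a = torch.sigmoid(self.time_mix(t[:, None]))[:, :, None]   # (batch, 1, 1)
        gamma = a * self.gamma[0] + (1 - a) * self.gamma[1]
        beta = a * self.beta[0] + (1 - a) * self.beta[1]
        return self.norm(x) * gamma + beta

ln = TimeDependentLayerNorm(64)
print(ln(torch.randn(2, 16, 64), torch.rand(2)).shape)
```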
2024-06-14T00:00:00 | 2406.08656 | TC-Bench: Benchmarking Temporal Compositionality in Text-to-Video and Image-to-Video Generation | [
"Weixi Feng",
"Jiachen Li",
"Michael Saxon",
"Tsu-jui Fu",
"Wenhu Chen",
"William Yang Wang"
]
| https://github.com/weixi-feng/tc-bench | Video generation has many unique challenges beyond those of image generation. The temporal dimension introduces extensive possible variations across frames, over which consistency and continuity may be violated. In this study, we move beyond evaluating simple actions and argue that generated videos should incorporate the emergence of new concepts and their relation transitions like in real-world videos as time progresses. To assess the Temporal Compositionality of video generation models, we propose TC-Bench, a benchmark of meticulously crafted text prompts, corresponding ground truth videos, and robust evaluation metrics. The prompts articulate the initial and final states of scenes, effectively reducing ambiguities for frame development and simplifying the assessment of transition completion. In addition, by collecting aligned real-world videos corresponding to the prompts, we expand TC-Bench's applicability from text-conditional models to image-conditional ones that can perform generative frame interpolation. We also develop new metrics to measure the completeness of component transitions in generated videos, which demonstrate significantly higher correlations with human judgments than existing metrics. Our comprehensive experimental results reveal that most video generators achieve less than 20% of the compositional changes, highlighting enormous space for future improvement. Our analysis indicates that current video generation models struggle to interpret descriptions of compositional changes and synthesize various components across different time steps. |
2024-06-14T00:00:00 | 2406.09403 | Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models | [
"Yushi Hu",
"Weijia Shi",
"Xingyu Fu",
"Dan Roth",
"Mari Ostendorf",
"Luke Zettlemoyer",
"Noah A Smith",
"Ranjay Krishna"
]
| https://github.com/Yushi-Hu/VisualSketchpad | Humans draw to facilitate reasoning: we draw auxiliary lines when solving geometry problems; we mark and circle when reasoning on maps; we use sketches to amplify our ideas and relieve our limited-capacity working memory. However, such actions are missing in current multimodal language models (LMs). Current chain-of-thought and tool-use paradigms only use text as intermediate reasoning steps. In this work, we introduce Sketchpad, a framework that gives multimodal LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts planning and reasoning according to the visual artifacts it has drawn. Different from prior work, which uses text-to-image models to enable LMs to draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is closer to human sketching and better facilitates reasoning. Sketchpad can also use specialist vision models during the sketching process (e.g., draw bounding boxes with object detection models, draw masks with segmentation models), to further enhance visual perception and reasoning. We experiment with a wide range of math tasks (including geometry, functions, graphs, and chess) and complex visual reasoning tasks. Sketchpad substantially improves performance on all tasks over strong base models with no sketching, yielding an average gain of 12.7% on math tasks, and 8.6% on vision tasks. GPT-4o with Sketchpad sets a new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial reasoning (83.9%), and visual correspondence (80.8%). All codes and data are in https://visualsketchpad.github.io/. |
2024-06-14T00:00:00 | 2406.08587 | CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery | [
"Xiaoshuai Song",
"Muxi Diao",
"Guanting Dong",
"Zhengyang Wang",
"Yujia Fu",
"Runqi Qiao",
"Zhexu Wang",
"Dayuan Fu",
"Huangxuan Wu",
"Bin Liang",
"Weihao Zeng",
"Yejie Wang",
"Zhuoma GongQue",
"Jianing Yu",
"Qiuna Tan",
"Weiran Xu"
]
| https://github.com/csbench/csbench | Computer Science (CS) stands as a testament to the intricacies of human intelligence, profoundly advancing the development of artificial intelligence and modern society. However, the current community of large language models (LLMs) overly focuses on benchmarks for analyzing specific foundational skills (e.g. mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first bilingual (Chinese-English) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 5K meticulously curated test samples, covering 26 subfields across 4 key areas of computer science, encompassing various task forms and divisions of knowledge and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvements, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performances in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at https://github.com/csbench/csbench. |
2024-06-14T00:00:00 | 2406.09170 | Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning | [
"Bahare Fatemi",
"Mehran Kazemi",
"Anton Tsitsulin",
"Karishma Malkan",
"Jinyeong Yim",
"John Palowitch",
"Sungyong Seo",
"Jonathan Halcrow",
"Bryan Perozzi"
]
| Large language models (LLMs) have showcased remarkable reasoning capabilities, yet they remain susceptible to errors, particularly in temporal reasoning tasks involving complex temporal logic. Existing research has explored LLM performance on temporal reasoning using diverse datasets and benchmarks. However, these studies often rely on real-world data that LLMs may have encountered during pre-training or employ anonymization techniques that can inadvertently introduce factual inconsistencies. In this work, we address these limitations by introducing novel synthetic datasets specifically designed to assess LLM temporal reasoning abilities in various scenarios. The diversity of question types across these datasets enables systematic investigation into the impact of the problem structure, size, question type, fact order, and other factors on LLM performance. Our findings provide valuable insights into the strengths and weaknesses of current LLMs in temporal reasoning tasks. To foster further research in this area, we are open-sourcing the datasets and evaluation framework used in our experiments: https://huggingface.co/datasets/baharef/ToT. |
|
2024-06-14T00:00:00 | 2406.08479 | Real3D: Scaling Up Large Reconstruction Models with Real-World Images | [
"Hanwen Jiang",
"Qixing Huang",
"Georgios Pavlakos"
]
| https://github.com/hwjiang1510/Real3D | The default strategy for training single-view Large Reconstruction Models (LRMs) follows the fully supervised route using large-scale datasets of synthetic 3D assets or multi-view captures. Although these resources simplify the training procedure, they are hard to scale up beyond the existing datasets and they are not necessarily representative of the real distribution of object shapes. To address these limitations, in this paper, we introduce Real3D, the first LRM system that can be trained using single-view real-world images. Real3D introduces a novel self-training framework that can benefit from both the existing synthetic data and diverse single-view real images. We propose two unsupervised losses that allow us to supervise LRMs at the pixel- and semantic-level, even for training examples without ground-truth 3D or novel views. To further improve performance and scale up the image data, we develop an automatic data curation approach to collect high-quality examples from in-the-wild images. Our experiments show that Real3D consistently outperforms prior work in four diverse evaluation settings that include real and synthetic data, as well as both in-domain and out-of-domain shapes. Code and model can be found here: https://hwjiang1510.github.io/Real3D/ |
2024-06-14T00:00:00 | 2406.07522 | Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling | [
"Liliang Ren",
"Yang Liu",
"Yadong Lu",
"Yelong Shen",
"Chen Liang",
"Weizhu Chen"
]
| https://github.com/microsoft/Samba | Efficiently modeling sequences with infinite context length has been a long-standing problem. Past works suffer from either the quadratic computation complexity or the limited extrapolation ability on length generalization. In this work, we present Samba, a simple hybrid architecture that layer-wise combines Mamba, a selective State Space Model (SSM), with Sliding Window Attention (SWA). Samba selectively compresses a given sequence into recurrent hidden states while still maintaining the ability to precisely recall memories with the attention mechanism. We scale Samba up to 3.8B parameters with 3.2T training tokens and show that Samba substantially outperforms the state-of-the-art models based on pure attention or SSMs on a wide range of benchmarks. When trained on 4K length sequences, Samba can be efficiently extrapolated to 256K context length with perfect memory recall and show improved token predictions up to 1M context length. As a linear-time sequence model, Samba enjoys a 3.73x higher throughput compared to Transformers with grouped-query attention when processing user prompts of 128K length, and 3.64x speedup when generating 64K tokens with unlimited streaming. A sample implementation of Samba is publicly available in https://github.com/microsoft/Samba. |
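A schematic of the layer-wise hybrid described above, with an `nn.GRU` standing in for the selective SSM (Mamba) layer and a masked `nn.MultiheadAttention` providing sliding window attention; the exact interleaving and normalization used in Samba may differ from this sketch.

```python
import torch
import torch.nn as nn

class SlidingWindowAttention(nn.Module):
    """Causal attention restricted to a fixed-size window of past tokens."""

    def __init__(self, dim: int, heads: int = 4, window: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window = window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = x.shape[1]
        i = torch.arange(n)
        # Disallow future tokens and tokens older than the window.
        banned = (i[None, :] > i[:, None]) | (i[:, None] - i[None, :] >= self.window)
        out, _ = self.attn(x, x, x, attn_mask=banned)
        return out

class SambaStyleBlock(nn.Module):
    """Sketch of a Samba-style hybrid block: a recurrent mixer followed by
    sliding window attention, each with its own MLP. nn.GRU stands in for the
    selective SSM (Mamba) layer, and the exact interleaving pattern in the
    paper may differ from this sketch.
    """

    def __init__(self, dim: int = 64, window: int = 8):
        super().__init__()
        self.ssm = nn.GRU(dim, dim, batch_first=True)   # placeholder for Mamba
        self.swa = SlidingWindowAttention(dim, window=window)
        self.mlp1 = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.mlp2 = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.ssm(self.norms[0](x))[0]   # long-range, compressed recurrent state
        x = x + self.mlp1(self.norms[1](x))
        x = x + self.swa(self.norms[2](x))      # precise recall within the local window
        x = x + self.mlp2(self.norms[3](x))
        return x

block = SambaStyleBlock()
print(block(torch.randn(2, 32, 64)).shape)
```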
2024-06-14T00:00:00 | 2406.08598 | Language Model Council: Benchmarking Foundation Models on Highly Subjective Tasks by Consensus | [
"Justin Zhao",
"Flor Miriam Plaza-del-Arco",
"Amanda Cercas Curry"
]
| The rapid advancement of Large Language Models (LLMs) necessitates robust and challenging benchmarks. Leaderboards like Chatbot Arena rank LLMs based on how well their responses align with human preferences. However, many tasks such as those related to emotional intelligence, creative writing, or persuasiveness, are highly subjective and often lack majoritarian human agreement. Judges may have irreconcilable disagreements about what constitutes a better response. To address the challenge of ranking LLMs on highly subjective tasks, we propose a novel benchmarking framework, the Language Model Council (LMC). The LMC operates through a democratic process to: 1) formulate a test set through equal participation, 2) administer the test among council members, and 3) evaluate responses as a collective jury. We deploy a council of 20 newest LLMs on an open-ended emotional intelligence task: responding to interpersonal dilemmas. Our results show that the LMC produces rankings that are more separable, robust, and less biased than those from any individual LLM judge, and is more consistent with a human-established leaderboard compared to other benchmarks. |
|
2024-06-14T00:00:00 | 2406.09371 | LRM-Zero: Training Large Reconstruction Models with Synthesized Data | [
"Desai Xie",
"Sai Bi",
"Zhixin Shu",
"Kai Zhang",
"Zexiang Xu",
"Yi Zhou",
"Sören Pirk",
"Arie Kaufman",
"Xin Sun",
"Hao Tan"
]
| https://github.com/desaixie/zeroverse | We present LRM-Zero, a Large Reconstruction Model (LRM) trained entirely on synthesized 3D data, achieving high-quality sparse-view 3D reconstruction. The core of LRM-Zero is our procedural 3D dataset, Zeroverse, which is automatically synthesized from simple primitive shapes with random texturing and augmentations (e.g., height fields, boolean differences, and wireframes). Unlike previous 3D datasets (e.g., Objaverse) which are often captured or crafted by humans to approximate real 3D data, Zeroverse completely ignores realistic global semantics but is rich in complex geometric and texture details that are locally similar to or even more intricate than real objects. We demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse, can achieve high visual quality in the reconstruction of real-world objects, competitive with models trained on Objaverse. We also analyze several critical design choices of Zeroverse that contribute to LRM-Zero's capability and training stability. Our work demonstrates that 3D reconstruction, one of the core tasks in 3D vision, can potentially be addressed without the semantics of real-world objects. The Zeroverse's procedural synthesis code and interactive visualization are available at: https://desaixie.github.io/lrm-zero/. |
2024-06-14T00:00:00 | 2406.07546 | Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? | [
"Xingyu Fu",
"Muyu He",
"Yujie Lu",
"William Yang Wang",
"Dan Roth"
]
| https://github.com/zeyofu/Commonsense-T2I | We present a novel task and benchmark for evaluating the ability of text-to-image (T2I) generation models to produce images that fit commonsense in real life, which we call Commonsense-T2I. Given two adversarial text prompts containing an identical set of action words with minor differences, such as "a lightbulb without electricity" vs. "a lightbulb with electricity", we evaluate whether T2I models can conduct visual-commonsense reasoning, e.g. produce images that fit "the lightbulb is unlit" vs. "the lightbulb is lit" correspondingly. Commonsense-T2I presents an adversarial challenge, providing pairwise text prompts along with expected outputs. The dataset is carefully hand-curated by experts and annotated with fine-grained labels, such as commonsense type and likelihood of the expected outputs, to assist analyzing model behavior. We benchmark a variety of state-of-the-art (sota) T2I models and surprisingly find that there is still a large gap between image synthesis and real life photos--even the DALL-E 3 model could only achieve 48.92% on Commonsense-T2I, and the stable diffusion XL model only achieves 24.92% accuracy. Our experiments show that GPT-enriched prompts cannot solve this challenge, and we include a detailed analysis about possible reasons for such deficiency. We aim for Commonsense-T2I to serve as a high-quality evaluation benchmark for T2I commonsense checking, fostering advancements in real life image generation. |
2024-06-14T00:00:00 | 2406.09411 | MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding | [
"Fei Wang",
"Xingyu Fu",
"James Y. Huang",
"Zekun Li",
"Qin Liu",
"Xiaogeng Liu",
"Mingyu Derek Ma",
"Nan Xu",
"Wenxuan Zhou",
"Kai Zhang",
"Tianyi Lorena Yan",
"Wenjie Jacky Mo",
"Hsiang-Hui Liu",
"Pan Lu",
"Chunyuan Li",
"Chaowei Xiao",
"Kai-Wei Chang",
"Dan Roth",
"Sheng Zhang",
"Hoifung Poon",
"Muhao Chen"
]
| We introduce MuirBench, a comprehensive benchmark that focuses on robust multi-image understanding capabilities of multimodal LLMs. MuirBench consists of 12 diverse multi-image tasks (e.g., scene understanding, ordering) that involve 10 categories of multi-image relations (e.g., multiview, temporal relations). Comprising 11,264 images and 2,600 multiple-choice questions, MuirBench is created in a pairwise manner, where each standard instance is paired with an unanswerable variant that has minimal semantic differences, in order for a reliable assessment. Evaluated upon 20 recent multi-modal LLMs, our results reveal that even the best-performing models like GPT-4o and Gemini Pro find it challenging to solve MuirBench, achieving 68.0% and 49.3% in accuracy. Open-source multimodal LLMs trained on single images can hardly generalize to multi-image questions, hovering below 33.3% in accuracy. These results highlight the importance of MuirBench in encouraging the community to develop multimodal LLMs that can look beyond a single image, suggesting potential pathways for future improvements. |
|
2024-06-14T00:00:00 | 2406.09305 | Toffee: Efficient Million-Scale Dataset Construction for Subject-Driven Text-to-Image Generation | [
"Yufan Zhou",
"Ruiyi Zhang",
"Kaizhi Zheng",
"Nanxuan Zhao",
"Jiuxiang Gu",
"Zichao Wang",
"Xin Eric Wang",
"Tong Sun"
]
| In subject-driven text-to-image generation, recent works have achieved superior performance by training the model on synthetic datasets containing numerous image pairs. Trained on these datasets, generative models can produce text-aligned images for a specific subject from an arbitrary testing image in a zero-shot manner. They even outperform methods which require additional fine-tuning on testing images. However, the cost of creating such datasets is prohibitive for most researchers. To generate a single training pair, current methods fine-tune a pre-trained text-to-image model on the subject image to capture fine-grained details, then use the fine-tuned model to create images for the same subject based on creative text prompts. Consequently, constructing a large-scale dataset with millions of subjects can require hundreds of thousands of GPU hours. To tackle this problem, we propose Toffee, an efficient method to construct datasets for subject-driven editing and generation. Specifically, our dataset construction does not need any subject-level fine-tuning. After pre-training two generative models, we are able to generate an infinite number of high-quality samples. We construct the first large-scale dataset for subject-driven image editing and generation, which contains 5 million image pairs, text prompts, and masks. Our dataset is 5 times the size of the previous largest dataset, yet our cost is tens of thousands of GPU hours lower. To test the proposed dataset, we also propose a model which is capable of both subject-driven image editing and generation. By simply training the model on our proposed dataset, it obtains competitive results, illustrating the effectiveness of the proposed dataset construction framework. |
|
2024-06-14T00:00:00 | 2406.09162 | EMMA: Your Text-to-Image Diffusion Model Can Secretly Accept Multi-Modal Prompts | [
"Yucheng Han",
"Rui Wang",
"Chi Zhang",
"Juntao Hu",
"Pei Cheng",
"Bin Fu",
"Hanwang Zhang"
]
| https://github.com/TencentQQGYLab/ELLA | Recent advancements in image generation have enabled the creation of high-quality images from text conditions. However, when facing multi-modal conditions, such as text combined with reference appearances, existing methods struggle to balance multiple conditions effectively, typically showing a preference for one modality over others. To address this challenge, we introduce EMMA, a novel image generation model accepting multi-modal prompts built upon the state-of-the-art text-to-image (T2I) diffusion model, ELLA. EMMA seamlessly incorporates additional modalities alongside text to guide image generation through an innovative Multi-modal Feature Connector design, which effectively integrates textual and supplementary modal information using a special attention mechanism. By freezing all parameters in the original T2I diffusion model and only adjusting some additional layers, we reveal an interesting finding that the pre-trained T2I diffusion model can secretly accept multi-modal prompts. This interesting property facilitates easy adaptation to different existing frameworks, making EMMA a flexible and effective tool for producing personalized and context-aware images and even videos. Additionally, we introduce a strategy to assemble learned EMMA modules to produce images conditioned on multiple modalities simultaneously, eliminating the need for additional training with mixed multi-modal prompts. Extensive experiments demonstrate the effectiveness of EMMA in maintaining high fidelity and detail in generated images, showcasing its potential as a robust solution for advanced multi-modal conditional image generation tasks. |
2024-06-14T00:00:00 | 2406.08707 | mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus | [
"Matthieu Futeral",
"Armel Zebaze",
"Pedro Ortiz Suarez",
"Julien Abadji",
"Rémi Lacroix",
"Cordelia Schmid",
"Rachel Bawden",
"Benoît Sagot"
]
| Multimodal Large Language Models (mLLMs) are trained on a large amount of text-image data. While most mLLMs are trained on caption-like data only, Alayrac et al. [2022] showed that additionally training them on interleaved sequences of text and images can lead to the emergence of in-context learning capabilities. However, the dataset they used, M3W, is not public and is only in English. There have been attempts to reproduce their results but the released datasets are English-only. In contrast, current multilingual and multimodal datasets are either composed of caption-like data only, or are medium-scale or fully private. This limits mLLM research for the 7,000 other languages spoken in the world. We therefore introduce mOSCAR, to the best of our knowledge the first large-scale multilingual and multimodal document corpus crawled from the web. It covers 163 languages, 315M documents, 214B tokens and 1.2B images. We carefully conduct a set of filtering and evaluation steps to make sure mOSCAR is sufficiently safe, diverse and of good quality. We additionally train two types of multilingual model to prove the benefits of mOSCAR: (1) a model trained on a subset of mOSCAR and captioning data and (2) a model trained on captioning data only. The model additionally trained on mOSCAR shows a strong boost in few-shot learning performance across various multilingual image-text tasks and benchmarks, confirming previous findings for English-only mLLMs. |
|
2024-06-14T00:00:00 | 2406.05967 | CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark | [
"David Romero",
"Chenyang Lyu",
"Haryo Akbarianto Wibowo",
"Teresa Lynn",
"Injy Hamed",
"Aditya Nanda Kishore",
"Aishik Mandal",
"Alina Dragonetti",
"Artem Abzaliev",
"Atnafu Lambebo Tonja",
"Bontu Fufa Balcha",
"Chenxi Whitehouse",
"Christian Salamea",
"Dan John Velasco",
"David Ifeoluwa Adelani",
"David Le Meur",
"Emilio Villa-Cueva",
"Fajri Koto",
"Fauzan Farooqui",
"Frederico Belcavello",
"Ganzorig Batnasan",
"Gisela Vallejo",
"Grainne Caulfield",
"Guido Ivetta",
"Haiyue Song",
"Henok Biadglign Ademtew",
"Hernán Maina",
"Holy Lovenia",
"Israel Abebe Azime",
"Jan Christian Blaise Cruz",
"Jay Gala",
"Jiahui Geng",
"Jesus-German Ortiz-Barajas",
"Jinheon Baek",
"Jocelyn Dunstan",
"Laura Alonso Alemany",
"Kumaranage Ravindu Yasas Nagasinghe",
"Luciana Benotti",
"Luis Fernando D'Haro",
"Marcelo Viridiano",
"Marcos Estecha-Garitagoitia",
"Maria Camila Buitrago Cabrera",
"Mario Rodríguez-Cantelar",
"Mélanie Jouitteau",
"Mihail Mihaylov",
"Mohamed Fazli Mohamed Imam",
"Muhammad Farid Adilazuarda",
"Munkhjargal Gochoo",
"Munkh-Erdene Otgonbold",
"Naome Etori",
"Olivier Niyomugisha",
"Paula Mónica Silva",
"Pranjal Chitale",
"Raj Dabre",
"Rendi Chevi",
"Ruochen Zhang",
"Ryandito Diandaru",
"Samuel Cahyawijaya",
"Santiago Góngora",
"Soyeong Jeong",
"Sukannya Purkayastha",
"Tatsuki Kuribayashi",
"Thanmay Jayakumar",
"Tiago Timponi Torrent",
"Toqeer Ehsan",
"Vladimir Araujo",
"Yova Kementchedjhieva",
"Zara Burzo",
"Zheng Wei Lim",
"Zheng Xin Yong",
"Oana Ignat",
"Joan Nwatu",
"Rada Mihalcea",
"Thamar Solorio",
"Alham Fikri Aji"
]
| Visual Question Answering (VQA) is an important task in multimodal AI, and it is often used to test the ability of vision-language models to understand and reason on knowledge present in both visual and textual data. However, most of the current VQA models use datasets that are primarily focused on English and a few major world languages, with images that are typically Western-centric. While recent efforts have tried to increase the number of languages covered on VQA datasets, they still lack diversity in low-resource languages. More importantly, although these datasets often extend their linguistic range via translation or some other approaches, they usually keep images the same, resulting in narrow cultural representation. To address these limitations, we construct CVQA, a new Culturally-diverse multilingual Visual Question Answering benchmark, designed to cover a rich set of languages and cultures, where we engage native speakers and cultural experts in the data collection process. As a result, CVQA includes culturally-driven images and questions from across 28 countries on four continents, covering 26 languages with 11 scripts, providing a total of 9k questions. We then benchmark several Multimodal Large Language Models (MLLMs) on CVQA, and show that the dataset is challenging for the current state-of-the-art models. This benchmark can serve as a probing evaluation suite for assessing the cultural capability and bias of multimodal models and hopefully encourage more research efforts toward increasing cultural awareness and linguistic diversity in this field. |
|
2024-06-14T00:00:00 | 2406.07457 | Estimating the Hallucination Rate of Generative AI | [
"Andrew Jesson",
"Nicolas Beltran-Velez",
"Quentin Chu",
"Sweta Karlekar",
"Jannik Kossen",
"Yarin Gal",
"John P. Cunningham",
"David Blei"
]
| This work is about estimating the hallucination rate for in-context learning (ICL) with Generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and asked to make a prediction based on that dataset. The Bayesian interpretation of ICL assumes that the CGM is calculating a posterior predictive distribution over an unknown Bayesian model of a latent parameter and data. With this perspective, we define a hallucination as a generated prediction that has low-probability under the true latent parameter. We develop a new method that takes an ICL problem -- that is, a CGM, a dataset, and a prediction question -- and estimates the probability that a CGM will generate a hallucination. Our method only requires generating queries and responses from the model and evaluating its response log probability. We empirically evaluate our method on synthetic regression and natural language ICL tasks using large language models. |
|
2024-06-14T00:00:00 | 2406.09297 | MLKV: Multi-Layer Key-Value Heads for Memory Efficient Transformer Decoding | [
"Zayd Muhammad Kawakibi Zuhri",
"Muhammad Farid Adilazuarda",
"Ayu Purwarianti",
"Alham Fikri Aji"
]
| https://github.com/zaydzuhri/pythia-mlkv | Auto-regressive inference of transformers benefits greatly from Key-Value (KV) caching, but the cache can become a major memory bottleneck as model size, batch size, and sequence length grow at scale. We introduce Multi-Layer Key-Value (MLKV) sharing, a novel approach extending KV sharing across transformer layers to reduce memory usage beyond what was possible with Multi-Query Attention (MQA) and Grouped-Query Attention (GQA). Evaluations on various NLP benchmarks and inference metrics using uptrained Pythia-160M variants demonstrate that MLKV significantly reduces memory usage with minimal performance loss, reducing KV cache size down to a factor of 6x compared to MQA. These results highlight MLKV's potential for efficient deployment of transformer models at scale. We provide code at https://github.com/zaydzuhri/pythia-mlkv |
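The memory saving that motivates MLKV comes down to simple cache arithmetic: the KV cache stores one key and one value tensor per KV head, so sharing heads across query groups (GQA/MQA) and then across layers (MLKV) shrinks it multiplicatively. The sketch below is a back-of-the-envelope calculator with illustrative dimensions, not the Pythia-160M configuration used in the paper.

```python
def kv_cache_mib(kv_groups, kv_heads_per_group, head_dim,
                 seq_len, batch_size, bytes_per_elem=2):
    """KV cache size in MiB: a key and a value tensor for every KV group."""
    elems = 2 * kv_groups * kv_heads_per_group * head_dim * seq_len * batch_size
    return elems * bytes_per_elem / 2**20

# Illustrative decoder: 32 layers, 32 query heads, head_dim 128, fp16 cache.
layers, q_heads, head_dim, seq, batch = 32, 32, 128, 4096, 1
print("MHA :", kv_cache_mib(layers, q_heads, head_dim, seq, batch))  # one KV head per query head
print("GQA :", kv_cache_mib(layers, 8, head_dim, seq, batch))        # 8 KV heads per layer
print("MQA :", kv_cache_mib(layers, 1, head_dim, seq, batch))        # 1 KV head per layer
print("MLKV:", kv_cache_mib(layers // 4, 1, head_dim, seq, batch))   # 1 KV head shared by 4 layers
```

With these illustrative numbers the cache shrinks from 2048 MiB (MHA) to 64 MiB (MQA) and 16 MiB (MLKV); the paper itself reports reductions of up to 6x below MQA on its uptrained Pythia-160M variants.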
2024-06-14T00:00:00 | 2406.09356 | CMC-Bench: Towards a New Paradigm of Visual Signal Compression | [
"Chunyi Li",
"Xiele Wu",
"Haoning Wu",
"Donghui Feng",
"Zicheng Zhang",
"Guo Lu",
"Xiongkuo Min",
"Xiaohong Liu",
"Guangtao Zhai",
"Weisi Lin"
]
| Ultra-low bitrate image compression is a challenging and demanding topic. With the development of Large Multimodal Models (LMMs), a Cross Modality Compression (CMC) paradigm of Image-Text-Image has emerged. Compared with traditional codecs, this semantic-level compression can reduce image data size to 0.1% or even lower, which has strong potential for applications. However, CMC has certain defects in consistency with the original image and in perceptual quality. To address this problem, we introduce CMC-Bench, a benchmark of the cooperative performance of Image-to-Text (I2T) and Text-to-Image (T2I) models for image compression. This benchmark covers 18,000 and 40,000 images respectively to verify 6 mainstream I2T and 12 T2I models, including 160,000 subjective preference scores annotated by human experts. At ultra-low bitrates, this paper proves that the combination of some I2T and T2I models has surpassed the most advanced visual signal codecs; meanwhile, it highlights where LMMs can be further optimized toward the compression task. We encourage LMM developers to participate in this test to promote the evolution of visual signal codec protocols. |
|
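Concretely, the Image-Text-Image loop that CMC-Bench evaluates can be pictured as a caption-then-regenerate pipeline, where the transmitted "bitstream" is just the caption text. The snippet below is a minimal illustration; `i2t_model` and `t2i_model` are hypothetical callables standing in for whichever I2T and T2I models are being paired, and the image is assumed to be a NumPy-style array.

```python
def cmc_compress(image, i2t_model, t2i_model):
    """Cross Modality Compression sketch: image -> caption -> reconstructed image."""
    caption = i2t_model(image)                  # semantic-level "encoding"
    payload_bits = len(caption.encode("utf-8")) * 8
    height, width = image.shape[:2]
    bpp = payload_bits / (height * width)       # effective bits per pixel
    reconstruction = t2i_model(caption)         # "decoding" via generation
    return reconstruction, caption, bpp
```

A 60-byte caption for a 1024x1024 image works out to roughly 0.0005 bits per pixel, which is how the semantic paradigm reaches the ultra-low bitrates the benchmark targets.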
2024-06-14T00:00:00 | 2406.09358 | Understanding Hallucinations in Diffusion Models through Mode Interpolation | [
"Sumukh K Aithal",
"Pratyush Maini",
"Zachary C. Lipton",
"J. Zico Kolter"
]
| https://github.com/locuslab/diffusion-model-hallucination | Colloquially speaking, image generation models based upon diffusion processes are frequently said to exhibit "hallucinations," samples that could never occur in the training data. But where do such hallucinations come from? In this paper, we study a particular failure mode in diffusion models, which we term mode interpolation. Specifically, we find that diffusion models smoothly "interpolate" between nearby data modes in the training set, to generate samples that are completely outside the support of the original training distribution; this phenomenon leads diffusion models to generate artifacts that never existed in real data (i.e., hallucinations). We systematically study the reasons for, and the manifestation of this phenomenon. Through experiments on 1D and 2D Gaussians, we show how a discontinuous loss landscape in the diffusion model's decoder leads to a region where any smooth approximation will cause such hallucinations. Through experiments on artificial datasets with various shapes, we show how hallucination leads to the generation of combinations of shapes that never existed. Finally, we show that diffusion models in fact know when they go out of support and hallucinate. This is captured by the high variance in the trajectory of the generated sample over the final few steps of the backward sampling process. Using a simple metric to capture this variance, we can remove over 95% of hallucinations at generation time while retaining 96% of in-support samples. We conclude our exploration by showing the implications of such hallucination (and its removal) on the collapse (and stabilization) of recursive training on synthetic data with experiments on the MNIST and 2D Gaussians datasets. We release our code at https://github.com/locuslab/diffusion-model-hallucination. |
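The detection idea at the end of the abstract, flagging samples whose trajectory stays high-variance late in sampling, can be sketched with a simple threshold. This is an illustrative reading of the metric; it assumes the sampler records the model's predicted clean sample at every backward step, and the window size and threshold are placeholders rather than the paper's tuned values.

```python
import numpy as np

def trajectory_variance_filter(x0_predictions, last_k=10, threshold=0.05):
    """Flag likely hallucinations via the variance of the predicted clean
    sample over the final reverse-diffusion steps.

    x0_predictions: array of shape (num_steps, batch, ...) holding the
    model's predicted x0 at each backward step.
    Returns a boolean keep-mask and the per-sample variance score.
    """
    tail = x0_predictions[-last_k:]                               # final steps only
    score = tail.var(axis=0).reshape(tail.shape[1], -1).mean(axis=1)
    return score < threshold, score
```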
2024-06-14T00:00:00 | 2406.09406 | 4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities | [
"Roman Bachmann",
"Oğuzhan Fatih Kar",
"David Mizrahi",
"Ali Garjani",
"Mingfei Gao",
"David Griffiths",
"Jiaming Hu",
"Afshin Dehghan",
"Amir Zamir"
]
| https://github.com/apple/ml-4m/ | Current multimodal and multitask foundation models like 4M or UnifiedIO show promising results, but in practice their out-of-the-box abilities to accept diverse inputs and perform diverse tasks are limited by the (usually rather small) number of modalities and tasks they are trained on. In this paper, we expand upon their capabilities by training a single model on tens of highly diverse modalities and by performing co-training on large-scale multimodal datasets and text corpora. This includes training on several semantic and geometric modalities, feature maps from recent state-of-the-art models like DINOv2 and ImageBind, pseudo labels of specialist models like SAM and 4DHumans, and a range of new modalities that allow for novel ways to interact with the model and steer the generation, for example image metadata or color palettes. A crucial step in this process is performing discrete tokenization on various modalities, whether they are image-like, neural network feature maps, vectors, structured data like instance segmentation or human poses, or data that can be represented as text. Through this, we expand on the out-of-the-box capabilities of multimodal models and specifically show the possibility of training one model to solve at least 3x more tasks/modalities than existing ones and doing so without a loss in performance. This enables more fine-grained and controllable multimodal generation capabilities and allows us to study the distillation of models trained on diverse data and objectives into a unified model. We successfully scale the training to a three billion parameter model using tens of modalities and different datasets. The resulting models and training code are open sourced at 4m.epfl.ch. |
2024-06-17T00:00:00 | 2406.10208 | Glyph-ByT5-v2: A Strong Aesthetic Baseline for Accurate Multilingual Visual Text Rendering | [
"Zeyu Liu",
"Weicong Liang",
"Yiming Zhao",
"Bohan Chen",
"Ji Li",
"Yuhui Yuan"
]
| https://github.com/AIGText/Glyph-ByT5 | Recently, Glyph-ByT5 has achieved highly accurate visual text rendering performance in graphic design images. However, it still focuses solely on English and performs relatively poorly in terms of visual appeal. In this work, we address these two fundamental limitations by presenting Glyph-ByT5-v2 and Glyph-SDXL-v2, which not only support accurate visual text rendering for 10 different languages but also achieve much better aesthetic quality. To achieve this, we make the following contributions: (i) creating a high-quality multilingual glyph-text and graphic design dataset consisting of more than 1 million glyph-text pairs and 10 million graphic design image-text pairs covering nine other languages, (ii) building a multilingual visual paragraph benchmark consisting of 1,000 prompts, with 100 for each language, to assess multilingual visual spelling accuracy, and (iii) leveraging the latest step-aware preference learning approach to enhance the visual aesthetic quality. With the combination of these techniques, we deliver a powerful customized multilingual text encoder, Glyph-ByT5-v2, and a strong aesthetic graphic generation model, Glyph-SDXL-v2, that can support accurate spelling in 10 different languages. We perceive our work as a significant advancement, considering that the latest DALL-E3 and Ideogram 1.0 still struggle with the multilingual visual text rendering task. |
2024-06-17T00:00:00 | 2406.09559 | Decoding the Diversity: A Review of the Indic AI Research Landscape | [
"Sankalp KJ",
"Vinija Jain",
"Sreyoshi Bhaduri",
"Tamoghna Roy",
"Aman Chadha"
]
| This review paper provides a comprehensive overview of large language model (LLM) research directions within Indic languages. Indic languages are those spoken in the Indian subcontinent, including India, Pakistan, Bangladesh, Sri Lanka, Nepal, and Bhutan, among others. These languages have a rich cultural and linguistic heritage and are spoken by over 1.5 billion people worldwide. With the tremendous market potential and growing demand for natural language processing (NLP) based applications in diverse languages, generative applications for Indic languages pose unique challenges and opportunities for research. Our paper deep dives into the recent advancements in Indic generative modeling, contributing with a taxonomy of research directions, tabulating 84 recent publications. Research directions surveyed in this paper include LLM development, fine-tuning existing LLMs, development of corpora, benchmarking and evaluation, as well as publications around specific techniques, tools, and applications. We found that researchers across the publications emphasize the challenges associated with limited data availability, lack of standardization, and the peculiar linguistic complexities of Indic languages. This work aims to serve as a valuable resource for researchers and practitioners working in the field of NLP, particularly those focused on Indic languages, and contributes to the development of more accurate and efficient LLM applications for these languages. |
|
2024-06-17T00:00:00 | 2406.08418 | OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text | [
"Qingyun Li",
"Zhe Chen",
"Weiyun Wang",
"Wenhai Wang",
"Shenglong Ye",
"Zhenjiang Jin",
"Guanzhou Chen",
"Yinan He",
"Zhangwei Gao",
"Erfei Cui",
"Jiashuo Yu",
"Hao Tian",
"Jiasheng Zhou",
"Chao Xu",
"Bin Wang",
"Xingjian Wei",
"Wei Li",
"Wenjian Zhang",
"Bo Zhang",
"Pinlong Cai",
"Licheng Wen",
"Xiangchao Yan",
"Zhenxiang Li",
"Pei Chu",
"Yi Wang",
"Min Dou",
"Changyao Tian",
"Xizhou Zhu",
"Lewei Lu",
"Yushi Chen",
"Junjun He",
"Zhongying Tu",
"Tong Lu",
"Yali Wang",
"Limin Wang",
"Dahua Lin",
"Yu Qiao",
"Botian Shi",
"Conghui He",
"Jifeng Dai"
]
| https://github.com/OpenGVLab/OmniCorpus | Image-text interleaved data, consisting of multiple images and texts arranged in a natural document format, aligns with the presentation paradigm of internet data and closely resembles human reading habits. Recent studies have shown that such data aids multimodal in-context learning and maintains the capabilities of large language models during multimodal fine-tuning. However, the limited scale and diversity of current image-text interleaved data restrict the development of multimodal large language models. In this paper, we introduce OmniCorpus, a 10 billion-scale image-text interleaved dataset. Using an efficient data engine, we filter and extract large-scale high-quality documents, which contain 8.6 billion images and 1,696 billion text tokens. Compared to counterparts (e.g., MMC4, OBELICS), our dataset 1) has 15 times larger scales while maintaining good data quality; 2) features more diverse sources, including both English and non-English websites as well as video-centric websites; 3) is more flexible, easily degradable from an image-text interleaved format to pure text corpus and image-text pairs. Through comprehensive analysis and experiments, we validate the quality, usability, and effectiveness of the proposed dataset. We hope this could provide a solid data foundation for future multimodal model research. Code and data are released at https://github.com/OpenGVLab/OmniCorpus. |
2024-06-17T00:00:00 | 2406.07230 | Needle In A Multimodal Haystack | [
"Weiyun Wang",
"Shuibo Zhang",
"Yiming Ren",
"Yuchen Duan",
"Tiantong Li",
"Shuo Liu",
"Mengkang Hu",
"Zhe Chen",
"Kaipeng Zhang",
"Lewei Lu",
"Xizhou Zhu",
"Ping Luo",
"Yu Qiao",
"Jifeng Dai",
"Wenqi Shao",
"Wenhai Wang"
]
| https://github.com/OpenGVLab/MM-NIAH | With the rapid advancement of multimodal large language models (MLLMs), their evaluation has become increasingly comprehensive. However, understanding long multimodal content, as a foundational ability for real-world applications, remains underexplored. In this work, we present Needle In A Multimodal Haystack (MM-NIAH), the first benchmark specifically designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents. Our benchmark includes three types of evaluation tasks: multimodal retrieval, counting, and reasoning. In each task, the model is required to answer the questions according to different key information scattered throughout the given multimodal document. Evaluating the leading MLLMs on MM-NIAH, we observe that existing models still have significant room for improvement on these tasks, especially on vision-centric evaluation. We hope this work can provide a platform for further research on long multimodal document comprehension and contribute to the advancement of MLLMs. Code and benchmark are released at https://github.com/OpenGVLab/MM-NIAH. |
2024-06-17T00:00:00 | 2406.08451 | GUI Odyssey: A Comprehensive Dataset for Cross-App GUI Navigation on Mobile Devices | [
"Quanfeng Lu",
"Wenqi Shao",
"Zitao Liu",
"Fanqing Meng",
"Boxuan Li",
"Botong Chen",
"Siyuan Huang",
"Kaipeng Zhang",
"Yu Qiao",
"Ping Luo"
]
| https://github.com/OpenGVLab/GUI-Odyssey | Smartphone users often navigate across multiple applications (apps) to complete tasks such as sharing content between social media platforms. Autonomous Graphical User Interface (GUI) navigation agents can enhance user experience in communication, entertainment, and productivity by streamlining workflows and reducing manual intervention. However, prior GUI agents were often trained on datasets comprising simple tasks that can be completed within a single app, leading to poor performance in cross-app navigation. To address this problem, we introduce GUI Odyssey, a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combos. Leveraging GUI Odyssey, we developed OdysseyAgent, a multimodal cross-app navigation agent, by fine-tuning the Qwen-VL model with a history resampling module. Extensive experiments demonstrate OdysseyAgent's superior accuracy compared to existing models. For instance, OdysseyAgent surpasses fine-tuned Qwen-VL and zero-shot GPT-4V by 1.44% and 55.49% in in-domain accuracy, and by 2.29% and 48.14% in out-of-domain accuracy on average. The dataset and code will be released at https://github.com/OpenGVLab/GUI-Odyssey. |
2024-06-17T00:00:00 | 2406.09961 | ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation | [
"Chufan Shi",
"Cheng Yang",
"Yaxin Liu",
"Bo Shui",
"Junjie Wang",
"Mohan Jing",
"Linran Xu",
"Xinyu Zhu",
"Siheng Li",
"Yuxiang Zhang",
"Gongye Liu",
"Xiaomei Nie",
"Deng Cai",
"Yujiu Yang"
]
| https://github.com/ChartMimic/ChartMimic | We introduce a new benchmark, ChartMimic, aimed at assessing the visually-grounded code generation capabilities of large multimodal models (LMMs). ChartMimic utilizes information-intensive visual charts and textual instructions as inputs, requiring LMMs to generate the corresponding code for chart rendering. ChartMimic includes 1,000 human-curated (figure, instruction, code) triplets, which represent the authentic chart use cases found in scientific papers across various domains (e.g., Physics, Computer Science, Economics, etc.). These charts span 18 regular types and 4 advanced types, diversifying into 191 subcategories. Furthermore, we propose multi-level evaluation metrics to provide an automatic and thorough assessment of the output code and the rendered charts. Unlike existing code generation benchmarks, ChartMimic places emphasis on evaluating LMMs' capacity to harmonize a blend of cognitive capabilities, encompassing visual understanding, code generation, and cross-modal reasoning. The evaluation of 3 proprietary models and 11 open-weight models highlights the substantial challenges posed by ChartMimic. Even the advanced GPT-4V and Claude-3-opus achieve average scores of only 73.2 and 53.7, respectively, indicating significant room for improvement. We anticipate that ChartMimic will inspire the development of LMMs, advancing the pursuit of artificial general intelligence. |
2024-06-17T00:00:00 | 2406.08845 | Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality | [
"Tianle Zhang",
"Langtian Ma",
"Yuchen Yan",
"Yuchen Zhang",
"Kai Wang",
"Yue Yang",
"Ziyao Guo",
"Wenqi Shao",
"Yang You",
"Yu Qiao",
"Ping Luo",
"Kaipeng Zhang"
]
| https://github.com/ztlmememe/T2VHE | Recent text-to-video (T2V) technology advancements, as demonstrated by models such as Gen2, Pika, and Sora, have significantly broadened its applicability and popularity. Despite these strides, evaluating these models poses substantial challenges. Primarily, due to the limitations inherent in automatic metrics, manual evaluation is often considered a superior method for assessing T2V generation. However, existing manual evaluation protocols face reproducibility, reliability, and practicality issues. To address these challenges, this paper introduces the Text-to-Video Human Evaluation (T2VHE) protocol, a comprehensive and standardized protocol for T2V models. The T2VHE protocol includes well-defined metrics, thorough annotator training, and an effective dynamic evaluation module. Experimental results demonstrate that this protocol not only ensures high-quality annotations but can also reduce evaluation costs by nearly 50%. We will open-source the entire setup of the T2VHE protocol, including the complete protocol workflow, the dynamic evaluation component details, and the annotation interface code. This will help communities establish more sophisticated human assessment protocols. |
2024-06-17T00:00:00 | 2406.07882 | Designing a Dashboard for Transparency and Control of Conversational AI | [
"Yida Chen",
"Aoyu Wu",
"Trevor DePodesta",
"Catherine Yeh",
"Kenneth Li",
"Nicholas Castillo Marin",
"Oam Patel",
"Jan Riecke",
"Shivam Raval",
"Olivia Seow",
"Martin Wattenberg",
"Fernanda Viégas"
]
| https://github.com/yc015/TalkTuner-chatbot-llm-dashboard/tree/main | Conversational LLMs function as black box systems, leaving users guessing about why they see the output they do. This lack of transparency is potentially problematic, especially given concerns around bias and truthfulness. To address this issue, we present an end-to-end prototype, connecting interpretability techniques with user experience design, that seeks to make chatbots more transparent. We begin by showing evidence that a prominent open-source LLM has a "user model": examining the internal state of the system, we can extract data related to a user's age, gender, educational level, and socioeconomic status. Next, we describe the design of a dashboard that accompanies the chatbot interface, displaying this user model in real time. The dashboard can also be used to control the user model and the system's behavior. Finally, we discuss a study in which users conversed with the instrumented system. Our results suggest that users appreciate seeing internal states, which helped them expose biased behavior and increased their sense of control. Participants also made valuable suggestions that point to future directions for both design and machine learning research. The project page and video demo of our TalkTuner system are available at https://bit.ly/talktuner-project-page |
2024-06-17T00:00:00 | 2406.08973 | XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning | [
"Alexander Nikulin",
"Ilya Zisman",
"Alexey Zemtsov",
"Viacheslav Sinii",
"Vladislav Kurenkov",
"Sergey Kolesnikov"
]
| https://github.com/dunno-lab/xland-minigrid-datasets | Following the success of the in-context learning paradigm in large-scale language and computer vision models, the recently emerging field of in-context reinforcement learning is experiencing a rapid growth. However, its development has been held back by the lack of challenging benchmarks, as all the experiments have been carried out in simple environments and on small-scale datasets. We present XLand-100B, a large-scale dataset for in-context reinforcement learning based on the XLand-MiniGrid environment, as a first step to alleviate this problem. It contains complete learning histories for nearly 30,000 different tasks, covering 100B transitions and 2.5B episodes. It took 50,000 GPU hours to collect the dataset, which is beyond the reach of most academic labs. Along with the dataset, we provide the utilities to reproduce or expand it even further. With this substantial effort, we aim to democratize research in the rapidly growing field of in-context reinforcement learning and provide a solid foundation for further scaling. The code is open-source and available under Apache 2.0 licence at https://github.com/dunno-lab/xland-minigrid-datasets. |
2024-06-17T00:00:00 | 2406.06263 | MaskLID: Code-Switching Language Identification through Iterative Masking | [
"Amir Hossein Kargaran",
"François Yvon",
"Hinrich Schütze"
]
| https://github.com/cisnlp/MaskLID | We present MaskLID, a simple, yet effective, code-switching (CS) language identification (LID) method. MaskLID does not require any training and is designed to complement current high-performance sentence-level LIDs. Sentence-level LIDs are classifiers trained on monolingual texts to provide single labels, typically using a softmax layer to turn scores into probabilities. However, in cases where a sentence is composed in both L1 and L2 languages, the LID classifier often only returns the dominant label L1. To address this limitation, MaskLID employs a strategy to mask text features associated with L1, allowing the LID to classify the text as L2 in the next round. This method uses the LID itself to identify the features that require masking and does not rely on any external resource. In this work, we explore the use of MaskLID for two open-source LIDs (GlotLID and OpenLID), that are both based on the FastText architecture. Code and demo are available at https://github.com/cisnlp/MaskLID. |
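The iterative-masking loop can be approximated in a few lines. The sketch below uses a generic `predict_language(text) -> label` callable as a stand-in for a FastText-style sentence-level LID; note that the real MaskLID masks the LID's own text features rather than re-scoring individual words, so this is only a word-level approximation of the idea.

```python
def iterative_lid(sentence, predict_language, max_rounds=2):
    """Detect the languages of a possibly code-switched sentence by
    repeatedly masking the words attributed to the dominant language."""
    remaining = sentence.split()
    detected = []
    for _ in range(max_rounds):
        if not remaining:
            break
        dominant = predict_language(" ".join(remaining))
        detected.append(dominant)
        # Mask words the LID attributes to the dominant label, then rerun
        # on what is left so a second language can surface.
        remaining = [w for w in remaining if predict_language(w) != dominant]
    return detected
```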
2024-06-17T00:00:00 | 2406.10111 | GaussianSR: 3D Gaussian Super-Resolution with 2D Diffusion Priors | [
"Xiqian Yu",
"Hanxin Zhu",
"Tianyu He",
"Zhibo Chen"
]
| Achieving high-resolution novel view synthesis (HRNVS) from low-resolution input views is a challenging task due to the lack of high-resolution data. Previous methods optimize high-resolution Neural Radiance Field (NeRF) from low-resolution input views but suffer from slow rendering speed. In this work, we base our method on 3D Gaussian Splatting (3DGS) due to its capability of producing high-quality images at a faster rendering speed. To alleviate the shortage of data for higher-resolution synthesis, we propose to leverage off-the-shelf 2D diffusion priors by distilling the 2D knowledge into 3D with Score Distillation Sampling (SDS). Nevertheless, applying SDS directly to Gaussian-based 3D super-resolution leads to undesirable and redundant 3D Gaussian primitives, due to the randomness brought by generative priors. To mitigate this issue, we introduce two simple yet effective techniques to reduce stochastic disturbances introduced by SDS. Specifically, we 1) shrink the range of diffusion timesteps in SDS with an annealing strategy; 2) randomly discard redundant Gaussian primitives during densification. Extensive experiments have demonstrated that our proposed GaussianSR can attain high-quality results for HRNVS with only low-resolution inputs on both synthetic and real-world datasets. Project page: https://chchnii.github.io/GaussianSR/ |
|
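The first of the two stabilization tricks, shrinking the SDS timestep range over the course of optimization, can be sketched as a simple schedule. The linear decay and the endpoint values below are illustrative assumptions, not the schedule reported in the paper.

```python
import random

def annealed_sds_timestep(step, total_steps, t_min=20, t_max_start=980, t_max_end=500):
    """Sample an SDS timestep from a range whose upper bound decays linearly
    as optimization proceeds, so late iterations see less noisy targets."""
    frac = step / max(1, total_steps - 1)
    t_max = int(round(t_max_start + frac * (t_max_end - t_max_start)))
    return random.randint(t_min, t_max)
```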
2024-06-17T00:00:00 | 2406.09900 | GEB-1.3B: Open Lightweight Large Language Model | [
"Jie Wu",
"Yufeng Zhu",
"Lei Shen",
"Xuqing Lu"
]
| Recently developed large language models (LLMs) such as ChatGPT, Claude, and Llama have demonstrated impressive abilities, and even surpass human-level performance in several tasks. Despite their success, the resource-intensive demands of these models, requiring significant computational power for both training and inference, limit their deployment to high-performance servers. Additionally, the extensive calculation requirements of the models often lead to increased latency in response times. With the increasing need for LLMs to operate efficiently on CPUs, research about lightweight models that are optimized for CPU inference has emerged. In this work, we introduce GEB-1.3B, a lightweight LLM trained on 550 billion tokens in both Chinese and English languages. We employ novel training techniques, including ROPE, Group-Query-Attention, and FlashAttention-2, to accelerate training while maintaining model performance. Additionally, we fine-tune the model using 10 million samples of instruction data to enhance alignment. GEB-1.3B exhibits outstanding performance on general benchmarks such as MMLU, C-Eval, and CMMLU, outperforming comparative models such as MindLLM-1.3B and TinyLLaMA-1.1B. Notably, the FP32 version of GEB-1.3B achieves commendable inference times on CPUs, with ongoing efforts to further enhance speed through advanced quantization techniques. The release of GEB-1.3B as an open-source model marks a significant contribution to the development of lightweight LLMs, promising to foster further research and innovation in the field. |
|
2024-06-17T00:00:00 | 2406.10149 | BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack | [
"Yuri Kuratov",
"Aydar Bulatov",
"Petr Anokhin",
"Ivan Rodkin",
"Dmitry Sorokin",
"Artyom Sorokin",
"Mikhail Burtsev"
]
| https://github.com/booydar/babilong | In recent years, the input context sizes of large language models (LLMs) have increased dramatically. However, existing evaluation methods have not kept pace, failing to comprehensively assess the efficiency of models in handling long contexts. To bridge this gap, we introduce the BABILong benchmark, designed to test language models' ability to reason across facts distributed in extremely long documents. BABILong includes a diverse set of 20 reasoning tasks, including fact chaining, simple induction, deduction, counting, and handling lists/sets. These tasks are challenging on their own, and even more demanding when the required facts are scattered across long natural text. Our evaluations show that popular LLMs effectively utilize only 10-20\% of the context and their performance declines sharply with increased reasoning complexity. Among alternatives to in-context reasoning, Retrieval-Augmented Generation methods achieve a modest 60\% accuracy on single-fact question answering, independent of context length. Among context extension methods, the highest performance is demonstrated by recurrent memory transformers, enabling the processing of lengths up to 11 million tokens. The BABILong benchmark is extendable to any length to support the evaluation of new upcoming models with increased capabilities, and we provide splits up to 1 million token lengths. |
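The benchmark's construction, scattering a handful of task-relevant facts inside arbitrarily long distractor text, is easy to emulate for quick sanity checks of a long-context model. The helper below is a toy version under stated assumptions: `facts`, `question`, and `filler_sentences` are caller-supplied strings, whereas the real benchmark draws its facts from bAbI tasks and its filler from book text.

```python
import random

def build_haystack_sample(facts, question, filler_sentences, target_chars, seed=0):
    """Scatter the given facts at random positions inside roughly
    target_chars of distractor text and return (context, question)."""
    rng = random.Random(seed)
    context = []
    while sum(len(s) + 1 for s in context) < target_chars:
        context.append(rng.choice(filler_sentences))
    for fact in facts:
        context.insert(rng.randrange(len(context) + 1), fact)
    return " ".join(context), question
```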
2024-06-17T00:00:00 | 2406.10210 | Make It Count: Text-to-Image Generation with an Accurate Number of Objects | [
"Lital Binyamin",
"Yoad Tewel",
"Hilit Segev",
"Eran Hirsch",
"Royi Rassin",
"Gal Chechik"
]
| https://github.com/Litalby1/make-it-count | Despite the unprecedented success of text-to-image diffusion models, controlling the number of depicted objects using text is surprisingly hard. This is important for various applications from technical documents, to children's books to illustrating cooking recipes. Generating object-correct counts is fundamentally challenging because the generative model needs to keep a sense of separate identity for every instance of the object, even if several objects look identical or overlap, and then carry out a global computation implicitly during generation. It is still unknown if such representations exist. To address count-correct generation, we first identify features within the diffusion model that can carry the object identity information. We then use them to separate and count instances of objects during the denoising process and detect over-generation and under-generation. We fix the latter by training a model that predicts both the shape and location of a missing object, based on the layout of existing ones, and show how it can be used to guide denoising with correct object count. Our approach, CountGen, does not depend on external source to determine object layout, but rather uses the prior from the diffusion model itself, creating prompt-dependent and seed-dependent layouts. Evaluated on two benchmark datasets, we find that CountGen strongly outperforms the count-accuracy of existing baselines. |
2024-06-17T00:00:00 | 2406.10118 | SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages | [
"Holy Lovenia",
"Rahmad Mahendra",
"Salsabil Maulana Akbar",
"Lester James V. Miranda",
"Jennifer Santoso",
"Elyanah Aco",
"Akhdan Fadhilah",
"Jonibek Mansurov",
"Joseph Marvin Imperial",
"Onno P. Kampman",
"Joel Ruben Antony Moniz",
"Muhammad Ravi Shulthan Habibi",
"Frederikus Hudi",
"Railey Montalan",
"Ryan Ignatius",
"Joanito Agili Lopo",
"William Nixon",
"Börje F. Karlsson",
"James Jaya",
"Ryandito Diandaru",
"Yuze Gao",
"Patrick Amadeus",
"Bin Wang",
"Jan Christian Blaise Cruz",
"Chenxi Whitehouse",
"Ivan Halim Parmonangan",
"Maria Khelli",
"Wenyu Zhang",
"Lucky Susanto",
"Reynard Adha Ryanda",
"Sonny Lazuardi Hermawan",
"Dan John Velasco",
"Muhammad Dehan Al Kautsar",
"Willy Fitra Hendria",
"Yasmin Moslem",
"Noah Flynn",
"Muhammad Farid Adilazuarda",
"Haochen Li",
"Johanes Lee",
"R. Damanhuri",
"Shuo Sun",
"Muhammad Reza Qorib",
"Amirbek Djanibekov",
"Wei Qi Leong",
"Quyet V. Do",
"Niklas Muennighoff",
"Tanrada Pansuwan",
"Ilham Firdausi Putra",
"Yan Xu",
"Ngee Chia Tai",
"Ayu Purwarianti",
"Sebastian Ruder",
"William Tjhi",
"Peerat Limkonchotiwat",
"Alham Fikri Aji",
"Sedrick Keh",
"Genta Indra Winata",
"Ruochen Zhang",
"Fajri Koto",
"Zheng-Xin Yong",
"Samuel Cahyawijaya"
]
| https://github.com/SEACrowd/seacrowd-datahub | Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, we introduce SEACrowd, a collaborative initiative that consolidates a comprehensive resource hub that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in SEA. |
2024-06-17T00:00:00 | 2406.10126 | Training-free Camera Control for Video Generation | [
"Chen Hou",
"Guoqiang Wei",
"Yan Zeng",
"Zhibo Chen"
]
| We propose a training-free and robust solution to offer camera movement control for off-the-shelf video diffusion models. Unlike previous work, our method does not require any supervised finetuning on camera-annotated datasets or self-supervised training via data augmentation. Instead, it can be plugged into most pretrained video diffusion models to generate camera-controllable videos from a single image or text prompt. Our work is inspired by the layout prior that intermediate latents impose on the generated results: rearranging noisy pixels in the latents reallocates content in the output accordingly. Since camera movement can also be seen as a kind of pixel rearrangement caused by perspective change, videos can be reorganized to follow a specific camera motion if their noisy latents are changed accordingly. Building on this, we propose CamTrol, which enables robust camera control for video diffusion models through a two-stage process. First, we model image layout rearrangement through explicit camera movement in 3D point cloud space. Second, we generate videos with camera motion using the layout prior of noisy latents formed from a series of rearranged images. Extensive experiments demonstrate the robustness of our method in controlling the camera motion of generated videos. Furthermore, we show that our method can produce impressive results in generating 3D rotation videos with dynamic content. Project page at https://lifedecoder.github.io/CamTrol/. |
|
2024-06-17T00:00:00 | 2406.10227 | VideoGUI: A Benchmark for GUI Automation from Instructional Videos | [
"Kevin Qinghong Lin",
"Linjie Li",
"Difei Gao",
"Qinchen WU",
"Mingyi Yan",
"Zhengyuan Yang",
"Lijuan Wang",
"Mike Zheng Shou"
]
| https://github.com/showlab/videogui | Graphical User Interface (GUI) automation holds significant promise for enhancing human productivity by assisting with computer tasks. Existing task formulations primarily focus on simple tasks that can be specified by a single, language-only instruction, such as "Insert a new slide." In this work, we introduce VideoGUI, a novel multi-modal benchmark designed to evaluate GUI assistants on visual-centric GUI tasks. Sourced from high-quality web instructional videos, our benchmark focuses on tasks involving professional and novel software (e.g., Adobe Photoshop or Stable Diffusion WebUI) and complex activities (e.g., video editing). VideoGUI evaluates GUI assistants through a hierarchical process, allowing for identification of the specific levels at which they may fail: (i) high-level planning: reconstruct procedural subtasks from visual conditions without language descriptions; (ii) middle-level planning: generate sequences of precise action narrations based on visual state (i.e., screenshot) and goals; (iii) atomic action execution: perform specific actions such as accurately clicking designated elements. For each level, we design evaluation metrics across individual dimensions to provide clear signals, such as individual performance in clicking, dragging, typing, and scrolling for atomic action execution. Our evaluation on VideoGUI reveals that even the SoTA large multimodal model GPT4o performs poorly on visual-centric GUI tasks, especially for high-level planning. |
2024-06-17T00:00:00 | 2406.08659 | Vivid-ZOO: Multi-View Video Generation with Diffusion Model | [
"Bing Li",
"Cheng Zheng",
"Wenxuan Zhu",
"Jinjie Mai",
"Biao Zhang",
"Peter Wonka",
"Bernard Ghanem"
]
| https://github.com/hi-zhengcheng/vividzoo | While diffusion models have shown impressive performance in 2D image/video generation, diffusion-based Text-to-Multi-view-Video (T2MVid) generation remains underexplored. The new challenges posed by T2MVid generation lie in the lack of massive captioned multi-view videos and the complexity of modeling such multi-dimensional distribution. To this end, we propose a novel diffusion-based pipeline that generates high-quality multi-view videos centered around a dynamic 3D object from text. Specifically, we factor the T2MVid problem into viewpoint-space and time components. Such factorization allows us to combine and reuse layers of advanced pre-trained multi-view image and 2D video diffusion models to ensure multi-view consistency as well as temporal coherence for the generated multi-view videos, largely reducing the training cost. We further introduce alignment modules to align the latent spaces of layers from the pre-trained multi-view and the 2D video diffusion models, addressing the reused layers' incompatibility that arises from the domain gap between 2D and multi-view data. In support of this and future research, we further contribute a captioned multi-view video dataset. Experimental results demonstrate that our method generates high-quality multi-view videos, exhibiting vivid motions, temporal coherence, and multi-view consistency, given a variety of text prompts. |
2024-06-17T00:00:00 | 2406.08920 | AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis | [
"Swapnil Bhosale",
"Haosen Yang",
"Diptesh Kanojia",
"Jiankang Deng",
"Xiatian Zhu"
]
| https://github.com/Surrey-UP-Lab/AV-GS | Novel view acoustic synthesis (NVAS) aims to render binaural audio at any target viewpoint, given mono audio emitted by a sound source in a 3D scene. Existing methods have proposed NeRF-based implicit models to exploit visual cues as a condition for synthesizing binaural audio. However, in addition to low efficiency originating from heavy NeRF rendering, these methods all have a limited ability to characterize the entire scene environment, such as room geometry, material properties, and the spatial relation between the listener and sound source. To address these issues, we propose a novel Audio-Visual Gaussian Splatting (AV-GS) model. To obtain a material-aware and geometry-aware condition for audio synthesis, we learn an explicit point-based scene representation with an audio-guidance parameter on locally initialized Gaussian points, taking into account the spatial relation between the listener and the sound source. To make the visual scene model audio adaptive, we propose a point densification and pruning strategy to optimally distribute the Gaussian points based on the per-point contribution to sound propagation (e.g., more points are needed for texture-less wall surfaces as they affect sound path diversion). Extensive experiments validate the superiority of our AV-GS over existing alternatives on the real-world RWAS and simulation-based SoundSpaces datasets. |
2024-06-17T00:00:00 | 2406.08545 | RVT-2: Learning Precise Manipulation from Few Demonstrations | [
"Ankit Goyal",
"Valts Blukis",
"Jie Xu",
"Yijie Guo",
"Yu-Wei Chao",
"Dieter Fox"
]
| https://github.com/nvlabs/rvt | In this work, we study how to build a robotic system that can solve multiple 3D manipulation tasks given language instructions. To be useful in industrial and household domains, such a system should be capable of learning new tasks with few demonstrations and solving them precisely. Prior works, like PerAct and RVT, have studied this problem, however, they often struggle with tasks requiring high precision. We study how to make them more effective, precise, and fast. Using a combination of architectural and system-level improvements, we propose RVT-2, a multitask 3D manipulation model that is 6X faster in training and 2X faster in inference than its predecessor RVT. RVT-2 achieves a new state-of-the-art on RLBench, improving the success rate from 65% to 82%. RVT-2 is also effective in the real world, where it can learn tasks requiring high precision, like picking up and inserting plugs, with just 10 demonstrations. Visual results, code, and trained model are provided at: https://robotic-view-transformer-2.github.io/. |
2024-06-17T00:00:00 | 2406.10209 | Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs | [
"Abhimanyu Hans",
"Yuxin Wen",
"Neel Jain",
"John Kirchenbauer",
"Hamid Kazemi",
"Prajwal Singhania",
"Siddharth Singh",
"Gowthami Somepalli",
"Jonas Geiping",
"Abhinav Bhatele",
"Tom Goldstein"
]
| https://github.com/ahans30/goldfish-loss | Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss. During training, a randomly sampled subset of tokens is excluded from the loss computation. These dropped tokens are not memorized by the model, which prevents verbatim reproduction of a complete chain of tokens from the training set. We run extensive experiments training billion-scale Llama-2 models, both pre-trained and trained from scratch, and demonstrate significant reductions in extractable memorization with little to no impact on downstream benchmarks. |
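The training-objective change is small enough to show directly. The sketch below implements the idea as the abstract describes it, excluding a random subset of token positions from the cross-entropy; the drop probability is a placeholder, and the paper's exact scheme for choosing which positions to drop may differ.

```python
import torch
import torch.nn.functional as F

def goldfish_style_loss(logits, labels, drop_prob=0.25):
    """Next-token loss that ignores a random subset of positions.

    logits: (batch, seq, vocab); labels: (batch, seq) token ids.
    Dropped positions are set to ignore_index so they contribute no gradient
    and therefore cannot be memorized verbatim through this objective.
    """
    drop = torch.rand(labels.shape, device=labels.device) < drop_prob
    masked_labels = labels.masked_fill(drop, -100)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        masked_labels.reshape(-1),
        ignore_index=-100,
    )
```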
2024-06-18T00:00:00 | 2406.11768 | GAMA: A Large Audio-Language Model with Advanced Audio Understanding and Complex Reasoning Abilities | [
"Sreyan Ghosh",
"Sonal Kumar",
"Ashish Seth",
"Chandra Kiran Reddy Evuru",
"Utkarsh Tyagi",
"S Sakshi",
"Oriol Nieto",
"Ramani Duraiswami",
"Dinesh Manocha"
]
| https://github.com/Sreyan88/GAMA | Perceiving and understanding non-speech sounds and non-verbal speech is essential to making decisions that help us interact with our surroundings. In this paper, we propose GAMA, a novel General-purpose Large Audio-Language Model (LALM) with Advanced Audio Understanding and Complex Reasoning Abilities. We build GAMA by integrating an LLM with multiple types of audio representations, including features from a custom Audio Q-Former, a multi-layer aggregator that aggregates features from multiple layers of an audio encoder. We fine-tune GAMA on a large-scale audio-language dataset, which augments it with audio understanding capabilities. Next, we propose CompA-R (Instruction-Tuning for Complex Audio Reasoning), a synthetically generated instruction-tuning (IT) dataset with instructions that require the model to perform complex reasoning on the input audio. We instruction-tune GAMA with CompA-R to endow it with complex reasoning abilities, where we further add a soft prompt as input with high-level semantic evidence by leveraging event tags of the input audio. Finally, we also propose CompA-R-test, a human-labeled evaluation dataset for evaluating the capabilities of LALMs on open-ended audio question-answering that requires complex reasoning. Through automated and expert human evaluations, we show that GAMA outperforms all other LALMs in literature on diverse audio understanding tasks by margins of 1%-84%. Further, GAMA IT-ed on CompA-R proves to be superior in its complex reasoning and instruction following capabilities. |
2024-06-18T00:00:00 | 2406.11794 | DataComp-LM: In search of the next generation of training sets for language models | [
"Jeffrey Li",
"Alex Fang",
"Georgios Smyrnis",
"Maor Ivgi",
"Matt Jordan",
"Samir Gadre",
"Hritik Bansal",
"Etash Guha",
"Sedrick Keh",
"Kushal Arora",
"Saurabh Garg",
"Rui Xin",
"Niklas Muennighoff",
"Reinhard Heckel",
"Jean Mercat",
"Mayee Chen",
"Suchin Gururangan",
"Mitchell Wortsman",
"Alon Albalak",
"Yonatan Bitton",
"Marianna Nezhurina",
"Amro Abbas",
"Cheng-Yu Hsieh",
"Dhruba Ghosh",
"Josh Gardner",
"Maciej Kilian",
"Hanlin Zhang",
"Rulin Shao",
"Sarah Pratt",
"Sunny Sanyal",
"Gabriel Ilharco",
"Giannis Daras",
"Kalyani Marathe",
"Aaron Gokaslan",
"Jieyu Zhang",
"Khyathi Chandu",
"Thao Nguyen",
"Igor Vasiljevic",
"Sham Kakade",
"Shuran Song",
"Sujay Sanghavi",
"Fartash Faghri",
"Sewoong Oh",
"Luke Zettlemoyer",
"Kyle Lo",
"Alaaeldin El-Nouby",
"Hadi Pouransari",
"Alexander Toshev",
"Stephanie Wang",
"Dirk Groeneveld",
"Luca Soldani",
"Pang Wei Koh",
"Jenia Jitsev",
"Thomas Kollar",
"Alexandros G. Dimakis",
"Yair Carmon",
"Achal Dave",
"Ludwig Schmidt",
"Vaishaal Shankar"
]
| We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline enables training a 7B parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens. Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute. Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation. |
|
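The finding that model-based filtering matters most can be pictured as a simple top-fraction selection by a learned quality score. In the sketch below, `quality_score(text) -> float` is a hypothetical stand-in for the fastText-style classifier the DCLM baseline describes, and the keep fraction is illustrative rather than the paper's tuned setting.

```python
def model_based_filter(documents, quality_score, keep_fraction=0.1):
    """Keep the highest-scoring fraction of documents."""
    ranked = sorted(documents, key=quality_score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

# Toy usage with a trivial scorer (longer documents score higher).
corpus = ["short snippet", "a much longer and more informative document about training data"]
print(model_based_filter(corpus, quality_score=len, keep_fraction=0.5))
```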
2024-06-18T00:00:00 | 2406.11839 | mDPO: Conditional Preference Optimization for Multimodal Large Language Models | [
"Fei Wang",
"Wenxuan Zhou",
"James Y. Huang",
"Nan Xu",
"Sheng Zhang",
"Hoifung Poon",
"Muhao Chen"
]
| Direct preference optimization (DPO) has been shown to be an effective method for large language model (LLM) alignment. Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement. Through a comparative experiment, we identify the unconditional preference problem in multimodal preference optimization, where the model overlooks the image condition. To address this problem, we propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference. Moreover, we introduce a reward anchor that forces the reward to be positive for chosen responses, thereby avoiding the decrease in their likelihood -- an intrinsic problem of relative preference optimization. Experiments on two multimodal LLMs of different sizes and three widely used benchmarks demonstrate that mDPO effectively addresses the unconditional preference problem in multimodal preference optimization and significantly improves model performance, particularly in reducing hallucination. |
|
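A rough reading of the objective described above is a standard DPO term plus an image-conditional preference term and a positivity anchor on the chosen reward. The sketch below follows the abstract only; the way the corrupted-image pair is built, the anchor form, and the coefficients are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mdpo_style_loss(logp_w, logp_l, logp_w_ref, logp_l_ref,
                    logp_w_badimg, logp_w_badimg_ref, beta=0.1):
    """Conditional-preference sketch: logp_* are summed log-probs of the
    chosen (w) / rejected (l) responses under the policy and reference
    model, with *_badimg computed on a corrupted image condition."""
    r_w = beta * (logp_w - logp_w_ref)
    r_l = beta * (logp_l - logp_l_ref)
    r_w_bad = beta * (logp_w_badimg - logp_w_badimg_ref)
    text_pref = -F.logsigmoid(r_w - r_l)        # usual DPO preference term
    image_pref = -F.logsigmoid(r_w - r_w_bad)   # prefer the true image condition
    anchor = -F.logsigmoid(r_w)                 # keep the chosen reward positive
    return (text_pref + image_pref + anchor).mean()
```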
2024-06-18T00:00:00 | 2406.10324 | L4GM: Large 4D Gaussian Reconstruction Model | [
"Jiawei Ren",
"Kevin Xie",
"Ashkan Mirzaei",
"Hanxue Liang",
"Xiaohui Zeng",
"Karsten Kreis",
"Ziwei Liu",
"Antonio Torralba",
"Sanja Fidler",
"Seung Wook Kim",
"Huan Ling"
]
| We present L4GM, the first 4D Large Reconstruction Model that produces animated objects from a single-view video input -- in a single feed-forward pass that takes only a second. Key to our success is a novel dataset of multiview videos containing curated, rendered animated objects from Objaverse. This dataset depicts 44K diverse objects with 110K animations rendered in 48 viewpoints, resulting in 12M videos with a total of 300M frames. We keep our L4GM simple for scalability and build directly on top of LGM, a pretrained 3D Large Reconstruction Model that outputs 3D Gaussian ellipsoids from multiview image input. L4GM outputs a per-frame 3D Gaussian Splatting representation from video frames sampled at a low fps and then upsamples the representation to a higher fps to achieve temporal smoothness. We add temporal self-attention layers to the base LGM to help it learn consistency across time, and utilize a per-timestep multiview rendering loss to train the model. The representation is upsampled to a higher framerate by training an interpolation model which produces intermediate 3D Gaussian representations. We showcase that L4GM that is only trained on synthetic data generalizes extremely well on in-the-wild videos, producing high quality animated 3D assets. |
|
2024-06-18T00:00:00 | 2406.10163 | MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers | [
"Yiwen Chen",
"Tong He",
"Di Huang",
"Weicai Ye",
"Sijin Chen",
"Jiaxiang Tang",
"Xin Chen",
"Zhongang Cai",
"Lei Yang",
"Gang Yu",
"Guosheng Lin",
"Chi Zhang"
]
| https://github.com/buaacyw/MeshAnything | Recently, 3D assets created via reconstruction and generation have matched the quality of manually crafted assets, highlighting their potential for replacement. However, this potential is largely unrealized because these assets always need to be converted to meshes for 3D industry applications, and the meshes produced by current mesh extraction methods are significantly inferior to Artist-Created Meshes (AMs), i.e., meshes created by human artists. Specifically, current mesh extraction methods rely on dense faces and ignore geometric features, leading to inefficiencies, complicated post-processing, and lower representation quality. To address these issues, we introduce MeshAnything, a model that treats mesh extraction as a generation problem, producing AMs aligned with specified shapes. By converting 3D assets in any 3D representation into AMs, MeshAnything can be integrated with various 3D asset production methods, thereby enhancing their application across the 3D industry. The architecture of MeshAnything comprises a VQ-VAE and a shape-conditioned decoder-only transformer. We first learn a mesh vocabulary using the VQ-VAE, then train the shape-conditioned decoder-only transformer on this vocabulary for shape-conditioned autoregressive mesh generation. Our extensive experiments show that our method generates AMs with hundreds of times fewer faces, significantly improving storage, rendering, and simulation efficiencies, while achieving precision comparable to previous methods. |
2024-06-18T00:00:00 | 2406.11196 | Vid3D: Synthesis of Dynamic 3D Scenes using 2D Video Diffusion | [
"Rishab Parthasarathy",
"Zack Ankner",
"Aaron Gokaslan"
]
| https://github.com/rishab-partha/Vid3D | A recent frontier in computer vision has been the task of 3D video generation, which consists of generating a time-varying 3D representation of a scene. To generate dynamic 3D scenes, current methods explicitly model 3D temporal dynamics by jointly optimizing for consistency across both time and views of the scene. In this paper, we instead investigate whether it is necessary to explicitly enforce multiview consistency over time, as current approaches do, or if it is sufficient for a model to generate 3D representations of each timestep independently. We hence propose a model, Vid3D, that leverages 2D video diffusion to generate 3D videos by first generating a 2D "seed" of the video's temporal dynamics and then independently generating a 3D representation for each timestep in the seed video. We evaluate Vid3D against two state-of-the-art 3D video generation methods and find that Vid3D achieves comparable results despite not explicitly modeling 3D temporal dynamics. We further ablate how the quality of Vid3D depends on the number of views generated per frame. While we observe some degradation with fewer views, it remains minor. Our results thus suggest that 3D temporal knowledge may not be necessary to generate high-quality dynamic 3D scenes, potentially enabling simpler generative algorithms for this task. |
2024-06-18T00:00:00 | 2406.10996 | THEANINE: Revisiting Memory Management in Long-term Conversations with Timeline-augmented Response Generation | [
"Seo Hyun Kim",
"Kai Tzu-iunn Ong",
"Taeyoon Kwon",
"Namyoung Kim",
"Keummin Ka",
"SeongHyeon Bae",
"Yohan Jo",
"Seung-won Hwang",
"Dongha Lee",
"Jinyoung Yeo"
]
| Large language models (LLMs) are capable of processing lengthy dialogue histories during prolonged interaction with users without additional memory modules; however, their responses tend to overlook or incorrectly recall information from the past. In this paper, we revisit memory-augmented response generation in the era of LLMs. While prior work focuses on getting rid of outdated memories, we argue that such memories can provide contextual cues that help dialogue systems understand the development of past events and, therefore, benefit response generation. We present Theanine, a framework that augments LLMs' response generation with memory timelines -- series of memories that demonstrate the development and causality of relevant past events. Along with Theanine, we introduce TeaFarm, a counterfactual-driven question-answering pipeline addressing the limitation of G-Eval in long-term conversations. Supplementary videos of our methods and the TeaBag dataset for TeaFarm evaluation are in https://theanine-693b0.web.app/. |
|
2024-06-18T00:00:00 | 2406.11069 | WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences | [
"Yujie Lu",
"Dongfu Jiang",
"Wenhu Chen",
"William Yang Wang",
"Yejin Choi",
"Bill Yuchen Lin"
]
| Recent breakthroughs in vision-language models (VLMs) emphasize the necessity of benchmarking human preferences in real-world multimodal interactions. To address this gap, we launched WildVision-Arena (WV-Arena), an online platform that collects human preferences to evaluate VLMs. We curated WV-Bench by selecting 500 high-quality samples from 8,000 user submissions in WV-Arena. WV-Bench uses GPT-4 as the judge to compare each VLM with Claude-3-Sonnet, achieving a Spearman correlation of 0.94 with the WV-Arena Elo. This significantly outperforms other benchmarks like MMVet, MMMU, and MMStar. Our comprehensive analysis of 20K real-world interactions reveals important insights into the failure cases of top-performing VLMs. For example, we find that although GPT-4V surpasses many other models like Reka-Flash, Opus, and Yi-VL-Plus in simple visual recognition and reasoning tasks, it still faces challenges with subtle contextual cues, spatial reasoning, visual imagination, and expert domain knowledge. Additionally, current VLMs exhibit issues with hallucinations and safety when intentionally provoked. We are releasing our chat and feedback data to further advance research in the field of VLMs. |
|
2024-06-18T00:00:00 | 2406.11816 | VideoLLM-online: Online Video Large Language Model for Streaming Video | [
"Joya Chen",
"Zhaoyang Lv",
"Shiwei Wu",
"Kevin Qinghong Lin",
"Chenan Song",
"Difei Gao",
"Jia-Wei Liu",
"Ziteng Gao",
"Dongxing Mao",
"Mike Zheng Shou"
]
| https://github.com/showlab/VideoLLM-online | Recent Large Language Models have been enhanced with vision capabilities, enabling them to comprehend images, videos, and interleaved vision-language content. However, the learning methods of these large multimodal models typically treat videos as predetermined clips, making them less effective and efficient at handling streaming video inputs. In this paper, we propose a novel Learning-In-Video-Stream (LIVE) framework, which enables temporally aligned, long-context, and real-time conversation within a continuous video stream. Our LIVE framework comprises comprehensive approaches to achieve video streaming dialogue, encompassing: (1) a training objective designed to perform language modeling for continuous streaming inputs, (2) a data generation scheme that converts offline temporal annotations into a streaming dialogue format, and (3) an optimized inference pipeline to speed up the model responses in real-world video streams. With our LIVE framework, we built VideoLLM-online model upon Llama-2/Llama-3 and demonstrate its significant advantages in processing streaming videos. For instance, on average, our model can support streaming dialogue in a 5-minute video clip at over 10 FPS on an A100 GPU. Moreover, it also showcases state-of-the-art performance on public offline video benchmarks, such as recognition, captioning, and forecasting. The code, model, data, and demo have been made available at https://showlab.github.io/videollm-online. |
2024-06-18T00:00:00 | 2406.11827 | WPO: Enhancing RLHF with Weighted Preference Optimization | [
"Wenxuan Zhou",
"Ravi Agrawal",
"Shujian Zhang",
"Sathish Reddy Indurthi",
"Sanqiang Zhao",
"Kaiqiang Song",
"Silei Xu",
"Chenguang Zhu"
]
| https://github.com/wzhouad/WPO | Reinforcement learning from human feedback (RLHF) is a promising solution to align large language models (LLMs) more closely with human values. Off-policy preference optimization, where the preference data is obtained from other models, is widely adopted due to its cost efficiency and scalability. However, off-policy preference optimization often suffers from a distributional gap between the policy used for data collection and the target policy, leading to suboptimal optimization. In this paper, we propose a novel strategy to mitigate this problem by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. We validate our method on instruction following benchmarks including Alpaca Eval 2 and MT-bench. WPO not only outperforms Direct Preference Optimization (DPO) by up to 5.6% on Alpaca Eval 2 but also establishes a remarkable length-controlled winning rate against GPT-4-turbo of 48.6% based on Llama-3-8B-Instruct, making it the strongest 8B model on the leaderboard. We will release the code and models at https://github.com/wzhouad/WPO. |
2024-06-18T00:00:00 | 2406.11775 | Task Me Anything | [
"Jieyu Zhang",
"Weikai Huang",
"Zixian Ma",
"Oscar Michel",
"Dong He",
"Tanmay Gupta",
"Wei-Chiu Ma",
"Ali Farhadi",
"Aniruddha Kembhavi",
"Ranjay Krishna"
]
| https://github.com/JieyuZ2/TaskMeAnything | Benchmarks for large multimodal language models (MLMs) now assess the general capabilities of models simultaneously rather than evaluating a specific capability. As a result, when a developer wants to identify which models to use for their application, they are overwhelmed by the number of benchmarks and remain uncertain about which benchmark's results are most reflective of their specific use case. This paper introduces Task-Me-Anything, a benchmark generation engine which produces a benchmark tailored to a user's needs. Task-Me-Anything maintains an extendable taxonomy of visual assets and can programmatically generate a vast number of task instances. Additionally, it algorithmically addresses user queries regarding MLM performance efficiently within a computational budget. It contains 113K images, 10K videos, 2K 3D object assets, over 365 object categories, 655 attributes, and 335 relationships. It can generate 750M image/video question-answering pairs, which focus on evaluating MLM perceptual capabilities. Task-Me-Anything reveals critical insights: open-source MLMs excel in object and attribute recognition but lack spatial and temporal understanding; each model exhibits unique strengths and weaknesses; larger models generally perform better, though exceptions exist; and GPT4o demonstrates challenges in recognizing rotating/moving objects and distinguishing colors. |
2024-06-18T00:00:00 | 2406.10328 | From Pixels to Prose: A Large Dataset of Dense Image Captions | [
"Vasu Singla",
"Kaiyu Yue",
"Sukriti Paul",
"Reza Shirkavand",
"Mayuka Jayawardhana",
"Alireza Ganjdanesh",
"Heng Huang",
"Abhinav Bhatele",
"Gowthami Somepalli",
"Tom Goldstein"
]
| Training large vision-language models requires extensive, high-quality image-text pairs. Existing web-scraped datasets, however, are noisy and lack detailed image descriptions. To bridge this gap, we introduce PixelProse, a comprehensive dataset of over 16M (million) synthetically generated captions, leveraging cutting-edge vision-language models for detailed and accurate descriptions. To ensure data integrity, we rigorously analyze our dataset for problematic content, including child sexual abuse material (CSAM), personally identifiable information (PII), and toxicity. We also provide valuable metadata such as watermark presence and aesthetic scores, aiding in further dataset filtering. We hope PixelProse will be a valuable resource for future vision-language research. PixelProse is available at https://huggingface.co/datasets/tomg-group-umd/pixelprose |
|
2024-06-18T00:00:00 | 2406.11402 | Evaluating Open Language Models Across Task Types, Application Domains, and Reasoning Types: An In-Depth Experimental Analysis | [
"Neelabh Sinha",
"Vinija Jain",
"Aman Chadha"
]
| https://github.com/neelabhsinha/lm-application-eval-kit | The rapid rise of Language Models (LMs) has expanded their use in several applications. Yet, due to constraints of model size, associated cost, or proprietary restrictions, utilizing state-of-the-art (SOTA) LLMs is not always feasible. With open, smaller LMs emerging, more applications can leverage their capabilities, but selecting the right LM can be challenging. This work conducts an in-depth experimental analysis of the semantic correctness of outputs of 10 smaller, open LMs across three aspects: task types, application domains and reasoning types, using diverse prompt styles. We demonstrate that the most effective models and prompt styles vary depending on the specific requirements. Our analysis provides a comparative assessment of LMs and prompt styles using a proposed three-tier schema of aspects for their strategic selection based on use-case and other constraints. We also show that if utilized appropriately, these LMs can compete with, and sometimes outperform, SOTA LLMs like DeepSeek-v2, GPT-3.5-Turbo, and GPT-4o. |
2024-06-18T00:00:00 | 2406.11271 | MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens | [
"Anas Awadalla",
"Le Xue",
"Oscar Lo",
"Manli Shu",
"Hannah Lee",
"Etash Kumar Guha",
"Matt Jordan",
"Sheng Shen",
"Mohamed Awadalla",
"Silvio Savarese",
"Caiming Xiong",
"Ran Xu",
"Yejin Choi",
"Ludwig Schmidt"
]
| https://github.com/mlfoundations/MINT-1T | Multimodal interleaved datasets featuring free-form interleaved sequences of images and text are crucial for training frontier large multimodal models (LMMs). Despite the rapid progression of open-source LMMs, there remains a pronounced scarcity of large-scale, diverse open-source multimodal interleaved datasets. In response, we introduce MINT-1T, the most extensive and diverse open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one trillion text tokens and three billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires substantial engineering effort, sharing the data curation process and releasing the dataset greatly benefits the community. Our experiments show that LMMs trained on MINT-1T rival the performance of models trained on the previous leading dataset, OBELICS. Our data and code will be released at https://github.com/mlfoundations/MINT-1T. |
2024-06-18T00:00:00 | 2406.10906 | Breaking the Attention Bottleneck | [
"Kalle Hilsenbek"
]
| Attention-based transformers have become the standard architecture in many deep learning fields, primarily due to their ability to model long-range dependencies and handle variable-length input sequences. However, the attention mechanism with its quadratic complexity is a significant bottleneck in the transformer architecture. This algorithm is only uni-directional in the decoder and converges to a static pattern in over-parametrized decoder-only models. I address this issue by developing a generative function as attention or activation replacement. It still has the auto-regressive character by comparing each token with the previous one. In my test setting with nanoGPT this yields a smaller loss while having a smaller model. The loss further drops by incorporating an average context vector. This concept of attention replacement is distributed under the GNU AGPL v3 license at https://gitlab.com/Bachstelze/causal_generation. |
|
2024-06-18T00:00:00 | 2406.11840 | LLaNA: Large Language and NeRF Assistant | [
"Andrea Amaduzzi",
"Pierluigi Zama Ramirez",
"Giuseppe Lisanti",
"Samuele Salti",
"Luigi Di Stefano"
]
| Multimodal Large Language Models (MLLMs) have demonstrated an excellent understanding of images and 3D data. However, both modalities have shortcomings in holistically capturing the appearance and geometry of objects. Meanwhile, Neural Radiance Fields (NeRFs), which encode information within the weights of a simple Multi-Layer Perceptron (MLP), have emerged as an increasingly widespread modality that simultaneously encodes the geometry and photorealistic appearance of objects. This paper investigates the feasibility and effectiveness of ingesting NeRFs into MLLMs. We create LLaNA, the first general-purpose NeRF-language assistant capable of performing new tasks such as NeRF captioning and Q&A. Notably, our method directly processes the weights of the NeRF's MLP to extract information about the represented objects without the need to render images or materialize 3D data structures. Moreover, we build a dataset of NeRFs with text annotations for various NeRF-language tasks with no human intervention. Based on this dataset, we develop a benchmark to evaluate the NeRF understanding capability of our method. Results show that processing NeRF weights performs favourably against extracting 2D or 3D representations from NeRFs. |
|
2024-06-18T00:00:00 | 2406.11813 | How Do Large Language Models Acquire Factual Knowledge During Pretraining? | [
"Hoyeon Chang",
"Jinho Park",
"Seonghyeon Ye",
"Sohee Yang",
"Youngkyung Seo",
"Du-Seong Chang",
"Minjoon Seo"
]
| Despite the recent observation that large language models (LLMs) can store substantial factual knowledge, there is a limited understanding of the mechanisms of how they acquire factual knowledge through pretraining. This work addresses this gap by studying how LLMs acquire factual knowledge during pretraining. The findings reveal several important insights into the dynamics of factual knowledge acquisition during pretraining. First, counterintuitively, we observe that pretraining on more data shows no significant improvement in the model's capability to acquire and maintain factual knowledge. Next, there is a power-law relationship between training steps and forgetting of memorization and generalization of factual knowledge, and LLMs trained with duplicated training data exhibit faster forgetting. Third, training LLMs with larger batch sizes can enhance the models' robustness to forgetting. Overall, our observations suggest that factual knowledge acquisition in LLM pretraining occurs by progressively increasing the probability of factual knowledge presented in the pretraining data at each step. However, this increase is diluted by subsequent forgetting. Based on this interpretation, we demonstrate that we can provide plausible explanations for recently observed behaviors of LLMs, such as the poor performance of LLMs on long-tail knowledge and the benefits of deduplicating the pretraining corpus. |
|
2024-06-18T00:00:00 | 2406.10670 | CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training | [
"David Brandfonbrener",
"Hanlin Zhang",
"Andreas Kirsch",
"Jonathan Richard Schwarz",
"Sham Kakade"
]
| https://github.com/davidbrandfonbrener/color-filter-olmo | Selecting high-quality data for pre-training is crucial in shaping the downstream task performance of language models. A major challenge lies in identifying this optimal subset, a problem generally considered intractable, thus necessitating scalable and effective heuristics. In this work, we propose a data selection method, CoLoR-Filter (Conditional Loss Reduction Filtering), which leverages an empirical Bayes-inspired approach to derive a simple and computationally efficient selection criterion based on the relative loss values of two auxiliary models. In addition to the modeling rationale, we evaluate CoLoR-Filter empirically on two language modeling tasks: (1) selecting data from C4 for domain adaptation to evaluation on Books and (2) selecting data from C4 for a suite of downstream multiple-choice question answering tasks. We demonstrate favorable scaling both as we subselect more aggressively and using small auxiliary models to select data for large target models. As one headline result, CoLoR-Filter data selected using a pair of 150m parameter auxiliary models can train a 1.2b parameter target model to match a 1.2b parameter model trained on 25b randomly selected tokens with 25x less data for Books and 11x less data for the downstream tasks. Code: https://github.com/davidbrandfonbrener/color-filter-olmo Filtered data: https://huggingface.co/datasets/davidbrandfonbrener/color-filtered-c4 |
2024-06-18T00:00:00 | 2406.11833 | MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs | [
"Ziyu Liu",
"Tao Chu",
"Yuhang Zang",
"Xilin Wei",
"Xiaoyi Dong",
"Pan Zhang",
"Zijian Liang",
"Yuanjun Xiong",
"Yu Qiao",
"Dahua Lin",
"Jiaqi Wang"
]
| https://github.com/Liuziyu77/MMDU | Generating natural and meaningful responses to communicate with multi-modal human inputs is a fundamental capability of Large Vision-Language Models (LVLMs). While current open-source LVLMs demonstrate promising performance in simplified scenarios such as single-turn single-image input, they fall short in real-world conversation scenarios such as following instructions in a long context history with multiple turns and images. Existing LVLM benchmarks primarily focus on single-choice questions or short-form responses, which do not adequately assess the capabilities of LVLMs in real-world human-AI interaction applications. Therefore, we introduce MMDU, a comprehensive benchmark, and MMDU-45k, a large-scale instruction tuning dataset, designed to evaluate and improve LVLMs' abilities in multi-turn and multi-image conversations. We employ a clustering algorithm to find relevant images and textual descriptions from open-source Wikipedia and construct the question-answer pairs by human annotators with the assistance of the GPT-4o model. MMDU has a maximum of 18k image+text tokens, 20 images, and 27 turns, which is at least 5x longer than previous benchmarks and poses challenges to current LVLMs. Our in-depth analysis of 15 representative LVLMs using MMDU reveals that open-source LVLMs lag behind closed-source counterparts due to limited conversational instruction tuning data. We demonstrate that fine-tuning open-source LVLMs on MMDU-45k significantly addresses this gap, generating longer and more accurate conversations, and improving scores on MMDU and existing benchmarks (MMStar: +1.1%, MathVista: +1.5%, ChartQA: +1.2%). Our contributions pave the way for bridging the gap between current LVLMs and real-world application demands. This project is available at https://github.com/Liuziyu77/MMDU. |
2024-06-18T00:00:00 | 2406.11831 | Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models | [
"Bingqi Ma",
"Zhuofan Zong",
"Guanglu Song",
"Hongsheng Li",
"Yu Liu"
]
| Large language models (LLMs) based on decoder-only transformers have demonstrated superior text understanding capabilities compared to CLIP and T5-series models. However, the paradigm for utilizing current advanced LLMs in text-to-image diffusion models remains to be explored. We observed an unusual phenomenon: directly using a large language model as the prompt encoder significantly degrades the prompt-following ability in image generation. We identified two main obstacles behind this issue. One is the misalignment between the next token prediction training in LLM and the requirement for discriminative prompt features in diffusion models. The other is the intrinsic positional bias introduced by the decoder-only architecture. To deal with this issue, we propose a novel framework to fully harness the capabilities of LLMs. Through the carefully designed usage guidance, we effectively enhance the text representation capability for prompt encoding and eliminate its inherent positional bias. This allows us to integrate state-of-the-art LLMs into the text-to-image generation model flexibly. Furthermore, we also provide an effective manner to fuse multiple LLMs into our framework. Considering the excellent performance and scaling capabilities demonstrated by the transformer architecture, we further design an LLM-Infused Diffusion Transformer (LI-DiT) based on the framework. We conduct extensive experiments to validate LI-DiT across model size and data size. Benefiting from the inherent ability of the LLMs and our innovative designs, the prompt understanding performance of LI-DiT easily surpasses state-of-the-art open-source models as well as mainstream closed-source commercial models including Stable Diffusion 3, DALL-E 3, and Midjourney V6. The powerful LI-DiT-10B will be available after further optimization and security checks. |
|
2024-06-18T00:00:00 | 2406.11202 | Consistency^2: Consistent and Fast 3D Painting with Latent Consistency Models | [
"Tianfu Wang",
"Anton Obukhov",
"Konrad Schindler"
]
| https://github.com/kongdai123/consistency2 | Generative 3D Painting is among the top productivity boosters in high-resolution 3D asset management and recycling. Ever since text-to-image models became accessible for inference on consumer hardware, the performance of 3D Painting methods has consistently improved and is currently close to plateauing. At the core of most such models lies denoising diffusion in the latent space, an inherently time-consuming iterative process. Multiple techniques have been developed recently to accelerate generation and reduce sampling iterations by orders of magnitude. Designed for 2D generative imaging, these techniques do not come with recipes for lifting them into 3D. In this paper, we address this shortcoming by proposing a Latent Consistency Model (LCM) adaptation for the task at hand. We analyze the strengths and weaknesses of the proposed model and evaluate it quantitatively and qualitatively. Based on the Objaverse dataset samples study, our 3D painting method attains strong preference in all evaluations. Source code is available at https://github.com/kongdai123/consistency2. |
2024-06-18T00:00:00 | 2406.11463 | Just How Flexible are Neural Networks in Practice? | [
"Ravid Shwartz-Ziv",
"Micah Goldblum",
"Arpit Bansal",
"C. Bayan Bruss",
"Yann LeCun",
"Andrew Gordon Wilson"
]
| It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters, underpinning notions of overparameterized and underparameterized models. In practice, however, we only find solutions accessible via our training procedure, including the optimizer and regularizers, limiting flexibility. Moreover, the exact parameterization of the function class, built into an architecture, shapes its loss surface and impacts the minima we find. In this work, we examine the ability of neural networks to fit data in practice. Our findings indicate that: (1) standard optimizers find minima where the model can only fit training sets with significantly fewer samples than it has parameters; (2) convolutional networks are more parameter-efficient than MLPs and ViTs, even on randomly labeled data; (3) while stochastic training is thought to have a regularizing effect, SGD actually finds minima that fit more training data than full-batch gradient descent; (4) the difference in capacity to fit correctly labeled and incorrectly labeled samples can be predictive of generalization; (5) ReLU activation functions result in finding minima that fit more data despite being designed to avoid vanishing and exploding gradients in deep architectures. |
|
2024-06-18T00:00:00 | 2406.10803 | HiddenTables & PyQTax: A Cooperative Game and Dataset For TableQA to Ensure Scale and Data Privacy Across a Myriad of Taxonomies | [
"William Watson",
"Nicole Cho",
"Tucker Balch",
"Manuela Veloso"
]
| A myriad of different Large Language Models (LLMs) face a common challenge in contextually analyzing table question-answering tasks. These challenges are engendered from (1) finite context windows for large tables, (2) multi-faceted discrepancies amongst tokenization patterns against cell boundaries, and (3) various limitations stemming from data confidentiality in the process of using external models such as gpt-3.5-turbo. We propose a cooperative game dubbed "HiddenTables" as a potential resolution to this challenge. In essence, "HiddenTables" is played between the code-generating LLM "Solver" and the "Oracle" which evaluates the ability of the LLM agents to solve Table QA tasks. This game is based on natural language schemas and importantly, ensures the security of the underlying data. We provide evidential experiments on a diverse set of tables that demonstrate an LLM's collective inability to generalize and perform on complex queries, handle compositional dependencies, and align natural language to programmatic commands when concrete table schemas are provided. Unlike encoder-based models, we have pushed the boundaries of "HiddenTables" to not be limited by the number of rows - therefore we exhibit improved efficiency in prompt and completion tokens. Our infrastructure has spawned a new dataset "PyQTax" that spans across 116,671 question-table-answer triplets and provides additional fine-grained breakdowns & labels for varying question taxonomies. Therefore, in tandem with our academic contributions regarding LLMs' deficiency in TableQA tasks, "HiddenTables" is a tactile manifestation of how LLMs can interact with massive datasets while ensuring data security and minimizing generation costs. |
|
2024-06-18T00:00:00 | 2406.11194 | In-Context Editing: Learning Knowledge from Self-Induced Distributions | [
"Siyuan Qi",
"Bangcheng Yang",
"Kailin Jiang",
"Xiaobo Wang",
"Jiaqi Li",
"Yifan Zhong",
"Yaodong Yang",
"Zilong Zheng"
]
| The existing fine-tuning paradigm for language models is brittle in knowledge editing scenarios, where the model must incorporate new information without extensive retraining. This brittleness often results in overfitting, reduced performance, and unnatural language generation. To address this, we propose Consistent In-Context Editing (ICE), a novel approach that leverages the model's in-context learning capability to tune toward a contextual distribution rather than a one-hot target. ICE introduces a straightforward optimization framework that includes both a target and a procedure, enhancing the robustness and effectiveness of gradient-based tuning methods. We provide analytical insights into ICE across four critical aspects of knowledge editing: accuracy, locality, generalization, and linguistic quality, showing its advantages. Experimental results across four datasets confirm the effectiveness of ICE and demonstrate its potential for continual editing, ensuring that updated information is incorporated while preserving the integrity of the model. |
|
2024-06-18T00:00:00 | 2406.09455 | Pandora: Towards General World Model with Natural Language Actions and Video States | [
"Jiannan Xiang",
"Guangyi Liu",
"Yi Gu",
"Qiyue Gao",
"Yuting Ning",
"Yuheng Zha",
"Zeyu Feng",
"Tianhua Tao",
"Shibo Hao",
"Yemin Shi",
"Zhengzhong Liu",
"Eric P. Xing",
"Zhiting Hu"
]
| https://github.com/maitrix-org/Pandora | World models simulate future states of the world in response to different actions. They facilitate interactive content creation and provide a foundation for grounded, long-horizon reasoning. Current foundation models fall short of the capabilities of general world models: large language models (LLMs) are constrained by their reliance on the language modality and their limited understanding of the physical world, while video models lack interactive action control over the world simulations. This paper makes a step towards building a general world model by introducing Pandora, a hybrid autoregressive-diffusion model that simulates world states by generating videos and allows real-time control with free-text actions. Pandora achieves domain generality, video consistency, and controllability through large-scale pretraining and instruction tuning. Crucially, Pandora bypasses the cost of training-from-scratch by integrating a pretrained LLM (7B) and a pretrained video model, requiring only additional lightweight finetuning. We illustrate extensive outputs by Pandora across diverse domains (indoor/outdoor, natural/urban, human/robot, 2D/3D, etc.). The results indicate the great potential of building stronger general world models with larger-scale training. |
2024-06-18T00:00:00 | 2406.11430 | A Simple and Effective L_2 Norm-Based Strategy for KV Cache Compression | [
"Alessio Devoto",
"Yu Zhao",
"Simone Scardapane",
"Pasquale Minervini"
]
| The deployment of large language models (LLMs) is often hindered by the extensive memory requirements of the Key-Value (KV) cache, especially as context lengths increase. Existing approaches to reduce the KV cache size involve either fine-tuning the model to learn a compression strategy or leveraging attention scores to reduce the sequence length. We analyse the attention distributions in decoder-only Transformer-based models and observe that attention allocation patterns stay consistent across most layers. Surprisingly, we find a clear correlation between the L_2 norm of key embeddings and the attention scores over cached KV pairs, where a low L_2 norm of a key embedding usually leads to a high attention score during decoding. This finding indicates that the influence of a KV pair is potentially determined by the key embedding itself before being queried. Based on this observation, we compress the KV cache based on the L_2 norm of key embeddings. Our experimental results show that this simple strategy can reduce the KV cache size by 50% on language modelling and needle-in-a-haystack tasks and 90% on passkey retrieval tasks without losing accuracy. |
|
2024-06-18T00:00:00 | 2406.11251 | Unifying Multimodal Retrieval via Document Screenshot Embedding | [
"Xueguang Ma",
"Sheng-Chieh Lin",
"Minghan Li",
"Wenhu Chen",
"Jimmy Lin"
]
| In the real world, documents are organized in different formats and varied modalities. Traditional retrieval pipelines require tailored document parsing techniques and content extraction modules to prepare input for indexing. This process is tedious, prone to errors, and incurs information loss. To this end, we propose Document Screenshot Embedding (DSE), a novel retrieval paradigm that regards document screenshots as a unified input format, which requires no content-extraction preprocessing and preserves all the information in a document (e.g., text, image and layout). DSE leverages a large vision-language model to directly encode document screenshots into dense representations for retrieval. To evaluate our method, we first craft Wiki-SS, a corpus of 1.3M Wikipedia web page screenshots, to answer questions from the Natural Questions dataset. In such a text-intensive document retrieval setting, DSE shows competitive effectiveness compared to other text retrieval methods relying on parsing. For example, DSE outperforms BM25 by 17 points in top-1 retrieval accuracy. Additionally, in a mixed-modality task of slide retrieval, DSE significantly outperforms OCR text retrieval methods by over 15 points in nDCG@10. These experiments show that DSE is an effective document retrieval paradigm for diverse types of documents. Model checkpoints, code, and the Wiki-SS collection will be released. |
|
2024-06-18T00:00:00 | 2406.10522 | Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning | [
"Jifan Zhang",
"Lalit Jain",
"Yang Guo",
"Jiayi Chen",
"Kuan Lok Zhou",
"Siddharth Suresh",
"Andrew Wagenmaker",
"Scott Sievert",
"Timothy Rogers",
"Kevin Jamieson",
"Robert Mankoff",
"Robert Nowak"
]
| https://github.com/yguooo/cartoon-caption-generation | We present a novel multimodal preference dataset for creative tasks, consisting of over 250 million human ratings on more than 2.2 million captions, collected through crowdsourcing rating data for The New Yorker's weekly cartoon caption contest over the past eight years. This unique dataset supports the development and evaluation of multimodal large language models and preference-based fine-tuning algorithms for humorous caption generation. We propose novel benchmarks for judging the quality of model-generated captions, utilizing both GPT4 and human judgments to establish ranking-based evaluation strategies. Our experimental results highlight the limitations of current fine-tuning methods, such as RLHF and DPO, when applied to creative tasks. Furthermore, we demonstrate that even state-of-the-art models like GPT4 and Claude currently underperform top human contestants in generating humorous captions. As we conclude this extensive data collection effort, we release the entire preference dataset to the research community, fostering further advancements in AI humor generation and evaluation. |
2024-06-18T00:00:00 | 2406.10023 | Deep Bayesian Active Learning for Preference Modeling in Large Language Models | [
"Luckeciano C. Melo",
"Panagiotis Tigas",
"Alessandro Abate",
"Yarin Gal"
]
| Leveraging human preferences for steering the behavior of Large Language Models (LLMs) has demonstrated notable success in recent years. Nonetheless, data selection and labeling are still a bottleneck for these systems, particularly at large scale. Hence, selecting the most informative points for acquiring human feedback may considerably reduce the cost of preference labeling and unleash the further development of LLMs. Bayesian Active Learning provides a principled framework for addressing this challenge and has demonstrated remarkable success in diverse settings. However, previous attempts to employ it for Preference Modeling did not meet such expectations. In this work, we identify that naive epistemic uncertainty estimation leads to the acquisition of redundant samples. We address this by proposing the Bayesian Active Learner for Preference Modeling (BAL-PM), a novel stochastic acquisition policy that not only targets points of high epistemic uncertainty according to the preference model but also seeks to maximize the entropy of the acquired prompt distribution in the feature space spanned by the employed LLM. Notably, our experiments demonstrate that BAL-PM requires 33% to 68% fewer preference labels in two popular human preference datasets and exceeds previous stochastic Bayesian acquisition policies. |
|
2024-06-19T00:00:00 | 2406.12066 | Language Models are Surprisingly Fragile to Drug Names in Biomedical Benchmarks | [
"Jack Gallifant",
"Shan Chen",
"Pedro Moreira",
"Nikolaj Munch",
"Mingye Gao",
"Jackson Pond",
"Leo Anthony Celi",
"Hugo Aerts",
"Thomas Hartvigsen",
"Danielle Bitterman"
]
| https://github.com/BittermanLab/RABBITS | Medical knowledge is context-dependent and requires consistent reasoning across various natural language expressions of semantically equivalent phrases. This is particularly crucial for drug names, where patients often use brand names like Advil or Tylenol instead of their generic equivalents. To study this, we create a new robustness dataset, RABBITS, to evaluate performance differences on medical benchmarks after swapping brand and generic drug names using physician expert annotations. We assess both open-source and API-based LLMs on MedQA and MedMCQA, revealing a consistent performance drop ranging from 1-10%. Furthermore, we identify a potential source of this fragility as the contamination of test data in widely used pre-training datasets. All code is accessible at https://github.com/BittermanLab/RABBITS, and a HuggingFace leaderboard is available at https://huggingface.co/spaces/AIM-Harvard/rabbits-leaderboard. |
2024-06-19T00:00:00 | 2406.11811 | RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content | [
"Joao Monteiro",
"Pierre-Andre Noel",
"Etienne Marcotte",
"Sai Rajeswar",
"Valentina Zantedeschi",
"David Vazquez",
"Nicolas Chapados",
"Christopher Pal",
"Perouz Taslakian"
]
| Large Language Models (LLMs) are trained on vast amounts of data, most of which is automatically scraped from the internet. This data includes encyclopedic documents that harbor a vast amount of general knowledge (e.g., Wikipedia) but also potentially overlap with benchmark datasets used for evaluating LLMs. Consequently, evaluating models on test splits that might have leaked into the training set is prone to misleading conclusions. To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. RepLiQA is a collection of five splits of test sets, four of which have not been released to the internet or exposed to LLM APIs prior to this publication. Each sample in RepLiQA comprises (1) a reference document crafted by a human annotator and depicting an imaginary scenario (e.g., a news article) absent from the internet; (2) a question about the document's topic; (3) a ground-truth answer derived directly from the information in the document; and (4) the paragraph extracted from the reference document containing the answer. As such, accurate answers can only be generated if a model can find relevant content within the provided document. We run a large-scale benchmark comprising several state-of-the-art LLMs to uncover differences in performance across models of various types and sizes in a context-conditional language modeling setting. Released splits of RepLiQA can be found here: https://huggingface.co/datasets/ServiceNow/repliqa. |
|
2024-06-19T00:00:00 | 2406.12275 | VoCo-LLaMA: Towards Vision Compression with Large Language Models | [
"Xubing Ye",
"Yukang Gan",
"Xiaoke Huang",
"Yixiao Ge",
"Ying Shan",
"Yansong Tang"
]
| https://github.com/Yxxxb/VoCo-LLaMA | Vision-Language Models (VLMs) have achieved remarkable success in various multi-modal tasks, but they are often bottlenecked by the limited context window and high computational cost of processing high-resolution image inputs and videos. Vision compression can alleviate this problem by reducing the vision token count. Previous approaches compress vision tokens with external modules and force LLMs to understand the compressed ones, leading to visual information loss. However, LLMs' own understanding of vision tokens is not fully utilised in the compression learning process. We propose VoCo-LLaMA, the first approach to compress vision tokens using LLMs. By introducing Vision Compression tokens during the vision instruction tuning phase and leveraging attention distillation, our method distills how LLMs comprehend vision tokens into their processing of VoCo tokens. VoCo-LLaMA facilitates effective vision compression and improves computational efficiency during the inference stage. Specifically, our method achieves minimal performance loss with a compression ratio of 576x, resulting in up to 94.8% fewer FLOPs and 69.6% acceleration in inference time. Furthermore, through continuous training using time-series compressed token sequences of video frames, VoCo-LLaMA demonstrates the ability to understand temporal correlations, outperforming previous methods on popular video question-answering benchmarks. Our approach presents a promising way to unlock the full potential of VLMs' contextual window, enabling more scalable multi-modal applications. The project page, along with the associated code, can be accessed via https://yxxxb.github.io/VoCo-LLaMA-page/. |
2024-06-19T00:00:00 | 2406.12753 | OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI | [
"Zhen Huang",
"Zengzhi Wang",
"Shijie Xia",
"Xuefeng Li",
"Haoyang Zou",
"Ruijie Xu",
"Run-Ze Fan",
"Lyumanshan Ye",
"Ethan Chern",
"Yixin Ye",
"Yikai Zhang",
"Yuqing Yang",
"Ting Wu",
"Binjie Wang",
"Shichao Sun",
"Yang Xiao",
"Yiyuan Li",
"Fan Zhou",
"Steffi Chern",
"Yiwei Qin",
"Yan Ma",
"Jiadi Su",
"Yixiu Liu",
"Yuxiang Zheng",
"Shaoting Zhang",
"Dahua Lin",
"Yu Qiao",
"Pengfei Liu"
]
| https://github.com/GAIR-NLP/OlympicArena | The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' performance in cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that the challenges in Olympic competition problems are ideal for evaluating AI's cognitive reasoning due to their complexity and interdisciplinary nature, which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across various disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives. We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes in process-level evaluations, which are vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o only achieve a 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration. Through the OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features. |
2024-06-19T00:00:00 | 2406.12246 | TroL: Traversal of Layers for Large Language and Vision Models | [
"Byung-Kwan Lee",
"Sangyun Chung",
"Chae Won Kim",
"Beomchan Park",
"Yong Man Ro"
]
| https://github.com/ByungKwanLee/TroL | Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with scaling them up directly, these models enable LLVMs to showcase powerful vision language (VL) performances by covering diverse tasks via natural language instructions. However, existing open-source LLVMs that perform comparably to closed-source LLVMs such as GPT-4V are often considered too large (e.g., 26B, 34B, and 110B parameters), having a larger number of layers. These large models demand costly, high-end resources for both training and inference. To address this issue, we present a new efficient LLVM family with 1.8B, 3.8B, and 7B LLM model sizes, Traversal of Layers (TroL), which enables the reuse of layers in a token-wise manner. This layer traversing technique simulates the effect of looking back and retracing the answering stream while increasing the number of forward propagation layers without physically adding more layers. We demonstrate that TroL employs a simple layer traversing approach yet efficiently outperforms the open-source LLVMs with larger model sizes and rivals the performances of the closed-source LLVMs with substantial sizes. |
2024-06-19T00:00:00 | 2406.11931 | DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence | [
"DeepSeek-AI",
"Qihao Zhu",
"Daya Guo",
"Zhihong Shao",
"Dejian Yang",
"Peiyi Wang",
"Runxin Xu",
"Y. Wu",
"Yukun Li",
"Huazuo Gao",
"Shirong Ma",
"Wangding Zeng",
"Xiao Bi",
"Zihui Gu",
"Hanwei Xu",
"Damai Dai",
"Kai Dong",
"Liyue Zhang",
"Yishi Piao",
"Zhibin Gou",
"Zhenda Xie",
"Zhewen Hao",
"Bingxuan Wang",
"Junxiao Song",
"Deli Chen",
"Xin Xie",
"Kang Guan",
"Yuxiang You",
"Aixin Liu",
"Qiushi Du",
"Wenjun Gao",
"Xuan Lu",
"Qinyu Chen",
"Yaohui Wang",
"Chengqi Deng",
"Jiashi Li",
"Chenggang Zhao",
"Chong Ruan",
"Fuli Luo",
"Wenfeng Liang"
]
| https://github.com/deepseek-ai/DeepSeek-Coder-V2 | We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. |
2024-06-19T00:00:00 | 2406.12793 | ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools | [
"Team GLM",
"Aohan Zeng",
"Bin Xu",
"Bowen Wang",
"Chenhui Zhang",
"Da Yin",
"Diego Rojas",
"Guanyu Feng",
"Hanlin Zhao",
"Hanyu Lai",
"Hao Yu",
"Hongning Wang",
"Jiadai Sun",
"Jiajie Zhang",
"Jiale Cheng",
"Jiayi Gui",
"Jie Tang",
"Jing Zhang",
"Juanzi Li",
"Lei Zhao",
"Lindong Wu",
"Lucen Zhong",
"Mingdao Liu",
"Minlie Huang",
"Peng Zhang",
"Qinkai Zheng",
"Rui Lu",
"Shuaiqi Duan",
"Shudan Zhang",
"Shulin Cao",
"Shuxun Yang",
"Weng Lam Tam",
"Wenyi Zhao",
"Xiao Liu",
"Xiao Xia",
"Xiaohan Zhang",
"Xiaotao Gu",
"Xin Lv",
"Xinghan Liu",
"Xinyi Liu",
"Xinyue Yang",
"Xixuan Song",
"Xunkai Zhang",
"Yifan An",
"Yifan Xu",
"Yilin Niu",
"Yuantao Yang",
"Yueyan Li",
"Yushi Bai",
"Yuxiao Dong",
"Zehan Qi",
"Zhaoyu Wang",
"Zhen Yang",
"Zhengxiao Du",
"Zhenyu Hou",
"Zihan Wang"
]
| We introduce ChatGLM, an evolving family of large language models that we have been developing over time. This report primarily focuses on the GLM-4 language series, which includes GLM-4, GLM-4-Air, and GLM-4-9B. They represent our most capable models that are trained with all the insights and lessons gained from the preceding three generations of ChatGLM. To date, the GLM-4 models are pre-trained on ten trillion tokens mostly in Chinese and English, along with a small corpus covering 24 languages, and aligned primarily for Chinese and English usage. The high-quality alignment is achieved via a multi-stage post-training process, which involves supervised fine-tuning and learning from human feedback. Evaluations show that GLM-4 1) closely rivals or outperforms GPT-4 in terms of general metrics such as MMLU, GSM8K, MATH, BBH, GPQA, and HumanEval, 2) gets close to GPT-4-Turbo in instruction following as measured by IFEval, 3) matches GPT-4 Turbo (128K) and Claude 3 for long context tasks, and 4) outperforms GPT-4 in Chinese alignment as measured by AlignBench. The GLM-4 All Tools model is further aligned to understand user intent and autonomously decide when and which tool(s) to use -- including a web browser, a Python interpreter, a text-to-image model, and user-defined functions -- to effectively complete complex tasks. In practical applications, it matches and even surpasses GPT-4 All Tools in tasks like accessing online information via web browsing and solving math problems using the Python interpreter. Along the way, we have open-sourced a series of models, including ChatGLM-6B (three generations), GLM-4-9B (128K, 1M), GLM-4V-9B, WebGLM, and CodeGeeX, attracting over 10 million downloads on Hugging Face in the year 2023 alone. The open models can be accessed through https://github.com/THUDM and https://huggingface.co/THUDM. |
|
2024-06-19T00:00:00 | 2406.12824 | From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries | [
"Hitesh Wadhwa",
"Rahul Seetharaman",
"Somyaa Aggarwal",
"Reshmi Ghosh",
"Samyadeep Basu",
"Soundararajan Srinivasan",
"Wenlong Zhao",
"Shreyas Chaudhari",
"Ehsan Aghazadeh"
]
| Retrieval Augmented Generation (RAG) enriches the ability of language models to reason using external context to augment responses for a given user prompt. This approach has risen in popularity due to practical applications of language models in search, question answering, and chatbots. However, the exact nature of how this approach works is not clearly understood. In this paper, we mechanistically examine the RAG pipeline to highlight that language models take a shortcut and have a strong bias towards utilizing only the context information to answer the question, while relying minimally on their parametric memory. We probe this mechanistic behavior in language models with: (i) Causal Mediation Analysis to show that the parametric memory is minimally utilized when answering a question and (ii) Attention Contributions and Knockouts to show that the last token residual stream does not get enriched from the subject token in the question, but gets enriched from other informative tokens in the context. We find that this pronounced shortcut behaviour holds across both the LLaMa and Phi families of models. |
|
2024-06-19T00:00:00 | 2406.12644 | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models | [
"Devichand Budagam",
"Sankalp KJ",
"Ashutosh Kumar",
"Vinija Jain",
"Aman Chadha"
]
| https://github.com/devichand579/HPT | Assessing the effectiveness of large language models (LLMs) in addressing diverse tasks is essential for comprehending their strengths and weaknesses. Conventional evaluation techniques typically apply a single prompting strategy uniformly across datasets, not considering the varying degrees of task complexity. We introduce the Hierarchical Prompting Taxonomy (HPT), a taxonomy that employs a Hierarchical Prompt Framework (HPF) composed of five unique prompting strategies, arranged from the simplest to the most complex, to assess LLMs more precisely and to offer a clearer perspective. This taxonomy assigns a score, called the Hierarchical Prompting Score (HP-Score), to datasets as well as LLMs based on the rules of the taxonomy, providing a nuanced understanding of their ability to solve diverse tasks and offering a universal measure of task complexity. Additionally, we introduce the Adaptive Hierarchical Prompt framework, which automates the selection of appropriate prompting strategies for each task. This study compares manual and adaptive hierarchical prompt frameworks using four instruction-tuned LLMs, namely Llama 3 8B, Phi 3 3.8B, Mistral 7B, and Gemma 7B, across four datasets: BoolQ, CommonSenseQA (CSQA), IWSLT-2017 en-fr (IWSLT), and SamSum. Experiments demonstrate the effectiveness of HPT, providing a reliable way to compare different tasks and LLM capabilities. This paper leads to the development of a universal evaluation metric that can be used to evaluate both the complexity of the datasets and the capabilities of LLMs. The implementation of both manual HPF and adaptive HPF is publicly available. |