date | arxiv_id | title | authors | github | abstract |
---|---|---|---|---|---|
2024-02-02T00:00:00 | 2402.00854 | SymbolicAI: A framework for logic-based approaches combining generative models and solvers | [
"Marius-Constantin Dinu",
"Claudiu Leoveanu-Condrei",
"Markus Holzleitner",
"Werner Zellinger",
"Sepp Hochreiter"
]
| We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes. SymbolicAI enables the seamless integration of generative models with a diverse range of solvers by treating large language models (LLMs) as semantic parsers that execute tasks based on both natural and formal language instructions, thus bridging the gap between symbolic reasoning and generative AI. We leverage probabilistic programming principles to tackle complex tasks, and utilize differentiable and classical programming paradigms with their respective strengths. The framework introduces a set of polymorphic, compositional, and self-referential operations for data stream manipulation, aligning LLM outputs with user objectives. As a result, we can transition between the capabilities of various foundation models endowed with zero- and few-shot learning capabilities and specialized, fine-tuned models or solvers proficient in addressing specific problems. In turn, the framework facilitates the creation and evaluation of explainable computational graphs. We conclude by introducing a quality measure and its empirical score for evaluating these computational graphs, and propose a benchmark that compares various state-of-the-art LLMs across a set of complex workflows. We refer to the empirical score as the "Vector Embedding for Relational Trajectory Evaluation through Cross-similarity", or VERTEX score for short. The framework codebase and benchmark are linked below. |
|
2024-02-02T00:00:00 | 2402.00867 | AToM: Amortized Text-to-Mesh using 2D Diffusion | [
"Guocheng Qian",
"Junli Cao",
"Aliaksandr Siarohin",
"Yash Kant",
"Chaoyang Wang",
"Michael Vasilkovsky",
"Hsin-Ying Lee",
"Yuwei Fang",
"Ivan Skorokhodov",
"Peiye Zhuang",
"Igor Gilitschenski",
"Jian Ren",
"Bernard Ghanem",
"Kfir Aberman",
"Sergey Tulyakov"
]
| We introduce Amortized Text-to-Mesh (AToM), a feed-forward text-to-mesh framework optimized across multiple text prompts simultaneously. In contrast to existing text-to-3D methods that often entail time-consuming per-prompt optimization and commonly output representations other than polygonal meshes, AToM directly generates high-quality textured meshes in less than 1 second with around 10 times reduction in the training cost, and generalizes to unseen prompts. Our key idea is a novel triplane-based text-to-mesh architecture with a two-stage amortized optimization strategy that ensures stable training and enables scalability. Through extensive experiments on various prompt benchmarks, AToM significantly outperforms state-of-the-art amortized approaches with over 4 times higher accuracy (on the DF415 dataset) and produces more distinguishable and higher-quality 3D outputs. AToM demonstrates strong generalizability, offering fine-grained 3D assets for unseen interpolated prompts without further optimization during inference, unlike per-prompt solutions. |
|
2024-02-02T00:00:00 | 2402.00351 | Machine Unlearning for Image-to-Image Generative Models | [
"Guihong Li",
"Hsiang Hsu",
"Chun-Fu",
"Chen",
"Radu Marculescu"
]
| https://github.com/jpmorganchase/l2l-generator-unlearning | Machine unlearning has emerged as a new paradigm to deliberately forget data samples from a given model in order to adhere to stringent regulations. However, existing machine unlearning methods have been primarily focused on classification models, leaving the landscape of unlearning for generative models relatively unexplored. This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models. Within this framework, we propose a computationally-efficient algorithm, underpinned by rigorous theoretical analysis, that demonstrates negligible performance degradation on the retain samples, while effectively removing the information from the forget samples. Empirical studies on two large-scale datasets, ImageNet-1K and Places-365, further show that our algorithm does not rely on the availability of the retain samples, which further complies with data retention policies. To the best of our knowledge, this work is the first to present systematic, theoretical, and empirical explorations of machine unlearning specifically tailored for image-to-image generative models. Our code is available at https://github.com/jpmorganchase/l2l-generator-unlearning. |
2024-02-05T00:00:00 | 2402.01093 | Specialized Language Models with Cheap Inference from Limited Domain Data | [
"David Grangier",
"Angelos Katharopoulos",
"Pierre Ablin",
"Awni Hannun"
]
| Large language models have emerged as a versatile tool but are challenging to apply to tasks lacking large inference budgets and large in-domain training sets. This work formalizes these constraints and distinguishes four important variables: the pretraining budget (for training before the target domain is known), the specialization budget (for training after the target domain is known), the inference budget, and the in-domain training set size. Across these settings, we compare different approaches from the machine learning literature. Limited by inference cost, we find better alternatives to the standard practice of training very large vanilla transformer models. In particular, we show that hyper-networks and mixture of experts have better perplexity for large pretraining budgets, while small models trained on importance sampled datasets are attractive for large specialization budgets. |
|
2024-02-05T00:00:00 | 2402.01118 | PokéLLMon: A Human-Parity Agent for Pokémon Battles with Large Language Models | [
"Sihao Hu",
"Tiansheng Huang",
"Ling Liu"
]
| https://github.com/git-disl/PokeLLMon | We introduce PokéLLMon, the first LLM-embodied agent that achieves human-parity performance in tactical battle games, as demonstrated in Pokémon battles. The design of PokéLLMon incorporates three key strategies: (i) In-context reinforcement learning that instantly consumes text-based feedback derived from battles to iteratively refine the policy; (ii) Knowledge-augmented generation that retrieves external knowledge to counteract hallucination and enables the agent to act in a timely and proper manner; (iii) Consistent action generation to mitigate the panic switching phenomenon when the agent faces a powerful opponent and wants to elude the battle. We show that online battles against humans demonstrate PokéLLMon's human-like battle strategies and just-in-time decision making, achieving a 49% win rate in the Ladder competitions and a 56% win rate in the invited battles. Our implementation and playable battle logs are available at: https://github.com/git-disl/PokeLLMon. |
2024-02-05T00:00:00 | 2402.01032 | Repeat After Me: Transformers are Better than State Space Models at Copying | [
"Samy Jelassi",
"David Brandfonbrener",
"Sham M. Kakade",
"Eran Malach"
]
| Transformers are the dominant architecture for sequence modeling, but there is growing interest in models that use a fixed-size latent state that does not depend on the sequence length, which we refer to as "generalized state space models" (GSSMs). In this paper we show that while GSSMs are promising in terms of inference-time efficiency, they are limited compared to transformer models on tasks that require copying from the input context. We start with a theoretical analysis of the simple task of string copying and prove that a two layer transformer can copy strings of exponential length while GSSMs are fundamentally limited by their fixed-size latent state. Empirically, we find that transformers outperform GSSMs in terms of efficiency and generalization on synthetic tasks that require copying the context. Finally, we evaluate pretrained large language models and find that transformer models dramatically outperform state space models at copying and retrieving information from context. Taken together, these results suggest a fundamental gap between transformers and GSSMs on tasks of practical interest. |
|
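The string-copying probe described in the abstract above is easy to reproduce at small scale. Below is a minimal Python sketch (not the paper's code) that generates synthetic copy examples and measures exact-match accuracy as a function of string length; the prompt format, alphabet, and length grid are illustrative assumptions.

```python
import random
import string

def make_copy_example(length: int, alphabet: str = string.ascii_lowercase) -> tuple[str, str]:
    """Return (prompt, target) for a synthetic string-copying task.

    The model sees a random string followed by a copy marker and must
    reproduce the string verbatim. The format is an assumption, not the paper's.
    """
    s = "".join(random.choice(alphabet) for _ in range(length))
    return f"{s} | copy: ", s

def copy_accuracy(model_fn, lengths=(16, 64, 256, 1024), n_per_length=50) -> dict[int, float]:
    """Exact-match copy accuracy per string length.

    `model_fn(prompt) -> str` is any callable wrapping the model under test.
    """
    results = {}
    for L in lengths:
        correct = 0
        for _ in range(n_per_length):
            prompt, target = make_copy_example(L)
            correct += int(model_fn(prompt).strip() == target)
        results[L] = correct / n_per_length
    return results

if __name__ == "__main__":
    # Trivial "oracle" baseline that copies perfectly, just to exercise the harness.
    oracle = lambda p: p.split(" | copy: ")[0]
    print(copy_accuracy(oracle, lengths=(16, 64), n_per_length=5))
```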
2024-02-05T00:00:00 | 2402.01521 | K-Level Reasoning with Large Language Models | [
"Yadong Zhang",
"Shaoguang Mao",
"Tao Ge",
"Xun Wang",
"Yan Xia",
"Man Lan",
"Furu Wei"
]
| While Large Language Models (LLMs) have demonstrated their proficiency in complex reasoning tasks, their performance in dynamic, interactive, and competitive scenarios - such as business strategy and stock market analysis - remains underexplored. To bridge this gap, we formally explore the dynamic reasoning capabilities of LLMs for decision-making in rapidly evolving environments. We introduce two game theory-based pilot challenges that mirror the complexities of real-world dynamic decision-making. These challenges are well-defined, enabling clear, controllable, and precise evaluation of LLMs' dynamic reasoning abilities. Through extensive experiments, we find that existing reasoning methods tend to falter in dynamic settings that require k-level thinking - a key concept not tackled by previous works. To address this, we propose a novel reasoning approach for LLMs, named "K-Level Reasoning". This approach adopts the perspective of rivals to recursively employ k-level thinking based on available historical information, which significantly improves the prediction accuracy of rivals' subsequent moves and informs more strategic decision-making. This research not only sets a robust quantitative benchmark for the assessment of dynamic reasoning but also markedly enhances the proficiency of LLMs in dynamic contexts. |
|
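K-level thinking itself is simple to illustrate outside of LLMs. The toy sketch below plays the classic "guess 2/3 of the average" game, where a level-k player best-responds to the assumption that all opponents reason at level k-1; it is a conceptual illustration of the recursion, not the paper's prompting-based method.

```python
def level_k_guess(k: int, level0_guess: float = 50.0, factor: float = 2 / 3) -> float:
    """Level-k guess in the 'guess 2/3 of the average' game.

    A level-0 player guesses naively (here: 50). A level-k player assumes all
    opponents are level k-1 and best-responds by guessing factor * their guess.
    """
    guess = level0_guess
    for _ in range(k):
        guess *= factor  # best response to a population of level-(k-1) players
    return guess

if __name__ == "__main__":
    for k in range(5):
        print(f"level-{k} player guesses {level_k_guess(k):.2f}")
    # Guesses shrink toward 0, the Nash equilibrium, as the reasoning depth k grows.
```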
2024-02-05T00:00:00 | 2402.01391 | StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback | [
"Shihan Dou",
"Yan Liu",
"Haoxiang Jia",
"Limao Xiong",
"Enyu Zhou",
"Junjie Shan",
"Caishuang Huang",
"Wei Shen",
"Xiaoran Fan",
"Zhiheng Xi",
"Yuhao Zhou",
"Tao Ji",
"Rui Zheng",
"Qi Zhang",
"Xuanjing Huang",
"Tao Gui"
]
| The advancement of large language models (LLMs) has significantly propelled the field of code generation. Previous work integrated reinforcement learning (RL) with compiler feedback for exploring the output space of LLMs to enhance code generation quality. However, the lengthy code generated by LLMs in response to complex human requirements makes RL exploration a challenge. Also, since the unit tests may not cover the complicated code, optimizing LLMs by using these unexecuted code snippets is ineffective. To tackle these challenges, we introduce StepCoder, a novel RL framework for code generation, consisting of two main components: CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks, while FGO only optimizes the model by masking the unexecuted code segments to provide Fine-Grained Optimization. In addition, we construct the APPS+ dataset for RL training, which is manually verified to ensure the correctness of unit tests. Experimental results show that our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks. |
|
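The FGO idea above (only optimize over code that a unit test actually executes) can be approximated with standard Python tracing. The sketch below is a rough illustration under several assumptions: the generated code is a plain Python string exposing a hypothetical `solve` entry point, and the returned line set would be converted into a token-level loss mask elsewhere.

```python
import sys

def executed_lines(code_str: str, test_input) -> set[int]:
    """Run generated code on one test input and return the set of executed line numbers.

    Unexecuted lines could then be masked out of the fine-tuning/RL loss, in the
    spirit of fine-grained optimization. `solve` is a hypothetical entry point.
    """
    hit: set[int] = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_filename == "<generated>":
            hit.add(frame.f_lineno)
        return tracer

    compiled = compile(code_str, "<generated>", "exec")
    namespace: dict = {}
    sys.settrace(tracer)
    try:
        exec(compiled, namespace)          # define the functions in the snippet
        namespace["solve"](test_input)     # exercise them with the unit-test input
    finally:
        sys.settrace(None)
    return hit

if __name__ == "__main__":
    snippet = (
        "def solve(x):\n"
        "    if x > 0:\n"
        "        return x * 2\n"
        "    return -x\n"
    )
    print(sorted(executed_lines(snippet, 3)))  # the `return -x` branch stays unexecuted
```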
2024-02-05T00:00:00 | 2402.00892 | EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks | [
"Shijia Liao",
"Shiyi Lan",
"Arun George Zachariah"
]
| The advent of Large Models marks a new era in machine learning, significantly outperforming smaller models by leveraging vast datasets to capture and synthesize complex patterns. Despite these advancements, the exploration into scaling, especially in the audio generation domain, remains limited: previous efforts did not extend into the high-fidelity (HiFi) 44.1kHz domain and suffered from both spectral discontinuities and blurriness in the high-frequency domain, alongside a lack of robustness against out-of-domain data. These limitations restrict the applicability of models to diverse use cases, including music and singing generation. Our work introduces Enhanced Various Audio Generation via Scalable Generative Adversarial Networks (EVA-GAN), which yields significant improvements over the previous state of the art in spectral and high-frequency reconstruction and in robustness on out-of-domain data, enabling the generation of HiFi audio. EVA-GAN employs an extensive dataset of 36,000 hours of 44.1kHz audio, a context-aware module, and a Human-In-The-Loop artifact measurement toolkit, and expands the model to approximately 200 million parameters. Demonstrations of our work are available at https://double-blind-eva-gan.cc. |
|
2024-02-05T00:00:00 | 2402.01622 | TravelPlanner: A Benchmark for Real-World Planning with Language Agents | [
"Jian Xie",
"Kai Zhang",
"Jiangjie Chen",
"Tinghui Zhu",
"Renze Lou",
"Yuandong Tian",
"Yanghua Xiao",
"Yu Su"
]
| Planning has been part of the core pursuit for artificial intelligence since its conception, but earlier AI agents mostly focused on constrained settings because many of the cognitive substrates necessary for human-level planning have been lacking. Recently, language agents powered by large language models (LLMs) have shown interesting capabilities such as tool use and reasoning. Are these language agents capable of planning in more complex settings that are out of the reach of prior AI agents? To advance this investigation, we propose TravelPlanner, a new planning benchmark that focuses on travel planning, a common real-world planning scenario. It provides a rich sandbox environment, various tools for accessing nearly four million data records, and 1,225 meticulously curated planning intents and reference plans. Comprehensive evaluations show that the current language agents are not yet capable of handling such complex planning tasks -- even GPT-4 only achieves a success rate of 0.6%. Language agents struggle to stay on task, use the right tools to collect information, or keep track of multiple constraints. However, we note that the mere possibility for language agents to tackle such a complex problem is in itself non-trivial progress. TravelPlanner provides a challenging yet meaningful testbed for future language agents. |
|
2024-02-05T00:00:00 | 2402.01566 | Boximator: Generating Rich and Controllable Motions for Video Synthesis | [
"Jiawei Wang",
"Yuchen Zhang",
"Jiaxin Zou",
"Yan Zeng",
"Guoqiang Wei",
"Liping Yuan",
"Hang Li"
]
| Generating rich and controllable motion is a pivotal challenge in video synthesis. We propose Boximator, a new approach for fine-grained motion control. Boximator introduces two constraint types: hard box and soft box. Users select objects in the conditional frame using hard boxes and then use either type of box to roughly or rigorously define the object's position, shape, or motion path in future frames. Boximator functions as a plug-in for existing video diffusion models. Its training process preserves the base model's knowledge by freezing the original weights and training only the control module. To address training challenges, we introduce a novel self-tracking technique that greatly simplifies the learning of box-object correlations. Empirically, Boximator achieves state-of-the-art video quality (FVD) scores, improving on two base models, and is further enhanced after incorporating box constraints. Its robust motion controllability is validated by drastic increases in the bounding box alignment metric. Human evaluation also shows that users favor Boximator generation results over the base model. |
|
2024-02-05T00:00:00 | 2402.01613 | Nomic Embed: Training a Reproducible Long Context Text Embedder | [
"Zach Nussbaum",
"John X. Morris",
"Brandon Duderstadt",
"Andriy Mulyar"
]
| https://github.com/nomic-ai/contrastors | This technical report describes the training of nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data, 8192 context length English text embedding model that outperforms both OpenAI Ada-002 and OpenAI text-embedding-3-small on short and long-context tasks. We release the training code and model weights under an Apache 2 license. In contrast with other open-source models, we release a training data loader with 235 million curated text pairs that allows for the full replication of nomic-embed-text-v1. You can find code and data to replicate the model at https://github.com/nomic-ai/contrastors |
2024-02-06T00:00:00 | 2402.03040 | InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions | [
"Yiyuan Zhang",
"Yuhao Kang",
"Zhixin Zhang",
"Xiaohan Ding",
"Sanyuan Zhao",
"Xiangyu Yue"
]
| https://github.com/invictus717/InteractiveVideo | We introduce InteractiveVideo, a user-centric framework for video generation. Different from traditional generative approaches that operate based on user-provided images or text, our framework is designed for dynamic interaction, allowing users to instruct the generative model through various intuitive mechanisms during the whole generation process, e.g. text and image prompts, painting, drag-and-drop, etc. We propose a Synergistic Multimodal Instruction mechanism, designed to seamlessly integrate users' multimodal instructions into generative models, thus facilitating a cooperative and responsive interaction between user inputs and the generative process. This approach enables iterative and fine-grained refinement of the generation result through precise and effective user instructions. With InteractiveVideo, users are given the flexibility to meticulously tailor key aspects of a video. They can paint the reference image, edit semantics, and adjust video motions until their requirements are fully met. Code, models, and demo are available at https://github.com/invictus717/InteractiveVideo |
2024-02-06T00:00:00 | 2402.03286 | Training-Free Consistent Text-to-Image Generation | [
"Yoad Tewel",
"Omri Kaduri",
"Rinon Gal",
"Yoni Kasten",
"Lior Wolf",
"Gal Chechik",
"Yuval Atzmon"
]
| Text-to-image models offer a new level of creative flexibility by allowing users to guide the image generation process through natural language. However, using these models to consistently portray the same subject across diverse prompts remains challenging. Existing approaches fine-tune the model to teach it new words that describe specific user-provided subjects or add image conditioning to the model. These methods require lengthy per-subject optimization or large-scale pre-training. Moreover, they struggle to align generated images with text prompts and face difficulties in portraying multiple subjects. Here, we present ConsiStory, a training-free approach that enables consistent subject generation by sharing the internal activations of the pretrained model. We introduce a subject-driven shared attention block and correspondence-based feature injection to promote subject consistency between images. Additionally, we develop strategies to encourage layout diversity while maintaining subject consistency. We compare ConsiStory to a range of baselines, and demonstrate state-of-the-art performance on subject consistency and text alignment, without requiring a single optimization step. Finally, ConsiStory can naturally extend to multi-subject scenarios, and even enable training-free personalization for common objects. |
|
2024-02-06T00:00:00 | 2402.02583 | DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing | [
"Chong Mou",
"Xintao Wang",
"Jiechong Song",
"Ying Shan",
"Jian Zhang"
]
| https://github.com/MC-E/DragonDiffusion | Large-scale Text-to-Image (T2I) diffusion models have revolutionized image generation over the last few years. Although these models possess diverse and high-quality generation capabilities, translating these abilities to fine-grained image editing remains challenging. In this paper, we propose DiffEditor to rectify two weaknesses in existing diffusion-based image editing: (1) in complex scenarios, editing results often lack editing accuracy and exhibit unexpected artifacts; (2) they lack the flexibility to harmonize editing operations, e.g., imagining new content. In our solution, we introduce image prompts in fine-grained image editing, cooperating with the text prompt to better describe the editing content. To increase the flexibility while maintaining content consistency, we locally incorporate stochastic differential equation (SDE) sampling into the ordinary differential equation (ODE) sampling. In addition, we incorporate regional score-based gradient guidance and a time travel strategy into the diffusion sampling, further improving the editing quality. Extensive experiments demonstrate that our method can efficiently achieve state-of-the-art performance on various fine-grained image editing tasks, including editing within a single image (e.g., object moving, resizing, and content dragging) and across images (e.g., appearance replacing and object pasting). Our source code is released at https://github.com/MC-E/DragonDiffusion. |
2024-02-06T00:00:00 | 2402.03300 | DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models | [
"Zhihong Shao",
"Peiyi Wang",
"Qihao Zhu",
"Runxin Xu",
"Junxiao Song",
"Mingchuan Zhang",
"Y. K. Li",
"Y. Wu",
"Daya Guo"
]
| https://github.com/deepseek-ai/DeepSeek-Math | Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO. |
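The group-relative baseline at the heart of GRPO, as described in the abstract above, can be sketched in a few lines: rewards for a group of sampled answers to the same question are normalized within the group, replacing a learned value network. The clipped-surrogate step and all hyperparameters below are generic PPO-style assumptions, not DeepSeek's exact implementation.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize rewards within a group of samples drawn for the same prompt.

    Shape: (group_size,). The group mean acts as the baseline, so no separate
    value network is needed (the memory saving highlighted in the abstract).
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def clipped_policy_loss(logp_new, logp_old, advantages, clip_eps: float = 0.2) -> float:
    """Generic PPO-style clipped surrogate using per-sample log-probabilities."""
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    adv = np.asarray(advantages)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return float(-np.mean(np.minimum(unclipped, clipped)))

if __name__ == "__main__":
    # One prompt, a group of 4 sampled solutions scored by a reward model/verifier.
    rewards = np.array([1.0, 0.0, 0.0, 1.0])
    adv = group_relative_advantages(rewards)
    loss = clipped_policy_loss(logp_new=[-1.1, -2.3, -2.0, -0.9],
                               logp_old=[-1.2, -2.2, -2.1, -1.0],
                               advantages=adv)
    print(adv, loss)
```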
2024-02-06T00:00:00 | 2402.03162 | Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion | [
"Shiyuan Yang",
"Liang Hou",
"Haibin Huang",
"Chongyang Ma",
"Pengfei Wan",
"Di Zhang",
"Xiaodong Chen",
"Jing Liao"
]
| Recent text-to-video diffusion models have achieved impressive progress. In practice, users often desire the ability to control object motion and camera movement independently for customized video creation. However, current methods lack support for controlling object motion and camera movement in a decoupled manner, which limits the controllability and flexibility of text-to-video models. In this paper, we introduce Direct-a-Video, a system that allows users to independently specify motions for one or multiple objects and/or camera movements, as if directing a video. We propose a simple yet effective strategy for the decoupled control of object motion and camera movement. Object motion is controlled through spatial cross-attention modulation using the model's inherent priors, requiring no additional optimization. For camera movement, we introduce new temporal cross-attention layers to interpret quantitative camera movement parameters. We further employ an augmentation-based approach to train these layers in a self-supervised manner on a small-scale dataset, eliminating the need for explicit motion annotation. Both components operate independently, allowing individual or combined control, and can generalize to open-domain scenarios. Extensive experiments demonstrate the superiority and effectiveness of our method. Project page: https://direct-a-video.github.io/. |
|
2024-02-06T00:00:00 | 2402.01739 | OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models | [
"Fuzhao Xue",
"Zian Zheng",
"Yao Fu",
"Jinjie Ni",
"Zangwei Zheng",
"Wangchunshu Zhou",
"Yang You"
]
| https://github.com/XueFuzhao/OpenMoE | To help the open-source community have a better understanding of Mixture-of-Experts (MoE) based large language models (LLMs), we train and release OpenMoE, a series of fully open-sourced and reproducible decoder-only MoE LLMs, ranging from 650M to 34B parameters and trained on up to over 1T tokens. Our investigation confirms that MoE-based LLMs can offer a more favorable cost-effectiveness trade-off than dense LLMs, highlighting the potential effectiveness for future LLM development. One more important contribution of this study is an in-depth analysis of the routing mechanisms within our OpenMoE models, leading to three significant findings: Context-Independent Specialization, Early Routing Learning, and Drop-towards-the-End. We discovered that routing decisions in MoE models are predominantly based on token IDs, with minimal context relevance. The token-to-expert assignments are determined early in the pre-training phase and remain largely unchanged. This imperfect routing can result in performance degradation, particularly in sequential tasks like multi-turn conversations, where tokens appearing later in a sequence are more likely to be dropped. Finally, we rethink our design based on the above-mentioned observations and analysis. To facilitate future MoE LLM development, we propose potential strategies for mitigating the issues we found and further improving off-the-shelf MoE LLM designs. |
2024-02-06T00:00:00 | 2402.01878 | LiPO: Listwise Preference Optimization through Learning-to-Rank | [
"Tianqi Liu",
"Zhen Qin",
"Junru Wu",
"Jiaming Shen",
"Misha Khalman",
"Rishabh Joshi",
"Yao Zhao",
"Mohammad Saleh",
"Simon Baumgartner",
"Jialu Liu",
"Peter J. Liu",
"Xuanhui Wang"
]
| Aligning language models (LMs) with curated human feedback is critical to control their behaviors in real-world applications. Several recent policy optimization methods, such as DPO and SLiC, serve as promising alternatives to the traditional Reinforcement Learning from Human Feedback (RLHF) approach. In practice, human feedback often comes in the format of a ranked list over multiple responses to amortize the cost of reading the prompt. Multiple responses can also be ranked by reward models or AI feedback. However, studies on directly fitting a list of responses have been lacking. In this work, we formulate LM alignment as a listwise ranking problem and describe the Listwise Preference Optimization (LiPO) framework, where the policy can potentially learn more effectively from a ranked list of plausible responses given the prompt. This view draws an explicit connection to Learning-to-Rank (LTR), where most existing preference optimization work can be mapped to existing ranking objectives, especially pairwise ones. Following this connection, we provide an examination of ranking objectives that are not well studied for LM alignment, with DPO and SLiC as special cases when the list size is two. In particular, we highlight a specific method, LiPO-λ, which leverages a state-of-the-art listwise ranking objective and weights each preference pair in a more advanced manner. We show that LiPO-λ can outperform DPO and SLiC by a clear margin on two preference alignment tasks. |
|
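The connection to pairwise objectives in the LiPO abstract above can be made concrete with a small sketch: the standard DPO loss for a single preference pair, plus a naive weighted aggregation over all pairs extracted from a ranked list. The rank-gap weighting below is a toy assumption; the actual LiPO-λ method uses a learning-to-rank lambda weighting.

```python
import math

def dpo_pair_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1) -> float:
    """DPO loss for one (winner, loser) pair of sequence log-probs under policy and reference."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

def listwise_pairwise_loss(policy_logps, ref_logps, beta: float = 0.1) -> float:
    """Aggregate pairwise DPO losses over a list of responses ranked best-to-worst.

    Each pair (i, j) with i ranked above j is weighted by its rank gap -- a toy
    stand-in for LiPO-style lambda weights. With a list of size two this reduces
    to the plain DPO pair loss.
    """
    n = len(policy_logps)
    total, weight_sum = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = float(j - i)  # heuristic: farther-apart ranks matter more
            total += w * dpo_pair_loss(policy_logps[i], policy_logps[j],
                                       ref_logps[i], ref_logps[j], beta)
            weight_sum += w
    return total / weight_sum

if __name__ == "__main__":
    # Sequence log-probs for 3 responses ranked best-to-worst.
    print(listwise_pairwise_loss(policy_logps=[-4.0, -5.5, -7.0],
                                 ref_logps=[-4.5, -5.0, -6.5]))
```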
2024-02-06T00:00:00 | 2402.01771 | BlackMamba: Mixture of Experts for State-Space Models | [
"Quentin Anthony",
"Yury Tokpanov",
"Paolo Glorioso",
"Beren Millidge"
]
| https://github.com/Zyphra/BlackMamba | State-space models (SSMs) have recently demonstrated competitive performance to transformers at large-scale language modeling benchmarks while achieving linear time and memory complexity as a function of sequence length. Mamba, a recently released SSM model, shows impressive performance in both language modeling and long sequence processing tasks. Simultaneously, mixture-of-expert (MoE) models have shown remarkable performance while significantly reducing the compute and latency costs of inference at the expense of a larger memory footprint. In this paper, we present BlackMamba, a novel architecture that combines the Mamba SSM with MoE to obtain the benefits of both. We demonstrate that BlackMamba performs competitively against both Mamba and transformer baselines, and outperforms them in inference and training FLOPs. We fully train and open-source 340M/1.5B and 630M/2.8B BlackMamba models on 300B tokens of a custom dataset. We show that BlackMamba inherits and combines both of the benefits of SSM and MoE architectures, combining linear-complexity generation from SSM with cheap and fast inference from MoE. We release all weights, checkpoints, and inference code open-source. Inference code at: https://github.com/Zyphra/BlackMamba |
2024-02-06T00:00:00 | 2402.03161 | Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization | [
"Yang Jin",
"Zhicheng Sun",
"Kun Xu",
"Kun Xu",
"Liwei Chen",
"Hao Jiang",
"Quzhe Huang",
"Chengru Song",
"Yuliang Liu",
"Di Zhang",
"Yang Song",
"Kun Gai",
"Yadong Mu"
]
| https://github.com/jy0205/LaVIT | In light of recent advances in multimodal Large Language Models (LLMs), there is increasing attention to scaling them from image-text data to more informative real-world videos. Compared to static images, video poses unique challenges for effective large-scale pre-training due to the modeling of its spatiotemporal dynamics. In this paper, we address such limitations in video-language pre-training with an efficient video decomposition that represents each video as keyframes and temporal motions. These are then adapted to an LLM using well-designed tokenizers that discretize visual and temporal information as a few tokens, thus enabling unified generative pre-training of videos, images, and text. At inference, the generated tokens from the LLM are carefully recovered to the original continuous pixel space to create various video content. Our proposed framework is both capable of comprehending and generating image and video content, as demonstrated by its competitive performance across 13 multimodal benchmarks in image and video understanding and generation. Our code and models will be available at https://video-lavit.github.io. |
2024-02-06T00:00:00 | 2402.02834 | Shortened LLaMA: A Simple Depth Pruning for Large Language Models | [
"Bo-Kyeong Kim",
"Geonmin Kim",
"Tae-Ho Kim",
"Thibault Castells",
"Shinkook Choi",
"Junho Shin",
"Hyoung-Kyu Song"
]
| Structured pruning of modern large language models (LLMs) has emerged as a way of decreasing their high computational needs. Width pruning reduces the size of projection weight matrices (e.g., by removing attention heads) while maintaining the number of layers. Depth pruning, in contrast, removes entire layers or blocks, while keeping the size of the remaining weights unchanged. Most current research focuses on either width-only or a blend of width and depth pruning, with little comparative analysis between the two units (width vs. depth) concerning their impact on LLM inference efficiency. In this work, we show that a simple depth pruning approach can compete with recent width pruning methods in terms of zero-shot task performance. Our pruning method boosts inference speeds, especially under memory-constrained conditions that require limited batch sizes for running LLMs, where width pruning is ineffective. We hope this work can help deploy LLMs on local and edge devices. |
|
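Block-level depth pruning, as described in the Shortened LLaMA abstract above, can be illustrated on a toy residual network: score each block by how much the loss increases when it is skipped on calibration data, then drop the least important blocks. The one-shot skip-based importance criterion here is an assumption for illustration; the paper studies several criteria and follows pruning with retraining.

```python
import torch
import torch.nn as nn

class ToyResidualNet(nn.Module):
    """Stack of residual MLP blocks standing in for transformer layers."""
    def __init__(self, dim: int = 32, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(depth)
        )
        self.head = nn.Linear(dim, 1)

    def forward(self, x, skip=frozenset()):
        for i, block in enumerate(self.blocks):
            if i not in skip:
                x = x + block(x)  # residual connection, so skipping a block is well-defined
        return self.head(x)

@torch.no_grad()
def block_importance(model, x, y, loss_fn) -> list[float]:
    """Importance of block i = loss increase when block i is skipped on calibration data."""
    base = loss_fn(model(x), y).item()
    return [loss_fn(model(x, skip={i}), y).item() - base for i in range(len(model.blocks))]

@torch.no_grad()
def depth_prune(model, x, y, loss_fn, n_remove: int):
    """Drop the n_remove least important blocks in one shot (no retraining shown here)."""
    scores = block_importance(model, x, y, loss_fn)
    keep = sorted(range(len(scores)), key=lambda i: scores[i])[n_remove:]
    model.blocks = nn.ModuleList(model.blocks[i] for i in sorted(keep))
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    net, loss_fn = ToyResidualNet(), nn.MSELoss()
    x, y = torch.randn(64, 32), torch.randn(64, 1)
    depth_prune(net, x, y, loss_fn, n_remove=3)
    print(f"remaining blocks: {len(net.blocks)}")  # 5 of the original 8
```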
2024-02-06T00:00:00 | 2402.02791 | Rethinking Optimization and Architecture for Tiny Language Models | [
"Yehui Tang",
"Fangcheng Liu",
"Yunsheng Ni",
"Yuchuan Tian",
"Zheyuan Bai",
"Yi-Qi Hu",
"Sichao Liu",
"Shangling Jui",
"Kai Han",
"Yunhe Wang"
]
| https://github.com/YuchuanTian/RethinkTinyLM | The power of large language models (LLMs) has been demonstrated through numerous data and computing resources. However, applying language models on mobile devices faces huge challenges in computation and memory costs; that is, tiny language models with high performance are urgently required. Because the training process is highly complex, many details of optimizing language models are seldom studied carefully. In this study, based on a tiny language model with 1B parameters, we carefully design a series of empirical studies to analyze the effect of each component. Three perspectives are mainly discussed, i.e., neural architecture, parameter initialization, and optimization strategy. Several design formulas are empirically proven to be especially effective for tiny language models, including tokenizer compression, architecture tweaking, parameter inheritance, and multiple-round training. Then we train PanGu-pi-1B Pro and PanGu-pi-1.5B Pro on 1.6T multilingual corpora, following the established formulas. Experimental results demonstrate that the improved optimization and architecture yield a notable average improvement of 8.87 on benchmark evaluation sets for PanGu-pi-1B Pro. Besides, PanGu-pi-1.5B Pro surpasses a range of SOTA models with larger model sizes, validating its superior performance. The code will be released soon (https://github.com/YuchuanTian/RethinkTinyLM). |
2024-02-06T00:00:00 | 2402.01831 | Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | [
"Zhifeng Kong",
"Arushi Goel",
"Rohan Badlani",
"Wei Ping",
"Rafael Valle",
"Bryan Catanzaro"
]
| Augmenting large language models (LLMs) to understand audio -- including non-speech sounds and non-verbal speech -- is critically important for diverse real-world applications of LLMs. In this paper, we propose Audio Flamingo, a novel audio language model with 1) strong audio understanding abilities, 2) the ability to quickly adapt to unseen tasks via in-context learning and retrieval, and 3) strong multi-turn dialogue abilities. We introduce a series of training techniques, architecture design, and data strategies to enhance our model with these abilities. Extensive evaluations across various audio understanding tasks confirm the efficacy of our method, setting new state-of-the-art benchmarks. |
|
2024-02-06T00:00:00 | 2402.01761 | Rethinking Interpretability in the Era of Large Language Models | [
"Chandan Singh",
"Jeevana Priya Inala",
"Michel Galley",
"Rich Caruana",
"Jianfeng Gao"
]
| Interpretable machine learning has exploded as an area of interest over the last decade, sparked by the rise of increasingly large datasets and deep neural networks. Simultaneously, large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks, offering a chance to rethink opportunities in interpretable machine learning. Notably, the capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be given to a human. However, these new capabilities raise new challenges, such as hallucinated explanations and immense computational costs. In this position paper, we start by reviewing existing methods to evaluate the emerging field of LLM interpretation (both interpreting LLMs and using LLMs for explanation). We contend that, despite their limitations, LLMs hold the opportunity to redefine interpretability with a more ambitious scope across many applications, including in auditing LLMs themselves. We highlight two emerging research priorities for LLM interpretation: using LLMs to directly analyze new datasets and to generate interactive explanations. |
|
2024-02-06T00:00:00 | 2402.03310 | V-IRL: Grounding Virtual Intelligence in Real Life | [
"Jihan Yang",
"Runyu Ding",
"Ellis Brown",
"Xiaojuan Qi",
"Saining Xie"
]
| There is a sensory gulf between the Earth that humans inhabit and the digital realms in which modern AI agents are created. To develop AI agents that can sense, think, and act as flexibly as humans in real-world settings, it is imperative to bridge the realism gap between the digital and physical worlds. How can we embody agents in an environment as rich and diverse as the one we inhabit, without the constraints imposed by real hardware and control? Towards this end, we introduce V-IRL: a platform that enables agents to scalably interact with the real world in a virtual yet realistic environment. Our platform serves as a playground for developing agents that can accomplish various practical tasks and as a vast testbed for measuring progress in capabilities spanning perception, decision-making, and interaction with real-world data across the entire globe. |
|
2024-02-06T00:00:00 | 2402.01935 | Code Representation Learning At Scale | [
"Dejiao Zhang",
"Wasi Ahmad",
"Ming Tan",
"Hantian Ding",
"Ramesh Nallapati",
"Dan Roth",
"Xiaofei Ma",
"Bing Xiang"
]
| Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, e.g., code generation. However, most of the existing works on code representation learning train models at a hundred million parameter scale using very limited pretraining corpora. In this work, we fuel code representation learning with a vast amount of code data via a two-stage pretraining scheme. We first train the encoders via a mix that leverages both randomness in masked language modeling and the structural aspect of programming languages. We then enhance the representations via contrastive learning with hard negatives and hard positives constructed in an unsupervised manner. We establish an off-the-shelf encoder model that persistently outperforms the existing models on a wide variety of downstream tasks by large margins. To comprehend the factors contributing to successful code representation learning, we conduct detailed ablations and share our findings on (i) a customized and effective token-level denoising scheme for source code; (ii) the importance of hard negatives and hard positives; (iii) how the proposed bimodal contrastive learning boosts the cross-lingual semantic search performance; and (iv) how the pretraining schemes determine how downstream task performance scales with model size. |
|
2024-02-07T00:00:00 | 2402.03620 | Self-Discover: Large Language Models Self-Compose Reasoning Structures | [
"Pei Zhou",
"Jay Pujara",
"Xiang Ren",
"Xinyun Chen",
"Heng-Tze Cheng",
"Quoc V. Le",
"Ed H. Chi",
"Denny Zhou",
"Swaroop Mishra",
"Huaixiu Steven Zheng"
]
| We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x less inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns. |
|
2024-02-07T00:00:00 | 2402.04177 | Scaling Laws for Downstream Task Performance of Large Language Models | [
"Berivan Isik",
"Natalia Ponomareva",
"Hussein Hazimeh",
"Dimitris Paparas",
"Sergei Vassilvitskii",
"Sanmi Koyejo"
]
| Scaling laws provide important insights that can guide the design of large language models (LLMs). Existing work has primarily focused on studying scaling laws for pretraining (upstream) loss. However, in transfer learning settings, in which LLMs are pretrained on an unsupervised dataset and then finetuned on a downstream task, we often also care about the downstream performance. In this work, we study the scaling behavior in a transfer learning setting, where LLMs are finetuned for machine translation tasks. Specifically, we investigate how the choice of the pretraining data and its size affect downstream performance (translation quality) as judged by two metrics: downstream cross-entropy and BLEU score. Our experiments indicate that the size of the finetuning dataset and the distribution alignment between the pretraining and downstream data significantly influence the scaling behavior. With sufficient alignment, both downstream cross-entropy and BLEU score improve monotonically with more pretraining data. In such cases, we show that it is possible to predict the downstream BLEU score with good accuracy using a log-law. However, there are also cases where moderate misalignment causes the BLEU score to fluctuate or get worse with more pretraining, whereas downstream cross-entropy monotonically improves. By analyzing these observations, we provide new practical insights for choosing appropriate pretraining data. |
|
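The claim in the abstract above, that downstream BLEU can be predicted from pretraining data size in the well-aligned case, invites a quick curve-fitting sketch. The simple `a + b * log(D)` form and the calibration points below are hypothetical placeholders; the paper's actual log-law may be parameterized differently.

```python
import numpy as np

def fit_log_law(pretrain_tokens: np.ndarray, bleu: np.ndarray) -> tuple[float, float]:
    """Fit BLEU ~ a + b * log(D) by least squares and return (a, b).

    This is a generic placeholder functional form, not the exact law from the paper.
    """
    b, a = np.polyfit(np.log(pretrain_tokens), bleu, deg=1)  # polyfit returns [slope, intercept]
    return a, b

def predict_bleu(pretrain_tokens, a: float, b: float):
    return a + b * np.log(pretrain_tokens)

if __name__ == "__main__":
    # Hypothetical (made-up) calibration points: pretraining tokens vs. downstream BLEU.
    D = np.array([1e9, 3e9, 1e10, 3e10, 1e11])
    bleu = np.array([18.0, 21.5, 25.0, 28.2, 31.6])
    a, b = fit_log_law(D, bleu)
    print(f"extrapolated BLEU at 3e11 tokens: {predict_bleu(3e11, a, b):.1f}")
```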
2024-02-07T00:00:00 | 2402.04248 | Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks | [
"Jongho Park",
"Jaeseung Park",
"Zheyang Xiong",
"Nayoung Lee",
"Jaewoong Cho",
"Samet Oymak",
"Kangwook Lee",
"Dimitris Papailiopoulos"
]
| State-space models (SSMs), such as Mamba (Gu & Dao, 2023), have been proposed as alternatives to Transformer networks in language modeling, by incorporating gating, convolutions, and input-dependent token selection to mitigate the quadratic cost of multi-head attention. Although SSMs exhibit competitive performance, their in-context learning (ICL) capabilities, a remarkable emergent property of modern language models that enables task execution without parameter optimization, remain underexplored compared to Transformers. In this study, we evaluate the ICL performance of SSMs, focusing on Mamba, against Transformer models across various tasks. Our results show that SSMs perform comparably to Transformers in standard regression ICL tasks, while outperforming them in tasks like sparse parity learning. However, SSMs fall short in tasks involving non-standard retrieval functionality. To address these limitations, we introduce a hybrid model, MambaFormer, that combines Mamba with attention blocks, surpassing individual models in tasks where they struggle independently. Our findings suggest that hybrid architectures offer promising avenues for enhancing ICL in language models. |
|
2024-02-07T00:00:00 | 2402.03766 | MobileVLM V2: Faster and Stronger Baseline for Vision Language Model | [
"Xiangxiang Chu",
"Limeng Qiao",
"Xinyu Zhang",
"Shuang Xu",
"Fei Wei",
"Yang Yang",
"Xiaofei Sun",
"Yiming Hu",
"Xinyang Lin",
"Bo Zhang",
"Chunhua Shen"
]
| https://github.com/Meituan-AutoML/MobileVLM | We introduce MobileVLM V2, a family of significantly improved vision language models upon MobileVLM, which proves that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich high-quality dataset curation can substantially benefit VLMs' performance. Specifically, MobileVLM V2 1.7B achieves better or on-par performance on standard VLM benchmarks compared with much larger VLMs at the 3B scale. Notably, our 3B model outperforms a large variety of VLMs at the 7B+ scale. Our models will be released at https://github.com/Meituan-AutoML/MobileVLM . |
2024-02-07T00:00:00 | 2402.03749 | Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models | [
"Jianyuan Guo",
"Hanting Chen",
"Chengcheng Wang",
"Kai Han",
"Chang Xu",
"Yunhe Wang"
]
| https://github.com/ggjy/vision_weak_to_strong | Recent advancements in large language models have sparked interest in their extraordinary and near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, which is called superalignment. In this context, our paper delves into the realm of vision foundation models, focusing on the concept of weak-to-strong generalization, which involves using a weaker model to supervise a stronger one, aiming to enhance the latter's capabilities beyond the former's limits. We introduce a novel and adaptively adjustable loss function for weak-to-strong supervision. Our comprehensive experiments span various scenarios, including few-shot learning, transfer learning, noisy label learning, and common knowledge distillation settings. The results are striking: our approach not only exceeds the performance benchmarks set by strong-to-strong generalization but also surpasses the outcomes of fine-tuning strong models with whole datasets. This compelling evidence underscores the significant potential of weak-to-strong generalization, showcasing its capability to substantially elevate the performance of vision foundation models. The code is available at https://github.com/ggjy/vision_weak_to_strong. |
2024-02-07T00:00:00 | 2402.04236 | CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations | [
"Ji Qi",
"Ming Ding",
"Weihan Wang",
"Yushi Bai",
"Qingsong Lv",
"Wenyi Hong",
"Bin Xu",
"Lei Hou",
"Juanzi Li",
"Yuxiao Dong",
"Jie Tang"
]
| https://github.com/THUDM/CogCoM | Vision-Language Models (VLMs) have demonstrated their widespread viability thanks to extensive training in aligning visual instructions to answers. However, this conclusive alignment leads models to ignore critical visual reasoning, and further results in failures on meticulous visual problems and unfaithful responses. In this paper, we propose Chain of Manipulations, a mechanism that enables VLMs to solve problems with a series of manipulations, where each manipulation refers to an operation on the visual input, either from intrinsic abilities (e.g., grounding) acquired through prior training or from imitating human-like behaviors (e.g., zoom in). This mechanism encourages VLMs to generate faithful responses with evidential visual reasoning, and permits users to trace error causes in the interpretable paths. We thus train CogCoM, a general 17B VLM with a memory-based compatible architecture endowed with this reasoning mechanism. Experiments show that our model achieves state-of-the-art performance across 8 benchmarks from 3 categories, and a limited number of training steps with the data swiftly yields competitive performance. The code and data are publicly available at https://github.com/THUDM/CogCoM. |
2024-02-07T00:00:00 | 2402.04141 | Multi-line AI-assisted Code Authoring | [
"Omer Dunay",
"Daniel Cheng",
"Adam Tait",
"Parth Thakkar",
"Peter C Rigby",
"Andy Chiu",
"Imad Ahmad",
"Arun Ganesan",
"Chandra Maddila",
"Vijayaraghavan Murali",
"Ali Tayyebi",
"Nachiappan Nagappan"
]
| CodeCompose is an AI-assisted code authoring tool powered by large language models (LLMs) that provides inline suggestions to tens of thousands of developers at Meta. In this paper, we present how we scaled the product from displaying single-line suggestions to multi-line suggestions. This evolution required us to overcome several unique challenges in improving the usability of these suggestions for developers. First, we discuss how multi-line suggestions can have a 'jarring' effect, as the LLM's suggestions constantly move around the developer's existing code, which would otherwise result in decreased productivity and satisfaction. Second, multi-line suggestions take significantly longer to generate; hence we present several innovative investments we made to reduce the perceived latency for users. These model-hosting optimizations sped up multi-line suggestion latency by 2.5x. Finally, we conduct experiments on tens of thousands of engineers to understand how multi-line suggestions impact the user experience and contrast this with single-line suggestions. Our experiments reveal that (i) multi-line suggestions account for 42% of total characters accepted (despite accounting for only 16% of displayed suggestions) (ii) multi-line suggestions almost doubled the percentage of keystrokes saved for users from 9% to 17%. Multi-line CodeCompose has been rolled out to all engineers at Meta, and less than 1% of engineers have opted out of multi-line suggestions. |
|
2024-02-07T00:00:00 | 2402.04252 | EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters | [
"Quan Sun",
"Jinsheng Wang",
"Qiying Yu",
"Yufeng Cui",
"Fan Zhang",
"Xiaosong Zhang",
"Xinlong Wang"
]
| https://github.com/baaivision/EVA/tree/master/EVA-CLIP-18B | Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18-billion parameters. With only 6-billion training samples seen, EVA-CLIP-18B achieves an exceptional 80.7% zero-shot top-1 accuracy averaged across 27 widely recognized image classification benchmarks, outperforming its forerunner EVA-CLIP (5-billion parameters) and other open-source CLIP models by a large margin. Remarkably, we observe a consistent performance improvement with the model size scaling of EVA-CLIP, despite maintaining a constant training dataset of 2-billion image-text pairs from LAION-2B and COYO-700M. This dataset is openly available and much smaller than the in-house datasets (e.g., DFN-5B, WebLI-10B) employed in other state-of-the-art CLIP models. EVA-CLIP-18B demonstrates the potential of EVA-style weak-to-strong visual model scaling. With our model weights made publicly available, we hope to facilitate future research in vision and multimodal foundation models. |
2024-02-07T00:00:00 | 2402.04229 | MusicRL: Aligning Music Generation to Human Preferences | [
"Geoffrey Cideron",
"Sertan Girgin",
"Mauro Verzetti",
"Damien Vincent",
"Matej Kastelic",
"Zalán Borsos",
"Brian McWilliams",
"Victor Ungureanu",
"Olivier Bachem",
"Olivier Pietquin",
"Matthieu Geist",
"Léonard Hussenot",
"Neil Zeghidour",
"Andrea Agostinelli"
]
| We propose MusicRL, the first music generation system finetuned from human feedback. Appreciation of text-to-music models is particularly subjective since the concept of musicality as well as the specific intention behind a caption are user-dependent (e.g. a caption such as "upbeat work-out music" can map to a retro guitar solo or a techno pop beat). Not only does this make supervised training of such models challenging, but it also calls for integrating continuous human feedback in their post-deployment finetuning. MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards. We design reward functions related specifically to text-adherence and audio quality with help from selected raters, and use those to finetune MusicLM into MusicRL-R. We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences. Using Reinforcement Learning from Human Feedback (RLHF), we train MusicRL-U, the first text-to-music model that incorporates human feedback at scale. Human evaluations show that both MusicRL-R and MusicRL-U are preferred to the baseline. Ultimately, MusicRL-RU combines the two approaches and results in the best model according to human raters. Ablation studies shed light on the musical attributes influencing human preferences, indicating that text adherence and quality only account for a part of it. This underscores the prevalence of subjectivity in musical appreciation and calls for further involvement of human listeners in the finetuning of music generation models. |
|
2024-02-07T00:00:00 | 2402.03944 | IMUSIC: IMU-based Facial Expression Capture | [
"Youjia Wang",
"Yiwen Wu",
"Ruiqian Li",
"Hengan Zhou",
"Hongyang Lin",
"Yingwenqi Jiang",
"Yingsheng Zhu",
"Guanpeng Long",
"Jingya Wang",
"Lan Xu",
"Jingyi Yu"
]
| For facial motion capture and analysis, the dominant solutions are generally based on visual cues, which cannot protect privacy and are vulnerable to occlusions. Inertial measurement units (IMUs) serve as a potential rescue, yet are mainly adopted for full-body motion capture. In this paper, we propose IMUSIC to fill the gap, a novel path for facial expression capture using purely IMU signals, significantly distinct from previous visual solutions. The key design in our IMUSIC is a trilogy. We first design micro-IMUs to suit facial capture, together with an anatomy-driven IMU placement scheme. Then, we contribute a novel IMU-ARKit dataset, which provides rich paired IMU/visual signals for diverse facial expressions and performances. Such unique multi-modality brings huge potential for future directions like IMU-based facial behavior analysis. Moreover, utilizing IMU-ARKit, we introduce a strong baseline approach to accurately predict facial blendshape parameters from purely IMU signals. Specifically, we tailor a Transformer diffusion model with a two-stage training strategy for this novel tracking task. The IMUSIC framework empowers us to perform accurate facial capture in scenarios where visual methods falter, while simultaneously safeguarding user privacy. We conduct extensive experiments on both the IMU configuration and technical components to validate the effectiveness of our IMUSIC approach. Notably, IMUSIC enables various potential and novel applications, e.g., privacy-protecting facial capture, hybrid capture against occlusions, or detecting minute facial movements that are often invisible through visual cues. We will release our dataset and implementations to enrich more possibilities of facial capture and analysis in our community. |
|
2024-02-07T00:00:00 | 2402.03908 | EscherNet: A Generative Model for Scalable View Synthesis | [
"Xin Kong",
"Shikun Liu",
"Xiaoyang Lyu",
"Marwan Taher",
"Xiaojuan Qi",
"Andrew J. Davison"
]
| We introduce EscherNet, a multi-view conditioned diffusion model for view synthesis. EscherNet learns implicit and generative 3D representations coupled with a specialised camera positional encoding, allowing precise and continuous relative control of the camera transformation between an arbitrary number of reference and target views. EscherNet offers exceptional generality, flexibility, and scalability in view synthesis -- it can generate more than 100 consistent target views simultaneously on a single consumer-grade GPU, despite being trained with a fixed number of 3 reference views to 3 target views. As a result, EscherNet not only addresses zero-shot novel view synthesis, but also naturally unifies single- and multi-image 3D reconstruction, combining these diverse tasks into a single, cohesive framework. Our extensive experiments demonstrate that EscherNet achieves state-of-the-art performance in multiple benchmarks, even when compared to methods specifically tailored for each individual problem. This remarkable versatility opens up new directions for designing scalable neural architectures for 3D vision. Project page: https://kxhit.github.io/EscherNet. |
|
2024-02-07T00:00:00 | 2402.03570 | Diffusion World Model | [
"Zihan Ding",
"Amy Zhang",
"Yuandong Tian",
"Qinqing Zheng"
]
| We introduce Diffusion World Model (DWM), a conditional diffusion model capable of predicting multistep future states and rewards concurrently. As opposed to traditional one-step dynamics models, DWM offers long-horizon predictions in a single forward pass, eliminating the need for recursive queries. We integrate DWM into model-based value estimation, where the short-term return is simulated by future trajectories sampled from DWM. In the context of offline reinforcement learning, DWM can be viewed as a conservative value regularization through generative modeling. Alternatively, it can be seen as a data source that enables offline Q-learning with synthetic data. Our experiments on the D4RL dataset confirm the robustness of DWM to long-horizon simulation. In terms of absolute performance, DWM significantly surpasses one-step dynamics models with a 44% performance gain, and achieves state-of-the-art performance. |
|
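The model-based value estimate mentioned in the abstract above (a short-term return simulated from a predicted multi-step trajectory, then bootstrapped with a value function) reduces to a small amount of arithmetic. The sketch below assumes the world model's reward predictions and a critic value are already available as arrays; it does not touch the diffusion machinery itself.

```python
import numpy as np

def multistep_value_estimate(pred_rewards: np.ndarray,
                             terminal_value: float,
                             gamma: float = 0.99) -> float:
    """Return estimate from an H-step imagined trajectory.

    V ≈ sum_{t=0}^{H-1} gamma^t * r_t + gamma^H * V(s_H),
    where the r_t and s_H come from the world model in a single forward pass
    (as opposed to H recursive one-step rollouts).
    """
    H = len(pred_rewards)
    discounts = gamma ** np.arange(H)
    return float(np.dot(discounts, pred_rewards) + (gamma ** H) * terminal_value)

if __name__ == "__main__":
    # Hypothetical 8-step reward prediction and a critic's value at the final state.
    rewards = np.array([0.1, 0.0, 0.2, 0.1, 0.0, 0.3, 0.1, 0.2])
    print(multistep_value_estimate(rewards, terminal_value=5.0))
```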
2024-02-08T00:00:00 | 2402.04324 | ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation | [
"Weiming Ren",
"Harry Yang",
"Ge Zhang",
"Cong Wei",
"Xinrun Du",
"Stephen Huang",
"Wenhu Chen"
]
| Image-to-video (I2V) generation aims to use the initial frame (alongside a text prompt) to create a video sequence. A grand challenge in I2V generation is to maintain visual consistency throughout the video: existing methods often struggle to preserve the integrity of the subject, background, and style from the first frame, as well as ensure a fluid and logical progression within the video narrative. To mitigate these issues, we propose ConsistI2V, a diffusion-based method to enhance visual consistency for I2V generation. Specifically, we introduce (1) spatiotemporal attention over the first frame to maintain spatial and motion consistency, (2) noise initialization from the low-frequency band of the first frame to enhance layout consistency. These two approaches enable ConsistI2V to generate highly consistent videos. We also extend the proposed approaches to show their potential to improve consistency in auto-regressive long video generation and camera motion control. To verify the effectiveness of our method, we propose I2V-Bench, a comprehensive evaluation benchmark for I2V generation. Our automatic and human evaluation results demonstrate the superiority of ConsistI2V over existing methods. |
|
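The low-frequency noise initialization mentioned above can be illustrated with a small FFT sketch: keep the low-frequency band of the first frame's latent and fill the high frequencies with fresh Gaussian noise. This is a loose illustration, not the paper's exact procedure; the function name, cutoff, and circular mask are assumptions.

```python
import numpy as np

def lowfreq_noise_init(first_frame_latent, cutoff=0.25, seed=0):
    """Illustrative low-pass mixing of a first-frame latent with noise."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(first_frame_latent.shape)
    f_frame = np.fft.fftshift(np.fft.fft2(first_frame_latent))
    f_noise = np.fft.fftshift(np.fft.fft2(noise))
    h, w = first_frame_latent.shape[-2:]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    lowpass = (dist <= cutoff * min(h, w) / 2).astype(float)   # circular mask
    mixed = f_frame * lowpass + f_noise * (1.0 - lowpass)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

latent = np.random.default_rng(1).standard_normal((64, 64))
print(lowfreq_noise_init(latent).shape)
```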
2024-02-08T00:00:00 | 2402.04792 | Direct Language Model Alignment from Online AI Feedback | [
"Shangmin Guo",
"Biao Zhang",
"Tianlin Liu",
"Tianqi Liu",
"Misha Khalman",
"Felipe Llinares",
"Alexandre Rame",
"Thomas Mesnard",
"Yao Zhao",
"Bilal Piot",
"Johan Ferret",
"Mathieu Blondel"
]
| Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF) that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator. |
|
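One OAIF-style iteration reduces to: sample two on-policy responses, ask an annotator which is preferred, and apply a standard DPO update to the resulting pair. The sketch below uses the standard DPO loss; `sample`, `seq_logprob`, and `annotator_prefers_first` are hypothetical callables standing in for the policy, its sequence scorer (and a frozen reference model), and the LLM annotator.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective on a (winner, loser) pair of sequence log-probs."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

def oaif_step(prompt, sample, seq_logprob, annotator_prefers_first):
    """Hedged sketch of one online-AI-feedback iteration."""
    y1, y2 = sample(prompt), sample(prompt)        # two on-policy candidates
    winner, loser = (y1, y2) if annotator_prefers_first(prompt, y1, y2) else (y2, y1)
    return dpo_loss(seq_logprob(winner, ref=False), seq_logprob(loser, ref=False),
                    seq_logprob(winner, ref=True), seq_logprob(loser, ref=True))

# toy usage with mock callables
toy = lambda text, ref: torch.tensor(float(len(text)) * (0.9 if ref else 1.0))
loss = oaif_step("q", sample=lambda p: p + "!", seq_logprob=toy,
                 annotator_prefers_first=lambda p, a, b: len(a) >= len(b))
print(loss)
```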
2024-02-08T00:00:00 | 2402.04291 | BiLLM: Pushing the Limit of Post-Training Quantization for LLMs | [
"Wei Huang",
"Yangdong Liu",
"Haotong Qin",
"Ying Li",
"Shiming Zhang",
"Xianglong Liu",
"Michele Magno",
"Xiaojuan Qi"
]
| Pretrained large language models (LLMs) exhibit exceptional general language processing capabilities but come with significant demands on memory and computational resources. As a powerful compression technology, binarization can dramatically reduce model weights to a mere 1 bit, lowering the expensive computation and memory requirements. However, existing quantization techniques fall short of maintaining LLM performance under ultra-low bit-widths. In response to this challenge, we present BiLLM, a groundbreaking 1-bit post-training quantization scheme tailored for pretrained LLMs. Based on the weight distribution of LLMs, BiLLM first identifies and structurally selects salient weights, and minimizes the compression loss through an effective binary residual approximation strategy. Moreover, considering the bell-shaped distribution of the non-salient weights, we propose an optimal splitting search to group and binarize them accurately. BiLLM achieves, for the first time, high-accuracy inference (e.g. 8.41 perplexity on LLaMA2-70B) with only 1.08-bit weights across various LLM families and evaluation metrics, outperforming SOTA LLM quantization methods by significant margins. Moreover, BiLLM enables the binarization process of an LLM with 7 billion weights within 0.5 hours on a single GPU, demonstrating satisfactory time efficiency. |
|
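The binary residual approximation idea can be sketched in a few lines: binarize once, then binarize the residual and add the two terms. This is a toy simplification of the salient-weight handling described above, not the released BiLLM algorithm.

```python
import numpy as np

def binarize(w):
    """One-bit approximation w ≈ alpha * sign(w) with alpha = mean(|w|)."""
    alpha = np.abs(w).mean()
    return alpha, np.sign(w)

def residual_binarize(w):
    """Toy two-term binary residual approximation: w ≈ a1*b1 + a2*b2."""
    a1, b1 = binarize(w)
    a2, b2 = binarize(w - a1 * b1)           # binarize the leftover residual
    return a1 * b1 + a2 * b2

w = np.random.default_rng(0).standard_normal((4, 8))
print(np.mean((w - residual_binarize(w)) ** 2))   # reconstruction error
```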
2024-02-08T00:00:00 | 2402.04615 | ScreenAI: A Vision-Language Model for UI and Infographics Understanding | [
"Gilles Baechler",
"Srinivas Sunkara",
"Maria Wang",
"Fedir Zubach",
"Hassan Mansoor",
"Vincent Etter",
"Victor Cărbune",
"Jason Lin",
"Jindong Chen",
"Abhanshu Sharma"
]
| Screen user interfaces (UIs) and infographics, sharing similar visual language and design principles, play important roles in human communication and human-machine interaction. We introduce ScreenAI, a vision-language model that specializes in UI and infographics understanding. Our model improves upon the PaLI architecture with the flexible patching strategy of pix2struct and is trained on a unique mixture of datasets. At the heart of this mixture is a novel screen annotation task in which the model has to identify the type and location of UI elements. We use these text annotations to describe screens to Large Language Models and automatically generate question-answering (QA), UI navigation, and summarization training datasets at scale. We run ablation studies to demonstrate the impact of these design choices. At only 5B parameters, ScreenAI achieves new state-of-the-art results on UI- and infographics-based tasks (Multi-page DocVQA, WebSRC, MoTIF and Widget Captioning), and new best-in-class performance on others (Chart QA, DocVQA, and InfographicVQA) compared to models of similar size. Finally, we release three new datasets: one focused on the screen annotation task and two others focused on question answering. |
|
2024-02-08T00:00:00 | 2402.04858 | CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay | [
"Natasha Butt",
"Blazej Manczak",
"Auke Wiggers",
"Corrado Rainone",
"David Zhang",
"Michaël Defferrard",
"Taco Cohen"
]
| Large language models are increasingly solving tasks that are commonly believed to require human-level reasoning ability. However, these models still perform very poorly on benchmarks of general intelligence such as the Abstraction and Reasoning Corpus (ARC). In this paper, we approach ARC as a programming-by-examples problem, and introduce a novel and scalable method for language model self-improvement called Code Iteration (CodeIt). Our method iterates between 1) program sampling and hindsight relabeling, and 2) learning from prioritized experience replay. By relabeling the goal of an episode (i.e., the target program output given input) to the realized output produced by the sampled program, our method effectively deals with the extreme sparsity of rewards in program synthesis. Applying CodeIt to the ARC dataset, we demonstrate that prioritized hindsight replay, along with pre-training and data-augmentation, leads to successful inter-task generalization. CodeIt is the first neuro-symbolic approach that scales to the full ARC evaluation dataset. Our method solves 15% of ARC evaluation tasks, achieving state-of-the-art performance and outperforming existing neural and symbolic baselines. |
|
2024-02-08T00:00:00 | 2402.04744 | Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers | [
"Abhimanyu Rajeshkumar Bambhaniya",
"Amir Yazdanbakhsh",
"Suvinay Subramanian",
"Sheng-Chun Kao",
"Shivani Agrawal",
"Utku Evci",
"Tushar Krishna"
]
| https://github.com/abhibambhaniya/progressive_gradient_flow_nm_sparsity | N:M structured sparsity has garnered significant interest as a result of relatively modest overhead and improved efficiency. Additionally, this form of sparsity holds considerable appeal for reducing the memory footprint owing to its modest representation overhead. While there have been efforts to develop training recipes for N:M structured sparsity, they primarily focus on low-sparsity regions (~50%). Nonetheless, the performance of models trained using these approaches tends to decline when confronted with high-sparsity regions (>80%). In this work, we study the effectiveness of existing sparse training recipes at high-sparsity regions and argue that these methods fail to sustain the model quality on par with low-sparsity regions. We demonstrate that the significant factor contributing to this disparity is the presence of elevated levels of induced noise in the gradient magnitudes. To mitigate this undesirable effect, we employ decay mechanisms to progressively restrict the flow of gradients towards pruned elements. Our approach improves the model quality by up to 2% and 5% in vision and language models in the high-sparsity regime, respectively. We also evaluate the trade-off between model accuracy and training compute cost in terms of FLOPs. At iso-training FLOPs, our method yields better performance compared to conventional sparse training recipes, exhibiting an accuracy improvement of up to 2%. The source code is available at https://github.com/abhibambhaniya/progressive_gradient_flow_nm_sparsity. |
2024-02-08T00:00:00 | 2402.04379 | Fine-Tuned Language Models Generate Stable Inorganic Materials as Text | [
"Nate Gruver",
"Anuroop Sriram",
"Andrea Madotto",
"Andrew Gordon Wilson",
"C. Lawrence Zitnick",
"Zachary Ulissi"
]
| We propose fine-tuning large language models for generation of stable materials. While unorthodox, fine-tuning large language models on text-encoded atomistic data is simple to implement yet reliable, with around 90% of sampled structures obeying physical constraints on atom positions and charges. Using energy above hull calculations from both learned ML potentials and gold-standard DFT calculations, we show that our strongest model (fine-tuned LLaMA-2 70B) can generate materials predicted to be metastable at about twice the rate (49% vs 28%) of CDVAE, a competing diffusion model. Because of text prompting's inherent flexibility, our models can simultaneously be used for unconditional generation of stable material, infilling of partial structures and text-conditional generation. Finally, we show that language models' ability to capture key symmetries of crystal structures improves with model scale, suggesting that the biases of pretrained LLMs are surprisingly well-suited for atomistic data. |
|
2024-02-08T00:00:00 | 2402.04494 | Grandmaster-Level Chess Without Search | [
"Anian Ruoss",
"Grégoire Delétang",
"Sourabh Medapati",
"Jordi Grau-Moya",
"Li Kevin Wenliang",
"Elliot Catt",
"John Reid",
"Tim Genewein"
]
| The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-scale attention-based architectures and datasets of unprecedented scale. This paper investigates the impact of training at scale for chess. Unlike traditional chess engines that rely on complex heuristics, explicit search, or a combination of both, we train a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games. We annotate each board in the dataset with action-values provided by the powerful Stockfish 16 engine, leading to roughly 15 billion data points. Our largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters. |
|
2024-02-08T00:00:00 | 2402.05054 | LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation | [
"Jiaxiang Tang",
"Zhaoxi Chen",
"Xiaokang Chen",
"Tengfei Wang",
"Gang Zeng",
"Ziwei Liu"
]
| https://github.com/3DTopia/LGM | 3D content creation has achieved significant progress in terms of both quality and speed. Although current feed-forward models can produce 3D objects in seconds, their resolution is constrained by the intensive computation required during training. In this paper, we introduce Large Multi-View Gaussian Model (LGM), a novel framework designed to generate high-resolution 3D models from text prompts or single-view images. Our key insights are two-fold: 1) 3D Representation: We propose multi-view Gaussian features as an efficient yet powerful representation, which can then be fused together for differentiable rendering. 2) 3D Backbone: We present an asymmetric U-Net as a high-throughput backbone operating on multi-view images, which can be produced from text or single-view image input by leveraging multi-view diffusion models. Extensive experiments demonstrate the high fidelity and efficiency of our approach. Notably, we maintain the fast speed to generate 3D objects within 5 seconds while boosting the training resolution to 512, thereby achieving high-resolution 3D content generation. |
2024-02-08T00:00:00 | 2402.04825 | Fast Timing-Conditioned Latent Audio Diffusion | [
"Zach Evans",
"CJ Carr",
"Josiah Taylor",
"Scott H. Hawley",
"Jordi Pons"
]
| Generating long-form 44.1kHz stereo audio from text prompts can be computationally demanding. Further, most previous works do not tackle that music and sound effects naturally vary in their duration. Our research focuses on the efficient generation of long-form, variable-length stereo music and sounds at 44.1kHz using text prompts with a generative model. Stable Audio is based on latent diffusion, with its latent defined by a fully-convolutional variational autoencoder. It is conditioned on text prompts as well as timing embeddings, allowing for fine control over both the content and length of the generated music and sounds. Stable Audio is capable of rendering stereo signals of up to 95 sec at 44.1kHz in 8 sec on an A100 GPU. Despite its compute efficiency and fast inference, it is one of the best in two public text-to-music and -audio benchmarks and, differently from state-of-the-art models, can generate music with structure and stereo sounds. |
|
2024-02-08T00:00:00 | 2402.04925 | TP-Aware Dequantization | [
"Adnan Hoque",
"Mudhakar Srivatsa",
"Chih-Chieh Yang",
"Raghu Ganti"
]
| In this paper, we present a novel method that reduces model inference latency during distributed deployment of Large Language Models (LLMs). Our contribution is an optimized inference deployment scheme that addresses the current limitations of state-of-the-art quantization kernels when used in conjunction with Tensor Parallel (TP). Our method preserves data locality in GPU memory access patterns and exploits a priori knowledge of TP to reduce global communication. We demonstrate an up to 1.81x speedup over existing methods for Llama-70B and up to 1.78x speedup for IBM WatsonX's Granite-20B MLP layer problem sizes on A100 and H100 NVIDIA DGX Systems for a variety of TP settings. |
|
2024-02-08T00:00:00 | 2402.05008 | EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss | [
"Zhuoyang Zhang",
"Han Cai",
"Song Han"
]
| https://github.com/mit-han-lab/efficientvit | We present EfficientViT-SAM, a new family of accelerated segment anything models. We retain SAM's lightweight prompt encoder and mask decoder while replacing the heavy image encoder with EfficientViT. For the training, we begin with the knowledge distillation from the SAM-ViT-H image encoder to EfficientViT. Subsequently, we conduct end-to-end training on the SA-1B dataset. Benefiting from EfficientViT's efficiency and capacity, EfficientViT-SAM delivers 48.9x measured TensorRT speedup on A100 GPU over SAM-ViT-H without sacrificing performance. Our code and pre-trained models are released at https://github.com/mit-han-lab/efficientvit. |
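The first training stage named above is standard encoder feature distillation: regress the student encoder's outputs onto the frozen teacher's. The sketch below shows the general recipe only; the tiny conv nets are placeholders standing in for EfficientViT and SAM-ViT-H, not the released training code.

```python
import torch
import torch.nn as nn

# placeholder "teacher" and "student" image encoders
teacher = nn.Conv2d(3, 16, 3, padding=1).eval()
student = nn.Conv2d(3, 16, 3, padding=1)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

images = torch.randn(2, 3, 32, 32)
with torch.no_grad():
    target = teacher(images)                       # frozen teacher embeddings
loss = nn.functional.mse_loss(student(images), target)
loss.backward()
opt.step()
print(loss.item())
```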
2024-02-08T00:00:00 | 2402.04347 | The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry | [
"Michael Zhang",
"Kush Bhatia",
"Hermann Kumbong",
"Christopher Ré"
]
| Linear attentions have shown potential for improving Transformer efficiency, reducing attention's quadratic complexity to linear in sequence length. This holds exciting promise for (1) training linear Transformers from scratch, (2) "finetuned-conversion" of task-specific Transformers into linear versions that recover task performance, and (3) "pretrained-conversion" of Transformers such as large language models into linear versions finetunable on downstream tasks. However, linear attentions often underperform standard softmax attention in quality. To close this performance gap, we find prior linear attentions lack key properties of softmax attention tied to good performance: low-entropy (or "spiky") weights and dot-product monotonicity. We further observe surprisingly simple feature maps that retain these properties and match softmax performance, but are inefficient to compute in linear attention. We thus propose Hedgehog, a learnable linear attention that retains the spiky and monotonic properties of softmax attention while maintaining linear complexity. Hedgehog uses simple trainable MLPs to produce attention weights mimicking softmax attention. Experiments show Hedgehog recovers over 99% of standard Transformer quality in train-from-scratch and finetuned-conversion settings, outperforming prior linear attentions up to 6 perplexity points on WikiText-103 with causal GPTs, and up to 8.7 GLUE score points on finetuned bidirectional BERTs. Hedgehog also enables pretrained-conversion. Converting a pretrained GPT-2 into a linear attention variant achieves state-of-the-art 16.7 perplexity on WikiText-103 for 125M subquadratic decoder models. We finally turn a pretrained Llama-2 7B into a viable linear attention Llama. With low-rank adaptation, Hedgehog-Llama2 7B achieves 28.1 higher ROUGE-1 points over the base standard attention model, where prior linear attentions lead to 16.5 point drops. |
|
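The core mechanism above, linear attention with a learnable feature map, can be sketched compactly: replace softmax(QK^T)V with phi(Q)(phi(K)^T V) plus a running normalizer, where phi is a small trainable MLP. The module below is a generic, non-causal stand-in for the idea, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LearnableFeatureMap(nn.Module):
    """Small trainable MLP feature map (a hedged stand-in for the learned,
    softmax-mimicking maps described above)."""
    def __init__(self, dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x) + 1e-6                  # keep features positive

def linear_attention(q, k, v, phi):
    """O(n) non-causal linear attention with feature map phi."""
    qf, kf = phi(q), phi(k)                        # (b, n, f)
    kv = torch.einsum("bnf,bnd->bfd", kf, v)       # aggregate keys and values
    z = 1.0 / (torch.einsum("bnf,bf->bn", qf, kf.sum(dim=1)) + 1e-6)
    return torch.einsum("bnf,bfd,bn->bnd", qf, kv, z)

phi = LearnableFeatureMap(dim=32)
q = k = v = torch.randn(2, 16, 32)
print(linear_attention(q, k, v, phi).shape)        # (2, 16, 32)
```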
2024-02-08T00:00:00 | 2402.05099 | Hydragen: High-Throughput LLM Inference with Shared Prefixes | [
"Jordan Juravsky",
"Bradley Brown",
"Ryan Ehrlich",
"Daniel Y. Fu",
"Christopher Ré",
"Azalia Mirhoseini"
]
| Transformer-based large language models (LLMs) are now deployed to hundreds of millions of users. LLM inference is commonly performed on batches of sequences that share a prefix, such as few-shot examples or a chatbot system prompt. Decoding in this large-batch setting can be bottlenecked by the attention operation, which reads large key-value (KV) caches from memory and computes inefficient matrix-vector products for every sequence in the batch. In this work, we introduce Hydragen, a hardware-aware exact implementation of attention with shared prefixes. Hydragen computes attention over the shared prefix and unique suffixes separately. This decomposition enables efficient prefix attention by batching queries together across sequences, reducing redundant memory reads and enabling the use of hardware-friendly matrix multiplications. Our method can improve end-to-end LLM throughput by up to 32x against competitive baselines, with speedup growing with the batch size and shared prefix length. Hydragen also enables the use of very long shared contexts: with a high batch size, increasing the prefix length from 1K to 16K tokens decreases Hydragen throughput by less than 15%, while the throughput of baselines drops by over 90%. Hydragen generalizes beyond simple prefix-suffix decomposition and can be applied to tree-based prompt sharing patterns, allowing us to further reduce inference time on competitive programming problems by 55%. |
|
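The prefix/suffix decomposition named above follows from the algebra of softmax: attention over concatenated keys equals a log-sum-exp-weighted combination of attention over the prefix and attention over the suffix. The sketch below verifies that identity numerically; it illustrates the general idea only, not Hydragen's hardware-aware kernels.

```python
import numpy as np

def softmax_lse(q, k, v):
    """Return (attention output, log-sum-exp of scores) for one query block."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    lse = np.log(np.exp(scores).sum(axis=-1, keepdims=True))
    return np.exp(scores - lse) @ v, lse

def shared_prefix_attention(q, k_prefix, v_prefix, k_suffix, v_suffix):
    """Attend to prefix and suffix separately, then merge via their LSEs."""
    out_p, lse_p = softmax_lse(q, k_prefix, v_prefix)
    out_s, lse_s = softmax_lse(q, k_suffix, v_suffix)
    w_p = 1.0 / (1.0 + np.exp(lse_s - lse_p))      # softmax over the two LSEs
    return w_p * out_p + (1.0 - w_p) * out_s

rng = np.random.default_rng(0)
d = 16
q = rng.standard_normal((4, d))
kp, vp = rng.standard_normal((32, d)), rng.standard_normal((32, d))
ks, vs = rng.standard_normal((8, d)), rng.standard_normal((8, d))
full, _ = softmax_lse(q, np.concatenate([kp, ks]), np.concatenate([vp, vs]))
print(np.allclose(shared_prefix_attention(q, kp, vp, ks, vs), full))  # True
```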
2024-02-09T00:00:00 | 2402.05120 | More Agents Is All You Need | [
"Junyou Li",
"Qin Zhang",
"Yangbin Yu",
"Qiang Fu",
"Deheng Ye"
]
| We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence. Our code is publicly available at: https://anonymous.4open.science/r/more_agent_is_all_you_need. |
|
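The sampling-and-voting procedure is simple enough to show directly: query the model several times and keep the most frequent answer. The `generate` callable below is a hypothetical stand-in for a stochastic LLM call.

```python
import random
from collections import Counter

def sample_and_vote(prompt, generate, n_agents=16):
    """Minimal sketch: query the model n_agents times and majority-vote."""
    answers = [generate(prompt) for _ in range(n_agents)]
    return Counter(answers).most_common(1)[0][0]

# toy usage with a noisy mock "model"
random.seed(0)
mock = lambda p: random.choice(["42", "42", "42", "41", "43"])
print(sample_and_vote("What is 6 x 7?", mock))     # very likely "42"
```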
2024-02-09T00:00:00 | 2402.05468 | Implicit Diffusion: Efficient Optimization through Stochastic Sampling | [
"Pierre Marion",
"Anna Korba",
"Peter Bartlett",
"Mathieu Blondel",
"Valentin De Bortoli",
"Arnaud Doucet",
"Felipe Llinares-López",
"Courtney Paquette",
"Quentin Berthet"
]
| We present a new algorithm to optimize distributions defined implicitly by parameterized stochastic diffusions. Doing so allows us to modify the outcome distribution of sampling processes by optimizing over their parameters. We introduce a general framework for first-order optimization of these processes, that performs jointly, in a single loop, optimization and sampling steps. This approach is inspired by recent advances in bilevel optimization and automatic implicit differentiation, leveraging the point of view of sampling as optimization over the space of probability distributions. We provide theoretical guarantees on the performance of our method, as well as experimental results demonstrating its effectiveness in real-world settings. |
|
2024-02-09T00:00:00 | 2402.05195 | λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space | [
"Maitreya Patel",
"Sangmin Jung",
"Chitta Baral",
"Yezhou Yang"
]
| https://github.com/eclipse-t2i/lambda-eclipse-inference | Despite the recent advances in personalized text-to-image (P-T2I) generative models, subject-driven T2I remains challenging. The primary bottlenecks include 1) Intensive training resource requirements, 2) Hyper-parameter sensitivity leading to inconsistent outputs, and 3) Balancing the intricacies of novel visual concept and composition alignment. We start by re-iterating the core philosophy of T2I diffusion models to address the above limitations. Predominantly, contemporary subject-driven T2I approaches hinge on Latent Diffusion Models (LDMs), which facilitate T2I mapping through cross-attention layers. While LDMs offer distinct advantages, P-T2I methods' reliance on the latent space of these diffusion models significantly escalates resource demands, leading to inconsistent results and necessitating numerous iterations for a single desired image. Recently, ECLIPSE has demonstrated a more resource-efficient pathway for training UnCLIP-based T2I models, circumventing the need for diffusion text-to-image priors. Building on this, we introduce λ-ECLIPSE. Our method illustrates that effective P-T2I does not necessarily depend on the latent space of diffusion models. λ-ECLIPSE achieves single, multi-subject, and edge-guided T2I personalization with just 34M parameters and is trained on a mere 74 GPU hours using 1.6M image-text interleaved data. Through extensive experiments, we also establish that λ-ECLIPSE surpasses existing baselines in composition alignment while preserving concept alignment performance, even with significantly lower resource utilization. |
2024-02-09T00:00:00 | 2402.05935 | SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models | [
"Peng Gao",
"Renrui Zhang",
"Chris Liu",
"Longtian Qiu",
"Siyuan Huang",
"Weifeng Lin",
"Shitian Zhao",
"Shijie Geng",
"Ziyi Lin",
"Peng Jin",
"Kaipeng Zhang",
"Wenqi Shao",
"Chao Xu",
"Conghui He",
"Junjun He",
"Hao Shao",
"Pan Lu",
"Hongsheng Li",
"Yu Qiao"
]
| https://github.com/Alpha-VLLM/LLaMA2-Accessory | We propose SPHINX-X, an extensive Multimodality Large Language Model (MLLM) series developed upon SPHINX. To improve the architecture and training efficiency, we modify the SPHINX framework by removing redundant visual encoders, bypassing fully-padded sub-images with skip tokens, and simplifying multi-stage training into a one-stage all-in-one paradigm. To fully unleash the potential of MLLMs, we assemble a comprehensive multi-domain and multimodal dataset covering publicly available resources in language, vision, and vision-language tasks. We further enrich this collection with our curated OCR intensive and Set-of-Mark datasets, extending the diversity and generality. By training over different base LLMs including TinyLlama1.1B, InternLM2-7B, LLaMA2-13B, and Mixtral8x7B, we obtain a spectrum of MLLMs that vary in parameter size and multilingual capabilities. Comprehensive benchmarking reveals a strong correlation between the multi-modal performance with the data and parameter scales. Code and models are released at https://github.com/Alpha-VLLM/LLaMA2-Accessory |
2024-02-09T00:00:00 | 2402.05932 | Driving Everywhere with Large Language Model Policy Adaptation | [
"Boyi Li",
"Yue Wang",
"Jiageng Mao",
"Boris Ivanovic",
"Sushant Veer",
"Karen Leung",
"Marco Pavone"
]
| Adapting driving behavior to new environments, customs, and laws is a long-standing problem in autonomous driving, precluding the widespread deployment of autonomous vehicles (AVs). In this paper, we present LLaDA, a simple yet powerful tool that enables human drivers and autonomous vehicles alike to drive everywhere by adapting their tasks and motion plans to traffic rules in new locations. LLaDA achieves this by leveraging the impressive zero-shot generalizability of large language models (LLMs) in interpreting the traffic rules in the local driver handbook. Through an extensive user study, we show that LLaDA's instructions are useful in disambiguating in-the-wild unexpected situations. We also demonstrate LLaDA's ability to adapt AV motion planning policies in real-world datasets; LLaDA outperforms baseline planning approaches on all our metrics. Please check our website for more details: https://boyiliee.github.io/llada. |
|
2024-02-09T00:00:00 | 2402.05929 | An Interactive Agent Foundation Model | [
"Zane Durante",
"Bidipta Sarkar",
"Ran Gong",
"Rohan Taori",
"Yusuke Noda",
"Paul Tang",
"Ehsan Adeli",
"Shrinidhi Kowshika Lakshmikanth",
"Kevin Schulman",
"Arnold Milstein",
"Demetri Terzopoulos",
"Ade Famoti",
"Noboru Kuno",
"Ashley Llorens",
"Hoi Vo",
"Katsu Ikeuchi",
"Li Fei-Fei",
"Jianfeng Gao",
"Naoki Wake",
"Qiuyuan Huang"
]
| The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents across a wide range of domains, datasets, and tasks. Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction, enabling a versatile and adaptable AI framework. We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare. Our model demonstrates its ability to generate meaningful and contextually relevant outputs in each area. The strength of our approach lies in its generality, leveraging a variety of data sources such as robotics sequences, gameplay data, large-scale video datasets, and textual information for effective multimodal and multi-task learning. Our approach provides a promising avenue for developing generalist, action-taking, multimodal systems. |
|
2024-02-09T00:00:00 | 2402.05755 | SpiRit-LM: Interleaved Spoken and Written Language Model | [
"Tu Anh Nguyen",
"Benjamin Muller",
"Bokai Yu",
"Marta R. Costa-jussa",
"Maha Elbayad",
"Sravya Popuri",
"Paul-Ambroise Duquenne",
"Robin Algayres",
"Ruslan Mavlyutov",
"Itai Gat",
"Gabriel Synnaeve",
"Juan Pino",
"Benoit Sagot",
"Emmanuel Dupoux"
]
| We introduce SPIRIT-LM, a foundation multimodal language model that freely mixes text and speech. Our model is based on a pretrained text language model that we extend to the speech modality by continuously training it on text and speech units. Speech and text sequences are concatenated as a single set of tokens, and trained with a word-level interleaving method using a small automatically-curated speech-text parallel corpus. SPIRIT-LM comes in two versions: a BASE version that uses speech semantic units and an EXPRESSIVE version that models expressivity using pitch and style units in addition to the semantic units. For both versions, the text is encoded with subword BPE tokens. The resulting model displays both the semantic abilities of text models and the expressive abilities of speech models. Additionally, we demonstrate that SPIRIT-LM is able to learn new tasks in a few-shot fashion across modalities (i.e. ASR, TTS, Speech Classification). |
|
2024-02-09T00:00:00 | 2402.05472 | Question Aware Vision Transformer for Multimodal Reasoning | [
"Roy Ganz",
"Yair Kittenplon",
"Aviad Aberdam",
"Elad Ben Avraham",
"Oren Nuriel",
"Shai Mazor",
"Ron Litman"
]
| Vision-Language (VL) models have gained significant research focus, enabling remarkable advances in multimodal reasoning. These architectures typically comprise a vision encoder, a Large Language Model (LLM), and a projection module that aligns visual features with the LLM's representation space. Despite their success, a critical limitation persists: the vision encoding process remains decoupled from user queries, often in the form of image-related questions. Consequently, the resulting visual features may not be optimally attuned to the query-specific elements of the image. To address this, we introduce QA-ViT, a Question Aware Vision Transformer approach for multimodal reasoning, which embeds question awareness directly within the vision encoder. This integration results in dynamic visual features focusing on the image aspects relevant to the posed question. QA-ViT is model-agnostic and can be incorporated efficiently into any VL architecture. Extensive experiments demonstrate the effectiveness of applying our method to various multimodal architectures, leading to consistent improvement across diverse tasks and showcasing its potential for enhancing visual and scene-text understanding. |
|
2024-02-09T00:00:00 | 2402.05140 | Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains | [
"Junhong Shen",
"Neil Tenenholtz",
"James Brian Hall",
"David Alvarez-Melis",
"Nicolo Fusi"
]
| Large Language Models (LLMs) have demonstrated remarkable proficiency in understanding and generating natural language. However, their capabilities wane in highly specialized domains underrepresented in the pretraining corpus, such as physical and biomedical sciences. This work explores how to repurpose general LLMs into effective task solvers for specialized domains. We introduce a novel, model-agnostic framework for learning custom input tags, which are parameterized as continuous vectors appended to the LLM's embedding layer, to condition the LLM. We design two types of input tags: domain tags are used to delimit specialized representations (e.g., chemical formulas) and provide domain-relevant context; function tags are used to represent specific functions (e.g., predicting molecular properties) and compress function-solving instructions. We develop a three-stage protocol to learn these tags using auxiliary data and domain knowledge. By explicitly disentangling task domains from task functions, our method enables zero-shot generalization to unseen problems through diverse combinations of the input tags. It also boosts LLM's performance in various specialized domains, such as predicting protein or chemical properties and modeling drug-target interactions, outperforming expert models tailored to these tasks. |
|
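The input tags described above are continuous vectors spliced into the LLM's embedding sequence. A minimal sketch, assuming a frozen embedding sequence and a prepended tag layout (the class name and placement are illustrative assumptions, not the paper's exact implementation):

```python
import torch
import torch.nn as nn

class InputTags(nn.Module):
    """Trainable domain/function tags as continuous vectors inserted into the
    embedding sequence (a hedged sketch)."""
    def __init__(self, n_tags, hidden_dim):
        super().__init__()
        self.tags = nn.Embedding(n_tags, hidden_dim)

    def forward(self, token_embeds, tag_id):
        # prepend the selected tag vector to the (frozen) LLM token embeddings
        b = token_embeds.shape[0]
        tag = self.tags(torch.full((b, 1), tag_id, dtype=torch.long))
        return torch.cat([tag, token_embeds], dim=1)

tags = InputTags(n_tags=4, hidden_dim=32)
x = torch.randn(2, 10, 32)                 # pretend these come from the LLM
print(tags(x, tag_id=1).shape)             # (2, 11, 32)
```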
2024-02-09T00:00:00 | 2402.05930 | WebLINX: Real-World Website Navigation with Multi-Turn Dialogue | [
"Xing Han Lù",
"Zdeněk Kasner",
"Siva Reddy"
]
| We propose the problem of conversational web navigation, where a digital agent controls a web browser and follows user instructions to solve real-world tasks in a multi-turn dialogue fashion. To support this problem, we introduce WEBLINX - a large-scale benchmark of 100K interactions across 2300 expert demonstrations of conversational web navigation. Our benchmark covers a broad range of patterns on over 150 real-world websites and can be used to train and evaluate agents in diverse scenarios. Due to the magnitude of information present, Large Language Models (LLMs) cannot process entire web pages in real-time. To solve this bottleneck, we design a retrieval-inspired model that efficiently prunes HTML pages by ranking relevant elements. We use the selected elements, along with screenshots and action history, to assess a variety of models for their ability to replicate human behavior when navigating the web. Our experiments span from small text-only to proprietary multimodal LLMs. We find that smaller finetuned decoders surpass the best zero-shot LLMs (including GPT-4V), but also larger finetuned multimodal models which were explicitly pretrained on screenshots. However, all finetuned models struggle to generalize to unseen websites. Our findings highlight the need for large multimodal models that can generalize to novel settings. Our code, data and models are available for research: https://mcgill-nlp.github.io/weblinx |
|
2024-02-09T00:00:00 | 2402.05403 | In-Context Principle Learning from Mistakes | [
"Tianjun Zhang",
"Aman Madaan",
"Luyu Gao",
"Steven Zheng",
"Swaroop Mishra",
"Yiming Yang",
"Niket Tandon",
"Uri Alon"
]
| In-context learning (ICL, also known as few-shot prompting) has been the standard method of adapting LLMs to downstream tasks, by learning from a few input-output examples. Nonetheless, all ICL-based approaches only learn from correct input-output pairs. In this paper, we revisit this paradigm, by learning more from the few given input-output examples. We introduce Learning Principles (LEAP): First, we intentionally induce the model to make mistakes on these few examples; then we reflect on these mistakes, and learn explicit task-specific "principles" from them, which help solve similar problems and avoid common mistakes; finally, we prompt the model to answer unseen test questions using the original few-shot examples and these learned general principles. We evaluate LEAP on a wide range of benchmarks, including multi-hop question answering (Hotpot QA), textual QA (DROP), Big-Bench Hard reasoning, and math problems (GSM8K and MATH); in all these benchmarks, LEAP improves the strongest available LLMs such as GPT-3.5-turbo, GPT-4, GPT-4 turbo and Claude-2.1. For example, LEAP improves over the standard few-shot prompting using GPT-4 by 7.5% in DROP, and by 3.3% in HotpotQA. Importantly, LEAP does not require any more input or examples than the standard few-shot prompting settings. |
|
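The three-step pipeline above (induce mistakes, distill principles, answer with them) is a prompting loop. A rough sketch with a hypothetical `llm(text)` call; prompt wording and the mistake check are assumptions, not the paper's exact prompts.

```python
def leap_prompt(few_shot, questions, llm):
    """LEAP-style loop: mistakes -> principles -> principle-augmented prompting."""
    principles = []
    for q, gold in few_shot:
        attempt = llm(f"Answer concisely:\n{q}")
        if attempt.strip() != gold.strip():                 # an induced mistake
            principles.append(llm(
                f"Question: {q}\nWrong answer: {attempt}\nCorrect answer: {gold}\n"
                "State a general principle that avoids this mistake."))
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in few_shot)
    rules = "\n".join(f"- {p}" for p in principles)
    return [llm(f"Principles:\n{rules}\n\n{shots}\n\nQ: {q}\nA:") for q in questions]

# toy usage with a trivially wrong mock model
print(leap_prompt([("2+2?", "4")], ["3+3?"], llm=lambda t: "5"))
```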
2024-02-09T00:00:00 | 2402.05861 | Memory Consolidation Enables Long-Context Video Understanding | [
"Ivana Balažević",
"Yuge Shi",
"Pinelopi Papalampidi",
"Rahma Chaabouni",
"Skanda Koppula",
"Olivier J. Hénaff"
]
| Most transformer-based video encoders are limited to short temporal contexts due to their quadratic complexity. While various attempts have been made to extend this context, this has often come at the cost of both conceptual and computational complexity. We propose to instead re-purpose existing pre-trained video transformers by simply fine-tuning them to attend to memories derived non-parametrically from past activations. By leveraging redundancy reduction, our memory-consolidated vision transformer (MC-ViT) effortlessly extends its context far into the past and exhibits excellent scaling behavior when learning from longer videos. In doing so, MC-ViT sets a new state-of-the-art in long-context video understanding on EgoSchema, Perception Test, and Diving48, outperforming methods that benefit from orders of magnitude more parameters. |
|
2024-02-09T00:00:00 | 2402.05937 | InstaGen: Enhancing Object Detection by Training on Synthetic Dataset | [
"Chengjian Feng",
"Yujie Zhong",
"Zequn Jie",
"Weidi Xie",
"Lin Ma"
]
| In this paper, we introduce a novel paradigm to enhance the ability of object detectors, e.g., expanding categories or improving detection performance, by training on synthetic datasets generated from diffusion models. Specifically, we integrate an instance-level grounding head into a pre-trained, generative diffusion model, to augment it with the ability of localising arbitrary instances in the generated images. The grounding head is trained to align the text embedding of category names with the regional visual feature of the diffusion model, using supervision from an off-the-shelf object detector, and a novel self-training scheme on (novel) categories not covered by the detector. This enhanced version of the diffusion model, termed InstaGen, can serve as a data synthesizer for object detection. We conduct thorough experiments to show that object detectors can be enhanced by training on the synthetic dataset from InstaGen, demonstrating superior performance over existing state-of-the-art methods in open-vocabulary (+4.5 AP) and data-sparse (+1.2 to 5.2 AP) scenarios. |
|
2024-02-09T00:00:00 | 2402.05672 | Multilingual E5 Text Embeddings: A Technical Report | [
"Liang Wang",
"Nan Yang",
"Xiaolong Huang",
"Linjun Yang",
"Rangan Majumder",
"Furu Wei"
]
| https://github.com/microsoft/unilm/tree/master/e5 | This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a balance between the inference efficiency and embedding quality. The training procedure adheres to the English E5 model recipe, involving contrastive pre-training on 1 billion multilingual text pairs, followed by fine-tuning on a combination of labeled datasets. Additionally, we introduce a new instruction-tuned embedding model, whose performance is on par with state-of-the-art, English-only models of similar sizes. Information regarding the model release can be found at https://github.com/microsoft/unilm/tree/master/e5 . |
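The contrastive pre-training stage named above typically uses an InfoNCE objective with in-batch negatives over text pairs. The sketch below shows that generic objective, not the released E5 training code.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, passage_emb, temperature=0.05):
    """Standard InfoNCE with in-batch negatives over (query, passage) pairs."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    logits = q @ p.T / temperature                 # (batch, batch) similarities
    labels = torch.arange(q.shape[0])              # the i-th passage matches q_i
    return F.cross_entropy(logits, labels)

q = torch.randn(8, 384, requires_grad=True)
p = torch.randn(8, 384)
print(in_batch_contrastive_loss(q, p).item())
```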
2024-02-09T00:00:00 | 2402.05546 | Offline Actor-Critic Reinforcement Learning Scales to Large Models | [
"Jost Tobias Springenberg",
"Abbas Abdolmaleki",
"Jingwei Zhang",
"Oliver Groth",
"Michael Bloesch",
"Thomas Lampe",
"Philemon Brakel",
"Sarah Bechtle",
"Steven Kapturowski",
"Roland Hafner",
"Nicolas Heess",
"Martin Riedmiller"
]
| We show that offline actor-critic reinforcement learning can scale to large models - such as transformers - and follows similar scaling laws as supervised learning. We find that offline actor-critic algorithms can outperform strong, supervised, behavioral cloning baselines for multi-task training on a large dataset containing both sub-optimal and expert behavior on 132 continuous control tasks. We introduce a Perceiver-based actor-critic model and elucidate the key model features needed to make offline RL work with self- and cross-attention modules. Overall, we find that: i) simple offline actor critic algorithms are a natural choice for gradually moving away from the currently predominant paradigm of behavioral cloning, and ii) via offline RL it is possible to learn multi-task policies that master many domains simultaneously, including real robotics tasks, from sub-optimal demonstrations or self-generated data. |
|
2024-02-12T00:00:00 | 2402.06118 | ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling | [
"Siming Yan",
"Min Bai",
"Weifeng Chen",
"Xiong Zhou",
"Qixing Huang",
"Li Erran Li"
]
| By combining the natural language understanding, generation capabilities, and breadth of knowledge of large language models with image perception, recent large vision language models (LVLMs) have shown unprecedented reasoning capabilities in the real world. However, the generated text often suffers from inaccurate grounding in the visual input, resulting in errors such as hallucinating nonexistent scene elements, missing significant parts of the scene, and inferring incorrect attributes and relationships between objects. To address these issues, we introduce a novel framework, ViGoR (Visual Grounding Through Fine-Grained Reward Modeling), that utilizes fine-grained reward modeling to significantly enhance the visual grounding of LVLMs over pre-trained baselines. This improvement is efficiently achieved using much cheaper human evaluations instead of full supervision, as well as automated methods. We show the effectiveness of our approach through numerous metrics on several benchmarks. Additionally, we construct a comprehensive and challenging dataset specifically designed to validate the visual grounding capabilities of LVLMs. Finally, we plan to release our human annotations comprising approximately 16,000 images and generated text pairs with fine-grained evaluations to contribute to related research in the community. |
|
2024-02-12T00:00:00 | 2402.06149 | HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting | [
"Zhenglin Zhou",
"Fan Ma",
"Hehe Fan",
"Yi Yang"
]
| Creating digital avatars from textual prompts has long been a desirable yet challenging task. Despite the promising outcomes obtained through 2D diffusion priors in recent works, current methods face challenges in achieving high-quality and animated avatars effectively. In this paper, we present HeadStudio, a novel framework that utilizes 3D Gaussian splatting to generate realistic and animated avatars from text prompts. Our method drives 3D Gaussians semantically to create a flexible and achievable appearance through the intermediate FLAME representation. Specifically, we incorporate the FLAME into both 3D representation and score distillation: 1) FLAME-based 3D Gaussian splatting, driving 3D Gaussian points by rigging each point to a FLAME mesh. 2) FLAME-based score distillation sampling, utilizing FLAME-based fine-grained control signal to guide score distillation from the text prompt. Extensive experiments demonstrate the efficacy of HeadStudio in generating animatable avatars from textual prompts, exhibiting visually appealing appearances. The avatars are capable of rendering high-quality real-time (≥ 40 fps) novel views at a resolution of 1024. They can be smoothly controlled by real-world speech and video. We hope that HeadStudio can advance digital avatar creation and that the present method can widely be applied across various domains. |
|
2024-02-12T00:00:00 | 2402.06619 | Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning | [
"Shivalika Singh",
"Freddie Vargus",
"Daniel Dsouza",
"Börje F. Karlsson",
"Abinaya Mahendiran",
"Wei-Yin Ko",
"Herumb Shandilya",
"Jay Patel",
"Deividas Mataciunas",
"Laura OMahony",
"Mike Zhang",
"Ramith Hettiarachchi",
"Joseph Wilson",
"Marina Machado",
"Luisa Souza Moura",
"Dominik Krzemiński",
"Hakimeh Fadaei",
"Irem Ergün",
"Ifeoma Okoh",
"Aisha Alaagib",
"Oshan Mudannayake",
"Zaid Alyafeai",
"Vu Minh Chien",
"Sebastian Ruder",
"Surya Guthikonda",
"Emad A. Alghamdi",
"Sebastian Gehrmann",
"Niklas Muennighoff",
"Max Bartolo",
"Julia Kreutzer",
"Ahmet Üstün",
"Marzieh Fadaee",
"Sara Hooker"
]
| Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the finetuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and translating existing datasets across 114 languages. In total, we contribute four key resources: we develop and open-source the Aya Annotation Platform, the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as a valuable framework for future research collaborations that aim to bridge gaps in resources. |
|
2024-02-12T00:00:00 | 2402.06178 | MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models | [
"Yixiao Zhang",
"Yukara Ikemiya",
"Gus Xia",
"Naoki Murata",
"Marco Martínez",
"Wei-Hsiang Liao",
"Yuki Mitsufuji",
"Simon Dixon"
]
| Recent advances in text-to-music generation models have opened new avenues in musical creativity. However, music generation usually involves iterative refinements, and how to edit the generated music remains a significant challenge. This paper introduces a novel approach to the editing of music generated by such models, enabling the modification of specific attributes, such as genre, mood and instrument, while maintaining other aspects unchanged. Our method transforms text editing to latent space manipulation while adding an extra constraint to enforce consistency. It seamlessly integrates with existing pretrained text-to-music diffusion models without requiring additional training. Experimental results demonstrate superior performance over both zero-shot and certain supervised baselines in style and timbre transfer evaluations. Additionally, we showcase the practical applicability of our approach in real-world music editing scenarios. |
|
2024-02-12T00:00:00 | 2402.06332 | InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning | [
"Huaiyuan Ying",
"Shuo Zhang",
"Linyang Li",
"Zhejian Zhou",
"Yunfan Shao",
"Zhaoye Fei",
"Yichuan Ma",
"Jiawei Hong",
"Kuikun Liu",
"Ziyi Wang",
"Yudong Wang",
"Zijian Wu",
"Shuaibin Li",
"Fengzhe Zhou",
"Hongwei Liu",
"Songyang Zhang",
"Wenwei Zhang",
"Hang Yan",
"Xipeng Qiu",
"Jiayu Wang",
"Kai Chen",
"Dahua Lin"
]
| https://github.com/InternLM/InternLM-Math | The math abilities of large language models can represent their abstract reasoning ability. In this paper, we introduce and open-source our math reasoning LLMs, InternLM-Math, which are continually pre-trained from InternLM2. We unify chain-of-thought reasoning, reward modeling, formal reasoning, data augmentation, and a code interpreter in a unified seq2seq format and supervise our model to be a versatile math reasoner, verifier, prover, and augmenter. These abilities can be used to develop the next math LLMs or self-iteration. InternLM-Math obtains open-sourced state-of-the-art performance under the setting of in-context learning, supervised fine-tuning, and code-assisted reasoning in various informal and formal benchmarks including GSM8K, MATH, the Hungarian math exam, MathBench-ZH, and MiniF2F. Our pre-trained model achieves 30.3 on the MiniF2F test set without fine-tuning. We further explore how to use LEAN to solve math problems and study its performance under the setting of multi-task learning which shows the possibility of using LEAN as a unified platform for solving and proving in math. Our models, codes, and data are released at https://github.com/InternLM/InternLM-Math. |
2024-02-12T00:00:00 | 2402.06071 | Keyframer: Empowering Animation Design using Large Language Models | [
"Tiffany Tseng",
"Ruijia Cheng",
"Jeffrey Nichols"
]
| Large language models (LLMs) have the potential to impact a wide range of creative domains, but the application of LLMs to animation is underexplored and presents novel challenges such as how users might effectively describe motion in natural language. In this paper, we present Keyframer, a design tool for animating static images (SVGs) with natural language. Informed by interviews with professional animation designers and engineers, Keyframer supports exploration and refinement of animations through the combination of prompting and direct editing of generated output. The system also enables users to request design variants, supporting comparison and ideation. Through a user study with 13 participants, we contribute a characterization of user prompting strategies, including a taxonomy of semantic prompt types for describing motion and a 'decomposed' prompting style where users continually adapt their goals in response to generated output. We share how direct editing along with prompting enables iteration beyond one-shot prompting interfaces common in generative tools today. Through this work, we propose how LLMs might empower a range of audiences to engage with animation creation. |
|
2024-02-12T00:00:00 | 2402.06082 | SubGen: Token Generation in Sublinear Time and Memory | [
"Amir Zandieh",
"Insu Han",
"Vahab Mirrokni",
"Amin Karbasi"
]
| Despite the significant success of large language models (LLMs), their extensive memory requirements pose challenges for deploying them in long-context token generation. The substantial memory footprint of LLM decoders arises from the necessity to store all previous tokens in the attention module, a requirement imposed by key-value (KV) caching. In this work, our focus is on developing an efficient compression technique for the KV cache. Empirical evidence indicates a significant clustering tendency within key embeddings in the attention module. Building on this key insight, we have devised a novel caching method with sublinear complexity, employing online clustering on key tokens and online ℓ2 sampling on values. The result is a provably accurate and efficient attention decoding algorithm, termed SubGen. Not only does this algorithm ensure a sublinear memory footprint and sublinear time complexity, but we also establish a tight error bound for our approach. Empirical evaluations on long-context question-answering tasks demonstrate that SubGen significantly outperforms existing and state-of-the-art KV cache compression methods in terms of performance and efficiency. |
|
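One ingredient named above, ℓ2 sampling on values, amounts to keeping KV entries with probability proportional to the ℓ2 norm of the value vector. The sketch below shows only that sampling step, as a loose simplification of the method; it omits the online clustering of keys and the streaming bookkeeping.

```python
import numpy as np

def l2_sample_kv(keys, values, budget, seed=0):
    """Keep a subset of KV entries sampled proportionally to ||v||_2 (a very
    loose sketch, not the SubGen algorithm)."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(values, axis=-1)
    probs = norms / norms.sum()
    idx = rng.choice(len(keys), size=budget, replace=False, p=probs)
    return keys[idx], values[idx]

rng = np.random.default_rng(1)
k, v = rng.standard_normal((128, 64)), rng.standard_normal((128, 64))
ks, vs = l2_sample_kv(k, v, budget=32)
print(ks.shape, vs.shape)                  # (32, 64) (32, 64)
```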
2024-02-12T00:00:00 | 2402.06147 | DeAL: Decoding-time Alignment for Large Language Models | [
"James Y. Huang",
"Sailik Sengupta",
"Daniele Bonadiman",
"Yi-an Lai",
"Arshit Gupta",
"Nikolaos Pappas",
"Saab Mansour",
"Katrin Kirchoff",
"Dan Roth"
]
| Large Language Models (LLMs) are nowadays expected to generate content aligned with human preferences. Current work focuses on alignment at model training time, through techniques such as Reinforcement Learning with Human Feedback (RLHF). However, it is unclear if such methods are an effective choice to teach alignment objectives to the model. First, the inability to incorporate multiple, custom rewards and reliance on a model developer's view of universal and static principles are key limitations. Second, the residual gaps in model training and the reliability of such approaches are also questionable (e.g. susceptibility to jail-breaking even after safety training). To address these, we propose DeAL, a framework that allows the user to customize reward functions and enables Decoding-time Alignment of LLMs (DeAL). At its core, we view decoding as a heuristic-guided search process and facilitate the use of a wide variety of alignment objectives. Our experiments with programmatic constraints such as keyword and length constraints (studied widely in the pre-LLM era) and abstract objectives such as harmlessness and helpfulness (proposed in the post-LLM era) show that we can DeAL with fine-grained trade-offs, improve adherence to alignment objectives, and address residual gaps in LLMs. Lastly, while DeAL can be effectively paired with RLHF and prompting techniques, its generality makes decoding slower, an optimization we leave for future work. |
|
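Viewing decoding as heuristic-guided search, one step of alignment-aware generation can be sketched as a beam expansion scored by model log-probability plus a weighted alignment heuristic. The `expand` and `alignment_score` callables are hypothetical placeholders, and this is a toy illustration of the framing rather than the DeAL system.

```python
import heapq

def guided_beam_step(beams, expand, alignment_score, lam=1.0, beam_size=4):
    """One step of heuristic-guided search over text continuations."""
    candidates = []
    for text, logp in beams:
        for token, token_logp in expand(text):       # (token, log-prob) pairs
            new_text = text + token
            score = logp + token_logp + lam * alignment_score(new_text)
            candidates.append((score, new_text, logp + token_logp))
    top = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return [(text, logp) for _, text, logp in top]

# toy usage: prefer continuations that avoid the word "bad"
expand = lambda t: [(" good", -0.5), (" bad", -0.3)]
score = lambda t: -5.0 if "bad" in t else 0.0
print(guided_beam_step([("answer:", 0.0)], expand, score, beam_size=1))
```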
2024-02-12T00:00:00 | 2402.06155 | Model Editing with Canonical Examples | [
"John Hewitt",
"Sarah Chen",
"Lanruo Lora Xie",
"Edward Adams",
"Percy Liang",
"Christopher D. Manning"
]
| We introduce model editing with canonical examples, a setting in which (1) a single learning example is provided per desired behavior, (2) evaluation is performed exclusively out-of-distribution, and (3) deviation from an initial model is strictly limited. A canonical example is a simple instance of good behavior (e.g., The capital of Mauritius is Port Louis) or bad behavior (e.g., An aspect of researchers is coldhearted). The evaluation set contains more complex examples of each behavior (like a paragraph in which the capital of Mauritius is called for). We create three datasets and modify three more for model editing with canonical examples, covering knowledge-intensive improvements, social bias mitigation, and syntactic edge cases. In our experiments on Pythia language models, we find that LoRA outperforms full finetuning and MEMIT. We then turn to the Backpack language model architecture because it is intended to enable targeted improvement. The Backpack defines a large bank of sense vectors--a decomposition of the different uses of each word--which are weighted and summed to form the output logits of the model. We propose sense finetuning, which selects and finetunes a few (approximately 10) sense vectors for each canonical example, and find that it outperforms other finetuning methods, e.g., 4.8% improvement vs 0.3%. Finally, we improve GPT-J-6B by an inference-time ensemble with just the changes from sense finetuning of a 35x smaller Backpack, in one setting outperforming editing GPT-J itself (4.1% vs 1.0%). |
|
2024-02-12T00:00:00 | 2402.06088 | Animated Stickers: Bringing Stickers to Life with Video Diffusion | [
"David Yan",
"Winnie Zhang",
"Luxin Zhang",
"Anmol Kalia",
"Dingkang Wang",
"Ankit Ramchandani",
"Miao Liu",
"Albert Pumarola",
"Edgar Schoenfeld",
"Elliot Blanchard",
"Krishna Narni",
"Yaqiao Luo",
"Lawrence Chen",
"Guan Pang",
"Ali Thabet",
"Peter Vajda",
"Amy Bearman",
"Licheng Yu"
]
| We introduce animated stickers, a video diffusion model which generates an animation conditioned on a text prompt and static sticker image. Our model is built on top of the state-of-the-art Emu text-to-image model, with the addition of temporal layers to model motion. Due to the domain gap, i.e. differences in visual and motion style, a model which performed well on generating natural videos can no longer generate vivid videos when applied to stickers. To bridge this gap, we employ a two-stage finetuning pipeline: first with weakly in-domain data, followed by human-in-the-loop (HITL) strategy which we term ensemble-of-teachers. It distills the best qualities of multiple teachers into a smaller student model. We show that this strategy allows us to specifically target improvements to motion quality while maintaining the style from the static image. With inference optimizations, our model is able to generate an eight-frame video with high-quality, interesting, and relevant motion in under one second. |
|
2024-02-12T00:00:00 | 2402.06102 | Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning | [
"Mohak Bhardwaj",
"Thomas Lampe",
"Michael Neunert",
"Francesco Romano",
"Abbas Abdolmaleki",
"Arunkumar Byravan",
"Markus Wulfmeier",
"Martin Riedmiller",
"Jonas Buchli"
]
| Recent advances in real-world applications of reinforcement learning (RL) have relied on the ability to accurately simulate systems at scale. However, domains such as fluid dynamical systems exhibit complex dynamic phenomena that are hard to simulate at high integration rates, limiting the direct application of modern deep RL algorithms to often expensive or safety critical hardware. In this work, we introduce "Box o Flows", a novel benchtop experimental control system for systematically evaluating RL algorithms in dynamic real-world scenarios. We describe the key components of the Box o Flows, and through a series of experiments demonstrate how state-of-the-art model-free RL algorithms can synthesize a variety of complex behaviors via simple reward specifications. Furthermore, we explore the role of offline RL in data-efficient hypothesis testing by reusing past experiences. We believe that the insights gained from this preliminary study and the availability of systems like the Box o Flows support the way forward for developing systematic RL algorithms that can be generally applied to complex, dynamical systems. Supplementary material and videos of experiments are available at https://sites.google.com/view/box-o-flows/home. |
|
2024-02-12T00:00:00 | 2402.06187 | Premier-TACO: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss | [
"Ruijie Zheng",
"Yongyuan Liang",
"Xiyao Wang",
"Shuang Ma",
"Hal Daumé III",
"Huazhe Xu",
"John Langford",
"Praveen Palanisamy",
"Kalyan Shankar Basu",
"Furong Huang"
]
| https://github.com/PremierTACO/premier-taco | We present Premier-TACO, a multitask feature representation learning approach designed to improve few-shot policy learning efficiency in sequential decision-making tasks. Premier-TACO leverages a subset of multitask offline datasets for pretraining a general feature representation, which captures critical environmental dynamics and is fine-tuned using minimal expert demonstrations. It advances the temporal action contrastive learning (TACO) objective, known for state-of-the-art results in visual control tasks, by incorporating a novel negative example sampling strategy. This strategy is crucial in significantly boosting TACO's computational efficiency, making large-scale multitask offline pretraining feasible. Our extensive empirical evaluation in a diverse set of continuous control benchmarks including Deepmind Control Suite, MetaWorld, and LIBERO demonstrate Premier-TACO's effectiveness in pretraining visual representations, significantly enhancing few-shot imitation learning of novel tasks. Our code, pretraining data, as well as pretrained model checkpoints will be released at https://github.com/PremierTACO/premier-taco. |
2024-02-13T00:00:00 | 2402.07456 | OS-Copilot: Towards Generalist Computer Agents with Self-Improvement | [
"Zhiyong Wu",
"Chengcheng Han",
"Zichen Ding",
"Zhenmin Weng",
"Zhoumianze Liu",
"Shunyu Yao",
"Tao Yu",
"Lingpeng Kong"
]
| Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents. However, most of these agents are designed to interact with a narrow domain, such as a specific software or website. This narrow focus constrains their applicability for general computer tasks. To this end, we introduce OS-Copilot, a framework to build generalist agents capable of interfacing with comprehensive elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. We use OS-Copilot to create FRIDAY, a self-improving embodied agent for automating general computer tasks. On GAIA, a general AI assistants benchmark, FRIDAY outperforms previous methods by 35%, showcasing strong generalization to unseen applications via accumulated skills from previous tasks. We also present numerical and quantitative evidence that FRIDAY learns to control and self-improve on Excel and Powerpoint with minimal supervision. Our OS-Copilot framework and empirical findings provide infrastructure and insights for future research toward more capable and general-purpose computer agents. |
|
2024-02-13T00:00:00 | 2402.07043 | A Tale of Tails: Model Collapse as a Change of Scaling Laws | [
"Elvis Dohmatob",
"Yunzhen Feng",
"Pu Yang",
"Francois Charton",
"Julia Kempe"
]
| As AI model size grows, neural scaling laws have become a crucial tool to predict the improvements of large models when increasing capacity and the size of original (human or natural) training data. Yet, the widespread use of popular models means that the ecosystem of online data and text will co-evolve to progressively contain increased amounts of synthesized data. In this paper we ask: How will the scaling laws change in the inevitable regime where synthetic data makes its way into the training corpus? Will future models still improve, or be doomed to degenerate up to total (model) collapse? We develop a theoretical framework of model collapse through the lens of scaling laws. We discover a wide range of decay phenomena, analyzing loss of scaling, shifted scaling with number of generations, the "un-learning" of skills, and grokking when mixing human and synthesized data. Our theory is validated by large-scale experiments with a transformer on an arithmetic task and text generation using the large language model Llama2. |
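For orientation only: neural scaling laws of the kind the abstract refers to are commonly written in the Chinchilla-style form below. This is standard background, not the paper's own parameterization; the paper's question can be read as how the exponents and the irreducible term behave once a growing share of the training tokens is model-generated.

$$ L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}} $$

where $N$ is the number of parameters, $D$ the number of training tokens, $E$ the irreducible loss, and $A, B, \alpha, \beta$ fitted constants.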
|
2024-02-13T00:00:00 | 2402.07033 | Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models | [
"Keisuke Kamahori",
"Yile Gu",
"Kan Zhu",
"Baris Kasikci"
]
| https://github.com/efeslab/fiddler | Large Language Models (LLMs) based on Mixture-of-Experts (MoE) architecture are showing promising performance on various tasks. However, running them on resource-constrained settings, where GPU memory resources are not abundant, is challenging due to huge model sizes. Existing systems that offload model weights to CPU memory suffer from the significant overhead of frequently moving data between CPU and GPU. In this paper, we propose Fiddler, a resource-efficient inference engine with CPU-GPU orchestration for MoE models. The key idea of Fiddler is to use the computation ability of the CPU to minimize the data movement between the CPU and GPU. Our evaluation shows that Fiddler can run the uncompressed Mixtral-8x7B model, which exceeds 90GB in parameters, to generate over 3 tokens per second on a single GPU with 24GB memory, showing an order of magnitude improvement over existing methods. The code of Fiddler is publicly available at https://github.com/efeslab/fiddler |
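A minimal sketch of the orchestration idea described above, with invented function and variable names (this is not Fiddler's actual API): experts whose weights already sit on the GPU run there, while CPU-resident experts are executed on the CPU so that only small activations cross the PCIe link instead of large expert weights.

```python
import torch
import torch.nn as nn

def run_expert(expert_id, activation, gpu_experts, cpu_experts, gpu="cuda", cpu="cpu"):
    # If the expert's weights are already resident on the GPU, run it there.
    if expert_id in gpu_experts:
        return gpu_experts[expert_id](activation.to(gpu))
    # Otherwise, ship the small activation to the CPU and compute with the CPU copy of
    # the expert, instead of paging hundreds of MB of expert weights onto the GPU.
    return cpu_experts[expert_id](activation.to(cpu)).to(gpu)

# Toy usage; both "devices" are set to CPU here so the sketch runs anywhere.
experts = {i: nn.Linear(16, 16) for i in range(8)}
out = run_expert(3, torch.randn(2, 16), gpu_experts={0: experts[0]},
                 cpu_experts=experts, gpu="cpu", cpu="cpu")
```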
2024-02-13T00:00:00 | 2402.06852 | ChemLLM: A Chemical Large Language Model | [
"Di Zhang",
"Wei Liu",
"Qian Tan",
"Jingdan Chen",
"Hang Yan",
"Yuliang Yan",
"Jiatong Li",
"Weiran Huang",
"Xiangyu Yue",
"Dongzhan Zhou",
"Shufei Zhang",
"Mao Su",
"Hansen Zhong",
"Yuqiang Li",
"Wanli Ouyang"
]
| Large language models (LLMs) have made impressive progress in chemistry applications, including molecular property prediction, molecular generation, experimental protocol design, etc. However, the community lacks a dialogue-based model specifically designed for chemistry. The challenge arises from the fact that most chemical data and scientific knowledge are primarily stored in structured databases, and the direct use of these structured data compromises the model's ability to maintain coherent dialogue. To tackle this issue, we develop a novel template-based instruction construction method that transforms structured knowledge into plain dialogue, making it suitable for language model training. By leveraging this approach, we develop ChemLLM, the first large language model dedicated to chemistry, capable of performing various tasks across chemical disciplines with smooth dialogue interaction. ChemLLM beats GPT-3.5 on all three principal tasks in chemistry, i.e., name conversion, molecular caption, and reaction prediction, and surpasses GPT-4 on two of them. Remarkably, ChemLLM also shows exceptional adaptability to related mathematical and physical tasks despite being trained mainly on chemical-centric corpora. Furthermore, ChemLLM demonstrates proficiency in specialized NLP tasks within chemistry, such as literature translation and cheminformatic programming. ChemLLM opens up a new avenue for exploration within chemical studies, while our method of integrating structured chemical knowledge into dialogue systems sets a new frontier for developing LLMs across various scientific fields. Codes, Datasets, and Model weights are publicly accessible at hf.co/AI4Chem/ChemLLM-7B-Chat. |
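A toy illustration of template-based instruction construction of the kind described above, assuming a hypothetical record schema and hand-written templates (neither is taken from the ChemLLM paper): each structured entry is rendered into a user/assistant dialogue turn suitable for chat-style finetuning.

```python
import random

# Hypothetical templates and field names, invented for this sketch.
QUESTION_TEMPLATES = [
    "What is the {property} of {name} (SMILES: {smiles})?",
    "Could you tell me the {property} for the molecule {smiles}?",
]

def record_to_dialogue(record):
    # Render one structured database record as a single-turn dialogue example.
    q = random.choice(QUESTION_TEMPLATES).format(**record)
    a = f"The {record['property']} of {record['name']} is {record['value']}."
    return {"messages": [{"role": "user", "content": q},
                         {"role": "assistant", "content": a}]}

example = {"name": "caffeine", "smiles": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
           "property": "molecular weight", "value": "194.19 g/mol"}
print(record_to_dialogue(example))
```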
|
2024-02-13T00:00:00 | 2402.07896 | Suppressing Pink Elephants with Direct Principle Feedback | [
"Louis Castricato",
"Nathan Lile",
"Suraj Anand",
"Hailey Schoelkopf",
"Siddharth Verma",
"Stella Biderman"
]
| Existing methods for controlling language models, such as RLHF and Constitutional AI, involve determining which LLM behaviors are desirable and training them into a language model. However, in many cases, it is desirable for LLMs to be controllable at inference time, so that they can be used in multiple contexts with diverse needs. We illustrate this with the Pink Elephant Problem: instructing an LLM to avoid discussing a certain entity (a ``Pink Elephant''), and instead discuss a preferred entity (``Grey Elephant''). We apply a novel simplification of Constitutional AI, Direct Principle Feedback, which skips the ranking of responses and uses DPO directly on critiques and revisions. Our results show that after DPF fine-tuning on our synthetic Pink Elephants dataset, our 13B fine-tuned LLaMA 2 model significantly outperforms Llama-2-13B-Chat and a prompted baseline, and performs as well as GPT-4 on our curated test set assessing the Pink Elephant Problem. |
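Direct Principle Feedback, as summarized above, applies the standard DPO objective with the revised response as the preferred sequence and the original response as the rejected one. A minimal sketch of that loss on precomputed sequence log-probabilities (variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective. In a Direct-Principle-Feedback-style setup, 'chosen' is the
    revised response and 'rejected' is the original response, so no ranking step is needed."""
    chosen_logratio = logp_chosen - ref_logp_chosen
    rejected_logratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy usage with fake per-example sequence log-probabilities:
loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-10.1]),
                torch.tensor([-12.0]), torch.tensor([-10.0]))
```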
|
2024-02-13T00:00:00 | 2402.07625 | AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts | [
"Yifan Zhang",
"Yifan Luo",
"Yang Yuan",
"Andrew Chi-Chih Yao"
]
| https://github.com/yifanzhang-pro/AutoMathText | To improve language models' proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy that leverages base language models for autonomous data selection. Departing from conventional supervised fine-tuning or trained classifiers with human-annotated data, our approach utilizes meta-prompted language models as zero-shot verifiers to autonomously evaluate and select high-quality mathematical content, and we release the curated open-source AutoMathText dataset encompassing over 200GB of data. To demonstrate the efficacy of our method, we continuously pretrained a 7B-parameter Mistral language model on the AutoMathText dataset, achieving substantial improvements in downstream performance on the MATH dataset with a token amount reduced by orders of magnitude compared to previous continuous pretraining works. Our method showcases a 2 times increase in pretraining token efficiency compared to baselines, underscoring the potential of our approach in enhancing models' mathematical reasoning capabilities. The AutoMathText dataset is available at https://huggingface.co/datasets/math-ai/AutoMathText. The code is available at https://github.com/yifanzhang-pro/AutoMathText. |
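A rough sketch of using a base model as a zero-shot quality verifier in the spirit of the approach above; the prompt wording and the YES/NO scoring rule are assumptions for illustration, not the paper's exact meta-prompt, and any causal LM from Hugging Face Transformers could stand in for the Mistral checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-v0.1"  # any causal LM works for this sketch
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL)

def math_quality_score(passage: str) -> float:
    # Ask the base model whether the passage is high-quality math content and score it
    # by the relative probability of the first sub-token of " YES" vs. " NO".
    prompt = ("Does the following text contain high-quality mathematical reasoning "
              f"suitable for pretraining? Answer YES or NO.\n\n{passage}\n\nAnswer:")
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]            # next-token distribution
    probs = torch.softmax(logits, dim=-1)
    yes = tok(" YES", add_special_tokens=False).input_ids[0]
    no = tok(" NO", add_special_tokens=False).input_ids[0]
    return (probs[yes] / (probs[yes] + probs[no])).item()
```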
2024-02-13T00:00:00 | 2402.07610 | Step-On-Feet Tuning: Scaling Self-Alignment of LLMs via Bootstrapping | [
"Haoyu Wang",
"Guozheng Ma",
"Ziqiao Meng",
"Zeyu Qin",
"Li Shen",
"Zhong Zhang",
"Bingzhe Wu",
"Liu Liu",
"Yatao Bian",
"Tingyang Xu",
"Xueqian Wang",
"Peilin Zhao"
]
| Self-alignment is an effective way to reduce the cost of human annotation while ensuring promising model capability. However, most current methods complete the data collection and training steps in a single round, which may overlook the continuously improving ability of self-aligned models. This gives rise to a key query: What if we do multi-time bootstrapping self-alignment? Does this strategy enhance model performance or lead to rapid degradation? In this paper, our pioneering exploration delves into the impact of bootstrapping self-alignment on large language models. Our findings reveal that bootstrapping self-alignment markedly surpasses the single-round approach, by guaranteeing data diversity from in-context learning. To further exploit the capabilities of bootstrapping, we investigate and adjust the training order of data, which yields improved performance of the model. Drawing on these findings, we propose Step-On-Feet Tuning (SOFT), which leverages the model's continuously enhanced few-shot ability to boost zero- or one-shot performance. Building on an easy-to-hard training recipe, we propose SOFT+, which further boosts self-alignment's performance. Our experiments demonstrate the efficiency of SOFT (SOFT+) across various classification and generation tasks, highlighting the potential of bootstrapping self-alignment for continually enhancing model alignment performance. |
|
2024-02-13T00:00:00 | 2402.07207 | GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting | [
"Xiaoyu Zhou",
"Xingjian Ran",
"Yajiao Xiong",
"Jinlin He",
"Zhiwei Lin",
"Yongtao Wang",
"Deqing Sun",
"Ming-Hsuan Yang"
]
| We present GALA3D, generative 3D GAussians with LAyout-guided control, for effective compositional text-to-3D generation. We first utilize large language models (LLMs) to generate the initial layout and introduce a layout-guided 3D Gaussian representation for 3D content generation with adaptive geometric constraints. We then propose an object-scene compositional optimization mechanism with conditioned diffusion to collaboratively generate realistic 3D scenes with consistent geometry, texture, scale, and accurate interactions among multiple objects while simultaneously adjusting the coarse layout priors extracted from the LLMs to align with the generated scene. Experiments show that GALA3D is a user-friendly, end-to-end framework for state-of-the-art scene-level 3D content generation and controllable editing while ensuring the high fidelity of object-level entities within the scene. Source codes and models will be available at https://gala3d.github.io/. |
|
2024-02-13T00:00:00 | 2402.06859 | LiRank: Industrial Large Scale Ranking Models at LinkedIn | [
"Fedor Borisyuk",
"Mingzhou Zhou",
"Qingquan Song",
"Siyu Zhu",
"Birjodh Tiwana",
"Ganesh Parameswaran",
"Siddharth Dangi",
"Lars Hertel",
"Qiang Xiao",
"Xiaochen Hou",
"Yunbo Ouyang",
"Aman Gupta",
"Sheallika Singh",
"Dan Liu",
"Hailing Cheng",
"Lei Le",
"Jonathan Hung",
"Sathiya Keerthi",
"Ruoyan Wang",
"Fengyu Zhang",
"Mohit Kothari",
"Chen Zhu",
"Daqi Sun",
"Yun Dai",
"Xun Luan",
"Sirou Zhu",
"Zhiwei Wang",
"Neil Daftary",
"Qianqi Shen",
"Chengming Jiang",
"Haichao Wei",
"Maneesh Varshney",
"Amol Ghoting",
"Souvik Ghosh"
]
| We present LiRank, a large-scale ranking framework at LinkedIn that brings to production state-of-the-art modeling architectures and optimization methods. We unveil several modeling improvements, including Residual DCN, which adds attention and residual connections to the famous DCNv2 architecture. We share insights into combining and tuning SOTA architectures to create a unified model, including Dense Gating, Transformers and Residual DCN. We also propose novel techniques for calibration and describe how we productionalized deep learning based explore/exploit methods. To enable effective, production-grade serving of large ranking models, we detail how to train and compress models using quantization and vocabulary compression. We provide details about the deployment setup for large-scale use cases of Feed ranking, Jobs Recommendations, and Ads click-through rate (CTR) prediction. We summarize our learnings from various A/B tests by elucidating the most effective technical approaches. These ideas have contributed to relative metrics improvements across the board at LinkedIn: +0.5% member sessions in the Feed, +1.76% qualified job applications for Jobs search and recommendations, and +4.3% for Ads CTR. We hope this work can provide practical insights and solutions for practitioners interested in leveraging large-scale deep ranking systems. |
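For context, the DCNv2 cross layer that the abstract says Residual DCN builds on computes x_{l+1} = x_0 * (W_l x_l + b_l) + x_l. A minimal PyTorch version is below; the attention and additional residual connections that LiRank adds on top are not reproduced here.

```python
import torch
import torch.nn as nn

class CrossLayerV2(nn.Module):
    """Standard DCNv2 cross layer: elementwise interaction between the input features
    x_0 and a linear map of the current layer's features x_l, plus a skip connection."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim)

    def forward(self, x0, xl):
        return x0 * self.w(xl) + xl

# Toy usage: stack several layers, feeding x0 each time, to build higher-order feature crosses.
layer = CrossLayerV2(32)
x0 = torch.randn(8, 32)
out = layer(x0, x0)
```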
|
2024-02-13T00:00:00 | 2402.07876 | Policy Improvement using Language Feedback Models | [
"Victor Zhong",
"Dipendra Misra",
"Xingdi Yuan",
"Marc-Alexandre Côté"
]
| We introduce Language Feedback Models (LFMs) that identify desirable behaviour - actions that help achieve tasks specified in the instruction - for imitation learning in instruction following. To train LFMs, we obtain feedback from Large Language Models (LLMs) on visual trajectories verbalized to language descriptions. First, by using LFMs to identify desirable behaviour to imitate, we improve in task-completion rate over strong behavioural cloning baselines on three distinct language grounding environments (Touchdown, ScienceWorld, and ALFWorld). Second, LFMs outperform using LLMs as experts to directly predict actions, when controlling for the number of LLM output tokens. Third, LFMs generalize to unseen environments, improving task-completion rate by 3.5-12.0% through one round of adaptation. Finally, LFM can be modified to provide human-interpretable feedback without performance loss, allowing human verification of desirable behaviour for imitation learning. |
|
2024-02-13T00:00:00 | 2402.07872 | PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs | [
"Soroush Nasiriany",
"Fei Xia",
"Wenhao Yu",
"Ted Xiao",
"Jacky Liang",
"Ishita Dasgupta",
"Annie Xie",
"Danny Driess",
"Ayzaan Wahid",
"Zhuo Xu",
"Quan Vuong",
"Tingnan Zhang",
"Tsang-Wei Edward Lee",
"Kuang-Huei Lee",
"Peng Xu",
"Sean Kirmani",
"Yuke Zhu",
"Andy Zeng",
"Karol Hausman",
"Nicolas Heess",
"Chelsea Finn",
"Sergey Levine",
"Brian Ichter"
]
| Vision language models (VLMs) have shown impressive capabilities across a variety of tasks, from logical reasoning to visual understanding. This opens the door to richer interaction with the world, for example robotic control. However, VLMs produce only textual outputs, while robotic control and other spatial tasks require outputting continuous coordinates, actions, or trajectories. How can we enable VLMs to handle such settings without fine-tuning on task-specific data? In this paper, we propose a novel visual prompting approach for VLMs that we call Prompting with Iterative Visual Optimization (PIVOT), which casts tasks as iterative visual question answering. In each iteration, the image is annotated with a visual representation of proposals that the VLM can refer to (e.g., candidate robot actions, localizations, or trajectories). The VLM then selects the best ones for the task. These proposals are iteratively refined, allowing the VLM to eventually zero in on the best available answer. We investigate PIVOT on real-world robotic navigation, real-world manipulation from images, instruction following in simulation, and additional spatial inference tasks such as localization. We find, perhaps surprisingly, that our approach enables zero-shot control of robotic systems without any robot training data, navigation in a variety of environments, and other capabilities. Although current performance is far from perfect, our work highlights potentials and limitations of this new regime and shows a promising approach for Internet-Scale VLMs in robotic and spatial reasoning domains. Website: pivot-prompt.github.io and HuggingFace: https://huggingface.co/spaces/pivot-prompt/pivot-prompt-demo. |
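The iterative structure described above can be conveyed with a self-contained toy loop: propose candidate actions, let a selector choose the best-looking ones, then resample around them with a shrinking radius. The "VLM" here is a stub that scores 2D points against a hidden goal, standing in for the real model call on annotated images; everything in this sketch is illustrative.

```python
import random

GOAL = (0.8, 0.2)  # hidden target the stub "VLM" happens to prefer

def fake_vlm_select(candidates, k=3):
    # Stand-in for querying a VLM with an annotated image of candidate proposals.
    return sorted(candidates, key=lambda p: (p[0]-GOAL[0])**2 + (p[1]-GOAL[1])**2)[:k]

def pivot_loop(n_candidates=16, n_iters=4, spread=0.5):
    center, best = (0.5, 0.5), None
    for _ in range(n_iters):
        candidates = [(center[0] + random.uniform(-spread, spread),
                       center[1] + random.uniform(-spread, spread))
                      for _ in range(n_candidates)]
        chosen = fake_vlm_select(candidates)   # "ask" which annotated proposals look best
        best = chosen[0]
        center = (sum(p[0] for p in chosen) / len(chosen),
                  sum(p[1] for p in chosen) / len(chosen))
        spread *= 0.5                          # zoom in around the selected proposals
    return best

print(pivot_loop())
```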
|
2024-02-13T00:00:00 | 2402.07871 | Scaling Laws for Fine-Grained Mixture of Experts | [
"Jakub Krajewski",
"Jan Ludziejewski",
"Kamil Adamczewski",
"Maciej Pióro",
"Michał Krutul",
"Szymon Antoniak",
"Kamil Ciebiera",
"Krystian Król",
"Tomasz Odrzygóźdź",
"Piotr Sankowski",
"Marek Cygan",
"Sebastian Jaszczur"
]
| Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, incorporating an expanded range of variables. Specifically, we introduce a new hyperparameter, granularity, whose adjustment enables precise control over the size of the experts. Building on this, we establish scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Leveraging these laws, we derive the optimal training configuration for a given computational budget. Our findings not only show that MoE models consistently outperform dense Transformers but also highlight that the efficiency gap between dense and MoE models widens as we scale up the model size and training budget. Furthermore, we demonstrate that the common practice of setting the size of experts in MoE to mirror the feed-forward layer is not optimal at almost any computational budget. |
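One way to read the granularity hyperparameter, sketched below with invented bookkeeping (our reading of the abstract, not the paper's exact definitions): a granularity of G shrinks each expert's hidden width by a factor of G while routing each token to G times as many experts, keeping active parameters per token roughly fixed while making experts finer-grained.

```python
def fine_grained_moe_config(d_model, d_ff, n_experts, granularity, top_k_base=2):
    """Illustrative accounting only. n_experts is the expert count at granularity 1;
    parameter counts cover the two FFN projections per expert and ignore biases."""
    expert_d_ff = d_ff // granularity                    # narrower experts
    top_k = top_k_base * granularity                     # more experts active per token
    active_params = top_k * (2 * d_model * expert_d_ff)  # per-token FFN params used
    total_params = n_experts * granularity * (2 * d_model * expert_d_ff)
    return {"expert_d_ff": expert_d_ff, "top_k": top_k,
            "active_params": active_params, "total_params": total_params}

print(fine_grained_moe_config(d_model=2048, d_ff=8192, n_experts=8, granularity=4))
```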
|
2024-02-13T00:00:00 | 2402.07865 | Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models | [
"Siddharth Karamcheti",
"Suraj Nair",
"Ashwin Balakrishna",
"Percy Liang",
"Thomas Kollar",
"Dorsa Sadigh"
]
| Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning; adoption that has fueled a wealth of new models such as LLaVa, InstructBLIP, and PaLI-3. Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored, making it challenging to understand what factors account for model performance - a challenge further complicated by the lack of objective, consistent evaluations. To address these gaps, we first compile a suite of standardized evaluations spanning visual question answering, object localization from language, and targeted challenge sets that probe properties such as hallucination; evaluations that provide calibrated, fine-grained insight into a VLM's capabilities. Second, we rigorously investigate VLMs along key design axes, including pretrained visual representations and quantifying the tradeoffs of using base vs. instruct-tuned language models, amongst others. We couple our analysis with three resource contributions: (1) a unified framework for evaluating VLMs, (2) optimized, flexible code for VLM training, and (3) checkpoints for all models, including a family of VLMs at the 7-13B scale that strictly outperform InstructBLIP and LLaVa v1.5, the state-of-the-art in open-source VLMs. |
|
2024-02-13T00:00:00 | 2402.07827 | Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model | [
"Ahmet Üstün",
"Viraat Aryabumi",
"Zheng-Xin Yong",
"Wei-Yin Ko",
"Daniel D'souza",
"Gbemileke Onilude",
"Neel Bhandari",
"Shivalika Singh",
"Hui-Lee Ooi",
"Amr Kayid",
"Freddie Vargus",
"Phil Blunsom",
"Shayne Longpre",
"Niklas Muennighoff",
"Marzieh Fadaee",
"Julia Kreutzer",
"Sara Hooker"
]
| Recent breakthroughs in large language models (LLMs) have centered around a handful of data-rich languages. What does it take to broaden access to breakthroughs beyond first-class citizen languages? Our work introduces Aya, a massively multilingual generative language model that follows instructions in 101 languages of which over 50% are considered lower-resourced. Aya outperforms mT0 and BLOOMZ on the majority of tasks while covering double the number of languages. We introduce extensive new evaluation suites that broaden the state-of-the-art for multilingual eval across 99 languages -- including discriminative and generative tasks, human evaluation, and simulated win rates that cover both held-out tasks and in-distribution performance. Furthermore, we conduct detailed investigations on the optimal finetuning mixture composition, data pruning, as well as the toxicity, bias, and safety of our models. We open-source our instruction datasets and our model at https://hf.co/CohereForAI/aya-101 |
|
2024-02-13T00:00:00 | 2402.07383 | Making Flow-Matching-Based Zero-Shot Text-to-Speech Laugh as You Like | [
"Naoyuki Kanda",
"Xiaofei Wang",
"Sefik Emre Eskimez",
"Manthan Thakker",
"Hemin Yang",
"Zirun Zhu",
"Min Tang",
"Canrun Li",
"Steven Tsai",
"Zhen Xiao",
"Yufei Xia",
"Jinzhu Li",
"Yanqing Liu",
"Sheng Zhao",
"Michael Zeng"
]
| Laughter is one of the most expressive and natural aspects of human speech, conveying emotions, social cues, and humor. However, most text-to-speech (TTS) systems lack the ability to produce realistic and appropriate laughter sounds, limiting their applications and user experience. While there have been prior works to generate natural laughter, they fell short in terms of controlling the timing and variety of the laughter to be generated. In this work, we propose ELaTE, a zero-shot TTS that can generate natural laughing speech of any speaker based on a short audio prompt with precise control of laughter timing and expression. Specifically, ELaTE works on the audio prompt to mimic the voice characteristic, the text prompt to indicate the contents of the generated speech, and the input to control the laughter expression, which can be either the start and end times of laughter, or the additional audio prompt that contains laughter to be mimicked. We develop our model based on the foundation of conditional flow-matching-based zero-shot TTS, and fine-tune it with frame-level representation from a laughter detector as additional conditioning. With a simple scheme to mix small-scale laughter-conditioned data with large-scale pre-training data, we demonstrate that a pre-trained zero-shot TTS model can be readily fine-tuned to generate natural laughter with precise controllability, without losing any quality of the pre-trained zero-shot TTS model. Through the evaluations, we show that ELaTE can generate laughing speech with significantly higher quality and controllability compared to conventional models. See https://aka.ms/elate/ for demo samples. |
|
2024-02-13T00:00:00 | 2402.07319 | ODIN: Disentangled Reward Mitigates Hacking in RLHF | [
"Lichang Chen",
"Chen Zhu",
"Davit Soselia",
"Jiuhai Chen",
"Tianyi Zhou",
"Tom Goldstein",
"Heng Huang",
"Mohammad Shoeybi",
"Bryan Catanzaro"
]
| In this work, we study the issue of reward hacking on the response length, a challenge emerging in Reinforcement Learning from Human Feedback (RLHF) on LLMs. A well-formatted, verbose but less helpful response from the LLMs can often deceive LLMs or even human evaluators to achieve high scores. The same issue also holds for some reward models in RL. To address the challenges in both training and evaluation, we establish a more reliable evaluation protocol for comparing different training configurations, which inspects the trade-off between LLM evaluation score and response length obtained by varying training hyperparameters. Based on this evaluation, we conduct large-scale studies, where the results shed insights into the efficacy of hyperparameters and tricks used in RL on mitigating length bias. We further propose to improve the reward model by jointly training two linear heads on shared feature representations to predict the rewards, one trained to correlate with length, and the other trained to decorrelate with length and therefore focus more on the actual content. We then discard the length head in RL to prevent reward hacking on length. Experiments demonstrate that our approach almost eliminates the reward correlation with length, and improves the obtained policy by a significant margin. |
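A minimal sketch of the two-head reward model idea summarized above: two linear heads on a shared representation, a simple correlation-based surrogate term that pushes one head to track response length and the other to ignore it, and only the length-agnostic head used downstream. The backbone, loss form, and names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TwoHeadRewardModel(nn.Module):
    def __init__(self, backbone_dim=768):
        super().__init__()
        self.backbone = nn.Linear(1024, backbone_dim)  # stand-in for an LLM feature extractor
        self.quality_head = nn.Linear(backbone_dim, 1)  # kept for RL
        self.length_head = nn.Linear(backbone_dim, 1)   # absorbs length bias, discarded in RL

    def forward(self, features):
        h = torch.tanh(self.backbone(features))
        return self.quality_head(h).squeeze(-1), self.length_head(h).squeeze(-1)

def disentangling_penalty(quality_r, length_r, lengths):
    # Push the quality head to be uncorrelated with length and the length head to track it.
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a * b).mean() / (a.std() * b.std() + 1e-8)
    return corr(quality_r, lengths).abs() - corr(length_r, lengths)

model = TwoHeadRewardModel()
quality_r, length_r = model(torch.randn(16, 1024))
penalty = disentangling_penalty(quality_r, length_r, torch.randint(10, 500, (16,)).float())
```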
|
2024-02-14T00:00:00 | 2402.08609 | Mixtures of Experts Unlock Parameter Scaling for Deep RL | [
"Johan Obando-Ceron",
"Ghada Sokar",
"Timon Willi",
"Clare Lyle",
"Jesse Farebrother",
"Jakob Foerster",
"Gintare Karolina Dziugaite",
"Doina Precup",
"Pablo Samuel Castro"
]
| The recent rapid progress in (self) supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally to its size. Analogous scaling laws remain elusive for reinforcement learning domains, however, where increasing the parameter count of a model often hurts its final performance. In this paper, we demonstrate that incorporating Mixture-of-Expert (MoE) modules, and in particular Soft MoEs (Puigcerver et al., 2023), into value-based networks results in more parameter-scalable models, evidenced by substantial performance increases across a variety of training regimes and model sizes. This work thus provides strong empirical evidence towards developing scaling laws for reinforcement learning. |
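A compact Soft MoE layer in the style of Puigcerver et al. (2023), which the abstract reports inserting into value-based networks in place of a dense layer. This simplified version omits normalization and other details of the original and is meant only to show the soft dispatch/combine mechanism.

```python
import torch
import torch.nn as nn

class SoftMoELayer(nn.Module):
    """Tokens are softly mixed into per-expert slots, processed by small MLP experts,
    then softly mixed back per token."""
    def __init__(self, dim, n_experts=4, n_slots=4):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(dim, n_experts * n_slots) * dim ** -0.5)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
             for _ in range(n_experts)])
        self.n_experts, self.n_slots = n_experts, n_slots

    def forward(self, x):                      # x: (batch, tokens, dim)
        logits = x @ self.phi                  # (B, T, E*S)
        dispatch = logits.softmax(dim=1)       # how much each token feeds each slot
        combine = logits.softmax(dim=-1)       # how much each slot feeds each token
        slots = torch.einsum("btd,bts->bsd", x, dispatch)
        slots = slots.view(x.size(0), self.n_experts, self.n_slots, -1)
        outs = torch.stack([e(slots[:, i]) for i, e in enumerate(self.experts)], dim=1)
        outs = outs.reshape(x.size(0), self.n_experts * self.n_slots, -1)
        return torch.einsum("bsd,bts->btd", outs, combine)

layer = SoftMoELayer(dim=64)
y = layer(torch.randn(2, 10, 64))   # drop-in for a dense torso layer in a value network
```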
|
2024-02-14T00:00:00 | 2402.08644 | Tandem Transformers for Inference Efficient LLMs | [
"Aishwarya P S",
"Pranav Ajit Nair",
"Yashas Samaga",
"Toby Boyd",
"Sanjiv Kumar",
"Prateek Jain",
"Praneeth Netrapalli"
]
| The autoregressive nature of conventional large language models (LLMs) inherently limits inference speed, as tokens are generated sequentially. While speculative and parallel decoding techniques attempt to mitigate this, they face limitations: either relying on less accurate smaller models for generation or failing to fully leverage the base LLM's representations. We introduce a novel architecture, Tandem transformers, to address these issues. This architecture uniquely combines (1) a small autoregressive model and (2) a large model operating in block mode (processing multiple tokens simultaneously). The small model's predictive accuracy is substantially enhanced by granting it attention to the large model's richer representations. On the PaLM2 pretraining dataset, a tandem of PaLM2-Bison and PaLM2-Gecko demonstrates a 3.3% improvement in next-token prediction accuracy over a standalone PaLM2-Gecko, offering a 1.16x speedup compared to a PaLM2-Otter model with comparable downstream performance. We further incorporate the tandem model within the speculative decoding (SPEED) framework where the large model validates tokens from the small model. This ensures that the Tandem of PaLM2-Bison and PaLM2-Gecko achieves substantial speedup (around 1.14x faster than using vanilla PaLM2-Gecko in SPEED) while maintaining identical downstream task accuracy. |
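A deliberately simplified sketch of the tandem idea: a large model encodes the committed prefix once per block, and a small autoregressive decoder cross-attends to a projection of those representations while scoring (or generating) the next block of tokens. Module sizes, the projection layer, and the interface are all invented for illustration; the paper's PaLM2-based setup differs.

```python
import torch
import torch.nn as nn

class TandemSketch(nn.Module):
    def __init__(self, d_small=256, d_large=1024, vocab=32000, block=4):
        super().__init__()
        self.block = block
        self.large = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_large, 8, batch_first=True), 2)
        self.project = nn.Linear(d_large, d_small)   # map large reps into the small model's space
        self.small = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_small, 4, batch_first=True), 2)
        self.embed_large = nn.Embedding(vocab, d_large)
        self.embed_small = nn.Embedding(vocab, d_small)
        self.lm_head = nn.Linear(d_small, vocab)

    def block_logits(self, prefix_ids, draft_ids):
        # Large model runs once per block over the committed prefix; the small decoder
        # cross-attends to those representations while processing the draft block.
        memory = self.project(self.large(self.embed_large(prefix_ids)))
        h = self.small(self.embed_small(draft_ids), memory)
        return self.lm_head(h)

model = TandemSketch()
logits = model.block_logits(torch.randint(0, 32000, (1, 16)), torch.randint(0, 32000, (1, 4)))
```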
|
2024-02-14T00:00:00 | 2402.08093 | BASE TTS: Lessons from building a billion-parameter Text-to-Speech model on 100K hours of data | [
"Mateusz Łajszczak",
"Guillermo Cámbara",
"Yang Li",
"Fatih Beyhan",
"Arent van Korlaar",
"Fan Yang",
"Arnaud Joly",
"Álvaro Martín-Cortinas",
"Ammar Abbas",
"Adam Michalski",
"Alexis Moinet",
"Sri Karlapati",
"Ewa Muszyńska",
"Haohan Guo",
"Bartosz Putrycz",
"Soledad López Gambino",
"Kayeon Yoo",
"Elena Sokolova",
"Thomas Drugman"
]
| We introduce a text-to-speech (TTS) model called BASE TTS, which stands for Big Adaptive Streamable TTS with Emergent abilities. BASE TTS is the largest TTS model to-date, trained on 100K hours of public domain speech data, achieving a new state-of-the-art in speech naturalness. It deploys a 1-billion-parameter autoregressive Transformer that converts raw texts into discrete codes ("speechcodes") followed by a convolution-based decoder which converts these speechcodes into waveforms in an incremental, streamable manner. Further, our speechcodes are built using a novel speech tokenization technique that features speaker ID disentanglement and compression with byte-pair encoding. Echoing the widely-reported "emergent abilities" of large language models when trained on increasing volume of data, we show that BASE TTS variants built with 10K+ hours and 500M+ parameters begin to demonstrate natural prosody on textually complex sentences. We design and share a specialized dataset to measure these emergent abilities for text-to-speech. We showcase state-of-the-art naturalness of BASE TTS by evaluating against baselines that include publicly available large-scale text-to-speech systems: YourTTS, Bark and TortoiseTTS. Audio samples generated by the model can be heard at https://amazon-ltts-paper.com/. |
|
2024-02-14T00:00:00 | 2402.08622 | NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs | [
"Michael Fischer",
"Zhengqin Li",
"Thu Nguyen-Phuoc",
"Aljaz Bozic",
"Zhao Dong",
"Carl Marshall",
"Tobias Ritschel"
]
| A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene. We here ask the question whether we can transfer the appearance from a source NeRF onto a target 3D geometry in a semantically meaningful way, such that the resulting new NeRF retains the target geometry but has an appearance that is an analogy to the source NeRF. To this end, we generalize classic image analogies from 2D images to NeRFs. We leverage correspondence transfer along semantic affinity that is driven by semantic features from large, pre-trained 2D image models to achieve multi-view consistent appearance transfer. Our method allows exploring the mix-and-match product space of 3D geometry and appearance. We show that our method outperforms traditional stylization-based methods and that a large majority of users prefer our method over several typical baselines. |