Dataset columns: date (timestamp[ns], ranging 2023-05-05 00:00:00 to 2025-07-14 00:00:00), arxiv_id (string, 10 characters), title (string, 8 to 202 characters), authors (list, 1 to 3.3k entries), github (string, 0 to 116 characters), abstract (string, 165 to 1.92k characters).
2024-03-22T00:00:00
2403.14468
AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks
[ "Max Ku", "Cong Wei", "Weiming Ren", "Huan Yang", "Wenhu Chen" ]
https://github.com/TIGER-AI-Lab/AnyV2V
Video-to-video editing involves editing a source video along with additional control (such as text prompts, subjects, or styles) to generate a new video that aligns with the source video and the provided control. Traditional methods have been constrained to certain editing types, limiting their ability to meet the wide range of user demands. In this paper, we introduce AnyV2V, a novel training-free framework designed to simplify video editing into two primary steps: (1) employing an off-the-shelf image editing model (e.g., InstructPix2Pix, InstantID) to modify the first frame, (2) utilizing an existing image-to-video generation model (e.g., I2VGen-XL) for DDIM inversion and feature injection. In the first stage, AnyV2V can plug in any existing image editing tools to support an extensive array of video editing tasks. Beyond traditional prompt-based editing methods, AnyV2V can also support novel video editing tasks, including reference-based style transfer, subject-driven editing, and identity manipulation, which were unattainable by previous methods. In the second stage, AnyV2V can plug in any existing image-to-video models to perform DDIM inversion and intermediate feature injection to maintain the appearance and motion consistency with the source video. On prompt-based editing, we show that AnyV2V outperforms the previous best approach by 35% on prompt alignment and by 25% on human preference. On the three novel tasks, we show that AnyV2V also achieves a high success rate. We believe AnyV2V will continue to thrive due to its ability to seamlessly integrate fast-evolving image editing methods. Such compatibility helps AnyV2V increase its versatility to cater to diverse user demands.
2024-03-22T00:00:00
2403.14613
DreamReward: Text-to-3D Generation with Human Preference
[ "Junliang Ye", "Fangfu Liu", "Qixiu Li", "Zhengyi Wang", "Yikai Wang", "Xinzhou Wang", "Yueqi Duan", "Jun Zhu" ]
https://github.com/liuff19/DreamReward
3D content creation from text prompts has shown remarkable success recently. However, current text-to-3D methods often generate 3D results that do not align well with human preferences. In this paper, we present a comprehensive framework, coined DreamReward, to learn and improve text-to-3D models from human preference feedback. To begin with, we collect 25k expert comparisons based on a systematic annotation pipeline including rating and ranking. Then, we build Reward3D -- the first general-purpose text-to-3D human preference reward model to effectively encode human preferences. Building upon the 3D reward model, we finally perform theoretical analysis and present the Reward3D Feedback Learning (DreamFL), a direct tuning algorithm to optimize the multi-view diffusion models with a redefined scorer. Grounded by theoretical proof and extensive experiment comparisons, our DreamReward successfully generates high-fidelity and 3D consistent results with significant boosts in prompt alignment with human intention. Our results demonstrate the great potential for learning from human feedback to improve text-to-3D models.
2024-03-22T00:00:00
2403.14611
Explorative Inbetweening of Time and Space
[ "Haiwen Feng", "Zheng Ding", "Zhihao Xia", "Simon Niklaus", "Victoria Abrevaya", "Michael J. Black", "Xuaner Zhang" ]
We introduce bounded generation as a generalized task to control video generation to synthesize arbitrary camera and subject motion based only on a given start and end frame. Our objective is to fully leverage the inherent generalization capability of an image-to-video model without additional training or fine-tuning of the original model. This is achieved through the proposed new sampling strategy, which we call Time Reversal Fusion, that fuses the temporally forward and backward denoising paths conditioned on the start and end frame, respectively. The fused path results in a video that smoothly connects the two frames, generating inbetweening of faithful subject motion, novel views of static scenes, and seamless video looping when the two bounding frames are identical. We curate a diverse evaluation dataset of image pairs and compare against the closest existing methods. We find that Time Reversal Fusion outperforms related work on all subtasks, exhibiting the ability to generate complex motions and 3D-consistent views guided by bounded frames. See project page at https://time-reversal.github.io.
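The sketch below is a toy NumPy illustration of the fusion idea behind Time Reversal Fusion: one denoising path is conditioned on the start frame, a second on the end frame (run on the time-reversed clip, then flipped back), and the two predictions are blended per frame. The linear per-frame weighting and the function names are my illustrative assumptions, not the paper's exact fusion rule.

```python
# Toy illustration of fusing forward and backward denoising predictions over F frames.
import numpy as np

def fuse_paths(forward_pred, backward_pred):
    """forward_pred, backward_pred: arrays of shape (F, H, W, C) of denoised frames."""
    F = forward_pred.shape[0]
    # Trust the start-frame-conditioned path early, the end-frame-conditioned path late.
    w = np.linspace(1.0, 0.0, F).reshape(F, 1, 1, 1)
    return w * forward_pred + (1.0 - w) * backward_pred

# Random stand-ins for the two denoised trajectories of a 16-frame clip.
rng = np.random.default_rng(0)
fwd = rng.normal(size=(16, 64, 64, 3))
bwd = rng.normal(size=(16, 64, 64, 3))
print(fuse_paths(fwd, bwd).shape)  # (16, 64, 64, 3)
```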
2024-03-22T00:00:00
2403.14186
StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN
[ "Jongwoo Choi", "Kwanggyoon Seo", "Amirsaman Ashtari", "Junyong Noh" ]
https://github.com/jeolpyeoni/StyleCineGAN
We propose a method that can generate cinemagraphs automatically from a still landscape image using a pre-trained StyleGAN. Inspired by the success of recent unconditional video generation, we leverage a powerful pre-trained image generator to synthesize high-quality cinemagraphs. Unlike previous approaches that mainly utilize the latent space of a pre-trained StyleGAN, our approach utilizes its deep feature space for both GAN inversion and cinemagraph generation. Specifically, we propose multi-scale deep feature warping (MSDFW), which warps the intermediate features of a pre-trained StyleGAN at different resolutions. By using MSDFW, the generated cinemagraphs are of high resolution and exhibit plausible looping animation. We demonstrate the superiority of our method through user studies and quantitative comparisons with state-of-the-art cinemagraph generation methods and a video generation method that uses a pre-trained StyleGAN.
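As a rough sketch of what multi-scale deep feature warping could look like in practice, the snippet below warps feature maps at several resolutions with a single flow field resized (and rescaled) to each scale, using bilinear sampling. The tensor shapes, the normalization convention, and the random stand-ins for StyleGAN features and the motion field are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """Warp a feature map (B, C, H, W) by a flow field (B, 2, H, W) given in pixels."""
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W) pixel grid
    coords = base + flow                                       # displaced sampling positions
    # Normalize to [-1, 1] as expected by grid_sample (x first, then y).
    coords[:, 0] = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = coords.permute(0, 2, 3, 1)                          # (B, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)

def multi_scale_warp(features, flow):
    """Warp features at several resolutions with the flow resized to each scale."""
    warped = []
    for feat in features:
        _, _, H, W = feat.shape
        f = F.interpolate(flow, size=(H, W), mode="bilinear", align_corners=True)
        scale = H / flow.shape[2]                              # rescale displacement magnitudes
        warped.append(warp_features(feat, f * scale))
    return warped

feats = [torch.randn(1, 512, r, r) for r in (32, 64, 128)]     # stand-ins for intermediate features
flow = torch.randn(1, 2, 128, 128)                             # stand-in motion field (pixels)
print([w.shape for w in multi_scale_warp(feats, flow)])
```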
2024-03-22T00:00:00
2403.14554
Gaussian Frosting: Editable Complex Radiance Fields with Real-Time Rendering
[ "Antoine Guédon", "Vincent Lepetit" ]
https://github.com/Anttwo/Frosting
We propose Gaussian Frosting, a novel mesh-based representation for high-quality rendering and editing of complex 3D effects in real-time. Our approach builds on the recent 3D Gaussian Splatting framework, which optimizes a set of 3D Gaussians to approximate a radiance field from images. We propose first extracting a base mesh from Gaussians during optimization, then building and refining an adaptive layer of Gaussians with a variable thickness around the mesh to better capture the fine details and volumetric effects near the surface, such as hair or grass. We call this layer Gaussian Frosting, as it resembles a coating of frosting on a cake. The fuzzier the material, the thicker the frosting. We also introduce a parameterization of the Gaussians to enforce them to stay inside the frosting layer and automatically adjust their parameters when deforming, rescaling, editing or animating the mesh. Our representation allows for efficient rendering using Gaussian splatting, as well as editing and animation by modifying the base mesh. We demonstrate the effectiveness of our method on various synthetic and real scenes, and show that it outperforms existing surface-based approaches. We will release our code and a web-based viewer as additional contributions. Our project page is the following: https://anttwo.github.io/frosting/
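A minimal sketch of the constraint idea behind the frosting layer is shown below: each Gaussian center is tied to a base-mesh point and a signed offset along the surface normal, with an unconstrained parameter squashed so the center always stays within the local layer thickness. The sigmoid squashing and variable names are my illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def frosting_centers(surface_pts, normals, inner, outer, t_param):
    """
    surface_pts, normals: (N, 3) base-mesh points and unit normals.
    inner, outer: (N,) inner/outer thickness of the frosting layer at each point.
    t_param: (N,) unconstrained parameters; squashed so each center stays inside the layer.
    """
    t = sigmoid(t_param)                              # in (0, 1)
    offset = -inner + t * (inner + outer)             # in (-inner, +outer)
    return surface_pts + offset[:, None] * normals

rng = np.random.default_rng(1)
P = rng.normal(size=(1000, 3))
N = rng.normal(size=(1000, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)
centers = frosting_centers(P, N, inner=np.full(1000, 0.01), outer=np.full(1000, 0.05),
                           t_param=rng.normal(size=1000))
print(centers.shape)  # (1000, 3)
```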
2024-03-22T00:00:00
2403.14624
MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
[ "Renrui Zhang", "Dongzhi Jiang", "Yichi Zhang", "Haokun Lin", "Ziyu Guo", "Pengshuo Qiu", "Aojun Zhou", "Pan Lu", "Kai-Wei Chang", "Peng Gao", "Hongsheng Li" ]
https://github.com/ZrrSkywalker/MathVerse
The remarkable progress of Multi-modal Large Language Models (MLLMs) has garnered unparalleled attention, due to their superior performance in visual contexts. However, their capabilities in visual math problem-solving remain insufficiently evaluated and understood. We observe that current benchmarks incorporate excessive visual content within textual questions, which potentially assists MLLMs in deducing answers without truly interpreting the input diagrams. To this end, we introduce MathVerse, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into six distinct versions, each offering varying degrees of information content in multi-modality, contributing to 15K test samples in total. This approach allows MathVerse to comprehensively assess whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning. In addition, we propose a Chain-of-Thought (CoT) evaluation strategy for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps, and then score each step with detailed error analysis, which can reveal the intermediate CoT reasoning quality of MLLMs. We hope the MathVerse benchmark may provide unique insights to guide the future development of MLLMs. Project page: https://mathverse-cuhk.github.io
2024-03-22T00:00:00
2403.14467
Recourse for reclamation: Chatting with generative language models
[ "Jennifer Chien", "Kevin R. McKee", "Jackie Kay", "William Isaac" ]
Researchers and developers increasingly rely on toxicity scoring to moderate generative language model outputs, in settings such as customer service, information retrieval, and content generation. However, toxicity scoring may render pertinent information inaccessible, rigidify or "value-lock" cultural norms, and prevent language reclamation processes, particularly for marginalized people. In this work, we extend the concept of algorithmic recourse to generative language models: we provide users a novel mechanism to achieve their desired prediction by dynamically setting thresholds for toxicity filtering. Users thereby exercise increased agency relative to interactions with the baseline system. A pilot study (n = 30) supports the potential of our proposed recourse mechanism, indicating improvements in usability compared to fixed-threshold toxicity-filtering of model outputs. Future work should explore the intersection of toxicity scoring, model controllability, user agency, and language reclamation processes -- particularly with regard to the bias that many communities encounter when interacting with generative language models.
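A minimal sketch of the recourse mechanism, under my own assumptions: instead of filtering model outputs at a fixed global threshold, the user sets (and may raise) their own toxicity threshold to reveal a withheld response. The keyword-based scorer is a toy stand-in; a deployed system would call a real toxicity classifier.

```python
# Minimal sketch of recourse via a user-adjustable toxicity threshold.
def score_toxicity(text: str) -> float:
    # Toy keyword scorer for illustration only; a real system would call a toxicity model.
    flagged = {"hate", "stupid"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in flagged for w in words) / max(len(words), 1)

def respond(model_output: str, user_threshold: float = 0.3) -> str:
    score = score_toxicity(model_output)
    if score <= user_threshold:
        return model_output
    # Instead of silently dropping the output, offer recourse: the user may raise
    # their threshold for this conversation and view the withheld response.
    return (f"[withheld: toxicity score {score:.2f} exceeds your threshold "
            f"{user_threshold:.2f}; raise the threshold to view this response]")

print(respond("that idea is stupid", user_threshold=0.2))  # withheld, recourse offered
print(respond("that idea is stupid", user_threshold=0.5))  # shown after the user raises the bar
```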
2024-03-22T00:00:00
2403.14599
MyVLM: Personalizing VLMs for User-Specific Queries
[ "Yuval Alaluf", "Elad Richardson", "Sergey Tulyakov", "Kfir Aberman", "Daniel Cohen-Or" ]
https://github.com/snap-research/MyVLM
Recent large-scale vision-language models (VLMs) have demonstrated remarkable capabilities in understanding and generating textual descriptions for visual content. However, these models lack an understanding of user-specific concepts. In this work, we take a first step toward the personalization of VLMs, enabling them to learn and reason over user-provided concepts. For example, we explore whether these models can learn to recognize you in an image and communicate what you are doing, tailoring the model to reflect your personal experiences and relationships. To effectively recognize a variety of user-specific concepts, we augment the VLM with external concept heads that function as toggles for the model, enabling the VLM to identify the presence of specific target concepts in a given image. Having recognized the concept, we learn a new concept embedding in the intermediate feature space of the VLM. This embedding is tasked with guiding the language model to naturally integrate the target concept in its generated response. We apply our technique to BLIP-2 and LLaVA for personalized image captioning and further show its applicability for personalized visual question-answering. Our experiments demonstrate our ability to generalize to unseen images of learned concepts while preserving the model behavior on unrelated inputs.
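The snippet below sketches the "concept head as a toggle" idea: a small classifier over frozen image features decides whether the personalized concept is present, and only then is a learned concept embedding appended to the VLM's visual tokens to steer generation. The dimensions, the sigmoid probe, and injection by concatenation are illustrative assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class ConceptHead(nn.Module):
    """Binary probe over frozen image features: is the user's concept present?"""
    def __init__(self, feat_dim=768):
        super().__init__()
        self.probe = nn.Linear(feat_dim, 1)

    def forward(self, img_feat):
        return torch.sigmoid(self.probe(img_feat))          # probability the concept appears

class ConceptInjector(nn.Module):
    """If the head fires, append a learned concept embedding to the VLM's visual tokens."""
    def __init__(self, feat_dim=768, hidden_dim=4096, threshold=0.5):
        super().__init__()
        self.head = ConceptHead(feat_dim)
        self.concept_embedding = nn.Parameter(torch.randn(1, hidden_dim) * 0.02)
        self.threshold = threshold

    def forward(self, img_feat, visual_tokens):
        p = self.head(img_feat)
        if p.item() > self.threshold:
            extra = self.concept_embedding.unsqueeze(0).expand(visual_tokens.size(0), -1, -1)
            return torch.cat([visual_tokens, extra], dim=1)  # steer the LM toward the concept
        return visual_tokens                                 # unrelated inputs stay untouched

injector = ConceptInjector()
img_feat = torch.randn(1, 768)                               # stand-in frozen encoder feature
visual_tokens = torch.randn(1, 32, 4096)                     # stand-in VLM visual tokens
print(injector(img_feat, visual_tokens).shape)
```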
2024-03-22T00:00:00
2403.14520
Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference
[ "Han Zhao", "Min Zhang", "Wei Zhao", "Pengxiang Ding", "Siteng Huang", "Donglin Wang" ]
https://github.com/h-zhao1997/cobra
In recent years, the application of multimodal large language models (MLLM) in various fields has achieved remarkable success. However, as the foundation model for many downstream tasks, current MLLMs are composed of the well-known Transformer network, which has a less efficient quadratic computation complexity. To improve the efficiency of such basic models, we propose Cobra, a linear computational complexity MLLM. Specifically, Cobra integrates the efficient Mamba language model into the visual modality. Moreover, we explore and study various modal fusion schemes to create an effective multi-modal Mamba. Extensive experiments demonstrate that (1) Cobra achieves extremely competitive performance with current computationally efficient state-of-the-art methods, e.g., LLaVA-Phi, TinyLLaVA, and MobileVLM v2, and has faster speed due to Cobra's linear sequential modeling. (2) Interestingly, the results of closed-set challenging prediction benchmarks show that Cobra performs well in overcoming visual illusions and spatial relationship judgments. (3) Notably, Cobra even achieves comparable performance to LLaVA with about 43% of the number of parameters. We will make all codes of Cobra open-source and hope that the proposed method can facilitate future research on complexity problems in MLLM. Our project page is available at: https://sites.google.com/view/cobravlm.
2024-03-22T00:00:00
2403.14621
GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
[ "Yinghao Xu", "Zifan Shi", "Wang Yifan", "Hansheng Chen", "Ceyuan Yang", "Sida Peng", "Yujun Shen", "Gordon Wetzstein" ]
https://github.com/justimyhxu/grm
We introduce GRM, a large-scale reconstructor capable of recovering a 3D asset from sparse-view images in around 0.1s. GRM is a feed-forward transformer-based model that efficiently incorporates multi-view information to translate the input pixels into pixel-aligned Gaussians, which are unprojected to create a set of densely distributed 3D Gaussians representing a scene. Together, our transformer architecture and the use of 3D Gaussians unlock a scalable and efficient reconstruction framework. Extensive experimental results demonstrate the superiority of our method over alternatives regarding both reconstruction quality and efficiency. We also showcase the potential of GRM in generative tasks, i.e., text-to-3D and image-to-3D, by integrating it with existing multi-view diffusion models. Our project website is at: https://justimyhxu.github.io/projects/grm/.
2024-03-22T00:00:00
2403.14602
ReNoise: Real Image Inversion Through Iterative Noising
[ "Daniel Garibi", "Or Patashnik", "Andrey Voynov", "Hadar Averbuch-Elor", "Daniel Cohen-Or" ]
https://github.com/garibida/ReNoise-Inversion
Recent advancements in text-guided diffusion models have unlocked powerful image manipulation capabilities. However, applying these methods to real images necessitates the inversion of the images into the domain of the pretrained diffusion model. Achieving faithful inversion remains a challenge, particularly for more recent models trained to generate images with a small number of denoising steps. In this work, we introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations. Building on reversing the diffusion sampling process, our method employs an iterative renoising mechanism at each inversion sampling step. This mechanism refines the approximation of a predicted point along the forward diffusion trajectory, by iteratively applying the pretrained diffusion model, and averaging these predictions. We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models. Through comprehensive evaluations and comparisons, we show its effectiveness in terms of both accuracy and speed. Furthermore, we confirm that our method preserves editability by demonstrating text-driven image editing on real images.
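Below is a toy 1-D illustration of the renoising idea: at each inversion step, the noise estimate at the (yet unknown) next point is refined by re-applying the denoiser to the current estimate and averaging the predictions before recomputing the step. The toy noise predictor and alpha schedule are stand-ins, not a real diffusion model.

```python
import numpy as np

def eps_theta(x, t):
    # Stand-in for a trained noise predictor, smooth in (x, t).
    return 0.1 * np.sin(x) + 0.05 * t

def ddim_invert_step(x_t, t, t_next, alphas, renoise_iters=3):
    """One inversion step x_t -> x_{t_next}, refining the noise estimate by averaging."""
    a_t, a_next = alphas[t], alphas[t_next]
    eps = eps_theta(x_t, t)                     # initial estimate at the current point
    preds = []
    x_next = None
    for _ in range(renoise_iters):
        # DDIM inversion update using the current noise estimate.
        x0 = (x_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
        x_next = np.sqrt(a_next) * x0 + np.sqrt(1 - a_next) * eps
        # Re-evaluate the noise at the new point and average the estimates.
        preds.append(eps_theta(x_next, t_next))
        eps = np.mean(preds, axis=0)
    return x_next

alphas = np.linspace(0.999, 0.01, 50)           # toy cumulative-alpha schedule (clean -> noisy)
x = np.array(0.5)
for t in range(0, 40):
    x = ddim_invert_step(x, t, t + 1, alphas)
print(float(x))
```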
2024-03-25T00:00:00
2403.15377
InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding
[ "Yi Wang", "Kunchang Li", "Xinhao Li", "Jiashuo Yu", "Yinan He", "Guo Chen", "Baoqi Pei", "Rongkun Zheng", "Jilan Xu", "Zun Wang", "Yansong Shi", "Tianxiang Jiang", "Songze Li", "Hongjie Zhang", "Yifei Huang", "Yu Qiao", "Yali Wang", "Limin Wang" ]
https://github.com/OpenGVLab/InternVideo2
We introduce InternVideo2, a new video foundation model (ViFM) that achieves the state-of-the-art performance in action recognition, video-text tasks, and video-centric dialogue. Our approach employs a progressive training paradigm that unifies the different self- or weakly-supervised learning frameworks of masked video token reconstruction, cross-modal contrastive learning, and next token prediction. Different training stages would guide our model to capture different levels of structure and semantic information through different pretext tasks. At the data level, we prioritize the spatiotemporal consistency by semantically segmenting videos and generating video-audio-speech captions. This improves the alignment between video and text. We scale both data and model size for our InternVideo2. Through extensive experiments, we validate our designs and demonstrate the state-of-the-art performance on over 60 video and audio tasks. Notably, our model outperforms others on various video-related captioning, dialogue, and long video understanding benchmarks, highlighting its ability to reason and comprehend long temporal contexts. Code and models are available at https://github.com/OpenGVLab/InternVideo2/.
2024-03-25T00:00:00
2403.15360
SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series
[ "Badri N. Patro", "Vijay S. Agneeswaran" ]
https://github.com/badripatro/Simba
Transformers have widely adopted attention networks for sequence mixing and MLPs for channel mixing, playing a pivotal role in achieving breakthroughs across domains. However, recent literature highlights issues with attention networks, including low inductive bias and quadratic complexity concerning input sequence length. State Space Models (SSMs) like S4 and others (Hippo, Global Convolutions, liquid S4, LRU, Mega, and Mamba) have emerged to address the above issues and help handle longer sequence lengths. Mamba, while being the state-of-the-art SSM, has a stability issue when scaled to large networks for computer vision datasets. We propose SiMBA, a new architecture that introduces Einstein FFT (EinFFT) for channel modeling by specific eigenvalue computations and uses the Mamba block for sequence modeling. Extensive performance studies across image and time-series benchmarks demonstrate that SiMBA outperforms existing SSMs, bridging the performance gap with state-of-the-art transformers. Notably, SiMBA establishes itself as the new state-of-the-art SSM on ImageNet, on transfer learning benchmarks such as Stanford Car and Flower, on task learning benchmarks, and on seven time-series benchmark datasets. The project page is available at https://github.com/badripatro/Simba.
2024-03-25T00:00:00
2403.15371
Can large language models explore in-context?
[ "Akshay Krishnamurthy", "Keegan Harris", "Dylan J. Foster", "Cyril Zhang", "Aleksandrs Slivkins" ]
We investigate the extent to which contemporary Large Language Models (LLMs) can engage in exploration, a core capability in reinforcement learning and decision making. We focus on native performance of existing LLMs, without training interventions. We deploy LLMs as agents in simple multi-armed bandit environments, specifying the environment description and interaction history entirely in-context, i.e., within the LLM prompt. We experiment with GPT-3.5, GPT-4, and Llama2, using a variety of prompt designs, and find that the models do not robustly engage in exploration without substantial interventions: i) Across all of our experiments, only one configuration resulted in satisfactory exploratory behavior: GPT-4 with chain-of-thought reasoning and an externally summarized interaction history, presented as sufficient statistics; ii) All other configurations did not result in robust exploratory behavior, including those with chain-of-thought reasoning but unsummarized history. Although these findings can be interpreted positively, they suggest that external summarization -- which may not be possible in more complex settings -- is important for obtaining desirable behavior from LLM agents. We conclude that non-trivial algorithmic interventions, such as fine-tuning or dataset curation, may be required to empower LLM-based decision making agents in complex settings.
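The sketch below reconstructs the shape of such an experiment under my own assumptions: a Bernoulli multi-armed bandit whose interaction history is summarized into per-arm sufficient statistics (pull counts and mean rewards) placed in the prompt. The LLM call is replaced by a placeholder random policy, since the point here is the environment and prompt construction rather than any particular model API.

```python
import random

def summarize(history, n_arms):
    """Sufficient statistics per arm: pull count and mean reward."""
    stats = {a: [0, 0.0] for a in range(n_arms)}
    for arm, r in history:
        stats[arm][0] += 1
        stats[arm][1] += r
    lines = []
    for a, (n, s) in stats.items():
        mean = s / n if n else float("nan")
        lines.append(f"arm {a}: pulled {n} times, mean reward {mean:.2f}")
    return "\n".join(lines)

def build_prompt(history, n_arms):
    return ("You are choosing one of {k} slot machines to maximize total reward.\n"
            "Summary of your interactions so far:\n{summary}\n"
            "Reply with the index of the arm to pull next.").format(
                k=n_arms, summary=summarize(history, n_arms))

def placeholder_policy(prompt, n_arms):
    # Stand-in for an in-context LLM decision (e.g., GPT-4 with chain-of-thought).
    return random.randrange(n_arms)

n_arms, probs = 5, [0.2, 0.35, 0.5, 0.65, 0.8]   # Bernoulli bandit arms
history = []
for step in range(100):
    prompt = build_prompt(history, n_arms)
    arm = placeholder_policy(prompt, n_arms)
    reward = 1.0 if random.random() < probs[arm] else 0.0
    history.append((arm, reward))
print(summarize(history, n_arms))
```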
2024-03-25T00:00:00
2403.14773
StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
[ "Roberto Henschel", "Levon Khachatryan", "Daniil Hayrapetyan", "Hayk Poghosyan", "Vahram Tadevosyan", "Zhangyang Wang", "Shant Navasardyan", "Humphrey Shi" ]
https://github.com/Picsart-AI-Research/StreamingT2V
Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, making it easy to create diverse and individual content. However, existing approaches mostly focus on high-quality short video generation (typically 16 or 24 frames), ending up with hard-cuts when naively extended to the case of long video synthesis. To overcome these limitations, we introduce StreamingT2V, an autoregressive approach for long video generation of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are: (i) a short-term memory block called conditional attention module (CAM), which conditions the current generation on the features extracted from the previous chunk via an attentional mechanism, leading to consistent chunk transitions, (ii) a long-term memory block called appearance preservation module, which extracts high-level scene and object features from the first video chunk to prevent the model from forgetting the initial scene, and (iii) a randomized blending approach that enables applying a video enhancer autoregressively to infinitely long videos without inconsistencies between chunks. Experiments show that StreamingT2V generates videos with a high amount of motion. In contrast, all competing image-to-video methods are prone to video stagnation when applied naively in an autoregressive manner. Thus, with StreamingT2V we propose a high-quality, seamless text-to-long-video generator that outperforms competitors in consistency and motion. Our code will be available at: https://github.com/Picsart-AI-Research/StreamingT2V
2024-03-25T00:00:00
2403.15042
LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement
[ "Nicholas Lee", "Thanakul Wattanawong", "Sehoon Kim", "Karttikeya Mangalam", "Sheng Shen", "Gopala Anumanchipali", "Michael W. Mahoney", "Kurt Keutzer", "Amir Gholami" ]
https://github.com/SqueezeAILab/LLM2LLM
Pretrained large language models (LLMs) are currently state-of-the-art for solving the vast majority of natural language processing tasks. While many real-world applications still require fine-tuning to reach satisfactory levels of performance, many of them are in the low-data regime, making fine-tuning challenging. To address this, we propose LLM2LLM, a targeted and iterative data augmentation strategy that uses a teacher LLM to enhance a small seed dataset by augmenting additional data that can be used for fine-tuning on a specific task. LLM2LLM (1) fine-tunes a baseline student LLM on the initial seed data, (2) evaluates and extracts data points that the model gets wrong, and (3) uses a teacher LLM to generate synthetic data based on these incorrect data points, which are then added back into the training data. This approach amplifies the signal from incorrectly predicted data points by the LLM during training and reintegrates them into the dataset to focus on more challenging examples for the LLM. Our results show that LLM2LLM significantly enhances the performance of LLMs in the low-data regime, outperforming both traditional fine-tuning and other data augmentation baselines. LLM2LLM reduces the dependence on labor-intensive data curation and paves the way for more scalable and performant LLM solutions, allowing us to tackle data-constrained domains and tasks. We achieve improvements up to 24.2% on the GSM8K dataset, 32.6% on CaseHOLD, 32.0% on SNIPS, 52.6% on TREC and 39.8% on SST-2 over regular fine-tuning in the low-data regime using a LLaMA2-7B student model.
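A structural sketch of the LLM2LLM loop, under my own assumptions: the three helpers below are stubs standing in for (1) fine-tuning a student model, (2) finding seed examples it gets wrong, and (3) asking a teacher LLM for new variants of those hard examples; only the loop structure mirrors the description above.

```python
import random

def finetune_student(train_set):
    # Placeholder: would fine-tune the student LLM on train_set and return it.
    return {"trained_on": len(train_set)}

def wrong_examples(student, seed_set):
    # Placeholder: would run the student on the seed data and keep its mistakes.
    return [ex for ex in seed_set if random.random() < 0.3]

def teacher_augment(hard_examples, per_example=2):
    # Placeholder: would prompt a teacher LLM to write new questions similar to each mistake.
    return [f"{ex} (variant {i})" for ex in hard_examples for i in range(per_example)]

seed_data = [f"seed question {i}" for i in range(20)]
train_set = list(seed_data)
for it in range(3):
    student = finetune_student(train_set)
    hard = wrong_examples(student, seed_data)      # targeted: only seed points the student misses
    train_set += teacher_augment(hard)             # reintegrate harder synthetic examples
    print(f"iteration {it}: {len(hard)} hard examples, train set size {len(train_set)}")
```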
2024-03-25T00:00:00
2403.15246
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
[ "Orion Weller", "Benjamin Chang", "Sean MacAvaney", "Kyle Lo", "Arman Cohan", "Benjamin Van Durme", "Dawn Lawrie", "Luca Soldaini" ]
https://github.com/orionw/FollowIR
Modern Large Language Models (LLMs) are capable of following long and complex instructions that enable a diverse range of user tasks. However, despite Information Retrieval (IR) models using LLMs as the backbone of their architectures, nearly all of them still only take queries as input, with no instructions. For the handful of recent models that do take instructions, it's unclear how they use them. We introduce our dataset FollowIR, which contains a rigorous instruction evaluation benchmark as well as a training set for helping IR models learn to better follow real-world instructions. FollowIR builds on the long history of the TREC conferences: as TREC provides human annotators with instructions (also known as narratives) to determine document relevance, so should IR models be able to understand and decide relevance based on these detailed instructions. Our evaluation benchmark starts with three deeply judged TREC collections and alters the annotator instructions, re-annotating relevant documents. Through this process, we can measure how well IR models follow instructions via a new pairwise evaluation framework. Our results indicate that existing retrieval models fail to correctly use instructions, using them for basic keywords and struggling to understand long-form information. However, we show that it is possible for IR models to learn to follow complex instructions: our new FollowIR-7B model has significant improvements (over 13%) after fine-tuning on our training set.
2024-03-25T00:00:00
2403.15385
LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis
[ "Kevin Xie", "Jonathan Lorraine", "Tianshi Cao", "Jun Gao", "James Lucas", "Antonio Torralba", "Sanja Fidler", "Xiaohui Zeng" ]
Recent text-to-3D generation approaches produce impressive 3D results but require time-consuming optimization that can take up to an hour per prompt. Amortized methods like ATT3D optimize multiple prompts simultaneously to improve efficiency, enabling fast text-to-3D synthesis. However, they cannot capture high-frequency geometry and texture details and struggle to scale to large prompt sets, so they generalize poorly. We introduce LATTE3D, addressing these limitations to achieve fast, high-quality generation on a significantly larger prompt set. Key to our method is 1) building a scalable architecture and 2) leveraging 3D data during optimization through 3D-aware diffusion priors, shape regularization, and model initialization to achieve robustness to diverse and complex training prompts. LATTE3D amortizes both neural field and textured surface generation to produce highly detailed textured meshes in a single forward pass. LATTE3D generates 3D objects in 400ms, and can be further enhanced with fast test-time optimization.
2024-03-25T00:00:00
2403.15383
ThemeStation: Generating Theme-Aware 3D Assets from Few Exemplars
[ "Zhenwei Wang", "Tengfei Wang", "Gerhard Hancke", "Ziwei Liu", "Rynson W. H. Lau" ]
https://github.com/3DThemeStation/ThemeStation
Real-world applications often require a large gallery of 3D assets that share a consistent theme. While remarkable advances have been made in general 3D content creation from text or image, synthesizing customized 3D assets following the shared theme of input 3D exemplars remains an open and challenging problem. In this work, we present ThemeStation, a novel approach for theme-aware 3D-to-3D generation. ThemeStation synthesizes customized 3D assets based on given few exemplars with two goals: 1) unity for generating 3D assets that thematically align with the given exemplars and 2) diversity for generating 3D assets with a high degree of variations. To this end, we design a two-stage framework that draws a concept image first, followed by a reference-informed 3D modeling stage. We propose a novel dual score distillation (DSD) loss to jointly leverage priors from both the input exemplars and the synthesized concept image. Extensive experiments and user studies confirm that ThemeStation surpasses prior works in producing diverse theme-aware 3D models with impressive quality. ThemeStation also enables various applications such as controllable 3D-to-3D generation.
2024-03-25T00:00:00
2403.14870
VidLA: Video-Language Alignment at Scale
[ "Mamshad Nayeem Rizve", "Fan Fei", "Jayakrishnan Unnikrishnan", "Son Tran", "Benjamin Z. Yao", "Belinda Zeng", "Mubarak Shah", "Trishul Chilimbi" ]
In this paper, we propose VidLA, an approach for video-language alignment at scale. There are two major limitations of previous video-language alignment approaches. First, they do not capture both short-range and long-range temporal dependencies and typically employ complex hierarchical deep network architectures that are hard to integrate with existing pretrained image-text foundation models. To effectively address this limitation, we instead keep the network architecture simple and use a set of data tokens that operate at different temporal resolutions in a hierarchical manner, accounting for the temporally hierarchical nature of videos. By employing a simple two-tower architecture, we are able to initialize our video-language model with pretrained image-text foundation models, thereby boosting the final performance. Second, existing video-language alignment works struggle due to the lack of semantically aligned large-scale training data. To overcome it, we leverage recent LLMs to curate the largest video-language dataset to date with better visual grounding. Furthermore, unlike existing video-text datasets which only contain short clips, our dataset is enriched with video clips of varying durations to aid our temporally hierarchical data tokens in extracting better representations at varying temporal scales. Overall, empirical results show that our proposed approach surpasses state-of-the-art methods on multiple retrieval benchmarks, especially on longer videos, and performs competitively on classification benchmarks.
2024-03-25T00:00:00
2403.14714
Compiler generated feedback for Large Language Models
[ "Dejan Grubisic", "Chris Cummins", "Volker Seeker", "Hugh Leather" ]
We introduce a novel paradigm in compiler optimization powered by Large Language Models with compiler feedback to optimize the code size of LLVM assembly. The model takes unoptimized LLVM IR as input and produces optimized IR, the best optimization passes, and instruction counts of both unoptimized and optimized IRs. Then we compile the input with the generated optimization passes and evaluate whether the predicted instruction count is correct, the generated IR is compilable, and it corresponds to the compiled code. We provide this feedback back to the LLM and give it another chance to optimize the code. This approach adds an extra 0.53% improvement over -Oz to the original model. Although adding more information via feedback seems intuitive, simple sampling techniques achieve much higher performance given 10 or more samples.
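A rough sketch of one verification round in such a feedback loop, under my own assumptions: run the proposed passes with LLVM's `opt`, count the instructions in the produced IR with a crude text heuristic, and return a feedback string for the next model attempt. The model call is a stub parameter, the instruction counter is approximate, and the `opt` flag shown (`-passes=default<Oz>`, new pass manager) may need to be `-Oz` on older LLVM releases.

```python
import re
import subprocess
import tempfile

def count_ir_instructions(ir_text: str) -> int:
    """Rough count: non-empty body lines that are not labels, declarations, or metadata."""
    count = 0
    for line in ir_text.splitlines():
        s = line.strip()
        if not s or s.startswith((";", "declare", "define", "}", "!", "attributes",
                                  "target", "source_filename")):
            continue
        if re.match(r"^[\w.]+:", s):      # basic-block label
            continue
        count += 1
    return count

def run_opt(ir_path: str, passes: str = "-passes=default<Oz>") -> str:
    """Run LLVM opt with the given pass flag and return the optimized textual IR."""
    out = subprocess.run(["opt", passes, "-S", ir_path, "-o", "-"],
                         capture_output=True, text=True, check=True)
    return out.stdout

def feedback_round(ir_text: str, model_propose):
    """One round: the model proposes passes and an expected count; the compiler verifies."""
    passes, predicted_count = model_propose(ir_text)          # stubbed model call
    with tempfile.NamedTemporaryFile("w", suffix=".ll", delete=False) as f:
        f.write(ir_text)
        path = f.name
    optimized = run_opt(path, passes)
    actual = count_ir_instructions(optimized)
    feedback = (f"passes={passes}, predicted {predicted_count} instructions, "
                f"compiler produced {actual}")
    return optimized, feedback

# Usage (requires an LLVM installation with `opt` on PATH):
#   optimized_ir, fb = feedback_round(open("input.ll").read(), my_model_propose)
```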
2024-03-25T00:00:00
2403.14781
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
[ "Shenhao Zhu", "Junming Leo Chen", "Zuozhuo Dai", "Yinghui Xu", "Xun Cao", "Yao Yao", "Hao Zhu", "Siyu Zhu" ]
https://github.com/fudan-generative-vision/champ
In this study, we introduce a methodology for human image animation by leveraging a 3D human parametric model within a latent diffusion framework to enhance shape alignment and motion guidance in current human generative techniques. The methodology utilizes the SMPL (Skinned Multi-Person Linear) model as the 3D human parametric model to establish a unified representation of body shape and pose. This facilitates the accurate capture of intricate human geometry and motion characteristics from source videos. Specifically, we incorporate rendered depth images, normal maps, and semantic maps obtained from SMPL sequences, alongside skeleton-based motion guidance, to enrich the conditions to the latent diffusion model with comprehensive 3D shape and detailed pose attributes. A multi-layer motion fusion module, integrating self-attention mechanisms, is employed to fuse the shape and motion latent representations in the spatial domain. By representing the 3D human parametric model as the motion guidance, we can perform parametric shape alignment of the human body between the reference image and the source video motion. Experimental evaluations conducted on benchmark datasets demonstrate the methodology's superior ability to generate high-quality human animations that accurately capture both pose and shape variations. Furthermore, our approach also exhibits superior generalization capabilities on the proposed wild dataset. Project page: https://fudan-generative-vision.github.io/champ.
2024-03-25T00:00:00
2403.15382
DragAPart: Learning a Part-Level Motion Prior for Articulated Objects
[ "Ruining Li", "Chuanxia Zheng", "Christian Rupprecht", "Andrea Vedaldi" ]
https://github.com/RuiningLi/DragAPart
We introduce DragAPart, a method that, given an image and a set of drags as input, can generate a new image of the same object in a new state, compatible with the action of the drags. Differently from prior works that focused on repositioning objects, DragAPart predicts part-level interactions, such as opening and closing a drawer. We study this problem as a proxy for learning a generalist motion model, not restricted to a specific kinematic structure or object category. To this end, we start from a pre-trained image generator and fine-tune it on a new synthetic dataset, Drag-a-Move, which we introduce. Combined with a new encoding for the drags and dataset randomization, the new model generalizes well to real images and different categories. Compared to prior motion-controlled generators, we demonstrate much better part-level motion understanding.
2024-03-25T00:00:00
2403.15157
AllHands: Ask Me Anything on Large-scale Verbatim Feedback via Large Language Models
[ "Chaoyun Zhang", "Zicheng Ma", "Yuhao Wu", "Shilin He", "Si Qin", "Minghua Ma", "Xiaoting Qin", "Yu Kang", "Yuyi Liang", "Xiaoyu Gou", "Yajie Xue", "Qingwei Lin", "Saravan Rajmohan", "Dongmei Zhang", "Qi Zhang" ]
Verbatim feedback constitutes a valuable repository of user experiences, opinions, and requirements essential for software development. Effectively and efficiently extracting valuable insights from such data poses a challenging task. This paper introduces Allhands, an innovative analytic framework designed for large-scale feedback analysis through a natural language interface, leveraging large language models (LLMs). Allhands adheres to a conventional feedback analytic workflow, initially conducting classification and topic modeling on the feedback to convert it into a structurally augmented format, incorporating LLMs to enhance accuracy, robustness, generalization, and user-friendliness. Subsequently, an LLM agent is employed to interpret users' diverse questions in natural language on feedback, translating them into Python code for execution, and delivering comprehensive multi-modal responses, including text, code, tables, and images. We evaluate Allhands across three diverse feedback datasets. The experiments demonstrate that Allhands achieves superior efficacy at all stages of analysis, including classification and topic modeling, eventually providing users with an "ask me anything" experience with comprehensive, correct, and human-readable responses. To the best of our knowledge, Allhands stands as the first comprehensive feedback analysis framework that supports diverse and customized requirements for insight extraction through a natural language interface.
2024-03-26T00:00:00
2403.16627
SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions
[ "Yuda Song", "Zehao Sun", "Xuanwu Yin" ]
https://github.com/IDKiro/sdxs
Recent advancements in diffusion models have positioned them at the forefront of image generation. Despite their superior performance, diffusion models are not without drawbacks; they are characterized by complex architectures and substantial computational demands, resulting in significant latency due to their iterative sampling process. To mitigate these limitations, we introduce a dual approach involving model miniaturization and a reduction in sampling steps, aimed at significantly decreasing model latency. Our methodology leverages knowledge distillation to streamline the U-Net and image decoder architectures, and introduces an innovative one-step DM training technique that utilizes feature matching and score distillation. We present two models, SDXS-512 and SDXS-1024, achieving inference speeds of approximately 100 FPS (30x faster than SD v1.5) and 30 FPS (60x faster than SDXL) on a single GPU, respectively. Moreover, our training approach offers promising applications in image-conditioned control, facilitating efficient image-to-image translation.
2024-03-26T00:00:00
2403.17005
TRIP: Temporal Residual Learning with Image Noise Prior for Image-to-Video Diffusion Models
[ "Zhongwei Zhang", "Fuchen Long", "Yingwei Pan", "Zhaofan Qiu", "Ting Yao", "Yang Cao", "Tao Mei" ]
Recent advances in text-to-video generation have demonstrated the utility of powerful diffusion models. Nevertheless, the problem is not trivial when shaping diffusion models to animate static image (i.e., image-to-video generation). The difficulty originates from the aspect that the diffusion process of subsequent animated frames should not only preserve the faithful alignment with the given image but also pursue temporal coherence among adjacent frames. To alleviate this, we present TRIP, a new recipe of image-to-video diffusion paradigm that pivots on image noise prior derived from static image to jointly trigger inter-frame relational reasoning and ease the coherent temporal modeling via temporal residual learning. Technically, the image noise prior is first attained through one-step backward diffusion process based on both static image and noised video latent codes. Next, TRIP executes a residual-like dual-path scheme for noise prediction: 1) a shortcut path that directly takes image noise prior as the reference noise of each frame to amplify the alignment between the first frame and subsequent frames; 2) a residual path that employs 3D-UNet over noised video and static image latent codes to enable inter-frame relational reasoning, thereby easing the learning of the residual noise for each frame. Furthermore, both reference and residual noise of each frame are dynamically merged via attention mechanism for final video generation. Extensive experiments on WebVid-10M, DTDB and MSR-VTT datasets demonstrate the effectiveness of our TRIP for image-to-video generation. Please see our project page at https://trip-i2v.github.io/TRIP/.
2024-03-26T00:00:00
2403.16971
LLM Agent Operating System
[ "Kai Mei", "Zelong Li", "Shuyuan Xu", "Ruosong Ye", "Yingqiang Ge", "Yongfeng Zhang" ]
https://github.com/agiresearch/AIOS
The integration and deployment of large language model (LLM)-based intelligent agents have been fraught with challenges that compromise their efficiency and efficacy. Among these issues are sub-optimal scheduling and resource allocation of agent requests over the LLM, the difficulties in maintaining context during interactions between agent and LLM, and the complexities inherent in integrating heterogeneous agents with different capabilities and specializations. The rapid increase of agent quantity and complexity further exacerbates these issues, often leading to bottlenecks and sub-optimal utilization of resources. Inspired by these challenges, this paper presents AIOS, an LLM agent operating system, which embeds large language model into operating systems (OS). Specifically, AIOS is designed to optimize resource allocation, facilitate context switch across agents, enable concurrent execution of agents, provide tool service for agents, and maintain access control for agents. We present the architecture of such an operating system, outline the core challenges it aims to resolve, and provide the basic design and implementation of the AIOS. Our experiments on concurrent execution of multiple agents demonstrate the reliability and efficiency of our AIOS modules. Through this, we aim to not only improve the performance and efficiency of LLM agents but also to pioneer for better development and deployment of the AIOS ecosystem in the future. The project is open-source at https://github.com/agiresearch/AIOS.
2024-03-26T00:00:00
2403.17001
VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation
[ "Yang Chen", "Yingwei Pan", "Haibo Yang", "Ting Yao", "Tao Mei" ]
Recent innovations on text-to-3D generation have featured Score Distillation Sampling (SDS), which enables the zero-shot learning of implicit 3D models (NeRF) by directly distilling prior knowledge from 2D diffusion models. However, current SDS-based models still struggle with intricate text prompts and commonly result in distorted 3D models with unrealistic textures or cross-view inconsistency issues. In this work, we introduce a novel Visual Prompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the visual appearance knowledge in 2D visual prompt to boost text-to-3D generation. Instead of solely supervising SDS with text prompt, VP3D first capitalizes on 2D diffusion model to generate a high-quality image from input text, which subsequently acts as visual prompt to strengthen SDS optimization with explicit visual appearance. Meanwhile, we couple the SDS optimization with additional differentiable reward function that encourages rendering images of 3D models to better visually align with 2D visual prompt and semantically match with text prompt. Through extensive experiments, we show that the 2D Visual Prompt in our VP3D significantly eases the learning of visual appearance of 3D models and thus leads to higher visual fidelity with more detailed textures. It is also appealing in view that when replacing the self-generating visual prompt with a given reference image, VP3D is able to trigger a new task of stylized text-to-3D generation. Our project page is available at https://vp3d-cvpr24.github.io.
2024-03-26T00:00:00
2403.16990
Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation
[ "Omer Dahary", "Or Patashnik", "Kfir Aberman", "Daniel Cohen-Or" ]
Text-to-image diffusion models have an unprecedented ability to generate diverse and high-quality images. However, they often struggle to faithfully capture the intended semantics of complex input prompts that include multiple subjects. Recently, numerous layout-to-image extensions have been introduced to improve user control, aiming to localize subjects represented by specific tokens. Yet, these methods often produce semantically inaccurate images, especially when dealing with multiple semantically or visually similar subjects. In this work, we study and analyze the causes of these limitations. Our exploration reveals that the primary issue stems from inadvertent semantic leakage between subjects in the denoising process. This leakage is attributed to the diffusion model's attention layers, which tend to blend the visual features of different subjects. To address these issues, we introduce Bounded Attention, a training-free method for bounding the information flow in the sampling process. Bounded Attention prevents detrimental leakage among subjects and enables guiding the generation to promote each subject's individuality, even with complex multi-subject conditioning. Through extensive experimentation, we demonstrate that our method empowers the generation of multiple subjects that better align with given prompts and layouts.
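The snippet below illustrates the basic mechanic of bounding attention with subject masks: queries belonging to one subject's tokens are prevented from attending to keys of a different subject by setting those logits to negative infinity before the softmax. The mask construction and token layout are simplifications for illustration, not the paper's full method.

```python
import torch

def bounded_attention(q, k, v, subject_ids):
    """
    q, k, v: (B, N, D) token features. subject_ids: (N,) with -1 for background/shared
    tokens and 0..S-1 for tokens assigned to a specific subject's region.
    Tokens of one subject may not attend to tokens of a *different* subject.
    """
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5           # (B, N, N) logits
    sid_q = subject_ids.view(-1, 1)
    sid_k = subject_ids.view(1, -1)
    blocked = (sid_q >= 0) & (sid_k >= 0) & (sid_q != sid_k)        # cross-subject pairs
    scores = scores.masked_fill(blocked, float("-inf"))             # prevent semantic leakage
    return torch.softmax(scores, dim=-1) @ v

B, N, D = 1, 16, 32
q, k, v = (torch.randn(B, N, D) for _ in range(3))
subject_ids = torch.tensor([0] * 6 + [1] * 6 + [-1] * 4)            # two subjects + background
print(bounded_attention(q, k, v, subject_ids).shape)                # torch.Size([1, 16, 32])
```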
2024-03-26T00:00:00
2403.17008
FlashFace: Human Image Personalization with High-fidelity Identity Preservation
[ "Shilong Zhang", "Lianghua Huang", "Xi Chen", "Yifei Zhang", "Zhi-Fan Wu", "Yutong Feng", "Wei Wang", "Yujun Shen", "Yu Liu", "Ping Luo" ]
https://github.com/jshilong/FlashFace
This work presents FlashFace, a practical tool with which users can easily personalize their own photos on the fly by providing one or a few reference face images and a text prompt. Our approach is distinguishable from existing human photo customization methods by higher-fidelity identity preservation and better instruction following, benefiting from two subtle designs. First, we encode the face identity into a series of feature maps instead of one image token as in prior arts, allowing the model to retain more details of the reference faces (e.g., scars, tattoos, and face shape ). Second, we introduce a disentangled integration strategy to balance the text and image guidance during the text-to-image generation process, alleviating the conflict between the reference faces and the text prompts (e.g., personalizing an adult into a "child" or an "elder"). Extensive experimental results demonstrate the effectiveness of our method on various applications, including human image personalization, face swapping under language prompts, making virtual characters into real people, etc. Project Page: https://jshilong.github.io/flashface-page.
2024-03-26T00:00:00
2403.15484
RakutenAI-7B: Extending Large Language Models for Japanese
[ "Rakuten Group", "Aaron Levine", "Connie Huang", "Chenguang Wang", "Eduardo Batista", "Ewa Szymanska", "Hongyi Ding", "Hou Wei Chou", "Jean-François Pessiot", "Johanes Effendi", "Justin Chiu", "Kai Torben Ohlhus", "Karan Chopra", "Keiji Shinzato", "Koji Murakami", "Lee Xiong", "Lei Chen", "Maki Kubota", "Maksim Tkachenko", "Miroku Lee", "Naoki Takahashi", "Prathyusha Jwalapuram", "Ryutaro Tatsushima", "Saurabh Jain", "Sunil Kumar Yadav", "Ting Cai", "Wei-Te Chen", "Yandi Xia", "Yuki Nakayama", "Yutaka Higashiyama" ]
We introduce RakutenAI-7B, a suite of Japanese-oriented large language models that achieve the best performance on the Japanese LM Harness benchmarks among the open 7B models. Along with the foundation model, we release instruction- and chat-tuned models, RakutenAI-7B-instruct and RakutenAI-7B-chat respectively, under the Apache 2.0 license.
2024-03-26T00:00:00
2403.15447
Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression
[ "Junyuan Hong", "Jinhao Duan", "Chenhui Zhang", "Zhangheng Li", "Chulin Xie", "Kelsey Lieberman", "James Diffenderfer", "Brian Bartoldson", "Ajay Jaiswal", "Kaidi Xu", "Bhavya Kailkhura", "Dan Hendrycks", "Dawn Song", "Zhangyang Wang", "Bo Li" ]
https://github.com/decoding-comp-trust/comp-trust
Compressing high-capability Large Language Models (LLMs) has emerged as a favored strategy for resource-efficient inferences. While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected. This study conducts the first, thorough evaluation of three (3) leading LLMs using five (5) SoTA compression techniques across eight (8) trustworthiness dimensions. Our experiments highlight the intricate interplay between compression and trustworthiness, revealing some interesting patterns. We find that quantization is currently a more effective approach than pruning in achieving efficiency and trustworthiness simultaneously. For instance, a 4-bit quantized model retains the trustworthiness of its original counterpart, but model pruning significantly degrades trustworthiness, even at 50% sparsity. Moreover, employing quantization within a moderate bit range could unexpectedly improve certain trustworthiness dimensions such as ethics and fairness. Conversely, extreme quantization to very low bit levels (3 bits) tends to significantly reduce trustworthiness. This increased risk cannot be uncovered by looking at benign performance alone, in turn, mandating comprehensive trustworthiness evaluation in practice. These findings culminate in practical recommendations for simultaneously achieving high utility, efficiency, and trustworthiness in LLMs. Models and code are available at https://decoding-comp-trust.github.io/.
2024-03-27T00:00:00
2403.17887
The Unreasonable Ineffectiveness of the Deeper Layers
[ "Andrey Gromov", "Kushal Tirumala", "Hassan Shapourian", "Paolo Glorioso", "Daniel A. Roberts" ]
We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed. To prune these models, we identify the optimal block of layers to prune by considering similarity across layers; then, to "heal" the damage, we perform a small amount of finetuning. In particular, we use parameter-efficient finetuning (PEFT) methods, specifically quantization and Low Rank Adapters (QLoRA), such that each of our experiments can be performed on a single A100 GPU. From a practical perspective, these results suggest that layer pruning methods can complement other PEFT strategies to further reduce computational resources of finetuning on the one hand, and can improve the memory and latency of inference on the other hand. From a scientific perspective, the robustness of these LLMs to the deletion of layers implies either that current pretraining methods are not properly leveraging the parameters in the deeper layers of the network or that the shallow layers play a critical role in storing knowledge.
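A minimal sketch of the block-selection step, under my own assumptions: measure an angular distance between the hidden states entering and leaving each candidate block of n consecutive layers, and prune the block where that distance is smallest. The random-walk activations below stand in for hidden states collected from a real model on calibration data.

```python
import numpy as np

def angular_distance(a, b):
    """Mean angular distance between corresponding rows of a and b (tokens x dim)."""
    cos = np.sum(a * b, axis=-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi))

def best_block_to_prune(hidden_states, n):
    """
    hidden_states: list of (tokens, dim) activations at each layer boundary
    (L+1 entries for an L-layer model). Returns the start index of the n-layer
    block whose input and output representations are most similar.
    """
    L = len(hidden_states) - 1
    dists = [angular_distance(hidden_states[i], hidden_states[i + n]) for i in range(L - n + 1)]
    return int(np.argmin(dists)), dists

rng = np.random.default_rng(0)
base = rng.normal(size=(128, 512))
# Toy activations: a random walk whose steps shrink with depth, so deeper blocks change less.
hidden = [base.copy()]
for i in range(32):                                 # 32-layer toy model
    hidden.append(hidden[-1] + rng.normal(scale=2.0 / (i + 1), size=base.shape))
start, dists = best_block_to_prune(hidden, n=8)
print(f"prune layers {start}..{start + 7}; distance {dists[start]:.3f}")
```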
2024-03-27T00:00:00
2403.17804
Improving Text-to-Image Consistency via Automatic Prompt Optimization
[ "Oscar Mañas", "Pietro Astolfi", "Melissa Hall", "Candace Ross", "Jack Urbanek", "Adina Williams", "Aishwarya Agrawal", "Adriana Romero-Soriano", "Michal Drozdzal" ]
Impressive advances in text-to-image (T2I) generative models have yielded a plethora of high performing models which are able to generate aesthetically appealing, photorealistic images. Despite the progress, these models still struggle to produce images that are consistent with the input prompt, oftentimes failing to capture object quantities, relations and attributes properly. Existing solutions to improve prompt-image consistency suffer from the following challenges: (1) they oftentimes require model fine-tuning, (2) they only focus on nearby prompt samples, and (3) they are affected by unfavorable trade-offs among image quality, representation diversity, and prompt-image consistency. In this paper, we address these challenges and introduce a T2I optimization-by-prompting framework, OPT2I, which leverages a large language model (LLM) to improve prompt-image consistency in T2I models. Our framework starts from a user prompt and iteratively generates revised prompts with the goal of maximizing a consistency score. Our extensive validation on two datasets, MSCOCO and PartiPrompts, shows that OPT2I can boost the initial consistency score by up to 24.9% in terms of DSG score while preserving the FID and increasing the recall between generated and real data. Our work paves the way toward building more reliable and robust T2I systems by harnessing the power of LLMs.
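Below is a structural sketch of an optimization-by-prompting loop in this spirit: keep a pool of prompt candidates with their consistency scores, show the (sorted) history to an LLM, and ask for revised prompts. Both the consistency scorer and the revising LLM are stubs with made-up behavior; only the loop shape follows the description above.

```python
import random

def consistency_score(prompt: str) -> float:
    # Stand-in for generating an image and scoring prompt-image consistency
    # (e.g., a VQA/DSG-style score); here a toy heuristic with some noise.
    return min(1.0, 0.05 * len(prompt.split())) + random.uniform(-0.05, 0.05)

def revise_prompts(history, n_new=4):
    # Stand-in for the meta-prompted LLM: it would see (prompt, score) pairs sorted
    # by score and propose paraphrases; here we just append clarifying phrases.
    best_prompt, _ = history[-1]
    extras = ["with exactly the stated number of objects",
              "each attribute bound to the correct object",
              "objects in the described spatial relation",
              "photorealistic, all details visible"]
    return [f"{best_prompt}, {random.choice(extras)}" for _ in range(n_new)]

user_prompt = "two red apples to the left of a small blue cup"
history = [(user_prompt, consistency_score(user_prompt))]
for it in range(5):
    history.sort(key=lambda x: x[1])                 # lowest to highest score
    candidates = revise_prompts(history)
    history += [(p, consistency_score(p)) for p in candidates]
best = max(history, key=lambda x: x[1])
print(f"best prompt (score {best[1]:.2f}): {best[0]}")
```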
2024-03-27T00:00:00
2403.17920
TC4D: Trajectory-Conditioned Text-to-4D Generation
[ "Sherwin Bahmani", "Xian Liu", "Yifan Wang", "Ivan Skorokhodov", "Victor Rong", "Ziwei Liu", "Xihui Liu", "Jeong Joon Park", "Sergey Tulyakov", "Gordon Wetzstein", "Andrea Tagliasacchi", "David B. Lindell" ]
https://github.com/sherwinbahmani/tc4d
Recent techniques for text-to-4D generation synthesize dynamic 3D scenes using supervision from pre-trained text-to-video models. However, existing representations for motion, such as deformation models or time-dependent neural representations, are limited in the amount of motion they can generate; they cannot synthesize motion extending far beyond the bounding box used for volume rendering. The lack of a more flexible motion model contributes to the gap in realism between 4D generation methods and recent, near-photorealistic video generation models. Here, we propose TC4D: trajectory-conditioned text-to-4D generation, which factors motion into global and local components. We represent the global motion of a scene's bounding box using rigid transformation along a trajectory parameterized by a spline. We learn local deformations that conform to the global trajectory using supervision from a text-to-video model. Our approach enables the synthesis of scenes animated along arbitrary trajectories, compositional scene generation, and significant improvements to the realism and amount of generated motion, which we evaluate qualitatively and through a user study. Video results can be viewed on our website: https://sherwinbahmani.github.io/tc4d.
2024-03-27T00:00:00
2403.17607
Fully-fused Multi-Layer Perceptrons on Intel Data Center GPUs
[ "Kai Yuan", "Christoph Bauinger", "Xiangyi Zhang", "Pascal Baehr", "Matthias Kirchhart", "Darius Dabert", "Adrien Tousnakhoff", "Pierre Boudier", "Michael Paulitsch" ]
https://github.com/intel/tiny-dpcpp-nn
This paper presents a SYCL implementation of Multi-Layer Perceptrons (MLPs), which targets and is optimized for the Intel Data Center GPU Max 1550. To increase the performance, our implementation minimizes the slow global memory accesses by maximizing the data reuse within the general register file and the shared local memory by fusing the operations in each layer of the MLP. We show with a simple roofline model that this results in a significant increase in the arithmetic intensity, leading to improved performance, especially for inference. We compare our approach to a similar CUDA implementation for MLPs and show that our implementation on the Intel Data Center GPU outperforms the CUDA implementation on Nvidia's H100 GPU by a factor up to 2.84 in inference and 1.75 in training. The paper also showcases the efficiency of our SYCL implementation in three significant areas: Image Compression, Neural Radiance Fields, and Physics-Informed Machine Learning. In all cases, our implementation outperforms the off-the-shelf Intel Extension for PyTorch (IPEX) implementation on the same Intel GPU by up to a factor of 30 and the CUDA PyTorch version on Nvidia's H100 GPU by up to a factor 19. The code can be found at https://github.com/intel/tiny-dpcpp-nn.
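A back-of-the-envelope version of the arithmetic-intensity argument, under my own assumptions: for batched inference through a narrow MLP, a fused kernel avoids writing intermediate activations to global memory and reading them back, so FLOPs per byte of global-memory traffic rise sharply. The layer sizes and half-precision assumption below are illustrative, not the paper's exact figures.

```python
def mlp_arithmetic_intensity(batch, width, layers, bytes_per_el=2, fused=True):
    """
    FLOPs per byte of global-memory traffic for `layers` square (width x width) layers.
    Fused: only the network input, the output, and the weights touch global memory.
    Unfused: every intermediate activation is also written out and read back.
    """
    flops = 2 * batch * width * width * layers               # one GEMM per layer
    weight_bytes = layers * width * width * bytes_per_el
    io_bytes = 2 * batch * width * bytes_per_el               # network input + output
    if fused:
        traffic = weight_bytes + io_bytes
    else:
        intermediate = (layers - 1) * batch * width * bytes_per_el
        traffic = weight_bytes + io_bytes + 2 * intermediate  # write, then read back
    return flops / traffic

for fused in (False, True):
    ai = mlp_arithmetic_intensity(batch=2**17, width=64, layers=4, fused=fused)
    print(f"{'fused' if fused else 'unfused'} arithmetic intensity: {ai:.1f} FLOP/byte")
```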
2024-03-27T00:00:00
2403.17297
InternLM2 Technical Report
[ "Zheng Cai", "Maosong Cao", "Haojiong Chen", "Kai Chen", "Keyu Chen", "Xin Chen", "Xun Chen", "Zehui Chen", "Zhi Chen", "Pei Chu", "Xiaoyi Dong", "Haodong Duan", "Qi Fan", "Zhaoye Fei", "Yang Gao", "Jiaye Ge", "Chenya Gu", "Yuzhe Gu", "Tao Gui", "Aijia Guo", "Qipeng Guo", "Conghui He", "Yingfan Hu", "Ting Huang", "Tao Jiang", "Penglong Jiao", "Zhenjiang Jin", "Zhikai Lei", "Jiaxing Li", "Jingwen Li", "Linyang Li", "Shuaibin Li", "Wei Li", "Yining Li", "Hongwei Liu", "Jiangning Liu", "Jiawei Hong", "Kaiwen Liu", "Kuikun Liu", "Xiaoran Liu", "Chengqi Lv", "Haijun Lv", "Kai Lv", "Li Ma", "Runyuan Ma", "Zerun Ma", "Wenchang Ning", "Linke Ouyang", "Jiantao Qiu", "Yuan Qu", "Fukai Shang", "Yunfan Shao", "Demin Song", "Zifan Song", "Zhihao Sui", "Peng Sun", "Yu Sun", "Huanze Tang", "Bin Wang", "Guoteng Wang", "Jiaqi Wang", "Jiayu Wang", "Rui Wang", "Yudong Wang", "Ziyi Wang", "Xingjian Wei", "Qizhen Weng", "Fan Wu", "Yingtong Xiong", "Chao Xu", "Ruiliang Xu", "Hang Yan", "Yirong Yan", "Xiaogui Yang", "Haochen Ye", "Huaiyuan Ying", "Jia Yu", "Jing Yu", "Yuhang Zang", "Chuyu Zhang", "Li Zhang", "Pan Zhang", "Peng Zhang", "Ruijie Zhang", "Shuo Zhang", "Songyang Zhang", "Wenjian Zhang", "Wenwei Zhang", "Xingcheng Zhang", "Xinyue Zhang", "Hui Zhao", "Qian Zhao", "Xiaomeng Zhao", "Fengzhe Zhou", "Zaida Zhou", "Jingming Zhuo", "Yicheng Zou", "Xipeng Qiu", "Yu Qiao", "Dahua Lin" ]
The evolution of Large Language Models (LLMs) like ChatGPT and GPT-4 has sparked discussions on the advent of Artificial General Intelligence (AGI). However, replicating such advancements in open-source models has been challenging. This paper introduces InternLM2, an open-source LLM that outperforms its predecessors in comprehensive evaluations across 6 dimensions and 30 benchmarks, long-context modeling, and open-ended subjective evaluations through innovative pre-training and optimization techniques. The pre-training process of InternLM2 is meticulously detailed, highlighting the preparation of diverse data types including text, code, and long-context data. InternLM2 efficiently captures long-term dependencies, initially trained on 4k tokens before advancing to 32k tokens in pre-training and fine-tuning stages, exhibiting remarkable performance on the 200k "Needle-in-a-Haystack" test. InternLM2 is further aligned using Supervised Fine-Tuning (SFT) and a novel Conditional Online Reinforcement Learning from Human Feedback (COOL RLHF) strategy that addresses conflicting human preferences and reward hacking. By releasing InternLM2 models in different training stages and model sizes, we provide the community with insights into the model's evolution.
2024-03-27T00:00:00
2403.17898
Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians
[ "Kerui Ren", "Lihan Jiang", "Tao Lu", "Mulin Yu", "Linning Xu", "Zhangkai Ni", "Bo Dai" ]
https://github.com/city-super/Octree-GS
The recent 3D Gaussian splatting (3D-GS) has shown remarkable rendering fidelity and efficiency compared to NeRF-based neural scene representations. While demonstrating the potential for real-time rendering, 3D-GS encounters rendering bottlenecks in large scenes with complex details due to an excessive number of Gaussian primitives located within the viewing frustum. This limitation is particularly noticeable in zoom-out views and can lead to inconsistent rendering speeds in scenes with varying details. Moreover, it often struggles to capture the corresponding level of details at different scales with its heuristic density control operation. Inspired by the Level-of-Detail (LOD) techniques, we introduce Octree-GS, featuring an LOD-structured 3D Gaussian approach supporting level-of-detail decomposition for scene representation that contributes to the final rendering results. Our model dynamically selects the appropriate level from the set of multi-resolution anchor points, ensuring consistent rendering performance with adaptive LOD adjustments while maintaining high-fidelity rendering results.
2024-03-27T00:00:00
2403.17237
DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion
[ "Yuanze Lin", "Ronald Clark", "Philip Torr" ]
https://github.com/yuanze-lin/DreamPolisher
We present DreamPolisher, a novel Gaussian Splatting based method with geometric guidance, tailored to learn cross-view consistency and intricate detail from textual descriptions. While recent progress on text-to-3D generation methods has been promising, prevailing methods often fail to ensure view-consistency and textural richness. This problem becomes particularly noticeable for methods that work with text input alone. To address this, we propose a two-stage Gaussian Splatting based approach that enforces geometric consistency among views. Initially, a coarse 3D generation undergoes refinement via geometric optimization. Subsequently, we use a ControlNet driven refiner coupled with the geometric consistency term to improve both texture fidelity and overall consistency of the generated 3D asset. Empirical evaluations across diverse textual prompts spanning various object categories demonstrate the efficacy of DreamPolisher in generating consistent and realistic 3D objects, aligning closely with the semantics of the textual instructions.
2024-03-27T00:00:00
2403.17888
2D Gaussian Splatting for Geometrically Accurate Radiance Fields
[ "Binbin Huang", "Zehao Yu", "Anpei Chen", "Andreas Geiger", "Shenghua Gao" ]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking. However, 3DGS fails to accurately represent surfaces due to the multi-view inconsistent nature of 3D Gaussians. We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images. Our key idea is to collapse the 3D volume into a set of 2D oriented planar Gaussian disks. Unlike 3D Gaussians, 2D Gaussians provide view-consistent geometry while modeling surfaces intrinsically. To accurately recover thin surfaces and achieve stable optimization, we introduce a perspective-accurate 2D splatting process utilizing ray-splat intersection and rasterization. Additionally, we incorporate depth distortion and normal consistency terms to further enhance the quality of the reconstructions. We demonstrate that our differentiable renderer allows for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering. Our code will be made publicly available.
2024-03-27T00:00:00
2403.17694
AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation
[ "Huawei Wei", "Zejun Yang", "Zhisheng Wang" ]
https://github.com/scutzzj/AniPortrait
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusion model, coupled with a motion module, to convert the landmark sequence into photorealistic and temporally consistent portrait animation. Experimental results demonstrate the superiority of AniPortrait in terms of facial naturalness, pose diversity, and visual quality, thereby offering an enhanced perceptual experience. Moreover, our methodology exhibits considerable potential in terms of flexibility and controllability, which can be effectively applied in areas such as facial motion editing or face reenactment. We release code and model weights at https://github.com/scutzzj/AniPortrait
2024-03-28T00:00:00
2403.18795
Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction
[ "Qiuhong Shen", "Xuanyu Yi", "Zike Wu", "Pan Zhou", "Hanwang Zhang", "Shuicheng Yan", "Xinchao Wang" ]
We tackle the challenge of efficiently reconstructing a 3D asset from a single image amid growing demands for automated 3D content creation pipelines. Previous methods primarily rely on Score Distillation Sampling (SDS) and Neural Radiance Fields (NeRF). Despite their significant success, these approaches encounter practical limitations due to lengthy optimization and considerable memory usage. In this report, we introduce Gamba, an end-to-end amortized 3D reconstruction model from single-view images, emphasizing two main insights: (1) 3D representation: leveraging a large number of 3D Gaussians for an efficient 3D Gaussian splatting process; (2) Backbone design: introducing a Mamba-based sequential network that facilitates context-dependent reasoning and linear scalability with the sequence (token) length, accommodating a substantial number of Gaussians. Gamba incorporates significant advancements in data preprocessing, regularization design, and training methodologies. We assessed Gamba against existing optimization-based and feed-forward 3D generation approaches using the real-world scanned OmniObject3D dataset. Here, Gamba demonstrates competitive generation capabilities, both qualitatively and quantitatively, while achieving remarkable speed, approximately 0.6 seconds on a single NVIDIA A100 GPU.
2024-03-28T00:00:00
2403.18421
BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text
[ "Elliot Bolton", "Abhinav Venigalla", "Michihiro Yasunaga", "David Hall", "Betty Xiong", "Tony Lee", "Roxana Daneshjou", "Jonathan Frankle", "Percy Liang", "Michael Carbin", "Christopher D. Manning" ]
Models such as GPT-4 and Med-PaLM 2 have demonstrated impressive performance on a wide variety of biomedical NLP tasks. However, these models have hundreds of billions of parameters, are computationally expensive to run, require users to send their input data over the internet, and are trained on unknown data sources. Can smaller, more targeted models compete? To address this question, we build and release BioMedLM, a 2.7 billion parameter GPT-style autoregressive model trained exclusively on PubMed abstracts and full articles. When fine-tuned, BioMedLM can produce strong multiple-choice biomedical question-answering results competitive with much larger models, such as achieving a score of 57.3% on MedMCQA (dev) and 69.0% on the MMLU Medical Genetics exam. BioMedLM can also be fine-tuned to produce useful answers to patient questions on medical topics. This demonstrates that smaller models can potentially serve as transparent, privacy-preserving, economical and environmentally friendly foundations for particular NLP applications, such as in biomedicine. The model is available on the Hugging Face Hub: https://huggingface.co/stanford-crfm/BioMedLM.
2024-03-28T00:00:00
2403.18814
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
[ "Yanwei Li", "Yuechen Zhang", "Chengyao Wang", "Zhisheng Zhong", "Yixin Chen", "Ruihang Chu", "Shaoteng Liu", "Jiaya Jia" ]
https://github.com/dvlab-research/MiniGemini
In this work, we introduce Mini-Gemini, a simple and effective framework enhancing multi-modality Vision Language Models (VLMs). Despite the advancements in VLMs facilitating basic visual dialog and reasoning, a performance gap persists compared to advanced models like GPT-4 and Gemini. We try to narrow the gap by mining the potential of VLMs for better performance and any-to-any workflow from three aspects, i.e., high-resolution visual tokens, high-quality data, and VLM-guided generation. To enhance visual tokens, we propose to utilize an additional visual encoder for high-resolution refinement without increasing the visual token count. We further construct a high-quality dataset that promotes precise image comprehension and reasoning-based generation, expanding the operational scope of current VLMs. In general, Mini-Gemini further mines the potential of VLMs and empowers current frameworks with image understanding, reasoning, and generation simultaneously. Mini-Gemini supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B. It is demonstrated to achieve leading performance in several zero-shot benchmarks and even surpasses the developed private models. Code and models are available at https://github.com/dvlab-research/MiniGemini.
2024-03-28T00:00:00
2403.18361
ViTAR: Vision Transformer with Any Resolution
[ "Qihang Fan", "Quanzeng You", "Xiaotian Han", "Yongfei Liu", "Yunzhe Tao", "Huaibo Huang", "Ran He", "Hongxia Yang" ]
This paper tackles a significant challenge faced by Vision Transformers (ViTs): their constrained scalability across different image resolutions. Typically, ViTs experience a performance decline when processing resolutions different from those seen during training. Our work introduces two key innovations to address this issue. Firstly, we propose a novel module for dynamic resolution adjustment, designed with a single Transformer block, specifically to achieve highly efficient incremental token integration. Secondly, we introduce fuzzy positional encoding in the Vision Transformer to provide consistent positional awareness across multiple resolutions, thereby preventing overfitting to any single training resolution. Our resulting model, ViTAR (Vision Transformer with Any Resolution), demonstrates impressive adaptability, achieving 83.3\% top-1 accuracy at a 1120x1120 resolution and 80.4\% accuracy at a 4032x4032 resolution, all while reducing computational costs. ViTAR also shows strong performance in downstream tasks such as instance and semantic segmentation and can easily be combined with self-supervised learning techniques like Masked AutoEncoder. Our work provides a cost-effective solution for enhancing the resolution scalability of ViTs, paving the way for more versatile and efficient high-resolution image processing.
2024-03-28T00:00:00
2403.18802
Long-form factuality in large language models
[ "Jerry Wei", "Chengrun Yang", "Xinying Song", "Yifeng Lu", "Nathan Hu", "Dustin Tran", "Daiyi Peng", "Ruibo Liu", "Da Huang", "Cosmo Du", "Quoc V. Le" ]
https://github.com/google-deepmind/long-form-factuality
Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality. To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user's preferred response length (recall). Empirically, we demonstrate that LLM agents can achieve superhuman rating performance - on a set of ~16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at https://github.com/google-deepmind/long-form-factuality.
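The extended F1 described above can be made concrete with a small sketch. The function name, edge-case handling, and example values below are assumptions for illustration rather than the released SAFE implementation.

```python
def f1_at_k(num_supported: int, num_not_supported: int, k: int) -> float:
    """Aggregate long-form factuality as an F1-style score: precision is the
    fraction of provided facts that are supported; recall saturates once K
    supported facts have been provided (K encodes the preferred response length)."""
    num_facts = num_supported + num_not_supported
    if num_facts == 0:
        return 0.0
    precision = num_supported / num_facts
    recall = min(num_supported / k, 1.0)
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 40 supported and 10 unsupported facts, preferred length K = 64 facts:
# precision = 0.8, recall = 0.625, F1 ~ 0.70
print(f1_at_k(40, 10, k=64))
```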
2024-03-28T00:00:00
2403.18816
Garment3DGen: 3D Garment Stylization and Texture Generation
[ "Nikolaos Sarafianos", "Tuur Stuyck", "Xiaoyu Xiang", "Yilei Li", "Jovan Popovic", "Rakesh Ranjan" ]
We introduce Garment3DGen, a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance. Our proposed approach allows users to generate 3D textured clothes based on both real and synthetic images, such as those generated by text prompts. The generated assets can be directly draped and simulated on human bodies. First, we leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries. However, since these geometries cannot be utilized directly for downstream tasks, we propose to use them as pseudo ground-truth and set up a mesh deformation optimization procedure that deforms a base template mesh to match the generated 3D target. Second, we introduce carefully designed losses that allow the input base mesh to freely deform towards the desired target, yet preserve mesh quality and topology such that they can be simulated. Finally, a texture estimation module generates high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance, allowing us to render the generated 3D assets. With Garment3DGen, users can generate the textured 3D garment of their choice without the need for artist intervention. One can provide a textual prompt describing the garment they desire to generate a simulation-ready 3D asset. We present a plethora of quantitative and qualitative comparisons on various assets both real and generated and provide use-cases of how one can generate simulation-ready 3D garments.
2024-03-28T00:00:00
2403.18605
FlexEdit: Flexible and Controllable Diffusion-based Object-centric Image Editing
[ "Trong-Tung Nguyen", "Duc-Anh Nguyen", "Anh Tran", "Cuong Pham" ]
Our work addresses limitations seen in previous approaches for object-centric editing problems, such as unrealistic results due to shape discrepancies and limited control in object replacement or insertion. To this end, we introduce FlexEdit, a flexible and controllable editing framework for objects where we iteratively adjust latents at each denoising step using our FlexEdit block. Initially, we optimize latents at test time to align with specified object constraints. Then, our framework employs an adaptive mask, automatically extracted during denoising, to protect the background while seamlessly blending new content into the target image. We demonstrate the versatility of FlexEdit in various object editing tasks and curate an evaluation test suite with samples from both real and synthetic images, along with novel evaluation metrics designed for object-centric editing. We conduct extensive experiments on different editing scenarios, demonstrating the superiority of our editing framework over recent advanced text-guided image editing methods. Our project page is published at https://flex-edit.github.io/.
2024-03-28T00:00:00
2403.18783
Towards a World-English Language Model for On-Device Virtual Assistants
[ "Rricha Jalota", "Lyan Verwimp", "Markus Nussbaum-Thom", "Amr Mousa", "Arturo Argueta", "Youssef Oualil" ]
Neural Network Language Models (NNLMs) for Virtual Assistants (VAs) are generally language-, region-, and in some cases, device-dependent, which increases the effort to scale and maintain them. Combining NNLMs for one or more of the categories is one way to improve scalability. In this work, we combine regional variants of English to build a "World English" NNLM for on-device VAs. In particular, we investigate the application of adapter bottlenecks to model dialect-specific characteristics in our existing production NNLMs and enhance the multi-dialect baselines. We find that adapter modules are more effective in modeling dialects than specializing entire sub-networks. Based on this insight and leveraging the design of our production models, we introduce a new architecture for World English NNLM that meets the accuracy, latency, and memory constraints of our single-dialect models.
2024-03-28T00:00:00
2403.18118
EgoLifter: Open-world 3D Segmentation for Egocentric Perception
[ "Qiao Gu", "Zhaoyang Lv", "Duncan Frost", "Simon Green", "Julian Straub", "Chris Sweeney" ]
In this paper we present EgoLifter, a novel system that can automatically segment scenes captured from egocentric sensors into a complete decomposition of individual 3D objects. The system is specifically designed for egocentric data where scenes contain hundreds of objects captured from natural (non-scanning) motion. EgoLifter adopts 3D Gaussians as the underlying representation of 3D scenes and objects and uses segmentation masks from the Segment Anything Model (SAM) as weak supervision to learn flexible and promptable definitions of object instances free of any specific object taxonomy. To handle the challenge of dynamic objects in ego-centric videos, we design a transient prediction module that learns to filter out dynamic objects in the 3D reconstruction. The result is a fully automatic pipeline that is able to reconstruct 3D object instances as collections of 3D Gaussians that collectively compose the entire scene. We created a new benchmark on the Aria Digital Twin dataset that quantitatively demonstrates its state-of-the-art performance in open-world 3D segmentation from natural egocentric input. We run EgoLifter on various egocentric activity datasets which shows the promise of the method for 3D egocentric perception at scale.
2024-03-28T00:00:00
2403.18818
ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion
[ "Daniel Winter", "Matan Cohen", "Shlomi Fruchter", "Yael Pritch", "Alex Rav-Acha", "Yedid Hoshen" ]
Diffusion models have revolutionized image editing but often generate images that violate physical laws, particularly the effects of objects on the scene, e.g., occlusions, shadows, and reflections. By analyzing the limitations of self-supervised approaches, we propose a practical solution centered on a counterfactual dataset. Our method involves capturing a scene before and after removing a single object, while minimizing other changes. By fine-tuning a diffusion model on this dataset, we are able to not only remove objects but also their effects on the scene. However, we find that applying this approach for photorealistic object insertion requires an impractically large dataset. To tackle this challenge, we propose bootstrap supervision; leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably. Our approach significantly outperforms prior methods in photorealistic object removal and insertion, particularly at modeling the effects of objects on the scene.
2024-03-29T00:00:00
2403.18978
TextCraftor: Your Text Encoder Can be Image Quality Controller
[ "Yanyu Li", "Xian Liu", "Anil Kag", "Ju Hu", "Yerlan Idelbayev", "Dhritiman Sagar", "Yanzhi Wang", "Sergey Tulyakov", "Jian Ren" ]
Diffusion-based text-to-image generative models, e.g., Stable Diffusion, have revolutionized the field of content generation, enabling significant advancements in areas like image editing and video synthesis. Despite their formidable capabilities, these models are not without their limitations. It is still challenging to synthesize an image that aligns well with the input text, and multiple runs with carefully crafted prompts are required to achieve satisfactory results. To mitigate these limitations, numerous studies have endeavored to fine-tune the pre-trained diffusion models, i.e., UNet, utilizing various technologies. Yet, amidst these efforts, a pivotal question of text-to-image diffusion model training has remained largely unexplored: Is it possible and feasible to fine-tune the text encoder to improve the performance of text-to-image diffusion models? Our findings reveal that, instead of replacing the CLIP text encoder used in Stable Diffusion with other large language models, we can enhance it through our proposed fine-tuning approach, TextCraftor, leading to substantial improvements in quantitative benchmarks and human assessments. Interestingly, our technique also empowers controllable image generation through the interpolation of different text encoders fine-tuned with various rewards. We also demonstrate that TextCraftor is orthogonal to UNet finetuning, and can be combined to further improve generative quality.
2024-03-29T00:00:00
2403.19046
LITA: Language Instructed Temporal-Localization Assistant
[ "De-An Huang", "Shijia Liao", "Subhashree Radhakrishnan", "Hongxu Yin", "Pavlo Molchanov", "Zhiding Yu", "Jan Kautz" ]
https://github.com/NVlabs/LITA
There has been tremendous progress in multimodal Large Language Models (LLMs). Recent works have extended these models to video input with promising instruction following capabilities. However, an important missing piece is temporal localization. These models cannot accurately answer the "When?" questions. We identify three key aspects that limit their temporal localization capabilities: (i) time representation, (ii) architecture, and (iii) data. We address these shortcomings by proposing Language Instructed Temporal-Localization Assistant (LITA) with the following features: (1) We introduce time tokens that encode timestamps relative to the video length to better represent time in videos. (2) We introduce SlowFast tokens in the architecture to capture temporal information at fine temporal resolution. (3) We emphasize temporal localization data for LITA. In addition to leveraging existing video datasets with timestamps, we propose a new task, Reasoning Temporal Localization (RTL), along with the dataset, ActivityNet-RTL, for learning and evaluating this task. Reasoning temporal localization requires both the reasoning and temporal localization of Video LLMs. LITA demonstrates strong performance on this challenging task, nearly doubling the temporal mean intersection-over-union (mIoU) of baselines. In addition, we show that our emphasis on temporal localization also substantially improves video-based text generation compared to existing Video LLMs, including a 36% relative improvement of Temporal Understanding. Code is available at: https://github.com/NVlabs/LITA
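To make the length-relative time-token idea concrete, here is a minimal sketch; the vocabulary size, token format, and rounding scheme are illustrative assumptions, not LITA's actual tokenizer.

```python
def timestamp_to_time_token(t_sec: float, video_len_sec: float,
                            num_time_tokens: int = 100) -> str:
    """Map an absolute timestamp to a discrete time token whose index depends
    only on the *relative* position within the video (illustrative encoding)."""
    frac = min(max(t_sec / video_len_sec, 0.0), 1.0)
    idx = min(int(frac * num_time_tokens), num_time_tokens - 1)
    return f"<t{idx}>"

def time_token_to_timestamp(token: str, video_len_sec: float,
                            num_time_tokens: int = 100) -> float:
    """Decode a time token back to the centre of its time bin, in seconds."""
    idx = int(token.strip("<t>"))
    return (idx + 0.5) / num_time_tokens * video_len_sec

tok = timestamp_to_time_token(42.0, video_len_sec=120.0)   # a length-relative token
print(tok, time_token_to_timestamp(tok, video_len_sec=120.0))
```

Because the token index is relative, the same vocabulary covers clips of any duration, which is the property the abstract highlights.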
2024-03-29T00:00:00
2403.19655
GaussianCube: Structuring Gaussian Splatting using Optimal Transport for 3D Generative Modeling
[ "Bowen Zhang", "Yiji Cheng", "Jiaolong Yang", "Chunyu Wang", "Feng Zhao", "Yansong Tang", "Dong Chen", "Baining Guo" ]
https://github.com/GaussianCube/GaussianCube
3D Gaussian Splatting (GS) has achieved considerable improvement over Neural Radiance Fields in terms of 3D fitting fidelity and rendering speed. However, this unstructured representation with scattered Gaussians poses a significant challenge for generative modeling. To address the problem, we introduce GaussianCube, a structured GS representation that is both powerful and efficient for generative modeling. We achieve this by first proposing a modified densification-constrained GS fitting algorithm which can yield high-quality fitting results using a fixed number of free Gaussians, and then re-arranging the Gaussians into a predefined voxel grid via Optimal Transport. The structured grid representation allows us to use standard 3D U-Net as our backbone in diffusion generative modeling without elaborate designs. Extensive experiments conducted on ShapeNet and OmniObject3D show that our model achieves state-of-the-art generation results both qualitatively and quantitatively, underscoring the potential of GaussianCube as a powerful and versatile 3D representation.
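A small sketch of the grid re-arrangement step: with a fixed number of Gaussians equal to the number of voxels and uniform weights, the optimal transport reduces to a linear assignment problem. The cost function and solver below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_gaussians_to_grid(gaussian_centers, grid_resolution):
    """gaussian_centers: (N, 3) array with N == grid_resolution**3, in [0, 1]^3.
    Returns an (N,) array mapping each voxel to the index of its Gaussian."""
    lin = (np.arange(grid_resolution) + 0.5) / grid_resolution  # voxel centres
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), -1).reshape(-1, 3)
    # squared-distance transport cost between every voxel centre and Gaussian centre
    cost = ((grid[:, None, :] - gaussian_centers[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)  # row = voxel index, col = Gaussian index
    return col[np.argsort(row)]

perm = assign_gaussians_to_grid(np.random.rand(8**3, 3), grid_resolution=8)
```

The resulting permutation places every Gaussian into exactly one voxel, giving the regular structure a standard 3D U-Net can consume.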
2024-03-29T00:00:00
2403.19319
Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation
[ "Yujin Chen", "Yinyu Nie", "Benjamin Ummenhofer", "Reiner Birkl", "Michael Paulitsch", "Matthias Müller", "Matthias Nießner" ]
We present Mesh2NeRF, an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks. Many 3D generative approaches represent 3D scenes as radiance fields for training. Their ground-truth radiance fields are usually fitted from multi-view renderings from a large-scale synthetic 3D dataset, which often results in artifacts due to occlusions or under-fitting issues. In Mesh2NeRF, we propose an analytic solution to directly obtain ground-truth radiance fields from 3D meshes, characterizing the density field with an occupancy function featuring a defined surface thickness, and determining view-dependent color through a reflection function considering both the mesh and environment lighting. Mesh2NeRF extracts accurate radiance fields, which provide direct supervision for training generative NeRFs and single scene representation. We validate the effectiveness of Mesh2NeRF across various tasks, achieving a noteworthy 3.12dB improvement in PSNR for view synthesis in single scene representation on the ABO dataset, a 0.69 PSNR enhancement in the single-view conditional generation of ShapeNet Cars, and notably improved mesh extraction from NeRF in the unconditional generation of Objaverse Mugs.
2024-03-29T00:00:00
2403.19270
sDPO: Don't Use Your Data All at Once
[ "Dahyun Kim", "Yungi Kim", "Wonho Song", "Hyeonwoo Kim", "Yunsu Kim", "Sanghoon Kim", "Chanjun Park" ]
As development of large language models (LLMs) progresses, aligning them with human preferences has become increasingly important. We propose stepwise DPO (sDPO), an extension of the recently popularized direct preference optimization (DPO) for alignment tuning. This approach involves dividing the available preference datasets and utilizing them in a stepwise manner, rather than employing them all at once. We demonstrate that this method facilitates the use of more precisely aligned reference models within the DPO training framework. Furthermore, sDPO trains the final model to be more performant, even outperforming other popular LLMs with more parameters.
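A minimal sketch of the stepwise loop this describes. `dpo_train` is a hypothetical callable (e.g. a thin wrapper around an existing DPO trainer) taking a policy, a frozen reference, and a dataset chunk; the chunking strategy and number of steps are illustrative, not the authors' exact recipe.

```python
def sdpo(initial_policy, preference_data_chunks, dpo_train):
    """Stepwise DPO sketch: one preference-data chunk per step, with the
    reference model refreshed to the latest aligned policy after each step."""
    policy = initial_policy
    reference = initial_policy  # step 1: the reference is the SFT model itself
    for chunk in preference_data_chunks:
        policy = dpo_train(policy=policy, reference=reference, dataset=chunk)
        # the freshly aligned model becomes the frozen reference for the next step,
        # so later steps optimize against a progressively better-aligned reference
        reference = policy
    return policy
```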
2024-04-01T00:00:00
2403.20331
Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models
[ "Atsuyuki Miyai", "Jingkang Yang", "Jingyang Zhang", "Yifei Ming", "Qing Yu", "Go Irie", "Yixuan Li", "Hai Li", "Ziwei Liu", "Kiyoharu Aizawa" ]
https://github.com/AtsuMiyai/UPD/
This paper introduces a novel and significant challenge for Vision Language Models (VLMs), termed Unsolvable Problem Detection (UPD). UPD examines the VLM's ability to withhold answers when faced with unsolvable problems in the context of Visual Question Answering (VQA) tasks. UPD encompasses three distinct settings: Absent Answer Detection (AAD), Incompatible Answer Set Detection (IASD), and Incompatible Visual Question Detection (IVQD). To deeply investigate the UPD problem, we conduct extensive experiments, which indicate that most VLMs, including GPT-4V and LLaVA-Next-34B, struggle with our benchmarks to varying extents, highlighting significant room for improvement. To address UPD, we explore both training-free and training-based solutions, offering new insights into their effectiveness and limitations. We hope our insights, together with future efforts within the proposed UPD settings, will enhance the broader understanding and development of more practical and reliable VLMs.
2024-04-01T00:00:00
2403.20309
InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds
[ "Zhiwen Fan", "Wenyan Cong", "Kairun Wen", "Kevin Wang", "Jian Zhang", "Xinghao Ding", "Danfei Xu", "Boris Ivanovic", "Marco Pavone", "Georgios Pavlakos", "Zhangyang Wang", "Yue Wang" ]
While novel view synthesis (NVS) has made substantial progress in 3D computer vision, it typically requires an initial estimation of camera intrinsics and extrinsics from dense viewpoints. This pre-processing is usually conducted via a Structure-from-Motion (SfM) pipeline, a procedure that can be slow and unreliable, particularly in sparse-view scenarios with insufficient matched features for accurate reconstruction. In this work, we integrate the strengths of point-based representations (e.g., 3D Gaussian Splatting, 3D-GS) with end-to-end dense stereo models (DUSt3R) to tackle the complex yet unresolved issues in NVS under unconstrained settings, which encompasses pose-free and sparse view challenges. Our framework, InstantSplat, unifies dense stereo priors with 3D-GS to build 3D Gaussians of large-scale scenes from sparse-view and pose-free images in less than 1 minute. Specifically, InstantSplat comprises a Coarse Geometric Initialization (CGI) module that swiftly establishes a preliminary scene structure and camera parameters across all training views, utilizing globally-aligned 3D point maps derived from a pre-trained dense stereo pipeline. This is followed by the Fast 3D-Gaussian Optimization (F-3DGO) module, which jointly optimizes the 3D Gaussian attributes and the initialized poses with pose regularization. Experiments conducted on the large-scale outdoor Tanks & Temples datasets demonstrate that InstantSplat significantly improves SSIM (by 32%) while concurrently reducing Absolute Trajectory Error (ATE) by 80%. These establish InstantSplat as a viable solution for scenarios involving pose-free and sparse-view conditions. Project page: instantsplat.github.io.
2024-04-01T00:00:00
2403.20275
Snap-it, Tap-it, Splat-it: Tactile-Informed 3D Gaussian Splatting for Reconstructing Challenging Surfaces
[ "Mauro Comi", "Alessio Tonioni", "Max Yang", "Jonathan Tremblay", "Valts Blukis", "Yijiong Lin", "Nathan F. Lepora", "Laurence Aitchison" ]
Touch and vision go hand in hand, mutually enhancing our ability to understand the world. From a research perspective, the problem of mixing touch and vision is underexplored and presents interesting challenges. To this end, we propose Tactile-Informed 3DGS, a novel approach that incorporates touch data (local depth maps) with multi-view vision data to achieve surface reconstruction and novel view synthesis. Our method optimises 3D Gaussian primitives to accurately model the object's geometry at points of contact. By creating a framework that decreases the transmittance at touch locations, we achieve a refined surface reconstruction, ensuring a uniformly smooth depth map. Touch is particularly useful when considering non-Lambertian objects (e.g. shiny or reflective surfaces) since contemporary methods tend to fail to faithfully reconstruct specular highlights. By combining vision and tactile sensing, we achieve more accurate geometry reconstructions with fewer images than prior methods. We conduct evaluation on objects with glossy and reflective surfaces and demonstrate the effectiveness of our approach, offering significant improvements in reconstruction quality.
2024-04-01T00:00:00
2403.20329
ReALM: Reference Resolution As Language Modeling
[ "Joel Ruben Antony Moniz", "Soundarya Krishnan", "Melis Ozyildirim", "Prathamesh Saraf", "Halim Cagri Ates", "Yuan Zhang", "Hong Yu", "Nidhi Rajshree" ]
Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds. This context includes both previous turns and context that pertains to non-conversational entities, such as entities on the user's screen or those running in the background. While LLMs have been shown to be extremely powerful for a variety of tasks, their use in reference resolution, particularly for non-conversational entities, remains underutilized. This paper demonstrates how LLMs can be used to create an extremely effective system to resolve references of various types, by showing how reference resolution can be converted into a language modeling problem, despite involving forms of entities like those on screen that are not traditionally conducive to being reduced to a text-only modality. We demonstrate large improvements over an existing system with similar functionality across different types of references, with our smallest model obtaining absolute gains of over 5% for on-screen references. We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it.
2024-04-01T00:00:00
2403.19887
Jamba: A Hybrid Transformer-Mamba Language Model
[ "Opher Lieber", "Barak Lenz", "Hofit Bata", "Gal Cohen", "Jhonathan Osin", "Itay Dalmedigos", "Erez Safahi", "Shaked Meirom", "Yonatan Belinkov", "Shai Shalev-Shwartz", "Omri Abend", "Raz Alon", "Tomer Asida", "Amir Bergman", "Roman Glozman", "Michael Gokhman", "Avashalom Manevich", "Nir Ratner", "Noam Rozen", "Erez Shwartz", "Mor Zusman", "Yoav Shoham" ]
We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
2024-04-01T00:00:00
2403.20327
Gecko: Versatile Text Embeddings Distilled from Large Language Models
[ "Jinhyuk Lee", "Zhuyun Dai", "Xiaoqi Ren", "Blair Chen", "Daniel Cer", "Jeremy R. Cole", "Kai Hui", "Michael Boratko", "Rajvi Kapadia", "Wen Ding", "Yi Luan", "Sai Meher Karthik Duddu", "Gustavo Hernandez Abrego", "Weiqiang Shi", "Nithi Gupta", "Aditya Kusupati", "Prateek Jain", "Siddhartha Reddy Jonnalagadda", "Ming-Wei Chang", "Iftekhar Naim" ]
We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using an LLM. Next, we further refine the data quality by retrieving a set of candidate passages for each query, and relabeling the positive and hard negative passages using the same LLM. The effectiveness of our approach is demonstrated by the compactness of Gecko. On the Massive Text Embedding Benchmark (MTEB), Gecko with 256 embedding dimensions outperforms all existing entries with an embedding size of 768. Gecko with 768 embedding dimensions achieves an average score of 66.31, competing with 7x larger models and 5x higher dimensional embeddings.
2024-04-01T00:00:00
2403.20041
Transformer-Lite: High-efficiency Deployment of Large Language Models on Mobile Phone GPUs
[ "Luchang Li", "Sheng Qian", "Jie Lu", "Lunxi Yuan", "Rui Wang", "Qin Xie" ]
The Large Language Model (LLM) is widely employed for tasks such as intelligent assistants, text summarization, translation, and multi-modality on mobile phones. However, current methods for on-device LLM deployment still suffer from slow inference speed, which causes a poor user experience. To facilitate high-efficiency LLM deployment on device GPUs, we propose four optimization techniques: (a) a symbolic expression-based approach to support dynamic shape model inference; (b) operator optimizations and execution priority setting to enhance inference speed and reduce phone lagging; (c) an FP4 quantization method termed M0E4 to reduce dequantization overhead; (d) a sub-tensor-based technique to eliminate the need for copying KV cache after LLM inference. Furthermore, we implement these methods in our mobile inference engine, Transformer-Lite, which is compatible with both Qualcomm and MTK processors. We evaluated Transformer-Lite's performance using LLMs with varied architectures and parameters ranging from 2B to 14B. Specifically, we achieved prefill and decoding speeds of 121 token/s and 14 token/s for ChatGLM2 6B, and 330 token/s and 30 token/s for smaller Gemma 2B, respectively. Compared with CPU-based FastLLM and GPU-based MLC-LLM, our engine attains over 10x speedup for the prefill speed and 2~3x speedup for the decoding speed.
2024-04-01T00:00:00
2403.19888
MambaMixer: Efficient Selective State Space Models with Dual Token and Channel Selection
[ "Ali Behrouz", "Michele Santacatterina", "Ramin Zabih" ]
https://github.com/MambaMixer/M2
Recent advances in deep learning have mainly relied on Transformers due to their data dependency and ability to learn at scale. The attention module in these architectures, however, exhibits quadratic time and space in input size, limiting their scalability for long-sequence modeling. Despite recent attempts to design efficient and effective architecture backbones for multi-dimensional data, such as images and multivariate time series, existing models are either data independent, or fail to allow inter- and intra-dimension communication. Recently, State Space Models (SSMs), and more specifically Selective State Space Models, with efficient hardware-aware implementation, have shown promising potential for long sequence modeling. Motivated by the success of SSMs, we present MambaMixer, a new architecture with data-dependent weights that uses a dual selection mechanism across tokens and channels, called Selective Token and Channel Mixer. MambaMixer connects selective mixers using a weighted averaging mechanism, allowing layers to have direct access to early features. As a proof of concept, we design Vision MambaMixer (ViM2) and Time Series MambaMixer (TSM2) architectures based on the MambaMixer block and explore their performance in various vision and time series forecasting tasks. Our results underline the importance of selective mixing across both tokens and channels. In ImageNet classification, object detection, and semantic segmentation tasks, ViM2 achieves competitive performance with well-established vision models and outperforms SSM-based vision models. In time series forecasting, TSM2 achieves outstanding performance compared to state-of-the-art methods while demonstrating significantly improved computational cost. These results show that while Transformers, cross-channel attention, and MLPs are sufficient for good performance in time series forecasting, none of them is necessary.
2024-04-01T00:00:00
2403.19928
DiJiang: Efficient Large Language Models through Compact Kernelization
[ "Hanting Chen", "Zhicheng Liu", "Xutao Wang", "Yuchuan Tian", "Yunhe Wang" ]
https://github.com/YuchuanTian/DiJiang
In an effort to reduce the computational load of Transformers, research on linear attention has gained significant momentum. However, the improvement strategies for attention mechanisms typically necessitate extensive retraining, which is impractical for large language models with a vast array of parameters. In this paper, we present DiJiang, a novel Frequency Domain Kernelization approach that enables the transformation of a pre-trained vanilla Transformer into a linear complexity model with little training costs. By employing a weighted Quasi-Monte Carlo method for sampling, the proposed approach theoretically offers superior approximation efficiency. To further reduce the training computational complexity, our kernelization is based on Discrete Cosine Transform (DCT) operations. Extensive experiments demonstrate that the proposed method achieves comparable performance to the original Transformer, but with significantly reduced training costs and much faster inference speeds. Our DiJiang-7B achieves comparable performance with LLaMA2-7B on various benchmark while requires only about 1/50 training cost. Code is available at https://github.com/YuchuanTian/DiJiang.
2024-04-01T00:00:00
2403.19851
Localizing Paragraph Memorization in Language Models
[ "Niklas Stoehr", "Mitchell Gordon", "Chiyuan Zhang", "Owen Lewis" ]
https://github.com/googleinterns/localizing-paragraph-memorization
Can we localize the weights and mechanisms used by a language model to memorize and recite entire paragraphs of its training data? In this paper, we show that while memorization is spread across multiple layers and model components, gradients of memorized paragraphs have a distinguishable spatial pattern, being larger in lower model layers than gradients of non-memorized examples. Moreover, the memorized examples can be unlearned by fine-tuning only the high-gradient weights. We localize a low-layer attention head that appears to be especially involved in paragraph memorization. This head predominantly focuses its attention on distinctive, rare tokens that are least frequent in a corpus-level unigram distribution. Next, we study how localized memorization is across the tokens in the prefix by perturbing tokens and measuring the caused change in the decoding. A few distinctive tokens early in a prefix can often corrupt the entire continuation. Overall, memorized continuations are not only harder to unlearn, but also harder to corrupt, than non-memorized ones.
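A hedged PyTorch sketch of the "fine-tune only the high-gradient weights" idea: select the weights with the largest gradient magnitude on a memorized paragraph, then restrict updates to that mask. The keep fraction, the gradient-ascent objective, and the learning rate are assumptions, not the paper's exact protocol.

```python
import torch

def high_gradient_masks(model, loss, keep_frac=0.001):
    """Binary masks selecting the weights with the largest |gradient| of the
    memorized paragraph's loss (keep_frac is an assumed, illustrative value)."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    flat = torch.cat([g.abs().flatten() for g in grads])
    k = max(1, int(keep_frac * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return [(g.abs() >= threshold).float() for g in grads]

def masked_unlearning_step(model, loss, masks, lr=1e-4):
    """One gradient-ascent step on the memorized paragraph's loss, applied only
    to the masked weights. `loss` must be freshly recomputed before each call."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g, m in zip(params, grads, masks):
            p.add_(lr * g * m)  # increase the memorization loss only where mask == 1
```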
2024-04-02T00:00:00
2404.01197
Getting it Right: Improving Spatial Consistency in Text-to-Image Models
[ "Agneet Chatterjee", "Gabriela Ben Melech Stan", "Estelle Aflalo", "Sayak Paul", "Dhruba Ghosh", "Tejas Gokhale", "Ludwig Schmidt", "Hannaneh Hajishirzi", "Vasudev Lal", "Chitta Baral", "Yezhou Yang" ]
One of the key shortcomings in current text-to-image (T2I) models is their inability to consistently generate images which faithfully follow the spatial relationships specified in the text prompt. In this paper, we offer a comprehensive investigation of this limitation, while also developing datasets and methods that achieve state-of-the-art performance. First, we find that current vision-language datasets do not represent spatial relationships well enough; to alleviate this bottleneck, we create SPRIGHT, the first spatially-focused, large scale dataset, by re-captioning 6 million images from 4 widely used vision datasets. Through a 3-fold evaluation and analysis pipeline, we find that SPRIGHT largely improves upon existing datasets in capturing spatial relationships. To demonstrate its efficacy, we leverage only ~0.25% of SPRIGHT and achieve a 22% improvement in generating spatially accurate images while also improving the FID and CMMD scores. Secondly, we find that training on images containing a large number of objects results in substantial improvements in spatial consistency. Notably, we attain state-of-the-art on T2I-CompBench with a spatial score of 0.2133, by fine-tuning on <500 images. Finally, through a set of controlled experiments and ablations, we document multiple findings that we believe will enhance the understanding of factors that affect spatial consistency in text-to-image models. We publicly release our dataset and model to foster further research in this area.
2024-04-02T00:00:00
2404.00987
FlexiDreamer: Single Image-to-3D Generation with FlexiCubes
[ "Ruowen Zhao", "Zhengyi Wang", "Yikai Wang", "Zihan Zhou", "Jun Zhu" ]
3D content generation from text prompts or single images has made remarkable progress in quality and speed recently. One of its dominant paradigms involves generating consistent multi-view images followed by a sparse-view reconstruction. However, due to the challenge of directly deforming the mesh representation to approach the target topology, most methodologies learn an implicit representation (such as NeRF) during the sparse-view reconstruction and acquire the target mesh by a post-processing extraction. Although the implicit representation can effectively model rich 3D information, its training typically entails a long convergence time. In addition, the post-extraction operation from the implicit field also leads to undesirable visual artifacts. In this paper, we propose FlexiDreamer, a novel single-image-to-3D generation framework that reconstructs the target mesh in an end-to-end manner. By leveraging a flexible gradient-based extraction known as FlexiCubes, our method circumvents the defects brought by the post-processing and facilitates a direct acquisition of the target mesh. Furthermore, we incorporate a multi-resolution hash grid encoding scheme that progressively activates the encoding levels into the implicit field in FlexiCubes to help capture geometric details for per-step optimization. Notably, FlexiDreamer recovers a dense 3D structure from a single-view image in approximately 1 minute on a single NVIDIA A100 GPU, outperforming previous methodologies by a large margin.
2024-04-02T00:00:00
2404.01258
Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward
[ "Ruohong Zhang", "Liangke Gui", "Zhiqing Sun", "Yihao Feng", "Keyang Xu", "Yuanhan Zhang", "Di Fu", "Chunyuan Li", "Alexander Hauptmann", "Yonatan Bisk", "Yiming Yang" ]
Preference modeling techniques, such as direct preference optimization (DPO), have proven effective in enhancing the generalization abilities of large language models (LLMs). However, in tasks involving video instruction-following, providing informative feedback, especially for detecting hallucinations in generated responses, remains a significant challenge. Previous studies have explored using large multimodal models (LMMs) as reward models to guide preference modeling, but their ability to accurately assess the factuality of generated responses compared to corresponding videos has not been conclusively established. This paper introduces a novel framework that utilizes detailed video captions as a proxy of video content, enabling language models to incorporate this information as supporting evidence for scoring video Question Answering (QA) predictions. Our approach demonstrates robust alignment with OpenAI GPT-4V model's reward mechanism, which directly takes video frames as input. Furthermore, we show that applying this tailored reward through DPO significantly improves the performance of video LMMs on video QA tasks.
2024-04-02T00:00:00
2404.01143
Condition-Aware Neural Network for Controlled Image Generation
[ "Han Cai", "Muyang Li", "Zhuoyang Zhang", "Qinsheng Zhang", "Ming-Yu Liu", "Song Han" ]
We present Condition-Aware Neural Network (CAN), a new method for adding control to image generative models. In parallel to prior conditional control methods, CAN controls the image generation process by dynamically manipulating the weight of the neural network. This is achieved by introducing a condition-aware weight generation module that generates conditional weight for convolution/linear layers based on the input condition. We test CAN on class-conditional image generation on ImageNet and text-to-image generation on COCO. CAN consistently delivers significant improvements for diffusion transformer models, including DiT and UViT. In particular, CAN combined with EfficientViT (CaT) achieves 2.78 FID on ImageNet 512x512, surpassing DiT-XL/2 while requiring 52x fewer MACs per sampling step.
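A minimal sketch of the condition-aware weight idea for a single linear layer: a small generator produces the layer's weight from the condition embedding, instead of modulating activations. The layer sizes and generator design below are hypernetwork-style assumptions for illustration; the paper applies weight generation to convolution/linear layers inside diffusion transformers.

```python
import torch
import torch.nn as nn

class ConditionAwareLinear(nn.Module):
    """Linear layer whose weight and bias are generated from a condition vector."""
    def __init__(self, in_features, out_features, cond_dim):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.weight_gen = nn.Linear(cond_dim, out_features * in_features)
        self.bias_gen = nn.Linear(cond_dim, out_features)

    def forward(self, x, cond):
        # x: (batch, in_features), cond: (batch, cond_dim)
        w = self.weight_gen(cond).view(-1, self.out_features, self.in_features)
        b = self.bias_gen(cond)
        # per-sample conditional weights via a batched matrix multiply
        return torch.bmm(x.unsqueeze(1), w.transpose(1, 2)).squeeze(1) + b

layer = ConditionAwareLinear(64, 128, cond_dim=32)
out = layer(torch.randn(4, 64), torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 128])
```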
2024-04-02T00:00:00
2404.00345
MaGRITTe: Manipulative and Generative 3D Realization from Image, Topview and Text
[ "Takayuki Hara", "Tatsuya Harada" ]
The generation of 3D scenes from user-specified conditions offers a promising avenue for alleviating the production burden in 3D applications. Previous studies required significant effort to realize the desired scene, owing to limited control conditions. We propose a method for controlling and generating 3D scenes under multimodal conditions using partial images, layout information represented in the top view, and text prompts. Combining these conditions to generate a 3D scene involves the following significant difficulties: (1) the creation of large datasets, (2) reflection on the interaction of multimodal conditions, and (3) domain dependence of the layout conditions. We decompose the process of 3D scene generation into 2D image generation from the given conditions and 3D scene generation from 2D images. 2D image generation is achieved by fine-tuning a pretrained text-to-image model with a small artificial dataset of partial images and layouts, and 3D scene generation is achieved by layout-conditioned depth estimation and neural radiance fields (NeRF), thereby avoiding the creation of large datasets. The use of a common representation of spatial information using 360-degree images allows for the consideration of multimodal condition interactions and reduces the domain dependence of the layout control. The experimental results qualitatively and quantitatively demonstrated that the proposed method can generate 3D scenes in diverse domains, from indoor to outdoor, according to multimodal conditions.
2024-04-02T00:00:00
2404.00399
Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order
[ "Taishi Nakamura", "Mayank Mishra", "Simone Tedeschi", "Yekun Chai", "Jason T Stillerman", "Felix Friedrich", "Prateek Yadav", "Tanmay Laud", "Vu Minh Chien", "Terry Yue Zhuo", "Diganta Misra", "Ben Bogin", "Xuan-Son Vu", "Marzena Karpinska", "Arnav Varma Dantuluri", "Wojciech Kusa", "Tommaso Furlanello", "Rio Yokota", "Niklas Muennighoff", "Suhas Pai", "Tosin Adewumi", "Veronika Laippala", "Xiaozhe Yao", "Adalberto Junior", "Alpay Ariyak", "Aleksandr Drozd", "Jordan Clive", "Kshitij Gupta", "Liangyu Chen", "Qi Sun", "Ken Tsui", "Noah Persaud", "Nour Fahmy", "Tianlong Chen", "Mohit Bansal", "Nicolo Monti", "Tai Dang", "Ziyang Luo", "Tien-Tung Bui", "Roberto Navigli", "Virendra Mehta", "Matthew Blumberg", "Victor May", "Huu Nguyen", "Sampo Pyysalo" ]
Pretrained language models underpin several AI applications, but their high computational cost for training limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. However, such existing models face challenges: limited multilingual capabilities, catastrophic forgetting during continual pretraining, the high computational cost of pretraining from scratch, and compliance with AI safety and development laws. This paper presents Aurora-M, a 15B parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435 billion additional tokens, Aurora-M surpasses 2 trillion tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, thus aligning its development not only with conventional red-teaming considerations, but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Aurora-M is rigorously evaluated across various tasks and languages, demonstrating robustness against catastrophic forgetting and outperforming alternatives in multilingual settings, particularly in safety evaluations. To promote responsible open-source LLM development, Aurora-M and its variants are released at https://huggingface.co/collections/aurora-m/aurora-m-models-65fdfdff62471e09812f5407 .
2024-04-02T00:00:00
2404.01292
Measuring Style Similarity in Diffusion Models
[ "Gowthami Somepalli", "Anubhav Gupta", "Kamal Gupta", "Shramay Palta", "Micah Goldblum", "Jonas Geiping", "Abhinav Shrivastava", "Tom Goldstein" ]
https://github.com/learn2phoenix/CSD
Generative models are now widely used by graphic designers and artists. Prior works have shown that these models remember and often replicate content from their training data during generation. Hence as their proliferation increases, it has become important to perform a database search to determine whether the properties of the image are attributable to specific training data, every time before a generated image is used for professional purposes. Existing tools for this purpose focus on retrieving images of similar semantic content. Meanwhile, many artists are concerned with style replication in text-to-image models. We present a framework for understanding and extracting style descriptors from images. Our framework comprises a new dataset curated using the insight that style is a subjective property of an image that captures complex yet meaningful interactions of factors including but not limited to colors, textures, shapes, etc. We also propose a method to extract style descriptors that can be used to attribute style of a generated image to the images used in the training dataset of a text-to-image model. We showcase promising results in various style retrieval tasks. We also quantitatively and qualitatively analyze style attribution and matching in the Stable Diffusion model. Code and artifacts are available at https://github.com/learn2phoenix/CSD.
2024-04-02T00:00:00
2404.01294
CosmicMan: A Text-to-Image Foundation Model for Humans
[ "Shikai Li", "Jianglin Fu", "Kaiyuan Liu", "Wentao Wang", "Kwan-Yee Lin", "Wayne Wu" ]
https://github.com/cosmicman-cvpr2024/CosmicMan
We present CosmicMan, a text-to-image foundation model specialized for generating high-fidelity human images. Unlike current general-purpose foundation models that are stuck in the dilemma of inferior quality and text-image misalignment for humans, CosmicMan enables generating photo-realistic human images with meticulous appearance, reasonable structure, and precise text-image alignment with detailed dense descriptions. At the heart of CosmicMan's success are the new reflections and perspectives on data and models: (1) We found that data quality and a scalable data production flow are essential for the final results from trained models. Hence, we propose a new data production paradigm, Annotate Anyone, which serves as a perpetual data flywheel to produce high-quality data with accurate yet cost-effective annotations over time. Based on this, we constructed a large-scale dataset, CosmicMan-HQ 1.0, with 6 Million high-quality real-world human images in a mean resolution of 1488x1255, and attached with precise text annotations deriving from 115 Million attributes in diverse granularities. (2) We argue that a text-to-image foundation model specialized for humans must be pragmatic -- easy to integrate into down-streaming tasks while effective in producing high-quality human images. Hence, we propose to model the relationship between dense text descriptions and image pixels in a decomposed manner, and present Decomposed-Attention-Refocusing (Daring) training framework. It seamlessly decomposes the cross-attention features in existing text-to-image diffusion model, and enforces attention refocusing without adding extra modules. Through Daring, we show that explicitly discretizing continuous text space into several basic groups that align with human body structure is the key to tackling the misalignment problem in a breeze.
2024-04-02T00:00:00
2404.00656
WavLLM: Towards Robust and Adaptive Speech Large Language Model
[ "Shujie Hu", "Long Zhou", "Shujie Liu", "Sanyuan Chen", "Hongkun Hao", "Jing Pan", "Xunying Liu", "Jinyu Li", "Sunit Sivasankaran", "Linquan Liu", "Furu Wei" ]
https://github.com/microsoft/SpeechT5/tree/main/WavLLM
The recent advancements in large language models (LLMs) have revolutionized the field of natural language processing, progressively broadening their scope to multimodal perception and generation. However, effectively integrating listening capabilities into LLMs poses significant challenges, particularly with respect to generalizing across varied contexts and executing complex auditory tasks. In this work, we introduce WavLLM, a robust and adaptive speech large language model with dual encoders and a prompt-aware LoRA weight adapter, optimized by a two-stage curriculum learning approach. Leveraging dual encoders, we decouple different types of speech information, utilizing a Whisper encoder to process the semantic content of speech, and a WavLM encoder to capture the unique characteristics of the speaker's identity. Within the curriculum learning framework, WavLLM first builds its foundational capabilities by optimizing on mixed elementary single tasks, followed by advanced multi-task training on more complex tasks such as combinations of the elementary tasks. To enhance the flexibility and adherence to different tasks and instructions, a prompt-aware LoRA weight adapter is introduced in the second advanced multi-task training stage. We validate the proposed model on universal speech benchmarks including tasks such as ASR, ST, SV, ER, and also apply it to specialized datasets like the Gaokao English listening comprehension set for SQA and a speech Chain-of-Thought (CoT) evaluation set. Experiments demonstrate that the proposed model achieves state-of-the-art performance across a range of speech tasks at the same model size, exhibiting robust generalization capabilities in executing complex tasks using a CoT approach. Furthermore, our model successfully completes Gaokao tasks without specialized training. The code, models, audio, and Gaokao evaluation set can be accessed at aka.ms/wavllm.
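A minimal sketch (dimensions and module names are assumptions, not the paper's code) of how the outputs of a semantic encoder and a speaker encoder might be projected and fused into LLM-sized speech tokens:

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Fuse semantic (Whisper-like) and speaker (WavLM-like) features
    before an LLM; all dimensions here are illustrative only."""

    def __init__(self, d_sem=1280, d_spk=1024, d_llm=4096):
        super().__init__()
        self.proj_sem = nn.Linear(d_sem, d_llm)
        self.proj_spk = nn.Linear(d_spk, d_llm)

    def forward(self, sem_feats, spk_feats):
        # sem_feats: (B, T, d_sem), spk_feats: (B, T, d_spk)
        fused = self.proj_sem(sem_feats) + self.proj_spk(spk_feats)
        return fused  # (B, T, d_llm) speech tokens fed to the LLM

fusion = DualEncoderFusion()
out = fusion(torch.randn(2, 50, 1280), torch.randn(2, 50, 1024))
print(out.shape)  # torch.Size([2, 50, 4096])
```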
2024-04-02T00:00:00
2404.00488
Noise-Aware Training of Layout-Aware Language Models
[ "Ritesh Sarkhel", "Xiaoqi Ren", "Lauro Beltrao Costa", "Guolong Su", "Vincent Perot", "Yanan Xie", "Emmanouil Koukoumidis", "Arnab Nandi" ]
A visually rich document (VRD) utilizes visual features along with linguistic cues to disseminate information. Training a custom extractor that identifies named entities from a document requires a large number of instances of the target document type annotated in both textual and visual modalities. This is an expensive bottleneck in enterprise scenarios, where we want to train custom extractors for thousands of different document types in a scalable way. Pre-training an extractor model on unlabeled instances of the target document type, followed by a fine-tuning step on human-labeled instances, does not work in these scenarios, as it surpasses the maximum allowable training time allocated for the extractor. We address this scenario by proposing a Noise-Aware Training method, or NAT, in this paper. Instead of acquiring expensive human-labeled documents, NAT utilizes weakly labeled documents to train an extractor in a scalable way. To avoid degradation in the model's quality due to noisy, weakly labeled samples, NAT estimates the confidence of each training sample and incorporates it as an uncertainty measure during training. We train multiple state-of-the-art extractor models using NAT. Experiments on a number of publicly available and in-house datasets show that NAT-trained models are not only robust in performance -- they outperform a transfer-learning baseline by up to 6% in terms of macro-F1 score -- but also more label-efficient, reducing the amount of human effort required to obtain comparable performance by up to 73%.
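The abstract describes weighting each weakly labeled sample by an estimated confidence during training. Below is a minimal sketch of one plausible form of such a loss; it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def noise_aware_loss(logits, weak_labels, confidence):
    """Cross-entropy on weakly labeled samples, down-weighted by an
    estimated per-sample confidence in [0, 1] (illustrative form only).

    logits:      (B, C) model outputs
    weak_labels: (B,)   labels from the weak/noisy source
    confidence:  (B,)   estimated reliability of each weak label
    """
    per_sample = F.cross_entropy(logits, weak_labels, reduction="none")
    return (confidence * per_sample).mean()

loss = noise_aware_loss(torch.randn(4, 3), torch.tensor([0, 2, 1, 1]),
                        torch.tensor([0.9, 0.4, 0.7, 1.0]))
print(loss.item())
```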
2024-04-02T00:00:00
2404.00308
ST-LLM: Large Language Models Are Effective Temporal Learners
[ "Ruyang Liu", "Chen Li", "Haoran Tang", "Yixiao Ge", "Ying Shan", "Ge Li" ]
https://github.com/TencentARC/ST-LLM
Large Language Models (LLMs) have showcased impressive capabilities in text comprehension and generation, prompting research efforts towards video LLMs to facilitate human-AI interaction at the video level. However, how to effectively encode and understand videos in video-based dialogue systems remains to be solved. In this paper, we investigate a straightforward yet unexplored question: Can we feed all spatial-temporal tokens into the LLM, thus delegating the task of video sequence modeling to the LLMs? Surprisingly, this simple approach yields significant improvements in video understanding. Based upon this, we propose ST-LLM, an effective video-LLM baseline with Spatial-Temporal sequence modeling inside the LLM. Furthermore, to address the overhead and stability issues introduced by uncompressed video tokens within LLMs, we develop a dynamic masking strategy with tailor-made training objectives. For particularly long videos, we have also designed a global-local input module to balance efficiency and effectiveness. Consequently, we harness the LLM for proficient spatial-temporal modeling, while upholding efficiency and stability. Extensive experimental results attest to the effectiveness of our method. Through a more concise model and training pipeline, ST-LLM establishes a new state-of-the-art result on VideoChatGPT-Bench and MVBench. Code is available at https://github.com/TencentARC/ST-LLM.
2024-04-02T00:00:00
2404.01297
Streaming Dense Video Captioning
[ "Xingyi Zhou", "Anurag Arnab", "Shyamal Buch", "Shen Yan", "Austin Myers", "Xuehan Xiong", "Arsha Nagrani", "Cordelia Schmid" ]
https://github.com/google-research/scenic
An ideal model for dense video captioning -- predicting captions localized temporally in a video -- should be able to handle long input videos, predict rich, detailed textual descriptions, and be able to produce outputs before processing the entire video. Current state-of-the-art models, however, process a fixed number of downsampled frames, and make a single full prediction after seeing the whole video. We propose a streaming dense video captioning model that consists of two novel components: First, we propose a new memory module, based on clustering incoming tokens, which can handle arbitrarily long videos as the memory is of a fixed size. Second, we develop a streaming decoding algorithm that enables our model to make predictions before the entire video has been processed. Our model achieves this streaming ability, and significantly improves the state-of-the-art on three dense video captioning benchmarks: ActivityNet, YouCook2 and ViTT. Our code is released at https://github.com/google-research/scenic.
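A rough sketch of a fixed-size, clustering-based token memory in the spirit described above; the single k-means pass per incoming chunk is only an assumption about how such a module could look, not the paper's implementation.

```python
import numpy as np

def update_memory(memory, new_tokens, k):
    """Keep a fixed-size memory by clustering pooled tokens (one short
    k-means pass per incoming chunk; illustrative only)."""
    pool = np.concatenate([memory, new_tokens], axis=0)   # (m + t, d)
    centers = pool[:k].copy()                             # init from memory
    for _ in range(5):
        dists = ((pool[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = pool[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return centers                                        # (k, d)

rng = np.random.default_rng(0)
mem = rng.normal(size=(16, 8))
for _ in range(3):                      # three incoming frame chunks
    mem = update_memory(mem, rng.normal(size=(32, 8)), k=16)
print(mem.shape)  # (16, 8): memory stays fixed-size however long the video
```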
2024-04-03T00:00:00
2404.02060
Long-context LLMs Struggle with Long In-context Learning
[ "Tianle Li", "Ge Zhang", "Quy Duc Do", "Xiang Yue", "Wenhu Chen" ]
https://github.com/TIGER-AI-Lab/LongICLBench
Large Language Models (LLMs) have made significant strides in handling long sequences exceeding 32K tokens. However, their performance evaluation has largely been confined to metrics like perplexity and synthetic tasks, which may not fully capture their abilities in more nuanced, real-world scenarios. This study introduces a specialized benchmark (LIConBench) focusing on long in-context learning within the realm of extreme-label classification. We meticulously selected six datasets with a label range spanning 28 to 174 classes, covering different input (few-shot demonstration) lengths from 2K to 50K. Our benchmark requires LLMs to comprehend the entire input to recognize the massive label space and make correct predictions. We evaluate 13 long-context LLMs on our benchmark. We find that the long-context LLMs perform relatively well below a token length of 20K and that performance benefits from utilizing the long context window. However, after the context window exceeds 20K, the performance of most LLMs except GPT-4 dips dramatically. This suggests a notable gap in current LLM capabilities for processing and understanding long, context-rich sequences. Further analysis revealed a tendency among models to favor predictions for labels presented towards the end of the sequence. Their ability to reason over multiple pieces of information in a long sequence has yet to improve. Our study reveals that long-context understanding and reasoning remain challenging for existing LLMs. We believe LIConBench could serve as a more realistic evaluation for future long-context LLMs.
2024-04-03T00:00:00
2404.02078
Advancing LLM Reasoning Generalists with Preference Trees
[ "Lifan Yuan", "Ganqu Cui", "Hanbin Wang", "Ning Ding", "Xingyao Wang", "Jia Deng", "Boji Shan", "Huimin Chen", "Ruobing Xie", "Yankai Lin", "Zhenghao Liu", "Bowen Zhou", "Hao Peng", "Zhiyuan Liu", "Maosong Sun" ]
https://github.com/OpenBMB/Eurus
We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning through comprehensive benchmarking across 12 tests covering five tasks, and achieves a 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins of more than 13.3%. The strong performance of Eurus can be primarily attributed to UltraInteract, our newly-curated large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. UltraInteract can be used in both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning. UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks compared to their effectiveness in general conversations. Inspired by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model.
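For context only: the paper derives its own reward-modeling objective, which is not reproduced here. The sketch below shows the standard Bradley-Terry pairwise loss that preference pairs such as UltraInteract's can be trained with as a baseline.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen, r_rejected):
    """Standard Bradley-Terry objective for a reward model trained on
    preference pairs: push the chosen score above the rejected one."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores a reward head might produce for four preference pairs.
loss = pairwise_reward_loss(torch.tensor([1.2, 0.3, 2.0, 0.8]),
                            torch.tensor([0.9, 0.5, 1.1, 0.2]))
print(loss.item())
```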
2024-04-03T00:00:00
2404.02101
CameraCtrl: Enabling Camera Control for Text-to-Video Generation
[ "Hao He", "Yinghao Xu", "Yuwei Guo", "Gordon Wetzstein", "Bo Dai", "Hongsheng Li", "Ceyuan Yang" ]
https://github.com/hehao13/CameraCtrl
Controllability plays a crucial role in video generation since it allows users to create desired content. However, existing models have largely overlooked precise control of camera pose, which serves as a cinematic language to express deeper narrative nuances. To alleviate this issue, we introduce CameraCtrl, enabling accurate camera pose control for text-to-video (T2V) models. After precisely parameterizing the camera trajectory, a plug-and-play camera module is trained on a T2V model, leaving the other components untouched. Additionally, a comprehensive study on the effect of various datasets is conducted, suggesting that videos with diverse camera distribution and similar appearances indeed enhance controllability and generalization. Experimental results demonstrate the effectiveness of CameraCtrl in achieving precise and domain-adaptive camera control, marking a step forward in the pursuit of dynamic and customized video storytelling from textual and camera pose inputs. Our project website is at: https://hehao13.github.io/projects-CameraCtrl/.
2024-04-03T00:00:00
2404.01475
Are large language models superhuman chemists?
[ "Adrian Mirza", "Nawaf Alampara", "Sreekanth Kunchapu", "Benedict Emoekabu", "Aswanth Krishnan", "Mara Wilhelmi", "Macjonathan Okereke", "Juliane Eberhardt", "Amir Mohammad Elahi", "Maximilian Greiner", "Caroline T. Holick", "Tanya Gupta", "Mehrdad Asgari", "Christina Glaubitz", "Lea C. Klepsch", "Yannik Köster", "Jakob Meyer", "Santiago Miret", "Tim Hoffmann", "Fabian Alexander Kreth", "Michael Ringleb", "Nicole Roesner", "Ulrich S. Schubert", "Leanne M. Stafast", "Dinga Wonanke", "Michael Pieler", "Philippe Schwaller", "Kevin Maik Jablonka" ]
Large language models (LLMs) have gained widespread interest due to their ability to process human language and perform tasks on which they have not been explicitly trained. This is relevant for the chemical sciences, which face the problem of small and diverse datasets that are frequently in the form of text. LLMs have shown promise in addressing these issues and are increasingly being harnessed to predict chemical properties, optimize reactions, and even design and conduct experiments autonomously. However, we still have only a very limited systematic understanding of the chemical reasoning capabilities of LLMs, which would be required to improve models and mitigate potential harms. Here, we introduce "ChemBench," an automated framework designed to rigorously evaluate the chemical knowledge and reasoning abilities of state-of-the-art LLMs against the expertise of human chemists. We curated more than 7,000 question-answer pairs for a wide array of subfields of the chemical sciences, evaluated leading open and closed-source LLMs, and found that the best models outperformed the best human chemists in our study on average. The models, however, struggle with some chemical reasoning tasks that are easy for human experts and provide overconfident, misleading predictions, such as about chemicals' safety profiles. These findings underscore the dual reality that, although LLMs demonstrate remarkable proficiency in chemical tasks, further research is critical to enhancing their safety and utility in chemical sciences. Our findings also indicate a need for adaptations to chemistry curricula and highlight the importance of continuing to develop evaluation frameworks to improve safe and useful LLMs.
2024-04-03T00:00:00
2404.01331
LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model
[ "Musashi Hinck", "Matthew L. Olson", "David Cobbley", "Shao-Yen Tseng", "Vasudev Lal" ]
We train a suite of multimodal foundation models (MMFM) using the popular LLaVA framework with the recently released Gemma family of large language models (LLMs). Of particular interest is the 2B parameter Gemma model, which provides opportunities to construct capable small-scale MMFMs. In line with findings from other papers in this space, we test the effect of ablating three design features: pretraining the connector, utilizing a more powerful image backbone, and increasing the size of the language backbone. The resulting models, which we call LLaVA-Gemma, exhibit moderate performance on an array of evaluations, but fail to improve past the current comparably sized SOTA models. Closer analysis of performance shows mixed effects; skipping pretraining tends to reduce performance, larger vision models sometimes improve performance, and increasing language model size has inconsistent effects. We publicly release training recipes, code, and weights for the LLaVA-Gemma models.
2024-04-03T00:00:00
2404.01617
LLM-ABR: Designing Adaptive Bitrate Algorithms via Large Language Models
[ "Zhiyuan He", "Aashish Gottipati", "Lili Qiu", "Francis Y. Yan", "Xufang Luo", "Kenuo Xu", "Yuqing Yang" ]
We present LLM-ABR, the first system that utilizes the generative capabilities of large language models (LLMs) to autonomously design adaptive bitrate (ABR) algorithms tailored for diverse network characteristics. Operating within a reinforcement learning framework, LLM-ABR empowers LLMs to design key components such as states and neural network architectures. We evaluate LLM-ABR across diverse network settings, including broadband, satellite, 4G, and 5G. LLM-ABR consistently outperforms default ABR algorithms.
2024-04-03T00:00:00
2404.01367
Bigger is not Always Better: Scaling Properties of Latent Diffusion Models
[ "Kangfu Mei", "Zhengzhong Tu", "Mauricio Delbracio", "Hossein Talebi", "Vishal M. Patel", "Peyman Milanfar" ]
We study the scaling properties of latent diffusion models (LDMs) with an emphasis on their sampling efficiency. While improved network architecture and inference algorithms have been shown to effectively boost sampling efficiency of diffusion models, the role of model size -- a critical determinant of sampling efficiency -- has not been thoroughly examined. Through empirical analysis of established text-to-image diffusion models, we conduct an in-depth investigation into how model size influences sampling efficiency across varying sampling steps. Our findings unveil a surprising trend: when operating under a given inference budget, smaller models frequently outperform their larger equivalents in generating high-quality results. Moreover, we extend our study to demonstrate the generalizability of these findings by applying various diffusion samplers, exploring diverse downstream tasks, evaluating post-distilled models, as well as comparing performance relative to training compute. These findings open up new pathways for the development of LDM scaling strategies which can be employed to enhance generative capabilities within limited inference budgets.
2024-04-03T00:00:00
2404.01856
Poro 34B and the Blessing of Multilinguality
[ "Risto Luukkonen", "Jonathan Burdge", "Elaine Zosa", "Aarne Talman", "Ville Komulainen", "Väinö Hatanpää", "Peter Sarlin", "Sampo Pyysalo" ]
The pretraining of state-of-the-art large language models now requires trillions of words of text, which is orders of magnitude more than available for the vast majority of languages. While including text in more than one language is an obvious way to acquire more pretraining data, multilinguality is often seen as a curse, and most model training efforts continue to focus near-exclusively on individual large languages. We believe that multilinguality can be a blessing and that it should be possible to substantially improve over the capabilities of monolingual models for small languages through multilingual training. In this study, we introduce Poro 34B, a 34 billion parameter model trained for 1 trillion tokens of Finnish, English, and programming languages, and demonstrate that a multilingual training approach can produce a model that not only substantially advances over the capabilities of existing models for Finnish, but also excels in translation and is competitive in its class in generating English and programming languages. We release the model parameters, scripts, and data under open licenses at https://huggingface.co/LumiOpen/Poro-34B.
2024-04-03T00:00:00
2404.01744
Octopus v2: On-device language model for super agent
[ "Wei Chen", "Zhiyuan Li" ]
Language models have shown effectiveness in a variety of software applications, particularly in tasks related to automatic workflow. These models possess the crucial ability to call functions, which is essential in creating AI agents. Despite the high performance of large-scale language models in cloud environments, they are often associated with concerns over privacy and cost. Current on-device models for function calling face issues with latency and accuracy. Our research presents a new method that empowers an on-device model with 2 billion parameters to surpass the performance of GPT-4 in both accuracy and latency, and decrease the context length by 95%. When compared to Llama-7B with a RAG-based function calling mechanism, our method improves latency 35-fold. This method reduces the latency to levels deemed suitable for deployment across a variety of edge devices in production environments, aligning with the performance requisites for real-world applications.
2024-04-03T00:00:00
2404.02125
3D Congealing: 3D-Aware Image Alignment in the Wild
[ "Yunzhi Zhang", "Zizhang Li", "Amit Raj", "Andreas Engelhardt", "Yuanzhen Li", "Tingbo Hou", "Jiajun Wu", "Varun Jampani" ]
We propose 3D Congealing, a novel problem of 3D-aware alignment for 2D images capturing semantically similar objects. Given a collection of unlabeled Internet images, our goal is to associate the shared semantic parts from the inputs and aggregate the knowledge from 2D images to a shared 3D canonical space. We introduce a general framework that tackles the task without assuming shape templates, poses, or any camera parameters. At its core is a canonical 3D representation that encapsulates geometric and semantic information. The framework optimizes for the canonical representation together with the pose for each input image, and a per-image coordinate map that warps 2D pixel coordinates to the 3D canonical frame to account for the shape matching. The optimization procedure fuses prior knowledge from a pre-trained image generative model and semantic information from input images. The former provides strong knowledge guidance for this under-constrained task, while the latter provides the necessary information to mitigate the training data bias from the pre-trained model. Our framework can be used for various tasks such as correspondence matching, pose estimation, and image editing, achieving strong results on real-world image datasets under challenging illumination conditions and on in-the-wild online image collections.
2024-04-03T00:00:00
2404.01954
HyperCLOVA X Technical Report
[ "Kang Min Yoo", "Jaegeun Han", "Sookyo In", "Heewon Jeon", "Jisu Jeong", "Jaewook Kang", "Hyunwook Kim", "Kyung-Min Kim", "Munhyong Kim", "Sungju Kim", "Donghyun Kwak", "Hanock Kwak", "Se Jung Kwon", "Bado Lee", "Dongsoo Lee", "Gichang Lee", "Jooho Lee", "Baeseong Park", "Seongjin Shin", "Joonsang Yu", "Seolki Baek", "Sumin Byeon", "Eungsup Cho", "Dooseok Choe", "Jeesung Han", "Youngkyun Jin", "Hyein Jun", "Jaeseung Jung", "Chanwoong Kim", "Jinhong Kim", "Jinuk Kim", "Dokyeong Lee", "Dongwook Park", "Jeong Min Sohn", "Sujung Han", "Jiae Heo", "Sungju Hong", "Mina Jeon", "Hyunhoon Jung", "Jungeun Jung", "Wangkyo Jung", "Chungjoon Kim", "Hyeri Kim", "Jonghyun Kim", "Min Young Kim", "Soeun Lee", "Joonhee Park", "Jieun Shin", "Sojin Yang", "Jungsoon Yoon", "Hwaran Lee", "Sanghwan Bae", "Jeehwan Cha", "Donghoon Ham", "Youngki Hong", "Yunki Hong", "Myunggeun Ji", "Yeguk Jin", "Chansong Jo", "Shinyoung Joo", "Seunghwan Jung", "Hyomin Kim", "Jungwhan Kim", "Minkyoung Kim", "Minseung Kim", "Sungdong Kim", "Yonghee Kim", "Youngjun Kim", "Donghyeon Ko", "Dughyun Lee", "Jaehong Lee", "Jieun Lee", "Jongjin Lee", "Min Young Lee", "Yehbin Lee", "Taehong Min", "Kiyoon Moon", "Jaesun Park", "Kyuyon Park", "Seunghyun Seo", "Gyubin Son", "Wonjoon Yoo", "Myungin You", "Doheon Ahn", "Homin Ahn", "Joohee Ahn", "Seongmin Ahn", "Chanwoo An", "Hyeryun An", "Junho An", "Sang-Min An", "Boram Byun", "Jongho Cha", "Minji Chang", "Seunggyu Chang", "Haesong Cho", "Youngdo Cho", "Dalnim Choi", "Daseul Choi", "Hyoseok Choi", "Minseong Choi", "Sangho Choi", "Seongjae Choi", "Wooyong Choi", "Sewhan Chun", "Dong Young Go", "Chiheon Ham", "Danbi Han", "Jaemin Han", "Mihak Hong", "Moonyoung Hong", "Sung Bum Hong", "Seongchan Hwang", "Eunbin Hyun", "Jinbae Im", "Jaehyung Jang", "Jaeni Jang", "Sihyeon Jang", "Sungwon Jang", "Joonha Jeon", "Yujin Jeon", "Daun Jeong", "Joonhyun Jeong", "Kyeongseok Jeong", "Mini Jeong", "Yeji Jeong", "Sol Jin", "Hanbyeol Jo", "Hanju Jo", "Minjung Jo", "Lee Jonghyun", "Chaeyoon Jung", "Hyungsik Jung", "Jaeuk Jung", "Ju Hwan Jung", "Kwangsun Jung", "Seungjae Jung", "Soonwon Ka", "Donghan Kang", "Soyoung Kang", "Taeho Kil", "Areum Kim", "Beomyoung Kim", "Byeongwook Kim", "Daehee Kim", "Dong-Gyun Kim", "Donggook Kim", "Donghyun Kim", "Euna Kim", "Eunchul Kim", "Geewook Kim", "Gyu Ri Kim", "Hanbyul Kim", "Heesu Kim", "Isaac Kim", "Jeonghoon Kim", "Jihye Kim", "Joonghoon Kim", "Minjae Kim", "Minsub Kim", "Pil Hwan Kim", "Sammy Kim", "Seokhun Kim", "Seonghyeon Kim", "Soojin Kim", "Soong Kim", "Soyoon Kim", "Sunyoung Kim", "Taeho Kim", "Wonho Kim", "Yoonsik Kim", "You Jin Kim", "Yuri Kim", "Beomseok Kwon", "Ohsung Kwon", "Yoo-Hwan Kwon", "Anna Lee", "Byungwook Lee", "Changho Lee", "Daun Lee", "Dongjae Lee", "Ha-Ram Lee", "Hodong Lee", "Hwiyeong Lee", "Hyunmi Lee", "Injae Lee", "Jaeung Lee", "Jeongsang Lee", "Jisoo Lee", "Joongjae Lee", "Juhan Lee", "Jung Hyun Lee", "Junghoon Lee", "Junwoo Lee", "Se Yun Lee", "Sujin Lee", "Sungjae Lee", "Sungwoo Lee", "Wonjae Lee", "Zoo Hyun Lee", "Jong Kun Lim", "Kun Lim", "Taemin Lim", "Yuri Min", "Nuri Na", "Jeongyeon Nam", "Kyeong-Min Nam", "Yeonseog Noh", "Biro Oh", "Hyangnam Oh", "Jung-Sik Oh", "Solgil Oh", "Yeontaek Oh", "Boyoun Park", "Cheonbok Park", "Dongju Park", "Hyeonjin Park", "Hyun Tae Park", "Hyunjung Park", "Jihye Park", "Jooseok Park", "Junghwan Park", "Jungsoo Park", "Miru Park", "Sang Hee Park", "Seunghyun Park", "Taerim Park", "Wonkyeong Park", "Hyunjoon Ryu", "Jeonghun Ryu", "Nahyeon Ryu", "Soonshin Seo", "Suk Min Seo", "Yoonjeong Shim", "Kyuyong 
Shin", "Wonkwang Shin", "Hyun Sim", "Mihyun Sim", "Woongseob Sim", "Hyejin Soh", "Bokyoung Son", "Hyunjun Son", "Seulah Son", "Chi-Yun Song", "Chiyoung Song", "Ka Yeon Song", "Minchul Song", "Seungmin Song", "Jisung Wang", "Matt Yeo", "Yonggoo Yeo", "Myeong Yeon Yi", "Moon Bin Yim", "Taehwan Yoo", "Youngjoon Yoo", "Sungmin Yoon", "Young Jin Yoon", "Hangyeol Yu", "Ui Seon Yu", "Xingdong Zuo", "Jeongin Bae", "Joungeun Bae", "Hyunsoo Cho", "Seonghyun Cho", "Yongjin Cho", "Taekyoon Choi", "Yera Choi", "Jiwan Chung", "Zhenghui Han", "Byeongho Heo", "Euisuk Hong", "Taebaek Hwang", "Seonyeol Im", "Sumin Jegal", "Sumin Jeon", "Yelim Jeong", "Yonghyun Jeong", "Can Jiang", "Juyong Jiang", "Jiho Jin", "Ara Jo", "Younghyun Jo", "Hoyoun Jung", "Juyoung Jung", "Dae Hee Kim", "Ginam Kim", "Hangyeol Kim", "Heeseung Kim", "Hyojin Kim", "Hyojun Kim", "Hyun-Ah Kim", "Jeehye Kim", "Jin-Hwa Kim", "Jiseon Kim", "Jonghak Kim", "Jung Yoon Kim", "Rak Yeong Kim", "Seoyoon Kim", "Sewon Kim", "Sooyoung Kim", "Sukyoung Kim", "Taeyong Kim", "Naeun Ko", "Bonseung Koo", "Heeyoung Kwak", "Haena Kwon", "Youngjin Kwon", "Boram Lee", "Bruce W. Lee", "Dagyeong Lee", "Erin Lee", "Euijin Lee", "Ha Gyeong Lee", "Hyojin Lee", "Hyunjeong Lee", "Jeeyoon Lee", "Jeonghyun Lee", "Jongheok Lee", "Joonhyung Lee", "Junhyuk Lee", "Mingu Lee", "Nayeon Lee", "Sangkyu Lee", "Se Young Lee", "Seulgi Lee", "Seung Jin Lee", "Suhyeon Lee", "Yeonjae Lee", "Yesol Lee", "Youngbeom Lee", "Yujin Lee", "Shaodong Li", "Tianyu Liu", "Seong-Eun Moon", "Taehong Moon", "Max-Lasse Nihlenramstroem", "Wonseok Oh", "Yuri Oh", "Hongbeen Park", "Hyekyung Park", "Nohil Park", "Sangjin Park", "Jiwon Ryu", "Miru Ryu", "Simo Ryu", "Ahreum Seo", "Hee Seo", "Kangdeok Seo", "Jamin Shin", "Seungyoun Shin", "Heetae Sin", "Jiangping Wang", "Lei Wang", "Ning Xiang", "Longxiang Xiao", "Jing Xu", "Seonyeong Yi", "Haanju Yoo", "Haneul Yoo", "Hwanhee Yoo", "Liang Yu", "Youngjae Yu", "Weijie Yuan", "Bo Zeng", "Qian Zhou", "Kyunghyun Cho", "Jung-Woo Ha", "Joonsuk Park", "Jihyun Hwang", "Hyoung Jo Kwon", "Soonyong Kwon", "Jungyeon Lee", "Seungho Lee", "Seungho Choi", "Sang-Woo Lee", "Jung Hwa Lim", "Nako Sung" ]
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, along with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction-tuning with high-quality human-annotated datasets while abiding by strict safety guidelines reflecting our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean backed by a deep understanding of the language and cultural nuances. Further analysis of the inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization ability to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries in developing their sovereign LLMs.
2024-04-04T00:00:00
2404.02905
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction
[ "Keyu Tian", "Yi Jiang", "Zehuan Yuan", "Bingyue Peng", "Liwei Wang" ]
https://github.com/FoundationVision/VAR
We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction". This simple, intuitive methodology allows autoregressive (AR) transformers to learn visual distributions fast and generalize well: VAR, for the first time, makes AR models surpass diffusion transformers in image generation. On the ImageNet 256x256 benchmark, VAR significantly improves the AR baseline, improving Frechet inception distance (FID) from 18.65 to 1.80 and inception score (IS) from 80.4 to 356.4, with around 20x faster inference speed. It is also empirically verified that VAR outperforms the Diffusion Transformer (DiT) in multiple dimensions including image quality, inference speed, data efficiency, and scalability. Scaling up VAR models exhibits clear power-law scaling laws similar to those observed in LLMs, with linear correlation coefficients near -0.998 as solid evidence. VAR further showcases zero-shot generalization ability in downstream tasks including image in-painting, out-painting, and editing. These results suggest VAR has initially emulated the two important properties of LLMs: Scaling Laws and zero-shot task generalization. We have released all models and code to promote the exploration of AR/VAR models for visual generation and unified learning.
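A conceptual sketch of coarse-to-fine next-scale prediction. The `predict_scale` callable is a hypothetical stand-in; the real VAR predicts discrete token maps with a transformer rather than continuous feature maps.

```python
import torch
import torch.nn.functional as F

def next_scale_generation(predict_scale, scales=(1, 2, 4, 8), d=16):
    """Each step predicts a whole map at the next resolution, conditioned
    on the previous (upsampled) map; illustrative only."""
    context = torch.zeros(1, d, scales[0], scales[0])
    maps = []
    for s in scales:
        cond = F.interpolate(context, size=(s, s), mode="nearest")
        new_map = predict_scale(cond, s)          # (1, d, s, s)
        maps.append(new_map)
        context = new_map
    return maps[-1]                               # finest-scale map

# Dummy predictor standing in for the autoregressive transformer.
dummy = lambda cond, s: cond + torch.randn(1, cond.shape[1], s, s)
out = next_scale_generation(dummy)
print(out.shape)  # torch.Size([1, 16, 8, 8])
```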
2024-04-04T00:00:00
2404.02883
On the Scalability of Diffusion-based Text-to-Image Generation
[ "Hao Li", "Yang Zou", "Ying Wang", "Orchid Majumder", "Yusheng Xie", "R. Manmatha", "Ashwin Swaminathan", "Zhuowen Tu", "Stefano Ermon", "Stefano Soatto" ]
Scaling up model and data size has been quite successful for the evolution of LLMs. However, the scaling law for diffusion-based text-to-image (T2I) models has not been fully explored. It is also unclear how to efficiently scale the model for better performance at reduced cost. Different training settings and expensive training costs make a fair model comparison extremely difficult. In this work, we empirically study the scaling properties of diffusion-based T2I models by performing extensive and rigorous ablations on scaling both the denoising backbone and the training set, including training scaled UNet and Transformer variants ranging from 0.4B to 4B parameters on datasets of up to 600M images. For model scaling, we find that the location and amount of cross-attention distinguish the performance of existing UNet designs, and that increasing the number of transformer blocks is more parameter-efficient for improving text-image alignment than increasing channel numbers. We then identify an efficient UNet variant, which is 45% smaller and 28% faster than SDXL's UNet. On the data scaling side, we show that the quality and diversity of the training set matter more than dataset size alone. Increasing caption density and diversity improves text-image alignment performance and learning efficiency. Finally, we provide scaling functions to predict the text-image alignment performance as functions of model size, compute, and dataset size.
2024-04-04T00:00:00
2404.02893
ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline
[ "Yifan Xu", "Xiao Liu", "Xinghan Liu", "Zhenyu Hou", "Yueyan Li", "Xiaohan Zhang", "Zihan Wang", "Aohan Zeng", "Zhengxiao Du", "Wenyi Zhao", "Jie Tang", "Yuxiao Dong" ]
https://github.com/THUDM/ChatGLM-Math
Large language models (LLMs) have shown excellent mastery of human language, but still struggle in real-world applications that require mathematical problem-solving. While many strategies and datasets have been developed to enhance LLMs' mathematical abilities, it remains a challenge to simultaneously maintain and improve both language and mathematical capabilities in deployed LLM systems. In this work, we tailor the Self-Critique pipeline, which addresses the challenge in the feedback learning stage of LLM alignment. We first train a general Math-Critique model from the LLM itself to provide feedback signals. Then, we sequentially employ rejective fine-tuning and direct preference optimization over the LLM's own generations for data collection. Based on ChatGLM3-32B, we conduct a series of experiments on both academic datasets and our newly created challenging dataset, MathUserEval. Results show that our pipeline significantly enhances the LLM's mathematical problem-solving while still improving its language ability, outperforming LLMs that could be two times larger. Related techniques have been deployed to ChatGLM (https://chatglm.cn), an online serving LLM. The related evaluation dataset and scripts are released at https://github.com/THUDM/ChatGLM-Math.
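A minimal sketch of the rejective-fine-tuning step implied above: keep only self-generated solutions that a critique model scores highly, then feed the survivors into supervised or preference tuning. The scorer here is a hypothetical stand-in for Math-Critique.

```python
def rejective_filter(samples, critique_fn, threshold=8.0):
    """Filter (question, answer) pairs by a critique score in [0, 10].

    critique_fn is a hypothetical callable standing in for the trained
    Math-Critique model; this is an illustration, not the released code.
    """
    kept = []
    for question, answer in samples:
        score = critique_fn(question, answer)
        if score >= threshold:
            kept.append({"question": question, "answer": answer,
                         "score": score})
    return kept

toy = [("1+1=?", "2"), ("2*3=?", "5")]
print(rejective_filter(toy, lambda q, a: 10.0 if a == "2" else 0.0))
```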
2024-04-04T00:00:00
2404.02747
Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models
[ "Wentian Zhang", "Haozhe Liu", "Jinheng Xie", "Francesco Faccio", "Mike Zheng Shou", "Jürgen Schmidhuber" ]
https://github.com/HaozheLiu-ST/T-GATE
This study explores the role of cross-attention during inference in text-conditional diffusion models. We find that cross-attention outputs converge to a fixed point after a few inference steps. Accordingly, the time point of convergence naturally divides the entire inference process into two stages: an initial semantics-planning stage, during which the model relies on cross-attention to plan text-oriented visual semantics, and a subsequent fidelity-improving stage, during which the model tries to generate images from previously planned semantics. Surprisingly, ignoring text conditions in the fidelity-improving stage not only reduces computational complexity, but also maintains model performance. This yields a simple and training-free method called TGATE for efficient generation, which caches the cross-attention output once it converges and keeps it fixed during the remaining inference steps. Our empirical study on the MS-COCO validation set confirms its effectiveness. The source code of TGATE is available at https://github.com/HaozheLiu-ST/T-GATE.
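A minimal sketch of the caching idea, assuming an existing cross-attention module and a chosen gate step; this is an illustration, not the released TGATE code.

```python
import torch

class CachedCrossAttention(torch.nn.Module):
    """Run real cross-attention only for the first `gate_step` denoising
    steps, then reuse the cached output for all remaining steps."""

    def __init__(self, attn, gate_step=10):
        super().__init__()
        self.attn = attn            # the existing cross-attention callable
        self.gate_step = gate_step
        self.cache = None

    def forward(self, hidden, text_context, step):
        if step < self.gate_step or self.cache is None:
            self.cache = self.attn(hidden, text_context)
        return self.cache           # fixed after convergence

# Toy attention stand-in: mix pooled text context into the hidden states.
toy_attn = lambda h, c: h + c.mean(dim=1, keepdim=True)
layer = CachedCrossAttention(toy_attn, gate_step=2)
h, c = torch.randn(1, 16, 8), torch.randn(1, 4, 8)
for t in range(5):
    h = layer(h, c, step=t)
print(h.shape)  # torch.Size([1, 16, 8])
```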
2024-04-04T00:00:00
2404.02733
InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation
[ "Haofan Wang", "Qixun Wang", "Xu Bai", "Zekui Qin", "Anthony Chen" ]
https://github.com/InstantStyle/InstantStyle
Tuning-free diffusion-based models have demonstrated significant potential in the realm of image personalization and customization. However, despite this notable progress, current models continue to grapple with several complex challenges in producing style-consistent images. Firstly, the concept of style is inherently underdetermined, encompassing a multitude of elements such as color, material, atmosphere, design, and structure, among others. Secondly, inversion-based methods are prone to style degradation, often resulting in the loss of fine-grained details. Lastly, adapter-based approaches frequently require meticulous weight tuning for each reference image to achieve a balance between style intensity and text controllability. In this paper, we commence by examining several compelling yet frequently overlooked observations. We then proceed to introduce InstantStyle, a framework designed to address these issues through the implementation of two key strategies: 1) A straightforward mechanism that decouples style and content from reference images within the feature space, predicated on the assumption that features within the same space can be either added to or subtracted from one another. 2) The injection of reference image features exclusively into style-specific blocks, thereby preventing style leaks and eschewing the need for cumbersome weight tuning, which often characterizes more parameter-heavy designs. Our work demonstrates superior visual stylization outcomes, striking an optimal balance between the intensity of style and the controllability of textual elements. Our code will be available at https://github.com/InstantStyle/InstantStyle.
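A rough sketch of the feature-space decoupling assumption (style ≈ reference-image feature minus content feature). The upstream feature extractors are assumed to exist; this is not the released implementation.

```python
import torch
import torch.nn.functional as F

def decouple_style(ref_image_feat, content_text_feat):
    """Subtract the normalized content feature (e.g. of the subject
    description) from the reference-image feature so that mostly style
    information remains; the result would be injected only into
    style-specific blocks."""
    img = F.normalize(ref_image_feat, dim=-1)
    txt = F.normalize(content_text_feat, dim=-1)
    return img - txt

style = decouple_style(torch.randn(1, 768), torch.randn(1, 768))
print(style.shape)  # torch.Size([1, 768])
```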
2024-04-04T00:00:00
2404.02575
Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
[ "Hyungjoo Chae", "Yeonghyeon Kim", "Seungone Kim", "Kai Tzu-iunn Ong", "Beong-woo Kwak", "Moohyeon Kim", "Seonghwan Kim", "Taeyoon Kwon", "Jiwan Chung", "Youngjae Yu", "Jinyoung Yeo" ]
Algorithmic reasoning refers to the ability to understand the complex patterns behind a problem and decompose them into a sequence of reasoning steps towards the solution. This nature of algorithmic reasoning makes it a challenge for large language models (LLMs), even though they have demonstrated promising performance in other reasoning tasks. Within this context, some recent studies use programming languages (e.g., Python) to express the necessary logic for solving a given instance/question (e.g., Program-of-Thought), inspired by their strict and precise syntax. However, it is non-trivial to write executable code that expresses the correct logic on the fly within a single inference call. Also, the code generated specifically for an instance cannot be reused for others, even if they are from the same task and might require identical logic to solve. This paper presents Think-and-Execute, a novel framework that decomposes the reasoning process of language models into two steps. (1) In Think, we discover a task-level logic that is shared across all instances for solving a given task and then express the logic with pseudocode; (2) In Execute, we further tailor the generated pseudocode to each instance and simulate the execution of the code. With extensive experiments on seven algorithmic reasoning tasks, we demonstrate the effectiveness of Think-and-Execute. Our approach improves LMs' reasoning more than several strong baselines performing instance-specific reasoning (e.g., CoT and PoT), suggesting the helpfulness of discovering task-level logic. Also, we show that compared to natural language, pseudocode can better guide the reasoning of LMs, even though they are trained to follow natural language instructions.
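A minimal sketch of the two-stage flow, with a hypothetical text-in/text-out `llm` callable standing in for a real model; the prompts are illustrative, not the paper's.

```python
def think_and_execute(task_description, instance, llm):
    """(1) THINK once per task to obtain shared pseudocode;
    (2) EXECUTE by asking the model to simulate that pseudocode
    on each instance. `llm` is a hypothetical stand-in."""
    pseudocode = llm(
        "Write task-level pseudocode that solves the following task:\n"
        + task_description)
    answer = llm(
        "Simulate the execution of this pseudocode on the given input, "
        "tracking intermediate variables, then output the final answer.\n"
        f"Pseudocode:\n{pseudocode}\nInput:\n{instance}")
    return answer

# Toy echo model so the sketch runs end-to-end without any API.
print(think_and_execute("Reverse a string.", "abc", lambda p: p[-16:]))
```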
2024-04-04T00:00:00
2404.02258
Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
[ "David Raposo", "Sam Ritter", "Blake Richards", "Timothy Lillicrap", "Peter Conway Humphreys", "Adam Santoro" ]
Transformer-based language models spread FLOPs uniformly across input sequences. In this work we demonstrate that transformers can instead learn to dynamically allocate FLOPs (or compute) to specific positions in a sequence, optimising the allocation along the sequence for different layers across the model depth. Our method enforces a total compute budget by capping the number of tokens (k) that can participate in the self-attention and MLP computations at a given layer. The tokens to be processed are determined by the network using a top-k routing mechanism. Since k is defined a priori, this simple procedure uses a static computation graph with known tensor sizes, unlike other conditional computation techniques. Nevertheless, since the identities of the k tokens are fluid, this method can expend FLOPs non-uniformly across the time and model depth dimensions. Thus, compute expenditure is entirely predictable in sum total, but dynamic and context-sensitive at the token-level. Not only do models trained in this way learn to dynamically allocate compute, they do so efficiently. These models match baseline performance for equivalent FLOPS and wall-clock times to train, but require a fraction of the FLOPs per forward pass, and can be upwards of 50% faster to step during post-training sampling.
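A minimal sketch of top-k token routing through a block; the paper's routing, capacity, and training details are more involved, and this only illustrates the mechanism.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Only the top-k tokens (per sequence) pass through the block;
    the remaining tokens skip it via the residual path."""

    def __init__(self, d_model=64, k=16):
        super().__init__()
        self.router = nn.Linear(d_model, 1)
        self.block = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                   nn.GELU(),
                                   nn.Linear(4 * d_model, d_model))
        self.k = k

    def forward(self, x):                      # x: (B, T, d)
        scores = self.router(x).squeeze(-1)    # (B, T)
        topk = scores.topk(self.k, dim=1).indices
        out = x.clone()
        for b in range(x.shape[0]):            # gather/scatter kept simple
            sel = x[b, topk[b]]                # (k, d)
            out[b, topk[b]] = sel + self.block(sel)
        return out

y = MoDBlock()(torch.randn(2, 128, 64))
print(y.shape)  # torch.Size([2, 128, 64])
```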
2024-04-04T00:00:00
2404.02514
Freditor: High-Fidelity and Transferable NeRF Editing by Frequency Decomposition
[ "Yisheng He", "Weihao Yuan", "Siyu Zhu", "Zilong Dong", "Liefeng Bo", "Qixing Huang" ]
This paper enables high-fidelity, transferable NeRF editing by frequency decomposition. Recent NeRF editing pipelines lift 2D stylization results to 3D scenes but suffer from blurry results and fail to capture detailed structures, owing to inconsistencies between the 2D edits. Our critical insight is that the low-frequency components of images are more multiview-consistent after editing than their high-frequency parts. Moreover, the appearance style is mainly exhibited in the low-frequency components, while the content details especially reside in the high-frequency parts. This motivates us to perform editing on low-frequency components, which results in high-fidelity edited scenes. In addition, the editing is performed in the low-frequency feature space, enabling stable intensity control and novel scene transfer. Comprehensive experiments conducted on photorealistic datasets demonstrate the superior performance of high-fidelity and transferable NeRF editing. The project page is at https://aigc3d.github.io/freditor.
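A rough sketch of the frequency-decomposition idea on a single image, using a Gaussian low-pass filter and a hypothetical stylization function; the actual method operates on NeRF feature space rather than raw pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edit_low_frequency(image, stylize, sigma=4.0):
    """Stylize only the low-frequency component and add the original
    high-frequency detail back, keeping fine structures intact."""
    low = gaussian_filter(image, sigma=(sigma, sigma, 0))  # per-channel blur
    high = image - low
    return stylize(low) + high

img = np.random.rand(64, 64, 3).astype(np.float32)
warmer = lambda x: np.clip(x * np.array([1.1, 1.0, 0.9]), 0, 1)  # toy edit
print(edit_low_frequency(img, warmer).shape)  # (64, 64, 3)
```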
2024-04-05T00:00:00
2404.03543
CodeEditorBench: Evaluating Code Editing Capability of Large Language Models
[ "Jiawei Guo", "Ziming Li", "Xueling Liu", "Kaijing Ma", "Tianyu Zheng", "Zhouliang Yu", "Ding Pan", "Yizhi LI", "Ruibo Liu", "Yue Wang", "Shuyue Guo", "Xingwei Qu", "Xiang Yue", "Ge Zhang", "Wenhu Chen", "Jie Fu" ]
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.
2024-04-05T00:00:00
2404.03592
ReFT: Representation Finetuning for Language Models
[ "Zhengxuan Wu", "Aryaman Arora", "Zheng Wang", "Atticus Geiger", "Dan Jurafsky", "Christopher D. Manning", "Christopher Potts" ]
https://github.com/stanfordnlp/pyreft
Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. Here, we pursue this hypothesis by developing a family of Representation Finetuning (ReFT) methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT). LoReFT is a drop-in replacement for existing PEFTs and learns interventions that are 10x-50x more parameter-efficient than prior state-of-the-art PEFTs. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, Alpaca-Eval v1.0, and GLUE. In all these evaluations, LoReFT delivers the best balance of efficiency and performance, and almost always outperforms state-of-the-art PEFTs. We release a generic ReFT training library publicly at https://github.com/stanfordnlp/pyreft.
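A minimal sketch of the LoReFT intervention on a single hidden state, h + R^T(Wh + b - Rh), with R a low-rank projection with orthonormal rows and (W, b) a learned linear map; shapes are illustrative and this is not the pyreft library code.

```python
import torch

def loreft_intervention(h, R, W, b):
    """LoReFT-style edit of a hidden state: h + R^T (W h + b - R h)."""
    return h + R.T @ (W @ h + b - R @ h)

d, r = 768, 8                                # hidden size, intervention rank
h = torch.randn(d)
R = torch.linalg.qr(torch.randn(d, r)).Q.T   # (r, d), orthonormal rows
W, b = torch.randn(r, d), torch.randn(r)
print(loreft_intervention(h, R, W, b).shape)  # torch.Size([768])
```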
2024-04-05T00:00:00
2404.03626
Training LLMs over Neurally Compressed Text
[ "Brian Lester", "Jaehoon Lee", "Alex Alemi", "Jeffrey Pennington", "Adam Roberts", "Jascha Sohl-Dickstein", "Noah Constant" ]
In this paper, we explore the idea of training large language models (LLMs) over highly compressed text. While standard subword tokenizers compress text by a small factor, neural text compressors can achieve much higher rates of compression. If it were possible to train LLMs directly over neurally compressed text, this would confer advantages in training and serving efficiency, as well as easier handling of long text spans. The main obstacle to this goal is that strong compression tends to produce opaque outputs that are not well-suited for learning. In particular, we find that text naïvely compressed via Arithmetic Coding is not readily learnable by LLMs. To overcome this, we propose Equal-Info Windows, a novel compression technique whereby text is segmented into blocks that each compress to the same bit length. Using this method, we demonstrate effective learning over neurally compressed text that improves with scale, and outperforms byte-level baselines by a wide margin on perplexity and inference speed benchmarks. While our method delivers worse perplexity than subword tokenizers for models trained with the same parameter count, it has the benefit of shorter sequence lengths. Shorter sequence lengths require fewer autoregressive generation steps, and reduce latency. Finally, we provide extensive analysis of the properties that contribute to learnability, and offer concrete suggestions for how to further improve the performance of high-compression tokenizers.
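A rough sketch of the Equal-Info Windows idea: grow each window until its compressed size reaches a fixed bit budget, then start a new window. zlib stands in for the neural/arithmetic coder used in the paper, and the bit budget is an arbitrary illustrative value.

```python
import zlib

def equal_info_windows(text, bits_per_window=256):
    """Segment text into windows that each compress to roughly the same
    number of bits (illustrative; zlib replaces the learned compressor)."""
    windows, start = [], 0
    for end in range(1, len(text) + 1):
        compressed_bits = 8 * len(zlib.compress(text[start:end].encode()))
        if compressed_bits >= bits_per_window:
            windows.append(text[start:end])
            start = end
    if start < len(text):
        windows.append(text[start:])
    return windows

print(equal_info_windows("the quick brown fox jumps over the lazy dog " * 20))
```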