date: timestamp[ns] (range 2023-05-05 00:00:00 to 2025-07-11 00:00:00)
arxiv_id: string (length 10)
title: string (length 8 to 202)
authors: list (1 to 942 entries)
github: string (length 0 to 116; may be empty)
abstract: string (length 165 to 1.92k)
2023-12-13T00:00:00
2312.07231
Fast Training of Diffusion Transformer with Extreme Masking for 3D Point Clouds Generation
[ "Shentong Mo", "Enze Xie", "Yue Wu", "Junsong Chen", "Matthias Nießner", "Zhenguo Li" ]
Diffusion Transformers have recently shown remarkable effectiveness in generating high-quality 3D point clouds. However, training voxel-based diffusion models for high-resolution 3D voxels remains prohibitively expensive due to the cubic complexity of attention operators, which arises from the additional dimension of voxels. Motivated by the inherent redundancy of 3D compared to 2D, we propose FastDiT-3D, a novel masked diffusion transformer tailored for efficient 3D point cloud generation, which greatly reduces training costs. Specifically, we draw inspiration from masked autoencoders to dynamically operate the denoising process on masked voxelized point clouds. We also propose a novel voxel-aware masking strategy to adaptively aggregate background/foreground information from voxelized point clouds. Our method achieves state-of-the-art performance with an extreme masking ratio of nearly 99%. Moreover, to improve multi-category 3D generation, we introduce a Mixture-of-Experts (MoE) design into the 3D diffusion model. Each category can learn a distinct diffusion path with different experts, relieving gradient conflict. Experimental results on the ShapeNet dataset demonstrate that our method achieves state-of-the-art high-fidelity and diverse 3D point cloud generation performance. Our FastDiT-3D improves 1-Nearest Neighbor Accuracy and Coverage metrics when generating 128-resolution voxel point clouds, using only 6.5% of the original training cost.
2023-12-13T00:00:00
2312.07424
How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation
[ "Zhongyi Han", "Guanglin Zhou", "Rundong He", "Jindong Wang", "Xing Xie", "Tailin Wu", "Yilong Yin", "Salman Khan", "Lina Yao", "Tongliang Liu", "Kun Zhang" ]
https://github.com/jameszhou-gl/gpt-4v-distribution-shift
In machine learning, generalization against distribution shifts -- where deployment conditions diverge from the training scenarios -- is crucial, particularly in fields like climate modeling, biomedicine, and autonomous driving. The emergence of foundation models, distinguished by their extensive pretraining and task versatility, has led to an increased interest in their adaptability to distribution shifts. GPT-4V(ision) acts as the most advanced publicly accessible multimodal foundation model, with extensive applications across various domains, including anomaly detection, video understanding, image generation, and medical diagnosis. However, its robustness against distribution shifts remains largely underexplored. Addressing this gap, this study rigorously evaluates GPT-4V's adaptability and generalization capabilities in dynamic environments, benchmarking against prominent models like CLIP and LLaVA. We delve into GPT-4V's zero-shot generalization across 13 diverse datasets spanning natural, medical, and molecular domains. We further investigate its adaptability to controlled data perturbations and examine the efficacy of in-context learning as a tool to enhance its adaptation. Our findings delineate GPT-4V's capability boundaries in distribution shifts, shedding light on its strengths and limitations across various scenarios. Importantly, this investigation contributes to our understanding of how AI foundation models generalize to distribution shifts, offering pivotal insights into their adaptability and robustness. Code is publicly available at https://github.com/jameszhou-gl/gpt-4v-distribution-shift.
2023-12-13T00:00:00
2312.06681
Steering Llama 2 via Contrastive Activation Addition
[ "Nina Rimsky", "Nick Gabrieli", "Julian Schulz", "Meg Tong", "Evan Hubinger", "Alexander Matt Turner" ]
We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying activations during their forward passes. CAA computes ``steering vectors'' by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior such as factual versus hallucinatory responses. During inference, these steering vectors are added at all token positions after the user's prompt with either a positive or negative coefficient, allowing precise control over the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2 Chat using both multiple-choice behavioral question datasets and open-ended generation tasks. We demonstrate that CAA significantly alters model behavior, outperforms traditional methods like finetuning and few-shot prompting, and minimally reduces capabilities. Moreover, by employing various activation space interpretation methods, we gain deeper insights into CAA's mechanisms. CAA both accurately steers model outputs and also sheds light on how high-level concepts are represented in Large Language Models (LLMs).
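As a rough illustration of the mechanism this abstract describes, here is a minimal PyTorch sketch of the two CAA steps: averaging residual-stream activation differences into a steering vector, then adding it (scaled by a coefficient) at all positions after the prompt. The tensor shapes, layer choice, and coefficient are illustrative assumptions, not the authors' implementation.

```python
import torch

def compute_steering_vector(pos_acts: torch.Tensor, neg_acts: torch.Tensor) -> torch.Tensor:
    """pos_acts, neg_acts: (num_pairs, hidden_dim) residual-stream activations taken
    at the same layer/position for paired positive and negative examples."""
    return (pos_acts - neg_acts).mean(dim=0)

def apply_caa(resid: torch.Tensor, steering: torch.Tensor,
              coeff: float, prompt_len: int) -> torch.Tensor:
    """Add the steering vector to every token position after the user's prompt.
    resid: (batch, seq_len, hidden_dim) residual stream at the chosen layer."""
    steered = resid.clone()
    steered[:, prompt_len:, :] += coeff * steering
    return steered

# Toy usage with random tensors standing in for a real model's residual stream.
vector = compute_steering_vector(torch.randn(64, 4096), torch.randn(64, 4096))
steered = apply_caa(torch.randn(1, 20, 4096), vector, coeff=1.5, prompt_len=12)
```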
2023-12-13T00:00:00
2312.07509
PEEKABOO: Interactive Video Generation via Masked-Diffusion
[ "Yash Jain", "Anshul Nasery", "Vibhav Vineet", "Harkirat Behl" ]
Recently there has been a lot of progress in text-to-video generation, with state-of-the-art models being capable of generating high quality, realistic videos. However, these models lack the capability for users to interactively control and generate videos, which can potentially unlock new areas of application. As a first step towards this goal, we tackle the problem of endowing diffusion-based video generation models with interactive spatio-temporal control over their output. To this end, we take inspiration from the recent advances in segmentation literature to propose a novel spatio-temporal masked attention module - Peekaboo. This module is a training-free, no-inference-overhead addition to off-the-shelf video generation models which enables spatio-temporal control. We also propose an evaluation benchmark for the interactive video generation task. Through extensive qualitative and quantitative evaluation, we establish that Peekaboo enables controlled video generation and even obtains a gain of up to 3.8x in mIoU over baseline models.
2023-12-13T00:00:00
2312.06674
Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations
[ "Hakan Inan", "Kartikeya Upasani", "Jianfeng Chi", "Rashi Rungta", "Krithika Iyer", "Yuning Mao", "Michael Tontchev", "Qing Hu", "Brian Fuller", "Davide Testuggine", "Madian Khabsa" ]
https://github.com/facebookresearch/PurpleLlama/tree/main/Llama-Guard
We introduce Llama Guard, an LLM-based input-output safeguard model geared towards Human-AI conversation use cases. Our model incorporates a safety risk taxonomy, a valuable tool for categorizing a specific set of safety risks found in LLM prompts (i.e., prompt classification). This taxonomy is also instrumental in classifying the responses generated by LLMs to these prompts, a process we refer to as response classification. For the purpose of both prompt and response classification, we have meticulously gathered a dataset of high quality. Llama Guard, a Llama2-7b model that is instruction-tuned on our collected dataset, albeit low in volume, demonstrates strong performance on existing benchmarks such as the OpenAI Moderation Evaluation dataset and ToxicChat, where its performance matches or exceeds that of currently available content moderation tools. Llama Guard functions as a language model, carrying out multi-class classification and generating binary decision scores. Furthermore, the instruction fine-tuning of Llama Guard allows for the customization of tasks and the adaptation of output formats. This feature enhances the model's capabilities, such as enabling the adjustment of taxonomy categories to align with specific use cases, and facilitating zero-shot or few-shot prompting with diverse taxonomies at the input. We are making Llama Guard model weights available and we encourage researchers to further develop and adapt them to meet the evolving needs of the community for AI safety.
2023-12-13T00:00:00
2312.07504
COLMAP-Free 3D Gaussian Splatting
[ "Yang Fu", "Sifei Liu", "Amey Kulkarni", "Jan Kautz", "Alexei A. Efros", "Xiaolong Wang" ]
While neural rendering has led to impressive advances in scene reconstruction and novel view synthesis, it relies heavily on accurately pre-computed camera poses. To relax this constraint, multiple efforts have been made to train Neural Radiance Fields (NeRFs) without pre-processed camera poses. However, the implicit representations of NeRFs provide extra challenges to optimize the 3D structure and camera poses at the same time. On the other hand, the recently proposed 3D Gaussian Splatting provides new opportunities given its explicit point cloud representations. This paper leverages both the explicit geometric representation and the continuity of the input video stream to perform novel view synthesis without any SfM preprocessing. We process the input frames in a sequential manner and progressively grow the 3D Gaussians set by taking one input frame at a time, without the need to pre-compute the camera poses. Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes. Our project page is https://oasisyang.github.io/colmap-free-3dgs
2023-12-13T00:00:00
2312.06908
"I Want It That Way": Enabling Interactive Decision Support Using Large Language Models and Constraint Programming
[ "Connor Lawless", "Jakob Schoeffer", "Lindy Le", "Kael Rowan", "Shilad Sen", "Cristina St. Hill", "Jina Suh", "Bahar Sarrafzadeh" ]
A critical factor in the success of decision support systems is the accurate modeling of user preferences. Psychology research has demonstrated that users often develop their preferences during the elicitation process, highlighting the pivotal role of system-user interaction in developing personalized systems. This paper introduces a novel approach, combining Large Language Models (LLMs) with Constraint Programming to facilitate interactive decision support. We study this hybrid framework through the lens of meeting scheduling, a time-consuming daily activity faced by a multitude of information workers. We conduct three studies to evaluate the novel framework, including a diary study (n=64) to characterize contextual scheduling preferences, a quantitative evaluation of the system's performance, and a user study (n=10) with a prototype system. Our work highlights the potential for a hybrid LLM and optimization approach for iterative preference elicitation and design considerations for building systems that support human-system collaborative decision-making processes.
2023-12-14T00:00:00
2312.08136
ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields
[ "Juan Luis Gonzalez Bello", "Minh-Quan Viet Bui", "Munchurl Kim" ]
https://github.com/KAIST-VICLab/pronerf
Recent advances in neural rendering have shown that, albeit slow, implicit compact models can learn a scene's geometries and view-dependent appearances from multiple views. To maintain such a small memory footprint but achieve faster inference times, recent works have adopted 'sampler' networks that adaptively sample a small subset of points along each ray in the implicit neural radiance fields. Although these methods achieve up to a 10x reduction in rendering time, they still suffer from considerable quality degradation compared to the vanilla NeRF. In contrast, we propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes). ProNeRF is equipped with a novel projection-aware sampling (PAS) network together with a new training strategy for ray exploration and exploitation, allowing for efficient fine-grained particle sampling. Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel. Our exploration and exploitation training strategy allows ProNeRF to learn the full scenes' color and density distributions while also learning efficient ray sampling focused on the highest-density regions. We provide extensive experimental results that support the effectiveness of our method on the widely adopted forward-facing and 360 datasets, LLFF and Blender, respectively.
2023-12-14T00:00:00
2312.07661
CLIP as RNN: Segment Countless Visual Concepts without Training Endeavor
[ "Shuyang Sun", "Runjia Li", "Philip Torr", "Xiuye Gu", "Siyang Li" ]
Existing open-vocabulary image segmentation methods require a fine-tuning step on mask annotations and/or image-text datasets. Mask labels are labor-intensive, which limits the number of categories in segmentation datasets. As a result, the open-vocabulary capacity of pre-trained VLMs is severely reduced after fine-tuning. However, without fine-tuning, VLMs trained under weak image-text supervision tend to make suboptimal mask predictions when there are text queries referring to non-existing concepts in the image. To alleviate these issues, we introduce a novel recurrent framework that progressively filters out irrelevant texts and enhances mask quality without training efforts. The recurrent unit is a two-stage segmenter built upon a VLM with frozen weights. Thus, our model retains the VLM's broad vocabulary space and strengthens its segmentation capability. Experimental results show that our method outperforms not only the training-free counterparts, but also those fine-tuned with millions of additional data samples, and sets new state-of-the-art records for both zero-shot semantic and referring image segmentation tasks. Specifically, we improve the current record by 28.8, 16.0, and 6.9 mIoU on Pascal VOC, COCO Object, and Pascal Context.
2023-12-14T00:00:00
2312.08128
Clockwork Diffusion: Efficient Generation With Model-Step Distillation
[ "Amirhossein Habibian", "Amir Ghodrati", "Noor Fathima", "Guillaume Sautiere", "Risheek Garrepalli", "Fatih Porikli", "Jens Petersen" ]
This work aims to improve the efficiency of text-to-image diffusion models. While diffusion models use computationally expensive UNet-based denoising operations in every generation step, we identify that not all operations are equally relevant for the final output quality. In particular, we observe that UNet layers operating on high-res feature maps are relatively sensitive to small perturbations. In contrast, low-res feature maps influence the semantic layout of the final image and can often be perturbed with no noticeable change in the output. Based on this observation, we propose Clockwork Diffusion, a method that periodically reuses computation from preceding denoising steps to approximate low-res feature maps at one or more subsequent steps. For multiple baselines, and for both text-to-image generation and image editing, we demonstrate that Clockwork leads to comparable or improved perceptual scores with drastically reduced computational complexity. As an example, for Stable Diffusion v1.5 with 8 DPM++ steps we save 32% of FLOPs with negligible FID and CLIP change.
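A minimal scheduling sketch of the reuse idea described above, assuming the UNet is split into a high-resolution path run at every step and a deeper low-resolution path whose features are recomputed only every few steps; the callables and reuse period are hypothetical stand-ins, not the paper's architecture.

```python
from typing import Any, Callable, Optional

def clockwork_denoise(n_steps: int,
                      low_res_step: Callable[[int], Any],
                      high_res_step: Callable[[int, Any], Any],
                      period: int = 2) -> Any:
    """Run the deeper low-resolution path only every `period` steps and reuse its
    cached features in between; the high-resolution path runs at every step."""
    cached_low: Optional[Any] = None
    out: Any = None
    for t in range(n_steps):
        if cached_low is None or t % period == 0:
            cached_low = low_res_step(t)        # recompute low-res features occasionally
        out = high_res_step(t, cached_low)      # high-res path uses (possibly stale) features
    return out

# Toy usage with stand-in callables in place of real UNet partitions.
result = clockwork_denoise(8, lambda t: f"low@{t}", lambda t, low: f"step{t} using {low}")
```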
2023-12-14T00:00:00
2312.08361
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
[ "Alexander Borzunov", "Max Ryabinin", "Artem Chumachenko", "Dmitry Baranchuk", "Tim Dettmers", "Younes Belkada", "Pavel Samygin", "Colin Raffel" ]
Large language models (LLMs) are useful in many NLP tasks and become more capable with size, with the best open-source models having over 50 billion parameters. However, using these 50B+ models requires high-end hardware, making them inaccessible to most researchers. In this work, we investigate methods for cost-efficient inference and fine-tuning of LLMs, comparing local and distributed strategies. We observe that a large enough model (50B+) can run efficiently even on geodistributed devices in a consumer-grade network. This could allow running LLMs efficiently by pooling together the idle compute resources of multiple research groups and volunteers. We address two open problems: (1) how to perform inference and fine-tuning reliably if any device can disconnect abruptly and (2) how to partition LLMs between devices with uneven hardware, joining and leaving at will. In order to do that, we develop special fault-tolerant inference algorithms and load-balancing protocols that automatically assign devices to maximize the total system throughput. We showcase these algorithms in Petals - a decentralized system that runs Llama 2 (70B) and BLOOM (176B) over the Internet up to 10x faster than offloading for interactive generation. We evaluate the performance of our system in simulated conditions and a real-world setup spanning two continents.
2023-12-14T00:00:00
2312.08344
FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
[ "Bowen Wen", "Wei Yang", "Jan Kautz", "Stan Birchfield" ]
https://github.com/NVlabs/FoundationPose
We present FoundationPose, a unified foundation model for 6D object pose estimation and tracking, supporting both model-based and model-free setups. Our approach can be instantly applied at test-time to a novel object without fine-tuning, as long as its CAD model is given, or a small number of reference images are captured. We bridge the gap between these two setups with a neural implicit representation that allows for effective novel view synthesis, keeping the downstream pose estimation modules invariant under the same unified framework. Strong generalizability is achieved via large-scale synthetic training, aided by a large language model (LLM), a novel transformer-based architecture, and a contrastive learning formulation. Extensive evaluation on multiple public datasets involving challenging scenarios and objects indicates that our unified approach outperforms existing methods specialized for each task by a large margin. In addition, it even achieves comparable results to instance-level methods despite the reduced assumptions. Project page: https://nvlabs.github.io/FoundationPose/
2023-12-14T00:00:00
2312.07987
SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention
[ "Róbert Csordás", "Piotr Piękos", "Kazuki Irie", "Jürgen Schmidhuber" ]
The costly self-attention layers in modern Transformers require memory and compute quadratic in sequence length. Existing approximation methods usually underperform and fail to obtain significant speedups in practice. Here we present SwitchHead - a novel method that reduces both compute and memory requirements and achieves wall-clock speedup, while matching the language modeling performance of baseline Transformers with the same parameter budget. SwitchHead uses Mixture-of-Experts (MoE) layers for the value and output projections and requires 4 to 8 times fewer attention matrices than standard Transformers. Our novel attention can also be combined with MoE MLP layers, resulting in an efficient fully-MoE "SwitchAll" Transformer model. Our code is public.
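The following sketch shows one plausible way to realize the abstract's core idea, a Mixture-of-Experts projection with per-token top-1 routing that could stand in for the value or output projection of attention; the expert count, routing rule, and shapes are assumptions, not the SwitchHead implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEProjection(nn.Module):
    """A per-token top-1 Mixture-of-Experts linear projection, usable e.g. as a
    value or output projection inside attention."""

    def __init__(self, d_model: int, d_out: int, n_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # routing scores per token
        self.experts = nn.Parameter(
            torch.randn(n_experts, d_model, d_out) * d_model ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        gate = F.softmax(self.router(x), dim=-1)      # (batch, seq, n_experts)
        weight, idx = gate.max(dim=-1)                # pick one expert per token
        chosen = self.experts[idx]                    # (batch, seq, d_model, d_out)
        out = torch.einsum("bsd,bsdo->bso", x, chosen)
        return out * weight.unsqueeze(-1)             # scale by the gate value

proj = MoEProjection(d_model=512, d_out=512)
values = proj(torch.randn(2, 16, 512))               # e.g. MoE value projection
```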
2023-12-14T00:00:00
2312.07910
PromptBench: A Unified Library for Evaluation of Large Language Models
[ "Kaijie Zhu", "Qinlin Zhao", "Hao Chen", "Jindong Wang", "Xing Xie" ]
https://github.com/microsoft/promptbench
The evaluation of large language models (LLMs) is crucial to assess their performance and mitigate potential security risks. In this paper, we introduce PromptBench, a unified library to evaluate LLMs. It consists of several key components that are easily used and extended by researchers: prompt construction, prompt engineering, dataset and model loading, adversarial prompt attack, dynamic evaluation protocols, and analysis tools. PromptBench is designed to be an open, general, and flexible codebase for research purposes that can facilitate original study in creating new benchmarks, deploying downstream applications, and designing new evaluation protocols. The code is available at: https://github.com/microsoft/promptbench and will be continuously supported.
2023-12-14T00:00:00
2312.07843
Foundation Models in Robotics: Applications, Challenges, and the Future
[ "Roya Firoozi", "Johnathan Tucker", "Stephen Tian", "Anirudha Majumdar", "Jiankai Sun", "Weiyu Liu", "Yuke Zhu", "Shuran Song", "Ashish Kapoor", "Karol Hausman", "Brian Ichter", "Danny Driess", "Jiajun Wu", "Cewu Lu", "Mac Schwager" ]
https://github.com/robotics-survey/Awesome-Robotics-Foundation-Models
We survey applications of pretrained foundation models in robotics. Traditional deep learning models in robotics are trained on small datasets tailored for specific tasks, which limits their adaptability across diverse applications. In contrast, foundation models pretrained on internet-scale data appear to have superior generalization capabilities, and in some instances display an emergent ability to find zero-shot solutions to problems that are not present in the training data. Foundation models may hold the potential to enhance various components of the robot autonomy stack, from perception to decision-making and control. For example, large language models can generate code or provide common sense reasoning, while vision-language models enable open-vocabulary visual recognition. However, significant open research challenges remain, particularly around the scarcity of robot-relevant training data, safety guarantees and uncertainty quantification, and real-time execution. In this survey, we study recent papers that have used or built foundation models to solve robotics problems. We explore how foundation models contribute to improving robot capabilities in the domains of perception, decision-making, and control. We discuss the challenges hindering the adoption of foundation models in robot autonomy and provide opportunities and potential pathways for future advancements. The GitHub project corresponding to this paper (Preliminary release. We are committed to further enhancing and updating this work to ensure its quality and relevance) can be found here: https://github.com/robotics-survey/Awesome-Robotics-Foundation-Models
2023-12-14T00:00:00
2312.07859
Invariant Graph Transformer
[ "Zhe Xu", "Menghai Pan", "Yuzhong Chen", "Huiyuan Chen", "Yuchen Yan", "Mahashweta Das", "Hanghang Tong" ]
Rationale discovery is defined as finding a subset of the input data that maximally supports the prediction of downstream tasks. In the graph machine learning context, graph rationale is defined to locate the critical subgraph in the given graph topology, which fundamentally determines the prediction results. In contrast to the rationale subgraph, the remaining subgraph is named the environment subgraph. Graph rationalization can enhance the model performance as the mapping between the graph rationale and prediction label is viewed as invariant, by assumption. To ensure the discriminative power of the extracted rationale subgraphs, a key technique named "intervention" is applied. The core idea of intervention is that given any changing environment subgraphs, the semantics from the rationale subgraph is invariant, which guarantees the correct prediction result. However, most, if not all, of the existing rationalization works on graph data develop their intervention strategies on the graph level, which is coarse-grained. In this paper, we propose well-tailored intervention strategies on graph data. Our idea is driven by the development of Transformer models, whose self-attention module provides rich interactions between input nodes. Based on the self-attention module, our proposed Invariant Graph Transformer (IGT) can achieve fine-grained, more specifically, node-level and virtual node-level intervention. Our comprehensive experiments involve 7 real-world datasets, and the proposed IGT shows significant performance advantages compared to 13 baseline methods.
2023-12-15T00:00:00
2312.09241
TinyGSM: achieving >80% on GSM8k with small language models
[ "Bingbin Liu", "Sebastien Bubeck", "Ronen Eldan", "Janardhan Kulkarni", "Yuanzhi Li", "Anh Nguyen", "Rachel Ward", "Yi Zhang" ]
Small-scale models offer various computational advantages, yet the extent to which size is critical for problem-solving abilities remains an open question. Specifically for solving grade school math, the smallest model size so far required to break the 80% barrier on the GSM8K benchmark is still 34B. Our work studies how high-quality datasets may be the key for small language models to acquire mathematical reasoning. We introduce TinyGSM, a synthetic dataset of 12.3M grade school math problems paired with Python solutions, generated fully by GPT-3.5. After finetuning on TinyGSM, we find that a duo of a 1.3B generation model and a 1.3B verifier model can achieve 81.5% accuracy, outperforming existing models that are orders of magnitude larger. This also rivals the performance of the GPT-3.5 "teacher" model (77.4%), from which our model's training data is generated. Our approach is simple and has two key components: 1) the high-quality dataset TinyGSM, and 2) the use of a verifier, which selects the final output from multiple candidate generations.
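A minimal sketch of the verifier component mentioned above: sample several candidate solutions and keep the one the verifier scores highest. The generate and score callables are hypothetical stand-ins for the 1.3B generator/verifier pair.

```python
from typing import Callable, List

def best_of_n(question: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n candidate solutions and return the one the verifier scores highest."""
    candidates: List[str] = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda sol: score(question, sol))

# Toy usage with trivial stand-ins for the generator and verifier models.
gen = lambda q: "def solution():\n    return 6 * 7"
ver = lambda q, s: float("return" in s)    # placeholder verifier score
best = best_of_n("What is 6 * 7?", gen, ver, n=4)
```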
2023-12-15T00:00:00
2312.09237
Pixel Aligned Language Models
[ "Jiarui Xu", "Xingyi Zhou", "Shen Yan", "Xiuye Gu", "Anurag Arnab", "Chen Sun", "Xiaolong Wang", "Cordelia Schmid" ]
Large language models have achieved great success in recent years, and so have their variants in vision. Existing vision-language models can describe images in natural language, answer visual-related questions, or perform complex reasoning about the image. However, it is yet unclear how localization tasks, such as word grounding or referring localization, can be performed using large language models. In this work, we aim to develop a vision-language model that can take locations, for example, a set of points or boxes, as either inputs or outputs. When taking locations as inputs, the model performs location-conditioned captioning, which generates captions for the indicated object or region. When generating locations as outputs, our model regresses pixel coordinates for each output word generated by the language model, and thus performs dense word grounding. Our model is pre-trained on the Localized Narrative dataset, which contains pixel-word-aligned captioning from human attention. We show our model can be applied to various location-aware vision-language tasks, including referring localization, location-conditioned captioning, and dense object captioning, achieving state-of-the-art performance on RefCOCO and Visual Genome. Project page: https://jerryxu.net/PixelLLM .
2023-12-15T00:00:00
2312.09187
Vision-Language Models as a Source of Rewards
[ "Kate Baumli", "Satinder Baveja", "Feryal Behbahani", "Harris Chan", "Gheorghe Comanici", "Sebastian Flennerhag", "Maxime Gazeau", "Kristian Holsheimer", "Dan Horgan", "Michael Laskin", "Clare Lyle", "Hussain Masoom", "Kay McKinney", "Volodymyr Mnih", "Alexander Neitz", "Fabio Pardo", "Jack Parker-Holder", "John Quan", "Tim Rocktäschel", "Himanshu Sahni", "Tom Schaul", "Yannick Schroecker", "Stephen Spencer", "Richie Steigerwald", "Luyu Wang", "Lei Zhang" ]
Building generalist agents that can accomplish many goals in rich open-ended environments is one of the research frontiers for reinforcement learning. A key limiting factor for building generalist agents with RL has been the need for a large number of reward functions for achieving different goals. We investigate the feasibility of using off-the-shelf vision-language models, or VLMs, as sources of rewards for reinforcement learning agents. We show how rewards for visual achievement of a variety of language goals can be derived from the CLIP family of models, and used to train RL agents that can achieve a variety of language goals. We showcase this approach in two distinct visual domains and present a scaling trend showing how larger VLMs lead to more accurate rewards for visual goal achievement, which in turn produces more capable RL agents.
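As a rough sketch of the reward derivation described above, assuming CLIP-style image and text embeddings are already available, the reward can be the (optionally thresholded) cosine similarity between the observation and the language goal; the threshold and embedding dimension are illustrative assumptions.

```python
from typing import Optional

import torch
import torch.nn.functional as F

def vlm_reward(image_emb: torch.Tensor,
               text_emb: torch.Tensor,
               threshold: Optional[float] = 0.3) -> torch.Tensor:
    """image_emb: (d,) embedding of the current observation; text_emb: (d,) embedding of the goal."""
    sim = F.cosine_similarity(image_emb.unsqueeze(0), text_emb.unsqueeze(0)).squeeze(0)
    if threshold is None:
        return sim                        # dense shaped reward
    return (sim > threshold).float()      # sparse binary reward for goal achievement

# Toy usage with random vectors standing in for CLIP image/text embeddings.
reward = vlm_reward(torch.randn(512), torch.randn(512))
```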
2023-12-15T00:00:00
2312.09158
General Object Foundation Model for Images and Videos at Scale
[ "Junfeng Wu", "Yi Jiang", "Qihao Liu", "Zehuan Yuan", "Xiang Bai", "Song Bai" ]
We present GLEE in this work, an object-level foundation model for locating and identifying objects in images and videos. Through a unified framework, GLEE accomplishes detection, segmentation, tracking, grounding, and identification of arbitrary objects in the open-world scenario for various object perception tasks. Adopting a cohesive learning strategy, GLEE acquires knowledge from diverse data sources with varying supervision levels to formulate general object representations, excelling in zero-shot transfer to new data and tasks. Specifically, we employ an image encoder, text encoder, and visual prompter to handle multi-modal inputs, enabling it to simultaneously solve various object-centric downstream tasks while maintaining state-of-the-art performance. Demonstrated through extensive training on over five million images from diverse benchmarks, GLEE exhibits remarkable versatility and improved generalization performance, efficiently tackling downstream tasks without the need for task-specific adaptation. By integrating large volumes of automatically labeled data, we further enhance its zero-shot generalization capabilities. Additionally, GLEE is capable of being integrated into Large Language Models, serving as a foundational model to provide universal object-level information for multi-modal tasks. We hope that the versatility and universality of our method will mark a significant step in the development of efficient visual foundation models for AGI systems. The model and code will be released at https://glee-vision.github.io .
2023-12-15T00:00:00
2312.08914
CogAgent: A Visual Language Model for GUI Agents
[ "Wenyi Hong", "Weihan Wang", "Qingsong Lv", "Jiazheng Xu", "Wenmeng Yu", "Junhui Ji", "Yan Wang", "Zihan Wang", "Yuxiao Dong", "Ming Ding", "Jie Tang" ]
https://github.com/THUDM/CogVLM
People are spending an enormous amount of time on digital devices through graphical user interfaces (GUIs), e.g., computer or smartphone screens. Large language models (LLMs) such as ChatGPT can assist people in tasks like writing emails, but struggle to understand and interact with GUIs, thus limiting their potential to increase automation levels. In this paper, we introduce CogAgent, an 18-billion-parameter visual language model (VLM) specializing in GUI understanding and navigation. By utilizing both low-resolution and high-resolution image encoders, CogAgent supports input at a resolution of 1120×1120, enabling it to recognize tiny page elements and text. As a generalist visual language model, CogAgent achieves the state of the art on five text-rich and four general VQA benchmarks, including VQAv2, OK-VQA, Text-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. CogAgent, using only screenshots as input, outperforms LLM-based methods that consume extracted HTML text on both PC and Android GUI navigation tasks -- Mind2Web and AITW, advancing the state of the art. The model and code are available at https://github.com/THUDM/CogVLM.
2023-12-15T00:00:00
2312.08688
TigerBot: An Open Multilingual Multitask LLM
[ "Ye Chen", "Wei Cai", "Liangmin Wu", "Xiaowei Li", "Zhanxuan Xin", "Cong Fu" ]
We release and introduce the TigerBot family of large language models (LLMs), consisting of base and chat models with 7, 13, 70, and 180 billion parameters. We develop our models starting from Llama-2 and BLOOM, and push the boundary further in data, training algorithms, infrastructure, and application tools. Our models yield meaningful performance gains over SOTA open-source models, e.g., Llama-2, specifically a 6% gain in English and a 20% gain in Chinese. The TigerBot model family also achieves leading performance on major academic and industrial benchmarks and leaderboards. We believe that TigerBot represents just a snapshot of the lightning-fast progression in the LLM open-source community. Therefore, we are thrilled to give back by publicly releasing our models and reporting the approach behind them, with additional emphasis on building SOTA LLMs in a democratized way and making LLMs of use in real-world applications.
2023-12-15T00:00:00
2312.08618
Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention
[ "Kaiqiang Song", "Xiaoyang Wang", "Sangwoo Cho", "Xiaoman Pan", "Dong Yu" ]
This paper introduces a novel approach to enhance the capabilities of Large Language Models (LLMs) in processing and understanding extensive text sequences, a critical aspect in applications requiring deep comprehension and synthesis of large volumes of information. Recognizing the inherent challenges in extending the context window for LLMs, primarily built on Transformer architecture, we propose a new model architecture, referred to as Zebra. This architecture efficiently manages the quadratic time and memory complexity issues associated with full attention in the Transformer by employing grouped local-global attention layers. Our model, akin to a zebra's alternating stripes, balances local and global attention layers, significantly reducing computational requirements and memory consumption. Comprehensive experiments, including pretraining from scratch, continuation of long context adaptation training, and long instruction tuning, are conducted to evaluate the Zebra's performance. The results show that Zebra achieves comparable or superior performance on both short and long sequence benchmarks, while also enhancing training and inference efficiency.
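A small sketch of the layer pattern the abstract names, assuming one global-attention layer per group followed by windowed local-attention layers; the group size and window length are illustrative, not Zebra's configuration.

```python
from typing import List, Optional

import torch

def causal_mask(seq_len: int, window: Optional[int] = None) -> torch.Tensor:
    """Boolean mask where True means 'may attend'. window=None gives full causal
    (global) attention; otherwise each token only sees the previous `window` tokens."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    mask = j <= i
    if window is not None:
        mask = mask & ((i - j) < window)
    return mask

def layer_masks(n_layers: int, seq_len: int,
                group_size: int = 4, window: int = 256) -> List[torch.Tensor]:
    """First layer of every group is global; the remaining layers in the group are local."""
    return [causal_mask(seq_len, None if layer % group_size == 0 else window)
            for layer in range(n_layers)]

masks = layer_masks(n_layers=8, seq_len=1024)   # alternating global/local pattern
```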
2023-12-15T00:00:00
2312.08583
ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
[ "Xiaoxia Wu", "Haojun Xia", "Stephen Youn", "Zhen Zheng", "Shiyang Chen", "Arash Bakhtiari", "Michael Wyatt", "Yuxiong He", "Olatunji Ruwase", "Leon Song", "Zhewei Yao" ]
This study examines 4-bit quantization methods like GPTQ in large language models (LLMs), highlighting GPTQ's overfitting and limited enhancement in Zero-Shot tasks. While prior works focus merely on zero-shot measurement, we extend the task scope to more generative categories such as code generation and abstractive summarization, in which we find that INT4 quantization can significantly underperform. However, simply shifting to higher precision formats like FP6 has been particularly challenging, and thus overlooked, due to poor performance caused by the lack of sophisticated integration and system acceleration strategies on current AI hardware. Our results show that FP6, even with a coarse-grain quantization scheme, performs robustly across various algorithms and tasks, demonstrating its superiority in accuracy and versatility. Notably, with FP6 quantization, the StarCoder-15B model performs comparably to its FP16 counterpart in code generation, and smaller models like the 406M closely match their baselines in summarization. Neither can be achieved by INT4. To better accommodate various AI hardware and achieve the best system performance, we propose a novel 4+2 design for FP6 to achieve similar latency to the state-of-the-art INT4 fine-grain quantization. With our design, FP6 can become a promising solution to the current 4-bit quantization methods used in LLMs.
2023-12-15T00:00:00
2312.08578
A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions
[ "Jack Urbanek", "Florian Bordes", "Pietro Astolfi", "Mary Williamson", "Vasu Sharma", "Adriana Romero-Soriano" ]
https://github.com/facebookresearch/DCI
Curation methods for massive vision-language datasets trade off between dataset size and quality. However, even the highest quality of available curated captions are far too short to capture the rich visual detail in an image. To show the value of dense and highly-aligned image-text pairs, we collect the Densely Captioned Images (DCI) dataset, containing 8012 natural images human-annotated with mask-aligned descriptions averaging above 1000 words each. With precise and reliable captions associated with specific parts of an image, we can evaluate vision-language models' (VLMs) understanding of image content with a novel task that matches each caption with its corresponding subcrop. As current models are often limited to 77 text tokens, we also introduce a summarized version (sDCI) in which each caption length is limited. We show that modern techniques that make progress on standard benchmarks do not correspond with significant improvement on our sDCI based benchmark. Lastly, we finetune CLIP using sDCI and show significant improvements over the baseline despite a small training set. By releasing the first human annotated dense image captioning dataset, we hope to enable the development of new benchmarks or fine-tuning recipes for the next generation of VLMs to come.
2023-12-15T00:00:00
2312.09067
Holodeck: Language Guided Generation of 3D Embodied AI Environments
[ "Yue Yang", "Fan-Yun Sun", "Luca Weihs", "Eli VanderBilt", "Alvaro Herrasti", "Winson Han", "Jiajun Wu", "Nick Haber", "Ranjay Krishna", "Lingjie Liu", "Chris Callison-Burch", "Mark Yatskar", "Aniruddha Kembhavi", "Christopher Clark" ]
3D simulated environments play a critical role in Embodied AI, but their creation requires expertise and extensive manual effort, restricting their diversity and scope. To mitigate this limitation, we present Holodeck, a system that generates 3D environments to match a user-supplied prompt fully automatically. Holodeck can generate diverse scenes, e.g., arcades, spas, and museums, adjust the designs for styles, and can capture the semantics of complex queries such as "apartment for a researcher with a cat" and "office of a professor who is a fan of Star Wars". Holodeck leverages a large language model (GPT-4) for common sense knowledge about what the scene might look like and uses a large collection of 3D assets from Objaverse to populate the scene with diverse objects. To address the challenge of positioning objects correctly, we prompt GPT-4 to generate spatial relational constraints between objects and then optimize the layout to satisfy those constraints. Our large-scale human evaluation shows that annotators prefer Holodeck over manually designed procedural baselines in residential scenes and that Holodeck can produce high-quality outputs for diverse scene types. We also demonstrate an exciting application of Holodeck in Embodied AI, training agents to navigate in novel scenes like music rooms and daycares without human-constructed data, which is a significant step forward in developing general-purpose embodied agents.
2023-12-15T00:00:00
2312.08926
Modeling Complex Mathematical Reasoning via Large Language Model based MathAgent
[ "Haoran Liao", "Qinyi Du", "Shaohua Hu", "Hao He", "Yanyan Xu", "Jidong Tian", "Yaohui Jin" ]
Large language models (LLMs) face challenges in solving complex mathematical problems that require comprehensive capacities to parse the statements, associate domain knowledge, perform compound logical reasoning, and integrate the intermediate rationales. Tackling all these problems at once can be arduous for LLMs, thus leading to confusion in generation. In this work, we explore the potential of enhancing LLMs with agents by meticulous decomposition and modeling of the mathematical reasoning process. Specifically, we propose a formal description of mathematical solving and extend LLMs with an agent-based zero-shot framework named Planner-Reasoner-Executor-Reflector (PRER). We further provide and implement two MathAgents that define the logical forms and inherent relations via a pool of actions in different grains and orientations: MathAgent-M adapts its actions to LLMs, while MathAgent-H aligns with humankind. Experiments on MiniF2F and MATH have demonstrated the effectiveness of PRER and the proposed MathAgents, achieving an increase of 12.3% (53.9%→66.2%) on MiniF2F, 9.2% (49.8%→59.0%) on MATH, and 13.2% (23.2%→35.4%) for level-5 problems of MATH against GPT-4. Further analytical results provide more insightful perspectives on exploiting the behaviors of LLMs as agents.
2023-12-15T00:00:00
2312.08723
StemGen: A music generation model that listens
[ "Julian D. Parker", "Janne Spijkervet", "Katerina Kosta", "Furkan Yesiler", "Boris Kuznetsov", "Ju-Chiang Wang", "Matt Avent", "Jitong Chen", "Duc Le" ]
End-to-end generation of musical audio using deep learning techniques has seen an explosion of activity recently. However, most models concentrate on generating fully mixed music in response to abstract conditioning information. In this work, we present an alternative paradigm for producing music generation models that can listen and respond to musical context. We describe how such a model can be constructed using a non-autoregressive, transformer-based model architecture and present a number of novel architectural and sampling improvements. We train the described architecture on both an open-source and a proprietary dataset. We evaluate the produced models using standard quality metrics and a new approach based on music information retrieval descriptors. The resulting model reaches the audio quality of state-of-the-art text-conditioned models, as well as exhibiting strong musical coherence with its context.
2023-12-15T00:00:00
2312.09109
VideoLCM: Video Latent Consistency Model
[ "Xiang Wang", "Shiwei Zhang", "Han Zhang", "Yu Liu", "Yingya Zhang", "Changxin Gao", "Nong Sang" ]
Consistency models have demonstrated powerful capability in efficient image generation and allowed synthesis within a few sampling steps, alleviating the high computational cost in diffusion models. However, the consistency model in the more challenging and resource-consuming video generation is still less explored. In this report, we present the VideoLCM framework to fill this gap, which leverages the concept of consistency models from image generation to efficiently synthesize videos with minimal steps while maintaining high quality. VideoLCM builds upon existing latent video diffusion models and incorporates consistency distillation techniques for training the latent consistency model. Experimental results reveal the effectiveness of our VideoLCM in terms of computational efficiency, fidelity and temporal consistency. Notably, VideoLCM achieves high-fidelity and smooth video synthesis with only four sampling steps, showcasing the potential for real-time synthesis. We hope that VideoLCM can serve as a simple yet effective baseline for subsequent research. The source code and models will be publicly available.
2023-12-15T00:00:00
2312.09246
SHAP-EDITOR: Instruction-guided Latent 3D Editing in Seconds
[ "Minghao Chen", "Junyu Xie", "Iro Laina", "Andrea Vedaldi" ]
We propose a novel feed-forward 3D editing framework called Shap-Editor. Prior research on editing 3D objects primarily concentrated on editing individual objects by leveraging off-the-shelf 2D image editing networks. This is achieved via a process called distillation, which transfers knowledge from the 2D network to 3D assets. Distillation necessitates at least tens of minutes per asset to attain satisfactory editing results, and is thus not very practical. In contrast, we ask whether 3D editing can be carried out directly by a feed-forward network, eschewing test-time optimisation. In particular, we hypothesise that editing can be greatly simplified by first encoding 3D objects in a suitable latent space. We validate this hypothesis by building upon the latent space of Shap-E. We demonstrate that direct 3D editing in this space is possible and efficient by building a feed-forward editor network that only requires approximately one second per edit. Our experiments show that Shap-Editor generalises well to both in-distribution and out-of-distribution 3D assets with different prompts, exhibiting comparable performance with methods that carry out test-time optimisation for each edited instance.
2023-12-15T00:00:00
2312.09222
Mosaic-SDF for 3D Generative Models
[ "Lior Yariv", "Omri Puny", "Natalia Neverova", "Oran Gafni", "Yaron Lipman" ]
Current diffusion- or flow-based generative models for 3D shapes divide into two groups: distilling pre-trained 2D image diffusion models, and training directly on 3D shapes. When training a diffusion or flow model on 3D shapes, a crucial design choice is the shape representation. An effective shape representation needs to adhere to three design principles: it should allow an efficient conversion of large 3D datasets to the representation form; it should provide a good tradeoff of approximation power versus number of parameters; and it should have a simple tensorial form that is compatible with existing powerful neural architectures. While standard 3D shape representations such as volumetric grids and point clouds do not adhere to all these principles simultaneously, we advocate in this paper a new representation that does. We introduce Mosaic-SDF (M-SDF): a simple 3D shape representation that approximates the Signed Distance Function (SDF) of a given shape by using a set of local grids spread near the shape's boundary. The M-SDF representation is fast to compute for each shape individually, making it readily parallelizable; it is parameter efficient as it only covers the space around the shape's boundary; and it has a simple matrix form, compatible with Transformer-based architectures. We demonstrate the efficacy of the M-SDF representation by using it to train a 3D generative flow model including class-conditioned generation with the 3D Warehouse dataset, and text-to-3D generation using a dataset of about 600k caption-shape pairs.
2023-12-15T00:00:00
2312.09256
LIME: Localized Image Editing via Attention Regularization in Diffusion Models
[ "Enis Simsar", "Alessio Tonioni", "Yongqin Xian", "Thomas Hofmann", "Federico Tombari" ]
Diffusion models (DMs) have gained prominence due to their ability to generate high-quality, varied images, with recent advancements in text-to-image generation. The research focus is now shifting towards the controllability of DMs. A significant challenge within this domain is localized editing, where specific areas of an image are modified without affecting the rest of the content. This paper introduces LIME for localized image editing in diffusion models that do not require user-specified regions of interest (RoI) or additional text input. Our method employs features from pre-trained methods and a simple clustering technique to obtain precise semantic segmentation maps. Then, by leveraging cross-attention maps, it refines these segments for localized edits. Finally, we propose a novel cross-attention regularization technique that penalizes unrelated cross-attention scores in the RoI during the denoising steps, ensuring localized edits. Our approach, without re-training and fine-tuning, consistently improves the performance of existing methods in various editing benchmarks.
2023-12-15T00:00:00
2312.08889
SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance
[ "Yuanyou Xu", "Zongxin Yang", "Yi Yang" ]
Powered by large-scale text-to-image generation models, text-to-3D avatar generation has made promising progress. However, most methods fail to produce photorealistic results, limited by imprecise geometry and low-quality appearance. Towards more practical avatar generation, we present SEEAvatar, a method for generating photorealistic 3D avatars from text with SElf-Evolving constraints for decoupled geometry and appearance. For geometry, we propose to constrain the optimized avatar in a decent global shape with a template avatar. The template avatar is initialized with a human prior and can be updated by the optimized avatar periodically as an evolving template, which enables more flexible shape generation. Besides, the geometry is also constrained by the static human prior in local parts like the face and hands to maintain the delicate structures. For appearance generation, we use a diffusion model enhanced by prompt engineering to guide a physically based rendering pipeline to generate realistic textures. A lightness constraint is applied to the albedo texture to suppress incorrect lighting effects. Experiments show that our method outperforms previous methods on both global and local geometry and appearance quality by a large margin. Since our method can produce high-quality meshes and textures, such assets can be directly applied in a classic graphics pipeline for realistic rendering under any lighting condition. Project page at: https://seeavatar3d.github.io.
2023-12-15T00:00:00
2312.09251
VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation
[ "Jinguo Zhu", "Xiaohan Ding", "Yixiao Ge", "Yuying Ge", "Sijie Zhao", "Hengshuang Zhao", "Xiaohua Wang", "Ying Shan" ]
In this work, we introduce Vision-Language Generative Pre-trained Transformer (VL-GPT), a transformer model proficient at concurrently perceiving and generating visual and linguistic data. VL-GPT achieves a unified pre-training approach for both image and text modalities by employing a straightforward auto-regressive objective, thereby enabling the model to process image and text as seamlessly as a language model processes text. To accomplish this, we initially propose a novel image tokenizer-detokenizer framework for visual data, specifically designed to transform raw images into a sequence of continuous embeddings and reconstruct them accordingly. In combination with the existing text tokenizer and detokenizer, this framework allows for the encoding of interleaved image-text data into a multimodal sequence, which can subsequently be fed into the transformer model. Consequently, VL-GPT can perform large-scale pre-training on multimodal corpora utilizing a unified auto-regressive objective (i.e., next-token prediction). Upon completion of pre-training, VL-GPT exhibits remarkable zero-shot and few-shot performance across a diverse range of vision and language understanding and generation tasks, including image captioning, visual question answering, text-to-image generation, and more. Additionally, the pre-trained model retains in-context learning capabilities when provided with multimodal prompts. We further conduct instruction tuning on our VL-GPT, highlighting its exceptional potential for multimodal assistance. The source code and model weights shall be released.
2023-12-15T00:00:00
2312.09244
Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking
[ "Jacob Eisenstein", "Chirag Nagpal", "Alekh Agarwal", "Ahmad Beirami", "Alex D'Amour", "DJ Dvijotham", "Adam Fisch", "Katherine Heller", "Stephen Pfohl", "Deepak Ramachandran", "Peter Shaw", "Jonathan Berant" ]
Reward models play a key role in aligning language model applications towards human preferences. However, this setup creates an incentive for the language model to exploit errors in the reward model to achieve high estimated reward, a phenomenon often termed reward hacking. A natural mitigation is to train an ensemble of reward models, aggregating over model outputs to obtain a more robust reward estimate. We explore the application of reward ensembles to alignment at both training time (through reinforcement learning) and inference time (through reranking). First, we show that reward models are underspecified: reward models that perform similarly in-distribution can yield very different rewards when used in alignment, due to distribution shift. Second, underspecification results in overoptimization, where alignment to one reward model does not improve reward as measured by another reward model trained on the same data. Third, overoptimization is mitigated by the use of reward ensembles, and ensembles that vary by their pretraining seeds lead to better generalization than ensembles that differ only by their fine-tuning seeds, with both outperforming individual reward models. However, even pretrain reward ensembles do not eliminate reward hacking: we show several qualitative reward hacking phenomena that are not mitigated by ensembling because all reward models in the ensemble exhibit similar error patterns.
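A minimal sketch of reward-ensemble aggregation in the spirit of this abstract: score a response with every ensemble member and combine the scores (mean or worst-case) before reranking or RL; the aggregation rules shown are common choices, not necessarily the paper's.

```python
from typing import Callable, List, Sequence

def ensemble_reward(prompt: str,
                    response: str,
                    reward_models: Sequence[Callable[[str, str], float]],
                    aggregate: str = "mean") -> float:
    """Score a response with every ensemble member and aggregate the scores."""
    scores: List[float] = [rm(prompt, response) for rm in reward_models]
    if aggregate == "mean":
        return sum(scores) / len(scores)
    if aggregate == "min":                 # conservative, worst-case aggregation
        return min(scores)
    raise ValueError(f"unknown aggregation: {aggregate}")

# Toy usage with stand-in reward models.
rms = [lambda p, r: 0.1 * len(r), lambda p, r: 1.0, lambda p, r: 0.5]
robust_score = ensemble_reward("prompt", "a candidate response", rms, aggregate="min")
```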
2023-12-15T00:00:00
2312.08754
UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation
[ "Zexiang Liu", "Yangguang Li", "Youtian Lin", "Xin Yu", "Sida Peng", "Yan-Pei Cao", "Xiaojuan Qi", "Xiaoshui Huang", "Ding Liang", "Wanli Ouyang" ]
https://github.com/YG256Li/UniDream
Recent advancements in text-to-3D generation technology have significantly advanced the conversion of textual descriptions into imaginative, well-structured, and finely textured 3D objects. Despite these developments, a prevalent limitation arises from the use of RGB data in diffusion or reconstruction models, which often results in models with inherent lighting and shadow effects that detract from their realism, thereby limiting their usability in applications that demand accurate relighting capabilities. To bridge this gap, we present UniDream, a text-to-3D generation framework that incorporates unified diffusion priors. Our approach consists of three main components: (1) a dual-phase training process to get albedo-normal aligned multi-view diffusion and reconstruction models, (2) a progressive generation procedure for geometry and albedo-textures based on Score Distillation Sampling (SDS) using the trained reconstruction and diffusion models, and (3) an innovative application of SDS for finalizing PBR generation while keeping a fixed albedo based on the Stable Diffusion model. Extensive evaluations demonstrate that UniDream surpasses existing methods in generating 3D objects with clearer albedo textures, smoother surfaces, enhanced realism, and superior relighting capabilities.
2023-12-15T00:00:00
2312.09252
FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection
[ "Hongsuk Choi", "Isaac Kasahara", "Selim Engin", "Moritz Graule", "Nikhil Chavan-Dafle", "Volkan Isler" ]
https://github.com/SamsungLabs/FineControlNet
Recently introduced ControlNet has the ability to steer the text-driven image generation process with geometric input such as human 2D pose, or edge features. While ControlNet provides control over the geometric form of the instances in the generated image, it lacks the capability to dictate the visual appearance of each instance. We present FineControlNet to provide fine control over each instance's appearance while maintaining the precise pose control capability. Specifically, we develop and demonstrate FineControlNet with geometric control via human pose images and appearance control via instance-level text prompts. The spatial alignment of instance-specific text prompts and 2D poses in latent space enables the fine control capabilities of FineControlNet. We evaluate the performance of FineControlNet with rigorous comparison against state-of-the-art pose-conditioned text-to-image diffusion models. FineControlNet achieves superior performance in generating images that follow the user-provided instance-specific text prompts and poses compared with existing methods. Project webpage: https://samsunglabs.github.io/FineControlNet-project-page
2023-12-18T00:00:00
2312.09390
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
[ "Collin Burns", "Pavel Izmailov", "Jan Hendrik Kirchner", "Bowen Baker", "Leo Gao", "Leopold Aschenbrenner", "Yining Chen", "Adrien Ecoffet", "Manas Joglekar", "Jan Leike", "Ilya Sutskever", "Jeff Wu" ]
Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior - for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too difficult for humans to reliably evaluate; humans will only be able to weakly supervise superhuman models. We study an analogy to this problem: can weak model supervision elicit the full capabilities of a much stronger model? We test this using a range of pretrained language models in the GPT-4 family on natural language processing (NLP), chess, and reward modeling tasks. We find that when we naively finetune strong pretrained models on labels generated by a weak model, they consistently perform better than their weak supervisors, a phenomenon we call weak-to-strong generalization. However, we are still far from recovering the full capabilities of strong models with naive finetuning alone, suggesting that techniques like RLHF may scale poorly to superhuman models without further work. We find that simple methods can often significantly improve weak-to-strong generalization: for example, when finetuning GPT-4 with a GPT-2-level supervisor and an auxiliary confidence loss, we can recover close to GPT-3.5-level performance on NLP tasks. Our results suggest that it is feasible to make empirical progress today on a fundamental challenge of aligning superhuman models.
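One plausible form of the auxiliary confidence loss mentioned above, assuming it mixes cross-entropy against the weak supervisor's labels with cross-entropy against the strong model's own hardened predictions; the mixing weight and hardening rule are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def aux_conf_loss(strong_logits: torch.Tensor,
                  weak_labels: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """strong_logits: (batch, n_classes); weak_labels: (batch,) class ids predicted
    by the weak supervisor. Mixes imitation of the weak labels with a term that
    reinforces the strong model's own (detached, hardened) predictions."""
    ce_weak = F.cross_entropy(strong_logits, weak_labels)
    hardened = strong_logits.argmax(dim=-1).detach()    # strong model's own predictions
    ce_self = F.cross_entropy(strong_logits, hardened)  # confidence-reinforcing term
    return (1 - alpha) * ce_weak + alpha * ce_self

loss = aux_conf_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)))
```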
2023-12-18T00:00:00
2312.09299
Weight subcloning: direct initialization of transformers using larger pretrained ones
[ "Mohammad Samragh", "Mehrdad Farajtabar", "Sachin Mehta", "Raviteja Vemulapalli", "Fartash Faghri", "Devang Naik", "Oncel Tuzel", "Mohammad Rastegari" ]
Training large transformer models from scratch for a target task requires lots of data and is computationally demanding. The usual practice of transfer learning overcomes this challenge by initializing the model with weights of a pretrained model of the same size and specification to increase the convergence and training speed. However, what if no pretrained model of the required size is available? In this paper, we introduce a simple yet effective technique to transfer the knowledge of a pretrained model to smaller variants. Our approach called weight subcloning expedites the training of scaled-down transformers by initializing their weights from larger pretrained models. Weight subcloning involves an operation on the pretrained model to obtain the equivalent initialized scaled-down model. It consists of two key steps: first, we introduce neuron importance ranking to decrease the embedding dimension per layer in the pretrained model. Then, we remove blocks from the transformer model to match the number of layers in the scaled-down network. The result is a network ready to undergo training, which gains significant improvements in training speed compared to random initialization. For instance, we achieve 4x faster training for vision transformers in image classification and language models designed for next token prediction.
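A rough sketch of the subcloning idea, assuming PyTorch linear layers and using weight-row magnitude as a stand-in for the paper's neuron importance ranking; the names and the toy sizes are illustrative only.

```python
import torch

def rank_neurons(weight: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the indices of the k most 'important' output neurons of a linear
    layer, scored here simply by weight-row L2 norm (a stand-in for the
    paper's neuron importance ranking)."""
    return weight.norm(dim=1).topk(k).indices

def subclone_linear(big: torch.nn.Linear, out_idx, in_idx) -> torch.nn.Linear:
    """Initialize a smaller linear layer from the selected slice of a larger
    pretrained one; block removal (dropping whole transformer layers to match
    the target depth) would be applied separately."""
    small = torch.nn.Linear(len(in_idx), len(out_idx), bias=big.bias is not None)
    with torch.no_grad():
        small.weight.copy_(big.weight[out_idx][:, in_idx])
        if big.bias is not None:
            small.bias.copy_(big.bias[out_idx])
    return small

# Toy example: shrink a 512->512 projection to 128->128.
big = torch.nn.Linear(512, 512)
keep = rank_neurons(big.weight, 128)
small = subclone_linear(big, keep, keep)
```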
2023-12-18T00:00:00
2312.09300
Self-Evaluation Improves Selective Generation in Large Language Models
[ "Jie Ren", "Yao Zhao", "Tu Vu", "Peter J. Liu", "Balaji Lakshminarayanan" ]
Safe deployment of large language models (LLMs) may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when it comes to choosing correct answers in multiple-choice questions or evaluating true/false statements. In this work, we reformulate open-ended generation tasks into token-level prediction tasks, and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, with the option to include a ``None of the above'' option to express the model's uncertainty explicitly. We benchmark a range of scoring methods based on self-evaluation and evaluate their performance in selective generation using TruthfulQA and TL;DR. Through experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation based scores not only improve accuracy, but also correlate better with the overall quality of generated content.
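A minimal sketch of point-wise self-evaluation, assuming a Hugging Face-style causal LM and tokenizer; the prompt wording, option format, and the way the score is normalized are illustrative assumptions rather than the authors' exact setup.

```python
import torch

@torch.no_grad()
def self_eval_score(model, tokenizer, question, answer):
    """Point-wise self-evaluation: score an answer by the probability the
    model assigns to option 'A' ('the answer is correct') as the next token.
    Prompt wording and option format are illustrative assumptions."""
    prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "Is the proposed answer correct?\n(A) Yes\n(B) No\nThe answer is ("
    )
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    probs = model(ids).logits[0, -1].softmax(dim=-1)      # next-token distribution
    a_id = tokenizer("A", add_special_tokens=False).input_ids[0]
    b_id = tokenizer("B", add_special_tokens=False).input_ids[0]
    return (probs[a_id] / (probs[a_id] + probs[b_id])).item()
```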
2023-12-18T00:00:00
2312.10029
Challenges with unsupervised LLM knowledge discovery
[ "Sebastian Farquhar", "Vikrant Varma", "Zachary Kenton", "Johannes Gasteiger", "Vladimir Mikulik", "Rohin Shah" ]
We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge -- instead they seem to discover whatever feature of the activations is most prominent. The idea behind unsupervised knowledge elicitation is that knowledge satisfies a consistency structure, which can be used to discover knowledge. We first prove theoretically that arbitrary features (not just knowledge) satisfy the consistency structure of a particular leading unsupervised knowledge-elicitation method, contrast-consistent search (Burns et al. - arXiv:2212.03827). We then present a series of experiments showing settings in which unsupervised methods result in classifiers that do not predict knowledge, but instead predict a different prominent feature. We conclude that existing unsupervised methods for discovering latent knowledge are insufficient, and we contribute sanity checks to apply to evaluating future knowledge elicitation methods. Conceptually, we hypothesise that the identification issues explored here, e.g. distinguishing a model's knowledge from that of a simulated character's, will persist for future unsupervised methods.
2023-12-18T00:00:00
2312.10007
Faithful Persona-based Conversational Dataset Generation with Large Language Models
[ "Pegah Jandaghi", "XiangHai Sheng", "Xinyi Bai", "Jay Pujara", "Hakim Sidahmed" ]
High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user's character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat. We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during Turing test decreases from 17.2% to 8.8% over three iterations.
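A schematic sketch of the Generator-Critic expansion loop described above; all callables (`generator`, the `critics`, and `generator.improve`) are hypothetical stand-ins for LLM calls, and the scoring and selection rules are simplified.

```python
def expand_dataset(seed_conversations, generator, critics, rounds=3, top_k=1000):
    """Generator-Critic expansion loop: the generator proposes candidate
    conversations, a mixture of expert critics scores them, and the
    best-scored candidates are kept and used to improve the generator."""
    dataset = list(seed_conversations)
    for _ in range(rounds):
        candidates = [generator(sample) for sample in dataset]
        scored = [(min(critic(c) for critic in critics), c) for c in candidates]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        best = [c for _, c in scored[:top_k]]
        dataset.extend(best)
        generator = generator.improve(best)   # e.g. refresh few-shot exemplars
    return dataset
```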
2023-12-18T00:00:00
2312.10003
ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent
[ "Renat Aksitov", "Sobhan Miryoosefi", "Zonglin Li", "Daliang Li", "Sheila Babayan", "Kavya Kopparapu", "Zachary Fisher", "Ruiqi Guo", "Sushant Prakash", "Pranesh Srinivasan", "Manzil Zaheer", "Felix Yu", "Sanjiv Kumar" ]
Answering complex natural language questions often necessitates multi-step reasoning and integrating external information. Several systems have combined knowledge retrieval with a large language model (LLM) to answer such questions. These systems, however, suffer from various failure cases, and we cannot directly train them end-to-end to fix such failures, as interaction with external knowledge is non-differentiable. To address these deficiencies, we define a ReAct-style LLM agent with the ability to reason and act upon external knowledge. We further refine the agent through a ReST-like method that iteratively trains on previous trajectories, employing growing-batch reinforcement learning with AI feedback for continuous self-improvement and self-distillation. Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model that achieves comparable performance on challenging compositional question-answering benchmarks with two orders of magnitude fewer parameters.
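A minimal sketch of the ReST-style outer loop around a ReAct agent, under the assumption of hypothetical `agent.run`, `reward_model`, and `finetune` callables and an arbitrary acceptance threshold; it only illustrates the grow-filter-finetune cycle, not the paper's exact recipe.

```python
def self_improve(agent, tasks, reward_model, finetune, iterations=2, threshold=0.5):
    """ReST-style loop: roll out trajectories with the ReAct agent, keep those
    ranked highly by an AI-feedback reward model, and fine-tune the agent
    (or a smaller student) on them."""
    for _ in range(iterations):
        trajectories = [agent.run(task) for task in tasks]      # growing batch
        good = [t for t in trajectories if reward_model(t) > threshold]
        agent = finetune(agent, good)                           # self-distillation
    return agent
```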
2023-12-18T00:00:00
2312.09911
Amphion: An Open-Source Audio, Music and Speech Generation Toolkit
[ "Xueyao Zhang", "Liumeng Xue", "Yuancheng Wang", "Yicheng Gu", "Xi Chen", "Zihao Fang", "Haopeng Chen", "Lexiao Zou", "Chaoren Wang", "Jun Han", "Kai Chen", "Haizhou Li", "Zhizheng Wu" ]
https://github.com/open-mmlab/Amphion
Amphion is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development. Amphion offers a unique feature: visualizations of classic models or architectures. We believe that these visualizations are beneficial for junior researchers and engineers who wish to gain a better understanding of the model. The North-Star objective of Amphion is to offer a platform for studying the conversion of any inputs into general audio. Amphion is designed to support individual generation tasks. In addition to the specific generation tasks, Amphion also includes several vocoders and evaluation metrics. A vocoder is an important module for producing high-quality audio signals, while evaluation metrics are critical for ensuring consistency across generation tasks. In this paper, we provide a high-level overview of Amphion.
2023-12-18T00:00:00
2312.09767
DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models
[ "Yifeng Ma", "Shiwei Zhang", "Jiayu Wang", "Xiang Wang", "Yingya Zhang", "Zhidong Deng" ]
https://github.com/ali-vilab/dreamtalk
Diffusion models have shown remarkable success in a variety of downstream generative tasks, yet remain under-explored in the important and challenging task of expressive talking head generation. In this work, we propose the DreamTalk framework to fill this gap, which employs meticulous design to unlock the potential of diffusion models in generating expressive talking heads. Specifically, DreamTalk consists of three crucial components: a denoising network, a style-aware lip expert, and a style predictor. The diffusion-based denoising network is able to consistently synthesize high-quality audio-driven face motions across diverse expressions. To enhance the expressiveness and accuracy of lip motions, we introduce a style-aware lip expert that can guide lip-sync while being mindful of the speaking styles. To eliminate the need for expression reference video or text, an extra diffusion-based style predictor is utilized to predict the target expression directly from the audio. By this means, DreamTalk can harness powerful diffusion models to generate expressive faces effectively and reduce the reliance on expensive style references. Experimental results demonstrate that DreamTalk is capable of generating photo-realistic talking faces with diverse speaking styles and achieving accurate lip motions, surpassing existing state-of-the-art counterparts.
2023-12-18T00:00:00
2312.09579
MobileSAMv2: Faster Segment Anything to Everything
[ "Chaoning Zhang", "Dongshen Han", "Sheng Zheng", "Jinwoo Choi", "Tae-Ho Kim", "Choong Seon Hong" ]
https://github.com/ChaoningZhang/MobileSAM
Segment anything model (SAM) addresses two practical yet challenging segmentation tasks: segment anything (SegAny), which utilizes a certain point to predict the mask for a single object of interest, and segment everything (SegEvery), which predicts the masks for all objects on the image. What makes SegAny slow for SAM is its heavyweight image encoder, which has been addressed by MobileSAM via decoupled knowledge distillation. The efficiency bottleneck of SegEvery with SAM, however, lies in its mask decoder because it needs to first generate numerous masks with redundant grid-search prompts and then perform filtering to obtain the final valid masks. We propose to improve its efficiency by directly generating the final masks with only valid prompts, which can be obtained through object discovery. Our proposed approach not only helps reduce the total time on the mask decoder by at least 16 times but also achieves superior performance. Specifically, our approach yields an average performance boost of 3.6% (42.5% vs. 38.9%) for zero-shot object proposal on the LVIS dataset with the mask AR@K metric. Qualitative results show that our approach generates fine-grained masks while avoiding over-segmenting things. This project targeting faster SegEvery than the original SAM is termed MobileSAMv2 to differentiate it from MobileSAM, which targets faster SegAny. Moreover, we demonstrate that our new prompt sampling is also compatible with the distilled image encoders in MobileSAM, contributing to a unified framework for efficient SegAny and SegEvery. The code is available at the same link as the MobileSAM project: https://github.com/ChaoningZhang/MobileSAM.
2023-12-18T00:00:00
2312.10035
Point Transformer V3: Simpler, Faster, Stronger
[ "Xiaoyang Wu", "Li Jiang", "Peng-Shuai Wang", "Zhijian Liu", "Xihui Liu", "Yu Qiao", "Wanli Ouyang", "Tong He", "Hengshuang Zhao" ]
This paper is not motivated to seek innovation within the attention mechanism. Instead, it focuses on overcoming the existing trade-offs between accuracy and efficiency within the context of point cloud processing, leveraging the power of scale. Drawing inspiration from recent advances in 3D large-scale representation learning, we recognize that model performance is more influenced by scale than by intricate design. Therefore, we present Point Transformer V3 (PTv3), which prioritizes simplicity and efficiency over the accuracy of certain mechanisms that are minor to the overall performance after scaling, such as replacing the precise neighbor search by KNN with an efficient serialized neighbor mapping of point clouds organized with specific patterns. This principle enables significant scaling, expanding the receptive field from 16 to 1024 points while remaining efficient (a 3x increase in processing speed and a 10x improvement in memory efficiency compared with its predecessor, PTv2). PTv3 attains state-of-the-art results on over 20 downstream tasks that span both indoor and outdoor scenarios. Further enhanced with multi-dataset joint training, PTv3 pushes these results to a higher level.
2023-12-18T00:00:00
2312.09608
Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models
[ "Senmao Li", "Taihang Hu", "Fahad Shahbaz Khan", "Linxuan Li", "Shiqi Yang", "Yaxing Wang", "Ming-Ming Cheng", "Jian Yang" ]
https://github.com/hutaiHang/Faster-Diffusion
One of the key components within diffusion models is the UNet for noise prediction. While several works have explored basic properties of the UNet decoder, its encoder largely remains unexplored. In this work, we conduct the first comprehensive study of the UNet encoder. We empirically analyze the encoder features and provide insights into important questions regarding how they change during the inference process. In particular, we find that encoder features change gently, whereas the decoder features exhibit substantial variations across different time-steps. This finding inspired us to omit the encoder at certain adjacent time-steps and cyclically reuse the encoder features from previous time-steps in the decoder. Based further on this observation, we introduce a simple yet effective encoder propagation scheme to accelerate diffusion sampling for a diverse set of tasks. Benefiting from our propagation scheme, we are able to run the decoder in parallel at certain adjacent time-steps. Additionally, we introduce a prior noise injection method to improve the texture details in the generated image. Besides the standard text-to-image task, we also validate our approach on other tasks: text-to-video, personalized generation and reference-guided generation. Without utilizing any knowledge distillation technique, our approach accelerates sampling for both the Stable Diffusion (SD) and DeepFloyd-IF models by 41% and 24% respectively, while maintaining high-quality generation performance. Our code is available at https://github.com/hutaiHang/Faster-Diffusion.
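A schematic sketch of encoder-feature reuse during sampling. Splitting the UNet into separate `encode` / `decode` callables and the `step` scheduler update are assumptions made purely for illustration; a real implementation would hook into the UNet forward pass rather than restructure it.

```python
def sample_with_encoder_reuse(encode, decode, step, latents, timesteps, key_every=2):
    """Encoder propagation sketch: run the UNet encoder only at key timesteps
    and reuse its cached features (and skip connections) at the adjacent
    non-key steps, where only the decoder is evaluated."""
    cached = None
    for i, t in enumerate(timesteps):
        if cached is None or i % key_every == 0:
            cached = encode(latents, t)           # encoder features + skips
        noise_pred = decode(latents, t, cached)   # decoder-only at reused steps
        latents = step(latents, noise_pred, t)    # scheduler update
    return latents
```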
2023-12-18T00:00:00
2312.09571
Extending Context Window of Large Language Models via Semantic Compression
[ "Weizhi Fei", "Xueyan Niu", "Pingyi Zhou", "Lu Hou", "Bo Bai", "Lei Deng", "Wei Han" ]
Transformer-based Large Language Models (LLMs) often impose limitations on the length of the text input to ensure the generation of fluent and relevant responses. This constraint restricts their applicability in scenarios involving long texts. We propose a novel semantic compression method that enables generalization to texts that are 6-8 times longer, without incurring significant computational costs or requiring fine-tuning. Our proposed framework draws inspiration from source coding in information theory and employs a pre-trained model to reduce the semantic redundancy of long inputs before passing them to the LLMs for downstream tasks. Experimental results demonstrate that our method effectively extends the context window of LLMs across a range of tasks including question answering, summarization, few-shot learning, and information retrieval. Furthermore, the proposed semantic compression method exhibits consistent fluency in text generation while reducing the associated computational overhead.
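A minimal sketch of the chunk-then-summarize idea behind semantic compression. The `summarize(chunk, max_words=...)` callable and word-based chunking are hypothetical simplifications; the paper's method operates on tokens and uses a pre-trained model to remove semantic redundancy.

```python
def semantic_compress(text, summarize, chunk_words=1024, ratio=0.15):
    """Split a long input into chunks, summarize each chunk to remove
    redundancy, and concatenate the summaries so the result fits in the
    downstream LLM's context window."""
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    budget = max(32, int(chunk_words * ratio))
    return "\n".join(summarize(chunk, max_words=budget) for chunk in chunks)
```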
2023-12-18T00:00:00
2312.10034
SlimmeRF: Slimmable Radiance Fields
[ "Shiran Yuan", "Hao Zhao" ]
https://github.com/Shiran-Yuan/SlimmeRF
Neural Radiance Field (NeRF) and its variants have recently emerged as successful methods for novel view synthesis and 3D scene reconstruction. However, most current NeRF models either achieve high accuracy using large model sizes, or achieve high memory-efficiency by trading off accuracy. This limits the applicable scope of any single model, since high-accuracy models might not fit in low-memory devices, and memory-efficient models might not satisfy high-quality requirements. To this end, we present SlimmeRF, a model that allows for instant test-time trade-offs between model size and accuracy through slimming, thus making the model simultaneously suitable for scenarios with different computing budgets. We achieve this through a newly proposed algorithm named Tensorial Rank Incrementation (TRaIn) which increases the rank of the model's tensorial representation gradually during training. We also observe that our model allows for more effective trade-offs in sparse-view scenarios, at times even achieving higher accuracy after being slimmed. We credit this to the fact that erroneous information such as floaters tend to be stored in components corresponding to higher ranks. Our implementation is available at https://github.com/Shiran-Yuan/SlimmeRF.
2023-12-18T00:00:00
2312.09323
Perspectives on the State and Future of Deep Learning -- 2023
[ "Micah Goldblum", "Anima Anandkumar", "Richard Baraniuk", "Tom Goldstein", "Kyunghyun Cho", "Zachary C Lipton", "Melanie Mitchell", "Preetum Nakkiran", "Max Welling", "Andrew Gordon Wilson" ]
The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time. The plan is to host this survey periodically until the AI singularity paperclip-frenzy-driven doomsday, keeping an updated list of topical questions and interviewing new community members for each edition. In this issue, we probed people's opinions on interpretable AI, the value of benchmarking in modern NLP, the state of progress towards understanding deep learning, and the future of academia.
2023-12-18T00:00:00
2312.09305
Stable Score Distillation for High-Quality 3D Generation
[ "Boshi Tang", "Jianan Wang", "Zhiyong Wu", "Lei Zhang" ]
Score Distillation Sampling (SDS) has exhibited remarkable performance in conditional 3D content generation. However, a comprehensive understanding of the SDS formulation is still lacking, hindering the development of 3D generation. In this work, we present an interpretation of SDS as a combination of three functional components: mode-disengaging, mode-seeking and variance-reducing terms, and analyze the properties of each. We show that problems such as over-smoothness and color-saturation result from the intrinsic deficiency of the supervision terms and reveal that the variance-reducing term introduced by SDS is sub-optimal. Additionally, we shed light on the adoption of large Classifier-Free Guidance (CFG) scale for 3D generation. Based on the analysis, we propose a simple yet effective approach named Stable Score Distillation (SSD) which strategically orchestrates each term for high-quality 3D generation. Extensive experiments validate the efficacy of our approach, demonstrating its ability to generate high-fidelity 3D content without succumbing to issues such as over-smoothness and over-saturation, even under low CFG conditions with the most challenging NeRF representation.
2023-12-19T00:00:00
2312.10763
M3DBench: Let's Instruct Large Models with Multi-modal 3D Prompts
[ "Mingsheng Li", "Xin Chen", "Chi Zhang", "Sijin Chen", "Hongyuan Zhu", "Fukun Yin", "Gang Yu", "Tao Chen" ]
Recently, 3D understanding has become popular for facilitating autonomous agents in decision-making. However, existing 3D datasets and methods are often limited to specific tasks. On the other hand, recent progress in Large Language Models (LLMs) and Multimodal Language Models (MLMs) has demonstrated exceptional performance on general language and imagery tasks. Therefore, it is interesting to unlock MLMs' potential to serve as 3D generalists for a wider range of tasks. However, current MLM research has focused less on 3D tasks due to the lack of large-scale 3D instruction-following datasets. In this work, we introduce a comprehensive 3D instruction-following dataset called M3DBench, which possesses the following characteristics: 1) It supports general multimodal instructions interleaved with text, images, 3D objects, and other visual prompts. 2) It unifies diverse 3D tasks at both region and scene levels, covering a variety of fundamental abilities in real-world 3D environments. 3) It is a large-scale 3D instruction-following dataset with over 320k instruction-response pairs. Furthermore, we establish a new benchmark for assessing the performance of large models in understanding multi-modal 3D prompts. Extensive experiments demonstrate the effectiveness of our dataset and baseline, supporting general 3D-centric tasks, which can inspire future research.
2023-12-19T00:00:00
2312.11396
MAG-Edit: Localized Image Editing in Complex Scenarios via Mask-Based Attention-Adjusted Guidance
[ "Qi Mao", "Lan Chen", "Yuchao Gu", "Zhen Fang", "Mike Zheng Shou" ]
Recent diffusion-based image editing approaches have exhibited impressive editing capabilities in images with simple compositions. However, localized editing in complex scenarios has not been well-studied in the literature, despite its growing real-world demands. Existing mask-based inpainting methods fall short of retaining the underlying structure within the edit region. Meanwhile, mask-free attention-based methods often exhibit editing leakage and misalignment in more complex compositions. In this work, we develop MAG-Edit, a training-free, inference-stage optimization method, which enables localized image editing in complex scenarios. In particular, MAG-Edit optimizes the noise latent feature in diffusion models by maximizing two mask-based cross-attention constraints of the edit token, which in turn gradually enhances the local alignment with the desired prompt. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our method in achieving both text alignment and structure preservation for localized editing within complex scenarios.
2023-12-19T00:00:00
2312.11461
GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning
[ "Ye Yuan", "Xueting Li", "Yangyi Huang", "Shalini De Mello", "Koki Nagano", "Jan Kautz", "Umar Iqbal" ]
Gaussian splatting has emerged as a powerful 3D representation that harnesses the advantages of both explicit (mesh) and implicit (NeRF) 3D representations. In this paper, we seek to leverage Gaussian splatting to generate realistic animatable avatars from textual descriptions, addressing the limitations (e.g., flexibility and efficiency) imposed by mesh or NeRF-based representations. However, a naive application of Gaussian splatting cannot generate high-quality animatable avatars and suffers from learning instability; it also cannot capture fine avatar geometries and often leads to degenerate body parts. To tackle these problems, we first propose a primitive-based 3D Gaussian representation where Gaussians are defined inside pose-driven primitives to facilitate animation. Second, to stabilize and amortize the learning of millions of Gaussians, we propose to use neural implicit fields to predict the Gaussian attributes (e.g., colors). Finally, to capture fine avatar geometries and extract detailed meshes, we propose a novel SDF-based implicit mesh learning approach for 3D Gaussians that regularizes the underlying geometries and extracts highly detailed textured meshes. Our proposed method, GAvatar, enables the large-scale generation of diverse animatable avatars using only text prompts. GAvatar significantly surpasses existing methods in terms of both appearance and geometry quality, and achieves extremely fast rendering (100 fps) at 1K resolution.
2023-12-19T00:00:00
2312.10899
MagicScroll: Nontypical Aspect-Ratio Image Generation for Visual Storytelling via Multi-Layered Semantic-Aware Denoising
[ "Bingyuan Wang", "Hengyu Meng", "Zeyu Cai", "Lanjiong Li", "Yue Ma", "Qifeng Chen", "Zeyu Wang" ]
Visual storytelling often uses nontypical aspect-ratio images like scroll paintings, comic strips, and panoramas to create an expressive and compelling narrative. While generative AI has achieved great success and shown the potential to reshape the creative industry, it remains a challenge to generate coherent and engaging content with arbitrary size and controllable style, concept, and layout, all of which are essential for visual storytelling. To overcome the shortcomings of previous methods including repetitive content, style inconsistency, and lack of controllability, we propose MagicScroll, a multi-layered, progressive diffusion-based image generation framework with a novel semantic-aware denoising process. The model enables fine-grained control over the generated image on object, scene, and background levels with text, image, and layout conditions. We also establish the first benchmark for nontypical aspect-ratio image generation for visual storytelling including mediums like paintings, comics, and cinematic panoramas, with customized metrics for systematic evaluation. Through comparative and ablation studies, MagicScroll showcases promising results in aligning with the narrative text, improving visual coherence, and engaging the audience. We plan to release the code and benchmark in the hope of a better collaboration between AI researchers and creative practitioners involving visual storytelling.
2023-12-19T00:00:00
2312.11462
Cascade Speculative Drafting for Even Faster LLM Inference
[ "Ziyi Chen", "Xiaocong Yang", "Jiacheng Lin", "Chenkai Sun", "Jie Huang", "Kevin Chen-Chuan Chang" ]
Speculative decoding enhances the efficiency of large language models (LLMs) by leveraging a draft model to draft for a larger target model to review. However, drafting in speculative decoding involves slow autoregressive generation and allocates the same amount of time to generating tokens of different importance. These two inefficiencies lead to suboptimal performance. To address this issue, we introduce Cascade Speculative Drafting (CS Drafting), a novel approach that employs two types of cascades. The Vertical Cascade eliminates autoregressive generation from neural models. The Horizontal Cascade constitutes efficient time allocation in drafting, with its optimality supported by our theoretical analysis. Combining both cascades, our CS Drafting algorithm achieves up to 72 percent additional speedup over speculative decoding in our experiments while keeping the same output distribution.
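For context, a greedy sketch of one plain speculative-decoding step, assuming Hugging Face-style causal LMs whose forward call returns an object with `.logits`. The vertical cascade (several drafters, the smallest being non-autoregressive) and the horizontal cascade (shrinking the draft length for later, less important positions) are omitted and only noted in the docstring.

```python
import torch

@torch.no_grad()
def speculative_step(draft, target, ids, k=4):
    """The draft model proposes k tokens autoregressively, the target model
    verifies them in a single forward pass, and we keep the longest agreeing
    prefix plus one bonus token from the target. CS Drafting additionally
    cascades several drafters and varies k per position; both refinements
    are omitted in this sketch."""
    proposal = ids
    for _ in range(k):
        nxt = draft(proposal).logits[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=-1)
    pred = target(proposal).logits.argmax(-1)   # pred[:, p] guesses token p+1
    n = ids.shape[1]
    accepted = 0
    for i in range(k):
        if pred[0, n + i - 1] == proposal[0, n + i]:
            accepted += 1
        else:
            break
    bonus = pred[:, n + accepted - 1 : n + accepted]
    return torch.cat([proposal[:, : n + accepted], bonus], dim=-1)
```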
2023-12-19T00:00:00
2312.10523
Paloma: A Benchmark for Evaluating Language Model Fit
[ "Ian Magnusson", "Akshita Bhagia", "Valentin Hofmann", "Luca Soldaini", "Ananya Harsh Jha", "Oyvind Tafjord", "Dustin Schwenk", "Evan Pete Walsh", "Yanai Elazar", "Kyle Lo", "Dirk Groeneveld", "Iz Beltagy", "Hannaneh Hajishirzi", "Noah A. Smith", "Kyle Richardson", "Jesse Dodge" ]
Language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains: varying distributions of language. Rather than assuming perplexity on one distribution extrapolates to others, Perplexity Analysis for Language Model Assessment (Paloma) measures LM fit to 585 text domains, ranging from nytimes.com to r/depression on Reddit. We invite submissions to our benchmark and organize results by comparability based on compliance with guidelines such as removal of benchmark contamination from pretraining. Submissions can also record parameter and training token count to make comparisons of Pareto efficiency for performance as a function of these measures of cost. We populate our benchmark with results from 6 baselines pretrained on popular corpora. In case studies, we demonstrate analyses that are possible with Paloma, such as finding that pretraining without data beyond Common Crawl leads to inconsistent fit to many domains.
2023-12-19T00:00:00
2312.10253
Catwalk: A Unified Language Model Evaluation Framework for Many Datasets
[ "Dirk Groeneveld", "Anas Awadalla", "Iz Beltagy", "Akshita Bhagia", "Ian Magnusson", "Hao Peng", "Oyvind Tafjord", "Pete Walsh", "Kyle Richardson", "Jesse Dodge" ]
https://github.com/allenai/catwalk
The success of large language models has shifted the evaluation paradigms in natural language processing (NLP). The community's interest has drifted towards comparing NLP models across many tasks, domains, and datasets, often at an extreme scale. This imposes new engineering challenges: efforts in constructing datasets and models have been fragmented, and their formats and interfaces are incompatible. As a result, it often takes extensive (re)implementation efforts to make fair and controlled comparisons at scale. Catwalk aims to address these issues. Catwalk provides a unified interface to a broad range of existing NLP datasets and models, ranging from both canonical supervised training and fine-tuning, to more modern paradigms like in-context learning. Its carefully-designed abstractions allow for easy extensions to many others. Catwalk substantially lowers the barriers to conducting controlled experiments at scale. For example, we finetuned and evaluated over 64 models on over 86 datasets with a single command, without writing any code. Maintained by the AllenNLP team at the Allen Institute for Artificial Intelligence (AI2), Catwalk is an ongoing open-source effort: https://github.com/allenai/catwalk.
2023-12-19T00:00:00
2312.11370
G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model
[ "Jiahui Gao", "Renjie Pi", "Jipeng Zhang", "Jiacheng Ye", "Wanjun Zhong", "Yufei Wang", "Lanqing Hong", "Jianhua Han", "Hang Xu", "Zhenguo Li", "Lingpeng Kong" ]
Large language models (LLMs) have shown remarkable proficiency in human-level reasoning and generation capabilities, which encourages extensive research on their application in mathematical problem solving. However, current work has largely focused on text-based mathematical problems, with limited investigation of problems involving geometric information. Addressing this gap, we aim to enable LLMs to solve geometric problems by understanding image input. We first analyze the limitations of current Multimodal Large Language Models (MLLMs) in this area: they struggle to accurately comprehend basic geometric elements and their relationships. To overcome these challenges, we take advantage of the unique characteristics of geometric problems (such as their distinctive logical forms and geometric scalability) and the capacity of textual LLMs to build an enriched multimodal geometry dataset based on existing data. The augmented dataset, Geo170K, contains more than 170K geometric image-caption and question-answer pairs. Utilizing our constructed Geo170K dataset, we develop G-LLaVA, which demonstrates exceptional performance in solving geometric problems, significantly outperforming GPT-4-V on the MathVista benchmark with only 7B parameters.
2023-12-19T00:00:00
2312.10240
Rich Human Feedback for Text-to-Image Generation
[ "Youwei Liang", "Junfeng He", "Gang Li", "Peizhao Li", "Arseniy Klimovskiy", "Nicholas Carolan", "Jiao Sun", "Jordi Pont-Tuset", "Sarah Young", "Feng Yang", "Junjie Ke", "Krishnamurthy Dj Dvijotham", "Katie Collins", "Yiwen Luo", "Yang Li", "Kai J Kohlhoff", "Deepak Ramachandran", "Vidhya Navalpakkam" ]
Recent Text-to-Image (T2I) generation models such as Stable Diffusion and Imagen have made significant progress in generating high-resolution images based on text descriptions. However, many generated images still suffer from issues such as artifacts/implausibility, misalignment with text descriptions, and low aesthetic quality. Inspired by the success of Reinforcement Learning with Human Feedback (RLHF) for large language models, prior works collected human-provided scores as feedback on generated images and trained a reward model to improve the T2I generation. In this paper, we enrich the feedback signal by (i) marking image regions that are implausible or misaligned with the text, and (ii) annotating which words in the text prompt are misrepresented or missing on the image. We collect such rich human feedback on 18K generated images and train a multimodal transformer to predict the rich feedback automatically. We show that the predicted rich human feedback can be leveraged to improve image generation, for example, by selecting high-quality training data to finetune and improve the generative models, or by creating masks with predicted heatmaps to inpaint the problematic regions. Notably, the improvements generalize to models (Muse) beyond those used to generate the images on which human feedback data were collected (Stable Diffusion variants).
2023-12-19T00:00:00
2312.10656
VidToMe: Video Token Merging for Zero-Shot Video Editing
[ "Xirui Li", "Chao Ma", "Xiaokang Yang", "Ming-Hsuan Yang" ]
Diffusion models have made significant advances in generating high-quality images, but their application to video generation has remained challenging due to the complexity of temporal motion. Zero-shot video editing offers a solution by utilizing pre-trained image diffusion models to translate source videos into new ones. Nevertheless, existing methods struggle to maintain strict temporal consistency and efficient memory consumption. In this work, we propose a novel approach to enhance temporal consistency in generated videos by merging self-attention tokens across frames. By aligning and compressing temporally redundant tokens across frames, our method improves temporal coherence and reduces memory consumption in self-attention computations. The merging strategy matches and aligns tokens according to the temporal correspondence between frames, facilitating natural temporal consistency in generated video frames. To manage the complexity of video processing, we divide videos into chunks and develop intra-chunk local token merging and inter-chunk global token merging, ensuring both short-term video continuity and long-term content consistency. Our video editing approach seamlessly extends the advancements in image editing to video editing, rendering favorable results in temporal consistency over state-of-the-art methods.
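A simplified sketch of cross-frame token merging, assuming `ref` and `cur` are (N, C) self-attention token matrices; the matching and averaging rules are reduced relative to the paper's intra-chunk local and inter-chunk global strategy.

```python
import torch
import torch.nn.functional as F

def merge_cross_frame_tokens(ref, cur, keep_ratio=0.5):
    """Match each current-frame token to its most similar reference-frame
    token by cosine similarity and average the most redundant pairs,
    keeping the remaining tokens unchanged."""
    sim = F.normalize(cur, dim=-1) @ F.normalize(ref, dim=-1).T   # (N, N)
    best_sim, best_idx = sim.max(dim=-1)
    n_merge = int(cur.shape[0] * (1 - keep_ratio))
    merge_src = best_sim.argsort(descending=True)[:n_merge]       # most redundant
    merged = cur.clone()
    merged[merge_src] = 0.5 * (cur[merge_src] + ref[best_idx[merge_src]])
    return merged
```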
2023-12-19T00:00:00
2312.10835
Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models
[ "Nikita Starodubcev", "Artem Fedorov", "Artem Babenko", "Dmitry Baranchuk" ]
Knowledge distillation methods have recently shown to be a promising direction to speedup the synthesis of large-scale diffusion models by requiring only a few inference steps. While several powerful distillation methods were recently proposed, the overall quality of student samples is typically lower compared to the teacher ones, which hinders their practical usage. In this work, we investigate the relative quality of samples produced by the teacher text-to-image diffusion model and its distilled student version. As our main empirical finding, we discover that a noticeable portion of student samples exhibit superior fidelity compared to the teacher ones, despite the ``approximate'' nature of the student. Based on this finding, we propose an adaptive collaboration between student and teacher diffusion models for effective text-to-image synthesis. Specifically, the distilled model produces the initial sample, and then an oracle decides whether it needs further improvements with a slow teacher model. Extensive experiments demonstrate that the designed pipeline surpasses state-of-the-art text-to-image alternatives for various inference budgets in terms of human preference. Furthermore, the proposed approach can be naturally used in popular applications such as text-guided image editing and controllable generation.
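A minimal sketch of the adaptive student-teacher routing described above; the `student`, `teacher`, and `oracle` callables and the threshold are illustrative assumptions, not the authors' pipeline.

```python
def adaptive_generate(prompt, student, teacher, oracle, threshold=0.6):
    """The fast distilled student produces an initial image; only when an
    oracle score (e.g. an image-text similarity estimate) falls below a
    threshold is the slow teacher used to refine it."""
    image = student(prompt)
    if oracle(prompt, image) >= threshold:
        return image                               # student sample is good enough
    return teacher(prompt, init_image=image)       # otherwise refine with teacher
```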
2023-12-19T00:00:00
2312.10540
VecFusion: Vector Font Generation with Diffusion
[ "Vikas Thamizharasan", "Difan Liu", "Shantanu Agarwal", "Matthew Fisher", "Michael Gharbi", "Oliver Wang", "Alec Jacobson", "Evangelos Kalogerakis" ]
We present VecFusion, a new neural architecture that can generate vector fonts with varying topological structures and precise control point positions. Our approach is a cascaded diffusion model which consists of a raster diffusion model followed by a vector diffusion model. The raster model generates low-resolution, rasterized fonts with auxiliary control point information, capturing the global style and shape of the font, while the vector model synthesizes vector fonts conditioned on the low-resolution raster fonts from the first stage. To synthesize long and complex curves, our vector diffusion model uses a transformer architecture and a novel vector representation that enables the modeling of diverse vector geometry and the precise prediction of control points. Our experiments show that, in contrast to previous generative models for vector graphics, our new cascaded vector diffusion model generates higher quality vector fonts, with complex structures and diverse styles.
2023-12-19T00:00:00
2312.11458
GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
[ "Yiqing Liang", "Numair Khan", "Zhengqin Li", "Thu Nguyen-Phuoc", "Douglas Lanman", "James Tompkin", "Lei Xiao" ]
We propose a method for dynamic scene reconstruction using deformable 3D Gaussians that is tailored for monocular video. Building upon the efficiency of Gaussian splatting, our approach extends the representation to accommodate dynamic elements via a deformable set of Gaussians residing in a canonical space, and a time-dependent deformation field defined by a multi-layer perceptron (MLP). Moreover, under the assumption that most natural scenes have large regions that remain static, we allow the MLP to focus its representational power by additionally including a static Gaussian point cloud. The concatenated dynamic and static point clouds form the input for the Gaussian Splatting rasterizer, enabling real-time rendering. The differentiable pipeline is optimized end-to-end with a self-supervised rendering loss. Our method achieves results that are comparable to state-of-the-art dynamic neural radiance field methods while allowing much faster optimization and rendering. Project website: https://lynl7130.github.io/gaufre/index.html
2023-12-19T00:00:00
2312.11459
VolumeDiffusion: Flexible Text-to-3D Generation with Efficient Volumetric Encoder
[ "Zhicong Tang", "Shuyang Gu", "Chunyu Wang", "Ting Zhang", "Jianmin Bao", "Dong Chen", "Baining Guo" ]
https://github.com/tzco/VolumeDiffusion
This paper introduces a pioneering 3D volumetric encoder designed for text-to-3D generation. To scale up the training data for the diffusion model, a lightweight network is developed to efficiently acquire feature volumes from multi-view images. The 3D volumes are then trained on a diffusion model for text-to-3D generation using a 3D U-Net. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it empowers finer control over object part characteristics through textual cues, fostering model creativity by seamlessly combining multiple concepts within a single object. This research significantly contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology. Code is available at https://github.com/tzco/VolumeDiffusion.
2023-12-19T00:00:00
2312.11392
SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing
[ "Zeyinzi Jiang", "Chaojie Mao", "Yulin Pan", "Zhen Han", "Jingfeng Zhang" ]
Image diffusion models have been utilized in various tasks, such as text-to-image generation and controllable image synthesis. Recent research has introduced tuning methods that make subtle adjustments to the original models, yielding promising results in specific adaptations of foundational generative diffusion models. Rather than modifying the main backbone of the diffusion model, we delve into the role of skip connection in U-Net and reveal that hierarchical features aggregating long-distance information across encoder and decoder make a significant impact on the content and quality of image generation. Based on the observation, we propose an efficient generative tuning framework, dubbed SCEdit, which integrates and edits Skip Connection using a lightweight tuning module named SC-Tuner. Furthermore, the proposed framework allows for straightforward extension to controllable image synthesis by injecting different conditions with Controllable SC-Tuner, simplifying and unifying the network design for multi-condition inputs. Our SCEdit substantially reduces training parameters, memory usage, and computational expense due to its lightweight tuners, with backward propagation only passing to the decoder blocks. Extensive experiments conducted on text-to-image generation and controllable image synthesis tasks demonstrate the superiority of our method in terms of efficiency and performance. Project page: https://scedit.github.io/
2023-12-19T00:00:00
2312.10332
ProTIP: Progressive Tool Retrieval Improves Planning
[ "Raviteja Anantha", "Bortik Bandyopadhyay", "Anirudh Kashi", "Sayantan Mahinder", "Andrew W Hill", "Srinivas Chappidi" ]
Large language models (LLMs) are increasingly employed for complex multi-step planning tasks, where the tool retrieval (TR) step is crucial for achieving successful outcomes. Two prevalent approaches for TR are single-step retrieval, which utilizes the complete query, and sequential retrieval using task decomposition (TD), where a full query is segmented into discrete atomic subtasks. While single-step retrieval lacks the flexibility to handle "inter-tool dependency," the TD approach necessitates maintaining "subtask-tool atomicity alignment," as the toolbox can evolve dynamically. To address these limitations, we introduce the Progressive Tool retrieval to Improve Planning (ProTIP) framework. ProTIP is a lightweight, contrastive learning-based framework that implicitly performs TD without the explicit requirement of subtask labels, while simultaneously maintaining subtask-tool atomicity. On the ToolBench dataset, ProTIP outperforms the ChatGPT task decomposition-based approach by a remarkable margin, achieving a 24% improvement in Recall@K=10 for TR and a 41% enhancement in tool accuracy for plan generation.
2023-12-19T00:00:00
2312.10665
Silkie: Preference Distillation for Large Visual Language Models
[ "Lei Li", "Zhihui Xie", "Mukai Li", "Shunian Chen", "Peiyi Wang", "Liang Chen", "Yazheng Yang", "Benyou Wang", "Lingpeng Kong" ]
This paper explores preference distillation for large vision language models (LVLMs), improving their ability to generate helpful and faithful responses anchoring the visual context. We first build a vision-language feedback (VLFeedback) dataset utilizing AI annotation. Specifically, responses are generated by models sampled from 12 LVLMs, conditioned on multi-modal instructions sourced from various datasets. We adopt GPT-4V to assess the generated outputs regarding helpfulness, visual faithfulness, and ethical considerations. Furthermore, the preference supervision is distilled into Qwen-VL-Chat through the direct preference optimization (DPO) method. The resulting model Silkie, achieves 6.9% and 9.5% relative improvement on the MME benchmark regarding the perception and cognition capabilities, respectively. Silkie also demonstrates reduced hallucination by setting a new state-of-the-art score of 3.02 on the MMHal-Bench benchmark. Further analysis shows that DPO with our VLFeedback dataset mainly boosts the fine-grained perception and complex cognition abilities of LVLMs, leading to more comprehensive improvements compared to human-annotated preference datasets.
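As a reference point, a minimal sketch of the standard DPO objective used for the preference distillation above (not the authors' training code): it operates on summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model; the beta value and toy inputs are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct preference optimization: push the policy's log-prob margin on
    (chosen - rejected) above the frozen reference model's margin."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Toy usage with made-up per-response log-probabilities.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
```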
2023-12-20T00:00:00
2312.11514
LLM in a flash: Efficient Large Language Model Inference with Limited Memory
[ "Keivan Alizadeh", "Iman Mirzadeh", "Dmitry Belenko", "Karen Khatamifard", "Minsik Cho", "Carlo C Del Mundo", "Mohammad Rastegari", "Mehrdad Farajtabar" ]
Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their intensive computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory but bringing them on demand to DRAM. Our method involves constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this flash memory-informed framework, we introduce two principal techniques. First, "windowing" strategically reduces data transfer by reusing previously activated neurons, and second, "row-column bundling", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
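A rough sketch of the "windowing" bookkeeping described above. The cache layout, the `load_rows` flash reader, and the window size are hypothetical; row-column bundling (batching the reads into larger contiguous chunks) is only mentioned in the docstring.

```python
from collections import deque

def update_neuron_cache(cache, history, active_now, load_rows, window=5):
    """Keep FFN weight rows for neurons activated within the last `window`
    tokens resident in DRAM, read only newly needed rows from flash, and
    evict the rest. `cache` maps neuron id -> weight row, `history` is a
    deque of per-token active-neuron sets, and `load_rows(ids)` is a
    hypothetical bulk flash reader."""
    history.append(set(active_now))
    if len(history) > window:
        history.popleft()
    needed = set().union(*history)
    missing = [i for i in active_now if i not in cache]
    for i, row in zip(missing, load_rows(missing)):
        cache[i] = row                 # only the delta is transferred from flash
    for i in list(cache):
        if i not in needed:
            del cache[i]               # free DRAM held by stale neurons
    return cache, history

# Toy usage with a dummy reader.
cache, history = {}, deque()
cache, history = update_neuron_cache(cache, history, {3, 7},
                                     lambda ids: [b"row"] * len(ids))
```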
2023-12-20T00:00:00
2312.11556
StarVector: Generating Scalable Vector Graphics Code from Images
[ "Juan A. Rodriguez", "Shubham Agarwal", "Issam H. Laradji", "Pau Rodriguez", "David Vazquez", "Christopher Pal", "Marco Pedersoli" ]
https://github.com/joanrod/star-vector
Scalable Vector Graphics (SVGs) have become integral in modern image rendering applications due to their infinite scalability in resolution, versatile usability, and editing capabilities. SVGs are particularly popular in the fields of web development and graphic design. Existing approaches for SVG modeling using deep learning often struggle with generating complex SVGs and are restricted to simpler ones that require extensive processing and simplification. This paper introduces StarVector, a multimodal SVG generation model that effectively integrates Code Generation Large Language Models (CodeLLMs) and vision models. Our approach utilizes a CLIP image encoder to extract visual representations from pixel-based images, which are then transformed into visual tokens via an adapter module. These visual tokens are pre-pended to the SVG token embeddings, and the sequence is modeled by the StarCoder model using next-token prediction, effectively learning to align the visual and code tokens. This enables StarVector to generate unrestricted SVGs that accurately represent pixel images. To evaluate StarVector's performance, we present SVG-Bench, a comprehensive benchmark for evaluating SVG methods across multiple datasets and relevant metrics. Within this benchmark, we introduce novel datasets including SVG-Stack, a large-scale dataset of real-world SVG examples, and use it to pre-train StarVector as a large foundation model for SVGs. Our results demonstrate significant enhancements in visual quality and complexity handling over current methods, marking a notable advancement in SVG generation technology. Code and models: https://github.com/joanrod/star-vector
2023-12-20T00:00:00
2312.12436
A Challenger to GPT-4V? Early Explorations of Gemini in Visual Expertise
[ "Chaoyou Fu", "Renrui Zhang", "Haojia Lin", "Zihan Wang", "Timin Gao", "Yongdong Luo", "Yubo Huang", "Zhengye Zhang", "Longtian Qiu", "Gaoxiang Ye", "Yunhang Shen", "Mengdan Zhang", "Peixian Chen", "Sirui Zhao", "Xiawu Zheng", "Shaohui Lin", "Deqiang Jiang", "Di Yin", "Peng Gao", "Ke Li", "Xing Sun", "Rongrong Ji" ]
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
The surge of interest towards Multi-modal Large Language Models (MLLMs), e.g., GPT-4V(ision) from OpenAI, has marked a significant trend in both academia and industry. They endow Large Language Models (LLMs) with powerful capabilities in visual understanding, enabling them to tackle diverse multi-modal tasks. Very recently, Google released Gemini, its newest and most capable MLLM built from the ground up for multi-modality. In light of the superior reasoning capabilities, can Gemini challenge GPT-4V's leading position in multi-modal learning? In this paper, we present a preliminary exploration of Gemini Pro's visual understanding proficiency, which comprehensively covers four domains: fundamental perception, advanced cognition, challenging vision tasks, and various expert capacities. We compare Gemini Pro with the state-of-the-art GPT-4V to evaluate its upper limits, along with the latest open-sourced MLLM, Sphinx, which reveals the gap between manual efforts and black-box systems. The qualitative samples indicate that, while GPT-4V and Gemini showcase different answering styles and preferences, they can exhibit comparable visual reasoning capabilities, and Sphinx still trails behind them concerning domain generalizability. Specifically, GPT-4V tends to elaborate detailed explanations and intermediate steps, and Gemini prefers to output a direct and concise answer. The quantitative evaluation on the popular MME benchmark also demonstrates the potential of Gemini to be a strong challenger to GPT-4V. Our early investigation of Gemini also observes some common issues of MLLMs, indicating that there still remains a considerable distance towards artificial general intelligence. Our project for tracking the progress of MLLM is released at https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models.
2023-12-20T00:00:00
2312.12433
Tracking Any Object Amodally
[ "Cheng-Yen Hsieh", "Tarasha Khurana", "Achal Dave", "Deva Ramanan" ]
Amodal perception, the ability to comprehend complete object structures from partial visibility, is a fundamental skill, even for infants. Its significance extends to applications like autonomous driving, where a clear understanding of heavily occluded objects is essential. However, modern detection and tracking algorithms often overlook this critical capability, perhaps due to the prevalence of modal annotations in most datasets. To address the scarcity of amodal data, we introduce the TAO-Amodal benchmark, featuring 880 diverse categories in thousands of video sequences. Our dataset includes amodal and modal bounding boxes for visible and occluded objects, including objects that are partially out-of-frame. To enhance amodal tracking with object permanence, we leverage a lightweight plug-in module, the amodal expander, to transform standard, modal trackers into amodal ones through fine-tuning on a few hundred video sequences with data augmentation. We achieve a 3.3% and 1.6% improvement on the detection and tracking of occluded objects on TAO-Amodal. When evaluated on people, our method produces dramatic improvements of 2x compared to state-of-the-art modal baselines.
2023-12-20T00:00:00
2312.11841
MixRT: Mixed Neural Representations For Real-Time NeRF Rendering
[ "Chaojian Li", "Bichen Wu", "Peter Vajda", "Yingyan", "Lin" ]
Neural Radiance Field (NeRF) has emerged as a leading technique for novel view synthesis, owing to its impressive photorealistic reconstruction and rendering capability. Nevertheless, achieving real-time NeRF rendering in large-scale scenes has presented challenges, often leading to the adoption of either intricate baked mesh representations with a substantial number of triangles or resource-intensive ray marching in baked representations. We challenge these conventions, observing that high-quality geometry, represented by meshes with substantial triangles, is not necessary for achieving photorealistic rendering quality. Consequently, we propose MixRT, a novel NeRF representation that includes a low-quality mesh, a view-dependent displacement map, and a compressed NeRF model. This design effectively harnesses the capabilities of existing graphics hardware, thus enabling real-time NeRF rendering on edge devices. Leveraging a highly-optimized WebGL-based rendering framework, our proposed MixRT attains real-time rendering speeds on edge devices (over 30 FPS at a resolution of 1280 x 720 on a MacBook M1 Pro laptop), better rendering quality (0.2 PSNR higher in indoor scenes of the Unbounded-360 datasets), and a smaller storage size (less than 80% compared to state-of-the-art methods).
2023-12-20T00:00:00
2312.12423
Jack of All Tasks, Master of Many: Designing General-purpose Coarse-to-Fine Vision-Language Model
[ "Shraman Pramanick", "Guangxing Han", "Rui Hou", "Sayan Nag", "Ser-Nam Lim", "Nicolas Ballas", "Qifan Wang", "Rama Chellappa", "Amjad Almahairi" ]
The ability of large language models (LLMs) to process visual inputs has given rise to general-purpose vision systems, unifying various vision-language (VL) tasks by instruction tuning. However, due to the enormous diversity in input-output formats in the vision domain, existing general-purpose models fail to successfully integrate segmentation and multi-image inputs with coarse-level tasks into a single framework. In this work, we introduce VistaLLM, a powerful visual system that addresses coarse- and fine-grained VL tasks over single and multiple input images using a unified framework. VistaLLM utilizes an instruction-guided image tokenizer that filters global embeddings using task descriptions to extract compressed and refined features from numerous images. Moreover, VistaLLM employs a gradient-aware adaptive sampling technique to represent binary segmentation masks as sequences, significantly improving over previously used uniform sampling. To bolster the desired capability of VistaLLM, we curate CoinIt, a comprehensive coarse-to-fine instruction tuning dataset with 6.8M samples. We also address the lack of multi-image grounding datasets by introducing a novel task, AttCoSeg (Attribute-level Co-Segmentation), which boosts the model's reasoning and grounding capability over multiple input images. Extensive experiments on a wide range of V- and VL tasks demonstrate the effectiveness of VistaLLM by achieving consistent state-of-the-art performance over strong baselines across all downstream tasks. Our project page can be found at https://shramanpramanick.github.io/VistaLLM/.
2023-12-20T00:00:00
2312.11805
Gemini: A Family of Highly Capable Multimodal Models
[ "Gemini Team", "Rohan Anil", "Sebastian Borgeaud", "Yonghui Wu", "Jean-Baptiste Alayrac", "Jiahui Yu", "Radu Soricut", "Johan Schalkwyk", "Andrew M. Dai", "Anja Hauth", "Katie Millican", "David Silver", "Slav Petrov", "Melvin Johnson", "Ioannis Antonoglou", "Julian Schrittwieser", "Amelia Glaese", "Jilin Chen", "Emily Pitler", "Timothy Lillicrap", "Angeliki Lazaridou", "Orhan Firat", "James Molloy", "Michael Isard", "Paul R. Barham", "Tom Hennigan", "Benjamin Lee", "Fabio Viola", "Malcolm Reynolds", "Yuanzhong Xu", "Ryan Doherty", "Eli Collins", "Clemens Meyer", "Eliza Rutherford", "Erica Moreira", "Kareem Ayoub", "Megha Goel", "George Tucker", "Enrique Piqueras", "Maxim Krikun", "Iain Barr", "Nikolay Savinov", "Ivo Danihelka", "Becca Roelofs", "Anaïs White", "Anders Andreassen", "Tamara von Glehn", "Lakshman Yagati", "Mehran Kazemi", "Lucas Gonzalez", "Misha Khalman", "Jakub Sygnowski", "Alexandre Frechette", "Charlotte Smith", "Laura Culp", "Lev Proleev", "Yi Luan", "Xi Chen", "James Lottes", "Nathan Schucher", "Federico Lebron", "Alban Rrustemi", "Natalie Clay", "Phil Crone", "Tomas Kocisky", "Jeffrey Zhao", "Bartek Perz", "Dian Yu", "Heidi Howard", "Adam Bloniarz", "Jack W. Rae", "Han Lu", "Laurent Sifre", "Marcello Maggioni", "Fred Alcober", "Dan Garrette", "Megan Barnes", "Shantanu Thakoor", "Jacob Austin", "Gabriel Barth-Maron", "William Wong", "Rishabh Joshi", "Rahma Chaabouni", "Deeni Fatiha", "Arun Ahuja", "Ruibo Liu", "Yunxuan Li", "Sarah Cogan", "Jeremy Chen", "Chao Jia", "Chenjie Gu", "Qiao Zhang", "Jordan Grimstad", "Ale Jakse Hartman", "Martin Chadwick", "Gaurav Singh Tomar", "Xavier Garcia", "Evan Senter", "Emanuel Taropa", "Thanumalayan Sankaranarayana Pillai", "Jacob Devlin", "Michael Laskin", "Diego de Las Casas", "Dasha Valter", "Connie Tao", "Lorenzo Blanco", "Adrià Puigdomènech Badia", "David Reitter", "Mianna Chen", "Jenny Brennan", "Clara Rivera", "Sergey Brin", "Shariq Iqbal", "Gabriela Surita", "Jane Labanowski", "Abhi Rao", "Stephanie Winkler", "Emilio Parisotto", "Yiming Gu", "Kate Olszewska", "Yujing Zhang", "Ravi Addanki", "Antoine Miech", "Annie Louis", "Laurent El Shafey", "Denis Teplyashin", "Geoff Brown", "Elliot Catt", "Nithya Attaluri", "Jan Balaguer", "Jackie Xiang", "Pidong Wang", "Zoe Ashwood", "Anton Briukhov", "Albert Webson", "Sanjay Ganapathy", "Smit Sanghavi", "Ajay Kannan", "Ming-Wei Chang", "Axel Stjerngren", "Josip Djolonga", "Yuting Sun", "Ankur Bapna", "Matthew Aitchison", "Pedram Pejman", "Henryk Michalewski", "Tianhe Yu", "Cindy Wang", "Juliette Love", "Junwhan Ahn", "Dawn Bloxwich", "Kehang Han", "Peter Humphreys", "Thibault Sellam", "James Bradbury", "Varun Godbole", "Sina Samangooei", "Bogdan Damoc", "Alex Kaskasoli", "Sébastien M. R. 
Arnold", "Vijay Vasudevan", "Shubham Agrawal", "Jason Riesa", "Dmitry Lepikhin", "Richard Tanburn", "Srivatsan Srinivasan", "Hyeontaek Lim", "Sarah Hodkinson", "Pranav Shyam", "Johan Ferret", "Steven Hand", "Ankush Garg", "Tom Le Paine", "Jian Li", "Yujia Li", "Minh Giang", "Alexander Neitz", "Zaheer Abbas", "Sarah York", "Machel Reid", "Elizabeth Cole", "Aakanksha Chowdhery", "Dipanjan Das", "Dominika Rogozińska", "Vitaly Nikolaev", "Pablo Sprechmann", "Zachary Nado", "Lukas Zilka", "Flavien Prost", "Luheng He", "Marianne Monteiro", "Gaurav Mishra", "Chris Welty", "Josh Newlan", "Dawei Jia", "Miltiadis Allamanis", "Clara Huiyi Hu", "Raoul de Liedekerke", "Justin Gilmer", "Carl Saroufim", "Shruti Rijhwani", "Shaobo Hou", "Disha Shrivastava", "Anirudh Baddepudi", "Alex Goldin", "Adnan Ozturel", "Albin Cassirer", "Yunhan Xu", "Daniel Sohn", "Devendra Sachan", "Reinald Kim Amplayo", "Craig Swanson", "Dessie Petrova", "Shashi Narayan", "Arthur Guez", "Siddhartha Brahma", "Jessica Landon", "Miteyan Patel", "Ruizhe Zhao", "Kevin Villela", "Luyu Wang", "Wenhao Jia", "Matthew Rahtz", "Mai Giménez", "Legg Yeung", "Hanzhao Lin", "James Keeling", "Petko Georgiev", "Diana Mincu", "Boxi Wu", "Salem Haykal", "Rachel Saputro", "Kiran Vodrahalli", "James Qin", "Zeynep Cankara", "Abhanshu Sharma", "Nick Fernando", "Will Hawkins", "Behnam Neyshabur", "Solomon Kim", "Adrian Hutter", "Priyanka Agrawal", "Alex Castro-Ros", "George van den Driessche", "Tao Wang", "Fan Yang", "Shuo-yiin Chang", "Paul Komarek", "Ross McIlroy", "Mario Lučić", "Guodong Zhang", "Wael Farhan", "Michael Sharman", "Paul Natsev", "Paul Michel", "Yong Cheng", "Yamini Bansal", "Siyuan Qiao", "Kris Cao", "Siamak Shakeri", "Christina Butterfield", "Justin Chung", "Paul Kishan Rubenstein", "Shivani Agrawal", "Arthur Mensch", "Kedar Soparkar", "Karel Lenc", "Timothy Chung", "Aedan Pope", "Loren Maggiore", "Jackie Kay", "Priya Jhakra", "Shibo Wang", "Joshua Maynez", "Mary Phuong", "Taylor Tobin", "Andrea Tacchetti", "Maja Trebacz", "Kevin Robinson", "Yash Katariya", "Sebastian Riedel", "Paige Bailey", "Kefan Xiao", "Nimesh Ghelani", "Lora Aroyo", "Ambrose Slone", "Neil Houlsby", "Xuehan Xiong", "Zhen Yang", "Elena Gribovskaya", "Jonas Adler", "Mateo Wirth", "Lisa Lee", "Music Li", "Thais Kagohara", "Jay Pavagadhi", "Sophie Bridgers", "Anna Bortsova", "Sanjay Ghemawat", "Zafarali Ahmed", "Tianqi Liu", "Richard Powell", "Vijay Bolina", "Mariko Iinuma", "Polina Zablotskaia", "James Besley", "Da-Woon Chung", "Timothy Dozat", "Ramona Comanescu", "Xiance Si", "Jeremy Greer", "Guolong Su", "Martin Polacek", "Raphaël Lopez Kaufman", "Simon Tokumine", "Hexiang Hu", "Elena Buchatskaya", "Yingjie Miao", "Mohamed Elhawaty", "Aditya Siddhant", "Nenad Tomasev", "Jinwei Xing", "Christina Greer", "Helen Miller", "Shereen Ashraf", "Aurko Roy", "Zizhao Zhang", "Ada Ma", "Angelos Filos", "Milos Besta", "Rory Blevins", "Ted Klimenko", "Chih-Kuan Yeh", "Soravit Changpinyo", "Jiaqi Mu", "Oscar Chang", "Mantas Pajarskas", "Carrie Muir", "Vered Cohen", "Charline Le Lan", "Krishna Haridasan", "Amit Marathe", "Steven Hansen", "Sholto Douglas", "Rajkumar Samuel", "Mingqiu Wang", "Sophia Austin", "Chang Lan", "Jiepu Jiang", "Justin Chiu", "Jaime Alonso Lorenzo", "Lars Lowe Sjösund", "Sébastien Cevey", "Zach Gleicher", "Thi Avrahami", "Anudhyan Boral", "Hansa Srinivasan", "Vittorio Selo", "Rhys May", "Konstantinos Aisopos", "Léonard Hussenot", "Livio Baldini Soares", "Kate Baumli", "Michael B. 
Chang", "Adrià Recasens", "Ben Caine", "Alexander Pritzel", "Filip Pavetic", "Fabio Pardo", "Anita Gergely", "Justin Frye", "Vinay Ramasesh", "Dan Horgan", "Kartikeya Badola", "Nora Kassner", "Subhrajit Roy", "Ethan Dyer", "Víctor Campos", "Alex Tomala", "Yunhao Tang", "Dalia El Badawy", "Elspeth White", "Basil Mustafa", "Oran Lang", "Abhishek Jindal", "Sharad Vikram", "Zhitao Gong", "Sergi Caelles", "Ross Hemsley", "Gregory Thornton", "Fangxiaoyu Feng", "Wojciech Stokowiec", "Ce Zheng", "Phoebe Thacker", "Çağlar Ünlü", "Zhishuai Zhang", "Mohammad Saleh", "James Svensson", "Max Bileschi", "Piyush Patil", "Ankesh Anand", "Roman Ring", "Katerina Tsihlas", "Arpi Vezer", "Marco Selvi", "Toby Shevlane", "Mikel Rodriguez", "Tom Kwiatkowski", "Samira Daruki", "Keran Rong", "Allan Dafoe", "Nicholas FitzGerald", "Keren Gu-Lemberg", "Mina Khan", "Lisa Anne Hendricks", "Marie Pellat", "Vladimir Feinberg", "James Cobon-Kerr", "Tara Sainath", "Maribeth Rauh", "Sayed Hadi Hashemi", "Richard Ives", "Yana Hasson", "YaGuang Li", "Eric Noland", "Yuan Cao", "Nathan Byrd", "Le Hou", "Qingze Wang", "Thibault Sottiaux", "Michela Paganini", "Jean-Baptiste Lespiau", "Alexandre Moufarek", "Samer Hassan", "Kaushik Shivakumar", "Joost van Amersfoort", "Amol Mandhane", "Pratik Joshi", "Anirudh Goyal", "Matthew Tung", "Andrew Brock", "Hannah Sheahan", "Vedant Misra", "Cheng Li", "Nemanja Rakićević", "Mostafa Dehghani", "Fangyu Liu", "Sid Mittal", "Junhyuk Oh", "Seb Noury", "Eren Sezener", "Fantine Huot", "Matthew Lamm", "Nicola De Cao", "Charlie Chen", "Gamaleldin Elsayed", "Ed Chi", "Mahdis Mahdieh", "Ian Tenney", "Nan Hua", "Ivan Petrychenko", "Patrick Kane", "Dylan Scandinaro", "Rishub Jain", "Jonathan Uesato", "Romina Datta", "Adam Sadovsky", "Oskar Bunyan", "Dominik Rabiej", "Shimu Wu", "John Zhang", "Gautam Vasudevan", "Edouard Leurent", "Mahmoud Alnahlawi", "Ionut Georgescu", "Nan Wei", "Ivy Zheng", "Betty Chan", "Pam G Rabinovitch", "Piotr Stanczyk", "Ye Zhang", "David Steiner", "Subhajit Naskar", "Michael Azzam", "Matthew Johnson", "Adam Paszke", "Chung-Cheng Chiu", "Jaume Sanchez Elias", "Afroz Mohiuddin", "Faizan Muhammad", "Jin Miao", "Andrew Lee", "Nino Vieillard", "Sahitya Potluri", "Jane Park", "Elnaz Davoodi", "Jiageng Zhang", "Jeff Stanway", "Drew Garmon", "Abhijit Karmarkar", "Zhe Dong", "Jong Lee", "Aviral Kumar", "Luowei Zhou", "Jonathan Evens", "William Isaac", "Zhe Chen", "Johnson Jia", "Anselm Levskaya", "Zhenkai Zhu", "Chris Gorgolewski", "Peter Grabowski", "Yu Mao", "Alberto Magni", "Kaisheng Yao", "Javier Snaider", "Norman Casagrande", "Paul Suganthan", "Evan Palmer", "Geoffrey Irving", "Edward Loper", "Manaal Faruqui", "Isha Arkatkar", "Nanxin Chen", "Izhak Shafran", "Michael Fink", "Alfonso Castaño", "Irene Giannoumis", "Wooyeol Kim", "Mikołaj Rybiński", "Ashwin Sreevatsa", "Jennifer Prendki", "David Soergel", "Adrian Goedeckemeyer", "Willi Gierke", "Mohsen Jafari", "Meenu Gaba", "Jeremy Wiesner", "Diana Gage Wright", "Yawen Wei", "Harsha Vashisht", "Yana Kulizhskaya", "Jay Hoover", "Maigo Le", "Lu Li", "Chimezie Iwuanyanwu", "Lu Liu", "Kevin Ramirez", "Andrey Khorlin", "Albert Cui", "Tian LIN", "Marin Georgiev", "Marcus Wu", "Ricardo Aguilar", "Keith Pallo", "Abhishek Chakladar", "Alena Repina", "Xihui Wu", "Tom van der Weide", "Priya Ponnapalli", "Caroline Kaplan", "Jiri Simsa", "Shuangfeng Li", "Olivier Dousse", "Fan Yang", "Jeff Piper", "Nathan Ie", "Minnie Lui", "Rama Pasumarthi", "Nathan Lintz", "Anitha Vijayakumar", "Lam Nguyen Thiet", "Daniel Andor", "Pedro Valenzuela", "Cosmin 
Paduraru", "Daiyi Peng", "Katherine Lee", "Shuyuan Zhang", "Somer Greene", "Duc Dung Nguyen", "Paula Kurylowicz", "Sarmishta Velury", "Sebastian Krause", "Cassidy Hardin", "Lucas Dixon", "Lili Janzer", "Kiam Choo", "Ziqiang Feng", "Biao Zhang", "Achintya Singhal", "Tejasi Latkar", "Mingyang Zhang", "Quoc Le", "Elena Allica Abellan", "Dayou Du", "Dan McKinnon", "Natasha Antropova", "Tolga Bolukbasi", "Orgad Keller", "David Reid", "Daniel Finchelstein", "Maria Abi Raad", "Remi Crocker", "Peter Hawkins", "Robert Dadashi", "Colin Gaffney", "Sid Lall", "Ken Franko", "Egor Filonov", "Anna Bulanova", "Rémi Leblond", "Vikas Yadav", "Shirley Chung", "Harry Askham", "Luis C. Cobo", "Kelvin Xu", "Felix Fischer", "Jun Xu", "Christina Sorokin", "Chris Alberti", "Chu-Cheng Lin", "Colin Evans", "Hao Zhou", "Alek Dimitriev", "Hannah Forbes", "Dylan Banarse", "Zora Tung", "Jeremiah Liu", "Mark Omernick", "Colton Bishop", "Chintu Kumar", "Rachel Sterneck", "Ryan Foley", "Rohan Jain", "Swaroop Mishra", "Jiawei Xia", "Taylor Bos", "Geoffrey Cideron", "Ehsan Amid", "Francesco Piccinno", "Xingyu Wang", "Praseem Banzal", "Petru Gurita", "Hila Noga", "Premal Shah", "Daniel J. Mankowitz", "Alex Polozov", "Nate Kushman", "Victoria Krakovna", "Sasha Brown", "MohammadHossein Bateni", "Dennis Duan", "Vlad Firoiu", "Meghana Thotakuri", "Tom Natan", "Anhad Mohananey", "Matthieu Geist", "Sidharth Mudgal", "Sertan Girgin", "Hui Li", "Jiayu Ye", "Ofir Roval", "Reiko Tojo", "Michael Kwong", "James Lee-Thorp", "Christopher Yew", "Quan Yuan", "Sumit Bagri", "Danila Sinopalnikov", "Sabela Ramos", "John Mellor", "Abhishek Sharma", "Aliaksei Severyn", "Jonathan Lai", "Kathy Wu", "Heng-Tze Cheng", "David Miller", "Nicolas Sonnerat", "Denis Vnukov", "Rory Greig", "Jennifer Beattie", "Emily Caveness", "Libin Bai", "Julian Eisenschlos", "Alex Korchemniy", "Tomy Tsai", "Mimi Jasarevic", "Weize Kong", "Phuong Dao", "Zeyu Zheng", "Frederick Liu", "Fan Yang", "Rui Zhu", "Mark Geller", "Tian Huey Teh", "Jason Sanmiya", "Evgeny Gladchenko", "Nejc Trdin", "Andrei Sozanschi", "Daniel Toyama", "Evan Rosen", "Sasan Tavakkol", "Linting Xue", "Chen Elkind", "Oliver Woodman", "John Carpenter", "George Papamakarios", "Rupert Kemp", "Sushant Kafle", "Tanya Grunina", "Rishika Sinha", "Alice Talbert", "Abhimanyu Goyal", "Diane Wu", "Denese Owusu-Afriyie", "Cosmo Du", "Chloe Thornton", "Jordi Pont-Tuset", "Pradyumna Narayana", "Jing Li", "Sabaer Fatehi", "John Wieting", "Omar Ajmeri", "Benigno Uria", "Tao Zhu", "Yeongil Ko", "Laura Knight", "Amélie Héliou", "Ning Niu", "Shane Gu", "Chenxi Pang", "Dustin Tran", "Yeqing Li", "Nir Levine", "Ariel Stolovich", "Norbert Kalb", "Rebeca Santamaria-Fernandez", "Sonam Goenka", "Wenny Yustalim", "Robin Strudel", "Ali Elqursh", "Balaji Lakshminarayanan", "Charlie Deck", "Shyam Upadhyay", "Hyo Lee", "Mike Dusenberry", "Zonglin Li", "Xuezhi Wang", "Kyle Levin", "Raphael Hoffmann", "Dan Holtmann-Rice", "Olivier Bachem", "Summer Yue", "Sho Arora", "Eric Malmi", "Daniil Mirylenka", "Qijun Tan", "Christy Koh", "Soheil Hassas Yeganeh", "Siim Põder", "Steven Zheng", "Francesco Pongetti", "Mukarram Tariq", "Yanhua Sun", "Lucian Ionita", "Mojtaba Seyedhosseini", "Pouya Tafti", "Ragha Kotikalapudi", "Zhiyu Liu", "Anmol Gulati", "Jasmine Liu", "Xinyu Ye", "Bart Chrzaszcz", "Lily Wang", "Nikhil Sethi", "Tianrun Li", "Ben Brown", "Shreya Singh", "Wei Fan", "Aaron Parisi", "Joe Stanton", "Chenkai Kuang", "Vinod Koverkathu", "Christopher A. 
Choquette-Choo", "Yunjie Li", "TJ Lu", "Abe Ittycheriah", "Prakash Shroff", "Pei Sun", "Mani Varadarajan", "Sanaz Bahargam", "Rob Willoughby", "David Gaddy", "Ishita Dasgupta", "Guillaume Desjardins", "Marco Cornero", "Brona Robenek", "Bhavishya Mittal", "Ben Albrecht", "Ashish Shenoy", "Fedor Moiseev", "Henrik Jacobsson", "Alireza Ghaffarkhah", "Morgane Rivière", "Alanna Walton", "Clément Crepy", "Alicia Parrish", "Yuan Liu", "Zongwei Zhou", "Clement Farabet", "Carey Radebaugh", "Praveen Srinivasan", "Claudia van der Salm", "Andreas Fidjeland", "Salvatore Scellato", "Eri Latorre-Chimoto", "Hanna Klimczak-Plucińska", "David Bridson", "Dario de Cesare", "Tom Hudson", "Piermaria Mendolicchio", "Lexi Walker", "Alex Morris", "Ivo Penchev", "Matthew Mauger", "Alexey Guseynov", "Alison Reid", "Seth Odoom", "Lucia Loher", "Victor Cotruta", "Madhavi Yenugula", "Dominik Grewe", "Anastasia Petrushkina", "Tom Duerig", "Antonio Sanchez", "Steve Yadlowsky", "Amy Shen", "Amir Globerson", "Adam Kurzrok", "Lynette Webb", "Sahil Dua", "Dong Li", "Preethi Lahoti", "Surya Bhupatiraju", "Dan Hurt", "Haroon Qureshi", "Ananth Agarwal", "Tomer Shani", "Matan Eyal", "Anuj Khare", "Shreyas Rammohan Belle", "Lei Wang", "Chetan Tekur", "Mihir Sanjay Kale", "Jinliang Wei", "Ruoxin Sang", "Brennan Saeta", "Tyler Liechty", "Yi Sun", "Yao Zhao", "Stephan Lee", "Pandu Nayak", "Doug Fritz", "Manish Reddy Vuyyuru", "John Aslanides", "Nidhi Vyas", "Martin Wicke", "Xiao Ma", "Taylan Bilal", "Evgenii Eltyshev", "Daniel Balle", "Nina Martin", "Hardie Cate", "James Manyika", "Keyvan Amiri", "Yelin Kim", "Xi Xiong", "Kai Kang", "Florian Luisier", "Nilesh Tripuraneni", "David Madras", "Mandy Guo", "Austin Waters", "Oliver Wang", "Joshua Ainslie", "Jason Baldridge", "Han Zhang", "Garima Pruthi", "Jakob Bauer", "Feng Yang", "Riham Mansour", "Jason Gelman", "Yang Xu", "George Polovets", "Ji Liu", "Honglong Cai", "Warren Chen", "XiangHai Sheng", "Emily Xue", "Sherjil Ozair", "Adams Yu", "Christof Angermueller", "Xiaowei Li", "Weiren Wang", "Julia Wiesinger", "Emmanouil Koukoumidis", "Yuan Tian", "Anand Iyer", "Madhu Gurumurthy", "Mark Goldenson", "Parashar Shah", "MK Blake", "Hongkun Yu", "Anthony Urbanowicz", "Jennimaria Palomaki", "Chrisantha Fernando", "Kevin Brooks", "Ken Durden", "Harsh Mehta", "Nikola Momchev", "Elahe Rahimtoroghi", "Maria Georgaki", "Amit Raul", "Sebastian Ruder", "Morgan Redshaw", "Jinhyuk Lee", "Komal Jalan", "Dinghua Li", "Ginger Perng", "Blake Hechtman", "Parker Schuh", "Milad Nasr", "Mia Chen", "Kieran Milan", "Vladimir Mikulik", "Trevor Strohman", "Juliana Franco", "Tim Green", "Demis Hassabis", "Koray Kavukcuoglu", "Jeffrey Dean", "Oriol Vinyals" ]
This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of Gemini models in cross-modal reasoning and language understanding will enable a wide variety of use cases and we discuss our approach toward deploying them responsibly to users.
2023-12-20T00:00:00
2312.11595
TIP: Text-Driven Image Processing with Semantic and Restoration Instructions
[ "Chenyang Qi", "Zhengzhong Tu", "Keren Ye", "Mauricio Delbracio", "Peyman Milanfar", "Qifeng Chen", "Hossein Talebi" ]
Text-driven diffusion models have become increasingly popular for various image editing tasks, including inpainting, stylization, and object replacement. However, it remains an open research problem to adopt this language-vision paradigm for more fine-level image processing tasks, such as denoising, super-resolution, deblurring, and compression artifact removal. In this paper, we develop TIP, a Text-driven Image Processing framework that leverages natural language as a user-friendly interface to control the image restoration process. We consider the capacity of text information in two dimensions. First, we use content-related prompts to enhance the semantic alignment, effectively alleviating identity ambiguity in the restoration outcomes. Second, our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength, without the need for explicit task-specific design. In addition, we introduce a novel fusion mechanism that augments the existing ControlNet architecture by learning to rescale the generative prior, thereby achieving better restoration fidelity. Our extensive experiments demonstrate the superior restoration performance of TIP compared to the state of the art, alongside offering the flexibility of text-based control over the restoration effects.
2023-12-20T00:00:00
2312.11537
FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline
[ "Chien-Yu Lin", "Qichen Fu", "Thomas Merth", "Karren Yang", "Anurag Ranjan" ]
Super-resolution (SR) techniques have recently been proposed to upscale the outputs of neural radiance fields (NeRF) and generate high-quality images with enhanced inference speeds. However, existing NeRF+SR methods increase training overhead by using extra input features, loss functions, and/or expensive training procedures such as knowledge distillation. In this paper, we aim to leverage SR for efficiency gains without costly training or architectural changes. Specifically, we build a simple NeRF+SR pipeline that directly combines existing modules, and we propose a lightweight augmentation technique, random patch sampling, for training. Compared to existing NeRF+SR methods, our pipeline mitigates the SR computing overhead and can be trained up to 23x faster, making it feasible to run on consumer devices such as the Apple MacBook. Experiments show our pipeline can upscale NeRF outputs by 2-4x while maintaining high quality, increasing inference speeds by up to 18x on an NVIDIA V100 GPU and 12.8x on an M1 Pro chip. We conclude that SR can be a simple but effective technique for improving the efficiency of NeRF models for consumer devices.
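The random patch sampling augmentation mentioned above can be illustrated with a short sketch that crops aligned low-/high-resolution patches for SR training; the patch size, scale factor, and tensor layout are assumptions rather than the paper's exact settings.

```python
import torch

def random_patch_pair(lr_img: torch.Tensor, hr_img: torch.Tensor,
                      patch: int = 64, scale: int = 4):
    """Crop an aligned random patch from a low-res NeRF render and its
    high-res ground-truth image for super-resolution training."""
    _, h, w = lr_img.shape          # lr_img: (C, h, w); hr_img: (C, h*scale, w*scale)
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    lr_patch = lr_img[:, top:top + patch, left:left + patch]
    hr_patch = hr_img[:, top * scale:(top + patch) * scale,
                         left * scale:(left + patch) * scale]
    return lr_patch, hr_patch

# Usage: feed lr_patch through the SR network and compare against hr_patch.
lr, hr = torch.rand(3, 100, 100), torch.rand(3, 400, 400)
lp, hp = random_patch_pair(lr, hr, patch=64, scale=4)
print(lp.shape, hp.shape)   # torch.Size([3, 64, 64]) torch.Size([3, 256, 256])
```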
2023-12-20T00:00:00
2312.11535
Customize-It-3D: High-Quality 3D Creation from A Single Image Using Subject-Specific Knowledge Prior
[ "Nan Huang", "Ting Zhang", "Yuhui Yuan", "Dong Chen", "Shanghang Zhang" ]
In this paper, we present a novel two-stage approach that fully utilizes the information provided by the reference image to establish a customized knowledge prior for image-to-3D generation. While previous approaches primarily rely on a general diffusion prior, which struggles to yield consistent results with the reference image, we propose a subject-specific and multi-modal diffusion model. This model not only aids NeRF optimization by considering the shading mode for improved geometry but also enhances texture from the coarse results to achieve superior refinement. Both aspects contribute to faithfully aligning the 3D content with the subject. Extensive experiments showcase the superiority of our method, Customize-It-3D, outperforming previous works by a substantial margin. It produces faithful 360-degree reconstructions with impressive visual quality, making it well-suited for various applications, including text-to-3D creation.
2023-12-20T00:00:00
2312.11897
Text-Conditioned Resampler For Long Form Video Understanding
[ "Bruno Korbar", "Yongqin Xian", "Alessio Tonioni", "Andrew Zisserman", "Federico Tombari" ]
Videos are a highly redundant data source, and it is often enough to identify a few key moments to solve any given task. In this paper, we present a text-conditioned video resampler (TCR) module that uses a pre-trained and frozen visual encoder and large language model (LLM) to process long video sequences for a task. TCR localises relevant visual features from the video given a text condition and provides them to an LLM to generate a text response. Due to its lightweight design and use of cross-attention, TCR can process more than 100 frames at a time, allowing the model to use much longer chunks of video than earlier works. We make the following contributions: (i) we design a transformer-based sampling architecture that can process long videos conditioned on a task, together with a training method that enables it to bridge pre-trained visual and language models; (ii) we empirically validate its efficacy on a wide variety of evaluation tasks, and set a new state of the art on NextQA, EgoSchema, and the EGO4D-LTA challenge; and (iii) we determine tasks which require longer video contexts and that can thus be used effectively for further evaluation of long-range video models.
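The sketch below illustrates the general shape of a text-conditioned resampler of this kind: a small set of learnable queries cross-attends over frozen video tokens and text-condition embeddings and returns a short sequence for the LLM. Dimensions, layer counts, and the single-block structure are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextConditionedResampler(nn.Module):
    """Sketch of a TCR-style module: learnable queries cross-attend over frozen
    video tokens concatenated with text-condition embeddings, producing a short
    sequence of visual tokens for the LLM. Sizes are illustrative only."""

    def __init__(self, dim: int = 512, n_queries: int = 64, n_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))

    def forward(self, video_feats: torch.Tensor, text_feats: torch.Tensor):
        # video_feats: (B, T*P, dim) frozen visual tokens from many frames
        # text_feats:  (B, L, dim)   embeddings of the text condition / task
        kv = torch.cat([video_feats, text_feats], dim=1)
        q = self.queries.unsqueeze(0).expand(video_feats.size(0), -1, -1)
        out, _ = self.cross_attn(q, kv, kv)     # queries attend to video + text
        return self.ff(out)                     # (B, n_queries, dim) -> fed to the LLM

resampler = TextConditionedResampler()
tokens = resampler(torch.randn(2, 100 * 16, 512), torch.randn(2, 12, 512))
print(tokens.shape)   # torch.Size([2, 64, 512])
```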
2023-12-20T00:00:00
2312.11532
Topic-VQ-VAE: Leveraging Latent Codebooks for Flexible Topic-Guided Document Generation
[ "YoungJoon Yoo", "Jongwon Choi" ]
https://github.com/clovaai/TVQ-VAE
This paper introduces a novel approach for topic modeling utilizing latent codebooks from Vector-Quantized Variational Auto-Encoder~(VQ-VAE), discretely encapsulating the rich information of the pre-trained embeddings such as the pre-trained language model. From the novel interpretation of the latent codebooks and embeddings as conceptual bag-of-words, we propose a new generative topic model called Topic-VQ-VAE~(TVQ-VAE) which inversely generates the original documents related to the respective latent codebook. The TVQ-VAE can visualize the topics with various generative distributions including the traditional BoW distribution and the autoregressive image generation. Our experimental results on document analysis and image generation demonstrate that TVQ-VAE effectively captures the topic context which reveals the underlying structures of the dataset and supports flexible forms of document generation. Official implementation of the proposed TVQ-VAE is available at https://github.com/clovaai/TVQ-VAE.
2023-12-20T00:00:00
2312.12030
Towards Accurate Guided Diffusion Sampling through Symplectic Adjoint Method
[ "Jiachun Pan", "Hanshu Yan", "Jun Hao Liew", "Jiashi Feng", "Vincent Y. F. Tan" ]
Training-free guided sampling in diffusion models leverages off-the-shelf pre-trained networks, such as an aesthetic evaluation model, to guide the generation process. Current training-free guided sampling algorithms obtain the guidance energy function based on a one-step estimate of the clean image. However, since the off-the-shelf pre-trained networks are trained on clean images, the one-step estimation procedure of the clean image may be inaccurate, especially in the early stages of the generation process in diffusion models. This causes the guidance in the early time steps to be inaccurate. To overcome this problem, we propose Symplectic Adjoint Guidance (SAG), which calculates the gradient guidance in two inner stages. Firstly, SAG estimates the clean image via n function calls, where n serves as a flexible hyperparameter that can be tailored to meet specific image quality requirements. Secondly, SAG uses the symplectic adjoint method to obtain the gradients accurately and efficiently in terms of the memory requirements. Extensive experiments demonstrate that SAG generates images with higher qualities compared to the baselines in both guided image and video generation tasks.
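A heavily simplified sketch of the first stage of such guidance is shown below: the clean image is estimated with n small denoising sub-steps instead of a one-step estimate, and the guidance gradient is then taken with ordinary autograd. The stand-in epsilon and reward networks, the step rule, and the use of plain backpropagation (rather than the memory-efficient symplectic adjoint method the paper actually uses) are all assumptions made for brevity.

```python
import torch

def eps_model(x, t):
    # Stand-in epsilon predictor (a real method would call the diffusion U-Net).
    return 0.05 * x

def aesthetic_score(x0):
    # Stand-in for an off-the-shelf guidance network scoring the clean estimate.
    return -(x0 - 0.5).pow(2).mean()

def guidance_grad(x_t, t, n=3):
    """Estimate the clean image with n denoising sub-steps, then backpropagate
    the guidance score to the noisy latent."""
    x = x_t.clone().requires_grad_(True)
    x0 = x
    for i in range(n):                       # n-call estimate of the clean image
        tau = t * (n - 1 - i) / n            # shrink the (toy) timestep each call
        x0 = x0 - 0.1 * eps_model(x0, tau)
    (grad,) = torch.autograd.grad(aesthetic_score(x0), x)
    return grad                              # added to the sampler's update direction

g = guidance_grad(torch.randn(1, 4, 32, 32), t=40, n=3)
print(g.shape, float(g.abs().mean()))
```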
2023-12-20T00:00:00
2312.11894
3D-LFM: Lifting Foundation Model
[ "Mosam Dabhi", "Laszlo A. Jeni", "Simon Lucey" ]
The lifting of 3D structure and camera from 2D landmarks is a cornerstone of the entire discipline of computer vision. Traditional methods have been confined to specific rigid objects, such as those in Perspective-n-Point (PnP) problems, but deep learning has expanded our capability to reconstruct a wide range of object classes (e.g. C3PDO and PAUL) with resilience to noise, occlusions, and perspective distortions. All these techniques, however, have been limited by the fundamental need to establish correspondences across the 3D training data -- significantly limiting their utility to applications where one has an abundance of "in-correspondence" 3D data. Our approach harnesses the inherent permutation equivariance of transformers to manage varying numbers of points per 3D data instance, withstands occlusions, and generalizes to unseen categories. We demonstrate state-of-the-art performance across 2D-3D lifting task benchmarks. Since our approach can be trained across such a broad class of structures, we refer to it simply as a 3D Lifting Foundation Model (3D-LFM) -- the first of its kind.
2023-12-20T00:00:00
2312.11666
HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles
[ "Vanessa Sklyarova", "Egor Zakharov", "Otmar Hilliges", "Michael J. Black", "Justus Thies" ]
We present HAAR, a new strand-based generative model for 3D human hairstyles. Specifically, based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines. Current AI-based generative models take advantage of powerful 2D priors to reconstruct 3D content in the form of point clouds, meshes, or volumetric functions. However, by using the 2D priors, they are intrinsically limited to recovering only the visible parts. Highly occluded hair structures cannot be reconstructed with those methods, and they only model the "outer shell", which is not ready to be used in physics-based rendering or simulation pipelines. In contrast, we propose the first text-guided generative method that uses 3D hair strands as an underlying representation. Leveraging 2D visual question-answering (VQA) systems, we automatically annotate synthetic hair models that are generated from a small set of artist-created hairstyles. This allows us to train a latent diffusion model that operates in a common hairstyle UV space. In qualitative and quantitative studies, we demonstrate the capabilities of the proposed model and compare it to existing hairstyle generation approaches.
2023-12-21T00:00:00
2312.12491
StreamDiffusion: A Pipeline-level Solution for Real-time Interactive Generation
[ "Akio Kodaira", "Chenfeng Xu", "Toshiki Hazama", "Takanori Yoshimoto", "Kohei Ohno", "Shogo Mitsuhori", "Soichi Sugano", "Hanying Cho", "Zhijian Liu", "Kurt Keutzer" ]
https://github.com/cumulo-autumn/StreamDiffusion
We introduce StreamDiffusion, a real-time diffusion pipeline designed for interactive image generation. Existing diffusion models are adept at creating images from text or image prompts, yet they often fall short in real-time interaction. This limitation becomes particularly evident in scenarios involving continuous input, such as the Metaverse, live video streaming, and broadcasting, where high throughput is imperative. To address this, we present Stream Batch, a novel approach that transforms the original sequential denoising into a batched denoising process. Stream Batch eliminates the conventional wait-and-interact approach and enables fluid and high-throughput streams. To handle the frequency disparity between data input and model throughput, we design a novel input-output queue for parallelizing the streaming process. Moreover, the existing diffusion pipeline uses classifier-free guidance (CFG), which requires additional U-Net computation. To mitigate the redundant computations, we propose a novel residual classifier-free guidance (RCFG) algorithm that reduces the number of negative conditional denoising steps to only one or even zero. In addition, we introduce a stochastic similarity filter (SSF) to optimize power consumption. Our Stream Batch achieves around 1.5x speedup compared to the sequential denoising method at different denoising levels. The proposed RCFG leads to speeds up to 2.05x higher than the conventional CFG. Combining the proposed strategies with existing mature acceleration tools allows image-to-image generation to reach up to 91.07 fps on one RTX 4090, improving the throughput of the AutoPipeline developed by Diffusers by over 59.56x. Furthermore, our proposed StreamDiffusion also significantly reduces energy consumption, by 2.39x on one RTX 3060 and 1.99x on one RTX 4090.
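The toy loop below illustrates the Stream Batch idea in isolation: frames at different denoising timesteps are stacked into one batch, so each model call advances every in-flight frame by one step and emits one finished frame. The stand-in denoiser, latent sizes, and the four-step schedule are assumptions, not the StreamDiffusion pipeline itself.

```python
import torch

T = 4                                        # number of denoising steps
timesteps = torch.arange(T - 1, -1, -1)      # [3, 2, 1, 0]; slot i is at timestep T-1-i
latents = torch.randn(T, 4, 64, 64)          # one in-flight frame per slot

def denoise_step(x, t):
    # Stand-in for one U-Net call on the whole staggered batch.
    return x * 0.9

for frame_id in range(8):                    # continuous stream of incoming frames
    latents = denoise_step(latents, timesteps)
    finished = latents[-1]                   # the frame that just reached t = 0
    print(f"emit frame {frame_id}, latent std {finished.std().item():.3f}")
    new_input = torch.randn(1, 4, 64, 64)            # next incoming frame enters at t = T-1
    latents = torch.cat([new_input, latents[:-1]])   # shift the pipeline by one slot
```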
2023-12-21T00:00:00
2312.12490
InstructVideo: Instructing Video Diffusion Models with Human Feedback
[ "Hangjie Yuan", "Shiwei Zhang", "Xiang Wang", "Yujie Wei", "Tao Feng", "Yining Pan", "Yingya Zhang", "Ziwei Liu", "Samuel Albanie", "Dong Ni" ]
Diffusion models have emerged as the de facto paradigm for video generation. However, their reliance on web-scale data of varied quality often yields results that are visually unappealing and misaligned with the textual prompts. To tackle this problem, we propose InstructVideo to instruct text-to-video diffusion models with human feedback by reward fine-tuning. InstructVideo has two key ingredients: 1) To ameliorate the cost of reward fine-tuning induced by generating through the full DDIM sampling chain, we recast reward fine-tuning as editing. By leveraging the diffusion process to corrupt a sampled video, InstructVideo requires only partial inference of the DDIM sampling chain, reducing fine-tuning cost while improving fine-tuning efficiency. 2) To mitigate the absence of a dedicated video reward model for human preferences, we repurpose established image reward models, e.g., HPSv2. To this end, we propose Segmental Video Reward, a mechanism to provide reward signals based on segmental sparse sampling, and Temporally Attenuated Reward, a method that mitigates temporal modeling degradation during fine-tuning. Extensive experiments, both qualitative and quantitative, validate the practicality and efficacy of using image reward models in InstructVideo, significantly enhancing the visual quality of generated videos without compromising generalization capabilities. Code and models will be made publicly available.
2023-12-21T00:00:00
2312.13271
Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting
[ "Junwu Zhang", "Zhenyu Tang", "Yatian Pang", "Xinhua Cheng", "Peng Jin", "Yida Wei", "Wangbo Yu", "Munan Ning", "Li Yuan" ]
https://github.com/junwuzhang19/repaint123
Recent one image to 3D generation methods commonly adopt Score Distillation Sampling (SDS). Despite the impressive results, there are multiple deficiencies including multi-view inconsistency, over-saturated and over-smoothed textures, as well as the slow generation speed. To address these deficiencies, we present Repaint123 to alleviate multi-view bias as well as texture degradation and speed up the generation process. The core idea is to combine the powerful image generation capability of the 2D diffusion model and the texture alignment ability of the repainting strategy for generating high-quality multi-view images with consistency. We further propose visibility-aware adaptive repainting strength for overlap regions to enhance the generated image quality in the repainting process. The generated high-quality and multi-view consistent images enable the use of simple Mean Square Error (MSE) loss for fast 3D content generation. We conduct extensive experiments and show that our method has a superior ability to generate high-quality 3D content with multi-view consistency and fine textures in 2 minutes from scratch. Code is at https://github.com/junwuzhang19/repaint123.
2023-12-21T00:00:00
2312.12742
Cached Transformers: Improving Transformers with Differentiable Memory Cache
[ "Zhaoyang Zhang", "Wenqi Shao", "Yixiao Ge", "Xiaogang Wang", "Jinwei Gu", "Ping Luo" ]
This work introduces a new Transformer model called Cached Transformer, which uses Gated Recurrent Cached (GRC) attention to extend the self-attention mechanism with a differentiable memory cache of tokens. GRC attention enables attending to both past and current tokens, increasing the receptive field of attention and allowing the model to explore long-range dependencies. By utilizing a recurrent gating unit to continuously update the cache, our model achieves significant advancements in six language and vision tasks, including language modeling, machine translation, ListOPs, image classification, object detection, and instance segmentation. Furthermore, our approach surpasses previous memory-based techniques in tasks such as language modeling and can be applied to a broader range of settings.
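A minimal sketch of a gated-recurrent-cache attention layer is given below, assuming a simplified gating rule and cache update; it is meant to show the attend-to-[cache; current tokens] pattern rather than reproduce the paper's exact module.

```python
import torch
import torch.nn as nn

class GRCAttention(nn.Module):
    """Sketch of gated-recurrent-cache attention: tokens attend to
    [cache; current tokens], and the cache is refreshed by a learned gate
    after each forward pass. The gating form and sizes are simplified."""

    def __init__(self, dim: int = 256, heads: int = 4, cache_len: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)
        self.cache_len = cache_len
        self.register_buffer("cache", torch.zeros(1, cache_len, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, dim)
        cache = self.cache.expand(x.size(0), -1, -1)
        kv = torch.cat([cache, x], dim=1)                  # attend to past + current tokens
        out, _ = self.attn(x, kv, kv)

        # Recurrent gated cache update (detached so the cache acts as memory
        # rather than a path for backpropagation through time).
        summary = out.mean(dim=(0, 1), keepdim=True).expand(1, self.cache_len, -1)
        g = torch.sigmoid(self.gate(torch.cat([self.cache, summary], dim=-1)))
        self.cache = (g * self.cache + (1 - g) * summary).detach()
        return out

layer = GRCAttention()
y = layer(torch.randn(2, 10, 256))
print(y.shape)   # torch.Size([2, 10, 256])
```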
2023-12-21T00:00:00
2312.12682
Mini-GPTs: Efficient Large Language Models through Contextual Pruning
[ "Tim Valicenti", "Justice Vidal", "Ritik Patnaik" ]
In AI research, the optimization of Large Language Models (LLMs) remains a significant challenge, crucial for advancing the field's practical applications and sustainability. Building upon the foundational work of Professor Song Han's lab at MIT, this paper introduces a novel approach in developing Mini-GPTs via contextual pruning. Our methodology strategically prunes the computational architecture of traditional LLMs, like Phi-1.5, focusing on retaining core functionalities while drastically reducing model sizes. We employ the technique across diverse and complex datasets, including US law, Medical Q&A, Skyrim dialogue, English-Taiwanese translation, and Economics articles. The results underscore the efficiency and effectiveness of contextual pruning, not merely as a theoretical concept but as a practical tool in developing domain-specific, resource-efficient LLMs. Contextual pruning is a promising method for building domain-specific LLMs, and this research is a building block towards future development with more hardware compute, refined fine-tuning, and quantization.
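The following sketch shows one plausible form of contextual pruning for a single feed-forward block: hidden neurons are scored by their mean activation on domain calibration data and the weakest ones are removed. The scoring rule, keep ratio, and toy layer sizes are assumptions, not the paper's procedure.

```python
import torch
import torch.nn as nn

def contextual_prune_mlp(fc1: nn.Linear, fc2: nn.Linear,
                         calib_inputs: torch.Tensor, keep_ratio: float = 0.5):
    """Illustrative contextual pruning of one feed-forward block: keep only the
    hidden neurons that fire strongly on calibration data from the target domain."""
    with torch.no_grad():
        acts = torch.relu(fc1(calib_inputs))              # (N, hidden)
        scores = acts.abs().mean(dim=0)                   # per-neuron importance (assumed rule)
        k = max(1, int(keep_ratio * scores.numel()))
        keep = torch.topk(scores, k).indices.sort().values

        new_fc1 = nn.Linear(fc1.in_features, k)
        new_fc2 = nn.Linear(k, fc2.out_features)
        new_fc1.weight.copy_(fc1.weight[keep]); new_fc1.bias.copy_(fc1.bias[keep])
        new_fc2.weight.copy_(fc2.weight[:, keep]); new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2

# Usage on a toy block with 16 "domain" calibration samples.
fc1, fc2 = nn.Linear(64, 256), nn.Linear(256, 64)
p1, p2 = contextual_prune_mlp(fc1, fc2, torch.randn(16, 64), keep_ratio=0.25)
print(p1.weight.shape, p2.weight.shape)   # 256 hidden neurons pruned down to 64
```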
2023-12-21T00:00:00
2312.12456
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
[ "Yixin Song", "Zeyu Mi", "Haotong Xie", "Haibo Chen" ]
https://github.com/SJTU-IPADS/PowerInfer
This paper introduces PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation. This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity. Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
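The snippet below sketches the hot/cold neuron partition that underlies this hybrid design: activation frequency is profiled on calibration inputs, the most frequently firing neurons are marked hot (GPU-resident) and the rest cold (CPU-side). The 20% hot ratio, random calibration data, and single-layer setup are assumptions for illustration only.

```python
import torch

hidden = 1024
W = torch.randn(hidden, 512)                       # one FFN up-projection matrix
calib = torch.randn(256, 512)                      # calibration activations (toy data)

fired = (torch.relu(calib @ W.t()) > 0).float()    # (256, hidden) firing indicator
freq = fired.mean(dim=0)                           # per-neuron activation frequency
# Real LLM activations are far more skewed (power-law) than this Gaussian toy data.

hot = torch.topk(freq, k=int(0.2 * hidden)).indices
mask = torch.zeros(hidden, dtype=torch.bool)
mask[hot] = True

W_hot = W[mask]      # would be preloaded into GPU memory for fast, dense access
W_cold = W[~mask]    # computed on the CPU only when the activation predictor fires
print(f"hot: {W_hot.shape[0]} neurons (mean freq {freq[mask].mean():.2f}), "
      f"cold: {W_cold.shape[0]} neurons (mean freq {freq[~mask].mean():.2f})")
```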
2023-12-21T00:00:00
2312.13286
Generative Multimodal Models are In-Context Learners
[ "Quan Sun", "Yufeng Cui", "Xiaosong Zhang", "Fan Zhang", "Qiying Yu", "Zhengxiong Luo", "Yueze Wang", "Yongming Rao", "Jingjing Liu", "Tiejun Huang", "Xinlong Wang" ]
The human ability to easily solve multimodal tasks in context (i.e., with only a few demonstrations or simple instructions) is what current multimodal systems have largely struggled to imitate. In this work, we demonstrate that the task-agnostic in-context learning capabilities of large multimodal models can be significantly enhanced by effective scaling-up. We introduce Emu2, a generative multimodal model with 37 billion parameters, trained on large-scale multimodal sequences with a unified autoregressive objective. Emu2 exhibits strong multimodal in-context learning abilities, even emerging to solve tasks that require on-the-fly reasoning, such as visual prompting and object-grounded generation. The model sets a new record on multiple multimodal understanding tasks in few-shot settings. When instruction-tuned to follow specific instructions, Emu2 further achieves a new state of the art on challenging tasks such as question answering benchmarks for large multimodal models and open-ended subject-driven generation. These achievements demonstrate that Emu2 can serve as a base model and general-purpose interface for a wide range of multimodal tasks. Code and models are publicly available to facilitate future research.
2023-12-21T00:00:00
2312.13102
SpecNeRF: Gaussian Directional Encoding for Specular Reflections
[ "Li Ma", "Vasu Agrawal", "Haithem Turki", "Changil Kim", "Chen Gao", "Pedro Sander", "Michael Zollhöfer", "Christian Richardt" ]
Neural radiance fields have achieved remarkable performance in modeling the appearance of 3D scenes. However, existing approaches still struggle with the view-dependent appearance of glossy surfaces, especially under complex lighting of indoor environments. Unlike existing methods, which typically assume distant lighting like an environment map, we propose a learnable Gaussian directional encoding to better model the view-dependent effects under near-field lighting conditions. Importantly, our new directional encoding captures the spatially-varying nature of near-field lighting and emulates the behavior of prefiltered environment maps. As a result, it enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients. We further introduce a data-driven geometry prior that helps alleviate the shape radiance ambiguity in reflection modeling. We show that our Gaussian directional encoding and geometry prior significantly improve the modeling of challenging specular reflections in neural radiance fields, which helps decompose appearance into more physically meaningful components.
2023-12-21T00:00:00
2312.12791
Model-Based Control with Sparse Neural Dynamics
[ "Ziang Liu", "Genggeng Zhou", "Jeff He", "Tobia Marcucci", "Li Fei-Fei", "Jiajun Wu", "Yunzhu Li" ]
Learning predictive models from observations using deep neural networks (DNNs) is a promising new approach to many real-world planning and control problems. However, common DNNs are too unstructured for effective planning, and current control methods typically rely on extensive sampling or local gradient descent. In this paper, we propose a new framework for integrated model learning and predictive control that is amenable to efficient optimization algorithms. Specifically, we start with a ReLU neural model of the system dynamics and, with minimal losses in prediction accuracy, we gradually sparsify it by removing redundant neurons. This discrete sparsification process is approximated as a continuous problem, enabling an end-to-end optimization of both the model architecture and the weight parameters. The sparsified model is subsequently used by a mixed-integer predictive controller, which represents the neuron activations as binary variables and employs efficient branch-and-bound algorithms. Our framework is applicable to a wide variety of DNNs, from simple multilayer perceptrons to complex graph neural dynamics. It can efficiently handle tasks involving complicated contact dynamics, such as object pushing, compositional object sorting, and manipulation of deformable objects. Numerical and hardware experiments show that, despite the aggressive sparsification, our framework can deliver better closed-loop performance than existing state-of-the-art methods.
2023-12-21T00:00:00
2312.13252
Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model
[ "Saurabh Saxena", "Junhwa Hur", "Charles Herrmann", "Deqing Sun", "David J. Fleet" ]
While methods for monocular depth estimation have made significant strides on standard benchmarks, zero-shot metric depth estimation remains unsolved. Challenges include the joint modeling of indoor and outdoor scenes, which often exhibit significantly different distributions of RGB and depth, and the depth-scale ambiguity due to unknown camera intrinsics. Recent work has proposed specialized multi-head architectures for jointly modeling indoor and outdoor scenes. In contrast, we advocate a generic, task-agnostic diffusion model, with several advancements such as log-scale depth parameterization to enable joint modeling of indoor and outdoor scenes, conditioning on the field-of-view (FOV) to handle scale ambiguity and synthetically augmenting FOV during training to generalize beyond the limited camera intrinsics in training datasets. Furthermore, by employing a more diverse training mixture than is common, and an efficient diffusion parameterization, our method, DMD (Diffusion for Metric Depth), achieves a 25% reduction in relative error (REL) on zero-shot indoor and 33% reduction on zero-shot outdoor datasets over the current SOTA using only a small number of denoising steps. For an overview see https://diffusion-vision.github.io/dmd
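To make the log-scale depth parameterization and FOV conditioning concrete, here is a small sketch that maps metric depth into a log-scaled [-1, 1] range (and back) and derives a scalar FOV signal from camera intrinsics; the depth bounds and the specific FOV encoding are assumptions, not the values used in the paper.

```python
import numpy as np

def encode_depth_log(depth_m, d_min=0.5, d_max=80.0):
    """Map metric depth (meters) to a log-scale value in [-1, 1] so that indoor
    and outdoor ranges share one parameterization (bounds are assumptions)."""
    d = np.clip(depth_m, d_min, d_max)
    u = (np.log(d) - np.log(d_min)) / (np.log(d_max) - np.log(d_min))
    return 2.0 * u - 1.0

def decode_depth_log(x, d_min=0.5, d_max=80.0):
    # Exact inverse of encode_depth_log within the clipping bounds.
    u = (x + 1.0) / 2.0
    return np.exp(u * (np.log(d_max) - np.log(d_min)) + np.log(d_min))

def fov_condition(focal_px, width_px):
    """One plausible scalar conditioning signal: horizontal FOV in radians
    derived from the camera intrinsics."""
    return 2.0 * np.arctan(width_px / (2.0 * focal_px))

d = np.array([0.8, 3.0, 25.0, 70.0])
x = encode_depth_log(d)
print(x.round(3), decode_depth_log(x).round(2), round(fov_condition(600, 640), 3))
```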
2023-12-21T00:00:00
2312.12865
RadEdit: stress-testing biomedical vision models via diffusion image editing
[ "Fernando Pérez-García", "Sam Bond-Taylor", "Pedro P. Sanchez", "Boris van Breugel", "Daniel C. Castro", "Harshita Sharma", "Valentina Salvatelli", "Maria T. A. Wetscherek", "Hannah Richardson", "Matthew P. Lungren", "Aditya Nori", "Javier Alvarez-Valle", "Ozan Oktay", "Maximilian Ilse" ]
Biomedical imaging datasets are often small and biased, meaning that real-world performance of predictive models can be substantially lower than expected from internal testing. This work proposes using generative image editing to simulate dataset shifts and diagnose failure modes of biomedical vision models; this can be used in advance of deployment to assess readiness, potentially reducing cost and patient harm. Existing editing methods can produce undesirable changes, with spurious correlations learned due to the co-occurrence of disease and treatment interventions, limiting practical applicability. To address this, we train a text-to-image diffusion model on multiple chest X-ray datasets and introduce a new editing method RadEdit that uses multiple masks, if present, to constrain changes and ensure consistency in the edited images. We consider three types of dataset shifts: acquisition shift, manifestation shift, and population shift, and demonstrate that our approach can diagnose failures and quantify model robustness without additional data collection, complementing more qualitative tools for explainable AI.
2023-12-21T00:00:00
2312.12487
Adaptive Guidance: Training-free Acceleration of Conditional Diffusion Models
[ "Angela Castillo", "Jonas Kohler", "Juan C. Pérez", "Juan Pablo Pérez", "Albert Pumarola", "Bernard Ghanem", "Pablo Arbeláez", "Ali Thabet" ]
This paper presents a comprehensive study on the role of Classifier-Free Guidance (CFG) in text-conditioned diffusion models from the perspective of inference efficiency. In particular, we relax the default choice of applying CFG in all diffusion steps and instead search for efficient guidance policies. We formulate the discovery of such policies in the differentiable Neural Architecture Search framework. Our findings suggest that the denoising steps proposed by CFG become increasingly aligned with simple conditional steps, which renders the extra neural network evaluation of CFG redundant, especially in the second half of the denoising process. Building upon this insight, we propose "Adaptive Guidance" (AG), an efficient variant of CFG, that adaptively omits network evaluations when the denoising process displays convergence. Our experiments demonstrate that AG preserves CFG's image quality while reducing computation by 25%. Thus, AG constitutes a plug-and-play alternative to Guidance Distillation, achieving 50% of the speed-ups of the latter while being training-free and retaining the capacity to handle negative prompts. Finally, we uncover further redundancies of CFG in the first half of the diffusion process, showing that entire neural function evaluations can be replaced by simple affine transformations of past score estimates. This method, termed LinearAG, offers even cheaper inference at the cost of deviating from the baseline model. Our findings provide insights into the efficiency of the conditional denoising process that contribute to more practical and swift deployment of text-conditioned diffusion models.
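The toy sampler below illustrates the core Adaptive Guidance mechanism: classifier-free guidance is evaluated only until the guided and purely conditional directions become nearly parallel, after which the unconditional pass is skipped. The stand-in epsilon model (built so the two branches converge as t decreases), the cosine threshold, and the update rule are assumptions, not the paper's learned policy.

```python
import torch
import torch.nn.functional as F

T, w, thresh = 50, 7.5, 0.99
skip_uncond = False
x = torch.randn(1, 4, 32, 32)

def eps_model(x, t, cond):
    # Stand-in U-Net: the unconditional branch drifts toward the conditional
    # one as t decreases, mimicking the convergence observed in the paper.
    g = torch.Generator().manual_seed(t)
    base = torch.randn(x.shape, generator=g)
    if cond:
        return base
    g2 = torch.Generator().manual_seed(t + 10_000)
    return base + 0.1 * (t / T) * torch.randn(x.shape, generator=g2)

for t in range(T, 0, -1):
    e_c = eps_model(x, t, cond=True)
    if skip_uncond:
        eps = e_c                              # CFG dropped: one network pass per step
    else:
        e_u = eps_model(x, t, cond=False)
        eps = e_u + w * (e_c - e_u)            # standard classifier-free guidance
        cos = F.cosine_similarity(eps.flatten(), e_c.flatten(), dim=0)
        if cos > thresh:                       # guided and conditional steps now agree
            skip_uncond = True
            print(f"dropping the unconditional pass from t={t}")
    x = x - 0.02 * eps                         # toy update; a real sampler follows its schedule
print("final latent std:", round(x.std().item(), 3))
```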
2023-12-21T00:00:00
2312.12468
MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers
[ "Haoyu Ma", "Shahin Mahdizadehaghdam", "Bichen Wu", "Zhipeng Fan", "Yuchao Gu", "Wenliang Zhao", "Lior Shapira", "Xiaohui Xie" ]
Recent advances in generative AI have significantly enhanced image and video editing, particularly in the context of text prompt control. State-of-the-art approaches predominantly rely on diffusion models to accomplish these tasks. However, the computational demands of diffusion-based methods are substantial, often necessitating large-scale paired datasets for training, and therefore challenging the deployment in practical applications. This study addresses this challenge by breaking down the text-based video editing process into two separate stages. In the first stage, we leverage an existing text-to-image diffusion model to simultaneously edit a few keyframes without additional fine-tuning. In the second stage, we introduce an efficient model called MaskINT, which is built on non-autoregressive masked generative transformers and specializes in frame interpolation between the keyframes, benefiting from structural guidance provided by intermediate frames. Our comprehensive set of experiments illustrates the efficacy and efficiency of MaskINT when compared to other diffusion-based methodologies. This research offers a practical solution for text-based video editing and showcases the potential of non-autoregressive masked generative transformers in this domain.
2023-12-21T00:00:00
2312.13150
Splatter Image: Ultra-Fast Single-View 3D Reconstruction
[ "Stanislaw Szymanowicz", "Christian Rupprecht", "Andrea Vedaldi" ]
We introduce the Splatter Image, an ultra-fast approach for monocular 3D object reconstruction which operates at 38 FPS. Splatter Image is based on Gaussian Splatting, which has recently brought real-time rendering, fast training, and excellent scaling to multi-view reconstruction. For the first time, we apply Gaussian Splatting in a monocular reconstruction setting. Our approach is learning-based, and, at test time, reconstruction only requires the feed-forward evaluation of a neural network. The main innovation of Splatter Image is the surprisingly straightforward design: it uses a 2D image-to-image network to map the input image to one 3D Gaussian per pixel. The resulting Gaussians thus have the form of an image, the Splatter Image. We further extend the method to incorporate more than one image as input, which we do by adding cross-view attention. Owing to the speed of the renderer (588 FPS), we can use a single GPU for training while generating entire images at each iteration in order to optimize perceptual metrics like LPIPS. On standard benchmarks, we demonstrate not only fast reconstruction but also better results than recent and much more expensive baselines in terms of PSNR, LPIPS, and other metrics.
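The sketch below shows the "one 3D Gaussian per pixel" output format in code: a toy convolutional network maps an image to a per-pixel grid of Gaussian parameters. The channel layout, activations, and backbone are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

# Per-pixel Gaussian attributes and their channel counts (assumed layout).
PARAMS = {"xyz_offset": 3, "depth": 1, "scale": 3, "rotation": 4,
          "opacity": 1, "rgb": 3}                      # 15 channels per pixel
C_OUT = sum(PARAMS.values())

backbone = nn.Sequential(                              # toy stand-in for the image-to-image net
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, C_OUT, 1),
)

img = torch.rand(1, 3, 128, 128)
out = backbone(img)                                    # (1, 15, 128, 128)

# Split channels into named per-pixel Gaussian attributes.
splits = torch.split(out, list(PARAMS.values()), dim=1)
gaussians = dict(zip(PARAMS, splits))
gaussians["opacity"] = torch.sigmoid(gaussians["opacity"])
gaussians["scale"] = torch.exp(gaussians["scale"])               # positive scales
gaussians["rotation"] = nn.functional.normalize(gaussians["rotation"], dim=1)

print({k: tuple(v.shape) for k, v in gaussians.items()})
# Every pixel now carries one 3D Gaussian: 128 * 128 = 16384 Gaussians in total.
```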
2023-12-21T00:00:00
2312.13285
UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections
[ "Fangjinhua Wang", "Marie-Julie Rakotosaona", "Michael Niemeyer", "Richard Szeliski", "Marc Pollefeys", "Federico Tombari" ]
Neural 3D scene representations have shown great potential for 3D reconstruction from 2D images. However, reconstructing real-world captures of complex scenes still remains a challenge. Existing generic 3D reconstruction methods often struggle to represent fine geometric details and do not adequately model reflective surfaces of large-scale scenes. Techniques that explicitly focus on reflective surfaces can model complex and detailed reflections by exploiting better reflection parameterizations. However, we observe that these methods are often not robust in real unbounded scenarios where non-reflective as well as reflective components are present. In this work, we propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections. We investigate both view-based as well as reflection-based color prediction parameterization techniques and find that explicitly blending these representations in 3D space enables reconstruction of surfaces that are more geometrically accurate, especially for reflective surfaces. We further combine this representation with a multi-resolution grid backbone that is trained in a coarse-to-fine manner, enabling faster reconstructions than prior methods. Extensive experiments on object-level datasets DTU, Shiny Blender as well as unbounded datasets Mip-NeRF 360 and Ref-NeRF real demonstrate that our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces. Please see our project page at https://fangjinhuawang.github.io/UniSDF.
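Below is a minimal sketch of blending a view-direction color branch with a reflected-direction color branch through a learned per-point weight, which is the key representational idea described above; the MLP sizes, inputs, and single-weight blending form are assumptions, not UniSDF's exact design.

```python
import torch
import torch.nn as nn

def reflect(view_dir, normal):
    # r = d - 2 (d . n) n, with unit-norm direction and normal vectors.
    return view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal

class BlendedColor(nn.Module):
    """Sketch: two color heads (view-based and reflection-based) blended in 3D
    space by a learned per-point weight predicted from the point features."""

    def __init__(self, feat_dim: int = 32, hidden: int = 64):
        super().__init__()
        def head():
            return nn.Sequential(nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3), nn.Sigmoid())
        self.view_head, self.refl_head = head(), head()
        self.blend = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, feat, view_dir, normal):
        refl_dir = reflect(view_dir, normal)
        c_view = self.view_head(torch.cat([feat, view_dir], dim=-1))
        c_refl = self.refl_head(torch.cat([feat, refl_dir], dim=-1))
        w = self.blend(feat)                     # learned per-point blend weight
        return w * c_refl + (1.0 - w) * c_view

model = BlendedColor()
rgb = model(torch.randn(1024, 32),
            nn.functional.normalize(torch.randn(1024, 3), dim=-1),
            nn.functional.normalize(torch.randn(1024, 3), dim=-1))
print(rgb.shape)   # torch.Size([1024, 3])
```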
2023-12-22T00:00:00
2312.13913
Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
[ "Xianfang Zeng", "Xin Chen", "Zhongqi Qi", "Wen Liu", "Zibo Zhao", "Zhibin Wang", "BIN FU", "Yong Liu", "Gang Yu" ]
This paper presents Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.