date | arxiv_id | title | authors | github | abstract |
---|---|---|---|---|---|
2024-04-05T00:00:00 | 2404.03648 | AutoWebGLM: Bootstrap And Reinforce A Large Language Model-based Web Navigating Agent | [
"Hanyu Lai",
"Xiao Liu",
"Iat Long Iong",
"Shuntian Yao",
"Yuxuan Chen",
"Pengbo Shen",
"Hao Yu",
"Hanchen Zhang",
"Xiaohan Zhang",
"Yuxiao Dong",
"Jie Tang"
]
| https://github.com/THUDM/AutoWebGLM | Large language models (LLMs) have fueled many intelligent agent tasks, such as web navigation -- but most existing agents perform far from satisfactorily on real-world webpages due to three factors: (1) the versatility of actions on webpages, (2) HTML text exceeding the model's processing capacity, and (3) the complexity of decision-making arising from the open-domain nature of the web. In light of these challenges, we develop AutoWebGLM, a GPT-4-outperforming automated web navigation agent built upon ChatGLM3-6B. Inspired by human browsing patterns, we design an HTML simplification algorithm to represent webpages, preserving vital information succinctly. We employ a hybrid human-AI method to build web browsing data for curriculum training. Then, we bootstrap the model with reinforcement learning and rejection sampling to further improve webpage comprehension, browser operations, and efficient task decomposition. For testing, we establish a bilingual benchmark -- AutoWebBench -- for real-world web browsing tasks. We evaluate AutoWebGLM across diverse web navigation benchmarks, revealing its improvements as well as the challenges that remain in tackling real environments. Related code, model, and data will be released at https://github.com/THUDM/AutoWebGLM. |
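
The HTML simplification step described above (keep only the elements a browsing agent can read or act on, drop scripts, styles, and layout noise) can be illustrated with a small sketch. This is not the authors' algorithm, just a minimal stand-in assuming BeautifulSoup (`bs4`) is available; the tag and attribute whitelists are illustrative choices.

```python
# Minimal webpage-simplification sketch in the spirit of AutoWebGLM's HTML
# cleaning step (illustrative only, not the paper's actual algorithm).
from bs4 import BeautifulSoup

KEEP_TAGS = ["a", "button", "input", "select", "textarea", "img", "h1", "h2", "h3", "p", "li"]
KEEP_ATTRS = {"id", "name", "type", "href", "value", "placeholder", "alt", "title"}

def simplify_html(html: str, max_text_len: int = 80) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Remove content the agent never needs to see.
    for tag in soup(["script", "style", "noscript", "svg"]):
        tag.decompose()
    lines = []
    for idx, tag in enumerate(soup.find_all(KEEP_TAGS)):
        attrs = {k: v for k, v in tag.attrs.items() if k in KEEP_ATTRS}
        text = tag.get_text(" ", strip=True)[:max_text_len]
        if text or attrs:
            attr_str = " ".join(f'{k}="{v}"' for k, v in attrs.items())
            lines.append(f"[{idx}] <{tag.name} {attr_str}>".rstrip(" >") + f"> {text}".rstrip())
    return "\n".join(lines)
```

The flattened, indexed element list is the kind of compact observation that fits inside a limited context window while still letting the agent refer to clickable elements by index.
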
2024-04-05T00:00:00 | 2404.03411 | Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? | [
"Shuo Chen",
"Zhen Han",
"Bailan He",
"Zifeng Ding",
"Wenqian Yu",
"Philip Torr",
"Volker Tresp",
"Jindong Gu"
]
| Various jailbreak attacks have been proposed to red-team Large Language Models (LLMs) and have revealed the vulnerable safeguards of LLMs. Some of these methods are not limited to the textual modality and extend jailbreak attacks to Multimodal Large Language Models (MLLMs) by perturbing the visual input. However, the absence of a universal evaluation benchmark complicates performance reproduction and fair comparison. Moreover, there is a lack of comprehensive evaluation of closed-source state-of-the-art (SOTA) models, especially MLLMs such as GPT-4V. To address these issues, this work first builds a comprehensive jailbreak evaluation dataset with 1445 harmful questions covering 11 different safety policies. Based on this dataset, extensive red-teaming experiments are conducted on 11 different LLMs and MLLMs, including both SOTA proprietary models and open-source models. We then conduct a deep analysis of the evaluation results and find that (1) GPT-4 and GPT-4V demonstrate better robustness against jailbreak attacks compared to open-source LLMs and MLLMs, (2) Llama2 and Qwen-VL-Chat are more robust than other open-source models, and (3) the transferability of visual jailbreak methods is relatively limited compared to textual jailbreak methods. The dataset and code can be found at https://anonymous.4open.science/r/red_teaming_gpt4-C1CE/README.md . |
|
2024-04-05T00:00:00 | 2404.03653 | CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching | [
"Dongzhi Jiang",
"Guanglu Song",
"Xiaoshi Wu",
"Renrui Zhang",
"Dazhong Shen",
"Zhuofan Zong",
"Yu Liu",
"Hongsheng Li"
]
| https://github.com/CaraJ7/CoMat | Diffusion models have demonstrated great success in the field of text-to-image generation. However, alleviating the misalignment between the text prompts and images is still challenging. The root reason behind the misalignment has not been extensively investigated. We observe that the misalignment is caused by inadequate token attention activation. We further attribute this phenomenon to the diffusion model's insufficient condition utilization, which is caused by its training paradigm. To address the issue, we propose CoMat, an end-to-end diffusion model fine-tuning strategy with an image-to-text concept matching mechanism. We leverage an image captioning model to measure image-to-text alignment and guide the diffusion model to revisit ignored tokens. A novel attribute concentration module is also proposed to address the attribute binding problem. Without any image or human preference data, we use only 20K text prompts to fine-tune SDXL to obtain CoMat-SDXL. Extensive experiments show that CoMat-SDXL significantly outperforms the baseline model SDXL in two text-to-image alignment benchmarks and achieves state-of-the-art performance. |
2024-04-05T00:00:00 | 2404.03566 | PointInfinity: Resolution-Invariant Point Diffusion Models | [
"Zixuan Huang",
"Justin Johnson",
"Shoubhik Debnath",
"James M. Rehg",
"Chao-Yuan Wu"
]
| We present PointInfinity, an efficient family of point cloud diffusion models. Our core idea is to use a transformer-based architecture with a fixed-size, resolution-invariant latent representation. This enables efficient training with low-resolution point clouds, while allowing high-resolution point clouds to be generated during inference. More importantly, we show that scaling the test-time resolution beyond the training resolution improves the fidelity of generated point clouds and surfaces. We analyze this phenomenon and draw a link to classifier-free guidance commonly used in diffusion models, demonstrating that both allow trading off fidelity and variability during inference. Experiments on CO3D show that PointInfinity can efficiently generate high-resolution point clouds (up to 131k points, 31 times more than Point-E) with state-of-the-art quality. |
|
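
For reference, the classifier-free guidance that the PointInfinity abstract draws a link to is the standard formulation below; the guidance scale $w$ trades fidelity against variability, and the paper argues that scaling test-time resolution plays an analogous role. This is the generic formula, not a PointInfinity-specific equation.

```latex
% Standard classifier-free guidance: w = 1 recovers the conditional model,
% larger w trades variability for fidelity.
\hat{\epsilon}_\theta(x_t, c) \;=\; \epsilon_\theta(x_t, \varnothing)
  \;+\; w \,\bigl(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\bigr)
```
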
2024-04-05T00:00:00 | 2404.03118 | LVLM-Intrepret: An Interpretability Tool for Large Vision-Language Models | [
"Gabriela Ben Melech Stan",
"Raanan Yehezkel Rohekar",
"Yaniv Gurwicz",
"Matthew Lyle Olson",
"Anahita Bhiwandiwalla",
"Estelle Aflalo",
"Chenfei Wu",
"Nan Duan",
"Shao-Yen Tseng",
"Vasudev Lal"
]
| In the rapidly evolving landscape of artificial intelligence, multi-modal large language models are emerging as a significant area of interest. These models, which combine various forms of data input, are becoming increasingly popular. However, understanding their internal mechanisms remains a complex task. Numerous advancements have been made in the field of explainability tools and mechanisms, yet there is still much to explore. In this work, we present a novel interactive application aimed towards understanding the internal mechanisms of large vision-language models. Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer, and assess the efficacy of the language model in grounding its output in the image. With our application, a user can systematically investigate the model and uncover system limitations, paving the way for enhancements in system capabilities. Finally, we present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA. |
|
2024-04-05T00:00:00 | 2404.03204 | RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis | [
"Detai Xin",
"Xu Tan",
"Kai Shen",
"Zeqian Ju",
"Dongchao Yang",
"Yuancheng Wang",
"Shinnosuke Takamichi",
"Hiroshi Saruwatari",
"Shujie Liu",
"Jinyu Li",
"Sheng Zhao"
]
| We present RALL-E, a robust language modeling method for text-to-speech (TTS) synthesis. While previous work based on large language models (LLMs) shows impressive performance on zero-shot TTS, such methods often suffer from poor robustness, such as unstable prosody (erratic pitch and rhythm/duration) and a high word error rate (WER), due to the autoregressive prediction style of language models. The core idea behind RALL-E is chain-of-thought (CoT) prompting, which decomposes the task into simpler steps to enhance the robustness of LLM-based TTS. To realize this idea, RALL-E first predicts prosody features (pitch and duration) of the input text and uses them as intermediate conditions to predict speech tokens in a CoT style. Second, RALL-E utilizes the predicted duration prompt to guide the computation of self-attention weights in the Transformer, forcing the model to focus on the corresponding phonemes and prosody features when predicting speech tokens. Results of comprehensive objective and subjective evaluations demonstrate that, compared to a powerful baseline method VALL-E, RALL-E significantly improves the WER of zero-shot TTS from 6.3% (without reranking) and 2.1% (with reranking) to 2.8% and 1.0%, respectively. Furthermore, we demonstrate that RALL-E correctly synthesizes sentences that are hard for VALL-E and reduces the error rate from 68% to 4%. |
|
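
A toy sketch of the duration-guided attention idea in the RALL-E abstract: given predicted per-phoneme durations, each speech-token step is restricted (or biased) toward the phoneme window it falls into. The windowing rule and the hard boolean mask below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def duration_attention_mask(durations, neighborhood=1):
    """Boolean mask [num_speech_steps, num_phonemes]: step t may attend to the
    phoneme whose predicted duration window covers t, plus `neighborhood`
    phonemes on each side (illustrative rule, not RALL-E's exact one)."""
    durations = np.asarray(durations, dtype=int)
    num_phonemes = len(durations)
    total_steps = int(durations.sum())
    # Phoneme index responsible for each speech-token step.
    owner = np.repeat(np.arange(num_phonemes), durations)
    mask = np.zeros((total_steps, num_phonemes), dtype=bool)
    for t in range(total_steps):
        lo = max(owner[t] - neighborhood, 0)
        hi = min(owner[t] + neighborhood, num_phonemes - 1)
        mask[t, lo:hi + 1] = True
    return mask

# Example: 3 phonemes predicted to last 2, 4, and 3 frames.
print(duration_attention_mask([2, 4, 3]).astype(int))
```
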
2024-04-05T00:00:00 | 2404.03413 | MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | [
"Kirolos Ataallah",
"Xiaoqian Shen",
"Eslam Abdelrahman",
"Essam Sleiman",
"Deyao Zhu",
"Jian Ding",
"Mohamed Elhoseiny"
]
| https://github.com/Vision-CAIR/MiniGPT4-video | This paper introduces MiniGPT4-Video, a multimodal Large Language Model (LLM) designed specifically for video understanding. The model is capable of processing both temporal visual and textual data, making it adept at understanding the complexities of videos. Building upon the success of MiniGPT-v2, which excelled in translating visual features into the LLM space for single images and achieved impressive results on various image-text benchmarks, this paper extends the model's capabilities to process a sequence of frames, enabling it to comprehend videos. MiniGPT4-Video not only considers visual content but also incorporates textual conversations, allowing the model to effectively answer queries involving both visual and text components. The proposed model outperforms existing state-of-the-art methods, registering gains of 4.22%, 1.13%, 20.82%, and 13.1% on the MSVD, MSRVTT, TGIF, and TVQA benchmarks, respectively. Our models and code have been made publicly available at https://vision-cair.github.io/MiniGPT4-video/ |
2024-04-08T00:00:00 | 2404.03683 | Stream of Search (SoS): Learning to Search in Language | [
"Kanishk Gandhi",
"Denise Lee",
"Gabriel Grand",
"Muxin Liu",
"Winson Cheng",
"Archit Sharma",
"Noah D. Goodman"
]
| https://github.com/kanishkg/stream-of-search | Language models are rarely shown fruitful mistakes while training. They then struggle to look beyond the next token, suffering from a snowballing of errors and struggling to predict the consequence of their actions several steps ahead. In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string -- a stream of search (SoS). We propose a unified language for search that captures an array of different symbolic search strategies. We demonstrate our approach using the simple yet difficult game of Countdown, where the goal is to combine input numbers with arithmetic operations to reach a target number. We pretrain a transformer-based language model from scratch on a dataset of streams of search generated by heuristic solvers. We find that SoS pretraining increases search accuracy by 25% over models trained to predict only the optimal search trajectory. We further finetune this model with two policy improvement methods: Advantage-Induced Policy Alignment (APA) and Self-Taught Reasoner (STaR). The finetuned SoS models solve 36% of previously unsolved problems, including problems that cannot be solved by any of the heuristic solvers. Our results indicate that language models can learn to solve problems via search, self-improve to flexibly use different search strategies, and potentially discover new ones. |
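
The "stream of search" idea above (serialize the whole search process, including dead ends and backtracking, into one flat string) can be made concrete with a tiny Countdown solver. The trace format below is an invented illustration; the paper defines its own unified search language and trains on traces from heuristic solvers.

```python
# Illustrative Countdown depth-first search that logs every explored state,
# including failures and backtracking, as a flat "stream of search" string.
import itertools

def countdown_sos(numbers, target):
    trace = []

    def dfs(nums, depth):
        trace.append(f"{'  ' * depth}state: {sorted(nums)}")
        if target in nums:
            trace.append(f"{'  ' * depth}goal reached: {target}")
            return True
        if len(nums) == 1:
            trace.append(f"{'  ' * depth}dead end, backtrack")
            return False
        for i, j in itertools.permutations(range(len(nums)), 2):
            a, b = nums[i], nums[j]
            rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
            candidates = [(f"{a}+{b}", a + b), (f"{a}*{b}", a * b), (f"{a}-{b}", a - b)]
            if b != 0 and a % b == 0:
                candidates.append((f"{a}/{b}", a // b))
            for op, val in candidates:
                if val < 0:
                    continue
                trace.append(f"{'  ' * depth}try {op} = {val}")
                if dfs(rest + [val], depth + 1):
                    return True
        trace.append(f"{'  ' * depth}dead end, backtrack")
        return False

    dfs(list(numbers), 0)
    return "\n".join(trace)

print(countdown_sos([3, 5, 2], 13))  # e.g. 5*2 = 10, then 10+3 = 13
```
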
2024-04-08T00:00:00 | 2404.03715 | Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences | [
"Corby Rosset",
"Ching-An Cheng",
"Arindam Mitra",
"Michael Santacroce",
"Ahmed Awadallah",
"Tengyang Xie"
]
| This paper studies post-training large language models (LLMs) using preference feedback from a powerful oracle to help a model iteratively improve over itself. The typical approach for post-training LLMs involves Reinforcement Learning from Human Feedback (RLHF), which traditionally separates reward learning and subsequent policy optimization. However, such a reward maximization approach is limited by the nature of "point-wise" rewards (such as the Bradley-Terry model), which cannot express complex intransitive or cyclic preference relations. While advances in RLHF show that reward learning and policy optimization can be merged into a single contrastive objective for stability, they still remain tethered to the reward maximization framework. Recently, a new wave of research sidesteps the reward maximization presumptions in favor of directly optimizing over "pair-wise" or general preferences. In this paper, we introduce Direct Nash Optimization (DNO), a provable and scalable algorithm that marries the simplicity and stability of contrastive learning with the theoretical generality of optimizing general preferences. Because DNO is a batched on-policy algorithm using a regression-based objective, its implementation is straightforward and efficient. Moreover, DNO enjoys monotonic improvement across iterations that helps it improve even over a strong teacher (such as GPT-4). In our experiments, a resulting 7B parameter Orca-2.5 model aligned by DNO achieves a state-of-the-art win rate of 33% against GPT-4-Turbo on AlpacaEval 2.0 (even after controlling for response length), an absolute gain of 26% (7% to 33%) over the initializing model. It outperforms models with far more parameters, including Mistral Large, Self-Rewarding LM (70B parameters), and older versions of GPT-4. |
|
2024-04-08T00:00:00 | 2404.03673 | RL for Consistency Models: Faster Reward Guided Text-to-Image Generation | [
"Owen Oertell",
"Jonathan D. Chang",
"Yiyi Zhang",
"Kianté Brantley",
"Wen Sun"
]
| https://github.com/Owen-Oertell/rlcm | Reinforcement learning (RL) has improved guided image generation with diffusion models by directly optimizing rewards that capture image quality, aesthetics, and instruction following capabilities. However, the resulting generative policies inherit the same iterative sampling process of diffusion models that causes slow generation. To overcome this limitation, consistency models were proposed as a new class of generative models that directly map noise to data, resulting in a model that can generate an image in as few as one sampling iteration. In this work, to optimize text-to-image generative models for task-specific rewards and enable fast training and inference, we propose a framework for fine-tuning consistency models via RL. Our framework, called Reinforcement Learning for Consistency Model (RLCM), frames the iterative inference process of a consistency model as an RL procedure. RLCM improves upon RL fine-tuned diffusion models on text-to-image generation capabilities and trades computation during inference time for sample quality. Experimentally, we show that RLCM can adapt text-to-image consistency models to objectives that are challenging to express with prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Compared to RL-finetuned diffusion models, RLCM trains significantly faster, improves the quality of the generation measured under the reward objectives, and speeds up the inference procedure by generating high quality images with as few as two inference steps. Our code is available at https://rlcm.owenoertell.com |
2024-04-08T00:00:00 | 2404.04125 | No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance | [
"Vishaal Udandarao",
"Ameya Prabhu",
"Adhiraj Ghosh",
"Yash Sharma",
"Philip H. S. Torr",
"Adel Bibi",
"Samuel Albanie",
"Matthias Bethge"
]
| https://github.com/bethgelab/frequency_determines_performance | Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image generation. However, it is unclear how meaningful the notion of "zero-shot" generalization is for such multimodal models, as it is not known to what extent their pretraining datasets encompass the downstream concepts targeted during "zero-shot" evaluation. In this work, we ask: How is the performance of multimodal models on downstream concepts influenced by the frequency of these concepts in their pretraining datasets? We comprehensively investigate this question across 34 models and five standard pretraining datasets (CC-3M, CC-12M, YFCC-15M, LAION-400M, LAION-Aesthetics), generating over 300GB of data artifacts. We consistently find that, far from exhibiting "zero-shot" generalization, multimodal models require exponentially more data to achieve linear improvements in downstream "zero-shot" performance, following a sample-inefficient log-linear scaling trend. This trend persists even when controlling for sample-level similarity between pretraining and downstream datasets, and testing on purely synthetic data distributions. Furthermore, upon benchmarking models on long-tailed data sampled based on our analysis, we demonstrate that multimodal models across the board perform poorly. We contribute this long-tail test set as the "Let it Wag!" benchmark to further research in this direction. Taken together, our study reveals an exponential need for training data, which implies that the key to "zero-shot" generalization capabilities under large-scale training paradigms remains to be found. |
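
The "log-linear scaling trend" reported above can be written compactly: zero-shot performance on a concept grows roughly linearly in the logarithm of that concept's pretraining frequency, so each constant gain requires exponentially more data. The coefficients below are schematic, not values from the paper.

```latex
% Schematic form of the reported log-linear trend.
\mathrm{perf}(c) \;\approx\; \alpha \,\log f_{\mathrm{pretrain}}(c) + \beta
\quad\Longleftrightarrow\quad
f_{\mathrm{pretrain}}(c) \;\approx\; \exp\!\bigl((\mathrm{perf}(c) - \beta)/\alpha\bigr)
```
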
2024-04-08T00:00:00 | 2404.04167 | Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model | [
"Xinrun Du",
"Zhouliang Yu",
"Songyang Gao",
"Ding Pan",
"Yuyang Cheng",
"Ziyang Ma",
"Ruibin Yuan",
"Xingwei Qu",
"Jiaheng Liu",
"Tianyu Zheng",
"Xinchen Luo",
"Guorui Zhou",
"Binhang Yuan",
"Wenhu Chen",
"Jie Fu",
"Ge Zhang"
]
| In this study, we introduce CT-LLM, a 2B large language model (LLM) that illustrates a pivotal shift towards prioritizing the Chinese language in developing LLMs. Uniquely initiated from scratch, CT-LLM diverges from the conventional methodology by primarily incorporating Chinese textual data, utilizing an extensive corpus of 1,200 billion tokens, including 800 billion Chinese tokens, 300 billion English tokens, and 100 billion code tokens. This strategic composition facilitates the model's exceptional proficiency in understanding and processing Chinese, a capability further enhanced through alignment techniques. Demonstrating remarkable performance on the CHC-Bench, CT-LLM excels in Chinese language tasks, and showcases its adeptness in English through SFT. This research challenges the prevailing paradigm of training LLMs predominantly on English corpora and then adapting them to other languages, broadening the horizons for LLM training methodologies. By open-sourcing the full process of training a Chinese LLM, including a detailed data processing procedure with the obtained Massive Appropriate Pretraining Chinese Corpus (MAP-CC), a well-chosen multidisciplinary Chinese Hard Case Benchmark (CHC-Bench), and the 2B-size Chinese Tiny LLM (CT-LLM), we aim to foster further exploration and innovation in both academia and industry, paving the way for more inclusive and versatile language models. |
|
2024-04-08T00:00:00 | 2404.03820 | CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues | [
"Makesh Narsimhan Sreedhar",
"Traian Rebedea",
"Shaona Ghosh",
"Christopher Parisien"
]
| Recent advancements in instruction-tuning datasets have predominantly focused on specific tasks like mathematical or logical reasoning. There has been a notable gap in data designed for aligning language models to maintain topic relevance in conversations - a critical aspect for deploying chatbots to production. We introduce the CantTalkAboutThis dataset to help language models remain focused on the subject at hand during task-oriented interactions. It consists of synthetic dialogues on a wide range of conversation topics from different domains. These dialogues are interspersed with distractor turns that intentionally divert the chatbot from the predefined topic. Fine-tuning language models on this dataset helps make them resilient to deviating from their assigned role and improves their ability to maintain topical coherence compared to general-purpose instruction-tuned LLMs like GPT-4-turbo and Mixtral-Instruct. Additionally, preliminary observations suggest that training models on this dataset also enhances their performance on fine-grained instruction-following tasks. |
|
2024-04-08T00:00:00 | 2404.04211 | Robust Gaussian Splatting | [
"François Darmon",
"Lorenzo Porzi",
"Samuel Rota-Bulò",
"Peter Kontschieder"
]
| In this paper, we address common error sources for 3D Gaussian Splatting (3DGS) including blur, imperfect camera poses, and color inconsistencies, with the goal of improving its robustness for practical applications like reconstructions from handheld phone captures. Our main contribution involves modeling motion blur as a Gaussian distribution over camera poses, allowing us to address both camera pose refinement and motion blur correction in a unified way. Additionally, we propose mechanisms for defocus blur compensation and for addressing color inconsistencies caused by ambient light, shadows, or due to camera-related factors like varying white balancing settings. Our proposed solutions integrate in a seamless way with the 3DGS formulation while maintaining its benefits in terms of training efficiency and rendering speed. We experimentally validate our contributions on relevant benchmark datasets including Scannet++ and Deblur-NeRF, obtaining state-of-the-art results and thus consistent improvements over relevant baselines. |
|
2024-04-08T00:00:00 | 2404.04204 | Social Skill Training with Large Language Models | [
"Diyi Yang",
"Caleb Ziems",
"William Held",
"Omar Shaikh",
"Michael S. Bernstein",
"John Mitchell"
]
| People rely on social skills like conflict resolution to communicate effectively and to thrive in both work and personal life. However, practice environments for social skills are typically out of reach for most people. How can we make social skill training more available, accessible, and inviting? Drawing upon interdisciplinary research from communication and psychology, this perspective paper identifies social skill barriers to enter specialized fields. Then we present a solution that leverages large language models for social skill training via a generic framework. Our AI Partner, AI Mentor framework merges experiential learning with realistic practice and tailored feedback. This work ultimately calls for cross-disciplinary innovation to address the broader implications for workforce development and social equality. |
|
2024-04-08T00:00:00 | 2404.04256 | Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation | [
"Zifu Wan",
"Yuhao Wang",
"Silong Yong",
"Pingping Zhang",
"Simon Stepputtis",
"Katia Sycara",
"Yaqi Xie"
]
| https://github.com/zifuwan/Sigma | Multi-modal semantic segmentation significantly enhances AI agents' perception and scene understanding, especially under adverse conditions like low-light or overexposed environments. Leveraging additional modalities (X-modality) like thermal and depth alongside traditional RGB provides complementary information, enabling more robust and reliable segmentation. In this work, we introduce Sigma, a Siamese Mamba network for multi-modal semantic segmentation, utilizing the Selective Structured State Space Model, Mamba. Unlike conventional methods that rely on CNNs, with their limited local receptive fields, or Vision Transformers (ViTs), which offer global receptive fields at the cost of quadratic complexity, our model achieves global receptive fields coverage with linear complexity. By employing a Siamese encoder and innovating a Mamba fusion mechanism, we effectively select essential information from different modalities. A decoder is then developed to enhance the channel-wise modeling ability of the model. Our method, Sigma, is rigorously evaluated on both RGB-Thermal and RGB-Depth segmentation tasks, demonstrating its superiority and marking the first successful application of State Space Models (SSMs) in multi-modal perception tasks. Code is available at https://github.com/zifuwan/Sigma. |
2024-04-09T00:00:00 | 2404.05595 | UniFL: Improve Stable Diffusion via Unified Feedback Learning | [
"Jiacheng Zhang",
"Jie Wu",
"Yuxi Ren",
"Xin Xia",
"Huafeng Kuang",
"Pan Xie",
"Jiashi Li",
"Xuefeng Xiao",
"Weilin Huang",
"Min Zheng",
"Lean Fu",
"Guanbin Li"
]
| Diffusion models have revolutionized the field of image generation, leading to the proliferation of high-quality models and diverse downstream applications. However, despite these significant advancements, the current competitive solutions still suffer from several limitations, including inferior visual quality, a lack of aesthetic appeal, and inefficient inference, without a comprehensive solution in sight. To address these challenges, we present UniFL, a unified framework that leverages feedback learning to enhance diffusion models comprehensively. UniFL stands out as a universal, effective, and generalizable solution applicable to various diffusion models, such as SD1.5 and SDXL. Notably, UniFL incorporates three key components: perceptual feedback learning, which enhances visual quality; decoupled feedback learning, which improves aesthetic appeal; and adversarial feedback learning, which optimizes inference speed. In-depth experiments and extensive user studies validate the superior performance of our proposed method in enhancing both the quality of generated models and their acceleration. For instance, UniFL surpasses ImageReward by 17% user preference in terms of generation quality and outperforms LCM and SDXL Turbo by 57% and 20% in 4-step inference. Moreover, we have verified the efficacy of our approach in downstream tasks, including Lora, ControlNet, and AnimateDiff. |
|
2024-04-09T00:00:00 | 2404.05726 | MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding | [
"Bo He",
"Hengduo Li",
"Young Kyun Jang",
"Menglin Jia",
"Xuefei Cao",
"Ashish Shah",
"Abhinav Shrivastava",
"Ser-Nam Lim"
]
| https://github.com/boheumd/MA-LMM | With the success of large language models (LLMs), integrating the vision model into LLMs to build vision-language foundation models has gained much more interest recently. However, existing LLM-based large multimodal models (e.g., Video-LLaMA, VideoChat) can only take in a limited number of frames for short video understanding. In this study, we mainly focus on designing an efficient and effective model for long-term video understanding. Instead of trying to process more frames simultaneously like most existing work, we propose to process videos in an online manner and store past video information in a memory bank. This allows our model to reference historical video content for long-term analysis without exceeding LLMs' context length constraints or GPU memory limits. Our memory bank can be seamlessly integrated into current multimodal LLMs in an off-the-shelf manner. We conduct extensive experiments on various video understanding tasks, such as long-video understanding, video question answering, and video captioning, and our model can achieve state-of-the-art performances across multiple datasets. Code available at https://boheumd.github.io/MA-LMM/. |
2024-04-09T00:00:00 | 2404.04478 | Diffusion-RWKV: Scaling RWKV-Like Architectures for Diffusion Models | [
"Zhengcong Fei",
"Mingyuan Fan",
"Changqian Yu",
"Debang Li",
"Junshi Huang"
]
| https://github.com/feizc/Diffusion-RWKV | Transformers have catalyzed advancements in computer vision and natural language processing (NLP) fields. However, substantial computational complexity poses limitations for their application in long-context tasks, such as high-resolution image generation. This paper introduces a series of architectures adapted from the RWKV model used in NLP, with requisite modifications tailored for diffusion models applied to image generation tasks, referred to as Diffusion-RWKV. Similar to diffusion models built on Transformers, our model is designed to efficiently handle patchified inputs in a sequence with extra conditions, while also scaling up effectively, accommodating both large-scale parameters and extensive datasets. Its distinctive advantage manifests in its reduced spatial aggregation complexity, rendering it exceptionally adept at processing high-resolution images, thereby eliminating the necessity for windowing or group cached operations. Experimental results on both conditional and unconditional image generation tasks demonstrate that Diffusion-RWKV achieves performance on par with or surpasses existing CNN or Transformer-based diffusion models in FID and IS metrics while significantly reducing total FLOP usage. |
2024-04-09T00:00:00 | 2404.05014 | MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators | [
"Shenghai Yuan",
"Jinfa Huang",
"Yujun Shi",
"Yongqi Xu",
"Ruijie Zhu",
"Bin Lin",
"Xinhua Cheng",
"Li Yuan",
"Jiebo Luo"
]
| https://github.com/PKU-YuanGroup/MagicTime | Recent advances in Text-to-Video generation (T2V) have achieved remarkable success in synthesizing high-quality general videos from textual descriptions. A largely overlooked problem in T2V is that existing models have not adequately encoded physical knowledge of the real world, thus generated videos tend to have limited motion and poor variations. In this paper, we propose MagicTime, a metamorphic time-lapse video generation model, which learns real-world physics knowledge from time-lapse videos and implements metamorphic generation. First, we design a MagicAdapter scheme to decouple spatial and temporal training, encode more physical knowledge from metamorphic videos, and transform pre-trained T2V models to generate metamorphic videos. Second, we introduce a Dynamic Frames Extraction strategy to adapt to metamorphic time-lapse videos, which have a wider variation range and cover dramatic object metamorphic processes, thus embodying more physical knowledge than general videos. Finally, we introduce a Magic Text-Encoder to improve the understanding of metamorphic video prompts. Furthermore, we create a time-lapse video-text dataset called ChronoMagic, specifically curated to unlock the metamorphic video generation ability. Extensive experiments demonstrate the superiority and effectiveness of MagicTime for generating high-quality and dynamic metamorphic videos, suggesting time-lapse video generation is a promising path toward building metamorphic simulators of the physical world. |
2024-04-09T00:00:00 | 2404.05719 | Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs | [
"Keen You",
"Haotian Zhang",
"Eldon Schoop",
"Floris Weers",
"Amanda Swearngin",
"Jeffrey Nichols",
"Yinfei Yang",
"Zhe Gan"
]
| Recent advancements in multimodal large language models (MLLMs) have been noteworthy, yet, these general-domain MLLMs often fall short in their ability to comprehend and interact effectively with user interface (UI) screens. In this paper, we present Ferret-UI, a new MLLM tailored for enhanced understanding of mobile UI screens, equipped with referring, grounding, and reasoning capabilities. Given that UI screens typically exhibit a more elongated aspect ratio and contain smaller objects of interest (e.g., icons, texts) than natural images, we incorporate "any resolution" on top of Ferret to magnify details and leverage enhanced visual features. Specifically, each screen is divided into 2 sub-images based on the original aspect ratio (i.e., horizontal division for portrait screens and vertical division for landscape screens). Both sub-images are encoded separately before being sent to LLMs. We meticulously gather training samples from an extensive range of elementary UI tasks, such as icon recognition, find text, and widget listing. These samples are formatted for instruction-following with region annotations to facilitate precise referring and grounding. To augment the model's reasoning ability, we further compile a dataset for advanced tasks, including detailed description, perception/interaction conversations, and function inference. After training on the curated datasets, Ferret-UI exhibits outstanding comprehension of UI screens and the capability to execute open-ended instructions. For model evaluation, we establish a comprehensive benchmark encompassing all the aforementioned tasks. Ferret-UI excels not only beyond most open-source UI MLLMs, but also surpasses GPT-4V on all the elementary UI tasks. |
|
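
The "any resolution" split described in the Ferret-UI abstract (portrait screens cut horizontally into two sub-images, landscape screens cut vertically, each encoded separately) is simple enough to sketch. The PIL-based helper below is an illustration of that division rule, not Apple's preprocessing code.

```python
from PIL import Image

def split_ui_screen(img: Image.Image):
    """Divide a UI screenshot into two sub-images along its longer side,
    mirroring the aspect-ratio-based division in the Ferret-UI abstract
    (illustrative helper, not the authors' code)."""
    w, h = img.size
    if h >= w:  # portrait: horizontal cut -> top and bottom halves
        return [img.crop((0, 0, w, h // 2)), img.crop((0, h // 2, w, h))]
    else:       # landscape: vertical cut -> left and right halves
        return [img.crop((0, 0, w // 2, h)), img.crop((w // 2, 0, w, h))]

# Example (hypothetical file): a portrait phone screenshot becomes two halves.
# halves = split_ui_screen(Image.open("screenshot.png"))
```
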
2024-04-09T00:00:00 | 2404.04860 | ByteEdit: Boost, Comply and Accelerate Generative Image Editing | [
"Yuxi Ren",
"Jie Wu",
"Yanzuo Lu",
"Huafeng Kuang",
"Xin Xia",
"Xionghui Wang",
"Qianqian Wang",
"Yixing Zhu",
"Pan Xie",
"Shiyin Wang",
"Xuefeng Xiao",
"Yitong Wang",
"Min Zheng",
"Lean Fu"
]
| Recent advancements in diffusion-based generative image editing have sparked a profound revolution, reshaping the landscape of image outpainting and inpainting tasks. Despite these strides, the field grapples with inherent challenges, including: i) inferior quality; ii) poor consistency; iii) insufficient instruction adherence; iv) suboptimal generation efficiency. To address these obstacles, we present ByteEdit, an innovative feedback learning framework meticulously designed to Boost, Comply, and Accelerate Generative Image Editing tasks. ByteEdit seamlessly integrates image reward models dedicated to enhancing aesthetics and image-text alignment, while also introducing a dense, pixel-level reward model tailored to foster coherence in the output. Furthermore, we propose a pioneering adversarial and progressive feedback learning strategy to expedite the model's inference speed. Through extensive large-scale user evaluations, we demonstrate that ByteEdit surpasses leading generative image editing products, including Adobe, Canva, and MeiTu, in both generation quality and consistency. ByteEdit-Outpainting exhibits a remarkable enhancement of 388% and 135% in quality and consistency, respectively, when compared to the baseline model. Experiments also verified that our accelerated models maintain excellent performance in terms of quality and consistency. |
|
2024-04-09T00:00:00 | 2404.04544 | BeyondScene: Higher-Resolution Human-Centric Scene Generation With Pretrained Diffusion | [
"Gwanghyun Kim",
"Hayeon Kim",
"Hoigi Seo",
"Dong Un Kang",
"Se Young Chun"
]
| Generating higher-resolution human-centric scenes with details and controls remains a challenge for existing text-to-image diffusion models. This challenge stems from limited training image size, text encoder capacity (limited tokens), and the inherent difficulty of generating complex scenes involving multiple humans. While current methods have attempted to address only the training image size limit, they often yield human-centric scenes with severe artifacts. We propose BeyondScene, a novel framework that overcomes prior limitations, generating exquisite higher-resolution (over 8K) human-centric scenes with exceptional text-image correspondence and naturalness using existing pretrained diffusion models. BeyondScene employs a staged and hierarchical approach to initially generate a detailed base image focusing on crucial elements in instance creation for multiple humans and detailed descriptions beyond the token limit of the diffusion model, and then to seamlessly convert the base image to a higher-resolution output, exceeding the training image size and incorporating details aware of text and instances via our novel instance-aware hierarchical enlargement process that consists of our proposed high-frequency injected forward diffusion and adaptive joint diffusion. BeyondScene surpasses existing methods in terms of correspondence with detailed text descriptions and naturalness, paving the way for advanced applications in higher-resolution human-centric scene creation beyond the capacity of pretrained diffusion models without costly retraining. Project page: https://janeyeon.github.io/beyond-scene. |
|
2024-04-09T00:00:00 | 2404.04465 | Aligning Diffusion Models by Optimizing Human Utility | [
"Shufan Li",
"Konstantinos Kallidromitis",
"Akash Gokul",
"Yusuke Kato",
"Kazuki Kozuka"
]
| We present Diffusion-KTO, a novel approach for aligning text-to-image diffusion models by formulating the alignment objective as the maximization of expected human utility. Since this objective applies to each generation independently, Diffusion-KTO does not require collecting costly pairwise preference data nor training a complex reward model. Instead, our objective requires simple per-image binary feedback signals, e.g. likes or dislikes, which are abundantly available. After fine-tuning using Diffusion-KTO, text-to-image diffusion models exhibit superior performance compared to existing techniques, including supervised fine-tuning and Diffusion-DPO, both in terms of human judgment and automatic evaluation metrics such as PickScore and ImageReward. Overall, Diffusion-KTO unlocks the potential of leveraging readily available per-image binary signals and broadens the applicability of aligning text-to-image diffusion models with human preferences. |
|
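
The Diffusion-KTO objective described above (maximize expected human utility of each generation independently, using only per-image binary feedback) can be written schematically as below. This is a generic form assumed for illustration; the paper derives its own tractable instantiation for diffusion models, and the symbols $U$, $r$, $c$, and $x_0$ are placeholders.

```latex
% Schematic alignment objective: maximize expected utility of independent
% generations, where U is a utility derived from per-image binary feedback
% (e.g. like/dislike) rather than pairwise preferences.
\max_{\theta}\;\; \mathbb{E}_{\,c \sim \mathcal{D},\; x_0 \sim p_\theta(x_0 \mid c)}
\bigl[\, U\!\bigl(r(x_0, c)\bigr) \,\bigr]
```
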
2024-04-09T00:00:00 | 2404.04526 | DATENeRF: Depth-Aware Text-based Editing of NeRFs | [
"Sara Rojas",
"Julien Philip",
"Kai Zhang",
"Sai Bi",
"Fujun Luan",
"Bernard Ghanem",
"Kalyan Sunkavall"
]
| Recent advancements in diffusion models have shown remarkable proficiency in editing 2D images based on text prompts. However, extending these techniques to edit scenes in Neural Radiance Fields (NeRF) is complex, as editing individual 2D frames can result in inconsistencies across multiple views. Our crucial insight is that a NeRF scene's geometry can serve as a bridge to integrate these 2D edits. Utilizing this geometry, we employ a depth-conditioned ControlNet to enhance the coherence of each 2D image modification. Moreover, we introduce an inpainting approach that leverages the depth information of NeRF scenes to distribute 2D edits across different images, ensuring robustness against errors and resampling challenges. Our results reveal that this methodology achieves more consistent, lifelike, and detailed edits than existing leading methods for text-driven NeRF scene editing. |
|
2024-04-09T00:00:00 | 2404.04346 | Koala: Key frame-conditioned long video-LLM | [
"Reuben Tan",
"Ximeng Sun",
"Ping Hu",
"Jui-hsien Wang",
"Hanieh Deilamsalehy",
"Bryan A. Plummer",
"Bryan Russell",
"Kate Saenko"
]
| https://github.com/rxtan2/Koala-video-llm | Long video question answering is a challenging task that involves recognizing short-term activities and reasoning about their fine-grained relationships. State-of-the-art video Large Language Models (vLLMs) hold promise as a viable solution due to their demonstrated emergent capabilities on new tasks. However, despite being trained on millions of short seconds-long videos, vLLMs are unable to understand minutes-long videos and accurately answer questions about them. To address this limitation, we propose a lightweight and self-supervised approach, Key frame-conditioned long video-LLM (Koala), that introduces learnable spatiotemporal queries to adapt pretrained vLLMs for generalizing to longer videos. Our approach introduces two new tokenizers that condition on visual tokens computed from sparse video key frames for understanding short and long video moments. We train our proposed approach on HowTo100M and demonstrate its effectiveness on zero-shot long video understanding benchmarks, where it outperforms state-of-the-art large models by 3 - 6% in absolute accuracy across all tasks. Surprisingly, we also empirically show that our approach not only helps a pretrained vLLM to understand long videos but also improves its accuracy on short-term action recognition. |
2024-04-09T00:00:00 | 2404.05717 | SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing | [
"Jing Gu",
"Yilin Wang",
"Nanxuan Zhao",
"Wei Xiong",
"Qing Liu",
"Zhifei Zhang",
"He Zhang",
"Jianming Zhang",
"HyunJoon Jung",
"Xin Eric Wang"
]
| https://github.com/eric-ai-lab/swap-anything | Effective editing of personal content holds a pivotal role in enabling individuals to express their creativity, weave captivating narratives within their visual stories, and elevate the overall quality and impact of their visual content. Therefore, in this work, we introduce SwapAnything, a novel framework that can swap any object in an image with personalized concepts given by the reference, while keeping the context unchanged. Compared with existing methods for personalized subject swapping, SwapAnything has three unique advantages: (1) precise control of arbitrary objects and parts rather than the main subject, (2) more faithful preservation of context pixels, (3) better adaptation of the personalized concept to the image. First, we propose targeted variable swapping to apply region control over latent feature maps and swap masked variables for faithful context preservation and initial semantic concept swapping. Then, we introduce appearance adaptation, to seamlessly adapt the semantic concept into the original image in terms of target location, shape, style, and content during the image generation process. Extensive results from both human and automatic evaluation demonstrate significant improvements of our approach over baseline methods on personalized swapping. Furthermore, SwapAnything shows its precise and faithful swapping abilities across single object, multiple objects, partial object, and cross-domain swapping tasks. SwapAnything also achieves great performance on text-based swapping and tasks beyond swapping, such as object insertion. |
2024-04-09T00:00:00 | 2404.05666 | YaART: Yet Another ART Rendering Technology | [
"Sergey Kastryulin",
"Artem Konev",
"Alexander Shishenya",
"Eugene Lyapustin",
"Artem Khurshudov",
"Alexander Tselousov",
"Nikita Vinokurov",
"Denis Kuznedelev",
"Alexander Markovich",
"Grigoriy Livshits",
"Alexey Kirillov",
"Anastasiia Tabisheva",
"Liubov Chubarova",
"Marina Kaminskaia",
"Alexander Ustyuzhanin",
"Artemii Shvetsov",
"Daniil Shlenskii",
"Valerii Startsev",
"Dmitrii Kornilov",
"Mikhail Romanov",
"Artem Babenko",
"Sergei Ovcharenko",
"Valentin Khrulkov"
]
| In the rapidly progressing field of generative models, the development of efficient and high-fidelity text-to-image diffusion systems represents a significant frontier. This study introduces YaART, a novel production-grade text-to-image cascaded diffusion model aligned to human preferences using Reinforcement Learning from Human Feedback (RLHF). During the development of YaART, we especially focus on the choices of the model and training dataset sizes, the aspects that were not systematically investigated for text-to-image cascaded diffusion models before. In particular, we comprehensively analyze how these choices affect both the efficiency of the training process and the quality of the generated images, which are highly important in practice. Furthermore, we demonstrate that models trained on smaller datasets of higher-quality images can successfully compete with those trained on larger datasets, establishing a more efficient scenario of diffusion models training. From the quality perspective, YaART is consistently preferred by users over many existing state-of-the-art models. |
|
2024-04-09T00:00:00 | 2404.04319 | SpatialTracker: Tracking Any 2D Pixels in 3D Space | [
"Yuxi Xiao",
"Qianqian Wang",
"Shangzhan Zhang",
"Nan Xue",
"Sida Peng",
"Yujun Shen",
"Xiaowei Zhou"
]
| Recovering dense and long-range pixel motion in videos is a challenging problem. Part of the difficulty arises from the 3D-to-2D projection process, leading to occlusions and discontinuities in the 2D motion domain. While 2D motion can be intricate, we posit that the underlying 3D motion can often be simple and low-dimensional. In this work, we propose to estimate point trajectories in 3D space to mitigate the issues caused by image projection. Our method, named SpatialTracker, lifts 2D pixels to 3D using monocular depth estimators, represents the 3D content of each frame efficiently using a triplane representation, and performs iterative updates using a transformer to estimate 3D trajectories. Tracking in 3D allows us to leverage as-rigid-as-possible (ARAP) constraints while simultaneously learning a rigidity embedding that clusters pixels into different rigid parts. Extensive evaluation shows that our approach achieves state-of-the-art tracking performance both qualitatively and quantitatively, particularly in challenging scenarios such as out-of-plane rotation. |
|
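
For context, the as-rigid-as-possible (ARAP) constraint that the SpatialTracker abstract leverages is usually written as the energy below, which penalizes each tracked neighborhood for deviating from a single rigid rotation. This is the standard ARAP formulation, not necessarily the paper's exact loss; $\mathbf{p}$, $\mathbf{p}'$ are point positions before and after motion, $\mathbf{R}_i$ the best-fitting rotation for point $i$'s neighborhood $\mathcal{N}(i)$, and $w_{ij}$ per-edge weights.

```latex
E_{\mathrm{ARAP}} \;=\; \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij}\,
\bigl\| (\mathbf{p}'_i - \mathbf{p}'_j) - \mathbf{R}_i\,(\mathbf{p}_i - \mathbf{p}_j) \bigr\|^{2}
```
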
2024-04-09T00:00:00 | 2404.04421 | PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations | [
"Yang Zheng",
"Qingqing Zhao",
"Guandao Yang",
"Wang Yifan",
"Donglai Xiang",
"Florian Dubost",
"Dmitry Lagun",
"Thabo Beeler",
"Federico Tombari",
"Leonidas Guibas",
"Gordon Wetzstein"
]
| Modeling and rendering photorealistic avatars is of crucial importance in many applications. Existing methods that build a 3D avatar from visual observations, however, struggle to reconstruct clothed humans. We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human from multi-view video data along with the physical parameters of the fabric of their clothes. For this purpose, we adopt a mesh-aligned 4D Gaussian technique for spatio-temporal mesh tracking as well as a physically based inverse renderer to estimate the intrinsic material properties. PhysAvatar integrates a physics simulator to estimate the physical parameters of the garments using gradient-based optimization in a principled manner. These novel capabilities enable PhysAvatar to create high-quality novel-view renderings of avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data. This marks a significant advancement towards modeling photorealistic digital humans using physically based inverse rendering with physics in the loop. Our project website is at: https://qingqing-zhao.github.io/PhysAvatar |
|
2024-04-09T00:00:00 | 2404.05674 | MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation | [
"Kunpeng Song",
"Yizhe Zhu",
"Bingchen Liu",
"Qing Yan",
"Ahmed Elgammal",
"Xiao Yang"
]
| In this paper, we present MoMA: an open-vocabulary, training-free personalized image model that boasts flexible zero-shot capabilities. As foundational text-to-image models rapidly evolve, the demand for robust image-to-image translation grows. Addressing this need, MoMA specializes in subject-driven personalized image generation. Utilizing an open-source, Multimodal Large Language Model (MLLM), we train MoMA to serve a dual role as both a feature extractor and a generator. This approach effectively synergizes reference image and text prompt information to produce valuable image features, facilitating an image diffusion model. To better leverage the generated features, we further introduce a novel self-attention shortcut method that efficiently transfers image features to an image diffusion model, improving the resemblance of the target object in generated images. Remarkably, as a tuning-free plug-and-play module, our model requires only a single reference image and outperforms existing methods in generating images with high detail fidelity, enhanced identity-preservation and prompt faithfulness. Our work is open-source, thereby providing universal access to these advancements. |
|
2024-04-10T00:00:00 | 2404.06429 | Magic-Boost: Boost 3D Generation with Mutli-View Conditioned Diffusion | [
"Fan Yang",
"Jianfeng Zhang",
"Yichun Shi",
"Bowen Chen",
"Chenxu Zhang",
"Huichao Zhang",
"Xiaofeng Yang",
"Jiashi Feng",
"Guosheng Lin"
]
| https://github.com/magic-research/magic-boost | Benefiting from the rapid development of 2D diffusion models, 3D content creation has made significant progress recently. One promising solution involves the fine-tuning of pre-trained 2D diffusion models to harness their capacity for producing multi-view images, which are then lifted into accurate 3D models via methods like fast-NeRFs or large reconstruction models. However, because inconsistencies remain and the generated resolution is limited, the results of such methods still lack intricate textures and complex geometries. To solve this problem, we propose Magic-Boost, a multi-view conditioned diffusion model that significantly refines coarse generative results through a brief period of SDS optimization (~15 min). Compared to previous text- or single-image-based diffusion models, Magic-Boost exhibits a robust capability to generate images with high consistency from pseudo synthesized multi-view images. It provides precise SDS guidance that aligns well with the identity of the input images, enriching the local detail in both geometry and texture of the initial generative results. Extensive experiments show Magic-Boost greatly enhances the coarse inputs and generates high-quality 3D assets with rich geometric and textural details. (Project Page: https://magic-research.github.io/magic-boost/) |
2024-04-10T00:00:00 | 2404.06512 | InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD | [
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Zang",
"Yuhang Cao",
"Bin Wang",
"Linke Ouyang",
"Songyang Zhang",
"Haodong Duan",
"Wenwei Zhang",
"Yining Li",
"Hang Yan",
"Yang Gao",
"Zhe Chen",
"Xinyue Zhang",
"Wei Li",
"Jingwen Li",
"Wenhai Wang",
"Kai Chen",
"Conghui He",
"Xingcheng Zhang",
"Jifeng Dai",
"Yu Qiao",
"Dahua Lin",
"Jiaqi Wang"
]
| https://github.com/InternLM/InternLM-XComposer | The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression has been hindered by challenges in comprehending fine-grained visual content due to limited resolution. Recent efforts have aimed to enhance the high-resolution understanding capabilities of LVLMs, yet they remain capped at approximately 1500 x 1500 pixels and constrained to a relatively narrow resolution range. This paper presents InternLM-XComposer2-4KHD, a groundbreaking exploration into elevating LVLM resolution capabilities up to 4K HD (3840 x 1600) and beyond. Concurrently, considering the ultra-high resolution may not be necessary in all scenarios, it supports a wide range of diverse resolutions from 336 pixels to 4K standard, significantly broadening its scope of applicability. Specifically, this research advances the patch division paradigm by introducing a novel extension: dynamic resolution with automatic patch configuration. It maintains the training image aspect ratios while automatically varying patch counts and configuring layouts based on a pre-trained Vision Transformer (ViT) (336 x 336), leading to dynamic training resolution from 336 pixels to 4K standard. Our research demonstrates that scaling training resolution up to 4K HD leads to consistent performance enhancements without hitting the ceiling of potential improvements. InternLM-XComposer2-4KHD shows superb capability that matches or even surpasses GPT-4V and Gemini Pro in 10 of the 16 benchmarks. The InternLM-XComposer2-4KHD model series with 7B parameters is publicly available at https://github.com/InternLM/InternLM-XComposer. |
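
A rough sketch of the "dynamic resolution with automatic patch configuration" idea described above: keep the image's aspect ratio, round its sides up to multiples of the 336-pixel ViT window, and derive the patch grid subject to a budget on the total patch count. The budgeting rule and the `max_patches=55` value are assumptions for illustration, not the paper's exact policy.

```python
import math

def dynamic_patch_config(width, height, vit_size=336, max_patches=55):
    """Return (cols, rows, resized_w, resized_h) for tiling an image into
    336x336 ViT windows while roughly preserving aspect ratio and capping the
    total patch count (illustrative heuristic, not the paper's exact rule)."""
    cols = max(1, math.ceil(width / vit_size))
    rows = max(1, math.ceil(height / vit_size))
    # If over budget, shrink the larger grid dimension first.
    while cols * rows > max_patches:
        if cols >= rows and cols > 1:
            cols -= 1
        elif rows > 1:
            rows -= 1
        else:
            break
    return cols, rows, cols * vit_size, rows * vit_size

# A 3840x1600 (4K HD) input under an assumed 55-patch budget.
print(dynamic_patch_config(3840, 1600))
```
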
2024-04-10T00:00:00 | 2404.06393 | MuPT: A Generative Symbolic Music Pretrained Transformer | [
"Xingwei Qu",
"Yuelin Bai",
"Yinghao Ma",
"Ziya Zhou",
"Ka Man Lo",
"Jiaheng Liu",
"Ruibin Yuan",
"Lejun Min",
"Xueling Liu",
"Tianyu Zhang",
"Xinrun Du",
"Shuyue Guo",
"Yiming Liang",
"Yizhi Li",
"Shangda Wu",
"Junting Zhou",
"Tianyu Zheng",
"Ziyang Ma",
"Fengze Han",
"Wei Xue",
"Gus Xia",
"Emmanouil Benetos",
"Xiang Yue",
"Chenghua Lin",
"Xu Tan",
"Stephen W. Huang",
"Wenhu Chen",
"Jie Fu",
"Ge Zhang"
]
| In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of music. While the prevalent use of MIDI in music modeling is well-established, our findings suggest that LLMs are inherently more compatible with ABC Notation, which aligns more closely with their design and strengths, thereby enhancing the model's performance in musical composition. To address the challenges associated with misaligned measures from different tracks during generation, we propose the development of a Synchronized Multi-Track ABC Notation (SMT-ABC Notation), which aims to preserve coherence across multiple musical tracks. Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set. Furthermore, we explore the implications of the Symbolic Music Scaling Law (SMS Law) on model performance. The results indicate a promising direction for future research in music generation, offering extensive resources for community-led research through our open-source contributions. |
|
2024-04-10T00:00:00 | 2404.05961 | LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders | [
"Parishad BehnamGhader",
"Vaibhav Adlakha",
"Marius Mosbach",
"Dzmitry Bahdanau",
"Nicolas Chapados",
"Siva Reddy"
]
| https://github.com/McGill-NLP/llm2vec | Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 3 popular LLMs ranging from 1.3B to 7B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data. Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data. |
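
Step 3 of LLM2Vec above (unsupervised contrastive learning) can be illustrated with a SimCSE-style in-batch objective: the same sentences are embedded twice under different dropout masks, and each embedding must pick out its own duplicate among the batch. This is a generic sketch of that loss, assuming the embeddings come from pooling the (bidirectional-attention) LLM's hidden states; it is not the LLM2Vec training code.

```python
import torch
import torch.nn.functional as F

def simcse_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch unsupervised contrastive loss (SimCSE-style sketch).
    z1, z2: [batch, dim] embeddings of the SAME sentences produced with two
    different dropout masks; matching rows are positives, all others negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature              # [batch, batch] cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)

# Example with random tensors standing in for pooled LLM hidden states.
z1, z2 = torch.randn(8, 4096), torch.randn(8, 4096)
print(simcse_loss(z1, z2).item())
```
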
2024-04-10T00:00:00 | 2404.05892 | Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence | [
"Bo Peng",
"Daniel Goldstein",
"Quentin Anthony",
"Alon Albalak",
"Eric Alcaide",
"Stella Biderman",
"Eugene Cheah",
"Teddy Ferdinan",
"Haowen Hou",
"Przemysław Kazienko",
"Kranthi Kiran GV",
"Jan Kocoń",
"Bartłomiej Koptyra",
"Satyapriya Krishna",
"Ronald McClelland Jr.",
"Niklas Muennighoff",
"Fares Obeid",
"Atsushi Saito",
"Guangyu Song",
"Haoqin Tu",
"Stanisław Woźniak",
"Ruichong Zhang",
"Bingchen Zhao",
"Qihang Zhao",
"Peng Zhou",
"Jian Zhu",
"Rui-Jie Zhu"
]
| We present Eagle (RWKV-5) and Finch (RWKV-6), sequence models improving upon the RWKV (RWKV-4) architecture. Our architectural design advancements include multi-headed matrix-valued states and a dynamic recurrence mechanism that improve expressivity while maintaining the inference efficiency characteristics of RNNs. We introduce a new multilingual corpus with 1.12 trillion tokens and a fast tokenizer based on greedy matching for enhanced multilinguality. We trained four Eagle models, ranging from 0.46 to 7.5 billion parameters, and two Finch models with 1.6 and 3.1 billion parameters and find that they achieve competitive performance across a wide variety of benchmarks. We release all our models on HuggingFace under the Apache 2.0 license. Models at: https://huggingface.co/RWKV Training code at: https://github.com/RWKV/RWKV-LM Inference code at: https://github.com/RWKV/ChatRWKV Time-parallel training code at: https://github.com/RWKV/RWKV-infctx-trainer |
|
2024-04-10T00:00:00 | 2404.06212 | OmniFusion Technical Report | [
"Elizaveta Goncharova",
"Anton Razzhigaev",
"Matvey Mikhalchuk",
"Maxim Kurkin",
"Irina Abdullaeva",
"Matvey Skripkin",
"Ivan Oseledets",
"Denis Dimitrov",
"Andrey Kuznetsov"
]
| https://github.com/AIRI-Institute/OmniFusion | Last year, multimodal architectures served up a revolution in AI-based approaches and solutions, extending the capabilities of large language models (LLM). We propose an OmniFusion model based on a pretrained LLM and adapters for visual modality. We evaluated and compared several architecture design principles for better text and visual data coupling: MLP and transformer adapters, various CLIP ViT-based encoders (SigLIP, InternVIT, etc.), and their fusing approach, image encoding method (whole image or tiles encoding) and two 7B LLMs (the proprietary one and open-source Mistral). Experiments on 8 visual-language benchmarks show the top score for the best OmniFusion setup in terms of different VQA tasks in comparison with open-source LLaVA-like solutions: VizWiz, Pope, MM-Vet, ScienceQA, MMBench, TextVQA, VQAv2, MMMU. We also propose a variety of situations, where OmniFusion provides highly-detailed answers in different domains: housekeeping, sightseeing, culture, medicine, handwritten and scanned equations recognition, etc. Mistral-based OmniFusion model is an open-source solution with weights, training and inference scripts available at https://github.com/AIRI-Institute/OmniFusion. |
2024-04-10T00:00:00 | 2404.06395 | MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies | [
"Shengding Hu",
"Yuge Tu",
"Xu Han",
"Chaoqun He",
"Ganqu Cui",
"Xiang Long",
"Zhi Zheng",
"Yewei Fang",
"Yuxiang Huang",
"Weilin Zhao",
"Xinrong Zhang",
"Zheng Leng Thai",
"Kaihuo Zhang",
"Chongyi Wang",
"Yuan Yao",
"Chenyang Zhao",
"Jie Zhou",
"Jie Cai",
"Zhongwu Zhai",
"Ning Ding",
"Chao Jia",
"Guoyang Zeng",
"Dahai Li",
"Zhiyuan Liu",
"Maosong Sun"
]
| https://github.com/OpenBMB/MiniCPM | The burgeoning interest in developing Large Language Models (LLMs) with up to trillion parameters has been met with concerns regarding resource efficiency and practical expense, particularly given the immense cost of experimentation. This scenario underscores the importance of exploring the potential of Small Language Models (SLMs) as a resource-efficient alternative. In this context, we introduce MiniCPM, specifically the 1.2B and 2.4B non-embedding parameter variants, which not only excel in their respective categories but also demonstrate capabilities on par with 7B-13B LLMs. While focusing on SLMs, our approach exhibits scalability in both model and data dimensions for future LLM research. Regarding model scaling, we employ extensive model wind tunnel experiments for stable and optimal scaling. For data scaling, we introduce a Warmup-Stable-Decay (WSD) learning rate scheduler (LRS), conducive to continuous training and domain adaptation. We present an in-depth analysis of the intriguing training dynamics that occurred in the WSD LRS. With WSD LRS, we are now able to efficiently study the data-model scaling law without extensive retraining experiments on both axes of model and data, from which we derive a much higher compute-optimal data-model ratio than Chinchilla Optimal. Additionally, we introduce the MiniCPM family, including MiniCPM-DPO, MiniCPM-MoE and MiniCPM-128K, whose excellent performance further cements MiniCPM's foundation in diverse SLM applications. MiniCPM models are available publicly at https://github.com/OpenBMB/MiniCPM. |
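A minimal sketch of the Warmup-Stable-Decay (WSD) learning-rate schedule the abstract describes; the phase lengths, peak and minimum rates, and linear decay shape are illustrative assumptions rather than MiniCPM's published settings.

```python
# WSD schedule sketch: linear warmup, long constant phase, short final decay.
def wsd_lr(step, max_lr=1e-3, min_lr=1e-5,
           warmup_steps=2000, stable_steps=80000, decay_steps=8000):
    if step < warmup_steps:                       # warmup phase
        return max_lr * step / warmup_steps
    if step < warmup_steps + stable_steps:        # stable phase
        return max_lr
    d = min(step - warmup_steps - stable_steps, decay_steps) / decay_steps
    return max_lr + (min_lr - max_lr) * d         # decay phase

for s in (0, 1000, 2000, 50000, 82000, 86000, 95000):
    print(s, wsd_lr(s))
```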
2024-04-10T00:00:00 | 2404.06209 | Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models | [
"Sebastian Bordt",
"Harsha Nori",
"Vanessa Rodrigues",
"Besmira Nushi",
"Rich Caruana"
]
| https://github.com/interpretml/LLM-Tabular-Memorization-Checker | While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over. In this work, we address this concern for tabular data. Specifically, we introduce a variety of different techniques to assess whether a language model has seen a tabular dataset during training. This investigation reveals that LLMs have memorized many popular tabular datasets verbatim. We then compare the few-shot learning performance of LLMs on datasets that were seen during training to the performance on datasets released after training. We find that LLMs perform better on datasets seen during training, indicating that memorization leads to overfitting. At the same time, LLMs show non-trivial performance on novel datasets and are surprisingly robust to data transformations. We then investigate the in-context statistical learning abilities of LLMs. Without fine-tuning, we find them to be limited. This suggests that much of the few-shot performance on novel datasets is due to the LLM's world knowledge. Overall, our results highlight the importance of testing whether an LLM has seen an evaluation dataset during pre-training. We make the exposure tests we developed available as the tabmemcheck Python package at https://github.com/interpretml/LLM-Tabular-Memorization-Checker |
2024-04-10T00:00:00 | 2404.05875 | CodecLM: Aligning Language Models with Tailored Synthetic Data | [
"Zifeng Wang",
"Chun-Liang Li",
"Vincent Perot",
"Long T. Le",
"Jin Miao",
"Zizhao Zhang",
"Chen-Yu Lee",
"Tomas Pfister"
]
| Instruction tuning has emerged as the key in aligning large language models (LLMs) with specific task instructions, thereby mitigating the discrepancy between the next-token prediction objective and users' actual goals. To reduce the labor and time cost to collect or annotate data by humans, researchers have started to explore the use of LLMs to generate instruction-aligned synthetic data. Recent works focus on generating diverse instructions and applying LLMs to increase instruction complexity, often neglecting downstream use cases. It remains unclear how to tailor high-quality data to elicit better instruction-following abilities in different target instruction distributions and LLMs. To this end, we introduce CodecLM, a general framework for adaptively generating high-quality synthetic data for LLM alignment with different downstream instruction distributions and LLMs. Drawing on the Encode-Decode principles, we use LLMs as codecs to guide the data generation process. We first encode seed instructions into metadata, which are concise keywords generated on-the-fly to capture the target instruction distribution, and then decode metadata to create tailored instructions. We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples. Extensive experiments on four open-domain instruction following benchmarks validate the effectiveness of CodecLM over the current state of the art. |
|
2024-04-10T00:00:00 | 2404.05829 | SambaLingo: Teaching Large Language Models New Languages | [
"Zoltan Csaki",
"Bo Li",
"Jonathan Li",
"Qiantong Xu",
"Pian Pawakapan",
"Leon Zhang",
"Yun Du",
"Hengyu Zhao",
"Changran Hu",
"Urmish Thakker"
]
| Despite the widespread availability of LLMs, there remains a substantial gap in their capabilities and availability across diverse languages. One approach to address these issues has been to take an existing pre-trained LLM and continue to train it on new languages. While prior works have experimented with language adaptation, many questions around best practices and methodology have not been covered. In this paper, we present a comprehensive investigation into the adaptation of LLMs to new languages. Our study covers the key components in this process, including vocabulary extension, direct preference optimization and the data scarcity problem for human alignment in low-resource languages. We scale these experiments across 9 languages and 2 parameter scales (7B and 70B). We compare our models against Llama 2, Aya-101, XGLM, BLOOM and existing language experts, outperforming all prior published baselines. Additionally, all evaluation code and checkpoints are made public to facilitate future research. |
|
2024-04-10T00:00:00 | 2404.06507 | Reconstructing Hand-Held Objects in 3D | [
"Jane Wu",
"Georgios Pavlakos",
"Georgia Gkioxari",
"Jitendra Malik"
]
| Objects manipulated by the hand (i.e., manipulanda) are particularly challenging to reconstruct from in-the-wild RGB images or videos. Not only does the hand occlude much of the object, but also the object is often only visible in a small number of image pixels. At the same time, two strong anchors emerge in this setting: (1) estimated 3D hands help disambiguate the location and scale of the object, and (2) the set of manipulanda is small relative to all possible objects. With these insights in mind, we present a scalable paradigm for handheld object reconstruction that builds on recent breakthroughs in large language/vision models and 3D object datasets. Our model, MCC-Hand-Object (MCC-HO), jointly reconstructs hand and object geometry given a single RGB image and inferred 3D hand as inputs. Subsequently, we use GPT-4(V) to retrieve a 3D object model that matches the object in the image and rigidly align the model to the network-inferred geometry; we call this alignment Retrieval-Augmented Reconstruction (RAR). Experiments demonstrate that MCC-HO achieves state-of-the-art performance on lab and Internet datasets, and we show how RAR can be used to automatically obtain 3D labels for in-the-wild images of hand-object interactions. |
|
2024-04-10T00:00:00 | 2404.06091 | Hash3D: Training-free Acceleration for 3D Generation | [
"Xingyi Yang",
"Xinchao Wang"
]
| https://github.com/Adamdad/hash3D | The evolution of 3D generative modeling has been notably propelled by the adoption of 2D diffusion models. Despite this progress, the cumbersome optimization process per se presents a critical hurdle to efficiency. In this paper, we introduce Hash3D, a universal acceleration for 3D generation without model training. Central to Hash3D is the insight that feature-map redundancy is prevalent in images rendered from camera positions and diffusion time-steps in close proximity. By effectively hashing and reusing these feature maps across neighboring timesteps and camera angles, Hash3D substantially prevents redundant calculations, thus accelerating the diffusion model's inference in 3D generation tasks. We achieve this through an adaptive grid-based hashing. Surprisingly, this feature-sharing mechanism not only speeds up generation but also enhances the smoothness and view consistency of the synthesized 3D objects. Our experiments, covering 5 text-to-3D and 3 image-to-3D models, demonstrate Hash3D's versatility to speed up optimization, enhancing efficiency by 1.3 to 4 times. Additionally, Hash3D's integration with 3D Gaussian splatting largely speeds up 3D model creation, reducing text-to-3D processing to about 10 minutes and image-to-3D conversion to roughly 30 seconds. The project page is at https://adamdad.github.io/hash3D/. |
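The hash-and-reuse idea is simple enough to sketch: feature maps computed at nearby diffusion timesteps and camera angles are bucketed into a grid and looked up from a cache instead of recomputed. In the toy below the bucket sizes and the `expensive_features` stand-in are hypothetical; a real implementation would cache intermediate U-Net activations.

```python
# Schematic grid-based feature cache keyed by (timestep, camera) buckets.
import math

cache = {}

def bucket(t, azimuth_deg, elevation_deg, t_bin=50, ang_bin=10.0):
    return (t // t_bin,
            int(azimuth_deg // ang_bin) % int(360 // ang_bin),
            int(elevation_deg // ang_bin))

def expensive_features(t, az, el):          # stand-in for a diffusion U-Net call
    return [math.sin(t * 0.01), math.cos(math.radians(az)), el / 90.0]

def cached_features(t, az, el):
    key = bucket(t, az, el)
    if key not in cache:                    # recompute only once per grid cell
        cache[key] = expensive_features(t, az, el)
    return cache[key]

print(cached_features(980, 31.0, 12.0))
print(cached_features(999, 35.0, 14.0))     # same bucket -> reused features
print(len(cache))                           # 1
```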
2024-04-10T00:00:00 | 2404.06109 | Revising Densification in Gaussian Splatting | [
"Samuel Rota Bulò",
"Lorenzo Porzi",
"Peter Kontschieder"
]
| In this paper, we address the limitations of Adaptive Density Control (ADC) in 3D Gaussian Splatting (3DGS), a scene representation method achieving high-quality, photorealistic results for novel view synthesis. ADC has been introduced for automatic 3D point primitive management, controlling densification and pruning, however, with certain limitations in the densification logic. Our main contribution is a more principled, pixel-error driven formulation for density control in 3DGS, leveraging an auxiliary, per-pixel error function as the criterion for densification. We further introduce a mechanism to control the total number of primitives generated per scene and correct a bias in the current opacity handling strategy of ADC during cloning operations. Our approach leads to consistent quality improvements across a variety of benchmark scenes, without sacrificing the method's efficiency. |
|
2024-04-11T00:00:00 | 2404.07143 | Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention | [
"Tsendsuren Munkhdalai",
"Manaal Faruqui",
"Siddharth Gopal"
]
| This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs. |
|
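A schematic of the compressive-memory half of the mechanism, under the usual linear-attention conventions: segment keys and values are folded into a fixed-size associative matrix that later queries read from. The ELU+1 feature map, segment length, and the omission of the local-attention branch and learned gate are simplifying assumptions, not the paper's exact formulation.

```python
# Compressive memory sketch: constant-size state regardless of sequence length.
import torch
import torch.nn.functional as F

d_k, d_v, seg_len = 16, 16, 32
memory = torch.zeros(d_k, d_v)       # fixed-size associative state
norm = torch.zeros(d_k)

def phi(x):                          # non-negative feature map
    return F.elu(x) + 1

def update_memory(K, V):             # K: (seg_len, d_k), V: (seg_len, d_v)
    global memory, norm
    memory = memory + phi(K).t() @ V
    norm = norm + phi(K).sum(dim=0)

def read_memory(Q):                  # Q: (n, d_k) -> (n, d_v)
    q = phi(Q)
    return (q @ memory) / (q @ norm).clamp_min(1e-6).unsqueeze(-1)

for _ in range(4):                   # stream four segments into memory
    update_memory(torch.randn(seg_len, d_k), torch.randn(seg_len, d_v))
print(read_memory(torch.randn(5, d_k)).shape)   # torch.Size([5, 16])
```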
2024-04-11T00:00:00 | 2404.06654 | RULER: What's the Real Context Size of Your Long-Context Language Models? | [
"Cheng-Ping Hsieh",
"Simeng Sun",
"Samuel Kriman",
"Shantanu Acharya",
"Dima Rekesh",
"Fei Jia",
"Boris Ginsburg"
]
| https://github.com/hsiehjackson/RULER | The needle-in-a-haystack (NIAH) test, which examines the ability to retrieve a piece of information (the "needle") from long distractor texts (the "haystack"), has been widely adopted to evaluate long-context language models (LMs). However, this simple retrieval-based test is indicative of only a superficial form of long-context understanding. To provide a more comprehensive evaluation of long-context LMs, we create a new synthetic benchmark RULER with flexible configurations for customized sequence length and task complexity. RULER expands upon the vanilla NIAH test to encompass variations with diverse types and quantities of needles. Moreover, RULER introduces new task categories multi-hop tracing and aggregation to test behaviors beyond searching from context. We evaluate ten long-context LMs with 13 representative tasks in RULER. Despite achieving nearly perfect accuracy in the vanilla NIAH test, all models exhibit large performance drops as the context length increases. While these models all claim context sizes of 32K tokens or greater, only four models (GPT-4, Command-R, Yi-34B, and Mixtral) can maintain satisfactory performance at the length of 32K. Our analysis of Yi-34B, which supports context length of 200K, reveals large room for improvement as we increase input length and task complexity. We open source RULER to spur comprehensive evaluation of long-context LMs. |
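A toy generator for a needle-in-a-haystack style example of the kind RULER extends with multiple needles; the filler sentence, key format, and needle count are arbitrary illustrative choices, not the benchmark's configuration.

```python
# Build a synthetic retrieval example: several key-value "needles" hidden in
# filler text, with a query about one of them.
import random

random.seed(0)
filler = "The grass is green. The sky is blue. The sun is bright. "

def make_example(num_needles=4, haystack_sentences=200):
    needles = {f"key-{i}": random.randint(1000, 9999) for i in range(num_needles)}
    chunks = [filler] * haystack_sentences
    for k, v in needles.items():       # scatter needles at random positions
        pos = random.randrange(len(chunks))
        chunks.insert(pos, f"The magic number for {k} is {v}. ")
    query_key = random.choice(list(needles))
    prompt = "".join(chunks) + f"\nWhat is the magic number for {query_key}?"
    return prompt, needles[query_key]

prompt, answer = make_example()
print(len(prompt), "characters; expected answer:", answer)
```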
2024-04-11T00:00:00 | 2404.07199 | RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion | [
"Jaidev Shriram",
"Alex Trevithick",
"Lingjie Liu",
"Ravi Ramamoorthi"
]
| https://github.com/jaidevshriram/realmdreamer | We introduce RealmDreamer, a technique for generation of general forward-facing 3D scenes from text descriptions. Our technique optimizes a 3D Gaussian Splatting representation to match complex text prompts. We initialize these splats by utilizing the state-of-the-art text-to-image generators, lifting their samples into 3D, and computing the occlusion volume. We then optimize this representation across multiple views as a 3D inpainting task with image-conditional diffusion models. To learn correct geometric structure, we incorporate a depth diffusion model by conditioning on the samples from the inpainting model, giving rich geometric structure. Finally, we finetune the model using sharpened samples from image generators. Notably, our technique does not require video or multi-view data and can synthesize a variety of high-quality 3D scenes in different styles, consisting of multiple objects. Its generality additionally allows 3D synthesis from a single image. |
2024-04-11T00:00:00 | 2404.06780 | Urban Architect: Steerable 3D Urban Scene Generation with Layout Prior | [
"Fan Lu",
"Kwan-Yee Lin",
"Yan Xu",
"Hongsheng Li",
"Guang Chen",
"Changjun Jiang"
]
| https://github.com/UrbanArchitect/UrbanArchitect | Text-to-3D generation has achieved remarkable success via large-scale text-to-image diffusion models. Nevertheless, there is no paradigm for scaling up the methodology to urban scale. Urban scenes, characterized by numerous elements, intricate arrangement relationships, and vast scale, present a formidable barrier to the interpretability of ambiguous textual descriptions for effective model optimization. In this work, we surmount the limitations by introducing a compositional 3D layout representation into text-to-3D paradigm, serving as an additional prior. It comprises a set of semantic primitives with simple geometric structures and explicit arrangement relationships, complementing textual descriptions and enabling steerable generation. Upon this, we propose two modifications -- (1) We introduce Layout-Guided Variational Score Distillation to address model optimization inadequacies. It conditions the score distillation sampling process with geometric and semantic constraints of 3D layouts. (2) To handle the unbounded nature of urban scenes, we represent 3D scene with a Scalable Hash Grid structure, incrementally adapting to the growing scale of urban scenes. Extensive experiments substantiate the capability of our framework to scale text-to-3D generation to large-scale urban scenes that cover over 1000m driving distance for the first time. We also present various scene editing demonstrations, showing the powers of steerable urban scene generation. Website: https://urbanarchitect.github.io. |
2024-04-11T00:00:00 | 2404.07204 | BRAVE: Broadening the visual encoding of vision-language models | [
"Oğuzhan Fatih Kar",
"Alessio Tonioni",
"Petra Poklukar",
"Achin Kulshrestha",
"Amir Zamir",
"Federico Tombari"
]
| Vision-language models (VLMs) are typically composed of a vision encoder, e.g. CLIP, and a language model (LM) that interprets the encoded features to solve downstream tasks. Despite remarkable progress, VLMs are subject to several shortcomings due to the limited capabilities of vision encoders, e.g. "blindness" to certain image features, visual hallucination, etc. To address these issues, we study broadening the visual encoding capabilities of VLMs. We first comprehensively benchmark several vision encoders with different inductive biases for solving VLM tasks. We observe that there is no single encoding configuration that consistently achieves top performance across different tasks, and encoders with different biases can perform surprisingly similarly. Motivated by this, we introduce a method, named BRAVE, that consolidates features from multiple frozen encoders into a more versatile representation that can be directly fed as the input to a frozen LM. BRAVE achieves state-of-the-art performance on a broad range of captioning and VQA benchmarks and significantly reduces the aforementioned issues of VLMs, while requiring a smaller number of trainable parameters than existing methods and having a more compressed representation. Our results highlight the potential of incorporating different visual biases for a more broad and contextualized visual understanding of VLMs. |
|
2024-04-11T00:00:00 | 2404.06773 | Adapting LLaMA Decoder to Vision Transformer | [
"Jiahao Wang",
"Wenqi Shao",
"Mengzhao Chen",
"Chengyue Wu",
"Yong Liu",
"Kaipeng Zhang",
"Songyang Zhang",
"Kai Chen",
"Ping Luo"
]
| This work examines whether decoder-only Transformers such as LLaMA, which were originally designed for large language models (LLMs), can be adapted to the computer vision field. We first "LLaMAfy" a standard ViT step-by-step to align with LLaMA's architecture, and find that directly applying a causal mask to the self-attention brings an attention collapse issue, resulting in the failure of network training. We suggest repositioning the class token behind the image tokens with a post-sequence class token technique to overcome this challenge, enabling causal self-attention to efficiently capture the entire image's information. Additionally, we develop a soft mask strategy that gradually introduces a causal mask to the self-attention at the onset of training to facilitate the optimization behavior. The tailored model, dubbed as image LLaMA (iLLaMA), is akin to LLaMA in architecture and enables direct supervised learning. Its causal self-attention boosts computational efficiency and learns complex representation by elevating attention map ranks. iLLaMA rivals the performance of its encoder-only counterparts, achieving 75.1% ImageNet top-1 accuracy with only 5.7M parameters. Scaling the model to ~310M and pre-training on ImageNet-21K further enhances the accuracy to 86.0%. Extensive experiments demonstrate iLLaMA's reliable properties: calibration, shape-texture bias, quantization compatibility, ADE20K segmentation and CIFAR transfer learning. We hope our study can kindle fresh views on visual model design in the wave of LLMs. Pre-trained models and codes are available here. |
|
2024-04-11T00:00:00 | 2404.06903 | DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting | [
"Shijie Zhou",
"Zhiwen Fan",
"Dejia Xu",
"Haoran Chang",
"Pradyumna Chari",
"Tejas Bharadwaj",
"Suya You",
"Zhangyang Wang",
"Achuta Kadambi"
]
| The increasing demand for virtual reality applications has highlighted the significance of crafting immersive 3D assets. We present a text-to-3D 360° scene generation pipeline that facilitates the creation of comprehensive 360° scenes for in-the-wild environments in a matter of minutes. Our approach utilizes the generative power of a 2D diffusion model and prompt self-refinement to create a high-quality and globally coherent panoramic image. This image acts as a preliminary "flat" (2D) scene representation. Subsequently, it is lifted into 3D Gaussians, employing splatting techniques to enable real-time exploration. To produce consistent 3D geometry, our pipeline constructs a spatially coherent structure by aligning the 2D monocular depth into a globally optimized point cloud. This point cloud serves as the initial state for the centroids of 3D Gaussians. In order to address invisible issues inherent in single-view inputs, we impose semantic and geometric constraints on both synthesized and input camera views as regularizations. These guide the optimization of Gaussians, aiding in the reconstruction of unseen regions. In summary, our method offers a globally consistent 3D scene within a 360° perspective, providing an enhanced immersive experience over existing techniques. Project website at: http://dreamscene360.github.io/ |
|
2024-04-12T00:00:00 | 2404.07448 | Transferable and Principled Efficiency for Open-Vocabulary Segmentation | [
"Jingxuan Xu",
"Wuyang Chen",
"Yao Zhao",
"Yunchao Wei"
]
| https://github.com/Xujxyang/OpenTrans | Recent success of pre-trained foundation vision-language models makes Open-Vocabulary Segmentation (OVS) possible. Despite the promising performance, this approach introduces heavy computational overheads for two challenges: 1) large model sizes of the backbone; 2) expensive costs during the fine-tuning. These challenges hinder this OVS strategy from being widely applicable and affordable in real-world scenarios. Although traditional methods such as model compression and efficient fine-tuning can address these challenges, they often rely on heuristics. This means that their solutions cannot be easily transferred and necessitate re-training on different models, which comes at a cost. In the context of efficient OVS, we target achieving performance that is comparable to or even better than prior OVS works based on large vision-language foundation models, by utilizing smaller models that incur lower training costs. The core strategy is to make our efficiency principled and thus seamlessly transferable from one OVS framework to others without further customization. Comprehensive experiments on diverse OVS benchmarks demonstrate our superior trade-off between segmentation accuracy and computation costs over previous works. Our code is available on https://github.com/Xujxyang/OpenTrans |
2024-04-12T00:00:00 | 2404.07413 | JetMoE: Reaching Llama2 Performance with 0.1M Dollars | [
"Yikang Shen",
"Zhen Guo",
"Tianle Cai",
"Zengyi Qin"
]
| https://github.com/myshell-ai/JetMoE | Large Language Models (LLMs) have achieved remarkable results, but their increasing resource demand has become a major obstacle to the development of powerful and accessible super-human intelligence. This report introduces JetMoE-8B, a new LLM trained with less than $0.1 million, using 1.25T tokens from carefully mixed open-source corpora and 30,000 H100 GPU hours. Despite its low cost, the JetMoE-8B demonstrates impressive performance, with JetMoE-8B outperforming the Llama2-7B model and JetMoE-8B-Chat surpassing the Llama2-13B-Chat model. These results suggest that LLM training can be much more cost-effective than generally thought. JetMoE-8B is based on an efficient Sparsely-gated Mixture-of-Experts (SMoE) architecture, composed of attention and feedforward experts. Both layers are sparsely activated, allowing JetMoE-8B to have 8B parameters while only activating 2B for each input token, reducing inference computation by about 70% compared to Llama2-7B. Moreover, JetMoE-8B is highly open and academia-friendly, using only public datasets and training code. All training parameters and data mixtures have been detailed in this report to facilitate future efforts in the development of open foundation models. This transparency aims to encourage collaboration and further advancements in the field of accessible and efficient LLMs. The model weights are publicly available at https://github.com/myshell-ai/JetMoE. |
2024-04-12T00:00:00 | 2404.07616 | Audio Dialogues: Dialogues dataset for audio and music understanding | [
"Arushi Goel",
"Zhifeng Kong",
"Rafael Valle",
"Bryan Catanzaro"
]
| Existing datasets for audio understanding primarily focus on single-turn interactions (i.e. audio captioning, audio question answering) for describing audio in natural language, thus limiting understanding audio via interactive dialogue. To address this gap, we introduce Audio Dialogues: a multi-turn dialogue dataset containing 163.8k samples for general audio sounds and music. In addition to dialogues, Audio Dialogues also has question-answer pairs to understand and compare multiple input audios together. Audio Dialogues leverages a prompting-based approach and caption annotations from existing datasets to generate multi-turn dialogues using a Large Language Model (LLM). We evaluate existing audio-augmented large language models on our proposed dataset to demonstrate the complexity and applicability of Audio Dialogues. Our code for generating the dataset will be made publicly available. Detailed prompts and generated dialogues can be found on the demo website https://audiodialogues.github.io/. |
|
2024-04-12T00:00:00 | 2404.07544 | From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples | [
"Robert Vacareanu",
"Vlad-Andrei Negru",
"Vasile Suciu",
"Mihai Surdeanu"
]
| https://github.com/robertvacareanu/llm4regression | We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance rivaling (or even outperforming) that of traditional supervised methods such as Random Forest, Bagging, or Gradient Boosting. For example, on the challenging Friedman #2 regression dataset, Claude 3 outperforms many supervised methods such as AdaBoost, SVM, Random Forest, KNN, or Gradient Boosting. We then investigate how well the performance of large language models scales with the number of in-context exemplars. We borrow from the notion of regret from online learning and empirically show that LLMs are capable of obtaining a sub-linear regret. |
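The evaluation setup amounts to serializing (x, y) pairs into a prompt and asking the model for the next y. The sketch below builds such a prompt for a hypothetical linear target function; the feature names and formatting are assumptions, and no API call is made.

```python
# Serialize in-context regression examples into a text prompt.
import random

random.seed(0)

def f(x1, x2):                         # hidden target function (toy)
    return 3.0 * x1 - 2.0 * x2 + 5.0

train = [(round(random.uniform(0, 10), 2), round(random.uniform(0, 10), 2))
         for _ in range(20)]
x_test = (4.2, 7.7)

lines = [f"x1={x1}, x2={x2}, y={round(f(x1, x2), 2)}" for x1, x2 in train]
prompt = ("Predict y from x1 and x2 given these examples:\n"
          + "\n".join(lines)
          + f"\nx1={x_test[0]}, x2={x_test[1]}, y=")
print(prompt[-200:])
print("ground truth:", round(f(*x_test), 2))
# The prompt would be sent to a chat/completions endpoint; the paper compares
# the returned number against supervised baselines such as random forests.
```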
2024-04-12T00:00:00 | 2404.07973 | Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models | [
"Haotian Zhang",
"Haoxuan You",
"Philipp Dufter",
"Bowen Zhang",
"Chen Chen",
"Hong-You Chen",
"Tsu-Jui Fu",
"William Yang Wang",
"Shih-Fu Chang",
"Zhe Gan",
"Yinfei Yang"
]
| While Ferret seamlessly integrates regional understanding into the Large Language Model (LLM) to facilitate its referring and grounding capability, it has certain limitations: it is constrained by the pre-trained fixed visual encoder and fails to perform well on broader tasks. In this work, we unveil Ferret-v2, a significant upgrade to Ferret, with three key designs. (1) Any resolution grounding and referring: A flexible approach that effortlessly handles higher image resolution, improving the model's ability to process and understand images in greater detail. (2) Multi-granularity visual encoding: By integrating the additional DINOv2 encoder, the model learns better and diverse underlying contexts for global and fine-grained visual information. (3) A three-stage training paradigm: Besides image-caption alignment, an additional stage is proposed for high-resolution dense alignment before the final instruction tuning. Experiments show that Ferret-v2 provides substantial improvements over Ferret and other state-of-the-art methods, thanks to its high-resolution scaling and fine-grained visual processing. |
|
2024-04-12T00:00:00 | 2404.07965 | Rho-1: Not All Tokens Are What You Need | [
"Zhenghao Lin",
"Zhibin Gou",
"Yeyun Gong",
"Xiao Liu",
"Yelong Shen",
"Ruochen Xu",
"Chen Lin",
"Yujiu Yang",
"Jian Jiao",
"Nan Duan",
"Weizhu Chen"
]
| https://github.com/microsoft/rho | Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that "Not all tokens in a corpus are equally important for language model training". Our initial analysis delves into the token-level training dynamics of language models, revealing distinct loss patterns for different tokens. Leveraging these insights, we introduce a new language model called Rho-1. Unlike traditional LMs that learn to predict every next token in a corpus, Rho-1 employs Selective Language Modeling (SLM), which selectively trains on useful tokens that align with the desired distribution. This approach involves scoring pretraining tokens using a reference model, and then training the language model with a focused loss on tokens with higher excess loss. When continually pretraining on the 15B-token OpenWebMath corpus, Rho-1 yields an absolute improvement in few-shot accuracy of up to 30% in 9 math tasks. After fine-tuning, Rho-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on the MATH dataset, respectively - matching DeepSeekMath with only 3% of the pretraining tokens. Furthermore, when pretraining on 80B general tokens, Rho-1 achieves 6.8% average enhancement across 15 diverse tasks, increasing both efficiency and performance of the language model pre-training. |
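Selective Language Modeling as summarized in the abstract can be sketched as a per-token masked loss: score tokens by excess loss against a reference model, then average the training loss only over the highest-scoring fraction. The tensors below stand in for real model outputs, and the keep ratio is an arbitrary choice.

```python
# Selective Language Modeling sketch: train only on high excess-loss tokens.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, T, V, keep_ratio = 2, 12, 50, 0.6
targets = torch.randint(0, V, (B, T))
model_logits = torch.randn(B, T, V, requires_grad=True)
ref_logits = torch.randn(B, T, V)            # frozen reference model stand-in

def token_ce(logits, targets):
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1), reduction="none").view(B, T)

with torch.no_grad():
    excess = token_ce(model_logits, targets) - token_ce(ref_logits, targets)

k = int(keep_ratio * B * T)
threshold = excess.flatten().topk(k).values.min()
mask = excess >= threshold                   # keep only high excess-loss tokens

loss = (token_ce(model_logits, targets) * mask).sum() / mask.sum()
loss.backward()
print(f"selected {int(mask.sum())}/{B*T} tokens, loss={loss.item():.3f}")
```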
2024-04-12T00:00:00 | 2404.07839 | RecurrentGemma: Moving Past Transformers for Efficient Open Language Models | [
"Aleksandar Botev",
"Soham De",
"Samuel L Smith",
"Anushan Fernando",
"George-Cristian Muraru",
"Ruba Haroun",
"Leonard Berrada",
"Razvan Pascanu",
"Pier Giuseppe Sessa",
"Robert Dadashi",
"Léonard Hussenot",
"Johan Ferret",
"Sertan Girgin",
"Olivier Bachem",
"Alek Andreev",
"Kathleen Kenealy",
"Thomas Mesnard",
"Cassidy Hardin",
"Surya Bhupatiraju",
"Shreya Pathak",
"Laurent Sifre",
"Morgane Rivière",
"Mihir Sanjay Kale",
"Juliette Love",
"Pouya Tafti",
"Armand Joulin",
"Noah Fiedel",
"Evan Senter",
"Yutian Chen",
"Srivatsan Srinivasan",
"Guillaume Desjardins",
"David Budden",
"Arnaud Doucet",
"Sharad Vikram",
"Adam Paszke",
"Trevor Gale",
"Sebastian Borgeaud",
"Charlie Chen",
"Andy Brock",
"Antonia Paterson",
"Jenny Brennan",
"Meg Risdal",
"Raj Gundluru",
"Nesh Devanathan",
"Paul Mooney",
"Nilay Chauhan",
"Phil Culliton",
"Luiz GUStavo Martins",
"Elisa Bandy",
"David Huntsperger",
"Glenn Cameron",
"Arthur Zucker",
"Tris Warkentin",
"Ludovic Peran",
"Minh Giang",
"Zoubin Ghahramani",
"Clément Farabet",
"Koray Kavukcuoglu",
"Demis Hassabis",
"Raia Hadsell",
"Yee Whye Teh",
"Nando de Frietas"
]
| https://github.com/google-deepmind/recurrentgemma | We introduce RecurrentGemma, an open language model which uses Google's novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens. |
2024-04-12T00:00:00 | 2404.07987 | ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback | [
"Ming Li",
"Taojiannan Yang",
"Huafeng Kuang",
"Jie Wu",
"Zhaoning Wang",
"Xuefeng Xiao",
"Chen Chen"
]
| To enhance the controllability of text-to-image diffusion models, existing efforts like ControlNet incorporated image-based conditional controls. In this paper, we reveal that existing methods still face significant challenges in generating images that align with the image conditional controls. To this end, we propose ControlNet++, a novel approach that improves controllable generation by explicitly optimizing pixel-level cycle consistency between generated images and conditional controls. Specifically, for an input conditional control, we use a pre-trained discriminative reward model to extract the corresponding condition of the generated images, and then optimize the consistency loss between the input conditional control and extracted condition. A straightforward implementation would be generating images from random noises and then calculating the consistency loss, but such an approach requires storing gradients for multiple sampling timesteps, leading to considerable time and memory costs. To address this, we introduce an efficient reward strategy that deliberately disturbs the input images by adding noise, and then uses the single-step denoised images for reward fine-tuning. This avoids the extensive costs associated with image sampling, allowing for more efficient reward fine-tuning. Extensive experiments show that ControlNet++ significantly improves controllability under various conditional controls. For example, it achieves improvements over ControlNet by 7.9% mIoU, 13.4% SSIM, and 7.6% RMSE, respectively, for segmentation mask, line-art edge, and depth conditions. |
|
2024-04-12T00:00:00 | 2404.07972 | OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments | [
"Tianbao Xie",
"Danyang Zhang",
"Jixuan Chen",
"Xiaochuan Li",
"Siheng Zhao",
"Ruisheng Cao",
"Toh Jing Hua",
"Zhoujun Cheng",
"Dongchan Shin",
"Fangyu Lei",
"Yitao Liu",
"Yiheng Xu",
"Shuyan Zhou",
"Silvio Savarese",
"Caiming Xiong",
"Victor Zhong",
"Tao Yu"
]
| https://github.com/xlang-ai/OSWorld | Autonomous agents that accomplish complex computer tasks with minimal human interventions have the potential to transform human-computer interaction, significantly enhancing accessibility and productivity. However, existing benchmarks either lack an interactive environment or are limited to environments specific to certain applications or domains, failing to reflect the diverse and complex nature of real-world computer use, thereby limiting the scope of tasks and agent scalability. To address this issue, we introduce OSWorld, the first-of-its-kind scalable, real computer environment for multimodal agents, supporting task setup, execution-based evaluation, and interactive learning across various operating systems such as Ubuntu, Windows, and macOS. OSWorld can serve as a unified, integrated computer environment for assessing open-ended computer tasks that involve arbitrary applications. Building upon OSWorld, we create a benchmark of 369 computer tasks involving real web and desktop apps in open domains, OS file I/O, and workflows spanning multiple applications. Each task example is derived from real-world computer use cases and includes a detailed initial state setup configuration and a custom execution-based evaluation script for reliable, reproducible evaluation. Extensive evaluation of state-of-the-art LLM/VLM-based agents on OSWorld reveals significant deficiencies in their ability to serve as computer assistants. While humans can accomplish over 72.36% of the tasks, the best model achieves only 12.24% success, primarily struggling with GUI grounding and operational knowledge. Comprehensive analysis using OSWorld provides valuable insights for developing multimodal generalist agents that were not possible with previous benchmarks. Our code, environment, baseline models, and data are publicly available at https://os-world.github.io. |
2024-04-12T00:00:00 | 2404.07979 | LLoCO: Learning Long Contexts Offline | [
"Sijun Tan",
"Xiuyu Li",
"Shishir Patil",
"Ziyang Wu",
"Tianjun Zhang",
"Kurt Keutzer",
"Joseph E. Gonzalez",
"Raluca Ada Popa"
]
| https://github.com/jeffreysijuntan/lloco | Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning. Our method enables an LLM to create a concise representation of the original context and efficiently retrieve relevant information to answer questions accurately. We introduce LLoCO, a technique that combines context compression, retrieval, and parameter-efficient finetuning using LoRA. Our approach extends the effective context window of a 4k token LLaMA2-7B model to handle up to 128k tokens. We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning while using 30× fewer tokens during inference. LLoCO achieves up to 7.62× speed-up and substantially reduces the cost of long document question answering, making it a promising solution for efficient long context processing. Our code is publicly available at https://github.com/jeffreysijuntan/lloco. |
2024-04-12T00:00:00 | 2404.07503 | Best Practices and Lessons Learned on Synthetic Data for Language Models | [
"Ruibo Liu",
"Jerry Wei",
"Fangyu Liu",
"Chenglei Si",
"Yanzhe Zhang",
"Jinmeng Rao",
"Steven Zheng",
"Daiyi Peng",
"Diyi Yang",
"Denny Zhou",
"Andrew M. Dai"
]
| The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models. |
|
2024-04-12T00:00:00 | 2404.07821 | Sparse Laneformer | [
"Ji Liu",
"Zifeng Zhang",
"Mingjie Lu",
"Hongyang Wei",
"Dong Li",
"Yile Xie",
"Jinzhang Peng",
"Lu Tian",
"Ashish Sirasao",
"Emad Barsoum"
]
| Lane detection is a fundamental task in autonomous driving, and has achieved great progress as deep learning emerges. Previous anchor-based methods often design dense anchors, which highly depend on the training dataset and remain fixed during inference. We analyze that dense anchors are not necessary for lane detection, and propose a transformer-based lane detection framework based on a sparse anchor mechanism. To this end, we generate sparse anchors with position-aware lane queries and angle queries instead of traditional explicit anchors. We adopt Horizontal Perceptual Attention (HPA) to aggregate the lane features along the horizontal direction, and adopt Lane-Angle Cross Attention (LACA) to perform interactions between lane queries and angle queries. We also propose Lane Perceptual Attention (LPA) based on deformable cross attention to further refine the lane predictions. Our method, named Sparse Laneformer, is easy-to-implement and end-to-end trainable. Extensive experiments demonstrate that Sparse Laneformer performs favorably against the state-of-the-art methods, e.g., surpassing Laneformer by 3.0% F1 score and O2SFormer by 0.7% F1 score with fewer MACs on CULane with the same ResNet-34 backbone. |
|
2024-04-12T00:00:00 | 2404.07724 | Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models | [
"Tuomas Kynkäänniemi",
"Miika Aittala",
"Tero Karras",
"Samuli Laine",
"Timo Aila",
"Jaakko Lehtinen"
]
| Guidance is a crucial technique for extracting the best performance out of image-generating diffusion models. Traditionally, a constant guidance weight has been applied throughout the sampling chain of an image. We show that guidance is clearly harmful toward the beginning of the chain (high noise levels), largely unnecessary toward the end (low noise levels), and only beneficial in the middle. We thus restrict it to a specific range of noise levels, improving both the inference speed and result quality. This limited guidance interval improves the record FID in ImageNet-512 significantly, from 1.81 to 1.40. We show that it is quantitatively and qualitatively beneficial across different sampler parameters, network architectures, and datasets, including the large-scale setting of Stable Diffusion XL. We thus suggest exposing the guidance interval as a hyperparameter in all diffusion models that use guidance. |
|
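The proposal reduces to a small change in the sampling loop: apply classifier-free guidance only while the noise level lies inside an interval, and take plain conditional steps elsewhere. The loop below is a schematic with a placeholder denoiser and sigma schedule, not a working diffusion sampler.

```python
# Guidance applied only within a limited noise-level interval.
import torch

def denoise(x, sigma, cond):                  # placeholder for a real denoiser
    return x * 0.95 + 0.01 * (1.0 if cond else -1.0)

def sample(steps=40, guidance=3.0, lo=0.3, hi=5.0):
    sigmas = torch.logspace(1.0, -2.0, steps)          # ~10 .. 0.01
    x = torch.randn(4) * sigmas[0]
    for sigma in sigmas:
        d_cond = denoise(x, sigma, cond=True)
        if lo <= float(sigma) <= hi:                   # guide only at mid noise
            d_uncond = denoise(x, sigma, cond=False)
            d = d_uncond + guidance * (d_cond - d_uncond)
        else:                                          # plain conditional step
            d = d_cond
        x = d
    return x

print(sample())
```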
2024-04-12T00:00:00 | 2404.05902 | WILBUR: Adaptive In-Context Learning for Robust and Accurate Web Agents | [
"Michael Lutz",
"Arth Bohra",
"Manvel Saroyan",
"Artem Harutyunyan",
"Giovanni Campagna"
]
| In the realm of web agent research, achieving both generalization and accuracy remains a challenging problem. Due to high variance in website structure, existing approaches often fail. Moreover, existing fine-tuning and in-context learning techniques fail to generalize across multiple websites. We introduce Wilbur, an approach that uses a differentiable ranking model and a novel instruction synthesis technique to optimally populate a black-box large language model's prompt with task demonstrations from previous runs. To maximize end-to-end success rates, we also propose an intelligent backtracking mechanism that learns and recovers from its mistakes. Finally, we show that our ranking model can be trained on data from a generative auto-curriculum which samples representative goals from an LLM, runs the agent, and automatically evaluates it, with no manual annotation. Wilbur achieves state-of-the-art results on the WebVoyager benchmark, beating text-only models by 8% overall, and up to 36% on certain websites. On the same benchmark, Wilbur is within 5% of a strong multi-modal model despite only receiving textual inputs, and further analysis reveals a substantial number of failures are due to engineering challenges of operating the web. |
|
2024-04-12T00:00:00 | 2404.07904 | HGRN2: Gated Linear RNNs with State Expansion | [
"Zhen Qin",
"Songlin Yang",
"Weixuan Sun",
"Xuyang Shen",
"Dong Li",
"Weigao Sun",
"Yiran Zhong"
]
| https://github.com/OpenNLPLab/HGRN2 | Hierarchically gated linear RNN (HGRN, Qin et al. 2023) has demonstrated competitive training speed and performance in language modeling, while offering efficient inference. However, the recurrent state size of HGRN remains relatively small, which limits its expressiveness. To address this issue, inspired by linear attention, we introduce a simple outer-product-based state expansion mechanism so that the recurrent state size can be significantly enlarged without introducing any additional parameters. The linear attention form also allows for hardware-efficient training. Our extensive experiments verify the advantage of HGRN2 over HGRN1 in language modeling, image classification, and Long Range Arena. Our largest 3B HGRN2 model slightly outperforms Mamba and LLaMa Architecture Transformer for language modeling in a controlled experiment setting; and performs competitively with many open-source 3B models in downstream evaluation while using much fewer total training tokens. |
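The state-expansion idea can be pictured as a gated outer-product recurrence whose state is a d_k x d_v matrix rather than a vector. The recurrence below is a schematic in that spirit; it is not HGRN2's exact gating or parameterization.

```python
# Schematic gated outer-product recurrence with a matrix-valued state.
import torch

torch.manual_seed(0)
T, d_k, d_v = 6, 4, 8
q = torch.randn(T, d_k)
k = torch.randn(T, d_k)
v = torch.randn(T, d_v)
f = torch.sigmoid(torch.randn(T, d_k))       # per-channel forget gates in (0,1)

S = torch.zeros(d_k, d_v)                    # matrix-valued recurrent state
outputs = []
for t in range(T):
    # forget along the key dimension, then write the new outer product
    S = f[t].unsqueeze(-1) * S + torch.outer((1 - f[t]) * k[t], v[t])
    outputs.append(q[t] @ S)                 # read out with the query
print(torch.stack(outputs).shape)            # torch.Size([6, 8])
```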
2024-04-15T00:00:00 | 2404.08634 | Pre-training Small Base LMs with Fewer Tokens | [
"Sunny Sanyal",
"Sujay Sanghavi",
"Alexandros G. Dimakis"
]
| https://github.com/sanyalsunny111/LLM-Inheritune | We study the effectiveness of a simple approach to develop a small base language model (LM) starting from an existing large base LM: first inherit a few transformer blocks from the larger LM, and then train this smaller model on a very small subset (0.1%) of the raw pretraining data of the larger model. We call our simple recipe Inheritune and first demonstrate it for building a small base LM with 1.5B parameters using 1B tokens (and a starting few layers of a larger LM of 3B parameters); we do this using a single A6000 GPU for less than half a day. Across 9 diverse evaluation datasets as well as the MMLU benchmark, the resulting model compares favorably to publicly available base models of 1B-2B size, some of which have been trained using 50-1000 times more tokens. We investigate Inheritune in a slightly different setting where we train small LMs utilizing larger LMs and their full pre-training dataset. Here we show that smaller LMs trained utilizing some of the layers of GPT2-medium (355M) and GPT-2-large (770M) can effectively match the val loss of their bigger counterparts when trained from scratch for the same number of training steps on the OpenWebText dataset with 9B tokens. We analyze our recipe with extensive experiments and demonstrate its efficacy in diverse settings. Our code is available at https://github.com/sanyalsunny111/LLM-Inheritune. |
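The recipe itself is a few lines: copy the first k transformer blocks of the larger model into the smaller one, then continue training on a small data subset. The sketch uses generic nn.TransformerEncoder stacks as stand-ins for real LMs, with arbitrary sizes.

```python
# Inherit the first blocks of a larger stack to initialize a smaller one.
import torch.nn as nn

def make_stack(num_layers, d_model=64, nhead=4):
    layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=128,
                                       batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)

large = make_stack(num_layers=12)            # stand-in for the large base LM
small = make_stack(num_layers=4)             # smaller model to initialize

# Copy the first 4 blocks; the small model would then be trained further on a
# small subset of the original pretraining data.
for small_layer, large_layer in zip(small.layers, large.layers[:4]):
    small_layer.load_state_dict(large_layer.state_dict())
print(sum(p.numel() for p in small.parameters()), "parameters in the small model")
```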
2024-04-15T00:00:00 | 2404.08252 | MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance | [
"Yuqun Wu",
"Jae Yong Lee",
"Chuhang Zou",
"Shenlong Wang",
"Derek Hoiem"
]
| https://github.com/yuqunw/monopatch_nerf | The latest regularized Neural Radiance Field (NeRF) approaches produce poor geometry and view extrapolation for multiview stereo (MVS) benchmarks such as ETH3D. In this paper, we aim to create 3D models that provide accurate geometry and view synthesis, partially closing the large geometric performance gap between NeRF and traditional MVS methods. We propose a patch-based approach that effectively leverages monocular surface normal and relative depth predictions. The patch-based ray sampling also enables the appearance regularization of normalized cross-correlation (NCC) and structural similarity (SSIM) between randomly sampled virtual and training views. We further show that "density restrictions" based on sparse structure-from-motion points can help greatly improve geometric accuracy with a slight drop in novel view synthesis metrics. Our experiments show 4x the performance of RegNeRF and 8x that of FreeNeRF on average F1@2cm for ETH3D MVS benchmark, suggesting a fruitful research direction to improve the geometric accuracy of NeRF-based models, and sheds light on a potential future approach to enable NeRF-based optimization to eventually outperform traditional MVS. |
2024-04-15T00:00:00 | 2404.08540 | On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation | [
"Agneet Chatterjee",
"Tejas Gokhale",
"Chitta Baral",
"Yezhou Yang"
]
| https://github.com/agneet42/robustness_depth_lang | Recent advances in monocular depth estimation have been made by incorporating natural language as additional guidance. Although yielding impressive results, the impact of the language prior, particularly in terms of generalization and robustness, remains unexplored. In this paper, we address this gap by quantifying the impact of this prior and introduce methods to benchmark its effectiveness across various settings. We generate "low-level" sentences that convey object-centric, three-dimensional spatial relationships, incorporate them as additional language priors and evaluate their downstream impact on depth estimation. Our key finding is that current language-guided depth estimators perform optimally only with scene-level descriptions and counter-intuitively fare worse with low level descriptions. Despite leveraging additional data, these methods are not robust to directed adversarial attacks and decline in performance with an increase in distribution shift. Finally, to provide a foundation for future research, we identify points of failures and offer insights to better understand these shortcomings. With an increasing number of methods using language for depth estimation, our findings highlight the opportunities and pitfalls that require careful consideration for effective deployment in real-world settings |
2024-04-15T00:00:00 | 2404.08636 | Probing the 3D Awareness of Visual Foundation Models | [
"Mohamed El Banani",
"Amit Raj",
"Kevis-Kokitsi Maninis",
"Abhishek Kar",
"Yuanzhen Li",
"Michael Rubinstein",
"Deqing Sun",
"Leonidas Guibas",
"Justin Johnson",
"Varun Jampani"
]
| https://github.com/mbanani/probe3d | Recent advances in large-scale pretraining have yielded visual foundation models with strong capabilities. Not only can recent models generalize to arbitrary images for their training task, their intermediate representations are useful for other visual tasks such as detection and segmentation. Given that such models can classify, delineate, and localize objects in 2D, we ask whether they also represent their 3D structure? In this work, we analyze the 3D awareness of visual foundation models. We posit that 3D awareness implies that representations (1) encode the 3D structure of the scene and (2) consistently represent the surface across views. We conduct a series of experiments using task-specific probes and zero-shot inference procedures on frozen features. Our experiments reveal several limitations of the current models. Our code and analysis can be found at https://github.com/mbanani/probe3d. |
2024-04-15T00:00:00 | 2404.08639 | COCONut: Modernizing COCO Segmentation | [
"Xueqing Deng",
"Qihang Yu",
"Peng Wang",
"Xiaohui Shen",
"Liang-Chieh Chen"
]
| https://github.com/bytedance/coconut_cvpr2024 | In recent decades, the vision community has witnessed remarkable progress in visual recognition, partially owing to advancements in dataset benchmarks. Notably, the established COCO benchmark has propelled the development of modern detection and segmentation systems. However, the COCO segmentation benchmark has seen comparatively slow improvement over the last decade. Originally equipped with coarse polygon annotations for thing instances, it gradually incorporated coarse superpixel annotations for stuff regions, which were subsequently heuristically amalgamated to yield panoptic segmentation annotations. These annotations, executed by different groups of raters, have resulted not only in coarse segmentation masks but also in inconsistencies between segmentation types. In this study, we undertake a comprehensive reevaluation of the COCO segmentation annotations. By enhancing the annotation quality and expanding the dataset to encompass 383K images with more than 5.18M panoptic masks, we introduce COCONut, the COCO Next Universal segmenTation dataset. COCONut harmonizes segmentation annotations across semantic, instance, and panoptic segmentation with meticulously crafted high-quality masks, and establishes a robust benchmark for all segmentation tasks. To our knowledge, COCONut stands as the inaugural large-scale universal segmentation dataset, verified by human raters. We anticipate that the release of COCONut will significantly contribute to the community's ability to assess the progress of novel neural networks. |
2024-04-15T00:00:00 | 2404.08197 | Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies | [
"Zichao Li",
"Cihang Xie",
"Ekin Dogus Cubuk"
]
| This paper investigates the performance of the Contrastive Language-Image Pre-training (CLIP) when scaled down to limited computation budgets. We explore CLIP along three dimensions: data, architecture, and training strategies. With regards to data, we demonstrate the significance of high-quality training data and show that a smaller dataset of high-quality data can outperform a larger dataset with lower quality. We also examine how model performance varies with different dataset sizes, suggesting that smaller ViT models are better suited for smaller datasets, while larger models perform better on larger datasets with fixed compute. Additionally, we provide guidance on when to choose a CNN-based architecture or a ViT-based architecture for CLIP training. We compare four CLIP training strategies - SLIP, FLIP, CLIP, and CLIP+Data Augmentation - and show that the choice of training strategy depends on the available compute resource. Our analysis reveals that CLIP+Data Augmentation can achieve comparable performance to CLIP using only half of the training data. This work provides practical insights into how to effectively train and deploy CLIP models, making them more accessible and affordable for practical use in various applications. |
|
2024-04-15T00:00:00 | 2404.08495 | Dataset Reset Policy Optimization for RLHF | [
"Jonathan D. Chang",
"Wenhao Shan",
"Owen Oertell",
"Kianté Brantley",
"Dipendra Misra",
"Jason D. Lee",
"Wen Sun"
]
| https://github.com/Cornell-RL/drpo | Reinforcement Learning (RL) from Human Preference-based feedback is a popular paradigm for fine-tuning generative models, which has produced impressive models such as GPT-4 and Claude3 Opus. This framework often consists of two steps: learning a reward model from an offline preference dataset followed by running online RL to optimize the learned reward model. In this work, leveraging the idea of reset, we propose a new RLHF algorithm with provable guarantees. Motivated by the fact that the offline preference dataset provides informative states (i.e., data that is preferred by the labelers), our new algorithm, Dataset Reset Policy Optimization (DR-PO), integrates the existing offline preference dataset into the online policy training procedure via dataset reset: it directly resets the policy optimizer to the states in the offline dataset, instead of always starting from the initial state distribution. In theory, we show that DR-PO learns to perform at least as well as any policy that is covered by the offline dataset under general function approximation with finite sample complexity. In experiments, we demonstrate that on both the TL;DR summarization and the Anthropic Helpful Harmful (HH) dataset, the generation from DR-PO is better than that from Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO), under the metric of GPT4 win-rate. Code for this work can be found at https://github.com/Cornell-RL/drpo. |
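The dataset-reset idea can be caricatured as a change to where rollouts start: with some probability, the policy is reset to a state drawn from the offline preference data instead of the initial state distribution. Everything in the toy below (states, reward, reset probability) is a placeholder, not the paper's RLHF pipeline.

```python
# Toy rollout collection with dataset resets mixed into the start states.
import random

random.seed(0)
offline_states = ["prompt A + preferred partial response",
                  "prompt B + preferred partial response",
                  "prompt C + preferred partial response"]

def rollout_from(state):
    # placeholder: generate a continuation and score it with a reward model
    return state + " ... continuation", random.random()

def collect_batch(batch_size=4, reset_prob=0.5):
    batch = []
    for _ in range(batch_size):
        if random.random() < reset_prob:            # reset to an offline state
            start = random.choice(offline_states)
        else:                                       # ordinary initial state
            start = "prompt sampled from the initial distribution"
        batch.append(rollout_from(start))
    return batch

for traj, reward in collect_batch():
    print(f"{reward:.2f}  {traj[:45]}")
```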
2024-04-16T00:00:00 | 2404.09956 | Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization | [
"Navonil Majumder",
"Chia-Yu Hung",
"Deepanway Ghosal",
"Wei-Ning Hsu",
"Rada Mihalcea",
"Soujanya Poria"
]
| https://github.com/declare-lab/tango | Generative multimodal content is increasingly prevalent in much of the content creation arena, as it has the potential to allow artists and media personnel to create pre-production mockups by quickly bringing their ideas to life. The generation of audio from text prompts is an important aspect of such processes in the music and film industry. Many of the recent diffusion-based text-to-audio models focus on training increasingly sophisticated diffusion models on a large set of datasets of prompt-audio pairs. These models do not explicitly focus on the presence of concepts or events and their temporal ordering in the output audio with respect to the input prompt. Our hypothesis is that focusing on these aspects of audio generation could improve performance in the presence of limited data. As such, in this work, using an existing text-to-audio model Tango, we synthetically create a preference dataset where each prompt has a winner audio output and some loser audio outputs for the diffusion model to learn from. The loser outputs, in theory, have some concepts from the prompt missing or in an incorrect order. We fine-tune the publicly available Tango text-to-audio model using diffusion-DPO (direct preference optimization) loss on our preference dataset and show that it leads to improved audio output over Tango and AudioLDM2, in terms of both automatic- and manual-evaluation metrics. |
2024-04-16T00:00:00 | 2404.09967 | Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model | [
"Han Lin",
"Jaemin Cho",
"Abhay Zala",
"Mohit Bansal"
]
| https://github.com/HL-hanlin/Ctrl-Adapter | ControlNets are widely used for adding spatial control in image generation with different conditions, such as depth maps, canny edges, and human poses. However, there are several challenges when leveraging the pretrained image ControlNets for controlled video generation. First, pretrained ControlNet cannot be directly plugged into new backbone models due to the mismatch of feature spaces, and the cost of training ControlNets for new backbones is a big burden. Second, ControlNet features for different frames might not effectively handle the temporal consistency. To address these challenges, we introduce Ctrl-Adapter, an efficient and versatile framework that adds diverse controls to any image/video diffusion models, by adapting pretrained ControlNets (and improving temporal alignment for videos). Ctrl-Adapter provides diverse capabilities including image control, video control, video control with sparse frames, multi-condition control, compatibility with different backbones, adaptation to unseen control conditions, and video editing. In Ctrl-Adapter, we train adapter layers that fuse pretrained ControlNet features to different image/video diffusion models, while keeping the parameters of the ControlNets and the diffusion models frozen. Ctrl-Adapter consists of temporal and spatial modules so that it can effectively handle the temporal consistency of videos. We also propose latent skipping and inverse timestep sampling for robust adaptation and sparse control. Moreover, Ctrl-Adapter enables control from multiple conditions by simply taking the (weighted) average of ControlNet outputs. With diverse image/video diffusion backbones (SDXL, Hotshot-XL, I2VGen-XL, and SVD), Ctrl-Adapter matches ControlNet for image control and outperforms all baselines for video control (achieving the SOTA accuracy on the DAVIS 2017 dataset) with significantly lower computational costs (less than 10 GPU hours). |
2024-04-16T00:00:00 | 2404.09990 | HQ-Edit: A High-Quality Dataset for Instruction-based Image Editing | [
"Mude Hui",
"Siwei Yang",
"Bingchen Zhao",
"Yichun Shi",
"Heng Wang",
"Peng Wang",
"Yuyin Zhou",
"Cihang Xie"
]
| https://github.com/UCSC-VLAA/HQ-Edit | This study introduces HQ-Edit, a high-quality instruction-based image editing dataset with around 200,000 edits. Unlike prior approaches relying on attribute guidance or human feedback on building datasets, we devise a scalable data collection pipeline leveraging advanced foundation models, namely GPT-4V and DALL-E 3. To ensure its high quality, diverse examples are first collected online, expanded, and then used to create high-quality diptychs featuring input and output images with detailed text prompts, followed by precise alignment ensured through post-processing. In addition, we propose two evaluation metrics, Alignment and Coherence, to quantitatively assess the quality of image edit pairs using GPT-4V. HQ-Edit's high-resolution images, rich in detail and accompanied by comprehensive editing prompts, substantially enhance the capabilities of existing image editing models. For example, an HQ-Edit finetuned InstructPix2Pix can attain state-of-the-art image editing performance, even surpassing those models fine-tuned with human-annotated data. The project page is https://thefllood.github.io/HQEdit_web. |
2024-04-16T00:00:00 | 2404.09656 | Learn Your Reference Model for Real Good Alignment | [
"Alexey Gorbatovski",
"Boris Shaposhnikov",
"Alexey Malakhov",
"Nikita Surnachev",
"Yaroslav Aksenov",
"Ian Maksimov",
"Nikita Balagansky",
"Daniil Gavrilov"
]
| The complexity of the alignment problem stems from the fact that existing methods are unstable. Researchers continuously invent various tricks to address this shortcoming. For instance, in the fundamental Reinforcement Learning From Human Feedback (RLHF) technique of Language Model alignment, in addition to reward maximization, the Kullback-Leibler divergence between the trainable policy and the SFT policy is minimized. This addition prevents the model from being overfitted to the Reward Model (RM) and generating texts that are out-of-domain for the RM. The Direct Preference Optimization (DPO) method reformulates the optimization task of RLHF and eliminates the Reward Model while tacitly maintaining the requirement for the policy to be close to the SFT policy. In our paper, we argue that this implicit limitation in the DPO method leads to sub-optimal results. We propose a new method called Trust Region DPO (TR-DPO), which updates the reference policy during training. With such a straightforward update, we demonstrate the effectiveness of TR-DPO against DPO on the Anthropic HH and TLDR datasets. We show that TR-DPO outperforms DPO by up to 19%, measured by automatic evaluation with GPT-4. The new alignment approach that we propose allows us to improve the quality of models across several parameters at once, such as coherence, correctness, level of detail, helpfulness, and harmlessness. |
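A minimal sketch of DPO with a reference policy that is refreshed during training, the core idea behind TR-DPO. The soft-update weight, the update interval, and the toy linear "policies" are illustrative assumptions; in practice the log-probabilities come from scoring preference pairs with the language model and a frozen copy of it.

```python
import copy
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective on (winner, loser) sequence log-probabilities."""
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

@torch.no_grad()
def update_reference(policy, reference, alpha=0.6):
    # Soft update: blend current policy weights into the reference policy.
    # alpha=1.0 would copy the policy outright (hard update). The weight and the
    # update interval below are illustrative assumptions.
    for p_ref, p_pol in zip(reference.parameters(), policy.parameters()):
        p_ref.mul_(1.0 - alpha).add_(alpha * p_pol)

# Toy setup: linear scorers stand in for the language model's sequence log-probs.
policy = torch.nn.Linear(16, 1)
reference = copy.deepcopy(policy)
for p in reference.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(1, 201):
    feats_w, feats_l = torch.randn(8, 16), torch.randn(8, 16)   # features of chosen/rejected pairs
    logp_w = policy(feats_w).squeeze(-1)
    logp_l = policy(feats_l).squeeze(-1)
    with torch.no_grad():
        ref_w = reference(feats_w).squeeze(-1)
        ref_l = reference(feats_l).squeeze(-1)
    loss = dpo_loss(logp_w, logp_l, ref_w, ref_l)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 100 == 0:          # periodically refresh the reference (the TR-DPO idea)
        update_reference(policy, reference)
```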
|
2024-04-16T00:00:00 | 2404.09995 | Taming Latent Diffusion Model for Neural Radiance Field Inpainting | [
"Chieh Hubert Lin",
"Changil Kim",
"Jia-Bin Huang",
"Qinbo Li",
"Chih-Yao Ma",
"Johannes Kopf",
"Ming-Hsuan Yang",
"Hung-Yu Tseng"
]
| Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images. Despite some recent work showing preliminary success in editing a reconstructed NeRF with diffusion prior, such methods still struggle to synthesize reasonable geometry in completely uncovered regions. One major reason is the high diversity of synthetic contents from the diffusion model, which hinders the radiance field from converging to a crisp and deterministic geometry. Moreover, applying latent diffusion models on real data often yields a textural shift incoherent to the image condition due to auto-encoding errors. These two problems are further reinforced with the use of pixel-distance losses. To address these issues, we propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked adversarial training. During the analyses, we also found the commonly used pixel and perceptual losses are harmful in the NeRF inpainting task. Through rigorous experiments, our framework yields state-of-the-art NeRF inpainting results on various real-world scenes. Project page: https://hubert0527.github.io/MALD-NeRF |
|
2024-04-16T00:00:00 | 2404.09173 | TransformerFAM: Feedback attention is working memory | [
"Dongseong Hwang",
"Weiran Wang",
"Zhuoyuan Huo",
"Khe Chai Sim",
"Pedro Moreno Mengibar"
]
| While Transformers have revolutionized deep learning, their quadratic attention complexity hinders their ability to process infinitely long inputs. We propose Feedback Attention Memory (FAM), a novel Transformer architecture that leverages a feedback loop to enable the network to attend to its own latent representations. This design fosters the emergence of working memory within the Transformer, allowing it to process indefinitely long sequences. TransformerFAM requires no additional weights, enabling seamless integration with pre-trained models. Our experiments show that TransformerFAM significantly improves Transformer performance on long-context tasks across various model sizes (1B, 8B, and 24B). These results showcase the potential to empower Large Language Models (LLMs) to process sequences of unlimited length. |
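A toy illustration of the feedback-memory idea: a memory token is prepended to each segment, updated from the block's own output, and carried into the next segment, so information can persist across an arbitrarily long input. This simplified single-block version is an assumption for illustration, not the TransformerFAM architecture.

```python
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    """Toy feedback-attention block: a memory token is prepended to every segment,
    refreshed from the block's own output, and carried into the next segment."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, segments, memory):
        outputs = []
        for seg in segments:                          # each segment: (B, S, D)
            x = torch.cat([memory, seg], dim=1)       # prepend the feedback memory token
            y, _ = self.attn(x, x, x)
            y = self.norm(x + y)
            memory = y[:, : memory.size(1)]           # updated memory feeds the next segment
            outputs.append(y[:, memory.size(1):])
        return torch.cat(outputs, dim=1), memory

block = FeedbackBlock()
tokens = torch.randn(1, 256, 64)
segments = tokens.split(64, dim=1)                    # process a long input segment by segment
memory = torch.zeros(1, 1, 64)                        # the "working memory" state
out, memory = block(segments, memory)
print(out.shape)                                      # torch.Size([1, 256, 64])
```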
|
2024-04-16T00:00:00 | 2404.09833 | Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video | [
"Hongchi Xia",
"Zhi-Hao Lin",
"Wei-Chiu Ma",
"Shenlong Wang"
]
| https://github.com/video2game/video2game | Creating high-quality and interactive virtual environments, such as games and simulators, often involves complex and costly manual modeling processes. In this paper, we present Video2Game, a novel approach that automatically converts videos of real-world scenes into realistic and interactive game environments. At the heart of our system are three core components:(i) a neural radiance fields (NeRF) module that effectively captures the geometry and visual appearance of the scene; (ii) a mesh module that distills the knowledge from NeRF for faster rendering; and (iii) a physics module that models the interactions and physical dynamics among the objects. By following the carefully designed pipeline, one can construct an interactable and actionable digital replica of the real world. We benchmark our system on both indoor and large-scale outdoor scenes. We show that we can not only produce highly-realistic renderings in real-time, but also build interactive games on top. |
2024-04-16T00:00:00 | 2404.08801 | Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length | [
"Xuezhe Ma",
"Xiaomeng Yang",
"Wenhan Xiong",
"Beidi Chen",
"Lili Yu",
"Hao Zhang",
"Jonathan May",
"Luke Zettlemoyer",
"Omer Levy",
"Chunting Zhou"
]
| https://github.com/XuezheMax/megalodon | The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and state space models exist, they empirically underperform Transformers in pretraining efficiency and downstream task accuracy. We introduce Megalodon, a neural architecture for efficient sequence modeling with unlimited context length. Megalodon inherits the architecture of Mega (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability and stability, including complex exponential moving average (CEMA), timestep normalization layer, normalized attention mechanism and pre-norm with two-hop residual configuration. In a controlled head-to-head comparison with Llama2, Megalodon achieves better efficiency than Transformers at the scale of 7 billion parameters and 2 trillion training tokens. Megalodon reaches a training loss of 1.70, landing mid-way between Llama2-7B (1.75) and 13B (1.67). Code: https://github.com/XuezheMax/megalodon |
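A toy version of the complex exponential moving average (CEMA) recurrence, one of the components listed above. Megalodon's CEMA is multi-dimensional, learned, and computed in parallel; this sequential scalar-decay sketch only illustrates the complex-valued decay and is not the paper's implementation.

```python
import torch

def complex_ema(x, alpha=0.9, theta=0.1):
    """Toy complex exponential moving average over a sequence.

    x: (T, D) real-valued inputs. The recurrence keeps a complex hidden state
        h_t = alpha * x_t + (1 - alpha) * exp(i*theta) * h_{t-1}
    and returns its real part. The scalar decay and fixed alpha/theta are
    simplifying assumptions for illustration only.
    """
    decay = (1.0 - alpha) * torch.exp(torch.tensor(theta * 1j, dtype=torch.cfloat))
    h = torch.zeros(x.size(1), dtype=torch.cfloat)
    out = []
    for t in range(x.size(0)):
        h = alpha * x[t].to(torch.cfloat) + decay * h
        out.append(h.real.clone())
    return torch.stack(out)

y = complex_ema(torch.randn(128, 32))
print(y.shape)  # torch.Size([128, 32])
```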
2024-04-16T00:00:00 | 2404.09458 | CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting | [
"Xiangrui Liu",
"Xinju Wu",
"Pingping Zhang",
"Shiqi Wang",
"Zhu Li",
"Sam Kwong"
]
| Gaussian splatting, renowned for its exceptional rendering quality and efficiency, has emerged as a prominent technique in 3D scene representation. However, the substantial data volume of Gaussian splatting impedes its practical utility in real-world applications. Herein, we propose an efficient 3D scene representation, named Compressed Gaussian Splatting (CompGS), which harnesses compact Gaussian primitives for faithful 3D scene modeling with a remarkably reduced data size. To ensure the compactness of Gaussian primitives, we devise a hybrid primitive structure that captures predictive relationships between each other. Then, we exploit a small set of anchor primitives for prediction, allowing the majority of primitives to be encapsulated into highly compact residual forms. Moreover, we develop a rate-constrained optimization scheme to eliminate redundancies within such hybrid primitives, steering our CompGS towards an optimal trade-off between bitrate consumption and representation efficacy. Experimental results show that the proposed CompGS significantly outperforms existing methods, achieving superior compactness in 3D scene representation without compromising model accuracy and rendering quality. Our code will be released on GitHub for further research. |
|
2024-04-16T00:00:00 | 2404.09937 | Compression Represents Intelligence Linearly | [
"Yuzhen Huang",
"Jinghan Zhang",
"Zifei Shan",
"Junxian He"
]
| There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the success of large language models (LLMs): the development of more advanced language models is essentially enhancing compression which facilitates intelligence. Despite such appealing discussions, little empirical evidence is present for the interplay between compression and intelligence. In this work, we examine their relationship in the context of LLMs, treating LLMs as data compressors. Given the abstract concept of "intelligence", we adopt the average downstream benchmark scores as a surrogate, specifically targeting intelligence related to knowledge and commonsense, coding, and mathematical reasoning. Across 12 benchmarks, our study brings together 30 public LLMs that originate from diverse organizations. Remarkably, we find that LLMs' intelligence -- reflected by average benchmark scores -- almost linearly correlates with their ability to compress external text corpora. These results provide concrete evidence supporting the belief that superior compression indicates greater intelligence. Furthermore, our findings suggest that compression efficiency, as an unsupervised metric derived from raw text corpora, serves as a reliable evaluation measure that is linearly associated with the model capabilities. We open-source our compression datasets as well as our data collection pipelines to facilitate future researchers to assess compression properly. |
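The compression metric in question can be computed as bits per character from a model's summed negative log-likelihood over a shared corpus, then regressed against the average benchmark score (lower bits per character means better compression). The numbers below are hypothetical placeholders used only to show the computation; they are not measurements from the paper.

```python
import math
import numpy as np

def bits_per_character(total_nll_nats, num_characters):
    """Convert a summed negative log-likelihood (in nats) over a corpus into
    bits per character, the compression metric used as the x-axis."""
    return total_nll_nats / (math.log(2) * num_characters)

# Hypothetical measurements for a handful of models:
# (summed NLL in nats on a shared corpus, corpus size in characters, avg benchmark score)
models = [
    (2.10e8, 3.0e8, 41.5),
    (1.95e8, 3.0e8, 48.2),
    (1.83e8, 3.0e8, 55.9),
    (1.70e8, 3.0e8, 63.4),
]

bpc = np.array([bits_per_character(nll, n) for nll, n, _ in models])
score = np.array([s for _, _, s in models])

# Ordinary least squares fit: benchmark score as a linear function of compression.
slope, intercept = np.polyfit(bpc, score, deg=1)
pearson_r = np.corrcoef(bpc, score)[0, 1]
print(f"score ~= {slope:.1f} * bpc + {intercept:.1f}, Pearson r = {pearson_r:.3f}")
```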
|
2024-04-16T00:00:00 | 2404.08856 | On Speculative Decoding for Multimodal Large Language Models | [
"Mukul Gagrani",
"Raghavv Goel",
"Wonseok Jeon",
"Junyoung Park",
"Mingu Lee",
"Christopher Lott"
]
| Inference with Multimodal Large Language Models (MLLMs) is slow due to their large-language-model backbone which suffers from memory bandwidth bottleneck and generates tokens auto-regressively. In this paper, we explore the application of speculative decoding to enhance the inference efficiency of MLLMs, specifically the LLaVA 7B model. We show that a language-only model can serve as a good draft model for speculative decoding with LLaVA 7B, bypassing the need for image tokens and their associated processing components from the draft model. Our experiments across three different tasks show that speculative decoding can achieve a memory-bound speedup of up to 2.37x using a 115M parameter language model that we trained from scratch. Additionally, we introduce a compact LLaVA draft model incorporating an image adapter, which shows marginal performance gains in image captioning while maintaining comparable results in other tasks. |
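A simplified sketch of speculative decoding with a small draft model and greedy verification by the target model; it assumes HuggingFace-style causal LMs that return `.logits`. The greedy longest-agreeing-prefix acceptance rule is a simplification of the rejection-sampling scheme used in practice, and the draft here sees only text, mirroring the language-only draft idea.

```python
import torch

@torch.no_grad()
def greedy_speculative_decode(target, draft, input_ids, num_new_tokens=64, k=4):
    """Simplified speculative decoding: the draft proposes k tokens, the target
    verifies the whole block in one forward pass, and we keep the longest prefix
    on which both models agree, plus one token from the target."""
    ids = input_ids
    goal = input_ids.size(1) + num_new_tokens
    while ids.size(1) < goal:
        # 1) Draft proposes k tokens greedily (text-only draft, no image tokens).
        draft_ids = ids
        for _ in range(k):
            next_tok = draft(draft_ids).logits[:, -1].argmax(-1, keepdim=True)
            draft_ids = torch.cat([draft_ids, next_tok], dim=1)
        proposed = draft_ids[:, ids.size(1):]                            # (1, k)

        # 2) Target verifies the whole proposed block in a single forward pass.
        tgt_logits = target(draft_ids).logits[:, ids.size(1) - 1 : -1]   # predictions for the k proposed positions
        tgt_greedy = tgt_logits.argmax(-1)                               # (1, k)

        # 3) Keep the longest agreeing prefix, then append one target token.
        agree = (proposed == tgt_greedy)[0].long()
        n_accept = int(agree.cumprod(0).sum().item())
        ids = torch.cat([ids, proposed[:, :n_accept], tgt_greedy[:, n_accept:n_accept + 1]], dim=1)
    return ids

# Usage (hypothetical models): out = greedy_speculative_decode(mllm, small_lm, prompt_ids)
```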
|
2024-04-16T00:00:00 | 2404.09204 | TextHawk: Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models | [
"Ya-Qi Yu",
"Minghui Liao",
"Jihao Wu",
"Yongxin Liao",
"Xiaoyu Zheng",
"Wei Zeng"
]
| Multimodal Large Language Models (MLLMs) have shown impressive results on various multimodal tasks. However, most existing MLLMs are not well suited for document-oriented tasks, which require fine-grained image perception and information compression. In this paper, we present TextHawk, a MLLM that is specifically designed for document-oriented tasks, while preserving the general capabilities of MLLMs. TextHawk is aimed to explore efficient fine-grained perception by designing four dedicated components. Firstly, a ReSampling and ReArrangement (ReSA) module is proposed to reduce the redundancy in the document texts and lower the computational cost of the MLLM. We explore encoding the positions of each local feature by presenting Scalable Positional Embeddings (SPEs), which can preserve the scalability of various image sizes. A Query Proposal Network (QPN) is then adopted to initialize the queries dynamically among different sub-images. To further enhance the fine-grained visual perceptual ability of the MLLM, we design a Multi-Level Cross-Attention (MLCA) mechanism that captures the hierarchical structure and semantic relations of document images. Furthermore, we create a new instruction-tuning dataset for document-oriented tasks by enriching the multimodal document data with Gemini Pro. We conduct extensive experiments on both general and document-oriented MLLM benchmarks, and show that TextHawk outperforms the state-of-the-art methods, demonstrating its effectiveness and superiority in fine-grained document perception and general abilities. |
|
2024-04-17T00:00:00 | 2404.10179 | Scaling Instructable Agents Across Many Simulated Worlds | [
"SIMA Team",
"Maria Abi Raad",
"Arun Ahuja",
"Catarina Barros",
"Frederic Besse",
"Andrew Bolt",
"Adrian Bolton",
"Bethanie Brownfield",
"Gavin Buttimore",
"Max Cant",
"Sarah Chakera",
"Stephanie C. Y. Chan",
"Jeff Clune",
"Adrian Collister",
"Vikki Copeman",
"Alex Cullum",
"Ishita Dasgupta",
"Dario de Cesare",
"Julia Di Trapani",
"Yani Donchev",
"Emma Dunleavy",
"Martin Engelcke",
"Ryan Faulkner",
"Frankie Garcia",
"Charles Gbadamosi",
"Zhitao Gong",
"Lucy Gonzales",
"Karol Gregor",
"Arne Olav Hallingstad",
"Tim Harley",
"Sam Haves",
"Felix Hill",
"Ed Hirst",
"Drew A. Hudson",
"Steph Hughes-Fitt",
"Danilo J. Rezende",
"Mimi Jasarevic",
"Laura Kampis",
"Rosemary Ke",
"Thomas Keck",
"Junkyung Kim",
"Oscar Knagg",
"Kavya Kopparapu",
"Andrew Lampinen",
"Shane Legg",
"Alexander Lerchner",
"Marjorie Limont",
"Yulan Liu",
"Maria Loks-Thompson",
"Joseph Marino",
"Kathryn Martin Cussons",
"Loic Matthey",
"Siobhan Mcloughlin",
"Piermaria Mendolicchio",
"Hamza Merzic",
"Anna Mitenkova",
"Alexandre Moufarek",
"Valeria Oliveira",
"Yanko Oliveira",
"Hannah Openshaw",
"Renke Pan",
"Aneesh Pappu",
"Alex Platonov",
"Ollie Purkiss",
"David Reichert",
"John Reid",
"Pierre Harvey Richemond",
"Tyson Roberts",
"Giles Ruscoe",
"Jaume Sanchez Elias",
"Tasha Sandars",
"Daniel P. Sawyer",
"Tim Scholtes",
"Guy Simmons",
"Daniel Slater",
"Hubert Soyer",
"Heiko Strathmann",
"Peter Stys",
"Allison C. Tam",
"Denis Teplyashin",
"Tayfun Terzi",
"Davide Vercelli",
"Bojan Vujatovic",
"Marcus Wainwright",
"Jane X. Wang",
"Zhengdong Wang",
"Daan Wierstra",
"Duncan Williams",
"Nathaniel Wong",
"Sarah York",
"Nick Young"
]
| Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions, in order to accomplish complex tasks. The Scalable, Instructable, Multiworld Agent (SIMA) project tackles this by training agents to follow free-form instructions across a diverse range of virtual 3D environments, including curated research environments as well as open-ended, commercial video games. Our goal is to develop an instructable agent that can accomplish anything a human can do in any simulated 3D environment. Our approach focuses on language-driven generality while imposing minimal assumptions. Our agents interact with environments in real-time using a generic, human-like interface: the inputs are image observations and language instructions and the outputs are keyboard-and-mouse actions. This general approach is challenging, but it allows agents to ground language across many visually complex and semantically rich environments while also allowing us to readily run agents in new environments. In this paper we describe our motivation and goal, the initial progress we have made, and promising preliminary results on several diverse research environments and a variety of commercial video games. |
|
2024-04-17T00:00:00 | 2404.10301 | Long-form music generation with latent diffusion | [
"Zach Evans",
"Julian D. Parker",
"CJ Carr",
"Zack Zukowski",
"Josiah Taylor",
"Jordi Pons"
]
| Audio-based generative models for music have seen great strides recently, but so far have not managed to produce full-length music tracks with coherent musical structure. We show that by training a generative model on long temporal contexts it is possible to produce long-form music of up to 4m45s. Our model consists of a diffusion-transformer operating on a highly downsampled continuous latent representation (latent rate of 21.5Hz). It obtains state-of-the-art generations according to metrics on audio quality and prompt alignment, and subjective tests reveal that it produces full-length music with coherent structure. |
|
2024-04-19T00:00:00 | 2404.12390 | BLINK: Multimodal Large Language Models Can See but Not Perceive | [
"Xingyu Fu",
"Yushi Hu",
"Bangzheng Li",
"Yu Feng",
"Haoyu Wang",
"Xudong Lin",
"Dan Roth",
"Noah A. Smith",
"Wei-Chiu Ma",
"Ranjay Krishna"
]
| https://github.com/zeyofu/BLINK_Benchmark | We introduce Blink, a new benchmark for multimodal language models (LLMs) that focuses on core visual perception abilities not found in other evaluations. Most of the Blink tasks can be solved by humans "within a blink" (e.g., relative depth estimation, visual correspondence, forensics detection, and multi-view reasoning). However, we find these perception-demanding tasks cast significant challenges for current multimodal LLMs because they resist mediation through natural language. Blink reformats 14 classic computer vision tasks into 3,807 multiple-choice questions, paired with single or multiple images and visual prompting. While humans get 95.70% accuracy on average, Blink is surprisingly challenging for existing multimodal LLMs: even the best-performing GPT-4V and Gemini achieve accuracies of 51.26% and 45.72%, only 13.17% and 7.63% higher than random guessing, indicating that such perception abilities have not "emerged" yet in recent multimodal LLMs. Our analysis also highlights that specialist CV models could solve these problems much better, suggesting potential pathways for future improvements. We believe Blink will stimulate the community to help multimodal LLMs catch up with human-level visual perception. |
2024-04-19T00:00:00 | 2404.12387 | Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models | [
"Aitor Ormazabal",
"Che Zheng",
"Cyprien de Masson d'Autume",
"Dani Yogatama",
"Deyu Fu",
"Donovan Ong",
"Eric Chen",
"Eugenie Lamprecht",
"Hai Pham",
"Isaac Ong",
"Kaloyan Aleksiev",
"Lei Li",
"Matthew Henderson",
"Max Bain",
"Mikel Artetxe",
"Nishant Relan",
"Piotr Padlewski",
"Qi Liu",
"Ren Chen",
"Samuel Phua",
"Yazheng Yang",
"Yi Tay",
"Yuqi Wang",
"Zhongkai Zhu",
"Zhihui Xie"
]
| We introduce Reka Core, Flash, and Edge, a series of powerful multimodal language models trained from scratch by Reka. Reka models are able to process and reason with text, images, video, and audio inputs. This technical report discusses details of training some of these models and provides comprehensive evaluation results. We show that Reka Edge and Reka Flash are not only state-of-the-art but also outperform many much larger models, delivering outsized value for their respective compute class. Meanwhile, our most capable and largest model, Reka Core, approaches the best frontier models on both automatic evaluations and blind human evaluations. On image question answering benchmarks (e.g. MMMU, VQAv2), Core performs competitively to GPT4-V. Meanwhile, on multimodal chat, Core ranks as the second most preferred model under a blind third-party human evaluation setup, outperforming other models such as Claude 3 Opus. On text benchmarks, Core not only performs competitively to other frontier models on a set of well-established benchmarks (e.g. MMLU, GSM8K) but also outperforms GPT4-0613 on human evaluation. On video question answering (Perception-Test), Core outperforms Gemini Ultra. Models are shipped in production at http://chat.reka.ai . A showcase of non-cherry-picked qualitative examples can also be found at http://showcase.reka.ai . |
|
2024-04-19T00:00:00 | 2404.12347 | AniClipart: Clipart Animation with Text-to-Video Priors | [
"Ronghuan Wu",
"Wanchao Su",
"Kede Ma",
"Jing Liao"
]
| Clipart, a pre-made graphic art form, offers a convenient and efficient way of illustrating visual content. Traditional workflows to convert static clipart images into motion sequences are laborious and time-consuming, involving numerous intricate steps like rigging, key animation and in-betweening. Recent advancements in text-to-video generation hold great potential in resolving this problem. Nevertheless, direct application of text-to-video generation models often struggles to retain the visual identity of clipart images or generate cartoon-style motions, resulting in unsatisfactory animation outcomes. In this paper, we introduce AniClipart, a system that transforms static clipart images into high-quality motion sequences guided by text-to-video priors. To generate cartoon-style and smooth motion, we first define Bézier curves over keypoints of the clipart image as a form of motion regularization. We then align the motion trajectories of the keypoints with the provided text prompt by optimizing the Video Score Distillation Sampling (VSDS) loss, which encodes adequate knowledge of natural motion within a pretrained text-to-video diffusion model. With a differentiable As-Rigid-As-Possible shape deformation algorithm, our method can be end-to-end optimized while maintaining deformation rigidity. Experimental results show that the proposed AniClipart consistently outperforms existing image-to-video generation models, in terms of text-video alignment, visual identity preservation, and motion consistency. Furthermore, we showcase the versatility of AniClipart by adapting it to generate a broader array of animation formats, such as layered animation, which allows topological changes. |
|
2024-04-19T00:00:00 | 2404.11912 | TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding | [
"Hanshi Sun",
"Zhuoming Chen",
"Xinyu Yang",
"Yuandong Tian",
"Beidi Chen"
]
| https://github.com/Infini-AI-Lab/TriForce | With large language models (LLMs) widely deployed in long content generation recently, there has emerged an increasing demand for efficient long-sequence inference support. However, key-value (KV) cache, which is stored to avoid re-computation, has emerged as a critical bottleneck by growing linearly in size with the sequence length. Due to the auto-regressive nature of LLMs, the entire KV cache will be loaded for every generated token, resulting in low utilization of computational cores and high latency. While various compression methods for KV cache have been proposed to alleviate this issue, they suffer from degradation in generation quality. We introduce TriForce, a hierarchical speculative decoding system that is scalable to long sequence generation. This approach leverages the original model weights and dynamic sparse KV cache via retrieval as a draft model, which serves as an intermediate layer in the hierarchy and is further speculated by a smaller model to reduce its drafting latency. TriForce not only facilitates impressive speedups for Llama2-7B-128K, achieving up to a 2.31x speedup on an A100 GPU, but also showcases scalability in handling even longer contexts. For the offloading setting on two RTX 4090 GPUs, TriForce achieves 0.108 s/token, only half as slow as the auto-regressive baseline on an A100, and attains a 7.78x speedup on our optimized offloading system. Additionally, TriForce runs 4.86x faster than DeepSpeed-Zero-Inference on a single RTX 4090 GPU. TriForce's robustness is highlighted by its consistently outstanding performance across various temperatures. The code is available at https://github.com/Infini-AI-Lab/TriForce. |
2024-04-19T00:00:00 | 2404.12195 | OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data | [
"Chandeepa Dissanayake",
"Lahiru Lowe",
"Sachith Gunasekara",
"Yasiru Ratnayake"
]
| Instruction fine-tuning pretrained LLMs for diverse downstream tasks has demonstrated remarkable success and has captured the interest of both academics and practitioners. To ensure such fine-tuned LLMs align with human preferences, techniques such as RLHF and DPO have emerged. At the same time, there is increasing interest in smaller parameter counts for models. In this work, using OpenLLaMA 3Bv2 as a base model, we describe the recipe used to fine-tune the OpenBezoar family of models. In this recipe: We first generate synthetic instruction fine-tuning data using an open and commercially non-restrictive instruction fine-tuned variant of the Falcon-40B model under three schemes based on: LaMini-LM, WizardLM/Evol-Instruct (with databricks-dolly-15k as a seed dataset) and Orca (with the Flan Collection as a seed dataset), then filter these generations using GPT-4 as a human proxy. We then perform cost-effective QLoRA-based supervised fine-tuning sequentially with each scheme. The resulting checkpoint is further fine-tuned with a subset of the HH-RLHF dataset to minimize distribution shift prior to using the DPO loss to obtain the final checkpoint. Evaluation is done with the LM Eval Harness tasks/metrics as well as on MT-Bench using the "LLM-as-a-judge" framework with Claude 2.1, with the finding that the final checkpoint, "OpenBezoar-HH-RLHF-DPO", demonstrates superior performance over many models at the 3B parameter scale, even outperforming the top model in one of the categories on the Huggingface Open LLM Leaderboard. We release "OpenBezoar-SFT", "OpenBezoar-HH-RLHF-SFT", "OpenBezoar-HH-RLHF-DPO" checkpoints, alongside our generated datasets on HuggingFace at https://huggingface.co/collections/SurgeGlobal/open-bezoar-6620a24923e12127e9e2b9cc and our codebase at https://bitbucket.org/paladinanalytics/workspace/projects/OP. |
|
2024-04-19T00:00:00 | 2404.12253 | Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | [
"Ye Tian",
"Baolin Peng",
"Linfeng Song",
"Lifeng Jin",
"Dian Yu",
"Haitao Mi",
"Dong Yu"
]
| Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involve complex reasoning and planning. Recent work proposed advanced prompting techniques and the necessity of fine-tuning with high-quality data to augment LLMs' reasoning abilities. However, these approaches are inherently constrained by data availability and quality. In light of this, self-correction and self-learning emerge as viable solutions, employing strategies that allow LLMs to refine their outputs and learn from self-assessed rewards. Yet, the efficacy of LLMs in self-refining their responses, particularly in complex reasoning and planning tasks, remains dubious. In this paper, we introduce AlphaLLM for the self-improvement of LLMs, which integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop, thereby enhancing the capabilities of LLMs without additional annotations. Drawing inspiration from the success of AlphaGo, AlphaLLM addresses the unique challenges of combining MCTS with LLMs for self-improvement, including data scarcity, the vast search spaces of language tasks, and the subjective nature of feedback in language tasks. AlphaLLM is comprised of a prompt synthesis component, an efficient MCTS approach tailored for language tasks, and a trio of critic models for precise feedback. Our experimental results in mathematical reasoning tasks demonstrate that AlphaLLM significantly enhances the performance of LLMs without additional annotations, showing the potential for self-improvement in LLMs. |
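A minimal UCT-style search loop over partial text solutions, with `propose` and `critic` as stand-ins for the LLM proposal and critic models described above. The expansion width, exploration constant, and toy arithmetic task are illustrative assumptions, not the AlphaLLM implementation.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    """Upper-confidence bound used to pick which child to descend into."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def mcts(root_state, propose, critic, n_sims=100, width=3):
    """Minimal UCT search over text states.

    `propose(state)` is assumed to return candidate continuations (e.g., sampled
    from the LLM) and `critic(state)` a scalar estimate of how promising the
    partial solution is; both are stand-ins for the models in the paper."""
    root = Node(root_state)
    for _ in range(n_sims):
        node = root
        # Selection: descend via UCT until reaching a leaf.
        while node.children:
            node = max(node.children, key=uct)
        # Expansion: add candidate continuations of the leaf state.
        for cont in propose(node.state)[:width]:
            node.children.append(Node(node.state + cont, parent=node))
        # Evaluation: score one child with the critic (instead of a full rollout).
        leaf = random.choice(node.children) if node.children else node
        reward = critic(leaf.state)
        # Backpropagation.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).state

# Toy usage with dummy proposal/critic functions.
best = mcts("Solve: 2+2*3 =", lambda s: [" 8", " 6", " 12"],
            lambda s: 1.0 if s.endswith("8") else 0.0)
print(best)
```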
|
2024-04-19T00:00:00 | 2404.11565 | MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation | [
"Kuan-Chieh",
"Wang",
"Daniil Ostashev",
"Yuwei Fang",
"Sergey Tulyakov",
"Kfir Aberman"
]
| https://github.com/snap-research/mixture-of-attention | We introduce a new architecture for personalization of text-to-image diffusion models, coined Mixture-of-Attention (MoA). Inspired by the Mixture-of-Experts mechanism utilized in large language models (LLMs), MoA distributes the generation workload between two attention pathways: a personalized branch and a non-personalized prior branch. MoA is designed to retain the original model's prior by fixing its attention layers in the prior branch, while minimally intervening in the generation process with the personalized branch that learns to embed subjects in the layout and context generated by the prior branch. A novel routing mechanism manages the distribution of pixels in each layer across these branches to optimize the blend of personalized and generic content creation. Once trained, MoA facilitates the creation of high-quality, personalized images featuring multiple subjects with compositions and interactions as diverse as those generated by the original model. Crucially, MoA enhances the distinction between the model's pre-existing capability and the newly augmented personalized intervention, thereby offering a more disentangled subject-context control that was previously unattainable. Project page: https://snap-research.github.io/mixture-of-attention |
2024-04-19T00:00:00 | 2404.12318 | Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment | [
"Zhaofeng Wu",
"Ananth Balashankar",
"Yoon Kim",
"Jacob Eisenstein",
"Ahmad Beirami"
]
| Aligning language models (LMs) based on human-annotated preference data is a crucial step in obtaining practical and performant LM-based systems. However, multilingual human preference data are difficult to obtain at scale, making it challenging to extend this framework to diverse languages. In this work, we evaluate a simple approach for zero-shot cross-lingual alignment, where a reward model is trained on preference data in one source language and directly applied to other target languages. On summarization and open-ended dialog generation, we show that this method is consistently successful under comprehensive evaluation settings, including human evaluation: cross-lingually aligned models are preferred by humans over unaligned models on up to >70% of evaluation instances. We moreover find that a different-language reward model sometimes yields better aligned models than a same-language reward model. We also identify best practices when there is no language-specific data for even supervised finetuning, another component in alignment. |
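A sketch of zero-shot cross-lingual reward reuse via best-of-n selection: a reward model trained on source-language (English) preferences directly scores target-language candidates. The checkpoint name `my-org/en-summarization-rm` is hypothetical; only the standard HuggingFace sequence-classification scoring pattern is assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "my-org/en-summarization-rm" is a hypothetical checkpoint standing in for a reward
# model trained only on English preference data.
rm_name = "my-org/en-summarization-rm"
tokenizer = AutoTokenizer.from_pretrained(rm_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
reward_model = AutoModelForSequenceClassification.from_pretrained(rm_name, num_labels=1)
reward_model.eval()

@torch.no_grad()
def best_of_n(prompt, candidates):
    """Rank target-language candidates with the English-trained reward model and
    return the highest-scoring one (zero-shot cross-lingual reward reuse)."""
    inputs = tokenizer([prompt + "\n" + c for c in candidates],
                       return_tensors="pt", padding=True, truncation=True)
    scores = reward_model(**inputs).logits.squeeze(-1)
    return candidates[int(scores.argmax())]

# German prompt and candidates, scored by a reward model that never saw German preferences.
prompt = "Fasse den Artikel zusammen: ..."
candidates = ["Kurze, korrekte Zusammenfassung des Artikels.", "Unzusammenhängender Text ohne Bezug."]
print(best_of_n(prompt, candidates))
```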
|
2024-04-19T00:00:00 | 2404.12241 | Introducing v0.5 of the AI Safety Benchmark from MLCommons | [
"Bertie Vidgen",
"Adarsh Agrawal",
"Ahmed M. Ahmed",
"Victor Akinwande",
"Namir Al-Nuaimi",
"Najla Alfaraj",
"Elie Alhajjar",
"Lora Aroyo",
"Trupti Bavalatti",
"Borhane Blili-Hamelin",
"Kurt Bollacker",
"Rishi Bomassani",
"Marisa Ferrara Boston",
"Siméon Campos",
"Kal Chakra",
"Canyu Chen",
"Cody Coleman",
"Zacharie Delpierre Coudert",
"Leon Derczynski",
"Debojyoti Dutta",
"Ian Eisenberg",
"James Ezick",
"Heather Frase",
"Brian Fuller",
"Ram Gandikota",
"Agasthya Gangavarapu",
"Ananya Gangavarapu",
"James Gealy",
"Rajat Ghosh",
"James Goel",
"Usman Gohar",
"Sujata Goswami",
"Scott A. Hale",
"Wiebke Hutiri",
"Joseph Marvin Imperial",
"Surgan Jandial",
"Nick Judd",
"Felix Juefei-Xu",
"Foutse Khomh",
"Bhavya Kailkhura",
"Hannah Rose Kirk",
"Kevin Klyman",
"Chris Knotz",
"Michael Kuchnik",
"Shachi H. Kumar",
"Chris Lengerich",
"Bo Li",
"Zeyi Liao",
"Eileen Peters Long",
"Victor Lu",
"Yifan Mai",
"Priyanka Mary Mammen",
"Kelvin Manyeki",
"Sean McGregor",
"Virendra Mehta",
"Shafee Mohammed",
"Emanuel Moss",
"Lama Nachman",
"Dinesh Jinenhally Naganna",
"Amin Nikanjam",
"Besmira Nushi",
"Luis Oala",
"Iftach Orr",
"Alicia Parrish",
"Cigdem Patlak",
"William Pietri",
"Forough Poursabzi-Sangdeh",
"Eleonora Presani",
"Fabrizio Puletti",
"Paul Röttger",
"Saurav Sahay",
"Tim Santos",
"Nino Scherrer",
"Alice Schoenauer Sebag",
"Patrick Schramowski",
"Abolfazl Shahbazi",
"Vin Sharma",
"Xudong Shen",
"Vamsi Sistla",
"Leonard Tang",
"Davide Testuggine",
"Vithursan Thangarasa",
"Elizabeth Anne Watkins",
"Rebecca Weiss",
"Chris Welty",
"Tyler Wilbers",
"Adina Williams",
"Carole-Jean Wu",
"Poonam Yadav",
"Xianjun Yang",
"Yi Zeng",
"Wenhui Zhang",
"Fedor Zhdanov",
"Jiacheng Zhu",
"Percy Liang",
"Peter Mattson",
"Joaquin Vanschoren"
]
| This paper introduces v0.5 of the AI Safety Benchmark, which has been created by the MLCommons AI Safety Working Group. The AI Safety Benchmark has been designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a single use case (an adult chatting to a general-purpose assistant in English), and a limited set of personas (i.e., typical users, malicious users, and vulnerable users). We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark. We plan to release version 1.0 of the AI Safety Benchmark by the end of 2024. The v1.0 benchmark will provide meaningful insights into the safety of AI systems. However, the v0.5 benchmark should not be used to assess the safety of AI systems. We have sought to fully document the limitations, flaws, and challenges of v0.5. This release of v0.5 of the AI Safety Benchmark includes (1) a principled approach to specifying and constructing the benchmark, which comprises use cases, types of systems under test (SUTs), language and context, personas, tests, and test items; (2) a taxonomy of 13 hazard categories with definitions and subcategories; (3) tests for seven of the hazard categories, each comprising a unique set of test items, i.e., prompts. There are 43,090 test items in total, which we created with templates; (4) a grading system for AI systems against the benchmark; (5) an openly available platform, and downloadable tool, called ModelBench that can be used to evaluate the safety of AI systems on the benchmark; (6) an example evaluation report which benchmarks the performance of over a dozen openly available chat-tuned language models; (7) a test specification for the benchmark. |
|
2024-04-19T00:00:00 | 2404.11614 | Dynamic Typography: Bringing Words to Life | [
"Zichen Liu",
"Yihao Meng",
"Hao Ouyang",
"Yue Yu",
"Bolin Zhao",
"Daniel Cohen-Or",
"Huamin Qu"
]
| https://github.com/zliucz/animate-your-word | Text animation serves as an expressive medium, transforming static communication into dynamic experiences by infusing words with motion to evoke emotions, emphasize meanings, and construct compelling narratives. Crafting animations that are semantically aware poses significant challenges, demanding expertise in graphic design and animation. We present an automated text animation scheme, termed "Dynamic Typography", which combines two challenging tasks. It deforms letters to convey semantic meaning and infuses them with vibrant movements based on user prompts. Our technique harnesses vector graphics representations and an end-to-end optimization-based framework. This framework employs neural displacement fields to convert letters into base shapes and applies per-frame motion, encouraging coherence with the intended textual concept. Shape preservation techniques and perceptual loss regularization are employed to maintain legibility and structural integrity throughout the animation process. We demonstrate the generalizability of our approach across various text-to-video models and highlight the superiority of our end-to-end methodology over baseline methods, which might comprise separate tasks. Through quantitative and qualitative evaluations, we demonstrate the effectiveness of our framework in generating coherent text animations that faithfully interpret user prompts while maintaining readability. Our code is available at: https://animate-your-word.github.io/demo/. |
2024-04-19T00:00:00 | 2404.12385 | MeshLRM: Large Reconstruction Model for High-Quality Mesh | [
"Xinyue Wei",
"Kai Zhang",
"Sai Bi",
"Hao Tan",
"Fujun Luan",
"Valentin Deschaintre",
"Kalyan Sunkavalli",
"Hao Su",
"Zexiang Xu"
]
| We propose MeshLRM, a novel LRM-based approach that can reconstruct a high-quality mesh from merely four input images in less than one second. Different from previous large reconstruction models (LRMs) that focus on NeRF-based reconstruction, MeshLRM incorporates differentiable mesh extraction and rendering within the LRM framework. This allows for end-to-end mesh reconstruction by fine-tuning a pre-trained NeRF LRM with mesh rendering. Moreover, we improve the LRM architecture by simplifying several complex designs in previous LRMs. MeshLRM's NeRF initialization is sequentially trained with low- and high-resolution images; this new LRM training strategy enables significantly faster convergence and thereby leads to better quality with less compute. Our approach achieves state-of-the-art mesh reconstruction from sparse-view inputs and also allows for many downstream applications, including text-to-3D and single-image-to-3D generation. Project page: https://sarahweiii.github.io/meshlrm/ |
|
2024-04-19T00:00:00 | 2404.11925 | EdgeFusion: On-Device Text-to-Image Generation | [
"Thibault Castells",
"Hyoung-Kyu Song",
"Tairen Piao",
"Shinkook Choi",
"Bo-Kyeong Kim",
"Hanyoung Yim",
"Changgwun Lee",
"Jae Gon Kim",
"Tae-Ho Kim"
]
| The intensive computational burden of Stable Diffusion (SD) for text-to-image generation poses a significant hurdle for its practical application. To tackle this challenge, recent research focuses on methods to reduce sampling steps, such as Latent Consistency Model (LCM), and on employing architectural optimizations, including pruning and knowledge distillation. Diverging from existing approaches, we uniquely start with a compact SD variant, BK-SDM. We observe that directly applying LCM to BK-SDM with commonly used crawled datasets yields unsatisfactory results. It leads us to develop two strategies: (1) leveraging high-quality image-text pairs from leading generative models and (2) designing an advanced distillation process tailored for LCM. Through our thorough exploration of quantization, profiling, and on-device deployment, we achieve rapid generation of photo-realistic, text-aligned images in just two steps, with latency under one second on resource-limited edge devices. |
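Few-step generation of the kind described above can be reproduced with public latent-consistency checkpoints in `diffusers`; the sketch below uses Stable Diffusion v1.5 with an LCM-LoRA as a stand-in, since the paper's BK-SDM-based model is not assumed to be available, and the checkpoint names are illustrative.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Public checkpoints used as stand-ins for the paper's compact model; names are illustrative.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                         torch_dtype=torch.float16).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)   # swap in the LCM scheduler
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")        # distilled consistency adapter

# Two-step sampling; LCM-style models are typically run with low guidance.
image = pipe("a photo of a lighthouse at sunset, photorealistic",
             num_inference_steps=2, guidance_scale=1.0).images[0]
image.save("lighthouse.png")
```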
|
2024-04-22T00:00:00 | 2404.13026 | PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation | [
"Tianyuan Zhang",
"Hong-Xing Yu",
"Rundi Wu",
"Brandon Y. Feng",
"Changxi Zheng",
"Noah Snavely",
"Jiajun Wu",
"William T. Freeman"
]
| https://github.com/a1600012888/PhysDreamer | Realistic object interactions are crucial for creating immersive virtual experiences, yet synthesizing realistic 3D object dynamics in response to novel interactions remains a significant challenge. Unlike unconditional or text-conditioned dynamics generation, action-conditioned dynamics requires perceiving the physical material properties of objects and grounding the 3D motion prediction on these properties, such as object stiffness. However, estimating physical material properties is an open problem due to the lack of material ground-truth data, as measuring these properties for real objects is highly difficult. We present PhysDreamer, a physics-based approach that endows static 3D objects with interactive dynamics by leveraging the object dynamics priors learned by video generation models. By distilling these priors, PhysDreamer enables the synthesis of realistic object responses to novel interactions, such as external forces or agent manipulations. We demonstrate our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study. PhysDreamer takes a step towards more engaging and realistic virtual experiences by enabling static 3D objects to dynamically respond to interactive stimuli in a physically plausible manner. See our project page at https://physdreamer.github.io/. |