date (timestamp[ns], 2023-05-05 to 2025-07-16) | arxiv_id (string, length 10) | title (string, length 8 to 202) | authors (list, length 1 to 3.3k) | github (string, length 0 to 116) | abstract (string, length 165 to 1.92k) |
---|---|---|---|---|---|
2024-08-13T00:00:00 | 2408.06070 | ControlNeXt: Powerful and Efficient Control for Image and Video Generation | [
"Bohao Peng",
"Jian Wang",
"Yuechen Zhang",
"Wenbo Li",
"Ming-Chang Yang",
"Jiaya Jia"
]
| Diffusion models have demonstrated remarkable and robust abilities in both image and video generation. To achieve greater control over generated results, researchers introduce additional architectures, such as ControlNet, Adapters and ReferenceNet, to integrate conditioning controls. However, current controllable generation methods often require substantial additional computational resources, especially for video generation, and face challenges in training or exhibit weak control. In this paper, we propose ControlNeXt: a powerful and efficient method for controllable image and video generation. We first design a more straightforward and efficient architecture, replacing heavy additional branches with a minimal design that adds little cost compared to the base model. Such a concise structure also allows our method to seamlessly integrate with other LoRA weights, enabling style alteration without the need for additional training. As for training, we reduce up to 90% of learnable parameters compared to the alternatives. Furthermore, we propose another method called Cross Normalization (CN) as a replacement for Zero-Convolution to achieve fast and stable training convergence. We have conducted various experiments with different base models across images and videos, demonstrating the robustness of our method. |
|
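As a rough illustration of the cross-normalization idea in the ControlNeXt abstract (aligning the control branch's feature statistics to the frozen base branch instead of gating it with a zero-initialized convolution), here is a minimal PyTorch sketch; the module name, per-channel statistics, and additive injection are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossNormalization(nn.Module):
    """Sketch: align control features to the base branch's statistics before
    additive injection, instead of gating them with a zero-initialized conv."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, base_feat: torch.Tensor, ctrl_feat: torch.Tensor) -> torch.Tensor:
        dims = (0, 2, 3)  # batch and spatial dims of NCHW feature maps
        base_mean = base_feat.mean(dim=dims, keepdim=True)
        base_std = base_feat.std(dim=dims, keepdim=True)
        ctrl_mean = ctrl_feat.mean(dim=dims, keepdim=True)
        ctrl_std = ctrl_feat.std(dim=dims, keepdim=True)
        # Whiten the control features, then rescale them to the base statistics.
        aligned = (ctrl_feat - ctrl_mean) / (ctrl_std + self.eps) * base_std + base_mean
        return base_feat + aligned

# Usage: fuse a lightweight control branch into a frozen base feature map.
base = torch.randn(2, 64, 32, 32)
ctrl = torch.randn(2, 64, 32, 32) * 5.0 + 3.0  # deliberately mismatched statistics
print(CrossNormalization()(base, ctrl).shape)  # torch.Size([2, 64, 32, 32])
```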
2024-08-13T00:00:00 | 2408.05939 | UniPortrait: A Unified Framework for Identity-Preserving Single- and Multi-Human Image Personalization | [
"Junjie He",
"Yifeng Geng",
"Liefeng Bo"
]
| This paper presents UniPortrait, an innovative human image personalization framework that unifies single- and multi-ID customization with high face fidelity, extensive facial editability, free-form input description, and diverse layout generation. UniPortrait consists of only two plug-and-play modules: an ID embedding module and an ID routing module. The ID embedding module extracts versatile editable facial features with a decoupling strategy for each ID and embeds them into the context space of diffusion models. The ID routing module then combines and distributes these embeddings adaptively to their respective regions within the synthesized image, achieving the customization of single and multiple IDs. With a carefully designed two-stage training scheme, UniPortrait achieves superior performance in both single- and multi-ID customization. Quantitative and qualitative experiments demonstrate the advantages of our method over existing approaches as well as its good scalability, e.g., the universal compatibility with existing generative control tools. The project page is at https://aigcdesigngroup.github.io/UniPortrait-Page/ . |
|
2024-08-13T00:00:00 | 2408.06190 | FruitNeRF: A Unified Neural Radiance Field based Fruit Counting Framework | [
"Lukas Meyer",
"Andreas Gilson",
"Ute Schmidt",
"Marc Stamminger"
]
| We introduce FruitNeRF, a novel unified fruit counting framework that leverages state-of-the-art view synthesis methods to count any fruit type directly in 3D. Our framework takes an unordered set of posed images captured by a monocular camera and segments fruit in each image. To make our system independent of the fruit type, we employ a foundation model that generates binary segmentation masks for any fruit. Utilizing both modalities, RGB and semantic, we train a semantic neural radiance field. Through uniform volume sampling of the implicit Fruit Field, we obtain fruit-only point clouds. By applying cascaded clustering on the extracted point cloud, our approach achieves precise fruit counts. The use of neural radiance fields provides significant advantages over conventional methods such as object tracking or optical flow, as the counting itself is lifted into 3D. Our method prevents double counting of fruit and avoids counting irrelevant fruit. We evaluate our methodology using both real-world and synthetic datasets. The real-world dataset consists of three apple trees with manually counted ground truths and a benchmark apple dataset with one row and ground-truth fruit locations, while the synthetic dataset comprises various fruit types including apple, plum, lemon, pear, peach, and mango. Additionally, we assess the performance of fruit counting using the foundation model compared to a U-Net. |
|
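The counting stage described above (sampling the implicit Fruit Field, keeping fruit-labelled points, and clustering them) can be sketched as follows. This is a simplified single-stage version using DBSCAN rather than the paper's cascaded clustering; the probability threshold and clustering parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_fruits(points: np.ndarray, fruit_prob: np.ndarray,
                 prob_threshold: float = 0.5, eps: float = 0.05,
                 min_points: int = 20) -> int:
    """Keep points whose semantic field says 'fruit', cluster the remaining
    point cloud, and count the clusters (label -1 is DBSCAN noise)."""
    fruit_points = points[fruit_prob > prob_threshold]
    if len(fruit_points) == 0:
        return 0
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(fruit_points)
    return len(set(labels)) - (1 if -1 in labels else 0)

# Toy example: three synthetic fruit blobs plus background noise.
rng = np.random.default_rng(0)
blobs = np.concatenate([rng.normal(c, 0.01, size=(200, 3))
                        for c in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
noise = rng.uniform(-1, 2, size=(100, 3))
pts = np.concatenate([blobs, noise])
probs = np.concatenate([np.full(len(blobs), 0.9), np.full(len(noise), 0.1)])
print(count_fruits(pts, probs))  # 3
```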
2024-08-13T00:00:00 | 2408.06072 | CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer | [
"Zhuoyi Yang",
"Jiayan Teng",
"Wendi Zheng",
"Ming Ding",
"Shiyu Huang",
"Jiazheng Xu",
"Yuanming Yang",
"Wenyi Hong",
"Xiaohan Zhang",
"Guanyu Feng",
"Da Yin",
"Xiaotao Gu",
"Yuxuan Zhang",
"Weihan Wang",
"Yean Cheng",
"Ting Liu",
"Bin Xu",
"Yuxiao Dong",
"Jie Tang"
]
| https://github.com/THUDM/CogVideo | We introduce CogVideoX, a large-scale diffusion transformer model designed for generating videos based on text prompts. To efficiently model video data, we propose to leverage a 3D Variational Autoencoder (VAE) to compress videos along both spatial and temporal dimensions. To improve the text-video alignment, we propose an expert transformer with the expert adaptive LayerNorm to facilitate the deep fusion between the two modalities. By employing a progressive training technique, CogVideoX is adept at producing coherent, long-duration videos characterized by significant motions. In addition, we develop an effective text-video data processing pipeline that includes various data preprocessing strategies and a video captioning method. It significantly helps enhance the performance of CogVideoX, improving both generation quality and semantic alignment. Results show that CogVideoX demonstrates state-of-the-art performance across multiple machine metrics and human evaluations. The model weights of both the 3D Causal VAE and CogVideoX are publicly available at https://github.com/THUDM/CogVideo. |
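A minimal sketch of what an "expert adaptive LayerNorm" for deep text-video fusion might look like: each modality gets its own scale/shift predictor driven by a shared conditioning embedding before joint attention. The module layout and dimensions are assumptions, not the released model's code.

```python
import torch
import torch.nn as nn

class ExpertAdaLN(nn.Module):
    """Separate adaptive-LayerNorm 'experts' for text and vision tokens inside
    a joint transformer block, both driven by the same conditioning embedding."""

    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.text_mod = nn.Linear(cond_dim, 2 * dim)    # per-modality scale/shift
        self.vision_mod = nn.Linear(cond_dim, 2 * dim)

    def forward(self, text_tokens, vision_tokens, cond):
        t_scale, t_shift = self.text_mod(cond).chunk(2, dim=-1)
        v_scale, v_shift = self.vision_mod(cond).chunk(2, dim=-1)
        text_out = self.norm(text_tokens) * (1 + t_scale.unsqueeze(1)) + t_shift.unsqueeze(1)
        vision_out = self.norm(vision_tokens) * (1 + v_scale.unsqueeze(1)) + v_shift.unsqueeze(1)
        # The two streams are concatenated for joint attention downstream.
        return torch.cat([text_out, vision_out], dim=1)

x_text = torch.randn(2, 77, 512)
x_vid = torch.randn(2, 1024, 512)
cond = torch.randn(2, 256)
print(ExpertAdaLN(512, 256)(x_text, x_vid, cond).shape)  # torch.Size([2, 1101, 512])
```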
2024-08-13T00:00:00 | 2408.06316 | Body Transformer: Leveraging Robot Embodiment for Policy Learning | [
"Carmelo Sferrazza",
"Dun-Ming Huang",
"Fangchen Liu",
"Jongmin Lee",
"Pieter Abbeel"
]
| In recent years, the transformer architecture has become the de facto standard for machine learning algorithms applied to natural language processing and computer vision. Despite notable evidence of successful deployment of this architecture in the context of robot learning, we claim that vanilla transformers do not fully exploit the structure of the robot learning problem. Therefore, we propose Body Transformer (BoT), an architecture that leverages the robot embodiment by providing an inductive bias that guides the learning process. We represent the robot body as a graph of sensors and actuators, and rely on masked attention to pool information throughout the architecture. The resulting architecture outperforms the vanilla transformer, as well as the classical multilayer perceptron, in terms of task completion, scaling properties, and computational efficiency when representing either imitation or reinforcement learning policies. Additional material including the open-source code is available at https://sferrazza.cc/bot_site. |
|
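The masked-attention idea in the Body Transformer abstract, restricting attention according to the robot's sensor/actuator graph, can be sketched as below. The 1-hop neighbourhood mask and the toy kinematic chain are illustrative assumptions.

```python
import torch
import torch.nn as nn

def body_attention_mask(adjacency: torch.Tensor) -> torch.Tensor:
    """Each body-part token may only attend to itself and its graph neighbours
    (a simple 1-hop mask; the actual masking scheme may differ)."""
    n = adjacency.shape[0]
    allowed = adjacency.bool() | torch.eye(n, dtype=torch.bool)
    # True entries are *disallowed* for nn.MultiheadAttention's attn_mask.
    return ~allowed

# Toy 4-link chain: 0-1-2-3 (e.g., torso -> shoulder -> elbow -> wrist).
adj = torch.zeros(4, 4)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1

tokens = torch.randn(2, 4, 32)            # (batch, body parts, features)
attn = nn.MultiheadAttention(32, num_heads=4, batch_first=True)
out, _ = attn(tokens, tokens, tokens, attn_mask=body_attention_mask(adj))
print(out.shape)  # torch.Size([2, 4, 32])
```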
2024-08-13T00:00:00 | 2408.06195 | Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers | [
"Zhenting Qi",
"Mingyuan Ma",
"Jiahang Xu",
"Li Lyna Zhang",
"Fan Yang",
"Mao Yang"
]
| https://github.com/zhentingqi/rStar | This paper introduces rStar, a self-play mutual reasoning approach that significantly improves the reasoning capabilities of small language models (SLMs) without fine-tuning or superior models. rStar decouples reasoning into a self-play mutual generation-discrimination process. First, a target SLM augments Monte Carlo Tree Search (MCTS) with a rich set of human-like reasoning actions to construct higher-quality reasoning trajectories. Next, another SLM, with capabilities similar to the target SLM, acts as a discriminator to verify each trajectory generated by the target SLM. Trajectories agreed upon by both models are considered mutually consistent and thus more likely to be correct. Extensive experiments across five SLMs demonstrate that rStar can effectively solve diverse reasoning problems, including GSM8K, GSM-Hard, MATH, SVAMP, and StrategyQA. Remarkably, rStar boosts GSM8K accuracy from 12.51% to 63.91% for LLaMA2-7B, from 36.46% to 81.88% for Mistral-7B, and from 74.53% to 91.13% for LLaMA3-8B-Instruct. Code will be available at https://github.com/zhentingqi/rStar. |
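A toy sketch of the generation-discrimination loop described in the rStar abstract, with placeholder callables standing in for the two SLMs and simple string parsing standing in for MCTS-built trajectories; the real system searches over human-like reasoning actions.

```python
import random
from collections import Counter
from typing import Callable, List

def mutual_reasoning(question: str,
                     generate: Callable[[str], List[str]],
                     discriminate: Callable[[str, str], str],
                     n_candidates: int = 8) -> str:
    """Keep only trajectories whose final answer is reproduced by a second
    model, then return the most frequent mutually agreed answer."""
    agreed = []
    for trajectory in generate(question)[:n_candidates]:
        answer = trajectory.split("=")[-1].strip()
        if discriminate(question, trajectory).strip() == answer:
            agreed.append(answer)
    if not agreed:
        return "no-consensus"
    return Counter(agreed).most_common(1)[0][0]

# Toy stand-ins for the two SLMs.
def toy_generator(q: str) -> List[str]:
    return [f"step: 2+2 = {random.choice(['4', '4', '5'])}" for _ in range(8)]

def toy_discriminator(q: str, trajectory: str) -> str:
    return "4"  # an independent model re-deriving the answer

random.seed(0)
print(mutual_reasoning("What is 2+2?", toy_generator, toy_discriminator))  # 4
```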
2024-08-13T00:00:00 | 2408.06019 | HeadGAP: Few-shot 3D Head Avatar via Generalizable Gaussian Priors | [
"Xiaozheng Zheng",
"Chao Wen",
"Zhaohu Li",
"Weiyi Zhang",
"Zhuo Su",
"Xu Chang",
"Yang Zhao",
"Zheng Lv",
"Xiaoyuan Zhang",
"Yongjie Zhang",
"Guidong Wang",
"Lan Xu"
]
| In this paper, we present a novel 3D head avatar creation approach capable of generalizing from few-shot in-the-wild data with high-fidelity and animatable robustness. Given the underconstrained nature of this problem, incorporating prior knowledge is essential. Therefore, we propose a framework comprising prior learning and avatar creation phases. The prior learning phase leverages 3D head priors derived from a large-scale multi-view dynamic dataset, and the avatar creation phase applies these priors for few-shot personalization. Our approach effectively captures these priors by utilizing a Gaussian Splatting-based auto-decoder network with part-based dynamic modeling. Our method employs identity-shared encoding with personalized latent codes for individual identities to learn the attributes of Gaussian primitives. During the avatar creation phase, we achieve fast head avatar personalization by leveraging inversion and fine-tuning strategies. Extensive experiments demonstrate that our model effectively exploits head priors and successfully generalizes them to few-shot personalization, achieving photo-realistic rendering quality, multi-view consistency, and stable animation. |
|
2024-08-13T00:00:00 | 2408.06327 | VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents | [
"Xiao Liu",
"Tianjie Zhang",
"Yu Gu",
"Iat Long Iong",
"Yifan Xu",
"Xixuan Song",
"Shudan Zhang",
"Hanyu Lai",
"Xinyi Liu",
"Hanlin Zhao",
"Jiadai Sun",
"Xinyue Yang",
"Yu Yang",
"Zehan Qi",
"Shuntian Yao",
"Xueqiao Sun",
"Siyi Cheng",
"Qinkai Zheng",
"Hao Yu",
"Hanchen Zhang",
"Wenyi Hong",
"Ming Ding",
"Lihang Pan",
"Xiaotao Gu",
"Aohan Zeng",
"Zhengxiao Du",
"Chan Hee Song",
"Yu Su",
"Yuxiao Dong",
"Jie Tang"
]
| https://github.com/THUDM/VisualAgentBench | Large Multimodal Models (LMMs) have ushered in a new era in artificial intelligence, merging capabilities in both language and vision to form highly capable Visual Foundation Agents. These agents are postulated to excel across a myriad of tasks, potentially approaching general artificial intelligence. However, existing benchmarks fail to sufficiently challenge or showcase the full potential of LMMs in complex, real-world environments. To address this gap, we introduce VisualAgentBench (VAB), a comprehensive and pioneering benchmark specifically designed to train and evaluate LMMs as visual foundation agents across diverse scenarios, including Embodied, Graphical User Interface, and Visual Design, with tasks formulated to probe the depth of LMMs' understanding and interaction capabilities. Through rigorous testing across nine proprietary LMM APIs and eight open models, we demonstrate the considerable yet still developing agent capabilities of these models. Additionally, VAB includes a trajectory training set constructed through hybrid methods including Program-based Solvers, LMM Agent Bootstrapping, and Human Demonstrations, promoting substantial performance improvements in LMMs through behavior cloning. Our work not only aims to benchmark existing models but also provides a solid foundation for future development into visual foundation agents. Code, train & test data, and part of fine-tuned open LMMs are available at https://github.com/THUDM/VisualAgentBench. |
2024-08-13T00:00:00 | 2408.06142 | Med42-v2: A Suite of Clinical LLMs | [
"Clément Christophe",
"Praveen K Kanithi",
"Tathagata Raha",
"Shadab Khan",
"Marco AF Pimentel"
]
| Med42-v2 introduces a suite of clinical large language models (LLMs) designed to address the limitations of generic models in healthcare settings. These models are built on the Llama3 architecture and fine-tuned using specialized clinical data. They underwent multi-stage preference alignment to effectively respond to natural prompts. While generic models are often preference-aligned to avoid answering clinical queries as a precaution, Med42-v2 is specifically trained to overcome this limitation, enabling its use in clinical settings. Med42-v2 models demonstrate superior performance compared to both the original Llama3 models, in 8B and 70B parameter configurations, and GPT-4 across various medical benchmarks. These LLMs are developed to understand clinical queries, perform reasoning tasks, and provide valuable assistance in clinical environments. The models are now publicly available at https://huggingface.co/m42-health. |
|
2024-08-13T00:00:00 | 2408.05506 | Your Context Is Not an Array: Unveiling Random Access Limitations in Transformers | [
"MohammadReza Ebrahimi",
"Sunny Panchal",
"Roland Memisevic"
]
| Despite their recent successes, Transformer-based large language models show surprising failure modes. A well-known example of such failure modes is their inability to length-generalize: solving problem instances at inference time that are longer than those seen during training. In this work, we further explore the root cause of this failure by performing a detailed analysis of model behaviors on the simple parity task. Our analysis suggests that length generalization failures are intricately related to a model's inability to perform random memory accesses within its context window. We present supporting evidence for this hypothesis by demonstrating the effectiveness of methodologies that circumvent the need for indexing or that enable random token access indirectly, through content-based addressing. We further show where and how the failure to perform random memory access manifests through attention map visualizations. |
|
2024-08-14T00:00:00 | 2408.07009 | Imagen 3 | [
"Imagen-Team-Google",
"Jason Baldridge",
"Jakob Bauer",
"Mukul Bhutani",
"Nicole Brichtova",
"Andrew Bunner",
"Kelvin Chan",
"Yichang Chen",
"Sander Dieleman",
"Yuqing Du",
"Zach Eaton-Rosen",
"Hongliang Fei",
"Nando de Freitas",
"Yilin Gao",
"Evgeny Gladchenko",
"Sergio Gómez Colmenarejo",
"Mandy Guo",
"Alex Haig",
"Will Hawkins",
"Hexiang Hu",
"Huilian Huang",
"Tobenna Peter Igwe",
"Christos Kaplanis",
"Siavash Khodadadeh",
"Yelin Kim",
"Ksenia Konyushkova",
"Karol Langner",
"Eric Lau",
"Shixin Luo",
"Soňa Mokrá",
"Henna Nandwani",
"Yasumasa Onoe",
"Aäron van den Oord",
"Zarana Parekh",
"Jordi Pont-Tuset",
"Hang Qi",
"Rui Qian",
"Deepak Ramachandran",
"Poorva Rane",
"Abdullah Rashwan",
"Ali Razavi",
"Robert Riachi",
"Hansa Srinivasan",
"Srivatsan Srinivasan",
"Robin Strudel",
"Benigno Uria",
"Oliver Wang",
"Su Wang",
"Austin Waters",
"Chris Wolff",
"Auriel Wright",
"Zhisheng Xiao",
"Hao Xiong",
"Keyang Xu",
"Marc van Zee",
"Junlin Zhang",
"Katie Zhang",
"Wenlei Zhou",
"Konrad Zolna",
"Ola Aboubakar",
"Canfer Akbulut",
"Oscar Akerlund",
"Isabela Albuquerque",
"Nina Anderson",
"Marco Andreetto",
"Lora Aroyo",
"Ben Bariach",
"David Barker",
"Sherry Ben",
"Dana Berman",
"Courtney Biles",
"Irina Blok",
"Pankil Botadra",
"Jenny Brennan",
"Karla Brown",
"John Buckley",
"Rudy Bunel",
"Elie Bursztein",
"Christina Butterfield",
"Ben Caine",
"Viral Carpenter",
"Norman Casagrande",
"Ming-Wei Chang",
"Solomon Chang",
"Shamik Chaudhuri",
"Tony Chen",
"John Choi",
"Dmitry Churbanau",
"Nathan Clement",
"Matan Cohen",
"Forrester Cole",
"Mikhail Dektiarev",
"Vincent Du",
"Praneet Dutta",
"Tom Eccles",
"Ndidi Elue",
"Ashley Feden",
"Shlomi Fruchter",
"Frankie Garcia",
"Roopal Garg",
"Weina Ge",
"Ahmed Ghazy",
"Bryant Gipson",
"Andrew Goodman",
"Dawid Górny",
"Sven Gowal",
"Khyatti Gupta",
"Yoni Halpern",
"Yena Han",
"Susan Hao",
"Jamie Hayes",
"Amir Hertz",
"Ed Hirst",
"Tingbo Hou",
"Heidi Howard",
"Mohamed Ibrahim",
"Dirichi Ike-Njoku",
"Joana Iljazi",
"Vlad Ionescu",
"William Isaac",
"Reena Jana",
"Gemma Jennings",
"Donovon Jenson",
"Xuhui Jia",
"Kerry Jones",
"Xiaoen Ju",
"Ivana Kajic",
"Christos Kaplanis",
"Burcu Karagol Ayan",
"Jacob Kelly",
"Suraj Kothawade",
"Christina Kouridi",
"Ira Ktena",
"Jolanda Kumakaw",
"Dana Kurniawan",
"Dmitry Lagun",
"Lily Lavitas",
"Jason Lee",
"Tao Li",
"Marco Liang",
"Maggie Li-Calis",
"Yuchi Liu",
"Javier Lopez Alberca",
"Peggy Lu",
"Kristian Lum",
"Yukun Ma",
"Chase Malik",
"John Mellor",
"Inbar Mosseri",
"Tom Murray",
"Aida Nematzadeh",
"Paul Nicholas",
"João Gabriel Oliveira",
"Guillermo Ortiz-Jimenez",
"Michela Paganini",
"Tom Le Paine",
"Roni Paiss",
"Alicia Parrish",
"Anne Peckham",
"Vikas Peswani",
"Igor Petrovski",
"Tobias Pfaff",
"Alex Pirozhenko",
"Ryan Poplin",
"Utsav Prabhu",
"Yuan Qi",
"Matthew Rahtz",
"Cyrus Rashtchian",
"Charvi Rastogi",
"Amit Raul",
"Ali Razavi",
"Sylvestre-Alvise Rebuffi",
"Susanna Ricco",
"Felix Riedel",
"Dirk Robinson",
"Pankaj Rohatgi",
"Bill Rosgen",
"Sarah Rumbley",
"Moonkyung Ryu",
"Anthony Salgado",
"Sahil Singla",
"Florian Schroff",
"Candice Schumann",
"Tanmay Shah",
"Brendan Shillingford",
"Kaushik Shivakumar",
"Dennis Shtatnov",
"Zach Singer",
"Evgeny Sluzhaev",
"Valerii Sokolov",
"Thibault Sottiaux",
"Florian Stimberg",
"Brad Stone",
"David Stutz",
"Yu-Chuan Su",
"Eric Tabellion",
"Shuai Tang",
"David Tao",
"Kurt Thomas",
"Gregory Thornton",
"Andeep Toor",
"Cristian Udrescu",
"Aayush Upadhyay",
"Cristina Vasconcelos",
"Alex Vasiloff",
"Andrey Voynov",
"Amanda Walker",
"Luyu Wang",
"Miaosen Wang",
"Simon Wang",
"Stanley Wang",
"Qifei Wang",
"Yuxiao Wang",
"Ágoston Weisz",
"Olivia Wiles",
"Chenxia Wu",
"Xingyu Federico Xu",
"Andrew Xue",
"Jianbo Yang",
"Luo Yu",
"Mete Yurtoglu",
"Ali Zand",
"Han Zhang",
"Jiageng Zhang",
"Catherine Zhao",
"Adilet Zhaxybay",
"Miao Zhou",
"Shengqi Zhu",
"Zhenkai Zhu",
"Dawn Bloxwich",
"Mahyar Bordbar",
"Luis C. Cobo",
"Eli Collins",
"Shengyang Dai",
"Tulsee Doshi",
"Anca Dragan",
"Douglas Eck",
"Demis Hassabis",
"Sissie Hsiao",
"Tom Hume",
"Koray Kavukcuoglu",
"Helen King",
"Jack Krawczyk",
"Yeqing Li",
"Kathy Meier-Hellstern",
"Andras Orban",
"Yury Pinsky",
"Amar Subramanya",
"Oriol Vinyals",
"Ting Yu",
"Yori Zwols"
]
| We introduce Imagen 3, a latent diffusion model that generates high quality images from text prompts. We describe our quality and responsibility evaluations. Imagen 3 is preferred over other state-of-the-art (SOTA) models at the time of evaluation. In addition, we discuss issues around safety and representation, as well as methods we used to minimize the potential harm of our models. |
|
2024-08-14T00:00:00 | 2408.07055 | LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs | [
"Yushi Bai",
"Jiajie Zhang",
"Xin Lv",
"Linzhi Zheng",
"Siqi Zhu",
"Lei Hou",
"Yuxiao Dong",
"Jie Tang",
"Juanzi Li"
]
| https://github.com/THUDM/LongWriter | Current long context large language models (LLMs) can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding even a modest length of 2,000 words. Through controlled experiments, we find that the model's effective generation length is inherently bounded by the samples it has seen during supervised fine-tuning (SFT). In other words, the output limitation is due to the scarcity of long-output examples in existing SFT datasets. To address this, we introduce AgentWrite, an agent-based pipeline that decomposes ultra-long generation tasks into subtasks, enabling off-the-shelf LLMs to generate coherent outputs exceeding 20,000 words. Leveraging AgentWrite, we construct LongWriter-6k, a dataset containing 6,000 SFT examples with output lengths ranging from 2k to 32k words. By incorporating this dataset into model training, we successfully scale the output length of existing models to over 10,000 words while maintaining output quality. We also develop LongBench-Write, a comprehensive benchmark for evaluating ultra-long generation capabilities. Our 9B parameter model, further improved through DPO, achieves state-of-the-art performance on this benchmark, surpassing even much larger proprietary models. In general, our work demonstrates that existing long context LLMs already possess the potential for a larger output window--all you need is data with extended output during model alignment to unlock this capability. Our code & models are at: https://github.com/THUDM/LongWriter. |
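A rough sketch of an AgentWrite-style decomposition, planning the sections first and then generating each one conditioned on the full plan; the prompts, parsing, and the toy stand-in LLM are assumptions, not the released pipeline.

```python
from typing import Callable, List

def agent_write(instruction: str, llm: Callable[[str], str],
                n_sections: int = 5) -> str:
    """Ask the model for a section-by-section plan, then generate each
    section separately and concatenate the results."""
    plan_prompt = (f"Break the following writing task into {n_sections} sections, "
                   f"one per line, with a target word count each:\n{instruction}")
    plan = [line for line in llm(plan_prompt).splitlines() if line.strip()]
    sections = []
    for i, outline in enumerate(plan, start=1):
        write_prompt = (f"Task: {instruction}\nFull plan:\n" + "\n".join(plan) +
                        f"\nNow write section {i} only: {outline}")
        sections.append(llm(write_prompt))
    return "\n\n".join(sections)

# Toy stand-in LLM so the sketch runs end to end.
def toy_llm(prompt: str) -> str:
    if prompt.startswith("Break"):
        return "\n".join(f"Section {i}: ~2000 words" for i in range(1, 6))
    return "Lorem ipsum " * 10

print(len(agent_write("Write a 10,000-word travel guide.", toy_llm).split()))
```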
2024-08-14T00:00:00 | 2408.06941 | OpenResearcher: Unleashing AI for Accelerated Scientific Research | [
"Yuxiang Zheng",
"Shichao Sun",
"Lin Qiu",
"Dongyu Ru",
"Cheng Jiayang",
"Xuefeng Li",
"Jifan Lin",
"Binjie Wang",
"Yun Luo",
"Renjie Pan",
"Yang Xu",
"Qingkai Min",
"Zizhao Zhang",
"Yiwen Wang",
"Wenjie Li",
"Pengfei Liu"
]
| https://github.com/GAIR-NLP/OpenResearcher | The rapid growth of scientific literature poses significant challenges for researchers endeavoring to stay updated with the latest advancements in their fields and delve into new areas. We introduce OpenResearcher, an innovative platform that leverages Artificial Intelligence (AI) techniques to accelerate the research process by answering diverse questions from researchers. OpenResearcher is built on Retrieval-Augmented Generation (RAG) to integrate Large Language Models (LLMs) with up-to-date, domain-specific knowledge. Moreover, we develop various tools for OpenResearcher to understand researchers' queries, search the scientific literature, filter retrieved information, provide accurate and comprehensive answers, and self-refine these answers. OpenResearcher can flexibly use these tools to balance efficiency and effectiveness. As a result, OpenResearcher enables researchers to save time and increase their potential to discover new insights and drive scientific breakthroughs. Demo, video, and code are available at: https://github.com/GAIR-NLP/OpenResearcher. |
2024-08-14T00:00:00 | 2408.06481 | UniT: Unified Tactile Representation for Robot Learning | [
"Zhengtong Xu",
"Raghava Uppuluri",
"Xinwei Zhang",
"Cael Fitch",
"Philip Glen Crandall",
"Wan Shou",
"Dongyi Wang",
"Yu She"
]
| https://github.com/ZhengtongXu/UniT | UniT is a novel approach to tactile representation learning, using VQVAE to learn a compact latent space and serve as the tactile representation. It uses tactile images obtained from a single simple object to train the representation with transferability and generalizability. This tactile representation can be zero-shot transferred to various downstream tasks, including perception tasks and manipulation policy learning. Our benchmarking on an in-hand 3D pose estimation task shows that UniT outperforms existing visual and tactile representation learning methods. Additionally, UniT's effectiveness in policy learning is demonstrated across three real-world tasks involving diverse manipulated objects and complex robot-object-environment interactions. Through extensive experimentation, UniT is shown to be a simple-to-train, plug-and-play, yet widely effective method for tactile representation learning. For more details, please refer to our open-source repository https://github.com/ZhengtongXu/UniT and the project website https://zhengtongxu.github.io/unifiedtactile.github.io/. |
2024-08-14T00:00:00 | 2408.06693 | DC3DO: Diffusion Classifier for 3D Objects | [
"Nursena Koprucu",
"Meher Shashwat Nigam",
"Shicheng Xu",
"Biruk Abere",
"Gabriele Dominici",
"Andrew Rodriguez",
"Sharvaree Vadgam",
"Berfin Inal",
"Alberto Tono"
]
| Inspired by Geoffrey Hinton's emphasis on generative modeling, "To recognize shapes, first learn to generate them," we explore the use of 3D diffusion models for object classification. Leveraging the density estimates from these models, our approach, the Diffusion Classifier for 3D Objects (DC3DO), enables zero-shot classification of 3D shapes without additional training. On average, our method achieves a 12.5 percent improvement compared to its multiview counterparts, demonstrating superior multimodal reasoning over discriminative approaches. DC3DO employs a class-conditional diffusion model trained on ShapeNet, and we run inferences on point clouds of chairs and cars. This work highlights the potential of generative models in 3D object classification. |
|
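The diffusion-classifier principle behind DC3DO (pick the class whose class-conditional diffusion model best explains the input) can be sketched as a Monte Carlo comparison of noise-prediction errors. The noising schedule and the denoiser signature below are assumptions.

```python
import torch

@torch.no_grad()
def diffusion_classify(x, class_ids, denoiser, n_trials: int = 32):
    """Predicted class = the one whose class-conditional denoiser best predicts
    the injected noise (a crude proxy for class-conditional likelihood)."""
    errors = []
    for c in class_ids:
        err = 0.0
        for _ in range(n_trials):
            t = torch.rand(x.shape[0], device=x.device)        # random timestep
            noise = torch.randn_like(x)
            x_t = torch.sqrt(1 - t).view(-1, 1, 1) * x + torch.sqrt(t).view(-1, 1, 1) * noise
            err += ((denoiser(x_t, t, c) - noise) ** 2).mean().item()
        errors.append(err / n_trials)
    return class_ids[int(torch.tensor(errors).argmin())]

# Toy class-conditional denoiser: class 1's prediction is correlated with the
# injected noise (since x_t contains it), class 0 just predicts zeros.
def toy_denoiser(x_t, t, c):
    return torch.zeros_like(x_t) if c == 0 else x_t

x = torch.randn(4, 1024, 3)  # a batch of point clouds
print(diffusion_classify(x, class_ids=[0, 1], denoiser=toy_denoiser))  # 1
```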
2024-08-14T00:00:00 | 2408.06793 | Layerwise Recurrent Router for Mixture-of-Experts | [
"Zihan Qiu",
"Zeyu Huang",
"Shuang Cheng",
"Yizhi Zhou",
"Zili Wang",
"Ivan Titov",
"Jie Fu"
]
| https://github.com/qiuzh20/RMoE | The scaling of large language models (LLMs) has revolutionized their capabilities in various tasks, yet this growth must be matched with efficient computational strategies. The Mixture-of-Experts (MoE) architecture stands out for its ability to scale model size without significantly increasing training costs. Despite their advantages, current MoE models often display parameter inefficiency. For instance, a pre-trained MoE-based LLM with 52 billion parameters might perform comparably to a standard model with 6.7 billion parameters. Being a crucial part of MoE, current routers in different layers independently assign tokens without leveraging historical routing information, potentially leading to suboptimal token-expert combinations and the parameter inefficiency problem. To alleviate this issue, we introduce the Layerwise Recurrent Router for Mixture-of-Experts (RMoE). RMoE leverages a Gated Recurrent Unit (GRU) to establish dependencies between routing decisions across consecutive layers. Such layerwise recurrence can be computed efficiently in parallel over input tokens and introduces negligible costs. Our extensive empirical evaluations demonstrate that RMoE-based language models consistently outperform a spectrum of baseline models. Furthermore, RMoE integrates a novel computation stage orthogonal to existing methods, allowing seamless compatibility with other MoE architectures. Our analyses attribute RMoE's gains to its effective cross-layer information sharing, which also improves expert selection and diversity. Our code is at https://github.com/qiuzh20/RMoE |
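A minimal sketch of a layerwise-recurrent router in the spirit of RMoE: a GRU cell threads a routing state from one layer's router to the next before the gating projection. For brevity a single router is reused across layers here, whereas each layer would normally have its own parameters; dimensions and the top-k policy are placeholders.

```python
import torch
import torch.nn as nn

class RecurrentRouter(nn.Module):
    """MoE router whose GRU hidden state carries routing information from the
    previous layer's router into the current one."""

    def __init__(self, dim: int, n_experts: int, hidden: int = 64, top_k: int = 2):
        super().__init__()
        self.gru = nn.GRUCell(dim, hidden)
        self.gate = nn.Linear(hidden, n_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor, h_prev: torch.Tensor):
        # x: (tokens, dim); h_prev: (tokens, hidden) from the previous layer.
        h = self.gru(x, h_prev)                      # recurrence across *layers*
        logits = self.gate(h)
        weights, experts = logits.softmax(-1).topk(self.top_k, dim=-1)
        return weights, experts, h                   # h is passed to the next layer

tokens = torch.randn(16, 128)
router = RecurrentRouter(dim=128, n_experts=8)
h = torch.zeros(16, 64)
for layer in range(4):                               # routing state threads the layers
    w, e, h = router(tokens, h)
print(w.shape, e.shape)  # torch.Size([16, 2]) torch.Size([16, 2])
```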
2024-08-14T00:00:00 | 2408.06697 | SlotLifter: Slot-guided Feature Lifting for Learning Object-centric Radiance Fields | [
"Yu Liu",
"Baoxiong Jia",
"Yixin Chen",
"Siyuan Huang"
]
| The ability to distill object-centric abstractions from intricate visual scenes underpins human-level generalization. Despite the significant progress in object-centric learning methods, learning object-centric representations in the 3D physical world remains a crucial challenge. In this work, we propose SlotLifter, a novel object-centric radiance model addressing scene reconstruction and decomposition jointly via slot-guided feature lifting. Such a design unites object-centric learning representations and image-based rendering methods, offering state-of-the-art performance in scene decomposition and novel-view synthesis on four challenging synthetic and four complex real-world datasets, outperforming existing 3D object-centric learning methods by a large margin. Through extensive ablative studies, we showcase the efficacy of designs in SlotLifter, revealing key insights for potential future directions. |
|
2024-08-14T00:00:00 | 2408.06506 | TacSL: A Library for Visuotactile Sensor Simulation and Learning | [
"Iretiayo Akinola",
"Jie Xu",
"Jan Carius",
"Dieter Fox",
"Yashraj Narang"
]
| For both humans and robots, the sense of touch, known as tactile sensing, is critical for performing contact-rich manipulation tasks. Three key challenges in robotic tactile sensing are 1) interpreting sensor signals, 2) generating sensor signals in novel scenarios, and 3) learning sensor-based policies. For visuotactile sensors, interpretation has been facilitated by their close relationship with vision sensors (e.g., RGB cameras). However, generation is still difficult, as visuotactile sensors typically involve contact, deformation, illumination, and imaging, all of which are expensive to simulate; in turn, policy learning has been challenging, as simulation cannot be leveraged for large-scale data collection. We present TacSL (taxel), a library for GPU-based visuotactile sensor simulation and learning. TacSL can be used to simulate visuotactile images and extract contact-force distributions over 200 times faster than the prior state-of-the-art, all within the widely-used Isaac Gym simulator. Furthermore, TacSL provides a learning toolkit containing multiple sensor models, contact-intensive training environments, and online/offline algorithms that can facilitate policy learning for sim-to-real applications. On the algorithmic side, we introduce a novel online reinforcement-learning algorithm called asymmetric actor-critic distillation, designed to effectively and efficiently learn tactile-based policies in simulation that can transfer to the real world. Finally, we demonstrate the utility of our library and algorithms by evaluating the benefits of distillation and multimodal sensing for contact-rich manipulation tasks, and most critically, performing sim-to-real transfer. Supplementary videos and results are at https://iakinola23.github.io/tacsl/. |
|
2024-08-14T00:00:00 | 2408.06281 | MovieSum: An Abstractive Summarization Dataset for Movie Screenplays | [
"Rohit Saxena",
"Frank Keller"
]
| Movie screenplay summarization is challenging, as it requires an understanding of long input contexts and various elements unique to movies. Large language models have shown significant advancements in document summarization, but they often struggle with processing long input contexts. Furthermore, while television transcripts have received attention in recent studies, movie screenplay summarization remains underexplored. To stimulate research in this area, we present a new dataset, MovieSum, for abstractive summarization of movie screenplays. This dataset comprises 2200 movie screenplays accompanied by their Wikipedia plot summaries. We manually formatted the movie screenplays to represent their structural elements. Compared to existing datasets, MovieSum possesses several distinctive features: (1) It includes movie screenplays, which are longer than scripts of TV episodes. (2) It is twice the size of previous movie screenplay datasets. (3) It provides metadata with IMDb IDs to facilitate access to additional external knowledge. We also show the results of recently released large language models applied to summarization on our dataset to provide a detailed baseline. |
|
2024-08-14T00:00:00 | 2408.05928 | Adapting General Disentanglement-Based Speaker Anonymization for Enhanced Emotion Preservation | [
"Xiaoxiao Miao",
"Yuxiang Zhang",
"Xin Wang",
"Natalia Tomashenko",
"Donny Cheng Lock Soh",
"Ian Mcloughlin"
]
| A general disentanglement-based speaker anonymization system typically separates speech into content, speaker, and prosody features using individual encoders. This paper explores how to adapt such a system when a new speech attribute, for example, emotion, needs to be preserved to a greater extent. While existing systems are good at anonymizing speaker embeddings, they are not designed to preserve emotion. Two strategies for this are examined. First, we show that integrating emotion embeddings from a pre-trained emotion encoder can help preserve emotional cues, even though this approach slightly compromises privacy protection. Alternatively, we propose an emotion compensation strategy as a post-processing step applied to anonymized speaker embeddings. This conceals the original speaker's identity and reintroduces the emotional traits lost during speaker embedding anonymization. Specifically, we model the emotion attribute using support vector machines to learn separate boundaries for each emotion. During inference, the original speaker embedding is processed in two ways: one, by an emotion indicator to predict emotion and select the emotion-matched SVM accurately; and two, by a speaker anonymizer to conceal speaker characteristics. The anonymized speaker embedding is then modified along the corresponding SVM boundary towards an enhanced emotional direction to preserve the emotional cues. The proposed strategies are also expected to be useful for adapting a general disentanglement-based speaker anonymization system to preserve other target paralinguistic attributes, with potential for a range of downstream tasks. |
|
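The emotion-compensation step described above, learning a per-emotion SVM boundary over speaker embeddings and pushing the anonymized embedding along that boundary's normal, can be sketched on toy data as follows; the embedding dimensionality and step size are arbitrary placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy speaker embeddings: "happy" embeddings are shifted along one direction.
neutral = rng.normal(0.0, 1.0, size=(200, 16))
happy = rng.normal(0.0, 1.0, size=(200, 16)) + np.eye(16)[0] * 2.0
X = np.vstack([neutral, happy])
y = np.array([0] * 200 + [1] * 200)           # 1 = target emotion present

# One binary SVM per emotion learns a separating boundary (here just "happy").
svm = LinearSVC().fit(X, y)
normal = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

def compensate(anon_embedding: np.ndarray, strength: float = 1.5) -> np.ndarray:
    """Push the anonymized speaker embedding along the emotion SVM's normal
    direction to reintroduce emotional cues lost during anonymization."""
    return anon_embedding + strength * normal

anonymized = rng.normal(0.0, 1.0, size=16)    # emotion cues largely removed
restored = compensate(anonymized)
print(svm.decision_function([anonymized, restored]))  # second score is higher
```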
2024-08-14T00:00:00 | 2408.06396 | Design Proteins Using Large Language Models: Enhancements and Comparative Analyses | [
"Kamyar Zeinalipour",
"Neda Jamshidi",
"Monica Bianchini",
"Marco Maggini",
"Marco Gori"
]
| Pre-trained LLMs have demonstrated substantial capabilities across a range of conventional natural language processing (NLP) tasks, such as summarization and entity recognition. In this paper, we explore the application of LLMs in the generation of high-quality protein sequences. Specifically, we adopt a suite of pre-trained LLMs, including Mistral-7B, Llama-2-7B, Llama-3-8B, and gemma-7B, to produce valid protein sequences. All of these models are publicly available. Unlike previous work in this field, our approach utilizes a relatively small dataset comprising 42,000 distinct human protein sequences. We retrain these models to process protein-related data, ensuring the generation of biologically feasible protein structures. Our findings demonstrate that even with limited data, the adapted models exhibit efficiency comparable to established protein-focused models such as ProGen varieties, ProtGPT2, and ProLLaMA, which were trained on millions of protein sequences. To validate and quantify the performance of our models, we conduct comparative analyses employing standard metrics such as pLDDT, RMSD, TM-score, and REU. Furthermore, we commit to making the trained versions of all four models publicly available, fostering greater transparency and collaboration in the field of computational biology. |
|
2024-08-14T00:00:00 | 2408.06273 | FuxiTranyu: A Multilingual Large Language Model Trained with Balanced Data | [
"Haoran Sun",
"Renren Jin",
"Shaoyang Xu",
"Leiyu Pan",
"Supryadi",
"Menglong Cui",
"Jiangcun Du",
"Yikun Lei",
"Lei Yang",
"Ling Shi",
"Juesi Xiao",
"Shaolin Zhu",
"Deyi Xiong"
]
| Large language models (LLMs) have demonstrated prowess in a wide range of tasks. However, many LLMs exhibit significant performance discrepancies between high- and low-resource languages. To mitigate this challenge, we present FuxiTranyu, an open-source multilingual LLM, which is designed to satisfy the need of the research community for balanced and high-performing multilingual capabilities. FuxiTranyu-8B, the base model with 8 billion parameters, is trained from scratch on a meticulously balanced multilingual data repository that contains 600 billion tokens covering 43 natural languages and 16 programming languages. In addition to the base model, we also develop two instruction-tuned models: FuxiTranyu-8B-SFT that is fine-tuned on a diverse multilingual instruction dataset, and FuxiTranyu-8B-DPO that is further refined with DPO on a preference dataset for enhanced alignment ability. Extensive experiments on a wide range of multilingual benchmarks demonstrate the competitive performance of FuxiTranyu against existing multilingual LLMs, e.g., BLOOM-7B, PolyLM-13B, Llama-2-Chat-7B and Mistral-7B-Instruct. Interpretability analyses at both the neuron and representation level suggest that FuxiTranyu is able to learn consistent multilingual representations across different languages. To promote further research into multilingual LLMs and their working mechanisms, we release both the base and instruction-tuned FuxiTranyu models together with 58 pretraining checkpoints at HuggingFace and Github. |
|
2024-08-14T00:00:00 | 2408.06663 | Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models | [
"Kaiser Sun",
"Mark Dredze"
]
| The development of large language models leads to the formation of a pre-train-then-align paradigm, in which the model is typically pre-trained on a large text corpus and undergoes a tuning stage to align the model with human preference or downstream tasks. In this work, we investigate the relationship between pre-training and fine-tuning by fine-tuning multiple intermediate pre-trained model checkpoints. Our results on 18 datasets suggest that i) continual pre-training improves the model in a latent way that is unveiled only after fine-tuning; ii) with extra fine-tuning, the datasets on which the model does not demonstrate capability during pre-training gain much more than those on which the model already performs well; iii) although the model benefits significantly from supervised fine-tuning, it may forget previously known domain knowledge and tasks that are not seen during fine-tuning; iv) the model exhibits high sensitivity to evaluation prompts after supervised fine-tuning, but this sensitivity can be alleviated by more pre-training. |
|
2024-08-14T00:00:00 | 2408.05492 | ZePo: Zero-Shot Portrait Stylization with Faster Sampling | [
"Jin Liu",
"Huaibo Huang",
"Jie Cao",
"Ran He"
]
| https://github.com/liujin112/ZePo | Diffusion-based text-to-image generation models have significantly advanced the field of art content synthesis. However, current portrait stylization methods generally require either model fine-tuning based on examples or the employment of DDIM Inversion to revert images to noise space, both of which substantially decelerate the image generation process. To overcome these limitations, this paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps. We observed that Latent Consistency Models employing consistency distillation can effectively extract representative Consistency Features from noisy images. To blend the Consistency Features extracted from both content and style images, we introduce a Style Enhancement Attention Control technique that meticulously merges content and style features within the attention space of the target image. Moreover, we propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control. Extensive experiments have validated the effectiveness of our proposed framework in enhancing stylization efficiency and fidelity. The code is available at https://github.com/liujin112/ZePo. |
2024-08-14T00:00:00 | 2408.07060 | Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents | [
"Kexun Zhang",
"Weiran Yao",
"Zuxin Liu",
"Yihao Feng",
"Zhiwei Liu",
"Rithesh Murthy",
"Tian Lan",
"Lei Li",
"Renze Lou",
"Jiacheng Xu",
"Bo Pang",
"Yingbo Zhou",
"Shelby Heinecke",
"Silvio Savarese",
"Huan Wang",
"Caiming Xiong"
]
| Large language model (LLM) agents have shown great potential in solving real-world software engineering (SWE) problems. The most advanced open-source SWE agent can resolve over 27% of real GitHub issues in SWE-Bench Lite. However, these sophisticated agent frameworks exhibit varying strengths, excelling in certain tasks while underperforming in others. To fully harness the diversity of these agents, we propose DEI (Diversity Empowered Intelligence), a framework that leverages their unique expertise. DEI functions as a meta-module atop existing SWE agent frameworks, managing agent collectives for enhanced problem-solving. Experimental results show that a DEI-guided committee of agents is able to surpass the best individual agent's performance by a large margin. For instance, a group of open-source SWE agents, with a maximum individual resolve rate of 27.3% on SWE-Bench Lite, can achieve a 34.3% resolve rate with DEI, making a 25% improvement and beating most closed-source solutions. Our best-performing group excels with a 55% resolve rate, securing the highest ranking on SWE-Bench Lite. Our findings contribute to the growing body of research on collaborative AI systems and their potential to solve complex software engineering challenges. |
|
2024-08-14T00:00:00 | 2408.07246 | Seeing and Understanding: Bridging Vision with Chemical Knowledge Via ChemVLM | [
"Junxian Li",
"Di Zhang",
"Xunzhi Wang",
"Zeying Hao",
"Jingdi Lei",
"Qian Tan",
"Cai Zhou",
"Wei Liu",
"Weiyun Wang",
"Zhe Chen",
"Wenhai Wang",
"Wei Li",
"Shufei Zhang",
"Mao Su",
"Wanli Ouyang",
"Yuqiang Li",
"Dongzhan Zhou"
]
| In this technical report, we propose ChemVLM, the first open-source multimodal large language model dedicated to the field of chemistry, designed to address the incompatibility between chemical image understanding and text analysis. Built upon the VIT-MLP-LLM architecture, we leverage ChemLLM-20B as the foundational large model, endowing our model with robust capabilities in understanding and utilizing chemical text knowledge. Additionally, we employ InternVIT-6B as a powerful image encoder. We have curated high-quality data from the chemical domain, including molecules, reaction formulas, and chemistry examination data, and compiled these into a bilingual multimodal question-answering dataset. We test the performance of our model on multiple open-source benchmarks and three custom evaluation sets. Experimental results demonstrate that our model achieves excellent performance, securing state-of-the-art results in five out of six involved tasks. Our model can be found at https://huggingface.co/AI4Chem/ChemVLM-26B. |
|
2024-08-15T00:00:00 | 2408.07116 | Generative Photomontage | [
"Sean J. Liu",
"Nupur Kumari",
"Ariel Shamir",
"Jun-Yan Zhu"
]
| Text-to-image models are powerful tools for image creation. However, the generation process is akin to a dice roll and makes it difficult to achieve a single image that captures everything a user wants. In this paper, we propose a framework for creating the desired image by compositing it from various parts of generated images, in essence forming a Generative Photomontage. Given a stack of images generated by ControlNet using the same input condition and different seeds, we let users select desired parts from the generated results using a brush stroke interface. We introduce a novel technique that takes in the user's brush strokes, segments the generated images using a graph-based optimization in diffusion feature space, and then composites the segmented regions via a new feature-space blending method. Our method faithfully preserves the user-selected regions while compositing them harmoniously. We demonstrate that our flexible framework can be used for many applications, including generating new appearance combinations, fixing incorrect shapes and artifacts, and improving prompt alignment. We show compelling results for each application and demonstrate that our method outperforms existing image blending methods and various baselines. |
|
2024-08-15T00:00:00 | 2408.07089 | InfinityMATH: A Scalable Instruction Tuning Dataset in Programmatic Mathematical Reasoning | [
"Bo-Wen Zhang",
"Yan Yan",
"Lin Li",
"Guang Liu"
]
| Recent advancements in Chain-of-Thoughts (CoT) and Program-of-Thoughts (PoT) methods have greatly enhanced language models' mathematical reasoning capabilities, facilitating their integration into instruction tuning datasets with LLMs. However, existing methods for large-scale dataset creation require substantial seed data and high computational costs for data synthesis, posing significant challenges for scalability. We introduce InfinityMATH, a scalable instruction tuning dataset for programmatic mathematical reasoning. The construction pipeline emphasizes decoupling numbers from mathematical problems to synthesize number-independent programs, enabling efficient and flexible scaling while minimizing dependency on specific numerical values. Fine-tuning experiments with open-source language and code models, such as Llama2 and CodeLlama, demonstrate the practical benefits of InfinityMATH. These fine-tuned models showed significant relative improvements on both in-domain and out-of-domain benchmarks, ranging from 184.7% to 514.3% on average. Additionally, these models exhibited high robustness on the GSM8K+ and MATH+ benchmarks, which are enhanced versions of the test sets with simple number variations. InfinityMATH ensures that models are more versatile and effective across a broader range of mathematical problems. The data is available at https://huggingface.co/datasets/flagopen/InfinityMATH. |
|
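The core "decouple numbers from problems" idea can be sketched as a simple templating step: literal numbers are replaced by variables so a single number-independent program can be re-instantiated with new values. The regex and naming scheme are illustrative assumptions, not the dataset's actual pipeline.

```python
import re
from typing import Dict, Tuple

def abstract_numbers(problem: str) -> Tuple[str, Dict[str, float]]:
    """Replace literal numbers with named variables so one program template
    can be reused across many numeric instantiations."""
    values: Dict[str, float] = {}

    def repl(match: re.Match) -> str:
        name = f"var{len(values) + 1}"
        values[name] = float(match.group())
        return name

    return re.sub(r"\d+(?:\.\d+)?", repl, problem), values

problem = "Tom has 12 apples and buys 3 more bags with 5 apples each."
template, values = abstract_numbers(problem)
print(template)   # Tom has var1 apples and buys var2 more bags with var3 apples each.
print(values)     # {'var1': 12.0, 'var2': 3.0, 'var3': 5.0}

# A number-independent Program-of-Thought solution for the template:
def solution(var1, var2, var3):
    return var1 + var2 * var3

print(solution(**values))                 # 27.0 (original numbers)
print(solution(var1=7, var2=2, var3=4))   # 15 -- same program, new numbers
```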
2024-08-15T00:00:00 | 2408.07410 | Aquila2 Technical Report | [
"Bo-Wen Zhang",
"Liangdong Wang",
"Jijie Li",
"Shuhao Gu",
"Xinya Wu",
"Zhengduo Zhang",
"Boyan Gao",
"Yulong Ao",
"Guang Liu"
]
| This paper introduces the Aquila2 series, which comprises a wide range of bilingual models with parameter sizes of 7, 34, and 70 billion. These models are trained based on an innovative framework named HeuriMentor (HM), which offers real-time insights into model convergence and enhances the training process and data management. The HM System, comprising the Adaptive Training Engine (ATE), Training State Monitor (TSM), and Data Management Unit (DMU), allows for precise monitoring of the model's training progress and enables efficient optimization of data distribution, thereby enhancing training effectiveness. Extensive evaluations show that the Aquila2 model series performs comparably well on both English and Chinese benchmarks. Specifically, Aquila2-34B demonstrates only a slight decrease in performance when quantized to Int4. Furthermore, we have made our training code (https://github.com/FlagOpen/FlagScale) and model weights (https://github.com/FlagAI-Open/Aquila2) publicly available to support ongoing research and the development of applications. |
|
2024-08-15T00:00:00 | 2408.07540 | 3D Gaussian Editing with A Single Image | [
"Guan Luo",
"Tian-Xing Xu",
"Ying-Tian Liu",
"Xiao-Xiong Fan",
"Fang-Lue Zhang",
"Song-Hai Zhang"
]
| The modeling and manipulation of 3D scenes captured from the real world are pivotal in various applications, attracting growing research interest. While previous works on editing have achieved interesting results through manipulating 3D meshes, they often require accurately reconstructed meshes to perform editing, which limits their application in 3D content generation. To address this gap, we introduce a novel single-image-driven 3D scene editing approach based on 3D Gaussian Splatting, enabling intuitive manipulation via directly editing the content on a 2D image plane. Our method learns to optimize the 3D Gaussians to align with an edited version of the image rendered from a user-specified viewpoint of the original scene. To capture long-range object deformation, we introduce positional loss into the optimization process of 3D Gaussian Splatting and enable gradient propagation through reparameterization. To handle occluded 3D Gaussians when rendering from the specified viewpoint, we build an anchor-based structure and employ a coarse-to-fine optimization strategy capable of handling long-range deformation while maintaining structural stability. Furthermore, we design a novel masking strategy to adaptively identify non-rigid deformation regions for fine-scale modeling. Extensive experiments show the effectiveness of our method in handling geometric details, long-range, and non-rigid deformation, demonstrating superior editing flexibility and quality compared to previous approaches. |
|
2024-08-15T00:00:00 | 2408.07547 | PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation | [
"Sang-Hoon Lee",
"Ha-Yeong Choi",
"Seong-Whan Lee"
]
| https://github.com/sh-lee-prml/PeriodWave | Recently, universal waveform generation tasks have been investigated conditioned on various out-of-distribution scenarios. Although GAN-based methods have shown their strength in fast waveform generation, they are vulnerable to train-inference mismatch scenarios such as two-stage text-to-speech. Meanwhile, diffusion-based models have shown their powerful generative performance in other domains; however, they stay out of the limelight due to slow inference speed in waveform generation tasks. Above all, there is no generator architecture that can explicitly disentangle the natural periodic features of high-resolution waveform signals. In this paper, we propose PeriodWave, a novel universal waveform generation model. First, we introduce a period-aware flow matching estimator that can capture the periodic features of the waveform signal when estimating the vector fields. Additionally, we utilize a multi-period estimator that avoids overlaps to capture different periodic features of waveform signals. Although increasing the number of periods can improve the performance significantly, this requires more computational costs. To reduce this issue, we also propose a single period-conditional universal estimator that can run feed-forward in parallel via period-wise batch inference. Additionally, we utilize discrete wavelet transform to losslessly disentangle the frequency information of waveform signals for high-frequency modeling, and introduce FreeU to reduce the high-frequency noise for waveform generation. The experimental results demonstrated that our model outperforms the previous models both in Mel-spectrogram reconstruction and text-to-speech tasks. All source code will be available at https://github.com/sh-lee-prml/PeriodWave. |
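For context, the generic conditional flow matching objective that waveform generators of this kind are trained with can be sketched as below; the estimator here is a toy MLP rather than the paper's period-aware, multi-period architecture.

```python
import torch
import torch.nn as nn

def flow_matching_loss(estimator: nn.Module, x1: torch.Tensor,
                       cond: torch.Tensor) -> torch.Tensor:
    """Generic conditional flow matching: regress the vector field of a
    straight path from noise x0 to data x1."""
    x0 = torch.randn_like(x1)                       # noise sample
    t = torch.rand(x1.shape[0], device=x1.device).view(-1, 1)
    x_t = (1 - t) * x0 + t * x1                     # linear interpolation path
    target_v = x1 - x0                              # its (constant) velocity
    pred_v = estimator(x_t, t, cond)
    return ((pred_v - target_v) ** 2).mean()

# Toy estimator standing in for the period-aware model.
class ToyEstimator(nn.Module):
    def __init__(self, dim=256, cond_dim=80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + cond_dim + 1, 512),
                                 nn.SiLU(), nn.Linear(512, dim))
    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond, t], dim=-1))

wave = torch.randn(8, 256)                          # flattened waveform chunks
mel = torch.randn(8, 80)                            # conditioning (e.g., mel frames)
print(flow_matching_loss(ToyEstimator(), wave, mel).item())
```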
2024-08-15T00:00:00 | 2408.07416 | Rethinking Open-Vocabulary Segmentation of Radiance Fields in 3D Space | [
"Hyunjee Lee",
"Youngsik Yun",
"Jeongmin Bae",
"Seoha Kim",
"Youngjung Uh"
]
| Understanding the 3D semantics of a scene is a fundamental problem for various scenarios such as embodied agents. While NeRFs and 3DGS excel at novel-view synthesis, previous methods for understanding their semantics have been limited to incomplete 3D understanding: their segmentation results are 2D masks and their supervision is anchored at 2D pixels. This paper revisits the problem set to pursue a better 3D understanding of a scene modeled by NeRFs and 3DGS as follows. 1) We directly supervise the 3D points to train the language embedding field. It achieves state-of-the-art accuracy without relying on multi-scale language embeddings. 2) We transfer the pre-trained language field to 3DGS, achieving the first real-time rendering speed without sacrificing training time or accuracy. 3) We introduce a 3D querying and evaluation protocol for assessing the reconstructed geometry and semantics together. Code, checkpoints, and annotations will be available online. Project page: https://hyunji12.github.io/Open3DRF |
|
2024-08-15T00:00:00 | 2408.05366 | DeepSpeak Dataset v1.0 | [
"Sarah Barrington",
"Matyas Bohacek",
"Hany Farid"
]
| We describe a large-scale dataset--DeepSpeak--of real and deepfake footage of people talking and gesturing in front of their webcams. The real videos in this first version of the dataset consist of 9 hours of footage from 220 diverse individuals. Constituting more than 25 hours of footage, the fake videos consist of a range of different state-of-the-art face-swap and lip-sync deepfakes with natural and AI-generated voices. We expect to release future versions of this dataset with different and updated deepfake technologies. This dataset is made freely available for research and non-commercial uses; requests for commercial use will be considered. |
|
2024-08-16T00:00:00 | 2408.08189 | FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance | [
"Jiasong Feng",
"Ao Ma",
"Jing Wang",
"Bo Cheng",
"Xiaodan Liang",
"Dawei Leng",
"Yuhui Yin"
]
| Synthesizing motion-rich and temporally consistent videos remains a challenge in artificial intelligence, especially when dealing with extended durations. Existing text-to-video (T2V) models commonly employ spatial cross-attention for text control, equivalently guiding different frame generations without frame-specific textual guidance. Thus, the model's capacity to comprehend the temporal logic conveyed in prompts and generate videos with coherent motion is restricted. To tackle this limitation, we introduce FancyVideo, an innovative video generator that improves the existing text-control mechanism with the well-designed Cross-frame Textual Guidance Module (CTGM). Specifically, CTGM incorporates the Temporal Information Injector (TII), Temporal Affinity Refiner (TAR), and Temporal Feature Booster (TFB) at the beginning, middle, and end of cross-attention, respectively, to achieve frame-specific textual guidance. Firstly, TII injects frame-specific information from latent features into text conditions, thereby obtaining cross-frame textual conditions. Then, TAR refines the correlation matrix between cross-frame textual conditions and latent features along the time dimension. Lastly, TFB boosts the temporal consistency of latent features. Extensive experiments comprising both quantitative and qualitative evaluations demonstrate the effectiveness of FancyVideo. Our approach achieves state-of-the-art T2V generation results on the EvalCrafter benchmark and facilitates the synthesis of dynamic and consistent videos. Video results are available at https://fancyvideo.github.io/, and we will make our code and model weights publicly available. |
|
2024-08-16T00:00:00 | 2408.08152 | DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search | [
"Huajian Xin",
"Z. Z. Ren",
"Junxiao Song",
"Zhihong Shao",
"Wanjia Zhao",
"Haocheng Wang",
"Bo Liu",
"Liyue Zhang",
"Xuan Lu",
"Qiushi Du",
"Wenjun Gao",
"Qihao Zhu",
"Dejian Yang",
"Zhibin Gou",
"Z. F. Wu",
"Fuli Luo",
"Chong Ruan"
]
| We introduce DeepSeek-Prover-V1.5, an open-source language model designed for theorem proving in Lean 4, which enhances DeepSeek-Prover-V1 by optimizing both training and inference processes. Pre-trained on DeepSeekMath-Base with specialization in formal mathematical languages, the model undergoes supervised fine-tuning using an enhanced formal theorem proving dataset derived from DeepSeek-Prover-V1. Further refinement is achieved through reinforcement learning from proof assistant feedback (RLPAF). Beyond the single-pass whole-proof generation approach of DeepSeek-Prover-V1, we propose RMaxTS, a variant of Monte-Carlo tree search that employs an intrinsic-reward-driven exploration strategy to generate diverse proof paths. DeepSeek-Prover-V1.5 demonstrates significant improvements over DeepSeek-Prover-V1, achieving new state-of-the-art results on the test set of the high school level miniF2F benchmark (63.5%) and the undergraduate level ProofNet benchmark (25.3%). |
|
2024-08-16T00:00:00 | 2408.08313 | Can Large Language Models Understand Symbolic Graphics Programs? | [
"Zeju Qiu",
"Weiyang Liu",
"Haiwen Feng",
"Zhen Liu",
"Tim Z. Xiao",
"Katherine M. Collins",
"Joshua B. Tenenbaum",
"Adrian Weller",
"Michael J. Black",
"Bernhard Schölkopf"
]
| Assessing the capabilities of large language models (LLMs) is often challenging, in part, because it is hard to find tasks to which they have not been exposed during training. We take one step to address this challenge by turning to a new task: focusing on symbolic graphics programs, which are a popular representation for graphics content that procedurally generates visual data. LLMs have shown exciting promise towards program synthesis, but do they understand symbolic graphics programs? Unlike conventional programs, symbolic graphics programs can be translated to graphics content. Here, we characterize an LLM's understanding of symbolic programs in terms of their ability to answer questions related to the graphics content. This task is challenging as the questions are difficult to answer from the symbolic programs alone -- yet, they would be easy to answer from the corresponding graphics content as we verify through a human experiment. To understand symbolic programs, LLMs may need to possess the ability to imagine how the corresponding graphics content would look without directly accessing the rendered visual content. We use this task to evaluate LLMs by creating a large benchmark for the semantic understanding of symbolic graphics programs. This benchmark is built via program-graphics correspondence, hence requiring minimal human efforts. We evaluate current LLMs on our benchmark to elucidate a preliminary assessment of their ability to reason about visual scenes from programs. We find that this task distinguishes existing LLMs and models considered good at reasoning perform better. Lastly, we introduce Symbolic Instruction Tuning (SIT) to improve this ability. Specifically, we query GPT-4o with questions and images generated by symbolic programs. Such data are then used to finetune an LLM. We also find that SIT data can improve the general instruction following ability of LLMs. |
|
2024-08-16T00:00:00 | 2408.08201 | Heavy Labels Out! Dataset Distillation with Label Space Lightening | [
"Ruonan Yu",
"Songhua Liu",
"Zigeng Chen",
"Jingwen Ye",
"Xinchao Wang"
]
| Dataset distillation or condensation aims to condense a large-scale training dataset into a much smaller synthetic one such that the training performance of distilled and original sets on neural networks are similar. Although the number of training samples can be reduced substantially, current state-of-the-art methods heavily rely on enormous soft labels to achieve satisfactory performance. As a result, the required storage can be comparable even to original datasets, especially for large-scale ones. To solve this problem, instead of storing these heavy labels, we propose a novel label-lightening framework termed HeLlO aiming at effective image-to-label projectors, with which synthetic labels can be directly generated online from synthetic images. Specifically, to construct such projectors, we leverage prior knowledge in open-source foundation models, e.g., CLIP, and introduce a LoRA-like fine-tuning strategy to mitigate the gap between pre-trained and target distributions, so that original models for soft-label generation can be distilled into a group of low-rank matrices. Moreover, an effective image optimization method is proposed to further mitigate the potential error between the original and distilled label generators. Extensive experiments demonstrate that with only about 0.003% of the original storage required for a complete set of soft labels, we achieve comparable performance to current state-of-the-art dataset distillation methods on large-scale datasets. Our code will be available. |
|
2024-08-16T00:00:00 | 2408.08000 | MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing | [
"Chenjie Cao",
"Chaohui Yu",
"Yanwei Fu",
"Fan Wang",
"Xiangyang Xue"
]
| Novel View Synthesis (NVS) and 3D generation have recently achieved prominent improvements. However, these works mainly focus on confined categories or synthetic 3D assets, which are discouraged from generalizing to challenging in-the-wild scenes and fail to be employed with 2D synthesis directly. Moreover, these methods heavily depended on camera poses, limiting their real-world applications. To overcome these issues, we propose MVInpainter, re-formulating the 3D editing as a multi-view 2D inpainting task. Specifically, MVInpainter partially inpaints multi-view images with the reference guidance rather than intractably generating an entirely novel view from scratch, which largely simplifies the difficulty of in-the-wild NVS and leverages unmasked clues instead of explicit pose conditions. To ensure cross-view consistency, MVInpainter is enhanced by video priors from motion components and appearance guidance from concatenated reference key&value attention. Furthermore, MVInpainter incorporates slot attention to aggregate high-level optical flow features from unmasked regions to control the camera movement with pose-free training and inference. Sufficient scene-level experiments on both object-centric and forward-facing datasets verify the effectiveness of MVInpainter, including diverse tasks, such as multi-view object removal, synthesis, insertion, and replacement. The project page is https://ewrfcas.github.io/MVInpainter/. |
|
2024-08-16T00:00:00 | 2408.08019 | Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization | [
"Sang-Hoon Lee",
"Ha-Yeong Choi",
"Seong-Whan Lee"
]
| https://github.com/sh-lee-prml/PeriodWave | This paper introduces PeriodWave-Turbo, a high-fidelity and highly efficient waveform generation model via adversarial flow matching optimization. Recently, conditional flow matching (CFM) generative models have been successfully adopted for waveform generation tasks, leveraging a single vector field estimation objective for training. Although these models can generate high-fidelity waveform signals, they require significantly more ODE steps compared to GAN-based models, which only need a single generation step. Additionally, the generated samples often lack high-frequency information due to noisy vector field estimation, which fails to ensure high-frequency reproduction. To address this limitation, we enhance pre-trained CFM-based generative models by incorporating a fixed-step generator modification. We utilize reconstruction losses and adversarial feedback to accelerate high-fidelity waveform generation. Through adversarial flow matching optimization, it only requires 1,000 steps of fine-tuning to achieve state-of-the-art performance across various objective metrics. Moreover, we significantly reduce the number of inference steps from 16 to 2 or 4. Additionally, by scaling up the backbone of PeriodWave from 29M to 70M parameters for improved generalization, PeriodWave-Turbo achieves unprecedented performance, with a perceptual evaluation of speech quality (PESQ) score of 4.454 on the LibriTTS dataset. Audio samples, source code and checkpoints will be available at https://github.com/sh-lee-prml/PeriodWave. |
2024-08-16T00:00:00 | 2408.08172 | Towards flexible perception with visual memory | [
"Robert Geirhos",
"Priyank Jaini",
"Austin Stone",
"Sourabh Medapati",
"Xi Yi",
"George Toderici",
"Abhijit Ogale",
"Jonathon Shlens"
]
| Training a neural network is a monolithic endeavor, akin to carving knowledge into stone: once the process is completed, editing the knowledge in a network is nearly impossible, since all information is distributed across the network's weights. We here explore a simple, compelling alternative by marrying the representational power of deep neural networks with the flexibility of a database. Decomposing the task of image classification into image similarity (from a pre-trained embedding) and search (via fast nearest neighbor retrieval from a knowledge database), we build a simple and flexible visual memory that has the following key capabilities: (1.) The ability to flexibly add data across scales: from individual samples all the way to entire classes and billion-scale data; (2.) The ability to remove data through unlearning and memory pruning; (3.) An interpretable decision-mechanism on which we can intervene to control its behavior. Taken together, these capabilities comprehensively demonstrate the benefits of an explicit visual memory. We hope that it might contribute to a conversation on how knowledge should be represented in deep vision models -- beyond carving it in "stone" weights. |
|
2024-08-16T00:00:00 | 2408.08274 | BAM! Just Like That: Simple and Efficient Parameter Upcycling for Mixture of Experts | [
"Qizhen Zhang",
"Nikolas Gritsch",
"Dwaraknath Gnaneshwar",
"Simon Guo",
"David Cairuz",
"Bharat Venkitesh",
"Jakob Foerster",
"Phil Blunsom",
"Sebastian Ruder",
"Ahmet Ustun",
"Acyr Locatelli"
]
| The Mixture of Experts (MoE) framework has become a popular architecture for large language models due to its superior performance over dense models. However, training MoEs from scratch in a large-scale regime is prohibitively expensive. Existing methods mitigate this by pre-training multiple dense expert models independently and using them to initialize an MoE. This is done by using experts' feed-forward network (FFN) to initialize the MoE's experts while merging other parameters. However, this method limits the reuse of dense model parameters to only the FFN layers, thereby constraining the advantages when "upcycling" these models into MoEs. We propose BAM (Branch-Attend-Mix), a simple yet effective method that addresses this shortcoming. BAM makes full use of specialized dense models by not only using their FFN to initialize the MoE layers but also leveraging experts' attention parameters fully by initializing them into a soft variant of Mixture of Attention (MoA) layers. We explore two methods for upcycling attention parameters: 1) initializing separate attention experts from dense models including all attention parameters for the best model performance; and 2) sharing key and value parameters across all experts to facilitate better inference efficiency. To further improve efficiency, we adopt a parallel attention transformer architecture to MoEs, which allows the attention experts and FFN experts to be computed concurrently. Our experiments on seed models ranging from 590 million to 2 billion parameters demonstrate that BAM surpasses baselines in both perplexity and downstream task performance, within the same computational and data constraints. |
|
2024-08-16T00:00:00 | 2408.07852 | Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability | [
"Jiri Hron",
"Laura Culp",
"Gamaleldin Elsayed",
"Rosanne Liu",
"Ben Adlam",
"Maxwell Bileschi",
"Bernd Bohnet",
"JD Co-Reyes",
"Noah Fiedel",
"C. Daniel Freeman",
"Izzeddin Gur",
"Kathleen Kenealy",
"Jaehoon Lee",
"Peter J. Liu",
"Gaurav Mishra",
"Igor Mordatch",
"Azade Nova",
"Roman Novak",
"Aaron Parisi",
"Jeffrey Pennington",
"Alex Rizkowsky",
"Isabelle Simpson",
"Hanie Sedghi",
"Jascha Sohl-dickstein",
"Kevin Swersky",
"Sharad Vikram",
"Tris Warkentin",
"Lechao Xiao",
"Kelvin Xu",
"Jasper Snoek",
"Simon Kornblith"
]
| While many capabilities of language models (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood. Hallucinations come in many forms, and there is no universally accepted definition. We thus focus on studying only those hallucinations where a correct answer appears verbatim in the training set. To fully control the training data content, we construct a knowledge graph (KG)-based dataset, and use it to train a set of increasingly large LMs. We find that for a fixed dataset, larger and longer-trained LMs hallucinate less. However, hallucinating on ≤5% of the training data requires an order of magnitude larger model, and thus an order of magnitude more compute, than Hoffmann et al. (2022) reported was optimal. Given this costliness, we study how hallucination detectors depend on scale. While we see that detector size improves performance on a fixed LM's outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations. |
|
2024-08-16T00:00:00 | 2408.08072 | I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm | [
"Yiming Liang",
"Ge Zhang",
"Xingwei Qu",
"Tianyu Zheng",
"Jiawei Guo",
"Xinrun Du",
"Zhenzhu Yang",
"Jiaheng Liu",
"Chenghua Lin",
"Lei Ma",
"Wenhao Huang",
"Jiajun Zhang"
]
| Large Language Models (LLMs) have achieved significant advancements; however, the common learning paradigm treats LLMs as passive information repositories, neglecting their potential for active learning and alignment. Some approaches train LLMs using their own generated synthetic data, exploring the possibility of active alignment. However, there is still a huge gap between these one-time alignment methods and the continuous automatic alignment of humans. In this paper, we introduce I-SHEEP, an Iterative Self-EnHancEmEnt Paradigm. This human-like paradigm enables LLMs to continuously self-align from scratch with no external data. Compared to the one-time alignment method Dromedary (Sun et al., 2023), which corresponds to the first iteration in this paper, I-SHEEP can significantly enhance capacities on both Qwen and Llama models. I-SHEEP achieves a maximum relative improvement of 78.2% in Alpaca Eval, 24.0% in MT Bench, and an absolute increase of 8.88% in IFEval accuracy over subsequent iterations in the Qwen-1.5 72B model. Additionally, I-SHEEP surpasses the base model in various standard benchmark generation tasks, achieving an average improvement of 24.77% in code generation tasks, 12.04% in TriviaQA, and 20.29% in SQuAD. We also provide new insights based on the experimental results. Our code, datasets, and models are available at https://anonymous.4open.science/r/I-SHEEP. |
|
2024-08-16T00:00:00 | 2408.07990 | FuseChat: Knowledge Fusion of Chat Models | [
"Fanqi Wan",
"Longguang Zhong",
"Ziyi Yang",
"Ruijun Chen",
"Xiaojun Quan"
]
| https://github.com/fanqiwan/FuseAI | While training large language models (LLMs) from scratch can indeed lead to models with distinct capabilities and strengths, it incurs substantial costs and may lead to redundancy in competencies. Knowledge fusion aims to integrate existing LLMs of diverse architectures and capabilities into a more potent LLM through lightweight continual training, thereby reducing the need for costly LLM development. In this work, we propose a new framework for the knowledge fusion of chat LLMs through two main stages, resulting in FuseChat. Firstly, we conduct pairwise knowledge fusion on source chat LLMs of varying structures and scales to create multiple target LLMs with identical structure and size via lightweight fine-tuning. During this process, a statistics-based token alignment approach is introduced as the cornerstone for fusing LLMs with different structures. Secondly, we merge these target LLMs within the parameter space, where we propose a novel method for determining the merging coefficients based on the magnitude of parameter updates before and after fine-tuning. We implement and validate FuseChat using six prominent chat LLMs with diverse architectures and scales, including OpenChat-3.5-7B, Starling-LM-7B-alpha, NH2-SOLAR-10.7B, InternLM2-Chat-20B, Mixtral-8x7B-Instruct, and Qwen-1.5-Chat-72B. Experimental results on two instruction-following benchmarks, AlpacaEval 2.0 and MT-Bench, demonstrate the superiority of FuseChat-7B over baselines of various sizes. Our model is even comparable to the larger Mixtral-8x7B-Instruct and approaches GPT-3.5-Turbo-1106 on MT-Bench. Our code, model weights, and data are public at https://github.com/fanqiwan/FuseAI. |
2024-08-16T00:00:00 | 2408.08291 | The ShareLM Collection and Plugin: Contributing Human-Model Chats for the Benefit of the Community | [
"Shachar Don-Yehiya",
"Leshem Choshen",
"Omri Abend"
]
| Human-model conversations provide a window into users' real-world scenarios, behavior, and needs, and thus are a valuable resource for model development and research. While for-profit companies collect user data through the APIs of their models, using it internally to improve their own models, the open source and research community lags behind. We introduce the ShareLM collection, a unified set of human conversations with large language models, and its accompanying plugin, a Web extension for voluntarily contributing user-model conversations. Where few platforms share their chats, the ShareLM plugin adds this functionality, thus allowing users to share conversations from most platforms. The plugin allows the user to rate their conversations, both at the conversation and the response levels, and delete conversations they prefer to keep private before they ever leave the user's local storage. We release the plugin conversations as part of the ShareLM collection, and call for more community effort in the field of open human-model data. The code, plugin, and data are available. |
|
2024-08-19T00:00:00 | 2408.08459 | JPEG-LM: LLMs as Image Generators with Canonical Codec Representations | [
"Xiaochuang Han",
"Marjan Ghazvininejad",
"Pang Wei Koh",
"Yulia Tsvetkov"
]
| Recent work in image and video generation has been adopting the autoregressive LLM architecture due to its generality and potentially easy integration into multi-modal systems. The crux of applying autoregressive training in language generation to visual generation is discretization -- representing continuous data like images and videos as discrete tokens. Common methods of discretizing images and videos include modeling raw pixel values, which are prohibitively lengthy, or vector quantization, which requires convoluted pre-hoc training. In this work, we propose to directly model images and videos as compressed files saved on computers via canonical codecs (e.g., JPEG, AVC/H.264). Using the default Llama architecture without any vision-specific modifications, we pretrain JPEG-LM from scratch to generate images (and AVC-LM to generate videos as a proof of concept), by directly outputting compressed file bytes in JPEG and AVC formats. Evaluation of image generation shows that this simple and straightforward approach is more effective than pixel-based modeling and sophisticated vector quantization baselines (on which our method yields a 31% reduction in FID). Our analysis shows that JPEG-LM has an especial advantage over vector quantization models in generating long-tail visual elements. Overall, we show that using canonical codec representations can help lower the barriers between language generation and visual generation, facilitating future research on multi-modal language/image/video LLMs. |
|
2024-08-19T00:00:00 | 2408.08872 | xGen-MM (BLIP-3): A Family of Open Large Multimodal Models | [
"Le Xue",
"Manli Shu",
"Anas Awadalla",
"Jun Wang",
"An Yan",
"Senthil Purushwalkam",
"Honglu Zhou",
"Viraj Prabhu",
"Yutong Dai",
"Michael S Ryoo",
"Shrikant Kendre",
"Jieyu Zhang",
"Can Qin",
"Shu Zhang",
"Chia-Chih Chen",
"Ning Yu",
"Juntao Tan",
"Tulika Manoj Awalgaonkar",
"Shelby Heinecke",
"Huan Wang",
"Yejin Choi",
"Ludwig Schmidt",
"Zeyuan Chen",
"Silvio Savarese",
"Juan Carlos Niebles",
"Caiming Xiong",
"Ran Xu"
]
| This report introduces xGen-MM (also known as BLIP-3), a framework for developing Large Multimodal Models (LMMs). The framework comprises meticulously curated datasets, a training recipe, model architectures, and a resulting suite of LMMs. xGen-MM, short for xGen-MultiModal, expands the Salesforce xGen initiative on foundation AI models. Our models undergo rigorous evaluation across a range of tasks, including both single and multi-image benchmarks. Our pre-trained base model exhibits strong in-context learning capabilities and the instruction-tuned model demonstrates competitive performance among open-source LMMs with similar model sizes. In addition, we introduce a safety-tuned model with DPO, aiming to mitigate harmful behaviors such as hallucinations and improve safety. We open-source our models, curated large-scale datasets, and our fine-tuning codebase to facilitate further advancements in LMM research. Associated resources will be available on our project page above. |
|
2024-08-19T00:00:00 | 2408.07931 | Surgical SAM 2: Real-time Segment Anything in Surgical Video by Efficient Frame Pruning | [
"Haofeng Liu",
"Erli Zhang",
"Junde Wu",
"Mingxuan Hong",
"Yueming Jin"
]
| Surgical video segmentation is a critical task in computer-assisted surgery and is vital for enhancing surgical quality and patient outcomes. Recently, the Segment Anything Model 2 (SAM2) framework has shown superior advancements in image and video segmentation. However, SAM2 struggles with efficiency due to the high computational demands of processing high-resolution images and complex and long-range temporal dynamics in surgical videos. To address these challenges, we introduce Surgical SAM 2 (SurgSAM-2), an advanced model to utilize SAM2 with an Efficient Frame Pruning (EFP) mechanism, to facilitate real-time surgical video segmentation. The EFP mechanism dynamically manages the memory bank by selectively retaining only the most informative frames, reducing memory usage and computational cost while maintaining high segmentation accuracy. Our extensive experiments demonstrate that SurgSAM-2 significantly improves both efficiency and segmentation accuracy compared to the vanilla SAM2. Remarkably, SurgSAM-2 achieves 3× the FPS of SAM2, while also delivering state-of-the-art performance after fine-tuning with lower-resolution data. These advancements establish SurgSAM-2 as a leading model for surgical video analysis, making real-time surgical video segmentation in resource-constrained environments a feasible reality. |
|
2024-08-19T00:00:00 | 2408.08435 | Automated Design of Agentic Systems | [
"Shengran Hu",
"Cong Lu",
"Jeff Clune"
]
| Researchers are investing substantial effort in developing powerful general-purpose agents, wherein Foundation Models are used as modules within agentic systems (e.g. Chain-of-Thought, Self-Reflection, Toolformer). However, the history of machine learning teaches us that hand-designed solutions are eventually replaced by learned solutions. We formulate a new research area, Automated Design of Agentic Systems (ADAS), which aims to automatically create powerful agentic system designs, including inventing novel building blocks and/or combining them in new ways. We further demonstrate that there is an unexplored yet promising approach within ADAS where agents can be defined in code and new agents can be automatically discovered by a meta agent programming ever better ones in code. Given that programming languages are Turing Complete, this approach theoretically enables the learning of any possible agentic system: including novel prompts, tool use, control flows, and combinations thereof. We present a simple yet effective algorithm named Meta Agent Search to demonstrate this idea, where a meta agent iteratively programs interesting new agents based on an ever-growing archive of previous discoveries. Through extensive experiments across multiple domains including coding, science, and math, we show that our algorithm can progressively invent agents with novel designs that greatly outperform state-of-the-art hand-designed agents. Importantly, we consistently observe the surprising result that agents invented by Meta Agent Search maintain superior performance even when transferred across domains and models, demonstrating their robustness and generality. Provided we develop it safely, our work illustrates the potential of an exciting new research direction toward automatically designing ever-more powerful agentic systems to benefit humanity. |
|
2024-08-19T00:00:00 | 2408.08332 | TurboEdit: Instant text-based image editing | [
"Zongze Wu",
"Nicholas Kolkin",
"Jonathan Brandt",
"Richard Zhang",
"Eli Shechtman"
]
| We address the challenges of precise image inversion and disentangled image editing in the context of few-step diffusion models. We introduce an encoder-based iterative inversion technique. The inversion network is conditioned on the input image and the reconstructed image from the previous step, allowing for correction of the next reconstruction towards the input image. We demonstrate that disentangled controls can be easily achieved in the few-step diffusion model by conditioning on an (automatically generated) detailed text prompt. To manipulate the inverted image, we freeze the noise maps and modify one attribute in the text prompt (either manually or via instruction-based editing driven by an LLM), resulting in the generation of a new image similar to the input image with only one attribute changed. It can further control the editing strength and accept instructive text prompts. Our approach facilitates realistic text-guided image edits in real time, requiring only 8 function evaluations (NFEs) for inversion (a one-time cost) and 4 NFEs per edit. Our method is not only fast, but also significantly outperforms state-of-the-art multi-step diffusion editing techniques. |
|
2024-08-19T00:00:00 | 2408.08441 | D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning | [
"Rafael Rafailov",
"Kyle Hatch",
"Anikait Singh",
"Laura Smith",
"Aviral Kumar",
"Ilya Kostrikov",
"Philippe Hansen-Estruch",
"Victor Kolev",
"Philip Ball",
"Jiajun Wu",
"Chelsea Finn",
"Sergey Levine"
]
| Offline reinforcement learning algorithms hold the promise of enabling data-driven RL methods that do not require costly or dangerous real-world exploration and benefit from large pre-collected datasets. This in turn can facilitate real-world applications, as well as a more standardized approach to RL research. Furthermore, offline RL methods can provide effective initializations for online finetuning to overcome challenges with exploration. However, evaluating progress on offline RL algorithms requires effective and challenging benchmarks that capture properties of real-world tasks, provide a range of task difficulties, and cover a range of challenges both in terms of the parameters of the domain (e.g., length of the horizon, sparsity of rewards) and the parameters of the data (e.g., narrow demonstration data or broad exploratory data). While considerable progress in offline RL in recent years has been enabled by simpler benchmark tasks, the most widely used datasets are increasingly saturating in performance and may fail to reflect properties of realistic tasks. We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments, based on models of real-world robotic systems, and comprising a variety of data sources, including scripted data, play-style data collected by human teleoperators, and other data sources. Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation, with some of the tasks specifically designed to require both pre-training and fine-tuning. We hope that our proposed benchmark will facilitate further progress on both offline RL and fine-tuning algorithms. Website with code, examples, tasks, and data is available at https://sites.google.com/view/d5rl/ |
|
2024-08-19T00:00:00 | 2408.07888 | Fine-tuning Large Language Models with Human-inspired Learning Strategies in Medical Question Answering | [
"Yushi Yang",
"Andrew M. Bean",
"Robert McCraith",
"Adam Mahdi"
]
| Training Large Language Models (LLMs) incurs substantial data-related costs, motivating the development of data-efficient training methods through optimised data ordering and selection. Human-inspired learning strategies, such as curriculum learning, offer possibilities for efficient training by organising data according to common human learning practices. Despite evidence that fine-tuning with curriculum learning improves the performance of LLMs for natural language understanding tasks, its effectiveness is typically assessed using a single model. In this work, we extend previous research by evaluating both curriculum-based and non-curriculum-based learning strategies across multiple LLMs, using human-defined and automated data labels for medical question answering. Our results indicate a moderate impact of using human-inspired learning strategies for fine-tuning LLMs, with maximum accuracy gains of 1.77% per model and 1.81% per dataset. Crucially, we demonstrate that the effectiveness of these strategies varies significantly across different model-dataset combinations, emphasising that the benefits of a specific human-inspired strategy for fine-tuning LLMs do not generalise. Additionally, we find evidence that curriculum learning using LLM-defined question difficulty outperforms human-defined difficulty, highlighting the potential of using model-generated measures for optimal curriculum design. |
|
2024-08-20T00:00:00 | 2408.10198 | MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model | [
"Minghua Liu",
"Chong Zeng",
"Xinyue Wei",
"Ruoxi Shi",
"Linghao Chen",
"Chao Xu",
"Mengqi Zhang",
"Zhaoning Wang",
"Xiaoshuai Zhang",
"Isabella Liu",
"Hongzhi Wu",
"Hao Su"
]
| Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. The input normal maps can be predicted by 2D diffusion models, significantly aiding in the guidance and refinement of the geometry's learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. Project page: https://meshformer3d.github.io |
|
2024-08-20T00:00:00 | 2408.10161 | NeuFlow v2: High-Efficiency Optical Flow Estimation on Edge Devices | [
"Zhiyong Zhang",
"Aniket Gupta",
"Huaizu Jiang",
"Hanumant Singh"
]
| https://github.com/neufieldrobotics/NeuFlow_v2 | Real-time high-accuracy optical flow estimation is crucial for various real-world applications. While recent learning-based optical flow methods have achieved high accuracy, they often come with significant computational costs. In this paper, we propose a highly efficient optical flow method that balances high accuracy with reduced computational demands. Building upon NeuFlow v1, we introduce new components including a much more lightweight backbone and a fast refinement module. Both modules help keep the computational demands low while providing close to state-of-the-art accuracy. Compared to other state-of-the-art methods, our model achieves a 10x-70x speedup while maintaining comparable performance on both synthetic and real-world data. It is capable of running at over 20 FPS on 512x384 resolution images on a Jetson Orin Nano. The full training and evaluation code is available at https://github.com/neufieldrobotics/NeuFlow_v2. |
2024-08-20T00:00:00 | 2408.10119 | Factorized-Dreamer: Training A High-Quality Video Generator with Limited and Low-Quality Data | [
"Tao Yang",
"Yangming Shi",
"Yunwen Huang",
"Feng Chen",
"Yin Zheng",
"Lei Zhang"
]
| https://github.com/yangxy/Factorized-Dreamer | Text-to-video (T2V) generation has gained significant attention due to its wide applications in video generation, editing, enhancement, translation, etc. However, high-quality (HQ) video synthesis is extremely challenging because of the diverse and complex motions that exist in the real world. Most existing works struggle to address this problem by collecting large-scale HQ videos, which are inaccessible to the community. In this work, we show that publicly available limited and low-quality (LQ) data are sufficient to train an HQ video generator without recaptioning or finetuning. We factorize the whole T2V generation process into two steps: generating an image conditioned on a highly descriptive caption, and synthesizing the video conditioned on the generated image and a concise caption of motion details. Specifically, we present Factorized-Dreamer, a factorized spatiotemporal framework with several critical designs for T2V generation, including an adapter to combine text and image embeddings, a pixel-aware cross attention module to capture pixel-level image information, a T5 text encoder to better understand motion description, and a PredictNet to supervise optical flows. We further present a noise schedule, which plays a key role in ensuring the quality and stability of video generation. Our model lowers the requirements for detailed captions and HQ videos, and can be directly trained on limited LQ datasets with noisy and brief captions such as WebVid-10M, largely alleviating the cost to collect large-scale HQ video-text pairs. Extensive experiments in a variety of T2V and image-to-video generation tasks demonstrate the effectiveness of our proposed Factorized-Dreamer. Our source code is available at https://github.com/yangxy/Factorized-Dreamer/. |
2024-08-20T00:00:00 | 2408.08926 | Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risk of Language Models | [
"Andy K. Zhang",
"Neil Perry",
"Riya Dulepet",
"Eliot Jones",
"Justin W. Lin",
"Joey Ji",
"Celeste Menders",
"Gashon Hussein",
"Samantha Liu",
"Donovan Jasper",
"Pura Peetathawatchai",
"Ari Glenn",
"Vikram Sivashankar",
"Daniel Zamoshchin",
"Leo Glikbarg",
"Derek Askaryar",
"Mike Yang",
"Teddy Zhang",
"Rishi Alluri",
"Nathan Tran",
"Rinnara Sangpisit",
"Polycarpos Yiorkadjis",
"Kenny Osele",
"Gautham Raghupathi",
"Dan Boneh",
"Daniel E. Ho",
"Percy Liang"
]
| Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have the potential to cause real-world impact. Policymakers, model providers, and other researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents to help mitigate cyberrisk and investigate opportunities for penetration testing. Toward that end, we introduce Cybench, a framework for specifying cybersecurity tasks and evaluating agents on those tasks. We include 40 professional-level Capture the Flag (CTF) tasks from 4 distinct CTF competitions, chosen to be recent, meaningful, and spanning a wide range of difficulties. Each task includes its own description, starter files, and is initialized in an environment where an agent can execute bash commands and observe outputs. Since many tasks are beyond the capabilities of existing LM agents, we introduce subtasks, which break down a task into intermediary steps for more gradated evaluation; we add subtasks for 17 of the 40 tasks. To evaluate agent capabilities, we construct a cybersecurity agent and evaluate 7 models: GPT-4o, Claude 3 Opus, Claude 3.5 Sonnet, Mixtral 8x22b Instruct, Gemini 1.5 Pro, Llama 3 70B Chat, and Llama 3.1 405B Instruct. Without guidance, we find that agents are able to solve only the easiest complete tasks that took human teams up to 11 minutes to solve, with Claude 3.5 Sonnet and GPT-4o having the highest success rates. Finally, subtasks provide more signal for measuring performance compared to unguided runs, with models achieving a 3.2% higher success rate on complete tasks with subtask-guidance than without subtask-guidance. All code and data are publicly available at https://cybench.github.io |
|
2024-08-20T00:00:00 | 2408.10188 | LongVILA: Scaling Long-Context Visual Language Models for Long Videos | [
"Fuzhao Xue",
"Yukang Chen",
"Dacheng Li",
"Qinghao Hu",
"Ligeng Zhu",
"Xiuyu Li",
"Yunhao Fang",
"Haotian Tang",
"Shang Yang",
"Zhijian Liu",
"Ethan He",
"Hongxu Yin",
"Pavlo Molchanov",
"Jan Kautz",
"Linxi Fan",
"Yuke Zhu",
"Yao Lu",
"Song Han"
]
| Long-context capability is critical for multi-modal foundation models. We introduce LongVILA, a full-stack solution for long-context vision-language models, including system, model training, and dataset development. On the system side, we introduce the first Multi-Modal Sequence Parallelism (MM-SP) system that enables long-context training and inference, enabling 2M context length training on 256 GPUs. MM-SP is also efficient, being 2.1x - 5.7x faster than Ring-Style Sequence Parallelism and 1.1x - 1.4x faster than Megatron-LM in text-only settings. Moreover, it seamlessly integrates with Hugging Face Transformers. For model training, we propose a five-stage pipeline comprising alignment, pre-training, context extension, and long-short joint supervised fine-tuning. Regarding datasets, we meticulously construct large-scale visual language pre-training datasets and long video instruction-following datasets to support our multi-stage training process. The full-stack solution extends the feasible frame number of VILA by a factor of 128 (from 8 to 1024 frames) and improves long video captioning score from 2.00 to 3.26 (1.6x), achieving 99.5% accuracy in 1400-frames video (274k context length) needle in a haystack. LongVILA-8B also demonstrates a consistent improvement in performance on long videos within the VideoMME benchmark as the video frames increase. |
|
2024-08-20T00:00:00 | 2408.08946 | Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges | [
"Baixiang Huang",
"Canyu Chen",
"Kai Shu"
]
| Accurate attribution of authorship is crucial for maintaining the integrity of digital content, improving forensic investigations, and mitigating the risks of misinformation and plagiarism. Addressing the imperative need for proper authorship attribution is essential to uphold the credibility and accountability of authentic authorship. The rapid advancements of Large Language Models (LLMs) have blurred the lines between human and machine authorship, posing significant challenges for traditional methods. We present a comprehensive literature review that examines the latest research on authorship attribution in the era of LLMs. This survey systematically explores the landscape of this field by categorizing four representative problems: (1) Human-written Text Attribution; (2) LLM-generated Text Detection; (3) LLM-generated Text Attribution; and (4) Human-LLM Co-authored Text Attribution. We also discuss the challenges related to ensuring the generalization and explainability of authorship attribution methods. Generalization requires the ability to generalize across various domains, while explainability emphasizes providing transparent and understandable insights into the decisions made by these models. By evaluating the strengths and limitations of existing methods and benchmarks, we identify key open problems and future research directions in this field. This literature review serves as a roadmap for researchers and practitioners interested in understanding the state of the art in this rapidly evolving field. Additional resources and a curated list of papers are available and regularly updated at https://llm-authorship.github.io |
|
2024-08-20T00:00:00 | 2408.09739 | TraDiffusion: Trajectory-Based Training-Free Image Generation | [
"Mingrui Wu",
"Oucheng Huang",
"Jiayi Ji",
"Jiale Li",
"Xinyue Cai",
"Huafeng Kuang",
"Jianzhuang Liu",
"Xiaoshuai Sun",
"Rongrong Ji"
]
| In this work, we propose a training-free, trajectory-based controllable T2I approach, termed TraDiffusion. This novel method allows users to effortlessly guide image generation via mouse trajectories. To achieve precise control, we design a distance awareness energy function to effectively guide latent variables, ensuring that the focus of generation is within the areas defined by the trajectory. The energy function encompasses a control function to draw the generation closer to the specified trajectory and a movement function to diminish activity in areas distant from the trajectory. Through extensive experiments and qualitative assessments on the COCO dataset, the results reveal that TraDiffusion facilitates simpler, more natural image control. Moreover, it showcases the ability to manipulate salient regions, attributes, and relationships within the generated images, alongside visual input based on arbitrary or enhanced trajectories. |
|
2024-08-20T00:00:00 | 2408.09702 | Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering | [
"Ruofan Liang",
"Zan Gojcic",
"Merlin Nimier-David",
"David Acuna",
"Nandita Vijaykumar",
"Sanja Fidler",
"Zian Wang"
]
| The correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry and materials, as well as the image formation process. While recent large-scale diffusion models have shown strong generative and inpainting capabilities, we find that current models do not sufficiently "understand" the scene shown in a single picture to generate consistent lighting effects (shadows, bright reflections, etc.) while preserving the identity and details of the composited object. We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process. Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes. Our physically based pipeline further enables automatic materials and tone-mapping refinement. |
|
2024-08-20T00:00:00 | 2408.09085 | Segment Anything with Multiple Modalities | [
"Aoran Xiao",
"Weihao Xuan",
"Heli Qi",
"Yun Xing",
"Naoto Yokoya",
"Shijian Lu"
]
| Robust and accurate segmentation of scenes has become one core functionality in various visual recognition and navigation tasks. This has inspired the recent development of Segment Anything Model (SAM), a foundation model for general mask segmentation. However, SAM is largely tailored for single-modal RGB images, limiting its applicability to multi-modal data captured with widely-adopted sensor suites, such as LiDAR plus RGB, depth plus RGB, thermal plus RGB, etc. We develop MM-SAM, an extension and expansion of SAM that supports cross-modal and multi-modal processing for robust and enhanced segmentation with different sensor suites. MM-SAM features two key designs, namely, unsupervised cross-modal transfer and weakly-supervised multi-modal fusion, enabling label-efficient and parameter-efficient adaptation toward various sensor modalities. It addresses three main challenges: 1) adaptation toward diverse non-RGB sensors for single-modal processing, 2) synergistic processing of multi-modal data via sensor fusion, and 3) mask-free training for different downstream tasks. Extensive experiments show that MM-SAM consistently outperforms SAM by large margins, demonstrating its effectiveness and robustness across various sensors and data modalities. |
|
2024-08-20T00:00:00 | 2408.10195 | SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views | [
"Chao Xu",
"Ang Li",
"Linghao Chen",
"Yulin Liu",
"Ruoxi Shi",
"Hao Su",
"Minghua Liu"
]
| Open-world 3D generation has recently attracted considerable attention. While many single-image-to-3D methods have yielded visually appealing outcomes, they often lack sufficient controllability and tend to produce hallucinated regions that may not align with users' expectations. In this paper, we explore an important scenario in which the input consists of one or a few unposed 2D images of a single object, with little or no overlap. We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for these sparse-view images. SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views. The diffusion model is trained to jointly predict surrogate representations for camera poses and multi-view images of the object under known poses, integrating all information from the input sparse views. These predictions are then leveraged to accomplish 3D reconstruction and pose estimation, and the reconstructed 3D model can be used to further refine the camera poses of input views. Through extensive experiments on three datasets, we demonstrate that our method not only significantly outperforms baseline methods in terms of 3D reconstruction quality and pose prediction accuracy but also exhibits strong efficiency. It requires only about 20 seconds to produce a textured mesh and camera poses for the input views. Project page: https://chaoxu.xyz/sparp. |
|
2024-08-20T00:00:00 | 2408.09858 | ShortCircuit: AlphaZero-Driven Circuit Design | [
"Dimitrios Tsaras",
"Antoine Grosnit",
"Lei Chen",
"Zhiyao Xie",
"Haitham Bou-Ammar",
"Mingxuan Yuan"
]
| Chip design relies heavily on generating Boolean circuits, such as AND-Inverter Graphs (AIGs), from functional descriptions like truth tables. While recent advances in deep learning have aimed to accelerate circuit design, these efforts have mostly focused on tasks other than synthesis, and traditional heuristic methods have plateaued. In this paper, we introduce ShortCircuit, a novel transformer-based architecture that leverages the structural properties of AIGs and performs efficient space exploration. Contrary to prior approaches attempting end-to-end generation of logic circuits using deep networks, ShortCircuit employs a two-phase process combining supervised and reinforcement learning to enhance generalization to unseen truth tables. We also propose an AlphaZero variant to handle the doubly exponentially large state space and the sparsity of the rewards, enabling the discovery of near-optimal designs. To evaluate the generative performance of our trained model, we extract 500 truth tables from a benchmark set of 20 real-world circuits. ShortCircuit successfully generates AIGs for 84.6% of the 8-input test truth tables, and outperforms the state-of-the-art logic synthesis tool, ABC, by 14.61% in terms of circuit size. |
|
2024-08-21T00:00:00 | 2408.11001 | MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning | [
"Haoning Wu",
"Shaocheng Shen",
"Qiang Hu",
"Xiaoyun Zhang",
"Ya Zhang",
"Yanfeng Wang"
]
| Diffusion models have emerged as frontrunners in text-to-image generation for their impressive capabilities. Nonetheless, their fixed image resolution during training often leads to challenges in high-resolution image generation, such as semantic inaccuracies and object replication. This paper introduces MegaFusion, a novel approach that extends existing diffusion-based text-to-image generation models towards efficient higher-resolution generation without additional fine-tuning or extra adaptation. Specifically, we employ an innovative truncate and relay strategy to bridge the denoising processes across different resolutions, allowing for high-resolution image generation in a coarse-to-fine manner. Moreover, by integrating dilated convolutions and noise re-scheduling, we further adapt the model's priors for higher resolution. The versatility and efficacy of MegaFusion make it universally applicable to both latent-space and pixel-space diffusion models, along with other derivative models. Extensive experiments confirm that MegaFusion significantly boosts the capability of existing models to produce images of megapixels and various aspect ratios, while only requiring about 40% of the original computational cost. |
|
2024-08-21T00:00:00 | 2408.10998 | Audio Match Cutting: Finding and Creating Matching Audio Transitions in Movies and Videos | [
"Dennis Fedorishin",
"Lie Lu",
"Srirangaraj Setlur",
"Venu Govindaraju"
]
| A "match cut" is a common video editing technique where a pair of shots that have a similar composition transition fluidly from one to another. Although match cuts are often visual, certain match cuts involve the fluid transition of audio, where sounds from different sources merge into one indistinguishable transition between two shots. In this paper, we explore the ability to automatically find and create "audio match cuts" within videos and movies. We create a self-supervised audio representation for audio match cutting and develop a coarse-to-fine audio match pipeline that recommends matching shots and creates the blended audio. We further annotate a dataset for the proposed audio match cut task and compare the ability of multiple audio representations to find audio match cut candidates. Finally, we evaluate multiple methods to blend two matching audio candidates with the goal of creating a smooth transition. Project page and examples are available at: https://denfed.github.io/audiomatchcut/ |
|
2024-08-21T00:00:00 | 2408.10487 | MambaEVT: Event Stream based Visual Object Tracking using State Space Model | [
"Xiao Wang",
"Chao wang",
"Shiao Wang",
"Xixi Wang",
"Zhicheng Zhao",
"Lin Zhu",
"Bo Jiang"
]
| https://github.com/Event-AHU/MambaEVT | Event camera-based visual tracking has drawn more and more attention in recent years due to the unique imaging principle and advantages of low energy consumption, high dynamic range, and dense temporal resolution. Current event-based tracking algorithms are gradually hitting their performance bottlenecks, due to the utilization of vision Transformer and the static template for target object localization. In this paper, we propose a novel Mamba-based visual tracking framework that adopts the state space model with linear complexity as a backbone network. The search regions and target template are fed into the vision Mamba network for simultaneous feature extraction and interaction. The output tokens of search regions will be fed into the tracking head for target localization. More importantly, we consider introducing a dynamic template update strategy into the tracking framework using the Memory Mamba network. By considering the diversity of samples in the target template library and making appropriate adjustments to the template memory module, a more effective dynamic template can be integrated. The effective combination of dynamic and static templates allows our Mamba-based tracking algorithm to achieve a good balance between accuracy and computational cost on multiple large-scale datasets, including EventVOT, VisEvent, and FE240hz. The source code will be released on https://github.com/Event-AHU/MambaEVT |
2024-08-21T00:00:00 | 2408.11049 | MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding | [
"Jian Chen",
"Vashisth Tiwari",
"Ranajoy Sadhukhan",
"Zhuoming Chen",
"Jinyuan Shi",
"Ian En-Hsu Yen",
"Beidi Chen"
]
| Large Language Models (LLMs) have become more prevalent in long-context applications such as interactive chatbots, document analysis, and agent workflows, but it is challenging to serve long-context requests with low latency and high throughput. Speculative decoding (SD) is a widely used technique to reduce latency without sacrificing performance, but conventional wisdom suggests that its efficacy is limited to small batch sizes. In MagicDec, we show that, surprisingly, SD can achieve speedup even in a high-throughput inference regime for moderate to long sequences. More interestingly, based on our rigorous analysis, an intelligent drafting strategy can achieve better speedup with increasing batch size. MagicDec first identifies how the bottleneck shifts with increasing batch size and sequence length, and uses these insights to deploy speculative decoding more effectively for high-throughput inference. Then, it leverages draft models with a sparse KV cache to address the KV bottleneck that scales with both sequence length and batch size. |
|
2024-08-21T00:00:00 | 2408.11039 | Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model | [
"Chunting Zhou",
"Lili Yu",
"Arun Babu",
"Kushal Tirumala",
"Michihiro Yasunaga",
"Leonid Shamis",
"Jacob Kahn",
"Xuezhe Ma",
"Luke Zettlemoyer",
"Omer Levy"
]
| We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. We pretrain multiple Transfusion models up to 7B parameters from scratch on a mixture of text and image data, establishing scaling laws with respect to a variety of uni- and cross-modal benchmarks. Our experiments show that Transfusion scales significantly better than quantizing images and training a language model over discrete image tokens. By introducing modality-specific encoding and decoding layers, we can further improve the performance of Transfusion models, and even compress each image to just 16 patches. We further demonstrate that scaling our Transfusion recipe to 7B parameters and 2T multi-modal tokens produces a model that can generate images and text on a par with similar scale diffusion models and language models, reaping the benefits of both worlds. |
|
2024-08-21T00:00:00 | 2408.10906 | ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining | [
"Qi Ma",
"Yue Li",
"Bin Ren",
"Nicu Sebe",
"Ender Konukoglu",
"Theo Gevers",
"Luc Van Gool",
"Danda Pani Paudel"
]
| 3D Gaussian Splatting (3DGS) has become the de facto method of 3D representation in many vision tasks. This calls for 3D understanding directly in this representation space. To facilitate the research in this direction, we first build a large-scale dataset of 3DGS using the commonly used ShapeNet and ModelNet datasets. Our dataset ShapeSplat consists of 65K objects from 87 unique categories, whose labels are in accordance with the respective datasets. The creation of this dataset utilized the compute equivalent of 2 GPU years on a TITAN XP GPU. We utilize our dataset for unsupervised pretraining and supervised finetuning for classification and segmentation tasks. To this end, we introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters. Through exhaustive experiments, we provide several valuable insights. In particular, we show that (1) the distribution of the optimized GS centroids significantly differs from the uniformly sampled point cloud (used for initialization) counterpart; (2) this change in distribution results in degradation in classification but improvement in segmentation tasks when using only the centroids; (3) to leverage additional Gaussian parameters, we propose Gaussian feature grouping in a normalized feature space, along with a splats pooling layer, offering a tailored solution to effectively group and embed similar Gaussians, which leads to notable improvement in finetuning tasks. |
|
2024-08-21T00:00:00 | 2408.10914 | To Code, or Not To Code? Exploring Impact of Code in Pre-training | [
"Viraat Aryabumi",
"Yixuan Su",
"Raymond Ma",
"Adrien Morisot",
"Ivan Zhang",
"Acyr Locatelli",
"Marzieh Fadaee",
"Ahmet Üstün",
"Sara Hooker"
]
| Including code in the pre-training data mixture, even for models not specifically designed for code, has become a common practice in LLM pre-training. While there has been anecdotal consensus among practitioners that code data plays a vital role in general LLMs' performance, there is only limited work analyzing the precise impact of code on non-code tasks. In this work, we systematically investigate the impact of code data on general performance. We ask "what is the impact of code data used in pre-training on a large variety of downstream tasks beyond code generation". We conduct extensive ablations and evaluate across a broad range of natural language reasoning tasks, world knowledge tasks, code benchmarks, and LLM-as-a-judge win-rates for models with sizes ranging from 470M to 2.8B parameters. Across settings, we find consistent results that code is a critical building block for generalization far beyond coding tasks and that improvements to code quality have an outsized impact across all tasks. In particular, compared to text-only pre-training, the addition of code results in up to a relative increase of 8.2% in natural language (NL) reasoning, 4.2% in world knowledge, a 6.6% improvement in generative win-rates, and a 12x boost in code performance, respectively. Our work suggests that investments in code quality and preserving code during pre-training have positive impacts. |
|
2024-08-21T00:00:00 | 2408.11048 | RP1M: A Large-Scale Motion Dataset for Piano Playing with Bi-Manual Dexterous Robot Hands | [
"Yi Zhao",
"Le Chen",
"Jan Schneider",
"Quankai Gao",
"Juho Kannala",
"Bernhard Schölkopf",
"Joni Pajarinen",
"Dieter Büchler"
]
| It has been a long-standing research goal to endow robot hands with human-level dexterity. Bi-manual robot piano playing constitutes a task that combines challenges from dynamic tasks, such as generating fast while precise motions, with slower but contact-rich manipulation problems. Although reinforcement learning based approaches have shown promising results in single-task performance, these methods struggle in a multi-song setting. Our work aims to close this gap and, thereby, enable imitation learning approaches for robot piano playing at scale. To this end, we introduce the Robot Piano 1 Million (RP1M) dataset, containing bi-manual robot piano playing motion data of more than one million trajectories. We formulate finger placements as an optimal transport problem, thus, enabling automatic annotation of vast amounts of unlabeled songs. Benchmarking existing imitation learning approaches shows that such approaches reach state-of-the-art robot piano playing performance by leveraging RP1M. |
|
2024-08-21T00:00:00 | 2408.10764 | Predicting Rewards Alongside Tokens: Non-disruptive Parameter Insertion for Efficient Inference Intervention in Large Language Model | [
"Chenhan Yuan",
"Fei Huang",
"Ru Peng",
"Keming Lu",
"Bowen Yu",
"Chang Zhou",
"Jingren Zhou"
]
| https://github.com/chenhan97/Otter | Transformer-based large language models (LLMs) exhibit limitations such as generating unsafe responses, unreliable reasoning, etc. Existing inference intervention approaches attempt to mitigate these issues by finetuning additional models to produce calibration signals (such as rewards) that guide the LLM's decoding process. However, this solution introduces substantial time and space overhead due to the separate models required. This work proposes Non-disruptive parameters insertion (Otter), inserting extra parameters into the transformer architecture to predict calibration signals along with the original LLM output. Otter offers state-of-the-art performance on multiple demanding tasks while saving up to 86.5% extra space and 98.5% extra time. Furthermore, Otter seamlessly integrates with existing inference engines, requiring only a one-line code change, and the original model response remains accessible after the parameter insertion. Our code is publicly available at https://github.com/chenhan97/Otter |
2024-08-21T00:00:00 | 2408.09174 | TableBench: A Comprehensive and Complex Benchmark for Table Question Answering | [
"Xianjie Wu",
"Jian Yang",
"Linzheng Chai",
"Ge Zhang",
"Jiaheng Liu",
"Xinrun Du",
"Di Liang",
"Daixin Shu",
"Xianfu Cheng",
"Tianzhen Sun",
"Guanglin Niu",
"Tongliang Li",
"Zhoujun Li"
]
| Recent advancements in Large Language Models (LLMs) have markedly enhanced the interpretation and processing of tabular data, introducing previously unimaginable capabilities. Despite these achievements, LLMs still encounter significant challenges when applied in industrial scenarios, particularly due to the increased complexity of reasoning required with real-world tabular data, underscoring a notable disparity between academic benchmarks and practical applications. To address this discrepancy, we conduct a detailed investigation into the application of tabular data in industrial scenarios and propose TableBench, a comprehensive and complex benchmark covering 18 fields within four major categories of table question answering (TableQA) capabilities. Furthermore, we introduce TableLLM, trained on our meticulously constructed training set TableInstruct, achieving performance comparable to GPT-3.5. Extensive experiments conducted on TableBench indicate that both open-source and proprietary LLMs still have significant room for improvement to meet real-world demands, where the most advanced model, GPT-4, achieves only a modest score compared to humans. |
|
2024-08-21T00:00:00 | 2408.10446 | The Brittleness of AI-Generated Image Watermarking Techniques: Examining Their Robustness Against Visual Paraphrasing Attacks | [
"Niyar R Barman",
"Krish Sharma",
"Ashhar Aziz",
"Shashwat Bajpai",
"Shwetangshu Biswas",
"Vasu Sharma",
"Vinija Jain",
"Aman Chadha",
"Amit Sheth",
"Amitava Das"
]
| The rapid advancement of text-to-image generation systems, exemplified by models like Stable Diffusion, Midjourney, Imagen, and DALL-E, has heightened concerns about their potential misuse. In response, companies like Meta and Google have intensified their efforts to implement watermarking techniques on AI-generated images to curb the circulation of potentially misleading visuals. However, in this paper, we argue that current image watermarking methods are fragile and susceptible to being circumvented through visual paraphrase attacks. The proposed visual paraphraser operates in two steps. First, it generates a caption for the given image using KOSMOS-2, one of the latest state-of-the-art image captioning systems. Second, it passes both the original image and the generated caption to an image-to-image diffusion system. During the denoising step of the diffusion pipeline, the system generates a visually similar image that is guided by the text caption. The resulting image is a visual paraphrase and is free of any watermarks. Our empirical findings demonstrate that visual paraphrase attacks can effectively remove watermarks from images. This paper provides a critical assessment, empirically revealing the vulnerability of existing watermarking techniques to visual paraphrase attacks. While we do not propose solutions to this issue, this paper serves as a call to action for the scientific community to prioritize the development of more robust watermarking techniques. Our first-of-its-kind visual paraphrase dataset and accompanying code are publicly available. |
|
2024-08-21T00:00:00 | 2408.10088 | Recent Surge in Public Interest in Transportation: Sentiment Analysis of Baidu Apollo Go Using Weibo Data | [
"Shiqi Wang",
"Zhouye Zhao",
"Yuhang Xie",
"Mingchuan Ma",
"Zirui Chen",
"Zeyu Wang",
"Bohao Su",
"Wenrui Xu",
"Tianyi Li"
]
| https://github.com/GIStudio/trb2024 | Urban mobility and transportation systems have been profoundly transformed by the advancement of autonomous vehicle technologies. Baidu Apollo Go, a pioneer robotaxi service from the Chinese tech giant Baidu, has recently been widely deployed in major cities like Beijing and Wuhan, sparking increased conversation and offering a glimpse into the future of urban mobility. This study investigates public attitudes towards Apollo Go across China using Sentiment Analysis with a hybrid BERT model on 36,096 Weibo posts from January to July 2024. The analysis shows that 89.56% of posts related to Apollo Go are clustered in July. From January to July, public sentiment was mostly positive, but negative comments began to rise after it became a hot topic on July 21. Spatial analysis indicates a strong correlation between provinces with high discussion intensity and those where Apollo Go operates. Initially, Hubei and Guangdong dominated online posting volume, but by July, Guangdong, Beijing, and international regions had overtaken Hubei. Attitudes varied significantly among provinces, with Xinjiang and Qinghai showing optimism and Tibet and Gansu expressing concerns about the impact on traditional taxi services. Sentiment analysis revealed that positive comments focused on technology applications and personal experiences, while negative comments centered on job displacement and safety concerns. In summary, this study highlights the divergence in public perceptions of autonomous ride-hailing services, providing valuable insights for planners, policymakers, and service providers. The model is published on Hugging Face at https://huggingface.co/wsqstar/bert-finetuned-weibo-luobokuaipao and the repository on GitHub at https://github.com/GIStudio/trb2024. |
2024-08-21T00:00:00 | 2408.10701 | Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique | [
"Tej Deep Pala",
"Vernon Y. H. Toh",
"Rishabh Bhardwaj",
"Soujanya Poria"
]
| https://github.com/declare-lab/ferret | In today's era, where large language models (LLMs) are integrated into numerous real-world applications, ensuring their safety and robustness is crucial for responsible AI usage. Automated red-teaming methods play a key role in this process by generating adversarial attacks to identify and mitigate potential vulnerabilities in these models. However, existing methods often struggle with slow performance, limited categorical diversity, and high resource demands. While Rainbow Teaming, a recent approach, addresses the diversity challenge by framing adversarial prompt generation as a quality-diversity search, it remains slow and requires a large fine-tuned mutator for optimal performance. To overcome these limitations, we propose Ferret, a novel approach that builds upon Rainbow Teaming by generating multiple adversarial prompt mutations per iteration and using a scoring function to rank and select the most effective adversarial prompt. We explore various scoring functions, including reward models, Llama Guard, and LLM-as-a-judge, to rank adversarial mutations based on their potential harm to improve the efficiency of the search for harmful mutations. Our results demonstrate that Ferret, utilizing a reward model as a scoring function, improves the overall attack success rate (ASR) to 95%, which is 46% higher than Rainbow Teaming. Additionally, Ferret reduces the time needed to achieve a 90% ASR by 15.2% compared to the baseline and generates adversarial prompts that are transferable, i.e., effective on other LLMs of larger size. Our code is available at https://github.com/declare-lab/ferret. |
2024-08-21T00:00:00 | 2408.09574 | PhysBERT: A Text Embedding Model for Physics Scientific Literature | [
"Thorsten Hellert",
"João Montenegro",
"Andrea Pollastro"
]
| The specialized language and complex concepts in physics pose significant challenges for information extraction through Natural Language Processing (NLP). Central to effective NLP applications is the text embedding model, which converts text into dense vector representations for efficient information retrieval and semantic analysis. In this work, we introduce PhysBERT, the first physics-specific text embedding model. Pre-trained on a curated corpus of 1.2 million arXiv physics papers and fine-tuned with supervised data, PhysBERT outperforms leading general-purpose models on physics-specific tasks, including effectiveness when fine-tuned for specific physics subdomains. |
|
2024-08-21T00:00:00 | 2408.11054 | NeCo: Improving DINOv2's spatial representations in 19 GPU hours with Patch Neighbor Consistency | [
"Valentinos Pariza",
"Mohammadreza Salehi",
"Gertjan Burghouts",
"Francesco Locatello",
"Yuki M. Asano"
]
| We propose sorting patch representations across views as a novel self-supervised learning signal to improve pretrained representations. To this end, we introduce NeCo: Patch Neighbor Consistency, a novel training loss that enforces patch-level nearest neighbor consistency across a student and teacher model, relative to reference batches. Our method leverages a differentiable sorting method applied on top of pretrained representations, such as DINOv2-registers, to bootstrap the learning signal and further improve upon them. This dense post-pretraining leads to superior performance across various models and datasets, despite requiring only 19 hours on a single GPU. We demonstrate that this method generates high-quality dense feature encoders and establish several new state-of-the-art results: +5.5% and +6% for non-parametric in-context semantic segmentation on ADE20k and Pascal VOC, and +7.2% and +5.7% for linear segmentation evaluations on COCO-Things and -Stuff. |
|
2024-08-22T00:00:00 | 2408.11318 | TWLV-I: Analysis and Insights from Holistic Evaluation on Video Foundation Models | [
"Hyeongmin Lee",
"Jin-Young Kim",
"Kyungjune Baek",
"Jihwan Kim",
"Hyojun Go",
"Seongsu Ha",
"Seokjin Han",
"Jiho Jang",
"Raehyuk Jung",
"Daewoo Kim",
"GeunOh Kim",
"JongMok Kim",
"Jongseok Kim",
"Junwan Kim",
"Soonwoo Kwon",
"Jangwon Lee",
"Seungjoon Park",
"Minjoon Seo",
"Jay Suh",
"Jaehyuk Yi",
"Aiden Lee"
]
| https://github.com/twelvelabs-io/video-embeddings-evaluation-framework | In this work, we discuss evaluating video foundation models in a fair and robust manner. Unlike language or image foundation models, many video foundation models are evaluated with differing parameters (such as sampling rate, number of frames, pretraining steps, etc.), making fair and robust comparisons challenging. Therefore, we present a carefully designed evaluation framework for measuring two core capabilities of video comprehension: appearance and motion understanding. Our findings reveal that existing video foundation models, whether text-supervised like UMT or InternVideo2, or self-supervised like V-JEPA, exhibit limitations in at least one of these capabilities. As an alternative, we introduce TWLV-I, a new video foundation model that constructs robust visual representations for both motion- and appearance-based videos. Based on the average top-1 accuracy of linear probing on five action recognition benchmarks, pretrained only on publicly accessible datasets, our model shows a 4.6%p improvement compared to V-JEPA (ViT-L) and a 7.7%p improvement compared to UMT (ViT-L). Even when compared to much larger models, our model demonstrates a 7.2%p improvement compared to DFN (ViT-H), a 2.7%p improvement compared to V-JEPA (ViT-H) and a 2.8%p improvement compared to InternVideo2 (ViT-g). We provide embedding vectors obtained by TWLV-I from videos of several commonly used video benchmarks, along with evaluation source code that can directly utilize these embeddings. The code is available at "https://github.com/twelvelabs-io/video-embeddings-evaluation-framework". |
2024-08-22T00:00:00 | 2408.11475 | TrackGo: A Flexible and Efficient Method for Controllable Video Generation | [
"Haitao Zhou",
"Chuang Wang",
"Rui Nie",
"Jinxiao Lin",
"Dongdong Yu",
"Qian Yu",
"Changhu Wang"
]
| Recent years have seen substantial progress in diffusion-based controllable video generation. However, achieving precise control in complex scenarios, including fine-grained object parts, sophisticated motion trajectories, and coherent background movement, remains a challenge. In this paper, we introduce TrackGo, a novel approach that leverages free-form masks and arrows for conditional video generation. This method offers users a flexible and precise mechanism for manipulating video content. We also propose TrackAdapter, an efficient and lightweight adapter for control implementation, designed to be seamlessly integrated into the temporal self-attention layers of a pretrained video generation model. This design leverages our observation that the attention map of these layers can accurately activate regions corresponding to motion in videos. Our experimental results demonstrate that our new approach, enhanced by the TrackAdapter, achieves state-of-the-art performance on key metrics such as FVD, FID, and ObjMC scores. The project page of TrackGo can be found at: https://zhtjtcz.github.io/TrackGo-Page/ |
|
2024-08-22T00:00:00 | 2408.11745 | FocusLLM: Scaling LLM's Context by Parallel Decoding | [
"Zhenyu Li",
"Yike Zhang",
"Tengyu Pan",
"Yutao Sun",
"Zhichao Duan",
"Junjie Fang",
"Rong Han",
"Zixuan Wang",
"Jianyong Wang"
]
| https://github.com/leezythu/FocusLLM | Empowering LLMs with the ability to utilize useful information from a long context is crucial for many downstream applications. However, achieving long context lengths with the conventional transformer architecture requires substantial training and inference resources. In this paper, we present FocusLLM, a framework designed to extend the context length of any decoder-only LLM, enabling the model to focus on relevant information from very long sequences. FocusLLM processes long text inputs by dividing them into chunks based on the model's original context length to alleviate the issue of attention distraction. Then, it appends the local context to each chunk as a prompt to extract essential information from each chunk based on a novel parallel decoding mechanism, and ultimately integrates the extracted information into the local context. FocusLLM stands out for its training efficiency and versatility: trained with an 8K input length at much lower training cost than previous methods, FocusLLM exhibits superior performance across downstream long-context tasks and maintains strong language modeling ability when handling extensive long texts, even up to 400K tokens. Our code is available at https://github.com/leezythu/FocusLLM. |
2024-08-22T00:00:00 | 2408.11796 | LLM Pruning and Distillation in Practice: The Minitron Approach | [
"Sharath Turuvekere Sreenivas",
"Saurav Muralidharan",
"Raviraj Joshi",
"Marcin Chochowski",
"Mostofa Patwary",
"Mohammad Shoeybi",
"Bryan Catanzaro",
"Jan Kautz",
"Pavlo Molchanov"
]
| We present a comprehensive report on compressing the Llama 3.1 8B and Mistral NeMo 12B models to 4B and 8B parameters, respectively, using pruning and distillation. We explore two distinct pruning strategies: (1) depth pruning and (2) joint hidden/attention/MLP (width) pruning, and evaluate the results on common benchmarks from the LM Evaluation Harness. The models are then aligned with NeMo Aligner and tested in instruct-tuned versions. This approach produces a compelling 4B model from Llama 3.1 8B and a state-of-the-art Mistral-NeMo-Minitron-8B (MN-Minitron-8B for brevity) model from Mistral NeMo 12B. We found that with no access to the original data, it is beneficial to slightly fine-tune teacher models on the distillation dataset. We open-source our base model weights on Hugging Face with a permissive license. |
|
2024-08-22T00:00:00 | 2408.11457 | Expanding FLORES+ Benchmark for more Low-Resource Settings: Portuguese-Emakhuwa Machine Translation Evaluation | [
"Felermino D. M. Antonio Ali",
"Henrique Lopes Cardoso",
"Rui Sousa-Silva"
]
| As part of the Open Language Data Initiative shared tasks, we have expanded the FLORES+ evaluation set to include Emakhuwa, a low-resource language widely spoken in Mozambique. We translated the dev and devtest sets from Portuguese into Emakhuwa, and we detail the translation process and quality assurance measures used. Our methodology involved various quality checks, including post-editing and adequacy assessments. The resulting datasets consist of multiple reference sentences for each source. We present baseline results from training a Neural Machine Translation system and fine-tuning existing multilingual translation models. Our findings suggest that spelling inconsistencies remain a challenge in Emakhuwa. Additionally, the baseline models underperformed on this evaluation set, underscoring the necessity for further research to enhance machine translation quality for Emakhuwa. The data is publicly available at https://huggingface.co/datasets/LIACC/Emakhuwa-FLORES. |
|
2024-08-22T00:00:00 | 2408.11706 | FRAP: Faithful and Realistic Text-to-Image Generation with Adaptive Prompt Weighting | [
"Liyao Jiang",
"Negar Hassanpour",
"Mohammad Salameh",
"Mohan Sai Singamsetti",
"Fengyu Sun",
"Wei Lu",
"Di Niu"
]
| Text-to-image (T2I) diffusion models have demonstrated impressive capabilities in generating high-quality images given a text prompt. However, ensuring the prompt-image alignment remains a considerable challenge, i.e., generating images that faithfully align with the prompt's semantics. Recent works attempt to improve the faithfulness by optimizing the latent code, which potentially could cause the latent code to go out-of-distribution and thus produce unrealistic images. In this paper, we propose FRAP, a simple, yet effective approach based on adaptively adjusting the per-token prompt weights to improve prompt-image alignment and authenticity of the generated images. We design an online algorithm to adaptively update each token's weight coefficient, which is achieved by minimizing a unified objective function that encourages object presence and the binding of object-modifier pairs. Through extensive evaluations, we show FRAP generates images with significantly higher prompt-image alignment to prompts from complex datasets, while having a lower average latency compared to recent latent code optimization methods, e.g., 4 seconds faster than D&B on the COCO-Subject dataset. Furthermore, through visual comparisons and evaluation on the CLIP-IQA-Real metric, we show that FRAP not only improves prompt-image alignment but also generates more authentic images with realistic appearances. We also explore combining FRAP with a prompt-rewriting LLM to recover degraded prompt-image alignment, where we observe improvements in both prompt-image alignment and image quality. |
|
2024-08-22T00:00:00 | 2408.08793 | Backward-Compatible Aligned Representations via an Orthogonal Transformation Layer | [
"Simone Ricci",
"Niccolò Biondi",
"Federico Pernici",
"Alberto Del Bimbo"
]
| Visual retrieval systems face significant challenges when updating models with improved representations due to misalignment between the old and new representations. The costly and resource-intensive backfilling process involves recalculating feature vectors for images in the gallery set whenever a new model is introduced. To address this, prior research has explored backward-compatible training methods that enable direct comparisons between new and old representations without backfilling. Despite these advancements, achieving a balance between backward compatibility and the performance of independently trained models remains an open problem. In this paper, we address it by expanding the representation space with additional dimensions and learning an orthogonal transformation to achieve compatibility with old models and, at the same time, integrate new information. This transformation preserves the original feature space's geometry, ensuring that our model aligns with previous versions while also learning new data. Our Orthogonal Compatible Aligned (OCA) approach eliminates the need for re-indexing during model updates and ensures that features can be compared directly across different model updates without additional mapping functions. Experimental results on CIFAR-100 and ImageNet-1k demonstrate that our method not only maintains compatibility with previous models but also achieves state-of-the-art accuracy, outperforming several existing methods. |
|
2024-08-22T00:00:00 | 2408.11817 | GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models | [
"Jonathan Roberts",
"Kai Han",
"Samuel Albanie"
]
| Large multimodal models (LMMs) have exhibited proficiencies across many visual tasks. Although numerous well-known benchmarks exist to evaluate model performance, they increasingly have insufficient headroom. As such, there is a pressing need for a new generation of benchmarks challenging enough for the next generation of LMMs. One area in which LMMs show potential is graph analysis, specifically, the tasks an analyst might typically perform when interpreting figures, such as estimating the mean, intercepts, or correlations of functions and data series. In this work, we introduce GRAB, a graph analysis benchmark, fit for current and future frontier LMMs. Our benchmark is entirely synthetic, ensuring high-quality, noise-free questions. GRAB comprises 2,170 questions, covering four tasks and 23 graph properties. We evaluate 20 LMMs on GRAB, finding it to be a challenging benchmark, with the highest performing model attaining a score of just 21.7%. Finally, we conduct various ablations to investigate where the models succeed and struggle. We release GRAB to encourage progress in this important, growing domain. |
|
2024-08-22T00:00:00 | 2408.11721 | Iterative Object Count Optimization for Text-to-image Diffusion Models | [
"Oz Zafar",
"Lior Wolf",
"Idan Schwartz"
]
| We address a persistent challenge in text-to-image models: accurately generating a specified number of objects. Current models, which learn from image-text pairs, inherently struggle with counting, as training data cannot depict every possible number of objects for any given object. To solve this, we propose optimizing the generated image based on a counting loss derived from a counting model that aggregates an object's potential. Employing an out-of-the-box counting model is challenging for two reasons: first, the model requires a scaling hyperparameter for the potential aggregation that varies depending on the viewpoint of the objects, and second, classifier guidance techniques require modified models that operate on noisy intermediate diffusion steps. To address these challenges, we propose an iterated online training mode that improves the accuracy of inferred images while altering the text conditioning embedding and dynamically adjusting hyperparameters. Our method offers three key advantages: (i) it can consider non-derivable counting techniques based on detection models, (ii) it is a zero-shot plug-and-play solution facilitating rapid changes to the counting techniques and image generation methods, and (iii) the optimized counting token can be reused to generate accurate images without additional optimization. We evaluate the generation of various objects and show significant improvements in accuracy. The project page is available at https://ozzafar.github.io/count_token. |
|
2024-08-22T00:00:00 | 2408.11812 | Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation | [
"Ria Doshi",
"Homer Walke",
"Oier Mees",
"Sudeep Dasari",
"Sergey Levine"
]
| Modern machine learning systems rely on large datasets to attain broad generalization, and this often poses a challenge in robot learning, where each robotic platform and task might have only a small dataset. By training a single policy across many different kinds of robots, a robot learning method can leverage much broader and more diverse datasets, which in turn can lead to better generalization and robustness. However, training a single policy on multi-robot data is challenging because robots can have widely varying sensors, actuators, and control frequencies. We propose CrossFormer, a scalable and flexible transformer-based policy that can consume data from any embodiment. We train CrossFormer on the largest and most diverse dataset to date, 900K trajectories across 20 different robot embodiments. We demonstrate that the same network weights can control vastly different robots, including single and dual arm manipulation systems, wheeled robots, quadcopters, and quadrupeds. Unlike prior work, our model does not require manual alignment of the observation or action spaces. Extensive experiments in the real world show that our method matches the performance of specialist policies tailored for each embodiment, while also significantly outperforming the prior state of the art in cross-embodiment learning. |
|
2024-08-22T00:00:00 | 2408.11247 | Unboxing Occupational Bias: Grounded Debiasing LLMs with U.S. Labor Data | [
"Atmika Gorti",
"Manas Gaur",
"Aman Chadha"
]
| Large Language Models (LLMs) are prone to inheriting and amplifying societal biases embedded within their training data, potentially reinforcing harmful stereotypes related to gender, occupation, and other sensitive categories. This issue becomes particularly problematic as biased LLMs can have far-reaching consequences, leading to unfair practices and exacerbating social inequalities across various domains, such as recruitment, online content moderation, or even the criminal justice system. Although prior research has focused on detecting bias in LLMs using specialized datasets designed to highlight intrinsic biases, there has been a notable lack of investigation into how these findings correlate with authoritative datasets, such as those from the U.S. National Bureau of Labor Statistics (NBLS). To address this gap, we conduct empirical research that evaluates LLMs in a "bias-out-of-the-box" setting, analyzing how the generated outputs compare with the distributions found in NBLS data. Furthermore, we propose a straightforward yet effective debiasing mechanism that directly incorporates NBLS instances to mitigate bias within LLMs. Our study spans seven different LLMs, including instructable, base, and mixture-of-expert models, and reveals significant levels of bias that are often overlooked by existing bias detection techniques. Importantly, our debiasing method, which does not rely on external datasets, demonstrates a substantial reduction in bias scores, highlighting the efficacy of our approach in creating fairer and more reliable LLMs. |
|
2024-08-22T00:00:00 | 2408.11237 | Out-of-Distribution Detection with Attention Head Masking for Multimodal Document Classification | [
"Christos Constantinou",
"Georgios Ioannides",
"Aman Chadha",
"Aaron Elkins",
"Edwin Simpson"
]
| Detecting out-of-distribution (OOD) data is crucial in machine learning applications to mitigate the risk of model overconfidence, thereby enhancing the reliability and safety of deployed systems. The majority of existing OOD detection methods predominantly address uni-modal inputs, such as images or texts. In the context of multi-modal documents, there is a notable lack of extensive research on the performance of these methods, which have primarily been developed with a focus on computer vision tasks. We propose a novel methodology termed attention head masking (AHM) for multi-modal OOD tasks in document classification systems. Our empirical results demonstrate that the proposed AHM method outperforms all state-of-the-art approaches and significantly decreases the false positive rate (FPR) by up to 7.5% compared to existing solutions. This methodology generalizes well to multi-modal data, such as documents, where visual and textual information are modeled under the same Transformer architecture. To address the scarcity of high-quality publicly available document datasets and encourage further research on OOD detection for documents, we introduce FinanceDocs, a new document AI dataset. Our code and dataset are publicly available. |
|
2024-08-23T00:00:00 | 2408.12599 | Controllable Text Generation for Large Language Models: A Survey | [
"Xun Liang",
"Hanyu Wang",
"Yezhaohui Wang",
"Shichao Song",
"Jiawei Yang",
"Simin Niu",
"Jie Hu",
"Dan Liu",
"Shunyu Yao",
"Feiyu Xiong",
"Zhiyu Li"
]
| https://github.com/IAAR-Shanghai/CTGSurvey | In Natural Language Processing (NLP), Large Language Models (LLMs) have demonstrated high text generation quality. However, in real-world applications, LLMs must meet increasingly complex requirements. Beyond avoiding misleading or inappropriate content, LLMs are also expected to cater to specific user needs, such as imitating particular writing styles or generating text with poetic richness. These varied demands have driven the development of Controllable Text Generation (CTG) techniques, which ensure that outputs adhere to predefined control conditions--such as safety, sentiment, thematic consistency, and linguistic style--while maintaining high standards of helpfulness, fluency, and diversity. This paper systematically reviews the latest advancements in CTG for LLMs, offering a comprehensive definition of its core concepts and clarifying the requirements for control conditions and text quality. We categorize CTG tasks into two primary types: content control and attribute control. The key methods are discussed, including model retraining, fine-tuning, reinforcement learning, prompt engineering, latent space manipulation, and decoding-time intervention. We analyze each method's characteristics, advantages, and limitations, providing nuanced insights for achieving generation control. Additionally, we review CTG evaluation methods, summarize its applications across domains, and address key challenges in current research, including reduced fluency and limited practicality. We also make several appeals, such as placing greater emphasis on real-world applications in future research. This paper aims to offer valuable guidance to researchers and developers in the field. Our reference list and Chinese version are open-sourced at https://github.com/IAAR-Shanghai/CTGSurvey. |
2024-08-23T00:00:00 | 2408.11857 | Hermes 3 Technical Report | [
"Ryan Teknium",
"Jeffrey Quesnelle",
"Chen Guang"
]
| Instruct (or "chat") tuned models have become the primary way in which most people interact with large language models. As opposed to "base" or "foundation" models, instruct-tuned models are optimized to respond to imperative statements. We present Hermes 3, a neutrally-aligned generalist instruct and tool use model with strong reasoning and creative abilities. Its largest version, Hermes 3 405B, achieves state-of-the-art performance among open weight models on several public benchmarks. |
|
2024-08-23T00:00:00 | 2408.12590 | xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations | [
"Can Qin",
"Congying Xia",
"Krithika Ramakrishnan",
"Michael Ryoo",
"Lifu Tu",
"Yihao Feng",
"Manli Shu",
"Honglu Zhou",
"Anas Awadalla",
"Jun Wang",
"Senthil Purushwalkam",
"Le Xue",
"Yingbo Zhou",
"Huan Wang",
"Silvio Savarese",
"Juan Carlos Niebles",
"Zeyuan Chen",
"Ran Xu",
"Caiming Xiong"
]
| We present xGen-VideoSyn-1, a text-to-video (T2V) generation model capable of producing realistic scenes from textual descriptions. Building on recent advancements, such as OpenAI's Sora, we explore the latent diffusion model (LDM) architecture and introduce a video variational autoencoder (VidVAE). VidVAE compresses video data both spatially and temporally, significantly reducing the length of visual tokens and the computational demands associated with generating long-sequence videos. To further address the computational costs, we propose a divide-and-merge strategy that maintains temporal consistency across video segments. Our Diffusion Transformer (DiT) model incorporates spatial and temporal self-attention layers, enabling robust generalization across different timeframes and aspect ratios. We devised a data processing pipeline from scratch and collected over 13M high-quality video-text pairs. The pipeline includes multiple steps such as clipping, text detection, motion estimation, aesthetics scoring, and dense captioning based on our in-house video-LLM model. Training the VidVAE and DiT models required approximately 40 and 642 H100 days, respectively. Our model supports end-to-end generation of 720p videos longer than 14 seconds and demonstrates competitive performance against state-of-the-art T2V models. |
|
2024-08-23T00:00:00 | 2408.10635 | Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search | [
"Jonathan Light",
"Min Cai",
"Weiqin Chen",
"Guanzhi Wang",
"Xiusi Chen",
"Wei Cheng",
"Yisong Yue",
"Ziniu Hu"
]
| In this paper, we propose a new method, Strategist, that utilizes LLMs to acquire new skills for playing multi-agent games through a self-improvement process. Our method gathers quality feedback through self-play simulations with Monte Carlo tree search and LLM-based reflection, which can then be used to learn high-level strategic skills such as how to evaluate states that guide the low-level execution. We showcase how our method can be used in both action planning and dialogue generation in the context of games, achieving good performance on both tasks. Specifically, we demonstrate that our method can help train agents with better performance than both traditional reinforcement learning-based approaches and other LLM-based skill learning approaches in games including the Game of Pure Strategy (GOPS) and The Resistance: Avalon. |
|
2024-08-23T00:00:00 | 2408.12245 | Scalable Autoregressive Image Generation with Mamba | [
"Haopeng Li",
"Jinyue Yang",
"Kexin Wang",
"Xuerui Qiu",
"Yuhong Chou",
"Xin Li",
"Guoqi Li"
]
| https://github.com/hp-l33/AiM | We introduce AiM, an autoregressive (AR) image generative model based on the Mamba architecture. AiM employs Mamba, a novel state-space model characterized by its exceptional performance for long-sequence modeling with linear time complexity, to supplant the commonly utilized Transformers in AR image generation models, aiming to achieve both superior generation quality and enhanced inference speed. Unlike existing methods that adapt Mamba to handle two-dimensional signals via multi-directional scan, AiM directly utilizes the next-token prediction paradigm for autoregressive image generation. This approach circumvents the need for extensive modifications to enable Mamba to learn 2D spatial representations. By implementing straightforward yet strategically targeted modifications for visual generative tasks, we preserve Mamba's core structure, fully exploiting its efficient long-sequence modeling capabilities and scalability. We provide AiM models in various scales, with parameter counts ranging from 148M to 1.3B. On the ImageNet1K 256×256 benchmark, our best AiM model achieves an FID of 2.21, surpassing all existing AR models of comparable parameter counts and demonstrating significant competitiveness against diffusion models, with 2 to 10 times faster inference speed. Code is available at https://github.com/hp-l33/AiM |
2024-08-23T00:00:00 | 2408.11915 | Video-Foley: Two-Stage Video-To-Sound Generation via Temporal Event Condition For Foley Sound | [
"Junwon Lee",
"Jaekwon Im",
"Dabin Kim",
"Juhan Nam"
]
| Foley sound synthesis is crucial for multimedia production, enhancing user experience by synchronizing audio and video both temporally and semantically. Recent studies on automating this labor-intensive process through video-to-sound generation face significant challenges. Systems lacking explicit temporal features suffer from poor controllability and alignment, while timestamp-based models require costly and subjective human annotation. We propose Video-Foley, a video-to-sound system using Root Mean Square (RMS) as a temporal event condition with semantic timbre prompts (audio or text). RMS, a frame-level intensity envelope feature closely related to audio semantics, ensures high controllability and synchronization. The annotation-free self-supervised learning framework consists of two stages, Video2RMS and RMS2Sound, incorporating novel ideas including RMS discretization and RMS-ControlNet with a pretrained text-to-audio model. Our extensive evaluation shows that Video-Foley achieves state-of-the-art performance in audio-visual alignment and controllability for sound timing, intensity, timbre, and nuance. Code, model weights, and demonstrations are available on the accompanying website. (https://jnwnlee.github.io/video-foley-demo) |
|
2024-08-23T00:00:00 | 2408.12569 | Sapiens: Foundation for Human Vision Models | [
"Rawal Khirodkar",
"Timur Bagautdinov",
"Julieta Martinez",
"Su Zhaoen",
"Austin James",
"Peter Selednik",
"Stuart Anderson",
"Shunsuke Saito"
]
| We present Sapiens, a family of models for four fundamental human-centric vision tasks - 2D pose estimation, body-part segmentation, depth estimation, and surface normal prediction. Our models natively support 1K high-resolution inference and are extremely easy to adapt for individual tasks by simply fine-tuning models pretrained on over 300 million in-the-wild human images. We observe that, given the same computational budget, self-supervised pretraining on a curated dataset of human images significantly boosts the performance for a diverse set of human-centric tasks. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic. Our simple model design also brings scalability - model performance across tasks improves as we scale the number of parameters from 0.3 to 2 billion. Sapiens consistently surpasses existing baselines across various human-centric benchmarks. We achieve significant improvements over the prior state-of-the-art on Humans-5K (pose) by 7.6 mAP, Humans-2K (part-seg) by 17.1 mIoU, Hi4D (depth) by 22.4% relative RMSE, and THuman2 (normal) by 53.5% relative angular error. |
|
2024-08-23T00:00:00 | 2408.12528 | Show-o: One Single Transformer to Unify Multimodal Understanding and Generation | [
"Jinheng Xie",
"Weijia Mao",
"Zechen Bai",
"David Junhao Zhang",
"Weihao Wang",
"Kevin Qinghong Lin",
"Yuchao Gu",
"Zhijie Chen",
"Zhenheng Yang",
"Mike Zheng Shou"
]
| https://github.com/showlab/Show-o | We present a unified transformer, i.e., Show-o, that unifies multimodal understanding and generation. Unlike fully autoregressive models, Show-o unifies autoregressive and (discrete) diffusion modeling to adaptively handle inputs and outputs of various and mixed modalities. The unified model flexibly supports a wide range of vision-language tasks including visual question-answering, text-to-image generation, text-guided inpainting/extrapolation, and mixed-modality generation. Across various benchmarks, it demonstrates comparable or superior performance to existing individual models with an equivalent or larger number of parameters tailored for understanding or generation. This significantly highlights its potential as a next-generation foundation model. Code and models are released at https://github.com/showlab/Show-o. |
2024-08-23T00:00:00 | 2408.11878 | Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications | [
"Qianqian Xie",
"Dong Li",
"Mengxi Xiao",
"Zihao Jiang",
"Ruoyu Xiang",
"Xiao Zhang",
"Zhengyu Chen",
"Yueru He",
"Weiguang Han",
"Yuzhe Yang",
"Shunian Chen",
"Yifei Zhang",
"Lihang Shen",
"Daniel Kim",
"Zhiwei Liu",
"Zheheng Luo",
"Yangyang Yu",
"Yupeng Cao",
"Zhiyang Deng",
"Zhiyuan Yao",
"Haohang Li",
"Duanyu Feng",
"Yongfu Dai",
"VijayaSai Somasundaram",
"Peng Lu",
"Yilun Zhao",
"Yitao Long",
"Guojun Xiong",
"Kaleb Smith",
"Honghai Yu",
"Yanzhao Lai",
"Min Peng",
"Jianyun Nie",
"Jordan W. Suchow",
"Xiao-Yang Liu",
"Benyou Wang",
"Alejandro Lopez-Lira",
"Jimin Huang",
"Sophia Ananiadou"
]
| Large language models (LLMs) have advanced financial applications, yet they often lack sufficient financial knowledge and struggle with tasks involving multi-modal inputs like tables and time series data. To address these limitations, we introduce Open-FinLLMs, a series of Financial LLMs. We begin with FinLLaMA, pre-trained on a 52 billion token financial corpus, incorporating text, tables, and time-series data to embed comprehensive financial knowledge. FinLLaMA is then instruction fine-tuned with 573K financial instructions, resulting in FinLLaMA-instruct, which enhances task performance. Finally, we present FinLLaVA, a multimodal LLM trained with 1.43M image-text instructions to handle complex financial data types. Extensive evaluations demonstrate FinLLaMA's superior performance over LLaMA3-8B, LLaMA3.1-8B, and BloombergGPT in both zero-shot and few-shot settings across 19 and 4 datasets, respectively. FinLLaMA-instruct outperforms GPT-4 and other Financial LLMs on 15 datasets. FinLLaVA excels in understanding tables and charts across 4 multimodal tasks. Additionally, FinLLaMA achieves impressive Sharpe Ratios in trading simulations, highlighting its robust financial application capabilities. We will continually maintain and improve our models and benchmarks to support ongoing innovation in academia and industry. |
|
2024-08-23T00:00:00 | 2408.12570 | Jamba-1.5: Hybrid Transformer-Mamba Models at Scale | [
"Jamba Team",
"Barak Lenz",
"Alan Arazi",
"Amir Bergman",
"Avshalom Manevich",
"Barak Peleg",
"Ben Aviram",
"Chen Almagor",
"Clara Fridman",
"Dan Padnos",
"Daniel Gissin",
"Daniel Jannai",
"Dor Muhlgay",
"Dor Zimberg",
"Edden M Gerber",
"Elad Dolev",
"Eran Krakovsky",
"Erez Safahi",
"Erez Schwartz",
"Gal Cohen",
"Gal Shachaf",
"Haim Rozenblum",
"Hofit Bata",
"Ido Blass",
"Inbal Magar",
"Itay Dalmedigos",
"Jhonathan Osin",
"Julie Fadlon",
"Maria Rozman",
"Matan Danos",
"Michael Gokhman",
"Mor Zusman",
"Naama Gidron",
"Nir Ratner",
"Noam Gat",
"Noam Rozen",
"Oded Fried",
"Ohad Leshno",
"Omer Antverg",
"Omri Abend",
"Opher Lieber",
"Or Dagan",
"Orit Cohavi",
"Raz Alon",
"Ro'i Belson",
"Roi Cohen",
"Rom Gilad",
"Roman Glozman",
"Shahar Lev",
"Shaked Meirom",
"Tal Delbari",
"Tal Ness",
"Tomer Asida",
"Tom Ben Gal",
"Tom Braude",
"Uriya Pumerantz",
"Yehoshua Cohen",
"Yonatan Belinkov",
"Yuval Globerson",
"Yuval Peleg Levy",
"Yoav Shoham"
]
| We present Jamba-1.5, new instruction-tuned large language models based on our Jamba architecture. Jamba is a hybrid Transformer-Mamba mixture of experts architecture, providing high throughput and low memory usage across context lengths, while retaining the same or better quality as Transformer models. We release two model sizes: Jamba-1.5-Large, with 94B active parameters, and Jamba-1.5-Mini, with 12B active parameters. Both models are fine-tuned for a variety of conversational and instruction-following capabilities, and have an effective context length of 256K tokens, the largest amongst open-weight models. To support cost-effective inference, we introduce ExpertsInt8, a novel quantization technique that allows fitting Jamba-1.5-Large on a machine with 8 80GB GPUs when processing 256K-token contexts without loss of quality. When evaluated on a battery of academic and chatbot benchmarks, Jamba-1.5 models achieve excellent results while providing high throughput and outperforming other open-weight models on long-context benchmarks. The model weights for both sizes are publicly available under the Jamba Open Model License and we release ExpertsInt8 as open source. |