Columns:
date: timestamp[ns], ranging from 2023-05-05 00:00:00 to 2025-07-16 00:00:00
arxiv_id: string, length 10
title: string, length 8 to 202
authors: list, 1 to 3.3k entries
github: string, length 0 to 116
abstract: string, length 165 to 1.92k characters
2024-07-31T00:00:00
2407.20267
A Large Encoder-Decoder Family of Foundation Models For Chemical Language
[ "Eduardo Soares", "Victor Shirasuna", "Emilio Vital Brazil", "Renato Cerqueira", "Dmitry Zubarev", "Kristin Schmidt" ]
Large-scale pre-training methodologies for chemical language models represent a breakthrough in cheminformatics. These methods excel in tasks such as property prediction and molecule generation by learning contextualized representations of input tokens through self-supervised learning on large unlabeled corpora. Typically, this involves pre-training on unlabeled data followed by fine-tuning on specific tasks, reducing dependence on annotated datasets and broadening chemical language representation understanding. This paper introduces a family of large encoder-decoder chemical foundation models pre-trained on a curated dataset of 91 million SMILES samples sourced from PubChem, equivalent to 4 billion molecular tokens. The proposed foundation model supports different complex tasks, including quantum property prediction, and offers flexibility with two main variants (289M and 8×289M). Our experiments across multiple benchmark datasets validate the capacity of the proposed model to provide state-of-the-art results for different tasks. We also provide a preliminary assessment of the compositionality of the embedding space as a prerequisite for reasoning tasks. We demonstrate that the produced latent space is more separable than the state of the art and exhibits few-shot learning capabilities.
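As a rough illustration of the preprocessing such chemical language models depend on, below is a minimal sketch of regex-based SMILES tokenization in Python. The token pattern, the error handling, and the example molecule are simplifying assumptions for illustration, not the paper's actual tokenizer.

```python
import re

# Simplified SMILES token pattern (an assumption; production chemical LMs use richer rules):
# bracketed atoms, common two-letter elements, ring-closure digits, bonds, and branches.
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|Se|se|@@|%\d{2}|[BCNOPSFIbcnops]|[=#\\/\-\+\(\)\.]|\d)"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into tokens; raise if any characters are left unmatched."""
    tokens = SMILES_PATTERN.findall(smiles)
    if "".join(tokens) != smiles:
        raise ValueError(f"Unrecognized characters in SMILES: {smiles}")
    return tokens

if __name__ == "__main__":
    # Caffeine as a quick smoke test.
    print(tokenize_smiles("CN1C=NC2=C1C(=O)N(C(=O)N2C)C"))
```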
2024-08-01T00:00:00
2407.21475
Fine-grained Zero-shot Video Sampling
[ "Dengsheng Chen", "Jie Hu", "Xiaoming Wei", "Enhua Wu" ]
Incorporating a temporal dimension into pretrained image diffusion models for video generation is a prevalent approach. However, this method is computationally demanding and necessitates large-scale video datasets. More critically, the heterogeneity between image and video datasets often results in catastrophic forgetting of the image expertise. Recent attempts to directly extract video snippets from image diffusion models have somewhat mitigated these problems. Nevertheless, these methods can only generate brief video clips with simple movements and fail to capture fine-grained motion or non-grid deformation. In this paper, we propose a novel Zero-Shot video Sampling algorithm, denoted as ZS^2, capable of directly sampling high-quality video clips from existing image synthesis methods, such as Stable Diffusion, without any training or optimization. Specifically, ZS^2 utilizes the dependency noise model and temporal momentum attention to ensure content consistency and animation coherence, respectively. This ability enables it to excel in related tasks, such as conditional and context-specialized video generation and instruction-guided video editing. Experimental results demonstrate that ZS^2 achieves state-of-the-art performance in zero-shot video generation, occasionally outperforming recent supervised methods. Homepage: https://densechen.github.io/zss/.
2024-08-01T00:00:00
2407.21646
Towards Achieving Human Parity on End-to-end Simultaneous Speech Translation via LLM Agent
[ "Shanbo Cheng", "Zhichao Huang", "Tom Ko", "Hang Li", "Ningxin Peng", "Lu Xu", "Qini Zhang" ]
In this paper, we present Cross Language Agent -- Simultaneous Interpretation, CLASI, a high-quality and human-like Simultaneous Speech Translation (SiST) System. Inspired by professional human interpreters, we utilize a novel data-driven read-write strategy to balance the translation quality and latency. To address the challenge of translating in-domain terminologies, CLASI employs a multi-modal retrieving module to obtain relevant information to augment the translation. Supported by LLMs, our approach can generate error-tolerant translations by considering the input audio, historical context, and retrieved information. Experimental results show that our system outperforms other systems by significant margins. Aligned with professional human interpreters, we evaluate CLASI with a better human evaluation metric, valid information proportion (VIP), which measures the amount of information that can be successfully conveyed to the listeners. In real-world scenarios, where speech is often disfluent, informal, and unclear, CLASI achieves a VIP of 81.3% and 78.0% for Chinese-to-English and English-to-Chinese translation directions, respectively. In contrast, state-of-the-art commercial or open-source systems only achieve 35.4% and 41.6%. On the extremely hard dataset, where other systems achieve under 13% VIP, CLASI can still achieve 70% VIP.
2024-08-01T00:00:00
2407.21781
Berkeley Humanoid: A Research Platform for Learning-based Control
[ "Qiayuan Liao", "Bike Zhang", "Xuanyu Huang", "Xiaoyu Huang", "Zhongyu Li", "Koushil Sreenath" ]
We introduce Berkeley Humanoid, a reliable and low-cost mid-scale humanoid research platform for learning-based control. Our lightweight, in-house-built robot is designed specifically for learning algorithms with low simulation complexity, anthropomorphic motion, and high reliability against falls. The robot's narrow sim-to-real gap enables agile and robust locomotion across various terrains in outdoor environments, achieved with a simple reinforcement learning controller using light domain randomization. Furthermore, we demonstrate the robot traversing for hundreds of meters, walking on a steep unpaved trail, and hopping with single and double legs as a testimony to its high performance in dynamical walking. Capable of omnidirectional locomotion and withstanding large perturbations with a compact setup, our system aims for scalable, sim-to-real deployment of learning-based humanoid systems. Please check http://berkeley-humanoid.com for more details.
2024-08-01T00:00:00
2407.21686
Expressive Whole-Body 3D Gaussian Avatar
[ "Gyeongsik Moon", "Takaaki Shiratori", "Shunsuke Saito" ]
Facial expressions and hand motions are necessary to express our emotions and interact with the world. Nevertheless, most of the 3D human avatars modeled from a casually captured video only support body motions without facial expressions and hand motions. In this work, we present ExAvatar, an expressive whole-body 3D human avatar learned from a short monocular video. We design ExAvatar as a combination of the whole-body parametric mesh model (SMPL-X) and 3D Gaussian Splatting (3DGS). The main challenges are 1) a limited diversity of facial expressions and poses in the video and 2) the absence of 3D observations, such as 3D scans and RGBD images. The limited diversity in the video makes animations with novel facial expressions and poses non-trivial. In addition, the absence of 3D observations could cause significant ambiguity in human parts that are not observed in the video, which can result in noticeable artifacts under novel motions. To address them, we introduce our hybrid representation of the mesh and 3D Gaussians. Our hybrid representation treats each 3D Gaussian as a vertex on the surface with pre-defined connectivity information (i.e., triangle faces) between them following the mesh topology of SMPL-X. It makes our ExAvatar animatable with novel facial expressions, driven by the facial expression space of SMPL-X. In addition, by using connectivity-based regularizers, we significantly reduce artifacts in novel facial expressions and poses.
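A minimal sketch of a connectivity-based regularizer in the spirit described above, assuming each 3D Gaussian is anchored to a mesh vertex and penalizing per-vertex parameters that disagree with their mesh neighbors. The edge extraction and the squared-difference penalty are illustrative assumptions, not ExAvatar's exact loss.

```python
import torch

def edges_from_faces(faces: torch.Tensor) -> torch.Tensor:
    """Collect unique undirected edges (E, 2) from triangle faces (F, 3)."""
    e = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], dim=0)
    e, _ = torch.sort(e, dim=1)          # make edges orientation-independent
    return torch.unique(e, dim=0)

def connectivity_regularizer(offsets: torch.Tensor, faces: torch.Tensor) -> torch.Tensor:
    """Penalize per-Gaussian offsets that disagree with their mesh neighbors.

    offsets: (V, D) per-vertex Gaussian parameters (e.g. position offsets).
    faces:   (F, 3) triangle indices defining SMPL-X-style connectivity.
    """
    edges = edges_from_faces(faces)
    diff = offsets[edges[:, 0]] - offsets[edges[:, 1]]
    return (diff ** 2).sum(dim=-1).mean()

if __name__ == "__main__":
    verts = torch.randn(4, 3, requires_grad=True)   # toy offsets for 4 vertices
    faces = torch.tensor([[0, 1, 2], [1, 2, 3]])
    loss = connectivity_regularizer(verts, faces)
    loss.backward()
    print(float(loss))
```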
2024-08-01T00:00:00
2407.21705
Tora: Trajectory-oriented Diffusion Transformer for Video Generation
[ "Zhenghao Zhang", "Junchao Liao", "Menghao Li", "Long Qin", "Weizhi Wang" ]
Recent advancements in Diffusion Transformer (DiT) have demonstrated remarkable proficiency in producing high-quality video content. Nonetheless, the potential of transformer-based diffusion models for effectively generating videos with controllable motion remains an area of limited exploration. This paper introduces Tora, the first trajectory-oriented DiT framework that integrates textual, visual, and trajectory conditions concurrently for video generation. Specifically, Tora consists of a Trajectory Extractor~(TE), a Spatial-Temporal DiT, and a Motion-guidance Fuser~(MGF). The TE encodes arbitrary trajectories into hierarchical spacetime motion patches with a 3D video compression network. The MGF integrates the motion patches into the DiT blocks to generate consistent videos following trajectories. Our design aligns seamlessly with DiT's scalability, allowing precise control of video content's dynamics with diverse durations, aspect ratios, and resolutions. Extensive experiments demonstrate Tora's excellence in achieving high motion fidelity, while also meticulously simulating the movement of the physical world. Page can be found at https://ali-videoai.github.io/tora_video.
2024-08-01T00:00:00
2407.21770
MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
[ "Xi Victoria Lin", "Akshat Shrivastava", "Liang Luo", "Srinivasan Iyer", "Mike Lewis", "Gargi Gosh", "Luke Zettlemoyer", "Armen Aghajanyan" ]
We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7x overall, with 2.6x for text and 5.2x for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3x overall FLOPs savings (3x for text, 2.8x for image). Combining MoMa with mixture-of-depths (MoD) further improves pre-training FLOPs savings to 4.2x overall (text: 3.4x, image: 5.3x), although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa's potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems.
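A minimal sketch of modality-aware expert routing as the abstract describes it: tokens are first partitioned by modality, then routed with a learned top-1 router within that modality's expert group. The expert width, top-1 routing, and looped dispatch are simplifications for clarity, not Meta's implementation.

```python
import torch
import torch.nn as nn

class ModalityAwareMoE(nn.Module):
    """Route text tokens to text experts and image tokens to image experts."""

    def __init__(self, d_model: int, n_text_experts: int = 4, n_image_experts: int = 4):
        super().__init__()
        def make_expert():
            return nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
            )
        self.experts = nn.ModuleDict({
            "text": nn.ModuleList([make_expert() for _ in range(n_text_experts)]),
            "image": nn.ModuleList([make_expert() for _ in range(n_image_experts)]),
        })
        self.routers = nn.ModuleDict({
            "text": nn.Linear(d_model, n_text_experts),
            "image": nn.Linear(d_model, n_image_experts),
        })

    def forward(self, x: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # x: (N, d_model) flattened tokens; is_image: (N,) boolean modality mask.
        out = torch.zeros_like(x)
        for modality, mask in (("text", ~is_image), ("image", is_image)):
            if mask.sum() == 0:
                continue
            tokens = x[mask]
            gates = self.routers[modality](tokens).softmax(dim=-1)   # (n_tok, n_experts)
            top_gate, top_idx = gates.max(dim=-1)                    # learned top-1 routing
            routed = torch.zeros_like(tokens)
            for e, expert in enumerate(self.experts[modality]):
                sel = top_idx == e
                if sel.any():
                    routed[sel] = top_gate[sel].unsqueeze(-1) * expert(tokens[sel])
            out[mask] = routed
        return out

if __name__ == "__main__":
    moe = ModalityAwareMoE(d_model=32)
    tokens = torch.randn(10, 32)
    modality = torch.tensor([False] * 6 + [True] * 4)   # 6 text tokens, 4 image tokens
    print(moe(tokens, modality).shape)                  # torch.Size([10, 32])
```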
2024-08-01T00:00:00
2407.21530
Data Contamination Report from the 2024 CONDA Shared Task
[ "Oscar Sainz", "Iker García-Ferrero", "Alon Jacovi", "Jon Ander Campos", "Yanai Elazar", "Eneko Agirre", "Yoav Goldberg", "Wei-Lin Chen", "Jenny Chim", "Leshem Choshen", "Luca D'Amico-Wong", "Melissa Dell", "Run-Ze Fan", "Shahriar Golchin", "Yucheng Li", "Pengfei Liu", "Bhavish Pahwa", "Ameya Prabhu", "Suryansh Sharma", "Emily Silcock", "Kateryna Solonko", "David Stap", "Mihai Surdeanu", "Yu-Min Tseng", "Vishaal Udandarao", "Zengzhi Wang", "Ruijie Xu", "Jinglin Yang" ]
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora used to train large-scale models, compromising evaluation results. The workshop fostered a shared task to collect evidence on data contamination in currently available datasets and models. The goal of the shared task and associated database is to assist the community in understanding the extent of the problem and to assist researchers in avoiding reporting evaluation results on known contaminated resources. The shared task provides a structured, centralized public database for the collection of contamination evidence, open to contributions from the community via GitHub pull requests. This first compilation paper is based on 566 reported entries over 91 contaminated sources from a total of 23 contributors. The details of the individual contamination events are available on the platform. The platform continues to be online, open to contributions from the community.
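As a hedged illustration of the kind of evidence such contamination reports rest on, here is a minimal word n-gram overlap check between an evaluation example and a pre-training corpus chunk. The n-gram size and any flagging threshold are arbitrary assumptions, not the shared task's methodology.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Lowercased word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(eval_example: str, corpus_chunk: str, n: int = 8) -> float:
    """Fraction of the evaluation example's n-grams that also appear in the corpus chunk."""
    eval_ngrams = ngrams(eval_example, n)
    if not eval_ngrams:
        return 0.0
    return len(eval_ngrams & ngrams(corpus_chunk, n)) / len(eval_ngrams)

if __name__ == "__main__":
    example = "the quick brown fox jumps over the lazy dog near the river bank"
    chunk = "a corpus sentence: the quick brown fox jumps over the lazy dog near the river bank today"
    print(f"8-gram overlap: {overlap_ratio(example, chunk):.2f}")  # flag as suspicious above some threshold
```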
2024-08-01T00:00:00
2407.21783
The Llama 3 Herd of Models
[ "Abhimanyu Dubey", "Abhinav Jauhri", "Abhinav Pandey", "Abhishek Kadian", "Ahmad Al-Dahle", "Aiesha Letman", "Akhil Mathur", "Alan Schelten", "Amy Yang", "Angela Fan", "Anirudh Goyal", "Anthony Hartshorn", "Aobo Yang", "Archi Mitra", "Archie Sravankumar", "Artem Korenev", "Arthur Hinsvark", "Arun Rao", "Aston Zhang", "Aurelien Rodriguez", "Austen Gregerson", "Ava Spataru", "Baptiste Roziere", "Bethany Biron", "Binh Tang", "Bobbie Chern", "Charlotte Caucheteux", "Chaya Nayak", "Chloe Bi", "Chris Marra", "Chris McConnell", "Christian Keller", "Christophe Touret", "Chunyang Wu", "Corinne Wong", "Cristian Canton Ferrer", "Cyrus Nikolaidis", "Damien Allonsius", "Daniel Song", "Danielle Pintz", "Danny Livshits", "David Esiobu", "Dhruv Choudhary", "Dhruv Mahajan", "Diego Garcia-Olano", "Diego Perino", "Dieuwke Hupkes", "Egor Lakomkin", "Ehab AlBadawy", "Elina Lobanova", "Emily Dinan", "Eric Michael Smith", "Filip Radenovic", "Frank Zhang", "Gabriel Synnaeve", "Gabrielle Lee", "Georgia Lewis Anderson", "Graeme Nail", "Gregoire Mialon", "Guan Pang", "Guillem Cucurell", "Hailey Nguyen", "Hannah Korevaar", "Hu Xu", "Hugo Touvron", "Iliyan Zarov", "Imanol Arrieta Ibarra", "Isabel Kloumann", "Ishan Misra", "Ivan Evtimov", "Jade Copet", "Jaewon Lee", "Jan Geffert", "Jana Vranes", "Jason Park", "Jay Mahadeokar", "Jeet Shah", "Jelmer van der Linde", "Jennifer Billock", "Jenny Hong", "Jenya Lee", "Jeremy Fu", "Jianfeng Chi", "Jianyu Huang", "Jiawen Liu", "Jie Wang", "Jiecao Yu", "Joanna Bitton", "Joe Spisak", "Jongsoo Park", "Joseph Rocca", "Joshua Johnstun", "Joshua Saxe", "Junteng Jia", "Kalyan Vasuden Alwala", "Kartikeya Upasani", "Kate Plawiak", "Ke Li", "Kenneth Heafield", "Kevin Stone", "Khalid El-Arini", "Krithika Iyer", "Kshitiz Malik", "Kuenley Chiu", "Kunal Bhalla", "Lauren Rantala-Yeary", "Laurens van der Maaten", "Lawrence Chen", "Liang Tan", "Liz Jenkins", "Louis Martin", "Lovish Madaan", "Lubo Malo", "Lukas Blecher", "Lukas Landzaat", "Luke de Oliveira", "Madeline Muzzi", "Mahesh Pasupuleti", "Mannat Singh", "Manohar Paluri", "Marcin Kardas", "Mathew Oldham", "Mathieu Rita", "Maya Pavlova", "Melanie Kambadur", "Mike Lewis", "Min Si", "Mitesh Kumar Singh", "Mona Hassan", "Naman Goyal", "Narjes Torabi", "Nikolay Bashlykov", "Nikolay Bogoychev", "Niladri Chatterji", "Olivier Duchenne", "Onur Çelebi", "Patrick Alrassy", "Pengchuan Zhang", "Pengwei Li", "Petar Vasic", "Peter Weng", "Prajjwal Bhargava", "Pratik Dubal", "Praveen Krishnan", "Punit Singh Koura", "Puxin Xu", "Qing He", "Qingxiao Dong", "Ragavan Srinivasan", "Raj Ganapathy", "Ramon Calderer", "Ricardo Silveira Cabral", "Robert Stojnic", "Roberta Raileanu", "Rohit Girdhar", "Rohit Patel", "Romain Sauvestre", "Ronnie Polidoro", "Roshan Sumbaly", "Ross Taylor", "Ruan Silva", "Rui Hou", "Rui Wang", "Saghar Hosseini", "Sahana Chennabasappa", "Sanjay Singh", "Sean Bell", "Seohyun Sonia Kim", "Sergey Edunov", "Shaoliang Nie", "Sharan Narang", "Sharath Raparthy", "Sheng Shen", "Shengye Wan", "Shruti Bhosale", "Shun Zhang", "Simon Vandenhende", "Soumya Batra", "Spencer Whitman", "Sten Sootla", "Stephane Collot", "Suchin Gururangan", "Sydney Borodinsky", "Tamar Herman", "Tara Fowler", "Tarek Sheasha", "Thomas Georgiou", "Thomas Scialom", "Tobias Speckbacher", "Todor Mihaylov", "Tong Xiao", "Ujjwal Karn", "Vedanuj Goswami", "Vibhor Gupta", "Vignesh Ramanathan", "Viktor Kerkez", "Vincent Gonguet", "Virginie Do", "Vish Vogeti", "Vladan Petrovic", "Weiwei Chu", "Wenhan Xiong", "Wenyin Fu", "Whitney Meers", "Xavier Martinet", "Xiaodong Wang", 
"Xiaoqing Ellen Tan", "Xinfeng Xie", "Xuchao Jia", "Xuewei Wang", "Yaelle Goldschlag", "Yashesh Gaur", "Yasmine Babaei", "Yi Wen", "Yiwen Song", "Yuchen Zhang", "Yue Li", "Yuning Mao", "Zacharie Delpierre Coudert", "Zheng Yan", "Zhengxing Chen", "Zoe Papakipos", "Aaditya Singh", "Aaron Grattafiori", "Abha Jain", "Adam Kelsey", "Adam Shajnfeld", "Adithya Gangidi", "Adolfo Victoria", "Ahuva Goldstand", "Ajay Menon", "Ajay Sharma", "Alex Boesenberg", "Alex Vaughan", "Alexei Baevski", "Allie Feinstein", "Amanda Kallet", "Amit Sangani", "Anam Yunus", "Andrei Lupu", "Andres Alvarado", "Andrew Caples", "Andrew Gu", "Andrew Ho", "Andrew Poulton", "Andrew Ryan", "Ankit Ramchandani", "Annie Franco", "Aparajita Saraf", "Arkabandhu Chowdhury", "Ashley Gabriel", "Ashwin Bharambe", "Assaf Eisenman", "Azadeh Yazdan", "Beau James", "Ben Maurer", "Benjamin Leonhardi", "Bernie Huang", "Beth Loyd", "Beto De Paola", "Bhargavi Paranjape", "Bing Liu", "Bo Wu", "Boyu Ni", "Braden Hancock", "Bram Wasti", "Brandon Spence", "Brani Stojkovic", "Brian Gamido", "Britt Montalvo", "Carl Parker", "Carly Burton", "Catalina Mejia", "Changhan Wang", "Changkyu Kim", "Chao Zhou", "Chester Hu", "Ching-Hsiang Chu", "Chris Cai", "Chris Tindal", "Christoph Feichtenhofer", "Damon Civin", "Dana Beaty", "Daniel Kreymer", "Daniel Li", "Danny Wyatt", "David Adkins", "David Xu", "Davide Testuggine", "Delia David", "Devi Parikh", "Diana Liskovich", "Didem Foss", "Dingkang Wang", "Duc Le", "Dustin Holland", "Edward Dowling", "Eissa Jamil", "Elaine Montgomery", "Eleonora Presani", "Emily Hahn", "Emily Wood", "Erik Brinkman", "Esteban Arcaute", "Evan Dunbar", "Evan Smothers", "Fei Sun", "Felix Kreuk", "Feng Tian", "Firat Ozgenel", "Francesco Caggioni", "Francisco Guzmán", "Frank Kanayet", "Frank Seide", "Gabriela Medina Florez", "Gabriella Schwarz", "Gada Badeer", "Georgia Swee", "Gil Halpern", "Govind Thattai", "Grant Herman", "Grigory Sizov", "Guangyi", "Zhang", "Guna Lakshminarayanan", "Hamid Shojanazeri", "Han Zou", "Hannah Wang", "Hanwen Zha", "Haroun Habeeb", "Harrison Rudolph", "Helen Suk", "Henry Aspegren", "Hunter Goldman", "Igor Molybog", "Igor Tufanov", "Irina-Elena Veliche", "Itai Gat", "Jake Weissman", "James Geboski", "James Kohli", "Japhet Asher", "Jean-Baptiste Gaya", "Jeff Marcus", "Jeff Tang", "Jennifer Chan", "Jenny Zhen", "Jeremy Reizenstein", "Jeremy Teboul", "Jessica Zhong", "Jian Jin", "Jingyi Yang", "Joe Cummings", "Jon Carvill", "Jon Shepard", "Jonathan McPhie", "Jonathan Torres", "Josh Ginsburg", "Junjie Wang", "Kai Wu", "Kam Hou U", "Karan Saxena", "Karthik Prasad", "Kartikay Khandelwal", "Katayoun Zand", "Kathy Matosich", "Kaushik Veeraraghavan", "Kelly Michelena", "Keqian Li", "Kun Huang", "Kunal Chawla", "Kushal Lakhotia", "Kyle Huang", "Lailin Chen", "Lakshya Garg", "Lavender A", "Leandro Silva", "Lee Bell", "Lei Zhang", "Liangpeng Guo", "Licheng Yu", "Liron Moshkovich", "Luca Wehrstedt", "Madian Khabsa", "Manav Avalani", "Manish Bhatt", "Maria Tsimpoukelli", "Martynas Mankus", "Matan Hasson", "Matthew Lennie", "Matthias Reso", "Maxim Groshev", "Maxim Naumov", "Maya Lathi", "Meghan Keneally", "Michael L. 
Seltzer", "Michal Valko", "Michelle Restrepo", "Mihir Patel", "Mik Vyatskov", "Mikayel Samvelyan", "Mike Clark", "Mike Macey", "Mike Wang", "Miquel Jubert Hermoso", "Mo Metanat", "Mohammad Rastegari", "Munish Bansal", "Nandhini Santhanam", "Natascha Parks", "Natasha White", "Navyata Bawa", "Nayan Singhal", "Nick Egebo", "Nicolas Usunier", "Nikolay Pavlovich Laptev", "Ning Dong", "Ning Zhang", "Norman Cheng", "Oleg Chernoguz", "Olivia Hart", "Omkar Salpekar", "Ozlem Kalinli", "Parkin Kent", "Parth Parekh", "Paul Saab", "Pavan Balaji", "Pedro Rittner", "Philip Bontrager", "Pierre Roux", "Piotr Dollar", "Polina Zvyagina", "Prashant Ratanchandani", "Pritish Yuvraj", "Qian Liang", "Rachad Alao", "Rachel Rodriguez", "Rafi Ayub", "Raghotham Murthy", "Raghu Nayani", "Rahul Mitra", "Raymond Li", "Rebekkah Hogan", "Robin Battey", "Rocky Wang", "Rohan Maheswari", "Russ Howes", "Ruty Rinott", "Sai Jayesh Bondu", "Samyak Datta", "Sara Chugh", "Sara Hunt", "Sargun Dhillon", "Sasha Sidorov", "Satadru Pan", "Saurabh Verma", "Seiji Yamamoto", "Sharadh Ramaswamy", "Shaun Lindsay", "Shaun Lindsay", "Sheng Feng", "Shenghao Lin", "Shengxin Cindy Zha", "Shiva Shankar", "Shuqiang Zhang", "Shuqiang Zhang", "Sinong Wang", "Sneha Agarwal", "Soji Sajuyigbe", "Soumith Chintala", "Stephanie Max", "Stephen Chen", "Steve Kehoe", "Steve Satterfield", "Sudarshan Govindaprasad", "Sumit Gupta", "Sungmin Cho", "Sunny Virk", "Suraj Subramanian", "Sy Choudhury", "Sydney Goldman", "Tal Remez", "Tamar Glaser", "Tamara Best", "Thilo Kohler", "Thomas Robinson", "Tianhe Li", "Tianjun Zhang", "Tim Matthews", "Timothy Chou", "Tzook Shaked", "Varun Vontimitta", "Victoria Ajayi", "Victoria Montanez", "Vijai Mohan", "Vinay Satish Kumar", "Vishal Mangla", "Vlad Ionescu", "Vlad Poenaru", "Vlad Tiberiu Mihailescu", "Vladimir Ivanov", "Wei Li", "Wenchen Wang", "Wenwen Jiang", "Wes Bouaziz", "Will Constable", "Xiaocheng Tang", "Xiaofang Wang", "Xiaojian Wu", "Xiaolan Wang", "Xide Xia", "Xilun Wu", "Xinbo Gao", "Yanjun Chen", "Ye Hu", "Ye Jia", "Ye Qi", "Yenda Li", "Yilin Zhang", "Ying Zhang", "Yossi Adi", "Youngjin Nam", "Yu", "Wang", "Yuchen Hao", "Yundi Qian", "Yuzi He", "Zach Rait", "Zachary DeVito", "Zef Rosnbrick", "Zhaoduo Wen", "Zhenyu Yang", "Zhiwei Zhao" ]
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
2024-08-01T00:00:00
2407.21772
ShieldGemma: Generative AI Content Moderation Based on Gemma
[ "Wenjun Zeng", "Yuchi Liu", "Ryan Mullins", "Ludovic Peran", "Joe Fernandez", "Hamza Harkous", "Karthik Narasimhan", "Drew Proud", "Piyush Kumar", "Bhaktipriya Radharapu", "Olivia Sturman", "Oscar Wahltinez" ]
We present ShieldGemma, a comprehensive suite of LLM-based safety content moderation models built upon Gemma 2. These models provide robust, state-of-the-art predictions of safety risks across key harm types (sexually explicit, dangerous content, harassment, hate speech) in both user input and LLM-generated output. By evaluating on both public and internal benchmarks, we demonstrate superior performance compared to existing models, such as Llama Guard (+10.8\% AU-PRC on public benchmarks) and WildCard (+4.3\%). Additionally, we present a novel LLM-based data curation pipeline, adaptable to a variety of safety-related tasks and beyond. We have shown strong generalization performance for a model trained mainly on synthetic data. By releasing ShieldGemma, we provide a valuable resource to the research community, advancing LLM safety and enabling the creation of more effective content moderation solutions for developers.
2024-08-01T00:00:00
2407.21721
Open-Vocabulary Audio-Visual Semantic Segmentation
[ "Ruohao Guo", "Liao Qu", "Dantong Niu", "Yanyu Qi", "Wenzhen Yue", "Ji Shi", "Bowei Xing", "Xianghua Ying" ]
https://github.com/ruohaoguo/ovavss
Audio-visual semantic segmentation (AVSS) aims to segment and classify sounding objects in videos with acoustic cues. However, most approaches operate under the closed-set assumption and only identify pre-defined categories from training data, lacking the generalization ability to detect novel categories in practical applications. In this paper, we introduce a new task: open-vocabulary audio-visual semantic segmentation, extending the AVSS task to open-world scenarios beyond the annotated label space. This is a more challenging task that requires recognizing all categories, even those that have never been seen or heard during training. Moreover, we propose the first open-vocabulary AVSS framework, OV-AVSS, which mainly consists of two parts: 1) a universal sound source localization module to perform audio-visual fusion and locate all potential sounding objects and 2) an open-vocabulary classification module to predict categories with the help of the prior knowledge from large-scale pre-trained vision-language models. To properly evaluate open-vocabulary AVSS, we split zero-shot training and testing subsets based on the AVSBench-semantic benchmark, namely AVSBench-OV. Extensive experiments demonstrate the strong segmentation and zero-shot generalization ability of our model on all categories. On the AVSBench-OV dataset, OV-AVSS achieves 55.43% mIoU on base categories and 29.14% mIoU on novel categories, exceeding the state-of-the-art zero-shot method by 41.88%/20.61% and the open-vocabulary method by 10.2%/11.6%. The code is available at https://github.com/ruohaoguo/ovavss.
2024-08-01T00:00:00
2407.21630
TAROT: Task-Oriented Authorship Obfuscation Using Policy Optimization Methods
[ "Gabriel Loiseau", "Damien Sileo", "Damien Riquet", "Maxime Meyer", "Marc Tommasi" ]
Authorship obfuscation aims to disguise the identity of an author within a text by altering the writing style, vocabulary, syntax, and other linguistic features associated with the text author. This alteration needs to balance privacy and utility. While strong obfuscation techniques can effectively hide the author's identity, they often degrade the quality and usefulness of the text for its intended purpose. Conversely, maintaining high utility tends to provide insufficient privacy, making it easier for an adversary to de-anonymize the author. Thus, achieving an optimal trade-off between these two conflicting objectives is crucial. In this paper, we propose TAROT: Task-Oriented Authorship Obfuscation Using Policy Optimization, a new unsupervised authorship obfuscation method whose goal is to optimize the privacy-utility trade-off by regenerating the entire text considering its downstream utility. Our approach leverages policy optimization as a fine-tuning paradigm over small language models in order to rewrite texts by preserving author identity and downstream task utility. We show that our approach largely reduces the accuracy of attackers while preserving utility. We make our code and models publicly available.
2024-08-01T00:00:00
2404.01300
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields
[ "Muhammad Zubair Irshad", "Sergey Zakharov", "Vitor Guizilini", "Adrien Gaidon", "Zsolt Kira", "Rares Ambrus" ]
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: Can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Owing to the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers to suit the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, contrasting it with other 3D representations such as pointclouds where the information density can be uneven, and the representation is irregular. Due to the difficulty of applying masked autoencoders to an implicit representation, such as NeRF, we opt for extracting an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. Our goal is made possible by masking random patches from NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model can learn the semantic and spatial structure of complete scenes. We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images. Once pretrained, the encoder is used for effective 3D transfer learning. Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks. Utilizing unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene understanding baselines on Front3D and ScanNet datasets with an absolute performance improvement of over 20% AP50 and 8% AP25 for 3D object detection.
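A minimal sketch of masking random cubic patches of a dense radiance-and-density grid and reconstructing only the masked patches with an MSE loss. The grid and patch sizes and the linear stand-in for the 3D Swin Transformer are assumptions for illustration, not the NeRF-MAE architecture.

```python
import torch
import torch.nn as nn

def mask_3d_patches(grid: torch.Tensor, patch: int = 4, mask_ratio: float = 0.75):
    """Split a (C, D, H, W) grid into cubic patches and zero out a random subset.

    Returns the masked patches, the original patches, and a boolean mask (True = masked).
    """
    c, d, h, w = grid.shape
    # (C, D/p, p, H/p, p, W/p, p) -> (n_patches, C * p^3)
    patches = grid.reshape(c, d // patch, patch, h // patch, patch, w // patch, patch)
    patches = patches.permute(1, 3, 5, 0, 2, 4, 6).reshape(-1, c * patch ** 3)
    masked = torch.rand(patches.shape[0]) < mask_ratio
    visible = patches.clone()
    visible[masked] = 0.0
    return visible, patches, masked

if __name__ == "__main__":
    grid = torch.randn(4, 16, 16, 16)                         # RGB + density grid (toy size)
    visible, target, masked = mask_3d_patches(grid)
    decoder = nn.Linear(visible.shape[1], visible.shape[1])   # stand-in for a 3D transformer
    pred = decoder(visible)
    loss = ((pred[masked] - target[masked]) ** 2).mean()      # reconstruction loss on masked patches only
    print(masked.float().mean().item(), loss.item())
```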
2024-08-01T00:00:00
2407.20229
Improving 2D Feature Representations by 3D-Aware Fine-Tuning
[ "Yuanwen Yue", "Anurag Das", "Francis Engelmann", "Siyu Tang", "Jan Eric Lenssen" ]
Current visual foundation models are trained purely on unstructured 2D data, limiting their understanding of the 3D structure of objects and scenes. In this work, we show that fine-tuning on 3D-aware data improves the quality of emerging semantic features. We design a method to lift semantic 2D features into an efficient 3D Gaussian representation, which allows us to re-render them for arbitrary views. Using the rendered 3D-aware features, we design a fine-tuning strategy to transfer such 3D awareness into a 2D foundation model. We demonstrate that models fine-tuned in that way produce features that readily improve downstream task performance in semantic segmentation and depth estimation through simple linear probing. Notably, though fine-tuned on a single indoor dataset, the improvement transfers to a variety of indoor datasets and out-of-domain datasets. We hope our study encourages the community to consider injecting 3D awareness when training 2D foundation models. Project page: https://ywyue.github.io/FiT3D.
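Since the claimed gains are measured through simple linear probing, here is a minimal sketch of fitting a linear probe on frozen backbone features. The feature dimensionality, class count, and optimizer settings are assumptions, not FiT3D's evaluation protocol.

```python
import torch
import torch.nn as nn

def train_linear_probe(features: torch.Tensor, labels: torch.Tensor,
                       n_classes: int, epochs: int = 100, lr: float = 1e-2) -> nn.Linear:
    """Fit a single linear layer on frozen per-pixel (or per-image) features."""
    probe = nn.Linear(features.shape[-1], n_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(features), labels)
        loss.backward()
        opt.step()
    return probe

if __name__ == "__main__":
    feats = torch.randn(512, 384)             # frozen backbone features (assumed 384-d)
    labels = torch.randint(0, 21, (512,))     # e.g. 21 semantic classes
    probe = train_linear_probe(feats, labels, n_classes=21)
    print((probe(feats).argmax(-1) == labels).float().mean().item())
```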
2024-08-02T00:00:00
2408.00203
OmniParser for Pure Vision Based GUI Agent
[ "Yadong Lu", "Jianwei Yang", "Yelong Shen", "Ahmed Awadallah" ]
The recent success of large vision language models shows great potential in driving the agent system operating on user interfaces. However, we argue that the power of multimodal models like GPT-4V as general agents on multiple operating systems across different applications is largely underestimated due to the lack of a robust screen parsing technique capable of: 1) reliably identifying interactable icons within the user interface, and 2) understanding the semantics of various elements in a screenshot and accurately associating the intended action with the corresponding region on the screen. To fill these gaps, we introduce OmniParser, a comprehensive method for parsing user interface screenshots into structured elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface. We first curated an interactable icon detection dataset using popular webpages and an icon description dataset. These datasets were utilized to fine-tune specialized models: a detection model to parse interactable regions on the screen and a caption model to extract the functional semantics of the detected elements. OmniParser significantly improves GPT-4V's performance on the ScreenSpot benchmark. On the Mind2Web and AITW benchmarks, OmniParser with screenshot-only input outperforms the GPT-4V baselines that require additional information beyond the screenshot.
2024-08-02T00:00:00
2408.00754
Coarse Correspondence Elicit 3D Spacetime Understanding in Multimodal Language Model
[ "Benlin Liu", "Yuhao Dong", "Yiqin Wang", "Yongming Rao", "Yansong Tang", "Wei-Chiu Ma", "Ranjay Krishna" ]
Multimodal language models (MLLMs) are increasingly being implemented in real-world environments, necessitating their ability to interpret 3D spaces and comprehend temporal dynamics. Despite their potential, current top models within our community still fall short in adequately understanding spatial and temporal dimensions. We introduce Coarse Correspondence, a simple, training-free, effective, and general-purpose visual prompting method to elicit 3D and temporal understanding in multimodal LLMs. Our method uses a lightweight tracking model to find object correspondences between frames in a video or between sets of image viewpoints. It selects the most frequent object instances and visualizes them with markers with unique IDs in the image. With this simple approach, we achieve state-of-the-art results on 3D understanding benchmarks including ScanQA (+20.5\%) and a subset of OpenEQA (+9.7\%), and on long-form video benchmarks such as EgoSchema (+6.0\%). We also curate a small diagnostic dataset to evaluate whether MLLMs can reason about space from a described viewpoint other than the camera viewpoint. Again, Coarse Correspondence improves spatial perspective-taking abilities but we highlight that MLLMs struggle with this task. Together, we demonstrate that our simple prompting method can significantly aid downstream tasks that require 3D or temporal reasoning.
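A minimal sketch of the visual-prompting step described above: given tracked instance boxes, draw each instance's ID as a marker on the frame. The tracker output format and the drawing style are assumptions; the paper's marker design may differ.

```python
from PIL import Image, ImageDraw

def draw_correspondence_markers(image: Image.Image, boxes: dict[int, tuple]) -> Image.Image:
    """Overlay numbered markers on tracked object instances.

    boxes maps an instance ID to an (x0, y0, x1, y1) box in pixel coordinates.
    """
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for instance_id, (x0, y0, x1, y1) in boxes.items():
        draw.rectangle([x0, y0, x1, y1], outline="red", width=3)
        draw.text((x0 + 4, y0 + 4), str(instance_id), fill="red")
    return annotated

if __name__ == "__main__":
    frame = Image.new("RGB", (320, 240), "white")
    # Hypothetical tracker output: instance ID -> bounding box on this frame.
    marked = draw_correspondence_markers(frame, {1: (20, 30, 100, 120), 2: (150, 60, 260, 200)})
    marked.save("frame_with_markers.png")
```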
2024-08-02T00:00:00
2408.00735
TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models
[ "Gilad Deutch", "Rinon Gal", "Daniel Garibi", "Or Patashnik", "Daniel Cohen-Or" ]
Diffusion models have opened the path to a wide range of text-based image editing frameworks. However, these typically build on the multi-step nature of the diffusion backwards process, and adapting them to distilled, fast-sampling methods has proven surprisingly challenging. Here, we focus on a popular line of text-based editing frameworks - the ``edit-friendly'' DDPM-noise inversion approach. We analyze its application to fast sampling methods and categorize its failures into two classes: the appearance of visual artifacts, and insufficient editing strength. We trace the artifacts to mismatched noise statistics between inverted noises and the expected noise schedule, and suggest a shifted noise schedule which corrects for this offset. To increase editing strength, we propose a pseudo-guidance approach that efficiently increases the magnitude of edits without introducing new artifacts. All in all, our method enables text-based image editing with as few as three diffusion steps, while providing novel insights into the mechanisms behind popular text-based editing approaches.
2024-08-02T00:00:00
2408.00714
SAM 2: Segment Anything in Images and Videos
[ "Nikhila Ravi", "Valentin Gabeur", "Yuan-Ting Hu", "Ronghang Hu", "Chaitanya Ryali", "Tengyu Ma", "Haitham Khedr", "Roman Rädle", "Chloe Rolland", "Laura Gustafson", "Eric Mintun", "Junting Pan", "Kalyan Vasudev Alwala", "Nicolas Carion", "Chao-Yuan Wu", "Ross Girshick", "Piotr Dollár", "Christoph Feichtenhofer" ]
We present Segment Anything Model 2 (SAM 2), a foundation model towards solving promptable visual segmentation in images and videos. We build a data engine, which improves model and data via user interaction, to collect the largest video segmentation dataset to date. Our model is a simple transformer architecture with streaming memory for real-time video processing. SAM 2 trained on our data provides strong performance across a wide range of tasks. In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM). We believe that our data, model, and insights will serve as a significant milestone for video segmentation and related perception tasks. We are releasing a version of our model, the dataset and an interactive demo.
2024-08-02T00:00:00
2408.00762
UniTalker: Scaling up Audio-Driven 3D Facial Animation through A Unified Model
[ "Xiangyu Fan", "Jiaqi Li", "Zhiqian Lin", "Weiye Xiao", "Lei Yang" ]
https://github.com/X-niper/UniTalker
Audio-driven 3D facial animation aims to map input audio to realistic facial motion. Despite significant progress, limitations arise from inconsistent 3D annotations, restricting previous models to training on specific annotations and thereby constraining the training scale. In this work, we present UniTalker, a unified model featuring a multi-head architecture designed to effectively leverage datasets with varied annotations. To enhance training stability and ensure consistency among multi-head outputs, we employ three training strategies, namely, PCA, model warm-up, and pivot identity embedding. To expand the training scale and diversity, we assemble A2F-Bench, comprising five publicly available datasets and three newly curated datasets. These datasets contain a wide range of audio domains, covering multilingual speech voices and songs, thereby scaling the training data from commonly employed datasets, typically less than 1 hour, to 18.5 hours. With a single trained UniTalker model, we achieve substantial lip vertex error reductions of 9.2% for BIWI dataset and 13.7% for Vocaset. Additionally, the pre-trained UniTalker exhibits promise as the foundation model for audio-driven facial animation tasks. Fine-tuning the pre-trained UniTalker on seen datasets further enhances performance on each dataset, with an average error reduction of 6.3% on A2F-Bench. Moreover, fine-tuning UniTalker on an unseen dataset with only half the data surpasses prior state-of-the-art models trained on the full dataset. The code and dataset are available at the project page https://github.com/X-niper/UniTalker.
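A minimal sketch of a multi-head output design of the kind described above: one shared motion decoder with a separate linear head per annotation convention. The GRU stand-in backbone and the per-dataset vertex dimensionalities are illustrative assumptions, not UniTalker's architecture.

```python
import torch
import torch.nn as nn

class MultiHeadFaceDecoder(nn.Module):
    """One shared motion decoder with a separate output head per annotation convention."""

    def __init__(self, d_audio: int, d_hidden: int, vertex_dims: dict[str, int]):
        super().__init__()
        self.backbone = nn.GRU(d_audio, d_hidden, batch_first=True)   # stand-in encoder
        self.heads = nn.ModuleDict(
            {name: nn.Linear(d_hidden, dim) for name, dim in vertex_dims.items()}
        )

    def forward(self, audio_features: torch.Tensor, dataset: str) -> torch.Tensor:
        hidden, _ = self.backbone(audio_features)   # (B, T, d_hidden)
        return self.heads[dataset](hidden)          # (B, T, vertex dim for that dataset)

if __name__ == "__main__":
    # Hypothetical annotation dimensionalities for two datasets.
    model = MultiHeadFaceDecoder(d_audio=80, d_hidden=128,
                                 vertex_dims={"biwi": 70110, "vocaset": 15069})
    audio = torch.randn(2, 50, 80)                  # batch of 50 audio frames
    print(model(audio, "vocaset").shape)            # torch.Size([2, 50, 15069])
```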
2024-08-02T00:00:00
2408.00690
Improving Text Embeddings for Smaller Language Models Using Contrastive Fine-tuning
[ "Trapoom Ukarapol", "Zhicheng Lee", "Amy Xin" ]
https://github.com/trapoom555/Language-Model-STS-CFT
While Large Language Models show remarkable performance in natural language understanding, their resource-intensive nature makes them less accessible. In contrast, smaller language models such as MiniCPM offer more sustainable scalability, but often underperform without specialized optimization. In this paper, we explore the enhancement of smaller language models through the improvement of their text embeddings. We select three language models, MiniCPM, Phi-2, and Gemma, to conduct contrastive fine-tuning on the NLI dataset. Our results demonstrate that this fine-tuning method enhances the quality of text embeddings for all three models across various benchmarks, with MiniCPM showing the most significant improvement, an average performance gain of 56.33\%. The contrastive fine-tuning code is publicly available at https://github.com/trapoom555/Language-Model-STS-CFT.
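A minimal sketch of triplet-based contrastive fine-tuning of the kind described above, using an InfoNCE-style loss with in-batch negatives over (anchor, entailment, contradiction) embeddings. The temperature and the use of in-batch negatives are common defaults assumed here, not necessarily the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def infonce_triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                         negative: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """Contrastive loss over (anchor, entailment, contradiction) embeddings.

    For each anchor, the positive is its entailment sentence; all other in-batch
    positives plus the hard negatives serve as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(torch.cat([positive, negative], dim=0), dim=-1)
    logits = anchor @ candidates.T / temperature          # (B, 2B)
    labels = torch.arange(anchor.shape[0], device=anchor.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    b, d = 8, 256
    a, p, n = torch.randn(b, d), torch.randn(b, d), torch.randn(b, d)
    print(infonce_triplet_loss(a, p, n).item())
```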
2024-08-02T00:00:00
2408.00653
SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
[ "Mark Boss", "Zixuan Huang", "Aaryaman Vasishta", "Varun Jampani" ]
We present SF3D, a novel method for rapid and high-quality textured object mesh reconstruction from a single image in just 0.5 seconds. Unlike most existing approaches, SF3D is explicitly trained for mesh generation, incorporating a fast UV unwrapping technique that enables swift texture generation rather than relying on vertex colors. The method also learns to predict material parameters and normal maps to enhance the visual quality of the reconstructed 3D meshes. Furthermore, SF3D integrates a delighting step to effectively remove low-frequency illumination effects, ensuring that the reconstructed meshes can be easily used in novel illumination conditions. Experiments demonstrate the superior performance of SF3D over the existing techniques. Project page: https://stable-fast-3d.github.io
2024-08-02T00:00:00
2408.00298
Tails Tell Tales: Chapter-Wide Manga Transcriptions with Character Names
[ "Ragav Sachdeva", "Gyungin Shin", "Andrew Zisserman" ]
https://github.com/ragavsachdeva/magi
Enabling engagement of manga by visually impaired individuals presents a significant challenge due to its inherently visual nature. With the goal of fostering accessibility, this paper aims to generate a dialogue transcript of a complete manga chapter, entirely automatically, with a particular emphasis on ensuring narrative consistency. This entails identifying (i) what is being said, i.e., detecting the texts on each page and classifying them into essential vs non-essential, and (ii) who is saying it, i.e., attributing each dialogue to its speaker, while ensuring the same characters are named consistently throughout the chapter. To this end, we introduce: (i) Magiv2, a model that is capable of generating high-quality chapter-wide manga transcripts with named characters and significantly higher precision in speaker diarisation over prior works; (ii) an extension of the PopManga evaluation dataset, which now includes annotations for speech-bubble tail boxes, associations of text to corresponding tails, classifications of text as essential or non-essential, and the identity for each character box; and (iii) a new character bank dataset, which comprises over 11K characters from 76 manga series, featuring 11.5K exemplar character images in total, as well as a list of chapters in which they appear. The code, trained model, and both datasets can be found at: https://github.com/ragavsachdeva/magi
2024-08-02T00:00:00
2408.00205
Sentence-wise Speech Summarization: Task, Datasets, and End-to-End Modeling with LM Knowledge Distillation
[ "Kohei Matsuura", "Takanori Ashihara", "Takafumi Moriya", "Masato Mimura", "Takatomo Kano", "Atsunori Ogawa", "Marc Delcroix" ]
This paper introduces a novel approach called sentence-wise speech summarization (Sen-SSum), which generates text summaries from a spoken document in a sentence-by-sentence manner. Sen-SSum combines the real-time processing of automatic speech recognition (ASR) with the conciseness of speech summarization. To explore this approach, we present two datasets for Sen-SSum: Mega-SSum and CSJ-SSum. Using these datasets, our study evaluates two types of Transformer-based models: 1) cascade models that combine ASR and strong text summarization models, and 2) end-to-end (E2E) models that directly convert speech into a text summary. While E2E models are appealing for developing compute-efficient systems, they perform worse than cascade models. Therefore, we propose knowledge distillation for E2E models using pseudo-summaries generated by the cascade models. Our experiments show that this proposed knowledge distillation effectively improves the performance of the E2E model on both datasets.
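A minimal sketch of the sequence-level distillation idea: the end-to-end student is trained with cross-entropy against pseudo-summaries produced by the cascade teacher. The shapes and padding convention are assumptions, not the paper's training recipe.

```python
import torch
import torch.nn.functional as F

def sequence_kd_step(student_logits: torch.Tensor, pseudo_summary_ids: torch.Tensor,
                     pad_id: int = 0) -> torch.Tensor:
    """Cross-entropy of the E2E speech-to-summary student against cascade pseudo-summaries.

    student_logits:     (B, T, V) decoder logits from the end-to-end model.
    pseudo_summary_ids: (B, T) token ids of summaries produced by the cascade teacher.
    """
    return F.cross_entropy(
        student_logits.reshape(-1, student_logits.shape[-1]),
        pseudo_summary_ids.reshape(-1),
        ignore_index=pad_id,
    )

if __name__ == "__main__":
    logits = torch.randn(2, 6, 100, requires_grad=True)   # toy vocabulary of 100 tokens
    targets = torch.randint(1, 100, (2, 6))
    loss = sequence_kd_step(logits, targets)
    loss.backward()
    print(loss.item())
```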
2024-08-02T00:00:00
2408.00765
MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities
[ "Weihao Yu", "Zhengyuan Yang", "Linfeng Ren", "Linjie Li", "Jianfeng Wang", "Kevin Lin", "Chung-Ching Lin", "Zicheng Liu", "Lijuan Wang", "Xinchao Wang" ]
MM-Vet, with open-ended vision-language questions aimed at evaluating integrated capabilities, has become one of the most popular benchmarks for large multimodal model evaluation. MM-Vet assesses six core vision-language (VL) capabilities: recognition, knowledge, spatial awareness, language generation, OCR, and math. However, its question format is restricted to single image-text pairs, lacking the interleaved image and text sequences prevalent in real-world scenarios. To address this limitation, we introduce MM-Vet v2, which includes a new VL capability called "image-text sequence understanding", evaluating models' ability to process VL sequences. Furthermore, we maintain the high quality of evaluation samples while further expanding the evaluation set size. Using MM-Vet v2 to benchmark large multimodal models, we found that Claude 3.5 Sonnet is the best model with a score of 71.8, slightly outperforming GPT-4o, which scored 71.0. Among open-weight models, InternVL2-Llama3-76B leads with a score of 68.4.
2024-08-02T00:00:00
2408.00584
Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses
[ "Gabriele Sarti", "Tommaso Caselli", "Malvina Nissim", "Arianna Bisazza" ]
Rebuses are puzzles requiring constrained multi-step reasoning to identify a hidden phrase from a set of images and letters. In this work, we introduce a large collection of verbalized rebuses for the Italian language and use it to assess the rebus-solving capabilities of state-of-the-art large language models. While general-purpose systems such as LLaMA-3 and GPT-4o perform poorly on this task, ad-hoc fine-tuning seems to improve models' performance. However, we find that performance gains from training are largely motivated by memorization. Our results suggest that rebus solving remains a challenging test bed to evaluate large language models' linguistic proficiency and sequential instruction-following skills.
2024-08-02T00:00:00
2408.00458
Reenact Anything: Semantic Video Motion Transfer Using Motion-Textual Inversion
[ "Manuel Kansy", "Jacek Naruniec", "Christopher Schroers", "Markus Gross", "Romann M. Weber" ]
Recent years have seen a tremendous improvement in the quality of video generation and editing approaches. While several techniques focus on editing appearance, few address motion. Current approaches using text, trajectories, or bounding boxes are limited to simple motions, so we specify motions with a single motion reference video instead. We further propose to use a pre-trained image-to-video model rather than a text-to-video model. This approach allows us to preserve the exact appearance and position of a target object or scene and helps disentangle appearance from motion. Our method, called motion-textual inversion, leverages our observation that image-to-video models extract appearance mainly from the (latent) image input, while the text/image embedding injected via cross-attention predominantly controls motion. We thus represent motion using text/image embedding tokens. By operating on an inflated motion-text embedding containing multiple text/image embedding tokens per frame, we achieve a high temporal motion granularity. Once optimized on the motion reference video, this embedding can be applied to various target images to generate videos with semantically similar motions. Our approach does not require spatial alignment between the motion reference video and target image, generalizes across various domains, and can be applied to various tasks such as full-body and face reenactment, as well as controlling the motion of inanimate objects and the camera. We empirically demonstrate the effectiveness of our method in the semantic video motion transfer task, significantly outperforming existing methods in this context.
2024-08-02T00:00:00
2408.00118
Gemma 2: Improving Open Language Models at a Practical Size
[ "Gemma Team", "Morgane Riviere", "Shreya Pathak", "Pier Giuseppe Sessa", "Cassidy Hardin", "Surya Bhupatiraju", "Léonard Hussenot", "Thomas Mesnard", "Bobak Shahriari", "Alexandre Ramé", "Johan Ferret", "Peter Liu", "Pouya Tafti", "Abe Friesen", "Michelle Casbon", "Sabela Ramos", "Ravin Kumar", "Charline Le Lan", "Sammy Jerome", "Anton Tsitsulin", "Nino Vieillard", "Piotr Stanczyk", "Sertan Girgin", "Nikola Momchev", "Matt Hoffman", "Shantanu Thakoor", "Jean-Bastien Grill", "Behnam Neyshabur", "Alanna Walton", "Aliaksei Severyn", "Alicia Parrish", "Aliya Ahmad", "Allen Hutchison", "Alvin Abdagic", "Amanda Carl", "Amy Shen", "Andy Brock", "Andy Coenen", "Anthony Laforge", "Antonia Paterson", "Ben Bastian", "Bilal Piot", "Bo Wu", "Brandon Royal", "Charlie Chen", "Chintu Kumar", "Chris Perry", "Chris Welty", "Christopher A. Choquette-Choo", "Danila Sinopalnikov", "David Weinberger", "Dimple Vijaykumar", "Dominika Rogozińska", "Dustin Herbison", "Elisa Bandy", "Emma Wang", "Eric Noland", "Erica Moreira", "Evan Senter", "Evgenii Eltyshev", "Francesco Visin", "Gabriel Rasskin", "Gary Wei", "Glenn Cameron", "Gus Martins", "Hadi Hashemi", "Hanna Klimczak-Plucińska", "Harleen Batra", "Harsh Dhand", "Ivan Nardini", "Jacinda Mein", "Jack Zhou", "James Svensson", "Jeff Stanway", "Jetha Chan", "Jin Zhou", "Joana Carrasqueira", "Joana Iljazi", "Jocelyn Becker", "Joe Fernandez", "Joost van Amersfoort", "Josh Gordon", "Josh Lipschultz", "Josh Newlan", "Ju-yeong Ji", "Kareem Mohamed", "Kartikeya Badola", "Kat Black", "Katie Millican", "Keelin McDonell", "Kelvin Nguyen", "Kiranbir Sodhia", "Kish Greene", "Lars Lowe Sjoesund", "Lauren Usui", "Laurent Sifre", "Lena Heuermann", "Leticia Lago", "Lilly McNealus", "Livio Baldini Soares", "Logan Kilpatrick", "Lucas Dixon", "Luciano Martins", "Machel Reid", "Manvinder Singh", "Mark Iverson", "Martin Görner", "Mat Velloso", "Mateo Wirth", "Matt Davidow", "Matt Miller", "Matthew Rahtz", "Matthew Watson", "Meg Risdal", "Mehran Kazemi", "Michael Moynihan", "Ming Zhang", "Minsuk Kahng", "Minwoo Park", "Mofi Rahman", "Mohit Khatwani", "Natalie Dao", "Nenshad Bardoliwalla", "Nesh Devanathan", "Neta Dumai", "Nilay Chauhan", "Oscar Wahltinez", "Pankil Botarda", "Parker Barnes", "Paul Barham", "Paul Michel", "Pengchong Jin", "Petko Georgiev", "Phil Culliton", "Pradeep Kuppala", "Ramona Comanescu", "Ramona Merhej", "Reena Jana", "Reza Ardeshir Rokni", "Rishabh Agarwal", "Ryan Mullins", "Samaneh Saadat", "Sara Mc Carthy", "Sarah Perrin", "Sébastien Arnold", "Sebastian Krause", "Shengyang Dai", "Shruti Garg", "Shruti Sheth", "Sue Ronstrom", "Susan Chan", "Timothy Jordan", "Ting Yu", "Tom Eccles", "Tom Hennigan", "Tomas Kocisky", "Tulsee Doshi", "Vihan Jain", "Vikas Yadav", "Vilobh Meshram", "Vishal Dharmadhikari", "Warren Barkley", "Wei Wei", "Wenming Ye", "Woohyun Han", "Woosuk Kwon", "Xiang Xu", "Zhe Shen", "Zhitao Gong", "Zichuan Wei", "Victor Cotruta", "Phoebe Kirk", "Anand Rao", "Minh Giang", "Ludovic Peran", "Tris Warkentin", "Eli Collins", "Joelle Barral", "Zoubin Ghahramani", "Raia Hadsell", "D. Sculley", "Jeanine Banks", "Anca Dragan", "Slav Petrov", "Oriol Vinyals", "Jeff Dean", "Demis Hassabis", "Koray Kavukcuoglu", "Clement Farabet", "Elena Buchatskaya", "Sebastian Borgeaud", "Noah Fiedel", "Armand Joulin", "Kathleen Kenealy", "Robert Dadashi", "Alek Andreev" ]
In this work, we introduce Gemma 2, a new addition to the Gemma family of lightweight, state-of-the-art open models, ranging in scale from 2 billion to 27 billion parameters. In this new version, we apply several known technical modifications to the Transformer architecture, such as interleaving local-global attentions (Beltagy et al., 2020a) and group-query attention (Ainslie et al., 2023). We also train the 2B and 9B models with knowledge distillation (Hinton et al., 2015) instead of next token prediction. The resulting models deliver the best performance for their size, and even offer competitive alternatives to models that are 2-3 times bigger. We release all our models to the community.
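A minimal sketch of what interleaving local (sliding-window) and global attention means at the level of attention masks, with odd layers local and even layers global. The window size and the layer alternation pattern are illustrative assumptions, not Gemma 2's configuration.

```python
import torch

def attention_mask(seq_len: int, layer_idx: int, window: int = 4) -> torch.Tensor:
    """Boolean causal mask: odd layers use a local sliding window, even layers are global.

    True marks positions a query may attend to.
    """
    i = torch.arange(seq_len)[:, None]   # query positions
    j = torch.arange(seq_len)[None, :]   # key positions
    causal = j <= i
    if layer_idx % 2 == 1:               # local (sliding-window) layer
        return causal & (i - j < window)
    return causal                        # global layer

if __name__ == "__main__":
    print(attention_mask(6, layer_idx=0).int())   # full causal attention
    print(attention_mask(6, layer_idx=1).int())   # causal attention limited to a 4-token window
```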
2024-08-02T00:00:00
2408.00167
Finch: Prompt-guided Key-Value Cache Compression
[ "Giulio Corallo", "Paolo Papotti" ]
Recent large language model applications, such as Retrieval-Augmented Generation and chatbots, have led to an increased need to process longer input contexts. However, this requirement is hampered by inherent limitations. Architecturally, models are constrained by a context window defined during training. Additionally, processing extensive texts requires substantial GPU memory. We propose a novel approach, Finch, to compress the input context by leveraging the pre-trained model weights of the self-attention. Given a prompt and a long text, Finch iteratively identifies the most relevant Key (K) and Value (V) pairs over chunks of the text conditioned on the prompt. Only such pairs are stored in the KV cache, which, within the space constrained by the context window, ultimately contains a compressed version of the long text. Our proposal enables models to consume large inputs even with high compression (up to 93x) while preserving semantic integrity without the need for fine-tuning.
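A minimal sketch of prompt-conditioned KV selection in the spirit of the abstract: score a chunk's keys by the attention they receive from the prompt's queries and keep only the top-scoring key-value pairs. The single-head, single-layer setup and the aggregation over prompt tokens are simplifying assumptions, not Finch's exact procedure.

```python
import torch

def select_kv_by_prompt(prompt_q: torch.Tensor, chunk_k: torch.Tensor,
                        chunk_v: torch.Tensor, keep: int):
    """Keep the `keep` chunk KV pairs that receive the most attention from the prompt.

    prompt_q: (P, d) prompt query vectors; chunk_k, chunk_v: (T, d) chunk keys/values.
    """
    scores = (prompt_q @ chunk_k.T) / chunk_k.shape[-1] ** 0.5   # (P, T) scaled dot products
    relevance = scores.softmax(dim=-1).sum(dim=0)                # aggregate over prompt tokens
    top = relevance.topk(keep).indices.sort().values             # preserve original token order
    return chunk_k[top], chunk_v[top]

if __name__ == "__main__":
    torch.manual_seed(0)
    q, k, v = torch.randn(4, 64), torch.randn(100, 64), torch.randn(100, 64)
    k_kept, v_kept = select_kv_by_prompt(q, k, v, keep=10)
    print(k_kept.shape, v_kept.shape)   # torch.Size([10, 64]) twice
```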
2024-08-02T00:00:00
2407.21794
Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey
[ "Atsuyuki Miyai", "Jingkang Yang", "Jingyang Zhang", "Yifei Ming", "Yueqian Lin", "Qing Yu", "Go Irie", "Shafiq Joty", "Yixuan Li", "Hai Li", "Ziwei Liu", "Toshihiko Yamasaki", "Kiyoharu Aizawa" ]
Detecting out-of-distribution (OOD) samples is crucial for ensuring the safety of machine learning systems and has shaped the field of OOD detection. Meanwhile, several other problems are closely related to OOD detection, including anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD). To unify these problems, a generalized OOD detection framework was proposed, taxonomically categorizing these five problems. However, Vision Language Models (VLMs) such as CLIP have significantly changed the paradigm and blurred the boundaries between these fields, again confusing researchers. In this survey, we first present a generalized OOD detection v2, encapsulating the evolution of AD, ND, OSR, OOD detection, and OD in the VLM era. Our framework reveals that, with some field inactivity and integration, the demanding challenges have become OOD detection and AD. In addition, we also highlight the significant shift in the definition, problem settings, and benchmarks; we thus feature a comprehensive review of the methodology for OOD detection, including the discussion over other related tasks to clarify their relationship to OOD detection. Finally, we explore the advancements in the emerging Large Vision Language Model (LVLM) era, such as GPT-4V. We conclude this survey with open challenges and future directions.
2024-08-02T00:00:00
2408.00760
Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention
[ "Susung Hong" ]
https://github.com/SusungHong/SEG-SDXL
Conditional diffusion models have shown remarkable success in visual content generation, producing high-quality samples across various domains, largely due to classifier-free guidance (CFG). Recent attempts to extend guidance to unconditional models have relied on heuristic techniques, resulting in suboptimal generation quality and unintended effects. In this work, we propose Smoothed Energy Guidance (SEG), a novel training- and condition-free approach that leverages the energy-based perspective of the self-attention mechanism to enhance image generation. By defining the energy of self-attention, we introduce a method to reduce the curvature of the energy landscape of attention and use the output as the unconditional prediction. Practically, we control the curvature of the energy landscape by adjusting the Gaussian kernel parameter while keeping the guidance scale parameter fixed. Additionally, we present a query blurring method that is equivalent to blurring the entire attention weights without incurring quadratic complexity in the number of tokens. In our experiments, SEG achieves a Pareto improvement in both quality and the reduction of side effects. The code is available at https://github.com/SusungHong/SEG-SDXL.
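A minimal sketch of Gaussian query blurring, here simplified to a 1D blur over the token dimension of the query tensor; SEG operates on image self-attention and its blurring may be applied spatially, so treat this purely as an illustration of controlling smoothing via the kernel's sigma.

```python
import torch
import torch.nn.functional as F

def gaussian_blur_queries(q: torch.Tensor, sigma: float) -> torch.Tensor:
    """Blur attention queries along the token dimension with a 1D Gaussian kernel.

    q: (B, N, d) query tensor. A larger sigma smooths the attention energy landscape more.
    """
    radius = max(1, int(3 * sigma))
    x = torch.arange(-radius, radius + 1, dtype=q.dtype, device=q.device)
    kernel = torch.exp(-x ** 2 / (2 * sigma ** 2))
    kernel = (kernel / kernel.sum()).view(1, 1, -1)
    b, n, d = q.shape
    q_ch = q.transpose(1, 2).reshape(b * d, 1, n)        # treat each feature channel separately
    blurred = F.conv1d(q_ch, kernel, padding=radius)
    return blurred.reshape(b, d, n).transpose(1, 2)

if __name__ == "__main__":
    q = torch.randn(1, 16, 8)
    print(gaussian_blur_queries(q, sigma=2.0).shape)     # torch.Size([1, 16, 8])
```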
2024-08-02T00:00:00
2407.21139
Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning
[ "Omer Nacar", "Anis Koubaa" ]
This work presents a novel framework for training Arabic nested embedding models through Matryoshka Embedding Learning, leveraging multilingual, Arabic-specific, and English-based models, to highlight the power of nested embedding models in various Arabic NLP downstream tasks. Our innovative contribution includes the translation of various sentence similarity datasets into Arabic, enabling a comprehensive evaluation framework to compare these models across different dimensions. We trained several nested embedding models on the Arabic Natural Language Inference triplet dataset and assessed their performance using multiple evaluation metrics, including Pearson and Spearman correlations for cosine similarity, Manhattan distance, Euclidean distance, and dot product similarity. The results demonstrate that the Arabic Matryoshka embedding models are particularly effective at capturing semantic nuances unique to the Arabic language, significantly outperforming traditional models by up to 20-25\% across various similarity metrics. These results underscore the effectiveness of language-specific training and highlight the potential of Matryoshka models in enhancing semantic textual similarity tasks for Arabic NLP.
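A minimal sketch of Matryoshka-style training: an in-batch contrastive loss is averaged over nested embedding prefixes so truncated embeddings stay useful. The prefix dimensions and temperature are assumptions, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def matryoshka_contrastive_loss(anchor: torch.Tensor, positive: torch.Tensor,
                                dims=(64, 128, 256, 768), temperature: float = 0.05) -> torch.Tensor:
    """Average an in-batch contrastive loss over nested embedding prefixes.

    Each term uses only the first `d` dimensions, so embeddings truncated at
    inference time remain useful.
    """
    labels = torch.arange(anchor.shape[0], device=anchor.device)
    total = 0.0
    for d in dims:
        a = F.normalize(anchor[:, :d], dim=-1)
        p = F.normalize(positive[:, :d], dim=-1)
        logits = a @ p.T / temperature
        total = total + F.cross_entropy(logits, labels)
    return total / len(dims)

if __name__ == "__main__":
    a, p = torch.randn(8, 768), torch.randn(8, 768)
    print(matryoshka_contrastive_loss(a, p).item())
```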
2024-08-05T00:00:00
2408.01031
POA: Pre-training Once for Models of All Sizes
[ "Yingying Zhang", "Xin Guo", "Jiangwei Lao", "Lei Yu", "Lixiang Ru", "Jian Wang", "Guo Ye", "Huimei He", "Jingdong Chen", "Ming Yang" ]
https://github.com/Qichuzyy/POA
Large-scale self-supervised pre-training has paved the way for one foundation model to handle many different vision tasks. Most pre-training methodologies train a single model of a certain size at one time. Nevertheless, various computation or storage constraints in real-world scenarios require substantial efforts to develop a series of models with different sizes to deploy. Thus, in this study, we propose a novel tri-branch self-supervised training framework, termed as POA (Pre-training Once for All), to tackle this aforementioned issue. Our approach introduces an innovative elastic student branch into a modern self-distillation paradigm. At each pre-training step, we randomly sample a sub-network from the original student to form the elastic student and train all branches in a self-distilling fashion. Once pre-trained, POA allows the extraction of pre-trained models of diverse sizes for downstream tasks. Remarkably, the elastic student facilitates the simultaneous pre-training of multiple models with different sizes, which also acts as an additional ensemble of models of various sizes to enhance representation learning. Extensive experiments, including k-nearest neighbors, linear probing evaluation and assessments on multiple downstream tasks demonstrate the effectiveness and advantages of our POA. It achieves state-of-the-art performance using ViT, Swin Transformer and ResNet backbones, producing around a hundred models with different sizes through a single pre-training session. The code is available at: https://github.com/Qichuzyy/POA.
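A toy sketch of the elastic-student idea on a two-layer MLP (the paper operates on ViT, Swin, and ResNet backbones): at each step a sub-network of random width is sliced out of the student and distilled toward the teacher alongside the intact student. All shapes and hyperparameters here are assumptions for illustration:

```python
import random
import torch
import torch.nn.functional as F

def elastic_forward(x, w1, b1, w2, b2, width_ratio=1.0):
    # Toy two-layer MLP whose hidden width can be sliced on the fly.
    k = max(1, int(w1.shape[0] * width_ratio))
    h = F.relu(F.linear(x, w1[:k], b1[:k]))
    return F.linear(h, w2[:, :k], b2)

def poa_step(x, student, teacher, temperature=0.1):
    """One illustrative tri-branch step: the intact student and a randomly sampled
    elastic sub-network are both distilled toward the teacher's output distribution.
    `student` and `teacher` are (w1, b1, w2, b2) parameter tuples."""
    with torch.no_grad():
        target = F.softmax(elastic_forward(x, *teacher) / temperature, dim=-1)
    loss = 0.0
    for ratio in (1.0, random.uniform(0.25, 1.0)):          # intact branch + elastic branch
        logits = elastic_forward(x, *student, width_ratio=ratio)
        loss = loss - (target * F.log_softmax(logits / temperature, dim=-1)).sum(-1).mean()
    return loss
```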
2024-08-05T00:00:00
2408.01291
TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling
[ "Dong Huo", "Zixin Guo", "Xinxin Zuo", "Zhihao Shi", "Juwei Lu", "Peng Dai", "Songcen Xu", "Li Cheng", "Yee-Hong Yang" ]
Given a 3D mesh, we aim to synthesize 3D textures that correspond to arbitrary textual descriptions. Current methods for generating and assembling textures from sampled views often result in prominent seams or excessive smoothing. To tackle these issues, we present TexGen, a novel multi-view sampling and resampling framework for texture generation leveraging a pre-trained text-to-image diffusion model. For view-consistent sampling, we first maintain a texture map in RGB space that is parameterized by the denoising step and updated after each sampling step of the diffusion model to progressively reduce the view discrepancy. An attention-guided multi-view sampling strategy is exploited to broadcast the appearance information across views. To preserve texture details, we develop a noise resampling technique that aids in the estimation of noise, generating inputs for subsequent denoising steps, as directed by the text prompt and current texture map. Through extensive qualitative and quantitative evaluations, we demonstrate that our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency and rich appearance details, outperforming current state-of-the-art methods. Furthermore, our proposed texture generation technique can also be applied to texture editing while preserving the original identity. More experimental results are available at https://dong-huo.github.io/TexGen/
2024-08-05T00:00:00
2408.00874
Medical SAM 2: Segment medical images as video via Segment Anything Model 2
[ "Jiayuan Zhu", "Yunli Qi", "Junde Wu" ]
https://github.com/MedicineToken/Medical-SAM2
In this paper, we introduce Medical SAM 2 (MedSAM-2), an advanced segmentation model that utilizes the SAM 2 framework to address both 2D and 3D medical image segmentation tasks. By adopting the philosophy of taking medical images as videos, MedSAM-2 not only applies to 3D medical images but also unlocks a new One-prompt Segmentation capability. This allows users to provide a prompt for just a single image targeting an object, after which the model can autonomously segment the same type of object in all subsequent images, regardless of temporal relationships between the images. We evaluated MedSAM-2 across a variety of medical imaging modalities, including abdominal organs, optic discs, brain tumors, thyroid nodules, and skin lesions, comparing it against state-of-the-art models in both traditional and interactive segmentation settings. Our findings show that MedSAM-2 not only surpasses existing models in performance but also exhibits superior generalization across a range of medical image segmentation tasks. Our code will be released at: https://github.com/MedicineToken/Medical-SAM2
2024-08-05T00:00:00
2408.01337
MuChoMusic: Evaluating Music Understanding in Multimodal Audio-Language Models
[ "Benno Weck", "Ilaria Manco", "Emmanouil Benetos", "Elio Quinton", "George Fazekas", "Dmitry Bogdanov" ]
Multimodal models that jointly process audio and language hold great promise in audio understanding and are increasingly being adopted in the music domain. By allowing users to query via text and obtain information about a given audio input, these models have the potential to enable a variety of music understanding tasks via language-based interfaces. However, their evaluation poses considerable challenges, and it remains unclear how to effectively assess their ability to correctly interpret music-related inputs with current methods. Motivated by this, we introduce MuChoMusic, a benchmark for evaluating music understanding in multimodal language models focused on audio. MuChoMusic comprises 1,187 multiple-choice questions, all validated by human annotators, on 644 music tracks sourced from two publicly available music datasets, and covering a wide variety of genres. Questions in the benchmark are crafted to assess knowledge and reasoning abilities across several dimensions that cover fundamental musical concepts and their relation to cultural and functional contexts. Through the holistic analysis afforded by the benchmark, we evaluate five open-source models and identify several pitfalls, including an over-reliance on the language modality, pointing to a need for better multimodal integration. Data and code are open-sourced.
2024-08-05T00:00:00
2408.00113
Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models
[ "Adam Karvonen", "Benjamin Wright", "Can Rager", "Rico Angell", "Jannik Brinkmann", "Logan Smith", "Claudio Mayrink Verdun", "David Bau", "Samuel Marks" ]
What latent features are encoded in language model (LM) representations? Recent work on training sparse autoencoders (SAEs) to disentangle interpretable features in LM representations has shown significant promise. However, evaluating the quality of these SAEs is difficult because we lack a ground-truth collection of interpretable features that we expect good SAEs to recover. We thus propose to measure progress in interpretable dictionary learning by working in the setting of LMs trained on chess and Othello transcripts. These settings carry natural collections of interpretable features -- for example, "there is a knight on F3" -- which we leverage into supervised metrics for SAE quality. To guide progress in interpretable dictionary learning, we introduce a new SAE training technique, p-annealing, which improves performance on prior unsupervised metrics as well as our new metrics.
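A small sketch of how a ground-truth board feature can be turned into a supervised score for an SAE: for each known feature (e.g. "there is a knight on F3"), find the latent whose activation best predicts it. Thresholding activations at zero is an assumption made for illustration:

```python
import numpy as np

def best_latent_f1(latents, feature):
    """Score one board feature against every SAE latent and return the best match.

    latents: (n_positions, n_latents) SAE activations for a set of board positions.
    feature: (n_positions,) boolean array, True where the feature holds.
    Returns (index of best latent, its F1 score) as a simple supervised proxy metric.
    """
    active = latents > 0                                  # treat positive activation as "fires"
    tp = (active & feature[:, None]).sum(0)               # true positives per latent
    precision = tp / np.maximum(active.sum(0), 1)
    recall = tp / max(int(feature.sum()), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-9)
    return int(f1.argmax()), float(f1.max())
```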
2024-08-05T00:00:00
2408.00103
ReLiK: Retrieve and LinK, Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget
[ "Riccardo Orlando", "Pere-Lluis Huguet-Cabot", "Edoardo Barba", "Roberto Navigli" ]
Entity Linking (EL) and Relation Extraction (RE) are fundamental tasks in Natural Language Processing, serving as critical components in a wide range of applications. In this paper, we propose ReLiK, a Retriever-Reader architecture for both EL and RE, where, given an input text, the Retriever module undertakes the identification of candidate entities or relations that could potentially appear within the text. Subsequently, the Reader module is tasked to discern the pertinent retrieved entities or relations and establish their alignment with the corresponding textual spans. Notably, we put forward an innovative input representation that incorporates the candidate entities or relations alongside the text, making it possible to link entities or extract relations in a single forward pass and to fully leverage pre-trained language models' contextualization capabilities, in contrast with previous Retriever-Reader-based methods, which require a forward pass for each candidate. Our formulation of EL and RE achieves state-of-the-art performance in both in-domain and out-of-domain benchmarks while using academic budget training and with up to 40x inference speed compared to competitors. Finally, we show how our architecture can be used seamlessly for Information Extraction (cIE), i.e. EL + RE, setting a new state of the art by employing a shared Reader that simultaneously extracts entities and relations.
2024-08-05T00:00:00
2408.00397
In-Context Example Selection via Similarity Search Improves Low-Resource Machine Translation
[ "Armel Zebaze", "Benoît Sagot", "Rachel Bawden" ]
https://github.com/ArmelRandy/ICL-MT
The ability of generative large language models (LLMs) to perform in-context learning has given rise to a large body of research into how best to prompt models for various natural language processing tasks. In this paper, we focus on machine translation (MT), a task that has been shown to benefit from in-context translation examples. However, no systematic studies have been published on how best to select examples, and mixed results have been reported on the usefulness of similarity-based selection over random selection. We provide a study covering multiple LLMs and multiple in-context example retrieval strategies, comparing multilingual sentence embeddings. We cover several language directions, representing different levels of language resourcedness (English into French, German, Swahili and Wolof). Contrary to previously published results, we find that sentence embedding similarity can improve MT, especially for low-resource language directions, and discuss the balance between selection pool diversity and quality. We also highlight potential problems with the evaluation of LLM-based MT and suggest a more appropriate evaluation protocol, adapting the COMET metric to the evaluation of LLMs. Code and outputs are freely available at https://github.com/ArmelRandy/ICL-MT.
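A minimal sketch of similarity-based in-context example selection for MT, assuming source-side sentence embeddings have already been computed with some multilingual encoder; the prompt template and language pair are illustrative:

```python
import numpy as np

def select_icl_examples(query_emb, pool_embs, pool_pairs, k=4):
    """Pick the k pool examples whose source sentences are most similar to the query.

    query_emb: (d,) embedding of the sentence to translate.
    pool_embs: (n, d) embeddings of the pool's source sentences.
    pool_pairs: list of (source, target) translation examples, aligned with pool_embs.
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    top = np.argsort(p @ q)[::-1][:k]                 # highest cosine similarity first
    return [pool_pairs[i] for i in top]

def build_prompt(examples, source, src_lang="English", tgt_lang="Swahili"):
    # Few-shot prompt: retrieved examples followed by the sentence to translate.
    shots = "\n".join(f"{src_lang}: {s}\n{tgt_lang}: {t}" for s, t in examples)
    return f"{shots}\n{src_lang}: {source}\n{tgt_lang}:"
```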
2024-08-05T00:00:00
2407.20060
RelBench: A Benchmark for Deep Learning on Relational Databases
[ "Joshua Robinson", "Rishabh Ranjan", "Weihua Hu", "Kexin Huang", "Jiaqi Han", "Alejandro Dobles", "Matthias Fey", "Jan E. Lenssen", "Yiwen Yuan", "Zecheng Zhang", "Xinwei He", "Jure Leskovec" ]
We present RelBench, a public benchmark for solving predictive tasks over relational databases with graph neural networks. RelBench provides databases and tasks spanning diverse domains and scales, and is intended to be a foundational infrastructure for future research. We use RelBench to conduct the first comprehensive study of Relational Deep Learning (RDL) (Fey et al., 2024), which combines graph neural network predictive models with (deep) tabular models that extract initial entity-level representations from raw tables. End-to-end learned RDL models fully exploit the predictive signal encoded in primary-foreign key links, marking a significant shift away from the dominant paradigm of manual feature engineering combined with tabular models. To thoroughly evaluate RDL against this prior gold-standard, we conduct an in-depth user study where an experienced data scientist manually engineers features for each task. In this study, RDL learns better models whilst reducing human work needed by more than an order of magnitude. This demonstrates the power of deep learning for solving predictive tasks over relational databases, opening up many new research opportunities enabled by RelBench.
2024-08-06T00:00:00
2408.02666
Self-Taught Evaluators
[ "Tianlu Wang", "Ilia Kulikov", "Olga Golovneva", "Ping Yu", "Weizhe Yuan", "Jane Dwivedi-Yu", "Richard Yuanzhe Pang", "Maryam Fazel-Zarandi", "Jason Weston", "Xian Li" ]
Model-based evaluation is at the heart of successful model development -- as a reward model for training, and as a replacement for human evaluation. To train such evaluators, the standard approach is to collect a large amount of human preference judgments over model responses, which is costly and the data becomes stale as models improve. In this work, we present an approach that aims to improve evaluators without human annotations, using synthetic training data only. Starting from unlabeled instructions, our iterative self-improvement scheme generates contrasting model outputs and trains an LLM-as-a-Judge to produce reasoning traces and final judgments, repeating this training at each new iteration using the improved predictions. Without any labeled preference data, our Self-Taught Evaluator can improve a strong LLM (Llama3-70B-Instruct) from 75.4 to 88.3 (88.7 with majority vote) on RewardBench. This outperforms commonly used LLM judges such as GPT-4 and matches the performance of the top-performing reward models trained with labeled examples.
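A rough sketch of one self-improvement iteration; `generate`, `judge`, and `finetune` are hypothetical helpers standing in for real sampling and training infrastructure, and the contrasting pair is constructed so that the preferred response is known by construction rather than from human labels:

```python
def self_taught_round(judge, instructions, generate, finetune):
    """One hedged sketch of the iterative scheme (all callables are hypothetical).

    generate(instruction, degrade) -> a model response; degrade=True requests a
    deliberately worse response, so response A is preferred by construction.
    judge(instruction, a, b) -> an object with a reasoning `trace` and a `winner`.
    """
    train_examples = []
    for inst in instructions:
        good = generate(inst, degrade=False)
        bad = generate(inst, degrade=True)            # contrasting pair with a known label
        verdict = judge(inst, good, bad)              # reasoning trace + final judgment
        if verdict.winner == "A":                     # keep only judgments that pick the good one
            train_examples.append((inst, good, bad, verdict.trace))
    return finetune(judge, train_examples)            # the improved judge seeds the next round
```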
2024-08-06T00:00:00
2408.02629
VidGen-1M: A Large-Scale Dataset for Text-to-video Generation
[ "Zhiyu Tan", "Xiaomeng Yang", "Luozheng Qin", "Hao Li" ]
The quality of video-text pairs fundamentally determines the upper bound of text-to-video models. Currently, the datasets used for training these models suffer from significant shortcomings, including low temporal consistency, poor-quality captions, substandard video quality, and imbalanced data distribution. The prevailing video curation process, which depends on image models for tagging and manual rule-based curation, leads to a high computational load and leaves behind unclean data. As a result, there is a lack of appropriate training datasets for text-to-video models. To address this problem, we present VidGen-1M, a superior training dataset for text-to-video models. Produced through a coarse-to-fine curation strategy, this dataset guarantees high-quality videos and detailed captions with excellent temporal consistency. When used to train the video generation model, this dataset has led to experimental results that surpass those obtained with other models.
2024-08-06T00:00:00
2408.02226
ProCreate, Don't Reproduce! Propulsive Energy Diffusion for Creative Generation
[ "Jack Lu", "Ryan Teehan", "Mengye Ren" ]
https://github.com/Agentic-Learning-AI-Lab/procreate-diffusion-public
In this paper, we propose ProCreate, a simple and easy-to-implement method to improve sample diversity and creativity of diffusion-based image generative models and to prevent training data reproduction. ProCreate operates on a set of reference images and actively propels the generated image embedding away from the reference embeddings during the generation process. We propose FSCG-8 (Few-Shot Creative Generation 8), a few-shot creative generation dataset on eight different categories -- encompassing different concepts, styles, and settings -- in which ProCreate achieves the highest sample diversity and fidelity. Furthermore, we show that ProCreate is effective at preventing replicating training data in a large-scale evaluation using training text prompts. Code and FSCG-8 are available at https://github.com/Agentic-Learning-AI-Lab/procreate-diffusion-public. The project page is available at https://procreate-diffusion.github.io.
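A hedged sketch of the propulsion idea: during sampling, push the current sample's embedding away from the reference-set embeddings by following the negative gradient of a closeness energy. The inverse-distance energy below is an illustrative choice, not the paper's exact formulation:

```python
import torch

def propulsion_step(x_embed, ref_embeds, strength=1.0):
    """Direction that moves the sample embedding away from the reference embeddings.

    x_embed: (d,) embedding of the partially denoised image, with requires_grad=True.
    ref_embeds: (n, d) embeddings of the reference image set.
    """
    dists = torch.cdist(x_embed[None], ref_embeds).squeeze(0)   # (n,) distances to references
    energy = (1.0 / (dists + 1e-6)).sum()                       # large when close to any reference
    grad = torch.autograd.grad(energy, x_embed)[0]
    return -strength * grad     # adding this to x_embed lowers the energy, i.e. repels it
```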
2024-08-06T00:00:00
2408.02657
Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraining
[ "Dongyang Liu", "Shitian Zhao", "Le Zhuo", "Weifeng Lin", "Yu Qiao", "Hongsheng Li", "Peng Gao" ]
We present Lumina-mGPT, a family of multimodal autoregressive models capable of various vision and language tasks, particularly excelling in generating flexible photorealistic images from text descriptions. Unlike existing autoregressive image generation approaches, Lumina-mGPT employs a pretrained decoder-only transformer as a unified framework for modeling multimodal token sequences. Our key insight is that a simple decoder-only transformer with multimodal Generative PreTraining (mGPT), utilizing the next-token prediction objective on massive interleaved text-image sequences, can learn broad and general multimodal capabilities, thereby illuminating photorealistic text-to-image generation. Building on these pretrained models, we propose Flexible Progressive Supervised Finetuning (FP-SFT) on high-quality image-text pairs to fully unlock their potential for high-aesthetic image synthesis at any resolution while maintaining their general multimodal capabilities. Furthermore, we introduce Omnipotent Supervised Finetuning (Omni-SFT), transforming Lumina-mGPT into a foundation model that seamlessly achieves omnipotent task unification. The resulting model demonstrates versatile multimodal capabilities, including visual generation tasks like flexible text-to-image generation and controllable generation, visual recognition tasks like segmentation and depth estimation, and vision-language tasks like multiturn visual question answering. Additionally, we analyze the differences and similarities between diffusion-based and autoregressive methods in a direct comparison.
2024-08-06T00:00:00
2408.02622
Language Model Can Listen While Speaking
[ "Ziyang Ma", "Yakun Song", "Chenpeng Du", "Jian Cong", "Zhuo Chen", "Yuping Wang", "Yuxuan Wang", "Xie Chen" ]
Dialogue serves as the most natural manner of human-computer interaction (HCI). Recent advancements in speech language models (SLM) have significantly enhanced speech-based conversational AI. However, these models are limited to turn-based conversation, lacking the ability to interact with humans in real-time spoken scenarios, for example, being interrupted when the generated content is not satisfactory. To address these limitations, we explore full duplex modeling (FDM) in interactive speech language models (iSLM), focusing on enhancing real-time interaction and, more explicitly, exploring the quintessential ability of interruption. We introduce a novel model design, namely listening-while-speaking language model (LSLM), an end-to-end system equipped with both listening and speaking channels. Our LSLM employs a token-based decoder-only TTS for speech generation and a streaming self-supervised learning (SSL) encoder for real-time audio input. LSLM fuses both channels for autoregressive generation and detects turn-taking in real time. Three fusion strategies -- early fusion, middle fusion, and late fusion -- are explored, with middle fusion achieving an optimal balance between speech generation and real-time interaction. Two experimental settings, command-based FDM and voice-based FDM, demonstrate LSLM's robustness to noise and sensitivity to diverse instructions. Our results highlight LSLM's capability to achieve duplex communication with minimal impact on existing systems. This study aims to advance the development of interactive speech dialogue systems, enhancing their applicability in real-world contexts.
2024-08-06T00:00:00
2408.01584
GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS
[ "Saman Kazemkhani", "Aarav Pandya", "Daphne Cornelisse", "Brennan Shacklett", "Eugene Vinitsky" ]
https://github.com/Emerge-Lab/gpudrive
Multi-agent learning algorithms have been successful at generating superhuman planning in a wide variety of games but have had little impact on the design of deployed multi-agent planners. A key bottleneck in applying these techniques to multi-agent planning is that they require billions of steps of experience. To enable the study of multi-agent planning at this scale, we present GPUDrive, a GPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine that can generate over a million steps of experience per second. Observation, reward, and dynamics functions are written directly in C++, allowing users to define complex, heterogeneous agent behaviors that are lowered to high-performance CUDA. We show that using GPUDrive we are able to effectively train reinforcement learning agents over many scenes in the Waymo Motion dataset, yielding highly effective goal-reaching agents in minutes for individual scenes and generally capable agents in a few hours. We ship these trained agents as part of the code base at https://github.com/Emerge-Lab/gpudrive.
2024-08-06T00:00:00
2408.02555
MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization
[ "Yiwen Chen", "Yikai Wang", "Yihao Luo", "Zhengyi Wang", "Zilong Chen", "Jun Zhu", "Chi Zhang", "Guosheng Lin" ]
We introduce MeshAnything V2, an autoregressive transformer that generates Artist-Created Meshes (AM) aligned to given shapes. It can be integrated with various 3D asset production pipelines to achieve high-quality, highly controllable AM generation. MeshAnything V2 surpasses previous methods in both efficiency and performance using models of the same size. These improvements are due to our newly proposed mesh tokenization method: Adjacent Mesh Tokenization (AMT). Different from previous methods that represent each face with three vertices, AMT uses a single vertex whenever possible. Compared to previous methods, AMT requires about half the token sequence length to represent the same mesh on average. Furthermore, the token sequences from AMT are more compact and well-structured, fundamentally benefiting AM generation. Our extensive experiments show that AMT significantly improves the efficiency and performance of AM generation. Project Page: https://buaacyw.github.io/meshanything-v2/
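A rough sketch of adjacency-aware face tokenization in the spirit of AMT (the paper's exact traversal rules and special tokens differ): when consecutive faces share an edge, only the single new vertex is emitted, roughly halving the sequence length for well-ordered meshes:

```python
def adjacent_mesh_tokenize(faces):
    """Tokenize a face list, emitting one vertex per face when it continues a strip.

    faces: list of (v0, v1, v2) vertex-index triples, assumed pre-ordered so that
    consecutive faces often share an edge. SEP is a hypothetical separator token
    marking the start of a new strip.
    """
    SEP = -1
    tokens, prev = [], None
    for face in faces:
        if prev is not None and len(set(face) & set(prev)) == 2:
            new_vertex = (set(face) - set(prev)).pop()
            tokens.append(new_vertex)      # shared edge: one new vertex is enough
        else:
            tokens.extend([SEP, *face])    # no shared edge: restart with all three vertices
        prev = face
    return tokens
```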
2024-08-06T00:00:00
2408.02085
Unleashing the Power of Data Tsunami: A Comprehensive Survey on Data Assessment and Selection for Instruction Tuning of Language Models
[ "Yulei Qin", "Yuncheng Yang", "Pengcheng Guo", "Gang Li", "Hang Shao", "Yuchen Shi", "Zihan Xu", "Yun Gu", "Ke Li", "Xing Sun" ]
https://github.com/yuleiqin/fantastic-data-engineering
Instruction tuning plays a critical role in aligning large language models (LLMs) with human preference. Despite the vast number of open instruction datasets, naively training an LLM on all existing instructions may not be optimal or practical. To pinpoint the most beneficial datapoints, data assessment and selection methods have been proposed in the fields of natural language processing (NLP) and deep learning. However, under the context of instruction tuning, there still exists a gap in knowledge on what kind of data evaluation metrics can be employed and how they can be integrated into the selection mechanism. To bridge this gap, we present a comprehensive review of the existing literature on data assessment and selection, especially for instruction tuning of LLMs. We systematically categorize all applicable methods into quality-based, diversity-based, and importance-based ones, for which a unified, fine-grained taxonomy is structured. For each category, representative methods are elaborated to describe the landscape of relevant research. In addition, a comparison between the latest methods is conducted on their officially reported results to provide in-depth discussions of their limitations. Finally, we summarize the open challenges and propose promising avenues for future studies. All related contents are available at https://github.com/yuleiqin/fantastic-data-engineering.
2024-08-06T00:00:00
2408.01800
MiniCPM-V: A GPT-4V Level MLLM on Your Phone
[ "Yuan Yao", "Tianyu Yu", "Ao Zhang", "Chongyi Wang", "Junbo Cui", "Hongji Zhu", "Tianchi Cai", "Haoyu Li", "Weilin Zhao", "Zhihui He", "Qianyu Chen", "Huarong Zhou", "Zhensheng Zou", "Haoye Zhang", "Shengding Hu", "Zhi Zheng", "Jie Zhou", "Jie Cai", "Xu Han", "Guoyang Zeng", "Dahai Li", "Zhiyuan Liu", "Maosong Sun" ]
The recent surge of Multimodal Large Language Models (MLLMs) has fundamentally reshaped the landscape of AI research and industry, shedding light on a promising path toward the next AI milestone. However, significant challenges remain preventing MLLMs from being practical in real-world applications. The most notable challenge comes from the huge cost of running an MLLM with a massive number of parameters and extensive computation. As a result, most MLLMs need to be deployed on high-performing cloud servers, which greatly limits their application scopes such as mobile, offline, energy-sensitive, and privacy-protective scenarios. In this work, we present MiniCPM-V, a series of efficient MLLMs deployable on end-side devices. By integrating the latest MLLM techniques in architecture, pretraining and alignment, the latest MiniCPM-Llama3-V 2.5 has several notable features: (1) Strong performance, outperforming GPT-4V-1106, Gemini Pro and Claude 3 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks, (2) strong OCR capability and 1.8M pixel high-resolution image perception at any aspect ratio, (3) trustworthy behavior with low hallucination rates, (4) multilingual support for 30+ languages, and (5) efficient deployment on mobile phones. More importantly, MiniCPM-V can be viewed as a representative example of a promising trend: The model sizes for achieving usable (e.g., GPT-4V) level performance are rapidly decreasing, along with the fast growth of end-side computation capacity. This jointly shows that GPT-4V level MLLMs deployed on end devices are becoming increasingly possible, unlocking a wider spectrum of real-world AI applications in the near future.
2024-08-06T00:00:00
2408.02210
ExoViP: Step-by-step Verification and Exploration with Exoskeleton Modules for Compositional Visual Reasoning
[ "Yuxuan Wang", "Alan Yuille", "Zhuowan Li", "Zilong Zheng" ]
Compositional visual reasoning methods, which translate a complex query into a structured composition of feasible visual tasks, have exhibited a strong potential in complicated multi-modal tasks. Empowered by recent advances in large language models (LLMs), this multi-modal challenge has been brought to a new stage by treating LLMs as few-shot/zero-shot planners, i.e., vision-language (VL) programming. Such methods, despite their numerous merits, suffer from challenges due to LLM planning mistakes or inaccuracy of visual execution modules, lagging behind the non-compositional models. In this work, we devise a "plug-and-play" method, ExoViP, to correct errors in both the planning and execution stages through introspective verification. We employ verification modules as "exoskeletons" to enhance current VL programming schemes. Specifically, our proposed verification module utilizes a mixture of three sub-verifiers to validate predictions after each reasoning step, subsequently calibrating the visual module predictions and refining the reasoning trace planned by LLMs. Experimental results on two representative VL programming methods showcase consistent improvements on five compositional reasoning tasks on standard benchmarks. In light of this, we believe that ExoViP can foster better performance and generalization on open-domain multi-modal challenges.
2024-08-06T00:00:00
2408.01050
The Impact of Hyperparameters on Large Language Model Inference Performance: An Evaluation of vLLM and HuggingFace Pipelines
[ "Matias Martinez" ]
The recent surge of open-source large language models (LLMs) enables developers to create AI-based solutions while maintaining control over aspects such as privacy and compliance, thereby providing governance and ownership of the model deployment process. To utilize these LLMs, inference engines are needed. These engines load the model's weights onto available resources, such as GPUs, and process queries to generate responses. The speed of inference, or performance, of the LLM is critical for real-time applications, as it computes millions or billions of floating point operations per inference. Recently, advanced inference engines such as vLLM have emerged, incorporating novel mechanisms such as efficient memory management to achieve state-of-the-art performance. In this paper, we analyze the performance, particularly the throughput (tokens generated per unit of time), of 20 LLMs using two inference libraries: vLLM and HuggingFace's pipelines. We investigate how various hyperparameters, which developers must configure, influence inference performance. Our results reveal that throughput landscapes are irregular, with distinct peaks, highlighting the importance of hyperparameter optimization to achieve maximum performance. We also show that applying hyperparameter optimization when upgrading or downgrading the GPU model used for inference can improve throughput from HuggingFace pipelines by an average of 9.16% and 13.7%, respectively.
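A minimal sketch of how tokens-per-second throughput might be measured with a HuggingFace text-generation pipeline, assuming `transformers` (and `accelerate`, for `device_map="auto"`) are installed; output length is approximated by whitespace splitting rather than the model tokenizer, so the numbers are only indicative:

```python
import time
from transformers import pipeline  # assumes the `transformers` package is available

def measure_throughput(model_name, prompts, max_new_tokens=128):
    """Rough tokens-per-second estimate for a text-generation pipeline."""
    generator = pipeline("text-generation", model=model_name, device_map="auto")
    start = time.perf_counter()
    outputs = generator(prompts, max_new_tokens=max_new_tokens)
    elapsed = time.perf_counter() - start
    # Each element of `outputs` is a list with one dict per returned sequence.
    n_tokens = sum(len(o[0]["generated_text"].split()) for o in outputs)
    return n_tokens / elapsed
```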
2024-08-06T00:00:00
2408.02373
Operationalizing Contextual Integrity in Privacy-Conscious Assistants
[ "Sahra Ghalebikesabi", "Eugene Bagdasaryan", "Ren Yi", "Itay Yona", "Ilia Shumailov", "Aneesh Pappu", "Chongyang Shi", "Laura Weidinger", "Robert Stanforth", "Leonard Berrada", "Pushmeet Kohli", "Po-Sen Huang", "Borja Balle" ]
Advanced AI assistants combine frontier LLMs and tool access to autonomously perform complex tasks on behalf of users. While the helpfulness of such assistants can increase dramatically with access to user information including emails and documents, this raises privacy concerns about assistants sharing inappropriate information with third parties without user supervision. To steer information-sharing assistants to behave in accordance with privacy expectations, we propose to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI compliant. Our evaluation is based on a novel form filling benchmark composed of synthetic data and human annotations, and it reveals that prompting frontier LLMs to perform CI-based reasoning yields strong results.
2024-08-06T00:00:00
2408.02545
RAG Foundry: A Framework for Enhancing LLMs for Retrieval Augmented Generation
[ "Daniel Fleischer", "Moshe Berchansky", "Moshe Wasserblat", "Peter Izsak" ]
https://github.com/IntelLabs/RAGFoundry
Implementing Retrieval-Augmented Generation (RAG) systems is inherently complex, requiring deep understanding of data, use cases, and intricate design decisions. Additionally, evaluating these systems presents significant challenges, necessitating assessment of both retrieval accuracy and generative quality through a multi-faceted approach. We introduce RAG Foundry, an open-source framework for augmenting large language models for RAG use cases. RAG Foundry integrates data creation, training, inference and evaluation into a single workflow, facilitating the creation of data-augmented datasets for training and evaluating large language models in RAG settings. This integration enables rapid prototyping and experimentation with various RAG techniques, allowing users to easily generate datasets and train RAG models using internal or specialized knowledge sources. We demonstrate the framework's effectiveness by augmenting and fine-tuning Llama-3 and Phi-3 models with diverse RAG configurations, showcasing consistent improvements across three knowledge-intensive datasets. Code is released as open-source at https://github.com/IntelLabs/RAGFoundry.
2024-08-06T00:00:00
2408.02600
BioMamba: A Pre-trained Biomedical Language Representation Model Leveraging Mamba
[ "Ling Yue", "Sixue Xing", "Yingzhou Lu", "Tianfan Fu" ]
The advancement of natural language processing (NLP) in biology hinges on models' ability to interpret intricate biomedical literature. Traditional models often struggle with the complex and domain-specific language in this field. In this paper, we present BioMamba, a pre-trained model specifically designed for biomedical text mining. BioMamba builds upon the Mamba architecture and is pre-trained on an extensive corpus of biomedical literature. Our empirical studies demonstrate that BioMamba significantly outperforms models like BioBERT and general-domain Mamba across various biomedical tasks. For instance, BioMamba achieves a 100 times reduction in perplexity and a 4 times reduction in cross-entropy loss on the BioASQ test set. We provide an overview of the model architecture, pre-training process, and fine-tuning techniques. Additionally, we release the code and trained model to facilitate further research.
2024-08-07T00:00:00
2408.02718
MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
[ "Fanqing Meng", "Jin Wang", "Chuanhao Li", "Quanfeng Lu", "Hao Tian", "Jiaqi Liao", "Xizhou Zhu", "Jifeng Dai", "Yu Qiao", "Ping Luo", "Kaipeng Zhang", "Wenqi Shao" ]
The capability to process multiple images is crucial for Large Vision-Language Models (LVLMs) to develop a more thorough and nuanced understanding of a scene. Recent multi-image LVLMs have begun to address this need. However, their evaluation has not kept pace with their development. To fill this gap, we introduce the Multimodal Multi-image Understanding (MMIU) benchmark, a comprehensive evaluation suite designed to assess LVLMs across a wide range of multi-image tasks. MMIU encompasses 7 types of multi-image relationships, 52 tasks, 77K images, and 11K meticulously curated multiple-choice questions, making it the most extensive benchmark of its kind. Our evaluation of 24 popular LVLMs, including both open-source and proprietary models, reveals significant challenges in multi-image comprehension, particularly in tasks involving spatial understanding. Even the most advanced models, such as GPT-4o, achieve only 55.7% accuracy on MMIU. Through multi-faceted analytical experiments, we identify key performance gaps and limitations, providing valuable insights for future model and data improvements. We aim for MMIU to advance the frontier of LVLM research and development, moving us toward achieving sophisticated multimodal multi-image user interactions.
2024-08-07T00:00:00
2408.02752
Diffusion Models as Data Mining Tools
[ "Ioannis Siglidis", "Aleksander Holynski", "Alexei A. Efros", "Mathieu Aubry", "Shiry Ginosar" ]
This paper demonstrates how to use generative models trained for image synthesis as tools for visual data mining. Our insight is that since contemporary generative models learn an accurate representation of their training data, we can use them to summarize the data by mining for visual patterns. Concretely, we show that after finetuning conditional diffusion models to synthesize images from a specific dataset, we can use these models to define a typicality measure on that dataset. This measure assesses how typical visual elements are for different data labels, such as geographic location, time stamps, semantic labels, or even the presence of a disease. This analysis-by-synthesis approach to data mining has two key advantages. First, it scales much better than traditional correspondence-based approaches since it does not require explicitly comparing all pairs of visual elements. Second, while most previous works on visual data mining focus on a single dataset, our approach works on diverse datasets in terms of content and scale, including a historical car dataset, a historical face dataset, a large worldwide street-view dataset, and an even larger scene dataset. Furthermore, our approach allows for translating visual elements across class labels and analyzing consistent changes.
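A hedged sketch of a typicality measure in this analysis-by-synthesis spirit: an element counts as typical for a label if conditioning on that label reduces the denoising error relative to the unconditional prediction. The `unet` and `noise_fn` interfaces and the embedding arguments are assumptions, not the paper's code:

```python
import torch

@torch.no_grad()
def typicality(unet, x, label_emb, null_emb, timesteps, noise_fn):
    """Average advantage of the label-conditioned denoiser over the unconditional one.

    unet(noisy, t, cond) -> predicted noise; noise_fn(x, t) -> (noisy_x, true_noise).
    A larger score means the label explains this image patch better, i.e. it is typical.
    """
    scores = []
    for t in timesteps:
        noisy, eps = noise_fn(x, t)                        # forward-diffuse x to noise level t
        err_cond = (unet(noisy, t, label_emb) - eps).pow(2).mean()
        err_uncond = (unet(noisy, t, null_emb) - eps).pow(2).mean()
        scores.append((err_uncond - err_cond).item())      # positive => conditioning helps
    return sum(scores) / len(scores)
```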
2024-08-07T00:00:00
2408.03178
An Object is Worth 64x64 Pixels: Generating 3D Object via Image Diffusion
[ "Xingguang Yan", "Han-Hung Lee", "Ziyu Wan", "Angel X. Chang" ]
We introduce a new approach for generating realistic 3D models with UV maps through a representation termed "Object Images." This approach encapsulates surface geometry, appearance, and patch structures within a 64x64 pixel image, effectively converting complex 3D shapes into a more manageable 2D format. By doing so, we address the challenges of both geometric and semantic irregularity inherent in polygonal meshes. This method allows us to use image generation models, such as Diffusion Transformers, directly for 3D shape generation. Evaluated on the ABO dataset, our generated shapes with patch structures achieve point cloud FID comparable to recent 3D generative models, while naturally supporting PBR material generation.
2024-08-07T00:00:00
2408.02900
MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine
[ "Yunfei Xie", "Ce Zhou", "Lang Gao", "Juncheng Wu", "Xianhang Li", "Hong-Yu Zhou", "Sheng Liu", "Lei Xing", "James Zou", "Cihang Xie", "Yuyin Zhou" ]
This paper introduces MedTrinity-25M, a comprehensive, large-scale multimodal dataset for medicine, covering over 25 million images across 10 modalities, with multigranular annotations for more than 65 diseases. These enriched annotations encompass global textual information, such as disease/lesion type, modality, region-specific descriptions, and inter-regional relationships, as well as detailed local annotations for regions of interest (ROIs), including bounding boxes and segmentation masks. Unlike existing approaches, which are limited by the availability of image-text pairs, we have developed the first automated pipeline that scales up multimodal data by generating multigranular visual and textual annotations (in the form of image-ROI-description triplets) without the need for any paired text descriptions. Specifically, data from over 90 different sources have been collected, preprocessed, and grounded using domain-specific expert models to identify ROIs related to abnormal regions. We then build a comprehensive knowledge base and prompt multimodal large language models to perform retrieval-augmented generation with the identified ROIs as guidance, resulting in multigranular textual descriptions. Compared to existing datasets, MedTrinity-25M provides the most enriched annotations, supporting a comprehensive range of multimodal tasks such as captioning and report generation, as well as vision-centric tasks like classification and segmentation. Pretraining on MedTrinity-25M, our model achieves state-of-the-art performance on VQA-RAD and PathVQA, surpassing both multimodal large language models and other representative SoTA approaches. This dataset can also be utilized to support large-scale pre-training of multimodal medical AI models, contributing to the development of future foundation models in the medical domain.
2024-08-07T00:00:00
2408.03325
CoverBench: A Challenging Benchmark for Complex Claim Verification
[ "Alon Jacovi", "Moran Ambar", "Eyal Ben-David", "Uri Shaham", "Amir Feder", "Mor Geva", "Dror Marcus", "Avi Caciularu" ]
There is a growing line of research on verifying the correctness of language models' outputs. At the same time, LMs are being used to tackle complex queries that require reasoning. We introduce CoverBench, a challenging benchmark focused on verifying LM outputs in complex reasoning settings. Datasets that can be used for this purpose are often designed for other complex reasoning tasks (e.g., QA) targeting specific use-cases (e.g., financial tables), requiring transformations, negative sampling and selection of hard examples to collect such a benchmark. CoverBench provides a diversified evaluation for complex claim verification in a variety of domains, types of reasoning, relatively long inputs, and a variety of standardizations, such as multiple representations for tables where available, and a consistent schema. We manually vet the data for quality to ensure low levels of label noise. Finally, we report a variety of competitive baseline results to show CoverBench is challenging and has very significant headroom. The data is available at https://huggingface.co/datasets/google/coverbench .
2024-08-07T00:00:00
2408.03326
LLaVA-OneVision: Easy Visual Task Transfer
[ "Bo Li", "Yuanhan Zhang", "Dong Guo", "Renrui Zhang", "Feng Li", "Hao Zhang", "Kaichen Zhang", "Yanwei Li", "Ziwei Liu", "Chunyuan Li" ]
We present LLaVA-OneVision, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series. Our experimental results demonstrate that LLaVA-OneVision is the first single model that can simultaneously push the performance boundaries of open LMMs in three important computer vision scenarios: single-image, multi-image, and video scenarios. Importantly, the design of LLaVA-OneVision allows strong transfer learning across different modalities/scenarios, yielding new emerging capabilities. In particular, strong video understanding and cross-scenario capabilities are demonstrated through task transfer from images to videos.
2024-08-07T00:00:00
2408.03284
ReSyncer: Rewiring Style-based Generator for Unified Audio-Visually Synced Facial Performer
[ "Jiazhi Guan", "Zhiliang Xu", "Hang Zhou", "Kaisiyuan Wang", "Shengyi He", "Zhanwang Zhang", "Borong Liang", "Haocheng Feng", "Errui Ding", "Jingtuo Liu", "Jingdong Wang", "Youjian Zhao", "Ziwei Liu" ]
Lip-syncing videos with given audio is the foundation for various applications including the creation of virtual presenters or performers. While recent studies explore high-fidelity lip-sync with different techniques, their task-orientated models either require long-term videos for clip-specific training or retain visible artifacts. In this paper, we propose a unified and effective framework ReSyncer, that synchronizes generalized audio-visual facial information. The key design is revisiting and rewiring the Style-based generator to efficiently adopt 3D facial dynamics predicted by a principled style-injected Transformer. By simply re-configuring the information insertion mechanisms within the noise and style space, our framework fuses motion and appearance with unified training. Extensive experiments demonstrate that ReSyncer not only produces high-fidelity lip-synced videos according to audio, but also supports multiple appealing properties that are suitable for creating virtual presenters and performers, including fast personalized fine-tuning, video-driven lip-syncing, the transfer of speaking styles, and even face swapping. Resources can be found at https://guanjz20.github.io/projects/ReSyncer.
2024-08-07T00:00:00
2408.03209
IPAdapter-Instruct: Resolving Ambiguity in Image-based Conditioning using Instruct Prompts
[ "Ciara Rowles", "Shimon Vainer", "Dante De Nigris", "Slava Elizarov", "Konstantin Kutsy", "Simon Donné" ]
Diffusion models continuously push the boundary of state-of-the-art image generation, but the process is hard to control with any nuance: practice proves that textual prompts are inadequate for accurately describing image style or fine structural details (such as faces). ControlNet and IPAdapter address this shortcoming by conditioning the generative process on imagery instead, but each individual instance is limited to modeling a single conditional posterior: for practical use-cases, where multiple different posteriors are desired within the same workflow, training and using multiple adapters is cumbersome. We propose IPAdapter-Instruct, which combines natural-image conditioning with "Instruct" prompts to swap between interpretations for the same conditioning image: style transfer, object extraction, both, or something else still? IPAdapter-Instruct efficiently learns multiple tasks with minimal loss in quality compared to dedicated per-task models.
2024-08-07T00:00:00
2408.03314
Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters
[ "Charlie Snell", "Jaehoon Lee", "Kelvin Xu", "Aviral Kumar" ]
Enabling LLMs to improve their outputs by using more test-time computation is a critical step towards building generally self-improving agents that can operate on open-ended natural language. In this paper, we study the scaling of inference-time computation in LLMs, with a focus on answering the question: if an LLM is allowed to use a fixed but non-trivial amount of inference-time compute, how much can it improve its performance on a challenging prompt? Answering this question has implications not only on the achievable performance of LLMs, but also on the future of LLM pretraining and how one should tradeoff inference-time and pre-training compute. Despite its importance, little research has attempted to understand the scaling behaviors of various test-time inference methods. Moreover, current work largely provides negative results for a number of these strategies. In this work, we analyze two primary mechanisms to scale test-time computation: (1) searching against dense, process-based verifier reward models; and (2) updating the model's distribution over a response adaptively, given the prompt at test time. We find that in both cases, the effectiveness of different approaches to scaling test-time compute critically varies depending on the difficulty of the prompt. This observation motivates applying a "compute-optimal" scaling strategy, which acts to most effectively allocate test-time compute adaptively per prompt. Using this compute-optimal strategy, we can improve the efficiency of test-time compute scaling by more than 4x compared to a best-of-N baseline. Additionally, in a FLOPs-matched evaluation, we find that on problems where a smaller base model attains somewhat non-trivial success rates, test-time compute can be used to outperform a 14x larger model.
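A toy sketch of the best-of-N baseline plus a crude "compute-optimal" allocation that spends more samples on harder prompts; `sample`, `score`, and the difficulty thresholds are hypothetical stand-ins for a generator, a verifier reward model, and a difficulty estimate:

```python
def best_of_n(prompt, sample, score, n=16):
    """Best-of-N baseline: draw n candidate answers and keep the one the verifier
    (reward model) scores highest. `sample(prompt)` and `score(prompt, candidate)`
    are assumed callables, not a specific API."""
    candidates = [sample(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

def compute_adaptive(prompt, sample, score, difficulty, budget=16):
    # Illustration of adaptive allocation: spend only a couple of samples on easy
    # prompts and the full budget on hard ones (the threshold is made up).
    n = 2 if difficulty < 0.3 else budget
    return best_of_n(prompt, sample, score, n=n)
```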
2024-08-07T00:00:00
2408.01708
AVESFormer: Efficient Transformer Design for Real-Time Audio-Visual Segmentation
[ "Zili Wang", "Qi Yang", "Linsu Shi", "Jiazhong Yu", "Qinghua Liang", "Fei Li", "Shiming Xiang" ]
https://github.com/MarkXCloud/AVESFormer.git
Recently, transformer-based models have demonstrated remarkable performance on audio-visual segmentation (AVS) tasks. However, their expensive computational cost makes real-time inference impractical. By characterizing attention maps of the network, we identify two key obstacles in AVS models: 1) attention dissipation, corresponding to the over-concentrated attention weights by Softmax within restricted frames, and 2) an inefficient, burdensome transformer decoder, caused by narrow focus patterns in early stages. In this paper, we introduce AVESFormer, the first real-time Audio-Visual Efficient Segmentation transformer that is simultaneously fast, efficient, and lightweight. Our model leverages an efficient prompt query generator to correct the behaviour of cross-attention. Additionally, we propose an ELF decoder to bring greater efficiency by facilitating convolutions suitable for local features to reduce computational burdens. Extensive experiments demonstrate that our AVESFormer significantly enhances model performance, achieving 79.9% on S4, 57.9% on MS3 and 31.2% on AVSS, outperforming previous state-of-the-art and achieving an excellent trade-off between performance and speed. Code can be found at https://github.com/MarkXCloud/AVESFormer.git.
2024-08-07T00:00:00
2408.03281
StructEval: Deepen and Broaden Large Language Model Assessment via Structured Evaluation
[ "Boxi Cao", "Mengjie Ren", "Hongyu Lin", "Xianpei Han", "Feng Zhang", "Junfeng Zhan", "Le Sun" ]
Evaluation is the baton for the development of large language models. Current evaluations typically employ a single-item assessment paradigm for each atomic test objective, which struggles to discern whether a model genuinely possesses the required capabilities or merely memorizes/guesses the answers to specific questions. To this end, we propose a novel evaluation framework referred to as StructEval. Starting from an atomic test objective, StructEval deepens and broadens the evaluation by conducting a structured assessment across multiple cognitive levels and critical concepts, and therefore offers a comprehensive, robust and consistent evaluation for LLMs. Experiments on three widely-used benchmarks demonstrate that StructEval serves as a reliable tool for resisting the risk of data contamination and reducing the interference of potential biases, thereby providing more reliable and consistent conclusions regarding model capabilities. Our framework also sheds light on the design of future principled and trustworthy LLM evaluation protocols.
2024-08-07T00:00:00
2408.03256
Synthesizing Text-to-SQL Data from Weak and Strong LLMs
[ "Jiaxi Yang", "Binyuan Hui", "Min Yang", "Jian Yang", "Junyang Lin", "Chang Zhou" ]
The capability gap between open-source and closed-source large language models (LLMs) remains a challenge in text-to-SQL tasks. In this paper, we introduce a synthetic data approach that combines data produced by larger, more powerful models (strong models) with error information data generated by smaller, not well-aligned models (weak models). The method not only enhances the domain generalization of text-to-SQL models but also explores the potential of error data supervision through preference learning. Furthermore, we employ the synthetic data approach for instruction tuning on open-source LLMs, resulting in SENSE, a specialized text-to-SQL model. The effectiveness of SENSE is demonstrated through state-of-the-art results on the SPIDER and BIRD benchmarks, bridging the performance gap between open-source models and methods prompted by closed-source models.
2024-08-08T00:00:00
2408.03906
Achieving Human Level Competitive Robot Table Tennis
[ "David B. D'Ambrosio", "Saminda Abeyruwan", "Laura Graesser", "Atil Iscen", "Heni Ben Amor", "Alex Bewley", "Barney J. Reed", "Krista Reymann", "Leila Takayama", "Yuval Tassa", "Krzysztof Choromanski", "Erwin Coumans", "Deepali Jain", "Navdeep Jaitly", "Natasha Jaques", "Satoshi Kataoka", "Yuheng Kuang", "Nevena Lazic", "Reza Mahjourian", "Sherry Moore", "Kenneth Oslund", "Anish Shankar", "Vikas Sindhwani", "Vincent Vanhoucke", "Grace Vesom", "Peng Xu", "Pannag R. Sanketi" ]
Achieving human-level speed and performance on real world tasks is a north star for the robotics research community. This work takes a step towards that goal and presents the first learned robot agent that reaches amateur human-level performance in competitive table tennis. Table tennis is a physically demanding sport which requires human players to undergo years of training to achieve an advanced level of proficiency. In this paper, we contribute (1) a hierarchical and modular policy architecture consisting of (i) low level controllers with their detailed skill descriptors which model the agent's capabilities and help to bridge the sim-to-real gap and (ii) a high level controller that chooses the low level skills, (2) techniques for enabling zero-shot sim-to-real including an iterative approach to defining the task distribution that is grounded in the real-world and defines an automatic curriculum, and (3) real time adaptation to unseen opponents. Policy performance was assessed through 29 robot vs. human matches of which the robot won 45% (13/29). All humans were unseen players and their skill level varied from beginner to tournament level. Whilst the robot lost all matches vs. the most advanced players, it won 100% of matches vs. beginners and 55% of matches vs. intermediate players, demonstrating solidly amateur human-level performance. Videos of the matches can be viewed at https://sites.google.com/view/competitive-robot-table-tennis
2024-08-08T00:00:00
2408.03541
EXAONE 3.0 7.8B Instruction Tuned Language Model
[ "LG AI Research", "Soyoung An", "Kyunghoon Bae", "Eunbi Choi", "Stanley Jungkyu Choi", "Yemuk Choi", "Seokhee Hong", "Yeonjung Hong", "Junwon Hwang", "Hyojin Jeon", "Gerrard Jeongwon Jo", "Hyunjik Jo", "Jiyeon Jung", "Yountae Jung", "Euisoon Kim", "Hyosang Kim", "Joonkee Kim", "Seonghwan Kim", "Soyeon Kim", "Sunkyoung Kim", "Yireun Kim", "Youchul Kim", "Edward Hwayoung Lee", "Haeju Lee", "Honglak Lee", "Jinsik Lee", "Kyungmin Lee", "Moontae Lee", "Seungjun Lee", "Woohyung Lim", "Sangha Park", "Sooyoun Park", "Yongmin Park", "Boseong Seo", "Sihoon Yang", "Heuiyeen Yeen", "Kyungjae Yoo", "Hyeongu Yun" ]
We introduce EXAONE 3.0 instruction-tuned language model, the first open model in the family of Large Language Models (LLMs) developed by LG AI Research. Among different model sizes, we publicly release the 7.8B instruction-tuned model to promote open research and innovations. Through extensive evaluations across a wide range of public and in-house benchmarks, EXAONE 3.0 demonstrates highly competitive real-world performance with instruction-following capability against other state-of-the-art open models of similar size. Our comparative analysis shows that EXAONE 3.0 excels particularly in Korean, while achieving compelling performance across general tasks and complex reasoning. With its strong real-world effectiveness and bilingual proficiency, we hope that EXAONE keeps contributing to advancements in Expert AI. Our EXAONE 3.0 instruction-tuned model is available at https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct
2024-08-08T00:00:00
2408.03356
RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis
[ "Hugo Blanc", "Jean-Emmanuel Deschaud", "Alexis Paljic" ]
Differentiable volumetric rendering-based methods made significant progress in novel view synthesis. On one hand, innovative methods have replaced the Neural Radiance Fields (NeRF) network with locally parameterized structures, enabling high-quality renderings in a reasonable time. On the other hand, approaches have used differentiable splatting instead of NeRF's ray casting to optimize radiance fields rapidly using Gaussian kernels, allowing for fine adaptation to the scene. However, differentiable ray casting of irregularly spaced kernels has been scarcely explored, while splatting, despite enabling fast rendering times, is susceptible to clearly visible artifacts. Our work closes this gap by providing a physically consistent formulation of the emitted radiance c and density σ, decomposed with Gaussian functions associated with Spherical Gaussians/Harmonics for all-frequency colorimetric representation. We also introduce a method enabling differentiable ray casting of irregularly distributed Gaussians using an algorithm that integrates radiance fields slab by slab and leverages a BVH structure. This allows our approach to finely adapt to the scene while avoiding splatting artifacts. As a result, we achieve superior rendering quality compared to the state-of-the-art while maintaining reasonable training times and achieving inference speeds of 25 FPS on the Blender dataset. Project page with videos and code: https://raygauss.github.io/
2024-08-08T00:00:00
2408.03615
Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks
[ "Zaijing Li", "Yuquan Xie", "Rui Shao", "Gongwei Chen", "Dongmei Jiang", "Liqiang Nie" ]
Building a general-purpose agent is a long-standing vision in the field of artificial intelligence. Existing agents have made remarkable progress in many domains, yet they still struggle to complete long-horizon tasks in an open world. We attribute this to the lack of necessary world knowledge and multimodal experience that can guide agents through a variety of long-horizon tasks. In this paper, we propose a Hybrid Multimodal Memory module to address the above challenges. It 1) transforms knowledge into a Hierarchical Directed Knowledge Graph that allows agents to explicitly represent and learn world knowledge, and 2) summarises historical information into an Abstracted Multimodal Experience Pool that provides agents with rich references for in-context learning. On top of the Hybrid Multimodal Memory module, a multimodal agent, Optimus-1, is constructed with a dedicated Knowledge-guided Planner and Experience-Driven Reflector, contributing to better planning and reflection in the face of long-horizon tasks in Minecraft. Extensive experimental results show that Optimus-1 significantly outperforms all existing agents on challenging long-horizon task benchmarks, and exhibits near human-level performance on many tasks. In addition, we introduce various Multimodal Large Language Models (MLLMs) as the backbone of Optimus-1. Experimental results show that Optimus-1 exhibits strong generalization with the help of the Hybrid Multimodal Memory module, outperforming the GPT-4V baseline on many tasks.
2024-08-08T00:00:00
2408.03695
Openstory++: A Large-scale Dataset and Benchmark for Instance-aware Open-domain Visual Storytelling
[ "Zilyu Ye", "Jinxiu Liu", "Ruotian Peng", "Jinjin Cao", "Zhiyang Chen", "Yiyang Zhang", "Ziwei Xuan", "Mingyuan Zhou", "Xiaoqian Shen", "Mohamed Elhoseiny", "Qi Liu", "Guo-Jun Qi" ]
Recent image generation models excel at creating high-quality images from brief captions. However, they fail to maintain consistency of multiple instances across images when encountering lengthy contexts. This inconsistency is largely due to the absence of granular instance feature labeling in existing training datasets. To tackle these issues, we introduce Openstory++, a large-scale dataset combining additional instance-level annotations with both images and text. Furthermore, we develop a training methodology that emphasizes entity-centric image-text generation, ensuring that the models learn to effectively interweave visual and textual information. Specifically, Openstory++ streamlines the process of keyframe extraction from open-domain videos, employing vision-language models to generate captions that are then polished by a large language model for narrative continuity. It surpasses previous datasets by offering a more expansive open-domain resource, which incorporates automated captioning, high-resolution imagery tailored for instance count, and extensive frame sequences for temporal consistency. Additionally, we present Cohere-Bench, a pioneering benchmark framework for evaluating image generation tasks when a long multimodal context is provided, including the ability to keep the background, style, and instances in the given context coherent. Compared to existing benchmarks, our work fills critical gaps in multi-modal generation, propelling the development of models that can adeptly generate and interpret complex narratives in open-domain environments. Experiments conducted within Cohere-Bench confirm the superiority of Openstory++ in nurturing high-quality visual storytelling models, enhancing their ability to address open-domain generation tasks. More details can be found at https://openstorypp.github.io/
2024-08-08T00:00:00
2408.03910
CodexGraph: Bridging Large Language Models and Code Repositories via Code Graph Databases
[ "Xiangyan Liu", "Bo Lan", "Zhiyuan Hu", "Yang Liu", "Zhicheng Zhang", "Wenmeng Zhou", "Fei Wang", "Michael Shieh" ]
https://github.com/modelscope/modelscope-agent/tree/master/apps
Large Language Models (LLMs) excel in stand-alone code tasks like HumanEval and MBPP, but struggle with handling entire code repositories. This challenge has prompted research on enhancing LLM-codebase interaction at a repository scale. Current solutions rely on similarity-based retrieval or manual tools and APIs, each with notable drawbacks. Similarity-based retrieval often has low recall in complex tasks, while manual tools and APIs are typically task-specific and require expert knowledge, reducing their generalizability across diverse code tasks and real-world applications. To mitigate these limitations, we introduce CodexGraph, a system that integrates LLM agents with graph database interfaces extracted from code repositories. By leveraging the structural properties of graph databases and the flexibility of the graph query language, CodexGraph enables the LLM agent to construct and execute queries, allowing for precise, code structure-aware context retrieval and code navigation. We assess CodexGraph using three benchmarks: CrossCodeEval, SWE-bench, and EvoCodeBench. Additionally, we develop five real-world coding applications. With a unified graph database schema, CodexGraph demonstrates competitive performance and potential in both academic and real-world environments, showcasing its versatility and efficacy in software engineering. Our application demo: https://github.com/modelscope/modelscope-agent/tree/master/apps/codexgraph_agent.
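As a rough illustration of structure-aware retrieval over a code graph (CodexGraph itself uses a graph database and a graph query language; the schema below is an assumption, and networkx stands in for the database):

```python
# Illustrative only: a tiny "code graph" with CALLS edges and a structure-aware query
# of the kind an LLM agent could issue. CodexGraph's actual schema and query language differ.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("pkg.utils.parse_config", kind="function", file="utils.py")
g.add_node("pkg.cli.main", kind="function", file="cli.py")
g.add_node("pkg.core.Trainer.fit", kind="method", file="core.py")
g.add_edge("pkg.cli.main", "pkg.utils.parse_config", rel="CALLS")
g.add_edge("pkg.core.Trainer.fit", "pkg.utils.parse_config", rel="CALLS")

def callers_of(qualified_name: str) -> list[str]:
    """Answer 'which functions call X?' by walking typed edges instead of text similarity."""
    return [u for u, _, d in g.in_edges(qualified_name, data=True) if d.get("rel") == "CALLS"]

print(callers_of("pkg.utils.parse_config"))  # ['pkg.cli.main', 'pkg.core.Trainer.fit']
```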
2024-08-08T00:00:00
2408.03900
Speech-MASSIVE: A Multilingual Speech Dataset for SLU and Beyond
[ "Beomseok Lee", "Ioan Calapodescu", "Marco Gaido", "Matteo Negri", "Laurent Besacier" ]
https://github.com/hlt-mt/Speech-MASSIVE
We present Speech-MASSIVE, a multilingual Spoken Language Understanding (SLU) dataset comprising the speech counterpart for a portion of the MASSIVE textual corpus. Speech-MASSIVE covers 12 languages from different families and inherits from MASSIVE the annotations for the intent prediction and slot-filling tasks. Our extension is prompted by the scarcity of massively multilingual SLU datasets and the growing need for versatile speech datasets to assess foundation models (LLMs, speech encoders) across languages and tasks. We provide a multimodal, multitask, multilingual dataset and report SLU baselines using both cascaded and end-to-end architectures in various training scenarios (zero-shot, few-shot, and full fine-tune). Furthermore, we demonstrate the suitability of Speech-MASSIVE for benchmarking other tasks such as speech transcription, language identification, and speech translation. The dataset, models, and code are publicly available at: https://github.com/hlt-mt/Speech-MASSIVE
2024-08-08T00:00:00
2408.03822
Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields
[ "Joo Chan Lee", "Daniel Rho", "Xiangyu Sun", "Jong Hwan Ko", "Eunbyung Park" ]
3D Gaussian splatting (3DGS) has recently emerged as an alternative representation that leverages a 3D Gaussian-based representation and introduces an approximated volumetric rendering, achieving very fast rendering speed and promising image quality. Furthermore, subsequent studies have successfully extended 3DGS to dynamic 3D scenes, demonstrating its wide range of applications. However, a significant drawback arises as 3DGS and its following methods entail a substantial number of Gaussians to maintain the high fidelity of the rendered images, which requires a large amount of memory and storage. To address this critical issue, we place a specific emphasis on two key objectives: reducing the number of Gaussian points without sacrificing performance and compressing the Gaussian attributes, such as view-dependent color and covariance. To this end, we propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance. In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field rather than relying on spherical harmonics. Finally, we learn codebooks to compactly represent the geometric and temporal attributes by residual vector quantization. With model compression techniques such as quantization and entropy coding, we consistently show over 25x reduced storage and enhanced rendering speed compared to 3DGS for static scenes, while maintaining the quality of the scene representation. For dynamic scenes, our approach achieves more than 12x storage efficiency and retains a high-quality reconstruction compared to the existing state-of-the-art methods. Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering. Our project page is available at https://maincold2.github.io/c3dgs/.
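A minimal sketch of the learnable-mask idea described above, assuming a PyTorch setup with a straight-through estimator; the threshold, sparsity weight, and stand-in rendering loss are illustrative, not the paper's exact recipe.

```python
# Learnable binary mask over Gaussian points with a straight-through estimator (sketch).
import torch

num_gaussians = 10_000
mask_logits = torch.nn.Parameter(torch.zeros(num_gaussians))  # learnable mask parameters
opacity = torch.rand(num_gaussians)                            # per-Gaussian attribute

soft = torch.sigmoid(mask_logits)
hard = (soft > 0.5).float()
mask = soft + (hard - soft).detach()   # forward: hard 0/1 mask; backward: soft gradient

masked = mask * opacity
recon_loss = (masked - opacity).pow(2).mean()   # stand-in for the rendering loss
sparsity_loss = mask.mean()                     # encourages pruning Gaussians
(recon_loss + 0.01 * sparsity_loss).backward()
print(int(hard.sum().item()), "of", num_gaussians, "Gaussians kept")
```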
2024-08-08T00:00:00
2408.03588
Facing the Music: Tackling Singing Voice Separation in Cinematic Audio Source Separation
[ "Karn N. Watcharasupat", "Chih-Wei Wu", "Iroro Orife" ]
https://github.com/kwatcharasupat/source-separation-landing
Cinematic audio source separation (CASS) is a fairly new subtask of audio source separation. A typical setup of CASS is a three-stem problem, with the aim of separating the mixture into the dialogue stem (DX), music stem (MX), and effects stem (FX). In practice, however, several edge cases exist as some sound sources do not fit neatly into any of these three stems, necessitating the use of additional auxiliary stems in production. One very common edge case is the singing voice in film audio, which may belong in either the DX or MX, depending heavily on the cinematic context. In this work, we demonstrate a very straightforward extension of the dedicated-decoder Bandit and query-based single-decoder Banquet models to a four-stem problem, treating non-musical dialogue, instrumental music, singing voice, and effects as separate stems. Interestingly, the query-based Banquet model outperformed the dedicated-decoder Bandit model. We hypothesize that this is due to better feature alignment at the bottleneck as enforced by the band-agnostic FiLM layer. Dataset and model implementation will be made available at https://github.com/kwatcharasupat/source-separation-landing.
2024-08-08T00:00:00
2408.03923
Fast Sprite Decomposition from Animated Graphics
[ "Tomoyuki Suzuki", "Kotaro Kikuchi", "Kota Yamaguchi" ]
This paper presents an approach to decomposing animated graphics into sprites, a set of basic elements or layers. Our approach builds on the optimization of sprite parameters to fit the raster video. For efficiency, we assume static textures for sprites to reduce the search space while preventing artifacts using a texture prior model. To further speed up the optimization, we introduce the initialization of the sprite parameters utilizing a pre-trained video object segmentation model and user input of single frame annotations. For our study, we construct the Crello Animation dataset from an online design service and define quantitative metrics to measure the quality of the extracted sprites. Experiments show that our method significantly outperforms baselines for similar decomposition tasks in terms of the quality/efficiency tradeoff.
2024-08-08T00:00:00
2408.03837
WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models
[ "Prannaya Gupta", "Le Qi Yau", "Hao Han Low", "I-Shiang Lee", "Hugo Maximus Lim", "Yu Xin Teoh", "Jia Hng Koh", "Dar Win Liew", "Rishabh Bhardwaj", "Rajat Bhardwaj", "Soujanya Poria" ]
https://github.com/walledai/walledevalA
WalledEval is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35 safety benchmarks covering areas such as multilingual safety, exaggerated safety, and prompt injections. The framework supports both LLM and judge benchmarking, and incorporates custom mutators to test safety against various text-style mutations such as future tense and paraphrasing. Additionally, WalledEval introduces WalledGuard, a new, small and performant content moderation tool, and SGXSTest, a benchmark for assessing exaggerated safety in cultural contexts. We make WalledEval publicly available at https://github.com/walledai/walledevalA.
2024-08-09T00:00:00
2408.04631
Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics
[ "Ruining Li", "Chuanxia Zheng", "Christian Rupprecht", "Andrea Vedaldi" ]
We present Puppet-Master, an interactive video generative model that can serve as a motion prior for part-level dynamics. At test time, given a single image and a sparse set of motion trajectories (i.e., drags), Puppet-Master can synthesize a video depicting realistic part-level motion faithful to the given drag interactions. This is achieved by fine-tuning a large-scale pre-trained video diffusion model, for which we propose a new conditioning architecture to inject the dragging control effectively. More importantly, we introduce the all-to-first attention mechanism, a drop-in replacement for the widely adopted spatial attention modules, which significantly improves generation quality by addressing the appearance and background issues in existing models. Unlike other motion-conditioned video generators that are trained on in-the-wild videos and mostly move an entire object, Puppet-Master is learned from Objaverse-Animation-HQ, a new dataset of curated part-level motion clips. We propose a strategy to automatically filter out sub-optimal animations and augment the synthetic renderings with meaningful motion trajectories. Puppet-Master generalizes well to real images across various categories and outperforms existing methods in a zero-shot manner on a real-world benchmark. See our project page for more results: vgg-puppetmaster.github.io.
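A bare-bones sketch of the all-to-first attention idea: queries come from every frame while keys and values come only from the first frame. The tensor layout and dimensions are assumptions for illustration, not the Puppet-Master architecture.

```python
# All-to-first attention (sketch): every frame's tokens attend to the first frame's tokens.
import torch
import torch.nn.functional as F

B, T, N, D = 2, 8, 64, 32                 # batch, frames, tokens per frame, channels
x = torch.randn(B, T, N, D)

q = x.reshape(B, T * N, D)                # queries from all frames
kv = x[:, 0]                              # keys/values from the first frame only
out = F.scaled_dot_product_attention(q, kv, kv).reshape(B, T, N, D)
print(out.shape)                          # torch.Size([2, 8, 64, 32])
```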
2024-08-09T00:00:00
2408.03361
GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI
[ "Pengcheng Chen", "Jin Ye", "Guoan Wang", "Yanjun Li", "Zhongying Deng", "Wei Li", "Tianbin Li", "Haodong Duan", "Ziyan Huang", "Yanzhou Su", "Benyou Wang", "Shaoting Zhang", "Bin Fu", "Jianfei Cai", "Bohan Zhuang", "Eric J Seibel", "Junjun He", "Yu Qiao" ]
Large Vision-Language Models (LVLMs) are capable of handling diverse data types such as imaging, text, and physiological signals, and can be applied in various fields. In the medical field, LVLMs have a high potential to offer substantial assistance for diagnosis and treatment. Before that, it is crucial to develop benchmarks to evaluate LVLMs' effectiveness in various medical applications. Current benchmarks are often built upon specific academic literature, mainly focusing on a single domain, and lacking varying perceptual granularities. Thus, they face specific challenges, including limited clinical relevance, incomplete evaluations, and insufficient guidance for interactive LVLMs. To address these limitations, we developed the GMAI-MMBench, the most comprehensive general medical AI benchmark with well-categorized data structure and multi-perceptual granularity to date. It is constructed from 285 datasets across 39 medical image modalities, 18 clinical-related tasks, 18 departments, and 4 perceptual granularities in a Visual Question Answering (VQA) format. Additionally, we implemented a lexical tree structure that allows users to customize evaluation tasks, accommodating various assessment needs and substantially supporting medical AI research and applications. We evaluated 50 LVLMs, and the results show that even the advanced GPT-4o only achieves an accuracy of 52%, indicating significant room for improvement. Moreover, we identified five key insufficiencies in current cutting-edge LVLMs that need to be addressed to advance the development of better medical applications. We believe that GMAI-MMBench will stimulate the community to build the next generation of LVLMs toward GMAI. Project Page: https://uni-medical.github.io/GMAI-MMBench.github.io/
2024-08-09T00:00:00
2408.04034
Task-oriented Sequential Grounding in 3D Scenes
[ "Zhuofan Zhang", "Ziyu Zhu", "Pengxiang Li", "Tengyu Liu", "Xiaojian Ma", "Yixin Chen", "Baoxiong Jia", "Siyuan Huang", "Qing Li" ]
Grounding natural language in physical 3D environments is essential for the advancement of embodied artificial intelligence. Current datasets and models for 3D visual grounding predominantly focus on identifying and localizing objects from static, object-centric descriptions. These approaches do not adequately address the dynamic and sequential nature of task-oriented grounding necessary for practical applications. In this work, we propose a new task: Task-oriented Sequential Grounding in 3D scenes, wherein an agent must follow detailed step-by-step instructions to complete daily activities by locating a sequence of target objects in indoor scenes. To facilitate this task, we introduce SG3D, a large-scale dataset containing 22,346 tasks with 112,236 steps across 4,895 real-world 3D scenes. The dataset is constructed using a combination of RGB-D scans from various 3D scene datasets and an automated task generation pipeline, followed by human verification for quality assurance. We adapted three state-of-the-art 3D visual grounding models to the sequential grounding task and evaluated their performance on SG3D. Our results reveal that while these models perform well on traditional benchmarks, they face significant challenges with task-oriented sequential grounding, underscoring the need for further research in this area.
2024-08-09T00:00:00
2408.04567
Sketch2Scene: Automatic Generation of Interactive 3D Game Scenes from User's Casual Sketches
[ "Yongzhi Xu", "Yonhon Ng", "Yifu Wang", "Inkyu Sa", "Yunfei Duan", "Yang Li", "Pan Ji", "Hongdong Li" ]
3D Content Generation is at the heart of many computer graphics applications, including video gaming, film-making, virtual and augmented reality, etc. This paper proposes a novel deep-learning-based approach for automatically generating interactive and playable 3D game scenes, all from the user's casual prompts such as a hand-drawn sketch. Sketch-based input offers a natural and convenient way to convey the user's design intention in the content creation process. To circumvent the data-deficient challenge in learning (i.e., the lack of large-scale training data of 3D scenes), our method leverages a pre-trained 2D denoising diffusion model to generate a 2D image of the scene as the conceptual guidance. In this process, we adopt the isometric projection mode to factor out unknown camera poses while obtaining the scene layout. From the generated isometric image, we use a pre-trained image understanding method to segment the image into meaningful parts, such as off-ground objects, trees, and buildings, and extract the 2D scene layout. These segments and layouts are subsequently fed into a procedural content generation (PCG) engine, such as a 3D video game engine like Unity or Unreal, to create the 3D scene. The resulting 3D scene can be seamlessly integrated into a game development environment and is readily playable. Extensive tests demonstrate that our method can efficiently generate high-quality and interactive 3D game scenes with layouts that closely follow the user's intention.
2024-08-09T00:00:00
2408.04594
Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models
[ "Qirui Jiao", "Daoyuan Chen", "Yilun Huang", "Yaliang Li", "Ying Shen" ]
https://github.com/modelscope/data-juicer
High-performance Multimodal Large Language Models (MLLMs) rely heavily on data quality. This study introduces a novel dataset named Img-Diff, designed to enhance fine-grained image recognition in MLLMs by leveraging insights from contrastive learning and image difference captioning. By analyzing object differences between similar images, we challenge models to identify both matching and distinct components. We utilize the Stable-Diffusion-XL model and advanced image editing techniques to create pairs of similar images that highlight object replacements. Our methodology includes a Difference Area Generator for identifying object differences, followed by a Difference Captions Generator for detailed difference descriptions. The result is a relatively small but high-quality dataset of "object replacement" samples. We use the proposed dataset to fine-tune state-of-the-art (SOTA) MLLMs such as MGM-7B, yielding comprehensive improvements in performance scores over SOTA models trained with larger-scale datasets, in numerous image difference and Visual Question Answering tasks. For instance, our trained models notably surpass the SOTA models GPT-4V and Gemini on the MMVP benchmark. Besides, we investigate alternative methods for generating image difference data through "object removal" and conduct thorough evaluation to confirm the dataset's diversity, quality, and robustness, presenting several insights on the synthesis of such contrastive datasets. To encourage further research and advance the field of multimodal data synthesis and enhancement of MLLMs' fundamental capabilities for image understanding, we release our codes and dataset at https://github.com/modelscope/data-juicer/tree/ImgDiff.
2024-08-09T00:00:00
2408.04284
LLM-DetectAIve: a Tool for Fine-Grained Machine-Generated Text Detection
[ "Mervat Abassy", "Kareem Elozeiri", "Alexander Aziz", "Minh Ngoc Ta", "Raj Vardhan Tomar", "Bimarsha Adhikari", "Saad El Dine Ahmed", "Yuxia Wang", "Osama Mohammed Afzal", "Zhuohan Xie", "Jonibek Mansurov", "Ekaterina Artemova", "Vladislav Mikhailov", "Rui Xing", "Jiahui Geng", "Hasan Iqbal", "Zain Muhammad Mujahid", "Tarek Mahmoud", "Akim Tsvigun", "Alham Fikri Aji", "Artem Shelmanov", "Nizar Habash", "Iryna Gurevych", "Preslav Nakov" ]
The widespread accessibility of large language models (LLMs) to the general public has significantly amplified the dissemination of machine-generated texts (MGTs). Advancements in prompt manipulation have exacerbated the difficulty in discerning the origin of a text (human-authored vs. machine-generated). This raises concerns regarding the potential misuse of MGTs, particularly within educational and academic domains. In this paper, we present LLM-DetectAIve -- a system designed for fine-grained MGT detection. It is able to classify texts into four categories: human-written, machine-generated, machine-written machine-humanized, and human-written machine-polished. Contrary to previous MGT detectors that perform binary classification, introducing two additional categories in LLM-DetectAIve offers insights into the varying degrees of LLM intervention during text creation. This might be useful in some domains, like education, where any LLM intervention is usually prohibited. Experiments show that LLM-DetectAIve can effectively identify the authorship of textual content, proving its usefulness in enhancing integrity in education, academia, and other domains. LLM-DetectAIve is publicly accessible at https://huggingface.co/spaces/raj-tomar001/MGT-New. The video describing our system is available at https://youtu.be/E8eT_bE7k8c.
2024-08-09T00:00:00
2408.04614
Better Alignment with Instruction Back-and-Forth Translation
[ "Thao Nguyen", "Jeffrey Li", "Sewoong Oh", "Ludwig Schmidt", "Jason Weston", "Luke Zettlemoyer", "Xian Li" ]
We propose a new method, instruction back-and-forth translation, to construct high-quality synthetic data grounded in world knowledge for aligning large language models (LLMs). Given documents from a web corpus, we generate and curate synthetic instructions using the backtranslation approach proposed by Li et al.(2023a), and rewrite the responses to improve their quality further based on the initial documents. Fine-tuning with the resulting (backtranslated instruction, rewritten response) pairs yields higher win rates on AlpacaEval than using other common instruction datasets such as Humpback, ShareGPT, Open Orca, Alpaca-GPT4 and Self-instruct. We also demonstrate that rewriting the responses with an LLM outperforms direct distillation, and the two generated text distributions exhibit significant distinction in embedding space. Further analysis shows that our backtranslated instructions are of higher quality than other sources of synthetic instructions, while our responses are more diverse and complex than those obtained from distillation. Overall we find that instruction back-and-forth translation combines the best of both worlds -- making use of the information diversity and quantity found on the web, while ensuring the quality of the responses which is necessary for effective alignment.
2024-08-09T00:00:00
2408.04619
Transformer Explainer: Interactive Learning of Text-Generative Models
[ "Aeree Cho", "Grace C. Kim", "Alexander Karpekov", "Alec Helbling", "Zijie J. Wang", "Seongmin Lee", "Benjamin Hoover", "Duen Horng Chau" ]
Transformers have revolutionized machine learning, yet their inner workings remain opaque to many. We present Transformer Explainer, an interactive visualization tool designed for non-experts to learn about Transformers through the GPT-2 model. Our tool helps users understand complex Transformer concepts by integrating a model overview and enabling smooth transitions across abstraction levels of mathematical operations and model structures. It runs a live GPT-2 instance locally in the user's browser, empowering users to experiment with their own input and observe in real-time how the internal components and parameters of the Transformer work together to predict the next tokens. Our tool requires no installation or special hardware, broadening the public's education access to modern generative AI techniques. Our open-sourced tool is available at https://poloclub.github.io/transformer-explainer/. A video demo is available at https://youtu.be/ECR4oAwocjs.
2024-08-09T00:00:00
2408.04303
Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP
[ "François Remy", "Pieter Delobelle", "Hayastan Avetisyan", "Alfiya Khabibullina", "Miryam de Lhoneux", "Thomas Demeester" ]
The development of monolingual language models for low- and mid-resource languages continues to be hindered by the difficulty in sourcing high-quality training data. In this study, we present a novel cross-lingual vocabulary transfer strategy, trans-tokenization, designed to tackle this challenge and enable more efficient language adaptation. Our approach focuses on adapting a high-resource monolingual LLM to an unseen target language by initializing the token embeddings of the target language using a weighted average of semantically similar token embeddings from the source language. For this, we leverage a translation resource covering both the source and target languages. We validate our method with the Tweeties, a series of trans-tokenized LLMs, and demonstrate their competitive performance on various downstream tasks across a small but diverse set of languages. Additionally, we introduce Hydra LLMs, models with multiple swappable language modeling heads and embedding tables, which further extend the capabilities of our trans-tokenization strategy. By designing a Hydra LLM based on the multilingual model TowerInstruct, we developed a state-of-the-art machine translation model for Tatar, in a zero-shot manner, completely bypassing the need for high-quality parallel data. This breakthrough is particularly significant for low-resource languages like Tatar, where high-quality parallel data is hard to come by. By lowering the data and time requirements for training high-quality models, our trans-tokenization strategy allows for the development of LLMs for a wider range of languages, especially those with limited resources. We hope that our work will inspire further research and collaboration in the field of cross-lingual vocabulary transfer and contribute to the empowerment of languages on a global scale.
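A small sketch of the embedding-initialization step described above: each target-language token embedding starts as a weighted average of semantically similar source-language embeddings. The alignment table and weights below are made-up placeholders for illustration.

```python
# Trans-tokenization-style embedding initialization (sketch).
import torch

src_emb = torch.randn(50_000, 768)        # source model's input embedding table

# target token id -> [(source token id, alignment weight), ...]  (illustrative values)
alignment = {
    0: [(101, 0.7), (2543, 0.3)],
    1: [(87, 1.0)],
}

tgt_emb = torch.zeros(len(alignment), src_emb.size(1))
for tgt_id, pairs in alignment.items():
    ids = torch.tensor([i for i, _ in pairs])
    w = torch.tensor([float(w) for _, w in pairs])
    w = w / w.sum()                        # normalize alignment weights
    tgt_emb[tgt_id] = (w.unsqueeze(1) * src_emb[ids]).sum(dim=0)

print(tgt_emb.shape)                       # torch.Size([2, 768])
```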
2024-08-09T00:00:00
2408.04520
Advancing Molecular Machine (Learned) Representations with Stereoelectronics-Infused Molecular Graphs
[ "Daniil A. Boiko", "Thiago Reschützegger", "Benjamin Sanchez-Lengeling", "Samuel M. Blau", "Gabe Gomes" ]
Molecular representation is a foundational element in our understanding of the physical world. Its importance ranges from the fundamentals of chemical reactions to the design of new therapies and materials. Previous molecular machine learning models have employed strings, fingerprints, global features, and simple molecular graphs that are inherently information-sparse representations. However, as the complexity of prediction tasks increases, the molecular representation needs to encode higher fidelity information. This work introduces a novel approach to infusing quantum-chemical-rich information into molecular graphs via stereoelectronic effects. We show that the explicit addition of stereoelectronic interactions significantly improves the performance of molecular machine learning models. Furthermore, stereoelectronics-infused representations can be learned and deployed with a tailored double graph neural network workflow, enabling its application to any downstream molecular machine learning task. Finally, we show that the learned representations allow for facile stereoelectronic evaluation of previously intractable systems, such as entire proteins, opening new avenues of molecular design.
2024-08-09T00:00:00
2408.02816
Learning to Predict Program Execution by Modeling Dynamic Dependency on Code Graphs
[ "Cuong Chi Le", "Hoang Nhat Phan", "Huy Nhat Phan", "Tien N. Nguyen", "Nghi D. Q. Bui" ]
Predicting program behavior without execution is an essential and challenging task in software engineering. Traditional models often struggle to capture dynamic dependencies and interactions within code. This paper introduces a novel machine learning-based framework called CodeFlow, which predicts code coverage and detects runtime errors through Dynamic Dependencies Learning. Utilizing control flow graphs (CFGs), CodeFlow represents all possible execution paths and the relationships between different statements, offering a comprehensive understanding of program behavior. It constructs CFGs to depict execution paths and learns vector representations for CFG nodes, capturing static control-flow dependencies. Additionally, it learns dynamic dependencies through execution traces, which reflect the impacts among statements during execution. This approach enables accurate prediction of code coverage and identification of runtime errors. Empirical evaluations show significant improvements in code coverage prediction accuracy and effective localization of runtime errors, surpassing current models.
2024-08-09T00:00:00
2407.18245
VGGHeads: A Large-Scale Synthetic Dataset for 3D Human Heads
[ "Orest Kupyn", "Eugene Khvedchenia", "Christian Rupprecht" ]
Human head detection, keypoint estimation, and 3D head model fitting are important tasks with many applications. However, traditional real-world datasets often suffer from bias, privacy, and ethical concerns, and they have been recorded in laboratory environments, which makes it difficult for trained models to generalize. Here, we introduce VGGHeads -- a large-scale synthetic dataset generated with diffusion models for human head detection and 3D mesh estimation. Our dataset comprises over 1 million high-resolution images, each annotated with detailed 3D head meshes, facial landmarks, and bounding boxes. Using this dataset we introduce a new model architecture capable of simultaneous head detection and head mesh reconstruction from a single image in a single step. Through extensive experimental evaluations, we demonstrate that models trained on our synthetic data achieve strong performance on real images. Furthermore, the versatility of our dataset makes it applicable across a broad spectrum of tasks, offering a general and comprehensive representation of human heads. Additionally, we provide detailed information about the synthetic data generation pipeline, enabling it to be re-used for other tasks and domains.
2024-08-09T00:00:00
2406.04604
Learning Task Decomposition to Assist Humans in Competitive Programming
[ "Jiaxin Wen", "Ruiqi Zhong", "Pei Ke", "Zhihong Shao", "Hongning Wang", "Minlie Huang" ]
When using language models (LMs) to solve complex problems, humans might struggle to understand the LM-generated solutions and repair the flawed ones. To assist humans in repairing them, we propose to automatically decompose complex solutions into multiple simpler pieces that correspond to specific subtasks. We introduce a novel objective for learning task decomposition, termed assistive value (AssistV), which measures the feasibility and speed for humans to repair the decomposed solution. We collect a dataset of human repair experiences on different decomposed solutions. Utilizing the collected data as in-context examples, we then learn to critique, refine, and rank decomposed solutions to improve AssistV. We validate our method on competitive programming problems: in a 177-hour human study, our method enables non-experts to solve 33.3% more problems, speeds them up by 3.3x, and empowers them to match unassisted experts.
2024-08-12T00:00:00
2408.05211
VITA: Towards Open-Source Interactive Omni Multimodal LLM
[ "Chaoyou Fu", "Haojia Lin", "Zuwei Long", "Yunhang Shen", "Meng Zhao", "Yifan Zhang", "Xiong Wang", "Di Yin", "Long Ma", "Xiawu Zheng", "Ran He", "Rongrong Ji", "Yunsheng Wu", "Caifeng Shan", "Xing Sun" ]
The remarkable multimodal capabilities and interactive experience of GPT-4o underscore their necessity in practical applications, yet open-source models rarely excel in both areas. In this paper, we introduce VITA, the first-ever open-source Multimodal Large Language Model (MLLM) adept at simultaneous processing and analysis of Video, Image, Text, and Audio modalities, which meanwhile offers an advanced multimodal interactive experience. Starting from Mixtral 8x7B as a language foundation, we expand its Chinese vocabulary followed by bilingual instruction tuning. We further endow the language model with visual and audio capabilities through two-stage multi-task learning of multimodal alignment and instruction tuning. VITA demonstrates robust foundational capabilities of multilingual, vision, and audio understanding, as evidenced by its strong performance across a range of both unimodal and multimodal benchmarks. Beyond foundational capabilities, we have made considerable progress in enhancing the natural multimodal human-computer interaction experience. To the best of our knowledge, we are the first to exploit non-awakening interaction and audio interrupt in MLLM. VITA is the first step for the open-source community to explore the seamless integration of multimodal understanding and interaction. While there is still much work to be done on VITA to get close to its closed-source counterparts, we hope that its role as a pioneer can serve as a cornerstone for subsequent research. Project Page: https://vita-home.github.io.
2024-08-12T00:00:00
2408.04810
UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling
[ "Haider Al-Tahan", "Quentin Garrido", "Randall Balestriero", "Diane Bouchacourt", "Caner Hazirbas", "Mark Ibrahim" ]
Significant research efforts have been made to scale and improve vision-language model (VLM) training approaches. Yet, with an ever-growing number of benchmarks, researchers are tasked with the heavy burden of implementing each protocol, bearing a non-trivial computational cost, and making sense of how all these benchmarks translate into meaningful axes of progress. To facilitate a systematic evaluation of VLM progress, we introduce UniBench: a unified implementation of 50+ VLM benchmarks spanning a comprehensive range of carefully categorized capabilities from object recognition to spatial awareness, counting, and much more. We showcase the utility of UniBench for measuring progress by evaluating nearly 60 publicly available vision-language models, trained on scales of up to 12.8B samples. We find that while scaling training data or model size can boost many vision-language model capabilities, scaling offers little benefit for reasoning or relations. Surprisingly, we also discover that today's best VLMs struggle on simple digit recognition and counting tasks, e.g. MNIST, which much simpler networks can solve. Where scale falls short, we find that more precise interventions, such as data quality or tailored learning objectives, offer more promise. For practitioners, we also offer guidance on selecting a suitable VLM for a given application. Finally, we release an easy-to-run UniBench code-base with the full set of 50+ benchmarks and comparisons across 59 models as well as a distilled, representative set of benchmarks that runs in 5 minutes on a single GPU.
2024-08-12T00:00:00
2408.05205
Kalman-Inspired Feature Propagation for Video Face Super-Resolution
[ "Ruicheng Feng", "Chongyi Li", "Chen Change Loy" ]
Despite the promising progress of face image super-resolution, video face super-resolution remains relatively under-explored. Existing approaches either adapt general video super-resolution networks to face datasets or apply established face image super-resolution models independently on individual video frames. These paradigms encounter challenges either in reconstructing facial details or maintaining temporal consistency. To address these issues, we introduce a novel framework called Kalman-inspired Feature Propagation (KEEP), designed to maintain a stable face prior over time. The Kalman filtering principles offer our method a recurrent ability to use the information from previously restored frames to guide and regulate the restoration process of the current frame. Extensive experiments demonstrate the effectiveness of our method in capturing facial details consistently across video frames. Code and video demo are available at https://jnjaby.github.io/projects/KEEP.
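A toy sketch of Kalman-inspired feature propagation: a learned gain blends the feature predicted from the previous frame with the current frame's observation. The gain network, transition model, and tensor shapes are assumptions for illustration rather than the KEEP architecture.

```python
# Kalman-inspired recurrent feature propagation (sketch): predict, then correct with a learned gain.
import torch

C, H, W = 64, 32, 32
gain_net = torch.nn.Sequential(torch.nn.Conv2d(2 * C, C, 3, padding=1), torch.nn.Sigmoid())
transition = torch.nn.Conv2d(C, C, 3, padding=1)   # predicts the next state from the previous one

def propagate(frames_feat):                         # frames_feat: (T, C, H, W)
    state = frames_feat[0:1]
    outputs = [state]
    for t in range(1, frames_feat.size(0)):
        pred = transition(state)                           # predict from the previous state
        obs = frames_feat[t:t + 1]                         # current-frame observation
        K = gain_net(torch.cat([pred, obs], dim=1))        # Kalman-like gain in [0, 1]
        state = pred + K * (obs - pred)                    # correct the prediction
        outputs.append(state)
    return torch.cat(outputs, dim=0)

out = propagate(torch.randn(5, C, H, W))
print(out.shape)                                    # torch.Size([5, 64, 32, 32])
```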
2024-08-12T00:00:00
2408.05147
Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
[ "Tom Lieberum", "Senthooran Rajamanoharan", "Arthur Conmy", "Lewis Smith", "Nicolas Sonnerat", "Vikrant Varma", "János Kramár", "Anca Dragan", "Rohin Shah", "Neel Nanda" ]
Sparse autoencoders (SAEs) are an unsupervised method for learning a sparse decomposition of a neural network's latent representations into seemingly interpretable features. Despite recent excitement about their potential, research applications outside of industry are limited by the high cost of training a comprehensive suite of SAEs. In this work, we introduce Gemma Scope, an open suite of JumpReLU SAEs trained on all layers and sub-layers of Gemma 2 2B and 9B and select layers of Gemma 2 27B base models. We primarily train SAEs on the Gemma 2 pre-trained models, but additionally release SAEs trained on instruction-tuned Gemma 2 9B for comparison. We evaluate the quality of each SAE on standard metrics and release these results. We hope that by releasing these SAE weights, we can help make more ambitious safety and interpretability research easier for the community. Weights and a tutorial can be found at https://huggingface.co/google/gemma-scope and an interactive demo can be found at https://www.neuronpedia.org/gemma-scope
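A minimal sketch of a JumpReLU sparse autoencoder forward pass; the dimensions and the fixed threshold are illustrative, and training details (the sparsity penalty and straight-through gradients for the threshold) are omitted.

```python
# JumpReLU SAE forward pass (sketch): activations below a per-feature threshold are zeroed.
import torch

d_model, d_sae = 2304, 16_384
W_enc = torch.randn(d_model, d_sae) * 0.02
b_enc = torch.zeros(d_sae)
W_dec = torch.randn(d_sae, d_model) * 0.02
b_dec = torch.zeros(d_model)
theta = torch.full((d_sae,), 0.05)        # per-feature JumpReLU threshold

def sae_forward(x):                        # x: (batch, d_model) residual-stream activations
    pre = x @ W_enc + b_enc
    acts = pre * (pre > theta).float()     # JumpReLU: keep only activations above theta
    recon = acts @ W_dec + b_dec
    return acts, recon

x = torch.randn(4, d_model)
acts, recon = sae_forward(x)
print(acts.shape, (acts > 0).float().mean().item())  # sparsity of the learned code
```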
2024-08-12T00:00:00
2408.04708
MulliVC: Multi-lingual Voice Conversion With Cycle Consistency
[ "Jiawei Huang", "Chen Zhang", "Yi Ren", "Ziyue Jiang", "Zhenhui Ye", "Jinglin Liu", "Jinzheng He", "Xiang Yin", "Zhou Zhao" ]
Voice conversion aims to modify the source speaker's voice to resemble the target speaker while preserving the original speech content. Despite notable recent advancements in voice conversion, multi-lingual voice conversion (including both monolingual and cross-lingual scenarios) has yet to be extensively studied. It faces two main challenges: 1) the considerable variability in prosody and articulation habits across languages; and 2) the rarity of paired multi-lingual datasets from the same speaker. In this paper, we propose MulliVC, a novel voice conversion system that only converts timbre and keeps the original content and source-language prosody without multi-lingual paired data. Specifically, each training step of MulliVC contains three substeps: in step one, the model is trained with monolingual speech data; then, steps two and three take inspiration from back-translation and construct a cyclical process to disentangle the timbre and other information (content, prosody, and other language-related information) in the absence of multi-lingual data from the same speaker. Both objective and subjective results indicate that MulliVC significantly surpasses other methods in both monolingual and cross-lingual contexts, demonstrating the system's efficacy and the viability of the three-step approach with cycle consistency. Audio samples can be found on our demo page (mullivc.github.io).
2024-08-12T00:00:00
2408.04682
ToolSandbox: A Stateful, Conversational, Interactive Evaluation Benchmark for LLM Tool Use Capabilities
[ "Jiarui Lu", "Thomas Holleis", "Yizhe Zhang", "Bernhard Aumayer", "Feng Nan", "Felix Bai", "Shuang Ma", "Shen Ma", "Mengyu Li", "Guoli Yin", "Zirui Wang", "Ruoming Pang" ]
https://github.com/apple/ToolSandbox
Recent advancements in large language models (LLMs) have sparked growing research interest in tool-assisted LLMs solving real-world challenges, which calls for a comprehensive evaluation of tool-use capabilities. While previous works focused on either evaluating over stateless web services (RESTful APIs) based on a single-turn user prompt or on an off-policy dialog trajectory, ToolSandbox includes stateful tool execution, implicit state dependencies between tools, a built-in user simulator supporting on-policy conversational evaluation, and a dynamic evaluation strategy for intermediate and final milestones over an arbitrary trajectory. We show that open-source and proprietary models have a significant performance gap, and that complex tasks like State Dependency, Canonicalization, and Insufficient Information defined in ToolSandbox are challenging even for the most capable SOTA LLMs, providing brand-new insights into tool-use LLM capabilities. The ToolSandbox evaluation framework is released at https://github.com/apple/ToolSandbox
2024-08-12T00:00:00
2408.04840
mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models
[ "Jiabo Ye", "Haiyang Xu", "Haowei Liu", "Anwen Hu", "Ming Yan", "Qi Qian", "Ji Zhang", "Fei Huang", "Jingren Zhou" ]
Multi-modal Large Language Models (MLLMs) have demonstrated remarkable capabilities in executing instructions for a variety of single-image tasks. Despite this progress, significant challenges remain in modeling long image sequences. In this work, we introduce the versatile multi-modal large language model, mPLUG-Owl3, which enhances the capability for long image-sequence understanding in scenarios that incorporate retrieved image-text knowledge, interleaved image-text, and lengthy videos. Specifically, we propose novel hyper attention blocks to efficiently integrate vision and language into a common language-guided semantic space, thereby facilitating the processing of extended multi-image scenarios. Extensive experimental results suggest that mPLUG-Owl3 achieves state-of-the-art performance among models with a similar size on single-image, multi-image, and video benchmarks. Moreover, we propose a challenging long visual sequence evaluation named Distractor Resistance to assess the ability of models to maintain focus amidst distractions. Finally, with the proposed architecture, mPLUG-Owl3 demonstrates outstanding performance on ultra-long visual sequence inputs. We hope that mPLUG-Owl3 can contribute to the development of more efficient and powerful multimodal large language models.
2024-08-12T00:00:00
2408.04785
BRAT: Bonus oRthogonAl Token for Architecture Agnostic Textual Inversion
[ "James Baker" ]
https://github.com/jamesBaker361/tex_inv_plus
Textual Inversion remains a popular method for personalizing diffusion models in order to teach them new subjects and styles. We note that textual inversion has been underexplored with alternatives to the UNet, and we experiment with textual inversion using a vision transformer. We also seek to optimize textual inversion using a strategy that does not require explicit use of the UNet and its idiosyncratic layers, so we add bonus tokens and enforce orthogonality. We find that the use of the bonus token improves adherence to the source images and the use of the vision transformer improves adherence to the prompt. Code is available at https://github.com/jamesBaker361/tex_inv_plus.
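A rough sketch of the bonus-token orthogonality idea: alongside the learned token embedding, a bonus token is penalized for overlapping with it. The penalty form and weight are assumptions, not the paper's exact objective.

```python
# Bonus token with an orthogonality penalty (sketch).
import torch

dim = 768
main_token = torch.nn.Parameter(torch.randn(dim) * 0.02)   # standard textual-inversion token
bonus_token = torch.nn.Parameter(torch.randn(dim) * 0.02)  # additional "bonus" token

def orthogonality_penalty(a, b):
    cos = torch.nn.functional.cosine_similarity(a, b, dim=0)
    return cos.pow(2)                     # zero when the two tokens are orthogonal

diffusion_loss = torch.tensor(0.0)        # stand-in for the usual denoising loss
loss = diffusion_loss + 0.1 * orthogonality_penalty(main_token, bonus_token)
loss.backward()
print(main_token.grad.norm().item())      # both tokens receive gradients from the penalty
```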
2024-08-12T00:00:00
2408.05086
Generating novel experimental hypotheses from language models: A case study on cross-dative generalization
[ "Kanishka Misra", "Najoung Kim" ]
Neural network language models (LMs) have been shown to successfully capture complex linguistic knowledge. However, their utility for understanding language acquisition is still debated. We contribute to this debate by presenting a case study where we use LMs as simulated learners to derive novel experimental hypotheses to be tested with humans. We apply this paradigm to study cross-dative generalization (CDG): productive generalization of novel verbs across dative constructions (she pilked me the ball/she pilked the ball to me) -- acquisition of which is known to involve a large space of contextual features -- using LMs trained on child-directed speech. We specifically ask: "what properties of the training exposure facilitate a novel verb's generalization to the (unmodeled) alternate construction?" To answer this, we systematically vary the exposure context in which a novel dative verb occurs in terms of the properties of the theme and recipient, and then analyze the LMs' usage of the novel verb in the unmodeled dative construction. We find LMs to replicate known patterns of children's CDG, as a precondition to exploring novel hypotheses. Subsequent simulations reveal a nuanced role of the features of the novel verbs' exposure context on the LMs' CDG. We find CDG to be facilitated when the first postverbal argument of the exposure context is pronominal, definite, short, and conforms to the prototypical animacy expectations of the exposure dative. These patterns are characteristic of harmonic alignment in datives, where the argument with features ranking higher on the discourse prominence scale tends to precede the other. This gives rise to a novel hypothesis that CDG is facilitated insofar as the features of the exposure context -- in particular, its first postverbal argument -- are harmonically aligned. We conclude by proposing future experiments that can test this hypothesis in children.
2024-08-12T00:00:00
2408.05101
MooER: LLM-based Speech Recognition and Translation Models from Moore Threads
[ "Junhao Xu", "Zhenlin Liang", "Yi Liu", "Yichao Hu", "Jian Li", "Yajun Zheng", "Meng Cai", "Hua Wang" ]
In this paper, we present MooER, an LLM-based large-scale automatic speech recognition (ASR) / automatic speech translation (AST) model from Moore Threads. A 5000-hour pseudo-labeled dataset containing open-source and self-collected speech data is used for training. We achieve performance comparable to other open-source models trained with up to hundreds of thousands of hours of labeled speech data. Meanwhile, experiments conducted on the CoVoST2 Zh2en test set suggest that our model outperforms other open-source Speech LLMs, obtaining a BLEU score of 25.2. The main contributions of this paper are summarized as follows. First, this paper presents a training strategy for encoders and LLMs on speech-related tasks (including ASR and AST) using a small amount of pseudo-labeled data without any extra manual annotation and selection. Second, we release our ASR and AST models and plan to open-source our training code and strategy in the near future. Moreover, a model trained on 8wh-scale training data is planned to be released later on.
2024-08-13T00:00:00
2408.06292
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
[ "Chris Lu", "Cong Lu", "Robert Tjarko Lange", "Jakob Foerster", "Jeff Clune", "David Ha" ]
https://github.com/SakanaAI/AI-Scientist
One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems. Our code is open-sourced at https://github.com/SakanaAI/AI-Scientist