date | arxiv_id | title | authors | github | abstract |
---|---|---|---|---|---|
2025-07-14T00:00:00 |
2507.01951
|
Test-Time Scaling with Reflective Generative Model
|
[
"Zixiao Wang",
"Yuxin Wang",
"Xiaorui Wang",
"Mengting Xing",
"Jie Gao",
"Jianjun Xu",
"Guangcan Liu",
"Chenhui Jin",
"Zhuo Wang",
"Shengzhuo Zhang",
"Hongtao Xie"
] |
https://github.com/MetaStone-AI/MetaStone-S1
|
We introduce our first reflective generative model, MetaStone-S1, which attains OpenAI o3-level performance via a self-supervised process reward model (SPRM). By sharing the backbone network and using task-specific heads for next-token prediction and process scoring, SPRM integrates the policy model and the process reward model (PRM) into a unified interface without extra process annotation, reducing PRM parameters by over 99% for efficient reasoning. Equipped with SPRM, MetaStone-S1 is naturally suited to test-time scaling (TTS), and we provide three reasoning-effort modes (low, medium, and high) based on controllable thinking length. Moreover, we empirically establish a scaling law that reveals the relationship between total thinking computation and TTS performance. Experiments demonstrate that MetaStone-S1 achieves performance comparable to the OpenAI o3-mini series with only 32B parameters. To support the research community, we have open-sourced MetaStone-S1 at https://github.com/MetaStone-AI/MetaStone-S1.
|
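A minimal sketch of the shared-backbone idea behind SPRM, assuming a toy trunk and illustrative names (SharedBackboneWithSPRM, best_of_n are hypothetical, not the released code): one backbone feeds a next-token head and a tiny process-scoring head, and the per-step scores can rank sampled reasoning traces for best-of-N test-time scaling.

```python
# A minimal sketch (not the authors' code) of a shared backbone with two
# task-specific heads: one for next-token prediction (policy) and one for
# per-step process scoring (SPRM-style). Names and sizes are illustrative.
import torch
import torch.nn as nn

class SharedBackboneWithSPRM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.trunk = nn.GRU(d_model, d_model, batch_first=True)  # stand-in for the LLM trunk
        self.lm_head = nn.Linear(d_model, vocab_size)             # next-token prediction head
        self.score_head = nn.Linear(d_model, 1)                   # process-scoring head (tiny vs. a separate PRM)

    def forward(self, token_ids):
        h, _ = self.trunk(self.embed(token_ids))                  # shared hidden states
        return self.lm_head(h), torch.sigmoid(self.score_head(h)).squeeze(-1)

def best_of_n(model, candidate_traces):
    """Pick the candidate reasoning trace with the highest mean step score."""
    scores = []
    with torch.no_grad():
        for trace in candidate_traces:
            _, step_scores = model(trace.unsqueeze(0))
            scores.append(step_scores.mean().item())
    return max(range(len(candidate_traces)), key=lambda i: scores[i])

if __name__ == "__main__":
    model = SharedBackboneWithSPRM()
    traces = [torch.randint(0, 1000, (24,)) for _ in range(4)]    # N sampled traces
    print("selected trace:", best_of_n(model, traces))
```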
2025-07-14T00:00:00 |
2507.08801
|
Lumos-1: On Autoregressive Video Generation from a Unified Model Perspective
|
[
"Hangjie Yuan",
"Weihua Chen",
"Jun Cen",
"Hu Yu",
"Jingyun Liang",
"Shuning Chang",
"Zhihui Lin",
"Tao Feng",
"Pengwei Liu",
"Jiazheng Xing",
"Hao Luo",
"Jiasheng Tang",
"Fan Wang",
"Yi Yang"
] |
https://github.com/alibaba-damo-academy/Lumos
|
Autoregressive large language models (LLMs) have unified a vast range of language tasks, inspiring preliminary efforts in autoregressive video generation. Existing autoregressive video generators either diverge from standard LLM architectures, depend on bulky external text encoders, or incur prohibitive latency due to next-token decoding. In this paper, we introduce Lumos-1, an autoregressive video generator that retains the LLM architecture with minimal architectural modifications. To inject spatiotemporal correlations into LLMs, we identify the efficacy of incorporating 3D RoPE and diagnose its imbalanced frequency spectrum ranges. We therefore propose MM-RoPE, a RoPE scheme that preserves the original textual RoPE while providing comprehensive frequency spectra and scaled 3D positions for modeling multimodal spatiotemporal data. Moreover, Lumos-1 adopts a token dependency strategy that obeys intra-frame bidirectionality and inter-frame temporal causality. Based on this dependency strategy, we identify the issue of frame-wise loss imbalance caused by spatial information redundancy and solve it by proposing Autoregressive Discrete Diffusion Forcing (AR-DF). AR-DF introduces temporal tube masking during training with a compatible inference-time masking policy to avoid quality degradation. Using memory-efficient training techniques, we pre-train Lumos-1 on only 48 GPUs, achieving performance comparable to EMU3 on GenEval, COSMOS-Video2World on VBench-I2V, and OpenSoraPlan on VBench-T2V. Code and models are available at https://github.com/alibaba-damo-academy/Lumos.
|
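The 3D RoPE discussion can be illustrated with a small sketch, under the assumption that the head dimension is simply split across temporal, height, and width axes; the actual MM-RoPE frequency allocation in Lumos-1 is more elaborate, so this shows only the general mechanism, not the paper's scheme.

```python
# A minimal sketch of 3D rotary position embeddings for video tokens: the head
# dimension is split into three groups, each rotated by the token's (t, h, w)
# coordinate. Illustrative only; the MM-RoPE frequency allocation differs.
import torch

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rope_1d(x, pos, base=10000.0):
    # x: (..., d) with even d; pos: (...,) integer positions along one axis
    d = x.shape[-1]
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    angles = pos[..., None].float() * inv_freq              # (..., d/2)
    angles = torch.cat((angles, angles), dim=-1)            # (..., d)
    return x * angles.cos() + rotate_half(x) * angles.sin()

def rope_3d(x, t, h, w):
    # x: (num_tokens, head_dim); head_dim split evenly across the three axes
    d = x.shape[-1] // 3
    return torch.cat(
        [rope_1d(x[..., 0:d], t), rope_1d(x[..., d:2*d], h), rope_1d(x[..., 2*d:3*d], w)],
        dim=-1,
    )

if __name__ == "__main__":
    q = torch.randn(8, 48)            # 8 video tokens, head_dim = 48
    t = torch.arange(8) // 4          # frame index
    h = (torch.arange(8) % 4) // 2    # row index within a frame
    w = torch.arange(8) % 2           # column index within a frame
    print(rope_3d(q, t, h, w).shape)
```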
2025-07-14T00:00:00 |
2507.08800
|
NeuralOS: Towards Simulating Operating Systems via Neural Generative Models
|
[
"Luke Rivard",
"Sun Sun",
"Hongyu Guo",
"Wenhu Chen",
"Yuntian Deng"
] |
We introduce NeuralOS, a neural framework that simulates graphical user interfaces (GUIs) of operating systems by directly predicting screen frames in response to user inputs such as mouse movements, clicks, and keyboard events. NeuralOS combines a recurrent neural network (RNN), which tracks computer state, with a diffusion-based neural renderer that generates screen images. The model is trained on a large-scale dataset of Ubuntu XFCE recordings, which include both randomly generated interactions and realistic interactions produced by AI agents. Experiments show that NeuralOS successfully renders realistic GUI sequences, accurately captures mouse interactions, and reliably predicts state transitions like application launches. Although modeling fine-grained keyboard interactions precisely remains challenging, NeuralOS offers a step toward creating fully adaptive, generative neural interfaces for future human-computer interaction systems.
|
|
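A toy sketch of the rollout described in the abstract, with placeholder modules (StateTracker, FrameRenderer) standing in for the RNN state tracker and the diffusion renderer; it only shows how user-input events update a hidden state that conditions each predicted frame.

```python
# Purely illustrative rollout loop (not the NeuralOS code): a recurrent state
# tracker consumes encoded user-input events, and its hidden state conditions a
# renderer that produces the next screen frame.
import torch
import torch.nn as nn

class StateTracker(nn.Module):
    def __init__(self, event_dim=8, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(event_dim, hidden_dim)

    def forward(self, event, hidden):
        return self.rnn(event, hidden)

class FrameRenderer(nn.Module):
    """Stand-in for the diffusion-based renderer: maps hidden state to an image."""
    def __init__(self, hidden_dim=64, h=32, w=32):
        super().__init__()
        self.h, self.w = h, w
        self.proj = nn.Linear(hidden_dim, 3 * h * w)

    def forward(self, hidden):
        return self.proj(hidden).view(-1, 3, self.h, self.w)

if __name__ == "__main__":
    tracker, renderer = StateTracker(), FrameRenderer()
    hidden = torch.zeros(1, 64)
    for step in range(5):                  # a short stream of mouse/keyboard events
        event = torch.randn(1, 8)          # encoded user input at this step
        hidden = tracker(event, hidden)    # update the tracked computer state
        frame = renderer(hidden)           # predict the next screen frame
    print(frame.shape)
```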
2025-07-14T00:00:00 |
2507.08794
|
One Token to Fool LLM-as-a-Judge
|
[
"Yulai Zhao",
"Haolin Liu",
"Dian Yu",
"S. Y. Kung",
"Haitao Mi",
"Dong Yu"
] |
Generative reward models (also known as LLMs-as-judges), which use large language models (LLMs) to evaluate answer quality, are increasingly adopted in reinforcement learning with verifiable rewards (RLVR). They are often preferred over rigid rule-based metrics, especially for complex reasoning tasks involving free-form outputs. In this paradigm, an LLM is typically prompted to compare a candidate answer against a ground-truth reference and assign a binary reward indicating correctness. Despite the seeming simplicity of this comparison task, we find that generative reward models exhibit surprising vulnerabilities to superficial manipulations: non-word symbols (e.g., ":" or ".") or reasoning openers like "Thought process:" and "Let's solve this problem step by step." can often lead to false positive rewards. We demonstrate that this weakness is widespread across LLMs, datasets, and prompt formats, posing a serious threat to core algorithmic paradigms that rely on generative reward models, such as rejection sampling, preference optimization, and RLVR. To mitigate this issue, we introduce a simple yet effective data augmentation strategy and train a new generative reward model with substantially improved robustness. Our findings highlight the urgent need for more reliable LLM-based evaluation methods. We release our robust, general-domain reward model and its synthetic training data at https://huggingface.co/sarosavo/Master-RM and https://huggingface.co/datasets/sarosavo/Master-RM.
|
|
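A small sketch of how the reported vulnerability could be probed, assuming a hypothetical judge callable rather than any specific model API: content-free candidates such as ":" or "Thought process:" are scored against real questions and the false-positive rate is tallied.

```python
# A minimal sketch of probing a generative reward model for false positives:
# superficial, content-free candidates are submitted as answers and the rate at
# which the judge rewards them is measured. The `judge` callable is a
# placeholder for whatever LLM-as-judge endpoint is being evaluated.
SUPERFICIAL_CANDIDATES = [
    ":",
    ".",
    "Thought process:",
    "Let's solve this problem step by step.",
]

def false_positive_rate(dataset, judge):
    """dataset: iterable of (question, reference_answer); judge(q, ref, cand) -> bool."""
    hits, total = 0, 0
    for question, reference in dataset:
        for candidate in SUPERFICIAL_CANDIDATES:
            total += 1
            if judge(question, reference, candidate):
                hits += 1  # the judge rewarded an answer with no actual content
    return hits / max(total, 1)

if __name__ == "__main__":
    # Dummy stand-in judge for demonstration only: it rewards any non-empty
    # string, so every superficial candidate passes and the rate is 1.0.
    dummy_judge = lambda q, ref, cand: len(cand.strip()) > 0
    print(false_positive_rate([("What is 2 + 2?", "4")], dummy_judge))
```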
2025-07-14T00:00:00 |
2507.07151
|
Robust Multimodal Large Language Models Against Modality Conflict
|
[
"Zongmeng Zhang",
"Wengang Zhou",
"Jie Zhao",
"Houqiang Li"
] |
Despite the impressive capabilities of multimodal large language models (MLLMs) in vision-language tasks, they are prone to hallucinations in real-world scenarios. This paper investigates the hallucination phenomenon in MLLMs from the perspective of modality conflict. Unlike existing works focusing on the conflicts between model responses and inputs, we study the inherent conflicts in inputs from different modalities that place MLLMs in a dilemma and directly lead to hallucinations. We formally define the modality conflict and construct a dataset named Multimodal Modality Conflict (MMMC) to simulate this phenomenon in vision-language tasks. Three methods based on prompt engineering, supervised fine-tuning, and reinforcement learning are proposed to alleviate the hallucination caused by modality conflict. Extensive experiments are conducted on the MMMC dataset to analyze the merits and demerits of these methods. Our results show that the reinforcement learning method achieves the best performance in mitigating the hallucination under modality conflict, while the supervised fine-tuning method shows promising and stable performance. Our work sheds light on the unnoticed modality conflict that leads to hallucinations and provides more insights into the robustness of MLLMs.
|
|
2025-07-14T00:00:00 |
2507.08771
|
BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity
|
[
"Chenyang Song",
"Weilin Zhao",
"Xu Han",
"Chaojun Xiao",
"Yingfa Chen",
"Yuxuan Li",
"Zhiyuan Liu",
"Maosong Sun"
] |
https://github.com/thunlp/BlockFFN
|
To alleviate the computational burden of large language models (LLMs), architectures with activation sparsity, represented by mixture-of-experts (MoE), have attracted increasing attention. However, the non-differentiable and inflexible routing of vanilla MoE hurts model performance. Moreover, while each token activates only a few parameters, these sparsely-activated architectures exhibit low chunk-level sparsity, indicating that the union of multiple consecutive tokens activates a large ratio of parameters. Such a sparsity pattern is unfriendly for acceleration under low-resource conditions (e.g., end-side devices) and incompatible with mainstream acceleration techniques (e.g., speculative decoding). To address these challenges, we introduce a novel MoE architecture, BlockFFN, as well as its efficient training and deployment techniques. Specifically, we use a router integrating ReLU activation and RMSNorm for differentiable and flexible routing. Next, to promote both token-level sparsity (TLS) and chunk-level sparsity (CLS), CLS-aware training objectives are designed, making BlockFFN more acceleration-friendly. Finally, we implement efficient acceleration kernels, combining activation sparsity and speculative decoding for the first time. The experimental results demonstrate the superior performance of BlockFFN over other MoE baselines, achieving over 80% TLS and 70% 8-token CLS. Our kernels achieve up to 3.67× speedup over dense models on real end-side devices. All code and checkpoints are available publicly (https://github.com/thunlp/BlockFFN).
|
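A minimal sketch of a ReLU-plus-RMSNorm router, as the abstract describes for BlockFFN; the expert shapes, sizes, and the RMSNorm placement here are illustrative assumptions, not the paper's configuration. Zero gate values from the ReLU leave experts unused, which is what yields token-level activation sparsity.

```python
# Illustrative router combining ReLU and RMSNorm: ReLU makes expert activations
# naturally sparse and differentiable, and RMSNorm rescales the surviving gate
# values. Expert definition and sizes are placeholders.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.weight

class ReluRmsNormRouterMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, num_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.norm = RMSNorm(num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        gates = self.norm(torch.relu(self.router(x)))   # zeros stay zero -> sparse routing
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = gates[..., e] > 0                    # only run experts with a nonzero gate
            if mask.any():
                out[mask] += gates[mask][..., e, None] * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = ReluRmsNormRouterMoE()
    tokens = torch.randn(4, 64)
    print(layer(tokens).shape, "active experts per token:",
          (torch.relu(layer.router(tokens)) > 0).sum(-1).tolist())
```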
2025-07-14T00:00:00 |
2507.06952
|
What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models
|
[
"Keyon Vafa",
"Peter G. Chang",
"Ashesh Rambachan",
"Sendhil Mullainathan"
] |
Foundation models are premised on the idea that sequence prediction can uncover deeper domain understanding, much like how Kepler's predictions of planetary motion later led to the discovery of Newtonian mechanics. However, evaluating whether these models truly capture deeper structure remains a challenge. We develop a technique for evaluating foundation models that examines how they adapt to synthetic datasets generated from some postulated world model. Our technique measures whether the foundation model's inductive bias aligns with the world model, and so we refer to it as an inductive bias probe. Across multiple domains, we find that foundation models can excel at their training tasks yet fail to develop inductive biases towards the underlying world model when adapted to new tasks. We particularly find that foundation models trained on orbital trajectories consistently fail to apply Newtonian mechanics when adapted to new physics tasks. Further analysis reveals that these models behave as if they develop task-specific heuristics that fail to generalize.
|
|
2025-07-14T00:00:00 |
2507.05397
|
Neural-Driven Image Editing
|
[
"Pengfei Zhou",
"Jie Xia",
"Xiaopeng Peng",
"Wangbo Zhao",
"Zilong Ye",
"Zekai Li",
"Suorong Yang",
"Jiadong Pan",
"Yuanxiang Chen",
"Ziqiao Wang",
"Kai Wang",
"Qian Zheng",
"Xiaojun Chang",
"Gang Pan",
"Shurong Dong",
"Kaipeng Zhang",
"Yang You"
] |
Traditional image editing typically relies on manual prompting, making it labor-intensive and inaccessible to individuals with limited motor control or language abilities. Leveraging recent advances in brain-computer interfaces (BCIs) and generative models, we propose LoongX, a hands-free image editing approach driven by multimodal neurophysiological signals. LoongX utilizes state-of-the-art diffusion models trained on a comprehensive dataset of 23,928 image editing pairs, each paired with synchronized electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), photoplethysmography (PPG), and head motion signals that capture user intent. To effectively address the heterogeneity of these signals, LoongX integrates two key modules. The cross-scale state space (CS3) module encodes informative modality-specific features. The dynamic gated fusion (DGF) module further aggregates these features into a unified latent space, which is then aligned with edit semantics via fine-tuning on a diffusion transformer (DiT). Additionally, we pre-train the encoders using contrastive learning to align cognitive states with semantic intentions from embedded natural language. Extensive experiments demonstrate that LoongX achieves performance comparable to text-driven methods (CLIP-I: 0.6605 vs. 0.6558; DINO: 0.4812 vs. 0.4636) and outperforms them when neural signals are combined with speech (CLIP-T: 0.2588 vs. 0.2549). These results highlight the promise of neural-driven generative models in enabling accessible, intuitive image editing and open new directions for cognitive-driven creative technologies. Datasets and code will be released to support future work and foster progress in this emerging area.
|
|
2025-07-14T00:00:00 |
2507.08799
|
KV Cache Steering for Inducing Reasoning in Small Language Models
|
[
"Max Belitsky",
"Dawid J. Kopiczko",
"Michael Dorkenwald",
"M. Jehanzeb Mirza",
"Cees G. M. Snoek",
"Yuki M. Asano"
] |
We propose cache steering, a lightweight method for implicit steering of language models via a one-shot intervention applied directly to the key-value cache. To validate its effectiveness, we apply cache steering to induce chain-of-thought reasoning in small language models. Our approach leverages GPT-4o-generated reasoning traces to construct steering vectors that shift model behavior toward more explicit, multi-step reasoning without fine-tuning or prompt modifications. Experimental evaluations on diverse reasoning benchmarks demonstrate that cache steering improves both the qualitative structure of model reasoning and quantitative task performance. Compared to prior activation steering techniques that require continuous interventions, our one-shot cache steering offers substantial advantages in terms of hyperparameter stability, inference-time efficiency, and ease of integration, making it a more robust and practical solution for controlled generation.
|
|
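A framework-agnostic sketch of the one-shot intervention, assuming a simple list-of-(key, value)-tensors cache layout and externally provided steering vectors (how those vectors are built from GPT-4o reasoning traces is not shown): the cache is edited once at a chosen position and generation then proceeds unchanged.

```python
# A minimal sketch of one-shot cache steering: a per-layer steering vector is
# added once to the cached keys/values at a chosen position, with no further
# intervention during decoding. The cache layout here is an assumption.
import torch

def apply_cache_steering(past_key_values, steering, position=-1, alpha=1.0):
    """past_key_values: list of (key, value) tensors, each (batch, heads, seq, head_dim).
    steering: list of (key_vec, value_vec) tensors, each (heads, head_dim), one per layer."""
    steered = []
    for (k, v), (k_vec, v_vec) in zip(past_key_values, steering):
        k, v = k.clone(), v.clone()
        k[:, :, position, :] += alpha * k_vec   # one-shot edit of the cached key
        v[:, :, position, :] += alpha * v_vec   # and the cached value at `position`
        steered.append((k, v))
    return steered

if __name__ == "__main__":
    layers, batch, heads, seq, hd = 2, 1, 4, 10, 16
    cache = [(torch.randn(batch, heads, seq, hd), torch.randn(batch, heads, seq, hd))
             for _ in range(layers)]
    steering = [(torch.randn(heads, hd), torch.randn(heads, hd)) for _ in range(layers)]
    new_cache = apply_cache_steering(cache, steering, position=-1, alpha=4.0)
    print(torch.allclose(cache[0][0], new_cache[0][0]))  # False: last position was steered
```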
2025-07-14T00:00:00 |
2507.08441
|
Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation
|
[
"Anlin Zheng",
"Xin Wen",
"Xuanyang Zhang",
"Chuofan Ma",
"Tiancai Wang",
"Gang Yu",
"Xiangyu Zhang",
"Xiaojuan Qi"
] |
Leveraging the powerful representations of pre-trained vision foundation models -- traditionally used for visual comprehension -- we explore a novel direction: building an image tokenizer directly atop such models, a largely underexplored area. Specifically, we employ a frozen vision foundation model as the encoder of our tokenizer. To enhance its effectiveness, we introduce two key components: (1) a region-adaptive quantization framework that reduces redundancy in the pre-trained features on regular 2D grids, and (2) a semantic reconstruction objective that aligns the tokenizer's outputs with the foundation model's representations to preserve semantic fidelity. Based on these designs, our proposed image tokenizer, VFMTok, achieves substantial improvements in image reconstruction and generation quality, while also enhancing token efficiency. It further boosts autoregressive (AR) generation -- achieving a gFID of 2.07 on ImageNet benchmarks, while accelerating model convergence by three times, and enabling high-fidelity class-conditional synthesis without the need for classifier-free guidance (CFG). The code will be released publicly to benefit the community.
|
|
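A toy sketch of tokenizing with a frozen encoder plus a learned codebook and a semantic reconstruction objective, roughly following the abstract's description; the encoder here is a random frozen convolution rather than a real vision foundation model, and region-adaptive quantization is omitted.

```python
# Illustrative tokenizer (not the VFMTok implementation): frozen features are
# snapped to their nearest codebook entry, and a semantic reconstruction loss
# pulls the decoded tokens back toward the frozen features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTokenizer(nn.Module):
    def __init__(self, feat_dim=32, codebook_size=256):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=8, stride=8)  # frozen "VFM" stand-in
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.codebook = nn.Embedding(codebook_size, feat_dim)
        self.decoder = nn.Linear(feat_dim, feat_dim)                    # semantic reconstruction head

    def forward(self, images):
        feats = self.encoder(images).flatten(2).transpose(1, 2)         # (B, N, D) frozen features
        cb = self.codebook.weight.unsqueeze(0).expand(feats.size(0), -1, -1)
        ids = torch.cdist(feats, cb).argmin(-1)                         # discrete token ids
        recon = self.decoder(self.codebook(ids))
        semantic_loss = F.mse_loss(recon, feats)                        # align with frozen features
        return ids, semantic_loss

if __name__ == "__main__":
    tok = ToyTokenizer()
    ids, loss = tok(torch.randn(2, 3, 64, 64))
    print(ids.shape, float(loss))
```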
2025-07-14T00:00:00 |
2507.05255
|
Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning
|
[
"Yana Wei",
"Liang Zhao",
"Jianjian Sun",
"Kangheng Lin",
"Jisheng Yin",
"Jingcheng Hu",
"Yinmin Zhang",
"En Yu",
"Haoran Lv",
"Zejia Weng",
"Jia Wang",
"Chunrui Han",
"Yuang Peng",
"Qi Han",
"Zheng Ge",
"Xiangyu Zhang",
"Daxin Jiang",
"Vishal M. Patel"
] |
The remarkable reasoning capability of large language models (LLMs) stems from cognitive behaviors that emerge through reinforcement with verifiable rewards. This work investigates how to transfer this principle to Multimodal LLMs (MLLMs) to unlock advanced visual reasoning. We introduce a two-stage paradigm built on Qwen2.5-VL-7B: a massive linguistic cold-start fine-tuning, followed by multimodal reinforcement learning (RL) spanning nearly 1,000 steps, surpassing all previous open-source efforts in scale. This pioneering work reveals three fundamental insights: 1) Behavior transfer emerges surprisingly early in cold start due to linguistic mental imagery. 2) Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up effective patterns. 3) Transfer strategically favors high-utility behaviors such as visual reflection. Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including 95.3% on MATH500, 51.8% on MathVision and 54.6% on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.
|
|
2025-07-14T00:00:00 |
2507.06261
|
Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities
|
[
"Gheorghe Comanici",
"Eric Bieber",
"Mike Schaekermann",
"Ice Pasupat",
"Noveen Sachdeva",
"Inderjit Dhillon",
"Marcel Blistein",
"Ori Ram",
"Dan Zhang",
"Evan Rosen",
"Luke Marris",
"Sam Petulla",
"Colin Gaffney",
"Asaf Aharoni",
"Nathan Lintz",
"Tiago Cardal Pais",
"Henrik Jacobsson",
"Idan Szpektor",
"Nan-Jiang Jiang",
"Krishna Haridasan",
"Ahmed Omran",
"Nikunj Saunshi",
"Dara Bahri",
"Gaurav Mishra",
"Eric Chu",
"Toby Boyd",
"Brad Hekman",
"Aaron Parisi",
"Chaoyi Zhang",
"Kornraphop Kawintiranon",
"Tania Bedrax-Weiss",
"Oliver Wang",
"Ya Xu",
"Ollie Purkiss",
"Uri Mendlovic",
"Ilaï Deutel",
"Nam Nguyen",
"Adam Langley",
"Flip Korn",
"Lucia Rossazza",
"Alexandre Ramé",
"Sagar Waghmare",
"Helen Miller",
"Vaishakh Keshava",
"Ying Jian",
"Xiaofan Zhang",
"Raluca Ada Popa",
"Kedar Dhamdhere",
"Blaž Bratanič",
"Kyuyeun Kim",
"Terry Koo",
"Ferran Alet",
"Yi-ting Chen",
"Arsha Nagrani",
"Hannah Muckenhirn",
"Zhiyuan Zhang",
"Corbin Quick",
"Filip Pavetić",
"Duc Dung Nguyen",
"Joao Carreira",
"Michael Elabd",
"Haroon Qureshi",
"Fabian Mentzer",
"Yao-Yuan Yang",
"Danielle Eisenbud",
"Anmol Gulati",
"Ellie Talius",
"Eric Ni",
"Sahra Ghalebikesabi",
"Edouard Yvinec",
"Alaa Saade",
"Thatcher Ulrich",
"Lorenzo Blanco",
"Dan A. Calian",
"Muhuan Huang",
"Aäron van den Oord",
"Naman Goyal",
"Terry Chen",
"Praynaa Rawlani",
"Christian Schallhart",
"Swachhand Lokhande",
"Xianghong Luo",
"Jyn Shan",
"Ceslee Montgomery",
"Victoria Krakovna",
"Federico Piccinini",
"Omer Barak",
"Jingyu Cui",
"Yiling Jia",
"Mikhail Dektiarev",
"Alexey Kolganov",
"Shiyu Huang",
"Zhe Chen",
"Xingyu Wang",
"Jessica Austin",
"Peter de Boursac",
"Evgeny Sluzhaev",
"Frank Ding",
"Huijian Li",
"Surya Bhupatiraju",
"Mohit Agarwal",
"Sławek Kwasiborski",
"Paramjit Sandhu",
"Patrick Siegler",
"Ahmet Iscen",
"Eyal Ben-David",
"Shiraz Butt",
"Miltos Allamanis",
"Seth Benjamin",
"Robert Busa-Fekete",
"Felix Hernandez-Campos",
"Sasha Goldshtein",
"Matt Dibb",
"Weiyang Zhang",
"Annie Marsden",
"Carey Radebaugh",
"Stephen Roller",
"Abhishek Nayyar",
"Jacob Austin",
"Tayfun Terzi",
"Bhargav Kanagal Shamanna",
"Pete Shaw",
"Aayush Singh",
"Florian Luisier",
"Artur Mendonça",
"Vaibhav Aggarwal",
"Larisa Markeeva",
"Claudio Fantacci",
"Sergey Brin",
"HyunJeong Choe",
"Guanyu Wang",
"Hartwig Adam",
"Avigail Dabush",
"Tatsuya Kiyono",
"Eyal Marcus",
"Jeremy Cole",
"Theophane Weber",
"Hongrae Lee",
"Ronny Huang",
"Alex Muzio",
"Leandro Kieliger",
"Maigo Le",
"Courtney Biles",
"Long Le",
"Archit Sharma",
"Chengrun Yang",
"Avery Lamp",
"Dave Dopson",
"Nate Hurley",
"Katrina",
"Xu",
"Zhihao Shan",
"Shuang Song",
"Jiewen Tan",
"Alexandre Senges",
"George Zhang",
"Chong You",
"Yennie Jun",
"David Raposo",
"Susanna Ricco",
"Xuan Yang",
"Weijie Chen",
"Prakhar Gupta",
"Arthur Szlam",
"Kevin Villela",
"Chun-Sung Ferng",
"Daniel Kasenberg",
"Chen Liang",
"Rui Zhu",
"Arunachalam Narayanaswamy",
"Florence Perot",
"Paul Pucciarelli",
"Anna Shekhawat",
"Alexey Stern",
"Rishikesh Ingale",
"Stefani Karp",
"Sanaz Bahargam",
"Adrian Goedeckemeyer",
"Jie Han",
"Sicheng Li",
"Andrea Tacchetti",
"Dian Yu",
"Abhishek Chakladar",
"Zhiying Zhang",
"Mona El Mahdy",
"Xu Gao",
"Dale Johnson",
"Samrat Phatale",
"AJ Piergiovanni",
"Hyeontaek Lim",
"Clement Farabet",
"Carl Lebsack",
"Theo Guidroz",
"John Blitzer",
"Nico Duduta",
"David Madras",
"Steve Li",
"Daniel von Dincklage",
"Xin Li",
"Mahdis Mahdieh",
"George Tucker",
"Ganesh Jawahar",
"Owen Xiao",
"Danny Tarlow",
"Robert Geirhos",
"Noam Velan",
"Daniel Vlasic",
"Kalesha Bullard",
"SK Park",
"Nishesh Gupta",
"Kellie Webster",
"Ayal Hitron",
"Jieming Mao",
"Julian Eisenschlos",
"Laurel Prince",
"Nina D'Souza",
"Kelvin Zheng",
"Sara Nasso",
"Gabriela Botea",
"Carl Doersch",
"Caglar Unlu",
"Chris Alberti",
"Alexey Svyatkovskiy",
"Ankita Goel",
"Krzysztof Choromanski",
"Pan-Pan Jiang",
"Richard Nguyen",
"Four Flynn",
"Daria Ćurko",
"Peter Chen",
"Nicholas Roth",
"Kieran Milan",
"Caleb Habtegebriel",
"Shashi Narayan",
"Michael Moffitt",
"Jake Marcus",
"Thomas Anthony",
"Brendan McMahan",
"Gowoon Cheon",
"Ruibo Liu",
"Megan Barnes",
"Lukasz Lew",
"Rebeca Santamaria-Fernandez",
"Mayank Upadhyay",
"Arjun Akula",
"Arnar Mar Hrafnkelsson",
"Alvaro Caceres",
"Andrew Bunner",
"Michal Sokolik",
"Subha Puttagunta",
"Lawrence Moore",
"Berivan Isik",
"Weilun Chen",
"Jay Hartford",
"Lawrence Chan",
"Pradeep Shenoy",
"Dan Holtmann-Rice",
"Jane Park",
"Fabio Viola",
"Alex Salcianu",
"Sujeevan Rajayogam",
"Ian Stewart-Binks",
"Zelin Wu",
"Richard Everett",
"Xi Xiong",
"Pierre-Antoine Manzagol",
"Gary Leung",
"Carl Saroufim",
"Bo Pang",
"Dawid Wegner",
"George Papamakarios",
"Jennimaria Palomaki",
"Helena Pankov",
"Guangda Lai",
"Guilherme Tubone",
"Shubin Zhao",
"Theofilos Strinopoulos",
"Seth Neel",
"Mingqiu Wang",
"Joe Kelley",
"Li Li",
"Pingmei Xu",
"Anitha Vijayakumar",
"Andrea D'olimpio",
"Omer Levy",
"Massimo Nicosia",
"Grigory Rozhdestvenskiy",
"Ni Lao",
"Sirui Xie",
"Yash Katariya",
"Jon Simon",
"Sanjiv Kumar",
"Florian Hartmann",
"Michael Kilgore",
"Jinhyuk Lee",
"Aroma Mahendru",
"Roman Ring",
"Tom Hennigan",
"Fiona Lang",
"Colin Cherry",
"David Steiner",
"Dawsen Hwang",
"Ray Smith",
"Pidong Wang",
"Jeremy Chen",
"Ming-Hsuan Yang",
"Sam Kwei",
"Philippe Schlattner",
"Donnie Kim",
"Ganesh Poomal Girirajan",
"Nikola Momchev",
"Ayushi Agarwal",
"Xingyi Zhou",
"Ilkin Safarli",
"Zachary Garrett",
"AJ Pierigiovanni",
"Sarthak Jauhari",
"Alif Raditya Rochman",
"Shikhar Vashishth",
"Quan Yuan",
"Christof Angermueller",
"Jon Blanton",
"Xinying Song",
"Nitesh Bharadwaj Gundavarapu",
"Thi Avrahami",
"Maxine Deines",
"Subhrajit Roy",
"Manish Gupta",
"Christopher Semturs",
"Shobha Vasudevan",
"Aditya Srikanth Veerubhotla",
"Shriya Sharma",
"Josh Jacob",
"Zhen Yang",
"Andreas Terzis",
"Dan Karliner",
"Auriel Wright",
"Tania Rojas-Esponda",
"Ashley Brown",
"Abhijit Guha Roy",
"Pawan Dogra",
"Andrei Kapishnikov",
"Peter Young",
"Wendy Kan",
"Vinodh Kumar Rajendran",
"Maria Ivanova",
"Salil Deshmukh",
"Chia-Hua Ho",
"Mike Kwong",
"Stav Ginzburg",
"Annie Louis",
"KP Sawhney",
"Slav Petrov",
"Jing Xie",
"Yunfei Bai",
"Georgi Stoyanov",
"Alex Fabrikant",
"Rajesh Jayaram",
"Yuqi Li",
"Joe Heyward",
"Justin Gilmer",
"Yaqing Wang",
"Radu Soricut",
"Luyang Liu",
"Qingnan Duan",
"Jamie Hayes",
"Maura O'Brien",
"Gaurav Singh Tomar",
"Sivan Eiger",
"Bahar Fatemi",
"Jeffrey Hui",
"Catarina Barros",
"Adaeze Chukwuka",
"Alena Butryna",
"Saksham Thakur",
"Austin Huang",
"Zhufeng Pan",
"Haotian Tang",
"Serkan Cabi",
"Tulsee Doshi",
"Michiel Bakker",
"Sumit Bagri",
"Ruy Ley-Wild",
"Adam Lelkes",
"Jennie Lees",
"Patrick Kane",
"David Greene",
"Shimu Wu",
"Jörg Bornschein",
"Gabriela Surita",
"Sarah Hodkinson",
"Fangtao Li",
"Chris Hidey",
"Sébastien Pereira",
"Sean Ammirati",
"Phillip Lippe",
"Adam Kraft",
"Pu Han",
"Sebastian Gerlach",
"Zifeng Wang",
"Liviu Panait",
"Feng Han",
"Brian Farris",
"Yingying Bi",
"Hannah DeBalsi",
"Miaosen Wang",
"Gladys Tyen",
"James Cohan",
"Susan Zhang",
"Jarred Barber",
"Da-Woon Chung",
"Jaeyoun Kim",
"Markus Kunesch",
"Steven Pecht",
"Nami Akazawa",
"Abe Friesen",
"James Lyon",
"Ali Eslami",
"Junru Wu",
"Jie Tan",
"Yue Song",
"Ravi Kumar",
"Chris Welty",
"Ilia Akolzin",
"Gena Gibson",
"Sean Augenstein",
"Arjun Pillai",
"Nancy Yuen",
"Du Phan",
"Xin Wang",
"Iain Barr",
"Heiga Zen",
"Nan Hua",
"Casper Liu",
"Jilei",
"Wang",
"Tanuj Bhatia",
"Hao Xu",
"Oded Elyada",
"Pushmeet Kohli",
"Mirek Olšák",
"Ke Chen",
"Azalia Mirhoseini",
"Noam Shazeer",
"Shoshana Jakobovits",
"Maggie Tran",
"Nolan Ramsden",
"Tarun Bharti",
"Fred Alcober",
"Yunjie Li",
"Shilpa Shetty",
"Jing Chen",
"Dmitry Kalashnikov",
"Megha Nawhal",
"Sercan Arik",
"Hanwen Chen",
"Michiel Blokzijl",
"Shubham Gupta",
"James Rubin",
"Rigel Swavely",
"Sophie Bridgers",
"Ian Gemp",
"Chen Su",
"Arun Suggala",
"Juliette Pluto",
"Mary Cassin",
"Alain Vaucher",
"Kaiyang Ji",
"Jiahao Cai",
"Andrew Audibert",
"Animesh Sinha",
"David Tian",
"Efrat Farkash",
"Amy Hua",
"Jilin Chen",
"Duc-Hieu Tran",
"Edward Loper",
"Nicole Brichtova",
"Lara McConnaughey",
"Ballie Sandhu",
"Robert Leland",
"Doug DeCarlo",
"Andrew Over",
"James Huang",
"Xing Wu",
"Connie Fan",
"Eric Li",
"Yun Lei",
"Deepak Sharma",
"Cosmin Paduraru",
"Luo Yu",
"Matko Bošnjak",
"Phuong Dao",
"Min Choi",
"Sneha Kudugunta",
"Jakub Adamek",
"Carlos Guía",
"Ali Khodaei",
"Jie Feng",
"Wenjun Zeng",
"David Welling",
"Sandeep Tata",
"Christina Butterfield",
"Andrey Vlasov",
"Seliem El-Sayed",
"Swaroop Mishra",
"Tara Sainath",
"Shentao Yang",
"RJ Skerry-Ryan",
"Jeremy Shar",
"Robert Berry",
"Arunkumar Rajendran",
"Arun Kandoor",
"Andrea Burns",
"Deepali Jain",
"Tom Stone",
"Wonpyo Park",
"Shibo Wang",
"Albin Cassirer",
"Guohui Wang",
"Hayato Kobayashi",
"Sergey Rogulenko",
"Vineetha Govindaraj",
"Mikołaj Rybiński",
"Nadav Olmert",
"Colin Evans",
"Po-Sen Huang",
"Kelvin Xu",
"Premal Shah",
"Terry Thurk",
"Caitlin Sikora",
"Mu Cai",
"Jin Xie",
"Elahe Dabir",
"Saloni Shah",
"Norbert Kalb",
"Carrie Zhang",
"Shruthi Prabhakara",
"Amit Sabne",
"Artiom Myaskovsky",
"Vikas Raunak",
"Blanca Huergo",
"Behnam Neyshabur",
"Jon Clark",
"Ye Zhang",
"Shankar Krishnan",
"Eden Cohen",
"Dinesh Tewari",
"James Lottes",
"Yumeya Yamamori",
"Hui",
"Li",
"Mohamed Elhawaty",
"Ada Maksutaj Oflazer",
"Adrià Recasens",
"Sheryl Luo",
"Duy Nguyen",
"Taylor Bos",
"Kalyan Andra",
"Ana Salazar",
"Ed Chi",
"Jeongwoo Ko",
"Matt Ginsberg",
"Anders Andreassen",
"Anian Ruoss",
"Todor Davchev",
"Elnaz Davoodi",
"Chenxi Liu",
"Min Kim",
"Santiago Ontanon",
"Chi Ming To",
"Dawei Jia",
"Rosemary Ke",
"Jing Wang",
"Anna Korsun",
"Moran Ambar",
"Ilya Kornakov",
"Irene Giannoumis",
"Toni Creswell",
"Denny Zhou",
"Yi Su",
"Ishaan Watts",
"Aleksandr Zaks",
"Evgenii Eltyshev",
"Ziqiang Feng",
"Sidharth Mudgal",
"Alex Kaskasoli",
"Juliette Love",
"Kingshuk Dasgupta",
"Sam Shleifer",
"Richard Green",
"Sungyong Seo",
"Chansoo Lee",
"Dale Webster",
"Prakash Shroff",
"Ganna Raboshchuk",
"Isabel Leal",
"James Manyika",
"Sofia Erell",
"Daniel Murphy",
"Zhisheng Xiao",
"Anton Bulyenov",
"Julian Walker",
"Mark Collier",
"Matej Kastelic",
"Nelson George",
"Sushant Prakash",
"Sailesh Sidhwani",
"Alexey Frolov",
"Steven Hansen",
"Petko Georgiev",
"Tiberiu Sosea",
"Chris Apps",
"Aishwarya Kamath",
"David Reid",
"Emma Cooney",
"Charlotte Magister",
"Oriana Riva",
"Alec Go",
"Pu-Chin Chen",
"Sebastian Krause",
"Nir Levine",
"Marco Fornoni",
"Ilya Figotin",
"Nick Roy",
"Parsa Mahmoudieh",
"Vladimir Magay",
"Mukundan Madhavan",
"Jin Miao",
"Jianmo Ni",
"Yasuhisa Fujii",
"Ian Chou",
"George Scrivener",
"Zak Tsai",
"Siobhan Mcloughlin",
"Jeremy Selier",
"Sandra Lefdal",
"Jeffrey Zhao",
"Abhijit Karmarkar",
"Kushal Chauhan",
"Shivanker Goel",
"Zhaoyi Zhang",
"Vihan Jain",
"Parisa Haghani",
"Mostafa Dehghani",
"Jacob Scott",
"Erin Farnese",
"Anastasija Ilić",
"Steven Baker",
"Julia Pawar",
"Li Zhong",
"Josh Camp",
"Yoel Zeldes",
"Shravya Shetty",
"Anand Iyer",
"Vít Listík",
"Jiaxian Guo",
"Luming Tang",
"Mark Geller",
"Simon Bucher",
"Yifan Ding",
"Hongzhi Shi",
"Carrie Muir",
"Dominik Grewe",
"Ramy Eskander",
"Octavio Ponce",
"Boqing Gong",
"Derek Gasaway",
"Samira Khan",
"Umang Gupta",
"Angelos Filos",
"Weicheng Kuo",
"Klemen Kloboves",
"Jennifer Beattie",
"Christian Wright",
"Leon Li",
"Alicia Jin",
"Sandeep Mariserla",
"Miteyan Patel",
"Jens Heitkaemper",
"Dilip Krishnan",
"Vivek Sharma",
"David Bieber",
"Christian Frank",
"John Lambert",
"Paul Caron",
"Martin Polacek",
"Mai Giménez",
"Himadri Choudhury",
"Xing Yu",
"Sasan Tavakkol",
"Arun Ahuja",
"Franz Och",
"Rodolphe Jenatton",
"Wojtek Skut",
"Bryan Richter",
"David Gaddy",
"Andy Ly",
"Misha Bilenko",
"Megh Umekar",
"Ethan Liang",
"Martin Sevenich",
"Mandar Joshi",
"Hassan Mansoor",
"Rebecca Lin",
"Sumit Sanghai",
"Abhimanyu Singh",
"Xiaowei Li",
"Sudheendra Vijayanarasimhan",
"Zaheer Abbas",
"Yonatan Bitton",
"Hansa Srinivasan",
"Manish Reddy Vuyyuru",
"Alexander Frömmgen",
"Yanhua Sun",
"Ralph Leith",
"Alfonso Castaño",
"DJ Strouse",
"Le Yan",
"Austin Kyker",
"Satish Kambala",
"Mary Jasarevic",
"Thibault Sellam",
"Chao Jia",
"Alexander Pritzel",
"Raghavender R",
"Huizhong Chen",
"Natalie Clay",
"Sudeep Gandhe",
"Sean Kirmani",
"Sayna Ebrahimi",
"Hannah Kirkwood",
"Jonathan Mallinson",
"Chao Wang",
"Adnan Ozturel",
"Kuo Lin",
"Shyam Upadhyay",
"Vincent Cohen-Addad",
"Sean Purser-haskell",
"Yichong Xu",
"Ebrahim Songhori",
"Babi Seal",
"Alberto Magni",
"Almog Gueta",
"Tingting Zou",
"Guru Guruganesh",
"Thais Kagohara",
"Hung Nguyen",
"Khalid Salama",
"Alejandro Cruzado Ruiz",
"Justin Frye",
"Zhenkai Zhu",
"Matthias Lochbrunner",
"Simon Osindero",
"Wentao Yuan",
"Lisa Lee",
"Aman Prasad",
"Lam Nguyen Thiet",
"Daniele Calandriello",
"Victor Stone",
"Qixuan Feng",
"Han Ke",
"Maria Voitovich",
"Geta Sampemane",
"Lewis Chiang",
"Ling Wu",
"Alexander Bykovsky",
"Matt Young",
"Luke Vilnis",
"Ishita Dasgupta",
"Aditya Chawla",
"Qin Cao",
"Bowen Liang",
"Daniel Toyama",
"Szabolcs Payrits",
"Anca Stefanoiu",
"Dimitrios Vytiniotis",
"Ankesh Anand",
"Tianxiao Shen",
"Blagoj Mitrevski",
"Michael Tschannen",
"Sreenivas Gollapudi",
"Aishwarya P S",
"José Leal",
"Zhe Shen",
"Han Fu",
"Wei Wang",
"Arvind Kannan",
"Doron Kukliansky",
"Sergey Yaroshenko",
"Svetlana Grant",
"Umesh Telang",
"David Wood",
"Alexandra Chronopoulou",
"Alexandru Ţifrea",
"Tao Zhou",
"Tony",
"Nguy\\~ên",
"Muge Ersoy",
"Anima Singh",
"Meiyan Xie",
"Emanuel Taropa",
"Woohyun Han",
"Eirikur Agustsson",
"Andrei Sozanschi",
"Hui Peng",
"Alex Chen",
"Yoel Drori",
"Efren Robles",
"Yang Gao",
"Xerxes Dotiwalla",
"Ying Chen",
"Anudhyan Boral",
"Alexei Bendebury",
"John Nham",
"Chris Tar",
"Luis Castro",
"Jiepu Jiang",
"Canoee Liu",
"Felix Halim",
"Jinoo Baek",
"Andy Wan",
"Jeremiah Liu",
"Yuan Cao",
"Shengyang Dai",
"Trilok Acharya",
"Ruoxi Sun",
"Fuzhao Xue",
"Saket Joshi",
"Morgane Lustman",
"Yongqin Xian",
"Rishabh Joshi",
"Deep Karkhanis",
"Nora Kassner",
"Jamie Hall",
"Xiangzhuo Ding",
"Gan Song",
"Gang Li",
"Chen Zhu",
"Yana Kulizhskaya",
"Bin Ni",
"Alexey Vlaskin",
"Solomon Demmessie",
"Lucio Dery",
"Salah Zaiem",
"Yanping Huang",
"Cindy Fan",
"Felix Gimeno",
"Ananth Balashankar",
"Koji Kojima",
"Hagai Taitelbaum",
"Maya Meng",
"Dero Gharibian",
"Sahil Singla",
"Wei Chen",
"Ambrose Slone",
"Guanjie Chen",
"Sujee Rajayogam",
"Max Schumacher",
"Suyog Kotecha",
"Rory Blevins",
"Qifei Wang",
"Mor Hazan Taege",
"Alex Morris",
"Xin Liu",
"Fayaz Jamil",
"Richard Zhang",
"Pratik Joshi",
"Ben Ingram",
"Tyler Liechty",
"Ahmed Eleryan",
"Scott Baird",
"Alex Grills",
"Gagan Bansal",
"Shan Han",
"Kiran Yalasangi",
"Shawn Xu",
"Majd Al Merey",
"Isabel Gao",
"Felix Weissenberger",
"Igor Karpov",
"Robert Riachi",
"Ankit Anand",
"Gautam Prasad",
"Kay Lamerigts",
"Reid Hayes",
"Jamie Rogers",
"Mandy Guo",
"Ashish Shenoy",
"Qiong",
"Hu",
"Kyle He",
"Yuchen Liu",
"Polina Zablotskaia",
"Sagar Gubbi",
"Yifan Chang",
"Jay Pavagadhi",
"Kristian Kjems",
"Archita Vadali",
"Diego Machado",
"Yeqing Li",
"Renshen Wang",
"Dipankar Ghosh",
"Aahil Mehta",
"Dana Alon",
"George Polovets",
"Alessio Tonioni",
"Nate Kushman",
"Joel D'sa",
"Lin Zhuo",
"Allen Wu",
"Rohin Shah",
"John Youssef",
"Jiayu Ye",
"Justin Snyder",
"Karel Lenc",
"Senaka Buthpitiya",
"Matthew Tung",
"Jichuan Chang",
"Tao Chen",
"David Saxton",
"Jenny Lee",
"Lydia Lihui Zhang",
"James Qin",
"Prabakar Radhakrishnan",
"Maxwell Chen",
"Piotr Ambroszczyk",
"Metin Toksoz-Exley",
"Yan Zhong",
"Nitzan Katz",
"Brendan O'Donoghue",
"Tamara von Glehn",
"Adi Gerzi Rosenthal",
"Aga Świetlik",
"Xiaokai Zhao",
"Nick Fernando",
"Jinliang Wei",
"Jieru Mei",
"Sergei Vassilvitskii",
"Diego Cedillo",
"Pranjal Awasthi",
"Hui Zheng",
"Koray Kavukcuoglu",
"Itay Laish",
"Joseph Pagadora",
"Marc Brockschmidt",
"Christopher A. Choquette-Choo",
"Arunkumar Byravan",
"Yifeng Lu",
"Xu Chen",
"Mia Chen",
"Kenton Lee",
"Rama Pasumarthi",
"Sijal Bhatnagar",
"Aditya Shah",
"Qiyin Wu",
"Zhuoyuan Chen",
"Zack Nado",
"Bartek Perz",
"Zixuan Jiang",
"David Kao",
"Ganesh Mallya",
"Nino Vieillard",
"Lantao Mei",
"Sertan Girgin",
"Mandy Jordan",
"Yeongil Ko",
"Alekh Agarwal",
"Yaxin Liu",
"Yasemin Altun",
"Raoul de Liedekerke",
"Anastasios Kementsietsidis",
"Daiyi Peng",
"Dangyi Liu",
"Utku Evci",
"Peter Humphreys",
"Austin Tarango",
"Xiang Deng",
"Yoad Lewenberg",
"Kevin Aydin",
"Chengda Wu",
"Bhavishya Mittal",
"Tsendsuren Munkhdalai",
"Kleopatra Chatziprimou",
"Rodrigo Benenson",
"Uri First",
"Xiao Ma",
"Jinning Li",
"Armand Joulin",
"Hamish Tomlinson",
"Tingnan Zhang",
"Milad Nasr",
"Zhi Hong",
"Michaël Sander",
"Lisa Anne Hendricks",
"Anuj Sharma",
"Andrew Bolt",
"Eszter Vértes",
"Jiri Simsa",
"Tomer Levinboim",
"Olcan Sercinoglu",
"Divyansh Shukla",
"Austin Wu",
"Craig Swanson",
"Danny Vainstein",
"Fan Bu",
"Bo Wang",
"Ryan Julian",
"Charles Yoon",
"Sergei Lebedev",
"Antonious Girgis",
"Bernd Bandemer",
"David Du",
"Todd Wang",
"Xi Chen",
"Ying Xiao",
"Peggy Lu",
"Natalie Ha",
"Vlad Ionescu",
"Simon Rowe",
"Josip Matak",
"Federico Lebron",
"Andreas Steiner",
"Lalit Jain",
"Manaal Faruqui",
"Nicolas Lacasse",
"Georgie Evans",
"Neesha Subramaniam",
"Dean Reich",
"Giulia Vezzani",
"Aditya Pandey",
"Joe Stanton",
"Tianhao Zhou",
"Liam McCafferty",
"Henry Griffiths",
"Verena Rieser",
"Soheil Hassas Yeganeh",
"Eleftheria Briakou",
"Lu Huang",
"Zichuan Wei",
"Liangchen Luo",
"Erik Jue",
"Gabby Wang",
"Victor Cotruta",
"Myriam Khan",
"Jongbin Park",
"Qiuchen Guo",
"Peiran Li",
"Rong Rong",
"Diego Antognini",
"Anastasia Petrushkina",
"Chetan Tekur",
"Eli Collins",
"Parul Bhatia",
"Chester Kwak",
"Wenhu Chen",
"Arvind Neelakantan",
"Immanuel Odisho",
"Sheng Peng",
"Vincent Nallatamby",
"Vaibhav Tulsyan",
"Fabian Pedregosa",
"Peng Xu",
"Raymond Lin",
"Yulong Wang",
"Emma Wang",
"Sholto Douglas",
"Reut Tsarfaty",
"Elena Gribovskaya",
"Renga Aravamudhan",
"Manu Agarwal",
"Mara Finkelstein",
"Qiao Zhang",
"Elizabeth Cole",
"Phil Crone",
"Sarmishta Velury",
"Anil Das",
"Chris Sauer",
"Luyao Xu",
"Danfeng Qin",
"Chenjie Gu",
"Dror Marcus",
"CJ Zheng",
"Wouter Van Gansbeke",
"Sobhan Miryoosefi",
"Haitian Sun",
"YaGuang Li",
"Charlie Chen",
"Jae Yoo",
"Pavel Dubov",
"Alex Tomala",
"Adams Yu",
"Paweł Wesołowski",
"Alok Gunjan",
"Eddie Cao",
"Jiaming Luo",
"Nikhil Sethi",
"Arkadiusz Socala",
"Laura Graesser",
"Tomas Kocisky",
"Arturo BC",
"Minmin Chen",
"Edward Lee",
"Sophie Wang",
"Weize Kong",
"Qiantong Xu",
"Nilesh Tripuraneni",
"Yiming Li",
"Xinxin Yu",
"Allen Porter",
"Paul Voigtlaender",
"Biao Zhang",
"Arpi Vezer",
"Sarah York",
"Qing Wei",
"Geoffrey Cideron",
"Mark Kurzeja",
"Seungyeon Kim",
"Benny Li",
"Angéline Pouget",
"Hyo Lee",
"Kaspar Daugaard",
"Yang Li",
"Dave Uthus",
"Aditya Siddhant",
"Paul Cavallaro",
"Sriram Ganapathy",
"Maulik Shah",
"Rolf Jagerman",
"Jeff Stanway",
"Piermaria Mendolicchio",
"Li Xiao",
"Kayi Lee",
"Tara Thompson",
"Shubham Milind Phal",
"Jason Chase",
"Sun Jae Lee",
"Adrian N Reyes",
"Disha Shrivastava",
"Zhen Qin",
"Roykrong Sukkerd",
"Seth Odoom",
"Lior Madmoni",
"John Aslanides",
"Jonathan Herzig",
"Elena Pochernina",
"Sheng Zhang",
"Parker Barnes",
"Daisuke Ikeda",
"Qiujia Li",
"Shuo-yiin Chang",
"Shakir Mohamed",
"Jim Sproch",
"Richard Powell",
"Bidisha Samanta",
"Domagoj Ćevid",
"Anton Kovsharov",
"Shrestha Basu Mallick",
"Srinivas Tadepalli",
"Anne Zheng",
"Kareem Ayoub",
"Andreas Noever",
"Christian Reisswig",
"Zhuo Xu",
"Junhyuk Oh",
"Martin Matysiak",
"Tim Blyth",
"Shereen Ashraf",
"Julien Amelot",
"Boone Severson",
"Michele Bevilacqua",
"Motoki Sano",
"Ethan Dyer",
"Ofir Roval",
"Anu Sinha",
"Yin Zhong",
"Sagi Perel",
"Tea Sabolić",
"Johannes Mauerer",
"Willi Gierke",
"Mauro Verzetti",
"Rodrigo Cabrera",
"Alvin Abdagic",
"Steven Hemingray",
"Austin Stone",
"Jong Lee",
"Farooq Ahmad",
"Karthik Raman",
"Lior Shani",
"Jonathan Lai",
"Orhan Firat",
"Nathan Waters",
"Eric Ge",
"Mo Shomrat",
"Himanshu Gupta",
"Rajeev Aggarwal",
"Tom Hudson",
"Bill Jia",
"Simon Baumgartner",
"Palak Jain",
"Joe Kovac",
"Junehyuk Jung",
"Ante Žužul",
"Will Truong",
"Morteza Zadimoghaddam",
"Songyou Peng",
"Marco Liang",
"Rachel Sterneck",
"Balaji Lakshminarayanan",
"Machel Reid",
"Oliver Woodman",
"Tong Zhou",
"Jianling Wang",
"Vincent Coriou",
"Arjun Narayanan",
"Jay Hoover",
"Yenai Ma",
"Apoorv Jindal",
"Clayton Sanford",
"Doug Reid",
"Swaroop Ramaswamy",
"Alex Kurakin",
"Roland Zimmermann",
"Yana Lunts",
"Dragos Dena",
"Zalán Borsos",
"Vered Cohen",
"Shujian Zhang",
"Will Grathwohl",
"Robert Dadashi",
"Morgan Redshaw",
"Joshua Kessinger",
"Julian Odell",
"Silvano Bonacina",
"Zihang Dai",
"Grace Chen",
"Ayush Dubey",
"Pablo Sprechmann",
"Mantas Pajarskas",
"Wenxuan Zhou",
"Niharika Ahuja",
"Tara Thomas",
"Martin Nikoltchev",
"Matija Kecman",
"Bharath Mankalale",
"Andrey Ryabtsev",
"Jennifer She",
"Christian Walder",
"Jiaming Shen",
"Lu Li",
"Carolina Parada",
"Sheena Panthaplackel",
"Okwan Kwon",
"Matt Lawlor",
"Utsav Prabhu",
"Yannick Schroecker",
"Marc'aurelio Ranzato",
"Pete Blois",
"Iurii Kemaev",
"Ting Yu",
"Dmitry",
"Lepikhin",
"Hao Xiong",
"Sahand Sharifzadeh",
"Oleaser Johnson",
"Jeremiah Willcock",
"Rui Yao",
"Greg Farquhar",
"Sujoy Basu",
"Hidetoshi Shimokawa",
"Nina Anderson",
"Haiguang Li",
"Khiem Pham",
"Yizhong Liang",
"Sebastian Borgeaud",
"Alexandre Moufarek",
"Hideto Kazawa",
"Blair Kutzman",
"Marcin Sieniek",
"Sara Smoot",
"Ruth Wang",
"Natalie Axelsson",
"Nova Fallen",
"Prasha Sundaram",
"Yuexiang Zhai",
"Varun Godbole",
"Petros Maniatis",
"Alek Wang",
"Ilia Shumailov",
"Santhosh Thangaraj",
"Remi Crocker",
"Nikita Gupta",
"Gang Wu",
"Phil Chen",
"Gellért Weisz",
"Celine Smith",
"Mojtaba Seyedhosseini",
"Boya Fang",
"Xiyang Luo",
"Roey Yogev",
"Zeynep Cankara",
"Andrew Hard",
"Helen Ran",
"Rahul Sukthankar",
"George Necula",
"Gaël Liu",
"Honglong Cai",
"Praseem Banzal",
"Daniel Keysers",
"Sanjay Ghemawat",
"Connie Tao",
"Emma Dunleavy",
"Aditi Chaudhary",
"Wei Li",
"Maciej Mikuła",
"Chen-Yu Lee",
"Tiziana Refice",
"Krishna Somandepalli",
"Alexandre Fréchette",
"Dan Bahir",
"John Karro",
"Keith Rush",
"Sarah Perrin",
"Bill Rosgen",
"Xiaomeng Yang",
"Clara Huiyi Hu",
"Mahmoud Alnahlawi",
"Justin Mao-Jones",
"Roopal Garg",
"Hoang Nguyen",
"Bat-Orgil Batsaikhan",
"Iñaki Iturrate",
"Anselm Levskaya",
"Avi Singh",
"Ashyana Kachra",
"Tony Lu",
"Denis Petek",
"Zheng Xu",
"Mark Graham",
"Lukas Zilka",
"Yael Karov",
"Marija Kostelac",
"Fangyu Liu",
"Yaohui Guo",
"Weiyue Wang",
"Bernd Bohnet",
"Emily Pitler",
"Tony Bruguier",
"Keisuke Kinoshita",
"Chrysovalantis Anastasiou",
"Nilpa Jha",
"Ting Liu",
"Jerome Connor",
"Phil Wallis",
"Philip Pham",
"Eric Bailey",
"Shixin Li",
"Heng-Tze Cheng",
"Sally Ma",
"Haiqiong Li",
"Akanksha Maurya",
"Kate Olszewska",
"Manfred Warmuth",
"Christy Koh",
"Dominik Paulus",
"Siddhartha Reddy Jonnalagadda",
"Enrique Piqueras",
"Ali Elqursh",
"Geoff Brown",
"Hadar Shemtov",
"Loren Maggiore",
"Fei Xia",
"Ryan Foley",
"Beka Westberg",
"George van den Driessche",
"Livio Baldini Soares",
"Arjun Kar",
"Michael Quinn",
"Siqi Zuo",
"Jialin Wu",
"Kyle Kastner",
"Anna Bortsova",
"Aijun Bai",
"Ales Mikhalap",
"Luowei Zhou",
"Jennifer Brennan",
"Vinay Ramasesh",
"Honglei Zhuang",
"John Maggs",
"Johan Schalkwyk",
"Yuntao Xu",
"Hui Huang",
"Andrew Howard",
"Sasha Brown",
"Linting Xue",
"Gloria Shen",
"Brian Albert",
"Neha Jha",
"Daniel Zheng",
"Varvara Krayvanova",
"Spurthi Amba Hombaiah",
"Olivier Lacombe",
"Gautam Vasudevan",
"Dan Graur",
"Tian Xie",
"Meet Gandhi",
"Bangju Wang",
"Dustin Zelle",
"Harman Singh",
"Dahun Kim",
"Sébastien Cevey",
"Victor Ungureanu",
"Natasha Noy",
"Fei Liu",
"Annie Xie",
"Fangxiaoyu Feng",
"Katerina Tsihlas",
"Daniel Formoso",
"Neera Vats",
"Quentin Wellens",
"Yinan Wang",
"Niket Kumar Bhumihar",
"Samrat Ghosh",
"Matt Hoffman",
"Tom Lieber",
"Oran Lang",
"Kush Bhatia",
"Tom Paine",
"Aroonalok Pyne",
"Ronny Votel",
"Madeleine Clare Elish",
"Benoit Schillings",
"Alex Panagopoulos",
"Haichuan Yang",
"Adam Raveret",
"Zohar Yahav",
"Shuang Liu",
"Warren Chen",
"Dalia El Badawy",
"Nishant Agrawal",
"Mohammed Badawi",
"Mahdi Mirzazadeh",
"Carla Bromberg",
"Fan Ye",
"Chang Liu",
"Tatiana Sholokhova",
"George-Cristian Muraru",
"Gargi Balasubramaniam",
"Jonathan Malmaud",
"Alen Carin",
"Danilo Martins",
"Irina Jurenka",
"Pankil Botadra",
"Dave Lacey",
"Richa Singh",
"Mariano Schain",
"Dan Zheng",
"Isabelle Guyon",
"Victor Lavrenko",
"Seungji Lee",
"Xiang Zhou",
"Demis Hassabis",
"Jeshwanth Challagundla",
"Derek Cheng",
"Nikhil Mehta",
"Matthew Mauger",
"Michela Paganini",
"Pushkar Mishra",
"Kate Lee",
"Zhang Li",
"Lexi Baugher",
"Ondrej Skopek",
"Max Chang",
"Amir Zait",
"Gaurav Menghani",
"Lizzetth Bellot",
"Guangxing Han",
"Jean-Michel Sarr",
"Sharat Chikkerur",
"Himanshu Sahni",
"Rohan Anil",
"Arun Narayanan",
"Chandu Thekkath",
"Daniele Pighin",
"Hana Strejček",
"Marko Velic",
"Fred Bertsch",
"Manuel Tragut",
"Keran Rong",
"Alicia Parrish",
"Kai Bailey",
"Jiho Park",
"Isabela Albuquerque",
"Abhishek Bapna",
"Rajesh Venkataraman",
"Alec Kosik",
"Johannes Griesser",
"Zhiwei Deng",
"Alek Andreev",
"Qingyun Dou",
"Kevin Hui",
"Fanny Wei",
"Xiaobin Yu",
"Lei Shu",
"Avia Aharon",
"David Barker",
"Badih Ghazi",
"Sebastian Flennerhag",
"Chris Breaux",
"Yuchuan Liu",
"Matthew Bilotti",
"Josh Woodward",
"Uri Alon",
"Stephanie Winkler",
"Tzu-Kuo Huang",
"Kostas Andriopoulos",
"João Gabriel Oliveira",
"Penporn Koanantakool",
"Berkin Akin",
"Michael Wunder",
"Cicero Nogueira dos Santos",
"Mohammad Hossein Bateni",
"Lin Yang",
"Dan Horgan",
"Beer Changpinyo",
"Keyvan Amiri",
"Min Ma",
"Dayeong Lee",
"Lihao Liang",
"Anirudh Baddepudi",
"Tejasi Latkar",
"Raia Hadsell",
"Jun Xu",
"Hairong Mu",
"Michael Han",
"Aedan Pope",
"Snchit Grover",
"Frank Kim",
"Ankit Bhagatwala",
"Guan Sun",
"Yamini Bansal",
"Amir Globerson",
"Alireza Nazari",
"Samira Daruki",
"Hagen Soltau",
"Jane Labanowski",
"Laurent El Shafey",
"Matt Harvey",
"Yanif Ahmad",
"Elan Rosenfeld",
"William Kong",
"Etienne Pot",
"Yi-Xuan Tan",
"Aurora Wei",
"Victoria Langston",
"Marcel Prasetya",
"Petar Veličković",
"Richard Killam",
"Robin Strudel",
"Darren Ni",
"Zhenhai Zhu",
"Aaron Archer",
"Kavya Kopparapu",
"Lynn Nguyen",
"Emilio Parisotto",
"Hussain Masoom",
"Sravanti Addepalli",
"Jordan Grimstad",
"Hexiang Hu",
"Joss Moore",
"Avinatan Hassidim",
"Le Hou",
"Mukund Raghavachari",
"Jared Lichtarge",
"Adam R. Brown",
"Hilal Dib",
"Natalia Ponomareva",
"Justin Fu",
"Yujing Zhang",
"Altaf Rahman",
"Joana Iljazi",
"Edouard Leurent",
"Gabriel Dulac-Arnold",
"Cosmo Du",
"Chulayuth Asawaroengchai",
"Larry Jin",
"Ela Gruzewska",
"Ziwei Ji",
"Benigno Uria",
"Daniel De Freitas",
"Paul Barham",
"Lauren Beltrone",
"Víctor Campos",
"Jun Yan",
"Neel Kovelamudi",
"Arthur Nguyen",
"Elinor Davies",
"Zhichun Wu",
"Zoltan Egyed",
"Kristina Toutanova",
"Nithya Attaluri",
"Hongliang Fei",
"Peter Stys",
"Siddhartha Brahma",
"Martin Izzard",
"Siva Velusamy",
"Scott Lundberg",
"Vincent Zhuang",
"Kevin Sequeira",
"Adam Santoro",
"Ehsan Amid",
"Ophir Aharoni",
"Shuai Ye",
"Mukund Sundararajan",
"Lijun Yu",
"Yu-Cheng Ling",
"Stephen Spencer",
"Hugo Song",
"Josip Djolonga",
"Christo Kirov",
"Sonal Gupta",
"Alessandro Bissacco",
"Clemens Meyer",
"Mukul Bhutani",
"Andrew Dai",
"Weiyi Wang",
"Siqi Liu",
"Ashwin Sreevatsa",
"Qijun Tan",
"Maria Wang",
"Lucy Kim",
"Yicheng Wang",
"Alex Irpan",
"Yang Xiao",
"Stanislav Fort",
"Yifan He",
"Alex Gurney",
"Bryan Gale",
"Yue Ma",
"Monica Roy",
"Viorica Patraucean",
"Taylan Bilal",
"Golnaz Ghiasi",
"Anahita Hosseini",
"Melvin Johnson",
"Zhuowan Li",
"Yi Tay",
"Benjamin Beyret",
"Katie Millican",
"Josef Broder",
"Mayank Lunayach",
"Danny Swisher",
"Eugen Vušak",
"David Parkinson",
"MH Tessler",
"Adi Mayrav Gilady",
"Richard Song",
"Allan Dafoe",
"Yves Raimond",
"Masa Yamaguchi",
"Itay Karo",
"Elizabeth Nielsen",
"Kevin Kilgour",
"Mike Dusenberry",
"Rajiv Mathews",
"Jiho Choi",
"Siyuan Qiao",
"Harsh Mehta",
"Sahitya Potluri",
"Chris Knutsen",
"Jialu Liu",
"Tat Tan",
"Kuntal Sengupta",
"Keerthana Gopalakrishnan",
"Abodunrinwa Toki",
"Mencher Chiang",
"Mike Burrows",
"Grace Vesom",
"Zafarali Ahmed",
"Ilia Labzovsky",
"Siddharth Vashishtha",
"Preeti Singh",
"Ankur Sharma",
"Ada Ma",
"Jinyu Xie",
"Pranav Talluri",
"Hannah Forbes-Pollard",
"Aarush Selvan",
"Joel Wee",
"Loic Matthey",
"Tom Funkhouser",
"Parthasarathy Gopavarapu",
"Lev Proleev",
"Cheng Li",
"Matt Thomas",
"Kashyap Kolipaka",
"Zhipeng Jia",
"Ashwin Kakarla",
"Srinivas Sunkara",
"Joan Puigcerver",
"Suraj Satishkumar Sheth",
"Emily Graves",
"Chen Wang",
"Sadh MNM Khan",
"Kai Kang",
"Shyamal Buch",
"Fred Zhang",
"Omkar Savant",
"David Soergel",
"Kevin Lee",
"Linda Friso",
"Xuanyi Dong",
"Rahul Arya",
"Shreyas Chandrakaladharan",
"Connor Schenck",
"Greg Billock",
"Tejas Iyer",
"Anton Bakalov",
"Leslie Baker",
"Alex Ruiz",
"Angad Chandorkar",
"Trieu Trinh",
"Matt Miecnikowski",
"Yanqi Zhou",
"Yangsibo Huang",
"Jiazhong Nie",
"Ali Shah",
"Ashish Thapliyal",
"Sam Haves",
"Lun Wang",
"Uri Shaham",
"Patrick Morris-Suzuki",
"Soroush Radpour",
"Leonard Berrada",
"Thomas Strohmann",
"Chaochao Yan",
"Jingwei Shen",
"Sonam Goenka",
"Tris Warkentin",
"Petar Dević",
"Dan Belov",
"Albert Webson",
"Madhavi Yenugula",
"Puranjay Datta",
"Jerry Chang",
"Nimesh Ghelani",
"Aviral Kumar",
"Vincent Perot",
"Jessica Lo",
"Yang Song",
"Herman Schmit",
"Jianmin Chen",
"Vasilisa Bashlovkina",
"Xiaoyue Pan",
"Diana Mincu",
"Paul Roit",
"Isabel Edkins",
"Andy Davis",
"Yujia Li",
"Ben Horn",
"Xinjian Li",
"Pradeep Kumar S",
"Eric Doi",
"Wanzheng Zhu",
"Sri Gayatri Sundara Padmanabhan",
"Siddharth Verma",
"Jasmine Liu",
"Heng Chen",
"Mihajlo Velimirović",
"Malcolm Reynolds",
"Priyanka Agrawal",
"Nick Sukhanov",
"Abhinit Modi",
"Siddharth Goyal",
"John Palowitch",
"Nima Khajehnouri",
"Wing Lowe",
"David Klinghoffer",
"Sharon Silver",
"Vinh Tran",
"Candice Schumann",
"Francesco Piccinno",
"Xi Liu",
"Mario Lučić",
"Xiaochen Yang",
"Sandeep Kumar",
"Ajay Kannan",
"Ragha Kotikalapudi",
"Mudit Bansal",
"Fabian Fuchs",
"Javad Hosseini",
"Abdelrahman Abdelhamed",
"Dawn Bloxwich",
"Tianhe Yu",
"Ruoxin Sang",
"Gregory Thornton",
"Karan Gill",
"Yuchi Liu",
"Virat Shejwalkar",
"Jason Lin",
"Zhipeng Yan",
"Kehang Han",
"Thomas Buschmann",
"Michael Pliskin",
"Zhi Xing",
"Susheel Tatineni",
"Junlin Zhang",
"Sissie Hsiao",
"Gavin Buttimore",
"Marcus Wu",
"Zefei Li",
"Geza Kovacs",
"Legg Yeung",
"Tao Huang",
"Aaron Cohen",
"Bethanie Brownfield",
"Averi Nowak",
"Mikel Rodriguez",
"Tianze Shi",
"Hado van Hasselt",
"Kevin Cen",
"Deepanway Ghoshal",
"Kushal Majmundar",
"Weiren Yu",
"Warren",
"Chen",
"Danila Sinopalnikov",
"Hao Zhang",
"Vlado Galić",
"Di Lu",
"Zeyu Zheng",
"Maggie Song",
"Gary Wang",
"Gui Citovsky",
"Swapnil Gawde",
"Isaac Galatzer-Levy",
"David Silver",
"Ivana Balazevic",
"Dipanjan Das",
"Kingshuk Majumder",
"Yale Cong",
"Praneet Dutta",
"Dustin Tran",
"Hui Wan",
"Junwei Yuan",
"Daniel Eppens",
"Alanna Walton",
"Been Kim",
"Harry Ragan",
"James Cobon-Kerr",
"Lu Liu",
"Weijun Wang",
"Bryce Petrini",
"Jack Rae",
"Rakesh Shivanna",
"Yan Xiong",
"Chace Lee",
"Pauline Coquinot",
"Yiming Gu",
"Lisa Patel",
"Blake Hechtman",
"Aviel Boag",
"Orion Jankowski",
"Alex Wertheim",
"Alex Lee",
"Paul Covington",
"Hila Noga",
"Sam Sobell",
"Shanthal Vasanth",
"William Bono",
"Chirag Nagpal",
"Wei Fan",
"Xavier Garcia",
"Kedar Soparkar",
"Aybuke Turker",
"Nathan Howard",
"Sachit Menon",
"Yuankai Chen",
"Vikas Verma",
"Vladimir Pchelin",
"Harish Rajamani",
"Valentin Dalibard",
"Ana Ramalho",
"Yang Guo",
"Kartikeya Badola",
"Seojin Bang",
"Nathalie Rauschmayr",
"Julia Proskurnia",
"Sudeep Dasari",
"Xinyun Chen",
"Mikhail Sushkov",
"Anja Hauth",
"Pauline Sho",
"Abhinav Singh",
"Bilva Chandra",
"Allie Culp",
"Max Dylla",
"Olivier Bachem",
"James Besley",
"Heri Zhao",
"Timothy Lillicrap",
"Wei Wei",
"Wael Al Jishi",
"Ning Niu",
"Alban Rrustemi",
"Raphaël Lopez Kaufman",
"Ryan Poplin",
"Jewel Zhao",
"Minh Truong",
"Shikhar Bharadwaj",
"Ester Hlavnova",
"Eli Stickgold",
"Cordelia Schmid",
"Georgi Stephanov",
"Zhaoqi Leng",
"Frederick Liu",
"Léonard Hussenot",
"Shenil Dodhia",
"Juliana Vicente Franco",
"Lesley Katzen",
"Abhanshu Sharma",
"Sarah Cogan",
"Zuguang Yang",
"Aniket Ray",
"Sergi Caelles",
"Shen Yan",
"Ravin Kumar",
"Daniel Gillick",
"Renee Wong",
"Joshua Ainslie",
"Jonathan Hoech",
"Séb Arnold",
"Dan Abolafia",
"Anca Dragan",
"Ben Hora",
"Grace Hu",
"Alexey Guseynov",
"Yang Lu",
"Chas Leichner",
"Jinmeng Rao",
"Abhimanyu Goyal",
"Nagabhushan Baddi",
"Daniel Hernandez Diaz",
"Tim McConnell",
"Max Bain",
"Jake Abernethy",
"Qiqi Yan",
"Rylan Schaeffer",
"Paul Vicol",
"Will Thompson",
"Montse Gonzalez Arenas",
"Mathias Bellaiche",
"Pablo Barrio",
"Stefan Zinke",
"Riccardo Patana",
"Pulkit Mehta",
"JK Kearns",
"Avraham Ruderman",
"Scott Pollom",
"David D'Ambrosio",
"Cath Hope",
"Yang Yu",
"Andrea Gesmundo",
"Kuang-Huei Lee",
"Aviv Rosenberg",
"Yiqian Zhou",
"Yaoyiran Li",
"Drew Garmon",
"Yonghui Wu",
"Safeen Huda",
"Gil Fidel",
"Martin Baeuml",
"Jian Li",
"Phoebe Kirk",
"Rhys May",
"Tao Tu",
"Sara Mc Carthy",
"Toshiyuki Fukuzawa",
"Miranda Aperghis",
"Chih-Kuan Yeh",
"Toshihiro Yoshino",
"Bo Li",
"Austin Myers",
"Kaisheng Yao",
"Ben Limonchik",
"Changwan Ryu",
"Rohun Saxena",
"Alex Goldin",
"Ruizhe Zhao",
"Rocky Rhodes",
"Tao Zhu",
"Divya Tyam",
"Heidi Howard",
"Nathan Byrd",
"Hongxu Ma",
"Yan Wu",
"Ryan Mullins",
"Qingze Wang",
"Aida Amini",
"Sebastien Baur",
"Yiran Mao",
"Subhashini Venugopalan",
"Will Song",
"Wen Ding",
"Paul Collins",
"Sashank Reddi",
"Megan Shum",
"Andrei Rusu",
"Luisa Zintgraf",
"Kelvin Chan",
"Sheela Goenka",
"Mathieu Blondel",
"Michael Collins",
"Renke Pan",
"Marissa Giustina",
"Nikolai Chinaev",
"Christian Schuler",
"Ce Zheng",
"Jonas Valfridsson",
"Alyssa Loo",
"Alex Yakubovich",
"Jamie Smith",
"Tao Jiang",
"Rich Munoz",
"Gabriel Barcik",
"Rishabh Bansal",
"Mingyao Yang",
"Yilun Du",
"Pablo Duque",
"Mary Phuong",
"Alexandra Belias",
"Kunal Lad",
"Zeyu Liu",
"Tal Schuster",
"Karthik Duddu",
"Jieru Hu",
"Paige Kunkle",
"Matthew Watson",
"Jackson Tolins",
"Josh Smith",
"Denis Teplyashin",
"Garrett Bingham",
"Marvin Ritter",
"Marco Andreetto",
"Divya Pitta",
"Mohak Patel",
"Shashank Viswanadha",
"Trevor Strohman",
"Catalin Ionescu",
"Jincheng Luo",
"Yogesh Kalley",
"Jeremy Wiesner",
"Dan Deutsch",
"Derek Lockhart",
"Peter Choy",
"Rumen Dangovski",
"Chawin Sitawarin",
"Cat Graves",
"Tanya Lando",
"Joost van Amersfoort",
"Ndidi Elue",
"Zhouyuan Huo",
"Pooya Moradi",
"Jean Tarbouriech",
"Henryk Michalewski",
"Wenting Ye",
"Eunyoung Kim",
"Alex Druinsky",
"Florent Altché",
"Xinyi Chen",
"Artur Dwornik",
"Da-Cheng Juan",
"Rivka Moroshko",
"Horia Toma",
"Jarrod Kahn",
"Hai Qian",
"Maximilian Sieb",
"Irene Cai",
"Roman Goldenberg",
"Praneeth Netrapalli",
"Sindhu Raghuram",
"Yuan Gong",
"Lijie Fan",
"Evan Palmer",
"Yossi Matias",
"Valentin Gabeur",
"Shreya Pathak",
"Tom Ouyang",
"Don Metzler",
"Geoff Bacon",
"Srinivasan Venkatachary",
"Sridhar Thiagarajan",
"Alex Cullum",
"Eran Ofek",
"Vytenis Sakenas",
"Mohamed Hammad",
"Cesar Magalhaes",
"Mayank Daswani",
"Oscar Chang",
"Ashok Popat",
"Ruichao Li",
"Komal Jalan",
"Yanhan Hou",
"Josh Lipschultz",
"Antoine He",
"Wenhao Jia",
"Pier Giuseppe Sessa",
"Prateek Kolhar",
"William Wong",
"Sumeet Singh",
"Lukas Haas",
"Jay Whang",
"Hanna Klimczak-Plucińska",
"Georges Rotival",
"Grace Chung",
"Yiqing Hua",
"Anfal Siddiqui",
"Nicolas Serrano",
"Dongkai Chen",
"Billy Porter",
"Libin Bai",
"Keshav Shivam",
"Sho Arora",
"Partha Talukdar",
"Tom Cobley",
"Sangnie Bhardwaj",
"Evgeny Gladchenko",
"Simon Green",
"Kelvin Guu",
"Felix Fischer",
"Xiao Wu",
"Eric Wang",
"Achintya Singhal",
"Tatiana Matejovicova",
"James Martens",
"Hongji Li",
"Roma Patel",
"Elizabeth Kemp",
"Jiaqi Pan",
"Lily Wang",
"Blake JianHang Chen",
"Jean-Baptiste Alayrac",
"Navneet Potti",
"Erika Gemzer",
"Eugene Ie",
"Kay McKinney",
"Takaaki Saeki",
"Edward Chou",
"Pascal Lamblin",
"SQ Mah",
"Zach Fisher",
"Martin Chadwick",
"Jon Stritar",
"Obaid Sarvana",
"Andrew Hogue",
"Artem Shtefan",
"Hadi Hashemi",
"Yang Xu",
"Jindong Gu",
"Sharad Vikram",
"Chung-Ching Chang",
"Sabela Ramos",
"Logan Kilpatrick",
"Weijuan Xi",
"Jenny Brennan",
"Yinghao Sun",
"Abhishek Jindal",
"Ionel Gog",
"Dawn Chen",
"Felix Wu",
"Jason Lee",
"Sudhindra Kopalle",
"Srinadh Bhojanapalli",
"Oriol Vinyals",
"Natan Potikha",
"Burcu Karagol Ayan",
"Yuan Yuan",
"Michael Riley",
"Piotr Stanczyk",
"Sergey Kishchenko",
"Bing Wang",
"Dan Garrette",
"Antoine Yang",
"Vlad Feinberg",
"CJ Carey",
"Javad Azizi",
"Viral Shah",
"Erica Moreira",
"Chongyang Shi",
"Josh Feldman",
"Elizabeth Salesky",
"Thomas Lampe",
"Aneesh Pappu",
"Duhyeon Kim",
"Jonas Adler",
"Avi Caciularu",
"Brian Walker",
"Yunhan Xu",
"Yochai Blau",
"Dylan Scandinaro",
"Terry Huang",
"Sam El-Husseini",
"Abhishek Sinha",
"Lijie Ren",
"Taylor Tobin",
"Patrik Sundberg",
"Tim Sohn",
"Vikas Yadav",
"Mimi Ly",
"Emily Xue",
"Jing Xiong",
"Afzal Shama Soudagar",
"Sneha Mondal",
"Nikhil Khadke",
"Qingchun Ren",
"Ben Vargas",
"Stan Bileschi",
"Sarah Chakera",
"Cindy Wang",
"Boyu Wang",
"Yoni Halpern",
"Joe Jiang",
"Vikas Sindhwani",
"Petre Petrov",
"Pranavaraj Ponnuramu",
"Sanket Vaibhav Mehta",
"Yu Watanabe",
"Betty Chan",
"Matheus Wisniewski",
"Trang Pham",
"Jingwei Zhang",
"Conglong Li",
"Dario de Cesare",
"Art Khurshudov",
"Alex Vasiloff",
"Melissa Tan",
"Zoe Ashwood",
"Bobak Shahriari",
"Maryam Majzoubi",
"Garrett Tanzer",
"Olga Kozlova",
"Robin Alazard",
"James Lee-Thorp",
"Nguyet Minh Phu",
"Isaac Tian",
"Junwhan Ahn",
"Andy Crawford",
"Lauren Lax",
"Yuan",
"Shangguan",
"Iftekhar Naim",
"David Ross",
"Oleksandr Ferludin",
"Tongfei Guo",
"Andrea Banino",
"Hubert Soyer",
"Xiaoen Ju",
"Dominika Rogozińska",
"Ishaan Malhi",
"Marcella Valentine",
"Daniel Balle",
"Apoorv Kulshreshtha",
"Maciej Kula",
"Yiwen Song",
"Sophia Austin",
"John Schultz",
"Roy Hirsch",
"Arthur Douillard",
"Apoorv Reddy",
"Michael Fink",
"Summer Yue",
"Khyatti Gupta",
"Adam Zhang",
"Norman Rink",
"Daniel McDuff",
"Lei Meng",
"András György",
"Yasaman Razeghi",
"Ricky Liang",
"Kazuki Osawa",
"Aviel Atias",
"Matan Eyal",
"Tyrone Hill",
"Nikolai Grigorev",
"Zhengdong Wang",
"Nitish Kulkarni",
"Rachel Soh",
"Ivan Lobov",
"Zachary Charles",
"Sid Lall",
"Kazuma Hashimoto",
"Ido Kessler",
"Victor Gomes",
"Zelda Mariet",
"Danny Driess",
"Alessandro Agostini",
"Canfer Akbulut",
"Jingcao Hu",
"Marissa Ikonomidis",
"Emily Caveness",
"Kartik Audhkhasi",
"Saurabh Agrawal",
"Ioana Bica",
"Evan Senter",
"Jayaram Mudigonda",
"Kelly Chen",
"Jingchen Ye",
"Xuanhui Wang",
"James Svensson",
"Philipp Fränken",
"Josh Newlan",
"Li Lao",
"Eva Schnider",
"Sami Alabed",
"Joseph Kready",
"Jesse Emond",
"Afief Halumi",
"Tim Zaman",
"Chengxi Ye",
"Naina Raisinghani",
"Vilobh Meshram",
"Bo Chang",
"Ankit Singh Rawat",
"Axel Stjerngren",
"Sergey Levi",
"Rui Wang",
"Xiangzhu Long",
"Mitchelle Rasquinha",
"Steven Hand",
"Aditi Mavalankar",
"Lauren Agubuzu",
"Sudeshna Roy",
"Junquan Chen",
"Jarek Wilkiewicz",
"Hao Zhou",
"Michal Jastrzebski",
"Qiong Hu",
"Agustin Dal Lago",
"Ramya Sree Boppana",
"Wei-Jen Ko",
"Jennifer Prendki",
"Yao Su",
"Zhi Li",
"Eliza Rutherford",
"Girish Ramchandra Rao",
"Ramona Comanescu",
"Adrià Puigdomènech",
"Qihang Chen",
"Dessie Petrova",
"Christine Chan",
"Vedrana Milutinovic",
"Felipe Tiengo Ferreira",
"Chin-Yi Cheng",
"Ming Zhang",
"Tapomay Dey",
"Sherry Yang",
"Ramesh Sampath",
"Quoc Le",
"Howard Zhou",
"Chu-Cheng Lin",
"Hoi Lam",
"Christine Kaeser-Chen",
"Kai Hui",
"Dean Hirsch",
"Tom Eccles",
"Basil Mustafa",
"Shruti Rijhwani",
"Morgane Rivière",
"Yuanzhong Xu",
"Junjie Wang",
"Xinyang Geng",
"Xiance Si",
"Arjun Khare",
"Cheolmin Kim",
"Vahab Mirrokni",
"Kamyu Lee",
"Khuslen Baatarsukh",
"Nathaniel Braun",
"Lisa Wang",
"Pallavi LV",
"Richard Tanburn",
"Yuvein",
"Zhu",
"Fangda Li",
"Setareh Ariafar",
"Dan Goldberg",
"Ken Burke",
"Daniil Mirylenka",
"Meiqi Guo",
"Olaf Ronneberger",
"Hadas Natalie Vogel",
"Liqun Cheng",
"Nishita Shetty",
"Johnson Jia",
"Thomas Jimma",
"Corey Fry",
"Ted Xiao",
"Martin Sundermeyer",
"Ryan Burnell",
"Yannis Assael",
"Mario Pinto",
"JD Chen",
"Rohit Sathyanarayana",
"Donghyun Cho",
"Jing Lu",
"Rishabh Agarwal",
"Sugato Basu",
"Lucas Gonzalez",
"Dhruv Shah",
"Meng Wei",
"Dre Mahaarachchi",
"Rohan Agrawal",
"Tero Rissa",
"Yani Donchev",
"Ramiro Leal-Cavazos",
"Adrian Hutter",
"Markus Mircea",
"Alon Jacovi",
"Faruk Ahmed",
"Jiageng Zhang",
"Shuguang Hu",
"Bo-Juen Chen",
"Jonni Kanerva",
"Guillaume Desjardins",
"Andrew Lee",
"Nikos Parotsidis",
"Asier Mujika",
"Tobias Weyand",
"Jasper Snoek",
"Jo Chick",
"Kai Chen",
"Paul Chang",
"Ethan Mahintorabi",
"Zi Wang",
"Tolly Powell",
"Orgad Keller",
"Abhirut Gupta",
"Claire Sha",
"Kanav Garg",
"Nicolas Heess",
"Ágoston Weisz",
"Cassidy Hardin",
"Bartek Wydrowski",
"Ben Coleman",
"Karina Zainullina",
"Pankaj Joshi",
"Alessandro Epasto",
"Terry Spitz",
"Binbin Xiong",
"Kai Zhao",
"Arseniy Klimovskiy",
"Ivy Zheng",
"Johan Ferret",
"Itay Yona",
"Waleed Khawaja",
"Jean-Baptiste Lespiau",
"Maxim Krikun",
"Siamak Shakeri",
"Timothee Cour",
"Bonnie Li",
"Igor Krivokon",
"Dan Suh",
"Alex Hofer",
"Jad Al Abdallah",
"Nikita Putikhin",
"Oscar Akerlund",
"Silvio Lattanzi",
"Anurag Kumar",
"Shane Settle",
"Himanshu Srivastava",
"Folawiyo Campbell-Ajala",
"Edouard Rosseel",
"Mihai Dorin Istin",
"Nishanth Dikkala",
"Anand Rao",
"Nick Young",
"Kate Lin",
"Dhruva Bhaswar",
"Yiming Wang",
"Jaume Sanchez Elias",
"Kritika Muralidharan",
"James Keeling",
"Dayou Du",
"Siddharth Gopal",
"Gregory Dibb",
"Charles Blundell",
"Manolis Delakis",
"Jacky Liang",
"Marco Tulio Ribeiro",
"Georgi Karadzhov",
"Guillermo Garrido",
"Ankur Bapna",
"Jiawei Cao",
"Adam Sadovsky",
"Pouya Tafti",
"Arthur Guez",
"Coline Devin",
"Yixian Di",
"Jinwei Xing",
"Chuqiao",
"Xu",
"Hanzhao Lin",
"Chun-Te Chu",
"Sameera Ponda",
"Wesley Helmholz",
"Fan Yang",
"Yue Gao",
"Sara Javanmardi",
"Wael Farhan",
"Alex Ramirez",
"Ricardo Figueira",
"Khe Chai Sim",
"Yuval Bahat",
"Ashwin Vaswani",
"Liangzhe Yuan",
"Gufeng Zhang",
"Leland Rechis",
"Hanjun Dai",
"Tayo Oguntebi",
"Alexandra Cordell",
"Eugénie Rives",
"Kaan Tekelioglu",
"Naveen Kumar",
"Bing Zhang",
"Aurick Zhou",
"Nikolay Savinov",
"Andrew Leach",
"Alex Tudor",
"Sanjay Ganapathy",
"Yanyan Zheng",
"Mirko Rossini",
"Vera Axelrod",
"Arnaud Autef",
"Yukun Zhu",
"Zheng Zheng",
"Mingda Zhang",
"Baochen Sun",
"Jie Ren",
"Nenad Tomasev",
"Nithish Kannan",
"Amer Sinha",
"Charles Chen",
"Louis O'Bryan",
"Alex Pak",
"Aditya Kusupati",
"Weel Yang",
"Deepak Ramachandran",
"Patrick Griffin",
"Seokhwan Kim",
"Philipp Neubeck",
"Craig Schiff",
"Tammo Spalink",
"Mingyang Ling",
"Arun Nair",
"Ga-Young Joung",
"Linda Deng",
"Avishkar Bhoopchand",
"Lora Aroyo",
"Tom Duerig",
"Jordan Griffith",
"Gabe Barth-Maron",
"Jake Ades",
"Alex Haig",
"Ankur Taly",
"Yunting Song",
"Paul Michel",
"Dave Orr",
"Dean Weesner",
"Corentin Tallec",
"Carrie Grimes Bostock",
"Paul Niemczyk",
"Andy Twigg",
"Mudit Verma",
"Rohith Vallu",
"Henry Wang",
"Marco Gelmi",
"Kiranbir Sodhia",
"Aleksandr Chuklin",
"Omer Goldman",
"Jasmine George",
"Liang Bai",
"Kelvin Zhang",
"Petar Sirkovic",
"Efrat Nehoran",
"Golan Pundak",
"Jiaqi Mu",
"Alice Chen",
"Alex Greve",
"Paulo Zacchello",
"David Amos",
"Heming Ge",
"Eric Noland",
"Colton Bishop",
"Jeffrey Dudek",
"Youhei Namiki",
"Elena Buchatskaya",
"Jing Li",
"Dorsa Sadigh",
"Masha Samsikova",
"Dan Malkin",
"Damien Vincent",
"Robert David",
"Rob Willoughby",
"Phoenix Meadowlark",
"Shawn Gao",
"Yan Li",
"Raj Apte",
"Amit Jhindal",
"Stein Xudong Lin",
"Alex Polozov",
"Zhicheng Wang",
"Tomas Mery",
"Anirudh GP",
"Varun Yerram",
"Sage Stevens",
"Tianqi Liu",
"Noah Fiedel",
"Charles Sutton",
"Matthew Johnson",
"Xiaodan Song",
"Kate Baumli",
"Nir Shabat",
"Muqthar Mohammad",
"Hao Liu",
"Marco Selvi",
"Yichao Zhou",
"Mehdi Hafezi Manshadi",
"Chu-ling Ko",
"Anthony Chen",
"Michael Bendersky",
"Jorge Gonzalez Mendez",
"Nisarg Kothari",
"Amir Zandieh",
"Yiling Huang",
"Daniel Andor",
"Ellie Pavlick",
"Idan Brusilovsky",
"Jitendra Harlalka",
"Sally Goldman",
"Andrew Lampinen",
"Guowang Li",
"Asahi Ushio",
"Somit Gupta",
"Lei Zhang",
"Chuyuan Kelly Fu",
"Madhavi Sewak",
"Timo Denk",
"Jed Borovik",
"Brendan Jou",
"Avital Zipori",
"Prateek Jain",
"Junwen Bai",
"Thang Luong",
"Jonathan Tompson",
"Alice Li",
"Li Liu",
"George Powell",
"Jiajun Shen",
"Alex Feng",
"Grishma Chole",
"Da Yu",
"Yinlam Chow",
"Tongxin Yin",
"Eric Malmi",
"Kefan Xiao",
"Yash Pande",
"Shachi Paul",
"Niccolò Dal Santo",
"Adil Dostmohamed",
"Sergio Guadarrama",
"Aaron Phillips",
"Thanumalayan Sankaranarayana Pillai",
"Gal Yona",
"Amin Ghafouri",
"Preethi Lahoti",
"Benjamin Lee",
"Dhruv Madeka",
"Eren Sezener",
"Simon Tokumine",
"Adrian Collister",
"Nicola De Cao",
"Richard Shin",
"Uday Kalra",
"Parker Beak",
"Emily Nottage",
"Ryo Nakashima",
"Ivan Jurin",
"Vikash Sehwag",
"Meenu Gaba",
"Junhao Zeng",
"Kevin R. McKee",
"Fernando Pereira",
"Tamar Yakar",
"Amayika Panda",
"Arka Dhar",
"Peilin Zhong",
"Daniel Sohn",
"Mark Brand",
"Lars Lowe Sjoesund",
"Viral Carpenter",
"Sharon Lin",
"Shantanu Thakoor",
"Marcus Wainwright",
"Ashwin Chaugule",
"Pranesh Srinivasan",
"Muye Zhu",
"Bernett Orlando",
"Jack Weber",
"Ayzaan Wahid",
"Gilles Baechler",
"Apurv Suman",
"Jovana Mitrović",
"Gabe Taubman",
"Honglin Yu",
"Helen King",
"Josh Dillon",
"Cathy Yip",
"Dhriti Varma",
"Tomas Izo",
"Levent Bolelli",
"Borja De Balle Pigem",
"Julia Di Trapani",
"Fotis Iliopoulos",
"Adam Paszke",
"Nishant Ranka",
"Joe Zou",
"Francesco Pongetti",
"Jed McGiffin",
"Alex Siegman",
"Rich Galt",
"Ross Hemsley",
"Goran Žužić",
"Victor Carbune",
"Tao Li",
"Myle Ott",
"Félix de Chaumont Quitry",
"David Vilar Torres",
"Yuri Chervonyi",
"Tomy Tsai",
"Prem Eruvbetine",
"Samuel Yang",
"Matthew Denton",
"Jake Walker",
"Slavica Andačić",
"Idan Heimlich Shtacher",
"Vittal Premachandran",
"Harshal Tushar Lehri",
"Cip Baetu",
"Damion Yates",
"Lampros Lamprou",
"Mariko Iinuma",
"Ioana Mihailescu",
"Ben Albrecht",
"Shachi Dave",
"Susie Sargsyan",
"Bryan Perozzi",
"Lucas Manning",
"Chiyuan Zhang",
"Denis Vnukov",
"Igor Mordatch",
"Raia Hadsell Wolfgang Macherey",
"Ryan Kappedal",
"Jim Stephan",
"Aditya Tripathi",
"Klaus Macherey",
"Jun Qian",
"Abhishek Bhowmick",
"Shekoofeh Azizi",
"Rémi Leblond",
"Shiva Mohan Reddy Garlapati",
"Timothy Knight",
"Matthew Wiethoff",
"Wei-Chih Hung",
"Anelia Angelova",
"Georgios Evangelopoulos",
"Pawel Janus",
"Dimitris Paparas",
"Matthew Rahtz",
"Ken Caluwaerts",
"Vivek Sampathkumar",
"Daniel Jarrett",
"Shadi Noghabi",
"Antoine Miech",
"Chak Yeung",
"Geoff Clark",
"Henry Prior",
"Fei Zheng",
"Jean Pouget-Abadie",
"Indro Bhattacharya",
"Kalpesh Krishna",
"Will Bishop",
"Zhe Yuan",
"Yunxiao Deng",
"Ashutosh Sathe",
"Kacper Krasowiak",
"Ciprian Chelba",
"Cho-Jui Hsieh",
"Kiran Vodrahalli",
"Buhuang Liu",
"Thomas Köppe",
"Amr Khalifa",
"Lubo Litchev",
"Pichi Charoenpanit",
"Reed Roberts",
"Sachin Yadav",
"Yasumasa Onoe",
"Desi Ivanov",
"Megha Mohabey",
"Vighnesh Birodkar",
"Nemanja Rakićević",
"Pierre Sermanet",
"Vaibhav Mehta",
"Krishan Subudhi",
"Travis Choma",
"Will Ng",
"Luheng He",
"Kathie Wang",
"Tasos Kementsietsidis",
"Shane Gu",
"Mansi Gupta",
"Andrew Nystrom",
"Mehran Kazemi",
"Timothy Chung",
"Nacho Cano",
"Nikhil Dhawan",
"Yufei Wang",
"Jiawei Xia",
"Trevor Yacovone",
"Eric Jia",
"Mingqing Chen",
"Simeon Ivanov",
"Ashrith Sheshan",
"Sid Dalmia",
"Paweł Stradomski",
"Pengcheng Yin",
"Salem Haykal",
"Congchao Wang",
"Dennis Duan",
"Neslihan Bulut",
"Greg Kochanski",
"Liam MacDermed",
"Namrata Godbole",
"Shitao Weng",
"Jingjing Chen",
"Rachana Fellinger",
"Ramin Mehran",
"Daniel Suo",
"Hisham Husain",
"Tong He",
"Kaushal Patel",
"Joshua Howland",
"Randall Parker",
"Kelvin Nguyen",
"Sharath Maddineni",
"Chris Rawles",
"Mina Khan",
"Shlomi Cohen-Ganor",
"Amol Mandhane",
"Xinyi Wu",
"Chenkai Kuang",
"Iulia Comşa",
"Ramya Ganeshan",
"Hanie Sedghi",
"Adam Bloniarz",
"Nuo Wang Pierse",
"Anton Briukhov",
"Petr Mitrichev",
"Anita Gergely",
"Serena Zhan",
"Allan Zhou",
"Nikita Saxena",
"Eva Lu",
"Josef Dean",
"Ashish Gupta",
"Nicolas Perez-Nieves",
"Renjie Wu",
"Cory McLean",
"Wei Liang",
"Disha Jindal",
"Anton Tsitsulin",
"Wenhao Yu",
"Kaiz Alarakyia",
"Tom Schaul",
"Piyush Patil",
"Peter Sung",
"Elijah Peake",
"Hongkun Yu",
"Feryal Behbahani",
"JD Co-Reyes",
"Alan Ansell",
"Sean Sun",
"Clara Barbu",
"Jonathan Lee",
"Seb Noury",
"James Allingham",
"Bilal Piot",
"Mohit Sharma",
"Christopher Yew",
"Ivan Korotkov",
"Bibo Xu",
"Demetra Brady",
"Goran Petrovic",
"Shibl Mourad",
"Claire Cui",
"Aditya Gupta",
"Parker Schuh",
"Saarthak Khanna",
"Anna Goldie",
"Abhinav Arora",
"Vadim Zubov",
"Amy Stuart",
"Mark Epstein",
"Yun Zhu",
"Jianqiao Liu",
"Yury Stuken",
"Ziyue Wang",
"Karolis Misiunas",
"Dee Guo",
"Ashleah Gill",
"Ale Hartman",
"Zaid Nabulsi",
"Aurko Roy",
"Aleksandra Faust",
"Jason Riesa",
"Ben Withbroe",
"Mengchao Wang",
"Marco Tagliasacchi",
"Andreea Marzoca",
"James Noraky",
"Serge Toropov",
"Malika Mehrotra",
"Bahram Raad",
"Sanja Deur",
"Steve Xu",
"Marianne Monteiro",
"Zhongru Wu",
"Yi Luan",
"Sam Ritter",
"Nick Li",
"Håvard Garnes",
"Yanzhang He",
"Martin Zlocha",
"Jifan Zhu",
"Matteo Hessel",
"Will Wu",
"Spandana Raj Babbula",
"Chizu Kawamoto",
"Yuanzhen Li",
"Mehadi Hassen",
"Yan Wang",
"Brian Wieder",
"James Freedman",
"Yin Zhang",
"Xinyi Bai",
"Tianli Yu",
"David Reitter",
"XiangHai Sheng",
"Mateo Wirth",
"Aditya Kini",
"Dima Damen",
"Mingcen Gao",
"Rachel Hornung",
"Michael Voznesensky",
"Brian Roark",
"Adhi Kuncoro",
"Yuxiang Zhou",
"Rushin Shah",
"Anthony Brohan",
"Kuangyuan Chen",
"James Wendt",
"David Rim",
"Paul Kishan Rubenstein",
"Jonathan Halcrow",
"Michelle Liu",
"Ty Geri",
"Yunhsuan Sung",
"Jane Shapiro",
"Shaan Bijwadia",
"Chris Duvarney",
"Christina Sorokin",
"Paul Natsev",
"Reeve Ingle",
"Pramod Gupta",
"Young Maeng",
"Ndaba Ndebele",
"Kexin Zhu",
"Valentin Anklin",
"Katherine Lee",
"Yuan Liu",
"Yaroslav Akulov",
"Shaleen Gupta",
"Guolong Su",
"Flavien Prost",
"Tianlin Liu",
"Vitaly Kovalev",
"Pol Moreno",
"Martin Scholz",
"Sam Redmond",
"Zongwei Zhou",
"Alex Castro-Ros",
"André Susano Pinto",
"Dia Kharrat",
"Michal Yarom",
"Rachel Saputro",
"Jannis Bulian",
"Ben Caine",
"Ji Liu",
"Abbas Abdolmaleki",
"Shariq Iqbal",
"Tautvydas Misiunas",
"Mikhail Sirotenko",
"Shefali Garg",
"Guy Bensky",
"Huan Gui",
"Xuezhi Wang",
"Raphael Koster",
"Mike Bernico",
"Da Huang",
"Romal Thoppilan",
"Trevor Cohn",
"Ben Golan",
"Wenlei Zhou",
"Andrew Rosenberg",
"Markus Freitag",
"Tynan Gangwani",
"Vincent Tsang",
"Anand Shukla",
"Xiaoqi Ren",
"Minh Giang",
"Chi Zou",
"Andre Elisseeff",
"Charline Le Lan",
"Dheeru Dua",
"Shuba Lall",
"Pranav Shyam",
"Frankie Garcia",
"Sarah Nguyen",
"Michael Guzman",
"AJ Maschinot",
"Marcello Maggioni",
"Ming-Wei Chang",
"Karol Gregor",
"Lotte Weerts",
"Kumaran Venkatesan",
"Bogdan Damoc",
"Leon Liu",
"Jan Wassenberg",
"Lewis Ho",
"Becca Roelofs",
"Majid Hadian",
"François-Xavier Aubet",
"Yu Liang",
"Sami Lachgar",
"Danny Karmon",
"Yong Cheng",
"Amelio Vázquez-Reina",
"Angie Chen",
"Zhuyun Dai",
"Andy Brock",
"Shubham Agrawal",
"Chenxi Pang",
"Peter Garst",
"Mariella Sanchez-Vargas",
"Ivor Rendulic",
"Aditya Ayyar",
"Andrija Ražnatović",
"Olivia Ma",
"Roopali Vij",
"Neha Sharma",
"Ashwin Balakrishna",
"Bingyuan Liu",
"Ian Mackinnon",
"Sorin Baltateanu",
"Petra Poklukar",
"Gabriel Ibagon",
"Colin Ji",
"Hongyang Jiao",
"Isaac Noble",
"Wojciech Stokowiec",
"Zhihao Li",
"Jeff Dean",
"David Lindner",
"Mark Omernick",
"Kristen Chiafullo",
"Mason Dimarco",
"Vitor Rodrigues",
"Vittorio Selo",
"Garrett Honke",
"Xintian",
"Wu",
"Wei He",
"Adam Hillier",
"Anhad Mohananey",
"Vihari Piratla",
"Chang Ye",
"Chase Malik",
"Sebastian Riedel",
"Samuel Albanie",
"Zi Yang",
"Kenny Vassigh",
"Maria Bauza",
"Sheng Li",
"Yiqing Tao",
"Nevan Wichers",
"Andrii Maksai",
"Abe Ittycheriah",
"Ross Mcilroy",
"Bryan Seybold",
"Noah Goodman",
"Romina Datta",
"Steven M. Hernandez",
"Tian Shi",
"Yony Kochinski",
"Anna Bulanova",
"Ken Franko",
"Mikita Sazanovich",
"Nicholas FitzGerald",
"Praneeth Kacham",
"Shubha Srinivas Raghvendra",
"Vincent Hellendoorn",
"Alexander Grushetsky",
"Julian Salazar",
"Angeliki Lazaridou",
"Jason Chang",
"Jan-Thorsten Peter",
"Sushant Kafle",
"Yann Dauphin",
"Abhishek Rao",
"Filippo Graziano",
"Izhak Shafran",
"Yuguo Liao",
"Tianli Ding",
"Geng Yan",
"Grace Chu",
"Zhao Fu",
"Vincent Roulet",
"Gabriel Rasskin",
"Duncan Williams",
"Shahar Drath",
"Alex Mossin",
"Raphael Hoffmann",
"Jordi Orbay",
"Francesco Bertolini",
"Hila Sheftel",
"Justin Chiu",
"Siyang Xue",
"Yuheng Kuang",
"Ferjad Naeem",
"Swaroop Nath",
"Nana Nti",
"Phil Culliton",
"Kashyap Krishnakumar",
"Michael Isard",
"Pei Sun",
"Ayan Chakrabarti",
"Nathan Clement",
"Regev Cohen",
"Arissa Wongpanich",
"GS Oh",
"Ashwin Murthy",
"Hao Zheng",
"Jessica Hamrick",
"Oskar Bunyan",
"Suhas Ganesh",
"Nitish Gupta",
"Roy Frostig",
"John Wieting",
"Yury Malkov",
"Pierre Marcenac",
"Zhixin",
"Lai",
"Xiaodan Tang",
"Mohammad Saleh",
"Fedir Zubach",
"Chinmay Kulkarni",
"Huanjie Zhou",
"Vicky Zayats",
"Nan Ding",
"Anshuman Tripathi",
"Arijit Pramanik",
"Patrik Zochbauer",
"Harish Ganapathy",
"Vedant Misra",
"Zach Behrman",
"Hugo Vallet",
"Mingyang Zhang",
"Mukund Sridhar",
"Ye Jin",
"Mohammad Babaeizadeh",
"Siim Põder",
"Megha Goel",
"Divya Jain",
"Tajwar Nasir",
"Shubham Mittal",
"Tim Dozat",
"Diego Ardila",
"Aliaksei Severyn",
"Fabio Pardo",
"Sammy Jerome",
"Siyang Qin",
"Louis Rouillard",
"Amir Yazdanbakhsh",
"Zizhao Zhang",
"Shivani Agrawal",
"Kaushik Shivakumar",
"Caden Lu",
"Praveen Kallakuri",
"Rachita Chhaparia",
"Kanishka Rao",
"Charles Kwong",
"Asya Fadeeva",
"Shitij Nigam",
"Yan Virin",
"Yuan Zhang",
"Balaji Venkatraman",
"Beliz Gunel",
"Marc Wilson",
"Huiyu Wang",
"Abhinav Gupta",
"Xiaowei Xu",
"Adrien Ali Taïga",
"Kareem Mohamed",
"Doug Fritz",
"Daniel Rodriguez",
"Zoubin Ghahramani",
"Harry Askham",
"Lior Belenki",
"James Zhao",
"Rahul Gupta",
"Krzysztof Jastrzębski",
"Takahiro Kosakai",
"Kaan Katircioglu",
"Jon Schneider",
"Rina Panigrahy",
"Konstantinos Bousmalis",
"Peter Grabowski",
"Prajit Ramachandran",
"Chaitra Hegde",
"Mihaela Rosca",
"Angelo Scorza Scarpati",
"Kyriakos Axiotis",
"Ying Xu",
"Zach Gleicher",
"Assaf Hurwitz Michaely",
"Mandar Sharma",
"Sanil Jain",
"Christoph Hirnschall",
"Tal Marian",
"Xuhui Jia",
"Kevin Mather",
"Kilol Gupta",
"Linhai Qiu",
"Nigamaa Nayakanti",
"Lucian Ionita",
"Steven Zheng",
"Lucia Loher",
"Kurt Shuster",
"Igor Petrovski",
"Roshan Sharma",
"Rahma Chaabouni",
"Angel Yeh",
"James An",
"Arushi Gupta",
"Steven Schwarcz",
"Seher Ellis",
"Sam Conway-Rahman",
"Javier Snaider",
"Alex Zhai",
"James Atwood",
"Daniel Golovin",
"Liqian Peng",
"Te I",
"Vivian Xia",
"Salvatore Scellato",
"Mahan Malihi",
"Arthur Bražinskas",
"Vlad-Doru Ion",
"Younghoon Jun",
"James Swirhun",
"Soroosh Mariooryad",
"Jiao Sun",
"Steve Chien",
"Rey Coaguila",
"Ariel Brand",
"Yi Gao",
"Tom Kwiatkowski",
"Roee Aharoni",
"Cheng-Chun Lee",
"Mislav Žanić",
"Yichi Zhang",
"Dan Ethier",
"Vitaly Nikolaev",
"Pranav Nair",
"Yoav Ben Shalom",
"Hen Fitoussi",
"Jai Gupta",
"Hongbin Liu",
"Dee Cattle",
"Tolga Bolukbasi",
"Ben Murdoch",
"Fantine Huot",
"Yin Li",
"Chris Hahn"
] |
In this report, we introduce the Gemini 2.X model family: Gemini 2.5 Pro and Gemini 2.5 Flash, as well as our earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is our most capable model yet, achieving SoTA performance on frontier coding and reasoning benchmarks. In addition to its incredible coding and reasoning skills, Gemini 2.5 Pro is a thinking model that excels at multimodal understanding, and it is now able to process up to 3 hours of video content. Its unique combination of long-context, multimodal, and reasoning capabilities can be leveraged to unlock new agentic workflows. Gemini 2.5 Flash provides excellent reasoning abilities at a fraction of the compute and latency requirements, and Gemini 2.0 Flash and Flash-Lite provide high performance at low latency and cost. Taken together, the Gemini 2.X model generation spans the full Pareto frontier of model capability vs. cost, allowing users to explore the boundaries of what is possible with complex agentic problem solving.
|
|
2025-07-14T00:00:00 |
2507.04517
|
DOTResize: Reducing LLM Width via Discrete Optimal Transport-based Neuron Merging
|
[
"Neha Verma",
"Kenton Murray",
"Kevin Duh"
] |
Model compression offers a promising path to reducing the cost and inaccessibility of large pre-trained models, without significantly compromising their impressive performance. Large Transformer models, including large language models (LLMs), often contain computational redundancy, which can serve as a target for new model compression methods. In this work, we specifically target neuron-level redundancies in model layers by combining groups of similar neurons into fewer neurons. We frame this width reduction as a Discrete Optimal Transport problem, and propose DOTResize, a novel Transformer compression method that uses optimal transport theory to transform and compress model weights. To ensure applicability within the Transformer architecture, we motivate and incorporate entropic regularization and matrix factorization into the transportation maps produced by our method. Unlike pruning-based approaches which discard neurons based on importance measures, DOTResize re-projects the entire neuron width, allowing the retention and redistribution of useful signal across the reduced layer. Empirical results show that compared to simple or state-of-the-art neuron width-pruning techniques, DOTResize can outperform these methods across multiple LLM families and sizes, while achieving measurable reductions in real-world computational cost.
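The following is a minimal, self-contained sketch of the width-reduction idea described above: entropic-regularized optimal transport (plain Sinkhorn in NumPy) maps a layer's n neurons onto m merged neurons, and both the layer's weights and the downstream reads are re-projected through the transport plan. The squared-Euclidean cost, uniform marginals, randomly chosen anchors, and regularization value are illustrative assumptions, not the paper's exact recipe (which also incorporates matrix factorization).

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.1, n_iters=200):
    """Entropic-regularized optimal transport between histograms a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]               # transport plan, shape (n, m)

def merge_neurons(W_in, W_out, m, seed=0):
    """Shrink a layer from n to m neurons.
    W_in : (n, d_in)   rows = input weights of the neurons being merged
    W_out: (d_out, n)  columns = how the next layer reads each neuron"""
    n = W_in.shape[0]
    rng = np.random.default_rng(seed)
    anchors = W_in[rng.choice(n, size=m, replace=False)]   # crude anchors; k-means would do better
    cost = ((W_in[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    T = sinkhorn(cost / cost.max(), np.full(n, 1.0 / n), np.full(m, 1.0 / m))
    M = T / T.sum(axis=0, keepdims=True)             # (n, m), columns sum to 1
    W_in_new = M.T @ W_in                            # merged neurons = barycenters of the old ones
    W_out_new = W_out @ (T * n)                      # each old neuron's read weight is redistributed
    return W_in_new, W_out_new

W_in_new, W_out_new = merge_neurons(np.random.randn(64, 32), np.random.randn(16, 64), m=48)
print(W_in_new.shape, W_out_new.shape)               # (48, 32) (16, 48)
```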
|
|
2025-07-14T00:00:00 |
2507.07994
|
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection
|
[
"Subhajit Maity",
"Ayan Kumar Bhunia",
"Subhadeep Koley",
"Pinaki Nath Chowdhury",
"Aneeshan Sain",
"Yi-Zhe Song"
] |
Keypoint detection, integral to modern machine perception, faces challenges in few-shot learning, particularly when source data from the same distribution as the query is unavailable. This gap is addressed by leveraging sketches, a popular form of human expression, providing a source-free alternative. However, challenges arise in mastering cross-modal embeddings and handling user-specific sketch styles. Our proposed framework overcomes these hurdles with a prototypical setup, combined with a grid-based locator and prototypical domain adaptation. We also demonstrate success in few-shot convergence across novel keypoints and classes through extensive experiments.
|
|
2025-07-14T00:00:00 |
2507.08128
|
Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio Language Models
|
[
"Arushi Goel",
"Sreyan Ghosh",
"Jaehyeon Kim",
"Sonal Kumar",
"Zhifeng Kong",
"Sang-gil Lee",
"Chao-Han Huck Yang",
"Ramani Duraiswami",
"Dinesh Manocha",
"Rafael Valle",
"Bryan Catanzaro"
] |
We present Audio Flamingo 3 (AF3), a fully open state-of-the-art (SOTA) large audio-language model that advances reasoning and understanding across speech, sound, and music. AF3 introduces: (i) AF-Whisper, a unified audio encoder trained using a novel strategy for joint representation learning across all three modalities of speech, sound, and music; (ii) flexible, on-demand thinking, allowing the model to do chain-of-thought-type reasoning before answering; (iii) multi-turn, multi-audio chat; (iv) long audio understanding and reasoning (including speech) up to 10 minutes; and (v) voice-to-voice interaction. To enable these capabilities, we propose several large-scale training datasets curated using novel strategies, including AudioSkills-XL, LongAudio-XL, AF-Think, and AF-Chat, and train AF3 with a novel five-stage curriculum-based training strategy. Trained on only open-source audio data, AF3 achieves new SOTA results on more than 20 (long) audio understanding and reasoning benchmarks, surpassing both open-weight and closed-source models trained on much larger datasets.
|
|
2025-07-15T00:00:00 |
2507.09862
|
SpeakerVid-5M: A Large-Scale High-Quality Dataset for Audio-Visual Dyadic Interactive Human Generation
|
[
"Youliang Zhang",
"Zhaoyang Li",
"Duomin Wang",
"Jiahe Zhang",
"Deyu Zhou",
"Zixin Yin",
"Xili Dai",
"Gang Yu",
"Xiu Li"
] |
The rapid development of large-scale models has catalyzed significant breakthroughs in the digital human domain. These advanced methodologies offer high-fidelity solutions for avatar driving and rendering, leading academia to focus on the next major challenge: the audio-visual dyadic interactive virtual human. To facilitate research in this emerging area, we present the SpeakerVid-5M dataset, the first large-scale, high-quality dataset designed for audio-visual dyadic interactive virtual human generation. Totaling over 8,743 hours, SpeakerVid-5M contains more than 5.2 million video clips of human portraits. It covers diverse scales and interaction types, including monadic talking, listening, and dyadic conversations. Crucially, the dataset is structured along two key dimensions: interaction type and data quality. First, it is categorized into four types (dialogue branch, single branch, listening branch, and multi-turn branch) based on the interaction scenario. Second, it is stratified into a large-scale pre-training subset and a curated, high-quality subset for Supervised Fine-Tuning (SFT). This dual structure accommodates a wide array of 2D virtual human tasks. In addition, we provide an autoregressive (AR)-based video chat baseline trained on this data, accompanied by a dedicated set of metrics and test data to serve as a benchmark, VidChatBench, for future work. Both the dataset and the corresponding data processing code will be publicly released. Project page: https://dorniwang.github.io/SpeakerVid-5M/
|
|
2025-07-15T00:00:00 |
2507.10548
|
EmbRACE-3K: Embodied Reasoning and Action in Complex Environments
|
[
"Mingxian Lin",
"Wei Huang",
"Yitang Li",
"Chengjie Jiang",
"Kui Wu",
"Fangwei Zhong",
"Shengju Qian",
"Xin Wang",
"Xiaojuan Qi"
] |
Recent advanced vision-language models (VLMs) have demonstrated strong performance on passive, offline image and video understanding tasks. However, their effectiveness in embodied settings, which require online interaction and active scene understanding, remains limited. In such scenarios, an agent perceives the environment from a first-person perspective, with each action dynamically shaping subsequent observations. Even state-of-the-art models such as GPT-4o, Claude 3.5 Sonnet, and Gemini 2.5 Pro struggle in open-environment interactions, exhibiting clear limitations in spatial reasoning and long-horizon planning. To address this gap, we introduce EmRACE-3K, a dataset of over 3,000 language-guided tasks situated in diverse, photorealistic environments constructed using Unreal Engine and the UnrealCV-Zoo framework. The tasks encompass a wide range of embodied challenges, including navigation, object manipulation, and multi-stage goal execution. Each task unfolds as a multi-step trajectory, pairing first-person visual observations with high-level instructions, grounded actions, and natural language rationales that express the agent's intent at every step. Using EmRACE-3K, we establish a benchmark to evaluate the embodied reasoning capabilities of VLMs across three key dimensions: Exploration, Dynamic Spatial-Semantic Reasoning, and Multi-stage Goal Execution. In zero-shot settings, all models achieve success rates below 20%, underscoring the challenge posed by our benchmark and the current limitations of VLMs in interactive environments. To demonstrate the utility of EmRACE-3K, we further fine-tune Qwen2.5-VL-7B using supervised learning followed by reinforcement learning. This approach yields substantial improvements across all three challenge categories, highlighting the dataset's effectiveness in enabling the development of embodied reasoning capabilities.
|
|
2025-07-15T00:00:00 |
2507.09104
|
CompassJudger-2: Towards Generalist Judge Model via Verifiable Rewards
|
[
"Taolin Zhang",
"Maosong Cao",
"Alexander Lam",
"Songyang Zhang",
"Kai Chen"
] |
Recently, the role of LLM-as-judge in evaluating large language models has gained prominence. However, current judge models suffer from narrow specialization and limited robustness, undermining their capacity for comprehensive evaluations. In this work, we present CompassJudger-2, a novel generalist judge model that overcomes these limitations via a task-driven, multi-domain data curation strategy. Central to our approach is supervising judgment tasks with verifiable rewards, guiding intrinsic critical reasoning through rejection sampling to foster robust, generalizable judgment capabilities. We introduce a refined learning objective with margin policy gradient loss to enhance performance. Empirically, CompassJudger-2 achieves superior results across multiple judge and reward benchmarks, and our 7B model demonstrates competitive judgment accuracy with significantly larger models like DeepSeek-V3 and Qwen3-235B-A22B. Additionally, we propose JudgerBenchV2, a comprehensive benchmark evaluating cross-domain judgment accuracy and rank consistency to standardize judge model evaluation. These contributions advance robust, scalable LLM judgment and establish new performance and evaluation standards.
|
|
2025-07-15T00:00:00 |
2507.04404
|
LayerCake: Token-Aware Contrastive Decoding within Large Language Model Layers
|
[
"Jingze Zhu",
"Yongliang Wu",
"Wenbo Zhu",
"Jiawang Cao",
"Yanqiang Zheng",
"Jiawei Chen",
"Xu Yang",
"Bernt Schiele",
"Jonas Fischer",
"Xinting Hu"
] |
Large language models (LLMs) excel at natural language understanding and generation but remain vulnerable to factual errors, limiting their reliability in knowledge-intensive tasks. While decoding-time strategies provide a promising efficient solution without training, existing methods typically treat token-level and layer-level signals in isolation, overlooking the joint dynamics between them. In this work, we introduce a token-aware, layer-localized contrastive decoding method that aligns specific token types with their most influential transformer layers to improve factual generation. Through empirical attention analysis, we identify two key patterns: punctuation tokens receive dominant attention in early layers, while conceptual tokens govern semantic reasoning in intermediate layers. By selectively suppressing attention to these token types at their respective depths, we induce controlled factual degradation and derive contrastive signals to guide the final factual decoding. Our method requires no additional training or model modification, and experiments demonstrate that it consistently improves factuality across multiple LLMs and various benchmarks.
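A minimal NumPy sketch of the decoding step described above: logits from a degraded forward pass (one whose attention to the targeted token types has been suppressed at the chosen depths) are contrasted against the normal logits, restricted to tokens the full model already finds plausible. The weighting alpha and the plausibility cutoff tau are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def contrastive_next_token(logits_full, logits_degraded, alpha=1.0, tau=0.1):
    """Choose the next token by contrasting normal logits against logits from a
    pass whose attention to selected token types was suppressed, restricted to
    tokens the full model already finds plausible."""
    p_full = np.exp(logits_full - logits_full.max()); p_full /= p_full.sum()
    p_deg = np.exp(logits_degraded - logits_degraded.max()); p_deg /= p_deg.sum()
    plausible = p_full >= tau * p_full.max()          # standard plausibility cutoff
    score = np.where(plausible, np.log(p_full) - alpha * np.log(p_deg), -np.inf)
    return int(score.argmax())

vocab = 10
print(contrastive_next_token(np.random.randn(vocab), np.random.randn(vocab)))
```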
|
|
2025-07-15T00:00:00 |
2507.10524
|
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation
|
[
"Sangmin Bae",
"Yujin Kim",
"Reza Bayat",
"Sungnyun Kim",
"Jiyoun Ha",
"Tal Schuster",
"Adam Fisch",
"Hrayr Harutyunyan",
"Ziwei Ji",
"Aaron Courville",
"Se-Young Yun"
] |
Scaling language models unlocks impressive capabilities, but the accompanying computational and memory demands make both training and deployment expensive. Existing efficiency efforts typically target either parameter sharing or adaptive computation, leaving open the question of how to attain both simultaneously. We introduce Mixture-of-Recursions (MoR), a unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens. This allows MoR to focus quadratic attention computation only among tokens still active at a given recursion depth, further improving memory access efficiency by selectively caching only their key-value pairs. Beyond these core mechanisms, we also propose a KV sharing variant that reuses KV pairs from the first recursion, specifically designed to decrease prefill latency and memory footprint. Across model scales ranging from 135M to 1.7B parameters, MoR forms a new Pareto frontier: at equal training FLOPs and smaller model sizes, it significantly lowers validation perplexity and improves few-shot accuracy, while delivering higher throughput compared with vanilla and existing recursive baselines. These gains demonstrate that MoR is an effective path towards large-model quality without incurring large-model cost.
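A toy PyTorch sketch of the recursion-with-routing idea, assuming a single shared encoder layer and a hard argmax router; the real model restricts attention and KV caching to the tokens that are still active and trains the router differently, so this only illustrates the control flow.

```python
import torch
import torch.nn as nn

class MixtureOfRecursions(nn.Module):
    """Toy Mixture-of-Recursions block: one shared layer applied up to
    max_depth times, with a per-token router choosing how many recursions
    each token receives."""
    def __init__(self, d_model=64, n_heads=4, max_depth=3):
        super().__init__()
        self.shared = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.router = nn.Linear(d_model, max_depth)     # per-token depth logits
        self.max_depth = max_depth

    def forward(self, x):                               # x: (batch, seq, d_model)
        depth = self.router(x).argmax(-1) + 1           # assigned depth in [1, max_depth]
        for step in range(1, self.max_depth + 1):
            active = (depth >= step).unsqueeze(-1).float()
            # for clarity the shared layer runs on the full sequence each step;
            # the paper attends and caches KV only among the active tokens
            x = active * self.shared(x) + (1 - active) * x
        return x, depth

block = MixtureOfRecursions()
out, depth = block(torch.randn(2, 8, 64))
print(out.shape, depth.shape)    # torch.Size([2, 8, 64]) torch.Size([2, 8])
```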
|
|
2025-07-15T00:00:00 |
2507.10541
|
REST: Stress Testing Large Reasoning Models by Asking Multiple Problems at Once
|
[
"Zhuoshi Pan",
"Qizhi Pei",
"Yu Li",
"Qiyao Sun",
"Zinan Tang",
"H. Vicky Zhao",
"Conghui He",
"Lijun Wu"
] |
Recent Large Reasoning Models (LRMs) have achieved remarkable progress on task-specific benchmarks, yet their evaluation methods remain constrained by isolated problem-solving paradigms. Existing benchmarks predominantly assess single-question reasoning through sequential testing, resulting in critical limitations: (1) vulnerability to data contamination and reduced difficulty (e.g., DeepSeek-R1 achieves 97.0% on MATH500), which forces the costly and perpetual creation of new questions with substantial human effort, and (2) failure to evaluate models under multi-context pressure, a key requirement for real-world deployment. To bridge this gap, we present REST (Reasoning Evaluation through Simultaneous Testing), a stress-testing framework that exposes LRMs to multiple problems simultaneously. Beyond basic reasoning, REST specifically evaluates several under-tested capabilities: contextual priority allocation, cross-problem interference resistance, and dynamic cognitive load management. Our evaluation reveals several striking findings: Even state-of-the-art (SOTA) models like DeepSeek-R1 exhibit substantial performance degradation under stress testing. Crucially, REST demonstrates stronger discriminative power than existing benchmarks, revealing pronounced performance differences among models that exhibit similar, near-ceiling performance under single-question evaluations. Some key mechanistic insights emerge from our analysis: (1) the "overthinking trap" is a critical factor contributing to the performance degradation; (2) models trained with the "long2short" technique preserve more accuracy of their single-problem performance under REST, outperforming standard-trained counterparts. These results establish REST as a cost-efficient, future-proof evaluation paradigm that better reflects real-world reasoning demands while reducing reliance on continuous human annotation.
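A small sketch of the simultaneous-testing setup: several independent problems are packed into one prompt and the response is graded per problem. The prompt template and the string-match grader are illustrative stand-ins for the paper's protocol.

```python
def build_rest_prompt(problems):
    """Concatenate several independent problems into one query, asking the
    model to answer all of them in order."""
    header = ("Solve all of the following problems. "
              "Give the final answers as 'Answer k: ...' for each problem k.\n\n")
    body = "\n\n".join(f"Problem {i+1}: {p}" for i, p in enumerate(problems))
    return header + body

def score_rest_response(response, answers):
    """Fraction of problems answered correctly; a crude string-match grader."""
    correct = 0
    for i, gold in enumerate(answers, start=1):
        for line in response.splitlines():
            if line.strip().lower().startswith(f"answer {i}:") and str(gold) in line:
                correct += 1
                break
    return correct / len(answers)

problems = ["What is 17 * 24?", "How many primes are below 20?"]
print(build_rest_prompt(problems))
print(score_rest_response("Answer 1: 408\nAnswer 2: 8", [408, 8]))   # 1.0
```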
|
|
2025-07-15T00:00:00 |
2507.10065
|
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second
|
[
"Chenguo Lin",
"Yuchen Lin",
"Panwang Pan",
"Yifan Yu",
"Honglei Yan",
"Katerina Fragkiadaki",
"Yadong Mu"
] |
We present MoVieS, a novel feed-forward model that synthesizes 4D dynamic novel views from monocular videos in one second. MoVieS represents dynamic 3D scenes using pixel-aligned grids of Gaussian primitives, explicitly supervising their time-varying motion. This allows, for the first time, the unified modeling of appearance, geometry and motion, and enables view synthesis, reconstruction and 3D point tracking within a single learning-based framework. By bridging novel view synthesis with dynamic geometry reconstruction, MoVieS enables large-scale training on diverse datasets with minimal dependence on task-specific supervision. As a result, it also naturally supports a wide range of zero-shot applications, such as scene flow estimation and moving object segmentation. Extensive experiments validate the effectiveness and efficiency of MoVieS across multiple tasks, achieving competitive performance while offering several orders of magnitude speedups.
|
|
2025-07-15T00:00:00 |
2507.10532
|
Reasoning or Memorization? Unreliable Results of Reinforcement Learning Due to Data Contamination
|
[
"Mingqi Wu",
"Zhihao Zhang",
"Qiaole Dong",
"Zhiheng Xi",
"Jun Zhao",
"Senjie Jin",
"Xiaoran Fan",
"Yuhao Zhou",
"Yanwei Fu",
"Qin Liu",
"Songyang Zhang",
"Qi Zhang"
] |
The reasoning capabilities of large language models (LLMs) have been a longstanding focus of research. Recent works have further enhanced these capabilities using reinforcement learning (RL), with many new methods claiming significant improvements with minimal or no external supervision. Surprisingly, some studies even suggest that random or incorrect reward signals can enhance reasoning performance. However, these breakthroughs are mostly reported on the Qwen2.5 model family and evaluated on well-known benchmarks such as MATH-500, AMC, and AIME, while failing to achieve similar gains on other models like Llama, which warrants further investigation. Our analysis shows that although Qwen2.5 achieves strong mathematical reasoning performance, its pretraining on large-scale web corpora makes it vulnerable to data contamination in popular benchmarks. Consequently, results derived from these benchmarks may be unreliable. To address this, we introduce a generator that produces fully synthetic arithmetic problems of arbitrary length and difficulty, yielding a clean dataset we call RandomCalculation. Using these leakage-free datasets, we show that only accurate reward signals consistently improve performance, while noisy or incorrect signals do not. We advocate for evaluating RL methods on uncontaminated benchmarks and across diverse model families to ensure trustworthy conclusions.
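A minimal generator in the spirit of the leakage-free arithmetic benchmark described above: it emits an expression together with its exact value, so contamination is impossible by construction. The operator set, operand range, and chain length used as difficulty knobs here are assumptions, not the paper's exact configuration.

```python
import operator
import random

OPS = [("+", operator.add), ("-", operator.sub), ("*", operator.mul)]

def random_calculation(n_steps, max_operand=100, seed=None):
    """Return one synthetic arithmetic question and its exact answer.
    The expression is built left-to-right with explicit parentheses, so the
    stated answer always matches Python's evaluation."""
    rng = random.Random(seed)
    value = rng.randint(1, max_operand)
    expr = str(value)
    for _ in range(n_steps):
        sym, fn = rng.choice(OPS)
        operand = rng.randint(1, max_operand)
        value = fn(value, operand)
        expr = f"({expr} {sym} {operand})"
    return f"Compute {expr}.", value

question, answer = random_calculation(n_steps=4, seed=0)
print(question, "->", answer)
```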
|
|
2025-07-15T00:00:00 |
2507.08924
|
From KMMLU-Redux to KMMLU-Pro: A Professional Korean Benchmark Suite for LLM Evaluation
|
[
"Seokhee Hong",
"Sunkyoung Kim",
"Guijin Son",
"Soyeon Kim",
"Yeonjung Hong",
"Jinsik Lee"
] |
The development of Large Language Models (LLMs) requires robust benchmarks that encompass not only academic domains but also industrial fields to effectively evaluate their applicability in real-world scenarios. In this paper, we introduce two Korean expert-level benchmarks. KMMLU-Redux, reconstructed from the existing KMMLU, consists of questions from the Korean National Technical Qualification exams, with critical errors removed to enhance reliability. KMMLU-Pro is based on Korean National Professional Licensure exams to reflect professional knowledge in Korea. Our experiments demonstrate that these benchmarks comprehensively represent industrial knowledge in Korea. We make our dataset publicly available.
|
|
2025-07-15T00:00:00 |
2507.08267
|
A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning
|
[
"Hiroshi Yoshihara",
"Taiki Yamaguchi",
"Yuichi Inoue"
] |
https://github.com/analokmaus/kaggle-aimo2-fast-math-r1
|
Enhancing the mathematical reasoning of Large Language Models (LLMs) is a pivotal challenge in advancing AI capabilities. While Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) are the dominant training paradigms, a systematic methodology for combining them to maximize both accuracy and efficiency remains largely unexplored. This paper introduces a practical and effective training recipe that strategically integrates extended SFT with RL from online inference (GRPO). We posit that these methods play complementary, not competing, roles: a prolonged SFT phase first pushes the model's accuracy to its limits, after which a GRPO phase dramatically improves token efficiency while preserving this peak performance. Our experiments reveal that extending SFT for as many as 10 epochs is crucial for performance breakthroughs, and that the primary role of GRPO in this framework is to optimize solution length. The efficacy of our recipe is rigorously validated through top-tier performance on challenging benchmarks, including a high rank among over 2,200 teams in the strictly leak-free AI Mathematical Olympiad (AIMO). This work provides the community with a battle-tested blueprint for developing state-of-the-art mathematical reasoners that are both exceptionally accurate and practically efficient. To ensure full reproducibility and empower future research, we will open-source our entire framework, including all code, model checkpoints, and training configurations at https://github.com/analokmaus/kaggle-aimo2-fast-math-r1.
|
2025-07-15T00:00:00 |
2507.09074
|
Favicon Trojans: Executable Steganography Via Ico Alpha Channel Exploitation
|
[
"David Noever",
"Forrest McKee"
] |
This paper presents a novel method of executable steganography using the alpha transparency layer of ICO image files to embed and deliver self-decompressing JavaScript payloads within web browsers. By targeting the least significant bit (LSB) of non-transparent alpha layer image values, the proposed method successfully conceals compressed JavaScript code inside a favicon image without affecting visual fidelity. Global web traffic loads 294 billion favicons daily and consumes 0.9 petabytes of network bandwidth. A proof-of-concept implementation demonstrates that a 64x64 ICO image can embed up to 512 bytes uncompressed, or 0.8 kilobytes when using lightweight two-fold compression. On page load, a browser fetches the favicon as part of standard behavior, allowing an embedded loader script to extract and execute the payload entirely in memory using native JavaScript APIs and canvas pixel access. This creates a two-stage covert channel requiring no additional network or user requests. Testing across multiple browsers in both desktop and mobile environments confirms successful and silent execution of the embedded script. We evaluate the threat model, relate it to polymorphic phishing attacks that evade favicon-based detection, and analyze evasion of content security policies and antivirus scanners. We map nine example MITRE ATT&CK Framework objectives to single-line JavaScript payloads that execute arbitrarily from ICO files. Existing steganalysis and sanitization defenses are discussed, highlighting limitations in detecting or neutralizing alpha-channel exploits. The results demonstrate a stealthy and reusable attack surface that blurs traditional boundaries between static images and executable content. Because modern browsers fail silently when ICO files do not load, this attack surface offers an interesting example of required web behaviors that in turn compromise security.
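On the defensive side discussed above, a minimal Pillow/NumPy check for the tell-tale signature of alpha-channel LSB payloads: the least-significant bits of non-transparent alpha values in a clean favicon are usually nearly constant, while embedded data pushes them toward a 50/50 mix. The 0.2 threshold is an arbitrary illustrative choice, not a validated detector.

```python
import numpy as np
from PIL import Image

def suspicious_alpha_lsb(img, threshold=0.2):
    """Flag possible LSB payloads in an icon's alpha channel: clean icons tend
    to have nearly constant LSBs on visible pixels, whereas embedded data
    pushes the ones-ratio toward 0.5."""
    alpha = np.array(img.convert("RGBA"))[..., 3].ravel()
    visible = alpha[alpha > 0]                       # ignore fully transparent pixels
    if visible.size == 0:
        return False
    ones_ratio = float((visible & 1).mean())
    return min(ones_ratio, 1.0 - ones_ratio) > threshold

# toy check: a flat-alpha icon vs. one whose alpha LSBs carry pseudo-random bits
rng = np.random.default_rng(0)
flat = np.zeros((64, 64, 4), np.uint8); flat[..., 3] = 255
noisy = flat.copy(); noisy[..., 3] = (254 + rng.integers(0, 2, (64, 64))).astype(np.uint8)
print(suspicious_alpha_lsb(Image.fromarray(flat)), suspicious_alpha_lsb(Image.fromarray(noisy)))   # False True
```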
|
|
2025-07-15T00:00:00 |
2507.04218
|
DreamPoster: A Unified Framework for Image-Conditioned Generative Poster Design
|
[
"Xiwei Hu",
"Haokun Chen",
"Zhongqi Qi",
"Hui Zhang",
"Dexiang Hong",
"Jie Shao",
"Xinglong Wu"
] |
We present DreamPoster, a Text-to-Image generation framework that intelligently synthesizes high-quality posters from user-provided images and text prompts while maintaining content fidelity and supporting flexible resolution and layout outputs. Specifically, DreamPoster is built upon our T2I model, Seedream3.0, to uniformly process different poster generation types. For dataset construction, we propose a systematic data annotation pipeline that precisely annotates textual content and typographic hierarchy information within poster images, while employing comprehensive methodologies to construct paired datasets comprising source materials (e.g., raw graphics/text) and their corresponding final poster outputs. Additionally, we implement a progressive training strategy that enables the model to hierarchically acquire multi-task generation capabilities while maintaining high-quality generation. Evaluations on our testing benchmarks demonstrate DreamPoster's superiority over existing methods, achieving a high usability rate of 88.55%, compared to GPT-4o (47.56%) and SeedEdit3.0 (25.96%). DreamPoster will be online in Jimeng and other Bytedance Apps.
|
|
2025-07-15T00:00:00 |
2507.09751
|
Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations
|
[
"Bradley P. Allen",
"Prateek Chhikara",
"Thomas Macaulay Ferguson",
"Filip Ilievski",
"Paul Groth"
] |
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs' broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neuro-symbolic reasoning that leverages an LLM's knowledge while preserving the underlying logic's soundness and completeness properties.
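A toy sketch of the LLM-grounded interpretation idea: the model is queried independently for support of a statement and of its negation, and the pair of verdicts is mapped to one of four truth values, which is what lets the surrounding paraconsistent logic tolerate inconsistent answers. The four-valued (Belnap-style) mapping and the llm_yes callable are assumptions for illustration, not the paper's exact construction or a real API.

```python
def llm_grounded_interpretation(statement, llm_yes):
    """Map independent LLM verdicts on a statement and on its negation to one
    of four truth values, so contradictory parametric knowledge yields 'Both'
    instead of breaking the logic. llm_yes(prompt) -> bool is hypothetical."""
    supports = llm_yes(f"Is the following statement true? {statement}")
    refutes = llm_yes(f"Is the following statement false? {statement}")
    return {(True, False): "True", (False, True): "False",
            (True, True): "Both", (False, False): "Neither"}[(supports, refutes)]

# toy stand-in for the LLM: it only recognizes one fact as true
toy_llm = lambda prompt: "Paris is the capital of France." in prompt and "true?" in prompt
print(llm_grounded_interpretation("Paris is the capital of France.", toy_llm))   # True
```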
|
|
2025-07-15T00:00:00 |
2507.08396
|
Subject-Consistent and Pose-Diverse Text-to-Image Generation
|
[
"Zhanxin Gao",
"Beier Zhu",
"Liang Yao",
"Jian Yang",
"Ying Tai"
] |
https://github.com/NJU-PCALab/CoDi
|
Subject-consistent generation (SCG), which aims to maintain a consistent subject identity across diverse scenes, remains a challenge for text-to-image (T2I) models. Existing training-free SCG methods often achieve consistency at the cost of layout and pose diversity, hindering expressive visual storytelling. To address this limitation, we propose a subject-Consistent and pose-Diverse T2I framework, dubbed CoDi, that enables consistent subject generation with diverse pose and layout. Motivated by the progressive nature of diffusion, where coarse structures emerge early and fine details are refined later, CoDi adopts a two-stage strategy: Identity Transport (IT) and Identity Refinement (IR). IT operates in the early denoising steps, using optimal transport to transfer identity features to each target image in a pose-aware manner. This promotes subject consistency while preserving pose diversity. IR is applied in the later denoising steps, selecting the most salient identity features to further refine subject details. Extensive qualitative and quantitative results on subject consistency, pose diversity, and prompt fidelity demonstrate that CoDi achieves both better visual perception and stronger performance across all metrics. The code is available at https://github.com/NJU-PCALab/CoDi.
|
2025-07-15T00:00:00 |
2507.11137
|
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking
|
[
"Yuan Yao",
"Jin Song",
"Jian Jin"
] |
https://github.com/AIResearch-Group/NeuralMark
|
As valuable digital assets, deep neural networks necessitate robust ownership protection, positioning neural network watermarking (NNW) as a promising solution. Among various NNW approaches, weight-based methods are favored for their simplicity and practicality; however, they remain vulnerable to forging and overwriting attacks. To address these challenges, we propose NeuralMark, a robust method built around a hashed watermark filter. Specifically, we utilize a hash function to generate an irreversible binary watermark from a secret key, which is then used as a filter to select the model parameters for embedding. This design cleverly intertwines the embedding parameters with the hashed watermark, providing a robust defense against both forging and overwriting attacks. An average pooling mechanism is also incorporated to resist fine-tuning and pruning attacks. Furthermore, it can be seamlessly integrated into various neural network architectures, ensuring broad applicability. Theoretically, we analyze its security boundary. Empirically, we verify its effectiveness and robustness across 13 distinct Convolutional and Transformer architectures, covering five image classification tasks and one text generation task. The source code is available at https://github.com/AIResearch-Group/NeuralMark.
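A small NumPy sketch of the hashed-watermark-as-filter idea: the key is hashed into an irreversible bit string, the same hash seeds which parameters are inspected, and verification checks bit agreement. The sign-based extraction rule below is a simplified stand-in for the paper's training-time embedding objective and average-pooling step.

```python
import hashlib
import numpy as np

def hashed_watermark(secret_key, n_bits=256):
    """Derive an irreversible binary watermark from a secret key."""
    digest = hashlib.sha256(secret_key.encode()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n_bits].astype(np.float32)

def verify_watermark(weights, secret_key, n_bits=256):
    """Check bit agreement between the hashed watermark and the signs of the
    key-selected weights; a simplified stand-in for an embedding objective
    that trains the selected weights to encode the bits."""
    wm = hashed_watermark(secret_key, n_bits)
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:8], "big")
    idx = np.random.default_rng(seed).choice(weights.size, size=n_bits, replace=False)
    extracted = (weights.ravel()[idx] > 0).astype(np.float32)
    return float((extracted == wm).mean())           # ~0.5 for an unwatermarked model

print(f"bit accuracy on random weights: {verify_watermark(np.random.randn(4096), 'owner-secret'):.2f}")
```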
|
2025-07-16T00:00:00 |
2507.10787
|
Can Multimodal Foundation Models Understand Schematic Diagrams? An Empirical Study on Information-Seeking QA over Scientific Papers
|
[
"Yilun Zhao",
"Chengye Wang",
"Chuhan Li",
"Arman Cohan"
] |
This paper introduces MISS-QA, the first benchmark specifically designed to evaluate the ability of models to interpret schematic diagrams within scientific literature. MISS-QA comprises 1,500 expert-annotated examples over 465 scientific papers. In this benchmark, models are tasked with interpreting schematic diagrams that illustrate research overviews and answering corresponding information-seeking questions based on the broader context of the paper. We assess the performance of 18 frontier multimodal foundation models, including o4-mini, Gemini-2.5-Flash, and Qwen2.5-VL. We reveal a significant performance gap between these models and human experts on MISS-QA. Our analysis of model performance on unanswerable questions and our detailed error analysis further highlight the strengths and limitations of current models, offering key insights to enhance models in comprehending multimodal scientific literature.
|
|
2025-07-16T00:00:00 |
2507.09411
|
LLMalMorph: On The Feasibility of Generating Variant Malware using Large-Language-Models
|
[
"Md Ajwad Akil",
"Adrian Shuai Li",
"Imtiaz Karim",
"Arun Iyengar",
"Ashish Kundu",
"Vinny Parla",
"Elisa Bertino"
] |
Large Language Models (LLMs) have transformed software development and automated code generation. Motivated by these advancements, this paper explores the feasibility of LLMs in modifying malware source code to generate variants. We introduce LLMalMorph, a semi-automated framework that leverages semantic and syntactic code comprehension by LLMs to generate new malware variants. LLMalMorph extracts function-level information from the malware source code and employs custom-engineered prompts coupled with strategically defined code transformations to guide the LLM in generating variants without resource-intensive fine-tuning. To evaluate LLMalMorph, we collected 10 diverse Windows malware samples of varying types, complexity, and functionality, and generated 618 variants. Our thorough experiments demonstrate that it is possible to reduce the detection rates of antivirus engines against these malware variants to some extent while preserving malware functionalities. In addition, despite not optimizing against any Machine Learning (ML)-based malware detectors, several variants also achieved notable attack success rates against an ML-based malware classifier. We also discuss the limitations of current LLM capabilities in generating malware variants from source code and assess where this emerging technology stands in the broader context of malware variant generation.
|
|
2025-07-16T00:00:00 |
2507.07104
|
Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models
|
[
"Tiezheng Zhang",
"Yitong Li",
"Yu-cheng Chou",
"Jieneng Chen",
"Alan Yuille",
"Chen Wei",
"Junfei Xiao"
] |
Building state-of-the-art Vision-Language Models (VLMs) with strong captioning capabilities typically necessitates training on billions of high-quality image-text pairs, requiring millions of GPU hours. This paper introduces the Vision-Language-Vision (VLV) auto-encoder framework, which strategically leverages key pretrained components: a vision encoder, the decoder of a Text-to-Image (T2I) diffusion model, and subsequently, a Large Language Model (LLM). Specifically, we establish an information bottleneck by regularizing the language representation space, achieved through freezing the pretrained T2I diffusion decoder. Our VLV pipeline effectively distills knowledge from the text-conditioned diffusion model using continuous embeddings, demonstrating comprehensive semantic understanding via high-quality reconstructions. Furthermore, by fine-tuning a pretrained LLM to decode the intermediate language representations into detailed descriptions, we construct a state-of-the-art (SoTA) captioner comparable to leading models like GPT-4o and Gemini 2.0 Flash. Our method demonstrates exceptional cost-efficiency and significantly reduces data requirements; by primarily utilizing single-modal images for training and maximizing the utility of existing pretrained models (image encoder, T2I diffusion model, and LLM), it circumvents the need for massive paired image-text datasets, keeping the total training expenditure under $1,000 USD.
|
|
2025-07-16T00:00:00 |
2507.09075
|
OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique
|
[
"Wasi Uddin Ahmad",
"Somshubra Majumdar",
"Aleksander Ficek",
"Sean Narenthiran",
"Mehrzad Samadi",
"Jocelyn Huang",
"Siddhartha Jain",
"Vahid Noroozi",
"Boris Ginsburg"
] |
Recent advancements in reasoning-based Large Language Models (LLMs), particularly their potential through test-time scaling, have created significant opportunities for distillation in code generation and critique. However, progress in both areas fundamentally depends on large-scale, high-quality datasets. In this work, we introduce OpenCodeReasoning-II, a dataset consisting of 2.5M question-solution-critique triples (approx. 35K unique programming questions), making it nearly twice the size of the previous largest publicly available code reasoning dataset. Building on this dataset, we employ a two-stage supervised fine-tuning strategy. The first stage focuses on fine-tuning for code generation, while the second stage involves the joint training of models for both code generation and critique. Our resulting finetuned Qwen2.5-Instruct models achieve performance in code generation that either exceeds or equals the best prior open-weight distilled models. Notably, the integration of our code generation and critique models leads to significant improvements in competitive coding performance. Furthermore, we present an extension of the LiveCodeBench benchmark to specifically support the C++ programming language, thereby facilitating more comprehensive LLM evaluation using this benchmark.
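A minimal sketch of the self-critique test-time scaling loop described above: sample k candidate solutions, score each with a critique model, keep the best. The generate and critique callables are hypothetical stand-ins for the finetuned generation and critique models, not a real API.

```python
def best_of_k_with_critique(problem, generate, critique, k=8):
    """Sample k candidate solutions, score each with a critique model, and
    return the highest-scoring candidate."""
    candidates = [generate(problem) for _ in range(k)]
    scores = [critique(problem, c) for c in candidates]
    return candidates[max(range(k), key=scores.__getitem__)]

# toy stand-ins: candidates cycle through guesses, the critic rewards the right one
gen = lambda p, it=iter(["40", "41", "42", "43"] * 2): next(it)
crit = lambda p, c: 1.0 if c == "42" else 0.0
print(best_of_k_with_critique("What is 6 * 7?", gen, crit))   # 42
```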
|
|
2025-07-16T00:00:00 |
2507.09404
|
Scaling Laws for Optimal Data Mixtures
|
[
"Mustafa Shukor",
"Louis Bethune",
"Dan Busbridge",
"David Grangier",
"Enrico Fini",
"Alaaeldin El-Nouby",
"Pierre Ablin"
] |
Large foundation models are typically trained on data from multiple domains, with the data mixture (the proportion of each domain used) playing a critical role in model performance. The standard approach to selecting this mixture relies on trial and error, which becomes impractical for large-scale pretraining. We propose a systematic method to determine the optimal data mixture for any target domain using scaling laws. Our approach accurately predicts the loss of a model of size N trained with D tokens and a specific domain weight vector h. We validate the universality of these scaling laws by demonstrating their predictive power in three distinct and large-scale settings: large language model (LLM), native multimodal model (NMM), and large vision model (LVM) pretraining. We further show that these scaling laws can extrapolate to new data mixtures and across scales: their parameters can be accurately estimated using a few small-scale training runs, and used to estimate the performance at larger scales and unseen domain weights. The scaling laws allow us to derive the optimal domain weights for any target domain under a given training budget (N,D), providing a principled alternative to costly trial-and-error methods.
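A small sketch of the workflow the abstract implies: once a parametric loss law L(N, D, h) has been fit on small-scale runs, the domain weights h can be optimized on the simplex for a fixed budget. The additive functional form and the parameter values below are placeholders, not the paper's fitted law.

```python
import numpy as np
from scipy.optimize import minimize

def mixture_loss(h, N, D, params):
    """Placeholder parametric law L(N, D, h): additive power-law terms in N and
    D plus a mixture term; NOT the paper's fitted functional form."""
    E, a, alpha, b, beta, c = params
    return E + a / N**alpha + b / D**beta - float(c @ np.log(h + 1e-9))

def optimal_mixture(N, D, params, k):
    """Minimize the predicted loss over domain weights on the simplex,
    using a softmax reparameterization to avoid explicit constraints."""
    def objective(z):
        h = np.exp(z - z.max()); h /= h.sum()
        return mixture_loss(h, N, D, params)
    z = minimize(objective, np.zeros(k), method="Nelder-Mead").x
    h = np.exp(z - z.max())
    return h / h.sum()

params = (1.8, 400.0, 0.34, 4e5, 0.28, np.array([0.05, 0.02, 0.01]))
# with this toy law the optimum is h proportional to c, i.e. roughly [0.625, 0.25, 0.125]
print(optimal_mixture(N=1e9, D=2e10, params=params, k=3))
```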
|
|
2025-07-16T00:00:00 |
2507.07186
|
Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs
|
[
"Itay Itzhak",
"Yonatan Belinkov",
"Gabriel Stanovsky"
] |
Large language models (LLMs) exhibit cognitive biases -- systematic tendencies of irrational decision-making, similar to those seen in humans. Prior work has found that these biases vary across models and can be amplified by instruction tuning. However, it remains unclear if these differences in biases stem from pretraining, finetuning, or even random noise due to training stochasticity. We propose a two-step causal experimental approach to disentangle these factors. First, we finetune models multiple times using different random seeds to study how training randomness affects over 30 cognitive biases. Second, we introduce cross-tuning -- swapping instruction datasets between models to isolate bias sources. This swap uses datasets that led to different bias patterns, directly testing whether biases are dataset-dependent. Our findings reveal that while training randomness introduces some variability, biases are mainly shaped by pretraining: models with the same pretrained backbone exhibit more similar bias patterns than those sharing only finetuning data. These insights suggest that understanding biases in finetuned models requires considering their pretraining origins beyond finetuning effects. This perspective can guide future efforts to develop principled strategies for evaluating and mitigating bias in LLMs.
|
|
2025-07-16T00:00:00 |
2507.11407
|
EXAONE 4.0: Unified Large Language Models Integrating Non-reasoning and Reasoning Modes
|
[
"LG AI Research",
"Kyunghoon Bae",
"Eunbi Choi",
"Kibong Choi",
"Stanley Jungkyu Choi",
"Yemuk Choi",
"Kyubeen Han",
"Seokhee Hong",
"Junwon Hwang",
"Taewan Hwang",
"Joonwon Jang",
"Hyojin Jeon",
"Kijeong Jeon",
"Gerrard Jeongwon Jo",
"Hyunjik Jo",
"Jiyeon Jung",
"Euisoon Kim",
"Hyosang Kim",
"Jihoon Kim",
"Joonkee Kim",
"Seonghwan Kim",
"Soyeon Kim",
"Sunkyoung Kim",
"Yireun Kim",
"Yongil Kim",
"Youchul Kim",
"Edward Hwayoung Lee",
"Gwangho Lee",
"Haeju Lee",
"Honglak Lee",
"Jinsik Lee",
"Kyungmin Lee",
"Sangha Park",
"Young Min Paik",
"Yongmin Park",
"Youngyong Park",
"Sanghyun Seo",
"Sihoon Yang",
"Heuiyeen Yeen",
"Sihyuk Yi",
"Hyeongu Yun"
] |
This technical report introduces EXAONE 4.0, which integrates a Non-reasoning mode and a Reasoning mode to achieve both the excellent usability of EXAONE 3.5 and the advanced reasoning abilities of EXAONE Deep. To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended to support Spanish in addition to English and Korean. The EXAONE 4.0 model series consists of two sizes: a mid-size 32B model optimized for high performance, and a small-size 1.2B model designed for on-device applications. EXAONE 4.0 demonstrates superior performance compared to open-weight models in its class and remains competitive even against frontier-class models. The models are publicly available for research purposes and can be easily downloaded via https://huggingface.co/LGAI-EXAONE.
|
|
2025-07-16T00:00:00 |
2507.08616
|
AgentsNet: Coordination and Collaborative Reasoning in Multi-Agent LLMs
|
[
"Florian Grötschla",
"Luis Müller",
"Jan Tönshoff",
"Mikhail Galkin",
"Bryan Perozzi"
] |
Large-language models (LLMs) have demonstrated powerful problem-solving capabilities, in particular when organized in multi-agent systems. However, the advent of such systems also raises several questions on the ability of a complex network of agents to effectively self-organize and collaborate. While measuring performance on standard reasoning benchmarks indicates how well multi-agent systems can solve reasoning tasks, it is unclear whether these systems are able to leverage their topology effectively. Here, we propose AgentsNet, a new benchmark for multi-agent reasoning. By drawing inspiration from classical problems in distributed systems and graph theory, AgentsNet measures the ability of multi-agent systems to collaboratively form strategies for problem-solving, self-organization, and effective communication given a network topology. We evaluate a variety of baseline methods on AgentsNet including homogeneous networks of agents which first have to agree on basic protocols for organization and communication. We find that some frontier LLMs are already demonstrating strong performance for small networks but begin to fall off once the size of the network scales. While existing multi-agent benchmarks cover at most 2-5 agents, AgentsNet is practically unlimited in size and can scale with new generations of LLMs. As such, we also probe frontier models in a setup with up to 100 agents.
|
|
2025-07-16T00:00:00 |
2507.10571
|
Orchestrator-Agent Trust: A Modular Agentic AI Visual Classification System with Trust-Aware Orchestration and RAG-Based Reasoning
|
[
"Konstantinos I. Roumeliotis",
"Ranjan Sapkota",
"Manoj Karkee",
"Nikolaos D. Tselikas"
] |
https://github.com/Applied-AI-Research-Lab/Orchestrator-Agent-Trust
|
Modern Artificial Intelligence (AI) increasingly relies on multi-agent architectures that blend visual and language understanding. Yet, a pressing challenge remains: How can we trust these agents, especially in zero-shot settings with no fine-tuning? We introduce a novel modular Agentic AI visual classification framework that integrates generalist multimodal agents with a non-visual reasoning orchestrator and a Retrieval-Augmented Generation (RAG) module. Applied to apple leaf disease diagnosis, we benchmark three configurations: (I) zero-shot with confidence-based orchestration, (II) fine-tuned agents with improved performance, and (III) trust-calibrated orchestration enhanced by CLIP-based image retrieval and re-evaluation loops. Using confidence calibration metrics (ECE, OCR, CCC), the orchestrator modulates trust across agents. Our results demonstrate a 77.94% accuracy improvement in the zero-shot setting using trust-aware orchestration and RAG, achieving 85.63% overall. GPT-4o showed better calibration, while Qwen-2.5-VL displayed overconfidence. Furthermore, image-RAG grounded predictions with visually similar cases, enabling correction of agent overconfidence via iterative re-evaluation. The proposed system separates perception (vision agents) from meta-reasoning (orchestrator), enabling scalable and interpretable multi-agent AI. This blueprint is extensible to diagnostics, biology, and other trust-critical domains. All models, prompts, results, and system components, including the complete software source code, are openly released to support reproducibility, transparency, and community benchmarking on GitHub: https://github.com/Applied-AI-Research-Lab/Orchestrator-Agent-Trust
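Of the calibration metrics listed above, ECE is the most standard; a minimal NumPy version is below (equal-width bins, absolute confidence-accuracy gap weighted by bin mass). The toy inputs are illustrative only.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the gap
    between each bin's mean confidence and its accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# e.g. an overconfident agent: high stated confidence, mediocre accuracy
print(expected_calibration_error([0.9, 0.95, 0.85, 0.9], [1, 0, 0, 1]))
```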
|
2025-07-16T00:00:00 |
2507.11336
|
UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks
|
[
"Peiran Wu",
"Yunze Liu",
"Zhengdong Zhu",
"Enmin Zhou",
"Shawn Shen"
] |
Real-world user-generated videos, especially on platforms like TikTok, often feature rich and intertwined audio-visual content. However, existing video captioning benchmarks and models remain predominantly visual-centric, overlooking the crucial role of audio in conveying scene dynamics, speaker intent, and narrative context. This lack of omni datasets and lightweight, capable models hampers progress in fine-grained, multimodal video understanding. To address these challenges, we introduce UGC-VideoCap, a new benchmark and model framework specifically designed for detailed omnimodal captioning of short-form user-generated videos. Unlike prior datasets, UGC-VideoCap emphasizes balanced integration of audio and visual modalities, featuring 1000 TikTok videos annotated through a structured three-stage human-in-the-loop pipeline covering audio-only, visual-only, and joint audio-visual semantics. The benchmark also includes 4000 carefully crafted QA pairs probing both unimodal and cross-modal understanding. Alongside the dataset, we propose UGC-VideoCaptioner (3B), a 3B-parameter captioning model distilled from Gemini 2.5 Flash. Using a novel two-stage training strategy, supervised fine-tuning followed by Group Relative Policy Optimization (GRPO), our approach enables efficient adaptation from limited data while maintaining competitive performance. Together, our benchmark and model offer a high-quality foundation and a data-efficient solution for advancing omnimodal video captioning in unconstrained real-world UGC settings.
|
|
2025-07-16T00:00:00 |
2507.08333
|
Token-based Audio Inpainting via Discrete Diffusion
|
[
"Tali Dror",
"Iftach Shoham",
"Moshe Buchris",
"Oren Gal",
"Haim Permuter",
"Gilad Katz",
"Eliya Nachmani"
] |
Audio inpainting refers to the task of reconstructing missing segments in corrupted audio recordings. While prior approaches, including waveform- and spectrogram-based diffusion models, have shown promising results for short gaps, they often degrade in quality when gaps exceed 100 milliseconds (ms). In this work, we introduce a novel inpainting method based on discrete diffusion modeling, which operates over tokenized audio representations produced by a pre-trained audio tokenizer. Our approach models the generative process directly in the discrete latent space, enabling stable and semantically coherent reconstruction of missing audio. We evaluate the method on the MusicNet dataset using both objective and perceptual metrics across gap durations up to 300 ms. We further evaluate our approach on the MTG dataset, extending the gap duration to 500 ms. Experimental results demonstrate that our method achieves competitive or superior performance compared to existing baselines, particularly for longer gaps, offering a robust solution for restoring degraded musical recordings. Audio examples of our proposed method can be found at https://iftach21.github.io/
|
|
2025-07-16T00:00:00 |
2507.04127
|
BYOKG-RAG: Multi-Strategy Graph Retrieval for Knowledge Graph Question Answering
|
[
"Costas Mavromatis",
"Soji Adeshina",
"Vassilis N. Ioannidis",
"Zhen Han",
"Qi Zhu",
"Ian Robinson",
"Bryan Thompson",
"Huzefa Rangwala",
"George Karypis"
] |
https://github.com/awslabs/graphrag-toolkit
|
Knowledge graph question answering (KGQA) presents significant challenges due to the structural and semantic variations across input graphs. Existing works rely on Large Language Model (LLM) agents for graph traversal and retrieval; an approach that is sensitive to traversal initialization, as it is prone to entity linking errors and may not generalize well to custom ("bring-your-own") KGs. We introduce BYOKG-RAG, a framework that enhances KGQA by synergistically combining LLMs with specialized graph retrieval tools. In BYOKG-RAG, LLMs generate critical graph artifacts (question entities, candidate answers, reasoning paths, and OpenCypher queries), and graph tools link these artifacts to the KG and retrieve relevant graph context. The retrieved context enables the LLM to iteratively refine its graph linking and retrieval, before final answer generation. By retrieving context from different graph tools, BYOKG-RAG offers a more general and robust solution for QA over custom KGs. Through experiments on five benchmarks spanning diverse KG types, we demonstrate that BYOKG-RAG outperforms the second-best graph retrieval method by 4.5 percentage points while showing better generalization to custom KGs. The BYOKG-RAG framework is open-sourced at https://github.com/awslabs/graphrag-toolkit.
|
2025-07-16T00:00:00 |
2507.09082
|
Taming generative video models for zero-shot optical flow extraction
|
[
"Seungwoo Kim",
"Khai Loong Aw",
"Klemen Kotar",
"Cristobal Eyzaguirre",
"Wanhee Lee",
"Yunong Liu",
"Jared Watrous",
"Stefan Stojanov",
"Juan Carlos Niebles",
"Jiajun Wu",
"Daniel L. K. Yamins"
] |
Extracting optical flow from videos remains a core computer vision problem. Motivated by the success of large general-purpose models, we ask whether frozen self-supervised video models trained only for future frame prediction can be prompted, without fine-tuning, to output flow. Prior work reading out depth or illumination from video generators required fine-tuning, which is impractical for flow where labels are scarce and synthetic datasets suffer from a sim-to-real gap. Inspired by the Counterfactual World Model (CWM) paradigm, which can obtain point-wise correspondences by injecting a small tracer perturbation into a next-frame predictor and tracking its propagation, we extend this idea to generative video models. We explore several popular architectures and find that successful zero-shot flow extraction in this manner is aided by three model properties: (1) distributional prediction of future frames (avoiding blurry or noisy outputs); (2) factorized latents that treat each spatio-temporal patch independently; and (3) random-access decoding that can condition on any subset of future pixels. These properties are uniquely present in the recent Local Random Access Sequence (LRAS) architecture. Building on LRAS, we propose KL-tracing: a novel test-time procedure that injects a localized perturbation into the first frame, rolls out the model one step, and computes the Kullback-Leibler divergence between perturbed and unperturbed predictive distributions. Without any flow-specific fine-tuning, our method outperforms state-of-the-art models on real-world TAP-Vid DAVIS dataset (16.6% relative improvement for endpoint error) and synthetic TAP-Vid Kubric (4.7% relative improvement). Our results indicate that counterfactual prompting of controllable generative video models is a scalable and effective alternative to supervised or photometric-loss approaches for high-quality flow.
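The KL-tracing read-out described above reduces to comparing two per-patch predictive distributions and locating where they diverge most. Below is a minimal, hedged sketch of that step in Python/PyTorch; it assumes a generic next-frame predictor that returns one categorical distribution over discrete codes per spatial patch (in the spirit of LRAS-style models), and the tensor shapes, patch geometry, and function name are illustrative rather than the authors' API.

```python
import torch
import torch.nn.functional as F


def kl_trace_flow(logits_clean: torch.Tensor,
                  logits_perturbed: torch.Tensor,
                  source_yx: tuple,
                  patch_size: int = 16) -> tuple:
    """Estimate the flow of a tracer perturbation from two predictive distributions.

    Both logit tensors are assumed to have shape (H_patches, W_patches, vocab),
    i.e. one categorical distribution over discrete codes per patch of the
    predicted next frame. The flow is read off as the displacement from the
    perturbed source patch to the patch where the distributions diverge most.
    """
    log_p = F.log_softmax(logits_perturbed, dim=-1)
    log_q = F.log_softmax(logits_clean, dim=-1)
    # Per-patch KL(perturbed || clean), summed over the code vocabulary.
    kl_map = (log_p.exp() * (log_p - log_q)).sum(dim=-1)   # (H, W)

    peak = torch.argmax(kl_map)                             # flattened index of max divergence
    peak_y, peak_x = divmod(int(peak), kl_map.shape[1])
    dy = (peak_y - source_yx[0]) * patch_size
    dx = (peak_x - source_yx[1]) * patch_size
    return float(dx), float(dy)
```

In practice the tracer would be injected into the first frame before querying the model twice (with and without the perturbation); the sketch only covers the divergence-and-argmax step that turns the two predictions into a flow vector.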
|
|
2025-07-17T00:00:00 |
2507.12465
|
PhysX: Physical-Grounded 3D Asset Generation
|
[
"Ziang Cao",
"Zhaoxi Chen",
"Linag Pan",
"Ziwei Liu"
] |
3D modeling is moving from virtual to physical. Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. Consequently, despite the rapid development of 3D generative models, the synthesized 3D assets often overlook rich and important physical properties, hampering their real-world application in physical domains like simulation and embodied AI. As an initial attempt to address this challenge, we propose PhysX, an end-to-end paradigm for physical-grounded 3D asset generation. 1) To bridge the critical gap in physics-annotated 3D datasets, we present PhysXNet, the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: absolute scale, material, affordance, kinematics, and function description. In particular, we devise a scalable human-in-the-loop annotation pipeline based on vision-language models, which enables efficient creation of physics-first assets from raw 3D assets. 2) Furthermore, we propose PhysXGen, a feed-forward framework for physics-grounded image-to-3D asset generation, injecting physical knowledge into the pre-trained 3D structural space. Specifically, PhysXGen employs a dual-branch architecture to explicitly model the latent correlations between 3D structures and physical properties, thereby producing 3D assets with plausible physical predictions while preserving the native geometry quality. Extensive experiments validate the superior performance and promising generalization capability of our framework. All the code, data, and models will be released to facilitate future research in generative physical AI.
|
|
2025-07-17T00:00:00 |
2507.09025
|
Lizard: An Efficient Linearization Framework for Large Language Models
|
[
"Chien Van Nguyen",
"Ruiyi Zhang",
"Hanieh Deilamsalehy",
"Puneet Mathur",
"Viet Dac Lai",
"Haoliang Wang",
"Jayakumar Subramanian",
"Ryan A. Rossi",
"Trung Bui",
"Nikos Vlassis",
"Franck Dernoncourt",
"Thien Huu Nguyen"
] |
We propose Lizard, a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into flexible, subquadratic architectures for infinite-context generation. Transformer-based LLMs face significant memory and computational bottlenecks as context lengths increase, due to the quadratic complexity of softmax attention and the growing key-value (KV) cache. Lizard addresses these limitations by introducing a subquadratic attention mechanism that closely approximates softmax attention while preserving the output quality. Unlike previous linearization methods, which are often limited by fixed model structures and therefore exclude gating mechanisms, Lizard incorporates a gating module inspired by recent state-of-the-art linear models. This enables adaptive memory control, supports constant-memory inference, offers strong length generalization, and allows more flexible model design. Lizard combines gated linear attention for global context compression with sliding window attention enhanced by meta memory, forming a hybrid mechanism that captures both long-range dependencies and fine-grained local interactions. Moreover, we introduce a hardware-aware algorithm that accelerates the training speed of our models. Extensive experiments show that Lizard achieves near-lossless recovery of the teacher model's performance across standard language modeling tasks, while significantly outperforming previous linearization methods. On the 5-shot MMLU benchmark, Lizard improves over prior models by 18 points and shows significant improvements on associative recall tasks.
|
|
2025-07-17T00:00:00 |
2507.11527
|
DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering
|
[
"Yinsheng Li",
"Zhen Dong",
"Yi Shao"
] |
https://github.com/Eason-Li-AIS/DrafterBench
|
Large Language Model (LLM) agents have shown great potential for solving real-world problems and promise to be a solution for task automation in industry. However, more benchmarks are needed to systematically evaluate automation agents from an industrial perspective, for example, in Civil Engineering. Therefore, we propose DrafterBench for the comprehensive evaluation of LLM agents in the context of technical drawing revision, a representative task in civil engineering. DrafterBench contains twelve types of tasks summarized from real-world drawing files, with 46 customized functions/tools and 1920 tasks in total. DrafterBench is an open-source benchmark to rigorously test AI agents' proficiency in interpreting intricate and long-context instructions, leveraging prior knowledge, and adapting to dynamic instruction quality via implicit policy awareness. The toolkit comprehensively assesses distinct capabilities in structured data comprehension, function execution, instruction following, and critical reasoning. DrafterBench offers detailed analysis of task accuracy and error statistics, aiming to provide deeper insight into agent capabilities and identify improvement targets for integrating LLMs in engineering applications. Our benchmark is available at https://github.com/Eason-Li-AIS/DrafterBench, with the test set hosted at https://huggingface.co/datasets/Eason666/DrafterBench.
|
2025-07-17T00:00:00 |
2507.09477
|
Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs
|
[
"Yangning Li",
"Weizhi Zhang",
"Yuyao Yang",
"Wei-Chieh Huang",
"Yaozu Wu",
"Junyu Luo",
"Yuanchen Bei",
"Henry Peng Zou",
"Xiao Luo",
"Yusheng Zhao",
"Chunkit Chan",
"Yankai Chen",
"Zhongfen Deng",
"Yinghui Li",
"Hai-Tao Zheng",
"Dongyuan Li",
"Renhe Jiang",
"Ming Zhang",
"Yangqiu Song",
"Philip S. Yu"
] |
https://github.com/DavidZWZ/Awesome-RAG-Reasoning
|
Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-retrieval perspective. We first map how advanced reasoning optimizes each stage of RAG (Reasoning-Enhanced RAG). Then, we show how retrieved knowledge of different types supplies missing premises and expands context for complex inference (RAG-Enhanced Reasoning). Finally, we spotlight emerging Synergized RAG-Reasoning frameworks, where (agentic) LLMs iteratively interleave search and reasoning to achieve state-of-the-art performance across knowledge-intensive benchmarks. We categorize methods, datasets, and open challenges, and outline research avenues toward deeper RAG-Reasoning systems that are more effective, multimodally-adaptive, trustworthy, and human-centric. The collection is available at https://github.com/DavidZWZ/Awesome-RAG-Reasoning.
|
2025-07-17T00:00:00 |
2507.11949
|
MOSPA: Human Motion Generation Driven by Spatial Audio
|
[
"Shuyang Xu",
"Zhiyang Dou",
"Mingyi Shi",
"Liang Pan",
"Leo Ho",
"Jingbo Wang",
"Yuan Liu",
"Cheng Lin",
"Yuexin Ma",
"Wenping Wang",
"Taku Komura"
] |
Enabling virtual humans to dynamically and realistically respond to diverse auditory stimuli remains a key challenge in character animation, demanding the integration of perceptual modeling and motion synthesis. Despite its significance, this task remains largely unexplored. Most previous works have primarily focused on mapping modalities like speech, audio, and music to generate human motion. Yet these models typically overlook the impact of spatial features encoded in spatial audio signals on human motion. To bridge this gap and enable high-quality modeling of human movements in response to spatial audio, we introduce the first comprehensive Spatial Audio-Driven Human Motion (SAM) dataset, which contains diverse and high-quality spatial audio and motion data. For benchmarking, we develop a simple yet effective diffusion-based generative framework for human MOtion generation driven by SPatial Audio, termed MOSPA, which faithfully captures the relationship between body motion and spatial audio through an effective fusion mechanism. Once trained, MOSPA can generate diverse, realistic human motions conditioned on varying spatial audio inputs. We perform a thorough investigation of the proposed dataset and conduct extensive experiments for benchmarking, where our method achieves state-of-the-art performance on this task. Our model and dataset will be open-sourced upon acceptance. Please refer to our supplementary video for more details.
|
|
2025-07-17T00:00:00 |
2507.12463
|
MMHU: A Massive-Scale Multimodal Benchmark for Human Behavior Understanding
|
[
"Renjie Li",
"Ruijie Ye",
"Mingyang Wu",
"Hao Frank Yang",
"Zhiwen Fan",
"Hezhen Hu",
"Zhengzhong Tu"
] |
Humans are integral components of the transportation ecosystem, and understanding their behaviors is crucial to facilitating the development of safe driving systems. Although recent progress has explored various aspects of human behavior (such as motion, trajectories, and intention), a comprehensive benchmark for evaluating human behavior understanding in autonomous driving remains unavailable. In this work, we propose MMHU, a large-scale benchmark for human behavior analysis featuring rich annotations, such as human motion and trajectories, text description for human motions, human intention, and critical behavior labels relevant to driving safety. Our dataset encompasses 57k human motion clips and 1.73M frames gathered from diverse sources, including established driving datasets such as Waymo, in-the-wild videos from YouTube, and self-collected data. A human-in-the-loop annotation pipeline is developed to generate rich behavior captions. We provide a thorough dataset analysis and benchmark multiple tasks, ranging from motion prediction to motion generation and human behavior question answering, thereby offering a broad evaluation suite. Project page: https://MMHU-Benchmark.github.io.
|
|
2025-07-17T00:00:00 |
2507.12415
|
SWE-Perf: Can Language Models Optimize Code Performance on Real-World Repositories?
|
[
"Xinyi He",
"Qian Liu",
"Mingzhe Du",
"Lin Yan",
"Zhijie Fan",
"Yiming Huang",
"Zejian Yuan",
"Zejun Ma"
] |
Code performance optimization is paramount in real-world software engineering and critical for production-level systems. While Large Language Models (LLMs) have demonstrated impressive capabilities in code generation and bug fixing, their proficiency in enhancing code performance at the repository level remains largely unexplored. To address this gap, we introduce SWE-Perf, the first benchmark specifically designed to systematically evaluate LLMs on code performance optimization tasks within authentic repository contexts. SWE-Perf comprises 140 carefully curated instances, each derived from performance-improving pull requests from popular GitHub repositories. Each benchmark instance includes the relevant codebase, target functions, performance-related tests, expert-authored patches, and executable environments. Through a comprehensive evaluation of representative methods that span file-level and repo-level approaches (e.g., Agentless and OpenHands), we reveal a substantial capability gap between existing LLMs and expert-level optimization performance, highlighting critical research opportunities in this emerging field.
|
|
2025-07-17T00:00:00 |
2507.02857
|
AnyI2V: Animating Any Conditional Image with Motion Control
|
[
"Ziye Li",
"Hao Luo",
"Xincheng Shuai",
"Henghui Ding"
] |
Recent advancements in video generation, particularly in diffusion models, have driven notable progress in text-to-video (T2V) and image-to-video (I2V) synthesis. However, challenges remain in effectively integrating dynamic motion signals and flexible spatial constraints. Existing T2V methods typically rely on text prompts, which inherently lack precise control over the spatial layout of generated content. In contrast, I2V methods are limited by their dependence on real images, which restricts the editability of the synthesized content. Although some methods incorporate ControlNet to introduce image-based conditioning, they often lack explicit motion control and require computationally expensive training. To address these limitations, we propose AnyI2V, a training-free framework that animates any conditional images with user-defined motion trajectories. AnyI2V supports a broader range of modalities as the conditional image, including data types such as meshes and point clouds that are not supported by ControlNet, enabling more flexible and versatile video generation. Additionally, it supports mixed conditional inputs and enables style transfer and editing via LoRA and text prompts. Extensive experiments demonstrate that the proposed AnyI2V achieves superior performance and provides a new perspective in spatial- and motion-controlled video generation. Code is available at https://henghuiding.com/AnyI2V/.
|
|
2025-07-17T00:00:00 |
2507.12462
|
SpatialTrackerV2: 3D Point Tracking Made Easy
|
[
"Yuxi Xiao",
"Jianyuan Wang",
"Nan Xue",
"Nikita Karaev",
"Yuri Makarov",
"Bingyi Kang",
"Xing Zhu",
"Hujun Bao",
"Yujun Shen",
"Xiaowei Zhou"
] |
We present SpatialTrackerV2, a feed-forward 3D point tracking method for monocular videos. Going beyond modular pipelines built on off-the-shelf components for 3D tracking, our approach unifies the intrinsic connections between point tracking, monocular depth, and camera pose estimation into a high-performing and feedforward 3D point tracker. It decomposes world-space 3D motion into scene geometry, camera ego-motion, and pixel-wise object motion, with a fully differentiable and end-to-end architecture, allowing scalable training across a wide range of datasets, including synthetic sequences, posed RGB-D videos, and unlabeled in-the-wild footage. By learning geometry and motion jointly from such heterogeneous data, SpatialTrackerV2 outperforms existing 3D tracking methods by 30%, and matches the accuracy of leading dynamic 3D reconstruction approaches while running 50× faster.
|
|
2025-07-17T00:00:00 |
2507.05065
|
Replacing thinking with tool usage enables reasoning in small language models
|
[
"Corrado Rainone",
"Tim Bakker",
"Roland Memisevic"
] |
Recent advances have established a new machine learning paradigm based on scaling up compute at inference time as well as at training time. In that line of work, a combination of Supervised Fine-Tuning (SFT) on synthetic demonstrations and Reinforcement Learning with Verifiable Rewards (RLVR) is used for training Large Language Models to expend extra compute during inference in the form of "thoughts" expressed in natural language. In this paper, we propose to instead format these tokens as a multi-turn interaction trace with a stateful tool. At each turn, the new state of the tool is appended to the context of the model, whose job is to generate the tokens necessary to control the tool via a custom DSL. We benchmark this approach on the problem of repairing malfunctioning Python code, and show that this constrained setup allows for faster sampling of experience and a denser reward signal, allowing even models of size up to 3B parameters to learn how to proficiently expend additional compute on the task.
|
|
2025-07-17T00:00:00 |
2507.07451
|
RLEP: Reinforcement Learning with Experience Replay for LLM Reasoning
|
[
"Hongzhi Zhang",
"Jia Fu",
"Jingyuan Zhang",
"Kai Fu",
"Qi Wang",
"Fuzheng Zhang",
"Guorui Zhou"
] |
https://github.com/Kwai-Klear/RLEP
|
Reinforcement learning (RL) for large language models is an energy-intensive endeavor: training can be unstable, and the policy may gradually drift away from its pretrained weights. We present RLEP (Reinforcement Learning with Experience rePlay), a two-phase framework that first collects verified trajectories and then replays them during subsequent training. At every update step, the policy is optimized on mini-batches that blend newly generated rollouts with these replayed successes. By replaying high-quality examples, RLEP steers the model away from fruitless exploration, focuses learning on promising reasoning paths, and delivers both faster convergence and stronger final performance. On the Qwen2.5-Math-7B base model, RLEP reaches baseline peak accuracy with substantially fewer updates and ultimately surpasses it, improving accuracy on AIME-2024 from 38.2% to 39.9%, on AIME-2025 from 19.8% to 22.3%, and on AMC-2023 from 77.0% to 82.2%. Our code, datasets, and checkpoints are publicly available at https://github.com/Kwai-Klear/RLEP to facilitate reproducibility and further research.
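As a rough illustration of the replay idea, the sketch below (plain Python, independent of the released code) mixes freshly generated rollouts with verified trajectories collected in an earlier phase. The pool structure, the reward-equals-one filter, and the replay_ratio knob are assumptions for exposition, not the paper's exact configuration.

```python
import random
from typing import Dict, List

# Hypothetical rollout record: prompt, generated trace, and verifier outcome.
Rollout = Dict[str, object]


class ExperiencePool:
    """Stores verified trajectories collected in the first (collection) phase."""

    def __init__(self) -> None:
        self._pool: List[Rollout] = []

    def add(self, rollout: Rollout) -> None:
        if rollout.get("reward") == 1.0:   # keep only verified successes
            self._pool.append(rollout)

    def sample(self, k: int) -> List[Rollout]:
        k = min(k, len(self._pool))
        return random.sample(self._pool, k) if k > 0 else []


def build_mixed_batch(fresh_rollouts: List[Rollout],
                      pool: ExperiencePool,
                      replay_ratio: float = 0.25) -> List[Rollout]:
    """Blend new rollouts with replayed successes for one policy update.

    `replay_ratio` is an assumed knob: the fraction of extra examples drawn
    from the experience pool relative to the freshly generated rollouts.
    """
    n_replay = int(len(fresh_rollouts) * replay_ratio)
    batch = list(fresh_rollouts) + pool.sample(n_replay)
    random.shuffle(batch)
    return batch
```

In the two-phase scheme described above, the pool would be populated with verified trajectories before RL training begins; the sketch only covers the batch-mixing step performed at each update.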
|
2025-07-17T00:00:00 |
2507.11764
|
AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles
|
[
"Matteo Fasulo",
"Luca Babboni",
"Luca Tedeschini"
] |
This paper presents AI Wizards' participation in the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles, classifying sentences as subjective/objective in monolingual, multilingual, and zero-shot settings. Training/development datasets were provided for Arabic, German, English, Italian, and Bulgarian; final evaluation included additional unseen languages (e.g., Greek, Romanian, Polish, Ukrainian) to assess generalization. Our primary strategy enhanced transformer-based classifiers by integrating sentiment scores, derived from an auxiliary model, with sentence representations, aiming to improve upon standard fine-tuning. We explored this sentiment-augmented architecture with mDeBERTaV3-base, ModernBERT-base (English), and Llama3.2-1B. To address class imbalance, prevalent across languages, we employed decision threshold calibration optimized on the development set. Our experiments show sentiment feature integration significantly boosts performance, especially subjective F1 score. This framework led to high rankings, notably 1st for Greek (Macro F1 = 0.51).
|
|
2025-07-17T00:00:00 |
2507.11412
|
Seq vs Seq: An Open Suite of Paired Encoders and Decoders
|
[
"Orion Weller",
"Kathryn Ricci",
"Marc Marone",
"Antoine Chaffin",
"Dawn Lawrie",
"Benjamin Van Durme"
] |
The large language model (LLM) community focuses almost exclusively on decoder-only language models, since they are easier to use for text generation. However, a large subset of the community still uses encoder-only models for tasks such as classification or retrieval. Previous work has attempted to compare these architectures, but is forced to make comparisons with models that have different numbers of parameters, training techniques, and datasets. We introduce the SOTA open-data Ettin suite of models: paired encoder-only and decoder-only models ranging from 17 million parameters to 1 billion, trained on up to 2 trillion tokens. Using the same recipe for both encoder-only and decoder-only models produces SOTA recipes in both categories for their respective sizes, beating ModernBERT as an encoder and Llama 3.2 and SmolLM2 as decoders. Like previous work, we find that encoder-only models excel at classification and retrieval tasks while decoders excel at generative tasks. However, we show that adapting a decoder model to encoder tasks (and vice versa) through continued training is subpar compared to using only the reverse objective (i.e. a 400M encoder outperforms a 1B decoder on MNLI, and vice versa for generative tasks). We open-source all artifacts of this study including training data, training order segmented by checkpoint, and 200+ checkpoints to allow future work to analyze or extend all aspects of training.
|
|
2025-07-17T00:00:00 |
2507.12367
|
GitChameleon: Evaluating AI Code Generation Against Python Library Version Incompatibilities
|
[
"Diganta Misra",
"Nizar Islah",
"Victor May",
"Brice Rauby",
"Zihan Wang",
"Justine Gehring",
"Antonio Orvieto",
"Muawiz Chaudhary",
"Eilif B. Muller",
"Irina Rish",
"Samira Ebrahimi Kahou",
"Massimo Caccia"
] |
https://github.com/mrcabbage972/GitChameleonBenchmark
|
The rapid evolution of software libraries poses a considerable hurdle for code generation, necessitating continuous adaptation to frequent version updates while preserving backward compatibility. While existing code evolution benchmarks provide valuable insights, they typically lack execution-based evaluation for generating code compliant with specific library versions. To address this, we introduce GitChameleon, a novel, meticulously curated dataset comprising 328 Python code completion problems, each conditioned on specific library versions and accompanied by executable unit tests. GitChameleon rigorously evaluates the capacity of contemporary large language models (LLMs), LLM-powered agents, code assistants, and RAG systems to perform version-conditioned code generation that demonstrates functional accuracy through execution. Our extensive evaluations indicate that state-of-the-art systems encounter significant challenges with this task, with enterprise models achieving baseline success rates in the 48-51% range, underscoring the intricacy of the problem. By offering an execution-based benchmark emphasizing the dynamic nature of code libraries, GitChameleon enables a clearer understanding of this challenge and helps guide the development of more adaptable and dependable AI code generation methods. We make the dataset and evaluation code publicly available at https://github.com/mrcabbage972/GitChameleonBenchmark.
|
2025-07-17T00:00:00 |
2507.10015
|
(Almost) Free Modality Stitching of Foundation Models
|
[
"Jaisidh Singh",
"Diganta Misra",
"Boris Knyazev",
"Antonio Orvieto"
] |
Foundation multi-modal models are often designed by stitching together multiple existing pretrained uni-modal models: for example, an image classifier with a text model. This stitching process is performed by training a connector module that aims to align the representation spaces of these uni-modal models towards a multi-modal objective. However, given the complexity of training such connectors on large-scale web-based datasets coupled with the ever-increasing number of available pretrained uni-modal models, the task of uni-modal model selection and subsequent connector module training becomes computationally demanding. To address this under-studied critical problem, we propose Hypernetwork Model Alignment (Hyma), a novel all-in-one solution for optimal uni-modal model selection and connector training by leveraging hypernetworks. Specifically, our framework utilizes the parameter prediction capability of a hypernetwork to obtain jointly trained connector modules for N × M combinations of uni-modal models. In our experiments, Hyma reduces the cost of searching for the best performing uni-modal model pair by 10×, while matching the ranking and trained connector performance obtained via grid search across a suite of diverse multi-modal benchmarks.
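A minimal sketch of the hypernetwork idea follows, assuming for simplicity that every connector is a single linear projection and that all vision encoders share one output dimension; the embedding sizes, head architecture, and function names are illustrative and do not reflect the released implementation.

```python
import torch
import torch.nn as nn


class ConnectorHypernetwork(nn.Module):
    """Predicts the weights of a linear connector for each (vision, text) model pair."""

    def __init__(self, n_vision: int, n_text: int,
                 d_in: int, d_out: int, d_embed: int = 64):
        super().__init__()
        self.d_in, self.d_out = d_in, d_out
        # Learnable identity embeddings for each candidate uni-modal model.
        self.vision_emb = nn.Embedding(n_vision, d_embed)
        self.text_emb = nn.Embedding(n_text, d_embed)
        # Head that maps a pair embedding to the connector's weight and bias.
        self.head = nn.Sequential(
            nn.Linear(2 * d_embed, 256), nn.ReLU(),
            nn.Linear(256, d_in * d_out + d_out),
        )

    def forward(self, vision_id: torch.Tensor, text_id: torch.Tensor):
        pair = torch.cat([self.vision_emb(vision_id), self.text_emb(text_id)], dim=-1)
        params = self.head(pair)
        w = params[..., : self.d_in * self.d_out].reshape(-1, self.d_out, self.d_in)
        b = params[..., self.d_in * self.d_out:]
        return w, b


def apply_connector(w: torch.Tensor, b: torch.Tensor,
                    vision_features: torch.Tensor) -> torch.Tensor:
    """Project (B, N, d_in) vision features with per-pair predicted weights."""
    return torch.einsum("boi,bni->bno", w, vision_features) + b.unsqueeze(1)
```

Training all N × M connectors then amounts to sampling (vision_id, text_id) pairs per batch and backpropagating the multimodal alignment loss into the shared hypernetwork rather than into separate connector modules.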
|
|
2025-07-17T00:00:00 |
2507.07015
|
MST-Distill: Mixture of Specialized Teachers for Cross-Modal Knowledge Distillation
|
[
"Hui Li",
"Pengfei Yang",
"Juanyang Chen",
"Le Dong",
"Yanxin Chen",
"Quan Wang"
] |
https://github.com/Gray-OREO/MST-Distill
|
Knowledge distillation, as an efficient knowledge transfer technique, has achieved remarkable success in unimodal scenarios. However, in cross-modal settings, conventional distillation methods encounter significant challenges due to data and statistical heterogeneities, failing to leverage the complementary prior knowledge embedded in cross-modal teacher models. This paper empirically reveals two critical issues in existing approaches: distillation path selection and knowledge drift. To address these limitations, we propose MST-Distill, a novel cross-modal knowledge distillation framework featuring a mixture of specialized teachers. Our approach employs a diverse ensemble of teacher models across both cross-modal and multimodal configurations, integrated with an instance-level routing network that facilitates adaptive and dynamic distillation. This architecture effectively transcends the constraints of traditional methods that rely on monotonous and static teacher models. Additionally, we introduce a plug-in masking module, independently trained to suppress modality-specific discrepancies and reconstruct teacher representations, thereby mitigating knowledge drift and enhancing transfer effectiveness. Extensive experiments across five diverse multimodal datasets, spanning visual, audio, and text, demonstrate that our method significantly outperforms existing state-of-the-art knowledge distillation methods in cross-modal distillation tasks. The source code is available at https://github.com/Gray-OREO/MST-Distill.
|
2025-07-18T00:00:00 |
2507.13334
|
A Survey of Context Engineering for Large Language Models
|
[
"Lingrui Mei",
"Jiayu Yao",
"Yuyao Ge",
"Yiwei Wang",
"Baolong Bi",
"Yujun Cai",
"Jiazhi Liu",
"Mingyu Li",
"Zhong-Zhi Li",
"Duzhen Zhang",
"Chenlin Zhou",
"Jiayi Mao",
"Tianze Xia",
"Jiafeng Guo",
"Shenghua Liu"
] |
The performance of Large Language Models (LLMs) is fundamentally determined by the contextual information provided during inference. This survey introduces Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs. We present a comprehensive taxonomy decomposing Context Engineering into its foundational components and the sophisticated implementations that integrate them into intelligent systems. We first examine the foundational components: context retrieval and generation, context processing, and context management. We then explore how these components are architecturally integrated to create sophisticated system implementations: retrieval-augmented generation (RAG), memory systems and tool-integrated reasoning, and multi-agent systems. Through this systematic analysis of over 1300 research papers, our survey not only establishes a technical roadmap for the field but also reveals a critical research gap: a fundamental asymmetry exists between model capabilities. While current models, augmented by advanced context engineering, demonstrate remarkable proficiency in understanding complex contexts, they exhibit pronounced limitations in generating equally sophisticated, long-form outputs. Addressing this gap is a defining priority for future research. Ultimately, this survey provides a unified framework for both researchers and engineers advancing context-aware AI.
|
|
2025-07-18T00:00:00 |
2507.13332
|
The Imitation Game: Turing Machine Imitator is Length Generalizable Reasoner
|
[
"Zhouqi Hua",
"Wenwei Zhang",
"Chengqi Lyu",
"Yuzhe Gu",
"Songyang Gao",
"Kuikun Liu",
"Kai Chen"
] |
Length generalization, the ability to solve problems of longer sequences than those observed during training, poses a core challenge for Transformer-based large language models (LLMs). Although existing studies have predominantly focused on data-driven approaches for arithmetic operations and symbolic manipulation tasks, these approaches tend to be task-specific with limited overall performance. To pursue a more general solution, this paper focuses on a broader case of reasoning problems that are computable, i.e., problems that algorithms can solve and that can thus be solved by a Turing Machine. From this perspective, this paper proposes Turing MAchine Imitation Learning (TAIL) to improve the length generalization ability of LLMs. TAIL synthesizes chain-of-thought (CoT) data that imitate the execution process of a Turing Machine via computer programs: it linearly expands the reasoning steps into atomic states to alleviate shortcut learning, and introduces an explicit memory-fetch mechanism to reduce the difficulty of dynamic and long-range data access in elementary operations. To validate the reliability and universality of TAIL, we construct a challenging synthetic dataset covering 8 classes of algorithms and 18 tasks. Without bells and whistles, TAIL significantly improves the length generalization ability as well as the performance of Qwen2.5-7B on various tasks using only synthetic data, surpassing previous methods and DeepSeek-R1. The experimental results reveal that the key concepts in the Turing Machine, rather than the thinking styles, are indispensable to TAIL's length generalization; with TAIL, the model exhibits read-and-write behaviors in its attention layers consistent with the properties of the Turing Machine. This work provides a promising direction for future research in the learning of LLM reasoning from synthetic data.
|
|
2025-07-18T00:00:00 |
2507.12720
|
FLEXITOKENS: Flexible Tokenization for Evolving Language Models
|
[
"Abraham Toluase Owodunni",
"Orevaoghene Ahia",
"Sachin Kumar"
] |
https://github.com/owos/flexitokens
|
Language models (LMs) are difficult to adapt to new data distributions through simple finetuning. This is due to the rigidity of their subword tokenizers, which typically remain unchanged during adaptation. This inflexibility often leads to inefficient tokenization, causing overfragmentation of out-of-distribution domains, unseen languages, or scripts. In this work, we develop byte-level LMs with learnable tokenizers to make tokenization adaptive. Our models include a submodule that learns to predict boundaries within the input byte sequence, encoding it into variable-length segments. Existing tokenizer-free methods train this boundary predictor using an auxiliary loss that enforces a fixed compression rate across the training corpus, introducing a new kind of rigidity. We propose FLEXITOKENS, a simplified training objective that enables significantly greater flexibility during adaptation. Evaluating across multiple multilingual benchmarks, morphologically diverse tasks, and domains, we demonstrate that FLEXITOKENS consistently reduces token over-fragmentation and achieves up to 10% improvements on downstream task performance compared to subword and other gradient-based tokenizers. Code and data for our experiments will be released at https://github.com/owos/flexitokens
|
2025-07-18T00:00:00 |
2507.12508
|
MindJourney: Test-Time Scaling with World Models for Spatial Reasoning
|
[
"Yuncong Yang",
"Jiageng Liu",
"Zheyuan Zhang",
"Siyuan Zhou",
"Reuben Tan",
"Jianwei Yang",
"Yilun Du",
"Chuang Gan"
] |
Spatial reasoning in 3D space is central to human cognition and indispensable for embodied tasks such as navigation and manipulation. However, state-of-the-art vision-language models (VLMs) frequently struggle with tasks as simple as anticipating how a scene will look after an egocentric motion: they perceive 2D images but lack an internal model of 3D dynamics. We therefore propose MindJourney, a test-time scaling framework that grants a VLM this missing capability by coupling it to a controllable world model based on video diffusion. The VLM iteratively sketches a concise camera trajectory, while the world model synthesizes the corresponding view at each step. The VLM then reasons over this multi-view evidence gathered during the interactive exploration. Without any fine-tuning, our MindJourney achieves an average performance boost of over 8% on the representative spatial reasoning benchmark SAT, showing that pairing VLMs with world models for test-time scaling offers a simple, plug-and-play route to robust 3D reasoning. Meanwhile, our method also improves upon test-time-inference VLMs trained through reinforcement learning, demonstrating the potential of utilizing world models for test-time scaling.
|
|
2025-07-18T00:00:00 |
2507.13347
|
π^3: Scalable Permutation-Equivariant Visual Geometry Learning
|
[
"Yifan Wang",
"Jianjun Zhou",
"Haoyi Zhu",
"Wenzheng Chang",
"Yang Zhou",
"Zizun Li",
"Junyi Chen",
"Jiangmiao Pang",
"Chunhua Shen",
"Tong He"
] |
We introduce π^3, a feed-forward neural network that offers a novel approach to visual geometry reconstruction, breaking the reliance on a conventional fixed reference view. Previous methods often anchor their reconstructions to a designated viewpoint, an inductive bias that can lead to instability and failures if the reference is suboptimal. In contrast, π^3 employs a fully permutation-equivariant architecture to predict affine-invariant camera poses and scale-invariant local point maps without any reference frames. This design makes our model inherently robust to input ordering and highly scalable. These advantages enable our simple and bias-free approach to achieve state-of-the-art performance on a wide range of tasks, including camera pose estimation, monocular/video depth estimation, and dense point map reconstruction. Code and models are publicly available.
|
|
2025-07-18T00:00:00 |
2507.12841
|
AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning
|
[
"Yiming Ren",
"Zhiqiang Lin",
"Yu Li",
"Gao Meng",
"Weiyun Wang",
"Junjie Wang",
"Zicheng Lin",
"Jifeng Dai",
"Yujiu Yang",
"Wenhai Wang",
"Ruihang Chu"
] |
Controllable captioning is essential for precise multimodal alignment and instruction following, yet existing models often lack fine-grained control and reliable evaluation protocols. To address this gap, we present the AnyCap Project, an integrated solution spanning model, dataset, and evaluation. We introduce AnyCapModel (ACM), a lightweight plug-and-play framework that enhances the controllability of existing foundation models for omni-modal captioning without retraining the base model. ACM reuses the original captions from base models while incorporating user instructions and modality features to generate improved captions. To remedy the data scarcity in controllable multimodal captioning, we build AnyCapDataset (ACD), covering three modalities, 28 user-instruction types, and 300k high-quality data entries. We further propose AnyCapEval, a new benchmark that provides more reliable evaluation metrics for controllable captioning by decoupling content accuracy and stylistic fidelity. ACM markedly improves caption quality across a diverse set of base models on AnyCapEval. Notably, ACM-8B raises GPT-4o's content scores by 45% and style scores by 12%, and it also achieves substantial gains on widely used benchmarks such as MIA-Bench and VidCapBench.
|
|
2025-07-18T00:00:00 |
2507.13348
|
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
|
[
"Senqiao Yang",
"Junyi Li",
"Xin Lai",
"Bei Yu",
"Hengshuang Zhao",
"Jiaya Jia"
] |
https://github.com/dvlab-research/VisionThink
|
Recent advancements in vision-language models (VLMs) have improved performance by increasing the number of visual tokens, which are often significantly longer than text tokens. However, we observe that most real-world scenarios do not require such an extensive number of visual tokens. While the performance drops significantly in a small subset of OCR-related tasks, models still perform accurately in most other general VQA tasks with only 1/4 resolution. Therefore, we propose to dynamically process distinct samples with different resolutions, and present a new paradigm for visual token compression, namely, VisionThink. It starts with a downsampled image and smartly decides whether it is sufficient for problem solving. Otherwise, the model could output a special token to request the higher-resolution image. Compared to existing Efficient VLM methods that compress tokens using fixed pruning ratios or thresholds, VisionThink autonomously decides whether to compress tokens case by case. As a result, it demonstrates strong fine-grained visual understanding capability on OCR-related tasks, and meanwhile saves substantial visual tokens on simpler tasks. We adopt reinforcement learning and propose the LLM-as-Judge strategy to successfully apply RL to general VQA tasks. Moreover, we carefully design a reward function and penalty mechanism to achieve a stable and reasonable image resize call ratio. Extensive experiments demonstrate the superiority, efficiency, and effectiveness of our method. Our code is available at https://github.com/dvlab-research/VisionThink.
|
2025-07-18T00:00:00 |
2507.13300
|
AbGen: Evaluating Large Language Models in Ablation Study Design and Evaluation for Scientific Research
|
[
"Yilun Zhao",
"Weiyuan Chen",
"Zhijian Xu",
"Manasi Patwardhan",
"Yixin Liu",
"Chengye Wang",
"Lovekesh Vig",
"Arman Cohan"
] |
We introduce AbGen, the first benchmark designed to evaluate the capabilities of LLMs in designing ablation studies for scientific research. AbGen consists of 1,500 expert-annotated examples derived from 807 NLP papers. In this benchmark, LLMs are tasked with generating detailed ablation study designs for a specified module or process based on the given research context. Our evaluation of leading LLMs, such as DeepSeek-R1-0528 and o4-mini, highlights a significant performance gap between these models and human experts in terms of the importance, faithfulness, and soundness of the ablation study designs. Moreover, we demonstrate that current automated evaluation methods are not reliable for our task, as they show a significant discrepancy when compared to human assessment. To better investigate this, we develop AbGen-Eval, a meta-evaluation benchmark designed to assess the reliability of commonly used automated evaluation systems in measuring LLM performance on our task. We investigate various LLM-as-Judge systems on AbGen-Eval, providing insights for future research on developing more effective and reliable LLM-based evaluation systems for complex scientific tasks.
|
|
2025-07-18T00:00:00 |
2507.04984
|
TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation
|
[
"Zonglin Lyu",
"Chen Chen"
] |
Video Frame Interpolation (VFI) aims to predict the intermediate frame I_n (we use n to denote time in videos to avoid notation overload with the timestep t in diffusion models) based on two consecutive neighboring frames I_0 and I_1. Recent approaches apply diffusion models (both image-based and video-based) in this task and achieve strong performance. However, image-based diffusion models are unable to extract temporal information and are relatively inefficient compared to non-diffusion methods. Video-based diffusion models can extract temporal information, but they are too large in terms of training scale, model size, and inference time. To mitigate the above issues, we propose Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation (TLB-VFI), an efficient video-based diffusion model. By extracting rich temporal information from video inputs through our proposed 3D-wavelet gating and temporal-aware autoencoder, our method achieves a 20% improvement in FID on the most challenging datasets over recent SOTA image-based diffusion models. Meanwhile, due to the existence of rich temporal information, our method achieves strong performance while having 3× fewer parameters. Such a parameter reduction results in a 2.3× speedup. By incorporating optical flow guidance, our method requires 9,000× less training data and uses over 20× fewer parameters than video-based diffusion models. Codes and results are available at our project page: https://zonglinl.github.io/tlbvfi_page.
|
|
2025-07-18T00:00:00 |
2507.12956
|
FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers
|
[
"Qiang Wang",
"Mengchao Wang",
"Fan Jiang",
"Yaqi Fan",
"Yonggang Qi",
"Mu Xu"
] |
Producing expressive facial animations from static images is a challenging task. Prior methods relying on explicit geometric priors (e.g., facial landmarks or 3DMM) often suffer from artifacts in cross-reenactment and struggle to capture subtle emotions. Furthermore, existing approaches lack support for multi-character animation, as driving features from different individuals frequently interfere with one another, complicating the task. To address these challenges, we propose FantasyPortrait, a diffusion-transformer-based framework capable of generating high-fidelity and emotion-rich animations for both single- and multi-character scenarios. Our method introduces an expression-augmented learning strategy that utilizes implicit representations to capture identity-agnostic facial dynamics, enhancing the model's ability to render fine-grained emotions. For multi-character control, we design a masked cross-attention mechanism that ensures independent yet coordinated expression generation, effectively preventing feature interference. To advance research in this area, we propose the Multi-Expr dataset and ExprBench, which are specifically designed for training and evaluating multi-character portrait animations. Extensive experiments demonstrate that FantasyPortrait significantly outperforms state-of-the-art methods in both quantitative metrics and qualitative evaluations, excelling particularly in challenging cross-reenactment and multi-character contexts. Our project page is https://fantasy-amap.github.io/fantasy-portrait/.
|
|
2025-07-18T00:00:00 |
2507.13344
|
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models
|
[
"Yudong Jin",
"Sida Peng",
"Xuan Wang",
"Tao Xie",
"Zhen Xu",
"Yifan Yang",
"Yujun Shen",
"Hujun Bao",
"Xiaowei Zhou"
] |
This paper addresses the challenge of high-fidelity view synthesis of humans with sparse-view videos as input. Previous methods solve the issue of insufficient observation by leveraging 4D diffusion models to generate videos at novel viewpoints. However, the generated videos from these models often lack spatio-temporal consistency, thus degrading view synthesis quality. In this paper, we propose a novel sliding iterative denoising process to enhance the spatio-temporal consistency of the 4D diffusion model. Specifically, we define a latent grid in which each latent encodes the image, camera pose, and human pose for a certain viewpoint and timestamp, then alternately denoise the latent grid along spatial and temporal dimensions with a sliding window, and finally decode the videos at target viewpoints from the corresponding denoised latents. Through the iterative sliding, information flows sufficiently across the latent grid, allowing the diffusion model to obtain a large receptive field and thus enhance the 4D consistency of the output, while making the GPU memory consumption affordable. The experiments on the DNA-Rendering and ActorsHQ datasets demonstrate that our method is able to synthesize high-quality and consistent novel-view videos and significantly outperforms the existing approaches. See our project page for interactive demos and video results: https://diffuman4d.github.io/ .
|
|
2025-07-18T00:00:00 |
2507.12990
|
Teach Old SAEs New Domain Tricks with Boosting
|
[
"Nikita Koriagin",
"Yaroslav Aksenov",
"Daniil Laptev",
"Gleb Gerasimov",
"Nikita Balagansky",
"Daniil Gavrilov"
] |
Sparse Autoencoders have emerged as powerful tools for interpreting the internal representations of Large Language Models, yet they often fail to capture domain-specific features not prevalent in their training corpora. This paper introduces a residual learning approach that addresses this feature blindness without requiring complete retraining. We propose training a secondary SAE specifically to model the reconstruction error of a pretrained SAE on domain-specific texts, effectively capturing features missed by the primary model. By summing the outputs of both models during inference, we demonstrate significant improvements in both LLM cross-entropy and explained variance metrics across multiple specialized domains. Our experiments show that this method efficiently incorporates new domain knowledge into existing SAEs while maintaining their performance on general tasks. This approach enables researchers to selectively enhance SAE interpretability for specific domains of interest, opening new possibilities for targeted mechanistic interpretability of LLMs.
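A minimal PyTorch sketch of the residual-boosting recipe follows: a frozen primary SAE, a secondary SAE fit to its reconstruction error on domain activations, and summed outputs at inference. Whether the secondary SAE encodes the raw activations or the error itself, and the specific sparsity penalty, are assumptions here rather than details taken from the paper.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """A minimal ReLU sparse autoencoder over LLM residual-stream activations."""

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.relu(self.encoder(x)))


def train_residual_sae(primary: SparseAutoencoder,
                       residual: SparseAutoencoder,
                       domain_acts: torch.Tensor,
                       l1_coeff: float = 1e-3,
                       steps: int = 1000,
                       lr: float = 1e-4) -> None:
    """Fit the secondary SAE to the frozen primary SAE's reconstruction error."""
    primary.eval()
    opt = torch.optim.Adam(residual.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            error = domain_acts - primary(domain_acts)   # what the primary misses
        hidden = torch.relu(residual.encoder(domain_acts))
        recon = residual.decoder(hidden)
        loss = ((recon - error) ** 2).mean() + l1_coeff * hidden.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


def boosted_reconstruction(primary: SparseAutoencoder,
                           residual: SparseAutoencoder,
                           x: torch.Tensor) -> torch.Tensor:
    """At inference, sum the two models' outputs, as in the boosting scheme above."""
    with torch.no_grad():
        return primary(x) + residual(x)
```

Because only the secondary SAE receives gradients, the primary model's behavior on general text is left untouched, which is the property the abstract highlights.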
|
|
2025-07-18T00:00:00 |
2507.13255
|
Automating Steering for Safe Multimodal Large Language Models
|
[
"Lyucheng Wu",
"Mengru Wang",
"Ziwen Xu",
"Tri Cao",
"Nay Oo",
"Bryan Hooi",
"Shumin Deng"
] |
Recent progress in Multimodal Large Language Models (MLLMs) has unlocked powerful cross-modal reasoning abilities, but also raised new safety concerns, particularly when faced with adversarial multimodal inputs. To improve the safety of MLLMs during inference, we introduce a modular and adaptive inference-time intervention technology, AutoSteer, without requiring any fine-tuning of the underlying model. AutoSteer incorporates three core components: (1) a novel Safety Awareness Score (SAS) that automatically identifies the most safety-relevant distinctions among the model's internal layers; (2) an adaptive safety prober trained to estimate the likelihood of toxic outputs from intermediate representations; and (3) a lightweight Refusal Head that selectively intervenes to modulate generation when safety risks are detected. Experiments on LLaVA-OV and Chameleon across diverse safety-critical benchmarks demonstrate that AutoSteer significantly reduces the Attack Success Rate (ASR) for textual, visual, and cross-modal threats, while maintaining general abilities. These findings position AutoSteer as a practical, interpretable, and effective framework for safer deployment of multimodal AI systems.
|
|
2025-07-18T00:00:00 |
2507.13264
|
Voxtral
|
[
"Alexander H. Liu",
"Andy Ehrenberg",
"Andy Lo",
"Clément Denoix",
"Corentin Barreau",
"Guillaume Lample",
"Jean-Malo Delignon",
"Khyathi Raghavi Chandu",
"Patrick von Platen",
"Pavankumar Reddy Muddireddy",
"Sanchit Gandhi",
"Soham Ghosh",
"Srijan Mishra",
"Thomas Foubert",
"Abhinav Rastogi",
"Adam Yang",
"Albert Q. Jiang",
"Alexandre Sablayrolles",
"Amélie Héliou",
"Amélie Martin",
"Anmol Agarwal",
"Antoine Roux",
"Arthur Darcet",
"Arthur Mensch",
"Baptiste Bout",
"Baptiste Rozière",
"Baudouin De Monicault",
"Chris Bamford",
"Christian Wallenwein",
"Christophe Renaudin",
"Clémence Lanfranchi",
"Darius Dabert",
"Devendra Singh Chaplot",
"Devon Mizelle",
"Diego de las Casas",
"Elliot Chane-Sane",
"Emilien Fugier",
"Emma Bou Hanna",
"Gabrielle Berrada",
"Gauthier Delerce",
"Gauthier Guinet",
"Georgii Novikov",
"Guillaume Martin",
"Himanshu Jaju",
"Jan Ludziejewski",
"Jason Rute",
"Jean-Hadrien Chabran",
"Jessica Chudnovsky",
"Joachim Studnia",
"Joep Barmentlo",
"Jonas Amar",
"Josselin Somerville Roberts",
"Julien Denize",
"Karan Saxena",
"Karmesh Yadav",
"Kartik Khandelwal",
"Kush Jain",
"Lélio Renard Lavaud",
"Léonard Blier",
"Lingxiao Zhao",
"Louis Martin",
"Lucile Saulnier",
"Luyu Gao",
"Marie Pellat",
"Mathilde Guillaumin",
"Mathis Felardos",
"Matthieu Dinot",
"Maxime Darrin",
"Maximilian Augustin",
"Mickaël Seznec",
"Neha Gupta",
"Nikhil Raghuraman",
"Olivier Duchenne",
"Patricia Wang",
"Patryk Saffer",
"Paul Jacob",
"Paul Wambergue",
"Paula Kurylowicz",
"Philomène Chagniot",
"Pierre Stock",
"Pravesh Agrawal",
"Rémi Delacourt",
"Romain Sauvestre",
"Roman Soletskyi",
"Sagar Vaze",
"Sandeep Subramanian",
"Saurabh Garg",
"Shashwat Dalal",
"Siddharth Gandhi",
"Sumukh Aithal",
"Szymon Antoniak",
"Teven Le Scao",
"Thibault Schueller",
"Thibaut Lavril",
"Thomas Robert",
"Thomas Wang",
"Timothée Lacroix",
"Tom Bewley",
"Valeriia Nemychnikova",
"Victor Paltz",
"Virgile Richard",
"Wen-Ding Li",
"William Marshall",
"Xuanyu Zhang",
"Yihan Wan",
"Yunhao Tang"
] |
We present Voxtral Mini and Voxtral Small, two multimodal audio chat models. Voxtral is trained to comprehend both spoken audio and text documents, achieving state-of-the-art performance across a diverse range of audio benchmarks, while preserving strong text capabilities. Voxtral Small outperforms a number of closed-source models, while being small enough to run locally. A 32K context window enables the model to handle audio files up to 40 minutes in duration and long multi-turn conversations. We also contribute three benchmarks for evaluating speech understanding models on knowledge and trivia. Both Voxtral models are released under Apache 2.0 license.
|
|
2025-07-18T00:00:00 |
2507.11589
|
Einstein Fields: A Neural Perspective To Computational General Relativity
|
[
"Sandeep Suresh Cranganore",
"Andrei Bodnar",
"Arturs Berzins",
"Johannes Brandstetter"
] |
https://github.com/AndreiB137/EinFields
|
We introduce Einstein Fields, a neural representation that is designed to compress computationally intensive four-dimensional numerical relativity simulations into compact implicit neural network weights. By modeling the metric, which is the core tensor field of general relativity, Einstein Fields enable the derivation of physical quantities via automatic differentiation. However, unlike conventional neural fields (e.g., signed distance, occupancy, or radiance fields), Einstein Fields are Neural Tensor Fields with the key difference that when encoding the spacetime geometry of general relativity into neural field representations, dynamics emerge naturally as a byproduct. Einstein Fields show remarkable potential, including continuum modeling of 4D spacetime, mesh-agnosticity, storage efficiency, derivative accuracy, and ease of use. We address these challenges across several canonical test beds of general relativity and release an open source JAX-based library, paving the way for more scalable and expressive approaches to numerical relativity. Code is made available at https://github.com/AndreiB137/EinFields
|
2025-07-18T00:00:00 |
2507.12142
|
RiemannLoRA: A Unified Riemannian Framework for Ambiguity-Free LoRA Optimization
|
[
"Vladimir Bogachev",
"Vladimir Aletov",
"Alexander Molozhavenko",
"Denis Bobkov",
"Vera Soboleva",
"Aibek Alanov",
"Maxim Rakhuba"
] |
Low-Rank Adaptation (LoRA) has become a widely adopted standard for parameter-efficient fine-tuning of large language models (LLMs), significantly reducing memory and computational demands. However, challenges remain, including finding optimal initialization strategies or mitigating overparametrization in low-rank matrix factorization. In this work, we propose a novel approach that addresses both of the challenges simultaneously within a unified framework. Our method treats a set of fixed-rank LoRA matrices as a smooth manifold. Considering adapters as elements on this manifold removes overparametrization, while determining the direction of the fastest loss decrease along the manifold provides initialization. Special care is taken to obtain numerically stable and computationally efficient implementation of our method, using best practices from numerical linear algebra and Riemannian optimization. Experimental results on LLM and diffusion model architectures demonstrate that RiemannLoRA consistently improves both convergence speed and final performance over standard LoRA and its state-of-the-art modifications.
|
|
2025-07-21T00:00:00 |
2507.10605
|
RedOne: Revealing Domain-specific LLM Post-Training in Social Networking Services
|
[
"Fei Zhao",
"Chonggang Lu",
"Yue Wang",
"Zheyong Xie",
"Ziyan Liu",
"Haofu Qian",
"JianZhao Huang",
"Fangcheng Shi",
"Zijie Meng",
"Hongcheng Guo",
"Mingqian He",
"Xinze Lyu",
"Yiming Lu",
"Ziyang Xiang",
"Zheyu Ye",
"Chengqiang Lu",
"Zhe Xu",
"Yi Wu",
"Yao Hu",
"Yan Gao",
"Jun Fan",
"Xiaolong Jiang",
"Weiting Liu",
"Boyang Wang",
"Shaosheng Cao"
] |
As a primary medium for modern information dissemination, social networking services (SNS) have experienced rapid growth, which has posed significant challenges for platform content management and interaction quality improvement. Recently, the development of large language models (LLMs) has offered potential solutions, but existing studies focus on isolated tasks, which not only encounter diminishing benefits from data scaling within individual scenarios but also fail to adapt flexibly to diverse real-world contexts. To address these challenges, we introduce RedOne, a domain-specific LLM designed to break the performance bottleneck of single-task baselines and establish a comprehensive foundation for SNS. RedOne was developed through a three-stage training strategy consisting of continued pretraining, supervised fine-tuning, and preference optimization, using a large-scale real-world dataset. Across extensive experiments, RedOne maintains strong general capabilities and achieves an average improvement of up to 14.02% across 8 major SNS tasks and 7.56% on an SNS bilingual evaluation benchmark, compared with base models. Furthermore, in online testing, RedOne reduced the exposure rate in harmful content detection by 11.23% and improved the click page rate in post-view search by 14.95% compared with single-task fine-tuned baseline models. These results establish RedOne as a robust domain-specific LLM for SNS, demonstrating excellent generalization across various tasks and promising applicability in real-world scenarios.
|
|
2025-07-21T00:00:00 |
2507.11097
|
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs
|
[
"Zichen Wen",
"Jiashu Qu",
"Dongrui Liu",
"Zhiyuan Liu",
"Ruixi Wu",
"Yicun Yang",
"Xiangqi Jin",
"Haoyun Xu",
"Xuyang Liu",
"Weijia Li",
"Chaochao Lu",
"Jing Shao",
"Conghui He",
"Linfeng Zhang"
] |
https://github.com/ZichenWen1/DIJA
|
Diffusion-based large language models (dLLMs) have recently emerged as a powerful alternative to autoregressive LLMs, offering faster inference and greater interactivity via parallel decoding and bidirectional modeling. However, despite strong performance in code generation and text infilling, we identify a fundamental safety concern: existing alignment mechanisms fail to safeguard dLLMs against context-aware, masked-input adversarial prompts, exposing novel vulnerabilities. To this end, we present DIJA, the first systematic study and jailbreak attack framework that exploits unique safety weaknesses of dLLMs. Specifically, our proposed DIJA constructs adversarial interleaved mask-text prompts that exploit the text generation mechanisms of dLLMs, i.e., bidirectional modeling and parallel decoding. Bidirectional modeling drives the model to produce contextually consistent outputs for masked spans, even when harmful, while parallel decoding limits the model's ability to dynamically filter or reject unsafe content. This causes standard alignment mechanisms to fail, enabling harmful completions in alignment-tuned dLLMs, even when harmful behaviors or unsafe instructions are directly exposed in the prompt. Through comprehensive experiments, we demonstrate that DIJA significantly outperforms existing jailbreak methods, exposing a previously overlooked threat surface in dLLM architectures. Notably, our method achieves up to 100% keyword-based ASR on Dream-Instruct, surpassing the strongest prior baseline, ReNeLLM, by up to 78.5% in evaluator-based ASR on JailbreakBench and by 37.7 points in StrongREJECT score, while requiring no rewriting or hiding of harmful content in the jailbreak prompt. Our findings underscore the urgent need to rethink safety alignment in this emerging class of language models. Code is available at https://github.com/ZichenWen1/DIJA.
|
2025-07-21T00:00:00 |
2507.12566
|
Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models
|
[
"Gen Luo",
"Wenhan Dou",
"Wenhao Li",
"Zhaokai Wang",
"Xue Yang",
"Changyao Tian",
"Hao Li",
"Weiyun Wang",
"Wenhai Wang",
"Xizhou Zhu",
"Yu Qiao",
"Jifeng Dai"
] |
https://github.com/OpenGVLab/Mono-InternVL
|
This paper focuses on monolithic Multimodal Large Language Models (MLLMs), which integrate visual encoding and language decoding into a single model. Existing structures and pre-training strategies for monolithic MLLMs often suffer from unstable optimization and catastrophic forgetting. To address these challenges, our key idea is to embed a new visual parameter space into a pre-trained LLM, enabling stable learning of visual knowledge from noisy data via delta tuning. Based on this principle, we first introduce Mono-InternVL, an advanced monolithic MLLM that incorporates a set of visual experts through a multimodal mixture-of-experts architecture. In addition, we design an innovative Endogenous Visual Pre-training (EViP) for Mono-InternVL to maximize its visual capabilities via progressive learning. Mono-InternVL achieves competitive performance against existing MLLMs, but it also incurs a relatively high data cost. Therefore, we further present Mono-InternVL-1.5, a cheaper and stronger monolithic MLLM equipped with an improved EViP (EViP++). EViP++ introduces additional visual attention experts to Mono-InternVL-1.5 and re-organizes the pre-training process in an efficient manner. During inference, it includes a fused CUDA kernel to speed up its MoE operations. With these designs, Mono-InternVL-1.5 significantly reduces training and inference costs, while still maintaining competitive performance with Mono-InternVL. To evaluate our approach, we conduct extensive experiments across 15 benchmarks. Results demonstrate that Mono-InternVL outperforms existing monolithic MLLMs on 12 out of 15 benchmarks, e.g., a +114-point improvement over Emu3 on OCRBench. Compared to its modular counterpart, i.e., InternVL-1.5, Mono-InternVL-1.5 achieves similar multimodal performance while reducing first-token latency by up to 69%. Code and models are released at https://github.com/OpenGVLab/Mono-InternVL.
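A rough sketch of the "visual experts in a multimodal mixture-of-experts" idea is given below: text tokens pass through the frozen pre-trained FFN while image tokens are routed to a newly added visual expert. The class name, argument names, and freezing scheme are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class ModalitySplitFFN(nn.Module):
    """Route tokens by modality: frozen language FFN for text tokens,
    trainable visual expert for image tokens (illustrative sketch)."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.text_ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.visual_ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        for p in self.text_ffn.parameters():
            p.requires_grad = False  # keep the pre-trained language pathway intact

    def forward(self, x: torch.Tensor, is_visual: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) token states; is_visual: (B, N) boolean mask marking image tokens
        return torch.where(is_visual.unsqueeze(-1), self.visual_ffn(x), self.text_ffn(x))
```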
|
2025-07-21T00:00:00 |
2507.14137
|
Franca: Nested Matryoshka Clustering for Scalable Visual Representation Learning
|
[
"Shashanka Venkataramanan",
"Valentinos Pariza",
"Mohammadreza Salehi",
"Lukas Knobel",
"Spyros Gidaris",
"Elias Ramzi",
"Andrei Bursuc",
"Yuki M. Asano"
] |
https://github.com/valeoai/Franca
|
We present Franca (pronounced Fran-ka): free one; the first fully open-source (data, code, weights) vision foundation model that matches and in many cases surpasses the performance of state-of-the-art proprietary models, e.g., DINOv2, CLIP, SigLIPv2, etc. Our approach is grounded in a transparent training pipeline inspired by Web-SSL and uses publicly available data: ImageNet-21K and a subset of ReLAION-2B. Beyond model release, we tackle critical limitations in SSL clustering methods. While modern models rely on assigning image features to large codebooks via clustering algorithms like Sinkhorn-Knopp, they fail to account for the inherent ambiguity in clustering semantics. To address this, we introduce a parameter-efficient, multi-head clustering projector based on nested Matryoshka representations. This design progressively refines features into increasingly fine-grained clusters without increasing the model size, enabling both performance and memory efficiency. Additionally, we propose a novel positional disentanglement strategy that explicitly removes positional biases from dense representations, thereby improving the encoding of semantic content. This leads to consistent gains on several downstream benchmarks, demonstrating the utility of cleaner feature spaces. Our contributions establish a new standard for transparent, high-performance vision models and open a path toward more reproducible and generalizable foundation models for the broader AI community. The code and model checkpoints are available at https://github.com/valeoai/Franca.
|
2025-07-21T00:00:00 |
2507.13391
|
Quantitative Risk Management in Volatile Markets with an Expectile-Based Framework for the FTSE Index
|
[
"Abiodun Finbarrs Oketunji"
] |
This research presents a framework for quantitative risk management in volatile markets, specifically focusing on expectile-based methodologies applied to the FTSE 100 index. Traditional risk measures such as Value-at-Risk (VaR) have demonstrated significant limitations during periods of market stress, as evidenced during the 2008 financial crisis and subsequent volatile periods. This study develops an advanced expectile-based framework that addresses the shortcomings of conventional quantile-based approaches by providing greater sensitivity to tail losses and improved stability in extreme market conditions. The research employs a dataset spanning two decades of FTSE 100 returns, incorporating periods of high volatility, market crashes, and recovery phases. Our methodology introduces novel mathematical formulations for expectile regression models, enhanced threshold determination techniques using time series analysis, and robust backtesting procedures. The empirical results demonstrate that expectile-based Value-at-Risk (EVaR) consistently outperforms traditional VaR measures across various confidence levels and market conditions. The framework exhibits superior performance during volatile periods, with reduced model risk and enhanced predictive accuracy. Furthermore, the study establishes practical implementation guidelines for financial institutions and provides evidence-based recommendations for regulatory compliance and portfolio management. The findings contribute significantly to the literature on financial risk management and offer practical tools for practitioners dealing with volatile market environments.
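For intuition, the tau-expectile underlying EVaR is the minimizer of an asymmetric squared loss and can be computed with a simple reweighted-mean fixed-point iteration. The sketch below is illustrative only: the returns are synthetic and the tau level is a placeholder, not values from the study.

```python
import numpy as np

def expectile(returns: np.ndarray, tau: float, n_iter: int = 200, tol: float = 1e-12) -> float:
    """Tau-expectile: minimizer of E[|tau - 1{r < m}| * (r - m)^2] over m,
    found by the standard reweighted-mean fixed-point iteration."""
    m = returns.mean()                                # the 0.5-expectile is the mean
    for _ in range(n_iter):
        w = np.where(returns < m, 1.0 - tau, tau)     # asymmetric weights
        m_new = np.sum(w * returns) / np.sum(w)       # stationarity condition
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Synthetic heavy-tailed daily returns (not FTSE data); tau chosen for illustration.
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=4, size=5000)
print(f"1% expectile (EVaR-style loss threshold): {expectile(r, tau=0.01):.4f}")
```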
|
|
2025-07-21T00:00:00 |
2507.13302
|
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations
|
[
"Carlos Arriaga",
"Gonzalo Martínez",
"Eneko Sendin",
"Javier Conde",
"Pedro Reviriego"
] |
Evaluating large language models is a complex task for which several approaches have been proposed. The most common is the use of automated benchmarks in which LLMs answer multiple-choice questions on different topics. However, this method has limitations, the most concerning being its poor correlation with human judgments. An alternative approach is to have humans evaluate the LLMs. This poses scalability issues, as the large and growing number of models makes it impractical (and costly) to run traditional studies that recruit evaluators and have them rank model responses. Another alternative is the use of public arenas, such as the popular LM Arena, in which any user can freely evaluate models on any question and rank the responses of two models. The results are then aggregated into a model ranking. An increasingly important aspect of LLMs is their energy consumption, so evaluating how energy awareness influences human decisions when selecting a model is of interest. In this paper, we present GEA, the Generative Energy Arena, an arena that incorporates information on each model's energy consumption into the evaluation process. Preliminary results obtained with GEA are also presented, showing that for most questions, when users are aware of the energy consumption, they favor smaller and more energy-efficient models. This suggests that for most user interactions, the extra cost and energy incurred by the more complex, top-performing models do not provide an increase in the perceived quality of the responses that justifies their use.
|
|
2025-07-21T00:00:00 |
2507.13984
|
CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models
|
[
"Quang-Binh Nguyen",
"Minh Luu",
"Quang Nguyen",
"Anh Tran",
"Khoi Nguyen"
] |
Disentangling content and style from a single image, known as content-style decomposition (CSD), enables recontextualization of extracted content and stylization of extracted styles, offering greater creative flexibility in visual synthesis. While recent personalization methods have explored explicit content-style decomposition, they remain tailored for diffusion models. Meanwhile, Visual Autoregressive Modeling (VAR) has emerged as a promising alternative with a next-scale prediction paradigm, achieving performance comparable to that of diffusion models. In this paper, we explore VAR as a generative framework for CSD, leveraging its scale-wise generation process for improved disentanglement. To this end, we propose CSD-VAR, a novel method that introduces three key innovations: (1) a scale-aware alternating optimization strategy that aligns content and style representations with their respective scales to enhance separation, (2) an SVD-based rectification method to mitigate content leakage into style representations, and (3) an Augmented Key-Value (K-V) memory that enhances content identity preservation. To benchmark this task, we introduce CSD-100, a dataset specifically designed for content-style decomposition, featuring diverse subjects rendered in various artistic styles. Experiments demonstrate that CSD-VAR outperforms prior approaches, achieving superior content preservation and stylization fidelity.
|
|
2025-07-21T00:00:00 |
2507.13563
|
A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models
|
[
"Kirill Borodin",
"Nikita Vasiliev",
"Vasiliy Kudryavtsev",
"Maxim Maslov",
"Mikhail Gorodnichev",
"Oleg Rogov",
"Grach Mkrtchian"
] |
Russian speech synthesis presents distinctive challenges, including vowel reduction, consonant devoicing, variable stress patterns, homograph ambiguity, and unnatural intonation. This paper introduces Balalaika, a novel dataset comprising more than 2,000 hours of studio-quality Russian speech with comprehensive textual annotations, including punctuation and stress markings. Experimental results show that models trained on Balalaika significantly outperform those trained on existing datasets in both speech synthesis and enhancement tasks. We detail the dataset construction pipeline, annotation methodology, and results of comparative evaluations.
|
|
2025-07-21T00:00:00 |
2507.12455
|
Mitigating Object Hallucinations via Sentence-Level Early Intervention
|
[
"Shangpin Peng",
"Senqiao Yang",
"Li Jiang",
"Zhuotao Tian"
] |
https://github.com/pspdada/SENTINEL
|
Multimodal large language models (MLLMs) have revolutionized cross-modal understanding but continue to struggle with hallucinations: fabricated content contradicting visual inputs. Existing hallucination mitigation methods either incur prohibitive computational costs or introduce distribution mismatches between training data and model outputs. We identify a critical insight: hallucinations predominantly emerge at the early stages of text generation and propagate through subsequent outputs. To address this, we propose **SENTINEL** (**S**entence-level **E**arly i**N**tervention **T**hrough **IN**-domain pr**E**ference **L**earning), a framework that eliminates dependency on human annotations. Specifically, we first bootstrap high-quality in-domain preference pairs by iteratively sampling model outputs, validating object existence through cross-checking with two open-vocabulary detectors, and classifying sentences into hallucinated/non-hallucinated categories. Subsequently, we use context-coherent positive samples and hallucinated negative samples to build context-aware preference data iteratively. Finally, we train models using a context-aware preference loss (C-DPO) that emphasizes discriminative learning at the sentence level where hallucinations initially manifest. Experimental results show that SENTINEL can reduce hallucinations by over 90% compared to the original model and outperforms the previous state-of-the-art method on both hallucination benchmarks and general capabilities benchmarks, demonstrating its superiority and generalization ability. The models, datasets, and code are available at https://github.com/pspdada/SENTINEL.
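The abstract describes C-DPO as a context-aware, sentence-level preference loss. The snippet below sketches only the generic DPO-style objective it builds on; the sentence-level log-probabilities and the beta value are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dpo_style_loss(pi_logp_pos: torch.Tensor, ref_logp_pos: torch.Tensor,
                   pi_logp_neg: torch.Tensor, ref_logp_neg: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """Generic DPO objective over (non-hallucinated, hallucinated) pairs.
    Inputs are assumed to be log-probabilities of a single candidate sentence
    given the shared preceding context, summed over its tokens."""
    pos_margin = pi_logp_pos - ref_logp_pos   # policy vs. reference on the preferred sentence
    neg_margin = pi_logp_neg - ref_logp_neg   # policy vs. reference on the hallucinated sentence
    return -F.logsigmoid(beta * (pos_margin - neg_margin)).mean()
```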
|
2025-07-21T00:00:00 |
2507.13158
|
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities
|
[
"Hao Sun",
"Mihaela van der Schaar"
] |
In the era of Large Language Models (LLMs), alignment has emerged as a fundamental yet challenging problem in the pursuit of more reliable, controllable, and capable machine intelligence. The recent success of reasoning models and conversational AI systems has underscored the critical role of reinforcement learning (RL) in enhancing these systems, driving increased research interest at the intersection of RL and LLM alignment. This paper provides a comprehensive review of recent advances in LLM alignment through the lens of inverse reinforcement learning (IRL), emphasizing the distinctions between RL techniques employed in LLM alignment and those in conventional RL tasks. In particular, we highlight the necessity of constructing neural reward models from human data and discuss the formal and practical implications of this paradigm shift. We begin by introducing fundamental concepts in RL to provide a foundation for readers unfamiliar with the field. We then examine recent advances in this research agenda, discussing key challenges and opportunities in conducting IRL for LLM alignment. Beyond methodological considerations, we explore practical aspects, including datasets, benchmarks, evaluation metrics, infrastructure, and computationally efficient training and inference techniques. Finally, we draw insights from the literature on sparse-reward RL to identify open questions and potential research directions. By synthesizing findings from diverse studies, we aim to provide a structured and critical overview of the field, highlight unresolved challenges, and outline promising future directions for improving LLM alignment through RL and IRL techniques.
|
|
2025-07-21T00:00:00 |
2507.14129
|
OpenBEATs: A Fully Open-Source General-Purpose Audio Encoder
|
[
"Shikhar Bharadwaj",
"Samuele Cornell",
"Kwanghee Choi",
"Satoru Fukayama",
"Hye-jin Shim",
"Soham Deshmukh",
"Shinji Watanabe"
] |
Masked token prediction has emerged as a powerful pre-training objective across language, vision, and speech, offering the potential to unify these diverse modalities through a single pre-training task. However, its application to general audio understanding remains underexplored, with BEATs being the only notable example. BEATs has seen limited modifications due to the absence of open-source pre-training code. Furthermore, BEATs was trained only on AudioSet, restricting its broader downstream applicability. To address these gaps, we present OpenBEATs, an open-source framework that extends BEATs via multi-domain audio pre-training. We conduct comprehensive evaluations across six types of tasks, twenty-five datasets, and three audio domains, including audio reasoning tasks such as audio question answering, entailment, and captioning. OpenBEATs achieves state-of-the-art performance on six bioacoustics datasets, two environmental sound datasets, and five reasoning datasets, performing better than models exceeding a billion parameters at one-fourth their parameter size. These results demonstrate the effectiveness of multi-domain datasets and the masked token prediction task for learning general-purpose audio representations. To promote further research and reproducibility, we release all pre-training and evaluation code, pretrained and fine-tuned checkpoints, and training logs at https://shikhar-s.github.io/OpenBEATs
|
|
2025-07-22T00:00:00 |
2507.15061
|
WebShaper: Agentically Data Synthesizing via Information-Seeking Formalization
|
[
"Zhengwei Tao",
"Jialong Wu",
"Wenbiao Yin",
"Junkai Zhang",
"Baixuan Li",
"Haiyang Shen",
"Kuan Li",
"Liwen Zhang",
"Xinyu Wang",
"Yong Jiang",
"Pengjun Xie",
"Fei Huang",
"Jingren Zhou"
] |
The advent of Large Language Model (LLM)-powered agents has revolutionized artificial intelligence by enabling solutions to complex, open-ended tasks through web-based information-seeking (IS) capabilities. The scarcity of high-quality training data has limited the development of IS agents. Existing approaches typically adopt an information-driven paradigm that first collects web data and then generates questions based on the retrieved content. However, this may lead to inconsistencies between the information structure and the reasoning structure, and between questions and answers. To mitigate this, we propose WebShaper, a formalization-driven IS data synthesis framework for constructing datasets. WebShaper systematically formalizes IS tasks through set theory. Central to the formalization is the concept of Knowledge Projections (KP), which enables precise control over reasoning structure through compositions of KP operations. During synthesis, we begin by creating seed tasks and then apply a multi-step expansion process. At each step, an agentic Expander expands the current formal question into a more complex one using retrieval and validation tools grounded in our formalization. We train our model on the synthesized dataset. Experimental results demonstrate that WebShaper achieves state-of-the-art performance among open-source IS agents on the GAIA and WebWalkerQA benchmarks.
|
|
2025-07-22T00:00:00 |
2507.15846
|
GUI-G^2: Gaussian Reward Modeling for GUI Grounding
|
[
"Fei Tang",
"Zhangxuan Gu",
"Zhengxi Lu",
"Xuyang Liu",
"Shuheng Shen",
"Changhua Meng",
"Wen Wang",
"Wenqi Zhang",
"Yongliang Shen",
"Weiming Lu",
"Jun Xiao",
"Yueting Zhuang"
] |
Graphical User Interface (GUI) grounding maps natural language instructions to precise interface locations for autonomous interaction. Current reinforcement learning approaches use binary rewards that treat elements as hit-or-miss targets, creating sparse signals that ignore the continuous nature of spatial interactions. Motivated by human clicking behavior that naturally forms Gaussian distributions centered on target elements, we introduce GUI Gaussian Grounding Rewards (GUI-G^2), a principled reward framework that models GUI elements as continuous Gaussian distributions across the interface plane. GUI-G^2 incorporates two synergistic mechanisms: Gaussian point rewards model precise localization through exponentially decaying distributions centered on element centroids, while coverage rewards assess spatial alignment by measuring the overlap between predicted Gaussian distributions and target regions. To handle diverse element scales, we develop an adaptive variance mechanism that calibrates reward distributions based on element dimensions. This framework transforms GUI grounding from sparse binary classification to dense continuous optimization, where Gaussian distributions generate rich gradient signals that guide models toward optimal interaction positions. Extensive experiments across ScreenSpot, ScreenSpot-v2, and ScreenSpot-Pro benchmarks demonstrate that GUI-G^2 substantially outperforms the state-of-the-art method UI-TARS-72B, with the most significant improvement of 24.7% on ScreenSpot-Pro. Our analysis reveals that continuous modeling provides superior robustness to interface variations and enhanced generalization to unseen layouts, establishing a new paradigm for spatial reasoning in GUI interaction tasks.
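As a rough illustration of the Gaussian point reward and size-adaptive variance described above, the snippet below scores a predicted click against a target element's box. The specific variance scaling and function shape are assumptions, not the paper's exact definitions.

```python
import math

def gaussian_point_reward(pred_xy, target_box, sigma_scale=0.25):
    """Reward that decays smoothly with distance from the element centroid,
    with a per-axis standard deviation proportional to the element size
    (sigma_scale is an illustrative choice)."""
    x1, y1, x2, y2 = target_box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    sx = max(sigma_scale * (x2 - x1), 1e-6)
    sy = max(sigma_scale * (y2 - y1), 1e-6)
    dx, dy = pred_xy[0] - cx, pred_xy[1] - cy
    return math.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2))

# A click near the center of a 100x40 button scores close to 1.0, and the reward
# decays continuously (rather than dropping to 0) as the click drifts off-target.
print(gaussian_point_reward((148, 62), target_box=(100, 40, 200, 80)))
```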
|
|
2025-07-22T00:00:00 |
2507.15778
|
Stabilizing Knowledge, Promoting Reasoning: Dual-Token Constraints for RLVR
|
[
"Jiakang Wang",
"Runze Liu",
"Fuzheng Zhang",
"Xiu Li",
"Guorui Zhou"
] |
https://github.com/wizard-III/ArcherCodeR
|
Reinforcement Learning with Verifiable Rewards (RLVR) has become an effective post-training method for improving the reasoning abilities of Large Language Models (LLMs), mainly by shaping higher-order behaviors such as reflection and planning. However, previous RLVR algorithms often apply uniform training signals to all tokens, without considering the different roles of low-entropy knowledge-related tokens and high-entropy reasoning-related tokens. Some recent methods try to separate these token types by gradient masking or asynchronous updates, but these approaches may break semantic dependencies in the model output and hinder effective learning. In this work, we propose Archer, an entropy-aware RLVR approach with dual-token constraints and synchronous updates. Specifically, our method applies weaker KL regularization and higher clipping thresholds to reasoning tokens to encourage exploration, while using stronger constraints on knowledge tokens to maintain factual knowledge. Experimental results on several mathematical reasoning and code generation benchmarks show that our approach significantly outperforms previous RLVR methods, reaching or exceeding state-of-the-art performance among models of comparable size. The code is available at https://github.com/wizard-III/ArcherCodeR.
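To make the dual-token idea concrete, here is a minimal per-token loss sketch in which high-entropy tokens get a wider clip range and a weaker KL-to-reference penalty than low-entropy tokens. The entropy threshold, clip ranges, and KL coefficients are placeholders, not Archer's actual hyperparameters.

```python
import torch

def dual_token_ppo_loss(logp, old_logp, ref_logp, adv, token_entropy,
                        entropy_thresh=1.0,
                        clip_reason=0.28, clip_know=0.20,
                        kl_reason=0.001, kl_know=0.05):
    """Entropy-aware clipped objective: 'reasoning' (high-entropy) tokens are
    constrained less, 'knowledge' (low-entropy) tokens more. All tensors are
    per-token and share the same shape."""
    is_reason = (token_entropy > entropy_thresh).float()
    eps = is_reason * clip_reason + (1.0 - is_reason) * clip_know
    kl_coef = is_reason * kl_reason + (1.0 - is_reason) * kl_know

    ratio = torch.exp(logp - old_logp)                        # per-token importance ratio
    unclipped = ratio * adv
    clipped = torch.max(torch.min(ratio, 1.0 + eps), 1.0 - eps) * adv
    policy_loss = -torch.min(unclipped, clipped)

    log_r = ref_logp - logp                                   # k3 estimator of KL(pi || pi_ref)
    kl = torch.exp(log_r) - 1.0 - log_r
    return (policy_loss + kl_coef * kl).mean()
```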
|
2025-07-22T00:00:00 |
2507.14683
|
MiroMind-M1: An Open-Source Advancement in Mathematical Reasoning via Context-Aware Multi-Stage Policy Optimization
|
[
"Xingxuan Li",
"Yao Xiao",
"Dianwen Ng",
"Hai Ye",
"Yue Deng",
"Xiang Lin",
"Bin Wang",
"Zhanfeng Mo",
"Chong Zhang",
"Yueyi Zhang",
"Zonglin Yang",
"Ruilin Li",
"Lei Lei",
"Shihao Xu",
"Han Zhao",
"Weiling Chen",
"Feng Ji",
"Lidong Bing"
] |
Large language models have recently evolved from fluent text generation to advanced reasoning across diverse domains, giving rise to reasoning language models (RLMs). Among these domains, mathematical reasoning serves as a representative benchmark as it requires precise multi-step logic and abstract reasoning, which can be generalized to other tasks. While closed-source RLMs such as GPT-o3 demonstrate impressive reasoning capabilities, their proprietary nature limits transparency and reproducibility. Although many open-source projects aim to close this gap, most of them lack sufficient openness by omitting critical resources such as datasets and detailed training configurations, which hinders reproducibility. To contribute toward greater transparency in RLM development, we introduce the MiroMind-M1 series, a set of fully open-source RLMs built on the Qwen-2.5 backbone that match or exceed the performance of existing open-source RLMs. Specifically, our models are trained in two stages: SFT on a carefully curated corpus of 719K math-reasoning problems with verified CoT trajectories, followed by RLVR on 62K challenging and verifiable problems. To enhance the robustness and efficiency of the RLVR process, we introduce Context-Aware Multi-Stage Policy Optimization, an algorithm that integrates length-progressive training with an adaptive repetition penalty to encourage context-aware RL training. Our model achieves state-of-the-art or competitive performance and superior token efficiency among Qwen-2.5-based open-source 7B and 32B models on the AIME24, AIME25, and MATH benchmarks. To facilitate reproducibility, we release the complete stack: models (MiroMind-M1-SFT-7B, MiroMind-M1-RL-7B, MiroMind-M1-RL-32B); datasets (MiroMind-M1-SFT-719K, MiroMind-M1-RL-62K); and all training and evaluation configurations. We hope these resources will support further research and foster community advancement.
|
|
2025-07-22T00:00:00 |
2507.11539
|
Streaming 4D Visual Geometry Transformer
|
[
"Dong Zhuo",
"Wenzhao Zheng",
"Jiahe Guo",
"Yuqi Wu",
"Jie Zhou",
"Jiwen Lu"
] |
https://github.com/wzzheng/StreamVGGT
|
Perceiving and reconstructing 4D spatial-temporal geometry from videos is a fundamental yet challenging computer vision task. To facilitate interactive and real-time applications, we propose a streaming 4D visual geometry transformer that shares a similar philosophy with autoregressive large language models. We explore a simple and efficient design and employ a causal transformer architecture to process the input sequence in an online manner. We use temporal causal attention and cache the historical keys and values as implicit memory to enable efficient streaming long-term 4D reconstruction. This design can handle real-time 4D reconstruction by incrementally integrating historical information while maintaining high-quality spatial consistency. For efficient training, we propose to distill knowledge from the dense bidirectional visual geometry grounded transformer (VGGT) to our causal model. For inference, our model supports the migration of optimized attention operators (e.g., FlashAttention) from the field of large language models. Extensive experiments on various 4D geometry perception benchmarks demonstrate that our model increases the inference speed in online scenarios while maintaining competitive performance, paving the way for scalable and interactive 4D vision systems. Code is available at: https://github.com/wzzheng/StreamVGGT.
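The streaming design boils down to caching past keys and values and letting each new frame's tokens attend to the cache plus themselves (bidirectional within the frame, causal across frames). The sketch below is a bare single-head version with positional encoding and head splitting omitted; it is not the released implementation.

```python
import torch

class StreamingKVCache:
    """Append each frame's K/V to an implicit memory and attend causally
    across frames while remaining bidirectional within the current frame."""
    def __init__(self):
        self.k = None
        self.v = None

    def step(self, q, k, v):
        # q, k, v: (B, N_frame_tokens, D) for the incoming frame
        self.k = k if self.k is None else torch.cat([self.k, k], dim=1)
        self.v = v if self.v is None else torch.cat([self.v, v], dim=1)
        scores = q @ self.k.transpose(1, 2) / (q.shape[-1] ** 0.5)
        return torch.softmax(scores, dim=-1) @ self.v    # (B, N_frame_tokens, D)
```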
|
2025-07-22T00:00:00 |
2507.15815
|
LLM Economist: Large Population Models and Mechanism Design in Multi-Agent Generative Simulacra
|
[
"Seth Karten",
"Wenzhe Li",
"Zihan Ding",
"Samuel Kleiner",
"Yu Bai",
"Chi Jin"
] |
We present the LLM Economist, a novel framework that uses agent-based modeling to design and assess economic policies in strategic environments with hierarchical decision-making. At the lower level, bounded rational worker agents -- instantiated as persona-conditioned prompts sampled from U.S. Census-calibrated income and demographic statistics -- choose labor supply to maximize text-based utility functions learned in-context. At the upper level, a planner agent employs in-context reinforcement learning to propose piecewise-linear marginal tax schedules anchored to the current U.S. federal brackets. This construction endows economic simulacra with three capabilities requisite for credible fiscal experimentation: (i) optimization of heterogeneous utilities, (ii) principled generation of large, demographically realistic agent populations, and (iii) mechanism design -- the ultimate nudging problem -- expressed entirely in natural language. Experiments with populations of up to one hundred interacting agents show that the planner converges near Stackelberg equilibria that improve aggregate social welfare relative to Saez solutions, while a periodic, persona-level voting procedure furthers these gains under decentralized governance. These results demonstrate that large language model-based agents can jointly model, simulate, and govern complex economic systems, providing a tractable test bed for policy evaluation at the societal scale to help build better civilizations.
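Since the planner proposes piecewise-linear marginal tax schedules over income brackets, a helper like the one below could turn a proposed schedule into each worker agent's tax liability. The bracket bounds and rates shown are placeholders for illustration only, not calibrated policy values.

```python
def tax_owed(income: float, brackets: list[tuple[float, float]]) -> float:
    """brackets: (lower_bound, marginal_rate) pairs sorted by lower_bound;
    each rate applies to income above its bound up to the next bound."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income <= lo:
            break
        tax += rate * (min(income, hi) - lo)
    return tax

# Placeholder schedule and income (not real policy or simulation values).
schedule = [(0, 0.10), (11_000, 0.12), (44_725, 0.22), (95_375, 0.24)]
print(tax_owed(60_000, schedule))  # 0.10*11000 + 0.12*33725 + 0.22*15275
```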
|
|
2025-07-22T00:00:00 |
2507.15375
|
STITCH: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models
|
[
"Cheng-Han Chiang",
"Xiaofei Wang",
"Linjie Li",
"Chung-Ching Lin",
"Kevin Lin",
"Shujie Liu",
"Zhendong Wang",
"Zhengyuan Yang",
"Hung-yi Lee",
"Lijuan Wang"
] |
Spoken Language Models (SLMs) are designed to take speech inputs and produce spoken responses. However, current SLMs lack the ability to perform an internal, unspoken thinking process before responding. In contrast, humans typically engage in complex mental reasoning internally, enabling them to communicate ideas clearly and concisely. Thus, integrating an unspoken thought process into SLMs is highly desirable. While naively generating complete chain-of-thought (CoT) reasoning before starting to talk can enable thinking for SLMs, this induces additional latency for the speech response, as the CoT reasoning can be arbitrarily long. To solve this issue, we propose Stitch, a novel generation method that alternates between the generation of unspoken reasoning chunks and spoken response chunks. Since the audio duration of a spoken response chunk is much longer than the time needed to generate its tokens, we use the remaining free time to generate the unspoken reasoning tokens. When a chunk of audio is played to the user, the model continues to generate the next unspoken reasoning chunk, achieving simultaneous thinking and talking. Remarkably, Stitch matches the latency of baselines that cannot generate unspoken CoT by design while outperforming those baselines by 15% on math reasoning datasets; Stitch also performs equally well on non-reasoning datasets as those baseline models. Some animations and demonstrations are on the project page: https://d223302.github.io/STITCH.
|
|
2025-07-22T00:00:00 |
2507.15493
|
GR-3 Technical Report
|
[
"Chilam Cheang",
"Sijin Chen",
"Zhongren Cui",
"Yingdong Hu",
"Liqun Huang",
"Tao Kong",
"Hang Li",
"Yifeng Li",
"Yuxiao Liu",
"Xiao Ma",
"Hao Niu",
"Wenxuan Ou",
"Wanli Peng",
"Zeyu Ren",
"Haixin Shi",
"Jiawen Tian",
"Hongtao Wu",
"Xin Xiao",
"Yuyang Xiao",
"Jiafeng Xu",
"Yichu Yang"
] |
We report our recent progress towards building generalist robot policies, the development of GR-3. GR-3 is a large-scale vision-language-action (VLA) model. It showcases exceptional capabilities in generalizing to novel objects, environments, and instructions involving abstract concepts. Furthermore, it can be efficiently fine-tuned with minimal human trajectory data, enabling rapid and cost-effective adaptation to new settings. GR-3 also excels in handling long-horizon and dexterous tasks, including those requiring bi-manual manipulation and mobile movement, showcasing robust and reliable performance. These capabilities are achieved through a multi-faceted training recipe that includes co-training with web-scale vision-language data, efficient fine-tuning from human trajectory data collected via VR devices, and effective imitation learning with robot trajectory data. In addition, we introduce ByteMini, a versatile bi-manual mobile robot designed with exceptional flexibility and reliability, capable of accomplishing a wide range of tasks when integrated with GR-3. Through extensive real-world experiments, we show GR-3 surpasses the state-of-the-art baseline method, pi_0, on a wide variety of challenging tasks. We hope GR-3 can serve as a step towards building generalist robots capable of assisting humans in daily life.
|
|
2025-07-22T00:00:00 |
2507.15028
|
Towards Video Thinking Test: A Holistic Benchmark for Advanced Video Reasoning and Understanding
|
[
"Yuanhan Zhang",
"Yunice Chew",
"Yuhao Dong",
"Aria Leo",
"Bo Hu",
"Ziwei Liu"
] |
Human intelligence requires correctness and robustness, with the former being foundational for the latter. In video understanding, correctness ensures the accurate interpretation of visual content, and robustness maintains consistent performance in challenging conditions. Despite advances in video large language models (video LLMs), existing benchmarks inadequately reflect the gap between these models and human intelligence in maintaining correctness and robustness in video interpretation. We introduce the Video Thinking Test (Video-TT), to assess if video LLMs can interpret real-world videos as effectively as humans. Video-TT reflects genuine gaps in understanding complex visual narratives, and evaluates robustness against natural adversarial questions. Video-TT comprises 1,000 YouTube Shorts videos, each with one open-ended question and four adversarial questions that probe visual and narrative complexity. Our evaluation shows a significant gap between video LLMs and human performance.
|
|
2025-07-22T00:00:00 |
2507.11061
|
Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling
|
[
"Hayeon Kim",
"Ji Ha Jang",
"Se Young Chun"
] |
Recent advances in 3D neural representations and instance-level editing models have enabled the efficient creation of high-quality 3D content. However, achieving precise local 3D edits remains challenging, especially for Gaussian Splatting, due to inconsistent multi-view 2D part segmentations and the inherently ambiguous nature of the Score Distillation Sampling (SDS) loss. To address these limitations, we propose RoMaP, a novel local 3D Gaussian editing framework that enables precise and drastic part-level modifications. First, we introduce a robust 3D mask generation module with our 3D-Geometry Aware Label Prediction (3D-GALP), which uses spherical harmonics (SH) coefficients to model view-dependent label variations and soft-label properties, yielding accurate and consistent part segmentations across viewpoints. Second, we propose a regularized SDS loss that combines the standard SDS loss with additional regularizers. In particular, an L1 anchor loss is introduced via our Scheduled Latent Mixing and Part (SLaMP) editing method, which generates high-quality part-edited 2D images and confines modifications only to the target region while preserving contextual coherence. Additional regularizers, such as Gaussian prior removal, further improve flexibility by allowing changes beyond the existing context, and robust 3D masking prevents unintended edits. Experimental results demonstrate that our RoMaP achieves state-of-the-art local 3D editing on both reconstructed and generated Gaussian scenes and objects qualitatively and quantitatively, enabling more robust and flexible part-level 3D Gaussian editing. Code is available at https://janeyeon.github.io/romap.
|
|
2025-07-22T00:00:00 |
2507.15856
|
Latent Denoising Makes Good Visual Tokenizers
|
[
"Jiawei Yang",
"Tianhong Li",
"Lijie Fan",
"Yonglong Tian",
"Yue Wang"
] |
Despite their fundamental role, it remains unclear what properties could make visual tokenizers more effective for generative modeling. We observe that modern generative models share a conceptually similar training objective -- reconstructing clean signals from corrupted inputs such as Gaussian noise or masking -- a process we term denoising. Motivated by this insight, we propose aligning tokenizer embeddings directly with the downstream denoising objective, encouraging latent embeddings to be more easily reconstructed even when heavily corrupted. To achieve this, we introduce the Latent Denoising Tokenizer (l-DeTok), a simple yet effective tokenizer trained to reconstruct clean images from latent embeddings corrupted by interpolative noise and random masking. Extensive experiments on ImageNet 256x256 demonstrate that our tokenizer consistently outperforms standard tokenizers across six representative generative models. Our findings highlight denoising as a fundamental design principle for tokenizer development, and we hope it could motivate new perspectives for future tokenizer design.
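A minimal sketch of the corruption step implied by the abstract (interpolative noise plus random masking of latent tokens before decoding) is shown below. The exact interpolation form, ratios, and mask handling are assumptions rather than the paper's recipe.

```python
import torch

def corrupt_latents(z, mask_token, max_noise=0.7, mask_ratio=0.5):
    """z: (B, N, D) latent tokens; mask_token: (D,) learnable embedding.
    Blend tokens toward Gaussian noise, then replace a random subset with the
    mask embedding; the decoder is trained to reconstruct clean images from this."""
    B, N, D = z.shape
    alpha = torch.rand(B, 1, 1) * max_noise              # per-sample noise strength
    z_noisy = (1.0 - alpha) * z + alpha * torch.randn_like(z)
    keep = torch.rand(B, N, 1) > mask_ratio              # True = keep the (noisy) token
    return torch.where(keep, z_noisy, mask_token.view(1, 1, D).expand(B, N, D))
```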
|
|
2025-07-22T00:00:00 |
2507.14843
|
The Invisible Leash: Why RLVR May Not Escape Its Origin
|
[
"Fang Wu",
"Weihao Xuan",
"Ximing Lu",
"Zaid Harchaoui",
"Yejin Choi"
] |
Recent advances in large reasoning models highlight Reinforcement Learning with Verifiable Rewards (RLVR) as a promising method for enhancing AI's capabilities, particularly in solving complex logical tasks. However, it remains unclear whether RLVR truly expands a model's reasoning boundary or merely amplifies high-reward outputs that the base model already knows for improved precision. This study presents a theoretical and empirical investigation that provides fresh insights into the potential limits of RLVR. First, we offer a new theoretical perspective that RLVR is constrained by the base model's support (it cannot sample solutions with zero initial probability) and operates as a conservative reweighting mechanism that may restrict the discovery of entirely original solutions. We also identify an entropy-reward tradeoff: while RLVR reliably enhances precision, it may progressively narrow exploration and potentially overlook correct yet underrepresented solutions. Extensive empirical experiments validate that while RLVR consistently improves pass@1, the shrinkage of empirical support generally outweighs its expansion under larger sampling budgets, failing to recover correct answers that were previously accessible to the base model. Interestingly, we also observe that while RLVR sometimes increases token-level entropy, resulting in greater uncertainty at each generation step, answer-level entropy declines, indicating that these seemingly more uncertain paths ultimately converge onto a smaller set of distinct answers. Taken together, these findings reveal potential limits of RLVR in extending reasoning horizons. Breaking this invisible leash may require future algorithmic innovations such as explicit exploration mechanisms or hybrid strategies that seed probability mass into underrepresented solution regions.
|
|
2025-07-22T00:00:00 |
2507.15852
|
SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction
|
[
"Zhixiong Zhang",
"Shuangrui Ding",
"Xiaoyi Dong",
"Songxin He",
"Jianfan Lin",
"Junsong Tang",
"Yuhang Zang",
"Yuhang Cao",
"Dahua Lin",
"Jiaqi Wang"
] |
Video Object Segmentation (VOS) is a core task in computer vision, requiring models to track and segment target objects across video frames. Despite notable advances with recent efforts, current techniques still lag behind human capabilities in handling drastic visual variations, occlusions, and complex scene changes. This limitation arises from their reliance on appearance matching, neglecting the human-like conceptual understanding of objects that enables robust identification across temporal dynamics. Motivated by this gap, we propose Segment Concept (SeC), a concept-driven segmentation framework that shifts from conventional feature matching to the progressive construction and utilization of high-level, object-centric representations. SeC employs Large Vision-Language Models (LVLMs) to integrate visual cues across diverse frames, constructing robust conceptual priors. During inference, SeC forms a comprehensive semantic representation of the target based on processed frames, realizing robust segmentation of follow-up frames. Furthermore, SeC adaptively balances LVLM-based semantic reasoning with enhanced feature matching, dynamically adjusting computational efforts based on scene complexity. To rigorously assess VOS methods in scenarios demanding high-level conceptual reasoning and robust semantic understanding, we introduce the Semantic Complex Scenarios Video Object Segmentation benchmark (SeCVOS). SeCVOS comprises 160 manually annotated multi-scenario videos designed to challenge models with substantial appearance variations and dynamic scene transformations. In particular, SeC achieves an 11.8-point improvement over SAM 2.1 on SeCVOS, establishing a new state-of-the-art in concept-aware video object segmentation.
|
|
2025-07-22T00:00:00 |
2507.15629
|
Gaussian Splatting with Discretized SDF for Relightable Assets
|
[
"Zuo-Liang Zhu",
"Jian Yang",
"Beibei Wang"
] |
https://github.com/NK-CS-ZZL/DiscretizedSDF
|
3D Gaussian splatting (3DGS) has shown its detailed expressive ability and highly efficient rendering speed in the novel view synthesis (NVS) task. However, its application to inverse rendering still faces several challenges, as the discrete nature of Gaussian primitives makes it difficult to apply geometry constraints. Recent works introduce the signed distance field (SDF) as an extra continuous representation to regularize the geometry defined by the Gaussian primitives. This improves decomposition quality, at the cost of increased memory usage and more complicated training. Unlike these works, we introduce a discretized SDF that represents the continuous SDF in a discrete manner by encoding a sampled value within each Gaussian. This approach allows us to link the SDF with the Gaussian opacity through an SDF-to-opacity transformation, enabling the SDF to be rendered via splatting and avoiding the computational cost of ray marching. The key challenge is to regularize the discrete samples to be consistent with the underlying SDF, since gradient-based constraints (e.g., the Eikonal loss) can hardly be applied to the discrete representation. To address this, we project Gaussians onto the zero-level set of the SDF and enforce alignment with the surface obtained from splatting, via a projection-based consistency loss. Thanks to the discretized SDF, our method achieves higher relighting quality while requiring no extra memory beyond GS and avoiding complex manually designed optimization. Experiments show that our method outperforms existing Gaussian-based inverse rendering methods. Our code is available at https://github.com/NK-CS-ZZL/DiscretizedSDF.
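As a toy illustration of an SDF-to-opacity transformation of the kind described above, the function below makes Gaussians near the zero-level set opaque and far-from-surface Gaussians transparent. The exponential form and the sharpness parameter are assumptions, not the paper's exact mapping.

```python
import torch

def sdf_to_opacity(sdf: torch.Tensor, beta: float = 50.0) -> torch.Tensor:
    """Per-Gaussian opacity from a sampled signed-distance value: close to 1
    when |sdf| is near zero (on the surface), decaying toward 0 away from it."""
    return torch.exp(-beta * sdf.abs())
```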
|