arxiv:2503.04724

LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM

Published on Mar 6
· Submitted by sahalshajim on Mar 7
#3 Paper of the day

Abstract

Recent advancements in speech-to-speech dialogue systems leverage LLMs for multimodal interactions, yet they remain hindered by fine-tuning requirements, high computational overhead, and text-speech misalignment. Existing speech-enabled LLMs often degrade conversational quality by modifying the LLM, thereby compromising its linguistic capabilities. In contrast, we propose LLMVoX, a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming TTS system that generates high-quality speech with low latency, while fully preserving the capabilities of the base LLM. Our approach achieves a significantly lower Word Error Rate than speech-enabled LLMs, while operating at comparable latency and UTMOS score. By decoupling speech synthesis from LLM processing via a multi-queue token streaming system, LLMVoX supports seamless, infinite-length dialogues. Its plug-and-play design also facilitates extension to various tasks with different backbones. Furthermore, LLMVoX generalizes to new languages with only dataset adaptation, attaining a low Character Error Rate on an Arabic speech task. Additionally, we have integrated LLMVoX with a Vision-Language Model to create an omni-model with speech, text, and vision capabilities, without requiring additional multimodal training. Our code base and project page are available at https://mbzuai-oryx.github.io/LLMVoX .
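
The core idea, decoupling LLM text generation from speech synthesis via a multi-queue token streaming system, can be pictured with the minimal sketch below. This is an illustration of the general producer-consumer pattern, not the authors' implementation: an LLM thread pushes text tokens into one queue while a TTS worker drains it and pushes audio chunks into another, so synthesis never blocks generation and dialogues can run indefinitely.

```python
import queue
import threading

text_q: queue.Queue = queue.Queue()   # LLM -> TTS
audio_q: queue.Queue = queue.Queue()  # TTS -> playback

def llm_producer(prompt: str) -> None:
    # Stand-in for any streaming LLM; the base model itself is untouched.
    for token in ["Hello", " ", "world", "."]:
        text_q.put(token)
    text_q.put(None)  # end-of-stream sentinel

def tts_consumer() -> None:
    # Stand-in for the TTS model: turn each incoming text token into an audio chunk.
    while (token := text_q.get()) is not None:
        audio_q.put(f"<audio for {token!r}>".encode())
    audio_q.put(None)

threading.Thread(target=llm_producer, args=("Hi",)).start()
threading.Thread(target=tts_consumer).start()

# Playback loop: audio chunks become available while the LLM is still generating.
while (chunk := audio_q.get()) is not None:
    print(chunk)  # in practice: play or stream this chunk to the client
```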

Community

Paper author · Paper submitter · edited 6 days ago

In the 4-minute demo, are you using some API, or is the full model running on the mobile device? Also, what mobile specs are you running it on?

Paper author

We have hosted our model on a local A100 GPU, and an in-house Flutter app calls the hosted API.
We are also planning to release an on-device setup for it soon.
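
As a purely illustrative aside, a client for such a setup might stream audio from the hosted model as sketched below; the URL, route, and JSON payload are hypothetical placeholders, not the project's actual API.

```python
import requests

# Hypothetical endpoint and schema: the hosted API is not public, so treat
# the host, route, and payload here as assumptions for illustration only.
resp = requests.post(
    "http://<a100-host>:8000/tts",
    json={"text": "Hello from LLMVoX"},
    stream=True,
    timeout=60,
)
resp.raise_for_status()
with open("reply.wav", "wb") as f:
    for chunk in resp.iter_content(chunk_size=4096):
        f.write(chunk)  # audio bytes are written as they stream in
```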

Yes, the output speed was sus


Models citing this paper 1

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 7