---
dataset_info:
  - config_name: librispeech-asr-tts
    features:
      - name: id
        dtype: string
      - name: conversations
        list:
          - name: from
            dtype: string
          - name: value
            dtype: string
    splits:
      - name: test
        num_bytes: 14023156
        num_examples: 5240
    download_size: 2510708
    dataset_size: 14023156
configs:
  - config_name: librispeech-asr-tts
    data_files:
      - split: test
        path: librispeech-asr-tts/test-*
license: apache-2.0
task_categories:
  - automatic-speech-recognition
  - text-to-speech
language:
  - en
  - zh
tags:
  - Omni-modal-LLM
  - Multi-modal-LLM
  - Emotional-spoken-dialogue
---

# EMOVA-ASR-TTS-Eval

🤗 EMOVA-Models | 🤗 EMOVA-Datasets | 🤗 EMOVA-Demo
📄 Paper | 🌐 Project-Page | 💻 Github | 💻 EMOVA-Speech-Tokenizer-Github

## Overview

EMOVA-ASR-TTS-Eval is a dataset designed for evaluating the ASR and TTS performance of Omni-modal LLMs. It is derived from the test-clean set of the LibriSpeech dataset. This dataset is part of the EMOVA-Datasets collection. We extract the speech units using the EMOVA Speech Tokenizer.

## Structure

This dataset contains two types of data samples (an illustrative example of each is sketched below):

- Automatic Speech Recognition (ASR): recognize the corresponding plain text given speech unit inputs.
- Text-to-Speech (TTS): generate speech units given plain text inputs.
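
Below is a minimal sketch of what one sample of each type might look like in the LLaVA conversation format. The prompts and the speech-unit token strings are illustrative placeholders, not the actual vocabulary produced by the EMOVA Speech Tokenizer.

```python
# Hypothetical examples of the two sample types (speech-unit tokens are placeholders).
asr_sample = {
    "id": "librispeech-asr-0001",  # illustrative id
    "conversations": [
        {"from": "human", "value": "<|speech_12|><|speech_87|>... Please transcribe the speech."},
        {"from": "gpt",   "value": "the quick brown fox jumps over the lazy dog"},
    ],
}

tts_sample = {
    "id": "librispeech-tts-0001",  # illustrative id
    "conversations": [
        {"from": "human", "value": "Please synthesize: the quick brown fox jumps over the lazy dog"},
        {"from": "gpt",   "value": "<|speech_12|><|speech_87|>..."},
    ],
}
```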

## Getting Started

This dataset is organized in the official LLaVA data format and can be accessed with the HuggingFace `datasets` API. For more details on evaluating EMOVA with this dataset, check our GitHub repo.

```python
from datasets import load_dataset

dataset = load_dataset("Emova-ollm/emova-asr-tts-eval", name="librispeech-asr-tts", split="test")

# each sample is a dictionary of the form
# {"id": sample identifier, "conversations": list of conversation turns containing speech units}
for data in dataset:
    print(data)
```
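
As a follow-up, the snippet below sketches one way to split the test set into ASR and TTS subsets for separate scoring. It assumes ASR samples carry speech units in the human turn while TTS samples carry them in the gpt turn (consistent with the task descriptions above), and uses a hypothetical `<|speech_` marker string to detect them; adjust the marker to the actual unit format emitted by the EMOVA Speech Tokenizer.

```python
# Sketch: partition samples into ASR and TTS subsets.
# NOTE: "<|speech_" is a hypothetical marker for speech-unit tokens; replace it
# with the actual token prefix used by the EMOVA Speech Tokenizer.
SPEECH_UNIT_MARKER = "<|speech_"

def is_asr_sample(sample):
    # ASR samples are assumed to have speech units in the human (input) turn.
    human_turn = next(turn for turn in sample["conversations"] if turn["from"] == "human")
    return SPEECH_UNIT_MARKER in human_turn["value"]

asr_samples = [s for s in dataset if is_asr_sample(s)]
tts_samples = [s for s in dataset if not is_asr_sample(s)]
print(f"ASR samples: {len(asr_samples)}, TTS samples: {len(tts_samples)}")
```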

## Citation

```bibtex
@article{chen2024emova,
  title={Emova: Empowering language models to see, hear and speak with vivid emotions},
  author={Chen, Kai and Gou, Yunhao and Huang, Runhui and Liu, Zhili and Tan, Daxin and Xu, Jing and Wang, Chunwei and Zhu, Yi and Zeng, Yihan and Yang, Kuo and others},
  journal={arXiv preprint arXiv:2409.18042},
  year={2024}
}
```