---
dataset_info:
- config_name: librispeech-asr-tts
  features:
  - name: id
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  splits:
  - name: test
    num_bytes: 14023156
    num_examples: 5240
  download_size: 2510708
  dataset_size: 14023156
configs:
- config_name: librispeech-asr-tts
  data_files:
  - split: test
    path: librispeech-asr-tts/test-*
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- en
- zh
tags:
- Omni-modal-LLM
- Multi-modal-LLM
- Emotional-spoken-dialogue
---

# EMOVA-ASR-TTS-Eval
🤗 [EMOVA-Models](https://huggingface.co/collections/Emova-ollm/emova-models-67779d377bb8261e6057a320) | 🤗 [EMOVA-Datasets](https://huggingface.co/collections/Emova-ollm/emova-datasets-67779be7d02447a2d0891bf6) | 🤗 [EMOVA-Demo](https://huggingface.co/spaces/Emova-ollm/EMOVA-demo)
📄 [Paper](https://arxiv.org/abs/2409.18042) | 🌐 [Project-Page](https://emova-ollm.github.io/) | 💻 [Github](https://github.com/emova-ollm/EMOVA) | 💻 [EMOVA-Speech-Tokenizer-Github](https://github.com/emova-ollm/EMOVA_speech_tokenizer)
## Overview

EMOVA-ASR-TTS-Eval is a dataset designed for evaluating the ASR and TTS performance of Omni-modal LLMs. It is derived from the **test-clean** set of the LibriSpeech dataset and is part of the [EMOVA-Datasets](https://huggingface.co/collections/Emova-ollm/emova-datasets-67779be7d02447a2d0891bf6) collection. We extract the speech units using the [EMOVA Speech Tokenizer](https://huggingface.co/Emova-ollm/emova_speech_tokenizer_hf).

## Structure

This dataset contains two types of data samples:

- **Automatic Speech Recognition (ASR)**: transcribe speech unit inputs into the corresponding plain text.
- **Text-to-Speech (TTS)**: generate speech units from plain text inputs.

## Getting Started

This dataset is organized in the official LLaVA data format and can be accessed with the Hugging Face datasets API (see also the parsing sketch at the end of this card). For more details on evaluating EMOVA with this dataset, check our [GitHub repo](https://github.com/emova-ollm/EMOVA#evaluation).

```python
from datasets import load_dataset

dataset = load_dataset("Emova-ollm/emova-asr-tts-eval", name="librispeech-asr-tts", split="test")

# Each sample is a dict of the form
# {"id": <sample identifier>, "conversations": <list of turns containing speech units and/or plain text>}
for data in dataset:
    print(data)
```

## Citation

```bibtex
@article{chen2024emova,
  title={Emova: Empowering language models to see, hear and speak with vivid emotions},
  author={Chen, Kai and Gou, Yunhao and Huang, Runhui and Liu, Zhili and Tan, Daxin and Xu, Jing and Wang, Chunwei and Zhu, Yi and Zeng, Yihan and Yang, Kuo and others},
  journal={arXiv preprint arXiv:2409.18042},
  year={2024}
}
```
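As referenced in the Getting Started section, below is a minimal sketch of flattening each sample's `conversations` field into an (input, reference) pair for scoring. It assumes the usual LLaVA convention that the `human` turn holds the model input (speech units for ASR, plain text for TTS) and the `gpt` turn holds the reference output; the role labels and the helper `to_eval_pair` are illustrative assumptions, not part of the official EMOVA evaluation code.

```python
from datasets import load_dataset

dataset = load_dataset("Emova-ollm/emova-asr-tts-eval", name="librispeech-asr-tts", split="test")

def to_eval_pair(sample):
    # Assumption: LLaVA-style roles, where "human" is the model input and "gpt" is the reference output.
    turns = {turn["from"]: turn["value"] for turn in sample["conversations"]}
    return {
        "id": sample["id"],
        "input": turns.get("human"),     # speech units (ASR) or plain text (TTS)
        "reference": turns.get("gpt"),   # plain text (ASR) or speech units (TTS)
    }

pairs = [to_eval_pair(sample) for sample in dataset]
print(pairs[0]["id"])
print(pairs[0]["input"][:100])
```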