---
license: cc-by-nc-sa-4.0
---
|
# SongEval 🎵
|
**A Large-Scale Benchmark Dataset for Aesthetic Evaluation of Complete Songs** |
|
|
|
|
[GitHub](https://github.com/ASLP-lab/SongEval)
|
[Paper](https://arxiv.org/pdf/2505.10793)
|
[License: CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
|
|
|
--- |
|
|
|
## Overview
|
|
|
**SongEval** is the first open-source, large-scale benchmark dataset for **aesthetic evaluation of complete songs**. It comprises **2,399 songs** (~140 hours) annotated by **16 expert raters** across **five perceptual dimensions**, supporting research on evaluating and improving music generation systems from a human aesthetic perspective.
|
|
|
|
|
<p align="center"> <img src="assets/intro.png" alt="SongEval" width="800"/> </p> |
|
|
|
--- |
|
|
|
## Features
|
|
|
- **2,399 complete songs** (with vocals and accompaniment)
- **~140 hours** of high-quality audio
- **English and Chinese** songs
- **9 mainstream genres**
- **5 aesthetic dimensions**:
  - Overall Coherence
  - Memorability
  - Naturalness of Vocal Breathing and Phrasing
  - Clarity of Song Structure
  - Overall Musicality
- Ratings on a **5-point Likert scale** by **musically trained annotators**
- Outputs from **five generation models**, plus a subset of real and bad-case samples
|
|
|
<div style="display: flex; justify-content: space-between;"> |
|
  <img src="assets/score.png" alt="Aesthetic scores" style="width: 48%;" />
  <img src="assets/distribution.png" alt="Score distribution" style="width: 48%;" />
|
</div> |
|
|
|
|
|
--- |
|
|
|
## Dataset Structure
|
|
|
Each sample includes: |
|
|
|
- `audio`: WAV audio of the full song |
|
- `gender`: vocalist gender (`male` or `female`)
|
- `aesthetic_scores`: dict of five human-annotated scores (1β5) |
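Assuming the fields above, each loaded sample can be handled as a plain dict. The sketch below illustrates the expected shape and averages the five per-dimension scores; the exact score key names and the `audio` stub are illustrative assumptions, not a guaranteed API:

```python
# Hypothetical sample mirroring the schema described above; the real
# `audio` field is a decoded waveform, shortened here to a stub.
sample = {
    "audio": {"path": "song_0001.wav", "sampling_rate": 44100},
    "gender": "female",
    "aesthetic_scores": {  # key names assumed for illustration
        "overall_coherence": 4.0,
        "memorability": 3.5,
        "naturalness_of_vocal_breathing_and_phrasing": 4.5,
        "clarity_of_song_structure": 4.0,
        "overall_musicality": 4.0,
    },
}

def mean_aesthetic_score(scores: dict) -> float:
    """Average the five per-dimension ratings (each on a 1-5 Likert scale)."""
    return sum(scores.values()) / len(scores)

print(mean_aesthetic_score(sample["aesthetic_scores"]))  # 4.0
```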
|
|
|
--- |
|
|
|
## Use Cases
|
|
|
- Benchmarking song generation models from an aesthetic viewpoint |
|
- Training perceptual quality predictors for songs
|
- Exploring alignment between objective metrics and human judgments |
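For the last use case, alignment is typically quantified with a correlation coefficient between an objective metric and mean human ratings. A minimal sketch with a hand-rolled Pearson correlation on toy numbers (not drawn from the dataset):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between an objective metric and human ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy values: five songs, an objective metric score and the mean human
# "Overall Musicality" rating for each (illustrative numbers only).
metric = [0.61, 0.72, 0.55, 0.80, 0.67]
human = [3.2, 3.9, 3.0, 4.4, 3.6]
print(round(pearson(metric, human), 3))
```

A high positive value would indicate that the metric tracks human aesthetic judgment on this dimension.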
|
|
|
--- |
|
|
|
## Evaluation Toolkit
|
|
|
We provide an open-source evaluation toolkit trained on SongEval to help researchers evaluate new music generation outputs: |
|
|
|
GitHub: [https://github.com/ASLP-lab/SongEval](https://github.com/ASLP-lab/SongEval)
|
|
|
--- |
|
|
|
## Download
|
|
|
You can load the dataset directly using 🤗 Datasets:
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("ASLP-lab/SongEval")
```
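Once loaded, per-dimension means across a split are a natural first summary. The helper below works on any iterable of samples carrying an `aesthetic_scores` dict (as described in the Dataset Structure section; key names are assumed). The demo uses synthetic stand-ins instead of `dataset["train"]` so the sketch stays offline:

```python
from collections import defaultdict

def dimension_means(samples):
    """Mean rating per aesthetic dimension across an iterable of samples."""
    totals, counts = defaultdict(float), defaultdict(int)
    for s in samples:
        for dim, score in s["aesthetic_scores"].items():
            totals[dim] += score
            counts[dim] += 1
    return {dim: totals[dim] / counts[dim] for dim in totals}

# Synthetic stand-ins for real samples, with assumed key names:
fake_split = [
    {"aesthetic_scores": {"memorability": 3.0, "overall_musicality": 4.0}},
    {"aesthetic_scores": {"memorability": 4.0, "overall_musicality": 5.0}},
]
print(dimension_means(fake_split))  # {'memorability': 3.5, 'overall_musicality': 4.5}
```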
|
|
|
## Acknowledgement
|
|
|
This project is mainly organized by the Audio, Speech and Language Processing Lab [(ASLP@NPU)](http://www.npu-aslp.org/).
|
|
|
We sincerely thank the **Shanghai Conservatory of Music** for their expert guidance on music theory, aesthetics, and annotation design. |
|
We also thank AISHELL for helping organize the song annotations.
|
|
|
<p align="center"> <img src="assets/logo.png" alt="Shanghai Conservatory of Music Logo"/> </p> |
|
|
|
--- |
|
|
|
## Citation
|
If you use this toolkit or the SongEval dataset, please cite the following: |
|
```bibtex
@article{yao2025songeval,
  title   = {SongEval: A Benchmark Dataset for Song Aesthetics Evaluation},
  author  = {Yao, Jixun and Ma, Guobin and Xue, Huixin and Chen, Huakang and Hao, Chunbo and Jiang, Yuepeng and Liu, Haohe and Yuan, Ruibin and Xu, Jin and Xue, Wei and others},
  journal = {arXiv preprint arXiv:2505.10793},
  year    = {2025}
}
```
|
|
|
|