
Dataset Name: Text-Only ASR transcripts of IEMOCAP for ASR error correction and emotion recognition

This is an n-best-hypotheses-augmented version of IEMOCAP, intended for ASR error correction and emotion recognition.

Please refer to the original IEMOCAP corpus for the raw audio; this dataset contains text transcripts only.

Description

This dataset consists of ASR transcripts from 11 speech models, following the turns of the conversations in IEMOCAP, with the corresponding speaker ID and utterance ID for each utterance.

To acquire this dataset, please obtain the license of IEMOCAP first (if you already have it, please skip step 1). Specifically:

  1. Submit a request to the SAIL lab at USC following their guidance at https://sail.usc.edu/iemocap/iemocap_release.htm. All you have to do is read their license and fill out a Google form.
  2. When registering for this challenge, attach the approved license or a screenshot of the approval email as proof. We will then release the data to you.

The explanation for each key is as follows:

  • need_prediction: this key indicates whether this utterance should be included in the prediction procedure. "yes" denotes the utterances labeled with Big4 emotions, which are widely used for emotion recognition in IEMOCAP. "no" denotes all other utterances. Note that we have removed the utterances that have no human annotations.

  • emotion: this key indicates the emotion label of the utterance.

  • id: this key indicates the utterance ID, which is also the name of the audio file in IEMOCAP corpus. The ID is exactly the same as the raw ID in IEMOCAP.

  • speaker: this key indicates the speaker of the utterance. Since there are two speakers in each session, there are ten speakers in total. It's important to note that the sixth character of the id DOES NOT represent the gender of the speaker, but rather the gender of the person currently wearing the motion-capture device. Please use the provided speaker field as the speaker ID.

  • groundtruth: this key indicates the original human transcription provided by IEMOCAP.

The remaining eleven keys contain the ASR transcriptions generated by the respective ASR models.

Note that we intentionally truncated some of the ASR transcriptions from the Whisper models because they contained very few errors.
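
For illustration, a single record has roughly the following shape (all values below are hypothetical placeholders, not taken from the corpus):

{
    "need_prediction": "yes",
    "emotion": "ang",
    "id": "Ses01F_impro01_F000",   # hypothetical utterance ID
    "speaker": "<speaker-id>",     # use this field, not the id, for speaker identity
    "groundtruth": "a placeholder reference transcript",
    "hubertlarge": "a placeholder ASR hypothesis",
    # ... the remaining ASR keys (w2v2100 through whispertiny) follow the same pattern
}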

Access

The dataset will be shared with you after you have registered.

Acknowledgments

This dataset is built upon IEMOCAP. We thank the original authors of IEMOCAP and gratefully acknowledge the approval of Prof. Shrikanth Narayanan.

References

@inproceedings{li2024speech,
  title={Speech emotion recognition with {ASR} transcripts: A comprehensive study on word error rate and fusion techniques},
  author={Li, Yuanchao and Bell, Peter and Lai, Catherine},
  booktitle={2024 IEEE Spoken Language Technology Workshop (SLT)},
  pages={518--525},
  year={2024},
  organization={IEEE}
}

@article{busso2008iemocap,
  title={IEMOCAP: Interactive emotional dyadic motion capture database},
  author={Busso, Carlos and Bulut, Murtaza and Lee, Chi-Chun and Kazemzadeh, Abe and Mower, Emily and Kim, Samuel and Chang, Jeannette N and Lee, Sungbok and Narayanan, Shrikanth S},
  journal={Language resources and evaluation},
  volume={42},
  pages={335--359},
  year={2008},
  publisher={Springer}
}

@inproceedings{yang2024large,
  title={Large language model based generative error correction: A challenge and baselines for speech recognition, speaker tagging, and emotion recognition},
  author={Yang, Chao-Han Huck and Park, Taejin and Gong, Yuan and Li, Yuanchao and Chen, Zhehuai and Lin, Yen-Ting and Chen, Chen and Hu, Yuchen and Dhawan, Kunal and {\.Z}elasko, Piotr and others},
  booktitle={2024 IEEE Spoken Language Technology Workshop (SLT)},
  pages={371--378},
  year={2024},
  organization={IEEE}
}

Dataset Structure

Each example in the dataset contains:

  • id: Unique identifier for the utterance
  • speaker: Speaker ID
  • emotion: Emotion label ("ang", "hap", "sad", "neu", "fru", "xxx", etc.)
  • groundtruth: The ground truth transcription
  • need_prediction: Whether this example needs prediction ("yes" or "no")
  • ASR model outputs: Transcriptions from the 11 ASR models:
    • hubertlarge
    • w2v2100
    • w2v2960
    • w2v2960large
    • w2v2960largeself
    • wavlmplus
    • whisperbase
    • whisperlarge
    • whispermedium
    • whispersmall
    • whispertiny

Example Usage

Basic Data Exploration

# View dataset features (assumes the dataset has already been loaded; see "Loading the Dataset" below)
print(dataset.features)

# Get a sample example
example = dataset[0]
print(f"ID: {example['id']}")
print(f"Emotion: {example['emotion']}")
print(f"Ground truth: {example['groundtruth']}")

# Access different ASR outputs
print("\nASR transcriptions:")
asr_keys = [k for k in example.keys() if k not in ['id', 'speaker', 'emotion', 'groundtruth', 'need_prediction']]
for key in asr_keys:
    print(f"{key}: {example[key]}")

Working with N-Best ASR Transcriptions

import pandas as pd

# Collect the ground truth and all ASR transcriptions for an example
def get_all_transcriptions(example):
    asr_systems = ['groundtruth', 'hubertlarge', 'w2v2100', 'w2v2960', 'w2v2960large', 
                   'w2v2960largeself', 'wavlmplus', 'whisperbase', 'whisperlarge', 
                   'whispermedium', 'whispersmall', 'whispertiny']
    
    return {sys: example[sys] for sys in asr_systems}

# Get examples that need prediction
prediction_examples = dataset.filter(lambda x: x['need_prediction'] == 'yes')
print(f"Found {len(prediction_examples)} examples that need prediction")

# Example: Create a dataframe with all transcriptions for the first example
example_id = prediction_examples[0]['id']
transcriptions = get_all_transcriptions(prediction_examples[0])

df = pd.DataFrame.from_dict(transcriptions, orient='index', columns=['transcription'])
df.index.name = 'asr_system'
print(f"Transcriptions for {example_id}:")
print(df)

# Filter by emotion
angry_examples = dataset.filter(lambda x: x['emotion'] == 'ang')
print(f"Found {len(angry_examples)} examples with 'angry' emotion")

Combining with Other Features

If you have additional features like acoustic features, you can combine them:

# Example: Hypothetical function to load acoustic features
def load_acoustic_features(example_id):
    # Implementation depends on your acoustic features format
    # This is just a placeholder
    import numpy as np
    return np.random.random(128)  # Return random features for example

# Add acoustic features to examples
def add_acoustic_features(example):
    features = load_acoustic_features(example['id'])
    return {'acoustic_features': features, **example}

# Apply to a few examples for demonstration
sample_dataset = dataset.select(range(5))
sample_dataset = sample_dataset.map(add_acoustic_features)

# Now each example has acoustic features along with ASR transcriptions

The examples in this card show how to:

  1. Load the dataset
  2. Explore its structure
  3. Access the n-best ASR transcriptions
  4. Filter and work with the data
  5. Combine it with other features if needed

You can adapt them to your specific needs or extend them as required.

Loading the Dataset

You can load the dataset directly from disk after running the conversion script:

from datasets import load_from_disk

# Load the full dataset
dataset = load_from_disk('iemocap_post_asr_n_best_dataset')

# Or if you've pushed it to the Hugging Face Hub
# from datasets import load_dataset
# dataset = load_dataset('your-username/iemocap-asr-dataset')

print(f"Dataset loaded with {len(dataset)} examples")