---
license: odbl
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: actor_id
      dtype: string
    - name: sentence
      dtype: string
    - name: emotion_intensity
      dtype: string
    - name: label
      dtype:
        class_label:
          names:
            '0': anger
            '1': disgust
            '2': fear
            '3': happy
            '4': neutral
            '5': sad
  splits:
    - name: train
      num_bytes: 1736578
      num_examples: 7442
  download_size: 470756748
  dataset_size: 1736578
---

# Dataset Card for CREMA-D (Crowd-sourced Emotional Multimodal Actors Dataset)

## Dataset Description

### Dataset Summary

CREMA-D is a dataset of 7,442 original clips from 91 actors: 48 male and 43 female, between the ages of 20 and 74, from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified).

Actors spoke from a selection of 12 sentences. Each sentence was presented using one of six different emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) and one of four different emotion intensity levels (Low, Medium, High, and Unspecified).

Participants rated the emotion and emotion intensity based on the combined audiovisual presentation, the video alone, and the audio alone. Because of the large number of ratings needed, this effort was crowd-sourced: a total of 2,443 participants each rated 90 unique clips (30 audio, 30 visual, and 30 audio-visual). 95% of the clips have more than 7 ratings.
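
The clips can be loaded with the 🤗 Datasets library. Below is a minimal loading sketch; the repo id `myleslinder/crema-d` is an assumption and should be adjusted to the actual repository hosting this card.

```python
# Minimal loading sketch; the repo id "myleslinder/crema-d" is an assumption.
from datasets import load_dataset

ds = load_dataset("myleslinder/crema-d", split="train")

print(ds)           # 7,442 examples: audio, actor_id, sentence, emotion_intensity, label
print(ds.features)  # "label" is a ClassLabel over the six emotion names
```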

### Languages

English

## Dataset Structure

### Data Instances

```python
{
  'path': '.../.cache/huggingface/datasets/downloads/extracted/.../data/AudioWAV/1001_DFA_ANG_XX.wav',
  'audio': {
    'path': '.../.cache/huggingface/datasets/downloads/extracted/.../data/AudioWAV/1001_DFA_ANG_XX.wav',
    'array': array([-1.35336370e-06, -1.84488497e-04, -2.73496640e-04,
                     1.40174336e-04,  8.33026352e-05,  0.00000000e+00]),
    'sampling_rate': 16000
  },
  'actor_id': '1001',
  'sentence': "Don't forget a jacket",
  'emotion_intensity': 'Unspecified',
  'label': 0
}
```
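
As a rough illustration of how the fields above fit together, the sketch below (assuming the dataset was loaded as `ds` as in the loading sketch earlier) maps the integer `label` back to its emotion name and reads the decoded waveform:

```python
# Sketch of accessing a single example; assumes `ds` from the loading sketch above.
example = ds[0]

# The ClassLabel feature converts the integer label to its emotion name.
emotion = ds.features["label"].int2str(example["label"])  # e.g. "anger"

# The Audio feature decodes the WAV file into a float array on access.
waveform = example["audio"]["array"]
sampling_rate = example["audio"]["sampling_rate"]  # 16000 per the dataset config

print(emotion, example["sentence"], example["emotion_intensity"], len(waveform), sampling_rate)
```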

## Additional Information

### Citation Information

```bibtex
@article{cao2014crema,
  title={CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset},
  author={Cao, H. and Cooper, D. G. and Keutmann, M. K. and Gur, R. C. and Nenkova, A. and Verma, R.},
  journal={IEEE Transactions on Affective Computing},
  volume={5},
  number={4},
  pages={377--390},
  year={2014},
  doi={10.1109/TAFFC.2014.2336244},
  url={https://doi.org/10.1109/TAFFC.2014.2336244}
}
```