Commit 4dc35a9 by myleslinder · 1 parent: 87d9c44

Files changed (1): README.md (+62 -0)

README.md CHANGED
@@ -29,3 +29,65 @@ dataset_info:
  download_size: 470756748
  dataset_size: 1736578
  ---
# Dataset Card for CREMA-D (Crowd-sourced Emotional Multimodal Actors Dataset)

## Dataset Description

- **Homepage:** <https://github.com/CheyneyComputerScience/CREMA-D>
- **Point of Contact:** <[email protected]>

### Dataset Summary

CREMA-D is a dataset of 7,442 original clips from 91 actors. The clips feature 48 male and 43 female actors between the ages of 20 and 74, from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified).

Actors spoke from a selection of 12 sentences. The sentences were presented using one of six different emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) and four different emotion levels (Low, Medium, High, and Unspecified).

Participants rated the emotion and emotion level based on the combined audiovisual presentation, the video alone, and the audio alone. Because of the large number of ratings needed, this effort was crowd-sourced: a total of 2,443 participants each rated 90 unique clips, 30 audio, 30 visual, and 30 audiovisual. 95% of the clips have more than 7 ratings.

### Languages

English

## Dataset Structure

### Data Instances

```python
{
  'path': '.../.cache/huggingface/datasets/downloads/extracted/.../data/AudioWAV/1001_DFA_ANG_XX.wav',
  'audio': {
    'path': '.../.cache/huggingface/datasets/downloads/extracted/.../data/AudioWAV/1001_DFA_ANG_XX.wav',
    'array': array([
      -1.35336370e-06,
      -1.84488497e-04,
      -2.73496640e-04,
       1.40174336e-04,
       8.33026352e-05,
       0.00000000e+00
    ]),
    'sampling_rate': 16000
  },
  'actor_id': '1001',
  'sentence': "Don't forget a jacket",
  'emotion_intensity': 'Unspecified',
  'label': 0
}
```
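
A minimal loading sketch with the `datasets` library is shown below. The repository id `myleslinder/crema-d`, the `train` split name, and the assumption that `label` is a `ClassLabel` feature are illustrative guesses based on this card rather than guarantees.

```python
# Sketch only: assumes the dataset id "myleslinder/crema-d", a "train" split,
# and that audio decoding dependencies (e.g. soundfile) are installed.
from datasets import ClassLabel, load_dataset

ds = load_dataset("myleslinder/crema-d", split="train")

example = ds[0]
print(example["actor_id"], example["sentence"], example["emotion_intensity"])
print(example["audio"]["sampling_rate"], len(example["audio"]["array"]))

# If `label` is a ClassLabel feature, the integer can be mapped back to an
# emotion name; otherwise fall back to the raw value.
label_feature = ds.features["label"]
if isinstance(label_feature, ClassLabel):
    print(label_feature.int2str(example["label"]))
else:
    print(example["label"])
```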
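The clip filename itself encodes actor, sentence, emotion, and intensity following the upstream CREMA-D naming convention (ActorID_Sentence_Emotion_Level), which the path above follows. The sketch below parses that pattern; the code tables are taken from the upstream project linked above and are not part of this dataset's feature schema.

```python
import os

# Emotion and intensity codes per the upstream CREMA-D file-naming convention;
# treat this mapping as a convenience sketch, not part of this card's schema.
EMOTIONS = {"ANG": "Anger", "DIS": "Disgust", "FEA": "Fear",
            "HAP": "Happy", "NEU": "Neutral", "SAD": "Sad"}
LEVELS = {"LO": "Low", "MD": "Medium", "HI": "High", "XX": "Unspecified"}

def parse_clip_name(path: str) -> dict:
    """Split a clip filename like '1001_DFA_ANG_XX.wav' into its parts."""
    stem = os.path.splitext(os.path.basename(path))[0]
    actor_id, sentence_code, emotion_code, level_code = stem.split("_")
    return {
        "actor_id": actor_id,
        "sentence_code": sentence_code,
        "emotion": EMOTIONS[emotion_code],
        "emotion_intensity": LEVELS[level_code],
    }

print(parse_clip_name("1001_DFA_ANG_XX.wav"))
# {'actor_id': '1001', 'sentence_code': 'DFA', 'emotion': 'Anger',
#  'emotion_intensity': 'Unspecified'}
```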
## Additional Information

### Citation Information

```bibtex
@article{cao2014crema,
  title={CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset},
  author={Cao, H. and Cooper, D. G. and Keutmann, M. K. and Gur, R. C. and Nenkova, A. and Verma, R.},
  journal={IEEE Transactions on Affective Computing},
  volume={5},
  number={4},
  pages={377--390},
  year={2014},
  doi={10.1109/TAFFC.2014.2336244},
  url={https://doi.org/10.1109/TAFFC.2014.2336244}
}
```