---
dataset_info:
- config_name: ihm
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: timestamps_start
    sequence: float64
  - name: timestamps_end
    sequence: float64
  - name: speakers
    sequence: string
  splits:
  - name: train
    num_bytes: 9326329826
    num_examples: 136
  - name: validation
    num_bytes: 1113896048
    num_examples: 18
  - name: test
    num_bytes: 1044169059
    num_examples: 16
  download_size: 10267627474
  dataset_size: 11484394933
- config_name: sdm
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: timestamps_start
    sequence: float64
  - name: timestamps_end
    sequence: float64
  - name: speakers
    sequence: string
  splits:
  - name: train
    num_bytes: 9208897240
    num_examples: 134
  - name: validation
    num_bytes: 1113930821
    num_examples: 18
  - name: test
    num_bytes: 1044187355
    num_examples: 16
  download_size: 10679615636
  dataset_size: 11367015416
configs:
- config_name: ihm
  data_files:
  - split: train
    path: ihm/train-*
  - split: validation
    path: ihm/validation-*
  - split: test
    path: ihm/test-*
- config_name: sdm
  data_files:
  - split: train
    path: sdm/train-*
  - split: validation
    path: sdm/validation-*
  - split: test
    path: sdm/test-*
license: cc-by-4.0
language:
- en
tags:
- speaker-diarization
- voice-activity-detection
- speaker-segmentation
---
# Dataset Card for the AMI dataset for speaker diarization
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers.
**Note**: This dataset has been preprocessed with [diarizers](https://github.com/huggingface/diarizers/tree/main/datasets), which makes it directly compatible with the `diarizers` library for fine-tuning [pyannote](https://huggingface.co/pyannote/segmentation-3.0) segmentation models.
### Example Usage
```python
from datasets import load_dataset
ds = load_dataset("diarizers-community/ami", "ihm")
print(ds)
```
gives:
```
DatasetDict({
    train: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 136
    })
    validation: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 18
    })
    test: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 16
    })
})
```
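The `timestamps_start`, `timestamps_end`, and `speakers` columns are parallel sequences: entry *i* of each describes one annotated speech segment. As an illustration, here is a minimal sketch that computes per-speaker talk time from these columns. A small synthetic row stands in for a real example (the actual rows also carry a multi-minute waveform under `audio`); the segment values below are invented for demonstration.

```python
from collections import defaultdict

# Synthetic row mimicking the dataset schema (values are illustrative only;
# a real row would be ds["train"][i] and also include an "audio" field).
row = {
    "timestamps_start": [0.0, 1.5, 3.0],
    "timestamps_end": [1.2, 2.8, 4.0],
    "speakers": ["A", "B", "A"],
}

def talk_time(example):
    """Sum the duration of each speaker's segments, in seconds."""
    totals = defaultdict(float)
    for start, end, speaker in zip(
        example["timestamps_start"],
        example["timestamps_end"],
        example["speakers"],
    ):
        totals[speaker] += end - start
    return dict(totals)

print({spk: round(t, 3) for spk, t in talk_time(row).items()})
# {'A': 2.2, 'B': 1.3}
```

The same function applies unchanged to a real example loaded via `load_dataset`, since it only reads the three sequence columns.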
## Dataset source
- **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
- **Repository:** https://github.com/pyannote/AMI-diarization-setup
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Preprocessed using:** [diarizers](https://github.com/huggingface/diarizers/tree/main/datasets)
## Citation
```
@article{mccowan2005ami,
  author  = {Mccowan, Iain and Carletta, J and Kraaij, Wessel and Ashby, Simone and Bourban, S and Flynn, M and Guillemot, M and Hain, Thomas and Kadlec, J and Karaiskos, V and Kronenthal, M and Lathoud, Guillaume and Lincoln, Mike and Lisowska Masson, Agnes and Post, Wilfried and Reidsma, Dennis and Wellner, P},
  title   = {The AMI meeting corpus},
  journal = {Int'l. Conf. on Methods and Techniques in Behavioral Research},
  year    = {2005},
  month   = {01}
}
```
## Contribution
Thanks to [@kamilakesbi](https://huggingface.co/kamilakesbi) and [@sanchit-gandhi](https://huggingface.co/sanchit-gandhi) for adding this dataset.