---
dataset_info:
  features:
    - name: full_audio
      dtype: audio
    - name: snake_audio
      dtype: audio
    - name: snack_audio
      dtype: audio
    - name: bags_audio
      dtype: audio
    - name: snake_incontext_transcription
      dtype: string
    - name: snack_incontext_transcription
      dtype: string
    - name: bags_incontext_transcription
      dtype: string
    - name: snake_nocontext_transcription
      dtype: string
    - name: snack_nocontext_transcription
      dtype: string
    - name: bags_nocontext_transcription
      dtype: string
    - name: full_facebook_transcription
      dtype: string
    - name: age
      dtype: int64
    - name: birthplace
      dtype: string
    - name: native_language
      dtype: string
    - name: sex
      dtype: string
    - name: country
      dtype: string
    - name: speakerid
      dtype: int64
  splits:
    - name: train
      num_bytes: 1851995457.892
      num_examples: 2138
  download_size: 1965023997
  dataset_size: 1851995457.892
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: agpl-3.0
language:
  - en
tags:
  - speech
  - ipa
pretty_name: sac
size_categories:
  - 1K<n<10K
---

2,138 English speakers from a variety of native-language backgrounds (pulled from the Speech Accent Archive), each saying the sentence:

"Please call Stella. Ask her to bring these things with her from the store: Six spoons of fresh snow peas, five thick slabs of blue cheese, and maybe a snack for her brother Bob. We also need a small plastic snake and a big toy frog for the kids. She can scoop these things into three red bags, and we will go meet her Wednesday at the train station."

We focus on three words -- "snack", "snake", "bags" -- which are notoriously tricky for humans to identify out of context. An automated script transcribes each word both in context (within the full recording) and clipped with no context, using Facebook's 60-phoneme Wav2Vec2 transcription model. For mispronounced words, the results show that in context, the transformer layers of the Wav2Vec2 model act as a language model and hallucinate the more "standard" grapheme-to-phoneme (g2p) transcription, whereas with no context, the model predicts a transcription closer to what the speaker actually said.
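A minimal sketch of how the in-context/no-context comparison above could be reproduced from the dataset's columns. The helper works on any example dict with this card's schema; the transcription strings in the toy example below are illustrative, not taken from the dataset:

```python
# Sketch: detect disagreement between in-context and no-context
# phoneme transcriptions for one of the three target words.

def transcriptions_differ(example: dict, word: str) -> bool:
    """Return True if the in-context and no-context transcriptions
    of `word` ("snake", "snack", or "bags") disagree."""
    in_ctx = example[f"{word}_incontext_transcription"].strip()
    no_ctx = example[f"{word}_nocontext_transcription"].strip()
    return in_ctx != no_ctx

# Toy example mirroring the dataset schema (illustrative values only):
example = {
    "snake_incontext_transcription": "s n eɪ k",
    "snake_nocontext_transcription": "s n ɛ k",
}
print(transcriptions_differ(example, "snake"))  # True for this toy example
```

With the `datasets` library, the same helper can filter the full split, e.g. `load_dataset(...)` followed by `ds.filter(lambda ex: transcriptions_differ(ex, "snake"))` to keep only speakers whose "snake" transcriptions disagree.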