Audio classification is the task of assigning a label or class to a given audio clip.
Example applications:

* Recognizing a spoken command
* Identifying the language being spoken
* Detecting emotion in speech
* Classifying music by genre
For more details about the audio-classification task, check out its dedicated page! You will find examples and related materials.
Explore all available models and find the one that suits you best here.
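The following example runs audio classification through the huggingface_hub Python client: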
```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="hf-inference",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

output = client.audio_classification(
    "sample1.flac",
    model="firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3",
)
```

**Headers**

| Name | Type | Description |
|---|---|---|
| authorization | string | Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with “Inference Providers” permission. You can generate one from your settings page. |
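Hardcoding a token is fine for quick tests, but a common pattern is to read it from the environment instead. A minimal sketch, assuming the token is stored in an `HF_TOKEN` environment variable (the variable name is our choice, not mandated by the API):

```python
import os

from huggingface_hub import InferenceClient

# HF_TOKEN is an assumed variable name; export it before running, e.g.:
#   export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx
client = InferenceClient(
    provider="hf-inference",
    api_key=os.environ["HF_TOKEN"],
)
```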
**Payload**

| Name | Type | Description |
|---|---|---|
| inputs* | string | The input audio data as a base64-encoded string. If no parameters are provided, you can also provide the audio data as a raw bytes payload. |
| parameters | object | Additional inference parameters for audio classification. |
| parameters.function_to_apply | enum | The function applied to the model outputs. Possible values: sigmoid, softmax, none. |
| parameters.top_k | integer | When specified, limits the output to the top K most probable classes. |
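The payload fields above apply when calling the HTTP API directly rather than through the client. A sketch using the third-party `requests` package, with the audio base64-encoded so that parameters can be sent alongside it; the route URL below is an assumption, so verify it against the provider documentation:

```python
import base64

import requests

API_URL = (  # assumed hf-inference route; check the provider docs for the exact URL
    "https://router.huggingface.co/hf-inference/models/"
    "firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3"
)
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxx"}

# Base64-encode the audio: required when passing parameters in a JSON payload.
with open("sample1.flac", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {"function_to_apply": "softmax", "top_k": 3},
}
response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
```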
**Body**

| Name | Type | Description |
|---|---|---|
| (array) | object[] | Output is an array of objects. |
| label | string | The predicted class label. |
| score | number | The corresponding probability. |
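Each element of the response pairs a label with a score. A short sketch, continuing from the raw-request example above, that prints the classes best-first:

```python
# response.json() returns a list of {"label": str, "score": float} objects.
results = response.json()
for item in sorted(results, key=lambda r: r["score"], reverse=True):
    print(f"{item['label']}: {item['score']:.3f}")
```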