Feature extraction is the task of converting a text into a vector (often called “embedding”).
Example applications:
- Retrieving the most relevant documents for a given query (e.g. for RAG applications).
- Measuring semantic similarity between sentences.
- Clustering or deduplicating similar texts.
For more details about the feature-extraction task, check out its dedicated page! You will find examples and related materials.
Explore all available models and find the one that suits you best here.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="hf-inference",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

result = client.feature_extraction(
    inputs="Today is a sunny day and I will get some ice cream.",
    model="intfloat/multilingual-e5-large-instruct",
)
```

| Headers | ||
|---|---|---|
| authorization | string | Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with "Inference Providers" permission. You can generate one from your settings page. |
| Payload | ||
|---|---|---|
| inputs* | unknown | The text or texts to embed. One of the following: |
| (#1) | string | A single string to embed. |
| (#2) | string[] | A list of strings to embed. |
| normalize | boolean | Whether to normalize the returned embeddings. |
| prompt_name | string | The name of the prompt that should be used for encoding. If not set, no prompt is applied. Must be a key in the sentence-transformers configuration prompts dictionary. For example, if prompt_name is "query" and prompts is {"query": "query: ", …}, then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?", because the prompt text is prepended to any text to encode. |
| truncate | boolean | Whether to truncate inputs that exceed the model's maximum length instead of raising an error. |
| truncation_direction | enum | Possible values: Left, Right. |
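The payload fields above combine into a single JSON body sent alongside the authorization header. The sketch below assembles both, assuming the hf-inference router URL shape (the endpoint constant here is an illustrative assumption, not taken from this page); the actual network call is left commented out:

```python
# Assumed endpoint shape for the hf-inference provider (illustrative only).
API_URL = "https://router.huggingface.co/hf-inference/models/intfloat/multilingual-e5-large-instruct"

def build_request(token, inputs, normalize=None, truncate=None,
                  truncation_direction=None, prompt_name=None):
    """Assemble the headers and JSON payload described in the tables above.

    Optional parameters are omitted from the payload unless explicitly set,
    matching the fact that only `inputs` is required.
    """
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": inputs}
    for key, value in {
        "normalize": normalize,
        "truncate": truncate,
        "truncation_direction": truncation_direction,
        "prompt_name": prompt_name,
    }.items():
        if value is not None:
            payload[key] = value
    return headers, payload

headers, payload = build_request(
    "hf_xxxxxxxxxxxxxxxxxxxxxxxx",
    inputs=["What is the capital of France?"],
    normalize=True,
    prompt_name="query",
)
# The request would then be sent as, e.g.:
# requests.post(API_URL, headers=headers, json=payload)
```

Note that `truncate` and `truncation_direction` are simply left out of the body here, so the server falls back to its defaults.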
| Body | ||
|---|---|---|
| (array) | array[] | Output is an array of arrays: one embedding vector per input. |
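A typical use of the returned array of arrays is comparing two embeddings. The sketch below post-processes a made-up placeholder response (real models such as intfloat/multilingual-e5-large-instruct return much longer vectors, e.g. 1024 dimensions):

```python
import math

# Placeholder response: two inputs, toy 4-dimensional embedding vectors.
# A real response has one much longer vector per input text.
response = [
    [0.1, 0.3, -0.2, 0.9],
    [0.2, 0.1, -0.1, 0.8],
]

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

score = cosine_similarity(response[0], response[1])
# score close to 1.0 means the two texts are semantically similar
```

If the model was called with `normalize=True`, each vector has unit length and the cosine similarity reduces to a plain dot product.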