from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fireworks-ai",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
    max_tokens=500,
)

print(completion.choices[0].message)
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="black-forest-labs",
    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
)

# output is a PIL.Image object
image = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-dev",
)
Instant Access to thousands of ML Models for Fast Prototyping
Explore the most popular models for text, image, speech, and more, all with a simple API request. Build, test, and experiment without worrying about infrastructure or setup.
The Serverless Inference API offers a fast and simple way to explore thousands of models for a variety of tasks. Whether you're prototyping a new application or experimenting with ML capabilities, this API gives you instant access to high-performing models across multiple domains:
⚡ Fast and Free to Get Started: The Inference API is free to try out and comes with additional included credits for PRO users. For production needs, explore Inference Endpoints for dedicated resources, autoscaling, advanced security features, and more.
The documentation is organized into two sections:
If you want to get started quickly with Chat Completion models, use the Inference Playground to test and compare models against your prompts.