A tool is essentially a function that an LLM can use within an agentic system. To interact with it, the LLM needs an API that describes the tool's name, a description of what it does, its input types and descriptions, and its output type.
Rather than using a regular Python function, a tool should be implemented as a class. Essentially, it wraps a function with metadata that helps the LLM understand how to use it effectively.
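To make this concrete, here is a minimal, framework-agnostic sketch of the kind of metadata such a wrapper carries and how it might be rendered into a system prompt. `tool_spec` and `render_for_prompt` are illustrative names for this sketch, not smolagents APIs:

```python
# A minimal, framework-agnostic sketch of a tool's API description.
# `tool_spec` and `render_for_prompt` are illustrative, not smolagents APIs.
tool_spec = {
    "name": "model_download_counter",
    "description": "Returns the most downloaded model for a task on the Hugging Face Hub.",
    "inputs": {"task": {"type": "string", "description": "the task category"}},
    "output_type": "string",
}

def render_for_prompt(spec: dict) -> str:
    """Render a tool spec as a one-line signature for a system prompt."""
    args = ", ".join(f"{name}: {meta['type']}" for name, meta in spec["inputs"].items())
    return f"{spec['name']}({args}) -> {spec['output_type']}: {spec['description']}"

print(render_for_prompt(tool_spec))
# model_download_counter(task: string) -> string: Returns the most downloaded model for a task on the Hugging Face Hub.
```

Rendering tool metadata as text like this is what lets the LLM "see" which tools exist and how to call them.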
Below, you can see an animation illustrating how a tool call is managed:

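The flow the animation depicts can also be sketched in plain Python: the model proposes a tool call, the runtime executes it, and the observation is fed back until a final answer is produced. The action format and function names below are a hypothetical illustration, not smolagents code:

```python
# A simplified, hypothetical sketch of the loop a tool-calling agent runs.
# The action format and names are illustrative, not smolagents APIs.
def run_tool_call_loop(llm, tools, task, max_steps=5):
    observations = []
    for _ in range(max_steps):
        # The LLM inspects the task and previous observations, then
        # either picks a tool to call or emits a final answer.
        action = llm(task, observations)
        if action["tool"] == "final_answer":
            return action["args"]["answer"]
        # Execute the chosen tool with the proposed arguments and
        # feed the result back as an observation for the next step.
        result = tools[action["tool"]](**action["args"])
        observations.append(result)
    return None
```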
In smolagents, tools can be defined in two ways:
1. Subclassing `Tool`, which provides useful methods.
2. Using the `@tool` decorator to define a function-based tool.

The first approach involves creating a subclass of `Tool`. In this class, we define:

- `name`: The tool's name.
- `description`: A description used to populate the agent's system prompt.
- `inputs`: A dictionary with keys `type` and `description`, providing information that helps the Python interpreter process inputs.
- `output_type`: Specifies the expected output type.
- `forward`: The method containing the inference logic to execute.

Below is an example of a tool built by subclassing `Tool`, along with how to integrate it within a `CodeAgent`.
```python
from smolagents import CodeAgent, HfApiModel, Tool

class HFModelDownloadsTool(Tool):
    name = "model_download_counter"
    description = """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint."""
    inputs = {
        "task": {
            "type": "string",
            "description": "the task category (such as text-classification, depth-estimation, etc)",
        }
    }
    output_type = "string"

    def forward(self, task: str):
        from huggingface_hub import list_models

        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id

model_downloads_tool = HFModelDownloadsTool()
agent = CodeAgent(tools=[model_downloads_tool], model=HfApiModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

The `@tool` decorator is the recommended way to define simple tools. Using this approach, we define a function with:
- Type hints for inputs and outputs.
- A docstring with an `Args:` section where each argument is explicitly described. These descriptions provide valuable context for the LLM, so it's important to write them carefully.

Below is an example of a function using the `@tool` decorator, replicating the same functionality as the previous example:
```python
from huggingface_hub import list_models
from smolagents import CodeAgent, HfApiModel, tool

@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task for which to get the most downloaded model.
    """
    most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return most_downloaded_model.id

agent = CodeAgent(tools=[model_download_tool], model=HfApiModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

smolagents also comes with a default toolbox of pre-built tools, such as a web search tool and a Python interpreter, that can be directly injected into your agent.
One of the most powerful features of smolagents is the ability to share your custom tools to the Hub, as well as load tools shared by the community. This includes integrating HF Spaces or LangChain tools. Below are examples showcasing each of these functionalities:
To share your custom tool, you can upload it to your Hugging Face account using the push_to_hub() method:
```python
model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
```

You can import tools developed by other users by using the `load_tool()` function:
```python
from smolagents import load_tool, CodeAgent

model_download_tool = load_tool(
    "{your_username}/hf-model-downloads",  # e.g. m-ric/text-to-image
    trust_remote_code=True,
)
```

You can also import an HF Space as a tool using `Tool.from_space()`. This opens up many possibilities for integration. The functionality uses `gradio_client` under the hood, so make sure to install it via pip if you don't have it already:
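The install step is just:

```shell
pip install gradio_client
```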
```python
from smolagents import CodeAgent, HfApiModel, Tool

image_generation_tool = Tool.from_space(
    "black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generate an image from a prompt",
)

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(tools=[image_generation_tool], model=model)
agent.run(
    "Improve this prompt, then generate an image of it.",
    additional_args={'user_prompt': 'A rabbit wearing a space suit'},
)
```

You can also load tools from LangChain using the `Tool.from_langchain()` method. Here's how to import and use a LangChain tool:
```python
from langchain.agents import load_tools
from smolagents import CodeAgent, HfApiModel, Tool

# Note: Tool must come from smolagents, not langchain.agents,
# since from_langchain() is a smolagents method.
search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(tools=[search_tool], model=model)
agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?")
```