Now that we've built MCP servers in Gradio and learned about creating MCP clients, let's complete our end-to-end application by building an agent that can seamlessly interact with our sentiment analysis tool. This section builds on the project Tiny Agents, which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.
In this final exercise of Unit 2, we will walk you through how to implement both TypeScript (JS) and Python MCP clients that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous sections. This completes our end-to-end MCP application flow: from building a Gradio MCP server exposing a sentiment analysis tool, to creating a flexible agent that can use this tool alongside other capabilities.
Let's install the necessary packages to build our Tiny Agents.
Some MCP Clients, notably Claude Desktop, do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as `mcp-remote`. First install Node.js. Then, add the following to your own MCP Client config:
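For example, with a Gradio server running locally on port 7860, a minimal config entry might look like this (the server name `"gradio"` is just an illustrative label for your own setup):

```json
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"]
    }
  }
}
```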
First, we need to install the `tiny-agents` package:
```bash
npm install @huggingface/tiny-agents
# or
pnpm add @huggingface/tiny-agents
```

Then, we need to install the `mcp-remote` package:

```bash
npm i mcp-remote
# or
pnpm add mcp-remote
```

Tiny Agents can create MCP clients from the command line based on JSON configuration files.
Let's set up a project with a basic Tiny Agent.
```bash
mkdir my-agent
touch my-agent/agent.json
```

The JSON file will look like this:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "servers": [
    {
      "type": "stdio",
      "config": {
        "command": "npx",
        "args": [
          "mcp-remote",
          "http://localhost:7860/gradio_api/mcp/sse"
        ]
      }
    }
  ]
}
```

We can then run the agent with the following command:

```bash
npx @huggingface/tiny-agents run ./my-agent
```
Here we have a basic Tiny Agent that can connect to our Gradio MCP server. It includes a model, provider, and a server configuration.
| Field | Description |
|---|---|
| `model` | The open source model to use for the agent |
| `provider` | The inference provider to use for the agent |
| `servers` | The servers to use for the agent. We'll use the `mcp-remote` server for our Gradio MCP server. |
We could also use an open source model running locally with Tiny Agents.
```json
{
  "model": "Qwen/Qwen3-32B",
  "endpointUrl": "http://localhost:1234/v1",
  "servers": [
    {
      "type": "stdio",
      "config": {
        "command": "npx",
        "args": [
          "mcp-remote",
          "http://localhost:1234/v1/mcp/sse"
        ]
      }
    }
  ]
}
```

Here we have a Tiny Agent that can connect to a local model. It includes a model, an endpoint URL (http://localhost:1234/v1), and a server configuration. The endpoint should be an OpenAI-compatible endpoint.
Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server from earlier sections.
To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier in this unit, we just need to add it to our list of servers. Here's how we can modify our agent configuration:
```typescript
const agent = new Agent({
  provider: process.env.PROVIDER ?? "nebius",
  model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN,
  servers: [
    // ... existing servers ...
    {
      command: "npx",
      args: [
        "mcp-remote",
        "http://localhost:7860/gradio_api/mcp/sse" // Your Gradio MCP server
      ]
    }
  ],
});
```

Now our agent can use the sentiment analysis tool alongside other tools! For example, it could read text from a file, analyze its sentiment, and write the result back to a file.
When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to update the server URL in your agent configuration to point to your deployed Space:
```typescript
{
  command: "npx",
  args: [
    "mcp-remote",
    "https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse"
  ]
}
```

This allows your agent to use the sentiment analysis tool from anywhere, not just locally!
In this unit, we've gone from understanding MCP basics to building a complete end-to-end application:

- building a Gradio MCP server that exposes a sentiment analysis tool,
- creating MCP clients that can connect to it,
- and building a Tiny Agent that uses this tool to solve user tasks.
This demonstrates the power of the Model Context Protocol: we can create specialized tools using frameworks we're familiar with (like Gradio), expose them through a standardized interface (MCP), and then have agents seamlessly use these tools alongside other capabilities.
The complete flow we've built allows an agent to:

- connect to multiple MCP servers at once,
- dynamically discover the tools they expose,
- and call our custom sentiment analysis tool alongside other capabilities like file system access and web browsing.
This modular approach is what makes MCP so powerful for building flexible AI applications.
As a bonus, let's explore how to use the Playwright MCP server for browser automation with Tiny Agents. This demonstrates the extensibility of the MCP ecosystem beyond our sentiment analysis example.
This section is based on the Tiny Agents blog post and adapted for the MCP course.
In this section, we'll show you how to build an agent that can perform web automation tasks like searching, clicking, and extracting information from websites.
```typescript
// playwright-agent.ts
import { Agent } from "@huggingface/tiny-agents";

const agent = new Agent({
  provider: process.env.PROVIDER ?? "nebius",
  model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN,
  servers: [
    {
      command: "npx",
      args: ["playwright-mcp"]
    }
  ],
});

await agent.run();
```

The Playwright MCP server exposes tools that allow your agent to navigate to websites, click on elements, fill in forms, and extract content from pages.
Here's an example interaction with our browser automation agent:
```
User: Search for "tiny agents" on GitHub and collect the names of the top 3 repositories

Agent: I'll search GitHub for "tiny agents" repositories.

[Agent opens browser, navigates to GitHub, performs the search, and extracts repository names]

Here are the top 3 (not real) repositories for "tiny agents":
1. huggingface/tiny-agents
2. modelcontextprotocol/tiny-agents-examples
3. langchain/tiny-agents-js
```

This browser automation capability can be combined with other MCP servers to create powerful workflows; for example, extracting text from a webpage and then analyzing it with custom tools.
If you have NodeJS (with pnpm or npm), just run this in a terminal:
```bash
npx @huggingface/mcp-client
```

or if using pnpm:

```bash
pnpx @huggingface/mcp-client
```
This installs the package into a temporary folder and then executes its command.
You'll see your simple Agent connect to multiple MCP servers (running locally), load their tools (similar to how it would load your Gradio sentiment analysis tool), and then prompt you for a conversation.
By default, our example Agent connects to the following two MCP servers:

- the "canonical" file system server, which gets access to your Desktop,
- and the Playwright MCP server, which knows how to drive a sandboxed browser for you.
> [!NOTE]
> This is a bit counter-intuitive, but currently all MCP servers in Tiny Agents are actually local processes (though remote servers are coming soon).
Our input for this first example was:
```
write a haiku about the Hugging Face community and write it to a file named "hf.txt" on my Desktop
```
Now let's try this prompt that involves some Web browsing:
```
do a Web Search for HF inference providers on Brave Search and open the first 3 results
```
In terms of the model/provider pair, our example Agent uses by default:

- Model: "Qwen/Qwen2.5-72B-Instruct"
- Provider: Nebius
This is all configurable through env variables:
```typescript
const agent = new Agent({
  provider: process.env.PROVIDER ?? "nebius",
  model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN,
  servers: [
    // Default servers
    {
      command: "npx",
      args: ["@modelcontextprotocol/servers", "filesystem"]
    },
    {
      command: "npx",
      args: ["playwright-mcp"]
    },
  ],
});
```

Now that we know what a tool is in recent LLMs, let's implement the actual MCP client that will communicate with MCP servers to discover and call their tools.
The official doc at https://modelcontextprotocol.io/quickstart/client is fairly well-written. You only have to replace any mention of the Anthropic client SDK with any other OpenAI-compatible client SDK. (There is also an llms.txt you can feed into your LLM of choice to help you code along.)
As a reminder, we use HF's `InferenceClient` for our inference client.
The complete `McpClient.ts` code file is here if you want to follow along using the actual code.
Our `McpClient` class has:

- an Inference Client (works with any Inference Provider, and `huggingface/inference` supports both remote and local endpoints),
- a set of MCP client sessions, one for each connected MCP server,
- and a list of available tools, updated from the connected servers.

```typescript
export class McpClient {
  protected client: InferenceClient;
  protected provider: string;
  protected model: string;
  private clients: Map<ToolName, Client> = new Map();
  public readonly availableTools: ChatCompletionInputTool[] = [];

  constructor({ provider, model, apiKey }: { provider: InferenceProvider; model: string; apiKey: string }) {
    this.client = new InferenceClient(apiKey);
    this.provider = provider;
    this.model = model;
  }

  // [...]
}
```

To connect to an MCP server, the official `@modelcontextprotocol/sdk/client` TypeScript SDK provides a `Client` class with a `listTools()` method:
```typescript
async addMcpServer(server: StdioServerParameters): Promise<void> {
  const transport = new StdioClientTransport({
    ...server,
    env: { ...server.env, PATH: process.env.PATH ?? "" },
  });
  const mcp = new Client({ name: "@huggingface/mcp-client", version: packageVersion });
  await mcp.connect(transport);

  const toolsResult = await mcp.listTools();
  debug(
    "Connected to server with tools:",
    toolsResult.tools.map(({ name }) => name)
  );

  for (const tool of toolsResult.tools) {
    this.clients.set(tool.name, mcp);
  }

  this.availableTools.push(
    ...toolsResult.tools.map((tool) => {
      return {
        type: "function",
        function: {
          name: tool.name,
          description: tool.description,
          parameters: tool.inputSchema,
        },
      } satisfies ChatCompletionInputTool;
    })
  );
}
```

`StdioServerParameters` is an interface from the MCP SDK that lets you easily spawn a local process: as we mentioned earlier, currently, all MCP servers are actually local processes.
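For instance, the Gradio server entry we've been using is exactly such an object. Here is a minimal sketch, assuming an `McpClient` instance named `client`:

```typescript
// A StdioServerParameters value: spawn `npx mcp-remote` as a local child process
// that bridges to the remote SSE endpoint of our Gradio server.
const gradioServer: StdioServerParameters = {
  command: "npx",
  args: ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"],
};

await client.addMcpServer(gradioServer);
```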
Using our sentiment analysis tool (or any other MCP tool) is straightforward. You just pass `this.availableTools` to your LLM chat completion, in addition to your usual array of messages:
```typescript
const stream = this.client.chatCompletionStream({
  provider: this.provider,
  model: this.model,
  messages,
  tools: this.availableTools,
  tool_choice: "auto",
});
```

`tool_choice: "auto"` is the parameter you pass for the LLM to generate zero, one, or multiple tool calls.
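Before anything can be executed, the streamed output has to be parsed back into complete tool calls. Here is a minimal sketch of that accumulation step, assuming the chunks follow the usual OpenAI-compatible streaming shape (`choices[0].delta.tool_calls`):

```typescript
// Tool-call deltas arrive in fragments; group them by index and
// concatenate the JSON-encoded argument pieces until the stream ends.
const toolCalls: Map<number, { id: string; name: string; arguments: string }> = new Map();

for await (const chunk of stream) {
  for (const delta of chunk.choices[0]?.delta?.tool_calls ?? []) {
    const existing = toolCalls.get(delta.index);
    if (existing) {
      existing.arguments += delta.function?.arguments ?? "";
    } else {
      toolCalls.set(delta.index, {
        id: delta.id ?? "",
        name: delta.function?.name ?? "",
        arguments: delta.function?.arguments ?? "",
      });
    }
  }
}
```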
Once you have complete tool calls (i.e., a function name and some JSON-encoded arguments), you (as a developer) need to execute them. The MCP client SDK once again makes that very easy; it has a `client.callTool()` method:
```typescript
const toolName = toolCall.function.name;
const toolArgs = JSON.parse(toolCall.function.arguments);

const toolMessage: ChatCompletionInputMessageTool = {
  role: "tool",
  tool_call_id: toolCall.id,
  content: "",
  name: toolName,
};

/// Get the appropriate session for this tool
const client = this.clients.get(toolName);
if (client) {
  const result = await client.callTool({ name: toolName, arguments: toolArgs });
  toolMessage.content = result.content[0].text;
} else {
  toolMessage.content = `Error: No session found for tool: ${toolName}`;
}
```

If the LLM chooses to use a tool, this code will automatically route the call to the MCP server, execute the analysis, and return the result back to the LLM.
Finally, you add the resulting tool message to your `messages` array and send it back to the LLM.
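Concretely, that last step looks roughly like this sketch, reusing `toolMessage` and the `chatCompletionStream` call from above:

```typescript
// Append the tool result so the LLM can see it on the next turn.
messages.push(toolMessage);

// Ask the model to continue, with the same tool set still available.
const nextStream = this.client.chatCompletionStream({
  provider: this.provider,
  model: this.model,
  messages,
  tools: this.availableTools,
  tool_choice: "auto",
});
```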
Now that we have an MCP client capable of connecting to arbitrary MCP servers to get lists of tools and capable of injecting them and parsing them from the LLM inference, well... what is an Agent?
Once you have an inference client with a set of tools, an Agent is just a while loop on top of it.
In more detail, an Agent is simply a combination of:

- a system prompt,
- an LLM inference client,
- an MCP client that hooks a list of tools into it from a bunch of MCP servers,
- and some basic control flow (see below for the while loop).
The complete `Agent.ts` code file is here.
Our Agent class simply extends McpClient:
```typescript
export class Agent extends McpClient {
  private readonly servers: StdioServerParameters[];
  protected messages: ChatCompletionInputMessage[];

  constructor({
    provider,
    model,
    apiKey,
    servers,
    prompt,
  }: {
    provider: InferenceProvider;
    model: string;
    apiKey: string;
    servers: StdioServerParameters[];
    prompt?: string;
  }) {
    super({ provider, model, apiKey });
    this.servers = servers;
    this.messages = [
      {
        role: "system",
        content: prompt ?? DEFAULT_SYSTEM_PROMPT,
      },
    ];
  }
}
```

By default, we use a very simple system prompt inspired by the one shared in the GPT-4.1 prompting guide.
Even though this comes from OpenAI, this sentence in particular applies to more and more models, both closed and open:
> We encourage developers to exclusively use the tools field to pass tools, rather than manually injecting tool descriptions into your prompt and writing a separate parser for tool calls, as some have reported doing in the past.
Which is to say, we don't need to provide painstakingly formatted lists of tool use examples in the prompt. The `tools: this.availableTools` param is enough, and the LLM will know how to use both the filesystem tools and the browser tools.
Loading the tools on the Agent is literally just connecting to the MCP servers we want (in parallel because it's so easy to do in JS):
```typescript
async loadTools(): Promise<void> {
  await Promise.all(this.servers.map((s) => this.addMcpServer(s)));
}
```
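As a usage sketch (a hypothetical setup; the constructor arguments mirror the Agent configuration shown earlier):

```typescript
// Hypothetical setup: connect the agent to our Gradio sentiment server,
// then load all tools before starting the conversation loop.
const agent = new Agent({
  provider: "nebius",
  model: "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN!,
  servers: [
    { command: "npx", args: ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"] },
  ],
});

await agent.loadTools(); // connects to every MCP server in parallel
```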
We add two extra tools (outside of MCP) that can be used by the LLM for our Agent's control flow:

```typescript
const taskCompletionTool: ChatCompletionInputTool = {
  type: "function",
  function: {
    name: "task_complete",
    description: "Call this tool when the task given by the user is complete",
    parameters: {
      type: "object",
      properties: {},
    },
  },
};

const askQuestionTool: ChatCompletionInputTool = {
  type: "function",
  function: {
    name: "ask_question",
    description: "Ask a question to the user to get more info required to solve or clarify their problem.",
    parameters: {
      type: "object",
      properties: {},
    },
  },
};

const exitLoopTools = [taskCompletionTool, askQuestionTool];
```

When calling any of these tools, the Agent will break its loop and give control back to the user for new input.
Behold our complete while loop.
The gist of our Agentâs main while loop is that we simply iterate with the LLM alternating between tool calling and feeding it the tool results, and we do so until the LLM starts to respond with two non-tool messages in a row.
This is the complete while loop:
```typescript
let numOfTurns = 0;
let nextTurnShouldCallTools = true;
while (true) {
  try {
    yield* this.processSingleTurnWithTools(this.messages, {
      exitLoopTools,
      exitIfFirstChunkNoTool: numOfTurns > 0 && nextTurnShouldCallTools,
      abortSignal: opts.abortSignal,
    });
  } catch (err) {
    if (err instanceof Error && err.message === "AbortError") {
      return;
    }
    throw err;
  }
  numOfTurns++;
  const currentLast = this.messages.at(-1)!;
  // Exit if the LLM called one of the exit-loop tools (task_complete or ask_question).
  if (
    currentLast.role === "tool" &&
    currentLast.name &&
    exitLoopTools.map((t) => t.function.name).includes(currentLast.name)
  ) {
    return;
  }
  // Safety valve: exit once we exceed the maximum number of turns.
  if (currentLast.role !== "tool" && numOfTurns > MAX_NUM_TURNS) {
    return;
  }
  // Exit when the LLM responds with a plain message while we expected
  // tool calls, i.e. two non-tool responses in a row.
  if (currentLast.role !== "tool" && nextTurnShouldCallTools) {
    return;
  }
  // After a tool result, the next turn should be a plain response;
  // after a plain response, we expect tool calls again.
  if (currentLast.role === "tool") {
    nextTurnShouldCallTools = false;
  } else {
    nextTurnShouldCallTools = true;
  }
}
```