Building Tiny Agents with MCP and the Hugging Face Hub

Now that we’ve built MCP servers in Gradio and learned about creating MCP clients, let’s complete our end-to-end application by building an agent that can seamlessly interact with our sentiment analysis tool. This section builds on the project Tiny Agents, which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.

In this final exercise of Unit 2, we will walk you through how to implement both TypeScript (JS) and Python MCP clients that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous sections. This completes our end-to-end MCP application flow: from building a Gradio MCP server exposing a sentiment analysis tool, to creating a flexible agent that can use this tool alongside other capabilities.

Image credit: https://x.com/adamdotdev

Installation

Let’s install the necessary packages to build our Tiny Agents.

Some MCP Clients, notably Claude Desktop, do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as mcp-remote. First install Node.js. Then, add the following to your own MCP Client config:
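For clients that use the common mcpServers JSON layout (Claude Desktop, for example), such an entry could look roughly like the sketch below. The "gradio" key is just an illustrative label, and the URL assumes the Gradio server from earlier in this unit is running locally on port 7860:

{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"]
    }
  }
}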


First, we need to install the tiny-agents package.

npm install @huggingface/tiny-agents
# or
pnpm add @huggingface/tiny-agents

Then, we need to install the mcp-remote package.

npm i mcp-remote
# or
pnpm add mcp-remote

Tiny Agents MCP Client in the Command Line

Tiny Agents can create MCP clients from the command line based on JSON configuration files.


Let’s set up a project with a basic Tiny Agent.

mkdir my-agent
touch my-agent/agent.json

The JSON file will look like this:

{
	"model": "Qwen/Qwen2.5-72B-Instruct",
	"provider": "nebius",
	"servers": [
		{
			"type": "stdio",
			"config": {
				"command": "npx",
				"args": [
					"mcp-remote",
					"http://localhost:7860/gradio_api/mcp/sse"
				]
			}
		}
	]
}

We can then run the agent with the following command:

npx @huggingface/tiny-agents run ./my-agent

Here we have a basic Tiny Agent that can connect to our Gradio MCP server. It includes a model, provider, and a server configuration.

The configuration fields are:

- model: the open source model to use for the agent
- provider: the inference provider to use for the agent
- servers: the servers to use for the agent. We’ll use the mcp-remote server for our Gradio MCP server.

We could also use an open source model running locally with Tiny Agents.

{
	"model": "Qwen/Qwen3-32B",
	"endpointUrl": "http://localhost:1234/v1",
	"servers": [
		{
			"type": "stdio",
			"config": {
				"command": "npx",
				"args": [
					"mcp-remote",
					"http://localhost:1234/v1/mcp/sse"
				]
			}
		}
	]
}

Here we have a Tiny Agent that can connect to a local model. It includes a model, endpoint URL (http://localhost:1234/v1), and a server configuration. The endpoint should be an OpenAI-compatible endpoint.

Custom Tiny Agents MCP Client

Now that we understand both Tiny Agents and Gradio MCP servers, let’s see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server from earlier sections.

Using the Gradio Server with Tiny Agents

To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier in this unit, we just need to add it to our list of servers. Here’s how we can modify our agent configuration:

const agent = new Agent({
    provider: process.env.PROVIDER ?? "nebius",
    model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
    apiKey: process.env.HF_TOKEN,
    servers: [
        // ... existing servers ...
        {
            command: "npx",
            args: [
                "mcp-remote",
                "http://localhost:7860/gradio_api/mcp/sse"  // Your Gradio MCP server
            ]
        }
    ],
});

Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:

  1. Read text from a file using the filesystem server
  2. Analyze its sentiment using our Gradio server
  3. Write the results back to a file
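Putting those steps together, a combined configuration could look roughly like the sketch below. This is only a sketch: it reuses the filesystem server that appears later in this section and assumes our Gradio server is running locally on port 7860.

import { Agent } from "@huggingface/tiny-agents";

const agent = new Agent({
    provider: process.env.PROVIDER ?? "nebius",
    model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
    apiKey: process.env.HF_TOKEN,
    servers: [
        // filesystem server: lets the agent read and write local files
        {
            command: "npx",
            args: ["@modelcontextprotocol/servers", "filesystem"]
        },
        // our Gradio sentiment analysis server, bridged over mcp-remote
        {
            command: "npx",
            args: ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"]
        }
    ],
});

await agent.run();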

Deployment Considerations

When deploying your Gradio MCP server to Hugging Face Spaces, you’ll need to update the server URL in your agent configuration to point to your deployed space:

{
    command: "npx",
    args: [
        "mcp-remote",
        "https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse"
    ]
}

This allows your agent to use the sentiment analysis tool from anywhere, not just locally!

Conclusion: Our Complete End-to-End MCP Application

In this unit, we’ve gone from understanding MCP basics to building a complete end-to-end application:

  1. We created a Gradio MCP server that exposes a sentiment analysis tool
  2. We learned how to connect to this server using MCP clients
  3. We built a tiny agent in TypeScript and Python that can interact with our tool

This demonstrates the power of the Model Context Protocol - we can create specialized tools using frameworks we’re familiar with (like Gradio), expose them through a standardized interface (MCP), and then have agents seamlessly use these tools alongside other capabilities.

The complete flow we’ve built allows an agent to discover our sentiment analysis tool at runtime, call it whenever a task requires it, and combine it with other tools such as the filesystem and browser servers.

This modular approach is what makes MCP so powerful for building flexible AI applications.

Bonus: Build a Browser Automation MCP Server with Playwright

As a bonus, let’s explore how to use the Playwright MCP server for browser automation with Tiny Agents. This demonstrates the extensibility of the MCP ecosystem beyond our sentiment analysis example.

This section is based on the Tiny Agents blog post and adapted for the MCP course.

In this section, we’ll show you how to build an agent that can perform web automation tasks like searching, clicking, and extracting information from websites.

// playwright-agent.ts
import { Agent } from "@huggingface/tiny-agents";

const agent = new Agent({
  provider: process.env.PROVIDER ?? "nebius",
  model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN,
  servers: [
    {
      command: "npx",
      args: ["playwright-mcp"]
    }
  ],
});

await agent.run();

The Playwright MCP server exposes tools that allow your agent to:

  1. Open browser tabs
  2. Navigate to URLs
  3. Click on elements
  4. Type into forms
  5. Extract content from webpages
  6. Take screenshots

Here’s an example interaction with our browser automation agent:

User: Search for "tiny agents" on GitHub and collect the names of the top 3 repositories

Agent: I'll search GitHub for "tiny agents" repositories.
[Agent opens browser, navigates to GitHub, performs the search, and extracts repository names]

Here are the top 3 (not real) repositories for "tiny agents":
1. huggingface/tiny-agents
2. modelcontextprotocol/tiny-agents-examples
3. langchain/tiny-agents-js

This browser automation capability can be combined with other MCP servers to create powerful workflows—for example, extracting text from a webpage and then analyzing it with custom tools.
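For example (a sketch, assuming our Gradio server is running locally), the agent’s servers list could simply include both:

servers: [
  // browser automation via the Playwright MCP server
  { command: "npx", args: ["playwright-mcp"] },
  // sentiment analysis via our Gradio MCP server, bridged with mcp-remote
  { command: "npx", args: ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"] }
],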

How to run the complete demo


If you have NodeJS (with pnpm or npm), just run this in a terminal:

npx @huggingface/mcp-client

or if using pnpm:

pnpx @huggingface/mcp-client

This installs the package into a temporary folder then executes its command.

You’ll see your simple Agent connect to multiple MCP servers (running locally), loading their tools (similar to how it would load your Gradio sentiment analysis tool), then prompting you for a conversation.

By default our example Agent connects to the following two MCP servers (the same two you’ll see in the configuration below):

  1. A filesystem server, which can read and write local files
  2. The Playwright MCP server, which can drive a browser for the agent

[!NOTE] This is a bit counter-intuitive, but currently all MCP servers in Tiny Agents are actually local processes (though remote servers are coming soon).

Our input for the first example was:

write a haiku about the Hugging Face community and write it to a file named “hf.txt” on my Desktop

Now let us try this prompt that involves some Web browsing:

do a Web Search for HF inference providers on Brave Search and open the first 3 results

Default model and provider

In terms of model/provider pair, our example Agent uses by default the Qwen/Qwen2.5-72B-Instruct model served through the Nebius inference provider, as you can see in the snippet below.

This is all configurable through environment variables:
const agent = new Agent({
	provider: process.env.PROVIDER ?? "nebius",
	model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
	apiKey: process.env.HF_TOKEN,
	servers: [
		// Default servers
		{
			command: "npx",
			args: ["@modelcontextprotocol/servers", "filesystem"]
		},
		{
			command: "npx",
			args: ["playwright-mcp"]
		},
	],
});
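For example, launching the CLI with different values could look like the line below (the values shown are placeholders; substitute your own provider, model ID, and Hugging Face token):

PROVIDER=nebius MODEL_ID=Qwen/Qwen2.5-72B-Instruct HF_TOKEN=hf_xxx npx @huggingface/mcp-client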

Implementing an MCP client on top of InferenceClient

Now that we know what a tool is in recent LLMs, let’s implement the actual MCP client that will communicate with our Gradio MCP server and other MCP servers.

The official doc at https://modelcontextprotocol.io/quickstart/client is fairly well-written. You only have to replace any mention of the Anthropic client SDK with any other OpenAI-compatible client SDK. (There is also a llms.txt you can feed into your LLM of choice to help you code along.)


As a reminder, we use HF’s InferenceClient for our inference client.

The complete McpClient.ts code file is here if you want to follow along using the actual code 🤓

Our McpClient class holds an InferenceClient, a map of MCP client sessions (keyed by tool name), and the list of tools that will be exposed to the LLM:

export class McpClient {
	protected client: InferenceClient;
	protected provider: string;
	protected model: string;
	private clients: Map<ToolName, Client> = new Map();
	public readonly availableTools: ChatCompletionInputTool[] = [];

	constructor({ provider, model, apiKey }: { provider: InferenceProvider; model: string; apiKey: string }) {
		this.client = new InferenceClient(apiKey);
		this.provider = provider;
		this.model = model;
	}
	
	// [...]
}

To connect to an MCP server, the official @modelcontextprotocol/sdk/client TypeScript SDK provides a Client class with a listTools() method:

async addMcpServer(server: StdioServerParameters): Promise<void> {
	const transport = new StdioClientTransport({
		...server,
		env: { ...server.env, PATH: process.env.PATH ?? "" },
	});
	const mcp = new Client({ name: "@huggingface/mcp-client", version: packageVersion });
	await mcp.connect(transport);

	const toolsResult = await mcp.listTools();
	debug(
		"Connected to server with tools:",
		toolsResult.tools.map(({ name }) => name)
	);

	for (const tool of toolsResult.tools) {
		this.clients.set(tool.name, mcp);
	}

	this.availableTools.push(
		...toolsResult.tools.map((tool) => {
			return {
				type: "function",
				function: {
					name: tool.name,
					description: tool.description,
					parameters: tool.inputSchema,
				},
			} satisfies ChatCompletionInputTool;
		})
	);
}

StdioServerParameters is an interface from the MCP SDK that will let you easily spawn a local process: as we mentioned earlier, currently, all MCP servers are actually local processes.
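For instance, hooking our Gradio server into this client through mcp-remote might look roughly like this (a sketch, assuming an instantiated McpClient named client):

// spawn mcp-remote locally; it bridges to the SSE endpoint of our Gradio server
await client.addMcpServer({
	command: "npx",
	args: ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"],
});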

How to use the tools

Using our sentiment analysis tool (or any other MCP tool) is straightforward. You just pass this.availableTools to your LLM chat-completion, in addition to your usual array of messages:

const stream = this.client.chatCompletionStream({
	provider: this.provider,
	model: this.model,
	messages,
	tools: this.availableTools,
	tool_choice: "auto",
});

tool_choice: "auto" is the parameter you pass for the LLM to generate zero, one, or multiple tool calls.
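For illustration, once any streamed deltas have been assembled, a tool call could look roughly like this (the tool name and arguments here are assumptions about what our Gradio sentiment server exposes):

const toolCall = {
	id: "call_0",
	type: "function",
	function: {
		name: "sentiment_analysis",               // assumed tool name from our Gradio server
		arguments: '{"text": "MCP is awesome!"}', // JSON-encoded arguments generated by the LLM
	},
};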

When parsing or streaming the output, the LLM will generate some tool calls (i.e. a function name, and some JSON-encoded arguments), which you (as a developer) need to compute. The MCP client SDK once again makes that very easy; it has a client.callTool() method:

const toolName = toolCall.function.name;
const toolArgs = JSON.parse(toolCall.function.arguments);

const toolMessage: ChatCompletionInputMessageTool = {
	role: "tool",
	tool_call_id: toolCall.id,
	content: "",
	name: toolName,
};

/// Get the appropriate session for this tool
const client = this.clients.get(toolName);
if (client) {
	const result = await client.callTool({ name: toolName, arguments: toolArgs });
	toolMessage.content = result.content[0].text;
} else {
	toolMessage.content = `Error: No session found for tool: ${toolName}`;
}

If the LLM chooses to use a tool, this code will automatically route the call to the MCP server, execute the analysis, and return the result back to the LLM.

Finally, you add the resulting tool message to your messages array and send it back to the LLM.
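In sketch form, assuming a messages array like the one passed to chatCompletionStream above, that last step might look like this:

// feed the tool result back to the model on the next turn
messages.push(toolMessage);

const nextStream = this.client.chatCompletionStream({
	provider: this.provider,
	model: this.model,
	messages,
	tools: this.availableTools,
	tool_choice: "auto",
});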

Our 50-lines-of-code Agent 🤯

Now that we have an MCP client capable of connecting to arbitrary MCP servers to get lists of tools, and capable of injecting those tools into the LLM inference and parsing the resulting tool calls, well… what is an Agent?

Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.

In more detail, an Agent is simply a combination of:

  1. An LLM inference client
  2. An MCP client that hooks tools from any number of MCP servers
  3. A system prompt
  4. A while loop for the control flow (shown below)

The complete Agent.ts code file is here.

Our Agent class simply extends McpClient:

export class Agent extends McpClient {
	private readonly servers: StdioServerParameters[];
	protected messages: ChatCompletionInputMessage[];

	constructor({
		provider,
		model,
		apiKey,
		servers,
		prompt,
	}: {
		provider: InferenceProvider;
		model: string;
		apiKey: string;
		servers: StdioServerParameters[];
		prompt?: string;
	}) {
		super({ provider, model, apiKey });
		this.servers = servers;
		this.messages = [
			{
				role: "system",
				content: prompt ?? DEFAULT_SYSTEM_PROMPT,
			},
		];
	}
}

By default, we use a very simple system prompt inspired by the one shared in the GPT-4.1 prompting guide.

Even though this comes from OpenAI 😈, this sentence in particular applies to more and more models, both closed and open:

We encourage developers to exclusively use the tools field to pass tools, rather than manually injecting tool descriptions into your prompt and writing a separate parser for tool calls, as some have reported doing in the past.

Which is to say, we don’t need to provide painstakingly formatted lists of tool use examples in the prompt. The tools: this.availableTools param is enough, and the LLM will know how to use the filesystem and browser tools (as well as our Gradio sentiment analysis tool).

Loading the tools on the Agent is literally just connecting to the MCP servers we want (in parallel because it’s so easy to do in JS):

async loadTools(): Promise<void> {
	await Promise.all(this.servers.map((s) => this.addMcpServer(s)));
}

We add two extra tools (outside of MCP) that can be used by the LLM for our Agent’s control flow:

const taskCompletionTool: ChatCompletionInputTool = {
	type: "function",
	function: {
		name: "task_complete",
		description: "Call this tool when the task given by the user is complete",
		parameters: {
			type: "object",
			properties: {},
		},
	},
};
const askQuestionTool: ChatCompletionInputTool = {
	type: "function",
	function: {
		name: "ask_question",
		description: "Ask a question to the user to get more info required to solve or clarify their problem.",
		parameters: {
			type: "object",
			properties: {},
		},
	},
};
const exitLoopTools = [taskCompletionTool, askQuestionTool];

When calling any of these tools, the Agent will break its loop and give control back to the user for new input.

The complete while loop

Behold our complete while loop. 🎉

The gist of our Agent’s main while loop is that we simply iterate with the LLM alternating between tool calling and feeding it the tool results, and we do so until the LLM starts to respond with two non-tool messages in a row.

This is the complete while loop:

let numOfTurns = 0;
let nextTurnShouldCallTools = true;
while (true) {
	try {
		yield* this.processSingleTurnWithTools(this.messages, {
			exitLoopTools,
			exitIfFirstChunkNoTool: numOfTurns > 0 && nextTurnShouldCallTools,
			abortSignal: opts.abortSignal,
		});
	} catch (err) {
		if (err instanceof Error && err.message === "AbortError") {
			return;
		}
		throw err;
	}
	numOfTurns++;
	const currentLast = this.messages.at(-1)!;
	if (
		currentLast.role === "tool" &&
		currentLast.name &&
		exitLoopTools.map((t) => t.function.name).includes(currentLast.name)
	) {
		return;
	}
	if (currentLast.role !== "tool" && numOfTurns > MAX_NUM_TURNS) {
		return;
	}
	if (currentLast.role !== "tool" && nextTurnShouldCallTools) {
		return;
	}
	if (currentLast.role === "tool") {
		nextTurnShouldCallTools = false;
	} else {
		nextTurnShouldCallTools = true;
	}
}