Now that we've built MCP servers in Gradio, let's explore MCP clients even further. This section builds on the experimental project Tiny Agents, which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.
In this short exercise, we will walk you through how to implement a TypeScript (JS) MCP client that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous section. You'll see how MCP standardizes the way agents interact with tools, making Agentic AI development significantly simpler.
We will show you how to connect your tiny agent to Gradio-based MCP servers, allowing it to leverage both your custom sentiment analysis tool and other pre-built tools.
If you have NodeJS (with pnpm or npm), just run this in a terminal:
```bash
npx @huggingface/mcp-client
```
or if using pnpm:
```bash
pnpx @huggingface/mcp-client
```
This installs the package into a temporary folder and then executes its command.
You'll see your simple Agent connect to multiple MCP servers (running locally), loading their tools (similar to how it would load your Gradio sentiment analysis tool), then prompting you for a conversation.
By default, our example Agent connects to the following two MCP servers:
- the "canonical" file system server, which gets access to your Desktop,
- and the Playwright MCP server, which knows how to use a sandboxed browser for you.
You can easily add your Gradio sentiment analysis server to this list, as we'll demonstrate later in this section.
[!NOTE] Note: this is a bit counter-intuitive, but currently all MCP servers in tiny agents are actually local processes (though remote servers are coming soon). This doesn't include our Gradio server running on localhost:7860, which we reach through the local mcp-remote proxy.
Our input for this first example was:
write a haiku about the Hugging Face community and write it to a file named "hf.txt" on my Desktop
Now let's try a prompt that involves some web browsing:
do a Web Search for HF inference providers on Brave Search and open the first 3 results
With our Gradio sentiment analysis tool connected, we could similarly ask:
analyze the sentiment of this review: "I absolutely loved the product, it exceeded all my expectations!"
In terms of model/provider pair, our example Agent uses by default Qwen/Qwen2.5-72B-Instruct running on Nebius.
This is all configurable through environment variables! Here, we'll also show how to add our Gradio MCP server:

```ts
const agent = new Agent({
provider: process.env.PROVIDER ?? "nebius",
model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
apiKey: process.env.HF_TOKEN,
servers: [
// Default servers
{
command: "npx",
args: ["@modelcontextprotocol/servers", "filesystem"]
},
{
command: "npx",
args: ["playwright-mcp"]
},
// Our Gradio sentiment analysis server
{
command: "npx",
args: [
"mcp-remote",
"http://localhost:7860/gradio_api/mcp/sse"
]
}
],
});
```

We connect to our Gradio-based MCP server via the mcp-remote package.
What makes connecting Gradio MCP servers to our Tiny Agent possible is that recent LLMs (both closed and open) have been trained for function calling, a.k.a. tool use. This same capability powers our integration with the sentiment analysis tool we built with Gradio.
A tool is defined by its name, a description, and a JSONSchema representation of its parameters - exactly how we defined our sentiment analysis function in the Gradio server. Let's look at a simple example:

```ts
const weatherTool = {
type: "function",
function: {
name: "get_weather",
description: "Get current temperature for a given location.",
parameters: {
type: "object",
properties: {
location: {
type: "string",
        description: "City and country e.g. Bogotá, Colombia",
},
},
},
},
};
```

Our Gradio sentiment analysis tool would have a similar structure, with `text` as the input parameter instead of `location`.
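For illustration, here is a hedged sketch of what that tool definition could look like. The actual name and description are generated by the Gradio MCP server from our function's name and docstring, so treat these exact strings as placeholders:

```ts
// Hypothetical sketch: the real name/description come from the schema
// exposed by the Gradio MCP server, so these values are placeholders.
const sentimentTool = {
  type: "function",
  function: {
    name: "sentiment_analysis",
    description: "Analyze the sentiment of the given text.",
    parameters: {
      type: "object",
      properties: {
        text: {
          type: "string",
          description: "The text to analyze, e.g. a product review",
        },
      },
    },
  },
};
```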
The canonical documentation I will link to here is OpenAI's function calling doc. (Yes… OpenAI pretty much defines the LLM standards for the whole community.)
Inference engines let you pass a list of tools when calling the LLM, and the LLM is free to call zero, one or more of those tools. As a developer, you run the tools and feed their result back into the LLM to continue the generation.
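As a minimal, non-streaming sketch of that request/response cycle (assuming the huggingface/inference client used later in this section and the weatherTool defined above; the tool execution itself is stubbed here), it could look like this:

```ts
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const messages: Array<{ role: string; content: string; tool_call_id?: string }> = [
  { role: "user", content: "What's the weather in Bogotá?" },
];

// 1. Call the LLM and let it decide whether to use any of the provided tools
const response = await client.chatCompletion({
  provider: "nebius",
  model: "Qwen/Qwen2.5-72B-Instruct",
  messages,
  tools: [weatherTool], // the tool defined above
  tool_choice: "auto",
});

// 2. Execute whatever tool calls the LLM produced and feed the results back
for (const toolCall of response.choices[0].message.tool_calls ?? []) {
  const args = JSON.parse(toolCall.function.arguments as string);
  // Stubbed tool execution for the sketch; the real MCP client below
  // does this with client.callTool() instead.
  const result = `It is 25°C in ${args.location}`;
  messages.push({
    role: "tool",
    tool_call_id: toolCall.id,
    content: result,
  });
}
```

A second chatCompletion call with the updated messages would then let the model produce its final answer.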
[!NOTE] Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted chat_template, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.
Now that we know what a tool is in recent LLMs, let's implement the actual MCP client that will communicate with our Gradio server and other MCP servers.
The official doc at https://modelcontextprotocol.io/quickstart/client is fairly well-written. You only have to replace any mention of the Anthropic client SDK with any other OpenAI-compatible client SDK. (There is also a llms.txt you can feed into your LLM of choice to help you code along.)
As a reminder, we use HF's InferenceClient for our inference client.
The complete `McpClient.ts` code file is here if you want to follow along using the actual code.
Our McpClient class has:
- an Inference Client (it works with any Inference Provider, and huggingface/inference supports both remote and local endpoints),
- a set of MCP client sessions, one for each connected MCP server,
- and a list of available tools, collected from the connected servers and slightly re-formatted.

```ts
export class McpClient {
protected client: InferenceClient;
protected provider: string;
protected model: string;
private clients: Map<ToolName, Client> = new Map();
public readonly availableTools: ChatCompletionInputTool[] = [];
constructor({ provider, model, apiKey }: { provider: InferenceProvider; model: string; apiKey: string }) {
this.client = new InferenceClient(apiKey);
this.provider = provider;
this.model = model;
}
// [...]
}
```

To connect to an MCP server (like our Gradio sentiment analysis server), the official @modelcontextprotocol/sdk/client TypeScript SDK provides a Client class with a listTools() method:

```ts
async addMcpServer(server: StdioServerParameters): Promise<void> {
const transport = new StdioClientTransport({
...server,
env: { ...server.env, PATH: process.env.PATH ?? "" },
});
const mcp = new Client({ name: "@huggingface/mcp-client", version: packageVersion });
await mcp.connect(transport);
const toolsResult = await mcp.listTools();
debug(
"Connected to server with tools:",
toolsResult.tools.map(({ name }) => name)
);
for (const tool of toolsResult.tools) {
this.clients.set(tool.name, mcp);
}
this.availableTools.push(
...toolsResult.tools.map((tool) => {
return {
type: "function",
function: {
name: tool.name,
description: tool.description,
parameters: tool.inputSchema,
},
} satisfies ChatCompletionInputTool;
})
);
}
```

StdioServerParameters is an interface from the MCP SDK that lets you easily spawn a local process: as we mentioned earlier, all MCP servers are currently local processes; our Gradio server is reached through the local mcp-remote proxy, which bridges to it over HTTP.
For each MCP server we connect to (including our Gradio sentiment analysis server), we slightly re-format its list of tools and add them to this.availableTools.
Using our sentiment analysis tool (or any other MCP tool) is straightforward. You just pass this.availableTools to your LLM chat-completion, in addition to your usual array of messages:
```ts
const stream = this.client.chatCompletionStream({
provider: this.provider,
model: this.model,
messages,
tools: this.availableTools,
tool_choice: "auto",
});
```

`tool_choice: "auto"` is the parameter you pass for the LLM to generate zero, one, or multiple tool calls.
When parsing or streaming the output, the LLM will generate some tool calls (i.e., a function name and some JSON-encoded arguments), which you (as a developer) need to execute. The MCP client SDK once again makes that very easy; it has a client.callTool() method:

```ts
const toolName = toolCall.function.name;
const toolArgs = JSON.parse(toolCall.function.arguments);
const toolMessage: ChatCompletionInputMessageTool = {
role: "tool",
tool_call_id: toolCall.id,
content: "",
name: toolName,
};
/// Get the appropriate session for this tool
const client = this.clients.get(toolName);
if (client) {
const result = await client.callTool({ name: toolName, arguments: toolArgs });
toolMessage.content = result.content[0].text;
} else {
toolMessage.content = `Error: No session found for tool: ${toolName}`;
}
```

If the LLM chooses to use our sentiment analysis tool, this code will automatically route the call to our Gradio server, execute the analysis, and return the result back to the LLM.
Finally, you add the resulting tool message to your messages array and send it back to the LLM so it can continue the generation.
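As a rough sketch (reusing the `messages` and `toolMessage` variables from the snippets above), that last step could look like this:

```ts
// Append the tool result to the conversation...
messages.push(toolMessage);

// ...and call the LLM again so it can use the result to keep generating.
const followUp = this.client.chatCompletionStream({
  provider: this.provider,
  model: this.model,
  messages,
  tools: this.availableTools,
  tool_choice: "auto",
});
```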
Now that we have an MCP client capable of connecting to arbitrary MCP servers (including our Gradio sentiment analysis server) to get lists of tools, and capable of injecting them into and parsing them from the LLM inference, well… what is an Agent?
Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.
In more detail, an Agent is simply a combination of:
- a system prompt,
- an LLM inference client,
- an MCP client that hooks a list of tools into it from a number of MCP servers,
- and some basic control flow (see the while loop below).
The complete Agent.ts code file is here.
Our Agent class simply extends McpClient:
```ts
export class Agent extends McpClient {
private readonly servers: StdioServerParameters[];
protected messages: ChatCompletionInputMessage[];
constructor({
provider,
model,
apiKey,
servers,
prompt,
}: {
provider: InferenceProvider;
model: string;
apiKey: string;
servers: StdioServerParameters[];
prompt?: string;
}) {
super({ provider, model, apiKey });
this.servers = servers;
this.messages = [
{
role: "system",
content: prompt ?? DEFAULT_SYSTEM_PROMPT,
},
];
}
}
```

By default, we use a very simple system prompt inspired by the one shared in the GPT-4.1 prompting guide.
Even though this comes from OpenAI, this sentence in particular applies to more and more models, both closed and open:
> We encourage developers to exclusively use the tools field to pass tools, rather than manually injecting tool descriptions into your prompt and writing a separate parser for tool calls, as some have reported doing in the past.
Which is to say, we don't need to provide painstakingly formatted lists of tool use examples in the prompt. The `tools: this.availableTools` parameter is enough, and the LLM will know how to use both the filesystem tools and our Gradio sentiment analysis tool.
Loading the tools on the Agent is literally just connecting to the MCP servers we want (in parallel, because it's so easy to do in JS):

```ts
async loadTools(): Promise<void> {
await Promise.all(this.servers.map((s) => this.addMcpServer(s)));
}
```

We add two extra tools (outside of MCP) that can be used by the LLM for our Agent's control flow:

```ts
const taskCompletionTool: ChatCompletionInputTool = {
type: "function",
function: {
name: "task_complete",
description: "Call this tool when the task given by the user is complete",
parameters: {
type: "object",
properties: {},
},
},
};
const askQuestionTool: ChatCompletionInputTool = {
type: "function",
function: {
name: "ask_question",
description: "Ask a question to the user to get more info required to solve or clarify their problem.",
parameters: {
type: "object",
properties: {},
},
},
};
const exitLoopTools = [taskCompletionTool, askQuestionTool];
```

When calling any of these tools, the Agent will break its loop and give control back to the user for new input.
Behold our complete while loop.
The gist of our Agent's main while loop is that we simply iterate with the LLM, alternating between tool calling and feeding it the tool results, until the LLM starts to respond with two non-tool messages in a row.
This is the complete while loop:
```ts
let numOfTurns = 0;
let nextTurnShouldCallTools = true;
while (true) {
try {
yield* this.processSingleTurnWithTools(this.messages, {
exitLoopTools,
exitIfFirstChunkNoTool: numOfTurns > 0 && nextTurnShouldCallTools,
abortSignal: opts.abortSignal,
});
} catch (err) {
if (err instanceof Error && err.message === "AbortError") {
return;
}
throw err;
}
numOfTurns++;
const currentLast = this.messages.at(-1)!;
if (
currentLast.role === "tool" &&
currentLast.name &&
exitLoopTools.map((t) => t.function.name).includes(currentLast.name)
) {
return;
}
if (currentLast.role !== "tool" && numOfTurns > MAX_NUM_TURNS) {
return;
}
if (currentLast.role !== "tool" && nextTurnShouldCallTools) {
return;
}
if (currentLast.role === "tool") {
nextTurnShouldCallTools = false;
} else {
nextTurnShouldCallTools = true;
}
}
```

Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server.
To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier, we just need to add it to our list of servers. Here's how we can modify our agent configuration:

```ts
const agent = new Agent({
provider: process.env.PROVIDER ?? "nebius",
model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
apiKey: process.env.HF_TOKEN,
servers: [
// ... existing servers ...
{
command: "npx",
args: [
"mcp-remote",
"http://localhost:7860/gradio_api/mcp/sse" // Your Gradio MCP server
]
}
],
});
```

Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:
- read text from a file using the filesystem server,
- analyze its sentiment with our Gradio tool,
- and write the results back to a new file.
Here's what a conversation with our agent might look like:

```
User: Read the file "feedback.txt" from my Desktop and analyze its sentiment
Agent: I'll help you analyze the sentiment of the feedback file. Let me break this down into steps:
1. First, I'll read the file using the filesystem tool
2. Then, I'll analyze its sentiment using the sentiment analysis tool
3. Finally, I'll write the results to a new file
[Agent proceeds to use the tools and provide the analysis]
```

When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to update the server URL in your agent configuration to point to your deployed Space:

```ts
{
command: "npx",
args: [
"mcp-remote",
"https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse"
]
}
```

This allows your agent to use the sentiment analysis tool from anywhere, not just locally!
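To put it all together, a launch script could look roughly like the sketch below. The agent.run() call is an assumption about the streaming interface wrapping the while loop shown earlier; check the actual Agent.ts for the exact method name and chunk format:

```ts
const agent = new Agent({
  provider: process.env.PROVIDER ?? "nebius",
  model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN,
  servers: [
    {
      command: "npx",
      args: [
        "mcp-remote",
        "https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse",
      ],
    },
  ],
});

// Connect to the deployed Space and register its tools
await agent.loadTools();

// Assumed streaming interface around the while loop shown above
for await (const chunk of agent.run("Analyze the sentiment of: 'I loved this course!'")) {
  console.log(chunk);
}
```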