Building the Gradio MCP Server

In this section, we’ll create our sentiment analysis MCP server using Gradio. This server will expose a sentiment analysis tool that can be used by both human users through a web interface and AI models through the MCP protocol.

Introduction to Gradio MCP Integration

Gradio provides a straightforward way to create MCP servers by automatically converting your Python functions into MCP tools. When you set mcp_server=True in launch(), Gradio:

  1. Automatically converts your functions into MCP Tools
  2. Maps input components to tool argument schemas
  3. Determines response formats from output components
  4. Sets up JSON-RPC over HTTP+SSE for client-server communication
  5. Creates both a web interface and an MCP server endpoint
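
As a quick illustration (separate from the sentiment app we build below), a hypothetical letter_counter function like the following would be exposed as an MCP tool with two string arguments and a numeric result; the function and names here are examples only:

import gradio as gr

def letter_counter(word: str, letter: str) -> int:
    """
    Count how many times a letter appears in a word.

    Args:
        word (str): The word to search in
        letter (str): The letter to count

    Returns:
        int: The number of occurrences of the letter
    """
    return word.lower().count(letter.lower())

# Two text inputs map to two string arguments in the tool schema;
# the number output defines the result type.
demo = gr.Interface(fn=letter_counter, inputs=["text", "text"], outputs="number")
demo.launch(mcp_server=True)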

Setting Up the Project

First, let’s create a new directory for our project and set up the required dependencies:

mkdir mcp-sentiment
cd mcp-sentiment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install "gradio[mcp]" textblob
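
To sanity-check the environment before writing the server (optional), you can run a quick TextBlob call in a Python shell; the example sentence is arbitrary:

from textblob import TextBlob

# Should print a Sentiment(polarity=..., subjectivity=...) named tuple
print(TextBlob("Gradio makes building MCP servers easy").sentiment)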

Creating the Server

Hugging Face Spaces requires an app.py file to build the Space, so the Python file must be named app.py.

Create a new file called app.py with the following code:

import gradio as gr
from textblob import TextBlob

def sentiment_analysis(text: str) -> dict:
    """
    Analyze the sentiment of the given text.

    Args:
        text (str): The text to analyze

    Returns:
        dict: A dictionary containing polarity, subjectivity, and assessment
    """
    blob = TextBlob(text)
    sentiment = blob.sentiment
    
    return {
        "polarity": round(sentiment.polarity, 2),  # -1 (negative) to 1 (positive)
        "subjectivity": round(sentiment.subjectivity, 2),  # 0 (objective) to 1 (subjective)
        "assessment": "positive" if sentiment.polarity > 0 else "negative" if sentiment.polarity < 0 else "neutral"
    }

# Create the Gradio interface
demo = gr.Interface(
    fn=sentiment_analysis,
    inputs=gr.Textbox(placeholder="Enter text to analyze..."),
    outputs=gr.JSON(),
    title="Text Sentiment Analysis",
    description="Analyze the sentiment of text using TextBlob"
)

# Launch the interface and MCP server
if __name__ == "__main__":
    demo.launch(mcp_server=True)

Understanding the Code

Let’s break down the key components:

  1. Function Definition: the sentiment_analysis function takes a string and returns a dictionary containing polarity, subjectivity, and an overall assessment. The type hints and the docstring's Args section are what Gradio uses to generate the MCP tool's schema and argument descriptions, so keep them accurate.

  2. Gradio Interface: gr.Interface wraps the function with a Textbox input and a JSON output component; these components also define the tool's input and output formats for MCP clients.

  3. MCP Server: passing mcp_server=True to launch() starts the MCP server alongside the web interface and exposes sentiment_analysis as a callable tool.

Running the Server

Start the server by running:

python app.py

You should see output indicating that both the web interface and MCP server are running. The web interface will be available at http://localhost:7860, and the MCP server at http://localhost:7860/gradio_api/mcp/sse.

Testing the Server

You can test the server in two ways:

  1. Web Interface: open http://localhost:7860 in your browser, enter some text, and submit; you should see the polarity, subjectivity, and assessment returned as JSON.

  2. MCP Schema: visit http://localhost:7860/gradio_api/mcp/schema to see the tool schema that MCP clients receive, including the sentiment_analysis tool and its argument descriptions (or fetch it from code, as shown below).
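
If you prefer to check the schema from code rather than the browser, a short script like the one below does the same thing (it assumes the requests package is installed, which is not part of our project dependencies):

import requests

# Fetch the tool schema that the MCP server advertises to clients
response = requests.get("http://localhost:7860/gradio_api/mcp/schema")
print(response.json())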

Troubleshooting Tips

  1. Type Hints and Docstrings: always provide type hints for parameters and return values, and include a docstring with an Args: block; Gradio relies on these to generate the tool's argument descriptions.

  2. String Inputs: when in doubt, accept arguments as str and convert them inside the function; strings are the most reliably handled input type for MCP clients.

  3. SSE Support: some MCP clients do not support SSE-based servers directly; in that case you can bridge the connection with mcp-remote (see the configuration sketch after this list).

  4. Connection Issues: if a client cannot connect, restart both the client and the server, and confirm the server is reachable at the /gradio_api/mcp/sse endpoint.
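
For clients that only support stdio transport, a typical bridge configuration looks like the sketch below; the server name "gradio" is arbitrary, and where this configuration lives depends on your MCP client:

{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:7860/gradio_api/mcp/sse"
      ]
    }
  }
}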

Deploying to Hugging Face Spaces

To make your server available to others, you can deploy it to Hugging Face Spaces:

  1. Create a new Space on Hugging Face: go to huggingface.co/spaces, click "Create new Space", choose Gradio as the SDK, and name the Space mcp-sentiment.

  2. Create a requirements.txt file:

gradio[mcp]
textblob
  3. Push your code to the Space:
git init
git add app.py requirements.txt
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-sentiment
git push -u origin main

Your MCP server will now be available at:

https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse

Next Steps

Now that we have our MCP server running, we’ll create clients to interact with it. In the next sections, we’ll:

  1. Create a HuggingFace.js-based client inspired by Tiny Agents
  2. Implement a SmolAgents-based Python client
  3. Test both clients with our deployed server

Let’s move on to building our first client!
