Understanding MCP Capabilities

MCP Servers expose a variety of capabilities to Clients through the communication protocol. These capabilities fall into four main categories, each with distinct characteristics and use cases. Let’s explore these core primitives that form the foundation of MCP’s functionality.

In this section, we’ll show examples as framework-agnostic Python functions. This keeps the focus on the concepts and how they work together, rather than on the complexities of any particular framework.

In the coming units, we’ll show how these concepts are implemented in MCP-specific code.

Tools

Tools are executable functions or actions that the AI model can invoke through the MCP protocol.

Example: A weather tool that fetches current weather data for a given location:

```python
def get_weather(location: str) -> dict:
    """Get the current weather for a specified location."""
    # Connect to a weather API and fetch data for the location
    return {
        "temperature": 72,
        "conditions": "Sunny",
        "humidity": 45
    }
```
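When the model decides to use this tool, the Client sends a `tools/call` request to the Server. As a sketch, the method name comes from the MCP specification, while the request id and arguments below are illustrative:

```python
import json

# Sketch of the JSON-RPC request a Client sends to call a tool.
# "tools/call" is defined by the MCP specification; the id and the
# argument values here are illustrative.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"location": "San Francisco"},
    },
}

print(json.dumps(tool_call_request, indent=2))
```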

Resources

Resources provide read-only access to data sources, allowing the AI model to retrieve context without executing complex logic.

Example: A resource that provides access to file contents:

```python
def read_file(file_path: str) -> str:
    """Read the contents of a file at the specified path."""
    with open(file_path, 'r') as f:
        return f.read()
```
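Unlike tools, resources in MCP are addressed by URI and fetched with a `resources/read` request rather than invoked as functions. A sketch of such a request, where the method name follows the MCP specification and the file URI is illustrative:

```python
import json

# Sketch of the JSON-RPC request a Client sends to read a resource.
# "resources/read" is defined by the MCP specification; the URI below
# is an illustrative file URI.
read_resource_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "file:///project/README.md"},
}

print(json.dumps(read_resource_request, indent=2))
```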

Prompts

Prompts are predefined templates or workflows that guide the interaction between the user, the AI model, and the Server’s capabilities.

Example: A prompt template for generating a code review:

````python
def code_review(code: str, language: str) -> list:
    """Generate a code review for the provided code snippet."""
    return [
        {
            "role": "system",
            "content": f"You are a code reviewer examining {language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement."
        },
        {
            "role": "user",
            "content": f"Please review this {language} code:\n\n```{language}\n{code}\n```"
        }
    ]
````
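When a user selects this template, the Client retrieves it with a `prompts/get` request. A sketch, assuming the template is registered under the name `code_review` (the id and argument values are illustrative):

```python
import json

# Sketch of the JSON-RPC request a Client sends when the user selects
# a prompt. "prompts/get" is defined by the MCP specification; the
# arguments mirror the code_review template's parameters.
get_prompt_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "prompts/get",
    "params": {
        "name": "code_review",
        "arguments": {"language": "python", "code": "print('hello')"},
    },
}

print(json.dumps(get_prompt_request, indent=2))
```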

Sampling

Sampling allows Servers to request the Client (specifically, the Host application) to perform LLM interactions.

Example: A Server might request the Client to analyze data it has processed:

```python
def request_sampling(messages: list, system_prompt: str | None = None, include_context: str = "none") -> dict:
    """Request LLM sampling from the client."""
    # In a real implementation, this would send a request to the client
    return {
        "role": "assistant",
        "content": "Analysis of the provided data..."
    }
```

The sampling flow follows these steps:

  1. Server sends a sampling/createMessage request to the client
  2. Client reviews the request and can modify it
  3. Client samples from an LLM
  4. Client reviews the completion
  5. Client returns the result to the server

This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. When implementing sampling, it’s important to provide clear, well-structured prompts and include relevant context.
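Concretely, step 1 above is a `sampling/createMessage` request from the Server. A sketch of its payload, where the field names follow the MCP specification and the message content and token limit are illustrative:

```python
import json

# Sketch of the sampling/createMessage request a Server sends to the
# Client. Field names follow the MCP specification; the message text,
# system prompt, and maxTokens value are illustrative.
create_message_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Analyze this data..."},
            }
        ],
        "systemPrompt": "You are a helpful data analyst.",
        "includeContext": "none",
        "maxTokens": 500,
    },
}

print(json.dumps(create_message_request, indent=2))
```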

How Capabilities Work Together

Let’s look at how these capabilities work together to enable complex interactions. In the table below, we’ve outlined the capabilities, who controls them, the direction of control, and some other details.

| Capability | Controlled By | Direction | Side Effects | Approval Needed | Typical Use Cases |
|---|---|---|---|---|---|
| Tools | Model (LLM) | Client → Server | Yes (potentially) | Yes | Actions, API calls, data manipulation |
| Resources | Application | Client → Server | No (read-only) | Typically no | Data retrieval, context gathering |
| Prompts | User | Server → Client | No | No (selected by user) | Guided workflows, specialized templates |
| Sampling | Server | Server → Client → Server | Indirectly | Yes | Multi-step tasks, agentic behaviors |

These capabilities are designed to work together in complementary ways:

  1. A user might select a Prompt to start a specialized workflow
  2. The Prompt might include context from Resources
  3. During processing, the AI model might call Tools to perform specific actions
  4. For complex operations, the Server might use Sampling to request additional LLM processing
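To make the flow concrete, here is a toy walk-through of these four steps in plain Python, with stub functions standing in for real MCP calls (every name and return value here is illustrative, not part of any MCP API):

```python
# A toy walk-through of the four steps above. The stubs stand in for
# real MCP requests; all names and payloads are illustrative.

def get_prompt(name: str) -> list:
    # Step 1: the user selects a prompt to start a workflow.
    return [{"role": "user", "content": f"Run the '{name}' workflow."}]

def read_resource(uri: str) -> str:
    # Step 2: the prompt pulls in context from a resource.
    return f"<contents of {uri}>"

def call_tool(name: str, arguments: dict) -> dict:
    # Step 3: the model calls a tool to perform an action.
    return {"tool": name, "result": "ok", "arguments": arguments}

def request_sampling(messages: list) -> dict:
    # Step 4: the server asks the client for extra LLM processing.
    return {"role": "assistant", "content": "Summary of the results..."}

messages = get_prompt("code_review")
messages.append({"role": "user", "content": read_resource("file:///app.py")})
tool_result = call_tool("run_linter", {"path": "app.py"})
messages.append({"role": "user", "content": str(tool_result)})
final = request_sampling(messages)
print(final["content"])
```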

The distinction between these primitives provides a clear structure for MCP interactions, enabling AI models to access information, perform actions, and engage in complex workflows while maintaining appropriate control boundaries.

Discovery Process

One of MCP’s key features is dynamic capability discovery. When a Client connects to a Server, it can query the available Tools, Resources, and Prompts through dedicated list methods: `tools/list`, `resources/list`, and `prompts/list`.

This dynamic discovery mechanism allows Clients to adapt to the specific capabilities each Server offers without requiring hardcoded knowledge of the Server’s functionality.
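As a sketch, these are the discovery requests a Client might issue right after connecting; the method names come from the MCP specification, while the request ids are illustrative:

```python
import json

# Sketch of the discovery requests a Client sends after connecting.
# The method names are defined by the MCP specification; the ids are
# illustrative.
discovery_requests = [
    {"jsonrpc": "2.0", "id": n, "method": method, "params": {}}
    for n, method in enumerate(
        ["tools/list", "resources/list", "prompts/list"], start=1
    )
]

for request in discovery_requests:
    print(json.dumps(request))
```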

Conclusion

Understanding these core primitives is essential for working with MCP effectively. By providing distinct types of capabilities with clear control boundaries, MCP enables powerful interactions between AI models and external systems while maintaining appropriate safety and control mechanisms.

In the next section, we’ll explore how Gradio integrates with MCP to provide easy-to-use interfaces for these capabilities.
