In this section, we’ll connect MCP with local and open-source models using Continue, a tool for building AI coding assistants that works with local tools like Ollama.
You can install Continue from the Continue extension page in the Visual Studio Marketplace.
With Continue configured, we’ll move on to setting up Ollama to pull local models.
Ollama is an open-source tool that allows users to run large language models (LLMs) locally on their own computers. To use Ollama, install it and download the model you want to run with the `ollama pull` command. For example, you can download the Llama 3.1 8B model with:

```bash
ollama pull llama3.1:8b
```
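Once the pull completes, a quick way to confirm the model is available is to list your local models and open an interactive session with it (both are standard Ollama commands):

```bash
# List the models that have been downloaded locally
ollama list

# Start an interactive chat with the model (exit with /bye)
ollama run llama3.1:8b
```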
Details on all available model providers can be found in the Continue documentation.
It is important that we use models that have tool calling as a built-in feature, e.g. Codestral, Qwen, and Llama 3.1.x.
1. Create a folder called `.continue/models` at the top level of your workspace
2. Add a file called `llama-max.yaml` to this folder
3. Write the following contents to `llama-max.yaml` and save

```yaml
name: Ollama Llama model
version: 0.0.1
schema: v1
models:
  - provider: ollama
    model: llama3.1:8b
    defaultCompletionOptions:
      contextLength: 128000
    name: a llama3.1:8b max
    roles:
      - chat
      - edit
```

By default, each model has a maximum context length; in this case it is 128,000 tokens. This setup makes fuller use of that context window, which is needed because performing multiple MCP requests requires handling more tokens.
Tools provide a powerful way for models to interface with the external world.
They are provided to the model as a JSON object with a name and an arguments
schema. For example, a read_file tool with a filepath argument will give the
model the ability to request the contents of a specific file.
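As a rough sketch (the exact fields can vary between servers and clients), the definition for such a read_file tool might look like this:

```json
{
  "name": "read_file",
  "description": "Read the contents of a file in the workspace",
  "inputSchema": {
    "type": "object",
    "properties": {
      "filepath": {
        "type": "string",
        "description": "Path of the file to read, relative to the workspace root"
      }
    },
    "required": ["filepath"]
  }
}
```

When the model decides to use the tool, it replies with a tool call that supplies a concrete value for filepath, and the client executes the request on its behalf.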
The following handshake describes how the Agent uses tools:

1. In Agent mode, the available tools are sent along with user chat requests.
2. The model may respond with a tool call.
3. The user grants permission to run the tool; this step is skipped when the tool's policy is set to Automatic.
4. Continue runs the tool, either through built-in functionality or via an MCP server, and sends the result back to the model.
5. The model responds, possibly with further tool calls, until the task is complete.

Continue supports multiple local model providers. You can use different models for different tasks or switch models as needed. This section focuses on local-first solutions, but Continue does work with popular providers like OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can also run your own model provider.
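For example, if you wanted a separate model dedicated to autocomplete, you could add a second file under `.continue/models` along the lines of the sketch below. The qwen2.5-coder:1.5b model tag and the autocomplete role are illustrative assumptions; check the Continue documentation for the providers and roles your version supports.

```yaml
name: Ollama autocomplete model
version: 0.0.1
schema: v1
models:
  - provider: ollama
    # Assumed model tag; pull it first with `ollama pull qwen2.5-coder:1.5b`
    model: qwen2.5-coder:1.5b
    name: qwen2.5-coder autocomplete
    roles:
      - autocomplete
```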
Now that we have everything set up, let’s add an existing MCP server. Below is a quick example of setting up a new MCP server for use in your assistant:
1. Create a folder called `.continue/mcpServers` at the top level of your workspace
2. Add a file called `playwright-mcp.yaml` to this folder
3. Write the following contents to `playwright-mcp.yaml` and save

```yaml
name: Playwright mcpServer
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser search
    command: npx
    args:
      - "@playwright/mcp@latest"
```

Now test your MCP server with the following prompt:
1. Using playwright, navigate to https://news.ycombinator.com.
2. Extract the titles and URLs of the top 4 posts on the homepage.
3. Create a file named hn.txt in the root directory of the project.
4. Save this list as plain text in the hn.txt file, with each line containing the title and URL separated by a hyphen.
Do not output code or instructions; just complete the task and confirm when it is done.

The result will be a generated file called hn.txt in the current working directory.
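Once the agent confirms completion, you can check the output from a terminal (hn.txt is the filename requested in the prompt above):

```bash
# Print the titles and URLs the agent saved
cat hn.txt
```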

By combining Continue with local models like Llama 3.1 and MCP servers, you’ve unlocked a powerful development workflow that keeps your code and data private while leveraging cutting-edge AI capabilities.
This setup gives you the flexibility to customize your AI assistant with specialized tools, from web automation to file management, all running entirely on your local machine. Ready to take your development workflow to the next level? Start by experimenting with different MCP servers from the Continue Hub MCP explore page and discover how local AI can transform your coding experience.