Large language models have become increasingly sophisticated in recent years, yet they've remained limited in their ability to interact with the world around them.
When you want your AI model to check your calendar, send an email, or analyze data from your CRM, you've typically needed custom integrations for each tool. This fragmentation has limited AI's potential and created headaches for developers.
Enter Model Context Protocol (MCP): an open standard introduced by Anthropic in November 2024 that is rapidly changing how AI systems connect to external tools and data. In just a few months, MCP has gained significant traction and is on track to become the universal standard for AI connectivity.
This article will unpack what MCP is, why it matters, and how you can start using it to build more capable AI applications.
What is Model Context Protocol?
Model Context Protocol is an open standard that defines how AI applications connect to external data sources and tools. Think of MCP as a universal connector that allows any AI application to plug into virtually any external system without custom code.
As a protocol rather than a framework, MCP establishes the rules for communication between AI systems and external tools. It doesn't dictate implementation details but instead provides a standardized way for these systems to exchange information and functionality.
MCP was initially created by Anthropic (the company behind Claude) to help their AI assistant better interact with user data and applications. However, it was designed from the start to be open and model-agnostic, meaning it works with any AI system, whether that's Claude, GPT models, or open-source LLMs.
Unlike frameworks like LangChain or LlamaIndex, which provide specific implementations for building AI applications, MCP is a communication standard.
This distinction is crucial since frameworks often lead to vendor lock-in, while protocols enable interoperability across an entire ecosystem.
Why MCP matters
To understand MCP's significance, it helps to look at the problem it solves.
Before MCP, if you wanted your AI assistant to access your email, check your calendar, and update your task list, you'd need to write custom code for each integration.
Each external service would require its own authentication flow, data format handling, and error management. This approach doesn't scale: every new tool means more custom code to write and maintain.
MCP transforms this dynamic by providing:
- Standardized connectivity: A single protocol for connecting to any MCP-compatible server
- Dynamic discovery: AI models can automatically discover what tools and resources are available
- Two-way communication: Real-time, bidirectional interaction between AI models and external systems
- Uniform access patterns: Consistent ways to access data regardless of where it's stored
This standardization dramatically simplifies the process of creating AI applications that can interact with the world. Instead of building custom integrations for each service, developers can focus on creating valuable experiences while leveraging the growing ecosystem of MCP servers.
The impact of this shift becomes clearer when we examine MCP's architecture.
Understanding MCP architecture
MCP follows a client-server architecture with three main components:
- Host: The AI-powered application the user interacts with (like Claude Desktop or an AI-enhanced IDE such as Cursor).
- Client: A connector within the host that maintains a one-to-one connection with a single server.
- Server: A program that exposes tools, resources, and prompts to clients.
In a typical setup, the host application initializes one or more clients, each of which connects to a different server. For example, Claude Desktop might connect to servers for your file system, email, and calendar.
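To make this concrete, here's roughly what the host-side wiring can look like. The snippet below follows the `claude_desktop_config.json` format used by Claude Desktop; the server names and the directory path are illustrative placeholders:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents"
      ]
    },
    "calendar": {
      "command": "python",
      "args": ["calendar_server.py"]
    }
  }
}
```

Each entry spawns a separate server process, and the host creates one client per server.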
Each MCP server provides three core primitives:
1. Tools
Tools are executable functions that an AI model can call to perform actions. For example, a Gmail MCP server might provide tools like `send_email`, `list_unread`, and `search_emails`. These are similar to function calling in traditional AI APIs, but within a standardized framework.
Here's what a simple tool definition might look like in Python using the MCP SDK:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("email_server")

@mcp.tool()
def send_email(to: str, subject: str, body: str) -> str:
    """Sends an email to the specified recipient.

    Args:
        to: Email address of the recipient
        subject: Subject line of the email
        body: Content of the email

    Returns:
        A confirmation message with the email ID
    """
    # Implementation to send email via SMTP or API
    # (simplified for example)
    email_id = "123456"
    return f"Email sent successfully (ID: {email_id})"
```
2. Resources
Resources are read-only data sources that provide context to the AI model. These could be files, database records, logs, or any other information that helps the model understand the user's environment.
For instance, a GitHub MCP server might expose repository files as resources:
@mcp.resource("file://repository.md")
async def get_repository_info() -> str:
"""Returns information about the user's repositories"""
# Fetch data from GitHub API
repo_data = """# Your GitHub Repositories
- project-alpha: Python library for data processing
- web-dashboard: React frontend for analytics platform
- api-service: Backend API written in Go
"""
return repo_data
3. Prompts
Prompts are pre-defined templates that help users accomplish specific tasks. They provide structured guidance for how the AI should combine the available tools and resources.
```python
@mcp.prompt()
def create_pull_request(repo_name: str, branch_name: str, description: str) -> str:
    """Creates a prompt that guides the AI in creating a pull request"""
    return f"""Please create a pull request for the {branch_name} branch in the {repo_name} repository.
Use the following description: {description}

First, check if there are any uncommitted changes using the check_status tool.
If there are uncommitted changes, commit them with an appropriate message.
Then create the pull request using the create_pr tool."""
```
How MCP works
At its core, MCP is built on JSON-RPC 2.0, a lightweight protocol for remote procedure calls. This foundation enables standardized communication between clients and servers using three main message types:
- Requests: Messages sent to initiate an operation.
- Responses: Messages sent in reply to requests.
- Notifications: One-way messages that don't require a response.
Let's look at how these messages might flow in a typical MCP interaction:
```json
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "method": "tools/call",
  "params": {
    "name": "send_email",
    "arguments": {
      "to": "colleague@example.com",
      "subject": "Project update",
      "body": "Here's the latest update on our project..."
    }
  }
}
```
The server would then process this request and return a response:
```json
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Email sent successfully (ID: 123456)"
      }
    ]
  }
}
```
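Notifications carry no `id` because no reply is expected. For example, a server can tell the client that its tool list has changed, prompting the client to fetch it again:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```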
MCP supports two primary transport mechanisms:
- Standard I/O (stdio): Communication via standard input/output streams, ideal for local servers.
- Server-Sent Events (SSE) over HTTP: Communication over the web for remote servers (see the sketch after this list).
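With the Python SDK, the transport is chosen when the server starts. A minimal sketch (the server name here is arbitrary):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example")

if __name__ == "__main__":
    # Local servers talk over stdin/stdout (the default)
    mcp.run(transport="stdio")
    # For remote access, serve over SSE instead:
    # mcp.run(transport="sse")
```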
The lifecycle of an MCP connection follows a consistent pattern:
- Initialization: Client and server exchange capabilities and negotiate protocol versions (an example request follows this list).
- Operation: Normal communication occurs, with tools being executed and resources accessed.
- Termination: The connection is gracefully closed when no longer needed.
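The initialization step is itself a JSON-RPC request. A client opens the session with something like the following, and the server replies with its own capabilities (the client name and version are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```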
Building a simple MCP server and client
Let's walk through creating a basic MCP server and client to demonstrate how MCP works in practice. We'll build a simple calculator server that provides addition and multiplication tools.
First, we'll need to install the MCP Python SDK:
```sh
pip install "mcp[cli]"
```
Now, let's create our server:
```python
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Adds two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        The sum of a and b
    """
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiplies two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        The product of a and b
    """
    return a * b

# The server automatically runs when this file is executed with the MCP CLI
```
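Before writing a client, you can poke at the server interactively using the MCP Inspector that ships with the CLI:

```sh
mcp dev calculator_server.py
```

This opens a local web UI where you can list the tools and invoke them by hand.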
Next, let's create a client that will use this calculator server:
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Define server parameters
server_params = StdioServerParameters(
    command="mcp",  # The MCP CLI executable
    args=["run", "calculator_server.py"]  # Arguments to run our server
)

async def run():
    # Start the server and establish communication channels
    async with stdio_client(server_params) as (read, write):
        # Create a client session
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # List available tools
            response = await session.list_tools()
            print("Available tools:")
            for tool in response.tools:
                print(f"- {tool.name}: {tool.description}")

            # Call the add tool
            result = await session.call_tool(
                "add",
                arguments={"a": 5, "b": 3}
            )
            print(f"\nResult of 5 + 3: {result.content[0].text}")

            # Call the multiply tool
            result = await session.call_tool(
                "multiply",
                arguments={"a": 4, "b": 7}
            )
            print(f"Result of 4 * 7: {result.content[0].text}")

if __name__ == "__main__":
    asyncio.run(run())
```
To run the client:
```sh
python calculator_client.py
```

You should see output similar to this:

```text
[04/24/25 08:52:53] INFO  Processing request of type ListToolsRequest  server.py:534
Available tools:
- add: Adds two numbers together.
    Args:
        a: First number
        b: Second number
    Returns:
        The sum of a and b
- multiply: Multiplies two numbers together.
    Args:
        a: First number
        b: Second number
    Returns:
        The product of a and b
INFO  Processing request of type CallToolRequest  server.py:534

Result of 5 + 3: 8.0
INFO  Processing request of type CallToolRequest  server.py:534
Result of 4 * 7: 28.0
```
This example shows a basic MCP interaction. In a real-world application, the AI model would determine which tools to call based on user input and context.
Real-world applications of MCP
MCP unlocks new possibilities for AI integration. Here are some compelling use cases:
AI-powered development environments
Integrated Development Environments (IDEs) like Cursor and Replit have already adopted MCP to give their AI assistants access to code repositories, file systems, and deployment tools. This enables more contextual code suggestions and automated workflows.
For example, when a developer asks, "Refactor this function to improve performance," the AI can:
- Access the current file using an MCP file server
- Read related files to understand dependencies
- Execute tests to verify changes work
- Commit the changes using a git MCP server
All without requiring custom integration code for each of these operations.
Enterprise data integration
For businesses, MCP provides a secure way to give AI access to internal data sources. By implementing MCP servers for CRMs, knowledge bases, and enterprise applications, companies can build AI assistants that provide accurate, contextual responses.
For instance, a customer support AI could:
- Access customer data from a CRM server
- Look up product documentation via a knowledge base server
- Check order status through an ERP server
- Create support tickets through a ticketing system server
This integration enables more personalized and effective AI interactions without exposing sensitive data directly to the AI model.
Cross-system workflows
One of the most powerful applications of MCP is enabling multi-step workflows that span multiple systems. Consider a meeting scheduling assistant that needs to:
- Check your calendar for availability
- Send emails to participants
- Book a conference room
- Create an agenda document
- Set up a video conference link
With MCP, each of these steps could be handled by a different server, all coordinated by the AI assistant without requiring custom integration code.
Getting started with MCP
If you're interested in experimenting with MCP, here's how to get started:
Using existing MCP servers
The simplest way to start is by using pre-built MCP servers. The MCP ecosystem is growing rapidly, with servers available for popular services like:
- Google Workspace (Gmail, Drive, Calendar)
- GitHub
- Slack
- File systems
- Databases
You can find these servers in repositories like the official modelcontextprotocol/servers collection on GitHub, as well as in community-maintained directories.
To use one of these servers, you'll typically:
- Clone the repository
- Install its dependencies
- Configure authentication
- Start the server (see the example after this list)
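Some published servers can be run without cloning anything. For example, the official filesystem server ships as an npm package (this assumes Node.js is installed; the directory path is a placeholder):

```sh
npx -y @modelcontextprotocol/server-filesystem ~/Documents
```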
Building your own MCP server
If you need to integrate with a system that doesn't have an MCP server yet, you can build your own. The MCP SDK makes this relatively straightforward:
```python
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("my_custom_service")

# Define tools
@mcp.tool()
def my_custom_tool(param1: str, param2: int) -> str:
    """Description of what this tool does.

    Args:
        param1: Description of param1
        param2: Description of param2

    Returns:
        Description of the return value
    """
    # Implementation
    return f"Result: {param1} - {param2}"

# Define resources
@mcp.resource("file://my_data.txt")
async def get_my_data() -> str:
    """Provides custom data to the model"""
    return "This is my custom data that helps provide context."

# Define prompts
@mcp.prompt()
def my_custom_prompt(variable: str) -> str:
    """Guides the AI in using my custom service"""
    return f"Use the my_custom_tool to process {variable}."
```
Integrating MCP with AI models
To use MCP with an AI model, you'll need to:
- Establish connections to MCP servers
- List available tools, resources, and prompts
- Execute tools based on the AI model's decisions
- Provide resource content to the model as context
Here's a simplified example using Anthropic's Claude:
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import anthropic

# Define server parameters
server_params = StdioServerParameters(
    command="mcp",
    args=["run", "my_custom_server.py"]
)

async def process_query(query: str):
    # Start the server and establish connection
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize connection
            await session.initialize()

            # Get available tools
            response = await session.list_tools()
            available_tools = [{
                "name": tool.name,
                "description": tool.description,
                "input_schema": tool.inputSchema
            } for tool in response.tools]

            # Create AI client
            client = anthropic.Anthropic()

            # Initial AI call with tools
            ai_response = client.messages.create(
                model="claude-3-sonnet-20240229",
                max_tokens=1000,
                messages=[{"role": "user", "content": query}],
                tools=available_tools
            )

            # Process response and handle tool calls
            for content in ai_response.content:
                if content.type == "tool_use":
                    # Execute the tool call through the MCP session
                    tool_result = await session.call_tool(
                        content.name,
                        content.input
                    )

                    # Continue the conversation: echo the assistant's
                    # tool_use turn back, then attach the tool output
                    # as a tool_result content block
                    messages = [
                        {"role": "user", "content": query},
                        {"role": "assistant", "content": ai_response.content},
                        {"role": "user", "content": [{
                            "type": "tool_result",
                            "tool_use_id": content.id,
                            "content": tool_result.content[0].text
                        }]}
                    ]

                    # Get final response
                    final_response = client.messages.create(
                        model="claude-3-sonnet-20240229",
                        max_tokens=1000,
                        messages=messages
                    )
                    return final_response.content[0].text
                else:
                    return content.text

# Example usage
if __name__ == "__main__":
    result = asyncio.run(process_query("Please use my_custom_tool with 'hello' and 42"))
    print(result)
```
Limitations and challenges
While MCP offers significant advantages, it's important to understand its current limitations:
Authentication
MCP doesn't prescribe a specific authentication mechanism, leaving it to each implementation. This can lead to inconsistency across different MCP servers and can create security challenges.
Some developers have addressed this by creating authentication layers on top of MCP, such as AgentAuth, but a standardized approach is still evolving.
Ecosystem maturity
As a relatively new standard, MCP's ecosystem is still developing. While adoption is accelerating, not all services have official MCP servers yet. This means you might need to build your own servers for certain integrations.
Local-first design
MCP was initially designed with local, desktop-based applications in mind. While it can work over HTTP for remote connections, some aspects of the protocol are better suited to local deployment.
Work is ongoing to make MCP more suitable for cloud-based and distributed architectures.
Development overhead
For simple use cases, implementing MCP might introduce more complexity than direct API calls. The benefits of MCP become more apparent when dealing with multiple integrations or when building for long-term flexibility.
The future of MCP
MCP is evolving rapidly, with several exciting developments on the horizon:
Remote servers and OAuth
Future MCP versions will include better support for remote hosting using Server-Sent Events (SSE) and built-in OAuth 2.0 for secure integration with third-party services.
Official MCP registry
An upcoming central registry will simplify discovery and verification of MCP servers, making it easier to find and use servers for different services.
Well-known endpoints
Standardized `.well-known/mcp` files will enable first-party server discovery, similar to how `robots.txt` works for web crawlers.
Additional features
Other planned enhancements include:
- Streaming support for large data transfers
- Stateless connections for better cloud compatibility
- Proactive server behavior (servers initiating actions)
- Improved namespacing for organization
As the MCP ecosystem grows, we can expect to see more sophisticated AI applications that seamlessly integrate with the digital world.
Final thoughts
Model Context Protocol represents a significant step forward in how AI systems interact with the world. By standardizing the connection between AI models and external tools and data, MCP removes a major barrier to building truly useful AI applications.
As adoption continues to grow, MCP has the potential to become the universal standard for AI connectivity – much like HTTP became for the web. Whether you're building AI-powered developer tools, enterprise applications, or personal assistants, MCP offers a cleaner, more flexible approach to integration.
The future of AI isn't just about smarter models – it's about models that can effectively leverage the tools and information around them. MCP is helping to make that future a reality.