To improve AI interoperability, OpenAI has announced its support for Anthropic’s Model Context Protocol (MCP), an open-source standard designed to streamline the integration between AI assistants and various data systems. This collaboration marks a pivotal step in creating a unified framework for AI applications to access and utilize external data sources effectively.
Developed by Anthropic, MCP is an open standard that facilitates seamless connections between AI models and external data repositories, business tools, and development environments. By providing a standardized protocol, MCP eliminates the need for custom integrations, allowing AI systems to access necessary context dynamically. This approach enhances the relevance and accuracy of AI-generated responses by enabling real-time data retrieval and interaction.
Here’s a much simpler way to understand MCP:

If you’re building with an AI model, you’ve probably run into this: every model and every data source speaks its own dialect, so connecting an assistant to a Git repository, a database, or an internal tool means writing custom glue code for each pairing. Add a second model or a second tool and things get messy fast. It’s frustrating and takes way too much time.

That’s where MCP (Model Context Protocol) comes in: it defines one standard way for models to discover and call external tools and data sources. A tool exposes a single MCP server, and any MCP-aware model can connect to it without bespoke integration code. MCP saves you time, simplifies your code, and makes multi-LLM work way easier.
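To see why a shared protocol helps, here is a back-of-the-envelope sketch (the model and tool names below are purely illustrative): without a standard, every model-tool pairing needs its own adapter, while with MCP each side implements the protocol exactly once.

```python
models = ["gpt-4o", "claude-3-7-sonnet", "gemini-2.0"]
tools = ["git", "postgres", "slack", "google-drive"]

# Without a shared protocol: one custom adapter per (model, tool) pair.
custom_adapters = len(models) * len(tools)

# With MCP: each model ships one MCP client, each tool one MCP server.
mcp_implementations = len(models) + len(tools)

print(custom_adapters)      # 12
print(mcp_implementations)  # 7
```

The gap only widens as you add models and tools: the custom-adapter count grows multiplicatively, the MCP count additively.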
MCP 🤝 OpenAI Agents SDK

You can now connect your Model Context Protocol servers to Agents: https://t.co/6jvLt10Qh7

We’re also working on MCP support for the OpenAI API and ChatGPT desktop app—we’ll share some more news in the coming months.

— OpenAI Developers (@OpenAIDevs) March 26, 2025
OpenAI’s decision to adopt MCP underscores its commitment to enhancing the functionality and interoperability of its AI products. CEO Sam Altman highlighted the enthusiasm for MCP, stating that support is being integrated across OpenAI’s offerings. The integration is already available in the Agents SDK, with forthcoming support planned for the ChatGPT desktop app and the Responses API.
Since its inception, MCP has garnered support from various organizations. Companies such as Block, Apollo, Replit, Codeium, and Sourcegraph have integrated MCP into their platforms, recognizing its potential to standardize AI-data interactions.
The adoption of MCP by industry leaders like OpenAI and Microsoft signifies a broader trend towards standardization in AI integrations. As more organizations embrace MCP, the ecosystem is expected to evolve, offering developers a robust framework for building AI applications that can seamlessly interact with diverse data sources.
Here’s how you can use MCP:
First, open the OpenAI Agents SDK documentation and go to its Model Context Protocol (MCP) section.
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
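Under the hood, MCP messages are JSON-RPC 2.0, which is where the "USB-C" standardization comes from: a client asks any MCP server for its tool catalogue with the same `tools/list` request, and every server answers in the same shape. The sketch below illustrates that shape; the `git_log` tool shown is made up for illustration, not a real server's output.

```python
import json

# Every MCP client sends the same discovery request, regardless of the server:
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# And every MCP server describes its tools in the same shape
# (name, description, and a JSON Schema for the inputs):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "git_log",
                "description": "Show the commit history of a repository",
                "inputSchema": {
                    "type": "object",
                    "properties": {"repo_path": {"type": "string"}},
                },
            }
        ]
    },
}

print(json.dumps(request))
print([t["name"] for t in response["result"]["tools"]])
```

Because the request and response shapes are fixed by the protocol, a client written once can talk to a Git server, a database server, or any other MCP server without changes.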
Let’s begin with the implementation:
In this example, the agent will pull information from the Langmanus repository, so clone that repository on your system and keep its path handy.
Clone the repository: openai-agents-python
Then, set your OpenAI API key as an environment variable:
export OPENAI_API_KEY=sk-XXXXXX
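If the example later fails with an authentication error, the key usually was not exported correctly. A quick sanity check (the helper below is just for illustration) is that the variable exists and carries the `sk-` prefix OpenAI keys use:

```python
def has_openai_key(env) -> bool:
    # OpenAI API keys start with "sk-"; anything else suggests the export failed.
    return env.get("OPENAI_API_KEY", "").startswith("sk-")

print(has_openai_key({"OPENAI_API_KEY": "sk-XXXXXX"}))  # True
print(has_openai_key({"OPENAI_API_KEY": "SK-XXXXXX"}))  # False: the prefix is case-sensitive
```

In a real session you would pass `os.environ` instead of the literal dictionaries shown here.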
After this, go to the openai-agents-python directory
cd openai-agents-python/
Then run this command:
uv run python examples/mcp/git_example/main.py
Finally, put the Repository path:
Please enter the path to the git repository: /home/pankaj/langmanus
The most frequent contributor is **Henry Li**, with multiple commits in the history provided.
--------------------------------------------
Running: Summarize the last change in the repository.
The last change in the repository was made by MSc. João Gabriel Lima on March 23, 2025. The commit hash is `646c3e06c4bd58e252967c8b1065c7a0b0f0309b`.
### Commit Message
- **Type:** feat
- **Summary:** ChatLiteLLMV2 missing function (#103)
#### Details:
- Added parameter filtering and supported parameters methods in ChatLiteLLMV2.
- This change was repeated several times in the commit message details, highlighting its importance.
Here’s the main.py used above:
import asyncio
import shutil

from agents import Agent, Runner, trace
from agents.mcp import MCPServer, MCPServerStdio


async def run(mcp_server: MCPServer, directory_path: str):
    agent = Agent(
        name="Assistant",
        instructions=f"Answer questions about the git repository at {directory_path}, use that for repo_path",
        mcp_servers=[mcp_server],
    )

    message = "Who's the most frequent contributor?"
    print("\n" + "-" * 40)
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    message = "Summarize the last change in the repository."
    print("\n" + "-" * 40)
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    # Ask the user for the directory path
    directory_path = input("Please enter the path to the git repository: ")

    async with MCPServerStdio(
        cache_tools_list=True,  # Cache the tools list, for demonstration
        params={"command": "uvx", "args": ["mcp-server-git"]},
    ) as server:
        with trace(workflow_name="MCP Git Example"):
            await run(server, directory_path)


if __name__ == "__main__":
    if not shutil.which("uvx"):
        raise RuntimeError("uvx is not installed. Please install it with `pip install uvx`.")

    asyncio.run(main())
OpenAI’s adoption of Anthropic’s Model Context Protocol represents a significant advancement in the quest for standardized, efficient, and secure AI-data integrations. By embracing MCP, OpenAI not only enhances the capabilities of its own AI systems but also contributes to the broader movement towards collaborative innovation in the AI industry. As MCP continues to gain traction, it promises to simplify the development of context-aware AI applications, ultimately leading to more intelligent and responsive AI assistants.