If you’ve been on LinkedIn or X recently, your feed is most likely full of posts about AI agents. Just yesterday OpenAI launched a few new tools for agents, including the Agents SDK.

In late November, Anthropic launched the Model Context Protocol (MCP). Together, these two launches are reshaping how developers build agents and agentic applications.

In this guide, we’ll break down what these two tools do, how they compare, and how you can use them together.

Hey everyone, Dan here from PromptHub. OpenAI just launched a whole bunch of new updates, and we're going to run through them together.

Defining Agents

OpenAI defines an agent as a system that can act independently to complete tasks. This is important because there's often confusion about what an agent actually is. A great reference on this is Anthropic’s blog post Building Effective Agents, which discusses how an agent differs from a standard predefined workflow.

New Built-In Tools

OpenAI has introduced three new built-in tools:

  • Web Search Tool: This tool allows models to retrieve real-time information from the internet. It's the same search that powers ChatGPT and uses a fine-tuned version of GPT-4o or GPT-4o mini.
  • File Search Tool: Enables users to upload and search documents with metadata filtering, making it easier to retrieve relevant information.
  • Computer Use Tool: Allows agents to control a computer, including interacting with applications that lack an API.

Having web search built-in is a big deal for anyone working with agents, as it removes the need to integrate third-party search APIs manually. Also, the ability to filter file searches based on metadata makes managing large datasets much more practical.
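
As a rough sketch of what that looks like in practice, here's web search called through the new Responses API. The tool type string and model name are as documented at launch, and the query is just an example, so treat the details as illustrative:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# Ask a question that needs fresh information; the model decides when to search.
response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="What did OpenAI announce for agent builders this week?",
)

print(response.output_text)  # final answer text, with inline citations
```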

The Responses API

OpenAI also introduced a new API: the Responses API. This is designed to handle multiple turns, tool calls, and different input modalities (text, images, audio). While it looks similar to chat completions, it provides more flexibility.
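
Here's a rough sketch of that flexibility, assuming a placeholder image URL: the first request mixes text and image input, and the follow-up reuses the conversation state via `previous_response_id` instead of resending the whole history.

```python
from openai import OpenAI

client = OpenAI()

# First turn: text plus an image in a single input list.
first = client.responses.create(
    model="gpt-4o",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What product is shown here?"},
                {"type": "input_image", "image_url": "https://example.com/shoe.jpg"},
            ],
        }
    ],
)

# Second turn: previous_response_id carries the earlier context forward.
follow_up = client.responses.create(
    model="gpt-4o",
    previous_response_id=first.id,
    input="Draft a one-sentence product description for it.",
)
print(follow_up.output_text)
```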

Multi-Agent Systems and the Agents SDK

For developers working with multiple agents, OpenAI announced the Agents SDK. Previously, OpenAI released Swarm as an experimental agent orchestration tool, but now they’re making it production-ready under the new name.

The Agents SDK allows:

  • Managing multiple agents with separate responsibilities.
  • Creating handoffs between agents (e.g., a customer support agent can transfer a request to a refund-processing agent).
  • Tracing and monitoring agent interactions.
  • Defining tools directly as Python functions with automatic JSON schema generation.

One key concept here is handoffs, where one agent passes control to another while maintaining conversation context. This setup keeps system prompts focused, preventing a single agent from handling too many responsibilities.
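
A minimal sketch of what that looks like with the Agents SDK (the refund logic is a stub, and the agent names and instructions are made up for illustration):

```python
from agents import Agent, Runner, function_tool

@function_tool
def issue_refund(order_id: str, amount: float) -> str:
    """Issue a refund (stub) and return a confirmation string."""
    return f"Refunded ${amount:.2f} for order {order_id}."

refund_agent = Agent(
    name="Refund agent",
    instructions="Handle refund requests using the issue_refund tool.",
    tools=[issue_refund],
)

support_agent = Agent(
    name="Support agent",
    instructions="Answer general questions; hand off refund requests.",
    handoffs=[refund_agent],
)

result = Runner.run_sync(
    support_agent, "I want a refund for order 1234, it arrived broken."
)
print(result.final_output)
```

Because `issue_refund` is a plain Python function, the SDK derives its JSON schema from the signature and docstring, which is the "tools as Python functions" point above.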

OpenAI’s Future API Strategy

OpenAI also announced that the Assistants API will be sunset in 2026. Many developers found it too opinionated, and it lacked the flexibility of chat completions. Instead, OpenAI will prioritize the Responses API for future updates and capabilities.

Final Thoughts

These updates, combined with other innovations like Anthropic’s Model Context Protocol, make it easier to build and deploy AI agents. Excited to see where this goes next!

Check out our blog for a deeper dive, along with additional resources on agent development.

OpenAI’s Agents SDK and Responses API

OpenAI’s newly launched Agents SDK is a lightweight Python framework for building AI agents that can plan, use tools, and execute multi-step tasks. Alongside it, they also launched the Responses API.

  • Responses API
    • A new API that brings together features from the Chat Completions API and the Assistants API.
    • Biggest thing to highlight here is that it has three built-in tools
    • OpenAI is signaling that this will be the endpoint that gets the most support and enhancements going forward
    • Chat Completions is going to stay around and will get updates (not as many as Responses)
    • Assistants API will be sunset in 2026.
  • Built-in Tools – OpenAI released three built-in tools compatible with the Responses API
    • Web Search: Real-time, cited search results - same search that powers ChatGPT
    • File Search: Lets agents retrieve context files stored in your OpenAI vector store
    • Computer Use (CUA): Agents can interact with a computer GUI - similar to Operator
  • Agent-Orchestration Features – The SDK includes powerful features for structuring AI workflows:
    • Handoffs: Agents can delegate specific tasks to sub-agents or specialized functions. For example, a customer support agent can escalate a billing issue to a billing agent
    • Guardrails: Developers can enforce constraints on agent behavior. For example, a healthcare AI agent can be restricted from making certain medical recommendations without human review (see the sketch after this list).
    • Observability & Debugging: The SDK includes built-in tracing tools to help developers monitor how agents reason, execute actions, and handle failures.
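
To make the guardrails bullet concrete, here's a minimal sketch using the SDK's input-guardrail decorator (names follow the SDK docs at launch). The keyword check stands in for whatever policy model or rules you'd actually use:

```python
from agents import (
    Agent, Runner, GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered, input_guardrail,
)

@input_guardrail
async def no_diagnosis(ctx, agent, user_input):
    """Trip the guardrail if the request looks like a diagnosis question."""
    asks_for_diagnosis = "diagnose" in str(user_input).lower()  # toy check
    return GuardrailFunctionOutput(
        output_info={"asks_for_diagnosis": asks_for_diagnosis},
        tripwire_triggered=asks_for_diagnosis,
    )

health_agent = Agent(
    name="Wellness assistant",
    instructions="Answer general wellness questions. Never diagnose conditions.",
    input_guardrails=[no_diagnosis],
)

try:
    result = Runner.run_sync(health_agent, "Can you diagnose this rash for me?")
    print(result.final_output)
except InputGuardrailTripwireTriggered:
    print("Request blocked: route to human review.")
```

When the tripwire fires, the run stops before the main agent ever sees the request, which is where the human-review step would slot in.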

Tracing UI from OpenAI

What is Anthropic’s Model Context Protocol (MCP)?

We briefly covered Anthropic’s Model Context Protocol (MCP) in our weekly Substack a few weeks ago. MCP is an open standard for connecting AI models to external data sources and tools. The goal? To make it easy for any AI system—not just Claude—to securely interact with proprietary knowledge bases, databases, and APIs.

Three blocks representing and comparing APIs, LSP, and MCP
Source: Building Agents with Model Context Protocol - Full Workshop with Mahesh Murag of Anthropic

How it works:

  • MCP Servers: Connect to specific data sources (e.g., Slack, Notion, internal databases) and expose them via the protocol.
  • MCP Clients: AI models inside applications (like Claude, Cursor, etc) that query these servers dynamically.
  • Standardized Interface: Instead of writing custom integrations for each tool, developers can plug into a universal framework.

MCP is designed to be model-agnostic, meaning any AI system—whether Claude, GPT-4, or open-source models—can implement it. Anthropic sees MCP as a USB-C port for AI, enabling seamless access to external knowledge and services.
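
To give a feel for the server side, here's a minimal sketch of an MCP server using the official Python SDK's FastMCP helper; the CRM data and the `lookup_customer` tool are made up for illustration:

```python
# server.py - a tiny MCP server exposing one tool over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm")

# Hypothetical in-memory stand-in for a real CRM or database.
FAKE_CRM = {"42": {"name": "Ada Lovelace", "plan": "enterprise"}}

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Return basic account details for a customer ID."""
    return FAKE_CRM.get(customer_id, {"error": "not found"})

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so any MCP client can connect
```

Any MCP client (Claude Desktop, Cursor, and so on) can launch this script and call `lookup_customer` without a bespoke integration.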

There are even marketplaces and app-store-like websites for listing and discovering MCP servers, like Smithery and Glama.

MCP Client communication workflow
Source: Building Agents with Model Context Protocol - Full Workshop with Mahesh Murag of Anthropic

Will OpenAI release its own version of MCP? Maybe. But right now, MCP is the best standardized solution we have.

Comparing OpenAI’s Agents SDK & Anthropic’s MCP

They're by no means substitutes for each other (in fact, they work well together), but given their importance, here's a comparison table.

| Feature | OpenAI’s Agents SDK | Anthropic’s MCP |
| --- | --- | --- |
| Purpose | Framework for orchestrating AI agents | Standard protocol for AI to access external data & tools |
| Primary focus | Multi-step reasoning, planning, and execution | Connecting AI to external data sources securely |
| Built-in tools | Web search, file search, computer use | None (relies on external MCP servers) |
| Model-agnostic? | Works with any model that exposes an OpenAI-compatible API | Fully model-agnostic; works with Claude, GPT, and open-source models |
| Security & control | Managed within OpenAI’s ecosystem | Lets organizations retain full control over their data |
| Ideal use case | AI agents that autonomously complete tasks | AI assistants that need secure access to company data |

How they work together

OpenAI’s Agents SDK and Anthropic’s MCP complement each other. The Agents SDK makes it easy to spin up agents, leveraging OpenAI’s built-in tooling, orchestration, and tracing. MCP makes it easy to access data from external tools like databases, CRMs, and more.

For example, if you’re building an AI assistant for customer support, you might:

  1. Use OpenAI’s Agents SDK to orchestrate the conversation and tool usage.
  2. Integrate MCP to fetch account details from an internal database or Zendesk (see the sketch after this list).
  3. Have the agent decide whether to escalate the request or resolve it autonomously.
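
Here's a rough sketch of step 2, calling a tool on an MCP server from Python with the official client SDK. The `zendesk-mcp` command and `get_ticket` tool name are hypothetical placeholders for whatever server you actually run:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def fetch_ticket(ticket_id: str) -> str:
    # Launch the (hypothetical) MCP server as a subprocess over stdio.
    params = StdioServerParameters(command="npx", args=["-y", "zendesk-mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("get_ticket", {"id": ticket_id})
            # Content shape depends on the server; here we assume a text block.
            return result.content[0].text

print(asyncio.run(fetch_ticket("12345")))
```

The returned text can then be dropped into the agent's context (or wrapped as a function tool) so the Agents SDK handles step 3.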

The speed at which you can build agents with these two tools is insane.

Final thoughts

Currently we have a hard time differentiating between what's an agent, what's just a chatbot, and what's just a workflow. Either way, building all of these just became much easier. OpenAI’s Agents SDK simplifies agent orchestration for multi-agent systems, while Anthropic’s MCP streamlines data integration—a key factor in maximizing LLM performance (context is king!). Time to build!

Dan Cleary
Founder