Why OpenAPI Should Be The Foundation For Your MCP Server

At this point in 2025, who in the developer community hasn’t heard of Model Context Protocol (MCP)? MCP is Anthropic’s open protocol that standardizes how applications connect to and provide context to large language models (LLMs). Like it or not, MCP continues to be a hot topic for API and app developers. So much so that many are experimenting with it by building their own MCP servers for their APIs or third-party APIs.

You have several options for building your own MCP server. You could code it yourself, use an official SDK, or get help from an LLM like Claude. You could also leverage a tool that automatically generates an MCP server from an API specification file. In fact, using an OpenAPI specification as the foundation of your server can bring key benefits in terms of time, cost, and performance. These benefits increase if you also apply specific best practices, some of which we highlight in this article.

Why Use an OpenAPI Specification for Your MCP Server?

Some developers have pushed back on generating entire MCP servers from OpenAPI specifications. They argue that when you build this type of server, you must consider tool intent rather than just raw API endpoints. However, there are good reasons to use an OpenAPI specification as a starting point for your MCP server.

You’re Already Using OpenAPI

OpenAPI is among the top industry standards for API specifications. If you design and build your own APIs, then odds are, you already work with OpenAPI. You could modify one of your existing OpenAPI specifications so that it works as the underpinning of a functional MCP server.

It Can Expose Semantic Context

MCP is all about standardizing how AI apps provide context about data sources and tools to LLMs. Both MCP and OpenAPI can describe a tool or operation. However, OpenAPI also lets you expose semantic context: why and when the AI model should call each operation, and how to interpret its responses.
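
For example, here is a minimal sketch of what that semantic context could look like in the specification itself (the endpoint and wording are hypothetical):

```yaml
# Hypothetical endpoint: the description tells the model when and why
# to call the operation and how to read its responses, not just what it does.
paths:
  /flights/search:
    get:
      operationId: searchFlights
      summary: Search scheduled flights between two airports
      description: >
        Call this first whenever the user wants to book or compare flights.
        Results are sorted by price, lowest first. An empty "flights" array
        means no availability on that date, so suggest nearby dates instead
        of retrying with the same parameters.
```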

It Can Help Reduce LLM Token Usage

OpenAPI helps you design clear and consistent APIs without redundancies. It also gives you a way to explain to large language models the purpose and intent of each endpoint with examples. If the model immediately understands what each tool is and when and why to use each one, it won’t have to generate tokens to figure out all the tool options itself. Most LLM pricing is based on token usage, so the fewer tokens needed for each prompt, the less the model will cost to use.

You Can Make It Your Single Source of Truth

While you could use MCP to describe the tools generated from your API’s endpoints, it’s not a great idea to maintain two sources of truth for your APIs. OpenAPI should be your one source of truth for every API you convert to an MCP tool. Use the OpenAPI specification to describe every aspect of each API, including the semantic context. Implementing OpenAPI as your single source of truth and part of your versioning strategy can help you detect breaking changes in APIs.

You Can Leverage Automated Solutions

We’ve seen many solutions launched in recent months that automatically convert well-defined APIs into MCP servers. Most of these automated solutions support OpenAPI and are written in popular languages like Go, Python, and TypeScript. Why manually convert an API to an MCP server when you can quickly generate one from an OpenAPI specification? You can turn almost any API into an MCP server, but that doesn’t mean you should incorporate every single endpoint. Keep tool intent in mind when choosing which API endpoints to expose as tools for your server.

Now that you know why you should use an OpenAPI specification to develop an MCP server, we’ll highlight a few best practices you should follow when doing so.

Best Practices When Using an OpenAPI Specification to Build an MCP Server

Developing your MCP server based on a valid and comprehensive OpenAPI specification can improve its performance and reduce LLM costs. However, for the best results, you’ll need to apply some best practices during development. Here are a few to keep in mind:

Make Sure Your OpenAPI Specification is Clear and Complete

You should review the API specification to ensure it includes:

  • The purpose of each endpoint and operation
  • Clear parameter descriptions
  • Examples, especially for responses and parameters
  • Semantic context

This is not a comprehensive list. However, including these specific components in your specification will help large language models better understand the tool options offered by your server and how to leverage them. This clarity helps improve the performance of the LLM and the AI applications your server supports.
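
As a sketch, here is how those components might come together on a single hypothetical operation:

```yaml
paths:
  /flights/search:
    get:
      operationId: searchFlights
      summary: Search scheduled flights between two airports
      description: Use this to find a flightId before calling bookFlight.
      parameters:
        - name: origin
          in: query
          required: true
          description: IATA code of the departure airport.
          schema:
            type: string
          example: LHR
        - name: date
          in: query
          required: true
          description: Departure date in YYYY-MM-DD format.
          schema:
            type: string
            format: date
          example: "2025-10-01"
      responses:
        "200":
          description: Matching flights, sorted by price (lowest first).
          content:
            application/json:
              example:
                flights:
                  - flightNumber: BA117
                    departureTime: "2025-10-01T09:30:00Z"
                    price: 420.00
```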

Check That Your API Specification is Valid

You’ve made sure your OpenAPI specification is clear and complete, but have you checked that it’s actually valid? Use a linter to validate specification files against the OpenAPI schema as well as pre-defined or custom style rules. If you feed an invalid specification file into an automated conversion solution, you may get a parsing error or, worse, a flawed MCP server.
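
For example, if you use Spectral (a popular open-source OpenAPI linter), a small ruleset can tighten the checks that matter most for MCP generation. The custom rule below is illustrative:

```yaml
# .spectral.yaml — extends Spectral's built-in OpenAPI ruleset
extends: ["spectral:oas"]
rules:
  # Promote description coverage from warning to error
  operation-description: error
  operation-operationId: error
  # Illustrative custom rule: nudge every response media type to carry an example
  response-media-type-has-example:
    description: Responses should include examples so the LLM can interpret them.
    given: "$.paths[*][*].responses[*].content[*]"
    severity: warn
    then:
      field: example
      function: truthy
```

Running `spectral lint openapi.yaml` against this ruleset flags missing descriptions and examples before they ever reach your conversion tool.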

Don’t Create Tools for Every Endpoint Unless Necessary

If your API only has a few endpoints, then it may make sense to create a tool for each one. However, LLMs become overwhelmed when presented with too many options. This confusion results in the model wasting a lot of tokens figuring out which tools it needs to complete a task.

For example, let’s say the MCP server powers an autonomous AI agent that needs to find and book a flight for a customer. When the AI agent connects to the LLM, the MCP server offers tools for booking flights, hotels, and local transportation. Now the model must sort through all these tools (which consumes tokens) to find the one that will let it book a flight. In this scenario, the AI agent only has flight-booking capabilities, so only the flight endpoints should have been converted to tools.

If your API has 500 endpoints, you probably don’t need to create a tool for all of them. If you’re using an automated conversion solution, modify your OpenAPI specification so that it only includes the endpoints you absolutely need for your server.
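
For the flight-booking agent above, the trimmed specification might keep only two paths (operation details abridged, names hypothetical):

```yaml
paths:
  /flights/search:
    get:
      operationId: searchFlights
      summary: Search scheduled flights between two airports
  /flights/{flightId}/book:
    post:
      operationId: bookFlight
      summary: Book a specific flight for the customer
# Hotel and ground-transportation paths removed: this agent only books flights
```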

Refine Your Tool Descriptions

If you already have an OpenAPI specification for your APIs, then the existing descriptions are likely meant for developers. However, if your MCP server needs to connect an AI agent to an LLM, you’ll want the server tool descriptions to be clear, complete, and instructive for that AI agent. Vague or incomplete descriptions can cause the model to hallucinate or perform poorly. Clear descriptions will result in more accurate tool usage by the AI model, improving its performance.
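
For example, compare a developer-facing description with one rewritten for an agent (the wording and operation are illustrative):

```yaml
# Before (developer-facing): "Returns flights matching the query."
# After (agent-facing):
description: >
  Search scheduled flights between two airports. Call this before bookFlight
  to obtain a flightId. An empty result means no availability on that date,
  so suggest alternative dates rather than retrying the same search.
```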

Filter the Responses From Your API

If you’re using an API as the basis of your server, make sure each tool doesn’t send the LLM a lot of irrelevant data. For example, if the API responses are formatted as JSON, filter out the irrelevant fields before sending the responses to the AI model. Forwarding all that extra data results in unnecessary tokens and a higher LLM bill. This best practice isn’t specific to APIs with an OpenAPI specification: you should filter the responses of any API endpoint that becomes a tool offered by an MCP server.
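
How you implement the filtering depends on your conversion tool; as a purely hypothetical sketch, a generator configuration might whitelist the response fields worth forwarding:

```yaml
# Hypothetical generator config (illustrative only, not a real tool's schema):
# forward only the fields the model actually needs from each API response
tools:
  searchFlights:
    response_fields:
      - flights[*].flightNumber
      - flights[*].departureTime
      - flights[*].price
  bookFlight:
    response_fields:
      - confirmationNumber
      - status
```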

Build a Better MCP Server with an OpenAPI Specification

An OpenAPI specification can help you develop an efficient and cost-effective MCP server, but it requires some work on your part. You need to make sure the specification is comprehensive, valid, and exposes the semantic context of each tool. You must also check that it doesn’t include an excessive number of endpoints, or you risk the LLM performing poorly or costing you a fortune. One of the best reasons to use an OpenAPI specification, however, is the growing number of automated conversion solutions. Once you’ve refined your API specification, you can upload it to one of these tools and have a working MCP server up and running in a matter of minutes!

For more, check out our list of ten tools for building MCP servers.