Three Principles That Changed How We Design Our MCP Tools @ monday.com

MCP is one of the most exciting developments in AI today. While platforms like Power Automate promised to connect agents to tool ecosystems, they fundamentally didn't work. This failure came down to three critical reasons:

First, LLMs hadn't matured enough—their ability to select the correct tool at the right time simply wasn't there. Second, the custom methodologies required for integrating different agents from the provider side significantly increased cost and complexity. Finally, consumers weren't ready. AI was in its infancy, and organizations were still struggling to build basic rule-based automation workflows.

There was no market fit. Even when large service providers created state-of-the-art AI integrations, clients couldn't properly use them to extract value for customers.

But recently, everything changed. LLMs exploded in capability, and Claude was the first to truly demonstrate generalization in tool use. This generalization became what we now know as the MCP protocol. The protocol sparked a back-and-forth process in which clients like Claude and Cursor shipped initial support, with Copilot and ChatGPT soon to follow. This invited companies to build their own MCP servers, and frankly, the overhyped initial excitement caused many companies to race to demonstrate leadership.

I can't complain—sometimes a little madness is what's needed to get the ball rolling. Under all this excitement and press, customers started getting excited too. I've never faced a situation as a PM where customers tell me they want a technology without being able to articulate how it will fix their problems. But bottom line, they're game.

So now we have the perfect storm under the right conditions, and I'm driving a train through it. At monday.com, I'm leading our MCP effort. Creativity, a willingness to take risks, and ownership of the API seemed to be the reasons for putting me in charge. But as I've come to learn, MCP is not API—though it's deceptively similar.

So how do MCPs and APIs differ?

Let's begin with the simple reason: APIs are meant to be consumed by rule-based applications. A great API needs to be predictable, inflexible, and simple for code to interact with. Our initial thought at monday.com was to just wrap our API capabilities and expose them via MCP. This would have been naive.

As we built our MCP, I learned that the MCP's consumer is not a rule-based application—it's an AI agent. This makes it much closer to a human than a machine. Why is this so important? Because for the last 30 years, we've been working hard to make the human interface (the UI) as good as possible, often a far cry from the application programming interface (the API).

This makes sense. While the API is the execution layer, the UI is the nerve center that interprets our intent and connects it to the API execution layer. One would be naive to think that MCP, consumed by a client closer to a human than a machine, would be implemented as an API.

Through building monday.com's MCP, I discovered three fundamental principles that distinguish MCP design from traditional API design. Let me walk you through each one with real examples that opened my team's minds.

Principle 1: Contextual Intelligence

Imagine a user who receives an email asking them to update a monday.com board item's status. (For those unfamiliar with monday.com, think of a board as a spreadsheet, an item as a row, and status as a column.) When the user clicks the link and lands on the board, something fascinating happens—a cascade of implicit API calls that we never think about.

If the user has never visited this board before, they need to understand what it is. Here's what happens naturally:

  • They look at the board name
  • They scan the structure (column names)
  • They review example items to understand what's expected
  • They examine possible status values to understand their options
  • If needed, they check the workspace name and folder structure for additional context

All this happens before a single mouse click. I call this process "understanding board context," and it became critical for our MCP design.

Why? Because users interact with agents the same way they interact with other humans. When someone asks an agent to "update the status," the agent starts with the same blank slate a human would.

The API Approach (Wrong for MCP): If we'd simply wrapped our API, the agent would need to:

  1. Call get_board_metadata
  2. Call get_board_items
  3. Call get_column_settings
  4. Try to piece together context from multiple responses
  5. Likely fail several times with wrong parameters
  6. Eventually succeed, but with terrible AgentEx (what I call Agent Experience)

The Contextual Intelligence Approach (Right for MCP): Instead, we created a single tool: understand_board with the description "whenever you have to perform a task on a board and have no context about it, call this tool first."

This tool requires no inputs and returns a thoughtfully structured response containing:

  • Board name and description
  • Column structure with types and possible values
  • Recent items as examples
  • Workspace and folder context
  • Common patterns of how this board is typically used

The agent gets everything it needs in one call, understanding exactly why it called the tool. This mirrors how humans naturally gather context—all at once, implicitly, before acting.
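To make this concrete, here is a minimal sketch of what such a tool's handler could look like, with the data source stubbed out. All names, fields, and sample data below are illustrative assumptions, not monday.com's actual implementation:

```python
# Hypothetical sketch of a single "understand_board" tool that bundles all
# the context a human would gather implicitly, in one call.

def _fake_fetch(board_id):
    """Stubbed data source standing in for a real board lookup."""
    return {
        "name": "Q1 Launch Plan",
        "columns": [{"title": "Status", "type": "status",
                     "values": ["Working on it", "Done", "Stuck"]}],
        "items": [{"name": "Draft landing page", "Status": "Working on it"}],
        "workspace": "Marketing",
    }

def understand_board(board_id, fetch=None):
    """Return everything an agent needs to act on a board, in one response."""
    fetch = fetch or _fake_fetch
    board = fetch(board_id)
    return {
        "name": board["name"],
        "description": board.get("description", ""),
        # Column structure with types and possible values.
        "columns": [
            {"title": c["title"], "type": c["type"],
             "possible_values": c.get("values", [])}
            for c in board["columns"]
        ],
        # Recent items double as examples of what's expected.
        "example_items": board["items"][:3],
        # Surrounding context, like a human glancing at the workspace name.
        "workspace": board["workspace"],
    }
```

The design choice worth noticing: the tool trades granular endpoints for one rich, pre-digested response, so the agent never has to stitch context together itself.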

Principle 2: Natural Interaction

Humans can receive different UI interactions and simply understand what they're supposed to do, without training on every specific component or page. Agents can do this too—if we design for it.

Let me share an eye-opening example from studying the Zapier MCP. I wanted to understand how they resolved parameters needed to run tools. For instance, if a user wants to update a board, how do they resolve the board ID needed for execution?

I was surprised to learn they run an internal LLM for each tool. Their input is always a natural language prompt, and their response is often an instruction. The LLM can choose to search for a board ID by name, then either run the tool or return an instruction like: "I found 3 boards with similar names. Which one would you like me to use?"

This demonstrates Natural Interaction beautifully—the consuming agent doesn't need to insert specific parameters or receive back specific structures. It's flexible, conversational, adaptive.
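Here is an illustrative sketch of that pattern: resolve a board reference by name, and when the match is ambiguous, return a natural-language instruction for the consuming agent instead of a hard error. Everything here is hypothetical, not Zapier's or monday.com's actual code:

```python
# Sketch: name-to-ID resolution that answers conversationally when it
# cannot resolve unambiguously, instead of failing with an error code.

def resolve_board(name_query, boards):
    """Return either a resolved board ID or an instruction for the agent."""
    matches = [b for b in boards if name_query.lower() in b["name"].lower()]
    if len(matches) == 1:
        return {"resolved": True, "board_id": matches[0]["id"]}
    if not matches:
        return {"resolved": False,
                "instruction": f"No board named '{name_query}' was found. "
                               "Ask the user to confirm the board name."}
    options = ", ".join(b["name"] for b in matches)
    return {"resolved": False,
            "instruction": f"I found {len(matches)} boards with similar names "
                           f"({options}). Which one would you like me to use?"}
```

The consuming agent can surface the instruction verbatim to the user—flexible, conversational, adaptive.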

At monday.com, we decided not to run an LLM for every call (cost considerations), but the concept inspired us. We realized we could still embrace Natural Interaction in other ways.

The Innovation: Understanding User Intent

One of my biggest struggles is understanding what customers actually do with our MCP. Sure, I can see which tools they call with which parameters, but this doesn't answer why. Are they organizing conferences? Bootstrapping projects? Managing construction sites?

The solution comes from embracing Natural Interaction: add a new field to every tool asking the agent to explain what job it's completing on behalf of the user. Something like:

{
  "tool": "create_items",
  "parameters": {...},
  "context": "I'm helping the user set up a content calendar for their Q1 marketing campaign."
}        

This is only possible because of Natural Interaction—we're having a conversation with the agent, not just processing rigid parameters.

One of the first design principles we agreed on at monday.com was that every time a resource is created, we ask the agent to return its name with a link to the user as part of the response. This creates a consistent experience where the user can always quickly navigate into the platform and continue working on the asset just created. It also nicely connects to the next point!
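As a sketch of that convention, every creation tool could route its response through a small helper so the name and link are never omitted. The URL scheme and field names here are assumptions for illustration:

```python
# Sketch: a shared response builder ensuring every "create" tool returns
# the new resource's name plus a link the user can click into.

def creation_response(resource_type, resource_id, name):
    """Build a consistent creation response with name and deep link."""
    # Hypothetical URL scheme, not monday.com's real one.
    url = f"https://example.monday.com/{resource_type}s/{resource_id}"
    return {
        "status": "created",
        "name": name,
        "link": url,
        # A human-readable line the agent can surface verbatim to the user.
        "message": f"Created {resource_type} '{name}'. Open it here: {url}",
    }
```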

Principle 3: Product Intelligence

When building an API, we leave business logic to the consumer because rule-based applications can handle it consistently. The same company often controls both the API and UI, so it makes sense to have the UI layer handle business logic for flexibility.

But with MCP, you have a serious problem if you follow this pattern: you're leaving your business knowledge and best practices to be figured out by the agent/user. But you don't control the agent! This would create completely different experiences across different agents and take away your ability to control the experience the customer gets. Unlike with an API, with MCP you should embed all your business logic and product best practices within the tools themselves.

Let me make this concrete with our document creation example.

The API Way (Wrong for MCP): In our API, creating a document requires:

  1. Explicitly define where it should be created (workspace or board column)
  2. If choosing a board column, check if a doc column exists
  3. If not, create the doc column first
  4. Create the document content separately using our proprietary JSON format
  5. Link everything together

A developer figures this out once, writes the code, and it runs forever.

The Product Intelligence Way (Right for MCP): Our create_doc MCP tool embeds all our product intelligence:

create_doc:
  - workspace: mandatory
  - board: optional  
  - column: optional
  - content: mandatory (accepts standard markdown)        

The intelligence we embedded:

  • If only workspace is provided → create a standalone doc in the workspace
  • If board is provided but no column → find the first doc column and use it
  • No doc column exists? → create one automatically with a smart name
  • Column specified? → use it directly
  • Content is mandatory because creating an empty doc makes no sense for humans
  • Accept markdown instead of our proprietary format (agents already know markdown)

We lost predictability but made it 10x easier for agents to use. This tool now contains product intelligence about how users actually work with documents. We've embedded decisions like "users rarely want empty documents" and "if someone specifies a board, they probably want the doc attached to it."
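The decision cascade in the bullets above can be sketched as a small resolver. The field names, the default "Documents" column name, and the helper behavior are assumptions for illustration, not the real create_doc implementation:

```python
# Sketch: embed the product defaults ("where should this doc live?")
# inside the tool, so the agent never has to know the cascade exists.

def resolve_doc_target(workspace, board=None, column=None):
    """Decide where a new doc should be created, applying product defaults."""
    if board is None:
        # Only a workspace given: create a standalone doc there.
        return {"location": "workspace", "workspace": workspace}
    if column is not None:
        # Column specified: use it directly.
        return {"location": "board_column", "column": column}
    # Board given but no column: reuse the first doc column if one exists.
    doc_columns = [c for c in board["columns"] if c["type"] == "doc"]
    if doc_columns:
        return {"location": "board_column", "column": doc_columns[0]["title"]}
    # No doc column: create one automatically with a sensible default name.
    return {"location": "board_column", "column": "Documents",
            "created_column": True}
```

The agent passes whatever it has; the tool, not the agent, carries the product knowledge about what each combination should mean.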

Why This Matters: Remember, for APIs, developers figure things out once and the scripts run forever. Our poor agents, by contrast, have to re-understand the tools nearly every time they're tasked with something. We can't afford bad AgentEx.

The Path Forward

These three principles—Contextual Intelligence, Natural Interaction, and Product Intelligence—fundamentally reshape how we think about building MCPs:

Contextual Intelligence means we provide rich, complete context upfront, just as a human would gather it naturally.

Natural Interaction means we design for flexible, conversational exchanges rather than rigid parameter passing.

Product Intelligence means we embed our accumulated product wisdom directly into the tools, not leaving agents to figure out best practices.

Think of your MCP as a UI replacement, not an API wrapper. Think of the consumer as human-like, not machine-like. Think about AgentEx—the experience that agent will have interacting with your MCP, and by extension, the user interacting with that agent.

What's Next

I have many more thoughts about MCP and what's coming next in the protocol, but this is already a serious brain dump. The key insight is this: we're not just building another integration layer. We're creating the nerve center where Product Intelligence meets Artificial Intelligence, where years of human-centered design principles suddenly apply to machine interactions.

The companies that understand this shift—that MCP is fundamentally different from API—will build the tools that agents actually want to use. Those still thinking in terms of wrapped APIs will wonder why their perfectly functional integrations sit unused.

Do you see things differently? I'd love to hear from you. I'm still figuring this out as I go, with few other places to look for inspiration. But one thing is clear: the rules we've followed for 30 years of API design don't apply here. It's time to write new ones.


I'm leading MCP development at monday.com. If you're building MCPs or thinking about agent integration, I'd love to compare notes. The space is moving too fast for any of us to figure it out alone.

Pons Mudivai Arun

Untold problems → simple, high-leverage solutions.

3w

Thank you Daniel Hai for sparking this important dialogue on MCP vs. API design — and the deeper systemic implications when agents, not humans, are the primary “consumers.” In my systemic lens (attached image), When product intelligence is embedded into MCP tools, agent experiences become more consistent, which builds user trust and adoption — a reinforcing cycle. But legacy API practices create friction, acting as a balancing loop that limits adoption until product intelligence overcomes those constraints. Curious on "Which force will define the agentic era in your organization — the reinforcing spiral of intelligence and trust, or the balancing drag of legacy APIs?" #SystemsThinking #AgentExperience #AgenticAI #AIProductDesign #FutureOfWork

Rachael Muga

Quality Assurance Engineer | Cybersecurity Analyst | Testing Deeply. Securing Boldly. Delivering Confidently.

1mo

Really like how you framed MCPs as being closer to UIs than APIs. The point on contextual intelligence especially stood out. It feels a lot like how humans (and even testers) naturally validate things before acting. Super insightful breakdown.

Jacob Dietle

🟠 I architect GTM Operating Systems | Data & Context Engineering for GTM

1mo

This is a great write up appreciate you sharing! MCP UX is still such a brand new field super exciting to see common themes (like don’t just wrap your api but focus on the meta workflows) emerge as we all figure it out together :)

Great post, MCPs are designed for AIs to use, while APIs are designed for developers.
