How MCP servers challenge traditional API security models
Zoran Gorgiev, Alessio Dalla Piazza
In December last year, Anthropic (2025) reported that there were 10,000+ active public MCP servers. This number shows that the Model Context Protocol (MCP) is becoming the standard for connecting AI agents to data sources and tools.
According to the protocol, MCP servers act as gateways through which AI agents autonomously orchestrate API calls. Thus, they have become a critical layer where API security must adapt to address machine-driven intent.
Why MCP?
MCP was introduced by Anthropic in 2024 to address the integration bottleneck that occurred as LLMs moved from simple chatbots to functional agents.
Its purpose is to standardize the complexity of tool and data integration for AI systems, preventing them from growing into fragmented ecosystems plagued by duplicated and brittle glue code.
What problem does MCP solve?
Before MCP, if you wanted an AI model, such as Claude or GPT-4, to interact with a tool like Google Drive or GitHub, the integration had to be bespoke.
This is the commonly cited “N-to-N” connector problem: N models and N data sources entail N² integrations. That means high fragmentation and heavy maintenance overhead due to:
- Vendor-specific plugin frameworks
- Function calling schemas tied to many different platforms
- Custom REST clients per integration
- One-off auth, logging, and error handling per connector
MCP lets organizations expose tools and data once through an MCP server, which any MCP-compatible client — such as Claude, ChatGPT, internal agents, or IDE assistants — can access using the same protocol.
It formalizes the modus operandi for an assistant to discover available tools at runtime and access resources, like DB records and company documents, in a uniform and consistent way.
In short, MCP provides a standardized abstraction layer that lets external systems implement a single MCP interface once, which any MCP‑compatible LLM client can reuse — reducing model‑specific connector work, although without eliminating the need to expose each system via MCP.
A key advantage of this technology is the auto-discovery feature. Unlike traditional APIs, which remain invisible without a definition file, **MCP servers allow the automatic discovery of tools, resources, and prompts without manual configuration.
Keep in mind, however, that this advantage carries a security implication. If not properly configured, these resources could be exposed without authentication, posing a risk of unauthorized access.
Does MCP increase complexity?
MCP adds architectural but reduces operational complexity:
- Without MCP, you deal with a slew of integrations, each with its own custom code, weaknesses, and undocumented behavior.
- With MCP, you have a single, audited protocol that consolidates complexity into fewer, reusable components, reducing the overall ecosystem convolution (despite the additional layer of abstraction). According to Errico et al., (2025), organizations report achieving 50–70% reductions in time spent on routine tasks after deploying MCP.
Also, the 2025 revision of MCP introduces HTTP streaming, which enables real-time updates between servers and clients. This capability allows MCP servers to push updates automatically whenever resources or tools change, eliminating the need for repeated polling.
By keeping AI agents continuously connected to the latest data, HTTP streaming enhances scalability and performance, enabling faster, more efficient interactions.
Was it necessary to introduce MCP?
Strictly speaking, no. We could have continued with the function-calling mode.
However, as we moved from AI as a consultant to AI as an agent, that model became less efficient because it was more restrictive and often tied to individual vendors.
From that perspective, MCP made more sense. It provided a standardized coordination layer that connects LLMs to external tools.
When do you need MCP?
You need MCP when you:
- Expect to connect AI to multiple tools and data sources, and be able to add, remove, or change them without rewriting your assistant.
- Want multiple AI assistants to reuse the same tool integrations instead of each one implementing its own.
- Need the freedom to switch models or vendors without rebuilding your entire tool layer.
- Prefer to access AI tools through a single, controlled gateway that works the same on a laptop and in a company environment, as well as lets you track and audit everything the AI does.
Are there operational and architectural critiques of MCP?
Yes, MCP sometimes faces pushback. Below is a condensed breakdown of key non-security arguments against it.
Performance and “the dumb-down effect”
- Context bloat: Every tool requires its schema/metadata to be loaded into the AI’s short-term memory.
- Token overhead: Connecting a stack of tools wastes thousands of tokens on documentation even before the first query.
- Reasoning degradation: As the context window fills with tool definitions, exposing too many functions — rather than a limited subset — overloads the context, reducing the model’s ability to concentrate on the task and leading to performance degradation, including errors like hallucinations.
Architectural over-engineering
- The shim problem: Critics see MCP as a shim — an additional layer on top of REST APIs that adds latency and maintenance without offering unique functionality.
- Stateful complexity: MCP’s architecture involves connections that are not purely stateless REST calls, meaning handling session‑like interactions and continuous streaming can add architectural complexity relative to simpler, purely stateless protocols. Although newer MCP versions use Streamable HTTP to reduce issues tied to Server‑Sent Events, managing streaming and session logic can still complicate scaling and load balancing in large or highly dynamic deployments.
- Rigidity: Like the Gopher protocol of the 90s, MCP may be too structured to adapt easily to a fast-evolving AI ecosystem.
APIs and MCP Servers
As new technologies emerge, one thing remains unchanged: APIs’ role as supporting pillars in modern web, mobile, and LLM-based applications.
In the MCP context, as Gartner (2025a) notes, this protocol not only does not replace existing APIs, but its wider adoption will lead to higher API usage. Consequently, understanding the precise role of APIs in MCP and how they relate to MCP servers is key to building both resilient MCP architectures and resilient APIs.
The role of APIs in MCP
In the MCP ecosystem, APIs change from end-to-end solutions to raw data delivery mechanisms wrapped by the protocol:
- REST APIs provide the ‘what’ of AI tool integration, that is, the data and endpoints.
- MCP provides the ‘how’ — the standardized envelope and discovery process.
Under the hood, an MCP server often functions as a specialized proxy. It interacts with existing proprietary APIs — like Jira’s REST API or Slack’s Web API — and translates their structures into a universal schema that any MCP-compliant LLM can interpret.
This way, developers can harness their existing API investments, at the same time avoiding the need to write custom code for every new model version or agentic framework.
Moreover, MCP fundamentally changes the granularity of API consumption. Instead of an LLM needing a hardcoded client library for every service it touches, it uses the MCP as a runtime negotiation layer to understand what an API can do.
The protocol relies on JSON-RPC to facilitate communication, allowing the large language model to query the MCP server for a list of tools (essentially, API functions) and resources (static or dynamic data).
The AI model no longer needs to know or care how a tool is implemented or exposed at the API level. It interacts through a stable, standardized tool description provided by MCP. This means that the integration layer remains uninterrupted, regardless of whether the model itself is updated, upgraded, or replaced.
Differences and similarities between MCP servers and APIs
At their core, both MCP servers and traditional APIs serve as interfaces for data exchange — bridges between requesters and data sources. They both rely on structured formats, typically JSON, to transport information and use standard web transport layers, such as HTTP, to facilitate communication.
In a functional sense, an MCP server is a type of API. It exposes endpoints, requires authentication, and follows a predefined contract. But the similarity ends at the main objective:
- A standard API provides a raw data pipe for developers to build an application.
- An MCP server provides a semantic map for an AI to navigate a digital environment.
That means the primary difference lies in the direction of logic and discovery. In a traditional API setup, the developer must predefine every step of the interaction, mapping inputs to outputs. This approach can be called imperative.
Conversely, MCP servers adopt a declarative approach. They broadcast a manifest of their own capabilities — tools, prompts, and resources — which the LLM interprets at runtime.
In other words:
- A REST API is stateless and passive. It waits for a command to complete a task.
- An MCP server maintains a JSON-RPC session. It allows the AI to explore the server and understand what it can do before executing a command.
| Feature Comparison | ||
|---|---|---|
| Aspect | Traditional API (REST) | MCP Server |
| Protocol foundation | HTTP: GET, POST, etc. | JSON-RPC, usually over SSE or Stdio |
| Data format | Primarily JSON/XML | Strictly JSON |
| Communication style | Stateless: request-response | Stateful: Session-based |
| Schema definition | External: OpenAPI/Swagger | Internal manifest: self-describing |
| Developer focus | Endpoint management and data types | Tool descriptions and semantic prompts |
The relationship between APIs and MCP servers within MCP
Within the MCP architecture, the relationship between APIs and MCP servers is at the same time symbiotic and hierarchical. The protocol hinges on APIs; they remain the vital engines of data operations. However, the protocol itself is the governing layer.
In this environment, MCP servers act as translators, formatting API endpoints for autonomous discovery and execution. They consume raw API endpoints and repackage them into a format rich with semantic descriptions and metadata. The servers help the LLM navigate a structured terrain where the intent and utility of every tool are explicitly defined.
As a reminder, the standard API is designed for the predictable logic of a human coder. As such, it often lacks the descriptive cues an LLM needs to reason without hallucinating and making (major) errors.
But within the MCP framework, the server maps complex API functions to standardized tools that the model can discover and call at runtime in a uniform and consistent way. By enforcing a standardized contract, this design solves the last-mile problem of AI integration.
The big win here is that the underlying APIs can go through structural updates without breaking the agent’s logic or, moreover, the overall AI system. As long as the MCP server’s translation layer remains compliant with the protocol, the AI’s interaction with the data remains seamless.
Why MCP servers strain existing API security assumptions
The transition from human-centric to agentic software interaction drastically changes the enterprise’s risk profile. Gartner predicts that by 2028, 80% of organizations will see AI agents, instead of human developers, consume the majority of their APIs.
MCP servers expand the API attack surface
MCP servers are key in this equation, as they act as centralized hubs that funnel access to multiple disparate APIs through a single entry point.
This architectural consolidation creates a dangerous single point of failure. Because the MCP server brings various tools together into a unified point, a compromise can give an adversary orchestrated access to your entire suite of connected services, tools, and data repositories.
Empirical data already demonstrated the gravity of this centralized risk. The Li et al., (2025) analysis of 2,562 real-world MCP applications revealed that the most frequently consolidated capabilities are high-privilege
- Network APIs, affecting 1,438 servers
- System APIs, affecting 1,237 servers
Consequently, the risk of a compromised MCP gateway is not limited to a localized data leak. Instead, it provides threat actors with a direct path to exploit the host’s network and operating system.
Authentication and authorization challenges
As mentioned before, MCP servers function as specialized proxies. As such, they can play the role of central repositories for authentication tokens and sensitive credentials necessary for interaction with underlying services.
This concentration of secrets challenges traditional point-to-point API security models. Since MCP’s built-in security mechanisms are often minimal, developers sometimes resort to ad hoc cryptographic practices, which Yan et al., (2025) found propagate insecure secrets and fragile authentication downstream.
Gartner predicts that, through 2029, over 50% of successful cyberattacks against AI agents will exploit precisely access control issues. Without strict per-user authentication and granular, scoped authorization, the proxy role of MCP servers will turn into a conduit for session hijacking and credential leaks.
However, consider that the latest MCP specifications introduce OAuth for standardized, secure authentication, aiming to replace ad hoc methods with a more scalable, robust solution.
To complement this, individual MCP server providers must implement Role-based Access Control (RBAC) and enforce granular access controls for tools and resources based on user roles and permissions. Each MCP server implementation must configure and enforce RBAC.
These improvements, along with the introduction of the roots capability:
- Protect sensitive data.
- Restrict access to authorized agents only.
- Minimize the risk of unauthorized access.
That said, despite these updated security measures being part of the latest specifications, full implementation across all MCP servers will take time. In the meantime, ad hoc authentication solutions will likely remain prevalent, as many existing systems continue to rely on them until the newer security mechanisms are fully adopted and integrated.
Dynamic tool discovery and increased data exposure
The transition to a dynamic, runtime-discovery model in which an LLM autonomously explores available tools and resources moves the security boundary from the network and syntactic layers to the semantic and agentic layers. And there, the primary threat is the manipulation of intent.
A key risk in this dynamic environment is indirect prompt injection. A malicious file read via an MCP server can instruct the AI to use available tools to exfiltrate data.
Also, since many real-world MCP applications currently lack sufficient privilege separation, plugins can inherit broad system privileges, leading to unintended data exposure and over-permissioning.
Mitigating MCP security risks
MCP simplifies AI application development and composition but introduces security and trust issues that echo those of previous API and distributed system technologies (Gartner, 2025b). Effectively addressing these risks requires a dual strategy:
- Developing security mechanisms that address the non-deterministic nature of AI agents.
- Applying rigorous, proven testing standards to this new layer of integration.
Agent-centric API security
Securing the Model Context Protocol requires a move from “Developer Experience” (DX) to “Agent Experience” (AX) in API management (Gartner, 2025a).
Since AI agents will likely consume the majority of organizational APIs, security teams must move away from the assumption that a human is behind every request.
In these conditions, specialized API security solutions must implement semantic rate limiting and dynamic access management to prevent agents from abusing the underlying services they discover at runtime. That means validating the agent’s intent rather than just the request syntax.
In addition, protection must extend into the API execution layer.
Traditional security models validate the request and then hand it off. However, MCP-driven interactions require containerized execution sandboxing and strict I/O validation to make sure that a payload delivered via an API cannot lead to unauthorized code execution (Errico et al., 2025).
Finally, supplement this with provenance tracking, which treats the AI’s chain of thought as a verifiable identity. Tracing an API call back to the prompt makes it possible to protect the API, even when the entity making the call is a compromised AI agent.
Proactive vulnerability testing with Equixly
Recent research by Equixly suggests that, in its current state, MCP can contain security gaps that reintroduce basic vulnerabilities long thought mitigated in the web era (Dalla Piazza, 2025).
Equixly’s security assessments of popular MCP implementations depicted a worrisome picture:
- 43% of the tested servers contained command injection flaws.
- 30% were vulnerable to SSRF.
- 22% allowed path traversal.
These risks are worsened by the lack of mandatory authentication and the exposure of sensitive session identifiers in URLs. Because any entity can access MCP servers — not just transparent LLMs — they provide a universal attack surface that enables remote code execution (RCE) in up-to-date implementations.
Since MCP servers are essentially wrappers around existing APIs, the security of the agentic layer depends extensively on the quality of the raw API endpoints it consumes. For that purpose, Equixly helps developers create high-quality, reliable API specifications by testing for the very RCE and command injection vulnerabilities that are now resurfacing within MCP.
Its current automated testing of REST and JSON interfaces serves as the essential first line of defense. By hardening the APIs that MCP will eventually wrap, Equixly helps you prevent access to your most sensitive system and network resources.
Most importantly, Equixly now includes direct support for MCP testing within its platform, enabling auto-discovery and testing for users’ MCP servers. This functionality, which is already available ahead of the public release in March, allows organizations to proactively identify vulnerabilities in their MCP servers before threat actors exploit them.
Equixly offers early access and demos of this new testing feature, helping users secure their MCP implementations and stay ahead of emerging security threats.
Final thoughts
MCP provides the necessary bridge for the agentic era. Yet, its current maturity level calls for a considerable change in security strategy.
To move safely toward an ecosystem where AI agents dominate API consumption, you must prioritize proactive vulnerability testing and adopt an agent-centric security model that accounts for the unique risks of autonomous orchestration.
Reach out to see how our MCP testing support can proactively secure your MCP implementation and harden your APIs against emerging threats.
Don’t let security gaps in your MCP deployment compromise your system!
FAQs
What is the primary difference between a traditional API and an MCP server?
While traditional APIs require developers to manually program every specific connection between a model and a tool, MCP servers allow AI to automatically understand and use tools by reading a built-in menu of capabilities.
Does using MCP replace the need for traditional REST APIs?
No, MCP acts as a standardized orchestration layer that sits on top of your existing APIs, translating their data into a format that AI agents can easily reason with and navigate.
How does MCP change the current API security landscape?
MCP shifts the security focus from human-centric request syntax to agentic intent validation, requiring new defenses against risks like indirect prompt injection and centralized credential exposure.
Zoran Gorgiev
Technical Content Specialist
Zoran is a technical content specialist with SEO mastery and practical cybersecurity and web technologies knowledge. He has rich international experience in content and product marketing, helping both small companies and large corporations implement effective content strategies and attain their marketing objectives. He applies his philosophical background to his writing to create intellectually stimulating content. Zoran is an avid learner who believes in continuous learning and never-ending skill polishing.
Alessio Dalla Piazza
CTO & FOUNDER
Former Founder & CTO of CYS4, he embarked on active digital surveillance work in 2014, collaborating with global and local law enforcement to combat terrorism and organized crime. He designed and utilized advanced eavesdropping technologies, identifying Zero-days in products like Skype, VMware, Safari, Docker, and IBM WebSphere. In June 2016, he transitioned to a research role at an international firm, where he crafted tools for automated offensive security and vulnerability detection. He discovered multiple vulnerabilities that, if exploited, would grant complete control. His expertise served the banking, insurance, and industrial sectors through Red Team operations, Incident Management, and Advanced Training, enhancing client security.