
I touched on this concept while reporting from the Ignition Community Conference last September, where I sat beside an excitable “influencer” who was overjoyed at Inductive Automation’s announcement that MCP is coming to Ignition sometime in 2026, and who darn near put a big bruise on my thigh hitting me in his excitement.

This blog post on the Inductive Automation website, What Is MCP? Understanding the Model Context Protocol, explains the MCP support coming to Ignition this year.

Our company is working on an MCP Module for Ignition that will be released later in 2026. MCP is a very new technology on the scene, so you shouldn’t feel bad if you’re asking yourself, ‘Cool, but what exactly is MCP?’ In this blog post, we’ll give you a quick overview of what MCP does so you can start thinking of exciting ways to use the new module once it’s released.

As AI continues to evolve, one of the biggest limitations holding it back from widespread real-world adoption is its isolation. Large language models (LLMs) are powerful, but they are typically trained on a fixed dataset and are unable to access or act on real-time information.

The Model Context Protocol (MCP) breaks down that barrier. Introduced by Anthropic in November 2024 as an open standard protocol, MCP creates a standardized two-way communication bridge between AI systems and external tools, applications, and data sources. It extends LLMs with the ability to interact with enterprise resource planning (ERP) systems, customer relationship management (CRM) systems, databases, APIs, and external developer tools. You can think of it as a universal plug that allows LLMs to connect seamlessly with information outside of their training data.

Traditional LLMs are limited in two critical ways: they are static and isolated. This means that once an LLM is trained, its knowledge is frozen in time, and it cannot access external tools or databases unless you build custom integrations. MCP solves both of these problems by turning LLMs into dynamic agents. Through MCP, AI systems can query real-time data, update records, and trigger workflows.

For example, an enterprise assistant built with MCP could answer questions about project timelines, check your Google Calendar, update a ticketing system, query metrics, update internal systems, book events, or send an email within the same conversation. In creative fields, MCP-enabled AIs could write code and deploy it to production environments or generate 3D designs and send them directly to a printer.

Simply put, MCP increases LLM utility and automation by enabling models to perform a wide range of actions that would otherwise require extensive custom engineering.

One of the most important advantages of MCP is that it can significantly reduce the hallucinations and inaccuracies LLMs often generate, by allowing models to draw on authoritative, real-time sources like your databases and APIs. This grounds your LLMs’ outputs in real data rather than leaving them to rely solely on probabilistic text generation.

Additionally, unlike proprietary integrations that lock AI applications into a specific tool or vendor ecosystem, MCP is an open standard, which enables developers to share pre-built MCP server frameworks. This allows AI systems to evolve over time and solves the N x M integration problem: connecting N AI models to M tools would otherwise require N x M custom connectors, whereas a shared standard needs only N clients and M servers. MCP provides a consistent grammar and communication protocol, standardizing the interface so a single tool can be shared across models in a plug-and-play architecture. This makes it easier to reuse components, accelerates development, and fosters open collaboration across vendors and platforms without rewriting application logic, positioning MCP as foundational infrastructure rather than a short-lived integration layer.
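The integration savings are easy to quantify. Here is a minimal sketch in plain Python (no MCP library involved, just arithmetic) comparing bespoke connectors to a shared standard:

```python
# Without a standard: every (model, tool) pair needs its own connector.
def connectors_without_standard(n_models: int, m_tools: int) -> int:
    return n_models * m_tools

# With a standard like MCP: each model implements one client,
# and each tool is wrapped once by one server.
def connectors_with_standard(n_models: int, m_tools: int) -> int:
    return n_models + m_tools

# Example: 5 AI models integrating with 20 tools.
print(connectors_without_standard(5, 20))  # 100 custom integrations
print(connectors_with_standard(5, 20))     # 25 standardized endpoints
```

The gap widens quickly: every new tool added to a bespoke ecosystem multiplies the integration work, while under MCP it adds exactly one server.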

MCP uses a client-server architecture. The AI application acts as the MCP host, while MCP clients serve as bridges to external systems and tools. These clients handle session management, parsing, reconnection, and translation of user requests into MCP’s structured format. Each MCP client communicates with a unique MCP server, which connects to external databases, APIs, and web services, enabling it to execute tool functions, fetch data, or provide prompts.
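Under the hood, these sessions are carried over JSON-RPC 2.0. As a sketch, here is roughly what the handshake a client sends when opening a session looks like; the field names follow the published MCP spec, while the version string and client name here are illustrative:

```python
import json

# Illustrative JSON-RPC 2.0 "initialize" request an MCP client sends
# when it opens a session with an MCP server.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative spec revision
        "capabilities": {},               # features this client supports
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

The server replies with its own capabilities, after which the client knows which primitives (resources, tools, prompts) it can ask for.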

MCP servers expose three core primitives: resources, tools, and prompts. Resources provide read-only access to data sources like databases or files; tools perform actions, such as making API calls or triggering workflows; and prompts are reusable templates that set the structure for how the LLM communicates with tools and data. MCP uses these primitives as structured, declarative interfaces rather than allowing the LLM to issue arbitrary API calls. This streamlines the AI by shielding it from low-level system complexity, ensuring that it invokes well-defined actions with clearly scoped inputs and outputs.
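To make the three primitives concrete, here is a toy, in-memory stand-in for a server's primitive registry. This is not the official MCP SDK, just a sketch of the shape: read-only resources, callable tools with scoped inputs, and prompt templates, all registered by name so a host can discover them:

```python
# Toy stand-in for an MCP server's primitive registry.
# (Illustrative only; a real server uses an MCP SDK and a JSON-RPC transport.)
class ToyMCPServer:
    def __init__(self):
        self.resources = {}  # read-only data: URI -> callable returning content
        self.tools = {}      # actions: name -> callable with scoped inputs
        self.prompts = {}    # reusable templates: name -> template string

    def resource(self, uri):
        def register(fn):
            self.resources[uri] = fn
            return fn
        return register

    def tool(self, name):
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

server = ToyMCPServer()

@server.resource("db://orders/today")
def todays_orders():
    return [{"id": 41, "status": "shipped"}]   # stand-in for a DB query

@server.tool("create_ticket")
def create_ticket(title: str) -> dict:
    return {"ticket": title, "created": True}  # stand-in for an API call

server.prompts["daily_report"] = "Summarize {data} for the daily report."
```

Note that the LLM never calls these functions directly: the host lists what the server exposes and invokes entries by name, which is exactly what keeps the inputs and outputs well-scoped.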

MCP can be deployed in many ways to align with the needs of different environments and industries:

  • Local servers for privacy-sensitive and high-speed offline tasks
  • Remote servers for cloud-based, shared services
  • Managed servers for scalability and operational simplicity
  • Self-hosted servers for compliance, control, on-premise, or legacy environments

Using AI with MCP is very simple from the user’s perspective. You prompt your LLM as you normally would, and the MCP-connected system handles the rest. For example, if you ask, “Build me a report,” the AI host initiates a tool discovery process by querying the MCP server. It retrieves a list of available tools, selects the appropriate one, and calls the function with the necessary parameters.
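On the wire, that discovery-then-call sequence is a pair of JSON-RPC exchanges. A sketch of the messages involved; the method names follow the MCP spec, while the `build_report` tool and its schema are hypothetical:

```python
# Step 1: the host asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Abridged reply: each tool advertises a name and an input schema.
list_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"tools": [{
        "name": "build_report",  # hypothetical tool
        "inputSchema": {
            "type": "object",
            "properties": {"period": {"type": "string"}},
        },
    }]},
}

# Step 2: the host selects a tool and calls it with scoped arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "build_report", "arguments": {"period": "weekly"}},
}
```

The input schema is what lets the model fill in parameters correctly without ever seeing the underlying API.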

If your system needs a real-time update, such as a tool becoming unavailable, the MCP server can push a notification to the client without waiting for a new prompt. Once the tool completes its task, MCP integrates the results into the AI’s response or uses them to trigger the next action in a multi-step workflow.
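Such pushes are JSON-RPC notifications, which carry no `id` because no reply is expected. A sketch of the message a server sends when its tool list changes (the method name follows the MCP spec):

```python
# A JSON-RPC notification has no "id": the server pushes it one-way,
# without expecting a response from the client.
tool_list_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

# On receipt, a client would typically re-issue "tools/list" to refresh
# its view of what the server offers.
assert "id" not in tool_list_changed
```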

This orchestration model makes MCP ideal for building advanced AI agents capable of reasoning with live data, executing actions across systems, and adapting dynamically as tools and environments change.

MCP represents a foundational shift in how AI connects to systems. It transforms LLMs from static knowledge engines into intelligent, action-capable systems. As adoption grows, MCP is poised to become a core part of modern software infrastructure, powering a new generation of agentic and adaptive AI applications.
