by Gary Mintchell | Feb 4, 2026 | Generative AI, News, Process Control, Security, Software
This news came last week, just as I was contemplating the business model of cybersecurity firms following another acquisition: a new company launch with a unique take on security. This company will be interesting to watch. The news comes from Amsterdam and concerns the launch of a company called Indurex. Naturally, they have AI in their product offering, and they manage to work in an older term, cyber-physical systems.
The quick take: An AI-powered, human-in-the-loop platform that brings together process safety and cybersecurity, turning complex signals into trusted decisions for resilient critical infrastructure.
Indurex, a pioneering artificial intelligence (AI) and cyber-physical systems (CPS) security company, announced on January 27 its official launch to help protect critical infrastructure, smart manufacturing, and connected industrial operations. The company’s mission is to deliver robust, adaptive security solutions that safeguard both the physical and digital worlds as they increasingly converge.
Founded by a team of seasoned experts in operational technology (OT), cybersecurity, and process safety systems, Indurex enters the market at a decisive time. Operators across energy, utilities, and manufacturing sectors face mounting challenges from IT-OT convergence, cyber sabotage, and cascading system failures — putting both process safety and cybersecurity integrity under increasing pressure and exposing essential assets to unprecedented risk. Traditional tools, designed for isolated IT networks or legacy control systems, can no longer assure the level of operational, safety, and cyber integrity required in today’s highly connected industrial environments.
Industrial organisations continue to face a critical gap between process safety and cybersecurity, which are managed in disconnected silos. Existing tools generate high volumes of alerts without sufficient industrial or engineering context, leading to alert fatigue and a limited ability to assess real operational and safety impact. At the same time, a new class of AI-enabled and cyber-physical threats is emerging — capable of exploiting process behaviour, safety dependencies, and human workflows. Detecting and stopping these threats requires AI-native technologies designed for industrial systems, combined with human-in-the-loop intelligence to ensure explainability, trust, and effective decision-making.
Indurex bridges this gap with an AI-native, interoperable platform that unifies engineering context and cybersecurity intelligence — an approach the company defines as Engineering Cyber Intelligence.
This delivers measurable returns across three dimensions:
- Operational Excellence & Safety Integrity: Fewer trips and faster recovery through unified situational awareness and continuous assurance of Safety Integrity Functions (SIF)
- Cyber Resilience: Contextualized detection and response across digital and physical domains, aligned with operational and safety impact
- Cost & Compliance: Automated reporting and defensible evidence of risk, control maturity, and safety integrity across critical systems
Click on the Follow button at the bottom of the page to subscribe to a weekly email update of posts. Click on the mail icon to subscribe to additional email thoughts.
by Gary Mintchell | Jan 30, 2026 | Generative AI, Manufacturing IT, Software
I touched on this concept reporting from the Ignition Community Conference last September. It’s where I was sitting beside this excitable “influencer” who was overjoyed at the announcement from Inductive Automation that MCP was coming to Ignition sometime in 2026 and darn near put a big bruise on my thigh hitting me in his excitement.
This blog post on the Inductive Automation website, What Is MCP? Understanding the Model Context Protocol, explains MCP for Ignition coming this year.
Our company is working on an MCP Module for Ignition that will be released later in 2026. MCP is a very new technology on the scene, so you shouldn’t feel bad if you’re asking yourself, ‘Cool, but what exactly is MCP?’ In this blog post, we’ll give you a quick overview of what MCP does so you can start thinking of exciting ways to use the new module once it’s released.
As AI continues to evolve, one of the biggest limitations holding it back from widespread real-world adoption is its isolation. Large language models (LLMs) are powerful, but they are typically trained on a fixed dataset and are unable to access or act on real-time information.
The Model Context Protocol (MCP) breaks down that barrier. Introduced by Anthropic in November 2024 as an open standard protocol, MCP creates a standardized two-way communication bridge between AI systems and external tools, applications, and data sources. It extends LLMs with the ability to interact with enterprise resource planning (ERP) systems, customer relationship management (CRM) systems, databases, APIs, and external developer tools. You can think of it as a universal plug that allows LLMs to connect seamlessly with information outside of their training data.
Traditional LLMs are limited in two critical ways: they are static and isolated. This means that once an LLM is trained, its knowledge is frozen in time, and it cannot access external tools or databases unless you build custom integrations. MCP solves both of these problems by turning LLMs into dynamic agents. Through MCP, AI systems can query real-time data, update records, and trigger workflows.
For example, an enterprise assistant built with MCP could answer questions about project timelines, check your Google Calendar, update a ticketing system, query metrics, update internal systems, book events, or send an email within the same conversation. In creative fields, MCP-enabled AIs could write code and deploy it to production environments or generate 3D designs and send them directly to a printer.
Simply put, MCP increases LLM utility and automation by enabling it to perform a wide range of actions that would be impossible without extensive custom engineering.
One of the most important advantages of MCP is that it significantly reduces the hallucinations or inaccuracies that LLMs often generate by allowing models to access authoritative, real-time sources like your databases and APIs. This ensures that your LLMs’ outputs are more grounded in reality rather than relying on probabilistic text generation.
Additionally, unlike proprietary integrations that lock AI applications into a specific tool or vendor ecosystem, MCP is an open standard, which enables developers to share pre-built MCP server frameworks. This allows AI systems to evolve over time, and provides the critical benefit of solving the N x M problem of integration, where N (AI models) and M (tools) require N x M number of custom connectors. MCP provides a consistent grammar and communication protocol, standardizing the interface and allowing a single tool to be shared across models via a plug-and-play architecture. This makes it easier to reuse components, accelerates development, and fosters open collaboration across vendors and platforms without rewriting application logic, positioning MCP as foundational infrastructure rather than a short-lived integration layer.
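The N x M arithmetic above is easy to sketch. Here is a minimal illustration (the numbers are my own, not from the post) of why a shared protocol collapses the integration count from a product to a sum:

```python
# Without a shared protocol, every (model, tool) pair needs its own connector.
# With a standard like MCP, each side implements the protocol once.

def custom_connectors(n_models: int, m_tools: int) -> int:
    """Point-to-point integrations: one connector per model/tool pair."""
    return n_models * m_tools

def mcp_adapters(n_models: int, m_tools: int) -> int:
    """Shared protocol: one client adapter per model, one server per tool."""
    return n_models + m_tools

# Example: 4 AI models talking to 6 tools.
print(custom_connectors(4, 6))  # 24 bespoke integrations
print(mcp_adapters(4, 6))       # 10 protocol implementations
```

Adding a seventh tool under the point-to-point approach means four more connectors; under the shared protocol it means one more server that every model can use immediately.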
MCP uses a client-server architecture. The AI application acts as the MCP host, while MCP clients serve as bridges to external systems and tools. These clients handle session management, parsing, reconnection, and translation of user requests into MCP’s structured format. Each MCP client communicates with a unique MCP server, which connects to external databases, APIs, and web services, enabling it to execute tool functions, fetch data, or provide prompts.
MCP servers expose three core primitives: resources, tools, and prompts. Resources provide read-only access to data sources like databases or files; tools perform actions, such as making API calls or triggering workflows; and prompts are reusable templates that set the structure for how the LLM communicates with tools and data. MCP uses these primitives as structured, declarative interfaces rather than allowing the LLM to issue arbitrary API calls. This streamlines the AI by shielding it from low-level system complexity, ensuring that it invokes well-defined actions with clearly scoped inputs and outputs.
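The three primitives can be made concrete with a toy, self-contained sketch. This is not the official MCP SDK; the names (`db://orders/recent`, `get_status`, `order_report`) are made up purely to show the conceptual shape of each primitive:

```python
# Toy illustration of the three MCP server primitives.
# Not the real MCP SDK -- just the conceptual role of each primitive.

RESOURCES = {
    # Resources: read-only data the model can load as context.
    "db://orders/recent": lambda: [{"id": 1, "status": "shipped"}],
}

TOOLS = {
    # Tools: actions with clearly scoped inputs and outputs.
    "get_status": lambda order_id: RESOURCES["db://orders/recent"]()[0]["status"],
}

PROMPTS = {
    # Prompts: reusable templates structuring how the LLM uses the above.
    "order_report": "Summarize the status of order {order_id} for the customer.",
}

# A host would discover these via the protocol, then invoke them:
status = TOOLS["get_status"](order_id=1)
print(status)  # shipped
print(PROMPTS["order_report"].format(order_id=1))
```

The point of the structure is the last paragraph's claim: the LLM never issues arbitrary API calls, it only selects from declared entries with known inputs and outputs.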
MCP can be deployed in many ways to align with the needs of different environments and industries:
- Local servers for privacy-sensitive and high-speed offline tasks
- Remote servers for cloud-based, shared services
- Managed servers for scalability and operational simplicity
- Self-hosted servers for compliance, control, on-premise, or legacy environments
Using AI with MCP is very simple from the user’s perspective. You prompt your LLM as you normally would, and the MCP-connected system handles the rest. For example, if you ask, “Build me a report,” the AI host initiates a tool discovery process by querying the MCP server. It retrieves a list of available tools, selects the appropriate one, and calls the function with the necessary parameters.
If your system needs a real-time update, such as a tool becoming unavailable, the MCP server can push a notification to the client without waiting for a new prompt. Once the tool completes its task, MCP integrates the results into the AI’s response or uses them to trigger the next action in a multi-step workflow.
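Under the hood, MCP exchanges JSON-RPC 2.0 messages. The sketch below shows the approximate shape of the discover-then-call sequence for the "Build me a report" example; the message fields are simplified from the spec, and the tool name `build_report` and its schema are hypothetical:

```python
import json

# 1. Host asks the server what tools exist (discovery).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. Server replies with available tools and their input schemas.
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [
        {"name": "build_report",
         "inputSchema": {"type": "object",
                         "properties": {"period": {"type": "string"}}}},
    ]},
}

# 3. Host selects a tool and invokes it with parameters.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "build_report", "arguments": {"period": "Q1"}},
}

# A server-initiated notification carries no "id" (no reply expected),
# e.g. announcing that the available tool list has changed:
notification = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}

print(json.dumps(call_request, indent=2))
```

The request/notification distinction is what enables the push behavior described above: a notification has no `id`, so the server can send it at any time without the client having asked for anything.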
This orchestration model makes MCP ideal for building advanced AI agents capable of reasoning with live data, executing actions across systems, and adapting dynamically as tools and environments change.
MCP represents a foundational shift in how AI connects to systems. It transforms LLMs from static knowledge engines into intelligent, action-capable systems. As adoption grows, MCP is poised to become a core part of modern software infrastructure, powering a new generation of agentic and adaptive AI applications.
by Gary Mintchell | Jan 29, 2026 | Generative AI
I issued a challenge regarding real-world applications of AI. I have a craving to dig beneath the hype I've been reading. Applications with built-in artificial intelligence (AI) have existed for decades. You are probably running some already, certainly in manufacturing, but also in your daily work on a computer.
The hype grew with the rapid development of large language models (LLMs) and what is labeled Generative AI. People have been falling all over themselves playing with ChatGPT, Claude, Sora, and much more. I use Claude. It’s been helpful as a researcher/analyst.
Is there more?
A publicist for a new company called GrayCyan took me up on the challenge.
Many manufacturers are sitting on AI-driven efficiencies that go way beyond robots and predictive maintenance, and most shops haven’t even noticed them yet.
OK, that’s enticing.
Nishkam Batta, founder and CEO of GrayCyan, a company that specializes in creating custom AI solutions for the manufacturing and industrial industry, has a unique perspective on these gaps. Some topics he can speak to:
- Overlooked AI moves that are driving the next wave of efficiency in industrial operations
- The AI gap is costing manufacturers millions: three places where money's leaking
- Beyond predictive maintenance: simple hidden AI plays that slash costs and boost throughput
I had to take him up on that. We talked last week.
They have a LinkedIn magazine called HonestAI. You can check that one out. They also publish it as a glossy print edition and a PDF. It's pretty cool.
Here is some background.
GrayCyan is an applied artificial intelligence company that builds human-in-the-loop AI systems for mid-sized organizations. The company specializes in AI middleware that integrates with legacy ERPs, CRMs, and operational platforms to automate administrative workflows, improve data consistency, and support real-world business operations.
He told me recent studies show that SMBs that adopt AI for operations and planning improve coordination across production, purchasing, quality, and scheduling. This means fewer misunderstandings, fewer duplicated tasks, and fewer delays caused by missing or outdated information. Because small companies feel the impact of every mistake more sharply than enterprises, this improved visibility allows them to run smoother, leaner operations that often outperform larger competitors that must navigate rigid internal structures.
by Gary Mintchell | Jan 27, 2026 | Generative AI, Workforce
Here is another survey of manufacturing business leaders regarding applying AI in their operations and the availability of a skilled workforce to do so.
Revalize, a worldwide leader in CPQ, PLM, and CAD software solutions for manufacturers, released new research around the state of AI and smart technology adoption in the manufacturing sector. Despite ongoing economic uncertainty, the report finds that technology investment continues to surge, prioritizing AI and automation, yet many companies are struggling to find talent with the right skillsets to implement and deploy the technology effectively. Manufacturers must focus on streamlining systems and upskilling workers to leverage emerging AI opportunities and fully realize the return on these investments.
The report, Smart Manufacturing 2026: Agile Leaders Confront the AI Skills Gap, features data gathered from a survey of 500 business leaders in select manufacturing fields across the United States, Germany, Austria, and Switzerland. The research reveals companies are investing in new technologies, but many are starting to face the reality that their workforce might not be ready for it.
Key findings include:
- Technology Investments Continue to Rise: 77% of manufacturing leaders report increased software budgets over the past 12 months, up from 70% the year prior, signaling sustained momentum behind digital transformation. Additionally, 93% of manufacturing leaders plan to utilize new technologies, tools, or software this year.
- Holistic AI Adoption is Lagging Despite Lofty Goals: While 56% of manufacturers reported having implemented AI in select areas, only 10% said the technology was fully integrated across their operations, illuminating a critical gap in execution.
- The U.S. Has the Highest Demand for AI Skills: U.S. manufacturers lead investment in AI-driven and human-centric Industry 5.0 technologies, creating the highest demand for AI skills among all countries surveyed. As a result, 44% of U.S. teams cited a need for AI expertise, 16% higher than other regions.
- Industry 5.0 Confidence Remains High, but Realism Grows: 84% of manufacturers feel prepared to adopt and leverage Industry 5.0 technologies, a slight decline from last year, reflecting a more realistic understanding of the integration, data, skill, and workforce capabilities required for success.
“AI and automation are transforming the manufacturing sector, but without serious investment in workforce training to leverage these technologies, initiatives fall short of expectations,” said Mike Sabin, CEO of Revalize. “The industry’s continued technology investments must be matched by a commitment to upskilling talent through both internal programs and external academic partnerships. I foresee 2026 as a pivotal turning point where manufacturers will either move beyond AI hype and take the necessary steps to bridge the gap between investment and readiness or fall behind as competitors move faster toward Industry 5.0.”
by Gary Mintchell | Jan 26, 2026 | Automation, Commentary, Generative AI
Vijay Narayan, Business Unit Head, Manufacturing, Logistics, Energy, Utilities at Cognizant, recently spoke with me about what they are seeing in their consulting work with manufacturing and logistics companies. We touched on AI, workforce, and strategy.
Cognizant promotes developing and using digital twins to pilot new machines, enabling simulation for optimization. AI for predictive maintenance has been useful for efficiency. He finds that progress in adopting automation is staggered across the industries he serves, and the same goes for AI. Companies take one step up and find they can't go back. In his experience, the most useful sponsorship for adoption comes from the CFO, whose office asks how to gain efficiency. At the local level, the plant manager holds the keys to effective adoption.
Following our conversation, I found this press release about a report on analysis of how AI will impact work and jobs.
The new research reveals AI is changing the workforce faster than previously reported: it’s now capable of handling $4.5 trillion in U.S. work tasks and impacting potentially 93% of jobs today. However, the report also underscores that AI is not a blanket solution for advancing labor productivity: human involvement and adaptable operations continue to be vital to capturing the full value potential of AI.
Cognizant’s analysis for New Work, New World 2026 is based on a reassessment of 18,000 tasks and 1,000 jobs in the O*NET labor database, with a focus on how jobs and tasks could be assisted or automated by AI. Specifically, the new study points to an accelerated pace of change in “exposure scores”—the degree to which a job can be assisted or automated by AI—and highlights how those evolving changes can influence labor and enterprise success.
by Gary Mintchell | Jan 23, 2026 | Asset Performance Management, Generative AI
I seldom hear from Emerson these days. They have announced an update to Aspen Mtell in the AspenTech asset performance management portfolio, and they mention AI-enabled functions. The company has had aspects of artificial intelligence buried within the system for years; this release expands its usefulness.
Emerson announced the next evolution of its AspenTech Asset Performance Management (APM) portfolio. The latest release of Aspen Mtell provides a pathway for companies to drive immediate value and seamlessly scale from foundational asset health monitoring to best-in-class, AI-enabled failure prediction and continuous operational improvement.
The latest innovations in Aspen Mtell enable a proactive enterprise reliability program that delivers continuous improvement. Key capabilities and updates to Aspen Mtell include:
- Rapid Scalability: Industry- and asset-specific templates and market-leading analytics drive faster deployment of asset health monitoring across the enterprise, enabling quick ROI and seamless transition to AI-driven prediction.
- Accelerated Alert Resolution: AI-powered insights automatically group and prioritize alerts based on severity, risk and historical data. Embedded failure mode and effects analysis prescribes corrective action, significantly streamlining risk resolution.
- Next-Level Operational Reliability: Direct connection with Emerson’s vibration monitoring solutions, AMS Machine Works and AMS Device Manager.
- Seamless Enterprise Integration: Deliver actionable insights directly into existing enterprise resource planning workflows through deep integration with enterprise asset management systems.