Updated Information On CESMII, The Smart Manufacturing Institute

The last time I wrote about CESMII, its main output had been several educational initiatives regarding smart manufacturing.

I contacted Jillian Kupchella, marketing director, last month to initiate some conversations so that I could get an update.

For those not familiar with the organization: CESMII – the Smart Manufacturing Institute – has a total current investment commitment of $201M from Department of Energy funding and public/private partnership contributions, with a mandate to create a more competitive manufacturing environment here in the US through advanced sensing, analytics, modeling, control and platforms. CESMII is one of 18 Manufacturing USA institutes on this mission to increase manufacturing productivity, global competitiveness, and reinvestment by increasing energy productivity, improving economic performance and raising workforce capacity. University of California at Los Angeles (UCLA) is the program and administrative home of CESMII.

The CEO is a former colleague from MESA, John Dyck.

The early education initiatives have blossomed over the ensuing few years into a community of nearly 100 Certified Smart Manufacturing Roadmapping Professionals who are equipped to engage manufacturers of all sizes – small, medium, and large – to assess current states, develop strategic roadmaps, align communications, and establish sustainable funding models. This work is accelerating the development of data-driven cultures and a true Smart Manufacturing mindset across all industries.

They identified manufacturing and systems interoperability as a strategic imperative – marking the end of siloed data and stovepipe architectures and enabling scalable data, application, and integration interoperability. I’ve heard and written about data silos and stovepipe architectures for decades. I hope they can move that ball forward (to use an American football analogy given the recently completed Super Bowl).

According to Dyck, CESMII has identified a couple of new initiatives it considers key to the widespread deployment of Smart Manufacturing.

CESMII’s 3 Smart Manufacturing Architecture Imperatives represent a foundational set of requirements that address this demand for interoperability. We are advocating for open, standards-based information modeling (SM Profiles), interoperable platform requirements, and a common API that can drive scalability, reduce complexity, and unlock real-time value from manufacturing data across systems, applications, and the supply chain. You can learn more about these SM Imperatives here: SM Architecture Imperatives Workshop

We do want to draw your attention to the newest, and arguably most important of these imperatives. CESMII convened an international, open initiative to establish a common, vendor-agnostic API for contextualized manufacturing information. This effort addresses a longstanding challenge faced by manufacturers and application developers alike: the need to build against incompatible, proprietary platform interfaces. Adoption of this API is already underway among several leading manufacturing software and platform providers, with an official launch planned for early 2026.
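
To make “contextualized manufacturing information behind a common API” a bit more concrete, here is a purely hypothetical sketch in Python. CESMII has not yet published the API specification, so the endpoint path, field names, and response shape below are invented for illustration only, not taken from the actual standard.

```python
# Purely hypothetical illustration -- the CESMII common API has not been
# published; this endpoint, its parameters, and the response shape are
# invented to show the idea of contextualized data behind one interface.
import requests

BASE_URL = "https://smip.example.com/api/v1"  # hypothetical SMIP endpoint

def get_contextualized_value(equipment_id: str, attribute: str) -> dict:
    """Fetch one attribute of a piece of equipment, returning the raw value
    together with its context (units, timestamp, SM Profile type)."""
    resp = requests.get(
        f"{BASE_URL}/equipment/{equipment_id}/attributes/{attribute}",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # e.g. {"value": 72.4, "units": "degC", "profile": "Furnace",
    #       "timestamp": "2026-01-15T08:30:00Z"}
    print(get_contextualized_value("furnace-07", "zone1_temperature"))
```

The point of a common, vendor-agnostic API is that this same call would work against any compliant platform, regardless of vendor.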

We are also excited to share that several of our technology provider partners are actively working toward compliance with CESMII’s Smart Manufacturing Imperatives. As a result, we anticipate the addition of several new compliant Smart Manufacturing Interoperability Platforms (SMIPs) in 2026 – further strengthening the ecosystem. Stay tuned for announcements.

Scaling Smart Manufacturing for Impact

Through community engagement, CESMII has identified several strategic innovation and investment areas essential to scaling and deploying Smart Manufacturing, including:

  • Replicating Smart Manufacturing solutions across factories within an industry
  • Scaling from unit operations to factory and enterprise levels
  • Extending Smart Manufacturing solutions across the supply chain, including tier suppliers and small and medium-sized manufacturers

Scaling and deploying will demonstrate industry integration, implementation, and reusability of existing SM solutions, practices, and infrastructure.

The Institute has given itself some ambitious projects. We wish them success.

IEC 61131 Process Control Function Standards Working Group Launched

The Open Process Automation Forum has been building a standard of standards to promote open and interoperable technology for process automation. PLCopen has been at the forefront of international standards promulgation as the organization behind IEC 61131. This latter organization has instituted a Working Group to create IEC 61131 process automation standards and certifications for application engineers to efficiently deploy PLC, DCS, and open platform controls in process industry applications.

I’ve been following and promoting openness and interoperability for decades. This should be a useful step forward.

Bill Lydon sent this explanation of the background and current status of programming standards.

The cost of programming process automation and control continues to grow and is a significant part of project costs. Each supplier having unique function blocks that do not follow a single worldwide standard increases training time, application development costs, and project profit risk. PLCopen standardization and modular methodology lower training time and project development costs and reduce the risk of project cost overruns.

This further expands the base of PLCopen standards that enable No-Code/Low-Code industrial automation programming across vendor platforms, including industrial computers. This will include incorporation of the function blocks defined in the O-PAS standard into a new PLCopen standard.

The new PLCopen Process Functions standards and certification make it easier for application engineers to deploy PLC, DCS, and open platform controls in process applications.

Working Group Goal

The PLCopen Process Industry Working Group’s goal is to accelerate the convergence of discrete and process control and automation into harmonized PLC, DCS, and open platform system architectures to achieve industrial business digitalization.

Today there are many different ways to program applications for process control and automation. The goal is to develop PLCopen function block standards for process control functions. Function blocks are encapsulations of variables, parameters, and their processing algorithms. Similar standardization has been done with PLCopen standards developed for motion control, safety, fluid power, XML program interchange, and OPC UA.
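
To illustrate the encapsulation idea, here is a minimal sketch in Python of a reusable analog-scaling function block. Real PLCopen function blocks are written in IEC 61131-3 languages such as Structured Text, not Python, so this is only an illustration of the concept, not the actual specification.

```python
# Minimal sketch of the function block concept: a block encapsulates its
# parameters, state, and processing algorithm behind declared inputs and
# outputs. Illustrative only -- not a PLCopen-defined block.
from dataclasses import dataclass

@dataclass
class ScaleFB:
    """Analog scaling block: maps a raw input range to engineering units."""
    # Parameters (configured once, analogous to VAR_INPUT)
    raw_min: float = 0.0
    raw_max: float = 27648.0
    eng_min: float = 0.0
    eng_max: float = 100.0
    # Output (analogous to VAR_OUTPUT)
    out: float = 0.0

    def __call__(self, raw: float) -> float:
        """Processing algorithm, executed once per scan."""
        span = (self.raw_max - self.raw_min) or 1.0
        self.out = self.eng_min + (raw - self.raw_min) * (self.eng_max - self.eng_min) / span
        return self.out

# Usage: the same block type is instantiated per signal, which is what
# makes standardized blocks reusable across projects and vendors.
level = ScaleFB(eng_max=500.0)   # 0-27648 raw counts -> 0-500 liters
print(level(13824))              # ~250.0
```

Standardizing blocks like this across vendors is what lets an application engineer configure rather than re-code each function.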

He notes that process control applications are being done using PLCs. I actually sold a PLC to a chemical plant engineer who used it to control one of his processes. That was in 1995. So, while unusual, not unheard of.

Today many process control applications are being done using PLCs (Programmable Logic Controllers), since the capabilities of these devices are far beyond the original 1970s relay replacement applications. The emerging use of industrial edge computers with IEC 61131 runtime software engines is another segment that benefits from the results of the PLCopen Process Industry Working Group.

PLCopen Background

PLCopen has been successful in defining IEC 61131 functions and certifications used widely throughout industry worldwide, increasing engineering efficiency and quality and empowering a wider number of people in motion control, fluid power, safety, and other functions. The standards define common inputs, outputs, and behaviors, with vendors certifying conformance to accomplish the functions or additional features.

PLCopen Standards

  • Logic – The PLCopen basis is provided by the worldwide standard IEC 61131, and especially Part 3 – Programming Languages.
  • Motion Control – Creating reusable, hardware-independent motion control applications via IEC 61131-3 and PLCopen function blocks, including fluid power.
  • Safety – PLCopen Safety integrates safety functionality into IEC 61131-3 development environments. Meets IEC 61508 and related standards.
  • Communication – PLCopen and the OPC Foundation combine their technologies into a platform- and manufacturer-independent information and communication architecture.
  • XML Exchange – PLCopen added independent XML schemas to IEC 61131-3.

Movements including Industry 4.0, the Industrial Internet of Things, the Open Process Automation Forum, and Smart Manufacturing are creating a drive for more standards. IEC 61131-3, along with PLCopen extensions and certifications, is well established in discrete and hybrid applications and, with the addition of OPC function blocks, is already part of newer Industry 4.0 and Industrial Internet of Things offerings.

Working Group

As part of our ongoing efforts to drive standardization and interoperability in industrial automation, PLCopen will start a new workgroup exploring the incorporation of the function blocks we have developed for the O-PAS standard into a new PLCopen standard.

The O-PAS (Open Process Automation Standard) is an open, interoperable, and vendor-neutral standard developed by the Open Process Automation Forum (OPAF) to enable flexible and modular process automation systems. It is designed to replace traditional, proprietary DCSs with a standards-based, plug-and-play architecture, allowing components from different vendors to work seamlessly together. O-PAS is based on existing industry standards, such as (among others) IEC 61131 and IEC 61499.

Part 6.4 of the O-PAS defines a set of standard function blocks to ensure interoperability, consistency, and comparability across different process automation systems. These FBs provide a reference model with standardized inputs, outputs, and behaviors. By establishing a uniform function block framework, Part 6.4 supports modular automation, making it easier to adopt open, vendor-independent control solutions. PLCopen helped create several pre-defined function blocks for Part 6.4 of the O-PAS standard.

To standardize these function blocks within PLCopen, we are starting a new workgroup to create a new PLCopen standard for process automation.

Indurex Launches with a Mission to Advance Safety and Cybersecurity Resilience Across Cyber-Physical Systems

This news came last week. Just as I was contemplating the business model of cybersecurity firms following another acquisition, along came this news of a new company launch with a unique take on security. This company will be interesting to watch. The news comes from Amsterdam concerning the launch of a company called Indurex. Naturally they have AI in their product offering and manage to work in an older term—cyber-physical systems.

The quick take: An AI-powered, human-in-the-loop platform that brings together process safety and cybersecurity, turning complex signals into trusted decisions for resilient critical infrastructure.

Indurex, a pioneering artificial intelligence (AI) and cyber-physical systems (CPS) security company, announced on January 27 its official launch to help protect critical infrastructure, smart manufacturing, and connected industrial operations. The company’s mission is to deliver robust, adaptive security solutions that safeguard both the physical and digital worlds as they increasingly converge.

Founded by a team of seasoned experts in operational technology (OT), cybersecurity, and process safety systems, Indurex enters the market at a decisive time. Operators across energy, utilities, and manufacturing sectors face mounting challenges from IT-OT convergence, cyber sabotage, and cascading system failures — putting both process safety and cybersecurity integrity under increasing pressure and exposing essential assets to unprecedented risk. Traditional tools, designed for isolated IT networks or legacy control systems, can no longer assure the level of operational, safety, and cyber integrity required in today’s highly connected industrial environments.

Industrial organisations continue to face a critical gap between process safety and cybersecurity, which are managed in disconnected silos. Existing tools generate high volumes of alerts without sufficient industrial or engineering context, leading to alert fatigue and a limited ability to assess real operational and safety impact. At the same time, a new class of AI-enabled and cyber-physical threats is emerging — capable of exploiting process behaviour, safety dependencies, and human workflows. Detecting and stopping these threats requires AI-native technologies designed for industrial systems, combined with human-in-the-loop intelligence to ensure explainability, trust, and effective decision-making.

Indurex bridges this gap with an AI-native, interoperable platform that unifies engineering context and cybersecurity intelligence — an approach the company defines as Engineering Cyber Intelligence.

This delivers measurable returns across three dimensions:

  • Operational Excellence & Safety Integrity: Fewer trips and faster recovery through unified situational awareness and continuous assurance of Safety Integrity Functions (SIF)
  • Cyber Resilience: Contextualized detection and response across digital and physical domains, aligned with operational and safety impact
  • Cost & Compliance: Automated reporting and defensible evidence of risk, control maturity, and safety integrity across critical systems

Model Context Protocol in Ignition

I touched on this concept reporting from the Ignition Community Conference last September. It’s where I was sitting beside this excitable “influencer” who was overjoyed at the announcement from Inductive Automation that MCP was coming to Ignition sometime in 2026 and darn near put a big bruise on my thigh hitting me in his excitement.

This blog post on the Inductive Automation website, What Is MCP? Understanding the Model Context Protocol, explains MCP for Ignition coming this year.

Our company is working on an MCP Module for Ignition that will be released later in 2026. MCP is a very new technology on the scene, so you shouldn’t feel bad if you’re asking yourself, ‘Cool, but what exactly is MCP?’ In this blog post, we’ll give you a quick overview of what MCP does so you can start thinking of exciting ways to use the new module once it’s released.

As AI continues to evolve, one of the biggest limitations holding it back from widespread real-world adoption is its isolation. Large language models (LLMs) are powerful, but they are typically trained on a fixed dataset and are unable to access or act on real-time information.

The Model Context Protocol (MCP) breaks down that barrier. Introduced by Anthropic in November 2024 as an open standard protocol, MCP creates a standardized two-way communication bridge between AI systems and external tools, applications, and data sources. It extends LLMs with the ability to interact with enterprise resource planning (ERP) systems, customer relationship management (CRM) systems, databases, APIs, and external developer tools. You can think of it as a universal plug that allows LLMs to connect seamlessly with information outside of their training data.

Traditional LLMs are limited in two critical ways: they are static and isolated. This means that once an LLM is trained, its knowledge is frozen in time, and it cannot access external tools or databases unless you build custom integrations. MCP solves both of these problems by turning LLMs into dynamic agents. Through MCP, AI systems can query real-time data, update records, and trigger workflows.

For example, an enterprise assistant built with MCP could answer questions about project timelines, check your Google Calendar, update a ticketing system, query metrics, update internal systems, book events, or send an email within the same conversation. In creative fields, MCP-enabled AIs could write code and deploy it to production environments or generate 3D designs and send them directly to a printer.

Simply put, MCP increases LLM utility and automation by enabling it to perform a wide range of actions that would be impossible without extensive custom engineering.

One of the most important advantages of MCP is that it significantly reduces the hallucinations or inaccuracies that LLMs often generate by allowing models to access authoritative, real-time sources like your databases and APIs. This ensures that your LLMs’ outputs are more grounded in reality rather than relying on probabilistic text generation.

Additionally, unlike proprietary integrations that lock AI applications into a specific tool or vendor ecosystem, MCP is an open standard, which enables developers to share pre-built MCP server frameworks. This allows AI systems to evolve over time, and provides the critical benefit of solving the N x M problem of integration, where N (AI models) and M (tools) require N x M number of custom connectors. MCP provides a consistent grammar and communication protocol, standardizing the interface and allowing a single tool to be shared across models via a plug-and-play architecture. This makes it easier to reuse components, accelerates development, and fosters open collaboration across vendors and platforms without rewriting application logic, positioning MCP as foundational infrastructure rather than a short-lived integration layer.

MCP uses a client-server architecture. The AI application acts as the MCP host, while MCP clients serve as bridges to external systems and tools. These clients handle session management, parsing, reconnection, and translation of user requests into MCP’s structured format. Each MCP client communicates with a unique MCP server, which connects to external databases, APIs, and web services, enabling it to execute tool functions, fetch data, or provide prompts.

MCP servers expose three core primitives: resources, tools, and prompts. Resources provide read-only access to data sources like databases or files; tools perform actions, such as making API calls or triggering workflows; and prompts are reusable templates that set the structure for how the LLM communicates with tools and data. MCP uses these primitives as structured, declarative interfaces rather than allowing the LLM to issue arbitrary API calls. This streamlines the AI by shielding it from low-level system complexity, ensuring that it invokes well-defined actions with clearly scoped inputs and outputs.
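
As a concrete sketch of those three primitives, here is a minimal MCP server using the official Python SDK’s FastMCP helper (pip install mcp), as I understand its API. The plant names and data are made up for illustration.

```python
# Minimal MCP server sketch showing the three primitives: a resource,
# a tool, and a prompt. Machine names and values are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("plant-demo")

# Resource: read-only access to a data source (stubbed; a real server
# would query a historian or database).
@mcp.resource("machines://line1/status")
def line1_status() -> str:
    """Current status of production line 1."""
    return "line1: running, OEE 87%"

# Tool: an action the model may invoke, with typed, scoped inputs.
@mcp.tool()
def create_work_order(machine_id: str, description: str) -> str:
    """Open a maintenance work order (stub)."""
    return f"Work order created for {machine_id}: {description}"

# Prompt: a reusable template framing how the LLM uses the above.
@mcp.prompt()
def shift_report(shift: str) -> str:
    return f"Summarize production for the {shift} shift using the line status resources."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Note how the LLM never issues arbitrary API calls; it only sees the declared resource, tool, and prompt with their typed inputs and outputs.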

MCP can be deployed in many ways to align with the needs of different environments and industries:

  • Local servers for privacy-sensitive and high-speed offline tasks
  • Remote servers for cloud-based, shared services
  • Managed servers for scalability and operational simplicity
  • Self-hosted servers for compliance, control, on-premise, or legacy environments

Using AI with MCP is very simple from the user’s perspective. You prompt your LLM as you normally would, and the MCP-connected system handles the rest. For example, if you ask, “Build me a report,” the AI host initiates a tool discovery process by querying the MCP server. It retrieves a list of available tools, selects the appropriate one, and calls the function with the necessary parameters.

If your system needs a real-time update, such as a tool becoming unavailable, the MCP server can push a notification to the client without waiting for a new prompt. Once the tool completes its task, MCP integrates the results into the AI’s response or uses them to trigger the next action in a multi-step workflow.
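
Here is a matching sketch of the client-side discovery-then-call flow just described, using the Python SDK’s stdio client and assuming the server sketch above is saved as server.py (a hypothetical filename):

```python
# Sketch of tool discovery and invocation over MCP, per the flow above.
# Assumes the earlier server sketch is saved as server.py.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool discovery: ask the server what it can do.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Invoke a tool with scoped, structured arguments.
            result = await session.call_tool(
                "create_work_order",
                {"machine_id": "line1", "description": "Replace worn belt"},
            )
            print(result.content)

asyncio.run(main())
```

In a real deployment the AI host performs these steps on the user’s behalf; the point is that discovery and invocation follow one protocol regardless of which server or vendor sits on the other end.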

This orchestration model makes MCP ideal for building advanced AI agents capable of reasoning with live data, executing actions across systems, and adapting dynamically as tools and environments change.

MCP represents a foundational shift in how AI connects to systems. It transforms LLMs from static knowledge engines into intelligent, action-capable systems. As adoption grows, MCP is poised to become a core part of modern software infrastructure, powering a new generation of agentic and adaptive AI applications.

Real-World Application of AI

I issued a challenge regarding real-world applications of AI. I have a craving to dig beneath the hype I’ve been reading. Applications with built-in artificial intelligence (AI) have existed for decades. You probably are running some—certainly in manufacturing, but also in your daily work on a computer.

The hype grew with the rapid development of large language models (LLMs) and what is labeled Generative AI. People have been falling all over themselves playing with ChatGPT, Claude, Sora, and much more. I use Claude. It’s been helpful as a researcher/analyst.

Is there more?

A publicist for a new company called GrayCyan took me up on the challenge.

Many manufacturers are sitting on AI-driven efficiencies that go way beyond robots and predictive maintenance, and most shops haven’t even noticed them yet.

OK, that’s enticing.

Nishkam Batta, founder and CEO of GrayCyan, a company that specializes in creating custom AI solutions for the manufacturing and industrial sector, has a unique perspective on these gaps. Some topics he can speak to:

  • Overlooked AI moves that are driving the next wave of efficiency in industrial operations
  • The AI gap costing manufacturers millions: three places where money’s leaking
  • Beyond predictive maintenance: simple hidden AI plays that slash costs and boost throughput

I had to take him up on that. We talked last week.

They have a LinkedIn magazine called HonestAI. You can check that one out. They also publish it as a glossy and as a PDF. It’s pretty cool.

Here is some background.

GrayCyan is an applied artificial intelligence company that builds human-in-the-loop AI systems for mid-sized organizations. The company specializes in AI middleware that integrates with legacy ERPs, CRMs, and operational platforms to automate administrative workflows, improve data consistency, and support real-world business operations.

He told me recent studies show that SMBs that adopt AI for operations and planning improve coordination across production, purchasing, quality, and scheduling. This means fewer misunderstandings, fewer duplicated tasks, and fewer delays caused by missing or outdated information. Because small companies feel the impact of every mistake more sharply than enterprises do, this improved visibility allows them to run smoother, leaner operations that often outperform larger competitors weighed down by rigid internal structures.

Bridging the Design-to-Manufacturing Gap with AI-Driven Generative Engineering

I appreciate press releases about AI that include definite use cases rather than just the usual vague “we’ve got AI.” InfinitForm is a company new to me. It uses the popular Co-Pilot form of AI for its Generative Engineering Platform.

InfinitForm launched its Generative Engineering Platform, the next stage in the evolution of Design for Manufacturing (DFM). The Generative Engineering Platform is powered by the InfinitForm AI Co-Pilot to automate DFM analysis while optimizing for manufacturing processes, freeing engineers to focus on innovation rather than design iterations and reducing design cycles by 60-80%. 

Speaking from past harsh experience as a manager of product development, anything reducing engineering and design time getting us into manufacturing more quickly is a win.

The Generative Engineering Platform is a software-as-a-service (SaaS) platform that integrates with computer-aided design (CAD) workflows and uses artificial intelligence (AI) to optimize design for manufacturability. The Platform fosters a manufacturing-first approach that extends generative design beyond additive-only optimization, providing engineers and designers with automated analysis and intelligence tools to bridge the gap between design and production. 

Much of the PLM, CAD, and similar technologies on the cutting edge have moved into a variety of cloud-enabled applications. This fits the trend.

The Generative Engineering Platform automates design while optimizing for manufacturing processes, including CNC (computer numerical control) machining, die casting, injection molding, extrusion, additive, and hybrid manufacturing. Automated analysis accounts for multiple manufacturability variables, including wall thickness, draft angles, tool accessibility, tolerance stack-up, assembly complexity, and tooling feasibility. The Platform also analyzes the cost of manufacture and provides first-pass yield predictions.

The InfinitForm AI Co-Pilot amplifies rather than replaces engineering expertise to accelerate decision-making, freeing design engineers to focus on innovation rather than manufacturability trade-offs. Using AI, the Generative Engineering Platform enables design engineers to explore more concepts with confidence that the results will be manufacturable.

The Platform also reduces the time required for handoffs to manufacturing engineers from weeks to days. Manufacturing engineers gain early visibility into design decisions that could affect manufacturing, eliminating surprises and reducing time-to-production. Using AI to ensure manufacturability also delivers a higher first-pass manufacturing yield.

The Generative Engineering Platform also features a Privacy-First Architecture to protect intellectual property. Customer designs are never used to train Platform algorithms, so proprietary data is always protected.
