Every morning last week I listened to presentations from OPC Day, which actually ran a full week. Once again this year, the conference reported on a vast amount of work done by volunteers from numerous companies to push forward the cause of interoperability in manufacturing and industrial data communications. Earlier this year, I attended the ODVA annual general meeting. This virtual conference by the OPC Foundation is well worth a listen.
There were two or three presentations on MQTT in which speakers tried mightily to walk the line between explaining that these are not competing technologies and evangelizing the benefits of OPC UA over MQTT versus Sparkplug B. The presentations were balanced for the most part. OPC UA is a substantial information model. MQTT is a lightweight transport protocol widely adopted in IT. Sparkplug B is a lightweight information model that requires some extra definition work from integrators but keeps overhead low. Obviously, there is a place for each.
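To make the contrast concrete, here is a minimal sketch of how Sparkplug B structures MQTT topics. The topic namespace below follows the published Sparkplug specification; the group, node, and metric names are my own illustrative assumptions, and the JSON payload is a simplification (real Sparkplug B encodes payloads as protobuf) used only to make the metric structure visible.

```python
import json

SPARKPLUG_NAMESPACE = "spBv1.0"  # Sparkplug B topic namespace prefix

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic string.

    Topic form: spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
    where message_type is one of NBIRTH, NDEATH, DBIRTH, DDEATH,
    NDATA, DDATA, NCMD, DCMD, or STATE.
    """
    parts = [SPARKPLUG_NAMESPACE, group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

# A DDATA (device data) payload sketch -- JSON stands in for protobuf here.
payload = {
    "timestamp": 1700000000000,
    "metrics": [
        {"name": "Line1/Temperature", "value": 72.4, "type": "Double"},
    ],
    "seq": 3,
}

topic = sparkplug_topic("PlantA", "DDATA", "edge01", "plc42")
print(topic)  # spBv1.0/PlantA/DDATA/edge01/plc42
```

The fixed topic grammar is what keeps Sparkplug lightweight: any subscriber that knows the namespace can discover and parse device data without a heavyweight shared schema, which is exactly the trade-off against OPC UA's richer information model.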
I’ve added a list of videos from the OPC Foundation YouTube channel for your viewing pleasure:
Day 1: https://youtu.be/2i54Q-2IvCQ
Day 2: https://youtu.be/CsXagNmWWjY
Day 3: https://youtu.be/8XuTAcG598o
Day 4: https://youtu.be/ezSRRaG1fAE
Day 5: https://youtu.be/ZzS7Z8a7c1I
I applaud these efforts to improve and increase digital interoperability through industry or formal standards and open source. These efforts, sustained over many years and in some cases pre-dating the digital era, have driven progress not only in technology but in the lives of users. This announcement comes from the Digital Twin Consortium, a project of The Object Management Group (OMG).
Digital Twin Consortium (DTC) announced the Digital Twin System Interoperability Framework. The framework characterizes the multiple facets of system interoperability based on seven key concepts to create complex systems that interoperate at scale.
“Interoperability is critical to enable digital twins to process information from heterogeneous systems. The Digital Twin System Interoperability Framework seeks to address this challenge by facilitating complex system of systems interactions. Examples include scaling a smart building to a smart city to an entire country, or an assembly line to a factory to a global supply chain network,” said Dan Isaacs, CTO, Digital Twin Consortium.
The seven key concepts of the DTC Digital Twin System Interoperability Framework are:
1 System-Centric Design – enables collaboration across and within disciplines—mechanical, electronic, and software—creating systems of systems within a domain and across multiple domains.
2 Model-Based Approach – with millions and billions of interconnections implemented daily, designers can codify, standardize, identify, and reuse models in various use cases in the field.
3 Holistic Information Flow – facilitates an understanding of the real world for optimal decision-making, where the “world” can be a building, utility, city, country, or other dynamic environment.
4 State-Based Interactions – the state of an entity (system) encompasses all the entity’s static and dynamic attribute values at a point in time.
5 Federated Repositories – optimal decision-making requires accessing and correlating distributed, heterogeneous information across multiple dimensions of a digital twin, spanning time and lifecycle.
6 Actionable Information – ensures that information exchanged between constituent systems enables effective action.
7 Scalable Mechanisms – ensures interoperability mechanism(s) are inherently scalable from the simplest interoperation of two systems to the interoperability of a dynamic coalition of distributed, autonomous, and heterogeneous systems within a complex and global ecosystem.
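Concept 4, state-based interaction, can be made concrete with a small sketch: two systems interoperate by exchanging snapshots of an entity's state rather than streams of raw events. The class and field names below are my own illustrative assumptions, not part of the DTC framework.

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)
class EntityState:
    """Snapshot of an entity's static and dynamic attribute values at a point in time."""
    entity_id: str
    timestamp: float
    attributes: dict

def snapshot(entity_id, attributes):
    # Capture static and dynamic attributes together, stamped with the capture time.
    return EntityState(entity_id, time.time(), dict(attributes))

pump = snapshot("pump-101", {"model": "X200", "rpm": 1750, "vibration_mm_s": 2.1})
later = snapshot("pump-101", {"model": "X200", "rpm": 1762, "vibration_mm_s": 2.4})

# Systems interoperating on state compare snapshots to see what changed.
changed = {k for k in pump.attributes if pump.attributes[k] != later.attributes[k]}
print(sorted(changed))  # ['rpm', 'vibration_mm_s']
```

The point of the state-based style is that a consuming system needs only the latest snapshot to act, which decouples it from the producer's internal event history.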
“The Digital Twin System Interoperability Framework enables USB-type compatibility and ease for all systems connected to the Internet and private networks, which until now, has been the domain of system integrators,” said Anto Budiardjo, CEO, Padi.io. “This means system integrators can concentrate on designing applications rather than point-to-point integrations.”
Interoperability has been a great benefit to consumers in many areas of the economy, even in industrial technology, where many forces coalesce to circumvent it.
I have written about interoperability specifically several times and have even given a couple of presentations on the subject. None of my work comes close to touching the work of Seth Godin and this podcast on interoperability at Akimbo.
Interoperability is great for users. The ability to connect different components and software applications is powerfully enabling. However, suppliers fear they will lose business if they cannot lock customers into their own proprietary ecosystems. Experienced users easily dismiss the argument that "all our products work better together when we control the system." We've been there. That is not always the case.
The irony…interoperability is actually better for suppliers in the long run.
Check out the podcast as Seth describes indifferent, cooperative, and adversarial interoperability.
My inbox has accumulated a plethora of news about open—open standards, open source, open interoperability. Open benefits implementers (end users) because so much thought and work have already gone into defining models, data, messaging, and the like that integration time and complexity can be greatly reduced. Yes, integrators remain necessary. But time to production, one of the critical measures of product success, improves. Not to mention time to troubleshoot, both during startup and during operation.
Without warning a short time ago, I received a call from Alan Johnston, president (and driving force for many years) of MIMOSA. I attended MIMOSA meetings for several years and served for a year as director of marketing. I was even part of the meeting that birthed the OIIE name and fleshed out the original model. I set up a number of meetings, but we were just a little premature. We needed a bit more momentum from industry and academia to get things going. The reason for Alan's call was that momentum is growing again: several organizations in Australia are interested, there is renewed interest from the ISA 95 committee, and the OpenO&M Initiative has gained new life.
So, I wound up sitting through most of four hours of introductory meeting as the various parties—old and new—talked about what they were working on and where it all needed to go to get the job done. And Alan was right. Momentum has built. It's time to talk about this again.
The driving force for this work continues to be fostering interoperability and data/information flow among the major applications behind the design, construction, and operation & maintenance of a plant—engineering, operations, maintenance. Any of us who have ever searched for the current and correct information or specification for a piece of process equipment facing an impending unplanned shutdown understand almost intuitively the critical nature of this work. (See the Solutions Architecture diagram.)
An executive summary of a white paper I wrote a few years ago still exists on my Dropbox here. The information remains relevant even though some of the organizations have changed and some technology has been updated.
The Open Industrial Interoperability Ecosystem (OIIE) enables a shift from traditional systems integration methods to standards-based interoperability in asset-intensive industries, including the process industries, integrated energy, aerospace and defense, and other key critical infrastructure sectors.
The OIIE digital ecosystem is a supplier-neutral, industrial interoperability ecosystem, which provides a pragmatic alternative to the status quo, enabling the use of Commercial Off The Shelf (COTS) applications from multiple suppliers, as well as bespoke applications. It is defined by a standardized Industry Solutions Architecture, which enables implementations of OIIE instances for both owner/operators and their major supply chain partners that are adaptable, repeatable, scalable and sustainable at substantially lower cost than traditional methods.
The OIIE is an outgrowth of several related industry standardization activities, each of which addresses part of the industries' requirements for standards-based interoperability. The OIIE brings these individual efforts together, with the direct participation and support of multiple participating industry standards organizations. Major parts of the OIIE include standards associated with the OpenO&M Initiative and with ISO 15926. The OIIE uses these existing standards in combination with each other to meet the identified systems and information interoperability requirements for use cases defined and prioritized by the industries served.
Articles I have written over the years:
Standard of standards model for asset data
Plethora of Protocols
Center Industrial Internet of Things
Oil and Gas Interoperability Pilot
OpenO&M is an initiative of multiple industry standards organizations to provide a harmonized set of standards for the exchange of Operations & Maintenance (O&M) data and associated context. OpenO&M is an open, collaborative, effort composed of diverse groups of relevant organizations and subject matter experts.
The original members of the OpenO&M Initiative are ISA, MESA, MIMOSA, OAGi, and the OPC Foundation. ISA brings the ISA 95 standard, MESA houses B2MML, MIMOSA contributes CCOM among other standards, and the OPC Foundation brings OPC UA.
The purpose of last week's conference calls was to revitalize the work, introduce additional organizations, and (importantly) bring in new and younger participants. I left the meeting with renewed optimism that the work will continue to fruition. I am personally a globalist, but as a citizen and resident of the US, I hope that our engineers wake up to the utility of standards. Most interest over the past several years has come from Asia, with Europe remaining strong.
Perhaps the component that holds everything together is the ISBM. It was previously described as ws-ISBM because it was based on SOAP and web services. The March 2020 update, ISBM v2.0, added REST and JSON support.
ISBM is an implementation specification for the ISA-95 Message Service Model (MSM). It provides the additional specificity required for two or more groups to develop implementations of the MSM that interoperate properly without a priori knowledge of each other. The ISBM provides a consistent set of specifications supporting both intra- and inter-enterprise activities, where a combination of functionality, security, supplier-neutrality, and ease of implementation is required for industry digital transformation.
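To illustrate the channel-based publish/subscribe pattern the MSM describes, here is a minimal in-memory sketch. The class, method, and channel names are my own illustrative assumptions, not the actual ISBM operations, which the specification defines over SOAP and, since v2.0, REST/JSON.

```python
from collections import defaultdict

class ChannelBroker:
    """Toy broker modeling the MSM's channel concept.

    Providers post messages to a named channel; consumers open sessions
    on that channel and read messages posted after they subscribed.
    """
    def __init__(self):
        self._channels = defaultdict(list)  # channel id -> ordered message list
        self._sessions = {}                 # session id -> [channel id, read offset]
        self._next_session = 0

    def open_subscription(self, channel_id):
        # A new consumer session starts reading from the current end of the channel.
        self._next_session += 1
        sid = f"session-{self._next_session}"
        self._sessions[sid] = [channel_id, len(self._channels[channel_id])]
        return sid

    def post_message(self, channel_id, message):
        self._channels[channel_id].append(message)

    def read_messages(self, session_id):
        # Return unread messages for this session and advance its offset.
        channel_id, offset = self._sessions[session_id]
        messages = self._channels[channel_id][offset:]
        self._sessions[session_id][1] = len(self._channels[channel_id])
        return messages

broker = ChannelBroker()
sid = broker.open_subscription("/oiie/work-orders")
broker.post_message("/oiie/work-orders", {"order": "WO-1001", "asset": "pump-101"})
print(broker.read_messages(sid))  # [{'order': 'WO-1001', 'asset': 'pump-101'}]
print(broker.read_messages(sid))  # [] -- each message is delivered once per session
```

The value of specifying this pattern precisely, as ISBM does, is that an engineering system and a maintenance system built by different vendors can exchange work orders over a shared channel without either knowing the other's internals.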
Alan Johnston caught up with me yesterday to update me on progress MIMOSA has made toward updating and driving adoption of its asset information and data-flow models, described by the Open Industrial Interoperability Ecosystem (OIIE). I had been working with them a few years ago, but it was too early for the promotional work I could help them with.
[Note: This is an old slide I had in my database. I don’t think Fiatech and POSC Caesar are still involved, but I cannot edit the slide. The ISA 95 committee is still involved.]
I did write an Executive Summary White Paper that has been downloaded many times over the years. The paper is four years old, but I think it still describes the ideas of interoperability, the use of standards, and the handover from engineering to operations and maintenance in process plants.
Many operations and maintenance managers have expressed frustration with handover and startup events. When I've described this system, they've all been receptive.
On the other hand, neither the large integration companies nor the large automation and control companies are thrilled with it, out of concern that it would greatly reduce the revenue generated by lock-in.
I could reference the work of the Open Process Automation group, which is also attempting a "standard of standards" approach, decoupling software from hardware for improved upgradability. Schneider Electric (Foxboro) and Yokogawa have seen the possibility of competitive advantage in this approach, especially with ExxonMobil. But the view is not generally held.
Back to Alan. He has been making progress on the standards adoption front and getting some buy-ins. I’ve always seen the potential for improved operations and maintenance from the model. But the amount of work to get there has been staggering.
Looks like they are getting there.
I’m interested in where all this IoT, IT/OT convergence, and digital transformation technology is taking us. This survey on platforms is revealing. Nearly forty percent of respondents are using Platform as a Service (PaaS), containers, and serverless technologies together for flexibility and interoperability.
Companies are using different cloud-native technologies side-by-side at an increasing pace to build both new cloud-native applications and to refactor traditional applications, according to the latest report <https://www.cloudfoundry.org/multi-platform-trend-report-2018> released by the Cloud Foundry Foundation <http://www.cloudfoundry.org/>, home of the most widely-adopted open source cloud technologies in the world.
“As IT Decision Makers settle into their cloud journey, they are more broadly deploying a combination of available platforms, including PaaS, containers and serverless,” said Abby Kearns, Executive Director, Cloud Foundry Foundation. “In this multi-platform world, it should come as no surprise that, as they become more comfortable with these tools, IT decision makers are searching for a suite of technologies to work together. They want technologies that integrate with their current solutions in order to address their needs today—but are flexible enough to address their needs in the future.”
Key Findings include:
• A Multi-Platform World: Technologies are being used side by side more than ever before. Among IT decision makers, 77 percent are using or evaluating Platform-as-a-Service (PaaS), 72 percent are using or evaluating containers, and 46 percent are using or evaluating serverless computing. More than a third (39 percent) are using a combination of all three technologies together.
• A Mix of New Cloud-Native and Refactoring Legacy Applications: 57 percent of IT decision makers report their companies do a mix of building new cloud-native applications and refactoring existing applications, an increase of nine percentage points from late 2017.
• Containers Have Crossed the Chasm: For companies choosing to develop new or refactor existing applications, they are choosing containers.
• Serverless is on the Upswing: Evaluation of serverless computing is gaining momentum rapidly. Only 43 percent of respondents are not using serverless, and 10 percent more companies are evaluating serverless than in 2017.
• PaaS Usage Continues to Swell: PaaS is being more broadly deployed than ever before and companies are developing new cloud-native applications at increased momentum. It stands to reason that these two upsurges happen in tandem. This growth in usage is bolstered by the 62 percent of IT decision makers who report their companies save over $100,000 by using a PaaS.
• Flexibility and Interoperability are Key: IT decision makers ranked “Integration with existing tools” and “Flexibility to work with new tools” in the top five attributes in a platform, alongside Security, Support, and Price.
Less than six months ago, the Foundation’s 2017 report “Innovation & Relevance: A Crossroads on the Cloud Journey <https://www.cloudfoundry.org/cloud-journey-report/>” indicated IT decision makers were advancing in their cloud journeys. In 2016, IT decision makers reported a lack of clarity around IaaS (Infrastructure-as-a-Service) and PaaS (Platform-as-a-Service) technologies, and most were still in the evaluation stages. By late 2017, they had progressed to selection and broader deployment of cloud solutions. Twenty percent of IT decision makers report primarily building new cloud-native applications, up five percentage points from last year, while 13 percent say they are primarily refactoring, a drop of 11 points. As an increasing number of companies develop new cloud-native applications, PaaS is being broadly deployed by more companies than ever.
Cloud Foundry Application Runtime is a mature and growing cloud application platform used by large enterprises to develop and deploy cloud-native applications, saving them significant amounts of time and resources. Enterprises benefit from the consistency of Cloud Foundry Application Runtime across a variety of distributions of the platform, thanks to a Certified Provider program. Cloud Foundry Container Runtime combines Kubernetes with the power of Cloud Foundry BOSH, enabling a uniform way to instantiate, deploy and manage highly available Kubernetes clusters on any cloud, making deployment, management and integration of containers easy.
To receive a copy of the survey, go here <https://www.cloudfoundry.org/multi-platform-trend-report-2018>. The survey was conducted and produced by ClearPath Strategies <http://www.clearpath-strategies.com/>, a strategic consulting and research firm for the world’s leaders and progressive forces.
Cloud Foundry is an open source technology backed by the largest technology companies in the world, including Dell EMC, Google, IBM, Microsoft, Pivotal, SAP and SUSE, and is being used by leaders in manufacturing, telecommunications and financial services. Only Cloud Foundry delivers the velocity needed to continuously deliver apps at the speed of business. Cloud Foundry’s container-based architecture runs apps in any language on your choice of cloud — Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud, Microsoft Azure, OpenStack, VMware vSphere, and more. With a robust services ecosystem and simple integration with existing technologies, Cloud Foundry is the modern standard for mission critical apps for global organizations.
The Cloud Foundry Foundation is an independent non-profit open source organization formed to sustain the development, promotion and adoption of Cloud Foundry as the industry standard for delivering the best developer experiences to companies of all sizes. The Foundation projects include Cloud Foundry Application Runtime, Cloud Foundry Container Runtime, BOSH, Open Service Broker API, Abacus, CF-Local, CredHub, ServiceFabrik, Stratos and more. Cloud Foundry makes it faster and easier to build, test, deploy and scale applications, and is used by more than half the Fortune 500, representing nearly $15 trillion in combined revenue. Cloud Foundry is hosted by The Linux Foundation and is an Apache 2.0 licensed project available on GitHub: https://github.com/cloudfoundry. To learn more, visit: http://www.cloudfoundry.org <http://www.cloudfoundry.org/>.