There were plenty of cool new products unveiled at last week’s Emerson Global Users Exchange. As a former product development manager, I liked the “peanut butter and chocolate” moment when Emerson’s engineers were trying to solve the problem of locating people in a plant. They realized that many customers already have a WirelessHART mesh network. Why not use location tags with WirelessHART as the communications service? Cool.
Topping the news released during the week was the announcement that Emerson has agreed to acquire Intelligent Platforms, a division of General Electric. Intelligent Platforms’ programmable logic controller (PLC) technologies will enable Emerson, a leader in automation for process and industrial applications, to provide its customers broader control and management of their operations.
This is a great acquisition. It reveals Emerson as a company that has its act together. This is the consolidation trend in the industry. Siemens has a complete portfolio (well, mostly). ABB recently acquired B+R Automation in a similar move. Schneider Electric added Foxboro and Triconex from Invensys to its mostly factory automation portfolio. So there are four major companies aligning their competitive offerings. And all are focused on digital transformation for their customers.
Even Rockwell Automation has built a process automation business over time. It recently shunned acquisition and instead invested $1 billion for a little over 8% of PTC in order to achieve a closer partnership with ThingWorx (and a seat on the board). Maybe with an executive on the board, it can learn how Jim Heppelmann managed to build a company through acquisition.
Back to Emerson. GE IP (formerly known as GE Fanuc) has a line of PLCs, motion control, and HMIs. It hasn’t promoted its products for years, but they are still alive and well in Charlottesville, VA. This is a great strategic move.
As for GE? Well, we know that it is having a fire sale. I’d wondered about this part of the business. Now we all wonder about what’s left of GE Digital. We know from a Wall Street Journal article that it’s for sale. We also know that the board just replaced the CEO, evidently for not moving quickly enough. But…will anyone want GE Digital? I’m sure everyone has looked. Here’s a thought: what if it wound up with an IT company looking to complement its burgeoning IoT practice?
Inductive Automation included a number of partner companies in its Ignition Community Conference last week in Folsom, CA. Among these companies was Bedrock Automation. I’ve written about Bedrock before a few times. This trip I was looking at its display when its CEO in disguise appeared.
Why it matters: Cyber security is at the top of everyone’s mind these days. Bedrock Automation has designed a system to be secure from all parts of the supply chain.
Albert Rooyakkers, founder/CEO/CTO, was wearing a hat and sunglasses and I walked right past him. However, he came over and gave me his usual high energy explanation of the entire Bedrock system.
Bedrock Automation builds an industrial control system (PLC) that was designed from the beginning with security in mind. Not just cyber security, but also security from tampering, lightning, high-energy electromagnetic interference, and more.
Intrinsic Security begins with Strong Cryptography, then adds Secure Components, Component Anti-Tamper, Secure Firmware, Secure Communications, and Module Anti-Tamper.
The all-metal construction showcases the secure design, as do the I/O modules and their communication with the controller (no insecure backplane).
Public Key Infrastructure
Rooyakkers always gives me the deep dive into Public Key Infrastructure which leads to Hardware Root of Trust—the essential element of security in the product.
Use of asymmetric cryptography for authentication and key exchange is the basis of secure e-commerce. In the internet context, there is a critical additional piece: a root of trust at the center of an exchange, called a Certificate Authority. Key pairs, certificates, a root of trust, and interoperable algorithms together form a Public Key Infrastructure (PKI), which includes the infrastructure and policies to manage and maintain trust. Some of the building blocks include:
• Transport Layer Security
• X.509 Certificates
• Certificate Chain of Trust
• Root Certificate Authority
Until now PKI has not been implemented in industrial control systems. Bedrock Automation embeds the Hardware Root of Trust in the control system. It is designed from the ground up with security in mind.
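To make the chain-of-trust idea concrete, here is a deliberately simplified Python sketch. The “signatures” are toy hash constructions, not real asymmetric cryptography, and every name and key in it is invented; the point is only the verification walk from a device certificate up through an intermediate to a trusted root.

```python
import hashlib

def toy_sign(data: bytes, key: bytes) -> str:
    # Toy stand-in for a real asymmetric signature (e.g. RSA or ECDSA).
    return hashlib.sha256(key + data).hexdigest()

def toy_verify(data: bytes, key: bytes, signature: str) -> bool:
    return toy_sign(data, key) == signature

def make_cert(subject: str, subject_key: bytes, issuer_key: bytes) -> dict:
    # A "certificate" here is just: subject, its key, and the issuer's signature.
    payload = f"{subject}:{subject_key.hex()}".encode()
    return {"subject": subject, "key": subject_key,
            "payload": payload, "sig": toy_sign(payload, issuer_key)}

def verify_chain(chain: list, trusted_root_key: bytes) -> bool:
    # Walk leaf -> root: each cert must verify against the next cert's key,
    # and the last cert must verify against the trusted root key.
    keys = [c["key"] for c in chain[1:]] + [trusted_root_key]
    return all(toy_verify(c["payload"], k, c["sig"])
               for c, k in zip(chain, keys))

root_key = b"root-secret"            # the Hardware Root of Trust analog
ca_key = b"intermediate-secret"
device_key = b"io-module-secret"

ca_cert = make_cert("Plant CA", ca_key, root_key)
device_cert = make_cert("I/O Module 7", device_key, ca_key)

print(verify_chain([device_cert, ca_cert], root_key))   # True
```

A tampered certificate anywhere in the chain makes `verify_chain` return `False`, which is the property a hardware root of trust anchors in a real system.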
Bedrock Automation has always gone to market with systems integrators—a strategy that fits with Inductive Automation. In many remote control and SCADA systems, the two form a perfect pair.
Just when I thought I’d never write about controllers again, here comes a very interesting announcement from Emerson Automation Solutions [note new name]. Taking direct aim at competitors who have moved aggressively from discrete control into process systems, Emerson announced the launch of the DeltaV PK Controller.
This controller targets fast-growth industries traditionally less reliant on large-scale automation. The next-gen controller provides scalable automation control to all process industries, particularly parts of the life sciences, oil and gas, petrochemical, and discrete manufacturing industries that have relied on complex, non-integrated programmable logic controllers (PLCs) with limited operational capabilities. The fit-for-purpose DeltaV PK Controller is the process industry’s first controller that manufacturers can scale down for skid units or scale up to be natively merged into the DeltaV DCS in a larger plant.
These industries tend to use PLCs for smaller applications, which can create disconnected “Islands of Automation,” and limits plant production improvements. The DeltaV PK Controller bridges small and large control applications. Organizations can leverage the DeltaV PK Controller for effective, easy-to-implement standalone automation control akin to a PLC but with the features of a full-scale DCS, including advanced batch production, recipe management, execution, and historization. Users can then choose to leave the DeltaV PK Controller standalone, or natively merge it into their DeltaV DCS. This capability eliminates operational complexity and dramatically improves the performance, safety, and efficiency of their entire project and operational lifecycle.
“The DeltaV PK Controller delivers a business-effective solution for organizations of all sizes to improve automation control and integration,” said Jessica Jordan, Emerson product manager. “The controller is capable of powerful standalone control for advanced automation on skids today while still being able to easily integrate into a full-scale DCS for total plant production control.”
The DeltaV PK Controller is the latest addition to Emerson’s Project Certainty initiative, targeting radical transformation in capital project execution. The new controller will simplify capital projects by enabling OEM skid-builders to design and produce skids in the same way they do today, while eliminating the costs, time, and risks associated with integrating a PLC into their control system.
The DeltaV PK Controller was designed from the start with connectivity, particularly into the IIoT, in mind. The scalable controller leverages an assortment of communication protocols, including the first Emerson controller with a built-in OPC UA server. It is also the first Emerson controller with six Ethernet ports and can operate using any Emerson DeltaV I/O type, including DeltaV Electronic Marshalling, traditional marshalled I/O, wireless I/O, and integrated safety instrumented systems. In addition, it has built-in protocols to communicate with Ethernet devices such as drives and motors. Together, these features make connectivity easier at every stage and help plants achieve operational benefits of cloud-based tools and analytics through the IIoT. The DeltaV PK Controller also features built-in redundancy for controllers, communication, and power supplies, allowing organizations to improve uptime without adding to complexity or footprint.
Can we bring more discipline to PLC programming in industrial control?
Discussion swirls at every gathering of automation professionals about the new generation of engineers entering (we hope) the industry. One thing is for sure: the new generation begins with a much deeper computer science background than any before. Will they want to continue to code programmable logic controllers (PLCs) in the same graphical relay representation as their predecessors? Even the Structured Text language popular in Europe and other places can sprawl like an old BASIC program.
I am guessing they will bring more discipline to the craft of coding industrial control than ever before. Evidently PLCopen, the organization devoted to developing standards for PLC programming, does also. It has just announced a new set of coding guidelines.
The organization notes in its press release, “Software is becoming increasingly responsible, complex, and demanding. This does not come without its challenges. Due to the greater complexity, programs are more difficult to maintain, more time consuming, and potentially therefore more expensive. This is why quality is taking such an important role these days.”
Continuing, they note, “Unlike in other industries, such as that of embedded software and computer science, there has not previously been a dedicated standard for Programmable Logic Controller (PLC) programs. This has meant that programs were not measured against anything and were often of a poor quality. But that’s where the independent association PLCopen has come in and set the standard with the release of their coding guidelines. These guidelines are a set of good practice programming rules for PLCs, which will help to control and enhance programming methods within industrial automation.”
PLCopen, whose mission is to provide industrial control programming solutions, collaborated with members from a number of companies in different industries to create these coding guidelines. These companies range from PLC vendors such as Phoenix Systems, Siemens, and Omron, to software vendors such as Itris Automation and CoDeSys, and educational institutions such as RWTH Aachen. The guidelines were inspired by pre-existing standards from other domains, such as IEC 61131-3, the JSF++ coding standard, and MISRA-C, and they are the product of three years of work by the working group. PLCopen’s reference standard can be used for testing the quality of all PLC code, independent of brand and industry.
PLCopen’s coding guidelines are made up of 64 rules, which cover the naming, comments, and structure of the code. Following these guidelines improves code quality and brings greater consistency amongst developers. This results in greater efficiency, as better readability means faster debugging and a program that is easier to maintain. That in turn lowers costs, since less time is required to maintain the program, and maintenance should be straightforward for either an internal or external programmer. If the original developer fails to follow the guidelines when creating a program, other developers and maintenance teams can be obstructed when working with the code during the product lifecycle, creating delays and additional costs.
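As a rough illustration of what automated rule-checking can look like, here is a small Python sketch of a naming-convention checker. The two rules it enforces (CamelCase names, a “G_” prefix on globals) are my own invented examples in the spirit of the guidelines, not quotations from PLCopen’s actual 64 rules.

```python
import re

# Two hypothetical checks in the spirit of the PLCopen guidelines:
# N1: variable names are CamelCase, with no underscores or leading digits.
# N2: global variables carry a "G_" scope prefix (an invented convention,
#     not taken from the actual rule set).
NAME_RE = re.compile(r"^[A-Z][A-Za-z0-9]*$")

def check_names(variables):
    """variables: list of (name, is_global) pairs; returns rule violations."""
    violations = []
    for name, is_global in variables:
        core = name[2:] if name.startswith("G_") else name
        if is_global and not name.startswith("G_"):
            violations.append((name, "N2: global missing G_ prefix"))
        if not NAME_RE.match(core):
            violations.append((name, "N1: not CamelCase"))
    return violations

decls = [("MotorSpeed", False), ("G_LineState", True),
         ("temp_1", False), ("PumpPressure", True)]
for name, rule in check_names(decls):
    print(f"{name}: {rule}")
```

Commercial tools that check PLC code against the real guidelines work on the same principle, just over IEC 61131-3 source rather than a toy declaration list.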
In safety-critical industries there is the standard IEC 61508, which in 2011 was also extended to PLCs. However, as quality becomes an ever more important factor across the board and programs grow bigger and more complex, it is good practice to follow a set of rules or a standard in all industries. PLCopen’s coding guidelines offer a standard that can be used across all industries to greatly improve the quality of the code and, as a result, to help companies save time and money. The introduction of such a standard allows PLC programs to be verified not only from a functionality perspective but also from a coding perspective, by confirming that good-practice programming rules were followed in their creation. Consistency across PLC programs can only be achieved through respect for a global corporate or industrial standard, with PLCopen now being the de facto standard in the automation industry.
With quality playing a greater role in industry and with companies always looking for cost-saving methods, the answer is to use some sort of standard or set of rules in order to meet these goals. PLCopen has created this standard to improve quality and consistency across PLC programs, and so that individual industries and companies don’t have to go to the effort of creating a set of rules themselves. In addition to the internal benefits, the standard also allows companies to enforce their quality requirements on suppliers, software contractors, and system integrators. The only issue for now is that most users verify these rules manually, unaware that tools are available to do this automatically. But overall, following a standard such as the one proposed by PLCopen will greatly improve the quality of the program and will save time and money throughout the whole product lifecycle.
The PLCopen coding guidelines v1.0 are available to download for free from the PLCopen website.
I ran a brief series on industrial data, interoperability, and the Purdue Model (see this one, for example, and others from around that time). It’s about how data is becoming decoupled from the application. Data flow is no longer hierarchical; data seeks out the applications that need it.
This week I took a look at Opto 22’s latest innovation—use of RESTful APIs in an industrial controller. The next step seemed to be looking at MQTT. This is another IT-friendly technology that also serves as an open, standardized method of transporting data—and more.
Then I’ll follow up on a deeper discussion of OPC and where that may be fitting in within the new enterprise data architecture.
I’ll finish the brief series with an application of (perhaps) Big Data and IIoT. It’s not an open standard, but it shows where enterprises could be going.
MQTT and Sparkplug
Inductive Automation has been around for about 13 years, but it has shown rapid growth over the past five. It is a cloud-based HMI/SCADA and IIoT platform. I finally made it to the user conference last September and was amazed at the turnout—and at the companies represented. Its product targets the market dominated in the past by Wonderware, Rockwell Automation’s RSView, and GE Proficy (Intellution iFix in a former life). It’s a private company, but I’ve been trying to assemble some competitive market share guesses. My guess is that Inductive ranks very well against the old guard. Part of the reason is a business model that seems friendly to users.
Just as Opto 22 was an early strong supporter of OPC (and still supports it), so also is Inductive Automation a strong OPC shop. However, just as Opto 22 sees opportunities for better cloud and IT interoperability with REST, Inductive Automation has seen the same with MQTT. In fact, it just pulled off its own Webinar on the subject.
I put in a call and had a conversation with Don Pearson and Travis Cox. Following is a synopsis of the conversation. It is also a preview of the ICC user conference in Folsom, CA, Sept. 19-21, where you can talk with both Arlen Nipper, president and CTO of Cirrus Link and co-developer of MQTT, and Tom Burke, president of the OPC Foundation.
Don and Travis explained that MQTT itself is a middleware “broker” technology. It describes a lightweight, publish/subscribe transport mechanism that is completely agnostic as to the message contained in the communication. So, you could send OPC UA information over MQTT or other types of data. The caveat, as always, is that the application on the receiving end must speak the same “language.”
They see apps talking directly to PLCs/PACs/controllers as going away. We are in the midst of a trend of decoupling data from the application or device.
MQTT is “stateful”: it can report the last known state of a device. It rides on TCP/IP, uses TLS security, and reports by exception.
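A toy Python sketch can illustrate two of those behaviors—retained last state and report by exception—without a real broker. `ToyBroker` and `Device` below are invented stand-ins for illustration, not the API of any MQTT library.

```python
from collections import defaultdict

class ToyBroker:
    """In-process stand-in for an MQTT broker: topic-based pub/sub that keeps
    a retained 'last known state' per topic, mimicking MQTT retained messages."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.retained = {}               # topic -> last payload

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        if topic in self.retained:       # new subscriber gets last state at once
            callback(topic, self.retained[topic])

    def publish(self, topic, payload):
        self.retained[topic] = payload
        for cb in self.subscribers[topic]:
            cb(topic, payload)

class Device:
    """Publishes by exception: only when the value actually changes."""
    def __init__(self, broker, topic):
        self.broker, self.topic, self.last = broker, topic, None

    def report(self, value):
        if value != self.last:           # report by exception
            self.last = value
            self.broker.publish(self.topic, value)

broker = ToyBroker()
sensor = Device(broker, "plant/line1/temp")
sensor.report(72.5)
sensor.report(72.5)                      # suppressed: no change, no traffic
sensor.report(73.1)

seen = []
broker.subscribe("plant/line1/temp", lambda t, v: seen.append(v))
print(seen)                              # [73.1] — last state delivered on subscribe
```

The subscriber arriving late still receives the current state immediately, which is why a stateful, report-by-exception transport suits slow or expensive links in SCADA systems.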
Describing the message
MQTT is, in itself, agnostic as to the message. To be truly useful, however, it needs a message specification. Enter Sparkplug. This technology describes the payload, so it is needed on both sides of the communication. It doesn’t need to know the device itself; it is all about the information. Sparkplug is a GitHub project and, like MQTT, part of the Eclipse Foundation.
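To show what “describing the payload” means in practice, here is a hedged sketch of a Sparkplug-style metric payload. Real Sparkplug B encodes metrics as Protocol Buffers; JSON is used here purely to keep the sketch readable, and the metric names are invented.

```python
import json, time

def encode_metrics(metrics):
    """Encode a Sparkplug-style payload: a timestamp plus named, typed metrics.
    (Real Sparkplug B uses Protocol Buffers; JSON keeps this sketch readable.)"""
    return json.dumps({
        "timestamp": int(time.time() * 1000),
        "metrics": [
            {"name": n, "value": v, "datatype": type(v).__name__}
            for n, v in metrics.items()
        ],
    })

def decode_metrics(payload):
    # The receiver never needs to know the device, only the metric names:
    # the payload carries all the information.
    doc = json.loads(payload)
    return {m["name"]: m["value"] for m in doc["metrics"]}

payload = encode_metrics({"Motor/RPM": 1750, "Motor/Running": True})
print(decode_metrics(payload))
```

Because both sides agree on the payload structure, any subscriber can interpret the data without device-specific drivers, which is the decoupling Don and Travis described.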
I have known Don and Travis for years. I have never heard them as passionate about technology as they were during our conversation.
If you are coming to Folsom, CA for the conference, you’ll hear more. I will be there and would love to have a breakfast or dinner with a group and dive into a deep discussion about all this. Let me know.