This news item covers another open-source industrial computer based on an inexpensive module—in this case, the Raspberry Pi. Again I ask (not predicting, but wondering): will the new wave of engineers inevitably coming into this market prefer something small, inexpensive, programmable in familiar languages, and flexible thanks to advances in networking?
Phytools, a leading IIoT value-added reseller, announced Oct. 11 the publication of its first Revolution Pi White Paper, presenting the seven-year history and technical details of this revolutionary open-source industrial PC (IPC) in an easily digestible medium. Built on the open, customizable, and commercial-off-the-shelf premise of the Raspberry Pi, RevPi ruggedizes the versatile microcomputer for use in industrial applications.
IIoT-enabled IPCs like RevPi are ideal for automation, data connectivity, motion control, robotics support, machine visualization, and many more applications. From 2016 to the present, RevPi has cemented its place in the market by incorporating features such as:
- Modular form factors
- Operating system (OS) determinism
- A wide variety of input/output (I/O) modules
- Communications gateway functionality
- DIN-rail mounting capability
- Extensive support for third-party software, such as the CODESYS soft PLC
The latest revision—RevPi Connect 4—was released in August 2023, equipped with a Broadcom BCM2711 processor and support for WLAN and Bluetooth communications. This most recent lineup addition also comes with a battery-buffered real-time clock and easier I/O expansion with a plug-and-play GUI.
RevPi base modules come equipped with USB, Ethernet, and HDMI connections. The two USB 2.0 sockets are each capable of supplying 500mA at 5V, eliminating the need for an external USB power supply when connecting parallel devices. The onboard RJ45 Ethernet connection is industrially hardened with suppressor circuits, and a micro-HDMI socket is included for connecting a monitor with sound output.
The RevPi distinguishes itself from the Raspberry Pi with a portion of its operating system dedicated to deterministic control. Most PLCs in industrial settings leverage a real-time OS (RTOS) to ensure deterministic program execution through a dedicated routine, a rigorous approach that provides the robust, predictable performance necessary to avoid equipment failures and human harm.
A general-purpose PC OS is more versatile than that of a PLC because of its shareable—and usually more powerful—computing resources, which empower multi-tasking, customization, and processor-intensive operations. However, this flexibility presents a greater risk of OS disruption if one or more tasks become corrupted or overload the processor, potentially leading to delays, instability, or a complete crash.
Revolution Pi uses a split OS, providing the determinism of an RTOS along with the versatility of a standard OS. As a result, RevPi can be developed into an industrially viable small control system, a large-scale IIoT platform, or elements of both simultaneously. Its scheduler—which controls the execution of tasks by the operating system—can be configured extensively to avoid networking or software resource delays, ensuring the best performance for real-time control and other critical or time-sensitive tasks.
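RevPi ships its own tooling for this, but the underlying idea can be sketched on any Linux system with the POSIX scheduling calls Python exposes. The snippet below (a minimal sketch, Linux-only, not RevPi-specific) queries the real-time SCHED_FIFO priority range and then attempts to move the current process onto that policy, which is how a time-sensitive task is given precedence over ordinary workloads:

```python
import os

# Query the priority range Linux allows for the SCHED_FIFO real-time policy.
prio_min = os.sched_get_priority_min(os.SCHED_FIFO)
prio_max = os.sched_get_priority_max(os.SCHED_FIFO)
print(f"SCHED_FIFO priorities: {prio_min}..{prio_max}")

# Move the current process onto the FIFO real-time policy. This normally
# requires root or CAP_SYS_NICE, so fall back gracefully when denied.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prio_max))
    print("running under SCHED_FIFO")
except PermissionError:
    print("insufficient privileges; still on the default policy")
```

A real split-OS design goes much further (CPU isolation, interrupt steering), but the scheduler policy is the knob that makes a task's timing predictable.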
I don’t know when we’ll begin using quantum computing in industrial applications, but heck, we’re beginning to see ChatGPT usage. This news is an advance in the quantum computing state of the art.
- The Quantum Brilliance Qristal SDK moves from beta into broad release for developing on-premise and edge applications for compact, room-temperature quantum accelerators
Quantum Brilliance, the leading developer of miniaturised, room-temperature quantum computing products and solutions, announced June 8 the full release of the Qristal SDK, an open-source software development kit for researching applications that integrate the company’s portable, diamond-based quantum accelerators.
Previously in beta, the Quantum Brilliance Qristal SDK is now available for anyone to develop and test novel quantum algorithms for real-world applications specifically designed for quantum accelerators rather than quantum mainframes. Potential use cases include classical-quantum hybrid applications in data centres, massively parallelised clusters of accelerators for computational chemistry and embedded accelerators for edge computing applications such as robotics, autonomous vehicles and satellites.
“With enhancements based on input from beta users, the Qristal SDK allows researchers to leverage quantum-based solutions in a host of potential real-world applications,” said Mark Luo, CEO and co-founder of Quantum Brilliance. “We believe this powerful tool will help organizations around the world understand how quantum accelerators can enable and enhance productisation and commercialisation.”
Qristal SDK users will find fully integrated C++ and Python APIs, NVIDIA CUDA features and customizable noise models to support the development of their quantum-enhanced designs. The software also incorporates MPI, the global standard for large-scale parallel computing, allowing users to optimise, simulate and deploy hybrid applications of parallelised, room-temperature quantum accelerators in high-performance computing (HPC) deployments from supercomputers to edge devices.
Quantum Brilliance’s quantum systems use synthetic diamonds to operate at room temperature in any environment. Unlike large mainframe quantum computers, Quantum Brilliance’s small-form devices do not require cryogenics, vacuum systems or precision laser arrays, consuming significantly less power and enabling deployment onsite or at the edge.
The current system is the size of a desktop PC, and the company is working to further miniaturise its technology to the size of a semiconductor chip that can be used on any device, wherever classical computers exist today, unlocking practical quantum computing for everyone. The Qristal SDK source code can be downloaded here. The source code includes extensive application libraries for VQE, QAOA, quantum machine learning, natural language processing and more.
I picked up this bit of news last month on a blog at the Open Metaverse Foundation by Royal O’Brien published on December 15, 2022. I had written a couple of things on “industrial metaverse” speculating on what is actually new and what could possibly be realistic. The Linux Foundation has begun an effort to bring companies together to work on definitions and security issues. Befitting a project of the Linux Foundation, openness is the plea and the work.
In October, we brought top experts from diverse sectors together with leaders from many of the projects across the Linux Foundation to discuss what it will take to transform the emerging concept of the Metaverse from promise to reality—from digital assets, simulations and transactions, to artificial intelligence, networking, security and privacy, and legal considerations.
One thing I found interesting is the list of interest groups as initially defined. This provides a bit of definition as to their thinking of what constitutes a metaverse market.
- digital assets,
- virtual worlds and simulation,
- artificial intelligence,
- security and privacy,
- legal and policy.
They are looking for members. I am curious about which companies will join and work on this project. Of course, one thing I won’t discover is which companies join in order to slow down the process.
Like some of its large industrial competitors, ABB is quickly building out industrial software solutions. A friend who is a financial analyst told me that Wall Street and other investors prize software right now. A company focused on instrumentation and automation platforms doesn’t evoke the same eyes full of longing and desire as when it adds software.
In this announcement, ABB and Red Hat, the open source enterprise software company, are partnering to deliver ABB automation and industrial software solutions at the intersection of information technology (IT) and operational technology (OT), equipping the industrial ecosystem with extended deployment capabilities and greater agility. This is consistent with ABB’s vision of the evolution of process automation.
- ABB will deliver digital solutions to customers on-demand and at scale using Red Hat OpenShift
- Customers will be better able to harness the potential of data-based decisions by using applications that can be deployed flexibly from the edge to the cloud
The partnership enables virtualization and containerization of automation software with Red Hat OpenShift to provide advanced flexibility in hardware deployment, optimized according to application needs. It also provides efficient system orchestration, enabling real-time, data-based decision making at the edge and further processing in the cloud.
Red Hat OpenShift, the industry’s leading enterprise Kubernetes platform, with Red Hat Enterprise Linux as its foundation, provides ABB with a single consistent application platform, from small single node systems to scaled-out hyperconverged clusters at the industrial edge, which simplifies development and management efforts for ABB’s customers.
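As an illustrative sketch of what "containerized automation software" means in practice (the workload name and image below are hypothetical, not an actual ABB artifact), a Kubernetes/OpenShift deployment declares the application and its resource envelope, and the orchestrator keeps it running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics            # hypothetical edge workload
spec:
  replicas: 2                     # orchestrator maintains two running instances
  selector:
    matchLabels:
      app: edge-analytics
  template:
    metadata:
      labels:
        app: edge-analytics
    spec:
      containers:
        - name: analytics
          image: registry.example.com/edge-analytics:1.0   # hypothetical image
          resources:
            limits:
              cpu: "500m"         # half a CPU core
              memory: 256Mi
```

The same manifest runs unchanged on a single-node edge box or a scaled-out cluster, which is the hardware/software decoupling the partnership is selling.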
“This exciting partnership with Red Hat demonstrates ABB’s commitment to meet customer needs by seeking alliances with other innovative market leaders,” said Bernhard Eschermann, Chief Technology Officer, ABB Process Automation. “The alliance with Red Hat will see ABB continue helping our customers improve their operations as they navigate a rapidly evolving digital landscape. It will give them access to the tools they need to integrate plantwide IT and OT, while reducing risks and optimizing performance.”
Red Hat OpenShift increases the deployment flexibility and scalability of ABB Ability Edgenius, a comprehensive edge platform for industrial software applications, together with ABB Ability Genix Industrial Analytics and AI Suite, an enterprise-grade platform and applications suite that leverages industrial AI to drive Industry 4.0 digital business outcomes for customers. ABB’s Edgenius and Genix can both be scaled seamlessly and securely across multiple deployments. With this partnership, ABB will have access to capabilities like zero-touch provisioning (remote configuration of networks) which can increase manageability and consistency across plant environments.
“Red Hat is excited to work with ABB to bring operational and information technology closer together to form the industrial edge. Together, we intend to streamline the transition from automated to autonomous operations and address current and future manufacturing needs using open-source technologies,” said Matt Hicks, executive vice president, Products and Technologies, Red Hat. “As we work to break down barriers between IT and the plant level, we look to drive limitless innovation and mark a paradigm shift in operational technology based on open source.”
I’m sitting in the San Diego airport following my second post-pandemic conference. ODVA wrapped up its 2022 Annual General Meeting at lunch today, with technical committee sessions continuing the rest of the day. This organization may currently be the most active of its kind. Working groups met virtually during the two pandemic years following the 2020 meeting and may have been more productive than ever.
Yesterday, March 9, I sat in two technical sessions relevant to my interests. The first, ”Edge to Cloud”, discussed the work being done to map CIP data to OPC UA. A large amount of detail has been worked out by the ODVA working group, as well as by a joint working group writing a companion specification for the OPC Foundation. Much field-level data that may not even be used by the control function bears content useful to other systems—many of which use the cloud for storage and retrieval.
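The gist of that mapping work can be sketched in a few lines: field-level CIP assembly data is hierarchical, and OPC UA exposes it as addressable nodes. The function and naming below are purely illustrative (not the ODVA/OPC Foundation companion specification), flattening a nested CIP-style structure into OPC UA-style string NodeIds:

```python
# Hypothetical sketch: flatten CIP-style assembly data into OPC UA-style
# browse paths. Names are illustrative, not the actual companion spec.
def cip_to_opcua_nodes(device: str, assembly: dict) -> dict:
    """Map a nested CIP assembly dict to OPC UA NodeId-like entries."""
    nodes = {}

    def walk(prefix: str, value):
        if isinstance(value, dict):
            for key, sub in value.items():
                walk(f"{prefix}.{key}", sub)
        else:
            # "ns=2" stands in for a vendor namespace index.
            nodes[f"ns=2;s={prefix}"] = value

    walk(device, assembly)
    return nodes

assembly = {"Temperature": {"Value": 71.3, "Units": "degF"}, "Status": "Run"}
nodes = cip_to_opcua_nodes("Mixer1", assembly)
print(nodes)  # e.g. {"ns=2;s=Mixer1.Temperature.Value": 71.3, ...}
```

The real specification also defines types, engineering units, and companion metadata; the point here is only that device data unused by the controller still maps cleanly into a namespace cloud systems can browse.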
The second technical session concerned using CIP networks in process automation applications. ODVA originally developed DeviceNet, a fieldbus most useful for discrete applications. Even EtherNet/IP found most of its uses in factory automation. Process automation users also discovered a need to use EtherNet/IP (a CIP network). The technology enticing process automation users is Advanced Physical Layer (APL). This network can handle identified requirements including safety, hazardous areas, configuration, process improvement, secure remote access, and 24/7 uptime. Work continues to define and implement standards.
Al Beydoun, executive director of ODVA, and Adrienne Meyer, VP of operations, reviewed the many association activities of the past two years.
- Grew membership to more than 365
- Focused on growth in China
- Development work for EtherNet/IP over TSN
- CIP Safety was recertified with IEC
- Collaboration continued with Fieldcomm Group and FDT Group
- Worked with the OPC Foundation
- Worked on xDS device descriptions
- Extensive online training and promotion
The technical committees recorded the activities of 80 SEs and TDEs; completed two publication cycles in 2020 and three in 2021, one of which concerned APL; and recorded 27 volume revisions. They also worked on standards for resource-constrained devices, process industry requirements, and Time Sensitive Networking (TSN).
User Requirements from P&G
Paul Maurath, Technical Director—Process Automation at Procter & Gamble’s Central Engineering, presented the user’s view of automation. I will dispense with suspense. His conclusion: ”Help us manage complexity.”
Maurath told the story of setting up a test process cell in the lab. They used it to test and demonstrate Ethernet-APL devices and the network. They discovered that APL worked; the controller didn’t see any issues. The discouraging discovery was the amount of configuration required and the complexity of setup. He described an E&I technician working the shift at 3 am on a Sunday morning. A call comes in. A device is down. With a regular HART / 4-20 mA device, the tech has the tools. But with an Ethernet device, configuration can be a problem.
- There is a need for new technology to deliver functionality and simplicity
- Standards are great
- Please keep end users in mind when developing standards and technology
ARC Advisory Group Glimpses the Future
Harry Forbes, research director for ARC Advisory Group, devoted a substantial part of his keynote to open source. ”There is,” he noted, ”an IT technology totally overlooked by OT—open source software.” He principally cited the Linux Foundation. You’ll find news and comments from LF throughout this blog. I see great value in this technology. That an ARC researcher also sees its power was something of a surprise, though. ”It’s not software that’s eating the world,” said Forbes, ”it is open source eating the world.”
The problem to solve, as detailed by presentations at the last ARC Industry Forum (and, I think, also worked on by the Open Process Automation Forum, which also appears often on this blog), is the need to decouple hardware and software, allowing easier software updates through containers and orchestration (Docker, Kubernetes) and virtual machines.
Is that the future? I’m not sure where the vendors are that will bring this innovation, but I’m sure that many users would welcome it.
ODVA appears to be thriving. It is at the forefront of pushing new standards. It is looking forward at new technologies. It is growing membership and mindshare. The staff also assembled an outstanding event.
I applaud these efforts to improve and increase digital interoperability through industry or formal standards and open source. These efforts over many years, and even ones that pre-date digital, have provided progress not only in technology but in the lives of users. This announcement comes from the Digital Twin Consortium, a project of The Object Management Group (OMG).
Digital Twin Consortium (DTC) announced the Digital Twin System Interoperability Framework. The framework characterizes the multiple facets of system interoperability based on seven key concepts to create complex systems that interoperate at scale.
“Interoperability is critical to enable digital twins to process information from heterogeneous systems. The Digital Twin System Interoperability Framework seeks to address this challenge by facilitating complex system of systems interactions. Examples include scaling a smart building to a smart city to an entire country, or an assembly line to a factory to a global supply chain network,” said Dan Isaacs, CTO, Digital Twin Consortium.
The seven key concepts of the DTC Digital Twin System Interoperability Framework are:
1 System-Centric Design – enables collaboration across and within disciplines—mechanical, electronic, and software—creating systems of systems within a domain and across multiple domains.
2 Model-Based Approach – with millions and billions of interconnections implemented daily, designers can codify, standardize, identify, and reuse models in various use cases in the field.
3 Holistic Information Flow – facilitates an understanding of the real world for optimal decision-making, where the “world” can be a building, utility, city, country, or other dynamic environment.
4 State-Based Interactions – the state of an entity (system) encompasses all the entity’s static and dynamic attribute values at a point in time.
5 Federated Repositories – optimal decision-making requires accessing and correlating distributed, heterogeneous information across multiple dimensions of a digital twin, spanning time and lifecycle.
6 Actionable Information – ensures that information exchanged between constituent systems enables effective action.
7 Scalable Mechanisms – ensures interoperability mechanism(s) are inherently scalable from the simplest interoperation of two systems to the interoperability of a dynamic coalition of distributed, autonomous, and heterogeneous systems within a complex and global ecosystem.
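Concept 4 (state-based interactions) is the most concrete of the seven, and a small sketch may help: a constituent system exchanges an immutable snapshot of an entity's static and dynamic attribute values at a point in time, rather than poking at the live device. The entity and attribute names below are hypothetical illustrations, not part of the DTC framework:

```python
from dataclasses import dataclass, field, asdict
import time

# Illustrative sketch of a state-based interaction: the entity's state is
# the full set of its attribute values captured at one point in time.
@dataclass(frozen=True)
class PumpState:
    serial_no: str            # static attribute
    flow_gpm: float           # dynamic attribute
    running: bool             # dynamic attribute
    timestamp: float = field(default_factory=time.time)

def exchange(state: PumpState) -> dict:
    """Constituent systems consume the snapshot, not the live device."""
    return asdict(state)

snap = exchange(PumpState(serial_no="P-101", flow_gpm=42.5, running=True))
```

Because the snapshot is frozen and timestamped, two systems can correlate what they each saw without racing the device itself, which is what makes the interaction pattern scale.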
“The Digital Twin System Interoperability Framework enables USB-type compatibility and ease for all systems connected to the Internet and private networks, which until now, has been the domain of system integrators,” said Anto Budiardjo, CEO, Padi.io. “This means system integrators can concentrate on designing applications rather than point-to-point integrations.”