There was a time when I would carry information from the OPC Foundation over to the MQTT people for comment, then carry their response back. It was much like being in the midst of a religious war.

My response was (and is) that the market will decide. Individual engineers will choose the solution that best fits their needs at the time. If both technologies offer sufficient benefit to enough engineers to form a market, then both will survive. I think there is room in the market for both: they overlap in what they do, but each provides unique benefits.

I’ve been mulling this over for a while, since I’ve had so many other things to digest. The impetus came from two directions: OPC Foundation President Stefan Hoppe’s editorial in the June newsletter, and a recent issue of Stacey Higginbotham’s IoT newsletter that discussed the edge.

Hoppe wrote, “Still to this day people only think of OPC UA merely as a secure protocol to move information. It is so much more than that. It is a modeling language in cloud applications and digital twins. It is capable of file transport (since 2009). Most people know that OPC UA started as an initiative in the OT world and expanded from the PLC control plane to SCADA and later to MES and ERP. More and more people are realizing that OPC UA via MQTT is the bridge between OT and IT and is able to push information directly into Microsoft and AWS cloud dashboards without the need for an adapter.”
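
To make that bridge concrete, here is a minimal sketch in Python. To be clear, this is not the OPC UA PubSub (Part 14) MQTT mapping Hoppe is describing; it is just a simple client-side relay that shows the same OT-to-IT flow. The endpoint, node id, broker, and topic are all hypothetical.

```python
# Minimal sketch of OT-to-IT bridging: read a value from an OPC UA
# server and republish it over MQTT. All endpoints and names are assumed.
import asyncio
import json

import paho.mqtt.client as mqtt   # pip install paho-mqtt
from asyncua import Client        # pip install asyncua

OPCUA_ENDPOINT = "opc.tcp://plc.example.local:4840"  # hypothetical PLC
NODE_ID = "ns=2;s=Line1.Temperature"                 # hypothetical node
MQTT_BROKER = "broker.example.com"                   # hypothetical broker
TOPIC = "plant/line1/temperature"

async def bridge() -> None:
    mqttc = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x
    mqttc.connect(MQTT_BROKER, 1883)
    mqttc.loop_start()
    async with Client(url=OPCUA_ENDPOINT) as ua:
        node = ua.get_node(NODE_ID)
        while True:
            value = await node.read_value()
            mqttc.publish(TOPIC, json.dumps({"temperature": value}))
            await asyncio.sleep(5)  # poll every 5 seconds

asyncio.run(bridge())
```

A cloud dashboard subscribed to that topic sees the PLC’s data with no OPC-specific adapter in between, which is exactly the point Hoppe is making.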

From Data to Data Sources

Stacey Higginbotham, writing in Stacey on IoT: “Bringing AI to the farthest edge requires new computing.”

Stacey writes about IoT generally. Most of her topics are commercial/consumer and chips (her reporting background). She does follow the IoT trail into manufacturing at times. In this newsletter she broaches something I’ve been expounding for a long time: how edge devices have become smarter with better communications. Then the IT world came up with the term Edge, which, of course, covers everything in manufacturing. She writes:

We’re in the midst of a computing shift that’s turning the back-and-forth between cloud and edge computing on its head. This new form of computing has been creeping to the forefront for the last few years, driven by digital transformations and complicated connected devices such as cars.

But the more recent hype around AI is providing the richest examples of this shift. And it will ultimately require new forms of computing in more places, changing both how we think about the edge and the types of computing we do there. In short, the rise of AI everywhere will lead to new forms of computing specialized for different aspects of the edge. I’m calling this concept the complex edge.

As part of this shift in computing, we have to become more nuanced about what we mean when we talk about the edge. I like to think of it as a continuum moving from the most compute and power-constrained devices such as sensors to the most powerful servers that happen to be located on premise in a factory. In the middle are devices such as tablets, smartphones, programmable logic controllers (PLCs), and gateways that might handle incoming data from PLCs or sensors.

Moreover, each of these devices along the continuum might run their own AI models and require their own specialized type of computing to compare the data coming into those models. For example, I’ve written about the need for sensors to get smarter and process more information directly.

Smart sensors turn to analog compute

Cameras or image sensors are popular examples of such devices. This vision sensor from Useful Sensors, which can do person detection on a $10 device, runs a simple algorithm that looks for people and counts them. At a higher level, which requires more processing power, sensors from Sony or chips from CEVA can detect specific movements, faces, or other objects.

A few weeks ago at the Sensors Converge event, a company called Polyn Technology showed off a chip designed to take raw data and quickly convert it into an insight. To handle analog signals from the environment (such as vibrations or sound) quickly, the Polyn chip processes the signal in the analog domain and then sends the “insight” to another computer for more processing.
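
That raw-data-to-insight pattern is worth pausing on. Below is a hedged sketch of the same idea in the digital domain (Polyn’s actual part does this in analog silicon; this NumPy version just illustrates the data reduction). The sample rate, alarm band, and threshold are all invented for the example.

```python
# Illustrative sketch (not Polyn's analog silicon): reduce a raw
# vibration window to a single insight before sending anything upstream.
import numpy as np

def vibration_insight(samples, rate_hz=8000, alarm_hz=120.0, threshold=0.5):
    """Ask one question of a raw window: is there unusual energy
    near the alarm frequency?"""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    band = (freqs > alarm_hz - 10) & (freqs < alarm_hz + 10)
    energy = spectrum[band].sum()
    return {"alarm": bool(energy > threshold), "band_energy": float(energy)}

# A one-second window of thousands of raw samples collapses into a
# few bytes of insight to transmit to the next computer.
window = np.random.randn(8000) * 0.05
print(vibration_insight(window))
```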

Cameras are not only shooting pictures for QA purposes; they are also streaming video for applications from industrial engineering to surveillance to predictive maintenance. That is a vast amount of data.
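
How vast? A quick back-of-envelope sketch, with the bitrate an assumption:

```python
# Rough arithmetic: how much video a single camera produces per day.
BITRATE_MBPS = 4          # assumed 1080p H.264 stream
SECONDS_PER_DAY = 24 * 3600
gb_per_day = BITRATE_MBPS * SECONDS_PER_DAY / 8 / 1000  # megabits -> gigabytes
print(f"~{gb_per_day:.0f} GB per camera per day")       # ~43 GB
```

Multiply that by every camera on a plant floor and the case for processing at the edge, rather than shipping everything to the cloud, makes itself.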

We have tools, but we will need more. Chips with built-in communication and analytics are a start.
