Terrence O’Hanlon and his crew produced their annual International Maintenance Conference (IMC) and Reliability 4.0 live in December in (mostly) sunny Florida. I attended IMC for the first time; the last time I attended one of his excellent events was around 2003, for a different company. This edition was as good as I expected, with plenty of informative keynotes and tech sessions, as well as many networking opportunities.
The 700 attendees were fewer than in past years, but then the “international” part of IMC was a little lacking this year, given the Covid travel situation.
My goal was to take a deep dive into the nuances surrounding predictive maintenance. My sources in the IT and IIoT communities figured that data was becoming readily available and that predictive analytics were improving. Add those together, and surely predictive maintenance was the “killer app” for them.
I didn’t see it quite that same way even while helping some of them write marketing pieces. It was time to learn more.
Condensing what I heard from several speakers: predictive maintenance is not the end goal. It is useful only when connected into the plant’s workflow, and it requires decision making by experts and integration into the daily work of maintenance technicians.
Networking with other attendees often has more value than any other interaction. At dinner one evening, one long-time colleague told me another long-time colleague was there. I sat down and talked with Gopal GopalKrishnan, with whom I had worked when he was at OSIsoft. He’s now with Capgemini. He introduced me to his layered approach to maintenance.
He first pointed me to a McKinsey study, “Establishing the Right Analytics-Based Maintenance Strategy”:
The assumption that predictive maintenance is the only advanced, analytics-based use for Internet of Things (IoT) data in the maintenance world has created a great deal of misconception and loss of value. While predictive maintenance can generate substantial savings in the right circumstances, in too many cases such savings are offset by the cost of unavoidable false positives.
Then consider this thought from Emerson’s Jonas Berge.
We have a promising future of Artificial Intelligence (AI) ahead of us. But to be successful we must first learn to reject the fake visions painted by consultants eager to outdo each other. Most engineers don’t have a good handle on AI the way they have on mechanics, electricity, or chemistry. Data science has no first principles or scientific laws. It is very nebulous. So it can be hard to judge if claims made around analytics are realistic. Or you may end up using an overly complex kind of AI for a simple analytics task. It must be like the early days of thermodynamics and electromagnetism.
Now some additional thoughts from Gopal here and here:
As such, a layered fit-for-purpose approach to analytics can be extremely valuable when you also leverage simple heuristics – extracted from SME (subject-matter-expert) knowledge – with basic math and Statistics 101. You can also include first-principles physics-based calculations that require only simple algebra and make predictions by extrapolating trends – backed by sound engineering assumptions.
The takeaway – start with proven fit-for-purpose analytics before chasing AI/ML PoCs with all its attendant risks, and the false positives/false negatives indicated in the McKinsey post. Form follows function; AI/ML yields to simple analytics. The simpler ‘engineered analytics’ captures the low-hanging wins and provides the foundation and the data-engineering required for the AI/ML layer. The oft-heard “… just give me all your data, let’s put it in a data lake and we will figure it out…” is naïveté.
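To make Gopal’s point concrete, here is a back-of-the-envelope sketch of the kind of “Statistics 101” engineered analytics he describes — a least-squares trend line extrapolated to an alarm limit. This is my own illustration, not code from Gopal, McKinsey, or any vendor, and all names, readings, and limits in it are hypothetical.

```python
# Illustrative sketch of "engineered analytics": fit a straight line
# to recent readings and extrapolate to estimate when the trend will
# cross an alarm limit. All data and thresholds here are made up.

def hours_until_limit(readings, limit):
    """Fit y = slope*t + intercept by least squares (t in hours, one
    reading per hour) and return the extrapolated hours remaining
    until `limit` is crossed, or None if the trend never reaches it."""
    n = len(readings)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(readings) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, readings))
    den = sum((t - t_mean) ** 2 for t in ts)
    slope = num / den
    intercept = y_mean - slope * t_mean
    if slope <= 0:  # flat or improving trend: no crossing ahead
        return None
    t_cross = (limit - intercept) / slope
    remaining = t_cross - (n - 1)  # hours beyond the latest reading
    return remaining if remaining > 0 else 0.0

# Hypothetical bearing-temperature readings (deg C), one per hour,
# drifting upward toward a 90 deg C alarm limit.
temps = [70, 71, 71, 72, 73, 74, 74, 75, 76, 77]
print(hours_until_limit(temps, 90))
```

Nothing here requires machine learning — simple algebra plus a sound engineering assumption (that the drift continues roughly linearly) already yields a useful prediction and a natural trigger for a work order.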
And a conclusion from McKinsey:
Luckily, while predictive maintenance is probably the best-known approach, there are other powerful ways to enhance a business’s maintenance-service organization and create value from analytics-based technologies. The two most valuable of these, we find, are condition-based maintenance and advanced troubleshooting.
And more from Jonas Berge:
The reason why the existing process sensors are insufficient is because by the time the problem is picked up by the existing process sensors, the problem has already gone too far. You need a change in a signal that indicates an event is about to occur. A pump bearing failure is a good example of this: by the time the bearing failure is visible on the discharge pressure it is already too late because it is a lagging indicator. You need a vibration sensor as a leading indicator where a change signals the bearing is starting to wear.
Lots of time and money can be saved if advanced sensors to collect the required data are put in from the very beginning. With the right sensors in place the AI analytics can do a fabulous job of providing early warning of failure.
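A minimal sketch of Berge’s leading-versus-lagging distinction, assuming entirely made-up pump data: vibration readings drift above their baseline (the leading indicator) while discharge pressure still looks normal (the lagging one). This is my own illustration of the idea, not Emerson’s method.

```python
# Illustrative sketch: flag a leading indicator when recent readings
# sit well above an established baseline. All data and thresholds
# here are hypothetical.

def drifted(baseline, recent, sigmas=3.0):
    """Return True if the mean of `recent` exceeds the baseline mean
    by more than `sigmas` baseline standard deviations."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / n
    std = var ** 0.5
    return (sum(recent) / len(recent)) > mean + sigmas * std

# Hypothetical pump readings: vibration (mm/s RMS) creeps up while
# discharge pressure (bar) shows nothing unusual yet.
vib_baseline = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 1.9]
vib_recent = [2.8, 2.9, 3.1]
press_baseline = [5.0, 5.1, 5.0, 4.9, 5.0, 5.1, 5.0, 5.0]
press_recent = [5.0, 5.1, 5.0]

print(drifted(vib_baseline, vib_recent))      # → True: early warning
print(drifted(press_baseline, press_recent))  # → False: too late a signal to wait for
```

The vibration channel raises the warning while the pressure channel is still quiet — which is exactly why the right sensor has to be in place before the analytics can help.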
I’ll add that it’s not necessarily complex unless you choose to make it so. But calling predictive maintenance the killer app oversimplifies things to the point that you’d never really get anywhere, even in making IIoT and IT sales.
A better, more inclusive approach to marketing solutions could lead IT and OT/IT suppliers to more lucrative hardware, software, and services sales and profits.