Emerson now bills itself as a “global software and technology leader.” I may have pointed this out before, but I find it interesting that after years of my asking major automation technology providers about software, Emerson, along with Rockwell Automation and Siemens, has elevated software to the point of being a major competitive advantage.
This news from Emerson highlights an update to its machinery health platform to enable customers to migrate to a more holistic, modern interface for condition monitoring. New support brings data from edge analytics devices directly to key personnel inside and outside the control room.
Emerson has continuously evolved AMS Machine Works’ condition monitoring technologies for better diagnostics at the industrial edge. Increased connectivity to external systems provides personnel with an intuitive, holistic asset health score supported by maintenance recommendations to help reliability teams quickly see what is wrong and how to fix it. Intuitive information and alerts are delivered directly to workstations or mobile devices to provide decision support, helping maintenance personnel make the best use of their time.
The newest version of AMS Machine Works adds support for Emerson’s AMS Asset Monitor, which provides embedded, automatic analytics at the edge using patented PeakVue technology to alert personnel to the most common faults associated with a wide range of assets. AMS Machine Works also supports open connectivity using the OPC UA protocol to make it easier to connect to external systems such as historians, computerized maintenance management systems, and more to help close the loop on plant support from identification to repair and documentation.
Terrence O’Hanlon and crew produced their annual International Maintenance Conference and Reliability 4.0 live in December in (mostly) sunny Florida. I attended IMC for the first time. The last time I attended one of his excellent events was around 2003, when I was with a different company. This edition was as good as I expected: plenty of informative keynotes and tech sessions, as well as many networking opportunities.
The 700 attendees were fewer than in past years, but then the “international” part of IMC was a little lacking this year given the Covid travel situation.
My goal was to take a deep dive into the nuances surrounding predictive maintenance. My sources in the IT and IIoT communities figured data was becoming readily available and predictive analytics were improving. Add those together and surely it was obvious that predictive maintenance was the “killer app” for them.
I didn’t see it quite that same way even while helping some of them write marketing pieces. It was time to learn more.
Condensing what I heard from several speakers: predictive maintenance was not the end goal. It was useful when connected into the plant’s workflow. It required decision-making from experts and integration into the work of maintenance technicians.
Networking with other attendees often has more value than any other interaction. At dinner one evening, one long-time colleague told me another long-time colleague was there. I sat and talked with Gopal GopalKrishnan, with whom I had worked when he was at OSIsoft. He’s now with Capgemini. He introduced me to his layered approach to maintenance.
He first pointed me to a McKinsey study, Establishing the Right Analytics-based Maintenance Strategy:
The assumption that predictive maintenance is the only advanced, analytics-based use for Internet of Things (IoT) data in the maintenance world has created a great deal of misconception and loss of value. While predictive maintenance can generate substantial savings in the right circumstances, in too many cases such savings are offset by the cost of unavoidable false positives.
Then consider this thought from Emerson’s Jonas Berge.
We have a promising future of Artificial Intelligence (AI) ahead of us. But to be successful we must first learn to reject the fake visions painted by consultants eager to outdo each other. Most engineers don’t have a good handle on AI the way they have on mechanics, electricity, or chemistry. Data science has no first principles or scientific laws. It is very nebulous. So it can be hard to judge if claims made around analytics are realistic. Or you may end up using an overly complex kind of AI for a simple analytics task. It must be like the early days of thermodynamics and electromagnetism.
Now some additional thoughts from Gopal here and here:
As such, a layered fit-for-purpose approach to analytics can be extremely valuable when you also leverage simple heuristics – extracted from SME (subject-matter-expert) knowledge – with basic math and Statistics 101. You can also include first-principles physics-based calculations that require only simple algebra and make predictions by extrapolating trends – backed by sound engineering assumptions.
The takeaway – start with proven fit-for-purpose analytics before chasing AI/ML PoCs with all their attendant risks, and the false positives/false negatives indicated in the McKinsey post. Form follows function; AI/ML yields to simple analytics. The simpler ‘engineered analytics’ captures the low-hanging wins and provides the foundation and the data engineering required for the AI/ML layer. The oft-heard “… just give me all your data, let’s put it in a data lake and we will figure it out…” is naïveté.
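Gopal’s layered approach can be made concrete with a small sketch. The snippet below (illustrative values only, not drawn from any vendor product) layers an SME threshold rule over a Statistics 101 trend extrapolation to estimate days until a vibration warning level is reached:

```python
# A minimal sketch of layered, fit-for-purpose analytics: an SME
# threshold rule plus a least-squares trend extrapolation. Threshold
# values and data are illustrative, not from any vendor product.

def sme_rule(vibration_mm_s, warn=4.5, alarm=7.1):
    """Heuristic severity bands an SME might set for a machine class."""
    if vibration_mm_s >= alarm:
        return "alarm"
    if vibration_mm_s >= warn:
        return "warn"
    return "ok"

def days_to_threshold(readings, threshold):
    """Fit a straight line to daily readings and extrapolate to the
    threshold -- simple algebra, no ML required."""
    n = len(readings)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # no worsening trend to extrapolate
    intercept = y_mean - slope * x_mean
    return (threshold - intercept) / slope - (n - 1)  # days from today

daily = [2.0, 2.2, 2.5, 2.7, 3.0]  # mm/s RMS, one reading per day
print(sme_rule(daily[-1]))                       # ok
print(round(days_to_threshold(daily, 4.5), 1))   # 6.1 days to the warn level
```

Simple algebra on a trend, backed by an SME-set threshold, already answers “how long until I should act?” without any machine learning.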
And a conclusion from McKinsey:
Luckily, while predictive maintenance is probably the best-known approach, there are other powerful ways to enhance a business’s maintenance-service organization and create value from analytics-based technologies. The two most valuable of these, we find, are condition-based maintenance and advanced troubleshooting.
And more from Jonas Berge:
The reason why the existing process sensors are insufficient is because by the time the problem is picked up by the existing process sensors, the problem has already gone too far. You need a change in a signal that indicates an event is about to occur. A pump bearing failure is a good example of this: by the time the bearing failure is visible on the discharge pressure it is already too late because it is a lagging indicator. You need a vibration sensor as a leading indicator where a change signals the bearing is starting to wear.
Lots of time and money can be saved if advanced sensors to collect the required data are put in from the very beginning. With the right sensors in place the AI analytics can do a fabulous job of providing early warning of failure.
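Berge’s leading-versus-lagging point can be illustrated with invented numbers: a vibration channel drifts out of its baseline band days before the discharge pressure moves.

```python
# A toy illustration of leading vs. lagging indicators. The data are
# made up: vibration rises days before the discharge pressure reacts.

def first_deviation(series, baseline_n=5, k=3.0):
    """Index of the first point more than k standard deviations from
    the mean of the first baseline_n points; None if none deviates."""
    base = series[:baseline_n]
    mean = sum(base) / baseline_n
    var = sum((x - mean) ** 2 for x in base) / baseline_n
    sigma = var ** 0.5 or 1e-9  # guard against a perfectly flat baseline
    for i, x in enumerate(series[baseline_n:], start=baseline_n):
        if abs(x - mean) > k * sigma:
            return i
    return None

vibration = [2.0, 2.1, 2.0, 2.1, 2.0, 2.6, 3.1, 3.8, 4.6, 5.5]  # mm/s
pressure  = [6.0, 6.1, 6.0, 6.1, 6.0, 6.0, 6.1, 6.0, 5.6, 5.1]  # bar

print(first_deviation(vibration))  # 5 -- vibration flags on day 5
print(first_deviation(pressure))   # 8 -- pressure only moves on day 8
```

The same statistical test, applied to both signals, fires three days earlier on the leading indicator; with only the process sensor, the bearing is already failing by the time the alarm sounds.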
I’ll add that it’s not necessarily complex unless you choose to make it so. But to say that predictive maintenance is the killer app oversimplifies things to the point that you’d never really get anywhere, even in making IIoT and IT sales.
A better and more inclusive approach to market solutions could lead IT and OT/IT suppliers into more lucrative hardware, software, and services sales and profits.
AI is perhaps the most overused buzzword in our market right now. On my drive down I-90 to Rosemont, IL, for the Assembly show this morning, I listened to the first part of an interview with Eric Schmidt on the Tim Ferriss podcast. The former Google CEO has just released a book on AI. AI is short for artificial intelligence, a phrase some of us say describes something neither artificial nor intelligent. Schmidt defines it as software that learns: feed it more data and it learns from it.
This news comes from ABB, which has added AI to its newly launched Ability Genix Asset Performance Management (APM) Suite for condition monitoring, predictive maintenance, and 360-degree asset performance insights for the process, utility, and transportation industries. The quick look:
- The launch of ABB Ability Genix Asset Performance Management Suite brings next-generation AI-based predictive maintenance, asset reliability and integrity insights to process and utility industries
- Genix APM is an enterprise-grade application to monitor assets, prescribe maintenance actions, improve equipment utilization, and support lifecycle analysis and capital planning
- Solution provides actionable insights into all aspects of asset performance, enabling customers to reduce machine downtime by up to 50 percent
The Genix APM Suite makes it easy to add asset condition monitoring to existing operational technology (OT) landscapes, enables prioritization of maintenance activities based on AI-informed predictions, and provides a comprehensive overview of asset performance.
Genix APM Suite also empowers significant improvements in operational sustainability. By assessing the remaining useful life of industrial assets, Genix APM generates a plan for preventive maintenance, which can extend equipment uptime by as much as 50 percent and increase asset life by up to 40 percent.
With reliable data insights, decision makers have the information required to identify gaps and areas for improvement in energy efficiency and control of operations, increasing asset availability and improving profit potential.
“Poor asset availability and reliability is a major problem that results in unplanned downtime and unexpected maintenance costs, and also impedes strategic planning and procurement,” said Rajesh Ramachandran, Chief Digital Officer at ABB Process Automation. “It’s not that industrial customers lack data; it’s that many lack effective ways to use their data to improve operational and business performance.”
Genix APM is built on the ABB Ability Genix Industrial Analytics and AI Suite. ABB Ability Genix is a modular, IIoT and analytics suite, which integrates IT, OT and other enterprise data in a contextualized manner, applying advanced industrial AI capabilities that support new insights to optimize operations.
An old friend and several acquaintances found themselves adrift when a magazine closed. All being entrepreneurial, they started a website and newsletter—RAM Review (Reliability, Availability, Maintenance). Old friend Jane Alexander is the editor. Not meaning she’s old, just that we’ve known each other for many years.
I met Bob Williamson 10 or 12 years ago mostly around discussions of ISO 55000 on asset management. He wrote the lead essay for a recent email newsletter on workforce. Now, I have to admit that the only part of manufacturing I never worked in was maintenance and reliability. I did work with skilled trades when I was a sales engineer, though. I considered them geniuses for the way they could fix things. One of the points of Bob’s essay is taking care of things before they break and need help.
The main workforce discussion in media concerns remote or hybrid work. Many engineering roles can be performed remotely. Many roles within manufacturing and production must be performed on site. With the current and projected future labor shortage, I like his closing paragraph except for the put-down of current operators. I knew plenty who cared for their machine or process. Of course, many didn’t; most likely a management failure. But cross-training people to be, at least to some degree, both competent operators and first-line RAM people seems to me a winning strategy. I’ve reprinted most of Bob’s essay below. You can read it in full on their website.
For many manufacturers, returning to traditional ways of work simply will not be an option. Something must change if they are to attract, hire, and retain a capable workforce. Therefore, I believe technology and desperately willing top-management teams will also help alter work cultures on factory floors. Respondents to the Manufacturing Alliance/Aon survey suggested offering “flexible working hours, compressed work weeks, split shifts, shift swapping, and part-time positions.” Use of such enticements with plant-floor workforces would look very different than use among the carpet dwellers in front offices.
We have another option, of course: Technology can automate our manufacturing processes, and much of it is far more affordable than it was a decade ago. In fact, given the rising cost of labor over the past decade, with increasing healthcare-cost burdens and skills shortages, many businesses have already automated some of their labor-intensive processes. The times we are in call for—make that scream for—large-scale automation. Yet, while process automation can be easier for large, deep-pocketed companies than for the smalls, it’s still a huge challenge.
There are four big hurdles to be overcome when automating manufacturing processes: availability, installation, sustainable reliability, and work-culture change. And remember, skills and labor shortages are widespread in these post-pandemic times. Moreover, despite the supply chain’s efforts to heal and keep up, manufacturers of automation technologies aren’t immune to the production-barrier ills that others face these days.
To repeat: RAM professionals are on manufacturing’s front line. Skill shortages may be affecting our ranks, but there are recruiting and training efforts underway in many companies to remedy the situation. In addition, we have technologies for carrying out data collection, analysis, and problem-solving somewhat remotely. However, the boots-on-the-ground parts of reliability and maintenance will not be virtual or remote.
So, consider this option: Recruit and train displaced production workers to wear some RAM “boots.” They’ll be familiar with industrial environments and the importance of plant equipment. Then, let’s train our current production workers to care more for their machines than they did in the past, and, in the process, become the eyes and ears for reliability, availability, and maintenance improvement.
Companies are sprinkling their press releases and websites with Artificial Intelligence (AI) like sugar on your cornflakes. Now we even have the Artificial Intelligence of Things—AIoT. One of my favorite series of questions these days runs something like: What do you mean by AI? How is it used? What do operators see? What does it do—really?
One outgrowth of a series of meetings from the recent ARC Advisory Group Industry Forum was an interview with serial entrepreneur Mike Brooks. Most recently he was President and COO of Mtelligence (Mtell) when it was acquired by AspenTech a bit over four years ago.
He clued me into a number of Aspen products based on the Mtell technology. More on those below. First, some insights from someone who has witnessed a lot in the industry.
Talking about machine learning, Brooks told me that it’s not AI on its own that gives value. Look for a combination of AI plus domain knowledge; that gives you causation, not just correlation. It is also important to build AI from first principles (I’m betting many miss that one). Mostly, AI is a tool for providing event analytics for front-line workers.
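Brooks’ point about first principles can be sketched as a physics-based residual check. Assuming a centrifugal pump and the affinity law (head scales with the square of speed), the code below (all numbers invented for illustration) alarms on deviation from the physics expectation rather than on raw correlation:

```python
# A hedged sketch of combining first principles with data: use the pump
# affinity law to compute an expected head, then run anomaly detection
# on the residual. Reference values are invented for illustration.

def expected_head(speed_rpm, ref_speed=1800.0, ref_head=50.0):
    """Pump affinity law: H2 = H1 * (N2 / N1) ** 2."""
    return ref_head * (speed_rpm / ref_speed) ** 2

def residual_alarm(speed_rpm, measured_head, tol=0.10):
    """Flag when measured head falls more than tol (fractional) below
    the physics-based expectation -- a causal, explainable anomaly."""
    exp = expected_head(speed_rpm)
    return (exp - measured_head) / exp > tol

# Healthy pump at reduced speed: low head, but exactly what physics predicts.
print(residual_alarm(1600, expected_head(1600)))  # False
# Degraded pump at full speed: head 15% below the affinity-law expectation.
print(residual_alarm(1800, 50.0 * 0.85))          # True
```

A model trained only on correlations would tend to flag any low-head reading; the residual against first principles separates an intentional speed change from real degradation.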
I’ll combine Brooks and other sources to describe the more practical AspenTech solutions:
From a blog post by Adi Pendyala, Sr. Director, Market Strategy, titled Aspen AIoT Hub: The Cloud-Ready Infrastructure for Industrial AI:
Artificial intelligence (AI) and the Industrial Internet of Things (IIoT) are two of the most prominent technological forces driving digital transformation for capital-intensive industries today. Collectively, they’re like the body and brain of industrial digital transformation: IIoT is the body, creating and transmitting data from a variety of sources that is sometimes acted upon, while AI is the brain, turning data into intelligence for smarter decisions and enabling the digital future of industrial organizations.
The confluence of these technological forces gives rise to a new digital solution category – the Artificial Intelligence of Things (AIoT) – that centers on unlocking the hidden business value in industrial data.
Impact of IT-OT Convergence: Sharp market volatility means that capital-intensive industries have to be more agile than ever before to survive and thrive in every cycle – an area that has thwarted the OT-side of industrial organizations in the past. Enterprises are looking to exploit the rapid convergence of IT and OT to significantly reduce the technological implementation risk and the time-to-market risk for introducing AI-rich, real-time applications to complex industrial operations. The rise of the digital executive, i.e. the CTO/CDO/CIO, in driving the digital transformation strategy of industrial organizations is a key influencer of this trend.
Unlock Industrial Data Value: There is a critical (and growing) need for access to industrial analytics and actionable insights in making business decisions – across all levels of the enterprise. Efforts to mine pools (silos) of data across the enterprise are often stalled by the challenges of data collection and integration, with promised business insights and agility never materializing. Organizations are switching their focus from mass data accumulation to strategic industrial data management, specifically homing in on data integration, data mobility and data accessibility across the organization – with the goal of using AI-enabled technologies to unlock the hidden value in these previously unoptimized and undiscovered sets of industrial data.
Lowering the Digitalization Barrier: Industrial organizations are increasing investment in lowering the barriers to AI adoption by deploying fit-for-purpose Industrial AI applications that combine data science and AI with software and domain expertise. This will be the key to overcome a lack of in-house skills and drastically reduce the need for an army of data scientists. To scale this effort, many enterprises are adopting new measures to reduce complexity in interoperability, overcome information silos and harmonize towards a cloud-ready infrastructure that bridges legacy systems with next-generation solutions.
The Aspen AIoT Hub – Cloud-Ready Industrial AI Infrastructure
The AIoT Hub provides the integrated data management, edge and cloud infrastructure and production-grade AI environment to build, deploy and host Industrial AI applications at enterprise speed and scale. It also serves as the foundational infrastructure to realize the transformative vision for the Self-Optimizing Plant. In fact, as part of our recent aspenONE V12 release, the AIoT Hub provides the underlying cloud-ready, enterprise-scale infrastructure that powers V12 Industrial AI applications such as Aspen AI Model Builder and Aspen Event Analytics.
Key Capabilities of the Aspen AIoT Hub
Data Integration & Mobility
On average, between 60% and 73% of all data within an enterprise goes unused. This challenge is further exacerbated by the lack of a scalable data infrastructure to power Industrial AI models from training to productization. Through the AIoT Hub, organizations will be able to access and leverage fully integrated data, from sensors to the edge and cloud, across the enterprise.
Scaling AI requires providing the tools, infrastructure and workflows for powering Industrial AI across the solution lifecycle. It also requires the software, hardware and enterprise architecture needed to productize AI in industrial environments, including broader collaboration between development, data science and infrastructure capabilities such as CloudOps, DevOps, MLOps and others. This dimension is critical to helping organizations mature beyond sporadic AI proof-of-concepts to an enterprise-wide Industrial AI strategy.
Industrial organizations are seeking to aggregate data from different sources across the enterprise, transforming it into analytics and visualizations to guide better decisions at every business level. The goal is to translate real-time data into faster, smarter, profitable business decisions to visualize deviations, sequences and trends automatically and identify risks and opportunities early. The AIoT Hub enables enterprise users’ access to real-time data and analytics to do all of this – improving collaboration, project efficiency and operations by tapping into the power of accelerated insights and enhanced visualizations.
Industrial AI Applications Ecosystem
Enterprises are looking for purpose-built, fully integrated AI environments for their data scientists to accelerate the transformation from raw data to productized AI/ML algorithms. The AIoT Hub provides an embedded workbench for feature engineering, training and rapidly productizing machine learning (ML) models, as well as supports versioning and collaboration. It empowers data scientists, at customers and partners, to collaborate and build their own data-rich AI apps.
About six months ago, ABB completed the divestiture of about 80% of its holding in the ABB Power Grids business, which Hitachi acquired. The new business, a joint venture, is called Hitachi ABB Power Grids. Today, the company announced the integration of its Digital Enterprise solution with Hitachi Vantara’s Lumada portfolio of digital solutions and services for turning data into insights.
The two Hitachi business entities have agreed to rebrand the DE components as Lumada Asset Performance Management (APM), Lumada Enterprise Asset Management (EAM), and Lumada Field Service Management (FSM), adding to the growing portfolio of DataOps and Industrial IoT solutions.
The DE portfolio of solutions and its predecessors enable customers spanning multiple global industries to operate, analyze and optimize over $4 trillion of assets every day. With the incorporation of the DE portfolio into Lumada, this experience is further complemented by a leading technology engine to deliver access to information, systems, people and analytics across asset-intensive organizations.
With Digital Enterprise’s incorporation into Lumada, Hitachi ABB Power Grids’ energy domain experience will be augmented by Hitachi’s Lumada Industrial IoT platform. Hitachi was recently named a Leader in the 2020 Gartner Magic Quadrant for Industrial IoT Platforms based on Gartner Inc.’s evaluation of the company and its Lumada IoT software.
“Our software solutions and Lumada are highly complementary,” said Massimo Danieli, managing director, grid automation business unit, Hitachi ABB Power Grids. “Combining best-in-class Lumada IoT capabilities and the domain expertise built into Digital Enterprise applications provides both new and existing customers unparalleled flexibility and faster time to value, while preserving the value of their past software investments. The journey we began with our customers as part of the Digital Enterprise evolution story has become broader and more compelling, as we join the Lumada ecosystem.”
“Lumada Enterprise Asset Management and Field Service Management allow us to seamlessly expand our Ellipse EAM, enabling us to share information across all parts of our organization, tearing down silos and giving us the opportunity to formulate a longer-term, holistic strategy that reflects our specific business outcomes,” said Brian Green, general manager, asset management, from the Australian Rail Track Corporation (ARTC). “In addition, implementing these solutions allows us to optimize the quality of the data we collect and ensure safe, compliant and efficient business operations.”
“Bringing these solutions that each encapsulate deep domain expertise into the greater Lumada ecosystem gives customers an extremely powerful combination of tools to modernize their business,” said Chris Scheefer, senior vice president, Industry Practice, Hitachi Vantara. “The holistic view of assets and information provided by Lumada allows leadership to analyze and react in real-time, enabling efficient, effective operations and a foundation to create a more sustainable future.”
DE and Lumada also share core foundational features: a modern microservices design, vendor-agnostic interoperability, and a flexible deployment model, including cloud, on-premises and hybrid.
With the combination of Hitachi ABB Power Grids’ Digital Enterprise application portfolio and Hitachi’s Lumada solutions offered by Hitachi Vantara, customers will be able to benefit from additional data services including data integration, data cataloging, edge intelligence, data management, analytics and more.
The new integrated Lumada portfolio will offer advantages to customers in the following key areas:
1. Digital Transformation & Data Modernization – improving access to and insights from data
2. Connected Asset Performance – helping to predict and prevent asset failures
3. Intelligent Operations Management – improving oversight and maintenance of assets
4. Health, Safety & Environment – enabling safer environments for workers and the public