by Gary Mintchell | Jun 6, 2022 | Asset Performance Management, News
More than 800 people have gathered for the resumption of the ARC Advisory Group’s annual Forum in Orlando. Yes, that’s right, my second trip to Orlando in three weeks. And there is one more to come.
Based on just a few hours at the conference and one trip around the exhibition hall, the theme this year likely will be the future of process automation. Information abounds, but that appears to be the most interesting idea. Plus data. Data everywhere. More on that later, as I'm working on an essay on data.
Only six companies took advantage of the assembled corps of writers to hold briefings today. Some things I’ve already written up from previous interviews. Some are embargoed until later. Here is an interesting announcement of a new name.
Hexagon AB has been on a buying spree. I don't know when it will end, but I'm thinking not soon. It has a division called PPM that encompasses asset management. Executives said this change does not signify any portfolio changes; rather, it is a new way of thinking about the newly aggregated companies.
The new division name is Hexagon’s Asset Lifecycle Intelligence division.
Mattias Stenberg, president of the division, said, “This evolution is driven by our customers’ needs to have real-time intelligence about their assets. This divisional name is reflective of our focus and expertise in supporting the entire asset lifecycle throughout the customer’s digital journey.”
The new division includes HxGN EAM (formerly Infor EAM), PAS Global, Jovix, and Innovatia Accelerator.
by Gary Mintchell | Jun 1, 2022 | Asset Performance Management, Data Management, News, Operations Management, Organizations
The HUG experience in Orlando was barely out of my system when I turned to Hannover Messe. No, I am not eating German food, or at least not in Germany. Yesterday morning I sat in a 7 am (my time) press conference with the OPC Foundation. More on that later. I worked all afternoon consolidating about 20 press releases and interviews and decided at the end of the day to write about the press conference / annual general meeting I attended virtually this morning. This one comes from the FDT Group.
Steve Biegacki became Executive Director in January, bringing experience with building this type of organization, not to mention marketing and sales executive experience at both Rockwell Automation and Belden. In his Rockwell role, he was a driving force behind ODVA and CIP.
He pulled off his initial AGM at Hannover with his usual style, backed by an experienced staff. Much like the other organizations I've talked with this year, FDT Group didn't let the pandemic slow it down, continuing to crank out valuable work. Biegacki will be leading a renewed marketing effort to explain the benefits of the FDT 3.0 standard.
From today’s news: Device, system, and end users now benefit from an embedded unified environment unlocking universal device management, IT/OT convergence, data analytics, services, and mobility.
FDT Group, an independent, international, not-for-profit industry association supporting the evolution of FDT technology, introduced the FDT Unified Environment (UE) and developer tools based on the new FDT 3.0 standard. They deliver next-generation FDT industrial device management system and device solutions for field-to-cloud IT/OT data harmonization, analytics, services, and mobility, based on user-driven requirements for smart manufacturing in the process, hybrid, and discrete markets.
Driven by digital transformation use cases supporting new Industrial Internet of Things (IIoT) business models, the standard has evolved to include a new distributed, multi-user FDT Server application with built-in, pre-wired OPC UA and Web servers, enabling an FDT Unified Environment (FDT 3.x) that merges IT/OT data analytics and supports service-oriented architectures. The new Server environment, deployable in the cloud or on premises, delivers the same use cases and functionality as the previous-generation FDT hosting environment, but now places data storage for the whole device lifecycle at the core of the architecture, providing information modeling and data consistency to authenticated OPC UA and browser-based clients (tablets and phones) for modern accessibility that addresses the challenges of IIoT.
“Collaboration and data harmonization are the keys to manufacturing modernization,” said Steve Biegacki, managing director, FDT Group. “FDT UE delivers a data collaborative engineering specification and toolset to enable modern distributed control improving operations and production reliability, impacting the bottom line for new IIoT architectures. I’m proud to witness our first group of members showcasing their FDT 3.0 WebUI-based DTM prototypes mixed with 2.0 DTMs in the new Server and Desktop environments running IO-Link and HART here at Hannover Messe live and in person. To be present as a guest in the OPC Foundation booth to demonstrate field-to-cloud connectivity, OPC UA enterprise access and services along with mobile field device operation is one for industry history books. I especially want to thank Thomas Hadlich, FDT architecture and specification chairman, for leading the first FDT UE demo project; along with our front runner member companies for participating – Flowserve, Krohne, Omron, Magnetrol, Thorsis, CodeWrights, VEGA, Rockwell Automation, Turck, PACTware and M&M Software.”
FDT UE consists of FDT Server, FDT Desktop, and FDT DTM components. System and device suppliers can take a well-established standard they are familiar with and easily create and customize standards-based, data-centric, cross-platform FDT 3.0 solutions—expanding their portfolio offerings to meet requirements for next-generation industrial control applications. Each solution auto-enables OPC UA integration and allows the development team to focus on value-added features that differentiate their products, including WebUI and App support. FDT Desktop applications are fully backward compatible supporting the existing install base.
FDT 3.0 specification license agreements and developer toolkits are now available on the FDT website.
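To give a feel for what the auto-enabled OPC UA integration described above looks like from the client side, here is a minimal sketch using the open-source python-opcua library: it connects to a server, browses the first level of the address space, and reads one value. The endpoint URL and node IDs are hypothetical examples of mine, not anything published by FDT Group.

```python
# Minimal OPC UA client sketch (python-opcua). The endpoint and node IDs below
# are hypothetical; substitute the address space your FDT Server actually exposes.
from opcua import Client

ENDPOINT = "opc.tcp://fdt-server.example.com:4840"  # hypothetical FDT Server endpoint

client = Client(ENDPOINT)
client.connect()
try:
    # Walk the first level of the Objects folder to see what the server publishes
    for node in client.get_objects_node().get_children():
        print(node.get_browse_name(), node.get_display_name().Text)

    # Read one (hypothetical) device parameter surfaced by a DTM
    pv = client.get_node("ns=2;s=Device1.ProcessValue")
    print("ProcessValue:", pv.get_value())
finally:
    client.disconnect()
```

The point is simply that once the server side is "pre-wired," any standard OPC UA client, from a historian connector to a tablet app, can get at the device data without custom integration work.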
by Gary Mintchell | Apr 25, 2022 | Asset Performance Management, Operations Management
Emerson now bills itself as a "global software and technology leader." I may have pointed this out before, but I find it interesting that after years of my asking major automation technology providers about software, Emerson, along with Rockwell Automation and Siemens, has brought software to the point of being a major competitive advantage.
This news from Emerson highlights an update to its machinery health platform to enable customers to migrate to a more holistic, modern interface for condition monitoring. New support brings data from edge analytics devices directly to key personnel inside and outside the control room.
Emerson has continuously evolved AMS Machine Works‘ condition monitoring technologies for better diagnostics at the industrial edge. Increased connectivity to external systems provides personnel with an intuitive, holistic asset health score supported by maintenance recommendations to help reliability teams quickly see what is wrong and how to fix it. Intuitive information and alerts are delivered directly to workstations or mobile devices to provide decision support, helping maintenance personnel make the best use of their time.
The newest version of AMS Machine Works adds support for Emerson’s AMS Asset Monitor, which provides embedded, automatic analytics at the edge using patented PeakVue technology to alert personnel to the most common faults associated with a wide range of assets. AMS Machine Works also supports open connectivity using the OPC UA protocol to make it easier to connect to external systems such as historians, computerized maintenance management systems, and more to help close the loop on plant support from identification to repair and documentation.
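As a thought experiment on what "closing the loop" over OPC UA can look like, here is a hedged sketch that subscribes to an asset-health tag and hands each change to an external system such as a historian or CMMS. The endpoint, tag name, and forward_to_cmms() hook are my own inventions for illustration; Emerson has not published this interface in the announcement.

```python
# Hypothetical sketch: subscribe to an asset-health value over OPC UA and forward
# changes to an external system (historian, CMMS, notification service).
from opcua import Client

def forward_to_cmms(asset, score):
    # Placeholder for a real historian write or CMMS work-order check
    print(f"work-order check: {asset} health score = {score}")

class HealthHandler:
    """Called by python-opcua whenever the subscribed node's value changes."""
    def datachange_notification(self, node, val, data):
        forward_to_cmms("Pump-101", val)

client = Client("opc.tcp://edge-gateway.example.com:4840")  # hypothetical endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Pump-101.HealthScore")    # hypothetical tag
    sub = client.create_subscription(1000, HealthHandler())  # 1 s publishing interval
    sub.subscribe_data_change(node)
    input("Subscribed; press Enter to stop...\n")
finally:
    client.disconnect()
```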
by Gary Mintchell | Jan 26, 2022 | Asset Performance Management, Internet of Things, Operations Management
Terrence O'Hanlon and crew produced their annual International Maintenance Conference and Reliability 4.0 live in December in (mostly) sunny Florida. I attended IMC for the first time. The last time I attended one of his excellent events was around 2003, for a different company. This edition was as good as I expected: plenty of informative keynotes and tech sessions, as well as many networking opportunities.
The 700 attendees were fewer than in past years, but then the "international" part of IMC was a little lacking this year, given the situation with Covid and travel.
My goal was to take a deep dive into the nuances surrounding predictive maintenance. My sources in the IT and IIoT communities figured data was becoming readily available and predictive analytics were improving. Add those together and surely it was obvious that predictive maintenance was the “killer app” for them.
I didn’t see it quite that same way even while helping some of them write marketing pieces. It was time to learn more.
Condensing what I heard from several speakers, predictive maintenance was not the end goal. It was useful when connected into the plant’s workflow. It required decision making from experts and integration into the work of maintenance technicians.
Networking with other attendees often has more value than any other interaction. At dinner one evening, a long-time colleague told me another long-time colleague was there. I sat and talked with Gopal GopalKrishnan, with whom I had worked when he was at OSIsoft. He's now with Capgemini. He introduced me to his layered approach to maintenance.
He first pointed me to a McKinsey study, Establishing the Right Analytics-based Maintenance Strategy:
The assumption that predictive maintenance is the only advanced, analytics-based use for Internet of Things (IoT) data in the maintenance world has created a great deal of misconception and loss of value. While predictive maintenance can generate substantial savings in the right circumstances, in too many cases such savings are offset by the cost of unavoidable false positives.
Then consider this thought from Emerson’s Jonas Berge.
We have a promising future of Artificial Intelligence (AI) ahead of us. But to be successful we must first learn to reject the fake visions painted by consultants eager to outdo each other. Most engineers don't have a good handle on AI the way they have on mechanics, electricity, or chemistry. Data science has no first principles or scientific laws. It is very nebulous. So it can be hard to judge if claims made around analytics are realistic. Or you may end up using an overly complex kind of AI for a simple analytics task. It must be like the early days of thermodynamics and electromagnetism.
Now some additional thoughts from Gopal here and here:
As such, a layered fit-for-purpose approach to analytics can be extremely valuable when you also leverage simple heuristics – extracted from SME (subject-matter-expert) knowledge – with basic math and Statistics 101. You can also include first-principles physics-based calculations that require only simple algebra and make predictions by extrapolating trends – backed by sound engineering assumptions.
The takeaway – start with proven fit-for-purpose analytics before chasing AI/ML PoCs with all its attendant risks, and the false positives/false negatives indicated in the McKinsey post. Form follows function; AI/ML yields to simple analytics. The simpler ‘engineered analytics’ captures the low-hanging wins and provides the foundation and the data-engineering required for the AI/ML layer. The oft-heard “… just give me all your data, let’s put it in a data lake and we will figure it out…” is naïveté.
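To make the "layered, fit-for-purpose" idea concrete, here is a minimal sketch of engineered analytics on one asset signal: an SME-supplied limit, a rolling mean, and a Statistics 101 trend extrapolation. The signal, limit, and window are assumptions of mine for illustration, not anything taken from Gopal's or McKinsey's material.

```python
import numpy as np

def engineered_analytics(hours, temperature_c, warn_limit_c=85.0, window=24):
    """Layered check on one signal, no ML required.

    Layer 1 (SME heuristic): is the rolling mean above the warn limit?
    Layer 2 (Statistics 101): fit a linear trend and extrapolate to the limit.
    The limit and window are assumptions an SME would supply.
    """
    t = np.asarray(hours, dtype=float)[-window:]
    y = np.asarray(temperature_c, dtype=float)[-window:]

    rolling_mean = y.mean()
    over_limit = rolling_mean > warn_limit_c

    slope, intercept = np.polyfit(t, y, 1)  # least-squares trend
    if slope > 0:
        hours_to_limit = (warn_limit_c - (slope * t[-1] + intercept)) / slope
    else:
        hours_to_limit = float("inf")  # flat or cooling: no predicted crossing

    return {"rolling_mean_c": round(rolling_mean, 1),
            "over_limit": over_limit,
            "hours_to_limit": max(hours_to_limit, 0.0)}

# Hypothetical hourly bearing-temperature readings drifting upward
hrs = np.arange(48)
readings = 70 + 0.3 * hrs + np.random.normal(0, 0.5, size=48)
print(engineered_analytics(hrs, readings))
```

Nothing here is sophisticated, which is exactly the point: it captures the low-hanging wins and forces the data engineering that any later AI/ML layer would need anyway.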
And a conclusion from McKinsey:
Luckily, while predictive maintenance is probably the best-known approach, there are other powerful ways to enhance a business’s maintenance-service organization and create value from analytics-based technologies. The two most valuable of these, we find, are condition-based maintenance and advanced troubleshooting.
And more from Jonas Berge:
The reason why the existing process sensors are insufficient is because by the time the problem is picked up by the existing process sensors, the problem has already gone too far. You need a change in a signal that indicates an event is about to occur. A pump bearing failure is a good example of this: by the time the bearing failure is visible on the discharge pressure it is already too late because it is a lagging indicator. You need a vibration sensor as a leading indicator where a change signals the bearing is starting to wear.
Lots of time and money can be saved if advanced sensors to collect the required data are put in from the very beginning. With the right sensors in place the AI analytics can do a fabulous job of providing early warning of failure.
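Berge's pump example lends itself to an equally simple illustration. The sketch below computes a baseline from early RMS vibration readings and flags a statistically significant rise, a leading indicator of bearing wear long before discharge pressure would show anything. The sample data, baseline length, and three-sigma rule are assumptions on my part.

```python
import numpy as np

def rms(samples):
    """Root-mean-square of one block of raw accelerometer samples."""
    return float(np.sqrt(np.mean(np.square(samples))))

def early_warning(daily_rms, baseline_days=30, sigma=3.0):
    """Flag a bearing that is starting to wear.

    Compares the latest RMS vibration against the baseline mean plus
    `sigma` standard deviations -- a leading indicator, unlike discharge
    pressure, which only moves after the failure is well underway.
    """
    baseline = np.asarray(daily_rms[:baseline_days], dtype=float)
    threshold = baseline.mean() + sigma * baseline.std()
    return daily_rms[-1] > threshold, threshold

# Hypothetical daily RMS values (mm/s): stable baseline, then a slow drift upward
history = list(np.random.normal(2.0, 0.1, 60)) + [2.2, 2.4, 2.7, 3.1]
alarm, limit = early_warning(history)
print(f"alarm={alarm}, limit={limit:.2f} mm/s")
```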
I guess I’ll add that it’s not necessarily complex unless you choose to make it. But to say that predictive maintenance is the killer app is overly simplifying things to the point that you’d never really get anywhere—even to make IIoT and IT sales.
A better and more inclusive approach to marketing solutions could lead IT and OT suppliers to more lucrative hardware, software, and services sales and profits.
by Gary Mintchell | Oct 27, 2021 | Asset Performance Management, Operations Management
AI is perhaps the most overused buzzword in our market right now. On my drive down I-90 to Rosemont, IL, for the Assembly Show this morning, I listened to the first part of an interview with Eric Schmidt on the Tim Ferriss podcast. The former Google CEO has written a just-released book on AI. AI is short for Artificial Intelligence, a phrase some of us describe as neither artificial nor intelligent. Schmidt defines it as software that learns: feed it more data and it learns from it.
This news comes from ABB, which has launched its Ability Genix Asset Performance Management (APM) Suite with added AI for condition monitoring, predictive maintenance, and 360-degree asset performance insights for the process, utility, and transportation industries. The quick look:
- The launch of ABB Ability Genix Asset Performance Management Suite brings next-generation AI-based predictive maintenance, asset reliability and integrity insights to process and utility industries
- Genix APM is an enterprise-grade application to monitor assets, prescribe maintenance actions, improve equipment utilization, and support lifecycle analysis and capital planning
- Solution provides actionable insights into all aspects of asset performance, enabling customers to reduce machine downtime by up to 50 percent
The Genix APM Suite makes it easy to add asset condition monitoring to existing operational technology (OT) landscapes, enables prioritization of maintenance activities based on AI-informed predictions, and provides a comprehensive overview of asset performance.
Genix APM Suite also empowers significant improvements in operational sustainability. By assessing the remaining useful life of industrial assets, Genix APM generates a plan for preventive maintenance, which can extend equipment uptime by as much as 50 percent and increase asset life by up to 40 percent.
With reliable data insights, decision makers get the information required to identify gaps and areas for improvement in energy efficiency and control of operations, increasing asset availability and improving profit potential.
“Poor asset availability and reliability is a major problem that results in unplanned downtime and unexpected maintenance costs, and also impedes strategic planning and procurement,” said Rajesh Ramachandran, Chief Digital Officer at ABB Process Automation. “It’s not that industrial customers lack data; it’s that many lack effective ways to use their data to improve operational and business performance.”
Genix APM is built on the ABB Ability Genix Industrial Analytics and AI Suite. ABB Ability Genix is a modular, IIoT and analytics suite, which integrates IT, OT and other enterprise data in a contextualized manner, applying advanced industrial AI capabilities that support new insights to optimize operations.
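ABB doesn't detail how Genix APM computes its asset health and maintenance priorities, so the following is only a toy sketch of the general idea: normalize a few condition indicators, roll them into a 0-100 health score, and rank assets for attention. The indicators, weights, and sample fleet are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    vibration_pct: float    # 0 = healthy, 100 = at alarm limit
    temperature_pct: float  # same normalization
    life_used_pct: float    # share of expected life consumed

def health_score(a: Asset) -> float:
    """Weighted 0-100 score; 100 means perfectly healthy. Weights are illustrative."""
    penalty = 0.5 * a.vibration_pct + 0.3 * a.temperature_pct + 0.2 * a.life_used_pct
    return max(0.0, 100.0 - penalty)

fleet = [
    Asset("Pump-101", vibration_pct=80, temperature_pct=40, life_used_pct=70),
    Asset("Fan-202", vibration_pct=20, temperature_pct=10, life_used_pct=90),
    Asset("Motor-303", vibration_pct=5, temperature_pct=5, life_used_pct=30),
]

# Lowest score first: that ordering is the maintenance priority list
for asset in sorted(fleet, key=health_score):
    print(f"{asset.name}: health score {health_score(asset):.0f}")
```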
by Gary Mintchell | Aug 24, 2021 | Asset Performance Management, Operations Management, Workforce
An old friend and several acquaintances found themselves adrift when a magazine closed. All being entrepreneurial, they started a website and newsletter—RAM Review (Reliability, Availability, Maintenance). Old friend Jane Alexander is the editor. Not meaning she’s old, just that we’ve known each other for many years.
I met Bob Williamson 10 or 12 years ago mostly around discussions of ISO 55000 on asset management. He wrote the lead essay for a recent email newsletter on workforce. Now, I have to admit that the only part of manufacturing I never worked in was maintenance and reliability. I did work with skilled trades when I was a sales engineer, though. I considered them geniuses for the way they could fix things. One of the points of Bob’s essay is taking care of things before they break and need help.
The main workforce discussion in the media concerns remote or hybrid work. Many engineering roles can be performed remotely; many roles within manufacturing and production must be performed on site. Given the current and projected labor shortage, I like his closing paragraph, except for the put-down of current operators. I knew plenty who cared for their machine or process. Of course, many didn't, most likely a management failure. But cross-training people to be, at least to some degree, both competent operators and first-line RAM people seems to me a winning strategy. I've reprinted most of Bob's essay below. You can read it on their website.
For many manufacturers, returning to traditional ways of work simply will not be an option. Something must change if they are to attract, hire, and retain a capable workforce. Therefore, I believe technology and desperately willing top-management teams will also help alter work cultures on factory floors. Respondents to the Manufacturing Alliance/Aon survey suggested offering “flexible working hours, compressed work weeks, split shifts, shift swapping, and part-time positions.” Use of such enticements with plant-floor workforces would look very different than use among the carpet dwellers in front offices.
We have another option, of course: Technology can automate our manufacturing processes, and much of it is far more affordable than it was a decade ago. In fact, given the rising cost of labor over the past decade, with increasing healthcare-cost burdens and skills shortages, many businesses have already automated some of their labor-intensive processes. The times we are in call for—make that scream for—large-scale automation. Yet, while process automation can be easier for large, deep-pocketed companies than for the smalls, it’s still a huge challenge.
There are four big hurdles to be overcome when automating manufacturing processes: availability, installation, sustainable reliability, and work-culture change. And remember, skills and labor shortages are widespread in these post-pandemic times. Moreover, despite the supply chain’s efforts to heal and keep up, manufacturers of automation technologies aren’t immune to the production-barrier ills that others face these days.
To repeat: RAM professionals are on manufacturing’s front line. Skill shortages may be affecting our ranks, but there are recruiting and training efforts underway in many companies to remedy the situation. In addition, we have technologies for carrying out data collection, analysis, and problem-solving somewhat remotely. However, the boots-on-the-ground parts of reliability and maintenance will not be virtual or remote.
So, consider this option: Recruit and train displaced production workers to wear some RAM "boots." They'll be familiar with industrial environments and the importance of plant equipment. Then, let's train our current production workers to care more for their machines than they did in the past and, in the process, become the eyes and ears for reliability, availability, and maintenance improvement.