GE Digital Proficy Operations Analytics Software Added To Proficy Suite

An example of software integration and pulling data together for improved business performance.

GE Digital announced it is adding Proficy Operations Analytics to its Proficy suite of software solutions. As an integral part of the Proficy suite, Proficy Operations Analytics can use the data already collected in Proficy Historian, Proficy Plant Applications / MES, and Proficy Manufacturing Data Cloud to achieve 2-5% more efficiency from manufacturing operations year over year.

Proficy Operations Analytics is a self-provisioning, ready-to-deploy SaaS-based predictive operations center for industrial IoT and AI. This cloud analytics solution helps operations, engineering, and executive teams gain data visibility, uncover efficiencies, and take proactive operational action at enterprise scale. Within minutes of connecting Proficy Operations Analytics to operational and maintenance data sources, users can gain visibility into insights that can improve operational and revenue performance.

To achieve speedy visibility, Proficy Operations Analytics includes more than 100 pre-built industrial data agents that automatically connect to historians, PLCs, MES, SCADA, ERP, lab databases, and IoT devices in a secure-by-design, frictionless way, with automated normalization of disparate properties for immediate analysis. Self-provisioning processes configure Digital Twins automatically to add the context required to automate analytics on all process and event data. Thirty pre-built industrial predictive applications, easily assigned to Digital Twins without any data science or application development work, help to optimize operations.
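As a rough illustration of what “automated normalization of disparate properties” can mean in practice, the sketch below maps tag records from different source systems onto one common schema. All field names and unit aliases here are hypothetical, not GE’s actual implementation:

```python
# Hypothetical sketch: normalizing tag metadata from disparate sources
# (historian, PLC, MES) into one common schema so analytics can treat
# them uniformly. Field names and aliases are illustrative only.

UNIT_ALIASES = {"degC": "C", "deg_c": "C", "psig": "psi"}

def normalize_tag(source: str, raw: dict) -> dict:
    """Map a source-specific tag record onto a common property set."""
    # Each source names the same properties differently.
    field_map = {
        "historian": {"TagName": "name", "EngUnits": "unit", "Value": "value"},
        "plc":       {"symbol": "name", "units": "unit", "val": "value"},
        "mes":       {"parameter": "name", "uom": "unit", "reading": "value"},
    }[source]
    tag = {common: raw[src] for src, common in field_map.items() if src in raw}
    tag["unit"] = UNIT_ALIASES.get(tag.get("unit"), tag.get("unit"))
    tag["source"] = source
    return tag

print(normalize_tag("plc", {"symbol": "TT-101", "units": "degC", "val": 71.4}))
# → {'name': 'TT-101', 'unit': 'C', 'value': 71.4, 'source': 'plc'}
```

Once every source presents the same property names and units, downstream analytics and Digital Twin assignment no longer need per-source handling.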

With Proficy Operations Analytics, typical deployments achieve a fast return on the subscription investment. These gains are accelerated with pre-built predictive analytics applications such as Predictive Quality, Predictive Throughput, Predictive Energy Efficiency, Predictive Uptime, Predictive Asset Reliability, and Predictive Asset Life. These applications do not require data science expertise to implement, and they incorporate curated datasets that are easily visualized in traditional analytics dashboards, so operators can surface the economic impact of actions to executives.

As part of an overall plan to continuously optimize quality and production, a progressive manufacturer of specialty film products has leveraged predictive technologies to pull in data from multiple disparate data sources, monitor more than 400 measures of line stability, and leverage machine learning to continuously predict top potential causes of line instability and film breaks in real time. Predictive insights are presented to the process engineers in easy-to-understand displays, distilling thousands of pieces of data into just the elements needed to address the current problems. Process engineers can use analytics to understand the causes of line instability and make real-time recommendations to the operations team.
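The idea of continuously ranking potential causes of line instability can be shown with a toy sketch (generic, not the manufacturer’s or vendor’s actual algorithm): score each monitored measure by how strongly it correlates with a line-stability signal over a recent window, and surface the top candidates to the engineer.

```python
# Toy cause-ranking sketch: rank monitored measures by the strength of
# their correlation with an instability score over a recent window.
# Real systems use richer ML models; this only illustrates the idea.

from statistics import mean, pstdev

def correlation(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)

def top_causes(measures: dict, stability: list, k: int = 3):
    """Return the k measure names most correlated with instability."""
    ranked = sorted(
        measures,
        key=lambda name: abs(correlation(measures[name], stability)),
        reverse=True,
    )
    return ranked[:k]

window = {
    "tension":    [5.1, 5.3, 6.8, 7.2, 7.9],  # tracks instability closely
    "line_speed": [300, 301, 299, 300, 300],  # nearly constant
}
print(top_causes(window, stability=[0.1, 0.2, 0.6, 0.7, 0.9], k=1))
# → ['tension']
```

Scaling the same idea to 400+ measures, streaming windows, and learned (rather than linear) relationships is essentially what the predictive system described above does.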

“GE Digital is uniquely positioned to help industrial companies accelerate AI and ML with a full set of industrial data management and analytics-based solutions that feature a scalable architecture, single pane of glass for visibility, and security and availability on premise and in the cloud,” said Richard Kenedi, General Manager of GE Digital’s Manufacturing and Digital Plant business. “Proficy Operations Analytics puts industrial data to work to empower workers and lead global decision-making frameworks, putting data in context to drive resilient business outcomes that make people, assets, and processes work together efficiently.”

Schneider Electric Standardizes EcoStruxure Micro Data Centers on Stratus Edge Computing

Juliet, justifying her love for a man from a rival family, argues, “O, be some other name! What’s in a name? That which we call a rose by any other name would smell as sweet…” A modern manufacturing OT/IT architecture includes some manner of compute and storage at the site, close to the operations, with robust networking to the cloud.

Generally, we call that rose Edge Computing. But we can also call it a “micro data center” if we wish. This news discusses a partnership between Schneider Electric’s data center business and Stratus Technologies’ server business that has a positive effect on the AVEVA software business and the effectiveness of System Integrators (SIs). This news popped last month during the flurry of conferences I was attending.

Stratus Technologies, a provider of simplified, protected, autonomous Edge Computing platforms, has announced that Schneider Electric has released EcoStruxure Micro Data Center architectures standardized on Stratus Edge Computing platforms to accelerate the move of traditional data center capabilities to the factory floor.

The new micro data center architectures integrate Stratus’ fault tolerance and virtualization with Schneider Electric’s uninterruptible power to consolidate software workloads and run critical equipment with no downtime. Jointly developed, the micro data centers are fully tested, validated, and available with pre-loaded software to reduce engineering complexity for System Integrators (SIs).

Industry 4.0 Micro Data Centers for Automation and Control 

The Schneider Electric EcoStruxure Micro Data Center with Stratus ftServer enables end users to move data center operations to the edge, bringing computing power close to critical equipment to solve data latency and bandwidth issues. Stratus ftServer’s virtualization enables end users to concurrently run monitoring and control, on-premises historian, manufacturing execution, asset performance management, and automated material handling applications, as well as advanced AI and ML applications. Purpose-built for the operational environment, the unit is physically protected in a single enclosure.

John Knorr, VP of Global IT Alliances, at Schneider Electric said, “When partnering with Stratus, we spoke about the many day-to-day responsibilities of a System Integrator and the security and privacy concerns raised with outsourcing IT. We made it our mission to not only bridge the gap of IT and OT but simplify the purchase, deployment and management cycle all together with a one-stop-shop solution.” 

Less Engineering Complexity and Faster Time-to-Market 

Previously, SIs and end users had to source the compute platform and power components from separate vendors and distributors, align requirements and costs, and then assemble, configure, and test. With the combined solution, teams have a prevalidated solution available from a single source with service and support from both Stratus and Schneider Electric. As a result, organizations are able to deploy an OT-ready micro data center for 40% less field engineering cost and 20% faster time-to-market.

Tim Black, Global SI Program Manager, at AVEVA said, “This Stratus and Schneider Electric collaboration drastically reduces the engineering work and logistical complexity for AVEVA SIs and distributors deploying micro data centers in OT environments. The EcoStruxure Micro Data Center with Stratus ftServer should be the standard fault tolerant platform for Edge-to-Enterprise digital transformation projects, enabling fast deployments for Performance Intelligence.”

EcoStruxure Micro Data Center Configurations and Sizing

System Integrators and customers can order the EcoStruxure Micro Data Center systems in a range of configurations as well as pre-loaded with industrial software such as AVEVA System Platform and others.

The EcoStruxure Micro Data Center with Stratus ftServer is available in 6U, 12U or 42U rack sizes powered by Schneider Electric Secure Power solutions. Each micro data center has integrated cooling and optional environmental monitoring (temperature, humidity, fluid, smoke) and security (door sensors and camera). The 42U enclosure adds NEMA-12 with filters and ventilation fans and is ideal for larger deployments where additional IT gear is required and dedicated IT space is not available. Schneider Electric’s smart, uninterruptible power supply (UPS) offers protection against electrical hazards, remote support, and “graceful” calculated shutdown.

Stratus ftServer delivers virtualization to run a range of concurrent software workloads and fault tolerance to eliminate downtime and data loss. Each rack size enclosure offers three Stratus ftServer configurations:

  • Stratus ftServer 2900 supports up to 10,000 I/Os and two (2) remote clients. The unit is powered by a 1.5kVA APC Smart-UPS On-Line uninterruptible power supply with a network management card and additional capacity for lower-power devices.
  • Stratus ftServer 4900 is ideal for 25,000-50,000 I/Os and five (5) remote clients. The unit is powered by a 2.2kVA APC Smart-UPS On-Line UPS with a network management card. The solution offers an additional 6U of rack space for switches, KVM, and other IT gear.
  • Stratus ftServer 6900 supports up to 100,000 I/Os and twenty (20) remote clients. The unit is powered by a 3kVA APC Smart-UPS On-Line UPS with a network management card.
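Given the published sizing guidance above, choosing a configuration largely reduces to matching I/O count and remote-client count against the thresholds. A toy selection helper (illustrative only; real sizing should follow Stratus and Schneider Electric guidance):

```python
# Sizing sketch based on the figures listed above: pick the smallest
# ftServer configuration whose published ceilings cover the requirement.
# Thresholds mirror the bullet list; this is not official sizing advice.

CONFIGS = [
    # (model, max I/O points, max remote clients)
    ("ftServer 2900", 10_000, 2),
    ("ftServer 4900", 50_000, 5),
    ("ftServer 6900", 100_000, 20),
]

def pick_config(io_points: int, remote_clients: int) -> str:
    for model, max_io, max_clients in CONFIGS:
        if io_points <= max_io and remote_clients <= max_clients:
            return model
    raise ValueError("Requirements exceed the listed configurations")

print(pick_config(30_000, 4))  # → ftServer 4900
```

In practice, growth headroom, the extra 6U of rack space on the 4900, and UPS capacity would also factor into the choice.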

ROKLive Into the Clouds but Weaker at the Edge

This week’s conference was called ROKLive. This is the annual Rockwell Software conference that has naturally morphed over the years from its beginnings as training for distributor tech specialists. This year was virtual only and seemed a bit more general. Themes this year could be cast as corporate strategy toward software and cloud.

Software Strong

CEO Blake Moret has been making (for Rockwell Automation) bold moves during his tenure, effectively remaking the company. It has always been a hardware product company. Software was developed as necessary to support the hardware products. You cannot, for example, sell a PLC without programming software, or a drive without configuration software. Then a couple of small acquisitions moved the company tentatively into software in what was described to me as an experiment. Many of those acquisitions failed to fulfill the ambitions of the software leaders at the time.

Building the Cloud

Software is no longer relegated to an experiment. It has become a core part of the business. Rockwell made a huge investment in PTC, gaining access to the ThingWorx IoT platform and Kepware. This enabled a restructuring of the software group that got products to market quickly, surely contributing to both the bottom line and customer satisfaction. For years they wanted to talk to me about asset management. Then I’d remember that that meant helping customers keep track of Allen-Bradley spare parts in their cribs. Now, with the Fiix acquisition, Rockwell gained cloud expertise in addition to an EAM and CMMS suite. Further to the cloud was the recently announced acquisition of Plex, giving Rockwell an updated MES product and even more cloud expertise.

At about the same time that I left magazine media and chose Manufacturing Connection (emphasis on connection) as a blog name, Rockwell Automation announced a corporate strategy called Connected Enterprise. I told the marketing executives at the time that great minds think alike <smile>. These investments flesh out that connected enterprise strategy building upon the Ethernet strategy established years ago.

Organization Alignment

Then there are the little corporate things I tend to notice. For many years the head of software was at VP level, usually reporting to the SVP of control and automation. Now there is a Sr. VP whose title is Software and Control. Further, like many if not most organizations, the organizational structure had VPs in charge of businesses. If there was a technical or business reason for two products to work together, the result would tend to cost one of the VPs bonus money. They have corrected that flaw to add incentives for executives to work together. A very good organizational move to advance overall business and technology strategy.

Weak on the Edge

An IT conference was held the week before. One of the themes was “edge-to-cloud”. At least one presentation at ROKLive also discussed “edge-to-cloud”. I’ve already pointed out the beginnings of a cloud strategy at Rockwell. You would expect Rockwell to be an “edge” company. I attended a session on that topic and came away less than enthused. The discussion included industrial PCs (IPCs) and a card in a PLC. If I were in BusDev at a company with a solid edge compute solution, knowing Rockwell’s newfound penchant for strong partnerships, I’d be on the phone with an SVP or CTO with a pitch.

HPE Discover Uncovers Age of Insight Into Data

HPE Discover was held this week, virtually, of course. I can’t wait for the return of in-person conferences. It’s easier for me to get relevant conversations and learn from technology users when everyone is gathered together. You can attend on demand here.

I didn’t have any specific industrial/manufacturing discussions this year, although I had met up with Dr. Tom Bradicich earlier to get the latest on IoT and Edge. You can check out that conversation here.

I suppose the biggest company news was the acquisition of Determined AI (see news release below). This year’s theme was the Age of Insight (into data), and AI and ML are the technologies required to pull insight out of the swamp of data.

HPE’s strategy remains to become an as-a-Service company. This strategy is gaining momentum. They announced 97% customer retention with GreenLake, the cloud-as-a-service platform. We are seeing an uptake of this strategy specifically among manufacturing software companies, so I hope you manufacturing IT people are studying this.

Dr. Eng Lim Goh, CTO, stated in his keynote, “We are awash in data, but it is siloed. This brings a need for a federation layer.” Later, in the HPE Labs keynote, the concept of Dataspace was discussed. My introduction to that concept came from a consortium in Europe. More on that in a bit. Goh gazed into the future predicting that we need to know what data to collect, and then look at how and where to collect and find and store data.

The HPE Labs look into Dataspaces highlighted these important characteristics: democratize data access; lead with open source; connect data producers/consumers; and remove silos. Compute can’t keep up with the amount of data being generated, hence the need for the exascale compute HPE is developing. Further, AI and ML are critical capabilities, but data is growing too fast to train on all of it.

The Labs presentation brought out the need to think differently about programming in the future. There was also a look into future connectivity, examining photonics research. This technology will enhance data movement, increasing bandwidth at low power consumption. To realize the benefits, engineers will have to realize it’s more than a wire-to-wire exchange. This connectivity opens up new avenues of design freedom. Also, to obtain the best results from exploiting this technology for data movement, companies and universities must emphasize cross-disciplinary training.

Following is the news release on the Determined AI acquisition.

HPE acquires Determined AI to accelerate artificial intelligence innovation

Hewlett Packard Enterprise has acquired Determined AI, a San Francisco-based startup that delivers a software stack to train AI models faster, at any scale, using its open source machine learning (ML) platform.

HPE will combine Determined AI’s unique software solution with its world-leading AI and high performance computing (HPC) offerings to enable ML engineers to easily implement and train machine learning models to provide faster and more accurate insights from their data in almost every industry.  

“As we enter the Age of Insight, our customers recognize the need to add machine learning to deliver better and faster answers from their data,” said Justin Hotard, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), HPE. “AI-powered technologies will play an increasingly critical role in turning data into readily available, actionable information to fuel this new era. Determined AI’s unique open source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure. I am pleased to welcome the world-class Determined AI team, who share our vision to make AI more accessible for our customers and users, into the HPE family.”

Building and training optimized machine learning models at scale is considered the most demanding and critical stage of ML development, and doing it well increasingly requires researchers and scientists to face many challenges frequently found in HPC. These include properly setting up and managing a highly parallel software ecosystem and infrastructure spanning specialized compute, storage, fabric and accelerators. Additionally, users need to program, schedule and train their models efficiently to maximize the utilization of the highly specialized infrastructure they have set up, creating complexity and slowing down productivity.

Determined AI’s open source machine learning training platform closes this gap to help researchers and scientists to focus on innovation and accelerate their time to delivery by removing the complexity and cost associated with machine learning development. This includes making it easy to set-up, configure, manage and share workstations or AI clusters that run on-premises or in the cloud.

Determined AI also makes it easier and faster for users to train their models through a range of capabilities that significantly speed up training, which in one use case related to drug discovery, went from three days to three hours. These capabilities include accelerator scheduling, fault tolerance, high speed parallel and distributed training of models, advanced hyperparameter optimization and neural architecture search, reproducible collaboration and metrics tracking.
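Hyperparameter optimization, one of the capabilities listed above, can be illustrated with a minimal sketch. This is generic, not Determined AI’s API (its platform uses far more sophisticated adaptive searchers and distributes trials across accelerators): try configurations from a search space and keep the one with the lowest validation loss.

```python
# Minimal hyperparameter search sketch (grid search). The train()
# function is a stand-in for a real training run; by construction its
# loss is minimized at lr=0.1, batch_size=32.

import itertools

def train(config):
    """Stand-in for a real training run; returns a validation loss."""
    return abs(config["lr"] - 0.1) + abs(config["batch_size"] - 32) / 100

def grid_search(space):
    """Try every combination in the space; keep the lowest-loss config."""
    best_cfg, best_loss = None, float("inf")
    for values in itertools.product(*space.values()):
        cfg = dict(zip(space.keys(), values))
        loss = train(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg

space = {"lr": [0.001, 0.01, 0.1, 1.0], "batch_size": [16, 32, 64, 128]}
print(grid_search(space))  # → {'lr': 0.1, 'batch_size': 32}
```

The speedups described in the release come from replacing this exhaustive loop with adaptive methods that stop unpromising trials early and run the survivors in parallel on specialized hardware.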

“The Determined AI team is excited to join HPE, who shares our vision to realize the potential of AI,” said Evan Sparks, CEO of Determined AI. “Over the last several years, building AI applications has become extremely compute, data, and communication intensive. By combining with HPE’s industry-leading HPC and AI solutions, we can accelerate our mission to build cutting edge AI applications and significantly expand our customer reach.”

To tackle the growing complexity of AI with faster time-to-market, HPE is committed to continue delivering advanced and diverse HPC solutions to train machine learning models and optimize applications for any AI need, in any environment. By combining Determined AI’s open source capabilities, HPE is furthering its mission of making AI heterogeneous and empowering ML engineers to build AI models at a greater scale.

Additionally, through HPE GreenLake cloud services for High Performance Computing (HPC), HPE is making HPC and AI solutions even more accessible and affordable to the commercial market with fully managed services that can run in a customer’s data center, in a colocation or at the edge using the HPE GreenLake edge to cloud platform.

Determined AI was founded in 2017 by Neil Conway, Evan Sparks, and Ameet Talwalkar, and based in San Francisco. It launched its open-source platform in 2020.

Rockwell Automation To Expand Industrial Cloud Software Offering With Acquisition Of Plex Systems

Funny how things go. I recently sat in my favorite local direct trade coffee house, maybe under the influence of caffeine, contemplating the industrial software and MES market. The market is ripe for further consolidation, I thought. I rested in that thought for a while, then let it go. Later I was contemplating Rockwell Automation’s software situation, recognizing its move partnering with PTC ThingWorx for IoT, but thinking it probably needed to make a move to build momentum for its manufacturing software (MES).

This morning I do a quick scan of LinkedIn and spot this press release. This is a good move. I had some in-depth interviews with Plex within the past couple of years. Good company and good idea, but I didn’t see how it was ever going to really grow.

I think Rockwell’s new software executive team should do well with this acquisition. (And as an independent blogger/analyst guy, I’m not paid to say that.)

Rockwell Automation, the world’s largest company dedicated to industrial automation and digital transformation, and Plex Systems, the leading cloud-native smart manufacturing platform operating at scale, today announced that Rockwell has entered into an agreement to acquire Plex for $2.22 billion in cash.

Plex offers the only single-instance, multi-tenant SaaS manufacturing platform operating at scale, including advanced manufacturing execution systems, quality, and supply chain management capabilities. It has over 700 customers and manages more than 8 billion transactions per day. Plex’s software capabilities will be further differentiated by Rockwell’s global market access, complementary industry expertise, and ability to turn real-time data into actionable insights.

“This acquisition will accelerate our strategy to bring the Connected Enterprise to life, driving faster time to value for our customers as they increasingly adopt cloud solutions to improve resilience, agility, and sustainability in their operations,” said Blake Moret, Chairman and CEO of Rockwell Automation. “Combining Plex’s cutting-edge cloud technology with Rockwell’s existing software portfolio and domain expertise will add customer value and create more ways to win. The acquisition will also accelerate our software revenue growth and strengthen our annual recurring revenue streams.”

A growing dilemma for manufacturers is the urgent need to increase production and improve resilience, while driving efficiency and compliance. Companies are increasingly seeking to upgrade their production systems with modern, cloud-based manufacturing execution systems that are easy to implement, use, and maintain. Plex’s platform helps customers to connect, automate, track, and analyze their operations and connected supply chains.

“Rockwell believes in the power of data and technology to transform manufacturing and industrial operations,” said Brian Shepherd, senior vice president, Software and Control, for Rockwell Automation. “Together with the advanced asset maintenance and management capabilities provided by our recent Fiix acquisition, Rockwell will have a strong portfolio of cloud-native solutions for our customers’ production systems upon completion of the Plex acquisition.”

“Plex has always been more than a company,” said Bill Berutti, CEO of Plex. “We have been a leader in the movement to smart manufacturing and a trusted partner to more than 700 manufacturing companies around the globe. Joining forces with Rockwell is great for our customers, our partners, and our employees as we move to expand our reach and impact and accelerate our mission to bring manufacturing to the cloud.”

Plex will be reported as part of Rockwell’s Software and Control operating segment which provides leading hardware and software offerings for the design, operation, and maintenance of production automation and management systems. As a part of the acquisition, Rockwell will welcome more than 500 highly engaged new employees.

The acquisition will be financed with a combination of cash and short-term and long-term debt. Subject to customary closing conditions and completion of regulatory review, the acquisition is expected to close in Rockwell’s fiscal fourth quarter.

Autonomous Operations, Experion Operator Advisor, and Security Survey Unveiled at HUG 2021

Three of the eleven hours I spent on a variety of video platforms Monday and Tuesday were “at” Honeywell User Group, better known as HUG. I appreciate the virtual conference since there were a minimum of three events for me to attend this week. If they had all been in physical locations, I would never have made it.

This HUG was more interesting than I remember from the past couple of years. Honeywell Process Solutions (HPS) has, well, er, solved several hanging issues that were critical to its future success. The big one is the move to software-defined systems, decoupling hardware and software. This is a major goal of the Open Process Automation Forum and of major Honeywell customer ExxonMobil. It has also opened an innovative migration path from its legacy TDC systems to its latest Experion C300 systems, avoiding the dreaded rip-and-replace. I was pretty impressed with the progress since last year. I don’t hear from HPS on a regular basis, so this was welcome news.

There are two press releases below. The first discusses Operator Advisor, part of its plant-wide optimization strategy. 

Jason Urso, HPS CTO, discussed the Autonomous Operations Maturity Model in his keynote and a later session with media. This model contains five levels, all of which HPS is hard at work building out. Following is a brief outline.

  • Level 1—Automation Optimization. Introduced Electronic Work Instructions and the Forge Plant-wide Optimizer.
  • Level 2—Intelligent Operations. Here, Urso discussed expanding use of digital twins for modeling and HALO automated intelligence (see below).
  • Level 3—Remote Operations. For example, well head, pipeline, offshore, and mining operations. Urso discussed project execution support services leading to fully remote operations.
  • Level 4—Resilient Operations. Experion HIVE I/O, increasing in usability and flexibility, decouples hardware and software, eliminating concern with end-of-life issues for equipment.
  • Level 5—Autonomous Operations. HPS has introduced an Energy Control System with market APIs.

Overall, an informative couple of days devoted to Honeywell Process Solutions.

Operator Advisor Added to HALO Suite

Honeywell announced the addition of Operator Advisor to its Experion Highly Augmented Lookahead Operations (HALO) suite. 

This software solution enables plant owners to objectively measure gaps and drive operator effectiveness to the next level. This market-first solution presents users – including oil and gas, chemical, refining and petrochemical organizations – with a consolidated scorecard of enterprise automation utilization and recommended steps to address performance-related gaps.

Honeywell’s solution uses machine learning-powered analytics, a type of artificial intelligence, to gather insights from enterprise data sources such as distributed control systems and funnel those insights into dashboards. These dashboards can provide operations managers and supervisors with a clear and complete view of operator performance and improvement opportunities.

By understanding how operator actions, inactions and workload levels contribute to optimal production, organizations can develop targeted training programs, make strides toward autonomous operations and build process resilience – all of which can help them better compete in the digital age.

“According to the Abnormal Situation Management Consortium, 40% to 70% of industrial accidents are linked to human error,” said Pramesh Maheshwari, vice president and general manager, Lifecycle Solutions and Services, Honeywell Process Solutions. “This underscores the importance of deploying an enterprise-wide competency program that empowers organizations and workers through use of advanced technologies like machine learning to improve plant performance, uptime, reliability and safety.”

As part of Honeywell’s Workforce Excellence portfolio, HALO Operator Advisor is a timely response to several industry trends, including the global desire for post-COVID-19 preparedness and resilience, growing operational complexity, the aging industrial workforce and the urgent need to upskill next-generation recruits.

Honeywell data reveals the transformational impact HALO Operator Advisor can have on plant operations. Potential benefits include a 75% reduction in incidents and human errors, leading to the recovery of $1.5 million annually per plant in production loss due to worker performance; a $2 million annual reduction in operational costs by optimizing worker productivity and training and advancing toward fully autonomous plant operation; $1.3 million in annual headcount savings through optimized production; and $1 million in annual maintenance savings through improved equipment reliability.

HALO Operator Advisor will be available in October 2021. For more information, visit: https://www.honeywellprocess.com and check the HALO Operator Advisor Service Note.

Cybersecurity Research Reports Increase In USB Threats

  • Report finds that 79% of cyber threats originating from removable media could critically impact operational technology (OT) environments
  • 2021 Honeywell USB Threat Report finds 37% of all cybersecurity threats were designed to use removable media – nearly double last year’s findings

According to a report released today by Honeywell, USB-based threats that can severely impact business operations increased significantly during a disruptive year when the usage of removable media and network connectivity also grew.

Data from the 2021 Honeywell Industrial USB Threat Report indicates that 37% of threats were specifically designed to utilize removable media, which almost doubled from 19% in the 2020 report. The research also highlights that 79% of cyber threats originating from USB devices or removable media could lead to a critical business disruption in the operational technology (OT) environment.  At the same time, there was a 30% increase in the use of USB devices in production facilities last year, highlighting the growing dependence on removable media.

The report was based on aggregated cybersecurity threat data from hundreds of industrial facilities globally during a 12-month period. Along with USB attacks, research shows a growing number of cyber threats including remote access, Trojans and content-based malware have the potential to cause severe disruption to industrial infrastructure.

“USB-borne malware was a serious and expanding business risk in 2020, with clear indications that removable media has become part of the playbook used by attackers, including those that employ ransomware,” said Eric Knapp, engineering fellow and director of cybersecurity research for Honeywell Connected Enterprise. “Because USB-borne cyber intrusions have become so effective, organizations must adopt a formal program that addresses removable media and protects against intrusions to avoid potentially costly downtime.”

Many industrial and OT systems are air-gapped or cut off from the internet to protect them from attacks. Intruders are using removable media and USB devices as an initial attack vector to penetrate networks and open them up to major attacks. Knapp says hackers are loading more advanced malware on plug-in devices to directly harm their intended targets through sophisticated coding that can create backdoors to establish remote access. Hackers with remote access can then command and control the targeted systems.

The 2021 report includes data from Honeywell’s Secure Media Exchange (SMX) technology, which is designed to scan and control USB drives and removable media. To reduce the risk of USB-related threats, Honeywell recommends that organizations utilize several layers of OT cybersecurity software products and services such as Honeywell’s Secure Media Exchange (SMX), the Honeywell Forge Cybersecurity Suite, people training and process changes.

Honeywell’s Secure Media Exchange (SMX) provides advanced threat detection for critical infrastructure by monitoring, better protecting, and logging use of removable media throughout industrial facilities. The Honeywell Forge Cybersecurity Suite can monitor for vulnerabilities such as open ports or the presence of USB security controls to strengthen endpoint and network security, while also ensuring better cybersecurity compliance. Read the full report here and visit Honeywell Forge Cybersecurity.