DH2i Launches DxEnterprise Smart Availability Software for Containers

Containers have become a must-have technology for those pursuing some form of Digital Transformation, or whatever you wish to label it. I’ve written little about the subject. Following is a news release concerning a way to achieve high availability for cloud-native Microsoft SQL Server containers.

DH2i, a provider of multi-platform Software Defined Perimeter (SDP) and Smart Availability software, announced June 22 the general availability (GA) of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups (AG) both inside and outside Kubernetes (K8s).

Container use is skyrocketing for digital transformation projects—particularly the use of stateful containers for databases such as Microsoft SQL Server. This growing stateful database container use is also generating a hard production deployment requirement for database-level high availability (HA) in Kubernetes.

For medium and large organizations running SQL Server, database-level HA has traditionally been provided by SQL Server Availability Groups (AGs). However, SQL Server AGs have not been supported in Kubernetes until now—hindering organizations’ ability to undergo digital transformations. DxEnterprise (DxE) for Containers is the answer to the problem.

DxEnterprise for Containers accelerates an enterprise’s digital transformation (DX) by speeding the adoption of highly available stateful containers. DxEnterprise (DxE) for Containers provides SQL Server Availability Group (AG) support for SQL Server containers, including for Kubernetes clusters. It enables customers to deploy stateful containers to create new and innovative applications, while also improving operations with near-zero recovery time objective (RTO) failover to deliver better products and services more efficiently and at a lower cost. Additionally, it helps organizations generate new revenue streams by enabling them to build distributed Kubernetes AG clusters across availability zones and regions, resulting in hybrid-cloud and multi-cloud environments that can rapidly adapt to changes in market conditions and consumer preferences.
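
For context on what highly available stateful SQL Server containers involve on the Kubernetes side, here is a minimal sketch using the official Kubernetes Python client to create a single-replica SQL Server StatefulSet. It shows only the generic Kubernetes piece; the DxE-specific parts (the DxE container, license activation, and the AG configuration itself) are not shown, and the namespace, labels, and password below are hypothetical placeholders.

```python
# Sketch: create a bare SQL Server StatefulSet with the Kubernetes Python client.
# A product such as DxE would then manage AG membership on top of this; that
# part is not shown. Namespace, labels, and password are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

container = client.V1Container(
    name="mssql",
    image="mcr.microsoft.com/mssql/server:2019-latest",
    ports=[client.V1ContainerPort(container_port=1433)],
    env=[
        client.V1EnvVar(name="ACCEPT_EULA", value="Y"),
        # In a real deployment this would come from a Kubernetes Secret.
        client.V1EnvVar(name="MSSQL_SA_PASSWORD", value="ChangeMe!Str0ng"),
    ],
)

stateful_set = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="mssql", labels={"app": "mssql"}),
    spec=client.V1StatefulSetSpec(
        service_name="mssql",
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "mssql"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "mssql"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="sql", body=stateful_set)
```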

“Kubernetes lacks SQL Server AG support, which is essential for using stateful containers in production,” said Shamus McGillicuddy, Vice President of Research, EMA Network Management Practice. “DxEnterprise for Containers solves this problem. It enables AG support in Kubernetes.”

“DxE for Containers is the perfect complement to Kubernetes’ pod/node-level cluster HA,” said Don Boxley, DH2i CEO and Co-Founder. “DxE for Containers enables Microsoft users to confidently deploy highly available SQL Server containers in production, speeding their organizations’ digital transformation.”

DxEnterprise for Containers Features & Benefits:

  • Kubernetes SQL Server container Availability Groups with automatic failover, an industry first – Enables customers to deploy stateful containers to create new and innovative applications
  • Near-zero RTO container database-level failover – Improves operations to more efficiently and resiliently deliver better products and services at a lower cost to the business
  • Distributed Kubernetes AG clusters across availability zones/regions, hybrid-cloud, and multi-cloud environments with built-in secure multi-subnet express micro-tunnel technology – Enables customers to rapidly adapt to changes in market conditions and consumer preferences
  • Intelligent health and performance QoS monitoring, alerting, and management – Simplifies system management
  • Mix-and-match support for Windows and Linux; bare-metal, virtual, and cloud servers – Maximizes IT budget ROI

Organizations can now purchase DxEnterprise (DxE) for Containers directly from the DH2i website to get immediate full access to the software and support. Customers have the flexibility to select the support level and subscription duration to best meet the needs of their organization. Users can also subscribe to the Developer Edition of DxEnterprise (DxE) for Containers to dive into the technology for free for non-production use.

DH2i Company is the leading provider of multi-platform Software Defined Perimeter (SDP) and Smart Availability software for Windows and Linux. DH2i software products DxOdyssey and DxEnterprise enable customers to create an entire IT infrastructure that is “always-secure and always-on.”

HPE Discover Uncovers Age of Insight Into Data

HPE Discover was held this week, virtually, of course. I can’t wait for the return of in-person conferences. It’s easier for me to have relevant conversations and learn from technology users when everyone is gathered together. You can attend on demand here.

I didn’t have any specific industrial/manufacturing discussions this year, although I had met up with Dr. Tom Bradicich earlier to get the latest on IoT and Edge. You can check out that conversation here.

I suppose the biggest company news was the acquisition of Determined AI (see the news release below). This year’s theme was the Age of Insight (into data), and AI and ML are the technologies required to pull insight out of the swamp of data.

HPE’s strategy remains to be an as-a-Service company, and that strategy is gaining momentum. The company announced 97% customer retention with GreenLake, its cloud-as-a-service platform. We are seeing uptake of this strategy specifically among manufacturing software companies, so I hope you manufacturing IT people are studying it.

Dr. Eng Lim Goh, CTO, stated in his keynote, “We are awash in data, but it is siloed. This brings a need for a federation layer.” Later, in the HPE Labs keynote, the concept of Dataspace was discussed. My introduction to that concept came from a consortium in Europe. More on that in a bit. Goh gazed into the future, predicting that we will need to know what data to collect, and then how and where to collect, find, and store it.

The HPE Labs look into Dataspaces highlighted these important characteristics: democratize data access, lead with open source, connect data producers and consumers, and remove silos. Compute can’t keep up with the amount of data being generated, hence the need for the exascale computing HPE is developing. Further, AI and ML are critical capabilities, but data is growing too fast to train on all of it.

The Labs presentation brought out the need to think differently about programming in the future. There was also a look at future connectivity, specifically photonics research. This technology will enhance data movement, increasing bandwidth at low power consumption. To capture the benefits, engineers will have to recognize that it is more than a wire-for-wire swap; this connectivity opens up new avenues of design freedom. Also, to get the best results from this technology for data movement, companies and universities must emphasize cross-disciplinary training.

Following is the news release on the Determined AI acquisition.

HPE acquires Determined AI to accelerate artificial intelligence innovation

Hewlett Packard Enterprise has acquired Determined AI, a San Francisco-based startup that delivers a software stack to train AI models faster, at any scale, using its open source machine learning (ML) platform.

HPE will combine Determined AI’s unique software solution with its world-leading AI and high performance computing (HPC) offerings to enable ML engineers to easily implement and train machine learning models to provide faster and more accurate insights from their data in almost every industry.  

“As we enter the Age of Insight, our customers recognize the need to add machine learning to deliver better and faster answers from their data,” said Justin Hotard, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), HPE. “AI-powered technologies will play an increasingly critical role in turning data into readily available, actionable information to fuel this new era. Determined AI’s unique open source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure. I am pleased to welcome the world-class Determined AI team, who share our vision to make AI more accessible for our customers and users, into the HPE family.”

Building and training optimized machine learning models at scale is considered the most demanding and critical stage of ML development, and doing it well increasingly requires researchers and scientists to face many challenges frequently found in HPC. These include properly setting up and managing a highly parallel software ecosystem and infrastructure spanning specialized compute, storage, fabric and accelerators. Additionally, users need to program, schedule and train their models efficiently to maximize the utilization of the highly specialized infrastructure they have set up, creating complexity and slowing down productivity.

Determined AI’s open source machine learning training platform closes this gap, helping researchers and scientists focus on innovation and accelerate their time to delivery by removing the complexity and cost associated with machine learning development. This includes making it easy to set up, configure, manage, and share workstations or AI clusters that run on-premises or in the cloud.


Determined AI also makes it easier and faster for users to train their models through a range of capabilities that significantly speed up training; in one use case related to drug discovery, training time went from three days to three hours. These capabilities include accelerator scheduling, fault tolerance, high-speed parallel and distributed training of models, advanced hyperparameter optimization and neural architecture search, reproducible collaboration, and metrics tracking.
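
Hyperparameter optimization is one of the capabilities listed above. As a rough illustration of the kind of work such platforms automate (this is deliberately not Determined AI’s API; it is a generic sketch in which a synthetic function stands in for a real training run), here is a simple random search with early stopping:

```python
import math
import random

def validation_loss(lr, batch_size, epoch, rng):
    """Toy stand-in for a real training run's validation loss."""
    base = (math.log10(lr) + 3.0) ** 2 + 0.001 * batch_size
    return base / (epoch + 1) + rng.uniform(0.0, 0.05)

def random_search(num_trials=20, max_epochs=10, patience=3, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(num_trials):
        # Sample hyperparameters: log-uniform learning rate, categorical batch size.
        lr = 10 ** rng.uniform(-5, -1)
        batch_size = rng.choice([16, 32, 64, 128])
        best_epoch_loss, bad_epochs = float("inf"), 0
        for epoch in range(max_epochs):
            loss = validation_loss(lr, batch_size, epoch, rng)
            if loss < best_epoch_loss:
                best_epoch_loss, bad_epochs = loss, 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:  # early stopping for this trial
                    break
        if best is None or best_epoch_loss < best[0]:
            best = (best_epoch_loss, lr, batch_size)
    return best

if __name__ == "__main__":
    loss, lr, bs = random_search()
    print(f"best loss={loss:.4f} lr={lr:.2e} batch_size={bs}")
```

A real platform swaps the toy objective for actual training jobs and layers distributed execution, smarter search algorithms, and fault tolerance on top.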

“The Determined AI team is excited to join HPE, who shares our vision to realize the potential of AI,” said Evan Sparks, CEO of Determined AI. “Over the last several years, building AI applications has become extremely compute, data, and communication intensive. By combining with HPE’s industry-leading HPC and AI solutions, we can accelerate our mission to build cutting edge AI applications and significantly expand our customer reach.”

To tackle the growing complexity of AI with faster time-to-market, HPE is committed to continuing to deliver advanced and diverse HPC solutions to train machine learning models and optimize applications for any AI need, in any environment. By incorporating Determined AI’s open source capabilities, HPE is furthering its mission of making AI heterogeneous and empowering ML engineers to build AI models at greater scale.

Additionally, through HPE GreenLake cloud services for High Performance Computing (HPC), HPE is making HPC and AI solutions even more accessible and affordable to the commercial market with fully managed services that can run in a customer’s data center, in a colocation or at the edge using the HPE GreenLake edge to cloud platform.

Determined AI was founded in 2017 by Neil Conway, Evan Sparks, and Ameet Talwalkar, and is based in San Francisco. It launched its open source platform in 2020.

Element Analytics and AWS IoT SiteWise Enable Condition-based Monitoring

These IT cloud services are penetrating ever more deeply into industrial and manufacturing applications. I’m beginning to wonder where the trend is heading for traditional industrial suppliers as they combine AWS (and Google Cloud and Azure) with control that is becoming more and more a commodity. What sort of business shake-ups lie in store for us? At any rate, here is news from a company called Element Analytics, which bills itself as “a leading software provider in IT/OT data management”. I’ve written about this company a couple of times recently. It has started strongly.

Element, a leading software provider in IT/OT data management for industrial companies, announced June 9 a new offering featuring an API integration between its Element Unify product and AWS IoT SiteWise, a managed service from Amazon Web Services Inc. (AWS) that makes it easy to collect, store, organize, and monitor data from industrial equipment at scale. The API integration is designed to give customers the ability to centralize plant data model integration and metadata management, enabling data to be ingested into AWS services, including AWS IoT SiteWise and an Amazon Simple Storage Service (Amazon S3) industrial data lake.

Available in AWS Marketplace, the Element Unify AWS IoT SiteWise API integration is designed to allow engineers and operators to monitor operations across facilities, quickly compute performance metrics, create applications that analyze industrial equipment data to prevent costly equipment issues, and reduce gaps in production.

“We are looking forward to bridging the on-premises data models we’ve built for systems like OSIsoft PI to AWS for equipment data monitoring using Element Unify,” said Philipp Frenzel, Head of Competence Center Digital Services at Covestro.

Element also announced its ISO 27001 certification, demonstrating both that its security controls protect customer data and that its Information Security Management System (ISMS) provides the governance, risk management, and controls required for modern SaaS applications. Element Unify also supports AWS PrivateLink to provide an additional level of network security and control for customers.

“Our customers are looking for solutions that can help them improve equipment uptime, avoid revenue loss, cut O&M costs, and improve safety,” said Prabal Acharyya, Global Head of IoT Partners for Energy at AWS. “Now, the Industrial Machine Connectivity (IMC) on AWS initiative, along with Element Unify, makes possible a seamless API integration of both real-time and asset context OT data from multiple systems into an industrial data lake on AWS.”

Industrial customers need the ability to digitally transform to maximize productivity and asset availability, and lower costs in order to remain competitive. Element Unify, which aligns IT and operational technology (OT) around contextualized industrial IoT data, delivers key enterprise integration and governance capabilities to industrials. This provides them with rich insight, enabling smarter, more efficient operations. By integrating previously siloed and critical time-series metadata generated by sensors across industrial operations with established IT systems, such as Enterprise Asset Management, IT and OT teams can now easily work together using context data for their entire industrial IoT environment. 

Through the API integration, AWS IoT SiteWise customers can now do the following (a short ingestion sketch follows the list):

  • Centralize plant data model integration and metadata management into a single contextual service for consumption by AWS IoT SiteWise. 
  • Ingest data directly into AWS services, including AWS IoT SiteWise and Amazon S3 data lake.
  • Integrate and contextualize metadata from IT/OT sources in single-site, multi-site, and multi-instance deployments, and deploy the data model(s) to AWS IoT SiteWise.
  • Enable metadata to be imported into AWS IoT SiteWise from customers’ systems through Inductive Automation Ignition and PTC KEPServerEX to create context data. 
  • Keep both greenfield and brownfield AWS IoT SiteWise asset models and asset hierarchies updated as the Element Unify models adapt to changes in the underlying data.
  • Close the gap between raw, siloed, and disorganized IT/OT data and enriched, contextualized, and structured data that can be easily paired with business intelligence (BI), analytical tools, and condition monitoring tools like AWS IoT SiteWise. 
  • Empower the user to build complex asset data models with ease. 
  • Build and deploy data models to AWS IoT SiteWise and keep them up to date with changes happening in source systems. 
  • Easily create the underlying data models to power AWS IoT SiteWise and many other analytical systems including Amazon SageMaker, Amazon QuickSight, and Amazon Lookout for Equipment.
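
To make the ingestion side a little more concrete, here is a minimal sketch using boto3’s AWS IoT SiteWise client to push a single measurement by property alias. The alias, region, entry ID, and reading are hypothetical, and the Element Unify side of the integration (which would build and map the asset models) is not shown.

```python
import time

import boto3

# Hypothetical property alias that a data model (for example, one deployed by
# Element Unify) would map onto an AWS IoT SiteWise asset property.
PROPERTY_ALIAS = "/plant1/compressor7/discharge_temperature"

def put_reading(value_c: float) -> None:
    """Send one temperature sample (degrees C) to AWS IoT SiteWise."""
    sitewise = boto3.client("iotsitewise", region_name="us-west-2")
    now = int(time.time())
    sitewise.batch_put_asset_property_value(
        entries=[
            {
                "entryId": "compressor7-temp-1",
                "propertyAlias": PROPERTY_ALIAS,
                "propertyValues": [
                    {
                        "value": {"doubleValue": value_c},
                        "timestamp": {"timeInSeconds": now, "offsetInNanos": 0},
                        "quality": "GOOD",
                    }
                ],
            }
        ]
    )

if __name__ == "__main__":
    put_reading(87.5)
```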

“Operations data must be easy to use and understand in both the plant and in the boardroom. Industrial organizations who transform their plant data into context data for use in AWS IoT SiteWise, and other AWS services, will drive greater operational insights and achieve breakthrough business outcomes,” said Andy Bane, CEO of Element.

The Element Unify and AWS IoT SiteWise integration will be available to AWS IoT SiteWise customers in AWS Marketplace.

Whirlpool Migrates SAP Systems to Google Cloud for Sustainable Growth

Are you using a cloud service yet? The competition among the various big-company cloud services working with industrial companies is becoming fierce. Here is a win for Google Cloud, one that I have only recently seen become active in this space. Whoever thought that these would grow to be such large businesses?

Highlights of this announcement include: 

  • Whirlpool Corp. announced the expansion of its strategic collaboration with Google Cloud to deliver critical business systems and applications on Google Cloud. 
  • As part of this expanded collaboration, Whirlpool is deploying its enterprise-wide SAP environment and applications on Google Cloud, providing its global teams with low-latency, secure access to SAP systems and data. By bringing these systems onto Google Cloud, Whirlpool gains an environment that ensures maximum uptime, provides global access to applications with very low latency, and empowers the company’s teams of data analysts to derive maximum value from its business data. 
  • Google Cloud is also providing Whirlpool with an elastic cloud infrastructure that can scale as needed and provides access to Google Cloud’s next-generation capabilities in AI, ML, and analytics. 


Whirlpool Corp. announced June 8 that it has expanded its strategic collaboration with Google Cloud to deliver critical business systems and applications on Google Cloud’s secure, reliable, and sustainable infrastructure. The company, which rolled out Google Workspace to its employees in 2014, has now deployed its enterprise-wide SAP environment and applications on Google Cloud, providing its global teams with low-latency, secure access to SAP systems and data.

Whirlpool relies on SAP for many aspects of its business, including supply chain management, manufacturing planning and IoT, enterprise resource planning (ERP), finance, customer relationship management (CRM), and more. Bringing these business-critical systems onto Google Cloud provides the company with an environment that ensures maximum uptime, provides global access to applications with very low latency, and empowers the company’s teams of data analysts to derive maximum value from its business data, such as data on its supply chain, financial systems, IoT, and more.

Google Cloud is also providing Whirlpool with a platform for growth, with elastic cloud infrastructure that can scale up or down as needed, and with access to Google’s next-generation capabilities in artificial intelligence, machine learning, and analytics that are increasingly significant drivers of digital transformation.

With this announcement, Whirlpool is also expanding its use of the cleanest cloud in the industry, ensuring that its business systems are run sustainably and responsibly. Google Cloud has matched 100 percent of its global electricity use with purchases of renewable energy every year since 2017, and is now building on that progress with a new goal of running entirely on carbon-free energy at all times by 2030.

“Whirlpool Corporation is committed to reaching zero emissions by 2030 and turning to Google Cloud’s clean infrastructure for our global business systems and applications is a step forward toward that goal,” said Dani Brown, senior vice president and CIO at Whirlpool Corporation. “We are excited to strengthen our strategic relationship with Google Cloud to empower our employees with cloud productivity solutions, and to ensure that our most critical business systems and applications are delivered securely, efficiently, and sustainably.”

“Whirlpool Corp. is creating a foundation for future growth with a forward-looking, cloud-first approach to its critical SAP systems, while maintaining a strong commitment to sustainability,” said Rob Enslin, President at Google Cloud. “We’re proud to expand our strategic collaboration with Whirlpool and will continue to support the company’s digital transformation across all of its global operations.”

Financial Risks When Delaying PLM Upgrades

Senior management has always been reluctant to invest in technology, and especially in upgrades once a technology is in place. I have seen instances where management lays off the senior engineers who implemented something like Advanced Process Control or Manufacturing Execution Systems, keeping a recent graduate engineer to maintain the system, if even that. Management sees only a large salary cost reduction. Rarely is maintaining momentum seen as a virtue.

I have been in way too many of these discussions in my career, and I have seen the results one way or another. There have been instances where a company had to hire back the laid-off engineer at higher consultant rates to get the system back up and running properly.

So, this report from CIMdata detailing research on PLM software upgrading was hardly surprising. Disturbing, perhaps, but not surprising.

Digital transformation is a popular topic, and CIMdata has written much about it. While many still wonder whether digital transformation is real or just the latest buzzword, many industrial companies are taking its promise very seriously.

While it is clear to all within the PLM community that PLM is foundational to a meaningful digitalization program (or digital transformation strategy), this truth is not always understood by senior leadership within companies. While CIMdata believes that the level of investment in digital transformation is appropriate, based on our research and experience we find that executive awareness of the dependency of digital transformation on PLM is lacking. This lack of understanding of digital transformation’s association with PLM-related investment and sustainability, and of the impacts on business performance and benefits, puts many digital transformation programs at risk of becoming yet another program of the month.

This research on obsolescence identified areas that increased the cost of technology refresh and found that heavy customization was at the top of the list. This aligns with CIMdata’s experience in the field and is why companies strive to be more out-of-the-box with their PLM implementations. CIMdata’s view is that customization can add significant value to a PLM implementation, but it needs to be either business or cost justified and deliver an appropriate return on investment over the long-term (i.e., even through subsequent solution upgrades).

A new study from CIMdata exposes the financial risk many organizations face when they take PLM upgrades for granted. According to the study, the cost of upgrades with legacy PLM vendors can average between $732,000 and $1.25 million. The study – which compares industry heavyweights such as Dassault, PTC, and Siemens – finds the Aras PLM platform is easiest to keep current. Aras users upgrade more frequently, over a shorter duration, and at less cost than other leaders in the space. 

What’s behind PLM obsolescence? According to CIMdata, “A sustainable PLM solution is one that can meet current and future business requirements with an acceptable return on investment (ROI) via incremental enhancements and upgrades.” But as clearly shown in the research, many companies using PLM software are not staying current. The five reasons are: 


1. Technically Impossible. Typically, after an arduous deployment and the customization necessary to meet the business’s current needs, the software is no longer capable of being upgraded.
2. No ROI. If an upgrade takes a year and costs close to a million dollars, the cost and impact to the business are so outrageous that it can’t be justified.
3. No Budget. Not having the budget is a real concern, but often the lack of budget is a mistake—a mis-prioritization of what’s important to your organization’s future growth, often combined with a high percentage of the overall budget being consumed by technical debt.
4. Companies Overinvest and Therefore Are Committed. The only thing worse than spending large amounts of money on the wrong thing is doubling down and spending more, expecting a better experience. The pandemic has accelerated the need to change, to expect transformation with less risk, less cost, and greater ROI that will lead to greater business resiliency. Throwing good money after bad is no longer tolerated—there is more of a focus on the bottom line and doing more with less.
5. Leadership Doesn’t Understand the Dependency of Digital Transformation on PLM. If your PLM system hasn’t been upgraded in years and isn’t the foundation for continuous digital transformation efforts, there is an absolute lack of understanding of how PLM can transform a business.

The Converged Edge Explained

Our schedules finally converged. I caught up with Tom Bradicich, PhD, known within Hewlett Packard Enterprise (HPE) as “Dr. Tom,” to learn the latest on the converged edge. Tom is one of the half-dozen or so people I know who can dump so much information on my brain that it takes some time to digest and organize it. He led development of the Edgeline devices that connect with the Industrial Internet of Things. He is now VP and HPE Fellow, leading an HPE Labs effort developing software to come to grips with the complexities of the converged edge and “Converged Edge-as-a-Service”.

He likes to organize his thoughts in numerical groups. I’m going to discuss the converged edge below using his groupings:

3 C’s

4 Stages of the Edge

7 Reasons for IoT and the Edge

3 Act Play

12 Challenges

The foundation of the converged edge is found in the 3 C’s:

  1. Perpetual Connectivity
  2. Pervasive Computing
  3. Precision Controls

I remember Tony Perkins following up the demise of Red Herring magazine (charting the hot startup and M&A craze of the 90s, the magazine grew so large it came in two volumes for a while) with an online group called AlwaysOn. Trouble is, back in the 90s, we weren’t “always on.” Persistent connectivity was beyond our technology back then. Now, however, things have changed. We have so much networking, with more to come, that perpetual connectivity is not only possible, but also mundane.

HPE didn’t take a personal computer and package it for the edge. It developed Edgeline with the power of its enterprise compute along with enterprise-grade software stacks. It is powerful.

Then we have the 4 Stages of the Edge:

  1. Things—sensors and actuators
  2. Data Capture & Controls
  3. Edge IT (networking, compute, storage)
  4. Remote Cloud or Data Center

This is where Internet of Things meets the Enterprise.

Why do we need edge compute and not just IoT-to-Cloud? 7 Reasons (a short sketch after the list makes the first three concrete):

  1. Minimize Latency
  2. Reduce bandwidth
  3. Lower cost
  4. Reduce threats
  5. Avoid duplication
  6. Improve reliability
  7. Maintain compliance
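
To make the first three reasons concrete, here is a toy sketch of the kind of local aggregation an edge node performs so that only summaries, rather than every raw sample, travel to the cloud. The simulated sensor and the print stub standing in for a real uplink are assumptions for illustration only.

```python
import random
import statistics
import time

def read_sensor() -> float:
    """Toy sensor; a real edge node would read a fieldbus or OPC UA tag."""
    return 20.0 + random.gauss(0.0, 0.5)

def edge_aggregate(window_seconds: int = 60, sample_hz: int = 10) -> dict:
    """Collect raw samples locally and reduce them to one small summary.

    Instead of shipping every raw sample upstream (window_seconds * sample_hz
    values per window), only one summary record leaves the site: the
    bandwidth, latency, and cost argument for edge compute in miniature.
    """
    samples = []
    for _ in range(window_seconds * sample_hz):
        samples.append(read_sensor())
        time.sleep(1.0 / sample_hz)
    return {
        "count": len(samples),
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

def send_to_cloud(summary: dict) -> None:
    """Placeholder uplink; in practice an MQTT publish or HTTPS POST."""
    print("uploading summary:", summary)

if __name__ == "__main__":
    send_to_cloud(edge_aggregate(window_seconds=5))
```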

The Converged Edge is a 3-Act Play:

  1. Edgeline systems & software; stack identicality
  2. Converged embedded PXI and OT Link
  3. Converged Edge-as-a-Service

At this point in time, we are faced with 12 challenges to implementation:

  1. Limited bandwidth
  2. Smaller footprint for control plane and container
  3. Limited to no IT skills at the edge
  4. Higher ratio of control systems to compute/storage nodes
  5. Provisioning & lifecycle management of OT systems and IoT devices
  6. OT applications are primarily “stateful”, cloud unfriendly
  7. Data from analog world & industrial protocols
  8. Unreliable connectivity—autonomous disconnect operation
  9. Higher security vulnerabilities
  10. Hostile and unfamiliar physical environments and locations
  11. Long-tail hardware and software revenue model—many sites, fewer systems
  12. Deep domain expertise needed for the many unique edges

Of course, we could go into each of these items. Dr. Tom does in one of his latest talks (I believe it was at Hannover). We should pause at number 12, though. This is a necessity often overlooked by AI evangelists and other would-be predictive maintenance disrupters. When you begin messing with industrial operations, whether process or discrete manufacturing, it really pays to know the process deeply.

I can’t believe I summarized this in an essay of fewer than 600 words (is that still the common university requirement?). It is just an outline, but it should reveal where HPE has been and where it is going. I think its power will be disruptive to industrial architectures.