Standards and Open Source News

Open source predominates in IT. One can find open source growing within OT. I expect more as the younger generation of engineers takes over from the Boomers. My generation has laid a great foundation of standards. These make things better for engineers just trying to get a job done with inadequate resources. A few news items have piled up in my queue. Here is a CESMII announcement followed by several from the Linux Foundation, along with news from Google Cloud, HPE, DH2i, and Element Analytics.

SME and CESMII Join Forces to Accelerate Smart Manufacturing Adoption

SME, a non-profit professional association dedicated to advancing manufacturing, and CESMII – The Smart Manufacturing Institute, are partnering to align their resources and educate the industry, helping companies boost productivity, build a strong talent pipeline, and reduce manufacturers’ carbon footprint.

CESMII and SME will address the “digital divide” by connecting manufacturers to technical knowledge. These efforts will especially help small and medium-size companies—a large part of the supply network—to overcome the cost and complexity of automation and digitization that has constrained productivity and growth initiatives. 

“The prospect of the Fourth Industrial Revolution catalyzing the revitalization of our manufacturing productivity in the U.S. is real, but still aspirational, and demands a unified effort to accelerate the evolution of this entire ecosystem,” said John Dyck, CEO, CESMII. “We couldn’t be happier to join with SME on this important mission to combine and align efforts with the best interest of the employers and educators in mind.”

Smart Manufacturing Executive Council

The first joint initiative is the formation of a new national Smart Manufacturing Executive Council. It will engage business and technology executives, thought leaders, and visionaries as a “think tank” advocating for the transformation of the ecosystem. It will build on each organization’s history of working with industry giants who volunteer their time and impart their knowledge to benefit the industry.

Members of the council will act as ambassadors to drive the national conversation and vision for smart manufacturing in America. Working with policy makers and others, the council will unify the ecosystem around a common set of interoperability, transparency, sustainability and resiliency goals and principles for the smart manufacturing ecosystem.

Focus on Manufacturing Workforce

The need for richer, scalable education and workforce development is more important than ever.

SME’s training organization, Tooling U-SME, is the industry’s leading learning and development solutions provider, working with thousands of companies, including more than half of all Fortune 500 manufacturers as well as 800 educational institutions across the country. CESMII has in-depth training content on smart manufacturing technology, business practices, and workforce development. Leveraging Tooling U-SME’s extensive reach into industry and academia, the combined CESMII and Tooling U-SME training portfolios and new content collaborations will expedite smart manufacturing adoption, driving progress through transformational workforce development.

Through this national collaboration, Tooling U-SME will become a key partner for CESMII for advancing education and workforce development around smart manufacturing. 

“Manufacturers are looking for a more effective, future-proof approach to upskill their workforce, and we believe that the best way to accomplish that is for CESMII and Tooling U-SME to work together,” said Conrad Leiva, Vice President of Ecosystem and Workforce Education at CESMII. “This partnership brings together the deep domain expertise and necessary skills with the know-how to package education, work with employers and schools and effectively deliver it at scale nationally.”

Linux Foundation Announces NextArch Foundation

The Linux Foundation announced the NextArch Foundation. The new Foundation is a neutral home for open source developers and contributors to build next-generation architecture that can support compatibility between an increasing array of microservices. 

Cloud-native computing, Artificial Intelligence (AI), the Internet of Things (IoT), Edge computing and much more have led businesses down a path of massive opportunity and transformation. According to market research, the global digital transformation market size was valued at USD 336.14 billion in 2020 and is expected to grow at a compound annual growth rate (CAGR) of 23.6% from 2021 to 2028. But a lack of intelligent, centralized architecture is preventing enterprises and the developers who are creating innovation based on these technologies from fully realizing their promise.
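As a quick sanity check on what that growth rate implies, here is a minimal arithmetic sketch in Python that simply compounds the figures quoted above (USD 336.14 billion in 2020 at 23.6% per year); the inputs are the cited numbers, the projection is just compounding.

```python
# Compound the market-size figures quoted above out to 2028.
base_2020 = 336.14   # USD billions, 2020 value cited in the release
cagr = 0.236         # 23.6% compound annual growth rate

for year in range(2021, 2029):
    projected = base_2020 * (1 + cagr) ** (year - 2020)
    print(f"{year}: ~${projected:,.0f}B")

# Under this assumption the projection lands around $1.8 trillion by 2028.
```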

“Developers today have to make what feel like impossible decisions among different technical infrastructures and the proper tool for a variety of problems,” said Jim Zemlin, Executive Director of the Linux Foundation. “Every tool brings learning costs and complexities that developers don’t have the time to navigate yet there’s the expectation that they keep up with accelerated development and innovation. NextArch Foundation will improve ease of use and reduce the cost for developers to drive the evolution of next-generation technology architectures.” 

Next-generation architecture describes a variety of innovations in architecture, from data storage and heterogeneous hardware to engineering productivity, telecommunications and much more. Until now, there has been no ecosystem to address this massive challenge. NextArch will leverage infrastructure abstraction solutions through architecture and design and automate development, operations and project processes to increase the autonomy of development teams. Enterprises will gain easy to use and cost-effective tools to solve the problems of productization and commercialization in their digital transformation journey.

Linux Foundation and Graviti Announce Project OpenBytes to Make Open Data More Accessible to All

The Linux Foundation announced the new OpenBytes project spearheaded by Graviti. Project OpenBytes is dedicated to making open data more available and accessible through the creation of data standards and formats. 

Edward Cui is the founder of Graviti and a former machine learning expert within Uber’s Advanced Technologies Group. “For a long time, scores of AI projects were held up by a general lack of high-quality data from real use cases,” Cui said. “Acquiring higher quality data is paramount if AI development is to progress. To accomplish that, an open data community built on collaboration and innovation is urgently needed. Graviti believes it’s our social responsibility to play our part.”

By creating an open data standard and format, Project OpenBytes can reduce data contributors’ liability risks. Dataset holders are often reluctant to share their datasets publicly due to their lack of knowledge of the various data licenses. If data contributors understand that their ownership of the data is well protected and their data will not be misused, more open data becomes accessible.
 
Project OpenBytes will also create a standard format of data published, shared, and exchanged on its open platform. A unified format will help data contributors and consumers easily find the relevant data they need and make collaboration easier. These OpenBytes functions will make high-quality data more available and accessible, which is significantly valuable to the whole AI community and will save a large amount of monetary and labor resources on repetitive data collecting.

The largest tech companies have already realized the potential of open data and how it can lead to novel academic machine learning breakthroughs and generate significant business value. However, there isn’t a well-established open data community with neutral and transparent governance across various organizations in a collaborative effort. Under the governance of the Linux Foundation, OpenBytes aims to create data standards and formats, enable contributions of good-quality data and, more importantly, be governed in a collaborative and transparent way.

Linux Foundation Announces Security Enhancements to its LFX Community Platform to Protect Software Supply Chain

The Linux Foundation announced it has enhanced its free LFX Security offering so open source projects can secure their code and reduce non-inclusive language.

The LFX platform hosts community tools for security, fundraising, community growth, project health, mentorship and more. It supports projects and empowers open source teams to write better, more secure code, drive engagement and grow sustainable ecosystems.

The LFX Security module now includes automatic scanning for secrets-in-code and non-inclusive language, adding to its existing comprehensive automated vulnerability detection capabilities. Software security firm BluBracket has contributed this functionality to open source software projects under LFX as part of its mission of making software safer and more secure. This functionality builds on contributions from developer security leader Snyk, now making LFX the leading vulnerability detection platform for the open source community.

The need for community-supported and freely available code scanning is clear, especially in light of recent attacks on core software projects and the recent White House Executive Order calling for improved software supply chain security. LFX is the first and only community tool designed to make software projects of all kinds more secure and inclusive.

LFX Security now includes:
● Vulnerabilities Detection: Detect vulnerabilities in open source components and dependencies and provide fixes and recommendations to those vulnerabilities. LFX tracks how many known vulnerabilities have been found in open source projects, identifies if those vulnerabilities have been fixed in code commits and then reports on the number of fixes per project through an intuitive dashboard. Fixing known open source vulnerabilities in open source projects helps cleanse software supply chains at their source and greatly enhances the quality and security of code further downstream in development pipelines. Snyk has provided this functionality for the community and helped open source software projects remediate nearly 12,000 known security vulnerabilities in their code.
● Code Secrets: Detect secrets-in-code such as passwords, credentials, keys and access tokens both pre- and post-commit. These secrets are used by hackers to gain entry into repositories and other important code infrastructure. BluBracket is the leading provider of secrets detection technology in the industry and has contributed these features to the Linux Foundation LFX community. (A generic illustration of this kind of scan appears after this list.)
● Non-Inclusive Language: Detect non-inclusive language used in project code, which is a barrier in creating a welcoming and inclusive community. BluBracket worked with the Inclusive Naming Initiative on this functionality.
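For readers who have not seen secrets-in-code scanning, here is a minimal, hypothetical sketch of the kind of pre-commit check these tools automate. It is not BluBracket's or LFX's implementation; the patterns, file handling, and exit-code convention are illustrative assumptions only.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real scanners ship much larger, tuned rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_file(path: Path) -> list[str]:
    """Return findings like 'file:line: possible AWS access key'."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    hits = [finding for arg in sys.argv[1:] for finding in scan_file(Path(arg))]
    print("\n".join(hits) or "No potential secrets found.")
    sys.exit(1 if hits else 0)  # a non-zero exit can block the commit in a pre-commit hook
```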

Google Cloud Brings End-to-End Visibility to Supply Chains

Years ago, US Presidential candidate Ross Perot described a “giant sucking sound,” reflecting a typical businessperson’s view of government. Well, I think that a digital picture of today’s supply chains would show a giant clogging mess, like a kitchen garbage disposal gone wrong. Regardless, Google Cloud released this supply chain digital twin to show just such a condition.

We in manufacturing and production need to pay attention to these giant enterprise IT companies. They keep encroaching on our territory. At the rate we are going, industrial technology will someday be absorbed into enterprise IT.

Google Cloud today announced the launch of Supply Chain Twin, a purpose-built industry solution that lets companies build a digital twin–a virtual representation of their physical supply chain–by orchestrating data from disparate sources to get a more complete view of suppliers, inventories, and other information. In addition, the Supply Chain Pulse module, also announced today, can be used with Supply Chain Twin to provide real-time dashboards, advanced analytics, alerts on critical issues like potential disruptions, and collaboration in Google Workspace.

The majority of companies do not have complete visibility of their supply chains, resulting in retail stock outs, aging manufacturing inventory, or weather-related disruptions. In 2020, out-of-stock items alone cost the retail industry an estimated $1.14 trillion. The past year-and-a-half of supply chain disruptions related to COVID-19 has further proven the need for more up-to-date insights into operations, inventory levels, and more.


“Siloed and incomplete data is limiting the visibility companies have into their supply chains,” said Hans Thalbauer, Managing Director, Supply Chain & Logistics Solutions, Google Cloud. “The Supply Chain Twin enables customers to gain deeper insights into their operations, helping them optimize supply chain functions—from sourcing and planning, to distribution and logistics.”

With Supply Chain Twin, companies can bring together data from multiple sources, all while requiring less partner integration time than traditional API-based integration. Some customers have seen a 95% reduction in analytics processing time, with times for some dropping from 2.5 hours down to eight minutes. Data types supported in Supply Chain Twin include:

  • Enterprise business systems: Better understand operations by integrating information such as locations, products, orders, and inventory from ERPs and other internal systems. 
  • Supplier and partner systems: Gain a more holistic view across businesses by integrating data from suppliers, such as stock and inventory levels, and partners, such as material transportation status. 
  • Public sources: Understand your supply chain in the context of the broader environment by connecting contextual data from public sources, such as weather, risk, or sustainability-related data, including public datasets from Google. (A toy example of combining these data sources appears after this list.)
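To make the idea of orchestrating these sources concrete, here is a toy sketch in Python (pandas) that joins a hypothetical ERP inventory extract with a supplier stock feed to flag stockout risk. The column names and values are made up, and it illustrates the kind of unified view described above rather than Google's APIs or data model.

```python
import pandas as pd

# Hypothetical extracts: ERP inventory and a supplier's inbound-stock feed.
erp = pd.DataFrame({
    "sku": ["A100", "B200", "C300"],
    "on_hand": [120, 15, 0],
    "daily_demand": [10, 12, 8],
})
supplier = pd.DataFrame({
    "sku": ["A100", "B200", "C300"],
    "inbound_units": [0, 200, 40],
    "days_to_arrival": [0, 14, 3],
})

view = erp.merge(supplier, on="sku")
view["days_of_supply"] = view["on_hand"] / view["daily_demand"]
view["stockout_risk"] = view["days_of_supply"] < view["days_to_arrival"]
print(view[["sku", "days_of_supply", "stockout_risk"]])
```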

Once customers are up-and-running on Supply Chain Twin, the Supply Chain Pulse module enables further visibility, simulations, and collaboration features:

  • Real-time visibility and advanced analytics: Drill down into key operational metrics with executive performance dashboards that make it easier to view the status of the supply chain. 
  • Alert-driven event management and collaboration across teams: Set mobile alerts that trigger when key metrics reach user-defined thresholds, and build shared workflows that allow users to quickly collaborate in Google Workspace to resolve issues. 
  • AI-driven optimization and simulation: Trigger AI-driven algorithm recommendations to suggest tactical responses to changing events, flag more complex issues to the user, and simulate the impact of hypothetical situations.

“At Renault, we are innovating on how we run efficient supply chains. Improving visibility to inventory levels across our network is a key initiative,” said Jean-François Salles, Supply Chain Global Vice President at Renault Group. “By aggregating inventory data from our suppliers and leveraging Google Cloud’s strength in organizing and orchestrating data, with solutions like the Supply Chain Twin we expect to achieve a holistic view. We aim to work with Google tools to manage stock, improve forecasting, and eventually optimise our fulfillment.”

“End-to-end visibility across the entire supply chain is a top priority for supply chain professionals to optimize planning, real-time decision making and monitoring,” said Simon Ellis, Program Vice President at IDC. “Google Cloud’s approach to a digital twin of the supply chain spans internal, external, and partner data networks without complex integrations. This approach can help organizations to better plan, monitor, collaborate and respond at scale.”

Customers are deploying Supply Chain Twin via Google Cloud partners 

Retailers, manufacturers, CPG firms, healthcare networks, and other logistics-heavy companies can deploy Supply Chain Twin by working directly with Google Cloud’s partner ecosystem. For example, system integration partners such as Deloitte, Pluto7, and TCS can help customers integrate the Supply Chain Twin and relevant datasets into their existing infrastructure.

In addition, data partners, such as Climate Engine, Craft, and Crux can augment Supply Chain Twin by providing geospatial, sustainability, and risk management data sets for a more complete macroenvironment view. Finally, application partners such as Anaplan, Automation Anywhere, and project44 can provide information from their platforms into Supply Chain Twin to help customers better understand product lifecycles, track shipments across carriers, predict ETAs, and more.

Supply Chain Twin and the Supply Chain Pulse module are globally available today in Preview. For pricing and availability, customers should talk to their Google Cloud sales representative. For more information on Supply Chain Twin, visit here.

Google Cloud accelerates organizations’ ability to digitally transform their business with the best infrastructure, platform, industry solutions and expertise. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology – all on the cleanest cloud in the industry. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

HPE Hastens Transition To Data Management Company

Hewlett Packard Enterprise (HPE) held a Web event Sept. 28 to announce extensions and enhancements to its GreenLake edge-to-cloud platform. One commentator during the “deep dive” sessions opined that HPE is becoming a “data management company.” In other words, it is transitioning from a hardware company to a software and as-a-Service company. And the pace of the change during the past two years is picking up. Quite frankly, I’m surprised at the speed of the changes in the company over that brief period of time.

The announcements in summary: 

  • HPE unveils new cloud services for the HPE GreenLake edge-to-cloud platform
  • The HPE GreenLake platform now has more than 1,200 customers and $5.2 billion in total contract value
  • HPE takes cyberthreats and ransomware head-on with new cloud services to protect customers’ data from edge to cloud
  • HPE pursues big data and analytics software market – forecasted by IDC to reach $110B by 2023 – with industry’s first cloud-native unified analytics and data lakehouse cloud services optimized for hybrid environments

Following is information from HPE’s press release. 

HPE GreenLake edge-to-cloud platform combines control and agility so customers can accelerate innovation, deliver compelling experiences, and achieve superior business outcomes

Hewlett Packard Enterprise (NYSE: HPE) today announced a sweeping series of new cloud services for the HPE GreenLake edge-to-cloud platform, providing customers unmatched capabilities to power digital transformation for their applications and data. This represents HPE’s entry into two large, high-growth software markets – unified analytics and data protection. Together, these innovations further accelerate HPE’s transition to a cloud services company and give customers greater choice and freedom for their business and IT strategy, with an open and modern platform that provides a cloud experience everywhere. The new offerings, which add to a growing portfolio of HPE GreenLake cloud services, allow customers to innovate with agility, at lower costs, and include the following:

  • HPE GreenLake for analytics – open and unified analytics cloud services to modernize all data and applications everywhere – on-premises, at the edge, and in the cloud

  • HPE GreenLake for data protection – disaster recovery and backup cloud services to help customers take ransomware head-on and secure data from edge-to-cloud

  • HPE Edge-to-Cloud Adoption Framework and automation tools – a comprehensive, proven set of methodologies, expertise, and automation tools to accelerate and de-risk the path to a cloud experience everywhere

“The big data and analytics software market, which IDC predicts will reach $110 billion by 2023, is ripe for disruption, as customers seek a hybrid solution for enterprise datasets on-premises and at the edge,” said Antonio Neri, president and CEO at HPE. “Data is at the heart of every modernization initiative in every industry, and yet organizations have been forced to settle for legacy analytics platforms that lack cloud-native capabilities, or force complex migrations to the public cloud that require customers to adopt new processes and risk vendor lock-in. The new HPE GreenLake cloud services for analytics empower customers to overcome these trade-offs and give them one platform to unify and modernize data everywhere. Together with the new HPE GreenLake cloud services for data protection, HPE provides customers with an unparalleled platform to protect, secure, and capitalize on the full value of their data, from edge to cloud.”

HPE continues to accelerate momentum for the HPE GreenLake edge-to-cloud platform. The HPE GreenLake platform now has more than 1,200 customers and $5.2 billion in total contract value. In HPE’s most recent quarter, Q3 2021, HPE announced that the company’s Annualized Revenue Run Rate was up 33 percent year-over-year, and as-a-service orders up 46 percent year-over-year. Most recently, HPE announced HPE GreenLake platform wins with Woolworths Group, Australia and New Zealand’s largest retailer, and the United States National Security Agency.

HPE GreenLake Rolls Out Industry’s First Cloud-Native Unified Analytics and Data Lakehouse Cloud Services Optimized for Hybrid Environments

HPE GreenLake for analytics enables customers to accelerate modernization initiatives, for all data, from edge to cloud. Available on the HPE GreenLake edge-to-cloud platform, the new cloud services are built to be cloud-native and avoid complex data migrations to the public cloud by providing an elastic, unified analytics platform for data and applications on-premises, at the edge and in public clouds. Now analytics and data science teams can leverage the industry’s first cloud-native solution on-premises, scale up Apache Spark lakehouses, and speed up AI and ML workflows. The new HPE GreenLake cloud services include the following:

  • HPE Ezmeral Unified Analytics: Industry’s first unified, modern analytics and data lakehouse platform optimized for on-premises deployment and spanning edge to cloud.

  • HPE Ezmeral Data Fabric Object Store: Industry’s first Kubernetes-native object store optimized for analytics performance, providing access to data sets edge to cloud.

  • Expanding HPE Ezmeral Partner Ecosystem: The HPE Ezmeral Partner Program delivers a rapidly growing set of validated full-stack solutions from ISV partners that enable customers to build their analytics engines. This includes new support from NVIDIA, Pepperdata and Confluent, and open-source projects such as Apache Spark. HPE has added 37 ISV partners to the HPE Ezmeral Partner Program since it was first introduced in March 2021, delivering additional ecosystem stack support of core use cases and workloads for customers, including big data and AI/ML use cases.

HPE Takes Cyberthreats and Ransomware Head-On with New HPE GreenLake Cloud Services to Protect Customers’ Data from Edge to Cloud

HPE today entered the rapidly growing data protection-as-a-service market with HPE GreenLake for data protection, new cloud services designed to modernize data protection from edge to cloud, overcome ransomware attacks, and deliver rapid data recovery.

  • HPE Backup and Recovery Service: Backup as a service offering that provides policy-based orchestration and automation to back up and protect customers’ virtual machines across hybrid cloud, and eliminates the complexities of managing backup hardware, software, and cloud infrastructure.

  • HPE GreenLake for Disaster Recovery: Following the close of the Zerto acquisition, HPE plans to deliver Zerto’s industry-leading disaster recovery as a service through HPE GreenLake, to help customers recover in minutes from ransomware attacks. Zerto provides best-in-class restore times without impacting business operations for all recovery scenarios.

HPE Accelerates Adoption of Cloud-Everywhere Operating Models with Proven Framework and Data-Driven Intelligence and Automation Tools

HPE also today announced a proven set of methodologies and automation tools to enable organizations to take a data-driven approach to achieve the optimal cloud operating model across all environments:

  • The HPE Edge-to-Cloud Adoption Framework leverages HPE’s expertise in delivering solutions on-premises, to meet a broad spectrum of business needs for customers across the globe. HPE has identified several critical areas that enterprises should evaluate and measure to execute an effective cloud operating model. These domains, which include Strategy and Governance, People, Operations, Innovation, Applications, DevOps, Data, and Security, form the core of the HPE Edge-to-Cloud Adoption Framework.

  • The cloud operational experience is enhanced with the industry’s leading AI Ops for infrastructure, HPE InfoSight, that now constantly observes applications and workloads running on the HPE GreenLake edge-to-cloud platform. The new capability, called HPE InfoSight App Insights, detects application anomalies, provides prescriptive recommendations, and keeps the application workloads running disruption free. HPE CloudPhysics delivers data-driven insights for smarter IT decisions across edge-to-cloud, enabling IT to optimize application workload placement, procure right-sized infrastructure services, and lower costs.

HPE GreenLake Announcement Event

Please visit the HPE Discover More Network to watch the HPE GreenLake announcement event, including the keynote from Antonio Neri, HPE president and CEO, live on September 28 at 8:00 am PT or anytime on-demand.

Product Availability

HPE GreenLake for analytics and HPE GreenLake for data protection will be available in 1H 2022.

The HPE Edge-to-Cloud Adoption Framework is available now.

HPE provides additional information about HPE product and services availability in the following blogs: 

HPE GreenLake for analytics

HPE GreenLake for data protection

HPE Edge-to-Cloud Adoption Framework

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions delivered as a service – spanning Compute, Storage, Software, Intelligent Edge, High Performance Computing and Mission Critical Solutions – with a consistent experience across all clouds and edges, designed to help customers develop new business models, engage in new ways, and increase operational performance. 

DH2i Launches DxEnterprise Smart Availability Software for Containers

Containers have become a must-have technology for those pursuing some form of Digital Transformation, or whatever you wish to label it. I’ve written little about the subject. Following is a news release concerning a way to deliver high availability for cloud-native Microsoft SQL Server.

DH2i, a provider of multi-platform Software Defined Perimeter (SDP) and Smart Availability software, announced June 22 the general availability (GA) of DxEnterprise (DxE) for Containers, enabling cloud-native Microsoft SQL Server container Availability Groups (AG) outside and inside Kubernetes (K8s).

Container use is skyrocketing for digital transformation projects—particularly the use of stateful containers for databases such as Microsoft SQL Server. This growing stateful database container use is also generating a hard production deployment requirement for database-level high availability (HA) in Kubernetes.

For medium and large organizations running SQL Server, database-level HA has traditionally been provided by SQL Server Availability Groups (AGs). However, SQL Server AGs have not been supported in Kubernetes until now—hindering organizations’ ability to undergo digital transformations. DxEnterprise (DxE) for Containers is the answer to the problem.

DxEnterprise for Containers accelerates an enterprise’s digital transformation (DX) by speeding the adoption of highly available stateful containers. DxEnterprise (DxE) for Containers provides SQL Server Availability Group (AG) support for SQL Server containers, including for Kubernetes clusters. It enables customers to deploy stateful containers to create new and innovative applications while also improving operations with near-zero RTO to more efficiently deliver better products and services at a lower cost. Additionally, it helps organizations generate new revenue streams by enabling them to build distributed Kubernetes AG clusters across availability zones/regions, resulting in hybrid cloud and multi-cloud environments which can rapidly adapt to changes in market conditions and consumer preferences.
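For application developers, the practical payoff of an Availability Group is that clients connect to the AG listener rather than to an individual replica and simply reconnect after a failover. The sketch below (Python with pyodbc, using a hypothetical listener name, database, and credentials) shows that generic SQL Server AG client pattern; it is not DxEnterprise-specific code.

```python
import pyodbc

# Hypothetical listener DNS name and credentials, for illustration only.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=ag-listener.example.internal,1433;"  # the AG listener, not a single pod or node
    "DATABASE=orders;"
    "UID=app_user;PWD=example-password;"
    "MultiSubnetFailover=Yes;"                   # try listener IPs in parallel after failover
    "Encrypt=Yes;TrustServerCertificate=Yes;"
)

with pyodbc.connect(conn_str, timeout=15) as conn:
    row = conn.cursor().execute("SELECT @@SERVERNAME").fetchone()
    print("Connected to current primary replica:", row[0])
```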

“Kubernetes lacks SQL Server AG support, which is essential for using stateful containers in production,” said Shamus McGillicuddy, Vice President of Research, EMA Network Management Practice. “DxEnterprise for Containers solves this problem. It enables AG support in Kubernetes.”

“DxE for Containers is the perfect complement to Kubernetes’ pod/node-level cluster HA,” said Don Boxley, DH2i CEO and Co-Founder. “DxE for Containers enables Microsoft users to confidently deploy highly available SQL Server containers in production, speeding their organizations’ digital transformation.”

DxEnterprise for Containers Features & Benefits:

–       Kubernetes SQL Server Container Availability Groups with automatic failover, an industry first – Enables customers to deploy stateful containers to create new and innovative applications

–       Near-zero recovery time objective (RTO) container database-level failover – Improves operations to more efficiently and resiliently deliver better products and services at a lower cost to the business

–       Distributed Kubernetes AG clusters across availability zones/regions, hybrid cloud and multi-cloud environment with built-in secure multi-subnet express micro-tunnel technology – Enables customers to rapidly adapt to changes in market conditions and consumer preferences

–       Intelligent health and performance QoS monitoring and alert management – Simplifies system management

–       Mix and match support for Windows and Linux; bare metal, virtual, cloud servers – Maximizes IT budget ROI

Organizations can now purchase DxEnterprise (DxE) for Containers directly from the DH2i website to get immediate full access to the software and support. Customers have the flexibility to select the support level and subscription duration to best meet the needs of their organization. Users can also subscribe to the Developer Edition of DxEnterprise (DxE) for Containers to dive into the technology for free for non-production use.

DH2i Company is the leading provider of multi-platform Software Defined Perimeter (SDP) and Smart Availability software for Windows and Linux. DH2i software products DxOdyssey and DxEnterprise enable customers to create an entire IT infrastructure that is “always-secure and always-on.”

HPE Discover Uncovers Age of Insight Into Data

HPE Discover was held this week, virtually, of course. I can’t wait for the return of in-person conferences. It’s easier for me to have relevant conversations and learn from technology users when everyone is gathered together. You can attend on demand here.

I didn’t have any specific industrial/manufacturing discussions this year, although I had met up with Dr. Tom Bradicich earlier to get the latest on IoT and Edge. You can check out that conversation here.

I suppose the biggest company news was the acquisition of Determined AI (see news release below). This year’s theme was the Age of Insight (into data), and AI and ML are the technologies required to pull insight out of the swamp of data.

HPE’s strategy remains to become an as-a-Service company. This strategy is gaining momentum. They announced 97% customer retention with GreenLake, the cloud-as-a-service platform. We are seeing uptake of this strategy specifically among manufacturing software companies, so I hope you manufacturing IT people are studying this.

Dr. Eng Lim Goh, CTO, stated in his keynote, “We are awash in data, but it is siloed. This brings a need for a federation layer.” Later, in the HPE Labs keynote, the concept of Dataspace was discussed. My introduction to that concept came from a consortium in Europe. More on that in a bit. Goh gazed into the future, predicting that we need to know what data to collect, and then determine how and where to collect, find, and store it.

The HPE Labs look into Dataspaces highlighted these important characteristics: democratize data access; lead with open source; connect data producers/consumers; and remove silos. Compute can’t keep up with the amount of data being generated, hence the need for the exascale compute HPE is developing. Further, AI and ML are critical capabilities, but data is growing too fast for training to keep pace.

The Labs presentation brought out the need to think differently about programming in the future. There was also a look into future connectivity, particularly photonics research. This technology will enhance data movement and increase bandwidth with low power consumption. To realize the benefits, engineers will have to recognize that it’s more than wire-to-wire exchange; this connectivity opens up new avenues of design freedom. Also, to obtain the best results from this technology for data movement, companies and universities must emphasize cross-disciplinary training.

Following is the news release on the Determined AI acquisition.

HPE acquires Determined AI to accelerate artificial intelligence innovation

Hewlett Packard Enterprise has acquired Determined AI, a San Francisco-based startup that delivers a software stack to train AI models faster, at any scale, using its open source machine learning (ML) platform.

HPE will combine Determined AI’s unique software solution with its world-leading AI and high performance computing (HPC) offerings to enable ML engineers to easily implement and train machine learning models to provide faster and more accurate insights from their data in almost every industry.  

“As we enter the Age of Insight, our customers recognize the need to add machine learning to deliver better and faster answers from their data,” said Justin Hotard, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), HPE. “AI-powered technologies will play an increasingly critical role in turning data into readily available, actionable information to fuel this new era. Determined AI’s unique open source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure. I am pleased to welcome the world-class Determined AI team, who share our vision to make AI more accessible for our customers and users, into the HPE family.”

Building and training optimized machine learning models at scale is considered the most demanding and critical stage of ML development, and doing it well increasingly requires researchers and scientists to face many challenges frequently found in HPC. These include properly setting up and managing a highly parallel software ecosystem and infrastructure spanning specialized compute, storage, fabric and accelerators. Additionally, users need to program, schedule and train their models efficiently to maximize the utilization of the highly specialized infrastructure they have set up, creating complexity and slowing down productivity.

Determined AI’s open source machine learning training platform closes this gap to help researchers and scientists to focus on innovation and accelerate their time to delivery by removing the complexity and cost associated with machine learning development. This includes making it easy to set-up, configure, manage and share workstations or AI clusters that run on-premises or in the cloud.


Determined AI also makes it easier and faster for users to train their models through a range of capabilities that significantly speed up training, which in one use case related to drug discovery, went from three days to three hours. These capabilities include accelerator scheduling, fault tolerance, high speed parallel and distributed training of models, advanced hyperparameter optimization and neural architecture search, reproducible collaboration and metrics tracking.
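As a rough illustration of what automated hyperparameter optimization means in this context, here is a toy random-search sketch in plain Python. It is a concept demo only, not Determined AI's API or algorithm; a platform like Determined layers adaptive search, accelerator scheduling, and fault tolerance on top of this basic idea, and the scoring function below is fabricated so the example runs on its own.

```python
import random

def train_and_evaluate(learning_rate: float, batch_size: int) -> float:
    """Stand-in for a real training run; returns a made-up validation score."""
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 64) / 1000

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),   # log-uniform between 1e-4 and 1e-1
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
}

best = None
for trial in range(20):
    params = {name: sample() for name, sample in search_space.items()}
    score = train_and_evaluate(**params)
    if best is None or score > best[0]:
        best = (score, params)

print("Best score:", round(best[0], 4), "with params:", best[1])
```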

“The Determined AI team is excited to join HPE, who shares our vision to realize the potential of AI,” said Evan Sparks, CEO of Determined AI. “Over the last several years, building AI applications has become extremely compute, data, and communication intensive. By combining with HPE’s industry-leading HPC and AI solutions, we can accelerate our mission to build cutting edge AI applications and significantly expand our customer reach.”

To tackle the growing complexity of AI with faster time-to-market, HPE is committed to continue delivering advanced and diverse HPC solutions to train machine learning models and optimize applications for any AI need, in any environment. By combining Determined AI’s open source capabilities, HPE is furthering its mission in making AI heterogeneous and empowering ML engineers to build AI models at a greater scale.

Additionally, through HPE GreenLake cloud services for High Performance Computing (HPC), HPE is making HPC and AI solutions even more accessible and affordable to the commercial market with fully managed services that can run in a customer’s data center, in a colocation or at the edge using the HPE GreenLake edge to cloud platform.

Determined AI was founded in 2017 by Neil Conway, Evan Sparks, and Ameet Talwalkar, and is based in San Francisco. It launched its open-source platform in 2020.

Element Analytics and AWS IoT SiteWise Enable Condition-based Monitoring

These IT cloud services are penetrating ever more deeply into industrial and manufacturing applications. I’m beginning to wonder where the trend is going for traditional industrial suppliers, as companies combine AWS (and Google Cloud and Azure) with control that is becoming more and more a commodity. What sort of business shake-ups lie in store for us? At any rate, here is news from a company called Element Analytics, which bills itself as “a leading software provider in IT/OT data management.” I’ve written about this company a couple of times recently. It has started strongly.

Element, a leading software provider in IT/OT data management for industrial companies, announced June 9 a new offering featuring an API integration between its Element Unify product and AWS IoT SiteWise, a managed service from Amazon Web Services Inc. (AWS), that makes it easy to collect, store, organize, and monitor data from industrial equipment at scale. The API integration is designed to give customers the ability to centralize plant data model integration and metadata management, enabling data to be ingested into AWS services, including AWS IoT SiteWise and Amazon Simple Storage Service (S3) industrial data lake.

Available in AWS Marketplace, the Element Unify AWS IoT Sitewise API integration is designed to allow engineers and operators to monitor operations across facilities, quickly compute performance metrics, create applications that analyze industrial equipment data to prevent costly equipment issues, and reduce gaps in production.

“We are looking forward to bridging the on-premises data models we’ve built for systems like OSIsoft PI to AWS for equipment data monitoring using Element Unify,” said Philipp Frenzel, Head of Competence Center Digital Services at Covestro.

Element also announced its ISO 27001 certification, demonstrating both that its security controls protect customer data and that its Information Security Management System (ISMS) provides the governance, risk management, and controls required for modern SaaS applications. Element Unify also supports AWS PrivateLink to provide an additional level of network security and control for customers.

“Our customers are looking for solutions that can help them improve equipment uptime, avoid revenue loss, cut O&M costs, and improve safety,” said Prabal Acharyya, Global Head of IoT Partners for Energy at AWS. “Now, the Industrial Machine Connectivity (IMC) on AWS initiative, along with Element Unify, makes possible a seamless API integration of both real-time and asset context OT data from multiple systems into an industrial data lake on AWS.”

Industrial customers need the ability to digitally transform to maximize productivity and asset availability, and lower costs in order to remain competitive. Element Unify, which aligns IT and operational technology (OT) around contextualized industrial IoT data, delivers key enterprise integration and governance capabilities to industrials. This provides them with rich insight, enabling smarter, more efficient operations. By integrating previously siloed and critical time-series metadata generated by sensors across industrial operations with established IT systems, such as Enterprise Asset Management, IT and OT teams can now easily work together using context data for their entire industrial IoT environment. 

Through the API integration, AWS IoT SiteWise customers can now:

  • Centralize plant data model integration and metadata management into a single contextual service for consumption by AWS IoT SiteWise. 
  • Ingest data directly into AWS services, including AWS IoT SiteWise and Amazon S3 data lake.
  • Integrate and contextualize metadata from IT/OT sources in single-site, multi-site, and multi-instance deployments, and deploy the data model(s) to AWS IoT SiteWise.
  • Enable metadata to be imported into AWS IoT SiteWise from the customers’ systems through Inductive Automation Ignition and PTC KEPServerEX to create context data.
  • Keep both greenfield and brownfield AWS IoT SiteWise asset models and asset hierarchies updated as the Element Unify models adapt to changes in the underlying data.
  • Close the gap between raw, siloed, and disorganized IT/OT data and enriched, contextualized, and structured data that can be easily paired with business intelligence (BI), analytical tools, and condition monitoring tools like AWS IoT SiteWise. 
  • Empower the user to build complex asset data models with ease. 
  • Build and deploy data models to AWS IoT SiteWise and keep them up to date with changes happening in source systems (a minimal API-level sketch follows this list).
  • Easily create the underlying data models to power AWS IoT SiteWise and many other analytical systems including Amazon SageMaker, Amazon QuickSight, and Amazon Lookout for Equipment.
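To ground what deploying a data model to AWS IoT SiteWise looks like at the API level, here is a minimal boto3 sketch that creates a simple asset model with one measurement property. The model name, property, and region are hypothetical, and this shows the raw SiteWise API rather than Element Unify's integration, which automates this kind of mapping from IT/OT metadata.

```python
import boto3

sitewise = boto3.client("iotsitewise", region_name="us-east-1")

# Hypothetical asset model: a pump with a single temperature measurement stream.
response = sitewise.create_asset_model(
    assetModelName="demo-pump-model",
    assetModelDescription="Illustrative model; in practice the structure would come from Element Unify.",
    assetModelProperties=[
        {
            "name": "temperature",
            "dataType": "DOUBLE",
            "unit": "Celsius",
            "type": {"measurement": {}},  # a raw sensor stream ingested from the plant
        }
    ],
)
print("Created asset model:", response["assetModelId"], response["assetModelStatus"]["state"])
```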

“Operations data must be easy to use and understand in both the plant and in the boardroom. Industrial organizations who transform their plant data into context data for use in AWS IoT SiteWise, and other AWS services, will drive greater operational insights and achieve breakthrough business outcomes,” said Andy Bane, CEO of Element.

The Element Unify and AWS IoT SiteWise integration will be available to AWS IoT SiteWise customers in AWS Marketplace.