NI Joins Open Manufacturing Platform Organization

I first heard about the Open Manufacturing Platform during my last trip to Germany, which was also my last business trip anywhere, back in February. I wrote about it here: Open Manufacturing Platform Expands. This effort, led by Microsoft and BMW and joined by ZF, Bosch, and ABInBev, “helps manufacturers leverage advanced technologies to gain greater operational efficiencies, factory output, customer loyalty, and net profits.” That’s a tall order. These are companies I’ve seen leverage technology for improvements over the years, so this should be a real advance.

This month brings two news items relating to OMP: NI, through its recent acquisition OptimalPlus, has joined the organization, and one of the OMP’s working groups has published its first deliverable.

NI says that it has joined OMP “with the goal of establishing an architecture and standards for auto manufacturers to better leverage and automate analytics to improve quality, reliability and safety.”

I had an opportunity to interview Michael Schuldenfrei, NI Fellow and OptimalPlus CTO, about smart manufacturing, what OptimalPlus adds to NI, and OMP. The roots of OptimalPlus lie in enterprise software for semiconductor manufacturing. An early customer was Qualcomm, which used the software to collect and analyze data from its numerous manufacturing plants. The company then branched out into assemblies, with customers such as Nvidia, and later added mechatronics to its portfolio. That was a good tie-in with NI.

Rather than become just another smart manufacturing application focusing on machines, OptimalPlus brings its focus to the product being manufactured. Given NI’s strength in test and measurement, this was a definite synergy. As I have written before here and here, this enterprise software addition to NI’s portfolio is just what the company needs to advance a level.

Michael told me he was an early advocate for OMP after seeing how his technology worked with Tier 1 automotive suppliers to drive their digital transformation.

NI announced that its latest acquisition, OptimalPlus, has joined the Open Manufacturing Platform (OMP), a consortium led by BMW, Microsoft, ZF, Bosch and ABInBev that helps manufacturers leverage advanced technologies to gain greater operational efficiencies, factory output, customer loyalty, and net profits.

The OMP’s goals include creating a “Manufacturing Reference Architecture” for platform-agnostic, cloud-based data collection, management, analytics and other applications. This framework will provide a standard way to connect to IoT devices on equipment and define a semantic layer that unifies data across disparate data sources. All in all, this has the potential to create a rich, open-source ecosystem that enables faster and easier adoption of smart manufacturing technologies.
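To make the idea of a semantic layer concrete, here is a minimal sketch in Python of how readings from two differently formatted machine sources might be normalized into one common schema. The payload shapes, field names, and unit conversions are hypothetical illustrations of the concept, not part of the OMP reference architecture itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A unified record type: one schema for readings from any machine vendor.
@dataclass
class MachineReading:
    machine_id: str
    metric: str          # e.g. "spindle_temperature"
    value: float
    unit: str            # normalized unit, e.g. "degC"
    timestamp: datetime  # always UTC

# Hypothetical payload from vendor A: flat JSON, Fahrenheit, epoch seconds.
def from_vendor_a(payload: dict) -> MachineReading:
    return MachineReading(
        machine_id=payload["asset"],
        metric="spindle_temperature",
        value=(payload["temp_f"] - 32) * 5.0 / 9.0,
        unit="degC",
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
    )

# Hypothetical payload from vendor B: nested JSON, Celsius, ISO-8601 string.
def from_vendor_b(payload: dict) -> MachineReading:
    return MachineReading(
        machine_id=payload["device"]["id"],
        metric="spindle_temperature",
        value=float(payload["readings"]["spindle_temp_c"]),
        unit="degC",
        timestamp=datetime.fromisoformat(payload["readings"]["time"]),
    )

# Downstream analytics only ever sees MachineReading, regardless of the source.
readings = [
    from_vendor_a({"asset": "press-01", "temp_f": 190.4, "ts": 1606993200}),
    from_vendor_b({"device": {"id": "press-02"},
                   "readings": {"spindle_temp_c": "87.5",
                                "time": "2020-12-03T11:00:00+00:00"}}),
]
for r in readings:
    print(r)
```

The point of the sketch is simply that analytics code written against the unified schema does not need to know which vendor, protocol, or unit system produced the data.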

In the same way that interpreters at the United Nations help delegates communicate and make new policies, standardized data formats accelerate the adoption of big data and machine learning, creating a universal translator between multiple machine and process types. OptimalPlus, now part of NI, will bring to OMP its vast domain expertise in automotive manufacturing processes and provide leading production companies with actionable insights and adaptive methods from its big data analytics platform.

“We’re honored to be invited to join the prestigious Open Manufacturing Platform, which plays a key role in helping manufacturers all over the world innovate,” said Uzi Baruch, VP of NI’s Transportation business unit. “With pressure mounting to ensure quality and prevent faulty parts from shipping, it’s important that manufacturers have access to the transformative powers of AI, machine learning and big data analytics. We’re excited to collaborate with industry leaders in the OMP consortium to help manufacturers evolve and optimize their processes.”

AI and advanced analytics help to streamline manufacturing, reduce costs and improve quality, reliability and safety. OMP makes it easier for manufacturers to deploy this technology across their operations and fulfill the promise of smart manufacturing.

White Paper: Insights Into Connecting Industrial IoT Assets

The second bit of news describes a first deliverable from the OMP as it progresses toward its objective.

OMP announced delivery of a critical milestone with the publication of its first white paper. The IoT Connectivity Working Group, chaired by Sebastian Buckel and co-chaired by Dr. Veit Hammerstingl of the BMW Group, authored Insights Into Connecting Industrial IoT Assets. Contributions from member companies Capgemini, Cognizant, Microsoft, Red Hat, and ZF present a consensus view of the connectivity challenges and best practices in IIoT as the fourth industrial revolution unfolds. This paper is the initial publication laying out an approach to solving connectivity challenges while providing a roadmap for future OMP work.

Manufacturing at an Inflection Point

The intersection of information technology (IT) and operational technology (OT), as well as the advent of the Internet of Things (IoT), presents opportunities and threats to the entire manufacturing sector. In manufacturing, multiple challenges complicate the connection of sensors, actuators, and machines to a central data center. A lack of common standards, combined with proprietary interfaces, leaves each engineer solving similar problems, introducing inefficiencies and forcing the same learning curve to be climbed over and over. The long renewal cycles of shop floor equipment, software, and processes leave gaps in modern technology adoption and encourage a general avoidance of significant institutional change. This initial publication begins to tackle these problems and lays the groundwork for future, more detailed work.

IT/OT Convergence

Each connectivity challenge will have a range of diverse constituents, and the content of this paper addresses issues faced by individuals and teams across job functions. Operational technology (OT) professionals are responsible for the commissioning, operation, and maintenance of shop floor equipment. Information technology (IT) personnel look after overall data processing, the hardware and software infrastructure, and enterprise-wide IT strategy. General managers and logistics teams are typically aligned at a corporate level, coordinating processes across a network of plants. Each of these functions includes roles spanning from hands-on operational to strategic and managerial. The unique demands of each will require connectivity solutions that are forward-thinking and value-accretive while remaining practical to implement with minimal incremental investment.

Industrial IoT Challenges

Also explored in the paper are IIoT devices’ critical real-time needs for repeatability and high availability. An example is an AI model that optimizes the parameters of a bending machine based on the current air temperature and humidity. Connection failures or high latencies can lead to stopped or interrupted processes, or to products of insufficient quality.
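As a rough illustration of why latency and availability matter here, the following sketch shows one common defensive pattern: if the remote model cannot return optimized parameters within a tight deadline, the machine falls back to its last known-good settings rather than stalling the line. All function names, parameters, and values are hypothetical; this is not code from the white paper.

```python
import random
import time

# Last parameters that produced in-spec parts; used when the model is unreachable.
FALLBACK_PARAMS = {"bend_angle_deg": 90.0, "ram_speed_mm_s": 12.0}

def query_remote_model(temperature_c: float, humidity_pct: float, timeout_s: float) -> dict:
    """Stand-in for a call to a remote optimization service.

    Variable network/compute latency is simulated here; a real system would
    make an RPC or REST call with the same timeout semantics.
    """
    latency = random.uniform(0.01, 0.3)
    if latency > timeout_s:
        raise TimeoutError("model did not respond within the cycle deadline")
    time.sleep(latency)
    # Hypothetical compensation for ambient conditions.
    return {
        "bend_angle_deg": 90.0 + 0.02 * (temperature_c - 20.0),
        "ram_speed_mm_s": 12.0 - 0.05 * (humidity_pct - 50.0) / 10.0,
    }

def parameters_for_next_part(temperature_c: float, humidity_pct: float) -> dict:
    try:
        return query_remote_model(temperature_c, humidity_pct, timeout_s=0.1)
    except TimeoutError:
        # Degrade gracefully instead of stopping the line.
        return dict(FALLBACK_PARAMS)

for _ in range(3):
    print(parameters_for_next_part(temperature_c=23.5, humidity_pct=61.0))
```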

Manufacturing throughput requirements vary from the low bandwidth of simple sensors sending small packets to the much higher bandwidth required for streaming data for video analytics, vibration sensors, or AR/VR visualization. A holistic connectivity solution can address this complexity successfully, spanning from the individual devices on the shop floor up through edge gateways and servers to the central data center or cloud resources such as compute and storage.

Network Levels

Networks are usually customized to their precise environment and the desired function, and therefore can be very complex.

The white paper discusses the functions of each of the network levels, their benefits and limitations, and security considerations. Additional sections of the document cover common challenges in IIoT, connectivity levels, basic principles for successful connectivity solutions, communication types, and best practices for program implementation.

High Performance Computing as a Service

I keep wondering when some enterprising entrepreneur or integrator (most likely not an incumbent in the automation sector) will check out the coming decoupling of software and hardware, latch onto the readily available high performance compute platforms, and totally disrupt the market. Maybe never. Maybe the market is too small? I see the possibilities!

New HPE GreenLake cloud services for HPC will enable any enterprise to run its most demanding workloads with fully managed, pre-bundled HPC cloud services that operate in any data center or colocation environment.

Today’s news from Hewlett Packard Enterprise (HPE) packs potential. Check out the customer references toward the bottom of the release for a suggestion of the import to industrial and manufacturing applications. 

HPE announced it is offering its HPC solutions as a service through HPE GreenLake. The new HPE GreenLake cloud services for HPC allow customers to combine the power of an agile, elastic, pay-per-use cloud experience with the world’s most-proven, market-leading HPC systems from HPE. Now any enterprise can tackle their most demanding compute and data-intensive workloads, to power AI and ML initiatives, speed time to insight, and create new products and experiences through a flexible as-a-service platform that customers can run on-premises or in a colocation facility.

The offering removes the complexity and cost associated with traditional HPC deployments by delivering fully managed, pre-bundled services based on purpose-built HPC systems, software, storage and networking solutions that come in small, medium or large options. Customers can order these through a self-service portal with simple point-and-click functions to choose the right configuration for their workload needs and receive services in as little as 14 days.

“The massive growth in data, along with Artificial Intelligence and high performance analytics, is driving an increased need for HPC in enterprises of all sizes, from the Fortune 500 to startups,” said Peter Ungaro, senior vice president and general manager, HPC and Mission Critical Solutions (MCS) at HPE. “We are transforming the market by delivering industry-leading HPC solutions in simplified, pre-configured services that control costs and improve governance, scalability and agility through HPE GreenLake. These HPC cloud services enable any enterprise to access the most powerful HPC and AI capabilities and unlock greater insights that will power their ability to advance critical research and achieve bold customer outcomes.”

HPC provides massive computing power, along with modeling and simulation capabilities, to turn complex data into digital models that help researchers and engineers understand how something will look and perform in the real world. HPC also provides optimal performance to run AI and analytics to increase predictability. These combined capabilities are used to solve challenges from vaccine discovery and weather forecasting to improving designs of cars, planes and even personal and consumer products such as shampoo and laundry detergent.

Enterprises can deploy these services in any data center environment, whether on-premises in their own enterprise or in a colocation facility, and gain fully managed services that allow them to pay for only what they use, empowering them to focus on running their projects to speed time-to-insight and accelerate innovation.

HPE will initially offer an HPC service based on HPE Apollo systems, combined with storage and networking technologies, which are purpose-built for running modeling and simulation workloads. The service also leverages key HPC software for HPC workload management, support for HPC-specific containers and orchestration, and HPC cluster management and monitoring. HPE plans to expand the rest of its HPC portfolio to as-a-service offerings in the future.

Customers can choose these bundles in small, medium or large configurations, receive them in as little as 14 days, and gain a fully managed service from HPE.

As part of the offering, customers will gain the following features to easily manage, deploy and control costs for their HPC services:

  • HPE GreenLake Central offers an advanced software platform for customers to manage and optimize their HPC services.
  • HPE Self-service dashboard enables users to run and manage HPC clusters on their own, without disrupting workloads, through a point-and-click function.
  • HPE Consumption Analytics provides at-a-glance analytics of usage and cost based on metering through HPE GreenLake.
  • HPC, AI & App Services standardizes and packages HPC workloads into containers, making it easier to modernize, transfer and access data. Experts use this factory process to quickly move applications onto a container platform as needed.

From Research to Reality: Improving Accuracy, Product Design and Quality

Zenseact, a software developer for autonomous driving solutions based in Sweden and China, uses HPE’s HPC solutions as-a-service through HPE GreenLake for modeling and simulation capabilities to analyze the hundreds of petabytes of data it generates globally from its network of test vehicles and software development centers. The solutions help fuel Zenseact’s mission to model and simulate autonomous driving experiences to develop next-generation software to support driver safety.

“At Zenseact, our mission is to improve Advanced Driver-Assisted Systems and Automated Driving to create robust and flexible solutions that will push the envelope in technological innovation and transform the driving experience,” said Robert Tapper, CIO at Zenseact. “By deploying HPE’s high performance computing solutions as-a-service with HPE GreenLake, we are addressing our mission by performing 10,000 simulations per second, based on driving data from our test cars, to accelerate insights for designing software to enable safe autonomous vehicles.”

Other enterprise use case examples include:

  • Building safer cars: Car manufacturers can model and test vehicle functions to improve designs, from simulating the effectiveness of rubber types in tires to performing crash simulations to test impact for potential injuries to drivers and passengers.
  • Improving manufacturing with sustainable materials: Simulation is used to discover new materials for additional, sustainable options for aluminum and plastic packaging to increase efficiency and reduce costs.
  • Making critical millisecond decisions in financial markets: Financial analysts can predict critical stock trends and trades, and even improve risk management, in milliseconds in a fast-paced financial services environment where quick and accurate insight is critical.
  • Advancing discovery for drug treatment: Scientists at research labs and pharmaceutical companies can perform complex simulations to understand biological and chemical interactions that can lead to new drug therapies for curing diseases.
  • Accelerating oil & gas exploration: Performing simulations, combined with dedicated seismic analytics, can increase the discovery and accuracy of oil reservoirs while reducing overall exploration safety risks and costs by identifying when and where to drill for oil.

Optimizing the HPC Experience with a Dedicated HPC Partner Ecosystem

HPE has a robust ecosystem of HPC partners to help enterprises easily deploy solutions for any workload need, in any data center environment. Partners include:

  • Colocation Facilities: Customers can free up their own real estate by choosing to deploy their HPC systems and equipment in a colocation facility and use their services remotely through HPE GreenLake. HPE colocation partners for HPC deployments, which provide scalable, energy-efficient data centers, include atNorth (formerly Advania Data Center), CyrusOne and ScaleMatrix.
  • Independent Software Vendors (ISVs): HPE collaborates with partners such as Activeeon, Ansys, Core Scientific and TheUberCloud to provide solutions that optimize a range of software application needs, from automation, artificial intelligence, analytics and blockchain to computer-aided engineering (CAE) and computer-aided design (CAD), that are critical to improving time-to-market for manufacturing, engineering and product design.


Availability

Initial pre-bundled offerings for HPE GreenLake cloud services for high performance computing (HPC) will be generally available in spring of 2021 for customers globally.

HPE plans to expand HPE GreenLake cloud services for HPC to additional technologies, including Cray-based compute, software, storage and networking solutions, in the future.

All HPE GreenLake cloud services, including for HPC, are available through HPE’s channel partner program.

Digital Trust and Reasons Why People Collaborate on Open Source

I have two Linux Foundation open source releases today. The Foundation and I connected through the EdgeX Foundry IoT platform, then discovered we had many common interests. One of the releases touches on a fundamental element of commerce and collaboration: trust. The Linux Foundation hosts open source projects, and open source is a powerful software development model. But it takes many dedicated and talented people to accomplish the task. Why do people work on open source? LF conducted a survey to find out.

The Janssen Project Takes on World’s Most Demanding Digital Trust Challenges at Linux Foundation

New Janssen Project seeks to build the world’s fastest and most comprehensive cloud native identity and access management software platform

The Linux Foundation announced the Janssen Project, a cloud native identity and access management software platform that prioritizes security and performance for our digital society. Janssen is based on the Gluu Server and benefits from a rich set of signing and encryption functionalities. Engineers from IDEMIA, F5, BioID, Couchbase, and Gluu will make up the Technical Steering Committee.

Online trust is a fundamental challenge to our digital society. The Internet has connected us. But at the same time, it has undermined trust. Digital identity starts with a connection between a person and a digital device. Identity software conveys the integrity of that connection from the user’s device to a complex web of backend services. Solving the challenge of digital identity is foundational to achieving trustworthy online security.

While other identity and access management platforms exist, the Janssen Project seeks to tackle the most challenging security and performance requirements. Based on the latest code that powers the Gluu Server–which has passed more OpenID self-certification tests than any other platform–Janssen starts with a rich set of signing and encryption functionality that can be used for high-assurance transactions. Having shown throughput of more than one billion authentications per day, the software can also handle the most demanding requirements for concurrency thanks to Kubernetes auto-scaling and advances in persistence.
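For readers less familiar with identity platforms, the “signing functionality” in question is essentially what lets a backend service verify that an authentication assertion really came from the identity provider and was not altered in transit. The sketch below is a generic illustration of that idea using the PyJWT and cryptography libraries; it is not Janssen or Gluu code, and every claim value is made up.

```python
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# The identity provider holds the private key; relying parties get the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Hypothetical claims about an authenticated user.
now = int(time.time())
claims = {
    "iss": "https://idp.example.com",
    "sub": "user-12345",
    "iat": now,
    "exp": now + 300,  # token valid for five minutes
}

# The provider signs the token...
token = jwt.encode(claims, private_pem, algorithm="RS256")

# ...and any backend service can verify the signature and expiry offline.
verified = jwt.decode(token, public_pem, algorithms=["RS256"])
print(verified["sub"], "authenticated by", verified["iss"])
```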

“Trust and security are not competitive advantages–no one wins in an insecure society with low trust,” said Mike Schwartz, Chair of the Janssen Project Technical Steering Committee. “In the world of software, nothing builds trust like the open source development methodology. For organizations who cannot outsource trust, the Janssen Project strives to bring transparency, best practices and collective governance to the long term maintenance of this important effort. The Linux Foundation provides the neutral and proven forum for organizations to collaborate on this work.”

The Gluu engineering teams chose the Linux Foundation to host this community because of the Foundation’s priority of transparency in the development process and its formal framework for governance to facilitate collaboration among commercial partners. 

New digital identity challenges arise constantly, and new standards are developed to address them.  Open source ecosystems are an engine for innovation to filter and adapt to changing requirements. The Janssen Project Technical Steering Committee (“TSC”) will help govern priorities according to the charter.  The initial TSC includes: 

  • Michael Schwartz, TSC Chair, CEO Gluu
  • Rajesh Bavanantham, Domain Architect at F5 Networks/NGiNX
  • Rod Boothby, Head of Digital Trust at Santander
  • Will Cayo, Director of Software Engineering at IDEMIA Digital Labs
  • Ian McCloy, Principal Product Manager at Couchbase
  • Alexander Werner, Software Engineer at BioID

New Open Source Contributor Report from Linux Foundation and Harvard Identifies Motivations and Opportunities for Improving Software Security

New survey reveals why contributors work on open source projects and how much time they spend on security

The Linux Foundation’s Open Source Security Foundation (OpenSSF) and the Laboratory for Innovation Science at Harvard (LISH) announced the release of a new report, “Report on the 2020 FOSS Contributor Survey,” which details the findings of a contributor survey administered by the organizations and focused on how contributors engage with open source software. The research is part of an ongoing effort to study and identify ways to improve the security and sustainability of open source software.

The FOSS (Free and Open Source Software) contributor survey and report follow the Census II analysis released earlier this year. This combined pair of works represents important steps towards understanding and addressing structural and security complexities in the modern-day supply chain where open source is pervasive but not always understood. Census II identified the most commonly used free and open source software (FOSS) components in production applications, while the FOSS Contributor Survey and report shares findings directly from nearly 1,200 respondents working on them and other FOSS software. 

Key findings from the FOSS Contributor Survey include:

  • The top three motivations for contributors are non-monetary. While the overwhelming majority of respondents (74.87 percent) are already employed full-time and more than half (51.65 percent) are specifically paid to develop FOSS, motivations to contribute focused on adding a needed feature or fix, enjoyment of learning and fulfilling a need for creative or enjoyable work. 
  • There is a clear need to dedicate more effort to the security of FOSS, but the burden should not fall solely on contributors. Respondents report spending, on average, just 2.27 percent of their total contribution time on security and express little desire to increase that time. The report authors suggest alternative methods of incentivizing security-related efforts.
  • As more contributors are paid by their employer to contribute, stakeholders need to balance corporate and project interests. The survey revealed that nearly half (48.7 percent) of respondents are paid by their employer to contribute to FOSS, suggesting strong support for the stability and sustainability of open source projects but drawing into question what happens if corporate interest in a project diminishes or ceases.
  • Companies should continue the positive trend of corporate support for employees’ contributions to FOSS. More than 45.45 percent of respondents stated they are free to contribute to FOSS without asking permission, compared to 35.84 percent ten years ago. However, 17.48 percent of respondents say their companies have unclear policies on whether they can contribute, and 5.59 percent were unaware of what policies – if any – their employer had.

The report authors are Frank Nagle, Harvard Business School; David A. Wheeler, the Linux Foundation; Hila Lifshitz-Assaf, New York University; and Haylee Ham and Jennifer L. Hoffman, Laboratory for Innovation Science at Harvard. 

Shape your future with data and analytics

Microsoft Azure had its day on Dec. 3 just as I was digesting the news from rival Amazon Web Services (AWS). The theme was “all about data and analytics.” The focus was on applications Microsoft has added to its Azure services. Anyone who ever thought that these services stopped at being convenient hosts for your cloud missed the entire business model.

Industrial software developers have been busily aligning with Microsoft Azure. Maybe that is why there was no direct assault on their businesses like there was with the AWS announcements. But… Microsoft’s themes of breaking down silos of information and combining advanced analytics could render some of the developers’ own tools moot—unless they simply repackage those from Microsoft.

The heart of yesterday’s virtual event was summed up by Julia White, Corporate Vice President, Microsoft Azure, in a blog post.

Over the years, we have had a front-row seat to digital transformation occurring across all industries and regions around the world. And in 2020, we’ve seen that digitally transformed organizations have successfully adapted to sudden disruptions. What lies at the heart of digital transformation is also the underpinning of organizations who’ve proven most resilient during turbulent times—and that is data. Data is what enables both analytical power—analyzing the past and gaining new insights, and predictive power—predicting the future and planning ahead.

To harness the power of data, first we need to break down data silos. While not a new concept, achieving this has been a constant challenge in the history of data and analytics as its ecosystem continues to be complex and heterogeneous. We must expand beyond the traditional view that data silos are the core of the problem. The truth is, too many businesses also have silos of skills and silos of technologies, not just silos of data. And, this must be addressed holistically.

For decades, specialized technologies like data warehouses and data lakes have helped us collect and analyze data of all sizes and formats. But in doing so, they often created niches of expertise and specialized technology in the process. This is the paradox of analytics: the more we apply new technology to integrate and analyze data, the more silos we can create.

To break this cycle, a new approach is needed. Organizations must break down all silos to achieve analytical power and predictive power, in a unified, secure, and compliant manner. Your organizational success over the next decade will increasingly depend on your ability to accomplish this goal.

This is why we stepped back and took a new approach to analytics in Azure. We rearchitected our operational and analytics data stores to take full advantage of a new, cloud-native architecture. This fundamental shift, while maintaining consistent tools and languages, is what enables the long-held silos to be eliminated across skills, technology, and data. At the core of this is Azure Synapse Analytics—a limitless analytics service that brings together data integration, enterprise data warehousing, and Big Data analytics into a single service offering unmatched time to insights. With Azure Synapse, organizations can run the full gamut of analytics projects and put data to work much more quickly, productively, and securely, generating insights from all data sources. And, importantly, Azure Synapse combines capabilities spanning the needs of data engineering, machine learning, and BI without creating silos in processes and tools. Customers such as Walgreens, Myntra, and P&G have achieved tremendous success with Azure Synapse, and today we move to global general availability, so every customer can now get access.

But, just breaking down silos is not sufficient. A comprehensive data governance solution is needed to know where all data resides across an organization. An organization that does not know where its data is, does not know what its future will be. To empower this solution, we are proud to deliver Azure Purview—a unified data governance service that helps organizations achieve a complete understanding of their data. 

Azure Purview helps discover all data across your organization, track lineage of data, and create a business glossary wherever it is stored: on-premises, across clouds, in SaaS applications, and in Microsoft Power BI. It also helps you understand your data exposures by using over 100 AI classifiers that automatically look for personally identifiable information (PII), sensitive data, and pinpoint out-of-compliance data. Azure Purview is integrated with Microsoft Information Protection which means you can apply the same sensitivity labels defined in Microsoft 365 Compliance Center. With Azure Purview, you can view your data estate pivoting on classifications and labeling and drill into assets containing sensitive data across on-premises, multi-cloud, and multi-edge locations.
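To give a feel for what automatic classification means in practice, here is a deliberately simplified, rule-based PII detection sketch in Python. It is only a conceptual illustration; Azure Purview’s classifiers are part of a managed service rather than user code, and the toy patterns and sample columns below are hypothetical and far cruder than anything the product ships.

```python
import re

# Toy classifiers: pattern name -> regex. Real services use many more signals.
CLASSIFIERS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set:
    """Return the set of classifier names that match the given text."""
    return {name for name, pattern in CLASSIFIERS.items() if pattern.search(text)}

# Hypothetical column samples from a scanned data source.
columns = {
    "customer_contact": "jane.doe@example.com",
    "order_total": "129.99",
    "tax_id": "123-45-6789",
}

for column, sample in columns.items():
    labels = classify(sample)
    if labels:
        print(column, "-> sensitive:", ", ".join(sorted(labels)))
    else:
        print(column, "-> no sensitive data detected")
```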


Yesterday, Microsoft announced that the latest version of Azure Synapse is generally available, and the company also unveiled a new data governance solution, Azure Purview.

In the year since Azure Synapse was announced, Microsoft says the number of Azure customers running petabyte-scale workloads – or the equivalent of 500 billion pages of standard printed text – has increased fivefold.

Azure Purview, now available in public preview, will initially enable customers to understand exactly what data they have, manage the data’s compliance with privacy regulations and derive valuable insights more quickly.

Just as Azure Synapse represented the evolution of the traditional data warehouse, Azure Purview is the next generation of the data catalog, Microsoft says. It builds on the existing data search capabilities, adding enhancements to help customers comply with data handling laws and incorporate security controls.

The service includes three main components:

  • Data discovery, classification and mapping: Azure Purview will automatically find all of an organization’s data on premises or in the cloud and evaluate the characteristics and sensitivity of the data. Beginning in February, the capability will also be available for data managed by other storage providers.
  • Data catalog: Azure Purview enables all users to search for trusted data using a simple web-based experience. Visual graphs let users quickly see if data of interest is from a trusted source.
  • Data governance: Azure Purview provides a bird’s-eye view of a company’s data landscape, enabling data officers to efficiently govern data use. This enables key insights such as the distribution of data across environments, how data is moving and where sensitive data is stored.

Microsoft says these improvements will help break down the internal barriers that have traditionally complicated and slowed data governance.

Dell Technologies Accelerates as-a-Service Strategy

Several conferences came to my attention this week, each consuming a few hours rather than a few days. One was Dell Technologies World. I first became involved with Dell through the Influencer program for Internet of Things and Edge. Both of those groups are gone; the only people I remember are selling laptops now. Only a few years ago, Michael Dell spoke of the IoT group in his keynote. This year…nada.

However, there was still much of interest this year. Just as its competitors have been building out as-a-Service strategies, Dell Technologies has now announced one of its own. Expect this trend to spread.

Highlights

Businesses can buy and deploy IT with a simple, consistent cloud experience across the industry’s broadest IT portfolio

News summary

  • Project APEX simplifies how Dell Technologies customers can consume IT as-a-Service
  • Dell Technologies Cloud Console gives customers a single self-service interface to manage every aspect of their cloud and as-a-Service journey
  • Dell Technologies Storage as-a-Service will be deployed and managed on-premises by Dell Technologies
  • Dell Technologies Cloud Platform advancements make cloud compute resources accessible with instance-based offerings, lowering the barrier of entry and extending subscription availability

Dell Technologies expands its as-a-Service capabilities with Project APEX to simplify how customers and partners access Dell technology on-demand—across storage, servers, networking, hyperconverged infrastructure, PCs and broader solutions.

Project APEX will unify the company’s as-a-Service and cloud strategies, technology offerings, and go-to-market efforts. Businesses will have a consistent as-a-Service experience wherever they run workloads, including on-premises, edge locations and public clouds.

“Project APEX will give our customers choice, simplicity and a consistent experience across PCs and IT infrastructure from one trusted partner—unmatched in the industry,” said Jeff Clarke, chief operating officer and vice chairman, Dell Technologies. “We’re building upon our long history of offering on-demand technology with this initiative. Our goal is to give customers the freedom to scale resources in ways that work best for them, so they can quickly respond to changes and focus less on IT and more on their business needs.”

“By the end of 2021, the agility and adaptability that comes with as-a-Service consumption will drive a 3X increase in demand for on-premises infrastructure delivered via flexible consumption/as-a-Service solutions,” said Rick Villars, group vice president, Worldwide Research at IDC.

Dell Technologies as-a-Service and cloud advancements

The new Dell Technologies Cloud Console will provide the foundation for Project APEX and will deliver a single, seamless experience for customers to manage their cloud and as-a-Service journey. Businesses can browse the marketplace and order cloud services and as-a-Service solutions to quickly address their needs. With a few clicks, customers can deploy workloads, manage their multi-cloud resources, monitor their costs in real-time and add capabilities.

Available in the first half of next year, Dell Technologies Storage as-a-Service (STaaS) is an on-premises, as-a-Service portfolio of scalable and elastic storage resources that will offer block and file data services and a broad range of enterprise-class features. STaaS is designed for OPEX transactions and allows customers to easily manage their STaaS resources via the Dell Technologies Cloud Console.

Dell continues to expand its Dell Technologies Cloud and as-a-Service offerings with additional advances:

  • Dell Technologies Cloud Platform instance-based offerings— Customers can get started with hybrid cloud for as low as $47 per instance per month with subscription pricing, making it easy to buy and scale cloud resources with pre-defined configurations through the new Dell Technologies Cloud Console.
  • Geographic Expansions— Dell Technologies is extending Dell Technologies Cloud Platform subscription availability to the United Kingdom, France and Germany with further global expansion coming soon.
  • Dell Technologies Cloud PowerProtect for Multi-cloud— This fully-managed service helps customers protect their data and applications across public clouds in a single destination via a low latency connection to the major public clouds. Businesses save costs through the PowerProtect appliance’s deduplication technology and realize additional savings with zero egress fees when retrieving their data from Microsoft Azure.
  • Pre-approved Flex On Demand pricing— The pre-configured pricing makes it simpler for customers to select and deploy Dell Technologies solutions with a pay-per-use experience. Dell Technologies partners globally will receive a rebate of up to 20% on Flex On Demand solutions.

Continued sustainability focus

Dell Technologies Project APEX will help companies retire infrastructure in a secure and environmentally friendly manner. Dell manages the return and refurbishing of used IT gear while also helping to support customers’ own sustainability goals. The company is making additional strides in achieving its Progress Made Real goals by:

  • Reselling 100% of the returned leased assets.
  • Refurbishing and reselling 89% of working assets in the North America and EMEA regions.
  • Reselling 10% of non-working assets to Environmental Disposal Partners who repair, reuse, resell and recycle each asset. Last year, Dell recycled 240,257 kilograms of metal, glass and plastics through this program.
  • Helping customers resell or recycle their excess hardware and prepare to return leased equipment in a secure and environmentally conscious manner through Dell Asset Resale & Recycling Services.

Availability

  • Dell Technologies Cloud Console is available now as a public preview in the United States with EMEA availability planned for the first quarter of 2021.
  • Dell Technologies Storage as-a-Service will be available in the U.S. in the first half of 2021.
  • Dell Technologies Cloud Platform instance-based offerings with subscription pricing are available in the United States, France, Germany and the U.K. Dell Technologies Cloud PowerProtect for Multi-cloud is now available in the U.S., U.K. and Germany.
  • Flex On Demand is available in select countries in North America, Europe, Latin America and the Asia-Pacific region.

Industrial Readers—Watch Where IT Is Going

The manufacturing market is finally discovering the cloud in a big way. This reminds me of similar technologies, such as Ethernet in 2003, when the market suddenly moved from “we don’t trust it” to “get me more.” Professionals in the industrial market are also testing out “edge-to-cloud,” calling it the Industrial Internet of Things. The OPC Foundation has climbed aboard that train. We’ll see more.

But here is news that HPE has finalized its acquisition of Silver Peak, bolstering its vision of “edge-to-cloud transformation.” This vision goes far deeper and broader than IIoT, although it encompasses that…and more.

HPE announced the acquisition of Silver Peak in July. The deal totaled $925 million and brought Silver Peak into the Aruba business unit. Executives said Silver Peak’s SD-WAN [software-defined wide area network] capabilities would pair well with Aruba’s wired and wireless capabilities.

HPE president and CEO Antonio Neri called WAN transformation a crucial element of his company’s Intelligent Edge and edge-to-cloud strategy.

“Armed with a comprehensive SD-WAN portfolio with the addition of Silver Peak, we will accelerate the delivery of a true distributed cloud model and cloud experience for all apps and data wherever they live,” Neri said.

Keerti Melkote, Aruba founder and HPE’s president of Intelligent Edge, said customers want to increase branch connectivity and secure remote workers. As a result, Aruba launched a software-defined branch (SD-branch) solution in 2018 and revamped it earlier this year.

“By combining Silver Peak’s advanced SD-WAN technology with Aruba’s SD-branch and remote worker solutions, customers can simplify branch office and WAN deployments to empower remote workforces, enable cloud-connected distributed enterprises, and transform business operations without compromise,” Melkote said.

New WAN Business

Silver Peak founder and CEO David Hughes now serves as senior vice president of Aruba’s WAN business. He said he looks forward to accelerating “edge-to-cloud transformation initiatives.”

“Digital transformation, cloud-first IT architectures, and the need to support a mobile work-from-anywhere workforce are driving enterprises to rethink the network edge,” Hughes said. “The combination of Silver Peak and Aruba will uniquely enable customers to realize the full transformational promise of these IT megatrends.”