High Performance Computing as a Service

I keep wondering when some enterprising entrepreneur or integrator (most likely not an incumbent in the automation sector) will check out the coming decoupling of software and hardware, latch onto the readily available high performance compute platforms, and totally disrupt the market. Maybe never. Maybe the market is too small? I see the possibilities!

New HPE GreenLake cloud services for HPC will enable any enterprise to run their most demanding workloads with fully managed, pre-bundled HPC cloud services to operate in any data center or colocation environment.

Today’s news from Hewlett Packard Enterprise (HPE) packs potential. Check out the customer references toward the bottom of the release for a suggestion of the import to industrial and manufacturing applications. 

HPE announced it is offering its HPC solutions as a service through HPE GreenLake. The new HPE GreenLake cloud services for HPC allow customers to combine the power of an agile, elastic, pay-per-use cloud experience with the world’s most-proven, market-leading HPC systems from HPE. Now any enterprise can tackle their most demanding compute and data-intensive workloads, to power AI and ML initiatives, speed time to insight, and create new products and experiences through a flexible as-a-service platform that customers can run on-premises or in a colocation facility.

It removes the complexity and cost associated with traditional HPC deployments by delivering fully managed, pre-bundled services based on purpose-built HPC systems, software, storage and networking solutions that come in small, medium or large options. Customers can order these through a self-service portal with simple point-and-click functions to choose the right configuration for their workload needs and receive services in as little as 14 days.

“The massive growth in data, along with Artificial Intelligence and high performance analytics, is driving an increased need for HPC in enterprises of all sizes, from the Fortune 500 to startups,” said Peter Ungaro, senior vice president and general manager, HPC and Mission Critical Solutions (MCS) at HPE. “We are transforming the market by delivering industry-leading HPC solutions in simplified, pre-configured services that control costs and improve governance, scalability and agility through HPE GreenLake. These HPC cloud services enable any enterprise to access the most powerful HPC and AI capabilities and unlock greater insights that will power their ability to advance critical research and achieve bold customer outcomes.”

HPC provides massive computing power, along with modeling and simulation capabilities, to turn complex data into digital models that help researchers and engineers understand what something will look like and perform in the real world. HPC also provides optimal performance to run AI and analytics to increase predictability. These combined capabilities are used to solve challenges from vaccine discovery and weather forecasting to improving designs of cars, planes and even personal and consumer products such as shampoo and laundry detergent.

Enterprises can deploy these services in any data center environment, whether on-premises in their own enterprise or in a colocation facility, and gain fully managed services that allow them to pay for only what they use, empowering them to focus on running their projects, speed time to insight and accelerate innovation.

HPE will initially offer an HPC service based on HPE Apollo systems, combined with storage and networking technologies, which are purpose-built for running modeling and simulation workloads. The service also leverages key HPC software for HPC workload management, support for HPC-specific containers and orchestration, and HPC cluster management and monitoring. HPE plans to expand the rest of its HPC portfolio to as-a-service offerings in the future.

Customers can choose these bundles in small, medium or large configurations, receive them in as little as 14 days, and gain a fully managed service from HPE.

As part of the offering, customers will gain the following features to easily manage, deploy and control costs for their HPC services:

  • HPE GreenLake Central offers an advanced software platform for customers to manage and optimize their HPC services.
  • HPE Self-service dashboard enables users to run and manage HPC clusters on their own, without disrupting workloads, through a point-and-click function.
  • HPE Consumption Analytics provides at-a-glance analytics of usage and cost based on metering through HPE GreenLake.
  • HPC, AI & App Services standardize and package HPC workloads into containers, making it easier to modernize, transfer and access data. Experts use a factory process to quickly move applications into a container platform as needed (a minimal sketch of running a containerized workload follows this list).
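
The release doesn’t detail HPE’s container “factory” tooling, but the idea of running an HPC workload as a container under an orchestrator is easy to illustrate. Here is a minimal, hypothetical sketch using the official Kubernetes Python client to submit a containerized simulation as a batch job; the image name, namespace and resource sizes are placeholders, not anything HPE ships.

```python
# Illustrative only: a generic containerized batch workload submitted to a
# Kubernetes cluster with the official Python client. The image, namespace
# and resource requests are hypothetical; HPE's actual tooling is not
# described in the release.
from kubernetes import client, config

def submit_simulation_job(image="example.com/cfd-solver:latest",
                          namespace="hpc-jobs", cpus="32", memory="128Gi"):
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    container = client.V1Container(
        name="cfd-solver",
        image=image,
        command=["mpirun", "-np", cpus, "/opt/solver/run_case"],
        resources=client.V1ResourceRequirements(
            requests={"cpu": cpus, "memory": memory},
            limits={"cpu": cpus, "memory": memory},
        ),
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(generate_name="simulation-"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container],
                                      restart_policy="Never")
            ),
            backoff_limit=1,
        ),
    )
    # Submit the job; the orchestrator schedules it onto available nodes.
    return client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)
```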

From Research to Reality: Improving Accuracy, Product Design and Quality

Zenseact, a software developer for autonomous driving solutions based in Sweden and China, uses HPE’s HPC solutions as-a-service through HPE GreenLake for modeling and simulation capabilities to analyze the hundreds of petabytes of data it generates globally from its network of test vehicles and software development centers. The solutions help fuel Zenseact’s mission to model and simulate autonomous driving experiences to develop next-generation software to support driver safety.

“At Zenseact, our mission is to improve Advanced Driver-Assisted Systems and Automated Driving to create robust and flexible solutions that will push the envelope in technological innovation and transform the driving experience,” said Robert Tapper, CIO at Zenseact. “By deploying HPE’s high performance computing solutions as-a-service with HPE GreenLake, we are addressing our mission by performing 10,000 simulations per second, based on driving data from our test cars, to accelerate insights for designing software to enable safe autonomous vehicles.”

Other enterprise use case examples include:

  • Building safer cars: Car manufacturers can model and test vehicle functions to improve designs, from simulating effectiveness of rubber types in tires to performing crash simulations to test impact for potential injuries to drivers and passengers.
  • Improving manufacturing with sustainable materials: Simulation is used to discover new materials for additional, sustainable options for aluminum and plastic packaging to increase efficiency and reduce costs.
  • Making critical millisecond decisions in financial markets: Financial analysts can predict critical stock trends and trades, and even improve risk management, in milliseconds in a fast-paced financial services environment where quick and accurate insight is critical.
  • Advancing discovery for drug treatment: Scientists at research labs and pharmaceutical companies can perform complex simulations to understand biological and chemical interactions that can lead to new drug therapies for curing diseases.
  • Accelerating oil & gas exploration: Performing simulations, combined with dedicated seismic analytics, can increase discovery and accuracy of oil reservoirs while reducing overall exploration safety risks and costs by identifying when and where to drill for oil.

Optimizing the HPC Experience with a Dedicated HPC Partner Ecosystem

HPE has a robust ecosystem of HPC partners to help enterprises easily deploy solutions for any workload need, in any data center environment. Partners include:

  • Colocation Facilities: Customers can free up their own real estate by choosing to deploy their HPC systems and equipment in a colocation facility and use their services remotely through HPE GreenLake. HPE colocation partners for HPC deployments, which provide scalable, energy-efficient data centers, include atNorth (formerly Advania Data Center), CyrusOne and ScaleMatrix.
  • Independent Software Vendors (ISV): HPE collaborates with partners, such as Activeeon, Ansys, Core Scientific and TheUberCloud, to provide solutions to optimize a range of software application needs from automation, artificial intelligence, analytics and blockchain to computer-aided engineering (CAE) and computer-aided design (CAD) that are critical to improving time-to-market for manufacturing, engineering and product design.


Availability

Initial pre-bundled offerings for HPE GreenLake cloud services for high performance computing (HPC) will be generally available in spring of 2021 for customers globally.

HPE plans to expand HPE GreenLake cloud services for HPC to additional technologies, which includes Cray-based compute, software, storage and networking solutions, in the future.

All HPE GreenLake cloud services, including for HPC, are available through HPE’s channel partner program.

Digital Trust and Reasons Why People Collaborate on Open Source

I have two Linux Foundation open source releases today. We connected through the EdgeX Foundry IoT platform. Then we discovered we had many common interests. One of the releases touches on a fundamental element of commerce and collaboration—trust. The Linux Foundation hosts open source projects, and open source is a powerful software development model. But it takes many dedicated and talented people to accomplish the task. Why do people work on open source? LF conducted a survey to get an idea.

The Janssen Project Takes on World’s Most Demanding Digital Trust Challenges at Linux Foundation

New Janssen Project seeks to build the world’s fastest and most comprehensive cloud native identity and access management software platform

The Linux Foundation announced the Janssen Project, a cloud native identity and access management software platform that prioritizes security and performance for our digital society. Janssen is based on the Gluu Server and benefits from a rich set of signing and encryption functionalities. Engineers from IDEMIA, F5, BioID, Couchbase, and Gluu will make up the Technical Steering Committee.

Online trust is a fundamental challenge to our digital society. The Internet has connected us. But at the same time, it has undermined trust. Digital identity starts with a connection between a person and a digital device. Identity software conveys the integrity of that connection from the user’s device to a complex web of backend services. Solving the challenge of digital identity is foundational to achieving trustworthy online security.

While other identity and access management platforms exist, the Janssen Project seeks to tackle the most challenging security and performance requirements. Based on the latest code that powers the Gluu Server–which has passed more OpenID self-certification tests than any other platform–Janssen starts with a rich set of signing and encryption functionality that can be used for high assurance transactions. Having shown throughput of more than one billion authentications per day, the software can also handle the most demanding requirements for concurrency thanks to Kubernetes auto-scaling and advances in persistence.
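
The release emphasizes Janssen’s signing and encryption features and its OpenID Connect self-certification results. As a purely illustrative sketch of what that buys an application, and not the Janssen Project’s own API, here is how an OpenID Connect ID token might be verified in Python with the widely used PyJWT library; the issuer, JWKS endpoint and client ID are placeholders.

```python
# Illustrative only: verifying a signed OpenID Connect ID token with the
# PyJWT library. The issuer, JWKS URL and client_id are placeholders; this
# is not the Janssen Project's own API.
import jwt  # pip install pyjwt[crypto]

ISSUER = "https://idp.example.com"       # hypothetical OpenID provider
JWKS_URL = f"{ISSUER}/jwks"              # hypothetical signing-key endpoint
CLIENT_ID = "my-client"                  # hypothetical audience

def verify_id_token(token: str) -> dict:
    """Check the token's signature, issuer, audience and expiry; return claims."""
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )
```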

“Trust and security are not competitive advantages–no one wins in an insecure society with low trust,” said Mike Schwartz, Chair of the Janssen Project Technical Steering Committee. “In the world of software, nothing builds trust like the open source development methodology. For organizations who cannot outsource trust, the Janssen Project strives to bring transparency, best practices and collective governance to the long term maintenance of this important effort. The Linux Foundation provides the neutral and proven forum for organizations to collaborate on this work.”

The Gluu engineering teams chose the Linux Foundation to host this community because of the Foundation’s priority of transparency in the development process and its formal framework for governance to facilitate collaboration among commercial partners. 

New digital identity challenges arise constantly, and new standards are developed to address them. Open source ecosystems are an engine for innovation to filter and adapt to changing requirements. The Janssen Project Technical Steering Committee (“TSC”) will help govern priorities according to the charter. The initial TSC includes:

  • Michael Schwartz, TSC Chair, CEO Gluu
  • Rajesh Bavanantham, Domain Architect at F5 Networks/NGiNX
  • Rod Boothby, Head of Digital Trust at Santander
  • Will Cayo, Director of Software Engineering at IDEMIA Digital Labs
  • Ian McCloy, Principal Product Manager at Couchbase
  • Alexander Werner, Software Engineer at BioID

New Open Source Contributor Report from Linux Foundation and Harvard Identifies Motivations and Opportunities for Improving Software Security

New survey reveals why contributors work on open source projects and how much time they spend on security

The Linux Foundation’s Open Source Security Foundation (OpenSSF) and the Laboratory for Innovation Science at Harvard (LISH) announced the release of a new report, “Report on the 2020 FOSS Contributor Survey,” which details the findings of a contributor survey administered by the organizations and focused on how contributors engage with open source software. The research is part of an ongoing effort to study and identify ways to improve the security and sustainability of open source software.

The FOSS (Free and Open Source Software) contributor survey and report follow the Census II analysis released earlier this year. This pair of works represents important steps towards understanding and addressing structural and security complexities in the modern-day supply chain, where open source is pervasive but not always understood. Census II identified the most commonly used free and open source software (FOSS) components in production applications, while the FOSS Contributor Survey and report share findings directly from nearly 1,200 respondents working on those components and other FOSS software.

Key findings from the FOSS Contributor Survey include:

  • The top three motivations for contributors are non-monetary. While the overwhelming majority of respondents (74.87 percent) are already employed full-time and more than half (51.65 percent) are specifically paid to develop FOSS, motivations to contribute focused on adding a needed feature or fix, enjoyment of learning and fulfilling a need for creative or enjoyable work. 
  • There is a clear need to dedicate more effort to the security of FOSS, but the burden should not fall solely on contributors. Respondents report spending, on average, just 2.27 percent of their total contribution time on security and express little desire to increase that time. The report authors suggest alternative ways to incentivize security-related efforts.
  • As more contributors are paid by their employer to contribute, stakeholders need to balance corporate and project interests. The survey revealed that nearly half (48.7 percent) of respondents are paid by their employer to contribute to FOSS, suggesting strong support for the stability and sustainability of open source projects but drawing into question what happens if corporate interest in a project diminishes or ceases.
  • Companies should continue the positive trend of corporate support for employees’ contribution to FOSS. More than 45.45 percent of respondents stated they are free to contribute to FOSS without asking permission, compared to 35.84 percent ten years ago. However, 17.48 percent of respondents say their companies have unclear policies on whether they can contribute and 5.59 percent were unaware of what policies – if any – their employer had.

The report authors are Frank Nagle, Harvard Business School; David A. Wheeler, the Linux Foundation; Hila Lifshitz-Assaf, New York University; and Haylee Ham and Jennifer L. Hoffman, Laboratory for Innovation Science at Harvard. 

Shape your future with data and analytics

Microsoft Azure had its day on Dec. 3 just as I was digesting the news from rival Amazon Web Services (AWS). The theme was “all about data and analytics.” The focus was on applications Microsoft has added to its Azure services. Anyone who ever thought that these services stopped at being convenient hosts for your cloud missed the entire business model.

Industrial software developers have been busily aligning with Microsoft Azure. Maybe that is why there was no direct assault on their businesses like there was with the AWS announcements. But… Microsoft’s themes of breaking silos of information and combining advanced analytics have the possibility of rendering moot some of the developers’ own tools—unless they just repackage those from Microsoft.

The heart of yesterday’s virtual event was summed up by Julia White, Corporate Vice President, Microsoft Azure, in a blog post.

Over the years, we have had a front-row seat to digital transformation occurring across all industries and regions around the world. And in 2020, we’ve seen that digitally transformed organizations have successfully adapted to sudden disruptions. What lies at the heart of digital transformation is also the underpinning of organizations who’ve proven most resilient during turbulent times—and that is data. Data is what enables both analytical power—analyzing the past and gaining new insights, and predictive power—predicting the future and planning ahead.

To harness the power of data, first we need to break down data silos. While not a new concept, achieving this has been a constant challenge in the history of data and analytics as its ecosystem continues to be complex and heterogeneous. We must expand beyond the traditional view that data silos are the core of the problem. The truth is, too many businesses also have silos of skills and silos of technologies, not just silos of data. And, this must be addressed holistically.

For decades, specialized technologies like data warehouses and data lakes have helped us collect and analyze data of all sizes and formats. But in doing so, they often created niches of expertise and specialized technology in the process. This is the paradox of analytics: the more we apply new technology to integrate and analyze data, the more silos we can create.

To break this cycle, a new approach is needed. Organizations must break down all silos to achieve analytical power and predictive power, in a unified, secure, and compliant manner. Your organizational success over the next decade will increasingly depend on your ability to accomplish this goal.

This is why we stepped back and took a new approach to analytics in Azure. We rearchitected our operational and analytics data stores to take full advantage of a new, cloud-native architecture. This fundamental shift, while maintaining consistent tools and languages, is what enables the long-held silos to be eliminated across skills, technology, and data. At the core of this is Azure Synapse Analytics—a limitless analytics service that brings together data integration, enterprise data warehousing, and Big Data analytics into a single service offering unmatched time to insights. With Azure Synapse, organizations can run the full gamut of analytics projects and put data to work much more quickly, productively, and securely, generating insights from all data sources. And, importantly, Azure Synapse combines capabilities spanning the needs of data engineering, machine learning, and BI without creating silos in processes and tools. Customers such as Walgreens, Myntra, and P&G have achieved tremendous success with Azure Synapse, and today we move to global general availability, so every customer can now get access.

But, just breaking down silos is not sufficient. A comprehensive data governance solution is needed to know where all data resides across an organization. An organization that does not know where its data is, does not know what its future will be. To empower this solution, we are proud to deliver Azure Purview—a unified data governance service that helps organizations achieve a complete understanding of their data. 

Azure Purview helps discover all of your organization’s data wherever it is stored: on-premises, across clouds, in SaaS applications, and in Microsoft Power BI. It also tracks data lineage and builds a business glossary, and it helps you understand your data exposures by using over 100 AI classifiers that automatically look for personally identifiable information (PII) and sensitive data and pinpoint out-of-compliance data. Azure Purview is integrated with Microsoft Information Protection, which means you can apply the same sensitivity labels defined in the Microsoft 365 Compliance Center. With Azure Purview, you can view your data estate, pivot on classifications and labels, and drill into assets containing sensitive data across on-premises, multi-cloud, and multi-edge locations.
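
Purview’s built-in classifiers are, of course, far more sophisticated than anything shown here, but a toy example makes the concept of automated PII classification concrete. This sketch is purely illustrative and does not use the Purview API; the patterns and labels are made up.

```python
# Illustrative only: a toy regex-based PII scanner showing the idea behind
# automated data classification. Azure Purview's real classifiers are far
# richer; nothing here calls the Purview service.
import re

CLASSIFIERS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> dict:
    """Return every classifier that matched, with the matching snippets."""
    hits = {}
    for label, pattern in CLASSIFIERS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

print(classify("Contact jane.doe@example.com, SSN 123-45-6789"))
# -> {'email': ['jane.doe@example.com'], 'us_ssn': ['123-45-6789']}
```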


Yesterday, Microsoft announced that the latest version of Azure Synapse is generally available, and the company also unveiled a new data governance solution, Azure Purview.
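
White’s post describes Synapse as a single service spanning data integration, enterprise data warehousing and big data analytics. As one hedged illustration of how an application might pull results out of a Synapse dedicated SQL pool, here is a short Python sketch using pyodbc; the workspace name, database, credentials and table are placeholders, and the exact connection string depends on your environment.

```python
# Illustrative only: querying an Azure Synapse dedicated SQL pool from Python
# with pyodbc. Server, database, credentials and table are placeholders;
# check the connection details for your own workspace.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myworkspace.sql.azuresynapse.net,1433;"   # hypothetical workspace
    "Database=SalesDW;Uid=analyst;Pwd=********;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute(
        "SELECT region, SUM(amount) AS revenue "
        "FROM dbo.FactSales GROUP BY region ORDER BY revenue DESC"
    )
    for region, revenue in cursor.fetchall():
        print(region, revenue)
```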

In the year since Azure Synapse was announced, Microsoft says the number of Azure customers running petabyte-scale workloads – or the equivalent of 500 billion pages of standard printed text – has increased fivefold.

Azure Purview, now available in public preview, will initially enable customers to understand exactly what data they have, manage the data’s compliance with privacy regulations and derive valuable insights more quickly.

Just as Azure Synapse represented the evolution of the traditional data warehouse, Azure Purview is the next generation of the data catalog, Microsoft says. It builds on the existing data search capabilities, adding enhancements to help customers comply with data handling laws and incorporate security controls.

The service includes three main components:

  • Data discovery, classification and mapping: Azure Purview will automatically find all of an organization’s data on premises or in the cloud and evaluate the characteristics and sensitivity of the data. Beginning in February, the capability will also be available for data managed by other storage providers.
  • Data catalog: Azure Purview enables all users to search for trusted data using a simple web-based experience. Visual graphs let users quickly see if data of interest is from a trusted source.
  • Data governance: Azure Purview provides a bird’s-eye view of a company’s data landscape, enabling data officers to efficiently govern data use. This enables key insights such as the distribution of data across environments, how data is moving and where sensitive data is stored. (A toy sketch of a catalog with classifications and lineage follows this list.)
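
Stripped to its essentials, a governance service like this maintains a searchable inventory of data assets, each tagged with classifications and linked by lineage edges that record where data came from. The toy Python sketch below illustrates that structure; the field names and assets are hypothetical and are not Purview’s actual data model.

```python
# Illustrative only: a toy data-catalog model with classification tags and
# lineage edges, sketching the concepts behind a governance service such as
# Azure Purview. Requires Python 3.9+ for the built-in generic annotations.
from dataclasses import dataclass, field

@dataclass
class Asset:
    qualified_name: str                   # e.g. "sqlpool.sales.orders"
    location: str                         # "on-prem", "azure", "saas", ...
    classifications: set[str] = field(default_factory=set)
    upstream: set[str] = field(default_factory=set)   # lineage: data sources

catalog: dict[str, Asset] = {}

def register(asset: Asset) -> None:
    catalog[asset.qualified_name] = asset

def lineage(name: str) -> set[str]:
    """Walk upstream edges to find every source an asset depends on."""
    seen: set[str] = set()
    stack = [name]
    while stack:
        current = stack.pop()
        for src in catalog.get(current, Asset(current, "unknown")).upstream:
            if src not in seen:
                seen.add(src)
                stack.append(src)
    return seen

register(Asset("crm.contacts", "saas", {"email", "pii"}))
register(Asset("lake.raw_contacts", "azure", {"pii"}, upstream={"crm.contacts"}))
register(Asset("bi.marketing_report", "powerbi", upstream={"lake.raw_contacts"}))
print(lineage("bi.marketing_report"))   # {'lake.raw_contacts', 'crm.contacts'}
```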

Microsoft says these improvements will help break down the internal barriers that have traditionally complicated and slowed data governance.

Dell Technologies Accelerates as-a-Service Strategy

A few hours rather than a few days this week brought several conferences to my attention. One was Dell Technologies World. I first was involved with Dell through the Influencer program for Internet of Things and Edge. Both of those groups are gone. The only people I remember are selling laptops now. Only a few years ago, Michael Dell spoke of the IoT group in his keynote. This year…nada.

However, there was still much of interest this year. Just as its competitors have been building out as-a-Service strategies, Dell Technologies has announced just such a strategy. Expect this trend to spread.

Highlights

Businesses can buy and deploy IT with a simple, consistent cloud experience across the industry’s broadest IT portfolio

News summary

  • Project APEX simplifies how Dell Technologies customers can consume IT as-a-Service
  • Dell Technologies Cloud Console gives customers a single self-service interface to manage every aspect of their cloud and as-a-Service journey
  • Dell Technologies Storage as-a-Service will be deployed and managed on-premises by Dell Technologies
  • Dell Technologies Cloud Platform advancements make cloud compute resources accessible with instance-based offerings, lowering the barrier of entry and extending subscription availability

Dell Technologies expands its as-a-Service capabilities with Project APEX to simplify how customers and partners access Dell technology on-demand—across storage, servers, networking, hyperconverged infrastructure, PCs and broader solutions.

Project APEX will unify the company’s as-a-Service and cloud strategies, technology offerings, and go-to-market efforts. Businesses will have a consistent as-a-Service experience wherever they run workloads including on-premises, edge locations and public clouds.

“Project APEX will give our customers choice, simplicity and a consistent experience across PCs and IT infrastructure from one trusted partner—unmatched in the industry,” said Jeff Clarke, chief operating officer and vice chairman, Dell Technologies. “We’re building upon our long history of offering on-demand technology with this initiative. Our goal is to give customers the freedom to scale resources in ways that work best for them, so they can quickly respond to changes and focus less on IT and more on their business needs.”

“By the end of 2021, the agility and adaptability that comes with as-a-Service consumption will drive a 3X increase in demand for on-premises infrastructure delivered via flexible consumption/as-a-Service solutions,” said Rick Villars, group vice president, Worldwide Research at IDC.

Dell Technologies as-a-Service and cloud advancements

The new Dell Technologies Cloud Console will provide the foundation for Project APEX and will deliver a single, seamless experience for customers to manage their cloud and as-a-Service journey. Businesses can browse the marketplace and order cloud services and as-a-Service solutions to quickly address their needs. With a few clicks, customers can deploy workloads, manage their multi-cloud resources, monitor their costs in real-time and add capabilities.

Available in the first half of next year, Dell Technologies Storage as-a-Service (STaaS) is an on-premises, as-a-Service portfolio of scalable and elastic storage resources that will offer block and file data services and a broad range of enterprise-class features. STaaS is designed for OPEX transactions and allows customers to easily manage their STaaS resources via the Dell Technologies Cloud Console.

Dell continues to expand its Dell Technologies Cloud and as-a-Service offerings with additional advances:

  • Dell Technologies Cloud Platform instance-based offerings— Customers can get started with hybrid cloud for as low as $47 per instance per month with subscription pricing, making it easy to buy and scale cloud resources with pre-defined configurations through the new Dell Technologies Cloud Console.
  • Geographic Expansions— Dell Technologies is extending Dell Technologies Cloud Platform subscription availability to the United Kingdom, France and Germany with further global expansion coming soon.
  • Dell Technologies Cloud PowerProtect for Multi-cloud— This fully-managed service helps customers protect their data and applications across public clouds in a single destination via a low latency connection to the major public clouds. Businesses save costs through the PowerProtect appliance’s deduplication technology and realize additional savings with zero egress fees when retrieving their data from Microsoft Azure.
  • Pre-approved Flex On Demand pricing— The pre-configured pricing makes it simpler for customers to select and deploy Dell Technologies solutions with a pay-per-use experience. Dell Technologies partners globally will receive a rebate up to 20% on Flex On Demand solutions.

Continued sustainability focus

Dell Technologies Project APEX will help companies retire infrastructure in a secure and environmentally friendly manner. Dell manages the return and refurbishing of used IT gear while also helping to support customers’ own sustainability goals. The company is making additional strides in achieving its Progress Made Real goals by:

  • Reselling 100% of the returned leased assets.
  • Refurbishing and reselling 89% of working assets in the North America and EMEA regions.
  • Reselling 10% of non-working assets to Environmental Disposal Partners who repair, reuse, resell and recycle each asset. Last year, Dell recycled 240,257 kilograms of metal, glass and plastics through this program.
  • Helping customers resell or recycle their excess hardware and prepare to return leased equipment in a secure and environmentally conscious manner through Dell Asset Resale & Recycling Services.

Availability

  • Dell Technologies Cloud Console​ is available now as a public preview in the United States with EMEA availability planned for the first quarter of 2021.
  • Dell Technologies Storage as-a-Service will be available in the U.S. in the first half of 2021.
  • Dell Technologies Cloud Platform instance-based offerings with subscription pricing are available in the United States, France, Germany and the U.K. Dell Technologies Cloud PowerProtect for Multi-cloud is now available in the U.S., U.K. and Germany.
  • Flex On Demand is available in select countries in North America, Europe, Latin America and the Asia-Pacific region.

Industrial Readers—Watch Where IT Is Going

The manufacturing market is finally discovering the cloud in a big way. This reminds me of similar technologies such as Ethernet in 2003, when the market suddenly moved from “we don’t trust it” to “get me more.” Professionals in the industrial market are also testing out “edge-to-cloud,” calling it the Industrial Internet of Things. OPC Foundation has climbed aboard that train. We’ll see more.

But here is news that HPE has finalized its acquisition of Silver Peak, bolstering its vision of “edge-to-cloud transformation.” This vision goes far deeper and broader than IIoT, although it encompasses that…and more.

HPE announced the acquisition of Silver Peak in July. The deal totaled $925 million and brought Silver Peak into the Aruba business unit. Executives said Silver Peak’s SD-WAN [software-defined wide area network] capabilities would pair well with Aruba’s wired and wireless capabilities.

HPE president and CEO Antonio Neri called WAN transformation a crucial element of his company’s Intelligent Edge and edge-to-cloud strategy.

“Armed with a comprehensive SD-WAN portfolio with the addition of Silver Peak, we will accelerate the delivery of a true distributed cloud model and cloud experience for all apps and data wherever they live,” Neri said.

Keerti Melkote, Aruba founder and HPE’s president of Intelligent Edge, said customers want to increase branch connectivity and secure remote workers. As a result, Aruba launched a software-defined branch (SD-branch) solution in 2018 and revamped it earlier this year.

“By combining Silver Peak’s advanced SD-WAN technology with Aruba’s SD-branch and remote worker solutions, customers can simplify branch office and WAN deployments to empower remote workforces, enable cloud-connected distributed enterprises, and transform business operations without compromise,” Melkote said.

New WAN Business

Silver Peak founder and CEO David Hughes now serves as senior vice president of Aruba’s WAN business. He said he looks forward to accelerating “edge-to-cloud transformation initiatives.”

“Digital transformation, cloud-first IT architectures, and the need to support a mobile work-from-anywhere workforce are driving enterprises to rethink the network edge,” Hughes said. “The combination of Silver Peak and Aruba will uniquely enable customers to realize the full transformational promise of these IT megatrends.”

Enabling Enterprise Agility In Volatile Energy Market

This news release from AVEVA reads like something I’d have written more than 10 years ago from the predecessor company about enterprise control solutions. It has taken years, two acquisitions, and the retirement of the thought leaders behind the idea, but this looks like a step in a beneficial direction.

AVEVA announced the launch of the latest enhancement to its AVEVA Unified Supply Chain platform, Real-time Crude, designed to help customers gain visibility into their business and operations in order to quickly understand how crude quality impacts their value chain.

With the Oil and Gas industry facing disruption, a lack of visibility into the supply chain has led to difficulties with reacting to market changes in real time. AVEVA Real-Time Crude, a solution developed with Schneider Electric, combines cutting-edge analytical equipment with powerful machine-learning techniques to provide rapid and reliable crude oil assays across the enterprise in a matter of minutes. Timely information leads to advantages, including more intelligent purchasing decisions, improved operational planning, better allocation of refining resources, and more certain product volume and quality predictions.
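
The release doesn’t describe AVEVA’s models, but the general technique of inferring assay properties from instrument measurements with machine learning is straightforward to sketch. The example below trains a generic regression on synthetic spectra to predict a made-up sulfur figure; it is illustrative only and is not the Real-Time Crude implementation.

```python
# Illustrative only: predicting a crude assay property (here, sulfur content)
# from spectral measurements with a generic regression model. The data is
# synthetic; this is not AVEVA's Real-Time Crude implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 500, 64

# Synthetic NIR-like spectra and a made-up sulfur relationship for the demo.
spectra = rng.normal(size=(n_samples, n_wavelengths))
sulfur_wt_pct = 1.5 + spectra[:, :8].mean(axis=1) + 0.05 * rng.normal(size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, sulfur_wt_pct, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out samples: {model.score(X_test, y_test):.2f}")
```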

Actually, O&G is always facing disruption, so any help managers can get will surely be appreciated.

AVEVA Unified Supply Chain is a single source of knowledge that enables enterprises to share and communicate decisions between diverse teams, promoting collaboration and discussion across global locations and delivering increased visibility that enables rapid reaction to changing market conditions. It comprises modules for planning, scheduling, assay management and supply distribution that share common information, promoting an understanding of the entire plant and of the business. Utilizing common data, models, and user management, it promotes collaboration and visibility across the entire supply chain.

“Given the fluctuating oil prices, volatile markets and the severe global economic downturn projected, the launch of Real-Time Crude is opportune as it addresses many of the known issues that have been plaguing the energy industry. In an industry that survives by being nimble to fluctuations in prices of crudes and products, AVEVA’s offering provides fast information on crude quality to improve efficiency, reliability and agile decision making,” said Harpreet Gulati, Senior Vice President for Planning & Operations Business Unit, AVEVA.

“Real-Time Crude is at the cutting edge of much needed developments within the volatile energy industry, and Schneider Electric and AVEVA are committed to helping customers quickly navigate these uncertain times to function safely, efficiently and reliably. This key feature of the AVEVA Unified Supply Chain demonstrates the true potential of bringing advanced analytical equipment and machine learning technologies to the forefront without being complicated. Our combined goal is to deliver innovative solutions that will not only create efficiencies but also help customers to stay ahead of the curve,” commented Matthew Carrara, Vice President – Process Analyzers and Instrumentation, Schneider Electric, Industry Business – Process Automation.

The AVEVA Unified Supply Chain platform stands apart from legacy approaches, which rely on sets of point solutions that require manual data transfer between them. AVEVA’s is a single, unified solution that leverages common data, models, and user management to promote enterprise collaboration and visibility across the supply chain. This eliminates data transfers and the errors they can introduce, and everyone always has completely up-to-date information.

AVEVA Unified Supply Chain can help improve and optimize workflows in small and large enterprises. Flexible and modern integration mechanisms make working with existing business systems and processes simple, providing easy access to underlying data and results, while still retaining the security and versioning model used to ensure consistency between colleagues.