Cray, an HPE company, held a panel discussion webinar on October 18 to discuss Exascale (10^18, get it?) supercomputing. This is definitely not in my area of expertise, but it is certainly interesting.
What follows is information I gleaned from links they sent me. Basically, it answers "why supercomputing?" And not only the computers themselves, but also the networking that supports them.
Today’s science, technology, and big data questions are bigger, more complex, and more urgent than ever. Answering them demands an entirely new approach to computing. Meet the next era of supercomputing. Code-named Shasta, this system is our most significant technology advancement in decades. With it, we’re introducing revolutionary capabilities for revolutionary questions. Shasta is the next era of supercomputing for your next era of science, discovery, and achievement.
WHY SUPERCOMPUTING IS CHANGING
The kinds of questions being asked today have created a sea-change in supercomputing. Increasingly, high-performance computing systems need to be able to handle massive converged modeling, simulation, AI, and analytics workloads.
With these needs driving science and technology, the next generation of supercomputing will be characterized by exascale performance, data-centric workloads and diversification of processor architectures.
Shasta is that entirely new design. We’ve created it from the ground up to address today’s diversifying needs.
Built to be data-centric, it runs diverse workloads all at the same time. Hardware and software innovations tackle system bottlenecks, manageability, and job completion issues that emerge or grow when core counts increase, compute node architectures proliferate, and workflows expand to incorporate AI at scale.
It eliminates the distinction between clusters and supercomputers with a single new system architecture, enabling a choice of computational infrastructure without tradeoffs. And it allows for mixing and matching multiple processor and accelerator architectures, with support for our new Cray-designed and -developed interconnect, which we call Slingshot.
Slingshot is our new high-speed, purpose-built supercomputing interconnect. It’s our eighth generation of scalable HPC network. In earlier Cray designs, we pioneered the use of adaptive routing, pioneered the design of high-radix switch architectures, and invented a new low-diameter system topology, the dragonfly.
Slingshot breaks new ground again. It features Ethernet capability, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality-of-service capabilities. Support for both IP-routed and remote memory operations broadens the range of applications beyond traditional modeling and simulation.
Quality-of-service and novel congestion management features limit the impact on critical workloads from other applications, system services, I/O traffic, or co-tenant workloads. Reducing the network diameter from five hops (in the current Cray XC™ generation) to three reduces cost, latency, and power while improving sustained bandwidth and reliability.
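The diameter numbers lend themselves to a back-of-the-envelope model. Here is a toy Python sketch of why hop count matters; the per-hop and link latencies are illustrative placeholders of my own, not Cray's published figures:

```python
# Toy model (not Cray's actual numbers): worst-case port-to-port latency
# grows with network diameter, so cutting the diameter from 5 to 3 hops
# reduces latency directly, before any other improvement.

def worst_case_latency_ns(diameter_hops, per_hop_ns=350, link_ns=100):
    """Rough worst case: one switch traversal per hop, plus flight time
    on each link along the path (hops + 1 links end to end)."""
    return diameter_hops * per_hop_ns + (diameter_hops + 1) * link_ns

xc_style = worst_case_latency_ns(5)         # five-hop diameter (XC era)
slingshot_style = worst_case_latency_ns(3)  # three-hop diameter
reduction = 1 - slingshot_style / xc_style
print(f"5-hop: {xc_style} ns, 3-hop: {slingshot_style} ns "
      f"({reduction:.0%} lower worst case)")
```

The point is structural: every hop removed saves a switch traversal and a link flight, which is why low-diameter topologies like the dragonfly matter at scale.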
FLEXIBILITY AND TCO
As your workloads rapidly evolve, the ability to choose your architecture becomes critical. With Shasta, you can incorporate any silicon processing choice, or a heterogeneous mix, with a single management and application development infrastructure. Flex from single- to multi-socket nodes, GPUs, FPGAs, and other processing options that may emerge, such as AI-specialized accelerators.
Designed for a decade or more of work, Shasta also eliminates the need for frequent, expensive upgrades, giving you exceptionally low total cost of ownership. With its software architecture you can deploy a workflow and management environment in a single system, regardless of packaging.
Shasta packaging comes in two options: a 19” air- or liquid-cooled, standard datacenter rack and a high-density, liquid-cooled rack designed to take 64 compute blades with multiple processors per blade.
Additionally, Shasta supports processors well over 500 watts, eliminating the need to do forklift upgrades of system infrastructure to accommodate higher-power processors.
I’ve followed Foxboro and Triconex for many years now in my coverage of the process automation business. A great company that, like too many others, suffered now and again from very poor management. The company has now settled in nicely at its home in Schneider Electric and appears to be healthy.
Much credit must go to Gary Freburger. He provided a steadying hand as the leader before and through the transition, as well as guiding the integration into the new home. He is retiring at the end of the year. I’ve met a number of great leaders and a few stinkers in my 20 years on this side of the business. Gary’s one of the great ones. And his chosen successor (see more below) seems more than up to the task of building on his successes.
Marcotte Succeeds Freburger as Process Automation President
This week’s major announcement revealed that Nathalie Marcotte has been selected to succeed Freburger as president of Schneider Electric’s Process Automation business, effective Jan. 1, 2020.
“After a long, successful industry career, including more than 15 years serving Invensys and Schneider Electric in various senior leadership roles, Gary has decided to retire,” said Peter Herweck, executive vice president, Industrial Automation business, Schneider Electric. “We thank him for his many contributions and his strong legacy of success. We wish him well, and I congratulate Nathalie on her appointment. She brings more than 30 years of industry knowledge, expertise and experience, as well as a long record of success. I look forward to working with her as we build on the success Gary has delivered.”
Since joining the Schneider organization in 1996, Marcotte has held several positions of increasing responsibility, including vice president of Global Performance and Consulting Services; vice president, North America marketing; general manager for the Canadian business; and, prior to her current position, vice president, marketing, Global Systems business. As the company’s current senior vice president, Industrial Automation Services, she is responsible for Schneider Electric’s Services business and offer development, ranging from product support to advanced operations and digital services. She is also responsible for the company’s Global Cybersecurity Services & Solutions business, including the Product Security Office.
“As we move through this transition, it will be business as usual for Schneider Electric and our Process Automation customers,” Marcotte said. “Gary and I are working very closely together to ensure there will be no disruptions to our day-to-day operations. This ensures our customers have the same access to the exceptional people, products and technology they have come to trust and rely on to improve the real-time safety, reliability, efficiency and profitability of their operations.”
“I thank Gary for his many contributions to Schneider Electric and to our industry in general. Under his leadership, our customers, partners and employees have never been better situated to succeed, today and tomorrow,” Marcotte said. “This transition will have no impact on our technology strategy and portfolio roadmap. We remain committed to our continuously-current philosophy, which means never leaving our customers behind. Now, by leveraging the strength of the full Schneider Electric offer, we can take the next step toward enabling an easier, less costly digital transformation for our customers, while keeping them on the path to a safer, more secure and profitable future.”
Following the opening keynotes, I had the opportunity to chat privately with Freburger and Marcotte. The following summarizes a few key takeaways.
Digitalization and Digital Transformation.
These topics were prominently displayed in the ballroom before the keynotes. In fact, the welcome and opening presentation were given by Mike Martinez, Director of Digital Transformation Consulting. These are common themes in the industry; not only in process automation, but also at the IT conferences I cover. Each company has its own unique take on the terms, but it still boils down to data, data integrity, databases, and data security, all of which were discussed.
Key Points From the Presidents.
Integration across Schneider Electric. One priority has been working with other business units (and their technologies) across the Schneider Electric portfolio. This could be PLCs and drives, but power is a huge emphasis. Schneider Electric management wants very much for its process automation acquisition to integrate well with its historic electric power business. This is seen as a strategic opportunity. One thought-provoking observation—is the process engineer/electrical engineer divide as serious as the IT/OT divide? No direct answer. But these domains have historically had little to no collaboration. One to watch.
Close working relationship with AVEVA. If you recall, Schneider Electric bundled its various software acquisitions, including the ones from Invensys (Wonderware, Avantis), and used them to buy into AVEVA, the engineering software company. Bringing automation and software together was a constant source of pain for Invensys; Schneider Electric dealt with it through a separate company. Along the way, cooperation seems to have become better than ever. Marcotte explained to me that Foxboro combines its domain expertise with AVEVA's more general software platforms to deliver customer value. See, for example, my previous post on the Plant Performance Advisors Suite.
Cybersecurity. Marcotte has been leading Schneider's cybersecurity efforts, which are seen as a key part of Schneider Electric's offer. See especially the establishment of the ISA Global Cybersecurity Alliance. They don't talk as much about the Internet of Things here as at other conferences, but when I probed more deeply about IT, cybersecurity was again brought up as the key IT/OT collaboration driver.
It’s been a struggle, but the Schneider Electric process automation business (Foxboro and Triconex) seems as strong as ever. And the people here—both internal and customers—are optimistic and energetic. That’s good to see.
Inductive Automation has selected the recipients of its Ignition Firebrand Awards for 2019. The announcements were made at the Ignition Community Conference (ICC), which took place September 17-19. I get to see the poster displays and chat with the companies at ICC. I love the technology developers, but it’s fascinating to talk with people who actually use the products.
[Disclaimer: Inductive Automation is a long-time and much appreciated sponsor of The Manufacturing Connection. If you are a supplier, you, too, could be a sponsor. Contact me for more details. You would benefit from great visibility.]
The Ignition Firebrand Awards recognize system integrators and industrial organizations that use the Ignition software platform to create innovative new projects. Ignition by Inductive Automation is an industrial application platform with tools for the rapid development of solutions in human-machine interface (HMI), supervisory control and data acquisition (SCADA), manufacturing execution systems (MES), and the Industrial Internet of Things (IIoT). Ignition is used in virtually every industry, in more than 100 countries.
“The award-winning projects this year were really impressive,” said Don Pearson, chief strategy officer for Inductive Automation. “Many of them featured Ignition 8 and the new Ignition Perspective Module, both of which were released just six months ago. We were really impressed with how quickly people were able to create great projects with the new capabilities.”
These Ignition Firebrand Award winners demonstrated the power and flexibility of Ignition:
- Brock Solutions worked with the Dublin Airport in Ireland to replace the baggage handling system in Terminal 2. The new system has 100,000 tags and is the largest Ignition-controlled airport baggage handling system in the world.
- Corso Systems & SCS Engineers partnered on a pilot project for the landfill gas system of San Bernardino County, California. The pilot was so successful, it will be expanded to 27 other county sites. It provides a scalable platform with strong mobile capabilities from Ignition 8 and Ignition Perspective, plus 3D imaging from drone video and virtual reality applications.
- ESM Australia developed a scalable asset management system to monitor performance and meet service requirements for a client with systems deployed all over Australia. The solution leveraged Ignition 8, Ignition Perspective, MQTT, and legacy FTP-enabled gateways in the field.
- H2O Innovation & Automation Station partnered to create a SCADA system for the first membrane bioreactor wastewater treatment plant in Arkansas. The new system for the City of Decatur shares real-time data with neighboring water agencies as well as the mayor.
- Industrial Networking Solutions created a new oil & gas SCADA system in just six months for 37 sites at ARB Midstream. The solution included hardware upgrades, a new control room, and a diverse collection of technologies with cloud-hosted SCADA, MQTT, Ignition Edge, and SD-WAN.
- MTech Engineering developed an advanced real-time monitoring and control system for the largest data center campus in Italy. The project for Aruba S.p.A. had to work with huge amounts of data — and was done at a much lower cost than was possible with any other SCADA solution.
- NLS Engineering created a single, powerful operations and management platform for more than 30 solar-power sites for Ecoplexus, a leader in renewable energy systems. The solution provided deep data acquisition, included more than 100,000 tags, and led to the creation of a platform that can be offered to other clients.
- Streamline Innovations used Ignition, Ignition Edge, Ignition Perspective, and MQTT to facilitate the automation of natural gas treating units that convert extremely toxic hydrogen sulfide into fertilizer-grade sulfur. The solution increased uptime, reduced costs, and provided access to much more data than Streamline had seen previously.
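Several of the winning projects pair Ignition with MQTT. As a rough, standard-library-only illustration of the publish-side pattern (the topic scheme follows the Sparkplug B convention that MQTT-based Ignition deployments commonly use; the helper names and values here are my own, not Inductive Automation's API, and real Sparkplug B encodes payloads as protobuf rather than the JSON stand-in shown):

```python
import json
import time

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B style topic: spBv1.0/<group>/<type>/<node>[/<device>]."""
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

def tag_payload(metrics):
    """JSON stand-in for a device-data (DDATA) payload of tag name/value pairs."""
    return json.dumps({
        "timestamp": int(time.time() * 1000),  # epoch milliseconds
        "metrics": [{"name": n, "value": v} for n, v in metrics.items()],
    })

# Hypothetical site, node, and tag names for illustration only.
topic = sparkplug_topic("Site1", "DDATA", "WellPad7", "Flowmeter")
payload = tag_payload({"flow_rate": 42.7, "pressure_psi": 118.3})
# A real deployment would hand topic/payload to an MQTT client's publish().
```

The appeal of the pattern for the projects above is that edge devices report by exception into a broker, and the SCADA layer subscribes, rather than polling every site.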
This is still more follow-up from Emerson Global Users Exchange, relative to sessions on projects and pilot purgatory. I thought I had already written this, but just discovered it languishing in my drafts folder. While in Nashville, I ran into Jonas Berge, senior director, applied technology for Plantweb at Emerson Automation. He has been a source for technology updates for years. We followed up a brief conversation with a flurry of emails in which he updated me on some presentations.
One important topic centered on IoT projects, though it is actually applicable to other types of projects as well. He told me the secret sauce is to start small. "A World Economic Forum white paper on the fourth industrial revolution in collaboration with McKinsey suggests that to avoid getting stuck in prolonged 'pilot purgatory' plants shall start small with multiple projects – just like we spoke about at EGUE and just like Denka and Chevron Oronite and others have done," he told me.
“I personally believe the problem is when plants get advice to take a ‘big bang’ approach starting by spending years and millions on an additional ‘single software platform’ or data lake and hiring a data science team even before the first use case is tackled,” said Berge. “My blog post explains this approach to avoiding pilot purgatory in greater detail.”
I recommend visiting Berge’s blog for more detail, but I’ll provide some teaser ideas here.
First, he recommends:
- Think Big
- Start Small
- Scale Fast
Plants must scale digital transformation across the entire site to fully enjoy the safety benefits: fewer incidents, faster incident response time, and fewer instances of non-compliance. The same goes for reliability benefits such as greater availability, reduced maintenance cost, extended equipment life, greater integrity (fewer instances of loss of containment), and shorter, less frequent turnarounds. It also holds for energy benefits, like lower energy consumption and cost and reduced emissions and carbon footprint, as well as production benefits, like reduced off-spec product (higher quality and yield), greater throughput, greater flexibility (in feedstock use and products/grades), reduced operations cost, and shorter lead times.
The organization can only absorb so much change at any one time. If too many changes are introduced in one go, the digitalization will stall:
- Too many technologies at once
- Too many data aggregation layers
- Too many custom applications
- Too many new roles
- Too many vendors
Multiple Phased Projects
McKinsey research shows that plants successfully scaling digital transformation instead run smaller digitalization projects: multiple small projects across the functional areas. This matches what I have personally seen in projects I have worked on.
From what I can tell, it is plants that attempt a big-bang approach with many digital technologies at once that struggle to scale. There are forces that encourage companies to attempt sweeping changes to go digital, which can lead to counterproductive overreach.
The Boston Consulting Group (BCG) suggests a disciplined phased approach rather than attempting to boil the ocean. I have seen plants focus on a technology that can digitally transform and help multiple functional areas with common infrastructure. A good example is wireless sensor networks. Deploying wireless sensor networks in turn enables many small projects that help many departments digitally transform the way they work. The infrastructure for one technology can be deployed relatively quickly after which many small projects are executed in phases.
Small projects are low-risk. A small trial of a solution in one plant unit finishes fast. After a quick success, scale it to the full plant area, then to the entire plant; the team can then move on to the next pilot project. This way plants move from proof of concept (PoC) to full-scale, plant-wide implementation at speed. For large organizations with multiple plants, innovations often emerge at an individual plant, then get replicated at other sites and rolled out nationwide and globally.
Use Existing Platform
I have also seen the big-bang approach, where a plant pours a lot of money and resources into an additional "single software platform" layer for data aggregation before the first use case even gets started. This new data-aggregation layer is meant to sit above the ERP, with the intention of collecting data from the ERP and plant historian before making it available to analytics through a proprietary API that requires custom programming.
Instead, successful plants start small projects using the existing data aggregation platform: the plant historian. The historian can be scaled with additional tags as needed. This way a project can be implemented within two weeks, with the pilot running an additional three months, at low risk.
I would personally add that you must also keep the bigger vision in mind. A plant cannot run multiple small projects in isolation, which results in siloed solutions. Plants successful with digital transformation establish, early on, a vision of what the end goal looks like. Based on this vision, they can select the technologies and architecture to build the infrastructure that supports that end goal.
NAMUR Open Architecture (NOA)
The system architecture for the digital operational infrastructure (DOI) is important. The wrong architecture leads to delays and an inability to scale. NAMUR (User Association of Automation Technology in Process Industries) has defined the NAMUR Open Architecture (NOA) to enable Industry 4.0. I have found that plants that have deployed a DOI modeled on the same principles as NOA are able to pilot and scale very fast.
Flying Start
The I&C department in plants can accelerate digital transformation to achieve operational excellence and top-quartile performance by remembering Think Big, Start Small, Scale Fast. These translate into a few simple design principles:
- Phased approach
- Architecture modeled on the NAMUR Open Architecture
- Ready-made apps
- Easy-to-use software
- Digital ecosystem
If I offered you an opportunity to spend $300 and make $50,000 right away, with more to come and no additional expense, would you take it? What about downloading a cybersecurity hack for that much off the dark web and using it to steal a $50,000 car?
Such a possibility exists, Etay Maor, chief security officer of IntSights, told me yesterday. His firm, a threat intelligence company focused on enabling enterprises to Defend Forward, released its new report, Under the Hood: Cybercriminals Exploit Automotive Industry's Software Features. The report identifies the inherent cybersecurity risks and vulnerabilities manufacturers face as the industry goes through a radical transformation toward connectivity.
Car manufacturers offer more software features to consumers than ever before, and increasingly popular autonomous vehicles that require integrated software introduce security vulnerabilities. Widespread cloud connectivity and wireless technologies enhance vehicle functionality, safety, and reliability but expose cars to hacking exploits. In addition, the pressure to deliver products as fast as possible puts a big strain on the security capabilities of cars, manufacturing facilities, and automotive data.
The two main things that affect hackers' motivation, regardless of their skills and knowledge, are the cost-effectiveness of the attack and the value of the information.
Vehicles usually present more complicated attack surfaces than other targets, e.g., banks or retail shops. That said, the automotive industry faces numerous attack vectors, just like any other industry: phishing, credential leaks, leaked databases, open ports and services, insider threats, brand security, and more.
Dark Web Forums
In its research, IntSights discovered online shops selling car-hacking tools on the clear web, where they are easy to find. These shops offer services that disconnect automobile immobilizers and sell code grabbers, and related forums give bad actors complete tutorials on how to steal vehicles.
“The automotive manufacturing industry is fraught with issues, stemming from legacy systems that can’t be patched to the proliferation of vehicle connectivity and software as consumers demand more integration with personal devices and remote access,” said Maor. “A lack of adequate security controls and knowledge of threat vectors enables attackers to take advantage of easily acquired tools on the dark web to reap financial gain. Automakers need to have a constant pulse on dark web chatter, points of known exposure, and data for sale to mitigate risk.”
Top Vehicle Attack Vectors:
- Remote Keyless Systems
- Tire Pressure Monitoring Systems
- Software and Infotainment Applications
- GPS Spoofing
- Cellular Attacks
Other attack vectors explored include:
- Attacking the CAN bus
- Remote Attack Vectors
- Car Applications
- Physical Attack Vectors
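The CAN bus entry deserves a word of explanation: classic CAN frames carry no sender authentication, so any node that can put bytes on the bus can impersonate any other. A minimal sketch, using only the Python standard library; the 16-byte layout matches Linux SocketCAN's `struct can_frame`, but the arbitration ID and data bytes here are made up for illustration:

```python
import struct

# SocketCAN classic frame: u32 can_id, u8 data length, 3 pad bytes, 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id, data):
    """Pack a classic CAN frame. Note: nothing in the frame proves who sent it."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame):
    """Return (arbitration_id, payload) from a packed 16-byte frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

# Hypothetical body-control frame; real IDs and payloads vary by manufacturer.
frame = pack_can_frame(0x2A5, b"\x01\x00")
assert len(frame) == 16
assert unpack_can_frame(frame) == (0x2A5, b"\x01\x00")
```

This is exactly why the tools sold on those dark web forums work: once an attacker has bus access, crafting a valid-looking frame is a few lines of code, and the receiving ECUs have no way to tell it from a legitimate one.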
IntSights has “the industry’s only all-in-one external threat protection platform designed to neutralize cyberattacks outside the wire.” Its cyber reconnaissance capabilities enable continuous monitoring of an enterprise’s external digital profile across the clear, deep, and dark web to identify emerging threats and orchestrate proactive response.