Years ago I dabbled in machine vision integration. It was fun and creative. My customers and I did some pretty cool quality control applications, so I maintain a liking for the technology, even though hardware prices have since plummeted and ease of use has skyrocketed. So, I bring you this interesting news.
Honeywell is collaborating with Papertech to develop and market TotalVision, a connected, camera-based detection system for the flat sheet industries. The system enables customers to identify and resolve defects on the production line, improving quality and efficiency. The fully integrated total quality control solution is designed for flat sheet and film processes in which surface defect detection and production break monitoring capabilities are critical for competitive success. Target industries include paper, pulp, tissue, board, extruded film, calendering, lithium-ion battery, and copper and aluminium foil production.
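Neither company publishes its detection algorithms, but the core idea behind camera-based surface inspection is easy to sketch: compare each pixel of a scanned strip of the moving web against the expected sheet brightness and flag strong outliers as candidate defects. The plain-Python sketch below uses made-up numbers and a crude global threshold purely for illustration; real systems use line-scan cameras, calibration, and far more robust processing.

```python
def find_defects(scan, threshold=40):
    """Flag pixels that deviate strongly from the sheet's mean
    brightness. A crude stand-in for production web inspection,
    which is far more sophisticated."""
    flat = [p for row in scan for p in row]
    mean = sum(flat) / len(flat)
    return [(r, c)
            for r, row in enumerate(scan)
            for c, p in enumerate(row)
            if abs(p - mean) > threshold]

# Simulated 4x6 grayscale strip of a moving sheet: a uniform web
# (values near 200) with one dark speck at row 2, column 3.
strip = [
    [200, 201, 199, 200, 202, 200],
    [199, 200, 200, 201, 200, 199],
    [200, 200, 201,  90, 200, 200],
    [201, 199, 200, 200, 199, 201],
]
print(find_defects(strip))  # -> [(2, 3)]
```

The real value of a system like TotalVision is less in the pixel math than in tying each detected defect back to the process event that caused it, which is where the QCS integration comes in.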
Combining Honeywell’s Experion MX technology with Papertech’s market-leading TotalVision defect detection and event-capturing capabilities, the solution provides a single-window operating environment for all aspects of process and quality control. Customers benefit from faster root-cause determination of runnability and quality problems, thereby significantly reducing lost or downgraded production. When integrated with connected offerings such as Honeywell QCS 4.0, system data and analytics can be accessed anytime, anywhere, from any device.
“Honeywell represents an ideal collaborator for Papertech as our industry-leading WebInspector WIS and our WebVision web monitoring system (WMS) single platform TotalVision camera system seamlessly integrate with Honeywell’s quality control systems for a range of industries,” said Kari Hilden, CEO of Papertech Inc. “We look forward to working with the global Honeywell team and their customers.”
Honeywell will continue to support existing camera system users with parts and services, while offering an easy migration path to the new solution. Given the collaborative nature of the agreement, customers can choose to take a single party, single-window approach or to engage with Honeywell and Papertech separately.
“As the world moves from plastic to biomaterial-based packaging, and from hydrocarbon-based transportation to electric vehicles, flat sheet producers are under increased pressure to ensure output consistently meets a variety of performance and safety requirements,” said Michael Kennelly, global business leader for sheet, film and foil industries, Honeywell Process Solutions. “By bringing together Honeywell’s core strengths of measurement, control, connected applications and services in flat sheet production with Papertech’s leadership in web monitoring and inspection systems, we uniquely provide customers with that capability along with industry-beating lifecycle costs.”
Papertech is the global industry-leading machine vision system supplier for a range of web-based production lines with more than 1200 TotalVision installations in 42 countries. It is part of the IBS Paper Performance Group, a company with a more than 50-year history in delivering papermakers a full range of proven machine efficiency and product quality optimization solutions.
For more information visit Honeywell Quality Control Systems and Papertech TotalVision solutions.
Salesforce recently began reaching out to me. I found a (to me) surprising connection to industrial/manufacturing applications beyond CRM and the like. In general, more and more applications are moving to the cloud. In brief: new research finds the Salesforce Economy will create more than $1 trillion in new business revenues and 4.2 million jobs between 2019 and 2024. The Salesforce ecosystem is on track to become nearly six times larger than Salesforce itself by 2024, earning $5.80 for every dollar Salesforce makes.
Financial services, manufacturing and retail industries will lead the way, creating $224 billion, $212 billion and $134 billion in new business revenue respectively by 2024.
Salesforce announced new research from IDC that finds Salesforce and its ecosystem of partners will create 4.2 million new jobs and $1.2 trillion in new business revenues worldwide between 2019 and 2024. The research also finds Salesforce is driving massive gains for its partner ecosystem, which will see $5.80 in gains for every $1 Salesforce makes by 2024.
Cloud computing is driving this growth and giving rise to a host of new technologies, including mobile, social, IoT and AI, that are creating new revenue streams and jobs that further fuel the growth of the cloud — creating an ongoing virtuous cycle of innovation and growth. According to IDC, by 2024 nearly 50 percent of cloud computing software spend will be tied to digital transformation and will account for nearly half of all software sales. Worldwide spending on cloud computing between now and 2024 will grow 19 percent annually, from $179 billion in 2019 to $418 billion in 2024.
“The Salesforce ecosystem is made possible by the amazing work of our customers and partners around the world, and because of our collaboration we’re able to generate the business and job growth that we see today,” said Tyler Prince, EVP, Industries and Partners at Salesforce. “Whether it’s through industry-specific extensions or business-aligned apps, the Salesforce Customer 360 platform helps accelerate the growth of our partner ecosystem, and most importantly, the growth of our customers.”
Because organizations that spend on cloud computing subscriptions also spend on ancillary products and services, the Salesforce ecosystem in 2019 is more than four times larger than Salesforce itself and will grow to almost six times larger by 2024. IDC estimates that from 2019 through 2024, Salesforce will drive the creation of 6.6 million indirect jobs, which are created from spending in the general economy by those people filling the 4.2 million jobs previously mentioned.
“The tech skills gap will become a major roadblock for economic growth if we don’t empower everyone – regardless of class, race or gender – to skill up for the Fourth Industrial Revolution,” said Sarah Franklin, EVP and GM of Platform, Developers and Trailhead at Salesforce. “With Trailhead, our free online learning platform, people don’t need to carry six figures in debt to land a top job; instead, anyone with an Internet connection can now have an equal pathway to landing a job in the Salesforce Economy.”
Industry Economic Benefits of the Salesforce Economy
Specifically, the manufacturing industry will gain $211.7 billion in new revenue and 765,800 new jobs by 2024.
Salesforce’s multi-faceted ecosystem is the driving force behind the Salesforce Economy’s massive growth:
- The global ecosystem includes multiple stakeholders, all of which play an integral part in the Salesforce Economy. This includes the world’s top five consulting firms, all of whom have prominent Salesforce digital transformation practices; independent software vendors (ISVs) that build their businesses on the Salesforce Customer 360 Platform and bring Salesforce into new industries; more than 1,200 Community Groups, with different areas of focus and expertise; and more than 200 Salesforce MVPs, product experts and brand advocates.
- Launched in 2006, Salesforce AppExchange is the world’s largest enterprise cloud marketplace, and hosts more than 4,000 solutions including apps, templates, bots and components that have been downloaded more than 7 million times. Ninety-five percent of the Fortune 100, 81 percent of the Fortune 500, and 86 percent of Salesforce customers are using AppExchange apps.
- Trailhead is Salesforce’s free online learning platform that empowers anyone to skill up for the future, learn in-demand skills and land a top job in the Salesforce Economy. Since Trailhead launched in 2014, more than 1.7 million Trailblazers have earned over 17.5 million badges; a quarter of all learners on Trailhead have leveraged their newfound skills to jump-start their careers with new jobs. Indeed, the world’s #1 job site, included Salesforce Developer in its list of best jobs in the US for 2019, noting that the number of job postings for that position had increased 129 percent year-over-year.
Cray, an HPE company, held a panel discussion webinar on October 18 to discuss Exascale (10^18, get it?) supercomputing. This is definitely not in my area of expertise, but it is certainly interesting.
Following is information I gleaned from links they sent me. Basically, it addresses why supercomputing matters, and not only the computers but also the networking that supports them.
Today’s science, technology, and big data questions are bigger, more complex, and more urgent than ever. Answering them demands an entirely new approach to computing. Meet the next era of supercomputing. Code-named Shasta, this system is our most significant technology advancement in decades. With it, we’re introducing revolutionary capabilities for revolutionary questions. Shasta is the next era of supercomputing for your next era of science, discovery, and achievement.
WHY SUPERCOMPUTING IS CHANGING
The kinds of questions being asked today have created a sea change in supercomputing. Increasingly, high-performance computing systems need to handle massive converged modeling, simulation, AI, and analytics workloads.
With these needs driving science and technology, the next generation of supercomputing will be characterized by exascale performance, data-centric workloads and diversification of processor architectures.
Shasta is that entirely new design. We’ve created it from the ground up to address today’s diversifying needs.
Built to be data-centric, it runs diverse workloads all at the same time. Hardware and software innovations tackle system bottlenecks, manageability, and job completion issues that emerge or grow when core counts increase, compute node architectures proliferate, and workflows expand to incorporate AI at scale.
It eliminates the distinction between clusters and supercomputers with a single new system architecture, enabling a choice of computational infrastructure without tradeoffs. And it allows for mixing and matching multiple processor and accelerator architectures, with support for Slingshot, our new Cray-designed and -developed interconnect.
Slingshot is our new high-speed, purpose-built supercomputing interconnect. It’s our eighth generation of scalable HPC network. In earlier Cray designs, we pioneered the use of adaptive routing, pioneered the design of high-radix switch architectures, and invented a new low-diameter system topology, the dragonfly.
Slingshot breaks new ground again. It features Ethernet capability, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality-of-service capabilities. Support for both IP-routed and remote memory operations broadens the range of applications beyond traditional modeling and simulation.
Quality-of-service and novel congestion management features limit the impact to critical workloads from other applications, system services, I/O traffic, or co-tenant workloads. Reduction in the network diameter from five hops (in the current Cray XC generation) to three reduces cost, latency, and power while improving sustained bandwidth and reliability.
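To see why the dragonfly topology mentioned above yields such a low diameter, here is a small, purely illustrative Python sketch (not Cray code): it builds a toy dragonfly, with groups of fully connected switches and one global link per pair of groups, then measures the longest shortest path by breadth-first search. The group and switch counts are invented for the example.

```python
from collections import deque
from itertools import combinations

def build_dragonfly(num_groups=5, switches_per_group=4):
    """Adjacency sets for a toy dragonfly: switches within a group
    are fully connected (local links), and every pair of groups is
    joined by one global link, spread round-robin over switches."""
    adj = {}
    for g in range(num_groups):
        members = [(g, s) for s in range(switches_per_group)]
        for u, v in combinations(members, 2):  # local all-to-all
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    port = {g: 0 for g in range(num_groups)}
    for g1, g2 in combinations(range(num_groups), 2):
        u = (g1, port[g1] % switches_per_group); port[g1] += 1
        v = (g2, port[g2] % switches_per_group); port[g2] += 1
        adj[u].add(v); adj[v].add(u)
    return adj

def diameter(adj):
    """Longest shortest path, in switch hops, over all pairs (BFS)."""
    worst = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

print(diameter(build_dragonfly()))  # -> 3
```

Any switch can reach, in one local hop, the switch in its group holding the relevant global link, cross it in one hop, then take one more local hop: hence a worst case of three hops regardless of system size, which is what the BFS confirms.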
FLEXIBILITY AND TCO
As your workloads rapidly evolve, the ability to choose your architecture becomes critical. With Shasta, you can incorporate any silicon processing choice — or a heterogeneous mix — with a single management and application development infrastructure. Flex from single to multi-socket nodes, GPUs, FPGAs, and other processing options that may emerge, such as AI-specialized accelerators.
Designed for a decade or more of work, Shasta also eliminates the need for frequent, expensive upgrades, giving you exceptionally low total cost of ownership. With its software architecture you can deploy a workflow and management environment in a single system, regardless of packaging.
Shasta packaging comes in two options: a 19” air- or liquid-cooled, standard datacenter rack and a high-density, liquid-cooled rack designed to take 64 compute blades with multiple processors per blade.
Additionally, Shasta supports processors well over 500 watts, eliminating the need to do forklift upgrades of system infrastructure to accommodate higher-power processors.
I’ve followed Foxboro and Triconex for many years now in my coverage of the process automation business. A great company that, like too many others, suffered now and again from very poor management. The company has now settled in nicely at its home in Schneider Electric and appears to be healthy there.
Much credit must go to Gary Freburger. He provided a steadying hand as the leader before and through the transition, as well as guiding the integration into the new home. He is retiring at the end of the year. I’ve met a number of great leaders and a few stinkers in my 20 years on this side of the business. Gary’s one of the great ones. And his chosen successor (see more below) seems more than up to the task of building on his successes.
Marcotte Succeeds Freburger as Process Automation President
This week’s major announcement revealed that Nathalie Marcotte has been selected to succeed Freburger as president of Schneider Electric’s Process Automation business, effective Jan. 1, 2020.
“After a long, successful industry career, including more than 15 years serving Invensys and Schneider Electric in various senior leadership roles, Gary has decided to retire,” said Peter Herweck, executive vice president, Industrial Automation business, Schneider Electric. “We thank him for his many contributions and his strong legacy of success. We wish him well, and I congratulate Nathalie on her appointment. She brings more than 30 years of industry knowledge, expertise and experience, as well as a long record of success. I look forward to working with her as we build on the success Gary has delivered.”
Since joining the Schneider organization in 1996, Marcotte has held several positions of increasing responsibility, including vice president of Global Performance and Consulting Services; vice president, North America marketing; general manager for the Canadian business; and, prior to her current position, vice president, marketing, Global Systems business. As the company’s current senior vice president, Industrial Automation Services, she is responsible for Schneider Electric’s Services business and offer development, ranging from product support to advanced operations and digital services. She is also responsible for the company’s Global Cybersecurity Services & Solutions business, including the Product Security Office.
“As we move through this transition, it will be business as usual for Schneider Electric and our Process Automation customers,” Marcotte said. “Gary and I are working very closely together to ensure there will be no disruptions to our day-to-day operations. This ensures our customers have the same access to the exceptional people, products and technology they have come to trust and rely on to improve the real-time safety, reliability, efficiency and profitability of their operations.”
“I thank Gary for his many contributions to Schneider Electric and to our industry in general. Under his leadership, our customers, partners and employees have never been better situated to succeed, today and tomorrow,” Marcotte said. “This transition will have no impact on our technology strategy and portfolio roadmap. We remain committed to our continuously-current philosophy, which means never leaving our customers behind. Now, by leveraging the strength of the full Schneider Electric offer, we can take the next step toward enabling an easier, less costly digital transformation for our customers, while keeping them on the path to a safer, more secure and profitable future.”
Following the opening keynotes, I had the opportunity to chat privately with Freburger and Marcotte. The following summarizes a few key takeaways.
Digitalization and Digital Transformation.
These topics were prominently displayed in the ballroom before the keynotes. In fact, the welcome and opening presentation were given by Mike Martinez, Director of Digital Transformation Consulting. These are common themes in the industry — not only in process automation, but also at the IT conferences I cover. Each company has its own unique take on the terms, but it still boils down to data, data integrity, databases, and data security. All of which were discussed.
Key Points From the Presidents.
Integration across Schneider Electric. One priority has been working with other business units (and their technologies) across the Schneider Electric portfolio. This could be PLCs and drives, but power is a huge emphasis. Schneider Electric management very much wants its process automation acquisition to integrate well with its historic electric power business, which is seen as a strategic opportunity. One thought-provoking observation: is the process engineer/electrical engineer divide as serious as the IT/OT divide? No direct answer, but these domains have historically had little to no collaboration. One to watch.
Close working relationship with AVEVA. If you recall, Schneider Electric bundled its various software acquisitions, including the ones from Invensys (Wonderware, Avantis), and used them to buy into AVEVA, the engineering software company. Bringing automation and software together was a constant source of pain for Invensys; Schneider Electric dealt with it through a separate company. Along the way, cooperation seems to be better than ever. Marcotte explained to me that Foxboro combines its domain expertise with AVEVA’s broader software platforms to deliver customer value. See, for example, my previous post on the Plant Performance Advisors Suite.
Cybersecurity. Marcotte has been leading Schneider’s cybersecurity efforts. These are seen as a key part of Schneider Electric’s offer. See especially the establishment of the ISA Global Cybersecurity Alliance. They don’t talk as much about the Internet of Things here as at other conferences; when I probed more deeply about IT, cybersecurity was again brought up as the key IT/OT collaboration driver.
It’s been a struggle, but the Schneider Electric process automation business (Foxboro and Triconex) seems as strong as ever. And the people here—both internal and customers—are optimistic and energetic. That’s good to see.
This is still more follow-up from Emerson Global Users Exchange, relative to sessions on projects and pilot purgatory. I thought I had already written this, but just discovered it languishing in my drafts folder. While in Nashville, I ran into Jonas Berge, senior director, applied technology for Plantweb at Emerson Automation. He has been a source for technology updates for years. We followed up a brief conversation with a flurry of emails in which he updated me on some presentations.
One important topic centered on IoT projects, though it applies to other types of projects as well. He told me the secret sauce is to start small. “A World Economic Forum white paper on the fourth industrial revolution in collaboration with McKinsey suggests that to avoid getting stuck in prolonged ‘pilot purgatory’ plants shall start small with multiple projects – just like we spoke about at EGUE and just like Denka and Chevron Oronite and others have done,” he told me.
“I personally believe the problem is when plants get advice to take a ‘big bang’ approach starting by spending years and millions on an additional ‘single software platform’ or data lake and hiring a data science team even before the first use case is tackled,” said Berge. “My blog post explains this approach to avoiding pilot purgatory in greater detail.”
I recommend visiting Berge’s blog for more detail, but I’ll provide some teaser ideas here.
First, he recommends:
- Think Big
- Start Small
- Scale Fast
Plants must scale digital transformation across the entire site to fully enjoy the benefits:
- Safety benefits: fewer incidents, faster incident response time, and reduced instances of non-compliance
- Reliability benefits: greater availability, reduced maintenance cost, extended equipment life, greater integrity (fewer instances of loss of containment), shorter turnarounds, and longer intervals between turnarounds
- Energy benefits: lower energy consumption and cost, and reduced emissions and carbon footprint
- Production benefits: reduced off-spec product (higher quality/yield), greater throughput, greater flexibility (feedstock use and products/grades), reduced operations cost, and shorter lead times
The organization can only absorb so much change at any one time. If too many changes are introduced in one go, the digitalization will stall:
- Too many technologies at once
- Too many data aggregation layers
- Too many custom applications
- Too many new roles
- Too many vendors
Multiple Phased Projects
McKinsey research shows that plants successfully scaling digital transformation instead run multiple small digitalization projects across the functional areas. This matches what I have personally seen in projects I have worked on.
From what I can tell, it is the plants that attempt a big-bang approach with many digital technologies at once that struggle to scale. There are forces that encourage companies to attempt sweeping changes to go digital, which can lead to counterproductive overreaching.
The Boston Consulting Group (BCG) suggests a disciplined phased approach rather than attempting to boil the ocean. I have seen plants focus on a technology that can digitally transform and help multiple functional areas with common infrastructure. A good example is wireless sensor networks. Deploying wireless sensor networks in turn enables many small projects that help many departments digitally transform the way they work. The infrastructure for one technology can be deployed relatively quickly after which many small projects are executed in phases.
Small projects are low-risk. A small trial of a solution in one plant unit finishes fast. After a quick success, the team scales it to the full plant area, then to the entire plant, and then moves on to the next pilot project. This way plants move from proof of concept to full-scale, plant-wide implementation at speed. For large organizations with multiple plants, innovations often emerge at an individual plant, then get replicated at other sites and rolled out nationwide and globally.
Use Existing Platform
I have also seen a big-bang approach where a plant pours a lot of money and resources into an additional “single software platform” layer for data aggregation before the first use case even gets started. This new data aggregation layer is meant to sit above the ERP, with the intention of collecting data from the ERP and plant historian before making it available to analytics through a proprietary API that requires custom programming.
Instead, successful plants start small projects using the existing data aggregation platform: the plant historian. The historian can be scaled with additional tags as needed. This way a project can be implemented within two weeks, with the pilot running an additional three months, at low risk.
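To illustrate how small such a pilot can be, the sketch below runs a simple outlier check against a hypothetical historian CSV export. The tag name, readings, and 1.5-sigma threshold (kept loose because the sample is tiny) are all invented for the example; the point is that a first use case can be a few dozen lines of analysis against data the plant already collects, with no new platform layer.

```python
import csv
import io
import statistics

def tag_outliers(csv_text, tag, z=1.5):
    """Return timestamps where a historian tag deviates more than
    `z` population standard deviations from its mean over the export.
    A deliberately simple pilot-sized analysis, not a product."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[tag]) for r in rows]
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [r["timestamp"] for r, v in zip(rows, values)
            if sd and abs(v - mean) > z * sd]

# Hypothetical export: a pump bearing-temperature tag with one
# anomalous reading at 03:00.
export = """timestamp,PUMP01.BEARING_TEMP
2019-11-01T00:00,71.2
2019-11-01T01:00,71.5
2019-11-01T02:00,70.9
2019-11-01T03:00,95.4
2019-11-01T04:00,71.1
"""
print(tag_outliers(export, "PUMP01.BEARING_TEMP"))
```

A trial like this in one plant unit, against an existing historian export, is exactly the kind of low-risk starting point Berge describes; scaling it means adding tags and units, not re-architecting.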
I would personally add that you must also keep the bigger vision in mind. A plant cannot run multiple small projects in isolation; that results in siloed solutions. Plants successful with digital transformation establish, early on, a vision of what the end goal looks like. Based on this they can select the technologies and architecture to build the infrastructure that supports that end goal.
NAMUR Open Architecture (NOA)
The system architecture for the digital operational infrastructure (DOI) is important. The wrong architecture leads to delays and an inability to scale. NAMUR (User Association of Automation Technology in Process Industries) has defined the NAMUR Open Architecture (NOA) to enable Industry 4.0. I have found that plants that have deployed a DOI modeled on the same principles as NOA are able to pilot and scale very fast.
Flying Start
The I&C department in plants can accelerate digital transformation to achieve operational excellence and top-quartile performance by remembering Think Big, Start Small, Scale Fast. These translate into a few simple design principles:
- Phased approach
- Architecture modeled on the NAMUR Open Architecture
- Ready-made apps
- Easy-to-use software
- Digital ecosystem
If I offered you an opportunity to spend $300 and make $50,000 right away, with more to come and no additional expense, would you take it? What about downloading a cybersecurity hack for that much off the Dark Web and using it to steal a $50,000 car?
Such a possibility exists, Etay Maor, Chief Security Officer of IntSights, told me yesterday. His firm, a threat intelligence company focused on enabling enterprises to Defend Forward, released a new report, Under the Hood: Cybercriminals Exploit Automotive Industry’s Software Features. The report identifies the inherent cybersecurity risks and vulnerabilities manufacturers face as the industry matures through a radical transformation toward connectivity.
Car manufacturers offer more software features to consumers than ever before, and increasingly popular autonomous vehicles that require integrated software introduce security vulnerabilities. Widespread cloud connectivity and wireless technologies enhance vehicle functionality, safety, and reliability but expose cars to hacking exploits. In addition, the pressure to deliver products as fast as possible puts a big strain on the security capabilities of cars, manufacturing facilities, and automotive data.
The two main things that affect hackers’ motivation, regardless of their skills and knowledge, are the cost-effectiveness of the attack and the value of the information.
Vehicles usually present more complicated attack surfaces than other targets, such as banks or retail shops. That said, the automotive industry still has numerous attack vectors, just like any other industry: phishing, credential leaks, leaked databases, open ports and services, insider threats, brand security issues, and more.
Dark Web Forums
In its research, IntSights discovered online shops selling car hacking tools that appear on the clear web and are easy to find. These shops sell services that disconnect automobile immobilizers, as well as code grabbers, and forums give bad actors complete tutorials on how to steal vehicles.
“The automotive manufacturing industry is fraught with issues, stemming from legacy systems that can’t be patched to the proliferation of vehicle connectivity and software as consumers demand more integration with personal devices and remote access,” said Maor. “A lack of adequate security controls and knowledge of threat vectors enables attackers to take advantage of easily acquired tools on the dark web to reap financial gain. Automakers need to have a constant pulse on dark web chatter, points of known exposure, and data for sale to mitigate risk.”
Top Vehicle Attack Vectors:
- Remote Keyless Systems
- Tire Pressure Monitoring Systems
- Software and Infotainment Applications
- GPS Spoofing
- Cellular Attacks
Other attack vectors explored include:
- Attacking the CAN bus
- Remote Attack Vectors
- Car Applications
- Physical Attack Vectors
IntSights has “the industry’s only all-in-one external threat protection platform designed to neutralize cyberattacks outside the wire.” Its cyber reconnaissance capabilities enable continuous monitoring of an enterprise’s external digital profile across the clear, deep, and dark web to identify emerging threats and orchestrate proactive response.