Report Identifies 4 Changes CEOs Must Implement To Maximize Digitization

Digitization is on everyone’s lips these days. If you have not taken steps to implement and improve digital data flow, you are probably already behind. I regularly receive information from PwC, and a new report examines how digitization is reshaping the manufacturing industry. The report looks at eight companies and showcases how they improved their efficiency, productivity, and customer experience by making the right capabilities central to their operating models and matching them with strong skill sets in analytics and IT.

Pressure from consumers, new regulations, and advances in information technology are all pushing manufacturing organizations to digitize so they can avoid falling behind a new breed of market-leading ‘digital champions.’ The report identifies four significant changes CEOs must implement to maximize the benefits of digitization.

1. Drive organizational changes that address new digital capabilities and digitalized processes – e.g., product and process design and engineering, end-to-end procurement, supply chain/distribution and after-sales – right from the top, because these are so new and different

2. Hire more software and Internet of Things (IoT) engineers and data scientists, while training the wider workforce in digital skills

3. Learn from software businesses, which have the ability to develop use cases rapidly and turn them into software products

4. Extend digitalization beyond IT to include significant operational technologies (OT) such as track and trace solutions and digital twinning

From the report, “Already, digitally ‘smart’ manufacturers are gaining a competitive advantage by exploiting emerging technologies and trends such as digital twinning, predictive maintenance, track and trace, and modular design. These companies have dramatically improved their efficiency, productivity, and customer experience by ensuring these capabilities are central to their operating models and by matching them with strong skill sets in analytics and IT.”

During 2018 and early 2019, PwC conducted in-depth digitisation case studies of eight industrial and manufacturing organisations in Germany, the US, India, Japan and the Middle East. Drawing on discussions and interviews with CEOs and division heads, we explored the key triggers for change these companies faced, assessed how digital solutions are being implemented, and examined how digitisation is affecting key aspects of their operating models. We also compared our eight organisations with other publicly cited digitisation case studies, and leveraged PwC’s 2018 study Digital Champions: How industry leaders build integrated operations ecosystems to deliver end-to-end customer solutions and other ongoing PwC research.

This paper is the result of ongoing collaboration between PwC and the Global Manufacturing and Industrialisation Summit (GMIS). GMIS provides a forum for industry leaders to interact with governments, technologists and academia in order to navigate the challenges and opportunities brought about by the digital technologies of the Fourth Industrial Revolution. PwC has been a knowledge partner with GMIS since 2016.

The eight case studies in this report make clear how far the role of digital technology extends beyond traditional IT systems: it also encompasses OT and data and analytics technologies. Full integration and linkage among these different technologies, and the ecosystems they are part of, are essential to a successful digital transformation. And success is impossible without a digitally smart workforce that is familiar with Industry 4.0 skills and tools.

These challenges are the subject of the second part of the report Digital Champions: How industry leaders build integrated operations ecosystems to deliver end-to-end customer solutions, which will be published in January 2020.

The report will elaborate further on the emerging theory of digital manufacturing and operations, in which successful, digitised industrial organisations will increasingly have to act like software companies in response to four key factors:

  • The connected customer seeks a batch size of one, necessitating greater customisation of products and delivery time, improved customer experience, use of online channels and outcome-based business models.
  • Digital operations require both engineering and software abilities to enable extensive data analysis and IoT-based integration, as well as digitisation of products and services.
  • Organisations need augmented automation, in which machines become part of the organisation via closely connected machine–worker tasks and integrated IT and OT.
  • Future employees will be ‘system-savvy craftspeople’ with the skills to use sensors in order to collect and analyse accurate data, as well as design and manage connected processes.

About the authors

Anil Khurana is PwC’s global industrial, manufacturing and automotive industry leader. He is a principal with PwC US.

Reinhard Geissbauer is a partner with PwC Germany based in Munich. He is the global lead for PwC’s Digital Operations Impact Center.

Steve Pillsbury is a principal with PwC US and the US lead for PwC’s Digital Operations Impact Center.

Podcasts and Education Opportunities

I’ve been busy behind the microphone lately. Here is news about my latest Gary on Manufacturing podcast (I’m taking suggestions for a new name, since I cover a much broader area than manufacturing), plus a conversation I had with the famous Tamara McCleary for an SAP-sponsored podcast series called TechUnknown. Finally, I will refer you to an education resource website.

Gary on Manufacturing 191

Podcast 191: If we are ever going to bring IT and OT together, and indeed break through all of a company’s silos, it will be through adopting coaching as a key component of the manager’s tool kit. I reference Trillion Dollar Coach by Schmidt, Rosenberg, and Eagle, a book about the legendary Bill Campbell and how his coaching made the difference for executives at Google, Apple, and many more Silicon Valley companies. I also take a look at another Bill, Bill Gates, whose 10 top tech trends and 10 top challenges to solve appeared in this spring’s MIT Technology Review.

TechUnknown Podcast

I had an entertaining and informative conversation with Tamara McCleary: how do you manage the human element of automation and AI adoption? I share my thoughts on real-life applications for the IIoT with @TamaraMcCleary on the @SAP #TechUnknown podcast.

Earn a Master’s Degree

Industries of all sorts need data scientists. I heard from a publicist for a website that consolidates and explains degree programs in that area. If you or someone you know wants career advancement or a change, check out this page.

Canvass Analytics, OSIsoft to Deliver Predictive Insights

Whether you call it the Industrial Internet of Things, Industry 4.0, or Smart Manufacturing, no benefits are garnered in the end without a superb analytics engine. Recently I talked with Humera Malik, CEO of Canvass Analytics, about a new analytics company and product that brings Artificial Intelligence (AI) and Machine Learning (ML) to the field.

Early in my management career, we called accounting the “ancient historians” because reports only came out 10 days after month end. That is too late to be what we now call “actionable information.”

It turns out that a similar problem has existed in the predictive analytics field. OSIsoft and others have provided tools to capture huge amounts of industrial and manufacturing data, but to get anything out of it you needed to establish a project, bring in a team of data scientists, and try to glean some trends or fit some models.

What was needed was a powerful engine that can use this data closer to real time, fit it to a model (selecting one from among several), and give operators, maintenance technicians, engineers, and others information in a usable time frame without bringing in a team of data scientists. The data science expertise, it turns out, needs to reside in the software, and the entire process must be transparent to the user.
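
To make that concrete, here is a minimal sketch, in Python with scikit-learn, of what “selecting one from among several” models can look like when it is automated inside the software rather than handled by a visiting team of data scientists. This is a generic illustration, not Canvass’ actual implementation; the function name, candidate models, and placeholder data are all invented.

```python
# Hypothetical sketch: automated model selection over process data, so
# operators get predictions without a data scientist in the loop.
# (Illustrative only; not Canvass Analytics' actual implementation.)
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def select_best_model(X, y):
    """Fit several candidate models; keep the best by cross-validated error."""
    candidates = {
        "ridge": Ridge(),
        "random_forest": RandomForestRegressor(n_estimators=100),
        "gradient_boosting": GradientBoostingRegressor(),
    }
    scores = {
        name: cross_val_score(model, X, y, cv=5,
                              scoring="neg_mean_absolute_error").mean()
        for name, model in candidates.items()
    }
    best = max(scores, key=scores.get)
    return best, candidates[best].fit(X, y)   # retrain the winner on all data

# X: recent sensor readings (rows = time windows, columns = tags);
# y: the quantity to predict, e.g., yield or time-to-failure.
X, y = np.random.rand(500, 8), np.random.rand(500)   # placeholder data
name, model = select_best_model(X, y)
print(f"selected model: {name}")
```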

Enter Canvass Analytics, a provider of AI-enabled predictive analytics for the Industrial IoT, which just announced a partnership with OSIsoft, a global leader in operational intelligence, that will enable industrial companies to accelerate the return on investment of their IoT initiatives.

Malik commented, “Predictive and automated analytics gives operations teams the insights to answer questions such as, how can I increase yield, how can I reduce downtime and how can I reduce my maintenance costs? Canvass’ AI-enabled analytics platform accelerates the delivery of predictive insights by automating data analysis and leveraging machine learning technologies to adapt to data changes in real-time. For operations teams, this means they have the latest intelligence in order to make critical operational decisions.”

The combination of OSIsoft’s methodology to collect, store and stream data from any Industrial IoT source with Canvass’ AI-enabled automated analytics platform brings a new approach to creating predictive models that continually retrain themselves. With the resulting insight, industrial companies have the potential to reduce plant maintenance by up to 50 percent and optimize plant operations by 30 percent.
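
Neither company has published the mechanics, but “models that continually retrain themselves” typically amounts to refitting on a sliding window of recent historian data. Below is a minimal Python sketch of that pattern under stated assumptions: fetch_window() is an invented placeholder standing in for a query against the data store, not an OSIsoft API.

```python
# Hypothetical sketch of continual retraining on a sliding window of
# historian data. fetch_window() is an invented placeholder, not an
# OSIsoft API; a real system would query the historian here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

WINDOW_HOURS = 24 * 7   # learn from the most recent week of data

def fetch_window(hours):
    """Placeholder: return (sensor features, target) for the last `hours`."""
    rows = hours * 60    # pretend one reading per minute
    return np.random.rand(rows, 8), np.random.rand(rows)

def retrain():
    """Refit on fresh data; schedule this to run on a timer, say hourly."""
    X, y = fetch_window(WINDOW_HOURS)
    return GradientBoostingRegressor().fit(X, y)

model = retrain()   # predictions are served from `model` until the next
                    # scheduled refresh replaces it with a newly fit one
```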

“We are enthusiastic about the value that we see companies like Canvass Analytics extracting from the vast amounts of IIoT and other streaming data that we collect in our role as the single source of the truth,” said J. Patrick Kennedy, founder and CEO of OSIsoft.

Data Drives A New Manufacturing Hero: The Reliability Engineer

I chatted this week with two executives from GE Digital. Jeremiah Stone is the General Manager – Industrial Data Intelligence Software at GE Digital, and Jennifer Bennett is the General Manager – Manufacturing Software Solutions (Brilliant Factory) at GE Digital.

The conversation opened with the idea that it’s about data: companies must become data-driven. But it’s also beyond data. Not all data sets are equal, and it’s not just about finding anomalies; it’s about finding the data and the anomalies that matter most to business success.

Then we went in a direction I’ve never gone with GE before: remote monitoring and diagnostics (RM&D) targeted at reliability engineers. We discussed the often overlooked skill set of reliability engineers, and how their knowledge offers a distinct competitive advantage to companies battling it out in the industrial market.

As the advantages of unlocking big data insights continue to benefit enterprises of all sizes, data scientists, the gatekeepers and analysts of this data, have become an increasingly popular career choice. In fact, Harvard Business Review proclaimed data scientist to be “the sexiest job of the 21st century.” But with more advanced Remote Monitoring and Diagnostics (RM&D) technologies being used to find and address problems before they happen, reducing the costs of planned and unplanned downtime, the emerging industrial superstars are reliability engineers.

This list summarizes our conversation:

  • RM&D in the cloud uncovers gaps in reliability-centered maintenance and operations. This new technology shines a light on an old problem for customers: frustration that they are not able to execute consistently on maintenance and operations.
  • Successful asset monitoring is more than just software. Organizations have a false sense of security that if they install monitoring software, they instantly have a handle on their operations. But the real secret in handling the complexity that monitoring creates with RM&D is the reliability engineers that can run and interpret the technology.
  • Identifying anomalies in RM&D is not the problem; identifying anomalies that matter to operations is. RM&D creates numerous alerts, so it is hard for an organization to know which ones to focus on. Reliability engineers have the expertise to sift through the notifications and identify false positives, telling their organizations which alerts to ignore and which to pay attention to (a toy sketch of this triage follows this list).
  • Cloud-based business strategy is becoming less about technology and more about knowledge sharing. The benefits of utilizing cloud technology are increasingly becoming centered on the fact that organizations can internally share and learn from a pooled knowledge base, no matter the location. The cloud offers a way for reliability engineers to capture and preserve knowledge that is crucial to the business’s ongoing success.
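
To make that triage concrete, here is a toy Python sketch: a stock anomaly detector generates candidate alerts, and the reliability engineer’s knowledge enters as suppression rules and asset-criticality weights. Everything here, the tag names, the weights, and the choice of an isolation forest, is invented for illustration and is not GE’s implementation.

```python
# Hypothetical sketch: detect anomalies, then rank them so operators see
# the alerts that matter most, with known false positives suppressed.
# The suppression set stands in for reliability engineers' domain knowledge.
import numpy as np
from sklearn.ensemble import IsolationForest

def prioritized_alerts(readings, criticality, known_false_positives):
    """readings: {asset_id: 2-D array of recent sensor vectors}."""
    alerts = []
    for asset_id, X in readings.items():
        if asset_id in known_false_positives:
            continue   # an engineer has flagged this signature as benign
        scores = IsolationForest(contamination=0.01).fit(X).score_samples(X)
        severity = -scores.min()           # lower score = more anomalous
        # weight severity by how much the asset matters to operations
        alerts.append((severity * criticality.get(asset_id, 1.0), asset_id))
    return sorted(alerts, reverse=True)    # most important alerts first

readings = {"pump_07": np.random.rand(200, 4), "fan_12": np.random.rand(200, 4)}
criticality = {"pump_07": 5.0, "fan_12": 1.0}   # a pump failure halts the line
print(prioritized_alerts(readings, criticality, known_false_positives={"fan_12"}))
```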

Stone said that this idea ties into GE’s strategy itself. As disciples of Deming, the company is data-driven, and much of that means remote monitoring and diagnostics for GE’s fleet. Incorporating technologies such as those from the SmartSignal acquisition, company engineers and managers can now execute on goals, avoid failure, achieve greater reliability, and be more proactive. “Now we are excited to bring the tools we use to the rest of the industrial world.”

Today’s RM&D enables excellence in manufacturing from a larger, systemic view in order to deliver business advances, added Stone. Engineers and managers can now look at the entire scope of a problem, not just one process or loop. “We help companies on the journey, beginning with an assessment of where they are and what they want to achieve. We offer professional services to help them figure out what outcomes they want to achieve. It is not just getting connected to get data, but doing it in a way that makes sense.”

Bennett pointed to the variety and complexity of data. “The problem has been that all data has been in silos, but the value is upstream and downstream. Some challenges in manufacturing are quite complex; data flows from many contexts and must be tracked back to cause. The platform we’re building on Predix brings that data together so we can make insightful decisions. In RM&D we’re looking at history records, maintenance records, and the like. In the past we relied on people’s knowledge and experience for that data. Now we can combine and analyze it.”

We moved on to the workforce and the challenges of recruiting and retaining younger people. Stone noted that young people today are looking for autonomy, mastery, and purpose. “What was magic 20 years ago isn’t now. We find a sense of curiosity in new people and a desire for a job with meaningful impact.”

One improvement in the job situation is the ability to spend more time solving problems and less time gathering data. According to studies, a typical data and analytics project spends 80% of its time just collecting and collating data. Stone noted, “Our focus is on dramatically minimizing the amount of time it takes to get the data so people can start moving toward problem solving and analytics. Traditionally, reliability engineers have been frustrated by the availability of data. We are talking about taking it from calendar time to wristwatch time. Then we add collaborative capability. Both newer and more senior engineers are delighted with this new possibility to spend more time problem solving.”

Data Science: The Next Requirement To Realize the Internet of Things

There are so many ways we can go to try to understand, and then make use of, the Industrial Internet of Things. As my thinking coalesces, I’ve come to the conclusion that the IIoT is a tool, one to be used in the service of an overall manufacturing/production strategy.

In order to properly use this tool of connected devices serving real-time data, we are going to need advances in data science.

Two database types seem to dominate in manufacturing, at least as expounded by suppliers. One is the relational (SQL) database. The other is the data historian.
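
A toy example helps separate the two shapes. A historian is a purpose-built time-series store, not literally a SQL table, but the contrast between one-row-per-record relational storage and an append-only stream of (tag, timestamp, value) samples comes through even in SQLite; the table, tag, and machine names below are invented.

```python
# Toy illustration of the two storage styles, using Python's built-in
# SQLite. Real historians are specialized time-series stores, but the
# tall (tag, timestamp, value) shape captures the idea.
import sqlite3

db = sqlite3.connect(":memory:")

# Relational style: one row per entity, one column per attribute.
db.execute("CREATE TABLE work_orders (id INTEGER, machine TEXT, status TEXT)")
db.execute("INSERT INTO work_orders VALUES (1, 'press_03', 'open')")

# Historian style: an append-only stream of timestamped tag values.
db.execute("CREATE TABLE history (tag TEXT, ts TEXT, value REAL)")
db.execute("INSERT INTO history VALUES ('press_03.temp', '2015-08-01T12:00:00', 341.7)")
db.execute("INSERT INTO history VALUES ('press_03.temp', '2015-08-01T12:00:01', 342.1)")

# Typical queries differ too: lookups by key vs. ranges over time.
print(db.execute("SELECT status FROM work_orders WHERE machine='press_03'").fetchall())
print(db.execute("SELECT ts, value FROM history WHERE tag='press_03.temp' ORDER BY ts").fetchall())
```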

I remember talking with some of the tech guys at Opto 22 about exploring semi-structured and open source variants such as NoSQL. At the time, they thought SQL would be all they needed, and maybe so. But that was a couple of years ago.

All that discussion introduces an important podcast I just listened to. I subscribe to the O’Reilly Radar podcasts on iTunes; they’ve been cranking out about one per week, usually to promote an O’Reilly book or conference.

Michael Stonebraker was awarded the 2014 ACM Turing Award for fundamental contributions to the concepts and practices underlying modern database systems. In this podcast, he discusses the future of data science and the importance—and difficulty—of data curation.

[Notes from the O’Reilly Website]

One size does not fit all

Stonebraker notes that since about 2000, everyone has realized they need a database system, across markets and across industries. “Now, it’s everybody who’s got a big data problem,” he says. “The business data processing solution simply doesn’t fit all of these other marketplaces.” Stonebraker talks about the future of data science — and data scientists — and the tools and skill sets that are going to be required:

It’s all going to move to data science as soon as enough data scientists get trained by our universities to do this stuff. It’s fairly clear to me that you’re probably not going to retread a business analyst to be a data scientist because you’ve got to know statistics, you’ve got to know machine learning. You’ve got to know what regression means, what Naïve Bayes means, what k-Nearest Neighbors means. It’s all statistics.

All of that stuff turns out to be defined on arrays. It’s not defined on tables. The tools of future data scientists are going to be array-based tools. Those may live on top of relational database systems. They may live on top of an array database system, or perhaps something else. It’s completely open.
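
For readers who want to see what “array-based” means in practice, here is a minimal example of the techniques Stonebraker name-checks (regression, Naive Bayes, and k-nearest neighbors), each operating on NumPy arrays via scikit-learn. The data is synthetic, invented purely for illustration.

```python
# Regression, Naive Bayes, and k-nearest neighbors, all defined on arrays:
# X is a 2-D feature array; the labels/targets are 1-D arrays.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 samples, 3 features
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)  # a binary label to classify
y_value = X @ np.array([1.0, -2.0, 0.5])       # a linear target to regress

print(LinearRegression().fit(X, y_value).coef_)         # regression
print(GaussianNB().fit(X, y_class).predict(X[:5]))      # Naive Bayes
print(KNeighborsClassifier(n_neighbors=5)
      .fit(X, y_class).predict(X[:5]))                  # k-nearest neighbors
```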

Getting meaning out of unstructured data

Gathering, processing, and analyzing unstructured data presents unique challenges. Stonebraker says the problem really is with semi-structured data, and that “relational database systems are doing just fine with that”:

When you say unstructured data, you mean one of two things. You either mean text or you mean semi-structured data. Mostly, the NoSQL guys are talking about semi-structured data. When you say unstructured data, I think text. … Everybody who’s trying to get meaning out of text has an application-specific parser because they’re not interested in general natural language processing. They’re interested in specific kinds of things. They’re all turning that into semi-structured data. The real problem is on semi-structured data. Text is converted to semi-structured data. … I think relational database systems are doing just fine on that. … Most any database system is happy to ingest that stuff. I don’t see that being a hard problem.
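
A short sketch shows what such an application-specific parser can look like: it extracts only the fields the application cares about from free text and emits semi-structured records, ignoring everything else. The log format, field names, and sample line below are invented for illustration.

```python
# Application-specific parsing: free text in, semi-structured records out.
# Lines that don't match the pattern of interest are simply skipped.
import json
import re

LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<machine>\S+) temp=(?P<temp>[\d.]+) status=(?P<status>\w+)"
)

def parse(text):
    """Turn matching lines into dicts; ignore everything else."""
    return [m.groupdict() for m in LINE.finditer(text)]

sample = "2015-08-01 12:00:00 press_03 temp=341.7 status=ok\nfree-form noise\n"
print(json.dumps(parse(sample), indent=2))
```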

Data curation at scale

Data curation, on the other hand, is “the 800-pound gorilla in the corner,” says Stonebraker. “You can solve your volume problem with money. You can solve your velocity problem with money. Curation is just plain hard.” The traditional solution of extract, transform, and load (ETL) works for 10, 20, or 30 data sources, he says, but it doesn’t work for 500. To curate data at scale, you need automation and a human domain expert. Stonebraker explains:

If you want to do it at scale — 100s, to 1000s, to 10,000s — you cannot do it by manually sending a programmer out to look. You’ve got to pick the low-hanging fruit automatically, otherwise you’ll never get there; it’s just too expensive. Any product that wants to do it at scale has got to apply machine learning and statistics to make the easy decisions automatically.

The second thing it has to do is, go back to ETL. You send a programmer out to understand the data source. In the case of Novartis, some of the data they have is genomic data. Your programmer sees an ICU 50 and an ICE 50, those are genetic terms. He has no clue whether they’re the same thing or different things. You’re asking him to clean data where he has no clue what the data means. The cleaning has to be done by what we could call the business owner, somebody who understands the data, and not by an IT guy. … You need domain knowledge to do the cleaning — pick the low-hanging fruit automatically and when you can’t do that, ask a domain expert, who invariably is not a programmer. Ask a human domain expert. Those are the two things you’ve got to be able to do to get stuff done at scale.
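
The pattern Stonebraker describes, making the easy decisions automatically and routing the rest to a domain expert, can be sketched in a few lines of Python. Here plain string similarity stands in for the machine learning and statistics a real curation product would apply; the thresholds and example term pairs are invented.

```python
# Two-tier curation: auto-decide the low-hanging fruit, and queue
# ambiguous matches for a human domain expert. String similarity is a
# stand-in for a learned matcher; the thresholds are arbitrary.
from difflib import SequenceMatcher

AUTO_ACCEPT = 0.90   # similar enough to merge automatically
AUTO_REJECT = 0.40   # dissimilar enough to keep separate

def curate(term_a, term_b):
    score = SequenceMatcher(None, term_a.lower(), term_b.lower()).ratio()
    if score >= AUTO_ACCEPT:
        return "same"               # easy decision, made automatically
    if score <= AUTO_REJECT:
        return "different"          # easy decision, made automatically
    return "ask a domain expert"    # only someone who knows the data can say

for pair in [("colour", "color"), ("ICU 50", "ICE 50"), ("pressure", "downtime")]:
    print(pair, "->", curate(*pair))
```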

Stonebraker discusses the problem of curating data at scale in more detail in his contributed chapter in a new free ebook, Getting Data Right.
