Return of the Large Trade Show

IMTS / Hannover Messe invaded Chicago this week. I drove down for a couple of days. It was huge. Booths populated all four halls. I did not see everything. Or even half.

Hannover Messe has co-located with IMTS in Chicago for the past three or four events. As in the past, the automation / Hannover Messe part occupied a few aisles in the East hall.

I’ll have more news items in the next post.

Best of what I saw:

Nokia. What?! I was approached for an appointment. I said yes, figuring on a 5G private network discussion. I was partly right.

Let me back up for context.

  • Enterprises crave data to feed their information systems
  • Data from industrial / manufacturing operations were bottled in isolated, siloed systems
  • Networking became robust
  • Interoperable protocols grew
  • The Internet of Things (IoT) became a thing
  • Suddenly data could go where and when needed

Solutions.

  • Automation vendors claimed connectivity to the enterprise, but that fell short
  • IT suppliers, supporters of the enterprise, tried to enter the market with gateways, networking, partnerships, and ecosystems to get the data
  • They couldn’t find the formula to sell to manufacturing (known as operations technology, or OT)
  • We have gateways, databases, networking, but still no enterprise solution

Nokia.

  • Builds off networking technology which has progressed to 5G Private Networks
  • Has added edge compute devices
  • Has partnered with PTC (Kepware / ThingWorx) for software connectivity
  • Attacks this open market from a new perspective: both the enterprise IT side and the operations OT side (a rough sketch of the pattern follows this list)
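Nokia has not published the plumbing behind this, so treat the following as a minimal sketch of the general pattern, not their implementation: an edge node polls an OT source and forwards readings over the private network to an enterprise broker. The endpoints, tags, and topic below are invented, with an OPC UA server standing in for the kind of device connectivity Kepware provides and MQTT standing in for the enterprise uplink.

```python
# Illustrative sketch only -- all endpoints, tags, and topics are hypothetical.
# Pattern: an edge node reads OT values (OPC UA) and republishes them to an
# enterprise broker (MQTT) across the private network.
import json
import time

from opcua import Client            # pip install opcua
import paho.mqtt.client as mqtt     # pip install "paho-mqtt<2" (1.x API below)

OPC_ENDPOINT = "opc.tcp://edge-gateway.local:4840"   # hypothetical OT source
MQTT_BROKER = "enterprise-broker.local"              # hypothetical IT sink
TAGS = ["ns=2;s=Line1.Temperature", "ns=2;s=Line1.Vibration"]

opc = Client(OPC_ENDPOINT)
opc.connect()
bus = mqtt.Client()                 # paho-mqtt 2.x needs a CallbackAPIVersion arg
bus.connect(MQTT_BROKER, 1883)

try:
    while True:
        for tag in TAGS:
            value = opc.get_node(tag).get_value()        # current OT reading
            msg = json.dumps({"tag": tag, "value": value, "ts": time.time()})
            bus.publish("plant/line1/telemetry", msg)    # hand off to IT side
        time.sleep(5)                                    # simple poll interval
finally:
    opc.disconnect()
    bus.disconnect()
```

The code itself is commodity; the interesting part of the pitch is who operates the network and edge hardware underneath it.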

I am not predicting success. I never do. What I love about trade shows is finding this nugget of original thinking cloaked in the mundane. They have the foundation. Can they sell?

Check out this page on the Nokia site.

Identify Data Sets With Orphaned Data Enabling Appropriate Actions

Steve Leeper, VP Systems Engineering and Marketing, and Carl D’Halluin, Chief Technology Officer of Datadobi, met with me recently to discuss data, unstructured data management, data migration, the company, and those poor little orphaned files and datasets left in storage long after their useful life has passed.

I’ve been writing about the company’s products for about two years. Datadobi is about to celebrate its 13th birthday. The company is organically funded, and management expects it to be around for some time to come.

The occasion of this conversation was release 6.2, an upgrade to the recently announced StorageMAP product, a multi-vendor, multi-cloud data management platform. The release introduces capabilities to discover and remediate orphaned data.

Orphaned data is data owned by employees who are no longer active at a company but whose accounts are still enabled on the organization’s network or systems. These datasets, which can amount to a significant part of a company’s total data stored, represent risk and cost to every organization. Any enterprise with a large employee base, normal to high attrition rates, or merger and acquisition (M&A) activity is vulnerable to orphaned data. Orphaned data is mostly of no value to the organization and creates liability due to its unknown content. Eliminating it enables IT leaders to reduce cost, lower carbon footprint, and lower risk while maximizing the value of the relevant unstructured data used by active users.

With the new capabilities, StorageMAP can identify data sets with high amounts of orphaned data, allowing IT leaders to group and organize the data independent of the business organization. The orphaned datasets can then be migrated, archived to a lower-cost platform, or deleted by StorageMAP. The platform provides an easy-to-read, granular report on the top users with orphaned data, and for additional detail, a search function allows targeted searches across part or all of the data estate.
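StorageMAP’s internals are proprietary, so take this as a minimal sketch of the discovery idea only: walk the storage tree, attribute bytes to file owners, and flag owners missing from the active-user list. The directory path and user names are invented, and a real implementation would pull active accounts from AD/LDAP rather than hardcoding them.

```python
# Minimal sketch of orphaned-data discovery -- not StorageMAP itself.
# Walks a tree, attributes bytes to file owners, and flags owners absent
# from the active-user set. Unix-only (uses pwd); path and users invented.
import os
import pwd
from collections import defaultdict

ACTIVE_USERS = {"alice", "bob"}   # hypothetical; really from AD/LDAP

def scan_orphaned(root):
    bytes_by_owner = defaultdict(int)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                          # unreadable; skip
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                owner = f"uid:{st.st_uid}"        # UID with no account left
            bytes_by_owner[owner] += st.st_size
    orphaned = {o: b for o, b in bytes_by_owner.items() if o not in ACTIVE_USERS}
    # "Top users with orphaned data" report, largest footprint first
    return sorted(orphaned.items(), key=lambda kv: kv[1], reverse=True)

for owner, size in scan_orphaned("/srv/shares"):
    print(f"{owner:20s} {size / 1e9:8.2f} GB")
```

The hard parts a product adds on top of this toy are scale (billions of files), multi-vendor and cloud back ends, and the remediation workflow itself.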

The 6.2 release comes after Datadobi launched its StorageMAP platform earlier this year. StorageMAP provides a single pane of glass for organizations to manage unstructured data across their complete data storage estate on-premises and in the cloud. This latest update enables IT leaders to fully take control of orphaned data on their networks.

“Due to the scale and complexity of unstructured data in today’s heterogeneous storage environments, enterprises can easily lose track of orphaned data within networks and open themselves up to excess storage costs and risk,” said Carl D’Halluin, CTO, Datadobi. “StorageMAP’s new capabilities allow for a seamless experience identifying and taking the appropriate action on orphaned data to give IT leaders peace of mind.”

“As data proliferation continues, IT leaders are going to see more orphaned data on their networks. This is why it is so important that organizations turn to unstructured data management solutions like StorageMAP to find datasets associated with inactive users and take the appropriate action,” said Craig Ledo, IT Validation Analyst at Enterprise Strategy Group (ESG). “Combined with a high level of cross-vendor knowledge, years of real-life experience, and great customer support, enterprises can let StorageMAP do the heavy lifting when it comes to orphaned data.”

Insights from Product Data Boost Competitiveness

We keep returning to the theme of the importance of product data. This report from NI summarizing recent research into using product-centric data, such as test data, in product development brings forth some data on data.

Hundreds of senior product innovators say that product data is key to staying competitive, yet more than half of the respondents recognize gaps in the way they extract value from their test data. There is also a strong correlation between advanced data strategies and increased degrees of innovation, with two-thirds of respondents believing that a data strategy is essential to optimizing the product lifecycle.

“Companies face a dual challenge of increased product complexity and shrinking time to market, causing a shift in the way products are developed. They recognize that the status quo will not work anymore,” says Mike Santori, fellow at NI. “Business performance can improve through connected product data and analytics, and this research provides evidence that test data are a strategic differentiator.”

While many of the engineering vice presidents and heads of R&D recognize the value of data in their product development, over half cited cost as the inhibiting factor preventing the transformation of their production models. The research also pointed to test being an underutilized resource with 38% of respondents saying they rarely use test to inform product design and 51% recognizing that they could extract more value from their data if they implemented test earlier in their processes.

Additional findings include:

  • 52% of companies with an integrated company-wide product data strategy experienced faster time-to-market in the last 12 months, compared to 33% of companies without this advantage
  • 55% of product innovators say that integrating test data into the product development process will be a key priority over the next 12 months
  • 40% identify integrating test data into the product development process among the initiatives that could bring the most value to their business

The survey was conducted among senior product innovators in 10 industries including semiconductor, transportation, consumer electronics, and aerospace and defense. It was produced by FT Longitude, the specialist research and content marketing division of the Financial Times Group.

Fix Your Data Flow and Transform Your Business

Back in the 70s I actually had a job in a manufacturing company whose principal function was handling data: all the engineering, product, and cost data. Little did I know then that 45 years later data would be called the new oil. (Don’t really believe all the hype.)

I’ve been interested in DataOps, the new function and class of applications for data management, among other things. I’ve been part of the HPE VIP Community for a few years. HPE isn’t talking as much about manufacturing lately, but many of its technologies are germane. I picked up this short post on data management from the Community site.

Data management complexity is everywhere in the modern enterprise, and it can feel insurmountable. IT infrastructure has evolved into a complex web of fragmented software and hardware in many organizations, with disparate management tools, complicated procurement processes, inflexible provisioning cycles, and siloed data. Organizations are running multiple storage platforms, each with their own management tools, which becomes a progressively tougher problem at scale.

Complexity is a growing problem, and according to a recent ESG study, 74% of IT decision makers acknowledge that it is holding them back in their digital transformation journey. Their data management capabilities simply can’t keep pace with business demands. As a result, IT organizations are forced to spend time managing the infrastructure, as opposed to really leveraging the data.

Constant firefighting might sound like business as usual to many tech pros, but it can be avoided. To accelerate transformation, IT leaders must confront and eliminate data complexity first, getting data to flow where and when it needs to, and allowing the business to get back to business.

Of course, HPE has a solution—GreenLake for Block Storage.

To Everything There Is A Season

What if the time has come to rethink all these specific silos and strategies that we build software solutions around?

Folk/rock group The Byrds popularized a Pete Seeger tune in the 1960s, “To everything (turn, turn, turn) there is a season (turn, turn, turn) and a time for every purpose under heaven.” 

The time has come to rethink all the departmental silos manufacturing executives constructed over the years, with vendors targeting their applications to fit. This era of the Internet of Things (IoT), sensor-driven real-time data, innovative unstructured databases, powerful analytics engines, and visualization provides us with new ways of thinking about organizing manufacturing.

HMI/SCADA can become IoT-enabling software, expanding beyond its normal visualization role. Some types of MES software already break the bounds of traditional silos. What if we thought of MES not just as quality metrics, OEE calculators, or maintenance schedulers, but as operational intelligence bringing disparate parts together? These tools could provide managers at all levels the kind of information needed for better, faster decision making.

I have worked with a number of maintenance and reliability media companies. They have all been embroiled in discussions of the comparative value of maintenance strategies: Reactive (run-to-failure), Preventive, Predictive, Reliability-centered. These are presented as a continuum progressing from the Stone Age to Star Trek. With them always comes discussion about which is best.

The IT companies I have worked with fixated on predictive. They had powerful predictive analytics to combine with their database and compute capabilities and saw that as the Next Big Thing. They were wrong.

I was taught early in my career that Preventive was also known as scheduled maintenance. Management sends technicians out on rounds on a regular basis with lube equipment and meters to check out and lubricate and adjust. As often as not, these adjustments would disturb the Force and something would break down.

What if? What if we send all the sensor data from equipment to the cloud, into a powerful database? What if we use that data to intelligently dispatch technicians to the necessary equipment, with the appropriate tools, to fix things before they break, and at an appropriate, collaborative time?
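Stripped to its essentials, that “what if” is a dispatch rule over streaming condition data. Here is a toy sketch of the idea, not any vendor’s product; the assets, vibration thresholds, and readings are all invented.

```python
# Toy sketch of condition-based dispatch -- assets, thresholds, and readings
# are invented. Idea: replace fixed-schedule rounds with a rule that cuts a
# work order only when an asset's condition trend says intervention is due.
from statistics import mean

THRESHOLDS = {"pump-07": 8.0, "press-12": 5.5}   # hypothetical vibration limits (mm/s)
WINDOW = 10                                      # readings per rolling window

history = {asset: [] for asset in THRESHOLDS}

def ingest(asset, reading):
    """Record a sensor reading; return a work order if the rolling mean breaches."""
    buf = history[asset]
    buf.append(reading)
    if len(buf) > WINDOW:
        buf.pop(0)                               # keep only the latest WINDOW readings
    if len(buf) == WINDOW and mean(buf) > THRESHOLDS[asset]:
        buf.clear()                              # reset so one fault yields one ticket
        return {"asset": asset, "action": "inspect and lubricate", "priority": "high"}
    return None

# Simulated feed: pump-07 drifting upward toward failure
for i, r in enumerate([6.5, 6.8, 7.1, 7.4, 7.7, 8.0, 8.3, 8.6, 8.9, 9.2, 9.5]):
    order = ingest("pump-07", r)
    if order:
        print(f"reading {i}: dispatch {order}")
```

The rolling mean is doing the work here: a single noisy spike doesn’t send anyone out, but a sustained drift does, which is exactly the difference between rounds-based and condition-based maintenance.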

A company called Matics was recently introduced to me by a long-time marketing contact. They wanted to talk about the second definition of preventive maintenance: not just scheduled rounds, but using sensor-driven data, or IoT, to feed its Central Data Repository with the goal of providing Real-time Operational Intelligence (RtOI) to its customers.

According to Matics, its RtOI system has provided customers with:

  • 25% increased machine availability
  • 30% decrease in rejects
  • 10% reduction in energy consumption

Smarter preventive maintenance leverages continuous condition monitoring to target as-needed maintenance, resulting in fewer unnecessary checks and less machine stoppage for repair.

I am not trying to write a marketing piece for Matics, although the company does compensate me for some content development. But their software provides me a way to riff into a new way of thinking.

Usually product engineers and marketing people will show me a new product. I’ll become enthused. “Wow, this is cool. Now if you could just do this and this…” I drive product people crazy in those meetings. I think the same here. I like the approach. Now, if customers can take the ball and run with it, thinking about manufacturing in a new way, that would be cool—and beneficial and profitable. I think innovative managers and engineers could find new ways to bring engineering, production, and maintenance together in a more collaborative way around real-time information.

DataOps Portfolio from Hitachi Vantara

I’ve felt DataOps was destined to be an important data management tool since I was introduced to it a few years back. Hitachi Vantara is one of two companies I follow that are specifically bringing this technology to industrial applications. Here it introduces a new portfolio that brings IIoT specifically into the core, working with digital twins, machine learning (ML), and user interfaces.

Background. Ultimately, most industrial IoT difficulties are rooted in data management shortcomings. However, these challenges are not the same as those faced in a purely IT setting. For example, operational technology (OT) data is high-velocity time-series and event information that often lacks the detailed metadata descriptors and features needed to leverage it outside the operations organization. In comparison, business IT data comes across in batches or transaction records with different metadata descriptors, and the time-stamp references are not always correlated. Merging these datasets, in context, is not trivial work, but, if done right, it yields new operational insights that can provide a competitive advantage.
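One concrete shape that merging step can take, offered as a sketch of the general technique rather than a description of Lumada’s internals, is an as-of join: match each high-velocity OT sample to the most recent IT batch record at or before it. The pandas example below uses invented column names and values.

```python
# Sketch of the OT/IT merge using an as-of join -- column names and values
# invented, and this is the general technique, not Lumada's implementation.
# OT data: high-velocity time series. IT data: sparse batch records.
import pandas as pd

ot = pd.DataFrame({
    "ts": pd.to_datetime(["2022-09-15 08:00:01", "2022-09-15 08:00:02",
                          "2022-09-15 08:05:03", "2022-09-15 08:05:04"]),
    "vibration_mm_s": [3.1, 3.3, 7.8, 8.1],
})

it = pd.DataFrame({
    "ts": pd.to_datetime(["2022-09-15 07:58:00", "2022-09-15 08:04:00"]),
    "batch_id": ["B-1041", "B-1042"],
    "product": ["widget-A", "widget-B"],
})

# merge_asof pairs each sensor sample with the most recent batch record at or
# before it; both frames must be sorted on the join key.
merged = pd.merge_asof(ot.sort_values("ts"), it.sort_values("ts"),
                       on="ts", direction="backward")
print(merged)   # each reading now carries the batch context it occurred under
```

That “in context” phrase is the whole game: once each reading knows its batch and product, quality and maintenance questions become simple group-bys instead of forensic exercises.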

Lumada Industrial DataOps automates the process of abstracting, tagging, and rationalizing IT and OT data and organizes it in the data lake or data warehouse so it is usable for analysis and building AI and ML solutions. Data pipelines are established, and multiple transformations and inferences can be calculated and orchestrated as part of the workflow. Industrial process engineers can work with data scientists, analysts, and applications consultants to unlock the combined value and make major operations improvements.

But reality hasn’t lived up to the promise, and industrial operations have had a mixed relationship with IoT technologies. While there has been considerable success at the project level, broad IIoT deployments and the resulting analytics capabilities have progressed in fits and starts. Enterprises will need to leverage IIoT, AI, and ML technology across far more use cases to better support their existing workforce and overcome supply chain issues. It turns out that scaling IIoT proofs of concept across a company is more complicated than developers anticipated.

The Lumada Industrial DataOps portfolio adds IIoT Core software with IIoT platform framework capabilities. The new toolkit is delivered as IIoT Analytics to accelerate the convergence of traditional IT with expanding IIoT data sources and bring powerful new software-based capabilities to life. IIoT Analytics offers prepackaged modules that provide data integration and preconfigured functions that give you a faster start on your application so you can focus on fine-tuning it for your specific requirements. A typical IIoT Analytics toolkit includes:

  • Digital twins for data and asset organization
  • ML models for faster assembly
  • Simulation software interfaces for greater accuracy
  • ML services framework for deploying AI/ML applications

Lumada Industrial DataOps directly addresses the four key challenges that hinder the enterprise-wide expansion of IIoT applications.

Challenge 1: The Need for High-Level Data Management

Organizations need solutions that make it possible to access data in motion and at rest from the widest array of sources, integrate all that data, transform the data, and perform analysis. While all that happens, data security must be maintained and policies enforced to adhere to compliance and governance requirements.

Challenge 2: Automating Data Organization

To create an efficient production pipeline for AI models, data scientists and analysts need an environment within which they can organize data and build models to detect events. This requires a system that automates the data analysis function, rejecting noise and providing people with a rich data signal that can be predictive or prescriptive in context.

Challenge 3: Accelerate the Training of AI Models

Starting every model from scratch is not practical, as this approach may introduce delays and costs that get in the way of meeting business objectives. Data science personnel instead need templates that provide a proven foundation that they can then refine and adapt to meet specific requirements in a timely manner.

Challenge 4: Shorten Application Delivery Time

Engineers and developers also need ready-made application components that provide a starting point.

Using Lumada Industrial DataOps, organizations can accelerate their development of digital twins, which can be further combined with new AI and ML analytic templates that address a variety of critical industrial activities. These analytics include anomaly detection and prediction capabilities for maintenance and operations effectiveness. These data management and application building blocks support the many industry-specific solutions offered by Hitachi to speed cooperative deployment efforts for Hitachi clients and partner organizations.
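Hitachi doesn’t publish the template internals, but for flavor, here is the kind of building block an anomaly-detection template wraps: a rolling z-score that flags readings far outside recent behavior. The sensor series and threshold are invented.

```python
# Flavor-of-the-idea sketch: rolling z-score anomaly detection on a sensor
# series. Not Hitachi's templates (those are proprietary); data invented.
import pandas as pd

readings = pd.Series([5.0, 5.1, 4.9, 5.2, 5.0, 5.1, 4.8, 5.0, 9.7, 5.1, 5.0])

window = 5
rolling = readings.rolling(window)
# Compare each point against the mean/std of the *previous* window (shift(1))
# so an anomaly cannot mask itself in its own statistics.
z = (readings - rolling.mean().shift(1)) / rolling.std(ddof=0).shift(1)

anomalies = readings[z.abs() > 3]   # flag points more than 3 sigma out
print(anomalies)                    # the 9.7 spike should be the only hit
```

The value of a template is everything around this core: parameter defaults per asset class, retraining, and wiring the flags into maintenance workflows.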

Lumada Industrial DataOps embraces the synergistic relationships between data management, AI applications, and the next-level decision-making required in modern industrial environments. With Lumada Industrial DataOps, Hitachi empowers industrial enterprises to move their IIoT-driven AI applications out of the endless pilot phase and more quickly develop and scale for enterprise-wide deployment.