Advantech Quietly an Internet of Things Leader

Advantech has been appearing on a variety of lists of prominent Internet of Things suppliers. The Taiwanese computer company, with a US office in Cincinnati, OH, supplies intelligent I/O, a variety of computing devices, and HMI hardware, and has shown real thought leadership in the space.

Several years ago I was privileged to be invited to Suzhou, China to attend Advantech’s user conference. It was an impressive event. This year they called it the “first IoT Co-Creation Summit.”

More than five thousand Advantech clients and partners from around the world attended the summit. There, Advantech introduced its newest IoT platform architecture, WISE-PaaS 3.0, along with 32 IoT Solution Ready Packages (SRPs) co-created with software and industry partners.

The event itself will aid software/hardware integration across industries, help connect and build a complete industrial IoT ecosystem and value chain, and allow Advantech and its partners to officially step into the next stage of IoT.

Advantech Chairman KC Liu stated that, in view of the diversity of IoT applications and the fragmented nature of the market, Advantech has assisted industries in integrating and connecting existing hardware and software, and regards creating a complete industry value chain as its primary task in IoT industry development.

Advantech is introducing new features for its WISE-PaaS 3.0 and sharing a number of IoT solution ready packages (SRPs), based on WISE-PaaS, developed with numerous co-creation partners. The company is also outlining future co-creation strategies and schedules for the upcoming year.

Allan Yang, Chief Technology Officer at Advantech, said, "While IoT is currently flourishing and many companies have invested in connectivity and data collection equipment, we are still in the early stages of generating value from IoT data. Since WISE-PaaS launched in 2014, Advantech has continued its integration and improved connectivity with open source communities. Our IoT software modules are developed to create operational cloud platform services oriented around the commercial value generated by data acquisition. Data-driven innovation has thus become the main target for our WISE-PaaS evolution."

WISE-PaaS 3.0 offers four main function modules:

  • WISE-PaaS/SaaS Composer: a cloud configuration tool with a visual workflow. WISE-PaaS/SaaS Composer supports customized component plotting for simple, intuitive 3D modeling and interaction. It updates views at millisecond rates and, together with WISE-PaaS/Dashboard, presents critical management data in a visually intuitive display to help extract valuable data and improve operational efficiency.
  • WISE-PaaS/AFS (AI Framework Service): an artificial intelligence model training and deployment service framework. WISE-PaaS/AFS provides a simple drag-and-drop interface that allows developers to quickly input industrial data. Combined with AI algorithms, the service builds an effective inference engine and deploys it automatically to edge computing platforms. AFS offers model accuracy management, model retraining, and automated redeployment, and it manages multiple AI models in the field simultaneously, providing automated accuracy improvements and life-cycle management services. (A minimal sketch of this train-and-deploy flow appears after this list.)
  • WISE-PaaS/APM (Asset Performance Management): a remote maintenance service framework for networked equipment. WISE-PaaS/APM connects to a wide array of on-site industrial equipment controls and communication protocols. It supports the latest edge computing open standard, EdgeX Foundry, and includes built-in equipment management and workflow integration templates. Together with AFS, APM accelerates Machine to Intelligence (M2I) application development.
  • Microservice development framework: WISE-PaaS contains a microservice development framework to help developers rapidly create program design frameworks while reducing development overhead. Microservice functions such as service discovery, load balancing, service administration, and a configuration center all come with built-in, flexible support mechanisms.
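To make the AFS train-and-deploy idea concrete, here is a minimal, hypothetical sketch of the kind of workflow such a framework automates. The model choice, file name, and endpoint URL are illustrative placeholders of my own, not Advantech WISE-PaaS APIs.

```python
# Hypothetical sketch of a train-then-deploy-to-edge loop of the kind an
# AFS-style framework automates. The model choice, file name, and endpoint URL
# are illustrative placeholders only -- they are not Advantech WISE-PaaS APIs.
import joblib
import requests
from sklearn.ensemble import RandomForestClassifier

def train_failure_model(features, labels):
    """Train a simple failure-prediction model on historical industrial data."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return model

def deploy_to_edge(model, edge_url="https://edge-gateway.example/models"):
    """Serialize the trained model and upload it to a (hypothetical) edge endpoint."""
    joblib.dump(model, "failure_model.joblib")
    with open("failure_model.joblib", "rb") as f:
        # An AFS-style service would also version the model, track its accuracy,
        # and retrain/redeploy automatically; this is just the bare upload step.
        requests.post(edge_url, files={"model": f}, timeout=30)
```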

Advantech recently established a water treatment system, jointly developed with GSD (China) Co., Ltd., and a CNC equipment remote operation service, jointly developed with Yeong Chin Machinery Industries Co. Ltd. Both partnerships demonstrate how industrial digital transformations, led by Advantech and its partners through the co-creation model, offer innovative win-win IoT solutions.

Advantech’s IIoT iAutomation Group has launched a broad selection of rackmount GPU servers from 1U to 4U. The SKY-6000 GPU server series is powered by Intel Xeon Scalable processors, and each of these highly scalable GPU-optimized servers supports up to five NVIDIA Tesla P4 GPUs. IPMI management functions and smart fan control ensure better temperature control and thermal management. Every GPU pair includes one high-speed PCIe slot for highly parallel applications such as artificial intelligence (AI), deep learning, self-driving cars, smart city applications, health care, high performance computing, virtual reality, and much more.

AI Deep Learning GPU Solution

With support for up to five half-height, half-length (HHHL) GPU cards, or one full-height, full-length (FHFL) double-deck card plus one full-height, half-length (FHHL) GPU card, the SKY-6100 series is designed for NVIDIA Tesla P4 HHHL GPU cards, making it the best choice for deep learning applications.

IPMI Server Management

With IPMI 2.0 support, the SKY-6000 series allows users to monitor, manage, and control servers remotely and receive alerts if any sensors detect device or component faults. In addition, event logs record important information about the server, which can also be controlled remotely using the IPMI KVM.
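As a hedged illustration of what remote IPMI monitoring looks like in practice, the sketch below polls a BMC's sensors with the open-source ipmitool utility. The host address and credentials are placeholders, and no SKY-6000-specific sensor names are assumed.

```python
# Minimal sketch: poll IPMI sensors remotely with the open-source ipmitool CLI.
# Host and credentials are placeholders; no vendor-specific sensor names assumed.
import subprocess

def read_sensors(host, user, password):
    """Return the raw sensor listing from a remote BMC over IPMI 2.0 (lanplus)."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, "sensor"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(read_sensors("192.0.2.10", "admin", "password"))
```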

Smart Fan Control

The optimized thermal design separates the CPU and GPU fan zones, ensuring the GPU card is not preheated or thermally affected by any other heat source. With the smart fan control mechanism, fan speeds are adjusted based on CPU and GPU workloads and ambient temperature, which lowers acoustic noise when the GPUs are heavily loaded but the CPUs are not. Advantech's SKY-6000 server series is available for order now.

AI Understood and Misunderstood

Artificial Intelligence (AI) draws a lot of attention and media space. But what is it, really? Elon Musk has recently spoken about AI in terms of dystopian sci-fi movies. More recently I've heard Eric Schmidt and Michael Dell talk much more positively about AI's potential for solving human problems.

Most of the discussion about AI these days has little to do with replicating human brains in silicon circuitry. The core involves machine learning and neural networks. Recently, two different organizations sent me studies about the current state of AI. The first comes from ABI Research; the second from the McKinsey Global Institute.

ABI Research: AI Moves from the Cloud to the Edge

Artificial Intelligence (AI) will see a significant shift out of the cloud and on to the edge (aka on-device, gateway, and on-premise server), initially for inference and later for training. This shift means a huge opportunity for chipset vendors with power-efficient chipsets and other products that can meet the demand for edge AI computing. The share of AI inference happening at the edge will grow from just 6% in 2017 to 43% in 2023, according to ABI Research, a market-foresight advisory firm providing strategic guidance on the most compelling transformative technologies.

“The shift to the edge for AI processing will be driven by cheaper edge hardware, mission-critical applications, a lack of reliable and cost-effective connectivity options, and a desire to avoid expensive cloud implementation. Consumer electronics, automotive, and machine vision vendors will play an initial critical role in driving the market for edge AI hardware. Scaling said hardware to a point where it becomes cost effective will enable a greater number of verticals to begin moving processing out of the cloud and on to the edge,” says Jack Vernon, Industry Analyst at ABI Research.

ABI Research has identified 11 verticals ripe for the adoption of AI, including automotive, mobile devices, wearables, smart home, robotics, small unmanned aerial vehicles, smart manufacturing, smart retail, smart video, smart building, and oil and gas, split across a further 58 use cases. By 2023 the market will see 1.2 billion shipments of devices capable of on-device AI inference, up from 79 million in 2017.

Cloud providers will still play a pivotal role, particularly when it comes to AI training. Of the 3 billion AI device shipments expected in 2023, over 2.2 billion will rely on cloud service providers for AI training. Even so, that represents a real decline in the cloud providers' market share for AI training, which currently stands around 99% but will fall to 76% by 2023. Hardware providers should not be too concerned about this shift away from the cloud, as AI training is likely to be supported by the same hardware, only at the edge, on either on-premise servers or gateway systems.

The power-efficient chipset is the main driver of edge AI. Mobile vendor Huawei is already introducing on-device AI training for battery power management in its P20 Pro handset, in partnership with Cambricon Technologies. Chip vendors NVIDIA, Intel, and Qualcomm are also pushing to deliver the hardware that will enable automotive OEMs to experiment with on-device AI training to support their autonomous driving efforts. Training at the edge is beginning to gain momentum in R&D, but it could still take some time to become a realistic approach in most segments.

“The massive growth in devices using AI is positive for all players in the ecosystem concerned, but critically those players enabling AI at the edge are going to see an increase in demand that the industry to date has overlooked. Vendors can no longer go on ignoring the potential of AI at the edge. As the market momentum continues to swing toward ultra-low latency and more robust analytics, end users must start to incorporate edge AI in their roadmap. They need to start thinking about new business models like end-to-end integration or chipset as a service,” Vernon concludes.

These findings are from ABI Research’s Artificial Intelligence and Machine Learning market data. This report is part of the company’s AI and Machine Learning research service, which includes research, data, and Executive Foresights.

McKinsey

AI Could Add $2 Trillion to Manufacturing Value, McKinsey Paper Says

Artificial intelligence in manufacturing is like the old BASF slogan: the company doesn't make many of the things you buy, it makes them better.
As much as $2 trillion better, according to a McKinsey Global Institute discussion paper covering more than 400 AI use cases. In 69 percent of the use cases, researchers found that adding AI to established analytical techniques could improve performance and generate additional insights and applications.

Nearly a quarter of the use cases directly or indirectly touched manufacturing.
"Manufacturing is the second largest domain when it comes to potential in value creation (right behind Marketing & Sales)," said Mehdi Miremadi, an MGI partner and co-author of the paper. "Application of advanced deep learning models in manufacturing and supply chain has the potential to create $1.2-2 trillion in annual economic value."

Two deep-learning neural networks offer the greatest promise in applying AI to manufacturing.

Feed Forward: information moves from the input layer forward through the "hidden" layers to the output layer. This architecture has been enhanced by advances in computing power, training algorithms, and available data.

Convolutional: connections between the neural layers mimic the human visual cortex, which processes images. These networks are well suited to visual perception tasks.

Of the two, feed-forward neural networks are the more widely applicable, with broad uses in predictive maintenance, yield, efficiency, and energy. The most value is derived when AI is used alongside existing analytics tools.
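The McKinsey paper stays at the conceptual level, but a toy example helps show what a feed-forward network actually is. The sketch below uses PyTorch on synthetic sensor-style data; it is illustrative only and not drawn from the paper.

```python
# Toy feed-forward network for tabular sensor data (illustrative only; not from
# the McKinsey paper). It predicts a failure probability from a few features.
import torch
import torch.nn as nn

model = nn.Sequential(          # information flows input -> hidden -> output
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

x = torch.randn(64, 8)                      # synthetic batch of sensor readings
y = torch.randint(0, 2, (64, 1)).float()    # synthetic failure labels
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                        # brief training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```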

"In predictive maintenance, neural networks improve the ability to incorporate and process a broader set of data, including unstructured data such as videos and images," Miremadi said. "Better algorithm precision and accuracy can result in better decisions. And there is the possibility of taking better advantage of live data."

Small and medium-sized manufacturers, often left on the sidelines of Industry 4.0 technologies like AI, can do more than they might think, Miremadi said.

The cost of setting up AI applications is declining. Hardware sensors and actuators are much more affordable and reliable. And data systems and deep-learning algorithms are more accessible. This allows a choice of internal development or working with one of the many tech providers in the market.

NI, Focusing on Test and Measurement, Updates LabVIEW

NI Week was last week, and for only the second time in 20 years, I didn't go. NI, formerly National Instruments, has been focusing more on test and measurement lately, and not so much on automation. My interest is mostly in its IoT efforts, especially TSN. I figure I can get an interview with Todd Walter or whomever without the expense of a conference.

NI's core competency lies in providing a software-defined platform that helps accelerate the development and performance of automated test and measurement systems. At NI Week it announced the release of LabVIEW 2018.

Applications that impact our daily lives are increasing in complexity due to the rapid innovation brought on by industry trends such as 5G, the Industrial Internet of Things, and autonomous vehicles. Consequently, the challenge of testing these devices to ensure reliability, quality, and safety introduces new demands and test configurations under decreased time and budgets. Engineers need better tools to organize, develop, and integrate systems so they can accomplish their goals within acceptable boundaries.

Engineers can use LabVIEW 2018 to address a multitude of these challenges. They can integrate more third-party IP from tools like Python to make the most of the strengths of each package or existing IP from their stakeholders. Test engineers can use new functionality in LabVIEW 2018 to strengthen code reliability by automating the building and execution of software through integration with open interface tools like Jenkins for continuous delivery. Capabilities like this empower test engineers to focus on system integration and development where they can offer unique differentiation, rather than get bogged down in the semantics of how to use software tools or move IP from one to another. For test engineers using FPGAs for high-performance processing, new deep learning functions and improved floating-point operations can reduce time to market.
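As a small, hedged illustration of the Python integration mentioned above: LabVIEW's Python support calls ordinary functions in plain .py modules with simple parameter types, so reusable third-party IP can look as unglamorous as the module below. The module and function names are my own, not NI examples.

```python
# rms_utils.py -- a minimal sketch of the kind of plain Python module that
# LabVIEW's Python integration can call. Module and function names are
# illustrative; the function takes and returns simple numeric types.
import math

def rms(samples):
    """Return the root-mean-square value of a list of waveform samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```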

“NI’s continued commitment to its software-centric platform accelerates my productivity so I can focus on the challenges that yield the highest ROIs,” says Chris Cilino, LabVIEW framework architect at Cirrus Logic. “LabVIEW continues to minimize the effort of adding tests and code modifications to our validation framework, delivering a consistent process to maintain our software and incorporate the reuse of valuable IP without rewrites.”

To meet demands like testing higher-complexity DUTs on shorter timeframes, engineers need tools tailored to their needs that they can use efficiently throughout their workflow, helping them meet their exact application requirements. LabVIEW 2018 is the latest addition to NI's software-centric platform, which features products tailored to the distinct stages of that workflow and has been adopted in whole or in part by more than 300,000 active users.

With InstrumentStudio software providing an interactive multi-instrument experience, TestStand test management software handling overall execution and reporting, and SystemLink software managing assets and software deployments, this workflow improves the productivity of test and validation labs across many industries. Each piece of the workflow is also interoperable with third-party software to maximize code and IP reuse, and it draws on the LabVIEW Tools Network ecosystem of add-ons and tools for more application-specific requirements.

Engineers can access both LabVIEW 2018 and LabVIEW NXG with a single purchase of LabVIEW.

Internet of Things Prominent at Dell Technologies World

A few of us gathered for a round-table discussion of the Internet of Things while I was at Dell Technologies World at the beginning of the month. I arrived a little early and had a private round table for several minutes before others arrived and the discussion became broader.

Ray O'Farrell, CTO of VMware and GM of IoT at Dell Technologies, said the focus of the last six months, since the new Internet of Things organization was announced, has included these three points:

1. Dell is seven companies trying to achieve one cohesive strategy across all of them, and to be one organization when facing customers.

2. The best way to do that is to work within the ecosystem; that is VMware's history.

3. Build technology and leverage solutions. This is a complex undertaking, as not all challenges within IoT are alike; there are few cookie-cutter applications.

The evolution of the Internet of Things within Dell, then Dell EMC, then Dell Technologies constitutes an upward-spiraling path, encompassing the greater breadth of technologies and organization of the post-merger company. When I first came along, the concept was building an ecosystem around selling an edge device appliance. Now the strategy is much broader, bringing the goal of IT/OT convergence closer to reality. As I've mentioned before, the IT companies are attacking that convergence from the IT side after years of manufacturing/production-oriented suppliers trying to accomplish the same thing from the OT side. Maybe, like the old country song, we'll meet in the middle someday.

Everyone talks about Artificial Intelligence (AI) these days, and Dell Technologies is no exception. However, AI is not the science fiction doom and gloom predicted by Ray Kurzweil, Elon Musk, and others. Mostly it entails machine learning (ML) from patterns detected in the data.

Or as Dell Technologies says, it is applying AI and ML technology to turn data into intelligent insights, drive a faster time to market, and achieve better business outcomes.

News summary

• Dell EMC PowerEdge expands portfolio to accelerate AI-driven workloads, analytics, deployment and efficiency

• Deepens relationship with Intel to advance AI community innovation, machine learning (ML) and deep learning (DL) capabilities with Dell EMC Ready Solutions

• Dell Precision Optimizer 5.0 now enhanced with machine learning algorithms, intelligently tunes the speed and productivity of Dell Precision workstations.

• Dell EMC uses AI, ML and DL to transform support and deployment

14th generation Dell EMC PowerEdge four-socket servers and Dell Precision Optimizer 5.0 are designed to further strengthen AI and ML capabilities.

According to the recently released update of the Enterprise Strategy Group (ESG) 2018 IT Transformation Maturity Curve Index, commissioned by Dell EMC, transformed companies are 18X more likely to make better and faster data-driven decisions than their competition. Additionally, transformed companies are 22X as likely to be ahead of the competition with new products and services to market.

“The Internet of Things is driving an onslaught of data and compute at the edge, requiring organizations to embrace an end-to-end IT infrastructure strategy that can effectively, efficiently and quickly mine all that data into business intelligence gold,” said Jeff Clarke, vice chairman, Products & Operations, Dell. “This is where the power of AI and machine learning becomes real – when organizations can deliver better products, services, solutions and experiences based on data-driven decisions.”

Unlike competitors' four-socket offerings, these servers also support field programmable gate arrays (FPGAs), which excel at data-intensive computations. Both servers feature OpenManage Enterprise to monitor and manage the IT infrastructure, as well as the agent-free Integrated Dell Remote Access Controller (iDRAC) for automated, efficient management to improve productivity.

Dell EMC is also announcing its next generation PowerMax storage solution, built with a machine learning engine which makes autonomous storage a reality.

Leveraging predictive analytics and pattern recognition, a single PowerMax system analyzes and forecasts 40 million data sets in real time per array, driving six billion decisions per day to automatically maximize efficiency and performance of mixed data storage workloads.

The new Dell Precision Optimizer 5.0 uses AI to automatically adjust applications running on Dell Precision workstations to maximize performance by:

• Custom-optimizing applications: Dell Precision Optimizer learns each application's behavior in the background and uses that data to employ a trained machine learning model that automatically adjusts the system to optimized settings, delivering up to a 394% improvement in application performance. (A hypothetical sketch of this profile-and-tune pattern follows this list.)

• Automating systems configuration adjustments: Once activated and a supported application is launched, the software automatically adjusts system configurations such as CPU, memory, storage, graphics and operating system settings.
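Dell does not publish the Optimizer's internals, so the following is purely a hypothetical sketch of the profile-and-tune pattern described above: sample a workload's resource use, then map that profile to a learned settings recommendation. The psutil calls, the classifier, the training data, and the preset names are all my own illustrative choices, not Dell code.

```python
# Purely hypothetical sketch of ML-assisted workstation tuning -- not Dell
# Precision Optimizer code. Profile an application's resource use, then map the
# profile to a recommended settings preset via a trained classifier.
import psutil
from sklearn.neighbors import KNeighborsClassifier

PRESETS = {0: "balanced", 1: "cpu_heavy", 2: "memory_heavy"}

def profile_workload(interval=1.0):
    """Collect a tiny resource-usage profile: CPU percent and memory percent."""
    return [psutil.cpu_percent(interval=interval), psutil.virtual_memory().percent]

# A classifier trained offline on labeled workload profiles (synthetic data here).
clf = KNeighborsClassifier(n_neighbors=3).fit(
    [[90, 30], [20, 85], [50, 50], [95, 60], [10, 20], [60, 40]],
    [1, 2, 0, 1, 0, 0],
)

recommended = PRESETS[clf.predict([profile_workload()])[0]]
print("Recommended settings preset:", recommended)
```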

Speaking of partners and collaboration, Dell Technologies and Microsoft are joining forces to build a secure, intelligent edge-to-cloud solution featuring Dell Edge Gateways, VMware Pulse IoT Center, and Microsoft Azure IoT Edge.

News summary

• Joint IoT solution helps simplify management, enhance security, and lower the cost of deployment at the edge

• Built on innovative analytics applications, management tools and edge gateways to enable network security from edge devices to the cloud

• Accelerates IoT adoption in industry verticals key to economic growth and development

The joint solution offers an underlying IoT infrastructure, management capabilities, and security for customers looking to deploy IoT for scenarios like predictive maintenance, supply chain visibility and other use cases. The solution will deliver:

• Intelligence at the edge with Microsoft Azure IoT Edge: This application extends cloud intelligence to edge devices so that devices can act locally and leverage the cloud for global coordination and machine learning at scale

• Management and monitoring of edge devices with VMware Pulse IoT Center: This provides more secure, enterprise-grade management and monitoring of diverse, certified edge devices, including gateways and connected IoT devices, their BIOS, and operating systems. This ecosystem will be built over time involving deeper integration and certification to support customer requirements.

• High-performance, rugged Dell Edge Gateways: IoT devices with powerful dual-core Intel® Atom™ processors connect a variety of wired and wireless devices and systems to aggregate and analyze inputs and send relevant data to the cloud

VMware Pulse IoT Center will serve as the management glue between the hardware (Dell Edge Gateways or other certified edge systems), connected sensors and devices and the Microsoft Azure IoT Edge. Initially, Pulse will help to deploy the Microsoft Azure IoT Edge to the requisite edge systems so that it can start collecting, analyzing and acting on data in real-time.
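The gateway software itself is not detailed here, but the aggregate-and-forward role described above for the Dell Edge Gateways can be sketched generically: summarize a window of raw readings locally and send only the summary upstream. The endpoint URL below is a placeholder of my own, not a Dell or Azure API.

```python
# Generic sketch of the edge pattern described above: aggregate raw sensor
# readings locally and forward only summary data to the cloud. The forwarding
# endpoint is a placeholder and will fail until pointed at a real service.
import json
import statistics
import urllib.request

def summarize(readings):
    """Reduce a window of raw readings to the summary statistics worth sending."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def forward(summary, url="https://cloud.example/ingest"):
    """POST the aggregated summary to a placeholder cloud endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    window = [21.3, 21.4, 22.0, 29.8, 21.2]   # e.g., temperatures from wired/wireless sensors
    forward(summarize(window))
```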

Modernizing Manufacturing Operations With AI

Artificial Intelligence, almost always known simply as AI, along with its sometime companion robots, leads the mainstream media hype cycle. It's going to put everyone out of a job, destroy civilization as we know it, and probably destroy the planet.

I lived through the Japanese robotic revolution-that-wasn’t in the 80s. Media loved stories about robots taking over and how Japan was going to rule the industrialized world because they had so many. Probing the details told an entirely different story. Japan and the US counted robots differently. What we called simple pick-and-place mechanisms they called robots.

What set Japanese industrial companies apart in those days was not technology. It was management. The Toyota Production System (aka Lean Manufacturing) turned the manufacturing world on its head.

My take for years, based on living in manufacturing and selling and installing automation, has been, and still is, that much of this technology actually assisted humans: it performed dangerous work, removing humans from harm's way; took over repetitive tasks that lead to long-term stress-related injuries; and performed work humans realistically couldn't do.

Now for AI. This press release went out the other day, “With AI, humans and machines work smarter and better, together.” So, I was intrigued. How do they define AI and what does it do?

Sensai, an augmented productivity platform for manufacturing operations, recently announced the launch of its pilot program in the United States. Sensai increases throughput and decreases downtime with an AI technology that enables manufacturing operations teams to effectively monitor machinery, accurately diagnose problems before they happen and quickly implement solutions.

The company says it empowers both people and digital transformation using a cloud-based collaboration hub.

“The possibility for momentous change within manufacturing operations through digital transformation is here and now,” said Porfirio Lima, CEO of Sensai. “As an augmented productivity platform, Sensai integrates seamlessly into old or new machinery and instantly maximizes uptime and productivity by harnessing the power of real time data, analytics and predictive AI. Armed with this information, every person involved – from the shop floor to the top floor – has the power to make better and faster decisions to increase productivity. Sensai is a true digital partner for the operations and maintenance team as the manufacturing industry takes the next step in digital transformation.”

By installing a set of non-invasive wireless sensors that interconnect through a smart mesh network of gateways, Sensai collects data through its IIoT Hub, gateways and sensors, and sends it to the cloud or an on-premise location to be processed and secured. Data visualization and collaboration are fostered through user-friendly dashboards, mobile applications and cloud-based connectivity to machinery.
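Sensai does not disclose its wire protocol, but a generic sensor-to-gateway publish over MQTT, a common choice for exactly this kind of sensor/gateway/cloud pipeline, gives a feel for the data path. The broker address, topic, and payload fields below are placeholders of my own, not Sensai's actual interfaces.

```python
# Generic illustration of a sensor node publishing readings to a gateway over
# MQTT -- a common pattern for sensor-to-gateway-to-cloud pipelines. Broker
# address and topic are placeholders; this is not Sensai's actual protocol.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()                 # paho-mqtt 1.x constructor; 2.x also wants a CallbackAPIVersion
client.connect("gateway.local", 1883)  # placeholder gateway/broker address
client.loop_start()

while True:
    reading = {"machine_id": "press-07", "vibration_rms": 0.42, "ts": time.time()}
    client.publish("plant/line1/press-07/vibration", json.dumps(reading))
    time.sleep(5)
```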

The AI part

Sensai's differentiator is that it provides a full state of awareness, not only of the current status but also of the future conditions of the people, assets, and processes on the manufacturing floor. Sensai learns a business's processes and systems with coaching from machine operators and process and maintenance engineers. It then makes recommendations based on repeating patterns that were not previously detected. Sensai does this by assessing the team's experience and historical data from the knowledge base and cross-checking patterns of previous failures against a real-time feed. With this information, Sensai provides recommendations to avoid costly downtime and production shutdowns. Sensai is a true digital peer, connecting variables in ways that are not humanly possible to process at the speed required on today's plant floor.
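Sensai does not describe its algorithms in detail, so purely as an illustration of the cross-checking idea, the sketch below compares a live feature window against stored pre-failure "fingerprints" using a simple normalized distance. The features, threshold, and values are invented for the example.

```python
# Simple illustration of cross-checking a live feature window against stored
# failure "fingerprints" -- a toy stand-in, not Sensai's actual method.
import numpy as np

# Each fingerprint is a feature vector observed shortly before a past failure.
fingerprints = np.array([
    [0.80, 61.0, 3.2],    # [vibration_rms, motor_temp_C, current_A] before failure A
    [0.35, 75.0, 4.1],    # ... before failure B
])

def failure_risk(live_features, threshold=0.15):
    """Flag a warning if the live window is close to any known pre-failure pattern."""
    live = np.asarray(live_features, dtype=float)
    # Normalize per feature so the distance isn't dominated by one unit/scale.
    scale = fingerprints.max(axis=0)
    distances = np.linalg.norm((fingerprints - live) / scale, axis=1)
    return bool((distances < threshold).any()), distances.min()

alert, score = failure_risk([0.78, 60.5, 3.3])
print("Pre-failure pattern match!" if alert else "Normal", round(score, 3))
```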

About the Pilot Program

Participation in Sensai's pilot program is open now to interested manufacturers. Already deployed throughout Metalsa, a leading global manufacturer of automotive structural components, Sensai is set to digitally disrupt manufacturing through AI, including manufacturers in automotive, heavy metal and stamping, construction materials, consumer goods, and more.

Porfirio Lima, Sensai CEO, answered a number of follow-up questions I had. (I hate when I receive press releases with lots of vague benefits and buzzwords.)

1. You mention AI. What specifically is meant by AI, and how is it used?

Sensai uses many different aspects of Artificial Intelligence. We are specifically focused on machine learning (ML), natural language processing (NLP), deep learning, data science, and predictive analytics. When used together correctly, these tools serve a specific use case, allowing us to generate knowledge from the resulting data. We use NLP to enable human and computer interaction, helping us derive meaning from human input. We use ML and deep learning to learn from data and create predictive and statistical models. Finally, we use data science and predictive analytics to extract insights from the unstructured data deriving from multiple sources. All of these tools and techniques allow us to cultivate an environment of meaningful data that is coming from people, sensors, programmable logic controllers (PLCs), and business systems.

2. "Learn processes through operators": How do you get the input, how do you log it, and how does it feed back?

Our primary sources of data (inputs) are people, sensors, PLCs, and business systems. In the case of people on the shop floor, or operators, we created a very intuitive and easy-to-use interface that they can use on their cellphones or in the Human Machine Interfaces (HMIs) installed on their machines, so they can give us feedback about the root causes of failures and machine stoppages. We acquire this data in real time and utilize complex machine learning algorithms to generate knowledge that people can use in their day-to-day operations. Currently, we offer web and mobile interfaces so that users can quickly consume this knowledge to make decisions. We then store their decisions in our system and correlate them with the existing data, allowing us to optimize their decision-making process over time. The more a set of decisions and conditions repeats, the easier it is for our system to determine the expected outcome of a given set of data.

3. Pattern? What patterns? How is it derived? Where did the data come from? How is it displayed to managers/engineers?

We create "digital fingerprints" (patterns) with all the data we are collecting. These patterns allow us to see how indicators look before a failure occurs, enabling us to predict when another failure will happen. Data comes from the machine operators, the machines or equipment, our sensors, and other systems that have been integrated with Sensai's IIoT Hub.

We trigger alerts to let managers and engineers know that a specific situation is happening. They can then review it on their cellphones as a push notification that takes them to a detailed description of the condition in their web browser, where they can review more information in depth.

4. What specifically are you looking for from the pilots?

We are not a cumbersome solution; for us it is all about staying true to agility and value creation. We look for pilots that can give us four main outcomes:

– Learn more about our customers' needs and how to better serve them

– A clear business case that can deliver ROI in less than 6 months after implementation and can begin demonstrating value in less than 3 months.

– A pilot that is easy to scale up and replicate across the organization, so we can take the findings from the pilot and capitalize on them in a short period of time.

– A pilot that can help Sensai and its customers demonstrate that the technology can truly deliver the intended value and can be quickly deployed across the entire organization.
