Sensors were the focus of my last post, and I promised more. These notes came from the Consumer Electronics Show (CES), which was not as large as usual and generated far fewer news items than expected. Most of the sensor news revolved around autonomous vehicles. We may or may not care about autonomous cars in this blog, but trucks for the supply chain and warehouse vehicles provide real utility in an age of a declining workforce.
The following comments from prominent people in the field were supplied to me by an agency promoting CES.
Paul Drysch, CEO of PreAct Technologies (sensors for autonomy)
• The Last 50 Feet: Attention will start to shift towards “the last 50 feet” (short range or near field sensing), in order to meet the demands of self-driving vehicles (trucking, robotaxis, etc.), and customer demands for more advanced ADAS and convenience features. The market will need to adapt since traditional radar and ultrasound are not sufficient anymore. The last 50 feet is a much harder problem to solve than highway driving, and it’s also the most important to the success of full autonomy within a city.
• New Sensor Technologies: Newer technologies like 4D radar and continuous wave time of flight (CWTOF) cameras are starting to gain significant traction because of the need for better near-field sensing, and we’ll ultimately see ultrasound and other tech go away.
• LiDAR Gains Traction: LiDAR will finally start to find some traction in production vehicles; however, that volume will remain minuscule. There are still far too many LiDAR companies, so expect some consolidation and some players disappearing in 2022-2023.
Blair LaCorte, CEO of AEye (NASDAQ: LIDR – LiDAR technology)
• Shift from robotaxis to trucking: Robotaxi implementations in closed-loop environments have been well-proven. The next major transportation network to tackle autonomous mobility will be trucks delivering goods across cities and states.
• The “Marquee” App emerges: Highway autopilot will emerge as the “marquee app” in automotive ADAS, as sensor sophistication, especially with the addition of LiDAR, enables small object detection at highway speeds.
• Over the air updates become more prevalent: Automakers will take a cue from Elon Musk, looking to over-the-air updates to deliver continuous feature enhancements, and moving toward subscription pricing to drive future revenues.
• Industrial markets will drive autonomous mobility: What’s old is new, as “old school” companies and industries lead the way in implementing autonomy. Think aerospace & defense, construction and rail – companies with “closed loop” scenarios with much more predictable use cases and challenges than those in automotive.
• Continued Consolidation: As the industry seeks complete autonomous mobility solutions, get ready for continued consolidation, especially with regard to the software perception layer (i.e. Uber/Aurora, Aptiv/Hyundai).
News of vision and imaging advances is queuing in my outliner. Some of it is sparked by autonomous vehicle requirements, some by quality requirements in manufacturing. This news from Neurala couples interest in artificial intelligence (AI) with vision systems.
Neurala announced the launch of new detection technology, designed and developed for manufacturing and industrial settings. Neurala’s latest feature enables manufacturers to find and identify objects in a field of view to ultimately improve the quality inspection process.
This feature is the latest AI model from the company, adding to its existing classification and anomaly recognition capabilities. With the addition of detection, manufacturers can now easily find objects in a field of view, opening up new use cases and applications across industries such as robotics, automotive and logistics. For example, automotive parts manufacturers could use Neurala’s detection feature to ensure the correct number of parts are in any given kit when checked against a bill of materials. If the technology identifies an object or part is missing, it can flag it to the system to be handled accordingly.
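Neurala has not published the API behind this feature, but the kit-check logic described above is straightforward to sketch. In this hypothetical example, the detection labels and the bill of materials are both illustrative, and `check_kit` stands in for whatever comparison step a real integration would run:

```python
from collections import Counter

def check_kit(detected_labels, bill_of_materials):
    """Compare detected part labels against a bill of materials.

    detected_labels: list of label strings from a (hypothetical)
    object-detection pass over one kit image.
    bill_of_materials: dict mapping part label -> required count.
    Returns a dict of part -> shortfall for any under-count.
    """
    found = Counter(detected_labels)
    missing = {}
    for part, required in bill_of_materials.items():
        if found[part] < required:
            missing[part] = required - found[part]
    return missing

# Illustrative kit: the BOM calls for 4 bolts, 4 washers, 1 bracket.
bom = {"bolt": 4, "washer": 4, "bracket": 1}
detections = ["bolt", "bolt", "washer", "washer",
              "washer", "washer", "bracket", "bolt"]
print(check_kit(detections, bom))  # one bolt short -> {'bolt': 1}
```

A non-empty result is what would be flagged to the downstream system for handling.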
“At Neurala we are on a mission to help the industrial and manufacturing industries harness the power of vision AI. With the addition of detection, manufacturers will be able to unlock new use cases and applications that will help them advance their Industry 4.0 initiatives,” said Max Versace, CEO and co-founder of Neurala. “We’ve heard from our partners that detection is a feature their customers have been asking for, so we are excited to bring this capability to market as we continue to expand our solutions and offerings aimed at helping manufacturers improve the quality inspection process.”
“Today’s manufacturers are under pressure to deliver, so they are looking for innovative solutions that will add value to their production line,” said Andrea Rossi, Managing Director, Visionlink srl. “The addition of detection expands the breadth of what Neurala’s VIA solution is capable of, giving customers a solution that addresses their most pressing needs while saving time, money and resources to improve their bottom line.”
In conjunction with the launch of its detector feature, Neurala is also releasing EtherNet/IP, expanding VIA’s ability to easily integrate into existing machinery, without the need for additional adapters or specialized knowledge. The most common industrial protocol in North America, the addition of EtherNet/IP as the next industrial protocol VIA supports enables Neurala to better serve both new and existing customers with native supported industrial outputs.
This looks like the next evolution of vision sensing. Vision is one of the key sensors touted by IIoT marketers. The Eyeonic Vision Sensor is said to be poised to accelerate next-generation machine vision in mobility, robotics, security, and other markets.
From its press release: SiLC Technologies Inc. (SiLC) has launched its powerful, compact vision sensor delivering coherent vision and chip-scale integration to the broader market. The Eyeonic Vision Sensor takes LiDAR to a new level of performance by providing accurate instantaneous depth, velocity, and dual-polarization intensity information while enabling immunity to multi-user and environmental interference. These features will enable robotic vehicles and machines to have the necessary data to perceive and classify their environment and help them predict future dynamics using low-latency, low-compute power and rule-based algorithms.
Harvesting the additional information that is carried by photons, Eyeonic is the foundation for the next generation of machine vision. The Eyeonic Vision Sensor is a first-of-its-kind FMCW LiDAR transceiver. At the nexus of the Eyeonic Vision Sensor is SiLC’s silicon photonic chip, which integrates LiDAR functionality into a single, tiny chip. Representing decades of silicon photonics innovation, this chip is the only readily integratable solution for manufacturers building the next generation of autonomous vehicles, security solutions and industrial robots.
Keenly driven by the technology challenges of commercializing silicon photonics, SiLC’s integration platform brings together high-performance components into a single silicon chip through mature semiconductor fabrication processes, offering a low-cost, compact, and low-power solution.
The ability to manufacture commercial-grade coherent LiDAR solutions has become a pacing factor for market growth – a situation that SiLC is remedying. SiLC intends to make this technology available to all system integrators and end-users, starting with enabling early access to strategic partners in autonomy, security and industrial applications. Offered in two configurations, fiber and fiberless, Eyeonic addresses the current roadblocks facing industries that rely upon vision sensors to embrace burgeoning market opportunities.
Fiberless vision sensors have long been sought after as they enable the lowest cost in a compact configuration. The fiber pigtailed Eyeonic allows for design flexibility by supporting configurations where the FMCW LiDAR transceiver and scanning unit are at different locations.
To facilitate customer development efforts, SiLC offers reference designs and a range of key components needed to develop a full solution. Examples of fully configured systems, based on the Eyeonic platform, will be made available as prototypes to enable rapid evaluation by customers and end users.
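SiLC does not disclose its signal-processing internals, but the general FMCW principle behind "instantaneous depth and velocity" is well known: a frequency-chirped beam produces beat frequencies that encode range, while target motion adds a Doppler shift. With a triangular chirp, averaging and differencing the up- and down-chirp beats separates the two. The chirp parameters below are illustrative, not SiLC's:

```python
C = 3.0e8              # speed of light, m/s
WAVELENGTH = 1.55e-6   # telecom-band laser wavelength, m (illustrative)
CHIRP_BW = 1.0e9       # chirp bandwidth B, Hz (illustrative)
CHIRP_T = 10e-6        # chirp duration T, s (illustrative)

def fmcw_range_velocity(f_beat_up, f_beat_down):
    """Recover range and radial velocity from triangular-chirp FMCW beats.

    On the up-chirp the Doppler shift subtracts from the range beat;
    on the down-chirp it adds. Averaging/differencing separates them.
    """
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced beat
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler component
    rng = C * CHIRP_T * f_range / (2.0 * CHIRP_BW)
    vel = WAVELENGTH * f_doppler / 2.0           # positive = approaching
    return rng, vel

# Synthetic target at 10 m, approaching at 1 m/s:
f_r = 2 * 10.0 * CHIRP_BW / (C * CHIRP_T)  # range beat, ~6.67 MHz
f_d = 2 * 1.0 / WAVELENGTH                 # Doppler shift, ~1.29 MHz
r, v = fmcw_range_velocity(f_r - f_d, f_r + f_d)
```

This single-measurement velocity readout, with no frame-to-frame tracking, is the core advantage coherent LiDAR claims over time-of-flight designs.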
I had a friend, now retired, who speculated on what one could do with the data from sensors installed all over a large petroleum or petrochemical production facility. Just think of what you could know in order to manage the entire facility, he’d exclaim. Recently I interviewed a team at a large automation supplier who discussed concentrating data from sensors and inputs from large machine lines into a PLC.
Sometimes the project you’re contemplating or the business model you have requires thousands of sensors, many gateways, and large compute power. What if you built a business model that is profitable servicing different customer needs? What if, instead of a few customers buying thousands of sensors, you had thousands of customers who needed only a few sensors in each area? Sounds like Chris Anderson’s The Long Tail.
Ray Almgren, President and CEO of Swift Sensors, recently took time out to talk with me about what’s happening there. When I first met Swift Sensors about five years ago, it reminded me of the companies built up around the ZigBee wireless mesh networking standard of about 20 years before. Much progress in terms of products and business models has come in the past couple of years.
Swift has designed more robust sensors and developed gateways and cloud-based software making life better for customers. It has found a place in that long tail of many companies who require fewer sensors per location but have many locations. Working with managed services suppliers mostly from the IT market has proved to be a good channel for recurring revenue from satisfying customers’ ongoing needs.
For years I have urged suppliers to look outside traditional manufacturing for expanded markets. Swift Sensors markets to schools, pharma companies, agriculture, hospitality, and small manufacturing. There continues to be a robust market for taking 4-20 mA signals to the cloud. That’s sort of a caveman-meets-Star Trek scenario, but it still fills a need.
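The 4-20 mA-to-cloud path rests on one simple piece of arithmetic: linear scaling with a "live zero" at 4 mA, so a dead loop is distinguishable from a zero reading. A minimal sketch of that standard conversion (the transmitter range in the example is illustrative):

```python
def scale_4_20ma(current_ma, eng_low, eng_high):
    """Convert a 4-20 mA loop current to engineering units.

    4 mA maps to eng_low and 20 mA to eng_high; a current well
    below 4 mA signals a broken loop (the 'live zero' advantage).
    """
    if current_ma < 3.8:  # common under-range fault threshold
        raise ValueError("loop current below live zero - probable wire break")
    return eng_low + (current_ma - 4.0) * (eng_high - eng_low) / 16.0

# Illustrative: a 0-150 psi pressure transmitter reading 12 mA (midscale)
print(scale_4_20ma(12.0, 0.0, 150.0))  # -> 75.0
```

A gateway performing this conversion before publishing to the cloud is all the "caveman meets Star Trek" bridge really requires.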
I suggest keeping an eye on the Managed Services, or “as-a-Service”, area for growth if you are a supplier and better service if you are a user.
Following is a description of the latest from Swift Sensors, taken from its website.
Swift Sensors is a simple and cost-effective solution for the automated monitoring of all your important assets. There are three components of the Swift Sensors system:
Sensors. The sensors record measurement readings and transmit this vital information wirelessly to the Gateway using encrypted BLE5 technology.
Gateway. The Gateway transmits the measurement readings up to the cloud over an SSL-secured WiFi, Ethernet, or cellular connection.
Cloud Software. You can access your secure cloud account right from your phone, desktop, laptop, or tablet. Check readings, create thresholds, monitor alerts, leave notes, review historical data and export reports.
Sensors are powered with 2 AAA lithium polymer batteries with an average lifespan of 6 – 8 years. Sensors can be powered on or put into sleep mode by pressing the center of the sensor. A green LED in the sensor blinks when the sensor powers on, turns solid when transitioning to sleep mode, and will blink when the “Find my Sensor” command is sent from the Console. All sensors send encrypted data to the gateway.
The cloud makes it simple to store sensor data without the hassle of setting up wires, servers, and storage devices. Our state-of-the-art console, combined with our seamless integration of cloud sensing technology, gives you the peace of mind of knowing your equipment, products, and facility are monitored 24/7.
Notifications can be set based on specific thresholds applied to specific measurements. These thresholds, whether set low or high, can trigger email, SMS, or phone call notifications sent immediately when a problem occurs. This not only helps optimize your facility but also prevents the most catastrophic events.
Third-party developers can create their own custom front-end web or mobile apps to display and manage Swift Sensors data and hardware. Web API documentation is available for free and provides full access to the Swift Sensors cloud system.
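The threshold-to-notification flow Swift Sensors describes is easy to model. The class, channel names, and `evaluate` function below are hypothetical illustrations of the pattern, not Swift Sensors' actual API:

```python
from dataclasses import dataclass

@dataclass
class Threshold:
    sensor_id: str
    low: float
    high: float
    channels: tuple  # e.g. ("email", "sms", "phone")

def evaluate(threshold, reading):
    """Return the alert messages a reading would trigger, if any.

    Mirrors the described behavior: a reading outside [low, high]
    fires a notification on each configured channel.
    """
    if threshold.low <= reading <= threshold.high:
        return []
    state = "below" if reading < threshold.low else "above"
    return [f"{ch}: sensor {threshold.sensor_id} {state} range ({reading})"
            for ch in threshold.channels]

# Hypothetical freezer sensor that must stay between -25 and -15 C
t = Threshold("freezer-01", low=-25.0, high=-15.0, channels=("email", "sms"))
print(evaluate(t, -12.5))  # warm excursion -> one message per channel
```

In a real deployment the returned messages would be handed to the cloud service's notification dispatcher rather than printed.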
Funding isn’t my primary interest, but this round is interesting because of all the hype around artificial intelligence (AI). I continue to see articles in major media implying that AI is a sort of Star Trek technology rather than something we’ve been using for 30 years or so. Neurala has been feeding me a lot of news this year about advances in its vision AI software. Now it has more money to further its development.
Neurala, the leader in vision AI software, announced that it has raised $12 million in funding to advance the development of vision AI for manufacturing. The round, led by Zebra Ventures and Pelion Venture Partners, with participation from Draper Associates, Friulia, AddValue, 360 Capital Partners, Idinvest Partners, Cougar Capital, and industrial investors IMA and Antares Vision, brings the total invested in Neurala to $26 million.
The funding will enable Neurala to evolve and accelerate adoption of its vision AI in the industrial and manufacturing sectors on a global scale, as manufacturers increasingly prioritize automation as part of Industry 4.0 initiatives.
Neurala is a pioneer in vision AI for manufacturing. Built on the company’s deep AI expertise, Neurala’s VIA software delivers an integrated solution designed to help manufacturers improve quality inspection on the production line. With VIA, manufacturers are empowered to answer the call for increased productivity, accuracy and speed.
In the last twelve months, the company has increased its capacity to identify and resolve problems in manufacturing facilities through expert system integrator partners and well-entrenched suppliers. In addition to expanding its work with system integrators threefold, Neurala has also worked with an ever-growing number of OEMs, including investors IMA, IHI Logistics & Machinery and FLIR, to deliver easy-to-use AI solutions that will improve the speed and efficiency of inspections at a price point that makes them affordable for a wide range of customers.
The funding comes at a time when manufacturers are increasingly focused on AI and automation as a key tool in their ability to adapt to new realities established by the pandemic. With this funding, Neurala will be able to evolve VIA to make it more efficient for a wider range of applications and use cases.
“This past year we were able to turn a global crisis into an opportunity to both completely transform our business and to catalyze much-needed innovation in the AI space,” said Max Versace, CEO and co-founder of Neurala. “There was always an opportunity for AI and automation to improve manufacturing, but the pandemic really accelerated the industry’s willingness to embrace the technology. Our team has worked relentlessly over the last year to introduce VIA to partners and customers across the globe, and now that the world is ready to embrace it, we are ready to deliver it. The funding will enable us to do that at a much greater scale that meets the demand we’re seeing in the space.”
This funding comes on the heels of the launch of Neurala’s subsidiary, Neurala Europe, based in Trieste, Italy. The new capital represents the next phase of growth for Neurala as it will be used to expand upon its newfound global presence as the company continues to help manufacturers around the world harness the power of vision AI.
“Today’s manufacturers are leveraging AI and automation to address challenges such as production constraints, supply chain disruptions, and imperfect workforce availability,” said Tony Palcheck, managing director of Zebra Ventures. “Zebra Technologies is proud to invest in Neurala as it commercializes VIA software to enable faster, more cost-effective, easy-to-deploy solutions for customers looking to improve their decision making and productivity on the production line.”
“As a long-time investor in Neurala, we have always recognized the power of its technology to enable smarter, autonomous decision-making in real-world scenarios,” said Ben Lambert, General Partner at Pelion Venture Partners. “Now we’re seeing a significant impact as Neurala has focused on applications in industrial and manufacturing. There’s a big opportunity for Neurala to grow that presence, not only in the US, but in Europe, Asia and beyond. We are excited to support the Neurala team in that journey as we know that it has the right team, the cutting-edge technology, and the global reach to capitalize on this significant market opportunity.”
What these cloud companies are doing with their platforms is becoming amazing. This news is from Google—a little later to the game than first Amazon Web Services and then Microsoft Azure—but it is quickly adding some interesting capabilities. Once again, we’re seeing artificial intelligence (AI) built into so many applications that it should no longer provoke surprise and awe. It’s a tool—and a powerful one if used appropriately. Check out this vision inspection solution.
Google Cloud today launched Visual Inspection AI, a new purpose-built solution to help manufacturers, consumer packaged goods companies, and other businesses worldwide reduce defects and deliver significant operational savings from the manufacturing and inspection process.
Today, defects in products such as computer chips, cars, machinery, and other products cost manufacturers billions of dollars annually. In fact, quality-related costs can consume 15% to 20% of sales revenue. In addition, high production volumes outpace the ability of humans to manually inspect each part.
Google Cloud has traditionally supported manufacturing quality control through its general purpose AI product, AutoML. Today, it is taking the next step by offering a purpose-built solution for manufacturers. Using Google Cloud’s leading computer vision technology, Visual Inspection AI automates the quality control process, enabling manufacturers to quickly and accurately detect defects before products are shipped. By identifying defects early in the process, customers can improve production throughput, increase yields, reduce rework, and reduce return and repair costs. Visual Inspection AI operates across a wide range of industries and use cases, potentially saving manufacturers millions of dollars at each facility.
Based on pilots run by Google Cloud customers, Visual Inspection AI can build accurate models with up to 300 times fewer human-labelled images than general-purpose ML platforms. This allows the solution to be deployed quickly and easily in any manufacturing setting. In addition, Visual Inspection AI customers improved accuracy in production trials by up to 10X compared with general-purpose ML approaches. And, unlike competing solutions that use simple anomaly detection, Visual Inspection AI’s deep learning allows customers to train models that detect, classify, and precisely locate multiple defect types in a single image.
“AI has proven to be particularly beneficial in helping to automate the visual quality control process for manufacturers—a particular pain point felt by the industry. We’ve been delighted by the strong interest in Visual Inspection AI, and we look forward to supporting more organizations as they continue to find innovative new ways to deploy AI at scale,” said Dominik Wee, Managing Director Manufacturing and Industrial at Google Cloud.
“We’ve been listening to the specific needs of the industry and have brought the best of Google AI technologies to help address those needs. The outcome is an AI solution that, built upon years of computer vision expertise, is purpose-built to solve quality control problems for nearly any type of discrete manufacturing process,” said Mandeep Waraich, Head of Product for Industrial AI at Google Cloud.
Building and training machine learning models typically requires deep AI expertise, as well as extensive databases containing thousands of labelled images. Such systems usually run in an on-premise data center or in the cloud, making them difficult to deploy at scale across the factory floor. With Google Cloud Visual Inspection AI:
- No special expertise is required. Quality, test, and manufacturing engineers can use the solution without any computer vision or AI subject-matter expertise. An intuitive user interface guides employees through all of the necessary steps.
- Engineers can get started quickly and build more accurate models. Machine learning models can be trained using as few as 10 labelled images (vs. thousands) and will automatically increase in accuracy over time as they are exposed to more products.
- Full edge-to-cloud capability: Inspection models can be downloaded to machines on the factory floor and run autonomously at the edge, whether it be for data governance reasons or to improve latency. At the same time, Visual Inspection AI is fully integrated in Google Cloud’s portfolio of analytics and ML/AI solutions. This enables manufacturers to combine insights from Visual Inspection AI with other data sources on the shop floor and beyond, for instance to identify root causes of quality problems or to cross-reference with supplier and customer data.
- Problems are resolved faster. Not only does the solution flag a defective component, but also Visual Inspection AI can locate and identify the specific defect within each part, which reduces the time spent by engineers to diagnose problems, rework parts, and implement process improvements.
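Google documents Visual Inspection AI returning localized, classified defect predictions, but the exact response schema isn't quoted here, so the dictionary keys and values below are a hypothetical sketch of post-processing such results rather than the actual API:

```python
def summarize_defects(predictions, min_confidence=0.5):
    """Group localized defect predictions by defect type.

    predictions: list of dicts with hypothetical keys
    'defect_type', 'confidence', and 'bbox' (x, y, w, h in pixels).
    Returns {defect_type: [bbox, ...]} for confident detections only.
    """
    summary = {}
    for p in predictions:
        if p["confidence"] >= min_confidence:
            summary.setdefault(p["defect_type"], []).append(p["bbox"])
    return summary

# Hypothetical single-image result with two defect types
preds = [
    {"defect_type": "scratch", "confidence": 0.93, "bbox": (120, 40, 18, 3)},
    {"defect_type": "dent",    "confidence": 0.81, "bbox": (300, 210, 25, 25)},
    {"defect_type": "scratch", "confidence": 0.32, "bbox": (10, 10, 5, 5)},
]
print(summarize_defects(preds))  # low-confidence scratch is filtered out
```

Grouping by type with locations attached is what lets an engineer jump straight to the flaw instead of re-inspecting the whole part.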
“Google Cloud’s approach to visual inspection is the roadmap most manufacturing companies are looking for. Manufacturers want flexibility, scale, inherent edge-to-cloud capabilities, access to both real-time and historical data, and ease of use and maintainability,” said Kevin Prouty, Group Vice President at IDC. “Google is one of those companies that has the potential to bring together IT, OT and an ecosystem of partners that manufacturers need to deploy AI on the shop floor at scale.”
Wide Range of Use Cases for Visual Inspection AI
Automotive manufacturers: A typical vehicle factory produces around 300,000 vehicles each year, and up to 10% of them may have parts that underwent rework or replacement during the manufacturing process to address some type of production defect. By automatically identifying defects in paint finish, seat fabrication, body welds, and end-of-line testing of mechanical parts, Visual Inspection AI could save automakers more than $50 million annually per plant.
“Google Cloud’s strength in machine learning and artificial intelligence is accelerating Renault’s Industry 4.0 transformation. We are adopting innovative computer vision solutions like Visual Inspection AI, AutoML and Vertex AI to implement more accurate quality controls with a significantly reduced time to market at a lower cost. We are working now on deploying these new tools in every Renault factory. Renault is ready for future-oriented manufacturing and welcomes the partnership with Google Cloud,” said Dominique Tachet, Digital Project Leader, Renault.
Electronics manufacturing services (EMS): Of the 15 million circuit boards produced each year in a typical EMS factory, as many as 6% may be reworked or scrapped during the assembly process due to internal or external quality failures, such as soldering errors or missing screws. Reducing rework and material waste can save such a facility nearly $23 million each year.
“It’s been amazing to work with Google Cloud to bring innovative machine learning and computer vision technologies to our quality processes. Engineers from FIH Mobile, a subsidiary of Foxconn, trust Google Cloud and we are achieving considerable product improvements through our collaboration. We cannot wait to roll out the Visual Inspection AI solution further across our extensive PCB manufacturing operations,” said Sabcat Shih, Senior Associate Manager, FIH Mobile.
Semiconductor production: A chip fabrication plant that produces 600,000 wafers per year could see yield losses of up to 3% from cracks and other defects. Implementing Visual Inspection AI can reduce production delays and scrap, saving up to $56 million per fab.
“With the shortage of AI engineers, Visual Inspection AI is an innovative service that can be used by non-AI engineers. We have found that we are able to create highly accurate models with as few as 10-20 defective images with Visual Inspection AI. We will continue to strengthen our partnership with Google to develop solutions that will lead our customers’ digital transformation projects to success,” said Masaharu Akieda, Division Manager, Digital Solution Division, KYOCERA Communication Systems Co., Ltd.
Sources:
- “Cost of Quality,” American Society for Quality (ASQ)
- “Internal documents reveal the grueling way Tesla hit its 5,000 Model 3 target,” Business Insider
- “Capturing the value of good quality in medical devices,” McKinsey & Company
- “Taking the next leap forward in semiconductor yield improvement,” McKinsey & Company