Andrew Ng of Landing AI on Building Vision AI Projects

The new A3 organization (the combined motion, vision, and robotics associations) held its annual show virtually over five days this week. I was busy, but I did tune in for some keynotes and panel discussions. I also browsed the trade show.

The platforms are getting better all the time. I was blown away by all the cool things today’s keynoter was able to pull off. But they still can’t quite get the trade show experience up to expectations.

Today’s keynote was given by Andrew Ng, CEO of Landing AI, a machine vision AI company. His talk was a low-key, effective explanation of AI and how to implement a successful AI-enabled vision inspection project. I’d almost call this “beyond hype”. 

Here are a few key points:

  • 75% of AI projects never go live.
  • Vision inspection has moved from rules-based systems to deep learning (aka AI/ML), which learns automatically.

Ng polled his audience about their experiences with AI projects; the key responses were:

  • Lack of data
  • Unreal expectations
  • Use case not well defined
  • Hype—perception of AI as futuristic

Challenges

  • Not sufficiently accurate
  • Insufficient data
  • More than just the initial ML code is needed
  • The system must be able to learn continuously

AI Systems = Model + Data

Improving the system depends on improving either the model or the data; experience in manufacturing shows the best results come from improving the data.

One Landing AI partner estimated 80% of his work was on preparing data (data processing) and only 20% on training a model.
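This data-centric point can be shown with a toy sketch. All data below is invented for illustration: the model (a trivial 1-nearest-neighbor classifier) is held completely fixed, and only the quality of the training labels changes.

```python
# Toy illustration of data-centric improvement: the model (1-nearest-neighbor)
# is held fixed; only the training labels change. All data is invented.

def nn_predict(train, x):
    # Classify x by the label of its nearest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(nn_predict(train, x) == y for x, y in test) / len(test)

clean = [(0.0, "ok"), (0.4, "ok"), (0.8, "ok"), (9.0, "defect"), (9.4, "defect")]
# The same data with one inconsistently labeled example.
noisy = [(0.0, "ok"), (0.4, "defect"), (0.8, "ok"), (9.0, "defect"), (9.4, "defect")]

test = [(0.45, "ok"), (0.7, "ok"), (9.2, "defect")]

print(accuracy(noisy, test))  # one bad label costs a third of the accuracy
print(accuracy(clean, test))  # 1.0 -- same model, cleaner data
```

The same mechanism is what makes the 80/20 split above plausible: with the model fixed, finding and fixing that one bad label is the whole improvement.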

AI Project Lifecycle

Scope → Collect Data → Train Model → Deploy in Production

Train Model feeds back to Collect Data, and Deploy feeds back to both Train Model and Collect Data.
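The stages and feedback arrows can be written out as a tiny state graph. The stage names come from the talk; the traversal code is my own sketch.

```python
# AI project lifecycle as a small directed graph. The first edge from each
# stage is the forward path; the extra edges are the feedback arrows.
LIFECYCLE = {
    "Scope": ["Collect Data"],
    "Collect Data": ["Train Model"],
    "Train Model": ["Deploy in Production", "Collect Data"],  # feedback arrow
    "Deploy in Production": ["Train Model", "Collect Data"],  # feedback arrows
}

def forward_path(start="Scope", end="Deploy in Production"):
    """Follow the forward edge from each stage until reaching `end`."""
    path = [start]
    while path[-1] != end:
        path.append(LIFECYCLE[path[-1]][0])
    return path

print(" -> ".join(forward_path()))
# Scope -> Collect Data -> Train Model -> Deploy in Production
```

The point of the extra edges is that a real project never runs the forward path just once; deployment findings route work back to earlier stages.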

Common problem—is data labeled consistently? E.g. are defects consistently defined?

Common data issues:

  • Inconsistent labels
  • Ambiguous definitions between two defect classes
  • Too few examples
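A minimal audit for the labeling-consistency problem above might look like the following sketch. The image IDs, annotator names, and defect labels are all invented; the idea is simply to group each item's labels across annotators and flag disagreements.

```python
from collections import defaultdict

# Invented inspection labels: (image_id, annotator, label).
labels = [
    ("img_001", "ann_a", "scratch"),
    ("img_001", "ann_b", "scratch"),
    ("img_002", "ann_a", "scratch"),
    ("img_002", "ann_b", "dent"),  # ambiguous boundary between two defects
    ("img_003", "ann_a", "ok"),
]

def find_inconsistent(labels):
    """Return image IDs whose annotators disagree on the label."""
    by_image = defaultdict(set)
    for image_id, _annotator, label in labels:
        by_image[image_id].add(label)
    return sorted(img for img, seen in by_image.items() if len(seen) > 1)

print(find_inconsistent(labels))  # ['img_002']
```

Every flagged image is a candidate for a clearer defect definition, which is exactly the kind of data work Ng argues pays off most.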

Final advice:

  • Start quickly
  • Focus on data
  • End-to-end platform support (lifecycle)

Coincidentally, Ng was interviewed by MIT Technology Review, and I received an email notice today. I’ve included a link, but you may need a subscription to read it.

Karen Hao for MIT Technology Review: I’m sure people frequently ask you, “How do I build an AI-first business?” What do you usually say to that?

Andrew Ng: I usually say, “Don’t do that.” If I go to a team and say, “Hey, everyone, please be AI-first,” that tends to focus the team on technology, which might be great for a research lab. But in terms of how I execute the business, I tend to be customer-led or mission-led, almost never technology-led.

A very frequent mistake I see CEOs and CIOs make: they say to me something like “Hey, Andrew, we don’t have that much data—my data’s a mess. So give me two years to build a great IT infrastructure. Then we’ll have all this great data on which to build AI.” I always say, “That’s a mistake. Don’t do that.” First, I don’t think any company on the planet today—maybe not even the tech giants—thinks their data is completely clean and perfect. It’s a journey. Spending two or three years to build a beautiful data infrastructure means that you’re lacking feedback from the AI team to help prioritize what IT infrastructure to build.

For example, if you have a lot of users, should you prioritize asking them questions in a survey to get a little bit more data? Or in a factory, should you prioritize upgrading the sensor from something that records the vibrations 10 times a second to maybe 100 times a second? It is often starting to do an AI project with the data you already have that enables an AI team to give you the feedback to help prioritize what additional data to collect.

In industries where we just don’t have the scale of consumer software internet, I feel like we need to shift in mindset from big data to good data. If you have a million images, go ahead, use it—that’s great. But there are lots of problems that can use much smaller data sets that are cleanly labeled and carefully curated.

Autonomous Driving With Next Generation Zero Accidents Sensing Platform

Not my normal news item, but this has general interest for those interested in the latest developments in autonomous driving. Maybe in my lifetime…

NPS 500 Autonomous Sensing Platform, the Most Compact, AI-Powered, Ultra Long-Range System that Intelligently Fuses and Integrates LiDAR, Radar and Cameras

Disruptive New Technology Establishes New Standard for Transportation Safety 

New High Performance LiDAR of More than 500 Meter Range and Innovative Radar Technology to See Around Corners

Neural Propulsion Systems (NPS), a pioneer in autonomous sensing platforms, today emerged from stealth to launch NPS 500™, the safest and most reliable platform for autonomous vehicles, enabling the industry to reach its Zero Accidents Vision. NPS 500 is the world’s first all-in-one, deeply integrated multi-modal sensor system focused on Level 4/5 autonomy.

The radically new sensor-fused system precisely interconnects NPS’s revolutionary solid-state MIMO LiDAR, super-resolution SWAM radar, and cameras to cooperatively detect and process 360° high-resolution data, giving vehicles the ability to prevent all accidents. The densely integrated sensor system enables vehicles to see around corners and beyond 500 meters of range with ultra-resolution accuracy and a highly adaptive frame rate. These breakthrough capabilities, NPS says, make the NPS 500 10X more reliable than currently announced sensor solutions.

“LiDAR, radar and cameras will all play significant roles in creating the ideal autonomous driving platform and there is no question that tightly connected sensors with onboard data fusion for automated driving enables more functionalities,” said Pierrick Boulay, senior analyst at Yole Développement. “This direction is unique and is likely to succeed in a market that could reach $25B in 2025 for sensing and computing in both ADAS and robot vehicles.”

“Our goal to prevent all transportation accidents is the holy grail for autonomous vehicles,” said Behrooz Rezvani, founder and CEO of NPS. “We are the sensing system behind the Zero Accidents Platform for large volume deployment at affordable cost. Existing technologies are not sufficient to achieve this paradigm, so we created our own more powerful LiDAR and radar. Our AI-driven sensor-fusion system processes this ultra-high resolution data to create the safest and most reliable solution in the market today. The NPS 500 slashes time-to-market for autonomous vehicle manufacturers, while being the most cost-effective.”

NPS 500 Product Details

The NPS next generation precision-built, multi-modal sensor system is the industry’s most advanced autonomous driving solution, addressing the physics-based limitations of each sensor type. The NPS 500 enhances and combines the strengths of LiDAR, radar and cameras to create a platform that leverages the capabilities of each technology, while addressing today’s challenges of Level 4/5 autonomy, including:

  • Cameras: Provide high-resolution images, but lack depth information and depend on lighting conditions
  • Radar: Measures velocity with great precision, but has lower resolution than LiDAR and is vulnerable to interference from other radars
  • LiDAR: Provides super-precise depth information, but its performance and reliability degrade in adverse weather and lighting conditions, and it can be occluded fairly easily

Features:

  • LiDAR: Revolutionary new solid-state MIMO-LiDAR™ architecture doubles range to ≥ 500 meters with super resolution and adaptive multi-beam search
  • Radar: New class of radar technology with 10X better detection reliability; simultaneous multi-band 360° FoV; 70X better resistance to interference from other radar signals
  • Software: First-ever AI fusion technology to “see around the corner”
  • Chips: 650 Tb/s of sensor data processed on a network of tightly connected custom signal processing chips

Benefits:

  • Range ≥ 500 meters @ 10% reflectivity
  • Double the reaction time of currently available LiDAR
  • Significant increase in sensor data reliability
  • See-around-the-corner capability
  • Anticipates pedestrian movement well before an intersection is reached
  • Detects moving objects approaching intersections well in advance
  • Built-in redundancy for maximum reliability in harsh environments, bad driving conditions, and tough terrain
  • Low maintenance: automakers can rely on NPS sensors once vehicles leave the dealership
  • Multi-beam adaptive scan up to 100 FPS to detect and track subtle movements
  • Sees through occlusion
  • Reduced time to market
  • Cost effective
  • Low CAPEX and OPEX for OEM customers
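The reaction-time benefit in the list above follows directly from doubling detection range: at a fixed closing speed, the window between first detection and reaching the object scales linearly with range. A quick check, where the 120 km/h closing speed and the 250 m baseline LiDAR range are my assumptions, not figures from the release:

```python
def reaction_window_s(range_m, closing_speed_kmh):
    """Seconds between first detection and reaching the detected object."""
    return range_m / (closing_speed_kmh / 3.6)  # km/h -> m/s

# At an assumed 120 km/h closing speed:
print(round(reaction_window_s(250, 120), 1))  # 7.5 s with an assumed 250 m LiDAR
print(round(reaction_window_s(500, 120), 1))  # 15.0 s at the claimed >= 500 m
```

Twice the range buys twice the seconds, which is where a "double the reaction time" style claim comes from.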

Pricing & Availability

Customers and partners may contact [email protected] for more information.

About Neural Propulsion Systems (NPS)

The NPS mission is to eliminate all transportation accidents and achieve the Zero Accidents Vision. NPS was founded in 2017 by Silicon Valley luminaries, including entrepreneur Behrooz Rezvani, founder and former CEO of Ikanos and Quantenna. NPS is delivering the world’s first all-in-one, deeply integrated multi-modal sensor system for Level 4/5 autonomy for large-volume deployment at an affordable cost. Its flagship product, NPS 500™, precisely interconnects NPS’s revolutionary solid-state MIMO LiDAR™, super-resolution SWAM™ radar, and cameras to cooperatively detect and process 360° high-resolution data. The densely integrated deep sensor-fusion system gives vehicles the ability to see around corners and more than 500 meters away, making it 10X more reliable than competitors.

AWS Announces Availability of Amazon Lookout for Vision

I have to admit amusement when IT companies discover industrial technologies and approach them from what they know—huge databases, analytics, teams of data scientists, and the like. This vision system announcement could have described how I approached selling and installing vision systems in the mid-90s. (Obviously not enough of them—I switched careers to writing about the technology.)

However, many engineers and maintenance managers have learned about non-standard purchasing of automation components. AutomationDirect (back then, PLCDirect) broke the mold of needing a local distributor rep. Now searching eBay for parts is almost common. So, why not go to Amazon? Heck, we all have Prime, right?

This is interesting. I’d be very curious to find out how many of you are looking to Amazon not only for servers and Web Services, but also for components and devices.

Amazon Lookout for Vision uses AWS-trained computer vision models on images and video streams to find anomalies and flaws in products or production processes

GE Healthcare, Amazon, and Basler among customers and partners using Lookout for Vision

Amazon Web Services Inc. (AWS), an Amazon.com Inc. company, announced the general availability of Amazon Lookout for Vision, a new service that analyzes images using computer vision and sophisticated machine learning capabilities to spot product or process defects and anomalies in manufactured products. By employing a machine learning technique called “few-shot learning,” Amazon Lookout for Vision is able to train a model for a customer using as few as 30 baseline images. 

Customers can get started quickly using Amazon Lookout for Vision to detect manufacturing and production defects (e.g. cracks, dents, incorrect color, irregular shape, etc.) in their products and prevent those costly errors from progressing down the operational line and from ever reaching customers. Together with Amazon Lookout for Equipment, Amazon Monitron, and AWS Panorama, Amazon Lookout for Vision provides industrial and manufacturing customers with the most comprehensive suite of cloud-to-edge industrial machine learning services available. With Amazon Lookout for Vision, there is no up-front commitment or minimum fee, and customers pay by the hour for their actual usage to train the model and detect anomalies or defects using the service. 

To get started with Amazon Lookout for Vision, visit https://aws.amazon.com/lookout-for-vision/

In today’s manufacturing industry, production line shutdowns due to missed defects or quality inconsistencies can result in millions of dollars of cost overruns and lost revenue every year. To avoid these expensive issues, industrial companies must maintain constant diligence to ensure quality control. Quality assurance in industrial processes typically requires human inspection, which can be tedious and inconsistent at best, or at worst, infeasible. 

Computer vision brings the speed and accuracy needed to identify defects consistently; however, implementing traditional computer vision solutions can be complex. Building computer vision models from scratch requires large amounts of carefully labeled images for each element of the manufacturing process. Then, teams of data scientists need to build, train, deploy, monitor, and fine-tune computer vision models to analyze each individual phase of the product inspection process. Even small changes in the manufacturing process (e.g. replacing an out-of-stock component with an equivalent alternative, updates to the product specifications, or a change in lighting) mean having to retrain and redeploy the individual model and perhaps other models downstream in the production process, which is tedious, complex, and time consuming. Because of these barriers, computer vision-powered visual anomaly systems remain out of reach for the vast majority of companies.

Amazon Lookout for Vision offers customers a highly accurate, low-cost anomaly detection solution that uses computer vision to process thousands of images an hour to spot defects and anomalies – with no machine learning experience required. Customers send camera images to Amazon Lookout for Vision in real-time to identify anomalies, such as damage to a product’s surface, missing components, and other irregularities in production lines. Utilizing a machine learning technique called few-shot learning (where the machine learning model is able to classify data based on a very small amount of training data), the service needs as few as 30 images of the acceptable and anomalous state as a baseline to begin assessing machine parts or manufactured products. 

In addition to enabling the service to detect anomalies without large amounts of training data, this capability also allows the service to be adaptable to a wide range of inspection tasks within industrial settings. After analyzing the data, Amazon Lookout for Vision then reports images that differ from the baseline via the service dashboard or the “DetectAnomalies” real-time API so that appropriate action can be taken. Amazon Lookout for Vision is sophisticated enough to maintain high accuracy with variances in camera angle, pose, and lighting arising from changes in work environments.  Customers also have the ability to provide feedback on the results (e.g. whether a prediction correctly identified an anomaly or not), and Lookout for Vision will automatically retrain the underlying model so that the service continuously improves. This feature allows the technology to adapt to changes in the manufacturing process and even understand when variations are permissible or not based on customer feedback. This means that customers can be more nimble and adapt their processes based on competitive advantages or external factors impacting their operations.
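Acting on a result from the "DetectAnomalies" API takes very little code. The sketch below uses a hand-written dict that mirrors the documented response shape (in production the response would come from the AWS SDK, e.g. `boto3.client("lookoutvision").detect_anomalies(...)`); the 0.80 confidence threshold is my own illustrative choice, not an AWS default.

```python
def should_reject(response, min_confidence=0.80):
    """Reject a part only when the model is confidently anomalous."""
    result = response["DetectAnomalyResult"]
    return result["IsAnomalous"] and result["Confidence"] >= min_confidence

# Hand-written sample mirroring the DetectAnomalies response shape.
sample = {"DetectAnomalyResult": {"IsAnomalous": True, "Confidence": 0.93}}

print(should_reject(sample))        # True: pull the part for review
print(should_reject(sample, 0.95))  # False: below threshold, let it pass
```

Borderline cases that pass at one threshold and fail at another are natural candidates for the human feedback loop the service uses to retrain its model.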

“Whether a customer is placing toppings on a frozen pizza or manufacturing finely calibrated parts for an airplane, what we’ve heard unequivocally is that guaranteeing only high-quality products reach end-users is fundamental to their business. While this may seem obvious, ensuring such quality control in industrial pipelines can in fact be very challenging,” said Swami Sivasubramanian, Vice President of Amazon Machine Learning for AWS. “We’re excited to deliver Amazon Lookout for Vision to customers of all sizes and across all industries to help them quickly and cost effectively detect defects at scale to save time and money while maintaining the quality their consumers rely on – with no machine learning experience required.”

Lookout for Vision is available directly via the AWS console as well as through supporting partners to help customers embed computer vision into existing operating systems within their facilities. The service is also compatible with AWS CloudFormation. Lookout for Vision is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Seoul), with availability in additional regions in the coming months.

GE Healthcare is a leading global medical technology and digital solutions innovator that enables clinicians to make faster, more informed decisions through intelligent devices, data analytics, applications, and services. “We are excited about the encouraging early results from Amazon Lookout for Vision that promise to help improve the speed, consistency, and accuracy of detecting product defects across our factories,” said Kozaburo Fujimoto, Operating Officer, General Manager, Manufacturing Division, Plant Manager, GE Healthcare Japan. “As one of the world’s most trusted healthcare companies with more than a century of technological progress and digital innovations, we look forward to capitalizing on the benefits that AWS’s industrial machine learning services will potentially bring to our manufacturing environments.”

Amazon’s Print-On-Demand (POD) facilities print books on demand to fulfill customer orders. “With POD, since books are manufactured when ordered by a customer, it is imperative to ensure precision at every step of the manufacturing process to offer a fast delivery time and the highest quality books to our customers,” said David Symonds, Worldwide Director of POD for Amazon. “With Amazon Lookout for Vision, we can automate and scale visual inspection at each step of manufacturing while running at full processing speeds, helping us ensure a great customer experience.” 

Basler is a global manufacturer and solution provider in industrial vision, providing cameras and machine vision systems for applications such as semiconductor inspection, robotics, food inspection, postal sorting, and inspections of printed images. “Reducing faults is one of the most important KPIs to consider for manufacturing companies. Traditional manual inspection is labor intensive and difficult to scale. By using computer vision for quality inspection, this process can be automated and lead to a significant reduction of costs,” said Gerrit Fischer, Head of Marketing for Basler AG. “Basler and Amazon Lookout for Vision provide a very lean architecture to adopt vision based anomaly detection in any manufacturing site. We’re excited to jointly provide our customers with complete vision solutions by combining Basler’s expertise in industrial vision and edge platforms with AWS’s investments in industrial machine learning.”

Dafgards is a household name in Sweden, manufacturing a broad assortment of foods. “We previously tried Amazon Lookout for Vision to automate the inspection of our pizza production lines to detect whether pizzas had enough cheese and the correct toppings, with good results,” said Fredrik Dafgård, Head of Operational Excellence & Industrial IoT for Dafgards. “We’re excited to extend Lookout for Vision to our other production lines such as hamburgers and quiches, to help us detect any anomalies like incorrect ingredients. Over time, we plan to scale Lookout for Vision across multiple production lines. Amazon Lookout for Vision will allow Dafgards to improve the consistency and accuracy of detecting defects and anomalies, allowing us to improve our overall production quality at scale.”

About Amazon Web Services

For almost 15 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 77 Availability Zones (AZs) within 24 geographic regions, with announced plans for 18 more Availability Zones and six more AWS Regions in Australia, India, Indonesia, Japan, Spain, and Switzerland. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs.

Automate Forward Keynote Speakers, Agenda Announced

This announcement for Automate Forward, the conference and trade show of the Association for Advancing Automation (A3), popped into my inbox this morning. The conference part of pandemic-era events draws many excellent speakers. The technology has greatly improved over the years, to the point that it at least doesn’t get in the way, if not improving the experience. Networking and trade show visiting still lag, but I’ve seen strides in the booth visitation area.

A personal observation from an old guy: I remember past lives when each of the areas of this trade show was a large event in itself. Industry and technology consolidation have reduced the size, but the importance to manufacturing remains.

In the spring of 2021, the Robotic Industries Association (RIA), AIA – Advancing Vision + Imaging, Motion Control & Motor Association (MCMA), and A3 Mexico will become the Association for Advancing Automation (A3), the global advocate for the benefits of automating. A3 promotes automation technologies and ideas that transform the way business is done. Combined, these associations represent over 1,100 automation manufacturers, component suppliers, system integrators, end users, research groups and consulting firms from throughout the world that drive automation forward.

A3 hosts a number of industry-leading events, including the new virtual Automate Forward (March 22-26, 2021) and the Automate Show & Conference (June 6-9, 2022, in Detroit, MI).

More than 80 global experts will speak at Automate Forward, the world’s premier virtual automation trade show and conference set for March 22-26, 2021.  The event also features more than 250 leading companies in an expanded exhibit area, enhanced networking opportunities, and a look at innovative automation startups.

Speakers include senior executives from companies such as 3M, General Motors, Intel, Microsoft, UPS, IBM, GE, FedEx, Siemens, and Procter & Gamble.

“With the adoption of automation accelerating, and the impossibility of holding large in-person shows in the US at the moment, Automate Forward will play a critical role in educating companies about how robotics, AI, machine vision, motion control, and related automation technologies can immediately help improve product quality, productivity, competitiveness, and worker safety,” said Jeff Burnstein, President of the Association for Advancing Automation (A3), the event’s host.

Automate Forward will include a robust virtual exhibit hall and networking center where attendees can connect directly with companies and experts to solve their automation challenges and get immediate answers. The trade show will be open daily from 9:00 AM – 5:00 PM ET for attendees to learn about products and systems that can help with unique challenges.

A3 will share a sneak peek of the association’s new brand identity at 9:30 am ET on Monday, March 22 exclusively for Automate Forward attendees. Join live to learn how its four current brands – RIA, AIA, MCMA, and A3 Mexico – are combining to create the new A3, representing over 1,100 global companies and organizations active in automation.

Automate Forward Keynote Sessions

Monday, March 22

10 AM ET PANEL: The New Industries Driving The Growth of Automation and Robotics
Robert Little, CEO, ATI Industrial Automation
Mark Lewandowski, Director – Robotics Innovation, Procter & Gamble
John Dulchinos, Vice President, Jabil
Ted Dengel, Managing Director, Operations Technology and Innovation, FedEx Ground
John Bubnikovich, Chief Regional Officer – North America, KUKA Robotics

11 AM ET: The Competitive Advantage is Here and It’s All About Digital
Raj Batra, President, Digital Industries, Siemens

1:30 PM ET: Moving Automation Forward: What is required?
Greg Brown, Vice President of Strategy and R&D, UPS

Tuesday, March 23

10 AM ET PANEL: The 2021 State of the Automation Industry Executive Roundtable
Mike Cicco, President & CEO, FANUC AMERICA
Patrick McDermott, President North America, B&R Automation
Dr. Thomas Evans, CTO Robotics, Honeywell Intelligrated
Christine Boles, Vice President, Internet of Things Group – General Manager, Industrial Solutions Division, Intel
Sebastien Schmitt, North American Robotics Division Manager, Stäubli

11 AM ET: Human Aware Robot Software and Tools for Delivering it
Rodney Brooks, Co-Founder and CTO, Robust.AI

1:30 PM ET: 3M’s Automation Journey: Driving Growth & Productivity
Debarati Sen, Vice President & General Manager Abrasive Systems Division Safety & Industrial Business Group, 3M

Wednesday, March 24

10 AM ET PANEL: The Rise of Smart Automation
Rashmi Misra, GM AI Platforms, Business Development, Microsoft
Jorge Ramirez, Global Director Automation and Chief Mfg. Cybersecurity Officer, General Motors
Rishi Vaish, CTO and VP, IBM AI Applications, IBM
John Lizzi, Executive Leader – Robotics, GE
Tom Panzarella, Senior Director of Perception, Seegrid

11 AM ET: Using Deep Learning and Simulation to Teach Robots Manipulation in Complex Environments
Dieter Fox, Senior Director of Robotics Research, NVIDIA

1:30 PM ET: Automation and the Future of Manufacturing
Indranil Sircar, CTO, Manufacturing Industry, Microsoft

Thursday, March 25

10 AM ET PANEL: How Collaborative Automation is Driving Productivity
Co-sponsored by the International Federation of Robotics
Milton Guerry, President, Schunk USA
Joe Gemma, Global Vice President of Sales & Marketing, Calvary Robotics
Greg Smith, President of the Industrial Automation Group at Teradyne
David Robers, Robotics Sales Manager – Americas, Denso Robotics

11 AM ET: Value Chain Integration and Optimization Through Robotics in Consumer segments and Retail
Marc Segura, Executive Global Business Line Leader – Consumer Segment Service Robotics, ABB Robotics and Machine Automation

Friday, March 26

10 AM ET PANEL: Autonomous Mobile Robots: How to Get Started
Karen Leavitt, Chief Marketing Officer, Locus Robotics
Søren E. Nielsen, President, Mobile Industrial Robots
Matt Rendall, CEO and Co-Founder, OTTO Motors
Rob Sullivan, President, AutoGuide Mobile Robots
Melonee Wise, CEO and Founder, Fetch Robotics

11 AM ET: Using an End-to-End Workflow to Build, Iterate, and Operationalize Deep Learning-Powered Visual Inspection Projects
Andrew Ng, CEO & Founder, Landing AI

ABB Launches Next Generation Cobots, Touts Energy Saving Motors and Drives

ABB has undergone significant divesting but retains a broad portfolio of industrial technologies. I have two pieces of news that fit today’s trends. The exciting thing in robotics right now is collaborative robots, called cobots, and ABB has upgraded its products. I remember trying to sell drives for energy savings in the 90s, and no one cared that much. Now, under the guise of sustainability, energy savings is hot.

ABB is expanding its collaborative robot (cobot) portfolio with the new GoFa and SWIFTI cobot families, offering higher payloads and speeds, to complement YuMi and Single Arm YuMi in ABB’s cobot line-up. These stronger, faster and more capable cobots will accelerate the company’s expansion in high-growth segments including electronics, healthcare, consumer goods, logistics and food and beverage, among others, meeting the growing demand for automation across multiple industries.  

GoFa and SWIFTI are intuitively designed so customers need not rely on in-house programming specialists. This will unlock industries that have low levels of automation, with customers able to operate their cobot within minutes of installation, straight out of the box, with no specialized training. 

“Our new cobot portfolio is the most diverse on the market, offering the potential to transform workplaces and help our customers achieve new levels of operational performance and growth,” said Sami Atiya, President of ABB’s Robotics & Discrete Automation Business Area. “They are easy to use and configure, and backed by our global network of on-call, online service experts to ensure that businesses of all sizes and new sectors of the economy, far beyond manufacturing, can embrace robots for the first time.”

ABB’s cobot portfolio expansion is engineered to help existing and new robot users accelerate automation amid four key megatrends including individualized consumers, labor shortages, digitalization and uncertainty that are transforming business and driving automation into new sectors of the economy.  The expansion follows the Business Area’s focus on high-growth segments through portfolio innovation, helping to drive profitable growth.   

Cobots are designed to operate in the presence of workers without the need for physical safety measures such as fences and to be very easy to use and install. In 2019, more than 22,000 new collaborative robots were deployed globally, up 19 percent compared to the previous year. The demand for collaborative robots is estimated to grow at a CAGR of 17 percent between 2020 and 2025 while the value of global cobot sales is expected to increase from an estimated ~$0.7 billion in 2019 to ~$1.4bn by 2025. The global market for all industrial robots is projected to grow from ~$45 billion in 2020 to ~$58 billion by 2023 (CAGR of 9 percent).

GoFa and SWIFTI are engineered to help businesses automate processes to assist workers with tasks including material handling, machine tending​, component assembly​ and packaging in manufacturing, medical laboratories, logistics hubs and warehouses, workshops, and small production facilities.

Users comfortable with operating a tablet or smartphone will be able to program and re-program the new cobots with ease, using ABB’s fast set-up tools. Customers will also benefit from ABB’s global industry and application expertise, which has been developed from installing more than 500,000 robot solutions since 1974 and supported by ABB’s network of over 1,000 global partners.

ABB urges greater adoption of high-efficiency motors and drives to combat climate change – global electricity consumption could be reduced by 10%

In a new whitepaper published this week, ABB reveals potential for significant energy efficiency improvements in industry and infrastructure enabled by the latest and most high-efficiency motors and variable speed drives. ABB calls on governments and industry to accelerate adoption of the technology to help combat climate change.

According to the International Energy Agency (IEA), industry accounts for 37% of global energy use and some 30% of global energy is consumed in buildings.

While mostly hidden from public view, electric motors – and the variable speed drives which optimize their operation – are embedded in almost every built environment. They power a vast range of applications fundamental to our modern way of life, from industrial pumps, fans and conveyors for manufacturing and propulsion systems for transportation to compressors for electrical appliances and heating, ventilation and air conditioning systems in buildings.

Motor and drive technologies have seen exceptionally rapid advancement in the past decade, with today’s innovative designs delivering remarkable energy efficiencies. However, a significant number of industrial electric motor-driven systems in operation today – in the region of 300 million globally – are inefficient or consume much more power than required, resulting in monumental energy wastage.

Independent research estimates that replacing these systems with optimized, high-efficiency equipment could reduce global electricity consumption by up to 10%. In turn, this would account for a significant reduction in the greenhouse gas emissions needed to meet the 2040 climate goals established by the Paris Agreement.
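To put the "up to 10%" claim in perspective, here is a back-of-envelope sketch. The 10% figure comes from the research ABB cites; the global consumption figure of roughly 23,000 TWh per year is my own assumed round number for illustration, not from the article.

```python
# Rough estimate of the potential savings implied by the article's 10% claim.
# GLOBAL_ELECTRICITY_TWH is an assumed round number, not a cited figure.

GLOBAL_ELECTRICITY_TWH = 23_000   # assumed annual global consumption, TWh
MAX_REDUCTION = 0.10              # "up to 10%" reduction cited in the article

potential_savings_twh = GLOBAL_ELECTRICITY_TWH * MAX_REDUCTION
print(f"Potential savings: {potential_savings_twh:,.0f} TWh per year")
```

Even as a rough upper bound, a saving on that order of magnitude would exceed the annual electricity consumption of most individual countries, which is why ABB frames motor and drive efficiency as a major climate lever.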

“Industrial energy efficiency, more than any other challenge, has the single greatest capacity for combatting the climate emergency. It is essentially the world’s invisible climate solution,” said Morten Wierod, President, ABB Motion. “For ABB, sustainability is a key part of our company Purpose and of the value that we create for all of our stakeholders. By far the biggest impact we can have in reducing greenhouse gas emissions is through our leading technologies, which reduce energy use in industry, buildings and transport.”

Considerable steps have already been taken to support the uptake of electric vehicles and renewable energy sources. ABB believes it is time to do the same for an industrial technology that will deliver even greater benefits for the environment and the global economy.

To take advantage of the tremendous opportunities afforded by energy efficient drives and motors to reduce greenhouse gas emissions, ABB says all stakeholders have a critical role to play:

  • Public decision makers and government regulators need to incentivize their rapid adoption,
  • Businesses, cities, and countries need to be aware of both the cost savings and environmental advantages and be willing to make the investment, and
  • Investors need to reallocate capital towards companies better prepared to address the climate risk.

“While our role at ABB is to always provide the most efficient technologies, products and services to our customers, and continue to innovate for ever greater efficiency, that in itself is not enough. All stakeholders need to work together to bring about a holistic transformation in how we use energy. By acting and innovating together, we can keep critical services up and running while saving energy and combatting climate change,” concludes Morten Wierod.

ABB’s white paper “Achieving the Paris Agreement: The Vital Role of High-Efficiency Motors and Drives in Reducing Energy Consumption” can be downloaded here.

Save the Date: Automation Webinar Moderated by Me

Join me on March 24 at 1 PM CDT as I moderate an automation webinar on DWFritz Automation’s application of Bosch Rexroth’s new ctrlX CORE controller. Experts from both companies will come together to discuss how they implemented a concurrent design/build project for a complex, high-speed “factory of the future” automated assembly line. During the webinar, attendees will learn how Bosch Rexroth can help transform current manufacturing lines into factories of the future.

DWFritz Automation designs and manufactures custom automation systems for advanced manufacturing in the medical device, aerospace, consumer electronics, energy storage, automotive, semiconductor, and other industries.

The discussion will explore the challenges as well as the engineering and technology solutions used to successfully speed up the process for getting a high-precision consumer electronics assembly line to market.

In addition to engineering support, Bosch Rexroth provided a multi-technology offering from its Automation and Electrification, Linear Motion and Assembly Technology business units.

The automation platform included the new ctrlX CORE controller from Bosch Rexroth’s recently introduced ctrlX AUTOMATION system. With its open and flexible architecture, ctrlX CORE removes the boundaries between IPC, embedded system and drive-based technology platforms.

Relevant to anyone responsible for managing complex machine automation projects, the panel of experts will cover several areas, including:

  • Simplifying a complex design/build process to shorten lead times and accelerate time to market
  • Successful collaboration for engineering, logistics and fulfillment
  • Realization of advanced automation, linear motion and assembly technology and components

In addition, this free, one-hour webinar will open up for attendee Q&A.

For more information and to register, see “Rexroth ON LOCATION — The Future of Factory Automation.”