The new A3 organization (the Association for Advancing Automation, a merger of the motion, vision, and robotics associations) held its annual show virtually over five days this week. I was busy, but I did tune in for some keynotes and panel discussions. I also browsed the trade show.

The platforms are getting better all the time. I was blown away by all the cool things today’s keynoter was able to pull off. But they still can’t quite get the trade show experience up to expectations.

Today’s keynote was given by Andrew Ng, CEO of Landing AI, a machine vision AI company. His talk was a low-key, effective explanation of AI and how to implement a successful AI-enabled vision inspection project. I’d almost call this “beyond hype”. 

Here are a few key points:

  • 75% of AI projects never go live.
  • Vision inspection has moved from rules-based systems to deep learning (a.k.a. AI/ML) systems that learn from labeled examples automatically; a toy contrast follows below.
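To make that shift concrete, here is a minimal sketch of my own (not from the talk; the data is synthetic): a rules-based check flags any pixel above a hand-tuned threshold, while the learned model, a simple classifier standing in for the deep networks used in practice, fits its decision boundary from labeled examples.

```python
# Toy illustration (mine, not Ng's): rules-based vs. learned inspection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patch(defect):
    # 8x8 grayscale patch; a "defect" is a bright 3x3 blob on a noisy background.
    patch = rng.normal(0.3, 0.1, (8, 8))
    if defect:
        r, c = rng.integers(2, 6, size=2)
        patch[r - 1:r + 2, c - 1:c + 2] += 0.35
    return patch.clip(0, 1)

X = np.array([make_patch(i % 2 == 1).ravel() for i in range(200)])
y = np.array([i % 2 for i in range(200)])

# Rules-based: flag a defect when any pixel crosses a hand-tuned threshold.
rule_pred = (X.max(axis=1) > 0.6).astype(int)
print("rule-based accuracy:", (rule_pred == y).mean())

# Learned: the decision boundary comes from labeled examples instead of
# hand-written rules (a stand-in for the deep networks used in practice).
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("learned accuracy:", clf.score(X[150:], y[150:]))
```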

Ng polled his audience about their experiences with AI projects; the key responses:

  • Lack of data
  • Unrealistic expectations
  • Use case not well defined
  • Hype: the perception of AI as futuristic

Challenges

  • Models not sufficiently accurate
  • Insufficient data
  • More than just the initial ML code is needed
  • The system must be able to keep learning continuously

AI System = Model + Data

Improving the system means improving either the model or the data; Ng's experience in manufacturing shows the best results come from improving the data.

One Landing AI partner estimated that 80% of his work went into preparing data and only 20% into training a model.
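As a rough illustration of why that split can pay off (entirely synthetic; not Landing AI's numbers or code): tuning a model on inconsistently labeled data often buys less than simply fixing the labels.

```python
# Synthetic demo of data-centric vs. model-centric improvement (my sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

# Simulate inconsistent labeling: flip 20% of the labels.
noisy = y.copy()
noisy[rng.random(len(y)) < 0.2] ^= 1

X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]
noisy_train = noisy[:800]

# Model-centric: sweep a hyperparameter while keeping the noisy labels.
tuned = max(
    LogisticRegression(C=c, max_iter=1000)
    .fit(X_train, noisy_train)
    .score(X_test, y_test)
    for c in (0.01, 0.1, 1.0, 10.0)
)
print("best tuned model on noisy labels:", round(tuned, 3))

# Data-centric: fix the labels and keep the default model.
fixed = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
print("default model on clean labels:", round(fixed, 3))
```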

AI Project Lifecycle

Scope  Collect Data  Train Model  Deploy in Production

Train Model feedback to Collect Data

Deploy feedback to train model and also feedback to collect data
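Here is a minimal, runnable sketch of those feedback loops. Every function name here is a hypothetical placeholder of mine, not Landing AI's API, and a toy accuracy number stands in for real training; retraining on the newly collected data is the deploy-to-train loop.

```python
# Toy lifecycle loop (my sketch): Scope -> Collect -> Train -> Deploy, with feedback.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    examples: list = field(default_factory=list)

def collect_data(dataset, note="initial batch"):   # Collect Data
    dataset.examples.append(note)
    return dataset

def train_model(dataset):                          # Train Model
    # Stand-in metric: more curated data -> better accuracy, up to a cap.
    return min(0.5 + 0.1 * len(dataset.examples), 0.95)

def deploy(accuracy):                              # Deploy in Production
    # Production surfaces hard cases until the model is good enough.
    return ["hard case from the line"] if accuracy < 0.9 else []

dataset = collect_data(Dataset())
while True:
    accuracy = train_model(dataset)
    if accuracy < 0.7:                             # Train Model -> Collect Data
        collect_data(dataset, "more examples for weak classes")
        continue
    misses = deploy(accuracy)
    if not misses:
        break
    collect_data(dataset, misses[0])               # Deploy -> Collect Data
print(f"shipped at accuracy {accuracy:.2f} with {len(dataset.examples)} data batches")
```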

A common problem: is the data labeled consistently? For example, are defects defined the same way by everyone? Typical data issues: inconsistent labels, ambiguous definitions between two defect classes, and too few examples.
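A quick way to surface the inconsistent-label problem is to look for items that different annotators labeled differently. A small sketch with made-up data:

```python
# Flag items where annotators disagree (illustrative data, my sketch).
from collections import defaultdict

# (item_id, annotator, label) triples; in practice, read from your label store.
labels = [
    ("img_001", "alice", "scratch"),
    ("img_001", "bob",   "scratch"),
    ("img_002", "alice", "scratch"),
    ("img_002", "bob",   "dent"),     # disagreement: ambiguous defect boundary
    ("img_003", "alice", "ok"),
]

by_item = defaultdict(set)
for item, _, label in labels:
    by_item[item].add(label)

conflicts = {item: lbls for item, lbls in by_item.items() if len(lbls) > 1}
print(conflicts)  # e.g. {'img_002': {'scratch', 'dent'}}
```

In a real project, the conflicting items would drive a rewrite of the labeling spec, such as a clearer boundary between "scratch" and "dent".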

Final advice:

  • Start quickly
  • Focus on data
  • Use platform support for the end-to-end lifecycle

Coincidentally, Ng was interviewed by MIT Technology Review, and I received an email notice today. I've included a link, but you may need a subscription to get in.

Karen Hao for MIT Technology Review: I’m sure people frequently ask you, “How do I build an AI-first business?” What do you usually say to that?

Andrew Ng: I usually say, “Don’t do that.” If I go to a team and say, “Hey, everyone, please be AI-first,” that tends to focus the team on technology, which might be great for a research lab. But in terms of how I execute the business, I tend to be customer-led or mission-led, almost never technology-led.

A very frequent mistake I see CEOs and CIOs make: they say to me something like “Hey, Andrew, we don’t have that much data—my data’s a mess. So give me two years to build a great IT infrastructure. Then we’ll have all this great data on which to build AI.” I always say, “That’s a mistake. Don’t do that.” First, I don’t think any company on the planet today—maybe not even the tech giants—thinks their data is completely clean and perfect. It’s a journey. Spending two or three years to build a beautiful data infrastructure means that you’re lacking feedback from the AI team to help prioritize what IT infrastructure to build.

For example, if you have a lot of users, should you prioritize asking them questions in a survey to get a little bit more data? Or in a factory, should you prioritize upgrading the sensor from something that records the vibrations 10 times a second to maybe 100 times a second? It is often starting to do an AI project with the data you already have that enables an AI team to give you the feedback to help prioritize what additional data to collect.

In industries where we just don’t have the scale of consumer software internet, I feel like we need to shift in mindset from big data to good data. If you have a million images, go ahead, use it—that’s great. But there are lots of problems that can use much smaller data sets that are cleanly labeled and carefully curated.
