“Siri, what’s the weather in Bangor?”
“Alexa, buy some toilet paper.”
“Zelda, check the status of the control loop at P28.”
The operator interface is many years removed from its last significant upgrade. Yes, the Abnormal Situation Management Consortium (led by Honeywell), the Human-Centered Design practice at Emerson Process Management, and the Center for Operator Performance have all worked on developing more readable and intuitive screens.
But, there is something more revolutionary on the horizon.
A big chunk of last week's Gillmor Gang, a technology-oriented video conversation, was devoted to conversational interfaces. Apple's Siri has become quite popular. Amazon Echo (Alexa) has gained a large following.
Voice activation for operator interface
Many challenges lie ahead for conversation (or voice) interfaces. Obviously many smart people are working on the technology. This may be a great place for the next industrial automation startup. This or bots. But let’s just concentrate on voice right now.
Especially look at how the technologies of various devices are coming together.
I use the Apple ecosystem, but you could do this in Android.
Right now my MacBook Air, iPad, and iPhone are all interconnected. I shoot a photo on my iPhone and it appears in my Photos app on the other two. If I had an Apple Watch, then I could communicate through my iPhone verbally. It’s all intriguing.
I can hear all the objections right now. OK, Luddites <grin>, I remember a customer in the early 90s who told me there would never be a wire (other than I/O) connected to a PLC in his plant. So much for predictions. We're all wired now.
What have you heard or seen? How close are we? I’ve done a little angel investing, but I don’t have enough money to fund this. But for a great idea…who knows?
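To make the idea concrete, here is a minimal sketch of what sits behind a command like "Zelda, check the status of the control loop at P28": once a speech engine has transcribed the utterance, a thin layer of intent parsing maps it to a control-system query. Everything here is invented for illustration; the tag names, values, and the `LOOP_STATUS` lookup stand in for a real historian or DCS interface.

```python
import re

# Hypothetical loop data standing in for a real historian/DCS query.
# Tag names and values are invented for this sketch.
LOOP_STATUS = {
    "P28": {"mode": "AUTO", "pv": 72.4, "sp": 75.0},
    "P29": {"mode": "MANUAL", "pv": 12.1, "sp": 10.0},
}

def handle_utterance(text):
    """Map a transcribed voice command to a control-system status reply."""
    match = re.search(r"status of the control loop at (\w+)", text, re.IGNORECASE)
    if not match:
        return "Sorry, I didn't catch a loop name."
    tag = match.group(1).upper()
    loop = LOOP_STATUS.get(tag)
    if loop is None:
        return f"I don't know a loop called {tag}."
    return (f"Loop {tag} is in {loop['mode']} mode, "
            f"process value {loop['pv']}, setpoint {loop['sp']}.")

print(handle_utterance("Zelda, check the status of the control loop at P28"))
# → Loop P28 is in AUTO mode, process value 72.4, setpoint 75.0.
```

A production system would replace the regex with a real natural-language model and the dictionary with live plant data, but the shape of the problem, utterance in, structured query out, spoken answer back, stays the same.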
Hey Google, take a video.
Apps are so last year. Now the topic of the future appears to be bots and conversational interfaces (Siri, etc.). Many automation and control suppliers have added apps for smart phones. I have a bunch loaded on my iPhone. How many do you have? Do you use them? What if there were another interface?
I’ve run across two articles lately that deal with a coming new interface. Check them out and let me know what you think about these in the context of the next HMI/automation/control/MES generations.
Sam Lessin wrote a good overview at The Information (a subscription website, but as a subscriber I can unlock some articles), "On Bots, Conversational Apps, and Fin."
Lessin looks at the history of personal computing from shrink-wrapped applications to the Web to apps to bots. Another way to look at it: client side, to server side, to client side, and now back to server side. Server side is easier for developers and removes some power from vertical companies.
Lessin also notes a certain "app fatigue": we have loaded up our phones with apps only to discover we use just a fraction of them.
I spotted this on Medium, a newer blogging platform for casual bloggers.
It was written by Ryan Block, former editor-in-chief of Engadget, founder of gdgt (both of which were sold to AOL), and now a serial entrepreneur.
He looks at human/computer interfaces, “People who’ve been around technology a while have a tendency to think of human-computer interfaces as phases in some kind of a Jobsian linear evolution, starting with encoded punch cards, evolving into command lines, then graphical interfaces, and eventually touch.”
Continuing, “Well, the first step is to stop thinking of human computer interaction as a linear progression. A better metaphor might be to think of interfaces as existing on a scale, ranging from visible to invisible.”
Examples of visible interfaces would include the punchcard, many command line interfaces, and quite a few very useful but ultimately shoddy pieces of software.
Completely invisible interfaces, on the other hand, would be characterized by frictionless, low cognitive load usage with little to no (apparent) training necessary. Invisibility doesn’t necessarily mean that you can’t physically see the interface (although some invisible interfaces may actually be invisible); instead, think of it as a measure of how fast and how much you can forget that the tool is there at all, even while you’re using it.
Examples of interfaces that approach invisibility include many forms of messaging, the Amazon Echo, the proximity-sensing / auto-locking doors on the Tesla Model S, and especially the ship computer in Star Trek (the voice interface, that is, not the LCARS GUI, which is a highly visible interface. Ahem!).
Conversation-driven product design is still nascent, but messaging-driven products still represent massive growth and opportunity, expected to add another billion users in the next two years alone.
For the next generation, Snapchat is the interface for communicating with friends visually, iMessage and Messenger are the interfaces for communicating with friends textually, and Slack is (or soon will be) the interface for communicating with colleagues about work. And that's to say nothing of the nearly two billion users currently on WhatsApp, WeChat, and Line.
As we move to increasingly invisible interfaces, I believe we’ll see a new class of messaging-centric platforms emerge alongside existing platforms in mobile, cloud, etc.
As with every platform and interface paradigm, messaging has its own unique set of capabilities, limitations, and opportunities. That’s where bots come in. In the context of a conversation, bots are the primary mode for manifesting a machine interface.
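What "bot as machine interface" means in practice can be sketched in a few lines: the bot watches a channel, matches messages against registered commands, and replies in the same conversation thread. This is a minimal illustration, not any particular platform's API; the command names and plant data are invented.

```python
# Minimal sketch of a bot inside a messaging channel: messages that start
# with "/" are dispatched to registered handlers; everything else is
# ordinary human conversation the bot ignores. All names are hypothetical.
HANDLERS = {}

def command(name):
    """Decorator that registers a handler for '/name' messages."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@command("alarms")
def list_alarms(args):
    # Stand-in for a query against a real alarm server.
    return "2 active alarms: TI-101 HIGH, FIC-204 DEVIATION"

@command("status")
def loop_status(args):
    # Stand-in for a live loop-status query; args carries the tag name.
    return f"Loop {args or '(none given)'}: AUTO, PV tracking SP"

def on_message(text):
    """Dispatch a channel message of the form '/command args' to its handler."""
    if not text.startswith("/"):
        return None  # human-to-human chatter; the bot stays silent
    name, _, args = text[1:].partition(" ")
    handler = HANDLERS.get(name)
    if handler is None:
        return f"Unknown command: {name}"
    return handler(args)

print(on_message("/alarms"))
print(on_message("/status P28"))
```

The point of the pattern is that the conversation itself is the interface: the same channel where colleagues discuss the plant is where the machine answers questions, searchably and in context.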
Organizations will soon discover, yet again, that teams want to work the way they live, and we all live in messaging. Workflows will be retooled from the bottom up to optimize around real-time, channel-based, searchable, conversational interfaces.
Humans will always be the entities we desire talking to and collaborating with. But in the not too distant future, bots will be how things actually get done.