Reports continue to surface in conversations and blogs that management has finally had a "come to Jesus" moment regarding the threat of cyber attacks on critical control infrastructure.
This reminds me of the situation with safety systems 15-20 years ago. "Safe enough" was good enough for many managers. Once engineers figured out, albeit slowly, how to speak the language managers know best, dollars, integrating safety into machine and process design became routine. Designing with safety in mind led to more robust designs that also increased reliability.
Regarding cybersecurity, many engineers have felt that as long as you don't connect a control system to the Internet, you're safe. This is called the "air gap" approach. But not all threats come over the wired network. Sometimes they come through a social network: handing USB memory sticks to workers who insert them into computers that are part of a control system, which is how Stuxnet propagated initially. As Joe Weiss reminds me in our periodic conversations, control systems in and of themselves are not secure.
Eric Byres at Tofino Security just posted a blog asking if the air gap debate is over. He figures there will always be people who think they are safe simply because they are not connected.
I am writing an article about security in the age of cloud computing. (If you know something about that, please contact me.) While researching, I ran across a company I had never spoken with before, Wedge Networks. Its approach is to monitor all message traffic into and out of a device, which could be a control system, and match it against libraries of known problems. It can also block certain data types from entering or leaving the device.
That is certainly not a thorough description of Wedge Networks' solution, but it points to interesting possibilities: one more tool to help secure control systems.
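To make the general idea concrete, here is a minimal, purely illustrative sketch of signature-based content inspection in Python. It is not Wedge Networks' implementation; the signatures, file-type prefixes, and function names are all hypothetical, chosen only to show what "match traffic against libraries of known problems and block certain data types" might look like in the simplest possible form.

```python
# Illustrative toy content-inspection filter (not any vendor's actual product).
# Each message is checked against a small library of known-bad byte signatures,
# then against a blocklist of file types identified by their leading "magic" bytes.

KNOWN_BAD_SIGNATURES = {
    b"cmd.exe /c",        # hypothetical signature: embedded shell command
    b"DROP TABLE",        # hypothetical signature: SQL injection fragment
}

BLOCKED_FILE_PREFIXES = {
    b"MZ": "Windows executable",
    b"\x7fELF": "Linux executable",
}

def inspect_message(payload: bytes) -> tuple[bool, str]:
    """Return (allowed, reason) for a single inbound or outbound message."""
    for signature in KNOWN_BAD_SIGNATURES:
        if signature in payload:
            return False, f"matched known-bad signature {signature!r}"
    for prefix, file_type in BLOCKED_FILE_PREFIXES.items():
        if payload.startswith(prefix):
            return False, f"blocked file type: {file_type}"
    return True, "clean"

if __name__ == "__main__":
    allowed, reason = inspect_message(b"\x7fELF\x01\x01...rest of file")
    print(allowed, reason)  # False  blocked file type: Linux executable
```

A real product would of course work at the network layer, reassemble streams, and update its signature libraries continuously; the point of the sketch is only the two-step logic of matching known problems and filtering disallowed content types.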