We’re still getting more thoughts and information about the virus that hit a Siemens WinCC customer. Add that to the continuing revelations coming from the BP Deepwater Horizon oil rig, and it’s been a summer of safety and security news.
I put both in the same sentence because safety problems and security problems often stem from a common root cause–a pattern of risky behaviors by people. Sometimes there is a lack of company policy. Sometimes a lack of oversight. Sometimes just small decisions that don’t appear risky by themselves but that, added to others, create a pattern or environment of risk.
The attack on Siemens actually exploited a Microsoft Windows hole. But the hole only works if a person comes into possession of an infected USB stick–and inserts it into a PC that allows these devices to automatically run and load onto the PC. That’s either a risky behavior or a security breach where an unauthorized person has access to a PC.
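For readers wondering what "automatically run" means in practice: Windows AutoRun can be disabled through a registry policy, which is one common hardening step for plant-floor PCs. The snippet below is a minimal sketch of a .reg file using the documented NoDriveTypeAutoRun value (0xFF disables AutoRun on all drive types); check your own IT policy and Windows version before applying anything like this.

```
Windows Registry Editor Version 5.00

; Disable AutoRun for all drive types (0xFF = all bits set).
; Applies machine-wide via the Explorer policies key.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

Note that hardening AutoRun reduces one avenue of exposure but does not by itself close the underlying Windows hole; the behavioral issue–unknown USB sticks finding their way into control-system PCs–still needs policy and oversight.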
The problems in the Gulf also appear to have been the result of a buildup of small decisions that accumulated into a large problem. Today in The New York Times, a worker alleges that the oil rig’s alarm was not fully turned on so that workers wouldn’t be awakened.
Meanwhile Computerworld (where I first learned of the Siemens virus) has followed up with more information about the worm.
The New York Times also has an article discussing how we learn from disasters.
Eric Murphy at the OPC Exchange blog weighed in on security. He rightly points out that OPC itself has used part of its recent upgrade to enhance security.
While I’m on the subject of OPC, I’ll point to Tom Burke’s blog where he announces the OPC Foundation Certification Lab.