By Gregory Hale
Just over a year after the Triton attack shut down a gas refinery in Saudi Arabia by compromising its safety system, it has become even clearer that manufacturing automation users need to adopt a comprehensive, end-to-end cybersecurity approach to ensure systems stay up and running – and safe.
The Triton incident sent a chill throughout the industry because it was a documented case of a safety system compromised in a way that could have caused serious harm to a critical infrastructure facility.
Now, it would be very easy to dwell on the negatives of the incident, and there have been many conference talks where speakers jumped on the hype, saying, “We could have prevented this.”
Well, maybe those experts could have, or maybe not, but the real question is: What has the industry learned, and where is it headed?
In August last year, the Saudi critical infrastructure user suffered a shutdown of its facility when the controllers of a targeted Schneider Electric Triconex safety system failed safe. During an initial investigation, security professionals noticed some suspicious activity, and that is when they found malware. The safety instrumented system (SIS) engineering workstation was compromised and had the Triton (also called Trisis and HatMan) malware deployed on it. The distributed control system (DCS) was also compromised. It is possible to envision an attack where the attacker had the ability to manipulate the DCS while reprogramming the SIS controllers.
In one part of the attack, it was learned the Tricon controller comes with a memory protect mode: a key switch on the front panel that prevents the memory from being written to. One precondition for this attack is that the key switch be in program mode. In this case, the user had left it in program mode.
With these more sophisticated attacks hitting the industry, users are not only becoming more aware of the issue, but are also starting to take action.
“We are starting to have conversations about some very good topics in safety and cybersecurity,” said Andy Kling, director of cyber security and architecture at Schneider Electric. “We are having interesting discussions around things like the insurance industry helping to drive some cyber awareness. Will insurance companies bring a measure of accountability to companies trying to insure their plants? The conversation is getting out of the blogs and into the real world. We said one of the primary lessons we took out of Triton was a call to action across the industry.
“The industry has to wake up to the fact these kinds of attacks not only can happen, but have happened and will continue to happen. The sacrosanct SIS that nobody ever thought would be attacked, was attacked. An embedded device that everybody thought was so obscure and difficult to attack, was attacked. These are things that a year ago we would have never thought was possible,” Kling said.
“Triton demonstrated that any critical infrastructure is vulnerable to cyber attack,” said Dewan Chowdhury, chief executive and founder of security provider MalCrawler. “Government regulators and cyber companies were more focused on the electric grid as the primary target for cyber attacks targeting industrial control systems. The Triton attack showed that even the operator’s safety instrumented systems were being targeted by nation states.”
For a long time, users were aware a security breach of some sort could cause a safety incident, but it was more theoretical.
Linking Safety and Security
“I do believe that industry has taken a lesson from this and similar incidents and reports,” said Eric Cosman, security expert and consultant with ARC Advisory Group. “Although the link between safety and security has been acknowledged for some time, there has been a renewed interest in the subject since this report, particularly in the development of standards and practices.
“The ISA84 committee has published a technical report (ISA-TR84.00.09) that addresses cybersecurity from the perspective of a safety engineer. The responsible work group will soon begin work on a third edition of this report to reflect developing interest and guidance.
“The other development is the broadening acceptance of using a ‘Cyber PHA’ approach to doing risk assessment of control systems, including associated safety systems. Several of the leading service and consulting companies are now using this approach with their clients. The recently approved 62443-3-2 standard describes such an approach.
“Perhaps the most important lesson from this and similar incidents (going back to Stuxnet) is the importance of protecting not only the running systems, but also the development or engineering systems used to configure them. Compromising the development platform allows an attacker to modify the code loaded into safety systems, which can be quite difficult to detect.
“One of the most significant developments is a renewed call for safety systems that are totally disconnected from and independent of the BPCS. This could have a negative impact on products from companies that put a lot of money into the development and promotion of ‘integrated safety’ as part of their architecture. That said, I am personally not convinced that integrated safety is a bad thing, as long as the asset owner takes the appropriate measures to protect their development systems. Of course, practices such as leaving a running system in ‘program mode’ are simply unacceptable,” Cosman said.
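Cosman's point about tampering with the development platform suggests one concrete, if partial, defensive measure: keeping a known-good hash baseline of engineering project files and checking for drift before logic is downloaded to the safety system. The sketch below is a hypothetical illustration only – the file names, baseline store, and workflow are assumptions, not a description of any vendor's tooling:

```python
# Hypothetical integrity check for engineering-workstation project files.
# A known-good SHA-256 baseline is recorded offline; any hash drift flags
# a file that may have been tampered with before download to the SIS.
import hashlib
import json
import pathlib


def sha256_of(path: pathlib.Path) -> str:
    """Hash a file's full contents with SHA-256."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_baseline(files, baseline_path):
    """Store known-good hashes (ideally on write-once or offline media)."""
    baseline = {str(f): sha256_of(pathlib.Path(f)) for f in files}
    pathlib.Path(baseline_path).write_text(json.dumps(baseline))


def verify(files, baseline_path):
    """Return the files whose current hash no longer matches the baseline."""
    baseline = json.loads(pathlib.Path(baseline_path).read_text())
    return [str(f) for f in files
            if sha256_of(pathlib.Path(f)) != baseline.get(str(f))]
```

A check like this does not prevent a compromise of the workstation itself, but it gives the asset owner an independent way to notice that project files changed when no legitimate engineering work was underway.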
Awareness is great, but more companies must become active and start – or elevate – a security program.
“For any company to move forward, having a simple cybersecurity plan is better than no plan,” Chowdhury said. “Fortunately, NIST has developed its Cybersecurity Framework, which allows any OT operator to assess their current program or figure out where to start on a cybersecurity program.
“An unregulated organization has an advantage as they don’t feel the pressure of creating a cyber program to just ‘check some boxes’ for the satisfaction of regulators. They can go at their own pace and implement security solutions that fit their budget, reality, etc.”
At a minimum, Chowdhury said, critical infrastructure companies should do the following:
• Perform a self-assessment using a guideline like the NIST Cybersecurity Framework to determine their baseline/maturity (This is the most important thing to do)
• Perform a tabletop exercise of some type of hack on their OT environment to learn what would happen, who would respond, impact on the supply chain, their interactions with law enforcement/regulators/government agencies/etc.
• Have a detailed inventory of all connected devices in their OT environment
• Ensure upper management is involved in the cybersecurity program for support and awareness
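As a rough illustration of the first bullet, a self-assessment baseline can be sketched in a few lines of code. The five function names below come from the NIST Cybersecurity Framework core, but the 0–4 scoring scale and the example numbers are illustrative assumptions, not part of the framework itself:

```python
# Minimal self-assessment sketch built on the five NIST CSF core functions.
# The 0-4 maturity scores are illustrative placeholders; a real assessment
# would score each category and subcategory against documented evidence.

NIST_CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]


def assess(scores: dict) -> dict:
    """Return a baseline maturity average plus the weakest function."""
    missing = [f for f in NIST_CSF_FUNCTIONS if f not in scores]
    if missing:
        raise ValueError(f"Unscored functions: {missing}")
    baseline = sum(scores[f] for f in NIST_CSF_FUNCTIONS) / len(NIST_CSF_FUNCTIONS)
    weakest = min(NIST_CSF_FUNCTIONS, key=lambda f: scores[f])
    return {"baseline": baseline, "weakest": weakest}


# Example: an operator with a solid asset inventory but no response plan
result = assess({"Identify": 3, "Protect": 2, "Detect": 1,
                 "Respond": 0, "Recover": 1})
print(result)  # {'baseline': 1.4, 'weakest': 'Respond'}
```

Even a toy scaffold like this makes Chowdhury's point concrete: the output immediately identifies where the tabletop exercise and management attention should go first.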
Companies are making changes, and with cybersecurity there is a constant state of change and rethinking.
“One of the ways things changed internally here (at Schneider Electric) was us recognizing that the traditional method of looking at a threat model for a product – saying we will address the most severe vulnerabilities first, then the next most severe, and the next most severe – broke down on us,” Kling said.
“The truth is the kind of attack that worked against us exploited a fairly low-priority vulnerability. We always believed the barriers to entry were too high for an attack of this type,” Kling said. “Now we know we have to understand the types of attacks and techniques used by the attackers, and mix that in with our threat modeling and our prioritization scheme. Don’t just base it on the CVSS score that gives us a severity ranking. We should understand what techniques are being used and build some defenses against those.”
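Kling's approach – weighting observed attacker techniques into the prioritization rather than ranking by CVSS score alone – can be sketched roughly as follows. The technique names, the 2.5x boost, and the example findings are all hypothetical, chosen only to show how a low-CVSS finding can outrank a high-CVSS one:

```python
# Hypothetical prioritization sketch: rank findings by CVSS combined with
# whether the vulnerability matches techniques seen in real ICS attacks.
# Technique names, weights, and findings are illustrative assumptions.

OBSERVED_TECHNIQUES = {"engineering-workstation-compromise",
                       "firmware-modification"}


def priority(cvss: float, techniques: set) -> float:
    """Boost priority when a finding maps to an observed attacker technique."""
    boost = 2.5 if techniques & OBSERVED_TECHNIQUES else 1.0
    return cvss * boost


findings = [
    # (name, CVSS base score, attacker techniques the finding enables)
    ("hardcoded-credential", 4.0, {"engineering-workstation-compromise"}),
    ("buffer-overflow", 9.0, set()),
    ("debug-port-open", 3.5, {"firmware-modification"}),
]

ranked = sorted(findings, key=lambda f: priority(f[1], f[2]), reverse=True)
# The CVSS 4.0 finding now ranks first because it matches a known technique.
```

The ordering is the point of the example: under a pure CVSS ranking the buffer overflow would come first, but once technique data is mixed in, the lower-severity finding that enables a known attack path rises to the top – exactly the kind of vulnerability Kling said the old model missed.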