By Gregory Hale
There were signs a security issue was imminent months before the Triton safety system attack on a Saudi Arabian refinery, a researcher revealed Tuesday.
“What isn’t publicly known is there was an additional outage in June 2017 on a Saturday evening where there was a skeleton crew working,” said Julian Gutmanis, during a Tuesday talk at the S4x19 conference in Miami. Gutmanis is a security researcher initially brought in by the victim organization once the attack had been discovered.
In the Triton event, a Saudi Arabian refinery suffered a facility shutdown in August 2017 when the controllers of a targeted Schneider Electric Triconex safety system failed safe.
During an initial investigation after the August incident, security professionals noticed suspicious activity, and that is when they found the Triton malware. The safety instrumented system (SIS) engineering workstation was compromised and had the Triton (also called Trisis and HatMan) malware deployed on it. The distributed control system (DCS) was also compromised. The attacker had the ability to manipulate the DCS while reprogramming the SIS controllers.
In his talk, Gutmanis said that in the June incident one safety controller was affected, but no one could determine the cause, so the company pulled the controller out and sent it to Schneider Electric for a diagnostic safety check.
At that point, “nothing surprising was identified,” Gutmanis said.
“The next outage was on August 4, a Friday, when multiple controllers were affected; six controllers went down,” he said.
At the time, the DCS was still reflecting normal operations. The engineers came out to investigate. They reviewed Windows event logs and identified unexpected RDP sessions. At that point the vendor, whom Gutmanis did not name, recommended actions on the impacted systems.
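Gutmanis did not describe the engineers’ exact log review, but as a rough sketch of the idea, unexpected RDP sessions can be surfaced from an exported Windows Security event log by filtering for successful logons (Event ID 4624) with the RemoteInteractive logon type (10). The CSV layout, field names and sample records below are assumptions for illustration only:

```python
import csv
import io

# Hypothetical CSV export of Windows Security events; field names,
# accounts and addresses are illustrative assumptions, not real data.
SAMPLE_LOG = """EventID,LogonType,Account,SourceIP,Time
4624,2,operator1,-,2017-08-04T18:02:11
4624,10,eng_svc,10.1.2.50,2017-08-04T21:45:03
4672,-,operator1,-,2017-08-04T18:02:12
4624,10,admin,10.1.2.99,2017-08-04T22:10:44
"""

def find_rdp_logons(log_text):
    """Return logon events with Event ID 4624 and RDP logon type 10."""
    reader = csv.DictReader(io.StringIO(log_text))
    return [row for row in reader
            if row["EventID"] == "4624" and row["LogonType"] == "10"]

for event in find_rdp_logons(SAMPLE_LOG):
    print(event["Account"], event["SourceIP"], event["Time"])
```

In practice such filtering would run against the live event log or a SIEM rather than a CSV export, but the triage question is the same: which remote interactive logons were expected, and which were not.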
That is when Gutmanis’ team kicked off an investigation. They conducted typical incident response activities, and it didn’t take long to understand they had to expand the scope of the investigation. They then started a timeline of events.
Some of the initial findings included:
• Python scripts, later identified as the Triton malware, were created on the engineering workstation in close proximity to the August outage
• Unknown programs were running in the affected controllers’ memory
• Poor configuration of the DMZ allowed attackers to pivot to the control network
• Communications to the engineering workstation traced through pivot points to the organization’s perimeter
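The first finding above rests on timeline correlation: flagging files created close to an outage. A minimal sketch of that idea follows, with hypothetical file records and an assumed outage timestamp (the actual artifact names and times are not given in the article):

```python
from datetime import datetime, timedelta

# Hypothetical file-creation records; names and times are illustrative.
FILES = [
    ("trilog.py", datetime(2017, 8, 4, 20, 30)),
    ("report_q2.docx", datetime(2017, 6, 1, 9, 0)),
    ("library.zip", datetime(2017, 8, 4, 20, 35)),
]

def files_near_event(files, event_time, window_hours=24):
    """Flag files created within +/- window_hours of an incident timestamp."""
    window = timedelta(hours=window_hours)
    return [name for name, created in files
            if abs(created - event_time) <= window]

OUTAGE = datetime(2017, 8, 4, 21, 0)  # assumed outage timestamp
print(files_near_event(FILES, OUTAGE))  # flags the two files created that evening
```

A real investigation would pull creation times from forensic images rather than a hard-coded list, but the correlation step is this simple in principle.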
The investigation then escalated: The researchers concluded the entire environment had to be treated as compromised, and extremely complex attack tools were identified and traced to a remote attacker.
Safety System Compromise
In addition, they found the integrity of the emergency shutdown (ESD) system was compromised and could not be trusted. The incident response team provided the safety system vendor with the attack tools for analysis, and recommended a full compromise assessment across the entire organization.
At that point, the team went into containment mode. They surmised recent incidents impacting the region had focused on system destruction and disruption but had not yet reached the OT environment. They also identified suspected beacons from the control network and weighed the potential for a “time bomb” style attack. Actions were taken to isolate the systems, with significant consideration given to emergency manual shutdown procedures. The eradication event included multiple parties across the entire IT/OT environment.
“The target got lucky,” Gutmanis said. “The outages were not intended. While the target was lucky, it was still expensive for them.”
In his post-op thoughts, Gutmanis said unclear support demarcations and staff movement resulted in security holes, and the scope of the initial investigation was insufficient, which gave the attackers another two months to tune their tools.
“There were many places that this incident could have been prevented, identified or stopped earlier,” Gutmanis said.
In addition, the victim needs to ensure a proper security culture is maintained within the plant environment. Also, it should “ensure support demarcation roles and responsibilities are well defined.”
Other thoughts Gutmanis shared were:
• Properly deploy, audit and monitor your defenses
• Understand communication flows within your network and look for anomalies
• Make sure you and your vendors are on the same page
• Get help before you need it
As a result of the report at S4x19, Schneider Electric officials released a statement:
“In light of new claims made at S4x19, we recount our response to the Triton incident, which occurred on August 4, 2017.
“We deployed a support engineer to the site within four hours of the end user’s request. Thereafter, our on-site experts conducted a comprehensive analysis. Once they determined the incident to be cybersecurity-related, they turned the investigation over to the end user, who hired FireEye for attack eviction and site remediation. FireEye worked directly with the end user, and at the end user’s request, Schneider Electric communicated only with FireEye. At every step, we have cooperated fully with the end user, FireEye and the U.S. Department of Homeland Security, with coordination from the U.S. Federal Bureau of Investigation.
“We continue to be open and transparent about the incident to learn from Triton and to help the broader goal of worldwide cyberattack prevention.”