From the Triton attack to anticipating an attacker’s next move, security continues to be a battle of wits between the defender and an increasingly sophisticated attacker.
In the end, it all comes down to understanding what you have, applying your security standards and principles, and remaining consistent and resilient.
With that in mind, Peter Herweck, executive vice president for Schneider Electric Industry, and Andrew Kling, industry automation product security officer and senior director of system architecture at Schneider, sat down with Gregory Hale, editor and founder of Industrial Safety and Security Source (ISSSource.com), at the ARC Industry Forum 2019 in Orlando, FL, to discuss management’s role in cybersecurity. The following is the second part of a two-part Q&A discussion:
ISSSource: There are systems out there that are 20, 30, 40 years old. How do you talk to people about securing a system that is older than the people trying to secure it?
Herweck: That is one of the offers we have: to go in with a customer, do a thorough analysis and a thorough review of what is there, and use our expertise to tell the customer it is securable to some level, and here we may have a problem. After that consultation, we leave it to the customer whether he wants to deploy or not. This results in a clear proposal on what needs to be done, whether software upgrades, hardware upgrades or having certain areas of the network protected. You can also cover the procedural and people aspects of it. The customer needs to decide whether it is an investment that is worthwhile for them. When we go back into old PLC or DCS systems, one of the things we find is customers still using the OEM password that had been hardcoded into the system at deployment and never changed. You can go onto the Internet, search for the PLC controller or vendor ABC, and quickly find the original manufacturer’s password, and in many cases it has not been changed. Even in 20- or 30-year-old systems that problem exists. An easy measure is to change the password. You need to know that. That differentiates us from IT cyber companies, because they don’t know. This is a simple thing, but there may be areas where you go in, say this is not really a cyber-secure product, and recommend upgrading it.
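The default-password problem Herweck describes can be caught with a simple inventory audit: compare each device’s current credential against the vendor’s published factory default. The sketch below is illustrative only; the vendor names, models, passwords and inventory format are all hypothetical, not real vendor data or any Schneider tool.

```python
# Hypothetical sketch: flag devices in an asset inventory that still use a
# vendor's published factory-default password. All vendor/model names and
# credentials here are made up for illustration.

# Illustrative table of known factory defaults, keyed by (vendor, model).
KNOWN_DEFAULTS = {
    ("VendorABC", "PLC-100"): {"admin", "admin123"},
    ("VendorXYZ", "DCS-9"): {"password", "1234"},
}

def find_default_credentials(inventory):
    """Return names of devices whose password matches a known factory default."""
    flagged = []
    for device in inventory:
        defaults = KNOWN_DEFAULTS.get((device["vendor"], device["model"]), set())
        if device["password"] in defaults:
            flagged.append(device["name"])
    return flagged

if __name__ == "__main__":
    inventory = [
        {"name": "plc-line-1", "vendor": "VendorABC",
         "model": "PLC-100", "password": "admin"},
        {"name": "plc-line-2", "vendor": "VendorABC",
         "model": "PLC-100", "password": "s3cure!pw"},
    ]
    # Only plc-line-1 still uses a factory default.
    print(find_default_credentials(inventory))
```

In practice such a check would run against a maintained asset inventory and a curated default-credential list, but the comparison logic stays this simple.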
ISSSource: On supply chain security: We have seen incidents where the main company was not the initial victim, but the people who build things for them were the weak link. How do you ensure things like backdoors are not built in?
Herweck: There are two components to that question. There is the product side, making sure your products follow certain standards so you have as high an assurance as possible that they are designed in a cyber-secure way, and the second is the people you send to the customer to do the deployment. You are making sure the people you are sending are not the problem. It is product plus the people aspect, because the customer may open their network for engineers to go in for the deployment, so we have to make sure they bring in equipment that is totally clean. We have developed our own procedures to make sure that is happening. Can we be 100 percent sure? No, you cannot be. You bring in new employees, you train them, and over time you check that they adhere to those policies. Sometimes you find they may think it is a good idea to install a game on the PC or laptop they bring to the customer, which is of course forbidden. You need to have procedures to check for it. On the product side, we have a person with a large team that is part of the product security office. Any product we are developing can only be sold if our product security team has checked, tested and validated it to be in accordance with the respective standards. For example, the product security officer (PSO) of the whole group reports to me, and on a monthly basis we look at the products coming out. He has the right to pull the red line: If the business owner says the product is finished and ready to go to market, and the PSO says no, you haven’t done the right tests, it isn’t going to market.
Kling: I have done this. I stopped shipment when a product arrived at its ship decision with a known vulnerability that had not been addressed; I stopped the product, and we had to go back and fix it before we could let it go. That is not the backdoor you talked about, but a vulnerability in your product that has a known exploit against it is as good as putting a clear backdoor with a password in there.
Herweck: I think with backdoors we can never fully find out. In some of the recent cases of microprocessor manufacturers with backdoors, it has taken 10 years to find out they were in there; it is just impossible.
Kling: We do our best to hold our supply chain accountable to cybersecurity standards, to the point of going out and auditing their processes. We want to ensure not only that they have good secure development processes, but that they have the ability to deal with incidents. On the other hand, we also realize we are part of the supply chain to our customers.
ISSSource: Triton, the word scares quite a few people. Yes, you didn’t want the incident, but I am seeing positives coming out of it. I am seeing a more open discussion, and I am seeing safety professionals understanding they are not as impenetrable as they once thought they were, so let’s make sure safety is just as secure as the PLC and the DCS …
Herweck: At the complete installation. I always use this simple analogy: My neighbor asked me how he can protect himself against burglars, and I say one good idea is to close your door. If you see the neighbor’s door is open, close yours, because the burglar will go there first. As simple as this may sound, one needs to look at not only the door but the building and the complete installation. That is one thing. I would take quite a few positive learnings out of the incident, and I think we have as a company, our customers have, and the industry has, because we have been as transparent as we could in the case. What is cumbersome is there is a lot of superficial talk. Superficial talk is not helpful for people who don’t have a lot of time to read something and don’t get into the depth of it. There are people who write three lines about it, and you read cybersecurity, Triton and Schneider Electric, and it ends up in the financial paper with no depth, and what is left is the bad part of it. I think in total it has been a good learning for us, a good learning for the industry and a good learning for the customer. I think a lot of things were done right in the aftermath of the incident.
ISSSource: In the aftermath of the attack, is there anything you would have changed about the Schneider approach?
Herweck: It is hard to jump back to the day before the incident and say what we would have done differently. I don’t think we would have done anything differently, because we didn’t think this could happen.
Kling: If we think about it, there is no “we would have done this instead of that.” We are a more mature organization today than we were two years ago when this happened, and as a result we have improved. We have a better incident response process now. We have exercises to ensure the incident response process is working, it is flowing, and it is up to date. We didn’t need an example to take to the safety team or other teams to show them it could happen to them; those examples already existed. We are vulnerable. Our safety systems are vulnerable. Our DCS is vulnerable. We have seen this; there are examples. Our own development process has improved, our delivery process has improved, our cyber services processes have improved. It is something we reflect upon. When it happened, and when it was largely completed, there was a very thorough audit, and every single decision and every single event was time-lined and reviewed. We have looked at those to help us learn those lessons so we can make sure we are better next time.
Herweck: There is now a lot of anticipation of what could potentially go wrong. In many of those instances, products are used exactly in the way they were designed to be. It was one of those things where people didn’t think anyone would use that functionality with bad intent. You can program a safety system, and you can shut down the PLC. This is functionality you can use, and it should exist. Also, the safety system should drive the plant safely down, which it did. But was the functionality used as its design intended? No, it wasn’t. So we should look at things from different angles and anticipate some of the things that could happen.
Kling: That is part of our secure development lifecycle. Our threat models change how we look at threats that come into our systems. They have evolved, and obviously they account for this kind of attack, and we use that to look elsewhere.
ISSSource: It ends up being a philosophical question, but you said we didn’t think it would be used that way. At what level do you think about the next potential attack on something else? How far do you go in thinking about securing the system the next time, when there is a more oblique attack?
Herweck: Much farther than before, all the way to hiring friendly hackers. You need to anticipate some of the thoughts and the things that are out there. There is no 100 percent security, but how far can we move up to get pretty good security with little effort? The earlier you anticipate, the higher the protection you get with little effort. That is what we are trying to figure out.
Kling: When we were going through the threat models post-Triton, we didn’t just fix the one vulnerability we announced; we adjusted roughly a dozen aspects. We looked at the tradecraft used in the attack and adjusted many things across the product base in order to anticipate this type of attack coming from different directions in the future.