Just who can you trust along the supply chain? If the purchasing company has a long-standing cybersecurity program with its partners, that is one thing; but, let's face it, not very many companies have that kind of program.
With the potential for back doors built into unvetted products, security professionals will have a difficult time inspecting all the chips and other components coming in.
With worldwide outsourcing of microchip design and fabrication now a $350 billion business, bad guys all along the supply chain have a multitude of opportunities to install malicious circuitry in chips.
The reality is these Trojans can allow attackers to sabotage healthcare devices, critical infrastructure, and financial, military, or government electronics.
There is now a chip in development with an embedded module that proves its calculations are correct and an external module that validates the first module’s proofs, said Siddharth Garg, an assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, who is working on the technology with fellow researchers.
While software viruses can often be spotted and fixed with downloadable patches, deliberately inserted hardware defects are invisible and act surreptitiously.
A case in point: a secretly inserted back door that could allow attackers to alter or take over a device or system at a specific time. Garg's configuration, an example of an approach called verifiable computing (VC), keeps tabs on a chip's performance and can spot telltale signs of Trojans.
That ability to verify has become vital in an electronics age without trust: gone are the days when a company could design, prototype, and manufacture its own chips. Manufacturing costs are now so high that designs are sent to offshore foundries, where security is not guaranteed.
But under the system proposed by Garg, the verifying processor can be fabricated separately from the chip it checks.
“Employing an external verification unit made by a trusted fabricator means that I can go to an untrusted foundry to produce a chip that has not only the circuitry performing computations, but also a module that presents proofs of correctness,” Garg said.
The chip designer then turns to a trusted foundry to build a separate, less complex module: an ASIC (application-specific integrated circuit), whose sole job is to validate the proofs of correctness generated by the internal module of the untrusted chip.
Garg said this arrangement provides a safety net for the chip maker and the end user.
“Under the current system, I can get a chip back from a foundry with an embedded Trojan. It might not show up during post-fabrication testing, so I’ll send it to the customer,” Garg said. “But two years down the line it could begin misbehaving. The nice thing about our solution is that I don’t have to trust the chip because every time I give it a new input, it produces the output and the proofs of correctness, and the external module lets me continuously validate those proofs.”
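Garg's chips rely on a cryptographic proof system whose details are not given here, but the underlying principle (checking an answer can be far cheaper than recomputing it) can be illustrated with a classic textbook example: Freivalds' probabilistic check for matrix multiplication. The sketch below is purely illustrative and is not the scheme used on the actual chips; an untrusted "prover" computes a product, and a cheap "verifier" spot-checks each output it is handed.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def multiply(a, b):
    """Untrusted 'prover': computes the claimed product C = A x B."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def freivalds_verify(a, b, c, rounds=20):
    """Trusted 'verifier': spot-checks that A x B == C with random vectors.
    Each round costs O(n^2), versus O(n^3) to redo the multiplication;
    a wrong C slips past all rounds with probability at most 2**-rounds."""
    n = len(a)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False  # caught a corrupted output
    return True

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = multiply(a, b)             # an honest output is accepted
ok = freivalds_verify(a, b, c)
bad = [row[:] for row in c]
bad[0][0] += 1                 # a Trojan-style tampered output is rejected
caught = not freivalds_verify(a, b, bad)
```

As in Garg's arrangement, the checking step is asymptotically cheaper than the computation itself, which is why a small, slow, trusted verifier can police a large, fast, untrusted prover on every input.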
An added advantage is that the chip built by the external foundry is smaller, faster, and more power-efficient than the trusted ASIC, sometimes by orders of magnitude. The VC setup can therefore potentially reduce the time, energy, and chip area needed to generate proofs.
“For certain types of computations, it can even outperform the alternative: performing the computation directly on a trusted chip,” Garg said.
The researchers next plan to investigate techniques to reduce the overhead that generating and verifying proofs imposes on a system, as well as the bandwidth required between the prover and verifier chips. “And because with hardware, the proof is always in the pudding, we plan to prototype our ideas with real silicon chips,” Garg said.
To pursue the promise of verifiable ASICs, Garg and colleagues will share a five-year, $3 million National Science Foundation Large Grant.