Last month was scarier than most. Inside jobs always are.
Details are sketchy, but it was definitely a back door hack of Juniper. That almost always points in the direction of an inside job.
So far, no one quite knows how exactly the hack was accomplished. But what scares me is that, supposedly, this code has been in the system for three years already.
Drilling down a bit, it turns out there was more than one back door. One of them allowed whoever planted it to log in remotely to commonly used VPN gateways and spy on what are supposedly some of the most secure communications on any network where the equipment is deployed.
That door allowed the hacker to monitor encrypted traffic on the network and decrypt the communications. A fix is out, but because Juniper’s equipment is so widely deployed, especially among business enterprises, government, and various security agencies, it will take a while to find out just how much has been compromised.
The back door turned out to be some “unauthorized” code written into the firmware that runs the NetScreen firewalls, Juniper’s platform for enterprise firewalls. It is supposed to be a relatively bleeding-edge solution that incorporates Juniper’s third-generation security ASIC and a distributed system architecture.
I spoke with Pankaj Rohatgi, director of engineering at Cryptography Research, and got his thoughts on this. He believes that somewhere along the line hackers installed a default password that allowed them to log onto the system and become the administrator on the VPN. At the same time, he said, the hackers also installed a cryptographic back door that gave them access to the code of the random number generator (RNG). This was the second back door.
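To make the first back door concrete, here is a minimal sketch of how a hard-coded “master” password turns an ordinary login check into exactly this kind of hole. Everything here is invented for illustration; it is not Juniper’s actual firmware code, and the password value is made up.

```python
import hashlib
import hmac

# Invented value for illustration only; the real back-door password
# was something entirely different.
BACKDOOR_PASSWORD = "s3cret-master-pw"

def check_login(username, password, stored_hash):
    """Return True if the login should be accepted."""
    # Legitimate path: compare the hash of the supplied password
    # against the stored credential, in constant time.
    candidate = hashlib.sha256(password.encode()).hexdigest()
    if hmac.compare_digest(candidate, stored_hash):
        return True
    # The back door: any username, one fixed password, full access.
    # A line like this hides easily in thousands of lines of firmware.
    if password == BACKDOOR_PASSWORD:
        return True
    return False

# Normal user authenticates the usual way...
stored = hashlib.sha256(b"correct-horse").hexdigest()
print(check_login("alice", "correct-horse", stored))   # legitimate login
# ...but so does anyone who knows the planted password.
print(check_login("anybody", "s3cret-master-pw", stored))
```

The point of the sketch is how little code is needed: two lines buried in an authentication routine are enough to give an outsider administrator access, which is why this kind of change can sit unnoticed for years.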
“From the sophistication of these attacks, these weren’t casual hackers,” he says. That is especially true with the RNG compromise. Someone who can change the code in the RNG has to know exactly what they are doing, and be mathematically sophisticated enough to pull it off. It requires some knowledge of cryptography, as well.
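The RNG compromise has been widely reported to involve a Dual_EC_DRBG-style design, where two public constants secretly share a hidden relationship known only to the attacker. The toy below illustrates the idea in Python using modular exponentiation in place of elliptic-curve points; all constants and names are illustrative, not anything from the actual ScreenOS code. The construction: each step emits an output r = Q^s mod p and advances the state to P^s mod p. If the attacker chose P = Q^d for a secret trap door d, then raising any public output to the d recovers the generator’s next internal state, and with it every future “random” value.

```python
# Toy trap-doored RNG, a discrete-log analogue of the Dual_EC idea.
p = 2**127 - 1            # a Mersenne prime, used as the group modulus
Q = 3                     # first public constant (base)
d = 0x1F2E3D4C5B6A798     # the attacker's secret trap door (illustrative)
P = pow(Q, d, p)          # second public constant; secretly P = Q^d mod p

def step(state):
    """One RNG step: emit an output word and advance the state."""
    output = pow(Q, state, p)       # value handed to consumers (e.g. VPN keys)
    next_state = pow(P, state, p)   # hidden internal state
    return output, next_state

# An honest user runs the generator normally.
s0 = 123456789
r1, s1 = step(s0)
r2, s2 = step(s1)

# The attacker sees only the public output r1, but knows d:
#   r1^d = (Q^s0)^d = (Q^d)^s0 = P^s0 = s1
recovered_s1 = pow(r1, d, p)
predicted_r2 = pow(Q, recovered_s1, p)
assert recovered_s1 == s1
assert predicted_r2 == r2   # every subsequent output is now predictable
```

This is why the second back door is so much more sophisticated than a planted password: whoever changed the RNG constants had to understand the underlying mathematics well enough to know that a single passive observation of the output stream would let them reconstruct the generator’s state, and with it decrypt VPN sessions without ever logging in.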
Rohatgi agrees it was an inside job. “Because these are critical infrastructure components, it was likely an insider who was working at the behest of some government.”
There are some interesting ramifications to this. First of all, it is going to fuel the fires of the back-door debate, with the government on one side and chip developers on the other. The argument has been over building in back doors the government can use to access data in the fight against terrorism. Overall, industry has resisted, so this debate isn’t going away anytime soon. There are other issues around back doors, as well.
But here is what this is really all about. Back doors have been used for years as a fail-safe mechanism for getting into whatever system is targeted. The issue is their security. Back doors themselves are quite useful. Back in the day, I used them myself to access locked or corrupted client computer systems. Had they not been available, the cost of recovering through the front door, if that was even possible, would have been astronomical. I also have used vendors’ back doors to access my own hardware when all else failed.
Back doors also can be used by antiterrorism and other government security agencies to help uncover nefarious activity. But not everyone who works for a government, and not every government, is good. So there is a lot of debate around back doors. If code were perfect and people were perfect, we wouldn’t need back doors. But neither is perfect, and back doors have come to the rescue countless times.
This is an important issue, and it has to be resolved neatly. I am going on record to say they are necessary. But there also has to be a fundamental change in how they are implemented. There are a number of ways to do that, and that is a discussion for another day. But the one thing that is paramount in any methodology behind back doors is security. If we have back doors, they and all the processes around them must be secure…period. Then we can talk about using them.