Lessons From Healthcare.gov

Setting reasonable schedules, understanding who’s in charge, and making sure you know what they’re doing are critical parts of IT management. In this case, everything went wrong.


Patterning equipment uses software and needs software security. With that rather weak segue, I would like to discuss software projects, considering they are in the news at the moment.

The stories about the healthcare.gov rollout bring back fond memories for all of us of software projects that have gone horribly wrong. On the list of things that guarantee a project will miss deadlines, late changes are close to the top. Late changes are simply disastrous, and the only way to succeed is for a strong project manager to push back on any scope change, however trivial. In fact, I am convinced that the quality of the requirements and specification phases is the biggest determinant of success.

It is early in the “blame the innocent and reward the guilty” phase of the recriminations over the healthcare Web site. It certainly sounds like there is an organizational problem here: plenty of changes being made, 55 subcontractors, and no single contractor responsible for the whole program.

The second guaranteed cause of failure is insufficient time for integration testing, along with all the consequent rework and bug fixes. On healthcare.gov, it sounds as if they had two weeks, which is a ludicrously short time even for a small project. Two weeks is hardly enough time to test, and it assumes there are no bugs to fix. There are techniques for monitoring test progress, with documented test cases and metrics such as bug detection rates and fix rates, that can be used to tell how close a project is to a realistic launch. I have found that a realistic time split is one-third design, one-third write, and one-third test and fix. On top of that, testing has to include both module and integrated system tests.
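To make the point about test metrics concrete, here is a minimal sketch, in Python, of the kind of tracking I mean: count bugs detected and bugs fixed each week of integration test, and only call the launch realistic once the detection rate is clearly falling and the open-bug backlog is shrinking. The weekly counts and the thresholds are purely illustrative assumptions, not data from the healthcare.gov project.

    # Track weekly bug detection and fix rates during integration test.
    # All numbers below are hypothetical, for illustration only.

    weekly_detected = [40, 35, 28, 18, 9, 4]   # new bugs found each week (assumed)
    weekly_fixed    = [10, 22, 30, 25, 15, 8]  # bugs fixed each week (assumed)

    open_bugs = 0
    for week, (found, fixed) in enumerate(zip(weekly_detected, weekly_fixed), start=1):
        open_bugs += found - fixed
        print(f"Week {week}: detected {found}, fixed {fixed}, open backlog {open_bugs}")

    # A simple readiness heuristic: the weekly detection rate has dropped well below
    # its peak, and more bugs are being fixed than found, so the backlog is shrinking.
    detection_falling = weekly_detected[-1] < 0.5 * max(weekly_detected)
    backlog_shrinking = weekly_fixed[-1] > weekly_detected[-1]
    print("Launch looks realistic" if detection_falling and backlog_shrinking
          else "Not ready: keep testing and fixing")

With only two weeks of data, neither trend can be established, which is another way of saying a two-week test window tells you almost nothing about readiness.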

The software teams on healthcare.gov have my sympathy because they are now immersed in a pressure cooker, which will produce incremental improvements in the site with almost no chance of getting any kudos from the community at large.

On the other side, software security also has been in the news, courtesy of the NSA and Mr. Snowden. My reaction when the news first broke was that the most remarkable part of the Snowden NSA saga is how a relatively low-level contractor got access to all this stuff. He was a system administrator.

At small startups I have worked at, we used to joke that the one truly essential employee, the one we could not afford to lose and who could destroy the company with a single keystroke, was the IT system guy. The systems administrator had access to everything, including everyone’s password. Forget the CEO or CTO. They see the big picture, but operationally they can leave town and the bills still get paid. Apparently the NSA is set up the same way. Presumably they compartmentalize the work teams, but someone, somewhere, has to store the compartments. Blame the IT guy.

The NSA’s response was to fire 90% of its system administrators. Keith Alexander, the director of the NSA, the U.S. spy agency charged with monitoring foreign electronic communications, told a cybersecurity conference in New York City that automating much of the work would improve security.

“What we’re in the process of doing—not fast enough—is reducing our (1,000) system administrators by about 90 percent,” Alexander said.

The lessons for corporate security are real. It is not enough to have backups off-site. If one guy could erase it all, copy it and give it away, change all the passwords, or get up to any other naughty business, then the company is completely exposed.

Oh…and never, ever allow an IT person to work out their notice. Walk them out the door without letting them touch another keyboard. That means it is essential to have at least two completely independent people fully conversant with the IT system at all times.

For more thoughts on patterning please go to www.impattern.com.


