Why Do Big IT Projects Fail So Often?
Obamacare's website problems can teach us a lot about large-scale project management and execution.
Soon after the launch on Oct. 1, former federal CTO Aneesh Chopra, in an Aspen Institute interview with The New York Times' Thomas Friedman, shrugged off the website problems, saying that "glitches happen." Chopra compared the healthcare.gov downtime to the frequent appearances of Twitter's "fail whale" as heavy traffic overwhelmed that site during the 2010 soccer World Cup.
But given that the size of the signup audience was well known in advance and that website technology is mature and well understood, how could the government create such an IT mess? Especially given how much lead time the government had (more than three years) and how much it spent on building the site (estimated at between $300 million and $500 million).
Unfortunately, this kind of project failure isn't so unusual. Industry research suggests that large IT projects are at far greater risk of failure than smaller efforts. A 2012 McKinsey study found that 17% of IT projects budgeted at $15 million or higher go so badly as to threaten the company's very existence, and more than 40% of them fail. As bad as the U.S. healthcare website debut is, there are dozens of examples, in both government and the private sector, of similar debacles.
In a landmark 1995 study, the Standish Group established that only about 17% of IT projects could be considered "fully successful," another 52% were "challenged" (they didn't meet budget, quality or time goals), and 30% were "impaired or failed." In a recent update of that study conducted for Computerworld, Standish examined 3,555 IT projects between 2003 and 2012 that had labor costs of at least $10 million and found that only 6.4% of them were successful.
... Read full story on InformationWeek