WHY and how do large computer projects go so badly wrong? The column on ‘Why IT just doesn’t compute’ on 2 May triggered a large and vociferous postbag from practitioners who offered some illuminating first-hand insights into a very British malaise.
Almost everyone agreed that the overall picture is grim – perhaps even worse than it appears at first sight. Public-sector failure is, by definition, public and visible (which brings its own pressures). In the private sector it is hidden – which also means that managers do not learn from it, an important factor in explaining why improvement is so slow.
Thus one senior manager in financial services (one of the heaviest IT-investing sectors) said he had not seen a successful IT project in 25 years. The reason: internal politics ensured that project teams were made up of political allies rather than the managerially and technically competent. Then, when a programme flagged, it was quickly swept under the carpet.
The tell-tale sign, he said, was the announcement that the full savings would be realised in a previously unsignalled second phase – but unfortunately there were now higher priority demands on resources. ‘This necessitated another “whizz” new project to be initiated quickly, which meant that it was not thought through properly, and so back to square one.’
How do such daft projects ever get off the ground? ‘Collusion and illusion,’ said one consultant, describing it as ‘a happy spirit of joint wish-fulfilment’ in which both sides tacitly agreed to enter a fantasy world where assumptions about costs, benefits and risks were wholly artificial. Once the project was under way, it was too embarrassing for either side to admit the figures were make-believe, so it ground on with everyone hoping for the best – ‘even though that didn’t work last time or the time before or, come to think of it, ever’.
If truly honest assessments were carried out, many or most IT projects would (rightly) never be started; if they were monitored in the same spirit, they would (rightly) be killed before implementation, he said. But this, of course, is a message that no one wants to hear.
Other writers took issue with the British Computer Society’s diagnosis that lack of professionalism in software engineering was to blame, fingering crude human resources management as the real culprit. A contract-based, low-commitment, ‘plug-compatible programmer’ mentality had grown up among managers, one claimed, which was incompatible with quality, teamwork and the dialogue needed to keep projects on track. Certification, put forward as a solution by the BCS, was therefore irrelevant: the central issue was the ‘woeful’ quality of IT management.
The woefulness extended to purchasing. Several correspondents queried the conventional wisdom of outsourcing, noting that it could easily lead to lower quality and higher cost.
One reason may be the poor HR management outlined above. More fundamentally, the outsourcer answered to different shareholders from the customer, and their interests were far from identical: suppliers, for instance, had an incentive to lock customers in by building systems that were hard for anyone else to maintain – and maintenance (sometimes neglected at the justification stage) accounts on average for 60 per cent of all software costs. The power relationship is also unequal: a major outsourcer can absorb the loss of one customer, but the customer cannot absorb the loss of its IT.
The government, said another, ‘tends to be a very poor buyer. Orders come down the line about what to do, but very little about why they are doing it’. When research found that few people were using the Inland Revenue’s expensively developed online services, managers said they hadn’t been told to encourage taxpayers to use them: if they had, it would have been done differently.
More generally, he argued that the increasingly sharp-edged contract culture was inexorably damaging trust and raising costs in all kinds of ways – the purchasing process got longer, gaming around contracts became more intense, and every time the contract was switched a new learning period had to be gone through. Do the nominal cost savings outweigh the loss of knowledge and trust? Probably not. ‘We are rapidly heading for a state where we truly do know the cost of everything and the value of nothing.’
Finally, many projects made the mistake of trying to ‘automate rather than eliminate’. For example, companies were willing buyers of technology for automating call centres and routing calls, ‘but precious little effort is given to trying to eliminate the need for the customer to be phoning in at all’.
The same goes for the call centres themselves: many exist for the sole purpose of answering calls that should never need to be made in the first place, institutionalising IT costs on top of an already ineffective system.
The blunt truth is that on its own, investing in IT neither cuts costs nor reduces headcounts. This was the original insight of the ‘re-engineering’ movement a decade ago, and it is reinforced by recent work by McKinsey showing that IT investment is a much weaker predictor of productivity improvement than overall management capabilities.
Together, management and IT are a potent force. But McKinsey warns that, without adequate management capabilities, heavy IT spending may actually damage productivity. ‘The cost of investing in IT, in management time and capital, can be value-destroying, due to inappropriately scoped or over-engineered systems.’
The Observer, 16 May 2004