Lean times for innovation

IT’S OFTEN said that the UK has an innovation problem: it’s good at inventing, but hopeless at turning the new thing into a commercial product.

So the official emphasis is on more research and development and scientific innovation, while Harvard professor Michael Porter, in his report on UK competitiveness, urges companies to develop more innovative, higher-value goods.

Like all good cliches, the ‘innovation deficit’ contains a nugget of truth; but, like all cliches, it has been repeated so many times that it has been all but drained of meaning. Yes, of course companies should innovate: that’s what companies are for. Innovation, however, is not only a question of new products and services, but also of the way companies do things – their practices and processes.

How things are made is often ignored but in fact it’s an essential complement to the sexier field of product innovation. Trendy new products aren’t much use if you can’t make them properly (BMC and its successors reportedly never made a penny out of the original Mini). Indeed, some new products emerge out of new processes.

Moreover, process innovation is one of the best ways of stealing a competitive march on rivals. By adopting new production methods, Airbus UK boosted productivity in wing assembly by 25 per cent and quality by 40 per cent in six months. In recent research, the consultancy McKinsey identified innovation diffusion – the rapid take-up of promising management practices – as the single most important means of tackling the UK’s productivity problems.

The most important element in these innovative practices is the approach known as ‘lean’, which concentrates on cutting down the waste, variability and inflexibility in organisations that block them from responding to customers.

Such approaches are central to productivity and innovation, notes London Business School professor Chris Voss, who heads a research programme studying promising practices for the Advanced Institute for Management Research. Lean is therefore a key link between macro- and micro-productivity issues.

Three McKinsey consultants – John Drew, Blair McCallum and Stefan Roggenhofer – have brought this alive at company level in a new book, Journey to Lean (Palgrave Macmillan).

This down-to-earth, nuts-and-bolts account claims to be the first to address the issue of ‘stickability’. If lean works – and since it started in Japan in the 1950s there is plenty of positive evidence – why are companies so slow to pick it up? And when they do, why does it so often fail to stick, success fading into mediocrity as the bad old habits reassert themselves?

The answer, as the authors note, is that ‘lean changes everything’. Much more than a collection of problem-solving tools and techniques, lean is ‘an alternative to mass production, not a complement to it’ in which production mix and quantity are dictated as directly as possible by the customer.

Redesigning the company’s operations from that perspective involves business-wide changes, which can only be done through people, not to them. Since process innovation is delivered on the front line, employees take a much bigger role, for which they need different mental as well as physical equipment. The investment emphasis switches from machinery to people.

By the same token, managers are required to stop managing at one remove and engage with the shop floor or office where the issues surface – an unfamiliar and revealing exercise. Changes have to be supported by different pay and other administrative systems.

As the first part of the book describing the lean landscape shows, many of the changes are counter-intuitive. Under lean, you don’t hide problems, you bring them to the surface, even if it means stopping production. Being responsive to the customer doesn’t mean building more stock, but less. The economies of scale that managers have been brought up on no longer apply – in fact the whole apparatus of mass production is now a liability, not an asset.

In the book’s second half, a fictitious UK manufacturing company embarks on the lean journey, encountering the practical pitfalls on the road. One is the failure to anchor the operational changes with institutional ones. Another is the temptation to take short cuts.

The blunt fact is there aren’t any. You can’t think yourself into a new way of doing things: there is no alternative to doing it. That goes (especially) for managers, and is why apparent remedies like outsourcing and IT may turn out to be painful mistakes.

Outsourcing can be a way of avoiding the hard work of internal improvement; in general, ‘too many managers have an exaggerated sense of the role new technology can play in improving performance’.

Add ‘r’ to lean and you get ‘learn’, and that’s what lean is: organisational learning in action. No other car company can replicate Toyota’s operational excellence because it contains 50 years of accumulated knowledge of responding to customers.

‘If you’re serious about lean, it becomes your strategy,’ says Drew. Or as BP’s John Browne puts it: ‘Our organisation is our strategy.’ Quite so. Not every organisation can invent a product that will revolutionise the world but every organisation can find a new way of doing things better every day.

The Observer, 23 May 2004

‘Don’t automate, eliminate’

WHY and how do large computer projects go so badly wrong? The column on ‘Why IT just doesn’t compute’ on 2 May triggered a large and vociferous postbag from practitioners who offered some illuminating first-hand insights into a very British malaise.

Almost everyone agreed that the overall picture is grim – perhaps even worse than it appears at first sight. Public-sector failure is, by definition, public and visible (which brings its own pressures). In the private sector it is hidden – which also means that managers do not learn from it, an important factor in explaining why improvement is so slow.

Thus one senior manager in financial services (one of the heaviest IT investors) said he had not seen a successful IT project in 25 years. The reason: internal politics ensured that project teams would be made up of political allies rather than the managerially and technically competent. Then, when a programme flagged, it was quickly pushed under the carpet.

The tell-tale sign, he said, was the announcement that the full savings would be realised in a previously unsignalled second phase – but unfortunately there were now higher priority demands on resources. ‘This necessitated another ‘whizz’ new project to be initiated quickly, which meant that it was not thought through properly, and so back to square one.’

How do such daft projects ever get off the ground? ‘Collusion and illusion,’ said one consultant, describing it as ‘a happy spirit of joint wish-fulfilment’ in which both sides tacitly agreed to enter a fantasy world in which assumptions about costs, benefits and risks were wholly artificial. Once the project was under way, it was too embarrassing for either side to admit the figures were make-believe, so it ground on with everyone hoping for the best – ‘even though that didn’t work last time or the time before or, come to think of it, ever’.

If truly honest assessments were carried out, many or most IT projects (rightly) would not be started; if they were monitored in the same spirit, they would (rightly) be killed before they were implemented, he says. But this, of course, is a message that no one wants to hear.

Other writers took issue with the British Computer Society’s diagnosis that lack of professionalism in software engineering was to blame, fingering crude human resources management as the real culprit. A contract-based, low-commitment, ‘plug-compatible programmer’ mentality had grown up among managers, one claimed, which was incompatible with quality, teamwork and the dialogue needed to keep projects on track. Certification, put forward as a solution by the BCS, was therefore irrelevant: the central issue was the ‘woeful’ quality of IT management.

The woefulness extended to purchasing. Several correspondents queried the conventional wisdom of outsourcing, noting that it could easily lead to lower quality and higher cost.

One reason may be the poor HR management outlined above. More fundamentally, the outsourcer answered to different shareholders than the customer and their interests were far from identical: for instance, suppliers had an incentive to lock in customers by building systems that were hard to maintain by anyone else – maintenance (sometimes neglected at the justification stage) on average accounts for 60 per cent of all software costs. And the power relationship is unequal: a major outsourcer can absorb the loss of one customer, but the customer can’t absorb the loss of its IT.

The government, said another, ‘tends to be a very poor buyer. Orders come down the line about what to do, but very little about why they are doing it’. When research found that few people were using the Inland Revenue’s expensively developed online services, managers said they hadn’t been told to encourage taxpayers to use them: if they had, it would have been done differently.

More generally, he argued that the increasingly sharp-edged contract culture was inexorably damaging trust and raising costs in all kinds of ways – the purchasing process got longer, gaming around contracts became more intense, and every time the contract was switched a new learning period had to be gone through. Do the nominal cost savings outweigh the loss of knowledge and trust? Probably not. ‘We are rapidly heading for a state where we truly do know the cost of everything and the value of nothing.’

Finally, many projects made the mistake of choosing to ‘automate rather than eliminate’. For example, companies were willing buyers of technology for automating call centres and routing calls, ‘but precious little effort is given to trying to eliminate the need for the customer to be phoning in at all’.

Indeed, many call centres exist for the sole purpose of answering calls that shouldn’t need to be made in the first place, institutionalising IT costs on top of an already ineffective system.

The blunt truth is that on its own, investing in IT neither cuts costs nor reduces headcounts. This was the original insight of the ‘re-engineering’ movement a decade ago, and it is reinforced by recent work by McKinsey showing that IT investment is a much weaker predictor of productivity improvement than overall management capabilities.

Together, management and IT are a potent force. But without adequate management capabilities, McKinsey warns that heavy IT spending may actually damage productivity. ‘The cost of investing in IT, in management time and capital, can be value-destroying, due to inappropriately scoped or over-engineered systems.’

The Observer, 16 May 2004

Kicking the six-figure habit

THE difference between companies is people. With capital and technology in plentiful supply, the critical resource for companies in the knowledge era will be human talent. Companies full of achievers will, by definition, outperform organisations of plodders. Ergo, compete ferociously for the best people. Poach and pamper stars; ruthlessly weed out second-raters.

This in essence has been the recruitment strategy of the ambitious company of the past decade. The ‘talent mindset’ was given definitive form in two reports by the consultancy McKinsey famously entitled The War for Talent. Although the intensity of the warfare subsequently subsided along with the air in the internet bubble, it has been warming up again as the economy tightens: labour shortages, for example, are the reason the government has laid out the welcome mat for immigrants from the new Europe.

Yet while the diagnosis – people are important – is evident to the point of platitude, the apparently logical prescription – hire the best – like so much in management is not only not obvious: it is in fact profoundly wrong.

The first suspicions dawned with the crash to earth of the dotcom meteors, which showed that dumb is dumb whatever the IQ of those who perpetrate it.

The point was illuminated in brilliant relief by Enron, whose leaders, as a New Yorker article called ‘The Talent Myth’ entertainingly related, were so convinced of their own cleverness that they never twigged that collective intelligence is not the sum of a lot of individual intelligences. In fact in a profound sense the two are opposites.

Enron believed in stars, noted author Malcolm Gladwell, because they didn’t believe in systems. But companies don’t just create: ‘they execute and compete and co-ordinate the efforts of many people, and the organisations that are most successful at that task are the ones where the system is the star’.

The truth is that you can’t win the talent wars by hiring stars – you can only lose them. New light on why this should be so is thrown by an analysis of star behaviour in this month’s Harvard Business Review. In a study of the careers of 1,000 star stock analysts in the 1990s, the researchers found that when a company recruited a star performer, three things happened.

First, stardom doesn’t easily transfer from one organisation to another. In many cases, performance dropped sharply when high performers switched employers and in some instances never recovered. More of success than is commonly supposed is due to the working environment – the systems, processes, leadership and accumulated embedded learning that are absent from, and can’t be transported to, the new firm.

Moreover, precisely because of their past stellar performance, stars were unwilling to learn new tricks and antagonised those (on whom they now unwittingly depended) who could teach them. So they moved, upping their salary as they did – 36 per cent moved on within three years, fast even for Wall Street.

Second, group performance suffered as a result of tensions and resentment within the team. One respondent likened hiring a star to an organ transplant. The new organ can damage others by hogging the blood supply, other organs can start aching or threaten to stop working or the body can reject the transplant altogether, he said. ‘You should think about it very carefully before you do [a transplant] to a healthy body.’

Third, investors punished the offender by selling its stock. This is ironic, since the motive for importing stars was often a suffering share price in the first place. Shareholders evidently believe that the company is overpaying, the hiree is cashing in on a glorious past rather than preparing for a glowing present, and a spending spree is in the offing.

The results of mass star hirings, as well as individual ones, seem to confirm such doubts. Look at County NatWest and Barclays de Zoete Wedd, both of which hired teams of stars with loud fanfare to do great things in investment banking in the 1990s. Both failed dismally.

Everyone accepts the cliche that people make the organisation – but much more does the organisation make the people. When researchers studied the performance of fund managers in the 1990s, they discovered that just 30 per cent of variation in fund performance was due to the individual, compared to 70 per cent to the company-specific setting.

That will be no surprise to those familiar with systems thinking. W Edwards Deming used to say that there was no point in beating up on people when 90 per cent of performance variation was down to the system within which they worked. Consistent improvement, he said, is a matter not of raising the level of individual intelligence, but of the learning of the organisation as a whole.

So if not by hiring stars, how do you compete in the war for talent? You grow your own. This worked for investment analysts, where some companies were not only better at creating stars but also at retaining them. Because these firms had a much more sophisticated view of the interdependent relationship between star and system, they kept their stars longer without resorting to the exorbitant salaries that were so destructive to rivals.

The star system is glamorous – for the few. But it rarely benefits the company that thinks it is working it. And the knock-on consequences indirectly affect everyone else too. As one internet response to Gladwell’s New Yorker article put it: after Enron, ‘the rest of corporate America is stuck with overpaid, arrogant, underachieving, and relatively useless talent.’

The Observer, 9 May 2004

Why IT just doesn’t compute

THIS YEAR the UK will waste £10 billion or more on IT projects that go over time, over budget and just plain wrong. Half of the total will be public money. Much of the waste is avoidable, the result not of technical problems but of dismal management and shoddy methods that, like a computer virus, endlessly replicate the mistakes of the past.

This is the grim picture painted in a new report, The Challenge of Complex IT Projects, released last week by the British Computer Society and the Royal Academy of Engineering. Although there are some good examples, Britain’s lack of professionalism in software engineering is both dangerous and economically debilitating, says the report.

The background is that as the country struggles to renew its infrastructure and improve delivery of services, it is spending more and more on computerisation. This year total IT spending will hit £23 billion, £12bn of which will come from the public sector, now the largest customer. Yet just 16 per cent of British projects are fully successful, according to one estimate, while others put the success rate even lower. This contrasts poorly with the US – where the success rate has doubled to 34 per cent over the last decade – and with the rest of Europe. The UK seems uniquely uncommitted to finding ways of improving.

Hence the regular diet of computer disaster stories, of which the current cock-ups at the Child Support Agency (a £400 million system that was switched on 18 months late and still leaves staff reliant on pocket calculators) or the magistrates’ courts (£318m and still counting – ‘one of the worst IT projects I have ever seen,’ according to the chairman of the Public Accounts Committee last year) are fully representative.

The report lays much of the blame for the failures on management shortcomings, in particular the failure of both customers and suppliers to follow known best practice. Contrary to many assertions, big IT projects have much in common with other large-scale engineering schemes. ‘It’s project management,’ says Dr Mike Rodd of the British Computer Society (BCS). ‘There’s not much difference between a big IT project and building a second Severn crossing.’

Professor John McDermid of York University, one of the report’s authors, laments that ‘in IT best practice is rarely practised. Projects are often poorly defined, codes of practice are frequently ignored and there is a woeful inability to learn from past experience’.

The difference between best and typical, he adds, is enormous. Although the researchers found no hard evidence that the public sector was worse at IT management than the private, the stakes are considerably higher.

For one thing, public sector projects are more visible. For another, they are often larger and subject to both technological and political ‘creep’. Moore’s Law – which predicts the quadrupling of hardware capability every three years – fuels ambition and complexity, which eat up potential cost gains and far outrun software improvements and the ability to manage them.

The danger is greatly amplified when customers are technologically illiterate, which is generally the case with British managers, but particularly with those in Whitehall and Westminster. The result is often naive over-optimism about the benefits of IT and gross underestimation of the difficulties. ‘It’s hard to be an ‘intelligent customer’ if you don’t know what you’re buying,’ notes McDermid.

New project-management procedures put in place by the Office of Government Commerce under Sir Peter Gershon will undoubtedly help. Nevertheless alarm bells should be clanging violently around at least three very large and highly ambitious projects now being undertaken in the UK public sector: identity cards, the merger of the Inland Revenue and Customs and Excise, and the NHS computerisation programme.

‘I’m not saying they can’t be done,’ says McDermid. ‘But I’d like some reassurance that the analysis has been carried out. Have the risks been analysed? Are the projects implementable? Will they do what Ministers want cost-effectively? Some searching questions need to be asked.’

Part of the questioning, of course, should be about the need for IT in the first place. The fact that the private sector is pulling back from customer relationship management and blanket e-enablement while the public sector is piling in should tell us something.

The e-government initiative has so far absorbed £8bn on the self-evident assumption that it is ‘a good thing’, with little evidence of benefit or payback. On a smaller scale, central specification of everything from call centres to document-processing technology similarly assumes that IT is the answer – often before the real question has been identified.

What’s the remedy for this perennial British weakness? Being clearly linked to wider British management failings, it is not susceptible to easy answers. Better, tougher customers are one part of the equation – McDermid regrets that short awareness courses for senior defence department managers have been discontinued, for example. On the supply side, the report sets store by increasing the professionalism of the software industry. The BCS launched a chartered IT professional qualification last week.

But the key remains breaking cottage-industry mentalities and getting both customers and suppliers to accept that there is a better way. The report mentions the pioneering work of the Software Engineering Institute at Carnegie Mellon University in the US, which has shown that delivering software projects on time and within budget is perfectly possible using well-tried ‘lean’ principles that build quality in from the start. It argues that Britain needs a similar institute to provide a focus and drive improvements.

In the meantime, we can only share the feelings of the Japanese programmers who have taken to replacing unhelpful and impersonal Microsoft error messages with wistful haikus:

Yesterday it worked.

Today it is not working.

Computers are like that.

Or

Three things are certain:

Death, taxes and lost data.

Guess which has occurred.

The Observer, 2 May 2004

Wellcome: ‘Online science journals 30 pc cheaper’

FREE online publication of scientific research – so-called open access – could cut scientific publishing costs by 30 per cent and still provide a viable business model, according to research released by the Wellcome Trust.

Open access, whereby authors pay fees to cover e-publishing costs, is challenging the traditional subscription-based journal publishers such as Reed Elsevier, Wolters Kluwer and Blackwell, which dominate the $7 billion global field in science publishing.

Wellcome, which funds £400 million worth of medical research a year, is one of several supporters that believe open access will advance medical progress by loosening the stranglehold of a few big publishers over the distribution of publicly funded research.

‘We pay for the research, then we have to pay again to look at it,’ said one person in the field. The NHS, Britain’s largest medical researcher, can freely access only 30 per cent of its own research, he claimed.

Discontent with the status quo has been fuelled by journal price rises of 200 per cent over the past decade, prompting interest in internet-based publishing. But though open access has many enthusiasts, doubts over quality and the long-term viability of the model have been raised.

The Wellcome report suggests open access could cut publishing costs per article from $2,750 (£1,553) to $1,950 (£1,100) for a top-line publication by slashing distribution and publishing costs.

However, Reed Elsevier, the biggest commercial publisher, said it would question these figures.

A Parliamentary select committee is now investigating the issue of open access.

The Observer, 2 May 2004

Labour pays consultants £1bn

CENTRAL government spending on management consultancy doubled last year, making it the UK’s largest single market for business advice, according to figures released yesterday by the Management Consultancies Association.

Central government shelled out nearly £1 billion and other public-sector bodies a further £300 million out of a total of £6bn spent with MCA members in 2003. The overall UK consultancy market is put at £10bn by the MCA. However, the MCA does not represent some of the top strategy consultants, such as McKinsey, whose services central government and other public bodies also used last year. So final official totals could be much higher.

Driven by the public service improvement agenda, ‘growth in public-sector consulting far outstripped that of any major area of the economy’, says the MCA.

While central government was the biggest public consultancy buyer, the NHS bought £25m worth of advice from MCA members last year, a 185 per cent rise over 2002, while income from defence soared from £42m to £83m.

About half the public-sector total was spending on consultancy related to four big contracts awarded by the Department for Work and Pensions, Transport for London, the Inland Revenue and the NHS. Public-sector clients were also the main purchasers of business process re-engineering, e-business, marketing and strategy, says the report.

The Observer, 25 April 2004

Goodbye industry, hello Government

WHEN the Management Consultancies Association (MCA) was established in 1956, fee income generated by its member firms was £6 million and the total number of consultants employed 1,500. As the MCA’s newly released report on the UK consultancy industry in 2003-4 shows, those figures have now swollen to nearly £6 billion and 40,000 – the highest on record.

The corporate aid and advice sector has become a substantial industry in its own right, important particularly for its consumption and generation of some of the country’s most skilled manpower. But it is also eagerly watched for another reason: as a unique barometer of the temperature of British business, holding up a mirror to management’s – and, increasingly, the Government’s – main concerns and preoccupations.

Working with consultants on a daily basis is no longer just for senior managers in PLCs. One of the big stories of the last year is the rapid rise of the public sector as a consultancy consumer as the pressure for service improvement grows. Another is the explosion of outsourcing, the implications of which touch employees far down the employment scale.

Both these trends have been building for years, and the extent of their implications is only now becoming clear, for clients as well as for the consultancy industry itself.

In the past, says Fiona Czerniawska, head of the MCA’s think-tank and the author of the report, the public sector has always been a countercyclical – even residual – user of consultancy, taking advantage of slack periods when prices soften and doing without when demand from the private sector (and prices) increase.

This time, however, the pattern may be different. For the moment at least, the balance between public and private has substantially shifted. The £1.3 billion shelled out last year for consultancy by the public sector was nearly double the figure for 2002. With three-quarters of that total, central government has become the UK’s largest single consultancy buyer.

The public sector’s unprecedented demand accounted for all of last year’s 13 per cent overall rise in British consultancy income – like-for-like revenues from the private sector actually shrank by 4 per cent.

Where does the money go? In contrast to the private sector, much of the growth in the public sector is attributed to consultancy around some very large IT projects (notably NHS computerisation), but clients also bought increasing amounts of traditional strategy and HR advice.

Given the political importance of public sector reform, consultants are likely to remain a familiar presence in Whitehall and local-authority offices at least until the election, Czerniawska believes. If last year’s tentative private sector recovery continues, there could be greater competition for consultancy services. Although prices have fallen by 15 per cent in the last couple of years, a public sector constrained by efficiency measures might find it hard to outbid the private sector for resources in particular areas.

How far is the public sector simply following the consultancy fashions set by private sector companies several years before? Inevitably, says Czerniawska, this is partly true. The public sector has found an appetite for IT-related consultancy, previously the staple of the private sector; customer relationship management and e-business services are also in demand, even as they have fallen sharply in the private sector.

Czerniawska notes that over the last decade the life-cycle of new consultancy products has accelerated, the curve of adoption and tail-off becoming noticeably steeper. ‘It’s hard for the public sector not to follow the same cycles,’ she says, although she believes managers are trying hard to learn from the mistakes of the early adopters. They may also be getting better at negotiating contracts.

If past patterns of adoption hold true, then the wave of outsourcing that has engulfed the private sector still has a considerable way to run.

Outsourcing has been around for a decade, but income from this source grew by 46 per cent last year. Note that the MCA figures only represent income for consultancy, not provision of the service itself, which would add several billion to the total. Even so, one-third of all consultancy activity now concerns outsourcing of one kind or another. So far, most of the projects are in the private sector, but given the public-sector projects presumably in the pipeline, demand for outsourcing has probably not peaked, says the report.

This raises important questions about the shape of the industry in the future. Technology now dominates the British consulting industry. Taken together, outsourcing, IT and programme management account for almost three quarters of consulting-fee income.

Czerniawska is not alone in seeing an industry polarisation looming. ‘There’s a sense that as the industry comes out of recession, it is finding itself in a quite different landscape from when it went in,’ she says. At one end of the scale, clients will face a few very large outsourcers and firms capable of handling big technology projects and at the other, a large number of smaller companies providing traditional advisory and management consultancy. ‘We’ve reached a fork in the road where ways of working move apart. The mid-ground may be quite a scary place to be.’

However, she cautions that there is no such thing as a consultancy product that doesn’t have a shelf life – except, ironically, the traditional strategy and advisory lines that have recently been taking a back seat to technology. Will the outsourcing vogue eventually hit the buffers, as appears to have been the case with IT-related consultancy and CRM? If so, then more industry upheaval could be in the offing, although few people would bet against technology continuing to be a dominant factor.

Meanwhile, industry gossip is that having spun off or otherwise disengaged their consulting arms in the wake of the recent series of corporate scandals, the big accounting firms such as PWC, KPMG and Deloitte are now quietly re-forming them.

This is a scenario that has been played out many times before and is almost inevitable given the low returns from auditing relative to consulting. Also inevitable, despite the proscriptions of America’s Sarbanes-Oxley Act, will be new areas of controversy over how far accounting and consulting can be kept separate.

Even leaving aside economic vicissitudes, with markets evolving as fast as industry structures, a quiet life is unlikely to be the lot of either the consultants or their clients in the coming decade.

WORKING WITH MANAGEMENT CONSULTANTS, Special Report, The Observer 25 April 2004

So what’s the big idea?

DELIVERY, delivery, delivery – with political parties now competing as much on managerial as on ideological credentials, it’s perhaps not surprising that pressure to improve public services has triggered an explosion of demand for consultancy in the public sector. Last year the public sector consumed £1.3 billion of advice, almost double the 2002 figure. Three-quarters of the total derived from central government, making it the UK’s largest consultancy market.

Although much of the work was based around giant public-sector IT-based contracts (Inland Revenue, Department of Work and Pensions, Transport for London and the NHS), public-sector appetite for other consultancy lines also increased sharply – strategy and HR but above all operations: programme management, business process reengineering and change management.

Are public-sector clients learning the lesson that the real benefits of investment in IT are only reaped if it is accompanied by joined-up change in other areas – that if you automate an existing inefficient process you just get bad results quicker?

‘Public sector managers are conscious that they have a special set of problems and they are doing their best to think them through intelligently,’ says the MCA’s Fiona Czerniawska. Indeed, an encouraging proportion of the 2004 MCA best management practice awards went to public-sector assignments – including Westminster Council, which carried off the platinum award.

There’s a way to go, however, before the results of all change assignments are so successful. This is true of the private sector too – large-scale change initiatives are notoriously prone to failure everywhere – but the constraints that the public sector operates under make them especially tricky.

This is partly because, in the absence of the hard edge of profitability, measures of success are less obvious but also because the very urgency of the pressures brought to bear from above can fight against change. Fear is the mortal enemy of rational reflection – jumping too energetically into e-government and other initiatives may be storing up trouble for the future.

Thus Anne Bennett, senior consultant at ER Consulting, believes that the huge amounts of money involved don’t necessarily mean that the public sector is a mature buyer of consultancy services. In the Whitehall procurement culture, consultancy comes and goes as an acceptable object of spending.

Nor is it good enough, she says, for consultants to pitch up and simply give clients what they think they want. There are a few breakthroughs, but too many ‘solutions’ fail to challenge traditional thinking and may actually make matters more complicated. For example, public-sector outsourcing sets up new boundaries for people to work across, making joined-up activity and organisational learning that much harder.

‘The quality of the debate is not as high as it should be,’ says Bennett. The consultancies need to put their house in order, just as clients should beware putting too much faith in bandwagons and received formulae of ‘best practice’.

Becoming more self-reliant is clearly wise, although not easy to do. Central pressures are hard to resist, even when they do a powerful job of generating internal resistance to change.

‘The starting point for many clients is, “how can we get them to do what we want?”,’ says John Seddon, head of lean service specialist Vanguard Consulting. Under pressure to meet targets, they assume the starting-point for change is an immediate programme of communications, training and projects. ‘Because it doesn’t proceed from systematic knowledge and data, that just produces arguments which no one can win except by coercion,’ says Seddon. The result is compliance at best, disengagement and hostility at worst. Hence the myth of middle managers and frontline workers obdurately hostile to change.

In fact, it is often senior managers who are most uncomfortable with unlearning the current approach. Before altering anything it is essential to go back to first principles – including the purpose of the service and whom the changes are intended to benefit as well as the ‘what’ and ‘why’ of the present system, especially accurate measures of real demand and how it is being met.

That may sound obvious, but such basics are often lacking in both public and private sectors – with the added complication in the public sector that managers instinctively face upwards and many change initiatives implicitly assume that the Government is the customer rather than the individual citizen. It is an important reason why frontline workers get so demoralised and frustrated and citizens fail to perceive the desired improvements.

The real object of reform must be to break away from the cycle of wrenching, one-off, target-driven change initiatives and put the improvement methods in the hands of those who carry out the work every day. The benefits of managing change in this way are not just that it generates surprising results – service times and costs slashed at the same time as quality is raised. It also makes longer-term savings and productivity gains by ensuring that IT and automation are ‘pulled’ in where necessary to support core work that is already lean rather than automatically ‘pushed’ from above.

In some cases, IT turns out to be much less important than the official view assumes. The real issue in service improvement is reconnecting human brains with work as much as artificial ones. It is building competence and knowledge and a system that can self-adapt to changes in demand.

‘The important thing about the public sector is that it’s vocational,’ says Seddon. ‘Regimes of targets and central specifications have taken the heart out of it. We need to give people the method to put it back.’

WORKING WITH MANAGEMENT CONSULTANTS, Special Report, The Observer 25 April 2004

The difference a year makes

AS EVERYONE knows, the British work the longest hours in Europe. And they are getting longer: the average working week in 2003 was 39.6 hours, 40 minutes longer than it was in 1998, according to a CIPD survey last year.

Altogether, 26 per cent of Britain’s workers – nearly 4 million – now work more than 48 hours, the maximum allowed by the European Working Time Directive without an opt-out, compared to only 10 per cent six years ago. Nine per cent work more than 60 hours.

Working such long hours comes at a cost. Most respondents in the CIPD survey noted harmful effects on performance, including making mistakes, taking longer to finish a job or not doing it so well. More than 25 per cent said they had suffered physical or mental ailments as a result. Many claimed their relationships suffered, too. There is good evidence that long hours and lack of flexible working options are significant contributors to stress, which costs industry £370 million a year.

Since, as the saying goes, no one goes to their grave wishing they had spent more time at the office, why do we do it? The short answers are ‘pressure of work’ and ‘because we always have’. But, like many other aspects of British employment relations, these are just trap doors to other questions, all of which lead back to the vexed issue of productivity: is the propensity of workers to work long hours a competitive advantage (‘flexibility’) to be preserved at all costs, or a symptom of British management’s inability to organise people to work smarter?

On the test issue of the moment – whether Britain should retain the ability for individuals to opt out of the Working Time Directive’s maximum of 48 hours – opinions at first sight divide pretty much on market lines.

Employers, supported by the Government (and a House of Lords committee), want to keep the opt-out for flexibility and competitiveness, while the TUC is waging a campaign to abolish it. Britain’s long hours are a symptom of bad management and poor productivity, it argues; ending the opt-out is the only way of beginning to shift Britain’s pervasive long-hours culture.

But the polarity between the positions may be less extreme than it appears. For example, although it supports the opt-out, the EEF, the manufacturers’ association, believes that the need for it will be substantially reduced if the regulations automatically allow hours worked to be averaged out over 52 weeks.

That sounds technical. But what it implies is that companies should start thinking in terms of annualised hours contracts for staff – which just happens to be one of the most potent ways advanced companies have found of planning their use of resources better, to the benefit of employees and productivity.

In a 2002 report, Professors David Bell and Robert Hart of Stirling University found that in companies that had adopted an annual hours cycle, workers were 13 per cent better off and did less than half as much overtime as those working conventional hours. Annual hours were often used to improve plant efficiencies in companies that experienced volatile and unpredictable demand. Blue Circle Cement, BP Chemicals, Tesco, Zeneca and Gleneagles Hotel have such systems in place.

‘Flexibility comes in different guises,’ notes Bell. ‘One is the traditional employer-driven, spur-of-the-moment variety; the other is by agreement and consultation, as with annual hours. It’s not hard to guess which would be more appealing to employees.’

David Yeandle, deputy director of employment policy at the EEF, has no problem with this. ‘Absolutely. Annualised hours contracts do provide more flexibility of the kind companies need.’ The EEF provides guidance for companies wanting to set them up.

So why aren’t more companies moving in this direction? At the moment, for all its attractions, only a small minority of companies have done so. One reason is the high set-up costs: it may take two years of trial and error and intense consultation to get a scheme up and running, says Mike Sweeney, professor of operations management at Cranfield Management School.

That’s hard enough. It may be even harder to get started. Annualised hours are easier to put in place in small firms or where there’s a trade union to provide a negotiating point – and unions and collective agreements are diminishing in importance.

Second, Sweeney muses, it may be impossible to do without a measure of trust in the first place. ‘How do you get flexibility in the labour force? It’s a good question,’ he says. ‘We have a potential competitive advantage in our labour laws, but firms don’t always use it. It could be that it’s good relations that enable the essential initial leap.’ In other words, only firms that are already good at dealing with employees can move to the next step.

And how many of those are there? This is where hours rejoin the larger management issue. After all, long hours haven’t given Britain better productivity, a better standard of living or more effective organisations – rather the reverse. Although Britain often claims to be a wealthy country, wages are lower and purchasing power less than in other developed economies.

The truth is that working patterns are complex and ingrained, both symptom and cause, locked into a much larger employment relations system. If a country’s labour market institutions reflect its values, as has been suggested, the prevalence of long standard hours over advanced practices like annualised hours contracts tells its own story.

In Britain most companies are still less concerned with picking up and diffusing new management techniques for improving productivity and competitiveness than with that traditional British priority: muddling through.

The Observer, 18 April 2004

Give that carrot some stick

DURING the Battle of Britain, German fighter pilots were paid a competitive bonus for each enemy Spitfire or Hurricane they shot down.

Sounds sensible? Luckily for us, it was a disaster. The aces behaved like prima donnas, the non-aces were demoralised and resentful, and they were all diverted from their strategic job of protecting bombers. Still worse (or better), they systematically overestimated the number of British kills, giving German high command a falsely optimistic picture of the overall situation, perhaps even changing the course of the battle.

Almost everyone instinctively accepts that pay should reflect contribution rather than simple presence. With many private-sector companies won over, spreading performance pay to the public sector has become an article of faith for public service reform. Next week’s strike by job centre and benefits staff, the bitterest civil service dispute in a decade, is partly about performance pay, as is the stand-off between employers and university teachers.

Yet despite its near ubiquity, there is strikingly little evidence that pay for performance has the desired effects in practice and plenty that it is often counterproductive. As the Second World War example illustrates, incentives are all too often divisive and highly vulnerable to the law of unintended consequences.

Making them dependent on appraisal – a favourite but truly toxic combination – ensures that results are arbitrary and subject to favouritism, as does the assumption that performance can be judged in isolation from the system as a whole. Ed Lawler, an acknowledged guru of high performance, has one word to say of merit-based pay assessment: ‘Don’t’.

So why do managers so obstinately believe in it? The most obvious reason is that rewards do alter behaviour – so it’s easy for managers to believe that they have fixed the problem, or they will have after another tweak to the incentives. What is less obvious is the damage they do in the long term by, in effect, managing people by bribes.

Performance-related rewards derive ultimately from behaviourist experiments with rats in labs. In this context, rewards are simply the opposite side of the coin of punishment – as Alfie Kohn pointed out in a still-to-be-bettered Harvard Business Review article on the subject a decade ago: ‘Do this and you’ll get £100’ is no different in kind from ‘Do this and you’ll get a good kicking’. Both are manipulative and controlling, classic manifestations of command-and-control management. And just like punitive management, management-by-reward produces movement rather than motivation, compliance rather than commitment. Kohn says: ‘Do rewards motivate people? Absolutely. They motivate people to get rewards’.

So what should a good pay system look like? Pay being incapable of motivating on its own, it follows from the above that one of its most important jobs is actually negative: not to demotivate. That has some significant implications. For the individual, fairness, openness and simplicity become the most important characteristics.

At the same time, there’s no denying that pay is a powerful shaper of organisational culture. The snag is that the traditional hierarchical job-based pay systems that cemented bureaucratic organisations can do little to foster learning new skills or encourage teamworking – the main prerequisites of today’s high-performing organisations.

From the organisation’s point of view, therefore, the need is also negative: to eliminate the pinch-points where current pay policies work against desired results such as customer focus and improving the system.

Some companies are basing part of their pay policy on competence rather than the job, so people are paid more as they acquire extra skills.

Often both needs are served by dispensing with individual incentives, whose ‘gains’ can easily be wiped out by their disadvantages for the wider organisation: reinforcing barriers among functional areas and business units, demotivating others, promoting competition rather than teamwork. The latter is a powerful reason why performance pay is so strongly resisted in professions like teaching.

Crudely, incentives often cause people to do things that benefit themselves but harm the company. Alternatively, they spend a disproportionate amount of effort on one aspect of the work to the detriment of others – a particular problem where jobs are complex – making performance measures hard to set up.

Incentives are particularly damaging where they take the form of payments triggered by reaching fixed targets. Incentives of this kind trigger intense political lobbying to negotiate achievable figures and then cheating of one sort or another to hit them. Success becomes an internal matter of satisfying the boss rather than an external one of satisfying the customer.

To change this inward-looking focus, some large companies are throwing out pay systems that rely on fixed performance contracts. Instead they judge performance relative to competitors and pay appropriate bonuses retrospectively, allowing for special circumstances to be taken into account. Typically, the bonus is determined by the performance of the business unit rather than the individual subsidiary, so that people have an incentive to work for the greater good.

The disadvantage is that the larger the unit, the less opportunity individuals have to make a difference.

The inconvenient truth is that pay is both extremely important and a blunt instrument. It can make things go horribly wrong but by itself can’t make them go beautifully right. Individual incentives are the last thing an employer should want for multi-dimensional jobs needing lots of teamwork. Don’t throw the straightforward salary out just yet.

The Observer, 11 April 2004