Britain is a rip-off. Why?

IN 1750, RECOUNTS the great historian Eric Hobsbawm, the first things the foreign visitor to England noticed as he or she stepped ashore in Kent were the tidiness of the countryside and the eye-watering prices of the inns.

Nothing much new there, then. In its latest annual calculation of living costs, Mercer Human Resource Consulting reported in May that London is now the second-most expensive capital city in the world, 19 per cent dearer than New York, the baseline, and trailing only Tokyo.

‘Start the morning with a glass of orange juice and you can forget about that vacation. Restaurants should just merge with second-mortgage companies,’ gagged a Time correspondent. Others gasp at the world’s highest rail, Tube and taxi fares. Last winter, two enterprising Londoners won headlines (but little astonishment) by recounting how they had saved money on a trip to Liverpool to watch a football match by flying via Belgium rather than taking the train.

Prices that are out of whack with the UK’s no-more-than-average wages are storing up problems for the future.

The spiralling price of housing in southern England is now both socially divisive and economically dangerous, according to Shelter. One supermarket group buses staff from Tottenham (north London) to Croydon (south London) because the low-paid can no longer afford to live near their work. Officials admit that London’s reputation for costliness is driving away not only businesses and tourism, but its own citizens: up to a third of UK residents are thinking of leaving the country in search of a lower cost and higher quality of life, according to one recent survey.

Meanwhile, a committee of MPs concluded this week that Britons aren’t saving enough because they don’t trust financial service companies not to rip them off in the future, as they have done in the past. Price comparisons and other new services that would make markets work better are stunted by telephone companies keeping broadband prices two or three times higher than in France, for example.

In theory, prices that are too high can’t exist for long in a competitive marketplace. Consumers will stop buying and new entrants will be attracted by fat profit margins. As Adam Smith pointed out, the rate of profit is naturally higher in poor countries than in rich ones, where it is normally competed away.

In a few cases, this happens according to the textbook. In scientific publishing, traditional high-price, high-margin incumbents are being challenged by new entrants with a lower-cost distribution model built round the internet. The newcomers insist they will still be profitable – but margins will be thinner.

Or take the UK’s private healthcare industry. When the government initially asked for tenders from private firms to carry out day surgery for the NHS, no domestic company made the list: UK consultants, it transpired, charged double the rates per operation of their foreign counterparts. In a more recent contest, however, UK firms were more competitive. The consultants had brought their charges into line. More cynically, you could say they couldn’t get away with it any more. And here’s a clue.

Part of the reason for high prices is high costs – at least some of which is down to poor management. The counter-intuitive lesson of ‘lean’ production methods is that poor service is always more expensive to produce than good service. Too few UK companies are lean, a factor that is reflected in the country’s poor relative productivity performance.

But another factor in price levels is the intensity of competition. Competitive intensity has several elements, one of the most important being customer expectations. Good firms tend to have demanding customers, which stands to reason: picky customers keep you up to the mark by requiring value for money and telling you if you don’t give it.

And we are not demanding enough customers – something that enrages visitors almost as much as the prices. As the Time writer put it: ‘New Yorkers believe an almost-sort-of-affordable city is a civil right, and anyone who threatens that right deserves to be screamed at and tipped really poorly. Londoners believe a city is a noble and costly test of endurance.’

There is academic support for the idea that this does us no favours. Customers, says Chris Voss, professor of operations management at London Business School and leader of a team which has compared consumer behaviour in the UK and US, play a vital part in developing service quality. Confirming the stereotype, his research found that the British complain less about poor service than do Americans. ‘We don’t give as much feedback, so organisations have less knowledge about how to improve service: sometimes managers don’t know just how bad it is,’ Voss says.

The cause is cultural, but the result is a self-fulfilling prophecy: service is bad because that’s what we expect and let companies get away with. Alongside government and managers, consumers can’t escape their responsibility for making the economy more competitive.

You can see what’s coming next. If we get the service we deserve, the conclusion is self-evident. Stop suffering in silence. Loosen the stiff upper lip. Go on: rant, rave, whinge, moan, shout, scream and complain. Be as embarrassing as possible. It’ll make you feel better – and it’s your personal contribution to raising the standard of British management.

The Observer, 1 August 2004

Awopbopaloobopalopbamboom!

FIFTY YEARS ago this month, a 19-year-old white truck driver walked into a recording studio in Memphis and almost by accident cut two lithe, sexy and ferociously self-confident tracks that redefined a culture and set the music industry on its ear.

Actually, Elvis Presley was far from the first recording idol. ‘That’s All Right’ certainly wasn’t the first rock’n’roll record, and although it was a sensation locally it wasn’t even a national hit. For Memphis to claim 2004 as the 50th anniversary of the birth of rock’n’roll is self-serving braggadocio to rival some of the early rock’n’rollers.

Still, that’s showbiz, an industry whose relationship with reality has never been more than a one-night stand. Ironically, 2004 is more likely to be remembered for the traditional music industry’s funeral – killed off by the new economy in the shape of Apple’s iTunes and GarageBand. These two innovative products, from a different industry altogether, let consumers download and share files over the internet and make their own music – that is, do what they wanted all along.

In fact, for anyone with a sense of history, the ghostly reverb of 1950s guitar solos is plainly detectable behind the unmaking of the industry today. For the downfall of the majors – the merger approved by the EU last week between Sony and BMG is the last bar of a knackered old record rather than the first of a new one – is not so much the new economy as the repetition of a very old pattern of behaviour: such rapaciousness and stupidity were already well in evidence as rock’n’roll was born.

So hear my story, sad but true… Despite today’s rewriting of history, the music establishment, like many adults (‘Who is this Elmer Prescott?’ asked my mother bemusedly), was initially baffled by rock’n’roll, missing and then denying the real significance of those first Memphis recordings as unerringly as it would do several subsequent industry turning points.

What Presley invented (and if he hadn’t, someone else would) wasn’t a new musical form but a new image for an old one. Quickly reinforced by a stream of contemporaries, he created a mass market for a black-inflected music that white radio stations would play and white kids could buy – which they did, in their millions.

The music industry was appalled. The effect of the incomprehensible words and sexed-up rhythms on the kids was one thing, but to the record majors the wider import of songs like ‘Tutti Frutti’, ‘Great Balls Of Fire’ and ‘Rip It Up’ was as clear as a ringing bell: they were losing control of the business. The inmates were taking over the institution and needed to be put back in their place.

By 1959 the establishment had pretty much succeeded. It bought up the indies’ best artists, covered the originals with polite white singers, and watered down the lyrics. It helped that many of the main protagonists (Presley, Buddy Holly, Chuck Berry, Eddie Cochran and Little Richard) had self-destructed or were otherwise out of commission. After just five years, the first wave of rock’n’roll was dead.

Trouble was, in reasserting control, the industry had also flattened the market that the rock’n’rollers had created. It was only resurrected by another injection of self-generated energy, this time not from southern America but, improbably, from a north British seaport. In context, the surprising thing about the Beatles is not that an unfortunate A&R man turned them down, but that anyone had the gumption to pick them up. The same could be said of punk a decade and a half later.

Incomprehension and short-sightedness were also the story in technology, which twice bailed out the industry in spite of itself. Tapes had the unfortunate drawback of allowing consumers to record what they wanted, but the record companies were soon reconciled by the discovery that the demand for music on the go didn’t oblige them to do anything new, just sell the old stuff in a new format – a wheeze that was even more satisfying when pulled off again, more expensively, with CDs.

But the whirligig of time, as a more elevated wordsmith once wrote, brings in his revenges.

When, in the final instalment, the internet arrived, the music companies again missed the beat. It took ingenious young consumers and a computer firm to figure out how to make and distribute music in digital form. But by now the industry was out of luck as well as tune, its credit with both consumers and musician/suppliers as usable as a worn-out 78. They had been ripped off too often by poor quality, excessive prices and cynical issuing policies to experience anything but pleasure when technology at last offered them the chance to help themselves. Unsurprisingly, a commercial policy of suing the keenest consumers for piracy turned out to have limited effect.

Rock’n’roll is long gone, and Presley (another irony of today’s celebrations) ended up not the king but the perfect symbol of the music’s decay, corrupted into a grotesque parody of the sentimental ‘entertainment’ the music industry preferred.

But what rock’n’roll had briefly but exuberantly hinted at, the internet confirmed: although companies can control what, when and how a product is delivered for a while, they can’t do it forever. At that point, it’s too late to discover that it’s the customer that really matters, not the technology. Fifty years on, with a little help from their friends, customers have killed off the seller’s market and the music companies that exploited it for so long. Read my lips: awopbopaloobopalopbamboom.

The Observer, 25 July 2004

Remember us, Sir Humphrey?

CAN IT be done? Will it be done? Although the headlines in last week’s papers were all about the 100,000 civil service jobs scheduled to go in the government’s latest spending round, whether it can achieve its goals will depend not on downsizing but on the biggest shakeup of the way government does business in 100 years.

As acknowledged in Sir Peter Gershon’s efficiency review, which was published last week, taking more than 16 per cent out of central government running costs, removing £21.5 billion from administration and converting it into hospitals, schools and policemen requires nothing less than the transformation of the relationship between central government and the front line.

This is why, when 40 top central and local government officials were interviewed for a report on what the ‘reforms’ looked like to those who would have to carry them out (www.kablenet.com/kablereport), the research team, of which I was a part, found a paradox: while everyone agrees that the scope for improvement in efficiency is huge, actually getting there would be a heroic achievement. Cuts, yes. But not many insiders believe they will benefit ordinary citizens.

Why is it so difficult to do the bleeding obvious? After all, it is stupid and unacceptable that every government department has its own non-communicating payroll, human resources, finance and property management arrangements; that the public sector has 30,000 back offices collecting and processing information, only 2 per cent of which are big enough to stand alone; or that, as Soham horribly underlined, there is no national criminal intelligence system, partly because the 52 separate police authorities’ computers (and sometimes officers as well) won’t talk to each other.

The barriers are formidable. ‘There is no culture of sharing across Whitehall’, noted one report respondent – rather the reverse. Civil service incentives favour empire-building, not sharing for the common good, and distrust between departments is pervasive. Each Yes, Minister rerun scores a palpable hit: Sir Humphrey is alive and well in every department.

Local service providers, which handle 80 per cent of official interactions with the public, bitterly resent one-size-fits-all policies and methods handed down from on high without regard to local circumstances.

‘Central government takes a central-government-centric view of public services. Elsewhere people are quietly – and sometimes quite efficiently – getting on with it,’ says a commentator. If the reforms are seen as just another top-down cost-reduction target – ‘bend over, here it comes again,’ as one cynic described them – they will fail.

Likewise for technology. So far the government has committed £8bn to obliging the public sector to e-enable service delivery, with no evidence of payback or customer appeal. If – as seems likely from Gordon Brown’s statement – it does the same with back-office services, mandating investment without regard for the citizen/customer, the results will similarly fall short.

The reality is that the government, let alone the rest of the public sector, is not a unified whole but a vast and untidy agglomeration, with thousands of decision-making locations bristling with different agendas.

Like a pile of random iron filings, the interests of the myriad actors point every which way – downwards to customers, upwards to ministers, inwards to themselves, switching unpredictably with political currents. These randomised interests can’t be managed on traditional lines. They can’t be aligned by fiat, by appeals to efficiency or, as the government seems desperately to hope, by IT. There are no levers to pull.

In his review, Gershon noted that success of the programme depended on political will, incentives for managers to take tough efficiency decisions, and the creation of ‘change agents’ to get things done. But this is the wrong way round. The spending round has lost sight of the reason and purpose for the activity in the first place: the customer/citizen.

The only force strong enough to magnetise the filings to face in the same direction is focus on the citizen: improving service at the point of delivery.

It’s not enough just to command more infantry into the front line. There needs to be a method. How many doctors, teachers and policemen? What kind of support services? It’s only by going back to the customer – establishing real demand, measuring current capacity against that purpose and then reorganising the work to do it better – that method can be established and competing interests pulled into line.

As hundreds of initiatives across the wider public sector have demonstrated, starting from customer needs pulls everything into place after it. It tells you how many people you need and where. It tells you what services can be shared, and what kind of automation you need to do it. In short, it tells you what can be cut and what needs to be spent.

And that is invariably less than the centre supposes. For the best news is that this dynamic dispatches the assumption (perversely as strong in government as anywhere else) that better service costs more. On the contrary, bad service always costs more to deliver than good. The better the service, the less need for regulation, inspection and audit (cost: £7 billion a year), and the less need for targets, corrections, rework and management interference. It goes beyond management by compliance and gives public servants back their vocation. As our report concludes, it is possible to achieve public-sector efficiency by improving service to the customer – but not the other way round.

The Observer, 18 July 2004

Counting the wrong beans

THE ACCOUNTANCY profession is in denial, betraying its past and endangering the present. Five years after the dotcom bubble, three years after the collapse of Enron and the evaporation of Arthur Andersen – then one of the globally pre-eminent audit firms – the business world is no nearer any reliable means of valuing the intangible assets that are critical to the way things are made.

In fact, says Clive Holtham, professor of information management at Cass Business School in London, the situation is worse than five years ago. ‘The issue of measurement and reporting of intangibles is not only being ignored, there are active efforts afoot to play down its significance by the accountancy profession,’ he claims.

When, in the late 1990s, UK companies were given greater flexibility to report intangibles, not one top-100 finance director showed any interest, according to a Loughborough University report. Most companies are indifferent or hostile to new measures, Holtham believes.

The result is a dangerous paradox. With 75 per cent of wealth-creation now reckoned to be attributable to intangible assets such as knowledge and information, rather than physical assets, the numbers accountants give to investors, bankers and indeed their own managers are increasingly irrelevant.

Failure to come up with a robust way of measuring intangibles was at the heart of the dotcom boom and bust, the most spectacular miscalculation and misallocation of capital since the South Sea Bubble. A convention of fortune tellers would have blanched at the analysis used to justify some investment decisions, says Holtham. ‘Yet the accountancy profession appears to be using the Enron scandal to retreat into seeking reliability of traditional tangible financial statements, and paying even less attention to extending reporting.’

This can only increase the risk of the same thing happening again.

The accounting retreat betrays not only investors, but companies too. Whether companies choose to report on ‘soft’ issues – brand, reputation, human capital, learning, innovation – or not, they, like investors, still have to allocate resources. That’s hardly made easier when the accountancy stance gives credence to the view that measuring intangibles is both unimportant and impossible.

The absence of agreed overarching accounting principles at least has the advantage that companies can choose how they measure intangibles internally, points out Holtham, since they don’t have to satisfy formal stock exchange requirements.

As to importance, consider Shell, whose current travails are a classic case of knowledge mismanagement. Some high-level officers were clearly aware of the discrepancies in reserve estimates and the dangers they posed to the company’s reputation, but their doubts were suppressed. There also appears to have been a strong element of groupthink on the board. Shell has always prided itself on being a socially responsible company, but it evidently didn’t devote enough resources to nurturing the cause.

‘At some stage something happened to Shell’s values that made it acceptable to put up figures that weren’t completely above board,’ Holtham says.

On the other hand, explicitly managing intangibles, as elusive and unpindownable as they seem, can bring substantial benefits. This is because, as the Shell and Enron cases demonstrate, managing intangibles is closely linked to issues of risk and sustainability.

In a report entitled Unlocking the Hidden Wealth of Organisations, Cass researchers have developed a framework for looking at intangibles and identified a group of organisations that, despite the lack of official encouragement, have decided to cultivate their intangible production factors.

They include B&Q (sponsor of the report), Whitbread (‘from manufacturer to brand manager’), Bloomberg, the UK Fire and Rescue Service, MMO2, Italian cosmetics company Intercos, the Austrian Research Centres and Swedish learning consultancy Celemi. Together they chart some of the different ways in which companies can cherish their invisible assets and use them as a source of competitive advantage and wealth creation.

But it shouldn’t be left to practitioners to pioneer new accounting methods, says Holtham. He contrasts the timid approach of accountants and most managers to devising measures for the things that matter with physicians’ commitment to accumulating a deepening evidence base. ‘Medicine as a profession has a deep belief that it can use evidence to develop better ways of making decisions than in the past,’ says Holtham. ‘You can’t but marvel at that compared with what’s not happening in business.’

Holtham, himself accountancy trained, notes that the first written script, cuneiform, was devised in Mesopotamia 5,000 years ago not by storytellers but (in effect) by accountants, to record transactions and stock levels for an increasingly settled society – a brilliant social and economic invention. Other accounting innovations such as double-entry bookkeeping in 15th-century Venice and today’s financial accounting were equally daring intellectual advances.

We’re now desperately in need of a new cuneiform for the knowledge era, but there’s little chance we shall get it from a profession that seems determined to disavow its illustrious intellectual heritage.

The Observer, 4 July 2004

Masterclasses they’re not

MANAGEMENT is in crisis and the MBA, the flagship course of most business schools, bears a substantial part of the blame. Such is the thesis of Henry Mintzberg’s indignant and stimulating new book, Managers Not MBAs.

Although the headline message itself is not new, having been refined over at least 15 years in countless revisions, this is nonetheless a powerful statement and a terrific read: Mintzberg is a fine writer with a caustic turn of phrase, and to make his case he draws on inside knowledge as both a member of the academy (professor of management at McGill University, Montreal) and a distinguished strategy researcher in his own right.

But what gives the book its resonance is that it is ‘a book about management education that is about management’. Both the MBA and its practitioners are deeply troubled, in Mintzberg’s view, but locked in an infernal embrace which means that ‘neither can be changed without changing the other’.

This is because the assumptions of the classical US-based MBA – some accidental, some ideological, many self-contradictory – have worked themselves deep into the body commercial and politic, where they are rarely questioned. The consequences, Mintzberg believes, are deeply corrupting of education, management itself and the wider society.

He starts from the principle that trying to teach management to someone who has never managed, as most MBA programmes do (Mintzberg specifically exempts some UK courses from his criticism), is as misguided as ‘trying to teach psychology to someone who has never met another human being’.

Worse, by elevating management ‘science’ (analysis) over its equally important components of craft (experience) and art (vision), current business teaching equips unsuitable people with both potential weapons of mass destruction and the massive overconfidence to use them. Paradoxically, MBAs teach little about the real, messy, difficult business of managing: they teach business functions and management disappears in the interstices between them.

‘MBAs haven’t been trained to manage, and many don’t have the will for it,’ Mintzberg writes. ‘But they are determined to lead. So a trajectory has been developed to take them round management into leadership. The trouble… is that many of these people make dreadful leaders, precisely because their hands are off the business. In fact, the landscape of the economy is now littered with the corpses of companies of headstrong individuals who never learned their business.’

An exaggeration? If MBAs are so disastrous, how come the US, the home and capital of the MBA, is still the most vibrant economy and its companies the most powerful in the world? Mintzberg’s answer (implied, not fully spelled out) is that American companies have succeeded despite rather than because of current business education. A striking number of the most admired managers (Jack Welch, Bill Gates, Warren Buffett, Michael Dell) don’t have MBAs, and plenty of MBA-led firms, with Enron at their head, have plunged to disaster.

Moreover, the costs of today’s paradigm – ‘training the wrong people in the wrong ways with the wrong consequences’ – are escalating all the time. In the grip of their own business goals, business schools are legitimising practices they should be challenging. Businesses are frenziedly becoming leaner and meaner, driving people harder and exploiting both workers and customers in the name of (academically sanctioned) shareholder value. Startlingly, Mintzberg argues that, rather than producing flexible, progressive entrepreneurs (on the whole, MBAs aren’t particularly entrepreneurial), MBAs are breeding a new race of big-company bureaucrats, at home in the world of formal analysis and control but ill-equipped to operate in the webs, networks and teams of today’s evolving organisations.

As for society as a whole, how strange, notes Mintzberg, that an America so proud of throwing off the yoke of British aristocracy should now be congratulating itself on producing one all of its own. America particularly, he argues, is a society tilted crazily out of balance, its public sector distorted and demeaned by ‘MBA management syndrome’, ‘ambling around like an amnesiac pretending to be business’, developing mission statements, looking for ‘customers’ to serve and merging everything in sight.

Is there any light amid the gloom? Some. Mintzberg exonerates the UK from blanket criticism, noting the growing number of specialised MBA programmes (MBA without the A), including for public-sector managers, and long-established courses for practising managers (the right people at least).

A growing number of academics accept, at least in private, that something is amiss with a system that is as fragmented and departmentalised as the commercial firms it criticises, and that competes primarily on the basis of the salaries graduates can command rather than more fundamental criteria. Mintzberg is not alone in believing that breaking vested academic interests is an important part of the way forward.

Finally, thoughtful practising managers are becoming increasingly troubled by the contradictions that MBA management leads them into. Although some corporate social responsibility is cynical PR, its huge and instant popularity can also be viewed as something more hopeful: a massive pent-up desire for legitimacy which today’s officially approved management models can’t provide.

Again counterintuitively, what management education needs is not ‘relevance’ and ‘practicality’, as the parrot cry has it. Managers, as Mintzberg points out, live practice every day. What they really need is insight: theories or models that enable them to make sense of practice, learn from experience and reach better judgments. That’s what business schools should be for, not turning out MBAs.

Managers Not MBAs, published by Financial Times Prentice Hall, £24.99

The Observer, 27 June 2004

Fat profits are bad for you

THE OBESITY crisis is the sharpest (no, weightiest) test yet of the contradictions at the heart of the corporate social responsibility movement.

It’s a striking fact that many large companies involved in the food chain, whether manufacturers or retailers, are among the most admired and feted in the management literature. Coca-Cola is famously the most valuable brand in the world, McDonald’s not far behind. Nestlé, Pepsi, Kellogg, Unilever, Heinz, Danone and Sara Lee all figure among Fortune’s globally most admired companies: Wal-Mart is No 1 (McKinsey approvingly reckons that, by itself, the world’s biggest retailer is responsible for 35 per cent of the impressive efficiency gains made by the entire US retail sector).

Tesco is Britain’s most admired company. Cadbury Schweppes and other UK chocolate makers (although none now independent) pride themselves on their staunch Quaker antecedents.

As a glance at any website will show, all these companies make much of their citizenship credentials, sporting value statements, philosophies, social and environmental reports galore. So why do they at the same time compete to load up their products with sugar, salt and fat, which they know will harm their customers? Why do they charge more for healthier products, run promotions to make people eat (and drink) more and, in the case of retailers, pile up sweets at checkouts and hide healthy items at the back?

Why do their executives, who presumably don’t beat their wives and want the best for their own children, pride themselves on devising viral marketing campaigns to persuade other people’s kids to pester their parents to buy them food that makes them fat? Why do they lobby against regulations that would oblige them to accept the responsibility that they claim through CSR? Why, in short, do companies and individuals set out to damage overall welfare, doing things in business they would never permit themselves in private life?

The answer is that they have been persuaded (fairly easily, it must be said) to subscribe to the idea that in a market economy the sole function of business is to make money for shareholders and that social ambition is a destructive dereliction of that duty.

Thus, companies are ‘admired’ in the Fortune or Management Today sense because they are hugely and consistently profitable. The difference between the Marks & Spencer of a decade ago – winner of so many ‘most-admired’ awards on the trot that one was discontinued – and now is less its ethics than its ability to keep profits moving upward.

Conversely, Wal-Mart, Microsoft (third in the Fortune table) and GE (fourth) have sustained precious little damage to their admiration quotient from their sometimes dubious employment and business methods.

It’s the virtue of the House of Commons health committee report on obesity that it makes these contradictions impossible to ignore. The idea that it could be a legitimate function of business to enrich shareholders at the expense of undermining the financial basis of the NHS simply collapses under the weight of its own grossness: it fails the test of common sense.

Equally absurd is the proposition that work in the community or support for sport or a symphony orchestra could compensate for behaviour that contributes to the wave of amputations, blindness and heart disease that the committee theatrically predicts, or helps to make today’s children’s life expectancy shorter than that of their parents.

It’s time to recognise such theories for what they are: junk, self-serving and as harmful to corporate health as fat and salt-laden convenience meals are to individuals. Although individuals and communities have undoubtedly benefited from initiatives done in the name of CSR, its essential function is to act as a fig leaf for shareholder-value theories that cause managers to undermine society, delegitimise their own companies and induce corporate and individual schizophrenia.

As the FT put it in an admirably trenchant editorial, food companies now have no option but to swallow hard, embrace the report and reverse their present strategies: cut fat and salt levels, make, label and promote healthier products and wholeheartedly channel their competitive energy into the public campaign for healthier living. In other words, they must put CSR where it belongs – at the heart of the purpose of the firm. Real social responsibility is innovating for the public good; anything else is corporate social hypocrisy.

The government has a part to play here. This is a problem for which its own short-termist expediencies are substantially to blame: the economy-led suppression of school meals and the sale of playing fields, for instance, are panting ponderously home to roost. Appealing to companies’ better nature is not enough. Instead, it should grasp the nettle and alter the system conditions by hard or soft regulation to encourage firms to behave well and penalise them if they behave badly – taxing fat, for example. Calling on firms to do more CSR (a policy roundly rejected by the voluntary sector last week) is abnegation.

For a while before New Labour came to power, it had worked up some enthusiasm for a more inclusive stakeholder model of capitalism. In office, it hastily backtracked in the face of massive vested interest and its own bedazzled admiration (that word again) for US enterprise. You were right the first time, chaps – shareholder value can’t take precedence over the health (literally) of the wider system of which it is a part. The one thing that can be said in favour of obesity is that it kills shareholder value stone dead.

The Observer, 6 June 2004

On the right track at last?

IN OCTOBER 2002, when Network Rail took over management of Britain’s rail estate of track, stations, bridges and tunnels, the rail network was in the grip of what was memorably described as a collective nervous breakdown.

Still numb after the Hatfield crash, in the limbo of administration after Railtrack went bust, the railway definitely wasn’t getting there. Blanket speed restrictions had been imposed, pushing punctuality below 80 per cent. Costs had increased sharply. ‘Corporate discipline had been non-existent,’ splutters chairman Ian McAllister. ‘Chaos.’

It’s a measure of how far the network has pulled itself together that last week it underwent the second largest structural change since privatisation in 1996 – and no one noticed. Last Monday what had been a geographically based organisation was replaced by a ‘functional’ structure aligned around routes and customers, cutting out layers of management and bringing train and track operators closer together than at any time since denationalisation.

The change marks a significant step in Network Rail’s coming of age, shedding the disastrous legacy of Railtrack.

‘It’s been an incredible journey,’ says Iain Coucher, deputy chief executive and driver of many of its myriad projects. ‘We’re pushing through a 10-year programme in three years. It’s exhilarating and, for some people, a bit scary.’

Of course, blaming previous incumbents is a traditional management sport. But there’s no doubt that the organisation inherited by the incoming team (basically six senior people) was dysfunctional. At its heart was a conflict of interest between shareholders and customers, both vying for the same resources. Like much of Britain’s infrastructure, the network was fragile and overstretched, a monument to underinvestment and political short-termism compounded by a 30 per cent increase in traffic since 1996.

It was organisationally flawed, too. There was little commonality between the regional operating fiefdoms, each of which had its own operating procedures. Maintenance and renewal costs varied wildly and the centre had no idea what it was getting for its money. The asset register was nearly useless because there was no record of its condition – priceless knowledge lost at privatisation, says McAllister.

And because of the way the system incentives were set up, network and train operating companies (TOCs) were at each other’s throats.

The first step in restoring the network’s sanity was to get a grip on production by stabilising the system: redefining accountabilities, standardising operating procedures and installing basic management disciplines.

Half of the top 100 managers were replaced. ‘In the first six months we didn’t improve, but we stopped the slide,’ says Coucher. ‘In the second half of the year we improved performance by 20 per cent, and it has continued at that level. Every department and project is on time and within budget.’

There were blips such as last summer, when the heat again exposed gaping holes in the maintenance record, but the system came under control and the network began a second phase of improving efficiencies. The starting point, unexpectedly, was to in-source rail and computer maintenance.

It became blindingly clear, says Coucher (who, ironically, once worked for outsourcer EDS), that bringing maintenance back inside was the only way of getting a grip on costs and understanding the state of the assets. Payback has been instantaneous and spectacular. In the Thames Valley, the first area to come in-house, delays fell by 41 per cent in five months, and in Wessex by 19 per cent in three months.

That has a direct and multiple effect on cost: out go the middleman’s profit, management duplication and transaction costs and down goes compensation to the TOCs, which last year ran at £500m. Response is quicker and coordinated.

Sophisticated maintenance is at the heart of the network’s management of risk, variability and asset life. ‘Rail and ballast was the most important split on the railway, not rail and wheel,’ says Jeremy Long, managing director of First Rail, who warmly welcomed the decision. The second big change was last week’s ‘functional’ reorganisation, a move that is also cautiously welcomed by train operators.

The new climate of cooperation has been accompanied by the setting up of joint improvement teams – such as the one that has helped Midland Mainline improve punctuality by 20 per cent in a year – and joint control centres, where TOC and Network Rail managers sit side by side to oversee operations. In time the hope is that growing trust will lead to the establishment of truly integrated teams under one overall manager.

‘We always said it would take 18 months for the public to see a difference and five years to give them a responsive modern railway,’ says Coucher. But although the train is making good time, there’s still a long way to go.

To get there the company must spend the staggering total of £14m a day – £26bn over the next five years – to renew the 100-year-old infrastructure and increase capacity while at the same time meeting the regulator’s requirement to slash costs and improve performance by one third in the next five years. In the short term it is facing a Department of Transport rail review, due by July, not to mention threatened strike action by the RMT.

‘Yes, we’ll get there,’ asserts McAllister. ‘There’s a huge reservoir of loyalty and pride in the rail tradition here – it’s unique. What we’re doing is giving them the leadership and direction to put it into the biggest rail job in the world.’

The Observer, 30 May 2004

Lean times for innovation

IT’S OFTEN said that the UK has an innovation problem: it’s good at inventing, but hopeless at turning the new thing into a commercial product.

So the official emphasis is on more research and development and scientific innovation while Harvard professor Michael Porter, in his report on UK competitiveness, urges companies to develop more innovative, higher-value goods.

Like all good clichés, the ‘innovation deficit’ contains a nugget of truth; but, also like all clichés, it has been repeated so many times that it has been all but drained of meaning. Yes, of course companies should innovate: that’s what companies are for. Innovation, however, is not only a question of new products and services, but also the way companies do things – their practices and processes.

How things are made is often ignored but in fact it’s an essential complement to the sexier field of product innovation. Trendy new products aren’t much use if you can’t make them properly (BMC and its successors reportedly never made a penny out of the original Mini). Indeed, some new products emerge out of new processes.

Moreover, process innovation is one of the best ways of stealing a competitive march on rivals. By adopting new production methods, Airbus UK boosted productivity in wing assembly by 25 per cent and quality by 40 per cent in six months. In recent research, the consultancy McKinsey identified innovation diffusion – the rapid take-up of promising management practices – as the single most important means of tackling the UK’s productivity problems.

The most important element in these innovative practices is the approach known as ‘lean’, which concentrates on cutting down the waste, variability and inflexibility in organisations that block them from responding to customers.

Such approaches are central to productivity and innovation, notes London Business School professor Chris Voss, who heads a research programme studying promising practices for the Advanced Institute of Management Research. Lean is therefore a key link between macro- and micro-productivity issues.

Three McKinsey consultants – John Drew, Blair McCallum and Stefan Roggenhofer – have brought this alive at company level in a new book, Journey to Lean (Palgrave Macmillan).

This down-to-earth, nuts-and-bolts account claims to be the first to address the issue of ‘stickability’. If lean works – and since it started in Japan in the 1950s there is plenty of positive evidence – why are companies so slow to pick it up? And when they do, why does it so often fail to stick, success fading into mediocrity as the bad old habits reassert themselves?

The answer, as the authors note, is that ‘lean changes everything’. Much more than a collection of problem-solving tools and techniques, lean is ‘an alternative to mass production, not a complement to it’ in which production mix and quantity are dictated as directly as possible by the customer.

Redesigning the company’s operations from that perspective involves business-wide changes, which can only be done through people, not to them. Since process innovation is delivered on the front line, employees take a much bigger role, for which they need different mental as well as physical equipment. The investment emphasis switches from machinery to people.

By the same token, managers are required to stop managing at one remove and engage with the shop floor or office where the issues surface – an unfamiliar and revealing exercise. Changes have to be supported by different pay and other administrative systems.

As the first part of the book describing the lean landscape shows, many of the changes are counter-intuitive. Under lean, you don’t hide problems, you bring them to the surface, even if it means stopping production. Being responsive to the customer doesn’t mean building more stock, but less. The economies of scale that managers have been brought up on no longer apply – in fact the whole apparatus of mass production is now a liability, not an asset.

In the book’s second half, a fictitious UK manufacturing company embarks on the lean journey, encountering the practical pitfalls on the road. One is the failure to anchor the operational changes with institutional ones. Another is the temptation to take short cuts.

The blunt fact is there aren’t any. You can’t think yourself into a new way of doing things; there is no alternative to doing it. That goes (especially) for managers, and is why apparent remedies like outsourcing and IT may turn out to be painful mistakes.

Outsourcing can be a way of avoiding the hard work of internal improvement; in general, ‘too many managers have an exaggerated sense of the role new technology can play in improving performance’.

Add ‘r’ to lean and you get ‘learn’, and that’s what lean is: organisational learning in action. No other car company can replicate Toyota’s operational excellence because it contains 50 years of accumulated knowledge of responding to customers.

‘If you’re serious about lean, it becomes your strategy,’ says Drew. Or as BP’s John Browne puts it: ‘Our organisation is our strategy.’ Quite so. Not every organisation can invent a product that will revolutionise the world but every organisation can find a new way of doing things better every day.

The Observer, 23 May 2004

‘Don’t automate, eliminate’

WHY and how do large computer projects go so badly wrong? The column on ‘Why IT just doesn’t compute’ on 2 May triggered a large and vociferous postbag from practitioners who offered some illuminating first-hand insights into a very British malaise.

Almost everyone agreed that the overall picture is grim – perhaps even worse than it appears at first sight. Public-sector failure is, by definition, public and visible (which brings its own pressures). In the private sector it is hidden – which also means that managers do not learn from it, an important factor in explaining why improvement is so slow.

Thus one senior manager in financial services (one of the heaviest IT investors) said he had not seen a successful IT project in 25 years. The reason: internal politics ensured that project teams would be made up of political allies rather than the managerially and technically competent. Then, when a programme flagged, it was quickly pushed under the carpet.

The tell-tale sign, he said, was the announcement that the full savings would be realised in a previously unsignalled second phase – but unfortunately there were now higher priority demands on resources. ‘This necessitated another ‘whizz’ new project to be initiated quickly, which meant that it was not thought through properly, and so back to square one.’

How do such daft projects ever get off the ground? ‘Collusion and illusion,’ said one consultant, describing it as ‘a happy spirit of joint wish-fulfilment’ in which both sides tacitly agreed to enter a fantasy world in which assumptions about costs, benefits and risks were wholly artificial. Once the project is under way it was too embarrassing for either side to admit the figures were make-believe, so it ground on with everyone hoping for the best – ‘even though that didn’t work last time or the time before or, come to think of it, ever’.

If truly honest assessments were carried out, many or most IT projects (rightly) would not be started; if they were monitored in the same spirit, they would (rightly) be killed before they were implemented, he says. But this, of course, is a message that no one wants to hear.

Other writers took issue with the British Computer Society’s diagnosis that lack of professionalism in software engineering was to blame, fingering crude human resources management as the real culprit. A contract-based, low-commitment, ‘plug-compatible programmer’ mentality had grown up among managers, one claimed, which was incompatible with quality, teamwork and the dialogue needed to keep projects on track. Certification, put forward as a solution by the BCS, was therefore irrelevant: the central issue was the ‘woeful’ quality of IT management.

The woefulness extended to purchasing. Several correspondents queried the conventional wisdom of outsourcing, noting that it could easily lead to lower quality and higher cost.

One reason may be the poor HR management outlined above. More fundamentally, the outsourcer answered to different shareholders than the customer and their interests were far from identical: for instance, suppliers had an incentive to lock in customers by building systems that were hard to maintain by anyone else – maintenance (sometimes neglected at the justification stage) on average accounts for 60 per cent of all software costs. And the power relationship is unequal: a major outsourcer can absorb the loss of one customer, but the customer can’t absorb the loss of its IT.

The government, said another, ‘tends to be a very poor buyer. Orders come down the line about what to do, but very little about why they are doing it’. When research found that few people were using the Inland Revenue’s expensively developed online services, managers said they hadn’t been told to encourage taxpayers to use them: if they had, it would have been done differently.

More generally, he argued that the increasingly sharp-edged contract culture was inexorably damaging trust and raising costs in all kinds of ways – the purchasing process got longer, gaming around contracts became more intense, and every time the contract was switched a new learning period had to be gone through. Do the nominal cost savings outweigh the loss of knowledge and trust? Probably not. ‘We are rapidly heading for a state where we truly do know the cost of everything and the value of nothing.’

Finally, many projects made the mistake of trying to ‘automate rather than eliminate’. For example, companies were willing buyers of technology for automating call centres and routing calls, ‘but precious little effort is given to trying to eliminate the need for the customer to be phoning in at all’.

This applies to call centres, too: many of them exist for the sole purpose of answering calls that shouldn’t need to be made in the first place, institutionalising IT costs on top of an already ineffective system.

The blunt truth is that on its own, investing in IT neither cuts costs nor reduces headcounts. This was the original insight of the ‘re-engineering’ movement a decade ago, and it is reinforced by recent work by McKinsey showing that IT investment is a much weaker predictor of productivity improvement than overall management capabilities.

Together, management and IT are a potent force. But without adequate management capabilities, McKinsey warns that heavy IT spending may actually damage productivity. ‘The cost of investing in IT, in management time and capital, can be value-destroying, due to inappropriately scoped or over-engineered systems.’

The Observer, 16 May 2004

Kicking the six-figure habit

THE difference between companies is people. With capital and technology in plentiful supply, the critical resource for companies in the knowledge era will be human talent. Companies full of achievers will, by definition, outperform organisations of plodders. Ergo, compete ferociously for the best people. Poach and pamper stars; ruthlessly weed out second-raters.

This in essence has been the recruitment strategy of the ambitious company of the past decade. The ‘talent mindset’ was given definitive form in two reports by the consultancy McKinsey famously entitled The War for Talent. Although the intensity of the warfare subsequently subsided along with the air in the internet bubble, it has been warming up again as the labour market tightens: labour shortages, for example, are the reason the government has laid out the welcome mat for immigrants from the new Europe.

Yet while the diagnosis – people are important – is evident to the point of platitude, the apparently logical prescription – hire the best – is, like so much in management, not only not obvious: it is in fact profoundly wrong.

The first suspicions dawned with the crash to earth of the dotcom meteors, which showed that dumb is dumb whatever the IQ of those who perpetrate it.

The point was illuminated in brilliant relief by Enron, whose leaders, as a New Yorker article called ‘The Talent Myth’ entertainingly related, were so convinced of their own cleverness that they never twigged that collective intelligence is not the sum of a lot of individual intelligences. In fact in a profound sense the two are opposites.

Enron believed in stars, noted author Malcolm Gladwell, because they didn’t believe in systems. But companies don’t just create: ‘they execute and compete and co-ordinate the efforts of many people, and the organisations that are most successful at that task are the ones where the system is the star’.

The truth is that you can’t win the talent war by hiring stars – only lose it. New light on why this should be so is thrown by an analysis of star behaviour in this month’s Harvard Business Review. In a study of the careers of 1,000 star stock analysts in the 1990s, the researchers found that when a company recruited a star performer, three things happened.

First, stardom doesn’t easily transfer from one organisation to another. In many cases, performance dropped sharply when high performers switched employers and in some instances never recovered. More of success than is commonly supposed is due to the working environment – systems, processes, leadership, accumulated embedded learning – which is absent from, and can’t be transported to, the new firm.

Moreover, precisely because of their past stellar performance, stars were unwilling to learn new tricks and antagonised those (on whom they now unwittingly depended) who could teach them. So they moved, upping their salary as they did – 36 per cent moved on within three years, fast even for Wall Street.

Second, group performance suffered as a result of tensions and resentment within the team. One respondent likened hiring a star to an organ transplant. The new organ can damage others by hogging the blood supply, other organs can start aching or threaten to stop working or the body can reject the transplant altogether, he said. ‘You should think about it very carefully before you do [a transplant] to a healthy body.’

Third, investors punished the offender by selling its stock. This is ironic, since the motive for importing stars was often a suffering share price in the first place. Shareholders evidently believe that the company is overpaying, the hiree is cashing in on a glorious past rather than promising a glowing future, and a spending spree is in the offing.

The results of mass star hirings, as well as individual ones, seem to confirm such doubts. Look at County NatWest and Barclays de Zoete Wedd, both of which hired teams of stars with loud fanfare to do great things in investment banking in the 1990s. Both failed dismally.

Everyone accepts the cliché that people make the organisation – but much more does the organisation make the people. When researchers studied the performance of fund managers in the 1990s, they discovered that just 30 per cent of variation in fund performance was due to the individual, compared to 70 per cent to the company-specific setting.

That will be no surprise to those familiar with systems thinking. W Edwards Deming used to say that there was no point in beating up on people when 90 per cent of performance variation was down to the system within which they worked. Consistent improvement, he said, is a matter not of raising the level of individual intelligence, but of the learning of the organisation as a whole.

So if not by hiring stars, how do you compete in the war for talent? You grow your own. This worked for investment analysts, where some companies were not only better at creating stars but also at retaining them. Because they had a much more sophisticated view of the interdependent relationship between star and system, they kept them longer without resorting to the exorbitant salaries that were so destructive to rivals.

The star system is glamorous – for the few. But it rarely benefits the company that thinks it is working it. And the knock-on consequences indirectly affect everyone else too. As one internet response to Gladwell’s New Yorker article put it: after Enron, ‘the rest of corporate America is stuck with overpaid, arrogant, underachieving, and relatively useless talent.’

The Observer, 9 May 2004