D for dunce: the great exam failure

The current educational algo-debacle is an exquisitely English cock-up*: a slow-motion train wreck that is the product of 30 years of educational initiatives, reorganisations and adjustments to alleviate the problems generated by previous changes, all piled up on each other without consistent architecture or, needless to say, political consensus. Finally this year Covid nudged it over the cliff of its own contradictions.

Our education system is a perfectly designed generator of grade inflation. Like executive pay under shareholder capitalism, it’s an escalator engineered to move in one direction only: up.

This year’s events are the culmination of a story that began in 1992 when 38 polytechnics were elevated to university status, nearly doubling the overall estate. Growth has continued ever since: there are now no fewer than 132 UK universities, with a student body that has expanded to match. Nineteen-seventy’s total of 200,000 students had mushroomed to almost 2m by 2019.

At a stroke, higher education morphed from an elite to a mass education system. Unfortunately, having willed the end, naturally with no diminution of quality, the government neglected to provide the means to bridge the gulf in standards – judged on traditional measures – between the old and the new. Accurately reflecting the gulf in their respective resources, that gap was, and in some cases remains, large.

Real levelling up would have required a massive injection of resources into the new-borns. Instead, as ever, the government opted for a sleight of hand whose costs would only surface later. Traditionally, to maintain standards, new universities underwent an adjustment period during which they administered degrees set by longer-established institutions. By contrast, the post-1992 cohort were granted degree-awarding powers from the start. There was no way a first from an under-resourced new university could be worth the same as a first from a top established one, but at a stroke the difference was made invisible – except to external examiners, who are often still pressured to verify marks that they know are too high or, less often, too low, depending on where they come from.

Tuition fees did provide universities with extra resources. But they were a two-edged sword. Particularly after 2010, when they jumped to £9000, they set in motion a programme of marketisation that, as the government intended, turned students from learners into consumers, a process encouraged by the creation of league tables, albeit unofficial ones, and by increasingly important student satisfaction surveys. By the same token, universities became fierce competitors for their custom. Much of the extra resource was diverted into marketing, facilities and highly paid administration, while students began to argue that shelling out £9000 a year entitled them to a good degree and the teaching that ensured they got it. Lecturers and their employers had strong incentives to oblige. The casualty: a steady inflation of students’ grades.

As part of the supply chain, schools have naturally been sucked into the upward vortex. They were also subject to strong pressures of their own. Exam boards are competing commercial entities, and schools exploit discreet exam arbitrage between them. Moreover, education was an early testing ground for the New Public Management (NPM), the drive to sharpen up the public sector by subjecting it to private-sector methods and techniques. Unsurprisingly, the regime of targets, inspection, league tables and fierce performance management (‘targets and terror’) had the same dismal effects as in other public services such as health. Particularly harmful were the inducements for heads and teachers to play the numbers game by quietly dropping ‘harder’ subjects, excluding poor performers and ‘teaching to the test’ – a classic illustration of the folly of making professionals accountable to ministers and inspectors rather than those they directly serve. While it is widely accepted that many schools, in London for example, have improved, the cost has been high in the shape, again, of grade inflation.

Briefly, consider that the percentage of top ‘A’ passes at A-level went up from 12 per cent in 1990 to 26 per cent in 2017, and ‘A’s plus ‘B’s from 27 to 55 per cent. The upward progression in degrees is even more marked. As a New Statesman article put it last year, ‘British universities… have increased the number of degrees they award fivefold since 1990, while the proportion of firsts they hand out has quadrupled – from 7% in 1994 to 29% in 2019. For every student who got a first in the early 1990s, nearly 20 do now… The proportion of students getting “good honours” – a first or 2:1 – has leapt from 47% to 79%: at 13 universities more than 90% of students were given at least a 2:1 [in 2018].’ In a perfect self-reinforcing cycle, universities justify this progression by pointing at the schools: it’s not surprising we’re giving more good degrees, they say, because we’re getting better students – just look at the A-level results.
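For the arithmetically minded, a quick back-of-the-envelope check (a sketch in Python, using only the figures quoted above) shows how the two trends compound:

# Rough check of the New Statesman arithmetic (illustrative only).
degrees_index_1990s = 1.0    # index the early-1990s number of degrees awarded at 1
degrees_index_2019 = 5.0     # 'increased the number of degrees they award fivefold'
firsts_share_1994 = 0.07     # 7 per cent of degrees were firsts in 1994
firsts_share_2019 = 0.29     # 29 per cent in 2019

firsts_then = degrees_index_1990s * firsts_share_1994
firsts_now = degrees_index_2019 * firsts_share_2019
print(f"firsts awarded now vs early 1990s: x{firsts_now / firsts_then:.1f}")
# prints roughly x20.7 - hence 'for every student who got a first in the
# early 1990s, nearly 20 do now'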

This is the backstory to this year’s school shenanigans, when the creaking system was brought crashing down by the cancellation of GCSEs and A-levels during the lockdown. Without the restraining influence of real marks for real work, the government invented two unreal ones – centrally assessed grades (or CAGs) and a version moderated by the famous algorithm to damp down what it saw as alarming grade inflation. Both measures are barely comprehensible in their complexity (sample: ‘CAGs are not teacher grades or predicted grades, but a centre’s profile of the most likely grades distributed to students based on the professional views of teachers’). But the circle was unsquarable. While the algorithm did moderate the grades, it could only do so at the price of such manifestly unfair side effects that the government hastily retreated. CAGs, and by extension grade inflation, however justified on this occasion, rolled on.

So we arrive at a familiar destination. Grade inflation is a symptom of what Ray Ison and Ed Straw, authors of the important new book The Hidden Power of Systems Thinking, call a system-determined problem – one that can’t be resolved by first-order change, only by rethinking the system itself. Tinkering with the existing system to make it work better is our old friend doing the wrong thing righter, which ends up making it wronger. And we end up with the worst of both worlds: private-sector market competition moderated by Soviet-style regulation that achieves neither efficiency nor accountability, and whose figures won’t bear the mildest scrutiny. When we most needed a system based on professional trust and respect, we have the reverse: a regime established to assure academic standards that has overseen their almost complete debasement.

This has the potential to be much more than a little local difficulty. Higher and to a lesser extent secondary education, backed up by league tables that conveniently big up their strengths, have long been talked up as one of this country’s strongest international success stories. Covid’s inconvenient intervention suggests a more accurate characterisation might be a house of cards, built on statistical foundations that don’t even come up to O-level standards.

* As a Scottish reader correctly notes, it is increasingly hard to generalise across the component parts of the union in such matters.

How masks became a weapon in the culture wars

Trust in government is emerging as an important factor in how a country fares on what might be called the coronavirus performance league table. That stands to reason: in the absence of a vaccine, ‘beating the virus’ is a collective social enterprise as much as a medical one – just as ‘saving our NHS’ was at the peak of the infection, although the government appears to have forgotten it. (The cost was perilously high, but that’s another story.) In other words, performance is less a matter of science, more a matter of political competence and leadership.

New support for that idea comes from a recent paper in the Lancet describing the ‘Cummings effect’. When the story of the adviser’s dash for Durham, breaching official lockdown advice, broke in May, the result wasn’t just an immediate and continuing loss of public confidence in the government – it changed people’s behaviour. Their growing unwillingness to follow the guidelines was the other side of the coin of declining trust. Rubbing it in, Durham’s former chief constable noted: ‘People were actually using the word “Cummings” in encounters with the police to justify antisocial behaviour’.

A more insidious seepage of confidence – leading to an almost virus-like spike of consternation, rage and conspiracy theories – has been triggered by the government’s vacillation over the desirability of wearing face masks. Indeed, when the history of the pandemic is written, there will likely be a special section on this mundane piece of cloth and gauze, which has become an unlikely symbol of the contradictions and jagged social and political divides that the coronavirus has generated.

It should have been simple. When everyone wears one, the face mask is an important element, along with maintaining distance, hand washing and reducing social contact, in limiting transmission of the virus.

But it is not quite as straightforward as it looks. The mask has a systemic dimension, and the benefits are asymmetric. For the individual, wearing a mask is a mild inconvenience for not much return. For the collective, on the other hand, there is no downside, and the benefit is multiplicative because of a kind of network effect: the more widespread the use, the greater the value, including to individuals. If sufficient numbers mask up, in protecting other people you protect yourself. This also makes the mask – a point likewise much neglected – a powerful signifier. In that context, wearing a mask is a badge of common endeavour, a recognition of the fact that your health depends partly on the behaviour of others, just as theirs depends on yours.
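The asymmetry can be made concrete with a toy calculation (a sketch, not epidemiology: the filtration percentages are assumptions purely for illustration):

# Toy illustration of the mask 'network effect' (assumed, not measured, numbers).
SOURCE_BLOCK = 0.5   # assumed share of outgoing droplets a mask blocks (protecting others)
WEARER_BLOCK = 0.3   # assumed share of incoming droplets it blocks (protecting the wearer)

def relative_risk(adoption):
    """Expected transmission risk in a random encounter, relative to nobody masking."""
    return (1 - adoption * SOURCE_BLOCK) * (1 - adoption * WEARER_BLOCK)

for m in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"adoption {m:.0%}: risk falls to {relative_risk(m):.2f} of the unmasked baseline")

# Because both factors shrink as adoption rises, the collective benefit grows faster
# than linearly - and the individual's own risk also falls as those around them mask up.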

Yet for many in the individualistic US and UK, these scraps of fabric have become objects of scorn (‘face nappies’) and wearing them an affront to liberty – ‘facemask hell’ and ‘a monstrous imposition’, according to one MP. For some Americans they are a symbol of oppression, even totalitarianism, an insult to religious feeling (‘denial of the God-created means of breathing’) or even a threat to wellbeing (one American woman bizarrely shouted to camera, ‘the reason I don’t wear a mask is the same as for not wearing underwear: things gotta breathe!’). According to a trade union poll, 44% of McDonald’s employees had been threatened or abused for insisting that customers don a mask. At least one American has been shot.

In short, instead of being a simple precaution, covering your face has morphed into a weapon in the culture wars – a sign of wokeness or meek compliance with an oppressive state on one hand, an identifier of aggressive right-wing libertarianism on the other.

How has this come about? In microcosm, the depressing story of the face mask mirrors the convulsive progress of the crisis as a whole: a drunken lurch from under- to overreaction, accompanied by mixed messaging and subsequent public cynicism, augmented by the Cummings effect and the utter untrustworthiness of the testing statistics. In the absence of trust, leaders have no levers to pull when they want to get a scared, suspicious and increasingly resentful country back to work. They can only beg and bribe.

In the UK no one has ever explained in simple, clearly understandable terms the cumulative benefits of mask-wearing. And, disastrously, western authorities, including the World Health Organisation (WHO), initially played down the usefulness of masks not for medical reasons but because they feared that a rush on masks would aggravate the strains on national health services then struggling with critical shortages of PPE, including face coverings. Not surprisingly, people now instructed to wear one are apt to take a cynical view.

The consequences of the failures to come clean are now coming home to roost. Ironically, even in the US and UK, most people are in principle in favour of wearing masks and even of making them compulsory. Yet in the UK, uniquely, this has not translated into behaviour: in an IPSOS Mori poll of 23 July, four months after the start of lockdown, just 28 per cent said they wore one, compared with double that proportion in France, Italy and the US. This is one reason why the UK now has another dubious Europe-beating qualification to add to its list: alongside the highest number of covid-related deaths and the worst hit economy, we are the slowest and most reluctant to return to work.

But if citizens now are slow to wear masks and resist going back to work, it’s largely not because they are bloody-minded or stupid. Inadequate leadership is squarely to blame.

Slavery, Inc

Like most people, including Alfred Chandler in his magnum opus The Visible Hand, I always accepted that – with a nod to ancient institutions like universities, the army and the Catholic church – the origins of modern management lay in the US railroads and the factories of the Industrial Revolution. 

But it is becoming clear that, although long denied or ignored, some of the founding practices were already well developed in the 18th-century slave plantations of the Caribbean and the southern states of America. When F.W. Taylor’s The Principles of Scientific Management appeared in 1911, echoes of the earlier ‘scientific agriculture’ practised on some of the sugar and cotton plantations were not lost on contemporary critics who found some of Taylor’s practices uncomfortably reminiscent of ‘slave-driving’ – nor on supporters who on the contrary praised them for the advance they represented over slaveholding.

This is troubling stuff to write about. But the aim is not to pick at the scabs of the past for the sake of it. It is that, as ever, the present is the child of the past, and coming to terms with the history is the first step to resolving the unfinished business it has left behind.

In Accounting for Slavery: Masters and Management, a remarkable piece of primary research, Caitlin Rosenthal, a young McKinsey consultant turned academic, parses surviving plantation account and record books to paint a chilling picture of the blend of violence and innovative data practices that turned plantations into extreme exemplars of scientific management – ‘machines built out of men, women and children’ where ‘the soft power of quantification supplemented the driving force of the whip.’ 

Slavery, Rosenthal notes, ‘plays almost no role in histories of management’. Whether conscious or not, this is denial, the erasure accomplished by Chandler’s comforting categorisation of plantation management as primitive and pre-modern. Not a bit of it, counters Rosenthal. Sophisticated information and accounting practices thrived precisely because slavery suppressed the key variable that makes management difficult – the human. As she puts it, ‘Slavery became a laboratory for the development of accounting because the control drawn on paper matched the reality of the plantation more closely than that of almost any other American business’.

The combination of labour that was essentially free, unspeakably brutal management and smart accounting meant that slaveholding was exceptionally profitable. Plantation owners were among the one per cent of the period; at the time of the Civil War, there were more millionaire slave-owners in the South than factory-owners in the North. In the UK, as we are sharply reminded, many Downtons were built on the slave trade or the forced labour of slaves. Historians mostly don’t include human capital in their calculations, but plantation owners did, using depreciation to assess the changing value of slaves according to age, strength and fertility well before the concept was in use in the North, and routinely using them as collateral for loans and mortgages. By buying and selling judiciously, slave-owners could add steady capital accumulation to the profits from cotton and sugar.

Pace Chandler, plantations were management- as well as capital-intensive: according to one calculation, in 1860, when the railroads were emerging as the acceptable crucible of management, 38,000 plantation overseers, or middle managers, were managing 4m slaves using techniques that included incentives as well as indescribable punishments. Rosenthal recounts that in 1750 a British pamphleteer launched a prospectus for a kind of business school whose target clientele included sons of American planters. Slaveholders, concludes Rosenthal, ‘built an innovative, profit-hungry labor regime that contributed to the emergence of the modern economy… Slavery was central to the emergence of the economic system that goes by [the name of capitalism].’ 

With some estates numbering thousands of slaves, the plantations represented a milestone in managing scale. Even more important, the tools developed there enabled owners to manage their enterprise remotely. The slaveholder no longer had to suffer the physical discomforts of colonial life – or the mental discomfort of seeing at first hand the appalling human cost of his or her mounting wealth. Studying the numbers in the account books – embryonic spreadsheets – in a study in Bristol, London or Liverpool, he (or she) could see at a glance the productivity and profitability of each slave and decide their fate with a tick or a cross. 

This was a genuine management innovation, perfectly aligning the need for distant control with conditions on the ground. It was also crucial in another way. Representing humans as numbers not only put them out of sight and out of mind. It also encoded them as simple instruments of profit, no different in that respect from mules or horses, or the machinery for turning raw cane into sugar. It was to this vision of unfettered capitalism, where the only sanctity was property, that the southern states (and the British ‘West India interest’) clung so tenaciously for so long – and in the former’s case, went to war to protect.

They lost that battle. But even after abolition the ghost of the old regime lived on in the South in the infamous penal labour and convict leasing schemes – and endures today through the for-profit prison-industrial complex that has seen the quadrupling of the (disproportionately black) US prison population since 1970. A whole raft of blue-chip US companies continue to profit from captive prison labour today.

The debate about economic freedoms and ends and means in business that slavery started rumbles on in 2020. When Milton Friedman wrote in 1970 that the social responsibility of business was to increase its profits, he was reasserting the primacy of capital owners’ property rights and, in an extreme version of Adam Smith’s ‘invisible hand’ argument, insisting that anything they do to increase those profits contributes to the common good. Now the management wheel is turning again towards a more inclusive view, although with how much conviction remains to be seen. If there is any hesitation, slavery should remind us with crystal clarity how far people will go in pursuit of profit if allowed to; that management’s urge to reduce everything to numbers can all too easily result in the destruction of its own humanity as well as the lives of those being managed; in short, that management can be a force for evil rather than for good. Making a clean breast of the dark side of its history is the only way to close off those bleakest avenues for ever.

Remind me: what is HR for?

In case you missed it, May 20 was International HR Day. To celebrate it, the CIPD tweeted five reasons ‘to recognise HR right now’: putting people first, enabling remote and flexible working, championing physical and mental wellbeing, encouraging virtual collaboration, and supporting people and organisations to adjust to the new normal.

Nothing much to object to there – it’s motherhood and apple pie. Yes. And that’s the problem.

Like a great deal – most? – of management advice, what is proposed is true but useless; preaching, as Jeff Pfeffer puts it.

One clue is that you can’t imagine many people arguing a case for putting people last or stubbornly upholding the old normal. More deviously, the five reasons for celebrating HR are actually nothing of the sort. They are really abstract desired outcomes (practices that companies ought to have) pretending to be inputs (processes or principles that companies and organisations actually observe).

But they don’t: the banality of the desiderata is in inverse ratio to their occurrence in real life. As such, the list offers reasons to despair of HR, not celebrate it.

Managing with rather than against the grain of human needs is not a new prescription, nor a controversial one. As big-name researchers from Herzberg (‘to get people to do a good job, give them a good job to do’) in the 1970s and 1980s to Pfeffer (The Human Equation) in 1998 to Julian Birkinshaw (Becoming A Better Boss, 2013) have emphasized in their different ways, effective work arrangements that enlist people’s abilities and motivation are a better and more sustainable route to economic success than downsizing, contracting out and relying on sharp incentives and sanctions. Countless research studies say the same thing.

And it is true today. At the recent launch of a joint RSA-Carnegie Trust report on the question, ‘Can Good Work Solve the [UK’s] Productivity Puzzle?’, top representatives from the Bank of England, the TUC, McKinsey and the RSA all agreed: yes, it can and should. There are simply no downsides.

Except that it doesn’t happen. Despite the lip service, ‘good work’ is almost exclusively honoured in the breach rather than the observance. Standard management practices unambiguously put shareholders first, and people last, literally.

In today’s economy, companies create full-time ‘good work, at a good wage’ (the RSA’s hopeful formulation) only as a last resort. They rely instead on contingent workers who can be turned on and off at will and are increasingly managed by algorithm, thus dispensing with another tranche of the workforce. Pay is wildly unequal, even though studies again show that wide dispersion undermines teamwork, involvement and attachment to the organisation. Tight supervision and micromanagement kill trust and initiative – and even where, pushed by coronavirus, companies have moved to home working and virtual collaboration, the latter are almost comically sabotaged by the increasing use of digital surveillance to monitor and control remote employees. Meet the new work, actually a return to the old work, where all the risk and responsibility is borne by the individual, none by the corporation.

Given the yawning mismatch between the ideal and the grubby reality – most employees think their company doesn’t care about them, and they don’t care about their work – the obvious question is, where on earth is HR in all this? If it and its nominal agenda are so comprehensively disregarded, why does it even exist?

There is much hand-wringing within HR and the academic literature over this. Every few years HR is called on to ‘reinvent itself’ or ‘make itself more relevant to business’ in one of the top management journals. But cynicism continues to grow, along with ineffectual programmes and surveys with no follow-up. ‘I do whatever the CEO wants,’ one HR head shrugged to HBR in 2015.

But the frustrations of HR can be explained if you think of it, at least in its current form, as a figleaf. In Beyond Command and Control, John Seddon describes HRM as a by-product of the industrialisation of service organisations along command-and-control lines. HR departments, he says, ‘grew up to treat the unwelcome symptoms of command-and-control management and have steadily expanded as the symptoms have got worse’. HR is, bluntly, damage limitation – yet another example of management consuming itself in trying to do the wrong thing righter (Ackoff), or doing more efficiently that which shouldn’t be done at all (Drucker).

As with so much of management, the way forward isn’t for HR to invent new things to do, but to give up doing old pointless ones. Managers should quit obsessing over individual performance and instead pay attention to the system that governs it. If they stopped demotivating people, removed conditions that get in the way of doing good work (‘So much of management consists of making it difficult for people to work’, as Drucker put it), ceased measuring activity rather than achievement of purpose and above all did away with incentives that distort priorities and divert ingenuity into gaming the system – bingo! – the need for most of what passes for HR today (performance monitoring and surveillance, inspection, culture and engagement surveys, appraisals, courses on coping with change and other fake subjects that add no value) would simply evaporate. When the system changes, says Seddon, so does behaviour; as people act their way into a new way of thinking, culture change comes free.

That’s what an organisation that puts people first looks like. But it’s a result, not a cause. And you may have to kill off HR to get there.

Hitting the target and missing the point

Targets. Stretch targets. 100,000 coronavirus tests a day by the end of April. That turned out well, didn’t it?

When on 2 April health secretary Matt Hancock announced his goal of carrying out the famous 100,000 tests a day by the end of April, the result was predictable.

Given that at the time the daily testing rate was around 11,000, attention naturally focused on the number, and whether it would be achieved. And that’s where the debate stuck for the month. Not on why 100,000 or the purpose of the testing – the number.

On 1 May Hancock used the daily coronavirus briefing to declare that testing numbers had hit 122,347: the pledge had been met. Again, the number hogged the attention. Was it true? Had it really been hit? How?

Well, yes and no. It transpired that between the announcement of the target and the declaration of victory, the definition of ‘completed tests’, which previously meant ‘completed tests’, had quietly changed to ‘completed tests plus test kits in the post’. Subtracting the latter category left a ‘real’ figure of 82,000 actually carried out. Cue a new furore – again about the numbers.

What happened is a textbook illustration of the unintended effects of targets and their faithful sidekick, Goodhart’s Law.

To paraphrase W. Edwards Deming: in the case of a stable system there’s no point in setting a target, because you’ll get what it delivers. But with a non-stable system, there’s no point in setting a target either, because you have no idea what it will deliver. A numerical target in such circumstances is a finger stuck up in the air. Unless you know how to improve system capability permanently (I don’t think so), to hit it you have either to be incredibly lucky (in which case you’ll have to be even luckier to do it again tomorrow) or to alter the parameters to make the target attainable.

Hancock did what everyone does when faced with the imperative to hit an arbitrary target: he managed the thing that he could – in this case, the definition of success.
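A toy simulation makes the point (a sketch only: the 82,000 completed tests and roughly 40,000 posted kits are implied by the figures above, while the day-to-day variation is invented):

import random

# Toy sketch of Deming's point about targets and stable systems.
random.seed(42)

CAPABILITY = 82_000     # roughly what the system could actually process in a day
VARIATION = 5_000       # assumed standard deviation of daily throughput
KITS_IN_POST = 40_000   # mailed-out kits later added to the count
TARGET = 100_000

april = [random.gauss(CAPABILITY, VARIATION) for _ in range(30)]

real_hits = sum(day >= TARGET for day in april)
redefined_hits = sum(day + KITS_IN_POST >= TARGET for day in april)

print(f"days the target is hit on tests actually completed: {real_hits}/30")          # ~0
print(f"days the target is 'hit' once the measure is redefined: {redefined_hits}/30")  # ~30

# A stable system delivers what it is capable of, give or take its natural variation.
# To 'hit' an arbitrary target you must either change the system's capability - or,
# far more easily, change what gets counted.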

But this is not a harmless bit of jugglery. Deming again: ‘What do “targets” accomplish? Nothing. Wrong: their accomplishment is negative.’ There is a high cost to his action – which is where Goodhart comes in.

As economic adviser at the Bank of England, Charles Goodhart noted that attempts to manage monetary policy by using any definition of the money supply were constantly subverted by actors finding novel ways to circumvent the definition. Hence his law, usually formulated as, ‘when a measure becomes a target, it ceases to be useful as a measure.’ A metric can be either a target or a measure. It can’t be both.

Take Hancock and his tests. To meet his target, he included in his count for 30 April around 40,000 test kits mailed out to the public and to hospitals. For this category (pay attention here), the Department of Health and Social Care counts the number of people who test positive, but it collects no figures for tests actually completed.

What’s worse, since mid-April the government figures include on the same basis (ie people testing positive but not tests completed) 17,500 variegated tests consisting of both diagnostic and antibody tests, thus adding oranges to uneaten, partially eaten and completely eaten apples. As Tim Harford declared incredulously on his latest ‘More or Less’ show: ‘It’s almost as if they don’t care if the number of tests is consistent or indeed accurate, as long as it’s big.’

At any rate, the upshot of this piece of target-setting is exactly as Deming and Goodhart predicted: the system is beyond comprehension and the figures such a dog’s breakfast that no one can tell what they mean. It seems highly unlikely that Hancock’s original target has been met at all since 30 April, but how can anyone know for sure, including the government? The only certainty about the figures is that they are bogus. You might think that when the subject is life or death, this matters, no?

Yet the damage done by targets doesn’t stop there. What most people don’t get (including a ‘science writer’ on a previous edition of ‘More or Less’) is that the problem with targets isn’t that they don’t work. It’s that they do.

A target is typically a one-club solution to a problem with many moving parts. But the first law of systems is that you can’t optimise one part of a multipart system without sub-optimising others. Any benefits are outweighed by unintended consequences elsewhere in the system. Focusing attention (often with added incentives) on the target rather than the purpose ensures that even if the target is hit, the point is missed.

Targets displace purpose. Tests are a means, not an end. But reporting 100,000 of them became the purpose, both for Hancock and his critics. Yet why 100,000 a day, rather than 75,000 or 250,000? What are we testing for in the first place? Deming once more: ‘Focus on outcome is not an effective way to improve a process or an activity…[M]anagement by numerical goal is an attempt to manage without knowledge of what to do’. Another finger in the air. Or, more tersely: ‘Having lost sight of our objectives, we redoubled our efforts.’

Consistent failure to meet the daily target underlines the point: it bears no relation to purpose, or any other kind of reality, really. Not to production capacity, as we have seen. Even more seriously, not to demand either – shortage of which, or shortage of it in the right place, has been put forward as a reason for the target debacle.

To be effective, a system needs to be designed against demand. And demand is determined locally. Testing is the first step in the ‘test, trace, isolate’ strategy that the government first initiated and then discontinued in March, and has now resurrected. By definition, that strategy has to play out locally, where the infection occurs, tracing begins and treatment takes place. But bypassing hospitals and the 400 or so existing small labs dotted around the country, all tightly linked to local primary care, the government, as with the Nightingale hospitals, is relying on giant regional testing factories, set up from scratch and remote from their users in every sense. A lurch backward to early 20th-century industrial thinking, these are in the view of many observers the exact opposite of what is needed.

We can all support a goal of ramping up testing capacity to the level necessary to meet the purpose, whatever that number is. In fact it would be a good idea. But the minute you set it as a numerical target, it is subject to Goodhart. Managing backwards from an outcome plucked from thin air is a feature of command-and-control management, the only kind of management that government knows. But it is back-to-front. Targets are a disease. They destroy purpose, distort priorities, and soak up energy in games-playing and bureaucracy. They are the problem, to be avoided like, well, the plague.

Who saved the NHS?

‘Stay at home, protect the NHS, save lives.’ It’s a simple, understandable mantra, and we have internalised it well. So well, in fact, that we haven’t noticed the sleight of hand – desperate? cynical? – that’s going on here.

How did we not see it? The formula says it is our duty as citizens and patients to protect our NHS – and, by the way, the government will punish us if we don’t. But that’s the wrong way round. The NHS is supposed to protect us. Protecting the NHS, and us, is the government’s job, and if it fails to do it, we should punish it at the next election. (Whether the NHS is safe with this or that party is a question rightly posed at every vote.)

As it turns out, we have done our job of not using the NHS for the purpose for which it was designed so well that A&E departments are half empty, patients are politely declining to come forward for cancer diagnosis and ministers are reduced to acting like fairground barkers to drum up trade: ‘Roll up, roll up, we’re still open for business!’

As ever, crisis has brought the best out of the NHS, which has performed heroically with the resources at its disposal. In fact, though, the conventional image of doctors, nurses, porters and auxiliaries as saints and martyrs does them a disservice, obscuring a much more interesting reality. Under pressure the NHS is quietly doing things that it, and we, would have thought impossible a couple of months ago: partnering to build and equip hospitals in a week, redeploying and retraining staff, sharing work with the private sector, improvising procedures and equipment to keep, so far successfully, one step ahead of the flow of infections… Bureaucracy, what bureaucracy?

‘It’s impossible to overstate what has happened in the last two months,’ judges one observer. Of course, it has been a desperately close-run thing, and no one would want a re-run any time soon. But the imperative to do whatever it takes (to coin a phrase) to win a real-life game of life and death is being seen as a liberation for some NHS managers, who find themselves for once with control where it should be – in their own hands. ‘We can’t go back after this,’ one is quoted as saying.

Where the news is less good is at the point where help from the citizenry’s self-isolation from health services runs out – that is, in the supply of the basic equipment that allows our saints and martyrs to perform the heroics we rightly celebrate. Here citizens can’t help much, compared with the need – but that is not for want of effort. Witness the countless stories of ‘small ships’ – small businesses, individuals and groups of volunteers pitching in to sew scrubs and gowns and even craft visors, some with ‘bleeding fingers’ at the end of the day; cooking free meals for NHS staff; providing transport to and from work; and putting up and feeding them at the end of their shifts. And what about the astonishing 750,000 volunteers who came forward to help the NHS at the beginning of the crisis, a phenomenon that has left the rest of the world marvelling?

Unfortunately, the one party missing from this spontaneous outbreak of inventiveness, collaboration and sense of common purpose is the one that matters most – the government. It’s a commonplace that the pandemic has made government and the state vital again, as the only entities with comprehensive national reach. Accordingly, although the fit isn’t perfect, the gruesome league table of covid mortality also looks like a pretty fair reflection of government competence. In general the countries that come out best – those in south-east Asia, Germany and Denmark in Europe, and New Zealand – acted in character and as might have been predicted: calmly, early and firmly.

At the other end of the scale, currently the worst outcomes are likely to be in the US and UK, both notable for fractured politics and a strong belief at the top that the state is just another interest group, and in their own exceptionalism – mercilessly skewered by Fintan O’Toole in the latter case (although he failed to mention that the virus itself had already done the same thing by putting the country’s prime minister, health secretary, and chief medical officer, not to mention No 10’s chief special adviser, out of action with covid19 all at the same time – a unique full house of haplessness). The Johnson team’s slow reaction to the spread of the disease, subsequent policy zigzags and implementation failures faithfully reproduce on fast-forward the distinctive shortcomings of British government over the decades, with an added layer of arrogance thrown in.

In this context, the delegation of responsibility for protecting the NHS to citizens is no surprise – it is the culmination of the series of ‘reversifications’ under which the dehumanising pressures of financialisation and targets have gradually turned processes, functions and whole institutions into their dark opposite – think the Home Office as department of alienation and hostility, welfare as punishment for poverty instead of a helping hand. Tellingly, while ministers learned enough from the financial crisis of 2008 to enact an instant bail-out of the economy, their lack of systemic social vision is leaving a trail of destructive unintended consequences behind their initial ‘protect-our-NHS’ call: the ‘collateral damage’ of extra deaths as ill patients shun A&E and cancer departments, the unfolding tragedy of care homes turned into killing fields (another reversification), and now the finding that people mystified by the government’s switch from herd immunity to strict lockdown will have to be cajoled out of their houses to ‘assert their inalienable right to go to the pub’ when restrictions are lifted.

Covid19 has ruthlessly exposed the hidden faultlines and contradictions in our society, and the lazy, self-serving economics and management thinking that first engineered and then ignored them for 40 years. When we come out of it, we will indeed remember those failings. But we will also take heart from the unforced collaborations, cooperation and solidarity emerging in the lockdown that reflect a more positive view of human nature – one that will form the basis of new and more realistic versions of that dried-up, pre-Darwinian thinking. It will have been us who have protected the NHS, not governments, and it’s the least that they owe us in return.

What coronavirus teaches about business

When the Global Peter Drucker Forum chose ‘business ecosystems’ as its theme for last November’s conference, it was met by some gentle scoffing. Bit airy-fairy? More fad than substance? Was ecosystem management even a thing?

Well, now we know. The coronavirus is not only a thing; managing its ecology and ecosystem is the biggest test of management, and leadership – which just happens to be the Drucker Forum theme for 2020 – the world has recently seen: greater than 9/11 and the 2008 financial crash, a matter of literal life and death for many thousands of people, and financial life or death for thousands, perhaps millions more.

Given this, it’s at first sight odd that political leaders have conspicuously not been beating a path to the doors of leadership and crisis-management ‘experts’ in business schools or large companies for their advice on facing down the virus. After all, business websites and blogs teem with strategies for managing in a world of volatility, uncertainty, complexity and ambiguity, or VUCA, as the military call it. And did I dream that Google and/or Facebook once boasted of being able to use information they had sucked out of their users to predict the spread of disease?

Yet at a second glance, the shunning of conventional management is understandable – the dirty secret is that it has significantly contributed to the weakness of the current global response. All too willingly coopted into the neoliberal economic consensus by the appeal to self-interest in the 1970s, management has been the Trojan Horse that released a strain of free-market dogma into the economic and political mainstream that we’re all suffering from now. One of the side-effects is the wilful self-fragilisation of some of our biggest corporations.

Thus the big US airlines that are now seeking a $58bn bailout have over the last decade spent roughly the same amount to buy back their own shares to the exclusive benefit of their own executives and stockholders. Boeing, the epitome of the ‘downsize and distribute’ approach to capital allocation, having donated $43bn to shareholders through buybacks between 2013 and 2019, now wants $60bn to prop up aviation-industry supply lines. Even mainstream commentators, such as the FT’s excellent Rana Foroohar, are suggesting that this time round it cannot be simply a case of socialising such firms’ losses – the price of a public bailout should include equity participation and a ban on buybacks and executive bonuses until the debt is repaid – not to mention improvements for customers and employees.

Fragilisation is not confined to companies. The same ideological emphasis on markets and self-interest is a factor in the opioid crisis and in the tax-minimisation strategies of Big Tech, which have seriously diminished the resources of national governments for any kind of public spending and, more insidiously, have so undermined the instruments and confidence of the administrative state that its ability to intervene is now desperately compromised. Together with the retreating state, the systematic application of competition and markets to the public sector has given us a decade of austerity reflected in the parlous states of the US and UK safety nets, making the medium-term social and – ironically – financial toll of COVID-19 much greater than it should have been. Is it fanciful to suggest that Johnson and Trump’s fanatical faith in laissez faire and small government is behind their naive exceptionalism and hands-off initial response to the coronavirus?

Lenin once remarked that ‘there are decades when nothing happens. And there are weeks when decades happen’. Decades are happening now, and one of the things that is collapsing before our eyes is the edifice of neo-liberal economic theory, whose equations are simply swept away by a pandemic that operates on a plane of rude physicality governed by medical and ecological laws. A similar thing is happening to management. In the context of a wider business ecosystem that is itself nested in the wider systems of the economy, society and the natural environment, the idea of a company as a standalone maximisation engine is an aberration. In an ecology, something that maximises itself is a cancer to be feared. Instead, the purpose of business is to play its part in optimising the whole, which means investing in the jobs and salaries without which the wheels of capitalism seize up – as is graphically illustrated by a coronavirus that keeps people at home and shuts economies down.

It quickly became clear in November’s Drucker Forum sessions that managing in ecosystems was as different as quantum physics is from Newtonian. Previous certainties – indeed, the very idea of certainty – suddenly become suspect. Standard leadership strategies or lists of personal qualities are useless when everything is shifting around you. In a world where nothing is certain, leadership no longer makes any sense as an abstract ‘thing’, separate from what is being done. It is situational or nothing.

In those circumstances, what do you use as a guiderail when leading followers on a course that is as likely to be wrong as right? Again, coronavirus gives the clue. Consider luxury purveyor LVMH using perfume lines to turn out hand sanitisers; or Dyson designing ventilators rather than vacuum cleaners; ex-footballers turning their hotel over to NHS staff for free; Amazon about to deliver testing kits; CEOs such as Danny Meyer of the Hospitality Group cutting executive salaries and forgoing his own to avoid laying off low-paid employees; engineering professors, PhDs and F1 racing teams around the UK collaborating on basic ventilator designs and breathing aids; and a staggering 750,000 individuals volunteering to help the beleaguered NHS. The list goes on.

Altruism, yes. But another way of describing it is people and companies self-organising to redirect their resources to solving problems that matter. It’s an ecosystem responding creatively and cooperatively to threat. Note the lengths people will go to when offered a good job to do. As Warren Bennis once put it: ‘Problem-solving is the task we evolved for – it gives us as much pleasure as sex’.

Although they have largely forgotten it, solving problems is what companies evolved for, too. For economists Eric Beinhocker and Nick Hanauer, ‘the accumulation of solutions to human problems’ is a better measure of progress and prosperity than GDP. It is access to more of those solutions – air-conditioning, the world’s information on a smartphone, a cure for COVID-19 – that is for its citizens the real difference between living in an advanced economy and a poorer one. In a functioning capitalist ecosystem, business creates solutions that benefit humanity, while also employing and paying people such that they can take advantage of those solutions for themselves.

In turn, that defines what leadership looks like in the VUCA world, wherever the volatility or uncertainty comes from: acting fast to deploy resources where they can contribute most to optimising the system of which they are part. In Peter Drucker’s famous distinction, that is doing the right thing, which is leadership, as opposed to management, which is doing things right. COVID-19 could hardly have pointed the way ahead more clearly.

RIP Jack Welch

Rarely can a passing have marked the end of a business era more aptly than that of Jack Welch. When the legendary CEO of GE, who died this week, retired in 2001, the giant company that he had shaped over 20 years was at the height of its fame and financial success. GE’s fabled management academy at Crotonville, where Welch often lectured, was a big part of it, a conveyor belt as efficient as any of its plants at turning out successful executives modeled on Welch’s hard-nosed, hard-driving style. Had not Welch’s unforgiving approach and uncanny ability to meet Wall Street’s quarterly earnings targets earned him in 1999 the grandiose title of Fortune’s ‘manager of the century’?

Yet it is now evident that 2000 was GE’s high point. Never has its reputation, or earnings, touched those levels since. Its share price has dropped 80 per cent as Welch’s successors struggled to cope with the legacy of his push into financial services, which nearly sank the group in 2008. Net profits of $15bn as late as 2014 have vanished in three of the last four years, and in 2018 GE suffered the indignity of being dropped from the Dow Jones Industrial Average. It was the last surviving member of the original class of 1896.

Just as brutal has been the collapse of GE’s management aura. Consider the careers of the post-Welch generation of leaders from GE’s proud leadership finishing school. Jim McNerney, one of the unsuccessful suitors in the contest for Welch’s old job in 2000, decamped to another famous company, 3M, which he left on its knees five years later. His next berth was at Boeing, where as its first CEO with a non-aviation background (he was also president and chairman), he took the decision to build the 737 Max-8, currently in the news for all the wrong reasons, rather than design a new airliner from scratch. The second spurned internal candidate, Bob Nardelli, left Home Depot, his new company, similarly weakened, alienating customers and long-serving workers with sweeping cost cuts and provoking shareholder ire with his outsize compensation package. As for GE itself, when insider Jeff Immelt’s ultimately unsatisfactory reign ended in 2017, the board turned to another internal candidate, John Flannery – before abruptly ousting him little more than a year later and – oh ignominy – replacing him with GE’s first CEO from outside the company. The rout of GE management is complete.

In retrospect, for the period of his CEO-ship, GE and Welch were perfectly matched. Russ Ackoff once noted that one of capitalism’s dirtiest secrets is that ‘we are committed to a market economy at the national (macro) level and to a nonmarket, centrally planned, hierarchically managed (micro) economy within most corporations’. A centrally planned, shareholder-value-driven company – of which GE was the epitome – can only operate as a top-down hierarchy, because without line of sight to the actions needed to meet indirect corporate targets, employees have to be told what to do.

Only an imperial CEO like Welch could make the model work, up to a point, by relentlessly holding underlings to account for punishing plans and performance targets, and having no compunction about sacking people who fell foul of GE’s infamous forced ranking assessment system. He was equally ruthless in pursuit of efficiencies through job cuts (hence ‘Neutron Jack’), offshoring and selling off units that couldn’t make number one or two in their industry.

In truth, GE under Welch may have been about as well run as a large top-down corporate economy could be, as he pushed managers to break down silos and exhorted them to beat back competition from the far east. But he couldn’t escape the limitations of command and control. He railed for example at the bureaucracy of budgeting, which he castigated as ‘the bane of corporate America – it should never have existed’, but couldn’t get rid of; and complained that ‘the talents of our people are greatly underestimated and their skills are underutilised’. In neither case did he understand why. Unchecked, he overreached imperially in financial services (a poisoned legacy that helped to cripple his successors), in his botched attempt to take over Honeywell in 2000, and in his exorbitant material demands – a scandal that tarnished his reputation when it came to light in a subsequent messy divorce.

Looking back, you might say that Welch perfected the dinosaur just in time for evolution to sweep it into the dustbin of history. Shareholder primacy, as even Welch came to admit – although not before he had retired on its imperial proceeds – was ‘the dumbest thing in the world’, shareholder value being a result, not a cause. Immoderate personal gains are no longer viewed with indulgence. Most decisively, business has evolved: today’s emerging business ecosystems, fluid and constantly shifting, simply aren’t amenable to the detailed planning and control that were GE’s, and Welch’s, forte. Management has to evolve too.

Poignantly, one part of the old conglomerate that is conspicuously thriving is GE Appliances, albeit under different parentage. In 2016 it was sold to the Chinese Haier group. Under its inventive chairman, Zhang Ruimin, Haier is in the process of transforming itself from a white goods manufacturer into a ‘major appliances ecosystem’ in which all its related products are linked into, for example, an ‘internet of food’ or an ‘internet of clothes’. Standalone products don’t cut it any more. ‘We all need to transform into ecosystem companies, or we won’t survive,’ Zhang says.

In 2018, GE Appliances announced plans to invest $465m in new US manufacturing and distribution facilities.

RIP, then, Jack Welch. RIP too – and not before time – an entire industrial management paradigm.

Reinventing welfare

Radical Help: How We Can Remake the Relationships Between us and Revolutionise the Welfare State, Hilary Cottam, Virago, 2018

Beyond Command and Control, John Seddon et al, Vanguard, 2019

The NHS is seizing up. Patients have to work nearly as hard as GPs do just to get a doctor’s appointment, and hospitals can’t cope. Elder and adult social care are unworthy of the name. Last year 25,000 people in the UK slept rough for at least one night, which is all you need to know about accessible housing and the homeless. As for benefits, Universal Credit, supposedly the answer to a fragmented system and the flagship of recent reform, is a multi-dimensional nightmare, an exorbitantly costly, semi-computerised cross between Kafka and Orwell that treats the poor as morally defective and creates misery and despair rather than a platform for establishing a good life. In short, the UK’s welfare state is nothing of the kind. Far from providing a safety net to cushion a fall, it enmeshes the needy in a grim battle to prove they are poor or disabled enough to qualify for aid devised to maintain them at subsistence level and no more. The once pioneering, even revolutionary, institutions imagined by Sir William Beveridge nearly a century ago are still an example to the world, but now a pitiful one of how not to do it.

As these two timely and welcome recent books reveal in their different ways, the failure behind the crisis is not primarily one of money (although austerity has brought things to a head by not only making life more difficult for those already needing help, but also tipping more people into poverty – the signature of an illfare not a welfare state). It is that, partly because of the advances made under the Beveridge settlement, the institutional framework established the best part of a century ago is no longer fit for purpose. As social entrepreneur and activist Hilary Cottam points out, Beveridge’s design was for an industrial age. The remedy for the five scourges he identified – disease, idleness, ignorance, squalor and want – was to build hospitals, houses, schools and factories that would fix them. People would proceed in linear fashion between them until retirement, which no one imagined would last longer than a few years.

Instead there has been a massive failure of the imagination – an ideologically learned inability to see beyond what is to envision what might be. It is ideological because the institutional prison we’re in isn’t the buildings Beveridge constructed but the linear, factory-style processes we use to order what goes on inside and particularly between them. It’s management, stupid.

As Cottam spells it out, the welfare state ‘has become a management state: an elaborate and expensive system of managing needs and their accompanying risks. Those of us who need care, who can’t find work, who are sick or less able are moved around as if in a game of pass-the-parcel: assessed, referred and then assessed again. Everyone suffers in a system where 80 per cent of the resource available must be spent on gate-keeping: on managing the queue, on referring individuals from service to service, on recording every interaction to ensure that no one is responsible for those who inevitably fall through the gaps.’

For the patient or citizen, much of this busy supply-side activity is pointless. As a result, whether in health, education, employment or well-being, the neediest in every area have become ‘permafrost’ – stuck blocks of humanity that the current system can’t shift and who have no agency to shift themselves. They are spectators of rather than active participants in their own lives. The buildings – schools, Jobcentres, hospitals – go through their allotted motions, but they no longer connect with the social currents outside. Beveridge’s wants still exist, but in different forms. Ironically, the severest case of ignorance, identified by both Cottam and Seddon, is the state, bereft of ideas for real reform and reduced to cutting costs and demanding more effort.

Cottam sets her narrative – the story of experiments in organising to help people deal with family life, growing up, finding good work, keeping healthy and ageing well – in the framework of Beveridge and his original institutional innovations to emphasize the pioneering heritage the UK should be building on. Encouragingly, but not surprisingly, since both authors think in systems terms, the principles she arrives at largely echo those Seddon has described in previous books: the goal of welfare is the good life (as defined by those living it), the means is helping people to help themselves (building capabilities and possibility), using the joined-up resources of society as a whole, based on relationships rather than transactions. In other words, treating people as humans.

She recognises the importance of measures (‘We cannot transition [to a better system] if we are held to account by metrics rooted in cultures and transactions that we need to leave behind’), and again like Seddon emphasizes that not only do people-centred welfare models cost less to deliver, they relocate you in a different, positive sum game. The knock-on effects of restoring a troubled family to stability, for example, are invisible and cumulative, reducing future demand on police, courts, education and hospital resources as well as social services, while often also creating a new source of community support into the bargain. Engagement is a function of help that works.

Each of Cottam’s experiments ‘can be seen as a response to the failure of a government department to command change through top-down edict’, she says. How and why top-down edicts fail and what to do about it is, as its title suggests, the subject of Seddon’s book (disclosure: I am proud to have helped edit it), which thus neatly complements Cottam’s. Seddon, whom perhaps we should describe as a management activist, locates the sources of the overbearing ‘management state’ in theories of control devised in the industrial era, which are now a serious brake on productivity as well as human feeling. They cause managers to measure and manage the wrong things – unit costs, activity, transactions, people – which tell them nothing about how well they are meeting their proper purpose and turn them into willing prey for purveyors of fads and fashionable tools that falsely promise to make the machine turn faster. Almost all the vast superstructure of management bureaucracy – a $3tr burden on the US alone, according to Gary Hamel – is attributable to add-ons such as risk, performance, culture, budget and reputation management, which are not just useless but actively lead performance astray. Seddon singles out for special attention Agile (what’s the use of doing it faster if it shouldn’t be done at all?), IT (what’s the use of digitising something that shouldn’t be done at all?) and, perhaps more surprisingly to some, HR.

HR departments, notes Seddon, are the indirect result of managers’ misplaced obsession with improving performance by managing people. But it’s the system that governs performance (or 95 per cent of it, according to W. Edwards Deming), not individual effort. ‘HR departments grew up to treat the unwelcome symptoms of command and control management and have steadily expanded as the symptoms got worse… Thus, having designed jobs that demoralise and disengage we set HR people to work on measuring the extent of demoralisation and/or [developing] methods to motivate or create engagement,’ Seddon writes. Most of these, notably incentives and appraisal, damage rather than boost performance. Instead HR should be insisting that systems are designed to give people the good jobs that ensure a good job is done – in which case you barely need something called ‘HR’ any more.

It is clear from both these books that you can’t get to the properly personalised forms of support that will underpin good lives, a better society and a more productive economy from within today’s transactional, Gradgrind management paradigm. The two are simply incompatible. A switch to a new paradigm requires a tipping point. The appalling management behaviour on show at the top of the Home Office and the DWP suggests that there is still some way to go in that respect. But both these books are heartening confirmation that There Is An Alternative that is far better for those in need of help, for those who provide it and for the community as a whole, even before you count the social and economic savings that result. And conceptually, it is not complicated. Humanity invented management, Seddon points out; and we can do it again. ‘Counterintuitively… productivity 2.0 isn’t about scale, huge IT investment or automation. It’s about human beings working together to solve the problems of other human beings.’ These two books bring that future just a little bit nearer.

Universities challenged

A piece in the press about academic bullying caught my eye recently. This one was about Cambridge, but there was another about Oxford and, when I delved a bit, more about universities generally. There seems to be a lot of it about – particularly, though not exclusively, bullying of administrative staff by academics.

Now this struck me as odd. I know, and have known, a great many academics, and none of them is the bullying kind. Rather the reverse: although definite in their professional opinions, they are otherwise polite, well brought up and, if anything, rather put upon themselves, being subjected to ever more bureaucratic measurement and performance-management indignity. It’s hard to imagine them terrorising anyone. So what is going on?

We all know that real bullying exists – and what it looks like. On the one hand it’s Harvey Weinstein, Robert Maxwell and others of that ilk who instrumentalise power and use it brutally to get what they want, whether money, sex or position, brooking no opposition. On the other, there is organised institutional harassment of the kind I wrote about a few weeks ago, as practised by France Telecom, and legally condemned as such, when it was doing its worst to ‘persuade’ 20,000 employees to move on in 2007.

But although there are still unreconstructed individuals around, and some industries are less scrupulously managed than others – the film and music businesses, the City of London and retail come to mind, not to mention some newspapers – that kind of extreme behaviour is getting harder to get away with in the age of #MeToo and what we might call managerial correctness, the latter as it happens being particularly prevalent in universities.

Nor should bullying be confused with bad temper. We’ve all made stupid mistakes in our time and been ticked off, sometimes roundly, in return – with the result that one doesn’t make the same mistake again. Most newsrooms are not places that are easy to hide in. But people sometimes getting cross, or even raising their voices when urgent action is needed, is not the same as bullying.

What I suspect is happening in universities is more the indirect effect of bad management than direct oppression. As we know, bad management is depressingly common – more people leave their jobs to escape their immediate superior than for any other cause. A thinktank reported recently that fully one-third of UK jobs were of such low quality they were a danger to health. But poor interpersonal skills are only part of the story. More pervasively frustrating is the systemic growth of managerial bureaucracy.

New Public Management came late to the universities, but they have enthusiastically made up for lost time. British higher education is now subject to the full panoply of market disciplines: competition, audit, cost control, numerical targets and regulation (effectively government control under another name). In this quasi-market, universities have the worst of both worlds: market discipline on the one hand but no effective autonomy on the other. For academics, this translates, to a degree unknown anywhere else in the world, into stringent performance management, continuous change initiatives and endlessly metastasising forms of assessment – including mock assessment before the real one, assessment feedback afterwards, feedback on the feedback, and so on.

Now add in the huge expansion of management numbers and pay in a period when academic salaries marked time, plus a raft of managerially correct attempts by HR functions to smooth over the friction points – sending faculty on mandatory courses on ‘coping with change’, introducing wellbeing and work-life balance programmes or, still more futile, as at Cambridge, appointing fleets of work counsellors, conciliators and moderators – and it would be hard to imagine a richer compendium of counterproductive management practices, or a more toxic stew of misunderstanding and bad feeling as a result.

These symptoms are far from unique to tertiary education, of course. But universities present them in fairly extreme form, along with their unsurprising consequence – a pervasive resentment of management methods that, as Peter Drucker once sighed, seem to have been designed to make it difficult for people to work: strenuous attempts to do the wrong thing righter, generating a profusion of bullshit jobs (perhaps as much as 60 per cent of the total) with ever more pretentious (and yet unmemorable) names, managing other people, the ‘clustering’ of professional services, numbers, targets, measures and other data – all parasitic on the real academic work of carrying out research and helping people to learn. No wonder the pot sometimes boils over into bad temper and raised voices. But while there is no excuse for systematically taking it out on junior staff, for senior managers to blame the mayhem caused by stupid management on others rather than themselves is insulting bad faith. The fault is theirs, and so is the remedy. It is to stop doing what Drucker labelled the summit of absurdity: appointing people to do more efficiently what shouldn’t be done at all.