Is capitalism over?

To say that Paul Mason’s PostCapitalism: A Guide to Our Future is controversial is an understatement (see a selection of reviews here). Even in the same newspaper, while one reviewer admires the author as a worthy successor to Marx (in a limited sense, admittedly), another dismisses him as a Trot ‘shackled to the remnants of a hopelessly impractical ideology’ that even Jeremy Corbyn would be embarrassed to embrace.

But there is an intellectual energy to this fascinating, challenging, puzzling, absurdly ambitious and sometimes equally annoying book that makes it an enthralling read even when you want to throw it across the room or can’t get your head round what he is trying to say.

As economics editor of Channel 4 News, Mason has made his name with vivid close-to-the-ground reporting from Greece, Tunisia and other trouble-spots that stands out precisely because it seeks to locate immediate events in a wider, global context – as conveyed in the title of his previous book, Why It’s Kicking Off Everywhere. The TV reports are a microcosm, or perhaps a blueprint, for the wildly more grandiose project of PostCapitalism, which is to draw together things and events usually treated as discrete (Greece, the Arab Spring, austerity, migration, the long cycles of innovation and technological advance, successive corporate scandals culminating in the crash of 2008, the decline of the working class, automation and inequality, to name a few) into a single narrative thread describing the trajectory of capitalism. Not content with tracing the history and causes of its internal crisis, greatly exacerbated by exogenous shocks like far-reaching climate and demographic change, Mason also seeks to project into the future a superior world beyond capitalism, where a lot of stuff is free and people are too, in the sense that they don’t work, or don’t work much, anyway.

As this suggests, one of the refreshing things about this book is its contrarianism – even its optimism, which is based not on resisting automation and the encroachment of computers into working life, as you might expect, but on embracing it, as fast and far as possible. This is definitely not an old-fashioned Marxist scenario (at various points in the book Mason ticks off Marx, Engels, Lenin and a gallery of early Russian economists, and tips a hat to bogeymen of the left such as Friedrich Hayek and Ludwig von Mises). Nor indeed does he see it as inevitable. We have to want it and make it happen, too.

Briefly (it can only be briefly), Mason’s thesis is that ‘info-tech’, as he calls it, is different from anything that has gone before. It affects every actor in the economy, down to individuals. It is increasingly making work redundant – think of the Oxford Martin School prediction that almost 50 per cent of US jobs are vulnerable to automation in the coming decades, a figure which some now think an underestimate. But info-tech also does funny things to the price mechanism. In a world where the marginal cost of replicating algorithms, MP3s, MS Office or news stories is effectively zero, high prices and profits can only be maintained by monopolies (Apple’s walled garden, Microsoft’s lock-in, Google and Facebook’s appropriation of externalities in the shape of personal information involuntarily given away) – in other words by making artificially scarce what could economically perfectly well be abundant.

But if the inherent manufacturing cost is tending to zero, what happens, finally, when people deliberately decide to de-monopolise (sorry) products or services by collaborating in ways that ‘no longer correspond to the dictates of the market or the managerial hierarchy’ – for example, by jointly writing the free Open Source software that powers most of the internet, the most powerful computer in the world and the dominant mobile phone operating system (Android), or contributing to the largest information product, Wikipedia? The answer is that these non-market initiatives create something new – what might be called a brand-new ‘monopoly of free’. By removing whole chunks of enterprise from the market economy (as a commercial site Wikipedia could be taking in $3bn a year, by one reckoning), they effectively bar the terrain to the market. A new economy of abundance is in the process of being born.

Bring it on, says Mason. Capitalism as we know it, having spawned economic and technological forces that it can no longer contain, has run out of steam. It has always driven forward by opening up new markets, whether demographic, geographic or technological: but the monopoly of free makes the last frontier, the personal, harder to cross. Without it, capitalism is ‘an economic system in disequilibrium with its environment and insufficient to satisfy the needs of a rapidly changing humanity’.

So let’s move on, says Mason – not through the forced destruction of market mechanisms by the working class, as anticipated by the left (‘over the past 25 years it is the left’s project that has collapsed’), but rather through the spontaneous, creative, self-organising activities of the ‘networked individuals’ that the working class has morphed into under neoliberal capitalism’s economic and technological pressures. In this, he believes, they will need to be supported by institutional innovation, imaginative state intervention, including the provision of a universal basic income, and internationally-agreed action to deal with the threats of climate change and the overhang of global debt.

Inevitably Mason’s ‘guide to the future’ is a lot sketchier and more speculative than his dense (sometimes over-dense, although vastly impressive in range and depth of reference) and rich analysis of the past – which has equally inevitably brought down a hail of criticism from sceptics. But two things are worth noting. First, in his closely documented The Rise of the Robots, Silicon Valley software entrepreneur Martin Ford strikingly arrives at the same end point from a completely different point of departure: a guaranteed basic income for all, he concludes, will be the only means of generating enough demand to keep the wheels of capitalism turning when work disappears into the machine. And second, the scoffers who ridicule Mason for being both apocalyptic and utopian are on no firmer ground themselves. It’s true that capitalism has shown a protean ability to adapt and transform up to now. But faith that it will do so again because it has done it before is just that – faith – and no more based on hard evidence than Mason’s scenario. Likewise the labour theory of value that underpins much of his economic thinking may be unfamiliar. But the assumptions that it rests on are actually no more outlandish than the perfect information, efficient markets and utility-maximisation of neoclassical theory, which has hardly proved itself a reliable guide to the real economy.

The truth is that Mason doesn’t know any more than anyone else what the world is going to throw at us in the next decades, and it is certain that at least half of what he proposes will be wrong – including, very possibly, the positive outcome (Evgeny Morozov, for example, takes a rather different view). Meanwhile, the range of possibilities that he presents is tantalising and invigorating, not least because he grants agency to individuals – if they (we) care, or have the willpower, to use it. You certainly won’t agree with everything here, and I lost count of the scribbled question and exclamation marks against the text. But Mason is asking the right questions, and in its heady interweaving of social, economic and political history, his book is a thriller. Even if you find yourself at an unfamiliar or even unwelcome destination, as is quite likely, you won’t have been bored on the ride.

The unstoppable rise of the robots

The French newspaper Libération devoted four pages of a recent issue to an investigation of the idea of a minimum basic income for every adult. Among the surprises for an Anglo-Saxon reader, for whose politicians such a notion is so far out of mind as to be science fiction, is the discovery that in other countries it is very much on the map. The left-wing Spanish party Podemos has it in its manifesto. The Finns are considering it, and the Swiss will vote on it in a referendum later this year. The practical Dutch are testing it in a pilot in Utrecht.

If Martin Ford is right, such initiatives, far from being a rush of Corbynitis to the head, are necessary and urgent. Rather than signalling the demise of capitalism, he suggests in his impressively researched and soberly argued book, The Rise of the Robots, a basic minimum income may turn out to be the only way to save it.

Ford is a successful Silicon Valley software entrepreneur, which at first glance makes him an unlikely technological doomsayer. But unlike gung-ho contemporaries such as Ray Kurzweil who enthuse about the coming ‘singularity’, the moment when machine intelligence surpasses the human kind, and the achievement of immortality (sic), Ford concentrates on what is happening in the here and now to make a powerful case that this time it really is different; meaning that the threat in his subtitle – ‘technology and the threat of a jobless future’ – is very likely to come true.

The techno-optimists’ cheerful view of the economic consequences of technological advance is based on faith and precedent. They point out that since the Industrial Revolution first disrupted craft-working in the 18th century, each succeeding wave of progress has produced new markets and industries that generate more and better-paid jobs than they destroy. This is Schumpeter’s ‘creative destruction’ in action, a virtuous circle in which technological innovation drives higher wages and increasing demand that floats all boats. Better education will keep human capability ahead of the machines; economic growth will provide a steady flow of new jobs in partly-mechanised sectors and others that spring up to serve them.

But although many economists (and almost all politicians) are still parroting the old mantras about a return to growth, Ford notes that the positive relationships had started breaking down long before the financial crash of 2008, substantially pre-dating the current round of techno-acceleration. (Ford writes mainly about the US, but in economics as in other spheres the UK can be relied on to imitate the transatlantic experience with a short time lag, if in slightly less extreme form.)

Thus Ford lists ‘seven deadly trends’ that have been ticking away behind the economists’ comfortable assumptions like a colony of deathwatch beetles: average wages stagnant since the 1980s; a shrinking labour share of GDP; declining labour-force participation (particularly among less qualified men); jobless recoveries (it took until mid-2014 for US employment to regain pre-crash totals, by which time the working population had increased by 15 million – in fact the US economy has put on no net new jobs this century); soaring inequality; a diminishing premium and growing underemployment for graduates; and the polarisation of the jobs market between well-paid full-time employment for the very few and part-time and freelance for the many – ‘uberisation’, let’s say.

Now take this malign dynamic and supercharge it with the most powerful general purpose technology ever devised (ie one whose effects will leave no industry untouched) – one, moreover, whose advance is accelerating with the undiminished momentum of Moore’s Law. Machine intelligence is improving by leaps and bounds. IBM supercomputers have beaten human champions not only at chess, a bounded problem (Deep Blue), but also at the game show Jeopardy!, a cryptic, unbounded one (Watson). Watson is now being deployed commercially. While Artificial Intelligence (AI) is still the ‘narrow’ variety, the ‘strong’ version, or Artificial General Intelligence (AGI), is now being vigorously pursued not just in research labs, as in the past, but competitively by ambitious, well-resourced giants such as Google, Facebook and Apple, pushed by powerful commercial incentives to make it work. Another AI, Artificial Intuition, is in the works.

A recent Forbes contribution (entitled none too subtly ‘Deep Learning And Machine Intelligence Will Eat The World’) could not have put it more clearly: ‘The effects of this technology will change the economics of virtually every industry. And although the market value of machine learning and data science talent is climbing rapidly, the value of most human labor will precipitously fall.’ Publications (reputedly including Forbes) already employ software to write news stories and reports; in a few years’ time 90 per cent will be machine generated, by one estimate. In many other industries, automation, says Ford, is simply ‘the logical next step’. Beware, he warns: if you’re working with computer software, you’re probably training it to replace you.

In a much quoted 2013 report, the Oxford Martin School suggested that 47 per cent of US jobs would be susceptible to computerisation in the next two decades. Later estimates raise that to 80 per cent. Given that the essence of computerisation is enabling more to be done with less, is it conceivable that new industries based on it will be labour intensive? Looking at early evidence from those avatars of the new economy YouTube (which had a value of $1.65bn when it was acquired, with its workforce of 65), Instagram ($1bn and 13) and WhatsApp (a staggering $19bn and 55), the answer seems pretty clear. Uber and Airbnb just underline the point: for the first time new technology is not only creating fewer jobs than it destroys: it is also creating worse ones. The circumstantial evidence keeps flooding in. Thirty per cent of US science and technology graduates are currently labouring in jobs that don’t need degrees; most UK graduates are in non-graduate jobs. ‘The assumption that we will transition to a more productive … economy just by increasing the conveyor belt of graduates [the method used in the past] is proven to be flawed,’ says the Chartered Institute of Personnel and Development (CIPD).

So far, so not very good at all. But if the scenario follows Ford’s trajectory, the effects will be massively self-reinforcing through the resulting demand deflation. Ford quotes the following exchange between Henry Ford II and union boss Walter Reuther on a factory visit: ‘Walter, how are you going to get those robots to pay your union dues?’ ‘Henry, how are you going to get them to buy your cars?’

At the extreme, radical inequality is unsustainable in the most basic sense: by hollowing out the middle classes, the winner-takes-all economy becomes a contradiction in terms. There won’t be anything to take, because the very rich simply don’t consume enough to keep the wheels turning. The 2008 crash – partly the result of average consumers continuing to consume, but on debt rather than cash – should be a warning here. As Ford relates, companies are already abandoning the middle market to chase the 1 per cent of super-spenders; despite increasingly frantic advertising, the average US car is now 12 years old, a forerunner of the tangible consequences of rising inequality. In this context, the gathering cloud of graduate debt overhanging the US and UK looks increasingly ominous, not only for individuals with increasingly uncertain earning prospects but also for the economy as a whole, while the policies that created it are revealed as the monstrous false economy they seemed to many at the time.

This is the background against which Ford sets out his proposal for a universal minimum basic income. Ironically, its progenitor was the English radical Tom Paine, who advocated it as a blow for social justice against the cruelty of emerging capitalism. By contrast, for Ford it is primarily a prop to keep that system going. As he concedes, the idea is controversial, with weighty considerations on both sides. Yet the need for solutions may be even more pressing than he thinks. Ford didn’t foresee another emerging result of global inequalities, the rising tide of immigration. And like almost every other commentator, whether optimist or pessimist, Ford has internalised, and thus leaves out of his reckoning, the most deadly trend of all: the pernicious incentives at the heart of today’s shareholder capitalism.

This is the first great wave of technological evolution whose justification is not that it benefits humanity but that it benefits shareholders. The lesson of the last 30 years is that investments driven by self-interest and shareholder value – where the benefits are supposed to accrue to one group in society – do not produce a generalised increase in wellbeing, because they are designed not to. As Jeff Pfeffer succinctly puts it, ‘Economic performance and costs trump employee [and societal] wellbeing’. Under today’s incentives, investments in accelerating technology will just destroy more of it. To be clear about this, consider a quote from the founder of a start-up planning to automate the production of customised gourmet hamburgers: ‘Our device isn’t meant to make employees more efficient. It’s meant to completely obviate them.’ Or this from another start-up entrepreneur warning that executive jobs too are in the firing line: ‘It will not be possible to hide in the C-Suite for much longer. The same cost/benefit analyses performed by shareholders against line workers and office managers will soon be applied to executives and their generous salaries’. Oh yes, the name of this start-up: iCEO.

At this point, as Ford notes logically if bleakly at the end of his book, falling demand may eventually remove the incentive even for further automation. It might. But it would surely be unwise to wait that long to find out.


Productivity is only half the story

And another thing…

The pother over productivity has reached such a pitch that I make no apology for returning to it.

In one of the latest contributions, the newly-Japanese-owned FT recently carried an article suggesting that the solution for the UK’s low-productivity problem was to cancel August – in other words, everyone should take less holiday. I think it was meant seriously.

It brought irresistibly to mind W. Edwards Deming’s acerbic ‘Having lost sight of our objective, we redoubled our efforts’, instantly putting a finger on what’s missing from the productivity debate: it’s all very well working harder, but to what end?

‘Productivity’ is about ‘efficiency’, and there is no argument that UK productivity/efficiency is historically poor, lagging that of many of our national competitors. But efficiency is a crude proxy for a more important and more profound measure: effectiveness. Like GDP, productivity is a measure of activity in general, irrespective of whether it is useful and beneficial for society or not. It measures outputs against inputs and is about means – doing things right. Effectiveness measures outputs against objectives and is about ends – doing the right thing. Unless it’s related to the right thing, productivity is of secondary importance. As Peter Drucker decisively summed it up, ‘There is nothing so useless as doing efficiently that which should not be done at all.’

The economic story told by effectiveness differs in important ways from the one derived from the productivity figures. For example, the official narrative puts much emphasis on improving productivity through indirect supply-side measures such as investment in infrastructure and education, and hiving off as much activity as possible to the supposedly more efficient private sector.

Of course, functioning infrastructure, including education, is essential. But it has nothing to do with organisational effectiveness, which is not a private-vs-public-sector issue: it is one of system design. All over the economy people are being beaten up to do more efficiently stuff which shouldn’t be done at all – either because they are attempting to do the wrong thing righter, which, as systems guru Russell Ackoff points out, just makes them wronger, or because they are redoing something that wasn’t done or wasn’t done properly the first time round: ‘failure demand’, in John Seddon’s term.

The dirty secret is that the NHS and many public services are stuffed full of failure demand – in some cases 60 to 80 per cent of contacts are repeat calls from folk who haven’t had their problem solved the first time. But so too are the customer-service departments of banks, phone companies and other utilities which measure their activities in terms of efficiency (x number of calls per hour, x number of rings to answer the phone) rather than effectiveness (the overall time it takes to fix a customer’s problem). In other words, measured against their purpose, whatever the productivity figures say, they are hopelessly ineffective.

At a stroke, this nullifies a second part of the official narrative: in shorthand, the death spiral of public services. The conventional story is that services are unsustainable, groaning under ever-increasing demand and expectation, necessitating constant cost-cutting to make them more efficient. But this is a travesty of the truth. What they are groaning under is the weight of citizens failing to get routine problems fixed by a system that is designed against cost, not to meet predictable demand.

Nor is it true that there is an insurmountable resource crisis; or rather, there is, but it is caused by a work design in which jobs are so tenuously connected with needs that they mostly make things worse rather than better. That includes anyone working to internal service agreements or standards, having to meet numerical or time targets or quotas, managing demand (rationing) rather than meeting it, working in back offices, shared-service and most contact centres; central specifiers, commissioners, inspectors and regulators; and anyone managing the same, basically policing other people and administering the rules. That’s a lot of people.

The bad news is that all these are what anthropologist David Graeber has termed ‘bullshit jobs’, make-work employment offering no meaning or pride (no wonder levels of engagement are so dismal). The good news is that any improvement (doing the right thing, however imperfectly at first) brings a double benefit, reducing wasted work, and therefore cost, on one side, while freeing up capacity to do more on the other. Climbing morale as people are reconnected with a meaningful purpose adds a third whammy. So yes, there is a resource crisis – but it’s a crisis of management effectiveness, not individual productivity.

I recently came across an intriguing concept called ‘Eroom’s Law’ – Moore’s Law backwards, if you haven’t got there – which applies to processes that, unlike computer processors, get slower and more difficult over time. It was first applied to the seemingly inexorable slow-down in new drug discovery, but it also usefully illustrates what’s happening to management as it proliferates and slows under its own friction (I wrote a bit about how it happens previously here). Unless we can prise management out of the grip of Eroom’s Law, any increase in overall productivity will be more than eaten up by the rising tide of bullshit and a corresponding decrease in effectiveness.
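The contrast between the two laws is easy to make concrete. Here is a minimal sketch; the doubling periods are the commonly cited ballpark figures (transistor counts doubling roughly every two years, inflation-adjusted cost per new drug doubling roughly every nine), not numbers from this piece:

```python
# A rough sketch of Moore's Law vs Eroom's Law over the same period.
# Doubling times are the usual ballpark figures, not data from this
# article: capability doubling every ~2 years vs cost doubling every ~9.

def growth_factor(years, doubling_period):
    """How much a quantity multiplies if it doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

moore = growth_factor(18, 2)   # capability up 512x over 18 years
eroom = growth_factor(18, 9)   # cost up 4x over the same 18 years

print(f"Moore's Law: capability up x{moore:.0f}")
print(f"Eroom's Law: cost up x{eroom:.0f}")
```

The point of the juxtaposition is that both curves are exponential: one compounds benefits while the other compounds friction, which is exactly the dynamic claimed here for proliferating management.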

Being more customer led – do you listen to research or to your intuition?

To be blunt: how can you be customer-led if customers don’t know or can’t tell you what they want themselves? From the company angle the Holy Grail is a market game-changer: the automobile to replace the horse, say, or an iPhone, or an iTunes or Spotify. But if as a customer you’ve been thinking in terms of more rapid equine transport, a telephone that’s better to talk on, or a larger collection of CDs, what do you make of a machine with four wheels and a clattering engine, a supercomputer with a touch-screen, or a virtual jukebox hosted in something called ‘the cloud’? Round the other way, how come despite decades of experience, pollsters still find surprises and contradictions in people’s actual behaviour, even when polled the day before an election about how they are going to vote?

The answer to the questions that came back from an intriguing Foundation Forum on 10th June was that human reactions remain more or less as infuriatingly difficult to call as ever. We have a lot more data about why we react as we do. But while that helps us understand the degree of our unpredictability, knowing more accurately the scale of the problem doesn’t make it easier to get business decisions right. To paraphrase Lord Leverhulme, we know that half our knowledge about the way people will jump in any situation is wrong: we just don’t know which half.

‘If you’d told me in 1987 that I’d employ 100 filmmakers in 2015, I’d have wondered why I’d ever do that, or that I’d employ 20 economists’

The challenge for the research industry therefore remains as great as ever. As Ben Page, chief executive of pollster Ipsos MORI, noted, while some of the techniques are remarkably unchanged – companies still knock on doors, do telephone interviews, and send out postal surveys – they have deepened. ‘If you’d told me in 1987 that I’d employ 100 filmmakers in 2015, I’d have wondered why I’d ever do that – or that I’d employ 20 economists’. That reflects more sophisticated attempts to tease meaning out of the information gathered, with a discernible shift of emphasis from asking questions to watching and observing what people do. Today’s key trends, says Page, are speed (clients want reports in hours, not weeks), mobile (location recording, instant selfies at the breakfast table) – and the dawning realisation that since subjects aren’t completely rational, it’s not enough just to record what people think they do.

That means that no one research tool is adequate on its own. ‘Instead what we’re seeing is layering of these different techniques, so clients will be looking at a whole range of different data sources. And as we better understand these things using all the techniques at our disposal we’re getting a much better and richer understanding of human behaviour’, he said. So it’s not a question of either intuition or research, but both/and, and a lot of other things besides. Indeed, intuition remains a powerful force – ‘an Ipsos MORI chief executive wouldn’t be advising anyone to scrap research and just listen to instinct, but it’s amazing how many clients pay millions of pounds to evaluate an ad campaign and then cheerfully ignore the data and go with their gut feel.’

To understand how people work ‘you need research, you need data and you need psychology’

Marc Michaels, the second panellist, agreed that to understand how people work ‘you need research, you need data and you need psychology’. And as someone initially recruited to set up a government direct-marketing unit and then more generally to work on changing behaviour – persuading people to eat more sensibly, give blood, join the army: a tough brief – he is clear that in making research and data actionable, the new findings of psychology and behavioural economics are critical. It’s not that we lie, he says; but as Daniel Kahneman demonstrated in Thinking, Fast and Slow we each have two brains, a System 1, ‘Homer Simpson’ organ for instinctive, holistic, and instant decision-making, and a System 2, ‘Spock’ brain for more analytical, deliberative, demanding thought. There are cognitive biases that undermine strict ‘rationality’ but which also give opportunities to ‘nudge’ people towards certain choices or behaviours by fitting the way they are engaged to the innate predisposition to respond in one direction or another.

Thus there’s a general human tendency to fear losses more than to value gains (‘a bird in the hand is actually worth two point five, sometimes even three, in the bush’). The slacker, Homer Simpson brain will try to get away with answering an easier question than the one asked. Big, complicated issues are often avoided, so ‘chunking’ them down is likely to win a more positive response. As Stanley Milgram’s famous 1960s experiments showed, people respect and obey authority, sometimes to a frightening degree. So, in one celebrated example, the Department of Health and COI recruited Anne Diamond, a respected and well-loved newsreader who had suffered a cot death, to counter the grandmother-sanctioned traditional wisdom of putting infants to sleep on their fronts and persuade young mothers to sleep them instead on their back or side. The result: a reduction in cot deaths of 70 per cent. But doing it needed authority to fight authority.
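The ‘two point five in the bush’ remark corresponds to the loss-aversion coefficient in Kahneman and Tversky’s prospect theory. As an illustrative sketch, here is their value function with the parameter estimates usually cited from their work (α ≈ 0.88, λ ≈ 2.25) – the figures are assumptions of this sketch, not numbers quoted at the Forum:

```python
# Sketch of the Kahneman-Tversky prospect-theory value function,
# which formalises 'losses loom larger than gains'. The parameters
# alpha (~0.88) and lam (~2.25) are the commonly cited estimates
# from their work, not figures from the Forum discussion.

def subjective_value(x, alpha=0.88, lam=2.25):
    """Felt value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A 100-pound loss hurts about 2.25 times as much as an equal gain pleases:
ratio = abs(subjective_value(-100)) / subjective_value(100)
print(f"loss/gain ratio: {ratio:.2f}")
```

With these parameters the loss/gain ratio for equal stakes is simply λ, which is why a bird in the hand can be ‘worth two point five in the bush’.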

In a sense, research has gone full circle, observed Clive Humby, co-founder of data-led research business dunnhumby and now Starcount, which uses social media and ‘fan science’ to craft brand influence strategies. ‘Really, what we’re really talking about with data is understanding customers through the things that we observe. We’ve heard about watching them through videos, asking them questions, and obviously looking at what people physically do close-up and the transactions they’ve made using information. It’s gone through a complete revolution’.

Humby is credited with the line that data is the new oil, and he drew two important comparisons. First, the gusher in its raw state has little value, only becoming usable when it is processed into something else. Data is the same: ‘Data is everywhere. I’ve got 147 devices in my house that have their own IP address – smart TVs, a lighting system, computers, phones, all those items are generating data about you all the time. The data on ourselves generated in the last day, exceeds all the data generated in a year 12 months ago. So the real challenge isn’t in collecting data any more, there’s far too much of it – it’s making it useful’. So it’s the algorithm guys, the pattern-identifiers that are the stars, the car designers building on the potential of oil.

‘The real challenge isn’t collecting data any more, there’s far too much of it – it’s making it useful’

Yet solving one problem often just reveals another. Made usable, into petrol for instance, oil becomes volatile. So too with data, which turns not just volatile but nuclear when it collides with privacy. Benign nudging and behaviour-recording with consent are one thing; but what about using your shopping list as a basis for insurance premiums? Or a real example from Tesco: ‘One of the most important correlations we found in terms of data we could have commercially exploited, was the one between £4.99 Chardonnay and condoms. But we never acted on it. The reality is that just because you know, doesn’t mean you should. And that is the dilemma we’re all facing’: Is it cool or is it creepy?

The dilemma can only intensify with the rapidly emerging internet of things, in ways that have barely yet been registered. Suddenly the issue is no longer the quantity of personal information being given away to a faceless corporate. ‘We think about privacy in terms of our big corporate systems and frontline operators who talk to customers’, Humby pointed out. But actually the data is available to developers, app people and potentially everyone in the organisation who has access to it. The people who repair your car know everything about where it has been, how fast it was driven, how long it was parked. How easy would it be for someone to get this and use it for something unforeseen and with bad intent? ‘Once that happens, everyone becomes a possible liability. And we have to really worry about that as leaders in organisations’.

The paradox of research, as with most things human, is that the more we know, the more complicated it gets

The paradox of research, as with most things human, is that the more we know, the more complicated it gets – and a simple scientific synthesis seems as far off as ever. At least for the time being, it’s humans that rule, not algorithms. Noted Page: ‘It’s taken some time, but I think the industry is getting there. One of the things that’s holding us back is that sometimes we’re conservative, and our clients are just as conservative because they’ve been tracking data the same way for 30 years, and it’s consistent and tells them things’. His advice is: relax a bit, be creative, and remember the lesson of Alex Salmond – who on the basis of extremely expensive US analysis through social media knew for certain that the Scots would vote for independence, never mind the opinion polls.

‘People aren’t rational,’ summed up Michaels. ‘When they tell you they want to do something, you may see from the data side that they’re doing something, but you’ve got to think about what is going on there. You can think data, but you need to talk human’.

The Foundation’s thoughts

Four of the most significant points which emerged for us were as follows:

• Collaborative businesses are succeeding because they bring together at least three useful characteristics in a way that makes them reinforce each other. Any new and better business model tends to do this, creating a virtuous circle different enough from the incumbents’ for it to be impossible to copy with a simple adjustment.

• Getting to a good understanding of what people think, feel and, crucially, do can take the application of all of the approaches described above. On the evening we talked about triangulation: using market research to understand the landscape, then conducting deeper exploration in the areas of interest. This might draw on more extensive real-world data, whose vastness is made useful by developing and testing hypotheses based on human understanding that respects the instinctive ways we often act. Another way to describe the process is as detective work – an overall conundrum to be solved, with lines of enquiry established around the possibilities. Each is then explored creatively (what could be going on here?) and challenged, using data, further specific research or experiments, to eliminate as much as possible from the enquiries.

• As Clive reminded us, there is a rear-view-mirror issue with data. It can tell us what’s happened, and used well it can give us insight into why. But it can’t predict the future – which might make some of the big-data investment going on right now look a bit optimistic.

• The vastness of the data we each generate creates real ethical issues that aren’t currently being addressed. It is much easier than we realise to share information on everywhere we’ve been, everyone we’ve spoken to and a fair bit of what’s been exchanged, with all sorts of organisations and individuals that we might be wary of if we sat down and thought about it. As we heard, modern cars contain information on where they have been and how they were driven, all easily accessed by your local car dealer, the police or your insurance company… or anyone who knows how to hack and steal it. We often allow apps to get this kind of information from our mobile phones, because we click ‘allow’ and because the Apple Ts&Cs are, in Clive’s words, longer than Shakespeare’s The Tempest.

• Our human intuition isn’t found only at the customer end of the telescope. The users of insight have just the same biases, from the more entertaining (‘I don’t care what the research says, we’re running the ad’) to the more serious problem of inconvenient market research findings getting short shrift from a leadership team trapped in a world they see from the inside of their business looking out. It can be useful to treat the conclusions from insight work as the start of another challenge: giving them the impact they need to get the organisational response they require – for example, getting leaders speaking to customers themselves, so that they create their own stories and beliefs in line with the bigger picture.

George Osborne’s productivity delusion

The air of unreality that made the election so weird has only deepened with George Osborne’s ‘big budget’ last week. It’s a novelty to find The Economist, Guardian and Financial Times in unison on anything much, but all three judged that the budget’s political astuteness was only matched by its economic irrelevance. The Economist was particularly severe on ‘indefensible’ cuts to benefits for the lowest paid, ‘barmy’ inheritance-tax reductions on houses, and the ‘outrageous favouritism’ of the welfare cuts.

The Chancellor inhabits a Humpty Dumpty world in which words mean what he chooses them to mean, not what anyone else understands. So in a budget that will do the opposite of what he claims, it is perhaps no surprise to find a ‘productivity review’ that says it ‘sets the agenda for the whole of government over the parliament to reverse the UK’s long-term productivity problem and secure rising living standards and a better quality of life for all our citizens,’ but is in fact just a mash-up of what was in the budget – which itself, apart from a levy to fund apprenticeships, offered nothing that was either new or remotely relevant to the real productivity issues.

No hint here that productivity is a problem which the UK has been failing to fix using exactly the same tired and half-hearted supply-side means for more than half a century (I was writing about failure to electrify the railways in the 1980s); no hint that we are moving from the old economy, which was already tough enough, into a qualitatively different new technological era where the challenge may be to create any jobs at all; no hint that with companies already bulging with cash and labour’s share of the economy shrinking by the minute, there are no strings left for the government to pull to tickle entrepreneurs’ and managers’ jaded animal spirits.

For anyone with eyes to read, the pages of Harvard Business Review, not a publication of the hard left, have been sounding the alarm for the past two or three years: in the US and UK capitalists have given up on the virtuous circle of reinvestment and innovation that kept wages rising and economies moving forward since WWII. So any benefit of lowering corporation tax to 18 per cent will simply disappear in bigger payouts to shareholders in the shape of dividends and share buy-backs, thank you very much, with at best some no-risk investment in cost, and job, cutting. This is the new normal, and it’s to do with the relationships and incentives between the firm and its stakeholders – its corporate governance – not the state of the infrastructure, education or housing, for goodness’ sake (memo to George: if builders haven’t already built on brownfield sites it’s because they’re too expensive – the land is contaminated or low-lying – and people don’t want to live there).

In these circumstances, the otherwise welcome announcement that the government has recruited John Lewis chairman Charlie Mayfield to lead a taskforce developing ideas for raising business productivity is unlikely to lead very far. As it happens, Mayfield put his thumb right on the sore point that is the UK productivity record in a recent interview on the BBC’s Today programme. Asked about his role at John Lewis, he replied: ‘I work for the partners in the Partnership. My job is to invest in them, help them to work as well as they can, and if we do that, we’ll succeed as a business…. They hold me to account for that.’ Well, yes. It’s not rocket science. Giving people a job with a purpose, the means to improve and a pay packet that takes wages off the agenda is what sustains the engagement that feeds the high-productivity workplace. All the rest is secondary.

The catch is that most companies don’t have what goes with it at John Lewis – in particular, committed long-term governance that aligns bosses with the workforce that actually creates value, not shareholders. In his other capacity as chairman of the UK Commission on Employment and Skills, Mayfield recently illustrated the extent of the management switch that the country needs to make to plug the productivity gap by drawing attention to OECD research showing that an astonishing 22 per cent of UK jobs require only the educational level of an 11-year-old, a proportion exceeded solely by Spain among our competitors. By contrast, Germany has just 5 per cent of jobs that are as undemanding as this, and the US 10 per cent.

Underlining the point, Will Hutton notes that lackadaisical governance and financialisation have turned the UK into a sub-contract economy with ‘a string of technology-light, productivity-poor small companies’, a yawning trade gap, a hollowed-out industrial base, and a record of unending decline in its share of world exports. The erstwhile workshop of the world’s current champion industrial sector? Food processing.

Reversing the productivity spiral ideally wouldn’t start from here. It’s difficult, but not impossible. What it does require, as Mariana Mazzucato has eloquently laid out, is a new, richer and more optimistic narrative of innovation and wealth creation that emphasizes the importance of patient, committed capital and recognizes that productive capitalism ‘is one in which business, the state, and the working population work together to create wealth’, not appropriate it. This is the opposite of the risible, infantile obsession with ‘business friendliness’, and indeed of almost everything in Osborne’s budget. Don’t hold your breath.

The nature of technology

What is technology and how does it develop? Remarkably for something so dominant in our lives, until recently no one had much systematic idea. With a few exceptions economists treat it as a black box. There’s masses of work on technicalities, but technology’s nature, its relationship with innovation and economics and how it evolves, have largely gone by default.

Enter in 2009 W. Brian Arthur and a remarkable book called The Nature of Technology. Arthur is well known for his groundbreaking work on economics and complexity at the multidisciplinary Santa Fe Institute in New Mexico, and he draws on both for his project, the formulation of an overarching theory of technology and technological development.

In (very) short, Arthur concludes that technology – roughly defined as natural phenomena repurposed for human ends – is not a collection of arbitrary standalone techniques and inventions, as previously viewed. Instead, it is something much more like chemistry or biology than Newtonian physics, sharing with life its ‘connectedness, its adaptiveness, its tendency to evolve, its organic quality. Its messy vitality.’

Technology, says Arthur, ‘builds itself organically from itself’ as individual developments feed on each other and cumulate. In a process of ‘combinatorial evolution’, proliferating possible technology combinations become almost infinite. Advance is non-linear, so that abrupt discontinuities can occur as tipping points arrive in record time.

Although as qualified as anyone, Arthur doesn’t do technology prediction – he doesn’t even mention Moore’s Law, perhaps today’s most powerful innovation and technology amplifier. But it’s easy to see the process he describes in today’s digital transformation, the startling speed and unpredictability of technological evolution strikingly borne out by events.

For example, when he was writing just six years ago the idea of drones on sale on the High Street would have been strictly science fiction. Even three years ago a self-driving car was assumed to be decades away. In robotics, too, new devices suddenly sit ‘at the nexus of visual perception, spatial computation, and dexterity [reaching] the final frontier of machine automation’, as Martin Ford puts it in his (also impressive) The Rise of the Robots. Ford notes that a San Francisco startup has devised a robot that aims to automate the production of custom-made gourmet hamburgers – not, that is, as in ‘make burger flippers more efficient’, but as in ‘obviate the need for them altogether’.

As Arthur sees it, as technologies, collections of technologies and sub-technologies interact with each other in ‘messy vitality’, they generate a teeming ‘supply’ of possible new technology combinations, all available to be used and built on by innovators. But although they emerge from their own history, the form technologies actually take is by no means inevitable, being inflected by human agency and historical small events.

One such human agency is management, as a ‘purposeful system’ itself also a technology in the broadest sense. Although it is rarely considered by commentators, and Arthur doesn’t go into it in depth, management is obviously a crucial influence on demand for technologies and how they are used. When writers such as Ford (himself incidentally a successful Silicon Valley entrepreneur) raise worries about ‘technology and the threat of a jobless future’, to borrow the subtitle of his book, the response of techno-optimists is invariably something like – ‘Trust us. The great technological leaps of the past have always created more jobs than they have destroyed. Invest in the supply side (education, infrastructure, easing the transitions), have faith, and all will be well’.

In the light of Arthur’s theory and Ford’s practice, the rejoinder has to be, ‘What about the demand side? Look at the context: the technological combinations managers and businesses have used, and more particularly what they have chosen to use them for.’

As Ford points out, while production technologies helped raise productivity by 107 per cent between 1973 and 2013, in the latter year a typical US production worker took home 13 per cent less in real terms than 40 years before. In the decade to 2010, the US economy created no net new jobs. Inequality soared as productivity gains were monopolised by shareholders (including and especially top managers). In 2010, the US computer industry employed 166,000 fewer people than in 1975. Meanwhile the ‘sharing economy’ shreds jobs and spits them out as micro-employment, and more generally the internet economy is based on a business model of surveillance which turns consumers into products and only incidentally (and then mostly unpaid) into producers.

To emphasize, none of these developments was inevitable. The same digital technology crossed with a different management technology would have produced different outcomes: it’s not hard to imagine peer-to-peer platforms devoted to medical or social ends, for example, or an internet which put individuals in charge of their own data and reversed the current relationship between consumers and companies.

All this suggests that it would be unwise to bank on historical precedent providing a reliable guide to our economic evolution from here on in (that’s what discontinuity means). Arthur ends his book by noting the increasing ambivalence with which humanity views its miraculous technological creation. On one hand it is undeniably a blessing serving our lives; yet on the other there is growing unease at the way it has estranged us from nature, now endangering the future of the planet, and a dawning fear that the apprentice’s magic is outstripping that of the erstwhile sorcerer.

Seeing the manifestations of Arthur’s ‘combinatorial innovation’ – the internet of things, learning machines, automation, rapidly progressing Artificial Intelligence – emerging around us, all turbocharged by Moore’s Law, it seems probable that in terms of sheer processing power the race against the machine is already in the course of being lost. In which case we’d better sort ourselves out as humans and decide what this awesome thing is to be used for, and for whom – before it decides for itself.