My dad was a mathematical logician, one of the key minds in IBM’s heyday, an expert on modal logic and its data processing applications. What many of his colleagues didn’t know is that the barrel-chested wisecracking guy with the Liverpudlian accent in the office next door was an entirely self-taught man.
Fortunately for him, he had a connection, from wartime service in the RAF, to one of the greatest minds in post-war computing: Ted Codd, the man who invented the relational database and without whose work today’s data-hungry artificial intelligence would be unthinkable. (Decades later, Codd’s openly published IBM research gave Larry Ellison the blueprint for building a data empire at Oracle.)
There was no GI Bill in the U.K. My dad did a double external degree in applied mathematics and applied physics at the University of London in an astonishing eighteen months, commuting by train from the Birkenhead bartending gig he had during the week. In those days if you simply passed the exam, they gave you a degree. He did — and they did.
Why share all this? Because my father was one of the few people who knew both Alan Turing and Marvin Minsky. Turing, of course, was the genius who helped break the Enigma and Lorenz ciphers; Minsky was widely known as the Einstein of AI, a leading star at MIT for more than five decades (his light is considerably dimmer in these post–Jeffrey Epstein days).
In the late 1980s, my father was always going to AI conferences. He came home once having shared an amusing hallway sidebar with Minsky. This was long before deep learning revived neural networks and stoked the great hopes and fears about artificial intelligence. “Just imagine,” Minsky had enthused. “Someday you’ll be able to have an actual conversation with your thermostat.”
Now my dad was a technoskeptic with a keen sense of the absurd, a broken-nosed middleweight boxer whose idea of a great conversation usually included a pint of Guinness. He replied, “Why the hell would I want to do that?”
The point is this. Every once in a while, if you search long enough, you strike gold when trying to really understand an emerging field. Small example: the data virtuoso Michael I. Jordan, of Berkeley’s AMPLab, one of the big brains in the field of “big data.” In 2014, IEEE Spectrum — one of the bibles of the engineering trade — quoted Jordan at length in a rather cautionary interview. Big data is no different from any other form of data processing, he said. As such, it is absolutely not immune to the problems of garbage in, garbage out.
Jordan’s comments caused a minor sensation, but really all he did was suggest that the hype around big data was utterly overblown. To wit: It is “patently false” to think that there is any neuroscience behind current deep learning and machine learning techniques. We are “not yet in an era” when we can use an understanding of the brain to create intelligent computer systems. “It’s pretty clear” that the brain doesn’t work via the methods used in machine learning, notably backpropagation. “There’s no clear reason that hope should be borne out,” in reference to building useful silicon-based intelligence. “There are many, many hard problems” in vision that remain to be solved. And the notion of “error bars” is missing “in much of the current machine learning literature.”
A lively debate erupted on the IEEE website, as folks argued over what Jordan had actually meant. To be sure, the field today is white hot, and we’ve seen manifold advances in AI and machine learning since the kerfuffle. But the contrary cases — Mercedes-Benz withdrawing robots from its assembly line in favour of human beings, IBM Watson’s utter (and very public) failure as a medical diagnosis tool, the limits of blockchain-based contracts — do more than affirm Jordan’s 2014 cautioning. They puncture the hype and point to a central truth about the present state of play.
That truth is that AI is still deep in the “magic box” phase; its core abilities have many admirable applications, but there are warning lights all over the place.
Here’s one: How many AI algorithms are written by women? Precious few, but the deficit runs deeper still. Fewer than 15 percent of all academic publications about AI and machine learning are researched and written by women. And this isn’t about political correctness: it’s a caution that a technology that stands to profoundly affect all humanity is blind in one eye.
With any given topic, one tends to find two species of non-fiction books. First, there’s the nuts-and-bolts overview, which synthesizes the extant evidence and works toward an interim conclusion. In my journalism days, we’d call this a newser. Then there are other books — far rarer, often obscure — that are real dynamite, poised to bust a paradigm wide open, to use a five-dollar phrase.
Andrés Oppenheimer has been an award-winning journalist for the Miami Herald for decades. His reportage on Cuba and Mexico, for one thing, is dead superb. With The Robots Are Coming!, he demonstrates what’s possible when a first-class journalistic mind surveys a compelling contemporary issue. He’s assembled a remarkable array of interviews in support of a measured thesis: that artificial intelligence will have a profound impact on industries that depend on repetitive task sets, in places like an Amazon warehouse, an autonomous car, or (maybe) even a sushi restaurant.
Oppenheimer’s book is what the Romans called a vade mecum: a handy guide that omnivorous readers and other journalists will use for scanning the lay of the land. He’s done a crackingly insightful job of that and then some. Again in journalism-speak, he’s nailed the turnkey story.
This is not to damn Oppenheimer with faint praise, because his is a superbly well-worked piece of reporting. It’s actually a must-read. But to tackle AI as a cultural ethnographic issue — which is the story everybody wants and probably needs — is almost beyond the ken of any traditional journalist.
On the other hand, Inhuman Power, by a trio of University of Western Ontario scholars, might well be one of those books that people will look back on twenty years from now and say, Yeah, they got that right.
Put it this way: the relative utility of these books is radically different. Oppenheimer’s is what programmers once called a cog book: a cogent, tightly written Fodor’s, a road map to the ins and outs of the AI debate. His epilogue, a menu of professions likely to grow in parallel with the social changes, is surprisingly optimistic: it serves up alternative energy design, sales (because global consumption will grow immensely on the back of AI-mediated productivity), and personal care of all stripes. The last group includes teachers and professors who might just help curate a renaissance in humanism.
Call Oppenheimer a cautious but well-grounded optimist when it comes to both the changes and the challenges of the coming digital innovation. I buy some of his conclusions — in the main, those of the many futurists he interviewed. But as I see the inevitable changes edging closer, I wonder about late-stage capitalism, about increasing precarity, and, most of all, about humanity’s response to the “cyborging” of work.
Inhuman Power, for its part, is a radically written and conceived jeremiad, almost Talmudically dense. It powerfully suggests there are shifting tectonic plates when it comes to what AI can and might do — and, tellingly, why it might do it. Notwithstanding its Marxist origins, this book is remarkably dispassionate and, whatever one makes of its conclusions — there’s a harsh truth in many of them — it represents a profoundly crafted argument that AI is at once a weapon and a mare’s nest of ethical horrors.
To be sure, Dyer-Witheford, Kjøsen, and Steinhoff have not assembled an easy read. But in shaking the crystal ball, they’ve done a great service, revealing many unspoken assumptions within the AI debate. What they advance here, in many ways, is a disciplined set of diagnostic questions for whatever happens next.
If you want to probe capitalism and its discontents, runs the Inhuman Power argument, Marx did some of the homework for you. Of course, Marx never thought of AI. Not even close. So, arguably, his theories don’t apply all that well to the arguments of today and tomorrow. But the Western trio contend that the future of AI-mediated capitalism isn’t so much dystopian as it is a disjunction with sobering implications.
When one considers what AI can and might do to the so-called social factory, the argument continues, the outcome is the inverse of what Henry Ford achieved. Ford turned the workers who built his cars into a new class of consumers who could afford to buy them. AI will instead induce a counter-revolution.
In this way, Inhuman Power offers a stark contrast to Oppenheimer’s take: AI offers very little promise, as is obvious to anyone who’s walked the streets of San Francisco, ground zero for the commercialization these books both describe. The tent cities are there for a reason. The brute facts of late-stage capitalism and technology live and breathe in call centres and fulfillment centres (now there’s an Orwellian phrase), where humans are entirely under digital control. Those well-off enough to enjoy the conveniences of digital efficiencies ought to be one hell of a lot more mindful that there are human beings whose life work underwrites their privilege. The Uber backlash is but one chapter in this battle, and a first-class firestorm is brewing around Airbnb, literally around the globe.
Inhuman Power lives up to its name. It reveals the cauldron of problems that arise when the politics of hypercapitalism slams into the working lives of actual human beings. And not just those in the West, but also people in the emerging countries — the Indonesias and Kenyas and Indias — where the downstream effects will shake all the economic furniture. Both books agree that the cauldron is bubbling.
The Western authors have gone well beyond questioning the technological capabilities of AI and whether it will soon overwhelm us. They have audited the sociology, anthropology, and political ideology around the technology, to codify and characterize its dimensions and, insofar as they’re able, to assess the possible and probable outcomes.
What they suggest is deeply worrying, though perhaps not in the intended dialectical fashion. What if AI makes truly planned economies possible? What is the future of the West and its democratic values in the face of China, for example, whose AI theoreticians have suggested such an economy is the national objective?
This, in embryo, is the makings of an inhumanity whose scale and crushing power were captured both by Orwell and by Koestler, in Darkness at Noon. What’s in play here is a kind of creeping Stalinism in the name of progress, advancing not merely because of the market — but because we can!
Then there’s the money. Companies all over Silicon Valley are forming task forces to examine the roles, in-house and with strategic partners, of AI and data governance. Partly, of course, their aim is to head off the politicians, stung by the data scandals — including the 2016 U.S. election — who are looking for heads on pikes. There are votes in tech-bashing, it seems.
Combative questions about who owns the data that will feed the AI machinery, and who will profit from it, will punctuate the near future, as the era of data robber barons morphs into something like a moment of self-reflection. This reminds me of another Oppenheimer — Robert — who watched as the first nuclear blast lit up the New Mexico desert and thought: “Now I am become Death, the destroyer of worlds.”
Inhuman Power’s recipe for the AI apocalypse, however well-cast in its intimations of a trans-humanist hell, did have me grinding my teeth at points, as I tried to parse complex sentences that would have given even Hegel a headache. That said, these three are on to something with the lens they offer, if not the vision they advocate. They ought to get plenty of airtime for spreading their core thesis — counter-revolution is coming! — far and wide.
What might the predicted counter-revolution look like, and where might it start? A kind of answer is coalescing.
My marketing communications days suggest a helpful case study. I once had one of Canada’s great financial institutions as a client. In my view, if anyone is going to suffer the guillotine blade when full-blown AI comes into play, it will be those in the banking towers of Toronto and Montreal and Vancouver and Winnipeg. If we thought the disappearance of middle-class jobs with the collapse of trade unions was dire, particularly in the U.S. and the U.K., we won’t believe our eyes when AI really takes hold in the world of insurance and financial services. And it won’t be the fearless leaders in the C-suites who are out the door — it’ll be the bank tellers and the “people-people,” the repetitive aspects of whose tasks are ripe for “efficiencies.” Look around your local branch, and you’ll see it happening even now. Mortgages are already terrifying, for one thing. Streamlining a process that cries out for the human touch and stellar customer service feels all wrong. Which is, of course, why the bean counters will impose it. The counter-revolution will be customer rebellion against being served by a machine.
I’m no dystopian. I have seen many political resisters in my post-Soviet journalism work, and I know that human beings are far more adaptable and resilient than we often think. I’m temperamentally much more an absurdist (apples don’t fall far from the tree).
Having spent the better part of a fortnight thinking intensively about artificial intelligence, because of these two very good books, I think the most sensible prediction of what it will actually do is that it will empower those people most motivated to chase the almighty dollar. Political resistance to yet another generation of tech billionaires hasn’t amounted to much; one wonders if an Extinction Rebellion targeted at AI might put a roadblock in its way. I strongly suspect it will.
AI and data governance ethicists like Batya Friedman and Helen Nissenbaum, along with activists like Toronto’s indomitable Bianca Wylie, are scanning the horizon for something else, something deeper, something unformed. These prescient women are watching for a battle line in the making. I ask this for myself: Is the battle for the future of AI wholly unrelated to the crises of late-stage hypercapitalism and its Iago, patriarchy itself?
The bottom line is that one’s view of AI is deeply coloured by one’s view of life — its meaning and purpose. I have real doubts that AI and robotics will do all the harm expected of them. In fact, like Andrés Oppenheimer, I sense really useful and interesting work stemming from the as-yet-untapped power of the technology: optimizing urban design via crowdsourced insights; breaking long-undeciphered lost languages; modelling if not emulating the coarser unconscious computation we humans do every second, for stroke patients and others who might truly benefit from machine learning in their day-to-day lives.
But there’s something else: the spirit of human inventiveness and improvisation and of emotional, moving moments born of the unique connections we make with one another. Each of us owes the universe a death, yet much has been speculated — to choose but one of dozens of speculations — about AI-driven genomics and the prospect of engineering superlongevity. We also owe ourselves and those we love a life. And that’s where, I believe, the debate around AI and robotics will ultimately go, if it hasn’t already. What, as Schrödinger asked in 1944, setting the table for the double-helix saga scant years later, is life?
For my part, I guess I’ve taken a page out of my dad’s playbook. I’ve developed a media technology that’s won interest from a highly reputable AI firm. The conversations I’ve had with its chief scientist have been remarkable, in the sense that they haven’t been about modal logic or neural networks or the post-space-age jargon that makes the news about AI. Quite the contrary: we’ve been talking about mindfulness and originality, poetry and dance, and the composer’s role in making all those things in which the human voice and experience and ability to connect are finally and inextricably bound — the musical, the visual, and the dramatic.
At the end of one recent conversation, my chief scientist friend observed, “You know, what people don’t understand is that AI never fires anybody. Robots don’t take your job. It’s your manager who fires you. It’s the CEO who approves an AI that eats jobs. All AI does is expedite the expeditable. Human damage and market self-interest? Totally different story — but that’s where the action really is.”
A moment passed, and he continued, “Whatever AI can do, and whatever you can’t do, will boil down to what and why and how we choose to digitally articulate the care and the intimacy of the relationships we already have. It’s a choice.”
For my money, my colleague named the threat implicit in both The Robots Are Coming! and Inhuman Power. AI’s gravest problems aren’t technical or ethical; they’re metaphysical. As the cartoon character Pogo said decades ago, we have met the enemy — invented it, actually, if the enemy AI be — and he is us.