Lawgivers of the Mind

The moral coding of artificial intelligence

Brendan Howley

Cognitive Code: Post-anthropocentric Intelligence and the Infrastructural Brain

Johannes Bruder

McGill-Queen’s University Press

216 pages, hardcover, softcover, and ebook

Morality by Design: Technology’s Challenge to Human Values

Wade Rowland

Intellect

120 pages, softcover

In the entranceway to our house, my significant other, a long-time wearer of the smart hat, keeps her hat stand. It’s a mock Victorian bust: on the surface of the ceramic head lies a phrenology map, created by one L. N. Fowler, Ludgate Circus, London, and “entered at Stationers Hall” sometime around 1850. Natives of upstate New York, Lorenzo Niles Fowler and his brother, Orson, were an astonishing pair. Theirs was an international industry of touring, lecturing, and reading, all the while churning out vast numbers of journals, periodicals, and pamphlets and operating a phrenology museum.

L. N. Fowler & Co. mapped out what are now generally accepted as completely spurious competences for various volumes of the human brain. The “perfecting group,” for example, occupies a quadrant roughly corresponding to the right temple. I’ve no idea what the perfecting group represents, but that’s a serious chunk of cranial real estate. The “literary faculties” live just above the right eye socket. I imagine these sensibilities lurking beneath the echoing sinuses, awaiting discovery, perhaps even an agent in LA.

The brothers made real hay: They used phrenological mesmerism to prep patients for surgery. VIPs flocked to have their heads read, including Mark Twain, Clara Barton, and Walt Whitman, who counted phrenologists among “the lawgivers of poets.” The Fowlers barely kept up with the demand for their ministrations. Lorenzo seems to have died of a stroke in 1896, perhaps due to sheer overwork in keeping the phrenological bubble aloft.

A century and change later, the “construction [that] underlies the structure of every perfect poem” (Whitman again) is no longer phrenology. Many want us to believe it’s now artificial intelligence. With Cognitive Code, the perceptive and droll Johannes Bruder, who’s clearly been around the scientific block more than once, has written a minor masterpiece, as neat an anatomy of the state of play of the “science” of AI as one could want. I’ll make no bones about it: this Swiss researcher has lifted the veil and brought a sociologist’s skeptical eye to a marketplace of ideas that, if you ask around, is about as oversold as any real estate or phrenological bubble has ever been.

Consider the equation y = β₀ + β₁x, which DataRobot, a highly successful AI company out of Boston, explains on its blog:

Ordinary Least Squares is the simplest and most common estimator in which the two βs are chosen to minimize the square of the distance between the predicted values and the actual values. Even though this model is quite rigid and often does not reflect the true relationship, this still remains a popular approach for several reasons. For one, it is computationally cheap to calculate the coefficients. It is also easier to interpret than more sophisticated models, and in situations where the goal is understanding a simple model in detail, rather than estimating the response well, they can provide insight into what the model captures.

Roughly translated, DataRobot is saying, “Well, between us and a computable reality is this one-size-fits-all mechanism that doesn’t quite work, so we apply fudge factors (coefficients) to tighten the bolts, computationally speaking. Why? Because that’s cheaper than, say, asking a subject matter expert — those guys cost real money. And then we’ll call this Rube Goldberg contraption a model, just in case we need the wiggle room. (Which we will.)”
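For the curious, here is what that mechanism boils down to in practice: a minimal ordinary-least-squares sketch in Python with NumPy. The toy data points are invented for illustration (they are not DataRobot's); the solver picks the two βs that minimize the squared distance between predicted and actual values.

```python
import numpy as np

# Toy observations, invented for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Design matrix for y = beta_0 + beta_1 * x: a column of ones
# (the intercept) alongside the raw inputs.
X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: choose the betas that minimize the
# sum of squared residuals between predicted and actual values.
(beta_0, beta_1), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"y = {beta_0:.2f} + {beta_1:.2f} * x")
```

Run it and the coefficients, the "fudge factors," fall out in microseconds. That computational cheapness is precisely what DataRobot is selling.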

What I’m getting at is this: science is messy, perhaps especially the science of the brain. The whole point of the scientific method is to fail, fail again, and then fail some more, until something like a testable theory emerges (emphasis on “emerges”). Why? Because any truly useful scientific conclusion is but an interruption on a path to a greater understanding — a stepping stone. The core assumption of most AI as practised in 2020 is that there’s something to be engineered that mimics the human substrate. Whitman’s lawgivers, indeed.

What AI people want more than anything, as Bruder emphasizes, is the proverbial turnkey solution to a big problem: the problem of consciousness. There’s money in that one. And this, as Bruder’s dispassionate dispatch from the front lines of AI research makes lethally clear, is where things get sticky.

It turns out that many of the “advances” that AI researchers made when this stuff really took off (far earlier than you’d probably think, in the late 1970s) were predicated on brain scans, which were themselves statistically generated images. The MRI machines that made them were so primitive that the resolution of those images (never mind the monitors on which the initiates interpreted them) was appalling.

I know this from personal experience. In 1981, at the dawn of a new era, my dad was thought to have a brain tumour behind his right mastoid, discovered by one of the first MRI machines in Manhattan, at the IBM Major Medical Plan’s expense. He was rightly terrified of the pending neurosurgery, which more than likely would have killed him, because he had record-setting hypertension (240/180 at one point) and (as a later clinician would say) a circulatory system in a state of massive disrepair. I flew to New York to make farewells. It was that dire, my siblings and I thought.

Somehow, I smelled a rat in Manhattan, because the MRI technicians themselves told me (I wangled my way into the lab where they worked) that a meningioma (benign) tumour has different surface characteristics than a glioma (malign and highly lethal) one. But which did my dad have? When I asked, the techies basically shrugged and didn’t even look up.

The tell was that those monitors — state of the art in 1981 — brought to mind Pac‑Man graphics: you could drive a truck between the pixels, so coarse was the resolution. As I was growing increasingly skeptical of my dad’s scan, I bumped into his cardiologist at the elevator bank. I asked the avuncular doctor if anyone could authoritatively vet the radiological images on those high-tech screens. Direct quote: “Jury’s out on that one — if he were my dad, I’d get him the hell out of here.” I had my dad on the Amtrak back to Poughkeepsie that afternoon. He lived another eighteen years with not a glimmer of a symptom except an outsize thirst for lager.

Here’s the real scandal: those smudgy images were and still are used to make the case for serious investment in deconstructing brain functions in the name of neuroscience. Bear in mind these scans aren’t photographs like you’d get with an X‑ray; they’re shadow images, generated by equations. They’re mathematical constructs.
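To give a sense of what "generated by equations" means in practice, here is a deliberately toy sketch in Python with NumPy, using synthetic data rather than real scanner output. A scanner records something akin to the image's spatial-frequency content (so-called k-space); the picture a radiologist sees is computed from those measurements, classically by an inverse Fourier transform.

```python
import numpy as np

# Pretend "anatomy": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# What the scanner measures is (roughly) the image's
# spatial-frequency content ("k-space"), not a picture.
k_space = np.fft.fft2(image)

# The image on the monitor is a mathematical reconstruction:
# the inverse Fourier transform of those measurements.
reconstruction = np.abs(np.fft.ifft2(k_space))

print(np.allclose(reconstruction, image))  # True: the "photo" is computed
```

Coarsen or corrupt those measurements and the reconstruction degrades accordingly, which is roughly what those early monitors were putting on screen.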

And guess what? Researchers near and far quickly grasped that even if there wasn’t a one-to-one relationship between what’s notionally the firing of synapses captured by an image scan and an underlying function, you could raise a pant-load of cash to investigate things further. Bruder makes this point through a series of cautionary tales (which are at once troubling and Marx Brothers hilarious) with specialists in freshly minted disciplines riding waves of nuclear magnetic resonance hype.

Nuclear magnetic resonance isn’t the Fowler Brothers reborn, you say. But it is. It’s phrenology with topspin, derived from atomic resonances, all very defensible, until it isn’t. This is not to say — nor does Bruder suggest — that there isn’t serious and valuable neuroscience under way with AI research. But Bruder’s tempered sense of things is refreshingly clear: to name but one consideration, he warns that algorithm design bias is one red flag against putting psychological and political modelling in the hands of programmers and engineers.

There’s another, even more intellectually corrosive problem: deploying AI without having a clear idea of what the hell a machine-learning interpretation layer really means with respect to existing psychological norms. To put it bluntly, do we want the people who gave us Uber creating an “infrastructural brain” in the cloud that determines what is and what isn’t — say — mental illness?

It gets bigger still.

The Internet of Things is what you get from a reductionist, mechanistic system on steroids, permeating human life with computational infrastructures and a whack of sensors sensing all around us and generating feedback control loops. In many cases, it does indeed create human value through sheer speed — largely leisure, for those who can afford it. This is the vengeful return of the Jetsons-era “labour-saving kitchen,” which we know actually creates more work for someone.

For those who can’t enjoy the free time afforded by sensor-driven convenience, there are AI-modelled time-and-motion performance “benchmarks” to be met, for $16 an hour: the online fulfillment centre as sweatshop. The AI giveth; the AI taketh away.

Bruder makes a point in Cognitive Code that is as direct as it is fluently expressed: The human brain was once, in the Aristotelian sense, a stand-alone universe of astonishing computational elegance. It is now rapidly becoming a node on a network of ever-increasing computational power and reach — Arthur Koestler’s “ghost in the machine” for the digital age.

The upshot of all this is one mixed bag indeed. There’s the instructive case of the Boeing 737 Max, a nightmare where cost-benefit analyses met the very limits of computational modelling and design. Human beings died by the hundreds. Contrariwise, AI and deep learning can and do accelerate, through the cloud, cross-correlations of human behaviours that are highly valuable, and reliably so. But Bruder warns there must be a limit, both ethical and moral, to human advances won at the expense of human qualities: compassion, empathy, the ability to laugh and love and live with unresolved contradiction and still find meaning. That limit, in his view, lies where human psychology lives and breathes, as mysterious and singular as ever — where the imagination flickers and ignites and the new is made. “For if psychic life is to be colonized by the rhythms, waves, and patterns of machines,” he writes, “we should make sure that it is, in this very process, infinitely queered and diversified.”

It’s a powerful cri de coeur. Bruder warns us about the limits of design in the hands of technology — or, rather, technologists — especially when such design can scale in unforeseen ways, virus-like. And he’s right.

Wade Rowland’s Morality by Design is also a cri de coeur, a kind of twenty-first-century Ten Commandments. Human morality ought to inform technological design, Rowland contends, so that it has known and knowable limits. There should be instinctual oversight of digital endeavours — by virtue of virtue itself. The book is beautifully written, with precious few platitudes. And Rowland, a communications scholar from York University, has put his finger on the type of response that almost always surfaces when technological achievement outpaces our sense of how best to apply new techniques and methodologies. Think of recombinant DNA technologies and the genomic revolution: because we could, we did things (and still do) that, whether by design or not, delimited a new sense of what it means to be human. We’re still working out the consequences of the Crick-Watson-Franklin-Wilkins discovery of DNA structure a lifetime ago. We’re simply incapable of digesting such advances overnight. They require massive adaptive resources and time — gobs of it.

Right now in Silicon Valley, managers of the big tech companies are striking committees to address, in part, what Rowland is rightly demanding: an ethical framework for engineering design, as if human beings truly mattered. Rowland might well approve of these attempts at self-reflection in the digital heartland: he cogently argues that we all need to arrive at ethical decisions out of our own right reason — our innate moral infrastructure, to borrow from Bruder. His approach is eerily close to the thinking of the Yale University atheist and ethicist Martin Hägglund, who also reasons that a moral approach to designing a life can be derived from first principles. Hägglund’s central notion is that if there’s no afterlife, we’re actually in a better position — under a greater imperative — to treat one another as we ourselves would like to be treated. The approaches dovetail; Rowland’s chapter on the “alchemy of capitalism,” in particular, is a stellar exegesis on why human failings need not lead — linearly, if at all — to human failure in how we collectively create value.

We can reason our way to a better solution, less noxious, more humane. The question, for Rowland, is whether we have the effort and discipline required: an amalgam of reason and passion, of political acumen and empathy.

In radically different fashions, Bruder and Rowland make a remarkably similar point: there’s always a choice in how we make and remake ourselves, not least because life itself is even messier than the science we try to apply to our realities. They both call for a moral philosophy of technology; they both suspect that unconsidered progress yields fertile ground for black swans and monsters of our own creation, simply because we lack the thoroughness of insight — the self-reflection — to understand the why behind a new gadget or technique.

These two books are far from abstract incitements to a moral philosophy of technology. They’re far more practical than any phrenological construction. We’re living through a case study in the limits of science and technology, of engineered life itself, with the salient lesson being that political expediency tops human benefit, time and time again. “Have you learn’d the physiology, phrenology, politics, geography, pride, freedom, friendship of the land?” Whitman asked. “Its substratums and objects?”

What we’ve learned is this: It ain’t the tech. It’s the politics of the tech we ought to watch.

Brendan Howley spent a decade covering covert operations and white-collar crime for The Fifth Estate. He co-invented HUME, a context software engine.
